Epigraphy (from Ancient Greek ἐπιγραφή (epigraphḗ) 'inscription') is the study of inscriptions, or epigraphs, as writing; it is the science of identifying graphemes, clarifying their meanings, classifying their uses according to dates and cultural contexts, and drawing conclusions about the writing and the writers. Specifically excluded from epigraphy are the historical significance of an epigraph as a document and the artistic value of a literary composition. A person using the methods of epigraphy is called an epigrapher or epigraphist. For example, the Behistun inscription is an official document of the Achaemenid Empire engraved on native rock at a location in Iran. Epigraphists are responsible for reconstructing, translating, and dating the trilingual inscription and finding any relevant circumstances. It is the work of historians, however, to determine and interpret the events recorded by the inscription as document. Often, epigraphy and history are competences practised by the same person. Epigraphy is a primary tool of archaeology when dealing with literate cultures. The US Library of Congress classifies epigraphy as one of the auxiliary sciences of history. Epigraphy also helps identify a forgery: epigraphic evidence formed part of the discussion concerning the James Ossuary.
An epigraph (not to be confused with epigram) is any sort of text, from a single grapheme (such as marks on a pot that abbreviate the name of the merchant who shipped commodities in the pot) to a lengthy document (such as a treatise, a work of literature, or a hagiographic inscription). Epigraphy overlaps other competences such as numismatics or palaeography. When compared to books, most inscriptions are short. The media and the forms of the graphemes are diverse: engravings in stone or metal, scratches on rock, impressions in wax, embossing on cast metal, cameo or intaglio on precious stones, painting on ceramic or in fresco. Typically the material is durable, but the durability might be an accident of circumstance, such as the baking of a clay tablet in a conflagration.
The character of the writing, the subject of epigraphy, is a matter quite separate from the nature of the text, which is studied in itself. Texts inscribed in stone are usually for public view and so they are essentially different from the written texts of each culture. Not all inscribed texts are public, however: in Mycenaean Greece the deciphered texts of "Linear B" were revealed to be largely used for economic and administrative record keeping. Informal inscribed texts are "graffiti" in the original sense of the word.
The study of ideographic inscriptions, that is inscriptions representing an idea or concept, may also be called ideography. The German equivalent Sinnbildforschung was a scientific discipline in the Third Reich, but was later dismissed as being highly ideological. Epigraphic research overlaps with the study of petroglyphs, which deals with specimens of pictographic, ideographic and logographic writing. The study of ancient handwriting, usually in ink, is a separate field, palaeography. Epigraphy also differs from iconography, as it confines itself to meaningful symbols containing messages, rather than dealing with images.
== History ==
The science of epigraphy has been developing steadily since the 16th century. Principles of epigraphy vary culture by culture, and the infant science in Europe initially concentrated on Latin inscriptions. Individual contributions have been made by epigraphers such as Georg Fabricius (1516–1571); Stefano Antonio Morcelli (1737–1822); Luigi Gaetano Marini (1742–1815); August Wilhelm Zumpt (1815–1877); Theodor Mommsen (1817–1903); Emil Hübner (1834–1901); Franz Cumont (1868–1947); Louis Robert (1904–1985).
The Corpus Inscriptionum Latinarum, begun by Mommsen and other scholars, has been published in Berlin since 1863, with wartime interruptions. It is the largest and most extensive collection of Latin inscriptions. New fascicles are still produced as the recovery of inscriptions continues. The Corpus is arranged geographically: all inscriptions from Rome are contained in volume 6. This volume has the greatest number of inscriptions; volume 6, part 8, fascicle 3 was published in 2000. Specialists depend on such ongoing series of volumes in which newly discovered inscriptions are published, often in Latin, not unlike the biologists' Zoological Record – the raw material of history.
Greek epigraphy has unfolded in the hands of a different team, with different corpora, of which there are two. The first is the Corpus Inscriptionum Graecarum, of which four volumes came out, again at Berlin, between 1825 and 1877. This marked a first attempt at a comprehensive publication of Greek inscriptions copied from all over the Greek-speaking world. Only advanced students still consult it, for better editions of the texts have superseded it. The second, modern corpus is Inscriptiones Graecae, arranged geographically under categories: decrees, catalogues, honorary titles, funeral inscriptions, and various others, all presented in Latin to preserve the international neutrality of the field of classics.
Other such series include the Corpus Inscriptionum Etruscarum (Etruscan inscriptions), Corpus Inscriptionum Crucesignatorum Terrae Sanctae (Crusaders' inscriptions), Corpus Inscriptionum Insularum Celticarum (Celtic inscriptions), Corpus Inscriptionum Iranicarum (Iranian inscriptions), "Royal Inscriptions of Mesopotamia" and "Royal Inscriptions of the Neo-Assyrian Period" (Sumerian and Akkadian inscriptions) and so forth.
Egyptian hieroglyphs were deciphered using the Rosetta Stone, a multilingual stele inscribed in Classical Greek, Demotic Egyptian and Classical Egyptian hieroglyphs. The work was done by the French scholar Jean-François Champollion and the British scientist Thomas Young.
The interpretation of Maya hieroglyphs was lost as a result of the Spanish Conquest of Central America. However, recent work by Maya epigraphers and linguists has yielded a considerable amount of information on this complex writing system.
== Form ==
=== Materials and technique ===
== Content ==
== Greek inscriptions ==
=== Political and social ===
==== Codes of law and regulations ====
Ancient writers state that the earliest laws of Athens were inscribed upon tablets of wood, put together in a pyramidal shape. These, owing to their material, have perished; but we have some very early codes of law preserved on stone, notably at Gortyna in Crete. Here an inscription of great length is incised on the slabs of a theatre-shaped structure in 12 columns of 50 lines each; it is mainly concerned with the law of inheritance, adoption, etc. Doubtless similar inscriptions were set up in many places in Greece. An interesting series of inscriptions deals with the conditions under which colonists were sent out from various cities, and the measures that were taken to secure their rights as citizens. A bronze tablet records in some detail the arrangements of this sort made when Locrians established a colony in Naupactus; another inscription relates to the Athenian colonisation of Salamis, in the 6th century BC.
==== Decrees of people and rulers, later of kings and emperors ====
A very large number of inscriptions are in the form of decrees of various cities and peoples, even when their subject matter suggests that they should be classified under other headings. Almost all legislative and many administrative measures take this form; often a decree prescribes how and where the inscription should be set up. The formulae and preambles of such decrees vary considerably from place to place, and from period to period. Those of Athens are by far the most exactly known, owing to the immense number that have been discovered; and they are so strictly stereotyped that they can be classified with the precision of algebraic formulae, and often dated to within a few years by this test alone. Very full lists for this purpose have been drawn up by the epigraphist Wilhelm Larfeld in his work on the subject. It is usual to record the year (by the name of the eponymous archon), the day of the month and of the prytany (or presiding commission according to tribes), various secretaries, the presiding officials and the proposer of the decree. It is also stated whether the resolution is passed by the senate (Boule) or the assembly of the people (Ecclesia), or both. The circumstances or the reason of the resolution are then given, and finally the decision itself. Some other cities followed Athens in the form of their decrees, with such local variations as were required; others were more independent in their development, and different magistracies or forms of government had various results. In the Hellenistic Age, and later, the forms of independent government were, to a great extent, kept up, though little real power remained with the people. On the other hand, it is common to find letters from kings, and later from Roman emperors, inscribed and set up in public places.
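Because these prescripts are so stereotyped, their recurring elements can be treated as fields of a fixed record. The following sketch is a minimal illustration rather than any standard epigraphic tool: it models the preamble elements listed above as a Python data class, with modern field names and hypothetical example values.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: the fields are the stereotyped preamble elements
# listed above (archon year, day of month and prytany, secretary, presiding
# official, proposer, and the enacting body or bodies).
@dataclass
class DecreePreamble:
    archon: str                     # eponymous archon, giving the year
    day_of_month: Optional[int]     # day of the civil month
    day_of_prytany: Optional[int]   # day of the presiding commission's term
    secretary: Optional[str]        # secretary named in the prescript
    presiding_official: Optional[str]
    proposer: Optional[str]
    passed_by_boule: bool           # passed by the senate (Boule)
    passed_by_ecclesia: bool        # passed by the assembly (Ecclesia)

# Because the formulae are so rigid, even a fragmentary prescript can often
# be dated: a surviving archon name fixes the year, and the presence or
# wording of particular fields narrows the period.
example = DecreePreamble(
    archon="Euthydemos",            # hypothetical name, for illustration
    day_of_month=11,
    day_of_prytany=32,
    secretary=None,                 # lost on the stone
    presiding_official=None,
    proposer="Kallias",             # hypothetical
    passed_by_boule=True,
    passed_by_ecclesia=True,
)
```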
==== Public accounts, treasure lists, building inscriptions ====
It was customary to inscribe on stone all records of the receipt, custody and expenditure of public money or treasure, so that citizens could verify for themselves the safety and due control of the State in all financial matters. As in the case of temple accounts, it was usual for each temporary board of officials to render to their successors an account of their stewardship, and of the resources and treasures which they handed over. In all cases of public works, the expenditure was ordered by the State, and detailed reports were drawn up and inscribed on stone at intervals while the work was being carried out. In many cases there is a detailed specification of building work which makes it possible to recover not only all the technical details and processes employed, but also the whole plan and structure of a building. A notable instance is the arsenal of Philon at the Peiraeus, which has been completely reconstructed on paper by architects from the building specification. In the case of the Erechtheum, we have not only a detailed report on the unfinished state of the building in 409 BC, but also accounts of the expenditure and payments to the workmen employed in finishing it. Similar accounts have been preserved of the building of the Parthenon, spread over 15 years; in the case of both the Parthenon and the Erechtheum, there are included the payments made to those who made the sculptures.
Naval and military expenditure is also fully accounted for; among other information there are records of the galley-slips at the different harbours of the Piraeus, and of the ships of the Athenian navy, with their names and condition. In short, there is no department of state economy and financial administration that is not abundantly illustrated by the record of inscriptions. A set of records of high historical value are the "tribute lists", recording the quota paid to Athens by her subject allies during the 5th century BC. These throw much light on her relations with them at various periods (cf. Delian League).
==== Ephebic inscriptions ====
An institution as to which our knowledge is mainly derived from inscriptions is the ephebic system at Athens. There are not only records of lists of ephebi and of their guardians and instructors, but also decrees in honour of their services, especially in taking their due part in religious and other ceremonies, and resolutions of the ephebi themselves in honour of their officials. It is possible to trace in the inscriptions, which range over several centuries, how what was originally a system of physical and military training for Athenian youths from the ages of 18 to 20, with outpost and police duties, was gradually transformed. In later times there were added to the instructors in military exercises others who gave lectures on what we should now call arts and science subjects; so that in Hellenistic and Roman times, when youths from all parts of the civilised world flocked to Athens as an intellectual centre, the ephebic system became a kind of cosmopolitan university.
==== Treaties and political and commercial agreements; arbitration, etc. ====
In addition to inscriptions which are concerned with the internal affairs of various cities, there are many others recording treaties or other agreements of an international character between various cities and states. These were incised on bronze or stone, and set up in places of public resort in the cities concerned, or in common religious centres such as Olympia and Delphi. The simplest form of treaty is merely an alliance for a certain term of years, usually with some penalty for any breach of the conditions. Often an oath was prescribed, to be taken by representatives on each side; it was also not unusual to appeal to the god in whose temple the treaty was exhibited. In other cases a list of gods by whom the two parties must swear is prescribed. Commercial clauses were sometimes added to treaties of alliance, and commercial treaties are also found, agreeing as to the export and import of merchandise and other things. In later days, especially in the time of the Hellenistic kings, treaties tend to become more complicated and detailed in their provisions.
Another series of records of great historical interest is concerned with arbitration between various states on various questions, mainly concerned with frontiers. In cases of dispute it was not uncommon for the two disputants to appoint a third party as arbitrator. Sometimes this third party was another State, sometimes a specified number of individuals. Thus, in a frontier dispute between Corinth and Epidaurus, 151 citizens of Megara were appointed by name to arbitrate, and when the decision was disputed, 31 from among them revised and confirmed it. In all such cases it was the custom for a full record to be preserved on stone and set up in the places concerned. In this case the initiative in referring the matter to arbitration came from the Achaean League.
==== Proxenia decrees ====
A very large class of inscriptions deals with the institution of proxenia. According to this a citizen of any State might be appointed proxenos of another State; his duties would then be to offer help and hospitality to any citizen of that other State who might be visiting his city, and to assist him in any dispute or in securing his legal rights. The office has been compared to the modern appointment of consuls, with the essential difference that the proxenos is always a citizen of the state in which he resides, not of that whose citizens and interests he assists. The decrees upon this matter frequently record the appointment of a proxenos, and the conferring on him of certain benefits and privileges in return for his services; they also contain resolutions of thanks from the city served by the proxenos, and record honours consequently conferred upon him.
==== Honours and privileges given to individuals ====
This class of inscription is in form not unlike the last, except that the honours recorded are given for all sorts of services, private and public, to the State and to individuals. A frequent addition is an invitation to dine in the Prytaneum at Athens. Some are inscribed on the bases of statues set up to the recipient. In early times these inscriptions are usually brief and simple. The bust of Pericles on the Acropolis bore nothing but the names of Pericles himself and of the sculptor Kresilas. Later it became usual to give, in some detail, the reasons for the honours awarded; and in Hellenistic and Roman times these became more and more fulsome in laudatory detail.
==== Signatures of artists ====
These inscriptions are of special interest as throwing much light upon the history of art. The artist's name was usually, especially in earlier times, carved upon the base of the pedestal of a statue, and consequently was easily separated from it if the statue was carried off or destroyed. A case where both statue and pedestal are preserved is offered by the Victory, signed on its pedestal by Paeonius at Olympia. Occasionally, and more frequently in later times, the artist's signature was carved upon some portion of the statue itself. But in later copies of well-known works, it has to be considered whether the name is that of the original artist or of the copyist who reproduced his work (see, for example, the statue of Hercules/Heracles below).
A special class of artists' signatures is offered by the names signed by Attic and other vase painters upon their vases. These have been made the basis of a minute historical and stylistic study of the work of these painters, and unsigned vases also have been grouped with the signed ones, so as to make an exact and detailed record of this branch of Greek artistic production.
==== Historical records ====
The great majority of these fall into one of the classes already referred to. But there are some instances in which an inscription is set up merely as a record. For instance, a victor in athletic or other contests may set up a list of his victories. The most famous historical record is the autobiographical account of the deeds and administration of Augustus, which was reproduced and set up in many places; it is generally known as the Monumentum Ancyranum, because the most complete copy of it was found at Ancyra. The Marmor Parium at Oxford, found in Paros, is a chronological record of Greek history, probably made for educational purposes, and valuable as giving the traditional dates of events from the earliest time down.
==== Tombs and epitaphs ====
This is by far the most numerous class of inscriptions, both Greek and Latin. In early times there is often no record beyond the name of the deceased; at Athens it is often accompanied by the name of his father and his deme. Sometimes a word or two of conventional praise is added, such as "a good and wise man". Occasionally the circumstances of death are alluded to, especially if it took place in battle or at sea. Such epitaphs were frequently in metrical form, usually either hexameters or elegiacs. Many of them have been collected, and they form an interesting addition to the Greek anthology. In later times it becomes usual to give more elaborate praise of the deceased; but this is hardly ever so detailed and fulsome as on more modern tombstones. The age and other facts about the deceased are occasionally given, but not nearly so often as on Latin tombstones, which offer valuable statistical information in this respect.
== Latin inscriptions ==
Latin inscriptions may be classified on much the same lines as Greek; but certain broad distinctions may be drawn at the outset. They are generally more standardised as to form and as to content, not only in Rome and Italy, but also throughout the provinces of the Roman Empire. One of the chief difficulties in deciphering Latin inscriptions lies in the very extensive use of initials and abbreviations. These are of great number and variety, and while some of them can be easily interpreted as belonging to well-known formulae, others offer considerable difficulty, especially to the inexperienced student. Often the same initial may have many different meanings according to the context. Some common formulae such as V.S.L.M. (votum solvit libens merito), or H.M.H.N.S. (hoc monumentum heredem non sequetur) offer little difficulty, but there are many which are not so obvious and leave room for conjecture. Often the only way to determine the meaning is to search through a list of initials, such as those given by modern Latin epigraphists, until a formula is found which fits the context.
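As a rough illustration of the look-up procedure just described (not any standard epigraphic tool), the sketch below gathers only the formulae quoted in this article into a Python dictionary; real lists of initials run to thousands of entries, and the same initials can demand different expansions in different contexts.

```python
# Minimal sketch of the "search a list of initials" procedure described
# above. Only expansions quoted elsewhere in this article are included.
FORMULAE = {
    "V.S.L.M.": "votum solvit libens merito",
    "H.M.H.N.S.": "hoc monumentum heredem non sequetur",
    "D.D.": "dedit, donavit",
    "L.M.": "libens merito",
    "V.S.": "votum solvit",
    "D.M.S.": "Dis Manibus sacrum",
    "H.I.": "hic iacet",
    "V.S.F.": "vivus sibi fecit",
    "S.P.": "sua pecunia",
}

def expand(token: str) -> str:
    """Return the conventional expansion of an abbreviation, or the
    token unchanged if it is not in the list."""
    return FORMULAE.get(token.upper(), token)

# A real decipherment must test each candidate expansion against the rest
# of the inscription, since the same initials may fit several formulae.
print(expand("V.S.L.M."))  # -> votum solvit libens merito
```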
Most of what has been said about Greek inscriptions applies to Roman also. The commonest materials in this case also are stone, marble and bronze; but a more extensive use is made of stamped bricks and tiles, which are often of historical value as identifying and dating a building or other construction. The same applies to leaden water pipes which frequently bear dates and names of officials. Terracotta lamps also frequently have their makers' names and other information stamped upon them. Arms, and especially shields, sometimes bear the name and corps of their owners. Leaden discs were also used to serve the same purpose as modern identification discs. Inscriptions are also found on sling bullets – Roman as well as Greek; there are also numerous classes of tesserae or tickets of admission to theatres or other shows.
As regards the contents of inscriptions, there must evidently be a considerable difference between records of a number of independent city states and an empire including almost all the civilised world; but municipalities maintained much of their independent traditions in Roman times, and consequently their inscriptions often follow the old formulas.
The classification of Roman inscriptions may, therefore, follow the same lines as the Greek, except that certain categories are absent, and that some others, not found in Greek, are of considerable importance.
=== Religious ===
==== Dedications and foundations of temples, etc. ====
These are very numerous; and the custom of placing the name of the dedicator in a conspicuous place on the building was prevalent, especially in the case of dedications by emperors or officials, or by public bodies. Restoration or repair was often recorded in the same manner. In the case of small objects the dedication is usually simple in form; it usually contains the name of the god or other recipient and of the donor, and a common formula is D.D. (dedit, donavit) often with additions such as L.M. (libens merito). Such dedications are often the result of a vow, and V.S. (votum solvit) is therefore often added. Bequests made under the wills of rich citizens are frequently recorded by inscriptions; these might either be for religious or for social purposes.
==== Priests and officials ====
A priesthood was frequently a political office and consequently is mentioned along with political honours in the list of a man's distinctions. The priesthoods that a man had held are usually mentioned first in inscriptions before his civil offices and distinctions. Religious offices, as well as civil, were restricted to certain classes, the highest to those of senatorial rank, the next to those of equestrian status; many minor offices, both in Rome and in the provinces, are enumerated in their due order.
==== Regulations as to religion and cult ====
Among the most interesting of these is the ancient song and accompanying dance performed by the priests known as the Arval Brothers. This is, however, not in the form of a ritual prescription, but a detailed record of the due performance of the rite. An important class of documents is the series of calendars that have been found in Rome and in the various Italian towns. These give notice of religious festivals and anniversaries, and also of the days available for various purposes.
==== Colleges ====
The various colleges for religious purposes were very numerous. Many of them, both in Rome and Italy, and in provincial municipalities, were of the nature of priesthoods. Some were regarded as offices of high distinction and were open only to men of senatorial rank; among these were the Augurs, the Fetiales, the Salii; also the Sodales Divorum Augustorum in imperial times. The records of these colleges sometimes give no information beyond the names of members, but these are often of considerable interest. Haruspices and Luperci were of equestrian rank.
=== Political and social ===
==== Codes of law and regulations ====
Our information as to these is not mainly drawn from inscriptions and, therefore, they need not here be considered. On the other hand, the word lex (law) is usually applied to all decrees of the senate or other bodies, whether of legislative or of administrative character. It is, therefore, best to consider them all together under the heading of public decrees.
==== Laws and plebiscites, senatus consulta, decrees of magistrates or later of emperors ====
A certain number of these dating from republican times are of considerable interest. One of the earliest relates to the prohibition of bacchanalian orgies in Italy; it takes the form of a message from the magistrates, stating the authority on which they acted. Laws all follow a fixed formula, according to the body which has passed them. First there is a statement that the legislative body was consulted by the appropriate magistrate in due form; then follows the text of the law; and finally the sanction, the statement that the law was passed. In decrees of the senate the formula differs somewhat. They begin with a preamble giving the names of the consulting magistrates, the place and conditions of the meeting; then comes the subject submitted for decision, ending with the formula QDERFP (quid de ea re fieri placeret); then comes the decision of the senate, opening with DERIC (de ea re ita censuerunt). C. is added at the end, to indicate that the decree was passed. In imperial times, the emperor sometimes addressed a speech to the senate, advising them to pass certain resolutions, or else, especially in later times, gave orders or instructions directly, either on his own initiative or in response to questions or references. The number and variety of such orders is such that no classification of them can be given here. One of the most famous is the edict of Diocletian, fixing the prices of all commodities. Copies of this in Greek as well as in Latin have been found in various parts of the Roman Empire.
==== Records of buildings, etc. ====
A very large number of inscriptions record the construction or repair of public buildings by private individuals, by magistrates, Roman or provincial, and by emperors. In addition to the dedication of temples, we find inscriptions recording the construction of aqueducts, roads (especially on milestones), baths, basilicas, porticos and many other works of public utility. In inscriptions of the early period often nothing is given but the name of the person who built or restored the edifice and a statement that he had done so. But later it was usual to give more detail as to the motive of the building: the name of the emperor or a magistrate giving the date, the authority for the building and the names and distinctions of the builders; then follows a description of the building, the source of the expenditure (e.g., S.P., sua pecunia) and finally the appropriate verb for the work done, whether building, restoring, enlarging or otherwise improving. Other details are sometimes added, such as the name of the man under whose direction the work was done.
==== Military documents ====
These vary greatly in content, and are among the most important documents concerning the administration of the Roman Empire. "They are numerous and of all sorts – tombstones of every degree, lists of soldiers' burial clubs, certificates of discharge from service, schedules of time-expired men, dedications of altars, records of building or of engineering works accomplished. The facts directly commemorated are rarely important." But when the information from hundreds of such inscriptions is collected together, "you can trace the whole policy of the Imperial Government in the matter of recruiting, to what extent and till what date legionaries were raised in Italy; what contingents for various branches of the service were drawn from the provinces, and which provinces provided most; how far provincials garrisoned their own countries, and which of them, like the British recruits, were sent as a measure of precaution to serve elsewhere; or, finally, at what epoch the empire grew weak enough to require the enlistment of barbarians from beyond its frontiers."
==== Treaties and agreements ====
There were many treaties between Rome and other states in republican times; but we do not, as a rule, owe our knowledge of these to inscriptions, which are very rare in this earlier period. In imperial times, to which most Latin inscriptions belong, international relations were subject to the universal domination of Rome, and consequently the documents relating to them are framed with reference to the central authority, often taking the form of orders from the emperor.
==== Proxeny ====
This custom belonged to Greece. What most nearly corresponded to it in Roman times was the adoption of some distinguished Roman as its patron, by a city or state. The relation was then recorded, usually on a bronze tablet placed in some conspicuous position in the town concerned. The patron probably also kept a copy in his house, or had a portable tablet which would ensure his recognition and reception.
==== Honorary ====
Honorary inscriptions are extremely common in all parts of the Roman world. Sometimes they are placed on the bases of statues, sometimes in documents set up to record some particular benefaction or the construction of some public work. The offices held by the person commemorated, and the distinctions conferred upon him are enumerated in a regularly established order (cursus honorum), either beginning with the lower and proceeding step by step to the higher, or in reverse order with the highest first. Religious and priestly offices are usually mentioned before civil and political ones. These might be exercised either in Rome itself, or in the various municipalities of the empire. There was also a distinction drawn between offices that might be held only by persons of senatorial rank, those that were assigned to persons of equestrian rank, and those of a less distinguished kind. It follows that when only a portion of an inscription has been found, it is often possible to restore the whole in accordance with the accepted order.
==== Signatures of artists ====
When these are attached to statues, it is sometimes doubtful whether the name is that of the man who actually made the statue, or of the master whose work it reproduces. Thus there are two well-known copies of a statue of Hercules by Lysippus, of which one is said to be the work of Lysippus, and the other states that it was made by Glycon (see images). Another kind of artist's or artificer's signature that is commoner in Roman times is to be found in the signatures of potters upon lamps and various kinds of vessels; they are usually impressed on the mould and stand out in relief on the terracotta or other material. These are of interest as giving much information as to the commercial spread of various kinds of handicrafts, and also as to the conditions under which they were manufactured.
==== Historical records ====
Many of these inscriptions might well be assigned to one of the categories already considered. But there are some which were expressly made to commemorate an important event, or to preserve a record. Among the most interesting is the inscription of the Columna Rostrata in Rome, which records the great naval victory of Gaius Duilius over the Carthaginians; this, however, is not the original, but a later and somewhat modified version. A document of high importance is a summary of the life and achievements of Augustus, already mentioned, and known as the Monumentum Ancyranum. The various sets of Fasti constituted a record of the names of consuls, and other magistrates or high officials, and also of the triumphs accorded to conquering generals.
==== Inscriptions on tombs ====
These are probably the most numerous of all classes of inscriptions; and though many of them are of no great individual interest, they convey, when taken collectively, much valuable information as to the distribution and transference of population, as to trades and professions, as to health and longevity, and as to many other conditions of ancient life. The most interesting early series is that on the tombs of the Scipios at Rome, recording, mostly in Saturnian Metre, the exploits and distinctions of the various members of that family.
About the end of the republic and the beginning of the empire, it became customary to head a tombstone with the letters D.M. or D.M.S. (Dis Manibus sacrum), thus consecrating the tomb to the deceased as having become a member of the body of ghosts or spirits of the dead. These are followed by the name of the deceased, usually with his father's name and his tribe, by his honours and distinctions, and sometimes by a record of his age. The inscription often concludes with H.I. (Hic iacet), or some similar formula, and also, frequently, with a statement of boundaries and a prohibition of violation or further use – for instance, H.M.H.N.S. (hoc monumentum heredem non sequetur, this monument is not to pass to the heir). The person who has erected the monument and his relation to the deceased are often stated; or if a man has prepared the tomb in his lifetime, this also may be stated, V.S.F. (vivus sibi fecit). But there is an immense variety in the information that either a man himself or his friend may wish to record.
==== Milestones and boundaries ====
Milestones (milliaria) have already been referred to, and may be regarded as records of the building of roads. Boundary stones (termini) are frequently found, both of public and private property. A well-known instance is offered by those set up by the commissioners called III. viri A.I.A. (agris iudicandis adsignandis) in the time of the Gracchi.
== Sanskrit inscriptions ==
Sanskrit epigraphy, the study of ancient inscriptions in Sanskrit, offers insight into the linguistic, cultural, and historical evolution of South Asia and its neighbors. Early inscriptions, such as those from the 1st century BCE in Ayodhya and Hathibada, are written in Brahmi script and reflect the transition to classical Sanskrit. The Mathura inscriptions from the 1st and 2nd centuries CE, including the Mora Well and Vasu Doorjamb inscriptions, represent significant contributions to the early use of Sanskrit, often linked to Hindu and Jaina traditions.
The turning point in Sanskrit epigraphy came with the Rudradaman I inscription from the mid-2nd century CE, which established a poetic eulogy style later adopted during the Gupta Empire. This era saw Sanskrit become the predominant language for royal and religious records, documenting donations, public works, and the glorification of rulers. In South India, inscriptions such as those from Nagarjunakonda and Amaravati illustrate early use in Buddhist and Shaivite contexts, transitioning to exclusive Sanskrit use from the 4th century CE.
Sanskrit inscriptions extended beyond South Asia, influencing Southeast Asia from the 4th century CE onward. Indic scripts adapted for Sanskrit were found in regions like Vietnam, Malaysia, Indonesia, and Cambodia, where they evolved into local scripts such as Khmer, Javanese, and Balinese. These inscriptions highlight the spread of Indian cultural and religious practices.
== See also ==
=== Related fields of study ===
=== Types of inscription ===
=== Notable inscriptions ===
== References ==
== External links ==
The historiography of the Poor Laws can be said to have passed through three distinct phases. Early historiography was concerned with the deficiencies of the Old Poor Law system; later work can be characterized as an early attempt at revisionism, before the writings of Mark Blaug presented a truly revisionist analysis of the Poor Law system.
== Deficiencies of the Old Poor Law ==
Much of the early historiography of the poor law concerned the deficiencies of the Old Poor Law. One of the earliest academic attacks on outdoor relief was Joseph Townsend's 1786 article "Dissertation on the Poor Laws", which criticized the Speenhamland system. Thomas Malthus was the leading intellectual critic of the Poor Law system. His famous work Essay on the Principle of Population contained one chapter dedicated to the Poor Law, and many of his criticisms found their way into the Poor Law report of 1834, which overhauled the system.
== Early revisionism ==
The first real challenge to the traditional interpretation of the Poor Law occurred in 1911 with the publication of John and Barbara Hammond's The Village Labourer and, later, in 1927, with the publication of Beatrice and Sidney Webb's English Local Government. The Hammonds argued that the Speenhamland system was a response to the enclosure system of the 17th century. The Webbs made important contributions to the historiography of the Poor Law. They are considered to be the first to point out that outdoor relief to able-bodied paupers became important prior to 1795, and they were the first historians to critique the 1834 Report.
Another early revisionist analysis occurs in the work of Karl Polanyi, who argues in The Great Transformation that the Speenhamland system was introduced to reinforce the "paternalistic system of labour organisation".
== Revisionism ==
The revisionist analysis of the Poor Law was first presented by Mark Blaug who in 1963 published the paper “The Myth of the Old Poor Law and the making of the New”.
Blaug's analysis rejects the notion that outdoor relief had a disastrous effect on the rural labour market. He argues that outdoor relief increased labour productivity, a conclusion at odds with the authors of the 1834 report. The work of Daniel Baugh, who has analysed poor relief in Essex, Sussex and Kent between 1790 and 1834, extends Blaug’s critique.
== New Poor Law ==
There is also debate surrounding the passing of the Poor Law Amendment Act 1834. The Marxist interpretation of the New Poor Law is that the newly enfranchised middle classes, following the Reform Act 1832 (2 & 3 Will. 4. c. 45), were able to exploit the working classes through legislation which lowered workhouse conditions and made it more difficult to claim poor relief. The New Poor Law would also decrease the amount of tax being paid by the bourgeoisie. The working and pauper classes were still without the vote at this time and were left powerless to oppose it. The workhouse system meant that the peasants and working class could be kept under strict control, as opposed to the system of outdoor relief under the Old Poor Law. It was feared that the old system could lead to an uprising against the ruling class, as had happened in the French Revolution. The traditionalist view is that there was more continuity with the previous system than change: faced with unrest, the rich reasserted their control. A revisionist view fuses the above views and states that the rich reasserted their control, but through a capitalist system which was seen as exploitative of the working class.
The implementation of the Poor Law Amendment Act is also an area of debate. Rose argues that unions were able to evade the act and continue to offer outdoor relief. Williams points to figures showing the number of able-bodied paupers receiving outdoor relief decreasing, and to the construction of workhouses, to conclude that outdoor relief had been abolished by 1850. Lees concludes that it was possible in some areas of the country to apply for outdoor relief after 1850.
== See also ==
Historiography of the United Kingdom
== References ==
In the history of gunpowder there is a range of theories about the transmission of the knowledge of gunpowder and guns from Imperial China to the rest of the world following the Song, Jin and Yuan dynasties. The earliest bronze guns found in China date back to the 13th century, with archaeological and textual evidence of nascent gunpowder technology developed beforehand. Scholars note the scarcity of records for firearms in the Middle East prior to the mid-14th century, and in Russia before the late 14th century, yet cannons had already appeared in Europe by the early 14th century. Less accepted theories hold that gunpowder was independently invented in the Middle East or South Asia.
== Theories of non-Chinese invention ==
The earliest gunpowder recipe and gunpowder weapons date to China's Song dynasty and the oldest extant guns appear in the Mongol-led Yuan dynasty of China. However, historian Tonio Andrade notes that there is a surprising scarcity of reliable evidence of firearms in Iran or Central Asia prior to the late 14th century. He argues that, in the Middle East, no guns are mentioned prior to the 1360s, while Russian records do not contain reliable mentions of firearms until 1382, after the gun's arrival in western Europe, despite their closer proximity and interactions with the Mongol empires.
=== European origin ===
Although there is some evidence that points to the possible appearance of guns in Andalusia as early as the 1330s, Thomas T. Allsen says that "in the Latin West the first uncontestable evidence of firearms is from 1326, surprisingly somewhat earlier than in the lands that lie between China ... and western Europe." This has caused some doubt among historians about the gun transmission theory, and even whether there was a transmission at all. One dissenting opinion comes from Stephen Morillo, Jeremy Black, and Paul Lococo's War in World History, which argues that "the sources are not entirely clear about Chinese use of gunpowder in guns. There are references to bamboo and iron cannons, or perhaps proto-cannons, but these seem to have been small, unreliable, handheld weapons in this period. The Chinese do seem to have invented guns independently of the Europeans, at least in principle; but, in terms of effective cannon, the edge goes to Europe."
There was a stream of thought in Europe that emerged in the early 15th century that attributed the invention of both gunpowder and the gun to a certain Berthold Schwartz (Niger Berchtoldus or "Black Berthold"). By the turn of the 16th century, the story of Black Berthold was being repeated by numerous writers. In 1605, William Camden declared:
Some have sayled a long course as farre as China, the farthest part of the world, to fetch the invention of guns from thence, but we know the Spanish proverb 'long waies, long lies'. One writeth, I know not upon whose credit, that Roger Bacon, commonly called Friar Bacon, knew how to make an engine which with saltpetre and Brimstone, should prove notable for Batterie, but he, tendering the safety of mankind, would not discover it. The best approved authors agree that guns were invented in Germanie, by Berthold Swarte, a Monke skilful in Gebers Cookery or Alchimy, who tempering Brimstone and saltpetre in a mortar, perceived the force by casting up the stone which covered it, when a sparke fell upon it....
It is not exactly certain who Berthold was or if he ever existed as there are no contemporary records of him. Some consider him a mythical figure, used as a stand-in "for all the curious and ingenious experiments related to the new and dangerous mixture of saltpetre, sulfur (brimstone) and carbon." According to Henry Pratap Phillips, Berthold Schwartz was actually named Constantin Anchlitzen, and made gunpowder at Freiburg around the year 1330. J.R. Partington believes Schwartz is a purely legendary figure invented for the purpose of providing a German origin for gunpowder and cannon. Historian Jack Kelly concurs that Berthold was a "legendary figure" that existed to bolster German claims to the invention of the gun and to shield Europeans from the "fact that gunpowder, a critical force in their history, had emerged not from their own inventiveness."
Some European, especially German, researchers of firearms history believed that a German monk, Berthold Schwarz, was the inventor of gunpowder. However, accounts of Berthold's invention differ on various points, including his last name, his nationality, his religion, and the year and location of the invention. The earliest German document mentioning him says that he was a Greek engaged in alchemy, rather than a monk. Later, there were theories saying that he was from Denmark, Prague, Cologne, Freiburg, Braunschweig, and Metz. In religious denomination, he was said by some to be a member of the Franciscan order, and by others of the Dominican order. No one can say for sure. When it comes to the year of his inventing gunpowder, there are a variety of claims, including 1250, 1313, 1348, 1354, 1372, 1380, and 1393, with a difference as great as 143 years.
The dating of Schwartz' invention of gunpowder, given by the Jesuit Athanasius Kircher as 1354, is also later than even the first usage of cannons in Europe. The chronological problem did not go unnoticed and in 1732, Hermann Boerhaave shifted the invention of gunpowder to Roger Bacon while Schwartz was relegated to the role of discovering its explosive military properties. In 1753, Peter Shaw dismissed Schwartz by pointing to European usage of cannons as early as 1338. The idea of Berthold Schwartz as the inventor of gunpowder had already begun to decline in the 17th century. Two years after writing about Schwartz' invention of gunpowder, Kircher changed his mind and said that the "invention of gunpowder, which is not possible to deny took place long before our times in China." In 1678, the commander Louis de Gaya downgraded Schwartz' status as an inventor to a mere transmitter. According to de Gaya, Schwartz obtained gunpowder, invented in China, from Tartars during his travels in Muscovy around 1380. The idea that gunpowder was a Chinese invention was not new to Europeans by then, and had been in circulation in Europe since at least the late 16th century. According to Juan de Mendoza, writing in 1585, the Chinese told the Portuguese that they had invented gunpowder, contradicting their own belief that "an Almane" had been the inventor. By the 18th century, missionary writers with access to Chinese records were convinced that gunpowder and firearms had been invented in China. While Europeans increasingly came to accept that gunpowder and other inventions such as paper, printing, and the compass had originated in China, they added an Orientalist twist to the narrative: "only rational Europeans were able to fully utilize the inventions to create the modern age, while the backward Chinese had squandered them." Belief in a European origin also never died entirely. A well known monograph on the history of artillery by Colonel Henry Hime, published in 1915, attributed the discovery of gunpowder to Roger Bacon and claimed gunpowder was brought to China from the West.
A deeply rooted misconception in the West holds that the Chinese never used gunpowder for war, that they employed one of the most potent inventions in the history of mankind for idle entertainment and children’s whizbangs. This received wisdom is categorically false. The notion of China’s benign relationship with gunpowder sprang in part from Western prejudices about the Chinese character. Some viewed the Chinese as dilettantes who stumbled onto the secret of gunpowder but couldn’t envision its potential. Others saw them as pacifist sages who wisely turned away from its destructive possibilities.
Scholars suggest that the lack of gunpowder weapons in a well-traveled Venetian's catalogue for a new crusade in 1321 implies that guns were unknown in Europe up until this point, while the earliest Latin and Arabic descriptions of purifying saltpeter, a key ingredient in gunpowder, do not appear until the 13th century, seven centuries after the Chinese. Others have tried to extrapolate ancient mentions of producing thunder as proof of gunpowder, but invariably run into problems with dating, anachronisms, and interpolations, leading modern arms historians to conclude that true gunpowder was unknown in Europe before the 13th century.
=== Islamic origin ===
There is an independent invention theory supporting an Islamic origin of the gun, citing the Mamluk deployment of hand cannons in 1260 and a passage by Ibn Khaldun on the Marinid Siege of Sijilmassa in 1274: "[The Sultan] installed siege engines … and gunpowder engines …, which project small balls of iron. These balls are ejected from a chamber … placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator." The passage, dated to 1382, and its interpretation have been rejected as anachronistic by most historians, who urge caution regarding claims of Islamic firearms use in the 1204–1324 period, as late medieval Arabic texts used the same word for gunpowder, naft, as they did for an earlier incendiary, naphtha. Needham believes Ibn Khaldun was speaking of fire lances or proto-guns rather than hand cannons.
Historian Ahmad Y. al-Hassan, based on his analysis of 14th-century Arabic manuscripts which he argues to be copies of earlier texts, claims that hand cannons were used at the Battle of Ain Jalut in 1260. However, Hassan's claims have been refuted by other historians such as David Ayalon, Iqtidar Alam Khan, Joseph Needham, Tonio Andrade, and Gabor Ágoston. Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannons only reached Mamluk Egypt in the 1370s. According to Needham, fire lances or proto-guns were known to Muslims by the late 13th and early 14th centuries. However, the term midfa, dated to textual sources from 1342 to 1352, cannot be proven to refer to true hand-guns or bombards, and contemporary accounts of a metal-barrel cannon in the Islamic world do not occur until 1365. Needham also concludes that in its original form the term midfa referred to the tube or cylinder of a naphtha projector (flamethrower), then after the invention of gunpowder it meant the tube of fire lances, and eventually it applied to the cylinder of hand-guns and cannon. Similarly, Andrade dates the textual appearance of cannon in Middle Eastern sources to the 1360s. Gabor Ágoston and David Ayalon believe the Mamluks had certainly used siege cannon by the 1360s, but earlier uses of cannon in the Islamic world are vague, with a possible appearance in the Emirate of Granada by the 1320s; the evidence, however, is inconclusive.
=== Indian origin ===
The idea that ancient Hindus had knowledge of gunpowder traces back to two 18th-century authors: N.B. Halhed and Q. Craufurd. Halhed's translation of a Persian version of a Sanskrit digest of laws, the Code of Gentoo Laws (1776), renders agni-astra as "firearms" or "fire-arrow discharged from bamboo," and sataghni, which literally means "hundred-killer," as "cannon." Craufurd, in a text published in 1790, thought the old Hindus used gunpowder but was doubtful of their use before Europeans. In 1848, Professor Wilson, Director of the Asiatic Society at Calcutta, wrote that Indians were well acquainted with gunpowder and that rockets were an Indian invention. According to H.M. Elliot's The History of India as Told by its own Historians (1875), saltpetre may possibly have been used in explosives mentioned in the Ramayana and Sri Bhagavat.
There is no clear proof that gunpowder and rockets were known in any other country earlier than in China. J. Dubois (1765–1848) maintained that rockets were invented in India as early as 300 BCE, on the grounds that the ancient Sanskrit classic, the Rāmāyaṇa, spoke of vāṇa or bāṇa, which was at one time thought to mean 'rocket'. W. Egerton regarded the agnyastra of the Vedic Hymns as a type of rocket. Further examination of the Rāmāyaṇa shows, however, that the term vāṇa or bāṇa simply means 'an arrow shot from a bow'.
In 1880, Gustav Oppert claimed that the oldest documents describing gunpowder were the Sanskrit texts Sukraniti and Nitiprakasika. The Sukraniti contains descriptions of firearms and a formula for agni-curna (fire-powder) or 'suvarcilavana' (well-shining salt) very similar to that mentioned in the Wujing Zongyao: 5 parts saltpetre, 1 part sulphur, and 1 part charcoal. The two firearms mentioned in the Sukraniti are a musket and a cart-drawn gun. There are no definite dates for these works despite claims of their antiquity. Oppert uses archaeological evidence from ancient temple carvings in India, where soldiers are depicted carrying or in some cases firing firearms, as proof of ancient use of firearms. Most of these temples are no older than 500 years, with the exception of the Tirupallani temple. He nevertheless regards the account of firearms in the Sukraniti as authentic and holds that firearms and gunpowder were used in India from the ancient Vedic period (1500–500 BCE).
The ingredients listed in the Sukraniti as constituents for gunpowder, such as realgar, orpiment, lac, camphor, indigo, pine gum, magnetic oxide of iron, vermillion and graphite, are used in the manufacture of incendiary weapons in the Arthashastra and also appear in Chinese accounts.
The Arthashastra lists recipes for explosive and inflammable powders called 'agnisamyogas' or 'agniyoga', which J.R. Partington notes are very similar to gunpowder recipes quoted in Chinese, Arabic and European texts. However, they do not contain saltpetre. A. Kalyanamaran argues that sulphur was not needed to create gunpowder and that nitre could be obtained from the fermented dung mentioned in the ingredients. The Greek historian Philostratos cites a letter written by Alexander saying that the reason the Greek army refrained from advancing from the Hydaspes to the Ganges was the frightful dangers it encountered when the people of Oxydraces threw flaming thunderbolts from the top of their forts. H. Wilkinson, who also believes Greek Fire was first discovered by the Indians, considers this the earliest evidence of gunpowder in the world. According to J. Backman, gunpowder was invented in India and brought to Europe by Muslims. A device in the Arthashastra called ulka produces a shower of firebrands with a thunderous sound (or noise of drumming) in the sky, which, according to the text, astrologers were to display to enemy subjects on the day of their birth star. A 7th-century Chinese text mentions that people in northwest India were familiar with saltpetre and used it to produce purple flames.
The Nitisara, variously dated between the 4th century BCE and the 6th century CE, is a treatise by a Buddhist scholar named Kamandaka; it mentions gun-firing (nalikadibhdi) and states that the bodyguards of the king should rouse him with gun-firing if he indulges in girls, drink, bouts, etc. The gun-firing was probably a shotless military pyrotechnic using tubular weapons (although Oppert states that another word, 'Nadika', is also used in one version of the text and may well mean gongs).
Muhammad ibn Zakariya al-Razi mentions in a treatise dated 910 a material called 'Indian salt', which he describes as "black and friable, with very little glitter"; this has been interpreted as saltpetre by Berthelot, but the identification is disputed by Joseph Needham. According to Firishta, Mehmud Ghaznavi (r. 999–1030) employed 1,008 cannon (top) and muskets (tufang) during his battle of Peshawar with the Kabul Shahi king Annandapal. In a text called Mujmalut Tawarikh, dated to 1126 and translated from an Arabic version itself based on an original Sanskrit work, some type of grenade is mentioned, shaped like a terracotta elephant and fitted with a fuse, which was placed in the army's van; when the invading army drew near, it exploded, and the flames destroyed a great portion of that army.
Many western military and arms historians, as well as some Indian scholars, have cast doubts on the authenticity of the Sukraniti, mainly for two reasons. First, this work could not be dated with reasonable certainty and, second, the descriptions of gunpowder and firearms given in it appear to be far too advanced for the period to which this work is generally assigned. A few scholars are also of the opinion that the entire book is a clever piece of forgery.
According to Henry Pratap Phillips, some content in the Sanskrit works resembles that found in the Wujing Zongyao, and it is possible that it was borrowed from the latter. However, he believes the opposite: that the gunpowder formula in the Wujing Zongyao came from the Sukraniti. Phillips and Oppert both consider the Rajalakshminarayana Hradaya, which Oppert dates to a "very remote period," as proof of ancient Indian knowledge of gunpowder, since it mentions charcoal, sulphur, and other materials in the preparation of fire. The lack of saltpetre is explained by Phillips as a conscious omission for the sake of secrecy.
The Chinese texts are usually fairly precisely dated, whilst Indian works are often not. This difficulty must not be allowed to impair the interest or value of Indian works, but they must also be examined from the point of view of their scientific and technical contents with due care and with a suitably critical attitude. I feel that Oppert's treatment does not satisfy this requirement.
J.R. Partington rejected Oppert's claims in his A History of Greek Fire and Gunpowder. Partington believes that the sataghni mentioned in Sanskrit texts was an iron mace rather than a cannon, while Joseph Needham is of the opinion that its translation as cannon cannot be sustained. The word for cannon, nalika, does not appear in any Sanskrit dictionary, and the source of the Sukraniti is the mythical Usanas of Sukracharya. There is also no classical Sanskrit word for saltpetre, while shoraka in late Sanskrit is derived from Persian. Rajendralala Mitra raised doubts about the age of another work attributed to Usanas, the Nitisara of Sukracharya, noting that it contains descriptions of firearms as they existed a hundred years ago. In Partington's opinion the work is legendary. In 1902, P.C. Ray raised doubts about the authenticity of textual evidence supporting ancient Hindu knowledge of gunpowder. Ray pointed out that the gunpowder mixture of 4:1:1 saltpetre, charcoal, and sulphur found in the Sukraniti was the most efficient for guns and was not known in Europe until the 16th century, leading him to believe that the content was an interpolation by "the handiwork of some charlatan." P.K. Gode provided textual evidence that pyrotechnical recipes recorded in the Sanskrit treatise Kautukacintamani were copied from a Chinese source. Some scholars date the text to the modern period based on the fact that it mentions matchlock firearms. Similarly, H.L. Blackmore wrote in 1965 that Oppert's theories were absurd and that no proper attempt to date the sources had been made. H.W.L. Hime goes as far as to say that "early Indian gunpowder is definitely a fiction" while Partington calls it a "legend." According to Kaushik Roy, the ancient and medieval Indians used saltpetre for incendiary devices but not for gunpowder.
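For concreteness, the part-based recipes quoted in this section translate into percentage compositions by simple arithmetic, as in the short sketch below (the function and variable names are illustrative only).

```python
# Rough arithmetic only: convert the part ratios quoted above into
# percentage compositions of the three ingredients.
def composition(saltpetre: float, sulphur: float, charcoal: float) -> dict:
    """Convert parts of saltpetre, sulphur and charcoal to percentages."""
    total = saltpetre + sulphur + charcoal
    return {
        "saltpetre": round(100 * saltpetre / total, 1),
        "sulphur": round(100 * sulphur / total, 1),
        "charcoal": round(100 * charcoal / total, 1),
    }

# 5:1:1, the Sukraniti formula compared above with the Wujing Zongyao:
print(composition(5, 1, 1))  # {'saltpetre': 71.4, 'sulphur': 14.3, 'charcoal': 14.3}

# 4:1:1 saltpetre : charcoal : sulphur, the mixture Ray discusses
# (sulphur and charcoal are one part each, so the order does not matter):
print(composition(4, 1, 1))  # {'saltpetre': 66.7, 'sulphur': 16.7, 'charcoal': 16.7}
```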
The theory of an Indian origin of gunpowder has been used by the Hindu right-wing Bharatiya Janata Party to oppose attempts to ban the bursting of firecrackers by Hindus during Diwali.
== Arguments for and against Chinese transmission ==
Historian Tonio Andrade supports the gun transmission theory, noting that while records of gunpowder weapons and their evolution into the gun exist in China, "there are no records of any such developments in Europe," and that the gun "appears fully formed around 1326." This is not strictly true, as Kelly DeVries points out that compilers of early gunpowder recipes in Europe understood that if the instrument carrying gunpowder were enclosed at one end, the gunpowder reaction inside would produce "flying fire." Andrade goes on to analyze the nature and etymology of gunpowder in Europe and concludes that the evidence intrinsically favors the transmission theory rather than an independent invention. China has the older and more numerous gunpowder formulas, using a variety of proportions of the key ingredients – saltpeter, sulphur, and charcoal – which he takes as proof of evolution and experimentation there, where gunpowder was first applied to warfare as an incendiary, then as an explosive, and finally as a propellant. In contrast, gunpowder formulas in Europe appear later and diverge very little from the already ideal proportions for creating an explosive and propellant powder. Another facet of the transmission theory is that gunpowder appeared in Europe ready made for military usage and was generally referred to as gunpowder, rather than by a civilian term such as the Chinese "fire-drug," which suggests an originally non-military usage; in Europe it was almost immediately and exclusively used for its military qualities. Muslim terms for saltpeter may also point toward a transmission of gunpowder, if not of the gun itself: an Andalusian botanist referred to it as "Chinese snow," while in Persia it was called "Chinese salt." Perhaps even further in the Sinocentric gun transmission camp is Joseph Needham, who claims that "all the long preparations and tentative experiments were made in China, and everything came to Islam and the West fully fledged, whether it was the fire-lance or the explosive bomb, the rocket or the metal-barrel hand-gun and bombard." However, theories of European, Islamic, and Indian origins for the gun and gunpowder persist today in tandem with the transmission theory.
There are problems at both extremes of the gun transmission debate. Its proponents emphasize the older history of gunpowder evolution, attested by historical records and archaeological samples in China; its less obviously military name, "fire medicine"; the Mongol role as a catalyst in disseminating gunpowder technology; and the scant or absent evidence of prior experimentation with gunpowder in Europe for non-military purposes before the arrival of the gun. However, there are still several blanks in the history of a gun transmission which its proponents have been unable to fill. The rapid spread of guns across Eurasia – only 50 years from China to Europe – with virtually no evidence of the route from one extreme of the continent to the other, remains a mystery. Other Chinese inventions such as the compass, paper, and printing took centuries to reach Europe, with events such as the Battle of Talas offering at least a possible point of transfer; no such event exists on record for either gunpowder or the gun. There is simply no clear route of transmission, and while the Mongols are often pointed to as the likeliest vector, Timothy May points out that "there is no concrete evidence that the Mongols used gunpowder weapons on a regular basis outside of China." According to Kate Raphael, the list of Chinese specialists recruited by Genghis Khan and Hulagu provided by the History of Yuan includes only carpenters and blacksmiths, but no gunpowder workers. A conclusion most military historians in the transmission camp have come to is that the rapid diffusion of gunpowder and the gun is probably best explained by its clear military applications.
Although the spread of gunpowder is directly related to the rise of the Mongols and the Pax Mongolica, it is unclear whether the Mongols themselves contributed to the spread. Some historians have claimed the Mongols used gunpowder weapons, essentially bombs hurled by catapults, in the Middle East and perhaps Eastern Europe; unfortunately there is no definite documentary or archaeological evidence to confirm it. Considering the Mongols rarely met a weapon they did not like, we can be certain that if they found a way to transport it safely it would have been incorporated into their arsenal outside China. Nonetheless, it remains speculation... However... the Mongols used it in their wars against the Jin, the Song and in their invasions of Japan.
Opponents of the transmission theory criticize the vagueness of Chinese records on the specific usage of gunpowder in weaponry, the uncertain presence of gunpowder in the incendiary weapons described by Chinese documents, the weakness of Chinese firearms, and the absence of any route of diffusion or evidence of guns between Europe and China before 1326, and emphasize the independent evolution of superior guns in Europe. This position, too, is problematic, as discussed above. Notably, Europe shows an acute dearth of evidence of evolution or experimentation with gunpowder or gunpowder weapons leading up to the gun of 1326 – evidence of exactly the kind that can be found in China. Gunpowder appeared in Europe primed for military usage as an explosive and propellant, bypassing a process which took centuries of Chinese experimentation with gunpowder weaponry to reach, and making a nearly instantaneous and seamless transition into gun warfare, as its very name suggests. Furthermore, early European gunpowder recipes shared defects identical with Chinese recipes, such as the inclusion of the poisons sal ammoniac and arsenic, which provide no benefit to gunpowder. Bert S. Hall explains this phenomenon in his Weapons and Warfare in Renaissance Europe: Gunpowder, Technology, and Tactics by drawing upon the gunpowder transmission theory, explaining that "gunpowder came [to Europe], not as an ancient mystery, but as a well-developed modern technology, in a manner very much like twentieth-century 'technology-transfer' projects." In a similar vein, Peter Lorge supposes that the Europeans experienced gunpowder "free from preconceived notions of what could be done," in contrast to China, "where a wide range of formulas and a broad variety of weapons demonstrated the full range of possibilities and limitations of the technologies involved." There is also the vestige of Chinese, rather than European, influence on Muslim terminology for some gunpowder-related items: saltpeter was described as Chinese snow or Chinese salt, fireworks were called Chinese flowers, and rockets Chinese arrows. Moreover, Europeans experienced great difficulty in obtaining saltpeter, a primary ingredient of gunpowder which was relatively scarce in Europe compared to China and had to be obtained from "distant lands or extracted at high cost from soil rich in dung and urine." Thomas Arnold believes that the similarities between early European cannons and contemporary Chinese models suggest a direct transmission of cannon-making knowledge from China rather than a home-grown development. Whatever the truth may be, the first unambiguous references to guns appeared in Europe in the 1320s.
== References ==
== Bibliography ==
Ágoston, Gábor (2005), Guns for the Sultan: Military Power and the Weapons Industry in the Ottoman Empire, Cambridge University Press, ISBN 978-0-521-60391-1
Andrade, Tonio (2016), The Gunpowder Age: China, Military Innovation, and the Rise of the West in World History, Princeton University Press, ISBN 978-0-691-13597-7.
Arnold, Thomas (2001), The Renaissance at War, Cassell & Co, ISBN 978-0-304-35270-8
Bachrach, David Stewart (July 2008). "Review of Gunpowder, Explosives and the State: A Technological History". Technology and Culture. 49 (3). Aldershot: Ashgate: 785–786. doi:10.1353/tech.0.0051. S2CID 111173101.
Buchanan, Brenda J. (2006), Gunpowder, Explosives and the State: A Technological History, Routledge, ISBN 978-0-7546-5259-5
Chase, Kenneth (2003), Firearms: A Global History to 1700, Cambridge University Press, ISBN 978-0-521-82274-9.
Cressy, David (2013), Saltpeter: The Mother of Gunpowder, Oxford University Press
Kalyanaraman, A. (1903). The Saga of the Indo-Aryans.
Kelly, Jack (2004), Gunpowder: Alchemy, Bombards, & Pyrotechnics: The History of the Explosive that Changed the World, Basic Books, ISBN 978-0-465-03718-6.
Khan, Iqtidar Alam (1996). "Coming of Gunpowder to the Islamic World and North India: Spotlight on the Role of the Mongols". Journal of Asian History. 30: 41–45.
Khan, Iqtidar Alam (2004), Gunpowder and Firearms: Warfare in Medieval India, Oxford University Press
Khan, Iqtidar Alam (2008), Historical Dictionary of Medieval India, The Scarecrow Press, Inc., ISBN 978-0-8108-5503-8
Liang, Jieming (2006), Chinese Siege Warfare: Mechanical Artillery & Siege Weapons of Antiquity, Singapore, Republic of Singapore: Leong Kit Meng, ISBN 978-981-05-5380-7
Lorge, Peter (2005), Warfare in China to 1600, Routledge
Lorge, Peter A. (2008), The Asian Military Revolution: from Gunpowder to the Bomb, Cambridge University Press, ISBN 978-0-521-60954-8
Lu, Yongxiang (2015), A History of Chinese Science and Technology 2
May, Timothy (2012), The Mongol Conquests in World History, Reaktion Books
Morillo, Stephen (2008), War in World History: Society, Technology, and War from Ancient Times to the Present, Volume 1, To 1500, McGraw-Hill, ISBN 978-0-07-052584-9
Needham, Joseph (1971), Science & Civilization in China: Volume 4 Part 3, Cambridge University Press
Needham, Joseph (1980), Science & Civilisation in China: Volume 5 Part 4, Cambridge University Press, ISBN 978-0-521-08573-1
Needham, Joseph (1986). Science & Civilisation in China: Volume 5, Chemistry and Chemical Technology, Part 7, Military Technology: The Gunpowder Epic. Cambridge University Press. ISBN 978-0-521-30358-3.
Oppert, Gustav Salomon (1880). On the weapons, army organisation, and political maxims of the ancient Hindus, with special reference to gunpowder and firearms. Albrecht Weber; Lakshmīkānta Varmā; Śukra.; Vaiśaṃpāyana. Madras: Higginbotham.
Partington, J.R. (1999), A History of Greek Fire and Gunpowder, Baltimore: Johns Hopkins University Press, ISBN 978-0-8018-5954-0
Phillips, Henry Prataps (2016), The History and Chronology of Gunpowder and Gunpowder Weapons (c.1000 to 1850), Notion Press, ISBN 9789352067633
Purton, Peter (2010), A History of the Late Medieval Siege, 1200–1500, Boydell Press, ISBN 978-1-84383-449-6
Raphael, Kate (2011). Muslim fortresses in the Levant: between Crusaders and Mongols. Culture and civilization in the Middle East. Vol. 23. London: Routledge. ISBN 978-0-4155-6925-5.
Roy, Kaushik (2014). Military Transition in Early Modern Asia, 1400–1750. London: Bloomsbury Academic. ISBN 978-1-7809-3765-6.
This article presents the historiography of the massacres of Poles in Volhynia and Eastern Galicia, as presented by historians in Poland and Ukraine after World War II.
Beginning in March 1943, and lasting until early 1945, a violent ethnic cleansing operation against Poles – conducted primarily by the Ukrainian Insurgent Army (UPA) – occurred in the regions of Eastern Galicia and Volhynia (now in Western Ukraine). According to political scientist Nathaniel Copsey, research into this event was quite partisan until 2009 (with some exceptions), and dominated by Polish researchers, some of whom lived there at the time or are descended from those who did. The most thorough is the work of Ewa and Władysław Siemaszko, the result of years of research conducted with the goal of demonstrating that the Poles were victims of genocide. Nonetheless, the 45 years of state censorship resulted in an excessive supply of works described as "heavy in narrative", "light in analysis" and "inherently – though perhaps unconsciously – biased against Ukrainians."
== Polish historiography ==
=== Communist Poland ===
The Polish historiography of the Volyn tragedy during the dictatorship of the communist party can be broken down into three periods:
Late 1950s to the 1960s
First half of the 1970s
Second half of the 1980s
In the early People's Republic of Poland, the question of the Polish–Ukrainian conflict was never a subject of independent studies. Ukrainian historian Roman Hrytskiv believes that Polish Communists avoided this subject as it could raise questions regarding the Polish population in Western Ukraine.
Paweł Machcewicz from the Polish Institute of National Remembrance suggests two reasons why, up until the collapse of communism, censorship blocked the subject of the massacres of Poles in Volhynia and Eastern Galicia perpetrated by the OUN-UPA:
Ukrainians were considered a friendly Soviet nation (a member of the USSR) and any mention of the Polish-Ukrainian conflict would be seen as anti-Soviet.
The previously Polish territories of Volhynia and Eastern Galicia were incorporated into the Soviet Union; therefore any reference to those lost lands would be treated as anti-Soviet revisionism.
As the subject matter of the anti-Polish action in Volhynia and Eastern Galicia was prohibited, in Polish popular remembrance of the Ukrainian Insurgent Army (UPA) the site of the mass killings was transferred to Bieszczady and Eastern Lubelszczyzna; the communists were thus able to portray Operation Vistula as the only effective way of liquidating the UPA network. The theme of UPA "terrorism" was occasionally brought up in order to affirm the actions of the "people's government". According to Hrytskiv, Polish studies branded all Ukrainian nationalist organizations as anti-Polish, criminal and collaborationist.
New studies based on factual information were initiated in the early 1970s. Under the influence of Soviet historiography, Polish historians continued to expound the mistaken Soviet concept of Ukrainian bourgeois nationalists, viewed in their own specific manner. This category includes the works of Ryszard Torzecki, who explained the conflict as the product of Hitlerite divide-and-conquer politics, the chauvinism of Ukrainian nationalists, and Ukrainian–Polish disagreements in the interbellum, which the Nazis exploited in their own interests. Similar concepts were formulated by A. Szcześniak and W. Szota in a publication that was soon removed from sale and from libraries. Officially the book was dedicated to the activities of the Ukrainian nationalists in the interbellum, but it also explained that the conflict had its origins in the late 19th century and that the Volyn tragedy was a continuation of the terror campaigns of 1918–1939. This work investigated in detail the Ukrainian–Polish negotiations of 1942–45 aimed at stopping the conflict and uniting to fight a common foe – the USSR. This study brought about a re-evaluation of the UPA within the USSR itself.
The next significant study, published in the mid-1980s by Edward Prus – Heroes under the Sign of the Trident (1985) – employed a journalistic and propagandistic style. Prus was the first to introduce terms and concepts such as the Ukrainian "slaughter" (Polish: rzeź) of Poles. He compares the anti-Polish actions in Volyn and Galicia without the use of academic citations. According to Hrytskiv, Prus's work has no scientific merit; in his opinion, its journalistic style, falsification, and manipulation reflect the state of Polish historiography in the last years of communist Poland.
=== Independent Poland ===
The year 1989 marked the end of the Polish totalitarian state and a new era in Polish historiography. In the light of Polish independence, the subject of Ukrainian–Polish relations became a growing concern. The first study to be published was Tadeusz Olszański's 1989 article, which shattered previous understandings of the OUN. He ascribes the Volyn tragedy to the inadequate policies of the Polish government in the interbellum and to the moral breakdown of society during the Soviet and German occupations. Olszański notes that in pre-war Poland a Ukrainian nationalist movement could develop relatively freely, even in its most radical forms, including the use of terror, and that the Polish state was not able to solve the problems of coexistence between Poles and Ukrainians, which resulted in the popularization of nationalist and communist movements among the Ukrainians.
Olszański introduces the term "de-polonization". He suggests that the OUN expected a return to the situation of 1918, when Poles and Ukrainians fought over disputed territories, and that the Ukrainian leadership wanted neither a Polish population nor Polish military activity in the region. The de-polonization action started in March 1943. Contrary to what Ukrainian emigré authors claim, the Ukrainian initiative and the unprovoked character of the action are confirmed by German documents. In an article from 1991, he divides the "Volhynian terror" into the following stages:
Up to December 1942 – murders of individual Poles and Polish families
From the beginning of 1943 – growth in attacks against Poles, reaching a critical point in March 1943, the first point at which one can speak of mass terror
July – August 1943 – apogee of terror; Ukrainians also assault Polish self-defence forces; spread of terror to Eastern Galician districts
From September 1943 – attacks decline and Polish–Ukrainian confrontations gradually become more military in nature.
Olszański underlines the influence of numerous provocations by the NKVD and Soviet partisans in the occupied zones in directing the conflict against the UPA. Retaliatory actions by Polish forces and the Polish underground's negative view of Ukrainian independence were also factors. In the second period, Olszański states, the Polish underground and the communists initiated a number of anti-Ukrainian terrorist actions. Ukrainian responses were restrained, as negotiations were then being undertaken for a united Ukrainian–Polish front against Moscow. This was the first step towards an understanding with the Poles, affirmed in 1945 by the tactical understanding between the UPA and the AK. In Olszański's words, this was the end of the conflict between these national movements, which would now wage a joint battle against the communist regime in Poland.
The UPA was unable to conduct the action alone. It mobilized Ukrainian peasants on a large scale, who were later given Polish properties. The vast numbers of peasants participating in anti-Polish attacks, together with UPA units or individually, were also motivated by numerous Banderist agitators and by communist agents from the north of Volhynia. Olszański sees the role of the Soviets in the events as insufficiently explained. He points out that field organizations of the OUN were penetrated by communist agents, and that in some instances Soviet units disguised as UPA murdered Poles to gain the support of the local Polish population.
According to Olszański, the responsibility for the terror lies mostly on the OUN-B leadership, which decided to perform the large-scale anti-Polish action.
Olszański expressed the view that the goal of the action was to expel the Poles, not to exterminate them. Ten years later, in a postscript to his earlier article, he admitted that he had been wrong: "More and more documents prove not only that the de-polonization action was a planned military operation and that the order from OUN-UPA leadership existed (though still not found), but also that the purpose of this operation was physical extermination (murder) of at least most of the Polish population of these lands, and not only – as I erroneously believed – expulsion. Thus it was the crime of genocide."
Zbigniew Kowalewski's study (1993) stated that the role of the auxiliary police and its collaboration with Soviet Polish diversionist-partisan groups provoked the Ukrainians to use force. These actions were not just those of the OUN and UPA, but also of Soviet partisans, the auxiliary police and other independent groups. Kowalewski introduces the thesis that the OUN changed its strategy regarding the Poles in 1944–45 in order to form a unified Polish–Ukrainian front against the USSR. This would explain the waves of OUN propaganda spread among the Polish population regarding the formation of a unified front and the cessation of retributive actions. The Ukrainians and some sections of the Polish population understood that without an independent Poland there could not be an independent Ukraine.
The destruction of the totalitarian system in Poland allowed another direction in Polish historical studies, fuelled by the earlier studies published under the communist regime. One of the first such studies was undertaken by J. Turowski and Władysław Siemaszko in 1990, based on 350 eyewitness accounts from veterans of the Polish Home Army regarding the anti-Polish terror in Volyn. Subsequently, Władysław and his daughter Ewa Siemaszko, in their own ten-year research project, went on to document murders committed against Polish citizens by Ukrainian insurgents in some 1,865 villages and towns of Volhynia during the Nazi and Soviet occupations. Their books were based on witness accounts, court documents including transcripts from trials of Ukrainian war criminals, the Polish national archives, and statistical censuses. In 2010 the Institute of National Remembrance (Bulletin No. 7–8, 116–117) published an overview by Ewa Siemaszko of their joint research, summarizing the data collected to date.
The fall of the communist system in Poland gave fuel to two directions in Polish historiography regarding the Ukrainian–Polish conflicts: liberal-democratic and nationalistic. The first group has focused on the reasons for the inter-ethnic conflict in Western Ukraine, and most professional historians subscribe to it. The second group focuses on the problem of anti-Polish terror by the UPA; this movement is supported by former inhabitants of Volyn and Galicia and by members of the various Kresy societies.
The Polish emigré centre in London at this time began actively to support a nationalist view of the Volyn tragedy. In 1992, a magazine called Na rubieży began publishing studies by W. Siemaszko, E. Prus and Wiktor Poliszczuk. Attempts were made to document the Polish victims of Ukrainian nationalists, with the inclusion of unverified or even falsified information. Numerous memoirs were published, dominated by the works of E. Prus. Many of these publications were printed by the newly established Nortom publishing house in Wrocław.
The liberal-democratic movement is represented by the works of Ryszard Torzecki, which reviewed the theses put forward under the communist administration and developed a framework for further scholarship. Torzecki argues that the guarantee of Volyn's territorial integrity lay in its Polish population. From this point of view the author examines the anti-Polish actions, the spontaneous actions of the peasants, and the influence of OUN propaganda on their fight for social justice. Describing concrete incidents of terror in Volyn, he states that the pivotal moment in the development of the bloodshed came with the transfer of armed Ukrainian police into the forests. This act swelled the OUN-B-dependent groups with men who had experience in the ethnic cleansing of Jews; it also prompted a mass movement of Poles into the ranks of the auxiliary police, which further escalated the situation. The author postulates that the anti-Polish terror may have been planned, with the main purpose of driving out the Poles. Torzecki states that the raid by Sydir Kovpak introduced a significant destabilizing factor to the region and prompted the formation of the Ukrainian National Defence – the group which first initiated the anti-Polish actions. The discussions between the Polish and Ukrainian sides were doomed to failure because they were based on tactical considerations.
Although Torzecki states that in 1943–44 the attempts to curtail the conflict between Ukrainian and Polish nationalism were doomed to failure, in his opinion it was OUN-UPA that could have stopped the conflict and did not. He lays blame personally on Roman Shukhevych, who was accustomed to dealing with problems from a position of force, and would consider using terror on the civilian population.
In his 1997 study, Filar came to the conclusion that the sole aim of the actions of the OUN and UPA was to destroy the Polish population. He bases this on a command given by Klym Savur regarding the liquidation of the urban Polish population, information about which was obtained from a criminal deposition of Yuri Stelmashchuk held in the archive of the SBU in Volyn. The command was a secret one, given orally by D. Kliachkivsky, to liquidate all Polish elements in the region. Filar concludes that the UPA consciously aimed its actions against the civilian Polish population, and that the UPA made the extermination of Polish elements a higher priority than fighting the occupiers.
Grzegorz Motyka and Rafał Wnuk point out that at the end of World War II there was a real potential for Polish–Ukrainian dialogue and understanding. As a result, the two anti-communist forces, the UPA and WiN, signed tactical agreements on further cooperation. In a new era of Polish–Ukrainian relations in the struggle for freedom, the former conflicts lost their principal meaning.
In a further study in 1999, Motyka states that the conflict between the Ukrainian and Polish peoples ended in 1945. After that, the UPA fought against the communist government in Poland and not against the Polish population – unlike the Poles, who continued their terror against the Ukrainian population, and the Polish communists, who, under the guise of fighting the Ukrainian nationalist underground, deported the Ukrainian population in 1947.
After the commemoration of the "Wisła" action, the thesis of a planned "Volyn terror" became mainstream in Polish historiography. More eyewitness accounts were collected and published in 2000 (edited by W. and E. Siemaszko) in the collection Genocide perpetrated by Ukrainian nationalists against the Polish population of Volyn 1939–1945, which documented OUN and UPA crimes against the civilian population.
Using a solid documentary base, the authors attempt to convince the reader of the genocidal character of the anti-Polish actions of the UPA. The first volume gives a chronological and geographical listing of 1,686 witnesses, archival information, and other facts. The second volume gives the authors' interpretation of these events, a summation of Polish casualties, the names of the perpetrators, and other documents.
With the 60th anniversary of the Volyn tragedy in 2003, a third era in the study of Ukrainian–Polish conflict started.
In 2002 Grzegorz Motyka, finding all the previous concepts regarding the anti-Polish actions of the UPA inadequate, suggested viewing Ukrainian–Polish relations through the prism of the Ukrainian insurrection during the war. Encouraged by the mass desertion of the Ukrainian police, Dmytro Kliachkivsky (Klym Savur) initiated a wave of UPA attacks against the Polish civilian population, giving the liquidation order on July 11. By autumn 1944 most of the anti-Polish actions had stopped; terror was used only against those who co-operated with the NKVD, and the Ukrainian leaders had understood that it was time to unite with the Poles against the USSR. Polish actions against the Ukrainian civilian population were, according to Motyka, restricted and punitive in nature. They were carried out by the Polish auxiliary police, the self-defense leagues, the AK, and Red partisan units formed from ethnic Poles. In his latest studies, Motyka attempts to synthesize the main concepts of both current directions in Polish historiography, analyzing the problem of anti-Polish terror in Volyn, its reasons and results, in order to induce the Ukrainians to officially condemn the activities of the OUN and UPA.
According to Roman Hrytskiv, a characteristic of Polish historiography is the national component in its understanding of the problem. This is evident in the treatment of the Ukrainian–Polish conflict as an episode of purely Polish history: attention is focused on the anti-Polish terror of the UPA and on demands that the Ukrainians condemn these actions; the anti-Ukrainian actions of the Polish underground are relegated to a secondary position; the style of exposition is moral and emotional; and materials of questionable authenticity (memoirs, eyewitness accounts, works of literature, etc.) are included. At the same time, it was in Poland that the professional study of the Ukrainian–Polish conflict started. Polish historians were the first to thoroughly study and analyse the facts of this conflict; they developed a periodization, collected a significant number of Polish historical sources and developed a number of alternative methods of studying the problem.
== Ukrainian historiography ==
The issue of the Volyn massacres was largely absent from Ukrainian scholarly literature for many years, and Ukrainian historiography did not undertake any objective research of the events in Volyn. Until 1991, independent Ukrainian historical research was possible only abroad, mainly in the USA and the Canadian diaspora. Despite publishing a number of works devoted to the history of the UPA, Ukrainian emigré researchers (with only a few exceptions) remained completely silent about the Volyn events for many years. Until very recently much of the remaining documentation was closed in Ukrainian state archives, unavailable to researchers. As a result, Ukrainian historiography lacks broader reliable research of the events, and the presence of the issue in Ukrainian publications is still very limited. The young generation of Ukrainian historians is often infected with Ukrainocentrism and often borrows stereotypes and myths about Poland and Poles from the biased publications of the Ukrainian diaspora.
The active collection and publication of information regarding the Volyn tragedy began in the summer of 2002, after it became known that Kresy organizations (made up of AK veterans and various associations) were planning to hold commemorations in memory of only the Polish victims of the conflict. The I. Krypiakevych Institute of the Academy of Sciences of Ukraine began to collect accounts: Tetiana Kostenko focusing on the region of Dubno, Ivan Pusko on the regions of Volyn, and Volodymyr Sobchuk on the area around Kremenets. Yaroslav Tsaruk began collecting accounts from the area around Volodymyr-Volynsk in 1985 and became more active in the 1990s, stimulated by the writings of J. Turowski and W. Siemaszko, when he noticed that the accounts in their writings did not correspond with his own findings. Travelling by bicycle throughout the area, he collected accounts from those who remembered the events and made lists of those who died or were wounded during the war, including accounts about Poles, Ukrainians and others. He checked the accounts against information collected immediately after the war and again in 1976. The post-war account paralleled his findings, whereas the 1976 account had been specially written to show all anti-Soviet actions in a negative light.
According to the Ukrainian historian Yaroslav Tsaruk, who studied the materials collected by Siemaszko from Polish villagers, the number of ethnic Poles given for some of the villages he is familiar with does not correspond to recorded Ukrainian statistical data. According to Tsaruk, Siemaszko included in the number of Polish casualties those who emigrated before the commencement of hostilities, and counted colonies, subdivisions of villages and population points which were never separate administrative units, thereby enlarging the number of Polish population points.
According to the information collected by the Siemaszkos, only 80 Ukrainians were murdered in the Volodymyr-Volynsk region; according to the materials collected by Tsaruk, 1,454 Ukrainians died at the hands of Polish paramilitary groups (the names of 1,244 victims have been collected). Tsaruk stated that in the Volodymyr region the initial attacks were made on Ukrainian villages by Polish–German police units and were met with retaliation in self-defence. According to Siemaszko, 1,915 Poles died at the hands of Ukrainian nationalists; according to Tsaruk, the figure was 430.
In the village of Biskupychy Verkhni (Nekhvoroshchi), Tsaruk notes 11 Ukrainians murdered on May 20, 1943, including a 3-year-old girl and a 95-year-old grandmother. Siemaszko's book mentions the murder of 90 Poles there on July 11, but does not mention the murder of the Ukrainians that, according to Tsaruk, took place on May 20. Tsaruk gives the names of 9 Ukrainians murdered on May 20 in the village of Khmeliv, among them an 18-month-old child; Siemaszko states that 11 Poles were murdered there at the beginning of August, again neglecting, according to Tsaruk, to mention the earlier Ukrainian victims.
Tsaruk gives hundreds of examples of Siemaszko's selective use of information concerning villages where Ukrainian civilians had earlier been murdered by Polish military units – such as Radekhiv, Mokrets, Zashkovychi, Volytsia, Koluna, Oryshchi, Zavydiv, and Rykovychi – which Siemaszko did not mention at all.
Władysław Siemaszko stated that Tsaruk is neither a historian nor a reliable source, and that his research is based on reports from locals gathered long after the war, while Siemaszko's sources were published and are widely available. He further stated that "almost every Ukrainian family in Volhynia was involved in the murder of Poles", and that "there is a desperate attempt to paint a completely false picture of the number of the alleged Ukrainian victims at the hands of Poles, explained by psychological defence mechanisms, which in some appear as the denial of crimes".
According to the Polish historian Grzegorz Motyka, Tsaruk's research did not change his view of the Volhynia events: the Ukrainian nationalists were responsible for beginning and escalating the massacres of Poles. Motyka points out that Tsaruk mistakenly blames Poles for crimes perpetrated by Germans. Another Polish historian, Grzegorz Hryciuk, writes that Tsaruk's work bears the marks of a "literary talent of the author which went too far".
Tsaruk's criticism of Polish historiography was echoed by Ihor Ilyushin, a prominent Ukrainian historian tasked with investigating the events in Volhynia. Ilyushin questions whether the Polish approach typified by Siemaszko can be scholarly, objective or impartial, and considers an approach flawed when it is based primarily on the testimony of one – the Polish – side. He also questions Siemaszko's credibility because Siemaszko was himself a participant at the height of the conflict. Ilyushin notes that Siemaszko's large tome used only three OUN-UPA documents and fails to make use of any Soviet or German materials. As an example of the inaccuracy to which Siemaszko's approach leads, Ilyushin describes a case discussed in Siemaszko's work in which, based on AK testimony, Siemaszko claimed that 9 Poles were killed by Ukrainian nationalists in September 1939; the same event is described in the NKVD archives in Ukraine, according to which the perpetrators were western Ukrainian communists.
== See also ==
Janowa Dolina massacre
== Notes ==
== References ==
(in English) Recent Polish Historiography on Polish-Ukrainian Relations during World War II and its Aftermath, Rafał Wnuk, Institute of National Remembrance, Lublin
(in Ukrainian) Грицьків, Роман (2003), "Польська Історіографія Українсько-Польського збройного конфлікту часів Другої Світової війни" by Roman Hrytskiv, published in the collection "Українсько-Польський конфлікт під час другої світової війни"; Book 2, Lviv.
(in Ukrainian) Грицьків, Роман (2012), Україна: культурна спадщина, національна свідомість, державність. Вип. 22: Українська повстанська армія в контексті національно-визвольної боротьби народів Центрально-Східної Європи / [гол. редколегії Микола Литвин]. НАН України, Інститут українознавства ім. І. Крип’якевича, Львів. С. 514-522.
(in Ukrainian) Грицьків, Р. (2008), Діяльність Української повстанської армії у висвітленні польської історіографії (період Польської народної республіки) / Р. Грицьків // Україна: культурна спадщина, національна свідомість, державність / [гол. редкол. Я. Ісаєвич], Львів. Вип. 16: Ювілейний збірник на пошану Івана Патера. – С. 517–527.
(in Polish) Motyka, Grzegorz (2003). "Antypolska akcja OUN-UPA w ukraińskiej historiografii" in "Antypolska akcja OUN-UPA 1943-1944. Fakty i interpretacje" (in Polish). Warszawa: IPN. pp. 141–146. ISBN 83-89078-09-0.
(in Polish) Ilyushin, Ihor (2009). UPA i AK. Konflikt w Zachodniej Ukrainie (1939-1945). Warszawa: Związek Ukraińców w Polsce. ISBN 978-83-928483-0-1.
The historiography of the War of 1812 reflects the numerous interpretations of the conflict, especially in reference to the war's outcome. The historical record has interpreted both the British and Americans as victors in the conflict, with substantial academic and popular literature published to support each claim.
The British viewed the War of 1812 as a minor theatre that was overshadowed by key victories at the Battle of Trafalgar in 1805 and the Battle of Waterloo in 1815, leading to the Pax Britannica. In the United States and Upper Canada, nationalistic mythology around the war took hold following its conclusion.
The failure of the American invasion of British Canada advanced the concept of a Canadian identity, and Canada remained a distinct region that would continue to evolve into a nation. Americans were able to enforce their sovereignty, and both the restoration of honor and what has been called the Second War of Independence are important themes in American historiography, considered significant results by historians. Indigenous nations are generally held to have lost the war.
== Popular views ==
Both Americans and Canadians widely believed that their own countries had won the war. Each young country saw its self-perceived victory and settling of the border between them as an important foundation of its growing nationhood. The British, preoccupied with Napoleon's challenge in Europe, paid little attention to what was to them a peripheral and secondary dispute.
One interpretation, especially in Canada, is that of a British or Canadian win and an American defeat. Another leading interpretation, held mainly in the United States, is that of an American win. In a 2012 interview with The Christian Science Monitor, Donald R. Hickey said: "By my count, we lost the War of 1812 and we lost Vietnam. That's not a widely held opinion in the United States about the War of 1812. The common view is that the war ended in a draw." According to Claire Turenne Sjolander, "Canadians are unified (because we participated in our diversity in the war under the British Crown, which is our real heritage) and we are distinct from the United States (because we won, and because we are British)." According to Troy Bickham, the American victory at New Orleans "did not have an impact on the war's outcome", but it shaped "how the Americans received the end of the war by creating the illusion of military victory."
== American views ==
While American popular memory includes the British capture and the August 1814 burning of Washington, which necessitated extensive renovation of the capital, it focuses on the victories at Baltimore, Plattsburgh, and New Orleans to present the war as a successful effort to assert American national honor – a Second War of Independence in which the mighty British Empire was humbled and humiliated. In keeping with this sentiment, there is a popularly held view that Britain had planned to annex Louisiana in 1815. The "beauty and booty" story, contrasting the amoral depravity of the British with the wholesome behavior of the Americans, sits at the center of most accounts of Jackson's victory at New Orleans. In a speech before the U.S. Congress on 18 February 1815, President James Madison proclaimed the war a complete American victory.
This interpretation was, and remains, the dominant American view of the war. The American newspaper Niles' Register announced in an editorial on 14 September 1816 that the Americans had crushed the British, declaring "we did virtually dictate the treaty of Ghent to the British." A minority of Americans, mostly associated with the Federalist Party, considered the war a defeat and an act of folly on Madison's part, caustically asking why the British Crown had not ceded British North America to the United States if the Americans had dictated the terms of the Treaty of Ghent. The Federalist view of the war is not the mainstream American memory of it. Congressman George Troup, who said in an 1815 speech that the Treaty of Ghent was "the glorious termination of the most glorious war ever waged by any people", expressed American popular opinion and memory of the war.
Americans also celebrated the successful defense of Fort McHenry in September 1814, which inspired the lyrics of what was adopted as the United States national anthem, "The Star-Spangled Banner". Captains of the United States Navy became popular heroes; commemorative plates bearing the likenesses of Stephen Decatur, Isaac Hull, and Charles Stewart became popular items, many of them manufactured in England. The navy became a cherished institution, lauded for the victories it won against all odds. The victory at New Orleans was mythically attributed to militiamen from the Southern states expertly wielding their rifles, just like their forefathers in the American Revolution – a notion that took hold in popular culture with the song "The Hunters of Kentucky", written about the battle.
After engagements during the final actions of the war, the United States Marines had acquired a reputation as marksmen, especially in ship-to-ship actions.
== Canadian views ==
In Upper Canada, the War of 1812 was seen by Loyalists as a victory, since they had successfully defended their country from an American takeover. A consequence of the Canadian militias' successes was the view that Canada did not need a regular professional army. While Canadian militias had played instrumental roles in several engagements, it was the regular units of the British Army that ensured the successful defense of Canada.
Several transgressions committed in Upper Canada by the US Army during the War of 1812 caused a "deep prejudice against the United States" to emerge in the colony after the conflict.
By the 21st century the war was largely forgotten in Britain, although still remembered in Canada, especially Ontario. In a 2009 poll, 37 per cent of Canadians said the war was a Canadian victory, nine per cent said the United States won, 15 per cent called it a draw, and 39 per cent said they knew too little to comment. A 2012 poll found that, in a list of items that could be used to define Canadians' identity, the belief that Canada successfully repelled an American invasion in the War of 1812 placed second (25 per cent). A decade after the debate spurred by J. L. Granatstein's 1998 book Who Killed Canadian History? and fears of "cancel culture", the Harper government spent 28 million Canadian dollars on bicentenary commemorative events.
== Historians' views ==
Most historians hold the view that the war ended militarily in a draw or stalemate, the Treaty of Ghent closing a war that had become militarily inconclusive. Neither side wanted to continue fighting, since the main causes had disappeared and there were no large lost territories for either side to reclaim by force. Historians conclude that all three nations were "the real winners" of the War of 1812. Historians also add that the war could have been avoided in the first place by better diplomacy. The war is seen as a mistake for everyone concerned because it was badly planned and marked by multiple failures on both sides, shown especially by the repeated American failures to seize parts of Canada and the failed British attacks on New Orleans and upstate New York.
Historians disagree on who won the War of 1812 and have debated its outcome for nearly two centuries. While most historians take the middle position that it was a draw, there are differing and complex interpretations of the war. A survey of school textbooks found that historians emphasize different aspects of the war according to their national narratives, with some British texts scarcely mentioning it. According to Donald R. Hickey, a popular interpretation is that "everyone was happy with the outcome. Americans were happy because they thought they had won; Canadians were happy because they knew they had won; and the British were happiest of all because they quickly forgot about the war. ... For the British, in other words, the return to the status quo ante bellum was a triumph, for it had demonstrated that they could defeat Napoleonic France in Europe while still fending off U.S. aggression in North America." Historians who believe that both sides won argue that their main objectives were achieved, as Britain defeated Napoleon and ruled the seas while the United States restored its independence and honor and opened the way to westward expansion. While historians like Wesley Turner hold that both sides won, another interpretation, held by those like Henry Adams, comes close to suggesting that both sides lost.
=== British Empire ===
Historians who hold that the war constituted a British victory and an American defeat argue that the British achieved their military objectives in 1812 by stopping the repeated American invasions of Canada and retaining their Canadian colonies. In contrast, the Americans suffered a defeat when their armies failed to achieve their war goal of seizing part or all of Canada. Additionally, they argue, the United States lost because it failed to stop impressment, which the British refused to repeal until the end of the Napoleonic Wars, and because American actions had no effect on the Orders in Council, which were rescinded before the war started. While acknowledging that the war is "usually seen as a draw", Brian Arthur argues that "it was in fact a British victory" because "the British achieved success through an effective commercial maritime blockade which had devastating consequences on the vulnerable, undeveloped US economy." Troy Bickham, author of The Weight of Vengeance: The United States, the British Empire, and the War of 1812, sees the British as having fought their way to a much stronger position than the United States.
According to Carl Benn, "[t]he main objective of keeping Canada had been met as of 1814", while "Americans realized that their own objectives in going to war could not be achieved, and thought the best they could probably get was the preservation of the status quo that they had been fighting so hard to upset." According to Andrew Lambert, "Americans began to rewrite the war as a victory, exploiting the ambiguity of the diplomatic settlement achieved in the Treaty of Ghent on 24 December 1814, a status quo ante compromise that did not reflect the depth of America's defeat." For Jon Latimer, "Britain was content to settle for the 1812 status quo, and that is what Britain got. The United States, in contrast, achieved none of its war aims, and in these terms, the War of 1812 must be seen as a British victory, however marginal." Bickham considers that the British offered the United States generous terms in place of their initially harsh terms, which included massive forfeiture of land to Canada and the Native Americans, because the "reigning Liverpool ministry in Britain held a loose grip on power and feared the war-weary, tax-exhausted public." For Bickham, the war was also technically a British victory "because the United States failed to achieve the aims listed in its declaration of war."
G. M. Trevelyan evaluated the war in negative terms for Britain. He stressed the long-term damage to what has been called the Special Relationship between Britain and the United States, and wrote: "The self-defense of the two Canadas against invasion, and the historical traditions that the infant nation thus acquired were an important result of the war. Otherwise, it had been fought in vain. It solved none of the disputed questions out of which it arose." According to Trevelyan, "the anti-British tradition had obtained a fresh lease of life in the United States, whose orators now had the theme of a second war against Britain as the second romantic period of their national history. The Tory Cabinet cannot be praised for the management of affairs that led to this breach of the peace."
=== United States ===
Historians who believe that the war was an American success argue that the main motivation was restoring the nation's honor in the face of relentless British aggression against American neutral rights on the high seas and in the Western lands. According to Norman K. Risjord, the results in terms of honor satisfied the War Hawks. Donald R. Hickey writes: "Did the cost in blood and treasure justify the U.S. decision to go to war? Most Republicans thought it did. In the beginning, they called the contest a 'second war of independence', and while Britain's maritime practices never truly threatened the Republic's independence, the war did in a broad sense vindicate U.S. sovereignty. But it ended in a draw on the battlefield." Historians argue that it was an American success to end the threat of indigenous raids, to kill the British plan for a semi-independent Native American sanctuary, and thereby to open an unimpeded path for westward expansion. Winston Churchill commented: "The lessons of the war were taken to heart. Anti-American feeling in Great Britain ran high for several years, but the United States was never again refused proper treatment as an independent power." George C. Daughan argues that the United States achieved enough of its war goals to claim a victorious outcome to the conflict, given its impact on the subsequent negotiations at Ghent. Daughan cites official correspondence from James Madison to the delegates at Ghent strictly prohibiting negotiations with regard to maritime law.
For Daughan, the British permanently stopped impressing Americans, although they never publicly renounced the possibility of resuming the practice; the American delegates at the meeting understood it to be a dead issue after the 1814 surrender of Napoleon. In addition, the successful defenses of Baltimore, Plattsburgh and Fort Erie (a strategic fortress on the Niagara River in Upper Canada, occupied during the third and most successful offensive into Canada) had a very favorable influence on the negotiations for the Americans and prompted several famous responses from both sides. Henry Clay wrote to the delegates in October 1814, "for in our own country, my dear sir, at last, must we conquer the peace." With pressure growing in Britain, Arthur Wellesley, 1st Duke of Wellington, when asked to command the forces in the United States, wrote to Robert Jenkinson, 2nd Earl of Liverpool, on 9 November 1814: "I confess that I think you have no right, from the state of the war, to demand any concession of territory from America. ... You have not been able to carry [the war] ... into the enemy's territory, notwithstanding your military success and now undoubted military superiority, and have not even cleared your own territory on the point of attack [at Fort Erie]. ... Why stipulate for uti possidetis?" Daughan argues that the claim that the United States' failure to capture any Canadian territory weakened its hand in the negotiations is an outdated and heavily criticized position. He cites the Edinburgh Review, a British periodical that had remained silent about the war with the United States for two years, as writing that "the British government had embarked on a war of conquest after the American government had dropped its maritime demands, and the British had lost. It was folly to attempt to invade and conquer the United States. To do so would result in the same tragedy as the first war against them, and with the same result."
=== Canada ===
The militia myth of Canadian victory was created by the conservative elites of Upper Canada, such as the Family Compact, long after the war ended. Most people in Upper Canada were late Loyalists – economic migrants from the United States – and the United Empire Loyalists were not a distinct group; about ten per cent of the Loyalists were former slaves, and most residents did not care who won the war and did not participate in it. The Family Compact disenfranchised most residents of Upper Canada after the war, with the idea of loyalty being used to justify the suppression of dissent. According to historians such as David Mills, the myth was invented for immigrants who arrived after the end of the Napoleonic Wars. American spelling, which had been standard in the province, was rejected in favor of British spelling, and the local population began to call themselves Canadians. The militia myth of a Canada able to defend itself without a standing army came to be treated politically as fact.
=== Indigenous nations ===
Historians generally agree that the real losers of the War of 1812 were the indigenous nations. Tecumseh's Confederacy, which had joined the British side, was defeated and its leader, Tecumseh, killed in battle. Donald R. Hickey wrote: "The big losers in the war were the Indians. As a proportion of their population, they had suffered the heaviest casualties. Worse, they were left without any reliable European allies in North America. ... The crushing defeats at the Thames and Horseshoe Bend left them at the mercy of the Americans, hastening their confinement to reservations and the decline of their traditional way of life."
The indigenous nations of the Old Northwest (the modern Midwest) had hoped to create an indigenous state as a British protectorate. American settlers in the Midwest had been repeatedly blocked and threatened by indigenous raids before 1812, and that now came to an end. Throughout the war, the British had played on the terror of the tomahawks and scalping knives of their indigenous allies; it worked especially well at William Hull's surrender at Detroit. By 1813, Americans had killed Tecumseh and broken his coalition of tribes. Andrew Jackson then defeated the Creeks in the Southwest. Historian John Sugden says that in both theaters the indigenous nations' strength had been broken before the arrival of the major British forces in 1814. The one campaign that the Americans had decisively won was the campaign in the Old Northwest, which left the British with a weak hand when insisting upon an indigenous nations' state there.
Notwithstanding sympathy and support from commanders such as Isaac Brock, Alexander Cochrane and Edward Nicolls, the policymakers in London reneged on this promise, as making peace had a higher priority for the politicians. At the peace conference, the British demanded an independent indigenous state in the Midwest, but although the British and their indigenous allies maintained control over the territories in question (i.e. most of the Upper Midwest), British diplomats did not press the demand after an American refusal, effectively abandoning their allies. The withdrawal of British protection gave the Americans a free hand, which resulted in the removal of most of the tribes to Indian Territory (present-day Oklahoma). According to historian Alan Taylor, the final victory at New Orleans had in that sense "enduring and massive consequences": it gave the Americans "continental predominance" while leaving the indigenous nations dispossessed, powerless, and vulnerable.
The Treaty of Ghent technically required the United States to cease hostilities and "forthwith to restore to such Tribes or Nations respectively all possessions, rights, and privileges which they may have enjoyed, or been entitled to in 1811." The United States ignored this article of the treaty and proceeded to expand into this territory regardless. Meanwhile, Britain was unwilling to provoke a further war to enforce it. A shocked Henry Goulburn, one of the British negotiators at Ghent, remarked: "Till I came here, I had no idea of the fixed determination which there is in the heart of every American to extirpate the Indians and appropriate their territory."
The Creek War came to an end with the Treaty of Fort Jackson, which was imposed upon the indigenous nations. About half of the Creek territory was ceded to the United States, with no payment made to the Creeks. The treaty was in theory invalidated by Article 9 of the Treaty of Ghent, but the British failed to press the issue and did not take up the indigenous cause as an infringement of an international treaty. Without this support, the indigenous nations' lack of power was apparent, and the stage was set for further American incursions into their territory in the subsequent decades.
== Notes ==
== References ==
== Bibliography ==
The historiography of religion concerns how historians have studied religion in terms of themes, sources and conflicting ideas. Historians typically focus on one particular topic in the overall history of religions, defined by geographical area or by theological tradition.
Historians for centuries focused on the theological developments of their own religious heritage. Social scientists in the 19th century took a strong interest in "primitive" and comparative religion. In the 20th century, the field focused mostly on theology and church organization and development. Since the 1970s the social history approach to religious behavior and belief has become important.
== Reformation ==
Jacob argues that there has been a dramatic shift in the historiography of the Reformation. Until the 1960s, historians focused their attention largely on 16th-century theologians such as Martin Luther, John Calvin, and Huldrych Zwingli, whose ideas were studied in depth. However, the new social history that arose in the 1960s looked at history from the bottom up, not from the top down. Historians began to concentrate on the values, beliefs and behavior of the people at large. She finds that "in contemporary scholarship, the Reformation is now seen as a vast cultural upheaval, a social and popular movement, textured and rich because of its diversity".
== 1700–1900 ==
New approaches to the history of Christianity were introduced by Leopold von Ranke (1795–1886) and Philip Schaff (1819–92). They emphasized the need for more neutrality, with the goal of understanding history as it actually happened rather than promoting or defending one's theological heritage. Von Ranke finished his six-volume German History in the Epoch of the Reformation in 1843, then turned to a multivolume History of the Popes during the Last Four Centuries. Schaff, deeply schooled in the German tradition, relocated to the United States in 1844. His History of the Christian Church (7 vols., 1858–1890) set new standards for the American study of ecclesiastical history. He demonstrated how to integrate liturgical developments. He also introduced European scholars to American religion, arguing that American sectarianism, with all its faults, was preferable to European church-statism.
=== Pietism and benevolence ===
Pietism originated in 18th-century Germany and was emulated in neighboring countries. It had a major impact in England and North America, where it affected the Methodist movement and a series of revival outbursts known as the Great Awakening in the United States. It involved an intense internal focus on sin and salvation through Christ and, in the form of evangelicalism, remains a powerful force in Protestantism well into the 21st century. Pietism emphasized the value of revivals leading to the born-again experience, and inspired its followers to set high moralistic standards for public behavior, in such areas as opposition to alcohol and slavery.
Historians have explored the impact of the new religious sentiments of the 18th and 19th centuries on the organizational behavior of laymen. Protestants sponsored voluntary charitable and religious societies, including overseas missions throughout the empire, setting up Sunday schools, founding charity schools, distributing Bibles and devotional literature, creating and emphasizing hymns and communal singing, and setting up revivals. A major result was the establishment of an international battle against slavery as an affront to Protestant morality.
=== Comparative studies ===
Social scientists in the 19th century took a strong interest in comparative and "primitive" religion through the work of Max Müller, Edward Burnett Tylor, William Robertson Smith, James George Frazer, Émile Durkheim, Max Weber, and Rudolf Otto.
== 20th century ==
Hartmut Lehmann argues that four basic themes dominated the history of Christianity during the 20th century: the rise of "political religions", drastic technological changes, progressive secularization, and the impressive growth of Christian communities in the Southern Hemisphere.
=== Secularization ===
Secularization, the steady decline in religious activity in historically Protestant countries of Europe, has been an important field of study.
=== Missions and expansion ===
Much recent research focuses on the expansion of Christianity throughout the developing world. The Protestant and Catholic churches, starting from their strongholds in the European colonial powers, propagated throughout the third world in the 20th century – especially in Africa. For example, Nigeria has far more Anglicans than does Great Britain. Missionaries, especially from the United States, promoted Mormonism, Jehovah's Witnesses, Seventh-day Adventists, and holiness and Pentecostal denominations to highly receptive audiences.
== Islam ==
The historiography of Islam grew slowly in the 20th century, and, since the 1980s, has become a major project for scholars.
== See also ==
Historiography
Historiography of early Islam
History of religion in the United States
Religious studies
Social history
== Notes ==
== Further reading ==
=== World ===
Kippenberg, Hans. Discovering Religious History in the Modern Age (2001)
McLeod, Hugh. "Religion and the City", Urban History Yearbook (1978) pp 7–22. Reviews studies of religion in the cities of Europe and America, 1820s–1970s.
Ranger, T. O. and Isaria Kimambo. The Historical Study of African Religion (University of California Press, 1972)
Sanneh, Lamin, and Michael McClymond, eds. The Wiley Blackwell Companion to World Christianity (2016).
=== Europe ===
Bebbington, David W. Evangelicalism in Modern Britain: A History from the 1730s to the 1980s (Routledge, 2003)
Brown, Callum G. The Social History of Religion in Scotland Since 1730 (Methuen, 1987)
Davie, Grace. Religion in Britain since 1945: Believing Without Belonging (Blackwell, 1994)
Gajano, Sofia Boesch. "Italian Religious Historiography in the 1990s", Journal of Modern Italian Studies (1998) 3#3 pp 293–306
Hsia, Po-Chia, ed. A Companion to the Reformation World (2006)
Levitin, Dmitri. "From Sacred History to the History of Religion: Paganism, Judaism, and Christianity in European Historiography from Reformation to 'Enlightenment'", Historical Journal (2012) 55#4 pp 1117–1160
MacCulloch, Diarmaid; Laven, Mary; Duffy, Eamon. "Recent Trends in the Study of Christianity in Sixteenth-Century Europe", Renaissance Quarterly (2006) 58#3 pp 697–731
Marshall, Peter. "(Re)defining the English Reformation", Journal of British Studies (2009) 48#3 pp 564–586
Menchi, Silvana Seidel. "The Age of Reformation and Counter-Reformation in Italian Historiography, 1939–2009", Archiv für Reformationsgeschichte (2009) Vol. 100, pp 193–217
Thomas, Keith. Religion and the Decline of Magic: Studies in Popular Beliefs in Sixteenth and Seventeenth-century England (1991)
Weber, Eugen. "Religion and Superstition in Nineteenth-Century France", Historical Journal (1988) 31#2 pp. 399–423
=== America ===
Chappel, James. "Beyond Tocqueville: A Plea to Stop 'Taking Religion Seriously'", Modern Intellectual History (2013) 10#9 pp 697–708
Dolan, Jay P., and James P. Wind, eds. New Dimensions in American Religious History: Essays in Honor of Martin E. Marty (Eerdmans, 1993)
Fox, Richard Wightman. "Experience and Explanation in Twentieth-Century American Religious History", in New Directions in American Religious History, ed. Harry Stout and D. G. Hart (1997) pp 394–413
Frey, Sylvia R. "The Visible Church: Historiography of African American Religion since Raboteau", Slavery & Abolition (2008) 29#1 pp 83–110
McGreevy, John T. "Faith and Morals in the Modern United States, 1865–Present", Reviews in American History (1998) 26#1 pp 239–254
Schultz, Kevin M.; Harvey, Paul. "Everywhere and Nowhere: Recent Trends in American Religious History and Historiography", Journal of the American Academy of Religion (March 2010) 78#1 pp 129–162
Smith, Timothy L. "Religion and Ethnicity in America", American Historical Review (1978) pp 1155–1185
Stout, Harry S., and D. G. Hart, eds. New Directions in American Religious History (1997)
Sweet, Leonard I., ed. Communication and Change in American Religious History (1993), pp 355–479
Wilson, John F. Religion and the American Nation: Historiography and History (2003) 119pp
The great man theory is an approach to the study of history popularised in the 19th century according to which history can be largely explained by the impact of great men, or heroes: highly influential and unique individuals who, due to their natural attributes, such as superior intellect, heroic courage, extraordinary leadership abilities, or divine inspiration, have a decisive historical effect. The theory is primarily attributed to the Scottish essayist, historian, and philosopher Thomas Carlyle, who gave a series of lectures on heroism in 1840, later published as On Heroes, Hero-Worship, & the Heroic in History, in which he states:
Universal History, the history of what man has accomplished in this world, is at bottom the History of the Great Men who have worked here. They were the leaders of men, these great ones; the modellers, patterns, and in a wide sense creators, of whatsoever the general mass of men contrived to do or to attain; all things that we see standing accomplished in the world are properly the outer material result, the practical realisation and embodiment, of Thoughts that dwelt in the Great Men sent into the world: the soul of the whole world's history, it may justly be considered, were the history of these.
This theory is usually contrasted with "history from below", which emphasizes the life of the masses creating overwhelming waves of smaller events which carry leaders along with them. Another contrasting school is historical materialism.
== Overview ==
Carlyle stated that "The History of the world is but the Biography of great men", reflecting his belief that heroes shape history through both their personal attributes and divine inspiration. In his book Heroes and Hero-Worship, Carlyle saw history as having turned on the decisions, works, ideas, and characters of "heroes", giving detailed analysis of six types: The hero as divinity (such as Odin), prophet (such as Muhammad), poet (such as Shakespeare), priest (such as Martin Luther), man of letters (such as Rousseau), and king (such as Napoleon). Carlyle also argued that the study of great men was "profitable" to one's own heroic side; that by examining the lives led by such heroes, one could not help but uncover something about one's own true nature.
As Sidney Hook notes, a common misinterpretation of the theory is that "all factors in history, save great men, were inconsequential", whereas Carlyle is instead claiming that great men are the decisive factor, owing to their unique genius. Hook then goes on to emphasize this uniqueness to illustrate the point: "Genius is not the result of compounding talent. How many battalions are the equivalent of a Napoleon? How many minor poets will give us a Shakespeare? How many run of the mine scientists will do the work of an Einstein?"
American scholar Frederick Adams Woods supported the great man theory in his work The Influence of Monarchs: Steps in a New Science of History. Woods investigated 386 rulers in Western Europe from the 12th century until the French Revolution in the late 18th century and their influence on the course of historical events.
The Great Man approach to history was most fashionable with professional historians in the 19th century; a popular work of this school is the Encyclopædia Britannica Eleventh Edition (1911) which contains lengthy and detailed biographies about the great men of history, but very few general or social histories. For example, all information on the post-Roman "Migrations Period" of European History is compiled under the biography of Attila the Hun. This heroic view of history was also strongly endorsed by some philosophers, such as Léon Bloy, Søren Kierkegaard, Oswald Spengler and Max Weber.
Georg Wilhelm Friedrich Hegel, proceeding from providentialist theory, argued that "what is real is reasonable" and World-Historical individuals are World-Spirit's agents. Hegel wrote: "Such are great historical men—whose own particular aims involve those large issues which are the will of the World-Spirit." Thus, according to Hegel, a great man does not create historical reality himself but only uncovers the inevitable future.
In Untimely Meditations, Friedrich Nietzsche writes that "the goal of humanity lies in its highest specimens". Although Nietzsche's body of work shows some overlap with Carlyle's line of thought, Nietzsche expressly rejected Carlyle's hero cult in Ecce Homo.
=== Assumptions ===
This theory rests on two main assumptions, as pointed out by Villanova University:
Every great leader is born already possessing certain traits that will enable them to rise and lead on instinct.
The need for them must be great; only then do these traits arise, allowing them to lead.
This theory casts great leaders as heroes who were able to rise against the odds to defeat rivals while inspiring followers along the way. Theorists hold that these leaders were born with a specific set of traits and attributes that made them ideal candidates for leadership and for roles of authority and power. The theory thus relies heavily on the idea that leaders are born rather than made, on nature rather than nurture, and it cultivates the notion that those in power deserve to lead and should not be questioned because they have the unique traits that suit them for the position.
== Responses ==
=== Herbert Spencer's critique ===
One of the most forceful critics of Carlyle's formulation of the great man theory was Herbert Spencer, who believed that attributing historical events to the decisions of individuals was an unscientific position. He believed that the men Carlyle supposed "great men" are merely products of their social environment:
You must admit that the genesis of a great man depends on the long series of complex influences which has produced the race in which he appears, and the social state into which that race has slowly grown. ... Before he can remake his society, his society must make him.
=== William James' defence ===
William James, in his 1880 lecture "Great Men, Great Thoughts, and the Environment", published in the Atlantic Monthly, forcefully defended Carlyle and refuted Spencer, condemning what James viewed as an "impudent", "vague", and "dogmatic" argument.
James' defence of the great man theory can be summarized as follows: the unique physiological nature of the individual is the deciding factor in making the great man, who in turn is the deciding factor in changing his environment in a unique way; without him, the new environment would not have come to be; and the extent and nature of this change also depends on how receptive the environment is to this new stimulus. To begin his argument, James sardonically claims that these inherent physiological qualities have as much to do with "social, political, geographical [and] anthropological conditions" as the "conditions of the crater of Vesuvius has to do with the flickering of this gas by which I write".
James argues that genetic anomalies in the brains of these great men are the decisive factor by introducing an original influence into their environment. They might therefore offer original ideas, discoveries, inventions and perspectives which "would not, in the mind of another individual, have engendered just that conclusion ... It flashes out of one brain, and no other, because the instability of that brain is such as to tip and upset itself in just that particular direction."
James then argues that these spontaneous variations of genius, i.e. the great men, which are causally independent of their social environment, subsequently influence that environment which in turn will either preserve or destroy the newly encountered variations in a form of evolutionary selection. If the great man is preserved then the environment is changed by his influence in "an entirely original and peculiar way. He acts as a ferment, and changes its constitution, just as the advent of a new zoological species changes the faunal and floral equilibrium of the region in which it appears." Each ferment, each great man, exerts a new influence on their environment which is either embraced or rejected and if embraced will in turn shape the crucible for the selection process of future geniuses.
In the words of William James, "If we were to remove these geniuses or alter their idiosyncrasies, what increasing uniformities would the environment exhibit?" James challenges Mr. Spencer or anyone else to provide a reply. According to James, there are two distinct factors driving social evolution: personal agents and the impact of their unique qualities on the overall course of events.
He thus concludes: "Both factors are essential to change. The community stagnates without the impulse of the individual. The impulse dies away without the sympathy of the community."
=== Other responses ===
Well before the 19th century, Blaise Pascal opened his Three Discourses on the Condition of the Great (written, it seems, for a young duke) with the story of a castaway on an island whose inhabitants take him for their missing king. In this parable of the shipwrecked king, Pascal argues that the legitimacy of the greatness of great men rests fundamentally on custom and chance: a coincidence that gives a man birth in the right place to noble parents, and arbitrary custom deciding, for example, on an unequal distribution of wealth in favor of the nobles.
Leo Tolstoy's War and Peace features criticism of great-man theories as a recurring theme in the philosophical digressions. According to Tolstoy, the significance of great individuals is imaginary; as a matter of fact they are only "history's slaves," realizing the decree of Providence.
Jacob Burckhardt affirmed the historical existence of great men in politics, even excusing the rarity among them to possess "greatness of soul", or magnanimity: "Contemporaries believe that if people will only mind their own business political morality will improve of itself and history will be purged of the crimes of the 'great men.' These optimists forget that the common people too are greedy and envious and when resisted tend to turn to collective violence." Burckhardt predicted that the belittling of great men would lead to a lowering of standards and rise in mediocrity generally.
Mark Twain suggests in his essay "The United States of Lyncherdom" that "moral cowardice" is "the commanding feature of the make-up of 9,999 men in the 10,000" and that "from the beginning of the world no revolt against a public infamy or oppression has ever been begun but by the one daring man in the 10,000, the rest timidly waiting, and slowly and reluctantly joining, under the influence of that man and his fellows from the other ten thousands."
In 1926, William Fielding Ogburn noted that Great Men history was being challenged by newer interpretations that focused on wider social forces. While not seeking to deny that individuals could have a role or show exceptional qualities, he saw Great Men as inevitable products of productive cultures. He noted, for example, that if Isaac Newton had not lived, calculus would still have been discovered by Gottfried Leibniz, and suspected that if neither man had lived, it would have been discovered by someone else. Among modern commentators on the theory, Sidney Hook is supportive of the idea; he gives credit to those who shape events through their actions, and his book The Hero in History is devoted to the role of the hero in history and the influence of outstanding persons.
In the introduction to a new edition of Heroes and Hero-Worship, David R. Sorensen notes the modern decline in support for Carlyle's theory in particular but also for "heroic distinction" in general. He cites Robert K. Faulkner as an exception, a proponent of Aristotelian magnanimity who in his book The Case for Greatness: Honorable Ambition and Its Critics, criticizes the political bias in discussions on greatness and heroism, stating: "the new liberalism’s antipathy to superior statesmen and to human excellence is peculiarly zealous, parochial, and antiphilosophic."
Ian Kershaw wrote in 1998 that "The figure of Hitler, whose personal attributes – distinguished from his political aura and impact – were scarcely noble, elevating or enriching, posed self-evident problems for such a tradition." Some historians like Joachim Fest responded by arguing that Hitler had a "negative greatness". By contrast, Kershaw rejects the Great Men theory and argues that it is more important to study wider political and social factors to explain the history of Nazi Germany. Kershaw argues that Hitler was an unremarkable person, but his importance came from how people viewed him, an example of Max Weber's concept of charismatic leadership.
== See also ==
== Bibliography ==
Bentley, Eric (1944). A Century of Hero-Worship: A study of the idea of heroism in Carlyle and Nietzsche, with notes on Wagner, Spengler, Stefan George, and D.H. Lawrence (Second, revised and reset ed.). Boston: Beacon Press (published 1957).
Harrold, Charles Frederick (1934). "Carlyle and Heroes". Carlyle and German Thought, 1819–1834. New Haven: Yale University Press. pp. 180–196.
Lehman, B. H. (1928). Carlyle's Theory of the Hero: Its Sources, Development, History, and Influence on Carlyle's Work. Durham, N.C.: Duke University Press. hdl:2027/mdp.39015008382213.
== References ==
== External links ==
"Twilight of the Idols", by Peter Dizikes, from The New York Times, November 5, 2006. "Do changes in science mean the traditional great-man science biography is going the way of the dodo?" | Wikipedia/Great_man_theory |
The first tentative efforts to comprehend the meaning and consequences of modern warfare began during the initial phases of World War I; this process continued throughout and after the end of hostilities, and is still underway more than a century later. Teaching World War I has presented special challenges. When compared with World War II, the First World War is often thought to be "a wrong war fought for the wrong reasons"; it lacks the metanarrative of good versus evil that characterizes retellings of the Second World War. Lacking recognizable heroes and villains, it is often taught thematically, invoking tropes like the wastefulness of war, the folly of generals and the innocence of soldiers. The complexity of the conflict is mostly obscured by these oversimplifications. George Kennan referred to the war as the "seminal catastrophe of the 20th century".
Historian Heather Jones argues that the historiography has been reinvigorated by a cultural turn in the 21st century. Scholars have raised entirely new questions regarding military occupation, radicalisation of politics, race, medical science, gender and mental health. Major subjects that historians have long debated regarding the war include: why the war began; why the Allies won; whether generals were responsible for high casualty rates; how soldiers endured the poor conditions of trench warfare; and to what extent the civilian home front accepted and endorsed the war effort.
== Causes of the war ==
=== "Web of alliances" narratives ===
Although general narratives of the war tend to emphasize the importance of alliances in binding the major powers to act in the event of a crisis such as the July Crisis, historians such as Margaret MacMillan warn against the argument that alliances forced the Great Powers to act as they did: "What we tend to think of as fixed alliances before the First World War were nothing of the sort. They were much more loose, much more porous, much more capable of change."
The most important alliances in Europe required participants to agree to collective defence if they were attacked. Some represented formal alliances, but the Triple Entente represented only a frame of mind:
German-Austrian Treaty (1879)
The Franco-Russian Alliance (1894)
The addition of Italy to the German–Austrian alliance in 1882, forming the Triple Alliance
Treaty of London, 1839, guaranteeing the neutrality of Belgium
There are three notable exceptions that demonstrate that alliances did not in themselves force the great powers to act:
The Entente Cordiale between Britain and France in 1904 included a secret agreement that left the northern coast of France and the English Channel to be defended by the British Royal Navy, and the separate "entente" between Britain and Russia (1907) formed the so-called Triple Entente. However, the Triple Entente did not, in fact, force Britain to mobilise because it was not a military treaty.
Moreover, general narratives of the war regularly misstate that Russia was allied to Serbia. Clive Ponting noted: "Russia had no treaty of alliance with Serbia and was under no obligation to support it diplomatically, let alone go to its defence."
Italy, despite being part of the Triple Alliance, did not enter the war to defend the Triple Alliance partners.
== Cultural memory in the United Kingdom ==
World War I had a lasting impact on collective memory of the United Kingdom. It was seen by many in Britain as signalling the end of an era of stability stretching back to the Victorian period, and across Europe many regarded it as a watershed. Historian Samuel Hynes explained:
A generation of innocent young men, their heads full of high abstractions like Honour, Glory and England, went off to war to make the world safe for democracy. They were slaughtered in stupid battles planned by stupid generals. Those who survived were shocked, disillusioned and embittered by their war experiences, and saw that their real enemies were not the Germans, but the old men at home who had lied to them. They rejected the values of the society that had sent them to war, and in doing so separated their own generation from the past and from their cultural inheritance.
This has become the most common perception of World War I, perpetuated by the art, cinema, poems, and stories published subsequently. Films such as All Quiet on the Western Front, Paths of Glory and King and Country have perpetuated the idea, while war-time films including Camrades, Poppies of Flanders, and Shoulder Arms indicate that contemporary views of the war were, overall, far more positive. Likewise, the art of Paul Nash, John Nash, Christopher Nevinson, and Henry Tonks in Britain painted a negative view of the conflict in keeping with the growing perception, while popular war-time artists such as Muirhead Bone painted more serene and pleasant interpretations subsequently rejected as inaccurate. Several historians, such as John Terraine, Niall Ferguson and Gary Sheffield, have challenged these interpretations as partial and polemical views:

These beliefs did not become widely shared because they offered the only accurate interpretation of wartime events. In every respect, the war was much more complicated than they suggest. In recent years, historians have argued persuasively against almost every popular cliché of World War I. It has been pointed out that, although the losses were devastating, their greatest impact was socially and geographically limited. The many emotions other than horror experienced by soldiers in and out of the front line, including comradeship, boredom, and even enjoyment, have been recognised. The war is not now seen as a 'fight about nothing', but as a war of ideals, a struggle between aggressive militarism and more or less liberal democracy. It has been acknowledged that British generals were often capable men facing difficult challenges and that it was under their command that the British army played a major part in the defeat of the Germans in 1918: a great forgotten victory.

Though these views have been discounted as "myths", they are common. They have changed dynamically according to contemporary influences, reflecting in the 1950s perceptions of the war as "aimless" following the contrasting Second World War, and emphasising conflict within the ranks during times of class conflict in the 1960s. Additions to the contrary are often rejected. Writers such as Ernest Hemingway wrote many stories on the experiences of veterans after the war, such as the short story "Soldier's Home", about young veteran Harold Krebs trying to integrate back into society.
== Discontent in Germany and Austria ==
The rise of Nazism and fascism included a revival of the nationalist spirit and a rejection of many post-war changes. Similarly, the popularity of the stab-in-the-back legend (German: Dolchstoßlegende) was a testament to the psychological state of defeated Germany, and was a rejection of responsibility for the conflict. This conspiracy theory of the betrayal of the German war effort by Jews became common, and the German populace came to see themselves as victims. The widespread acceptance of the "stab-in-the-back" myth delegitimised the Weimar government and destabilised the system, opening it to extremes of right and left. The same occurred in Austria, which did not consider itself responsible for the outbreak of the war, and claimed not to have suffered a military defeat.
== Enabling the rise of totalitarianism ==
Communist and fascist movements around Europe drew strength from the societal upheaval caused by the war, and enjoyed new levels of popularity. These feelings were most pronounced in areas directly or harshly affected by the war where centuries-old royal dynasties were toppled, such as the Weimar Republic (1918–1933), the Russian Civil War (1917–1923), and the successor states of Austria-Hungary. Adolf Hitler was able to gain popularity by using German discontent with the still controversial Treaty of Versailles. World War II was, in part, a continuation of the power struggle never fully resolved by World War I. Furthermore, it was common for Germans in the 1930s to justify acts of aggression due to perceived injustices imposed by the victors of World War I. American historian William Rubinstein wrote:

The 'Age of Totalitarianism' included nearly all the infamous examples of genocide in modern history, headed by the Jewish Holocaust, but also comprising the mass murders and purges of the Communist world, other mass killings carried out by Nazi Germany and its allies, and also the Armenian Genocide of 1915. All these slaughters, it is argued here, had a common origin, the collapse of the elite structure and normal modes of government of much of central, eastern and southern Europe as a result of World War I, without which surely neither Communism nor Fascism would have existed except in the minds of unknown agitators and crackpots.
== Social trauma ==
The social trauma caused by unprecedented rates of casualties manifested itself in different ways, which have been the subject of subsequent historical debate. Over 8 million Europeans died in the war. Millions suffered permanent disabilities. The war gave birth to fascism and Bolshevism and toppled the centuries-old dynasties that had ruled the Ottoman, Habsburg, Russian and German Empires.
The optimism of la belle époque was destroyed, and those who had fought in the war were referred to as the Lost Generation. For years afterward, people mourned the dead, the missing, and the many disabled. Many soldiers returned with severe trauma, suffering from shell shock (also called neurasthenia, a condition related to post-traumatic stress disorder). Many more returned home with few after-effects; however, their silence about the war contributed to the conflict's growing mythological status. Though many participants did not share in the experiences of combat or spend any significant time at the front, or had positive memories of their service, the images of suffering and trauma became the widely shared perception. Historians such as Dan Todman, Paul Fussell, and Samuel Hynes have all published works since the 1990s arguing that these common perceptions of the war are factually incorrect.
== See also ==
Bibliography of World War I § Historiography and memory
Historiography of World War II
World War I in popular culture
The "Air Service Boys" series of juvenile books (six volumes, 1918–1920)
== References ==
=== Bibliography ===
Baker, Kevin (June 2006). "Stabbed in the Back! The past and future of a right-wing myth". Harper's Magazine.
Chickering, Roger (2004). Imperial Germany and the Great War, 1914–1918. Cambridge: Cambridge University Press. ISBN 978-0-521-83908-2. OCLC 55523473.
Ponting, Clive (2002). Thirteen Days: The Road to the First World War. Chatto & Windus. ISBN 978-0-7011-7293-0.
Roden, Mike. "The Lost Generation – myth and reality". Aftermath – when the Boys Came Home. Archived from the original on 1 December 2009. Retrieved 13 April 2022.
Todman, Dan (2005). The Great War: Myth and Memory. A & C Black. ISBN 978-0-8264-6728-7.
Tucker, Spencer C.; Roberts, Priscilla Mary (2005). Encyclopedia of World War I. Santa Barbara: ABC-Clio. ISBN 978-1-85109-420-2. OCLC 61247250.
Wohl, Robert (1979). The Generation of 1914 (3rd ed.). Harvard University Press. ISBN 978-0-674-34466-2.
A hagiography (; from Ancient Greek ἅγιος, hagios 'holy' and -γραφία, -graphia 'writing') is a biography of a saint or an ecclesiastical leader, as well as, by extension, an adulatory and idealized biography of a preacher, priest, founder, saint, monk, nun or icon in any of the world's religions. Early Christian hagiographies might consist of a biography or vita (from Latin vita, life, which begins the title of most medieval biographies), a description of the saint's deeds or miracles, an account of the saint's martyrdom (called a passio), or be a combination of these.
Christian hagiographies focus on the lives, and notably the miracles, ascribed to men and women canonized by the Roman Catholic church, the Eastern Orthodox Church, the Oriental Orthodox churches, and the Church of the East. Other religious traditions such as Buddhism, Hinduism, Taoism, Islam, Sikhism and Jainism also create and maintain hagiographical texts (such as the Sikh Janamsakhis) concerning saints, gurus and other individuals believed to be imbued with sacred power.
However, when referring to modern, non-ecclesiastical works, the term hagiography is often used today as a pejorative reference to biographies and histories whose authors are perceived to be uncritical or excessively reverential toward their subject.
Hagiographic works, especially those of the Middle Ages, can incorporate a record of institutional and local history, and evidence of popular cults, customs, and traditions.
== Christianity ==
=== Development ===
Hagiography constituted an important literary genre in the early Christian church, providing some informational history along with the more inspirational stories and legends. A hagiographic account of an individual saint could consist of a biography (vita), a description of the saint's deeds or miracles, an account of the saint's martyrdom (passio), or be a combination of these.
The genre of lives of the saints first came into being in the Roman Empire as legends about Christian martyrs were recorded. The dates of their deaths formed the basis of martyrologies. In the 4th century, there were three main types of catalogs of lives of the saints:
annual calendar catalogue, or menaion (in Greek, μηναῖον, menaion means "monthly" (adj, neut), lit. "lunar"), biographies of the saints to be read at sermons;
synaxarion ("something that collects"; Greek συναξάριον, from σύναξις, synaxis i.e. "gathering", "collection", "compilation"), or a short version of lives of the saints, arranged by dates;
paterikon ("that of the Fathers"; Greek πατερικόν; in Greek and Latin, pater means "father"), or biography of the specific saints, chosen by the catalog compiler.
The earliest lives of saints focused on desert fathers who lived as ascetics from the 4th century onwards. The life of Anthony of Egypt is usually considered the first example of this new genre of Christian biography.
In Western Europe, hagiography was one of the more important vehicles for the study of inspirational history during the Middle Ages. The Golden Legend of Jacobus de Voragine compiled a great deal of medieval hagiographic material, with a strong emphasis on miracle tales. Lives were often written to promote the cult of local or national saints, and in particular to develop pilgrimages to visit relics. The bronze Gniezno Doors of Gniezno Cathedral in Poland are the only Romanesque doors in Europe to feature the life of a saint. The life of Saint Adalbert of Prague, who is buried in the cathedral, is shown in 18 scenes, probably based on a lost illuminated copy of one of his Lives.
The Bollandist Society continues the study, academic assembly, appraisal and publication of materials relating to the lives of Christian saints (see Acta Sanctorum).
=== Medieval England ===
Many of the important hagiographical texts composed in medieval England were written in the vernacular dialect Anglo-Norman. With the introduction of Latin literature into England in the 7th and 8th centuries, the genre of the life of the saint grew increasingly popular. When one compares it with the popular heroic poem, such as Beowulf, one finds that they share certain common features. In Beowulf, the titular character battles against Grendel and his mother, while the saint, such as Athanasius' Anthony (one of the original sources for the hagiographic motif) or the character of Guthlac, battles against figures no less substantial in a spiritual sense. Both genres then focus on the hero-warrior figure, but with the distinction that the saint is of a spiritual sort.
Imitation of the life of Christ was then the benchmark against which saints were measured, and imitation of the lives of saints was the benchmark against which the general population measured itself. In Anglo-Saxon and medieval England, hagiography became a literary genre par excellence for the teaching of a largely illiterate audience. Hagiography provided priests and theologians with classical handbooks in a form that allowed them the rhetorical tools necessary to present their faith through the example of the saints' lives.
Of all the English hagiographers, none was more prolific or so aware of the importance of the genre as Abbot Ælfric of Eynsham. His work Lives of the Saints contains a set of sermons on saints' days formerly observed by the English Church. The text comprises two prefaces, one in Latin and one in Old English, and 39 lives beginning on 25 December with the nativity of Christ and ending with three texts to which no saints' days are attached. The text spans the entire year and describes the lives of many saints, both English and continental, and harks back to some of the earliest saints of the early church.
There are two known instances where saint's lives were adapted into vernacular plays in Britain. These are the Cornish-language works Beunans Meriasek and Beunans Ke, about the lives of Saints Meriasek and Kea, respectively.
Other examples of hagiographies from England include:
the Chronicle by Hugh Candidus
the Secgan Manuscript
the list of John Leyland
possibly the book Life by Saint Cadog
Vita Sancti Ricardi Episcopi et Confessoris Cycestrensis/ Life of Richard of Chichester by Ralph Bocking.
The Book of Margery Kempe is an example of autohagiography, in which the subject dictates her life using the hagiographic form.
=== Medieval Ireland ===
Ireland is notable in its rich hagiographical tradition, and for the large amount of material which was produced during the Middle Ages. Irish hagiographers wrote primarily in Latin while some of the later saint's lives were written in the hagiographer's native vernacular Irish. Of particular note are the lives of St. Patrick, St. Columba (Latin)/Colum Cille (Irish) and St. Brigit/Brigid—Ireland's three patron saints. The earliest extant Life was written by Cogitosus. Additionally, several Irish calendars relating to the feastdays of Christian saints (sometimes called martyrologies or feastologies) contained abbreviated synopses of saint's lives, which were compiled from many different sources. Notable examples include the Martyrology of Tallaght and the Félire Óengusso. Such hagiographical calendars were important in establishing lists of native Irish saints, in imitation of continental calendars.
=== Eastern Orthodoxy ===
In the 10th century, the Byzantine monk Simeon Metaphrastes was the first to change the genre of lives of the saints into something different, giving it a moralizing and panegyrical character. His catalog of lives of the saints became the standard for all of the Western and Eastern hagiographers, who would create relative biographies and images of the ideal saints by gradually departing from the real facts of their lives. Over the years, the genre of lives of the saints absorbed a number of narrative plots and poetic images (often of pre-Christian origin, such as dragon fighting etc.), mediaeval parables, short stories and anecdotes.
The genre of lives of the saints was introduced in the Slavic world in the Bulgarian Empire in the late 9th and early 10th century, where the first original hagiographies were produced on Cyril and Methodius, Clement of Ohrid and Naum of Preslav. Eventually the Bulgarians brought this genre to Kievan Rus' together with writing and also in translations from the Greek language. In the 11th century, they began to compile the original life stories of their first saints, e.g. Boris and Gleb, Theodosius Pechersky etc. In the 16th century, Metropolitan Macarius expanded the list of the Russian saints and supervised the compiling process of their life stories. They would all be compiled in the so-called Velikiye chet'yi-minei catalog (Великие Четьи-Минеи, or Great Menaion Reader), consisting of 12 volumes in accordance with each month of the year. They were revised and expanded by St. Dimitry of Rostov in 1684–1705.
The Life of Alexander Nevsky was a particularly notable hagiographic work of the era.
Today, the works in the genre of lives of the saints represent a valuable historical source and reflection of different social ideas, world outlook and aesthetic concepts of the past.
=== Oriental Orthodoxy ===
The Oriental Orthodox Churches also have their own hagiographic traditions. For instance, Ethiopian Orthodox Tewahedo Church hagiographies in the Ge'ez language are known as gadl (Saint's Life). There are some 200 hagiographies about indigenous saints. They are among the most important medieval Ethiopian written sources, and some contain accurate historical information. They were written by the disciples of the saints, some long after a saint's death and others not long after the saint's demise. Fragments from an Old Nubian hagiography of Saint Michael are extant.
== Judaism ==
Jewish hagiography is common in Talmudic and Kabbalistic literature and, later, in the writings of the Hasidic movement.
== Islam ==
Hagiography in Islam began in the Arabic language with biographical writing about the Prophet Muhammad in the 8th century CE, a tradition known as sīra. From about the 10th century CE, a genre generally known as manāqib also emerged, which comprised biographies of the imams who founded the different schools of Islamic thought (madhhab, pl. madhāhib) about shariʿa, and of Ṣūfī saints. Over time, hagiography about Ṣūfīs and their miracles came to predominate in the genre of manāqib.
Likewise influenced by early Islamic research into hadiths and other biographical information about the Prophet, Persian scholars began writing Persian hagiography, again mainly of Ṣūfī saints, in the 11th century CE.
The Islamicisation of the Turkish regions led to the development of Turkish biographies of saints, beginning in the 13th century CE and gaining pace around the 16th. Production remained dynamic and kept pace with scholarly developments in historical biographical writing until 1925, when Mustafa Kemal Atatürk (d. 1938) placed an interdiction on Ṣūfī brotherhoods. As Turkey relaxed legal restrictions on Islamic practice in the 1950s and the 1980s, Ṣūfīs returned to publishing hagiography, a trend which continues in the 21st century.
== Other groups ==
The pseudobiography of L. Ron Hubbard compiled by the Church of Scientology is commonly described as a heavily fictionalized hagiography.
== See also ==
Jean Bolland
Boniface Consiliarius
Alban Butler
Danilo's student
Hippolyte Delehaye
Namtar (biography)
Pseudepigrapha
Reginald of Durham
Secular saint
== References ==
== Further reading ==
== External links ==
Hippolyte Delehaye, The Legends of the Saints: An Introduction to Hagiography (1907)
Delehaye, Hippolyte (1911). "Hagiology" . Encyclopædia Britannica. Vol. 12 (11th ed.). pp. 816–817.
Herbermann, Charles, ed. (1913). "Hagiography" . Catholic Encyclopedia. New York: Robert Appleton Company.
James Kiefer's Hagiographies
Société des Bollandistes Archived 29 May 2011 at the Wayback Machine
Hagiography Society
Myth is a genre of folklore consisting primarily of narratives that play a fundamental role in a society. For scholars, this is very different from the vernacular usage of the term "myth" that refers to a belief that is not true. Instead, the veracity of a myth is not a defining criterion.
Myths are often endorsed by religious (when they are closely linked to religion or spirituality) and secular authorities. Many societies group their myths, legends, and history together, considering myths and legends to be factual accounts of their remote past. In particular, creation myths take place in a primordial age when the world had not achieved its later form. Origin myths explain how a society's customs, institutions, and taboos were established and sanctified. National myths are narratives about a nation's past that symbolize the nation's values. There is a complex relationship between recital of myths and the enactment of rituals.
== Etymology ==
The word "myth" comes from Ancient Greek μῦθος (mȳthos), meaning 'speech', 'narrative', or 'fiction'. In turn, Ancient Greek μυθολογία (mythología 'story', 'legends', or 'story-telling') combines the word mȳthos with the suffix -λογία (-logia 'study'). Accordingly, Plato used mythología as a general term for fiction or story-telling of any kind. This word began was adapted into other European languages in the early 19th century, in a much narrower sense, as a scholarly term for "[a] traditional story, especially one concerning the early history of a people or explaining a natural or social phenomenon, and typically involving supernatural beings or events."
The Greek term mythología was then borrowed into Late Latin, occurring in the title of Latin author Fabius Planciades Fulgentius' 5th-century Mythologiæ to denote what is now referred to as classical mythology—i.e., Greco-Roman etiological stories involving their gods. Fulgentius's Mythologiæ explicitly treated its subject matter as allegories requiring interpretation and not as true events. The Latin term was then adopted in Middle French as mythologie. Whether from French or Latin usage, English adopted the word "mythology" in the 15th century, initially meaning 'the exposition of a myth or myths', 'the interpretation of fables', or 'a book of such expositions'. The word is first attested in John Lydgate's Troy Book (c. 1425).
From Lydgate until the 17th or 18th century, "mythology" meant a moral, a fable, an allegory or a parable, or a collection of traditional stories, understood to be false. It came eventually to be applied to similar bodies of traditional stories among other polytheistic cultures around the world. Thus "mythology" entered the English language before "myth". Johnson's Dictionary, for example, has an entry for mythology, but not for myth. Indeed, the Greek loanword mythos (pl. mythoi) and Latinate mythus (pl. mythi) both appeared in English before the first example of "myth" in 1830.
== Protagonists and structure ==
The main characters in myths are usually non-humans, such as gods, demigods, and other supernatural figures; some classifications of myth also include humans, animals, or combinations of these. Stories of everyday humans, although often leaders of some type, are usually contained in legends, as opposed to myths. Myths are sometimes distinguished from legends in that myths deal with gods, usually have no historical basis, and are set in a world of the remote past, very different from that of the present.
== Definitions ==
=== Myth ===
Definitions of "myth" vary to some extent among scholars, though Finnish folklorist Lauri Honko offers a widely-cited definition:
Myth, a story of the gods, a religious account of the beginning of the world, the creation, fundamental events, the exemplary deeds of the gods as a result of which the world, nature and culture were created together with all parts thereof and given their order, which still obtains. A myth expresses and confirms society's religious values and norms, it provides a pattern of behavior to be imitated, testifies to the efficacy of ritual with its practical ends and establishes the sanctity of cult.
Another definition of myth comes from myth criticism theorist and professor José Manuel Losada. According to Cultural Myth Criticism, the studies of myth must explain and understand "myth from inside", that is, only "as a myth". Losada defines myth as "a functional, symbolic and thematic narrative of one or several extraordinary events with a transcendent, sacred and supernatural referent; that lacks, in principle, historical testimony; and that refers to an individual or collective, but always absolute, cosmogony or eschatology". According to the hylistic myth research by assyriologist Annette Zgoll and classic philologist Christian Zgoll, "A myth can be defined as an Erzählstoff [narrative material] which is polymorphic through its variants and – depending on the variant – polystratic; an Erzählstoff in which transcending interpretations of what can be experienced are combined into a hyleme sequence with an implicit claim to relevance for the interpretation and mastering of the human condition."
Vernacularly, and among scholars in other fields, the term "myth" is used in varied ways. In a broad sense, the word can refer to any traditional story, popular misconception, urban legend or imaginary entity.
Though myth and other folklore genres may overlap, myth is often thought to differ from genres such as legend and folktale in that neither is considered to be a sacred narrative. Some kinds of folktales, such as fairy stories, are not considered true by anyone, and may be seen as distinct from myths for this reason. Main characters in myths are usually gods, demigods or supernatural humans, while legends generally feature humans as their main characters. Many exceptions and combinations exist, as in the Iliad, Odyssey and Aeneid. Moreover, as stories spread between cultures or as faiths change, myths can come to be considered folktales, their divine characters recast either as humans or as demihumans such as giants, elves and faeries. Conversely, historical and literary material may acquire mythological qualities over time. For example, the Matter of Britain (the legendary history of Great Britain, especially the tales focused on King Arthur and the knights of the Round Table) and the Matter of France seem distantly to originate in historical events of the 5th and 8th centuries, respectively, and became mythologised over the following centuries.
In colloquial use, "myth" can also be used of a collectively held belief that has no basis in fact, or any false story. This usage, which is often pejorative, arose from labelling the religious myths and beliefs of other cultures as incorrect, but it has spread to cover non-religious beliefs as well.
As commonly used by folklorists and academics in other relevant fields, such as anthropology, "myth" carries no implication as to whether the narrative may be understood as true or otherwise. Among biblical scholars of both the Old and New Testament, the word "myth" has a technical meaning, in that it usually refers to stories that "describe the actions of the other-worldly in terms of this world", such as the Creation and the Fall.
Since "myth" is popularly used to describe stories that are not objectively true, the identification of a narrative as a myth can be highly controversial. Many religious adherents believe that the narratives told in their respective religious traditions are historical without question, and so object to their identification as myths while labelling traditional narratives from other religions as such. Hence, some scholars may label all religious narratives as "myths" for practical reasons, such as to avoid depreciating any one tradition because cultures interpret each other differently relative to one another. Other scholars may abstain from using the term "myth" altogether for purposes of avoiding placing pejorative overtones on sacred narratives.
=== Related terms ===
==== Mythology ====
In present use, "mythology" usually refers to the collection of myths of a group of people. For example, Greek mythology, Roman mythology, Celtic mythology and Hittite mythology all describe the body of myths retold among those cultures.
"Mythology" can also refer to the study of myths and mythologies.
==== Mythography ====
The compilation or description of myths is sometimes known as "mythography", a term also used for a scholarly anthology of myths or of the study of myths generally.
Key mythographers in the Classical tradition include:
Ovid (43 BCE–17/18 CE), whose tellings of myths have been profoundly influential;
Fabius Planciades Fulgentius, a Latin writer of the late-5th to early-6th centuries, whose Mythologies (Latin: Mitologiarum libri III) gathered and gave moralistic interpretations of a wide range of myths;
the anonymous medieval Vatican Mythographers, who developed anthologies of Classical myths that remained influential to the end of the Middle Ages; and
Renaissance scholar Natalis Comes, whose ten-book Mythologiae became a standard source for classical mythology in later Renaissance Europe.
Other prominent mythographies include the thirteenth-century Prose Edda attributed to the Icelander Snorri Sturluson, which is the main surviving survey of Norse Mythology from the Middle Ages.
Jeffrey G. Snodgrass (professor of anthropology at the Colorado State University) has termed India's Bhats as mythographers.
==== Myth criticism ====
Myth criticism is a system of anthropological interpretation of culture created by French philosopher Gilbert Durand. Scholars have used myth criticism to explain the mythical roots of contemporary fiction, which means that modern myth criticism needs to be interdisciplinary.
Professor Losada offers his own methodological, hermeneutic and epistemological approach to myth. While assuming mythopoetical perspectives, Losada's Cultural Myth Criticism takes a step further, incorporating the study of the transcendent dimension (its function, its disappearance) to evaluate the role of myth as a mirror of contemporary culture.
Cultural myth criticism
Cultural myth criticism, without abandoning the analysis of the symbolic, invades all cultural manifestations and delves into the difficulties in understanding myth today. This cultural myth criticism studies mythical manifestations in fields as wide as literature, film and television, theater, sculpture, painting, video games, music, dancing, the Internet and other artistic fields.
Myth criticism, a discipline that studies myths (mythology contains them, like a pantheon its statues), is by nature interdisciplinary: it combines the contributions of literary theory, the history of literature, the fine arts and the new ways of dissemination in the age of communication. Likewise, it undertakes its object of study from its interrelation with other human and social sciences, in particular sociology, anthropology and economics. This justifies the need for an approach, a methodology, that makes it possible to understand the complexity of myth and its manifestations in contemporary times.
==== Mythos ====
Because "myth" is sometimes used in a pejorative sense, some scholars have opted for "mythos" instead. "Mythos" now more commonly refers to its Aristotelian sense as a "plot point" or to a body of interconnected myths or stories, especially those belonging to a particular religious or cultural tradition. It is sometimes used specifically for modern, fictional mythologies, such as the world building of H. P. Lovecraft.
==== Mythopoeia ====
Mythopoeia (from mytho- + -poeia, 'myth-making') is a term coined by J. R. R. Tolkien, amongst others, for the "conscious generation" of mythology. The deliberate creation of mythology was, notoriously, also advocated separately by the Nazi ideologist Alfred Rosenberg.
== Interpretations ==
=== Comparative mythology ===
Comparative mythology is a systematic comparison of myths from different cultures. It seeks to discover underlying themes that are common to the myths of multiple cultures. In some cases, comparative mythologists use the similarities between separate mythologies to argue that those mythologies have a common source. This source may inspire myths or provide a common "protomythology" that diverged into the mythologies of each culture.
=== Functionalism ===
A number of commentators have argued that myths function to form and shape society and social behaviour. Eliade argued that one of the foremost functions of myth is to establish models for behavior and that myths may provide a religious experience. By telling or reenacting myths, members of traditional societies detach themselves from the present, returning to the mythical age, thereby coming closer to the divine.
Honko asserted that, in some cases, a society reenacts a myth in an attempt to reproduce the conditions of the mythical age. For example, it might reenact the healing performed by a god at the beginning of time in order to heal someone in the present. Similarly, Barthes argued that modern culture explores religious experience. Since it is not the job of science to define human morality, a religious experience is an attempt to connect with a perceived moral past, in contrast with the technological present.
Pattanaik defines mythology as "the subjective truth of people communicated through stories, symbols and rituals." He says, "Facts are everybody's truth. Fiction is nobody's truth. Myths are somebody's truth."
=== Euhemerism ===
One theory claims that myths are distorted accounts of historical events. According to this theory, storytellers repeatedly elaborate upon historical accounts until the figures in those accounts gain the status of gods. For example, the myth of the wind-god Aeolus may have evolved from a historical account of a king who taught his people to use sails and interpret the winds. Herodotus (fifth century BCE) and Prodicus made claims of this kind. This theory is named euhemerism after the mythologist Euhemerus (c. 320 BCE), who suggested that the Greek gods developed from legends about human beings.
=== Allegory ===
Some theories propose that myths began as allegories for natural phenomena: Apollo represents the sun, Poseidon represents water, and so on. According to another theory, myths began as allegories for philosophical or spiritual concepts: Athena represents wise judgment, Aphrodite romantic desire, and so on. Müller supported an allegorical theory of myth. He believed myths began as allegorical descriptions of nature and gradually came to be interpreted literally. For example, a poetic description of the sea as "raging" was eventually taken literally and the sea was then thought of as a raging god.
=== Personification ===
Some thinkers claimed that myths result from the personification of objects and forces. According to these thinkers, the ancients worshiped natural phenomena, such as fire and air, and gradually deified them: viewing such phenomena as gods rather than mere objects, they described natural events as the acts of personal gods, giving rise to myths.
=== Ritualism ===
According to the myth-ritual theory, myth is tied to ritual. In its most extreme form, this theory claims myths arose to explain rituals. This claim was first put forward by the biblical scholar William Robertson Smith, who argued that people begin performing rituals for reasons not related to myth. Forgetting the original reason for a ritual, they account for it by inventing a myth and claiming the ritual commemorates the events described in that myth. James George Frazer—author of The Golden Bough, a book on the comparative study of mythology and religion—argued that humans started out with a belief in magical rituals; later, they began to lose faith in magic and invented myths about gods, reinterpreting their rituals as religious rituals intended to appease the gods.
== Academic discipline history ==
Historically, important approaches to the study of mythology have included those of Vico, Schelling, Schiller, Jung, Freud, Lévy-Bruhl, Lévi-Strauss, Frye, the Soviet school, and the Myth and Ritual School.
=== Ancient Greece ===
The critical interpretation of myth began with the Presocratics. Euhemerus was one of the most important pre-modern mythologists. He interpreted myths as accounts of actual historical events, though distorted over many retellings.
Sallustius divided myths into five categories:
theological;
physical (or concerning natural law);
animistic (or concerning soul);
material; and
mixed, which concerns myths that show the interaction between two or more of the previous categories and are particularly used in initiations.
Plato condemned poetic myth when discussing education in the Republic, primarily on the grounds that the uneducated might take the stories of gods and heroes literally. Nevertheless, he constantly referred to myths throughout his writings. As Platonism developed in the phases commonly called Middle Platonism and Neoplatonism, writers such as Plutarch, Porphyry, Proclus, Olympiodorus, and Damascius wrote explicitly about the symbolic interpretation of traditional and Orphic myths.
Mythological themes were consciously employed in literature, beginning with Homer. The resulting work may expressly refer to a mythological background without itself becoming part of a body of myths (Cupid and Psyche). Medieval romance in particular plays with this process of turning myth into literature. Euhemerism, as stated earlier, refers to the rationalization of myths, putting themes formerly imbued with mythological qualities into pragmatic contexts, often following a cultural or religious paradigm shift (notably the reinterpretation of pagan mythology following Christianization).
=== European Renaissance ===
Interest in polytheistic mythology revived during the Renaissance, with early works of mythography appearing in the sixteenth century, among them the Theologia Mythologica (1532).
=== 19th century ===
The first modern, Western scholarly theories of myth appeared during the second half of the 19th century—at the same time as "myth" was adopted as a scholarly term in European languages. They were driven partly by a new interest in Europe's ancient past and vernacular culture, associated with Romantic Nationalism and epitomised by the research of Jacob Grimm (1785–1863). This movement drew European scholars' attention not only to Classical myths, but also to material now associated with Norse mythology, Finnish mythology, and so forth. Western theories were also partly driven by Europeans' efforts to comprehend and control the cultures, stories and religions they were encountering through colonialism. These encounters included both extremely old texts such as the Sanskrit Rigveda and the Sumerian Epic of Gilgamesh, and current oral narratives such as mythologies of the indigenous peoples of the Americas or stories told in traditional African religions.
The intellectual context for nineteenth-century scholars was profoundly shaped by emerging ideas about evolution. These ideas included the recognition that many Eurasian languages—and therefore, conceivably, stories—were all descended from a lost common ancestor (Proto-Indo-European), which could rationally be reconstructed through the comparison of its descendant languages. They also included the idea that cultures might evolve in ways comparable to species. In general, 19th-century theories framed myth as a failed or obsolete mode of thought, often by interpreting myth as the primitive counterpart of modern science within a unilineal framework that imagined that human cultures travel, at different speeds, along a linear path of cultural development.
==== Nature ====
One of the dominant mythological theories of the later 19th century was nature mythology, the foremost exponents of which included Max Müller and Edward Burnett Tylor. This theory posited that "primitive man" was primarily concerned with the natural world. It tended to interpret myths that seemed distasteful to European Victorians—such as tales about sex, incest, or cannibalism—as metaphors for natural phenomena like agricultural fertility. Unable to conceive of impersonal natural laws, early humans, on this view, tried to explain natural phenomena by attributing souls to inanimate objects, thus giving rise to animism.
According to Tylor, human thought evolved through stages, starting with mythological ideas and gradually progressing to scientific ideas. Müller also saw myth as originating from language, even calling myth a "disease of language". He speculated that myths arose due to the lack of abstract nouns and neuter gender in ancient languages. Anthropomorphic figures of speech, necessary in such languages, were eventually taken literally, leading to the idea that natural phenomena were in actuality conscious or divine. Not all scholars, not even all 19th-century scholars, accepted this view. Lucien Lévy-Bruhl claimed that "the primitive mentality is a condition of the human mind and not a stage in its historical development." Recent scholarship, noting the fundamental lack of evidence for "nature mythology" interpretations among people who actually circulated myths, has likewise abandoned the key ideas of "nature mythology".
==== Ritual ====
Frazer saw myths as a misinterpretation of magical rituals, which were themselves based on a mistaken idea of natural law. This idea was central to the "myth and ritual" school of thought. According to Frazer, humans begin with an unfounded belief in impersonal magical laws. When they realize applications of these laws do not work, they give up their belief in natural law in favor of a belief in personal gods controlling nature, thus giving rise to religious myths. Meanwhile, humans continue practicing formerly magical rituals through force of habit, reinterpreting them as reenactments of mythical events. Finally, humans come to realize nature follows natural laws, and they discover their true nature through science. Here again, science makes myth obsolete as humans progress "from magic through religion to science." Segal asserted that by pitting mythical thought against modern scientific thought, such theories imply modern humans must abandon myth.
=== 20th century ===
The earlier 20th century saw major work developing psychoanalytical approaches to interpreting myth, led by Sigmund Freud, who, drawing inspiration from Classical myth, began developing the concept of the Oedipus complex in his 1899 The Interpretation of Dreams. Jung likewise tried to understand the psychology behind world myths. Jung asserted that all humans share certain innate unconscious psychological forces, which he called archetypes. He believed that similarities between the myths of different cultures reveal the existence of these universal archetypes.
The mid-20th century saw the influential development of a structuralist theory of mythology, led by Lévi-Strauss, who argued that myths reflect patterns in the mind and interpreted those patterns more as fixed mental structures, specifically pairs of opposites (good/evil, compassionate/callous), than as unconscious feelings or urges. Meanwhile, Bronislaw Malinowski developed analyses of myths focusing on their social functions in the real world. He is associated with the idea that myths such as origin stories might provide a "mythic charter"—a legitimisation—for cultural norms and social institutions. Thus, following the Structuralist Era (c. 1960s–1980s), the predominant anthropological and sociological approaches to myth increasingly treated myth as a form of narrative that can be studied, interpreted, and analyzed like ideology, history, and culture. In other words, myth is a form of understanding and telling stories that are connected to power, political structures, and political and economic interests.
These approaches contrast with approaches, such as those of Joseph Campbell and Eliade, which hold that myth has some type of essential connection to ultimate sacred meanings that transcend cultural specifics. In particular, myth has been studied in relation to history across diverse social sciences. Most of these studies share the assumption that history and myth are not distinct in the sense that history is factual, real, accurate, and true, while myth is the opposite.
In the 1950s, Barthes published a series of essays examining modern myths and the process of their creation in his book Mythologies, an early work in the emerging post-structuralist approach to mythology that recognised the existence of myths in the modern world and in popular culture.
The 20th century saw rapid secularization in Western culture. This made Western scholars more willing to analyse narratives in the Abrahamic religions as myths; theologians such as Rudolf Bultmann argued that a modern Christianity needed to demythologize; and other religious scholars embraced the idea that the mythical status of Abrahamic narratives was a legitimate feature of their importance. Thus, in his appendix to Myths, Dreams and Mysteries, and in The Myth of the Eternal Return, Eliade attributed modern humans' anxieties to their rejection of myths and the sense of the sacred.
The Christian theologian Conrad Hyers wrote:
[M]yth today has come to have negative connotations which are the complete opposite of its meaning in a religious context... In a religious context, myths are storied vehicles of supreme truth, the most basic and important truths of all. By them, people regulate and interpret their lives and find worth and purpose in their existence. Myths put one in touch with sacred realities, the fundamental sources of being, power, and truth. They are seen not only as being the opposite of error but also as being clearly distinguishable from stories told for entertainment and from the workaday, domestic, practical language of a people. They provide answers to the mysteries of being and becoming, mysteries which, as mysteries, are hidden, yet mysteries which are revealed through story and ritual. Myths deal not only with truth but with ultimate truth.
=== 21st century ===
Both in 19th-century research, which tended to see existing records of stories and folklore as imperfect fragments of partially lost myths, and in 20th-century structuralist work, which sought to identify underlying patterns and structures in often diverse versions of a given myth, there had been a tendency to synthesise sources to attempt to reconstruct what scholars supposed to be more perfect or underlying forms of myths. From the late 20th century, researchers influenced by postmodernism tended instead to argue that each account of a given myth has its own cultural significance and meaning, and argued that rather than representing degradation from a once more perfect form, myths are inherently plastic and variable. There is, consequently, no such thing as the 'original version' or 'original form' of a myth. One prominent example of this movement was A. K. Ramanujan's essay "Three Hundred Ramayanas".
Correspondingly, scholars challenged the precedence that had once been given to texts as a medium for mythology, arguing that other media, such as the visual arts or even landscape and place-naming, could be as important or more so. On this view, myths are not texts but narrative materials (Erzählstoffe) that can be adapted in various media (such as epics, hymns, handbooks, movies, dances, etc.). In contrast to other academic approaches, which primarily focus on the (social) function of myths, hylistic myth research aims to understand myths and their nature in their own right. As part of the Göttingen myth research, Annette and Christian Zgoll developed the method of hylistics (narrative-material research) to extract mythical materials from their media and make transmedial comparison possible. The content of the medium is broken down into the smallest possible plot components (hylemes), which are listed in standardized form (so-called hyleme analysis). Inconsistencies in content can indicate stratification, i.e. the overlapping of several materials, narrative variants and editorial layers within the same medial concretion. To a certain extent, this can also be used to reconstruct earlier and alternative variants of the same material that competed with and/or were combined with each other. The juxtaposition of hyleme sequences enables the systematic comparison of different variants of the same material, or of several materials that are related or structurally similar to each other. In his overall account of the hundred-year history of myth research, the classical philologist and myth researcher Udo Reinhardt cites Christian Zgoll's foundational Tractatus mythologicus as "the latest handbook on myth theory", of "outstanding significance" for modern myth research.
== Modernity ==
Scholars in the field of cultural studies research how myth has worked itself into modern discourses. Mythological discourse can reach greater audiences than ever before via digital media. Various mythic elements appear in popular culture, including television, cinema and video games.
Although myth was traditionally transmitted through the oral tradition on a small scale, the film industry has enabled filmmakers to transmit myths to large audiences via film. In Jungian psychology, myths are the expression of a culture or society's goals, fears, ambitions and dreams.
Modern visual storytelling is rooted in the mythological tradition. Many contemporary films rely on ancient myths to construct narratives. The Walt Disney Company is well known among cultural studies scholars for "reinventing" traditional childhood myths. While few films are as obvious as Disney fairy tales, the plots of many films are based on the rough structure of myths. Mythological archetypes, such as the cautionary tale regarding the abuse of technology, battles between gods, and creation stories, are often the subject of major film productions. These films are often created under the guise of cyberpunk action films, fantasy, dramas and apocalyptic tales.
21st-century films such as Clash of the Titans, Immortals and Thor continue the trend of using traditional mythology to frame modern plots. Authors use mythology as a basis for their books, such as Rick Riordan, whose Percy Jackson and the Olympians series is situated in a modern-day world where the Greek deities are manifest.
Scholars, particularly those within the field of fan studies, and fans of popular culture have also noted a connection between fan fiction and myth. Ika Willis identified three models of this connection: fan fiction as a reclaiming of popular stories from corporations, myth as a means of critiquing or dismantling hegemonic power, and myth as "a commons of story and a universal story world". Willis supports the third model and argues that fan fiction can be seen as mythic due to its hyperseriality—a term invented by Sarah Iles Johnston to describe a hyperconnected universe in which characters and stories are interwoven. In an interview for the New York Times, Henry Jenkins stated that fan fiction "is a way of the culture repairing the damage done in a system where contemporary myths are owned by corporations instead of owned by the folk".
== See also ==
List of mythologies
List of mythological objects
List of mythology books and sources
Magic and mythology
Mythopoeia, artificially constructed mythology, mainly for the purpose of storytelling
Myth: Its Meaning and Functions in Ancient and Other Cultures by G.S. Kirk
== Notes ==
== Sources ==
== External links ==
"Mythology.net". | Wikipedia/Mythography |
Historiographical debates on the subject of Widerstand have often featured intense arguments about the nature, extent and effectiveness of resistance in the Third Reich. In particular, debate has focused on what should be defined as Widerstand (resistance).
== Immediate post-war period ==
Within both the Federal Republic of Germany and the German Democratic Republic, the memory of Widerstand was harnessed after 1949 as a way of providing legitimacy to the two rival German states. In East Germany, the focus was unabashedly on celebrating the KPD, which was represented as the only anti-fascist force in Germany; non-Communist resistance was either ignored or slighted. East German historical work on the subject of Widerstand was highly politicized and portrayed members of the KPD resistance as heroes. The general tone of East German work on the subject was well summarized by the introduction to the 1974 book Die deutsche antifaschistische Widerstandsbewegung, which stated: "The German anti-fascist resistance movement, especially the KPD and the forces allied to it, embodied the progressive line of German policy. The most consistent political force of this movement, the KPD, carried out from the first day of the fascist dictatorship an organized and centrally directed struggle against imperialism ... The expression of the victory of the resolute anti-fascists after the smashing of fascism by the Soviet Union, and the other states of the Anti-Hitler coalition, and the defeat of German imperialism is the existence of the GDR in which the legacy of the best of the German people who gave their lives in the anti-fascist struggle was realized".
In West Germany, the first works to appear on the subject, such as the books by Hans Rothfels and Gerhard Ritter, were intended both to rebut the "collective guilt" accusations against the German people by showing the existence of the "other Germany", and to prevent another Dolchstoßlegende from emerging by portraying those involved in Widerstand activities in as heroic a light as possible. Under the influence of the Cold War, starting in the late 1940s and continuing throughout the 1950s, historiographical work on the subject in the Federal Republic came to increasingly exclude the KPD and assigned a minor role to the SPD. In his biography of Goerdeler, Ritter drew a distinction between those Germans working for the defeat of their country and those Germans working to overthrow the Nazi regime while remaining loyal to Germany. Thus, in Ritter's view, Goerdeler was a patriot while those involved in the Rote Kapelle were traitors who deserved to be executed. In general, West German historians in the 1950s came to define Widerstand as including only the national-conservatives involved in the 20 July plot, and a "monumentalization" and "heroicization" of Widerstand occurred, with those involved being credited as acting from the highest possible ethical and moral motives. In the 1950s, resistance was depicted as middle-class and Christian, with the emphasis on the heroic individual standing alone against tyranny.
== 1960s–1970s and re-examination of Widerstand ==
Starting in the 1960s, a younger generation of West German historians such as Hans Mommsen started to provide a more critical assessment of Widerstand within German elites and came to decry the "monumentalization" of the 1950s. In two articles published in 1966, Mommsen disproved the claim, often advanced in the 1950s, that the ideas of the "men of 20 July" were the inspiration for the 1949 Basic Law of the Federal Republic. Mommsen showed that the ideas of national-conservative opponents of the Nazis had their origins in the anti-Weimar right of the 1920s, that the system the national-conservatives wished to build in place of Nazism was not a democracy, and that the national-conservatives wished to see a "Greater Germany" ruling over much of Central and Eastern Europe.
Increasingly, West German historians in the 1960s and 1970s started to examine Widerstand outside of elites, focusing on resistance by ordinary people to challenge the popular notion that there had been "resistance without the people". An example of the changing trend in historical research was a series of local studies, of varying degrees of quality, on working-class resistance movements associated with the SPD and the KPD published in the 1970s, which shed much light on these previously little-known movements. As the historical genre of Alltagsgeschichte (history of everyday life) started to enjoy increasing popularity as a research topic in the 1970s–80s, historians became more preoccupied with what they considered to be "everyday" resistance by individuals acting outside of any sort of organization.
The so-called "Bavaria Project" of the 1970s, an effort made by the Institute of Contemporary History to comprehensively document "everyday life" in Bavaria during the Third Reich did much to spur research into this area. The first director of the "Bavaria Project", Peter Hüttenberger, defined Widerstand as "every form of rebellion against at least potentially total rule within the context of asymmetrical relations of rule". For Hüttenberger, "symmetrical" rule occurs when there is a "bargain" struck between the different interests of the rulers and ruled which leads more or less to a "balance"; "asymmetrical rule" occurs when there is no "bargain" and the state seeks total Herrschaft (domination) over the ruled. For this reason, Hüttenberger discounted the East German claim that the KPD had been engaging in anti-Nazi resistance during the Weimar Republic. Hüttenberger argued that democracy is a form of "symmetrical" rule, and therefore merely being an opposition party under a democracy does not qualify as resistance.
Seen within this perspective as defined by Hüttenberger, any effort to resist the claim of total Herrschaft, no matter how minor, was a form of Widerstand. Thus, the six volumes comprising the "Bavaria Project", edited by the project's second director, Martin Broszat, depicted actions such as refusal to give the Nazi salute as a type of resistance. Moreover, the emphasis upon resistance in "everyday life" in the "Bavaria Project" portrayed Widerstand not as a total contrast between black and white, but rather in shades of grey, noting that people who refused to behave as the Nazi regime wanted in one area often conformed in others; for example, Bavarian peasants who did business with Jewish cattle dealers in the 1930s, despite the efforts of the Nazi regime to stop these transactions, otherwise often expressed approval of the anti-Semitic laws. Rather than defining resistance as a matter of intention, Broszat and his associates came to define Widerstand as a matter of Wirkung (effect): the blocking of the Nazi regime's total claim to control all aspects of German life, regardless of whether the intentions behind an action were political or not.
=== Resistenz ===
Realizing that not every action that blocked the Nazi regime's total claims should be considered a form of Widerstand, Broszat devised the controversial concept of Resistenz (immunity). By Resistenz, Broszat meant the ability of certain sections of German society, such as the Wehrmacht, the Roman Catholic Church and the bureaucracy, to enjoy immunity from the Nazis' claims to total power and to continue functioning according to their traditions and practices without seeking to fundamentally challenge the Nazi regime. Broszat's concept was used to advance the notion that, at least at the local level, there was much continuity in Germany between the Weimar and Nazi eras.
The Resistenz concept was often criticized by other historians for shifting the focus from "behavior" and intentions towards the Nazi regime to the "effect" of one's actions on the regime. One of Broszat's leading critics, the Swiss historian Walter Hofer, commented: "The concept of Resistenz leads to a levelling down of fundamental resistance against the system on the one hand and actions criticizing more or less accidental, superficial manifestations on the other: the tyrannicide appears on the same plane as the illegal cattle-slaughterer". Moreover, Hofer noted that the things Broszat labeled Resistenz had no effect, in the grander scheme of things, on the ability of the Nazi regime to accomplish its objectives within Germany. Another of Broszat's critics, the German historian Klaus-Jürgen Müller, argued that the term Widerstand should apply only to those with a "will to overcome the system" and that Broszat's Resistenz concept muddied the waters by speaking of societal "immunity" to the regime. A more sympathetic appraisal came from the historians Manfred Messerschmidt and Heinz Boberach, who argued that Widerstand should be defined from the viewpoint of the Nazi state, and that any activity contrary to the regime's wishes, such as listening to jazz music, should be considered a form of Widerstand. Hans Mommsen wrote about the Resistenz concept:
This raises, of course, the issue of how to distinguish between resistance that intended to overthrow the system, and active Resistenz (though judged from the angle of convictions of the individual, this constitutes an artificial separation). Those who risked their lives to hide Jewish fellow citizens and acquire forged exit permits for them, those who tried to help Russian prisoners-of-war, those who, at their workplaces, fought for the rights of workers and refused to be indoctrinated by the German Labour Front, those who protested against the treatment of the Jewish population or publicly denounced the euthanasia programme, those who refused to obey criminal orders, those who as a powerless protest against Nazi war policies daubed slogans on walls at night-time, those who protected the persecuted and shared their ration cards with them—in a wider sense they all belonged to the resistance.
== Der Deutsche Widerstand und die Juden ==
As part of a critical evaluation of those involved in anti-Nazi work, the German historian Christof Dipper, in his 1983 essay "Der Deutsche Widerstand und die Juden" (translated into English as "The German Resistance and the Jews"), argued that the majority of the anti-Nazi national-conservatives were anti-Semitic. Dipper wrote that for the majority of the national-conservatives "the bureaucratic, pseudo-legal deprivation of the Jews practiced until 1938 was still considered acceptable". Though Dipper noted that no one in the Widerstand movement supported the Holocaust, he also commented that the national-conservatives did not intend to restore civil rights to the Jews after the overthrow of Hitler. Dipper went on to argue, on the basis of such views held by opponents of the regime, that "a large part of the German people ... believed that a 'Jewish Question' existed and had to be solved ...".
In response to Dipper's charges, the Canadian historian Peter Hoffmann, in his 2004 essay "The German Resistance and the Holocaust", sought to disprove Dipper's thesis. Hoffmann argued that the majority of those involved in the 20 July putsch attempt were motivated in large part by moral objections to the Shoah. In particular, Hoffmann cited the example of Claus von Stauffenberg's moral outrage at witnessing the massacre of Russian Jews in 1942, and of Carl Friedrich Goerdeler's advice in 1938–39 to his contact with British intelligence, the industrialist A. P. Young, that the British government should take a tough line with the Nazi regime in regard to its anti-Semitism. The Israeli historian Danny Orbach, in his 2010 book Valkyrie: Hahitnagdut Hagermanit Lehitler, defended the German resistance fighters, particularly Goerdeler, against the charge of anti-Semitism by noting Goerdeler's strong support for Zionism, the importance of the Holocaust in the motives of the national-conservative resistance, and the attempts of other German resistance fighters to save persecuted Jews. In a subsequent article, Orbach also argued that Dipper's accusations of antisemitism rest on a misreading, if not a distortion, of the primary sources, above all Goerdeler's memoranda on the Jewish Question.
== Other views ==
Another viewpoint advanced in the debate was that of Mommsen, who cautioned against overly rigid terminology and spoke of a broad notion of "resistance practice" (Widerstandspraxis), by which he meant that there were different types and forms of resistance and that resistance should be considered a "process", in which individuals came increasingly to reject the Nazi system in its entirety. As an example of resistance as a "process", Mommsen cited Carl Friedrich Goerdeler, who initially supported the Nazis, became increasingly disillusioned with Nazi economic policies while serving as Price Commissioner in the mid-1930s, and by the late 1930s was committed to Hitler's overthrow. Mommsen described national-conservative resistance as "a resistance of servants of the state" who, over time, gradually abandoned their former support of the regime and steadily came to accept that the only way of bringing about fundamental change was to seek the regime's destruction. In regard to the idea of "resistance as a process", several historians have worked out typologies. The German historian Detlev Peukert created a typology running from "nonconformity" (mostly practised in private and not amounting to total rejection of the Nazi system) through "refusal of co-operation" (Verweigerung) and "protest" to "resistance" (commitment to the overthrow of the regime). The Austrian historian Gerhard Botz argued for a typology starting with "deviant behavior" (minor acts of non-conformity), followed by "social protest" and "political resistance".
The British historian Sir Ian Kershaw has argued that there are two approaches to the Widerstand question: one he calls fundamentalist (dealing with those committed to overthrowing the Nazi regime), the other societal (dealing with forms of dissent in "everyday life"). In Kershaw's view, the Resistenz concept works well in an Alltagsgeschichte approach but less well in the field of high politics, and moreover, by focusing only on the "effect" of one's actions, it fails to consider the crucial element of the "intention" behind them. Kershaw has argued that the term Widerstand should be used only for those working for the total overthrow of the Nazi system, and that those engaging in behavior counter to the regime's wishes without seeking to overthrow the regime should be included under the terms opposition and dissent, depending upon their motives and actions. Kershaw has used the Edelweiss Pirates as an example of a group whose behavior initially fell under dissent and advanced from there to opposition and finally to resistance. Similarly, the American historian Claudia Koonz, in her 1992 article "Ethical Dilemmas and Nazi Eugenics", argued that those who protested against the Action T4 program, usually for religious reasons, while remaining silent about the Holocaust cannot be considered part of any resistance to the Nazis; such protests can only be considered a form of dissent. In Kershaw's opinion, there were three bands ranging from dissent through opposition to resistance. In Kershaw's view, there was much dissent and opposition within German society, but outside of the working class very little resistance. Though Kershaw has argued that the Resistenz concept has much merit, overall he concluded that the Nazi regime had a broad basis of support and consensus, and that it is correct to speak of "resistance without the people".
== See also ==
Knowledge of the Holocaust in Nazi Germany and German-occupied Europe
== References ==
=== Bibliography ===
Hoffmann, Peter (2004). "The German Resistance and the Holocaust". In Michalczyk, John J. (ed.). Confront! Resistance in Nazi Germany. P. Lang. pp. 105–126. ISBN 978-0-8204-6317-9.
Kershaw, Ian (2000). The Nazi Dictatorship: Problems and Perspectives of Interpretation (4th ed.). Bloomsbury Academic. ISBN 978-0-340-76028-4.
As soon as the term "Cold War" was popularized to refer to postwar tensions between the United States and the Soviet Union, interpreting the course and origins of the conflict became a source of heated controversy among historians, political scientists and journalists. In particular, historians have sharply disagreed as to who was responsible for the breakdown of Soviet Union–United States relations after World War II and whether the conflict between the two superpowers was inevitable or could have been avoided. Historians have also disagreed on what exactly the Cold War was, what the sources of the conflict were and how to disentangle patterns of action and reaction between the two sides. While the explanations of the origins of the conflict in academic discussions are complex and diverse, several general schools of thought on the subject can be identified. Historians commonly speak of three differing approaches to the study of the Cold War: "orthodox" accounts, "revisionism" and "post-revisionism". However, much of the historiography on the Cold War weaves together two or even all three of these broad categories, and more recent scholars have tended to address issues that transcend the concerns of all three schools.
== Pro-Soviet accounts ==
Soviet historiography on the Cold War era was overwhelmingly dictated by the Soviet state and blamed the West for the Cold War. In Britain, the historian E. H. Carr wrote a 14-volume history of the Soviet Union, which was focused on the 1920s and published 1950–1978. His friend R. W. Davies said Carr belonged to the anti-Cold War school of history, which regarded the Soviet Union as the major progressive force in the world, the United States as the world's principal obstacle to the advancement of humanity and the Cold War as a case of American aggression against the Soviet Union. Carr criticized those Anglophone historians who he felt had unfairly judged the Soviet Union by the cultural norms of Britain and the United States.
== Orthodox accounts ==
The first school of interpretation to emerge in the United States was "orthodox". For more than a decade after the end of World War II, few American historians challenged the official American interpretation of the beginnings of the Cold War. The "orthodox" school places the responsibility for the Cold War on the Soviet Union and its expansion into Eastern Europe. For example, Thomas A. Bailey argued in his 1950 America Faces Russia that the breakdown of postwar peace was the result of Soviet expansionism in the immediate years following World War II. Bailey argued that Joseph Stalin violated promises he had made at the Yalta Conference, imposed Soviet-dominated regimes on unwilling Eastern European populations and conspired to spread communism throughout the world. From that view, American officials were forced to respond to Soviet aggression with the Truman Doctrine, plans to contain communist subversion around the world and the Marshall Plan.
Another prominent "orthodox" historian was Herbert Feis, who in his works like Churchill, Roosevelt, Stalin and From Trust to Terror: The Onset of the Cold War stated similar views. According to him, Soviet aggression in Eastern Europe in the postwar period was responsible for starting of the Cold War. Apart from this, he also argued that Franklin D. Roosevelt's policies towards Stalin and his "surrender" to Stalin's demands in the Yalta Conference paved the way for Soviet aggression and destabilized balance of power in Europe in Soviet favor. The interpretation has been described as the "official" United States version of Cold War history. Although it lost its dominance as a mode of historical thought in academic discussions in 1960s, it continues to be influential.
== Revisionism ==
The role of the United States in the Vietnam War disillusioned New Left historians and created a minority of historians with sympathy towards the Viet Cong communist position and antipathy towards American policies. Much more important were the revisionists, who argued that both the United States and the Soviet Union were responsible for blundering into the war and who rejected the premises of "containment". They battled the "orthodox" historians. "Revisionist" accounts emerged in the wake of the Vietnam War in the context of a larger rethinking of the United States' role in international affairs, which was seen more in terms of American empire or hegemony.
While the new school of thought spanned many differences among individual scholars, the works comprising it were generally responses in one way or another to William Appleman Williams's 1959 volume, The Tragedy of American Diplomacy. Williams challenged the long-held assumptions of "orthodox" accounts, arguing that Americans had always been an empire-building people, even while American leaders denied it. The influence of Williams, who taught at the University of Wisconsin–Madison, and of several of his students who subsequently published works on these themes, was enough to create what became known as the Wisconsin School of American diplomatic history. The Wisconsin School was distinct from the New Left; while members of each found themselves allied at times, New Left critiques tended to be a good deal more radical both in analysis and in proposed solutions.
Following Williams, revisionists placed more responsibility for the breakdown of postwar peace on the United States, citing a range of their efforts to isolate and confront the Soviet Union well before the end of World War II. They argued that American policymakers shared an overarching concern with maintaining the market system and capitalist democracy. To achieve that objective, they pursued an "open door" policy abroad, aimed at increasing access to foreign markets for American business and agriculture.
Revisionist scholars challenged the widely accepted scholarly view that Soviet leaders were committed to the postwar expansion of communism. They cited evidence that the Soviet Union's occupation of Eastern Europe had a defensive rationale and that Soviet leaders saw themselves as attempting to avoid encirclement by the United States and its allies. In that view, the Soviet Union was so weak and devastated after the end of World War II as to be unable to pose any serious threat to the United States, which maintained a nuclear monopoly until the Soviet Union tested its first atomic bomb in August 1949.
Revisionist historians have also presented the view that the origins of the Cold War date to the Allied intervention in the Russian Civil War. Some reach back even further: the Wisconsin School historian Walter LaFeber, in his study America, Russia, and the Cold War, first published in 1972, argued that the Cold War had its origins in 19th-century conflicts between Russia and the United States over the opening of East Asia to American trade, markets and influence. LaFeber argued that the United States' commitment at the close of World War II to ensuring a world in which every state was open to American influence and trade underpinned many of the conflicts that triggered the beginning of the Cold War.
Starting with Gar Alperovitz's influential Atomic Diplomacy: Hiroshima and Potsdam (1965), revisionists have focused on the United States' decision to use atomic weapons against Hiroshima and Nagasaki during the last days of World War II. In their view, the nuclear bombing of Hiroshima and Nagasaki in effect started the Cold War. According to Alperovitz, the bombs were used not against an already-defeated Japan to win the war, but to intimidate the Soviets by signaling that the United States would use nuclear weapons to stop Soviet expansion, though they failed to do so.
New Left historians Joyce and Gabriel Kolko's The Limits of Power: The World and U.S. Foreign Policy, 1945–1954 (1972) has also received considerable attention in the historiography of the Cold War. The Kolkos argued that American policy was both reflexively anticommunist and counterrevolutionary. The United States was fighting not only Soviet influence but any form of challenge to American economic and political prerogatives, whether through covert or military means. In this sense, the Cold War is less a story of rivalry between two blocs than a story of the ways by which the dominant states within each bloc controlled and disciplined their own populations and clients, and of who supported and stood to benefit from increased arms production and political anxiety over a perceived external enemy.
== Post-revisionism ==
The revisionist interpretation produced a critical reaction of its own. In a variety of ways, "post-revisionist" scholarship before the fall of Communism challenged earlier works on the origins and course of the Cold War.
During the period, "post-revisionism" challenged the "revisionists" by accepting some of their findings but rejecting most of their key claims. Another current attempted to strike a balance between the "orthodox" and "revisionist" camps, identifying areas of responsibility for the origins of the conflict on both sides. For example, Thomas G. Paterson in Soviet-American Confrontation (1973) viewed Soviet hostility and United States efforts to dominate the postwar world as equally responsible for the Cold War.
The seminal work of this approach was John Lewis Gaddis's The United States and the Origins of the Cold War, 1941–1947 (1972). The account was immediately hailed as the beginning of a new school of thought on the Cold War claiming to synthesize a variety of interpretations. Gaddis then maintained that "neither side can bear sole responsibility for the onset of the Cold War". However, he emphasized the constraints imposed on United States policymakers by the complications of domestic politics. In addition, Gaddis criticized some revisionist scholars, particularly Williams, for failing to understand the role of Soviet policy in the origins of the Cold War. Gaddis's 1983 distillation of post-revisionist scholarship became a major channel for guiding subsequent Cold War research. An almost immediate challenge to Gaddis's framework came from Melvyn P. Leffler, who "demonstrated that it was not so much the actions of the Kremlin as it was fears about socioeconomic dislocation, revolutionary nationalism, British weakness, and Eurasian vacuums of power that triggered US initiatives to mold an international system to comport with its concept of security". That provoked "strong rebuttals" from Gaddis and his followers, but Leffler deemed their objections inaccurate and unsubstantiated. However, Leffler himself still falls within the overall post-revisionist camp.
Out of the "post-revisionist" literature emerged a new area of inquiry that was more sensitive to nuance and interested less in the question of who started the conflict than in offering insight into United States and Soviet actions and perspectives. From that perspective, the Cold War was not so much the responsibility of either side, but rather the result of predictable tensions between two world powers that had been suspicious of one another for nearly a century. For example, Ernest May wrote in a 1984 essay:
After the Second World War, the United States and the Soviet Union were doomed to be antagonists. ... There probably was never any real possibility that the post-1945 relationship could be anything but hostility verging on conflict. ... Traditions, belief systems, propinquity, and convenience ... all combined to stimulate antagonism, and almost no factor operated in either country to hold it back.
From that view of "post-revisionism" emerged a line of inquiry that examines how Cold War actors perceived various events and the degree of misperception involved in the failure of the two sides to reach common understandings of their wartime alliance and their disputes.
After the opening of the Soviet archives, John Lewis Gaddis began to argue that the Soviets should be held more accountable for the conflict. According to Gaddis, Stalin was in a much better position to compromise than his Western counterparts, given his much broader power within his own regime than Truman, who was often undermined by vociferous political opposition at home. Asking whether it would have been possible to predict that the wartime alliance would fall apart within a matter of months, leaving in its place nearly a half century of cold war, Gaddis wrote in his 1997 book We Now Know: Rethinking Cold War History:
Geography, demography, and tradition contributed to this outcome but did not determine it. It took men, responding unpredictably to circumstances, to forge the chain of causation; and it took [Stalin] in particular, responding predictably to his own authoritarian, paranoid, and narcissistic predisposition, to lock it into place.
According to Leffler, the most distinctive feature of We Now Know is the extent to which Gaddis "abandons post-revisionism and returns to a more traditional interpretation of the Cold War". Gaddis is now widely seen as more "orthodox" than "post-revisionist". The revisionist Bruce Cumings had a high-profile debate with Gaddis in the 1990s, where Cumings criticized post-revisionism generally and Gaddis in particular as moralistic and lacking in historical rigor. Cumings urged post-revisionists to employ modern geopolitical approaches like world-systems theory in their work.
Other post-revisionist accounts focus on the importance of the settlement of the German Question in the scheme of geopolitical relations between the United States and the Soviet Union.
=== 21st century scholarship ===
Since the 2000s, benefiting largely from the opening of Cold War-era archives in the former Soviet Union and elsewhere in the world, Cold War historians have begun to move on from questions of blame and inevitability to consider the Cold War in the longue durée of the 20th century, alongside questions of culture, technology and ideology. Historians have also begun to consider the Cold War from a variety of international perspectives (non-American and non-Soviet) and have especially stressed the importance of what was then called the "Third World" in the latter half of the Cold War. As Odd Arne Westad, co-editor of the Cambridge History of the Cold War (2010), has written:
Very few of our contributors believe that a "definitive" history of the Cold War is possible (or indeed that it should be possible). But a heterogeneous approach creates a strong need for contextualization. ... First and foremost we need to situate the Cold War within the wider history of the twentieth century in a global perspective. We need to indicate how Cold War conflicts connect to broader trends in social, economic, and intellectual history as well as to the political and military developments of the longer term of which it forms a part.
Corresponding to the broader "emotional turn" in 21st-century historiography, historians have increasingly begun to consider the unfolding of the Cold War in emotional and psychological terms. They have sought emotional explanations for political decisions and developments typically examined from a rational perspective and have analysed interpersonal dynamics between world leaders. Frank Costigliola is a prolific proponent of the role of emotion in historical analysis. For example, he positions the breakdown of the wartime alliance between the United States and the Soviet Union and the hostilities of the early Cold War as being, in part, a result of the heightened emotions of key figures in American foreign policy, such as Averell Harriman, following the death of Franklin D. Roosevelt. To Costigliola, it was the "attitudes and rhetoric" of key diplomats at the end of World War II that set the tone for future relations between the United States and the Soviet Union.
Consistent with the move away from questions of blame to questions of culture and ideology, American historians have also begun exploring the intersection between domestic U.S. political developments and the early years of the Cold War. One such example is Thomas Borstelmann's 2003 work The Cold War and the Color Line, which defines domestic racial discrimination after 1945 as a foreign as well as a domestic issue: America's closest allies against the Soviet Union were colonial powers whose interests needed to be balanced against those of the emerging "Third World" in a diverse multiracial, anti-Communist alliance. Domestically, at the same time, U.S. racial reform was essential to preserve the national consensus needed to sustain the Cold War struggle.
==== Revisionism in the 21st century ====
Despite the overall shift away from the orthodox-versus-revisionist debates, new revisionist works have continued to emerge in the 21st century. One such example is Norman Naimark's 2019 Stalin and the Fate of Europe: The Postwar Struggle for Sovereignty. Another work arguing that Stalin in fact sought to avoid the Cold War is Geoffrey Roberts's 2006 Stalin's Wars: From World War to Cold War, 1939–1953.
== Espionage ==
Since the 1990s, new memoirs and archival materials have opened up the study of espionage and intelligence during the Cold War. Scholars are reviewing how the conflict's origins, course, and outcome were shaped by the intelligence activities of the United States, the Soviet Union, and other key countries. Special attention is paid to how complex images of one's adversaries were shaped by secret intelligence that is now publicly known.
== See also ==
Historical revisionism
Realism (international relations)
== Historiography ==
Berger, Henry W. ed. A William Appleman Williams Reader (1992).
Ferrell, Robert H. Harry S. Truman and the Cold War Revisionists. (2006). 142 pp. excerpt and text search.
Fitzpatrick, Sheila. "Russia's Twentieth Century in History and Historiography," The Australian Journal of Politics and History, Vol. 46, 2000.
Gardner, Lloyd C. (ed.) Redefining the Past: Essays in Diplomatic History in Honor of William Appleman Williams (1986).
Garthoff, Raymond L. "Foreign Intelligence and the Historiography of the Cold War." Journal of Cold War Studies 2004 6(2): 21–56. ISSN 1520-3972 Fulltext: Project MUSE.
Isaac, Joel; Bell, Duncan, eds. Uncertain Empire: American History and the Idea of the Cold War (2012) online review by Victoria Hallinan.
Kaplan, Lawrence S. American Historians and the Atlantic Alliance, (1991) online edition..
Kort, Michael. The Columbia Guide to the Cold War (1998).
Matlock, Jack E. "The End of the Cold War" Harvard International Review, Vol. 23 (2001).
Melanson, Richard A. "Revisionism Subdued? Robert James Maddox and the Origins of the Cold War" Political Science Reviewer, Vol. 7 (1977).
Melanson, Richard A. Writing History and making Policy: The Cold War, Vietnam, and Revisionism (1983).
Olesen, Thorsten B.Ed. The Cold War and the Nordic Countries: Historiography at a Crossroads. Odense: U Southern Denmark Press, 2004. Pp. 194. online review.
Suri, Jeremi. "Explaining the End of the Cold War: A New Historical Consensus?" Journal of Cold War Studies - Volume 4, Number 4, Fall 2002, pp. 60–92 in Project MUSE.
Trachtenberg, Marc. "The Marshall Plan as Tragedy." Journal of Cold War Studies 2005 7(1): 135–140. ISSN 1520-3972 Fulltext: in Project MUSE.
Walker, J. Samuel. "Historians and Cold War Origins: The New Consensus", in Gerald K. Haines and J. Samuel Walker, eds., American Foreign Relations: A Historiographical Review (1981), 207–236.
Watry, David M. Diplomacy at the Brink: Eisenhower, Churchill, and Eden in the Cold War. Baton Rouge: Louisiana State University Press, 2014. ISBN 9780807157183.
Westad, Odd Arne, ed. Reviewing the Cold War: Approaches, Interpretations, Theory (2000) essays by scholars.
Westad, Odd Arne, "The New International History of the Cold War: Three (Possible) Paradigms," Diplomatic History, 2000, Vol. 24 in EBSCO.
Westad, Odd Arne, ed. Reviewing the Cold War: Approaches, Interpretations, Theory (2000) excerpt and text search.
Westad, Odd Arne, The Cold War: A World History, Basic Books, 2017. ISBN 0465054935.
White, Timothy J. "Cold War Historiography: New Evidence Behind Traditional Typographies" International Social Science Review, (2000).
Xia, Yafeng. "The Study of Cold War International History in China: A Review of the Last Twenty Years." Journal of Cold War Studies 10#1 (Winter 2008): 81–115. In Project MUSE.
== References ==
== Bibliography == | Wikipedia/Historiography_of_the_Cold_War |
The Mughal Empire was an early modern empire in South Asia. At its peak, the empire stretched from the outer fringes of the Indus River Basin in the west, northern Afghanistan in the northwest, and Kashmir in the north, to the highlands of present-day Assam and Bangladesh in the east, and the uplands of the Deccan Plateau in South India.
The Mughal Empire is conventionally said to have been founded in 1526 by Babur, a chieftain from what is today Uzbekistan, who employed aid from the neighboring Safavid and Ottoman Empires to defeat the sultan of Delhi, Ibrahim Lodi, in the First Battle of Panipat, and to sweep down the plains of North India. The Mughal imperial structure, however, is sometimes dated to 1600, to the rule of Babur's grandson, Akbar. This imperial structure lasted until 1720, shortly after the death of the last major emperor, Aurangzeb, during whose reign the empire also achieved its maximum geographical extent. Reduced subsequently to the region in and around Old Delhi by 1760, the empire was formally dissolved by the British Raj after the Indian Rebellion of 1857.
Although the Mughal Empire was created and sustained by warfare, it did not vigorously suppress the cultures and peoples it came to rule; rather, it equalized and placated them through new administrative practices and diverse ruling elites, leading to more efficient, centralised, and standardized rule. The base of the empire's collective wealth was agricultural taxes, instituted by the third Mughal emperor, Akbar. These taxes, which amounted to well over half the output of a peasant cultivator, were paid in the well-regulated silver currency, and caused peasants and artisans to enter larger markets.
The relative peace maintained by the empire during much of the 17th century was a factor in India's economic expansion. The burgeoning European presence in the Indian Ocean and an increasing demand for Indian raw and finished products generated much wealth for the Mughal court. There was more conspicuous consumption among the Mughal elite, resulting in greater patronage of painting, literary forms, textiles, and architecture, especially during the reign of Shah Jahan. Among the Mughal UNESCO World Heritage Sites in South Asia are: Agra Fort, Fatehpur Sikri, Red Fort, Humayun's Tomb, Lahore Fort, Shalamar Gardens, and the Taj Mahal, which is described as "the jewel of Muslim art in India, and one of the universally admired masterpieces of the world's heritage".
== Name ==
The closest to an official name for the empire was Hindustan, which was documented in the Ain-i-Akbari. Mughal administrative records also refer to the empire as "dominion of Hindustan" (Wilāyat-i-Hindustān), "country of Hind" (Bilād-i-Hind), "Sultanate of Al-Hind" (Salṭanat(i) al-Hindīyyah) as observed in the epithet of Emperor Aurangzeb or endonymous identification from emperor Bahadur Shah Zafar as "Land of Hind" (Hindostān) in Hindustani. Contemporary Chinese chronicles referred to the empire as Hindustan (Héndūsītǎn). In the west, the term "Mughal" was used for the emperor, and by extension, the empire as a whole.
The Mughal designation for their dynasty was Gurkani (Gūrkāniyān), a reference to their descent from the Turco-Mongol conqueror Timur who took the title Gūrkān 'son-in-law' after his marriage to a Chinggisid princess. The word Mughal (also spelled Mogul or Moghul in English) is the Indo-Persian form of Mongol. The Mughal dynasty's early followers were Chagatai Turks and not Mongols. The term Mughal was applied to them in India by association with the Mongols and to distinguish them from the Afghan elite which ruled the Delhi Sultanate. The term remains disputed by Indologists. In Marshall Hodgson's view, the dynasty should be called Timurid/Timuri or Indo-Timurid.
== History ==
=== Babur and Humayun (1526–1556) ===
The Mughal Empire was founded by Babur (reigned 1526–1530), a Central Asian ruler who was descended from the Persianized Turco-Mongol conqueror Timur (the founder of the Timurid Empire) on his father's side, and from Genghis Khan on his mother's side. Paternally, Babur belonged to the Turkicized Barlas tribe of Mongol origin. Ousted from his ancestral domains in Central Asia, Babur turned to India to satisfy his ambitions. He established himself in Kabul and then pushed steadily southward into India from Afghanistan through the Khyber Pass. Babur's forces defeated Ibrahim Lodi, Sultan of Delhi, in the First Battle of Panipat in 1526. Through his use of firearms and cannons, he was able to shatter Ibrahim's armies despite being at a numerical disadvantage, expanding his dominion up to the mid Indo-Gangetic Plain. After the battle, the centre of Mughal power shifted to Agra. In the decisive Battle of Khanwa, fought near Agra a year later, the Timurid forces of Babur defeated the combined Rajput armies of Rana Sanga of Mewar, with his native cavalry employing traditional flanking tactics.
The preoccupation with wars and military campaigns, however, did not allow the new emperor to consolidate the gains he had made in India. The instability of the empire became evident under his son, Humayun (reigned 1530–1556), who was forced into exile in Persia by the rebellious Sher Shah Suri (reigned 1540–1545). Humayun's exile in Persia established diplomatic ties between the Safavid and Mughal courts and led to increasing Persian cultural influence in the later restored Mughal Empire. Humayun's triumphant return from Persia in 1555 restored Mughal rule in some parts of India, but he died in an accident the next year.
=== Akbar to Aurangzeb (1556–1707) ===
Akbar (reigned 1556–1605) was born Jalal-ud-din Muhammad in the Umarkot Fort, to Humayun and his wife Hamida Banu Begum, a Persian princess. Akbar succeeded to the throne under a regent, Bairam Khan, who helped consolidate the Mughal Empire in India. Through warfare, Akbar was able to extend the empire in all directions and controlled almost the entire Indian subcontinent north of the Godavari River. He created a new ruling elite loyal to him, implemented a modern administration, and encouraged cultural developments. He increased trade with European trading companies. India developed a strong and stable economy, leading to commercial expansion and economic development. Akbar allowed freedom of religion at his court and attempted to resolve socio-political and cultural differences in his empire by establishing a new religion, Din-i-Ilahi, with strong characteristics of a ruler cult. He left his son an internally stable state, which was in the midst of its golden age, but before long signs of political weakness would emerge.
Jahangir (born Salim, reigned 1605–1627) was born to Akbar and his wife Mariam-uz-Zamani, an Indian princess. Salim was named after the Indian Sufi saint, Salim Chishti. He "was addicted to opium, neglected the affairs of the state, and came under the influence of rival court cliques". Jahangir distinguished himself from Akbar by making substantial efforts to gain the support of the Islamic religious establishment. One way he did this was by bestowing many more madad-i-ma'ash (tax-free personal land revenue grants given to religiously learned or spiritually worthy individuals) than Akbar had. In contrast to Akbar, Jahangir came into conflict with non-Muslim religious leaders, notably the Sikh guru Arjan, whose execution was the first of many conflicts between the Mughal Empire and the Sikh community.
Shah Jahan (reigned 1628–1658) was born to Jahangir and his wife Jagat Gosain. His reign ushered in the golden age of Mughal architecture. During the reign of Shah Jahan, the splendour of the Mughal court reached its peak, as exemplified by the Taj Mahal. The cost of maintaining the court, however, began to exceed the revenue coming in. Shah Jahan extended the Mughal Empire to the Deccan by ending the Ahmadnagar Sultanate and forcing the Adil Shahis and Qutb Shahis to pay tribute.
Shah Jahan's eldest son, the liberal Dara Shikoh, became regent in 1658, as a result of his father's illness. Dara championed a syncretistic Hindu-Muslim culture, emulating his great-grandfather Akbar. With the support of the Islamic orthodoxy, however, a younger son of Shah Jahan, Aurangzeb (r. 1658–1707), seized the throne. Aurangzeb defeated Dara in 1659 and had him executed. Although Shah Jahan fully recovered from his illness, Aurangzeb kept Shah Jahan imprisoned until he died in 1666. Aurangzeb brought the empire to its greatest territorial extent, and oversaw an increase in the Islamicization of the Mughal state. He encouraged conversion to Islam, reinstated the jizya on non-Muslims, and compiled the Fatawa 'Alamgiri, a collection of Islamic law. Aurangzeb also ordered the execution of the Sikh guru Tegh Bahadur, leading to the militarization of the Sikh community. From the imperial perspective, conversion to Islam integrated local elites into the king's vision of a network of shared identity that would join disparate groups throughout the empire in obedience to the Mughal emperor. He led campaigns from 1682 in the Deccan, annexing its remaining Muslim powers of Bijapur and Golconda, though engaged in a prolonged conflict in the region which had a ruinous effect on the empire. The campaigns took a toll on the Mughal treasury, and Aurangzeb's absence led to a severe decline in governance, while stability and economic output in the Mughal Deccan plummeted.
Aurangzeb is considered the most controversial Mughal emperor, with some historians arguing his religious conservatism and intolerance undermined the stability of Mughal society, while other historians question this, noting that he financed or patronised the building of non-Muslim institutions, employed significantly more Hindus in his imperial bureaucracy than his predecessors did, and opposed bigotry against Hindus and Shia Muslims.
=== Decline (1707–1857) ===
Aurangzeb's son, Bahadur Shah I, repealed the religious policies of his father and attempted to reform the administration. "However, after he died in 1712, the Mughal dynasty began to sink into chaos and violent feuds. In 1719 alone, four emperors successively ascended the throne", as figureheads under the rule of a brotherhood of nobles belonging to the Indian Muslim caste known as the Sadaat-e-Bara, whose leaders, the Sayyid Brothers, became the de facto sovereigns of the empire.
During the reign of Muhammad Shah (reigned 1719–1748), the empire began to break up, and vast tracts of central India passed from Mughal to Maratha hands. As the Mughals tried to suppress the independence of Nizam-ul-Mulk, Asaf Jah I, in the Deccan, he encouraged the Marathas to invade central and northern India. The Indian campaign of Nader Shah, who had previously reestablished Iranian suzerainty over most of West Asia, the Caucasus, and Central Asia, culminated in the Sack of Delhi, shattering the remnants of Mughal power and prestige and carrying off the accumulated Mughal treasury. The Mughals could no longer finance the huge armies with which they had formerly enforced their rule. Many of the empire's elites now sought to control their own affairs and broke away to form independent kingdoms. But lip service continued to be paid to the Mughal emperor as the highest manifestation of sovereignty. Not only the Muslim gentry, but Maratha, Hindu, and Sikh leaders took part in ceremonial acknowledgements of the emperor as the sovereign of India.
Meanwhile, some regional polities within the increasingly fragmented Mughal Empire involved themselves and the state in global conflicts, leading only to defeat and loss of territory in the Carnatic Wars and the Bengal War.
The Mughal Emperor Shah Alam II (1759–1806) made futile attempts to reverse the Mughal decline. Delhi was sacked by the Afghans, and when the Third Battle of Panipat was fought between the Maratha Empire and the Afghans (led by Ahmad Shah Durrani) in 1761, ending in an Afghan victory, the emperor had ignominiously taken temporary refuge with the British to the east. In 1771, the Marathas recaptured Delhi from the Rohillas, and in 1784 the Marathas officially became the protectors of the emperor in Delhi, a state of affairs that continued until the Second Anglo-Maratha War. Thereafter, the British East India Company became the protectors of the Mughal dynasty in Delhi. The company took control of the former Mughal province of Bengal-Bihar in 1793 after abolishing local rule (Nizamat); its control lasted until 1858 and marked the beginning of the British colonial era over the Indian subcontinent. By 1857 a considerable part of former Mughal India was under the East India Company's control. After a crushing defeat in the Indian Rebellion of 1857, which he nominally led, the last Mughal emperor, Bahadur Shah Zafar, was deposed by the British East India Company and exiled in 1858 to Rangoon, Burma.
=== Causes of decline ===
Historians have offered numerous accounts of the several factors involved in the rapid collapse of the Mughal Empire between 1707 and 1720, after a century of growth and prosperity. A succession of short-lived, incompetent and weak rulers, and civil wars over the succession, created political instability at the centre. The Mughals appeared virtually unassailable during the 17th century, but once their power was gone, the empire's overstretch became clear and the situation could not be recovered. The seemingly innocuous European trading companies, such as the British East India Company, played no real part in the initial decline; they were still racing to get permission from the Mughal rulers to establish trading posts and factories in India.
In fiscal terms, the throne lost the revenues needed to pay its chief officers, the emirs (nobles) and their entourages. The emperor lost authority as the widely scattered imperial officers lost confidence in the central authorities and made their deals with local men of influence. The imperial army bogged down in long, futile wars against the more aggressive Marathas, and lost its fighting spirit. Finally came a series of violent political feuds over control of the throne. After the execution of Emperor Farrukhsiyar in 1719, local Mughal successor states took power in region after region.
== Administration and state ==
The Mughal Empire had a highly centralised, bureaucratic government, most of which was instituted during the rule of the third Mughal emperor, Akbar. The central government was headed by the Mughal emperor; immediately beneath him were four ministries. The finance/revenue ministry, headed by an official called a diwan, was responsible for controlling revenues from the empire's territories, calculating tax revenues, and using this information to distribute assignments. The ministry of the military (army/intelligence) was headed by an official titled mir bakhshi, who was in charge of military organisation, messenger service, and the mansabdari system. The ministry in charge of law/religious patronage was the responsibility of the sadr as-sudr, who appointed judges and managed charities and stipends. Another ministry was dedicated to the imperial household and public works, headed by the mir saman. Of these ministers, the diwan held the most importance, and typically acted as the wazir (prime minister) of the empire.
=== Administrative divisions ===
The empire was divided into subahs (provinces), each of which was headed by a provincial governor called a subahdar. The structure of the central government was mirrored at the provincial level; each subah had its own bakhshi, sadr as-sudr, and finance minister, who reported directly to the central government rather than to the subahdar. Subahs were subdivided into administrative units known as sarkars, which were further divided into groups of villages known as parganas. The Mughal government in the pargana consisted of a Muslim judge and a local tax collector. Parganas were the basic administrative unit of the Mughal Empire.
Mughal administrative divisions were not static. Territories were often rearranged and reconstituted for better administrative control, and to extend cultivation. For example, a sarkar could turn into a subah, and parganas were often transferred between sarkars. The hierarchy of division was sometimes ambiguous, as a territory could fall under multiple overlapping jurisdictions. Administrative divisions were also vague in their geography—the Mughal state did not have enough resources or authority to undertake detailed land surveys, and hence the geographical limits of these divisions were not formalised and maps were not created. The Mughals instead recorded detailed statistics about each division, to assess the territory's capacity for revenue, based on simpler land surveys.
=== Capitals ===
The Mughals had multiple imperial capitals, established throughout their rule. These were the cities of Agra, Delhi, Lahore, and Fatehpur Sikri. Power often shifted back and forth between these capitals. Sometimes this was necessitated by political and military demands, but shifts also occurred for ideological reasons (for example, Akbar's establishment of Fatehpur Sikri), or even simply because the cost of establishing a new capital was marginal. Situations where two simultaneous capitals existed happened multiple times in Mughal history. Certain cities also served as short-term, provincial capitals, as was the case with Aurangzeb's shift to Aurangabad in the Deccan. Kabul was the summer capital of the Mughals from 1526 to 1681.
The imperial camp, used for military expeditions and royal tours, also served as a kind of mobile, "de facto" administrative capital. From the time of Akbar, Mughal camps were huge in scale, accompanied by numerous personages associated with the royal court, as well as soldiers and labourers. All administration and governance were carried out within them. The Mughal Emperors spent a significant portion of their ruling period within these camps.
After Aurangzeb, the Mughal capital definitively became the walled city of Shahjahanabad (Old Delhi).
=== Law ===
The Mughal Empire's legal system was context-specific and evolved throughout the empire's rule. Being a Muslim state, the empire employed fiqh (Islamic jurisprudence) and therefore the fundamental institutions of Islamic law such as those of the qadi (judge), mufti (jurisconsult), and muhtasib (censor and market supervisor) were well-established in the Mughal Empire. However, the dispensation of justice also depended on other factors, such as administrative rules, local customs, and political convenience. This was due to Persianate influences on Mughal ideology and the fact that the Mughal Empire governed a non-Muslim majority. Scholar Mouez Khalfaoui notes that legal institutions in the Mughal Empire systemically suffered from the corruption of local judges.
==== Legal ideology ====
The Mughal Empire followed the Sunni Hanafi system of jurisprudence. In its early years, the empire relied on Hanafi legal references inherited from its predecessor, the Delhi Sultanate. These included the al-Hidayah (the best guidance) and the Fatawa al-Tatarkhaniyya (religious decisions of the Emir Tatarkhan). During the Mughal Empire's peak, the Fatawa 'Alamgiri was commissioned by Emperor Aurangzeb. This compendium of Hanafi law sought to serve as a central reference for the Mughal state, one that dealt with the specifics of the South Asian context.
The Mughal Empire also drew on Persian notions of kingship. Particularly, this meant that the Mughal emperor was considered the supreme authority on legal affairs.
==== Courts of law ====
Various kinds of courts existed in the Mughal Empire. One such court was that of the qadi. The Mughal qadi was responsible for dispensing justice; this included settling disputes, judging people for crimes, and dealing with inheritances and orphans. The qadi also had additional importance in documents, as the seal of the qadi was required to validate deeds and tax records. Qadis did not constitute a single position, but made up a hierarchy. For example, the most basic kind was the pargana (district) qadi. More prestigious positions were those of the qadi al-quddat (judge of judges), who accompanied the mobile imperial camp, and the qadi-yi lashkar (judge of the army). Qadis were usually appointed by the emperor or the sadr-us-sudr (chief of charities). Muslims and non-Muslims alike availed themselves of the qadi's jurisdiction.
The jagirdar (local tax collector) was another kind of official approached, especially for high-stakes cases. Subjects of the Mughal Empire also took their grievances to the courts of superior officials, who held more authority and punitive power than the local qadi. Such officials included the kotwal (local police), the faujdar (an officer controlling multiple districts and troops of soldiers), and the most powerful, the subahdar (provincial governor). In some cases, the emperor dispensed justice directly. Jahangir was known to have installed a "chain of justice" in the Agra Fort that any aggrieved subject could shake to get the attention of the emperor and bypass the inefficacy of officials.
Self-regulating tribunals operating at the community or village level were common, but sparse documentation of them exists. For example, it is unclear how panchayats (village councils) operated in the Mughal era.
== Economy ==
The Mughal economy was large and prosperous. India was producing 24.5% of the world's manufacturing output up until 1750. Mughal India's economy has been described as a form of proto-industrialization, like that of 18th-century Western Europe before the Industrial Revolution.
Modern historians and researchers generally agree that the Mughal Empire's economic policy resembled a laissez-faire system in its dealings with trade and bullion in pursuit of economic ends.
The Mughals were responsible for building an extensive road system and creating a uniform currency. The road network, vital to the economic infrastructure, was built by a public works department set up by the Mughals, which designed, constructed and maintained roads linking towns and cities across the empire, making trade easier to conduct.
The main base of the empire's collective wealth was agricultural taxes, instituted by the third Mughal emperor, Akbar. These taxes, which amounted to well over half the output of a peasant cultivator, were paid in the well-regulated silver currency, and caused peasants and artisans to enter larger markets. Modern historians estimate that around 1595 the annual revenues of the Mughal state stood at some 99,000,000 rupees.
=== Coinage ===
The Mughals adopted and standardised the rupee (rupiya, or silver) and dam (copper) currencies introduced by Sur Emperor Sher Shah Suri during his brief rule. The Mughals minted coins with high purity, never dropping below 96%, and without debasement until the 1720s.
Despite India having its own stocks of gold and silver, the Mughals produced minimal gold of their own and mostly minted coins from imported bullion, as a result of the empire's strong export-driven economy, with global demand for Indian agricultural and industrial products drawing a steady stream of precious metals into India.
=== Labour ===
The historian Shireen Moosvi estimates that in terms of contributions to the Mughal economy, in the late 16th century, the primary sector contributed 52%, the secondary sector 18% and the tertiary sector 29%; the secondary sector contributed a higher percentage than in early 20th-century British India, where the secondary sector only contributed 11% to the economy. In terms of the urban-rural divide, 18% of Mughal India's labour force were urban and 82% were rural, contributing 52% and 48% to the economy, respectively.
According to Moosvi, Mughal India had a per-capita income, in terms of wheat, 1.24% higher in the late 16th century than British India did in the early 20th century. This income, however, would have to be revised downwards if manufactured goods, like clothing, were considered. Per-capita expenditure on clothing was much smaller than on food, though, so relative income between 1595 and 1596 should be comparable to that of 1901–1910. In a system where wealth was hoarded by elites, however, wages for manual labour were depressed. While slavery also existed, it was limited largely to household servants.
=== Agriculture ===
Indian agricultural production increased under the Mughal Empire. A variety of crops were grown, including food crops such as wheat, rice, and barley, and non-food cash crops such as cotton, indigo and opium. By the mid-17th century, Indian cultivators began to extensively grow two new crops from the Americas, maize and tobacco.
The Mughal administration emphasised the agrarian reform that began under the non-Mughal emperor Sher Shah Suri, which Akbar adopted and furthered with more reforms. The civil administration was organised hierarchically based on merit, with promotions based on performance. The Mughal government funded the building of irrigation systems across the empire, which produced much higher crop yields and increased the net revenue base, leading to increased agricultural production.
A major Mughal reform introduced by Akbar was a new land revenue system called zabt. He replaced the tribute system, previously common in India and used by Tokugawa Japan at the time, with a monetary tax system based on a uniform currency. The revenue system was biased in favour of higher value cash crops such as cotton, indigo, sugar cane, tree crops, and opium, providing state incentives to grow cash crops, in addition to rising market demand. Under the zabt system, the Mughals also conducted extensive cadastral surveying to assess the area of land under plough cultivation, with the Mughal state encouraging greater land cultivation by offering tax-free periods to those who brought new land under cultivation. The expansion of agriculture and cultivation continued under later Mughal emperors, including Aurangzeb.
Mughal agriculture was in some ways advanced compared to European agriculture at the time, exemplified by the common use of the seed drill among Indian peasants before its adoption in Europe. Geared sugar rolling mills first appeared in Mughal India, using the principle of rollers as well as worm gearing, by the 17th century.
=== Industrial manufacturing ===
South Asia under Mughal rule was very fertile ground for manufacturing technologies coveted by Europeans before the Industrial Revolution. Up until 1750, India produced about 25% of the world's industrial output.
Manufactured goods and cash crops from the Mughal Empire were sold throughout the world. The growth of manufacturing industries in the Indian subcontinent during the Mughal era in the 17th–18th centuries has been referred to as a form of proto-industrialization, similar to 18th-century Western Europe before the Industrial Revolution.
In early modern Europe, there was significant demand for products from Mughal India, particularly cotton textiles, as well as goods such as spices, peppers, indigo, silks, and saltpetre (for use in munitions). European fashion, for example, became increasingly dependent on Mughal Indian textiles and silks.
==== Textile industry ====
The largest manufacturing industry in the Mughal Empire was textile manufacturing, particularly cotton textile manufacturing, which included the production of piece goods, calicos, and muslins. The cotton textile industry was responsible for a large part of the empire's international trade. India had a 25% share of the global textile trade in the early 18th century, and it represented the most important manufactured goods in world trade in the 18th century. The most important centre of cotton production was the Bengal province, particularly around its capital city of Dhaka.
The production of cotton was advanced by the diffusion of the spinning wheel across India shortly before the Mughal era, lowering the costs of yarn and helping to increase demand for cotton. The diffusion of the spinning wheel and the incorporation of the worm gear and crank handle into the roller cotton gin led to greatly expanded Indian cotton textile production during the Mughal era.
=== Bengal Subah ===
The Bengal Subah province was especially prosperous from the time of its takeover by the Mughals in 1590 until the British East India Company seized control in 1757. Historian C. A. Bayly wrote that it was probably the Mughal Empire's wealthiest province. Domestically, much of India depended on Bengali products such as rice, silks and cotton textiles. Overseas, Europeans depended on Bengali products such as cotton textiles, silks, and opium. The province was a leading producer of grains, salt, fruits, liquors and wines, precious metals and ornaments.
After 150 years of rule by Mughal viceroys, Bengal gained de facto independence as a dominion under Murshid Quli Khan, the first Nawab of Bengal, in 1717. The Nawabs permitted European companies to set up trading posts across the region; these companies regarded Bengal as the richest place for trade.
==== Shipbuilding industry ====
Mughal India had a large shipbuilding industry, which was also largely centred in the Bengal province. Economic historian Indrajit Ray estimates the shipbuilding output of Bengal during the sixteenth and seventeenth centuries at 223,250 tons annually, compared with 23,061 tons produced in nineteen colonies in North America from 1769 to 1771. He also assesses ship repairing as very advanced in Bengal.
== Demographics ==
=== Population ===
India's population growth accelerated under the Mughal Empire, with an unprecedented economic and demographic upsurge which boosted the Indian population by an estimated 60% to 253% over the 200 years from 1500 to 1700. The Indian population grew faster during the Mughal era than at any known point in Indian history before it. By the time of Aurangzeb's reign, there were a total of 455,698 villages in the Mughal Empire.
The following table gives population estimates for the Mughal Empire, compared to the total population of South Asia including the regions of modern India, Pakistan, and Bangladesh, and compared to the world population:
=== Urbanization ===
According to Irfan Habib, cities and towns boomed under the Mughal Empire, which had a relatively high degree of urbanization for its time, with 15% of its population living in urban centres. This was higher than the proportion of the urban population in contemporary Europe and higher than that of British India in the 19th century; the level of urbanization in Europe did not reach 15% until the 19th century.
Under Akbar's reign in 1600, the Mughal Empire's urban population was up to 17 million people, 15% of the empire's total population. This was larger than the entire urban population in Europe at the time, and even a century later in 1700, the urban population of England, Scotland and Wales did not exceed 13% of its total population, while British India had an urban population that was under 13% of its total population in 1800 and 9% in 1881, a decline from the earlier Mughal era. By 1700, Mughal India had an urban population of 23 million people, larger than British India's urban population of 22.3 million in 1871.
These estimates were criticised by Tim Dyson, who considers them exaggerations. According to Dyson, the urbanization of the Mughal Empire was less than 9%.
The historian Nizamuddin Ahmad (1551–1621) reported that, under Akbar's reign, there were 120 large cities and 3200 townships. Several cities in India had a population between a quarter-million and half-million people, with larger cities including Agra (in Agra Subah) with up to 800,000 people, Lahore (in Lahore Subah) with up to 700,000 people, Dhaka (in Bengal Subah) with over 1 million people, and Delhi (in Delhi Subah) with over 600,000 people.
Cities acted as markets for the sale of goods, and provided homes for a variety of merchants, traders, shopkeepers, artisans, moneylenders, weavers, craftspeople, officials, and religious figures. However, several cities were military and political centres, rather than manufacturing or commerce centres.
== Culture ==
Generally, classical historiographies depicted the Mughal Empire as originating in a sedentarized agrarian society. However, modern historians such as André Wink, Jos J. L. Gommans, Anatoly Khazanov, Thomas J. Barfield, and others have argued that the Mughals originated from a nomadic culture. Pius Malekandathil argued instead that although the Mughals did have their origin in a nomadic civilization, they became more sedentarized as time passed, as exemplified by their military tradition. The Mughal Empire was definitive in the early-modern and modern periods of South Asian history, with its legacy in India, Pakistan, Bangladesh and Afghanistan seen in cultural contributions such as:
Centralised imperial rule that consolidated the smaller polities of South Asia.
The amalgamation of Persian art and literature with Indian art.
The development of Mughlai cuisine, an amalgamation of South Asian, Iranian and Central Asian culinary styles.
The development of Mughal clothing, jewellery and fashion, utilizing richly decorated fabrics such as muslin, silk, brocade and velvet.
The influence of the Persian language over Old Hindi led to the development of the Hindustani language.
The introduction of sophisticated Iranian-style waterworks and horticulture through Mughal gardening.
The introduction of Turkish baths into the Indian subcontinent.
The evolution and refinement of Mughal and Indian architecture, and, in turn, the development of later Rajput and Sikh palatial architecture. A famous Mughal landmark is the Taj Mahal.
The development of the Pehlwani style of Indian wrestling, a combination of Indian malla-yuddha and Persian varzesh-e bastani.
The construction of Maktab schools, where youth were taught the Quran and Islamic law such as the Fatawa 'Alamgiri in their indigenous languages.
The development of Hindustani classical music, and instruments such as the sitar.
=== Customs ===
Marriage processions among the royals of the Mughal Empire were recorded with many reports of extravagant gifts. One occasion was the marriage of Salim, a son of Emperor Akbar, to the daughter of Raja Bhagwant Das, ruler of Amber; the gift presented by Bhagwant Das consisted of many horses, 100 elephants, and many male and female slaves of Abyssinian, Caucasian, and native Indian origins, who brought with them various gold and silver utensils as dowry.
=== Architecture ===
The Mughals made a major contribution to the Indian subcontinent with the development of their distinctive architectural style. This style was derived from earlier Indo-Islamic architecture as well as from Iranian and Central Asian architecture (particularly Timurid architecture), while incorporating further influences from Hindu architecture. Mughal architecture is distinguished, among other things, by bulbous domes, ogive arches, carefully-composed and polished façades, and the use of hard red sandstone and marble as construction materials.
Furthermore, William Dalrymple mentions that an ice house structure existed in Delhi during the final days of Mughal rule in 1857. Emperor Shah Jahan is recorded as having established an ice-house in Sirmaur, north of Delhi.
Many monuments were built during the Mughal era by the Muslim emperors, especially Shah Jahan, including the Taj Mahal—a UNESCO World Heritage Site considered "the jewel of Muslim art in India and one of the universally admired masterpieces of the world's heritage", attracting 7–8 million unique visitors a year. The palaces, tombs, gardens and forts built by the dynasty stand today in Agra, Aurangabad, Delhi, Dhaka, Fatehpur Sikri, Jaipur, Lahore, Kabul, Sheikhupura, and many other cities of India, Pakistan, Afghanistan, and Bangladesh, such as:
=== Art and literature ===
The Mughal artistic tradition, mainly expressed in painted miniatures, as well as small luxury objects, was eclectic, borrowing from Iranian, Indian, Chinese and Renaissance European stylistic and thematic elements. Mughal emperors often took in Iranian bookbinders, illustrators, painters and calligraphers from the Safavid court due to the commonalities of their Timurid styles, and due to the Mughal affinity for Iranian art and calligraphy. Miniatures commissioned by the Mughal emperors initially focused on large projects illustrating books with eventful historical scenes and court life, but later included more single images for albums, with portraits and animal paintings displaying a profound appreciation for the serenity and beauty of the natural world. For example, Emperor Jahangir commissioned brilliant artists such as Ustad Mansur to realistically portray unusual flora and fauna throughout the empire.
The literary works Akbar and Jahangir ordered to be illustrated ranged from epics like the Razmnama (a Persian translation of the Hindu epic, the Mahabharata) to historical memoirs or biographies of the dynasty such as the Baburnama and Akbarnama, and Tuzk-e-Jahangiri. Richly finished albums (muraqqa) decorated with calligraphy and artistic scenes were mounted onto pages with decorative borders and then bound with covers of stamped and gilded or painted and lacquered leather. Aurangzeb (1658–1707) was never an enthusiastic patron of painting, largely for religious reasons, and took a turn away from the pomp and ceremonial of the court around 1668, after which he probably commissioned no more paintings.
=== Language ===
According to the Mughal court historian Aminai Qazvini, by the time of Shah Jahan the emperor was only familiar with a few Turki words, having shown little interest in the study of the language as a child. Though the Mughals were of Turko-Mongol origin, their reign enacted the revival and height of the Persian language in the Indian subcontinent, and by the end of the 16th century Turki (Chagatai) was understood by relatively few at court. Literary patronage was accompanied by the institutionalisation of Persian as an official and courtly language; this led to Persian reaching nearly the status of a first language for many inhabitants of Mughal India. Historian Muzaffar Alam argues that the Mughals used Persian purposefully as the vehicle of an overarching Indo-Persian political culture, to unite their diverse empire. Persian had a profound impact on the languages of South Asia; one such language, today known as Hindustani, developed in the imperial capital of Delhi in the late Mughal era. It began to be used as a literary language in the Mughal court from the reign of Shah Jahan, who described it as the language of his dastans (prose romances), and it replaced Persian as the informal language of the Muslim elite. According to the contemporary poet Mir Taqi Mir, "Urdu was the language of Hindustan by the authority of the King".
== Military ==
=== Gunpowder warfare ===
Mughal India was one of the three Islamic gunpowder empires, along with the Ottoman Empire and Safavid Persia. By the time he was invited by the Lodi governor of Lahore, Daulat Khan, to support his rebellion against Lodi Sultan Ibrahim Khan, Babur was familiar with gunpowder firearms and field artillery, and a method for deploying them. Babur had employed Ottoman expert Ustad Ali Quli, who showed Babur the standard Ottoman formation—artillery and firearm-equipped infantry protected by wagons in the centre and mounted archers on both wings. Babur used this formation at the First Battle of Panipat in 1526, where the Afghan and Rajput forces loyal to the Delhi Sultanate, though superior in numbers, lacked gunpowder weapons and were defeated. The decisive victory of the Timurid forces is one reason opponents rarely met Mughal princes in pitched battle throughout the empire's history. In India, guns made of bronze were recovered from Calicut (1504) and Diu (1533).
Fathullah Shirazi (c. 1582), a Persian polymath and mechanical engineer who worked for Akbar, developed an early multi-barrelled gun. As opposed to the polybolos and repeating crossbows used earlier in ancient Greece and China, respectively, Shirazi's rapid-firing gun had multiple gun barrels that fired hand cannons loaded with gunpowder. It may be considered a version of a volley gun.
By the 17th century, Indians were manufacturing a diverse variety of firearms; large guns, in particular, became visible in Tanjore, Dacca, Bijapur and Murshidabad.
=== Rocketry and explosives ===
In the sixteenth century, Akbar was the first to initiate and use metal cylinder rockets known as bans, particularly against war elephants, during the battle of Sanbal. In 1657, the Mughal Army used rockets during the siege of Bidar. Prince Aurangzeb's forces discharged rockets and grenades while scaling the walls. Sidi Marjan was mortally wounded when a rocket struck his large gunpowder depot, and after twenty-seven days of hard fighting, Bidar was captured by the Mughals.
In A History of Greek Fire and Gunpowder, James Riddick Partington described Indian rockets and explosive mines:
The Indian war rockets ... were formidable weapons before such rockets were used in Europe. They had bamboo rods, a rocket body lashed to the rod, and iron points. They were directed at the target and fired by lighting the fuse, but the trajectory was rather erratic. The use of mines and counter-mines with explosive charges of gunpowder is mentioned for the times of Akbar and Jahangir.
== Science ==
A new curriculum for the madrasas stressed the importance of uloom-i-muqalat (rational sciences) and introduced new subjects such as geometry, medicine, philosophy, and mathematics. The new curriculum produced a series of eminent scholars, engineers and architects.
=== Astronomy ===
While there appears to have been little concern for theoretical astronomy, Mughal astronomers made advances in observational astronomy and produced some Zij treatises. Humayun built a personal observatory near Delhi. According to Sulaiman Nadvi, Jahangir and Shah Jahan intended to build observatories too, but were unable to do so. The astronomical instruments and observational techniques used at the Mughal observatories were mainly derived from Islamic astronomy. In the 17th century, the Mughal Empire saw a synthesis between Islamic and Hindu astronomy, where Islamic observational instruments were combined with Hindu computational techniques.
During the decline of the Mughal Empire, the Hindu king Jai Singh II of Amber continued the work of Mughal astronomy. In the early 18th century, he built several large observatories called Yantra Mandirs, to rival Ulugh Beg's Samarkand observatory, and to improve on the earlier Hindu computations in the Siddhantas and Islamic observations in Zij-i-Sultani. The instruments he used were influenced by Islamic astronomy, while the computational techniques were derived from Hindu astronomy.
=== Metallurgy ===
The Mughal Empire operated the karkhanas, which functioned as workshops for craftsmen. These karkhanas produced arms and ammunition, as well as various items for the needs of the court and emperor, such as clothes, shawls, turbans, jewellery, gold and silverware, perfumes, medicines, carpets, beddings and tents, and, for the imperial stables, harnesses for the horses in iron, copper and other metals.
Another remarkable invention of Mughal India was the lost-wax cast, hollow, seamless celestial globe. It was invented in Kashmir by Ali Kashmiri ibn Luqman in 998 AH (1589–90 CE), and twenty other such globes were later produced in Lahore and Kashmir during the Mughal Empire. Before these were rediscovered in the 1980s, modern metallurgists believed it to be technically impossible to produce hollow metal globes without any seams.
Another 17th-century celestial globe was made by Diya' ad-din Muhammad in Lahore (in present-day Pakistan) in 1668.
== List of emperors ==
== See also ==
History of India
Flags of the Mughal Empire
List of Mongol states
Mughal-Mongol genealogy
Islam in South Asia
== References ==
=== Footnotes ===
=== Citations ===
=== Sources ===
Ali, M. Athar (2008). "The Mughal Polity – A Critique of Revisionist Approaches". Modern Asian Studies. 27 (5). Cambridge University Press: 699–710. doi:10.1017/S0026749X00001256. ISSN 1469-8099. Retrieved 18 April 2024.
Andrew de la Garza (2016). The Mughal Empire at War: Babur, Akbar and the Indian Military Revolution, 1500–1605. Taylor & Francis. ISBN 978-1-317-24531-5. Retrieved 6 December 2023.
Asher, Catherine B.; Talbot, Cynthia (2006). India before Europe. Cambridge University Press. ISBN 0-521-80904-5. OCLC 61303480.
J.J.L. Gommans (2002). Mughal Warfare: Indian Frontiers and Highroads to Empire, 1500–1700. Taylor & Francis. ISBN 9781134552757. Archived from the original on 2 January 2011. Retrieved 18 April 2024.
Habib, Irfan; Kumar, Dharma; Raychaudhuri, Tapan (1987). The Cambridge Economic History of India (PDF). Vol. 1. Cambridge University Press. Retrieved 11 August 2017.
Jorge Flores (2015). The Mughal Padshah: A Jesuit Treatise on Emperor Jahangir's Court and Household. Volume 6 of Rulers & Elites. Brill. p. 74. ISBN 978-9004307537. Retrieved 13 July 2024.
Kaushik Roy; Peter Lorge (2014). Chinese and Indian Warfare: From the Classical Age to 1870 (ebook). Taylor & Francis. p. 196. ISBN 978-1-317-58710-1. Retrieved 5 December 2023.
Majumdar, R.C. (1974). The Mughul Empire. Bharatiya Vidya Bhavan.
Moosvi, Shireen (2015) [First published 1987]. The economy of the Mughal Empire, c. 1595: a statistical study (2nd ed.). Oxford University Press. ISBN 978-0-19-908549-1. Archived from the original on 22 September 2023. Retrieved 13 June 2019.
Richards, John F. (1995). The Mughal Empire. Cambridge University Press. ISBN 978-0-521-56603-2.
Robb, Peter (2011), A History of India, Macmillan International Higher Education, ISBN 978-0-230-34424-2
Seyller, John (2011). "A Mughal Manuscript of the "Diwan" of Nawa'i". Artibus Asiae. 71 (2): 325–334.
Sinopoli, Carla M. (1994). "Monumentality and Mobility in Mughal Capitals". Asian Perspectives. 33 (2): 293–308. ISSN 0066-8435. JSTOR 42928323. Archived from the original on 1 May 2022. Retrieved 11 June 2021.
Stein, Burton (2010), A History of India, John Wiley & Sons, ISBN 978-1-4443-2351-1, retrieved 15 July 2019
Streusand, Douglas E. (2011). Islamic Gunpowder Empires: Ottomans, Safavids, and Mughals. Philadelphia: Westview Press. ISBN 978-0-8133-1359-7.
Streusand, Douglas E. (2018). Islamic Gunpowder Empires: Ottomans, Safavids, and Mughals. Taylor & Francis. ISBN 9780429979217. Retrieved 24 April 2024.
Truschke, Audrey (2017). Aurangzeb: The Life and Legacy of India's Most Controversial King. Stanford University Press. ISBN 978-1-5036-0259-5.
== Further reading ==
== External links ==
Mughal India an interactive experience from the British Museum
The Mughal Empire, BBC Radio 4 discussion with Sanjay Subrahmanyam, Susan Stronge & Chandrika Kaul (In Our Time, 26 February 2004)
Sunil Khilnani's "Akbar", From BBC Radio 4's Incarnations: India in 50 Lives. | Wikipedia/Historiography_of_the_fall_of_the_Mughal_Empire |
During the 40 years of post-Francoist Spain, some 200 works have been published on Carlist history during the Franco regime (1939 to 1975; the Civil War period is not discussed here), contributed by some 100 authors. The number of major studies – books or unpublished PhD theses – stands at around 50; the rest are articles in specialized reviews (pieces in popular newspapers or periodicals are not acknowledged here). Except for some 15 titles, almost all have been published in Spain. Interest was scarce in the late 1970s and early 1980s; it grew in the late 1980s, and since the early 1990s it has remained stable, with some 30 titles published every 5 years.
== Overviews and synthetic works ==
A student unfamiliar with the subject should probably start with the large chapter dedicated to post-Civil War Carlism in the general historical synthesis by Canal (2000): comprehensive and non-partisan, it provides a good overview. Another option is a concise 20-page review by Caspistegui Gorasurreta and Vázquez de Prada (1995). Definitely less valuable is a booklet by Dongil (2011), dedicated almost exclusively to the period in question and equally general, but flawed by a loose essayist style. Alternatively, there is an article intended to provide a helicopter-view summary, but overfocused on the internal fragmentation of the movement, written by Brioso (1996), a sketchy overview in a section of the book by Alférez (1995), and the final chapter of Blinkhorn's (1975) study, written as the transition commenced and burdened with the author's illusions. There are three works that stand as exhaustive, general in-depth attempts to capture Carlist history during Francoism, all resulting from PhD research. The works of Caspistegui (1997) and Martorell (2009) are splendidly documented, though the former focuses on the 1960s and 1970s rather than the 1940s and 1950s, while the latter pursues a partisan, Progressist claim. Exactly the opposite, Traditionalist, claim is pursued in the thesis of Rodón (2015), a work founded on little original research. The final pages of Sagarra and Andrés (2014) provide an illustrated and geography-focused overview. Works recommended only to students familiar with the subject and armed with appropriate criticism are the writings on late Carlism by Clemente (1977 and 2003), chapters in the general overview of Pérez-Nievas (1999), and the last essays in the volume of Miralles Climent (2004). A general synthesis on right-wing politics during Francoism by Gil Pecharromán (2019) contains many paragraphs on Carlism. The very recent work of the experienced Navarrese statesman Allí Aranguren (2021) is specifically calibrated as a discussion of relations between Carlism and Francoism, with focus on the 1939–1955 period.
== Regional studies ==
Carlism has traditionally been most active in 3 areas: Vascongadas, Navarre and Catalonia. Unfortunately, there is no general attempt to capture the post-1939 history of the movement in the Basque Country; either specific sub-periods or specific issues are covered in scarce passages of the available books by López (2000), Estornes (1976) and Garate (1980) or the articles of Toquero (1987), Luengo (1990), Sánchez (1994), Calvo (1999), Molina (2008), and Vázquez de Prada (2012). For Navarre, apart from a very sketchy overview by Miranda (1988), there is an excellent book by Villanueva (1998), which, however, together with her minor contributions (1997, 2003), deals only with the so-called "primer franquismo". Mid-Francoism is covered by Vázquez de Prada (1995, 2006, 2011). There is one major work (Carmona 1978) and a few lesser ones (Larraza 2005, Baraibar 2006, Larraza, Baraibar 2013) which deal with the political struggle to control the Pamplona ayuntamiento. A handful of articles traces cultural changes in Navarrese Carlism, almost all written by Caspistegui (1992, 1996, 1997, 1999, 2004). The very last period is tackled in a partisan book by Errea (2007), which nevertheless provides detailed information on El Pensamiento Navarro and the Pamplonese realm. The best covered area is Catalonia, thanks to an exhaustive study by Vallverdú (2014), with minor pieces provided also by Thomàs (1992), Cubero (1993), Canals (1995) and Campás (2007). One book partially deals with the Francoist period in Valladolid (Herrera 2013), another with Valencia (Pérez 2010) and one with Castellón (Miralles Climent 2009). In the case of Andalusia, the late Francoist period in Seville has been targeted by Somé Laserna (2011), while the early one in Huelva by González Orta (2024). The Galician province of Orense earned a monograph by Rego Nieto (1985).
== Biographies and similar ==
There are a number of works which focus on individuals, though some, as general biographical studies, deal only partially with Carlism and politics. Perhaps the first to be listed is a decidedly hagiographical biography of Don Javier by Borbón, Clemente and Cubero (1997). His son, Don Carlos Hugo, is featured twice by Clemente, in 1999 and 2000, apart from a book by Francisco Manuel de las Heras y Borrero, which also discusses Don Javier, Don Sixto and other Borbón-Parmas (2010), and chapters in books by Ferrer and Puga (2001) and Balanso (1994). Maria Teresa de Borbón is presented by Clemente (2002). The longtime political leader Manuel Fal earned hagiographical booklets by Fidaldo and Burgueño (1980) and Martínez (1998), apart from an article by Clemente (1978). Another leader, José María Valiente, has been the subject of one article by Vázquez de Prada (2012). The claimant Don Carlos Pio earned no biography as such, but his bid for the throne and Carloctavismo in general are dealt with in three books (Montells 1995, Heras 2004, Alcala 2012) and four articles (Heras 1983, Bolinaga 2007, Sánchez 2013, Vázquez de Prada 2011). Jaime del Burgo is discussed in one book by his son (del Burgo 2003) and in one article (Garralda 2008). Alcala (2001) published a partisan but highly recommended book on Sivatte, which provides much information on Sivattismo and Catalan Carlism as well; far less interesting are the books of Monserrat (2001) on Joaquín Bau, Ballestero (2014) on José M. Oriol, and Zavala (2008) on Antonio Arrue, articles on the latter by San Martín (1976), Martorell (2011) and Sudupe (2012), and another by Wilhelmsen (2007) on Lizarza Inda. Rather peculiar contributions are the album of Piñeiro (2005) with paintings of Boveda and the work of Carmona (1995), discussing how Alfonso Comín turned from a Carlist into a Communist. Rodezno, Arauz de Robles and Elías de Tejada are discussed in one article (2009).
== Sub-periods covered ==
So-called early Francoism was treated in a major work by Miralles Climent (2018), while the later years of the regime were discussed by the same author in another large book, published in 2023. Apart from the studies on Navarre listed earlier, there are two books which focus on the Civil War, the period not covered here, and contain minor passages related to the following years, namely the works of Peñas (1996) and Peñalba (2013, 2022). Mid-Francoism was targeted in particular by a 2016 book by Vázquez de Prada, her minor works of 2005 and 2009, and a manuscript by Cubero (1989). Special attention is needed when reading the works of Lavardín (1976), Massó (2005) and Ipiña (2010), insider stories from the project of launching Carlos Hugo in 1957–1966. Late Francoism is covered in two recent PhD works. The one written by García Riol (2015) is largely repetitive, though also highly focused, non-partisan, based on newly available documents and pursuing a handful of interesting concepts. The one written by Miralles Climent (2015) is a first-hand account by a Partido Carlista militant, perhaps the most instructive work written so far when it comes to understanding the rise of "socialismo autogestionario"; it also contains some 300 pages of documentary appendices. Minor works on late Francoism are articles offered by Cubero (1990), Sánchez (2004), Raguer (2004), del Burgo (2011), Campas (2013) and Ferrer (2015). A PhD dissertation was dedicated to the 1969–1980 period by Senent (2021), with a later scaled-down article (2024). Specific episodes are addressed in the works of the Zubiaurs (2012) and Rodón (2015) and the book by Domingo-Arnau (1998). A number of works, though generally studying Francoism and extreme-Right groupings in the early 1970s, offer some insight into the Carlist realm as well, especially the books of de la Cierva (1978, 1981 and 1987) and Rodríguez (1994 and 1997) and the articles by Gallego (2008) and Casals (2009). Carlism and late Francoism was the subject of a series of lectures organized in late 2018 in Pamplona. Works by Senent (2020, 2022) and del Corno (2009) target the post-Francoist era, but contain large passages dedicated to the final years of Francoism.
== Key threads ==
In terms of key threads followed, there are clearly two which attract most attention: the Carlist position towards the Francoist regime and internal conflicts within Carlism, sometimes merged as having been closely interrelated. Most of the works quoted earlier deal with these topics one way or another. All scholars – including those writing from Traditionalist and Progressist perspectives – advance the thesis of intrinsic hostility between Carlism and Francoism, but there is no major scholarly work which systematically and comprehensively captures the problem of mutual relations between the two. The works approaching the topic are a collection of 3 essays by Martorell and Miralles (2009), a brief study by Calero Delso (2003), an analysis of the 1942 crisis by Thomas (2016) and of the 1956 deadlock by Zaratiegui and Garcia (2017), and a synthetic overview by Alli Aranguren (2021). In the case of internal conflict it is quite the opposite: there are many works dedicated almost entirely to the problem, but they offer competing visions emphasizing either change or continuity. The first is championed by the already noted book of Caspistegui; apart from minor works listed earlier, it is presented also in some sketchy overviews. Blinkhorn (1991), Vázquez de Prada, Caspistegui (1991 and 1995) and MacClancy (1998) tend to view the change sympathetically, while Bartyzel (2011) builds a case study, striving to present a unique European phenomenon of a large and historically grounded political movement turned into its own ruinous negation. The key works adhering to the continuity reading are the already noted PhD thesis of Martorell and the booklet of Pérez-Nievas. The countless and highly repetitive books by Clemente, dealing with the history of Carlism in general, advance the same theory, also from the Progressist perspective (1995, 1999, 2006, 2011, 2013). There is historiographical work which adheres to the continuity theory from the Traditionalist perspective, though the book of Gambra (1976) seems to fall rather into philosophy. A particular and very focused case study is an article by Vázquez de Prada on Carlism and the Francoist law on religious liberty (2017).
== Studies on theory and theorists ==
There are a number of works which are useful to a student of Carlist history during Francoism, but which are intended rather as studies in philosophy and political science. The Progressist perspective, overrepresented in historiography, is hardly present here; the only work noted is a treatise by the claimant himself (de Borbón 1977). On the other hand, there are abundant studies dedicated to Traditionalist thinkers of the era, written almost entirely from hagiographic or at least sympathetic positions. Elías de Tejada was the subject of a book by Ayuso (1994) and of one edited by Sánchez (1995); smaller pieces dedicated to the extremeño keep appearing in Annales de la Fundación Elías de Tejada (Cantero 1995, Ayuso 1999, Cecotti 2005, Ayuso 2008) and in other periodicals or books (Lamsdorff 1975, Lorca 1978, Vallet 1981, Díaz 1988, Fernández de la Mora 1989, Cienfuegos 1996, Turco 1998, Cuenca 2000, Giovine 2002, Bartyzel 2014). Rafael Gambra earned a book by Ayuso (1998) and a commemorative issue of Annales de la Fundación Elías de Tejada (2004), plus articles in other reviews (Ayuso 1998, Canals 2004, Forment 2009, Alvear 2009, Alvear 2014). A similar intra-Traditionalist approach marks a general study with many chapters dedicated to the Francoist era, written by Bartyzel (2015). Rare non-partisan works are a general analysis with some sections on post-Civil-War Carlism by Novella (2007), a study by González (2005) and his later brief encyclopedic entry (2008). A work somewhat broader in scope, as it focuses on Traditionalism rather than on Carlism, is the PhD thesis of Rodríguez Núñez (2014).
== Specific approaches ==
In the historiography on Carlism during Francoism there is a tendency to depart from political analysis and to focus on culture and anthropology. It is championed especially by MacClancy, with his somewhat tilted but extremely interesting book (2000) and an article (2009), though also by Caspistegui with a number of contributions (1997, 1999, 2007, 2012, 2013), apart from the studies on Navarre listed earlier. The phenomenon of Montejurra, apart from both authors mentioned, is discussed also by Clemente (1978) and – with regard to 1969 – by Martorell (2006), while the Mártires de la Tradición event is discussed in an article by Senent (2020). The press is the focus of Cubero (1995a, 1995b), Clemente (1999) and Alquézar (2013), while cinema is treated by Moral (2002). Finally, other perspectives are offered by Blinkhorn (1990), Calvo (1992) and Miralles (2005). A structural framework is in the foreground in the very few works which study specific dedicated Carlist organizations; the one which attracted most attention is the GAC, discussed in books by Onrubia (2001) and Clemente (2016) and in articles by MacClancy (1989) and Porro (1999). The MOT earned a minor piece by Cubero (2014) and a major volume – composed mostly of documents, with rather brief analysis – by Miralles (2007), which discusses also the AET; the latter is separately treated by Vázquez de Prada (1998) and Juncosa (2002), while the FARC was addressed by Onrubia (1999, 2003). An ex-combatant organization is approached by Nonell (2001). The ETA war against Carlism, partially covering the late Francoist period, is discussed by Ibáñez (2017). Unfortunately, there is no socio-economic approach to the history of Carlism during Francoism, though incursions into the area may be found in an article by García Riol (2016) and a paper by Miralles Climent (2000).
== Meta-historiography ==
Two works are documentary compilations and might serve as printed sources: the key one is a massive series by Santa Cruz, and another is a book edited by Clemente (1994). A few meta-historiographical works have already been published. Perhaps the one to start with is a bibliography prepared by Rubio and Talavera (2012), which does not cover works published beyond Spain or after 2012; moreover, the authors do not quote ISBN numbers and overlooked some of the titles listed here. Another group of works consists of reviews of published works (not listed here) and attempts to discuss recent trends in Carlist historiography, partially covering also the Francoist era. Fairly old are a 16-year-old chapter in the already quoted work of Canal and the similarly outdated articles of Larramendi (1996), Martorell (2000) and González (2000); later essays, almost entirely irrelevant for students of Francoism, are those of Boyd (2003), Molina (2008), Canal (2011), Somé (2012) and Olábarri (2014). A note by Bullón de Mendoza (2013) remains appallingly relevant. The same question – related to partiality and bias – is tackled head-on and specifically with regard to Carlism by Martínez (2002). Some proposals for future research were formulated – and partially met – 30 years ago by Blinkhorn (1986) and more recently by Caspistegui (2008).
== Footnotes ==
== External links ==
Footage discussing the latest book on Carlism during the Francoist era, January 2017 | Wikipedia/Historiography_on_Carlism_during_the_Francoist_era
Conversion to Christianity is the religious conversion of a previously non-Christian person that brings about changes in what sociologists refer to as the convert's "root reality", including their social behaviors, thinking and ethics. The sociology of religion indicates religious conversion was an important factor in the emergence of civilization and the making of the modern world. Conversion is the most studied aspect of religion by psychologists of religion, but there is still very little actual data available.
Christianity is growing rapidly in the global South and East, primarily through conversion. Different methods of conversion have been practiced historically. There is evidence of coercion by secular leaders in the Early and Late Middle Ages, though coercion as a method has never been approved or even supported by any majority of Christian theologians.
Different Christian denominations may perform various different kinds of rituals or ceremonies of initiation into their community of believers. The primary ritual of conversion is baptism, while different denominations differ with regards to confirmation.
According to a 2001 study by religion professor David B. Barrett of Columbia University and historian George Thomas Kurian, approximately 2.7 million people were converted to Christianity that year from another religion, while approximately 3.8 million people overall were converting annually. In the first decades of the twenty-first century, Pentecostalism is the largest and fastest growing form of Christianity; this growth is primarily due to religious conversion.
== Individual conversion ==
James P. Hanigan writes that individual conversion is the foundational experience and the central message of Christianity, adding that Christian conversion begins with an experience of being "thrown off balance" through cognitive and psychological "disequilibrium", followed by an "awakening" of consciousness and a new awareness of God. Hanigan compares it to "death and rebirth, a turning away..., a putting off of the old..., a change of mind and heart". The person responds by acknowledging and confessing personal lostness and sinfulness, and then accepting a call to holiness, thus restoring balance. This initial internal conversion is followed by practices that further the process of conversion, which, according to Hanigan, will include ethical changes.
In examples of conversion from the New Testament, such as Peter's conversion and Paul's, Hanigan perceives this same common "death and rebirth" experience. He says these individuals did not respond out of a sense of guilt, but from awe, reverence, and holy fear of what they perceived as God's presence.
Comparative studies of the early twenty-first century offer the insight that religious conversion provides a new locus of self-definition, moral authority and social identity through the acceptance of religious actions that seem more fitting and true to the recipient.
Religious conversion into Christianity sometimes came with physical incentives and rewards for new converts, such as the right of residence, access to land, or preferential legal status.
Anthropologist Robert Hefner adds that "Conversion assumes a variety of forms... because it is influenced by a larger interplay of identity, politics and morality". The combination of a message of Truth, a redemptive identity, and acceptance into a social organization whose purpose is the propagation of that message has proven to be a revolutionary force in its own right.
== Theology ==
According to sociologist Ines W. Jindra, there is a "theological dimension" to conversion. Avery Dulles quotes Bernard Lonergan saying "The subject of theology, then, is the person undergoing conversion to God". The conversion experience is basic and has the characteristics of being "concrete, dynamic, personal, communal, and historical." Through this focus on the individual, theology of conversion is provided with the same characteristics in its foundation.
Religious historian David W. Kling's History of Christian Conversion lists nine broad themes common to conversion narratives. Jindra describes the first theme as "human cognizance of divine presence," while Kling says, "God becomes real to people" through conversion. Conversion always has "context": humans are "socially constituted" beings and religious conversion always occurs in a social context. Jindra writes that, while all conversion accounts vary, they all show evidence of being based upon personal internal experiences of crisis expressed through the specific historical context in which the converts lived.
There are aspects of both "movement and resistance" in conversion. Christianity has, from its beginnings, been an evangelical, mission-oriented religion which has spread through conversion. However, people naturally tend toward inertia, toward the familiar, unless otherwise motivated toward change, making conversion the exception, not the rule, in history.
There is both "continuity and discontinuity" in the conversion process. Conversion can be disruptive and cause a rupture with the past, but rupture is rarely complete. Aspects of the past are frequently kept, resulting in a kind of "hybrid" faith. Gender also plays a direct role in how people do or do not convert.
Testimonies and narratives provide the vocabulary of conversion. In the more famous conversion stories, such as Augustine's and Martin Luther's, it is apparent the conversion story was later used not only for personal insight and transformation but also for drawing in potential converts. Kling writes that "the influence of [such] personal testimonies on the history of conversion cannot be over-estimated." Jindra's twenty-first-century research indicates this is also true for more ordinary, less famous conversions. Conversion produced change in the lives of most converts in important and positive ways: Jindra says "they became more stable, found meaning in life, tackled their former problematic biographical trajectories, and improved their relationships (Jindra, 2014)".
Conversion has historically been impacted by how personal "identity" and sense of self are defined. This can determine how much intentional action on the part of the individual convert has directed the outcome, and how much outside forces may have impinged upon personal agency instead. In Christian conversion, there is nearly always a network of others who influenced the convert prior to conversion. Jindra writes that aspects of the specific context, which include the ideology of the group being joined, the individual convert's particular crisis, "and the degree of agency vs. the influence of others", are important influences on whether converts change or do not change after a conversion.
These factors overlap with research psychologist Lewis Rambo's stages of conversion. Rambo's model of conversion includes context, crisis (involving some form of searching by the prospective convert), encounter, and interaction (with someone who believes in the new religious belief system). This is followed by commitment and its results.
== Social science ==
In his book Sociology of Religion, German sociologist Max Weber writes that religious conversion begins with the prophet, as the voice of revelation and vision, calling others to break with tradition and bring their lives into conformity with his "world-building truth." Weber believed that prophetic ideals can become, through the conversion of a community of followers, "a force for world transformation as powerful as anything in human history."
Calling conversion and Christianization "twin phenomena", Hefner has written that religious conversion was an important factor in the emergence of civilization and the making of the modern world. According to Hefner, the "reformulation of social relations, cultural meanings and personal experience" involved in conversion carries with it an inherent "world building aspect".
In the late nineteenth century, the development of world religions (Judaism, Christianity, Islam and Buddhism) was seen as part of the inevitable march toward human enlightenment in a linear upward evolution. Anthropology effectively demonstrated the failure of this model to provide explanation for religious variations.
The world religions developed institutions capable of standardizing knowledge and some have argued that this helped them survive while "empires and economic orders have come and gone". But in fact, only a few religions have been successful in propagating themselves over the long term, and standardized doctrine does not necessarily impact individual conversion and belief.
One of the most influential works in the sociology of religion from the 1960s is Robert Bellah's (1964) Religious Evolution, which argued that world religions all proclaim the existence of a transcendental realm that is superior to everyday reality, thereby legitimizing salvation/conversion experiences designed to link humans with that world. Bellah describes the possibility of redemption/conversion under these terms as "world-shaking in its consequences". The tension between ordinary reality and the transcendent creates recognition of a need for social reform, driven by a redemptive vision, that remakes the world rather than passively accepting it. In this way, Hefner says, world religions loosened the grip of tradition and laid the foundation for human freedom.
== Psychology ==
While conversion is the most studied aspect of religion by psychologists of religion, there is little empirical data on the topic, and little change in method since William James' classic Varieties of Religious Experience in 1902. James Scroggs and William Douglas have written on seven current concerns in the psychology of conversion.
Definition. Calling this the "oldest issue in the field", Scroggs and Douglas indicate psychologists ask whether conversion requires a sudden about-face or gradual change. There is no consensus. The word connotes a sudden about-face, but psychologists are unwilling to let go of the possibility of gradual conversion.
Pathology. Freud saw religion as a pathology, and those who follow his school of thought have continued to do so. Empirical studies indicate religion is associated with good mental health among women, that it aids with depression and overcoming serious problems like heroin addiction, and that generally, there are significant links between religion and spirituality and good physical and mental health. In Scroggs and Douglas's view, which view a psychologist takes depends on their training and personal commitment to faith or non-faith.
Type of person. Many wonder if there is one kind of person that is more likely to be converted than others. Sociologists stress the importance of such variables as social class, group expectations, and social change (as in American frontier society or contemporary China). According to Scroggs and Douglas, William "James regarded the sick soul as the most likely candidate for conversion. The sick soul lives 'close to the pain threshold.' He is generally introverted and pessimistic in outlook, taking the evil of the world profoundly to heart. The sick soul is brooding, steeped in existential angst. He is Kierkegaard's man who is in despair and knows he is in despair".
Trauma and existential crisis can lead to conversion. For the already converted, trauma is also often associated with "beneficial changes in self-perception, relationships, and philosophy of life, and positive changes in the realm of existential, spiritual, or religious matters" according to a study by psychologists Rosemary de Castella and Janette Simmonds.
A 2011 study indicates conversion can take either an inward form, wherein religion becomes the primary guiding principle and goal of the convert's life, or it can take an outward form where religion mostly serves other purposes, such as political or economic goals, which are more important to that individual than religion. For those who experience inward conversion, lower levels of depression, anxiety and stress are associated, while higher levels are associated with those who practice outward conversion only.
Age. Scroggs and Douglas say that early writers on the psychology of conversion were unanimous in regarding adolescence as the most probable age for conversion. In surveys of three churches, psychologist Robert Ferm found the average age of conversion to be 43, 46, and 41 years respectively (Ferm, Robert, The Psychology of Christian Conversion. Westwood, N.J., Fleming Revell, 1959, p. 218). Converts made by Graham's first British campaign averaged in their middle twenties. Jung emphasized the mid to late thirties... Hiltner writes that conversion "is most important, most likely, and most cultivatable in the thirties, rather than being regarded primarily as an adolescent phenomenon". Accordingly, Ferm writes that, "It is probably fair to conclude from Erikson's theories that both the identity crisis in adolescence and the integrity crisis in the middle years constitute ripe moments for conversion".
Conscious or unconscious. Exactly how much of the conversion experience is brought on by conscious control, and how much by unconscious factors behind or even beyond an individual, is also a matter of debate. Forces beyond conscious control are cited by the majority of converts. Scroggs and Douglas wrote that "most psychologists agree the role of unconscious factors is extensive and often decisive in conversion, and that a long period of subconscious incubation precedes sudden conversions". Allport, Maslow, Rogers, and others stress the role of conscious decision.
Science-versus-religion. Psychologists as social scientists tend to operate according to a nothing-but reductionism: conversion must be described as a natural process. Theologians and others who accept the possibility of the supernatural have tended to take a something-more, hands-off-the-sacred-preserve approach to studying conversion. Different worldviews can bias interpretations. Scroggs and Douglas write that "No solution to this very difficult problem appears in the immediate purview", but they do suggest that acknowledging bias and incorporating both views in research that is "not only interdisciplinary but interbias" is necessary.
Which approach? Because there are different schools of psychology with conflicting theories, determining which is most appropriate to the study of conversion is one of the issues Scroggs and Douglas perceive. "Behaviorism, operationalism, and learning theory have rarely been applied to the study of religious conversion," and the overwhelming majority of works have been written from a single perspective: "functionalism", which defines what is true as what works.
== Neurology ==
Kelly Bulkeley, in The Oxford Handbook of Religious Conversion, has written that, as of 2014, no neuroscientific research focused specifically on religious conversion had been done. Nor is there a single consensus on how the brain/mind system works, and researchers take many different approaches. There is controversy over the mind/body problem, as well as whether the brain is simply modular (composed of separate parts), or whether that is too limited an explanation for what Bulkeley calls the complex, "global, synthetic, whole-is-greater-than-the-sum-of-its-parts aspects of brain function". There is disagreement over determinism vs. free will, the use of brain imaging, first-person reports of conversion, and the applications of quantum physics.
The phenomenon of conversion is based on the belief that humans have the ability to change the way they mentally perceive and experience the world. Research on the plasticity of the brain has shown that the brain's ability to create new neural pathways remains with someone throughout their life. Bulkeley writes that "Cognitive neuroscience in relation to religious conversions, where people undergo a basic reordering of the assumptions and expectations that frame their perceptions of the world, may lead to new evidence regarding the latent potential of brain/mind development".
Studies on prayer and meditation show they alter the brain's functioning in measurable, material ways:
Several implications flow from that basic finding. One is that at least some aspects of religion are not generated by pathological brain functioning. Current [cognitive neuroscience] research refutes the idea that religion [...] stems from faulty brain/mind processes. The best available scientific evidence indicates that people who engage in religiously motivated contemplative practices have normal, healthy brains. Perhaps other forms of religion can be more directly tied to neuropathology, but in the case of meditation and prayer the CN literature supports a pragmatic appreciation of the effectiveness of religious practices in shaping the healthy interaction of brain and mind.
== Statistics ==
According to a 2001 study by religion professor David B. Barrett of Columbia University and historian George Thomas Kurian, approximately 2.7 million people were converted to Christianity that year from another religion, while approximately 3.8 million people overall were converting annually. In the first decades of the twenty-first century, Pentecostalism is the largest and fastest growing form of Christianity. Professor of religion Dyron B. Daughrity quotes Paul Freston: "Within a couple of decades, half the world's Christians will be in Africa and Latin America. By 2050, on current trends, there will be as many Pentecostals in the world as there are Hindus, and twice as many Pentecostals as Buddhists". This growth is primarily due to religious conversion.
Historian Philip Jenkins observes that Christianity is also growing rapidly in China and some other Asian countries. Sociologist and specialist in Chinese religion Fenggang Yang from Purdue University writes that Christianity is "spreading among the Chinese of South-East Asia", and "Evangelical and Pentecostal Christianity is growing more quickly in China". More than half of these converts have university degrees.
Social anthropologist Juliette Koning and sociologist Heidi Dahles of Vrije Universiteit Amsterdam agree there has been a "rapid expansion of charismatic Christianity from the 1980s onwards". Singapore, China, Hong Kong, Taiwan, Indonesia, and Malaysia are said to have the fastest-growing Christian communities, and the majority of the new believers are "upwardly mobile, urban, middle-class Chinese". Allan Anderson and Edmond Tang have reported in their book Asian and Pentecostal: The Charismatic Face of Christianity in Asia that "Asia has the second largest number of Pentecostals/charismatics of any continent in the world, and seems to be fast catching up with the largest, Latin America." The World Christian Encyclopedia estimated 135 million in Asia compared to 80 million in North America.
It has been reported also that increasing numbers of young people are becoming Christians in several countries such as China, Hong Kong, Indonesia, Iran, Japan, Singapore, and South Korea.
The Council on Foreign Relations says the "number of Chinese Protestants has grown by an average of 10 percent annually since 1979". Todd Hartch of Eastern Kentucky University, an award-winning historian of Christianity, has written that by 2005 around 6 million Africans were converting to Christianity annually. According to Iranian historian Ladan Boroumand, "Iran today is witnessing the highest rate of Christianization in the world".
While the exact number of Dalit converts to Christianity in India is not available, religion scholar William R. Burrow of Colorado State University has estimated that about 8% of Dalits have converted to Christianity. According to a 2021 study by the Pew Research Center, Christianity has grown in India in recent years due to conversion. Most converts are former Hindus, though some are former Muslims.
Since the 1960s, there has been a substantial increase in the number of conversions from Islam to Christianity, mostly to the Evangelical and Pentecostal denominations. The 2015 study Believers in Christ from a Muslim Background: A Global Census estimated that 10.2 million Muslims had converted to Christianity. Countries with the largest numbers of Muslim converts to Christianity include Indonesia (6,500,000), Nigeria (600,000), Iran (500,000, versus only 500 in 1979), the United States (450,000), Ethiopia (400,000), and Algeria (380,000). Indonesia is home to the largest Christian community of converts from Islam; since the mid-to-late 1960s, between 2 and 2.5 million Muslims there have converted to Christianity. According to the Council on Foreign Relations in 2007, experts estimated that thousands of Muslims in the Western world converted to Christianity annually, but conversions were not publicized due to fear of retribution.
== Methods of conversion ==
=== Prayer and example ===
Christians seek to help others discover God and come to know Jesus through prayer and example. Charles de Foucauld, for example, lived among the Berbers in North Africa, seeking their conversion not through sermons but through his example.
=== Coercion ===
While Christian theologians, such as the fourth-century Augustine and the ninth-century Alcuin, have long maintained that conversion must be voluntary, there are historical examples of coercion in conversion to Christianity. Constantine used both law and force to eradicate the practice of sacrifice and repress heresy, though not specifically to promote conversion. Theodosius also wrote laws to eliminate heresies, but made no requirement for pagans or Jews to convert to Christianity. However, the sixth-century Eastern Roman emperor Justinian I and the seventh-century emperor Heraclius attempted to force cultural and religious uniformity by requiring baptism of the Jews. In 612, the Visigothic King Sisebut, prompted by Heraclius, declared the obligatory conversion of all Jews in Spain. In the many new nation-states being formed in Eastern Europe in the Late Middle Ages, some kings and princes pressured their people to adopt the new religion, and in the Northern Crusades the fighting princes obtained widespread conversion through political pressure or military coercion, even though theologians continued to maintain that conversion must be voluntary.
=== Baptism ===
In most varieties of Christianity, baptism is the initiation rite for entrance into the Christian community. Almost all baptisms share in common the use of the Trinitarian formula (in the name of the Father, Son, and Holy Spirit) by the minister while baptizing the convert. Two aspects of baptism are sources of disagreement: mode and meaning. In Understanding Four Views on Baptism, the editors write that Christians disagree on the meaning of baptism and whether it is a necessary aspect of conversion or simply a demonstration of a conversion that has already taken place.
There are also different modes of baptism in Christianity. These include immersion (dunking), affusion (pouring), and aspersion (sprinkling). The most common practice in the ancient church was baptism by immersion of the whole head and body of an adult. It remained common into the Middle Ages and is still found in the Eastern church, the Anglican and Roman Catholic Churches, and in most Protestant denominations.
Historian Philip Schaff has written that sprinkling, or pouring of water on the head of a sick or dying person, where immersion was impractical, was also practiced in ancient times and up through the twelfth century, and is currently practiced in most of the West. However, according to the Oxford Dictionary of the Christian Church affusion has become the most common practice of the Western churches.
Infant baptism was controversial for the Protestant Reformers, and remains so for some Protestants, but according to Schaff, it was practiced by the ancients and is neither required nor forbidden in the New Testament.
The mode of baptism often depends on the denomination one enters and, in some cases, on personal choice. Many Anglicans and Lutherans baptize by affusion. Presbyterians and Congregationalists accept baptism by pouring or sprinkling. Steven W. Lemke writes that the Presbyterian Westminster Confession says, "Dipping of the person into the water is not necessary". Many Evangelical Protestant churches, such as Baptists, believe that only full immersion baptism is valid: the Second London and Philadelphia confessions of the Baptists affirm that "immersion, or dipping of the person in water, is necessary", and baptism by immersion is again affirmed in Article 7 of the BF&M [Baptist Faith and Message]. Others, like Methodists, may conduct all three forms of baptism. Yet others, like Quakers, do not practice water baptism, believing that Jesus baptizes his followers in the Spirit while John baptized his followers in water.
=== Denominational switching ===
Switching from one Christian denomination, such as Presbyterianism, to another Christian denomination, such as Catholicism, has not generally been seen by researchers as conversion to Christianity. Mark C. Suchman says this is because most sociologists and other scientists have defined conversion as "radical personal change, particularly change involving a shift in one's sense of 'root reality'." However, in Suchman's view, this produces a form of 'selection bias' within the research. He writes that the study of "everyday" religious mobility is not a substitute for analyses of "true conversion," but the denominational switching that he refers to as "religious mobility" can be seen as an aspect of conversion.
Suchman describes six types, or causes, of "religious mobility" as a supplement and complement to the more traditionally limited concept of conversion. He draws on theories from the sociology of deviance where there is some recognition that "a change of religious affiliation generally represents a break with previous norms and a severing of social commitments—even when it does not involve a radical personality realignment".
Theories of deviance define what can be considered as the variables and determinants involved and what kind of mobility can be seen as random. "Strain theory" argues that those who are unhappy in their religious affiliation will generally "engage in deviance" from that group. Those who are not well integrated in their religious social group, those who become enmeshed in social relations outside the group with participants in deviant cultures, and those whose ethnicity and traditional background differs from their current affiliation are candidates for switching. Intermarriage, with partners of different religions and/or denominations, is also associated with religious switching.
=== Confirmation ===
Theologian Knut Alfsvåg writes that confirmation was first introduced by Pope Innocent I in the 5th century as part of the unified sacrament of baptism, chrismation (confirmation) and first communion that was commonly accepted by the 12th century. It was formally designated a sacrament in 1274 by the Council of Lyon. Baptism, along with the declaration and instruction involved in confirmation, and the Eucharist, have remained the essential elements of initiation in all Christian communities; however, Alfsvåg writes that confirmation has differing status in different denominations.
Some see baptism, confirmation, and communion as elements of a unified sacrament through which one becomes a Christian and part of the church. Known as chrismation by Eastern Christians, confirmation may under some circumstances be administered immediately after baptism. When an adult decides to convert to the Catholic or Orthodox Church, they become a "catechumen" and attend classes to learn what conversion means and requires. Once classes are completed and the candidate is baptized, adults can be confirmed immediately following baptism. A clergy member will anoint their forehead (or, in the case of Byzantine Christians, the forehead, eyes, nostrils, mouth, ears, breast, hands, and feet) with the chrisma (oil), calling upon the Holy Spirit to seal the convert with the gifts of the Spirit.
In Western churches that practice infant baptism (the Catholic Church, the Church of England, Anglicans, Lutherans, Presbyterians, Congregationalists, Methodists, Nazarenes, Moravians, and United Protestants), infants who are baptized are not generally confirmed immediately except in cases of emergency such as illness or impending death. Otherwise, child candidates must wait until they are old enough to make a decision for themselves. Confirmation cannot occur until the candidate has participated in confirmation classes, demonstrated an adequate understanding of what they are agreeing to, and is able to profess "with their own mouth" their desire to be confirmed in their faith. In the Eastern churches (Eastern Orthodoxy, Oriental Orthodoxy, and the Church of the East), the rite is called chrismation and is done immediately after baptism, regardless of age.
To be fully in communion with the Catholic Church (a phrase used since c. 205), the Catholic Church requires a convert to have professed the faith and to practice the sacraments—baptism, confirmation and the Eucharist. The Orthodox Church also maintains the tradition of baptism, chrismation and first communion as a united rite to this day, referring to chrismation as "the Pentecost of the individual" (a reference to the Holy Spirit).
The practice of confirmation was criticized during the Reformation by those who did not consider confirmation a condition for conversion to Christianity or for being a fully accepted member of the church. Luther saw confirmation as "a churchly rite or sacramental ceremony", but for Luther it was baptism that was necessary, not confirmation. John Wesley removed the rite altogether, leaving Methodism with no rite of confirmation from 1785 to 1965. Such critics see confirmation as a combination of intercessory prayer and a graduation ceremony after the period of instruction.
== See also ==
Catholicisation
Christian mission to Jews
Christianization
Conversion of the Jews (future event)
Credo
Engel scale
Forced conversion
List of converts to Christianity
Rite of Christian Initiation for Adults (RCIA)
== References ==
== Bibliography == | Wikipedia/Conversion_to_Christianity |
Humanistic historiography is a method in historiography based on the principles of humanism, developing a higher standard of critical judgement in the study of history. The new style of humanistic historiography was established by the historians of Florence, namely Leonardo Bruni in his Historiarum Florentini populi libri (published from 1416 to 1449) and the scholarly works of Francesco Petrarca; Giovanni Villani's Istorie fiorentine was a precursor to humanistic historiography, identifying causes in human actions and motives rather than in fate. Certain characteristics of the model still determined the treatment of political history in Machiavelli's Istorie Fiorentine, as well as his delimitation of political subject matter at large.
The humanists used ancient Greek and Roman historians, especially Livy, Sallust, and Julius Caesar as their models. This choice had certain consequences insofar as the treatment of history had to concentrate on such exciting events as wars and revolutions to the exclusion of the permanent factors and the long-range developments that determine the texture of history. Similar to the ancient historians, the humanists took the generic division of history seriously.
Moreover, in the interest of rhetorical and dramatic effectiveness, the individual had to become the center of action to such a degree that again the permanent determinants that in fact leave not so much room for heroic freedom were obscured.
The Roman model had, furthermore, the effect of a radical secularization of political problems. The humanistic concentration on the history of the republic in the Roman manner entailed the break with the Christian view of history. The rigidly closed stream of secular state history did not admit a divine Providence governing a universal history. Such problems as the translatio imperii and the speculation of the four world monarchies disappeared without a word of discussion.
In the eighteenth century, when Voltaire started his secularization of history, the polemic against the Christian position of Jacques-Bénigne Bossuet was of interest. The humanists of the fifteenth century ignored the Christian problem as if it did not exist.
== References ==
=== Bibliography ===
== External links ==
Humanistic Historiography Under the Sforzas: Politics and Propaganda in Fifteenth-century Milan
Humanistic historiography in Academia
Humanistic Historiography | Wikipedia/Humanistic_historiography |
Hellenic historiography (or Greek historiography) involves efforts made by Greeks to track and record historical events. By the 5th century BC, it became an integral part of ancient Greek literature and held a prestigious place in later Roman historiography and Byzantine literature.
== Overview ==
The historical period of ancient Greece is unique in world history as the first period attested directly in proper historiography, while earlier ancient history or proto-history is known through much more circumstantial evidence, such as annals, chronicles, king lists, and pragmatic epigraphy.
Herodotus is widely known as the "father of history", his Histories giving the entire field its name. Written between the 450s and 420s BC, Herodotus' work reaches about a century into the past, discussing 6th-century BC historical figures such as Darius I of Persia, Cambyses II, and Psamtik III, and alluding to some 8th-century BC ones such as Candaules.
Herodotus was succeeded by authors such as Thucydides, Xenophon, Demosthenes, Plato, and Aristotle. Most of these authors were either Athenians or pro-Athenians, which explains why far more is known about the history and politics of Athens than of most other contemporary cities. Their scope is further limited by a focus on political, military and diplomatic history, generally ignoring economic and social history. However, while works approaching modern ethnography arose primarily amongst the Romans, some Greeks did include ancillary material describing the customs and rituals of different peoples, Herodotus himself being a prime example in his descriptions of the Egyptians, Scythians, and others.
== See also ==
Chinese historiography
List of Graeco-Roman geographers
List of Greek historiographers
List of historians
Modern Greek literature
Roman historiography
== References ==
== Further reading ==
Barcelo, P. (1994). "The Perception of Carthage in Classical Greek Historiography". Acta Classica. 37: 1–14.
Grethlein, Jonas (2010). The Greeks and their Past: Poetry, Oratory and History in the Fifth Century BCE. Cambridge: Cambridge University Press.
Hornblower, Simon, ed. (1994). Greek Historiography. Oxford: Clarendon Press.
Hornblower, Simon (2004). Thucydides and Pindar: Historical Narrative and the World of Epinikian Poetry. Oxford: Oxford University Press.
Luce, T. J. (1997). The Greek Historians. London and New York: Routledge.
Marincola, John (2001). "Greek Historians". Greece & Rome, New Surveys in the Classics. 31. Cambridge: Cambridge University Press.
Parmeggiani, Giovanni, ed. (2014). Between Thucydides and Polybius: The Golden Age of Greek Historiography. Cambridge, MA: Harvard University Press.
Pitcher, Luke (2009). Writing Ancient History: An Introduction to Classical Historiography. London and New York: I. B. Tauris.
Sacks, Kenneth S. (1990). Diodorus Siculus and the First Century. Princeton, NJ: Princeton University Press.
Sacks, Kenneth S. (1981). Polybius on the Writing of History. Berkeley: University of California Press.
Shrimpton, Gordon S. (1997). History and Memory in Ancient Greece. Montreal and Buffalo, NY: McGill-Queen's University Press.
Skinner, Joseph E. (2012). The Invention of Greek Ethnography: From Homer to Herodotus. Oxford: Oxford University Press.
Woodman, A. J. (1988). Rhetoric in Classical Historiography: Four Studies. London: Routledge. | Wikipedia/Hellenic_historiography |
The heroic theory of invention and scientific development is the view that the principal authors of inventions and scientific discoveries are unique heroic individuals—i.e., "great scientists" or "geniuses".
== Competing hypothesis ==
A competing hypothesis (that of multiple discovery) is that most inventions and scientific discoveries are made independently and simultaneously by multiple inventors and scientists.
The multiple-discovery hypothesis may be most patently exemplified in the evolution of mathematics, since mathematical knowledge is highly unified and any advances need, as a general rule, to be built from previously established results through a process of deduction. Thus, the development of infinitesimal calculus into a systematic discipline did not occur until the development of analytic geometry, the former being credited to both Sir Isaac Newton and Gottfried Leibniz, and the latter to both René Descartes and Pierre de Fermat.
== See also ==
Genius
Great man theory
Hive mind
List of multiple discoveries
Multiple discovery
People known as the father or mother of something
Scientific priority
Scientific theory
Discovery and invention controversies
== References ==
== Further reading ==
Epstein, Ralph C. 1926. "Industrial Invention: Heroic, or Systematic?" The Quarterly Journal of Economics 40(2):232–72. JSTOR 1884619. doi:10.2307/1884619.
Johansson, Frans. 2004. The Medici Effect: What Elephants and Epidemics Can Teach Us About Innovation. US: Harvard Business School Press. ISBN 1-4221-0282-3.
Merton, Robert K. 1957. "Priorities in Scientific Discovery: A Chapter in the Sociology of Science." American Sociological Review 22(6):635–59. JSTOR 2089193. doi:10.2307/2089193.
—— 1961. "Singletons and Multiples in Scientific Discovery: A Chapter in the Sociology of Science." Proceedings of the American Philosophical Society 105(5):470–86. JSTOR 985546
Shireman, William K. 1999. "Business strategies for sustainable profits: systems thinking in practice." Systems Research and Behavioral Science 16(5):453–62. doi:10.1002/(SICI)1099-1743(199909/10)16:5<453::AID-SRES336>3.0.CO;2-9.
Turney, Peter. 15 January 2007. "The Heroic Theory of Scientific Development." Apperceptual.
== External links ==
http://www.philsci.com/book2-2.htm
http://www.newyorker.com/reporting/2008/05/12/080512fa_fact_gladwell?currentPage=all | Wikipedia/Heroic_theory_of_invention_and_scientific_development |
The functionalism–intentionalism debate is a historiographical debate about the reasons for the Holocaust as well as most aspects of the Third Reich, such as foreign policy. It essentially centres on two questions:
Was there a master plan on the part of Adolf Hitler to launch the Holocaust? Intentionalists argue there was such a plan, while functionalists argue there was not.
Did the initiative for the Holocaust come from above with orders from Adolf Hitler or from below within the ranks of the German bureaucracy? Although neither side disputes the reality of the Holocaust, nor is there serious dispute over the premise that Hitler (as Führer) was personally responsible for encouraging the anti-Semitism that allowed the Holocaust to take place, intentionalists argue the initiative came from above, while functionalists contend it came from lower ranks within the bureaucracy.
The terms were coined in a 1981 essay by the British Marxist historian Timothy Mason. Notable functionalists have included Timothy Mason, Raul Hilberg, Karl Schleunes, Christopher Browning, Hans Mommsen, Martin Broszat, Götz Aly, Christian Gerlach, Zygmunt Bauman, Timothy Snyder and David Cesarani. Notable intentionalists have included William Shirer, Hugh Trevor-Roper, Alan Bullock, Karl Bracher, Andreas Hillgruber, Klaus Hildebrand, Eberhard Jäckel, Leni Yahil, Israel Gutman, Gerhard Weinberg, Walter Laqueur, Saul Friedländer, Richard Breitman, Lucy Dawidowicz and Daniel Goldhagen.
== Origin ==
The search for the causes of the Holocaust began almost as soon as World War II ended. At the Nuremberg War Crimes Trials of 1945–46, the "Final Solution" was represented by the prosecution as part of a long-term plan on the part of the Nazi leadership going back to the foundation of the Nazi Party in 1919. Subsequently, most historians subscribed to what would nowadays be considered the extreme intentionalist interpretation. Books such as Karl Schleunes' The Twisted Road to Auschwitz, published in 1970, influenced a number of historians to challenge the prevailing interpretation and to suggest that there was no master plan for the Holocaust. In the 1970s, advocates of the intentionalist school of thought were known as "the straight road to Auschwitz" camp or as the "programmists", because they insisted that Hitler was fulfilling a programme. Advocates of the functionalist school were known as "the twisted road to Auschwitz" camp or as the "structuralists", because of their insistence that it was the internal power structures of the Third Reich that led to the Holocaust.
In 1981, the British historian Timothy Mason published an essay entitled "Intention and Explanation" that was in part an attack on the scholarship of Karl Dietrich Bracher and Klaus Hildebrand, both of whom Mason accused of focusing too much on Adolf Hitler as an explanation of the Holocaust. In this essay, Mason called the followers of "the twisted road to Auschwitz"/structuralist school "functionalists" because of their belief that the Holocaust arose as part of the functioning of the Nazi state, while the followers of "the straight road to Auschwitz"/programmist school were called "intentionalists" because of their belief that it was Hitler's intentions alone that explained the Holocaust. The terms "intentionalist" and "functionalist" have largely replaced the previous terms used to signify the conflicting schools of thought.
== Arguments ==
Those historians who take an intentionalist line, like Andreas Hillgruber, argue that everything that happened after Operation Barbarossa was part of a master plan that Hillgruber credited Hitler with developing in the 1920s. Hillgruber wrote in his 1967 book Germany and the Two World Wars that:
...the conquest of European Russia, the cornerstone of the continental European phase of his program, was thus for Hitler inextricably linked with the extermination of these "bacilli", the Jews. In his conception they had gained dominance over Russia with the Bolshevik Revolution. Russia thereby became the center from which a global danger radiated, particularly threatening to the Aryan race and its German core. To Hitler, Bolshevism meant the consummate rule of Jewry, while democracy—as it had developed in Western Europe and Weimar Germany—represented a preliminary stage of Bolshevism, since the Jews there won a leading, if not yet a dominant, influence. This racist component of Hitler's thought was so closely interwoven with the central political element of his program, the conquest of European Russia, that Russia's defeat and the extermination of the Jews were—in theory as later in practice—inseparable for him. To the aim of expansion per se, however, Hitler gave not racial, but political, strategic, economic and demographic underpinnings.
The German historian Helmut Krausnick argued that:
What is certain is that the nearer Hitler's plan to overthrow Russia as the last possible enemy on the continent of Europe approached maturity, the more he became obsessed with an idea—with which he had been toying as a "final solution" for a long time—of wiping out the Jews in the territories under his control. It cannot have been later than March 1941, when he openly declared his intention of having the political commissars of the Red Army shot, that he issued his secret decree—which never appeared in writing though it was mentioned verbally on several occasions—that the Jews should be eliminated.
Alfred Streim wrote in response that Krausnick had been taken in by the line invented after the war to reduce the responsibility of the Einsatzgruppen leaders brought to trial. Klaus Hildebrand wrote that:
In qualitative terms, the executions by shooting were no different from the technically more efficient accomplishment of the 'physical final solution' by gassing, of which they were a prelude.
Against the intentionalist interpretation, functionalist historians like Martin Broszat argued that the lower officials of the Nazi state had started exterminating people on their own initiative. Broszat argued that the Holocaust began "bit by bit" as German officials stumbled into genocide. He argued that in the autumn of 1941 German officials had begun "improvised" killing schemes as the "simplest" solution to the "Jewish Question". In Broszat's opinion, Hitler subsequently approved of the measures initiated by the lower officials and allowed the expansion of the Holocaust from Eastern Europe to all of Europe. In this way, Broszat argued that the Shoah was not begun in response to an order, written or unwritten, from Hitler but was rather "a way out of the blind alley into which the Nazis had manoeuvred themselves". The American historian Christopher Browning has argued that:
Before the invasion, the Einsatzgruppen were not given explicit orders for the total extermination of Jews on Soviet territory. Along with the general incitement to an ideological and racial war, however, they were given the general task of liquidating "potential" enemies. Heydrich's much-debated directive of 2 July 1941 was a minimal list of those who had to be liquidated immediately, including all Jews in state and party positions. It is very likely, moreover, that the Einsatzgruppen leaders were told of the future goal of a Judenfrei [Jew-free] Russia through systematic mass murder.
By contrast, the Swiss historian Philippe Burrin argues that such a decision was not made before August 1941 at the earliest, pointing to orders given by Himmler on 30 July 1941 to the 2nd SS Cavalry Regiment and the SS Cavalry Brigade operating in the Pripet Marshes (the Pripyat Marshes massacres) calling for the murder of male Jews only, while the Jewish women and children were to be driven into the marshes. Browning argues that sometime in mid-July 1941 Hitler made the decision to begin general genocide owing to his exhilaration over his victories over the Red Army, whereas Burrin contends that the decision was made in late August 1941 owing to Hitler's frustration over the slowing down of the Wehrmacht. Ian Kershaw argues that the dramatic expansion in both the range of victims and the intensity of the killings after mid-August 1941 indicates that Hitler issued an order to that effect, most probably a verbal order conveyed to the Einsatzgruppen commanders through either Himmler or Heydrich. It remains unclear whether that was a decision made on Hitler's own initiative, motivated only by his own anti-Semitic prejudices, or whether, impressed with the willingness and ability of Einsatzgruppe A to murder Jewish women and children, he ordered the other three Einsatzgruppen to emulate Einsatzgruppe A's bloody example.
The Canadian historian Erich Haberer has contended that the "Baltic flashpoint of genocide", as the killings committed by Einsatzgruppe A between July and October 1941 are known to historians, was the key development in the evolution of Nazi anti-Semitic policy that resulted in the Holocaust. The Baltic area witnessed both the most extensive and intense killings of all the Einsatzgruppen, with 90,000–100,000 Jews killed between July and October 1941, which led to the almost total destruction of the Jewish communities in that area. Haberer maintains that the "Baltic flashpoint of genocide" occurred at a time when the other Nazi plans for a "territorial final solution", such as the Madagascar Plan, were unlikely to occur, and thus suggested to the Nazi leadership that genocide was indeed "feasible" as a "final solution to the Jewish Question".
== Functionalism ==
=== Extreme ===
Extreme functionalists such as Martin Broszat believe that the Nazi leadership did not consciously initiate the Holocaust and that the initiative instead came from the lower ranks of the German bureaucracy, an interpretation known as the bottom-up approach to the Holocaust. Götz Aly has made much of documents from the bureaucracy of the German Government-General of Poland, arguing that the population of Poland would have to decrease by 25% to allow the Polish economy to grow. Critics of functionalism point out that Hitler and other Nazi leaders delayed railcars providing supplies to front-line troops in the Soviet Union so that Jews could be deported by rail from the USSR to death camps, thus demonstrating the pursuit of genocidal policies over pragmatic wartime actions. Hans Mommsen, a leading expert on Nazi Germany and the Holocaust, argued that Hitler was a "weak dictator" who, rather than acting decisively, reacted to various social pressures. Mommsen believed that Nazi Germany was not a totalitarian state and criticized totalitarianism as a concept in general. Together with his friend Broszat, Mommsen developed a structuralist interpretation of the Third Reich that saw the Nazi state as a chaotic collection of rival bureaucracies engaged in endless power struggles, and the Final Solution as a result of the "cumulative radicalization" of the German state, as opposed to a long-term plan on the part of Adolf Hitler.
=== Moderate ===
Moderate functionalists, such as Karl Schleunes and Christopher Browning, believe that the rivalry within the unstable Nazi power structure provided the major driving force behind the Holocaust. Moderate functionalists believe that the Nazis aimed to expel all of the Jews from Europe, but only after the failure of these schemes did they resort to genocide. This is sometimes referred to as the "twisted road" to genocide, after a book by Schleunes called The Twisted Road to Auschwitz.
== Intentionalism ==
=== Extreme ===
Lucy Dawidowicz argued that Hitler had decided upon the Holocaust no later than 1919. To support her interpretation, Dawidowicz pointed to numerous extreme anti-Semitic statements made by Hitler. According to a Reichswehr report from an April 1920 meeting, Hitler said: "We will carry on the struggle until the last Jew is removed from the German Reich". A Bavarian police report stated that according to Hitler, the NSDAP would bring about a revolution that would "thoroughly clean out the Jew-rabble". Hitler directly referenced killing Jews in Mein Kampf, where he states: "If at the beginning of the war and during the war twelve or fifteen thousand of these Hebrew corrupters of the people had been held under poison gas, as happened to hundreds of thousands of our very best German workers in the field, the sacrifice of millions at the front would not have been in vain." However, Dawidowicz's critics contend that, given that Mein Kampf is 694 pages long, she makes too much of one sentence. Daniel Goldhagen went further, suggesting that popular opinion in Germany was already sympathetic to a policy of Jewish extermination before the Nazi party came to power. He asserts in his book Hitler's Willing Executioners that Germany enthusiastically welcomed the persecution of Jews by the Nazi regime in the period 1933–39. Dawidowicz responded that Hitler resorted to vague terms intentionally, often using esoteric and unclear language when discussing the Jews; she remarks that such esoteric and vague language was an inherent element of National Socialist rhetoric, as Heinrich Himmler repeated Hitler's vague terms from the 1920s as late as 4 October 1943, when addressing the SS-Gruppenführer. Dawidowicz highlights that there was no reason to use vague terms to "that particular audience on that occasion at that late date".
Holocaust scholars such as John J. Michalczyk, Michael S. Bryant and Susan A. Michalczyk agree with Dawidowicz, arguing that Hitler already revealed his "exterminatory mindset" in Mein Kampf, infusing the book with "intimations of genocide". They state that Hitler's autobiography is redolent of calls for mass murder, and argue that "genocide is the inescapable conclusion entailed in Hitler’s premises". In his book, Hitler did argue that the existence of Germany as a country was threatened, portrayed the Jews as a danger to both Germany and the human race, and argued that the right to self-preservation is supreme and cancels all ethical restraint; these premises made the Final Solution a foreseeable conclusion. According to Bryant, Hitler calls for the Final Solution in Mein Kampf, but conceals it with vague and esoteric language. This vagueness was caused by the circumstances Hitler faced after the failure of the Beer Hall Putsch: Hitler wanted to obtain parole, avoid deportation to Austria and eventually have the bans that he and his party were facing lifted, which forced him to "walk a legal line". Bryant concludes: "Hitler continued to dissemble his real intentions through the 1930s, falsely assuring the world of his peaceful inclinations toward countries he had rashly threatened in his memoir".
Dawidowicz also points to accounts of 1920s recruits to the National Socialist movement. Kurt Lüdecke, a German nationalist who joined the NSDAP and personally talked to Hitler, noted plans that Hitler expressed: "The hugeness of the task and the absurdity of the hope swept over me. Its execution meant the liquidation of Jewry, of Rome, of liberalism with its tangled capitalistic connections; of Marxism, Bolshevism, Toryism—in short, an abrupt and complete break with the past and an assault on all existing world political forces." Dawidowicz contends that while Hitler's anti-Semitism and related threats "remained geographically limited to Germany, albeit a greater Germany", he began to see Jews as "an international group whose destruction demanded an international policy". This change took place after Hitler met Alfred Rosenberg, who shared genocidal intentions towards Jews and greatly influenced Hitler; as a result, Hitler now discussed "cleaning up" the Jews internationally rather than domestically. Dawidowicz concludes that Hitler conceived his plans long before coming to power, and everything he did from then on was directed toward the achievement of his goal: "There never had been any ideological deviation or wavering determination. In the end only the question of opportunity mattered."
Wolfgang Benz points out that Adolf Hitler had already set out his anti-Semitic programme in a 1919 report, the "Gutachten zum Antisemitismus", in which he declared: "Its ultimate goal, however, must unalterably be the removal of the Jews altogether." That this "removal" meant for him the extermination of the Jews is shown by Hitler's speech of 6 April 1920: "We do not want to be sentimental anti-Semites who want to create a pogrom mood, but we are animated by the implacable determination to seize the evil at its root and to exterminate it root and branch. In order to achieve our goal, any means must be acceptable to us, even if we have to join forces with the devil." On 3 July 1920 Hitler wrote to Konstantin Hierl: "As little as I can reproach tubercle bacilli for an activity which means destruction for man but life for them, I am also compelled and entitled, for the sake of my personal existence, to wage the fight against tuberculosis by destroying its pathogens. The Jew, however, has become through thousands of years in his work the racial tuberculosis of the peoples. To fight him is to destroy him."
According to the journalist Josef Hell, Hitler, when asked what he would do against the Jews if he had full freedom of action, is said to have replied: "When I am really in power, then the extermination of the Jews will be my first and most important task. As soon as I have the power to do so, I will, for example, have gallows set up next to gallows in Munich on the Marienplatz, as many as traffic permits. Then the Jews will be hanged, one after the other, and they will hang until they stink. They will hang as long as the laws of hygiene allow. As soon as they have been taken down, the next ones will be hanged, and this will continue until the last Jew in Munich has been exterminated. The same thing will be done in the other cities until Germany is cleansed of the last Jew." In 1924, Hitler further unfolded the racist rationale for this in Mein Kampf, also picking up on the views of Eugen Dühring: "Without clear recognition of the racial problem, and thus of the Jewish question, a resurgence of the German nation will no longer take place."
=== Moderate ===
Moderate intentionalists such as Richard Breitman and Saul Friedlander believe that Hitler decided upon the Holocaust sometime after coming to power in the 1930s and no later than 1939 or 1941. This school refers to Hitler's "Prophecy Speech" of January 30, 1939 before the Reichstag where Hitler stated "If the international Jewish financiers in and outside Europe should succeed in plunging the nations once again into a world war, then the result will not be the victory of Jewry, but the annihilation of the Jewish race in Europe!" Yehuda Bauer criticizes the intentionalist reading of this quotation, arguing that though this statement clearly commits Hitler to genocide, Hitler made no effort after delivering this speech to have it carried out. Furthermore, Ian Kershaw has pointed out that there are several diary entries by Joseph Goebbels in late 1941, in which Goebbels writes that "The Führer's prophecy is coming true in a most terrible way." The general impression one gets is that Goebbels is quite surprised that Hitler was serious about carrying out the threat in the "Prophecy Speech". According to Lucy Dawidowicz, if Hitler's allies expressed surprise at the implementation of a systematic genocide, it was not because of "the suddenness with which they had to confront these plans, but because of lack of preparation".
Eberhard Jäckel argues that the genocide of Jews on a systematic and industrial level was already decided at the highest level before 1939. Jäckel notes that Hitler himself had announced the "annihilation of the Jewish race in Europe" in the event of a new war in a public speech on the anniversary of his "seizure of power" on 30 January 1939. Thus, when announcing the "annihilation of the Jewish race" in the event of war in the 30 January 1939 Reichstag speech in the Kroll Opera House, Hitler had already been preparing for war and was now justifying the genocide that would accompany it. This is supported by the fact that the Nazi leadership began rearming shortly after coming to power in 1933; Richard Overy remarks that for Hitler the "economy was not simply an arena for generating wealth and technical progress; its raison d'être lay in its ability to provide the material springboard for military conquest". Because of that, the progressing militarisation and rearmament of the German economy became "economically significant" as early as 1934, and signalled Hitler's intention to start a war. In this context, intentionalist historians argue that the Madagascar Plan was never a serious option for the National Socialist leadership, but merely a consideration presented to the outside world in order to conceal from the public the actual goal being pursued.
Evidence used by intentionalist historians such as Kevin Sweeney to support the view that Hitler had decided on the Holocaust by the start of the war includes a statement by Hitler to František Chvalkovský in 1939 that "We are going to destroy the Jews. They are not going to get away with what they did on November 9, 1918. The day of reckoning has come." Sweeney argues that this and other public and private statements by Hitler before and during the war indicate that he had personally premeditated the Holocaust and was responsible for directing the policy.
== Synthesis ==
A number of scholars such as Arno J. Mayer, Yehuda Bauer, Peter Longerich, Ian Kershaw, Michael Burleigh, Frank McDonough and Michael Marrus have developed a synthesis of the functionalist and intentionalist schools. They have suggested the Holocaust was a result of pressures that came from both above and below and that Hitler lacked a master plan, but was the decisive force behind the Holocaust. The phrase 'cumulative radicalisation' is used in this context to sum up the way extreme rhetoric and competition among different Nazi agencies produced increasingly extreme policies, as fanatical bureaucratic underlings put into practice what they believed Hitler would have approved based on his widely disseminated speeches and propaganda. This phenomenon is referred to more generally in social psychology as groupshift.
Given the fact that scholars have written so much in relation to Nazi Germany, Richard Bessel asserts that "The result is a much better informed, much more detailed and more nuanced picture of the Nazi regime, and most serious historians of the Nazi regime now are to some extent both 'intentionalists' and 'functionalists'—insofar as those terms still can be used at all."
== See also ==
Holocaust denial
Bottom-up approach of the Holocaust
Nazi foreign policy debate
Auschwitz bombing debate
Historiography of Germany
Historikerstreit
Sonderweg
Vergangenheitsbewältigung
Victim theory, a theory that Austria was a victim of Nazism following the Anschluss
== References ==
== Sources ==
Connectionless-mode Network Service (CLNS) or simply Connectionless Network Service is an OSI network layer datagram service that does not require a circuit to be established before data is transmitted, and routes messages to their destinations independently of any other messages. As such it is a "best-effort" rather than a "reliable" delivery service. CLNS is not an Internet service, but provides capabilities in an OSI network environment similar to those provided by the Internet protocol suite. The service is specified in ISO/IEC 8348, the OSI Network Service Definition (which also defines the connection-oriented service, CONS).
== Connectionless-mode Network Protocol ==
The Connectionless-mode Network Protocol (CLNP) is the OSI protocol that provides the CLNS service. From August 1990 to April 1995 the NSFNET backbone supported CLNP in addition to TCP/IP. However, CLNP usage remained low compared to TCP/IP.
== Transport Protocol Class 4 (TP4) in conjunction with CLNS ==
CLNS is used by ISO Transport Protocol Class 4 (TP4), one of the five transport layer protocols in the OSI suite. TP4 offers error recovery, performs segmentation and reassembly, and supplies multiplexing and demultiplexing of data streams over a single virtual circuit. TP4 sequences PDUs and retransmits them or re-initiates the connection if an excessive number are unacknowledged. TP4 provides reliable transport service and functions with either connection-oriented or connectionless network service. TP4 is the most commonly used of all the OSI transport protocols and is similar to the Transmission Control Protocol (TCP) in the Internet protocol suite.
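Segmentation and reassembly, one of TP4's functions, can be illustrated with a toy sketch. The Python functions below are purely conceptual (they do not reproduce the actual TP4 PDU format): a byte stream is split into numbered segments, and sorting by sequence number restores it even after out-of-order delivery:

```python
def segment(data: bytes, mtu: int) -> list[tuple[int, bytes]]:
    """Split a byte stream into (sequence number, payload) segments."""
    return [(seq, data[off:off + mtu])
            for seq, off in enumerate(range(0, len(data), mtu))]

def reassemble(segments: list[tuple[int, bytes]]) -> bytes:
    """Restore the stream; sorting by sequence number tolerates reordering."""
    return b"".join(payload for _, payload in sorted(segments))

pdus = segment(b"connectionless network service", mtu=8)
pdus.reverse()                                   # simulate out-of-order arrival
assert reassemble(pdus) == b"connectionless network service"
```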
== Protocols providing CLNS ==
Several protocols provide the CLNS service:
Connectionless-mode Network Protocol (CLNP), as specified in ITU-T Recommendation X.233.
End System-to-Intermediate System (ES-IS), a routing exchange protocol for use in conjunction with the protocol for providing the CLNS (ISO 9542).
Intermediate System-to-Intermediate System (IS-IS), an intradomain routing exchange protocol used in both the OSI and Internet environments (ISO/IEC 10589 and RFC 1142).
Interdomain Routing Protocol (IDRP), the OSI equivalent of BGP.
Signalling Connection Control Part (SCCP), as specified in ITU-T Recommendation Q.711, a Signalling System 7 protocol.
== See also ==
X.25 protocol suite, an OSI Connection Oriented Network Service (CONS)
== References ==
== External links ==
What is CLNS? - a brief introduction by Ivan Pepelnjak
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.
The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications.
The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
== History ==
=== Early research ===
Initially referred to as the DOD Internet Architecture Model, the Internet protocol suite has its roots in research and development sponsored by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After DARPA initiated the pioneering ARPANET in 1969, Steve Crocker established a "Networking Working Group" which developed a host-host protocol, the Network Control Program (NCP). In the early 1970s, DARPA started work on several other data transmission technologies, including mobile packet radio, packet satellite service, local area networks, and other data networks in the public and private domains. In 1972, Bob Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf joined Kahn with the goal of designing the next protocol generation for the ARPANET to enable internetworking. They drew on the experience from the ARPANET research community, the International Network Working Group, which Cerf chaired, and researchers at Xerox PARC.
By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Louis Pouzin and Hubert Zimmermann, designers of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974 by Cerf, Yogen Dalal and Carl Sunshine.
Initially, the Transmission Control Program (the Internet Protocol did not then exist as a separate protocol) provided only a reliable byte stream service to its users, not datagrams. Several versions were developed through the Internet Experiment Note series. As experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols, allowing users direct access to datagram service. Advocates included Bob Metcalfe and Yogen Dalal at Xerox PARC; Danny Cohen, who needed it for his packet voice work; and Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. In version 4, written in 1978, Postel split the Transmission Control Program into two distinct protocols, the Internet Protocol as connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This end-to-end principle was pioneered by Louis Pouzin in the CYCLADES network, based on the ideas of Donald Davies. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." In 1999, the IP over Avian Carriers formal protocol specification was created as a joke; it was successfully tested two years later, and ten years after that it was adapted for IPv6.
DARPA contracted with BBN Technologies, Stanford University, and the University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4) as the protocol that is still in use in the Internet, alongside its current successor, Internet Protocol version 6 (IPv6).
=== Early implementation ===
In 1975, a two-network IP communications test was performed between Stanford and University College London. In November 1977, a three-network IP test was conducted between sites in the US, the UK, and Norway. Several other IP prototypes were developed at multiple research centers between 1978 and 1983.
A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called gateway, but the term was changed to avoid confusion with other types of gateways.
=== Adoption ===
In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR/NDRE and Peter Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET from NCP to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated.
In 1985, the Internet Advisory Board (later Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on network interoperability by broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting.
IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.
Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984 Donald Gillies at MIT wrote ntcp, a multi-connection TCP that ran atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP).
The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. For Windows 3.1, the dominant PC operating system among consumers in the first half of the 1990s, Peter Tattam's release of the Trumpet Winsock TCP/IP stack was key to bringing the Internet to home users. Trumpet Winsock allowed TCP/IP operations over a serial connection (SLIP or PPP). The typical home PC of the time had an external Hayes-compatible modem connected via an RS-232 port with an 8250 or 16550 UART which required this type of stack. Later, Microsoft would release their own TCP/IP add-on stack for Windows for Workgroups 3.11 and a native stack in Windows 95. These events helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.
=== Formal specification and standards ===
The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF).
The characteristic architecture of the Internet protocol suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specifications of the suite are RFC 1122 and 1123, which broadly outline four abstraction layers (as well as related protocols): the link layer, IP layer, transport layer, and application layer, along with support protocols. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
== Key architectural principles ==
The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.
The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." The same document adds: "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features."
Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level.
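As a rough illustration of this layering, the Python sketch below hand-assembles application data inside a UDP header and then inside a minimal IPv4 header using the struct module. The field layout is the standard one for these headers, but the addresses and ports are arbitrary example values and both checksums are left at zero for brevity; real stacks perform these steps in the kernel:

```python
import struct

payload = b"hello"                                   # application layer data

# Transport layer: 8-byte UDP header (src port, dst port, length, checksum).
udp = struct.pack("!HHHH", 40000, 53, 8 + len(payload), 0) + payload

# Internet layer: minimal 20-byte IPv4 header wrapped around the UDP segment.
version_ihl = (4 << 4) | 5                           # IPv4, 5 x 32-bit header words
ip = struct.pack("!BBHHHBBH4s4s",
                 version_ihl, 0, 20 + len(udp),      # version/IHL, ToS, total length
                 0, 0,                               # identification, flags/fragment offset
                 64, 17, 0,                          # TTL, protocol 17 = UDP, checksum
                 bytes([192, 0, 2, 1]),              # source address
                 bytes([192, 0, 2, 2]))              # destination address

packet = ip + udp                                    # ready for the link layer to frame
```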
An early pair of architectural documents, RFC 1122 and 1123, titled Requirements for Internet Hosts, emphasizes architectural principles over layering. RFC 1122/23 are structured in sections referring to layers, but the documents refer to many other architectural principles and do not emphasize layering. They loosely define a four-layer model, with the layers having names, not numbers, as follows:
The application layer is the scope within which applications, or processes, create user data and communicate this data to other applications on another or the same host. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client–server model and peer-to-peer networking. This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services.
The transport layer performs host-to-host communications on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable connectionless datagram service. The Transmission Control Protocol provides flow-control, connection establishment, and reliable transmission of data.
The internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also the layer that establishes internetworking. Indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next host, functioning as an IP router, that has the connectivity to a network closer to the final data destination.
The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect the transmission of internet layer datagrams to next-neighbor hosts.
== Link layer ==
The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations but also virtual link layers such as virtual private networks and networking tunnels.
The link layer is used to move packets between the internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist and are not explicitly defined in the TCP/IP model.
The link layer in the TCP/IP model has corresponding functions in Layer 2 of the OSI model.
== Internet layer ==
Internetworking requires sending data from the source network to the destination network. This process is called routing and is supported by host addressing and identification using the hierarchical IP addressing system. The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet.
The internet layer does not distinguish between the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.
The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006.
== Transport layer ==
The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity in the form of end-to-end message transfer services that are independent of the underlying network and independent of the structure of user data and the logistics of exchanging information. Connectivity at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers).
For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services.
Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability.
TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream (a minimal socket sketch follows this list):
data arrives in-order
data has minimal error (i.e., correctness)
duplicate data is discarded
lost or discarded packets are resent
includes traffic congestion control
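The following Python sketch shows the resulting byte-stream abstraction over the loopback interface; the port is assigned by the operating system, and the two endpoints live in one process purely for compactness:

```python
import socket

# A listening endpoint; port 0 asks the OS for any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

# TCP presents an ordered, reliable byte stream: bytes written on one side
# arrive intact and in order, however they were packetized underneath.
client.sendall(b"segment one, segment two")
print(conn.recv(1024))        # b'segment one, segment two'

for s in (client, conn, server):
    s.close()
```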
The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multihoming support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP).
Reliability can also be achieved by running IP over a reliable data-link protocol such as the High-Level Data Link Control (HDLC).
The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol. Reliability is addressed through error detection using a checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is used over UDP and is designed for real-time data such as streaming media.
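For comparison, a connectionless exchange in the same style: each sendto carries one self-contained datagram, with no handshake and no delivery guarantee beyond the checksum:

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))           # no listen/accept: UDP has no connections

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"query", receiver.getsockname())    # one self-contained datagram

data, addr = receiver.recvfrom(512)
receiver.sendto(b"response", addr)                 # reply straight to the source
print(sender.recvfrom(512)[0])                     # b'response'
```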
The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications.
The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the OSI model, also called the transport layer.
QUIC is rapidly emerging as an alternative transport protocol. Although it is technically carried via UDP packets, it seeks to offer enhanced transport connectivity relative to TCP. HTTP/3 works exclusively via QUIC.
== Application layer ==
The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols. This may include some basic network support services such as routing protocols and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP streams or UDP datagrams), which in turn use lower layer protocols to effect actual data transfer.
The TCP/IP model does not consider the specifics of formatting and presenting data and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). According to the TCP/IP model, such functions are the realm of libraries and application programming interfaces. The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model.
Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application.
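Ephemeral port assignment can be observed directly: binding to port 0 asks the operating system to pick a free port from its ephemeral range, which is also what happens implicitly when a client connects outward without binding first. A small sketch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("", 0))              # port 0: let the OS choose an ephemeral port
print(s.getsockname()[1])    # e.g. 49831, drawn from the OS's ephemeral range
s.close()
```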
At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to the network infrastructure; user protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol.
Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic; rather, they just provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). Applications affected by NAT also sometimes need to consider the application payload.
== Layering evolution and representations in the literature ==
The Internet protocol suite evolved through research and development funded over a period of time. In this process, the specifics of protocol components and their layering changed. In addition, parallel research and commercial interests from industry associations competed with design features. In particular, efforts in the International Organization for Standardization led to a similar goal, but with a wider scope of networking in general. Efforts to consolidate the two principal schools of layering, which were superficially similar, but diverged sharply in detail, led independent textbook authors to formulate abridging teaching tools.
The following table shows various such networking models. The number of layers varies between three and seven.
Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources.
== Comparison of TCP/IP and OSI layering ==
The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the External Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport.
Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer. Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2.
The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet Protocol and architecture development is not intended to be OSI-compliant. RFC 3439, referring to the internet architecture, contains a section entitled: "Layering Considered Harmful".
For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session-layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are included in the transport layer of the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange.
Another difference is in the treatment of routing protocols. The OSI routing protocol IS-IS belongs to the network layer, and does not depend on CLNS for delivering packets from one router to another, but defines its own layer-3 encapsulation. In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, routing protocols are included in the application layer. Some authors, such as Tanenbaum in Computer Networks, describe routing protocols in the same layer as IP, reasoning that routing protocols inform decisions made by the forwarding process of routers.
IETF protocols can be encapsulated recursively, as demonstrated by tunnelling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunnelling at the network layer.
== Implementations ==
The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exist that are capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, and UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD), and is often accompanied by an integrated IPSec security layer.
== See also ==
BBN Report 1822, an early layered network model
Internetwork Packet Exchange
Fast Local Internet Protocol
List of automation protocols
List of information technology initialisms
List of IP protocol numbers
Lists of network protocols
List of TCP and UDP port numbers
== Notes ==
== References ==
== Bibliography ==
Douglas E. Comer (2001). Internetworking with TCP/IP – Principles, Protocols and Architecture. CET [i. e.] Computer Equipment and Trade. ISBN 86-7991-142-9.
Joseph G. Davies; Thomas F. Lee (2003). Microsoft Windows Server 2003 TCP/IP Protocols and Services. Microsoft Press. ISBN 0-7356-1291-9.
Forouzan, Behrouz A. (2003). TCP/IP Protocol Suite (2nd ed.). McGraw-Hill. ISBN 978-0-07-246060-5.
Craig Hunt (1998). TCP/IP Network Administration. O'Reilly. ISBN 1-56592-322-7.
Maufer, Thomas A. (1999). IP Fundamentals. Prentice Hall. ISBN 978-0-13-975483-8.
Ian McLean (2000). Windows 2000 TCP/IP Black Book. Coriolis Group Books. ISBN 1-57610-687-X.
Ajit Mungale (September 29, 2004). Pro .NET 1.1 Network Programming. Apress. ISBN 1-59059-345-6.
W. Richard Stevens (April 24, 1994). TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley. ISBN 0-201-63346-9.
W. Richard Stevens; Gary R. Wright (1994). TCP/IP Illustrated, Volume 2: The Implementation. Addison-Wesley. ISBN 0-201-63354-X.
W. Richard Stevens (1996). TCP/IP Illustrated, Volume 3: TCP for Transactions, HTTP, NNTP, and the UNIX Domain Protocols. Addison-Wesley. ISBN 0-201-63495-3.
Andrew S. Tanenbaum (2003). Computer Networks. Prentice Hall PTR. ISBN 0-13-066102-3.
Clark, D. (1988). "The Design Philosophy of the DARPA Internet Protocols" (PDF). Proceedings of the Sigcomm '88 Symposium on Communications Architectures and Protocols. ACM. pp. 106–114. doi:10.1145/52324.52336. ISBN 978-0897912792. S2CID 6156615. Retrieved October 16, 2011.
Cerf, Vinton G.; Kahn, Robert E. (May 1974). "A Protocol for Packet Network Intercommunication" (PDF). IEEE Transactions on Communications. 22 (5): 637–648. doi:10.1109/TCOM.1974.1092259.
== External links ==
Internet History – Pages on Robert Kahn, Vinton Cerf, and TCP/IP (reviewed by Cerf and Kahn).
T. Socolofsky; C. Kale (January 1991). A TCP/IP Tutorial. Network Working Group. doi:10.17487/RFC1180. RFC 1180. Informational.
The Ultimate Guide to TCP/IP
The TCP/IP Guide – A comprehensive look at the protocols and the procedure and processes involved
A Study of the ARPANET TCP/IP Digest, archived from the original on December 4, 2021
"Equation of State Calculations by Fast Computing Machines" is a scholarly article published by Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller in the Journal of Chemical Physics in 1953. This paper proposed what became known as the Metropolis Monte Carlo algorithm, later generalized as the Metropolis–Hastings algorithm, which forms the basis for Monte Carlo statistical mechanics simulations of atomic and molecular systems.
== Development ==
Some controversy exists with regard to credit for development of the algorithm. Prior to 2003, there was no detailed account of the algorithm's development. Then, shortly before his death, Marshall Rosenbluth attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics". Further historical clarification is made by Gubernatis in a 2005 journal article recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time.
Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of statistical mechanics and take ensemble averages instead of following detailed kinematics". Additional clarification of attribution is given in connection with the Metropolis–Hastings algorithm. The Rosenbluths would subsequently publish two additional, lesser-known papers using the Monte Carlo method, while the other authors would not continue to work on the topic. Already in 1953, however, Marshall was recruited to work on Project Sherwood and thereafter turned his attention to plasma physics. Here he laid the foundation for much of modern plasma fluid and kinetic theory, and particularly the theory of plasma instabilities.
== Algorithm ==
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. In statistical mechanics applications prior to the introduction of the Metropolis algorithm, the method consisted of generating a large number of random configurations of the system, computing the properties of interest (such as energy or density) for each configuration, and then producing a weighted average where the weight of each configuration is its Boltzmann factor, exp(−E/kT), where E is the energy, T is the temperature, and k is the Boltzmann constant. The key contribution of the Metropolis paper was the idea that
Instead of choosing configurations randomly, then weighting them with exp(−E/kT), we choose configurations with a probability exp(−E/kT) and weight them evenly.
This change makes the sampling focus on the low-energy configurations, which contribute the most to the Boltzmann average, resulting in improved convergence. To choose configurations with a probability exp(−E/kT) that can be weighted evenly, the authors devised the following algorithm: 1) each configuration is generated by a random move on the previous configuration and the new energy is computed; 2) if the new energy is lower, the move is always accepted; otherwise the move is accepted with a probability of exp(−ΔE/kT). When a move is rejected, the last accepted configuration is counted again for the statistical averages and is used as a base for the next attempted move.
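The procedure translates into only a few lines of code. The following Python sketch applies it to a one-dimensional toy energy function; the potential, step size, and temperature here are illustrative choices, not values from the 1953 paper:

```python
import math
import random

def metropolis(energy, x0, n_steps, step=0.5, kT=1.0):
    """Sample configurations with probability proportional to exp(-E/kT)."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)   # random move on the previous configuration
        e_new = energy(x_new)
        # Downhill moves are always accepted; uphill moves with probability exp(-dE/kT).
        if e_new <= e or random.random() < math.exp(-(e_new - e) / kT):
            x, e = x_new, e_new
        samples.append(x)   # a rejected move re-counts the last accepted configuration
    return samples

samples = metropolis(lambda x: x * x, x0=0.0, n_steps=10_000)
print(sum(samples) / len(samples))   # near 0 for this symmetric toy potential
```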
The main topic of the article was the numerical calculation of the equation of state for a system of rigid spheres in two dimensions. Subsequent work generalized the method to three dimensions and to fluids using the Lennard-Jones potential. The simulations were done for a system of 224 particles; each simulation consisted of up to 48 cycles, where each cycle consisted of moving each particle once and took about three minutes of computer time using the MANIAC computer at Los Alamos National Lab.
To minimize surface effects, the authors introduced the use of periodic boundary conditions. This means that the simulated system is treated as a unit cell in a lattice, and when a particle moves out of the cell, it automatically comes in through the other side (making the system a topological torus).
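In code, periodic boundary conditions amount to a wrap-around rule for coordinates and, for distances, the minimum-image convention. A one-dimensional sketch for a cell of side L:

```python
def wrap(coord: float, L: float) -> float:
    """Map a coordinate back into the periodic cell [0, L)."""
    return coord % L

def min_image(dx: float, L: float) -> float:
    """Shortest separation along one axis under periodic boundaries."""
    return dx - L * round(dx / L)

print(wrap(10.3, L=10.0))        # ~0.3: the particle re-enters from the other side
print(min_image(9.0, L=10.0))    # -1.0: the nearest periodic image is closer
```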
== Review and reception ==
According to a perspective published nearly fifty years later by William L. Jorgensen, "Metropolis et al. introduced the sampling method and periodic boundary conditions that remain at the heart of Monte Carlo statistical mechanics simulations of fluids. This was one of the major contributions to theoretical chemistry of the twentieth century." As of 2011, the article has been cited over 18,000 times.
In another perspective, it was said that although "the Metropolis algorithm began as a technique for attacking specific problems in numerical simulations of physical systems [...] later, the subject exploded as the scope of applications broadened in many surprising directions, including function minimization, computational geometry, and combinatorial counting. Today, topics related to the Metropolis algorithm constitute an entire field of computational science supported by a deep theory and having applications ranging from physical simulations to the foundations of computational complexity."
== See also ==
Timeline of scientific computing
== References ==
== External links ==
Metropolis, Nicholas; Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Augusta H.; Teller, Edward (1953). "Equation of State Calculations by Fast Computing Machines". J. Chem. Phys. 21 (6): 1087. Bibcode:1953JChPh..21.1087M. doi:10.1063/1.1699114. OSTI 4390578. S2CID 1046577. Retrieved 2011-10-20.
Nicholas Metropolis (1987). "The Beginning of the Monte Carlo Method". Los Alamos Science, No. 15, Page 125.
Herbert Anderson (1986). "Metropolis, Monte Carlo and the MANIAC". Los Alamos Science No. 14, Page 69.
The Wang and Landau algorithm, proposed by Fugao Wang and David P. Landau, is a Monte Carlo method designed to estimate the density of states of a system. The method performs a non-Markovian random walk that builds the density of states by quickly visiting the entire available energy spectrum. The Wang and Landau algorithm is an important method to obtain the density of states required to perform a multicanonical simulation.
The Wang–Landau algorithm can be applied to any system which is characterized by a cost (or energy) function. For instance, it has been applied to the solution of numerical integrals and the folding of proteins.
The Wang–Landau sampling is related to the metadynamics algorithm.
== Overview ==
The Wang and Landau algorithm is used to obtain an estimate for the density of states of a system characterized by a cost function. It uses a non-Markovian stochastic process which asymptotically converges to a multicanonical ensemble (i.e., a Metropolis–Hastings algorithm whose sampling distribution is the inverse of the density of states). The major consequence is that this sampling distribution leads to a simulation where the energy barriers are invisible. This means that the algorithm visits all the accessible states (favorable and less favorable) much faster than a Metropolis algorithm.
== Algorithm ==
Consider a system defined on a phase space $\Omega$ and a cost function E (e.g., the energy) bounded on a spectrum $E \in \Gamma = [E_{\min}, E_{\max}]$, which has an associated density of states $\rho(E)$, which is to be estimated. The estimator is $\hat{\rho}(E) \equiv \exp(S(E))$. Because the Wang and Landau algorithm works in discrete spectra, the spectrum $\Gamma$ is divided into N discrete values with a difference between them of $\Delta$, such that

$$\Delta = \frac{E_{\max} - E_{\min}}{N}.$$
Given this discrete spectrum, the algorithm is initialized by:
setting all entries of the microcanonical entropy to zero, $S(E_i) = 0$ for $i = 1, 2, \ldots, N$;
initializing $f = 1$; and
initializing the system randomly, by putting it in a random configuration $\boldsymbol{r} \in \Omega$.
The algorithm then performs a multicanonical ensemble simulation: a Metropolis–Hastings random walk in the phase space of the system with a probability distribution given by $P(\boldsymbol{r}) = 1/\hat{\rho}(E(\boldsymbol{r})) = \exp(-S(E(\boldsymbol{r})))$ and a probability of proposing a new state given by a probability distribution $g(\boldsymbol{r} \rightarrow \boldsymbol{r}')$. A histogram $H(E)$ of visited energies is stored. As in the Metropolis–Hastings algorithm, a proposal-acceptance step is performed, which consists of (see the Metropolis–Hastings algorithm overview):
proposing a state $\boldsymbol{r}' \in \Omega$ according to the arbitrary proposal distribution $g(\boldsymbol{r} \rightarrow \boldsymbol{r}')$;
accepting or refusing the proposed state according to

$$A(\boldsymbol{r} \rightarrow \boldsymbol{r}') = \min\left(1,\; e^{S - S'}\,\frac{g(\boldsymbol{r}' \rightarrow \boldsymbol{r})}{g(\boldsymbol{r} \rightarrow \boldsymbol{r}')}\right),$$

where $S = S(E(\boldsymbol{r}))$ and $S' = S(E(\boldsymbol{r}'))$.
After each proposal-acceptance step, the system transits to some value $E_i$, $H(E_i)$ is incremented by one, and the following update is performed:

$$S(E_i) \leftarrow S(E_i) + f.$$
This is the crucial step of the algorithm, and it is what makes the Wang and Landau algorithm non-Markovian: the stochastic process now depends on the history of the process. Hence the next time there is a proposal to a state with that particular energy $E_i$, that proposal is now more likely to be refused; in this sense, the algorithm forces the system to visit all of the spectrum equally. The consequence is that the histogram $H(E)$ becomes flatter and flatter. However, this flatness depends on how well the calculated entropy approximates the exact entropy, which naturally depends on the value of f. To approximate the exact entropy better and better (and thus make the histogram flatter), f is decreased after M proposal-acceptance steps:

$$f \leftarrow f/2.$$
It was later shown that updating f by constantly dividing by two can lead to saturation errors. A small modification of the Wang and Landau method avoids this problem: the f factor is made proportional to $1/t$, where $t$ is proportional to the number of steps of the simulation.
== Test system ==
We want to obtain the DOS for the harmonic oscillator potential.
$$E(x) = x^2.$$
The analytical DOS is given by,
$$\rho(E) = \int \delta(E(x) - E)\,dx = \int \delta(x^2 - E)\,dx,$$
by performing the last integral we obtain
$$\rho(E) \propto {\begin{cases} E^{-1/2} & \text{for } x \in \mathbb{R}^1 \\ \text{const} & \text{for } x \in \mathbb{R}^2 \\ E^{1/2} & \text{for } x \in \mathbb{R}^3 \end{cases}}$$
In general, the DOS for a multidimensional harmonic oscillator is given by some power of E; the exponent is a function of the dimension of the system.
Hence, we can use a simple harmonic oscillator potential to test the accuracy of the Wang–Landau algorithm, because we already know the analytic form of the density of states. We therefore compare the estimated density of states $\hat{\rho}(E)$ obtained by the Wang–Landau algorithm with $\rho(E)$.
== Sample code ==
The following is a sample code of the Wang–Landau algorithm in Python, where we assume that a symmetric proposal distribution g is used:
$$g(\boldsymbol{x}' \rightarrow \boldsymbol{x}) = g(\boldsymbol{x} \rightarrow \boldsymbol{x}').$$
The code considers a "system" which is the underlying system being studied.
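A minimal sketch along these lines, in which the "system" is simply the one-dimensional harmonic oscillator of the previous section; the bin count, sweep length, flatness threshold, and stopping value of f are illustrative choices rather than prescriptions of the method:

```python
import math
import random

# Wang-Landau sketch for the 1-D harmonic test system E(x) = x^2, x in [-1, 1],
# so the energy spectrum is [0, 1].  A symmetric proposal g is assumed.
N_BINS = 64
energy = lambda x: x * x
bin_of = lambda e: min(int(e * N_BINS), N_BINS - 1)

S = [0.0] * N_BINS              # running entropy estimate, S(E) = ln rho(E)
f = 1.0                         # modification factor
x = random.uniform(-1.0, 1.0)   # random initial configuration

while f > 1e-3:
    H = [0] * N_BINS            # histogram of visited energies for this f
    while True:
        for _ in range(10_000):
            x_new = x + random.uniform(-0.1, 0.1)   # symmetric random move
            if abs(x_new) <= 1.0:                   # stay inside configuration space
                i, j = bin_of(energy(x)), bin_of(energy(x_new))
                # Accept with probability min(1, exp(S_old - S_new)).
                if S[j] <= S[i] or random.random() < math.exp(S[i] - S[j]):
                    x = x_new
            k = bin_of(energy(x))     # a rejected move re-counts the old state
            H[k] += 1
            S[k] += f                 # the non-Markovian update S(E_k) <- S(E_k) + f
        if min(H) > 0.8 * sum(H) / N_BINS:   # histogram considered flat
            break
    f /= 2.0                          # refine: f <- f/2

# S now estimates ln rho(E) up to an additive constant;
# for this system rho(E) is proportional to E^(-1/2).
```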
== Wang and Landau molecular dynamics: Statistical Temperature Molecular Dynamics (STMD) ==
Molecular dynamics (MD) is usually preferable to Monte Carlo (MC), so it is desirable to have an MD algorithm incorporating the basic WL idea for flat energy sampling. That algorithm is Statistical Temperature Molecular Dynamics (STMD), developed by Jaegil Kim et al. at Boston University.
An essential first step was made with the Statistical Temperature Monte Carlo (STMC) algorithm. WLMC requires an extensive increase in the number of energy bins with system size, caused by working directly with the density of states. STMC is centered on an intensive quantity, the statistical temperature,
{\displaystyle T(E)=1/(dS(E)/dE)}, where E is the potential energy. When combined with the relation {\displaystyle \Omega (E)=e^{S(E)}}, where we set {\displaystyle k_{B}=1}, the WL rule for updating the density of states gives the rule for updating the discretized statistical temperature,
{\displaystyle {\tilde {T}}_{j\pm 1}^{\prime }=\alpha _{j\pm 1}{\tilde {T}}_{j\pm 1},}
where {\displaystyle \alpha _{j\pm 1}=1/(1\mp \delta f\,{\tilde {T}}_{j\pm 1})} and {\displaystyle \delta f=\ln f/(2\Delta E)}; here {\displaystyle \Delta E} is the energy bin size and {\displaystyle {\tilde {T}}} denotes the running estimate. As before, f is a factor greater than 1 that multiplies the estimate of the DOS for the i-th energy bin when the system visits an energy in that bin.
The details are given in the original STMC reference. With an initial guess for {\displaystyle T(E)} and the range restricted to lie between {\displaystyle T_{L}} and {\displaystyle T_{U}}, the simulation proceeds as in WLMC, with significant numerical differences. An interpolation of {\displaystyle {\tilde {T}}(E)} gives a continuum expression for the estimated {\displaystyle S(E)} upon integration of its inverse, allowing the use of larger energy bins than in WL. Different values of {\displaystyle S(E)} are available within the same energy bin when evaluating the acceptance probability. When histogram fluctuations are less than 20% of the mean, {\displaystyle f} is reduced according to {\displaystyle f\rightarrow {\sqrt {f}}}.
STMC was compared with WL for the Ising model and the Lennard-Jones liquid. Upon increasing energy bin size, STMC gets the same results over a considerable range, while the performance of WL deteriorates rapidly. STMD can use smaller initial values of {\displaystyle f_{d}=f-1} for more rapid convergence. In sum, STMC needs fewer steps to obtain the same quality of results.
Now consider the main result, STMD. It is based on the observation that in a standard MD simulation at temperature {\displaystyle T_{0}} with forces derived from the potential energy {\displaystyle E([x])}, where {\displaystyle [x]} denotes all the positions, the sampling weight for a configuration is {\displaystyle e^{-E([x])/T_{0}}}. Furthermore, if the forces are derived from a function {\displaystyle W(E)}, the sampling weight is {\displaystyle e^{-W(E([x]))/T_{0}}}.
For flat energy sampling, let the effective potential be {\displaystyle T_{0}S(E)} (entropic molecular dynamics). Then the weight is {\displaystyle e^{-S(E)}}. Since the density of states is {\displaystyle e^{+S(E)}}, their product gives flat energy sampling.
The forces are calculated as
{\displaystyle F=(-d/dx)\,T_{0}S(E)=T_{0}(dS/dE)(-d/dx)E([x])=(T_{0}/T(E))\,F^{0},}
where {\displaystyle F^{0}} denotes the usual force derived from the potential energy. Scaling the usual forces by the factor {\displaystyle (T_{0}/T(E))} produces flat energy sampling.
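As an illustration, the force scaling amounts to a one-line modification of an ordinary MD force routine. In the sketch below, compute_forces, potential_energy, and T_est (the running estimate of the statistical temperature) are assumed to be supplied by the surrounding MD code:

def stmd_forces(positions, compute_forces, potential_energy, T0, T_est):
    F0 = compute_forces(positions)     # usual forces, -dE/dx
    E = potential_energy(positions)
    scale = T0 / T_est(E)              # the factor (T0 / T(E))
    return [scale * f for f in F0]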
STMD starts with an ordinary MD algorithm at constant {\displaystyle T_{0}} and V. The forces are scaled as indicated, and the statistical temperature is updated every time step, using the same procedure as in STMC. As the simulation converges to flat energy sampling, the running estimate {\displaystyle {\tilde {T}}(E)} converges to the true {\displaystyle T(E)}. Technical details, including steps to speed convergence, are described in the original references.
In STMD, {\displaystyle T_{0}} is called the kinetic temperature: it controls the velocities as usual, but, unusually, does not enter the configurational sampling. Thus STMD can probe low energies with fast particles. Any canonical average can be calculated with reweighting, but the statistical temperature {\displaystyle T(E)} is immediately available with no additional analysis, which is extremely valuable for studying phase transitions. In finite nanosystems {\displaystyle T(E)} has a feature corresponding to every "subphase transition". For a sufficiently strong transition, an equal-area construction on an S-loop in {\displaystyle 1/T(E)} gives the transition temperature.
STMD has been refined by the BU group, and applied to several systems by them and others. It was recognized by D. Stelter that, despite the emphasis on working with intensive quantities, {\displaystyle \ln(f)} is extensive. However, {\displaystyle \delta f=\ln(f)/(2\Delta E)} is intensive, and the procedure {\displaystyle f\rightarrow {\sqrt {f}}} based on histogram flatness is replaced by cutting {\displaystyle \delta f} in half every fixed number of time steps. This simple change makes STMD entirely intensive and substantially improves performance for large systems. Furthermore, the final value of the intensive {\displaystyle \delta f} is a constant that determines the magnitude of the error in the converged {\displaystyle T(E)}, and is independent of system size. STMD is implemented in LAMMPS as fix stmd.
STMD is particularly useful for phase transitions. Equilibrium information is impossible to obtain with a canonical simulation, as supercooling or superheating is necessary to cause the transition. An STMD run, however, obtains flat energy sampling with a natural progression of heating and cooling, without getting trapped in the low-energy or high-energy state. Most recently it has been applied to the fluid/gel transition in lipid-wrapped nanoparticles.
Replica exchange STMD has also been presented by the BU group.
== References ==
Modeling photon propagation with Monte Carlo methods is a flexible yet rigorous approach to simulate photon transport. In the method, local rules of photon transport are expressed as probability distributions which describe the step size of photon movement between sites of photon-matter interaction and the angles of deflection in a photon's trajectory when a scattering event occurs. This is equivalent to modeling photon transport analytically by the radiative transfer equation (RTE), which describes the motion of photons using a differential equation. However, closed-form solutions of the RTE are often not possible; for some geometries, the diffusion approximation can be used to simplify the RTE, although this, in turn, introduces many inaccuracies, especially near sources and boundaries. In contrast, Monte Carlo simulations can be made arbitrarily accurate by increasing the number of photons traced. For example, a Monte Carlo simulation of a pencil beam incident on a semi-infinite medium can model both the initial ballistic photon flow and the later diffuse propagation.
The Monte Carlo method is necessarily statistical and therefore requires significant computation time to achieve precision. In addition Monte Carlo simulations can keep track of multiple physical quantities simultaneously, with any desired spatial and temporal resolution. This flexibility makes Monte Carlo modeling a powerful tool. Thus, while computationally inefficient, Monte Carlo methods are often considered the standard for simulated measurements of photon transport for many biomedical applications.
== Biomedical applications of Monte Carlo methods ==
=== Biomedical imaging ===
The optical properties of biological tissue offer an approach to biomedical imaging. There are many endogenous contrasts, including absorption from blood and melanin and scattering from nerve cells and cancer cell nuclei. In addition, fluorescent probes can be targeted to many different tissues. Microscopy techniques (including confocal, two-photon, and optical coherence tomography) have the ability to image these properties with high spatial resolution, but, since they rely on ballistic photons, their depth penetration is limited to a few millimeters. Imaging deeper into tissues, where photons have been multiply scattered, requires a deeper understanding of the statistical behavior of large numbers of photons in such an environment. Monte Carlo methods provide a flexible framework that has been used by different techniques to reconstruct optical properties deep within tissue. A brief introduction to a few of these techniques is presented here.
Photoacoustic tomography (PAT): In PAT, diffuse laser light is absorbed, which generates a local temperature rise. This local temperature variation in turn generates ultrasound waves via thermoelastic expansion, which are detected by an ultrasonic transducer. In practice, a variety of setup parameters are varied (e.g., light wavelength, transducer numerical aperture), and as a result Monte Carlo modeling is a valuable tool for predicting tissue response prior to experimental methods.
Diffuse optical tomography (DOT): DOT is an imaging technique that uses an array of near-infrared light sources and detectors to measure optical properties of biological tissues. A variety of contrasts can be measured, including the absorption due to oxy- and deoxy-hemoglobin (for functional neuro-imaging or cancer detection) and the concentration of fluorescent probes. In order to reconstruct an image, one must know the manner in which light traveled from a given source to a given detector and how the measurement depends on the distribution and changes in the optical properties (known as the forward model). Due to the highly scattering nature of biological tissue, such paths are complicated and the sensitivity functions are diffuse. The forward model is often generated using Monte Carlo methods.
=== Radiation therapy ===
The goal of radiation therapy is to deliver energy, generally in the form of ionizing radiation, to cancerous tissue while sparing the surrounding normal tissue. Monte Carlo modeling is commonly employed in radiation therapy to determine the peripheral dose the patient will experience due to scattering, both from the patient tissue as well as scattering from collimation upstream in the linear accelerator.
=== Photodynamic therapy ===
In Photodynamic therapy (PDT) light is used to activate chemotherapy agents. Due to the nature of PDT, it is useful to use Monte Carlo methods for modeling scattering and absorption in the tissue in order to ensure appropriate levels of light are delivered to activate chemotherapy agents.
== Implementation of photon transport in a scattering medium ==
Presented here is a model of a photon Monte Carlo method in a homogeneous infinite medium. The model is easily extended for multi-layered media, however. For an inhomogeneous medium, boundaries must be considered. In addition for a semi-infinite medium (in which photons are considered lost if they exit the top boundary), special consideration must be taken. For more information, please visit the links at the bottom of the page. We will solve the problem using an infinitely small point source (represented analytically as a Dirac delta function in space and time). Responses to arbitrary source geometries can be constructed using the method of Green's functions (or convolution, if enough spatial symmetry exists). The required parameters are the absorption coefficient, the scattering coefficient, and the scattering phase function. (If boundaries are considered the index of refraction for each medium must also be provided.) Time-resolved responses are found by keeping track of the total elapsed time of the photon's flight using the optical path length. Responses to sources with arbitrary time profiles can then be modeled through convolution in time.
In our simplified model we use the following variance reduction technique to reduce computational time. Instead of propagating photons individually, we create a photon packet with a specific weight (generally initialized as unity). As the photon packet interacts in the turbid medium, it deposits weight due to absorption, and the remaining weight is scattered to other parts of the medium. Any number of variables can be logged along the way, depending on the interest of a particular application. Each photon packet will repeatedly undergo the following numbered steps until it is either terminated, reflected, or transmitted. Any number of photon packets can be launched and modeled, until the resulting simulated measurements have the desired signal-to-noise ratio. Note that, as Monte Carlo modeling is a statistical process involving random numbers, we will use the variable ξ throughout as a pseudo-random number for many calculations.
=== Step 1: Launching a photon packet ===
In our model, we are ignoring initial specular reflectance associated with entering a medium that is not refractive index matched. With this in mind, we simply need to set the initial position of the photon packet as well as the initial direction. It is convenient to use a global coordinate system. We will use three Cartesian coordinates to determine position, along with three direction cosines to determine the direction of propagation. The initial start conditions will vary based on application, however for a pencil beam initialized at the origin, we can set the initial position and direction cosines as follows (isotropic sources can easily be modeled by randomizing the initial direction of each packet):
{\displaystyle {\begin{aligned}x&=0\\{\text{Position: }}y&=0\\z&=0\\\\\mu _{x}&=0\\{\text{Direction cosines: }}\mu _{y}&=0\\\mu _{z}&=1\end{aligned}}}
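For instance, the launch step can be written as a small helper; representing the packet as a plain Python dictionary is an assumption of this sketch:

def launch_packet():
    # pencil beam: start at the origin, travelling along +z with unit weight
    return {"x": 0.0, "y": 0.0, "z": 0.0,
            "ux": 0.0, "uy": 0.0, "uz": 1.0,
            "w": 1.0}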
=== Step 2: Step size selection and photon packet movement ===
The step size, s, is the distance the photon packet travels between interaction sites. There are a variety of methods for step size selection. Below is a basic form of photon step size selection (derived using the inverse distribution method and the Beer–Lambert law), which we use for our homogeneous model:
{\displaystyle s=-{\frac {\ln \xi }{\mu _{t}}}}
where {\displaystyle \xi } is a random number and {\displaystyle \mu _{t}} is the total interaction coefficient (i.e., the sum of the absorption and scattering coefficients).
Once a step size is selected, the photon packet is propagated by a distance s in a direction defined by the direction cosines. This is easily accomplished by simply updating the coordinates as follows:
{\displaystyle {\begin{aligned}x&\leftarrow x+\mu _{x}s\\y&\leftarrow y+\mu _{y}s\\z&\leftarrow z+\mu _{z}s\end{aligned}}}
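A sketch of this step, continuing the dictionary-based packet of the previous sketch:

import math
import random

def move_packet(p, mu_t):
    # inverse-CDF sampling of the Beer-Lambert step length; 1 - random()
    # lies in (0, 1], which avoids taking log(0)
    s = -math.log(1.0 - random.random()) / mu_t
    p["x"] += p["ux"] * s
    p["y"] += p["uy"] * s
    p["z"] += p["uz"] * s
    return s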
=== Step 3: Absorption and scattering ===
A portion of the photon weight is absorbed at each interaction site. This fraction of the weight is determined as follows:
{\displaystyle \Delta W={\frac {\mu _{a}}{\mu _{t}}}W}
where {\displaystyle \mu _{a}} is the absorption coefficient.
The weight fraction can then be recorded in an array if an absorption distribution is of interest for the particular study. The weight of the photon packet must then be updated as follows:
{\displaystyle W\leftarrow W-\Delta W}
Following absorption, the photon packet is scattered. The weighted average of the cosine of the photon scattering angle is known as scattering anisotropy (g), which has a value between −1 and 1. If the optical anisotropy is 0, this generally indicates that the scattering is isotropic. If g approaches a value of 1 this indicates that the scattering is primarily in the forward direction. In order to determine the new direction of the photon packet (and hence the photon direction cosines), we need to know the scattering phase function. Often the Henyey-Greenstein phase function is used. Then the scattering angle, θ, is determined using the following formula.
{\displaystyle \cos \theta ={\begin{cases}{\frac {1}{2g}}\left[1+g^{2}-\left({\frac {1-g^{2}}{1-g+2g\xi }}\right)^{2}\right]&{\text{if }}g\neq 0\\1-2\xi &{\text{if }}g=0\end{cases}}}
The azimuthal angle φ is generally assumed to be uniformly distributed between 0 and {\displaystyle 2\pi }. Based on this assumption, we can set:
{\displaystyle \varphi =2\pi \xi }
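A sketch of the weight deposition and angle sampling (the direction-cosine update itself is shown in the C code below):

import math
import random

def absorb_and_sample_angles(p, mu_a, mu_t, g):
    dw = (mu_a / mu_t) * p["w"]        # weight absorbed at this interaction site
    p["w"] -= dw
    xi = random.random()
    if g != 0.0:                        # Henyey-Greenstein inverse CDF
        tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
        cos_theta = (1.0 + g * g - tmp * tmp) / (2.0 * g)
    else:
        cos_theta = 1.0 - 2.0 * xi      # isotropic scattering
    phi = 2.0 * math.pi * random.random()
    return dw, cos_theta, phi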
Based on these angles and the original direction cosines, we can find a new set of direction cosines. The new propagation direction can be represented in the global coordinate system as follows:
{\displaystyle {\begin{aligned}\mu '_{x}&={\frac {\sin \theta (\mu _{x}\mu _{z}\cos \varphi -\mu _{y}\sin \varphi )}{\sqrt {1-\mu _{z}^{2}}}}+\mu _{x}\cos \theta \\\mu '_{y}&={\frac {\sin \theta (\mu _{y}\mu _{z}\cos \varphi +\mu _{x}\sin \varphi )}{\sqrt {1-\mu _{z}^{2}}}}+\mu _{y}\cos \theta \\\mu '_{z}&=-{\sqrt {1-\mu _{z}^{2}}}\sin \theta \cos \varphi +\mu _{z}\cos \theta \\\end{aligned}}}
For the special case {\displaystyle \mu _{z}=1}, use
{\displaystyle {\begin{aligned}\mu '_{x}&=\sin \theta \cos \varphi \\\mu '_{y}&=\sin \theta \sin \varphi \\\mu '_{z}&=\cos \theta \end{aligned}}}
or, for {\displaystyle \mu _{z}=-1}, use
{\displaystyle {\begin{aligned}\mu '_{x}&=\sin \theta \cos \varphi \\\mu '_{y}&=-\sin \theta \sin \varphi \\\mu '_{z}&=-\cos \theta \end{aligned}}}
C-code:
/*********************** Indicatrix *********************
*New direction cosines after scattering by angle theta, fi.
* mux new=(sin(theta)*(mux*muz*cos(fi)-muy*sin(fi)))/sqrt(1-muz^2)+mux*cos(theta)
* muy new=(sin(theta)*(muy*muz*cos(fi)+mux*sin(fi)))/sqrt(1-muz^2)+muy*cos(theta)
* muz new= - sqrt(1-muz^2)*sin(theta)*cos(fi)+muz*cos(theta)
*---------------------------------------------------------
*Input:
* muxs,muys,muzs - direction cosine before collision
* mutheta, fi - cosine of polar angle and the azimuthal angle
*---------------------------------------------------------
*Output:
* muxd,muyd,muzd - direction cosine after collision
*---------------------------------------------------------
*/
#include <math.h> /* for sqrt(), sin(), cos() */

void Indicatrix (double muxs, double muys, double muzs, double mutheta, double fi, double *muxd, double *muyd, double *muzd)
{
double costheta = mutheta;
double sintheta = sqrt(1.0-costheta*costheta); // sin(theta)
double sinfi = sin(fi);
double cosfi = cos(fi);
if (muzs == 1.0) {
*muxd = sintheta*cosfi;
*muyd = sintheta*sinfi;
*muzd = costheta;
} else if (muzs == -1.0) {
*muxd = sintheta*cosfi;
*muyd = -sintheta*sinfi;
*muzd = -costheta;
} else {
double denom = sqrt(1.0-muzs*muzs);
double muzcosfi = muzs*cosfi;
*muxd = sintheta*(muxs*muzcosfi-muys*sinfi)/denom + muxs*costheta;
*muyd = sintheta*(muys*muzcosfi+muxs*sinfi)/denom + muys*costheta;
*muzd = -denom*sintheta*cosfi + muzs*costheta;
}
}
=== Step 4: Photon termination ===
If a photon packet has experienced many interactions, for most applications the weight left in the packet is of little consequence. As a result, it is necessary to determine a means for terminating photon packets of sufficiently small weight. A simple method would use a threshold, and if the weight of the photon packet is below the threshold, the packet is considered dead. The aforementioned method is limited as it does not conserve energy. To keep total energy constant, a Russian roulette technique is often employed for photons below a certain weight threshold. This technique uses a roulette constant m to determine whether or not the photon will survive. The photon packet has one chance in m to survive, in which case it will be given a new weight of mW where W is the initial weight (this new weight, on average, conserves energy). All other times, the photon weight is set to 0 and the photon is terminated. This is expressed mathematically below:
{\displaystyle W={\begin{cases}mW&\xi \leq 1/m\\0&\xi >1/m\end{cases}}}
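A sketch of the roulette step; the weight threshold and roulette constant m are typical but illustrative choices:

import random

def roulette(p, m=10, w_threshold=1e-4):
    if p["w"] < w_threshold:
        if random.random() <= 1.0 / m:
            p["w"] *= m        # one chance in m: survivor carries weight m*W
        else:
            p["w"] = 0.0       # otherwise the packet is terminated
    return p["w"] > 0.0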
== Graphics Processing Units (GPU) and fast Monte Carlo simulations of photon transport ==
Monte Carlo simulation of photon migration in turbid media is a highly parallelizable problem, where a large number of photons are propagated independently, but according to identical rules and different random number sequences. The parallel nature of this special type of Monte Carlo simulation renders it highly suitable for execution on a graphics processing unit (GPU). The release of programmable GPUs started such a development, and since 2008 there have been a few reports on the use of GPU for high-speed Monte Carlo simulation of photon migration.
This basic approach can itself be parallelized by using multiple GPUs linked together. One example is the "GPU Cluster MCML," which can be downloaded from the authors' website (Monte Carlo Simulation of Light Transport in Multi-layered Turbid Media Based on GPU Clusters):
http://bmp.hust.edu.cn/GPU_Cluster/GPU_Cluster_MCML.HTM
== See also ==
Radiative transfer equation and diffusion theory for photon transport in biological tissue
Monte Carlo method
Convolution for optical broad-beam responses in scattering media
Monte Carlo methods for electron transport
== Links to other Monte Carlo resources ==
Optical Imaging Laboratory at Washington University in St. Louis (MCML)
Oregon Medical Laser Center
Photon migration Monte Carlo research at Lund University, Sweden GPU acceleration of Monte Carlo simulations and scalable Monte Carlo. Open source code for download.
Cloud-based Monte Carlo for light transport in turbid scattering medium The tool is free to use in research and non-commercial activities.
Light Transport in Tissue as an Example of Monte Carlo Simulation (with C++ source code).
== References ==
Wang, L-H; Wu Hsin-I (2007). Biomedical Optics: Principles and Imaging. Wiley.
L.-H. Wang; S. L. Jacques; L.-Q. Zheng (1995). "MCML—Monte Carlo modeling of light transport in multi-layered tissues". Computer Methods and Programs in Biomedicine. 47 (2): 131–146. doi:10.1016/0169-2607(95)01640-F. PMID 7587160.
L.-H. Wang; S. L. Jacques; L.-Q. Zheng (1997). "Conv—convolution for responses to a finite diameter photon beam incident on multi-layered tissues" (PDF). Computer Methods and Programs in Biomedicine. 54 (3): 141–150. doi:10.1016/S0169-2607(97)00021-7. PMID 9421660.
S. L. Jacques; L.-H. Wang (1995). "Monte Carlo modeling of light transport in tissues" (PDF). In A. J. Welch; M. J. C. van Gemert (eds.). Optical Thermal Response of Laser Irradiated Tissue. New York: Plenum Press. pp. 73–100.
L.-H. Wang; S. L. Jacques (1994). "Optimized radial and angular positions in Monte Carlo modeling" (PDF). Medical Physics. 21 (7): 1081–1083. Bibcode:1994MedPh..21.1081W. doi:10.1118/1.597351. PMID 7968840.
High-dimensional integrals in hundreds or thousands of variables occur commonly in finance. These integrals have to be computed numerically to within a threshold ϵ. If the integral is of dimension d then in the worst case, where one has a guarantee of error at most ϵ, the computational complexity is typically of order {\displaystyle \epsilon ^{-d}}. That is, the problem suffers the curse of dimensionality. In 1977 P. Boyle, University of Waterloo, proposed using Monte Carlo (MC) to evaluate options. Starting in early 1992, J. F. Traub, Columbia University, and a graduate student at the time, S. Paskov, used quasi-Monte Carlo (QMC) to price a collateralized mortgage obligation with parameters specified by Goldman Sachs. Even though it was believed by the world's leading experts that QMC should not be used for high-dimensional integration, Paskov and Traub found that QMC beat MC by one to three orders of magnitude and also enjoyed other desirable attributes. Their results were first published in 1995. Today QMC is widely used in the financial sector to value financial derivatives; see the list of books below.
QMC is not a panacea for all high-dimensional integrals. A number of explanations have been proposed for why QMC is so good for financial derivatives. This continues to be a very fruitful research area.
== Monte Carlo and quasi-Monte Carlo methods ==
Integrals in hundreds or thousands of variables are common in computational finance. These have to be approximated numerically to within an error threshold ϵ. It is well known that if a worst case guarantee of error at most ϵ is required, then the computational complexity of integration may be exponential in d, the dimension of the integrand; see the literature for details. To break this curse of dimensionality one can use the Monte Carlo (MC) method defined by
{\displaystyle \varphi ^{\mathop {\rm {MC}} }(f)={\frac {1}{n}}\sum _{i=1}^{n}f(x_{i}),}
where the evaluation points {\displaystyle x_{i}} are randomly chosen. It is well known that the expected error of Monte Carlo is of order {\displaystyle n^{-1/2}}. Thus, the cost of the algorithm that has error ϵ is of order {\displaystyle \epsilon ^{-2}}, breaking the curse of dimensionality.
Of course in computational practice pseudo-random points are used. Figure 1 shows the distribution of 500 pseudo-random points on the unit square.
Note there are regions where there are no points and other regions where there are clusters of points. It would be desirable to sample the integrand at uniformly distributed points. A rectangular grid would be uniform, but even if there were only 2 grid points in each Cartesian direction there would be {\displaystyle 2^{d}} points; so the desideratum is as few points as possible, chosen as uniformly as possible.
It turns out there is a well-developed part of number theory which deals exactly with this desideratum. Discrepancy is a measure of deviation from uniformity, so what one wants are low discrepancy sequences (LDS). An example of a distribution of 500 LDS points is given in Figure 2.
Numerous LDS have been created named after their inventors, for example:
Halton,
Hammersley,
Sobol,
Faure,
Niederreiter.
Generally, the quasi-Monte Carlo (QMC) method is defined by
{\displaystyle \varphi ^{\mathop {\rm {QMC}} }(f)={\frac {1}{n}}\sum _{i=1}^{n}f(x_{i}),}
where the {\displaystyle x_{i}} belong to an LDS. The standard terminology quasi-Monte Carlo is somewhat unfortunate since MC is a randomized method whereas QMC is purely deterministic.
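For illustration, LDS points are available in common numerical libraries. The sketch below uses the qmc module of SciPy (present since SciPy 1.7) to compare a plain MC estimate with a QMC estimate on a toy integrand whose exact integral over the unit cube is 1; the integrand and sizes are arbitrary choices:

import numpy as np
from scipy.stats import qmc

d, n = 8, 1024
f = lambda x: np.prod(1.0 + 0.1 * (x - 0.5), axis=1)   # integrates to 1 on [0,1]^d

x_mc = np.random.rand(n, d)                 # pseudo-random points
est_mc = f(x_mc).mean()

sobol = qmc.Sobol(d=d, scramble=True)       # Sobol low-discrepancy points
x_qmc = sobol.random(n)                     # n a power of 2 suits Sobol points
est_qmc = f(x_qmc).mean()

print(est_mc, est_qmc)                      # the QMC estimate is typically closer to 1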
The uniform distribution of LDS is desirable. But the worst case error of QMC is of order
{\displaystyle {\frac {(\log n)^{d}}{n}},}
where n is the number of sample points; see the literature for the theory of LDS and further references. The rate of convergence of LDS may be contrasted with the expected rate of convergence of MC, which is {\displaystyle n^{-1/2}}. For d small the rate of convergence of QMC is faster than that of MC, but for d large the factor {\displaystyle (\log n)^{d}} is devastating. For example, if d = 360, then even with {\displaystyle \log n=2} the QMC error is proportional to {\displaystyle 2^{360}}. Thus, it was widely believed by the world's leading experts that QMC should not be used for high-dimensional integration. For example, in 1992 Bratley, Fox and Niederreiter performed extensive testing on certain mathematical problems and concluded that "in high-dimensional problems (say d > 12), QMC seems to offer no practical advantage over MC". In 1993, Rensburg and Torrie compared QMC with MC for the numerical estimation of high-dimensional integrals which occur in computing virial coefficients for the hard-sphere fluid; they concluded that QMC is more effective than MC only if d < 10. As we shall see, tests on 360-dimensional integrals arising from a collateralized mortgage obligation (CMO) lead to very different conclusions.
Woźniakowski's 1991 paper, showing the connection between average case complexity of integration and QMC, led to new interest in QMC.
Woźniakowski's result received considerable coverage in the scientific press.
In early 1992, I. T. Vanderhoof, New York University, became aware of Woźniakowski's result and gave Woźniakowski's colleague J. F. Traub, Columbia University, a CMO with parameters set by Goldman Sachs. This CMO had 10 tranches, each requiring the computation of a 360-dimensional integral. Traub asked a Ph.D. student, Spassimir Paskov, to compare QMC with MC for the CMO. In 1992 Paskov built a software system called FinDer and ran extensive tests. To the Columbia research group's surprise and initial disbelief, Paskov reported that QMC was always superior to MC in a number of ways. Details are given below. Preliminary results were presented by Paskov and Traub to a number of Wall Street firms in Fall 1993 and Spring 1994. The firms were initially skeptical of the claim that QMC was superior to MC for pricing financial derivatives. A January 1994 article in Scientific American by Traub and Woźniakowski discussed the theoretical issues and reported that "preliminary results obtained by testing certain finance problems suggests the superiority of the deterministic methods in practice".
In Fall 1994 Paskov wrote a Columbia University Computer Science Report which appeared in slightly modified form in 1997.
In Fall 1995 Paskov and Traub published a paper in The Journal of Portfolio Management. They compared MC and two QMC methods. The two deterministic methods used Sobol and Halton low-discrepancy points. Since better LDS were created later, no comparison will be made between Sobol and Halton sequences. The experiments drew the following conclusions regarding the performance of MC and QMC on the 10-tranche CMO:
QMC methods converge significantly faster than MC,
MC is sensitive to the initial seed,
The convergence of QMC is smoother than the convergence of MC. This makes automatic termination easier for QMC.
To summarize, QMC beats MC for the CMO on accuracy, confidence level, and computational speed.
This paper was followed by reports on tests by a number of researchers which also led to the conclusion that QMC is superior to MC for a variety of high-dimensional finance problems. This includes papers by Caflisch and Morokoff (1996), Joy, Boyle, Tan (1996), Ninomiya and Tezuka (1996), Papageorgiou and Traub (1996), and Ackworth, Broadie and Glasserman (1997).
Further testing of the CMO was carried out by Anargyros Papageorgiou, who developed an improved version of the FinDer software system. The new results include the following:
Small number of sample points: for the hardest CMO tranche, QMC using the generalized Faure LDS due to S. Tezuka achieves accuracy {\displaystyle 10^{-2}} with just 170 points, whereas MC requires 2700 points for the same accuracy. This is significant because future interest rates and prepayment rates are unknown, so financial firms are content with accuracy of {\displaystyle 10^{-2}}.
Large number of sample points: the advantage of QMC over MC is further amplified as the sample size and accuracy demands grow. In particular, QMC is 20 to 50 times faster than MC with moderate sample sizes, and can be up to 1000 times faster than MC when high accuracy is desired.
Currently the highest reported dimension for which QMC outperforms MC is 65536 ({\displaystyle 2^{16}}). The software is the Sobol' sequence generator SobolSeq65536, which generates Sobol' sequences satisfying Property A for all dimensions and Property A' for the adjacent dimensions.
== Theoretical explanations ==
The results reported so far in this article are empirical. A number of possible theoretical explanations have been advanced. This has been a rich research area, leading to powerful new concepts, but a definite answer has not been obtained.
A possible explanation of why QMC is good for finance is the following. Consider a tranche of the CMO mentioned earlier. The integral gives the expected future cash flows from a basket of 30-year mortgages at 360 monthly intervals. Because of the discounted value of money, variables representing future times are increasingly less important. In a seminal paper, I. Sloan and H. Woźniakowski introduced the idea of weighted spaces. In these spaces the dependence on the successive variables can be moderated by weights. If the weights decrease sufficiently rapidly, the curse of dimensionality is broken even with a worst case guarantee. This paper led to a great amount of work on the tractability of integration and other problems. A problem is tractable when its complexity is of order {\displaystyle \epsilon ^{-p}} and p is independent of the dimension.
On the other hand, effective dimension was proposed by Caflisch, Morokoff and Owen as an indicator of the difficulty of high-dimensional integration. The purpose was to explain the remarkable success of quasi-Monte Carlo (QMC) in approximating the very-high-dimensional integrals in finance. They argued that the integrands are of low effective dimension and that is why QMC is much faster than Monte Carlo (MC). The impact of the arguments of Caflisch et al. was great. A number of papers deal with the relationship between the error of QMC and the effective dimension. It is known that QMC fails for certain functions that have high effective dimension. However, low effective dimension is not a necessary condition for QMC to beat MC and for high-dimensional integration to be tractable. In 2005, Tezuka exhibited a class of functions of d variables, all with maximum effective dimension equal to d. For these functions QMC is very fast, since its convergence rate is of order {\displaystyle n^{-1}}, where n is the number of function evaluations.
== Isotropic integrals ==
QMC can also be superior to MC and to other methods for isotropic problems, that is, problems where all variables are equally important. For example, Papageorgiou and Traub reported test results on the model integration problems suggested by the physicist B. D. Keister
{\displaystyle \left({\frac {1}{2\pi }}\right)^{d/2}\int _{\mathbb {R} ^{d}}\cos(\|x\|)e^{-\|x\|^{2}}\,dx,}
where {\displaystyle \|\cdot \|} denotes the Euclidean norm and d = 25. Keister reports that, using a standard numerical method, some 220,000 points were needed to obtain a relative error on the order of {\displaystyle 10^{-2}}. A QMC calculation using the generalized Faure low discrepancy sequence (QMC-GF) used only 500 points to obtain the same relative error. The same integral was tested for a range of values of d up to d = 100. Its error was {\displaystyle c\cdot n^{-1}}, with {\displaystyle c<110}, where n is the number of evaluations of f. This may be compared with the MC method, whose error was proportional to {\displaystyle n^{-1/2}}.
These are empirical results. In a theoretical investigation, Papageorgiou proved that the convergence rate of QMC for a class of d-dimensional isotropic integrals which includes the integral defined above is of the order
{\displaystyle {\frac {\sqrt {\log n}}{n}}.}
This is with a worst case guarantee, compared to the expected convergence rate {\displaystyle n^{-1/2}} of Monte Carlo, and shows the superiority of QMC for this type of integral.
In another theoretical investigation, Papageorgiou presented sufficient conditions for fast QMC convergence. The conditions apply to isotropic and non-isotropic problems and, in particular, to a number of problems in computational finance. He presented classes of functions where even in the worst case the convergence rate of QMC is of order
{\displaystyle n^{-1+p(\log n)^{-1/2}},}
where {\displaystyle p\geq 0} is a constant that depends on the class of functions. But this is only a sufficient condition and leaves open the major question we pose in the next section.
== Open questions ==
Characterize for which high-dimensional integration problems QMC is superior to MC.
Characterize types of financial instruments for which QMC is superior to MC.
== See also ==
Monte Carlo methods in finance
Historical simulation (finance)
== Resources ==
=== Books ===
Bruno Dupire (1998). Monte Carlo: methodologies and applications for pricing and risk management. Risk. ISBN 1-899332-91-X.
Paul Glasserman (2003). Monte Carlo methods in financial engineering. Springer-Verlag. ISBN 0-387-00451-3.
Peter Jaeckel (2002). Monte Carlo methods in finance. John Wiley and Sons. ISBN 0-471-49741-X.
Don L. McLeish (2005). Monte Carlo Simulation & Finance. ISBN 0-471-67778-7.
Christian P. Robert, George Casella (2004). Monte Carlo Statistical Methods. ISBN 0-387-21239-6.
=== Models ===
Spreadsheets available for download, Prof. Marco Dias, PUC-Rio
== References ==
In mathematical finance, a Monte Carlo option model uses Monte Carlo methods to calculate the value of an option with multiple sources of uncertainty or with complicated features. The first application to option pricing was by Phelim Boyle in 1977 (for European options). In 1996, M. Broadie and P. Glasserman showed how to price Asian options by Monte Carlo. An important development was the introduction in 1996 by Carriere of Monte Carlo methods for options with early exercise features.
== Methodology ==
As is standard, Monte Carlo valuation relies on risk neutral valuation. Here the price of the option is its discounted expected value; see risk neutrality and rational pricing. The technique applied then is: (1) generate a large number of possible, but random, price paths for the underlying (or underlyings) via simulation; (2) calculate the associated exercise value (i.e. "payoff") of the option for each path; (3) average these payoffs; and (4) discount to today. This result is the value of the option.
This approach, although relatively straightforward, allows for increasing complexity:
An option on equity may be modelled with one source of uncertainty: the price of the underlying stock in question. Here the price of the underlying instrument {\displaystyle S_{t}} is usually modelled such that it follows a geometric Brownian motion with constant drift μ and volatility σ. So:
{\displaystyle dS_{t}=\mu S_{t}\,dt+\sigma S_{t}\,dW_{t},}
where {\displaystyle dW_{t}} is found via a random sampling from a normal distribution; see further under Black–Scholes. Since the underlying random process is the same, for enough price paths the value of a European option here should be the same as under Black–Scholes. More generally though, simulation is employed for path dependent exotic derivatives, such as Asian options.
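A minimal sketch of this procedure for a vanilla European call under these dynamics; the parameter values are illustrative, and the risk-neutral drift r replaces μ:

import math
import numpy as np

def mc_european_call(S0, K, r, sigma, T, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)                  # one normal draw per path
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * Z)
    payoff = np.maximum(ST - K, 0.0)                  # exercise value per path
    return math.exp(-r * T) * payoff.mean()           # average and discount

# For S0 = K = 100, r = 5%, sigma = 20%, T = 1 the Black-Scholes value is about 10.45:
print(mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0))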
In other cases, the source of uncertainty may be at a remove. For example, for bond options the underlying is a bond, but the source of uncertainty is the annualized interest rate (i.e. the short rate). Here, for each randomly generated yield curve we observe a different resultant bond price on the option's exercise date; this bond price is then the input for the determination of the option's payoff. The same approach is used in valuing swaptions, where the value of the underlying swap is also a function of the evolving interest rate. (Whereas these options are more commonly valued using lattice based models, as above, for path dependent interest rate derivatives – such as CMOs – simulation is the primary technique employed.) For the models used to simulate the interest-rate see further under Short-rate model; "to create realistic interest rate simulations" Multi-factor short-rate models are sometimes employed. To apply simulation here, the analyst must first "calibrate" the model parameters, such that bond prices produced by the model best fit observed market prices.
Monte Carlo Methods allow for a compounding in the uncertainty. For example, where the underlying is denominated in a foreign currency, an additional source of uncertainty will be the exchange rate: the underlying price and the exchange rate must be separately simulated and then combined to determine the value of the underlying in the local currency. In all such models, correlation between the underlying sources of risk is also incorporated; see Cholesky decomposition § Monte Carlo simulation. Further complications, such as the impact of commodity prices or inflation on the underlying, can also be introduced. Since simulation can accommodate complex problems of this sort, it is often used in analysing real options where management's decision at any point is a function of multiple underlying variables.
Simulation can similarly be used to value options where the payoff depends on the value of multiple underlying assets such as a Basket option or Rainbow option. Here, correlation between asset returns is likewise incorporated.
As required, Monte Carlo simulation can be used with any type of probability distribution, including changing distributions: the modeller is not limited to normal or log-normal returns; see for example Datar–Mathews method for real option valuation. Additionally, the stochastic process of the underlying(s) may be specified so as to exhibit jumps or mean reversion or both; this feature makes simulation the primary valuation method applicable to energy derivatives. Further, some models even allow for (randomly) varying statistical (and other) parameters of the sources of uncertainty. For example, in models incorporating stochastic volatility, the volatility of the underlying changes with time; see Heston model.
== Least Square Monte Carlo ==
Least Square Monte Carlo is a technique for valuing early-exercise options (i.e. Bermudan or American options). It was first introduced by Jacques Carriere in 1996.
It is based on the iteration of a two-step procedure:
First, a backward induction process is performed in which a value is recursively assigned to every state at every time step. The value is defined as the least squares regression against market price of the option value at that state and time step. The option value for this regression is defined as the value of exercise possibilities (dependent on market price) plus the value of the time step which that exercise would result in (defined in the previous step of the process).
Secondly, when all states are valued for every time step, the value of the option is calculated by moving through the time steps and states, making an optimal decision on option exercise at every step on the basis of a price path and the value of the state that would result. This second step can be done with multiple price paths to add a stochastic effect to the procedure.
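A minimal sketch of this regression-based backward induction for an American put; the quadratic polynomial basis, the restriction of the regression to in-the-money paths, and all parameters are illustrative choices:

import numpy as np

def lsm_american_put(S0, K, r, sigma, T, n_steps=50, n_paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # simulate GBM paths under the risk-neutral measure
    Z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
    V = np.maximum(K - S[:, -1], 0.0)        # exercise value at maturity
    for t in range(n_steps - 2, -1, -1):     # backward induction
        V *= disc                            # discount continuation values one step
        itm = K - S[:, t] > 0.0              # regress on in-the-money paths only
        if itm.sum() > 3:
            coeffs = np.polyfit(S[itm, t], V[itm], deg=2)
            continuation = np.polyval(coeffs, S[itm, t])
            exercise = K - S[itm, t]
            V[itm] = np.where(exercise > continuation, exercise, V[itm])
    return disc * V.mean()                   # discount the first step to time 0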
== Application ==
As can be seen, Monte Carlo methods are particularly useful in the valuation of options with multiple sources of uncertainty or with complicated features, which would make them difficult to value through a straightforward Black–Scholes-style or lattice-based computation. The technique is thus widely used in valuing path dependent structures like lookback and Asian options and in real options analysis. Additionally, as above, the modeller is not limited as to the probability distribution assumed.
Conversely, however, if an analytical technique for valuing the option exists—or even a numeric technique, such as a (modified) pricing tree—Monte Carlo methods will usually be too slow to be competitive. They are, in a sense, a method of last resort; see further under Monte Carlo methods in finance. With faster computing capability this computational constraint is less of a concern.
== See also ==
Monte Carlo methods in finance
Quasi-Monte Carlo methods in finance
Stochastic modelling (insurance)
Stochastic asset model
== References ==
Notes
Sources
Primary references
Boyle, Phelim P. (1977). "Options: A Monte Carlo Approach". Journal of Financial Economics. 4 (3): 323–338. doi:10.1016/0304-405x(77)90005-8. Retrieved June 28, 2012.
Broadie, M.; Glasserman, P. (1996). "Estimating Security Price Derivatives Using Simulation" (PDF). Management Science. 42 (2): 269–285. CiteSeerX 10.1.1.196.1128. doi:10.1287/mnsc.42.2.269. Retrieved June 28, 2012.
Longstaff, F.A.; Schwartz, E.S. (2001). "Valuing American options by simulation: a simple least squares approach". Review of Financial Studies. 14: 113–148. CiteSeerX 10.1.1.155.3462. doi:10.1093/rfs/14.1.113. Retrieved June 28, 2012.
Bibliography
Bruno Dupire (1998). Monte Carlo: methodologies and applications for pricing and risk management. Risk.
Paul Glasserman (2003). Monte Carlo methods in financial engineering. Springer-Verlag. ISBN 978-0-387-00451-8.
Peter Jaeckel (2002). Monte Carlo methods in finance. John Wiley and Sons. ISBN 978-0-471-49741-7.
Don L. McLeish (2005). Monte Carlo Simulation & Finance. ISBN 978-0-471-67778-9.
Christian P. Robert, George Casella (2004). Monte Carlo Statistical Methods. ISBN 978-0-387-21239-5.
== External links ==
Online tools
Monte Carlo simulated stock price time series and random number generator (allows for choice of distribution), Steven Whitney
Discussion papers and documents
Monte Carlo Simulation, Prof. Don M. Chance, Louisiana State University
Pricing complex options using a simple Monte Carlo Simulation, Peter Fink (reprint at quantnotes.com)
MonteCarlo Simulation in Finance, global-derivatives.com
Monte Carlo Derivative valuation, contd., Timothy L. Krehbiel, Oklahoma State University–Stillwater
Applications of Monte Carlo Methods in Finance: Option Pricing, Y. Lai and J. Spanier, Claremont Graduate University
Option pricing by simulation, Bernt Arne Ødegaard, Norwegian School of Management
Pricing and Hedging Exotic Options with Monte Carlo Simulations, Augusto Perilla, Diana Oancea, Prof. Michael Rockinger, HEC Lausanne
Monte Carlo Method, riskglossary.com | Wikipedia/Monte_Carlo_methods_for_option_pricing |
In probability theory, an interacting particle system (IPS) is a stochastic process {\displaystyle (X(t))_{t\in \mathbb {R} ^{+}}} on some configuration space {\displaystyle \Omega =S^{G}} given by a site space, a countably-infinite-order graph {\displaystyle G}, and a local state space, a compact metric space {\displaystyle S}. More precisely, IPS are continuous-time Markov jump processes describing the collective behavior of stochastically interacting components. IPS are the continuous-time analogue of stochastic cellular automata.
Among the main examples are the voter model, the contact process, the asymmetric simple exclusion process (ASEP), the Glauber dynamics and in particular the stochastic Ising model.
IPS are usually defined via their Markov generator, giving rise to a unique Markov process using Markov semigroups and the Hille–Yosida theorem. The generator again is given via so-called transition rates {\displaystyle c_{\Lambda }(\eta ,\xi )>0}, where {\displaystyle \Lambda \subset G} is a finite set of sites and {\displaystyle \eta ,\xi \in \Omega } with {\displaystyle \eta _{i}=\xi _{i}} for all {\displaystyle i\notin \Lambda }. The rates describe exponential waiting times of the process to jump from configuration η into configuration ξ. More generally, the transition rates are given in the form of a finite measure {\displaystyle c_{\Lambda }(\eta ,d\xi )} on {\displaystyle S^{\Lambda }}.
The generator L of an IPS has the following form. First, the domain of L is a subset of the space of "observables", that is, the set of real-valued continuous functions on the configuration space Ω. Then for any observable f in the domain of L, one has
{\displaystyle Lf(\eta )=\sum _{\Lambda }\int _{\xi :\xi _{\Lambda ^{c}}=\eta _{\Lambda ^{c}}}c_{\Lambda }(\eta ,d\xi )[f(\xi )-f(\eta )].}
For example, for the stochastic Ising model we have {\displaystyle G=\mathbb {Z} ^{d}}, {\displaystyle S=\{-1,+1\}}, {\displaystyle c_{\Lambda }=0} if {\displaystyle \Lambda \neq \{i\}} for some {\displaystyle i\in G}, and
{\displaystyle c_{i}(\eta ,\eta ^{i})=\exp \left[-\beta \sum _{j:|j-i|=1}\eta _{i}\eta _{j}\right],}
where {\displaystyle \eta ^{i}} is the configuration equal to η except that it is flipped at site i, and β is a new parameter modeling the inverse temperature.
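As a concrete illustration, these rates can be simulated directly on a finite ring: waiting times are exponential with parameter given by the total rate, and a site is chosen to flip with probability proportional to its rate. The finite size, parameter values, and the O(n) rate recomputation per event are simplifications of this sketch:

import math
import random

def ising_ring(n=100, beta=0.5, t_max=10.0, seed=1):
    random.seed(seed)
    eta = [random.choice([-1, 1]) for _ in range(n)]
    t = 0.0
    while t < t_max:
        # flip rate c_i = exp(-beta * eta_i * (eta_{i-1} + eta_{i+1})) on the ring
        rates = [math.exp(-beta * eta[i] * (eta[i - 1] + eta[(i + 1) % n]))
                 for i in range(n)]
        total = sum(rates)
        t += random.expovariate(total)       # exponential waiting time
        u, acc = random.random() * total, 0.0
        for i, rate in enumerate(rates):
            acc += rate
            if u <= acc:
                eta[i] = -eta[i]             # flip the selected site
                break
    return eta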
== The Voter model ==
The voter model (usually in continuous time, but there are discrete versions as well) is a process similar to the contact process. In this process {\displaystyle \eta (x)} is taken to represent a voter's attitude on a particular topic. Voters reconsider their opinions at times distributed according to independent exponential random variables (this gives a Poisson process locally; note that there are in general infinitely many voters, so no global Poisson process can be used). At times of reconsideration, a voter chooses one neighbor uniformly from amongst all neighbors and takes that neighbor's opinion. One can generalize the process by allowing the picking of neighbors to be something other than uniform.
=== Discrete time process ===
In the discrete time voter model in one dimension, {\displaystyle \xi _{t}(x):\mathbb {Z} \to \{0,1\}} represents the state of particle x at time t. Informally, each individual is arranged on a line and can "see" other individuals that are within a radius r. If more than a certain proportion θ of these people disagree, then the individual changes her attitude; otherwise she keeps it the same. Durrett and Steif (1993) and Steif (1994) show that for large radii there is a critical value {\displaystyle \theta _{c}} such that if {\displaystyle \theta >\theta _{c}} most individuals never change, and for {\displaystyle \theta \in (1/2,\theta _{c})} in the limit most sites agree. (Both of these results assume the probability of {\displaystyle \xi _{0}(x)=1} is one half.)
This process has a natural generalization to more dimensions, some results for this are discussed in Durrett and Steif (1993).
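A sketch of one synchronous update of this model on a finite ring; the finite size, radius, and threshold are illustrative assumptions:

import random

def threshold_voter_step(xi, r=3, theta=0.6):
    n = len(xi)
    new = xi[:]
    for x in range(n):
        neighbours = [xi[(x + k) % n] for k in range(-r, r + 1) if k != 0]
        disagree = sum(1 for v in neighbours if v != xi[x]) / len(neighbours)
        if disagree > theta:               # too many neighbours disagree: flip
            new[x] = 1 - xi[x]
    return new

xi = [random.choice([0, 1]) for _ in range(200)]   # initial opinions, p = 1/2
for _ in range(50):
    xi = threshold_voter_step(xi)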
=== Continuous time process ===
The continuous time process is similar in that it imagines each individual has a belief at a time and changes it based on the attitudes of its neighbors. The process is described informally by Liggett (1985, 226), "Periodically (i.e., at independent exponential times), an individual reassesses his view in a rather simple way: he chooses a 'friend' at random with certain probabilities and adopts his position." A model was constructed with this interpretation by Holley and Liggett (1975).
This process is equivalent to a process first suggested by Clifford and Sudbury (1973) where animals are in conflict over territory and are equally matched. A site is selected to be invaded by a neighbor at a given time.
== References ==
Clifford, Peter; Aidan Sudbury (1973). "A Model for Spatial Conflict". Biometrika. 60 (3): 581–588. doi:10.1093/biomet/60.3.581.
Durrett, Richard; Jeffrey E. Steif (1993). "Fixation Results for Threshold Voter Systems". The Annals of Probability. 21 (1): 232–247. doi:10.1214/aop/1176989403.
Holley, Richard A.; Thomas M. Liggett (1975). "Ergodic Theorems for Weakly Interacting Infinite Systems and The Voter Model". The Annals of Probability. 3 (4): 643–663. doi:10.1214/aop/1176996306.
Steif, Jeffrey E. (1994). "The Threshold Voter Automaton at a Critical Point". The Annals of Probability. 22 (3): 1121–1139. doi:10.1214/aop/1176988597.
Liggett, Thomas M. (1997). "Stochastic Models of Interacting Systems". The Annals of Probability. 25 (1). Institute of Mathematical Statistics: 1–29. doi:10.1214/aop/1024404276. ISSN 0091-1798.
Liggett, Thomas M. (1985). Interacting Particle Systems. New York: Springer Verlag. ISBN 0-387-96069-4.
Multilevel Monte Carlo (MLMC) methods in numerical analysis are algorithms for computing expectations that arise in stochastic simulations. Just as with Monte Carlo methods, they rely on repeated random sampling, but the samples are taken on different levels of accuracy. MLMC methods can greatly reduce the computational cost of standard Monte Carlo methods by taking most samples with low accuracy and correspondingly low cost, and only very few samples at high accuracy and correspondingly high cost.
== Goal ==
The goal of a multilevel Monte Carlo method is to approximate the expected value {\displaystyle \operatorname {E} [G]} of a random variable G that is the output of a stochastic simulation. Suppose this random variable cannot be simulated exactly, but there is a sequence of approximations {\displaystyle G_{0},G_{1},\ldots ,G_{L}} with increasing accuracy, but also increasing cost, that converges to G as {\displaystyle L\rightarrow \infty }. The basis of the multilevel method is the telescoping sum identity
{\displaystyle \operatorname {E} [G_{L}]=\operatorname {E} [G_{0}]+\sum _{\ell =1}^{L}\operatorname {E} [G_{\ell }-G_{\ell -1}],}
which is trivially satisfied because of the linearity of the expectation operator. Each of the expectations {\displaystyle \operatorname {E} [G_{\ell }-G_{\ell -1}]} is then approximated by a Monte Carlo method, resulting in the multilevel Monte Carlo method. Note that taking a sample of the difference {\displaystyle G_{\ell }-G_{\ell -1}} at level ℓ requires a simulation of both {\displaystyle G_{\ell }} and {\displaystyle G_{\ell -1}}.
The MLMC method works if the variances {\displaystyle \operatorname {V} [G_{\ell }-G_{\ell -1}]\rightarrow 0} as {\displaystyle \ell \rightarrow \infty }, which will be the case if both {\displaystyle G_{\ell }} and {\displaystyle G_{\ell -1}} approximate the same random variable G. By the central limit theorem, this implies that one needs fewer and fewer samples to accurately approximate the expectation of the difference {\displaystyle G_{\ell }-G_{\ell -1}} as {\displaystyle \ell \rightarrow \infty }. Hence, most samples will be taken on level 0, where samples are cheap, and only very few samples will be required at the finest level L. In this sense, MLMC can be considered as a recursive control variate strategy.
== Applications ==
The first application of MLMC is attributed to Mike Giles, in the context of stochastic differential equations (SDEs) for option pricing; however, earlier traces are found in the work of Heinrich in the context of parametric integration. Here, the random variable {\displaystyle G=f(X(T))} is known as the payoff function, and the sequence of approximations {\displaystyle G_{\ell }}, {\displaystyle \ell =0,\ldots ,L}, use an approximation to the sample path {\displaystyle X(t)} with time step {\displaystyle h_{\ell }=2^{-\ell }T}.
The application of MLMC to problems in uncertainty quantification (UQ) is an active area of research. An important prototypical example of these problems are partial differential equations (PDEs) with random coefficients. In this context, the random variable G is known as the quantity of interest, and the sequence of approximations corresponds to a discretization of the PDE with different mesh sizes.
== An algorithm for MLMC simulation ==
A simple level-adaptive algorithm for MLMC simulation is given below in pseudo-code.

{\displaystyle L\gets 0}
repeat
    Take warm-up samples at level {\displaystyle L}
    Compute the sample variance on all levels {\displaystyle \ell =0,\ldots ,L}
    Define the optimal number of samples {\displaystyle N_{\ell }} on all levels {\displaystyle \ell =0,\ldots ,L}
    Take additional samples on each level {\displaystyle \ell } according to {\displaystyle N_{\ell }}
    if {\displaystyle L\geq 2} then
        Test for convergence
    end
    if not converged then
        {\displaystyle L\gets L+1}
    end
until converged
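As an illustration only, this pseudo-code might be concretised as follows. The sketch assumes a user-supplied function sample_difference(l, n) that returns n independent samples of {\displaystyle G_{\ell }-G_{\ell -1}} (of {\displaystyle G_{0}} when l = 0), a geometric cost model {\displaystyle C_{\ell }=2^{\ell }}, and a deliberately simplified convergence test; none of these choices are prescribed by the method itself.

```python
import numpy as np

def mlmc(sample_difference, eps, n_warmup=100, max_level=10):
    """Level-adaptive MLMC driver (simplified sketch).
    sample_difference(l, n) must return n samples of G_l - G_{l-1}."""
    L = 0
    sums, counts = [], []
    while True:
        # Warm-up samples at the newly added level.
        y = np.asarray(sample_difference(L, n_warmup))
        sums.append(y)
        counts.append(len(y))

        # Sample variances and assumed costs on all levels 0..L.
        V = np.array([np.var(s) for s in sums])
        C = 2.0 ** np.arange(L + 1)

        # Standard choice: N_l proportional to sqrt(V_l / C_l),
        # targeting a statistical error of eps / sqrt(2).
        N_opt = np.ceil(2.0 / eps ** 2 * np.sqrt(V / C) * np.sum(np.sqrt(V * C)))

        # Take additional samples where needed.
        for l in range(L + 1):
            extra = int(N_opt[l]) - counts[l]
            if extra > 0:
                sums[l] = np.concatenate([sums[l], sample_difference(l, extra)])
                counts[l] += extra

        # Crude stand-in for the bias/convergence test of the finest level.
        converged = L >= 2 and abs(np.mean(sums[L])) < eps / np.sqrt(2.0)
        if converged or L == max_level:
            break
        L += 1
    return sum(np.mean(s) for s in sums)
```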
== Extensions of MLMC ==
Recent extensions of the multilevel Monte Carlo method include multi-index Monte Carlo, where more than one direction of refinement is considered, and the combination of MLMC with the Quasi-Monte Carlo method.
== See also ==
Monte Carlo method
Monte Carlo methods in finance
Quasi-Monte Carlo methods in finance
Uncertainty quantification
Partial differential equations with random coefficients
== References == | Wikipedia/Multilevel_Monte_Carlo_method |
The Journal of Modern Applied Statistical Methods is a biannual peer-reviewed open access journal. It was established in 2002 by Shlomo Sawilowsky, and is currently published by the Wayne State University Library System in Detroit, MI. The Current Index to Statistics classifies it as one of over 160 core statistics journals. The journal originally appeared as a print and electronic journal through volume 8(1) in 2009, and subsequently as an electronic journal only. It publishes peer-reviewed work pertaining to new statistical tests and the comparison of existing statistical tests; bootstrap, Jackknife, and resampling methods; nonparametric, robust, permutation, exact, and approximate randomization methods; and statistical algorithms, pseudorandom number generators, and simulation techniques. The journal is indexed in the Elsevier Bibliographic Database, EMBASE, Compendex, Geobase, PsycINFO, ScienceDirect, and Scopus. It is also listed in the Encyclopedia of Measurement and Statistics and Cabells.
== See also ==
Comparison of statistics journals
== References ==
== External links ==
Official website | Wikipedia/Journal_of_Modern_Applied_Statistical_Methods |
Astrophysics and Space Science is a bimonthly peer-reviewed scientific journal covering astronomy, astrophysics, and space science, as well as astrophysical aspects of astrobiology. It was established in 1968 and is published by Springer Science+Business Media. From 2016 to 2020, the journal had two editors-in-chief, Prof. Elias Brinks and Prof. Jeremy Mould. Since 2020, the sole editor-in-chief has been Prof. Elias Brinks. Other editors-in-chief in the past have been Zdeněk Kopal (Univ. of Manchester) (1968–1993) and Michael A. Dopita (Australian National University) (1994–2015).
== Abstracting and indexing ==
The journal is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.830.
== References ==
== External links ==
Official website | Wikipedia/Astrophysics_and_Space_Science |
The Monte Carlo method for electron transport is a semiclassical Monte Carlo (MC) approach to modeling semiconductor transport. Assuming the carrier motion consists of free flights interrupted by scattering mechanisms, a computer is utilized to simulate the trajectories of particles as they move across the device under the influence of an electric field using classical mechanics. The scattering events and the duration of particle flight are determined through the use of random numbers.
== Background ==
=== Boltzmann transport equation ===
The Boltzmann transport equation model has been the main tool used in the analysis of transport in semiconductors. The BTE is given by:

{\displaystyle {\frac {\partial f}{\partial t}}+{\frac {1}{\hbar }}\nabla _{k}E(k)\nabla _{r}f+{\frac {qF(r)}{\hbar }}\nabla _{k}f=\left[{\frac {\partial f}{\partial t}}\right]_{\mathrm {collision} }}

{\displaystyle v={\frac {1}{\hbar }}\nabla _{k}E(k)}

The distribution function, f, is a dimensionless function which is used to extract all observables of interest and gives a full depiction of the electron distribution in both real and k-space. Further, it physically represents the probability of a particle occupying state k at position r and time t. Because it is a seven-dimensional integro-differential equation (six dimensions in the phase space and one in time), the BTE is cumbersome to solve and admits closed-form analytical solutions only under very special restrictions. Numerically, the BTE is solved using either a deterministic or a stochastic method. The deterministic approach is based on a grid-based numerical method such as the spherical harmonics approach, whereas Monte Carlo is the stochastic approach used to solve the BTE.
=== Monte Carlo method ===
The semiclassical Monte Carlo method is a statistical method used to yield an exact solution to the Boltzmann transport equation which includes complex band structure and scattering processes. This approach is semiclassical because scattering mechanisms are treated quantum mechanically using Fermi's golden rule, whereas the transport between scattering events is treated using the classical particle notion. The Monte Carlo model in essence tracks the particle trajectory at each free flight and chooses a corresponding scattering mechanism stochastically. Two of the great advantages of semiclassical Monte Carlo are its capability to provide accurate quantum mechanical treatment of various distinct scattering mechanisms within the scattering terms, and the absence of assumptions about the form of the carrier distribution in energy or k-space. The semiclassical equations describing the motion of an electron are

{\displaystyle {\frac {dr}{dt}}={\frac {1}{\hbar }}\nabla _{k}E(k)}

{\displaystyle {\frac {dk}{dt}}={\frac {qF(r)}{\hbar }}}

where F is the electric field, E(k) is the energy dispersion relation, and k is the momentum wave vector. To solve the above equations, one needs strong knowledge of the band structure E(k). The E(k) relation describes how the particle moves inside the device, in addition to depicting useful information necessary for transport such as the density of states (DOS) and the particle velocity. A full-band E(k) relation can be obtained using the semi-empirical pseudopotential method.
=== Hydrodynamic and drift diffusion method ===
Both the drift diffusion (DD) and the hydrodynamic (HD) models can be derived from moments of the Boltzmann transport equation (BTE) using simplified approximations valid for long-channel devices. The DD scheme is the most classical approach and usually solves the Poisson equation and the continuity equations for carriers considering the drift and diffusion components. In this approach, the charge transit time is assumed to be very large in comparison to the energy relaxation time. On the other hand, the HD method solves the DD scheme together with the energy balance equations obtained from the moments of the BTE. Thus, one may capture and calculate physical details such as carrier heating and the velocity overshoot effect. Needless to say, an accurate discretization method is required in HD simulation, since the governing equations are strongly coupled and one has to deal with a larger number of variables compared to the DD scheme.
=== Comparison of semiclassical models ===
The accuracy of semiclassical models is compared based on the BTE by investigating how they treat the classical velocity overshoot problem, a key short-channel effect (SCE) in transistor structures. Essentially, velocity overshoot is a nonlocal effect of scaled devices, which is related to the experimentally observed increase in current drive and transconductance. As the channel length becomes smaller, the velocity is no longer saturated in the high-field region, but overshoots the predicted saturation velocity. The cause of this phenomenon is that the carrier transit time becomes comparable to the energy relaxation time, so the mobile carriers do not have enough time to reach equilibrium with the applied electric field by scattering in short-channel devices. A summary of simulation results (Illinois tool: MOCA) with the DD and HD models is shown in the accompanying figure. In panel (a), the case when the field is not high enough to cause the velocity overshoot effect in the whole channel region is shown. Note that in this limit, the data from the DD model fit the MC model well in the non-overshoot region, but the HD model overestimates the velocity in that region. The velocity overshoot is observed only near the drain junction in the MC data, and the HD model fits well in that region. From the MC data, it can be noticed that the velocity overshoot effect is abrupt in the high-field region, which is not properly included in the HD model. For high-field conditions, as shown in panel (b), the velocity overshoot effect occurs almost all over the channel, and the HD and MC results are very close in the channel region.
== Monte Carlo for semiconductor transport ==
=== Band structure ===
Band structure describes the relationship between energy (E) and wave vector (k). The band structure is used to compute the movement of carriers under the action of the electric field, the scattering rate, and the final state after a collision. The silicon band structure and its Brillouin zone are shown in the figure below, but there is no analytical expression which is valid over the entire Brillouin zone. By using some approximations, there are two analytical models for the band structure, namely the parabolic and the non-parabolic models.
==== Parabolic band structure ====
For the concept of band structure, parabolic energy bands are generally assumed for simplicity. Electrons reside, at least when close to equilibrium, near the minima of the E(k) relation. The E(k) relation can then be expanded in a Taylor series as

{\displaystyle E(k)=E(0)+\left.{\frac {\partial E(k)}{\partial k}}\right|_{k=0}\cdot k+{\frac {1}{2}}{\frac {\partial ^{2}E(k)}{\partial k^{2}}}\cdot k^{2}}
Because the first derivative vanishes at the band minimum, the gradient of E(k) is zero at k = 0. Thus,
{\displaystyle E(k)={\frac {\hbar ^{2}k^{2}}{2m^{*}}}}
which yields the definition of the effective mass tensor
{\displaystyle {\frac {1}{m^{*}}}={\frac {1}{\hbar ^{2}}}{\frac {\partial ^{2}E(k)}{\partial k^{2}}}}
This expression is true for a semiconductor with an isotropic effective mass, for instance GaAs. In the case of silicon, the conduction band minima do not lie at k = 0, and the effective mass depends on the crystallographic orientation of the minimum as
{\displaystyle E(k)={\frac {\hbar ^{2}}{2}}\left({\frac {k_{l}^{2}}{m_{l}^{*}}}+{\frac {2k_{t}^{2}}{m_{t}^{*}}}\right)}

where {\displaystyle m_{l}^{*},m_{t}^{*}} are the longitudinal and transverse effective masses, respectively.
==== Non-parabolic band structure ====
For higher applied fields, carriers reside above the minimum and the dispersion relation, E(k), does not satisfy the simple parabolic expression described above. This non-parabolicity is generally described by
{\displaystyle E(1+\alpha E)={\frac {\hbar ^{2}k^{2}}{2m^{*}}}}

where {\displaystyle \alpha } is a coefficient of non-parabolicity given by

{\displaystyle \alpha ={\frac {(1-m^{*}/m_{0})^{2}}{E_{g}}}}

where {\displaystyle m_{0}} is the electron mass in vacuum, and Eg is the energy gap.
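Given k, this Kane-type dispersion is a quadratic in E and can be inverted explicitly as {\displaystyle E=(-1+{\sqrt {1+4\alpha \gamma }})/(2\alpha )} with {\displaystyle \gamma =\hbar ^{2}k^{2}/2m^{*}}. A small sketch with GaAs-like placeholder parameters:

```python
import numpy as np

hbar = 1.054571817e-34            # J s
m0 = 9.1093837015e-31             # electron mass in vacuum, kg
m_eff = 0.067 * m0                # assumed GaAs-like effective mass
Eg = 1.42 * 1.602176634e-19       # assumed energy gap, J

alpha = (1.0 - m_eff / m0) ** 2 / Eg   # non-parabolicity coefficient

def energy(k):
    """Invert E(1 + alpha E) = hbar^2 k^2 / (2 m*) for the physical root E >= 0."""
    gamma = hbar ** 2 * k ** 2 / (2.0 * m_eff)
    return (-1.0 + np.sqrt(1.0 + 4.0 * alpha * gamma)) / (2.0 * alpha)

print(energy(1e9) / 1.602176634e-19, "eV")   # energy at k = 1 nm^-1
```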
==== Full band structure ====
For many applications, the non-parabolic band structure provides a reasonable approximation. However, very high-field transport requires the better physical model of the full band structure. For the full-band approach, a numerically generated table of E(k) is used. The full-band approach for Monte Carlo simulation was first used by Karl Hess at the University of Illinois at Urbana-Champaign. This approach is based on the empirical pseudopotential method suggested by Cohen and Bergstresser. The full-band approach is computationally expensive; however, following the advancement of computational power, it can be used as a more general approach.
=== Types of Monte Carlo simulation ===
==== One-particle Monte Carlo ====
For this type of simulation, one carrier is injected and its motion is tracked in the domain until it exits through a contact. Another carrier is then injected and the process is repeated to simulate an ensemble of trajectories. This approach is mostly useful to study bulk properties, like the steady-state drift velocity as a function of field.
==== Ensemble Monte Carlo ====
Instead of a single carrier, a large ensemble of carriers is simulated at the same time. This procedure is obviously a good candidate for supercomputing, since one may apply parallelization and vectorization. Also, it is now possible to perform ensemble averages directly. This approach is suitable for transient simulations.
==== Self-consistent ensemble Monte Carlo ====
This method couples the ensemble Monte Carlo procedure to Poisson's equation, and is the most suitable for device simulation. Typically, Poisson's equation is solved at fixed intervals to update the internal field, to reflect the internal redistribution of charge, due to the movement of carriers.
=== Random flight selection ===
The probability that the electron will suffer its next collision during dt around t is given by

{\displaystyle p(t)\,dt=P[k(t)]\exp \left[-\int _{0}^{t}P[k(t')]\,dt'\right]dt}
where P[k(t)]dt is the probability that an electron in state k suffers a collision during the time dt. Because of the complexity of the integral in the exponent, it is impractical to generate stochastic free flights with the distribution above. To overcome this difficulty, a fictitious "self-scattering" scheme is used: the total scattering rate, including self-scattering, is made constant and equal to, say, {\displaystyle \Gamma }. By random selection, if self-scattering is selected, k′ after the collision is the same as k and the carrier continues its flight without perturbation. Introducing a constant {\displaystyle P(k)=\tau _{0}^{-1}}, the above equation reduces to
{\displaystyle p(t)={\frac {1}{\tau _{0}}}\exp(-t/\tau _{0}).}
Random numbers r can be used very simply to generate stochastic free flights, whose duration is then given by {\displaystyle t_{r}=-\tau _{0}\ln(r)}. The computer time used for self-scattering is more than compensated for by the simplification of the calculation of the free-flight duration. To enhance the speed of the free-flight time calculation, several schemes such as the "constant technique" and the "piecewise technique" are used to minimize self-scattering events.
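As a minimal illustration (with an arbitrary placeholder value for the constant rate {\displaystyle \Gamma =1/\tau _{0}}), free-flight durations can be generated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

gamma = 1.0e14                    # assumed constant total scattering rate, 1/s
tau0 = 1.0 / gamma

r = 1.0 - rng.random(100000)      # uniform numbers in (0, 1], avoids log(0)
flight_times = -tau0 * np.log(r)  # free-flight durations t_r = -tau0 ln(r)

print("mean free flight (s):", flight_times.mean())   # should approach tau0
```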
== Scattering mechanisms ==
=== General background in solid-state physics ===
Important charge transport properties of semiconductor devices, such as the deviation from Ohm's law and the saturation of carrier mobility, are a direct consequence of scattering mechanisms. It is thus of great importance for a semiconductor device simulation to capture the physics of such mechanisms. Semiconductor Monte Carlo simulation, in this scope, is a very powerful tool for the ease and precision with which an almost exhaustive array of scattering mechanisms can be included. The duration of the free flights is determined from the scattering rates. At the end of each flight, the appropriate scattering mechanism must be chosen in order to determine the final energy of the scattered carrier, or equivalently, its new momentum and scattering angle. In this sense, one distinguishes two broad types of scattering mechanisms which naturally derive from the classical kinetic theory of collisions between two bodies:
Elastic scattering, where the energy of the particle is conserved after being scattered. Elastic scattering will hence only change the direction of the particle's momentum. Impurity scattering and surface scattering are, with a fair approximation, two good examples of elastic scattering processes.
Inelastic scattering, where energy is transferred between the scattered particle and the scattering center. Electron-phonon interactions are essentially inelastic, since a phonon of definite energy is either emitted or absorbed by the scattered particle.
Before characterizing scattering mechanisms in greater mathematical details, it is important to note that when running semiconductor Monte Carlo simulations, one has to deal mainly with the following types of scattering events:
Acoustic Phonon: The charge carrier exchanges energy with an acoustic mode of the vibration of atoms in the crystal lattice. Acoustic Phonons mainly arise from thermal excitation of the crystal lattice.
Polar Optical: The charge carrier exchanges energy with one of the polar optical modes of the crystal lattice. These modes are not present in covalent semiconductors. Optical phonons arise from the vibration against each other of atoms of different types when there is more than one atom in the smallest unit cell, and are usually excited by light.
Non-Polar Optical: Energy is exchanged with an optical mode. Non-polar optical phonons must generally be considered in covalent semiconductors and the L-valley of GaAs.
Equivalent Intervalley Phonon: Due to the interaction with a phonon, the charge carrier transitions from initial states to final states which belong to different but equivalent valleys. Typically, this type of scattering mechanism describes the transition of an electron from one X-valley to another X-valley, or from one L-valley to another L-valley.
Non Equivalent Intervalley Phonon: Involves the transition of a charge carrier between valleys of different types.
Piezoelectric Phonon: Relevant at low temperatures.
Ionized Impurity: Reflects the deviation of a particle from its ballistic trajectory due to Coulomb interaction with an ionized impurity in the crystal lattice. Because the mass of an electron is relatively small in comparison to that of an impurity, the Coulomb cross section decreases rapidly with the difference of the modulus of momentum between the initial and final state. Therefore, impurity scattering events are mostly considered for intravalley scattering, intraband scattering and, to a minor extent, interband scattering.
Carrier-Carrier: (electron-electron, hole-hole and electron-hole interactions). When carrier concentration is high, this type of scattering reflects the electrostatic interaction between charge carriers. This problem becomes very quickly computationally intensive with an increasing number of particles in an ensemble simulation. In this scope, Particle-Particle–Particle-Mesh (P3M) algorithms, which distinguish short range and long range interaction of a particle with its surrounding charge gas, have proved efficient in including carrier-carrier interaction in the semiconductor Monte Carlo simulation. Very often, the charge of the carriers is assigned to a grid using a Cloud-in-Cell method, where part of the charge of a given particle is assigned to a given number of closest grid points with a certain weight factor.
Plasmon: Reflects the effect of the collective oscillation of the charge carriers on a given particle.
=== Inclusion of scattering mechanisms in Monte Carlo ===
A computationally efficient approach to including scattering in Monte Carlo simulation consists in storing the scattering rates of the individual mechanisms in tables. Given the different scattering rates for a precise particle state, one may then randomly select the scattering process at the end of the free flight. These scattering rates are very often derived using the Born approximation, in which a scattering event is merely a transition between two momentum states of the carrier involved. As discussed in section II-I, the quantum many-body problem arising from the interaction of a carrier with its surrounding environment (phonons, electrons, holes, plasmons, impurities,...) can be reduced to a two-body problem using the quasiparticle approximation, which separates the carrier of interest from the rest of the crystal. Within these approximations,
Fermi's golden rule gives, to first order, the transition probability per unit time for a scattering mechanism from a state {\displaystyle |k\rangle } to a state {\displaystyle |k'\rangle }:
{\displaystyle S(k,k')={\frac {2\pi }{\hbar }}\left|\langle k|H'|k'\rangle \right|^{2}\cdot \delta (E-E')}
where H′ is the perturbation Hamiltonian representing the collision and E and E′ are respectively the initial and final energies of the system constituted of both the carrier and the electron and phonon gas. The Dirac {\displaystyle \delta }-function stands for the conservation of energy. In addition, the term {\displaystyle \langle k|H'|k'\rangle }, generally referred to as the matrix element, mathematically represents an inner product of the initial and final wave functions of the carrier:
{\displaystyle \langle k|H'|k'\rangle ={\frac {1}{Vol}}\int _{\mathrm {Vol} }\psi _{k}(r)H'\psi _{k'}^{*}(r)\,dr}
In a crystal lattice, the wavefunctions {\displaystyle \psi _{k}(r)} and {\displaystyle \psi _{k'}(r)} are simply Bloch waves. When possible, analytic expressions of the matrix elements are commonly found by Fourier expanding the Hamiltonian H′, as in the case of impurity scattering or acoustic phonon scattering. In the important case of a transition from an energy state E to an energy state E′ due to a phonon of wave vector q and frequency {\displaystyle \omega _{q}}, the energy and momentum change is:
{\displaystyle E'-E=E(k')-E(k)\pm \hbar \omega _{q}}

{\displaystyle k'-k\pm q={\begin{cases}0&{\text{normal process}}\\R&{\text{Umklapp process}}\end{cases}}}
where R is a reciprocal lattice vector. Umklapp processes (or U-processes) change the momentum of the particle after scattering and therefore limit conduction in semiconductor crystals. Physically, U-processes occur when the final momentum of the particle points out of the first Brillouin zone. Once the scattering probability per unit time from a state k to a state k′ is known, it is interesting to determine the scattering rate for a given scattering process. The scattering rate gives the probability per unit time to scatter from a state k to any other state in reciprocal space. Therefore, the scattering rate is
{\displaystyle \lambda (k)=\sum _{k'}S(k,k')}
which can be readily used to determine the free flight time and the scattering process as discussed in section 3-3. It is important to note that this scattering rate will be dependent on the band structure of the material (the dependence arises from the matrix elements).
=== Selection of scattering mode and scattered trajectory ===
At the end of a free flight, a scattering mode and angle must be randomly chosen. In order to determine the scattering mechanism, one has to consider all the scattering rates {\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{n}} of the mechanisms relevant to the simulation, as well as the total scattering rate at the time of scattering,

{\displaystyle \lambda _{tot}(t_{sc})=\sum _{i}\lambda _{i}.}
Selecting a scattering mechanism then simply amounts to generating a uniformly distributed random number 0 < r < 1 and referring to the following rules:

{\displaystyle {\begin{aligned}r&<{\frac {\lambda _{1}}{\lambda _{\mathrm {tot} }}}\rightarrow {\text{scattering mechanism }}1\\r&<{\frac {\lambda _{1}+\lambda _{2}}{\lambda _{\mathrm {tot} }}}\rightarrow {\text{scattering mechanism }}2\\&{}\ \vdots \\r&<{\frac {\sum _{i=1}^{n}\lambda _{i}}{\lambda _{\mathrm {tot} }}}\rightarrow {\text{scattering mechanism }}n\end{aligned}}}
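In code, this selection rule is a search in the normalised cumulative sum of the individual rates. The sketch below uses made-up rate values purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed (made-up) scattering rates lambda_1, ..., lambda_n for the current state k.
rates = np.array([2.0e13, 5.0e12, 1.0e13])    # 1/s
cumulative = np.cumsum(rates) / rates.sum()   # normalised cumulative rates

def pick_mechanism():
    """Return the index of the selected scattering mechanism (0-based)."""
    r = rng.random()
    return int(np.searchsorted(cumulative, r))

counts = np.bincount([pick_mechanism() for _ in range(100000)], minlength=len(rates))
print("selection frequencies:", counts / counts.sum())  # approx. rates / sum(rates)
```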
A computationally efficient approach to selecting the scattering mechanism consists in adding a "void" scattering mechanism so that {\displaystyle \lambda _{\mathrm {tot} }} remains constant over time. If a particle is scattered according to this mechanism, it keeps its ballistic trajectory after the scattering takes place. In order to choose a new trajectory, one must first derive the energy (or momentum) of the particle after scattering,

{\displaystyle E(k')=E(k)\pm \hbar \omega _{q}\pm \Delta E_{C}}

where the term {\displaystyle \hbar \omega _{q}} accounts for phonon emission or absorption and the term {\displaystyle \Delta E_{C}} is non-null for intervalley scattering. The final energy (and the band structure) directly yield the modulus of the new momentum k′. At this point, one only needs to choose a new direction (or angle) for the scattered particle. In some simple cases, such as phonon scattering with a parabolic dispersion relation, the scattering angle is random and evenly distributed on the sphere of radius k′. Using spherical coordinates, the process of choosing the angle is equivalent to randomly picking two angles {\displaystyle \theta } and {\displaystyle \psi }. If the angles are distributed with a distribution {\displaystyle p(\theta ,\psi )}, then for a uniform distribution of angles the probability of picking a point on the sphere is

{\displaystyle p(\theta ,\psi )\,d\theta \,d\psi ={\frac {\sin \theta \,d\theta \,d\psi }{4\pi }}}
It is possible, in this case, to separate the two variables. Integrating over {\displaystyle \psi } and then over {\displaystyle \theta }, one finds

{\displaystyle p(\theta )={\frac {\sin \theta }{2}}}

{\displaystyle p(\psi )={\frac {1}{2\pi }}}
The two spherical angles can then be chosen, in the uniform case, by generating two random numbers 0 < r1, r2 < 1 such that

{\displaystyle r_{1}=\int _{0}^{\psi }p(\psi ')\,d\psi '={\frac {\psi }{2\pi }}}

{\displaystyle r_{2}=\int _{0}^{\theta }p(\theta ')\,d\theta '={\frac {1-\cos \theta }{2}}}
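Inverting these two relations gives {\displaystyle \psi =2\pi r_{1}} and {\displaystyle \theta =\arccos(1-2r_{2})}, which translates directly into code; the following sketch assumes the isotropic case discussed above:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_direction():
    """Draw a direction uniformly distributed on the unit sphere by inverting
    the cumulative distributions of psi and theta derived above."""
    r1, r2 = rng.random(2)
    psi = 2.0 * np.pi * r1             # azimuthal angle, uniform in [0, 2*pi)
    theta = np.arccos(1.0 - 2.0 * r2)  # polar angle with density sin(theta)/2
    return np.array([np.sin(theta) * np.cos(psi),
                     np.sin(theta) * np.sin(psi),
                     np.cos(theta)])

# The scattered momentum is then k' = |k'| * random_direction().
print(random_direction())
```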
== Quantum corrections for Monte Carlo simulation ==
The current trend of scaling down semiconductor devices has forced physicists to incorporate quantum mechanical issues in order to acquire a thorough understanding of device behavior. Simulating the behavior of nano-scale devices necessitates the use of a full quantum transport model, especially for cases when the quantum effects cannot be ignored. This complication, however, can be avoided in the case of practical devices like the modern-day MOSFET by employing quantum corrections within a semi-classical framework. The semi-classical Monte Carlo model can then be employed to simulate the device characteristics. The quantum corrections can be incorporated into a Monte Carlo simulator by simply introducing a quantum potential term which is superimposed onto the classical electrostatic potential seen by the simulated particles. The various quantum approaches available for implementation are described in the following subsections.
=== Wigner-based correction ===
The Wigner transport equation forms the basis for the Wigner-based quantum correction.
{\displaystyle {\frac {\partial f}{\partial t}}+r\cdot \nabla _{r}f-{\frac {1}{\hbar }}\nabla _{r}V\cdot \nabla _{k}f+\sum _{\alpha =1}^{\infty }{\frac {(-1)^{\alpha +1}}{\hbar 4^{\alpha }(2\alpha +1)!}}\times (\nabla _{r}\nabla _{k})^{2\alpha +1}Vf=\left({\frac {\partial f}{\partial t}}\right)_{c}}
where k is the crystal momentum, V is the classical potential, the term on the RHS is the effect of collisions, and the fourth term on the LHS represents non-local quantum mechanical effects. The standard Boltzmann transport equation is obtained when the non-local terms on the LHS disappear in the limit of slow spatial variations. The simplified (for {\displaystyle \alpha =0}) quantum corrected BTE then becomes
{\displaystyle {\frac {\partial f}{\partial t}}+r\cdot \nabla _{r}f-{\frac {1}{\hbar }}\nabla _{r}V\cdot \nabla _{k}f=\left({\frac {\partial f}{\partial t}}\right)_{c}}
where the quantum potential is contained in the term {\displaystyle V_{\omega }} (this must be an error in the source, since {\displaystyle V_{\omega }} is never mentioned in the equation above).
=== Effective potential correction ===
This method for quantum correction was developed by Feynman and Hibbs in 1965. In this method the effective potential is derived by calculating the contribution to the path integral of a particle's quantum fluctuations around its classical path. This calculation is undertaken by a variational method using a trial potential to first order. The effective classical potential in the average point on each path then becomes
{\displaystyle V_{\mathrm {eff} }(x)={\frac {1}{{\sqrt {2\pi }}a}}\int _{-\infty }^{\infty }V(x')e^{-{\frac {(x'-x)^{2}}{2a^{2}}}}\,dx'}

{\displaystyle a^{2}={\frac {\hbar ^{2}}{12m^{*}k_{B}T}}}
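Numerically, this correction amounts to a Gaussian smoothing of the classical potential with standard deviation a. The sketch below applies it to a discretised one-dimensional potential step; the effective mass, temperature and barrier height are placeholder values:

```python
import numpy as np

hbar = 1.054571817e-34                 # J s
kB = 1.380649e-23                      # J/K
m_eff = 0.26 * 9.1093837015e-31        # assumed effective mass (0.26 m0)
T = 300.0                              # K

a = np.sqrt(hbar ** 2 / (12.0 * m_eff * kB * T))  # smoothing length from the text

# Discretised classical potential V(x) on a grid: a 0.3 eV step (placeholder).
x = np.linspace(-20e-9, 20e-9, 2001)
V = np.where(x > 0, 0.3 * 1.602176634e-19, 0.0)

# Effective potential: convolution of V with a normalised Gaussian of width a.
dx = x[1] - x[0]
kernel_x = np.arange(-10 * a, 10 * a, dx)
kernel = np.exp(-kernel_x ** 2 / (2.0 * a ** 2))
kernel /= kernel.sum()
V_eff = np.convolve(V, kernel, mode="same")

print("smoothing length a (nm):", a * 1e9)
```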
=== Schrödinger-based correction ===
This approach involves periodically solving the Schrödinger equation during a simulation, with the self-consistent electrostatic potential as the input. The exact energy levels and wavefunctions relating to the electrostatic potential solution are employed to calculate the quantum potential. The quantum correction obtained on the basis of this method can be visualised by the following equation
{\displaystyle V_{\mathrm {schr} }(z)=-k_{B}T\cdot \log(n_{q}(z))-V_{p}(z)+V_{0}}
where Vschr is the quantum correction potential, z is the direction perpendicular to the interface, nq is the quantum density from the Schrödinger equation (which is equivalent to the converged Monte Carlo concentration), Vp is the potential from the Poisson solution, and V0 is an arbitrary reference potential far away from the quantum region, chosen such that the correction goes to zero in the region of semi-classical behavior. Even though the above-mentioned potentials for quantum correction differ in their method of calculation and their basic assumptions, when it comes to their inclusion in Monte Carlo simulation they are all incorporated in the same way.
== See also ==
Monte Carlo method
Semiconductor device
Monte Carlo method for photon transport
Electronic band structure
Method of quantum characteristics
Quantum Monte Carlo
Quasi-Monte Carlo method
== References == | Wikipedia/Monte_Carlo_methods_for_electron_transport |
Monte Carlo in statistical physics refers to the application of the Monte Carlo method to problems in statistical physics, or statistical mechanics.
== Overview ==
The general motivation to use the Monte Carlo method in statistical physics is to evaluate a multivariable integral. The typical problem begins with a system for which the Hamiltonian is known, which is at a given temperature and follows Boltzmann statistics. To obtain the mean value of some macroscopic variable, say A, the general approach is to compute, over all the phase space (PS for simplicity), the mean value of A using the Boltzmann distribution:
{\displaystyle \langle A\rangle =\int _{PS}A_{\vec {r}}{\frac {e^{-\beta E_{\vec {r}}}}{Z}}d{\vec {r}}}
where {\displaystyle E({\vec {r}})=E_{\vec {r}}} is the energy of the system for a given state defined by {\displaystyle {\vec {r}}}, a vector with all the degrees of freedom (for instance, for a mechanical system, {\displaystyle {\vec {r}}=\left({\vec {q}},{\vec {p}}\right)}), {\displaystyle \beta \equiv 1/k_{b}T}, and

{\displaystyle Z=\int _{PS}e^{-\beta E_{\vec {r}}}d{\vec {r}}}

is the partition function.
One possible approach to solve this multivariable integral is to exactly enumerate all possible configurations of the system, and calculate averages at will. This is done in exactly solvable systems, and in simulations of simple systems with few particles. In realistic systems, on the other hand, an exact enumeration can be difficult or impossible to implement.
For those systems, Monte Carlo integration (not to be confused with the Monte Carlo method, which is used to simulate molecular chains) is generally employed. The main motivation for its use is the fact that, with Monte Carlo integration, the error goes as {\displaystyle 1/{\sqrt {N}}}, independently of the dimension of the integral. Another important concept related to Monte Carlo integration is importance sampling, a technique that improves the computational time of the simulation.
In the following sections, the general implementation of the Monte Carlo integration for solving this kind of problems is discussed.
== Importance sampling ==
An estimation, under Monte Carlo integration, of an integral defined as

{\displaystyle \langle A\rangle =\int _{PS}A_{\vec {r}}e^{-\beta E_{\vec {r}}}d{\vec {r}}/Z}

is
{\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}A_{{\vec {r}}_{i}}e^{-\beta E_{{\vec {r}}_{i}}}/Z}
where {\displaystyle {\vec {r}}_{i}} are uniformly sampled from the whole phase space (PS) and N is the number of sampling points (or function evaluations).

Some zones of the phase space are generally more important to the mean of the variable {\displaystyle A} than others. In particular, states with a value of {\displaystyle e^{-\beta E_{{\vec {r}}_{i}}}} that is high compared to the rest of the energy spectrum are the most relevant for the integral. Using this fact, the natural question to ask is: is it possible to choose, with higher frequency, the states that are known to be more relevant to the integral? The answer is yes, using the importance sampling technique.
Let us assume {\displaystyle p({\vec {r}})} is a distribution that chooses the states that are known to be more relevant to the integral.
The mean value of {\displaystyle A} can be rewritten as

{\displaystyle \langle A\rangle =\int _{PS}p^{-1}({\vec {r}}){\frac {A_{\vec {r}}}{p^{-1}({\vec {r}})}}e^{-\beta E_{\vec {r}}}/Z\,d{\vec {r}}=\int _{PS}p^{-1}({\vec {r}})A_{\vec {r}}^{*}e^{-\beta E_{\vec {r}}}/Z\,d{\vec {r}},}
where {\displaystyle A_{\vec {r}}^{*}} are the sampled values taking into account the importance probability {\displaystyle p({\vec {r}})}. This integral can be estimated by
{\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}p^{-1}({\vec {r}}_{i})A_{{\vec {r}}_{i}}^{*}e^{-\beta E_{{\vec {r}}_{i}}}/Z}
where {\displaystyle {\vec {r}}_{i}} are now randomly generated using the {\displaystyle p({\vec {r}})} distribution. Since most of the time it is not easy to find a way of generating states with a given distribution, the Metropolis algorithm must be used.
=== Canonical ===
Because it is known that the most likely states are those that maximize the Boltzmann distribution, a good distribution {\displaystyle p({\vec {r}})} to choose for importance sampling is the Boltzmann (canonical) distribution. Let

{\displaystyle p({\vec {r}})={\frac {e^{-\beta E_{\vec {r}}}}{Z}}}

be the distribution to use. Substituting into the previous sum,

{\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}A_{{\vec {r}}_{i}}^{*}.}
So, the procedure to obtain the mean value of a given variable with the canonical distribution is to use the Metropolis algorithm to generate states distributed according to {\displaystyle p({\vec {r}})} and perform means over {\displaystyle A_{\vec {r}}^{*}}.
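As a toy illustration of this procedure (with a made-up one-dimensional "energy" {\displaystyle E(x)=x^{2}/2} at {\displaystyle \beta =1}, observable {\displaystyle A(x)=x^{2}}, and direct sampling standing in for the Metropolis algorithm), compare brute-force uniform sampling with sampling from the Boltzmann distribution itself:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100000

# Toy problem: E(x) = x^2 / 2, beta = 1, observable A(x) = x^2.
# Exact answer: <A> = 1 (the variance of a standard normal distribution).

# Uniform sampling over [-5, 5]: estimate numerator and Z by brute force.
x = rng.uniform(-5.0, 5.0, N)
w = np.exp(-x ** 2 / 2.0)                  # Boltzmann weights
uniform_estimate = np.sum(x ** 2 * w) / np.sum(w)

# Importance sampling with p equal to the Boltzmann distribution itself:
# the weights cancel and <A> is a plain average over the sampled states.
x_imp = rng.normal(0.0, 1.0, N)
importance_estimate = np.mean(x_imp ** 2)

print(uniform_estimate, importance_estimate)   # both close to 1
```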
One important issue must be considered when using the Metropolis algorithm with the canonical distribution: when performing a given measurement, i.e. a realization of {\displaystyle {\vec {r}}_{i}}, one must ensure that that realization is not correlated with the previous state of the system (otherwise the states are not being "randomly" generated). On systems with relevant energy gaps, this is the major drawback of the use of the canonical distribution, because the time needed for the system to de-correlate from the previous state can tend to infinity.
=== Multi-canonical ===
As stated before, the canonical approach has a major drawback, which becomes relevant in most of the systems that use Monte Carlo integration. For those systems with "rough energy landscapes", the multicanonical approach can be used.

The multicanonical approach uses a different choice for importance sampling:
{\displaystyle p({\vec {r}})={\frac {1}{\Omega (E_{\vec {r}})}}}

where {\displaystyle \Omega (E)} is the density of states of the system. The major advantage of this choice is that the energy histogram is flat, i.e. the generated states are equally distributed in energy. This means that, when using the Metropolis algorithm, the simulation does not see the "rough energy landscape", because every energy is treated equally.

The major drawback of this choice is the fact that, in most systems, {\displaystyle \Omega (E)} is unknown. To overcome this, the Wang and Landau algorithm is normally used to obtain the DOS during the simulation. Note that after the DOS is known, the mean values of every variable can be calculated for every temperature, since the generation of states does not depend on {\displaystyle \beta }.
== Implementation ==
In this section, the implementation will focus on the Ising model. Consider a two-dimensional spin network, with L spins (lattice sites) on each side. There are naturally {\displaystyle N=L^{2}} spins, so the phase space is discrete and is characterized by N spins, {\displaystyle {\vec {r}}=(\sigma _{1},\sigma _{2},\ldots ,\sigma _{N})}, where {\displaystyle \sigma _{i}\in \{-1,1\}} is the spin of each lattice site. The system's energy is given by

{\displaystyle E({\vec {r}})=\sum _{i=1}^{N}\sum _{j\in viz_{i}}(1-J_{ij}\sigma _{i}\sigma _{j})}

where {\displaystyle viz_{i}} is the set of nearest neighbours of i and J is the interaction matrix (for a ferromagnetic Ising model, J is the identity matrix). The problem is thus stated.
In this example, the objective is to obtain {\displaystyle \langle M\rangle } and {\displaystyle \langle M^{2}\rangle } (for instance, to obtain the magnetic susceptibility of the system), since it is straightforward to generalize to other observables. By definition, {\displaystyle M({\vec {r}})=\sum _{i=1}^{N}\sigma _{i}}.
=== Canonical ===
First, the system must be initialized: let {\displaystyle \beta =1/k_{b}T} be the system's inverse temperature and initialize the system in some initial state (which can be anything, since the final result should not depend on it).
With the canonical choice, the Metropolis method must be employed. Because there is no single right way of choosing which state to pick, one can particularize and choose to try to flip one spin at a time. This choice is usually called single spin flip. The following steps are to be made to perform a single measurement.
step 1: generate a state that follows the {\displaystyle p({\vec {r}})} distribution:
step 1.1: Perform TT times the following iteration:
step 1.1.1: pick a lattice site at random (with probability 1/N), which will be called i, with spin {\displaystyle \sigma _{i}}.
step 1.1.2: pick a random number {\displaystyle \alpha \in [0,1]}.
step 1.1.3: calculate the energy change of trying to flip the spin i: {\displaystyle \Delta E=2\sigma _{i}\sum _{j\in viz_{i}}\sigma _{j}}, and its magnetization change: {\displaystyle \Delta M=-2\sigma _{i}}
step 1.1.4: if {\displaystyle \alpha <\min(1,e^{-\beta \Delta E})}, flip the spin ({\displaystyle \sigma _{i}=-\sigma _{i}}); otherwise, do not.
step 1.1.5: update the several macroscopic variables in case the spin flipped: {\displaystyle E=E+\Delta E}, {\displaystyle M=M+\Delta M}
After TT times, the system is considered to be de-correlated from its previous state, which means that, at this moment, the probability for the system to be in a given state follows the Boltzmann distribution, which is the objective proposed by this method.
step 2: perform the measurement:
step 2.1: save, on a histogram, the values of M and M2.
As a final note, TT is not easy to estimate, because it is not easy to say when the system is de-correlated from the previous state. To surpass this point, one generally does not use a fixed TT, but defines TT as a tunneling time. One tunneling time is defined as the number of iterations of step 1 the system needs to go from the minimum of its energy to the maximum of its energy and return.
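A minimal single-spin-flip implementation of steps 1 and 2 is sketched below for the ferromagnetic case (J the identity). The lattice size, the temperature, the periodic boundary conditions and the fixed number of decorrelation sweeps standing in for TT are all placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(4)

L = 16                    # lattice side, N = L * L spins
beta = 0.4                # inverse temperature 1 / (k_b T)
spins = rng.choice(np.array([-1, 1]), size=(L, L))   # arbitrary initial state

def sweep(spins):
    """One sweep of N single-spin-flip Metropolis attempts (step 1.1)."""
    for _ in range(spins.size):
        i, j = rng.integers(0, L, size=2)          # step 1.1.1: random site
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb                  # step 1.1.3: energy change
        if rng.random() < min(1.0, np.exp(-beta * dE)):   # steps 1.1.2 and 1.1.4
            spins[i, j] = -spins[i, j]             # flip accepted

# Decorrelation sweeps standing in for TT, then measurements (step 2).
for _ in range(100):
    sweep(spins)
M_samples = []
for _ in range(500):
    sweep(spins)                                   # decorrelate between measures
    M_samples.append(spins.sum())
M = np.array(M_samples)
print("<M> =", M.mean(), " <M^2> =", (M ** 2).mean())
```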
A major drawback of this method with the single spin flip choice in systems like the Ising model is that the tunneling time scales as a power law, {\displaystyle N^{2+z}}, where z is greater than 0.5, a phenomenon known as critical slowing down.
== Applicability ==
The method thus neglects dynamics, which can be a major drawback, or a great advantage. Indeed, the method can only be applied to static quantities, but the freedom to choose moves makes the method very flexible. An additional advantage is that some systems, such as the Ising model, lack a dynamical description and are only defined by an energy prescription; for these the Monte Carlo approach is the only one feasible.
== Generalizations ==
The great success of this method in statistical mechanics has led to various generalizations such as the method of simulated annealing for optimization, in which a fictitious temperature is introduced and then gradually lowered.
== See also ==
Monte Carlo integration
Metropolis algorithm
Importance sampling
Quantum Monte Carlo
Monte Carlo molecular modeling
== References ==
Allen, M.P. & Tildesley, D.J. (1987). Computer Simulation of Liquids. Oxford University Press. ISBN 0-19-855645-4.
Frenkel, D. & Smit, B. (2001). Understanding Molecular Simulation. Academic Press. ISBN 0-12-267351-4.
Binder, K. & Heermann, D.W. (2002). Monte Carlo Simulation in Statistical Physics. An Introduction (4th ed.). Springer. ISBN 3-540-43221-3.
Spanier, Jerome; Gelbard, Ely M. (2008). "Importance Sampling". Monte Carlo Principles and Neutron Transport Problems. Dover. pp. 110–124. ISBN 978-0-486-46293-6. | Wikipedia/Monte_Carlo_method_in_statistical_physics |
In applied statistics, the Morris method for global sensitivity analysis is a so-called one-factor-at-a-time method, meaning that in each run only one input parameter is given a new value. It facilitates a global sensitivity analysis by making a number {\displaystyle r} of local changes at different points {\displaystyle x(1\rightarrow r)} of the possible range of input values.
== Method's details ==
=== Elementary effects' distribution ===
The finite distribution of elementary effects associated with the i-th input factor is obtained by randomly sampling different {\displaystyle x} from {\displaystyle \Omega }, and is denoted by {\displaystyle F_{i}}.
=== Variations ===
In the original work of Morris, the two sensitivity measures proposed were, respectively, the mean, {\displaystyle \mu }, and the standard deviation, {\displaystyle \sigma }, of {\displaystyle F_{i}}. However, this choice has the drawback that, if the distribution {\displaystyle F_{i}} contains negative elements, which occurs when the model is non-monotonic, some effects may cancel each other out when the mean is computed. Thus, the measure {\displaystyle \mu } on its own is not reliable for ranking factors in order of importance. It is necessary to consider the values of {\displaystyle \mu } and {\displaystyle \sigma } at the same time, as a factor with elementary effects of different signs (that cancel each other out) would have a low value of {\displaystyle \mu } but a considerable value of {\displaystyle \sigma }, which avoids underestimating such factors.
=== μ* ===
If the distribution {\displaystyle F_{i}} contains negative elements, which occurs when the model is non-monotonic, some effects may cancel each other out when the mean is computed. When the goal is to rank factors in order of importance by making use of a single sensitivity measure, scientific advice is to use {\displaystyle \mu ^{*}}, which, by making use of the absolute value of the elementary effects, avoids the occurrence of effects of opposite signs.
In the revised Morris method, {\displaystyle \mu ^{*}} is used to detect input factors with an important overall influence on the output, while {\displaystyle \sigma } is used to detect factors involved in interactions with other factors or whose effect is non-linear.
== Method's steps ==
The method starts by sampling a set of start values within the defined ranges of possible values for all input variables and calculating the subsequent model outcome. The second step changes the values for one variable (all other inputs remaining at their start values) and calculates the resulting change in model outcome compared to the first run. Next, the values for another variable are changed (the previous variable is kept at its changed value and all other ones kept at their start values) and the resulting change in model outcome compared to the second run is calculated. This goes on until all input variables have been changed. This procedure is repeated {\displaystyle r} times (where {\displaystyle r} is usually taken between 5 and 15), each time with a different set of start values, which leads to a number of {\displaystyle r(k+1)} runs, where k is the number of input variables. Such a number is very efficient compared to more demanding methods for sensitivity analysis.
A sensitivity analysis method widely used to screen factors in models of large dimensionality is the design proposed by Morris. The Morris method deals efficiently with models containing hundreds of input factors without relying on strict assumptions about the model, such as for instance additivity or monotonicity of the model input-output relationship. The Morris method is simple to understand and implement, and its results are easily interpreted. Furthermore, it is economic in the sense that it requires a number of model evaluations that is linear in the number of model factors. The method can be regarded as global as the final measure is obtained by averaging a number of local measures (the elementary effects), computed at different points of the input space.
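A minimal sketch of the trajectory design and of the resulting measures {\displaystyle \mu }, {\displaystyle \mu ^{*}} and {\displaystyle \sigma } is given below; it assumes inputs scaled to the unit cube, a fixed step delta, and a user-supplied model function, all of which are illustrative simplifications of the original design:

```python
import numpy as np

rng = np.random.default_rng(5)

def morris(model, k, r=10, delta=0.25):
    """Morris sensitivity measures for a model with k inputs in [0, 1]^k.
    Each of the r trajectories changes one factor at a time, for a total
    of r * (k + 1) model runs."""
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, k)   # start point of the trajectory
        y_prev = model(x)
        for i in rng.permutation(k):           # change one input at a time
            x[i] += delta
            y_new = model(x)
            effects[t, i] = (y_new - y_prev) / delta   # elementary effect
            y_prev = y_new
    mu = effects.mean(axis=0)                  # can hide sign cancellations
    mu_star = np.abs(effects).mean(axis=0)     # absolute-value measure
    sigma = effects.std(axis=0, ddof=1)        # interactions / non-linearity
    return mu, mu_star, sigma

# Example: x0 enters non-monotonically (low mu, high mu*), x2 is inert.
f = lambda x: np.sin(2 * np.pi * x[0]) + 0.5 * x[1]
mu, mu_star, sigma = morris(f, k=3)
print("mu:", mu, "\nmu*:", mu_star, "\nsigma:", sigma)
```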
== See also ==
Monte Carlo method
== References ==
== External links ==
Morris method paper
Campolongo, F.; Tarantola, S.; Saltelli, A. (1999). "Tackling quantitatively large dimensionality problems". Computer Physics Communications. 117 (1–2): 75–85. Bibcode:1999CoPhC.117...75C. doi:10.1016/S0010-4655(98)00165-9. | Wikipedia/Morris_method |
In condensed-matter physics, the binary collision approximation (BCA) is a heuristic used to more efficiently simulate the penetration depth and defect production by energetic ions (with kinetic energies in the kilo-electronvolt (keV) range or higher) in solids. In the method, the ion is approximated to travel through a material by experiencing a sequence of independent binary collisions with sample atoms (nuclei). Between the collisions, the ion is assumed to travel in a straight path, experiencing electronic stopping power, but losing no energy in collisions with nuclei.
== Simulation approaches ==
In the BCA approach, a single collision between the incoming ion and a target atom (nucleus) is treated by solving the classical scattering integral between two colliding particles for the
impact parameter of the incoming ion. Solution of the integral gives the scattering angle of the
ion as well as its energy loss to the sample atoms, and hence what the energy is after the collision compared to before it.
The scattering integral is defined in the centre-of-mass coordinate system (two particles reduced to one single particle with one interatomic potential) and relates the angle of scatter with the interatomic potential.
The scattering angle is determined from the repulsive pair interatomic potential {\displaystyle V(r)} as a function of the impact parameter {\displaystyle b} as

{\displaystyle \Theta =\pi -\int _{-\infty }^{\infty }{\frac {b\,dr}{r^{2}{\sqrt {1-{\frac {b^{2}}{r^{2}}}-{\frac {V(r)}{E_{CM}}}}}}}}
where {\displaystyle E_{CM}} is the system energy in center-of-mass coordinates. From this, the energy transfer can be obtained using
{\displaystyle T=4{\frac {M_{1}M_{2}}{(M_{1}+M_{2})^{2}}}E\cos ^{2}\theta _{2}}

where

{\displaystyle \theta _{2}={\frac {\pi -\Theta }{2}}}

and where {\displaystyle E} is the energy of the incoming ion of mass {\displaystyle M_{1}} and {\displaystyle M_{2}} is the mass of the material atom with which the collision occurs.
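For instance, once {\displaystyle \Theta } has been evaluated from the scattering integral (here it is simply taken as a given input), the transferred energy follows directly:

```python
import numpy as np

def recoil_energy(E, M1, M2, Theta):
    """Energy transferred to the target atom in a single binary collision.
    E: ion energy, M1: ion mass, M2: target atom mass,
    Theta: centre-of-mass scattering angle (from the scattering integral)."""
    theta2 = (np.pi - Theta) / 2.0
    return 4.0 * M1 * M2 / (M1 + M2) ** 2 * E * np.cos(theta2) ** 2

# Example: 10 keV Ar (M1 ~ 40 u) on Si (M2 ~ 28 u), assumed Theta = 0.5 rad.
print(recoil_energy(10e3, 40.0, 28.0, 0.5), "eV")
```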
It is also possible to solve the time integral of the collision to know what time has elapsed during the collision. This is necessary at least when BCA is used in the "full cascade" mode, see below.
The energy loss to electrons, i.e. electronic stopping power, can be treated either with impact-parameter-dependent electronic stopping models, by subtracting a stopping power dependent only on the ion velocity between the collisions, or by a combination of the two approaches.
The selection method for the impact parameter divides BCA codes into two main varieties: "Monte Carlo" BCA and crystal BCA codes. In the so-called Monte Carlo BCA approach, the distance to and impact parameter of the next colliding atom is chosen randomly from a probability distribution which depends only on the atomic density of the material. This approach essentially simulates ion passage in a fully amorphous material. (Note that some sources call this variety of BCA just Monte Carlo, which is misleading, since the name can then be confused with other completely different Monte Carlo simulation varieties.) SRIM and SDTrimSP are Monte Carlo BCA codes.
It is also possible (although more difficult) to implement BCA methods for crystalline materials, such that the moving ion has a defined position in a crystal, and the distance and impact parameter to the next colliding atom is determined to correspond to an atom in the crystal. In this approach, BCA can also be used to simulate atom motion during channelling. Codes such as MARLOWE operate with this approach.
The binary collision approximation can also be extended to simulate dynamic composition changes of a material due to prolonged ion irradiation, i.e. due to ion implantation and sputtering.

At low ion energies, the approximation of independent collisions between atoms starts to break down. This issue can be to some extent mitigated by solving the collision integral for multiple simultaneous collisions. However, at very low energies (below ~1 keV) the BCA approximation always breaks down, and one should use molecular dynamics ion irradiation simulation approaches, because these can, by design, handle many-body collisions of arbitrarily many atoms. The MD simulations can either follow only the incoming ion (the recoil interaction approximation, or RIA) or simulate all atoms involved in a collision cascade.
== BCA collision cascade simulations ==
The BCA simulations can be further subdivided by type depending on whether they only follow the incoming ion, or also follow the recoils produced by the ion (full cascade mode, e.g., in the popular BCA code SRIM). If the code does not account for secondary collisions (recoils), the number of defects is then calculated using the Robinson extension of the Kinchin–Pease model.
If the initial recoil/ion mass is low, and the material where the cascade occurs has a low density (i.e. the recoil-material combination has a low stopping power), the collisions between the initial recoil and sample atoms occur rarely, and can be understood well as a sequence of independent binary collisions between atoms. This kind of a cascade can be theoretically well treated using BCA.
== Damage production estimates ==
The BCA simulations give naturally the ion penetration depth, lateral spread and nuclear and electronic deposition energy distributions in space. They can also be used to estimate the damage produced in materials, by using the assumption that any recoil which receives an energy higher than the threshold displacement energy of the material will produce a stable defect.
However, this approach should be used with great caution for several reasons. For instance, it does not account for any thermally activated recombination of damage, nor for the well-known fact that in metals the damage production at high energies is only something like 20% of the Kinchin–Pease prediction. Moreover, this approach only predicts the damage production as if all defects were isolated Frenkel pairs, while in reality in many cases collision cascades produce defect clusters or even dislocations as the initial damage state. BCA codes can, however, be extended with damage clustering and recombination models that improve their reliability in this respect.
Finally, the average threshold displacement energy is not very accurately known in most materials.
== BCA codes ==
SRIM offers a graphical user interface and is likely the most used BCA code now. It can be used to simulate linear collision cascades in amorphous materials for all ions in all materials up to ion energies of 1 GeV. Note, however, that SRIM does not treat effects such as channelling, damage due to electronic energy deposition (necessary, e.g., to describe swift heavy ion damage in materials), or damage produced by excited electrons. The calculated sputter yields may be less accurate than those from other codes.
MARLOWE is a large code that can handle crystalline materials and support numerous different physics models.
TRIDYN, whose newer versions are known as SDTrimSP, is a BCA code capable of handling dynamic composition changes.
DART is a French code developed by the CEA (Commissariat à l'Énergie Atomique) in Saclay. It differs from SRIM in its electronic stopping power and the analytical resolution of the scattering integral (the number of defects produced is determined from the elastic cross sections and the atomic concentrations of atoms). The nuclear stopping power comes from the universal interatomic potential (ZBL potential), while the electronic stopping power is derived from Bethe's equation for protons and Lindhard–Scharff for ions.
== See also ==
Collision cascade
Molecular dynamics
Stopping power (particle radiation)
== References ==
== External links ==
Media related to Binary collision approximation at Wikimedia Commons | Wikipedia/Binary_collision_approximation |
In computational biology, a Cellular Potts model (CPM, also known as the Glazier-Graner-Hogeweg model) is a computational model of cells and tissues. It is used to simulate individual and collective cell behavior, tissue morphogenesis and cancer development. CPM describes cells as deformable objects with a certain volume, that can adhere to each other and to the medium in which they live. The formalism can be extended to include cell behaviours such as cell migration, growth and division, and cell signalling. The first CPM was proposed for the simulation of cell sorting by François Graner and James A. Glazier as a modification of a large-Q Potts model. CPM was then popularized by Paulien Hogeweg for studying morphogenesis.
Although the model was developed to describe biological cells, it can also be used to model individual parts of a biological cell, or even regions of fluid.
== Model description ==
The CPM consists of a rectangular Euclidean lattice, where each cell is a subset of lattice sites sharing the same cell ID (analogous to spin in Potts models in physics). Lattice sites that are not occupied by cells are the medium. The dynamics of the model are governed by an energy function: the Hamiltonian which describes the energy of a particular configuration of cells in the lattice. In a basic CPM, this energy results from adhesion between cells and resistance of cells to volume changes. The algorithm for updating CPM minimizes this energy.
In order to evolve the model, Metropolis-style updates are performed, that is:
choose a random lattice site i;
choose a random neighboring lattice site j to copy its ID into i;
calculate the difference in energy ΔH between the original and the proposed new configuration;
accept or reject this copy event based on the change in energy ΔH, as follows:
if the new energy is lower, always accept the copy;
if the new energy is higher, accept the copy with probability e^(−ΔH/T), where the Boltzmann temperature T determines the likelihood of energetically unfavorable fluctuations.
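To make the update rule concrete, the following is a minimal sketch of one copy attempt on a 2-D lattice, evaluating a local ΔH that contains only the adhesion and volume terms of the basic Hamiltonian given in the next subsection. The data layout (a grid of integer IDs and the dictionaries cell_type, volume, target_volume, and J) is an assumption for illustration, not the API of any particular CPM package.

```python
import math
import random

def metropolis_step(grid, cell_type, volume, target_volume, J, lam, T):
    """One Metropolis-style copy attempt of the Cellular Potts model.
    grid holds integer cell IDs (0 = medium, which cell_type must also map)."""
    n, m = len(grid), len(grid[0])
    i = (random.randrange(n), random.randrange(m))             # random site i
    di, dj = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    j = ((i[0] + di) % n, (i[1] + dj) % m)                     # random neighbor j
    old_id, new_id = grid[i[0]][i[1]], grid[j[0]][j[1]]
    if old_id == new_id:
        return                                                 # nothing would change

    def adhesion(site_id):
        # Contact energy of site i if it carried site_id.
        e = 0.0
        for a, b in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nb = grid[(i[0] + a) % n][(i[1] + b) % m]
            if nb != site_id:
                e += J[(cell_type[site_id], cell_type[nb])]
        return e

    def vol_energy(cid, dv):
        # Volume-constraint energy of cell cid if its volume changed by dv.
        if cid == 0:
            return 0.0                                         # medium: no constraint
        return lam * (volume[cid] + dv - target_volume[cid]) ** 2

    dH = (adhesion(new_id) - adhesion(old_id)
          + vol_energy(old_id, -1) - vol_energy(old_id, 0)
          + vol_energy(new_id, +1) - vol_energy(new_id, 0))

    if dH <= 0 or random.random() < math.exp(-dH / T):         # Metropolis rule
        grid[i[0]][i[1]] = new_id
        if old_id != 0:
            volume[old_id] -= 1
        if new_id != 0:
            volume[new_id] += 1
```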
=== The Hamiltonian ===
The original model proposed by Graner and Glazier contains cells of two types, with different adhesion energies for cells of the same type and cells of a different type. Each cell type also has a different contact energy with the medium, and the cell volume is assumed to remain close to a target value. The Hamiltonian is formulated as:
H = \sum_{i,j\ \text{neighbors}} J\left(\tau(\sigma_i), \tau(\sigma_j)\right)\left(1 - \delta(\sigma_i, \sigma_j)\right) + \lambda \sum_{\sigma_i} \left(v(\sigma_i) - V(\sigma_i)\right)^2,
where i, j are lattice sites, σi is the cell at site i, τ(σ) is the cell type of cell σ, J is the coefficient determining the adhesion between two cells of types τ(σ),τ(σ'), δ is the Kronecker delta, v(σ) is the volume of cell σ, V(σ) is the target volume, and λ is a Lagrange multiplier determining the strength of the volume constraint.
Cells with a lower J value for their membrane contact will stick together more strongly. Therefore, different patterns of cell sorting can be simulated by varying the J values.
== Extensions ==
Over time, the CPM has evolved from a specific model of cell sorting to a general framework with many extensions, some of which are partially or entirely off-lattice. Various cell behaviours, such as chemotaxis, elongation and haptotaxis, can be incorporated by extending either the Hamiltonian H or the change in energy ΔH. Auxiliary sub-lattices may be used to include additional spatial information, such as the concentrations of chemicals.
=== Chemotaxis ===
In CPM, cells can be made to move in the direction of higher chemokine concentration by increasing the probability of copying the ID of site j into site i when the chemokine concentration is higher at i than at j. This is done by modifying the change in energy ΔH with a term that is proportional to the difference in concentration at i and j:
\Delta H' = \Delta H - \mu \left(C_i - C_j\right)
where μ is the strength of chemotactic movement, and Ci and Cj are the concentrations of the chemokine at sites i and j, respectively. The chemokine gradient is typically implemented on a separate lattice of the same dimensions as the cell lattice.
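In code, the chemotactic bias is a one-line modification of the energy difference used in the Metropolis rule above; the field name chemokine and the lattice indexing are illustrative assumptions.

```python
def chemotactic_dH(dH, chemokine, i, j, mu):
    """Apply dH' = dH - mu * (C_i - C_j): extensions into sites with a
    higher chemokine concentration lower the effective energy difference
    and are therefore more likely to be accepted."""
    C_i = chemokine[i[0]][i[1]]
    C_j = chemokine[j[0]][j[1]]
    return dH - mu * (C_i - C_j)
```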
=== Multiscale and hybrid modeling using CPM ===
The core GGH (or CPM) algorithm, which defines the evolution of cellular-level structures, can easily be integrated with intracellular signalling dynamics, reaction-diffusion dynamics and rule-based models to account for processes which happen at lower (or higher) time scales. The open-source software Bionetsolver can be used to integrate intracellular dynamics with the CPM algorithm.
=== Applications of the Cellular Potts Model ===
The Cellular Potts Model (CPM), also known as the Glazier-Graner-Hogeweg (GGH) model, is widely used in computational biology to simulate multicellular systems. It allows modeling of cell shape, movement, adhesion, growth, and interactions with the environment, making it highly suitable for studying tissue morphogenesis, cancer, cell sorting, and pattern formation.
=== Frameworks Implementing the Cellular Potts Model ===
Several frameworks allow users to apply the CPM to multicellular model construction.
== References ==
== External links ==
James Glazier (professional website)
CompuCell3D, a CPM simulation environment: Sourceforge
SimTK
Notre Dame development site
Artificial Life model of multicellular morphogenesis with autonomously generated gradients for positional information using the Cellular Potts model
Stochastic cellular automata | Wikipedia/Cellular_Potts_model |
Biology Monte Carlo methods (BioMOCA) have been developed at the University of Illinois at Urbana-Champaign to simulate ion transport in an electrolyte environment through ion channels or nano-pores embedded in membranes. It is a 3-D particle-based Monte Carlo simulator for analyzing and studying the ion transport problem in ion channel systems or similar nanopores in wet/biological environments. The system simulated consists of a protein forming an ion channel (or an artificial nanopore such as a carbon nanotube, CNT), with a membrane (i.e. lipid bilayer) that separates two ion baths on either side. BioMOCA is based on two methodologies, namely the Boltzmann transport Monte Carlo (BTMC) and particle-particle-particle-mesh (P3M). The first uses the Monte Carlo method to solve the Boltzmann equation, while the latter splits the electrostatic forces into short-range and long-range components.
== Backgrounds ==
In full-atomic molecular dynamics simulations of ion channels, most of the computational cost is for following the trajectory of water molecules in the system. However, in BioMOCA the water is treated as a continuum dielectric background media. In addition to that, the protein atoms of the ion channel are also modeled as static point charges embedded in a finite volume with a given dielectric coefficient. So is the lipid membrane, which is treated as a static dielectric region inaccessible to ions. In fact the only non-static particles in the system are ions. Their motion is assumed classical, interacting with other ions through electrostatic interactions and pairwise Lennard-Jones potential. They also interact with the water background media, which is modeled using a scattering mechanism.
The ensemble of ions in the simulation region is propagated synchronously in time and 3-D space by integrating the equations of motion using the second-order accurate leapfrog scheme. Ion positions r and forces F are defined at time steps t and t + dt; ion velocities are defined at t − dt/2 and t + dt/2. The governing finite-difference equations of motion are

\vec{v}\left(t + \frac{dt}{2}\right) = \vec{v}\left(t - \frac{dt}{2}\right) + \frac{\vec{F}(t)}{m}\,dt

\vec{r}(t + dt) = \vec{r}(t) + \vec{v}\left(t + \frac{dt}{2}\right)dt

where m is the ion mass and F is the sum of electrostatic and pairwise ion-ion interaction forces.
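A minimal sketch of this integrator is shown below; the array shapes and the force callable are assumptions for illustration, not BioMOCA's internal interfaces.

```python
import numpy as np

def leapfrog_step(r, v_half, force, m, dt):
    """One leapfrog update for an ensemble of ions.
    r: (N, 3) positions at time t; v_half: (N, 3) velocities at t - dt/2;
    force: callable returning the (N, 3) total force at positions r;
    m: (N,) ion masses.  Returns r at t + dt and v at t + dt/2."""
    F = force(r)                             # forces evaluated at time t
    v_next = v_half + F / m[:, None] * dt    # kick: v(t + dt/2)
    r_next = r + v_next * dt                 # drift: r(t + dt)
    return r_next, v_next
```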
=== Electrostatic field solution ===
The electrostatic potential is computed at regular time intervals by solving Poisson's equation
\nabla \cdot \left(\varepsilon(r) \nabla \phi(r,t)\right) = -\left(\rho_{\text{ions}}(r,t) + \rho_{\text{perm}}(r)\right)
where ρ_ions(r,t) and ρ_perm(r) are the charge densities of the ions and of the permanent charges on the protein, respectively, ε(r) is the local dielectric constant or permittivity, and φ(r,t) is the local electrostatic potential. Solving this equation provides a self-consistent way to include the applied bias and the effects of image charges induced at dielectric boundaries.
The ion and partial charges on protein residues are assigned to a finite rectangular grid using the cloud-in-cell (CIC) scheme. Solving the Poisson equation on the grid accounts for the particle-mesh component of the P3M scheme. However, this discretization leads to an unavoidable truncation of the short-range component of the electrostatic force, which can be corrected by computing the short-range charge-charge Coulombic interactions.
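A sketch of the cloud-in-cell assignment, shown in one dimension for brevity (a 3-D version applies the same linear weights along each axis); the grid parameters are illustrative assumptions.

```python
import numpy as np

def cloud_in_cell_1d(positions, charges, x0, h, n_nodes):
    """Share each point charge linearly between the two grid nodes that
    bracket it; assumes every particle lies strictly inside the grid."""
    rho = np.zeros(n_nodes)
    for x, q in zip(positions, charges):
        s = (x - x0) / h              # position in units of the grid spacing
        i = int(np.floor(s))          # index of the node to the left
        w = s - i                     # fractional distance past that node
        rho[i] += q * (1.0 - w)       # weight assigned to the left node
        rho[i + 1] += q * w           # weight assigned to the right node
    return rho / h                    # convert to charge density
```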
=== Dielectric coefficient ===
Assigning the appropriate values for dielectric permittivity of the protein, membrane, and aqueous regions is of great importance. The dielectric coefficient determines the strength of the interactions between charged particles and also the dielectric boundary forces (DBF) on ions approaching a boundary between two regions of different permittivity. However, in nano scales the task of assigning specific permittivity is problematic and not straightforward.
The protein or membrane environment could respond to an external field in a number of different ways. Field-induced dipoles, reorientation of permanent dipoles, protonation and deprotonation of protein residues, and larger-scale reorganization of ionized side-chains and water molecules, both within the interior and on the surface of the protein, are all examples of how complicated the assignment of permittivity is. In MD simulations, where all the charges, dipoles, and field-induced atomic dipoles are treated explicitly, a dielectric value of 1 is suggested to be appropriate. However, in reduced-particle ion simulation programs such as BioMOCA, where the protein, membrane, and water are continuum backgrounds treated implicitly, and where the ion motion takes place on the same time scale as the protein's response to its presence, it is very difficult to assign the dielectric coefficients. In fact, changing the dielectric coefficients could easily alter the channel characteristics, such as ion permeation and selectivity.

The assignment of a dielectric coefficient for water is another key issue. The water molecules inside ion channels could be highly ordered, because of the tapered size of the pore, which is often lined with highly charged residues, or because of hydrogen-bond formation between water molecules and the protein. As a result, the dielectric constant of water inside an ion channel could be quite different from its value under bulk conditions. To complicate matters further, the dielectric coefficient of water inside nanopores is not necessarily an isotropic scalar but may be an anisotropic tensor with different values in different directions.
=== Anisotropic permittivity ===
It has become evident that the macroscopic properties of a system do not necessarily extend to molecular length scales. In a research study carried out by Reza Toghraee, R. Jay Mashl, and Eric Jakobsson at the University of Illinois at Urbana-Champaign, molecular dynamics simulations were used to study the properties of water in featureless hydrophobic cylinders with diameters ranging from 1 to 12 nm. This study showed that water undergoes distinct transitions in structure, dielectric properties, and mobility as the tube diameter is varied. In particular, they found that the dielectric properties of water in tubes of 1 to 10 nm diameter are quite different from those of bulk water and are in fact anisotropic in nature.
Though such featureless hydrophobic channels do not represent actual ion channels, and more research has to be done in this area before such data can be used for ion channels, it is evident that water properties such as the permittivity inside an ion channel or nanopore could be much more complicated than previously thought. While a high axial dielectric constant shields an ion's electrostatic charge in the axial direction (along the channel), a low radial dielectric constant increases the interaction between the mobile ion and the partial charges, or the dielectric charge images, on the channel, conveying stronger selectivity in ion channels.
Solving the Poisson equation based on an anisotropic permittivity has been incorporated into BioMOCA using the box integration discretization method, which is briefly described below.
== Calculations ==
=== Box integration discretization ===
In order to use box integration for discretizing a D-dimensional Poisson equation

\nabla \cdot (\varepsilon \nabla \varphi) = -\rho

with ε being a diagonal D × D tensor, the differential equation is reformulated as an integral equation. Integrating the above equation over a D-dimensional region Ω and using Gauss's theorem yields the integral formulation

\oint_{\partial \Omega} \hat{n} \cdot (\varepsilon \nabla \varphi) = -\int_{\Omega} \rho
In the following, a two-dimensional case is assumed; the upgrade to a three-dimensional system is straightforward, as Gauss's theorem is also valid in one and three dimensions.
ε is assumed to be given on the rectangular regions between the nodes, while φ is defined on the grid nodes.
The integration regions Ω are then chosen as rectangles centered on a node and extending to the 4 nearest-neighbor nodes. The gradient ∇φ is then approximated using a centered difference normal to the boundary of the integration region Ω, and ε is averaged over the integration surface ∂Ω. This approach allows the left-hand side of the Poisson equation above to be approximated to first order as
\begin{aligned}
\oint_{\partial\Omega} \hat{n} \cdot (\varepsilon \nabla \varphi)
&= \frac{\varphi_{i+1,j} - \varphi_{i,j}}{h_i^x}\left(\frac{h_j^y}{2}\,\varepsilon_{i,j}^x + \frac{h_{j-1}^y}{2}\,\varepsilon_{i,j-1}^x\right)
 - \frac{\varphi_{i,j} - \varphi_{i-1,j}}{h_{i-1}^x}\left(\frac{h_j^y}{2}\,\varepsilon_{i-1,j}^x + \frac{h_{j-1}^y}{2}\,\varepsilon_{i-1,j-1}^x\right)\\
&\quad + \frac{\varphi_{i,j+1} - \varphi_{i,j}}{h_j^y}\left(\frac{h_i^x}{2}\,\varepsilon_{i,j}^y + \frac{h_{i-1}^x}{2}\,\varepsilon_{i-1,j}^y\right)
 - \frac{\varphi_{i,j} - \varphi_{i,j-1}}{h_{j-1}^y}\left(\frac{h_i^x}{2}\,\varepsilon_{i,j-1}^y + \frac{h_{i-1}^x}{2}\,\varepsilon_{i-1,j-1}^y\right)
\end{aligned}
where ε^x and ε^y are the two components of the diagonal of the tensor ε.
Discretizing the right-hand side of the Poisson equation is fairly simple. ρ is discretized on the same grid nodes as φ:

\int_{\Omega_i} \rho = \text{Volume}(\Omega_i)\, \rho_i
=== Ion size ===
The finite size of ions is accounted for in BioMOCA using pairwise repulsive forces derived from the 6–12 Lennard-Jones potential. A truncated-shifted form of the Lennard-Jones potential is used in the simulator to mimic ionic core repulsion. The modified form of the Lennard-Jones pairwise potential that retains only the repulsive component is given by
U_{LJ}(r_{ij}) = \begin{cases} 4\epsilon_{LJ}\left[\left(\dfrac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\dfrac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] + \epsilon_{LJ} & r_{ij} < 2^{1/6}\sigma_{ij}\\ 0 & r_{ij} \geq 2^{1/6}\sigma_{ij} \end{cases}
Here, ε_LJ is the Lennard-Jones energy parameter and σ_ij = (σ_i + σ_j)/2 is the average of the individual Lennard-Jones distance parameters for particles i and j. Using a truncated form of the potential is computationally efficient while preventing the ions from overlapping or coalescing, something that would be clearly unphysical.
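The truncated-shifted potential above is straightforward to implement; the example parameter values below are placeholders, not fitted ion parameters.

```python
def wca_repulsion(r, sigma_ij, eps_lj):
    """Purely repulsive (truncated-shifted) 6-12 Lennard-Jones potential:
    shifted up by eps_lj and cut off at its minimum r = 2^(1/6) * sigma,
    so the pair energy is continuous and zero beyond the cutoff."""
    r_cut = 2.0 ** (1.0 / 6.0) * sigma_ij
    if r >= r_cut:
        return 0.0
    sr6 = (sigma_ij / r) ** 6
    return 4.0 * eps_lj * (sr6 * sr6 - sr6) + eps_lj

# Mixing rule sigma_ij = (sigma_i + sigma_j) / 2 with placeholder radii (nm)
sigma_ij = 0.5 * (0.30 + 0.35)
print(wca_repulsion(0.30, sigma_ij, eps_lj=0.1))
```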
=== Ion-protein interaction ===
The availability of high-resolution X-ray crystallographic measurements of complete molecular structures provides information about the type and location of all the atoms that form the protein. In BioMOCA the protein atoms are modeled as static point charges embedded in a finite volume inaccessible to the ions and associated with a user-defined dielectric coefficient. Moreover, a number of force-field parameter sets are available that provide the charges and radii of atoms in different amino-acid groups. The conjunction of the molecular structure and force fields provides the coordinates, radii, and charge of each atom in the protein channel. BioMOCA uses such information in the standard PQR (Position-Charge-Radius) format to map the protein system onto a rectangular grid.
Ideally, the steric interactions between protein atoms and the ions in the aqueous medium would use a repulsive potential like Lennard-Jones to prevent ions from penetrating the protein. As this approach would add a significant computational load, a simpler approach is chosen that treats the protein surfaces as predetermined hard-wall boundaries. Many recent open-source molecular biology packages have built-in facilities that determine the volume accessible to ions in a protein system. The Adaptive Poisson Boltzmann Solver (APBS) scheme has been incorporated into BioMOCA to obtain the accessible volume region and therefore partition the simulation domain into continuous regions.
Ions are deemed not to have access to the protein and lipid regions: if any point within the finite-size ionic sphere crosses the protein or membrane boundary, a collision is assumed and the ion is reflected diffusively.
=== Ion-water interactions ===
As a reduced particle approach, BioMOCA replaces the explicit water molecules with a continuum background and handles the ion-water interactions using the BTMC method, in which appropriate scattering rates should be chosen. In other words, ion trajectories are randomly interrupted by scattering events that account for the ions' diffusive motion in water. Between these scattering events, ions follow the Newtonian forces. The free-flight times, Tf, are generated statistically from the total scattering rate according to
-\ln(r) = \int_{0}^{T_f} \lambda\left(\vec{p}(t)\right)dt
where r is a random number uniformly distributed on the unit interval and λ, a function of momentum, is the total scattering rate for all collision mechanisms. At the end of each free flight, the ion's velocity is reselected randomly from a Maxwellian distribution. As the correct scattering mechanism for ion-water interactions in non-bulk electrolyte solutions has yet to be developed, a position-dependent scattering rate linked to the local diffusivity is used in the model. This dependency on position comes from the fact that water molecules can have different degrees of organization in different regions, which affects the scattering rate.
=== Position-dependent diffusivity ===
It is widely accepted that ions and water molecules do not have the same mobility or diffusivity in confined regions as in the bulk; in fact, a reduction of the effective mobility of ions in ion channels is likely. In reduced particle methods, where the channel water is treated as an implicit continuum background, a mean ion mobility is needed to describe how ions diffuse under local electrostatic forces and random events. In Transport Monte Carlo simulations, the total scattering rate λ is assumed to result only from ion-water interactions; it is related to the ion diffusivity by the expression
\lambda = \frac{kT}{mD}
where m is the mass of the ion and D is its diffusion constant. As the equation indicates, a reduced diffusivity of ions inside the lumen of the channel translates into an increased incidence of scattering events.
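Combining the last two relations gives a simple recipe for drawing free-flight times. The sketch below assumes a locally constant scattering rate over the flight (a position-dependent rate would require integrating λ along the trajectory), and the ion parameters are illustrative.

```python
import math
import random

kB = 1.380649e-23          # Boltzmann constant, J/K

def scattering_rate(T, m, D):
    """Total ion-water scattering rate lambda = kT / (m D)."""
    return kB * T / (m * D)

def free_flight_time(lam):
    """Sample T_f from -ln(r) = lam * T_f, i.e. an exponential law,
    valid when lambda is constant over the flight."""
    return -math.log(1.0 - random.random()) / lam

# Illustrative numbers: a ~K+ ion mass and a bulk-like diffusivity at 300 K
lam = scattering_rate(300.0, 6.5e-26, 2.0e-9)
print(free_flight_time(lam))
```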
=== Hydration shells ===
In addition to having a diffusive effect on ion transport, water molecules also form hydration shells around individual ions due to their polar nature. The hydration shell not only shields the charge on ions from other ions but also modulates the ion radial distribution function causing the formation of peaks and troughs. The average minimum distance between two ions is increased as there is always at least one layer of water molecules present between them, acting as a physical deterrent preventing two ions from getting too close to each other, in a manner that is similar to the short-range repulsive component of the Lennard-Jones potential.
The theory of hydration shells is well developed in the physical chemistry literature; however, a simple model is required that captures the essential effects with as little computational overhead as possible. For this purpose, the same pairwise potential discussed by Im and Roux is implemented to include the effect of hydration shells:
U_{hy}(r) = c_0 \exp\left(\frac{c_1 - r}{c_2}\right)\cos\left(c_3\,(c_1 - r)\,\pi\right) + c_4\left(\frac{c_1}{r}\right)^{6}
The coefficients ci were determined empirically for a 1 M KCl solution, using MD simulations to benchmark the ion radial distribution functions against equilibrium Monte Carlo simulations. The effect of hydration shells was found to be important in simulations at higher salt concentrations, where the conductance of many ion channels, porin among them, is observed to saturate as the salt concentration in the electrolyte baths is further increased. Earlier simulations that did not include a model of hydration shells did not reproduce this conductance saturation behavior, which suggests that an additional repulsive potential acts to prevent ion crowding, limiting the concentration of ions and the current density in the confined space of the pore even at high bath salt concentrations. When the repulsive potential was included, moderate channel conductance was observed.
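The hydration potential is equally simple to evaluate; since the fitted ci values are not reproduced here, the numbers in the example are placeholders rather than the published 1 M KCl parameters.

```python
import math

def hydration_potential(r, c):
    """Im-Roux-style pairwise hydration-shell potential:
    U(r) = c0*exp((c1-r)/c2)*cos(c3*(c1-r)*pi) + c4*(c1/r)**6."""
    c0, c1, c2, c3, c4 = c
    return (c0 * math.exp((c1 - r) / c2) * math.cos(c3 * (c1 - r) * math.pi)
            + c4 * (c1 / r) ** 6)

# Placeholder coefficients, for illustration only
print(hydration_potential(3.0, (1.0, 3.5, 1.0, 0.5, 0.25)))
```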
== Conditions and methods ==
=== Boundary conditions ===
The electrical and physiological properties of ion channels are experimentally measured by inserting the channel into a lipid membrane separating two baths containing solutions of specific concentrations. A constant electrostatic bias is applied across the channel by immersing electrodes in the two baths. Formulating boundary conditions that accurately represent these contact regions may require enormously large bath regions and is a challenging task. Beyond a Debye length from the membrane, the electrostatic potential and the ion densities do not vary appreciably; this assumption is supported by the continuum results presented earlier. For typical salt concentrations used in ion channel simulations, the Debye length is of the order of 10 Å. Under this assumption, Dirichlet boundary conditions are imposed on the potential at the two domain boundary planes that are transverse to the channel, taking care that these planes are sufficiently far from the membrane.
The other problem in duplicating the experimental conditions is maintaining a fixed charge density in the two baths. This problem is treated by maintaining the specified density in two buffer regions extending from the boundary planes toward the membrane. The number of ions needed to maintain the density in the two buffer regions is calculated at the start of the simulation. The count of ions in these buffers is sampled throughout the simulation, and an ion is injected whenever a deficit is observed. The initial velocity of the injected particle is drawn from a Maxwellian distribution. Ions can leave the system only by exiting through the two Dirichlet boundary planes, and an ion is never removed artificially from these buffer regions. Reflections from the Neumann boundary planes are treated as elastic reflections.
=== Multi-grids and grid focusing method ===
In almost any method for simulating ion channels, the major computational cost comes from the calculation of the electrostatic forces acting on the ions. In continuum models, for instance, where ionic densities exist rather than explicit ions, the electrostatic potential is calculated in a self-consistent manner by solving the Poisson equation. In MD simulations, on the other hand, the electrostatic forces acting on the particles are calculated by explicit evaluation of the Coulombic force term, often splitting the short-range and long-range electrostatic forces so they can be computed with different methods. In a reduced particle method, the long-range electrostatic forces are evaluated by solving the Poisson equation and augmenting the forces so obtained with a short-range component. By solving the Poisson equation it is possible to self-consistently include the forces arising from the bias on the system, whereas this is a difficult issue to address in MD simulations.
Currently there are two Poisson solvers implemented in BioMOCA, both based on the finite difference method. One uses the pre-conditioned conjugate gradient (pCG) scheme and is used by default; the other is borrowed from an APBS solver and uses a V-multi-grid scheme. Besides the numerical approach to solving the Poisson equation, the main difference between the two solvers is how they treat the permittivity in the system: in the first solver a dielectric value is assigned to each cell in the grid, while in the APBS solver the dielectric coefficients are defined on the grid nodes. As discussed earlier, the box integration method is used in the pCG solver, which treats the Poisson equation in the most accurate way. Even though a full multi-grid solver based on the box integration method has been under development, there is a neat way to reuse the already existing code and treat ion channel systems.
Ion channel simulations require the presence of large bath regions for an accurate treatment of screening. Such bath regions make the mesh domain of the Poisson equation large and lead to either a large number of grid points with fine mesh resolution or a small number of grid points with very coarse discretization. Based on bulk simulations, a coarse mesh is sufficient for describing the baths using the P3M scheme. However, a fine resolution is required in the channel domain because of the highly charged nature of these regions and the presence of spatially varying dielectric regions. Besides, the ultimate interest is to study the channel behavior in terms of ion permeability, selectivity, gating, density, and so on. In other words, it pays to put more computational resources into the channel region and the bare minimum into the baths, to reduce the overall computational cost and speed simulations up from weeks to days.
A scheme based on the grid focusing method has been developed that makes it possible to satisfy the requirements of a large bath region and a fine grid resolution in the channel at the same time, in a computationally effective way. This methodology is capable of having multiple fine-mesh domains, which may be needed to describe multi-pore channels like OmpF porin, arrays of ion channels sharing the same bath regions, or even finer meshes nested inside a fine mesh for relatively large channels with narrow ion passages, like the nicotinic receptor channel.
The first grid is a coarse mesh spanning the entire problem domain, including the bath regions and the channel region. The second grid (and so on for any further grids: third, fourth, etc.) is a relatively much finer mesh that spans a sub-domain of the system containing the region that requires fine resolution, such as the channel pore. The Poisson equation is first solved on the coarse mesh with all the Dirichlet and Neumann boundary conditions, taking into account the applied bias. Next, the boundary conditions for the secondary meshes are obtained by interpolating from the first or previous solutions of the Poisson equation. The Poisson equation is then solved again on the finer meshes using the new boundary conditions. In this way, electrostatic fields with different mesh discretizations can be generated for different regions.
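The two-level version of this scheme can be sketched as follows. For brevity the sketch uses a uniform-permittivity Jacobi relaxation as a stand-in for BioMOCA's pCG or multigrid solvers, and the window callable that carries the coarse solution onto the fine-grid boundary is left to the caller; all names are assumptions for illustration.

```python
import numpy as np

def solve_poisson_dirichlet(rho, h, bc, n_iter=2000):
    """Jacobi relaxation for -lap(phi) = rho on a uniform 2-D grid;
    bc supplies the Dirichlet boundary ring (and the initial guess)."""
    phi = bc.copy()
    for _ in range(n_iter):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2]
                                  + h * h * rho[1:-1, 1:-1])
    return phi

def grid_focusing(rho_coarse, h_c, bc_coarse, rho_fine, h_f, window):
    """Solve on the coarse grid spanning the whole domain, interpolate the
    coarse potential onto the boundary of the fine sub-domain via window(),
    then re-solve there at fine resolution."""
    phi_c = solve_poisson_dirichlet(rho_coarse, h_c, bc_coarse)
    bc_f = window(phi_c)            # coarse solution -> fine-grid boundary
    phi_f = solve_poisson_dirichlet(rho_fine, h_f, bc_f)
    return phi_c, phi_f
```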
=== EMF and DBF ===
The electromotive force (EMF) is a measure of the energy needed for a charged particle, such as an ion, to cross the ion channel embedded in the membrane. Part of this potential energy barrier is due to the interaction between the crossing ion and the permanent/partial charges on the protein residues. The other part comes from the induced dipoles in the protein/membrane dielectric medium and is referred to as the dielectric boundary force (DBF). To compute the DBF alone, one may turn off all the static charges on the protein residues, drag the ion through the pore, and compute the energy barrier using
P_{DBF} = -\int d\hat{z} \cdot \vec{E}
It is important to note that EMF or DBF measurements are only qualitative, as an ion does not necessarily cross the channel through the center of its lumen in a straight line, and it is often accompanied by other ions moving in the same or opposite directions, which dramatically changes the dynamics of the system. Moreover, unlike steered MD calculations, where the protein residues dynamically reposition themselves as one or more ions bounce across the channel, in these EMF or DBF calculations the protein is modeled as a static continuum, which further affects the energy calculations quantitatively. Another issue that affects the measurements is the absence of hydration water molecules, which move with the ion and shield part of its charge. Having said all of the above, computing the EMF or DBF is still valuable for addressing channel selectivity or gating. Computing either of these two energy barriers is available as an option in BioMOCA.
=== Visualization using VMD ===
VMD was equipped with the option of loading BioMOCA structures. This is a very useful feature, as one can load the protein structure (i.e., a PDB or PQR file) along with the structures generated by BioMOCA to make comparisons; for example, BioMOCA can generate a structure for the gramicidin channel with a membrane wrapped around it. Furthermore, BioMOCA also dumps the ion trajectories in standard formats, so they can later be loaded into molecular visualization tools such as VMD and watched frame by frame as a movie.
=== Recording trajectories in binary ===
Other than counting the number of ions crossing the channel, it is sometimes desirable to study their behavior in different regions of the channel. Examples would be the average occupancy of ions or their average velocity inside the channel or a nanopore. BioMOCA has been equipped with the option of dumping every ion's position, average and instantaneous velocities, potential and kinetic energies, average and instantaneous displacements, and other information at every step (or every few steps) of the simulation in ASCII format, so that such trajectory information can be studied later to gather further statistics. From a technical point of view, however, dumping such information for tens of ions, even every few hundred time steps, slows down the simulations and produces huge files accumulating to tens of gigabytes. Loading such files back from disk storage is also very time-consuming and computationally inefficient. On top of that, recording the numerical information in ASCII format does not retain full machine precision, causing a loss of accuracy.
Solving such problems is actually straightforward: simply avoid ASCII and use a binary format instead. Not only does this preserve machine accuracy, but writing to and reading from the file system is also much faster. The computational overhead of dumping the trajectories becomes negligible, and the trajectory files become about two orders of magnitude smaller. The downside is that programming and decoding the data may become tricky, but once done correctly and with care, the advantages of the binary format are well worth the extra effort. BioMOCA is now equipped with tools to record the trajectory information in binary format.
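As a small illustration of the binary-versus-ASCII point, the snippet below writes and reads trajectory frames with NumPy; the per-ion record layout is an assumption made for the example, not BioMOCA's actual file format.

```python
import numpy as np

n_ions = 16
frame = np.random.rand(n_ions, 6)       # x, y, z, vx, vy, vz per ion

# Binary dump: full float64 machine precision, far smaller than ASCII
with open("traj.bin", "ab") as f:       # append one frame per recorded step
    frame.astype(np.float64).tofile(f)

# Reading back: reshape the flat stream into (n_frames, n_ions, 6)
data = np.fromfile("traj.bin", dtype=np.float64).reshape(-1, n_ions, 6)
print(data.shape)
```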
== See also ==
Monte Carlo method
Biology
Computational biology
== References == | Wikipedia/Biology_Monte_Carlo_method |
The VEGAS algorithm, due to G. Peter Lepage, is a method for reducing error in Monte Carlo simulations by using a known or approximate probability distribution function to concentrate the search in those areas of the integrand that make the greatest contribution to the final integral.
The VEGAS algorithm is based on importance sampling. It samples points from the probability distribution described by the function |f|, so that the points are concentrated in the regions that make the largest contribution to the integral. The GNU Scientific Library (GSL) provides a VEGAS routine.
== Sampling method ==
In general, if the Monte Carlo integral of f over a volume Ω is sampled with points distributed according to a probability distribution described by the function g, we obtain an estimate E_g(f; N):

\mathrm{E}_g(f;N) = \frac{1}{N}\sum_{i}^{N} f(x_i)/g(x_i).
The variance of the new estimate is then

\mathrm{Var}_g(f;N) = \mathrm{Var}(f/g;N)
where Var(f; N) is the variance of the original estimate,

\mathrm{Var}(f;N) = \mathrm{E}(f^2;N) - \left(\mathrm{E}(f;N)\right)^2.
If the probability distribution is chosen as

g = |f| \Big/ \int_{\Omega} |f(x)|\,dx

then it can be shown that the variance Var_g(f; N) vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution.
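The variance reduction can be demonstrated with a toy integrand. This is not the VEGAS algorithm itself, which builds g adaptively from histograms, but a minimal importance-sampling sketch in which g is simply chosen close to |f|; the integrand and sampling parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-((x - 0.5) / 0.1) ** 2)   # sharply peaked on [0, 1]
N = 100_000

# Plain Monte Carlo: uniform sampling, g = 1
plain = f(rng.uniform(0.0, 1.0, N))

# Importance sampling: g is a Gaussian roughly matching the peak of |f|
mu, s = 0.5, 0.12
y = rng.normal(mu, s, N)
g = np.exp(-0.5 * ((y - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
weighted = np.where((y > 0.0) & (y < 1.0), f(y) / g, 0.0)

print(plain.mean(), plain.std() / np.sqrt(N))        # estimate and its error
print(weighted.mean(), weighted.std() / np.sqrt(N))  # same estimate, smaller error
```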
== Approximation of probability distribution ==
The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region while histogramming the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like K^d with dimension d, the probability distribution is approximated by a separable function:
g(x_1, x_2, \ldots) = g_1(x_1)\, g_2(x_2) \cdots
so that the number of bins required is only Kd (K bins in each of the d dimensions). This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption: it is most efficient when the peaks of the integrand are well localized. If an integrand can be rewritten in a form which is approximately separable, this will increase the efficiency of integration with VEGAS.
== See also ==
Las Vegas algorithm
Monte Carlo integration
Importance sampling
== References == | Wikipedia/VEGAS_algorithm |
In chemistry, dynamic Monte Carlo (DMC) is a Monte Carlo method for modeling the dynamic behaviors of molecules by comparing the rates of individual steps with random numbers. It is essentially the same as Kinetic Monte Carlo. Unlike the Metropolis Monte Carlo method, which has been employed to study systems at equilibrium, the DMC method is used to investigate non-equilibrium systems such as reactions, diffusion, and so forth (Meng and Weinberg 1994). The method is mainly applied to the analysis of adsorbate behavior on surfaces.
There are several well-known methods for performing DMC simulations, including the First Reaction Method (FRM) and the Random Selection Method (RSM). Although the FRM and RSM give the same results from a given model, the required computer resources differ depending on the system being studied.
In the FRM, the reaction whose tentative time is smallest in the event list is executed. The event list stores the tentative times of all possible reactions. After one event is selected, the system time is advanced to the reaction time, and the event list is recalculated. This method is efficient in computation time because a reaction occurs at every event. On the other hand, it consumes a lot of computer memory because of the event list, so it is difficult to apply to large-scale systems.
The RSM decides whether the reaction of a randomly selected molecule proceeds by comparing its transition probability with a random number. In this method a reaction does not necessarily proceed at every event, so it needs significantly more computation time than the FRM. However, it saves computer memory because it does not use an event list, and large-scale systems can therefore be calculated with it. A schematic sketch of both update rules is given below.
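The following sketch contrasts the two selection rules on a toy rate table; the reaction labels, rates, and time step are illustrative assumptions, and a real simulation would keep the event list and recompute only the entries affected by the executed event.

```python
import math
import random

def frm_step(rates, t):
    """First Reaction Method: draw a tentative time for every reaction
    from its rate, then execute the earliest one and advance the clock."""
    tentative = {rxn: t - math.log(1.0 - random.random()) / k
                 for rxn, k in rates.items()}
    rxn = min(tentative, key=tentative.get)
    return rxn, tentative[rxn]

def rsm_step(rates, dt):
    """Random Selection Method: pick a reaction at random and accept it
    with probability rate * dt (requires rate * dt << 1)."""
    rxn = random.choice(list(rates))
    return rxn if random.random() < rates[rxn] * dt else None

t = 0.0
rxn, t = frm_step({"adsorb": 2.0, "desorb": 0.5, "diffuse": 5.0}, t)
print(rxn, t)
```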
== See also ==
Hybrid Monte Carlo
== References ==
(Meng and Weinberg 1994): B. Meng and W. H. Weinberg, J. Chem. Phys. 100, 5280 (1994)
(Meng and Weinberg 1996): B. Meng, W.H. Weinberg, Surface Science 364 (1996) 151-163. | Wikipedia/Dynamic_Monte_Carlo_method |
Particle filters, also known as sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions to filtering problems for nonlinear state-space systems, such as those arising in signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral, in reference to mean-field interacting particle methods that have been used in fluid mechanics since the beginning of the 1960s. The term "Sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998.
Particle filtering uses a set of particles (also called samples) to represent the posterior distribution of a stochastic process given the noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems.
Particle filters update their prediction in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse is a common issue encountered in these filtering algorithms. However, it can be mitigated by including a resampling step before the weights become uneven. Several adaptive resampling criteria can be used including the variance of the weights and the relative entropy concerning the uniform distribution. In the resampling step, the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights.
From the statistical and probabilistic point of view, particle filters may be interpreted as mean-field particle interpretations of Feynman-Kac probability measures. These particle integration techniques were developed in molecular chemistry and computational physics by Theodore E. Harris and Herman Kahn in 1951, Marshall N. Rosenbluth and Arianna W. Rosenbluth in 1955, and more recently by Jack H. Hetherington in 1984. In computational physics, these Feynman-Kac type path particle integration methods are also used in Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods. Feynman-Kac interacting particle methods are also strongly related to mutation-selection genetic algorithms currently used in evolutionary computation to solve complex optimization problems.
The particle filter methodology is used to solve Hidden Markov Model (HMM) and nonlinear filtering problems. With the notable exception of linear-Gaussian signal-observation models (Kalman filter) or wider classes of models (Benes filter), Mireille Chaleyat-Maurel and Dominique Michel proved in 1984 that the sequence of posterior distributions of the random states of a signal, given the observations (a.k.a. optimal filter), has no finite recursion. Various other numerical methods based on fixed grid approximations, Markov Chain Monte Carlo techniques, conventional linearization, extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large-scale systems, unstable processes, or insufficiently smooth nonlinearities.
Particle filters and Feynman-Kac particle methodologies find application in signal and image processing, Bayesian inference, machine learning, risk analysis and rare event sampling, engineering and robotics, artificial intelligence, bioinformatics, phylogenetics, computational science, economics and mathematical finance, molecular chemistry, computational physics, pharmacokinetics, quantitative risk and insurance and other fields.
== History ==
=== Heuristic-like algorithms ===
From a statistical and probabilistic viewpoint, particle filters belong to the class of branching/genetic type algorithms, and mean-field type interacting particle methodologies. The interpretation of these particle methods depends on the scientific discipline. In Evolutionary Computing, mean-field genetic type particle methodologies are often used as heuristic and natural search algorithms (a.k.a. Metaheuristic). In computational physics and molecular chemistry, they are used to solve Feynman-Kac path integration problems or to compute Boltzmann-Gibbs measures, top eigenvalues, and ground states of Schrödinger operators. In Biology and Genetics, they represent the evolution of a population of individuals or genes in some environment.
The origins of mean-field type evolutionary computational techniques can be traced back to 1950 and 1954 with Alan Turing's work on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey. The first trace of particle filters in statistical methodology dates back to the mid-1950s: the 'Poor Man's Monte Carlo' proposed by John Hammersley et al. in 1954 contained hints of the genetic type particle filtering methods used today. In 1963, Nils Aall Barricelli simulated a genetic type algorithm to mimic the ability of individuals to play a simple game. In the evolutionary computing literature, genetic-type mutation-selection algorithms became popular through the seminal work of John Holland in the early 1970s, particularly his book published in 1975.
In Biology and Genetics, the Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms. The computer simulation of the evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern mutation-selection genetic particle algorithms.
From the mathematical viewpoint, the conditional distribution of the random states of a signal given some partial and noisy observations is described by a Feynman-Kac probability on the random trajectories of the signal weighted by a sequence of likelihood potential functions. Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods can also be interpreted as a mean-field genetic type particle approximation of Feynman-Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. One can also quote the earlier seminal works of Theodore E. Harris and Herman Kahn in particle physics, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of genetic particle algorithms in advanced signal processing and Bayesian inference is more recent. In January 1993, Genshiro Kitagawa developed a "Monte Carlo filter"; a slightly modified version of this article appeared in 1996. In April 1993, Neil J. Gordon et al. published in their seminal work an application of a genetic-type algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter' and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Independent works on particle filters by Pierre Del Moral and by Himilcon Carvalho, Pierre Del Moral, André Monin, and Gérard Salut were published in the mid-1990s. Particle filters were also developed in signal processing in 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on RADAR/SONAR and GPS signal processing problems.
=== Mathematical foundations ===
From 1950 to 1996, all the publications on particle filters and genetic algorithms, including the pruning and resampling Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor a discussion of the bias of the estimates or of genealogical and ancestral tree-based algorithms.
The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Pierre Del Moral in 1996. The article also contains proof of the unbiased properties of a particle approximation of likelihood functions and unnormalized conditional probability measures. The unbiased particle estimator of the likelihood functions presented in this article is used today in Bayesian statistical inference.
Dan Crisan, Jessica Gaines, and Terry Lyons, as well as Pierre Del Moral and Terry Lyons, created branching-type particle techniques with various population sizes around the end of the 1990s. P. Del Moral, A. Guionnet, and L. Miclo made further advances in this subject in 2000. Pierre Del Moral and Alice Guionnet proved the first central limit theorems in 1999, and Pierre Del Moral and Laurent Miclo proved them in 2000. The first uniform convergence results with respect to the time parameter for particle filters were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet. The first rigorous analysis of genealogical tree-based particle filter smoothers is due to P. Del Moral and L. Miclo in 2001.
The theory on Feynman-Kac particle methodologies and related particle filter algorithms was developed in books published in 2000 and 2004. These abstract probabilistic models encapsulate genetic type algorithms, particle and bootstrap filters, interacting Kalman filters (a.k.a. Rao-Blackwellized particle filters), and importance sampling and resampling style particle filter techniques, including genealogical tree-based and particle backward methodologies for solving filtering and smoothing problems. Other classes of particle filtering methodologies include genealogical tree-based models, backward Markov particle models, adaptive mean-field particle models, island-type particle models, particle Markov chain Monte Carlo methodologies, Sequential Monte Carlo samplers, Sequential Monte Carlo Approximate Bayesian Computation methods, and Sequential Monte Carlo ABC based Bayesian Bootstrap.
== The filtering problem ==
=== Objective ===
A particle filter's goal is to estimate the posterior density of state variables given observation variables. The particle filter is intended for use with a hidden Markov Model, in which the system includes both hidden and observable variables. The observable variables (observation process) are linked to the hidden variables (state-process) via a known functional form. Similarly, the probabilistic description of the dynamical system defining the evolution of the state variables is known.
A generic particle filter estimates the posterior distribution of the hidden states using the observation measurement process. With respect to a state-space such as the one below:
\begin{array}{cccccccccc}
X_0 & \to & X_1 & \to & X_2 & \to & X_3 & \to & \cdots & \text{signal}\\
\downarrow & & \downarrow & & \downarrow & & \downarrow & & \cdots & \\
Y_0 & & Y_1 & & Y_2 & & Y_3 & & \cdots & \text{observation}
\end{array}
the filtering problem is to estimate sequentially the values of the hidden states X_k, given the values of the observation process Y_0, ..., Y_k, at any time step k.
All Bayesian estimates of X_k follow from the posterior density p(x_k | y_0, y_1, ..., y_k). The particle filter methodology provides an approximation of these conditional probabilities using the empirical measure associated with a genetic type particle algorithm. In contrast, the Markov chain Monte Carlo or importance sampling approach would model the full posterior p(x_0, x_1, ..., x_k | y_0, y_1, ..., y_k).
=== The Signal-Observation model ===
Particle methods often assume that the signal X_k and the observations Y_k can be modeled in this form:

X_0, X_1, ... is a Markov process on R^{d_x} (for some d_x ≥ 1) that evolves according to the transition probability density p(x_k | x_{k-1}). This model is also often written in a synthetic way as

X_k \mid X_{k-1} = x_{k-1} \sim p(x_k \mid x_{k-1})

with an initial probability density p(x_0).
The observations Y_0, Y_1, ... take values in some state space on R^{d_y} (for some d_y ≥ 1) and are conditionally independent provided that X_0, X_1, ... are known. In other words, each Y_k only depends on X_k. In addition, we assume that the conditional distribution of Y_k given X_k = x_k is absolutely continuous, and in a synthetic way we have

Y_k \mid X_k = x_k \sim p(y_k \mid x_k)
An example of a system with these properties is:

X_k = g(X_{k-1}) + W_{k-1}

Y_k = h(X_k) + V_k

where both W_k and V_k are mutually independent sequences with known probability density functions, and g and h are known functions. These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter. If the functions g and h in the above example are linear, and if both W_k and V_k are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter-based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if the probability distribution is Gaussian a third-order approximation is possible).
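A minimal bootstrap (sequential importance resampling) filter for a model of exactly this additive-noise form is sketched below; the nonlinear g and h, the noise levels, and the particle count are illustrative choices, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_filter(ys, n, g, h, sig_w, sig_v, x0_sampler):
    """Bootstrap particle filter for X_k = g(X_{k-1}) + W_k, Y_k = h(X_k) + V_k
    with Gaussian noises; returns posterior-mean estimates of X_k."""
    x = x0_sampler(n)
    estimates = []
    for yk in ys:
        x = g(x) + rng.normal(0.0, sig_w, n)            # propagate (mutation)
        w = np.exp(-0.5 * ((yk - h(x)) / sig_v) ** 2) + 1e-300
        w /= w.sum()                                    # likelihood weights
        estimates.append(np.sum(w * x))                 # weighted posterior mean
        x = x[rng.choice(n, n, p=w)]                    # resample (selection)
    return np.array(estimates)

# Toy nonlinear model and a simulated observation sequence
g = lambda x: 0.5 * x + 25.0 * x / (1.0 + x ** 2)
h = lambda x: x ** 2 / 20.0
x_true, ys = 0.1, []
for _ in range(50):
    x_true = g(x_true) + rng.normal(0.0, np.sqrt(10.0))
    ys.append(h(x_true) + rng.normal(0.0, 1.0))

est = bootstrap_filter(ys, 1000, g, h, np.sqrt(10.0), 1.0,
                       lambda n: rng.normal(0.0, 2.0, n))
print(est[-5:])
```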
The assumption that the initial distribution and the transitions of the Markov chain are absolutely continuous with respect to the Lebesgue measure can be relaxed. To design a particle filter we simply need to assume that we can sample the transitions X_{k-1} → X_k of the Markov chain X_k, and compute the likelihood function x_k ↦ p(y_k | x_k) (see for instance the genetic selection-mutation description of the particle filter given below). The continuity assumption on the Markov transitions of X_k is only used to derive, in an informal (and rather abusive) way, different formulae between posterior distributions using Bayes' rule for conditional densities.
=== Approximate Bayesian computation models ===
In certain problems, the conditional distribution of the observations given the random states of the signal may fail to have a density, or that density may be impossible or too complex to compute. In this situation an additional level of approximation is needed. One strategy is to replace the signal X_k by the Markov chain \mathcal{X}_k = (X_k, Y_k) and to introduce a virtual observation of the form

\mathcal{Y}_k = Y_k + \epsilon\,\mathcal{V}_k \quad \text{for some parameter } \epsilon \in [0,1]

for some sequence of independent random variables \mathcal{V}_k with known probability density functions. The central idea is to observe that

\text{Law}\left(X_k \mid \mathcal{Y}_0 = y_0, \cdots, \mathcal{Y}_k = y_k\right) \approx_{\epsilon \downarrow 0} \text{Law}\left(X_k \mid Y_0 = y_0, \cdots, Y_k = y_k\right)

The particle filter associated with the Markov process \mathcal{X}_k = (X_k, Y_k), given the partial observations \mathcal{Y}_0 = y_0, \cdots, \mathcal{Y}_k = y_k, is defined in terms of particles evolving in R^{d_x + d_y} with a likelihood function given, with some obvious abusive notation, by p(\mathcal{Y}_k \mid \mathcal{X}_k). These probabilistic techniques are closely related to Approximate Bayesian Computation (ABC). In the context of particle filters, these ABC particle filtering techniques were introduced in 1998 by P. Del Moral, J. Jacod and P. Protter. They were further developed by P. Del Moral, A. Doucet and A. Jasra.
=== The nonlinear filtering equation ===
Bayes' rule for conditional probability gives:

{\displaystyle p(x_{0},\cdots ,x_{k}|y_{0},\cdots ,y_{k})={\frac {p(y_{0},\cdots ,y_{k}|x_{0},\cdots ,x_{k})p(x_{0},\cdots ,x_{k})}{p(y_{0},\cdots ,y_{k})}}}

where

{\displaystyle {\begin{aligned}p(y_{0},\cdots ,y_{k})&=\int p(y_{0},\cdots ,y_{k}|x_{0},\cdots ,x_{k})p(x_{0},\cdots ,x_{k})dx_{0}\cdots dx_{k}\\p(y_{0},\cdots ,y_{k}|x_{0},\cdots ,x_{k})&=\prod _{l=0}^{k}p(y_{l}|x_{l})\\p(x_{0},\cdots ,x_{k})&=p_{0}(x_{0})\prod _{l=1}^{k}p(x_{l}|x_{l-1})\end{aligned}}}

Particle filters are also an approximation, but with enough particles they can be much more accurate. The nonlinear filtering equation is given by the recursion (an updating step followed by a prediction step)

{\displaystyle {\begin{aligned}p(x_{k}|y_{0},\cdots ,y_{k})&={\frac {p(y_{k}|x_{k})p(x_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x'_{k})p(x'_{k}|y_{0},\cdots ,y_{k-1})dx'_{k}}}\\p(x_{k+1}|y_{0},\cdots ,y_{k})&=\int p(x_{k+1}|x_{k})p(x_{k}|y_{0},\cdots ,y_{k})dx_{k}\end{aligned}}\qquad {\text{(Eq. 1)}}}

with the convention {\displaystyle p(x_{0}|y_{0},\cdots ,y_{k-1})=p(x_{0})} for k = 0. The nonlinear filtering problem consists in computing these conditional distributions sequentially.
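On a finite state space the integrals in this recursion reduce to sums, which makes the two-step update/prediction structure easy to see in code. The transition matrix, emission table, and observations in the following sketch are illustrative assumptions.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # P[i, j] = p(x_{k+1} = j | x_k = i)
emission = np.array([[0.7, 0.3],    # emission[i, y] = p(y | x = i)
                     [0.1, 0.9]])
prior = np.array([0.5, 0.5])        # p(x_0)

def filter_recursion(observations):
    pred = prior.copy()             # p(x_0 | y_0, ..., y_{-1}) := p(x_0)
    for y in observations:
        post = emission[:, y] * pred
        post /= post.sum()          # updating step: Bayes' rule
        pred = post @ P             # prediction step: sum over x_k
    return post

print(filter_recursion([0, 1, 1, 0]))
```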
=== Feynman-Kac formulation ===
We fix a time horizon n and a sequence of observations {\displaystyle Y_{0}=y_{0},\cdots ,Y_{n}=y_{n}}, and for each k = 0, ..., n we set:

{\displaystyle G_{k}(x_{k})=p(y_{k}|x_{k}).}
In this notation, for any bounded function F on the set of trajectories of {\displaystyle X_{k}} from the origin k = 0 up to time k = n, we have the Feynman-Kac formula

{\displaystyle {\begin{aligned}\int F(x_{0},\cdots ,x_{n})p(x_{0},\cdots ,x_{n}|y_{0},\cdots ,y_{n})dx_{0}\cdots dx_{n}&={\frac {\int F(x_{0},\cdots ,x_{n})\left\{\prod \limits _{k=0}^{n}p(y_{k}|x_{k})\right\}p(x_{0},\cdots ,x_{n})dx_{0}\cdots dx_{n}}{\int \left\{\prod \limits _{k=0}^{n}p(y_{k}|x_{k})\right\}p(x_{0},\cdots ,x_{n})dx_{0}\cdots dx_{n}}}\\&={\frac {E\left(F(X_{0},\cdots ,X_{n})\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}{E\left(\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}}\end{aligned}}}
Feynman-Kac path integration models arise in a variety of scientific disciplines, including computational physics, biology, information theory and computer science. Their interpretations depend on the application domain. For instance, if we choose the indicator function {\displaystyle G_{n}(x_{n})=1_{A}(x_{n})} of some subset of the state space, they represent the conditional distribution of a Markov chain given it stays in a given tube; that is, we have:

{\displaystyle E\left(F(X_{0},\cdots ,X_{n})|X_{0}\in A,\cdots ,X_{n}\in A\right)={\frac {E\left(F(X_{0},\cdots ,X_{n})\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}{E\left(\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}}}

and

{\displaystyle P\left(X_{0}\in A,\cdots ,X_{n}\in A\right)=E\left(\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}

as soon as the normalizing constant is strictly positive.
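A crude plain Monte Carlo estimate of such a tube probability can be formed by simulating independent chain trajectories and averaging the product of indicator functions; the Gaussian random walk and the set A = [-1, 1] below are illustrative assumptions. Plain Monte Carlo degenerates as n grows, which is precisely the problem the particle methods of the next section address.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_samples = 10, 100_000

x = 0.5 * rng.normal(size=n_samples)          # X_0 (assumed initial law)
inside = (np.abs(x) <= 1.0)                   # 1_A(X_0)
for _ in range(n):
    x = x + 0.3 * rng.normal(size=n_samples)  # random-walk transition (assumed)
    inside &= (np.abs(x) <= 1.0)              # accumulate prod_k 1_A(X_k)

print(inside.mean())  # estimates P(X_0 in A, ..., X_n in A) = E(prod_k G_k(X_k))
```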
== Particle filters ==
=== A Genetic type particle algorithm ===
Initially, such an algorithm starts with N independent random variables {\displaystyle \left(\xi _{0}^{i}\right)_{1\leqslant i\leqslant N}} with common probability density {\displaystyle p(x_{0})}. The genetic algorithm selection-mutation transitions

{\displaystyle \xi _{k}:=\left(\xi _{k}^{i}\right)_{1\leqslant i\leqslant N}{\stackrel {\text{selection}}{\longrightarrow }}{\widehat {\xi }}_{k}:=\left({\widehat {\xi }}_{k}^{i}\right)_{1\leqslant i\leqslant N}{\stackrel {\text{mutation}}{\longrightarrow }}\xi _{k+1}:=\left(\xi _{k+1}^{i}\right)_{1\leqslant i\leqslant N}}
mimic/approximate the updating-prediction transitions of the optimal filter evolution (Eq. 1):

During the selection-updating transition we sample N (conditionally) independent random variables {\displaystyle {\widehat {\xi }}_{k}:=\left({\widehat {\xi }}_{k}^{i}\right)_{1\leqslant i\leqslant N}} with common (conditional) distribution

{\displaystyle \sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k}^{j})}}\delta _{\xi _{k}^{i}}(dx_{k})}

where {\displaystyle \delta _{a}} stands for the Dirac measure at a given state a.
During the mutation-prediction transition, from each selected particle {\displaystyle {\widehat {\xi }}_{k}^{i}} we sample independently a transition

{\displaystyle {\widehat {\xi }}_{k}^{i}\longrightarrow \xi _{k+1}^{i}\sim p(x_{k+1}|{\widehat {\xi }}_{k}^{i}),\qquad i=1,\cdots ,N.}
In the above displayed formulae {\displaystyle p(y_{k}|\xi _{k}^{i})} stands for the likelihood function {\displaystyle x_{k}\mapsto p(y_{k}|x_{k})} evaluated at {\displaystyle x_{k}=\xi _{k}^{i}}, and {\displaystyle p(x_{k+1}|{\widehat {\xi }}_{k}^{i})} stands for the conditional density {\displaystyle p(x_{k+1}|x_{k})} evaluated at {\displaystyle x_{k}={\widehat {\xi }}_{k}^{i}}.
At each time k, we have the particle approximations

{\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{{\widehat {\xi }}_{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }p(dx_{k}|y_{0},\cdots ,y_{k})\approx _{N\uparrow \infty }\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k}^{j})}}\delta _{\xi _{k}^{i}}(dx_{k})}

and

{\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }p(dx_{k}|y_{0},\cdots ,y_{k-1})}
In the genetic algorithms and evolutionary computing community, the mutation-selection Markov chain described above is often called the genetic algorithm with proportional selection. Several branching variants, including variants with random population sizes, have also been proposed in the literature.
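The selection-mutation scheme above can be written out compactly. The following sketch assumes a 1-d toy model (linear transition x ↦ 0.9x plus Gaussian noise, Gaussian observation noise); these model choices are assumptions made for illustration and are not part of the general algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter(ys, N=1000, sigma_w=1.0, sigma_v=1.0):
    xi = rng.normal(size=N)                   # xi_0^i ~ p(x_0) (assumed prior)
    means = []
    for y in ys:
        # selection: weight by the likelihood p(y_k | xi_k^i), then resample
        w = np.exp(-0.5 * (y - xi) ** 2 / sigma_v**2)
        w /= w.sum()
        xi_hat = rng.choice(xi, size=N, p=w)  # hat{xi}_k^i
        means.append(np.sum(w * xi))          # filtering mean estimate
        # mutation: xi_{k+1}^i ~ p(x_{k+1} | hat{xi}_k^i)
        xi = 0.9 * xi_hat + sigma_w * rng.normal(size=N)
    return np.array(means)

ys = np.cumsum(rng.normal(size=50))           # placeholder observation sequence
print(particle_filter(ys)[:5])
```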
=== Monte Carlo principles ===
Particle methods, like all sampling-based approaches (e.g., Markov chain Monte Carlo), generate a set of samples that approximate the filtering density {\displaystyle p(x_{k}|y_{0},\cdots ,y_{k}).}
For example, we may have N samples from the approximate posterior distribution of {\displaystyle X_{k}}, where the samples are labeled with superscripts as {\displaystyle {\widehat {\xi }}_{k}^{1},\cdots ,{\widehat {\xi }}_{k}^{N}.}
Then, expectations with respect to the filtering distribution are approximated by

{\displaystyle \int f(x_{k})p(x_{k}|y_{0},\cdots ,y_{k})dx_{k}\approx _{N\uparrow \infty }\int f(x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k})={\frac {1}{N}}\sum _{i=1}^{N}f\left({\widehat {\xi }}_{k}^{i}\right)\qquad {\text{(Eq. 2)}}}

with

{\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k})={\frac {1}{N}}\sum _{i=1}^{N}\delta _{{\widehat {\xi }}_{k}^{i}}(dx_{k})}

where {\displaystyle \delta _{a}} stands for the Dirac measure at a given state a. In the usual Monte Carlo way, the function f can yield all the moments, etc., of the distribution up to some approximation error. When the approximation equation (Eq. 2) is satisfied for any bounded function f we write
{\displaystyle p(dx_{k}|y_{0},\cdots ,y_{k}):=p(x_{k}|y_{0},\cdots ,y_{k})dx_{k}\approx _{N\uparrow \infty }{\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k})={\frac {1}{N}}\sum _{i=1}^{N}\delta _{{\widehat {\xi }}_{k}^{i}}(dx_{k})}
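As a minimal illustration of this Monte Carlo principle, moments of the filtering distribution are estimated by empirical averages over the particle locations; the stand-in particle sample below is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
xi_hat = rng.normal(loc=1.0, scale=0.5, size=10_000)  # stand-in particles

def expectation(f, particles):
    # approximates  int f(x_k) p(dx_k | y_0, ..., y_k)  by (1/N) sum_i f(xi^i)
    return np.mean(f(particles))

print(expectation(lambda x: x, xi_hat))       # first moment
print(expectation(lambda x: x**2, xi_hat))    # second moment
```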
Particle filters can be interpreted as a genetic type particle algorithm evolving with mutation and selection transitions. We can keep track of the ancestral lines

{\displaystyle \left({\widehat {\xi }}_{0,k}^{i},{\widehat {\xi }}_{1,k}^{i},\cdots ,{\widehat {\xi }}_{k-1,k}^{i},{\widehat {\xi }}_{k,k}^{i}\right)}

of the particles {\displaystyle i=1,\cdots ,N}. The random states {\displaystyle {\widehat {\xi }}_{l,k}^{i}}, with the lower indices l=0,...,k, stand for the ancestors of the individual {\displaystyle {\widehat {\xi }}_{k,k}^{i}={\widehat {\xi }}_{k}^{i}} at level l=0,...,k. In this situation, we have the approximation formula

{\displaystyle \int F(x_{0},\cdots ,x_{k}){\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})={\frac {1}{N}}\sum _{i=1}^{N}F\left({\widehat {\xi }}_{0,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)\approx _{N\uparrow \infty }\int F(x_{0},\cdots ,x_{k})p(x_{0},\cdots ,x_{k}|y_{0},\cdots ,y_{k})dx_{0}\cdots dx_{k}\qquad {\text{(Eq. 3)}}}
with the empirical measure

{\displaystyle {\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\left({\widehat {\xi }}_{0,k}^{i},{\widehat {\xi }}_{1,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))}
Here F stands for any bounded function on the path space of the signal. In a more synthetic form (Eq. 3) is equivalent to

{\displaystyle {\begin{aligned}p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})&:=p(x_{0},\cdots ,x_{k}|y_{0},\cdots ,y_{k})\,dx_{0}\cdots dx_{k}\\&\approx _{N\uparrow \infty }{\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})\\&:={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\left({\widehat {\xi }}_{0,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))\end{aligned}}}
Particle filters can be interpreted in many different ways. From the probabilistic point of view they coincide with a mean-field particle interpretation of the nonlinear filtering equation. The updating-prediction transitions of the optimal filter evolution can also be interpreted as the classical genetic type selection-mutation transitions of individuals. The sequential importance resampling technique provides another interpretation of the filtering transitions coupling importance sampling with the bootstrap resampling step. Last, but not least, particle filters can be seen as an acceptance-rejection methodology equipped with a recycling mechanism.
=== Mean-field particle simulation ===
==== The general probabilistic principle ====
The nonlinear filtering evolution can be interpreted as a dynamical system in the set of probability measures of the form

{\displaystyle \eta _{n+1}=\Phi _{n+1}\left(\eta _{n}\right)}
where {\displaystyle \Phi _{n+1}} stands for some mapping from the set of probability distributions into itself. For instance, the evolution of the one-step optimal predictor

{\displaystyle \eta _{n}(dx_{n})=p(x_{n}|y_{0},\cdots ,y_{n-1})dx_{n}}

satisfies a nonlinear evolution starting with the probability distribution {\displaystyle \eta _{0}(dx_{0})=p(x_{0})dx_{0}}.
One of the simplest ways to approximate these probability measures is to start with N independent random variables {\displaystyle \left(\xi _{0}^{i}\right)_{1\leqslant i\leqslant N}} with common probability distribution {\displaystyle \eta _{0}(dx_{0})=p(x_{0})dx_{0}}. Suppose we have defined a sequence of N random variables {\displaystyle \left(\xi _{n}^{i}\right)_{1\leqslant i\leqslant N}} such that

{\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{n}^{i}}(dx_{n})\approx _{N\uparrow \infty }\eta _{n}(dx_{n})}
At the next step we sample N (conditionally) independent random variables {\displaystyle \xi _{n+1}:=\left(\xi _{n+1}^{i}\right)_{1\leqslant i\leqslant N}} with common law

{\displaystyle \Phi _{n+1}\left({\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{n}^{i}}\right)\approx _{N\uparrow \infty }\Phi _{n+1}\left(\eta _{n}\right)=\eta _{n+1}}
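For a concrete instance of one such mean-field sampling step, take Φ_{n+1} to be a Bayes-type update (weighting by a likelihood) followed by a Markov kernel move, as in the filtering example above. All model choices in this sketch are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_field_step(xi, y, a=0.9, sigma_w=1.0, sigma_v=1.0):
    # Sampling from Phi_{n+1}((1/N) sum_i delta_{xi_n^i}): weight the empirical
    # measure with the likelihood, resample, then move through the kernel.
    N = len(xi)
    g = np.exp(-0.5 * (y - xi) ** 2 / sigma_v**2)   # likelihood weights (assumed)
    g /= g.sum()
    ancestors = rng.choice(N, size=N, p=g)          # resample indices
    return a * xi[ancestors] + sigma_w * rng.normal(size=N)  # kernel move (assumed)

xi = rng.normal(size=1000)          # xi_n^i, current particle population
xi = mean_field_step(xi, y=0.5)     # xi_{n+1}^i
```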
==== A particle interpretation of the filtering equation ====
We illustrate this mean-field particle principle in the context of the evolution of the one-step optimal predictors

{\displaystyle p(x_{k+1}|y_{0},\cdots ,y_{k})=\int p(x_{k+1}|x'_{k}){\frac {p(y_{k}|x'_{k})p(x'_{k}|y_{0},\cdots ,y_{k-1})dx'_{k}}{\int p(y_{k}|x''_{k})p(x''_{k}|y_{0},\cdots ,y_{k-1})dx''_{k}}}\qquad {\text{(Eq. 4)}}}

For k = 0 we use the convention {\displaystyle p(x_{0}|y_{0},\cdots ,y_{-1}):=p(x_{0})}.
By the law of large numbers, we have

{\displaystyle {\widehat {p}}(dx_{0})={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{0}^{i}}(dx_{0})\approx _{N\uparrow \infty }p(x_{0})dx_{0}}
in the sense that

{\displaystyle \int f(x_{0}){\widehat {p}}(dx_{0})={\frac {1}{N}}\sum _{i=1}^{N}f(\xi _{0}^{i})\approx _{N\uparrow \infty }\int f(x_{0})p(x_{0})dx_{0}}

for any bounded function {\displaystyle f}.
We further assume that we have constructed a sequence of particles {\displaystyle \left(\xi _{k}^{i}\right)_{1\leqslant i\leqslant N}} at some rank k such that

{\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }~p(x_{k}~|~y_{0},\cdots ,y_{k-1})dx_{k}}
in the sense that for any bounded function {\displaystyle f} we have

{\displaystyle \int f(x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})={\frac {1}{N}}\sum _{i=1}^{N}f(\xi _{k}^{i})\approx _{N\uparrow \infty }\int f(x_{k})p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}}
In this situation, replacing {\displaystyle p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}} by the empirical measure {\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})} in the evolution equation of the one-step optimal filter stated in (Eq. 4) we find that

{\displaystyle p(x_{k+1}|y_{0},\cdots ,y_{k})\approx _{N\uparrow \infty }\int p(x_{k+1}|x'_{k}){\frac {p(y_{k}|x_{k}'){\widehat {p}}(dx'_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x''_{k}){\widehat {p}}(dx''_{k}|y_{0},\cdots ,y_{k-1})}}}
Notice that the right hand side in the above formula is a weighted probability mixture

{\displaystyle \int p(x_{k+1}|x'_{k}){\frac {p(y_{k}|x_{k}'){\widehat {p}}(dx'_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x''_{k}){\widehat {p}}(dx''_{k}|y_{0},\cdots ,y_{k-1})}}=\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k}^{j})}}p(x_{k+1}|\xi _{k}^{i})=:{\widehat {q}}(x_{k+1}|y_{0},\cdots ,y_{k})}
where {\displaystyle p(y_{k}|\xi _{k}^{i})} stands for the density {\displaystyle p(y_{k}|x_{k})} evaluated at {\displaystyle x_{k}=\xi _{k}^{i}}, and {\displaystyle p(x_{k+1}|\xi _{k}^{i})} stands for the density {\displaystyle p(x_{k+1}|x_{k})} evaluated at {\displaystyle x_{k}=\xi _{k}^{i}} for {\displaystyle i=1,\cdots ,N.}
Then, we sample N independent random variables {\displaystyle \left(\xi _{k+1}^{i}\right)_{1\leqslant i\leqslant N}} with common probability density {\displaystyle {\widehat {q}}(x_{k+1}|y_{0},\cdots ,y_{k})}
so that
{\displaystyle {\widehat {p}}(dx_{k+1}|y_{0},\cdots ,y_{k}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k+1}^{i}}(dx_{k+1})\approx _{N\uparrow \infty }{\widehat {q}}(x_{k+1}|y_{0},\cdots ,y_{k})dx_{k+1}\approx _{N\uparrow \infty }p(x_{k+1}|y_{0},\cdots ,y_{k})dx_{k+1}}
Iterating this procedure, we design a Markov chain such that

{\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }p(dx_{k}|y_{0},\cdots ,y_{k-1}):=p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}}
Notice that the optimal filter is approximated at each time step k using the Bayes' formulae

{\displaystyle p(dx_{k}|y_{0},\cdots ,y_{k})\approx _{N\uparrow \infty }{\frac {p(y_{k}|x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x'_{k}){\widehat {p}}(dx'_{k}|y_{0},\cdots ,y_{k-1})}}=\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k}^{j})}}~\delta _{\xi _{k}^{i}}(dx_{k})}
The terminology "mean-field approximation" comes from the fact that we replace at each time step the probability measure {\displaystyle p(dx_{k}|y_{0},\cdots ,y_{k-1})} by the empirical approximation {\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})}. The mean-field particle approximation of the filtering problem is far from unique; several strategies are developed in the literature.
=== Some convergence results ===
The analysis of the convergence of particle filters was started in 1996 and continued around 2000 in a book and a series of articles; more recent developments can be found in the literature. When the filtering equation is stable (in the sense that it corrects any erroneous initial condition), the bias and the variance of the particle estimates

{\displaystyle I_{k}(f):=\int f(x_{k})p(dx_{k}|y_{0},\cdots ,y_{k-1})\approx _{N\uparrow \infty }{\widehat {I}}_{k}(f):=\int f(x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})}
are controlled by the non-asymptotic uniform estimates

{\displaystyle \sup _{k\geqslant 0}\left\vert E\left({\widehat {I}}_{k}(f)\right)-I_{k}(f)\right\vert \leqslant {\frac {c_{1}}{N}}}

{\displaystyle \sup _{k\geqslant 0}E\left(\left[{\widehat {I}}_{k}(f)-I_{k}(f)\right]^{2}\right)\leqslant {\frac {c_{2}}{N}}}
for any function f bounded by 1, and for some finite constants {\displaystyle c_{1},c_{2}.} In addition, for any {\displaystyle x\geqslant 0}:
{\displaystyle \mathbf {P} \left(\left|{\widehat {I}}_{k}(f)-I_{k}(f)\right|\leqslant c_{1}{\frac {x}{N}}+c_{2}{\sqrt {\frac {x}{N}}}\;\land \;\sup _{0\leqslant k\leqslant n}\left|{\widehat {I}}_{k}(f)-I_{k}(f)\right|\leqslant c{\sqrt {\frac {x\log(n)}{N}}}\right)>1-e^{-x}}

for some finite constants {\displaystyle c_{1},c_{2}} related to the asymptotic bias and variance of the particle estimate, and some finite constant c. The same results are satisfied if we replace the one step optimal predictor by the optimal filter approximation.
== Genealogical trees and Unbiasedness properties ==
=== Genealogical tree based particle smoothing ===
Tracing back in time the ancestral lines

{\displaystyle \left({\widehat {\xi }}_{0,k}^{i},{\widehat {\xi }}_{1,k}^{i},\cdots ,{\widehat {\xi }}_{k-1,k}^{i},{\widehat {\xi }}_{k,k}^{i}\right),\quad \left(\xi _{0,k}^{i},\xi _{1,k}^{i},\cdots ,\xi _{k-1,k}^{i},\xi _{k,k}^{i}\right)}

of the individuals {\displaystyle {\widehat {\xi }}_{k}^{i}\left(={\widehat {\xi }}_{k,k}^{i}\right)} and {\displaystyle \xi _{k}^{i}\left(={\xi }_{k,k}^{i}\right)} at every time step k, we also have the particle approximations
{\displaystyle {\begin{aligned}{\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})&:={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\left({\widehat {\xi }}_{0,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))\\&\approx _{N\uparrow \infty }p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})\\&\approx _{N\uparrow \infty }\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k,k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k,k}^{j})}}\delta _{\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))\\&\ \\{\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})&:={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))\\&\approx _{N\uparrow \infty }p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})\\&:=p(x_{0},\cdots ,x_{k}|y_{0},\cdots ,y_{k-1})dx_{0}\cdots dx_{k}\end{aligned}}}
These empirical approximations are equivalent to the particle integral approximations

{\displaystyle {\begin{aligned}\int F(x_{0},\cdots ,x_{k}){\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})&:={\frac {1}{N}}\sum _{i=1}^{N}F\left({\widehat {\xi }}_{0,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)\\&\approx _{N\uparrow \infty }\int F(x_{0},\cdots ,x_{k})p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})\\&\approx _{N\uparrow \infty }\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k,k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k,k}^{j})}}F\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)\\&\ \\\int F(x_{0},\cdots ,x_{k}){\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})&:={\frac {1}{N}}\sum _{i=1}^{N}F\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)\\&\approx _{N\uparrow \infty }\int F(x_{0},\cdots ,x_{k})p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})\end{aligned}}}
for any bounded function F on the random trajectories of the signal. The evolution of the genealogical tree coincides with a mean-field particle interpretation of the evolution equations associated with the posterior densities of the signal trajectories. For more details on these path space models, we refer to the literature.
=== Unbiased particle estimates of likelihood functions ===
We use the product formula

{\displaystyle p(y_{0},\cdots ,y_{n})=\prod _{k=0}^{n}p(y_{k}|y_{0},\cdots ,y_{k-1})}
with

{\displaystyle p(y_{k}|y_{0},\cdots ,y_{k-1})=\int p(y_{k}|x_{k})p(dx_{k}|y_{0},\cdots ,y_{k-1})}
and the conventions {\displaystyle p(y_{0}|y_{0},\cdots ,y_{-1})=p(y_{0})} and {\displaystyle p(x_{0}|y_{0},\cdots ,y_{-1})=p(x_{0}),}
for k = 0. Replacing {\displaystyle p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}} by the empirical approximation

{\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }p(dx_{k}|y_{0},\cdots ,y_{k-1})}
in the above displayed formula, we design the following unbiased particle approximation of the likelihood function

{\displaystyle p(y_{0},\cdots ,y_{n})\approx _{N\uparrow \infty }{\widehat {p}}(y_{0},\cdots ,y_{n})=\prod _{k=0}^{n}{\widehat {p}}(y_{k}|y_{0},\cdots ,y_{k-1})}
with

{\displaystyle {\widehat {p}}(y_{k}|y_{0},\cdots ,y_{k-1})=\int p(y_{k}|x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})={\frac {1}{N}}\sum _{i=1}^{N}p(y_{k}|\xi _{k}^{i})}
where {\displaystyle p(y_{k}|\xi _{k}^{i})} stands for the density {\displaystyle p(y_{k}|x_{k})} evaluated at {\displaystyle x_{k}=\xi _{k}^{i}}. The design of this particle estimate and its unbiasedness property were proved in 1996; refined variance estimates can be found in the literature.
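In code, the estimate accumulates the per-step averages of the likelihood over the predicted particles. Note that the product estimate of p(y_0,…,y_n) is unbiased, while its logarithm (accumulated below for numerical stability) is not. The Gaussian model choices in this sketch are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_likelihood_estimate(ys, N=1000, a=0.9, sigma_w=1.0, sigma_v=1.0):
    xi = rng.normal(size=N)                 # xi_0^i ~ p(x_0) (assumed prior)
    log_lik = 0.0
    for y in ys:
        # g_i = p(y_k | xi_k^i), Gaussian observation density (assumed)
        g = np.exp(-0.5 * (y - xi) ** 2 / sigma_v**2) / np.sqrt(2 * np.pi * sigma_v**2)
        log_lik += np.log(g.mean())         # log hat{p}(y_k | y_0, ..., y_{k-1})
        w = g / g.sum()
        # resample and mutate to obtain the next predicted particles
        xi = a * xi[rng.choice(N, size=N, p=w)] + sigma_w * rng.normal(size=N)
    return log_lik

print(log_likelihood_estimate(rng.normal(size=30)))
```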
=== Backward particle smoothers ===
Using Bayes' rule, we have the formula

{\displaystyle p(x_{0},\cdots ,x_{n}|y_{0},\cdots ,y_{n-1})=p(x_{n}|y_{0},\cdots ,y_{n-1})p(x_{n-1}|x_{n},y_{0},\cdots ,y_{n-1})\cdots p(x_{1}|x_{2},y_{0},y_{1})p(x_{0}|x_{1},y_{0})}
Notice that

{\displaystyle {\begin{aligned}p(x_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))&\propto p(x_{k}|x_{k-1})p(x_{k-1}|(y_{0},\cdots ,y_{k-1}))\\p(x_{k-1}|(y_{0},\cdots ,y_{k-1}))&\propto p(y_{k-1}|x_{k-1})p(x_{k-1}|(y_{0},\cdots ,y_{k-2}))\end{aligned}}}
This implies that

{\displaystyle p(x_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))={\frac {p(y_{k-1}|x_{k-1})p(x_{k}|x_{k-1})p(x_{k-1}|y_{0},\cdots ,y_{k-2})}{\int p(y_{k-1}|x'_{k-1})p(x_{k}|x'_{k-1})p(x'_{k-1}|y_{0},\cdots ,y_{k-2})dx'_{k-1}}}}
Replacing the one-step optimal predictors {\displaystyle p(x_{k-1}|(y_{0},\cdots ,y_{k-2}))dx_{k-1}} by the particle empirical measures

{\displaystyle {\widehat {p}}(dx_{k-1}|(y_{0},\cdots ,y_{k-2}))={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k-1}^{i}}(dx_{k-1})\left(\approx _{N\uparrow \infty }p(dx_{k-1}|(y_{0},\cdots ,y_{k-2})):={p}(x_{k-1}|(y_{0},\cdots ,y_{k-2}))dx_{k-1}\right)}
we find that

{\displaystyle {\begin{aligned}p(dx_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))&\approx _{N\uparrow \infty }{\widehat {p}}(dx_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))\\&:={\frac {p(y_{k-1}|x_{k-1})p(x_{k}|x_{k-1}){\widehat {p}}(dx_{k-1}|y_{0},\cdots ,y_{k-2})}{\int p(y_{k-1}|x'_{k-1})~p(x_{k}|x'_{k-1}){\widehat {p}}(dx'_{k-1}|y_{0},\cdots ,y_{k-2})}}\\&=\sum _{i=1}^{N}{\frac {p(y_{k-1}|\xi _{k-1}^{i})p(x_{k}|\xi _{k-1}^{i})}{\sum _{j=1}^{N}p(y_{k-1}|\xi _{k-1}^{j})p(x_{k}|\xi _{k-1}^{j})}}\delta _{\xi _{k-1}^{i}}(dx_{k-1})\end{aligned}}}
We conclude that

{\displaystyle p(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))\approx _{N\uparrow \infty }{\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))}
with the backward particle approximation

{\displaystyle {\begin{aligned}{\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))={\widehat {p}}(dx_{n}|(y_{0},\cdots ,y_{n-1})){\widehat {p}}(dx_{n-1}|x_{n},(y_{0},\cdots ,y_{n-1}))\cdots {\widehat {p}}(dx_{1}|x_{2},(y_{0},y_{1})){\widehat {p}}(dx_{0}|x_{1},y_{0})\end{aligned}}}
The probability measure {\displaystyle {\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))} is the probability of the random paths of a Markov chain {\displaystyle \left(\mathbb {X} _{k,n}^{\flat }\right)_{0\leqslant k\leqslant n}} running backward in time from time k=n to time k=0, and evolving at each time step k in the state space associated with the population of particles {\displaystyle \xi _{k}^{i},i=1,\cdots ,N.}
Initially (at time k=n) the chain {\displaystyle \mathbb {X} _{n,n}^{\flat }} chooses randomly a state with the distribution

{\displaystyle {\widehat {p}}(dx_{n}|(y_{0},\cdots ,y_{n-1}))={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{n}^{i}}(dx_{n})}
From time k to time (k-1), the chain starting at some state {\displaystyle \mathbb {X} _{k,n}^{\flat }=\xi _{k}^{i}} for some {\displaystyle i=1,\cdots ,N} at time k moves at time (k-1) to a random state {\displaystyle \mathbb {X} _{k-1,n}^{\flat }} chosen with the discrete weighted probability

{\displaystyle {\widehat {p}}(dx_{k-1}|\xi _{k}^{i},(y_{0},\cdots ,y_{k-1}))=\sum _{j=1}^{N}{\frac {p(y_{k-1}|\xi _{k-1}^{j})p(\xi _{k}^{i}|\xi _{k-1}^{j})}{\sum _{l=1}^{N}p(y_{k-1}|\xi _{k-1}^{l})p(\xi _{k}^{i}|\xi _{k-1}^{l})}}~\delta _{\xi _{k-1}^{j}}(dx_{k-1})}
In the above displayed formula, {\displaystyle {\widehat {p}}(dx_{k-1}|\xi _{k}^{i},(y_{0},\cdots ,y_{k-1}))} stands for the conditional distribution {\displaystyle {\widehat {p}}(dx_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))} evaluated at {\displaystyle x_{k}=\xi _{k}^{i}}. In the same vein, {\displaystyle p(y_{k-1}|\xi _{k-1}^{j})} and {\displaystyle p(\xi _{k}^{i}|\xi _{k-1}^{j})} stand for the conditional densities {\displaystyle p(y_{k-1}|x_{k-1})} and {\displaystyle p(x_{k}|x_{k-1})} evaluated at {\displaystyle x_{k}=\xi _{k}^{i}} and {\displaystyle x_{k-1}=\xi _{k-1}^{j}.}
These models allow integration with respect to the densities {\displaystyle p((x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))} to be reduced to matrix operations involving the Markov transitions of the chain described above. For instance, for any function {\displaystyle f_{k}} we have the particle estimates
{\displaystyle {\begin{aligned}\int p(d(x_{0},\cdots ,x_{n})&|(y_{0},\cdots ,y_{n-1}))f_{k}(x_{k})\\&\approx _{N\uparrow \infty }\int {\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))f_{k}(x_{k})\\&=\int {\widehat {p}}(dx_{n}|(y_{0},\cdots ,y_{n-1})){\widehat {p}}(dx_{n-1}|x_{n},(y_{0},\cdots ,y_{n-1}))\cdots {\widehat {p}}(dx_{k}|x_{k+1},(y_{0},\cdots ,y_{k}))f_{k}(x_{k})\\&=\underbrace {\left[{\tfrac {1}{N}},\cdots ,{\tfrac {1}{N}}\right]} _{N{\text{ times}}}\mathbb {M} _{n-1}\cdots \mathbb {M} _{k}{\begin{bmatrix}f_{k}(\xi _{k}^{1})\\\vdots \\f_{k}(\xi _{k}^{N})\end{bmatrix}}\end{aligned}}}
where

{\displaystyle \mathbb {M} _{k}=(\mathbb {M} _{k}(i,j))_{1\leqslant i,j\leqslant N}:\qquad \mathbb {M} _{k}(i,j)={\frac {p(\xi _{k}^{i}|\xi _{k-1}^{j})~p(y_{k-1}|\xi _{k-1}^{j})}{\sum \limits _{l=1}^{N}p(\xi _{k}^{i}|\xi _{k-1}^{l})p(y_{k-1}|\xi _{k-1}^{l})}}}
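A sketch of how such a backward transition matrix can be assembled from two stored particle populations, assuming Gaussian transition and observation densities (illustrative assumptions, not part of the general construction):

```python
import numpy as np

def normal_pdf(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def backward_matrix(xi_prev, xi_curr, y_prev, a=0.9, sigma_w=1.0, sigma_v=1.0):
    # trans[i, j] = p(xi_k^i | xi_{k-1}^j); like[j] = p(y_{k-1} | xi_{k-1}^j)
    trans = normal_pdf(xi_curr[:, None], a * xi_prev[None, :], sigma_w)
    like = normal_pdf(y_prev, xi_prev, sigma_v)
    M = trans * like[None, :]
    return M / M.sum(axis=1, keepdims=True)   # each row is a probability vector

# one backward step: smoothed expectation of a function of the earlier particles
rng = np.random.default_rng(9)
xi_km1, xi_k = rng.normal(size=100), rng.normal(size=100)
M = backward_matrix(xi_km1, xi_k, y_prev=0.3)
f_prev = xi_km1**2
print(np.full(100, 1.0 / 100) @ M @ f_prev)
```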
This also shows that if

{\displaystyle {\overline {F}}(x_{0},\cdots ,x_{n}):={\frac {1}{n+1}}\sum _{k=0}^{n}f_{k}(x_{k})}
then

{\displaystyle {\begin{aligned}\int {\overline {F}}(x_{0},\cdots ,x_{n})p(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))&\approx _{N\uparrow \infty }\int {\overline {F}}(x_{0},\cdots ,x_{n}){\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))\\&={\frac {1}{n+1}}\sum _{k=0}^{n}\underbrace {\left[{\tfrac {1}{N}},\cdots ,{\tfrac {1}{N}}\right]} _{N{\text{ times}}}\mathbb {M} _{n-1}\mathbb {M} _{n-2}\cdots \mathbb {M} _{k}{\begin{bmatrix}f_{k}(\xi _{k}^{1})\\\vdots \\f_{k}(\xi _{k}^{N})\end{bmatrix}}\end{aligned}}}
=== Some convergence results ===
We shall assume that the filtering equation is stable, in the sense that it corrects any erroneous initial condition.
In this situation, the particle approximations of the likelihood functions are unbiased and the relative variance is controlled by
{\displaystyle E\left({\widehat {p}}(y_{0},\cdots ,y_{n})\right)=p(y_{0},\cdots ,y_{n}),\qquad E\left(\left[{\frac {{\widehat {p}}(y_{0},\cdots ,y_{n})}{p(y_{0},\cdots ,y_{n})}}-1\right]^{2}\right)\leqslant {\frac {cn}{N}},}
for some finite constant c. In addition, for any {\displaystyle x\geqslant 0}:
{\displaystyle \mathbf {P} \left(\left\vert {\frac {1}{n}}\log {{\widehat {p}}(y_{0},\cdots ,y_{n})}-{\frac {1}{n}}\log {p(y_{0},\cdots ,y_{n})}\right\vert \leqslant c_{1}{\frac {x}{N}}+c_{2}{\sqrt {\frac {x}{N}}}\right)>1-e^{-x}}
for some finite constants {\displaystyle c_{1},c_{2}} related to the asymptotic bias and variance of the particle estimate, and for some finite constant c.
The bias and the variance of the particle estimates based on the ancestral lines of the genealogical trees

{\displaystyle {\begin{aligned}I_{k}^{path}(F)&:=\int F(x_{0},\cdots ,x_{k})p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})\\&\approx _{N\uparrow \infty }{\widehat {I}}_{k}^{path}(F)\\&:=\int F(x_{0},\cdots ,x_{k}){\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})\\&={\frac {1}{N}}\sum _{i=1}^{N}F\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)\end{aligned}}}
are controlled by the non-asymptotic uniform estimates

{\displaystyle \left|E\left({\widehat {I}}_{k}^{path}(F)\right)-I_{k}^{path}(F)\right|\leqslant {\frac {c_{1}k}{N}},\qquad E\left(\left[{\widehat {I}}_{k}^{path}(F)-I_{k}^{path}(F)\right]^{2}\right)\leqslant {\frac {c_{2}k}{N}},}
for any function F bounded by 1, and for some finite constants {\displaystyle c_{1},c_{2}.} In addition, for any {\displaystyle x\geqslant 0}:
{\displaystyle \mathbf {P} \left(\left|{\widehat {I}}_{k}^{path}(F)-I_{k}^{path}(F)\right|\leqslant c_{1}{\frac {kx}{N}}+c_{2}{\sqrt {\frac {kx}{N}}}\;\land \;\sup _{0\leqslant k\leqslant n}\left|{\widehat {I}}_{k}^{path}(F)-I_{k}^{path}(F)\right|\leqslant c{\sqrt {\frac {xn\log(n)}{N}}}\right)>1-e^{-x}}
for some finite constants {\displaystyle c_{1},c_{2}} related to the asymptotic bias and variance of the particle estimate, and for some finite constant c. The same type of bias and variance estimates hold for the backward particle smoothers. For additive functionals of the form

{\displaystyle {\overline {F}}(x_{0},\cdots ,x_{n}):={\frac {1}{n+1}}\sum _{0\leqslant k\leqslant n}f_{k}(x_{k})}
with

{\displaystyle I_{n}^{path}({\overline {F}})\approx _{N\uparrow \infty }{\widehat {I}}_{n}^{\flat ,path}({\overline {F}}):=\int {\overline {F}}(x_{0},\cdots ,x_{n}){\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))}
with functions {\displaystyle f_{k}} bounded by 1, we have

{\displaystyle \sup _{n\geqslant 0}{\left\vert E\left({\widehat {I}}_{n}^{\flat ,path}({\overline {F}})\right)-I_{n}^{path}({\overline {F}})\right\vert }\leqslant {\frac {c_{1}}{N}}}
and

{\displaystyle E\left(\left[{\widehat {I}}_{n}^{\flat ,path}({\overline {F}})-I_{n}^{path}({\overline {F}})\right]^{2}\right)\leqslant {\frac {c_{2}}{nN}}+{\frac {c_{3}}{N^{2}}}}
for some finite constants {\displaystyle c_{1},c_{2},c_{3}.} More refined estimates, including exponentially small probabilities of error, are developed in the literature.
== Sequential Importance Resampling (SIR) ==
=== Monte Carlo filter and bootstrap filter ===
Sequential importance resampling (SIR), Monte Carlo filtering (Kitagawa 1993), the bootstrap filtering algorithm (Gordon et al. 1993) and single distribution resampling (Bejuri W.M.Y.B et al. 2017) are also commonly applied filtering algorithms, which approximate the filtering probability density {\displaystyle p(x_{k}|y_{0},\cdots ,y_{k})} by a weighted set of N samples

{\displaystyle \left\{\left(w_{k}^{(i)},x_{k}^{(i)}\right)\ :\ i\in \{1,\cdots ,N\}\right\}.}
The importance weights {\displaystyle w_{k}^{(i)}} are approximations to the relative posterior probabilities (or densities) of the samples such that

{\displaystyle \sum _{i=1}^{N}w_{k}^{(i)}=1.}
Sequential importance sampling (SIS) is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f can be approximated as a weighted average

{\displaystyle \int f(x_{k})p(x_{k}|y_{0},\dots ,y_{k})dx_{k}\approx \sum _{i=1}^{N}w_{k}^{(i)}f(x_{k}^{(i)}).}
For a finite set of samples, the algorithm performance depends on the choice of the proposal distribution {\displaystyle \pi (x_{k}|x_{0:k-1},y_{0:k}).}
The "optimal" proposal distribution is given as the target distribution
π
(
x
k
|
x
0
:
k
−
1
,
y
0
:
k
)
=
p
(
x
k
|
x
k
−
1
,
y
k
)
=
p
(
y
k
|
x
k
)
∫
p
(
y
k
|
x
k
)
p
(
x
k
|
x
k
−
1
)
d
x
k
p
(
x
k
|
x
k
−
1
)
.
{\displaystyle \pi (x_{k}|x_{0:k-1},y_{0:k})=p(x_{k}|x_{k-1},y_{k})={\frac {p(y_{k}|x_{k})}{\int p(y_{k}|x_{k})p(x_{k}|x_{k-1})dx_{k}}}~p(x_{k}|x_{k-1}).}
This particular choice of proposal transition was proposed by P. Del Moral in 1996 and 1998. When it is difficult to sample transitions according to the distribution {\displaystyle p(x_{k}|x_{k-1},y_{k})}, one natural strategy is to use the following particle approximation

{\displaystyle {\begin{aligned}{\frac {p(y_{k}|x_{k})}{\int p(y_{k}|x_{k})p(x_{k}|x_{k-1})dx_{k}}}p(x_{k}|x_{k-1})dx_{k}&\simeq _{N\uparrow \infty }{\frac {p(y_{k}|x_{k})}{\int p(y_{k}|x_{k}){\widehat {p}}(dx_{k}|x_{k-1})}}{\widehat {p}}(dx_{k}|x_{k-1})\\&=\sum _{i=1}^{N}{\frac {p(y_{k}|X_{k}^{i}(x_{k-1}))}{\sum _{j=1}^{N}p(y_{k}|X_{k}^{j}(x_{k-1}))}}\delta _{X_{k}^{i}(x_{k-1})}(dx_{k})\end{aligned}}}
with the empirical approximation

{\displaystyle {\widehat {p}}(dx_{k}|x_{k-1})={\frac {1}{N}}\sum _{i=1}^{N}\delta _{X_{k}^{i}(x_{k-1})}(dx_{k})~\simeq _{N\uparrow \infty }p(x_{k}|x_{k-1})dx_{k}}
associated with N (or any other large number of samples) independent random samples {\displaystyle X_{k}^{i}(x_{k-1}),i=1,\cdots ,N} with the conditional distribution of the random state {\displaystyle X_{k}} given {\displaystyle X_{k-1}=x_{k-1}}. The consistency of the resulting particle filter and other extensions of this approximation are developed in the literature. In the above display {\displaystyle \delta _{a}} stands for the Dirac measure at a given state a.
However, the transition prior probability distribution is often used as the importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations:

{\displaystyle \pi (x_{k}|x_{0:k-1},y_{0:k})=p(x_{k}|x_{k-1}).}
Sequential Importance Resampling (SIR) filters with the transition prior probability distribution as importance function are commonly known as the bootstrap filter and the condensation algorithm.
Resampling is used to avoid the problem of the degeneracy of the algorithm, that is, avoiding the situation where all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the proper choice of resampling method. The stratified resampling proposed by Kitagawa (1993) is optimal in terms of variance.
A single step of sequential importance resampling is as follows (a code sketch follows the steps):
1) For {\displaystyle i=1,\cdots ,N} draw samples from the proposal distribution

{\displaystyle x_{k}^{(i)}\sim \pi (x_{k}|x_{0:k-1}^{(i)},y_{0:k})}
2) For {\displaystyle i=1,\cdots ,N} update the importance weights up to a normalizing constant:

{\displaystyle {\hat {w}}_{k}^{(i)}=w_{k-1}^{(i)}{\frac {p(y_{k}|x_{k}^{(i)})p(x_{k}^{(i)}|x_{k-1}^{(i)})}{\pi (x_{k}^{(i)}|x_{0:k-1}^{(i)},y_{0:k})}}.}
Note that when we use the transition prior probability distribution as the importance function, {\displaystyle \pi (x_{k}^{(i)}|x_{0:k-1}^{(i)},y_{0:k})=p(x_{k}^{(i)}|x_{k-1}^{(i)}),} this simplifies to the following:

{\displaystyle {\hat {w}}_{k}^{(i)}=w_{k-1}^{(i)}p(y_{k}|x_{k}^{(i)}).}
3) For {\displaystyle i=1,\cdots ,N} compute the normalized importance weights:

{\displaystyle w_{k}^{(i)}={\frac {{\hat {w}}_{k}^{(i)}}{\sum _{j=1}^{N}{\hat {w}}_{k}^{(j)}}}}
4) Compute an estimate of the effective number of particles as

{\displaystyle {\hat {N}}_{\mathit {eff}}={\frac {1}{\sum _{i=1}^{N}\left(w_{k}^{(i)}\right)^{2}}}}
This criterion reflects the variance of the weights. Other criteria, including their rigorous analysis and central limit theorems, can be found in the literature.
5) If the effective number of particles is less than a given threshold {\displaystyle {\hat {N}}_{\mathit {eff}}<N_{thr}}, then perform resampling:
a) Draw N particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For {\displaystyle i=1,\cdots ,N} set {\displaystyle w_{k}^{(i)}=1/N.}
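The five steps above, with the transition prior as proposal and multinomial resampling, fit in a few lines of code; the Gaussian model densities and the resampling threshold in this sketch are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def sir_step(x, w, y, a=0.9, sigma_w=1.0, sigma_v=1.0, threshold=0.5):
    N = len(x)
    # 1) propose from the transition prior p(x_k | x_{k-1}) (assumed Gaussian)
    x = a * x + sigma_w * rng.normal(size=N)
    # 2-3) weight by the likelihood p(y_k | x_k) and normalize
    w = w * np.exp(-0.5 * (y - x) ** 2 / sigma_v**2)
    w /= w.sum()
    # 4) effective number of particles
    n_eff = 1.0 / np.sum(w**2)
    # 5) resample when n_eff falls below the threshold, then reset weights
    if n_eff < threshold * N:
        idx = rng.choice(N, size=N, p=w)
        x, w = x[idx], np.full(N, 1.0 / N)
    return x, w

x, w = rng.normal(size=1000), np.full(1000, 1e-3)
x, w = sir_step(x, w, y=0.7)
```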
The term "Sampling Importance Resampling" is also sometimes used when referring to SIR filters, but the term Importance Resampling is more accurate because the word "resampling" implies that the initial sampling has already been done.
=== Sequential importance sampling (SIS) ===
Sequential importance sampling (SIS) is the same as the SIR algorithm but without the resampling stage. This version often exhibits particle weight collapse, where all the probability gets concentrated on one or two particles, and the rest of the particle weights correspond to very small probability. The introduction of resampling alleviates this problem.
=== "Direct version" algorithm ===
The "direct version" algorithm is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single sample x at k from
p
x
k
|
y
1
:
k
(
x
|
y
1
:
k
)
{\displaystyle p_{x_{k}|y_{1:k}}(x|y_{1:k})}
:
1) Set n = 0 (this will count the number of particles generated so far)
2) Uniformly choose an index i from the range {\displaystyle \{1,...,N\}}
3) Generate a test {\displaystyle {\hat {x}}} from the distribution {\displaystyle p(x_{k}|x_{k-1})} with {\displaystyle x_{k-1}=x_{k-1|k-1}^{(i)}}
4) Generate the probability of {\displaystyle {\hat {y}}} using {\displaystyle {\hat {x}}} from {\displaystyle p(y_{k}|x_{k}),~{\mbox{with}}~x_{k}={\hat {x}}} where {\displaystyle y_{k}} is the measured value
5) Generate another uniform u from {\displaystyle [0,m_{k}]} where {\displaystyle m_{k}=\sup _{x_{k}}p(y_{k}|x_{k})}
6) Compare u and {\displaystyle p\left({\hat {y}}\right)}
6a) If u is larger then repeat from step 2
6b) If u is smaller then save {\displaystyle {\hat {x}}} as {\displaystyle x_{k|k}^{(i)}} and increment n
7) If n == N then quit
The goal is to generate N "particles" at k using only the particles from {\displaystyle k-1}. This requires that a Markov equation can be written (and computed) to generate a {\displaystyle x_{k}} based only upon {\displaystyle x_{k-1}}. This algorithm uses the composition of the N particles from {\displaystyle k-1} to generate a particle at k and repeats (steps 2–6) until N particles are generated at k (a code sketch follows).
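A sketch of this rejection scheme for a 1-d model with Gaussian transition and observation densities (illustrative assumptions); in this case m_k is available in closed form as the peak of the Gaussian observation density.

```python
import numpy as np

rng = np.random.default_rng(8)

def direct_step(x_prev, y, N, a=0.9, sigma_w=1.0, sigma_v=1.0):
    norm = sigma_v * np.sqrt(2 * np.pi)
    m_k = 1.0 / norm                                   # sup over x of p(y | x)
    out = []
    while len(out) < N:                                # step 7: stop at N accepted
        i = rng.integers(N)                            # step 2: uniform index
        x_hat = a * x_prev[i] + sigma_w * rng.normal() # step 3: propose from p(x_k|x_{k-1})
        p_y = np.exp(-0.5 * (y - x_hat) ** 2 / sigma_v**2) / norm  # step 4
        u = rng.uniform(0.0, m_k)                      # step 5
        if u < p_y:                                    # step 6b: accept
            out.append(x_hat)
    return np.array(out)

particles = direct_step(rng.normal(size=500), y=0.2, N=500)
```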
This can be more easily visualized if x is viewed as a two-dimensional array. One dimension is k and the other dimension is the particle number. For example, {\displaystyle x(k,i)} would be the ith particle at {\displaystyle k} and can also be written {\displaystyle x_{k}^{(i)}} (as done above in the algorithm). Step 3 generates a potential {\displaystyle x_{k}} based on a randomly chosen particle ({\displaystyle x_{k-1}^{(i)}}) at time {\displaystyle k-1} and rejects or accepts it in step 6. In other words, the {\displaystyle x_{k}} values are generated using the previously generated {\displaystyle x_{k-1}}.
== Applications ==
Particle filters and Feynman-Kac particle methodologies find application in several contexts, as an effective means of tackling noisy observations or strong nonlinearities, such as:
Bayesian inference, machine learning, risk analysis and rare event sampling
Bioinformatics
Computational science
Economics, financial mathematics and mathematical finance: particle filters can perform simulations which are needed to compute the high-dimensional and/or complex integrals related to problems such as dynamic stochastic general equilibrium models in macro-economics and option pricing
Engineering
Infectious disease epidemiology where they have been applied to a number of epidemic forecasting problems, for example predicting seasonal influenza epidemics
Fault detection and isolation: in observer-based schemes a particle filter can forecast expected sensor output, enabling fault isolation
Molecular chemistry and computational physics
Pharmacokinetics
Phylogenetics
Robotics, artificial intelligence: Monte Carlo localization is a de facto standard in mobile robot localization
Signal and image processing: visual localization, tracking, feature recognition
== Other particle filters ==
Auxiliary particle filter
Cost Reference particle filter
Exponential Natural Particle Filter
Feynman-Kac and mean-field particle methodologies
Gaussian particle filter
Gauss–Hermite particle filter
Hierarchical/Scalable particle filter
Nudged particle filter
Particle Markov-Chain Monte-Carlo, see e.g. pseudo-marginal Metropolis–Hastings algorithm.
Rao–Blackwellized particle filter
Regularized auxiliary particle filter
Rejection-sampling based optimal particle filter
Unscented particle filter
== See also ==
Ensemble Kalman filter
Generalized filtering
Genetic algorithm
Mean-field particle methods
Monte Carlo localization
Moving horizon estimation
Recursive Bayesian estimation
== References ==
== Bibliography ==
Del Moral, Pierre (1996). "Non Linear Filtering: Interacting Particle Solution" (PDF). Markov Processes and Related Fields. 2 (4): 555–580. Archived from the original (PDF) on 2016-03-04. Retrieved 2015-05-31.
Del Moral, Pierre (2004). Feynman-Kac formulae. Genealogical and interacting particle approximations. Springer. p. 575. "Series: Probability and Applications"
Del Moral, Pierre (2013). Mean field simulation for Monte Carlo integration. Chapman & Hall/CRC Press. p. 626. "Monographs on Statistics & Applied Probability"
Cappe, O.; Moulines, E.; Ryden, T. (2005). Inference in Hidden Markov Models. Springer.
Liu, J.S. (2001). Monte Carlo strategies in Scientific Computing. Springer.
Kong, A.; Liu, J.S.; Wong, W.H. (1994). "Sequential imputations and Bayesian missing data problems" (PDF). Journal of the American Statistical Association. 89 (425): 278–288. doi:10.1080/01621459.1994.10476469.
Liu, J.S.; Chen, R. (1995). "Blind deconvolution via sequential imputations" (PDF). Journal of the American Statistical Association. 90 (430): 567–576. doi:10.2307/2291068. JSTOR 2291068.
Ristic, B.; Arulampalam, S.; Gordon, N. (2004). Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House.
Doucet, A.; Johansen, A.M. (December 2008). "A tutorial on particle filtering and smoothing: fifteen years later" (PDF). Technical Report.
Doucet, A.; Godsill, S.; Andrieu, C. (2000). "On sequential Monte Carlo sampling methods for Bayesian filtering". Statistics and Computing. 10 (3): 197–208. doi:10.1023/A:1008935410038. S2CID 16288401.
Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. (2002). "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking". IEEE Transactions on Signal Processing. 50 (2): 174–188. Bibcode:2002ITSP...50..174A. CiteSeerX 10.1.1.471.8617. doi:10.1109/78.978374. S2CID 55577025.
Cappe, O.; Godsill, S.; Moulines, E. (2007). "An overview of existing methods and recent advances in sequential Monte Carlo". Proceedings of the IEEE. 95 (5): 899–924. doi:10.1109/JPROC.2007.893250. S2CID 3081664.
Kitagawa, G. (1996). "Monte carlo filter and smoother for non-Gaussian nonlinear state space models". Journal of Computational and Graphical Statistics. 5 (1): 1–25. doi:10.2307/1390750. JSTOR 1390750.
Kotecha, J.H.; Djuric, P. (2003). "Gaussian Particle filtering". IEEE Transactions on Signal Processing. 51 (10): 2592. Bibcode:2003ITSP...51.2592K. doi:10.1109/TSP.2003.816758.
Haug, A.J. (2005). "A Tutorial on Bayesian Estimation and Tracking Techniques Applicable to Nonlinear and Non-Gaussian Processes" (PDF). The MITRE Corporation, USA, Tech. Rep., Feb. Archived (PDF) from the original on December 22, 2021. Retrieved 2021-12-22.
Pitt, M.K.; Shephard, N. (1999). "Filtering Via Simulation: Auxiliary Particle Filters". Journal of the American Statistical Association. 94 (446): 590–591. doi:10.2307/2670179. JSTOR 2670179. Archived from the original on 2007-10-16. Retrieved 2008-05-06.
Gordon, N. J.; Salmond, D. J.; Smith, A. F. M. (1993). "Novel approach to nonlinear/non-Gaussian Bayesian state estimation". IEE Proceedings F - Radar and Signal Processing. 140 (2): 107–113. doi:10.1049/ip-f-2.1993.0015.
Vaswani, N.; Rathi, Y.; Yezzi, A.; Tannenbaum, A. (2007). "Tracking deforming objects using particle filtering for geometric active contours". IEEE Transactions on Pattern Analysis and Machine Intelligence. 29 (8): 1470–1475. doi:10.1109/tpami.2007.1081. PMC 3663080. PMID 17568149.
Radiation materials science is a subfield of materials science which studies the interaction of radiation with matter: a broad subject covering many forms of irradiation and of matter.
== Main aim of radiation material science ==
Some of the most profound effects of irradiation on materials occur in the core of nuclear power reactors, where atoms comprising the structural components are displaced numerous times over the course of their engineering lifetimes. The consequences of radiation to core components include changes in shape and volume by tens of percent, increases in hardness by factors of five or more, severe reduction in ductility and increased embrittlement, and susceptibility to environmentally induced cracking. For these structures to fulfill their purpose, a firm understanding of the effect of radiation on materials is required in order to account for irradiation effects in design, to mitigate their effect by changing operating conditions, or to serve as a guide for creating new, more radiation-tolerant materials that can better serve their purpose.
== Radiation ==
The types of radiation that can alter structural materials are neutron radiation, ion beams, electrons (beta particles), and gamma rays. All of these forms of radiation have the capability to displace atoms from their lattice sites, which is the fundamental process that drives the changes in structural metals. The inclusion of ions among the irradiating particles provides a tie-in to other fields and disciplines such as the use of accelerators for the transmutation of nuclear waste, or in the creation of new materials by ion implantation, ion beam mixing, plasma-assisted ion implantation, and ion beam-assisted deposition.
The effect of irradiation on materials is rooted in the initial event in which an energetic projectile strikes a target. While the event is made up of several steps or processes, the primary result is the displacement of an atom from its lattice site. Irradiation displaces an atom from its site, leaving a vacant site behind (a vacancy); the displaced atom eventually comes to rest between lattice sites, becoming an interstitial atom. The vacancy-interstitial pair is central to radiation effects in crystalline solids and is known as a Frenkel pair. Frenkel pairs and the other consequences of irradiation damage determine the physical effects of irradiation and, when stress is applied, its mechanical effects: phenomena such as swelling, growth, phase transformation, and segregation. In addition to atomic displacement, an energetic charged particle moving in a lattice also transfers energy to electrons in the system via the electronic stopping power. For high-energy particles, this energy transfer can also produce damage in non-metallic materials, such as ion tracks and fission tracks in minerals.
== Radiation damage ==
The radiation damage event is defined as the transfer of energy from an incident projectile to the solid and the resulting distribution of target atoms after completion of the event. This event is composed of several distinct processes:
The interaction of an energetic incident particle with a lattice atom
The transfer of kinetic energy to the lattice atom giving birth to a primary knock-on atom
The displacement of the atom from its lattice site
The passage of the displaced atom through the lattice and the accompanying creation of additional knock-on atoms
The production of a displacement cascade (collection of point defects created by the primary knock-on atom)
The termination of the primary knock-on atom as an interstitial
If the energy given to a lattice atom is above the threshold displacement energy, the result of a radiation damage event is the creation of a collection of point defects (vacancies and interstitials) and clusters of these defects in the crystal lattice.
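To make the role of the threshold displacement energy concrete, the standard Norgett-Robinson-Torrens (NRT) model estimates how many stable displacements a primary knock-on atom produces. This is a textbook formula rather than anything specified above, and the sketch treats the PKA damage energy as given (a full treatment would first subtract electronic energy losses):

```python
def nrt_displacements(T_dam: float, E_d: float = 40.0) -> float:
    """NRT estimate of atoms displaced by a PKA with damage energy T_dam [eV].

    Below the threshold E_d no stable defect forms; in a narrow window above
    it exactly one Frenkel pair survives; at higher energies the defect count
    grows linearly with the damage energy.
    """
    if T_dam < E_d:
        return 0.0
    if T_dam < 2.0 * E_d / 0.8:
        return 1.0
    return 0.8 * T_dam / (2.0 * E_d)

print(nrt_displacements(5e4))  # a 50 keV PKA at E_d = 40 eV -> 500.0 displacements
```

The 40 eV threshold is a typical order of magnitude for structural metals; actual values vary with the material and the crystallographic direction of the recoil.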
The essence of the quantification of radiation damage in solids is the number of displacements per unit volume per unit time, $R$:

$$R = N \int_{E_{\min}}^{E_{\max}} \int_{T_{\min}}^{T_{\max}} \phi(E_i)\,\sigma(E_i,T)\,\upsilon(T)\,\mathrm{d}T\,\mathrm{d}E_i,$$

where $N$ is the atom number density, $E_{\max}$ and $E_{\min}$ are the maximum and minimum energies of the incoming particles, $\phi(E_i)$ is the energy-dependent particle flux, $T_{\max}$ and $T_{\min}$ are the maximum and minimum energies transferred in a collision between a particle of energy $E_i$ and a lattice atom, $\sigma(E_i,T)$ is the cross section for the collision of a particle of energy $E_i$ that results in a transfer of energy $T$ to the struck atom, and $\upsilon(T)$ is the number of displacements per primary knock-on atom.

The two key variables in this equation are $\sigma(E_i,T)$ and $\upsilon(T)$. The term $\sigma(E_i,T)$ describes the transfer of energy from the incoming particle to the first atom it encounters in the target, the primary knock-on atom. The second quantity, $\upsilon(T)$, is the total number of displacements that the primary knock-on atom goes on to make in the solid. Taken together, they describe the total number of displacements caused by an incoming particle of energy $E_i$, and the integration over the incident spectrum accounts for the energy distribution of the incoming particles. The result is the total number of displacements in the target from a flux of particles with a known energy distribution.
In radiation materials science, the displacement damage in the alloy, measured in displacements per atom (dpa) in the solid, is a better representation of the effect of irradiation on materials properties than the fluence (the neutron fluence, conventionally quoted for neutrons above a threshold energy such as $1\,\mathrm{MeV}$).
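Once model forms are chosen for the flux, the differential cross section, and the damage function, the integral above can be evaluated numerically. The sketch below is illustrative only: the target properties, the $1/E$ flux spectrum, the flat (hard-sphere-like) energy-transfer cross section, and the Kinchin-Pease damage function are all assumptions chosen for demonstration, not values for any particular reactor.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule; avoids NumPy-version differences in np.trapz."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Illustrative model inputs; every value here is an assumption for demonstration.
N = 8.5e22               # atom number density [atoms/cm^3], roughly iron
E_d = 40.0               # threshold displacement energy [eV]
E_min, E_max = 1e5, 2e6  # incident-particle energy window [eV]

def phi(E):              # energy-dependent flux [n/(cm^2 s eV)]; toy 1/E spectrum
    return 1e14 / E

def T_max(E, Lam=0.069): # maximum energy transfer; Lam = 4mM/(m+M)^2, neutron on Fe
    return Lam * E

def sigma(E, T):         # differential cross section [cm^2/eV]; flat in T,
    return 3e-24 / T_max(E)  # normalized so the total cross section is ~3 barns

def nu(T):               # Kinchin-Pease displacements per primary knock-on atom
    T = np.asarray(T, dtype=float)
    return np.where(T < E_d, 0.0, np.where(T < 2 * E_d, 1.0, T / (2 * E_d)))

# Nested trapezoidal quadrature over the (E_i, T) domain, with T_min = E_d
E_grid = np.geomspace(E_min, E_max, 400)
inner = np.empty_like(E_grid)
for i, E in enumerate(E_grid):
    T_grid = np.linspace(E_d, T_max(E), 400)
    inner[i] = trapz(sigma(E, T_grid) * nu(T_grid), T_grid)

R = N * trapz(phi(E_grid) * inner, E_grid)  # displacements/(cm^3 s)
dpa_per_year = R / N * 3.15e7               # displacements per atom per year
print(f"R ~ {R:.2e} displacements/cm^3/s, ~{dpa_per_year:.1f} dpa/yr")
```

With these toy inputs the script lands near ten dpa per year, the right order of magnitude for structural material in a fast-reactor core, though the specific number carries no significance beyond the assumed inputs.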
See also Wigner effect.
== Radiation-resistant materials ==
To generate materials that meet the increasing demands of nuclear reactors to operate at higher efficiency or for longer lifetimes, materials must be designed with radiation resistance in mind. In particular, Generation IV nuclear reactors operate at higher temperatures and pressures than modern pressurized water reactors, which account for the vast majority of Western reactors. This leads to increased vulnerability to ordinary mechanical failure modes such as creep, as well as to radiation damage effects such as neutron-induced swelling and radiation-induced phase segregation. By accounting for radiation damage, reactor materials can withstand longer operating lifetimes. This allows reactors to be decommissioned after longer periods of time, improving their return on investment without compromising safety. It is of particular interest for the commercial viability of advanced and conceptual nuclear reactors, and the goal can be pursued by engineering resistance to displacement events.
=== Grain boundary engineering ===
Face-centered cubic metals, such as austenitic steels and Ni-based alloys, can benefit greatly from grain boundary engineering, which attempts to generate higher proportions of special grain boundaries, characterized by favorable orientations between grains. By increasing the population of low-energy boundaries without increasing grain size, the fracture behavior of these face-centered cubic metals can be altered to improve mechanical properties at a similar displacements-per-atom dose relative to alloys without engineered grain boundaries. This treatment in particular yields better resistance to stress corrosion cracking and oxidation.
=== Materials selection ===
By using advanced methods of materials selection, candidate materials can be judged on criteria such as neutron-absorption cross section. Selecting materials with minimal neutron absorption can greatly reduce the number of displacements per atom that occur over a reactor material's lifetime. This slows the radiation embrittlement process at its source: a material that interacts with the neutron flux less frequently suffers fewer atomic displacements. The effect on total damage can be substantial, as when comparing the zirconium used in modern reactor cores to stainless steel cores, whose absorption cross sections differ by an order of magnitude.
Example values of the thermal neutron absorption cross section for common structural elements are illustrated below.
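As a rough sketch of such screening, one might rank candidate elements by their thermal-neutron absorption cross sections. The figures below are approximate, rounded literature values for 2200 m/s neutrons, included purely for illustration; a real selection study would take exact values from an evaluated nuclear data library such as ENDF.

```python
# Approximate thermal (2200 m/s) neutron absorption cross sections, in barns.
# Rounded, illustrative values; verify against an evaluated nuclear data library.
thermal_absorption_barns = {
    "Zr": 0.19,  # low absorption: why zirconium alloys clad water-reactor fuel
    "Al": 0.23,
    "Fe": 2.6,
    "Cr": 3.1,
    "Ni": 4.5,
}

for element, sigma_a in sorted(thermal_absorption_barns.items(), key=lambda kv: kv[1]):
    print(f"{element:>2}: {sigma_a:5.2f} barn")
```

The order-of-magnitude gap between zirconium and the constituents of stainless steel (Fe, Cr, Ni) is the difference described above.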
=== Short range order (SRO) self-organization ===
For nickel-chromium and iron-chromium alloys, short range order can be designed on the nanoscale (<5 nm) to absorb the interstitials and vacancies generated by primary knock-on atom events. This yields materials that mitigate the swelling normally seen at high displacements-per-atom doses and keep the overall volume change below roughly ten percent. The mechanism is a metastable phase in constant, dynamic equilibrium with the surrounding material, characterized by an enthalpy of mixing that is effectively zero with respect to the main lattice. The phase transformation absorbs and disperses the point defects that would otherwise accumulate in a more rigid lattice. This extends the life of the alloy by making vacancy and interstitial accumulation less successful: constant neutron bombardment, in the form of displacement cascades, transforms the SRO phase locally while the SRO continually re-forms in the bulk solid solution.
== Resources ==
Was, Gary S. (2017). Fundamentals of Radiation Materials Science: Metals and Alloys (2nd ed.). New York: Springer.
Averback, R. S.; Diaz de la Rubia, T. (1998). "Displacement damage in irradiated metals and semiconductors". In H. Ehrenreich; F. Spaepen (eds.). Solid State Physics. Vol. 51. Academic Press. pp. 281–402.
R. Smith, ed. (1997). Atomic & ion collisions in solids and at surfaces: theory, simulation and applications. Cambridge University Press. ISBN 0-521-44022-X.
== References ==
== External links ==
Media related to Radiation material science at Wikimedia Commons | Wikipedia/Radiation_material_science |
A mission control center (MCC, sometimes called a flight control center or operations center) is a facility that manages space flights, usually from the point of launch until landing or the end of the mission. It is part of the ground segment of spacecraft operations. A staff of flight controllers and other support personnel monitor all aspects of the mission using telemetry, and send commands to the vehicle using ground stations. Personnel supporting the mission from an MCC can include representatives of the attitude control system, power, propulsion, thermal, attitude dynamics, orbital operations and other subsystem disciplines. The training for these missions usually falls under the responsibility of the flight controllers, typically including extensive rehearsals in the MCC.
== Government-operated Mission Control Centers ==
America
NASA Launch Control Center controls NASA launch missions prior to liftoff from facilities located at NASA's Kennedy Space Center on Merritt Island, Florida. Responsibility for the booster and spacecraft remains with the Launch Control Center until the booster has cleared the launch tower.
Christopher C. Kraft Jr. Mission Control Center (abbreviated MCC-H) assumes responsibility for crewed missions after liftoff. The facility is located in Houston, Texas, at the Lyndon B. Johnson Space Center. NASA's Mission Control Center in Houston also manages the U.S. portions of the International Space Station (ISS).
Mercury Control Center was located on the Cape Canaveral Air Force Station and was used during Project Mercury. One of its still standing buildings now serves as a makeshift bunker for the media if a rocket explodes near the ground.
Multi-Mission Operations Center at the Ames Research Center
The Space Flight Operations Facility is operated by the Jet Propulsion Laboratory (JPL) in Pasadena, California and manages all of NASA's uncrewed spacecraft outside Earth's orbit and several research probes within along with the Deep Space Network.
Space Telescope Operations Control Center (STOCC) is located at Goddard Space Flight Center in Greenbelt, Maryland and provides mission control for the Hubble Space Telescope.
Payload Operations and Integration Center at the Marshall Spaceflight Center in Huntsville, Alabama where science activities aboard the International Space Station are monitored around the clock.
The Multimission Operations Center at the Applied Physics Laboratory near Baltimore, Maryland controls spacecraft including the MESSENGER and New Horizons missions.
NOAA operates its constellation of satellites from the Satellite Operations Control Center (SOCC) at Suitland, Maryland and Command and Data Acquisition (CDA) facilities at Wallops, Virginia and Fairbanks, Alaska. Satellites controlled include JPSS and GOES.
The Canadian Space Agency Robotics Mission Control Centre in Longueuil, Quebec plans and conducts Canadarm and Dextre operations at the International Space Station.
The Centro de Operações Espaciais (COPE) is operated by the Brazilian Armed Forces to support government satellites in orbit. Two Space Operations Centers (COPEs) are part of its structure: the main center (COPE-P) in Brasília and the secondary center (COPE-S) in Rio de Janeiro.
Asia
Beijing Aerospace Command and Control Center is a command center for the Chinese space program which includes the Shenzhou missions. The building is inside a complex nicknamed Aerospace City. The city is located in a suburb northwest of Beijing.
The Master Control Facility of the Indian Space Research Organisation is located at Hassan, Karnataka, India.
The JEM Control Center and the HTV Control Center at the Tsukuba Space Center (TKSC) in Tsukuba, Japan, manage operations aboard JAXA's Kibo ISS research laboratory and the resupply flights of the H-II Transfer Vehicle. JAXA's satellite operations are also based here.
Europe
European Space Operations Centre (ESOC) is responsible for ESA's satellites and space probes. It is located in Darmstadt, Germany.
German Space Operations Center (GSOC) is responsible for DLR's satellites and other customers' missions. It is located in Oberpfaffenhofen near Munich, Germany.
The Columbus Control Centre (Col-CC) at the German Aerospace Center (DLR) in Oberpfaffenhofen, Germany. It is the mission control center for the European Columbus research laboratory at the International Space Station.
Europe's Galileo global navigation satellite system (GNSS) is operated by two Galileo Control Centres (GCC) situated in Oberpfaffenhofen, Germany and Fucino, Italy.
The French National Centre for Space Studies (CNES) ATV Control Centre (ATV-CC) is located at the Toulouse Space Centre (CST) in Toulouse, France. It is the mission control center for the European Automated Transfer Vehicles that regularly resupply the ISS.
The Rover Operations Control Centre (ROCC) is located in Turin, Italy. It will be the mission control center for the ExoMars rover Rosalind Franklin.
Russia
The Mission Control Center of the Russian Federal Space Agency (Russian: Центр управления полётами), also known by its acronym ЦУП ("TsUP"), is located in Korolyov, near the RKK Energia plant. It contains an active control room for the ISS. It also houses a memorial control room for Mir, whose display screens show the station's last few orbits before it burned up in the atmosphere.
Titov Main Test and Space Systems Control Centre, mission control center in Krasnoznamensk, Russia.
== Privately-operated Mission Control Centers ==
Axiom Space Mission Control Center (MCC-A) in Houston, Texas.
Boeing Satellite Development Center (SDC) Mission Control Center in El Segundo, California, US. In charge of several military satellites.
The Kongsberg Satellite Services (KSAT) space operations center in Tromsø, Norway performs command and control for 13 satellites.
Lockheed Martin A2100 Space Operations Center (ASOC) in Newtown, Pennsylvania, US. In charge of several military satellites.
Parsons Corporation operates the Parsons Space Operations Center (PSOC) in Colorado Springs, Colorado to support command and control for the NOAA POES and DARPA Blackjack satellite programs.
Satellite operator SES controls its fleet of more than 50 satellites from operations centers in both Princeton, New Jersey and Luxembourg.
Space Systems/Loral Mission Control Center in Palo Alto, California, US.
SpaceX Mission Control Center (MCC-X) in Hawthorne, California is the primary launch control facility for the company's Falcon rockets.
== See also ==
Control room
Ground segment
Launch status check
== References == | Wikipedia/Mission_control_center |
NASA's Christopher C. Kraft Jr. Mission Control Center (MCC-H, initially called Integrated Mission Control Center, or IMCC), also known by its radio callsign, Houston, is the facility at the Lyndon B. Johnson Space Center in Houston, Texas, that manages flight control for the United States human space program, currently involving astronauts aboard the International Space Station (ISS).
The center is in Building 30 at the Johnson Space Center and is named after Christopher C. Kraft Jr., a NASA engineer and manager who was instrumental in establishing the agency's Mission Control operation, and was the first Flight Director.
The MCC currently houses one operational control room in Building 30 from which flight controllers command, monitor, and plan operations for the ISS. This room has many computer and data-processing resources to monitor, command and communicate with the station. The ISS control room operates continuously. A second control room in the same building, which formerly hosted the Shuttle flight control team, can be set up for ISS operations should the need arise (e.g., during repairs or hardware upgrades in the main room), and also hosts training simulations.
== Cape Canaveral (1960–1965) ==
All Mercury–Redstone, Mercury-Atlas, the uncrewed Gemini 1 and Gemini 2, and crewed Gemini 3 missions were controlled by the Mission Control Center (called the Mercury Control Center through 1963) at Cape Canaveral Missile Test Annex, Florida. This facility was in the Engineering Support Building at the east end of Mission Control Road, about 0.5 mile (0.8 km) east of Phillips Parkway. Mercury and Gemini launches were conducted from separate blockhouses at the Cape.
The building, which was on the National Register of Historic Places, was demolished in May 2010 due to concerns about asbestos and the estimated $5 million cost of repairs after 40 years of exposure to salt air. It was formerly a stop on Kennedy Space Center Visitor Complex tours; in the late 1990s, the control room consoles were removed, refurbished, and relocated to a re-creation of the room in the Debus Center at the KSC Visitor Complex.
== Houston (1965–present) ==
=== Gemini and Apollo (1965–1975) ===
Located in Building 30 at the Johnson Space Center (known as the Manned Spacecraft Center until 1973), the Houston MCC was first used in June 1965 for Gemini 4. It housed two primary rooms known as Mission Operation Control Rooms (MOCR, pronounced "moh-ker"). These two rooms controlled all Gemini, Apollo, Skylab, and Space Shuttle flights up to 1996. Each consisted of a four-tier auditorium, dominated by a large map screen, which, with the exception of Apollo lunar flights, had a Mercator projection of the Earth, with locations of tracking stations, and a three-orbit "sine wave" track of the spacecraft in flight. Each MOCR tier was specialized, staffed by various controllers responsible for a specific spacecraft system.
MOCR 1, housed on the second floor of Building 30, was used for Apollo 5, Apollo 7, the Skylab and the Apollo-Soyuz Test Project (Saturn IB) missions.
==== Mission Operations Control Room 2 ====
MOCR 2 was used for all other Gemini and Apollo (Saturn V) flights (except Gemini 3) and was located on the third floor. As the flight control room for Apollo 11, the first crewed Moon landing, MOCR 2 was designated a National Historic Landmark in 1985. It was last used in 1992 as the flight control room for STS-53 and was subsequently converted back almost entirely to its Apollo-era configuration and preserved for historical purposes. Together with several support wings, it is now listed in the National Register of Historic Places as the "Apollo Mission Control Center". In January 2018, the first set of consoles in MOCR 2 were removed and sent to the Kansas Cosmosphere for archival cleaning, refurbishment, and restoration to Apollo-era configuration, for eventual display back in the control room. On July 1, 2019, the newly restored Apollo-era Mission Control was reopened to the public, after a two-year long effort to restore the room to its configuration as seen during the Apollo Moon landings. Period-appropriate accents were acquired, from cigarette packs and ashtrays to wallpaper and carpeting. The room is accessible via the tram tour at the nearby Space Center Houston visitors' center, but only from behind the glass in the restored Visitor's Gallery viewing room.
In July 2010, air-to-ground voice recordings and film footage shot in Mission Control during the Apollo 11 powered descent and landing were re-synchronized and released for the first time. This audio was used in creating an audio-visual presentation for the 2019 Mission Control restoration.
=== Space Shuttle (1981–2011) ===
When the Space Shuttle program began, the MOCRs were re-designated flight control rooms (FCR, pronounced "ficker"); and FCR 1 (formerly MOCR 1) became the first shuttle control room. FCR 2 was used mostly for classified Department of Defense shuttle flights, then was remodeled to its Apollo-era configuration. From the moment a Space Shuttle cleared its launch tower in Florida until it landed on Earth, it was in the hands of Mission Control. When a shuttle mission was underway, its control room was staffed around the clock, usually in three shifts.
In 1992, JSC began building an extension to Building 30. The new five-story section (30 South) went operational in 1998 and houses two flight control rooms, designated White and Blue. The White FCR was used in tandem with FCR 2 for seven Space Shuttle missions, STS-70 through STS-76, and handled all following shuttle flights through the end of the program. When not in use for the shuttle program, the White FCR was reconfigured as a backup for the ISS FCR from time to time as needed (such as during periods of construction or upgrades in the ISS FCR).
=== International Space Station (1998–present) ===
The newer section of Building 30 also houses the International Space Station Flight Control Room. The first ISS control room, originally named the Special Vehicles Operations Room (SVO), then the Blue FCR, was operational around the clock to support the ISS until the fall of 2006.
FCR 1, meanwhile, had its original consoles and tiered decking removed after STS-71, and was first converted to a "Life Sciences Center" for ISS payload control operations. After substantial remodelling, mainly with new technologies not available in 1998, ISS flight control moved into the totally revamped FCR 1 in October 2006, due to the growth of the ISS and the international cooperation required among national control centers around the world.
=== Other facilities ===
Other MCC facilities include the Training Flight Control Room, sometimes referred to as the Red FCR, a training area for flight controllers; a Life Sciences Control Room used to oversee various experiments; the Simulation Control Area (SCA), primarily used during shuttle astronaut and flight control training; and an Exploration Planning Operations Center, used to test new concepts for operations beyond low-Earth orbit. Additionally, there are multi-purpose support rooms (MPSRs) which are staffed by backup flight controllers, who analyze data and run simulations as well as provide information and advice for the flight controllers.
Building 30 was named for Kraft on April 14, 2011.
=== MCC-21 ===
From 2012 to 2014, the rooms used during the Shuttle program underwent upgrades in preparation for future human space flight activities. The ISS FCR 1, the White FCR, the Blue FCR, the SCA, and the MPSRs all had their consoles removed and replaced with modern hardware, in part to support the new operational concept of commercial companies having a presence in Mission Control. This project is known as Mission Control Center for the 21st Century, or MCC-21. The White FCR was officially completed and unveiled in April 2014. The modernized White FCR is used for flight controller training and occasionally for nominal ISS operations when FCR 1 is temporarily removed from service for repairs or upgrades.
=== Commercial Crew ===
In 2019, the first of the Commercial Crew vehicles to be controlled from Houston was launched: the Boeing CST-100 Starliner. The SpaceX Dragon 2 demo flight launched earlier in the year, but SpaceX Mission Control is at the company's headquarters in Hawthorne, California. The Boeing Starliner missions use a number of control centers across the United States, several of them in Houston in the Mission Control building:
Atlas V launch vehicle operations are controlled from the United Launch Alliance's Advanced Spaceflight Operations Center (formerly the Atlas Spaceflight Operations Center) (ASOC) at Cape Canaveral Space Force Station with support from a team in the Vehicle Ascent and Launch Operations Room (VALOR) at the company's headquarters in Denver, Colorado.
The Boeing Mission Control Center (BMCC) is at the Kennedy Space Center looking after Starliner during ascent, orbit, and entry.
Mission Control Center for the CST-100 is known as MCC-CST and operates out of the White FCR and Ops Suite 1 just outside the room.
The Guidance, Navigation and Control (GNC) and flight software team reside in the Blue FCR down the hall from Mission Control Center-CST.
== Console positions ==
=== Mercury Control Center (1960–1963) ===
During the early years at Cape Canaveral, the original MCC consisted only of three rows, as the Mercury capsule was simple in design and construction, with missions lasting no more than 35 hours.
The first row consisted of several controllers, the Booster Systems Engineer (BOOSTER), Flight Surgeon (SURGEON), capsule communicator (CAPCOM), Retrofire Officer (RETRO), Flight Dynamics Officer (FIDO), and Guidance Officer (GUIDO).
The BOOSTER controller, depending upon the type of rocket being used, was either an engineer from the Marshall Space Flight Center (for Mercury-Redstone flights) or an Air Force engineer (for Mercury-Atlas and later Gemini-Titan flights) assigned for that mission. The BOOSTER controller's job would last no more than six hours total and he would vacate his console after the booster was jettisoned.
The SURGEON controller, consisting of a flight surgeon (either a military or civilian physician), monitored the astronaut's vital signs during the flight, and if a medical need arose, could recommend treatment. They could also talk directly to the astronaut crew if there was a medical need that the astronauts needed to discuss.
The CAPCOM controller, a position filled by an astronaut, handled nominal air-to-ground communications between the MCC and the orbiting spacecraft; only the SURGEON or the flight director could speak to the crew directly, and then only in an emergency.
The RETRO, FIDO, and GUIDO controllers monitored spacecraft trajectory and handled course changes.
The second row also consisted of several controllers, the ENVIRONMENTAL, PROCEDURES, FLIGHT, SYSTEMS, and NETWORK. The ENVIRONMENTAL controller, later called EECOM, oversaw the consumption of spacecraft oxygen and monitored pressurization, while the SYSTEMS controller, later called EGIL, monitored all other spacecraft systems, including electrical consumption. The PROCEDURES controller, first held by Gene Kranz, handled the writing of all mission milestones, "GO/NO GO" decisions, and synchronized the MCC with the launch countdowns and the Eastern Test Range. The PROCEDURES controller also handled communications, via teletype, between the MCC and the worldwide network of tracking stations and ships.
The flight director, known as FLIGHT, was the ultimate supervisor of the Mission Control Center, making the final orbit entry and exit decisions and, in emergencies, the decision to abort the mission. During Mercury missions, this position was held by Christopher Kraft, with John Hodge, an Englishman who came to NASA after the cancellation of the Canadian Avro Arrow project, joining the flight director ranks for the 22-orbit Mercury 9, which required Kraft to divide Mission Control into two shifts. The flight director's console was also the only position in the Cape MCC to have a television monitor, allowing him to see the rocket lift off from the pad. The NETWORK controller, an Air Force officer, served as the "switchboard" between the MCC, the Goddard Space Flight Center in Greenbelt, Maryland (as on-site real-time computing did not yet exist), and the worldwide tracking station and ship network.
The back row, consisting primarily of NASA and Department of Defense (DOD) management, was the location of the operations director (a position held by Walt Williams); a general or flag officer who coordinated with the DOD on all search-and-rescue missions; and the PAO ("Shorty" Powers during Mercury), who provided minute-by-minute mission commentary for the news media and public.
In addition to the controllers in the Cape MCC, each of the crewed tracking stations and the Rose Knot Victor and Coastal Sentry Quebec tracking ships, had three controllers, a CAPCOM, SURGEON, and an engineer. Unlike the Cape CAPCOM, which was always staffed by an astronaut, the tracking station/tracking ship CAPCOMs were either a NASA engineer, or an astronaut, with the latter located at stations deemed "critical" by the flight director and operations director.
=== MOCR (1965–1998) ===
After the move from the Cape MCC to the Houston MCC in 1965, the new MOCRs, which were larger and more sophisticated than the single Cape MCC, consisted of four rows. The first row, later known as "the Trench" (a term coined by Apollo-era RETRO controller John Llewellyn; according to Flight Director Eugene Kranz, the row reminded Llewellyn of the firing range from his years as a USAF officer), was occupied by the BOOSTER, RETRO, FIDO, and GUIDO controllers. During Gemini, the BOOSTER position was handled by both an engineer from Martin Marietta and an astronaut, while all missions from Apollo 7 onward used engineers from the Marshall Space Flight Center.
The second row, after Project Gemini, consisted of the SURGEON, EECOM, and CAPCOM. The EECOM, which replaced the ENVIRONMENTAL controller and some of the SYSTEMS controller's functions, monitored the spacecraft's electrical and environmental systems. Like the CAPCOMs during Mercury, all CAPCOMs in the Houston MCC were astronauts.
On the other side of the aisle of the second row were controllers who monitored specific parts of Gemini, Apollo, Skylab, ASTP and Space Shuttle missions. During the Gemini program, the two Agena controllers monitored the Agena upper stage used as a docking target from Gemini 8 through Gemini 12. For the Apollo lunar flights, the TELMU and CONTROL controllers monitored the Lunar Module. During Skylab, the EGIL (pronounced "eagle") monitored Skylab's solar panels, while the EXPERIMENTS controller monitored experiments and the telescopes in the Apollo Telescope Mount. The PAYLOAD and EXPERIMENTS controllers monitored Space Shuttle operations. Another controller, the INCO, monitored the spacecraft's communications and instrumentation.
The third row consisted of the PAO, PROCEDURES, and the FAO (flight activities officer), who coordinated with the flight schedule. The AFD (assistant flight director) and the flight director were also located on the third row.
The fourth row, like the old Cape MCC's third row, was reserved for NASA management, including the director of the Johnson Space Center, the director of flight operations, the director of flight crew operations (chief astronaut), and the Department of Defense officer.
=== Blue FCR (1998–2006) ===
The Blue FCR, used primarily for ISS operations from 1998 to 2006, was arranged in five rows of three consoles, plus one in the rear right corner. From left to right, as viewed from the rear of the room:
The front row consisted of ADCO, THOR, and PHALCON.
The second row consisted of OSO, ECLSS (pronounced "eekliss"), and ROBO.
The third row consisted of ODIN; depending on phase of flight, either ACO (shuttle docked) or the CIO (Free-flight Operations); and OpsPlan.
The fourth row consisted of CATO, FLIGHT (flight director), and CAPCOM.
The fifth and last row consisted of GC; depending on the phase of flight, either RIO, EVA, VVO, or FDO (reboosts only); and SURGEON.
In the back, right corner, behind the Flight Surgeon, the PAO (Public Affairs Officer) was occasionally present on a separate single console.
=== White FCR (1998–2011) ===
The White FCR, which was used for Space Shuttle operations, was arranged in five rows. From left to right, as viewed from the rear of the room:
The front row (the "trench") had FDO (pronounced "fido"), responsible for orbital guidance and orbital changes, depending on the phase of flight; either Guidance, a specialist in the procedures of those two high-energy, fast-paced phases of flight or rendezvous, a specialist in orbital rendezvous procedures; and GC, the controller responsible for the computers and systems in MCC itself.
The second row had PROP, responsible for the propulsion system; GNC, responsible for systems that determine the spacecraft's attitude and issue commands to control it; MMACS (pronounced "max"), responsible for the mechanical systems on the space craft, such as the payload bay doors and landing gear; and EGIL (pronounced "eagle"), responsible for the fuel cells, electrical distribution and O2 & H2 supplies.
The third row had DPS, responsible for the computer systems; ACO or PAYLOADS (depending on whether or not the shuttle flight was an ISS assembly flight), responsible for all payload-related activities; FAO, responsible for the overall plan of activities for the entire flight; and EECOM, responsible for the management of environmental systems.
The fourth row had INCO, responsible for communications systems for uploading all systems commands to the vehicle; FLIGHT—the Flight Director, the person in charge of the flight; CAPCOM, an astronaut who is normally the only controller to talk to the astronauts on board; and PDRS, responsible for robot arm operations.
The back row had PAO (Public Affairs Officer), the "voice" of the MCC; MOD, a management representative; depending on the phase of flight, either RIO (for Mir flights), a Russian speaker who talked with the Russian MCC, known as ЦУП ("TsUP"), BOOSTER, responsible for the SRBs and the SSMEs during ascent, or EVA, responsible for space suit systems and EVA tasks; and finally, SURGEON.
=== FCR 1 (2006–present) ===
All US International Space Station operations are currently controlled from FCR 1, remodeled in 2006. This FCR abandoned the traditional tiered floor layout, with all rows instead at the same level. A few engineering specialists are in the center of the front row, with the public affairs commentator at the right end behind a low partition. The space station trajectory position was moved to the third row.
During early ISS operations, a scheme known as Gemini was used, which reduced staffing for real-time ISS support by consolidating six system disciplines into two positions. From these two "super-consoles", named Atlas and Titan, two people can do the work of up to eight other flight controllers during low-activity periods. One position, call sign TITAN (Telemetry, Information Transfer, and Attitude Navigation), was responsible for Communication & Tracking (CATO), Command & Data Handling (ODIN), and Motion Control Systems (ADCO). The other position, call sign ATLAS (Atmosphere, Thermal, Lighting and Articulation Specialist), was responsible for Thermal Control (THOR), Environmental Control & Life Support (ECLSS), and Electrical Power Systems (PHALCON). ATLAS was also responsible for monitoring Robotics (ROBO) and Mechanical Systems (OSO) heaters, as those consoles were not supported during the majority of Gemini shifts. While Gemini officially reflected the fact that two controllers act as "twins" during operations, the name was also an homage to the first missions (Project Gemini) controlled from that room. In addition, Titan was the type of booster rocket which launched the Gemini spacecraft and Atlas boosters launched Gemini-era Agena target vehicles (and several missions in Project Mercury).
In 2010, after ISS assembly was complete, the Gemini concept was retired and the six core disciplines were reduced to four. Those console positions are ETHOS (Environmental and Thermal Operating Systems), which consists of the ECLSS systems as well as the internal thermal control system formerly held by THOR; SPARTAN (Station Power, ARticulation, Thermal, and ANalysis), which consists of the electrical power and external thermal control systems; CRONUS (Communications RF Onboard Networks Utilization Specialist), a combination of the previous ODIN and CATO positions; and ADCO (Motion Control Systems).
== Backup Control Center and mission control facilities ==
In the event that the MCC-H is unavailable due to a hurricane or other foreseeable event, NASA has the option of quickly relocating to a temporary Backup Control Center (BCC) offsite. In 2017 for Hurricane Harvey, BCC was a hotel in Round Rock, Texas, about four hours away, while in 2020 for Hurricane Laura the BCC was at the Columbia Scientific Balloon Facility in Palestine, Texas, the designated backup site since 2017.
For more long-term use, NASA will relocate to a more robust but farther control center at the Huntsville Operations Support Center (HOSC) at Marshall Space Flight Center for ISS operations. In 2008 for Hurricane Ike, NASA activated Backup Control Centers in both Round Rock and Huntsville for specific duties.
Uncrewed US civilian satellites are controlled from the Goddard Space Flight Center in Maryland, while California's Jet Propulsion Laboratory manages robotic US space probes.
== See also ==
Launch Control Center in Florida
Launch status check
Mission control center
== Notes ==
== References ==
NASA Mission Control Fact Sheet
Space Shuttle News Reference Archived January 14, 2012, at the Wayback Machine
2006 ISS Flight Control Room
== External links ==
NASA's MCC page
MCC history
Detailed behind-the-scenes tour of MOCR 2, by Sy Liebergot.
Mission Control Center Overview
Historic American Engineering Record (HAER) documentation, filed under 2101 NASA Parkway, Houston, Harris County, TX:
HAER No. TX-109-C, "NASA Johnson Space Center, Apollo Mission Control (Building 30)", 22 photos, 4 measured drawings, 101 data pages, 3 photo caption pages
HAER No. TX-109-E, "NASA Johnson Space Center, Shuttle Mission Control Room (Building 30)", 26 photos, 4 measured drawings, 3 photo caption pages | Wikipedia/Christopher_C._Kraft_Jr._Mission_Control_Center |
A control room or operations room is a central space where a large physical facility or physically dispersed service can be monitored and controlled. It is often part of a larger command center.
== Overview ==
A control room's purpose is production control, and serves as a central space where a large physical facility or physically dispersed service can be monitored and controlled. Central control rooms came into general use in factories during the 1920s.
Control rooms for vital facilities are typically tightly secured and inaccessible to the general public. Multiple electronic displays and control panels are usually present, and there may also be a large wall-sized display area visible from all locations within the space. Some control rooms are themselves under continuous video surveillance and recording, for security and personnel accountability purposes. Many control rooms are occupied on a "24/7/365" basis, and may have multiple people on duty at all times (such as implementation of a "two-man rule"), to ensure continuous vigilance.
Other special-purpose control room spaces may be temporarily set up for special projects (such as an oceanographic exploration mission), and closed or dismantled once the project is concluded.
== Examples ==
Control rooms are typically found in installations such as:
Nuclear power plants and other power-generating stations
Electric power distribution companies and other utilities
Oil refineries and chemical plants
Airlines, where they are often referred to as operations control centers, and are responsible for flight operations dispatch, monitoring and support
Major transportation facilities such as bridges, tunnels, canals and rapid transit systems, where they are often staffed 24 hours a day to monitor and report on traffic congestion and to respond to emergencies
Military facilities (ranging in scale from a missile silo to NORAD), also referred to as operations rooms
NASA flight controllers work in several "flight control rooms" in mission control centers; affiliated facilities, such as the Jet Propulsion Laboratory have their own control rooms
Computerized data centers, often serving remote users in multiple time zones
Network operations centers
Large institutions such as universities, hospitals, major research facilities (such as particle accelerator laboratories), high security prisons, and theme parks
Emergency services including police, fire service and emergency medical service
Call centers, which may use them to monitor incoming and outgoing communications of customer service representatives, and to provide general oversight
Rail operations centers, such as the Union Pacific Harriman Dispatch Center, control rail operations over thousands of miles of railroad. Train dispatchers staff these facilities around the clock to manage efficient rail operations. In the UK, they are usually operated separately by each train operating company or by Network Rail, and include train crew and rolling stock resourcing.
== Special hazards and mitigation ==
Control rooms are usually equipped with elaborate fire suppression and security systems to safeguard their contents and occupants, and to ensure continued operation in emergencies. In hazardous environments, they may also be areas of refuge for personnel trapped on-site. They are typically crowded with equipment, mounted in multi-function rack mount cabinets to allow updating. The concentration of equipment often requires special electrical uninterruptible power supply (UPS) feeds and air conditioning.
Since the control equipment is intended to control other items in the surrounding facility, these often fire-resistance rated service rooms require many penetrations for cables. Due to routine equipment updates, these penetrations are subject to frequent changes, requiring maintenance programs to include vigilant firestop management for code compliance.
Due to the sensitive equipment in control room cabinets, it is useful to ensure the use of "T-rated" firestops that are massive and thick enough to resist heat transmission to the inside of the control room. It is also common to place control rooms under positive pressure ventilation to prevent smoke or toxic gases from entering. If used, gaseous fire suppressants must occupy the space that is to be protected for a minimum period of time to be sure a fire can be completely extinguished. Openings in such spaces must therefore be kept to a minimum to prevent the escape of the suppression gas.
A mobile control room may be provided for particularly high-risk facilities, such as a nuclear power station or a petrochemical facility. It can provide guaranteed life support so that safety control functions can continue during an emergency.
== Design ==
The design of a control room incorporates ergonomic and aesthetic features including optimum traffic flow, acoustics, illumination, and health and safety of the workers. Ergonomic considerations determine the placement of humans and equipment to ensure that operators can easily move into, out of, and around the control room, and can interact with each other without any hindrances during emergency situations; and to keep noise and other distractions to a minimum.
== In popular culture ==
Control room scenes dealing with crisis situations appear frequently in thriller novels and action films. In addition, a few documentaries have been filmed with scenes in real-life control room settings.
Fail-Safe - a 1964 Cold war thriller film directed by Sidney Lumet, based on the 1962 novel of the same name by Eugene Burdick and Harvey Wheeler. It portrays a fictional account of a Cold War nuclear crisis.
The Prisoner - a 1967 British television series (17 episodes), which follows a British former secret agent who is abducted and held prisoner in a mysterious coastal village resort where his captors try to find out why he abruptly resigned from his job.
The Taking of Pelham One Two Three - a 1974 American thriller film directed by Joseph Sargent, produced by Edgar J. Scherick, and starring Walter Matthau, Robert Shaw, Martin Balsam and Héctor Elizondo. Peter Stone adapted the screenplay, from the 1973 novel of the same name by Morton Freedgood (under the pen name John Godey) about a group of criminals taking hostage for ransom the passengers of a busy New York City Subway car.
The China Syndrome - a 1979 American thriller film that tells the story of a television reporter and her cameraman who discover safety coverups at a nuclear power plant. It stars Jane Fonda, Jack Lemmon and Michael Douglas, with Douglas also serving as the film's producer.
GoldenEye - a 1995 spy film, and 17th in the James Bond franchise, features 2 control rooms used for Command and control of a fictitious satellite based weapon, the original control room belonging to the USSR and a replica built by the Janus Crime Syndicate who have taken possession of the satellite for nefarious purposes. The latter also featured as a playable level in the videogame of the same name for the Nintendo 64.
Minority Report - a 2002 American neo-noir science fiction thriller film directed by Steven Spielberg, and loosely based on the short story of the same name by Philip K. Dick. It is set primarily in Washington DC, and Northern Virginia in the year 2054, where "PreCrime", a specialized police department, apprehends criminals based on foreknowledge provided by three psychics called "precogs".
Control Room - a 2004 documentary film about Al Jazeera and its relations with the US Central Command (CENTCOM), as well as the other news organizations that covered the 2003 invasion of Iraq.
== Image gallery ==
== See also ==
== References ==
== External links ==
Lindsay Bjerregaard (Dec 6, 2017). "Managing System Operations At JetBlue". MRO network. Penton. The airline tries to anticipate operational needs and stay ahead of potential disruptions as fleet expansion continues.
Control Room Design
US Army INSCOM Information Dominance Center | Wikipedia/Control_room |
Morehead Planetarium and Science Center is located on the campus of the University of North Carolina at Chapel Hill. As a unit of the university, Morehead receives about one-third of its funding through state sources, one-third through ticket and gift sales, and one-third through gifts and grants.
First opened in 1949, the planetarium's dome theater was used to train astronauts of the Mercury, Gemini, Apollo, Skylab, and Apollo-Soyuz missions in celestial navigation. Eleven of the twelve astronauts who walked on the Moon trained at Morehead Planetarium. Until the late 1990s, it contained one of the largest working Copernican orreries in the world. The facility was donated to the university by alumnus John Motley Morehead III, who invested more than $3 million in it.
== History ==
Morehead Planetarium opened on May 10, 1949, after seventeen months of construction, and was officially dedicated in a ceremony held the same day. The first planetarium in the South, it was the eighth built in the United States and the first built on a university campus anywhere in the world. Designed by Eggers & Higgins, the same architects who planned the Jefferson Memorial, its $3 million construction cost (approximately $40 million in today's dollars) made it the most expensive building in North Carolina at the time.
Since Zeiss, the German firm that produced planetarium projectors, had lost most of its factories during World War II, there were very few projectors available at the time. Morehead had to travel to Sweden, where he had previously served as American ambassador, to purchase a Zeiss Model II to serve as the heart of North Carolina's new planetarium.
Let There Be Light was the planetarium's first show.
=== NASA ===
From 1959 through 1975 every astronaut in the Mercury, Gemini, Apollo, Skylab, and Apollo–Soyuz Test Project programs spent hours in celestial navigation training at the planetarium. Morehead technicians developed simplified replicas of flight modules and tools for use in the training, often from plywood or cardboard. A mockup simulating key parts of the Gemini capsule was constructed from plywood and mounted on a barber chair to enable changes in pitch and yaw. Several of these items are on display at the planetarium. That training may have helped save astronauts' lives on occasion. Astronauts aboard Apollo 12 called upon that training after their Saturn V rocket was hit by lightning twice during ascent, knocking spacecraft systems offline and requiring them to configure navigation systems based on fixes taken manually. Gordon Cooper used his training to make the most-accurate landing of Project Mercury after a power failure affected navigational systems. Astronauts enjoyed soft drinks, cookies and other snacks during their intense hours-long training session, leading planetarium employees to create the code name "cookie time" to refer to the training sessions. Occasionally, word of the sessions leaked out and noted clothing designer and Chapel Hill native Alexander Julian recalls meeting Mercury Astronauts during a visit to the planetarium while in junior high.
The first astronaut to train at Morehead, in March 1964, was Neil Armstrong. Armstrong visited again only months before the 1969 launch of Apollo 11, spending a total of 20 days at Morehead over 11 training sessions, more than any other astronaut. Astronauts commented that the "large dome" was "highly realistic", calling the facility "superb".
In all, the astronauts who trained at the planetarium were Buzz Aldrin, Joseph P. Allen, William Anders, Neil Armstrong, Charles Bassett, Alan Bean, Frank Borman, Vance D. Brand, John S. Bull, Scott Carpenter, Gerald P. Carr, Eugene Cernan, Roger B. Chaffee, Philip K. Chapman, Michael Collins, Pete Conrad, Gordon Cooper, Walter Cunningham, Charles Duke, Donn F. Eisele, Anthony W. England, Joe Engle, Ronald E. Evans, Theodore Freeman, Edward Givens, John Glenn, Richard F. Gordon Jr., Gus Grissom, Fred Haise, Karl Gordon Henize, James Irwin, Joseph P. Kerwin, William B. Lenoir, Don L. Lind, Anthony Llewellyn, Jack R. Lousma, Jim Lovell, Ken Mattingly, Bruce McCandless II, James McDivitt, Curt Michel, Edgar Mitchell, Story Musgrave, Brian O'Leary, Robert A. Parker, William R. Pogue, Stuart Roosa, Wally Schirra, Rusty Schweickart, David Scott, Elliot See, Alan Shepard, Deke Slayton, Thomas P. Stafford, Jack Swigert, William E. Thornton, Paul J. Weitz, Ed White, Clifton Williams, Alfred M. Worden, and John Young.
== Fulldome planetarium ==
Morehead's planetarium seats about 240 people, with a dome that is 68 feet (21 m) in diameter and 44 feet (13 m) tall. It currently has two different projection systems.
In February 2010, Morehead introduced its new fulldome digital video (FDV) projection system, the largest FDV installation in the southeastern United States. Funding for the new system was provided by GlaxoSmithKline, and the planetarium's star theater is now named the GlaxoSmithKline Fulldome Theater.
Morehead's historic star projector, a Carl Zeiss Model VI, was installed in 1969 and remained in limited use into 2011, primarily for field-trip groups. It displayed about 8,900 stars on the dome overhead; the analog planetarium shows were a combined effort of the Zeiss projector, slide projectors, and video projectors working together to form a multimedia experience. The projector was removed in August 2011, and planetarium officials hope to display at least a portion of it somewhere in the center.
== Exhibits ==
On February 23, 2007, Morehead opened its first interactive exhibit, Zoom In: Science at the Extremes. It features current science at the extremes of size and scale, including research done by UNC scientists.
The Ancient Carolinians opened on November 17, 2007. This exhibit explores the lives of the first people to live on the land now known as North Carolina, interpreting the use of 10,000-year-old artifacts from the Hardaway archaeological site. This exhibit is supported in part by the Alcoa Foundation, which donated the artifacts to the university. After January 1, 2009, the exhibit installation was removed so that The Ancient Carolinians could become Morehead's first traveling exhibit.
== Science stage ==
Morehead Planetarium and Science Center offers live, educator-led programs in its Science Stage. Science LIVE! shows present the foundations of science, on a variety of topics, through experiments and demonstrations that often involve volunteers from the audience.
== External programs ==
The DESTINY Traveling Science Learning Program, which was founded by UNC in 2000, became part of Morehead in 2006.
The PLANETS Portable Planetarium Program brings the fulldome planetarium experience to elementary schools that are too far from Morehead to plan a field trip to the center.
== Jupiter Fellowship ==
Each year, the Jupiter Committee (a volunteer group that assists in planning the Jupiter Ball fundraising gala for Morehead) awards the Jupiter Fellowship. This award goes to the student employee(s) who submits the best proposal for a Morehead program, activity or innovation. The fellowship funds the implementation of the proposal.
== Morehead Observatory ==
Morehead Observatory, located on the east end of the top floor of the Morehead building, houses a 24-inch (61 cm) Perkin-Elmer reflecting telescope operated by the UNC Physics and Astronomy Department. Faint object observing is still possible with narrow wavelength filters to block the city lights of Chapel Hill. The observatory supports research programs in bright star spectroscopy and optical counterparts of Gamma Ray Bursts.
== See also ==
North Carolina Science Festival
== References ==
== External links ==
Official website
More about the Morehead Planetarium | Wikipedia/Morehead_Planetarium_and_Science_Center |
The Science Mission Directorate (SMD) of the National Aeronautics and Space Administration (NASA) is one of the six directorates that constitute NASA. Its responsibility is to define and direct research into scientific questions of interest, sharing data acquired by NASA missions with the broader scientific community, sponsoring scientific research, and developing satellites and spacecraft in conjunction with other directorates and with partner organizations for scientific missions.
The Science Mission Directorate also sponsors research that both enables, and is enabled by, NASA's exploration activities. The SMD portfolio is contributing to NASA's achievement of the Vision for Space Exploration by striving to:
Understand the history of Mars and the formation of the Solar System. By understanding the formation of diverse terrestrial planets (with atmospheres) in the Solar System, researchers learn more about Earth's future and the most promising opportunities for habitation beyond our planet. For example, differences in the impacts of collisional processes on Earth, the Moon, and Mars can provide clues about differences in origin and evolution of each of these bodies.
Search for Earth-like planets and habitable environments around other stars. SMD pursues multiple research strategies with the goal of developing effective astronomically-detectable signatures of biological processes. The study of the Earth-Sun system may help researchers identify atmospheric biosignatures that distinguish Earth-like (and potentially habitable) planets around nearby stars. An understanding of the origin of life and the time evolution of the atmosphere on Earth may reveal likely signatures of life on extrasolar planets.
Explore the Solar System for scientific purposes while supporting safe robotic and human exploration of space. For example, large-scale coronal mass ejections from the Sun can cause potentially lethal consequences for improperly shielded human flight systems, as well as some types of robotic systems. SMD's pursuit of interdisciplinary scientific research focus areas will help predict potentially harmful conditions in space and protect NASA's robotic and human explorers.
== Leadership ==
Nicola Fox has been the Associate Administrator for the Science Mission Directorate since February 27, 2023. Recent Associate Administrators for the SMD include Edward J. Weiler (1998–2004, 2008–2011), Mary L. Cleave (2004–2005), Alan Stern (2007–2008), John M. Grunsfeld (2012–2016), and Thomas Zurbuchen (2016–2022). Stern resigned on 25 March 2008, effective 11 April, over disagreements with Administrator Michael D. Griffin.
Associate Administrator: Nicola Fox (Assistant: Tricia Johnson)
Acting Deputy Associate Administrator: Mark Clampin
Heliophysics Division Director: Joseph Westlake
Earth Science Division Director: Karen St. Germain (Assistant: Jacob Griffin)
Planetary Science Division Director: Louise M. Prockter
Astrophysics Division Director:
Biological and Physical Sciences Division Director: Lisa Carnell
Resource Management Division Director: Holly Degn
Joint Agency Satellite Division Director: John Gagosian
Science Engagement and Partnerships Division Director: Kristen J Erickson
== See also ==
Planetary Missions Program Office
Discovery Program
New Frontiers program
NASA large strategic science missions
Lunar Discovery and Exploration Program
== References ==
== External links ==
About the Science Mission Directorate | Wikipedia/Science_Mission_Directorate |
NASA's Solar Terrestrial Probes program (STP) is a series of missions focused on studying the Sun-Earth system. It is part of NASA's Heliophysics Science Division within the Science Mission Directorate.
== Objectives ==
Understand the fundamental physical processes of the complex space environment throughout the Solar System, which includes the flow of energy and charged material, known as plasma, as well as a dynamic system of magnetic and electric fields.
Understand how human society, technological systems, and the habitability of planets are affected by solar variability and planetary magnetic fields.
Develop the capability to predict the extreme and dynamic conditions in space in order to maximize the safety and productivity of human and robotic explorers.
== Missions ==
=== TIMED ===
TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics) is an orbiter mission dedicated to studying the dynamics of the Mesosphere and Lower Thermosphere (MLT) portion of the Earth's atmosphere. The mission was launched from Vandenberg Air Force Base in California on December 7, 2001, aboard a Delta II launch vehicle.
=== Hinode ===
Hinode, an ongoing collaboration with JAXA, is a mission to explore the magnetic fields of the Sun. It was launched on the final flight of the M-V-7 rocket from Uchinoura Space Center, Japan on September 22, 2006.
=== STEREO ===
STEREO (Solar Terrestrial Relations Observatory) is a solar observation mission. It consists of two nearly identical spacecraft, launched on October 26, 2006.
=== MMS ===
The Magnetospheric Multiscale Mission (MMS) is a mission to study the Earth's magnetosphere, using four identical spacecraft flying in a tetrahedral formation. The spacecraft were launched on March 13, 2015.
=== IMAP ===
IMAP (Interstellar Mapping and Acceleration Probe) is a heliosphere observation mission. Planned for launch in 2025, it will sample, analyze, and map particles streaming to Earth from the edges of interstellar space.
== References ==
== External links ==
NASA Goddard Space Flight Center - Solar Terrestrial Probes Program
NASA Science Mission Directorate - Solar Terrestrial Probes Program | Wikipedia/Solar_Terrestrial_Probes_program |
The Mars Science Laboratory and its rover, Curiosity, were launched from Earth on 26 November 2011. As of June 6, 2025, Curiosity has been in Gale Crater on the planet Mars for 4562 sols (4687 total days; 12 years, 304 days) since landing on 6 August 2012. (See Current status.)
== Prelaunch (2004–2011) ==
In April 2004, the United States National Aeronautics and Space Administration (NASA) called for proposals for the scientific experiments and instruments of the Mars Science Laboratory and rover mission. Launch was proposed for September 2009. By 14 December 2004, eight proposals had been selected, including instruments from Russia and Spain.
Testing of components also began in late 2004, including Aerojet's monopropellant engine with the ability to throttle from 15 to 100 percent thrust with a fixed propellant inlet pressure. By November 2008 most hardware and software development was complete, and testing continued. At this point, cost overruns were approximately $400 million. In December 2008, lift-off was delayed to November 2011 due to insufficient time for testing and integration.
Between 23–29 March 2009, the general public ranked nine finalist rover names (Adventure, Amelia, Journey, Perception, Pursuit, Sunrise, Vision, Wonder, and Curiosity) through a public poll on the NASA website. On 27 May 2009, the winning name was announced to be Curiosity. The name had been submitted in an essay contest by Clara Ma, a then sixth-grader from Kansas.
=== Landing site selection ===
At the first MSL Landing Site workshop, 33 potential landing sites were identified. By the second workshop in late 2007, the list had grown to almost 50 sites, and by the end of that workshop the list was reduced to six; in November 2008, project leaders at a third workshop reduced the list to four landing sites: Eberswalde Crater, Gale Crater, Holden Crater, and Mawrth Vallis.
A fourth landing site workshop was held in late September 2010, and the fifth and final workshop 16–18 May 2011. On 22 July 2011, it was announced that Gale Crater had been selected as the landing site of the Mars Science Laboratory mission.
== Launch (2011) ==
MSL was launched from Cape Canaveral Air Force Station Space Launch Complex 41 on 26 November 2011, at 10:02 EST (15:02 UTC) aboard an Atlas V 541 provided by United Launch Alliance. The first and second rocket stages, along with the rocket motors, were stacked on 9 October 2011, near the launch pad. The fairing containing the spacecraft was transported to the launch pad on 3 November 2011.
On 13 December 2011, the rover began monitoring space radiation to aid in planning for future crewed missions to Mars.
The interplanetary journey to Mars took more than eight months, during which the spacecraft performed four trajectory correction maneuvers: on 11 January, 26 March, 26 June, and 28 July 2012. Mission design had allowed for a maximum of six trajectory correction opportunities.
== Landing (2012) ==
Curiosity landed in Gale Crater at 05:17 UTC on 6 August 2012. Upon reaching Mars, the spacecraft executed a fully automated precision landing sequence. A cable cutter separated the cruise stage from the aeroshell, and the cruise stage was then diverted onto a trajectory for burn-up in the atmosphere. Landing was confirmed simultaneously by three monitoring Mars orbiters. Curiosity landed on target, only 2.4 km (1.5 mi) from the center of its landing ellipse. The coordinates of the landing site (named "Bradbury Landing") are: 4.5895°S 137.4417°E / -4.5895; 137.4417.
Some low-resolution Hazcam images were beamed to Earth by relay orbiters, confirming that the rover's wheels were deployed correctly and on the ground. Three hours later, the rover began to beam detailed data on its systems' status as well as on its entry, descent and landing experience. Aerial 3-D images of the landing site, including the Curiosity rover and its parachute, were later produced (HiRISE, 10 October 2012).
On 8 August 2012, Mission Control began upgrading the rover's dual computers by deleting the entry-descent-landing software, then uploading and installing the surface operation software; the switchover was completed by 15 August.
== Prime mission (2012 - September 2014) ==
On 15 August 2012, the rover began several days of instrument checks and mobility tests. The first laser testing of the ChemCam by Curiosity on Mars was performed on a rock, N165 ("Coronation" rock), near Bradbury Landing on 19 August 2012.
The science and operations teams identified at least six possible routes to the base of Mount Sharp, and estimated spending about a year studying the rocks and soil of the crater floor while Curiosity slowly made its way to the base of the mountain. The ChemCam team expected to take approximately one dozen compositional measurements of rocks per day.
Having completed its mobility tests, the rover began its first drive on 29 August 2012, toward a place called Glenelg about 400 m (1,300 ft) to the east. Glenelg is a location where three types of terrain intersect, and it was the mission's first major driving destination. The drive was expected to take up to two months, after which Curiosity would stay at Glenelg for about a month.
On the way, Curiosity studied a pyramidal rock dubbed "Jake Matijevic" after a mathematician turned rover engineer who played a critical role in the design of the six-wheeled rover and who died just days after Curiosity landed in August 2012.
The Jake rock measures about 25 cm (9.8 in) tall and 40 cm (16 in) wide. It is an igneous rock and may be a mugearite, a sodium-rich, oligoclase-bearing basaltic trachyandesite. Afterwards, on 30 September 2012, a fine-grained rock named "Bathurst Inlet" was examined by Curiosity's Mars Hand Lens Imager (MAHLI) and Alpha particle X-ray spectrometer (APXS). The rock was named after Bathurst Inlet, a deep inlet along the northern coast of the Canadian mainland. A nearby sand patch, named "Rocknest", was selected as the test target for the first use of the scoop on the arm of the Curiosity rover.
=== Evidence for ancient water ===
On 27 September 2012, NASA scientists announced that the Curiosity rover found evidence for an ancient streambed suggesting a "vigorous flow" of water on Mars.
On 7 October 2012, a mysterious "bright object" discovered in the sand at Rocknest drew scientific interest. Several close-up pictures were taken of the object, and preliminary interpretations by scientists suggested it was debris from the spacecraft. Further imaging of the nearby sand detected other "bright particles", which were subsequently thought to be native Martian material.
On 17 October 2012, at Rocknest, the first X-ray diffraction analysis of Martian soil was performed. The results revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the weathered basaltic soils of Hawaiian volcanoes. The sample was composed of dust distributed by global dust storms and local fine sand. So far, the materials Curiosity has analyzed are consistent with the initial ideas of deposits in Gale Crater recording a transition through time from a wet to dry environment.
On 22 November 2012, the Curiosity rover analyzed a rock named "Rocknest 3" with the APXS and then resumed traveling toward "Point Lake" overlook on its way to Glenelg Intrigue.
On 3 December 2012, NASA reported that Curiosity performed its first extensive soil analysis, revealing the presence of water molecules, sulfur and chlorine in the Martian soil. The presence of perchlorates in the sample seems highly likely. The presence of sulfate and sulfide is also likely because sulfur dioxide and hydrogen sulfide were detected. Small amounts of chloromethane, dichloromethane and trichloromethane were detected. The source of the carbon in these molecules is unclear. Possible sources include contamination of the instrument, organics in the sample and inorganic carbonates.
=== Evidence for ancient habitability ===
In February 2013, the rover used its drill for the first time.
In March 2013, NASA reported Curiosity found evidence that geochemical conditions in Gale Crater were once suitable for microbial life after analyzing the first drilled sample of Martian rock, "John Klein" rock at Yellowknife Bay in Gale Crater. The rover detected water, carbon dioxide, oxygen, sulfur dioxide and hydrogen sulfide. Chloromethane and dichloromethane were also detected. Related tests found results consistent with the presence of smectite clay minerals. In addition, sandstone beds associated with the Gillespie Lake Member of Yellowknife Bay seem similar to microbially induced sedimentary structures (MISS) found on Earth, according to one study.
=== Evidence for atmospheric loss ===
On 8 April 2013, NASA reported that much of the atmosphere of Mars had been lost, based on argon isotope ratio studies.
On 19 July 2013, NASA scientists published the results of a new analysis of the atmosphere of Mars, reporting a lack of methane around the landing site of the Curiosity rover. In addition, the scientists found evidence that Mars "has lost a good deal of its atmosphere over time", based on the isotopic compositions of atmospheric gases, particularly those of argon and carbon.
=== Other 2013 events ===
On 28 February 2013, NASA was forced to switch to the backup computer due to an issue with the then-active computer's flash memory, which resulted in the computer continuously rebooting in a loop. The backup computer was turned on in safe mode and converted to operational status on 19 March 2013.
On 18 March 2013, NASA reported evidence of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 cm (2.0 ft), in the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain.
Between 4 April and 1 May 2013, Curiosity operated autonomously due to the Martian solar conjunction with Earth. While Curiosity transmitted a beep to Earth each day and the Odyssey spacecraft continued to relay information from the rover, no commands were sent from mission control because of the possibility of data corruption from solar interference. Curiosity continued to perform stationary science at Yellowknife Bay for the duration of the conjunction.
On 5 June 2013, NASA announced that Curiosity would soon begin an 8 km (5.0 mi) journey from the Glenelg area to the base of Mount Sharp. The trip was expected to take nine months to a year, with stops along the way to study the local terrain.
On 16 July 2013, the Curiosity rover reached a milestone in its journey across Mars, having traveled 1 km (0.62 mi) since its landing in 2012; by 1 August 2013, the rover had traveled over one mile: 1.686 km (1.048 mi).
On 6 August 2013, NASA celebrated Curiosity's first year on Mars (6 August 2012 to 5 August 2013) by programming the rover to perform the "Happy Birthday" song to itself. NASA also released several videos (video-1, video-2) summarizing the rover's accomplishments over the year. Primarily, the mission found evidence of "ancient environments suitable for life" on Mars. The rover drove over one mile across the Martian terrain, transmitted more than 190 gigabits of data to Earth, including 70,000 images (36,700 full images and 35,000 thumbnails), and fired its laser more than 75,000 times at 2,000 targets.
On 27 August 2013, Curiosity used autonomous navigation (or "autonav", the ability of the rover to decide for itself how to drive safely) over unknown Martian ground for the first time.
On 19 September 2013, NASA scientists, on the basis of further measurements by Curiosity, reported no detection of atmospheric methane (a measured value of 0.18 ± 0.67 ppbv, corresponding to an upper limit of only 1.3 ppbv at the 95% confidence level) and, as a result, concluded that the probability of current methanogenic microbial activity on Mars is reduced.
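For readers checking the arithmetic, the quoted 1.3 ppbv is what a one-sided Gaussian 95% bound gives from the stated mean and uncertainty. The sketch below is illustrative only, assuming simple Gaussian statistics rather than the team's actual analysis pipeline.

```python
from scipy.stats import norm

mean_ppbv = 0.18   # reported mean methane abundance
sigma_ppbv = 0.67  # reported 1-sigma uncertainty

z95 = norm.ppf(0.95)  # one-sided 95% quantile, ~1.645
upper_limit = mean_ppbv + z95 * sigma_ppbv
print(f"95% upper limit: {upper_limit:.2f} ppbv")  # ~1.28 ppbv, matching the quoted 1.3 ppbv
```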
On 26 September 2013, NASA scientists reported the Mars Curiosity rover detected "abundant, easily accessible" water (1.5 to 3 weight percent) in soil samples at the Rocknest region of Aeolis Palus in Gale Crater. In addition, NASA reported that the Curiosity rover found two principal soil types: a fine-grained mafic type and a locally derived, coarse-grained felsic type. The mafic type, similar to other Martian soils and Martian dust, was associated with hydration of the amorphous phases of the soil. Also, perchlorates, the presence of which may make detection of life-related organic molecules difficult, were found at the Curiosity rover landing site (and earlier at the more polar site of the Phoenix lander) suggesting a "global distribution of these salts". NASA also reported that Jake M rock, a rock encountered by Curiosity on the way to Glenelg, was a mugearite and very similar to terrestrial mugearite rocks.
On 17 October 2013, NASA reported, based on analysis of argon in the Martian atmosphere, that certain meteorites found on Earth thought to be from Mars are confirmed to be from Mars.
On 13 November 2013, NASA announced the names of two features on Mars important to two active Mars exploration rovers, in honor of planetary scientist Bruce C. Murray (1931–2013): "Murray Buttes", an entryway the Curiosity rover would traverse on its way to Mount Sharp, and "Murray Ridge", an uplifted section of crater rim that the Opportunity rover was exploring.
On 25 November 2013, NASA reported that Curiosity has resumed full science operations, with no apparent loss of capability, after completing the diagnosis of an electrical problem first observed on 17 November. Apparently, an internal short in the rover's power source, the Multi-Mission Radioisotope Thermoelectric Generator, caused an unusual and intermittent decrease in a voltage indicator on the rover.
On 27 November 2013, an overview (titled, "The World of Mars") of current and proposed Mars exploration by John Grotzinger, chief scientist of the Curiosity rover mission, was published in the New York Times.
On 9 December 2013, NASA reported that the planet Mars had a large freshwater lake (which could have been a hospitable environment for microbial life) based on evidence from the Curiosity rover studying Aeolis Palus near Mount Sharp in Gale Crater.
On 9 December 2013, NASA researchers described, in a series of six articles in the journal Science, many new discoveries from the Curiosity rover. Possible organics were found that could not be explained by contamination. Although the organic carbon was probably from Mars, it can all be explained by dust and meteorites that have landed on the planet. Because much of the carbon was released at a relatively low temperature in Curiosity's Sample Analysis at Mars (SAM) instrument package, it probably did not come from carbonates in the sample. The carbon could be from organisms, but this has not been proven. This organic-bearing material was obtained by drilling 5 centimeters deep in a site called Yellowknife Bay into a rock called "Sheepbed mudstone". The samples were named John Klein and Cumberland. Microbes could be living on Mars by obtaining energy from chemical imbalances between minerals in a process called chemolithotrophy which means "eating rock." However, in this process only a very tiny amount of carbon is involved — much less than was found at Yellowknife Bay.
Using SAM's mass spectrometer, scientists measured isotopes of helium, neon, and argon that cosmic rays produce as they go through rock. The fewer of these isotopes they find, the more recently the rock has been exposed near the surface. The 4-billion-year-old lakebed rock drilled by Curiosity was uncovered between 30 million and 110 million years ago by winds which sandblasted away 2 meters of overlying rock. Next, they hope to find a site tens of millions of years younger by drilling close to an overhanging outcrop.
The absorbed dose and dose equivalent from galactic cosmic rays and solar energetic particles on the Martian surface were measured over ~300 days of observations during the then-current solar maximum. These measurements are necessary for planning human missions to the surface of Mars, for estimating microbial survival times of any possible extant or past life, and for determining how long potential organic biosignatures can be preserved. The study estimated that a 1-meter-deep drill would be necessary to access possibly viable radioresistant microbial cells. The actual absorbed dose measured by the Radiation Assessment Detector (RAD) is 76 mGy/yr at the surface. Based on these measurements, for a round-trip Mars surface mission with a 180-day cruise each way and 500 days on the Martian surface during this solar cycle, an astronaut would be exposed to a total mission dose equivalent of ~1.01 sievert. Exposure to 1 sievert is associated with a 5 percent increase in risk of developing fatal cancer. NASA's current lifetime limit of increased risk for its astronauts operating in low Earth orbit is 3 percent. Maximum shielding from galactic cosmic rays can be obtained with about 3 meters of Martian soil.
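The ~1.01 sievert figure can be roughly reproduced from the dose-equivalent rates RAD measured in cruise and on the surface. The rates below are approximate published values and should be treated as assumptions in this back-of-envelope sketch.

```python
# Back-of-envelope check of the ~1.01 Sv round-trip estimate.
cruise_rate = 1.84   # mSv/day, RAD measurement during the 2011-2012 cruise (approximate)
surface_rate = 0.64  # mSv/day, RAD measurement on the Martian surface (approximate)

cruise_days = 2 * 180  # 180-day transit each way
surface_days = 500

total_mSv = cruise_days * cruise_rate + surface_days * surface_rate
print(f"Total mission dose equivalent: {total_mSv / 1000:.2f} Sv")  # ~0.98 Sv, near the quoted ~1.01 Sv
```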
The samples examined were probably once mud that for millions to tens of millions of years could have hosted living organisms. This wet environment had neutral pH, low salinity, and variable redox states of both iron and sulfur species. These types of iron and sulfur could have been used by living organisms. C, H, O, S, N, and P were measured directly as key biogenic elements, and by inference, P is assumed to have been there as well. The two samples, John Klein and Cumberland, contain basaltic minerals, Ca-sulfates, Fe oxide/hydroxides, Fe-sulfides, amorphous material, and trioctahedral smectites (a type of clay). Basaltic minerals in the mudstone are similar to those in nearby aeolian deposits. However, the mudstone has far less Fe-forsterite plus magnetite, so Fe-forsterite (type of olivine) was probably altered to form smectite (a type of clay) and magnetite. A Late Noachian/Early Hesperian or younger age indicates that clay mineral formation on Mars extended beyond Noachian time; therefore, in this location neutral pH lasted longer than previously thought.
On 20 December 2013, NASA reported that Curiosity had successfully upgraded its software for the third time since landing and was operating with version 11. The new software was expected to give the rover better robotic-arm and autonomous driving abilities. Due to wheel wear, NASA also reported plans to drive more carefully over the rough terrain on the rover's route to Mount Sharp.
=== Search for ancient life ===
On 24 January 2014, NASA reported that studies by the Curiosity and Opportunity rovers would now search for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars became a primary NASA objective.
=== Arrival at Mount Sharp ===
On 11 September 2014 (Sol 746), Curiosity reached the slopes of Aeolis Mons (or Mount Sharp), the rover mission's long-term prime destination, where the rover was expected to learn more about the history of Mars. Curiosity had traveled an estimated linear distance of 6.9 km (4.3 mi) to the mountain slopes since leaving its "start" point in Yellowknife Bay on 4 July 2013.
=== Detection of organics ===
On 16 December 2014, NASA reported the Curiosity rover detected a "tenfold spike", likely localized, in the amount of methane in the Martian atmosphere. Sample measurements taken "a dozen times over 20 months" showed increases in late 2013 and early 2014, averaging "7 parts of methane per billion in the atmosphere." Before and after that, readings averaged around one-tenth that level. In addition, high levels of organic chemicals, particularly chlorobenzene, were detected in powder drilled from one of the rocks, named "Cumberland", analyzed by the Curiosity rover.
=== Other 2014 events ===
On 6 February 2014, the Curiosity rover, in order to reduce wear on its wheels by avoiding rougher terrain, successfully crossed the "Dingo Gap" sand dune and was then expected to travel a smoother route to Mount Sharp.
On 19 May 2014, scientists announced that numerous microbes, like Tersicoccus phoenicis, may be resistant to methods usually used in spacecraft assembly clean rooms. It is not known whether such resistant microbes could have withstood space travel and are present on the Curiosity rover now on Mars.
On 25 May 2014, Curiosity discovered an iron meteorite, and named it "Lebanon" (image).
On 3 June 2014, Curiosity observed the planet Mercury transiting the Sun, marking the first time a planetary transit has been observed from a celestial body besides Earth.
On 24 June 2014, Curiosity completed a Martian year—687 Earth days—after finding that Mars once had environmental conditions favorable for microbial life.
On 27 June 2014, Curiosity crossed the boundary line of its "3-sigma safe-to-land ellipse" into territory expected to be even more interesting, especially in terms of Martian geology and landscape (view from space).
On 12 July 2014, Curiosity imaged the first laser spark on Mars (related image; video (01:07).)
On 6 August 2014, Curiosity celebrated its second anniversary since landing on Mars in 2012.
On 11 September 2014, a panel of NASA scientists announced (video (01:25)) the arrival of Curiosity at Mount Sharp and discussed future rover plans.
== First extended mission (October 2014 - September 2016) ==
On 19 October 2014, the Curiosity rover viewed the flyby of Comet C/2013 A1.
On 8 December 2014, a panel of NASA scientists discussed (archive 62:03) the latest observations of Curiosity, including findings about how water may have helped shape the landscape of Mars and had a climate long ago that could have produced long-lasting lakes at many Martian locations.
On 16 December 2014, NASA reported detecting an unusual increase, then decrease, in the amounts of methane in the atmosphere of the planet Mars; in addition, organic chemicals were detected in powder drilled from a rock by the Curiosity rover. Also, based on deuterium to hydrogen ratio studies, much of the water at Gale Crater on Mars was found to have been lost during ancient times, before the lakebed in the crater was formed; afterwards, large amounts of water continued to be lost.
On 21 January 2015, NASA announced a collaborative effort with Microsoft that developed a software project called OnSight which allows scientists to perform virtual work on Mars based on data from the Curiosity rover.
On 6 March 2015, NASA reported performing tests on the rover to help uncover the reason for intermittent problems with the robotic arm used for rock drilling and analysis. Results of preliminary tests suggested the intermittent short-circuit problem may be related to the percussion mechanism of the drill. Further tests were planned to verify and address the problem.
On 24 March 2015, NASA reported the first detection of nitrogen released after heating surface sediments on the planet Mars. The nitrogen, in the form of nitric oxide, was detected by the SAM instrument on the Curiosity rover and can be used by living organisms. The discovery supports the notion that ancient Mars may have been habitable for life.
On 27 March 2015, NASA reported that Bradbury Landing, the mission's landing site, was fading from view in the two-and-a-half years since landing in 2012.
On 4 April 2015, NASA reported studies, based on measurements by the Sample Analysis at Mars (SAM) instrument on the Curiosity rover, of the Martian atmosphere using xenon and argon isotopes. Results provided support for a "vigorous" loss of atmosphere early in the history of Mars and were consistent with an atmospheric signature found in bits of atmosphere captured in some Martian meteorites found on Earth.
On 19 August 2015, NASA scientists reported that the Dynamic Albedo of Neutrons (DAN) instrument on the Curiosity rover detected an unusual hydrogen-rich area, at "Marias Pass", on Mars. The hydrogen found seemed related to water or hydroxyl ions in rocks within three feet beneath the rover, according to the scientists.
On 5 October 2015, possible recurrent slope lineae (wet brine flows) were reported on Mount Sharp near Curiosity. In addition, on the same date, NASA reported that an estimated 20,000 to 40,000 heat-resistant bacterial spores were on Curiosity at launch, and that as much as 1,000 times that number may have gone uncounted.
On 8 October 2015, NASA confirmed that lakes and streams existed in Gale crater 3.3–3.8 billion years ago, delivering sediments that built up the lower layers of Mount Sharp.
On 17 December 2015, NASA reported that as Curiosity climbed higher up Mount Sharp, the composition of the rocks was changing substantially. For example, rocks found higher up the mountain contained much higher levels of silica than the basaltic rocks found earlier. After further analysis, the silica-rich rocks on Mars were found to contain tridymite, a mineral that is not commonly found on Earth. Opal-A, another form of silica, was also found on Mars.
== Second extended mission (October 2016 - September 2019) ==
The second extended mission began on 1 October 2016. The rover explored a ridge known as the Murray Formation for most of the mission.
As of 3 October 2016, NASA summarized the findings of the mission, thus far, as follows: "The Curiosity mission has already achieved its main goal of determining whether the landing region ever offered environmental conditions that would have been favorable for microbial life, if Mars has ever hosted life. The mission found evidence of ancient rivers and lakes, with a chemical energy source and all of the chemical ingredients necessary for life as we know it." Plans for the next two years, up to September 2018, include further explorations of the uphill slopes of Mount Sharp, including a ridge rich in the mineral hematite and a region of clay-rich bedrock.
On 13 December 2016, NASA reported further evidence supporting habitability on Mars as the Curiosity rover climbed higher, studying younger layers, on Mount Sharp. Also reported, the very soluble element boron was detected for the first time on Mars. Since landing on Mars in August 2012, Curiosity has driven 15.0 km (9.3 mi) and climbed 165 m (541 ft) in elevation.
On 17 January 2017, NASA released an image of a rock slab, named "Old Soaker", which may contain mud cracks. Also, somewhat later, it released an animation of sand moving in a nearby area.
On 6 February 2017, NASA reported that rock samples analyzed by the rover have not revealed any significant carbonate. This poses a puzzle to researchers: the same rocks that indicate a lake existed also indicate there was very little carbon dioxide in the air to help keep the lake unfrozen.
On 27 February 2017, NASA presented the following mission overview: "During the first year after Curiosity's 2012 landing in Gale Crater, the mission fulfilled its main goal by finding that the region once offered environmental conditions favorable for microbial life. The conditions in long-lived ancient freshwater Martian lake environments included all of the key chemical elements needed for life as we know it, plus a chemical source of energy that is used by many microbes on Earth. The extended mission is investigating how and when the habitable ancient conditions evolved into conditions drier and less favorable for life."
From 3 to 7 May 2017, Curiosity used ChemCam to study what turned out to be manganese oxide deposits in the Sutton Island and Blunts Point layers of the Murray Formation. According to a 2024 paper, the deposits suggest that Earth-level amounts of oxygen were present in the very early Martian atmosphere, hinting at conditions potentially favorable for microbial life.
On 1 June 2017, NASA reported that the Curiosity rover provided evidence of an ancient lake in Gale crater on Mars that could have been favorable for microbial life; the ancient lake was stratified, with shallows rich in oxidants and depths poor in oxidants, particularly silica; the ancient lake provided many different types of microbe-friendly environments at the same time. NASA further reported that the Curiosity rover will continue to explore higher and younger layers of Mount Sharp in order to determine how the lake environment in ancient times on Mars became the drier environment in more modern times.
Between 22 July and 1 August 2017, few commands were sent from Earth to Mars because Mars was in conjunction with the Sun.
On 5 August 2017, NASA celebrated the fifth anniversary of the Curiosity rover mission landing, and related exploratory accomplishments, on the planet Mars. (Videos: Curiosity's First Five Years (02:07); Curiosity's POV: Five Years Driving (05:49); Curiosity's Discoveries About Gale Crater (02:54))
On 5 September 2017, scientists reported that the Curiosity rover detected boron, an essential ingredient for life on Earth, on the planet Mars. Such a finding, along with previous discoveries that water may have been present on ancient Mars, further supports the possible early habitability of Gale Crater on Mars.
On 13 September 2017, NASA reported that the Curiosity rover climbed an iron-oxide-bearing ridge called Vera Rubin Ridge (or Hematite Ridge) and will now start studying the numerous bright veins embedded in the various layers of the ridge, in order to provide more details about the history and habitability of ancient Mars.
On 30 September 2017, NASA reported that radiation levels on the surface of the planet Mars temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month.
On 17 October 2017, NASA announced the testing of its systems on Curiosity in an attempt to better resume drilling. The drilling system had stopped working reliably in December 2016.
On 2 January 2018, Curiosity captured images of rock shapes that may require further study in order to help better determine whether the shapes are biological or geological.
On 22 March 2018, Curiosity had spent 2000 sols (2054 days) on Mars, and was preparing to study a region of clay-bearing rocks.
In June 2018, a local dust storm arose near the Opportunity rover that had the potential to affect Curiosity. The first signs of the storm, 1,000 km (620 mi) from Opportunity, were discovered on 1 June 2018 in photographs from the Mars Color Imager (MARCI) camera on the Mars Reconnaissance Orbiter (MRO). Further weather reports from the MRO and the MARCI team indicated a prolonged storm. Although the storm was, at that time, still far from Curiosity, it increased the atmospheric opacity at the rover's location. Within days, the storm had spread; as of 12 June 2018, it spanned an area of 41 million km2 (16 million sq mi), about the area of North America and Russia combined. Such dust storms are not surprising, but they occur infrequently. They can arise within a short time and then persist for weeks to months. During southern-hemisphere summer, sunlight heats dust particles and lofts them higher into the atmosphere. This creates wind, which in turn stirs up more dust, a feedback loop that scientists are still trying to understand. NASA reported on 20 June 2018 that the dust storm had grown to completely cover the entire planet.
On 4 June 2018, NASA announced that Curiosity's ability to drill has been sufficiently restored by engineers. The rover had experienced drill mechanical problems since December 2016.
On 7 June 2018, NASA announced a cyclical seasonal variation in atmospheric methane, as well as the presence of kerogen and other complex organic compounds. The organic compounds came from mudstone rocks approximately 3.5 billion years old, sampled from two distinct sites in an ancient dry lake bed in the Pahrump Hills of Gale crater. The rock samples, when pyrolyzed by Curiosity's Sample Analysis at Mars instrument, released an array of organic molecules, including sulfur-containing thiophenes, aromatic compounds such as benzene and toluene, and aliphatic compounds such as propane and butene. The concentrations of organic compounds were 100-fold higher than earlier measurements; the authors speculated that the presence of sulfur may have helped preserve them. The products resemble those obtained from the breakdown of kerogen, a precursor to oil and natural gas on Earth. NASA stated that these findings are not evidence that life existed on the planet, but that the organic compounds needed to sustain microscopic life were present, and that there may be deeper sources of organic compounds on the planet.
Since 15 September 2018, a glitch in Curiosity's active computer (Side-B) has prevented Curiosity from storing science and key engineering data. On 3 October 2018, the JPL began operating Curiosity on its backup computer (Side-A). Curiosity will store science and engineering data normally using its Side-A computer until the cause of the glitch in Side-B is determined and remedied.
On 4 November 2018, geologists presented evidence, based on studies in Gale Crater by the Curiosity rover, that there was plenty of water on early Mars.
On 26 November 2018, Curiosity viewed a shiny object (named "Little Colonsay") on Mars. Although it is possibly a meteorite, further studies were planned to better determine its nature.
On 1 February 2019, NASA scientists reported that the Mars Curiosity rover determined, for the first time, the density of Mount Sharp in Gale crater, thereby establishing a clearer understanding of how the mountain was formed.
On 4 April 2019, NASA released images of solar eclipses by the two moons of the planet Mars, Phobos (animation1) and Deimos (animation2), as viewed by the Curiosity rover on the planet Mars in March 2019.
On 11 April 2019, NASA announced that the Curiosity rover on the planet Mars drilled into, and closely studied, a "clay-bearing unit" which, according to the rover Project Manager, is a "major milestone" in Curiosity's journey up Mount Sharp.
During June 2019, while still studying the clay-bearing unit, Curiosity detected its highest levels of methane gas to date, 21 parts per billion, compared to the typical 1 part per billion the rover detects as normal background. The levels of methane dropped quickly over a few days, leading NASA to describe the event as one of several transient methane plumes that had been observed before but without any discernible pattern. The rover lacked the instrumentation needed to determine whether the methane was biological or inorganic in origin.
== Third extended mission (October 2019 - September 2022) ==
The third extended mission began on 1 October 2019, the rover's 2544th sol on Mars.
In October 2019, magnesium sulfate deposits uncovered by the Curiosity rover on Mount Sharp, left behind in patterns suggesting evaporation, were reported as evidence of an ancient basin in Gale crater, about 150 km (93 mi) wide, that may once have contained a salty lake.
In January 2020, a report was presented that compared Curiosity at the time of its landing on Mars in 2012, with the rover over seven years later in 2020.
In February 2020, scientists reported the detection of thiophene organic molecules by the Curiosity rover on the planet Mars. It is not currently known if the detected thiophenes — usually associated on Earth with kerogen, coal and crude oil — are the result of biological or non-biological processes.
In April 2020, scientists began operating the rover remotely from their homes due to the COVID-19 pandemic.
On 29 August 2020, NASA released several videos taken by the Curiosity rover, including some involving dust devils, as well as very high resolution images of the local Martian terrain.
In June 2021, scientists determined that the methane concentration around Curiosity varied according to the time of sol, with methane present only at night. This explains the difference in methane levels detected by Curiosity and the Trace Gas Orbiter (an open question since 2016), although it does not explain what is creating the methane or why the methane seems to be more short-lived than current models predict.
On 3 July 2021, the Curiosity rover viewed the "Rafael Navarro Mountain" area.
On 1 November 2021, astronomers reported detecting, in a "first-of-its-kind" process based on SAM instruments, organic molecules, including benzoic acid, ammonia and other related unknown compounds, on the planet Mars by the Curiosity rover.
On 17 January 2022, scientists reported that the Curiosity rover had found an unusual carbon isotope signal on Mars which may (or may not) be associated with ancient Martian life, suggesting, according to the scientists, that microbes residing underground may have emitted the "enriched carbon as methane gas". However, abiotic sources of the unusual carbon signal have not been completely ruled out.
In April 2022, Mars Science Laboratory was renewed for a fourth extended mission, which will include the exploration of the sulfate-bearing unit.
== Fourth extended mission (October 2022 - Present) ==
The rover began its fourth extended mission on 1 October 2022, which will last until October 2025.
In January 2023, the Curiosity Rover viewed and studied the "Cacao" meteorite.
In August 2023, Curiosity explored the upper Gediz Vallis Ridge, returning panoramic and 3D-rendered views of the ridge.
In February 2024, Curiosity completed its 40th successful drilling, of a rock named "Mineral King" in Gediz Vallis.
In July 2024, it was announced that an analysis of a rock crushed by the rover (one of a series of deposits) had revealed pure elemental sulfur, found on Mars for the first time.
In October 2024, the science team behind the SAM experiment onboard the rover announced the results of three years of sampling, which suggested that based on high carbon-13 and oxygen-18 levels in the regolith, the early Martian atmosphere was unlikely to be stable enough to support surface water hospitable to life, with rapid wetting-drying cycles and very high-salinity cryogenic brines providing an explanation.
In March 2025, researchers reported that Curiosity had discovered long-chain alkanes, with up to 12 consecutive carbon atoms, in mudstone in Gale crater. The origin of these molecules is unknown; they could be derived from either abiotic or biological sources.
In the spring of 2025, NASA announced that Curiosity had found carbonates in the form of crystalline siderite (FeCO3); one rock contained over 10% of the mineral. Carbonates were expected on Mars because of its carbon dioxide atmosphere, but few had been detected from orbit, possibly because they are obscured by dust. The rocks also contained plagioclase bearing sodium (Na), calcium (Ca), and aluminium (Al), as well as the Ca- and Mg-bearing silicate mineral pyroxene. Other minerals found were calcium sulfates, magnesium sulfates, varying amounts of iron oxyhydroxides, and an unidentified X-ray amorphous material. The rover's Chemistry and Mineralogy (CheMin) instrument uses X-ray diffraction to determine sample mineralogy. The names of the rock formations and drill sites are CA, Canaima; TC, Tapo Caparo; UB, Ubajara; and SQ, Sequoia.
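CheMin identifies minerals such as siderite from the angles at which X-rays diffract off crystal lattice planes, following Bragg's law nλ = 2d sin θ. The sketch below shows the idea; the cobalt Kα wavelength (the anode CheMin is reported to use) and the siderite lattice spacing are approximate textbook values, not figures from the mission papers.

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta), here with n = 1.
wavelength_A = 1.79  # Co K-alpha X-ray wavelength in angstroms (approximate)
d_spacing_A = 2.79   # siderite (104) lattice spacing in angstroms (approximate)

theta = math.asin(wavelength_A / (2 * d_spacing_A))
print(f"Siderite (104) peak expected near 2-theta = {math.degrees(2 * theta):.1f} degrees")  # ~37 deg
```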
== Current status ==
=== Weather ===
Current Weather Report Archived 21 June 2013 at the Wayback Machine on Mars by the Curiosity rover
Current Weather Report on Mars by the InSight lander
Current Weather Report on Mars by the Perseverance rover
=== Location and travel statistics ===
This up-to-date NASA link shows Curiosity's travels and present location
As of June 6, 2025, Curiosity has been on the planet Mars for 4562 sols (4687 total days) since landing on 6 August 2012. Since 11 September 2014, Curiosity has been exploring the slopes of Mount Sharp, where more information about the history of Mars is expected to be found. As of 26 January 2021, the rover has traveled over 24.15 km (15.01 mi) and climbed over 327 m (1,073 ft) in elevation to, and around, the mountain base since arriving at Bradbury Landing in August 2012.
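The sol count follows from the length of a Martian solar day, about 88,775 seconds (roughly 2.7% longer than an Earth day). A minimal sketch of the conversion used above:

```python
from datetime import date

SOL_SECONDS = 88775.244      # mean Martian solar day
EARTH_DAY_SECONDS = 86400.0

landing = date(2012, 8, 6)
as_of = date(2025, 6, 6)

elapsed_days = (as_of - landing).days
elapsed_sols = elapsed_days * EARTH_DAY_SECONDS / SOL_SECONDS
print(elapsed_days, round(elapsed_sols))  # 4687 days, ~4562 sols
```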
=== Equipment status ===
Since early 2015, the percussive mechanism in the drill that chisels into rock has had an intermittent electrical short circuit.
In December 2016, a malfunction of the motor inside the drill prevented the rover from moving its robotic arm and driving to another location. The fault was in the drill feed motor; internal debris was suspected. The fault was determined to be limited to the drill mechanism, and the rover started moving again on 9 December. The robotic arm remained functional, and the Curiosity team performed diagnostics on the drill mechanism throughout 2017. On 4 June 2018, NASA announced that Curiosity's ability to drill had been sufficiently restored by changing the drilling methods.
Since 15 September 2018, a glitch in Curiosity's active computer (Side-B) has prevented Curiosity from storing science and key engineering data. On 3 October 2018, the JPL began operating Curiosity on its backup computer (Side-A). Curiosity will store science and engineering data normally using its Side-A computer until the cause of the glitch in Side-B is determined and remedied.
== Self-Portraits ==
== See also ==
== References ==
== External links ==
Curiosity Rover Official Page.
MSL/NASA Official Page.
Mars Weather: Perseverance, Curiosity, InSight
Curiosity Rover Tracker (6 August 2012 to 5 August 2013 and beyond).
Panoramic View of Gale Crater on Mars (4 billion pixels) (March 2013).
Video (04:32) - Evidence for 'Vigorously' Flowing Water on Ancient Mars (September 2012).
Video (02:52) - Curiosity's view from Mount Sharp (NASA; August 2021)
STScI Webcast Catalog and Archive - (Robert Hazen; NASA; April 2014).
Video (86:49) - Search for Life in the Universe - (NASA; July 2014). | Wikipedia/Timeline_of_Mars_Science_Laboratory |
Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI, originally High Energy Solar Spectroscopic Imager or HESSI or Explorer 81) was a NASA solar flare observatory. It was the sixth mission in the Small Explorer program (SMEX), selected in October 1997 and launched on 5 February 2002, at 20:58:12 UTC. Its primary mission was to explore the physics of particle acceleration and energy release in solar flares.
The spacecraft re-entered Earth's atmosphere at 00:21 UTC on 20 April 2023, 21 years after its launch.
== Spacecraft ==
HESSI was renamed to RHESSI on 29 March 2002 in honor of Dr. Reuven Ramaty, a pioneer in the area of high energy solar physics. RHESSI was the first space mission named after a NASA scientist. RHESSI was built by Spectrum Astro for Goddard Space Flight Center and was operated by the Space Sciences Laboratory in Berkeley, California. The principal investigator from 2002 to 2012 was Robert Lin, who was succeeded by Säm Krucker.
The entire spacecraft rotated to provide the necessary signal modulation. The four fixed solar panels were designed to provide enough gyroscopic moment to stabilize rotation about the solar vector, largely eliminating the need for attitude control. The instrument detectors were nine high-purity germanium crystals, each cooled to cryogenic temperatures by a mechanical cryocooler. Germanium provided not only detection via the photoelectric effect, but also inherent spectroscopy through the charge deposited by the incoming photon. The crystals were housed in a cryostat and mounted with low-conductivity straps. A tubular telescope structure formed the bulk of the spacecraft. Its purpose was to hold the collimators above the Ge crystals at known, fixed positions.
The satellite bus consisted of the structure and mechanisms, the power system (including the battery, solar panels, and control electronics), the attitude control system, thermal control system, command and data handling system (C&DH), and telecommunications system. The spacecraft structure provided support for the telescope and other components. It was manufactured out of aluminum parts to be lightweight but strong. The equipment platform had a honeycomb structure to further reduce weight. The spacecraft was manufactured in Gilbert, Arizona, by Spectrum Astro, Inc.
The Imaging Telescope Assembly consisted of the telescope tube, grid trays, solar aspect system (SAS), and roll angle system (RAS). It was constructed, assembled, aligned, and tested at the Paul Scherrer Institute in Switzerland. The front and rear grid trays were attached to the telescope tube, which maintained their separation and alignment. Nine grids were mounted on a grid tray at each end of the telescope tube. The grid pairs modulated the transmission of solar flare X-ray and gamma-ray emission through to the detectors as the spacecraft spun around the axis of the telescope tube. The modulated count rates in the nine detectors were used in computers on the ground to construct images of solar flares in different energy bands. The five coarse grids (square) were constructed by Van Beek Consultancy in the Netherlands. The four fine grids (round) were constructed by Thermo Electron Tecomet in Massachusetts. All grids were characterized both optically and with X-rays at Goddard Space Flight Center before being shipped to the Paul Scherrer Institute for integration into the imaging telescope assembly.
The spectrometer contained nine germanium detectors positioned behind the nine grid pairs on the telescope. These artificially grown crystals, pure to better than one part in a trillion, were manufactured by the ORTEC division of Perkin Elmer Instruments. When they were cooled to cryogenic temperatures and a high voltage was put across them (up to 4000 volts), they converted incoming X-rays and gamma rays to pulses of electric current. The amount of current was proportional to the energy of the photon, and was measured by sensitive electronics designed at the Lawrence Berkeley National Laboratory and the Space Sciences Laboratory in Berkeley, California. The detectors were cooled by an electromechanical Stirling-cycle cryocooler built by SunPower Inc. and flight-qualified at Goddard Space Flight Center, which maintained them at the required operating temperature of −198 °C (−324.4 °F), or 75 K, about 75 degrees above absolute zero.
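The proportionality between photon energy and pulse size arises because creating each electron-hole pair in cold germanium costs a fixed average energy, about 2.96 eV. This is a textbook value, not one from the RHESSI documentation; the sketch below only illustrates the scale of the signal.

```python
PAIR_ENERGY_EV = 2.96          # average energy per electron-hole pair in cold germanium (textbook value)
ELECTRON_CHARGE_C = 1.602e-19  # coulombs

def pulse_charge(photon_energy_kev: float) -> float:
    """Charge collected for a fully absorbed photon, in coulombs."""
    pairs = photon_energy_kev * 1e3 / PAIR_ENERGY_EV
    return pairs * ELECTRON_CHARGE_C

print(f"{pulse_charge(100.0):.2e} C")  # ~5.4e-15 C for a 100 keV photon
```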
== Mission concept ==
RHESSI was designed to image solar flares in energetic photons from soft X-rays (~3 keV) to gamma rays (up to ~20 MeV) and to provide high resolution spectroscopy up to gamma-ray energies of ~20 MeV. Furthermore, it had the capability to perform spatially resolved spectroscopy with high spectral resolution.
== Scientific objectives ==
Researchers believe that much of the energy released during a flare is used to accelerate, to very high energies, electrons (emitting primarily X-rays) and protons and other ions (emitting primarily gamma rays). The new approach of the RHESSI mission was to combine, for the first time, high-resolution imaging in hard X-rays and gamma rays with high-resolution spectroscopy, so that a detailed energy spectrum could be obtained at each point of the image. This approach enabled researchers to find out where these particles are accelerated and to what energies. Such information advances understanding of the fundamental high-energy processes at the core of the solar flare phenomenon.
The primary scientific objective of RHESSI was to understand the following processes that take place in the magnetized plasmas of the solar atmosphere during a flare:
Impulsive energy release
Particle acceleration
Particle and energy transport
These high-energy processes play a major role at sites throughout the Universe ranging from magnetospheres to active galaxies. Consequently, the importance of understanding these processes transcends the field of solar physics; it is one of the major goals of space physics and astrophysics.
The high energy processes of interest include the following:
The rapid release of energy stored in unstable magnetic configurations
The equally rapid conversion of this energy into the kinetic energy of hot plasma and accelerated particles (primarily electrons, protons and ions)
The transport of these particles through the solar atmosphere and into interplanetary space
The subsequent heating of the ambient solar atmosphere
These processes involve:
Particle energies up to many GeV
Temperatures of tens or even hundreds of millions of degrees
Densities as low as 100 million particles per cubic cm
Spatial scales of tens of thousands of kilometers, and
Magnetic containment times of seconds to hours
It is impossible to duplicate these conditions in laboratories on the Earth.
The acceleration of electrons is revealed by hard X-ray and gamma-ray bremsstrahlung while the acceleration of protons and ions is revealed by gamma-ray lines and continuum. The proximity of the Sun means not only that these high-energy emissions are orders of magnitude more intense than from any other cosmic source but also that they can be better resolved, both spatially and temporally.
== Imaging ==
Since X-rays are not easily reflected or refracted, imaging in X-rays is difficult. One solution to this problem is to selectively block the X-rays. If the X-rays are blocked in a way that depends on the direction of the incoming photons, then it may be possible to reconstruct an image. The imaging capability of RHESSI was based on a Fourier transform technique using a set of 9 Rotational Modulation Collimators (RMCs) as opposed to mirrors and lenses. Each RMC consisted of two sets of widely spaced, fine-scale linear grids. As the spacecraft rotated, these grids blocked and unblocked any X-rays which may have come from the Sun modulating the photon signal in time. The modulation could be measured with a detector having no spatial resolution placed behind the RMC since the spatial information was now stored in the time domain. The modulation pattern over half a rotation for a single RMC provided the amplitude and phase of many spatial Fourier components over a full range of angular orientations but for a small range of spatial source dimensions. Multiple RMCs, each with different slit widths, provided coverage over a full range of flare source sizes. Images were then reconstructed from the set of measured Fourier components in exact mathematical analogy to multi-baseline radio interferometry. RHESSI provided spatial resolution of 2 arcseconds at X-ray energies from ~4 keV to ~100 keV, 7 arcseconds to ~400 keV, and 36 arcseconds for gamma-ray lines and continuum emission above 1 MeV.
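To illustrate the principle: a point source off the spin axis produces a quasi-periodic modulation of the count rate behind each grid pair, and the amplitude and phase of that modulation encode one spatial Fourier component of the source. The toy model below uses made-up grid parameters and an idealized sinusoidal transmission (real grids give a triangular response); it is a conceptual demonstration, not RHESSI's actual reconstruction software.

```python
import numpy as np

# Hypothetical point source: angular offset r (arcsec) and position angle phi0.
r, phi0 = 120.0, np.deg2rad(30.0)
pitch_arcsec = 40.0            # angular pitch of one grid pair (made-up value)
omega = 2 * np.pi * 15 / 60.0  # spin rate: 15 rpm, in rad/s

t = np.linspace(0.0, 4.0, 2000)  # one 4-second rotation
# Projected source offset along the (rotating) grid direction:
proj = r * np.cos(omega * t - phi0)
# Idealized grid-pair transmission, fully modulated for an off-axis source:
rate = 0.5 * (1 + np.cos(2 * np.pi * proj / pitch_arcsec))

# The amplitude and phase of profiles like `rate`, from nine collimators of
# different pitches, supply the Fourier components from which an image is
# reconstructed, in analogy to multi-baseline radio interferometry.
print(rate.min(), rate.max())
```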
RHESSI could also see gamma rays coming from off-solar directions. The more energetic gamma rays passed through the spacecraft structure, and impacted the detectors from any angle. This mode was used to observe gamma-ray bursts (GRBs). The incoming gamma rays were not modulated by the grids, so positional and imaging information was not recorded. However, a crude position could still be derived by the fact that the detectors had front and rear pickups. Also, the detectors near the burst shielded the ones away from the burst. Comparing signal strengths around the nine crystals, and front-to-back, then gave a coarse, two-dimensional position in space.
When combined with high-resolution time stamps of the detector hits, the RHESSI solution could be cross-referenced on the ground with other spacecraft in the IPN (Interplanetary Network) to provide a fine solution. The large area and high sensitivities of the germanium crystal assembly made RHESSI a formidable IPN component. Even when other spacecraft could provide burst locations, few could provide as high-quality spectra of the burst (in both time and energy) as RHESSI. Rarely, however, a GRB occurred near the Sun, in the collimated field of view. The grids then provided full information, and RHESSI was able to provide a fine GRB location even without IPN correlation.
== Experiment ==
=== Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI) ===
RHESSI was intended to image solar flares at high resolution in X-rays and gamma rays. The X-rays and gamma rays covered an energy range of 3 keV–20 MeV with an energy resolution of about 1 keV and a spatial resolution of just a few seconds of arc. The imaging was accomplished by a 45 × 170 cm (18 × 67 in) tube containing nine pairs (one behind the other, spaced 1.5 m (4 ft 11 in) apart) of tungsten or molybdenum wire grids of width 9 cm (3.5 in), mounted parallel to the rotation axis of the tube pointing at the Sun. The tube rotated about its axis as the spacecraft spun at a rate of 15 rpm. During a rotation, a photon from any point on the Sun could either pass through a grid pair or be blocked by one or other of the grids. This caused a modulation of the intensity of photons emanating from that point. The depth of modulation was zero for photons arriving exactly along the spin axis and gradually increased for off-axis photons. Behind each grid pair was a cryogenic (75 K) germanium detector of 7.1 cm (2.8 in) diameter and 8.5 cm (3.3 in) thickness. The output from each of the nine detectors, at any given energy, could be Fourier-analyzed to provide a full two-dimensional spatial spectrum of an extended source region on the Sun. The full spatial spectrum was possible because each wire grid pair had a different slit width, spacing, and wire thickness. Data accumulation was about 16 Gb during a 10-minute period. The telemetry data were collected at Berkeley, California; Wallops Flight Facility (WFF), Virginia; Santiago, Chile; and Weilheim, Germany. Science analysis of the data involved close collaboration with many dedicated ground-based and satellite-based solar observatories. A secondary goal of RHESSI was to observe astronomical sources such as the Crab Nebula.
== Results ==
RHESSI observations changed perspectives on solar flares, particularly on high-energy processes in flares. Its observations led to numerous publications in scientific journals and presentations at conferences. Through 2017, the satellite was referenced in 2,474 publications, books, and presentations. The collection "High-Energy Aspects of Solar Flares: A RHESSI Inspired Monograph" contains reviews of RHESSI solar flare research up to 2011.
RHESSI was the first satellite to image gamma rays from a solar flare.
It was the first satellite to accurately measure terrestrial gamma-ray flashes coming from thunderstorms, and it found that such flashes occur more often than previously thought and that their gamma rays have, on average, higher frequencies than those of typical cosmic sources.
Following communication difficulties, RHESSI ceased science operations on 11 April 2018 at 01:50 UTC. It was decommissioned on 16 August 2018 and remained in a low Earth orbit that gradually decayed due to atmospheric drag.
== See also ==
Explorers program
== References ==
== External links ==
RHESSI website Archived 26 June 2019 at the Wayback Machine by NASA's Goddard Space Flight Center
HESSI archive website by the Space Sciences Laboratory
RHESSI data browser by the Space Sciences Laboratory
RHESSI Science Nuggets, biweekly series explaining recent science results | Wikipedia/Ramaty_High_Energy_Solar_Spectroscopic_Imager |
The Manned Space Flight Network (abbreviated MSFN, pronounced "misfin") was a set of tracking stations built to support the American Mercury, Gemini, Apollo, and Skylab space programs.
There were two other NASA space communication networks at the time, the Spacecraft Tracking and Data Acquisition Network (STADAN) for tracking satellites in low Earth orbit, and the Deep Space Network (DSN) for tracking more distant uncrewed missions. After the end of Skylab, the MSFN and STADAN were merged to form the Spaceflight Tracking and Data Network (STDN). The STDN was in turn replaced by the satellite-based Tracking and Data Relay Satellite System (TDRSS) during the Space Shuttle program, which remained in use as of 2009.
== Orbital versus deep space tracking ==
Tracking vehicles in low Earth orbits (LEO) is quite different from tracking deep space missions. Deep space missions are visible for long periods of time from a large portion of the Earth's surface, and so require few stations (the DSN uses only three, as of February 20, 2010). These few stations, however, require the use of huge antennas and ultra-sensitive receivers to cope with the very distant, weak signals. Low Earth orbit missions, on the other hand, are only visible from a small fraction of the Earth's surface at a time, and the satellites move overhead quickly, which requires a large number of tracking stations, spread all over the world. The antennas required for LEO tracking and communication are not required to be as large as those used for deep space, but they must be able to track quickly.
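The geometry behind these differing requirements is easy to quantify. The following is a minimal sketch in Python, assuming circular orbits, a spherical Earth, and ignoring Earth's rotation during a pass; it computes how small a single station's coverage cap is, and how briefly a low-altitude satellite stays in view even on a pass directly overhead.

```python
import math

RE = 6378.0     # Earth radius, km
MU = 398600.4   # Earth's gravitational parameter, km^3/s^2

def coverage_and_pass(h_km: float, min_elev_deg: float = 5.0):
    """Half-angle of one station's coverage cap (at Earth's center) and the
    duration of a pass directly overhead, for a circular orbit."""
    e = math.radians(min_elev_deg)
    lam = math.acos(RE * math.cos(e) / (RE + h_km)) - e   # Earth-central angle
    T = 2 * math.pi * math.sqrt((RE + h_km) ** 3 / MU)    # orbital period, s
    return math.degrees(lam), T * lam / math.pi / 60      # degrees, minutes in view

for h in (300, 1000, 3000):
    lam, minutes = coverage_and_pass(h)
    print(f"h = {h:5d} km: cap half-angle {lam:4.1f} deg, overhead pass {minutes:5.1f} min")
```

At 300 km altitude a station sees the spacecraft for only about six minutes per overhead pass, which is why LEO support demands many stations spread worldwide, whereas a deep-space target stays above a single station's horizon for hours at a time.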
These differing requirements led NASA to build a number of independent tracking networks, each optimized for its own mission. Prior to the mid-1980s, when the Tracking and Data Relay Satellite System (TDRSS) satellites became operational, NASA used several networks of ground-based antennas to track and communicate with Earth orbiting spacecraft. For the Mercury, Gemini, and Apollo missions, these were the primary means of communication, with the Deep Space Network (DSN) being assigned a supporting/backup role.
== Mercury MSFN stations ==
The Mercury Space Flight Network (MSFN) was completed in 1961 and consisted of 18 tracking stations, including two ships in the Atlantic and Indian Oceans to close gaps between the ground stations.
1) Mercury Control Center (CNV), Cape Canaveral, Florida
Grand Bahama (downrange antenna for MCC)
Grand Turk (downrange antenna for MCC)
2) Cooper's Island, Bermuda (BDA)
3) Atlantic Ship (ATS)
4) Maspalomas Station, Grand Canary Island (CYI)
5) Kano, Nigeria (KNO)
6) Zanzibar (ZZB)
7) Indian Ocean ship (IOS)
8) Muchea, Australia (MUC)
9) Woomera Test Range, Australia (WOM)
11) Canton Island, Republic of Kiribati (CTN)
12) Kauai, Hawaii (HAW)
13) Point Arguello, California (CAL)
14) Guaymas, Mexico (GYM)
15) White Sands, New Mexico (WHS)
16) Corpus Christi, Texas (TEX)
17) Eglin, Florida (EGL)
There was some variation between flights. For example, between MA-6 and MA-7 the Mid-Atlantic Ship was removed and the Indian Ocean Ship was repositioned to the Mozambique Channel.
A Pacific Ocean ship (USNS Wheeling) and the Goldstone Deep Space Communications Complex (GDS) in California were used during Gordon Cooper's 1963 MA-9 flight. On MA-9 the Bermuda FPS-16 radar was the only radar on the entire network that tracked the capsule during its insertion into orbit, and it was thus vital to the verification of a proper orbit. The next station to have contact was the Canary Islands. Cooper's flight was delayed for 24 hours due to a malfunction in the Bermuda FPS-16 radar's antenna data system. The radar set failed a CADFISS test, in which all the stations in the network had to transmit information to NASA to ensure accurate information could be obtained. The failed part was replaced within 3 hours, but when the capsule communicator asked for a realistic estimate, he was told 24 hours. The mission was immediately scrubbed for one day.
== Project Gemini ==
The network expanded for Project Gemini's longer flights which included rendezvous operations involving two spacecraft. A move toward increased computerization and decreased voice support for Gemini made a more centralized network possible with fewer primary stations and more secondary stations, although those major facilities were better equipped. Some Mercury stations were dropped; many were supplemented with new hardware.
Gemini Network sites:
Mission Control Center (MCC-K)
Grand Bahama Island (GBI)
Grand Turk Island (GTI)
Bermuda (BDA)
Antigua (ANT)
Grand Canary Island (CYI)
Ascension Island (ASC)
Kano, Africa (KNO)
Pretoria, South Africa (PRE)
Tananarive, Malagasy Republic (TAN)
Carnarvon, Australia (CRO)
Woomera, Australia (WOM)
Canton Island (CTN)
Kauai Island, Hawaii (HAW)
Point Arguello, California (CAL)
Guaymas, Mexico (GYM)
White Sands, New Mexico (WHS)
Corpus Christi, Texas (TEX)
Eglin, Florida (EGL)
Wallops Island, Virginia (WLP)
Coastal Sentry Quebec (ship) (CSQ)
Rose Knot Victor (ship) (RKV)
Goddard Space Flight Center (GSFC)
Range Tracker (ship) (RTK)
== The Apollo missions ==
The Manned Space Flight Network (MSFN) during the Apollo era was also known as the Apollo Network. From a NASA technical report on the history of the MSFN:
The technical facts of life were these: the radars of the Mercury and Gemini Networks obviously could not track two spacecraft orbiting the Moon a quarter-million miles away; neither could the small MSFN telemetry antennas hope to pick out the telemetry and voice messages in the weak signals arriving from the vicinity of the Moon. Translated into network hardware terms, Apollo would require at least the following changes in the MSFN:
A range and range rate tracking system, such as GRARR or the JPL range and range rate system, would have to be incorporated to accurately track the distant spacecraft while it was out of radar range.
Large dish antennas with high gains, such as the 26-m paraboloids employed in STADAN and the DSN, would have to be added to the MSFN to track and communicate at lunar distances.
Extant MSFN stations could not properly monitor the very critical mission phases when the spacecraft was inserted into its lunar trajectory and when it plunged into the narrow reentry corridor on the return trip. The result was that the MSFN had to be extended with ships, aircraft, and additional land sites.
Small paraboloidal antennas would have to be added at some MSFN sites to communicate with the Apollo spacecraft while it was still below the horizon for the 26-m dishes (below about 16,000 km) but beyond the range of the Gemini telemetry antennas.
The communication traffic during the Apollo missions would be several times that planned for Gemini. NASCOM lines would have to be augmented.
To meet these requirements, the MSFN used a combination of resources. A Jet Propulsion Laboratory (JPL) system called "Unified S-band", or USB, was selected for Apollo communications, allowing tracking, ranging, telemetry, and voice all to use the same S-band transmitter. Near-Earth tracking was provided by upgrading the same networks used for Mercury and Gemini. New large antennas for the lunar phase were constructed explicitly for the MSFN, with the Deep Space Network (DSN) large antennas used for backup and critical mission phases.
=== DSN support during Apollo ===
Although normally tasked with tracking uncrewed spacecraft, the Deep Space Network (DSN) also contributed to the communication and tracking of Apollo missions to the Moon, although primary responsibility remained with the Manned Space Flight Network (MSFN). The DSN designed the MSFN stations for lunar communication and provided a second antenna at each MSFN site (the MSFN sites were near the DSN sites for just this reason). Two antennas at each site were needed because the beamwidths of the large antennas were too small to encompass both the lunar orbiter and the lander at the same time. The DSN also supplied some larger antennas as needed, in particular for television broadcasts from the Moon and for emergency communications such as during Apollo 13.
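The beamwidth problem is simple to estimate. A rough check follows, using the common 70·λ/D rule of thumb for the half-power beamwidth of a parabolic dish; the 2.3 GHz figure is a representative Unified S-band frequency assumed here, not taken from this text.

```python
# Half-power beamwidth of a parabolic dish, using the common rule of thumb
# theta_3dB ~= 70 * lambda / D (degrees). The 2.3 GHz figure is a representative
# Unified S-band frequency, assumed here rather than taken from the text.
c = 3.0e8        # speed of light, m/s
f = 2.3e9        # frequency, Hz
D = 26.0         # dish diameter, m

beam_deg = 70 * (c / f) / D
print(f"26-m dish beamwidth: {beam_deg:.2f} deg")   # ~0.35 deg

# The Moon subtends ~0.52 deg from Earth, so a lander near one limb and an
# orbiter past the opposite limb can be separated by more than one beamwidth.
print("single beam can span the Moon's disk:", beam_deg >= 0.52)
```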
From a NASA report describing how the DSN and MSFN cooperated for Apollo:
Another critical step in the evolution of the Apollo Network came in 1965 with the advent of the DSN Wing concept. Originally, the participation of DSN 26-m antennas during an Apollo mission was to be limited to a backup role. This was one reason why the MSFN 26-m sites were collocated with the DSN sites at Goldstone, Madrid, and Canberra. However, the presence of two well-separated spacecraft during lunar operations stimulated the rethinking of the tracking and communication problem. One thought was to add a dual S-band RF system to each of the three 26-m MSFN antennas, leaving the nearby DSN 26-m antennas still in a backup role. Calculations showed, though, that a 26-m antenna pattern centered on the landed Lunar Module would suffer a 9-to-12 dB loss at the lunar horizon, making tracking and data acquisition of the orbiting Command Service Module difficult, perhaps impossible. It made sense to use both the MSFN and DSN antennas simultaneously during the all-important lunar operations. JPL was naturally reluctant to compromise the objectives of its many unmanned spacecraft by turning three of its DSN stations over to the MSFN for long periods. How could the goals of both Apollo and deep space exploration be achieved without building a third 26-m antenna at each of the three sites or undercutting planetary science missions?
The solution came in early 1965 at a meeting at NASA Headquarters, when Eberhardt Rechtin suggested what is now known as the "wing concept". The wing approach involves constructing a new section or "wing" to the main building at each of the three involved DSN sites. The wing would include a MSFN control room and the necessary interface equipment to accomplish the following:
1. Permit tracking and two-way data transfer with either spacecraft during lunar operations.
2. Permit tracking and two-way data transfer with the combined spacecraft during the flight to the Moon.
3. Provide backup for the collocated MSFN site passive track (spacecraft to ground RF links) of the Apollo spacecraft during trans-lunar and trans-earth phases.
With this arrangement, the DSN station could be quickly switched from a deep-space mission to Apollo and back again. GSFC personnel would operate the MSFN equipment completely independently of DSN personnel. Deep space missions would not be compromised nearly as much as if the entire station's equipment and personnel were turned over to Apollo for several weeks.
The details of this cooperation and operation are available in a two-volume technical report from JPL.
== Current communication with Earth-orbiting spacecraft ==
As of February 20, 2010, three different NASA networks are used: the Deep Space Network (DSN), the Near Earth Network (NEN), and the Space Network/Tracking and Data Relay Satellite System (TDRSS). The DSN, as the name implies, tracks probes in deep space (more than 10,000 miles (16,000 km) from Earth), while the NEN and TDRSS are used to communicate with satellites in low Earth orbit. TDRSS uses a network of 10 geostationary communications satellites and a single ground station at White Sands Test Facility.
After Apollo, the MSFN no longer needed the large antennas that had been used for lunar communication, which were eventually given over to the DSN. In 1985, the antenna at Honeysuckle Creek Tracking Station was moved to the Canberra Deep Space Communication Complex (CDSCC) DSN site, and the antenna at Fresnedillas was moved to the existing Robledo DSN location. The Goldstone Deep Space Communications Complex antenna is still in its original location.
== See also ==
Merritt Island Spaceflight Tracking and Data Network station
NASCOM
== References ==
== External links ==
Manned Space Flight Network Performance Analysis for the GT-2 Mission - NASA - May 14, 1965 (PDF format) | Wikipedia/Manned_Space_Flight_Network |
The Mercury Control Center (also known as Building 1385 or simply MCC) provided control and coordination of all activities associated with NASA's Project Mercury flight operations as well as the first three Project Gemini flights (the first two of which had no crew). It was located on the Cape Canaveral Space Force Station, east of Samuel C. Phillips Parkway.
The facility was expanded in 1963 to support Project Gemini by contractor Pan American World Airways to provide more meeting space along with space for data analysis, and a large space for a new Gemini spacecraft trainer.
The more complex requirements of later Gemini and Apollo flights forced control operations to move to a larger facility located in Houston, Texas, but the MCC continued to be used for training and meeting space. On June 1, 1967, the Center became a historic stop for public tours, and continued this function through the mid-1990s.
== Configuration ==
The MCC was organized into three rows and was smaller and simpler than subsequent control centers. It needed only to support a mission of no more than 36 hours with a spacecraft less complex than those used on future missions. Positions included those for monitoring the spacecraft and astronauts during flight as well as positions for supporting the launch and recovery of the capsule.
The MCC also featured a large backlit status map for display of the capsule position. Unlike later mission control centers which featured computer generated graphics, this Mercury-era display operated with a physical two-dimensional representation of the capsule suspended and lit in front of the map.
Later mission control operations were split between launch control, located at the launch site such as Cape Canaveral, and mission control, located at the Lyndon B. Johnson Space Center for the Apollo and Space Shuttle programs or at the Jet Propulsion Laboratory for unmanned missions.
== Tracking and ground facilities ==
Though the MCC provided command and control, it was not the only facility involved in supporting Mercury or Gemini flights. The Computing and Communications Center was located at the Goddard Space Flight Center in Greenbelt, Maryland and provided computing power for missions.
To minimize the time that the spacecraft was out of communications range with the ground, additional ground stations were established at US military facilities, tracking ships, and in cooperation with the governments of Spain, United Kingdom, Nigeria and Australia:
Cape Canaveral (CNV-MCC)
Grand Bahama Island (GBI)
Grand Turk Island (GTI)
Bermuda (BDA)
Atlantic Ship (ATS)
Maspalomas Station, Grand Canary Island (CYI)
Kano, Nigeria (KNO)
Zanzibar (ZZB)
Indian Ocean Ship (IOS)
Muchea, Australia (MUC)
Woomera, Australia (WOM)
Canton Island, Kiribati (CTN)
Kauai, Hawaii (HAW)
Point Arguello, California (CAL)
Guaymas, Mexico (GYM)
White Sands, New Mexico (WHS)
Corpus Christi, Texas (TEX)
Eglin Air Force Base (EGL)
== Location ==
The building was erected between 1956 and 1958 and was used throughout Project Mercury (1961–1963) and for Project Gemini through Gemini 3 (1964–1965).
Though the building was listed on the National Register of Historic Places on April 16, 1984, as a contributing property to Cape Canaveral Air Force Station, asbestos removal, other repairs and restoring the center to its original state would have cost $6 million. The decision was made to preserve the consoles and other equipment and destroy the building.
In 1999, consoles, displays and other equipment were moved to the Kennedy Space Center Visitor Complex to a recreation of the MCC inside the Kurt Debus Center. Consoles and displays were reassembled and many are powered on.
== References ==
Cape Canaveral Air Force Station Mission Control Center, HAER FL-8-AV (PDF): extensive report on the MCC
== External links ==
NASA KSC documentary videos on the Mercury Mission Control Center
Photos of original building and recreation of the flight control area
MCC Gallery
Documentary of MCC | Wikipedia/Mercury_Control_Center |
Space Network (SN) is a NASA program that combines space and ground elements to support spacecraft communications in Earth vicinity. The SN Project Office at Goddard Space Flight Center (GSFC) manages the SN, which consists of:
The geosynchronous Tracking and Data Relay Satellites (TDRS),
Supporting ground terminal systems,
The Bilateration Ranging and Transponder System (BRTS),
Merritt Island Launch Annex (MILA) relay,
Network Control Center Data System (NCCDS).
== Satellite generations ==
Tracking and Data Relay Satellite (TDRS) currently consists of first-generation (F1–F7), and second-generation (F8–F10) satellites.
The space segment of the SN consists of up to six operational relay satellites in geosynchronous orbit. These communications satellites are allocated longitudes for relaying forward and return service signals, for data transfer and tracking, to and from customers (any entity with an Earth-orbiting satellite that has an agreement with the SN to use its communications services). An additional TDRS, F1, provides dedicated support to the National Science Foundation (NSF) through the use of the WSC Alternate Relay Terminal (WART). Additional spare TDRSs may be in geosynchronous orbit.
All first-generation TDRSs (F1–F7, also known as TDRS A–G) carry functionally identical payloads and all second-generation TDRSs (F8–F10, also known as TDRS H–J) carry functionally identical payloads.
A third generation, TDRS K, L, and M, was launched between 2013 and 2017.
== Coverage ==
For spacecraft operating in a low Earth orbit (LEO) 73 km to 3000 km altitude, the SN is capable of providing tracking and data acquisition services over 100% of the spacecraft's orbit. Spacecraft sent to more distant or exotic destinations rely on either Deep Space Network or their own custom, dedicated networks.
== Network Control Center Data System ==
The Network Control Center Data System (NCCDS) is an element of the SN ground segment. Collocated with the White Sands Tracking and Data Relay Satellite System, the NCCDS is the operations control facility for the network. It schedules most Space Network elements and supporting elements and provides interfaces for planning, acquisition, control, and status of the Space Network. The NCCDS is the point-of-contact between customers (who have satellites in orbit) and the Space Network for most scheduling and real-time performance. A customer may obtain Space Network support by submitting specific schedule requests to or establishing generic requirements with the NCCDS. The NCCDS translates customers’ requirements into specific TDRS communications and tracking events. Additionally, the NCCDS notifies affected customers of scheduled system outages so that Mission Operation Centers (MOCs) can properly plan mission activities. Upon MOC request, the NCCDS provides operational performance information (such as data presence monitoring indicators and data quality monitoring data) on scheduled services during actual support to determine if conditions exist that will affect data quality.
The NCCDS issues Network Advisory Messages (NAMs) to provide up-to-date information on network conditions and constraints. These messages are accessible via the NCCDS active NAM web site. The Goddard Space Flight Center uses the NAMs as a means of letting customers know of any performance constraints associated with the TDRS spacecraft.
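To make the scheduling flow concrete, here is a hypothetical sketch in Python of the kind of request-to-event translation the NCCDS performs. All field names and the toy first-fit policy are invented for illustration; they are not the actual NCCDS message formats or scheduling algorithms.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScheduleRequest:
    """Hypothetical customer request -- field names are illustrative only."""
    customer_id: str            # mission or spacecraft identifier
    service_type: str           # e.g. "S-band single-access return"
    earliest_start: datetime    # start of the acceptable window
    latest_start: datetime      # end of the acceptable window
    duration: timedelta

@dataclass
class TdrsEvent:
    """A concrete communications/tracking event on one named relay."""
    tdrs: str
    start: datetime
    stop: datetime
    customer_id: str

def translate(req: ScheduleRequest, next_free: dict[str, datetime]) -> TdrsEvent | None:
    """Toy first-fit policy: use the earliest-available relay that fits the window."""
    for tdrs, free_at in sorted(next_free.items(), key=lambda kv: kv[1]):
        start = max(free_at, req.earliest_start)
        if start <= req.latest_start:
            return TdrsEvent(tdrs, start, start + req.duration, req.customer_id)
    return None  # the real NCCDS would negotiate, reprioritize, or report a conflict
```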
== See also ==
Deep Space Network
Near Earth Network
Indian Deep Space Network
Tracking and Data Relay Satellite
Eastern Range
SCaN Program
== References ==
== External links ==
NASA's Goddard Space Flight Center Space Network Official Page | Wikipedia/Space_Network |
The Near Earth Network (NEN, formerly GN or Ground Network) provides orbital communications support for near-Earth orbiting customer platforms via various ground stations, operated by NASA and other space agencies. It uses a number of different dishes scattered around the globe. The antennas must be able to move fast for tracking of objects in low Earth orbit (LEO). The NEN and Space Network (SN) combined were previously referred to as the Spaceflight Tracking and Data Network (STDN).
== Ground stations ==
NEN uses several stations run by NASA:
Alaska Satellite Facility in Fairbanks, Alaska — Supports: S/X Band — Assets: 11.3m/11m/9.1m
Kennedy Uplink Station, Merritt Island Launch Annex (MILA) — Supports: S-band — Assets: 6.1m
McMurdo, Antarctica — Supports: S/X Band — Assets: 10m
Ponce de Leon Station, Florida — Supports: S-band — Assets: 6.1m[1]
Wallops Ground Station, Wallops Island, Virginia — Supports: VHF, S/X Band — Assets: 11m/5m
White Sands Ground Station, New Mexico — Supports: VHF, S/Ka Band — Assets: 18.3m
NEN uses Stations run by KSAT — Kongsberg Satellite Services:
Singapore — Supports: S/X Band — Assets: 9.1m
Svalbard Satellite Station Norway — Supports: S/X Band — Assets: 11.3m/11.3m/13m
TrollSat, Antarctica — Supports: S/X Band — Assets: 7.3m/7.3m
NEN uses a Station run by SANSA — South African National Space Agency:
Hartebeesthoek, South Africa — Supports: S/X Band — Assets: 12m/10m
A new ground station is under construction in Matjiesfontein, scheduled to come online in 2025.
NEN uses Stations run by SSC — Swedish Space Corporation
Kiruna, Sweden — Supports: S/X Band — Assets: 13m/13m
Santiago, Chile — Supports: S Band — Assets: 9m/12m/13m
SSC Space US North Pole, Alaska — Supports: S/X Band — Assets: 5.4m/7.3m/11m/13m
SSC Space US Dongara, Australia — Supports: S/X Band — Assets: 13m
SSC Space US South Point, Hawaii — Supports: S/X Band — Assets: 13m/13m
Also under contract was Poker Flat Research Range. Additionally, the MILA and Wallops stations provide pre-launch, launch, and landing communications support for the Space Shuttle program.
== Authority and responsibility ==
The NEN falls under NASA's SOMD (Space Operations Mission Directorate), interoperating with the SCaN Program offices. The Goddard Space Flight Center Ground Network Project has responsibility for maintaining the NEN, as well as implementing the Satellite laser ranging (SLR) Network.
== Support for Constellation ==
The NEN was slated to support the Constellation Program, including the Ares launch vehicle, NISN (NASA Integrated Services Network), FDF (Flight Dynamics Facilities), KSC Launch Control Center, and the Constellation Mission Control Center (MCC). Constellation has since been canceled.
== See also ==
Deep Space Network (DSN)
Eastern Range (ER)
Indian Deep Space Network (IDSN)
Space Communications and Navigation Program (SCaN)
Space Network (SN)
Tracking and Data Relay Satellite (TDRS)
== References ==
== Footnotes ==
Constellation Architecture Requirements Document (CARD), CxP 70000, Revision C (December 25, 2008). NASA: Constellation Systems Engineering and Integration Office
Space Network User's Guide (SNUG), 450-SNUG, Rev 9.
== External links ==
NASA's Goddard Space Flight Center Near Earth Network Official Page
| Wikipedia/Near_Earth_Network |
The Space Systems Processing Facility (SSPF), originally the Space Station Processing Facility, is a three-story industrial building at Kennedy Space Center for the manufacture and processing of flight hardware, modules, structural components and solar arrays of the International Space Station, and future space stations and commercial spacecraft. It was built in 1992 at the space complex's industrial area, just east of the Operations and Checkout Building.
The SSPF includes two processing bays, an airlock, operational control rooms, laboratories, logistics areas for equipment and machines, office space, a ballroom and conference halls, and a cafeteria.
The processing areas, airlock, and laboratories are designed to support non-hazardous Space Station and Space Shuttle payloads in class 100,000 clean work areas. The building has a total floor area of 42,500 m2 (457,000 sq ft).
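For readers unfamiliar with the cleanliness rating: under the US FED-STD-209E standard, "class 100,000" means at most 100,000 airborne particles of 0.5 μm or larger per cubic foot. A small conversion sketch follows; the ISO 14644-1 limit formula used is the standard one, and the result (roughly ISO class 8) follows from it.

```python
import math

def fed209e_to_per_m3(fed_class: float) -> float:
    """FED-STD-209E class N allows at most N particles (>= 0.5 um) per cubic foot."""
    return fed_class * 35.3147                  # cubic feet -> cubic metres

def iso14644_class(per_m3: float, size_um: float = 0.5) -> float:
    """Invert the ISO 14644-1 limit C = 10**N * (0.1 / d)**2.08 to get class N."""
    return math.log10(per_m3 / (0.1 / size_um) ** 2.08)

n = fed209e_to_per_m3(100_000)
print(f"{n:.3g} particles/m^3, i.e. about ISO class {iso14644_class(n):.0f}")  # ~ISO 8
```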
== History and construction ==
During the re-designing phase of Space Station Freedom in early 1991, Congress approved new plans for NASA to lead the project and begin manufacturing its components for the future International Space Station. Kennedy Space Center was selected as the ideal launch processing complex for the ISS, as well as hosting all the internationally manufactured modules and station elements.
However, the Operations and Checkout Building (which was originally to be the prime factory for station launch processing) was insufficient in size to accommodate all the components. On March 26, 1991, engineers at Kennedy Space Center, along with contractor Metric Constructions Inc. of Tampa, Florida, broke ground on a new $56 million Space Station Processing Facility, situated adjacent to the O&C. The design called for a 457,000-square-foot multifunction building housing an enormous processing bay, laboratories, control rooms, staging areas, communications and control facilities, and office space for some 1,400 NASA and contractor employees.
KSC Deputy Director Gene Thomas described the construction: "The skyline around here is really going to change. This will be the biggest facility that we have built since the Apollo days". The SSPF used reinforced concrete and some 4,300 tons of steel. The building was structurally completed and topped out by mid 1992.
After three years of construction, interior fitting and equipment set-up, the SSPF formally opened on June 23, 1994.
Into the 21st century, more commercial partners began using the SSPF for projects unrelated to the ISS. In addition, after the announcement that ISS operations would not continue beyond 2030 (leading to its planned de-orbit in 2031), the SSPF increasingly became a space for general space systems rather than being tailored specifically to the ISS. For these reasons, in December 2023 the facility was renamed from the Space Station Processing Facility to the Space Systems Processing Facility, keeping the same acronym.
== Operations and manufacturing processes ==
At the SSPF, space station modules, trusses and solar arrays are prepped and made ready for launch. The low and high bays are fully air conditioned, and the ambient temperature is maintained at 12 °C (54 °F) at all times. Workers and engineers wear full non-contaminant clothing while working. Modules receive cleaning and polishing, and some areas are temporarily disassembled for the installation of cables, electrical systems and plumbing. In another area, shipments of spare materials are available for installation. International Standard Payload Rack frames are assembled and welded together, allowing instruments, machines and science experiment boxes to be fitted. Once racks are fully assembled, they are hoisted by a special manually operated robotic crane and carefully maneuvered into place inside the space station modules. Each rack weighs from 700 to 1,100 kg (1,500 to 2,400 lb) and connects inside the module on special mounts with screws and latches.
Cargo bags for MPLM modules are filled with their cargo such as food packages, science experiments and other miscellaneous items on-site in the SSPF, and loaded into the module by the same robotic crane and strapped in securely.
Many of the builders accompanied their modules from around the world during their manufacturing, and worked at KSC for months to years during final assembly. Many ISS modules were renamed after successfully launching.
=== Station Integration Testing ===
Regarding the launch of modules of the International Space Station (ISS), there had been philosophical differences for years between designers and payload processors over whether to "ship and shoot" or to perform integration testing prior to launch. The former involved building a station module and launching it without ever physically testing it with other modules. Integration testing was not originally in the ISS plan, but in 1995 Johnson Space Center designers began to consider it and to embed KSC personnel at module factories. Multi-Element Integration Testing (MEIT) of ISS modules at KSC was officially in the books in 1997.
Three MEITs and one Integration Systems Test (IST) were conducted for the ISS, taking about three years from planning to completion and closure:
MEIT1: US Lab, Z1 truss, P6 truss, and a Node 1 emulator
Planning began in 1997; testing began in January 1999
MEIT2: S0 truss/Mobile Transporter/Mobile Base System, S1 truss, P1 truss, P3 truss, P4 truss, and a US Lab emulator.
MEIT3: Japanese Experiment Module, Node 2, and the US Lab emulator
Completed in 2007
Node2 IST: Node 2 and US Lab and Node 1 emulators, as part of the ISS Flight Emulator
After the launch of the Destiny lab, an emulator of it was built for MEIT testing, since the lab controlled many other modules. Among the items checked were mechanical connections, the ability to flow power and fluids between modules, and the flight software.
Numerous issues were found and rectified during these ground tests; many of them could not have been fixed in orbit.
== Building specifications ==
The SSPF's High Bays provide maximum flexibility for manufacturing, assembly, testing and processing payloads and elements destined for space. The bays are enormous cleanrooms equipped with overhead cranes, commodities-servicing equipment and a secure backup-power supply. The facility also has 15 offline labs.
=== Intermediate Bay (I-bay) ===
Dimensions: 338 ft (103 m) in length, by 50 ft (15 m) in width
Ceiling height 30 ft (9.1 m)
100,000-class clean work area
=== High Bay ===
Dimensions: 472 ft (144 m) in length, by 121 ft (37 m) in width
Ceiling height 69 ft (21 m)
Can be separated into eight different processing areas
=== Cranes ===
I-bay: Two 5,000 kg (11,000 lb) capacity
High Bay: Two 30,000 kg (66,000 lb) capacity
=== Commodities and Servicing Equipment ===
Ammonia servicing machines
Compressed air supply (125 psi)
Potable water pipes
=== Electrical Services ===
480 V 3-phase power at 60 Hz
Uninterruptible power supply (450 kVA)
=== Laboratory facilities ===
9 independently operated control rooms
15 labs, 2 chemical labs, and 2 darkrooms
1 Certified offline lab for planetary protection processing (Class-100 clean work area)
3 ISS Environmental Simulator (ISSES) Chambers, can be used to expose ISS life science ground controls to ISS environmental conditions (i.e., temperature, humidity).
Experiment Monitoring Area (EMA), used to monitor ISS life science experiments
Office floor area: 140,000 sq ft (13,000 m2) of office/work space
== Space Station and space hardware components currently in the SSPF ==
As of 24 June 2023:
Multi-Purpose Logistics Modules Raffaello and Donatello.
Two Lightweight Multi-Purpose Equipment Support Structure Carriers (LMCs).
Lunar Gateway habitat module, built by Lockheed Martin - used as a training rig.
Dream Chaser 'mini space shuttle' - designed and manufactured by Sierra Nevada Corporation.
Machinery for experiments in the Lunar Gateway
Bigelow Aerospace inflatable habitat mockup
When the lights in the building are on, most of these components can be seen on the live webcam from the facility.
=== Current and future activities ===
After the completion of the International Space Station in 2011, the SSPF factory was dormant for several months until early 2012, when the building was slightly refurbished so that space companies (such as Orbital ATK, SpaceX and eventually Sierra Nevada Corporation) could move in to manufacture, process and load Cygnus and Dragon spacecraft and their on-board payloads as part of the Commercial Resupply Services program. NASA's upcoming Artemis mission hardware, such as Moon and Mars space station modules and Space Launch System core stage engine sections, as well as the Dream Chaser mini space shuttle, began manufacturing and processing operations in the high bay as of December 2022.
The building itself is open to the public and tours are offered free of charge by the employees. Exclusive tours of many areas of the SSPF are part of the Kennedy visitor complex's enhanced bus tour package.
In 2016, the laboratories of the SSPF were utilized by many small science companies and student unions with scientific equipment to study the feasibility of growing vegetables in space, such as the Veggie plant growth system, and the Advanced Plant Habitat; to launch as scientific payloads to the International Space Station.
== Events ==
When the high bay area is less busy, a variety of events and conferences are held in various places within the SSPF building. Occasional STEM exhibitions take place where visitors (from children and teenagers to university students) can visit the SSPF and its ballroom to learn about the building's history, manufacturing activities, biological and chemical sciences, and the future vision of space operations at Kennedy Space Center, including the Lunar Gateway mockup module. The ballroom also doubles as a lecture hall for presentations. On one occasion the high bay itself was used for the National Space Council's second revived meeting, on February 21, 2018.
Tenants including Northrop Grumman, Lockheed Martin and Airbus have also moved facilities into the SSPF.
== References == | Wikipedia/Space_Systems_Processing_Facility |
A launch control center (LCC), in the United States, is the main control facility for intercontinental ballistic missiles (ICBMs). A launch control center monitors and controls missile launch facilities. From a launch control center, the missile combat crew can monitor the complex, launch the missile, or relax in the living quarters (depending on the ICBM system). The LCC is designed to provide maximum protection for the missile combat crew and equipment vital to missile launch. Missile silos are common across the midwestern United States, and over 450 missiles remain in US Air Force (USAF) service.
Due to modern conventional weapons, missile launch control centers are becoming rarer in the US, and the force is expected to remain at 450 Minuteman III missiles.
== General information ==
All LCCs are dependent on a missile support base (MSB) for logistics support. For example, Minot AFB is the MSB for the 91st Missile Wing.
Three types of Minuteman LCCs exist:
Alternate Command Post (ACP): performs backup functions to the missile support base; controls missile wing communications
Squadron Command Post (SCP): performs backup functions to the ACP; controls squadron execution and communications
Primary LCC (PLCC): performs execution and rapid message processing
There are four configurations of the LCC, differing primarily in the amount and location of communications equipment. Functionally, there are three LCC designations. One Alternate Command Post (ACP) LCC is located within each Minuteman wing and serves as backup for the wing command post. Three Squadron Command Posts (SCPs) serve as command units for the remaining squadrons within the wing, and report directly to the wing command post. The ACP doubles as SCP for the squadron it is located within. The remainder of the LCCs (16) are classified as primary LCCs. Four primary LCCs are located within each squadron and report to their respective command post.
== Titan II LCC ==
The Titan rocket LCCs held four crew members: the Missile Combat Crew Commander (MCCC), the Deputy Missile Combat Crew Commander (DMCCC), Ballistic Missile Analyst Technician (BMAT), and the Missile Facilities Technician (MFT).
Titan II had a three-story LCC dome. The first level was the crew's living area and contained a kitchen, bathroom, bedroom, and a small equipment area that housed an exhaust fan and a water heater. The second level was the launch control area and held the LCCFC (Launch Control Complex Facility Console, the main launch console), the ALOC (Alternate Launch Officer Console), the Control Monitor Group (monitored the missile), and several other pieces of equipment. The lowest level, level 3, held communications equipment, the two battery backup supplies, the sewage lift station, the motor-generator, and several other pieces of equipment.
There were two types of Titan II sites: standard, and ACP (alternate command post) sites. ACPs had all of the equipment that one would find on a standard site plus additional communication equipment.
== Minuteman facilities ==
=== Launch Control Center ===
A Minuteman wing consists of either three or four squadrons. Five flights comprise each squadron. Each flight directly controls ten Minuteman missiles remotely. Each flight is commanded from a Launch Control Center, or LCC.
The Minuteman LCC is an underground structure of reinforced concrete and steel of sufficient strength to withstand weapon effects. It contains equipment and a Missile combat crew of two officers capable of controlling, monitoring, and launching the 10 Minuteman missiles in unmanned launch facilities (LFs) within the flight. The Combat Crew monitors message traffic from higher headquarters to all the other four flights in its squadron, and has the ability to countermand launch attempts initiated by any other flight in its squadron.
One LCC in each Minuteman squadron is designated a Squadron Command Post and has the ability to take control of and remotely launch the Minuteman missiles of any other flight in its squadron, in the event that an authenticated Emergency War Order (EWO) is received and the flight designated in the EWO fails to execute the ICBM fire mission contained therein. One of the wing's Squadron Command Posts is designated the Wing Command Post and can execute an authenticated EWO for any flight of Minuteman missiles in the wing. It can also countermand a launch attempt by any flight in any squadron in the wing.
The Minuteman Combat Crew has voice communications capability with all the LFs of the flight which it commands. Under ordinary circumstances this is almost always used to coordinate with maintenance crews on-site at an LF. If the maintenance crew is performing a site penetration (entry into the missile silo), communication with the Combat Crew will always be necessary in order to properly authenticate (prove who you are). Under extraordinary circumstances it may be necessary to communicate with a flight security squad that is dispatched to the LF, usually to investigate a perimeter security alarm.
Each Combat Crew has a voice circuit called the Hardened Voice Channel which links the five Combat Crews (LCCs) that comprise the squadron. There is also a voice circuit called the EWO (Emergency War Order) which links the squadron command posts (CPs). One of the squadron command posts (CPs) is also the wing CP. These two voice circuits work like a party line with all LCCs connected simultaneously. Thus, it is not possible for any of the Combat Crews to have private conversations. The term "EWO" used here is not to be confused with an actual Emergency War Order message from the National Command Authority. The same term is used to denote both this circuit and the message transmitted over the Primary Alert System.
Message traffic over the LF, HVC, and EWO voice circuits are transmitted via the Hardened Intersite Cable System.
Each Combat Crew also has access to commercial telephone lines for ordinary civilian communications.
The outer structure of the LCC itself is cylindrical with hemispherical ends. Its walls are of steel-reinforced concrete approximately 4.5 feet (1.4 m) thick. It is normally accessed from the LCF/MAF by a freight-size elevator. A blast door permits entry into the LCC from the tunnel junction (adjoining the LCC Equipment Building housing the backup diesel-electric generator and emergency supplies). An escape hatch 3 ft (0.9 m) in diameter is located at the far end of the LCC. The escape hatch and associated tunnel are constructed to withstand weapon effects and allow personnel egress in the event of damage to the vertical access shaft. The tunnel is sand-filled, and the sand will fall into the LCC if the hatch at the bottom of the tunnel is opened. Essential LCC launch equipment and communications gear, along with the missile combat crew, are located in a shock-isolated compartment suspended within the outer structure. The room is steel and is suspended as a pendulum by four shock isolators.
The LCC's electronics are fully shielded from Electromagnetic Pulse damage with carbon block surge arresters.
=== REACT-A LCCs ===
REACT-A capsules were brought online in the mid-1990s and continue in service with the 341st Missile Wing, the 90th Missile Wing, and the 91st Missile Wing. This was a major upgrade from the ILCS (Improved Launch Control System) capsules at the 341 MW, which dated to the late 1970s, and from the Command Data Buffer (CDB) capsules at the 90th and 91st missile wings. The two launch control officers now sit side by side and must turn four launch keys to initiate a launch.
=== REACT-B LCCs ===
The B/CDB capsules were upgraded to REACT-B in the mid-1990s and used only at the 321st Missile Wing at Grand Forks AFB, ND and the 564th Missile Squadron (the "odd squad") of the 341st Missile Wing at Malmstrom AFB, MT until both were shut down. (19 August 1998 for the 564th, 30 September 1998 for the 321st.)
=== CDB LCCs ===
Command Data Buffer (CDB) was a configuration for early Minuteman missiles at the 90th Missile Wing at FE Warren AFB, WY, the 91st Missile Wing at Minot AFB, ND, and the 351st Missile Wing at Whiteman AFB, MO. The overall layout of the LCC did not change through the upgrade to REACT, however there were some major equipment changes.
=== Airborne Launch Control Centers ===
Airborne Launch Control Centers (ALCC) provide a survivable launch capability for the Minuteman force by utilizing the Airborne Launch Control System (ALCS) which is operated by an airborne missile combat crew.
From 1967 to 1998, the ALCC mission was performed by United States Air Force EC-135 command post aircraft. This included EC-135A, EC-135C, EC-135G, and EC-135L aircraft. Today, the ALCC mission is performed by airborne missileers from Air Force Global Strike Command's (AFGSC) 625th Strategic Operations Squadron (STOS) and United States Strategic Command (USSTRATCOM). Since October 1, 1998, the ALCS has been located on board the United States Navy's E-6B Mercury. The ALCS crew is integrated into the battle staff of the USSTRATCOM "Looking Glass" Airborne Command Post (ABNCP) and is on alert around the clock.
=== Launch control equipment building ===
The Launch Control Equipment Building (LCEB) is a hardened, below-ground capsule for support equipment such as air conditioners, diesel generators, etc. At Wing 1 (and the former Wing 2 setup at Ellsworth AFB) this equipment is above ground ("topside") in the MAF.
=== Missile Alert Facility ===
A Minuteman Missile Alert Facility (MAF), previously known as the Launch Control Facility (LCF), is the above-ground component. It is "soft" or not able to withstand nuclear explosions. It consists of a security control office, dining room, kitchen, sleeping areas for the security forces stationed there (and occasional maintenance troops), garages for various vehicles, and other facilities.
=== Netlink ===
As of 2006, all Minuteman LCCs were modified to handle the LCC Netlink upgrade. The Netlink system brought internet access underground for missile combat crews.
=== Communications equipment ===
Primary Alerting System (PAS)
Strategic Automated Command and Control System (SACCS) - formerly known as Strategic Air Command Digital Information Network (SACDIN)
Minimum Essential Emergency Communications Network (MEECN)
Air Force Satellite Communications (AFSATCOM), using both Milstar and Defense Satellite Communications System satellites
Survivable Low Frequency Communications System (SLFCS)
Hardened Intersite Cable System lines (HICS)
Voice Dial Lines 1 & 2
The Minuteman LCC differs from previous missile systems in that it only held room for two personnel, the Missile Combat Crew Commander (MCCC) and the Deputy Missile Combat Crew Commander (DMCCC).
Previously, each MAF was equipped with the ICBM SHF Satellite Terminal (ISST) communications system. This system has since been deactivated, with Francis E. Warren Air Force Base being the first to completely remove the system components.
== Peacekeeper LCC ==
The Peacekeeper LCCs were non-REACT modified CDB LCCs. Instead of replacing the command and control equipment, the 'old' Minuteman CDB C2 system was modified for the 50 Peacekeeper ICBMs.
== See also ==
Airborne Launch Control System (ALCS)
Airborne Launch Control Center (ALCC)
Continuity of government
Emergency Rocket Communications System (ERCS)
Game theory
Ground Wave Emergency Network (GWEN)
Minimum Essential Emergency Communications Network (MEECN)
Post-Attack Command and Control System (PACCS)
Survivable Low Frequency Communications System (SLFCS)
The Cold War
== References ==
== External links ==
U.S. National Park Service article Archived 2006-06-28 at the Wayback Machine with detailed information on Minuteman missile launch control centers.
Titan Missile Museum: Pima Air & Space Museum
20th Century Castles: LCC real estate sales | Wikipedia/Missile_launch_control_center |
Cape Canaveral Space Force Station (CCSFS) is an installation of the United States Space Force's Space Launch Delta 45, located on Cape Canaveral in Brevard County, Florida.
Headquartered at the nearby Patrick Space Force Base, the station is the primary launch site for the Space Force's Eastern Range with four launch pads currently active (Space Launch Complexes 36, 40, 41 and 46). The facility is south-southeast of NASA's Kennedy Space Center on adjacent Merritt Island, with the two linked by bridges and causeways. The Cape Canaveral Space Force Station Skid Strip provides a 10,000-foot (3,000 m) runway close to the launch complexes for military airlift aircraft delivering heavy and outsized payloads to the Cape.
A number of American space exploration pioneers were launched from CCSFS, including the first U.S. Earth satellite (1958), first U.S. astronaut (1961), first U.S. astronaut in orbit (1962), first two-man U.S. spacecraft (1965), first U.S. uncrewed lunar landing (1966), and first three-man U.S. spacecraft (1968). It was also the launch site for all of the first spacecraft to (separately) fly past each of the planets in the Solar System (1962–1977), the first spacecraft to orbit Mars (1971) and roam its surface (1996), the first American spacecraft to orbit and land on Venus (1978), the first spacecraft to orbit Saturn (2004), and to orbit Mercury (2011), and the first spacecraft to leave the Solar System (1977). Portions of the base have been designated a National Historic Landmark for their association with the early years of the American space program.
Cape Canaveral was known as the Cape Canaveral Launch Area upon its founding in 1949, but was renamed the LRPG Launching Area in 1950. It was known as Cape Canaveral Auxiliary Air Force Base from 1951 to 1955, and Cape Canaveral Missile Test Annex from 1955 to 1964. The facility was known as Cape Kennedy Air Force Station from 1964 to 1974, and as Cape Canaveral Air Force Station from 1974 to 1994 and from 2000 to 2020, taking the designation Cape Canaveral Air Station from 1994 to 2000. The facility was renamed "Cape Canaveral Space Force Station" in December 2020.
== History ==
The CCSFS area had been used by the United States government to test missiles since 1949, when President Harry S. Truman established the Joint Long Range Proving Ground at Cape Canaveral. The location was among the best in the continental United States for this purpose, as it allowed for launches out over the Atlantic Ocean and is closer to the equator than most other parts of the United States, allowing rockets to get a boost from the Earth's rotation.
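The size of that rotational boost follows directly from Earth's rotation rate and the launch site's latitude: the eastward surface speed is ω·R·cos(latitude). A quick check in Python (the latitudes are approximate):

```python
import math

OMEGA = 7.2921159e-5   # Earth's sidereal rotation rate, rad/s
R_EQ = 6378137.0       # Earth's equatorial radius, m

def eastward_boost(lat_deg: float) -> float:
    """Surface rotation speed (m/s) gained by launching due east."""
    return OMEGA * R_EQ * math.cos(math.radians(lat_deg))

for site, lat in (("Equator", 0.0), ("Cape Canaveral", 28.5), ("Baikonur", 45.9)):
    print(f"{site:>15}: {eastward_boost(lat):5.0f} m/s")
```

At Cape Canaveral's roughly 28.5° N latitude this works out to about 410 m/s of free eastward velocity, compared with about 465 m/s at the equator.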
=== Air Force Proving Ground ===
On June 1, 1948, the United States Navy transferred the former Naval Air Station Banana River to the United States Air Force, with the Air Force renaming the facility the Joint Long Range Proving Ground (JLRPG) Base on June 10, 1949. On October 1, 1949, the Joint Long Range Proving Ground Base was transferred from the Air Materiel Command to the Air Force Division of the Joint Long Range Proving Ground. On May 17, 1950, the base was renamed the Long Range Proving Ground Base but three months later was renamed Patrick Air Force Base, in honor of Major General Mason Patrick of the U.S. Army Air Corps. In 1951, the Air Force established the Air Force Missile Test Center.
Early American sub-orbital rocket flights were achieved at Cape Canaveral in 1956. These flights occurred shortly after sub-orbital flights launched from White Sands Missile Range, such as the Viking 12 sounding rocket on February 4, 1955.
Following the Soviet Union's successful Sputnik 1 (launched on October 4, 1957), the United States attempted its first launch of an artificial satellite from Cape Canaveral on December 6, 1957. However, the rocket carrying Vanguard TV3 exploded on the launch pad.
NASA was founded in 1958, and Air Force crews launched missiles for NASA from the Cape, known then as Cape Canaveral Missile Annex. Redstone, Jupiter, Pershing 1, Pershing 1a, Pershing II, Polaris, Thor, Atlas, Titan and Minuteman missiles were all tested from the site, the Thor becoming the basis for the expendable launch vehicle (ELV) Delta rocket, which launched Telstar 1 in July 1962. The row of Titan (LC-15, 16, 19, 20) and Atlas (LC-11, 12, 13, 14) launch pads along the coast came to be known as Missile Row in the 1960s.
=== Project Mercury ===
NASA's first crewed spaceflight program was prepared for launch from Canaveral by U.S. Air Force crews. Mercury's objectives were to place a crewed spacecraft into Earth's orbit, investigate human performance and ability to function in space, and safely recover the astronaut and spacecraft. Suborbital flights were launched by derivatives of the Army's Redstone missile from LC-5; two such flights were manned by Alan Shepard on May 5, 1961, and Gus Grissom on July 21. Orbital flights were launched by derivatives of the Air Force's larger Atlas D missile from LC-14. The first American in orbit was John Glenn on February 20, 1962. Three more orbital flights followed through May 1963.
Flight control for all Mercury missions was provided at the Mercury Control Center located at Canaveral near LC-14.
=== Name changes ===
On November 29, 1963, following the death of President John F. Kennedy, his successor, President Lyndon B. Johnson, issued Executive Order 11129 renaming both NASA's Merritt Island Launch Operations Center and "the facilities of Station No. 1 of the Atlantic Missile Range" (a reference to the Cape Canaveral Missile Test Annex) as the "John F. Kennedy Space Center". He had also convinced then-Florida Governor C. Farris Bryant (D-Florida) to change the name of Cape Canaveral to Cape Kennedy. This resulted in some confusion in public perception, which conflated the two. NASA Administrator James E. Webb clarified this by issuing a directive stating the Kennedy Space Center name applied only to Merritt Island, while the Air Force issued a general order renaming the Air Force launch site Cape Kennedy Air Force Station. This name was used through the Project Gemini and early Apollo program.
However, the geographical name change proved to be unpopular, owing to the historical longevity of Cape Canaveral (one of the oldest place-names in the United States, dating to the early 1500s). In 1973 and 1974 respectively, the geographical name and the Air Force Station name were reverted to Canaveral after the Florida legislature passed a bill changing the name back, which was signed into law by Florida governor Reubin Askew (D-Fla.).
On August 7, 2020, U.S. military contracts referred to the installation as Cape Canaveral Space Force Station. The installation was formally renamed on December 9, 2020.
=== Gemini and early Apollo ===
The two-man Gemini spacecraft was launched into orbit by a derivative of the Air Force Titan II missile. Twelve Gemini flights were launched from LC-19, ten of which were crewed. The first crewed flight, Gemini 3, took place on March 23, 1965. Later Gemini flights were supported by seven uncrewed launches of the Agena Target Vehicle on the Atlas-Agena from LC-14, to develop rendezvous and docking, critical for Apollo. Two of the Atlas-Agena vehicles failed to reach orbit on Gemini 6 and Gemini 9, and a mis-rigging of the nosecone on a third caused it to fail to eject in orbit, preventing docking on Gemini 9A. The final flight, Gemini 12, launched on November 11, 1966.
The capabilities of the Mercury Control Center were inadequate for the flight control needs of Gemini and Apollo, so NASA built an improved Mission Control Center in 1963, which it decided to locate at the newly built Manned Spacecraft Center in Houston, Texas, rather than at Canaveral or at the Goddard Space Flight Center in Maryland.
The Apollo program's goal of landing a man on the Moon required development of the Saturn family of rockets. The large Saturn V rocket necessary to take men to the Moon required a larger launch facility than Cape Canaveral could provide, so NASA built the Kennedy Space Center located west and north of Canaveral on Merritt Island. But the earlier Saturn I and IB could be launched from the Cape's Launch Complexes 34 and 37. The first four Saturn I development launches were made from LC-34 between October 27, 1961, and March 28, 1963. These were followed by the final test launch and five operational launches from LC-37 between January 29, 1964, and July 30, 1965.
The Saturn IB uprated the capability of the Saturn I, so that it could be used for Earth orbital tests of the Apollo spacecraft. Two uncrewed test launches of the Apollo command and service module (CSM), AS-201 and AS-202, were made from LC-34, and an uncrewed flight (AS-203) to test the behavior of upper stage liquid hydrogen fuel in orbit from LC-37, between February 26 and August 25, 1966. The first crewed CSM flight, AS-204 or Apollo 1, was planned to launch from LC-34 on February 21, 1967, but the entire crew of Gus Grissom, Ed White and Roger Chaffee were killed in a cabin fire during a spacecraft test on pad 34 on January 27, 1967. The AS-204 rocket was used to launch the uncrewed, Earth orbital first test flight of the Apollo Lunar Module, Apollo 5, from LC-37 on January 22, 1968. After significant safety improvements were made to the Command Module, Apollo 7 was launched from LC-34 to fulfill Apollo 1's mission, using Saturn IB AS-205 on October 11, 1968.
In 1972, NASA deactivated both LC-34 and LC-37. It briefly considered reactivating both for Apollo Applications Program launches after the end of Apollo, but instead modified the Kennedy Space Center launch complex to handle the Saturn IB for the Skylab and Apollo-Soyuz Test Project launches. The LC-34 service structure and umbilical tower were razed, leaving only the concrete launch pedestal as a monument to the Apollo 1 crew. In 2001, LC-37 was recommissioned and converted to service Delta IV launch vehicles.
=== Subsequent activity ===
The Air Force chose to expand the capabilities of the Titan launch vehicles for its heavy lift capabilities. The Air Force constructed Launch Complexes 40 and 41 to launch Titan III and Titan IV rockets just south of Kennedy Space Center. A Titan III has about the same payload capacity as the Saturn IB at a considerable cost savings.
Launch Complexes 40 and 41 have been used to launch defense reconnaissance, communications and weather satellites and NASA planetary missions. The Air Force also planned to launch two crewed space projects from LC-40 and LC-41: the Dyna-Soar, a crewed orbital rocket plane (canceled in 1963), and the USAF Manned Orbiting Laboratory (MOL), a crewed reconnaissance space station (canceled in 1969).
From 1974 to 1977 the powerful Titan-Centaur became the new heavy lift vehicle for NASA, launching the Viking and Voyager series of spacecraft from Launch Complex 41. Complex 41 later became the launch site for the most powerful uncrewed U.S. rocket, the Titan IV, developed by the Air Force.
With increased use of a leased launch pad by private company SpaceX, the Air Force launch support operations at the Cape planned for 21 launches in 2014, a fifty percent increase over the 2013 launch rate. SpaceX had reservations for a total of ten of those launches in 2014, with an option for an eleventh.
== Uncrewed launches at Cape Canaveral ==
The first United States satellite launch, Explorer 1, was made by the Army Ballistic Missile Agency on February 1, 1958 (UTC) from Canaveral's LC-26A using a Juno I RS-29 missile. NASA's first launch, Pioneer 1, came on October 11 of the same year from LC-17A using a Thor-Able rocket.
Besides Project Gemini, the Atlas-Agena launch complexes LC-12 and LC-13 were used during the 1960s for the uncrewed Ranger and Lunar Orbiter programs and the first five Mariner interplanetary probes. The Atlas-Centaur launch complex LC-36 was used for the 1960s Surveyor uncrewed lunar landing program and the last five Mariner probes through 1973.
NASA has also launched communications and weather satellites from Launch Complexes 40 and 41, built at the north end of the Cape in 1964 by the Air Force for its Titan IIIC and Titan IV rockets. From 1974 to 1977 the powerful Titan IIIE served as the heavy-lift vehicle for NASA, launching the Viking and Voyager series of planetary spacecraft and the Cassini–Huygens Saturn probe from LC-41.
Three Cape Canaveral pads are currently operated by private industry for military and civilian launches: SLC-41 for United Launch Alliance's Atlas V and Vulcan Centaur; LC-36 for Blue Origin's New Glenn and SLC-40 for the SpaceX Falcon 9.
=== Boeing X-37B ===
The Boeing X-37B, a reusable uncrewed spacecraft operated by USSF, which is also known as the Orbital Test Vehicle (OTV), has been successfully launched four times from Cape Canaveral. The first four X-37B missions have been launched with Atlas V rockets. Past launch dates for the X-37B spaceplane include April 22, 2010, March 5, 2011, December 11, 2012, and May 20, 2015. The fourth X-37B mission landed at the Kennedy Space Center on May 7, 2017, after 718 days in orbit. The first three X-37B missions all made successful autonomous landings from space to a 15,000 foot (4,600 m) runway located at Vandenberg Space Force Base in California which was originally designed for Space Shuttle return from orbit operations.
== Operations, infrastructure and facilities ==
Of the launch complexes built since 1950, several have been leased and modified for use by private aerospace companies. SLC-17 was used for the Delta II Heavy variant through 2011. SLC-37 and SLC-41 were modified to launch the EELV Delta IV and Atlas V launch vehicles, respectively; these vehicles replaced all earlier Delta, Atlas, and Titan rockets. SLC-47 is used to launch weather sounding rockets, and SLC-46 is reserved for use by Space Florida.
SLC-40 hosted the first launch of the SpaceX Falcon 9 in June 2010. Falcon 9 launches continued from this complex through 2015, consisting of uncrewed Commercial Resupply Services missions for NASA to the International Space Station as well as commercial satellite flights. On September 30, 2024, SpaceX launched the first crewed flight from SLC-40. SpaceX has also leased Launch Complex 39A from NASA and completed modifications in 2019 to accommodate Falcon Heavy and Commercial Crew flights to the ISS with its Crew Dragon spacecraft. SpaceX Landing Zones 1 and 2, used to land first stages of the Falcon 9 and the side boosters of the Falcon Heavy, are located at the site of the former LC-13.
On September 16, 2015, NASA announced that Blue Origin had leased Launch Complex 36 and would modify it as a launch site for its next-generation launch vehicles.
For low-inclination (geostationary) launches, the site's latitude of 28°27'N puts it at a slight disadvantage against launch facilities situated nearer the equator. The boost eastward from the Earth's rotation is about 406 m/s (908 miles per hour) at Cape Canaveral, but 463 m/s (1,035 miles per hour) at the European Guiana Space Centre in French Guiana.
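The quoted boosts follow from the Earth's rotation: the surface moves east at roughly ω·R·cos(latitude). A minimal sketch in Python; the rotation rate and radius are standard reference constants, and the small difference from the quoted 406 m/s presumably reflects the slightly smaller local Earth radius at that latitude:

import math

OMEGA = 7.2921159e-5   # Earth's sidereal rotation rate, rad/s
R_EQ = 6378137.0       # Earth's equatorial radius, m

def eastward_boost(lat_deg):
    # Surface speed contributed by Earth's rotation at a given latitude.
    return OMEGA * R_EQ * math.cos(math.radians(lat_deg))

print(eastward_boost(28.45))  # Cape Canaveral: ~409 m/s
print(eastward_boost(5.2))    # Guiana Space Centre: ~463 m/s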
In the case of high-inclination (polar) launches, the latitude does not matter, but the Cape Canaveral area is not suitable, because inhabited areas underlie these trajectories; Vandenberg Space Force Base, Cape Canaveral's West Coast counterpart, or the smaller Pacific Spaceport Complex – Alaska (PSCA) are used instead.
The Cape Canaveral Space Force Museum is located at LC-26. Hangar AE, located in the CCAFS Industrial Area, collects telemetry from launches all over the United States. NASA's Launch Services Program operates three Launch Vehicle Data Centers (LVDCs) within it that display telemetry in real time for engineers.
=== Cape Canaveral Space Force Station Skid Strip ===
Cape Canaveral Space Force Station Skid Strip (ICAO: KXMR, FAA LID: XMR) is a military airport at Cape Canaveral Space Force Station (CCSFS), 7 nautical miles (13 km; 8.1 mi) northeast of Cocoa Beach, Florida. It has an asphalt-paved runway designated 13/31 and measuring 10,000 by 200 ft (3,048 by 61 m). The facility is owned by the United States Space Force (USSF).
This airport is assigned a three-letter location identifier of XMR by the Federal Aviation Administration, but it does not have an International Air Transport Association (IATA) airport code.
The runway was first called the Skid Strip because SM-62 Snark cruise missiles (which lacked wheels) returning from test flights were supposed to skid to a halt on it.
In the 1960s the Douglas C-133 Cargomaster was a frequent visitor, carrying modified Atlas and Titan missiles, used as launch vehicles for crewed and uncrewed space programs leading to the Apollo Moon landings. The Skid Strip was used by NASA's Pregnant Guppy and Super Guppy transport aircraft carrying the S-IVB upper stage for the Saturn IB and Saturn V rockets used in the Apollo program.
Today, it is predominantly used by USAF C-130 Hercules, C-17 Globemaster III and C-5 Galaxy aircraft transporting satellite payloads to CCSFS for mating with launch vehicles.
The CCSFS Skid Strip is sometimes confused with the NASA Shuttle Landing Facility, but that runway, specially constructed for the Space Shuttle, is located on Merritt Island at the adjacent Kennedy Space Center.
=== Naval Ordnance Test Unit ===
A tenant command located at Cape Canaveral SFS is the U.S. Navy's Naval Ordnance Test Unit (NOTU). As a major shore command led by a Navy captain, NOTU was created in 1950 and initially directed almost all of its efforts towards the development and subsequent support of the submarine-launched Fleet Ballistic Missile (FBM) program. This resulted in NOTU being assigned to the director of special projects (now strategic systems programs) with a mission to support the development of the Polaris missile and later the Poseidon missile programs.
NOTU's mission is the support and testing of sea-based weapons systems for the United States Navy and the Royal Navy in a safe environment utilizing the airspace and waterspace of the Eastern Range. The command directly supports the mission capability and readiness of the United States Navy's Trident Submarines as well as the Fleet Ballistic Missile program of the United Kingdom. NOTU operates the Navy Port at Port Canaveral, supporting submarines and surface ships of the U.S. Atlantic Fleet, NATO, Allied and other foreign navies, and assets of the Military Sealift Command. NOTU is composed of over 100 active duty U.S. Navy personnel and over 70 defense contractors.
=== Notable launch complexes ===
Listed below are notable launch complexes at the Cape. For a complete list of all launch complexes, see the table below.
=== LC-1 ===
Launch Complex 1 (LC-1) is located on the eastern tip of Cape Canaveral. It was constructed in the early 1950s for the Snark missile program.
The first launch from this site was conducted on January 13, 1955. The complex was used for Snark missions until 1960, and then was utilized as a helicopter pad during Project Mercury. The final use of the site was from 1983 to 1989 for tethered aerostat balloon radar missions. It is now deactivated.
=== LC-2 ===
Launch Complex 2 (LC-2) is a deactivated launch site on the eastern tip of Cape Canaveral. It was constructed with launch complexes 1, 3, and 4, in the early 1950s, for the Snark missile program.
The first launch from this site was a Snark test conducted on February 18, 1954. The complex was used for Snark missions until 1960, and then was utilized as a helicopter pad during Project Mercury. The final use of the site was during the 1980s for tethered aerostat balloon radar missions.
=== LC-3 ===
Launch Complex 3 (LC-3) is a deactivated launch site southeast of SLC-36 at Cape Canaveral. It was constructed, with launch complexes 1, 2, and 4, in the early 1950s for the Snark missile program.
It was formerly used to launch Bumper, BOMARC, UGM-27 Polaris, and Lockheed X-17 missiles. The pad was also the site of the first launch from Cape Canaveral, a Bumper rocket on July 24, 1950. The site also served as a medical support facility during Project Mercury.
In 2023, after weeks of searching, students from the University of Central Florida, working with archaeologists, discovered the site of the original blockhouse supporting the first Bumper launch just north of the pad at LC-3, including the slab foundation and some of the surrounding Marston mat, all long-buried under heavy scrub.
=== LC-4 ===
Launch Complex 4 (LC-4) was one of the first launch complexes to be built at Cape Canaveral. It consisted of two pads: LC-4, which was used for 25 launches of Bomarc, Matador and Redstone missiles between 1952 and 1960; and LC-4A, which was used for three Bomarc launches between 1958 and 1959.
Following its deactivation in 1960, the original structures at the complex were dismantled. New facilities were built at the site in the 1980s, and it was used for TARS aerostat operations between 1983 and 1989. Following this, the aerostat launch facilities were also removed, and the complex is currently not accessible to the public.
=== LC-9 ===
Launch Complex 9 (LC-9) is a small concrete structure consisting of an elevated launch pedestal and flame trench, centered on a small oval-shaped concrete pad. It is north of Launch Complex 17.
It was used for ten test launches of SM-64 Navaho supersonic nuclear-armed cruise missiles. In addition to LC-9, Navaho tests were also conducted at LC-10 and Edwards Air Force Base. The Navaho was canceled after poor performance in testing: eight of the eleven test launches of the final prototype failed. All of the failed launches were conducted from LC-9.
As of 2023, the concrete launch structure is still standing but is not maintained, and the launch support equipment has been removed. The site is not accessible to the general public.
=== LC-10 ===
Launch Complex 10 (LC-10) was used for one SM-64 Navaho missile launch, and later Jason sounding rockets and the Alpha Draco research missile. It was located north of Launch Complex 17, where Launch Complexes 31 and 32 are now located.
A single Navaho missile was test-launched from LC-10, on August 12, 1957, and was one of only three Navahos to complete a successful flight. Following the cancellation of the Navaho, LC-10 was reused for launches of Jason and Draco sounding rockets during 1958 and 1959. The last launch to use the site was of a Draco on April 27, 1959.
LC-10 was subsequently demolished during the construction of Launch Complexes 31 and 32, which were built on the same site.
=== LC-25 ===
Launch Complex 25 (LC-25) was a four-pad site built for test flights of the US Navy's submarine-launched ballistic missiles Polaris, Poseidon and Trident. It was active from 1958 to 1979.
Pads 25A and 25B were built in 1957. Pad 25B was initially built with an underground launch mechanism known as a ship motion simulator to simulate the roll and pitch of a submarine. It was first used August 14, 1959, and was mothballed in October 1961. Pads 25C and 25D were added in May 1968 for the larger Poseidon. One blockhouse served all four pads; it was extensively reinforced when the Poseidon pads were added.
The complex was dismantled in 1979.
In November 2012, ground was broken for a new $185-million Navy missile test facility, called the Strategic Weapons System Ashore, to be built over the underground structures at LC-25 and LC-29. The facility will allow the testing of fire control, launch systems and navigation for submarine-fired missiles to be conducted at one facility instead of being done by contractors in different locations around the country.
==== Launch history ====
Polaris FTV: 19 launches (April 18, 1958 – October 2, 1959)
Polaris A-1: 16 launches (March 9, 1960 – December 5, 1961)
Polaris A-2: 14 launches (November 10, 1960 – March 5, 1965)
Polaris A-3: 11 launches (February 11, 1963 – July 3, 1964)
Poseidon: 16 launches (August 16, 1968 – June 29, 1970)
Trident I: 18 launches (January 18, 1977 – January 22, 1979)
=== LC-26 ===
Launch Complex 26 (LC-26) consisted of two pads, A and B. Pad A was used for the Jupiter-C and Juno I rockets, and was the launch site for Explorer 1, the United States' first satellite, on February 1, 1958 (January 31 local time). Pad B was used for Juno II. Jupiter IRBMs were launched from both pads.
It was deactivated and is now the home of the Cape Canaveral Space Force Museum.
=== LC-29 ===
Launch Complex 29 (LC-29) was a one-pad launch site at Cape Canaveral built for test flights of the US Navy's submarine-launched Polaris ballistic missiles from 1958 to 1980. It also launched Chevaline missiles, British-modified Polaris A-3 missiles.
The complex was designed to contain two launch pads, 29A and 29B, but only 29A was built. The launch complex was dismantled in 1980.
In November 2012, ground was broken for a new $185-million Navy missile test facility to be built over the underground structures at LC-25 and LC-29 called the Strategic Weapons System Ashore. The facility will allow the testing of fire control, launch systems and navigation for submarine-fired missiles to be conducted at one facility instead of being done by contractors in different locations around the country.
==== Launch history ====
Polaris A1X: 14 launches (September 21, 1959 – April 29, 1960)
Polaris A-2: 15 launches (January 10, 1961 – November 12, 1965)
Polaris A-3: 18 launches (August 7, 1962 – November 1, 1967)
Polaris A-3 Antelope: 3 launches (November 17, 1966 – March 2, 1967)
British Chevaline launches (September 11, 1977 – May 19, 1980)
=== LC-43 ===
Launch Complex 43 (LC-43) was used by American sounding rockets between 1962 and 1984, supporting 2,038 launches. These launches were moved to LC-47 in 1984, and LC-43 was demolished to make way for Launch Complex 46, which was built near the site.
== Based units ==
Units marked GSU are Air Force Geographically Separate Units which, although based at Cape Canaveral SFS, are subordinate to Space Launch Delta 45 headquarters at Patrick SFB.
=== United States Space Force ===
Space Operations Command (SpOC)
Space Launch Delta 45
5th Space Launch Squadron – Atlas V and Delta IV
45th Operations Group (GSU), incorporating the former 45th Launch Group (deactivated in 2018)
45th Range Squadron
45th Space Communications Squadron
45th Weather Squadron
=== United States Navy ===
Naval Ordnance Test Unit
== Gallery ==
== See also ==
List of Cape Canaveral and Merritt Island launch sites
== References ==
== Sources ==
"Launch Site Safety Assessment, Section 1.0 Eastern Range General Range Capabilities" (PDF). Research Triangle Institute, Center for Aerospace Technology (CAST), Florida Office. Federal Aviation Administration. March 1999. Archived from the original (PDF) on March 26, 2009. Retrieved January 25, 2010.
== External links ==
Media related to Cape Canaveral Space Force Station at Wikimedia Commons
Patrick Air Force Base
Cape Canaveral Air Force Station Virtual Tour
Air Force Space and Missile Museum Web site
"Cape Canaveral Lighthouse Shines Again" article and video interview about the lighthouse
Aviation: From Sand Dunes to Sonic Booms, a National Park Service Discover Our Shared Heritage Travel Itinerary
The short film "The Cape (1963)" is available for free viewing and download at the Internet Archive.
Historic American Engineering Record (HAER) No. FL-8-5, "Cape Canaveral Air Station, Launch Complex 17, East end of Lighthouse Road, Cape Canaveral, Brevard, FL"
Key Events in Apollo
Cape Canaveral Space Force Station Launch Schedule | Wikipedia/Cape_Canaveral_Space_Force_Station |
The Astromaterials Research and Exploration Science (ARES) Directorate performs the physical science research at the Johnson Space Center (JSC). It serves as the JSC focus for support to the HQ Science Mission Directorate. ARES scientists and engineers also provide support to the human and robotic spaceflight programs with expertise in orbital debris modeling, analysis of micrometeoroid/orbital debris risks to spacecraft, image analysis and Earth observations. The responsibilities of ARES also include interaction with the Office of Safety and Mission Assurance and the Human Space Flight Programs. They perform research in Earth, planetary, and space sciences and have curatorial responsibility for all NASA-held extraterrestrial samples.
The ARES Directorate maintains a WordPress blog with Twitter updates on new research and laboratory discoveries. The Expedition Earth and Beyond education program also posts its classroom connection events to the NASA ARES YouTube page.
== Earth Science and Remote Sensing Unit ==
The Earth Science and Remote Sensing Unit (ESRS) is a remote sensing ground unit of NASA. It is part of NASA's Astromaterials Research and Exploration Science Directorate. It works with the United States Geological Survey (USGS) and NASA's Earth Science Disaster Program.
== References ==
== External links ==
Official website | Wikipedia/Astromaterials_Research_and_Exploration_Science_Directorate |
Mars Science Laboratory (MSL) is a robotic space probe mission to Mars launched by NASA on November 26, 2011, which successfully landed Curiosity, a Mars rover, in Gale Crater on August 6, 2012. The overall objectives include investigating Mars' habitability, studying its climate and geology, and collecting data for a human mission to Mars. The rover carries a variety of scientific instruments designed by an international team.
== Overview ==
MSL carried out the most accurate Martian landing of any spacecraft at the time, hitting a target landing ellipse of 7 by 20 km (4.3 by 12.4 mi), in the Aeolis Palus region of Gale Crater. MSL landed 2.4 km (1.5 mi) east and 400 m (1,300 ft) north of the center of the target. This location is near the mountain Aeolis Mons (a.k.a. "Mount Sharp").
The Mars Science Laboratory mission is part of NASA's Mars Exploration Program, a long-term effort for the robotic exploration of Mars that is managed by the Jet Propulsion Laboratory of California Institute of Technology. The total cost of the MSL project was US$2.5 billion.
Previous successful U.S. Mars rovers include Sojourner from the Mars Pathfinder mission and the Mars Exploration Rovers Spirit and Opportunity. Curiosity is about twice as long and five times as heavy as Spirit and Opportunity, and carries over ten times the mass of scientific instruments.
== Goals and objectives ==
The MSL mission has four scientific goals: to determine the landing site's habitability, including the role of water; to study the climate of Mars; to study the geology of Mars; and to prepare for a future human mission to Mars.
To contribute to these goals, MSL has eight main scientific objectives:
Biological
(1) Determine the nature and inventory of organic carbon compounds
(2) Investigate the chemical building blocks of life (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur)
(3) Identify features that may represent the effects of biological processes (biosignatures)
Geological and geochemical
(4) Investigate the chemical, isotopic, and mineralogical composition of the Martian surface and near-surface geological materials
(5) Interpret the processes that have formed and modified rocks and soils
Planetary process
(6) Assess long-timescale (i.e., 4-billion-year) Martian atmospheric evolution processes
(7) Determine present state, distribution, and cycling of water and carbon dioxide
Surface radiation
(8) Characterize the broad spectrum of surface radiation, including cosmic radiation, solar particle events and secondary neutrons. As part of its exploration, it also measured the radiation exposure in the interior of the spacecraft as it traveled to Mars, and it is continuing radiation measurements as it explores the surface of Mars. This data would be important for a future human mission.
About one year into the surface mission, and having assessed that ancient Mars could have been hospitable to microbial life, the MSL mission objectives evolved to developing predictive models for the preservation process of organic compounds and biomolecules; a branch of paleontology called taphonomy.
== Specifications ==
=== Spacecraft ===
The spacecraft flight system had a mass at launch of 3,893 kg (8,583 lb), consisting of an Earth-Mars fueled cruise stage (539 kg (1,188 lb)), the entry-descent-landing (EDL) system (2,401 kg (5,293 lb), including 390 kg (860 lb) of landing propellant), and an 899 kg (1,982 lb) mobile rover with an integrated instrument package.
The MSL spacecraft includes spaceflight-specific instruments, in addition to using one of the rover's instruments, the Radiation Assessment Detector (RAD), during the spaceflight transit to Mars.
MSL EDL Instrument (MEDLI): The MEDLI project's main objective is to measure aerothermal environments, sub-surface heat shield material response, vehicle orientation, and atmospheric density. The MEDLI instrumentation suite was installed in the heatshield of the MSL entry vehicle. The acquired data will support future Mars missions by providing measured atmospheric data to validate Mars atmosphere models and clarify the lander design margins on future Mars missions. MEDLI instrumentation consists of three main subsystems: MEDLI Integrated Sensor Plugs (MISP), Mars Entry Atmospheric Data System (MEADS) and the Sensor Support Electronics (SSE).
=== Rover ===
Curiosity rover has a mass of 899 kg (1,982 lb), can travel up to 90 m (300 ft) per hour on its six-wheeled rocker-bogie system, is powered by a multi-mission radioisotope thermoelectric generator (MMRTG), and communicates in both X band and UHF bands.
Computers: The two identical on-board rover computers, called "Rover Compute Element" (RCE), contain radiation-hardened memory to tolerate the extreme radiation from space and to safeguard against power-off cycles. Each computer's memory includes 256 KB of EEPROM, 256 MB of DRAM, and 2 GB of flash memory. This compares to 3 MB of EEPROM, 128 MB of DRAM, and 256 MB of flash memory used in the Mars Exploration Rovers.
The RCE computers use the RAD750 CPU (a successor to the RAD6000 CPU used in the Mars Exploration Rovers) operating at 200 MHz. The RAD750 CPU is capable of up to 400 MIPS, while the RAD6000 CPU is capable of up to 35 MIPS. Of the two on-board computers, one is configured as backup, and will take over in the event of problems with the main computer.
The rover has an Inertial Measurement Unit (IMU) that provides 3-axis information on its position, which is used in rover navigation. The rover's computers are constantly self-monitoring to keep the rover operational, such as by regulating the rover's temperature. Activities such as taking pictures, driving, and operating the instruments are performed in a command sequence that is sent from the flight team to the rover.
The rover's computers run VxWorks, a real-time operating system from Wind River Systems. During the trip to Mars, VxWorks ran applications dedicated to the navigation and guidance phase of the mission, and also had a pre-programmed software sequence for handling the complexity of the entry-descent-landing. Once landed, the applications were replaced with software for driving on the surface and performing scientific activities.
Communications: Curiosity is equipped with several means of communication for redundancy: an X band Small Deep Space Transponder for communication directly to Earth via the NASA Deep Space Network, and a UHF Electra-Lite software-defined radio for communicating with Mars orbiters. The X-band system has one radio, with a 15 W power amplifier, and two antennas: a low-gain omnidirectional antenna that can communicate with Earth at very low data rates (15 bit/s at maximum range), regardless of rover orientation, and a high-gain antenna that can communicate at speeds up to 32 kbit/s, but must be aimed. The UHF system has two radios (approximately 9 W transmit power), sharing one omnidirectional antenna. This can communicate with the Mars Reconnaissance Orbiter (MRO) and 2001 Mars Odyssey orbiter (ODY) at speeds up to 2 Mbit/s and 256 kbit/s, respectively, but each orbiter is only able to communicate with Curiosity for about 8 minutes per day. The orbiters have larger antennas and more powerful radios, and can relay data to Earth faster than the rover could do directly. Therefore, most of the data returned by Curiosity (MSL) is via the UHF relay links with MRO and ODY. The data return during the first 10 days was approximately 31 megabytes per day.
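The short relay windows bound the daily data volume: an 8-minute pass at 2 Mbit/s caps what a single MRO relay can return. A rough sketch using the figures above; assuming the full pass is usable at the full rate is an idealization:

MRO_RATE = 2e6          # UHF relay rate to MRO, bit/s
PASS_SECONDS = 8 * 60   # usable relay window per day, s

ceiling_bytes = MRO_RATE * PASS_SECONDS / 8
print(ceiling_bytes / 1e6)  # ~120 MB theoretical ceiling per pass

The roughly 31 MB per day actually returned in the first ten days sits well under this ceiling, consistent with protocol overhead, adaptive rates below the maximum, and pass geometry.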
Typically 225 kbit/day of commands are transmitted to the rover directly from Earth, at a data rate of 1–2 kbit/s, during a 15-minute (900 second) transmit window, while the larger volumes of data collected by the rover are returned via satellite relay. The one-way communication delay with Earth varies from 4 to 22 minutes, depending on the planets' relative positions, with 12.5 minutes being the average.
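The 4-to-22-minute range is simply the Earth-Mars distance divided by the speed of light. A sketch; the distances are approximate orbital extremes, not figures from this article:

C = 299792458.0  # speed of light, m/s

def one_way_delay_min(distance_km):
    # Light-time for a one-way signal over the given distance.
    return distance_km * 1e3 / C / 60.0

print(one_way_delay_min(55e6))   # near closest approach: ~3.1 min
print(one_way_delay_min(401e6))  # near maximum separation: ~22.3 min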
At landing, telemetry was monitored by the 2001 Mars Odyssey orbiter, Mars Reconnaissance Orbiter and ESA's Mars Express. Odyssey is capable of relaying UHF telemetry back to Earth in real time. The relay time varies with the distance between the two planets and was 13 minutes 46 seconds at the time of landing.
Mobility systems: Curiosity is equipped with six wheels in a rocker-bogie suspension, which also served as landing gear for the vehicle, unlike its smaller predecessors. The wheels are significantly larger (50 centimeters (20 in) diameter) than those used on previous rovers. Each wheel has cleats and is independently actuated and geared, providing for climbing in soft sand and scrambling over rocks. The four corner wheels can be independently steered, allowing the vehicle to turn in place as well as execute arcing turns. Each wheel has a pattern that helps it maintain traction and leaves patterned tracks in the sandy surface of Mars. That pattern is used by on-board cameras to judge the distance traveled. The pattern itself is Morse code for "JPL" (•−−− •−−• •−••). Based on the center of mass, the vehicle can withstand a tilt of at least 50 degrees in any direction without overturning, but automatic sensors will limit the rover from exceeding 30-degree tilts.
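For reference, the track pattern decodes under standard International Morse Code. A minimal sketch; the ASCII dots and dashes stand in for the bullet and dash glyphs above:

MORSE = {'.---': 'J', '.--.': 'P', '.-..': 'L'}  # International Morse Code, letters used
track = '.--- .--. .-..'  # the repeating pattern pressed into the regolith
print(''.join(MORSE[symbol] for symbol in track.split()))  # prints 'JPL'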
==== Instruments ====
The general analysis strategy begins with high resolution cameras to look for features of interest. If a particular surface is of interest, Curiosity can vaporize a small portion of it with an infrared laser and examine the resulting spectral signature to query the rock's elemental composition. If that signature intrigues, the rover will use its long arm to swing over a microscope and an X-ray spectrometer to take a closer look. If the specimen warrants further analysis, Curiosity can drill into the boulder and deliver a powdered sample to either the SAM or the CheMin analytical laboratory inside the rover.
Alpha Particle X-ray Spectrometer (APXS): This device can irradiate samples with alpha particles and map the spectra of X-rays that are re-emitted for determining the elemental composition of samples.
CheMin: CheMin is short for 'Chemistry and Mineralogy', and it is an X-ray diffraction and X-ray fluorescence analyzer. It will identify and quantify the minerals present in rocks and soil and thereby assess the involvement of water in their formation, deposition, or alteration. In addition, CheMin data will be useful in the search for potential mineral biosignatures, energy sources for life or indicators for past habitable environments.
Sample Analysis at Mars (SAM): The SAM instrument suite will analyze organics and gases from both atmospheric and solid samples. This includes oxygen and carbon isotope ratios in carbon dioxide (CO2) and methane (CH4) in the atmosphere of Mars, in order to distinguish between a geochemical and a biological origin.
Radiation Assessment Detector (RAD): This instrument was the first of ten MSL instruments to be turned on. Both en route and on the planet's surface, it characterized the broad spectrum of radiation encountered in the Martian environment. Turned on after launch, it recorded several radiation spikes caused by the Sun. NASA scientists reported that a possible human mission to Mars may involve a great radiation risk due to energetic particle radiation detected by the RAD while traveling from the Earth to Mars.
Dynamic Albedo of Neutrons (DAN): A pulsed neutron source and detector for measuring hydrogen or ice and water at or near the Martian surface. On August 18, 2012 (sol 12) the Russian science instrument, DAN, was turned on, marking the success of a Russian-American collaboration on the surface of Mars and the first working Russian science instrument on the Martian surface since Mars 3 stopped transmitting over forty years ago. The instrument is designed to detect subsurface water.
Rover Environmental Monitoring Station (REMS): Meteorological package and an ultraviolet sensor provided by Spain and Finland. It measures humidity, pressure, temperatures, wind speeds, and ultraviolet radiation.
Cameras: Curiosity has seventeen cameras overall: twelve engineering cameras (Hazcams and Navcams) and five science cameras. The MAHLI, MARDI, and MastCam cameras were developed by Malin Space Science Systems and all share common design components, such as on-board electronic imaging processing boxes, 1600×1200 CCDs, and an RGB Bayer pattern filter.
MastCam: This system provides multiple spectra and true-color imaging with two cameras.
Mars Hand Lens Imager (MAHLI): This system consists of a camera mounted to a robotic arm on the rover, used to acquire microscopic images of rock and soil. It has white and ultraviolet LEDs for illumination.
ChemCam: Designed by Roger Wiens, ChemCam is a system of remote sensing instruments used to erode the Martian surface up to 10 meters away and measure the different components that make up the land. The payload includes the first laser-induced breakdown spectroscopy (LIBS) system to be used for planetary science, and Curiosity's fifth science camera, the remote micro-imager (RMI). The RMI provides black-and-white images at 1024×1024 resolution in a 0.02 radian (1.1-degree) field of view. This is approximately equivalent to a 1500 mm lens on a 35 mm camera.
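The stated 35 mm equivalence can be sanity-checked from the field of view: for a rectilinear lens, the field across a frame of width w is 2·atan(w/2f). A sketch; the 36 mm full-frame width is a standard photographic convention, not a figure from this article:

import math

def fov_deg(frame_width_mm, focal_length_mm):
    # Horizontal field of view of a rectilinear lens on a given frame width.
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_length_mm)))

print(fov_deg(36, 1500))  # ~1.4 degrees, in line with RMI's 1.1-degree field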
Mars Descent Imager (MARDI): During the descent to the Martian surface, MARDI acquired 4 color images per second, at 1600×1200 pixels, with a 0.9-millisecond exposure time, from before heatshield separation at 3.7 km altitude until a few seconds after touchdown. This provided engineering information about the motion of the rover during the descent process, and science information about the terrain immediately surrounding the rover. NASA descoped MARDI in 2007, but Malin Space Science Systems contributed it with its own resources. After landing it could take 1.5 mm (0.059 in) per pixel views of the surface; the first of these post-landing photos was taken by August 27, 2012 (sol 20).
Engineering cameras: There are 12 additional cameras that support mobility:
Hazard avoidance cameras (Hazcams): The rover has a pair of black and white hazard avoidance cameras located on each of its four corners. These provide close-up views of potential obstacles about to go under the wheels.
Navigation cameras (Navcams): The rover uses two pairs of black and white navigation cameras mounted on the mast to support ground navigation. These provide a longer-distance view of the terrain ahead.
== History ==
The Mars Science Laboratory was recommended by the United States National Research Council Decadal Survey committee as the top priority middle-class Mars mission in 2003. NASA called for proposals for the rover's scientific instruments in April 2004, and eight proposals were selected on December 14 of that year. Testing and design of components also began in late 2004, including Aerojet's design of a monopropellant engine able to throttle from 15 to 100 percent thrust at a fixed propellant inlet pressure.
=== Cost overruns, delays, and launch ===
By November 2008 most hardware and software development was complete, and testing continued. At this point, cost overruns were approximately $400 million. In the attempt to meet the launch date, several instruments and a cache for samples were removed, and other instruments and cameras were simplified to ease testing and integration of the rover. The next month, NASA delayed the launch to late 2011 because of inadequate testing time. Eventually the cost of developing the rover reached $2.47 billion, for a rover that initially had been classified as a medium-cost mission with a maximum budget of $650 million, and NASA still had to ask for an additional $82 million to meet the planned November launch. As of 2012, the project had suffered an 84 percent cost overrun.
MSL launched on an Atlas V rocket from Cape Canaveral on November 26, 2011. On January 11, 2012, the spacecraft successfully refined its trajectory with a three-hour series of thruster-engine firings, advancing the rover's landing time by about 14 hours. When MSL was launched, the program's director was Doug McCuistion of NASA's Planetary Science Division.
Curiosity successfully landed in the Gale Crater at 05:17:57.3 UTC on August 6, 2012, and transmitted Hazcam images confirming orientation. Due to the Mars-Earth distance at the time of landing and the limited speed of radio signals, the landing was not registered on Earth for another 14 minutes. The Mars Reconnaissance Orbiter sent a photograph of Curiosity descending under its parachute, taken by its HiRISE camera, during the landing procedure.
Six senior members of the Curiosity team presented a news conference a few hours after landing; they were: John Grunsfeld, NASA associate administrator; Charles Elachi, director, JPL; Peter Theisinger, MSL project manager; Richard Cook, MSL deputy project manager; Adam Steltzner, MSL entry, descent and landing (EDL) lead; and John Grotzinger, MSL project scientist.
=== Naming ===
Between March 23 and 29, 2009, the general public ranked nine finalist rover names (Adventure, Amelia, Journey, Perception, Pursuit, Sunrise, Vision, Wonder, and Curiosity) through a public poll on the NASA website. On May 27, 2009, the winning name was announced to be Curiosity. The name had been submitted in an essay contest by Clara Ma, a sixth-grader from Kansas.
Curiosity is the passion that drives us through our everyday lives. We have become explorers and scientists with our need to ask questions and to wonder.
=== Landing site selection ===
Over 60 landing sites were evaluated, and by July 2011 Gale Crater was chosen. A primary goal when selecting the landing site was to identify a particular geologic environment, or set of environments, that would support microbial life. Planners looked for a site that could contribute to a wide variety of possible science objectives. They preferred a landing site with both morphologic and mineralogical evidence for past water. Furthermore, a site with spectra indicating multiple hydrated minerals was preferred; clay minerals and sulfate salts would constitute a rich site. Hematite, other iron oxides, sulfate minerals, silicate minerals, silica, and possibly chloride minerals were suggested as possible substrates for fossil preservation. Indeed, all are known to facilitate the preservation of fossil morphologies and molecules on Earth. Difficult terrain was favored for finding evidence of livable conditions, but the rover had to be able to safely reach the site and drive within it.
Engineering constraints called for a landing site less than 45° from the Martian equator, and less than 1 km above the reference datum. At the first MSL Landing Site workshop, 33 potential landing sites were identified. By the end of the second workshop in late 2007, the list was reduced to six; in November 2008, project leaders at a third workshop reduced the list to four finalist landing sites.
A fourth landing site workshop was held in late September 2010, and the fifth and final workshop May 16–18, 2011. On July 22, 2011, it was announced that Gale Crater had been selected as the landing site of the Mars Science Laboratory mission.
== Launch ==
=== Launch vehicle ===
The Atlas V launch vehicle is capable of launching up to 8,290 kg (18,280 lb) to geostationary transfer orbit. The Atlas V was also used to launch the Mars Reconnaissance Orbiter and the New Horizons probe.
The first and second stages, along with the solid rocket motors, were stacked on October 9, 2011, near the launch pad. The fairing containing MSL was transported to the launch pad on November 3, 2011.
=== Launch event ===
MSL was launched from Cape Canaveral Air Force Station Space Launch Complex 41 on November 26, 2011, at 15:02 UTC via the Atlas V 541 provided by United Launch Alliance. This two-stage rocket included a 3.8 m (12 ft) Common Core Booster (CCB) powered by one RD-180 engine, four solid rocket boosters (SRBs), and a Centaur second stage with a 5 m (16 ft) diameter payload fairing. The NASA Launch Services Program coordinated the launch via the NASA Launch Services (NLS) I Contract.
== Cruise ==
=== Cruise stage ===
The cruise stage carried the MSL spacecraft through space and delivered it to Mars. The interplanetary trip covered 352 million miles (566 million km) in 253 days. The cruise stage had its own miniature propulsion system, consisting of eight thrusters using hydrazine fuel in two titanium tanks, and its own electric power system, consisting of a solar array and battery for providing continuous power. Upon reaching Mars, the spacecraft stopped spinning and a cable cutter separated the cruise stage from the aeroshell. The cruise stage was then diverted onto a separate trajectory into the atmosphere. In December 2012, the debris field from the cruise stage was located by the Mars Reconnaissance Orbiter. Since the initial size, velocity, density and impact angle of the hardware are known, the debris field will provide information on impact processes on the Martian surface and on atmospheric properties.
=== Mars transfer orbit ===
The MSL spacecraft departed Earth orbit and was inserted into a heliocentric Mars transfer orbit on November 26, 2011, shortly after launch, by the Centaur upper stage of the Atlas V launch vehicle. Prior to Centaur separation, the spacecraft was spin-stabilized at 2 rpm for attitude control during the 36,210 km/h (22,500 mph) cruise to Mars.
During cruise, eight thrusters arranged in two clusters were used as actuators to control spin rate and perform axial or lateral trajectory correction maneuvers. By spinning about its central axis, it maintained a stable attitude. Along the way, the cruise stage performed four trajectory correction maneuvers to adjust the spacecraft's path toward its landing site. Information was sent to mission controllers via two X-band antennas. A key task of the cruise stage was to control the temperature of all spacecraft systems and dissipate the heat generated by power sources, such as solar cells and motors, into space. In some systems, insulating blankets kept sensitive science instruments warmer than the near-absolute zero temperature of space. Thermostats monitored temperatures and switched heating and cooling systems on or off as needed.
== Entry, descent and landing (EDL) ==
=== EDL spacecraft system ===
Landing a large mass on Mars is particularly challenging as the atmosphere is too thin for parachutes and aerobraking alone to be effective, while remaining thick enough to create stability and impingement problems when decelerating with retrorockets. Although some previous missions have used airbags to cushion the shock of landing, the Curiosity rover is too heavy for this to be an option. Instead, Curiosity was set down on the Martian surface using a new high-accuracy entry, descent, and landing (EDL) system that was part of the MSL spacecraft descent stage. The mass of this EDL system, including parachute, sky crane, fuel and aeroshell, is 2,401 kg (5,293 lb). The novel EDL system placed Curiosity within a 20 by 7 km (12.4 by 4.3 mi) landing ellipse, in contrast to the 150 by 20 km (93 by 12 mi) landing ellipse of the landing systems used by the Mars Exploration Rovers.
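The gain in precision is easiest to see by comparing the areas of the two landing ellipses, π times the product of the semi-axes. A quick check using the figures above:

import math

def ellipse_area(major_km, minor_km):
    # Area of an ellipse given its full major and minor axes.
    return math.pi * (major_km / 2) * (minor_km / 2)

msl = ellipse_area(20, 7)    # ~110 km^2
mer = ellipse_area(150, 20)  # ~2,356 km^2
print(mer / msl)             # MSL's target area is roughly 21x smaller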
The entry-descent-landing (EDL) system differs from those used for other missions in that it does not require an interactive, ground-generated mission plan. During the entire landing phase, the vehicle acts autonomously, based on pre-loaded software and parameters. The EDL system was based on a Viking-derived aeroshell structure and propulsion system for a precision guided entry and soft landing, in contrast with the airbag landings used in the mid-1990s by the Mars Pathfinder and Mars Exploration Rover missions. The spacecraft employed several systems in a precise order, with the entry, descent and landing sequence broken down into four parts, described below as the spaceflight events unfolded on August 6, 2012.
=== EDL event–August 6, 2012 ===
Despite its late hour, particularly on the east coast of the United States where it was 1:31 a.m., the landing generated significant public interest: 3.2 million people watched the landing live, most of them online rather than on television via NASA TV or the cable news networks covering the event. The final landing place for the rover was less than 2.4 km (1.5 mi) from its target after a 563,270,400 km (350,000,000 mi) journey. In addition to streaming and traditional video viewing, JPL made available Eyes on the Solar System, a three-dimensional real-time simulation of entry, descent and landing based on real data. Curiosity's touchdown time as represented in the software, based on JPL predictions, was less than 1 second different from reality.
The EDL phase of the MSL spaceflight mission to Mars took only seven minutes and unfolded automatically, as programmed by JPL engineers in advance, in a precise order, with the entry, descent and landing sequence occurring in four distinct event phases:
==== Guided entry ====
Precision guided entry made use of onboard computing ability to steer itself toward the pre-determined landing site, improving landing accuracy from a range of hundreds of kilometers to 20 kilometers (12 mi). This capability helped remove some of the uncertainties of landing hazards that might be present in larger landing ellipses. Steering was achieved by the combined use of thrusters and ejectable balance masses. The ejectable balance masses shift the capsule center of mass enabling generation of a lift vector during the atmospheric phase. A navigation computer integrated the measurements to estimate the position and attitude of the capsule that generated automated torque commands. This was the first planetary mission to use precision landing techniques.
The rover was folded up within an aeroshell that protected it during the travel through space and during the atmospheric entry at Mars. Ten minutes before atmospheric entry, the aeroshell separated from the cruise stage that had provided power, communications and propulsion during the long flight to Mars. One minute after separation from the cruise stage, thrusters on the aeroshell fired to cancel out the spacecraft's 2-rpm rotation and achieved an orientation with the heat shield facing Mars in preparation for atmospheric entry. The heat shield is made of phenolic impregnated carbon ablator (PICA). The 4.5 m (15 ft) diameter heat shield, the largest heat shield ever flown in space, reduced the velocity of the spacecraft by ablation against the Martian atmosphere, from the atmospheric interface velocity of approximately 5.8 km/s (3.6 mi/s) down to approximately 470 m/s (1,500 ft/s), where parachute deployment was possible about four minutes later. One minute and 15 seconds after entry, the heat shield experienced peak temperatures of up to 2,090 °C (3,790 °F) as atmospheric drag converted kinetic energy into heat. Ten seconds after peak heating, the deceleration peaked at 15 g.
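The scale of the heating follows from the kinetic energy shed between atmospheric interface and parachute deployment, roughly ½(v_entry² − v_chute²) per kilogram of entry mass. A back-of-the-envelope sketch using the velocities quoted above:

v_entry = 5800.0  # atmospheric interface velocity, m/s
v_chute = 470.0   # velocity at parachute deployment, m/s

energy_per_kg = 0.5 * (v_entry**2 - v_chute**2)
print(energy_per_kg / 1e6)  # ~16.7 MJ shed per kilogram, mostly as heat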
Much of the reduction of the landing precision error was accomplished by an entry guidance algorithm, derived from the algorithm used for guidance of the Apollo Command Modules returning to Earth in the Apollo program. This guidance uses the lifting force experienced by the aeroshell to "fly out" any detected error in range and thereby arrive at the targeted landing site. In order for the aeroshell to have lift, its center of mass is offset from the axial centerline, which results in an off-center trim angle in atmospheric flight. This was accomplished by ejecting ballast masses consisting of two 75 kg (165 lb) tungsten weights minutes before atmospheric entry. The lift vector was controlled by four sets of two reaction control system (RCS) thrusters that produced approximately 500 N (110 lbf) of thrust per pair. This ability to change the pointing of the direction of lift allowed the spacecraft to react to the ambient environment and steer toward the landing zone. Prior to parachute deployment the entry vehicle ejected six more 25 kg (55 lb) tungsten ballast weights, removing the center-of-gravity offset.
==== Parachute descent ====
When the entry phase was complete and the capsule slowed to about 470 m/s (1,500 ft/s) at about 10 km (6.2 mi) altitude, the supersonic parachute deployed, as was done by previous landers such as Viking, Mars Pathfinder and the Mars Exploration Rovers. The parachute has 80 suspension lines, is over 50 m (160 ft) long, and is about 16 m (52 ft) in diameter. Capable of being deployed at Mach 2.2, the parachute can generate up to 289 kN (65,000 lbf) of drag force in the Martian atmosphere. After the parachute was deployed, the heat shield separated and fell away. A camera beneath the rover acquired about 5 frames per second (with resolution of 1600×1200 pixels) below 3.7 km (2.3 mi) during a period of about 2 minutes until the rover sensors confirmed successful landing. The Mars Reconnaissance Orbiter team were able to acquire an image of the MSL descending under the parachute.
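The quoted peak drag force implies a peak deceleration on the order of F/m. A rough sketch; using the 2,401 kg EDL system mass given earlier is an approximation, since some of that mass (heat shield, entry ballast) had already been shed by this point, so the true figure would be somewhat higher:

drag_force = 289e3  # peak parachute drag force, N (from the paragraph above)
mass = 2401.0       # EDL system mass at launch, kg (approximation; see above)

print(drag_force / mass / 9.81)  # ~12 Earth g, an order-of-magnitude estimate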
==== Powered descent ====
Following the parachute braking, at about 1.8 km (1.1 mi) altitude and still travelling at about 100 m/s (220 mph), the rover and descent stage dropped out of the aeroshell. The descent stage is a platform above the rover with eight variable-thrust monopropellant hydrazine rocket thrusters on arms extending around this platform to slow the descent. Each rocket thruster, called a Mars Lander Engine (MLE), produces 400 to 3,100 N (90 to 697 lbf) of thrust; the engines were derived from those used on the Viking landers. A radar altimeter measured altitude and velocity, feeding data to the rover's flight computer. Meanwhile, the rover transformed from its stowed flight configuration to a landing configuration while being lowered beneath the descent stage by the "sky crane" system.
==== Sky crane ====
For several reasons, a different landing system was chosen for MSL compared to previous Mars landers and rovers. Curiosity was considered too heavy to use the airbag landing system as used on the Mars Pathfinder and Mars Exploration Rovers. A legged lander approach would have caused several design problems. It would have needed to have engines high enough above the ground when landing not to form a dust cloud that could damage the rover's instruments. This would have required long landing legs that would need to have significant width to keep the center of gravity low. A legged lander would have also required ramps so the rover could drive down to the surface, which would have incurred extra risk to the mission on the chance rocks or tilt would prevent Curiosity from being able to drive off the lander successfully. Faced with these challenges, the MSL engineers came up with a novel alternative solution: the sky crane. The sky crane system lowered the rover with a 7.6 m (25 ft) tether to a soft landing—wheels down—on the surface of Mars. This system consists of a bridle lowering the rover on three nylon tethers and an electrical cable carrying information and power between the descent stage and rover. As the support and data cables unreeled, the rover's six motorized wheels snapped into position. At roughly 7.5 m (25 ft) below the descent stage the sky crane system slowed to a halt and the rover touched down. After the rover touched down, it waited two seconds to confirm that it was on solid ground by detecting the weight on the wheels and fired several pyros (small explosive devices) activating cable cutters on the bridle and umbilical cords to free itself from the descent stage. The descent stage then flew away to a crash landing 650 m (2,100 ft) away. The sky crane concept had never been used in missions before.
=== Landing site ===
Gale Crater is the MSL landing site. Within Gale Crater is a mountain, named Aeolis Mons ("Mount Sharp"), of layered rocks, rising about 5.5 km (18,000 ft) above the crater floor, that Curiosity will investigate. The landing site is a smooth region in "Yellowknife" Quad 51 of Aeolis Palus inside the crater in front of the mountain. The target landing site location was an elliptical area 20 by 7 km (12.4 by 4.3 mi). Gale Crater's diameter is 154 km (96 mi).
The landing location for the rover was less than 2.4 km (1.5 mi) from the center of the planned landing ellipse, after a 563,000,000 km (350,000,000 mi) journey. NASA named the rover landing site Bradbury Landing on sol 16, August 22, 2012. According to NASA, an estimated 20,000 to 40,000 heat-resistant bacterial spores were on Curiosity at launch, and as much as 1,000 times that number may not have been counted.
== Media ==
=== Videos ===
=== Images ===
== See also ==
== References ==
== Further reading ==
M. K. Lockwood (2006). "Introduction: Mars Science Laboratory: The Next Generation of Mars Landers And The Following 13 articles" (PDF). Journal of Spacecraft and Rockets. 43 (2). American Institute of Aeronautics and Astronautics: 257. Bibcode:2006JSpRo..43..257L. doi:10.2514/1.20678. Archived from the original (PDF) on August 9, 2012. Retrieved November 13, 2006.
Grotzinger, J. P.; Crisp, J.; Vasavada, A. R.; Anderson, R. C.; Baker, C. J.; Barry, R.; Blake, D. F.; Conrad, P.; Edgett, K. S.; Ferdowski, B.; Gellert, R.; Gilbert, J. B.; Golombek, M.; Gómez-Elvira, J.; Hassler, D. M.; Jandura, L.; Litvak, M.; Mahaffy, P.; Maki, J.; Meyer, M.; Malin, M. C.; Mitrofanov, I.; Simmonds, J. J.; Vaniman, D.; Welch, R. V.; Wiens, R. C. (2012). "Mars Science Laboratory Mission and Science Investigation". Space Science Reviews. 170 (1–4): 5–56. Bibcode:2012SSRv..170....5G. doi:10.1007/s11214-012-9892-2.—overview article about the MSL, landing site, and instrumentation
== External links ==
MSL Home Page
Scientific Publications by MSL Team Members (PDF)
MSL – Media Press Kit (November, 2011) (PDF)
Image Gallery
MSL – NASA/JPL News Channel Videos
MSL – Entry, Descent & Landing (EDL) – Animated Video (02:00)
MSL – NASA Updates – *REPLAY* Anytime (NASA-YouTube)
MSL – "Curiosity Lands" (08/06/2012) – NASA/JPL – Video (03:40)
Descent video sim&real/narrated, MSL real time/25fps, all/4fp, HiRise
MSL – Landing ("7 Minutes of Terror")
MSL – Landing Site – Gale Crater – Animated/Narrated Video (02:37)
MSL – Mission Summary – Animated/Extended Video (11:20)
MSL – "Curiosity Launch" (11/26/2011) – NASA/Kennedy – Video (04:00)
MSL – NASA/JPL Virtual Tour – Rover
MSL – Entry, Descent & Landing (EDL) – Timeline/ieee
MSL – Entry, Descent & Landing (EDL) – Description. (PDF)
MSL – Raw Images, Listing by JPL (official) | Wikipedia/Mars_Science_Laboratory |
A stochastic investment model tries to forecast how returns and prices on different assets or asset classes (e.g. equities or bonds) vary over time. Stochastic models are used for interval estimation rather than point estimation, and they employ a variety of stochastic processes. Investment models can be classified into single-asset and multi-asset models. They are often used for actuarial work and financial planning to allow optimization in asset allocation or asset-liability management (ALM).
== Single-asset models ==
=== Interest rate models ===
Interest rate models can be used to price fixed income products. They are usually divided into one-factor models and multi-factor models.
==== One-factor models ====
Black–Derman–Toy model
Black–Karasinski model
Cox–Ingersoll–Ross model
Ho–Lee model
Hull–White model
Kalotay–Williams–Fabozzi model
Merton model
Rendleman–Bartter model
Vasicek model
==== Multi-factor models ====
Chen model
Longstaff–Schwartz model
=== Term structure models ===
LIBOR market model (Brace Gatarek Musiela model)
=== Stock price models ===
Binomial model
Black–Scholes model (geometric Brownian motion; see the simulation sketch after this list)
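As an illustration of how these models yield interval rather than point estimates, below is a minimal Monte Carlo sketch of geometric Brownian motion, the process underlying the Black–Scholes model. All parameter values are arbitrary assumptions for illustration:

import math, random

def gbm_final_prices(s0, mu, sigma, years, steps, n_paths):
    # Simulate S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
    dt = years / steps
    finals = []
    for _ in range(n_paths):
        s = s0
        for _ in range(steps):
            z = random.gauss(0.0, 1.0)
            s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        finals.append(s)
    return sorted(finals)

prices = gbm_final_prices(s0=100.0, mu=0.07, sigma=0.20, years=1.0, steps=252, n_paths=10000)
print(prices[500], prices[9500])  # approximate 5th and 95th percentiles of year-end price

The output is a distribution of outcomes, from which an interval such as the 5th to 95th percentile range can be read off, rather than a single forecast.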
=== Inflation models ===
== Multi-asset models ==
ALM.IT (GenRe) model
Cairns model
FIM-Group model
Global CAP:Link model
Ibbotson and Sinquefield model
Morgan Stanley model
Russell–Yasuda Kasai model
Smith's jump diffusion model
TSM (B & W Deloitte) model
Watson Wyatt model
Whitten & Thomas model
Wilkie investment model
Yakoubov, Teeger & Duval model
== Further reading ==
Wilkie, A. D. (1984) "A stochastic investment model for actuarial use", Transactions of the Faculty of Actuaries, 39: 341-403
Østergaard, Søren Duus (1971) "Stochastic Investment Models and Decision Criteria", The Swedish Journal of Economics, 73 (2), 157-183 JSTOR 3439055
Sreedharan, V. P.; Wein, H. H. (1967) "A Stochastic, Multistage, Multiproduct Investment Model", SIAM Journal on Applied Mathematics, 15 (2), 347-358 JSTOR 2946287 | Wikipedia/Stochastic_investment_model |
Corporate finance is an area of finance that deals with the sources of funding, and the capital structure of businesses, the actions that managers take to increase the value of the firm to the shareholders, and the tools and analysis used to allocate financial resources. The primary goal of corporate finance is to maximize or increase shareholder value.
Correspondingly, corporate finance comprises two main sub-disciplines. Capital budgeting is concerned with the setting of criteria about which value-adding projects should receive investment funding, and whether to finance that investment with equity or debt capital. Working capital management is the management of the company's monetary funds that deal with the short-term operating balance of current assets and current liabilities; the focus here is on managing cash, inventories, and short-term borrowing and lending (such as the terms on credit extended to customers).
The terms corporate finance and corporate financier are also associated with investment banking. The typical role of an investment bank is to evaluate the company's financial needs and raise the appropriate type of capital that best fits those needs. Thus, the terms "corporate finance" and "corporate financier" may be associated with transactions in which capital is raised in order to create, develop, grow or acquire businesses.
Although it is in principle different from managerial finance which studies the financial management of all firms, rather than corporations alone, the main concepts in the study of corporate finance are applicable to the financial problems of all kinds of firms. Financial management overlaps with the financial function of the accounting profession. However, financial accounting is the reporting of historical financial information, while financial management is concerned with the deployment of capital resources to increase a firm's value to the shareholders.
== History ==
Corporate finance for the pre-industrial world began to emerge in the Italian city-states and the Low Countries of Europe from the 15th century.
The Dutch East India Company (also known by the abbreviation "VOC" in Dutch) was the first publicly listed company ever to pay regular dividends.
The VOC was also the first recorded joint-stock company to get a fixed capital stock. Public markets for investment securities developed in the Dutch Republic during the 17th century.
By the early 1800s, London acted as a center of corporate finance for companies around the world, which innovated new forms of lending and investment; see City of London § Economy.
The twentieth century brought the rise of managerial capitalism and common stock finance, with share capital raised through listings, in preference to other sources of capital.
Modern corporate finance, alongside investment management, developed in the second half of the 20th century, particularly driven by innovations in theory and practice in the United States and Britain.
Here, see the later sections of History of banking in the United States and of History of private equity and venture capital.
== Outline ==
The primary goal of financial management is to maximize, or to continually increase, shareholder value (see Fisher separation theorem).
Here, the three main questions that corporate finance addresses are: What long-term investments should we make? What methods should we employ to finance the investment? How do we manage our day-to-day financial activities? These three questions lead to the primary areas of concern in corporate finance: capital budgeting, capital structure, and working capital management.
Managers must therefore find an appropriate balance between investments in "projects" that increase the firm's long-term profitability, and paying excess cash in the form of dividends to shareholders; short-term considerations, such as paying back creditor-related debt, will also feature.
Choosing between investment projects will thus be based upon several inter-related criteria.
(1) Corporate management seeks to maximize the value of the firm by investing in projects which yield a positive net present value when valued using an appropriate discount rate - the "hurdle rate" - in consideration of risk. (2) These projects must also be financed appropriately. (3) If no growth is possible for the company and excess cash surplus is not needed by the firm, then financial theory suggests that management should return some or all of the excess cash to shareholders (i.e., distribution via dividends).
The first two criteria concern "capital budgeting", the planning of value-adding, long-term corporate financial projects relating to investments funded through and affecting the firm's capital structure, and where management must allocate the firm's limited resources between competing opportunities ("projects").
Capital budgeting is thus also concerned with the setting of criteria about which projects should receive investment funding to increase the value of the firm, and whether to finance that investment with equity or debt capital. Investments should be made on the basis of value-added to the future of the corporation. Projects that increase a firm's value may include a wide variety of different types of investments, including but not limited to, expansion policies, or mergers and acquisitions.
The third criterion relates to dividend policy.
In general, managers of growth companies (i.e. firms that earn high rates of return on invested capital) will use most of the firm's capital resources and surplus cash on investments and projects so the company can continue to expand its business operations into the future. When companies reach maturity levels within their industry (i.e. companies that earn approximately average or lower returns on invested capital), managers of these companies will use surplus cash to pay out dividends to shareholders.
Thus, when no growth or expansion is likely, and excess cash surplus exists and is not needed, then management is expected to pay out some or all of those surplus earnings in the form of cash dividends or to repurchase the company's stock through a share buyback program.
== Capital structure ==
Achieving the goals of corporate finance requires that any corporate investment be financed appropriately. The sources of financing are, generically, capital self-generated by the firm and capital from external funders, obtained by issuing new debt and equity (and hybrid or convertible securities). However, as above, since both hurdle rate and cash flows (and hence the riskiness of the firm) will be affected, the financing mix will impact the valuation of the firm, and a considered decision is required here. See Balance sheet, WACC.
Finally, there is much theoretical discussion as to other considerations that management might weigh here.
=== Sources of capital ===
Corporations, as outlined, may rely on borrowed funds (debt capital or credit) as sources of investment to sustain ongoing business operations or to fund future growth. Debt comes in several forms, such as bank loans, notes payable, or bonds issued to the public. Bonds require the corporation to make regular interest payments (interest expenses) on the borrowed capital until the debt reaches its maturity date, whereupon the firm must pay back the obligation in full. (An exception is zero-coupon bonds, or "zeros".) Debt payments can also be made in the form of a sinking fund provision, whereby the corporation pays annual installments of the borrowed debt above regular interest charges. Corporations that issue callable bonds are entitled to pay back the obligation in full whenever the company feels it is in its best interest to pay off the debt. If interest expenses cannot be met by the corporation through cash payments, the firm may also use collateral assets as a form of repaying its debt obligations (or through the process of liquidation).
Especially re debt funded corporations, see Bankruptcy and Financial distress.
Under some treatments (especially for valuation) leases are regarded as debt: the payments are set; they are tax deductible; failing to make them results in the loss of the asset.
Corporations can alternatively sell shares of the company to investors to raise capital. Investors, or shareholders, expect the value of the company to appreciate over time, making their investment a profitable purchase. As outlined:
Shareholder value is increased when corporations invest equity capital and other funds into projects (or investments) that earn a positive rate of return for the owners. Investors then prefer to buy shares of stock in companies that will consistently earn a positive rate of return on capital (on equity) in the future, thus increasing the market value of the stock of that corporation.
Shareholder value may also be increased when corporations pay out excess cash surplus (funds that are not needed for business) in the form of dividends.
Internal financing, often, is constituted of retained earnings, i.e. those remaining after dividends; this provides, per some measures, the cheapest form of funding.
Preferred stock is a specialized form of financing which combines properties of common stock and debt instruments, and may then be considered a hybrid security. Preferreds are senior (i.e. higher ranking) to common stock, but subordinate to bonds in terms of claim (or rights to their share of the assets of the company).
Preferred stock usually carries no voting rights, but may carry a dividend and may have priority over common stock in the payment of dividends and upon liquidation. Terms of the preferred stock are stated in a "Certificate of Designation".
Similar to bonds, preferred stocks are rated by the major credit-rating companies. The rating for preferreds is generally lower, since preferred dividends do not carry the same guarantees as interest payments from bonds and they are junior to all creditors.
Preferred stock is then a special class of shares which may have any combination of features not possessed by common stock.
The following features are usually associated with preferred stock:
Preference in dividends
Preference in assets, in the event of liquidation
Convertibility to common stock.
Callability, at the option of the corporation
Nonvoting
=== Capitalization structure ===
As outlined, the financing "mix" will impact the valuation (as well as the cash flows) of the firm, and must therefore be structured appropriately; there are then two interrelated considerations here:
Management must identify the "optimal mix" of financing – the capital structure that results in maximum firm value - but must also take other factors into account (see trade-off theory below). Financing a project through debt results in a liability or obligation that must be serviced, thus entailing cash flow implications independent of the project's degree of success. Equity financing is less risky with respect to cash flow commitments, but results in a dilution of share ownership, control and earnings. The cost of equity (see CAPM and APT) is also typically higher than the cost of debt - which is, additionally, a deductible expense – and so equity financing may result in an increased hurdle rate which may offset any reduction in cash flow risk.
Management must attempt to match the long-term financing mix to the assets being financed as closely as possible, in terms of both timing and cash flows. Managing any potential asset liability mismatch or duration gap entails matching the assets and liabilities respectively according to maturity pattern ("cashflow matching") or duration ("immunization"); managing this relationship in the short-term is a major function of working capital management, as discussed below. Other techniques, such as securitization, or hedging using interest rate- or credit derivatives, are also common. See: Asset liability management; Treasury management; Credit risk; Interest rate risk.
=== Related considerations ===
The above are the primary objectives in deciding on the firm's capitalization structure; parallel considerations will also factor into management's thinking.
The starting point for discussion here is the Modigliani–Miller theorem.
This states, through two connected Propositions, that in a "perfect market" how a firm is financed is irrelevant to its value:
(i) the value of a company is independent of its capital structure; (ii) the cost of equity will be the same for a leveraged firm and an unleveraged firm.
"Modigliani and Miller", however, is generally viewed as a theoretical result, and in practice, management will here too focus on enhacing firm value and / or reducing the cost of funding.
Re value, much of the discussion falls under the umbrella of the Trade-Off Theory in which firms are assumed to trade-off the tax benefits of debt with the bankruptcy costs of debt when choosing how to allocate the company's resources, finding an optimum re firm value.
The capital structure substitution theory hypothesizes that management manipulates the capital structure such that earnings per share (EPS) are maximized.
Re cost of funds, the Pecking Order Theory (Stewart Myers) suggests that firms avoid external financing while they have internal financing available and avoid new equity financing while they can engage in new debt financing at reasonably low interest rates.
One of the more recent innovations in this area from a theoretical point of view is the market timing hypothesis. This hypothesis, inspired by the behavioral finance literature, states that firms look for the cheaper type of financing regardless of their current levels of internal resources, debt and equity.
(See also below re corporate governance.)
== Capital budgeting ==
The process of allocating financial resources to major investment- or capital expenditure is known as capital budgeting.
Consistent with the overall goal of increasing firm value, the decisioning here focuses on whether the investment in question is worthy of funding through the firm's capitalization structures (debt, equity or retained earnings as above).
To be considered acceptable, the investment must be value additive re: (i) improved operating profit and cash flows; as combined with (ii) any new funding commitments and capital implications.
Re the latter: if the investment is large in the context of the firm as a whole, then the discount rate applied by outside investors to the (private) firm's equity may be adjusted upwards to reflect the new level of risk, thus impacting future financing activities and overall valuation.
More sophisticated treatments will thus produce accompanying sensitivity- and risk metrics, and will incorporate any inherent contingencies.
The focus of capital budgeting is on major "projects" - often investments in other firms, or expansion into new markets or geographies - but may extend also to new plants, new / replacement machinery, new products, and research and development programs;
day to day operational expenditure is the realm of financial management as below.
=== Investment and project valuation ===
In general, each "project's" value will be estimated using a discounted cash flow (DCF) valuation, and the opportunity with the highest value, as measured by the resultant net present value (NPV) will be selected (first applied in a corporate finance setting by Joel Dean in 1951). This requires estimating the size and timing of all of the incremental cash flows resulting from the project. Such future cash flows are then discounted to determine their present value (see Time value of money). These present values are then summed, and this sum net of the initial investment outlay is the NPV. See Financial modeling § Accounting for general discussion, and Valuation using discounted cash flows for the mechanics, with discussion re modifications for corporate finance.
The NPV is greatly affected by the discount rate. Thus, identifying the proper discount rate – often termed, the project "hurdle rate" – is critical to choosing appropriate projects and investments for the firm. The hurdle rate is the minimum acceptable return on an investment – i.e., the project appropriate discount rate. The hurdle rate should reflect the riskiness of the investment, typically measured by volatility of cash flows, and must take into account the project-relevant financing mix. Managers use models such as the CAPM or the APT to estimate a discount rate appropriate for a particular project, and use the weighted average cost of capital (WACC) to reflect the financing mix selected. (A common error in choosing a discount rate for a project is to apply a WACC that applies to the entire firm. Such an approach may not be appropriate where the risk of a particular project differs markedly from that of the firm's existing portfolio of assets.)
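As an illustration of the above, a minimal Python sketch follows, with the WACC used as the project hurdle rate; the capital structure, rates, and cash flows are all invented for the example:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital; the debt leg is after tax,
    reflecting the deductibility of interest."""
    total = equity + debt
    return (equity * cost_equity + debt * cost_debt * (1 - tax_rate)) / total

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical capital structure: 60 equity at 12%, 40 debt at 6%, 25% tax.
hurdle = wacc(60, 40, 0.12, 0.06, 0.25)            # 0.09, i.e. 9%
# Hypothetical project: a 1000 outlay, then five annual inflows of 300.
project = [-1000, 300, 300, 300, 300, 300]
print(f"hurdle = {hurdle:.1%}, NPV = {npv(hurdle, project):.2f}")  # NPV ~ 166.9 > 0
```

Since the NPV is positive at this hurdle rate, the sketch's project would be accepted under the criterion described above.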
In conjunction with NPV, there are several other measures used as (secondary) selection criteria in corporate finance; see Capital budgeting § Ranked projects. These are visible from the DCF and include discounted payback period, IRR, Modified IRR, equivalent annuity, capital efficiency, and ROI.
Alternatives (complements) to the standard DCF, model economic profit as opposed to free cash flow; these include residual income valuation, MVA / EVA (Joel Stern, Stern Stewart & Co) and APV (Stewart Myers). With the cost of capital correctly and correspondingly adjusted, these valuations should yield the same result as the DCF. These may, however, be considered more appropriate for projects with negative free cash flow several years out, but which are expected to generate positive cash flow thereafter (and may also be less sensitive to terminal value).
=== Sensitivity and scenario analysis ===
Given the uncertainty inherent in project forecasting and valuation, analysts will wish to assess the sensitivity of project NPV to the various inputs (i.e. assumptions) to the DCF model. In a typical sensitivity analysis the analyst will vary one key factor while holding all other inputs constant, ceteris paribus. The sensitivity of NPV to a change in that factor is then observed, and is calculated as a "slope": ΔNPV / Δfactor. For example, the analyst will determine NPV at various growth rates in annual revenue as specified (usually at set increments, e.g. -10%, -5%, 0%, 5%...), and then determine the sensitivity using this formula. Often, several variables may be of interest, and their various combinations produce a "value-surface" (or even a "value-space"), where NPV is then a function of several variables. See also Stress testing.
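A minimal Python sketch of such a one-factor sensitivity run follows; the cash-flow model (a 1000 outlay, base revenue of 500, and a 60% margin) is entirely an assumption for illustration:

```python
def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def project_npv(growth, rate=0.10):
    """Hypothetical DCF: a 1000 outlay, then five years of a 60% margin
    on revenue that starts at 500 and grows at the given rate."""
    revenue, flows = 500.0, [-1000.0]
    for _ in range(5):
        revenue *= 1 + growth
        flows.append(0.6 * revenue)
    return npv(rate, flows)

# NPV at set increments of revenue growth, all other inputs held constant:
for g in (-0.10, -0.05, 0.0, 0.05, 0.10):
    print(f"growth {g:+.0%}: NPV = {project_npv(g):8.2f}")

# The sensitivity, ΔNPV / Δfactor, approximated between two increments:
slope = (project_npv(0.05) - project_npv(0.0)) / 0.05
print(f"slope ~ {slope:.1f} of NPV per unit of growth")
```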
Using a related technique, analysts also run scenario based forecasts of NPV. Here, a scenario comprises a particular outcome for economy-wide, "global" factors (demand for the product, exchange rates, commodity prices, etc.) as well as for company-specific factors (unit costs, etc.). As an example, the analyst may specify various revenue growth scenarios (e.g. -5% for "Worst Case", +5% for "Likely Case" and +15% for "Best Case"), where all key inputs are adjusted so as to be consistent with the growth assumptions, and calculate the NPV for each. Note that for scenario based analysis, the various combinations of inputs must be internally consistent (see discussion at Financial modeling), whereas for the sensitivity approach these need not be so. An application of this methodology is to determine an "unbiased" NPV, where management determines a (subjective) probability for each scenario – the NPV for the project is then the probability-weighted average of the various scenarios; see First Chicago Method. (See also rNPV, where cash flows, as opposed to scenarios, are probability-weighted.)
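A minimal sketch of the probability-weighted ("unbiased") NPV follows; the scenario NPVs and subjective probabilities are invented for illustration:

```python
# Each scenario bundles internally consistent assumptions into a single NPV;
# the NPVs and (subjective) probabilities below are invented for illustration.
scenarios = {
    "worst":  (0.25, -150.0),
    "likely": (0.50,  120.0),
    "best":   (0.25,  400.0),
}
unbiased_npv = sum(p * v for p, v in scenarios.values())
print(unbiased_npv)  # 0.25*(-150) + 0.50*120 + 0.25*400 = 122.5
```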
=== Quantifying uncertainty ===
A further advancement which "overcomes the limitations of sensitivity and scenario analyses by examining the effects of all possible combinations of variables and their realizations" is to construct stochastic or probabilistic financial models – as opposed to the traditional static and deterministic models as above. For this purpose, the most common method is to use Monte Carlo simulation to analyze the project's NPV. This method was introduced to finance by David B. Hertz in 1964, although it has only recently become common: today analysts are even able to run simulations in spreadsheet based DCF models, typically using a risk-analysis add-in, such as @Risk or Crystal Ball. Here, the cash flow components that are (heavily) impacted by uncertainty are simulated, mathematically reflecting their "random characteristics". In contrast to the scenario approach above, the simulation produces several thousand random but possible outcomes, or trials, "covering all conceivable real world contingencies in proportion to their likelihood;" see Monte Carlo Simulation versus "What If" Scenarios. The output is then a histogram of project NPV, and the average NPV of the potential investment – as well as its volatility and other sensitivities – is then observed. This histogram provides information not visible from the static DCF: for example, it allows for an estimate of the probability that a project has a net present value greater than zero (or any other value).
Continuing the above example: instead of assigning three discrete values to revenue growth, and to the other relevant variables, the analyst would assign an appropriate probability distribution to each variable (commonly triangular or beta), and, where possible, specify the observed or supposed correlation between the variables. These distributions would then be "sampled" repeatedly – incorporating this correlation – so as to generate several thousand random but possible scenarios, with corresponding valuations, which are then used to generate the NPV histogram. The resultant statistics (average NPV and standard deviation of NPV) will be a more accurate mirror of the project's "randomness" than the variance observed under the scenario based approach. (These are often used as estimates of the underlying "spot price" and volatility for the real option valuation below; see Real options valuation § Valuation inputs.) A more robust Monte Carlo model would include the possible occurrence of risk events - e.g., a credit crunch - that drive variations in one or more of the DCF model inputs.
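A minimal sketch of such a simulation follows, using Python's standard library in place of a commercial add-in; the triangular distribution parameters and the underlying cash-flow model are assumptions:

```python
import random

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def one_trial(rng, rate=0.10):
    """One random trial: revenue growth sampled from an assumed triangular
    distribution, fed through the same hypothetical DCF as above."""
    growth = rng.triangular(-0.10, 0.15, 0.05)   # low, high, mode
    revenue, flows = 500.0, [-1000.0]
    for _ in range(5):
        revenue *= 1 + growth
        flows.append(0.6 * revenue)
    return npv(rate, flows)

rng = random.Random(7)
trials = [one_trial(rng) for _ in range(10_000)]
mean_npv = sum(trials) / len(trials)
p_positive = sum(t > 0 for t in trials) / len(trials)
print(f"mean NPV ~ {mean_npv:.1f}; P(NPV > 0) ~ {p_positive:.1%}")
```

The resulting list of trial NPVs is exactly the histogram data described above, from which the probability of a positive NPV is read off directly.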
=== Valuing flexibility ===
Often - for example R&D projects - a project may open (or close) various paths of action to the company, but this reality will not (typically) be captured in a strict NPV approach. Some analysts account for this uncertainty by adjusting the discount rate (e.g. by increasing the cost of capital) or the cash flows (using certainty equivalents, or applying (subjective) "haircuts" to the forecast numbers; see Penalized present value). Even when employed, however, these latter methods do not normally properly account for changes in risk over the project's lifecycle and hence fail to appropriately adapt the risk adjustment. Management will therefore (sometimes) employ tools which place an explicit value on these options. So, whereas in a DCF valuation the most likely or average or scenario specific cash flows are discounted, here the "flexible and staged nature" of the investment is modelled, and hence "all" potential payoffs are considered. See further under Real options valuation. The difference between the two valuations is the "value of flexibility" inherent in the project.
The two most common tools are Decision Tree Analysis (DTA) and real options valuation (ROV); they may often be used interchangeably:
DTA values flexibility by incorporating possible events (or states) and consequent management decisions. (For example, a company would build a factory given that demand for its product exceeded a certain level during the pilot-phase, and outsource production otherwise. In turn, given further demand, it would similarly expand the factory, and maintain it otherwise. In a DCF model, by contrast, there is no "branching" – each scenario must be modelled separately.) In the decision tree, each management decision in response to an "event" generates a "branch" or "path" which the company could follow; the probabilities of each event are determined or specified by management. Once the tree is constructed: (1) "all" possible events and their resultant paths are visible to management; (2) given this "knowledge" of the events that could follow, and assuming rational decision making, management chooses the branches (i.e. actions) corresponding to the highest value path probability weighted; (3) this path is then taken as representative of project value. See Decision theory § Choice under uncertainty.
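A toy sketch of this backward roll-up follows; the actions, payoffs, and probabilities are invented, echoing the factory illustration above:

```python
def chance(branches):
    """Value of a chance node: probability-weighted average of its branches."""
    return sum(p * v for p, v in branches)

def decide(actions):
    """Value of a decision node: management picks the best available action."""
    return max(actions.values())

# Invented example following the factory illustration above:
high_demand = decide({"expand factory": 500.0, "maintain": 300.0})
low_demand  = decide({"expand factory": -200.0, "maintain": 50.0})
# Pilot phase outcome: 60% chance demand exceeds the threshold.
project_value = chance([(0.60, high_demand), (0.40, low_demand)])
print(project_value)  # 0.6*500 + 0.4*50 = 320.0
```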
ROV is usually used when the value of a project is contingent on the value of some other asset or underlying variable. (For example, the viability of a mining project is contingent on the price of gold; if the price is too low, management will abandon the mining rights, if sufficiently high, management will develop the ore body. Again, a DCF valuation would capture only one of these outcomes.) Here: (1) using financial option theory as a framework, the decision to be taken is identified as corresponding to either a call option or a put option; (2) an appropriate valuation technique is then employed – usually a variant on the binomial options model or a bespoke simulation model, while Black–Scholes type formulae are used less often; see Contingent claim valuation. (3) The "true" value of the project is then the NPV of the "most likely" scenario plus the option value. (Real options in corporate finance were first discussed by Stewart Myers in 1977; viewing corporate strategy as a series of options was originally per Timothy Luehrman, in the late 1990s.) See also § Option pricing approaches under Business valuation.
== Dividend policy ==
Dividend policy is concerned with financial policies regarding the payment of a cash dividend in the present, or retaining earnings and then paying an increased dividend at a later stage.
The policy will be set based upon the type of company and what management determines is the best use of those dividend resources for the firm and its shareholders.
Practical and theoretical considerations - interacting with the above funding and investment decisioning, and re overall firm value - will inform this thinking.
=== Considerations ===
In general, whether to issue dividends, and what amount, is determined on the basis of the company's unappropriated profit (excess cash) and influenced by the company's long-term earning power. In all instances, as above, the appropriate dividend policy is in parallel directed by that which maximizes long-term shareholder value.
When cash surplus exists and is not needed by the firm, then management is expected to pay out some or all of those surplus earnings in the form of cash dividends or to repurchase the company's stock through a share buyback program.
Thus, if there are no NPV positive opportunities, i.e. projects where returns exceed the hurdle rate, and excess cash surplus is not needed, then management should return (some or all of) the excess cash to shareholders as dividends.
This is the general case, however the "style" of the stock may also impact the decision. Shareholders of a "growth stock", for example, expect that the company will retain (most of) the excess cash surplus so as to fund future projects internally to help increase the value of the firm. Shareholders of value- or secondary stocks, on the other hand, would prefer management to pay surplus earnings in the form of cash dividends, especially when a positive return cannot be earned through the reinvestment of undistributed earnings; a share buyback program may be accepted when the value of the stock is greater than the returns to be realized from the reinvestment of undistributed profits.
Management will also choose the form of the dividend distribution, as stated, generally as cash dividends or via a share buyback. Various factors may be taken into consideration: where shareholders must pay tax on dividends, firms may elect to retain earnings or to perform a stock buyback, in both cases increasing the value of shares outstanding. Alternatively, some companies will pay "dividends" from stock rather than in cash or via a share buyback as mentioned; see Corporate action.
=== Dividend theory ===
As for capital structure above, there are several schools of thought on dividends, in particular re their impact on firm value.
A key consideration will be whether there are any tax disadvantages associated with dividends: i.e. dividends attract a higher tax rate as compared, e.g., to capital gains; see dividend tax and Retained earnings § Tax implications.
Here, per the abovementioned Modigliani–Miller theorem:
if there are no such disadvantages - and companies can raise equity finance cheaply, i.e. can issue stock at low cost - then dividend policy is value neutral;
if dividends suffer a tax disadvantage, then increasing dividends should reduce firm value.
Regardless, but particularly in the second (more realistic) case, other considerations apply.
The first set of these, relates to investor preferences and behavior (see Clientele effect).
Investors are seen to prefer a “bird in the hand” - i.e. cash dividends are certain as compared to income from future capital gains - and in fact, commonly employ some form of dividend valuation model in valuing shares.
Relatedly, investors will then prefer a stable or "smooth" dividend payout - as far as is reasonable given earnings prospects and sustainability - which will then positively impact share price; see Lintner model.
Cash dividends may also allow management to convey (insider) information about corporate performance; increasing a company's dividend payout may then predict (or lead to) favorable performance of the company's stock in the future; see Dividend signaling hypothesis.
The second set relates to management's thinking re capital structure and earnings, overlapping the above.
Under a "Residual dividend policy" - i.e. as contrasted with a "smoothed" payout policy - the firm will use retained profits to finance capital investments if cheaper than the same via equity financing; see again Pecking order theory.
Similarly, under the Walter model, dividends are paid only if capital retained will earn a higher return than that available to investors (proxied: ROE > Ke).
Management may also want to "manipulate" the capital structure - in this context, by paying or not paying dividends - such that earnings per share are maximized; see again, Capital structure substitution theory.
== Working capital management ==
Managing the corporation's working capital position so as to sustain ongoing business operations is referred to as working capital management.
This entails, essentially, managing the relationship between a firm's short-term assets and its short-term liabilities, conscious of various considerations.
Here, as above, the goal of corporate finance is the maximization of firm value. In the context of long-term capital budgeting, firm value is enhanced through appropriately selecting and funding NPV-positive investments. These investments, in turn, have implications in terms of cash flow and cost of capital.
The goal of working capital (i.e. short-term) management is therefore to ensure that the firm is able to operate, and that it has sufficient cash flow to service long-term debt, and to satisfy both maturing short-term debt and upcoming operational expenses. In so doing, firm value is enhanced when, and if, the return on capital exceeds the cost of capital; see Economic value added (EVA). Managing short-term finance along with long-term finance is therefore one task of a modern CFO.
=== Working capital ===
Working capital is the amount of funds that are necessary for an organization to continue its ongoing business operations, until the firm is reimbursed through payments for the goods or services it has delivered to its customers. Working capital is measured through the difference between resources in cash or readily convertible into cash (Current Assets), and cash requirements (Current Liabilities). As a result, capital resource allocations relating to working capital are always current, i.e. short-term.
In addition to time horizon, working capital management differs from capital budgeting in terms of discounting and profitability considerations; decisions here are also "reversible" to a much larger extent. (Considerations as to risk appetite and return targets remain identical, although some constraints – such as those imposed by loan covenants – may be more relevant here).
The (short term) goals of working capital are therefore not approached on the same basis as (long term) profitability, and working capital management applies different criteria in allocating resources: the main considerations are (1) cash flow / liquidity and (2) profitability / return on capital (of which cash flow is probably the most important).
The most widely used measure of cash flow is the net operating cycle, or cash conversion cycle. This represents the time difference between cash payment for raw materials and cash collection for sales. The cash conversion cycle indicates the firm's ability to convert its resources into cash. Because this number effectively corresponds to the time that the firm's cash is tied up in operations and unavailable for other activities, management generally aims to keep it low. (Another measure is the gross operating cycle, which is the same as the net operating cycle except that it does not take into account the creditors' deferral period.)
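A minimal sketch of the calculation follows, using the common days-based decomposition (inventory days + receivable days − payable days); the figures are invented:

```python
def cash_conversion_cycle(inventory_days, receivable_days, payable_days):
    """Net operating cycle: days cash is tied up between paying suppliers
    and collecting from customers."""
    return inventory_days + receivable_days - payable_days

# Invented figures: 60 days of inventory, 45 days to collect, 30 days to pay.
print(cash_conversion_cycle(60, 45, 30))   # 75 day net cycle
print(60 + 45)                             # 105 day gross cycle (ignores payables)
```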
In this context, the most useful measure of profitability is return on capital (ROC). The result is shown as a percentage, determined by dividing relevant income for the 12 months by capital employed; return on equity (ROE) shows this result for the firm's shareholders. As outlined, firm value is enhanced when, and if, the return on capital exceeds the cost of capital.
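A minimal sketch of this test follows, with EVA expressed as the spread of ROC over the cost of capital applied to the capital base; the figures are invented:

```python
def roc(operating_income, capital_employed):
    """Return on capital over the trailing period, as a fraction."""
    return operating_income / capital_employed

def eva(operating_income, capital_employed, cost_of_capital):
    """Economic value added: the spread of ROC over the cost of capital,
    applied to the capital base."""
    return (roc(operating_income, capital_employed) - cost_of_capital) * capital_employed

# Illustrative figures: 120 of operating income on 1000 of capital, 9% cost of capital.
print(roc(120, 1000))        # 0.12
print(eva(120, 1000, 0.09))  # 30.0: value is created since ROC > cost of capital
```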
=== Management of working capital ===
Guided by the above criteria, management will use a combination of policies and techniques for the management of working capital. These policies, as outlined, aim at managing the current assets (generally cash and cash equivalents, inventories and debtors) and the short term financing, such that cash flows and returns are acceptable.
Cash management. Identify the cash balance which allows for the business to meet day to day expenses, but reduces cash holding costs.
Inventory management. Identify the level of inventory which allows for uninterrupted production but reduces the investment in raw materials – and minimizes reordering costs – and hence increases cash flow. See discussion under Inventory optimization and Supply chain management.
Debtors management. There are two inter-related roles here: (1) Identify the appropriate credit policy, i.e. credit terms which will attract customers, such that any impact on cash flows and the cash conversion cycle will be offset by increased revenue and hence Return on Capital (or vice versa); see Discounts and allowances. (2) Implement appropriate credit scoring policies and techniques such that the risk of default on any new business is acceptable given these criteria.
Short term financing. Identify the appropriate source of financing, given the cash conversion cycle: the inventory is ideally financed by credit granted by the supplier; however, it may be necessary to utilize a bank loan (or overdraft), or to "convert debtors to cash" through "factoring"; see generally, trade finance.
== Other areas ==
=== Investment banking ===
As discussed, corporate finance comprises the activities, analytical methods, and techniques that deal with the company's long-term investments, finances and capital.
Re the latter, when capital must be raised for the corporation or shareholders, the "corporate finance team" will engage its investment bank.
The bank will then facilitate the required share listing (IPO or SEO) or bond issuance, as appropriate given the above analysis. Thereafter the bank will work closely with the corporate re servicing the new securities, and managing its presence in the capital markets more generally (offering advisory, financial advisory, deal advisory, and/or transaction advisory services).
Use of the term "corporate finance", correspondingly, varies considerably across the world.
In the United States, "Corporate Finance" corresponds to the first usage.
A professional here may be referred to as a "corporate finance analyst" and will typically be based in the FP&A area, reporting to the CFO.
See Financial analyst § Financial planning and analysis.
In the United Kingdom and Commonwealth countries, on the other hand, "corporate finance" and "corporate financier" are associated with investment banking.
=== Financial risk management ===
Financial risk management, generally, is focused on measuring and managing market risk, credit risk and operational risk.
Within corporates (i.e. as opposed to banks), the scope extends to preserving (and enhancing) the firm's economic value.
It will then overlap both corporate finance and enterprise risk management: addressing risks to the firm's overall strategic objectives, by focusing on the financial exposures and opportunities arising from business decisions, and their link to the firm's appetite for risk, as well as their impact on share price.
(In large firms, Risk Management typically exists as an independent function, with the CRO consulted on capital-investment and other strategic decisions.)
Re corporate finance, both operational and funding issues are addressed; respectively:
Businesses actively manage any impact on profitability, cash flow, and hence firm value, due to credit and operational factors - this, overlapping "working capital management" to a large extent. Firms then devote much time and effort to forecasting, analytics and performance monitoring (the above analyst role). See also "ALM" and treasury management.
Firm exposure to market (and business) risk is a direct result of previous capital investments and funding decisions: where applicable here, typically in large corporates and under guidance from their investment bankers, firms actively manage and hedge these exposures using traded financial instruments, usually standard derivatives, creating interest rate-, commodity- and foreign exchange hedges; see Cash flow hedge.
=== Corporate governance ===
Broadly, corporate governance considers the mechanisms, processes, practices, and relations by which corporations are controlled and operated by their board of directors, managers, shareholders, and other stakeholders.
In the context of corporate finance, a more specific concern will be that executives do not "serve their own vested interests" to the detriment of capital providers.
There are several interrelated considerations:
As regards investments: acquisitions and takeovers may be driven by management interests (a larger company) rather than stockholder interests; managers may then overpay on investments, reducing firm value.
Several issues inhere also in the capital structure and management will be expected to balance these: Stockholders, with "potentially unlimited" upside, have an incentive to take riskier projects than bondholders, who earn a fixed return.
Stockholders will also wish to pay more out in dividends than bondholders would like them to.
In general, here, debt may be seen as "an internal means of controlling management", which has to work hard to ensure that repayments are met, balancing these interests, and also limiting the possibility of overpaying on investments.
Granting Executive stock options, alternatively or in parallel, is seen as a mechanism to align management with stockholder interests.
A more formal treatment is offered under agency theory, where these problems and approaches can be seen, and hence analysed, as real options; see Principal–agent problem § Options framework for discussion.
== See also ==
== Notes ==
== References ==
== Bibliography ==
Jonathan Berk; Peter DeMarzo (2013). Corporate Finance (3rd ed.). Pearson. ISBN 978-0132992473.
Peter Bossaerts; Bernt Arne Ødegaard (2006). Lectures on Corporate Finance (Second ed.). World Scientific. ISBN 978-981-256-899-1.
Richard Brealey; Stewart Myers; Franklin Allen (2013). Principles of Corporate Finance. Mcgraw-Hill. ISBN 978-0078034763.
CFA Institute (2022). Corporate Finance: Economic Foundations and Financial Modeling (3rd ed.). Wiley. ISBN 978-1119743767.
Donald H. Chew, ed. (2000). The New Corporate Finance: Where Theory Meets Practice (3rd ed.). Non Basic Stock Line. ISBN 978-0071120432.
Thomas E. Copeland; J. Fred Weston; Kuldeep Shastri (2004). Financial Theory and Corporate Policy (4th ed.). Pearson. ISBN 978-0321127211.
Julie Dahlquist; Rainford Knight; Alan S. Adams (2022). Principles of Finance. ISBN 9781951693541.
Aswath Damodaran (2001). Corporate Finance: Theory and Practice (2nd ed.). Wiley. ISBN 978-0471283324.
Aswath Damodaran (2014). Applied Corporate Finance (4th ed.). Wiley. ISBN 978-1118808931.
João Amaro de Matos (2001). Theoretical Foundations of Corporate Finance. Princeton University Press. ISBN 9780691087948.
Tim Koller, Marc Goedhart, David Wessels (McKinsey & Company) (2020). Valuation: Measuring and Managing the Value of Companies (7th ed.). John Wiley & Sons. ISBN 978-1119610885
Joseph Ogden; Frank C. Jen; Philip F. O'Connor (2002). Advanced Corporate Finance. Prentice Hall. ISBN 978-0130915689.
C. Krishnamurti; S. R. Vishwanath (2010). Advanced Corporate Finance. MediaMatics. ISBN 978-8120336117.
Pascal Quiry; Yann Le Fur; Antonio Salvi; Maurizio Dallochio; Pierre Vernimmen (2011). Corporate Finance: Theory and Practice (3rd ed.). Wiley. ISBN 978-1119975588.
Stephen Ross; Randolph Westerfield; Jeffrey Jaffe (2012). Corporate Finance (10th ed.). Mcgraw-Hill. ISBN 978-0078034770.
Joel M. Stern, ed. (2003). The Revolution in Corporate Finance (4th ed.). Wiley-Blackwell. ISBN 9781405107815.
Jean Tirole (2006). The Theory of Corporate Finance. Princeton University Press. ISBN 0691125562.
Ivo Welch (2017). Corporate Finance (4th ed.). ISBN 9780984004928.
== Further reading ==
Jensen, Michael C.; Smith, Clifford W. (29 September 2000). The Theory of Corporate Finance: A Historical Overview. SSRN 244161. In The Modern Theory of Corporate Finance, edited by Michael C. Jensen and Clifford W. Smith Jr., pp. 2–20. McGraw-Hill, 1990. ISBN 0070591091
Graham, John R.; Harvey, Campbell R. (1999). "The Theory and Practice of Corporate Finance: Evidence from the Field". AFA 2001 New Orleans; Duke University Working Paper. SSRN 220251.
== External links ==
Corporate Finance Overview - Corporate Finance Institute
Corporate Finance Glossary - Pierre Vernimmen
Corporate finance resources - Aswath Damodaran
Financial management resources - James Van Horne
Financial analysis items - Fincyclopedia
The control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity.
== Underlying principle ==
Let the unknown parameter of interest be $\mu$, and assume we have a statistic $m$ such that the expected value of $m$ is $\mu$: $\mathbb{E}[m] = \mu$, i.e. $m$ is an unbiased estimator for $\mu$. Suppose we calculate another statistic $t$ such that $\mathbb{E}[t] = \tau$ is a known value. Then

$$m^{\star} = m + c\,(t - \tau)$$

is also an unbiased estimator for $\mu$, for any choice of the coefficient $c$.
The variance of the resulting estimator $m^{\star}$ is

$$\operatorname{Var}(m^{\star}) = \operatorname{Var}(m) + c^{2}\,\operatorname{Var}(t) + 2c\,\operatorname{Cov}(m,t).$$
By differentiating the above expression with respect to $c$ and setting the derivative to zero, $2c\,\operatorname{Var}(t) + 2\operatorname{Cov}(m,t) = 0$, it can be shown that the optimal coefficient

$$c^{\star} = -\frac{\operatorname{Cov}(m,t)}{\operatorname{Var}(t)}$$

minimizes the variance of $m^{\star}$. (Note that this coefficient is the same as the coefficient obtained from a linear regression.) With this choice,

$$\operatorname{Var}(m^{\star}) = \operatorname{Var}(m) - \frac{\left[\operatorname{Cov}(m,t)\right]^{2}}{\operatorname{Var}(t)} = \left(1 - \rho_{m,t}^{2}\right)\operatorname{Var}(m)$$

where $\rho_{m,t} = \operatorname{Corr}(m,t)$ is the correlation coefficient of $m$ and $t$. The greater the value of $|\rho_{m,t}|$, the greater the variance reduction achieved.
In the case that $\operatorname{Cov}(m,t)$, $\operatorname{Var}(t)$, and/or $\rho_{m,t}$ are unknown, they can be estimated across the Monte Carlo replicates. This is equivalent to solving a certain least squares system; therefore this technique is also known as regression sampling.
When the expectation of the control variable, $\mathbb{E}[t] = \tau$, is not known analytically, it is still possible to increase the precision in estimating $\mu$ (for a given fixed simulation budget), provided that two conditions are met: 1) evaluating $t$ is significantly cheaper than computing $m$; 2) the magnitude of the correlation coefficient $|\rho_{m,t}|$ is close to unity.
== Example ==
We would like to estimate

$$I = \int_{0}^{1} \frac{1}{1+x}\,\mathrm{d}x$$

using Monte Carlo integration. This integral is the expected value of $f(U)$, where $f(U) = \frac{1}{1+U}$ and $U$ follows a uniform distribution on $[0, 1]$.
Using a sample of size $n$, denote the points in the sample as $u_{1}, \cdots, u_{n}$. Then the estimate is given by

$$I \approx \frac{1}{n} \sum_{i} f(u_{i}).$$
Now we introduce $g(U) = 1 + U$ as a control variate with a known expected value

$$\mathbb{E}\left[g(U)\right] = \int_{0}^{1} (1+x)\,\mathrm{d}x = \tfrac{3}{2}$$
and combine the two into a new estimate

$$I \approx \frac{1}{n} \sum_{i} f(u_{i}) + c\left(\frac{1}{n} \sum_{i} g(u_{i}) - \tfrac{3}{2}\right).$$
Using $n = 1500$ realizations and an estimated optimal coefficient $c^{\star} \approx 0.4773$, the variance is significantly reduced after using the control variates technique. (The exact result is $I = \ln 2 \approx 0.69314718$.)
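A minimal Python sketch of this example follows; the seed is arbitrary, and the coefficient is estimated from the same sample (a common shortcut that introduces a small, usually negligible, bias):

```python
import random

def control_variate_estimate(n=1500, seed=42):
    """Estimate I = E[1/(1+U)] using g(U) = 1 + U as the control variate,
    with the coefficient c* estimated from the same sample."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    f = [1.0 / (1.0 + u) for u in us]
    g = [1.0 + u for u in us]

    mean_f, mean_g = sum(f) / n, sum(g) / n
    cov_fg = sum((fi - mean_f) * (gi - mean_g) for fi, gi in zip(f, g)) / (n - 1)
    var_g = sum((gi - mean_g) ** 2 for gi in g) / (n - 1)
    c_star = -cov_fg / var_g                 # sample analogue of -Cov(f,g)/Var(g)

    return mean_f + c_star * (mean_g - 1.5)  # E[g(U)] = 3/2 is known exactly

print(control_variate_estimate())  # close to ln 2 ~ 0.6931
```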
== See also ==
Antithetic variates
Importance sampling
== Notes ==
== References ==
Ross, Sheldon M. (2002) Simulation 3rd edition ISBN 978-0-12-598053-1
Averill M. Law & W. David Kelton (2000), Simulation Modeling and Analysis, 3rd edition. ISBN 0-07-116537-1
S. P. Meyn (2007) Control Techniques for Complex Networks, Cambridge University Press. ISBN 978-0-521-88441-9. Downloadable draft (Section 11.4: Control variates and shadow functions)
In finance, an interest rate derivative (IRD) is a derivative whose payments are determined through calculation techniques where the underlying benchmark product is an interest rate, or set of different interest rates. There are a multitude of different interest rate indices that can be used in this definition.
IRDs are popular with all financial market participants given the need for almost any area of finance to either hedge or speculate on the movement of interest rates.
Modeling of interest rate derivatives is usually done on a time-dependent multi-dimensional lattice ("tree") or using specialized simulation models. Both are calibrated to the underlying risk drivers, usually domestic or foreign short rates and foreign exchange market rates, and incorporate delivery- and day count conventions. The Heath–Jarrow–Morton framework is often used instead of short rates.
== Types ==
The most basic subclassification of interest rate derivatives (IRDs) is to define linear and non-linear.
Further classification of the above is then made to define vanilla (or standard) IRDs and exotic IRDs; see exotic derivative.
=== Linear and non-linear ===
Linear IRDs are those whose net present values (PVs) are overwhelmingly (although not necessarily entirely) dictated by, and undergo changes approximately proportional to, the one-to-one movement of the underlying interest rate index. Examples of linear IRDs are interest rate swaps (IRSs), forward rate agreements (FRAs), zero coupon swaps (ZCSs), cross-currency basis swaps (XCSs) and single currency basis swaps (SBSs).
Non-linear IRDs form the set of remaining products: those whose PVs are commonly dictated by more than the one-to-one movement of the underlying interest rate index. Examples of non-linear IRDs are swaptions, interest rate caps and floors, and constant maturity swaps (CMSs). These products' PVs are reliant upon volatility, so their pricing is often more complex, as is the nature of their risk management.
=== Vanilla and exotic ===
The categorisation into linear and non-linear, and into vanilla and exotic, is not universally acknowledged, and a number of products can arguably be assigned to different categories. These terms may also overlap.
"Vanilla", in "vanilla IRSs" and "vanilla swaptions", is often taken to mean the basic, most liquid and commonly traded variants of those products.
Exotic is usually used to define a feature that is an extension to an IRD type. For example, an in-arrears IRS is a genuine example of an exotic IRS, whereas an IRS whose structure was the same as vanilla but whose start and end dates might be unconventional, would not generally be classed as exotic. Typically this would be referred to as a bespoke IRS (or customised IRS). Bermudan swaptions are examples of swaption extensions that qualify as exotic variants.
Other products that are generally classed as exotics are power reverse dual currency notes (PRDC or Turbo), target redemption notes (TARN), CMS steepeners, snowballs, inverse floaters, strips of collateralized mortgage obligations, ratchet caps and floors, and cross currency swaptions.
== Trivia ==
The interest rate derivatives market is the largest derivatives market in the world. The Bank for International Settlements estimates that the notional amount outstanding in June 2012 was US$494 trillion for OTC interest rate contracts, and US$342 trillion for OTC interest rate swaps. According to the International Swaps and Derivatives Association, 80% of the world's top 500 companies as of April 2003 used interest rate derivatives to control their cashflows. This compares with 75% for foreign exchange options, 25% for commodity options and 10% for stock options.
== See also ==
Financial modeling
Mathematical finance
Multi-curve framework
== References ==
== Further reading ==
J H M Darbyshire (2017). Pricing and Trading Interest Rate Derivatives (2nd ed.). Aitch and Dee Ltd. ISBN 978-0995455528.
Leif B.G. Andersen, Vladimir V. Piterbarg (2010). Interest Rate Modeling in Three Volumes (1st ed.). Atlantic Financial Press. ISBN 978-0-9844221-0-4. Archived from the original on 8 February 2011.
Damiano Brigo, Fabio Mercurio (2006). Interest Rate Models – Theory and Practice with Smile, Inflation and Credit (2nd ed.). Springer Verlag. ISBN 978-3-540-22149-4.
John C. Hull (2005) Options, Futures and Other Derivatives, Sixth Edition. Prentice Hall. ISBN 0-13-149908-4
John F. Marshall (2000). Dictionary of Financial Engineering. Wiley. ISBN 0-471-24291-8
== External links ==
Basic Fixed Income Derivative Hedging – Article on Financial-edu.com.
Interest Rate Modeling by L. Andersen and V. Piterbarg
Pricing and Trading Interest Rate Derivatives by J H M Darbyshire
Online Analytics and Portfolio Management Tools by OCM Solutions Inc.
In quantitative finance, a lattice model is a numerical approach to the valuation of derivatives in situations requiring a discrete time model. For dividend paying equity options, a typical application would correspond to the pricing of an American-style option, where a decision to exercise is allowed at the closing of any calendar day up to the maturity. A continuous model, on the other hand, such as the standard Black–Scholes one, would only allow for the valuation of European options, where exercise is limited to the option's maturity date. For interest rate derivatives lattices are additionally useful in that they address many of the issues encountered with continuous models, such as pull to par. The method is also used for valuing certain exotic options, because of path dependence in the payoff. Traditional Monte Carlo methods for option pricing fail to account for optimal decisions to terminate the derivative by early exercise, but some methods now exist for solving this problem.
== Equity and commodity derivatives ==
In general the approach is to divide time between now and the option's expiration into N discrete periods. At the specific time n, the model has a finite number of outcomes at time n + 1 such that every possible change in the state of the world between n and n + 1 is captured in a branch. This process is iterated until every possible path between n = 0 and n = N is mapped. Probabilities are then estimated for every n to n + 1 path. The outcomes and probabilities flow backwards through the tree until a fair value of the option today is calculated.
=== Variants ===
For European equity- and commodity options the application is as follows. The first step is to trace the evolution of the option's key underlying variable(s), starting with today's spot price, such that this process is consistent with its volatility; log-normal Brownian motion with constant volatility is usually assumed. The next step is to value the option recursively: stepping backwards from the final time-step, where we have exercise value at each node; and applying risk neutral valuation at each earlier node, where option value is the probability-weighted present value of the up- and down-nodes in the later time-step. See Binomial options pricing model § Method for more detail, as well as Rational pricing § Risk neutral valuation for logic and formulae derivation.
As stated above, the lattice approach is particularly useful in valuing American options, where the choice whether to exercise the option early, or to hold the option, may be modeled at each discrete time/price combination; this is also true for Bermudan options. For similar reasons, real options and employee stock options are often modeled using a lattice framework, though with modified assumptions. In each of these cases, a third step is to determine whether the option is to be exercised or held, and to then apply this value at the node in question. Some exotic options, such as barrier options, are also easily modeled here; for other Path-Dependent Options, simulation would be preferred.
(Although tree-based methods have been developed for these as well.)
The simplest lattice model is the binomial options pricing model; the standard ("canonical") method is that proposed by Cox, Ross and Rubinstein (CRR) in 1979; see diagram for formulae. Over 20 other methods have been developed, with each "derived under a variety of assumptions" as regards the development of the underlying's price. In the limit, as the number of time-steps increases, these converge to the Log-normal distribution, and hence produce the "same" option price as Black-Scholes: to achieve this, these will variously seek to agree with the underlying's central moments, raw moments and / or log-moments at each time-step, as measured discretely. Further enhancements are designed to achieve stability relative to Black-Scholes as the number of time-steps changes. More recent models, in fact, are designed around direct convergence to Black-Scholes.
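For concreteness, a compact implementation of the canonical CRR recursion might look as follows; this is an illustrative sketch rather than a production pricer, and the parameter values in the demonstration calls are arbitrary:

```python
import math

def crr_price(S0, K, T, r, sigma, n, kind="put", american=True):
    """Cox-Ross-Rubinstein binomial price of a vanilla option.

    CRR convention: u = exp(sigma*sqrt(dt)), d = 1/u, and risk-neutral
    up-probability p = (exp(r*dt) - d) / (u - d).
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)

    def payoff(S):
        return max(K - S, 0.0) if kind == "put" else max(S - K, 0.0)

    # Option values at maturity (step n), indexed by j up-moves out of n.
    values = [payoff(S0 * u**j * d**(n - j)) for j in range(n + 1)]

    # Backward induction through the tree.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            S = S0 * u**j * d**(i - j)
            # For American options, compare continuation with early exercise.
            values[j] = max(cont, payoff(S)) if american else cont
    return values[0]

# American put vs its European counterpart (illustrative parameters).
print(crr_price(100, 100, 1.0, 0.05, 0.2, 500, american=True))
print(crr_price(100, 100, 1.0, 0.05, 0.2, 500, american=False))
```

The early-exercise comparison inside the backward loop is exactly the "third step" described above for American-style claims; dropping it recovers the European value.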
A variant on the binomial is the trinomial tree, developed by Phelim Boyle in 1986. Here, the share price may remain unchanged over the time-step, and option valuation is then based on the value of the share at the up-, down- and middle-nodes in the later time-step.
As for the binomial, a similar (although smaller) range of methods exist. The trinomial model is considered to produce more accurate results than the binomial model when fewer time steps are modelled, and is therefore used when computational speed or resources may be an issue. For vanilla options, as the number of steps increases, the results rapidly converge, and the binomial model is then preferred due to its simpler implementation. For exotic options the trinomial model (or adaptations) is sometimes more stable and accurate, regardless of step-size.
For multiple underlyers – e.g. Rainbow- and Basket options – "multinomial lattices" can be built. Each underlyer will have its own tree, and the per node option-value will be a function of the corresponding nodes on all underlying trees. In the case of two assets, the tree will then be referred to as a "binomial pyramid". Two additional complications exist here. First, the number of nodes increases exponentially with the number of underlyers. Second, in these products, correlations between assets play a significant role, and these must also inhere in the modelling.
=== Risk measurement and management ===
Various of the Greeks can be estimated directly on the lattice, where the sensitivities are calculated using finite differences. Delta and gamma, being sensitivities of option value w.r.t. price, are approximated given differences between option prices – with their related spot – in the same time step. Theta, sensitivity to time, is likewise estimated given the option price at the first node in the tree and the option price for the same spot in a later time step. (Second time step for trinomial, third for binomial. Depending on method, if the "down factor" is not the inverse of the "up factor", this method will not be precise.) For rho, sensitivity to interest rates, and vega, sensitivity to input volatility, the measurement is indirect, as the value must be calculated a second time on a new lattice built with these inputs slightly altered – and the sensitivity here is likewise returned via finite difference. See also Fugit, the estimated time to exercise for a non-European option, which is typically calculated using a lattice.
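The finite-difference reading of delta, gamma and theta off the tree can be sketched as below; a European call and the CRR convention d = 1/u (so that the initial spot recurs at the middle node two steps in) are assumed, with illustrative parameters:

```python
import math

# Build the full tree of option values, then read the Greeks off its nodes.
S0, K, T, r, sigma, n = 100.0, 100.0, 1.0, 0.05, 0.2, 500
dt = T / n
u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
p = (math.exp(r * dt) - d) / (u - d); disc = math.exp(-r * dt)

# V[i][j] = value at step i after j up-moves (European call for brevity).
V = [[0.0] * (i + 1) for i in range(n + 1)]
V[n] = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
for i in range(n - 1, -1, -1):
    for j in range(i + 1):
        V[i][j] = disc * (p * V[i + 1][j + 1] + (1 - p) * V[i + 1][j])

# Delta: difference quotient across the two step-1 nodes.
delta = (V[1][1] - V[1][0]) / (S0 * u - S0 * d)

# Gamma: second difference across the three step-2 nodes.
slope_up = (V[2][2] - V[2][1]) / (S0 * u * u - S0)
slope_dn = (V[2][1] - V[2][0]) / (S0 - S0 * d * d)
gamma = (slope_up - slope_dn) / (0.5 * (S0 * u * u - S0 * d * d))

# Theta: node (2, 1) carries the same spot S0 because d = 1/u.
theta = (V[2][1] - V[0][0]) / (2 * dt)

print(delta, gamma, theta)
```

Rho and vega would, as the text notes, require rebuilding the whole tree with bumped inputs and differencing the two resulting root values.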
Beginning with the 1987 crash, and especially since the 2008 financial crisis, it has become important to incorporate the volatility smile / surface into pricing models. This recognizes the fact that the underlying price-change distribution displays a term structure and is non-normal, unlike that assumed by Black-Scholes; see Financial economics § Derivative pricing and Valuation of options § Post crisis. To do so, banks typically apply stochastic- or local volatility models. In the Lattice framework, implied trees can be constructed; these essentially discretize the latter.
Here, the tree is solved such that it successfully reproduces selected (all) market prices, across various strikes and expirations. These trees thus "ensure that all European standard options (with strikes and maturities coinciding with the tree nodes) will have theoretical values which match their market prices". Using the calibrated lattice one can then price options with strike / maturity combinations not quoted in the market, such that these prices are consistent with observed volatility patterns. For risk management, the Greeks returned will reflect sensitivities more appropriately.
There exist both implied binomial trees, often Rubinstein IBTs (R-IBT), and implied trinomial trees, often Derman-Kani-Chriss (DKC; superseding the DK-IBT). The former is easier built, but is consistent with one maturity only; the latter will be consistent with, but at the same time requires, known (or interpolated) prices at all time-steps and nodes.
As regards the construction, for an R-IBT the first step is to recover the "Implied Ending Risk-Neutral Probabilities" of spot prices. Then by the assumption that all paths which lead to the same ending node have the same risk-neutral probability, a "path probability" is attached to each ending node. Thereafter "it's as simple as One-Two-Three", and a three step backwards recursion allows for the node probabilities to be recovered for each time step. Option valuation then proceeds as standard, with these probabilities replacing {\displaystyle p} above. For DKC, the first step is to recover the state prices corresponding to each node in the tree, such that these are consistent with observed option prices (i.e. with the volatility surface). Thereafter the up-, down- and middle-probabilities are found for each node such that: these sum to 1; spot prices adjacent time-step-wise evolve risk neutrally, incorporating dividend yield; state prices similarly "grow" at the risk free rate. (The solution here is iterative per time step as opposed to simultaneous.) As for R-IBTs, option valuation is then by standard backward recursion.
As an alternative, Edgeworth binomial trees may be employed, as these allow for an analyst-specified skew and kurtosis in spot-price returns (see Edgeworth series). Here, options with differing strikes will return differing implied volatilities, and the tree may then be calibrated to the volatility smile by a "judicious choice" of parameter values. For pricing American options, the valuation will be on an R-IBT as combined with the calibrated maturity distribution. The Edgeworth approach is limited as to the set of skewness and kurtosis pairs for which valid distributions are available; the more recent Johnson binomial trees, then, use the Johnson "family" of distributions, as this is capable of accommodating all possible pairs. Edgeworth (or Johnson) trees are also useful for other applications where the underlying's behavior departs (markedly) from normal. As an example, these trees can be applied to multinomial options: Basket options, for instance, can be priced using an "approximating distribution" which provides the end-nodes, and skew and kurtosis, on which the tree is then built.
Re the modelling of CVA / XVA via lattice, see below.
== Interest rate derivatives ==
Lattices are commonly used in valuing bond options, swaptions, and other interest rate derivatives. In these cases the valuation is largely as above, but requires an additional, zeroeth, step of constructing an interest rate tree, on which the price of the underlying is then based. The next step also differs: the underlying price here is built via "backward induction", i.e. it flows backwards from maturity, accumulating the present value of scheduled cash flows at each node, as opposed to flowing forwards from valuation date as above. The final step, option valuation, then proceeds as standard. See top for graphic, and aside for description.
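As an illustration of these steps, the following sketch values a zero-coupon bond, and then a European call on that bond, over a hypothetical (uncalibrated) recombining short-rate tree with risk-neutral probabilities of one half and continuous compounding; all numbers are placeholders:

```python
import math

# Hypothetical recombining short-rate tree (annual steps); rate_tree[i]
# holds the i+1 possible short rates at step i. Not a calibrated tree.
rate_tree = [
    [0.030],
    [0.025, 0.035],
    [0.020, 0.030, 0.040],
]
dt = 1.0

def disc(r):
    return math.exp(-r * dt)   # one-period discount factor at rate r

# "Zeroeth"/first steps: price a zero-coupon bond paying 1 at step 3 by
# backward induction, keeping the whole tree of bond prices.
n = len(rate_tree)
values = [1.0] * (n + 1)            # bond value at the maturity nodes
bond_tree = [None] * n
for i in range(n - 1, -1, -1):
    values = [disc(rate_tree[i][j]) * 0.5 * (values[j] + values[j + 1])
              for j in range(i + 1)]
    bond_tree[i] = values

# Final step: a European call on the bond, exercisable at step 1, strike 0.95.
K = 0.95
payoffs = [max(b - K, 0.0) for b in bond_tree[1]]
option = disc(rate_tree[0][0]) * 0.5 * (payoffs[0] + payoffs[1])
print("bond:", bond_tree[0][0], "option:", option)
```

Note that the discounting at each node uses that node's own short rate, which is what distinguishes this construction from the constant-rate equity trees above.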
=== Approaches ===
The initial lattice is built by discretizing either a short-rate model, such as Hull–White or Black–Derman–Toy, or a forward rate-based model, such as the LIBOR market model or HJM. As for equity, trinomial trees may also be employed for these models; this is usually the case for Hull–White trees.
Under HJM, the condition of no arbitrage implies that there exists a martingale probability measure, as well as a corresponding restriction on the "drift coefficients" of the forward rates. These, in turn, are functions of the volatility(s) of the forward rates. A "simple" discretized expression for the drift then allows for forward rates to be expressed in a binomial lattice. For these forward rate-based models, dependent on volatility assumptions, the lattice might not recombine. (This means that an "up-move" followed by a "down-move" will not give the same result as a "down-move" followed by an "up-move".) In this case, the Lattice is sometimes referred to as a "bush", and the number of nodes grows exponentially as a function of number of time-steps. A recombining binomial tree methodology is also available for the Libor Market Model.
As regards the short-rate models, these are, in turn, further categorized: these will be either equilibrium-based (Vasicek and CIR) or arbitrage-free (Ho–Lee and subsequent). The distinction is that for equilibrium-based models the yield curve is an output from the model, while for arbitrage-free models the yield curve is an input to the model. In the former case, the approach is to "calibrate" the model parameters, such that bond prices produced by the model, in its continuous form, best fit observed market prices. The tree is then built as a function of these parameters. In the latter case, the calibration is directly on the lattice: the fit is to both the current term structure of interest rates (i.e. the yield curve), and the corresponding volatility structure. Here, calibration means that the interest-rate-tree reproduces the prices of the zero-coupon bonds—and any other interest-rate sensitive securities—used in constructing the yield curve; note the parallel to the implied trees for equity above, and compare Bootstrapping (finance). For models assuming a normal distribution (such as Ho-Lee), calibration may be performed analytically, while for log-normal models the calibration is via a root-finding algorithm; see for example, the boxed-description under Black–Derman–Toy model.
The volatility structure—i.e. vertical node-spacing—here reflects the volatility of rates during the quarter, or other period, corresponding to the lattice time-step. (Some analysts use "realized volatility", i.e. of the rates applicable historically for the time-step; to be market-consistent, analysts generally prefer to use current interest rate cap prices, and the implied volatility for the Black-76-prices of each component caplet; see Interest rate cap § Implied Volatilities.)
Given this functional link to volatility, note now the resultant difference in the construction relative to equity implied trees: for interest rates, the volatility is known for each time-step, and the node-values (i.e. interest rates) must be solved for specified risk neutral probabilities; for equity, on the other hand, a single volatility cannot be specified per time-step, i.e. we have a "smile", and the tree is built by solving for the probabilities corresponding to specified values of the underlying at each node.
Once calibrated, the interest rate lattice is then used in the valuation of various of the fixed income instruments and derivatives.
The approach for bond options is described aside—note that this approach addresses the problem of pull to par experienced under closed form approaches; see Black–Scholes model § Valuing bond options.
For swaptions the logic is almost identical, substituting swaps for bonds in step 1, and swaptions for bond options in step 2.
For caps (and floors) step 1 and 2 are combined: at each node the value is based on the relevant nodes at the later step, plus, for any caplet (floorlet) maturing in the time-step, the difference between its reference-rate and the short-rate at the node (and reflecting the corresponding day count fraction and notional-value exchanged).
For callable- and putable bonds a third step would be required: at each node in the time-step incorporate the effect of the embedded option on the bond price and / or the option price there before stepping-backwards one time-step. (And noting that these options are not mutually exclusive, and so a bond may have several options embedded; hybrid securities are treated below.)
For other, more exotic interest rate derivatives, similar adjustments are made to steps 1 and onward.
For the "Greeks", largely as for equity, see under next section.
An alternative approach to modeling (American) bond options, particularly those struck on yield to maturity (YTM), employs modified equity-lattice methods. Here the analyst builds a CRR tree of YTM, applying a constant volatility assumption, and then calculates the bond price as a function of this yield at each node; prices here are thus pulling-to-par. The second step is to then incorporate any term structure of volatility by building a corresponding DKC tree (based on every second time-step in the CRR tree: as DKC is trinomial whereas CRR is binomial) and then using this for option valuation.
=== Post-crisis ===
Since the 2008 financial crisis, swap pricing is (generally) under a "multi-curve framework", whereas previously it was off a single, "self discounting", curve; see Interest rate swap § Valuation and pricing. Here, payoffs are set as a function of the Reference rate or forecast rate (LIBOR) specific to the tenor in question, while discounting is at the OIS rate.
To accommodate this in the lattice framework, the OIS rate and the relevant reference rate are jointly modeled in a three-dimensional tree, constructed so as to return the input OIS- and Libor-swap prices, while also inhering any correlation between the two rate sets.
With the zeroeth step thus accomplished, the valuation will proceed largely as previously, using steps 1 and onwards, but here – similar to the above "pyramid" – with cashflows based on the LIBOR tree, and discounting using the corresponding nodes from the OIS tree.
A related development is that banks will make a credit valuation adjustment, CVA – as well as various of the other XVA – when assessing the value of derivative contracts that they have entered into. The purpose of these is twofold: primarily to hedge for possible losses due to the other parties' failures to pay amounts due on the derivative contracts; but also to determine (and hedge) the amount of capital required under the bank capital adequacy rules. Although usually calculated under a simulation framework, tree-based methods can be applied here also.
In the case of a swap, for example, the potential future exposure, PFE, facing the bank on each date is the probability-weighted average of the positive settlement payments and swap values over the lattice-nodes at the date; each node's probability is in turn a function of the tree's cumulative up- and down-probabilities. This PFE is combined with the counterparty's (tree-exogenous) probability of default and recovery rate to derive the expected loss for the date. Finally, the aggregated present value of these is the CVA for the counterparty on that position.
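The following deliberately simplified sketch follows that recipe; the exposures, node probabilities, per-period default probability and recovery rate are all hypothetical placeholders, and a flat discount rate stands in for discounting along the tree:

```python
import math

# Hypothetical positive exposures at each lattice node per future date,
# with the cumulative probability of reaching each node (binomial, p = 0.5).
exposures  = [[120.0, 40.0],        # date 1: two nodes
              [150.0, 60.0, 0.0]]   # date 2: three nodes
node_probs = [[0.5, 0.5],
              [0.25, 0.5, 0.25]]

dt, r = 1.0, 0.03          # flat discount rate (simplifying assumption)
pd_per_step = 0.01         # counterparty default probability per period
recovery = 0.4             # assumed recovery rate

cva = 0.0
for t, (exp_row, prob_row) in enumerate(zip(exposures, node_probs), start=1):
    # Probability-weighted positive exposure at this date (the PFE).
    pfe = sum(e * q for e, q in zip(exp_row, prob_row))
    expected_loss = pfe * pd_per_step * (1.0 - recovery)
    cva += math.exp(-r * t * dt) * expected_loss   # aggregate present value
print("CVA:", cva)
```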
== Hybrid securities ==
Hybrid securities, incorporating both equity- and bond-like features, are also valued using trees. For convertible bonds (CBs) the approach of Tsiveriotis and Fernandes (1998) is to divide the value of the bond at each node into an "equity" component, arising from situations where the CB will be converted, and a "debt" component, arising from situations where the CB is redeemed. Correspondingly, twin trees are constructed where discounting is at the risk free and credit risk adjusted rate respectively, with the sum being the value of the CB. There are other methods, which similarly combine an equity-type tree with a short-rate tree. An alternate approach, originally published by Goldman Sachs (1994), does not decouple the components; rather, discounting is at a conversion-probability-weighted risk-free and risky interest rate within a single tree. See Convertible bond § Valuation, Contingent convertible bond.
More generally, equity can be viewed as a call option on the firm: where the value of the firm is less than the value of the outstanding debt shareholders would choose not to repay the firm's debt; they would choose to repay—and not to liquidate (i.e. exercise their option)—otherwise. Lattice models have been developed for equity analysis here, particularly as relates to distressed firms. Relatedly, as regards corporate debt pricing, the relationship between equity holders' limited liability and potential Chapter 11 proceedings has also been modelled via lattice.
The calculation of "Greeks" for interest rate derivatives proceeds as for equity. There is however an additional requirement, particularly for hybrid securities: that is, to estimate sensitivities related to overall changes in interest rates. For a bond with an embedded option, the standard yield to maturity based calculations of duration and convexity do not consider how changes in interest rates will alter the cash flows due to option exercise. To address this, effective duration and -convexity are introduced. Here, similar to rho and vega above, the interest rate tree is rebuilt for an upward and then downward parallel shift in the yield curve and these measures are calculated numerically given the corresponding changes in bond value.
== References ==
== Bibliography ==
David F. Babbel (1996). Valuation of Interest-Sensitive Financial Instruments (1st ed.). John Wiley & Sons. ISBN 978-1883249151.
Gerald Buetow; Frank Fabozzi (2000). Valuation of Interest Rate Swaps and Swaptions. John Wiley. ISBN 978-1883249892.
Gerald Buetow & James Sochacki (2001). Term-Structure Models Using Binomial Trees. The Research Foundation of AIMR (CFA Institute). ISBN 978-0-943205-53-3.
Les Clewlow; Chris Strickland (1998). Implementing Derivative Models. New Jersey: Wiley. ISBN 978-0-471-96651-7.
Rama Cont, ed. (2010). Tree methods in finance, Encyclopedia of Quantitative Finance (PDF). Wiley. ISBN 978-0-470-05756-8.
Frank Fabozzi (1998). Valuation of fixed income securities and derivatives (3rd ed.). John Wiley. ISBN 978-1-883249-25-0.
Espen Haug (2006). The Complete Guide to Option Pricing Formulas. New York: McGraw-Hill. ISBN 978-0-07-138997-6.
Richard Rendleman (2002). Applied Derivatives: Options, Futures, and Swaps (1st ed.). Wiley-Blackwell. ISBN 978-0-631-21590-5.
Mark Rubinstein (2000). Rubinstein On Derivatives (1st ed.). Risk Books. ISBN 978-1899332533.
Steven Shreve (2004). Stochastic Calculus for Finance I: The Binomial Asset Pricing Model. Springer. ISBN 978-0387249681.
Donald J. Smith (2017). Valuation in a World of CVA, DVA, and FVA: A Tutorial on Debt Securities and Interest Rate Derivatives. World Scientific. ISBN 978-9813222748.
John van der Hoek & Robert J. Elliott (2006). Binomial Models in Finance. Springer. ISBN 978-0-387-25898-0. | Wikipedia/Lattice_model_(finance) |
A short-rate model, in the context of interest rate derivatives, is a mathematical model that describes the future evolution of interest rates by describing the future evolution of the short rate, usually written {\displaystyle r_{t}\,}.
== The short rate ==
Under a short rate model, the stochastic state variable is taken to be the instantaneous spot rate. The short rate, {\displaystyle r_{t}\,}, then, is the (continuously compounded, annualized) interest rate at which an entity can borrow money for an infinitesimally short period of time from time {\displaystyle t}. Specifying the current short rate does not specify the entire yield curve. However, no-arbitrage arguments show that, under some fairly relaxed technical conditions, if we model the evolution of {\displaystyle r_{t}\,} as a stochastic process under a risk-neutral measure {\displaystyle Q}, then the price at time {\displaystyle t} of a zero-coupon bond maturing at time {\displaystyle T} with a payoff of 1 is given by {\displaystyle P(t,T)=\operatorname {E} ^{Q}\left[\left.\exp {\left(-\int _{t}^{T}r_{s}\,ds\right)}\right|{\mathcal {F}}_{t}\right],}
where {\displaystyle {\mathcal {F}}} is the natural filtration for the process. The interest rates implied by the zero coupon bonds form a yield curve, or more precisely, a zero curve. Thus, specifying a model for the short rate specifies future bond prices. This means that instantaneous forward rates are also specified by the usual formula {\displaystyle f(t,T)=-{\frac {\partial }{\partial T}}\ln(P(t,T)).}
Short rate models are often classified as endogenous and exogenous. Endogenous short rate models are short rate models where the term structure of interest rates, or of zero-coupon bond prices {\displaystyle T\mapsto P(0,T)}, is an output of the model, so it is "inside the model" (endogenous) and is determined by the model parameters. Exogenous short rate models are models where such term structure is an input, as the model involves some time dependent functions or shifts that allow for inputting a given market term structure, so that the term structure comes from outside (exogenous). Other authors use 'equilibrium' and 'no arbitrage' in place of 'endogenous' and 'exogenous'.
== Particular short-rate models ==
Throughout this section {\displaystyle W_{t}\,} represents a standard Brownian motion under a risk-neutral probability measure and {\displaystyle dW_{t}\,} its differential. Where the model is lognormal, a variable {\displaystyle X_{t}} is assumed to follow an Ornstein–Uhlenbeck process and {\displaystyle r_{t}\,} is assumed to follow {\displaystyle r_{t}=\exp {X_{t}}\,}.
=== One-factor short-rate models ===
Following are the one-factor models, where a single stochastic factor – the short rate – determines the future evolution of all interest rates.
Other than Rendleman–Bartter and Ho–Lee, which do not capture the mean reversion of interest rates, these models can be thought of as specific cases of Ornstein–Uhlenbeck processes.
The Vasicek, Rendleman–Bartter and CIR models are endogenous models with only a finite number of free parameters, so it is not possible to specify the parameter values in such a way that the model coincides with more than a few observed market prices ("calibration") of zero coupon bonds or of linear products such as forward rate agreements or swaps; typically, a best fit is instead done to these linear products, to find the endogenous short rate model parameters that are closest to the market prices. This does not allow for fitting options like caps, floors and swaptions, as the parameters have already been used to fit the linear instruments. This problem is overcome by allowing the parameters to vary deterministically with time, or by adding a deterministic shift to the endogenous model.
In this way, exogenous models such as Ho-Lee and subsequent models, can be calibrated to market data, meaning that these can exactly return the price of bonds comprising the yield curve, and the remaining parameters can be used for options calibration. The implementation is usually via a (binomial) short rate tree or simulation; see Lattice model (finance) § Interest rate derivatives and Monte Carlo methods for option pricing, although some short rate models have closed form solutions for zero coupon bonds, and even caps or floors, easing the calibration task considerably.
We list the following endogenous models first.
Merton's model (1973) explains the short rate as {\displaystyle r_{t}=r_{0}+at+\sigma W_{t}^{*}}, where {\displaystyle W_{t}^{*}} is a one-dimensional Brownian motion under the spot martingale measure. In this approach, the short rate follows an arithmetic Brownian motion.
The Vasicek model (1977) models the short rate as {\displaystyle dr_{t}=(\theta -\alpha r_{t})\,dt+\sigma \,dW_{t}}; it is often written {\displaystyle dr_{t}=a(b-r_{t})\,dt+\sigma \,dW_{t}}. The second form is the more common, and makes the interpretation of the parameters more direct, with the parameter {\displaystyle a} being the speed of mean reversion, the parameter {\displaystyle b} being the long term mean, and the parameter {\displaystyle \sigma } being the instantaneous volatility. In this short rate model an Ornstein–Uhlenbeck process is used for the short rate. This model allows for negative rates, because the probability distribution of the short rate is Gaussian. Also, this model allows for closed form solutions for the bond price and for bond options and caps/floors, and using Jamshidian's trick, one can also get a formula for swaptions.
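As an illustration, the following sketch checks a Monte Carlo estimate of the zero-coupon bond price, computed from the expectation formula above, against the model's well-known closed-form affine bond price for the {\displaystyle dr_{t}=a(b-r_{t})\,dt+\sigma \,dW_{t}} parameterization; the parameter values and seed are illustrative:

```python
import numpy as np

# Monte Carlo estimate of P(0,T) = E[exp(-integral_0^T r_s ds)] under
# Vasicek, via Euler discretization, versus the closed form.
a, b, sigma, r0, T = 0.5, 0.04, 0.01, 0.03, 5.0
steps, paths = 500, 20_000
dt = T / steps

rng = np.random.default_rng(0)
r = np.full(paths, r0)
integral = np.zeros(paths)
for _ in range(steps):
    integral += r * dt   # left-endpoint rule; finer steps reduce the bias
    r = r + a * (b - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)
mc_price = np.exp(-integral).mean()

# Closed form: P(0,T) = A(T) * exp(-B(T) * r0).
B = (1 - np.exp(-a * T)) / a
A = np.exp((b - sigma**2 / (2 * a**2)) * (B - T) - sigma**2 * B**2 / (4 * a))
print(mc_price, A * np.exp(-B * r0))
```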
The Rendleman–Bartter model (1980) or Dothan model (1978) explains the short rate as {\displaystyle dr_{t}=\theta r_{t}\,dt+\sigma r_{t}\,dW_{t}}. In this model the short rate follows a geometric Brownian motion. This model does not have closed form formulas for options and it is not mean reverting. Moreover, it has the problem of an infinite expected bank account after a short time. The same problem will be present in all lognormal short rate models.
The Cox–Ingersoll–Ross model (1985) supposes {\displaystyle dr_{t}=(\theta -\alpha r_{t})\,dt+{\sqrt {r_{t}}}\,\sigma \,dW_{t}}; it is often written {\displaystyle dr_{t}=a(b-r_{t})\,dt+{\sqrt {r_{t}}}\,\sigma \,dW_{t}}. The {\displaystyle \sigma {\sqrt {r_{t}}}} factor precludes (generally) the possibility of negative interest rates. The interpretation of the parameters, in the second formulation, is the same as in the Vasicek model. The Feller condition {\displaystyle 2ab>\sigma ^{2}} ensures strictly positive short rates. This model follows a Feller square root process and has non-negative rates, and it allows for closed form solutions for the bond price and for bond options and caps/floors, and using Jamshidian's trick, one can also obtain a formula for swaptions. Both this model and the Vasicek model are called affine models, because the formula for the continuously compounded spot rate for a finite maturity T at time t is an affine function of {\displaystyle r_{t}}.
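Because the square root must stay well defined in a discretization, a common simulation choice is a full-truncation Euler scheme, sketched below with illustrative parameters that satisfy the Feller condition:

```python
import numpy as np

# Full-truncation Euler simulation of CIR short-rate paths: the rate is
# floored at zero inside both the drift and the diffusion terms.
a, b, sigma, r0, T = 0.5, 0.04, 0.1, 0.03, 5.0
assert 2 * a * b > sigma**2, "Feller condition violated"

steps, paths = 500, 10_000
dt = T / steps
rng = np.random.default_rng(1)

r = np.full(paths, r0)
for _ in range(steps):
    r_pos = np.maximum(r, 0.0)   # truncate before taking the square root
    r = (r + a * (b - r_pos) * dt
         + sigma * np.sqrt(r_pos * dt) * rng.standard_normal(paths))
print(r.mean(), b)   # sample mean drifts toward the long-run level b
```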
We now list a number of exogenous short rate models.
The Ho–Lee model (1986) models the short rate as {\displaystyle dr_{t}=\theta _{t}\,dt+\sigma \,dW_{t}}. The parameter {\displaystyle \theta _{t}} allows for the initial term structure of interest rates or bond prices to be an input of the model. This model again follows an arithmetic Brownian motion, here with a time dependent deterministic drift parameter.
The Hull–White model (1990)—also called the extended Vasicek model—posits {\displaystyle dr_{t}=(\theta _{t}-\alpha _{t}r_{t})\,dt+\sigma _{t}\,dW_{t}}. In many presentations one or more of the parameters {\displaystyle \theta ,\alpha } and {\displaystyle \sigma } are not time-dependent. The distribution of the short rate is normal, and the model allows for negative rates. The model with constant {\displaystyle \alpha } and {\displaystyle \sigma } is the most commonly used and it allows for closed form solutions for bond prices, bond options, caps and floors, and swaptions through Jamshidian's trick. This model allows for an exact calibration of the initial term structure of interest rates through the time dependent function {\displaystyle \theta _{t}}. Lattice-based implementation for Bermudan swaptions and for products without analytical formulas is usually trinomial.
The Black–Derman–Toy model (1990) has {\textstyle d\ln(r)=[\theta _{t}+{\frac {\sigma '_{t}}{\sigma _{t}}}\ln(r)]dt+\sigma _{t}\,dW_{t}} for time-dependent short rate volatility and {\displaystyle d\ln(r)=\theta _{t}\,dt+\sigma \,dW_{t}} otherwise; the model is lognormal. The model has no closed form formulas for options. Also, as with all lognormal models, it suffers from the issue of explosion of the expected bank account in finite time.
The Black–Karasinski model (1991), which is lognormal, has {\displaystyle d\ln(r)=[\theta _{t}-\phi _{t}\ln(r)]\,dt+\sigma _{t}\,dW_{t}}. The model may be seen as the lognormal application of Hull–White; its lattice-based implementation is similarly trinomial (binomial requiring varying time-steps). The model has no closed form solutions, and even basic calibration to the initial term structure has to be done with numerical methods to generate the zero coupon bond prices. This model too suffers from the issue of explosion of the expected bank account in finite time.
The Kalotay–Williams–Fabozzi model (1993) has the short rate as {\displaystyle d\ln(r_{t})=\theta _{t}\,dt+\sigma \,dW_{t}}, a lognormal analogue to the Ho–Lee model, and a special case of the Black–Derman–Toy model. This approach is effectively similar to "the original Salomon Brothers model" (1987), also a lognormal variant on Ho–Lee.
The CIR++ model, introduced and studied in detail by Brigo and Mercurio in 2001, and formulated also earlier by Scott (1995), uses the CIR model but, instead of introducing time dependent parameters in the dynamics, adds an external shift. The model is formulated as {\displaystyle dx_{t}=a(b-x_{t})\,dt+{\sqrt {x_{t}}}\,\sigma \,dW_{t},\ \ r_{t}=x_{t}+\phi (t)} where {\displaystyle \phi } is a deterministic shift. The shift can be used to absorb the market term structure and make the model fully consistent with it. This model preserves the analytical tractability of the basic CIR model, allowing for closed form solutions for bonds and all linear products, and for options such as caps, floors and swaptions through Jamshidian's trick. The model allows for maintaining positive rates if the shift is constrained to be positive, or allows for negative rates if the shift is allowed to go negative. It has been applied often in credit risk too, for credit default swaps and swaptions, in this original version or with jumps.
The idea of a deterministic shift can be applied also to other models that have desirable properties in their endogenous form. For example, one could apply the shift {\displaystyle \phi } to the Vasicek model, but due to the linearity of the Ornstein–Uhlenbeck process, this is equivalent to making {\displaystyle b} a time dependent function, and would thus coincide with the Hull–White model.
=== Multi-factor short-rate models ===
Besides the above one-factor models, there are also multi-factor models of the short rate, among them the best known are the Longstaff and Schwartz two factor model and the Chen three factor model (also called "stochastic mean and stochastic volatility model"). Note that for the purposes of risk management, "to create realistic interest rate simulations", these multi-factor short-rate models are sometimes preferred over one-factor models, as they produce scenarios which are, in general, better "consistent with actual yield curve movements".
The Longstaff–Schwartz model (1992) supposes the short rate dynamics are given by {\displaystyle {\begin{aligned}dX_{t}&=(a_{t}-bX_{t})\,dt+{\sqrt {X_{t}}}\,c_{t}\,dW_{1t},\\[3pt]dY_{t}&=(d_{t}-eY_{t})\,dt+{\sqrt {Y_{t}}}\,f_{t}\,dW_{2t},\end{aligned}}} where the short rate is defined as {\displaystyle dr_{t}=(\mu X+\theta Y)\,dt+\sigma _{t}{\sqrt {Y}}\,dW_{3t}.}
The Chen model (1996), which has a stochastic mean and volatility of the short rate, is given by {\displaystyle {\begin{aligned}dr_{t}&=(\theta _{t}-\alpha _{t})\,dt+{\sqrt {r_{t}}}\,\sigma _{t}\,dW_{t},\\[3pt]d\alpha _{t}&=(\zeta _{t}-\alpha _{t})\,dt+{\sqrt {\alpha _{t}}}\,\sigma _{t}\,dW_{t},\\[3pt]d\sigma _{t}&=(\beta _{t}-\sigma _{t})\,dt+{\sqrt {\sigma _{t}}}\,\eta _{t}\,dW_{t}.\end{aligned}}}
The two-factor Hull–White and G2++ models have been widely used due to their tractability. These models are summarized, and shown to be equivalent, in Brigo and Mercurio (2006). The G2++ model is based on adding two possibly correlated Ornstein–Uhlenbeck (Vasicek) processes plus a shift to obtain the short rate. This model allows for exact calibration of the term structure, semi-closed form solutions for options, control of the volatility term structure of instantaneous forward rates through the correlation parameter, and for negative rates, which has become important as rates turned negative in financial markets.
== Other interest rate models ==
The other major framework for interest rate modelling is the Heath–Jarrow–Morton framework (HJM). Unlike the short rate models described above, this class of models is generally non-Markovian. This makes general HJM models computationally intractable for most purposes. The great advantage of HJM models is that they give an analytical description of the entire yield curve, rather than just the short rate. For some purposes (e.g., valuation of mortgage backed securities), this can be a big simplification. The Cox–Ingersoll–Ross and Hull–White models in one or more dimensions can both be straightforwardly expressed in the HJM framework. Other short rate models do not have any simple dual HJM representation.
The HJM framework with multiple sources of randomness, including as it does the Brace–Gatarek–Musiela model and market models, is often preferred for models of higher dimension.
Models based on Fischer Black's shadow rate are used when interest rates approach the zero lower bound.
== See also ==
Fixed-income attribution
== References ==
== Further reading ==
Martin Baxter & Andrew Rennie (1996). Financial Calculus. Cambridge University Press. ISBN 978-0-521-55289-9.
Damiano Brigo; Fabio Mercurio (2001). Interest Rate Models – Theory and Practice with Smile, Inflation and Credit (2nd ed. 2006 ed.). Springer Verlag. ISBN 978-3-540-22149-4.
Gerald Buetow & James Sochacki (2001). Term-Structure Models Using Binomial Trees. The Research Foundation of AIMR (CFA Institute). ISBN 978-0-943205-53-3.
Andrew J.G. Cairns (2004). Interest Rate Models – An Introduction. Princeton University Press. ISBN 978-0-691-11894-9.
Andrew J.G. Cairns (2004). Interest-Rate Models; entry in Encyclopaedia of Actuarial Science. John Wiley and Sons. 2004. ISBN 978-0-470-84676-6.
K. C. Chan, G. Andrew Karolyi, Francis Longstaff, and Anthony Sanders (1992). An Empirical Comparison of Alternative Models of the Short-Term Interest Rate (PDF). The Journal of Finance, Vol. XLVII, No. 3 July 1992.{{cite book}}: CS1 maint: multiple names: authors list (link)
Lin Chen (1996). Interest Rate Dynamics, Derivatives Pricing, and Risk Management. Springer. ISBN 978-3-540-60814-1.
Rajna Gibson, François-Serge Lhabitant and Denis Talay (1999). Modeling the Term Structure of Interest Rates: An overview. The Journal of Risk, 1(3): 37–62, 1999.
Lane Hughston (2003). The Past, Present and Future of Term Structure Modelling; entry in Peter Field (2003). Modern Risk Management. Risk Books. ISBN 978-1-906348-30-4.
Jessica James & Nick Webber (2000). Interest Rate Modelling. Wiley Finance. ISBN 978-0-471-97523-6.
Robert Jarrow (2002). Modelling Fixed Income Securities and Interest Rate Options (2nd ed.). Stanford Economics and Finance. ISBN 978-0-8047-4438-6.
Robert Jarrow (2009). "The Term Structure of Interest Rates". Annual Review of Financial Economics. 1 (1): 69–96. doi:10.1146/annurev.financial.050808.114513.
F.C. Park (2004). "Implementing Interest Rate Models: a Practical Guide" (PDF). CMPR Research Publication. Archived from the original (PDF) on 2010-08-16.
Riccardo Rebonato (2002). Modern Pricing of Interest-Rate Derivatives. Princeton University Press. ISBN 978-0-691-08973-7.
Riccardo Rebonato (2003). "Term-Structure Models: a Review" (PDF). Royal Bank of Scotland Quantitative Research Centre Working Paper. | Wikipedia/Short-rate_model |
In statistics, model validation is the task of evaluating whether a chosen statistical model is appropriate or not. Oftentimes in statistical inference, inferences from models that appear to fit their data may be flukes, resulting in a misunderstanding by researchers of the actual relevance of their model. To combat this, model validation is used to test whether a statistical model can hold up to permutations in the data. Model validation is also called model criticism or model evaluation.
This topic is not to be confused with the closely related task of model selection, the process of discriminating between multiple candidate models: model validation does not concern so much the conceptual design of models as it tests only the consistency between a chosen model and its stated outputs.
There are many ways to validate a model. Residual plots plot the difference between the actual data and the model's predictions: correlations in the residual plots may indicate a flaw in the model. Cross validation is a method of model validation that iteratively refits the model, each time leaving out just a small sample and comparing whether the samples left out are predicted by the model: there are many kinds of cross validation. Predictive simulation is used to compare simulated data to actual data. External validation involves fitting the model to new data. Akaike information criterion estimates the quality of a model.
== Overview ==
Model validation comes in many forms, and the specific method of model validation a researcher uses is often a constraint of their research design. To emphasize, this means that there is no one-size-fits-all method for validating a model. For example, if a researcher is operating with a very limited set of data, but data they have strong prior assumptions about, they may consider validating the fit of their model by using a Bayesian framework and testing the fit of their model using various prior distributions. However, if a researcher has a lot of data and is testing multiple nested models, these conditions may lend themselves toward cross validation and possibly a leave-one-out test. These are two abstract examples, and any actual model validation will have to consider far more intricacies than described here, but these examples illustrate that model validation methods are always going to be circumstantial.
In general, models can be validated using existing data or with new data; both methods are discussed in the following subsections, and a note of caution is provided as well.
=== Validation with existing data ===
Validation based on existing data involves analyzing the goodness of fit of the model or analyzing whether the residuals seem to be random (i.e. residual diagnostics). This method involves analyzing the model's closeness to the data and trying to understand how well the model predicts its own data. One example of this method is in Figure 1, which shows a polynomial function fit to some data. We see that the polynomial function does not conform well to the data, which appears linear, and might invalidate this polynomial model.
Commonly, statistical models on existing data are validated using a validation set, which may also be referred to as a holdout set. A validation set is a set of data points that the user leaves out when fitting a statistical model. After the statistical model is fitted, the validation set is used as a measure of the model's error. If the model fits well on the initial data but has a large error on the validation set, this is a sign of overfitting.
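A small sketch of this idea on synthetic data follows: an over-flexible polynomial achieves a low training error but a much larger error on the held-out points, flagging overfitting. The data, polynomial degrees, split and seed are illustrative choices, not prescribed by any standard:

```python
import numpy as np

# Synthetic data with a truly linear relationship plus noise.
rng = np.random.default_rng(7)
x = np.linspace(0, 2, 60)
y = 1.5 * x + rng.normal(0, 0.3, x.size)

# Leave out 25% of the points as the holdout (validation) set.
idx = rng.permutation(x.size)
train, hold = idx[:45], idx[45:]

for degree in (1, 10):
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x)
    mse_train = np.mean((pred[train] - y[train]) ** 2)
    mse_hold = np.mean((pred[hold] - y[hold]) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_train:.3f}, "
          f"holdout MSE {mse_hold:.3f}")
```

A large gap between the training and holdout errors for the high-degree fit is the overfitting signal described above.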
=== Validation with new data ===
If new data becomes available, an existing model can be validated by assessing whether the new data is predicted by the old model. If the new data is not predicted by the old model, then the model might not be valid for the researcher's goals.
With this in mind, a modern approach to validating a neural network is to test its performance on domain-shifted data. This ascertains whether the model has learned domain-invariant features.
=== A note of caution ===
A model can be validated only relative to some application area. A model that is valid for one application might be invalid for some other applications. As an example, consider the curve in Figure 1: if the application only used inputs from the interval [0, 2], then the curve might well be an acceptable model.
== Methods for validating ==
When doing a validation, there are three notable causes of potential difficulty, according to the Encyclopedia of Statistical Sciences. The three causes are these: lack of data; lack of control of the input variables; uncertainty about the underlying probability distributions and correlations. The usual methods for dealing with difficulties in validation include the following: checking the assumptions made in constructing the model; examining the available data and related model outputs; applying expert judgment. Note that expert judgment commonly requires expertise in the application area.
Expert judgment can sometimes be used to assess the validity of a prediction without obtaining real data: e.g. for the curve in Figure 1, an expert might well be able to assess that a substantial extrapolation will be invalid. Additionally, expert judgment can be used in Turing-type tests, where experts are presented with both real data and related model outputs and then asked to distinguish between the two.
For some classes of statistical models, specialized methods of performing validation are available. As an example, if the statistical model was obtained via a regression, then specialized analyses for regression model validation exist and are generally employed.
=== Residual diagnostics ===
Residual diagnostics comprise analyses of the residuals to determine whether the residuals seem to be effectively random. Such analyses typically require estimates of the probability distributions for the residuals. Estimates of the residuals' distributions can often be obtained by repeatedly running the model, i.e. by using repeated stochastic simulations (employing a pseudorandom number generator for random variables in the model).
If the statistical model was obtained via a regression, then regression-residual diagnostics exist and may be used; such diagnostics have been well studied.
=== Cross validation ===
Cross validation is a method of sampling that involves leaving some parts of the data out of the fitting process and then seeing whether the data that were left out are close to or far away from where the model predicts they would be. In practice, cross validation techniques fit the model many times, each time with a different portion of the data, and compare each fit to the portion it did not use. If the fitted models rarely describe the data they were not trained on, then the model is probably wrong.
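A hand-rolled k-fold sketch on synthetic data might look as follows; the data, the number of folds and the candidate model (a degree-1 polynomial) are illustrative choices:

```python
import numpy as np

# Synthetic linear data.
rng = np.random.default_rng(3)
x = rng.uniform(0, 2, 100)
y = 1.5 * x + rng.normal(0, 0.3, x.size)

# k-fold cross validation: refit k times, each time scoring predictions
# on the one fold that was left out of the fit.
k = 5
folds = np.array_split(rng.permutation(x.size), k)
errors = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    coeffs = np.polyfit(x[train], y[train], 1)   # fit on the other k-1 folds
    pred = np.polyval(coeffs, x[test])           # predict the left-out fold
    errors.append(np.mean((pred - y[test]) ** 2))
print("cross-validated MSE:", np.mean(errors))
```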
== See also ==
== References ==
== Further reading ==
Barlas, Y. (1996), "Formal aspects of model validity and validation in system dynamics", System Dynamics Review, 12 (3): 183–210, doi:10.1002/(SICI)1099-1727(199623)12:3<183::AID-SDR103>3.0.CO;2-4
Good, P. I.; Hardin, J. W. (2012), "Chapter 15: Validation", Common Errors in Statistics (Fourth ed.), John Wiley & Sons, pp. 277–285
Huber, P. J. (2002), "Chapter 3: Approximate models", in Huber-Carol, C.; Balakrishnan, N.; Nikulin, M. S.; Mesbah, M. (eds.), Goodness-of-Fit Tests and Model Validity, Springer, pp. 25–41
== External links ==
How can I tell if a model fits my data? —Handbook of Statistical Methods (NIST)
Hicks, Dan (July 14, 2017). "What are core statistical model validation techniques?". Stack Exchange. | Wikipedia/Model_validation |
In finance, an interest rate derivative (IRD) is a derivative whose payments are determined through calculation techniques where the underlying benchmark product is an interest rate, or set of different interest rates. There are a multitude of different interest rate indices that can be used in this definition.
IRDs are popular with all financial market participants given the need for almost any area of finance to either hedge or speculate on the movement of interest rates.
Modeling of interest rate derivatives is usually done on a time-dependent multi-dimensional lattice ("tree") or using specialized simulation models. Both are calibrated to the underlying risk drivers, usually domestic or foreign short rates and foreign exchange market rates, and incorporate delivery- and day count conventions. The Heath–Jarrow–Morton framework is often used instead of short rates.
== Types ==
The most basic subclassification of interest rate derivatives (IRDs) is to define linear and non-linear.
Further classification of the above is then made to define vanilla (or standard) IRDs and exotic IRDs; see exotic derivative.
=== Linear and non-linear ===
Linear IRDs are those whose net present values (PVs) are overwhelmingly (although not necessarily entirely) dictated by, and undergo changes approximately proportional to, the one-to-one movement of the underlying interest rate index. Examples of linear IRDs are: interest rate swaps (IRSs), forward rate agreements (FRAs), zero coupon swaps (ZCSs), cross-currency basis swaps (XCSs) and single currency basis swaps (SBSs).
Non-linear IRDs form the set of remaining products: those whose PVs are commonly dictated by more than the one-to-one movement of the underlying interest rate index. Examples of non-linear IRDs are: swaptions, interest rate caps and floors, and constant maturity swaps (CMSs). These products' PVs are reliant upon volatility, so their pricing is often more complex, as is the nature of their risk management.
=== Vanilla and exotic ===
The categorisation of linear and non-linear and vanilla and exotic is not universally acknowledged and a number of products might exist that can be arguably assigned to different categories. These terms may also overlap.
"Vanilla", in "vanilla IRSs" and "vanilla swaptions", is often taken to mean the basic, most liquid and commonly traded variants of those products.
Exotic is usually used to define a feature that is an extension to an IRD type. For example, an in-arrears IRS is a genuine example of an exotic IRS, whereas an IRS whose structure was the same as vanilla but whose start and end dates might be unconventional, would not generally be classed as exotic. Typically this would be referred to as a bespoke IRS (or customised IRS). Bermudan swaptions are examples of swaption extensions that qualify as exotic variants.
Other products that are generally classed as exotics are power reverse dual currency notes (PRDC or Turbo), target redemption notes (TARN), CMS steepeners, snowballs, inverse floaters, strips of collateralized mortgage obligations, ratchet caps and floors, and cross currency swaptions.
== Trivia ==
The interest rate derivatives market is the largest derivatives market in the world. The Bank for International Settlements estimates that the notional amounts outstanding in June 2012 were US$494 trillion for OTC interest rate contracts, and US$342 trillion for OTC interest rate swaps. According to the International Swaps and Derivatives Association, 80% of the world's top 500 companies as of April 2003 used interest rate derivatives to control their cashflows. This compares with 75% for foreign exchange options, 25% for commodity options and 10% for stock options.
== See also ==
Financial modeling
Mathematical finance
Multi-curve framework
== References ==
== Further reading ==
J H M Darbyshire (2017). Pricing and Trading Interest Rate Derivatives (2nd ed. 2017 ed.). Aitch and Dee Ltd. ISBN 978-0995455528.
Leif B.G. Andersen, Vladimir V. Piterbarg (2010). Interest Rate Modeling in Three Volumes (1st ed. 2010 ed.). Atlantic Financial Press. ISBN 978-0-9844221-0-4. Archived from the original on 8 February 2011.
Damiano Brigo, Fabio Mercurio (2001). Interest Rate Models – Theory and Practice with Smile, Inflation and Credit (2nd ed. 2006 ed.). Springer Verlag. ISBN 978-3-540-22149-4.
John C. Hull (2005) Options, Futures and Other Derivatives, Sixth Edition. Prentice Hall. ISBN 0-13-149908-4
John F. Marshall (2000). Dictionary of Financial Engineering. Wiley. ISBN 0-471-24291-8
== External links ==
Basic Fixed Income Derivative Hedging – Article on Financial-edu.com.
Interest Rate Modeling by L. Andersen and V. Piterbarg
Pricing and Trading Interest Rate Derivatives by J H M Darbyshire
Online Analytics and Portfolio Management Tools by OCM Solutions Inc. | Wikipedia/Interest_rate_derivative |
In mathematical finance, a Monte Carlo option model uses Monte Carlo methods to calculate the value of an option with multiple sources of uncertainty or with complicated features. The first application to option pricing was by Phelim Boyle in 1977 (for European options). In 1996, M. Broadie and P. Glasserman showed how to price Asian options by Monte Carlo. An important development was the introduction in 1996 by Carriere of Monte Carlo methods for options with early exercise features.
== Methodology ==
As is standard, Monte Carlo valuation relies on risk neutral valuation. Here the price of the option is its discounted expected value; see risk neutrality and rational pricing. The technique applied then, is (1) to generate a large number of possible, but random, price paths for the underlying (or underlyings) via simulation, and (2) to then calculate the associated exercise value (i.e. "payoff") of the option for each path. (3) These payoffs are then averaged and (4) discounted to today. This result is the value of the option.
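A minimal sketch of steps (1) to (4) for a European call under geometric Brownian motion (the dynamics discussed just below), with the Black–Scholes closed form shown for comparison; the parameter values, path count and seed are illustrative choices:

```python
import math
import numpy as np

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
paths = 200_000

rng = np.random.default_rng(0)
Z = rng.standard_normal(paths)
# (1) terminal prices sampled exactly from the risk-neutral GBM distribution
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * Z)
# (2)-(4) payoffs per path, averaged, then discounted to today
mc = math.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

def bs_call(S0, K, T, r, sigma):
    """Black-Scholes European call, for comparison with the MC estimate."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

print(mc, bs_call(S0, K, T, r, sigma))
```

For a European payoff the terminal price can be sampled in one step, as here; path dependent payoffs instead require simulating the full path.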
This approach, although relatively straightforward, allows for increasing complexity:
An option on equity may be modelled with one source of uncertainty: the price of the underlying stock in question. Here the price of the underlying instrument {\displaystyle \ S_{t}\,} is usually modelled such that it follows a geometric Brownian motion with constant drift {\displaystyle \mu \,} and volatility {\displaystyle \sigma \,}. So: {\displaystyle dS_{t}=\mu S_{t}\,dt+\sigma S_{t}\,dW_{t}\,}, where {\displaystyle dW_{t}\,} is found via a random sampling from a normal distribution; see further under Black–Scholes. Since the underlying random process is the same, for enough price paths, the value of a European option here should be the same as under Black–Scholes. More generally though, simulation is employed for path dependent exotic derivatives, such as Asian options.
In other cases, the source of uncertainty may be at a remove. For example, for bond options the underlying is a bond, but the source of uncertainty is the annualized interest rate (i.e. the short rate). Here, for each randomly generated yield curve we observe a different resultant bond price on the option's exercise date; this bond price is then the input for the determination of the option's payoff. The same approach is used in valuing swaptions, where the value of the underlying swap is also a function of the evolving interest rate. (Whereas these options are more commonly valued using lattice based models, as above, for path dependent interest rate derivatives – such as CMOs – simulation is the primary technique employed.) For the models used to simulate the interest-rate see further under Short-rate model; "to create realistic interest rate simulations" Multi-factor short-rate models are sometimes employed. To apply simulation here, the analyst must first "calibrate" the model parameters, such that bond prices produced by the model best fit observed market prices.
Monte Carlo Methods allow for a compounding in the uncertainty. For example, where the underlying is denominated in a foreign currency, an additional source of uncertainty will be the exchange rate: the underlying price and the exchange rate must be separately simulated and then combined to determine the value of the underlying in the local currency. In all such models, correlation between the underlying sources of risk is also incorporated; see Cholesky decomposition § Monte Carlo simulation. Further complications, such as the impact of commodity prices or inflation on the underlying, can also be introduced. Since simulation can accommodate complex problems of this sort, it is often used in analysing real options where management's decision at any point is a function of multiple underlying variables.
Simulation can similarly be used to value options where the payoff depends on the value of multiple underlying assets such as a Basket option or Rainbow option. Here, correlation between asset returns is likewise incorporated.
As required, Monte Carlo simulation can be used with any type of probability distribution, including changing distributions: the modeller is not limited to normal or log-normal returns; see for example Datar–Mathews method for real option valuation. Additionally, the stochastic process of the underlying(s) may be specified so as to exhibit jumps or mean reversion or both; this feature makes simulation the primary valuation method applicable to energy derivatives. Further, some models even allow for (randomly) varying statistical (and other) parameters of the sources of uncertainty. For example, in models incorporating stochastic volatility, the volatility of the underlying changes with time; see Heston model.
== Least Square Monte Carlo ==
Least Square Monte Carlo is a technique for valuing early-exercise options (i.e. Bermudan or American options). It was first introduced by Jacques Carriere in 1996.
It is based on the iteration of a two step procedure:
First, a backward induction process is performed in which a value is recursively assigned to every state at every time-step. The value is defined as the least squares regression against market price of the option value at that state and time-step. Option value for this regression is defined as the value of exercise possibilities (dependent on market price) plus the value of the time-step value which that exercise would result in (defined in the previous step of the process).
Secondly, when all states are valued for every time-step, the value of the option is calculated by moving through the time-steps and states, making an optimal decision on option exercise at every step on the basis of a price path and the value of the state that it would result in. This second step can be done with multiple price paths to add a stochastic effect to the procedure.
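A compact sketch in this spirit, using the regression formulation of Longstaff and Schwartz (see the references) with a quadratic polynomial basis, is given below; the parameters, basis choice and seed are illustrative:

```python
import numpy as np

# Least squares Monte Carlo for an American put: regress realized
# continuation values on the simulated prices, exercising whenever the
# intrinsic value exceeds the estimated continuation value.
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.06, 0.2
steps, paths = 50, 100_000
dt = T / steps
disc = np.exp(-r * dt)

rng = np.random.default_rng(0)
Z = rng.standard_normal((steps, paths))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=0))

cash = np.maximum(K - S[-1], 0.0)        # payoff if held to maturity
for t in range(steps - 2, -1, -1):
    cash *= disc                         # discount one step back
    itm = (K - S[t]) > 0.0               # regress on in-the-money paths only
    if itm.any():
        coeffs = np.polyfit(S[t][itm], cash[itm], 2)   # quadratic basis
        continuation = np.polyval(coeffs, S[t][itm])
        exercise = (K - S[t][itm]) > continuation
        cash[itm] = np.where(exercise, K - S[t][itm], cash[itm])
print("American put:", disc * cash.mean())
```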
== Application ==
As can be seen, Monte Carlo methods are particularly useful in the valuation of options with multiple sources of uncertainty or with complicated features, which would make them difficult to value through a straightforward Black–Scholes-style or lattice based computation. The technique is thus widely used in valuing path dependent structures like lookback and Asian options and in real options analysis. Additionally, as above, the modeller is not limited as to the probability distribution assumed.
Conversely, however, if an analytical technique for valuing the option exists—or even a numeric technique, such as a (modified) pricing tree—Monte Carlo methods will usually be too slow to be competitive. They are, in a sense, a method of last resort; see further under Monte Carlo methods in finance. With faster computing capability this computational constraint is less of a concern.
== See also ==
Monte Carlo methods in finance
Quasi-Monte Carlo methods in finance
Stochastic modelling (insurance)
Stochastic asset model
== References ==
Notes
Sources
Primary references
Boyle, Phelim P. (1977). "Options: A Monte Carlo Approach". Journal of Financial Economics. 4 (3): 323–338. doi:10.1016/0304-405x(77)90005-8. Retrieved June 28, 2012.
Broadie, M.; Glasserman, P. (1996). "Estimating Security Price Derivatives Using Simulation" (PDF). Management Science. 42 (2): 269–285. CiteSeerX 10.1.1.196.1128. doi:10.1287/mnsc.42.2.269. Retrieved June 28, 2012.
Longstaff, F.A.; Schwartz, E.S. (2001). "Valuing American options by simulation: a simple least squares approach". Review of Financial Studies. 14: 113–148. CiteSeerX 10.1.1.155.3462. doi:10.1093/rfs/14.1.113. Retrieved June 28, 2012.
Bibliography
Bruno Dupire (1998). Monte Carlo: Methodologies and Applications for Pricing and Risk Management. Risk.
Paul Glasserman (2003). Monte Carlo methods in financial engineering. Springer-Verlag. ISBN 978-0-387-00451-8.
Peter Jaeckel (2002). Monte Carlo methods in finance. John Wiley and Sons. ISBN 978-0-471-49741-7.
Don L. McLeish (2005). Monte Carlo Simulation & Finance. ISBN 978-0-471-67778-9.
Christian P. Robert, George Casella (2004). Monte Carlo Statistical Methods. ISBN 978-0-387-21239-5.
== External links ==
Online tools
Monte Carlo simulated stock price time series and random number generator (allows for choice of distribution), Steven Whitney
Discussion papers and documents
Monte Carlo Simulation, Prof. Don M. Chance, Louisiana State University
Pricing complex options using a simple Monte Carlo Simulation, Peter Fink (reprint at quantnotes.com)
Monte Carlo Simulation in Finance, global-derivatives.com
Monte Carlo Derivative valuation, contd., Timothy L. Krehbiel, Oklahoma State University–Stillwater
Applications of Monte Carlo Methods in Finance: Option Pricing, Y. Lai and J. Spanier, Claremont Graduate University
Option pricing by simulation, Bernt Arne Ødegaard, Norwegian School of Management
Pricing and Hedging Exotic Options with Monte Carlo Simulations, Augusto Perilla, Diana Oancea, Prof. Michael Rockinger, HEC Lausanne
Monte Carlo Method, riskglossary.com | Wikipedia/Monte_Carlo_option_model |
The Weather Research and Forecasting (WRF) Model is a numerical weather prediction (NWP) system designed to serve both atmospheric research and operational forecasting needs, developed in the United States. NWP refers to the simulation and prediction of the atmosphere with a computer model, and WRF is a set of software for this. WRF features two dynamical (computational) cores (or solvers), a data assimilation system, and a software architecture allowing for parallel computation and system extensibility. The model serves a wide range of meteorological applications across scales ranging from meters to thousands of kilometers.
The effort to develop WRF began in the latter part of the 1990s and was a collaborative partnership principally among the National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (represented by the National Centers for Environmental Prediction (NCEP) and the (then) Forecast Systems Laboratory (FSL)), the Air Force Weather Agency (AFWA), the Naval Research Laboratory (NRL), the University of Oklahoma (OU), and the Federal Aviation Administration (FAA). The bulk of the work on the model has been performed or supported by NCAR, NOAA, and AFWA.
WRF allows researchers to produce simulations reflecting either real data (observations, analyses) or idealized atmospheric conditions. WRF provides operational forecasting with a flexible and robust platform, while offering advances in physics, numerics, and data assimilation contributed by the many research community developers. WRF is currently in operational use at NCEP and other forecasting centers internationally. WRF has grown to have a large worldwide community of users (over 30,000 registered users in over 150 countries), and workshops and tutorials are held each year at NCAR. WRF is used extensively for research and real-time forecasting throughout the world. It has been shown to perform well in simulating atmospheric convection, but to be prone to producing squall lines too easily.
WRF offers two dynamical solvers for its computation of the atmospheric governing equations, and the variants of the model are known as WRF-ARW (Advanced Research WRF) and WRF-NMM (nonhydrostatic mesoscale model). The Advanced Research WRF (ARW) is supported to the community by the NCAR Mesoscale and Microscale Meteorology Laboratory. The WRF-NMM solver variant was based on the Eta model, and later nonhydrostatic mesoscale model, developed at NCEP. The WRF-NMM (NMM) is supported to the community by the Developmental Testbed Center (DTC).
The WRF serves as the basis for the RAP and HRRR models: high-resolution operational forecast models run regularly at NCEP. The WRF also serves as the basis for the North American Mesoscale (NAM) model at 12 km and 3 km grid resolutions.
A version of WRF-NMM tailored for hurricane forecasting, HWRF (hurricane weather research and forecasting), became operational in 2007.
In 2009, a polar optimized WRF was released through the Byrd Polar Research Center at the Ohio State University.
== See also ==
Global Forecast System
== References ==
== External links ==
WRF website
WRF-ARW support website
WRF-NMM support website | Wikipedia/Weather_Research_and_Forecasting_Model |
The Regional Atmospheric Modeling System (RAMS) is a set of computer programs that simulate the atmosphere for weather and climate research and for numerical weather prediction (NWP). Other components include a data analysis and a visualization package.
RAMS was developed in the 1980s at Colorado State University (CSU), spearheaded by William R. Cotton and Roger A. Pielke, for mesoscale meteorological modeling. Subsequent development is primarily done by Robert L. Walko and Craig J. Tremback under the supervision of Cotton and Pielke. It is a comprehensive non-hydrostatic model. It is written primarily in Fortran with some C code and it runs best under the Unix operating system. Version 6 was released in 2009.
RAMS is the basis for a system simulating the Martian atmosphere that is named MRAMS.
== See also ==
Downscaling
== References ==
== External links ==
Colorado State University site
ATMET (Atmospheric, Meteorological, and Environmental Technologies) site
RAMS Documentation | Wikipedia/Regional_Atmospheric_Modeling_System |
Upper-atmospheric models are simulations of the Earth's atmosphere between 20 and 100 km (65,000 and 328,000 feet) that comprises the stratosphere, mesosphere, and the lower thermosphere. Whereas most climate models simulate a region of the Earth's atmosphere from the surface to the stratopause, there also exist numerical models which simulate the wind, temperature and composition of the Earth's tenuous upper atmosphere, from the mesosphere to the exosphere, including the ionosphere. This region is affected strongly by the 11-year solar cycle through variations in solar UV/EUV/X-ray radiation and solar wind, leading to high latitude particle precipitation and aurora. It has been proposed that these phenomena may have an effect on the lower atmosphere, and should therefore be included in simulations of climate change. For this reason there has been a drive in recent years to create whole atmosphere models to investigate whether or not this is the case.
== Jet stream perturbation model ==
A jet stream perturbation model is employed by Weather Logistics UK, which simulates the diversion of the air streams in the upper atmosphere. North Atlantic air flow is simulated by combining a monthly jet stream climatology input calculated at 20 to 30°W with different blocking high patterns. The jet stream input is generated by thermal wind balance calculations at 316 mbar (6 to 9 km aloft) in the mid-latitude range from 40 to 60°N. Long term blocking patterns are determined by the weather forecaster, who identifies the likely position and strength of North Atlantic highs from synoptic charts, the North Atlantic Oscillation (NAO) and El Niño–Southern Oscillation (ENSO) patterns. The model is based on the knowledge that low pressure systems at the surface are steered by the fast ribbons of air (jet streams) in the upper atmosphere. The jet stream–blocking interaction simulation examines the sea surface temperature field using data from NOAA, tracked along the ocean on a path to the British Isles. The principal theory suggests that long term weather patterns act on longer time scales, so large blocking patterns are thought to appear in similar locations repeatedly over several months. With a good knowledge of blocking high patterns, the model is reported to perform with an accuracy useful to the end user.
== Probabilistic forecasting ==
The modelling undertaken at Weather Logistics UK produces regional-seasonal predictions that are probabilistic in nature. Two different blocking sizes are used for the modelling, located at two different locations. The four possible blocking diversions are then ranked in order, to be combined by logistic regression and generate the appropriate likelihoods of weather events on seasonal time-scales. The raw output consists of 22 different weather conditions for each season that are compared to the average atmospheric conditions. A global warming bias and a 1961–1990 climatology of regional British Isles temperatures are added to the anomaly value to produce a final temperature prediction. The seasonal weather forecasts at Weather Logistics UK include several additional weather components (derivatives), including precipitation anomalies, storm tracks, air flow trajectories, heating degree days for household utility bills, cooling degree days, heat wave odds and snow day odds.
== Planetary waves ==
According to a report in New Scientist many researchers are in consensus that Rossby waves are acting against the jet stream's usual pattern and holding it in place. Upper atmospheric studies using National Oceanic and Atmospheric Administration (NOAA) data indicates that during July 2010 these upper air stream patterns were most frequently observed in the Northern Hemisphere. Examination of the climatology data over the same period of time indicates that these wild planetary wave meanderings are not a normal aspect of our regional climate patterns. Meanwhile, ongoing research studies at the University of Reading show that unusual patterns in the polar jet stream are more common during a period of low activity in the solar cycle when the observed sunspot activity and their associated solar flares are at their minimum. The link between low solar activity and enhanced blocking patterns is associated with an increase in the prevalence of cold weather patterns during the European Winter. Another possible explanation for the observed increase in blocking patterns is natural variability, through the chaotic character of the large-scale ocean currents that flow across the surface of the tropical Pacific.
== See also ==
Ionospheric model
== References ==
== External links ==
NCAR TIE-GCM (97 to ~450km), NCAR TIME-GCM (30 to ~450km)
Are cold winters in Europe associated with low solar activity? M Lockwood et al. 2010 Environ. Res. Lett. 5 024001 | Wikipedia/Upper-atmospheric_models |
A tropical cyclone forecast model is a computer program that uses meteorological data to forecast aspects of the future state of tropical cyclones. There are three types of models: statistical, dynamical, or combined statistical-dynamic. Dynamical models utilize powerful supercomputers with sophisticated mathematical modeling software and meteorological data to calculate future weather conditions. Statistical models forecast the evolution of a tropical cyclone in a simpler manner, by extrapolating from historical datasets, and thus can be run quickly on platforms such as personal computers. Statistical-dynamical models use aspects of both types of forecasting. Four primary types of forecasts exist for tropical cyclones: track, intensity, storm surge, and rainfall. Dynamical models were not developed until the 1970s and the 1980s, with earlier efforts focused on the storm surge problem.
Track models did not show forecast skill when compared to statistical models until the 1980s. Statistical-dynamical models were used from the 1970s into the 1990s. Early models use data from previous model runs while late models produce output after the official hurricane forecast has been sent. The use of consensus, ensemble, and superensemble forecasts lowers errors more than any individual forecast model. Both consensus and superensemble forecasts can use the guidance of global and regional models runs to improve the performance more than any of their respective components. Techniques used at the Joint Typhoon Warning Center indicate that superensemble forecasts are a very powerful tool for track forecasting.
== Statistical guidance ==
The first statistical guidance used by the National Hurricane Center was the Hurricane Analog Technique (HURRAN), which was available in 1969. It used the newly developed North Atlantic tropical cyclone database to find storms with similar tracks. It then shifted their tracks through the storm's current path, and used location, direction and speed of motion, and the date to find suitable analogs. The method did well with storms south of the 25th parallel which had not yet turned northward, but poorly with systems near or after recurvature. Since 1972, the Climatology and Persistence (CLIPER) statistical model has been used to help generate tropical cyclone track forecasts. In the era of skillful dynamical forecasts, CLIPER is now being used as the baseline to show model and forecaster skill. The Statistical Hurricane Intensity Forecast (SHIFOR) has been used since 1979 for tropical cyclone intensity forecasting. It uses climatology and persistence to predict future intensity, including the current Julian day, current cyclone intensity, the cyclone's intensity 12 hours ago, the storm's initial latitude and longitude, as well as its zonal (east-west) and meridional (north-south) components of motion.
A series of statistical-dynamical models, which used regression equations based upon CLIPER output and the latest output from primitive equation models run at the National Meteorological Center, then National Centers for Environmental Prediction, were developed between the 1970s and 1990s and were named NHC73, NHC83, NHC90, NHC91, and NHC98. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the decade of the 1980s when numerical weather prediction showed skill, and until the 1990s when it consistently outperformed statistical or simple dynamical models. In 1994, a version of SHIFOR was created for the northwest Pacific Ocean for typhoon forecasting, known as the Statistical Typhoon Intensity Forecast (STIFOR), which used the 1971–1990 data for that region to develop intensity forecasts out to 72 hours into the future.
In regards to intensity forecasting, the Statistical Hurricane Intensity Prediction Scheme (SHIPS) utilizes relationships between environmental conditions from the Global Forecast System (GFS) such as vertical wind shear and sea surface temperatures, climatology, and persistence (storm behavior) via multiple regression techniques to come up with an intensity forecast for systems in the northern Atlantic and northeastern Pacific oceans. A similar model was developed for the northwest Pacific Ocean and Southern Hemisphere known as the Statistical Intensity Prediction System (STIPS), which accounts for land interactions through the input environmental conditions from the Navy Operational Global Prediction System (NOGAPS) model. The version of SHIPS with an inland decay component is known as Decay SHIPS (DSHIPS). The Logistic Growth Equation Model (LGEM) uses the same input as SHIPS but within a simplified dynamical prediction system. Within tropical cyclone rainfall forecasting, the Rainfall Climatology and Persistence (r-CLIPER) model was developed using microwave rainfall data from polar orbiting satellites over the ocean and first-order rainfall measurements from the land, to come up with a realistic rainfall distribution for tropical cyclones based on the National Hurricane Center's track forecast. It has been operational since 2004. A statistical-parametric wind radii model has been developed for use at the National Hurricane Center and Joint Typhoon Warning Center which uses climatology and persistence to predict wind structure out to five days into the future.
== Dynamical guidance ==
The first dynamical hurricane track forecast model, the Sanders Barotropic Tropical Cyclone Track Prediction Model (SANBAR), was introduced in 1970 and was used by the National Hurricane Center as part of its operational track guidance through 1989. It was based on a simplified set of atmospheric dynamical equations (the equivalent barotropic formulation) using a deep layer-mean wind.
During 1972, the first model to forecast storm surge along the continental shelf of the United States was developed, known as the Special Program to List the Amplitude of Surges from Hurricanes (SPLASH). In 1978, the first full-physics hurricane-tracking model based on atmospheric dynamics – the movable fine-mesh (MFM) model – began operating. The Quasi-Lagrangian Limited Area (QLM) model is a multi-level primitive equation model using a Cartesian grid and the Global Forecast System (GFS) for boundary conditions. In the early 1980s, the assimilation of satellite-derived winds from water vapor, infrared, and visible satellite imagery was found to improve tropical cyclones track forecasting. The Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model was used for research purposes between 1973 and the mid-1980s. Once it was determined that it could show skill in hurricane prediction, a multi-year transition transformed the research model into an operational model which could be used by the National Weather Service for both track and intensity forecasting in 1995. By 1985, the Sea Lake and Overland Surges from Hurricanes (SLOSH) Model had been developed for use in areas of the Gulf of Mexico and near the United States' East coast, which was more robust than the SPLASH model.
The Beta Advection Model (BAM) has been used operationally since 1987, using steering winds averaged through the 850 hPa to 200 hPa layer and the beta effect, which causes a storm to drift northwest due to differences in the Coriolis effect across the tropical cyclone. The larger the cyclone, the larger the impact of the beta effect is likely to be. Starting in 1990, three versions of the BAM were run operationally: the BAM Shallow (BAMS), which uses average winds in an 850 hPa to 700 hPa layer, the BAM Medium (BAMM), which uses average winds in an 850 hPa to 400 hPa layer, and the BAM Deep (BAMD), which is the same as the pre-1990 BAM. For a weak hurricane without well-developed central thunderstorm activity, BAMS works well, because weak storms tend to be steered by low-level winds. As the storm grows stronger and the associated thunderstorm activity near its center gets deeper, BAMM and BAMD become more accurate, as these types of storms are steered more by upper-level winds. If the forecasts from the three versions are similar, then the forecaster can conclude that there is minimal uncertainty, but if the versions vary by a great deal, then the forecaster has less confidence in the track predicted due to the greater uncertainty. Large differences between model predictions can also indicate wind shear in the atmosphere, which could affect the intensity forecast as well.
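The following toy calculation conveys the idea behind such advection models: a storm position is integrated forward using a layer-mean steering wind plus a constant northwestward drift standing in for the beta effect. The wind and drift values below are invented for illustration; an operational BAM derives its steering winds from a global model analysis.

```python
import numpy as np

R_EARTH = 6.371e6                        # Earth radius (m)
u_steer, v_steer = -5.0, 1.0             # layer-mean steering wind (m/s), assumed
beta_u, beta_v = -1.0, 1.5               # northwestward beta drift (m/s), assumed

lat, lon = 15.0, -45.0                   # initial storm position (degrees)
dt, hours = 3600.0, 120                  # 1-hour steps, 5-day forecast
for _ in range(hours):
    u, v = u_steer + beta_u, v_steer + beta_v
    lat += np.degrees(v * dt / R_EARTH)
    lon += np.degrees(u * dt / (R_EARTH * np.cos(np.radians(lat))))

print(f"t+120h position: {lat:.1f}N {abs(lon):.1f}W")
```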
Tested in 1989 and 1990, the Vic Ooyama Barotropic (VICBAR) model used a cubic B-spline representation of variables for the objective analysis of observations and solutions to the shallow-water prediction equations on nested domains, with the boundary conditions defined as the global forecast model. It was implemented operationally as the Limited Area Sine Transform Barotropic (LBAR) model in 1992, using the GFS for boundary conditions. By 1990, Australia had developed its own storm surge model, which was able to be run in a few minutes on a personal computer. The Japan Meteorological Agency (JMA) developed its own Typhoon Model (TYM) in 1994, and in 1998, the agency began using its own dynamic storm surge model.
The Hurricane Weather Research and Forecasting (HWRF) model is a specialized version of the Weather Research and Forecasting (WRF) model and is used to forecast the track and intensity of tropical cyclones. The model was developed by the National Oceanic and Atmospheric Administration (NOAA), the U.S. Naval Research Laboratory, the University of Rhode Island, and Florida State University. It became operational in 2007. Despite improvements in track forecasting, predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill over dynamical guidance. Other than the specialized guidance, global guidance such as the GFS, Unified Model (UKMET), NOGAPS, Japanese Global Spectral Model (GSM), European Centre for Medium-Range Weather Forecasts model, France's Action de Recherche Petite Echelle Grande Echelle (ARPEGE) and Aire Limitée Adaptation Dynamique Développement InterNational (ALADIN) models, India's National Centre for Medium Range Weather Forecasting (NCMRWF) model, Korea's Global Data Assimilation and Prediction System (GDAPS) and Regional Data Assimilation and Prediction System (RDAPS) models, Hong Kong/China's Operational Regional Spectral Model (ORSM), and the Canadian Global Environmental Multiscale Model (GEM) are used for track and intensity purposes.
=== Timeliness ===
Some models do not produce output quickly enough to be used for the forecast cycle immediately after the model starts running (including HWRF, GFDL, and FSSE). Most of the above track models (except CLIPER) require data from global weather models, such as the GFS, which produce output about four hours after the synoptic times of 0000, 0600, 1200, and 1800 Coordinated Universal Time (UTC). For half of their forecasts, the NHC issues forecasts only three hours after that time, so some "early" models – NHC90, BAM, and LBAR – are run using a 12-hour-old forecast for the current time. "Late" models, such as the GFS and GFDL, finish after the advisory has already been issued. These models are interpolated to the current storm position for use in the following forecast cycle – for example, GFDI, the interpolated version of the GFDL model.
== Consensus methods ==
Using a consensus of forecast models reduces forecast error. Trackwise, the GUNA model is a consensus of the interpolated versions of the GFDL, UKMET with quality control applied to the cyclone tracker, United States Navy NOGAPS, and GFS models. The version of the GUNA corrected for model biases is known as the CGUN. The TCON consensus is the GUNA consensus plus the Hurricane WRF model. The version of the TCON corrected for model biases is known as the TCCN. A lagged average of the last two runs of the members within the TCON plus the ECMWF model is known as the TVCN consensus. The version of the TVCN corrected for model biases is the TVCC consensus.
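Conceptually, a track consensus is little more than an average of member forecasts at each lead time, optionally adjusted for a known bias, as in the bias-corrected variants above. The sketch below uses made-up member tracks and an assumed bias vector purely for illustration.

```python
import numpy as np

# Hypothetical member track forecasts: (lat, lon) at 24/48/72 h leads.
members = {
    "modelA": [(25.1, -75.2), (27.0, -77.5), (29.3, -79.0)],
    "modelB": [(25.4, -75.6), (27.6, -78.1), (30.1, -80.2)],
    "modelC": [(24.9, -75.0), (26.8, -77.0), (28.8, -78.5)],
}
bias = np.array([0.1, -0.2])        # assumed mean position bias (deg lat, deg lon)

tracks = np.array(list(members.values()))    # shape: (members, leads, 2)
consensus = tracks.mean(axis=0)              # simple consensus track
corrected = consensus - bias                 # bias-corrected consensus

for lead, (la, lo) in zip((24, 48, 72), corrected):
    print(f"t+{lead}h consensus: {la:.1f}N {abs(lo):.1f}W")
```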
In early 2013, the NAVGEM replaced the NOGAPS as the Navy's primary operational global forecast model. For the 2013 season, and until model verification can occur, it is not being utilized in the development of any consensus forecasts.
For intensity, a combination of the LGEM, interpolated GFDL, interpolated HWRF, and DSHIPS models is known as the ICON consensus. The lagged average of the last two runs of models within the ICON consensus is called the IVCN consensus. Across the northwest Pacific and Southern Hemisphere, a ten-member STIPS consensus is formed from the output of the NOGAPS, GFS, the Japanese GSM, the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), the UKMET, the Japanese TYM, the GFDL with NOGAPS boundary conditions, the Air Force Weather Agency (AFWA) Model, the Australian Tropical Cyclone Local Area Prediction System, and the Weber Barotropic Model.
== Ensemble methods ==
No model is ever perfectly accurate because it is impossible to learn exactly everything about the atmosphere in a timely enough manner, and atmospheric measurements that are taken are not completely accurate. The use of the ensemble method of forecasting, whether it be a multi-model ensemble, or numerous ensemble members based on the global model, helps define the uncertainty and further limit errors.
The JMA has produced an 11-member ensemble forecast system for typhoons known as the Typhoon Ensemble Prediction System (TEPS) since February 2008, which is run out to 132 hours into the future. It uses a lower resolution version (with larger grid spacing) of its GSM, with ten perturbed members and one non-perturbed member. The system reduces errors by an average of 40 kilometres (25 mi) five days into the future when compared to its higher resolution GSM.
The Florida State Super Ensemble (FSSE) is produced from a suite of models which then uses statistical regression equations developed over a training phase to reduce their biases, which produces forecasts better than the member models or their mean solution. It uses 11 global models, including five developed at Florida State University, the Unified Model, the GFS, the NOGAPS, the United States Navy NOGAPS, the Australian Bureau of Meteorology Research Centre (BMRC) model, and Canadian Recherche en Prévision Numérique (RPN) model. It shows significant skill in track, intensity, and rainfall predictions of tropical cyclones.
The Systematic Approach Forecast Aid (SAFA) was developed by the Joint Typhoon Warning Center to create a selective consensus forecast which removed more erroneous forecasts at a 72‑hour time frame from consideration using the United States Navy NOGAPS model, the GFDL, the Japan Meteorological Agency's global and typhoon models, as well as the UKMET. All the models improved during SAFA's five-year history and removing erroneous forecasts proved difficult to do in operations.
== Sunspot theory ==
A 2010 report correlates low sunspot activity with high hurricane activity. Analysis of historical data showed a 25% chance of at least one hurricane striking the continental United States during a peak sunspot year, and a 64% chance during a low sunspot year. In June 2010, the hurricane predictors in the US were not using this information.
== Hurricane forecast model accuracy ==
The accuracy of hurricane forecast models can vary significantly from storm to storm. For some storms the factors affecting the hurricane track are relatively straightforward, and the models are not only accurate but they produce similar forecasts, while for other storms the factors affecting the hurricane track are more complex and different models produce very different forecasts.
== See also ==
Tropical cyclone forecasting
Tropical cyclone rainfall forecasting
Weather forecasting
== References ==
== External links ==
Tropical Cyclone Forecasters Reference Guide, Chapter 5 Archived 11 July 2006 at the Wayback Machine
Model Analyses and Forecasts from NCEP Archived 23 December 2007 at the Wayback Machine
National Hurricane Center Forecast Model Background and Information | Wikipedia/Tropical_cyclone_forecast_model |
The Community Climate System Model (CCSM) is a coupled general circulation model (GCM) developed by the University Corporation for Atmospheric Research (UCAR) with funding from the National Science Foundation (NSF), the Department of Energy (DoE), and the National Aeronautics and Space Administration (NASA). The coupled components include an atmospheric model (Community Atmosphere Model), a land-surface model (Community Land Model), an ocean model (Parallel Ocean Program), and a sea ice model (Community Sea Ice Model, CICE). CCSM is maintained by the National Center for Atmospheric Research (NCAR).
Its software design assumes a physical/dynamical component of the climate system and, as a freely available community model, is designed to work on a variety of machine architectures powerful enough to run the model. The CESM codebase is mostly public domain with some segregable components issued under open source and other licenses. The offline chemical transport model has been described as "very efficient".
The model includes four submodels (land, sea-ice, ocean and atmosphere) connected by a coupler that exchanges information with the submodels. NCAR suggested that because of this, CCSM cannot be considered a single climate model, but rather a framework for building and testing various climate models.
== Submodels ==
=== Ocean model (docn6) ===
The Climatological Data Ocean Model (docn) is currently at version 6.0. It must be run within the framework of CCSM rather than standalone. It takes two netCDF datasets as input and sends six outputs to the coupler, to be integrated with the output of the other submodels.
=== Atmosphere model (CAM) ===
The Community Atmosphere Model (CAM) can also be run as a standalone atmosphere model. Its most current version is 3.1, while 3.0 was the fifth generation. On May 17, 2002, its name was changed from the NCAR Community Climate Model to reflect its role in the new system. It shares the same horizontal grid as the land model of CCSM: a 256×128 regular longitude/latitude global horizontal grid (giving a 1.4 degree resolution). It has 26 levels in the vertical.
=== Sea Ice Model (CICE) ===
The polar component of ocean-atmosphere coupling includes sea ice geophysics using the formerly-known Los Alamos Sea Ice Model, CICE, now often referred to as the CICE Consortium model, to which NCAR has contributed code and physical improvements through the Polar Climate Working Group. CICE simulates the growth, movement, deformation and melt of sea ice, critical for calculating energy and mass fluxes between the polar atmosphere and oceans in the earth system.
== Development ==
The first version of CCSM was created in 1983 as the Community Climate Model (CCM). Over the next two decades it was steadily improved and was renamed CCSM after the Climate System Model (CSM) components were introduced in May 1996. In June 2004 NCAR released the third version, which included new versions of all of the submodels. In 2007 this new version (commonly given the acronym CCSM3 or NCCCSM) was used in the IPCC Fourth Assessment Report, alongside many others. In May 2010 NCAR released CCSM version 4 (CCSM4). On June 25, 2010 NCAR released the successor to CCSM, called the Community Earth System Model (CESM), version 1.0 (CESM1), as a unified code release that included CCSM4 as the code base for its atmospheric component.
== References ==
== External links ==
CCSM4
CCSM3
CCSM2
CCSM1
Community Atmosphere Model (CAM), atmospheric component of CCSM | Wikipedia/Community_Climate_System_Model |
The MEMO model (version 6.2) is a Eulerian non-hydrostatic prognostic mesoscale model for wind-flow simulation. It was developed by the Aristotle University of Thessaloniki in collaboration with the Universität Karlsruhe. The MEMO Model together with the photochemical dispersion model MARS are the two core models of the European zooming model (EZM). This model belongs to the family of models designed for describing atmospheric transport phenomena in the local-to-regional scale, frequently referred to as mesoscale air pollution models.
== History ==
Initially, EZM was developed for modelling the transport and chemical transformation of pollutants in selected European regions in the frame of the EUROTRAC sub-project EUMAC, and therefore it was formerly called the EUMAC Zooming Model (EUROTRAC, 1992). EZM has evolved to be one of the most frequently applied mesoscale air pollution model systems in Europe. It has already been successfully applied to various European airsheds including the Upper Rhine valley and the areas of Basel, Graz, Barcelona, Lisbon, Madrid, Milano, London, Cologne, Lyon, The Hague, Athens (Moussiopoulos, 1994; Moussiopoulos, 1995) and Thessaloniki. More details are to be found elsewhere (Moussiopoulos 1989; Flassak 1990; Moussiopoulos et al. 1993).
== Model equations ==
The prognostic mesoscale model MEMO describes the dynamics of the atmospheric boundary layer. In the present model version, air is assumed to be unsaturated. The model solves the continuity equation, the momentum equations and several transport equations for scalars (including the thermal energy equation and, as options, transport equations for water vapour, the turbulent kinetic energy and pollutant concentrations).
== Transformation to terrain-following coordinates ==
The lower boundary of the model domain coincides with the ground. Because of the inhomogeneity of the terrain, it is not possible to impose boundary conditions at that boundary with respect to Cartesian coordinates. Therefore, a transformation of the vertical coordinate to a terrain-following one is performed. Hence, the originally irregularly bounded physical domain is mapped onto one consisting of unit cubes.
== Numerical solution of the equation system ==
The discretized equations are solved numerically on a staggered grid, i.e. the scalar quantities ρ, p and θ are defined at the cell centre, while the velocity components u, v and w are defined at the centre of the appropriate interfaces.
Temporal discretization of the prognostic equations is based on the explicit second order Adams-Bashforth scheme. There are two deviations from the Adams-Bashforth scheme. The first refers to the implicit treatment of the nonhydrostatic part of the mesoscale pressure perturbation p_nh. To ensure non-divergence of the flow field, an elliptic equation is solved. The elliptic equation is derived from the continuity equation, wherein the velocity components are expressed in terms of p_nh. Since the elliptic equation is derived from the discrete form of the continuity equation and the discrete form of the pressure gradient, conservativity is guaranteed (Flassak and Moussiopoulos, 1988). The discrete pressure equation is solved numerically with a fast elliptic solver in conjunction with a generalized conjugate gradient method. The fast elliptic solver is based on fast Fourier analysis in both horizontal directions and Gaussian elimination in the vertical direction (Moussiopoulos and Flassak, 1989).
The second deviation from the explicit treatment concerns turbulent diffusion in the vertical direction. With an explicit treatment of this term, the stability requirement might necessitate an unacceptably small time increment. To avoid this, vertical turbulent diffusion is treated using the second order Crank–Nicolson method.
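A minimal illustration of why the implicit treatment helps: the Crank–Nicolson step below stays stable with a time increment well above the explicit diffusion limit dt ≤ dz²/(2K). The grid, diffusivity, and Dirichlet boundaries are illustrative assumptions, not MEMO's actual configuration.

```python
import numpy as np

# Crank-Nicolson time stepping for 1-D vertical diffusion dphi/dt = K d2phi/dz2.
nz, K, dz, dt = 50, 5.0, 20.0, 600.0    # explicit limit here would be dt <= 40 s
r = K * dt / (2 * dz**2)

# Tridiagonal second-derivative operator with Dirichlet (zero) boundaries.
D = (np.diag(-2.0 * np.ones(nz)) +
     np.diag(np.ones(nz - 1), 1) +
     np.diag(np.ones(nz - 1), -1))
A = np.eye(nz) - r * D                  # implicit part (new time level)
B = np.eye(nz) + r * D                  # explicit part (old time level)

phi = np.exp(-((np.arange(nz) - nz / 2) ** 2) / 20.0)  # initial profile
for _ in range(100):
    phi = np.linalg.solve(A, B @ phi)   # one Crank-Nicolson step

print(f"profile maximum after diffusion: {phi.max():.4f}")
```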
In principle, advective terms can be computed using any suitable advection scheme. In the present version of MEMO, a 3D second-order total-variation-diminishing (TVD) scheme is implemented which is based on the 1D scheme proposed by Harten (1986). It achieves a fair (though not complete) reduction of numerical diffusion, the solution being independent of the magnitude of the scalar (preserving transportivity).
== Parameterizations ==
Turbulence and radiative transfer are the most important physical processes that have to be parameterized in a prognostic mesoscale model. In the MEMO model, radiative transfer is calculated with an efficient scheme based on the emissivity method for longwave radiation and an implicit multilayer method for shortwave radiation (Moussiopoulos 1987).
The diffusion terms may be represented as the divergence of the corresponding fluxes. For turbulence parameterizations, K-theory is applied. In MEMO, turbulence can be treated with either a zero-, one- or two-equation turbulence model. For most applications a one-equation model is used, in which a conservation equation for the turbulent kinetic energy is solved.
== Initial and boundary conditions ==
In MEMO, initialization is performed with suitable diagnostic methods: a mass-consistent initial wind field is formulated using an objective analysis model and scalar fields are initialized using appropriate interpolating techniques (Kunz, R., 1991). Data needed to apply the diagnostic methods may be derived either from observations or from larger scale simulations.
Suitable boundary conditions have to be imposed for the wind velocity components u, v and w, the potential temperature θ and the pressure p at all boundaries. At open boundaries, wave reflection and deformation may be minimized by the use of so-called radiation conditions (Orlanski 1976).
According to the experience gained so far with the model MEMO, neglecting large scale environmental information might result in instabilities in case of simulations over longer time periods.
For the nonhydrostatic part of the mesoscale pressure perturbation, homogeneous Neumann boundary conditions are used at lateral boundaries. With these conditions, the wind velocity component perpendicular to the boundary remains unaffected by the pressure change.
At the upper boundary, Neumann boundary conditions are imposed for the horizontal velocity components and the potential temperature. To ensure non-reflectivity, a radiative condition is used for the hydrostatic part of the mesoscale pressure perturbation p_h at that boundary. Hence, vertically propagating internal gravity waves are allowed to leave the computational domain (Klemp and Durran 1983). For the nonhydrostatic part of the mesoscale pressure perturbation, homogeneous staggered Dirichlet conditions are imposed. Being justified by the fact that nonhydrostatic effects are negligible at large heights, this condition is necessary if singularity of the elliptic pressure equation is to be avoided in view of the Neumann boundary conditions at all other boundaries.
The lower boundary coincides with the ground (or, more precisely, a height above ground corresponding to its aerodynamic roughness). For the non-hydrostatic part of the mesoscale pressure perturbation, inhomogeneous Neumann conditions are imposed at that boundary. All other conditions at the lower boundary follow from the assumption that the Monin–Obukhov similarity theory is valid.
A one-way interactive nesting facility is available within MEMO. Thus, successive simulations on grids of increasing resolution are possible. During these simulations, the results of the application to a coarse grid are used as boundary conditions for the application to the finer grid (Kunz and Moussiopoulos, 1995).
== Grid definition ==
The governing equations are solved numerically on a staggered grid. Scalar quantities such as the temperature, pressure, density and also the cell volume are defined at the centre of a grid cell, and the velocity components u, v and w at the centre of the appropriate interface. Turbulent fluxes are defined at different locations: shear fluxes are defined at the centre of the appropriate edges of a grid cell and normal stress fluxes at scalar points. With this definition, the outgoing fluxes of momentum, mass, heat and also turbulent fluxes of a grid cell are identical to the incoming fluxes of the adjacent grid cell, so the numerical method is conservative.
== Topography and surface type ==
For calculations with MEMO, a file must be provided which contains the orography height and surface type for each grid location. The following surface types are distinguished and must be stored as percentages:
water (type: 1)
arid land (type: 2)
sparse vegetation (type: 3)
farmland (type: 4)
forest (type: 5)
suburban area (type: 6)
urban area (type: 7)
Only surface types 1–6 have to be stored. Type 7 is the difference between 100% and the sum of types 1–6. If the percentage of a surface type is 100%, then write the number 10 and for all other surface types the number 99.
The orography height is the mean height of each grid location above sea level in meters.
== Meteorological data ==
The prognostic model MEMO is a set of partial differential equations in three spatial directions and in time. To solve these equations, information about the initial state in the whole domain and about the development of all relevant quantities at the lateral boundaries is required.
=== Initial state ===
To generate an initial state for the prognostic model, a diagnostic model (Kunz, R., 1991) is applied using measured temperature and wind data. Both data can be provided as:
surface measurements, i.e. single measurements directly above the surface (optional)
upper air soundings, i.e. soundings that consist of two or more measurements at different heights at a constant geographical location (at least one sounding for temperature and wind velocity is required).
=== Time-dependent boundary conditions ===
Information about quantities at the lateral boundaries can be taken into account as surface measurements and upper air soundings. Therefore, a key word and the time when boundary data is given must occur in front of a set of boundary information.
== Nesting facility ==
In MEMO, a one-way interactive nesting scheme is implemented. With this nesting scheme a coarse grid and a fine grid simulation can be nested. During the coarse grid simulation, data is interpolated and written to a file. A consecutive fine grid simulation uses this data as lateral boundary values.
== See also ==
Bibliography of atmospheric dispersion modeling
Atmospheric dispersion modeling
List of atmospheric dispersion models
Air pollution dispersion terminology
Useful conversions and formulas for air dispersion modeling
== References ==
EUROTRAC (1992), Annual Report 1991, Part 5.
Flassak, Th. and Moussiopoulos, N. (1988), Direct solution of the Helmholtz equation using Fourier analysis on the CYBER 205, Environmental Software 3, 12–16.
Harten, A. (1986), On a large time-step high resolution scheme, Math. Comp. 46, 379–399.
Klemp, J.B. and Durran, D.R. (1983), An upper boundary condition permitting internal gravity wave radiation in numerical mesoscale models, Mon. Weather Rev.111, 430–444.
Kunz, R. (1991), Entwicklung eines diagnostischen Windmodells zur Berechnung des Anfangszustandes fόr das dynamische Grenzschichtmodell MEMO, Diplomarbeit Universitδt Karlsruhe.
Kunz R. and Moussiopoulos N. (1995), Simulation of the wind field in Athens using refined boundary conditions, Atmos. Environ. 29, 3575–3591.
Moussiopoulos, N. (1987), An efficient scheme to calculate radiative transfer in mesoscale models, Environmental Software 2, 172–191.
Moussiopoulos, N. (1989), Mathematische Modellierung mesoskaliger Ausbreitung in der Atmosphδre, Fortschr.-Ber. VDI, Reihe 15, Nr. 64, pp. 307.
Moussiopoulos N., ed. (1994), The EUMAC Zooming Model (EZM): Model Structure and Applications, EUROTRAC Report, 266 pp.
Moussiopoulos N. (1995), The EUMAC Zooming Model, a tool for local-to-regional air quality studies, Meteorol. Atmos. Phys. 57, 115–133.
Moussiopoulos, N. and Flassak, Th. (1989), A fully vectorized fast direct solver of the Helmholtz equation in Applications of supercomputers in engineering: Algorithms, computer systems and user experience, Brebbia, C.A. and Peters, A. (editors), Elsevier, Amsterdam 67–77.
Moussiopoulos, N., Flassak, Th., Berlowitz, D., Sahm, P. (1993), Simulations of the Wind Field in Athens With the Nonhydrostatic Mesoscale Model MEMO, Environmental Software 8, 29–42.
Orlanski, J. (1976), A simple boundary condition for unbounded hyperbolic flows, J. Comput. Phys. 21, 251–269.
== External links ==
Model Documentation System
European Topic Centre on Air and Climate Change (ETC/ACC) | Wikipedia/MEMO_Model |
The Princeton Ocean Model (POM) is a community general numerical model for ocean circulation that can be used to simulate and predict oceanic currents, temperatures, salinities and other water properties.
== Development ==
The model code was originally developed at Princeton University (G. Mellor and Alan Blumberg) in collaboration with Dynalysis of Princeton (H. James Herring, Richard C. Patchen). The model incorporates the Mellor–Yamada turbulence scheme developed in the early 1970s by George Mellor and Ted Yamada; this turbulence sub-model is widely used by oceanic and atmospheric models. At the time, early computer ocean models such as the Bryan–Cox model (developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, GFDL, which later became the Modular Ocean Model, MOM) were aimed mostly at coarse-resolution simulations of the large-scale ocean circulation, so there was a need for a numerical model that could handle high-resolution coastal ocean processes. The Blumberg–Mellor model (which later became POM) thus included new features such as a free surface to handle tides, sigma vertical coordinates (i.e., terrain-following) to handle complex topographies and shallow regions, a curvilinear grid to better handle coastlines, and a turbulence scheme to handle vertical mixing. In the early 1980s the model was used primarily to simulate estuaries such as the Hudson–Raritan Estuary (by Leo Oey) and the Delaware Bay (Boris Galperin), but the first attempts to use a sigma coordinate model for basin-scale problems also began, with a coarse resolution model of the Gulf of Mexico (Blumberg and Mellor) and models of the Arctic Ocean (with the inclusion of ice-ocean coupling by Lakshmi Kantha and Sirpa Hakkinen).
In the early 1990s when the web and browsers started to be developed, POM became one of the first ocean model codes that were provided free of charge to users through the web. The establishment of the POM users group and its web support (by Tal Ezer) resulted in a continuous increase in the number of POM users which grew from about a dozen U.S. users in the 1980s to over 1000 users in 2000 and over 4000 users by 2009; there are users from over 70 different countries. In the 1990s the usage of POM expands to simulations of the Mediterranean Sea (Zavatarelli) and the first simulations with a sigma coordinate model of the entire Atlantic Ocean for climate research (Ezer). The development of the Mellor–Ezer optimal interpolation data assimilation scheme that projects surface satellite data into deep layers allows the construction of the first ocean forecast systems for the Gulf Stream and the U.S. east coast running operationally at the NOAA's National Weather Service (Frank Aikman and others). Operational forecast system for other regions such as the Great Lakes, the Gulf of Mexico (Oey), the Gulf of Maine (Huijie Xue) and the Hudson River (Blumberg) followed. For more information on applications of the model, see the searchable database of over 1800 POM-related publications.
== Derivatives and other models ==
In the late 1990s and the 2000s many other terrain-following community ocean models have been developed; some of their features can be traced back to features included in the original POM, other features are additional numerical and parameterization improvements. Several ocean models are direct descendants of POM such as the commercial version of POM known as the estuarine and coastal ocean model (ECOM), the navy coastal ocean model (NCOM) and the finite-volume coastal ocean model (FVCOM). Recent developments in POM include a generalized coordinate system that combines sigma and z-level grids (Mellor and Ezer), inundation features that allow simulations of wetting and drying (e.g., flood of land area) (Oey), and coupling ocean currents with surface waves (Mellor). Efforts to improve turbulent mixing also continue (Galperin, Kantha, Mellor and others).
== Users' meetings ==
POM users' meetings were held every few years, and in recent years the meetings were extended to include other models and renamed the International Workshop on Modeling the Ocean (IWMO).
List of meetings:
1. 1996, June 10–12, Princeton, NJ, USA (POM96)
2. 1998, February 17–19, Miami, FL, USA (POM98)
3. 1999, September 20–22, Bar Harbor, ME, USA (SigMod99)
4. 2001, August 20–22, Boulder, CO, USA (SigMod01)
5. 2003, August 4–6, Seattle, WA, USA (SigMod03)
6. 2009, February 23–26, Taipei, Taiwan (1st IWMO-2009)
7. 2010, May 24–26, Norfolk, VA, USA (2nd IWMO-2010)
8. 2011, June 6–9, Qingdao, China (3rd IWMO-2011)
9. 2012, May 21–24, Yokohama, Japan (4th IWMO-2012)
10. 2013, June 17–20, Bergen, Norway (5th IWMO-2013)
11. 2014, June 23–27, Halifax, Nova Scotia, Canada (6th IWMO-2014)
12. 2015, June 1–5, Canberra, Australia (7th IWMO-2015)
13. 2016, June 7–10, Bologna, Italy (8th IWMO-2016)
14. 2017, July 3–6, Seoul, South Korea (9th IWMO-2017)
15. 2018, June 25–28, Santos, Brazil (10th IWMO-2018)
16. 2019, June 17–20, Wuxi, China (11th IWMO-2019)
17. 2022, June 28 – July 1, Ann Arbor, MI, USA (12th IWMO-2022)
18. 2023, June 27–30, Hamburg, Germany (13th IWMO-2023)
Reviewed papers from the IWMO meetings are published by Ocean Dynamics in special issues
(IWMO-2009 Part-I, IWMO-2009 Part-II, IWMO-2010, IWMO-2011, IWMO-2012, IWMO-2013, IWMO-2014).
== References ==
== External links ==
POM-WEB page (registration and information)
MPI-POM and Taiwan Ocean Prediction (TOP) Archived June 16, 2016, at the Wayback Machine | Wikipedia/Princeton_ocean_model |
In weather forecasting, model output statistics (MOS) is a multiple linear regression technique in which predictands, often near-surface quantities (such as two-meter-above-ground-level air temperature, horizontal visibility, and wind direction, speed and gusts), are related statistically to one or more predictors. The predictors are typically forecasts from a numerical weather prediction (NWP) model, climatic data, and, if applicable, recent surface observations. Thus, output from NWP models can be transformed by the MOS technique into sensible weather parameters that are familiar to a layperson.
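At its core, MOS development reduces to fitting regression coefficients on an archive of model forecasts and verifying observations. The sketch below does this with synthetic stand-in data and ordinary least squares, omitting the screening step that selects predictors in a real MOS system.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000                                       # size of the (synthetic) archive
model_t2m = rng.normal(15.0, 8.0, n)           # raw model 2-m temperature (deg C)
model_wind = rng.normal(5.0, 2.0, n)           # model 10-m wind speed (m/s)
# Synthetic "observations" with a systematic model bias plus noise, the kind
# of relationship a MOS equation implicitly corrects.
obs_t2m = 0.9 * model_t2m - 1.5 + 0.2 * model_wind + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), model_t2m, model_wind])
coef, *_ = np.linalg.lstsq(X, obs_t2m, rcond=None)

new_case = np.array([1.0, 20.0, 6.0])          # [intercept, model T2m, model wind]
print(f"raw model: 20.0 C  ->  MOS forecast: {new_case @ coef:.1f} C")
```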
== Background ==
Output directly from the NWP model's lowest layer(s) generally is not used by forecasters because the actual physical processes that occur within the Earth's boundary layer are crudely approximated in the model (i.e., physical parameterizations) along with its relatively coarse horizontal resolution. Because of this lack of fidelity and its imperfect initial state, forecasts of near-surface quantities obtained directly from the model are subject to systematic (bias) and random model errors, which tend to grow with time.
In the development of MOS equations, past observations and archived NWP model forecast fields are used with a screening regression to determine the 'best' predictors and their coefficients for a particular predictand and forecast time. By using archived model forecast output along with verifying surface observations, the resulting equations implicitly take into account physical effects and processes which the underlying numerical weather prediction model cannot explicitly resolve, resulting in much better forecasts of sensible weather quantities. In addition to correcting systematic errors, MOS can produce reliable probabilities of weather events from a single model run. In contrast, despite the enormous amount of computing resources devoted to generating them, the relative frequency of events in ensemble model forecasts, often used as a proxy for probability, does not exhibit useful reliability. Thus, ensemble NWP model output also requires additional post-processing in order to obtain reliable probabilistic forecasts, using nonhomogeneous Gaussian regression or other methods.
== History ==
=== United States ===
MOS was conceived, and planning for its use began, within the U.S. National Weather Service's (NWS's) Techniques Development Laboratory (TDL) in 1965, and forecasts were first issued from it in 1968. Since then, TDL, now the Meteorological Development Laboratory (MDL), has continued to create, refine and update MOS equation sets as additional NWP models were developed and made operational at the National Meteorological Center (NMC) and then the Environmental Modeling Center (EMC).
Given its multi-decadal history within the U.S. NWS and its continuous improvement and superior skill over direct NWP model output, MOS guidance is still one of the most valuable forecast tools used by forecasters within the agency.
== United States forecast guidance ==
There are eight sets of MOS guidance available from MDL, operational and experimental, covering the span of time from the next hour out to ten days for the United States and most of its territories.
Nested Grid Model MOS was discontinued in 2009.
Initially, MOS guidance was developed for airports and other fixed locales where METARs (or similar reports) were routinely issued. Therefore, MOS guidance was and continues to be provided in an alphanumeric 'bulletin' format for these locations, for example a short-range MOS forecast bulletin for Clinton-Sherman Airport, Oklahoma (KCSM), based on output from EMC's Global Forecast System model.
With the availability of private- and government-owned weather mesonets and new objective analysis and interpolation techniques, gridded GFS MOS guidance became available in 2006.
=== Advantages and disadvantages ===
The advantage of MOS forecast guidance as developed in the United States allowed for
the removal of NWP model bias,
prediction of weather elements not forecast by the NWP model, e.g., surface visibilities, cloud ceiling heights,
reliable probabilities of binary events, e.g. probability of precipitation and (severe) thunderstorms,
reliable probabilities of categorical events, e.g., probability of clear, scattered, broken or overcast skies.
These points, while greatly desired by forecasters, do come at a price. From its very beginnings, the development of robust MOS equations for a particular NWP model required at least two years' worth of archived model output and observations, during which time the NWP model should remain unchanged, or nearly so. This requirement is necessary in order to fully capture the model's error characteristics under a wide variety of meteorological flow regimes for any particular location or region. Extreme meteorological events such as unusual cold- or heat-waves, heavy rain and snowfall, high winds, etc., are important in the development of robust MOS equations. A lengthy model archive has the best chance of capturing such events.
From the 1970s and into the 1980s, this requirement was not very onerous since EMC (then NMC) scientists, being relatively constrained by computational resources at the time, could only make relatively minor, incremental improvements to their NWP models. However, since the 1990s, NWP models have been upgraded more frequently, oftentimes with significant changes in physics and horizontal and vertical grid resolutions. Since MOS corrects systematic biases of the NWP model it is based on, any changes to the NWP model's error characteristics affects MOS guidance, usually in a negative way. This was a factor in the discontinuation of the MOS for the individual ensemble members of the GFS in April 2019; that product had not been updated since 2009, and NOAA decided to cease offering the product instead of bringing it up to date.
In the case of a major upgrade to a NWP model, the EMC will run the newer version of model in parallel with the operational one for many months to allow for direct comparison of model performance. In addition to parallel real-time runs, EMC also runs the newer model to examine past events and seasons, i.e., retrospective forecasts.
All of these runs from the upgraded model allow the National Weather Service, Weather Prediction Center (WPC), National Hurricane Center (NHC), and Storm Prediction Center (SPC) to evaluate its performance prior to the decision to either accept or reject it for operational use. MDL scientists have taken advantage of these runs to evaluate and reformulate the MOS equations as needed to avoid deterioration in guidance quality.
== Other weather centers ==
The Royal Netherlands Meteorological Institute developed a MOS system to forecast probabilities of (severe) thunderstorms in the Netherlands.
Scientists from the Meteorological Service of Canada developed a post-processing system called Updateable MOS (UMOS) that quickly incorporates changes to their regional NWP model without the need for a lengthy model archive. The Canadian UMOS system generates a 2-day forecast of temperatures, wind speed and direction and probability of precipitation (POP). UMOS temperature and wind forecasts are provided at 3-h intervals, and POP at 6-h intervals.
Scientists at Kongju National University have also implemented a UMOS system to create forecasts of air temperatures over South Korea. It is unclear whether it is used operationally at the Korea Meteorological Administration.
== Notes ==
== References ==
== Further reading ==
Wilks, Daniel S. (2006). Statistical Methods in the Atmospheric Sciences (Second ed.). Academic Press. p. 627. ISBN 0-12-751966-1. | Wikipedia/Model_output_statistics |
The Navy Global Environmental Model (NAVGEM) is a global numerical weather prediction computer simulation run by the United States Navy's Fleet Numerical Meteorology and Oceanography Center. This mathematical model is run four times a day and produces weather forecasts. Along with the NWS's Global Forecast System, which runs out to 16 days, the ECMWF's Integrated Forecast System (IFS) and the CMC's Global Environmental Multiscale Model (GEM), both of which run out to 10 days, and the UK Met Office's Unified Model, which runs out to 7 days, it is one of five synoptic-scale medium-range models in general use.
The NAVGEM became operational in February 2013, replacing the NOGAPS. It uses the same forecast range as the NOGAPS did (three-hour intervals out to 180 hours) but features a refurbished dynamical core and improvements to the physics simulations compared to its predecessor.
== References ==
Official Website: [1] | Wikipedia/Navy_Global_Environmental_Model |
Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of atmospheric clouds. These aerosols are found in the troposphere, stratosphere, and mesosphere, which collectively make up the greatest part of the homosphere. Clouds consist of microscopic droplets of liquid water (warm clouds), tiny crystals of ice (cold clouds), or both (mixed-phase clouds), along with microscopic particles of dust, smoke, or other matter, known as condensation nuclei. Cloud droplets initially form by the condensation of water vapor onto condensation nuclei when the supersaturation of air exceeds a critical value according to Köhler theory. Cloud condensation nuclei are necessary for cloud droplet formation because of the Kelvin effect, which describes the change in saturation vapor pressure due to a curved surface. At small radii, the amount of supersaturation needed for condensation to occur is so large that it does not happen naturally. Raoult's law describes how the vapor pressure depends on the amount of solute in a solution. At high concentrations, when the cloud droplets are small, the supersaturation required is smaller than it would be without the presence of a nucleus.
In warm clouds, larger cloud droplets fall at a higher terminal velocity, because at a given velocity the drag force per unit of droplet weight on smaller droplets is larger than on large droplets. The large droplets can then collide with small droplets and combine to form even larger drops. When the drops become large enough that their downward velocity (relative to the surrounding air) is greater than the upward velocity (relative to the ground) of the surrounding air, the drops can fall as precipitation. Collision and coalescence are less important in mixed-phase clouds, where the Bergeron process dominates. Other important processes that form precipitation are riming, when a supercooled liquid drop collides with a solid snowflake, and aggregation, when two solid snowflakes collide and combine. The precise mechanics of how a cloud forms and grows is not completely understood, but scientists have developed theories explaining the structure of clouds by studying the microphysics of individual droplets. Advances in weather radar and satellite technology have also allowed the precise study of clouds on a large scale.
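For the smallest droplets, the terminal velocity referred to above can be estimated from Stokes' drag law. A minimal sketch (standard Stokes formula with assumed near-surface air properties; valid only for radii up to a few tens of micrometres):

    def stokes_terminal_velocity(radius_m):
        """Terminal fall speed (m/s) of a small water droplet in still air.

        Valid only in the Stokes regime (radii up to roughly 30 micrometres).
        """
        g = 9.81            # gravitational acceleration, m/s^2
        rho_water = 1000.0  # droplet density, kg/m^3
        rho_air = 1.2       # air density, kg/m^3 (assumed near-surface value)
        eta = 1.8e-5        # dynamic viscosity of air, kg/(m s)
        return 2.0 * radius_m**2 * g * (rho_water - rho_air) / (9.0 * eta)

    # Fall speed scales with radius squared, so a 20-micron droplet falls
    # about 100 times faster than a 2-micron droplet and can sweep it up.
    print(stokes_terminal_velocity(2e-6), stokes_terminal_velocity(20e-6))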
== History of cloud physics ==
Modern cloud physics began in the 19th century and was described in several publications. Otto von Guericke originated the idea that clouds were composed of water bubbles. In 1847 Augustus Waller used a spider web to examine droplets under the microscope. These observations were confirmed by William Henry Dines in 1880 and Richard Assmann in 1884.
== Cloud formation ==
=== Cooling air to its dew point ===
==== Adiabatic cooling ====
As water evaporates from an area of Earth's surface, the air over that area becomes moist. Moist air is lighter than the surrounding dry air, creating an unstable situation. When enough moist air has accumulated, all the moist air rises as a single packet, without mixing with the surrounding air. As more moist air forms along the surface, the process repeats, resulting in a series of discrete packets of moist air rising to form clouds.
This process occurs when one or more of three possible lifting agents—cyclonic/frontal, convective, or orographic—causes air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated. The main mechanism behind this process is adiabatic cooling. Atmospheric pressure decreases with altitude, so the rising air expands in a process that expends energy and causes the air to cool, which makes water vapor condense into cloud. Water vapor in saturated air is normally attracted to condensation nuclei such as dust and salt particles that are small enough to be held aloft by normal circulation of the air. The water droplets in a cloud have a normal radius of about 0.002 mm (0.00008 in). The droplets may collide to form larger droplets, which remain aloft as long as the velocity of the rising air within the cloud is equal to or greater than the terminal velocity of the droplets.
For non-convective cloud, the altitude at which condensation begins to happen is called the lifted condensation level (LCL), which roughly determines the height of the cloud base. Free convective clouds generally form at the altitude of the convective condensation level (CCL). If the condensation process occurs below the freezing level in the troposphere, the nuclei help transform the vapor into very small water droplets. Clouds that form just above the freezing level are composed mostly of supercooled liquid droplets, while those that condense out at higher altitudes where the air is much colder generally take the form of ice crystals. An absence of sufficient condensation particles at and above the condensation level causes the rising air to become supersaturated and the formation of cloud tends to be inhibited.
===== Frontal and cyclonic lift =====
Frontal and cyclonic lift occur in their purest manifestations when stable air, which has been subjected to little or no surface heating, is forced aloft at weather fronts and around centers of low pressure. Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds will usually be embedded in the main precipitating cloud layer. Cold fronts are usually faster moving and generate a narrower line of clouds which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm air mass just ahead of the front.
===== Convective lift =====
Another agent is the buoyant convective upward motion caused by significant daytime solar heating at surface level, or by relatively high absolute humidity. Incoming short-wave radiation generated by the sun is re-emitted as long-wave radiation when it reaches Earth's surface. This process warms the air closest to the ground and increases air-mass instability by creating a steeper temperature gradient from warm or hot at surface level to cold aloft. The warmed air rises and cools until temperature equilibrium is achieved with the surrounding air aloft. Moderate instability allows for the formation of cumuliform clouds of moderate size that can produce light showers if the airmass is sufficiently moist. Typical convection upcurrents may allow the droplets to grow to a radius of about 0.015 millimetres (0.0006 in) before precipitating as showers. The equivalent diameter of these droplets is about 0.03 millimetres (0.001 in).
If air near the surface becomes extremely warm and unstable, its upward motion can become quite explosive, resulting in towering cumulonimbiform clouds that can cause severe weather. As tiny water particles that make up the cloud group together to form droplets of rain, they are pulled down to earth by the force of gravity. The droplets would normally evaporate below the condensation level, but strong updrafts buffer the falling droplets, and can keep them aloft much longer than they would otherwise. Violent updrafts can reach speeds of up to 180 miles per hour (290 km/h). The longer the rain droplets remain aloft, the more time they have to grow into larger droplets that eventually fall as heavy showers.
Rain droplets that are carried well above the freezing level become supercooled at first and then freeze into small hail. A frozen ice nucleus can grow to 0.5 inches (1.3 cm) in size traveling through one of these updrafts, and it can cycle through several updrafts and downdrafts before finally becoming so heavy that it falls to the ground as large hail. Cutting a hailstone in half shows onion-like layers of ice, indicating distinct times when it passed through a layer of supercooled water. Hailstones have been found with diameters of up to 7 inches (18 cm).
Convective lift can occur in an unstable air mass well away from any fronts. However, very warm unstable air can also be present around fronts and low-pressure centers, often producing cumuliform and cumulonimbiform clouds in heavier and more active concentrations because of the combined frontal and convective lifting agents. As with non-frontal convective lift, increasing instability promotes upward vertical cloud growth and raises the potential for severe weather. On comparatively rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.
===== Orographic lift =====
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift). If the air is generally stable, nothing more than lenticular cap clouds will form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.
==== Non-adiabatic cooling ====
Along with adiabatic cooling that requires a lifting agent, there are three other main mechanisms for lowering the temperature of the air to its dew point, all of which occur near surface level and do not require any lifting of the air. Conductive, radiational, and evaporative cooling can cause condensation at surface level resulting in the formation of fog. Conductive cooling takes place when air from a relatively mild source area comes into contact with a colder surface, as when mild marine air moves across a colder land area. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. This type of cooling is common during the night when the sky is clear. Evaporative cooling happens when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or sometimes to the point of saturation.
=== Adding moisture to the air ===
There are five main ways water vapor can be added to the air. Increased vapor content can result from wind convergence over water or moist ground into areas of upward motion. Precipitation or virga falling from above also enhances moisture content. Daytime heating causes water to evaporate from the surface of oceans, water bodies or wet land. Transpiration from plants is another typical source of water vapor. Lastly, cool or dry air moving over warmer water will become more humid. As with daytime heating, the addition of moisture to the air increases its heat content and instability and helps set into motion those processes that lead to the formation of cloud or fog.
=== Supersaturation ===
The amount of water that can exist as vapor in a given volume increases with the temperature. When the amount of water vapor is in equilibrium above a flat surface of water, the vapor pressure is called saturation and the relative humidity is 100%. At this equilibrium there are equal numbers of molecules evaporating from the water as there are condensing back into it. If the relative humidity becomes greater than 100%, the air is called supersaturated. Supersaturation occurs in the absence of condensation nuclei.
Since the saturation vapor pressure increases with temperature, cold air has a lower saturation point than warm air. The difference between these values is the basis for the formation of clouds. When saturated air cools, it can no longer contain the same amount of water vapor. If the conditions are right, the excess water will condense out of the air until the lower saturation point is reached. Another possibility is that the water stays in vapor form, even though it is beyond the saturation point, resulting in supersaturation.
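The temperature dependence of saturation vapor pressure can be approximated with a Magnus-type empirical formula, which makes the cooling argument above easy to quantify. A minimal sketch (coefficients as commonly tabulated; an approximation, not an exact law):

    import math

    def saturation_vapor_pressure_hpa(temp_c):
        """Approximate saturation vapor pressure over liquid water (hPa),
        using a Magnus-type formula valid roughly from -45 to 60 degrees C."""
        return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

    def relative_humidity(vapor_pressure_hpa, temp_c):
        """Relative humidity in percent; values above 100 mean supersaturation."""
        return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure_hpa(temp_c)

    # Air holding 12 hPa of vapor: unsaturated at 15 C, supersaturated
    # once cooled to 9 C, at which point the excess can condense out.
    print(relative_humidity(12.0, 15.0), relative_humidity(12.0, 9.0))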
Supersaturation of more than 1–2% relative to water is rarely seen in the atmosphere, since cloud condensation nuclei are usually present. Much higher degrees of supersaturation are possible in clean air, and are the basis of the cloud chamber.
There are no instruments to take measurements of supersaturation in clouds.
=== Supercooling ===
Water droplets commonly remain as liquid water and do not freeze, even well below 0 °C (32 °F). Ice nuclei that may be present in an atmospheric droplet become active for ice formation at specific temperatures in between 0 °C (32 °F) and −38 °C (−36 °F), depending on nucleus geometry and composition. Without ice nuclei, supercooled water droplets (as well as any extremely pure liquid water) can exist down to about −38 °C (−36 °F), at which point spontaneous freezing occurs.
=== Collision-coalescence ===
One theory explaining how the behavior of individual droplets in a cloud leads to the formation of precipitation is the collision-coalescence process. Droplets suspended in the air will interact with each other, either by colliding and bouncing off each other or by combining to form a larger droplet. Eventually, the droplets become large enough that they fall to the earth as precipitation. The collision-coalescence process does not make up a significant part of cloud formation, as water droplets have a relatively high surface tension. In addition, the occurrence of collision-coalescence is closely related to entrainment-mixing processes.
=== Bergeron process ===
The primary mechanism for the formation of ice clouds was discovered by Tor Bergeron. The Bergeron process notes that the saturation vapor pressure of water, or how much water vapor a given volume can contain, depends on what the vapor is interacting with. Specifically, the saturation vapor pressure with respect to ice is lower than the saturation vapor pressure with respect to water. Water vapor may be saturated, at 100% relative humidity, when interacting with a water droplet, but the same amount of water vapor would be supersaturated when interacting with an ice particle. The water vapor will attempt to return to equilibrium, so the extra water vapor will condense into ice on the surface of the particle. These ice particles end up as the nuclei of larger ice crystals. This process only happens at temperatures between 0 °C (32 °F) and −40 °C (−40 °F). Below −40 °C (−40 °F), liquid water will spontaneously nucleate and freeze. The surface tension of the water allows the droplet to stay liquid well below its normal freezing point; when this happens, it is supercooled liquid water. The Bergeron process relies on supercooled liquid water (SLW) interacting with ice nuclei to form larger particles. If there are few ice nuclei compared to the amount of SLW, droplets will be unable to form. A process whereby scientists seed a cloud with artificial ice nuclei to encourage precipitation is known as cloud seeding. This can help cause precipitation in clouds that otherwise may not rain. Cloud seeding adds excess artificial ice nuclei, which shifts the balance so that there are many nuclei compared to the amount of supercooled liquid water. An overseeded cloud will form many particles, but each will be very small. This can be done as a preventative measure for areas that are at risk for hail storms.
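The driving inequality of the Bergeron process, that saturation vapor pressure over ice is lower than over liquid water at the same subfreezing temperature, can likewise be illustrated with Magnus-type approximations over the two phases (empirical coefficients assumed here, as in the earlier sketch):

    import math

    def es_water_hpa(temp_c):
        """Approximate saturation vapor pressure over supercooled water (hPa)."""
        return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

    def es_ice_hpa(temp_c):
        """Approximate saturation vapor pressure over ice (hPa)."""
        return 6.112 * math.exp(22.46 * temp_c / (272.62 + temp_c))

    # At -15 C, air exactly saturated with respect to liquid water is roughly
    # 15% supersaturated with respect to ice, so vapor deposits onto crystals.
    t = -15.0
    print(es_water_hpa(t), es_ice_hpa(t), 100.0 * es_water_hpa(t) / es_ice_hpa(t))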
== Cloud classification ==
Clouds in the troposphere, the atmospheric layer closest to Earth, are classified according to the height at which they are found, and their shape or appearance. There are five forms based on physical structure and process of formation. Cirriform clouds are high, thin and wispy, and are seen most extensively along the leading edges of organized weather disturbances. Stratiform clouds are non-convective and appear as extensive sheet-like layers, ranging from thin to very thick with considerable vertical development. They are mostly the product of large-scale lifting of stable air. Unstable free-convective cumuliform clouds are formed mostly into localized heaps. Stratocumuliform clouds of limited convection show a mix of cumuliform and stratiform characteristics which appear in the form of rolls or ripples. Highly convective cumulonimbiform clouds have complex structures often including cirriform tops and stratocumuliform accessory clouds.
These forms are cross-classified by altitude range or level into ten genus types which can be subdivided into species and lesser types. High-level clouds form at altitudes of 5 to 12 kilometers. All cirriform clouds are classified as high-level and therefore constitute a single cloud genus cirrus. Stratiform and stratocumuliform clouds in the high level of the troposphere have the prefix cirro- added to their names yielding the genera cirrostratus and cirrocumulus. Similar clouds found in the middle level (altitude range 2 to 7 kilometers) carry the prefix alto- resulting in the genus names altostratus and altocumulus.
Low level clouds have no height-related prefixes, so stratiform and stratocumuliform clouds based around 2 kilometres or lower are known simply as stratus and stratocumulus. Small cumulus clouds with little vertical development (species humilis) are also commonly classified as low level.
Cumuliform and cumulonimbiform heaps and deep stratiform layers often occupy at least two tropospheric levels, and the largest or deepest of these can occupy all three levels. They may be classified as low or mid-level, but are also commonly classified or characterized as vertical or multi-level. Nimbostratus clouds are stratiform layers with sufficient vertical extent to produce significant precipitation. Towering cumulus (species congestus), and cumulonimbus may form anywhere from near the surface to intermediate heights of around 3 kilometres. Of the vertically developed clouds, the cumulonimbus type is the tallest and can virtually span the entire troposphere from a few hundred metres above the ground up to the tropopause. It is the cloud responsible for thunderstorms.
Some clouds can form at very high to extreme levels above the troposphere, mostly above the polar regions of Earth. Polar stratospheric clouds are seen but rarely in winter at altitudes of 18 to 30 kilometers, while in summer, noctilucent clouds occasionally form at high latitudes at an altitude range of 76 to 85 kilometers. These polar clouds show some of the same forms as seen lower in the troposphere.
Homospheric types determined by cross-classification of forms and levels.
Homospheric types include the ten tropospheric genera and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure.
== Determination of properties ==
Satellites are used to gather data about cloud properties and other information such as cloud amount, height, IR emissivity, visible optical depth, icing, effective particle size for both liquid and ice, and cloud-top temperature and pressure.
=== Detection ===
Data sets regarding cloud properties are gathered using satellites, such as MODIS, POLDER, CALIPSO or ATSR. The instruments measure the radiances of the clouds, from which the relevant parameters can be retrieved. This is usually done by using inverse theory.
The method of detection is based on the fact that clouds tend to appear brighter and colder than the land surface. Because of this, difficulties arise in detecting clouds above bright (highly reflective) surfaces, such as oceans and ice.
=== Parameters ===
The value of a certain parameter is more reliable the more satellites measure it. This is because the range of errors and neglected details varies from instrument to instrument. Thus, if the analysed parameter has similar values for different instruments, it is accepted that the true value lies in the range given by the corresponding data sets.
The Global Energy and Water Cycle Experiment uses the following quantities to compare data quality from different satellites and to establish a reliable quantification of cloud properties (a simple record type encoding these fields is sketched after the list):
the cloud cover or cloud amount with values between 0 and 1
the cloud temperature at cloud top ranging from 150 to 340 K
the cloud pressure at top 1013 - 100 hPa
the cloud height, measured above sea level, ranging from 0 to 20 km
the cloud IR emissivity, with values between 0 and 1, with a global average around 0.7
the effective cloud amount, the cloud amount weighted by the cloud IR emissivity, with a global average of 0.5
the cloud (visible) optical depth varies within a range of 4 and 10.
the cloud water path for the liquid and solid (ice) phases of the cloud particles
the cloud effective particle size for both liquid and ice, ranging from 0 to 200 μm
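These quantities lend themselves to a simple record type for cross-satellite comparison. A minimal sketch (the field selection and validation logic are illustrative only, not GEWEX's actual procedure; the ranges follow the list above):

    from dataclasses import dataclass

    @dataclass
    class CloudRetrieval:
        cover: float             # cloud amount, 0 to 1
        top_temp_k: float        # cloud top temperature, 150 to 340 K
        top_pressure_hpa: float  # cloud top pressure, 100 to 1013 hPa
        ir_emissivity: float     # cloud IR emissivity, 0 to 1

        def is_physical(self) -> bool:
            """Reject retrievals that fall outside the admissible ranges."""
            return (0.0 <= self.cover <= 1.0
                    and 150.0 <= self.top_temp_k <= 340.0
                    and 100.0 <= self.top_pressure_hpa <= 1013.0
                    and 0.0 <= self.ir_emissivity <= 1.0)

    print(CloudRetrieval(0.6, 255.0, 440.0, 0.7).is_physical())  # True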
=== Icing ===
Another vital property is the icing characteristic of various cloud genus types at various altitudes, which can have great impact on the safety of flying. The methodologies used to determine these characteristics include using CloudSat data for the analysis and retrieval of icing conditions, the location of clouds using cloud geometric and reflectivity data, the identification of cloud types using cloud classification data, and finding vertical temperature distribution along the CloudSat track (GFS).
The range of temperatures that can give rise to icing conditions is defined according to cloud types and altitude levels (a lookup encoding these ranges is sketched after the list):
Low-level stratocumulus and stratus can cause icing at a temperature range of 0 to -10 °C.
For mid-level altocumulus and altostratus, the range is 0 to -20 °C.
Vertical or multi-level cumulus, cumulonimbus, and nimbostratus create icing at a range of 0 to -25 °C.
High-level cirrus, cirrocumulus, and cirrostratus generally cause no icing because they are made mostly of ice crystals colder than -25 °C.
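A minimal lookup sketch encoding the ranges above (the level names and the function itself are illustrative, not an operational product):

    ICING_RANGES_C = {
        # level: (warmest, coldest) icing temperature in degrees Celsius
        "low (stratocumulus, stratus)": (0.0, -10.0),
        "mid (altocumulus, altostratus)": (0.0, -20.0),
        "vertical (cumulus, cumulonimbus, nimbostratus)": (0.0, -25.0),
    }

    def icing_possible(level, temp_c):
        """True if the temperature lies in the icing range for the level.

        High-level cirriform clouds are omitted: being composed of ice
        crystals colder than -25 C, they generally cause no icing.
        """
        if level not in ICING_RANGES_C:
            return False
        warm, cold = ICING_RANGES_C[level]
        return cold <= temp_c <= warm

    print(icing_possible("mid (altocumulus, altostratus)", -12.0))  # True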
=== Cohesion and dissolution ===
There are forces throughout the homosphere (which includes the troposphere, stratosphere, and mesosphere) that can impact the structural integrity of a cloud. It has been speculated that as long as the air remains saturated, the natural force of cohesion that holds the molecules of a substance together may act to keep the cloud from breaking up. However, this speculation has a logical flaw in that the water droplets in the cloud are not in contact with each other and therefore do not satisfy the condition required for the intermolecular forces of cohesion to act. Dissolution of the cloud can occur when the process of adiabatic cooling ceases and upward lift of the air is replaced by subsidence. This leads to at least some degree of adiabatic warming of the air, which can result in the cloud droplets or crystals turning back into invisible water vapor. Stronger forces such as wind shear and downdrafts can impact a cloud, but these are largely confined to the troposphere, where nearly all the Earth's weather takes place. A typical cumulus cloud weighs about 500 metric tons, or 1.1 million pounds, the weight of 100 elephants.
== Models ==
There are two main model schemes that can represent cloud physics. The most common is the bulk microphysics scheme, which uses mean values to describe the cloud properties (e.g., rain water content, ice content); the properties can represent only the first order (concentration) or also the second order (mass).
The second option is the bin microphysics scheme, which keeps the moments (mass or concentration) separately for different sizes of particles.
The bulk microphysics models are much faster than the bin models but are less accurate.
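The structural difference between the two schemes is easy to see in array terms: a bulk scheme carries one prognostic value per hydrometeor category and grid cell, while a bin scheme carries one value per size bin. A minimal sketch (the array shapes are illustrative, not taken from any operational scheme):

    import numpy as np

    n_cells = 1000  # grid cells in a toy one-dimensional domain
    n_bins = 33     # droplet-size bins, an assumed typical count

    # Bulk scheme: a single mean rain-water content per cell (kg/kg).
    rain_bulk = np.zeros(n_cells)

    # Bin scheme: rain-water content resolved per size bin (kg/kg per bin).
    rain_binned = np.zeros((n_cells, n_bins))

    # The bulk field is recoverable by summing the bin field over bins, which
    # is why bin schemes are more accurate but far more expensive to carry.
    print(rain_bulk.shape, rain_binned.sum(axis=1).shape)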
== See also ==
Hurricane dynamics and cloud microphysics
== References == | Wikipedia/Cloud_microphysics |
The Weather Research and Forecasting (WRF) Model is a numerical weather prediction (NWP) system designed to serve both atmospheric research and operational forecasting needs, developed in the United States. NWP refers to the simulation and prediction of the atmosphere with a computer model, and WRF is a set of software for this. WRF features two dynamical (computational) cores (or solvers), a data assimilation system, and a software architecture allowing for parallel computation and system extensibility. The model serves a wide range of meteorological applications across scales ranging from meters to thousands of kilometers.
The effort to develop WRF began in the latter part of the 1990s and was a collaborative partnership principally among the National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (represented by the National Centers for Environmental Prediction (NCEP) and the (then) Forecast Systems Laboratory (FSL)), the Air Force Weather Agency (AFWA), the Naval Research Laboratory (NRL), the University of Oklahoma (OU), and the Federal Aviation Administration (FAA). The bulk of the work on the model has been performed or supported by NCAR, NOAA, and AFWA.
WRF allows researchers to produce simulations reflecting either real data (observations, analyses) or idealized atmospheric conditions. WRF provides a flexible and robust platform for operational forecasting, while offering advances in physics, numerics, and data assimilation contributed by the many research community developers. WRF is currently in operational use at NCEP and other forecasting centers internationally. WRF has grown to have a large worldwide community of users (over 30,000 registered users in over 150 countries), and workshops and tutorials are held each year at NCAR. WRF is used extensively for research and real-time forecasting throughout the world. It has been shown to perform well in simulating atmospheric convection, but also to be prone to producing squall lines too readily.
WRF offers two dynamical solvers for its computation of the atmospheric governing equations, and the variants of the model are known as WRF-ARW (Advanced Research WRF) and WRF-NMM (nonhydrostatic mesoscale model). The Advanced Research WRF (ARW) is supported to the community by the NCAR Mesoscale and Microscale Meteorology Laboratory. The WRF-NMM solver variant was based on the Eta model, and later nonhydrostatic mesoscale model, developed at NCEP. The WRF-NMM (NMM) is supported to the community by the Developmental Testbed Center (DTC).
The WRF serves as the basis for the RAP and HRRR models: high-resolution operational forecast models run regularly at NCEP. The WRF also serves as the basis for the North American Mesoscale (NAM) model at 12 km and 3 km grid resolutions.
A version of WRF-NMM tailored for hurricane forecasting, HWRF (hurricane weather research and forecasting), became operational in 2007.
In 2009, a polar optimized WRF was released through the Byrd Polar Research Center at the Ohio State University.
== See also ==
Global Forecast System
== References ==
== External links ==
WRF website
WRF-ARW support website
WRF-NMM support website | Wikipedia/Weather_Research_and_Forecasting_model |
CICE is a computer model that simulates the growth, melt and movement of sea ice. It has been integrated into many coupled climate system models as well as global ocean and weather forecasting models and is often used as a tool in Arctic and Southern Ocean research. Development of CICE began in the mid-1990s at the United States Department of Energy (DOE), and it is currently maintained and developed by a group of institutions in North America and Europe known as the CICE Consortium. Its widespread use in Earth system science in part owes to the importance of sea ice in determining Earth's planetary albedo and the strength of the global thermohaline circulation in the world's oceans, and in providing surface boundary conditions for atmospheric circulation models, since sea ice occupies a significant proportion (4-6%) of Earth's surface. CICE is a type of cryospheric model.
== Development ==
Development of CICE began in 1994 by Elizabeth Hunke at Los Alamos National Laboratory (LANL). Since its initial release in 1998 following development of the Elastic-Viscous-Plastic (EVP) sea ice rheology within the model, it has been substantially developed by an international community of model users and developers. Enthalpy-conserving thermodynamics and improvements to the sea ice thickness distribution were added to the model between 1998 and 2005. The first institutional user outside of LANL was Naval Postgraduate School in the late-1990s, where it was subsequently incorporated into the Regional Arctic System Model (RASM) in 2011. The National Center for Atmospheric Research (NCAR) was the first to incorporate CICE into a global climate model in 2002, and developers of the NCAR Community Earth System Model (CESM) have continued to contribute to CICE innovations and have used it to investigate polar variability in Earth's climate system. The United States Navy began using CICE shortly after 2000 for polar research and sea ice forecasting and it continues to do so today. Since 2000, CICE development or coupling to oceanic and atmospheric models for weather and climate prediction has occurred at the University of Reading, University College London, the U.K. Met Office Hadley Centre, Environment and Climate Change Canada, the Danish Meteorological Institute, the Commonwealth Science and Industrial Research Organisation, and Beijing Normal University, among other institutions. As a result of model development in the global community of CICE users, the model's computer code now includes a comprehensive saline ice physics and biogeochemistry library that incorporates mushy-layer thermodynamics, anisotropic continuum mechanics, Delta-Eddington radiative transfer, melt-pond physics and land-fast ice. CICE version 6 is open-source software and was released in 2018 on GitHub.
== Keystone Equations ==
There are two main physics equations solved using numerical methods in CICE that underpin the model's predictions of sea ice thickness, concentration and velocity, as well as predictions made with many equations not shown here giving, for example, surface albedo, ice salinity, snow cover, divergence, and biogeochemical cycles. The first keystone equation is Newton's second law for sea ice:
$$m\,\frac{d\mathbf{u}}{dt} = -mf\,\mathbf{k}\times\mathbf{u} + \tau_a + \tau_w - m\hat{g}\,\nabla\mu + \nabla\cdot\boldsymbol{\sigma}$$
where $m$ is the mass per unit area of saline ice on the sea surface, $\mathbf{u}$ is the drift velocity of the ice, $f$ is the Coriolis parameter, $\mathbf{k}$ is the upward unit vector normal to the sea surface, $\tau_a$ and $\tau_w$ are the wind and water stress on the ice, respectively, $\hat{g}$ is the acceleration due to gravity, $\mu$ is the sea surface height, and $\boldsymbol{\sigma}$ is the two-dimensional internal ice stress tensor. Each of the terms requires information about the ice thickness, roughness, and concentration, as well as the state of the atmospheric and oceanic boundary layers. Ice mass per unit area $m$ is determined using the second keystone equation in CICE, which describes the evolution of the sea ice thickness distribution $g(h)$ for different thicknesses $h$ spread over the area for which the sea ice velocity is calculated above:
$$\frac{dg}{dt} = \theta + \psi - g\,(\nabla\cdot\mathbf{u})$$
where $\theta$ is the change in the thickness distribution due to thermodynamic growth and melt, $\psi$ is the redistribution function due to sea ice mechanics and is associated with the internal ice stress $\boldsymbol{\sigma}$, and $-g\,(\nabla\cdot\mathbf{u})$ describes the advection of sea ice in a Lagrangian reference frame. From this, ice mass is given by:
$$m = \rho \int_0^{\infty} h\,g(h)\,dh$$
for density $\rho$ of sea ice.
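In practice the thickness distribution is discretized into a finite set of thickness categories, so the integral above becomes a sum over bins. A minimal numerical sketch (the bin edges, distribution values, and quadrature are illustrative assumptions, not CICE's actual category scheme):

    import numpy as np

    RHO_ICE = 917.0  # sea ice density in kg/m^3, a typical assumed value

    def ice_mass_per_area(h_edges, g_vals):
        """Approximate m = rho * integral of h*g(h) dh with a midpoint rule.

        h_edges: bin edges of ice thickness (m), length n+1
        g_vals:  thickness distribution g(h) at bin midpoints (1/m), length n
        """
        h_mid = 0.5 * (h_edges[:-1] + h_edges[1:])   # bin midpoints
        dh = np.diff(h_edges)                        # bin widths
        return RHO_ICE * np.sum(h_mid * g_vals * dh) # kg/m^2

    # Example: five thickness categories between 0 and 5 m.
    edges = np.linspace(0.0, 5.0, 6)
    g = np.array([0.3, 0.25, 0.2, 0.15, 0.1])  # hypothetical distribution
    print(ice_mass_per_area(edges, g))         # about 1834 kg/m^2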
== Code Design ==
CICE version 6 is coded in Fortran 90. It is organized into a dynamical core (dycore) and a separate column physics package called Icepack, which is maintained as a CICE submodule on GitHub. The momentum equation and thickness advection described above are time-stepped on a quadrilateral Arakawa B-grid within the dynamical core, while Icepack solves the diagnostic and prognostic equations necessary for calculating radiation physics, hydrology, thermodynamics, and vertical biogeochemistry, including the terms $\tau_a$, $\tau_w$, $\boldsymbol{\sigma}$, $\theta$, and $\psi$ defined above. CICE can be run independently, as in the first figure on this page, but is frequently coupled with earth system models through an external flux coupler, such as the CESM Flux Coupler from NCAR, for which results are shown in the second figure for the CESM Large Ensemble. The column physics were separated into Icepack for the version 6 release to permit insertion into earth system models that use their own sea ice dynamical core, including the new DOE Energy Exascale Earth System Model (E3SM), which uses an unstructured grid in the sea ice component of the Model for Prediction Across Scales (MPAS), as demonstrated in the final figure.
== See also ==
Sea ice
Sea ice microbial communities
Sea ice emissivity modeling
Sea ice growth processes
Sea ice concentration
Sea ice thickness
Sea ice physics and ecosystem experiment
Arctic Ocean
Southern Ocean
Climate model
Weather forecasting
Northern Sea Route
Northwest Passage
Antarctica
== References ==
== External links ==
CICE Consortium GitHub Information Page
CICE Consortium Model for Sea-Ice Development
Icepack: Essential Physics for Sea Ice Models
Community-Driven Sea Ice Modeling with the CICE Consortium (Witness the Arctic)
NOAA press release
Oceans Deeply
Pacific Standard
phys.org: Arctic ice model upgrade to benefit polar research, industry and military
Sea ice: More than just frozen water (Santa Fe New Mexican)
Energy Exascale Earth System Model (E3SM)
Community Earth System Model (CESM) | Wikipedia/CICE_(sea_ice_model) |
Ocean general circulation models (OGCMs) are a particular kind of general circulation model used to describe physical and thermodynamical processes in oceans. The oceanic general circulation is defined as motion on horizontal space scales and time scales larger than mesoscale (of order 100 km and 6 months). They depict oceans using a three-dimensional grid that includes active thermodynamics and hence are most directly applicable to climate studies. They are the most advanced tools currently available for simulating the response of the global ocean system to increasing greenhouse gas concentrations. A hierarchy of OGCMs has been developed that includes varying degrees of spatial coverage, resolution, geographical realism, and process detail.
== History ==
The first generation of OGCMs assumed a “rigid lid” to eliminate high-speed external gravity waves. According to the CFL criterion, removing those fast waves permits a longer time step, which is far less computationally expensive. But the rigid lid also filtered out ocean tides and other waves that travel at the speed of tsunamis. Within this assumption, Kirk Bryan and co-worker Michael Cox developed a 2D model, a 3D box model, and then a model of the full circulation at GFDL, with variable density as well, for the world ocean with its complex coastline and bottom topography. The first application with specified global geometry was done in the early 1970s. Cox designed a 2° latitude-longitude grid with up to 12 vertical levels at each point.
With more and more research on ocean models, mesoscale phenomena, e.g. the fact that most ocean currents have cross-stream dimensions equal to the Rossby radius of deformation, started to attract more attention. However, to resolve those eddies and currents in numerical models, grid spacing of approximately 20 km is needed in middle latitudes. Thanks to faster computers, and by further filtering the equations in advance to remove internal gravity waves, those major currents and low-frequency eddies could then be resolved; one example is the three-layer quasi-geostrophic model designed by Holland. Meanwhile, some models retained internal gravity waves, for example the adiabatic layered model by O'Brien and his students, so that equatorial and coastal problems involving these waves could be treated; this led to an initial understanding of El Niño in terms of those waves.
In the late 1980s, simulations could finally be undertaken using the GFDL formulation with eddies marginally resolved over extensive domains and with observed winds and some atmospheric influence on density. Furthermore, these simulations with high enough resolution such as the Southern Ocean south of latitude 25°, the North Atlantic, and the World Ocean without the Arctic provided first side-by-side comparison with data.
Early in the 1990s, for those large-scale and eddies resolvable models, the computer requirement for the 2D ancillary problem associated with the rigid lid approximation was becoming excessive. Furthermore, in order to predict tidal effects or compare height data from satellites, methods were developed to predict the height and pressure of the ocean surface directly. For example, one method is to treat the free surface and the vertically averaged velocity using many small steps in time for each single step of the full 3D model. Another method developed at Los Alamos National Laboratory solves the same 2D equations using an implicit method for the free surface. Both methods are quite efficient.
== Importance ==
OGCMs have many important applications: dynamical coupling with the atmosphere, sea ice, and land run-off, which in reality jointly determine the oceanic boundary fluxes; transport of biogeochemical materials; interpretation of the paleoclimate record; climate prediction for both natural variability and anthropogenic changes; and data assimilation, fisheries, and other biospheric management. OGCMs play a critical role in Earth system models. They maintain the thermal balance as they transport energy from tropical to polar latitudes. To analyze the feedback between ocean and atmosphere, we need an ocean model, which can initiate and amplify climate change on many different time scales, for instance the interannual variability of El Niño and the potential modification of the major patterns of oceanic heat transport as a result of increasing greenhouse gases. The ocean is an undersampled natural fluid system, so OGCMs can help fill in data gaps, improve understanding of basic processes and their interconnectedness, and help interpret sparse observations. Even though simpler models can be used to estimate climate response, only an OGCM can be used in conjunction with an atmospheric general circulation model to estimate global climate change.
== Grid types ==
There are different grid types that can be used by OGCMs. There is often a separation between vertical and horizontal grids.
=== Horizontal grid types ===
Most models use one of the following horizontal grid types.
Finite Differences
Finite Element
Spectral Grid
==== Finite differences grid ====
Finite-difference grids are the most common grid types for OGCMs, often in the form of the Arakawa grids. On the A grid all quantities are calculated at a single point. This was only used in some of the earliest OGCMs, and it was quickly realized that the solutions were extremely poor. The B grid places the velocity components at the corners of the temperature grid boxes, while the C grid staggers the u and v velocity components onto different edges of each box. Both are still used in different models today.
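On the staggered C grid, for instance, the velocity components sit on cell faces, so the divergence entering the continuity equation is formed by simple differences. A minimal sketch (uniform grid spacing and the indexing convention are assumptions for illustration):

    import numpy as np

    def c_grid_divergence(u, v, dx, dy):
        """Divergence at cell centers from face-staggered velocities.

        u has shape (ny, nx+1): zonal velocity on east/west cell faces.
        v has shape (ny+1, nx): meridional velocity on north/south faces.
        """
        dudx = (u[:, 1:] - u[:, :-1]) / dx
        dvdy = (v[1:, :] - v[:-1, :]) / dy
        return dudx + dvdy  # shape (ny, nx), one value per cell center

    u = np.random.rand(4, 6)  # ny = 4, nx = 5
    v = np.random.rand(5, 5)
    print(c_grid_divergence(u, v, dx=1.0e5, dy=1.0e5).shape)  # (4, 5)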
It is also possible to use a so-called nested grid model. A nested grid model is an adaptation of the finite-difference grid in which some parts of the domain have a higher density of grid points.
==== Finite element grid ====
Sometimes models use a finite element grid. Here, the variables are solved on a triangular grid. The big advantage of finite element grids is that they allow flexible resolution throughout the domain of the model. This is especially useful when studying flow in a coastal environment, as the coast can be mapped more easily.
==== Spectral grid ====
Spectral grids are the least used grids for OGCMs, while being widely used in atmospheric general circulation models. They are harder to use for ocean modelling because of the more complicated boundary conditions in the ocean compared to the atmosphere.
=== Vertical grid types ===
The vertical grids used for ocean general circulation models are often different from their atmospheric counterparts. Atmospheric models often use pressure as a vertical coordinate because of its isentropic nature.
z-coordinates
sigma coordinates
isopycnal coordinates
==== Z coordinate systems ====
The z coordinate system, in which height is taken as the vertical coordinate, is the simplest type of system to implement. The layers are often of varying depth, with the layers near the top of the ocean being thinner than the deeper layers, because the features nearer to the surface occur on smaller scales. Z-coordinate systems have difficulties representing the bottom boundary layer and downslope flow, due to spurious diapycnal mixing.
==== Sigma coordinates ====
In a sigma coordinate system the bottom topography determines the thickness of the vertical layer at each horizontal grid point. Similarly to the Z coordinate system the layers are often more closely spaced near the surface and/or the bottom than they are in the interior. Sigma coordinates allow the boundary layer to be better represented but have difficulties with pressure gradient errors when sharp bottom topography features are not smoothed out.
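One common terrain-following convention (an assumed form for illustration; the exact definition varies between models) is
$$\sigma = \frac{z - \eta}{H + \eta},$$
where $z$ is height, $\eta$ is the free-surface elevation, and $H$ is the local water depth, so that $\sigma = 0$ at the sea surface and $\sigma = -1$ at the seabed. The thickness of a given $\sigma$ layer therefore scales with the total water column $H + \eta$ at each horizontal grid point.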
==== Isopycnal models ====
Isopycnal models use the potential density at a given pressure level as the vertical coordinate. The layers thus vary in thickness throughout the domain. This type of model is particularly useful when studying tracer transport, because tracers often move along lines of constant density. Isopycnal models differ subtly from layered models: the main difference is whether the model allows the isopycnals to vanish. For layered models the isopycnals are not allowed to vanish, which has computational speed benefits.
== Subgridscale parameterization ==
Molecular friction rarely upsets the dominant balances (geostrophic and hydrostatic) in the ocean. With kinematic viscosities of $\nu = 10^{-6}\,\mathrm{m^2\,s^{-1}}$, the Ekman number is several orders of magnitude smaller than unity; therefore, molecular frictional forces are certainly negligible for large-scale oceanic motions. A similar argument holds for the tracer equations, where the smallness of the molecular thermodiffusivity and salt diffusivity means the molecular diffusive time scales are much longer than the advective time scale. We can thus safely conclude that the direct effects of molecular processes are insignificant for the large scale. Yet molecular friction is essential somewhere. The point is that large-scale motions in the ocean interact with other scales through the nonlinearities in the primitive equations. This can be shown by the Reynolds approach, which leads to the closure problem: new variables arise at each level in the Reynolds averaging procedure. This leads to the need for parameterization schemes to account for those subgrid-scale effects.
Here is a schematic “family tree” of subgrid-scale (SGS) mixing schemes. Although there is a considerable degree of overlap and interrelatedness among the huge variety of schemes in use today, several branch points may be defined. Most importantly, the approaches for lateral and vertical subgrid-scale closure vary considerably. Filters and higher-order operators are used to remove small-scale noise, which is numerically necessary. Special dynamical parameterizations (topographic stress, eddy thickness diffusion and convection) are becoming available for certain processes.
In the vertical, the surface mixed layer (SML) has historically received special attention because of its important role in air-sea exchange. Many schemes are now available to choose from: Price-Weller-Pinkel, Pacanowski and Philander, bulk, Mellor-Yamada, and k-profile parameterization (KPP) schemes.
Adaptive (non-constant) mixing length schemes are widely used for parameterization of both lateral and vertical mixing. In the horizontal, parameterizations dependent on the rates of stress and strain (Smagorinsky), grid spacing, and Reynolds number (Re) have been advocated. In the vertical, parameterizations of vertical mixing as a function of the stability frequency (N²) and/or the Richardson number are historically prevalent. The rotated mixing tensors scheme considers the angle of the principal direction of mixing: in the main thermocline, mixing along isopycnals dominates diapycnal mixing, so the principal direction of mixing is neither strictly vertical nor purely horizontal, but a spatially variable mixture of the two.
== Spin-up of OGCMs ==
OGCMs require a long spin-up time to be able to realistically represent the studied basins. Spin-up time is the time a model needs to reach a certain equilibrium, often defined as the point at which the change over time of a range of variables falls below a set threshold for a certain number of simulation timesteps. For OGCMs of global scale it is often a challenge to reach this state, and it can take thousands of model years. The speed at which equilibrium is reached is determined by slow processes below the thermocline.
=== Decreasing the spin-up time ===
There have been many attempts to decrease the spin-up time of OGCMs, and several methods have been proposed to accelerate a model's convergence. Better initial conditions significantly decrease the time a model needs to spin up. However, this is not always possible, especially for the deep ocean.
Another approach is the distorted physics approach. This works on the basis that the ocean has processes on relatively short time scales above the thermocline. While processes below the thermocline are often diffusive and very slow. The acceleration of these processes is achieved by decreasing the local heat capacity, while not changing the transport and the mixing of heat. This makes the speed of reaching equilibrium for these models much quicker and nearly as efficient as atmospheric models with similar resolution. This method is very successful as there is (almost) no change to the final solution of the model.
It is also possible to reduce the spin-up time by exponential extrapolation. In this method, the temperature and salinity fields are repeatedly extrapolated with the assumption that they exponentially decay towards their equilibrium value. This method can in some cases reduce the spin-up time by a factor of two or three.
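If a field decays exponentially towards its equilibrium value, three equally spaced snapshots are enough to estimate the limit directly. A minimal sketch using Aitken's delta-squared formula, one standard way to realize such extrapolation (the published schemes differ in detail):

    import math

    def extrapolate_equilibrium(x1, x2, x3):
        """Estimate the limit of x_n = x_eq + A*r**n (|r| < 1) from three
        equally spaced snapshots, via Aitken's delta-squared formula."""
        denom = (x3 - x2) - (x2 - x1)
        if denom == 0.0:
            return x3  # sequence already converged (or not exponential)
        return x3 - (x3 - x2) ** 2 / denom

    # Toy deep-ocean temperature relaxing very slowly towards 2.0 degrees C:
    x_eq, amp, r = 2.0, 1.5, math.exp(-0.01)  # slow decay per snapshot interval
    snaps = [x_eq + amp * r ** n for n in (0, 1, 2)]
    print(extrapolate_equilibrium(*snaps))  # recovers ~2.0 long before decay ends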
A third proposed method is the Jacobian-free Newton-Krylov method. This method uses the matrix-vector products obtained from an explicit OGCM's Jacobian. The method can be applied to many existing explicit OGCMs and can significantly speed up the spin-up time.
== Comparison with Atmospheric General Circulation Model ==
OGCMs and AGCMs have much in common, such as the equations of motion and the numerical techniques. However, OGCMs have some unique features. For example, the atmosphere is forced thermally throughout its volume, while the ocean is forced both thermally and mechanically primarily at its surface; in addition, the geometry of ocean basins is very complex. The boundary conditions are totally different. For ocean models, we need to consider the narrow but important boundary layers on nearly all bounding surfaces as well as within the oceanic interior. These boundary conditions on ocean flows are difficult to define and to parameterize, which results in a high computational demand.
Ocean modeling is also strongly constrained by the existence in much of the world's oceans of mesoscale eddies with time and space scales, respectively, of weeks to months and tens to hundreds of kilometers. Dynamically, these nearly geostrophic turbulent eddies are the oceanographic counterparts of the atmospheric synoptic scale. Nevertheless, there are important differences. First, ocean eddies are not perturbations on an energetic mean flow, and they may play an important role in the poleward transport of heat. Second, they are relatively small in horizontal extent, so that ocean climate models, which must have the same overall exterior dimensions as AGCMs, may require as much as 20 times the resolution of an AGCM if the eddies are to be explicitly resolved.
There are also more constraints on OGCMs due to the lack of data for the ocean. Bottom topography is especially lacking: large swaths of the ocean are not mapped in high detail, in stark contrast to land topography, which can be mapped in detail by satellite altimeters. This creates even bigger uncertainties in the boundary conditions. Secondly, the atmosphere has a changing geometry only at its lower levels for most of its extent, while the ocean has sharp boundaries, with large swaths of land forming complex boundary conditions.
== OGCMs in paleoceanography ==
The relation between paleoclimate and the effect on the ocean circulation has been widely studied. The first attempts at doing this often used the present-day forcings extrapolated to the past climate from proxies. The closure of the different passages in the ocean can then be simulated by simply blocking them with a thin line in the bathymetry. For instance closing the present-day Drake Passage.
These days, more complicated paleo bathymetries are used along with better proxies. To test the quality of the models, the Paleoclimate Modelling Intercomparison Project has been established.
== Classification ==
We can classify ocean models according to different criteria. For example, according to vertical coordinates we have geopotential, isopycnal and topography-following models. According to horizontal discretizations we have unstaggered or staggered grids. According to methods of approximation we have finite difference and finite element models. There are three basic types of OGCMs:
Idealized geometry models: Models with idealized basin geometry have been used extensively in ocean modeling and have played a major role in the development of new modeling methodologies. They use a simplified geometry, offering a basin itself, while the distribution of winds and buoyancy force are generally chosen as simple functions of latitude.
Basin-scale models: To compare OGCM results with observations we need realistic basin information instead of idealized data. However, if we only pay attention to local observation data, we do not need to run a whole global simulation, and by doing so we can save a lot of computational resources.
Global models: This kind of model is the most computationally costly one. More experiments are needed as a preliminary step in constructing coupled Earth system models.
== See also ==
List of ocean circulation models
General circulation model (GCM)
Climate model
== References == | Wikipedia/Ocean_general_circulation_model |
The Community Earth System Model (CESM) is a fully coupled numerical simulation of the Earth system consisting of atmospheric, ocean, ice, land surface, carbon cycle, and other components. CESM includes a climate model providing state-of-art simulations of the Earth's past, present, and future. It is the successor of the Community Climate System Model (CCSM), specifically version 4 (CCSMv4), which provided the initial atmospheric component for CESM. Strong ensemble forecasting capabilities, CESM-LE (CESM-Large Ensemble), were developed at the onset to control for error and biases across different model runs (realizations). Simulations from the Earth's surface through the thermosphere are generated utilizing the Whole Atmosphere Community Climate Model (WACCM). CESM1 was released in 2010 with primary development by the Climate and Global Dynamics Division (CGD) of the National Center for Atmospheric Research (NCAR), and significant funding by the National Science Foundation (NSF) and the Department of Energy (DoE).
== See also ==
Model for Prediction Across Scales (MPAS)
== References ==
== External links ==
CESM homepage
CESM/CCSM Special Collection of the Journal of Climate Archived 2019-12-20 at the Wayback Machine | Wikipedia/Community_Earth_System_Model |
The NAME atmospheric pollution dispersion model was first developed by the UK's Met Office in 1986 after the nuclear accident at Chernobyl, which demonstrated the need for a method that could predict the spread and deposition of radioactive gases or material released into the atmosphere.
The acronym NAME originally stood for Nuclear Accident ModEl. The Met Office has revised and upgraded the model over the years, and it is now used as a general-purpose dispersion model. The current version is known as the NAME III (Numerical Atmospheric-dispersion Modelling Environment) model. NAME III is currently operational, and it will probably completely replace the original NAME model sometime in 2006.
== Features and capabilities of NAME ==
NAME (in its current NAME III version) is a Lagrangian air pollution dispersion model for short range to global range scales. It employs 3-dimensional meteorological data provided by the Met Office's Unified National Weather Prediction Model. Random walk techniques using empirical turbulence profiles are utilized to represent turbulent mixing. In essence, NAME follows the 3-dimensional trajectories of parcels of the pollution plume and computes pollutant concentrations by Monte Carlo methods — that is, by direct simulation rather than solving equations.
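The essence of such a Lagrangian scheme is to advect many notional particles with the resolved wind and to add a random turbulent displacement at each time step; receptor concentrations then follow from particle counts. A minimal one-dimensional sketch (constant wind and eddy diffusivity are assumed here, far simpler than NAME's empirical turbulence profiles):

    import random

    def disperse(n_particles=10000, n_steps=100, dt=60.0, u=5.0, k_eddy=50.0):
        """Particle positions (m) after advection plus a Gaussian random walk.

        u is the mean wind (m/s); k_eddy an eddy diffusivity (m^2/s). A step
        standard deviation of sqrt(2*K*dt) makes the random walk equivalent
        to Fickian diffusion with diffusivity K.
        """
        sigma = (2.0 * k_eddy * dt) ** 0.5
        positions = [0.0] * n_particles
        for _ in range(n_steps):
            positions = [x + u * dt + random.gauss(0.0, sigma) for x in positions]
        return positions

    # Concentration at a receptor is estimated by counting particles per box,
    # i.e. by direct Monte Carlo simulation rather than by solving equations.
    xs = disperse()
    print(sum(1 for x in xs if 29000.0 <= x < 31000.0) / len(xs))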
NAME uses a puff technique when modelling dispersion over a short range which shortens the time needed to compute the pollutant concentrations at the receptors.
The model has the capability to calculate: the rise of buoyant plumes; deposition of pollution plume components due to rainfall (i.e., wet deposition); dry deposition; plume chemistry focusing on sulphate and nitrate chemistry; plume depletion via the decay of radioactive materials; the downwash effects of buildings.
The model can also be run 'backwards' to generate maps that locate possible plume originating sources.
== The Met Office's commitments to emergency response service ==
The Met Office has international commitments to provide emergency response dispersion modelling services for releases of hazardous gases and materials into the atmosphere. Such events include the release of radioactive materials and emissions from erupting volcanoes. Those commitments are met by an operational group known as EMARC who are supported by a Met Office team of dispersion modelling staff. That team is also responsible for the scientific development of NAME III which, combined with the Met Office numerical weather prediction model, is used to provide the dispersion modelling services needed to implement the listed commitments:
The WMO (World Meteorological Organization) has designated the Met Office to operate one of the worldwide RSMCs (Regional Specialized Meteorological Centres).
The Met Office has also been designated a VAAC (Volcanic Ash Advisory Centre) which is part of the IAVW (International Airways Volcano Watch) set up by the ICAO (International Civil Aviation Organization).
Over the years, NAME has been applied to radioactive releases, the Kuwaiti oil fires, major industrial fires and chemical spills, and two volcanic eruptions in Iceland.
== See also ==
Bibliography of atmospheric dispersion modeling
Atmospheric dispersion modeling
List of atmospheric dispersion models
UK Dispersion Modelling Bureau
== References ==
== Further reading ==
Readers who are unfamiliar with air pollution dispersion modelling and would like to learn more about the subject may wish to consult either of the following books:
Turner, D.B. (1994). Workbook of atmospheric dispersion estimates: an introduction to dispersion modeling (2nd ed.). CRC Press. ISBN 978-1-56670-023-8. www.crcpress.com
Beychok, M.R. (2005). Fundamentals Of Stack Gas Dispersion (4th ed.). self-published. ISBN 978-0-9644588-0-2. www.air-dispersion.com
"Met Office Dispersion Model". Met Office. Retrieved 15 December 2018. | Wikipedia/NAME_(dispersion_model) |
The Reading Intermediate General Circulation Model (IGCM) is a simplified or "intermediate" global climate model developed by members of the Department of Meteorology at the University of Reading and by members of the Stratospheric Dynamics and Chemistry Group of the Department of Atmospheric and Oceanic Sciences at McGill University.
The IGCM is based on the primitive-equations baroclinic model of Hoskins and Simmons, which has been converted to run on workstations. Several variations have been developed by adjusting representations of the physics.
IGCM1: Portable version of the original spectral, dry baroclinic model formulated on sigma levels, with options for Newtonian relaxation and Rayleigh friction, and no surface scheme.
IGCM2: Includes simplified moist parameterisations, a cheap "radiation scheme" (i.e. constant tropospheric cooling), a bulk formulation scheme for the boundary layer, fixed surface temperatures and humidity, uniform vertical diffusion, and can advect tracers.
IGCM3x: Intermediate climate model that includes more sophisticated moisture/clouds parameterisations, a radiation scheme with various gas absorbers and a more realistic surface with an orography and land and sea surface schemes.
The adiabatic version, IGCM1, is freely available. Access to IGCM2 and IGCM3 is restricted to members of the Department of Meteorology at the University of Reading and collaborating researchers.
== See also ==
Global climate model
== External links ==
IGCM web page | Wikipedia/Intermediate_General_Circulation_Model |
The Nested Grid Model (usually known as NGM for short) was a numerical weather prediction model run by the National Centers for Environmental Prediction, a division of the National Weather Service, in the United States. The NGM was, as its name suggested, built on two levels of grids: a hemispheric-scale grid and a synoptic-scale grid, the latter of which had a resolution of approximately 90 kilometers. Its most notable feature was that it assumed hydrostatic balance.
The NGM debuted in 1987, directly replacing the limited-area fine mesh (LFM) model, which was halted immediately upon the NGM's debut. The NGM was also used to create model output statistics. Development of the model stopped in 1993. By 2000, the model was seen as obsolete, particularly for mesoscale features that were not hydrostatic, and was scheduled to be superseded by the Global Forecast System (GFS) in 2001. The NGM ceased widespread use in the early 2000s as the GFS and the improved Eta model (later the North American Mesoscale Model) took over, and its short-range LAMP products were phased out in 2006; NGM MOS products nevertheless remained in significant general use (alongside the Eta/NAM and GFS) until they were discontinued on March 3, 2009.
== References == | Wikipedia/Nested_Grid_Model |
The Operational Street Pollution Model (OSPM) is an atmospheric dispersion model for simulating the dispersion of air pollutants in so-called street canyons. It was developed by the National Environmental Research Institute of Denmark, Department of Atmospheric Environment, Aarhus University. As a result of reorganisation at Aarhus University, the model has been maintained by the Department of Environmental Science at Aarhus University since 2011. For about 20 years, OSPM has been used in many countries for studying traffic pollution, performing analyses of field campaign measurements, studying the efficiency of pollution abatement strategies, carrying out exposure assessments and as a reference in comparisons with other models. OSPM is generally considered state of the art in practical street pollution modelling.
== Description ==
In OSPM, concentrations of traffic-emitted pollutants are calculated using a combination of a plume model for the direct contribution and a box model for the recirculating part of the pollutants in the street.
The NO2 concentrations are calculated taking into account NO-NO2-O3 chemistry and the residence time of pollutants in the street. The model is designed to work with input and output in the form of one-hour averages.
The main principles of the model can be seen in the case of a wind direction nearly perpendicular to the street canyon: a receptor point in the leeward position is affected by the direct plume and shows considerably higher concentrations than a receptor in the windward position, which is exposed to the less concentrated recirculating air.
The turbulence produced by the moving traffic (TPT) acts in addition to the turbulence created by the roof-level wind. This leads to faster dispersion of the direct plume but also to improved air exchange at roof level between the street canyon and the background air.
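As a rough illustration of how a direct plume term and a recirculating box term can be combined, consider the Python sketch below. It is a conceptual toy, not OSPM's actual parameterisation; the dilution formulas, the function name, and all constants are assumptions made for the example.

# Toy combination of a direct (plume-like) and a recirculating (box-like)
# contribution to street-level concentration; illustrative assumptions only.
def street_concentration(q, u_street, u_roof, h_canyon, background=0.0):
    """q: emission per unit street length; wind speeds in m/s; height in m."""
    u_street = max(u_street, 0.1)   # avoid division by zero in calm conditions
    u_roof = max(u_roof, 0.1)
    direct = q / (u_street * h_canyon)         # plume-type dilution by street wind
    recirculating = q / (u_roof * h_canyon)    # box ventilated by roof-level wind
    return background + direct + recirculating

# Example call with assumed traffic emission and wind values:
print(street_concentration(q=20.0, u_street=1.0, u_roof=4.0, h_canyon=20.0))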
== See also ==
List of atmospheric dispersion models
== Further reading ==
Readers who are unfamiliar with air pollution dispersion modelling and would like to learn more about the subject may wish to consult either of the following books:
Turner, D.B. (1994). Workbook of atmospheric dispersion estimates: an introduction to dispersion modeling (2nd ed.). CRC Press. ISBN 1-56670-023-X. www.crcpress.com
Beychok, M.R. (2005). Fundamentals Of Stack Gas Dispersion (4th ed.). author-published. ISBN 0-9644588-0-2. www.air-dispersion.com
== References ==
== External links ==
OSPM home page | Wikipedia/Operational_Street_Pollution_Model |
A chemical transport model (CTM) is a type of computer numerical model which typically simulates atmospheric chemistry and may be used for air pollution forecasting.
== Chemical transport models and general circulation models ==
While related general circulation models (GCMs) focus on simulating overall atmospheric dynamics (e.g. fluid and heat flows), a CTM instead focuses on the stocks and flows of one or more chemical species. Likewise, while a CTM must solve only the continuity equation for its species of interest, a GCM must solve all the primitive equations for the atmosphere; but a CTM is expected to accurately represent the entire cycle for the species of interest, including fluxes (e.g. advection), chemical production/loss, and deposition. That being said, the tendency, especially as the cost of computing declines over time, is for GCMs to incorporate CTMs for species of special interest to climate dynamics, especially shorter-lived species such as nitrogen oxides and volatile organic compounds; this allows feedbacks from the CTM to the GCM's radiation calculations, and also allows the meteorological fields forcing the CTM to be updated at higher time resolution than may be practical in studies with offline CTMs.
== Types of chemical transport models ==
CTMs may be classified according to their methodology and their species of interest, as well as more generic characteristics (e.g. dimensionality, degree of resolution).
=== Methodologies ===
Jacob (1999) classifies CTMs as Eulerian/"box" or Lagrangian/"puff" models, depending on whether the CTM in question focuses on
(Eulerian) "boxes" through which fluxes, and in which chemical production/loss and deposition occur over time
(Lagrangian) the production and motion of parcels of air ("puffs") over time
An Eulerian CTM solves its continuity equations using a global/fixed frame of reference, while a Lagrangian CTM uses a local/moving frame of reference.
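As a minimal illustration of the Eulerian approach, the Python sketch below solves a 1-D continuity equation on a row of fixed boxes, with upwind advection and an operator-split first-order chemical loss. The grid size, wind speed, and loss rate are assumed values chosen for the example.

import numpy as np

nx, dx = 100, 1000.0     # 100 boxes of 1 km each (assumed)
u, dt = 5.0, 100.0       # wind [m/s] and time step [s]; CFL = u*dt/dx = 0.5
k_loss = 1e-4            # first-order chemical loss rate [1/s] (assumed)

c = np.zeros(nx)         # species mixing ratio in each box
for _ in range(500):
    c[0] = 1.0                          # constant source at the upwind boundary
    upwind = np.roll(c, 1)              # concentration in the box to the left
    c = c - u * dt / dx * (c - upwind)  # donor-cell (upwind) advection step
    c *= np.exp(-k_loss * dt)           # operator-split chemical loss step
c[0] = 1.0
print(np.round(c[:10], 3))              # decaying plume downwind of the source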
==== See also ====
discussion of gridding in CLaMS
Lagrangian and Eulerian coordinates
discussion of the continuity equation in Jacob's Introduction to Atmospheric Chemistry online
==== Examples of Eulerian CTMs ====
CCATT-BRAMS
WRF-Chem
CMAQ, CMAQ Website
CAMx
GEOS-Chem
LOTOS-EUROS
MATCH
MOZART (Model for OZone And Related chemical Tracers): developed jointly by the (US) National Center for Atmospheric Research (NCAR), the Geophysical Fluid Dynamics Laboratory (GFDL), and the Max Planck Institute for Meteorology (MPI-Met) to simulate changes in ozone concentrations in the Earth's atmosphere. MOZART was designed to simulate tropospheric chemical and transport processes, but has been extended (MOZART3) into the stratosphere and mesosphere. It can be driven by standard meteorological fields from, for example, the National Centers for Environmental Prediction (NCEP), the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Global Modeling and Assimilation Office (GMAO), or by fields generated from general circulation models. MOZART4 improves MOZART2's chemical mechanisms, photolysis scheme, dry deposition mechanism, biogenic emissions and handling of tropospheric aerosols.
TOMCAT/SLIMCAT
CHIMERE
POLYPHEMUS
TCAM (Transport Chemical Aerosol Model): a mathematical modelling method (computer simulation) designed to model certain aspects of the Earth's atmosphere. TCAM is one of several chemical transport models, all of which are concerned with the movement of chemicals in the atmosphere, and are thus used in the study of air pollution.
TCAM is a multiphase three-dimensional Eulerian grid model (as opposed to Lagrangian or other modelling methods). It is designed for modelling dispersion of pollutants (in particular photochemical and aerosol) at mesoscales (medium scale, generally concerned with systems a few hundred kilometers in size).
TCAM was developed at the University of Brescia in Italy.
==== Examples of Lagrangian CTMs ====
CLaMS
FLEXPART
==== Examples of Semi-Lagrangian CTMs ====
MOCAGE
GEM-MACH
=== Examples of ozone CTMs ===
CLaMS
MOZART
== Notes ==
== External links ==
MOZART:
UCAR's MOZART page
MPI-Met's MOZART page
Larry Horowitz's MOZART page
== See also ==
Atmospheric dispersion modeling
List of atmospheric dispersion models
University Corporation for Atmospheric Research
National Center for Atmospheric Research
Ozone depletion
Meteorology | Wikipedia/Chemistry_transport_model |
The MM5 (short for Fifth-Generation Penn State/NCAR Mesoscale Model) is a regional mesoscale model used for creating weather forecasts and climate projections. It is a community model maintained by Penn State University and the National Center for Atmospheric Research. The MM5 is a limited-area, terrain-following sigma-coordinate model that is used to replicate or forecast mesoscale and regional-scale atmospheric circulation. It has been updated many times since the 1970s to fix bugs, adapt to new technologies, and work on different types of computers and software. It is used both for research and for weather prediction. In research, it is compared with other models to see which approaches work and which do not. It is also used for air quality models.
Active development on the model ended with version 3.7.2 in 2005, and it has been largely superseded by the Weather Research and Forecasting (WRF) model.
== Features ==
MM5 is globally relocatable, which helps support different latitudes, terrain types, elevations, soil types, etc. The model can be either hydrostatic or non-hydrostatic, depending on the desired outcome. Because the model is regional, it requires initial conditions and lateral boundary conditions: each of the four boundaries must have initialized wind, temperature, pressure and moisture fields, so gridded data are needed to run the program. The model takes and analyzes its data on pressure surfaces, but these surfaces must first be interpolated to a specific vertical coordinate before they can be analyzed. This vertical coordinate, sigma (σ), is computed and then used throughout the program. σ is defined as:
σ = (p − pt)/p*, where p* = ps − pt,
and p is pressure, ps is surface pressure, and pt is the pressure at the top of the model. Where σ is close to the ground, the coordinate follows the actual terrain, while higher up the σ surfaces approach isobaric surfaces. σ ranges from 0 (model top) to 1 (surface). MM5 has adaptable and multiple nesting capabilities, which allow multiple domains to run at once using 2-way nesting. MM5 accepts input from actual data, which is helpful because routine observations can be used; the data can then be compared and used in context with other models. MM5 also features terrain-following vertical coordinates and four-dimensional data assimilation (FDDA). FDDA allows observations collected over an extended period to be assimilated continuously into the model run, and it is also used for dynamical initialization and four-dimensional data sets. Most importantly, MM5 is well documented and has many avenues for user support.
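A short Python sketch of this transformation and its inverse follows; the reference pressures are assumed values chosen for illustration.

import numpy as np

p_top = 100.0     # model-top pressure [hPa] (assumed)
p_surf = 1000.0   # surface pressure [hPa] (assumed)
p_star = p_surf - p_top              # p* = ps - pt

def pressure_to_sigma(p):
    return (p - p_top) / p_star      # sigma = (p - pt) / p*

def sigma_to_pressure(sigma):
    return p_top + sigma * p_star    # inverse mapping

levels = np.array([1000.0, 850.0, 500.0, 250.0, 100.0])
print(pressure_to_sigma(levels))     # 1.0 at the surface, 0.0 at the model top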
== Functions ==
There are many pieces of the MM5 modeling system, each with different functions. These pieces include: TERRAIN, REGRID, LITTLE_R, NESTDOWN, INTERPF, INTERPB, and GRAPH/RIP. In the basic sense, the model begins by taking the information and making a model of the land; this is done by the TERRAIN piece. Then, the model estimates the different pressures that exist in the atmosphere throughout this plot of land, which is done through REGRID. The model then takes these estimates and, with the help of surface observations, creates an objective analysis using LITTLE_R. RAWINS is an older version of LITTLE_R, so LITTLE_R is more commonly used. INTERPF then takes the data from RAWINS/LITTLE_R and REGRID and interpolates it to the sigma coordinate explained above. MM5 itself performs the time integration. The NESTDOWN function has the ability to change the vertical sigma levels. INTERPB generates the first guess for RAWINS and gathers the files used to make first guesses for REGRID. Finally, GRAPH/RIP generates the plots that users can view from all of the data produced by the other programs. RIP stands for Read/Interpolate/Plot, and it uses NCAR Graphics to help visualize the output from the MM5.
== Memory and code basics ==
MM5 is written in FORTRAN. These FORTRAN programs must be compiled on a local computer, and some need to be recompiled each time a model configuration is changed. The program uses pointers to assign variables to values; these pointers go into parts of memory to assign specific values to the desired variables. MM5 can also perform multiple tasks at once: two different tasks can be executed on different processors simultaneously, and MM5 exploits this as much as possible. This multi-tasking also uses nesting; MM5 allows up to nine domains (processes) to run at the same time, interacting throughout the whole process. The model utilizes two-way nesting, in which the input to a nest's coarse mesh (a high density of cells in one area) comes from one of the four boundaries, but the feedback to the coarser mesh happens through the nest interior. Each domain gathers information from its parent domain each timestep, then runs three timesteps, and then delivers the information back to its parent domain.
There are three different ways to do two-way nesting: Nest interpolation, nest analysis input, and nest terrain input. Interpolation occurs when the terrain is smooth, such as water. There is no input that is required for this type of two-way nesting. Nest input requires a file called MMINPUT, and that file contains the meteorological and the terrain information so that initially, a better analysis can be done. Finally terrain input requires a TERRAIN file. Then the meteorological fields are interpolated.
When multi-tasking occurs, the variables must be marked as either shared or private. Shared implies that the processors all have access to the same part of memory, while private implies that each processor must have its own private copy of an array with its personal memory location. The multi-tasking occurs specifically in the subroutines Solve1, Solve3, and Sound.
== Requirements ==
MM5 can be run in many different ways, depending on the computer. The model can be run on a single-processor computer, shared-memory architecture or distributed-memory architecture. It can be run on many different platforms as well, such as IBM, SGI Origin 200/2000, CRAYs (J90, C90, T3E), DEC Alphas, Alphas running Linux, Sun, and more. The computer it is run on must have Fortran 90 and 77 compilers and a C compiler. Additionally, it can have NCAR Graphics and MPI tools for running the MPI MM5 model, though this is not necessary. The MM5 model takes up at least half a gigabyte of memory and a few gigabytes of disk space.
== Source code ==
The MM5 code itself has over 220 subroutines, with more than 55,000 lines of code. It uses standard Fortran 77 with "Cray" pointers. A list of the subroutines of all of the functions within the MM5, with a short description and the routines called within each, can be found in the MM5 documentation.
== Use ==
The MM5 had been adapted for use in many different types of atmospheric simulations:
Real-time regional weather forecasts
Tropical cyclone prediction
Climate change simulation and projection
== Developments and improvements ==
TERRAIN's latest update uses USGS global 30-second terrain elevation data, which covers the entire world, whereas the earlier data covered only the continental United States. This new update also allows for a better-quality terrain image that can be used in MM5. REGRID's improvement makes it simpler for users to input data and also makes it more portable. LITTLE_R was developed in 2001 to replace RAWINS. LITTLE_R's improvement again makes it easier for users to input data.
== See also ==
Weather Research and Forecasting Model
== References ==
== External links ==
MM5 homepage | Wikipedia/MM5_(weather_model) |
The Unified Model is a numerical weather prediction and climate modeling software suite originally developed by the United Kingdom Met Office from 1990 and now both used and further developed by many weather-forecasting agencies around the world. The Unified Model gets its name because a single model is used across a range of both timescales (nowcasting to centennial) and spatial scales (convective scale to climate system earth modelling). The models are grid-point based, rather than wave based, and are run on a variety of supercomputers around the world. The Unified Model atmosphere can be coupled to a number of ocean models. At the Met Office, it is used for the main suite of weather prediction models, for deployable and on-demand weather models, and for seasonal and climate modelling. Similar Unified Model suites with global and regional domains are used by many other national or military weather agencies around the world for operational forecasting.
Data for numerical weather prediction is provided by observations from satellites, from the ground (both human and from automatic weather stations), from buoys at sea, radar, radiosonde weather balloons, wind profilers, commercial aircraft and a background field from previous model runs.
The computer model is only adjusted towards the observations using assimilation, rather than forcing the model to accept an observed value that might make the system unstable (and could be an inaccurate observation).
The Unified Model software suite is written in Fortran (originally 77 but predominantly 90 as of 2003).
Because most developments of interest are near to the ground the vertical layers are closer together near the surface. A major update was deployed in August 2002, called "New Dynamics".
== Principal UM suites at the Met Office ==
The Met Office runs a range of Numerical Weather Prediction suites using the UM.
All of the models use varying resolutions of topography with greater accuracy at higher resolutions. The limiting factor with all models is that for a weather event to be recorded by the model it must be at least three grid points in size. Thus, for a model at 40 km, a weather system must be at least 120 km to be modelled. This means smaller phenomena such as small depressions, smaller hurricanes and large thunderstorms are too small for the model to handle. As the resolution increases, smaller events can be caught; the 1.5 km model for example, is reputedly capable of modelling individual showers.
=== Global Ensemble Model (MOGREPS-G) ===
MOGREPS-G is an ensemble model that runs multiple simulations to reduce the overall average error caused by the inherent uncertainty in the observation and modelling process. The model consists of 18 ensemble members with a 10 km horizontal resolution and 70 vertical levels, forecasting out to 168 hours (7 days) with a model run every 6 hours. The latest two runs can be combined to give a 36-member ensemble.
=== UK Ensemble Model (MOGREPS-UK) ===
The UK ensemble model consists of 3 ensemble members with a 1.5-4 km horizontal resolution and 70 vertical levels, forecasting out to 120 hours (5 days) with a model run every hour. The latest 6 runs can be combined to give an 18-member ensemble.
=== Global Deterministic Model ===
Approximately 10 km resolution with 70 vertical levels across the entire global domain. Forecasts alternate between 60 hour (2.5 day) and 144 hour (6 day) extents. The Global model provides boundary information for the smaller-domain models. The model is kept close to the real atmosphere using hybrid 4D-Var data assimilation of observations.
=== UK Deterministic Model (UKV) ===
1.5-4 km horizontal resolution and 70 vertical levels. The resolution is 1.5 km over the UK and 4 km over surrounding areas. Forecast lengths vary, with hourly forecasts out to 12 hours, three-hourly forecasts out to 54 hours (2.25 days), and twice-daily forecasts out to 120 hours (5 days). The model uses boundary conditions from the global deterministic model, sea conditions from the AMM15 shelf seas model, and is kept close to observations using hourly 4D-Var data assimilation.
=== North Atlantic and European model (retired) ===
The Met Office's North Atlantic and European model (NAE) model had 70 levels with a 12 km resolution. It was run out to 48 hours from start. Because the UK is at a northern latitude the model transposes the model area to an equatorial location so that the grid points give an area that is more square. This reduces the load on the model, allowing it to run more quickly. The model was kept close to the real atmosphere using 4D-Var data assimilation of observations.
=== Euro 4km model (retired) ===
70 Vertical levels, 4.4 km horizontal resolution. Ran out to 120 hours. Now superseded by the UKV in many applications and by the Global Model in others.
=== DRMs/CAMs ===
Defence Regional Models and Crisis Area Models are models using the same science configuration as the UK model which can be deployed rapidly to support military operations and disaster relief efforts, for example to support the MMU or responses to environmental catastrophes.
=== Mountain Wave Models ===
This high resolution model provides information on mountain waves for a variety of locations around the UK and other areas of interest to the Met Office.
=== Met Office's Global Wave Model ===
This models sea waves around the world.
== UM suites outside the Met Office ==
Unified Model suites which are similar to those from the Met Office are run by the Australian Bureau of Meteorology, the Australian Commonwealth Scientific and Industrial Research Organisation, the South African Weather Service, the Norwegian Meteorological Institute, the New Zealand National Institute of Water and Atmospheric Research, the (South) Korea Meteorological Administration and the National Centre for Medium Range Weather Forecasting, a subordinate office of the Indian Ministry of Earth Sciences.
=== Australian Bureau of Meteorology ===
The Australian Bureau of Meteorology has an operational 12.5 km resolution deterministic (ACCESS-G) and 33 km resolution ensemble (ACCESS-GE) global forecasting system utilizing the Unified Model. This global system provides boundary conditions for a number of higher-resolution regional systems also using the Unified Model. These include a 1.5 km resolution deterministic (ACCESS-C) and 2.2 km resolution ensemble (ACCESS-CE) model over Australia and a 4 km resolution relocatable model (ACCESS-TC).
=== (South) Korea Meteorological Administration ===
The (South) Korea Meteorological Administration has an operational 10 km resolution global forecasting system utilizing the Unified Model. This global system provides boundary conditions for a 1.5 km resolution local Unified Model NWP system covering the Korean Peninsula region.
== UKCA ==
United Kingdom Chemistry & Aerosols (UKCA) is a sub-model of the UM that deals with trace gas and aerosol chemistry within the model. This includes calculating the concentrations of climatically relevant gases such as methane and ozone, as well as the composition and evolution of aerosols. As with most of the UM, the UKCA was written in a collaboration between the UK Met Office and UK Academia.
== JULES ==
Joint UK Land Environment System is a land surface model that has been developed in a collaboration between the Met Office and other research institutes. JULES models the exchange of heat and moisture between the Unified Model atmosphere and the land surface and vegetation. JULES can also be used offline to estimate the impacts of different climate models on the land surface and hydrology.
== References ==
== External links ==
Unified Model Collaboration webpage at the Met Office.
Unified Model Papers
Introduction to the UM by NCAS | Wikipedia/Unified_Model |
A reference atmospheric model describes how the ideal gas properties (namely: pressure, temperature, density, and molecular weight) of an atmosphere change, primarily as a function of altitude, and sometimes also as a function of latitude, day of year, etc.
A static atmospheric model has a more limited domain, excluding time.
A standard atmosphere is defined by the World Meteorological Organization as "a hypothetical vertical distribution of atmospheric temperature, pressure and density which, by international agreement, is roughly representative of year-round, midlatitude conditions. Typical usages are as a basis for pressure altimeter calibrations, aircraft performance calculations, aircraft and rocket design, ballistic tables, and meteorological diagrams."
For example, the U.S. Standard Atmosphere derives the values for air temperature, pressure, and mass density, as a function of altitude above sea level.
Other static atmospheric models may have other outputs, or depend on inputs besides altitude.
== Basic assumptions ==
The gas which comprises an atmosphere is usually assumed to be an ideal gas, which is to say:
{\displaystyle \rho ={\frac {MP}{RT}}}
Where ρ is mass density, M is average molecular weight, P is pressure, T is temperature, and R is the ideal gas constant.
The gas is held in place by so-called "hydrostatic" forces. That is to say, for a particular layer of gas at some altitude: the downward (towards the planet) force of its weight, the downward force exerted by pressure in the layer above it, and the upward force exerted by pressure in the layer below, all sum to zero. Mathematically this is:
{\displaystyle PA-(P+{\text{d}}P)A-(\rho A\,{\text{d}}h)g_{0}=0}
{\displaystyle {\text{d}}P=-g_{0}\rho \,{\text{d}}h}
Finally, these variables describing the system do not change with time; i.e. it is a static system.
The gravitational acceleration g0 is used here as a constant, with the same value as standard gravity (the average acceleration due to gravity at the surface of the Earth or other large body). For simplicity it is taken not to vary with latitude, altitude or location; the variation due to all these factors is about 1% up to 50 km. More complex models account for these variations.
== Some examples ==
Depending on the model, some gas properties may be treated as constant with respect to altitude.
=== Ocean example ===
If the density of a gas is constant, then it is not really behaving like a gas. Instead it is behaving like an incompressible fluid, or liquid, and the situation looks more like an ocean. Assuming density is constant, a graph of pressure versus altitude has a constant slope, since the weight of the ocean overhead is directly proportional to its depth.
=== Isothermal-barotropic approximation and scale height ===
This atmospheric model assumes both molecular weight and temperature are constant over a wide range of altitude. Such a model may be called isothermal (constant temperature). Inserting constant molecular weight and constant temperature into the equation for the ideal gas law produces the result that density and pressure, the two remaining variables, depend only on each other. For this reason, this model may also be called barotropic (density depends only on pressure).
For the isothermal-barotropic model, density and pressure turn out to be exponential functions of altitude. The increase in altitude necessary for P or ρ to drop to 1/e of its initial value is called the scale height:
{\displaystyle H={\frac {RT}{Mg_{0}}}}
where R is the ideal gas constant, T is temperature, M is average molecular weight, and g0 is the gravitational acceleration at the planet's surface. Using the values T = 273 K and M = 0.029 kg/mol as characteristic of the Earth's atmosphere, H = RT/(Mg0) = (8.315 × 273)/(0.029 × 9.8) ≈ 7990 m, or about 8 km, which coincidentally is the approximate height of Mt. Everest.
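A short Python sketch reproducing this calculation and the resulting exponential pressure profile; the sea-level pressure used here is an assumed reference value.

import math

R = 8.315      # ideal gas constant [J/(mol*K)]
T = 273.0      # temperature [K]
M = 0.029      # molar mass of air [kg/mol]
g0 = 9.8       # surface gravity [m/s^2]

H = R * T / (M * g0)                  # scale height, about 7990 m
P0 = 101_325.0                        # sea-level pressure [Pa] (assumed)
for z in (0.0, H, 2 * H, 10_000.0):
    # isothermal-barotropic profile: P(z) = P0 * exp(-z / H)
    print(f"z = {z:7.0f} m   P = {P0 * math.exp(-z / H):9.1f} Pa")
print(f"scale height H = {H:.0f} m")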
For an isothermal atmosphere,
{\displaystyle (1-{\frac {1}{e}})}
or about 63% of the total mass of the atmosphere exists between the planet's surface and one scale height. (The total air mass below a certain altitude is calculated by integrating over the density function.)
For the ocean example there was a sharp transition in density at the top or "surface" of the ocean. However, for atmospheres made of gas there is no equivalent sharp transition or edge. Gas atmospheres simply get less and less dense until they are so thin that they merge into space.
=== The U.S. Standard Atmosphere ===
The U.S. Standard Atmosphere model starts with many of the same assumptions as the isothermal-barotropic model, including ideal gas behavior and constant molecular weight, but it differs by defining a more realistic temperature function, consisting of eight data points connected by straight lines, i.e. regions of constant temperature gradient. Of course the real atmosphere does not have a temperature distribution with this exact shape; the temperature function is an approximation. Values for pressure and density are then calculated based on this temperature function, and the constant temperature gradients help to make some of the maths easier.
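The piecewise-linear construction can be sketched in a few lines of Python. The layer bases and lapse rates below are the commonly quoted 1976 U.S. Standard Atmosphere values, but treat the exact numbers as assumptions of this example rather than a definitive implementation.

import math

G0, M, R = 9.80665, 0.0289644, 8.31432          # SI constants
BASES = [  # (geopotential base height [m], base temperature [K], lapse [K/m])
    (0.0, 288.15, -0.0065), (11_000.0, 216.65, 0.0), (20_000.0, 216.65, 0.001),
    (32_000.0, 228.65, 0.0028), (47_000.0, 270.65, 0.0),
    (51_000.0, 270.65, -0.0028), (71_000.0, 214.65, -0.002),
]

def temperature_pressure(h):
    """Temperature [K] and pressure [Pa] at geopotential height h [m]."""
    p, (hb, tb, lapse) = 101_325.0, BASES[0]
    for nb in BASES[1:]:
        if h < nb[0]:
            break
        dh = nb[0] - hb
        # integrate hydrostatic balance across each full layer below h
        if lapse == 0.0:
            p *= math.exp(-G0 * M * dh / (R * tb))
        else:
            p *= (tb / (tb + lapse * dh)) ** (G0 * M / (R * lapse))
        hb, tb, lapse = nb
    dh = h - hb
    t = tb + lapse * dh
    # ...and across the partial layer containing h
    if lapse == 0.0:
        p *= math.exp(-G0 * M * dh / (R * tb))
    else:
        p *= (tb / t) ** (G0 * M / (R * lapse))
    return t, p

print(temperature_pressure(11_000.0))   # about (216.65 K, 2.26e4 Pa)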
=== NASA Global Reference Atmospheric Model ===
The NASA Earth Global Reference Atmospheric Model (Earth-GRAM) was developed by the Marshall Space Flight Center to provide a design reference atmosphere that, unlike the standard atmospheres, allows for geographical variability, a wide range of altitudes (surface to orbital altitudes), and different months and times of day. It can also simulate spatial and temporal perturbations in atmospheric parameters due to turbulence and other atmospheric perturbation phenomena. It is available in computer code written in Fortran. The GRAM series also includes atmospheric models for the planets Venus, Mars and Neptune and the Saturnian moon, Titan.
== Geopotential altitude ==
Gravitational acceleration, g(z), decreases with altitude since moving up means moving away from the planet's center.
{\displaystyle g(z)={\frac {Gm_{e}}{(r_{e}+z)^{2}}}}
This problem of decreasing g can be dealt with by defining a transformation from real geometric altitude z to an abstraction called "geopotential altitude" h, defined:
{\displaystyle h={\frac {r_{e}z}{r_{e}+z}}}
h has the property
{\displaystyle g(z)\,{\text{d}}z=g_{0}\,{\text{d}}h}
where
{\displaystyle g_{0}=g(0)={\frac {Gm_{e}}{r_{e}^{2}}}}
This states that the amount of work done lifting a test mass m to geometric height z, through an atmosphere where gravity decreases with altitude, is the same as the amount of work done lifting that same mass to geopotential height h through an atmosphere where g remains fixed at g0, its value at sea level.
This geopotential altitude h is then used instead of geometric altitude z in the hydrostatic equations.
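A minimal Python sketch of the conversion and its inverse; the Earth radius here is an assumed round value (standard atmosphere tables use a slightly different effective radius).

R_E = 6_371_000.0   # mean Earth radius [m] (assumed round value)

def geopotential_from_geometric(z):
    return R_E * z / (R_E + z)        # h = r_e z / (r_e + z)

def geometric_from_geopotential(h):
    return R_E * h / (R_E - h)        # inverse of the mapping above

z = 86_000.0                          # an example geometric altitude [m]
h = geopotential_from_geometric(z)
print(h, geometric_from_geopotential(h))   # ~84,854 m, then back to 86,000 m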
== Common models ==
COSPAR International Reference Atmosphere
International Standard Atmosphere
Jacchia Reference Atmosphere, an older model still commonly used in spacecraft dynamics
Jet standard atmosphere
NRLMSISE-00 is a recent model from NRL often used in the atmospheric sciences
US Standard Atmosphere
== See also ==
Standard temperature and pressure
Upper-atmospheric models
== References ==
== External links ==
Public Domain Aeronautical Software – Derivation of hydrostatic equations used in the 1976 US Standard Atmosphere
FORTRAN code to calculate the US Standard Atmosphere
NASA GSFC Atmospheric Models overview
Various models at NASA GSFC ModelWeb
Earth Global Reference Atmospheric Model (Earth-GRAM 2010) | Wikipedia/Static_atmospheric_model |
The Environmental Modeling Center (EMC) is a United States Government agency, which improves numerical weather, marine and climate predictions at the National Centers for Environmental Prediction (NCEP), through a broad program of research in data assimilation and modeling. In support of the NCEP operational forecasting mission, the EMC develops, improves and monitors data assimilation systems and models of the atmosphere, ocean and coupled system, using advanced methods developed internally as well as cooperatively with scientists from universities, NOAA laboratories and other government agencies, and the international scientific community.
== History ==
In July 1954, the Joint Numerical Weather Prediction Unit (JNWPU) was created to test out numerical weather prediction techniques by computer. Operational numerical weather prediction in the United States began in 1955 under the JNWPU. This unit co-located with the Weather Bureau-Air Force-Navy (WBAN) analysis center to form the National Weather Analysis Center, which was located in Suitland, Maryland. When the two units merged, the name changed to the National Meteorological Center (NMC) in January 1958. When the JNWPU dissolved in 1961, NMC became an independent organization from Global Weather Central and Fleet Numerical Weather Central. Research and computer processing abilities increased over the years, which allowed for the first global forecast model to run by June 1966. NMC moved to the World Weather Building in Camp Springs, Maryland between 1974 and 1976. NMC changed its name to NCEP, the National Centers for Environmental Prediction on October 1, 1995, with the Environmental Modeling Center (EMC) becoming one of its subunits. EMC moved to the National Center for Weather and Climate Prediction building in September 2012.
== Operations ==
The Environmental Modeling Center is responsible for the development, running, and maintenance of more than 20 numerical weather prediction systems comprising NCEP's operational production suite. These models include the Rapid Refresh (RAP), Global Forecast System (GFS), Global Ensemble Forecast System (GEFS), WaveWatch III, Short Range Ensemble Forecast (SREF), Climate Forecast System (CFS), Global Real-Time Ocean Forecast System (RTOFS), North American Mesoscale Model (NAM), Hurricane Weather Research and Forecasting model (HWRF), and Hurricanes in a Multi-scale Ocean-coupled Non-hydrostatic Model (HMON).
== Branches ==
Modeling and Data Assimilation Branch
Verification, Post-processing and Product Generation Branch
Engineering and Implementation Branch
== See also ==
National Hurricane Center
Ocean Prediction Center
Space Weather Prediction Center
Storm Prediction Center
Tropical cyclone forecast model
Weather Prediction Center
== References == | Wikipedia/Environmental_Modeling_Center |
Regional Ocean Modeling System (ROMS) is a free-surface, terrain-following, primitive equations ocean model widely used by the scientific community for a diverse range of applications. The model is developed and supported by researchers at Rutgers University, the University of California, Los Angeles, and contributors worldwide.
ROMS is used to model how a given region of the ocean responds to physical forcings such as heating or wind. It can also be used to model how a given ocean system responds to inputs like sediment, freshwater, ice, or nutrients, requiring coupled models nested within the ROMS framework.
== Framework ==
ROMS is a 4D modeling system. It is a 3-dimensional model (a 2D horizontal grid plus a vertical grid) that can be run over a given amount of time, time being the fourth dimension. It is gridded into vertical levels that make up the water column and horizontal cells that make up the coordinates of the 2D Cartesian plane of the model region.
=== Kernel ===
Central to the ROMS framework are four models that form what is called the dynamical/numerical core or kernel:
Non-Linear Model kernel (NLM): NLROMS
Perturbation Tangent Linear Model kernel (TLM): TLROMS
Finite-amplitude tangent linear Representer Model kernel (RPM): RPROMS
Adjoint Model kernel (ADM): ADROMS
=== Vertical grid ===
The vertical grid is a hybrid stretched grid. It is hybrid in that its stretching intervals fall somewhere between the two extremes of 1) the evenly-spaced sigma grid used by the Princeton Ocean Model and 2) a true z-grid with a fixed depth interval. The vertical grid can be squeezed or stretched to increase or decrease the resolution for an area of interest, such as a thermocline or bottom boundary layer. Grid stretching in the vertical direction follows bottom topography, allowing for the idealized flow of water over features such as seamounts. The numbering of the vertical grid goes from the bottom waters upward to the air-water interface: the bottom water level is level 1 and the topmost surface water level is the highest number (such as level 20). With a coupled sediment module, the numbering of the sediment seabed levels goes from the sediment-water interface downward: the topmost seabed level is level 1 and the deepest seabed level is the highest number.
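The Python sketch below illustrates one generic way to stretch terrain-following levels (a simple hyperbolic-sine stretching, not ROMS's actual Vtransform/Vstretching functions); the depth, level count, and stretching factor are assumptions chosen for the example.

import numpy as np

N = 20         # number of vertical levels; level 1 is the bottom (assumed)
theta = 4.0    # stretching intensity (assumed)
depth = 500.0  # local water depth [m] (assumed)

k = np.arange(N + 1)                      # level interfaces 0..N
s = k / N - 1.0                           # uniform sigma in [-1, 0]
c = np.sinh(theta * s) / np.sinh(theta)   # stretched coordinate in [-1, 0]
z = depth * c                             # interface depths [m]; z = 0 at surface

print(np.round(z, 1))   # spacing is finest near the surface, coarsest at depth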
=== Horizontal grid ===
The horizontal grid is a structured grid, meaning that it has a rectangular 4-sided grid cell structure. The horizontal grid is also an orthogonal curvilinear grid, meaning that it maximizes ocean grid cells of interest and minimizes extra land grid cells. The horizontal grid is also a staggered grid or Arakawa-C grid, where the velocities in the north-south and east-west directions are calculated at the edges of each grid cell, while the values for scalar variables such as density are calculated at the center of each grid cell, known as "rho-points."
=== Physics ===
In both the vertical and horizontal directions, the default equations use centered, second-order finite difference schemes. Higher order schemes are available if desired, for example using parabolic spline reconstruction.
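As an illustration of what "centered, second-order" means in practice, the sketch below differentiates a known function with the centered stencil (u[i+1] − u[i−1])/(2Δx) and checks the error against the exact derivative.

import numpy as np

dx = 0.1
x = np.arange(0.0, 2 * np.pi, dx)
u = np.sin(x)                     # field with known derivative cos(x)

dudx = np.empty_like(u)
dudx[1:-1] = (u[2:] - u[:-2]) / (2.0 * dx)   # centered, second-order interior
dudx[0] = (u[1] - u[0]) / dx                 # one-sided at the boundaries
dudx[-1] = (u[-1] - u[-2]) / dx

print(np.max(np.abs(dudx[1:-1] - np.cos(x[1:-1]))))   # error of order dx**2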
In general, the physical schemes used by ROMS are based on three governing equations:
Continuity
Conservation of momentum (Navier-Stokes)
Transport equations of tracer variables (such as salinity and temperature)
Equations are coupled to solve for five unknowns at each location in the model grid using numerical solutions:
East-west velocity (u)
North-south velocity (v)
Vertical velocity (w)
Salinity
Temperature
== Source code ==
ROMS uses an open-access source code that can be downloaded by filling out an online request form. The code is written in Fortran, configured with C preprocessor options, and was developed for shared computing uses. To download the source code a user must create an account and file a request with the developers on the ROMS website.
== Input and output ==
=== Input ===
Boundaries such as coastlines can be specified for a given region using land- and sea-masking. The top vertical boundary, the air-sea interface, uses an interaction scheme developed by Fairall et al. (1996). The bottom vertical boundary, the sediment-water interface, uses a bottom stress or bottom-boundary-layer scheme developed by Styles and Glenn (2000).
Inputs that are needed for an implementer to run ROMS for a specific ocean region include:
Bathymetry and coastline
Freshwater input
Wind
Tides
Open boundary forcings (idealized, such as a reanalysis product, or specific data)
Heat flux
Physical mixing (see above)
The programming framework of ROMS is split into three parts: Initialize, Run, and Finalize, which is standard for the Earth System Modeling Framework (ESMF). "Run" is the largest of these three parts; here the user chooses which options they want to use and assimilates data if desired. The model must be compiled and initialized before it is run.
=== Output ===
The output format of model run files is netCDF. Model output is often visualized using independent secondary programming software such as MATLAB or Python. Simple visualization software such as NASA's Panoply Data Viewer can also be used to visualize model output for teaching or demonstration purposes.
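A minimal Python sketch of inspecting such output follows; the file name and the variable name "temp" are assumptions for illustration, as actual names depend on the application.

import netCDF4

# Open a hypothetical ROMS history file and inspect one field.
with netCDF4.Dataset("roms_his.nc") as ds:      # assumed file name
    print(list(ds.variables))                   # names of all stored variables
    temp = ds.variables["temp"][0, -1, :, :]    # first time step, top level (assumed layout)
    print(temp.shape, float(temp.mean()))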
== User options ==
The general approach of ROMS gives model implementers a high level of freedom and responsibility. One approach cannot meet the needs of all the diverse applications the model is currently used for. Therefore, it is up to each model implementer (either an individual or a research group) to choose how they want to use each of the available options. Options include choices such as:
Mixing formulations in the horizontal and vertical directions
Vertical grid stretching
Processing mode (serial, parallel with MPI, or parallel with OpenMP)
Debugging turned on or off
When using ROMS, if an implementer runs into a problem or bug, they can report it to the ROMS forum.
== Applications ==
The versatility of ROMS has been proven in its diverse applications to different systems and regions. It is best applied to mesoscale systems, or those systems that can be mapped at high resolution, such as 1-km to 100-km grid spacing.
=== Coupled model applications ===
Biogeochemical, bio-optical, sea ice, sediment, and other models can be embedded within the ROMS framework to study specific processes. These are usually developed for specific regions of the world's oceans but can be applied elsewhere. For example, the sea ice application of ROMS was originally developed for the Barents Sea Region.
ROMS modeling efforts are increasingly being coupled with observational platforms, such as buoys, satellites, and ship-mounted underway sampling systems, to provide more accurate forecasting of ocean conditions.
=== Regional applications ===
There is an ever-growing number of applications of ROMS to particular regions of the world's oceans. These integrated ocean modeling systems use ROMS for the circulation component, and add other variables and processes of interest. A few examples are:
Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST)
Experimental System for Predicting Shelf and Slope Optics (ESPRESSO)
New York Harbor Observing and Prediction System (NYHOPS)
Chesapeake Bay Estuarine Carbon & Biogeochemistry (ChesROMS ECB)
Climatic indices in the Gulf of Alaska
LiveOcean daily forecast model of the NE Pacific and Salish Sea
The Western Mediterranean OPerational forecasting system (WMOP)
== See also ==
General circulation model (GCM)
Ocean general circulation model (OGCM)
List of ocean circulation models
Climate model
Oceanography
Physical oceanography
Ecological forecasting
== References ==
== External links ==
ROMS website
ROMS Documentation Portal | Wikipedia/Regional_Ocean_Modeling_System |
The Integrated Forecasting System (IFS) is a global numerical weather prediction system jointly developed and maintained by the European Centre for Medium-Range Weather Forecasts (ECMWF) based in Reading, England, and Météo-France based in Toulouse. The version of the IFS run at ECMWF is often referred to as the "ECMWF" or the "European model" in North America, to distinguish it from the American Global Forecast System.
== Mechanism ==
It comprises a spectral atmospheric model with a terrain-following vertical coordinate system coupled to a 4D-Var data assimilation system. In 1997 the IFS became the first operational forecasting system to use 4D-Var. Both ECMWF and Météo-France use the IFS to make operational weather forecasts, but using a different configuration and resolution (the Météo-France configuration is referred to as ARPEGE). It is one of the predominant global medium-range models in general use worldwide; its most prominent rivals in the 6–10 day medium range include the American Global Forecast System (GFS), the Canadian Global Environmental Multiscale Model (GEM and GDPS) and the UK Met Office's Unified Model.
=== Variants ===
ECMWF runs the IFS in several configurations. The highest resolution "HRES" configuration is run every 6 hours (00Z and 12Z out to 10 days, 06Z/18Z out to 90 hours) with a horizontal resolution of 9 km using 137 layers in the vertical. The 51-member ensemble system "ENS" is also run every twelve hours out to 15 days and every 06Z/18Z out to 6 days with a horizontal resolution of 18 km and 137 layers in the vertical. The ECMWF also runs a coarser version of the IFS out 45 days; this version is run weekly, with output in five-day intervals. There is also a version that runs out one year. All model versions except HRES are coupled to the ocean model NEMO.
== Usage ==
Many ECMWF member states use ECMWF global forecasts to provide boundary conditions for their own higher resolution, limited domain forecasts. ECMWF forecasts are free to the national weather services of its member states, but a fee is charged to commercial users, while limited operational data (select variables from the HRES and ENS out ten days) is available directly to consumers under the noncommercial Creative Commons license prohibiting derivative works (CC BY-NC-ND). In contrast to the international organization's forecast, the output of many national weather services is usually freely licensed to all users.
The full IFS source code is available only to the national weather services of ECMWF member states. The source code for the atmosphere model is available to other non-commercial users in the form of OpenIFS, which requires a free license. The EC-Earth climate model is based on the IFS.
== References ==
== External links ==
ECMWF research page
Changes in the ECMWF model
ARPEGE-IFS
IFS documentation | Wikipedia/Integrated_Forecast_System |
The Global Environmental Multiscale Model (GEM), often known as the CMC model in North America, is an integrated forecasting and data assimilation system developed in the Recherche en Prévision Numérique (RPN), Meteorological Research Branch (MRB), and the Canadian Meteorological Centre (CMC). Along with the NWS's Global Forecast System (GFS), which runs out to 16 days, the ECMWF's Integrated Forecast System (IFS), which runs out 10 days, the US Naval Research Laboratory Navy Global Environmental Model (NAVGEM), which runs out eight days, the UK Met Office's Unified Model, which runs out to seven days, and Deutscher Wetterdienst's ICON (ICOsahedral Nonhydrostatic), which runs out to 7.5 days, it is one of the global medium-range models in general use.
== Operation ==
=== Deterministic model ===
The GEM's operational model, known as the Global Deterministic Prediction System (GDPS), is currently operational for the global data assimilation cycle and medium-range forecasting, the regional data assimilation spin-up cycle and short-range forecasting. Mesoscale forecasts (distributed under the names regional deterministic prediction system or RDPS for the coarser mesh, available for all of North America and high-resolution deterministic prediction system or HRDPS for the finer mesh, available in Canada only) are produced overnight and are available to the operational forecasters. A growing number of meteorological applications are now either based on or use the GEM model. Output from the GEM goes out to 10 days, on par with the public output of the European Integrated Forecast System.
=== Ensemble model ===
The ensemble variant of the GEM is known as the Global Ensemble Prediction System (GEPS). It has 20 members (plus control) and runs out 16 days, the same range as the American global forecast system. The GEPS runs alongside the GFS ensemble to form the North American Ensemble Forecast System. A regional ensemble prediction system (REPS), covering North America and also having 20 members plus control, runs out 72 hours.
== Development ==
The GEM model has been developed to meet the operational weather forecasting needs of Canada for the coming years. These presently include short-range regional forecasting, medium-range global forecasting, and data assimilation. In the future they will include nowcasting at the meso-scales, and dynamic extended-range forecasting on monthly to seasonal timescales. The essence of the approach is to develop a single highly efficient model that can be reconfigured at run time to either run globally at uniform-resolution (with possibly degraded resolution in the "other" hemisphere), or to run with variable resolution over a global domain such that high resolution is focused over an area of interest.
=== Mechanics ===
The operational GEM model dynamics is formulated in terms of the hydrostatic primitive equations with a terrain-following pressure vertical coordinate. The time discretization is an implicit two-time-level semi-Lagrangian scheme. The spatial discretization is a Galerkin grid-point formulation on an Arakawa C-grid in the horizontal (lat-lon) with the staggered Charney-Phillips vertical discretization. The horizontal mesh can be of uniform or variable resolution, and furthermore can be arbitrarily rotated; the vertical mesh is also variable. The explicit horizontal diffusion is del-2 (∇²) on all prognostic variables.
The operational GEM model is interfaced with a full complement of physical parametrizations, these currently include:
solar and infrared radiation interactive with water vapor, carbon dioxide, ozone and clouds
prediction of surface temperature over land with the force-restore method
turbulence in the planetary boundary layer through vertical diffusion, with diffusion coefficients based on stability and turbulent kinetic energy
surface layer based on Monin–Obukhov similarity theory (see the sketch after this list)
shallow convection scheme (non-precipitating)
Kuo-type deep convection scheme (global forecast system)
Fritsch–Chappell-type deep convection scheme (regional forecast system)
Sundqvist condensation scheme for stratiform precipitation
gravity wave drag
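The surface-layer item above rests on Monin–Obukhov similarity, which gives the wind profile U(z) = (u*/κ)[ln(z/z0) − ψm(z/L) + ψm(z0/L)]. A minimal sketch assuming only the simple stable-branch correction ψm = −5z/L; the operational stability functions differ in detail, and all input values below are illustrative:

```python
import numpy as np

def wind_profile(z, ustar, z0, L, kappa=0.4):
    """Surface-layer wind speed from Monin-Obukhov similarity:
    U(z) = (ustar/kappa) * [ln(z/z0) - psi_m(z/L) + psi_m(z0/L)].
    Only the stable-side Businger-Dyer form psi_m = -5*zeta is sketched."""
    def psi_m(zeta):
        return -5.0 * zeta  # stable stratification branch only
    return (ustar / kappa) * (np.log(z / z0) - psi_m(z / L) + psi_m(z0 / L))

# Usage: 10 m wind over grassland (roughness length 3 cm) in weakly
# stable air; prints roughly 4.5 m/s.
print(wind_profile(z=10.0, ustar=0.3, z0=0.03, L=200.0))
```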
=== Future ===
The next stage of development of the GEM model is to evaluate the non-hydrostatic version for mesoscale applications where the hydrostatic assumption breaks down. The limited-area (open-boundary) version is scheduled to follow. The distributed-memory version of GEM, a major recoding effort based upon a locally developed communication interface that currently uses the Message Passing Interface, is almost complete. Research on the performance of different land surface schemes such as ISBA (Interaction Soil-Biosphere-Atmosphere) and CLASS (Canadian Land Surface Scheme) is making progress.
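The distributed-memory recoding mentioned above follows the standard pattern for grid-point models: the horizontal domain is split across processors, which must exchange halo (boundary) values every time step. A generic mpi4py sketch of that exchange, standard MPI usage rather than GEM's in-house communication interface:

```python
# Generic halo exchange with mpi4py -- the kind of boundary swap a
# distributed-memory grid-point model performs every time step.
# Run with e.g.: mpirun -n 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(10, float(rank))      # this rank's slab of a 1-D periodic domain
left, right = (rank - 1) % size, (rank + 1) % size

halo_left = np.empty(1)               # will hold the left neighbour's edge value
halo_right = np.empty(1)              # will hold the right neighbour's edge value
# Simultaneously send my edge values outward and receive my neighbours' edges.
comm.Sendrecv(sendbuf=local[-1:], dest=right, recvbuf=halo_left, source=left)
comm.Sendrecv(sendbuf=local[:1], dest=left, recvbuf=halo_right, source=right)
```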
The strategy is progressing towards a unified data assimilation and forecast system, at the heart of which lies a single multipurpose and multiscale numerical model.
== Usage ==
Output from Canadian forecast models such as the GEM is under Canadian crown copyright but is issued under a free license if properly attributed to Environment Canada. Various unofficial sites thus redistribute GEM data, including the GDPS and GEPS.
== See also ==
Global climate model
== References ==
== External links ==
ECCC GEM official website
Introduction to GEMDM, distributed memory version of the GEM
ECCC MSC AniMet tool to visualize and animate GEM outputs
Ventusky GEM - Weather Forecast Visualisation
Model output from the University of Quebec at Montreal:
GDPS
RDPS
HRDPS
REPS individual ensemble members
GEPS individual ensemble members, out 10 days
GEPS mean output, out 16 days from Tropical Tidbits
GEPS mean output, weekly forecasts, out 4 weeks from George Mason University's SubX program (top right) | Wikipedia/Global_Environmental_Multiscale_Model |
The MIT General Circulation Model (MITgcm) is a numerical computer code that solves the equations of motion governing the ocean or Earth's atmosphere using the finite volume method. It was developed at the Massachusetts Institute of Technology and was one of the first non-hydrostatic models of the ocean. It has an automatically generated adjoint that allows the model to be used for data assimilation. The MITgcm is written in the programming language Fortran.
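The finite volume method referred to above updates cell-averaged quantities from fluxes through cell faces. A deliberately minimal 1-D upwind sketch of the idea; MITgcm's actual kernels are far more general and written in Fortran:

```python
import numpy as np

def upwind_fv_step(q, u, dt, dx):
    """One finite-volume update of cell averages q on a periodic 1-D grid
    with constant velocity u > 0: the flux through each face is taken
    from the upwind cell, and each cell changes by its net flux divergence."""
    flux = u * q                              # face i+1/2 carries the value of cell i
    return q - (dt / dx) * (flux - np.roll(flux, 1))

# Usage: advect a square pulse of tracer at Courant number 0.5.
x = np.linspace(0.0, 1.0, 100, endpoint=False)
q = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)
for _ in range(50):
    q = upwind_fv_step(q, u=0.5, dt=0.01, dx=0.01)
```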
== History ==
== See also ==
Physical oceanography
Global climate model
== References ==
Marshall, John; A. Adcroft; C. Hill; L. Perelman; C. Heisey (1997). "A finite-volume, incompressible Navier Stokes model for studies of the ocean on parallel computers". Journal of Geophysical Research: Oceans. 102 (C3): 5753–5766. Bibcode:1997JGR...102.5753M. doi:10.1029/96JC02775.
== External links ==
The MITgcm home page
Department of Earth, Atmospheric and Planetary Science at MIT
The ECCO2 consortium | Wikipedia/MIT_General_Circulation_Model |
The Modular Ocean Model (MOM) is a three-dimensional ocean circulation model designed primarily for studying the ocean climate system. The model is developed and supported primarily by researchers at the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory (NOAA/GFDL) in Princeton, NJ, USA.
== Overview ==
MOM has traditionally been a level-coordinate ocean model, in which the ocean is divided into boxes whose bottoms are located at fixed depths. Such a representation makes it easy to solve the momentum equations and to represent the well-mixed, weakly stratified layer known as the ocean mixed layer near the ocean surface. However, level-coordinate models have problems when it comes to the representation of thin bottom boundary layers (Winton et al., 1998) and thick sea ice. Additionally, because mixing in the ocean interior occurs largely along lines of constant potential density rather than along lines of constant depth, the mixing must be rotated relative to the coordinate grid, a process that can be computationally expensive. By contrast, in codes that represent the ocean in terms of constant-density layers (which represent the flow in the ocean interior much more faithfully), representation of the ocean mixed layer becomes a challenge.
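The "rotation" of mixing mentioned above hinges on the slope of constant-density (neutral) surfaces, S = −(∂ρ/∂x)/(∂ρ/∂z), which orients the rotated (Redi) diffusion tensor in a level-coordinate model. A minimal sketch computing that slope from an illustrative density field; the finite differences and stratification below are invented for the example:

```python
import numpy as np

def isopycnal_slope(rho, dx, dz):
    """Slope of constant-density surfaces, S = -(drho/dx)/(drho/dz), the
    quantity that orients rotated (Redi) lateral diffusion in a
    level-coordinate model. rho is a (z, x) array of potential density."""
    drho_dx = np.gradient(rho, dx, axis=1)
    drho_dz = np.gradient(rho, dz, axis=0)
    return -drho_dx / drho_dz

# Illustrative stratification: density increases downward and eastward,
# giving gently tilted isopycnals with slope ~1e-3.
z = np.linspace(0.0, -1000.0, 50)[:, None]    # depth in metres, negative down
x = np.linspace(0.0, 1.0e5, 100)[None, :]     # eastward distance in metres
rho = 1027.0 - 1.0e-3 * z + 1.0e-6 * x
S = isopycnal_slope(rho, dx=x[0, 1] - x[0, 0], dz=z[1, 0] - z[0, 0])
```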
MOM3, MOM4, and MOM5 are used as a code base for the ocean component of the GFDL coupled models used in the IPCC assessment reports, including the GFDL CM2.X physical climate model series and the ESM2M Earth System Model. Versions of MOM have been used in hundreds of scientific papers by authors around the world. MOM4 is used as the basis for the El Niño prediction system employed by the National Centers for Environmental Prediction.
== History ==
MOM owes its genesis to work at GFDL in the late 1960s by Kirk Bryan and Michael Cox. This code, along with a version generated at GFDL and UCLA/NCAR by Bert Semtner, is the ancestor of many of the level-coordinate ocean model codes run around the world today. In the late 1980s, Ron Pacanowski, Keith Dixon, and Tony Rosati at GFDL rewrote the Bryan-Cox-Semtner code in a modular form, enabling different options and configurations to be more easily generated and new physical parameterizations to be more easily included. This version, released on December 5, 1990, became known as Modular Ocean Model v1.0 (MOM1). Further development by Pacanowski, aided by Charles Goldberg and encouraged by community feedback, led to the release of v2.0 (MOM2) in 1995. Pacanowski and Stephen Griffies released v3.0 (MOM3) in 1999. Work by Griffies, Matthew Harrison, Rosati and Pacanowski, with considerable input from a scientific community of hundreds of users, resulted in significant evolution of the code, released as v4.0 (MOM4) in 2003. An update, v4.1 (MOM4p1), was released by Griffies in 2009; the latest version, v5.0 (MOM5), followed in 2012.
== See also ==
Geophysical Fluid Dynamics Laboratory
== References ==
== External links ==
MOM6 project
MOM5 community website
NOAA/GFDL Modular Ocean Model home page
History of MOM
MOM5 manual
MOM4p1 manual
MOM4 manual
MOM3 manual
MOM2 manual
MOM1 manual
Cox code technical report | Wikipedia/Modular_ocean_model |
The Model for Prediction Across Scales (MPAS) is Earth system modeling software that integrates atmospheric, oceanographic, and cryospheric modeling across scales from regional to planetary. It includes climate and weather models and simulations, first used by researchers in 2013. The atmospheric models were created by the Earth System Laboratory at the National Center for Atmospheric Research, and the oceanographic models were created by the Climate, Ocean, and Sea Ice Modeling Group at Los Alamos National Laboratory. The software has been used to model real-time weather as well as seasonal forecasting of convection, tornadoes, and tropical cyclones. The atmospheric modeling component can be used alongside other atmospheric modeling software, including the Weather Research and Forecasting Model, the Global Forecast System, and the Community Earth System Model.
== See also ==
Tropical cyclone forecast model
Wind wave model
Global circulation model
== References ==
Notes
Skamarock, William C.; J. B. Klemp; M. G. Duda; L. D. Fowler; S.-H. Park; T. D. Ringler (2012). "A Multiscale Nonhydrostatic Atmospheric Model Using Centroidal Voronoi Tesselations and C-Grid Staggering". Mon. Wea. Rev. 140 (9): 3090–105. Bibcode:2012MWRv..140.3090S. doi:10.1175/MWR-D-11-00215.1. S2CID 34543801.
== External links ==
MPAS homepage
2015 NCAR Program Operating Plan: The Model for Prediction Across Scales (MPAS)
One Image That Shows Future of Climate Models | Wikipedia/Model_for_Prediction_Across_Scales |
The Unified Model is a numerical weather prediction and climate modeling software suite originally developed by the United Kingdom Met Office from 1990 and now both used and further developed by many weather-forecasting agencies around the world. The Unified Model gets its name because a single model is used across a range of both timescales (nowcasting to centennial) and spatial scales (from the convective scale to earth-system climate modelling). The models are grid-point based, rather than wave based, and are run on a variety of supercomputers around the world. The Unified Model atmosphere can be coupled to a number of ocean models. At the Met Office, it is used for the main suite of weather prediction models, for deployable and on-demand weather models, and for seasonal and climate modelling. Similar Unified Model suites with global and regional domains are used by many other national or military weather agencies around the world for operational forecasting.
Data for numerical weather prediction is provided by observations from satellites, from the ground (both human and from automatic weather stations), from buoys at sea, radar, radiosonde weather balloons, wind profilers, commercial aircraft and a background field from previous model runs.
Rather than being forced to accept an observed value outright, which might make the system unstable (and the observation itself could be inaccurate), the model is only adjusted towards the observations using data assimilation.
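The simplest variational form of this adjustment is 3D-Var, which minimises J(x) = ½(x−xb)ᵀB⁻¹(x−xb) + ½(y−Hx)ᵀR⁻¹(y−Hx); the Met Office's hybrid 4D-Var adds a time dimension and flow-dependent covariances. A toy sketch using the closed-form minimiser for a linear observation operator, with all matrices and values invented for illustration:

```python
import numpy as np

# Background (prior) state, its error covariance, one observation, and the
# linear operator mapping state to observation space -- all values invented.
xb = np.array([10.0, 12.0])
B = np.diag([4.0, 4.0])
y = np.array([11.5])
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])

# For linear H the minimiser of the 3D-Var cost function has the closed
# form of the best linear unbiased (Kalman) update.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain: how far to trust the obs
xa = xb + K @ (y - H @ xb)                     # analysis: nudged toward the obs
# xa == [11.2, 12.0]: the observed component moves toward y; with a
# diagonal B, the unobserved component is untouched.
```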
The Unified Model software suite is written in Fortran (originally Fortran 77, but predominantly Fortran 90 as of 2003).
Because most weather developments of interest occur near the ground, the vertical layers are closer together near the surface. A major update, called "New Dynamics", was deployed in August 2002.
== Principal UM suites at the Met Office ==
The Met Office runs a range of Numerical Weather Prediction suites using the UM.
All of the models use topography at varying resolutions, with greater accuracy at higher resolutions. The limiting factor for all models is that, for a weather event to be captured by the model, it must span at least three grid points. Thus, for a model with 40 km grid spacing, a weather system must be at least 120 km across to be modelled. This means smaller phenomena such as small depressions, smaller hurricanes, and large thunderstorms are too small for the model to handle. As the resolution increases, smaller events can be captured; the 1.5 km model, for example, is reputedly capable of modelling individual showers.
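Applying that three-grid-point rule of thumb to the resolutions quoted in this article gives a quick feel for what each suite can resolve; a back-of-envelope sketch, with the model names and grid spacings taken from the text:

```python
# Rule of thumb from the text: a feature must span about three grid
# lengths before a model can represent it at all.
resolutions_km = {"Global": 10.0, "NAE (retired)": 12.0,
                  "Euro4 (retired)": 4.4, "UKV inner grid": 1.5}
for name, dx in resolutions_km.items():
    print(f"{name}: smallest resolvable feature ~{3 * dx:g} km")
```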
=== Global Ensemble Model (MOGREPS-G) ===
MOGREPS-G is an ensemble model that runs multiple simulations to reduce the overall average error caused by the inherent uncertainty in the observation and modelling process. The model consists of 18 ensemble members with a 10 km horizontal resolution and 70 vertical levels, forecasting out to 168 hours (7 days) with a model run every 6 hours. The latest two runs can be combined to give a 36-member ensemble.
=== UK Ensemble Model (MOGREPS-UK) ===
The UK ensemble model consists of 3 ensemble members with a 1.5–4 km horizontal resolution and 70 vertical levels, forecasting out to 120 hours (5 days) with a model run every hour. The latest six runs can be combined to give an 18-member ensemble.
=== Global Deterministic Model ===
Approximately 10 km resolution with 70 vertical levels across the entire global domain. Forecasts alternate between 60 hour (2.5 day) and 144 hour (6 day) extents. The Global model provides boundary information for the smaller-domain models. The model is kept close to the real atmosphere using hybrid 4D-Var data assimilation of observations.
=== UK Deterministic Model (UKV) ===
1.5–4 km horizontal resolution and 70 vertical levels: the resolution is 1.5 km over the UK and 4 km over surrounding areas. Forecast lengths vary, with hourly forecasts out to 12 hours, three-hourly forecasts out to 54 hours (2.25 days), and twice-daily forecasts out to 120 hours (5 days). The model takes boundary conditions from the global deterministic model and sea conditions from the AMM15 shelf-seas model, and is kept close to observations using hourly 4D-Var data assimilation.
=== North Atlantic and European model (retired) ===
The Met Office's North Atlantic and European (NAE) model had 70 levels with a 12 km resolution. It was run out to 48 hours from start. Because the UK is at a northern latitude, the model rotated its domain onto an effective equator so that the grid boxes were more nearly square. This reduced the load on the model, allowing it to run more quickly. The model was kept close to the real atmosphere using 4D-Var data assimilation of observations.
=== Euro 4km model (retired) ===
70 vertical levels, 4.4 km horizontal resolution. Ran out to 120 hours. Now superseded by the UKV in many applications and by the Global Model in others.
=== DRMs/CAMs ===
Defence Regional Models and Crisis Area Models use the same science configuration as the UK model and can be deployed rapidly to support military operations and disaster relief efforts, for example in support of the Mobile Met Unit (MMU) or during environmental catastrophes.
=== Mountain Wave Models ===
This high resolution model provides information on mountain waves for a variety of locations around the UK and other areas of interest to the Met Office.
=== Met Office's Global Wave Model ===
This models sea waves around the world.
== UM suites outside the Met Office ==
Unified Model suites which are similar to those from the Met Office are run by the Australian Bureau of Meteorology, the Australian Commonwealth Scientific and Industrial Research Organisation, the South African Weather Service, the Norwegian Meteorological Institute, the New Zealand National Institute of Water and Atmospheric Research, the (South) Korea Meteorological Administration and the National Centre for Medium Range Weather Forecasting, a subordinate office of the Indian Ministry of Earth Sciences.
=== Australian Bureau of Meteorology ===
The Australian Bureau of Meteorology has an operational 12.5 km resolution deterministic (ACCESS-G) and a 33 km resolution ensemble (ACCESS-GE) global forecasting system utilizing the Unified Model. This global system provides boundary conditions for a number of higher-resolution regional systems also using the Unified Model. These include a 1.5 km resolution deterministic (ACCESS-C) and a 2.2 km resolution ensemble (ACCESS-CE) model over Australia, and a 4 km resolution relocatable model (ACCESS-TC).
=== (South) Korea Meteorological Administration ===
The (South) Korea Meteorological Administration has an operational 10 km resolution global forecasting system utilizing the Unified Model. This global system provides boundary conditions for a 1.5 km resolution local Unified Model NWP system covering the Korean Peninsula region.
== UKCA ==
United Kingdom Chemistry & Aerosols (UKCA) is a sub-model of the UM that deals with trace gas and aerosol chemistry within the model. This includes calculating the concentrations of climatically relevant gases such as methane and ozone, as well as the composition and evolution of aerosols. As with most of the UM, the UKCA was written in a collaboration between the UK Met Office and UK Academia.
== JULES ==
Joint UK Land Environment System is a land surface model that has been developed in a collaboration between the Met Office and other research institutes. JULES models the exchange of heat and moisture between the Unified Model atmosphere and the land surface and vegetation. JULES can also be used offline to estimate the impacts of different climate models on the land surface and hydrology.
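The heat and moisture exchange JULES computes is often described with bulk aerodynamic formulae of the form H = ρ cp C_H U (Ts − Ta). A sketch of that general form, with invented coefficient values; JULES's actual tiled, multi-layer scheme is far richer than this:

```python
# A bulk aerodynamic formula of the kind used for surface exchange --
# illustrative of what a land surface scheme computes, not JULES's actual
# parametrization. All coefficient values are assumptions.
RHO_AIR = 1.2      # air density, kg m^-3
CP_AIR = 1004.0    # specific heat of air, J kg^-1 K^-1
C_H = 1.5e-3       # bulk transfer coefficient for heat (dimensionless)

def sensible_heat_flux(wind_speed, t_surface, t_air):
    """Upward sensible heat flux H = rho * cp * C_H * U * (Ts - Ta), in W m^-2."""
    return RHO_AIR * CP_AIR * C_H * wind_speed * (t_surface - t_air)

# Usage: a 5 K warm surface under a 5 m/s wind gives roughly 45 W m^-2.
print(sensible_heat_flux(wind_speed=5.0, t_surface=300.0, t_air=295.0))
```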
== References ==
== External links ==
Unified Model Collaboration webpage at the Met Office.
Unified Model Papers
Introduction to the UM by NCAS | Wikipedia/Unified_model |
The Flow-following, finite-volume Icosahedral Model (FIM) is an experimental numerical weather prediction model that was developed at the Earth System Research Laboratories in the United States from 2008 to 2016.
The FIM was developed as a candidate to eventually supplant the Global Forecast System, the United States' current medium-range forecast model. The FIM was originally slated to become operational some time in 2014 but never did. The model produces similar results to the GFS, but runs slower on the NWS's operational computers. Its three-part name derives from its key features: "flow-following" indicates that its vertical coordinates are based on both terrain and potential temperature (isentropic sigma coordinates, previously used in the now-discontinued Rapid Update Cycle model), and "finite-volume" describes the method used for calculating horizontal transport. The "icosahedral" portion describes the model's most uncommon feature: whereas most grid-based forecast models have historically used rectangular grids (a less-than-ideal arrangement for a planet that is a slightly oblate spheroid), the FIM instead fits the Earth to a Goldberg polyhedron with icosahedral symmetry, in which twelve evenly spaced pentagons (including two at the poles) anchor a grid of hexagons.
In November 2016, the ESRL announced that it was no longer pursuing the FIM as a replacement for the GFS and would instead develop the FV3, which uses some of the FIM's principles but on a square grid. The FIM will continue to be run for experimental purposes until the FV3 becomes operational. The FIM will also be upgraded with some of the same features as the FV3 (retaining the icosahedral grid) and with coupling, to examine the use of the model at longer ranges.
The FIM runs as a multiscale model, with a suffix number indicating the model's horizontal resolution. FIM7 operates at a spatial resolution of approximately 60 km, FIM8 at 30 km, FIM9 at 15 km, and FIM9.5 at 10 km. Each scale runs at a temporal resolution of 6-hour steps. As of 2017, only the FIM7 (running out 10 days) and the FIM8 (running out 14 days) continue to be run daily. The FIM7 also makes four runs between Tuesday and Wednesday each week as a climate model, with the mean output from those runs issued at one-week intervals; it thus complements the Climate Forecast System, the only other model in the U.S. government's arsenal that covers that time frame.
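The suffix numbers are consistent with successive refinements of the icosahedral grid: a grid refined g times has 10·4^g + 2 cells, always including exactly twelve pentagons, and each refinement roughly halves the spacing. A sketch reproducing the quoted resolutions; equating each cell's area with spacing squared is a rough convention, and the correspondence between g and the FIM suffix is inferred from the quoted spacings rather than stated in the source:

```python
import numpy as np

R_EARTH_KM = 6371.0

def icosahedral_cells(g):
    """Cell count of an icosahedral (Goldberg) grid after g recursive
    refinements: 10 * 4**g + 2, of which exactly 12 are pentagons."""
    return 10 * 4 ** g + 2

def mean_spacing_km(g):
    # Share the sphere's surface area equally among cells and treat each
    # cell's area as spacing**2 -- a rough convention, good to ~10%.
    area = 4.0 * np.pi * R_EARTH_KM ** 2
    return (area / icosahedral_cells(g)) ** 0.5

for g in (7, 8, 9):   # compare with FIM7 (~60 km), FIM8 (~30 km), FIM9 (~15 km)
    print(f"g={g}: {icosahedral_cells(g):>9,d} cells, ~{mean_spacing_km(g):.0f} km spacing")
```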
== References ==
== External links ==
Official FIM Web site, includes forecasts and documentation | Wikipedia/Flow-following,_finite-volume_Icosahedral_Model |
The Nippon Telegraph and Telephone Corporation (NTT) is a Japanese telecommunications holding company headquartered in Tokyo, Japan. Ranked 55th in Fortune Global 500, NTT is the fourth largest telecommunications company in the world in terms of revenue, as well as the third largest publicly traded company in Japan after Toyota and Sony, as of June 2022. In 2023, the company was ranked 56th in the Forbes Global 2000. NTT was the world's largest company by market capitalization in the late 1980s, and remained among the world's top 10 largest companies by market capitalization until the burst of the Dot-com bubble in the early 2000s.
The company traces its origin to the national telegraph service established in 1868, which came under the purview of the Ministry of Communications in the 1880s. In 1952, the telegraph and telephone services were spun off as the government-owned Nippon Telegraph and Telephone Public Corporation (日本電信電話公社, Nippon Denshin Denwa Kōsha, shortened to Den-Den Kōsha (電電公社)). Under Prime Minister Yasuhiro Nakasone, the company was privatised in 1985 along with the Japanese National Railways and the Japan Tobacco and Salt Public Corporation, adopting the current name. While NTT has been listed on the Tokyo Stock Exchange since 1985, the Japanese government still owns roughly one-third of NTT's shares, regulated by the NTT Law.
The company is incorporated pursuant to the NTT Law (Law Concerning Nippon Telegraph and Telephone Corporation, Etc.). The purpose of the company defined by the law is to own all the shares issued by Nippon Telegraph and Telephone East Corporation (NTT East) and Nippon Telegraph and Telephone West Corporation (NTT West) and to ensure proper and stable provision of telecommunications services all over Japan including remote rural areas by these companies as well as to conduct research relating to the telecommunications technologies that will form the foundation for telecommunications. On 1 July 2019, NTT Corporation launched NTT Ltd., an $11 billion de facto holding company business consisting of 28 brands from across NTT Security, NTT Communications and Dimension Data.
== History ==
Established as a state monopoly in August 1952 to take over the Japanese telecommunications system, Nippon Telegraph and Telephone Public Corporation (日本電信電話公社, Nippon Denshin Denwa Kōsha, shortened to Den-Den Kōsha (電電公社)) was privatised in 1985 to encourage competition in the country's telecom market, making Japan the second country in the world (after the United States) to deregulate its telecom market. In 1987, NTT made the largest stock offering to date, at US$36.8 billion.
Because NTT owns most of Japan's last mile infrastructure (including broadband fibre connections), it has oligopolistic control over most landlines in Japan. In order to stimulate local competition, the company was divided into a holding company (NTT) and three telecom companies (NTT East, NTT West, and NTT Communications) in 1999. The NTT Law regulating NTT East and West requires them to serve only short-distance communications and obligates them to maintain fixed-line telephone service all over the country. They are also obligated to lease their unused optical fiber (dark fiber) to other carriers at regulated rates. NTT Communications is not regulated by the NTT Law.
In July 2010, NTT and South African IT company Dimension Data Holdings announced an agreement on a cash offer from NTT for Dimension Data's entire issued share capital, in a £2.12bn ($3.24bn) deal.
In late 2010, NTT's Japan-to-US transpacific network reached 400 Gbit/s. In August 2011, its network capacity was expanded to 500 Gbit/s.
In 2021, NTT issued green bonds worth about 300 billion yen ($2.7 billion). The bonds comprise three tranches with maturities of 3, 5, and 10 years. The proceeds will be used for environmentally friendly projects (renewable energy, energy-efficient broadband infrastructure, etc.).
=== Corporate history timeline ===
1952 Nippon Telegraph and Telephone Public Corporation established
1979 INS Concept announced
1985 Nippon Telegraph and Telephone Corporation (NTT) incorporated as a private company
1987 NTT listed on the First Section of the Tokyo Stock Exchange
1988 NTT DATA Corporation started operations
1990 VI&P Concept announced
1992 NTT Mobile Communications Network, Inc. (presently NTT DOCOMO) started operations
1994 Basic Concept for the Coming Multimedia Age announced
1995 NTT DATA listed on the Second Section of the Tokyo Stock Exchange
1996 21st Century R&D Vision announced
1996 NTT DATA listed on the First Section of the Tokyo Stock Exchange
1997 Digitization of communications network in Japan completed
1998 Global Information Sharing Concept announced
1998 NTT DOCOMO listed on the First Section of the Tokyo Stock Exchange
1999 NTT's operations reorganized into a holding-company structure: businesses transferred to three new wholly owned subsidiaries (NTT East, NTT West, and NTT Communications)
2002 Prefecture-based subsidiaries of NTT East and NTT West started operations
2002 "Vision for a New Optical Generation" announced
2004 NTT Urban Development Corporation listed on the First Section of the Tokyo Stock Exchange
2004 "NTT Group's Medium-Term Management Strategy" announced
2008 Announcement of a new Medium-Term Management Strategy: "Road to Service Creation Business Group"
== Subsidiaries ==
NTT Group consists of the following major companies, divided into five segments. NTT East, NTT West, NTT Communications, NTT Docomo, and NTT Data are the principal subsidiaries. NTT Data is listed on the stock market. NTT Urban Development is a subsidiary involved in real estate. NTT Communications' business outside of Japan became part of NTT Ltd. on 1 July 2019.
=== Regional ===
Nippon Telegraph and Telephone East Corporation (NTT East)
Nippon Telegraph and Telephone West Corporation (NTT West)
=== Long distance and international ===
NTT Communications
NTT MSC
Verio Inc
NTT America
NTT Europe
HKNet
Plala Networks
=== Mobile ===
NTT Docomo
=== Data (system integration) ===
Dimension Data (now part of NTT Ltd. except in the Middle East and Africa Region)
e-shelter
Gyron Internet Ltd
NTT Data
NTT Comware
NTT Software
NTT AT
NTT IT
=== Information security ===
NTT Security (part of NTT Ltd. as of 1 July 2019)
== R&D laboratories ==
Service Innovation Laboratory Group
Service Evolution Laboratories (Yokosuka)
Media Intelligence Laboratories (Yokosuka)
Software Innovation Center (Musashino and Shibaura)
Secure Platform Laboratories (Musashino)
Information Network Laboratory Group
Network Technology Laboratories (Musashino)
Network Service Systems Laboratories (Musashino)
Access Network Service Systems Laboratories (Tsukuba and Yokosuka)
Energy and Environment Systems Laboratories (Atsugi)
Science and Core Technology Laboratory Group
Network Innovation Laboratories (Yokosuka)
Microsystem Integration Laboratories (Atsugi)
Photonics Laboratories (Atsugi)
Communication Science Laboratories (Keihanna and Atsugi)
Basic Research Laboratories (Atsugi)
NTT Research, Inc. (East Palo Alto, California)
Physics & Informatics (PHI) Laboratory
Cryptography and Information Security (CIS) Laboratory
Medical and Health Informatics (MEI) Laboratory
In 2023, the World Intellectual Property Organization (WIPO)'s Annual PCT Review ranked Nippon Telegraph and Telephone 10th in the world by number of patent applications published under the PCT System, with 1,760 applications published during the year.
== Sponsorship ==
Omiya Ardija and Roasso Kumamoto (Japanese football clubs formerly affiliated with NTT)
Dandelion Racing, a Super Formula team affiliated with NTT Docomo.
Chip Ganassi Racing in the IndyCar Series (affiliated with NTT Data; drivers have included Ryan Briscoe, Tony Kanaan, Scott Dixon, Ed Jones, and Felix Rosenqvist). In 2019, NTT also became the title sponsor of the series.
NTT Pro Cycling, a UCI WorldTeam cycling team based in South Africa.
== See also ==
Telegraph
Telephone
List of telephone operating companies
== References ==
== External links ==
Official website
NTT Research Inc.
NTT Innovation Institute, Inc. Archived 2019-11-11 at the Wayback Machine
NTT Disruption Archived 2020-10-22 at the Wayback Machine
NTT Group | Wikipedia/Nippon_Telegraph_and_Telephone |