In evolutionary biology, mimicry is an evolved resemblance between an organism and another object, often an organism of another species. Mimicry may evolve between different species, or between individuals of the same species. In the simplest case, as in Batesian mimicry, a mimic resembles a model, so as to deceive a dupe, all three being of different species. A Batesian mimic, such as a hoverfly, is harmless, while its model, such as a wasp, is harmful, and is avoided by the dupe, such as an insect-eating bird. Birds hunt by sight, so the mimicry in that case is visual, but in other cases mimicry may make use of any of the senses. Most types of mimicry, including Batesian, are deceptive, as the mimics are not harmful, but Müllerian mimicry, where different harmful species resemble each other, is honest, as when species of wasps and of bees all have genuinely aposematic warning coloration. More complex types may be bipolar, involving only two species, such as when the model and the dupe are the same; this occurs for example in aggressive mimicry, where a predator in wolf-in-sheep's-clothing style resembles its prey, allowing it to hunt undetected. Mimicry is not limited to animals; in Pouyannian mimicry, an orchid flower is the mimic, resembling a female bee, its model; the dupe is the male bee of the same species, which tries to copulate with the flower, enabling it to transfer pollen, so the mimicry is again bipolar. In automimicry, another bipolar system, model and mimic are the same, as when blue lycaenid butterflies have 'tails' or eyespots on their wings that mimic their own heads, misdirecting predator dupes to strike harmlessly. Many other types of mimicry exist.
== Etymology ==
Use of the word mimicry dates to 1637. It derives from the Greek term mimetikos, "imitative", in turn from mimetos, the verbal adjective of mimeisthai, "to imitate". "Mimicry" was first used in zoology by the English entomologists William Kirby and William Spence in 1823. Originally used to describe people, "mimetic" was used in zoology from 1851.
== History ==
=== Ancient ===
Aristotle wrote in his History of Animals that partridges use a deceptive distraction display to lure predators away from their flightless young:
When a man comes by chance upon a young brood [of partridges], and tries to catch them, the hen-bird rolls in front of the hunter, pretending to be lame: the man every moment thinks he is on the point of catching her, and so she draws him on and on, until every one of her brood has had time to escape; hereupon she returns to the nest and calls the young back.
The behaviour is recognised as a form of mimicry by biologists.
=== 19th century ===
In 1823, Kirby and Spence, in their book An Introduction to Entomology, used the term "mimicry" informally to depict the way that the structure and coloration of some insects resembled objects in their environments:
A jumping bug, very similar to the one figured by Schellenberg, also much resembles the lichens of the oak on which I took it. The spectre tribe (Phasma) go still further in this mimicry, representing a small branch with its spray.
The English naturalist Henry Walter Bates worked for several years on butterflies in the Amazon rainforest. Returning home, he described multiple forms of mimicry in an 1862 paper at the Linnean Society in London, and then in his 1863 book The Naturalist on the River Amazons. The term "Batesian mimicry" has since been used in his honour, its usage becoming restricted to the situation in which a harmless mimic gains protection from its predators by resembling a distasteful model. Among the observations in Bates's 1862 paper is the statement:
I was never able to distinguish the Leptalides from the species they imitated, although they belong to a family totally different in structure and metamorphosis from the Heliconidae, without examining them closely after capture.
The German naturalist Fritz Müller also spent many years studying butterflies in the Amazon rainforest. He first published a journal article on mimicry in German in 1878, followed in 1879 by a paper to the Entomological Society of London (translated and presented by Raphael Meldola). He described a situation where different species were each unpalatable to predators, and shared similar, genuine, warning signals. This had puzzled Bates, who could not explain why such species should need to mimic each other if both were harmful and could warn off predators on their own. Müller put forward the first mathematical model of mimicry to explain the phenomenon: if a common predator confuses the two species, individuals in both those species are more likely to survive, as fewer individuals of either species are killed by the predator. The term Müllerian mimicry, named in his honour, has since been used for this mutualistic form of mimicry.
Müller wrote that
The resemblance of the genera named [Ituna and Thyridia] is the more worthy of notice since it occurs between insects both belonging to the group of butterflies which are protected by distastefulness. The explanation which applies in ordinary cases of [Batesian] mimicry—and no other has, so far as I know, been offered—cannot obtain for this imitation among protected species.
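Müller's verbal argument can be made quantitative. The following is a sketch of his 1879 calculation, with the notation chosen here for exposition: suppose that predators must kill a fixed number $n$ of warning-patterned prey per season before they learn to avoid the pattern, and that two unpalatable species have abundances $a_1$ and $a_2$. If the two species look identical, the $n$ deaths are shared in proportion to abundance, so species 1 loses only $n a_1 / (a_1 + a_2)$ individuals rather than $n$. Its absolute gain from the shared pattern is therefore

$$g_1 = n - \frac{n a_1}{a_1 + a_2} = \frac{n a_2}{a_1 + a_2},$$

and the per-individual gains of the two species stand in the ratio

$$\frac{g_1 / a_1}{g_2 / a_2} = \frac{a_2^2}{a_1^2}.$$

The rarer species thus benefits disproportionately, in inverse proportion to the square of its abundance, which is why even strongly defended species can gain from converging on a shared warning pattern.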
== Overview ==
=== Evolved resemblance ===
Mimicry is an evolved resemblance between an organism and another object, often an organism of another species. Mimicry may evolve between different species, or between individuals of the same species. Often, mimicry functions to protect a species from predators. Mimicry systems have three basic roles: a mimic, a model, and a dupe. When these correspond to three separate species, the system is called disjunct; when the roles are taken by just two species, the system is called bipolar. Mimicry evolves if a dupe (such as a predator) perceives a mimic (such as a palatable prey) as a model (the organism it resembles), and is deceived into changing its behaviour to the mimic's selective advantage. The resemblances can be via any sensory modality, including any combination of visual, acoustic, chemical, tactile, or electric. Mimicry may be to the advantage of both organisms that share a resemblance, in which case it is mutualistic; or it can be to the detriment of one, making it parasitic or competitive. The evolutionary convergence between groups is driven by the selective action of the dupe. Birds, for example, use sight to identify palatable insects, whilst avoiding noxious ones. Over time, palatable insects may evolve to resemble noxious ones, making them mimics and the noxious ones models. Models do not have to be more abundant than mimics. In the case of mutualism, each model is also a mimic; all such species can be called "co-mimics". Many harmless species such as hoverflies are Batesian mimics of strongly defended species such as wasps, while many such well-defended species form Müllerian mimicry rings of co-mimics. In the evolution of wasp-like appearance, it has been argued that insects evolved to masquerade as wasps because predatory wasps do not attack each other, and that this mimetic resemblance has had the useful side-effect of deterring vertebrate predators.
Mimicry can result in an evolutionary arms race if mimicry negatively affects the model; in that case the model can evolve a different appearance from the mimic. Mimics may have different models for different life cycle stages, or they may be polymorphic, with different individuals imitating different models, as occurs in Heliconius butterflies. Models tend to be relatively closely related to their mimics, but mimicry can occur between vastly different species, for example when spiders mimic ants. Most known mimics are insects, though many other examples, including vertebrates, plants, and fungi, exist.
=== Evolutionary explanations ===
It is widely accepted that mimicry evolves as a positive adaptation. The lepidopterist and novelist Vladimir Nabokov, however, argued that although natural selection might stabilize a "mimic" form, it would not be necessary to create it. The most widely accepted model used to explain the evolution of mimicry in butterflies is the two-step hypothesis. The first step involves mutation in modifier genes that regulate a complex cluster of linked genes, causing large changes in morphology. The second step consists of selection on genes with smaller phenotypic effects, creating an increasingly close resemblance. This model is supported by empirical evidence suggesting that a few single point mutations cause large phenotypic effects, while numerous others produce smaller effects. Some regulatory elements collaborate to form a supergene for the development of butterfly colour patterns. The model is also supported by computational simulations of population genetics. The Batesian mimicry in Papilio polytes is controlled by the doublesex gene.
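A minimal population-genetics sketch of the two-step idea follows. The parameters, response curve, and variable names are invented for illustration and do not reproduce the published simulations: a single large-effect mutation first jumps resemblance close to the model, after which small-effect modifiers refine it under predator selection.

```python
import random

# Toy two-step model of mimicry evolution (all parameters invented):
# step 1: a rare large-effect mutation jumps resemblance close to the model;
# step 2: small-effect modifiers refine the pattern under predator selection.

POP, GENS = 1000, 200

def survival_probability(resemblance):
    # Better mimics are overlooked by predators more often (invented curve).
    return 0.5 + 0.45 * resemblance

population = [0.0] * POP  # resemblance scores, initially non-mimetic
for gen in range(GENS):
    if gen == 20:
        # Step 1: a large-effect mutation appears in a handful of carriers
        # (several, so the toy run rarely loses it to chance).
        for i in range(5):
            population[i] = 0.7
    survivors = [r for r in population if random.random() < survival_probability(r)]
    # Step 2: offspring inherit parental resemblance plus small modifier effects.
    population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
                  for _ in range(POP)]

print(f"mean resemblance after {GENS} generations: {sum(population) / POP:.2f}")
```

In a typical run the mean resemblance stays near zero until the large-effect mutation appears, then climbs toward 1.0 as modifiers accumulate, mirroring the two steps described above.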
Some mimicry is imperfect. Natural selection drives mimicry only far enough to deceive predators. For example, when predators avoid a mimic that imperfectly resembles a coral snake, the mimic is sufficiently protected.
Convergent evolution is an alternative explanation for why coral reef fish have come to resemble each other; the same applies to benthic marine invertebrates such as sponges and nudibranchs.
=== Living and non-living models ===
In its broadest definition, mimicry can include non-living models. The specific terms masquerade and mimesis are sometimes used when the models are inanimate, and the mimicry's purpose is crypsis. For example, animals such as flower mantises, planthoppers, comma and geometer moth caterpillars resemble twigs, bark, leaves, bird droppings or flowers. In addition, predators may make use of resemblance to harmless objects in aggressive masquerade, to enable them to approach prey. This wolf in sheep's clothing strategy differs from the more specific resemblance to the prey in aggressive mimicry, where the prey is both model and dupe.
Many animals bear eyespots, which are hypothesized to resemble the eyes of larger animals. They may not resemble any specific organism's eyes, and whether or not animals respond to them as eyes is also unclear. The model is usually another species, except in automimicry, where members of the species mimic other members, or other parts of their own bodies, and in inter-sexual mimicry, where members of one sex mimic members of the other.
=== Types ===
Many types of mimicry have been described. An overview of each follows, highlighting the similarities and differences between the various forms. Classification is often based on function with respect to the mimic (e.g., avoiding harm). Some cases may belong to more than one class, e.g., automimicry and aggressive mimicry are not mutually exclusive, as one describes the species relationship between model and mimic, while the other describes the function for the mimic (obtaining food). The terminology used has been debated, as classifications have differed or overlapped; attempts to clarify definitions have led to the partial replacement of old terms with new ones.
== Defensive ==
Mimicry is defensive or protective when organisms are able to avoid harmful encounters by deceiving enemies into treating them as something else.
=== Batesian ===
In Batesian mimicry, the mimic resembles the model, but does not have the attribute that makes it unprofitable to predators (e.g., unpalatability, or the ability to sting). In other words, a Batesian mimic is a sheep in wolf's clothing. Mimics are less likely to be found out (for example by predators) when in low proportion to their model. Such negative frequency-dependent selection applies in most forms of mimicry. Specifically, Batesian mimicry can only be maintained if the harm caused to the predator by eating a model outweighs the benefit of eating a mimic. The nature of learning is weighted in favor of the mimics, for a predator that has a bad first experience with a model tends to avoid anything that looks like it for a long time, and does not re-sample soon to see whether the initial experience was a false negative. However, if mimics become more abundant than models, then the probability of a young predator having a first experience with a mimic increases. Batesian systems are therefore most likely to be stable where the model is more abundant than the mimic.
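This frequency dependence can be illustrated with a toy predator-learning simulation; the parameters (encounter count, memory length) are invented for the sketch and are not drawn from any published model.

```python
import random

def fraction_of_mimics_attacked(p_mimic, encounters=100000, memory=20):
    """Toy model: a predator attacks warning-patterned prey until it has a
    bad experience with a model, then avoids the pattern for `memory`
    encounters. Returns the fraction of encountered mimics that get attacked."""
    avoid, attacked, seen = 0, 0, 0
    for _ in range(encounters):
        is_mimic = random.random() < p_mimic
        seen += is_mimic
        if avoid > 0:
            avoid -= 1            # predator still deterred: prey left alone
        elif is_mimic:
            attacked += 1         # palatable mimic is eaten
        else:
            avoid = memory        # noxious model re-teaches avoidance
    return attacked / max(seen, 1)

for p in (0.1, 0.5, 0.9):
    print(f"mimic proportion {p:.1f}: fraction attacked = "
          f"{fraction_of_mimics_attacked(p):.2f}")
```

Rare mimics are mostly left alone because the predator's avoidance is constantly refreshed by models; common mimics are attacked far more often, matching the argument that Batesian systems are most stable when models outnumber mimics.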
There are many Batesian mimics among butterflies and moths. Consul fabius and Eresia eunice imitate unpalatable Heliconius butterflies such as H. ismenius. Limenitis arthemis imitates the poisonous pipevine swallowtail (Battus philenor). Several palatable moths produce ultrasonic click calls to mimic unpalatable tiger moths. Octopuses of the genus Thaumoctopus (the mimic octopus) are able to intentionally alter their body shape and coloration to resemble dangerous sea snakes or lionfish. The helmeted woodpecker (Dryocopus galeatus), a rare species which lives in the Atlantic Forest of Brazil, Paraguay, and Argentina, has a similar red crest, black back, and barred underside to two larger woodpeckers: Dryocopus lineatus and Campephilus robustus. This mimicry reduces attacks on D. galeatus. Batesian mimicry also occurs in the plant kingdom, where the chameleon vine adapts its leaf shape and colour to match that of the plant it is climbing.
=== Müllerian ===
In Müllerian mimicry, two or more species have similar warning or aposematic signals and both share genuine anti-predation attributes (e.g. being unpalatable), as first described in Heliconius butterflies. This type of mimicry is unique in several respects. Firstly, both the mimic and the model benefit from the interaction, which could thus be classified as mutualism. The signal receiver also benefits by this system, despite being deceived about species identity, as it is able to generalize the pattern to potentially harmful encounters. The distinction between mimic and model that is clear in Batesian mimicry is also blurred. Where one species is scarce and another abundant, the rare species can be said to be the mimic. When both are present in similar numbers, however, it makes more sense to speak of each as a co-mimic than of distinct 'mimic' and 'model' species, as their warning signals tend to converge. Also, the mimetic species may exist on a continuum from harmless to highly noxious, so Batesian mimicry grades smoothly into Müllerian convergence.
=== Emsleyan/Mertensian ===
Emsleyan or Mertensian mimicry describes the unusual case where a deadly prey mimics a less dangerous species. It was first proposed by M. G. Emsley in 1966 as a possible explanation for how a predator can learn to avoid a very dangerous aposematic animal, such as a coral snake, when the predator is very likely to die, making learning unlikely. The theory was developed by the German biologist Wolfgang Wickler who named it after the German herpetologist Robert Mertens. The scenario is unlike Müllerian mimicry, where the most harmful species is the model. But if a predator dies on its first encounter with a deadly snake, it has no occasion to learn to recognize the snake's warning signals. There would then be no advantage for an extremely deadly snake in being aposematic: any predator that attacked it would be killed before it could learn to avoid the deadly prey, so the snake would be better off being camouflaged to avoid attacks. But if the predator first learnt to avoid a less deadly warning-coloured snake, the deadly species could profit by mimicking the less dangerous snake. Some harmless milk snakes (Lampropeltis triangulum), the moderately toxic false coral snakes (Erythrolamprus aesculapii), and the deadly coral snakes (Micrurus) all have a red background color with black and white/yellow rings. In this system, both the milk snakes and the deadly coral snakes are mimics, while the false coral snakes are the model.
=== Wasmannian ===
In Wasmannian mimicry, the mimic resembles a model that it lives along with in a nest or colony. Most of the models here are eusocial insects, principally ants.
=== Gilbertian ===
Gilbertian mimicry is bipolar, involving only two species. The potential host (or prey) drives away its parasite (or predator) by mimicking it, the reverse of host-parasite aggressive mimicry. The term was coined by Georges Pasteur for such rare mimicry systems, and is named after the American ecologist Lawrence E. Gilbert, who described it in 1975. The classical instance of Gilbertian mimicry is in the plant genus Passiflora, which is grazed by the micropredator larvae of some Heliconius butterflies. The host plants have evolved stipules that mimic mature Heliconius eggs near the point of hatching. The butterflies avoid laying eggs near existing ones, reducing intraspecific competition between caterpillars, which are also cannibalistic; those that lay on vacant leaves provide their offspring with a greater chance of survival. The stipules thus appear to have evolved as Gilbertian mimics of butterfly eggs, under selection pressure from these caterpillars.
=== Browerian ===
Browerian mimicry, named after Lincoln P. Brower and Jane Van Zandt Brower who first described it in 1967, is a postulated form of automimicry, where the model belongs to the same species as the mimic. This is the analogue of Batesian mimicry within a single species, and occurs when there is a palatability spectrum within a population. Examples include the monarch and the queen from the subfamily Danainae, which feed on milkweed species of varying toxicity. These species store toxins from their host plants, and the toxins are maintained even in the adult. As levels of toxin vary depending on diet, some individuals are more toxic than others; the less toxic individuals profit from the toxicity of the more toxic ones, just as hoverflies benefit from mimicking well-defended wasps.
=== Misdirection by automimicry ===
One form of automimicry is where one part of an organism's body resembles another part. For example, the tails of some snakes resemble their heads; they move backwards when threatened and present the predator with the tail, improving their chances of escape without fatal harm. Some fishes have eyespots near their tails, and when mildly alarmed swim slowly backwards, presenting the tail as a head. Some insects such as some lycaenid butterflies have tail patterns and appendages of various degrees of sophistication that promote attacks at the rear rather than at the head. Several species of pygmy owl bear "false eyes" on the back of the head, misleading predators into reacting as though they were the subject of an aggressive stare. Many insects have filamentous "tails" at the ends of their wings and patterns of markings on the wings themselves. These combine to create a "false head". This misdirects predators such as birds and jumping spiders. Spectacular examples occur in the hairstreak butterflies; when perching on a twig or flower, they commonly do so upside down and shift their rear wings repeatedly, causing antenna-like movements of the "tails" on their wings. Studies of rear-wing damage support the hypothesis that this strategy is effective in deflecting attacks from the insect's head.
== Aggressive ==
=== Predators ===
Aggressive mimicry is found in predators or parasites that share some of the characteristics of a harmless species, allowing them to avoid detection by their prey or host; the strategy resembles a wolf in sheep's clothing, though no conscious deceptive intent is involved. The mimic may resemble the prey or host itself, or another organism that does not threaten the prey or host.
Several spiders use aggressive mimicry to lure prey. Species such as the silver argiope (Argiope argentata) employ prominent patterns in the middle of their webs, such as zigzags. These may reflect ultraviolet light, and mimic the pattern known as nectar guides seen in many flowers. Spiders change their webs from day to day; this can be explained by the ability of bees to remember web patterns and avoid them.
Another case is where males are lured towards what seems to be a sexually receptive female. The model in this situation is the same species as the dupe. Female fireflies of the genus Photuris emit light signals that mimic the mating signals of females of the genus Photinus. Male fireflies from several different genera are attracted to these "femmes fatales", and are captured and eaten. Each female has a repertoire of signals matching the delay and duration of the flashes of the female of the corresponding species.
Some carnivorous plants may be able to increase their rate of capturing insect prey through mimicry.
A different aggressive strategy is to mimic a mutualistic symbiont of the prey. Cleaner fish eat parasites and dead skin from client fish. Some allow the cleaner to venture inside their body to hunt these parasites. However, the sabre-toothed blenny or false cleanerfish (Aspidontus taeniatus) mimics the bluestreak cleaner wrasse (Labroides dimidiatus), which is recognized by other fishes as a cleaner. The false cleanerfish resembles the cleaner, and mimics the cleaner's "dance". Once it is allowed close to the client, it attacks, biting off a piece of its fin before fleeing. Fish wounded in this fashion soon learn to distinguish mimic from model, but because the similarity is close they also become much more cautious of the model.
A mechanism that does not involve any luring is seen in the zone-tailed hawk, which resembles the turkey vulture. It flies amongst the vultures, effectively camouflaged as a vulture which poses no threat to the hawk's prey. It hunts by suddenly breaking from the formation and ambushing its prey.
=== Parasites ===
Parasites can be aggressive mimics, though the situation is somewhat different from those outlined previously. They can mimic their hosts' natural prey, allowing themselves to be eaten as a pathway into their host. Leucochloridium, a genus of flatworm, matures in the digestive system of songbirds, its eggs then passing out of the bird in the faeces. They are then taken up by Succinea, a terrestrial snail. The eggs develop into sporocysts in this intermediate host, and must then reach a suitable bird to mature in. Since the host birds do not eat snails, the sporocyst has another strategy for reaching its host's intestine: the sporocyst-sacs are brightly coloured and pulsate in the snail's eye stalks, coming to resemble an irresistible meal for a songbird. In this way, the parasite can bridge the gap between hosts, allowing it to complete its life cycle. A nematode (Myrmeconema neotropicum) changes the colour of the abdomen of workers of the canopy ant Cephalotes atratus to make it appear like the ripe fruits of Hyeronima alchorneoides. It also changes the behaviour of the ant so that the gaster (rear part) is held raised. This presumably increases the chances of the ant being eaten by birds.
== Reproductive ==
Reproductive mimicry occurs when the actions of the dupe directly aid in the mimic's reproduction. This is common in plants with deceptive flowers that do not provide the reward they seem to offer and it may occur in Papua New Guinea fireflies, in which the signal of Pteroptyx effulgens is used by P. tarsalis to form aggregations to attract females. Other forms of mimicry have a reproductive component, such as Vavilovian mimicry involving seeds, vocal mimicry in birds, and aggressive and Batesian mimicry in brood parasite-host systems.
=== Bakerian and Dodsonian ===
Bakerian mimicry, named after Herbert G. Baker, is a form of automimicry where female flowers mimic male flowers of their own species, cheating pollinators out of a reward. This reproductive mimicry may not be readily apparent as members of the same species may still exhibit some degree of sexual dimorphism. It is common in many species of Caricaceae.
In Dodsonian mimicry, named after Calaway H. Dodson, the model belongs to a different species than the mimic. By resembling the model, a flower can lure its pollinators without offering nectar. The mechanism occurs in several orchids, including Epidendrum ibaguense which mimics flowers of Lantana camara and Asclepias curassavica, and is pollinated by monarch butterflies and perhaps hummingbirds.
=== Kirbyan mimicry, brood parasitism ===
Brood parasitism or Kirbyan mimicry is a two-species system where a brood parasite mimics its host. Cuckoos are a canonical example; the female cuckoo has its offspring raised by a bird of a different species, cutting down the biological mother's parental investment. The ability to lay eggs that mimic the host's eggs is the key adaptation. The adaptation to different hosts is inherited through the female line in so-called gentes (gens, singular). Intraspecific brood parasitism, where a female lays in a conspecific's nest, as illustrated by the goldeneye duck (Bucephala clangula), does not involve mimicry. The parasitic butterfly Phengaris rebeli parasitizes the ant species Myrmica schencki by releasing chemicals that fool the worker ants into believing that the caterpillar larvae are ant larvae. This enables the larvae to be brought directly into the ants' nest.
=== Pouyannian ===
In Pouyannian mimicry, a flower mimics a female of a certain insect species, inducing the males of that species to try to copulate with the flower. This is much like aggressive mimicry in fireflies, but with a more benign outcome for the pollinator. The mechanism is named after Maurice-Alexandre Pouyanne, who first described the phenomenon. It is most common in orchids, which mimic females of the order Hymenoptera (generally bees and wasps), and may account for around 60% of pollinations. Depending on the morphology of the flower, a pollen sac called a pollinium is attached to the head or abdomen of the male. This is then transferred to the stigma of the next flower the male tries to inseminate, resulting in pollination. The mimicry works through a combination of visual, olfactory, and tactile signals.
=== Vavilovian ===
Vavilovian mimicry is found in weeds that come to share characteristics with a domesticated plant through unintentional selection. It is named after the Russian botanist and geneticist Nikolai Vavilov. Selection against the weed may occur either by manually killing the weed, or by separating its seeds from those of the crop by winnowing. Vavilovian mimicry illustrates unintentional selection by man: weeders have no wish to breed weeds that look ever more like the crop, but by removing only the individuals they can distinguish from it, they leave no other outcome. For example, early barnyard grass, Echinochloa oryzoides, is a weed in rice fields and looks similar to rice; its seeds are often mixed with rice and have become difficult to separate through Vavilovian mimicry. Vavilovian mimics may eventually be domesticated themselves, as in the case of rye in wheat; Vavilov called these weed-crops secondary crops.
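The unintentional selection can be caricatured in a few lines of code; the trait scale, tolerance, and numbers below are invented for the sketch. Winnowing acts as a filter that keeps only weed seeds close to the crop's seed characteristics, so the weed population converges on the crop over generations.

```python
import random

CROP_TRAIT = 1.0  # idealized crop-seed trait value (invented scale)

def winnow(seeds, tolerance):
    """Keep only seeds whose trait falls within `tolerance` of the crop's;
    everything else is separated out and discarded, as in real winnowing."""
    return [s for s in seeds if abs(s - CROP_TRAIT) < tolerance]

weeds = [random.uniform(0.2, 1.0) for _ in range(5000)]
for generation in range(30):
    survivors = winnow(weeds, tolerance=0.3)
    # Surviving seeds are resown with the crop, with slight trait variation.
    weeds = [random.gauss(random.choice(survivors), 0.05) for _ in range(5000)]

print(f"mean weed-seed trait after selection: {sum(weeds) / len(weeds):.2f}")
```

The weeder never chooses crop-like weeds on purpose; the filter alone drives the resemblance, which is the essence of Vavilovian mimicry.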
=== Inter-sexual mimicry ===
Inter-sexual mimicry (a type of automimicry, as it occurs within a single species) occurs when individuals of one sex in a species mimic members of the opposite sex to facilitate sneak mating. An example is the three male forms of the marine isopod Paracerceis sculpta. Alpha males are the largest and guard a harem of females. Beta males mimic females and manage to enter the harem without being detected by the alpha males, allowing them to mate. Gamma males, the smallest, mimic juveniles, which likewise allows them to mate without the alpha males detecting them. Similarly, among common side-blotched lizards, some males mimic the yellow throat coloration and even the mating rejection behaviour of the other sex to sneak matings with guarded females. These males look and behave like unreceptive females. This strategy is effective against "usurper" males with orange throats, but ineffective against blue-throated "guarder" males, which chase them away. Female spotted hyenas have pseudo-penises that make them look like males.
== See also ==
Attitude (psychology)
Biomimicry
Chemical mimicry
Locomotor mimicry
Mimic octopus
Molecular mimicry
Psychology
Preadaptation
Semiotics
== Notes ==
== References ==
== Further reading ==
Brower, L. P., ed. (1988). Mimicry and the Evolutionary Process. Chicago: University of Chicago Press. ISBN 0-226-07608-3. (A supplement of volume 131 of the journal American Naturalist, dedicated to E. B. Ford.)
Carpenter, G. D. Hale; Ford, E. B. (1933). Mimicry. London: Methuen.
Cott, H. B. (1940). Adaptive Coloration in Animals. London: Methuen. ISBN 0-416-30050-2.
Dafni, A. (1984). "Mimicry and Deception in Pollination". Annual Review of Ecology and Systematics. 15: 259–278. doi:10.1146/annurev.es.15.110184.001355.
Edmunds, M. (1974). Defence in Animals: A Survey of Anti-Predator Defences. Harlow, Essex and New York: Longman. ISBN 0-582-44132-3.
Evans, M. A. (1965). "Mimicry and the Darwinian Heritage". Journal of the History of Ideas. 26 (2): 211–220. doi:10.2307/2708228. JSTOR 2708228.
Owen, D. (1980). Camouflage and Mimicry. Oxford: Oxford University Press. ISBN 0-19-217683-8.
Pasteur, Georges (1982). "A classificatory review of mimicry systems". Annual Review of Ecology and Systematics. 13: 169–199. doi:10.1146/annurev.es.13.110182.001125.
Stevens, M. (2016). Cheats and Deceits: How Animals and Plants Exploit and Mislead. Oxford: Oxford University Press. ISBN 978-0-19-870789-9.
Vane-Wright, R. I. (1976). "A unified classification of mimetic resemblances". Biological Journal of the Linnean Society. 8: 25–56. doi:10.1111/j.1095-8312.1976.tb00240.x.
Wickler, W. (1968). Mimicry in Plants and Animals (translated from the German). New York: McGraw-Hill. ISBN 0-07-070100-8.
Wiens, D. (1978). "Mimicry in Plants". In Max K. Hecht; William C. Steere; Bruce Wallace (eds.). Evolutionary Biology. Vol. 11. pp. 365–403. doi:10.1007/978-1-4615-6956-5_6. ISBN 978-1-4615-6958-9.
=== Children's ===
Hoff, M. K. (2003). Mimicry and Camouflage. Mankato, Minnesota: Creative Education. ISBN 1-58341-237-9.
== External links ==
Warning colour and mimicry • Lecture outline from University College London
Camouflage and Mimicry in Fossils
Medical model is the term coined by psychiatrist R. D. Laing in his The Politics of the Family and Other Essays (1971), for the "set of procedures in which all doctors are trained". It includes complaint, history, physical examination, ancillary tests if needed, diagnosis, treatment, and prognosis with and without treatment.
The medical model embodies basic assumptions about medicine that drive research and theorizing about physical or psychological difficulties on a basis of causation and remediation.
It can be contrasted with other models that make different basic assumptions. Examples include the holistic model of the alternative health movement and the social model of the disability rights movement, as well as the biopsychosocial and recovery models of mental disorders. For example, Gregory Bateson's double bind theory of schizophrenia focuses on environmental rather than medical causes. These models are not mutually exclusive. A model is not a statement of absolute reality or a belief system but a tool for helping patients. Thus, utility is the main criterion, and the utility of a model depends on context.
== Other uses ==
=== In psychology ===
In psychology, the term medical model refers to the assumption that psychopathology is the result of one's biology, that is to say, a physical/organic problem in brain structures, neurotransmitters, genetics, the endocrine system, etc., as with traumatic brain injury, Alzheimer's disease, or Down's syndrome. The medical model is useful in these situations as a guide for diagnosis, prognosis, and research. However, for most mental disorders, exclusive reliance on the medical model leads to an incomplete understanding, and, frequently, to incomplete or ineffective treatment interventions. The current Diagnostic and Statistical Manual of Mental Disorders (DSM-5) addresses this point in part, stating: "However, in the absence of clear biological markers or clinically useful measurements of severity for many mental disorders, it has not been possible to completely separate normal and pathological symptom expressions contained in diagnostic criteria. This gap in information is particularly problematic in clinical situations in which the patient's symptom presentation by itself (particularly in mild forms) is not inherently pathological and may be encountered in individuals for whom a diagnosis of 'mental disorder' would be inappropriate."
The Critical Psychiatry Network, a group of psychiatrists who critique the practice of psychiatry on many grounds, feel that the medical model for mental illness can result in poor treatment choices.
=== Germ theory of disease ===
The rise of modern scientific medicine during the 19th century had a great impact on the development of the medical model. Especially important was the development of the "germ theory" of disease by European medical researchers such as Louis Pasteur and Robert Koch. During the late 19th and early 20th centuries, the physical causes of a variety of diseases were uncovered, which, in turn, led to the development of effective forms of treatment.
== Concept of "disease" and "injury" ==
The concepts of "disease" and "injury" are central to the medical model. In general, "disease" or "injury" refer to some deviation from normal body functioning that has undesirable consequences for the affected individual. An important aspect of the medical model is that it regards signs (objective indicators such as an elevated temperature) and symptoms (subjective feelings of distress expressed by the patient) as indicative of an underlying physical abnormality (pathology) within the individual. According to the medical model, medical treatment, wherever possible, should be directed at the underlying pathology in an attempt to correct the abnormality and cure the disease. In regard to many mental illnesses, for example, the assumption is that the cause of the disorder lies in abnormalities within the affected individual's brain (especially their brain neurochemistry). That carries the implicit conclusion that disordered behaviors are not learned but are spontaneously generated by the disordered brain. According to the medical model, for treatment (such as drugs), to be effective, it should be directed as closely as possible at correcting the theorized chemical imbalance in the brain of the person with mental illness.
=== Importance of diagnosis ===
Proper diagnosis (that is, the categorization of illness signs and symptoms into meaningful disease groupings) is essential to the medical model. Placing the patient's signs and symptoms into the correct diagnostic category can:
Provide the physician with clinically useful information about the course of the illness over time (its prognosis);
Point to (or at least suggest) a specific underlying cause or causes for the disorder; and
Direct the physician to specific treatment or treatments for the condition.
For example, if a patient presents to a primary care provider with symptoms of a given illness, by taking a thorough history, performing assessments (such as auscultation and palpation), and, in some cases, ordering diagnostic tests the primary care provider can make a reasonable conclusion about the cause of the symptoms. Based on clinical experience and available evidence, the healthcare professional can identify treatment options that are likely to be successful.
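The diagnostic step can be caricatured as matching collected signs and symptoms against known disease patterns; the sketch below is purely illustrative (real diagnosis weighs prevalence, test characteristics, and clinical judgment, and the conditions and findings here are invented).

```python
# Purely illustrative sketch of the medical model's diagnostic step:
# rank candidate categories by how much of each known pattern the
# observed findings cover. Conditions and findings are invented.

DISEASE_PATTERNS = {
    "condition A": {"fever", "cough", "fatigue"},
    "condition B": {"fever", "rash"},
    "condition C": {"cough", "wheeze"},
}

def rank_candidates(findings):
    """Rank candidate diagnoses by the share of their pattern observed."""
    scores = {name: len(pattern & findings) / len(pattern)
              for name, pattern in DISEASE_PATTERNS.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_candidates({"fever", "cough"}))
# e.g. condition A scores highest: two of its three findings are present
```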
== Other important aspects ==
Finally, adherence to the medical model has a number of other consequences for the patient and society as a whole, both positive and negative:
In the medical model, the physician was traditionally seen as the expert, and patients were expected to comply with their advice. The physician assumes an authoritarian position in relation to the patient; according to the medical model, this is necessary and to be expected because of the physician's specific expertise. However, in recent years, the move towards patient-centered care has resulted in greater patient involvement in many cases.
In the medical model, the physician may be viewed as the dominant health care professional, who is the professional trained in diagnosis and treatment.
An ill patient should not be held responsible for the condition. The patient should not be blamed or stigmatized for the illness.
Under the medical model, the disease condition of the patient is of major importance. Social, psychological, and other "external" factors, which may influence patient behavior, may be given less attention.
== See also ==
Andersen healthcare utilization model
Biomedical model
Medical model of disability
Reductionism
Social constructionism
== References ==
== External links ==
'Medical model' vs 'social model' British Film Institute Education.
Disability Awareness at the University of Sheffield, UK
Medical model Open University UK
A mental model is an internal representation of external reality: that is, a way of representing reality within one's mind. Such models are hypothesized to play a major role in cognition, reasoning and decision-making. The term for this concept was coined in 1943 by Kenneth Craik, who suggested that the mind constructs "small-scale models" of reality that it uses to anticipate events. Mental models can help shape behaviour, including approaches to solving problems and performing tasks.
In psychology, the term mental models is sometimes used to refer to mental representations or mental simulation generally. The concepts of schema and conceptual models are cognitively adjacent. Elsewhere, it is used to refer to the "mental model" theory of reasoning developed by Philip Johnson-Laird and Ruth M. J. Byrne.
== History ==
The term mental model is believed to have originated with Kenneth Craik in his 1943 book The Nature of Explanation. Georges-Henri Luquet in Le dessin enfantin (Children's drawings), published in 1927 by Alcan, Paris, argued that children construct internal models, a view that influenced, among others, child psychologist Jean Piaget.
Jay Wright Forrester defined general mental models thus:
The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system (Forrester, 1971).
Philip Johnson-Laird published Mental Models: Towards a Cognitive Science of Language, Inference and Consciousness in 1983. In the same year, Dedre Gentner and Albert Stevens edited a collection of chapters in a book also titled Mental Models. The first line of their book explains the idea further: "One function of this chapter is to belabor the obvious; people's views of the world, of themselves, of their own capabilities, and of the tasks that they are asked to perform, or topics they are asked to learn, depend heavily on the conceptualizations that they bring to the task." (see the book: Mental Models).
Since then, there has been much discussion and use of the idea in human-computer interaction and usability by researchers including Donald Norman and Steve Krug (in his book Don't Make Me Think). Walter Kintsch and Teun A. van Dijk, using the term situation model (in their book Strategies of Discourse Comprehension, 1983), showed the relevance of mental models for the production and comprehension of discourse.
Charlie Munger popularized the use of multi-disciplinary mental models for making business and investment decisions.
== Mental models and reasoning ==
One view of human reasoning is that it depends on mental models. In this view, mental models can be constructed from perception, imagination, or the comprehension of discourse (Johnson-Laird, 1983). Such mental models are similar to architects' models or to physicists' diagrams in that their structure is analogous to the structure of the situation that they represent, unlike, say, the structure of logical forms used in formal rule theories of reasoning. In this respect, they are a little like pictures in the picture theory of language described by philosopher Ludwig Wittgenstein in 1922. Philip Johnson-Laird and Ruth M.J. Byrne developed their mental model theory of reasoning which makes the assumption that reasoning depends, not on logical form, but on mental models (Johnson-Laird and Byrne, 1991).
=== Principles of mental models ===
Mental models are based on a small set of fundamental assumptions (axioms), which distinguish them from other proposed representations in the psychology of reasoning (Byrne and Johnson-Laird, 2009). Each mental model represents a possibility. A mental model represents one possibility, capturing what is common to all the different ways in which the possibility may occur (Johnson-Laird and Byrne, 2002). Mental models are iconic, i.e., each part of a model corresponds to each part of what it represents (Johnson-Laird, 2006). Mental models are based on a principle of truth: they typically represent only those situations that are possible, and each model of a possibility represents only what is true in that possibility according to the proposition. However, mental models can represent what is false, temporarily assumed to be true, for example, in the case of counterfactual conditionals and counterfactual thinking (Byrne, 2005).
=== Reasoning with mental models ===
People infer that a conclusion is valid if it holds in all the possibilities. Procedures for reasoning with mental models rely on counter-examples to refute invalid inferences; they establish validity by ensuring that a conclusion holds over all the models of the premises. Reasoners focus on a subset of the possible models of multiple-model problems, often just a single model. The ease with which reasoners can make deductions is affected by many factors, including age and working memory (Barrouillet, et al., 2000). They reject a conclusion if they find a counterexample, i.e., a possibility in which the premises hold, but the conclusion does not (Schroyens, et al. 2003; Verschueren, et al., 2005).
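The counterexample-search procedure lends itself to a computational sketch. The toy code below is illustrative only: it enumerates complete truth assignments, whereas the psychological theory holds that people typically represent only a subset of possibilities; the example premises are invented.

```python
from itertools import product

def models(variables, premises):
    """Yield the 'models': assignments in which every premise holds."""
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(premise(world) for premise in premises):
            yield world

def validate(variables, premises, conclusion):
    """A conclusion is valid iff it holds in every model of the premises;
    a model in which it fails is a counterexample refuting the inference."""
    for world in models(variables, premises):
        if not conclusion(world):
            return False, world  # counterexample found
    return True, None

# Example: from "if it rains, the street is wet" and "it rains",
# conclude "the street is wet" (modus ponens).
premises = [lambda w: (not w["rain"]) or w["wet"],  # if rain then wet
            lambda w: w["rain"]]                    # it rains
print(validate(["rain", "wet"], premises, lambda w: w["wet"]))  # (True, None)
```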
=== Criticisms ===
Scientific debate continues about whether human reasoning is based on mental models, versus formal rules of inference (e.g., O'Brien, 2009), domain-specific rules of inference (e.g., Cheng & Holyoak, 2008; Cosmides, 2005), or probabilities (e.g., Oaksford and Chater, 2007). Many empirical comparisons of the different theories have been carried out (e.g., Oberauer, 2006).
== Mental models of dynamic systems: mental models in system dynamics ==
=== Characteristics ===
A mental model is generally:
founded on unquantifiable, impugnable, obscure, or incomplete facts;
flexible – considerably variable, in a positive as well as a negative sense;
an information filter that causes selective perception, the perception of only selected parts of the information;
very limited compared with the complexities of the world; even when a scientific model is extensive and in accordance with a certain reality, the derivation of logical consequences from it must take into account such restrictions as working memory (limits on the number of elements people can hold in mind), gestalt effects, and failures of the principles of logic;
dependent on sources of information that cannot be found anywhere else, are available at any time, and can be used.
Mental models are a fundamental way to understand organizational learning. Mental models, in popular science parlance, have been described as "deeply held images of thinking and acting". Mental models are so basic to understanding the world that people are hardly conscious of them.
=== Expression of mental models of dynamic systems ===
S.N. Groesser and M. Schaffernicht (2012) describe three basic methods which are typically used:
Causal loop diagrams – displaying the direction and polarity of causal connections and the resulting feedback loops
System structure diagrams – another way to express the structure of a qualitative dynamic system
Stock and flow diagrams – a way to quantify the structure of a dynamic system (a minimal simulation sketch follows below)
These methods allow a mental model of a dynamic system to be shown as an explicit, written model about a certain system based on internal beliefs. Analyzing these graphical representations has been an increasing area of research across many social science fields. Additionally, software tools that attempt to capture and analyze the structural and functional properties of individual mental models, such as Mental Modeler, "a participatory modeling tool based in fuzzy-logic cognitive mapping", have recently been developed and used to collect, compare, and combine mental model representations collected from individuals for use in social science research, collaborative decision-making, and natural resource planning.
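As a minimal illustration of the stock-and-flow method mentioned above, a single stock with one constant inflow and one level-dependent outflow can be simulated with simple Euler steps; all names and numbers here are invented for the sketch rather than taken from any particular tool.

```python
# Minimal stock-and-flow sketch: one stock (water in a tank), a constant
# inflow, and an outflow proportional to the stock (a balancing feedback
# loop), integrated with Euler steps. All values are invented.

stock = 100.0          # initial tank level
inflow = 5.0           # units per time step
drain_fraction = 0.04  # outflow per unit of stock per time step
dt = 1.0

for step in range(200):
    outflow = drain_fraction * stock
    stock += dt * (inflow - outflow)

# The balancing loop settles the stock at inflow / drain_fraction = 125.
print(f"level after 200 steps: {stock:.1f}")
```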
=== Mental model in relation to system dynamics and systemic thinking ===
Because a mental model is a simplification of reality, systems thinking and system dynamics can help make it a better match for reality.
These two disciplines help to construct mental models that coordinate better with reality and to simulate them accurately. They increase the probability that the consequences of decisions and actions accord with what was planned.
System dynamics – extending mental models through the creation of explicit models, which are clear, easily communicated and can be compared with each other.
Systemic thinking – seeking the means to improve the mental models and thereby improve the quality of dynamic decisions that are based on mental models.
Experimental studies carried out in weightlessness and on Earth using neuroimaging showed that humans are endowed with a mental model of the effects of gravity on object motion.
=== Single and double-loop learning ===
Having analyzed the basic characteristics of mental models, it is necessary to consider how they change, that is, the process of learning. Learning is a feedback process, and the feedback loops can be illustrated as single-loop learning or double-loop learning.
==== Single-loop learning ====
Mental models affect the way that people work with information, and also how they determine the final decision. The decision itself changes, but the mental models remain the same. It is the predominant method of learning, because it is very convenient.
==== Double-loop learning ====
Double-loop learning is used when it is necessary to change the mental model on which a decision depends. Unlike single loops, this model includes a shift in understanding, from simple and static to broader and more dynamic, such as taking into account changes in the surroundings and the need to express those changes in the mental model.
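The contrast between the two loops can be caricatured in code. In the toy sketch below, with all quantities invented, a single-loop learner only adjusts its action toward a fixed goal, while a double-loop learner also revises the goal, part of its mental model, when repeated adjustment keeps failing.

```python
# Toy contrast between single- and double-loop learning (numbers invented).

def learn(world_response, goal, double_loop=False, steps=60):
    action, errors = 0.0, []
    for _ in range(steps):
        error = goal - world_response(action)
        errors.append(abs(error))
        action += 0.5 * error                 # single loop: adjust the action
        if double_loop and len(errors) > 5 and min(errors[-5:]) > 1.0:
            goal *= 0.8                       # double loop: question the goal
    return errors[-1]

# A world that saturates: no action can push the outcome above 10.
saturating_world = lambda a: min(a, 10.0)
print("single-loop final error:", learn(saturating_world, goal=15.0))
print("double-loop final error:", learn(saturating_world, goal=15.0, double_loop=True))
```

The single-loop learner keeps pushing the action against an unreachable goal and its error never falls below 5; the double-loop learner eventually lowers the goal below the world's ceiling and its error shrinks, which is the shift in understanding described above.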
== See also ==
== Notes ==
== References ==
Barrouillet, P. et al. (2000). Conditional reasoning by mental models: chronometric and developmental evidence. Cognit. 75, 237-266.
Byrne, R.M.J. (2005). The Rational Imagination: How People Create Counterfactual Alternatives to Reality. Cambridge MA: MIT Press.
Byrne, R.M.J. & Johnson-Laird, P.N. (2009). 'If' and the problems of conditional reasoning. Trends in Cognitive Sciences. 13, 282-287
Cheng, P.C. and Holyoak, K.J. (2008) Pragmatic reasoning schemas. In Reasoning: studies of human inference and its foundations (Adler, J.E. and Rips, L.J., eds), pp. 827–842, Cambridge University Press
Cosmides, L. et al. (2005) Detecting cheaters. Trends in Cognitive Sciences. 9,505–506
Forrester, J. W. (1971) Counterintuitive behavior of social systems. Technology Review.
Johnson-Laird, P.N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge: Cambridge University Press.
Johnson-Laird, P.N. (2006) How We Reason. Oxford University Press
Johnson-Laird, P.N. and Byrne, R.M.J. (2002) Conditionals: a theory of meaning, inference, and pragmatics. Psychol. Rev. 109, 646–678
Oaksford, M. and Chater, N. (2007) Bayesian Rationality. Oxford University Press
Oberauer K. (2006) Reasoning with conditionals: A test of formal models of four theories. Cognit. Psychol. 53:238–283.
O’Brien, D. (2009). Human reasoning includes a mental logic. Behav. Brain Sci. 32, 96–97
Schroyens, W. et al. (2003). In search of counterexamples: Deductive rationality in human reasoning. Quart. J. Exp. Psychol. 56(A), 1129–1145.
Verschueren, N. et al. (2005). Everyday conditional reasoning: A working memory-dependent tradeoff between counterexample and likelihood use. Mem. Cognit. 33, 107-119.
== Further reading ==
Chater, N. et al. (2006) Probabilistic Models of Cognition: Conceptual Foundations. Trends Cogn Sci 10(7):287-91. doi:10.1016/j.tics.2006.05.007.
Gentner, Dedre; Stevens, Albert L., eds. (1983). Mental Models. Hillsdale: Erlbaum 1983.
Groesser, S.N. (2012). Mental model of dynamic systems. In N.M. Seel (Ed.). The encyclopedia of the sciences of learning (Vol. 5, pp. 2195–2200). New York: Springer.
Groesser, S.N. & Schaffernicht, M. (2012). Mental Models of Dynamic Systems: Taking Stock and Looking Ahead. System Dynamics Review, 28(1): 46-68, Wiley.
Johnson-Laird, P.N. 2005. The History of Mental Models
Jones, N. A. et al. (2011). "Mental Models: an interdisciplinary synthesis of theory and methods" Ecology and Society.16 (1): 46.
Jones, N. A. et al. (2014). "Eliciting mental models: a comparison of interview procedures in the context of natural resource management" Ecology and Society.19 (1): 13.
Luquet, Georges-Henri (2001). Children's Drawings. Free Association Books. ISBN 1-85343-516-3
Prediger, S. (2008). "Discontinuities for mental models - a source for difficulties with the multiplication of fractions" Proceedings of ICME-11, Topic Study Group 10, Research and Development of Number Systems and Arithmetic. (See also Prediger's references to Fischbein 1985 and Fischbein 1989, "Tacit models and mathematical reasoning".)
Robles-De-La-Torre, G. & Sekuler, R. (2004). "Numerically Estimating Internal Models of Dynamic Virtual Objects Archived 2008-05-17 at the Wayback Machine". In: ACM Transactions on Applied Perception 1(2), pp. 102–117.
Sterman, John D. A Skeptic’s Guide to Computer Models, Massachusetts Institute of Technology
== External links ==
Mental Models and Reasoning Laboratory
Systems Analysis, Modelling and Prediction Group, University of Oxford
System Dynamics Society
A model organism is a non-human species that is extensively studied to understand particular biological phenomena, with the expectation that discoveries made in the model organism will provide insight into the workings of other organisms. Model organisms are widely used to research human disease when human experimentation would be unfeasible or unethical. This strategy is made possible by the common descent of all living organisms, and the conservation of metabolic and developmental pathways and genetic material over the course of evolution.
Research using animal models has been central to most of the achievements of modern medicine, contributing most of the basic knowledge in fields such as human physiology and biochemistry and playing significant roles in fields such as neuroscience and infectious disease. Landmark results, from Thomas Hunt Morgan's identification of chromosomes as the vector of inheritance for genes and the production of the diphtheria antitoxin, to the 1922 discovery of insulin, the near-eradication of polio, organ transplantation, modern general anaesthetics, the heart-lung machine, antibiotics, and the whooping cough vaccine, have benefited both humans and animals.
In researching human disease, model organisms allow for a better understanding of the disease process without the added risk of harming an actual human. The species of the model organism is usually chosen so that it reacts to disease or its treatment in a way that resembles human physiology, even though care must be taken when generalizing from one organism to another. Many drugs, treatments and cures for human diseases are developed in part with the guidance of animal models.
Model organisms are drawn from all three domains of life, as well as viruses. One of the first model systems for molecular biology was the bacterium Escherichia coli (E. coli), a common constituent of the human digestive system. The mouse (Mus musculus) has been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st centuries. Other examples include baker's yeast (Saccharomyces cerevisiae), the T4 phage virus, the fruit fly Drosophila melanogaster, the flowering plant Arabidopsis thaliana, and guinea pigs (Cavia porcellus). Several of the bacterial viruses (bacteriophages) that infect E. coli have also been very useful for the study of gene structure and gene regulation (e.g. phages Lambda and T4). Disease models are divided into three categories: homologous models have the same causes, symptoms, and treatment options as humans who have the same disease; isomorphic models share the same symptoms and treatments; and predictive models are similar to a particular human disease in only a couple of aspects, but are useful in isolating and making predictions about mechanisms of a set of disease features.
== History ==
The use of animals in research dates back to ancient Greece, with Aristotle (384–322 BCE) and Erasistratus (304–258 BCE) among the first to perform experiments on living animals. Discoveries in the 18th and 19th centuries included Antoine Lavoisier's use of a guinea pig in a calorimeter to prove that respiration was a form of combustion, and Louis Pasteur's demonstration of the germ theory of disease in the 1880s using anthrax in sheep.
Research using animal models has been central to most of the achievements of modern medicine. It has contributed most of the basic knowledge in fields such as human physiology and biochemistry, and has played significant roles in fields such as neuroscience and infectious disease. For example, the results have included the near-eradication of polio and the development of organ transplantation, and have benefited both humans and animals. From 1910 to 1927, Thomas Hunt Morgan's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes. Drosophila became one of the first, and for some time the most widely used, model organisms, and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". D. melanogaster remains one of the most widely used eukaryotic model organisms. During the same time period, studies on mouse genetics in the laboratory of William Ernest Castle in collaboration with Abbie Lathrop led to generation of the DBA ("dilute, brown and non-agouti") inbred mouse strain and the systematic generation of other inbred strains. The mouse has since been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st centuries.
In the late 19th century, Emil von Behring isolated the diphtheria toxin and demonstrated its effects in guinea pigs. He went on to develop an antitoxin against diphtheria in animals and then in humans, which resulted in the modern methods of immunization and largely ended diphtheria as a threatening disease. The diphtheria antitoxin is famously commemorated in the Iditarod race, which is modeled after the delivery of antitoxin in the 1925 serum run to Nome. The success of animal studies in producing the diphtheria antitoxin has also been attributed as a cause for the decline of the early 20th-century opposition to animal research in the United States.
Subsequent research in model organisms led to further medical advances, such as Frederick Banting's research in dogs, which determined that the isolates of pancreatic secretion could be used to treat dogs with diabetes. This led to the 1922 discovery of insulin (with John Macleod) and its use in treating diabetes, which had previously meant death. John Cade's research in guinea pigs discovered the anticonvulsant properties of lithium salts, which revolutionized the treatment of bipolar disorder, replacing the previous treatments of lobotomy or electroconvulsive therapy. Modern general anaesthetics, such as halothane and related compounds, were also developed through studies on model organisms, and are necessary for modern, complex surgical operations.
In the 1940s, Jonas Salk used rhesus monkey studies to isolate the most virulent forms of the polio virus, which led to his creation of a polio vaccine. The vaccine, which was made publicly available in 1955, reduced the incidence of polio 15-fold in the United States over the following five years. Albert Sabin improved the vaccine by passing the polio virus through animal hosts, including monkeys; the Sabin vaccine was produced for mass consumption in 1963, and had virtually eradicated polio in the United States by 1965. It has been estimated that developing and producing the vaccines required the use of 100,000 rhesus monkeys, with 65 doses of vaccine produced from each monkey. Sabin wrote in 1992, "Without the use of animals and human beings, it would have been impossible to acquire the important knowledge needed to prevent much suffering and premature death not only among humans, but also among animals."
Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, the heart-lung machine, antibiotics, and the whooping cough vaccine. Treatments for animal diseases have also been developed, including for rabies, anthrax, glanders, feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, classical swine fever (hog cholera), heartworm, and other parasitic infections. Animal experimentation continues to be required for biomedical research, and is used with the aim of solving medical problems such as Alzheimer's disease, AIDS, multiple sclerosis, spinal cord injury, many headaches, and other conditions in which there is no useful in vitro model system available.
== Selection ==
Model organisms are those with a wealth of biological data that makes them attractive to study, as examples for other species or natural phenomena that are more difficult to study directly. Continual research on these organisms focuses on a wide variety of experimental techniques and goals from many different levels of biology, from ecology, behavior and biomechanics down to the tiny functional scale of individual tissues, organelles and proteins. Such inquiries are classed as genetic models (species with short generation times, such as the fruit fly and the nematode worm), experimental models, and genomic parsimony models, which occupy a pivotal position in the evolutionary tree. Historically, model organisms include a handful of species with extensive genomic research data, such as the NIH model organisms.
Often, model organisms are chosen on the basis that they are amenable to experimental manipulation. This usually includes characteristics such as a short life cycle, techniques for genetic manipulation (inbred strains, stem cell lines, and methods of transformation) and non-specialist living requirements. Sometimes, the genome arrangement facilitates the sequencing of the model organism's genome, for example, by being very compact or having a low proportion of junk DNA (e.g. yeast, Arabidopsis, or pufferfish).
When researchers look for an organism to use in their studies, they look for several traits. Among these are size, generation time, accessibility, manipulation, genetics, conservation of mechanisms, and potential economic benefit. As comparative molecular biology has become more common, some researchers have sought model organisms from a wider assortment of lineages on the tree of life.
=== Phylogeny and genetic relatedness ===
The primary reason for the use of model organisms in research is the evolutionary principle that all organisms share some degree of relatedness and genetic similarity due to common ancestry. The study of taxonomic human relatives, then, can provide a great deal of information about mechanism and disease within the human body that can be useful in medicine.
Various phylogenetic trees for vertebrates have been constructed using comparative proteomics, genetics, and genomics, as well as the geochemical and fossil record. These estimates tell us that humans and chimpanzees last shared a common ancestor about 6 million years ago (mya). As our closest relatives, chimpanzees have a lot of potential to tell us about mechanisms of disease (and what genes may be responsible for human intelligence). However, chimpanzees are rarely used in research and are protected from highly invasive procedures. Rodents are the most common animal models. Phylogenetic trees estimate that humans and rodents last shared a common ancestor about 80–100 mya. Despite this distant split, humans and rodents have far more similarities than differences. This is due to the relative stability of large portions of the genome, making the use of vertebrate animals particularly productive.
Genomic data are used to make close comparisons between species and determine relatedness. Humans share about 99% of their genome with chimpanzees (98.7% with bonobos) and over 90% with the mouse. With so much of the genome conserved across species, it is striking that the differences between humans and mice can be accounted for by approximately six thousand genes, about 20% of an estimated 30,000 total. Scientists have been able to take advantage of these similarities in generating experimental and predictive models of human disease.
== Use ==
There are many model organisms. One of the first model systems for molecular biology was the bacterium Escherichia coli, a common constituent of the human digestive system. Several of the bacterial viruses (bacteriophage) that infect E. coli have also been very useful for the study of gene structure and gene regulation (e.g. phages Lambda and T4). However, it is debated whether bacteriophages should be classified as organisms, because they lack metabolism and depend on functions of the host cells for propagation.
In eukaryotes, several yeasts, particularly Saccharomyces cerevisiae ("baker's" or "budding" yeast), have been widely used in genetics and cell biology, largely because they are quick and easy to grow. The cell cycle in a simple yeast is very similar to the cell cycle in humans and is regulated by homologous proteins. The fruit fly Drosophila melanogaster is studied because, for an animal, it is easy to rear, it has various visible congenital traits, and it has polytene (giant) chromosomes in its salivary glands that can be examined under a light microscope. The roundworm Caenorhabditis elegans is studied because it has very defined development patterns involving fixed numbers of cells, and it can be rapidly assayed for abnormalities.
== Disease models ==
Animal models serving in research may have an existing, inbred or induced disease or injury that is similar to a human condition. These test conditions are often termed animal models of disease. The use of animal models allows researchers to investigate disease states in ways which would be inaccessible in a human patient, performing procedures on the non-human animal that imply a level of harm that would not be considered ethical to inflict on a human.
The best models of disease are similar in etiology (mechanism of cause) and phenotype (signs and symptoms) to the human equivalent. However, complex human diseases can often be better understood in a simplified system in which individual parts of the disease process are isolated and examined. For instance, behavioral analogues of anxiety or pain in laboratory animals can be used to screen and test new drugs for the treatment of these conditions in humans. A 2000 study found that animal models concorded (coincided on true positives and false negatives) with human toxicity in 71% of cases, with 63% for nonrodents alone and 43% for rodents alone.
In 1987, Davidson et al. suggested that selection of an animal model for research be based on nine considerations. These include 1) appropriateness as an analog, 2) transferability of information, 3) genetic uniformity of organisms, where applicable, 4) background knowledge of biological properties, 5) cost and availability, 6) generalizability of the results, 7) ease of and adaptability to experimental manipulation, 8) ecological consequences, and 9) ethical implications.
Animal models can be classified as homologous, isomorphic or predictive. Animal models can also be more broadly classified into four categories: 1) experimental, 2) spontaneous, 3) negative, 4) orphan.
Experimental models are most common. These refer to models of disease that resemble human conditions in phenotype or response to treatment but are induced artificially in the laboratory. Some examples include:
The use of metrazol (pentylenetetrazol) as an animal model of epilepsy
Induction of mechanical brain injury as an animal model of post-traumatic epilepsy
Injection of the neurotoxin 6-hydroxydopamine into dopaminergic parts of the basal ganglia as an animal model of Parkinson's disease
Immunisation with an auto-antigen to induce an immune response to model autoimmune diseases, such as experimental autoimmune encephalomyelitis
Occlusion of the middle cerebral artery as an animal model of ischemic stroke
Injection of blood in the basal ganglia of mice as a model for hemorrhagic stroke
Sepsis and septic shock induction by impairing the integrity of barrier tissues, administering live pathogens or toxins
Infecting animals with pathogens to reproduce human infectious diseases
Injecting animals with agonists or antagonists of various neurotransmitters to reproduce human mental disorders
Using ionizing radiation to cause tumors
Using gene transfer to cause tumors
Implanting animals with tumors to test and develop treatments using ionizing radiation
Genetic selection for a disease trait (such as in diabetic mice, also known as NOD mice)
Various animal models for screening of drugs for the treatment of glaucoma
The use of the ovariectomized rat in osteoporosis research
Use of Plasmodium yoelii as a model of human malaria
Spontaneous models refer to diseases that are analogous to human conditions that occur naturally in the animal being studied. These models are rare, but informative. Negative models essentially refer to control animals, which are useful for validating an experimental result. Orphan models refer to diseases for which there is no human analog and occur exclusively in the species studied.
The increase in knowledge of the genomes of non-human primates and other mammals that are genetically close to humans is allowing the production of genetically engineered animal tissues, organs and even animal species which express human diseases, providing more robust animal models of human disease.
Animal models observed in the sciences of psychology and sociology are often termed animal models of behavior. It is difficult to build an animal model that perfectly reproduces the symptoms of depression in patients. Depression, like other mental disorders, consists of endophenotypes that can be reproduced independently and evaluated in animals. An ideal animal model offers an opportunity to understand the molecular, genetic and epigenetic factors that may lead to depression. By using animal models, the underlying molecular alterations and the causal relationships between genetic or environmental alterations and depression can be examined, affording better insight into the pathology of depression. In addition, animal models of depression are indispensable for identifying novel therapies for depression.
== Important model organisms ==
Model organisms are drawn from all three domains of life, as well as viruses. The most widely studied prokaryotic model organism is Escherichia coli (E. coli), which has been intensively investigated for over 60 years. It is a common, gram-negative gut bacterium which can be grown and cultured easily and inexpensively in a laboratory setting. It is the most widely used organism in molecular genetics, and is an important species in the fields of biotechnology and microbiology, where it has served as the host organism for the majority of work with recombinant DNA.
Simple model eukaryotes include baker's yeast (Saccharomyces cerevisiae) and fission yeast (Schizosaccharomyces pombe), both of which share many features with the cells of higher organisms, including those of humans. For instance, many cell division genes that are critical for the development of cancer have been discovered in yeast. Chlamydomonas reinhardtii, a unicellular green alga with well-studied genetics, is used to study photosynthesis and motility. C. reinhardtii has many known and mapped mutants and expressed sequence tags, and there are advanced methods for genetic transformation and selection of genes. Dictyostelium discoideum is used in molecular biology and genetics, and is studied as an example of cell communication, differentiation, and programmed cell death.
Among invertebrates, the fruit fly Drosophila melanogaster is famous as the subject of genetics experiments by Thomas Hunt Morgan and others. It is easily raised in the lab, with rapid generations, high fecundity, few chromosomes, and easily induced observable mutations. The nematode Caenorhabditis elegans is used for understanding the genetic control of development and physiology. It was first proposed as a model for neuronal development by Sydney Brenner in 1963, and has been extensively used in many different contexts since then. C. elegans was the first multicellular organism whose genome was completely sequenced and, as of 2012, the only organism to have its connectome (neuronal "wiring diagram") completed.
Arabidopsis thaliana is currently the most popular model plant. Its small stature and short generation time facilitate rapid genetic studies, and many phenotypic and biochemical mutants have been mapped. A. thaliana was the first plant to have its genome sequenced.
Among vertebrates, guinea pigs (Cavia porcellus) were used by Robert Koch and other early bacteriologists as a host for bacterial infections, becoming a byword for "laboratory animal", but are less commonly used today. The classic model vertebrate is currently the mouse (Mus musculus). Many inbred strains exist, as well as lines selected for particular traits, often of medical interest, e.g. body size, obesity, muscularity, and voluntary wheel-running behavior.
The rat (Rattus norvegicus) is particularly useful as a toxicology model, and as a neurological model and source of primary cell cultures, owing to the larger size of its organs and suborganellar structures relative to those of the mouse, while eggs and embryos from Xenopus tropicalis (the western clawed frog) and Xenopus laevis (the African clawed frog) are used in developmental biology, cell biology, toxicology, and neuroscience. Likewise, the zebrafish (Danio rerio) has a nearly transparent body during early development, which provides unique visual access to the animal's internal anatomy during this time period. Zebrafish are used to study development, toxicology and toxicopathology, specific gene function and the roles of signaling pathways.
Other important model organisms and some of their uses include: T4 phage (viral infection), Tetrahymena thermophila (intracellular processes), maize (transposons), hydras (regeneration and morphogenesis), cats (neurophysiology), chickens (development), dogs (respiratory and cardiovascular systems), Nothobranchius furzeri (aging), non-human primates such as the rhesus macaque and chimpanzee (hepatitis, HIV, Parkinson's disease, cognition, and vaccines), and ferrets (SARS-CoV-2).
=== Selected model organisms ===
The organisms below have become model organisms because they facilitate the study of certain characters or because of their genetic accessibility. For example, E. coli was one of the first organisms for which genetic techniques such as transformation and genetic manipulation were developed.
The genomes of all model species have been sequenced, including their mitochondrial/chloroplast genomes. Model organism databases exist to provide researchers with a portal from which to download sequences (DNA, RNA, or protein) or to access functional information on specific genes, for example the sub-cellular localization of the gene product or its physiological role.
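As a sketch of what such programmatic access can look like, the following minimal Python example downloads a short stretch of sequence in FASTA format from NCBI's public E-utilities service. The accession (U00096, the E. coli K-12 MG1655 genome) and the coordinate range are illustrative choices, not details taken from this article.

# Minimal sketch: fetch the first kilobase of a sequence record from
# NCBI E-utilities (efetch) as plain FASTA text.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "db": "nuccore",      # nucleotide database
    "id": "U00096",       # illustrative accession: E. coli K-12 MG1655 genome
    "seq_start": "1",     # restrict the download to the first 1,000 bases
    "seq_stop": "1000",
    "rettype": "fasta",   # plain FASTA output
    "retmode": "text",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params
with urlopen(url) as response:
    fasta = response.read().decode()
print(fasta.splitlines()[0])  # prints the FASTA header line of the record

Dedicated model organism databases typically offer richer, organism-specific interfaces than this generic example.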
== Limitations ==
Many animal models serving as test subjects in biomedical research, such as rats and mice, may be selectively sedentary, obese and glucose intolerant. This may confound their use to model human metabolic processes and diseases, as these can be affected by dietary energy intake and exercise. Similarly, there are differences between the immune systems of model organisms and humans that lead to significantly altered responses to stimuli, although the underlying principles of genome function may be the same. The impoverished environments inside standard laboratory cages deny research animals the mental and physical challenges that are necessary for healthy emotional development. Without day-to-day variety, risks and rewards, and complex environments, some have argued that animal models are irrelevant models of human experience.
Mice differ from humans in several immune properties: mice are more resistant to some toxins than humans; have a lower total neutrophil fraction in the blood, a lower neutrophil enzymatic capacity, lower activity of the complement system, and a different set of pentraxins involved in the inflammatory process; and lack genes for important components of the immune system, such as IL-8, IL-37, TLR10, and ICAM-3. Laboratory mice reared in specific-pathogen-free (SPF) conditions usually have a rather immature immune system with a deficit of memory T cells. These mice may have limited diversity of the microbiota, which directly affects the immune system and the development of pathological conditions. Moreover, persistent virus infections (for example, herpesviruses) are activated in humans with septic complications, but not in SPF mice, and may change the resistance to bacterial coinfections. "Dirty" mice are possibly better suited for mimicking human pathologies. In addition, inbred mouse strains are used in the overwhelming majority of studies, while the human population is heterogeneous, pointing to the importance of studies in interstrain hybrid, outbred, and nonlinear mice.
=== Unintended bias ===
Some studies suggest that inadequate reporting in animal testing may result in irreproducible research, with details about how experiments are done omitted from published papers, or with differences in testing that may introduce bias. Examples of hidden bias include a 2014 study from McGill University in Montreal, Canada, which suggests that mice handled by men rather than women showed higher stress levels. Another study in 2016 suggested that gut microbiomes in mice may have an impact upon scientific research.
=== Alternatives ===
Ethical concerns, as well as the cost, maintenance and relative inefficiency of animal research, have encouraged the development of alternative methods for the study of disease. Cell culture, or in vitro studies, provides an alternative that preserves the physiology of the living cell, but does not require the sacrifice of an animal for mechanistic studies. Human induced pluripotent stem cells can also elucidate new mechanisms for understanding cancer and cell regeneration. Imaging studies (such as MRI or PET scans) enable non-invasive study of human subjects. Recent advances in genetics and genomics can identify disease-associated genes, which can be targeted for therapies.
Many biomedical researchers argue that there is no substitute for a living organism when studying complex interactions in disease pathology or treatments.
== Ethics ==
Debate about the ethical use of animals in research dates back at least as far as 1822, when the British Parliament, under pressure from British and Indian intellectuals, enacted the first law for animal protection, preventing cruelty to cattle. This was followed by the Cruelty to Animals Act 1835 and the Cruelty to Animals Act 1849, which criminalized ill-treating, over-driving, and torturing animals. In 1876, under pressure from the National Anti-Vivisection Society, the Cruelty to Animals Act 1849 was amended to include regulations governing the use of animals in research. This new act stipulated that 1) experiments must be proven absolutely necessary for instruction, or to save or prolong human life; 2) animals must be properly anesthetized; and 3) animals must be killed as soon as the experiment is over. Today, these three principles are central to the laws and guidelines governing the use of animals in research. In the U.S., the Animal Welfare Act of 1970 (see also Laboratory Animal Welfare Act) set standards for animal use and care in research. This law is enforced by APHIS's Animal Care program.
In academic settings in which NIH funding is used for animal research, institutions are governed by the NIH Office of Laboratory Animal Welfare (OLAW). At each site, OLAW guidelines and standards are upheld by a local review board called the Institutional Animal Care and Use Committee (IACUC). All laboratory experiments involving living animals are reviewed and approved by this committee. In addition to proving the potential for benefit to human health, minimization of pain and distress, and timely and humane euthanasia, experimenters must justify their protocols based on the principles of Replacement, Reduction and Refinement.
"Replacement" refers to efforts to engage alternatives to animal use. This includes the use of computer models, non-living tissues and cells, and replacement of "higher-order" animals (primates and mammals) with "lower" order animals (e.g. cold-blooded animals, invertebrates) wherever possible.
"Reduction" refers to efforts to minimize number of animals used during the course of an experiment, as well as prevention of unnecessary replication of previous experiments. To satisfy this requirement, mathematical calculations of statistical power are employed to determine the minimum number of animals that can be used to get a statistically significant experimental result.
"Refinement" refers to efforts to make experimental design as painless and efficient as possible in order to minimize the suffering of each animal subject.
== See also ==
== References ==
== Further reading ==
Marx, Vivien (June 2014). "Models: stretching the skills of cell lines and mice". Nature Methods. 11 (6): 617–620. doi:10.1038/nmeth.2966. PMID 24874573.
Goldstein, Bob; King, Nicole (November 2016). "The Future of Cell Biology: Emerging Model Organisms". Trends in Cell Biology. 26 (11): 818–824. doi:10.1016/j.tcb.2016.08.005. PMC 5077642. PMID 27639630.
Lloyd, Kent; Franklin, Craig; Lutz, Cat; Magnuson, Terry (June 2015). "Reproducibility: Use mouse biobanks or lose them". Nature. 522 (7555): 151–153. Bibcode:2015Natur.522..151L. doi:10.1038/522151a. PMC 4636083. PMID 26062496.
== External links ==
Wellcome Trust description of model organisms
National Institutes of Health Comparative Medicine Program Vertebrate Models
NIH Using Model Organisms to Study Human Disease
National Institutes of Health Model Organism Sharing Policy
Why are Animals Used in NIH Research
Disease Animal Models – BSRC Alexander Fleming
Emice – National Cancer Institute
Knock Out Mouse Project – KOMP
Mouse Biology Program
Mutant Mouse Resource & Research Centers, National Institutes of Health, supported Mouse Repository
Rat Resource & Research Center – National Institutes of Health, supported Rat Repository
NIH Model Organism Research Reproducibility and Rigor
A model is a person with a role either to display commercial products (notably fashion clothing in fashion shows) or to serve as an artist's model.
Modelling ("modeling" in American English) entails using one's body to represent someone else's body or someone's artistic imagination of a body. For example, a woman modelling for shoes uses her foot to model the potential customers' feet. Modelling thus is different from posing for portrait photography, portrait painting, and distinct from other types of public performance, such as acting or dancing. Personal opinions are normally not expressed, and a model's reputation and image are considered critical.
Types of modelling include: fine art, fashion, glamour, fitness, and body-part promotional modelling. Models are featured in various media formats, including books, magazines, films, newspapers, the Internet, and television. Fashion modelling is sometimes featured in reality TV shows (America's Next Top Model). Modelling often is a part-time activity.
== Artist's models ==
Artist's models pose for any visual artist as part of the creative process. Artist's models are often paid professionals who provide a reference or inspiration for a work of art that includes the human figure. The most common types of art created using models are figure drawing, figure painting, sculpture and photography, but almost any medium may be used. Although commercial motives dominate over aesthetics in illustration, its artwork commonly employs models. Models are most frequently employed for art classes or by informal groups of experienced artists who gather to share the expense of a model.
== Fashion modelling ==
=== History of fashion modelling ===
==== Early years ====
In 14th-century Europe, fashion had been displayed in miniature form to (often royal) clients by fashion dolls, before the clothes were made in human size.
Modelling as a profession was first established in 1853 by Charles Frederick Worth, the "father of haute couture", when he asked his wife, Marie Vernet Worth, to model the clothes he designed for potential clients. The term "house model" was coined to describe this type of work. Eventually, this became common practice for Parisian fashion houses. There were no standard physical measurement requirements for a model, and most designers would use women of varying sizes to demonstrate variety in their designs.
The modelling profession expanded to photo modelling with the development of fashion photography. Models remained fairly anonymous, and relatively poorly paid, until the late 1940s, when the world's first three supermodels, Barbara Goalen, Bettina Graziani and Lisa Fonssagrives began commanding very large sums. During the 1940s and 1950s, Graziani was the most photographed woman in France and the undisputed queen of couture, while Fonssagrives appeared on over 200 Vogue covers; her name recognition led to the importance of Vogue in shaping the careers of fashion models. One of the most popular models during the 1940s was Jinx Falkenburg, who was paid $25 per hour, a large sum at the time; through the 1950s, Wilhelmina Cooper, Jean Patchett, Dovima, Dorian Leigh, Suzy Parker, Evelyn Tripp and Carmen Dell'Orefice also dominated fashion. Dorothea Church was among the first black models in the industry to gain recognition in Paris. However, these models were unknown outside the fashion community. Wilhelmina Cooper's measurements were 38"-24"-36" whereas Chanel Iman's measurements are 32"-23"-33". In 1946, Ford Models was established by Eileen and Gerard Ford in New York, making it one of the oldest model agencies in the world.
==== The 1960s and the beginning of the industry ====
In the 1960s, the modelling world established modelling agencies. Throughout Europe, secretarial services acted as models' agents charging them weekly rates for their messages and bookings. For the most part, models were responsible for their own billing. In Germany, agents were not allowed to work for a percentage of a person's earnings, so they referred to themselves as secretaries. Except for a few models travelling to Paris or New York, travelling was relatively unheard of for a model. Most models only worked in one market due to different labour laws governing modelling in various countries. In the 1960s, Italy had many fashion houses and fashion magazines but desperately needed models. Italian agencies often coerced models to return to Italy without work visas by withholding their pay. They would also pay their models in cash, which models would have to hide from customs agents. It was not uncommon for models staying in hotels such as La Louisiana in Paris or the Arena in Milan to have their hotel rooms raided by the police looking for their work visas. It was rumoured that competing agencies were behind the raids. This led many agencies to form worldwide chains; for example, the Marilyn Agency has branches in Paris and New York.
By the late 1960s, London was considered the best market in Europe due to its more organised and innovative approach to modelling. It was during this period that models began to become household names. Models such as Jean Shrimpton, Tania Mallet, Celia Hammond, Twiggy, and Penelope Tree dominated the London fashion scene and were well paid, unlike their predecessors. Twiggy became The Face of '66 at the age of 16. At this time, model agencies were not as restrictive about the models they represented, although it was uncommon for them to sign shorter models. Twiggy, who stood at 5 feet 6 inches (168 cm) with a 32" bust and had a boy's haircut, is credited with changing model ideals. At that time, she earned £80 (equivalent to £1,639.1 or US$2,037.32 in 2023) an hour, while the average wage was £15 (equivalent to £307.33 or US$382 in 2023) a week.
In 1967, seven of the top model agents in London formed the Association of London Model Agents. The formation of this association helped legitimise modelling and changed the fashion industry. Even with a more professional attitude towards modelling, models were still expected to have their hair and makeup done before they arrived at a shoot. Meanwhile, agencies took responsibility for a model's promotional materials and branding. That same year, former top fashion model Wilhelmina Cooper opened up her own fashion agency with her husband called Wilhelmina Models. By 1968, FM Agency and Models 1 were established and represented models in a similar way that agencies do today. By the late 1960s, models were treated better and were making better wages. One of the innovators, Ford Models, was the first agency to advance models money they were owed and would often allow teen models, who did not live locally, to reside in their house, a precursor to model housing.
==== The 1970s and 1980s ====
The innovations of the 1960s flowed into the 1970s fashion scene. As a result of model industry associations and standards, model agencies became more business minded, and more thought went into a model's promotional materials. By this time, agencies were starting to pay for a model's publicity. In the early 1970s, Scandinavia had many tall, leggy, blonde-haired, blue-eyed models and not enough clients. It was during this time that Ford Models pioneered scouting. They would spend time working with agencies holding modeling contests. This was the precursor to the Ford Models Supermodel of the World competition, established in 1980. Ford also focused its attention on Brazil, which had a wide array of seemingly "exotic" models, which eventually led to the establishment of Ford Models Brazil. During this time, the Sports Illustrated Swimsuit Issue debuted. The magazine set the trend by photographing "bigger and healthier" California models, and printing their names by their photos, thus turning many of them into household names and establishing the issue as a hallmark of supermodel status.
The 1970s marked numerous milestones in fashion. Beverly Johnson was the first black woman to appear on the cover of U.S. Vogue in 1974. Models, including Iman, Grace Jones, Pat Cleveland, Alva Chinn, Donyale Luna, Minah Bird, Naomi Sims, and Toukie Smith were some of the top black fashion models who paved the way for black women in fashion. In 1975, Margaux Hemingway landed a then-unprecedented million-dollar contract as the face of Fabergé's Babe perfume and the same year appeared on the cover of Time magazine, labeled one of the "New Beauties", giving further name recognition to fashion models.
Many of the world's most prominent modeling agencies were established in the 1970s and early 1980s. These agencies created the standard by which agencies now run. In 1974, Nevs Models was established in London with only a men's board, the first of its kind. Elite Models was founded in Paris in 1975, as well as Friday's Models in Japan. The next year Cal-Carries was established in Singapore, the first of a chain of agencies in Asia. In 1977, Select Model Management and Why Not Models in Milan opened their doors. By the 1980s, agencies such as Premier Model Management, Storm Models, Mikas, Marilyn, and Metropolitan Models had been established.
In October 1981, Life cited Shelley Hack, Lauren Hutton and Iman for Revlon, Margaux Hemingway for Fabergé, Karen Graham for Estée Lauder, Cristina Ferrare for Max Factor, and Cheryl Tiegs for CoverGirl by proclaiming them the "million dollar faces" of the beauty industry. These models negotiated previously unheard-of lucrative and exclusive deals with giant cosmetics companies, were instantly recognizable, and their names became well known to the public.
By the 1980s, most models could make modeling a full-time career. Patti Hansen, one of the top earning models in 1980, earned $200 an hour for print and $2,000 for television plus residuals; it was estimated that she earned about $300,000 a year in 1980 (equivalent to $931,463 in 2023). It was common for models to travel abroad and work throughout Europe. As modeling became global, numerous agencies began to think globally. In 1980, Ford Models, the innovator of scouting, introduced the Ford Models Supermodel of the World contest. That same year, John Casablancas opened Elite Models in New York. In 1981, cosmetics companies began contracting top models to lucrative endorsement deals. By 1983, Elite had developed its own contest, the Elite Model Look competition. In New York, during the 1980s there were so-called "model wars" in which the Ford and Elite agencies fought over models and campaigns. Models were jumping back and forth between agencies such as Elite, Wilhelmina, and Ford. In New York, the late 1980s trend was the boyish look, in which models had short cropped hair and looked androgynous. In Europe, the trend was the exact opposite. During this time, many American models who were considered more feminine-looking moved abroad. By the mid-1980s, big hair was made popular by some musical groups, and the boyish look was out. The hourglass figure, a fashionable trend from the late 1940s to the early 1960s, made a comeback.
==== 1990s ====
The high fashion models of the late 1980s dominated the early 1990s. In 1990, Linda Evangelista famously said to Vogue, "we don't wake up for less than $10,000 a day". Evangelista and her contemporaries, Naomi Campbell, Cindy Crawford, Christy Turlington, Tatjana Patitz, Stephanie Seymour, and Yasmeen Ghauri became arguably the most recognisable models in the world, earning the moniker of "supermodel", and were boosted to global recognition and new heights of wealth for the industry. In 1991, Turlington signed a contract with Maybelline that paid her $800,000 for twelve days' work each year.
By the mid‑1990s, the new "heroin chic" trend became popular amongst New York and London editorial clients. Kate Moss became its poster child through her ads for Calvin Klein. With the popularity of lingerie retailer Victoria's Secret, and the Sports Illustrated Swimsuit Issue, there was a need for healthier-looking supermodels such as Tyra Banks and Heidi Klum to meet commercial modelling demand. The mid‑1990s also saw many Asian countries establishing modelling agencies.
By the late 1990s, the heroin chic era had run its course. Teen-inspired clothing infiltrated mainstream fashion, teen pop music was on the rise, and artists such as Britney Spears, Aaliyah and Christina Aguilera popularised pleather and bare midriffs. As fashion changed to a more youthful demographic, the models who rose to fame had to be sexier for the digital age. Following Gisele Bündchen's breakthrough, a wave of Brazilian models including Adriana Lima and Alessandra Ambrosio rose to fame on runways and became popular in commercial modelling throughout the 2000s. Some have tied this increase in Brazilian models to the trend of magazines featuring celebrities instead of models on their covers.
==== 2000s and since ====
In the late 2000s, the Brazilians fell out of favour on the runways. Editorial clients were favouring models with a china-doll or alien look to them, such as Gemma Ward and Lily Cole. During the 2000s, Ford Models and NEXT Model Management were engaged in a legal battle, with each agency alleging that the other was stealing its models.
However, the most significant controversy of the 2000s was the health of high-fashion models participating in fashion week. While the health of models had been a concern since the 1970s, there were several high-profile news stories surrounding the deaths of young fashion models due to eating disorders and drug abuse. The British Fashion Council subsequently asked designers to sign a contract stating they would not use models under the age of sixteen. On March 3, 2012, Vogue banned models under the age of sixteen as well as models who appeared to have an eating disorder. Similarly, other countries placed bans on unhealthy and underage models, including Spain, Italy, Israel and France, which all enacted a minimum body mass index (BMI) requirement. The French law also requires digitally altered pictures of models to be identified as such.
In 2013, New York toughened its child labour law protections for models under the age of eighteen by passing New York Senate Bill No. 5486, which gives underage models the same labour protections afforded to child actors. Key new protections included the following: underage models may not work before 5:00 am or after 10:00 pm on school nights, nor later than 12:30 am on non-school nights; the models may not return to work less than twelve hours after they leave; a pediatric nurse must be on-site; an adult chaperone must accompany models under sixteen; parents or guardians of underage models must create a trust fund account into which employers will transfer a minimum of 15% of the child model's gross earnings; and employers must set aside time and a dedicated space for educational instruction.
=== Runway modelling ===
Catwalk or runway models, also called live models, display clothes by fashion designers to fashion media and consumers. During runway shows, models have to constantly change clothes and makeup. Models walk, turn, and stand to demonstrate a garment's key features. Models also go to interviews (called "go and sees") to present their portfolios. A runway model can also work in other areas, such as department store fashion shows, and the most successful models sometimes create their own product lines or go into acting.
Top runway models travel around the world to attend fashion shows. The most prestigious events are held in New York City, London, Paris, and Milan. Second-tier international fashion centre cities include Rome, Florence, Venice, Brescia, Barcelona, Los Angeles, Tokyo, and Moscow.
The criteria for runway models include certain height and weight requirements. The British Association of Model Agents (AMA) says that female models should be around 34"-24"-34" and between 5 ft 8 in (173 cm) and 5 ft 11 in (180 cm) tall. The average model is very slender. Those not meeting the size requirement may try to become a plus-size model. According to the New York Better Business Career Services website, the preferred dimensions for a male model are a height of 5 ft 11 in (180 cm) to 6 ft 2 in (188 cm), a waist of 26–32 in (66–81 cm) and a chest measurement of 39–40 in (99–102 cm). Male runway models are notably skinny and well toned.
Male and female models must also possess clear skin, healthy hair, and attractive facial features. Stringent weight and body proportion guidelines form the selection criteria by which established and would-be models are judged for their suitability on an ongoing basis. These criteria can vary regionally and by market tier, subject to prevailing trends at any given time, as applied by agents, agencies and end-clients.
Formerly, the required measurements for models were 35"-23.5"-35" (90–60–90 cm), the alleged measurements of Marilyn Monroe. Today's fashion models tend to have measurements closer to the AMA-recommended shape, but some – such as Afghan model Zohre Esmaeli – still have 35"-23.5"-35" measurements. In some fashion centres, a size 00 is even more desirable than a size 0.
The often thin shape of many fashion models has been criticised for warping girls' body image and encouraging eating disorders. Organisers of a fashion show in Madrid in September 2006 turned away models who were judged to be underweight by medical personnel who were on hand. In August 2006, the Uruguayan model Luisel Ramos died of heart failure caused by anorexia nervosa just after stepping off the catwalk. Her sister, Eliana Ramos, also a model, died several months later, in February 2007, from heart problems secondary to malnutrition. They were amongst three fashion models to die of malnutrition within a six-month span; the other victim was Ana Carolina Reston. In 2015, France passed a law requiring models to be declared healthy by a doctor to participate in fashion shows. The law also requires re-touched images to be marked as such in magazines.
=== Magazine modelling ===
Fashion modelling also includes modelling clothing in fashion magazines. In Japan, there are different types of fashion magazine models. Exclusive models (専属モデル, senzoku moderu) are models who regularly appear in a fashion magazine and model exclusively for it. On the other hand, street models, or "reader models" (読者モデル, dokusha moderu, abbreviated as "dokumo" for short), are amateur models who model part-time for fashion magazines in conjunction with school work and their main jobs. Unlike professional models, street models are meant to represent the average person in appearance and do not appear on runways. Street models are not exclusively contracted to fashion magazines. If a street model is popular enough, some become exclusive models. Many fashion icons and musicians in Japan began their careers as street models, including Kaela Kimura and Kyary Pamyu Pamyu.
=== Plus-size ===
Plus-size models are models who generally have larger measurements than editorial fashion models, and are not necessarily overweight. The primary use of plus-size models is to appear in advertising and runway shows for plus-size labels. Plus-size models are also engaged in work not strictly related to selling large-sized clothing, e.g., stock photography and advertising photography for cosmetics, household and pharmaceutical products and sunglasses, footwear and watches. Therefore, plus-size models do not exclusively wear garments marketed as plus-size clothing. This is especially true when participating in fashion editorials for mainstream fashion magazines. Some plus-size models have appeared in runway shows and campaigns for mainstream retailers and designers such as Gucci, Guess, Jean-Paul Gaultier, Levi's and Versace Jeans.
=== Normal-size ===
Also known as "in-between" or "middle" models, they are neither considered catalogue size (0–2) nor plus-size (size 10 and up). There is criticism that these models have been left out of the conversation because fashion companies and brands opt to employ the extremes of the spectrum.
Model Camille Kostek, who was on a solo cover of the Sports Illustrated Swimsuit Issue in 2019, has stated that she was told by a well-known international modelling agency "...that it was too bad that I wasn't a size 10. That plus size is a big market right now and it's too bad I wasn't measuring bigger. My size (4/6) is considered an 'in-between size', meaning I'm not a straight model nor plus model, I'm right in the middle." Actress Mindy Kaling has described this body type in her 2011 book Is Everybody Hanging Out Without Me?, writing, "Since I am not model-skinny, but also not super-fat... I fall into that nebulous, 'Normal American Woman Size' that legions of fashion stylists detest... Many stylists hate that size because, I think, to them, I lack the self-discipline to be an aesthetic, or the sassy confidence to be a total fatty hedonist. They're like, 'Pick a lane.'"
=== Black models ===
The arrival of black women modelling as a profession began in early postwar America. It started most notably from the needs of advertisers and the rise of black photography magazines. The women who advanced in such careers were those in a middle-class system emphasising the conservative values of marriage, motherhood, and domesticity. Originally titled the "Brownskin" model, black women redefined the social, sexual, and racial realities confined in the gender expectations of the modelling world. There was a profound need for black women to partake in the advertising process for the new "Negro Market". With the help of Branford Models, the first black agency, 1946 was the beginning of the black modelling era. Branford Models was able to "overturn the barriers facing African Americans in the early postwar period," especially by lifting at least one economic freedom. In this postwar America, the demand for such presence in magazines advanced "as a stage for models to display consumer goods" while assisting "in constructing a new visual discourse of urban middle-class African America". In March 1966, Donyale Luna became the first black model to appear on the cover of the British edition of Vogue.
While they represented diversity, a major gap in the fashion industry, it was not until the 1970s that black models had a substantial presence in the modelling world. Known as the "Black is Beautiful" movement, the 1970s became the era of the black model. With growing disenfranchisement and racial inequality, the United States recognised the urgency of opening the "doors of social access and visibility to black Americans". The world of fashion was the gateway for social change: "The world of fashion was similarly looked to as a place where the culture could find signs of racial progress. Expressions of beauty and glamour mattered. Good race relations required taking note of who was selling women lipsticks and mini skirts, which meant that advertisers began looking for black models." Black models were looked to as the vehicle of social change. They were given the opportunity to balance out the lack of presence of black individuals in the mainstream culture. Agencies were beginning to scout black models and focus on the social change they were contributing to. In October 1969, Life magazine featured Naomi Sims, one of the most influential black models in the industry, on its cover. Her rise to fame led to her being hired by international magazines and working on individual projects with designers across the globe. In the same Life issue, Black Beauty, a new agency that represented black models, had a spread that showcased 39 black models. Each of the models had unique features, allowing black expression to progress through this historic magazine spread.
With the movement's presence both in magazines and on the runway, designers began to realise the need to include black models on their runways and in their advertisements. The Battle of Versailles was one of the most notable moments in fashion history that put black models on the map. Eleanor Lambert, creator of Fashion Week and a major "[controller] of the narrative of American fashion", set up a dinner and a fundraiser to both increase American fashion visibility and restore the palace of Versailles. Five French designers and five American designers battled it out on the runway, showing off the fashion, and for the Americans, black models as well. Oscar de la Renta stated "it was the black models that had made the difference." Pat Cleveland, Bethann Hardison, Billie Blair, Jennifer Brice, Alva Chinn, and Ramona Saunders were among the many black models that helped Team America win and stun the French competition. This competition made the black model a worldwide phenomenon. The French were beginning to welcome diversity on the runway and in their advertising. With the recognition Versailles had given, black presence in the modelling world carried on into the 1980s and the 1990s. The models were now known by name and by the publicity that came with the designers they were modelling for. With the rise of the supermodel, models like Naomi Campbell and Tyra Banks paved the way for black success. Naomi Campbell, born in London, was the first black model to cover TIME magazine, French Vogue, Russian Vogue, and the September issue of American Vogue, as well as the first British black model to cover British Vogue. Brands like Chanel, Louis Vuitton, Balmain, Prada, and more have all featured Campbell in their campaigns. She used her remarkable success to achieve more than fashion excellence.
By the mid-1990s, black presence in the modelling world had dramatically decreased. Designers began to favour a consistent aesthetic and elected for skinnier white models. This reality was paved by models such as Kate Moss and Stella Tennant, who provided a more consistent look for the runway. At this time, "the number of working black models in high-profile runway presentation... became so dire that stories began appearing in the mainstream media about the whitewashing of the runway". In response, models like Campbell, Iman, and Bethann Hardison joined forces through the "Diversity Coalition" in an attempt to "call out and accuse prominent fashion houses for snubbing Black and Asian models on the catwalk, editorial spreads, and campaigns". The lack of representation was, in part, due to the belief that "black girls don't push products", which "encouraged people who work directly and indirectly in the industry to speak out on the injustices that go on within it". In the 1990s, it was quite clear that the top designers simply preferred a new aesthetic that excluded models of colour, which resulted in only 6% of runway models being women of colour. The Diversity Coalition's primary mission was to "expedite inclusion on the runway by deliberately calling out designers who have executed acts of racism on the runway". According to Campbell, it was the designers' choice not to include black models on the runway, and their desire for a uniform runway, that amounted to a racist act. Despite such a dramatic effort to exclude black presence from the fashion world, models like Tyra Banks and Veronica Webb persisted. Banks not only dominated the runway as a teen, she took over countless pop culture platforms. As the first black model to cover Sports Illustrated, Banks was one of the most prominent models in the early 2000s. Covering Sports Illustrated, Elle, Essence and Vogue, and walking for Chanel, Christian Dior, and Claude Montana, Banks was truly dominating the fashion world. In addition, she acted in The Fresh Prince of Bel-Air and created her own reality competition show, America's Next Top Model. In conversation with Trebay of The New York Times, Banks stated that her first cover of Sports Illustrated "changed [her] life overnight. You have to think back to remember what that did for an appreciation of black beauty to have a black girl, a girl next door type, on the cover of one of the most mass mainstream magazines of our lives. It was a societal statement, a political statement, and an economic one". Now, models like Joan Smalls, Winnie Harlow, Slick Woods, Jasmine Sanders and more are continuing the fight for black presence in the modelling world, using their predecessors as inspiration.
=== Fitting models ===
A fit model (sometimes fitting model) is a person who is used by a fashion designer or clothing manufacturer to check the fit, drape and visual appearance of a design on a representative human being, effectively acting as a live mannequin.
=== Parts models ===
Some models are employed for their body parts. For example, hand models may be used to promote products held in the hand and nail-related products (e.g. rings, other jewelry or nail polish). They frequently appear in television commercials. Many parts models have exceptionally attractive body parts, but there is also demand for unattractive or unusual-looking body parts for particular campaigns.
Hands are the most in-demand body parts. Foot models are also in high demand, particularly those whose feet fit sample-size shoes. Models also find success modelling other specific parts, including abs, arms, back, bust or chest, legs, and lips. Some petite models (females who are under 5 ft 6 in (1.68 m) and do not qualify as fashion models) have found success in women's body-part modelling.
Parts model divisions can be found at agencies worldwide. Several agencies solely represent parts models, including Hired Hands in London, Body Parts Models in Los Angeles, Carmen Hand Model Management in New York and Parts Models in New York. Parts Models is the largest parts agency, representing over 300 parts models.
=== Petite models ===
Petite models are models who are under the typical height requirements expected of fashion models. Petite models typically work more often in commercial and print modelling, rather than runway modelling.
The height of models is typically 5 feet 9 inches (1.75 m) and above for women, and 6 feet 1 inch (1.85 m) and above for men. Models who are shorter than these heights usually fall under the category of petite or commercial models.
=== Podium models ===
Podium models differ from runway models in that they do not walk down a runway, but rather just stand on an elevated platform. They resemble live mannequins placed in various places throughout an event. Attendees can walk up to the models to inspect and even feel the clothing. Podium modelling is a practical alternative way of presenting a fashion show when space is too limited for a full runway show.
=== Earnings and demographics ===
According to the Bureau of Labor Statistics, the median earnings for a model in the United States were $34,000 annually as of 2021. There are approximately 3,200 men and women who work as models full-time in the United States.
== Glamour models ==
Glamour modelling focuses on sexuality; thus, general requirements are often unclear, depending more on the individual case. Glamour models can be any size or shape. A 2014 study that analysed glamour model profiles estimated mean values for female models of 1.68 metres (5 ft 6 in) in height, 54 kilograms (119 lb) in weight, and a 0.73 waist-to-hip ratio.
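Taking those mean values at face value, a quick worked calculation with the standard body mass index formula (the measure referred to in the fashion-week regulations discussed earlier) puts the average profile at a BMI of about 19, just above the 18.5 underweight threshold used by the World Health Organization:

$$\mathrm{BMI} = \frac{\text{mass in kilograms}}{(\text{height in metres})^{2}} = \frac{54}{1.68^{2}} \approx 19.1\ \mathrm{kg/m^{2}}$$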
There is no industry standard for glamour modelling and it varies greatly by country. For the most part, glamour models are limited to modelling in calendars, men's magazines, such as Playboy, bikini modelling, lingerie modelling, fetish modelling, music videos, and extra work in films. However, some extremely popular glamour models transition into commercial print modelling, appearing in swimwear, bikini and lingerie campaigns.
In the UK, glamour modelling became a prominent feature of the newspaper industry when The Sun established Page 3 in 1969, a section in their newspaper which featured sexually suggestive images of Penthouse and Playboy models. From 1970 models appeared topless. In the 1980s, The Sun's competitors followed suit and produced their own Page 3 sections. It was during this time that glamour models first came to prominence with the likes of Samantha Fox. As a result, the United Kingdom has a very large glamour market and numerous glamour modelling agencies.
It was not until the 1990s that modern glamour modelling was established. During this time, the fashion industry was promoting models with waif bodies and androgynous-looking women, which left a void. Several fashion models who were deemed too commercial and too curvaceous were frustrated with industry standards, and took a different approach. Models such as Victoria Silvstedt left the fashion world and began modelling for men's magazines. In previous decades, posing nude for Playboy resulted in models losing their agencies and endorsements. Playboy was a stepping stone which catapulted the careers of Victoria Silvstedt, Pamela Anderson, Jenny McCarthy, and Anna Nicole Smith. Pamela Anderson became so popular from her Playboy spreads that she was able to land roles on Home Improvement and Baywatch.
In the mid-1990s, a series of men's magazines were established, such as Maxim, FHM, and Stuff. At the same time, magazines including Sweden's Slitz (formerly a music magazine) re-branded themselves as men's magazines. Pre-internet, these magazines were popular among men in their late teens and early twenties because they were considered more tasteful than their predecessors. With the glamour market growing, fashion moved away from the waifs and onto Brazilian bombshells. The glamour market, consisting mostly of commercial fashion and print models, became its own genre due to its popularity. Even in a large market like the United Kingdom, however, glamour models are not usually signed exclusively to one agency, as they cannot rely financially on one agency to provide them with enough work. It was, and still is, a common practice for glamour models to partake in kiss-and-tell interviews about their dalliances with famous men. The notoriety of their alleged bed-hopping often propels their popularity, and they are often promoted by their current or former fling. With Page 3 models becoming fixtures in the British tabloids, glamour models such as Jordan, now known as Katie Price, became household names. By 2004, Page 3 regulars earned anywhere from £30,000 to £40,000, whereas the average salary of a non-Page 3 model, as of 2011, was between £10,000 and £20,000. In the early 2000s, glamour models, and aspiring glamour models, appeared on reality television shows such as Big Brother to gain fame. Several Big Brother alumni parlayed their fifteen minutes of fame into successful glamour modelling careers. However, partly because of competition from the growing internet, which gave audiences access to large amounts of often free online content and produced its own glamour stars, such as Jordan Capri, the offline glamour market became saturated by the mid-2000s, and numerous men's magazines, including Arena, Stuff and FHM in the United States, went under. During this time, there was a growing trend of glamour models, including Kellie Acreman and Lauren Pope, becoming DJs to supplement their income. In a 2012 interview, Keeley Hazell said that going topless is not the best way to achieve success and that "[she] was lucky to be in that 1% of people that get that, and become really successful."
=== Gravure idols ===
In Japan, a gravure idol (グラビアアイドル, gurabia aidoru), often abbreviated to gradol (グラドル, guradoru), is a female model who primarily models for magazines, especially men's magazines, photobooks or DVDs. It is considered part of the overall idol industry in Japan. "Gurabia" (グラビア) is a wasei-eigo term derived from "rotogravure", which is a type of intaglio printing process that was once a staple of newspaper photo features. The rotogravure process is still used for commercial printing of magazines, postcards, and cardboard product packaging.
Gravure idols appear in a wide range of photographic styles and genres. Their photos are largely aimed at male audiences, with poses or activities intended to be provocative or suggestive, generally accentuated by an air of playfulness and innocence rather than aggressive sexuality. Although gravure idols may sometimes wear clothing that exposes most of their body, they seldom appear fully nude. Gravure idols range in age from pre-teens to women in their early thirties. In addition to appearing in mainstream magazines, gravure idols often release their own professional photobooks and DVDs for their fans. Many popular female idols in Japan started their careers as gravure idols.
=== Alternative models ===
An alternative model is any model who does not fit into the conventional model types and may include punk, goth, fetish, and tattooed models or models with distinctive attributes. This type of modelling is usually a cross between glamour modelling and art modelling. Publishers such as Goliath Books in Germany introduced alternative models and punk photography to larger audiences. Billi Gordon, then known as Wilbert Anthony Gordon, was the top greeting card model in the world and inspired a cottage industry, including greeting cards, T-shirts, fans, stationery, gift bags, etc.
== Fitness models ==
Fitness modelling focuses on displaying a healthy, toned physique. Fitness models usually have defined muscle groups. Their body weight is often greater because muscle is denser than fat, but they have a lower body fat percentage and a higher degree of muscle definition.
Fitness models are often used in magazine advertising; they can also in some cases be certified personal fitness trainers. However, other fitness models are also athletes and compete as professionals in fitness and figure competitions.
Several agencies in large markets such as New York, London, and Germany represent fitness models. While there is a large market for these models, most of these agencies are secondary agencies promoting models who typically earn their primary income as commercial models. There are also magazines geared specifically towards fitness modelling or getting fit and in shape.
== Commercial models ==
=== Promotional models ===
A promotional model is a model hired to drive consumer demand for a product, service, brand, or concept by interacting in person with potential consumers. The vast majority of promotional models tend to be attractive in physical appearance. They serve to provide information about the product or service and make it appealing to consumers. While the interaction length may be short, the promotional model delivers a live experience that reflects on the product or service he or she represents. This form of marketing touches fewer consumers for the cost than traditional advertising media (such as print, radio, and television); however, the consumer's perception of a brand, product, service, or company is often more profoundly affected by a live person-to-person experience.
Marketing campaigns that make use of promotional models may take place in stores or shopping malls, at tradeshows, special promotional events, clubs, or even at outdoor public spaces. Promotional models may also be used as TV hosts/anchors for interviewing celebrities at film awards, sports events, etc. Campaigns are often held at high-traffic locations to reach as many consumers as possible, or at venues where a particular type of target consumer is expected to be present.
==== Spokesmodels ====
"Spokesmodel" is a term used for a model who is employed to be associated with a specific brand in advertisements. A spokesmodel may be a celebrity used only in advertisements (in contrast to a brand ambassador who is also expected to represent the company at various events), but more often the term refers to a model who is not a celebrity in their own right. A classic example of the spokesmodel is the models hired to be the Marlboro Man between 1954 and 1999.
==== Trade show models ====
Trade show models work a trade show floorspace or booth and represent a company to attendees. Trade show models are typically not regular employees of the company, but are freelancers hired by the company renting the booth space. They are hired for several reasons: trade show models can make a company's booth more visible among the hundreds of other booths competing for attendee attention. They are articulate and quickly learn and explain or disseminate information on the company and its products and services. And they can assist a company in handling a large number of attendees, which the company might otherwise not have enough employees to accommodate, possibly increasing the number of sales or leads resulting from participation in the show.
==== Atmosphere models ====
Atmosphere models are hired by the producers of themed events to enhance the atmosphere or ambience of their event. They are usually dressed in costumes exemplifying the event's theme and are often placed strategically in various locations around the venue. It is common for event guests to have their picture taken with atmosphere models. For example, if someone is throwing a "Brazilian Day" celebration, they would hire models dressed in samba costumes and headdresses to stand or walk around the party.
=== Instagram models ===
Instagram models are people who have accumulated a large number of followers on Instagram by posting engaging photos of themselves and their lifestyles, and are consequently hired by a company to advertise products as their influence and popularity can increase sales. They should not be confused with established models such as Cara Delevingne and Gigi Hadid, who use Instagram to promote their traditional modelling careers, although some models, such as Playboy model Lindsey Pelas, begin their careers conventionally and subsequently become Instagram models. Some models use Instagram success to develop their careers, such as Rosie Roff, who worked as a fashion model before being discovered via Instagram and gaining work as a ring girl in American boxing. In some cases, Instagram gives unsigned models a platform to attract the attention of agencies and talent scouts. American model Matthew Noszka entered the profession as a result of being discovered on Instagram by Wilhelmina Models.
The Instagram model concept originated in the late 2000s, when the boyfriends of fashion bloggers such as Rumi Neely and Chiara Ferragni began photographing their girlfriends in various outfits. Instagram models often attempt to become social media influencers and engage in influencer marketing, promoting products such as fashion brands and detox teas. High-profile influencers can earn thousands of dollars for promoting commercial brands. When choosing whom to employ, brands have become less concerned with the number of followers an influencer has and more focused on their engagement marketing strategy. Research indicates that 89% of influencers use Instagram to promote themselves, compared to 20% using Twitter and 16% using Facebook.
Some Instagram models have gained high-profile modelling jobs and become celebrities. Fitness model Jen Selter had become an Internet celebrity by 2014 and modelled for Vanity Fair magazine. Cosplayer and model Anna Faith acquired over 250,000 Instagram followers by 2014, gaining success from her ability to impersonate the Disney character Elsa. With Facebook's continuing decrease in post reach, Instagram has increasingly become cosplayers' favourite platform. American actress Caitlin O'Connor had almost 300,000 Instagram followers in 2016, earning most of her social media income from endorsing products on Instagram. Australian personal trainer Kayla Itsines acquired 5.5 million Instagram followers, allowing her to build a business in the fitness industry. Brazilian model Claudia Alende gained a following of 2.8 million people on Instagram by 2015 and developed a career as a lingerie model. Plus-size models Iskra Lawrence and Tess Holliday have used Instagram to demonstrate their potential as models. Yashika Aannand, an Indian teenage actress, rose to prominence in the Tamil film industry after gaining popularity as an Instagram model with over 145,000 followers on her account by 2017. Iraqi cross-dressing model Noor Alsaffar was killed in September 2023 amid an increase in violence against LGBTQ+ people.
Instagram model techniques and aesthetics have also been used in unconventional or parody profiles. Instagram model Lil Miquela has blurred the line between reality and social media, amassing more than 200,000 followers without revealing whether she is real or computer-generated. Australian comedian Celeste Barber had acquired 1.8 million Instagram followers by 2017, parodying celebrity fashion photographs with real-life reenactments. In 2016, French organisation Addict Aide ran a campaign to raise awareness for alcohol abuse among young people in which a model posed as Louise Delage, a fictitious 25-year-old Parisian whose Instagram photos nearly always featured alcohol. The account amassed 65,000 followers in a month, after which a reveal video posted to it had over 160,000 views.
Some reports suggest that a number of Instagram models obtain extra income by covertly working as prostitutes. Websites accusing various models of this, often without reliable evidence, have increased in popularity recently, sometimes with the unintended effect of increasing their earnings. But false accusations on these sites can harm legitimate models' reputations, and some women in the industry consider them a way for men to exert power over women.
== See also ==
Hip hop model
Instagram's impact on people
List of models in music videos
List of modeling agencies
Time for print
Female body shape
Size zero
== References ==
== Further reading ==
Gross, Michael. Model: The Ugly Business of Beautiful Women. New York: IT Books, 2011. ISBN 0-062-06790-7
Hix, Charles, and Michael Taylor. Male Model: the World Behind the Camera. New York: St. Martin's Press, 1979. ISBN 0-312-50938-3
Mears, Ashley (2011). Pricing Beauty: The Making of a Fashion Model. University of California Press. ISBN 978-0-520-26033-7.
Vogels, Josey, and Smee, Tracy. "Object of Desire: Idealized Male Bodies Sell Everything from Underwear to Appliances; Are We Creating a Male Beauty Myth?" Hour (Montréal), vol. 3, no. 46 (14-20 Dec. 1995), p. [1], 10–11. N.B.: The caption title (on p. 10) is "Male Attention". | Wikipedia/Model_(person) |
Ship models or model ships are scale models of ships. They can range in size from 1/6000 scale wargaming miniatures to large vessels capable of holding people.
Ship modeling is a craft as old as shipbuilding itself, stretching back to ancient times when water transport was first developed.
== History ==
=== Ancient Mediterranean ===
Ancient ship and boat models have been discovered throughout the Mediterranean, especially from ancient Greece, Egypt, and Phoenicia. These models provide archaeologists with valuable information regarding seafaring technology and the sociological and economic importance of seafaring. Helpful as ancient boat and ship models are to archaeologists, they are not always easily or correctly interpreted, due to artists' mistakes, ambiguity in the model design, and wear and tear over the centuries.
Ships "were among the most technologically complex mechanisms of the ancient world." Ships made far-flung travel and trade more comfortable and economical, and they added a whole new facet to warfare. Thus, ships carried a great deal of significance to the people of the ancient world, and this is expressed partly through the creation of boat and ship models. Ancient boat and ship models are made of a variety of materials and are intended for different purposes. The most common purposes for boat and ship models include burial votives, house hold articles, art, and toys. While archaeologists have found ship and boat models from societies all around the Mediterranean, the three of the most prolific ship model building cultures were the Greeks, Phoenicians, and Egyptians.
Archaeologists have determined that Ancient Greek ship models were used as burial or votive offerings and as household articles such as lamps or drinking vessels. The kinds of ships depicted in Ancient Greek models can be classified broadly as small craft, merchant vessels, and warships. Models were made in different materials, including wood, bronze, lead, and clay.
Greek warships were popular subjects to be made in miniature. One particular model, acquired by the Staatliches Museum (State Museum) in Kassel, Germany, has proved helpful to archaeologists and historians in understanding what a hemiolia warship was like. Archaeologists have tentatively dated the Kassel model to the 6th or 5th century BC through iconographic and literary sources. The model is made of clay and features a distinctive prow shaped like a boar's head, a form described by Herodotus in The History and depicted on pottery, coins, seals and drinking cups. The model is a miniature of a vessel that would have been too small to be a typical warship. The presence of holes bored into eight thwarts in the ship suggests that the thwarts may have been seats for a pegged-in dummy crew. If the holes bored into the thwarts are indeed meant to accommodate a dummy crew, the crew seating would have been arranged with two men per bench amidships, and one man per bench fore and aft where the ship narrows so that there is only room for one man. Alec Tilley (a former Royal Navy and Navy of Oman officer) suggests that a small ship with this type of seating arrangement would have been called a hemiolia, or a one-and-a-halfer. The name indicates that two oarsmen would have been seated on half of the benches and one on the others. Until this ship model was discovered, archaeologists, classicists, and historians had only been able to hypothesize about the seating arrangement on a hemiolia based on its name.
Not all ancient Greek ship models are of warships. One boat model from a house deposit in Mochlos, Crete, dating to around 3000 BC, is thought to be too small to be a warship. Belgian maritime historian L. Basch postulates that the boat "cannot have been propelled by more than four oarsmen … so it can hardly be other than a fishing boat." As opposed to other Early Bronze Age ship and boat models, this model was not found in a burial context; it is thought to be a child's toy or a piece of art instead of a burial offering. The model itself features a projection of the keel beyond the stem-post at both ends. Despite appearances, these projections are not rams: because the model depicts a fishing boat, there would be no need for rams. This model in particular has helped archaeologists understand that not all keel projections in depictions of boats of this period are necessarily rams. Instead, keel projections on depictions of Bronze Age ships are explained as cut-waters or as beaching protection.
Phoenician ship models also provide archaeologists with information regarding the technical aspects of seafaring and the cultural importance of seafaring for the ancient Phoenicians. Some models, however, offer tantalizing pieces of information that are difficult to interpret. Item number H-3134 at the Hecht Museum, a dark-brown clay model of a 5th-century BCE oared boat, is one such craft. The vessel has no provenance, save for the reported location of its discovery off the Phoenician coast, but scientists have been able to tentatively confirm the origin and authenticity of this model. The model is of an oared boat manned by three pairs of oarsmen, who are rendered with "hands … raised to their chests, in the last instant of pulling the oar in the water, before lifting it for the recovery." The mystery of this model is the purpose of small holes (three on the starboard side and four on the port) that were made in the sides of the ship with a sharp tool before the clay dried. The holes are believed to be too small to pass an oar through, and thus would not have been used for rowing. This is hard to prove, however, because the poorly preserved state of the model and the amount of fouling layered on it make it difficult to definitively rule out this possibility. Another theory regarding the purpose of these holes suggests that "ropes for holding oars were threaded through these holes."
Ship models are helpful to archaeologists in that they allow estimates of the real-life size of the vessel. While this technique assumes that artists scaled the models appropriately, it is useful for getting some sense of how large these ships and boats may have been. Archaeologists estimate the Phoenician vessel above (H-3134) to have been about 6 meters long, with a beam of about 2 meters. Such size estimates are calculated by employing a series of assumptions about the distance between benches, the lateral distance between rowers, and a maximum draft of the vessel.
Egyptian ship and boat models are perhaps some of the best-preserved types of ship models available to archaeologists. Ancient Egyptian ship and boat models were most frequently placed in tombs of prominent people as "magical substitutes for the actual objects which the deceased has used in life and which he expected to use again in the next world." These boats have been categorised into two types: models that represent actual vessels used on the Nile, and models of boats considered necessary for religious purposes. The second type may or may not have had real-life counterparts; they were purely magical boats. The majority of boats found in tombs are carved from wood.
Several boat and ship models were found in the tomb of Tutankhamen, which dates to the Eighteenth Dynasty, and in the tomb of Meketre (2061–2010 BC). The wide variety of vessels depicted by the models in these two tombs has provided archaeologists new information on the types of boats that were used in Egypt. Moreover, the presence of boat and ship models in the tombs attests to the paramount importance of boats and ships to the Nile-going people of Egypt.
The boat models discovered at Meketre's tomb feature several different kinds of boats, including traveling boats, sporting boats, and several papyriform crafts. Two of the papyriform skiffs have a trawling net slung between them. It is uncertain whether the net is meant to be depicted as being under the water or as being pulled out of the water by the fishermen. If the artist meant for the net to be in the water, it is upside down, and an upside-down net would not work for catching fish. This ambiguity raises the question of the artistic veracity of the craftsmen who made ship models. As is attested by the ambiguity of the holes in the sides of the Phoenician model, and by the skiff from Meketre, archaeologists need to be aware of the possibility of artistic error while interpreting ancient ship models. While a mistake involving an inverted trawling net may seem trivial, the lesson is important: ancient artists may not have been familiar with the finer details of ships and boats.
Despite some of the limitations of interpreting ancient Mediterranean ship models, archaeologists have been able to glean a great deal of information from these items. This information has been instrumental in filling in gaps in knowledge about ancient seafaring technology and culture.
=== Europe ===
Some of the oldest surviving European ship models are of early craft such as galleys, galleons, and possibly carracks, dating from the 12th through the 15th centuries. They are occasionally found mounted in churches, where they were used in ceremonies to bless ships and those who sailed in them, or as votive offerings for successful voyages or for surviving peril at sea, a practice which remained common in Catholic countries until the 19th century.
Until the early 18th century, virtually all European small craft and many larger vessels were built without formal plans being drawn. Shipwrights would construct models to show prospective customers how the full-size ship would appear and to illustrate advanced building techniques. These were also useful for marine artists, who made extensive use of models from Dutch Golden Age painting onwards.
Ship models constructed for the Royal Navy were referred to as Admiralty models and were principally constructed during the 18th and 19th centuries to depict proposed warship design. Although many of these models did not illustrate the actual timbering or framing, they did show the form of the hull and usually had great detail of the deck furnishings, masts, spars, and general configuration. Some of these grand models were decorated with carvings of great beauty and were evidently constructed by teams of artisans.
Admiralty models served to educate civilians who were involved in the financing or some other aspect of the ship, to avoid construction errors that might have evolved as the ship itself took form.
During the Napoleonic wars, French and English seamen who were taken prisoner were confined, sometimes for many years, and in their boredom sought relief by building ship models from scraps of wood and bone. This evolved into something of an art form, and the models were sold to the public, which responded by supplying the prisoners with ivory so that the models would be more decorative. For the most part, the models had carved wooden hulls with rigging made from human hair, horsehair, silk, or whatever other fine material could be obtained. Bone or ivory would be used for masts and spars, and as a thin veneer over the hull.
A consequence of Britain's naval supremacy in the eighteenth and nineteenth centuries was wide public interest in ships and ship models. Numerous fairly crude models were built as children's toys leading to the creation of functional, as opposed to decorative, ship models. Britain also led the world in model ship sailing clubs – in 1838 the Serpentine Sailing Society was started in Hyde Park, followed by the first London Model Yacht Club in 1845. By the 1880s there were three model sailing clubs sharing the Kensington Gardens Round Pond alone.
=== Modern era ===
In the early part of the 20th century, amateur ship model kits became available from companies such as Bassett-Lowke in Great Britain and Boucher's in the United States. Early 20th century models comprised a combination of wooden hulls and cast lead for anchors, deadeyes, and rigging blocks. These materials gradually gave way to plastic precast sets.
The development of tinplate and improvements in machine tools enabled significant advances in ship modelling from 1900 onwards. Thin, workable sheets of iron could be coated with tin to prevent rusting, then mass-produced as parts of ship model kits. The process was pioneered by French ship model manufacturer Radiguet, which produced a line of zinc boats with pressurised steam engines, wooden decking and brass fittings. The speed of production for tinplate vessels enabled one 1909 manufacturer to produce ship models of speedboats that had competed that year in Monaco.
Ship modelling in the United States experienced a boom in the late 1920s when Popular Science magazine published an extended series of articles and plans for famous ships by modeller and former Navy officer E. Armitage McCann. McCann, who according to Popular Science was the "recognized leader of the ship model building hobby" of his time, founded the Ship Model Makers' Club in 1929, with himself as secretary and treasurer and marine artist and fellow ship model builder Gordon Grant as president.
The world's leading magazine for this hobby, Model Boat, is published from the UK by MyTime Media and has been in print continuously since 1950. In recent years, widespread internet access has played a major role in promoting ship modelling, offering enthusiasts the opportunity to show off their work and share techniques. Internet sites such as Modelwarships.com, Steelnavy.com, and Model Shipwrights are oriented to plastic model ship builders, while others such as Hyperscale, which focus largely on aircraft or other subjects, regularly feature plastic ship models as well.
== Types of construction ==
The most common materials used for ship models are:
Wood—commonly solid wood, two pieces of wood with a vertical seam or slabs of wood placed one on top of each other.
Plastic—including both injected styrene and cast resin models. In larger scales (1/192 and larger), fiberglass is often used for hull shells.
Metal—usually cast lead or other alloys. Steel, sheet tin, aluminum, and brass are used less frequently for hull construction, but are used extensively for adding details.
Paper—preprinted paper construction kits are common in Europe, and are available in a variety of scales.
=== Wooden ===
Wooden ship model hulls can be constructed in several ways. The simplest is a solid wood hull sawn and carved from a single block of wood. This method requires the greatest skill to achieve accurate results.
A variant of this technique, sometimes known as bread and butter construction (the wood is the "bread" and glue the "butter") is a hull built up from thin blocks of wood glued together with either a vertical seam which can be incorporated into deck design, or a horizontal seam. This reduces the amount of carving required, but still requires skill and the use of templates to achieve an accurate hull form.
Modelling precision and lightweight design can be achieved by creating a hollow hull. The plank on bulkhead technique inserts a series of shaped bulkheads along the keel to form a shaped stage which will be covered with planks to form the hull of the model. Plank on frame designs build the model just as the full size wooden ship is constructed. The keel is laid down in a manner which keeps it straight and true. The sternpost and stem are erected, deadwood and strengthening pieces inserted, and a series of shaped frames are built and erected along the keel to form the internal framework of the model. The planks are then applied over the frame to form the external covering.
A wooden hull can be used for operating models if properly sealed.
=== Plastic ===
In the decades since World War Two, injection-molded polystyrene plastic model ships have become increasingly popular. Consisting of preformed plastic parts which can be bonded together with plastic cement, these models are much simpler to construct than the more labor-intensive traditional wooden models. The inexpensive plastic kits were initially targeted at the postwar generation, who could glue them together and produce passable replicas in a single afternoon. Plastic models are available in both full-hull and waterline versions for a wide variety of vessels.
A more recent addition has been a variety of kits in cold cure resin marketed by various small companies as part of a cottage industry. These often cover more obscure subjects than mainstream manufacturers.
Scales vary as well, with many kits from the early days being "box scale"; that is, scaled to fit into a uniform sized box designed to fit conveniently on hobby shop shelves. Scales have since become more standardized to enable modelers to construct consistent scale collections, but there are still many to choose from. In Europe 1/400 scale remains popular, while in the United States and Japan the most popular scales are 1/700 (making a World War Two aircraft carrier about a foot long) and 1/350 (twice as long as 1/700). Nevertheless, mainstream plastic kit manufacturers continue to produce kits as small as 1/1200 and as large as 1/72, with a few even larger.
The early plastic model kit producers such as Airfix, Revell, Frog and Pyro have since been joined by Imai, Tamiya, Hasegawa, Skywave/Pit-Road, Trumpeter, Dragon Models Limited and many others in producing a wide array of model subjects. The plastic model kit market has shifted over the years to a focus on adult hobbyists willing to pay for more elaborate, higher quality kits.
Another recent development has been the advent of aftermarket parts to enhance the basic kits. Decals, specialized paints and turned metal replacement gun barrels are available to make plastic models more accurate. The introduction of flat photoetched metal sets, usually stainless steel or brass, also provide much more realistic lifelines, cranes, and other details than are possible with the injection molded plastic kits. These photoetch sets have transformed the hobby, enabling the finescale modeler to reproduce very delicate details with much less effort.
=== Live steam ===
Enthusiasts build live steam model ships of many types and in many scales. These range from simple pop-pop boats to models of racing hydroplanes.
== Scale conversion factors ==
Instead of using plans made specifically for models, many model shipwrights use the actual blueprints for the original vessel.
One can take drawings for the original ship to a blueprint service and have them blown up or reduced to bring them to the new scale.
For instance, if the drawings are in 1/4" scale and you intend to build in 3/16", tell the service to reduce them 25%.
You can use the conversion table below to determine the percentage of change. However, you can easily work directly from the original drawings by changing scale each time you make a measurement.
The equation for converting a measurement in one scale to that of another scale is D2 = D1 x F where:
D1 = Dimension in the "from-scale"
D2 = Dimension in the "to-scale"
F = Conversion factor between scales
Example: A yardarm is 6" long in 3/16" scale. Find its length in 1/8" scale.
F = .67 (from table)
D2 = 6" X .67 = 4.02 = 4"
It is easier to make measurements in the metric system and then multiply them by the scale conversion factor.
Scales are expressed in fractional inches, but fractions themselves are harder to work with than metric measurements.
For example, a hatch measures 1" wide on the draft.
You are building in 3/16" scale.
Measuring the hatch in metric, you measure 25 mm. The conversion factor for 1/4" to 3/16", according to the conversion table, is .75. So 25 mm x .75 = 18.75 mm, or about 19 mm.
That is the hatch size in 3/16" scale.
Conversion is a fairly simple task once you start measuring in metric and converting according to the scale.
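For readers who prefer to script this arithmetic, here is a minimal Python sketch of the same workflow (the function names are illustrative rather than taken from any modelling tool; Python's standard fractions module keeps the fractional-inch scales exact):

from fractions import Fraction

def conversion_factor(from_scale: Fraction, to_scale: Fraction) -> float:
    # Factor F such that D2 = D1 x F when moving a dimension from a
    # draft drawn in from_scale to a model built in to_scale.
    return float(to_scale / from_scale)

def convert(dimension: float, from_scale: Fraction, to_scale: Fraction) -> float:
    # Apply D2 = D1 x F; works the same for inches or millimetres.
    return dimension * conversion_factor(from_scale, to_scale)

# The yardarm example above: 6" in 3/16" scale converted to 1/8" scale.
print(convert(6.0, Fraction(3, 16), Fraction(1, 8)))   # 4.0 (inches)

# The hatch example above: 25 mm on a 1/4"-scale draft, building in 3/16".
print(convert(25.0, Fraction(1, 4), Fraction(3, 16)))  # 18.75 (mm), about 19 mm

The two-argument factor reproduces the conversion table's values; conversion_factor(Fraction(3, 16), Fraction(1, 8)), for instance, returns 0.666..., which the table rounds to .67.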
There is a simple conversion that allows you to determine the approximate size of a model by taking the actual measurements of the full-size ship and applying a scale factor.
It is a rough way of deciding whether you want to build a model that is about two feet long, three feet long, or four feet long.
Here is a ship model conversion example using a real ship, the Hancock.
This is a frigate appearing in Chapelle's "History of American Sailing Ships".
In this example we want to estimate its size as a model.
We find that the length is given at 136' 7", which rounds off to 137 feet.
To convert feet (of the actual ship) to the number of inches long that the model will be, use the factors in the table on the right.
To find the principal dimensions (length, height, and width) of a square-rigged model in 1/8" scale:
Find scaled length by dividing 137 by 8 = 17.125"
Find 50% of 17.125 and add it to 17.125 (8.56 + 17.125 = 25.685, or about 25.7")
Typically, the height of this model will be its length less 10%, or about 23"
Typically, the beam of this model will be its length divided by 4, or about 6 1/2"
Although this technique allows you to judge the approximate length of a proposed model from its true footage, only square riggers will fit the approximate height and beam by the above factors. To approximate these dimensions on other craft, scale the drawings from which you found the length and arrive at her mast heights and beam.
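The rule-of-thumb sizing above is just as easy to capture in code. Here is a short Python sketch, valid only for square riggers as cautioned above (the function name and the 50%, 10% and one-quarter factors simply restate the steps given in the text):

def estimate_square_rigger_model(real_length_ft: float, scale_in_per_ft: float):
    # Rough overall dimensions, in inches, of a proposed square-rigger model.
    hull = real_length_ft * scale_in_per_ft  # hull length at scale
    length = hull * 1.5                      # add 50% to the scaled hull length
    height = length * 0.9                    # height: length less 10%
    beam = length / 4                        # beam: a quarter of the length
    return length, height, beam

# The Hancock example: 137 ft actual length, built in 1/8" scale.
print(estimate_square_rigger_model(137, 1 / 8))  # roughly (25.7, 23.1, 6.4)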
== Wargaming ==
Model ships have been used for war gaming since antiquity, but the introduction of elaborate rules made the practice more popular in the early 20th century. Small miniature ships, often in 1:1200 and 1:1250 scale, were maneuvered on large playing surfaces either to recreate a historical battle or, in the case of governments, to plan for future encounters. These models were basic representations of ship types, with enough detail to make them recognizable. Bassett-Lowke marketed these to the public in England, along with more detailed versions that appealed to collectors.
Prior to World War II, the German company Wiking became a leader in the field, but the war ended its dominance.
Upon the United States' entry into World War II, Charles King Van Riper was commissioned to build identification models at a scale of 1 ft (0.30 m) to 64 ft (20 m). He produced 1:1200 models of freighters for the United States Navy's Submarine Attack Teacher at Groton, Connecticut.
== Large scale ==
Larger ship models have been used in museums to document historical ships and in companies for decoration and public relations. These are typically built by commercial firms or, in the past, by the model departments of large shipyards. One famous builder of ship models for the United States Navy was the firm of Gibbs & Cox; a 1/48 scale model of the USS Missouri, which is on display at the Washington Navy Yard museum, required an estimated 77,000 man-hours to construct. Commercial ship models are usually built to rigorous standards; for example, the US Navy has an exacting set of specifications regarding the use of materials and methods, with the aim of ensuring a model "lifespan" of one hundred years.
=== Radio control ===
Some hobbyists build and operate scale model ships utilizing radio control equipment. These can range from small models that can be operated in aquariums to vessels capable of navigating large bodies of water. Further expanding the concept is model warship combat, in which scale models fire projectiles at each other in combat.
== Engineering models ==
Model ships are important in the field of engineering, where analytical modeling of a new design needs to be verified. Principles of similitude are used to apply measured data from a scaled model to the full-scale design. Models are often tested in special facilities known as model basins.
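As a concrete illustration of how similitude is applied (the choice of similarity law is an assumption here, since the text does not name one, but Froude scaling is the usual basis for ship model testing), a model is run at the speed that keeps its Froude number Fr = V / sqrt(g * L) equal to that of the full-scale ship:

import math

def froude_scaled_speed(ship_speed: float, scale: float) -> float:
    # Speed at which a model must move so that model and full-scale ship
    # share the same Froude number Fr = V / sqrt(g * L). `scale` is the
    # ratio of model length to ship length, e.g. 1/25.
    return ship_speed * math.sqrt(scale)

# A ship cruising at 10 m/s, tested as a 1:25 model (the scale used for
# the manned models described below):
print(froude_scaled_speed(10.0, 1 / 25))  # 2.0 m/s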
== Manned ==
Manned models are model ships that can carry and be handled by at least one person on an open expanse of water. They must behave just like real ships, giving the shiphandler the same sensations. Physical conditions such as wind, currents, waves, water depths, channels and berths must be reproduced realistically.
Manned models are used for research (e.g. ship behaviour), engineering (e.g. port layout) and for training in shiphandling (e.g. maritime pilots, masters and officers). They are usually at 1:25 scale.
The aim of training on manned models is to enable seamen to acquire or to develop manoeuvring skills through a better understanding of a ship's behaviour as it sails in restricted water conditions at manoeuvring speed.
Manned models are considered by maritime pilots as the next best thing to a full-scale prototype for understanding a ship's behaviour. Those who have trained on both claim that scale models are complementary to computer simulators. While manoeuvres with currents, waves, tugs, anchors, bank effects, etc. are reproduced more accurately on scale models, numerical simulators are more realistic when it comes to the bridge environment.
The Port Revel Shiphandling Training Centre is a French maritime pilotage school specializing in training for pilots, masters, and officers on large ships like supertankers, container ships, LNG carriers and cruise ships. The facility uses manned models at a 1:25 scale on a man-made lake designed to simulate natural conditions in harbours, canals, and open seas. It was the first such facility in the world. The centre was originally created in 1967 near Grenoble by Laboratoire Dauphinois d'Hydraulique.
== Yachts ==
Model yachts are operating craft, which may be sail, steam, engine or electric motor powered. They typically resemble pleasure power craft, although the hobby also includes the construction and operation of models of working ships, such as tugboats, and of other craft shown in this article as static models.
== Model shipwright guilds ==
Model shipwright guilds tend to concentrate their efforts on highly accurate static models of all types of watercraft. They are social groupings intended to allow more experienced ship modellers to pass on their knowledge to new members and to allow members of all levels of expertise to exchange new ideas, as well as serving a social function.
Some model shipwright guilds are incorporated into government and naval facilities, achieving a semi-official status as a clearinghouse for information on naval history and ship design and, at times, teaching the craft of ship modelling through model building, restoration, and repair of the facility's models, as well as museum docent services. The USS Constitution Museum operates a model shipwright guild from the Charlestown Navy Yard adjacent to the berth of the vessel itself, as does the San Francisco Maritime National Historical Park, which sponsors the Hyde Street Pier Model Shipwrights and provides them with work and meeting space aboard the ferryboat Eureka, tied up at the Hyde Street Pier, where they are considered working museum volunteers.
== Collections ==
The largest collection of ship models is thought to be the collection at the National Maritime Museum, Greenwich, which numbers nearly 5,000, most of which are held in the Model Store at the No. 1 Smithery at the Historic Dockyard Chatham, Kent.
In private hands two of the largest collections belong to the hobbyists who made them. Philip Warren of England has a collection of 432 ship models built on the scale of 1:300, all of which he constructed himself. Erick Navas of Peru has a collection of 1005 warships, some of which he built from scratch.
== See also ==
Model airplane
Model yachting
Model warship combat
Radio-controlled boat
Wooden Ship Models
== References ==
== External links ==
Media related to Models of ships at Wikimedia Commons | Wikipedia/Ship_model |
A model car, or toy car, is a miniature representation of an automobile. Other miniature motor vehicles, such as trucks, buses, or even ATVs, are often included in this general category. Because many miniature vehicles were originally aimed at children as playthings, there is no precise difference between a model car and a toy car, yet the word 'model' implies either assembly required or the accurate rendering of an actual vehicle at smaller scale. The kit-building hobby became popular through the 1950s, while the collecting of miniatures by adults started to gain momentum around 1970. Precision-detailed miniatures made specifically for adults have been a significant part of the market since the mid-1980s.
The scope of the vehicles involved in the hobby, according to Louis Heilbroner Hertz, author of The Complete Book of Building and Collecting Model Automobiles, encompasses "ordinary or stock automobiles, racing cars ([...]), buses, trucks, specialized service vehicles (especially fire engines), military vehicles, including such equipment as self-propelled gun carriers and mobile rocket launchers; construction equipment, including bulldozers and road rollers, tractors and related farm equipment; mobile showmen's engines, customized automobiles, hot rods, dragsters, the recently popular so-called 'funny cars', early self-propelled road carriages, and so on."
== History ==
Miniature models of automobiles first appeared in Europe around the time real automobiles did. Then, shortly after, they appeared in the United States. These were toys and replicas often made of lead and brass. Later models made in the early 20th century were slush cast plaster or iron. Tin and pressed steel cars, trucks, and military vehicles, like those made by Bing of Germany, were introduced in the 1920s through the 1940s, but period models rarely copied actual vehicles, likely because of the crudeness of early casting and metal shaping techniques. Casting vehicles in alloys such as zinc-aluminum-magnesium-copper (trademarked as zamak) became popular in the late 1930s and remained prominent after World War II.
=== Fabricating the 'real' thing ===
Many early model cars were not intended either as toys or for collecting. By the 1920s, the manufacturers of real automobiles would design and construct scale as well as full-sized models for design or promotion. Citroën of France, for example, made its own models for promotional purposes as early as 1923. Sometimes styling or concept models were made out of wood or clay, often in 3/8 scale. From 1930 until 1968, General Motors sponsored the Fisher Body Craftsman's Guild Competition, where hundreds of modelers competed for scholarship money. The emphasis was to earn recognition for creativity, which could lead to employment as an industry stylist.
In-house models could also be precise replicas made of similar materials to the real vehicles. For example, Hudson Motor Car Company made twelve precisely crafted 1/4 scale replicas of its 1932 vehicles for promotion at the 1932 New York Auto Show (see Hudson display models). About the same time, but in a different vein, Studebaker made a wooden model of a cabriolet over twice the size of the real car. The vehicle was stationary on the company grounds and large enough to hold a whole band that played mostly for photo shoots (Quinn 2004). As time went by, companies in the United States, Europe and Asia made, provided, or sold toys or precision promotional models to attract succeeding generations to their products. More models also displayed advertising on their bodies for non-automotive promotions.
=== Scale sizes ===
The scales of toy and model cars vary according to historical precedent, market demand and the need for detail. Many 'in house' models of real car companies are made by professional modelers in full size, or at very large scales like 1:4, 1:5, 3:8, or 1:10 to portray adequate features and proportions. For toys, many European pre-war cars and trucks were made to display with railroad layouts, making 1:87 (1 to 2 inches, or HO scale) or 1:43 (about 4 inches long, or O scale) common scales. Other companies made vehicles in variations around 1:40 to 1:50 scales. Some companies went smaller to appeal to the hands of smaller children (about 1:64 scale or about 3 inches), which improved profit margins in packaging more items per carton, and increasing profit per vehicle sold. Others moved to larger scales from 1:43 toward 1:40, 1:38 or 1:35. Later, popular scales went even larger. In the United States, 1:25 (6 to 7 inches) became the staple size for plastic promotional models, while European manufacturers went to 1:24 or 1:18 (about 9 inches long). The larger 1:12 scale was occasionally seen and more rarely, 1:10 or 1:8. At the other extreme, some very tiny toys since the 1980s were fairly accurate down to about 1:120 (a little over an inch).
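The approximate lengths quoted above follow directly from dividing a real car's length by the scale denominator. A small Python sketch illustrates this (the 4.5 m sedan is an assumed example length, not a figure from the text):

def model_length_inches(real_length_m: float, scale_denominator: int) -> float:
    # Convert the real length to inches, then divide by the scale denominator.
    return real_length_m * 39.37 / scale_denominator

for denom in (87, 64, 43, 25, 18):
    print(f"1:{denom}: {model_length_inches(4.5, denom):.1f} in")
# 1:87 ~2.0 in, 1:64 ~2.8 in, 1:43 ~4.1 in, 1:25 ~7.1 in, 1:18 ~9.8 in,
# consistent with the approximate sizes quoted above.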
=== Materials and markets ===
Toys in the United States were almost always simpler castings of zinc alloy (zamak), pressed steel or plastic, often consisting of only seven parts (a car body, four plastic wheels and two axles), while more complex plastic and zamak models in Europe often had precision detail with more working features. This contrast is instructive about different regions of the world and their varied cultures, markets, labor and economies.
Europe quickly developed niche marketing after World War II. The greater availability of labor there generally allowed the development of relatively complex toys to serve different markets in different countries. In the United States, labor was less available, which did not allow for complex toys with opening doors, hoods, and fully detailed interiors, so toys were often single castings with few parts. Sophistication in America did come in the form of detailed (but simply cast) promotional models for automotive dealerships, which preceded the appearance of automotive kits for assembly.
== European die casting ==
Among more collectible vehicles in Europe after World War II and during the 1950s, smaller scales like 1:43 and 1:64 generally became popular first. Since the 1980s, many factory-assembled scale model cars made of diecast metal have become more and more oriented towards adult collectors and less and less toy-like. Besides the smaller scales, these models are manufactured in various scales such as 1:12, 1:18, and 1:24.
=== Early European diecast ===
Northern Europe and the British Isles were the homes of the most successful European producers in the 1950s and 1960s in the post-war revitalized economies across the continent (Rixon 2005, p. 9). Quite popular were models produced in the altered railroad modeling scale of 1:43.
Examples of well known companies are (or were) Corgi Toys, Dinky Toys, Matchbox, and Spot-On Models of the United Kingdom; Solido, Norev, and Majorette of France; Schuco Modell, Gama, and Siku of (West) Germany; Tekno of Denmark, and Mercury, Polistil and Mebetoys of Italy. Immediately post-war, Belgium made Septoy and Gasquy. Even Israel got into the act quite successfully with Gamda Koor Sabra which made its own tooling for several unique models. Non-market system communist countries also had some successful factories, like Kaden models and Igra of Czechoslovakia, Espewe of East Germany, and Estetyka of Poland. State factories of the Soviet Union (commonly known as Novoexport, Saratov, or Tantal) produced many carefully crafted diecast models mostly in 1:43 scale. These were known for their intricate detail, numerous parts, and delicate construction.
Larger sizes in die-cast grew out of offerings of European companies like Polistil, Schuco Modell, and Martoys, which was later to become Bburago. 1:24 and 1:18 scales did not become really popular until the late 1980s, when other brands like Yatming and Maisto were produced in Hong Kong or China by either American or Asian companies. 1:87 scale plastic vehicles, related to railroad modeling or not, also continue to be popular in Europe. Although European companies remain active, China is now the center of diecast production.
Post-war European diecast models were produced in fairly simple form, such as Dinky Toys (often in the train related 1:64 or 1:43). Dinky production began in 1934, while Matchbox cars (often approx. 1:64) were introduced in the mid-1950s. These early die-cast toys featured no opening parts whatsoever. Affected by market forces and by improvements in production technology, companies began to improve the quality of the toys over time. The "best" improvements were often copied by the competition within 1–2 years of their appearance on the market. Examples of these would be plastic windows, interiors, separate wheel/tire assemblies, working suspensions, opening/moving parts, jeweled headlights, mask-spraying or tampo-printing, and low-friction 'fast' wheels.
Into the 1970s, model makers began to feel the squeeze of rising costs. Often press tooling for a new model might cost more than 30,000 pounds (more than US$50,000). Companies began to offer fewer new issues and the models became simpler with fewer opening parts.
=== Trends in toy detail ===
Larger 1:24 and 1:18 scale premium models became extremely popular at toy and hobby centers during the 1990s, but became less popular circa 2010. These sizes are generally made with close attention to the details of the real vehicles, such as working steering and opening doors, trunk/boot, and hood/bonnet. Detailed interiors, instrument panels, trunks/boots with spare tires and engine compartments are common. Chassis often show intricacies of exhaust systems and suspensions. A working suspension system is often included. In smaller scales some of the details are often eliminated, so in 1:43, 1:64, or 1:87 scale cars, working steering is not common. Likewise, only the front doors and hood might open, with non-opening rear doors and trunk. (There are exceptions, of course, such as the steering by lever on the late 1960s 3-inch Ford Mustang by Matchbox or the patented steering on 1:32 Modarri toy cars.)
Over time, market pressures have caused further changes in the way models are designed and manufactured. In the 1960s, many European models had opening parts and working components, but today few of the smaller scale toys do. More working parts mean more production expense and Hot Wheels and Matchbox vehicles now rarely have such features. Today, the number of moving parts has been reduced even in large-scale models. For example, premium model maker AUTOart introduced a line of race and sports cars in 1:18 scale with no opening parts.
=== Die cast seconds ===
Also notable is the diffusion of model dies to companies in other countries which could not afford tooling expenses for their own new lines. Traditionally, when European companies have finished marketing their models, newer dies are developed and introduced and older dies are sold off to other companies, often in less developed countries.
As early as about 1970, Dinky tooling became 'Nicky' Toys in India, just as older Matchbox models became 'Miltons' or Corgi dies became 'Maxwell'. Many dies previously made by Corgi, Efsi, Tekno, Sablon or Solido, trekked southward in Europe to Spanish or Portuguese companies like MetOsul, Nacoral or Auto Pilen. Politoys became MacGregor in Mexico and also showed up in plastic in the Soviet Union. Earlier Solido and Schuco dies made their way to Brazil. Even some of Mattel's earlier Hot Wheels tooling showed up in Argentina as Muky. Tomicas became Yat Mings, Tomicas and Yat Mings became Playarts, and Matchbox tooling reappeared in other forms in many places.
The trend is nearly always a diffusion from more industrialized to somewhat less industrialized countries, and often the result is poorer paint, faulty zamac alloys, and imprecise assembly. One example was the copies of Italian Ediltoys made by Meboto in Turkey. The Argentine Mukys featured paint that was flat and dull, unlike the bright colors of the original Hot Wheels. At the other extreme, Auto Pilen of Spain was an exception and copied models beautifully; these were as good as, and sometimes better than, the original Dinkys or Solidos in quality and paint.
== Collecting ==
Organized collecting of model cars developed shortly after the models first appeared on the market. Even before such companies as Corgi and Dinky were ten years old, adults were collecting them, particularly in the UK and the USA. Often, as well, adults seek the joys of childhood, collecting what they had destroyed in youth or what their parents had thrown away. This also led to the foundation of the Diecast Hall of Fame in 2009.
=== The adult collector ===
Many manufacturers began catering to the adult collector market. In the late 1960s and early 1970s, David Sinclair in Erie, Pennsylvania, was important in bringing new, more sophisticated and rarely produced years and makes to the United States. Model brands like Rio, Western Models, Brooklin, Idea3 and Pirate Models were sold to adult collectors for the first time. Many of these were handmade in white metal in small numbers. Also in the early 1970s, craftsmen like Carlo Brianza and Michelle Conti started making ultra-detailed large replicas in Italy and Spain, costing hundreds or even thousands of dollars. In addition, the company Pocher, from Italy, made extremely complex kits in 1:8 scale.
Around the early 1990s, many began to collect and record vehicle variations in miniature (in a manner similar to stamp or coin collecting), which led to rising values, especially for rare models (for an example, see Parker 1993). This led mass producers such as Matchbox (specifically with its Models of Yesteryear series) and Corgi to intentionally cater to a higher-price market segment with exclusive 'limited editions' of collectible vehicles. Thus, this smaller movement in the late 1960s and early 1970s gradually gave rise to a huge premium market segment by the early 1990s.
=== Licensing ===
The collectors' market also led to licensing aspects not known until the 1980s. In the 1950s and 1960s, models were produced spontaneously without licensing agreements, and real auto manufacturers saw it as free advertising. Today, model companies have licensing arrangements with real car manufacturers to make replicas of their products, whether they be concepts, cars in current production, or models no longer produced.
Model car manufacturers now routinely enter such licensing agreements, and the licenses appear on the models. Licenses are expensive, which enhances the position of mass producers of model cars, while smaller companies have been marginalized and forced out of business. For example, when Ferrari entered into an exclusive agreement with Mattel's Hot Wheels, companies like Solido and Bburago felt the crunch, and Bburago went out of business (though the name was eventually reacquired by Maisto).
=== Collectible manufacturers and locations ===
Manufacturers focusing on premium models, usually in white metal and sometimes resin, include Brooklin Models, Western Models, Enchantment Land, Conquest / Madison, Durham Classics, Elegance Models, Mini Auto Emporium, Mini Marque, Motor City USA, Tron, Starter, RacingModels, SMTS and Victory. Several of these started production in the 1970s and 1980s and were handmade in the United States, Canada, or England with the occasional constructor in France, Belgium or the Netherlands. A couple of geographical oddities include Goldvarg (made in Argentina) and some early Milestone Models which were made in South Africa. Mail order companies like Franklin Mint and Danbury Mint also focus on the collector market, though in a more popular vein.
Since 2000, more than fifty different diecast, resin and white metal manufacturers in England, France, Italy, Ukraine and Russia have entered the adult collector market. These include Spark, which focuses on motorsport such as the 24 Hours of Le Mans and Formula 1; Bizarre, a brand dedicated to the unusual and extraordinary in the car world; FDS; YOW Modellini (from Japan); and many others. In the same period, companies like Altaya, Ixo, and Model Car World (for example, with its White Box line) have been started in Europe, with production increasingly located in China. Many of these producers have focused on global auto marques, offering vehicles originally built in countries such as Russia or Brazil. Some of these companies only produce kits; others produce kits and build them up to order. Still others are professional kit builders who do not produce the kits themselves.
== Promotional models ==
Promotional models result when real auto manufacturers contract with model or toy companies to make copies of their vehicles. Some of the earliest promotional models date from the early 1930s, when TootsieToy introduced a line of 1932 Grahams and, later, the 1935 LaSalle. These were both diecast and made available in boxes displaying the brand name with appropriate logos and colors (Seeley, No Date). National Products made models of about 1:28 scale starting in 1934. Later manufacturers like Winross, Lesney Matchbox, Lledo, AHL, and White Rose placed company logos on the flanks of their toy vehicles as advertising.
In the U.S., Banthrico started producing diecast promotional model car banks in the late 1940s for the banking industry. These coin banks were available as gifts to customers who opened a new account and had a slot in the bottom for spare change. Usually the bank's name and address were painted on the roof of the car. Banthrico models were also painted in authentic Big Three colors and used as "paint chips" so dealers could gauge the upcoming colors on real models. These primitive promotionals included Buicks, Cadillacs, Lincolns, Packards, DeSotos, Chryslers, Dodges, Ramblers and the more common Chevrolets and Fords.
In the United States, the word 'promo' is usually associated with 1:25 scale plastic, pre-assembled models. In Europe, promotionals were made in smaller vehicle sizes in diecast zamac in 1:32, 1:43, or 1:50 scales. In the case of Chrysler's later Turbine Car, where 50 real cars were put into consumer use, the model by Jo-Han was widely distributed as a good will gesture by Chrysler, though the Turbine was never actually marketed.
In Japan, promotional models from the late 1950s until the 1970s were typically cast in pot metal and given a chrome or gold finish; they typically doubled as cigarette holders and ash trays.
=== The plastic promo ===
About the time Banthrico was declining as a promotional maker, two companies, PMC and Ideal Models (later to become Jo-Han), were introducing plastic promotional models to the public. Like the metal model producer Banthrico, PMC made many in the form of banks. Many Chevrolet bank models had the inscription on the bottom "To help save for a rainy day, or to buy a new Chevrolet." The scale for these cars was 1:25; however, a few Chevrolets and Plymouths were produced in a larger 1:20 scale. Other, less well known plastic companies like Lincoln Line, Cruver or Burd Manufacturing made the occasional promotional model, though cars may not have been the company's specialty.
AMT began producing assembled 1/25 scale friction and coaster models in 1948. These were mostly promotional models manufactured for automobile dealers. Youngsters would be given the scale models to play with while the parents and the salesman haggled. Collecting and trading these "promos" soon became a popular hobby. AMT soon took control of SMP, another plastic promotional model producer. By 1960, Wisconsin-based PMC had ceased to produce promo models, though it continued to make toys.
Interest in the hobby peaked during the 1950s and 1960s, with AMT, Jo-Han, and Model Products Corporation (MPC) as the primary promotional manufacturers.
Throughout, the promo producers were at the whim of the real automakers and would respond to requests for particular scales, paint colors, and other details like working suspensions or, on occasion, detailed engines or other opening features.
=== American promo details ===
These plastic models were intricately detailed, with body scripts, trim, and emblems, as well as dashboard details, exact duplicates of the real thing in 1/25 scale. Typically, each automaker would license its cars to one or more model companies. Sometimes the contracts seemed piecemeal: in 1965, for example, Chrysler had promos made by AMT, Jo-Han, and MPC. But often one of the Big 3 favored a particular model maker. For example, Jo-Han produced most Chrysler products, along with Cadillacs and Oldsmobiles from GM, while AMT did Chevrolets, Buicks, Pontiacs, and Fords. American Motors Corporation shared promotional duties between Jo-Han and AMT depending on the year. Contracts also sometimes changed between companies for similar models almost on an annual basis. For example, Jo-Han uncharacteristically produced the 1972 Ford Torino, and MPC did full-size Chevrolets in the early and mid-1970s. While Jo-Han did Chrysler early on, MPC took on the Pentastar in the mid-1970s. 1968 through 1970 Chevy Impala kits were made by both MPC and AMT, as were some Camaros. Trying to beat the competition to market, a model company would sometimes make a guess at a particular Big 3 model for a given year and thus get details wrong.
=== Marketing approaches ===
Commercial versions of the promos were also marketed and sold in retail stores like Zayre and Murphy's from the early 1960s up until around 1973. They differed from dealer promos in lacking the manufacturer's official paint schemes and often in the addition of a friction motor on the front axle, noticeable by the studded white vinyl gear that protruded around the axle (and through the oil pan). However, they were painted and looked just as attractive as dealer promos.
Some model companies sold unassembled versions of the promo cars that were typically simpler and easier to assemble than the annual kits (the engine and customizing parts available in the full-blown kits were left out). They were molded in color (instead of the traditional white) and assembled without glue, so no glue or paint was required. When assembled, these were almost identical to the more elite promotional models. What usually gives them away is that they were mostly molded in a brighter nonmetallic color, without paint matched to official 'Big 3' colors. AMT's "Craftsman" series of promo-like models had perforations in the bodies for mirrors and antennae, so the model's final appearance was not precisely like a promo (which would have had no custom parts attached to the body of the car). Probably because of the promo look, however, these today often command higher prices than the detailed "3-in-1" kits, especially AMT's Craftsman series of the early and mid sixties.
After Jo-Han had been owned for a time by Seville Enterprises, Okey Spaulding purchased the once-defunct company and produced a few of its original Jo-Han models in limited quantities. These include the 1963 Chrysler Turbine Car, the 1959 Rambler station wagon, and some of its original 1950s Oldsmobiles and Plymouths. However, the venture had financial problems from the start, and there are no indications that it will be able to continue producing the highly desired Jo-Han line of models.
=== European promotionals ===
With the exception of firms like Stahlberg, which made larger scale plastic promotional models of Swedish Volvos and Saabs in an American style, European promotionals were usually based on the 1:43 or 1:32 scale diecast metal models produced as toys or collectors' items, often brightly colored or given authentic tampo or silk screen liveries for commercial products. Companies commonly making promotionals in Europe have included NZG Models, Conrad Models and Gescha in Germany, and Tekno and Emek Muovi in Denmark and Finland, respectively. Tekno was one of the first European companies to offer a wide variety of promotional variations. Almost all European toy model brands had some kind of promotional service, but in Germany 1:50 scale was, and remains, very common for trucks. In the United States, such diecast companies are rare, but Winross Models and Pennjoy are a couple of European-style examples that have had much success, particularly Winross, which has been making models since the early 1960s.
Another variation on promotionals were whole toy lines or brands created to represent vehicles on display at particular automotive museums. Examples were Cursor Models of Germany, which made models of vehicles displayed in the Mercedes-Benz Museum in Sindelfingen; R.A.M.I. by J.M.K. of France, which made vehicles from the Automobile Museum de Rochetaillée sur Saône in France; and Dugu Miniautotoys of Italy, which made vehicles for the classic automobile museum in Turin.
== Model kits ==
Scale miniatures of real production vehicles, designed as kits for children or the enthusiast to construct, can be made of plastic, die-cast metal, resin, and even wood. In plastic model kits, parts are molded in single cast 'trees' with thin connections that can be easily severed for painting and assembly. Parts come molded in a variety of colors, white being the most common in the 1960s and 1970s. Some parts are chrome plated to simulate real bumpers, grilles, wheels, and other pieces that might be chrome on the actual vehicle. Tires are most commonly molded in rubber. Water 'slide-on' decals are usually included along with an instruction brochure.
The best kits have incredible levels of accuracy, even in detail and parts unseen when the model is complete. Major manufacturers are AMT, MPC, Revell, Monogram, and Tamiya, but many smaller plastics companies, like Aurora, Pyro, IMC, and Premier, have come and gone.
=== Pioneers ===
The model car kit hobby began in the post-World War II era with Ace and Berkeley wooden model cars. Revell pioneered the plastic model car in the late 1940s with its Maxwell kit, which was basically an unassembled version of a pull toy. Derek Brand, from England, created the first true plastic kit, a 1932 Ford Roadster, for Revell. He was also known for developing a line of 1/32 scale model car kits in England for the Gowland brothers. These kits were later introduced by Revell in the U.S. as the "Highway Pioneers" series of kits.
On the heels of the promotional model business, Aluminum Model Toys or AMT introduced model car kits in 1957. Jo-Han, Revell and Monogram also started producing model car kits about this same time. Most of these were known as "annual" kits, and were the unassembled kit version of the promotional models or 'promos' representing the new cars that were introduced at the beginning of each model year. As early as 1962, avid British collector Cecil Gibson had even written a book on plastic model cars. By the mid-1960s, plastic model kits had become more plentiful and varied, with an increased level of detail. The kits often had opening hoods, separate engines and detailed suspension parts.
=== Customizers ===
The mid-1960s is generally considered the "golden age" of plastic model car kits. Many specialty modelers and customizers, famous for their wild creations, were hired by model companies to sponsor and create new kit designs. George Barris, Darryl Starbird, and the Alexander Brothers worked for AMT. Tom Daniel designed vehicles for Monogram and Mattel. Dean Jeffries was employed by MPC. Bill Campbell created hippie monster designs for Hawk. Ed Roth, famous for his 'Rat Fink', was hired by Revell about 1962. Many of these customizers created real cars and had to have specialists convert their creations into model kit form. Jim Keeler, a model kit designer for Revell, brought the world highly detailed model cars in the early sixties and is credited with bringing Ed Roth's famous hot rods and customs to the model car marketplace. He also designed Revell's Custom Car Parts, which allowed kit builders to add engines, custom wheels and other custom features to existing models. Keeler later went on to Aurora Plastics, where he created the Prehistoric Scenes line of highly detailed dinosaur models. Many of Keeler's kit designs are still being sold in the 21st century.
In addition to building them stock, most annual kits offered "3 in 1" versions which allowed the builder to assemble the car in stock, custom, or racing form. MPC joined the kit/promo business in 1965, and among their first annual kits/promos was the full-size Dodge Monaco, which was released with a gold metallic plastic body and is a valuable collector's item today.
=== Decline and revival ===
Interest in model car kits began to wane in the mid-1970s. While the precise causes are not entirely clear, contributing factors included a sharp rise in the price of plastics, parents becoming cautious about 'glue sniffing' and, later, the rise of video gaming. A revival of sorts was seen in the late 1980s, especially among adults, as Monogram introduced a series of replicas of NASCAR race cars, as did AMT with a kit of the 1966 Chevrolet Nova, which American modelers had been requesting for years. New model-specific magazines sprang up, such as Scale Auto Enthusiast (now simply Scale Auto) and Model Cars Magazine. These magazines spread the word, helped advertisers, and brought a new generation of modelers together from all across the country.
Many of the kits from the golden age of modeling have been reissued. Not only does this allow the craftsman to build the cars they always wanted (but couldn't obtain or afford), but it tends to lower the prices of the originals. In some cases, models of cars from the 1950s and 1960s have been issued with all-new tooling, which allows for even more detailing with modern kit design and manufacturing methods. These include AMT's 1966 Fairlane and 1967 Impala SS, and Monogram's 1967 Chevelle and 1965 Impala Super Sport.
Today, model car companies are still in business, fueled by this renewed interest. ERTL took over AMT and MPC which are now both under the Round 2 LLC name. Revell and Monogram have merged. Modelers today can take advantage of modern technology, which includes photoetched details, adhesive chrome foil for chrome trim, wiring for engines, and billet-aluminum parts. Many builders today can construct a model so it resembles the real car in miniature, much more than could have been done with essentially the same kit more than forty years ago.
The internet has also fueled a growing modeling community through websites, online forums and bulletin boards, and sites that host photographs, allowing the hobby to expand internationally.
=== Japanese kits ===
Japanese model kit manufacturers – Tamiya, Fujimi, Aoshima, and Hasegawa, among them – also stepped up their presence in the U.S. market during the 1980s and 1990s. Lesser known kit manufacturers, at least in the United States, were Doyusha, Yamada, Nichimo, Otaki, Marui, Rosso, and Arii. Japanese kits are generally known for being highly detailed and of very high quality. Most of the subjects of these companies are Japanese cars, both classic and current (as well as ships, planes and military vehicles). For example, Hasegawa and Aoshima make detailed models of the first-generation Toyota Celica, which has become something of a classic. Hasegawa also produced 1/25 scale models of 1965–66 American cars, including the 1965 Chevrolet Impala, and the 1966 Buick Wildcat, Cadillac Coupe DeVille, and Thunderbird Landau. These were actually Jo-Han and AMT kits that were simplified and modified for the Japanese market.
=== Short-run multimedia kits ===
Since the mid-1990s, several companies, including Tameo, Studio 27, Model Factory Hiro, and Renaissance, have issued hundreds of sports car and Formula 1 subjects in limited-run kit sets. These so-called "multimedia" offerings consist of a combination of resin, white metal, photo-etch, and machined aluminum instead of injection-molded plastic parts. The most popular scales are 1/43, 1/20, and 1/24. These multimedia kits are very high quality, require a wide set of construction skills to complete, and are marketed to international competition enthusiasts.
== Powered models ==
Though most car models are static display items, individual model builders have sometimes powered their vehicles in various ways, including rubber bands, springs, inertia mechanisms, electric motors, internal combustion engines, air engines and steam engines. In order to make them less fragile, powered models are often somewhat simplified and not as detailed as the best static models. For this reason, some modelers dismiss nearly all powered miniature cars as toys; however many individual efforts and commercial products are sufficiently well-scaled and detailed that they deserve to be called models. The main types of commercially produced powered car models include:
Uncontrolled powered models, which were developed in the 1930s and were common until the 1960s. Often guided by a rail between the wheels, or by a tether staked to the center of a circular course, most of these cars use small internal combustion glow plug engines and are known as tether cars.
Electrically powered slot cars, which draw power from the track. They became extremely popular in the 1960s, but commercial slot car racing experienced a rapid decline in popularity late in the decade. By the end of the 1970s, the slot car hobby had diminished significantly, especially at public tracks operating larger scale cars, and modeling in general was on the decline (HO Slot Car Racing 1999–2011). One website attributes the weakening of the pastime to the ageing of the baby-boomers, the fragile economics of the slot car industry, and the closing of many commercial slot car tracks, perhaps as toy companies offered smaller sets to be used at home. A wide variety of electrically powered vehicles is nevertheless available today in various forms.
Battery powered model cars are also available. They exist in versions with or without remote control and are common toys.
Spring-powered or "clockwork" car models, which are wound with a key or by a friction mechanism. These were common until slot cars largely replaced them in the 1960s. In fact, the first commercially successful slot cars, the Scalextric 1/32 line (originally 1:30), which debuted in 1957, were simply motorized versions of the earlier Scalex clockwork racers.
Radio-controlled cars, which can be bought assembled or built from kits. These are usually powered by electric motors or glow plug engines. Drivers can control the speed and steering of these cars remotely by a radio signal.
Combustion engine powered model cars are expensive and usually remote controlled. As combustion engines pose significant dangers, such cars are not suitable for children. Combustion engine powered model cars are often used for races.
== See also ==
Model building
Die-cast toy
Diecast Collector (magazine)
List of model car brands
== References ==
=== Reference bibliography ===
A model rocket is a small rocket designed to reach low altitudes (e.g., 100–500 m (330–1,640 ft) for a 30 g (1.1 oz) model) and be recovered by a variety of means.
According to the United States National Association of Rocketry (NAR)'s Safety Code, model rockets are constructed of lightweight and non-metallic parts. The materials are typically paper, cardboard, balsa wood or plastic. The code also provides guidelines for motor use, launch site selection, launch methods, launcher placement, recovery system design and deployment, and more. Since the early 1960s, a copy of the Model Rocket Safety Code has been provided with most model rocket kits and motors. Despite its inherent association with extremely flammable substances and pointed objects traveling at high speed, model rocketry historically has proven to be a very safe hobby and has been credited as a significant source of inspiration for children who have eventually become scientists and engineers.
== History of model rocketry ==
While many small rockets had been produced after years of research and experimentation, the first modern model rocket, and more importantly the model rocket motor, was designed in 1954 by Orville Carlisle, a licensed pyrotechnics expert, and his brother Robert, a model airplane enthusiast. They originally designed the motor and rocket for Robert to use in lectures on the principles of rocket-powered flight. But then Orville read articles written in Popular Mechanics by G. Harry Stine about the safety problems associated with young people trying to make their own rocket engines. With the launch of Sputnik, many young people were trying to build their own rocket motors, often with tragic results. Some of these attempts were dramatized in the fact-based 1999 film October Sky. The Carlisles realized their motor design could be marketed and provide a safe outlet for a new hobby. They sent samples to Stine in January 1957. Stine, a range safety officer at White Sands Missile Range, built and flew the models, and then devised a safety handbook for the activity based on his experience at the range.
The first American model rocket company was Model Missiles Incorporated (MMI), in Denver, Colorado, opened by Stine and others. Stine had model rocket engines made by a local fireworks company recommended by Carlisle, but reliability and delivery problems forced Stine to approach others. Stine eventually approached Vernon Estes, the son of a local fireworks maker. Estes founded Estes Industries in 1958 in Denver, Colorado and developed a high-speed automated machine for manufacturing solid model rocket motors for MMI. The machine, nicknamed "Mabel", made low-cost motors with great reliability, and did so in quantities much greater than Stine needed. Stine's business faltered and this enabled Estes to market the motors separately. Subsequently, he began marketing model rocket kits in 1960, and eventually, Estes dominated the market. Estes moved his company to Penrose, Colorado in 1961. Estes Industries was acquired by Damon Industries in 1970. It continues to operate in Penrose today.
Competitors like Centuri and Cox came and went in America during the 1960s, 1970s, and 1980s, but Estes continued to control the American market, offering discounts to schools and clubs like Boy Scouts of America to help grow the hobby. In recent years, companies like Quest Aerospace have taken a small portion of the market, but Estes continues to be the main source of rockets, motors, and launch equipment for the low- to medium-power rocketry hobby today. Estes produces and sells black powder rocket motors.
Since the advent of high-power rocketry, which began in the mid-1980s with the availability of G- through J-class motors (each letter designation has up to twice the energy of the one before), a number of companies have shared the market for larger and more powerful rockets. By the early 1990s, Aerotech Consumer Aerospace, LOC/Precision, and Public Missiles Limited (PML) had taken up leadership positions, while a host of engine manufacturers provided ever larger motors, and at much higher costs. Companies like Aerotech, Vulcan, and Kosdon were widely popular at launches during this time as high-power rockets routinely broke Mach 1 and reached heights over 3,000 m (9,800 ft). In a span of about five years, the largest regularly made production motors available reached N, which had the equivalent power of over 1,000 D engines combined, and could lift rockets weighing 50 kg (110 lb) with ease. Custom motor builders continue to operate on the periphery of the market today, often creating propellants that produce colored flame (red, blue, and green being common), black smoke and sparking combinations, as well as occasionally building enormous motors of P, Q, and even R class for special projects such as extreme-altitude attempts over 17,000 m (56,000 ft).
High-power motor reliability was a significant issue in the late 1980s and early 1990s, with catastrophic engine failures occurring relatively frequently (est. 1 in 20) in motors of L class or higher. At costs exceeding $300 per motor, the need to find a cheaper and more reliable alternative was apparent. Reloadable motor designs (metal sleeves with screwed-on end caps and filled with cast propellant slugs) were introduced by Aerotech and became very popular over the span of a few years. These metal containers needed only to be cleaned and refilled with propellant and a few throw-away components after each launch. The cost of a "reload" was typically half of a comparable single use motor. While catastrophes at take-off (CATOs) still occur occasionally with reloadable motors (mostly due to poor assembly techniques by the user), the reliability of launches has risen significantly.
It is possible to change the thrust profile of solid-propellant motors by selecting different propellant designs. Since thrust is proportional to burning surface area, propellant slugs can be shaped to produce very high thrust for a second or two, or to have a lower thrust that continues for an extended time. Depending on the weight of the rocket and the maximum speed threshold of the airframe and fins, appropriate motor choices can be used to maximize performance and the chance of successful recovery.
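As a rough illustration of that relationship, the sketch below compares the exposed burning area of two idealized grain shapes over time, treating thrust as simply proportional to that area. It is a minimal sketch with made-up dimensions and a constant burn rate; the function names and numbers are illustrative, not any manufacturer's data.

```python
import math

def core_burner_area(r0, R, L, burn_rate, t):
    """Burning area of an idealized hollow cylindrical grain whose core
    burns radially outward (ends inhibited), t seconds after ignition."""
    r = r0 + burn_rate * t       # the core radius grows as propellant burns
    if r >= R:
        return 0.0               # the web is consumed: burnout
    return 2 * math.pi * r * L   # only the inner core surface is burning

def end_burner_area(R):
    """An end-burning grain exposes one constant circular face."""
    return math.pi * R ** 2

# A core burner is progressive (area, hence thrust, rises until burnout),
# while an end burner holds a low, constant area for a much longer burn.
for t in (0.0, 0.4, 0.8):
    print(f"t={t}s  core={core_burner_area(0.005, 0.012, 0.07, 0.007, t):.5f} m^2"
          f"  end={end_burner_area(0.012):.5f} m^2")
```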
Aerotech, Cesaroni, Rouse-Tech, Loki and others have standardized around a set of common reload sizes such that customers have great flexibility in their hardware and reload selections, while there continues to be an avid group of custom engine builders who create unique designs and occasionally offer them for sale.
== Precautions and safety ==
Model rocketry is a safe and widespread hobby. Individuals such as G. Harry Stine and Vernon Estes helped to ensure this by developing and publishing the NAR Model Rocket Safety Codes and by commercially producing safe, professionally designed and manufactured model rocket motors. The safety code is a list of guidelines and is only mandatory for National Association of Rocketry members.
A primary motivation for the development of the hobby in the 1950s and 1960s was to enable young people to make flying rocket models without having to construct the dangerous motor units or directly handle explosive propellants.
The NAR and the TRA successfully sued the US Bureau of Alcohol, Tobacco, Firearms and Explosives (BATFE) over the classification of Ammonium Perchlorate Composite Propellant (APCP), the most commonly used propellant in high-power rocket motors, as an explosive. The March 13, 2009 decision by DC District court judge Reggie Walton removed APCP from the list of regulated explosives, essentially eliminating BATFE regulation of hobby rocketry.
== Model rocket motors ==
Most small model rocket motors are single-use engines, with cardboard bodies and lightweight molded clay nozzles, ranging in impulse class from fractional A to G. Model rockets generally use commercially manufactured black-powder motors. These motors are tested and certified by the National Association of Rocketry, the Tripoli Rocketry Association (TRA) or the Canadian Association of Rocketry (CAR). Black-powder motors come in impulse ranges from 1/8A to F.
The physically largest black-powder model rocket motors are typically F-class, as black powder is very brittle. If a large black-powder motor is the upper stage motor of a rocket that exceeds the maximum recommended takeoff weight, or is dropped or exposed to many heating/cooling cycles (e.g., in a closed vehicle exposed to high heat or a storage area with inconsistent temperature control), the propellant charge may develop hairline fractures. These fractures increase the surface area of the propellant, so that when the motor is ignited, the propellant burns much faster and produces greater than normal internal chamber pressure inside the engine. This pressure may exceed the strength of the paper case and cause the motor to burst. A bursting motor can cause damage to the model rocket ranging from a simple ruptured motor tube or body tube to the violent ejection (and occasionally ignition) of the recovery system.
Therefore, rocket motors with power ratings higher than D to F customarily use composite propellants made of ammonium perchlorate, aluminium powder, and a rubbery binder substance contained in a hard plastic case. This type of propellant is similar to that used in the solid rocket boosters of the Space Shuttle and is not as fragile as black powder, increasing motor reliability and resistance to fractures in the propellant. These motors range in impulse from size A to O. Composite motors produce more impulse per unit weight (specific impulse) than do black-powder motors.
Reloadable composite-propellant motors are also available. These are commercially produced motors requiring the user to assemble propellant grains, o-rings and washers (to contain the expanding gases), delay grains and ejection charges into special non-shattering aluminum motor casings with screw-on or snap-in ends (closures). The advantage of a reloadable motor is cost: firstly, because the main casing is reusable, reloads cost significantly less than single-use motors of the same impulse. Secondly, assembly of larger composite engines is labor-intensive and difficult to automate; off-loading this task onto the consumer results in cost savings. Reloadable motors are available from D through O class.
Motors are electrically ignited with an electric match consisting of a short length of pyrogen-coated nichrome, copper, or aluminum bridgewire pushed into the nozzle and held in place with flameproof wadding, a rubber band, a plastic plug or masking tape. On top of the propellant is a tracking delay charge, which produces smoke but in essence no thrust, as the rocket slows down and arcs over. When the delay charge has burned through, it ignites an ejection charge, which is used to deploy the recovery system.
Most model rocket motors do not provide any sort of thrust vectoring, relying instead on fins at the base to keep the vehicle aerodynamically stable. Some rockets, however, have thrust vectoring control (TVC), achieved by gimbaling the motor itself rather than the nozzle. A number of hobbyists have built such rockets, the most notable being BPS.space.
== Performance ==
The impulse (area under the thrust-time curve) of a model motor is used to determine its class. Motors are divided into classes from 1/4A to O and beyond. Black powder rocket motors are typically only manufactured up to Class F. Each class's upper limit is double the upper limit of the previous class.
Model rockets only use motors that are class G and below. Rockets using motors with a greater impulse are considered high power rockets.
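Because each letter's upper limit simply doubles, the class for a given total impulse can be computed directly. Below is a minimal sketch of that mapping, using the standard limits implied above (class A topping out at 2.5 Ns, B at 5.0 Ns, and so on); the fractional-A cutoffs follow the same halving pattern downward.

```python
def impulse_class(total_impulse_ns):
    """Letter class for a motor's total impulse in newton-seconds.
    Class A tops out at 2.5 Ns and each later class doubles that limit,
    so B is 2.51-5.0 Ns, C is 5.01-10.0 Ns, and so on."""
    for name, upper in (("1/8A", 0.3125), ("1/4A", 0.625), ("1/2A", 1.25)):
        if total_impulse_ns <= upper:
            return name
    upper = 2.5
    for letter in "ABCDEFGHIJKLMNO":
        if total_impulse_ns <= upper:
            return letter
        upper *= 2
    return ">O"

assert impulse_class(9.0) == "C"     # a ~9 Ns motor is a C
assert impulse_class(160.0) == "G"   # the top of the model-rocket range
assert impulse_class(161.0) == "H"   # anything above 160 Ns is high power
```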
Figures from tests of Estes rocket motors are used in the following examples of rocket motor performance.
For miniature black powder rocket motors (13 mm diameter), the maximum thrust is between 5 and 12 N, the total impulse is between 0.5 and 2.2 Ns, and the burn time is between 0.25 and 1 second. For Estes 'regular size' rocket motors (18 mm diameter), there are three classes: A, B, and C. The A class 18 mm motors have a maximum thrust between 9.5 and 9.75 N, a total impulse between 2.1 and 2.3 Ns, and a burn time between 0.5 and 0.75 seconds. The B class 18 mm motors have a maximum thrust between 12.15 and 12.75 N, a total impulse between 4.2 and 4.35 Ns, and a burn time between 0.85 and 1 second. The C class 18 mm motors have a maximum thrust between 14 and 14.15 N, a total impulse between 8.8 and 9 Ns, and a burn time between 1.85 and 2 seconds.
Estes large (24 mm diameter) rocket motors also come in three classes: C, D, and E. The C class 24 mm motors have a maximum thrust between 21.6 and 21.75 N, a total impulse between 8.8 and 9 Ns, and a burn time between 0.8 and 0.85 seconds. The D class 24 mm motors have a maximum thrust between 29.7 and 29.8 N, a total impulse between 16.7 and 16.85 Ns, and a burn time between 1.6 and 1.7 seconds. The E class 24 mm motors have a maximum thrust between 19.4 and 19.5 N, a total impulse between 28.45 and 28.6 Ns, and a burn time between 3 and 3.1 seconds. Estes has also released a line of 29 mm black powder E and F motors. The 29 mm E produces 33.4 newton-seconds of total impulse over a 2.1 second burn, and the F produces 49.6 newton-seconds over a 3.45 second burn.
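Since total impulse is the area under the thrust-time curve, dividing it by burn time gives the motor's average thrust, which is roughly the figure a motor designation encodes (see Motor nomenclature below). A quick sketch using mid-range 24 mm E class figures quoted above:

```python
def average_thrust(total_impulse_ns, burn_time_s):
    """Average thrust in newtons: total impulse divided by burn time."""
    return total_impulse_ns / burn_time_s

# Mid-range figures for the 24 mm Estes E class quoted above:
print(round(average_thrust(28.5, 3.0), 1))  # ~9.5 N average thrust
```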
Several independent sources have published measurements showing that Estes model rocket engines often fail to meet their published thrust specifications.
== Motor nomenclature ==
Model rocket motors produced by companies like Estes Industries, Centuri Engineering and Quest Aerospace are stamped with a code (such as A10-3T or B6-4) that indicates several things about the motor.
The Quest Micro Maxx engines are the smallest at a diameter of 6 mm. The company Apogee Components made 10.5 mm micro motors; however, those were discontinued in 2001. Estes manufactures size "T" (Tiny) motors that are 13 mm in diameter by 45 mm long from 1/4A through A class, while standard A, B and C motors are 18 mm in diameter by 70 mm long. C, D, and E class black-powder motors are also available; they are 24 mm in diameter and either 70 mm (C and D motors) or 95 mm long (E motors). Estes also produces a line of 29 mm diameter by 114 mm long E and F class black powder motors. Larger composite propellant motors, such as F and G single-use motors, are also 29 mm in diameter. High-power motors (usually reloadable) are available in 29 mm, 38 mm, 54 mm, 75 mm, and 98 mm diameters.
=== First letter ===
The letter at the beginning of the code indicates the motor's total impulse range (commonly measured in newton-seconds). Each letter in successive alphabetical order has up to twice the impulse of the letter preceding it. This does not mean that a given "C" motor has twice the total impulse of a given "B" motor, only that C motors are in the 5.01-10.0 N-s range while "B" motors are in the 2.51-5.0 N-s range. The designations "¼A" and "½A" are also used. For a more complete discussion of the letter codes, see Model rocket motor classification.
For instance, a B6-4 motor from Estes-Cox Corporation has a total impulse rating of 5.0 N-s. A C6-3 motor from Quest Aerospace has a total impulse of 8.5 N-s.
=== First number ===
The number that comes after the letter indicates the motor's average thrust, measured in newtons. A higher thrust will result in higher liftoff acceleration, and can be used to launch a heavier model. Within the same letter class, a higher average thrust also implies a shorter burn time (e.g., a B6 motor will not burn as long as - but will have more initial thrust than - a B4). Motors within the same letter class that have different first numbers are usually for rockets with different weights. For example, a heavier rocket would require an engine with more initial thrust to get it off the launch pad, whereas a lighter rocket would need less initial thrust and would sustain a longer burn, reaching higher altitudes.
=== Last number ===
The last number is the delay in seconds between the end of the thrust phase and ignition of the ejection charge. Black powder motors whose designation ends in zero have no delay or ejection charge. Such motors are typically used as first-stage motors in multistage rockets, as the lack of a delay element and cap permits burning material to burst forward and ignite an upper-stage motor.
A "P" indicates that the motor is "plugged". In this case, there is no ejection charge, but a cap is in place. A plugged motor is used in rockets that do not need to deploy a standard recovery system such as small rockets that tumble or R/C glider rockets. Plugged motors are also used in larger rockets, where electronic altimeters or timers are used to trigger the deployment of the recovery system.
Composite motors usually have a letter or combination of letters after the delay length, indicating which of the manufacturer's different propellant formulations (resulting in colored flames or smoke) is used in that particular motor.
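Putting the pieces of the code together, a designation like B6-4 or A10-3T can be unpacked mechanically. The following is a minimal sketch for the common single-use format described above; real-world designations vary by manufacturer (see the reloadable and CTI formats below), and the regular expression and field names here are illustrative.

```python
import re

# letter class, average thrust (N), '-', delay in seconds (or 'P' for
# plugged), then an optional suffix such as the 'T' in A10-3T
MOTOR_CODE = re.compile(r"^(1/[248]A|[A-O])(\d+)-(\d+|P)([A-Z]*)$")

def parse_motor_code(code):
    m = MOTOR_CODE.match(code)
    if not m:
        raise ValueError(f"unrecognized motor code: {code}")
    letter, thrust, delay, suffix = m.groups()
    return {
        "impulse_class": letter,                          # total-impulse class
        "average_thrust_n": int(thrust),                  # newtons
        "delay_s": None if delay == "P" else int(delay),  # None means plugged
        "suffix": suffix or None,   # size or propellant-formulation letters
    }

print(parse_motor_code("B6-4"))    # class B, 6 N average thrust, 4 s delay
print(parse_motor_code("A10-3T"))  # a 13 mm 'T' motor with a 3 s delay
```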
=== Reloadable motors ===
Reloadable rocket motors are specified in the same manner as single-use model rocket motors as described above. However, they have an additional designation that specifies both the diameter and maximum total impulse of the motor casing in the form of diameter/impulse. After that, there is a series of letters indicating the propellant type. However, not all companies that produce reloadable motors use the same designations for their motors.
An Aerotech reload designed for a 29-millimeter-diameter case with a maximum total impulse of 60 newton-seconds carries the designation 29/60 in addition to its impulse specification.
However, Cesaroni Technology Incorporated (CTI) motors use a different designation. They first have "Pro" followed by a number representing the diameter of the motor in millimeters; for example, a Pro38 motor is a 38mm diameter motor. After this comes a string of characters giving the total impulse in newton-seconds, the motor classification letter, the average thrust in newtons, a dash, and the delay time in seconds. For example, a Pro29 110G250-14 is a G-motor with 110 Ns of impulse, 250 N of thrust, and a 14-second delay.
== Model rocket recovery methods ==
Model and high-power rockets are designed to be safely recovered and flown repeatedly. The most common recovery methods are parachute and streamer. The motor's ejection charge pops off the nose cone, which is attached to the parachute and pulls it out of the body tube, allowing a soft landing.
=== Featherweight recovery ===
The simplest approach, which is appropriate only for the tiniest of rockets, is to let the rocket flutter back to the ground after ejecting the motor. This is slightly different from tumble recovery, which relies on some system to destabilize the rocket to prevent it from entering a ballistic trajectory on its way back to Earth.
=== Tumble recovery ===
Another simple approach appropriate for small rockets — or rockets with a large cross-sectional area — is to have the rocket tumble back to Earth. Any rocket that will enter a stable, ballistic trajectory as it falls is not safe to use with tumble recovery. To prevent this, some such rockets use the ejection charge to slide the engine to the rear of the rocket, moving the center of mass behind the center of pressure and thus making the rocket unstable.
=== Nose-blow recovery ===
Another very simple recovery technique, used in very early models in the 1950s and occasionally in modern examples, is nose-blow recovery. This is where the ejection charge of the motor ejects the nose cone of the rocket (usually attached by a shock cord made of rubber, Kevlar string or another type of cord) from the body tube, destroying the rocket's aerodynamic profile, causing highly increased drag, and reducing the rocket's airspeed to a safe rate for landing. Nose-blow recovery is generally only suitable for very light rockets.
=== Parachute/Streamer ===
The parachute/streamer approach is used most often in small model rockets, but can also be used with larger rockets. It uses the ejective force of the motor to deploy, or push out, the parachute or streamer. The parachute is attached to the body either directly, by means of a ripcord, or indirectly, when it is attached to the nose cone, which is in turn attached to the body by a ripcord. Typically, a ball or mass of fireproof paper or material, sometimes referred to as recovery wadding, is inserted into the body before the parachute or streamer. This allows the ejection charge to propel the wadding, parachute, and nose cone without damaging the recovery equipment. Air resistance slows the rocket's fall, ending in a smooth, controlled and gentle landing.
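The landing speed such a system produces can be estimated by balancing the rocket's weight against parachute drag. This is a back-of-the-envelope sketch, not a vendor formula; the drag coefficient and air density here are assumed values.

```python
import math

def descent_speed(mass_kg, chute_diameter_m, cd=0.75, rho=1.225):
    """Steady descent speed where parachute drag equals weight:
    m*g = 0.5 * rho * v^2 * Cd * A, solved for v (m/s)."""
    area = math.pi * (chute_diameter_m / 2) ** 2
    return math.sqrt(2 * mass_kg * 9.81 / (rho * cd * area))

# e.g. a 100 g rocket under a 45 cm parachute lands at roughly 3.7 m/s
print(round(descent_speed(0.100, 0.45), 1))
```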
=== Glide recovery ===
In glide recovery, the ejection charge either deploys an airfoil (wing) or separates a glider from the motor. If properly trimmed, the rocket/glider will enter a spiral glide and return safely; BnB Rockets' "Boost Glider" is one example of a gliding recovery system. In some cases, radio-controlled rocket gliders are flown back to the earth by a pilot in much the same way as R/C model airplanes are flown.
Some rockets (typically long thin rockets) are the proper proportions to safely glide to Earth tail-first. These are termed 'backsliders'.
=== Helicopter recovery ===
The ejection charge, through one of several methods, deploys helicopter-style blades and the rocket autorotates back to earth. Typically, the ejection charge pressurizes the body tube and pops out the nose cone, which is connected by rubber bands to three or more blades. The rubber bands pull the blades out, and they provide enough drag to soften the landing.
In some rockets, the fins are used as the blades as well. In these, the ejection charge pushes an internal tube with tabs sticking out of the rocket that hold the fins during launch. The tabs then release the rubber band-loaded fins, which pivot up into helicopter position.
=== Propulsive recovery ===
A very small number of people have been pursuing propulsive landing to recover their model rockets using active control through thrust vectoring. The most notable example of this is Joe Barnard's rockets such as "Echo" and the "Scout" series of rockets as part of the BPS.Space project. In 2022, BPS.Space successfully landed the Scout F Model Rocket with plume impingement throttling. In 2023, Teddy Duncker's TTB Aerospace successfully landed the LLL Model Rocket.
== Instrumentation ==
=== Aerial photography ===
Cameras and video cameras can be launched on model rockets to take aerial photographs in-flight. These photographs can be taken in many ways. Mechanized timers can be used or passive methods may be employed, such as strings that are pulled by flaps that respond to wind resistance. Microprocessor controllers and altimeters can also be used. However, the rocket's speed and motion can lead to blurry photographs, and quickly changing lighting conditions as the rocket points from ground to sky can affect video quality. Video frames can also be stitched together to create panoramas. As parachute systems can be prone to failure or malfunction, model rocket cameras need to be protected from impact with the ground.
The first commercially available system was the Estes CAMROC in 1965. This system used a 1.5 inch round film negative held in a large pill-shaped camera body with the lens facing forwards. It would take a single photograph after apogee as the rocket deployed its parachute. The hobbyist would then send the negatives back to Estes for developing and printing.
The second system was also released by Estes in 1970. Created by Mike Dorfler, the CINEROC held 20 seconds of Super 8mm film that ran at 30 fps, making for a slow-motion effect. Like the CAMROC negatives, these special movie cartridges needed to be shipped back to Estes to be processed.
In 1979, Estes released the Astrocam 110, the first single frame camera rocket that took multiple single-shot pictures (one per flight) using standard Kodak 110 cartridge film. Unlike its CAMROC predecessor, it could use color film, and did not require sending the film back to Estes for processing. It did require asking the processor to 'flip' the negative before printing, as the camera used a mirror to take its pictures; otherwise, prints had to be held up to a mirror to be seen in the correct attitude. Through the 1980s and into the early 2000s, the Astrocam 110 was revised and updated, first as a kit in which the buyer built the camera, then as a version with a pre-built camera, then as an Almost Ready to Fly model such as the Astrocam RTF, and finally as the renamed Snapshot RTF. By the mid 2000s, these models had been retired, as the first digital video cameras started to appear on the market. With Kodak ending 110 film production, only specialty film producers, such as the Austrian-based Lomography Company, make the ASA400 film needed for these cameras.
In 2005 the Oracle Video Rocket, and in 2007 the AstroVision digital/video camera, were released by Estes. Both systems were capable of recording a flight from start to finish, but required downloading after each flight, as expandable memory had not been incorporated into them. The AstroVision did have a snapshot mode, so it could take multiple pictures over more than a single flight, but movie mode was a single take before the camera needed to be attached to a laptop. Both models were discontinued by 2010. A major reason for this was the advent of the 'key-fob camera', many of which were more powerful, lighter, easier to attach to any rocket (rather than requiring a specific model), much less expensive, and equipped with expandable memory in the form of Mini SD cards. These devices also have the advantage of rechargeable batteries, and since they are built on the same plug-and-play technology flash drives use, they do not need any extra drivers installed on a computer to work. In 2020, Estes brought out a new key-fob based camera, which now bears the Astrocam name. As of 2024, two versions can be found: a full rocket kit whose nose cone has a mount for the fob, and the Universal Astrocam, which has the fob along with a holding mount that allows the camera to ride on other models.
In the area of higher powered rocketry, there are also experimental homemade rockets that include onboard video cameras, with multiple methods for shooting videos. One is to transmit a signal down to a receiver, as in the BoosterVision series of cameras. The second method is to record on board and download the footage after recovery, the method employed by the Estes cameras listed above. (Some experimenters use the Aiptek PenCam Mega for this; the lowest power usable with this method is a C or D motor.)
=== Instrumentation and experimentation ===
Model rockets with electronic altimeters can report and/or record data such as maximum speed, acceleration, and altitude. Two methods of determining these quantities are (a) to carry an accelerometer and a timer and work backwards from the acceleration to the speed and then to the height, and (b) to carry a barometer and a timer, obtaining the height from the difference between the pressure on the ground and the pressure in the air and working forwards from the timed measurements to the speed and acceleration.
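A minimal sketch of both methods, assuming idealized sensor data (a fixed sample interval, gravity already subtracted from the accelerometer readings, and a standard-atmosphere pressure model):

```python
def altitude_from_accel(samples, dt):
    """Method (a): integrate net vertical acceleration (m/s^2, gravity
    already subtracted) twice over fixed time steps dt (s)."""
    v = h = 0.0
    for a in samples:
        v += a * dt   # acceleration -> velocity
        h += v * dt   # velocity -> altitude
    return h

def altitude_from_pressure(p_pa, p0_pa=101325.0):
    """Method (b): barometric altitude (m) from the ratio of measured
    pressure to ground pressure, standard-atmosphere approximation."""
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))

print(round(altitude_from_accel([50.0] * 100, 0.01)))  # 1 s at 50 m/s^2 -> ~25 m
print(round(altitude_from_pressure(95000.0)))          # ~540 m of altitude
```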
Rocket modelers often experiment with rocket sizes, shapes, payloads, multistage rockets, and recovery methods. Some rocketeers build scale models of larger rockets, space launchers, or missiles.
== High-power rocketry ==
As with low-power model rockets, high-power rockets are also constructed from lightweight materials. Unlike model rockets, high-power rockets often require stronger materials such as fiberglass to withstand the higher stresses during flights that often exceed speeds of Mach 1 (340 m/s) and over 3,000 m (9,800 ft) altitude.
Because of the potential risk to other aircraft, coordination with proper authorities is often required.
High-power rockets are propelled by larger motors ranging from class H to class O, and/or weigh more than 3.3 lbs or 1,500 grams at liftoff. Their motors are almost always reloadable rather than single-use, in order to reduce cost. Recovery and/or multi-stage ignition may be initiated by small on-board computers, which use an altimeter or accelerometer for detecting when to ignite engines or deploy parachutes.
High-power model rockets can carry large payloads, including cameras and instrumentation such as GPS units.
=== Differences from model rocketry ===
A high-power rocket typically has one or more of the following characteristics:
The rocket weighs more than 1,500 grams
The rocket is rarely made of metallic or other high-performance materials, such as aluminum and carbon fiber, as this is against industry safety standards required by the National Association of Rocketry and Tripoli. Instead, fiberglass is often used in order to withstand the rigors of high-power motors and flight.
The motor used contains more than 125 grams of propellant
The motor used has an impulse of more than 160 Newton-seconds (is an H-class or above) or uses multiple motors with a total impulse of more than 320 Newton-seconds.
Exact requirements vary from one jurisdiction to another.
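As a sketch, the criteria listed above can be expressed as a simple check; this is illustrative only since, as noted, exact requirements vary by jurisdiction.

```python
def is_high_power(liftoff_mass_g, propellant_mass_g, total_impulse_ns,
                  combined_impulse_ns=None):
    """Rough classifier based on the criteria listed above."""
    return (liftoff_mass_g > 1500                 # over 1,500 g at liftoff
            or propellant_mass_g > 125            # over 125 g of propellant
            or total_impulse_ns > 160             # H class or above
            or (combined_impulse_ns or 0) > 320)  # clustered motors

print(is_high_power(800, 60, 80))     # a large G-powered model: False
print(is_high_power(2000, 120, 150))  # over 1,500 g at liftoff: True
```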
== See also ==
Amateur rocket motor classification
Amateur rocketry
CO2 rocket
Model Rocketry magazine
Rocket Festival
Thermalite
Water rocket
== References ==
== External links ==
Illustrated 3D Model Rocket
FAA launch regulations for the US
A scale model is a physical model that is geometrically similar to an object (known as the prototype). Scale models are generally smaller than large prototypes such as vehicles, buildings, or people; but may be larger than small prototypes such as anatomical structures or subatomic particles. Models built to the same scale as the prototype are called mockups.
Scale models are used as tools in engineering design and testing, promotion and sales, filmmaking special effects, military strategy, and hobbies such as rail transport modeling, wargaming and racing; and as toys. Model building is also pursued as a hobby for the sake of artisanship.
Scale models are constructed of plastic, wood, or metal. They are usually painted with enamel, lacquer, or acrylics.
Model prototypes include all types of vehicles (railroad trains, cars, trucks, military vehicles, aircraft, and spacecraft), buildings, people, and science fiction themes (spaceships and robots).
== Methods ==
Models are built to scale, defined as the ratio of any linear dimension of the model to the equivalent dimension on the full-size subject (called the "prototype"), expressed either as a ratio with a colon (e.g. 1:8 scale) or as a fraction with a slash (1/8 scale). This designates that 1 length unit on the model represents 8 such units on the prototype. In English-speaking countries, the scale is sometimes expressed as the number of feet on the prototype corresponding to one inch on the model, e.g. 1:48 scale = "1 inch to 4 feet", 1:96 = "1 inch to 8 feet", etc.
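Both conventions reduce to simple arithmetic, as in this brief sketch (the function names are illustrative):

```python
def model_length(prototype_length, scale_denominator):
    """1:48 means one model unit represents 48 prototype units."""
    return prototype_length / scale_denominator

def feet_per_inch(scale_denominator):
    """The English-style phrasing: prototype feet per model inch."""
    return scale_denominator / 12

print(model_length(24 * 12, 48))  # a 24 ft prototype -> a 6 in model at 1:48
print(feet_per_inch(48))          # 1:48 = '1 inch to 4 feet' -> 4.0
```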
Models are obtained by three different means: kit assembly, scratch building, and collecting pre-assembled models. Scratch building is the only option available to structural engineers and, among hobbyists, requires the highest level of skill, craftsmanship, and time; scratch builders tend to be the most concerned with accuracy and detail. Kit assembly is done either "out of the box" or with modifications (known as "kitbashing"). Many kits, for various reasons, leave something to be desired in terms of accuracy, but using the kit parts as a baseline and adding after-market conversion kits, alternative decal sets, and some scratch building can correct this without the master craftsmanship or time expenditure required by scratch building.
== Purposes ==
Scale models are generally of two types: static and animated. They are used for several purposes in many fields, including:
=== Hobby ===
Most hobbyist's models are built for static display, but some have operational features, such as railroad trains that roll, and airplanes and rockets that fly. Flying airplane models may be simple unpowered gliders, or have sophisticated features such as radio control powered by miniature methanol/nitromethane engines.
==== Slot car racing ====
Cars in 1:24, 1:32, or HO scale are fitted with externally powered electric motors which run on plastic road track fitted with slots and metal rails. The track may or may not be augmented with miniature buildings, trees, and people.
==== Wood car racing ====
Children can build and race their own gravity-powered, uncontrolled cars carved out of a wood such as pine, with plastic wheels on metal axles, which run on inclined tracks.
The most famous wood racing event is the Boy Scouts of America's annual Pinewood Derby which debuted in 1953. Entry is open to Cub Scouts. Entrants are supplied with a kit containing a wooden block out of which to carve the body, four plastic wheels, and four axle nails; or they may purchase their own commercially available kit. Regulations generally limit the car's weight to 5 ounces (141.7 g), width to 2.75 inches (7.0 cm), and length to 7 inches (17.8 cm). The rules permit the cars to be augmented with tungsten carbide weights up to the limit, and graphite axle lubricant.
==== Wargaming ====
Miniature wargames are played using miniature soldiers, artillery, vehicles, and scenery built by the players.
=== Television and film production ===
Before the advent of computer-generated imagery (CGI), visual effects of vehicles such as marine ships and spaceships were created by filming "miniature" models. These were considerably larger scale than hobby versions to allow inclusion of a high degree of surface detail, and electrical features such as interior lighting and animation. For Star Trek: The Original Series, a 33-inch (0.84 m) pre-production model of the Starship Enterprise was created in December 1964, mostly of pine, with Plexiglass and brass details, at a cost of $600. This was followed by a 135.5-inch (3.44 m) production model constructed from plaster, sheet metal, and wood, at ten times the cost of the first. As the Enterprise was originally reckoned to be 947 feet (289 m) long, this put the models at 1:344 and 1:83.9 scale respectively. The Polar Lights company sells a large plastic Enterprise model kit essentially the same size as the first TV model, in 1:350 scale (32 inches long). It can be purchased with an optional electronic lighting and animation (rotating engine domes) kit.
=== Engineering ===
==== Structural ====
Although structural engineering has been a field of study for thousands of years and many of the great problems have been solved using analytical and numerical techniques, many problems are still too complicated to understand analytically, or the current numerical techniques lack real-world confirmation. When this is the case, for example in a complicated reinforced concrete beam-column-slab interaction problem, scale models can be constructed observing the requirements of similitude to study the problem. Many structural labs exist to test such structural scale models, such as the Newmark Civil Engineering Laboratory at the University of Illinois Urbana-Champaign.
For structural engineering scale models, it is important for several specific quantities to be scaled according to the theory of similitude. These quantities can be broadly grouped into three categories: loading, geometry, and material properties. A good reference for considering scales for a structural scale model under static loading conditions in the elastic regime is presented in Table 2.2 of the book Structural Modeling and Experimental Techniques.
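As one simple illustration of such scaling (a minimal sketch, assuming the model is built from the prototype material and only static elastic behavior is of interest), stresses match when applied loads are reduced by the square of the geometric scale, since stress is force per area and areas shrink with the square of length:

```python
def scaled_static_load(prototype_load_n, scale_denominator):
    """For a 1:s elastic model in the prototype material, areas shrink by
    s**2, so prototype stresses are reproduced by loads divided by s**2.
    (Self-weight and dynamic effects require separate similitude terms.)"""
    return prototype_load_n / scale_denominator ** 2

# A 1:10 model of a column carrying 500 kN would be tested at 5 kN:
print(scaled_static_load(500e3, 10))
```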
Structural engineering scale models can use different approaches to satisfy the similitude requirements of scale model fabrication and testing. A practical introduction to scale model design and testing is discussed in the paper "Pseudodynamic Testing of Scaled Models".
==== Aerodynamic ====
Aerodynamic models may be used for testing new aircraft designs in a wind tunnel or in free flight. Models of scale large enough to permit piloting may be used for testing of a proposed design.
==== Architectural ====
Architecture firms usually employ model makers or contract model making firms to make models of projects to sell their designs to builders and investors. These models are traditionally hand-made, but advances in technology have turned the industry into a very high tech process that can involve Class IV laser cutters, five-axis CNC machines, and rapid prototyping or 3D printing. Typical scales are 1:12, 1:24, 1:48, 1:50, 1:100, 1:200, 1:500, etc.
=== Advertising and sales ===
=== Military ===
Combining elements of miniature wargaming, model building, and architectural models, a plan-relief is a means of geographical representation in relief as a scale model for military use, to visualize building projects on fortifications or campaigns involving fortifications.
In the first half of the 20th century, navies used hand-made models of warships for identification and instruction in a variety of scales. That of 1:500 was called "teacher scale." Besides models made in 1:1200 and 1:2400 scales, there were also ones made to 1:2000 and 1:5000. Some, made in Britain, were labelled "1 inch to 110 feet", which would be 1:1320 scale, but are not necessarily accurate.
==== Manned ships ====
Many research workers, hydraulics specialists and engineers have used scale models for over a century, in particular in towing tanks. Manned models are small scale models that can carry and be handled by at least one person on an open expanse of water. They must behave just like real ships, giving the shiphandler the same sensations. Physical conditions such as wind, currents, waves, water depths, channels, and berths must be reproduced realistically.
Manned models are used for research (e.g. ship behaviour), engineering (e.g. port layout) and for training in shiphandling (e.g. maritime pilots, masters and officers). They are usually at 1:25 scale.
== Materials ==
Models, and their constituent parts, can be built out of a variety of materials, such as:
=== Plastic ===
This includes injection molded or extruded plastics such as polystyrene, acrylonitrile butadiene styrene (ABS), butyrate, and clear acrylic and copolyester (PETG). Parts can also be cast from synthetic resins.
=== Wood ===
Pine wood is sometimes used; balsa wood, a light wood, is good for flying airplane models.
=== Metal ===
Aluminum or brass can be used in tubing form, or can be used in flat sheets with photo-etched surface detail. Model figures used in wargaming can be made of white metal.
=== Glue ===
Styrene parts are welded together using plastic cement, which comes in two forms: a thick version applied carefully to a bonding surface, and a thin liquid drawn into a joint by capillary action using a brush or syringe needle. Ethyl cyanoacrylate (ECA), also known as "super glue", or fast-setting epoxy must be used to bond styrene to other materials.
=== Paint ===
Glossy colors are generally used for car and commercial truck exteriors. Flat colors are generally desirable for military vehicles, aircraft, and spacecraft. Metallic colors simulate the various metals (silver, gold, aluminum, steel, copper, brass, etc.).
Enamel paint has classically been used for model making and is generally considered the most durable paint for plastics. It is available in small bottles for brushing and airbrushing, and aerosol spray cans. Disadvantages include toxicity and a strong chemical smell of the paint and its mineral spirit thinner/brush cleaner. Modern enamels are made of alkyd resin to limit toxicity. Popular brands include Testor's in the US and Humbrol (now Hornby) in the UK.
Lacquer paint produces a hard, durable finish, and requires its own lacquer thinner.
Enamels have been largely replaced in popularity by acrylic paint, which is water-based. Advantages include decreased toxicity and chemical smell, and brushes that clean with soap and water. Disadvantages include possibly limited durability on plastic and the need for priming coats, at least two color coats, and adequate cure time. Popular brands include the Japanese import Tamiya.
Some beginner's level kits avoid the necessity to paint the model by adding pigments and chrome plating to the plastic.
=== Decals ===
Decals are generally applied to models after painting and assembly, to add details such as lettering, flags, insignia, or other decorations too small to paint. Water transfer (slide-on) decals are generally used, but beginner's kits may use dry transfer stickers instead.
== Subjects ==
=== Vehicles ===
==== Trains ====
Model railroading (US and Canada; known as railway modelling in UK, Australia, New Zealand, and Ireland) is done in a variety of scales from 1:4 to 1:450 (T scale). Each scale has its own strengths and weaknesses, and fills a different niche in the hobby:
The largest scales are used outdoors, for "live steam" railroads with trains large enough for people to ride on; locomotives as much as 3 meters (9.8 ft) long are built in several scales such as 1-1/2", 1", and 3/4" to the foot. Common gauges are 7-1/2" (Western US) and 7-1/4" (Eastern US and the rest of the world), 5", and 4-3/4". Smaller live steam gauges do exist, but as the scale gets smaller, pulling power decreases. One of the smallest gauges on which a live steam engine can pull a passenger is the now almost defunct 2+1⁄2-inch gauge.
The next largest scale range, G scale (1:22.5) in the US and 16 mm scale (1:19.05) in the UK, and as large as 1:12 scale, is too small for riding but is used for outdoor garden railways, which allow use of natural landscaping. G scale is also sometimes used indoors, with the track mounted adjacent to walls at eye level of standing adults. A franchise chain of restaurants and coffeehouses named Výtopna in the Czech Republic acquired a trademark for the use of G-scale trains mounted on the countertops to serve customers beverages, and pick up their orders and empty glasses.
Smaller scales are used indoors. O scale (1:48) sets were introduced as early "toy trains" by companies such as Lionel Corporation, but the scale has since developed a following among serious adult hobbyists. American Flyer, purchased by the A. C. Gilbert Company, popularized S scale (1:64) trains starting in 1946. Even smaller scales have become the most popular, allowing larger, more complex layouts to be built in smaller spaces. Dedicated model railroaders often mount indoor layouts on homemade plywood tables, at a height in the range of 30 to 42 inches (76 to 107 cm), putting the track optimally close to eye level for children or adults. As of 2022, the two most popular sizes are HO scale (1:87) and N scale (1:160).
===== Gauge vs scale =====
Model railroads originally used the term gauge, which refers to the distance between the rails, just as full-size railroads continue to do. Model railroads, too, were built to various gauges; standard gauge on full-size railroads is 4 ft 8.5 in, and a model railroad reduces that measurement according to its scale. An HO scale model railroad therefore runs on track that is 1/87 of 4 ft 8.5 in, or about 0.649 in from rail to rail. Today, model railroads are more typically described using the term scale rather than "gauge" in most usages.
Confusion arises from indiscriminate use of "scale" and "gauge" synonymously. The word "scale" strictly refers to the proportional size of the model, while "gauge" strictly applies to the measurement between the inside faces of the rails. It is completely incorrect to refer to the mainstream scales as "HO gauge", "N gauge", "Z gauge", etc. This is further complicated by the fact that some scales use several different gauges; for example, HO scale uses 16.5 mm to represent the standard gauge of 4 ft 8+1⁄2 in (1,435 mm), 12 mm to represent 1,000 mm (3 ft 3+3⁄8 in) gauge (HOm) and 3 ft 6 in (1,067 mm) gauge (HOn3-1/2), and 9 mm to represent a prototype gauge of 2 ft (610 mm).
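The arithmetic relating scale and gauge is simple division, as the following sketch shows. The function name is illustrative only; note that exact HOm arithmetic gives about 11.5 mm, which commercial HOm track rounds up to 12 mm.

<syntaxhighlight lang="python">
# Illustrative only: converting a prototype track gauge to a model gauge.

STANDARD_GAUGE_MM = 1435.0   # 4 ft 8.5 in

def model_gauge_mm(prototype_gauge_mm: float, scale_denominator: float) -> float:
    return prototype_gauge_mm / scale_denominator

# HO scale (1:87.1): standard gauge comes out at about 16.5 mm.
print(model_gauge_mm(STANDARD_GAUGE_MM, 87.1))   # ~16.48
# HOm: metre gauge (1,000 mm) at the same scale is about 11.5 mm,
# which commercial track rounds to 12 mm.
print(model_gauge_mm(1000.0, 87.1))              # ~11.48
</syntaxhighlight>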
The most popular scale to go with a given gauge was often arrived at through the following roundabout process: German artisans would take strips of metal of standard metric size to construct their products from blueprints dimensioned in inches. "Four mm to the foot" yielded the 1:76.2 size of the British "OO scale", which is anomalously used on the standard HO/OO scale (16.5 mm gauge from 3.5 mm/foot scale) tracks, because early electric motors weren't available commercially in smaller sizes. Today, most scale sizes are internationally standardized, with the notable exceptions of O scale and N scale.
There are three different versions of the "O" scale, each of which uses tracks of 32 mm for the standard gauge. The American version follows a dollhouse scale of 1:48, sometimes called "quarter-gauge" as in "one-quarter-inch to the foot". The British version continued the pattern of sub-contracting to Germans, so, at 7 mm to the foot, it works out to a scale of 1:43.5. Later, the European authority of model railroad firms MOROP declared that the "O" gauge (still 32 mm) must use the scale of 1:45, to allow wheel, tire, and splasher clearance for smaller than realistic curved sections.
N scale trains were first commercially produced at 1:160 scale in 1962 by the Arnold company of Nuremberg. This standard size was imported to the US by firms such as the Aurora Plastics Corporation. However, the early N-scale motors would not fit in the smaller models of British locomotives, so the British N gauge was standardized to allow a slightly larger body size. Similar sizing problems with Japanese prototypes led to adoption of a 1:150 scale standard there. Since space is more limited in Japanese houses, N scale has become more popular there than HO scale.
==== Aircraft ====
Static model aircraft are commonly built using plastic, but wood, metal, card and paper can also be used. Models are sold painted and assembled, painted but not assembled (snap-fit), or unpainted and not assembled. The most popular types of aircraft to model are commercial airliners and military aircraft. Popular aircraft scales are, in order of increasing size: 1:144, 1:87 (also known as HO, or "half-O scale"), 1:72 (the most numerous), 1:48 (known as "O scale"), 1:32, 1:24, 1:16, 1:6, and 1:4. Some European models are available at more metric scales such as 1:50. The highest quality models are made from injection-molded plastic or cast resin. Models made from vacuum-formed plastic are generally for the more skilled builder. Less expensive models are made from heavy paper or card stock. Ready-made die-cast metal models are also very popular. As well as the traditional scales, die-cast models are available in 1:200, 1:250, 1:350, 1:400, 1:500 and 1:600 scales.
The majority of aircraft modelers concern themselves with depicting real-life aircraft, but some modelers 'bend' history by modeling aircraft that never actually flew or never existed at all, or by painting them in a color scheme that never actually existed. This is commonly referred to as 'What-if' or 'Alternative' modeling, and the most common theme is 'Luftwaffe 1946' or 'Luftwaffe '46'. This theme stems from the idea of modeling German secret projects that never saw the light of day due to the close of World War II. The concept has been extended to include British, Russian, and US experimental projects that never made it into production.
Flying model aircraft are built for aerodynamic research and for recreation (aeromodeling).
Recreational models are often made to resemble some real type. However, the aerodynamic requirements of a small model differ from those of a full-size craft, so flying models are seldom fully accurate to scale. Flying model aircraft are one of three types: free flight, control line, and radio controlled. Some flying model kits take many hours to put together, while others are sold almost ready to fly or ready to fly.
==== Rockets and spacecraft ====
Model rocketry dates back to the Space Race of the 1950s. The first model rocket engine was designed in 1954 by Orville Carlisle, a licensed pyrotechnics expert, and his brother Robert, a model airplane enthusiast.
Static model rocket kits began as a development of model aircraft kits, yet the scale of 1:72 (very close to 4 mm to 1 foot) never caught on. Scales of 1:48 and 1:96 are most frequently used. There are some rockets in scales of 1:128, 1:144, and 1:200, but Russian firms put their large rockets in 1:288. Heller SA offers some models in the scale of 1:125.
Science fiction spaceships are hugely popular in the modeling community. In 1966, with the release of the television show Star Trek: The Original Series, the AMT corporation released an 18-inch (46 cm) model of the Starship Enterprise. This has been followed over the decades by a complete array of starships, shuttlecraft, and space stations from the Star Trek franchise. The 1977 release of the first Star Wars film and the 1978 TV series Battlestar Galactica also spawned lines of licensed model kits, in scales ranging from 1:24 for fighters and smaller ships, to 1:1000, 1:1400, and 1:2500 for most main franchise ships, and up to 1:10000 for the larger Star Wars ships (for especially large subjects such as the Death Stars and Super Star Destroyers, even smaller scales are used). Finemolds in Japan have recently released a series of high-quality injection-molded Star Wars kits in 1:72, and this range is supplemented by resin kits from Fantastic Plastic.
==== Cars ====
Although the British scale for 0 gauge was first used for model cars made of rectilinear and circular parts, it was the origin of the European scale for cast or injection-molded model cars. MOROP's specification of 1:45 scale for European 0 does not affect the series of cars in 1:43 scale, as the latter has the widest distribution in the world.
In America, a series of cars was developed from models ("promos") offered at new-car dealerships to drum up interest, made at first of cast metal and later of styrene. The firm Monogram, and later Tamiya, first produced them in a scale derived from the architect's scale, 1:24, while the firms AMT, Jo-Han, and Revell chose the scale of 1:25. Monogram later switched to this scale after the firm was purchased by Revell. Some cars are also made in 1:32 scale, and rolling toys are often made in 1:64 scale. Chinese die-cast manufacturers have introduced 1:72 scale into their ranges. The smaller scales are usually die-cast toys rather than true model cars, although on rare occasions die-cast cars from Johnny Lightning and Ertl have been sold as kits for buyers to assemble. Model cars are also used in car design.
==== Buses and trucks ====
Most manufacturers of commercial vehicles and heavy equipment commission scale models, typically in 1:50 scale and made of die-cast metal, as promotional items to give to prospective customers. These are also popular children's toys and collectibles. The major manufacturers of these items are Conrad and NZG in Germany. Corgi also makes some 1:50 models, as does the Dutch maker Tekno.
Trucks are also found as die-cast models in 1:43 scale and as injection-molded kits (and children's toys) in 1:24 scale. Recently, some manufacturers, such as Code 3, have appeared in 1:64 scale.
==== Construction vehicles ====
A model construction vehicle (or engineering vehicle) is a scale model or die-cast toy that represents a construction vehicle such as a bulldozer, excavator, crane, concrete pump, backhoe, etc.
Construction vehicle models are almost always made in 1:50 scale, particularly because the cranes at this scale are often three to four feet tall when extended and larger scales would be unsuited for display on a desk or table. These models are popular as children's toys in Germany. In the US they are commonly sold as promotional models for new construction equipment, commissioned by the manufacturer of the prototype real-world equipment. The major manufacturers in Germany are Conrad and NZG, with some competition from Chinese firms that have been entering the market.
=== Robots ===
Japanese firms have marketed toys and models of what are often called mecha, nimble humanoid fighting robots. The robots, which appear in animated shows (anime), are often depicted as between 15 and 20 m tall, so scales of 1:100 and 1:144 are common for these subjects (an 18 m design works out to a 125 mm model at 1:144), though other scales such as 1:72 are commonly used for robots and related subjects of different sizes.
The most prolific manufacturer of mecha models is Bandai, whose Gundam kit lines were a strong influence in the genre in the 1980s. Even today, Gundam kits are the most numerous in the mecha modeling genre, usually with dozens of new releases every year. The features of modern Gundam kits, such as color molding and snap-fit construction, have become the standard expectations for other mecha model kits.
Due to the fantasy nature of most anime robots, and the necessary simplicity of cel-animated designs, mecha models lend themselves well to stylized work, improvisations, and simple scratchbuilds. One of Gundam's contributions to the genre was the use of a gritty wartime backstory as part of the fantasy, and so it is almost equally fashionable to build the robots in a weathered, beaten style, as would often be expected of AFV kits, as to build them in a more stylish, pristine manner.
=== Live action figures ===
Scale models of people and animals are found in a wide variety of venues, and may be either single-piece objects or kits that must be assembled, usually depending on the purpose of the model. For instance, models of people as well as both domestic and wild animals are often produced for display in model cities or railroads to provide a measure of detail or realism, and scaled relative to the trains, buildings, and other accessories of a certain line of models. If a line of trains or buildings does not feature models of living creatures, those who build the models often buy these items separately from another line so they can feature people or animals. In other cases, scale model lines feature living creatures exclusively, often focusing on educational interests.
Model kits of superheroes and super-villains from popular franchises such as DC Entertainment and Marvel Entertainment are also sold, as are models of real-world celebrities, such as Marilyn Monroe and Elvis Presley.
One type of assembly kit, sold as educational, features the skeletons and anatomical structures of humans and animals. Such kits may have unique features such as glow-in-the-dark pieces. Dinosaurs are a popular subject for such models. There are also garage kits, which are often figures of anime characters supplied in multiple parts that require assembly.
=== Ships and naval war-gaming ===
Michele Morciano says small-scale ship models were first produced in about 1905, linked to the wargaming rules and other publications of Fred T. Jane. The company that standardized on 1:1200 was Bassett-Lowke, in 1908. The British Admiralty subsequently contracted with Bassett-Lowke and other companies and individual craftsmen to produce large numbers of recognition models to this scale in 1914–18.
Just before the Second World War, the American naval historian (and science fiction author) Fletcher Pratt published a book on naval wargaming as it could be played by civilians, using ship models cut off at the waterline and moved on the floors of basketball courts and similar locales. The scale he used was non-standard (reported as 1:666), and may have been influenced by toy ships then available, but as the hobby progressed and other rule sets came into use, it was progressively supplemented by the series 1:600, 1:1200, and 1:2400. In Britain, 1:3000 became popular, and these models have also come into use in the USA. These scales had the advantage of approximating the nautical mile as 120 inches, 60 inches, and 30 inches, respectively. As the knot is based on this mile and a 60-minute hour, this was quite handy, as the sketch below illustrates.
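A minimal sketch of the movement arithmetic a player does each turn, which shows why the 1:1200 series was convenient; the ship speed is invented for the example.

<syntaxhighlight lang="python">
# Illustrative only: why the 1:1200-series scales suited naval wargaming.
# One nautical mile is 6,076 ft = 72,912 in; at 1:1200 that is ~60.8 in,
# which the rules treated as roughly 60 in.

NAUTICAL_MILE_IN = 6076.0 * 12

def move_per_game_minute_in(speed_knots: float, scale_denominator: float) -> float:
    """Inches a model moves for one minute of game time at a given speed."""
    nm_per_minute = speed_knots / 60.0            # a knot is one nm per hour
    return nm_per_minute * NAUTICAL_MILE_IN / scale_denominator

# At 1:1200, inches moved per game minute roughly equals speed in knots:
# a 24-knot battleship moves about 24 inches.
print(f"{move_per_game_minute_in(24, 1200):.1f} in")
</syntaxhighlight>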
After the war, firms emerged to produce models from the same white metal used to make toy soldiers. Lines Bros. Ltd, a British firm, offered a tremendously wide range of waterline merchant and naval ships, as well as dockyard equipment, in the scale of 1:1200, die-cast in Zamak. In the US, at least one manufacturer of the wartime 1:1200 recognition models, Comet, made them available for the civilian market postwar, which also drove the change to this scale. In addition, continental European manufacturers and European ship book publishers had adopted the 1:1250 drawing scale because of its similar convenience in size for both models and comparison drawings in books.
A prestige scale for boats, comparable to that of 1:32 for fighter planes, is 1:72, producing huge models, but there are very few kits marketed in this scale. There are now several clubs around the world for those who choose to scratch-build radio-controlled model ships and submarines in 1:72, which is often done because of the compatibility with naval aircraft kits. For the smaller ships, plank-on-frame or other wood construction kits are offered in the traditional shipyard scales of 1:96, 1:108, or 1:192 (half of 1:96). In injection-molded plastic kits, Airfix makes full-hull models in the scale the Royal Navy has used to compare the relative sizes of ships: 1:600. Revell makes some kits to half the scale of the US Army standard: 1:570. Some American and foreign firms have made models in a proportion from the Engineer's scale: "one-sixtieth-of-an-inch-to-the-foot", or 1:720.
=== Tanks and wargaming ===
Early in the 20th century, the British historian and science fiction author H. G. Wells published a book, Little Wars, on how to play at battles in miniature. His books used 2-inch lead figures, particularly those manufactured by Britains. His fighting system employed spring-loaded model guns that shot matchsticks.
This use of physical mechanisms was echoed in the later games of Fred Jane, whose rules required throwing darts at ship silhouettes; his collection of data on the world's fleets was later published and became renowned. Dice have largely replaced this toy mayhem for consumers.
For over a century, toy soldiers were made of white metal, a lead-based alloy, often in architect's scale-based ratios in the English-speaking countries, and called tin soldiers. After the Second World War, such toys were on the market for children but now made of a safe plastic softer than styrene. American children called these "army men". Many sets were made in the new scale of 1:40. A few styrene model kits of land equipment were offered in this and in 1:48 and 1:32 scales. However, these were swept away by the number of kits in the scale of 1:35.
Those who continued to develop miniature wargaming preferred smaller scale models, the soldiers still made of soft plastic. Airfix particularly wanted people to buy 1:76 scale soldiers and tanks to go with "00" gauge train equipment. Roco offered 1:87 scale styrene military vehicles to go with "HO" gauge model houses. However, although there is no 1:72 scale model railroad, more toy soldiers are now offered in this scale because it is the same as the popular aircraft scale. The number of fighting vehicles in this scale is also increasing, although the number of auxiliary vehicles available is far fewer than in 1:87 scale.
A more recent development, especially in wargaming of land battles, is 15 mm white metal miniatures, often referred to as 1:100. The use of 15 mm scale metals has grown quickly since the early 1990s as they allow a more affordable option over 28 mm if large battles are to be refought, or a large number of vehicles represented. The rapid rise in the detail and quality of castings at 15 mm scale has also helped to fuel their uptake by the wargaming community.
Armies use smaller scales still. The US Army specifies models of 1:285 scale for its sand table wargaming, which is close to a "one-quarter-inch-to-six-feet" scale; metal ground vehicles and helicopters are available in this scale. The continental powers of NATO have developed the similar scale of 1:300, even though metric standardization favors divisors built only from factors of 10, 5, and 2, which is why maps are not commonly offered in Europe in scales with a "3" in the denominator.
Consumer wargaming has since expanded into fantasy realms, employing scales large enough to be painted in imaginative detail: so-called "heroic" 28 mm figures (roughly 1:64 scale). Firms that produce these make small production lots of white metal.
As an alternative to commercial models, some modelers build home-made warfare models from scrap materials. While these are not always used for wargaming, some modelers add working features, such as firing guns or shell deflection, to give small-scale models a certain realism.
=== Engines ===
Kits for building an engine model are available, especially for children. The most popular are internal combustion, steam, jet, and Stirling model engines. Usually they are driven by an electric motor or a hand crank, and many have a transparent case to show the internal process in action.
=== Buildings ===
Most hobbyists who build models of buildings do so as part of a diorama to enhance their other models, such as a model railroad or model war machines. As a stand-alone hobby, building models are probably most popular among enthusiasts of construction toys such as Erector, Lego and K'Nex. Famous landmarks such as the Empire State Building, Big Ben and the White House are common subjects. Standard scales have not emerged in this hobby. Model railroaders use railroad scales for their buildings: HO scale (1:87), OO scale (1:76), N scale (1:160), and O scale (1:43). Lego builders use miniland scale (1:20), minifig scale (1:48), and micro scale (1:192). Generally, the larger the building, the smaller the scale. Model buildings are commonly made from plastic, foam, balsa wood or paper. Card models are published in the form of a book, and some models are manufactured like 3-D puzzles. Professionally, building models are used by architects and salesmen.
==== House portrait ====
Typically found in 1:50 scale and also called model house, model home or display house, this type of model is usually found in stately homes or specially designed houses. Sometimes this kind of model is commissioned to mark a special date like an anniversary or the completion of the architecture, or these models might be used by salesmen selling homes in a new neighborhood.
=== Miniatures in contemporary art ===
Miniatures and model kits are used in contemporary art, whereby artists use either scratch-built miniaturizations or commercially manufactured model kits to construct a dialogue between object and viewer. The role of the artist in this type of miniature is not necessarily to re-create a historical event or achieve naturalist realism, but rather to use scale as a mode of articulation in generating conceptual or theoretical exploration. Political, conceptual, and architectural examples are provided by noted artists such as Bodys Isek Kingelez, Jake and Dinos Chapman (otherwise known as the Chapman Brothers), Ricky Swallow, Shaun Wilson, Sven Christoffersen, the Psikhelekedana artists from Mozambique, James Casebere, Oliver Boberg, and Daniel Dorall.
== See also ==
Autofest City
Computer-aided design
Cutaway drawing
International Plastic Modellers' Society
Maquette
Miniature faking
Miniature figure (disambiguation)
Miniature park
Miniature pioneering
Rail transport modelling scale standards
Solar System model
Standard gauge in model railways
Similitude
Terrain model
List of scale model sizes
List of scale-model industry people
List of scale model kit manufacturers
== References ==
Crowe, Clayton T.; Elger, Donald F.; Williams, Barbara C.; Roberson, John A. (2010). Engineering Fluid Mechanics. John Wiley & Sons, Inc. ISBN 978-0-470-40943-5.
Eaglemoss (2013), U.S.S. Enterprise NCC-1701 Refit, Eaglemoss Productions Ltd.
Harris, Harry G.; Sabnis, Gajanan M. (1999). Structural Modeling and Experimental Techniques. CRC Press LLC. ISBN 9780849324697.
Kumar; et al. (1997). "Pseudodynamic Testing of Scaled Models". J. Struct. Eng. 123 (4): 524–526. doi:10.1061/(ASCE)0733-9445(1997)123:4(524).
Weitekamp, Margaret A. (2016), "Two Enterprises: Star Trek's Iconic Starship As Studio Model and Celebrity", Journal of Popular Film and Television, 44: 2–13, doi:10.1080/01956051.2015.1075955, S2CID 191380605
"Progress in Scale Modeling, an International Journal (PSMIJ)". Retrieved 19 September 2020.
== Further reading ==
Lune, Peter van (2017). FROG Penguin Plastic Scale Model Kits 1936–1950. Zwolle, The Netherlands: published by the author. ISBN 978-90-9030180-8.
Saito, Kozo, ed. (2008). Progress in Scale Modeling. Springer. ISBN 978-1-4020-8681-6.
Saito, Kozo; et al. (2015). Progress in Scale Modeling Vol. II. Springer. ISBN 978-3-319-10308-2.
== External links == | Wikipedia/Scale_model |
Model–view–controller (MVC) is a software architectural pattern commonly used for developing user interfaces that divides the related program logic into three interconnected elements. These elements are:
the model, the internal representations of information
the view, the interface that presents information to and accepts it from the user
the controller, the software linking the two.
Traditionally used for desktop graphical user interfaces (GUIs), this pattern became popular for designing web applications. Popular programming languages have MVC frameworks that facilitate the implementation of the pattern.
== History ==
One of the seminal insights in the early development of graphical user interfaces, MVC became one of the first approaches to describe and implement software constructs in terms of their responsibilities.
Trygve Reenskaug created MVC while working on Smalltalk-79 as a visiting scientist at the Xerox Palo Alto Research Center (PARC) in the late 1970s. He wanted a pattern that could be used to structure any program where users interact with a large, convoluted data set. His design initially had four parts: model, view, thing, and editor. After discussing it with the other Smalltalk developers, he and the rest of the group settled on model, view, and controller instead.
In their final design, a model represents some part of the program purely and intuitively. A view is a visual representation of a model, retrieving data from the model to display to the user and passing requests back and forth between the user and the model. A controller is an organizational part of the user interface that lays out and coordinates multiple views on the screen, and which receives user input and sends the appropriate messages to its underlying views. This design also includes an editor as a specialized kind of controller, used to modify a particular view, and which is created through that view.
Smalltalk-80 supports a version of MVC that evolved from this one. It provides abstract view and controller classes, as well as various concrete subclasses of each that represent different generic widgets. In this scheme, a view represents some way of displaying information to the user, and a controller represents some way for the user to interact with a view. A view is also coupled to a model object, but the structure of that object is left up to the application programmer. The Smalltalk-80 environment also includes an "MVC Inspector", a development tool for viewing the structure of a given model, view, and controller side-by-side.
In 1988, an article in the Journal of Object-Oriented Programming (JOOP) by two ex-PARC employees presented MVC as a general "programming paradigm and methodology" for Smalltalk-80 developers. However, their scheme differed from both Reenskaug et al.'s and that presented by the Smalltalk-80 reference books. They defined a view as covering any graphical concern, with a controller being a more abstract, generally invisible object that receives user input and interacts with one or many views and only one model.
The MVC pattern subsequently evolved, giving rise to variants such as hierarchical model–view–controller (HMVC), model–view–adapter (MVA), model–view–presenter (MVP), model–view–viewmodel (MVVM), and others that adapted MVC to different contexts.
The use of the MVC pattern in web applications grew after the introduction of NeXT's WebObjects in 1996, which was originally written in Objective-C (that borrowed heavily from Smalltalk) and helped enforce MVC principles. Later, the MVC pattern became popular with Java developers when WebObjects was ported to Java. Later frameworks for Java, such as Spring (released in October 2002), continued the strong bond between Java and MVC.
In 2003, Martin Fowler published Patterns of Enterprise Application Architecture, which presented MVC as a pattern where an "input controller" receives a request, sends the appropriate messages to a model object, takes a response from the model object, and passes the response to the appropriate view for display. This is close to the approach taken by the Ruby on Rails web application framework (August 2004), in which the client sends requests to the server via an in-browser view; these requests are handled by a controller on the server, and the controller communicates with the appropriate model objects. The Django framework (July 2005, for Python) put forward a similar "model-template-view" (MTV) take on the pattern, in which a view retrieves data from models and passes it to templates for display. Both Rails and Django debuted with a strong emphasis on rapid deployment, which increased MVC's popularity outside the traditional enterprise environment in which it has long been popular.
== Components ==
=== Model ===
The central component of the pattern. It is the application's dynamic data structure, independent of the user interface. It directly manages the data, logic and rules of the application. In Smalltalk-80, the design of a model type is left entirely to the programmer. With WebObjects, Rails, and Django, a model type typically represents a table in the application's database. The model is essential for keeping the data organized and consistent. It ensures that the application's data behaves according to the defined rules and logic.
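A minimal sketch of the model role in plain Python, not tied to any framework; the class and its task-list domain are invented for illustration. The model owns the data and its rules and, as in the Smalltalk-80 scheme described later in this article, announces its own changes to registered observers (typically views).

<syntaxhighlight lang="python">
# Illustrative sketch of the model role (not any particular framework's API):
# the model owns data and rules, and tells interested parties when it changes.

class TaskModel:
    def __init__(self):
        self._tasks = []          # internal representation of information
        self._observers = []      # typically views

    def add_observer(self, callback):
        self._observers.append(callback)

    def add_task(self, title: str):
        if not title.strip():                 # a business rule lives here,
            raise ValueError("empty title")   # not in the view or controller
        self._tasks.append(title.strip())
        for notify in self._observers:
            notify(self)                      # model announces its own changes

    @property
    def tasks(self):
        return tuple(self._tasks)             # read-only view of the data
</syntaxhighlight>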
=== View ===
Any representation of information such as a chart, diagram or table. Multiple views of the same information are possible, such as a bar chart for management and a tabular view for accountants.
In Smalltalk-80, a view is just a visual representation of a model, and does not handle user input. With WebObjects, a view represents a complete user interface element such as a menu or button, and does receive input from the user. In both Smalltalk-80 and WebObjects, however, views are meant to be general-purpose and composable.
With Rails and Django, the role of the view is played by HTML templates, so in their scheme a view specifies an in-browser user interface rather than representing a user interface widget directly. (Django opts to call this kind of object a "template" in light of this.) This approach puts relatively less emphasis on small, composable views; a typical Rails view has a one-to-one relationship with a controller action.
Smalltalk-80 views communicate with both a model and a controller, whereas with WebObjects, a view talks only to a controller, which then talks to a model. With Rails and Django, a view/template is used by a controller/view when preparing a response to the client.
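Continuing the toy example, a minimal sketch of the view role: it renders the model and handles no business logic. It assumes the TaskModel sketched in the previous section.

<syntaxhighlight lang="python">
# Illustrative sketch of the view role: rendering only, no business logic.
# Assumes the TaskModel sketched above.

class TaskListView:
    def render(self, model) -> str:
        if not model.tasks:
            return "(no tasks)"
        return "\n".join(f"{i + 1}. {t}" for i, t in enumerate(model.tasks))

    def __call__(self, model):
        # Registered as a model observer: re-render on every change.
        print(self.render(model))
</syntaxhighlight>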
=== Controller ===
Accepts input and converts it to commands for the model or view.
A Smalltalk-80 controller handles user input events, such as button presses or mouse movement. At any given time, each controller has one associated view and model, although one model object may hear from many different controllers. Only one controller, the "active" controller, receives user input at any given time; a global window manager object is responsible for setting the current active controller. If user input prompts a change in a model, the controller will signal the model to change, but the model is then responsible for telling its views to update.
In WebObjects, the views handle user input, and the controller mediates between the views and the models. There may be only one controller per application, or one controller per window. Much of the application-specific logic is found in the controller.
In Rails, requests arriving at the on-server application from the client are sent to a "router", which maps the request to a specific method of a specific controller. Within that method, the controller interacts with the request data and any relevant model objects and prepares a response using a view. Conventionally, each view has an associated controller; for example, if the application had a client view, it would typically have an associated Clients controller as well. However, developers are free to make other kinds of controllers if they wish.
Django calls the object playing this role a "view" instead of a controller. A Django view is a function that receives a web request and returns a web response. It may use templates to create the response.
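As a concrete illustration of Django's naming, here is a minimal sketch of a Django view function playing this controller-like role; the Task model, app layout, and template path are invented for the example.

<syntaxhighlight lang="python">
# Illustrative sketch of a Django "view" (the controller-like role).
# The Task model and template path are hypothetical.

from django.shortcuts import render

from .models import Task   # hypothetical model class in the same app

def task_list(request):
    tasks = Task.objects.all()            # talk to the model layer
    return render(request,                # hand the data to a template
                  "tasks/list.html",
                  {"tasks": tasks})
</syntaxhighlight>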
== Interactions ==
In addition to dividing the application into a model, a view and a controller component, the MVC design pattern defines the interactions between these three components:
The model is responsible for managing the data of the application. It receives user input from the controller.
The view renders a presentation of the model in a particular format.
The controller responds to the user input and performs interactions on the data model objects. The controller receives the input, optionally validates it and then passes the input to the model.
As with other software patterns, MVC expresses the "core of the solution" to a problem while allowing it to be adapted for each system. Particular MVC designs can vary significantly from the traditional description here.
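Tying together the toy sketches from the Components section, the following is a minimal illustration of the interaction cycle just described: the controller accepts and validates input, the model applies it and announces the change, and the view re-renders.

<syntaxhighlight lang="python">
# Illustrative wiring of the three roles, reusing the toy classes above.

class TaskController:
    def __init__(self, model):
        self._model = model

    def handle_input(self, raw: str):
        text = raw.strip()
        if text:                          # controller-side validation
            self._model.add_task(text)    # forward the input to the model

model = TaskModel()
view = TaskListView()
model.add_observer(view)                  # view re-renders on model changes
controller = TaskController(model)

controller.handle_input("buy paint")      # prints: 1. buy paint
controller.handle_input("mask canopy")    # prints both tasks
</syntaxhighlight>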
== Motivation ==
As Alan Kay wrote in 2003, the original motivation behind MVC was to allow the creation of a graphical interface for any object. This was outlined in detail in Richard Pawson's book Naked Objects.
Trygve Reenskaug, originator of MVC at PARC, has written that "MVC was conceived as a general solution to the problem of users controlling a large and complex data set."
In their 1991 guide Inside Smalltalk, Carleton University computer science professors Wilf LaLonde and John Pugh described the advantages of Smalltalk-80-style MVC as:
independence of presentation and data, e.g. multiple views on one model simultaneously,
composable presentation widgets, e.g. one view used as a subview of another,
switchable input modes, by swapping one controller out for another during runtime, and
independence of input and output processing, via the separate responsibilities of controllers and views.
== Use in web applications ==
Although originally developed for desktop computing, MVC has been widely adopted as a design for World Wide Web applications in major programming languages. Several web frameworks have been created that enforce the pattern. These software frameworks vary in their interpretations, mainly in the way that the MVC responsibilities are divided between the client and server. Early MVC frameworks took a thin client approach that placed almost the entire model, view and controller logic on the server. In this approach, the client sends hyperlink requests or form submissions to the controller and then receives a complete and updated web page (or other document) from the view; the model exists entirely on the server. Later frameworks have allowed the MVC components to execute partly on the client, using Ajax to synchronize data.
== See also ==
== References ==
== Bibliography == | Wikipedia/Model_(MVC) |
A show house, also called a model home or display home, is a "display" version of a manufactured home or of the houses in a subdivision. Show houses are used on newly built developments to show the living space and features of the homes available. They are often built in such a way that they can be sold like any other house once construction of the other houses in the area is finished, and as such are connected to utilities such as mains electricity, telephone lines and water mains.
They are almost always equipped with full furnishings, including appliances and interior decoration ("staging") to allow prospective buyers to more easily visualize what the house would look like when lived in. Once the home is ultimately put up for sale, many builders will give buyers the option to buy the home in its fully furnished state.
In model homes that have attached garages (which is common among homes in subdivisions), the garage is usually completely finished to look like another room of the house, making it viable office space for the salespeople working at the model. This is often the first thing that prospective buyers see when entering the home, making it a "lobby" of sorts. The eventual homeowner can choose to keep it as a home office or utilize it as a garage.
== See also ==
Housing portal
Architectural model
== References == | Wikipedia/Model_house |
Railway modelling (UK, Australia, New Zealand, and Ireland) or model railroading (US and Canada) is a hobby in which rail transport systems are modelled at a reduced scale.
The scale models include locomotives, rolling stock, streetcars, tracks, signalling, cranes, and landscapes including: countryside, roads, bridges, buildings, vehicles, harbors, urban landscape, model figures, lights, and features such as rivers, hills, tunnels, and canyons.
The earliest model railways were the 'carpet railways' in the 1840s. The first documented model railway was the Railway of the Prince Imperial (French: Chemin de fer du Prince Impérial) built in 1859 by Emperor Napoleon III for his then 3-year-old son, also Napoleon, in the grounds of the Château de Saint-Cloud in Paris. It was powered by clockwork and ran in a figure-of-eight. Electric trains appeared around the start of the 20th century, but these were crude likenesses. Model trains today are more realistic, in addition to being much more technologically advanced. Today modellers create model railway layouts, often recreating real locations and periods throughout history.
The world's oldest working model railway is a model designed to train signalmen on the Lancashire and Yorkshire Railway. It is located in the National Railway Museum, York, England and dates back to 1912. It remained in use until 1995. The model was built as a training exercise by apprentices of the company's Horwich Works and supplied with rolling stock by Bassett-Lowke.
== General description ==
Involvement ranges from possession of a train set to spending hours and large sums of money on a large and exacting model of a railroad and the scenery through which it passes, called a "layout". Hobbyists, called "railway modellers" or "model railroaders", may maintain models large enough to ride (see Live steam, Ridable miniature railway and Backyard railroad).
Modellers may collect model trains, building a landscape for the trains to pass through. They may also operate their own railroad in miniature. For some modellers, the goal of building a layout is to eventually run it as if it were a real railroad (if the layout is based on the fancy of the builder) or as the real railroad did (if the layout is based on a prototype). If modellers choose to model a prototype, they may reproduce track-by-track reproductions of the real railroad in miniature, often using prototype track diagrams and historic maps.
Layouts vary from a circle or oval of track to realistic reproductions of real places modelled to scale. Probably the largest model landscape in the UK is in the Pendon Museum in Oxfordshire, UK, where an EM gauge (same 1:76.2 scale as 00 but with more accurate track gauge) model of the Vale of White Horse in the 1930s is under construction. The museum also houses one of the earliest scenic models – the Madder Valley layout built by John Ahern. This was built in the late 1930s to late 1950s and brought in realistic modelling, receiving coverage on both sides of the Atlantic in the magazines Model Railway News and Model Railroader. Bekonscot in Buckinghamshire is the oldest model village and includes a model railway, dating from the 1930s. The world's largest model railroad in H0 scale is the Miniatur Wunderland in Hamburg, Germany. The largest live steam layout, with 25 miles (40 km) of track is Train Mountain in Chiloquin, Oregon, U.S.
Operations form an important aspect of rail transport modelling with many layouts being dedicated to emulating the operational aspects of a working railway. These layouts can become extremely complex with multiple routes, movement patterns and timetabled operation. The British outline model railway of Banbury Connections in New South Wales, Australia, is one of the world's most complicated model railways.
Model railroad clubs exist where enthusiasts meet. Clubs often display models for the public. One specialist branch concentrates on larger scales and gauges, commonly using track gauges from 3.5 to 7.5 inches (89 to 191 mm). Models in these scales are usually hand-built and powered by live steam, or diesel-hydraulic, and the engines are often powerful enough to haul dozens of human passengers.
The Tech Model Railroad Club (TMRC) at MIT in the 1950s pioneered automatic control of track-switching by using telephone relays.
The oldest society is 'The Model Railway Club' (established 1910), near Kings Cross, London, UK. As well as building model railways, it has 5,000 books and periodicals. Similarly, 'The Historical Model Railway Society' at Butterley, near Ripley, Derbyshire specialises in historical matters and has archives available to members and non-members.
== Scales and gauges ==
The words scale and gauge seem at first interchangeable but their meanings are different. Scale is the model's measurement as a proportion to the original, while gauge is the measurement between the rails.
The size of engines depends on the scale and can vary from 700 mm (27.6 in) tall for the largest rideable live steam scales such as 1:4, down to matchbox size for the smallest: Z-scale (1:220) or T scale (1:450). A typical HO (1:87) engine is 50 mm (1.97 in) tall, and 100 to 300 mm (3.94 to 11.81 in) long. The most popular scales are: G scale, Gauge 1, O scale, S scale, HO scale (in Britain, the similar OO), TT scale, and N scale (1:160 in the United States, but 1:148 in the UK). HO and OO are the most popular. Popular narrow-gauge scales include Sn3, HOn3 and Nn3, which are the same in scale as S, HO and N except with a narrower spacing between the tracks (in these examples, a scale 3 ft (914 mm) instead of the 4 ft 8+1⁄2 in (1,435 mm) standard gauge).
The largest common scale is 1:8, with 1:4 sometimes used for park rides. G scale (Garden, 1:24 scale) is most popular for backyard modelling. It is easier to fit a G scale model into a garden and keep scenery proportional to the trains. Gauge 1 and Gauge 3 are also popular for gardens. O, S, HO, and N scale are more often used indoors.
At first, model railways were not to scale. Aided by trade associations such as the National Model Railroad Association (NMRA) and Normen Europäischer Modellbahnen (NEM), manufacturers and hobbyists soon arrived at de facto standards for interchangeability, such as gauge, but trains were only a rough approximation to the real thing. Official scales for the gauges were drawn up, but were not at first rigidly followed and were not necessarily correctly proportioned for the gauge chosen. 0 (zero) gauge trains, for instance, operate on track too widely spaced in the United States, as the scale is accepted as 1:48, whereas in Britain 0 gauge uses a ratio of 43.5:1 (7 mm to 1 foot) and the gauge is nearly correct. British OO standards operate on track significantly too narrow: the 4 mm to 1 foot scale on a 16.5 mm (0.65 in) gauge corresponds to a track gauge of 4 ft 1+1⁄2 in (1,257 mm), which is 7 inches (178 mm) undersized. A 16.5 mm (0.65 in) gauge corresponds to the 4 ft 8+1⁄2 in (1,435 mm) standard gauge in H0 (half-0), at 3.5 mm to 1 foot or 1:87.1. This arose because British locomotives and rolling stock are smaller than those found elsewhere, leading to an increase in scale so that H0 scale mechanisms could still be used. Most commercial scales have standards that include wheel flanges that are too deep, wheel treads that are too wide, and rail tracks that are too large. In H0 scale, the rail heights are codes 100, 87, 83, 70, 55, 53, and 40, the code being the rail height in thousandths of an inch from base to railhead (so code 100 is a tenth of an inch and represents 156-pound rail).
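Since a rail code is just the rail height in thousandths of an inch, the prototype rail height it implies at a given scale is a one-line computation, sketched below; the 1:87.1 figure is the H0 ratio given above, and the output comments are illustrative.

<syntaxhighlight lang="python">
# Illustrative only: converting an H0 rail "code" to the prototype rail
# height it implies. Code = rail height in thousandths of an inch.

H0_SCALE = 87.1

def prototype_rail_height_in(code: int, scale: float = H0_SCALE) -> float:
    return (code / 1000.0) * scale

for code in (100, 83, 70, 55, 40):
    print(f"code {code}: ~{prototype_rail_height_in(code):.1f} in prototype rail")
# Code 100 implies roughly 8.7 in of rail, taller than even heavy prototype
# rail, consistent with the "rail tracks that are too large" point above.
</syntaxhighlight>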
Later, modellers became dissatisfied with inaccuracies and developed standards in which everything is correctly scaled. These are used by modellers but have not spread to mass-production because the inaccuracies and overscale properties of the commercial scales ensure reliable operation and allow for shortcuts necessary for cost control. The finescale standards include the UK's P4, and the even finer S4, which uses track dimensions scaled from the prototype. This 4 mm:1 ft modelling uses wheels 2 mm (0.079 in) or less wide running on track with a gauge of 18.83 mm (0.741 in). Check-rail and wing-rail clearances are similarly accurate.
A compromise between P4 and OO is "EM", which uses a gauge of 18.2 mm (0.717 in) with more generous tolerances than P4 for check clearances. It gives a better appearance than OO, though pointwork is not as close to reality as in P4. It suits many modellers for whom time and improved appearance are important. There is also a small following for finescale OO, which uses the same 16.5 mm gauge as OO but with the finer-scale wheels and smaller clearances used with EM; it is essentially EM minus 1.7 mm.
== Modules ==
Many groups build modules, which are sections of layouts, and can be joined together to form a larger layout, for meetings or for special occasions. For each kind of module system, there is an interface standard, so that modules made by different participants may be connected, even if they have never been connected before. Many of these module types are listed in the Layout standards organizations section of this article.
== Couplers and connectors ==
In addition to different scales, there are also different types of couplers for connecting cars, which are not compatible with each other.
In HO, the Americans standardized on horn-hook, or X2F, couplers. Horn-hook couplers have largely given way to a design known as the working knuckle coupler, popularized by the Kadee Quality Products Co. and subsequently emulated by a number of other manufacturers in recent years. Working knuckle couplers are a closer approximation to the "automatic" couplers used on the prototype there and elsewhere. Also in HO, the European manufacturers have standardized, but on a coupler mount, not a coupler: many varieties of coupler can be plugged into (and out of) the NEM coupler box. None of the popular couplers has any resemblance to the prototype three-link chains generally used on the continent.
For British modellers, whose most popular scale is OO, the normal coupler is a tension-lock coupler, which, again, makes no pretence of replicating the usual prototype three-link chain couplers. Bachmann and, more recently, Hornby have begun to offer models fitted with NEM coupler pockets. This theoretically enables modellers of British railways to substitute any other NEM362 coupler, though many Bachmann models place the coupler pocket at the wrong height. A fairly common alternative is to use representations of the chain couplings found on the prototype, though these require large-radius curves to avoid derailments.
Other scales have similar ranges of non-compatible couplers available. In all scales couplers can be exchanged, with varying degrees of difficulty.
== Landscaping ==
Some modellers pay attention to landscaping their layout, creating a fantasy world or modelling an actual location, often historic. Landscaping is termed "scenery building" or "scenicking".
Constructing scenery involves preparing a sub-terrain using a wide variety of building materials, including (but not limited to) screen wire, a lattice of cardboard strips, or carved stacks of expanded polystyrene (styrofoam) sheets. A scenery base is applied over the sub-terrain; typical bases include casting plaster, plaster of Paris, hybrid paper pulp (papier-mâché) or a lightweight foam/fiberglass/bubblewrap composite, as in Geodesic Foam Scenery.
The scenery base is covered with substitutes for ground cover, which may be static grass or scatter. Scatter, or flock, is a substance used in the building of dioramas and model railways to simulate the effect of grass, poppies, fireweed, track ballast and other scenic ground cover. Scatter used to simulate track ballast is usually fine-grained ground granite. Scatter that simulates coloured grass is usually tinted sawdust, wood chips or ground foam. Foam, natural lichen or commercial scatter materials can be used to simulate shrubbery. An alternative to scatter for grass is static grass, which uses static electricity to make its simulated grass actually stand up.
Buildings and structures can be purchased as kits, or built from cardboard, balsa wood, basswood, other soft woods, paper, or polystyrene or other plastic. Trees can be fabricated from materials such as Western sagebrush, candytuft, and caspia, to which adhesive and model foliage are applied; or they can be bought ready-made from specialist manufacturers. Water can be simulated using polyester casting resin, polyurethane, or rippled glass. Rocks can be cast in plaster or in plastic with a foam backing. Castings can be painted with stains to give colouring and shadows.
== Weathering ==
Weathering refers to making a model look used and exposed to weather by simulating dirt and wear on real vehicles, structures and equipment. Most models come out of the box looking new, because unweathered finishes are easier to produce. Also, the wear a freight car or building undergoes depends not only on age but where it is used. Rail cars in cities accumulate grime from building and automobile exhaust and graffiti, while cars in deserts may be subjected to sandstorms which etch or strip paint. A model that is weathered would not fit as many layouts as a pristine model which can be weathered by its purchaser.
There are many weathering techniques, including, but not limited to, painting (by either drybrushing or an airbrush), sanding, breaking, and even the use of chemicals to cause corrosion. Some processes become very creative, depending on the skill of the modeller. For instance, several steps may be taken to create a rusting effect, ensuring not only proper colouring but also proper texture and lustre.
Weathering purchased models is common; at the least, weathering aims to reduce the plastic-like finish of scale models. The simulation of grime, rust, dirt, and wear adds realism. Some modellers simulate fuel stains on tanks, or corrosion on battery boxes. In some cases, evidence of accidents or repairs may be added, such as dents or freshly painted replacement parts, and weathered models can be nearly indistinguishable from their prototypes when photographed appropriately.
== Methods of power ==
Static diorama models or "push along" scale models are a branch of model railways for unpowered locomotives, examples are Lone Star and Airfix models. Powered model railways are now generally operated by low voltage direct current (DC) electricity supplied via the tracks, but there are exceptions, such as Märklin and Lionel Corporation, which use alternating current (AC). Modern Digital Command Control (DCC) systems use alternating current. Other locomotives, particularly large models, can use steam. Steam and clockwork-driven engines are still sought by collectors.
=== Clockwork ===
Most early models for the toy market were powered by clockwork and controlled by levers on the locomotive. Although this made control crude, the models were large and robust enough that handling the controls was practical. Various manufacturers introduced slowing and stopping tracks that could trigger levers on the locomotive and allow station stops.
=== Electricity ===
Three-rail
The first miniature electric trains used a three-rail track, with non-insulated wheels resting on the two outer rails, which were in contact with the metal sleepers. The insulated central rail supplied the current to a skid under the locomotive, and the outer rails provided the return path. The current was alternating, supplied by the domestic mains and lowered by various means (a transformer or series resistances). This kind of track made sense at the time, as models were metal and conductive; modern plastics were not available, and insulation was a problem. In addition, the notion of accurate models had yet to evolve, and toy trains and track were crude tinplate.
In 1938, Hornby, a manufacturer of ‘O’ scale model trains in the UK, launched a range of ‘OO’ scale electric trains (Hornby Dublo) with 1/76 scale rolling stock using 1/87 scale 16.5 mm wide track with a third centre rail. The power supply was 12 V DC and the track was equipped with an electrically insulated central rail and two non-insulated running rails. In 1959 Hornby abandoned its three-rail track in favour of a two-rail track for its ‘OO’ scale electric trains.
Other systems, such as Märklin, instead used (from 1953) fine metal studs in place of the central rail, allowing existing three-rail models to run on more realistic track.
A variation on the three-rail system, introduced by Trix as early as 1935, used a track with three insulated rails that allowed two trains to be independently controlled on the same track; the use of a catenary made it possible to control three trains independently. The centre rail provided the common return for the current. That system, known as Trix Express (Trix Twin in the UK), first used alternating current and then direct current after 1953, and was abandoned in 1997 when Märklin took over Trix. This three-rail system enabled DC and AC locomotives to run on the same track.
Two-rail
When DC motors with more powerful magnets came into use for model trains in the 1950s, two-rail track became the general preference, as accuracy was growing in importance at the same time. The two rails are insulated from each other and are used with wheels insulated from each other on the same axle. In the direction of travel, the right-hand rail carries the positive potential and the left-hand rail the negative. Without insulated sections and suitable cabling, this system excludes certain track layouts, such as the reversing loop, the reversing triangle and the diagonal in a circle.
Overhead line
Where the model is of an electric locomotive, it may be supplied by overhead lines, like the full-size locomotive. Before Digital Command Control became available, this was one way of controlling two trains separately on the same track. The electric-outline model would be supplied by the overhead wire and the other model could be supplied by one of the running rails. The other running rail would act as a common return.
Battery
Early electric trains ran on trackside batteries because few homes in the late 19th century and early 20th century had electricity. Today, inexpensive train sets running on batteries are again common but regarded as toys and seldom used by hobbyists. Batteries located in the model often power garden railway and larger scale systems because of the difficulty in obtaining reliable power supply through the outdoor rails. The high power consumption and current draw of large-scale garden models are more easily and safely met with internal rechargeable batteries. Most large-scale battery-powered models use radio control.
=== Live steam ===
Engines powered by live steam are often built in large outdoor gauges of 5 inches (130 mm) and 7+1⁄2 inches (190 mm); they are also available in Gauge 1, G scale and 16 mm scale, and can be found in O and OO/HO. Hornby Railways produces live steam locomotives in OO, based on designs first arrived at by an amateur modeller. Other modellers have built live steam models in HO/OO, OO9 and N, and there is one in Z in Australia.
=== Internal combustion ===
Occasionally gasoline-electric models, patterned after real diesel-electric locomotives, turn up among hobbyists, and companies like Pilgrim Locomotive Works have sold such locomotives. Large-scale petrol-mechanical and petrol-hydraulic models are available but are unusual and more expensive than the electrically powered versions.
== Scratch building ==
Modern manufacturing techniques allow mass-produced models to achieve a high degree of precision and realism cost-effectively. In the past this was not the case and scratch building was very common. Simple models are made using cardboard engineering techniques. More sophisticated models can be made using a combination of etched sheets of brass and low-temperature castings. Parts that need machining, such as wheels and couplings, are purchased.
Etched kits are still popular, still accompanied by low temperature castings. These kits produce models that are not covered by the major manufacturers or in scales that are not in mass production. Laser machining techniques have extended this ability to thicker materials for scale steam and other locomotive types. Scratch builders may also make silicone rubber moulds of the parts they create, and cast them in various plastic resins (see Resin casting), or plasters. This may be done to save duplication of effort, or to sell to others. Resin "craftsman kits" are also available for a wide range of prototypes.
== Control ==
The first clockwork (spring-drive) and live steam locomotives ran until out of power, with no way for the operator to stop and restart the locomotive or vary its speed. The advent of electric trains, which appeared commercially in the 1890s, allowed control of the speed by varying the current or voltage. As trains began to be powered by transformers and rectifiers more sophisticated throttles appeared, and soon trains powered by AC contained mechanisms to change direction or go into neutral gear when the operator cycled the power. Trains powered by DC can change direction by reversing polarity.
Electricity permits control by dividing the layout into isolated blocks, where trains can be slowed or stopped by lowering or cutting power to a block. Dividing a layout into blocks permits operators to run more than one train with less risk of a fast train catching and hitting a slow train. Blocks can also trigger signals or other accessories, adding realism or whimsy. Three-rail systems often insulate one of the common rails on a section of track, and use a passing train to complete the circuit and activate an accessory.
Many layout builders are choosing digital operation of their layouts rather than the more traditional DC design. Of the several competing systems, the command system offered by the majority of manufacturers in 2020 was a variant of Digital Command Control (DCC). The advantages of DCC are that track voltage is constant (usually in the range of 20 volts AC) and that the command throttle sends a signal to small circuit cards, or decoders, hidden inside the piece of equipment, which control several functions of an individual locomotive, including speed, direction of travel, lights, smoke and various sound effects. This allows more realistic operation in that the modeller can independently operate several locomotives on the same stretch of track. Several manufacturers also offer software that can provide computer control of DCC layouts.
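As a rough illustration of the decoder addressing just described, the Python sketch below assembles a three-byte packet in the general shape of the DCC baseline format: an address byte, a speed-and-direction instruction byte, and an error byte computed as the XOR of the first two. The bit layout of the instruction byte is simplified for illustration; this is not a complete implementation of the NMRA standard.

```python
def baseline_dcc_packet(address: int, speed: int, forward: bool) -> bytes:
    """Build a simplified three-byte DCC-style baseline packet.

    Byte 1: decoder address (0-127).
    Byte 2: speed-and-direction instruction (simplified bit layout).
    Byte 3: error-detection byte, the XOR of the first two bytes.
    """
    assert 0 <= address <= 127 and 0 <= speed <= 15
    instruction = 0b01000000 | (0b00100000 if forward else 0) | speed
    return bytes([address, instruction, address ^ instruction])

pkt = baseline_dcc_packet(address=3, speed=7, forward=True)
print(pkt.hex())  # a decoder ignores packets that do not carry its address
```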
In large scales, particularly for garden railways, radio control and DCC in the garden have become popular.
== Model railway manufacturers ==
== Magazines ==
== Layout standards organizations ==
Several organizations exist to set standards for connectivity between individual layout sections (commonly called "modules"). This allows several (or, given enough space and power, hundreds of) people or groups to bring together their own modules, connect them with as little trouble as possible, and operate their trains. Despite different design and operation philosophies, the organizations have similar goals: standardized ends to facilitate connection with other modules built to the same specifications, and standardized electrical connections, equipment and curve radii.
ausTRAK, N Scale, two-track main with hidden third track (can be used as NTRAK's third main, as a return/continuous loop, or hidden yard/siding/on-line storage). Australian scenery and rolling stock modelled in Standard Gauge.
FREMO, a European-based organisation focusing on a single-track line in HO Scale; it also sets standards for N Scale modules. Its standards are considerably more flexible in module shape than NTRAK's, and it has expanded over the years to accommodate several scenery variations.
Free-mo, originally developed by the San Luis Obispo Model Railroad Club (California) in 1995; it has grown across North America and is expanding across the world. The objective of the Free-mo standard is to provide a platform for prototype modelling in a flexible, modular environment. Free-mo modules not only provide track to operate realistic models, but also emphasize realistic, plausible scenery; realistic, reliable trackwork; and operations. Free-mo was designed to go beyond the traditional closed-loop set-up, creating a truly universal "free-form" modular design that is operations-oriented and heavily influenced by prototype railroading. This is emphasized in the Free-mo motto, "More than Just a Standard".
MOROP, European Union of Model Railroad and Railroad Fans, the European standardization organisation.
NEM, the European model railway standards (Normen Europäischer Modellbahnen) published by MOROP.
NMRA, National Model Railroad Association, the largest organization devoted to the development, promotion, and enjoyment of the hobby of model railroading.
N-orma, Polish N-scale (1:160) modules organization.
NTRAK, standardized three-track (heavy operation) mainline with several optional branchlines. Focuses on standard gauge, but also has specifications for narrow gauge. Due to its popularity, it can be found in regional variations, most notably the imperial-to-metric measurement conversions. Tends to be used more for "unattended display" than "operation".
oNeTRAK, operationally similar to FREMO; standardises around a single-track mainline, with modules of varying sizes and shapes. Designed with the existing NTRAK spec in mind, it is fully compatible with such modules.
Sipping and Switching Society of NC, an association of individuals that has developed a system of HO modules featuring lightweight waffle construction using 5 mm lauan plywood underlayment, and an interface that depends on a metal template to locate 1-inch (25 mm) pegs mating with 1-inch holes in the adjoining module. The rails of the tracks are positioned in an exact relationship with the pegs and come up to the end of the modules, so that the rails on adjacent modules need no joiner track, instead depending on accurate rail placement to allow trains to pass from one section to another. This style of module allows very quick set-up compared with module systems that use joiner tracks.
sTTandard, Polish TT-scale (1:120) modules organization.
T-TRAK, a modular system that uses table-top modules, 2+3⁄4 inches (70 mm) high, which sit on tables that are not part of the modules but are often available at the sites where members meet. It uses a specific track interface with joiners that hold the modules together, enabling quick setting up and taking down.
Z-Bend Track, uses a double-track mainline running down both sides of a module. Modules can be of any length or width in the middle and any overall shape. The "standard" called Z-Bend Track applies only to the last 5 inches (130 mm) of the module's interface to other modules, the electrical interface and the module height.
== In popular culture ==
In the 1990 film Back to the Future Part III, Doc Brown builds a "crude" electrified model railway, "not to scale", to demonstrate his time travel experiment to Marty in 1885.
In Hinterland Season 1, Episode 4 ("The Girl in the Water"), a semi-recluse who lives and works at Borth railway station maintains a model train set with custom made components; the set and certain components contribute to a death as well as provide important clues to a murder investigation. During the investigation, DCI Tom Mathias reveals that his late brother was a model train aficionado.
In The Sopranos, Bobby Baccalieri is a model train aficionado. He is shown wearing an engineer's cap while playing with model trains in his garage.
In The Simpsons, Reverend Lovejoy is often depicted playing with his model trains when not on ecumenical duty, often while wearing a conductor's uniform and hat. His character may be a nod to the real life Reverend W. Awdry.
In Trailer Park Boys, Season 7 Episode 4, "Friends of the Dead", heavy metal singer Sebastian Bach is a featured guest at the Bangor model train convention and is introduced as "our Competitive Model Train World Champion". He expresses a dislike of alleged rival model train competitor Patrick Swayze. Attendees at the family event are shocked by Sebastian's use of obscenities as he attempts to work the crowd in a rock concert fashion shouting, "I know, I just know, that there are some great f**king trains here in Bangor!"
In That '90s Show, Red Forman runs a model railway in the garage after retiring.
== See also ==
Displays and famous layouts
Groups dedicated to railway modelling
== References ==
== External links ==
The National Model Railroad Association, USA – the largest model railroad organization in the world
The Model Railway Club, UK – the oldest known society in the world – established 1910
Associazione Ferrovie Siciliane – AFS (Messina – IT) – one of the most important groups of rail enthusiasts and railway modellers active in Sicily and throughout Italy, founded in 2006
In software engineering, a domain model is a conceptual model of the domain that incorporates both behavior and data. In ontology engineering, a domain model is a formal representation of a knowledge domain with concepts, roles, datatypes, individuals, and rules, typically grounded in a description logic.
== Overview ==
In the field of computer science a conceptual model aims to express the meaning of terms and concepts used by domain experts to discuss the problem, and to find the correct relationships between different concepts. The conceptual model is explicitly chosen to be independent of design or implementation concerns, for example, concurrency or data storage. Conceptual modeling in computer science should not be confused with other modeling disciplines within the broader field of conceptual models such as data modelling, logical modelling and physical modelling.
The conceptual model attempts to clarify the meaning of various, usually ambiguous, terms, and to ensure that confusion caused by different interpretations of the terms and concepts cannot occur. Such differing interpretations could easily cause confusion amongst stakeholders, especially those responsible for designing and implementing a solution, where the conceptual model provides a key artifact of business understanding and clarity. Once the domain concepts have been modeled, the model becomes a stable basis for subsequent development of applications in the domain. The concepts of the conceptual model can be mapped into physical design or implementation constructs using either manual or automated code generation approaches. The realization of conceptual models of many domains can be combined into a coherent platform.
A conceptual model can be described using various notations, such as UML, ORM or OMT for object modelling, or ITE or IDEF1X for entity-relationship modelling. In UML notation, the conceptual model is often described with a class diagram in which classes represent concepts, associations represent relationships between concepts and role types of an association represent role types taken by instances of the modelled concepts in various situations. In ER notation, the conceptual model is described with an ER diagram in which entities represent concepts, and cardinality and optionality represent relationships between concepts. Regardless of the notation used, it is important not to compromise the richness and clarity of the business meaning depicted in the conceptual model by expressing it directly in a form influenced by design or implementation concerns.
This is often used for defining different processes in a particular company or institute.
A domain model is a system of abstractions that describes selected aspects of a sphere of knowledge, influence or activity (a domain). The model can then be used to solve problems related to that domain.
The domain model is a representation of meaningful real-world concepts pertinent to the domain that need to be modeled in software. The concepts include the data involved in the business and rules the business uses in relation to that data. A domain model leverages natural language of the domain.
A domain model generally uses the vocabulary of the domain, thus allowing a representation of the model to be communicated to non-technical stakeholders. It should not refer to any technical implementations such as databases or software components that are being designed.
== Usage ==
A domain model is generally implemented as an object model within a layer that uses a lower-level layer for persistence and "publishes" an API to a higher-level layer to gain access to the data and behavior of the model.
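A minimal sketch of that layering, assuming hypothetical names rather than any particular framework, might look like the Python below: a domain concept carries both data and a business rule, while persistence is hidden behind an interface that a lower-level layer implements.

```python
from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class Order:
    """A domain concept, named in the vocabulary of the business."""
    order_id: str
    total_cents: int

    def apply_discount(self, percent: int) -> None:
        # A business rule lives with the data it governs.
        self.total_cents -= self.total_cents * percent // 100

class OrderRepository(Protocol):
    """Persistence interface "published" to the domain layer; a lower-level
    layer supplies the database-backed implementation."""
    def get(self, order_id: str) -> Optional[Order]: ...
    def save(self, order: Order) -> None: ...
```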
In the Unified Modeling Language (UML), a class diagram is used to represent the domain model.
== See also ==
Domain-driven design (DDD)
Domain layer
Information model
Feature-driven development
Logical data model
Mental model
OntoUML
== References ==
== Further reading ==
Halpin T, Morgan T: Information Modeling and Relational Databases, Morgan Kaufmann, 2008. ISBN 978-0-12-373568-3.
Fowler, Martin: Analysis Patterns, Reusable object models, Addison-Wesley Longman, 1997. ISBN 0-201-89542-0.
Stewart Robinson, Roger Brooks, Kathy Kotiadis, and Durk-Jouke Van Der Zee (Eds.): Conceptual Modeling for Discrete-Event Simulation, 2010. ISBN 978-1-4398-1037-8.
David W. Embley, Bernhard Thalheim (Eds.): Handbook of Conceptual Modeling, 2011. ISBN 978-3-642-15864-3.
In 3D computer graphics, 3D modeling is the process of developing a mathematical coordinate-based representation of a surface of an object (inanimate or living) in three dimensions via specialized software by manipulating edges, vertices, and polygons in a simulated 3D space.
Three-dimensional (3D) models represent a physical body using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3D models can be created manually, algorithmically (procedural modeling), or by scanning. Their surfaces may be further defined with texture mapping.
== Outline ==
The product is called a 3D model, while someone who works with 3D models may be referred to as a 3D artist or a 3D modeler.
A 3D model can also be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena.
3D models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. The 3D model can be physically created using 3D printing devices that form 2D layers of the model with three-dimensional material, one layer at a time. Without a 3D model, a 3D print is not possible.
3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs of this class are called modeling applications.
== History ==
3D models are now widely used in 3D graphics and CAD, but their history predates the widespread use of 3D graphics on personal computers.
In the past, many computer games used pre-rendered images of 3D models as sprites before computers could render them in real-time. The designer can view the model from various directions and angles, which can help them see whether the object matches their original vision. Seeing the design this way can help the designer or company decide on changes or improvements to the product.
=== Representation ===
Almost all 3D models can be divided into two categories:
Solid – These models define the volume of the object they represent (like a rock). Solid models are mostly used for engineering and medical simulations, and are usually built with constructive solid geometry.
Shell or boundary – These models represent the surface, i.e., the boundary of the object, not its volume (like an infinitesimally thin eggshell). Almost all visual models used in games and film are shell models.
Solid and shell modeling can create functionally identical objects. Differences between them are mostly variations in the way they are created and edited and conventions of use in various fields and differences in types of approximations between the model and reality.
Shell models must be manifold (having no holes or cracks in the shell) to be meaningful as a real object. In a shell model of a cube, the bottom and top surfaces of the cube must have a uniform thickness with no holes or cracks in the first and last layers printed. Polygonal meshes (and to a lesser extent, subdivision surfaces) are by far the most common representation. Level sets are a useful representation for deforming surfaces that undergo many topological changes, such as fluids.
The process of transforming representations of objects, such as the middle point coordinate of a sphere and a point on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres, cones etc., to so-called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of e.g., squares) are popular as they have proven to be easy to rasterize (the surface described by each triangle is planar, so the projection is always convex). Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene.
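A minimal sketch of this step, assuming a simple latitude/longitude ("UV") subdivision, is given below in Python: the abstract sphere becomes a list of vertex coordinates plus a list of index triples, each naming one triangle of the mesh. (The rows touching the poles yield degenerate triangles that a real tessellator would collapse.)

```python
import math

def tessellate_sphere(radius=1.0, n_lat=8, n_lon=16):
    """Approximate a sphere with a triangle mesh via UV tessellation."""
    verts = []
    for i in range(n_lat + 1):            # latitude rings from pole to pole
        theta = math.pi * i / n_lat
        for j in range(n_lon):            # points around each ring
            phi = 2 * math.pi * j / n_lon
            verts.append((radius * math.sin(theta) * math.cos(phi),
                          radius * math.sin(theta) * math.sin(phi),
                          radius * math.cos(theta)))
    tris = []
    for i in range(n_lat):
        for j in range(n_lon):
            a = i * n_lon + j                     # quad corners...
            b = i * n_lon + (j + 1) % n_lon
            c = (i + 1) * n_lon + j
            d = (i + 1) * n_lon + (j + 1) % n_lon
            tris += [(a, b, c), (b, d, c)]        # ...split into two triangles
    return verts, tris

verts, tris = tessellate_sphere()
print(len(verts), "vertices,", len(tris), "triangles")
```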
== Process ==
There are three popular ways to represent a model:
Polygonal modeling – Points in 3D space, called vertices, are connected by line segments to form a polygon mesh. The vast majority of 3D models today are built as textured polygonal models because they are flexible and because computers can render them so quickly. However, polygons are planar and can only approximate curved surfaces using many polygons.
Curve modeling – Surfaces are defined by curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points; increasing the weight for a point pulls the curve closer to that point (see the sketch after this list). Curve types include nonuniform rational B-splines (NURBS), splines, patches, and geometric primitives.
Digital sculpting – There are three types of digital sculpting. Displacement, currently the most widely used among applications, uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new locations for the vertex positions through use of an image map that stores the adjusted locations. Volumetric, loosely based on voxels, has similar capabilities to displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Dynamic tessellation, which is similar to the volumetric approach, divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for artistic exploration, as the model has new topology created over it once the model's form, and possibly details, have been sculpted. The new mesh usually has the original high-resolution mesh information transferred into displacement data or normal map data if it is for a game engine.
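To make the effect of control-point weights concrete, the sketch below evaluates a rational (weighted) Bézier curve, one of the simplest weighted curve forms; it is a textbook formula rather than code from any particular modeling package. Raising a point's weight pulls the evaluated curve toward that point.

```python
import math

def rational_bezier(control_points, weights, t):
    """Evaluate a weighted (rational) Bezier curve at parameter t in [0, 1]."""
    n = len(control_points) - 1
    x = y = den = 0.0
    for i, ((px, py), w) in enumerate(zip(control_points, weights)):
        b = math.comb(n, i) * t**i * (1 - t)**(n - i)  # Bernstein basis
        x, y, den = x + b * w * px, y + b * w * py, den + b * w
    return x / den, y / den

# With a heavier middle weight, the midpoint sits much closer to (1, 2):
print(rational_bezier([(0, 0), (1, 2), (2, 0)], [1, 1, 1], 0.5))  # (1.0, 1.0)
print(rational_bezier([(0, 0), (1, 2), (2, 0)], [1, 8, 1], 0.5))  # (1.0, ~1.78)
```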
The modeling stage consists of shaping individual objects that are later used in the scene. There are a number of modeling techniques, including:
Constructive solid geometry
Implicit surfaces
Subdivision surfaces
Modeling can be performed by means of a dedicated program (e.g., 3D modeling software like Adobe Substance, Blender, Cinema 4D, LightWave, Maya, Modo, 3ds Max, SketchUp, Rhinoceros 3D, and others) or an application component (Shaper, Lofter in 3ds Max) or some scene description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; in such cases, modeling is just part of the scene creation process (this is the case, for example, with Caligari trueSpace and Realsoft 3D).
3D models can also be created using the technique of Photogrammetry with dedicated programs such as RealityCapture, Metashape and 3DF Zephyr. Cleanup and further processing can be performed with applications such as MeshLab, the GigaMesh Software Framework, netfabb or MeshMixer. Photogrammetry creates models using algorithms to interpret the shape and texture of real-world objects and environments based on photographs taken from many angles of the subject.
Complex materials such as blowing sand, clouds, and liquid sprays are modeled with particle systems, and are a mass of 3D coordinates which have either points, polygons, texture splats or sprites assigned to them.
== 3D modeling software ==
There are a variety of 3D modeling programs that can be used in the industries of engineering, interior design, film and others. Each 3D modeling software has specific capabilities and can be utilized to fulfill demands for the industry.
=== G-code ===
Many programs include export options to generate G-code for additive or subtractive manufacturing machinery. G-code (computer numerical control) works with automated technology to form a real-world rendition of 3D models. The code is a specific set of instructions to carry out the steps of a product's manufacture.
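As a rough picture of what such exported instructions look like, the Python sketch below emits a toy G-code program using a few widely supported commands (G21, G90, G0, G1); the toolpath, depth and feed rate are arbitrary, and real slicer or CAM output is far more elaborate.

```python
def square_toolpath(side_mm, feed_mm_min=600, z_cut=-1.0):
    """Emit a toy G-code program tracing the outline of a square."""
    lines = [
        "G21 ; millimetre units",
        "G90 ; absolute positioning",
        f"G1 Z{z_cut} F{feed_mm_min} ; plunge to cutting depth",
    ]
    for x, y in [(side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]:
        lines.append(f"G1 X{x} Y{y} F{feed_mm_min} ; linear cutting move")
    lines.append("G0 Z5 ; rapid retract")
    return "\n".join(lines)

print(square_toolpath(20))
```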
=== Human models ===
The first widely available commercial application of human virtual models appeared in 1998 on the Lands' End web site. The human virtual models were created by the company My Virtual Model Inc. and enabled users to create a model of themselves and try on 3D clothing. There are several modern programs that allow for the creation of virtual human models (Poser being one example).
=== 3D clothing ===
The development of cloth simulation software such as Marvelous Designer, CLO3D and Optitex, has enabled artists and fashion designers to model dynamic 3D clothing on the computer.
Dynamic 3D clothing is used for virtual fashion catalogs, as well as for dressing 3D characters for video games, 3D animation movies, for digital doubles in movies, as a creation tool for digital fashion brands, as well as for making clothes for avatars in virtual worlds such as SecondLife.
== Comparison with 2D methods ==
3D photorealistic effects are often achieved without wire-frame modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers.
Advantages of wireframe 3D modeling over exclusively 2D methods include:
Flexibility, ability to change angles or animate images with quicker rendering of the changes;
Ease of rendering, automatic calculation and rendering photorealistic effects rather than mentally visualizing or estimating;
Accurate photorealism, less chance of human error in misplacing, overdoing, or forgetting to include a visual effect.
Disadvantages compared to 2D photorealistic rendering may include a software learning curve and difficulty achieving certain photorealistic effects. Some photorealistic effects may be achieved with special rendering filters included in the 3D modeling software. For the best of both worlds, some artists use a combination of 3D modeling followed by editing the 2D computer-rendered images from the 3D model.
== 3D model market ==
A large market for 3D models (as well as 3D-related content, such as textures, scripts, etc.) exists—either for individual models or large collections. Several online marketplaces for 3D content allow individual artists to sell content that they have created, including TurboSquid, MyMiniFactory, Sketchfab, CGTrader, and Cults. Often, the artists' goal is to get additional value out of assets they have previously created for projects. By doing so, artists can earn more money out of their old content, and companies can save money by buying pre-made models instead of paying an employee to create one from scratch. These marketplaces typically split the sale between themselves and the artist who created the asset; artists receive 40% to 95% of the sale depending on the marketplace. In most cases, the artist retains ownership of the 3D model, while the customer buys only the right to use and present it. Some artists sell their products directly in their own stores, offering their products at a lower price by not using intermediaries.
The architecture, engineering and construction (AEC) industry is the biggest market for 3D modeling, with an estimated value of $12.13 billion by 2028. This is due to the increasing adoption of 3D modeling in the AEC industry, which helps to improve design accuracy, reduce errors and omissions and facilitate collaboration among project stakeholders.
Over the last several years numerous marketplaces specializing in 3D rendering and printing models have emerged. Some of the 3D printing marketplaces are a combination of model-sharing sites, with or without built-in e-commerce capability. Some of those platforms also offer 3D printing services on demand, software for model rendering, and dynamic viewing of items.
== 3D printing ==
The term 3D printing or three-dimensional printing is a form of additive manufacturing technology where a three-dimensional object is created from successive layers of material. Objects can be created without the need for complex expensive molds or assembly with multiple parts. 3D printing allows ideas to be prototyped and tested without having to go through a production process.
3D models can be purchased from online markets and printed by individuals or companies using commercially available 3D printers, enabling the home-production of objects such as spare parts and even medical equipment.
== Uses ==
3D modeling is used in many industries.
The medical industry uses detailed models of organs created from multiple two-dimensional image slices from an MRI or CT scan. Other scientific fields can use 3D models to visualize and communicate information such as models of chemical compounds.
The movie industry uses 3D models for computer-generated characters and objects in animated and real-life motion pictures. Similarly, the video game industry uses 3D models as assets for computer and video games. The source of the geometry for the shape of an object can be a designer, industrial engineer, or artist using a 3D CAD system; an existing object that has been reverse engineered or copied using a 3D shape digitizer or scanner; or mathematical data based on a numerical description or calculation of the object.
The architecture industry uses 3D models to demonstrate proposed buildings and landscapes in lieu of traditional, physical architectural models. Additionally, the use of Level of Detail (LOD) in 3D models is becoming increasingly important in architecture, engineering, and construction.
Archeologists create 3D models of cultural heritage items for research and visualization. For example, the International Institute of MetaNumismatics (INIMEN) studies the applications of 3D modeling for the digitization and preservation of numismatic artifacts.
In recent decades, the earth science community has started to construct 3D geological models as a standard practice.
3D models are also used in constructing digital representations of mechanical parts before they are manufactured. Using CAD- and CAM-related software, an engineer can test the functionality of assemblies of parts then use the same data to create toolpaths for CNC machining or 3D printing.
3D modeling is used in industrial design, wherein products are 3D modeled before representing them to the clients.
In media and event industries, 3D modeling is used in stage and set design.
The OWL 2 translation of the vocabulary of X3D can be used to provide semantic descriptions for 3D models, which is suitable for indexing and retrieval of 3D models by features such as geometry, dimensions, material, texture, diffuse reflection, transmission spectra, transparency, reflectivity, opalescence, glazes, varnishes and enamels (as opposed to unstructured textual descriptions or 2.5D virtual museums and exhibitions using Google Street View on Google Arts & Culture, for example). The RDF representation of 3D models can be used in reasoning, which enables intelligent 3D applications which, for example, can automatically compare two 3D models by volume.
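As a hedged sketch of what such a machine-readable description might look like, the Python below uses the rdflib library to attach a few properties to a model and query for it; the namespace and property names are hypothetical stand-ins, not the actual X3D-derived OWL vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

X3D = Namespace("http://example.org/x3d-ontology#")  # hypothetical namespace

g = Graph()
model = URIRef("http://example.org/models/vase-42")
g.add((model, RDF.type, X3D.Model))
g.add((model, X3D.material, Literal("ceramic")))
g.add((model, X3D.heightMetres, Literal("0.30", datatype=XSD.decimal)))

# Structured descriptions can then be queried, e.g. with SPARQL:
q = "SELECT ?m WHERE { ?m a <http://example.org/x3d-ontology#Model> }"
for row in g.query(q):
    print(row.m)
```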
== See also ==
== References ==
== External links ==
Media related to 3D modeling at Wikimedia Commons
The model of a car is its design, in the context of the manufacturer's range or series of cars. Different models and variants are distinguishable by technology, components, underpinnings, and/or style and appearance.
The methods used to categorise cars into models differ significantly between manufacturers. Frequently, several different body variants are offered, depending on market demand; and when completing their 'production lifespan', sufficiently successful models are usually followed by a new 'generation' of that model.
The name of a model (range or series) is almost always trademarked, so that competing manufacturers cannot also use it (unless the owner permits it, for an agreed licence fee).
A popular model can have a significantly valuable brand name, and manufacturers often take great care in fostering and maintaining the brand image of the models bearing the name, both in terms of key model characteristics, as well as the targeted market, and the expected or desired buyer's demographic.
== Common characteristics ==
While equipment, upholstery and exterior trim are usually determined by the trim level, the car model often defines the platform (which determines the engines, drivetrains and chassis options available), body styles and aesthetic theme.
Some models have only one body style (e.g. the Hyundai i20 hatchback), while other models are produced in several body styles (e.g. the Audi A3, which has been produced in hatchback, sedan and convertible body styles). In some cases, a manufacturer has marketed a body style as a separate model, such as the Volkswagen Jetta and the BMW 4 Series, which are based on the Volkswagen Golf and BMW 3 Series platforms respectively.
Some models have only one engine (or electric/hybrid powertrain) option available, while other models have multiple powertrains available.
In North America, a model can also be called a nameplate. The Chevrolet Suburban is the oldest automobile nameplate in continuous production, dating to 1934, and the 1940-1996 Chrysler New Yorker was another long-running North American car nameplate. However, the term "nameplate" is also sometimes used to describe an entire brand, rather than a specific model. The Rolls-Royce Phantom is another long-running model, originally introduced in 1925 and revived in 2003.
=== Country-specific model names ===
The same car model may be sold by the automaker in different countries under different model names. Examples include Mitsubishi Pajero / Montero, Mazda MX-5 / Miata, Volkswagen Golf / Rabbit and Ford Everest / Endeavour.
== Model years ==
The model year (MY) is a manner of indicating the version of a car that has been produced and changed over multiple years.
== Trim level ==
Beyond the standard equipment that is fitted to all vehicles for a model, additional features (such as the upholstery, interior equipment, safety features and exterior aerodynamic/styling upgrades) are often determined by the trim level of the vehicle.
Many manufacturers also allow additional equipment to be added to a vehicle by purchasing individual options (such as alloy wheels) or 'packages' of bundled options (such as a "safety package" consisting of lane departure warning system, collision avoidance system and additional airbags).
== Model code ==
Model codes (also known as chassis codes, codenames, designations, or descriptors, among others) are assigned to a vehicle to provide identification. A model code provides information on the vehicle's type and, to an extent, its engine, transmission and body style. Some manufacturers include model codes on the vehicle identification plate alongside the vehicle identification number. Some manufacturers adopted development codes as model codes. Model codes can be used to find the correct parts for the vehicle.
== See also ==
== References ==
In marketing, a product is an object, system, or service made available for consumer use in response to consumer demand; it is anything that can be offered to a domestic or an international market to satisfy the desire or need of a customer. In retailing, products are often referred to as merchandise, and in manufacturing, products are bought as raw materials and then sold as finished goods. A service is also regarded as a type of product.
In project management, products are the formal definition of the project deliverables that make up or contribute to delivering the objectives of the project.
A related concept is that of a sub-product, a secondary but useful result of a production process.
Dangerous products, particularly physical ones, that cause injuries to consumers or bystanders may be subject to product liability.
== Product classification ==
A product can be classified as tangible or intangible. A tangible product is an actual physical object that can be perceived by touch such as a building, vehicle, gadget, or clothing. An intangible product is a product that can only be perceived indirectly such as an insurance policy. These services can be broadly classified under intangible products, which can be durable or nondurable.
=== By use ===
In its online product catalog, retailer Sears, Roebuck and Company divides its products into "departments", then presents products to potential shoppers according to (1) function or (2) brand. Each product has a Sears item number and a manufacturer's model number. Sears uses the departments and product groupings with the intention of helping customers browse products by function or brand within a traditional department-store structure.
=== By association ===
A product line is "a group of products that are closely related, either because they function in a similar manner, are sold to the same customer groups, are marketed through the same types of outlets, or fall within given price ranges."
Many businesses offer a range of product lines which may be unique to a single organisation or may be common across the business's industry. In 2002 the US Census compiled revenue figures for the finance and insurance industry by various product lines such as "accident, health and medical insurance premiums" and "income from secured consumer loans". Within the insurance industry, product lines are indicated by the type of risk coverage, such as auto insurance, commercial insurance and life insurance.
=== National and international product classifications ===
Various classification systems for products have been developed for economic statistical purposes. The NAFTA signatories are working on a system that classifies products called NAPCS as a companion to the North American Industry Classification System (NAICS). The European Union uses a "Classification of Products by Activity" among other product classifications. The United Nations also classifies products for international economic activity reporting.
The Aspinwall Classification System classifies and rates products based on five variables:
Replacement rate (How frequently is the product repurchased?)
Gross margin (How much profit is obtained from each product?)
Buyer goal adjustment (How flexible are the buyers' purchasing habits with regard to this product?)
Duration of product satisfaction (How long will the product produce benefits for the user?)
Duration of buyer search behavior (How long will consumers shop for the product?)
The National Institute of Governmental Purchasing (NIGP) developed a commodity and services classification system for use by state and local governments, the NIGP Code. The NIGP Code is used by 33 states within the United States as well as thousands of cities, counties and political subdivisions. The NIGP Code is a hierarchical schema consisting of a 3 digit class, 5 digit class-item, 7 digit class-item-group, and an 11 digit class-item-group-detail. Applications of the NIGP Code include vendor registration, inventory item identification, contract item management, spend analysis, and strategic sourcing.
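Because each longer NIGP code extends the shorter ones, the hierarchy can be recovered by slicing digits, as the illustrative Python below shows; the sample code value is hypothetical.

```python
def nigp_levels(code: str) -> dict:
    """Split an 11-digit NIGP code into its hierarchical levels
    (3-digit class, 5-digit class-item, 7-digit class-item-group,
    11-digit class-item-group-detail), per the scheme described above."""
    digits = code.replace("-", "")
    assert len(digits) == 11 and digits.isdigit()
    return {
        "class": digits[:3],
        "class_item": digits[:5],
        "class_item_group": digits[:7],
        "class_item_group_detail": digits,
    }

print(nigp_levels("00514016051"))  # hypothetical code value
```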
== Product model ==
A manufacturer usually provides an identifier for each particular design of product they make, known as a model, model variant, or model number (often abbreviated as MN, M/N or model no., and sometimes as M- or Mk). For example, Dyson Ltd, a manufacturer of appliances (mainly vacuum cleaners), requires customers to identify their model in the support section of the website. Brand and model can be used together to identify products in the market. The model number is not necessarily the same as the manufacturer part number (MPN).
Because of the huge number of similar products in the automotive industry, there is a special way of defining a car by options (marks, attributes) that represent its characteristic features. A model of a car is defined by some basic options like body, engine, gearbox, and axles. The variants of a model (often called the trim levels) are built from additional options like colour, seats, wheels, mirrors, other trims, entertainment and assistance systems, etc. Options that exclude each other (pairwise) form an option family; that means exactly one option must be chosen from each family.
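The "exactly one option per family" rule is straightforward to express as a validity check, as the Python sketch below shows; the families and options are hypothetical, purely for illustration.

```python
# Hypothetical option families; a valid variant picks exactly one per family.
FAMILIES = {
    "body": {"sedan", "wagon"},
    "engine": {"1.5L petrol", "2.0L diesel"},
    "gearbox": {"manual", "automatic"},
    "colour": {"red", "blue", "white"},
}

def valid_variant(choices: dict) -> bool:
    """True if every family is covered and each choice belongs to its family."""
    return (choices.keys() == FAMILIES.keys()
            and all(choices[f] in opts for f, opts in FAMILIES.items()))

print(valid_variant({"body": "sedan", "engine": "2.0L diesel",
                     "gearbox": "manual", "colour": "red"}))  # True
print(valid_variant({"body": "sedan", "engine": "2.0L diesel",
                     "gearbox": "manual"}))                   # False: no colour
```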
In addition, a specific unit of a product is often (and in some contexts must be) identified by a serial number, which is necessary to distinguish products with the same product definition. In the case of automotive products, it is called the vehicle identification number (VIN), an internationally standardised format.
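While the VIN format itself is internationally standardised, the sketch below illustrates only the North American check-digit rule: each character is transliterated to a number, multiplied by a positional weight, and the sum is reduced modulo 11 (with 10 written as "X"). The example VIN is a commonly cited test value.

```python
# Transliteration table and weights of the North American VIN check digit.
TRANS = {**{str(d): d for d in range(10)},
         **dict(zip("ABCDEFGH", range(1, 9))),
         **dict(zip("JKLMN", range(1, 6))), "P": 7, "R": 9,
         **dict(zip("STUVWXYZ", range(2, 10)))}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit(vin: str) -> str:
    """Expected check digit (position 9) of a 17-character VIN."""
    total = sum(TRANS[c] * w for c, w in zip(vin.upper(), WEIGHTS))
    remainder = total % 11
    return "X" if remainder == 10 else str(remainder)

vin = "1M8GDM9AXKP042788"  # commonly cited example VIN
print(vin_check_digit(vin) == vin[8])  # True: check digit is consistent
```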
== Product information ==
Product information, beyond current price information, can include:
Many of these types of product information are regulated to some degree, for instance by prohibiting false or misleading product information or by requiring sellers or manufacturers to specify information such as the ingredients of food, pharmaceutical and hygiene products. There is also standardization. Marketing to entice the shopper is often prioritized over accurate, high-quality or extensive and relevant information.
Product information is often a key element in the buyer decision process. Relevant factors include trust in the accuracy of the information and social normative pressure. Easily accessible and up-to-date medicinal product information can contribute to health literacy. Online shopping is usually more informationally rich than shopping at physical stores, and usually has higher comparability and customizability.
Product information-related developments can be useful for enabling, facilitating, or shifting towards sustainable consumption, and can support more sustainable products. Environmental life-cycle assessment (LCA) has been widely used to assess environmental impacts across the life cycle of products. There are LCA datasets that assess all products in some supermarkets in a standardized way. Consumers may seek reliable information to evaluate relevant characteristics of products such as durability and reliability. Development of 'transparency by design' scenarios has been suggested to "complement the physical product with layers of digital information", improving transparency and traceability (T&T). The app CodeCheck gives some smartphone users the capability to scan products for assessed ingredients. Many labels are considered to be flawed, and few people have the time to "study the true environmental impact of every purchase". Full product transparency is a concept of making the full life-cycle impacts public. An important element required for various product information is supply chain transparency, which relates to human rights and supply chain sustainability.
=== Produce traceability ===
=== Product passports ===
In the EU, under the renewed Sustainable Product Policy Initiative, the inclusion of a Digital Product Passport has been proposed. A material passport is a document consisting of all the materials that are included in a product or construction. It consists of a set of data describing defined characteristics of materials in products, useful for recovery, recycling, re-use and various evaluations. They may contribute to a more circular economy.
=== Product information management ===
== See also ==
Builder's plate
Manufacturer part number
== References ==
== Further reading ==
Stark, John (2015). Product Lifecycle Management: Volume 1: 21st Century Paradigm for Product Realisation. Springer. ISBN 978-3-319-17439-6.
== External links ==
Quotations related to Merchandise at Wikiquote
Media related to Products at Wikiquote
A role model is a person whose behaviour, example, or success serves as a model to be emulated by others, especially by younger people. The term role model is credited to sociologist Robert K. Merton, who hypothesized that individuals compare themselves with reference groups of people who occupy the social role to which the individual aspires, an example of which is the way young fans may idolize and imitate professional athletes or entertainment artists.
In the second half of the twentieth century, U.S. advocates for workplace equity popularized the term and concept of role models as part of a larger social capital lexicon—which also includes terms such as glass ceiling, networking, mentoring, and gatekeeper—serving to identify and address the problems barring non-dominant groups from professional success. Mainstream business literature subsequently adopted the terms and concepts, promoting them as pathways to success for all career climbers. In 1970 these terms were not in the general American vocabulary; by the mid-1990s they had become part of everyday speech. Although the term role model has been criticized more recently as "outdated", the term and its associated responsibility remains prominent in the public consciousness as a commonly used phrase, and a "powerful presence" in the entertainment industry and media.
Role models can also be national: for example, Chilean politicians and intellectuals had France as the prime role model during much of the 19th century, until they shifted to Germany in the last decades of the century. In short, a role model is a person looked to by others as an example to be imitated.
== Effect on career opportunity and choice ==
According to historian Pamela Laird, a person's chosen role models may have a considerable impact on his or her career opportunities and choices. The suitability of a role model depends, in part, on the admirer's perceived commonality with the model, who should provide an image of an ambitious yet realistic goal. For example, Laird suggests that Benjamin Franklin served as the role model for countless nineteenth-century white businessmen, including notables such as Thomas Mellon, B.F. Goodrich, and Frederick Weyerhäuser. Laird suggests that the lack of commonalities between potential role models and would-be admirers helped perpetuate barriers to American minorities and women as they tried to advance in a business world dominated by white men, thus spurring late twentieth-century efforts to develop suitable role models for these groups.
Parent role models also significantly influence a person's "education and training aspirations, task self-efficacy, and expectancy for an entrepreneurial career".
== Celebrity role models ==
The ever-widening reach of the media in popular culture has elevated certain celebrities to worldwide acclaim. This boom of media coverage and constant exposure to these individuals has resulted in a change of mindset toward celebrities in adults and youth alike. According to a survey of teachers in the United Kingdom conducted in 2008 by the Association of Teachers and Lecturers, young people most frequently chose sports stars as role models, followed by pop stars. Many, however, simply aspired to be "famous for being famous", believing that fame and fortune could be easily accessed through reality television.
== Community role models ==
According to Rita Pierson, teachers, because of the large amount of time they spend with children, have such a huge impact on children that they are advised to be likeable in order to build strong emotional relationships with them.
== Athlete role models ==
There is significant discussion as to whether athletes should be considered role models. Some athletes have been asked to behave as if they were role models for their local communities, and some, such as Hank Greenberg, have deliberately tried to set a good example. Generally, however, regarding athletes as role models has been criticised because their appointment is often based solely on sporting ability rather than morality; it has been suggested that the discipline and control shown continuously by sportspeople on the field leads viewers to believe that these same qualities are continuously shown off the field. These and other factors, such as the elements of competition, excitement and success, are what make people want to emulate them. Charles Barkley has stated that he believes athletes are not the figures children should be emulating and that it is the parents' responsibility to be role models; that the role is deliberately applied by the media out of jealousy in order to make life more difficult for sportspeople; and that it sets up sportspeople as an unattainable target for most.
== The Importance of Role Models for Children ==
Role models are central to child and personal development, shaping a person's morals, aspirations, and even confidence level. Role models can have either a positive or negative influence on children, depending on what they promote. However, many studies have shown that positive relationships with a role model are associated with higher levels of certain traits, such as "elevated self-esteem, performance in school, and resilience". Role models, similar to mentors, have also been shown to reduce risky behavior in adolescents.
Behaviors practiced by role models in an adolescent's life can often be seen replicated by that adolescent, owing to the higher standard to which the child holds their role model(s). Among the role models most commonly cited by children are their family members, because of the positive attributes children are constantly exposed to.
Lacking such figures to rely on through observational learning during child development can result in poor decision-making skills, or even a weak sense of self later on.
== The Impact of Role Models on Young Girls ==
Role models play an impactful role in shaping the aspirations, self-perceptions, and attitudes of girls, particularly when they intend to challenge traditional gender norms. Positive role models, especially women in male dominated fields, such as STEM, can inspire the younger generation of women to expand their understandings of what is possible for them to achieve. According to a study conducted by Laurie T. O'Brien, middle school girls who interacted with competent and enthusiastic female scientists reported a greater sense of belonging in STEM fields.
Role models also play a part in younger children's career aspirations. The Harvard Kennedy School conducted research on the influence of role models on students interested in STEM, in which students agreed that their choice of role model reflected how much they identified with their pick. The importance of this is underscored by the assumption that exposing children to gender counter-stereotypical role models can challenge their gendered aspirations.
In media, the portrayal of strong, independent female characters such as Disney princesses also influences girls' actions and perceptions of gender roles. Characters like Elsa and Mulan challenge traditional depictions of femininity by focusing on themes like bravery and independence, as opposed to the typical portrayal of the beautiful love-interest princess. These portrayals encourage girls to embrace diverse qualities and pursue their personal ambitions, thereby fostering a sense of empowerment and resilience.
Disney princesses can also model working through struggles of identity. Mulan, for example, questions the societal expectations placed upon her in moments such as the song "Reflection". In the film she wipes off her makeup and transforms from a potential bride into a warrior. Mulan rejects the stereotypical role assigned to her as a woman and continues her journey, conveying the empowering message that it is acceptable not to conform to social norms and encouraging individuals to embrace their true identities.
The 1950 version of Cinderella has also prompted discussion about gender stereotypes. Cinderella remains gentle despite the treatment she receives from her step-mother and step-sisters, and throughout the story she is kept in the role of a housewife she did not choose. At a deeper level, concerns about appearance and body image come into play, as when Prince Charming searches the kingdom for the woman who fits the lost slipper. From more recent perspectives, as of 2021, children may be negatively affected by watching these older Disney films, coming to believe that being a princess requires fitting a particular body type, or that their own family should be as perfect as those in the films.
However, the impact of role models is nuanced. While short-term exposure to non-traditional female role models can help reduce gender stereotypes in certain situations, it does not always translate into sustained changes in behavior or career aspirations. Studies such as one published in Frontiers in Psychology show that lasting influence requires ongoing engagement and support, such as mentorship programs.
On the other hand, researchers have observed that same-sex role models in a given field foster gender-congruent aspirations and behavior. This learning process leads children to adopt gender-stereotypical knowledge, which later influences their aspirations to align with traditional gender roles (e.g., women aspiring to caregiving roles, men being represented in typical leadership positions). The research also found that exposing children to gender-incongruent role models, such as male kindergarten teachers or female scientists, can challenge traditional gender norms. Individuals who defy gender stereotypes reduce the impact of gender stereotyping on children, ultimately weakening stereotypical aspirations and behaviors.
== See also ==
Virtue ethics
Celebrity
Identification (psychology)
Model
Role engulfment
== References ==
A model aircraft is a physical model of an existing or imagined aircraft, and is built typically for display, research, or amusement. Model aircraft are divided into two basic groups: flying and non-flying. Non-flying models are also termed static, display, or shelf models.
Aircraft manufacturers and researchers make wind tunnel models for testing aerodynamic properties, for basic research, or for the development of new designs. Sometimes only part of the aircraft is modelled.
Static models range from mass-produced toys in white metal or plastic to highly accurate and detailed models produced for museum display and requiring thousands of hours of work. Many are available in kits, typically made of injection-molded polystyrene or resin.
Flying models range from simple toy gliders made of sheets of paper, balsa, card stock or foam polystyrene to powered scale models built up from balsa, bamboo sticks, plastic, (including both molded or sheet polystyrene, and styrofoam), metal, synthetic resin, either alone or with carbon fiber or fiberglass, and skinned with either tissue paper, mylar and other materials. Some can be large, especially when used to research the flight properties of a proposed full scale aircraft.
== Aerodynamic research and mock-ups ==
Models are made for wind tunnel and free-flight research tests and may have components that can be swapped to compare various fittings and configurations, or have features such as controls that can be repositioned to reflect various in-flight configurations. They are also often fitted with sensors for spot measurements and are usually mounted on a structure that ensures the correct alignment with the airflow, and which provides additional measurements. For wind tunnel research, it is sometimes necessary only to make part of the proposed aircraft.
Full-scale static engineering models are also constructed for production development, often made of different materials from the proposed design. Again, often only part of the aircraft is modelled.
== Static display models ==
Static model aircraft cannot fly. They are used for display and education, and in wind tunnels to collect data for the design of full-scale aircraft. They may be built from any suitable material, often including plastic, wood, metal, paper and fiberglass, and may be built to a specific scale so that the size of the original can be compared to that of other aircraft. Models may come finished or may require painting, assembly (with glue, screws, or by clipping together), or both.
Many of the world's airlines allow their aircraft to be modelled for publicity. Airlines used to order large scale models of their aircraft to supply them to travel agencies as a promotional item. Desktop model airplanes may be given to airport, airline and government officials to promote an airline or celebrate a new route or an achievement.
=== Scale ===
Static model aircraft are primarily available commercially in a variety of scales from as large as 1:18 scale to as small as 1:1250 scale. Plastic model kits requiring assembly and painting are primarily available in 1:144, 1:72, 1:48, 1:32, and 1:24 scale. Die-cast metal models (pre-assembled and factory painted) are available in scales ranging from 1:48 to 1:600.
Scales are not random, but are generally based on divisions of either the imperial or the metric system.
For example, 1:48 scale is 1/4 inch to 1 foot (or 1 inch to 4 feet) and 1:72 is 1 inch to 6 feet, while in metric scales such as 1:100, 1 centimeter equals 1 meter.
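To make the arithmetic concrete, here is a minimal Python sketch; the `model_size` helper and the example dimensions are invented for illustration and are not part of any modelling standard:

```python
# Illustrative scale arithmetic: a 1:n model is simply 1/n of every
# prototype dimension. Helper name and values are hypothetical.

def model_size(prototype_mm: float, scale: int) -> float:
    """Model dimension in mm for a 1:scale reproduction."""
    return prototype_mm / scale

# A 9,500 mm prototype fuselage at common kit scales:
for scale in (24, 32, 48, 72, 144):
    print(f"1:{scale}: {model_size(9500, scale):.1f} mm")

# The imperial rule of thumb checks out: at 1:48, a quarter inch on the
# model represents twelve inches on the prototype (0.25 in x 48 = 12 in).
assert 0.25 * 48 == 12.0
```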
1:72 scale was introduced with Skybirds wood and metal model aircraft kits in 1932 and was followed closely by Frog, which used the same scale from 1936 with their "Frog Penguin" brand. 1:72 was popularized in the US during the Second World War by the US War Department, which requested models of commonly encountered single-engine aircraft at that scale, and of multi-engine aircraft at 1:144 scale, hoping to improve aircraft recognition skills; these scales compromised between size and detail. After WWII, manufacturers continued with these scales, though kits were also offered in other divisions of the imperial system. 1:50 and 1:100 are common in Japan and France, which both use the metric system. Promotional models for airlines are produced in scales ranging from 1:200 to 1:1200.
Some manufacturers made 1:18 scale aircraft to go with cars of the same scale. Aircraft models, military vehicles, figures, cars, and trains all have different common scales, but there is some crossover. There is a substantial amount of duplication of more famous subjects in different scales, which can be useful for forced perspective box dioramas.
Older models often did not conform to an established scale as they were sized to fit the box, and are referred to as being to "Box Scale".
=== Materials ===
The most common form of manufacture for kits is injection-molded polystyrene plastic, formed in steel molds. Plastic pellets are heated into a liquid and forced into the mold under high pressure through trees that hold all the parts and ensure plastic flows to every part of the mold. This allows a greater degree of automation than other manufacturing processes, but molds require large production runs to cover the cost of making them. Today, this takes place mostly in Asia and Eastern Europe. Smaller runs are possible with copper molds, and some companies use resin or rubber molds; while the cost of such a mold is lower, its durability is also lower and labor costs can be much higher.
Resin kits are made in forms similar to those used for limited run plastic kits, but these molds are usually not as durable, which limits them to smaller production runs, and prices for the finished product are higher.
Vacuum forming is another common alternative, but it requires more skill, and details must be supplied by the modeller. There are a handful of photo-etched metal kits that allow a high level of detail, but they are unable to replicate compound curves.
Scale models can also be made from paper or card stock. Commercial models are mainly printed by publishers in Germany and Eastern Europe, but many are distributed through the internet, some free of charge.
From World War I through the 1950s, static model airplanes were also built from lightweight bamboo or balsa wood and covered with tissue paper in the same manner as flying models. This was a time-consuming process that mirrored the actual construction of airplanes through the beginning of World War II. Many model makers would create models from drawings of the actual aircraft.
Ready-made desk-top models include those produced in fiberglass for travel agents and aircraft manufacturers, as well as collectors models made from die-cast metal, mahogany, resin and plastic.
Carbon fibers and fiberglass have become increasingly common in model aircraft kits. In model helicopters, main frames and rotor blades are often made from carbon fiber, along with ribs and spars in fixed-wing aircraft wings.
== Flying models ==
Aeromodelling is the building and operation of flying model aircraft. Some flying models resemble scaled-down versions of full-scale aircraft, while others are built with no intention of looking like real aircraft. There are also models of birds, bats and pterosaurs (usually ornithopters). The reduced size affects the model's Reynolds number, which determines how the air reacts when flowing past the model; compared to a full-sized aircraft, the size of control surfaces needed, the stability, and the effectiveness of specific airfoil sections may differ considerably, requiring changes to the design.
=== Control ===
Flying model aircraft are generally controlled through one of three methods:
Free flight (F/F) model aircraft are uncontrolled other than by control surfaces that must be preset before flight, and must have a high degree of natural stability. Most free flying models are either unpowered gliders or rubber powered. These pre-date manned flight.
Control line (C/L) model aircraft use strings or wires to tether the model to a central pivot, either held by hand or mounted on a pole. The aircraft then flies in circles around that point, secured by one cable, while a second provides pitch control through a connection to the elevator. Some use a third cable to control a throttle. There are many competition categories. Speed flying is divided into classes based on engine displacement; Class 'D' 60-size speed planes can easily reach speeds well in excess of 150 mph (240 km/h).
Radio-controlled aircraft have a controller who operates a transmitter that sends signals to a receiver in the model, actuating servos that adjust the model's flight controls much as in a full-sized aircraft. Traditionally, the radio signal directly controlled servos; however, modern examples often use flight control computers to stabilize the model or even to fly it autonomously. This is particularly the case with quadcopters. Rudimentary flight controllers were first introduced in model helicopters, with standalone electronic gyroscopes used to stabilize the tail rotor control. Much like quadcopters, this has now extended to all flight controls.
=== Construction ===
The construction of flying models may differ from that of static models, as both weight and strength are major considerations.
Flying models borrow construction techniques from full-sized aircraft, although the use of metal is limited. A frame may be formed from thin planks of a light wood such as balsa to duplicate the formers, longerons, spars, and ribs of a vintage full-size aircraft, or, on larger (usually powered) models where weight is less of a factor, sheets of wood, expanded polystyrene, and wood veneers may be employed. The frame is then given a smooth sealed surface, usually with aircraft dope. For light models, tissue paper is used. For larger models (usually powered and radio controlled), heat-curing or heat-shrink covering plastic films or heat-shrinkable synthetic fabrics are applied. Microfilm covering is used for the lightest models and is made by spreading a few drops of lacquer over several square feet of water and lifting a wire loop through it, which creates a thin plastic film.
Flying models can be assembled from kits, built from plans, or made completely from scratch. A kit contains the necessary raw material, typically die- or laser-cut wood parts, some molded parts, plans, and assembly instructions, and the design may have been flight tested. Plans are intended for the more experienced modeller, since the builder must make or find the materials themselves. Scratch builders may draw their own plans and source all the materials themselves. Any method may be labor-intensive, depending on the model in question.
To increase the hobby's accessibility, some vendors offer Almost Ready to Fly (ARF) models that minimize the skills required and reduce build time to under 4 hours, versus 10–40 or more for a traditional kit. Ready To Fly (RTF) radio control aircraft are also available; however, model building remains integral to the hobby for many. For a more mass-market approach, "foamies", injection-molded from lightweight foam (sometimes reinforced), have made indoor flight more accessible; many require little more than attaching the wing and landing gear.
=== Gliders ===
Gliders do not have an attached powerplant. Larger outdoor model gliders are usually radio-controlled and hand-winched against the wind by a line attached via a ring to a hook under the fuselage, so that the line drops away when the model is overhead. Other launch methods include catapult-launching with an elastic bungee cord, ground-based power winches, hand-towing, and towing aloft behind a second, powered aircraft. The newer "discus" style of wingtip hand-launching has largely supplanted the earlier "javelin" type of launch.
Gliders sustain flight through exploitation of the wind in the environment. A hill or slope often produces updrafts of air that sustain the flight of a glider. This is called slope soaring, and radio controlled gliders can remain airborne for as long as the updraft remains. Another means of attaining height in a glider is exploitation of thermals, which are columns of warm rising air created by differences of temperature on the ground such as between an asphalt parking lot and a lake. Heated air rises, carrying the glider with it. As with a powered aircraft, lift is obtained by the action of the wings as the aircraft moves through the air, but in a glider, height is gained by flying through air that is rising faster than the aircraft is sinking.
Walkalong gliders are lightweight model airplanes flown in the ridge lift produced by the pilot following in close proximity. In other words, the glider is slope soaring in the updraft of the moving pilot (see also Controllable slope soaring).
=== Power sources ===
Powered models contain an onboard powerplant, a mechanism powering propulsion of the aircraft through the air. Electric motors and internal combustion engines are the most common propulsion systems, but other types include rocket, small turbine, pulsejet, compressed gas, and tension-loaded (twisted) rubber band devices.
==== Rubber ====
The oldest method of powering free flight models is Alphonse Pénaud's elastic motor (or extensible motor) of 1871, essentially a long rubber band that is twisted to add tension, prior to flight. It is the most widely used powerplant, found on everything from children's toys to competition models. The elastic offers simplicity and durability, but has a short running time, and the initial high torque of a fully wound motor drops sharply before plateauing to a steady output, until the final turns unwind and power drops off completely. Using it efficiently is one of the challenges of competitive free-flight rubber flying, and variable-pitch propellers, differential wing and tailplane incidence and rudder settings, controlled by timers, can help to manage the torque. There are also usually motor weight restrictions in contest classes. Even so, models have achieved flights of nearly 1 hour.
==== Compressed gases ====
Stored compressed gas, typically carbon dioxide (CO2), can power simple models in a manner similar to filling a balloon and then releasing it. Compressed CO2 may also be used to power an expansion engine to turn a propeller. These engines can incorporate speed controls and multiple cylinders, and are capable of powering lightweight scale radio-controlled aircraft. Gasparin and Modela are two recent makers of CO2 engines. CO2, like rubber, is known as "cold" power because it generates no heat.
Steam is even older than rubber power, and like rubber, contributed much to aviation history, but is now rarely used. In 1848, John Stringfellow flew a steam-powered model, in Chard, Somerset, England. Samuel Pierpont Langley built both steam- and internal-combustion-powered models that made long flights.
Baronet Sir George Cayley built, and flew, internal and external combustion gunpowder-fueled model aircraft engines in 1807, 1819, and 1850. These had no crank, working ornithopter-like flappers instead of a propeller. He speculated that the fuel might be too dangerous for manned aircraft.
==== Internal combustion ====
For larger and heavier models, the most popular powerplant is the glow plug engine. Glow engines are fueled by a mixture of slow-burning methanol, nitromethane, and lubricant (castor oil or synthetic oil), which is sold pre-mixed as glow fuel. Glow engines require an external starting mechanism; the glow plug must be heated until it is hot enough to ignite fuel and start the engine. Reciprocating pistons apply torque to a rotating crankshaft, which is the engine's primary power output. Some power is lost in converting linear motion to rotary motion and in waste heat and unburned fuel, so efficiency is low.
These are rated by engine displacement and range from 0.01 cu in (0.16 cc) to over 1.0 cu in (16 cc). The smallest engines can spin a 3.5 inches (8.9 cm) propeller to over 30,000 rpm, while the larger engines turn at 10,000–14,000 rpm.
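For a sense of what such rpm figures mean at the blade, the tip speed follows from circumference times rotational rate. A small illustrative Python sketch (the helper name is invented):

```python
import math

# Blade tip speed = circumference x revolutions per second.
def tip_speed(diameter_m: float, rpm: float) -> float:
    return math.pi * diameter_m * rpm / 60.0

# A 3.5 in (0.0889 m) propeller at 30,000 rpm:
print(f"{tip_speed(0.0889, 30000):.0f} m/s")  # ~140 m/s at the tip
```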
The simplest glow engines use the two-stroke cycle. These engines are inexpensive and offer the highest power-to-weight ratio of all glow engines, but are noisy and require substantial expansion chamber mufflers, which may be tuned. Four-stroke glow engines, whether using poppet valves or, more rarely, rotary valves, are more fuel-efficient, but deliver less power than similar two-stroke engines. The power they deliver is more suited to turning larger-diameter propellers for lighter-weight, higher-drag airframes such as biplanes. Four-stroke engines are now popular as they are quieter than two-stroke engines, and are available in horizontally opposed twin and radial engine configurations. Variations include engines with multiple cylinders, spark-ignition gasoline operation, carbureted diesel operation and variable compression ratios. Diesels are preferred for endurance and have higher torque, and for a given capacity can "swing" a larger propeller than a glow engine. Home manufacture of model aircraft engines is a hobby in its own right.
==== Jets and rockets ====
Early "jet" style model aircraft used a multi-blade ducted-fan propeller inside ductwork, usually in the fuselage. The fans were generally powered by two-stroke engines at high rpm, usually of 0.40 to 0.90 cu in (6.6 to 14.7 cc) displacement, though some were as small as 0.049 cu in (0.80 cc). This fan-in-tube design has been adopted successfully for electric-powered jets, while glow-engine-powered ducted-fan aircraft are now rare. Small jet turbine engines are now used in hobbyist models that resemble simplified versions of the turbojet engines found on commercial aircraft, though they are not simply scaled down, as Reynolds numbers come into play. The first hobbyist-developed turbine was developed and flown in the 1980s, but only recently have commercial examples become readily available. Turbines require specialized design and precision manufacturing, and some have been built from car engine turbocharger units. Owning or operating a turbine-powered aircraft is prohibitively expensive, and many national clubs (such as the USA's Academy of Model Aeronautics) require members to be certified to use them safely.
Pulsejet engines of the type used on the V-1 flying bomb have also been used, as they offer more thrust in a smaller package than a traditional glow engine, but they are not widespread due to the extremely high noise levels they produce, and they are illegal in some countries.
Rocket engines are sometimes used to boost gliders and sailplanes. The earliest purpose-built rocket motor dates back to the 1950s, with the introduction of the Jetex motor, which used solid fuel pellets, ignited by a wick fuse, in a reusable casing. Flyers can now also use single-use model rocket engines to provide a short burst of power, under 10 seconds. Government restrictions in some countries made rocket propulsion rare; although these restrictions were later eased in many places and use expanded, a reclassification from "smoke producing devices" to "fireworks" has again made the motors difficult to obtain.
==== Electric power ====
Electric-powered models use an electric motor powered by a source of electricity, usually a battery. Electric power began being used on models in the 1970s, but cost delayed widespread use until the early 1990s, when more efficient battery technologies and brushless motors became available, while the costs of motors, batteries and control systems dropped dramatically. Electric power now predominates in park-flyer and 3D-flyer models, both of which are small and light, where electric power offers greater efficiency and reliability, less maintenance and mess, quieter flight and near-instantaneous throttle response compared to internal combustion engines.
The first electric models used brushed DC motors and nickel cadmium (NiCad) rechargeable cells that gave flight times of 5 to 10 minutes, while a comparable glow engine provided double the flight time. Later electric systems used more efficient brushless DC motors and higher-capacity nickel metal hydride (NiMH) batteries, yielding considerably improved flight times. Cobalt and lithium polymer (LiPo) batteries permit electric flight times to surpass those of glow engines, while the more rugged and durable, cobalt-free lithium iron phosphate batteries are also becoming popular. Solar power has also become practical for R/C hobbyists; in June 2005 a record flight of 48 hours and 16 minutes was set in California. It is now possible to power most models under 20 lb (9.1 kg) with electric power for a cost equivalent to or lower than traditional power sources.
Recent developments have resulted in the use of brushless three-phase motors in model aviation. Brushless motors are more powerful and offer greater torque and efficiency. The design of brushless motors also means less internal friction, as there is no requirement for brushes to be in contact with any rotating parts. This increase in efficiency results in longer flight times.
=== Propulsion types ===
Most powered model aircraft, including electric, internal-combustion, and rubber-band powered models, generate thrust by spinning an airscrew; the propeller is the most commonly used device. Propellers generate thrust through the lift produced by the wing-like sections of their blades, which forces air backward.
==== Propellers ====
A large diameter and low-pitch propeller offers greater thrust and acceleration at low airspeed, while a small diameter and higher-pitch propeller sacrifices acceleration for higher maximum speeds. The builder can choose from a selection of propellers to match the model but a mismatched propeller can compromise performance, and if too heavy, cause undue wear on the powerplant. Model aircraft propellers are usually specified as diameter × pitch, in inches. For example, a 5 x 3 propeller has a diameter of 5 inches (130 mm), and a pitch of 3 inches (76 mm). The pitch is the distance that the propeller would advance if turned through one revolution in a solid medium. Two and three bladed propellers are the most common.
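The pitch figure gives a rough theoretical maximum speed: advance per revolution times revolutions per unit time, ignoring slip. A minimal sketch (the function name and the rpm figure are invented for illustration):

```python
# Theoretical "pitch speed": pitch (inches per revolution) x rpm,
# converted to mph. Real airspeed is lower because the propeller
# slips in air rather than screwing through a solid medium.

def pitch_speed_mph(pitch_in: float, rpm: float) -> float:
    inches_per_hour = pitch_in * rpm * 60
    return inches_per_hour / (12 * 5280)

# A 5 x 3 propeller turning at 12,000 rpm:
print(f"{pitch_speed_mph(3, 12000):.0f} mph")  # ~34 mph theoretical maximum
```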
Three methods are used to transfer energy to the propeller:
Direct-drive systems have the propeller attached directly to the engine's crankshaft or driveshaft. This arrangement is preferred when the propeller and powerplant both operate near peak efficiency at similar rpm. Direct drive is most common with fuel-powered engines. More rarely, some electric motors are designed with sufficiently high torque and low enough speed that they can use direct drive as well; these motors are typically called outrunners.
Reduction drive uses gears to reduce shaft rpm, so the motor can spin much faster. The higher the gear ratio, the slower the prop rotates, which also increases torque by roughly the same ratio. This is common on larger models and on those with unusually large propellers. The reduction drive matches the powerplant and propeller to their respective optimum operating speeds. Geared propellers are rare on internal combustion engines, but are common on electric motors because most electric motors spin extremely fast, but lack torque.
A built-in 2:1 gear reduction ratio can be obtained by attaching the propeller to the camshaft rather than the crankshaft of a four stroke engine, which runs at half the speed of the crankshaft.
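The rpm/torque trade a reduction drive makes can be summarized in a few lines. This is a sketch with invented names that ignores gear friction (real gearboxes lose a few percent):

```python
# An ideal reduction drive divides rpm and multiplies torque by the
# same ratio, leaving shaft power unchanged (losses ignored here).

def through_reduction(motor_rpm: float, motor_torque_nm: float, ratio: float):
    return motor_rpm / ratio, motor_torque_nm * ratio

prop_rpm, prop_torque = through_reduction(20000, 0.05, ratio=4.0)
print(prop_rpm, prop_torque)  # 5000.0 rpm, 0.2 N*m: slower but four times the torque
```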
==== Ducted fans ====
Ducted fans are multi-blade propellers encased in a cylindrical duct or tube that may look like, and fit in the same space as, a jet engine. They are available for both electric and liquid-fuelled engines, and have become common with recent improvements in electric-flight technology. A model aircraft can now be fitted with four electric ducted fans for less than the cost of a single jet turbine, enabling affordable modelling of multi-engine airplanes. Compared to an unducted propeller, a ducted fan generates more thrust for the same area, and speeds of up to 200 mph (320 km/h) have been recorded with electric-powered ducted-fan airplanes, largely due to the higher rpm possible with ducted fans. Ducted fans are popular with scale models of jet aircraft, where they mimic the appearance of jet engines, but they are also found on non-scale and sport models, and even lightweight 3D-flyers.
==== Other ====
With ornithopters, the motion of the wing structure imitates the flapping wings of living birds, producing both thrust and lift.
== Competitions and classes ==
World competitions are organized by the Fédération Aéronautique Internationale (FAI) in many classes, groups, and subclasses:
Class F – model aircraft
F1x – Free Flight
F2x – Control Line
F3x – Radio Control
F4x – Scale Aircraft (a reduced-size reproduction of a full-size aircraft)
F5x – Radio Control Electric Powered Motor Gliders
FAI – Drone Racing (F3U)
Class S – space model
Class U – unmanned aerial vehicle
=== Free flight (F1) ===
The Wakefield Gold Challenge Cup is an international modelling competition named for the donor, Lord Wakefield. The event was first held on 5 July 1911 at The Crystal Palace in England. There were contests in 1912, 1913 and 1914. No contests were held again until 1927, when the Society of Model Aeronautical Engineers (SMAE) approached Lord Wakefield for a new larger silver trophy for international competition. This trophy is the present Wakefield International Cup and was first awarded in 1928. The SMAE organized the international competitions until 1951 when the FAI took over, and has since been made the award for the rubber-power category at the FAI World Free Flight Championships.
The FAI free flight classes include:
F1A – Gliders
F1B – Model Aircraft with extensible (rubber band) motors – Wakefield Trophy
F1C – Power model aircraft (combustion powered 2.5 cc (0.15 cu in))
F1D – Indoor model aircraft
F1E – Gliders with automatic steering
F1N – Indoor hand-launch gliders
F1P – Power model aircraft (combustion powered 1.0 cc)
F1Q – Electric power model aircraft
F1G – Model aircraft with extensible (rubber band) motors « Coupe d’hiver » (provisional)
F1H – Gliders (provisional)
F1J – Power model aircraft (provisional) (combustion powered 1.0 cc (0.061 cu in))
F1K – Model aircraft with CO2 motors (provisional)
F1L – Indoor zone EZB model aircraft (provisional)
F1M – Indoor model aircraft (provisional)
F1R – Indoor model aircraft “Micro 35” (provisional)
F1S – Small electric power model aircraft “E36”
=== Control line (F2) ===
Also referred to as U-Control in the US, it was pioneered by the late Jim Walker who often, for show, flew three models at a time. Normally the model is flown in a circle and controlled by a pilot in the center holding a handle connected to two thin steel wires. The wires connect through the inboard wing tip of the plane to a mechanism that translates the handle movement to the aircraft elevator, allowing maneuvers to be performed along the aircraft pitch axis. The pilot turns to follow the model going round, the convention being counterclockwise for upright level flight.
For the conventional control-line system, tension in the lines is required to provide control. Line tension is maintained largely by centrifugal force. To increase line tension, models may be built or adjusted in various ways. Rudder offset and thrust vectoring (tilting the engine toward the outside) yaw the model outward. The position where the lines exit the wing can compensate for the tendency of the aerodynamic drag of the lines to yaw the model inboard. Weight on the outside wing, an inside wing that is longer or has more lift than the outside wing (or even no outside wing at all) and the torque of a left rotating propeller (or flying clockwise) tend to roll the model toward the outside. Wing tip weights, propeller torque, and thrust vectoring are more effective when the model is going slowly, while rudder offset and other aerodynamic effects have more influence on a fast moving model.
Since its introduction, control line flying has developed into a competition sport. There are contest categories for control line models, including Speed, Aerobatics (AKA Stunt), Racing, Navy Carrier, Balloon Bust, Scale, and Combat. There are variations on the basic events, including divisions by engine size and type, skill categories, and age of model design.
The events originated largely in the United States, and were later adapted for use internationally. The rules for US competition are available from the Academy of Model Aeronautics. The international rules are defined by the Fédération Aéronautique Internationale (FAI). World Championships are held every two years, most recently in 2008 in France, with a limited slate of events: special varieties of racing (F2C or "Team Race"), combat (F2D), and speed (F2A), all limited to engines displacing 0.15 cu in (2.5 cc), and stunt (F2B), which is essentially unlimited with regard to design and size.
CIAM (FAI Aeromodelling Commission) designated the following classes in the F2 Control Line category:
F2A
CL Speed
F2B
CL Aerobatics
F2C
CL Team racing
The international class of racing is referred to as F2C (F2 = Control-line, C=racing) or Team Race. A pilot and a mechanic compete as a team to fly small 370 g (13 oz) 65 cm (26 in) wingspan semi-scale racing models over a tarmac or concrete surface. Lines are 15.92 m (52.2 ft) long.
Three pilots, plus mechanic teams, compete simultaneously in the same circle, and the object is to finish the determined course as fast as possible. Tank size is limited to 7 cc (0.43 cu in), requiring 2 or 3 refueling pitstops during the race.
The mechanic stands at a pit area outside the marked flight circle. The engine is started and the model released on the start signal. For refueling, the pilot operates a fuel shutoff with a quick down-elevator movement after the planned number of laps, so that the model can approach the mechanic at an optimum speed of around 31 mph (50 km/h). The mechanic catches the model by the wing, fills the tank from a pressurized can through a hose and finger valve, then restarts the engine by flicking the propeller. A pitstop generally takes less than three seconds.
The course is 6.2 mi (10 km), with 100 laps. Flying speeds are around 200 km/h (120 mph), which means that the pilots turn one lap in roughly 1.8 seconds. Line pull due to centrifugal force is 19 lbf (85 N). An overtaking model is steered over the heads of the competing pilots of slower models.
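Those figures can be cross-checked from first principles. The sketch below considers only the bare 370 g model, so it slightly underestimates the quoted line pull, which also reflects fuel and the mass of the lines themselves:

```python
import math

r = 15.92        # line length, m
m = 0.370        # minimum model mass, kg
v = 200 / 3.6    # 200 km/h in m/s

lap = 2 * math.pi * r                       # one lap is ~100 m, so 100 laps ~ 10 km
print(f"lap time  = {lap / v:.2f} s")       # ~1.8 s per lap
print(f"line pull = {m * v**2 / r:.0f} N")  # ~72 N for the bare model
```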
After two rounds of elimination heats, the 6, 9 or 12 fastest teams enter two semifinal rounds, and the three fastest teams in the semifinals go to the final, which is run over double the course. Single-cylinder two-stroke diesel (compression ignition) engines designed for this purpose, of up to 2.5 cc (0.15 cu in), are used. At the world championship level it is common for competitors to design and build their own engines. Output power approaches 0.8 hp (0.60 kW) at 25,000 rpm.
==== F2D – control line combat ====
Class F2D – Control Line Combat Model Aircraft – Two pilots compete, with four mechanics in the pit. The aircraft are light and stubby so as to maneuver quickly in the air. Each has an 8 ft 2 in (2.5 m) crepe paper streamer attached to the rear of the aircraft by a 3 m (9.8 ft) string. Each pilot attacks only the other aircraft's streamer, attempting to cut it with propeller or wing. Each cut scores 100 points. Each second the model is in the air scores a point, and the match runs for 4 minutes from the starter's signal. At the nearly 120 mph (200 km/h) speeds of the aircraft, mistakes often lead to crash damage, so two aircraft are permitted for each match. The mechanics are prepared for crashes and quickly start the second aircraft and transfer the streamer to the reserve model before launching. The action is so fast that an observer may miss the cuts of the streamers. A second loss eliminates a competitor, and the last pilot still flying wins.
=== Radio-controlled flight (F3) ===
The FAI radio control classes include:
F3A
RC Aerobatic Aircraft
F3B
RC Multi-Task Gliders
F3C
RC Aerobatic Helicopters
F3D
RC Pylon Racing Aeroplanes – Pylon racing refers to a class of air racing for radio controlled model aircraft that fly through a course of pylons. The sport is similar to the full-scale Red Bull Air Race World Series.
F3F
RC Slope Soaring Gliders
F3J
RC Thermal Duration Gliders
F3K
RC Hand Launch Gliders
F3M
RC Large Aerobatic Aircraft
F3N
RC Freestyle Aerobatic Helicopters
F3P
RC Indoor Aerobatic Aircraft
F3H
RC Soaring Cross Country Gliders
F3Q
RC Aero-Tow Gliders
F3R
RC Pylon Racing Limited Technology Aeroplanes
F3S
RC Jet Aerobatic Aircraft
F3T
RC Semi-Scale Pylon Racing with Controlled Technology Aeroplanes
F3U
RC Multi-rotor FPV Racing – The FAI Drone Racing World Cup is in the F3U class (Radio Control Multi-rotor FPV Racing). This is a highly competitive drone racing activity, involving mental exertion and big cash prizes.
=== Scale aircraft (F4) ===
The FAI classes for scale model aircraft (a reduced-size reproduction of a full-size aircraft) include:
F4B
control line scale aeroplanes
F4C
radio control scale aeroplanes
F4H
radio control stand-off scale aeroplanes
=== Radio-controlled electric motor gliders (F5) ===
The FAI classes include:
F5B – Electric Motor Glider – Multi Task (held in alternate years only)
F5D – Electric Pylon Racing
F5J – Electric Motor Glider – Thermal Duration
== Model aerodynamics ==
The flight behavior of an aircraft depends on the scale to which it is built, the density of the air and the speed of flight.
At subsonic speeds the relationship between these is expressed by the Reynolds number. Where two models at different scales are flown with the same Reynolds number, the airflow is similar. Where the Reynolds numbers differ, as for example a small-scale model flying at lower speed than the full-size craft, the airflow characteristics can differ significantly. This can make an exact scale model unflyable, and the model has to be modified in some way. For example, at low Reynolds numbers, a flying scale model usually requires a larger-than-scale propeller.
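As an illustration of how far apart the two regimes are, here is a minimal Python sketch of the Reynolds number, Re = ρvL/μ, using sea-level air properties; the flight speeds and chord lengths below are invented examples, not from any particular aircraft:

```python
RHO = 1.225    # sea-level air density, kg/m^3
MU = 1.81e-5   # dynamic viscosity of air, Pa*s

def reynolds(speed_m_s: float, chord_m: float) -> float:
    return RHO * speed_m_s * chord_m / MU

print(f"full size:  Re = {reynolds(60.0, 1.5):.1e}")   # ~6.1e6
print(f"1:10 model: Re = {reynolds(15.0, 0.15):.1e}")  # ~1.5e5, a very different flow regime
```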
Maneuverability depends on scale, and stability becomes more important at smaller sizes. Control torque is proportional to lever arm length, while angular inertia is proportional to the square of the lever arm, so the smaller the scale, the more quickly an aircraft or other vehicle turns in response to control inputs or outside forces.
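Following the proportionality stated above (torque scaling with the lever arm, angular inertia with its square), relative turn response scales as the inverse of the scale factor. A toy sketch of that relationship:

```python
# Relative angular acceleration ~ torque / inertia ~ L / L^2 = 1 / L,
# per the simple proportionality described in the text.

def relative_response(scale_factor: float) -> float:
    return scale_factor / scale_factor ** 2

for L in (1.0, 0.5, 0.1):
    print(f"scale {L}: {relative_response(L):.0f}x the full-size turn response")
```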
One consequence of this is that models in general require additional longitudinal and directional stability, resisting sudden changes in pitch and yaw. While it may be possible for a pilot to respond quickly enough to control an unstable aircraft, a radio control scale model of the same aircraft would be flyable only with design adjustments such as increased tail surfaces and wing dihedral for stability, or with avionics providing artificial stability. Free flight models need to have both static and dynamic stability. Static stability is the resistance to sudden changes in pitch and yaw already described, and is typically provided by the horizontal and vertical tail surfaces respectively, and by a forward center of gravity. Dynamic stability is the ability to return to straight and level flight without any control input. The three dynamic instability modes are pitch (phugoid) oscillation, spiral dive and Dutch roll. An aircraft with too large a horizontal tail on a fuselage that is too short may have a phugoid instability, with increasing climbs and dives; with free flight models, this usually results in a stall or loop at the end of the initial climb. Insufficient dihedral or sweepback generally leads to an increasingly tight spiral turn, while too much dihedral or sweepback generally causes Dutch roll. These all depend on the scale, as well as details of the shape and weight distribution. For example, one contest-winning paper glider design flies well when made from a small sheet of paper, but wanders from side to side in Dutch roll when scaled up even slightly.
== See also ==
== Footnotes ==
== References == | Wikipedia/Model_aircraft
Railway modelling (UK, Australia, New Zealand, and Ireland) or model railroading (US and Canada) is a hobby in which rail transport systems are modelled at a reduced scale.
The scale models include locomotives, rolling stock, streetcars, tracks, signalling, cranes, and landscapes including: countryside, roads, bridges, buildings, vehicles, harbors, urban landscape, model figures, lights, and features such as rivers, hills, tunnels, and canyons.
The earliest model railways were the 'carpet railways' in the 1840s. The first documented model railway was the Railway of the Prince Imperial (French: Chemin de fer du Prince Impérial) built in 1859 by Emperor Napoleon III for his then 3-year-old son, also Napoleon, in the grounds of the Château de Saint-Cloud in Paris. It was powered by clockwork and ran in a figure-of-eight. Electric trains appeared around the start of the 20th century, but these were crude likenesses. Model trains today are more realistic, in addition to being much more technologically advanced. Today modellers create model railway layouts, often recreating real locations and periods throughout history.
The world's oldest working model railway is a model designed to train signalmen on the Lancashire and Yorkshire Railway. It is located in the National Railway Museum, York, England and dates back to 1912. It remained in use until 1995. The model was built as a training exercise by apprentices of the company's Horwich Works and supplied with rolling stock by Bassett-Lowke.
== General description ==
Involvement ranges from possession of a train set to spending hours and large sums of money on a large and exacting model of a railroad and the scenery through which it passes, called a "layout". Hobbyists, called "railway modellers" or "model railroaders", may maintain models large enough to ride (see Live steam, Ridable miniature railway and Backyard railroad).
Modellers may collect model trains, building a landscape for the trains to pass through. They may also operate their own railroad in miniature. For some modellers, the goal of building a layout is to eventually run it as if it were a real railroad (if the layout is based on the fancy of the builder) or as the real railroad did (if the layout is based on a prototype). Modellers of a prototype may create track-by-track reproductions of the real railroad in miniature, often using prototype track diagrams and historic maps.
Layouts vary from a circle or oval of track to realistic reproductions of real places modelled to scale. Probably the largest model landscape in the UK is in the Pendon Museum in Oxfordshire, where an EM gauge (the same 1:76.2 scale as 00 but with a more accurate track gauge) model of the Vale of White Horse in the 1930s is under construction. The museum also houses one of the earliest scenic models, the Madder Valley layout built by John Ahern. This was built from the late 1930s to the late 1950s and brought in realistic modelling, receiving coverage on both sides of the Atlantic in the magazines Model Railway News and Model Railroader. Bekonscot in Buckinghamshire is the oldest model village and includes a model railway, dating from the 1930s. The world's largest model railroad in H0 scale is the Miniatur Wunderland in Hamburg, Germany. The largest live steam layout, with 25 miles (40 km) of track, is Train Mountain in Chiloquin, Oregon, U.S.
Operations form an important aspect of rail transport modelling with many layouts being dedicated to emulating the operational aspects of a working railway. These layouts can become extremely complex with multiple routes, movement patterns and timetabled operation. The British outline model railway of Banbury Connections in New South Wales, Australia, is one of the world's most complicated model railways.
Model railroad clubs exist where enthusiasts meet. Clubs often display models for the public. One specialist branch concentrates on larger scales and gauges, commonly using track gauges from 3.5 to 7.5 inches (89 to 191 mm). Models in these scales are usually hand-built and powered by live steam, or diesel-hydraulic, and the engines are often powerful enough to haul dozens of human passengers.
The Tech Model Railroad Club (TMRC) at MIT in the 1950s pioneered automatic control of track-switching by using telephone relays.
The oldest society is 'The Model Railway Club' (established 1910), near Kings Cross, London, UK. As well as building model railways, it has 5,000 books and periodicals. Similarly, 'The Historical Model Railway Society' at Butterley, near Ripley, Derbyshire specialises in historical matters and has archives available to members and non-members.
== Scales and gauges ==
The words scale and gauge seem at first interchangeable, but their meanings are different. Scale is the ratio of the model's dimensions to those of the original, while gauge is the distance between the rails.
The size of engines depends on the scale and can vary from 700 mm (27.6 in) tall for the largest rideable live steam scales such as 1:4, down to matchbox size for the smallest: Z-scale (1:220) or T scale (1:450). A typical HO (1:87) engine is 50 mm (1.97 in) tall, and 100 to 300 mm (3.94 to 11.81 in) long. The most popular scales are: G scale, Gauge 1, O scale, S scale, HO scale (in Britain, the similar OO), TT scale, and N scale (1:160 in the United States, but 1:148 in the UK). HO and OO are the most popular. Popular narrow-gauge scales include Sn3, HOn3 and Nn3, which are the same in scale as S, HO and N except with a narrower spacing between the tracks (in these examples, a scale 3 ft (914 mm) instead of the 4 ft 8+1⁄2 in (1,435 mm) standard gauge).
The largest common scale is 1:8, with 1:4 sometimes used for park rides. G scale (Garden, 1:24 scale) is most popular for backyard modelling. It is easier to fit a G scale model into a garden and keep scenery proportional to the trains. Gauge 1 and Gauge 3 are also popular for gardens. O, S, HO, and N scale are more often used indoors.
At first, model railways were not to scale. Aided by trade associations such as the National Model Railroad Association (NMRA) and Normen Europäischer Modellbahnen (NEM), manufacturers and hobbyists soon arrived at de facto standards for interchangeability, such as gauge, but trains were only a rough approximation to the real thing. Official scales for the gauges were drawn up but not at first rigidly followed, and were not necessarily correctly proportioned for the gauge chosen. 0 (zero) gauge trains, for instance, operate on track too widely spaced in the United States, as the scale is accepted as 1:48, whereas in Britain 0 gauge uses a ratio of 43.5:1, or 7 mm to 1 foot, and the gauge is nearly correct. British OO standards operate on track significantly too narrow: the 4 mm to 1 foot scale on a 16.5 mm (0.65 in) gauge corresponds to a track gauge of 4 ft 1+1⁄2 in (1,257 mm), which is 7 inches or 178 millimetres undersized. 16.5 mm (0.65 in) gauge corresponds correctly to 4 ft 8+1⁄2 in (1,435 mm) standard gauge in H0 (half-0), at 3.5 mm to 1 foot or 1:87.1. The discrepancy arose because British locomotives and rolling stock are smaller than those found elsewhere, leading to an increase in scale so that H0 scale mechanisms could be used. Most commercial scales have standards that include wheel flanges that are too deep, wheel treads that are too wide, and rail tracks that are too large. In H0 scale, the rail heights are codes 100, 87, 83, 70, 55, 53, and 40, the code being the height in thousandths of an inch from base to railhead (so code 100 is a tenth of an inch and represents 156-pound rail).
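The gauge and rail-code arithmetic above is easy to verify. A short sketch with invented helper names:

```python
# Rail "code" is the rail height in thousandths of an inch; a model
# gauge implies a prototype gauge through the scale (mm per scale foot).

def code_height_in(code: int) -> float:
    return code / 1000.0

def implied_prototype_gauge_mm(track_gauge_mm: float, mm_per_ft: float) -> float:
    return track_gauge_mm / mm_per_ft * 304.8   # scale feet -> prototype mm

print(code_height_in(100))                                # 0.1 in rail height
print(f"{implied_prototype_gauge_mm(16.5, 4.0):.0f} mm")  # ~1257 mm in British OO
print(f"{implied_prototype_gauge_mm(16.5, 3.5):.0f} mm")  # ~1437 mm in H0, near standard gauge
```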
Later, modellers became dissatisfied with inaccuracies and developed standards in which everything is correctly scaled. These are used by modellers but have not spread to mass-production because the inaccuracies and overscale properties of the commercial scales ensure reliable operation and allow for shortcuts necessary for cost control. The finescale standards include the UK's P4, and the even finer S4, which uses track dimensions scaled from the prototype. This 4 mm:1 ft modelling uses wheels 2 mm (0.079 in) or less wide running on track with a gauge of 18.83 mm (0.741 in). Check-rail and wing-rail clearances are similarly accurate.
A compromise between P4 and OO is "EM", which uses a gauge of 18.2 mm (0.717 in) with more generous tolerances than P4 for check clearances. It gives a better appearance than OO, though pointwork is not as close to reality as P4, and it suits many modellers for whom time and improved appearance are important. There is a small following of finescale OO, which uses the same 16.5 mm gauge as OO but with the finer-scale wheels and smaller clearances used with EM; it is essentially "EM minus 1.7 mm".
== Modules ==
Many groups build modules, which are sections of layouts, and can be joined together to form a larger layout, for meetings or for special occasions. For each kind of module system, there is an interface standard, so that modules made by different participants may be connected, even if they have never been connected before. Many of these module types are listed in the Layout standards organizations section of this article.
== Couplers and connectors ==
In addition to different scales, there are also different types of couplers for connecting cars, which are not compatible with each other.
In HO, the Americans standardized on horn-hook, or X2F, couplers. Horn-hook couplers have largely given way to a design known as the working knuckle coupler, popularized by the Kadee Quality Products Co. and subsequently emulated by a number of other manufacturers. Working knuckle couplers are a closer approximation to the "automatic" couplers used on the prototype in North America and elsewhere. Also in HO, the European manufacturers have standardized, but on a coupler mount rather than a coupler: many varieties of coupler can be plugged into (and out of) the NEM coupler box. None of the popular couplers has any resemblance to the prototype three-link chains generally used on the continent.
For British modellers, whose most popular scale is OO, the normal coupler is the tension-lock coupler, which, again, makes no pretence of replicating the usual prototype three-link chain couplers. Bachmann and, more recently, Hornby have begun to offer models fitted with NEM coupler pockets. This theoretically enables modellers of British railways to substitute any other NEM362 coupler, though many Bachmann models place the coupler pocket at the wrong height. A fairly common alternative is to use representations of chain couplings as found on the prototype, though these require large-radius curves to avoid derailments.
Other scales have similar ranges of non-compatible couplers available. In all scales couplers can be exchanged, with varying degrees of difficulty.
== Landscaping ==
Some modellers pay attention to landscaping their layout, creating a fantasy world or modelling an actual location, often historic. Landscaping is termed "scenery building" or "scenicking".
Constructing scenery involves preparing a sub-terrain using a wide variety of building materials, including (but not limited to) screen wire, a lattice of cardboard strips, or carved stacks of expanded polystyrene (styrofoam) sheets. A scenery base is applied over the sub-terrain; typical bases include casting plaster, plaster of Paris, hybrid paper pulp (papier-mâché) or a lightweight foam/fiberglass/bubblewrap composite, as in Geodesic Foam Scenery.
The scenery base is covered with substitutes for ground cover, such as static grass or scatter. Scatter, or flock, is a substance used in the building of dioramas and model railways to simulate the effect of grass, poppies, fireweed, track ballast and other scenic ground cover. Scatter used to simulate track ballast is usually fine-grained ground granite, while scatter that simulates coloured grass is usually tinted sawdust, wood chips or ground foam. Foam, natural lichen or commercial scatter materials can be used to simulate shrubbery. An alternative to scatter for grass is static grass, which uses static electricity to make the simulated grass stand upright.
Buildings and structures can be purchased as kits, or built from cardboard, balsa wood, basswood, other soft woods, paper, or polystyrene or other plastic. Trees can be fabricated from materials such as Western sagebrush, candytuft, and caspia, to which adhesive and model foliage are applied; or they can be bought ready-made from specialist manufacturers. Water can be simulated using polyester casting resin, polyurethane, or rippled glass. Rocks can be cast in plaster or in plastic with a foam backing. Castings can be painted with stains to give colouring and shadows.
== Weathering ==
Weathering refers to making a model look used and exposed to weather by simulating the dirt and wear on real vehicles, structures and equipment. Most models come out of the box looking new, because unweathered finishes are easier to produce. Also, the wear a freight car or building undergoes depends not only on its age but on where it is used: rail cars in cities accumulate grime from building and automobile exhaust and graffiti, while cars in deserts may be subjected to sandstorms that etch or strip paint. A factory-weathered model would therefore not fit as many layouts as a pristine model, which can be weathered by its purchaser.
There are many weathering techniques, including but not limited to painting (by drybrush or airbrush), sanding, breaking, and even the use of chemicals to cause corrosion. Some processes become very creative, depending on the skill of the modeller. For instance, several steps may be taken to create a rusting effect, to ensure not only proper colouring but also proper texture and lustre.
Weathering purchased models is common; at the least, weathering aims to reduce the plastic-like finish of scale models. The simulation of grime, rust, dirt, and wear adds realism. Some modellers simulate fuel stains on tanks, or corrosion on battery boxes. In some cases, evidence of accidents or repairs may be added, such as dents or freshly painted replacement parts, and weathered models can be nearly indistinguishable from their prototypes when photographed appropriately.
== Methods of power ==
Static diorama models, or "push along" scale models, are a branch of model railways for unpowered locomotives; examples are Lone Star and Airfix models. Powered model railways are now generally operated by low-voltage direct current (DC) electricity supplied via the tracks, but there are exceptions, such as Märklin and Lionel Corporation, which use alternating current (AC). Modern Digital Command Control (DCC) systems use alternating current. Other locomotives, particularly large models, can use steam. Steam- and clockwork-driven engines are still sought by collectors.
=== Clockwork ===
Most early models for the toy market were powered by clockwork and controlled by levers on the locomotive. Although this made control crude, the models were large and robust enough that handling the controls was practical. Various manufacturers introduced slowing and stopping tracks that could trigger levers on the locomotive and allow station stops.
=== Electricity ===
Three-rail
The first miniature electric trains used a three-rail track, with non-insulated wheels resting on the two outer rails, which were in contact with the metal sleepers. The insulated central rail supplied the current to a skid under the locomotive, and the outer rails provided the return path. The current was alternating, supplied by the domestic mains and lowered by various means (a transformer or series resistances). This kind of track made sense at the time, as models were metal and conductive: modern plastics were not available and insulation was a problem. In addition, the notion of accurate models had yet to evolve, and toy trains and track were crude tinplate.
In 1938, Hornby, a manufacturer of ‘O’ scale model trains in the UK, launched a range of ‘OO’ scale electric trains (Hornby Dublo) with 1/76 scale rolling stock using 1/87 scale 16.5 mm wide track with a third centre rail. The power supply was 12 V DC and the track was equipped with an electrically insulated central rail and two non-insulated running rails. In 1959 Hornby abandoned its three-rail track in favour of a two-rail track for its ‘OO’ scale electric trains.
Other systems, such as Märklin, have instead used fine metal studs since 1953 to replace the central rail, allowing existing three-rail models to run on more realistic track.
A variation on the three-rail system, introduced early on by Trix in 1935, used a track with three insulated rails that allowed two trains to be independently controlled on the same track; the use of a catenary made it possible to control three trains independently. The center rail served as the common return for the current. That system, known as Trix Express, or Trix Twin in the UK, first used alternating current and then direct current after 1953, and was abandoned in 1997 when Märklin took over Trix. This three-rail system enabled DC and AC locomotives to run on the same track.
Two-rail
When DC motors with more powerful magnets began to be used for model trains in the 1950s, two-rail track came to be preferred, because at the same time accuracy was becoming important. The two rails are insulated from each other and used with wheels insulated from each other on the same axle. In the direction of travel, the right-hand rail carries the positive potential and the left-hand rail the negative. Without insulated sections and suitable cabling, this system excludes certain track layouts, such as the reversing loop, the reversing triangle and the diagonal in a circle.
Overhead line
Where the model is of an electric locomotive, it may be supplied by overhead lines, like the full-size locomotive. Before Digital Command Control became available, this was one way of controlling two trains separately on the same track. The electric-outline model would be supplied by the overhead wire and the other model could be supplied by one of the running rails. The other running rail would act as a common return.
Battery
Early electric trains ran on trackside batteries because few homes in the late 19th century and early 20th century had electricity. Today, inexpensive train sets running on batteries are again common but regarded as toys and seldom used by hobbyists. Batteries located in the model often power garden railway and larger scale systems because of the difficulty in obtaining reliable power supply through the outdoor rails. The high-power consumption and current draw of large-scale garden models is more easily and safely met with internal rechargeable batteries. Most large-scale battery-powered models use radio control.
=== Live steam ===
Engines powered by live steam are often built in large outdoor gauges of 5 inches (130 mm) and 7+1⁄2 inches (190 mm), are also available in Gauge 1, G scale and 16 mm scale, and can be found in O and OO/HO. Hornby Railways produce live steam locomotives in OO, based on designs first arrived at by an amateur modeller. Other modellers have built live steam models in HO/OO, OO9 and N, and there is one in Z in Australia.
=== Internal combustion ===
Occasionally, gasoline-electric models, patterned after real diesel-electric locomotives, come up among hobbyists, and companies like Pilgrim Locomotive Works have sold such locomotives. Large-scale petrol-mechanical and petrol-hydraulic models are available but unusual, and pricier than the electrically powered versions.
== Scratch building ==
Modern manufacturing techniques can allow mass-produced models to cost-effectively achieve a high degree of precision and realism. In the past this was not the case and scratch building was very common. Simple models are made using cardboard engineering techniques. More sophisticated models can be made using a combination of etched sheets of brass and low temperature castings. Parts that need machining, such as wheels and couplings are purchased.
Etched kits are still popular, still accompanied by low temperature castings. These kits produce models that are not covered by the major manufacturers or in scales that are not in mass production. Laser machining techniques have extended this ability to thicker materials for scale steam and other locomotive types. Scratch builders may also make silicone rubber moulds of the parts they create, and cast them in various plastic resins (see Resin casting), or plasters. This may be done to save duplication of effort, or to sell to others. Resin "craftsman kits" are also available for a wide range of prototypes.
== Control ==
The first clockwork (spring-drive) and live steam locomotives ran until out of power, with no way for the operator to stop and restart the locomotive or vary its speed. The advent of electric trains, which appeared commercially in the 1890s, allowed control of the speed by varying the current or voltage. As trains began to be powered by transformers and rectifiers more sophisticated throttles appeared, and soon trains powered by AC contained mechanisms to change direction or go into neutral gear when the operator cycled the power. Trains powered by DC can change direction by reversing polarity.
Electricity permits control by dividing the layout into isolated blocks, where trains can be slowed or stopped by lowering or cutting power to a block. Dividing a layout into blocks permits operators to run more than one train with less risk of a fast train catching and hitting a slow train. Blocks can also trigger signals or other accessories, adding realism or whimsy. Three-rail systems often insulate one of the common rails on a section of track, and use a passing train to complete the circuit and activate an accessory.
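A toy sketch of the block idea, with invented names (real block wiring is done with switches and relays rather than software):

```python
# Each block has its own throttle setting; a train responds only to the
# voltage of the block it currently occupies.

class Block:
    def __init__(self, name: str):
        self.name = name
        self.volts = 0.0   # 0 = stopped; on DC, polarity sets direction

blocks = {n: Block(n) for n in ("station", "main", "siding")}
blocks["main"].volts = 9.0      # a train runs on the main line
blocks["station"].volts = 0.0   # a second train waits at the station

for b in blocks.values():
    print(b.name, b.volts)
```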
Many layout builders are choosing digital operation of their layouts rather than the more traditional DC design. Of the several competing systems, the command system offered by the majority of manufacturers in 2020 was a variant of Digital Command Control (DCC). The advantages of DCC are that track voltage is constant (usually in the range of 20 volts AC) and the command throttle sends a signal to small circuit cards, or decoders, hidden inside the piece of equipment, which control several functions of an individual locomotive, including speed, direction of travel, lights, smoke and various sound effects. This allows more realistic operation, in that the modeller can independently operate several locomotives on the same stretch of track. Several manufacturers also offer software for computer control of DCC layouts.
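As a hedged sketch of the packet idea behind DCC: an NMRA baseline packet carries an address byte and an instruction byte followed by an error byte that is the XOR of the others. The bit-level track waveform, and everything beyond baseline packets, is omitted here, and the address and instruction values are examples only:

```python
# Minimal illustration of a DCC baseline packet's error-check byte.

def dcc_baseline_packet(address: int, instruction: int) -> bytes:
    error = address ^ instruction          # error byte = XOR of the data bytes
    return bytes([address, instruction, error])

pkt = dcc_baseline_packet(address=3, instruction=0b01100110)
print(pkt.hex())  # a speed/direction instruction for the decoder at address 3
```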
In large scales, particularly for garden railways, radio control and DCC in the garden have become popular.
== Model railway manufacturers ==
== Magazines ==
== Layout standards organizations ==
Several organizations exist to set standards for connecting individual layout sections (commonly called "modules"). This allows several people or groups (or hundreds, given enough space and power) to bring their own modules together, connect them with as little trouble as possible, and operate their trains. Despite different design and operating philosophies, the organizations share similar goals: standardized module ends to facilitate connection with other modules built to the same specifications, and standardized electrics, equipment, and curve radii.
ausTRAK, N Scale, two-track main with hidden third track (can be used as NTRAK's third main, as a return/continuous loop, or hidden yard/siding/on-line storage). Australian scenery and rolling stock modelled in Standard Gauge.
FREMO, a Europe-based organisation focusing on a single-track line in HO scale. It also sets standards for N scale modules. Its standards are considerably more flexible in module shape than NTRAK's, and have expanded over the years to accommodate several scenery variations.
Free-mo, originally developed by the San Luis Obispo Model Railroad Club (California) in 1995, has grown across North America and is expanding across the world. The objective of the Free-mo Standard is to provide a platform for prototype modelling in a flexible, modular environment. Free-mo modules not only provide track to operate realistic models, but also emphasize realistic, plausible scenery; realistic, reliable trackwork; and operations. Free-mo was designed to go beyond the traditional closed-loop set-up in creating a truly universal "free-form" modular design that is operations-oriented and heavily influenced by prototype railroading. This is emphasized in the Free-mo motto, "More than Just a Standard".
MOROP, European Union of Model Railroad and Railroad Fans, the European standardization organisation.
NEM, The German modelling standards organisation.
NMRA, National Model Railroad Association, the largest organization devoted to the development, promotion, and enjoyment of the hobby of model railroading.
N-orma, Polish N-scale (1:160) modules organization.
NTRAK, standardized three-track (heavy operation) mainline with several optional branchlines. Focuses on standard gauge, but also has specifications for narrow gauge. Due to its popularity, it can be found in regional variations, most notably the imperial-to-metric measurement conversions. Tends to be used more for "unattended display" than "operation".
oNeTRAK, operationally similar to FREMO, standardises around a single-track mainline, with modules of varying sizes and shapes. Designed with the existing NTRAK spec in mind, is fully compatible with such modules.
Sipping and Switching Society of NC, an association of individuals that has developed a system of HO modules featuring lightweight waffle construction from 5 mm lauan plywood underlayment, with an interface that uses a metal template to locate 1-inch (25 mm) pegs in matching 1-inch holes in the adjoining module. The rails are positioned in an exact relationship with the pegs and run to the ends of the modules, so adjacent modules need no joiner track; instead, accurate rail placement lets trains pass from one section to another. This style of module allows very quick set-up compared with module systems that use joiner tracks.
sTTandard, Polish TT-scale (1:120) modules organization.
T-TRAK, a modular system that uses table-top modules 2+3⁄4 inches (70 mm) high, which sit on tables that are not part of the modules but are usually available at meeting sites. It uses a specific track interface whose joiners hold the modules together, enabling quick setting up and taking down.
Z-Bend Track, uses a double-track mainline running down both sides of a module. Modules can be of any length or width in the middle and any overall shape. The "standard" called Z-Bend Track applies only to the last 5 inches (130 mm) of the module's interface to other modules, the electrical interface and the module height.
== In popular culture ==
In the 1990 film Back to the Future Part III, Doc Brown builds a "crude" electrified model railway, "not to scale", to demonstrate his time travel experiment to Marty in 1885.
In Hinterland Season 1, Episode 4 ("The Girl in the Water"), a semi-recluse who lives and works at Borth railway station maintains a model train set with custom made components; the set and certain components contribute to a death as well as provide important clues to a murder investigation. During the investigation, DCI Tom Mathias reveals that his late brother was a model train aficionado.
In The Sopranos, Bobby Baccalieri is a model train aficionado. He is shown wearing an engineer's cap while playing with model trains in his garage.
In The Simpsons, Reverend Lovejoy is often depicted playing with his model trains when not on ecumenical duty, often while wearing a conductor's uniform and hat. His character may be a nod to the real life Reverend W. Awdry.
In Trailer Park Boys, Season 7 Episode 4, "Friends of the Dead", heavy metal singer Sebastian Bach is a featured guest at the Bangor model train convention and is introduced as "our Competitive Model Train World Champion". He expresses a dislike of alleged rival model train competitor Patrick Swayze. Attendees at the family event are shocked by Sebastian's use of obscenities as he attempts to work the crowd in a rock concert fashion shouting, "I know, I just know, that there are some great f**king trains here in Bangor!"
In That '90s Show, Red Forman runs a model railway in his garage after retiring.
== See also ==
Displays and famous layouts
Groups dedicated to railway modelling
== References ==
== External links ==
The National Model Railroad Association, USA – the largest model railroad organization in the world
The Model Railway Club, UK – the oldest known society in the world – established 1910
Associazione Ferrovie Siciliane – AFS (Messina – IT) – one of the most important groups of rail enthusiasts and railway modellers active in Sicily and throughout Italy, founded in 2006
A model is an informative representation of an object, person, or system. The term originally denoted the plans of a building in late 16th-century English, and derived via French and Italian ultimately from Latin modulus, 'a measure'.
Models can be divided into physical models (e.g. a ship model or a fashion model) and abstract models (e.g. a set of mathematical equations describing the workings of the atmosphere for the purpose of weather forecasting). Abstract or conceptual models are central to philosophy of science.
In scholarly research and applied science, a model should not be confused with a theory: while a model seeks only to represent reality with the purpose of better understanding or predicting the world, a theory is more ambitious in that it claims to be an explanation of reality.
== Types of model ==
=== Model in specific contexts ===
As a noun, model has specific meanings in certain fields, derived from its original meaning of "structural design or layout":
Model (art), a person posing for an artist, e.g. a 15th-century criminal representing the biblical Judas in Leonardo da Vinci's painting The Last Supper
Model (person), a person who serves as a template for others to copy, as in a role model, often in the context of advertising commercial products; e.g. the first fashion model, Marie Vernet Worth in 1853, wife of designer Charles Frederick Worth.
Model (product), a particular design of a product as displayed in a catalogue or show room (e.g. Ford Model T, an early car model)
Model (organism), a non-human species that is studied to understand biological phenomena in other organisms, e.g. a guinea pig starved of vitamin C to study scurvy, an experiment that would be immoral to conduct on a person
Model (mimicry), a species that is mimicked by another species
Model (logic), a structure (a set of items, such as the natural numbers 1, 2, 3, ..., along with mathematical operations such as addition and multiplication, and relations such as <) that satisfies a given system of axioms (basic truisms), i.e. that satisfies the statements of a given theory
Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software
Model (MVC), the information-representing internal component of a software, as distinct from its user interface
=== Physical model ===
A physical model (most commonly referred to simply as a model but in this context distinguished from a conceptual model) is a smaller or larger physical representation of an object, person or system. The object being modelled may be small (e.g., an atom) or large (e.g., the Solar System) or life-size (e.g., a fashion model displaying clothes for similarly-built potential customers).
The geometry of the model and the object it represents are often similar in the sense that one is a rescaling of the other. However, in many cases the similarity is only approximate or even intentionally distorted. Sometimes the distortion is systematic, e.g., a fixed scale horizontally and a larger fixed scale vertically when modelling topography to enhance a region's mountains.
An architectural model permits visualization of internal relationships within the structure or external relationships of the structure to the environment. Another use is as a toy.
Instrumented physical models are an effective way of investigating fluid flows for engineering design. Physical models are often coupled with computational fluid dynamics models to optimize the design of equipment and processes. This includes external flow such as around buildings, vehicles, people, or hydraulic structures. Wind tunnel and water tunnel testing is often used for these design efforts. Instrumented physical models can also examine internal flows, for the design of ductwork systems, pollution control equipment, food processing machines, and mixing vessels. Transparent flow models are used in this case to observe the detailed flow phenomena. These models are scaled in terms of both geometry and important forces, for example, using Froude number or Reynolds number scaling (see Similitude). In the pre-computer era, the UK economy was modelled with the hydraulic model MONIAC, to predict for example the effect of tax rises on employment.
=== Conceptual model ===
A conceptual model is a theoretical representation of a system, e.g. a set of mathematical equations attempting to describe the workings of the atmosphere for the purpose of weather forecasting. It consists of concepts used to help understand or simulate a subject the model represents.
Abstract or conceptual models are central to philosophy of science, as almost every scientific theory effectively embeds some kind of model of the physical or human sphere. In some sense, a physical model "is always the reification of some conceptual model; the conceptual model is conceived ahead as the blueprint of the physical one", which is then constructed as conceived. Thus, the term refers to models that are formed after a conceptualization or generalization process.
=== Examples ===
Conceptual model (computer science), an agreed representation of entities and their relationships, to assist in developing software
Economic model, a theoretical construct representing economic processes
Language model, a probabilistic model of a natural language, used for speech recognition, language generation, and information retrieval
Large language models are artificial neural networks used for generative artificial intelligence (AI), e.g. ChatGPT
Mathematical model, a description of a system using mathematical concepts and language
Statistical model, a mathematical model that usually specifies the relationship between one or more random variables and other non-random variables
Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software
Medical model, a proposed "set of procedures in which all doctors are trained"
Mental model, in psychology, an internal representation of external reality
Model (logic), a set along with a collection of finitary operations, and relations that are defined on it, satisfying a given collection of axioms
Model (MVC), information-representing component of a software, distinct from the user interface (the "view"), both linked by the "controller" component, in the context of the model–view–controller software design
Model act, a law drafted centrally to be disseminated and proposed for enactment in multiple independent legislatures
Standard model (disambiguation)
== Properties of models, according to general model theory ==
According to Herbert Stachowiak, a model is characterized by at least three properties:
1. Mapping
A model always is a model of something—it is an image or representation of some natural or artificial, existing or imagined original, where this original itself could be a model.
2. Reduction
In general, a model will not include all attributes that describe the original but only those that appear relevant to the model's creator or user.
3. Pragmatism
A model does not relate unambiguously to its original. It is intended to work as a replacement for the original
a) for certain subjects (for whom?)
b) within a certain time range (when?)
c) restricted to certain conceptual or physical actions (what for?).
For example, a street map is a model of the actual streets in a city (mapping), showing the course of the streets while leaving out, say, traffic signs and road markings (reduction), made for pedestrians and vehicle drivers for the purpose of finding one's way in the city (pragmatism).
Additional properties have been proposed, like extension and distortion as well as validity. The American philosopher Michael Weisberg differentiates between concrete and mathematical models and proposes computer simulations (computational models) as their own class of models.
== Uses of models ==
According to Bruce Edmonds, there are at least 5 general uses for models:
Prediction: reliably anticipating unknown data, including data within the domain of the training data (interpolation), and outside the domain (extrapolation)
Explanation: establishing plausible chains of causality by proposing mechanisms that can explain patterns seen in data
Theoretical exposition: discovering or proposing new hypotheses, or refuting existing hypotheses about the behaviour of the system being modelled
Description: representing important aspects of the system being modelled
Illustration: communicating an idea or explanation
== See also ==
== References ==
== External links ==
Media related to Physical models at Wikimedia Commons
A large language model (LLM) is a machine learning model designed for natural language processing tasks, especially language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text.
The largest and most capable LLMs are generative pretrained transformers (GPTs), which are largely used in generative chatbots such as ChatGPT or Gemini. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data on which they are trained.
== History ==
Before 2017, there were a few language models that were large compared to the capacities then available. In the 1990s, the IBM alignment models pioneered statistical language modelling. A smoothed n-gram model in 2001, trained on 300 million words, achieved state-of-the-art perplexity at the time. In the 2000s, as Internet use became prevalent, some researchers constructed Internet-scale language datasets ("web as corpus"), upon which they trained statistical language models. By 2009, statistical language models dominated over symbolic language models in most language processing tasks, because they can usefully ingest large datasets.
After neural networks became dominant in image processing around 2012, they were applied to language modelling as well. Google converted its translation service to Neural Machine Translation in 2016. Because this preceded transformers, it was done by seq2seq deep LSTM networks.
At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". This paper's goal was to improve upon 2014 seq2seq technology, and was based mainly on the attention mechanism developed by Bahdanau et al. in 2014. The following year in 2018, BERT was introduced and quickly became "ubiquitous". Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting.
Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention because OpenAI at first deemed it too powerful to release publicly, out of fear of malicious use. GPT-3 in 2020 went a step further and as of 2025 is available only via API with no offering of downloading the model to execute locally. But it was the 2022 consumer-facing browser-based ChatGPT that captured the imaginations of the general population and caused some media hype and online buzz. The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities. OpenAI did not reveal the high-level architecture and the number of parameters of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work. In 2024 OpenAI released the reasoning model OpenAI o1, which generates long chains of thought before returning a final answer.
Competing language models have for the most part been attempting to equal the GPT series, at least in terms of number of parameters.
Since 2022, source-available models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on the field of use. Mistral AI's models Mistral 7B and Mixtral 8x7b have the more permissive Apache License. In January 2025, DeepSeek released DeepSeek R1, a 671-billion-parameter open-weight model that performs comparably to OpenAI o1 but at a much lower cost.
Since 2023, many LLMs have been trained to be multimodal, having the ability to also process or generate other types of data, such as images or audio. These LLMs are also called large multimodal models (LMMs).
As of 2024, the largest and most capable models are all based on the transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model).
== Dataset preprocessing ==
=== Tokenization ===
As machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated to the integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as [MASK] for masked-out token (as used in BERT), and [UNK] ("unknown") for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, "Ġ" denotes a preceding whitespace in RoBERTa and GPT. "##" denotes continuation of a preceding word in BERT.
For example, the BPE tokenizer used by GPT-3 (Legacy) maps texts to series of numerical "tokens".
Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. How many tokens are, on average, needed per word depends on the language of the dataset.
==== BPE ====
As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. an initial set of uni-grams). Successively, the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-grams, until a vocabulary of prescribed size is obtained (in the case of GPT-3, the size is 50257). After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial set of uni-grams.
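The merge loop can be illustrated with a toy trainer that starts from single characters and repeatedly fuses the most frequent adjacent pair; the input string is a textbook example, and real tokenizers operate on bytes with vocabularies tens of thousands of entries large.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def bpe_train(text, num_merges):
    """Toy byte-pair-encoding trainer: begin with single characters
    (uni-grams) and repeatedly merge the most frequent adjacent pair."""
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        merges.append(pair)
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
                merged.append(tokens[i] + tokens[i + 1])  # fuse the pair
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

tokens, merges = bpe_train("aaabdaaabac", 3)
print(tokens)   # ['aaab', 'd', 'aaab', 'a', 'c']
print(merges)   # learned merge rules, applied in order at tokenization time
```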
==== Problems ====
A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. However, an average word in another language encoded by such an English-optimized tokenizer is split into a suboptimal number of tokens. The GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widespread languages such as Portuguese and German have "a premium of 50%" compared to English.
Greedy tokenization also causes subtle problems with text completion.
=== Dataset cleaning ===
In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data. Cleaned datasets can increase training efficiency and lead to improved downstream performance. A trained LLM can be used to clean datasets for training a further LLM.
With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it).
=== Synthetic data ===
Training the largest language models might require more linguistic data than is naturally available, or the naturally occurring data may be of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM.
== Training and architecture ==
An LLM is a type of foundation model (large X model) trained on language. LLMs can be trained in different ways. In particular, GPT models are first pretrained to predict the next word on a large amount of data, before being fine-tuned.
=== Reinforcement learning from human feedback ===
Reinforcement learning from human feedback (RLHF) through algorithms, such as proximal policy optimization, is used to further fine-tune a model based on a dataset of human preferences.
=== Instruction tuning ===
Using "self-instruct" approaches, LLMs have been able to bootstrap correct responses, replacing any naive responses, starting from human-generated corrections of a few cases. For example, in the instruction "Write an essay about the main themes represented in Hamlet," an initial naive completion might be "If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay," based on the frequency of this textual sequence in the corpus.
=== Mixture of experts ===
The largest LLMs may be too expensive to train and use directly. For such models, mixture of experts (MoE) can be applied, a line of research pursued by Google researchers since 2017 to train models reaching up to 1 trillion parameters.
=== Prompt engineering, attention mechanism, and context window ===
Most results previously achievable only by (costly) fine-tuning can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window).
In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M-parameter) GPT-2 model has twelve attention heads and a context window of only 1k tokens. The medium version, with 345M parameters, contains 24 layers, each with 12 attention heads. For training with gradient descent, a batch size of 512 was used.
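To make the notion of "soft weights" concrete, the sketch below computes the attention distribution of a single head for one query token: scaled dot products with every key in the context, normalized by a softmax. The dimensions are illustrative, not GPT-2's actual configuration.

```python
import numpy as np

def attention_weights(q: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Soft weights of one attention head for a single query token:
    scaled dot products against every key, pushed through a softmax."""
    scores = K @ q / np.sqrt(q.shape[0])
    e = np.exp(scores - scores.max())   # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
d = 64                        # head dimension (illustrative)
q = rng.normal(size=d)        # query embedding of one token
K = rng.normal(size=(10, d))  # key embeddings of 10 tokens in the context

w = attention_weights(q, K)
print(w.round(3), w.sum())    # ten soft weights summing to 1
```

Each head produces its own such distribution, and the weighted sums over value vectors from all heads are combined to form the token's updated representation.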
The largest models, such as Google's Gemini 1.5, presented in February 2024, can have a context window of up to 1 million tokens (a context window of 10 million was also "successfully tested"). Other models with large context windows include Anthropic's Claude 2.1, with a context window of up to 200k tokens. Note that this maximum refers to the number of input tokens; the maximum number of output tokens differs from the input and is often smaller. For example, the GPT-4 Turbo model has a maximum output of 4096 tokens.
The length of a conversation that the model can take into account when generating its next answer is likewise limited by the size of the context window. If a conversation, for example with ChatGPT, grows longer than the context window, only the parts inside the window are taken into account when generating the next answer, or the model needs to apply some algorithm to summarize the too-distant parts of the conversation.
The shortcomings of making a context window larger include higher computational cost and possibly diluting the focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them is a matter of experimentation and domain-specific considerations.
A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset. It can be either
autoregressive (i.e. predicting how the segment continues, as GPTs do): for example given a segment "I like to eat", the model predicts "ice cream", or "sushi".
"masked" (i.e. filling in the parts missing from the segment, the way "BERT" does it): for example, given a segment "I like to [__] [__] cream", the model predicts that "eat" and "ice" are missing.
Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus. During training, regularization loss is also used to stabilize training. However regularization loss is usually not used during testing and evaluation.
=== Infrastructure ===
Substantial infrastructure is necessary for training the largest models.
== Training cost ==
The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve. GPT-1 of 2018 is usually considered the first LLM, even though it has only 117 million parameters. The tendency towards larger models is visible in the list of large language models.
As technology advanced, large sums have been invested in increasingly large models. For example, training of the GPT-2 (i.e. a 1.5-billion-parameters model) in 2019 cost $50,000, while training of the PaLM (i.e. a 540-billion-parameters model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million.
For Transformer-based LLMs, training cost is much higher than inference cost. It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token.
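As a rough worked example of this rule of thumb, with an illustrative GPT-2-sized model and a hypothetical token budget:

```python
# Rule of thumb from the text: ~6 FLOPs per parameter per training token,
# ~1-2 FLOPs per parameter per inference token.
params = 1.5e9   # a GPT-2-sized model (1.5 billion parameters)
tokens = 40e9    # hypothetical training-set size in tokens

train_flops = 6 * params * tokens    # ~3.6e20 FLOPs for the whole run
infer_flops_per_token = 2 * params   # ~3.0e9 FLOPs per generated token

print(f"training:  {train_flops:.1e} FLOPs")
print(f"inference: {infer_flops_per_token:.1e} FLOPs/token")
```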
== Tool use ==
Tool use is a mechanism that enables LLMs to interact with external systems, applications, or data sources. It allows, for example, fetching real-time information from an API or executing code. Generally, in order to get an LLM to use tools, one must fine-tune it for tool use. If the number of tools is finite, then fine-tuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be fine-tuned to be able to read API documentation and call APIs correctly.
Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents.
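The retrieval step can be sketched schematically as below; the embed function is a random stand-in for a real embedding model, and the documents and query are invented for illustration.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model: a pseudo-random unit
    vector derived from the text. A real system calls an encoder."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

documents = [
    "The Sharks lost the 2016 Stanley Cup finals to the Penguins.",
    "Byte-pair encoding merges frequent adjacent symbol pairs.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query by cosine
    similarity (all vectors are unit-length, so a dot product suffices)."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "Who won the 2016 Stanley Cup?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
# `prompt` would now be sent to the LLM for generation.
print(prompt)
```

In production systems the document vectors are precomputed and held in a vector database rather than a NumPy array.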
== Agency ==
An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions, but can be transformed into one by integrating modules like profiling, memory, planning, and action.
The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment. The linguistic description of the environment given to the LLM planner can even be the LaTeX code of a paper describing the environment.
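The loop structure of such an agent can be sketched as follows; the llm and execute functions are placeholders standing in for a real model call and a real environment, and the action format is illustrative.

```python
def llm(transcript: str) -> str:
    """Placeholder for a real language-model call."""
    return "Thought: I should look this up.\nAction: search[2016 Stanley Cup]"

def execute(step: str) -> str:
    """Placeholder environment: run the action, return an observation."""
    return "Observation: the Penguins won in 2016."

def react_agent(goal: str, max_steps: int = 3) -> str:
    """Minimal ReAct-style loop: the growing transcript of thoughts,
    actions and observations is fed back to the model at each step."""
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        step = llm(transcript)        # model "thinks out loud", then acts
        transcript += step + "\n"
        if "Final Answer:" in step:   # model signals it is done
            break
        transcript += execute(step) + "\n"
    return transcript

print(react_agent("Who won the 2016 Stanley Cup?"))
```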
In the DEPS ("Describe, Explain, Plan and Select") method, an LLM is first connected to the visual world via image descriptions, then it is prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and environmental feedback it receives.
The Reflexion method constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are given to the agent in the subsequent episodes.
Monte Carlo tree search can use an LLM as rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as world model.
For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent. Alternatively, it can propose increasingly difficult tasks for curriculum learning. Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning.
LLM-powered agents can keep a long-term memory of their previous contexts, and the memory can be retrieved in the same way as in retrieval-augmented generation. Multiple such agents can interact socially.
== Compression ==
Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics.
Post-training quantization aims to decrease the space requirement by lowering the precision of the parameters of a trained model, while preserving most of its performance. The simplest form of quantization simply truncates all numbers to a given number of bits. It can be improved by using a different quantization codebook per layer. Further improvement can be achieved by applying different precisions to different parameters, with higher precision for particularly important parameters ("outlier weights"). See the guide by Maarten Grootendorst for a visual depiction.
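As an illustration of the simplest form described above, the sketch below performs symmetric round-to-nearest quantization to 8-bit integers with a single per-tensor scale; production schemes refine this with per-layer or per-channel codebooks and special handling of outlier weights.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric round-to-nearest quantization: map floats in
    [-max|w|, +max|w|] onto the integers -127..127, keeping one
    float scale per tensor for dequantization."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=8).astype(np.float32)
q, s = quantize_int8(w)
print(w)
print(dequantize(q, s))  # close to w, at a quarter of float32's storage
```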
While quantized models are typically frozen, and only pre-quantized models are fine-tuned, quantized models can still be fine-tuned.
== Multimodality ==
Multimodality means having multiple modalities, where a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc. For example, the Google PaLM model was fine-tuned into a multimodal model and applied to robotic control. LLaMA models have also been made multimodal using the tokenization method, to allow image inputs and video inputs. GPT-4o can process and generate text, audio and images. Such models are sometimes called large multimodal models (LMMs).
A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder E. Make a small multilayer perceptron f such that, for any image y, the post-processed vector f(E(y)) has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability. The model Flamingo demonstrated in 2022 the effectiveness of the tokenization method, fine-tuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch.
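A toy version of this construction, with randomly initialized stand-ins for the trained encoder E and the projector f, might look like this; all dimensions are illustrative.

```python
import numpy as np

d_img, d_model = 512, 768   # encoder output and LLM token dimensions (illustrative)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.02, size=(d_img, 1024))
W2 = rng.normal(scale=0.02, size=(1024, d_model))

def image_encoder(image) -> np.ndarray:
    """Stand-in for a trained image encoder E (e.g. a vision transformer)."""
    return rng.normal(size=d_img)

def project(v: np.ndarray) -> np.ndarray:
    """The small MLP f: maps E(y) into the LLM's token-embedding space."""
    return np.maximum(v @ W1, 0) @ W2   # one hidden layer with ReLU

image_token = project(image_encoder("photo.png"))
# image_token has shape (d_model,) and can be interleaved with ordinary
# text-token embeddings in the LLM's input sequence.
print(image_token.shape)
```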
== Reasoning ==
In late 2024, a new direction emerged in LLM development with models specifically designed for complex reasoning tasks. These "reasoning models" were trained to spend more time generating step-by-step solutions before providing final answers, similar to human problem-solving processes.
OpenAI introduced this trend with their o1 model in September 2024, followed by o3 in December 2024. These models showed significant improvements in mathematics, science, and coding tasks compared to traditional LLMs. For example, on International Mathematics Olympiad qualifying exam problems, GPT-4o achieved 13% accuracy while o1 reached 83%.
In January 2025, the Chinese company DeepSeek released DeepSeek-R1, a 671-billion-parameter open-weight reasoning model that achieved comparable performance to OpenAI's o1 while being significantly more cost-effective to operate. Unlike proprietary models from OpenAI, DeepSeek-R1's open-weight nature allowed researchers to study and build upon the algorithm, though its training data remained private.
These reasoning models typically require more computational resources per query compared to traditional LLMs, as they perform more extensive processing to work through problems step-by-step. However, they have shown superior capabilities in domains requiring structured logical thinking, such as mathematics, scientific research, and computer programming.
Efforts to reduce or compensate for hallucinations have employed automated reasoning, RAG (retrieval-augmented generation), fine-tuning, and other methods.
== Properties ==
=== Scaling laws ===
The performance of an LLM after pretraining largely depends on:
the cost of pretraining C (the total amount of compute used),
the size of the artificial neural network itself, such as the number of parameters N (i.e. the number of neurons in its layers, and of the weights and biases between them), and
the size of its pretraining dataset, i.e. the number of tokens in the corpus, D.
"Scaling laws" are empirical statistical laws that predict LLM performance based on such factors. One particular scaling law ("Chinchilla scaling") for LLM autoregressively trained for one epoch, with a log-log learning rate schedule, states that:
C = C_0 · N · D
L = A / N^α + B / D^β + L_0
where the variables are:
C, the cost of training the model, in FLOPs;
N, the number of parameters in the model;
D, the number of tokens in the training set;
L, the average negative log-likelihood loss per token (nats/token) achieved by the trained LLM on the test dataset;
and the fitted statistical hyper-parameters are:
C_0 = 6, meaning that it costs 6 FLOPs per parameter to train on one token (note that training cost is much higher than inference cost, where it costs 1 to 2 FLOPs per parameter to infer on one token);
α = 0.34, β = 0.28, A = 406.4, B = 410.7, L_0 = 1.69.
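Plugging these fitted constants into the formula gives a quick way to estimate predicted loss and compute for a given budget; the parameter and token counts below are chosen for illustration (roughly Chinchilla-scale).

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted test loss (nats/token) from the fitted constants
    quoted above: L = A/N^alpha + B/D^beta + L0."""
    A, B, L0 = 406.4, 410.7, 1.69
    alpha, beta = 0.34, 0.28
    return A / n_params**alpha + B / n_tokens**beta + L0

N, D = 70e9, 1.4e12           # illustrative: 70B parameters, 1.4T tokens
print(chinchilla_loss(N, D))  # predicted loss, roughly 1.9 nats/token
print(6 * N * D)              # training compute C = C_0*N*D in FLOPs
```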
=== Emergent abilities ===
Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of performance achieved by smaller models. However, this linearity may be punctuated by "break(s)" in the scaling law, where the slope of the line changes abruptly, and where larger models acquire "emergent abilities". They arise from the complex interaction of the model's components and are not explicitly programmed or designed.
Furthermore, recent research has demonstrated that AI systems, including large language models, can employ heuristic reasoning akin to human cognition. They balance between exhaustive logical processing and the use of cognitive shortcuts (heuristics), adapting their reasoning strategies to optimize between accuracy and effort. This behavior aligns with principles of resource-rational human cognition, as discussed in classical theories of bounded rationality and dual-process theory.
One of the emergent abilities is in-context learning from example demonstrations. In-context learning is involved in tasks such as:
reported arithmetic
decoding the International Phonetic Alphabet
unscrambling a word's letters
disambiguating word-in-context datasets
converting spatial words and cardinal directions (for example, replying "northeast" in response to a 3x3 grid of 8 zeros and a 1 in the top-right), and color terms represented in text
chain-of-thought prompting: In a 2022 research paper, chain-of-thought prompting only improved the performance for models that had at least 62B parameters. Smaller models perform better when prompted to answer immediately, without chain of thought.
identifying offensive content in paragraphs of Hinglish (a combination of Hindi and English), and generating a similar English equivalent of Kiswahili proverbs.
Schaeffer et al. argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well.
Let x be the number of parameters and y be the performance of the model.
== Interpretation ==
Large language models by themselves are black boxes, and it is not clear how they can perform linguistic tasks. Similarly, it is unclear if or how LLMs should be viewed as models of the human brain and/or human mind.
Various techniques have been developed to enhance the transparency and interpretability of LLMs. Mechanistic interpretability aims to reverse-engineer LLMs by discovering symbolic algorithms that approximate the inference performed by an LLM. In recent years, sparse coding models such as sparse autoencoders, transcoders, and crosscoders have emerged as promising tools for identifying interpretable features.
=== Studying a replacement model ===
Transcoders, which are more interpretable than transformers, have been utilized to develop “replacement models.” In one such study involving the mechanistic interpretation of writing a rhyming poem by an LLM, it was shown that although they are believed to simply predict the next token, they can, in fact, plan ahead.
=== Explainability ===
A related concept is AI explainability, which focuses on understanding how an AI model arrives at a given result. Techniques such as partial dependency plots, SHAP (SHapley Additive exPlanations), and feature importance assessments allow researchers to visualize and understand the contributions of various input features to the model's predictions. These methods help ensure that AI models make decisions based on relevant and fair criteria, enhancing trust and accountability.
By integrating these techniques, researchers and practitioners can gain deeper insights into the operations of LLMs, fostering trust and facilitating the responsible deployment of these powerful models.
In another example, the authors trained small transformers on modular arithmetic addition. The resulting models were reverse-engineered, and it turned out they used discrete Fourier transform.
=== Understanding and intelligence ===
NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense". Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to "understand" certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system": "Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?" Ilya Sutskever argues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation. Some researchers characterize LLMs as "alien intelligence". For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."
In contrast, some skeptics of LLM understanding believe that existing LLMs are "simply remixing and recombining existing writing", a phenomenon known as stochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability. For example, GPT-4 has natural deficits in planning and in real-time learning. Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed "hallucination". Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input. Neuroscientist Terrence Sejnowski has argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate".
The matter of LLMs exhibiting intelligence or understanding has two main aspects – the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human-like language. These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented Neural Theory of Language (NTL) as a computational basis for using language as a model of learning tasks and understanding. The NTL Model outlines how specific neural structures of the human brain shape the nature of thought and language and in turn what are the computational properties of such neural systems that can be applied to model thought and language in a computer system. After a framework for modeling language in computer systems was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book titled The Language Myth: Why Language Is Not An Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human-like language.
== Evaluation ==
=== Perplexity ===
The canonical measure of the performance of an LLM is its perplexity on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log likelihood per token.
log(Perplexity) = −(1/N) · Σ_{i=1}^{N} log(Pr(token_i | context for token_i))
Here, N is the number of tokens in the text corpus, and "context for token i" depends on the specific type of LLM. If the LLM is autoregressive, then "context for token i" is the segment of text appearing before token i. If the LLM is masked, then "context for token i" is the segment of text surrounding token i.
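As a worked example of this definition, the snippet below computes perplexity from a handful of hypothetical per-token probabilities.

```python
import math

# Hypothetical probabilities a model assigned to each token of a text.
token_probs = [0.25, 0.5, 0.9, 0.1, 0.6]

n = len(token_probs)
avg_nll = -sum(math.log(p) for p in token_probs) / n  # nats per token
perplexity = math.exp(avg_nll)
print(perplexity)  # lower is better; a perfect model scores exactly 1.0
```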
Because language models may overfit to training data, models are usually evaluated by their perplexity on a test set. This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set.
==== Measures ====
In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon. This relationship is mathematically expressed as
Entropy = log₂(Perplexity).
Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization.
Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, due to the variance in tokenization methods across different Large Language Models (LLMs), BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word.
In the evaluation and comparison of language models, cross-entropy is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions.
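The conversions between these measures amount to simple arithmetic, as the hypothetical example below shows; the tokens-per-word ratio is an assumed value for the tokenizer in question.

```python
import math

perplexity = 8.0
bpt = math.log2(perplexity)   # bits per token: log2(perplexity) = 3.0
tokens_per_word = 1.3         # assumed average for this tokenizer
bpw = bpt * tokens_per_word   # bits per word = 3.9
print(bpt, bpw)
```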
Due to their ability to accurately predict the next token, LLMs are highly capable in lossless compression. A 2023 study by DeepMind showed that the model Chinchilla, despite being trained primarily on text, was able to compress ImageNet to 43% of its size, beating PNG with 58%.
=== Benchmarks ===
Benchmarks are used to evaluate LLM performance on specific tasks. Tests evaluate capabilities such as general knowledge, bias, commonsense reasoning, question answering, and mathematical problem-solving. Composite benchmarks examine multiple capabilities. Results are often sensitive to the prompting method.
A question answering benchmark is termed "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be combined with text that includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."). Otherwise, the task is considered "closed book", and the model must draw solely on its training. Examples include GLUE, SuperGLUE, MMLU, BIG-bench, HELM, and HLE (Humanity's Last Exam).
LLM bias may be assessed through benchmarks such as CrowS-Pairs (Crowdsourced Stereotype Pairs), Stereo Set, and Parity Benchmark.
Fact-checking and misinformation detection benchmarks are available. A 2023 study compared the fact-checking accuracy of LLMs including ChatGPT 3.5 and 4.0, Bard, and Bing AI against independent fact-checkers such as PolitiFact and Snopes. The results demonstrated moderate proficiency, with GPT-4 achieving the highest accuracy at 71%, lagging behind human fact-checkers.
An earlier standard approach tested models using a portion of the evaluation dataset. It has since become more common to evaluate a pre-trained model directly through prompting techniques. Researchers vary in how they formulate prompts for particular tasks, particularly with respect to the number of correct examples attached to the prompt (i.e. the value of n in n-shot prompting).
==== Datasets ====
Typical datasets consist of pairs of questions and correct answers, for example, ("Have the San Jose Sharks won the Stanley Cup?", "No"). Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD.
Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: "Alice was friends with Bob. Alice went to visit her friend, ____".
Datasets are of varying quality and may contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality.
==== Adversarial evaluations ====
LLMs' rapid improvement regularly obsoletes benchmarks, with the models exceeding the performance of human annotators. In addition, "shortcut learning" allows AIs to "cheat" on multiple-choice tests by using statistical correlations in superficial test question wording to guess the correct responses, without considering the specific question.
Some datasets are adversarial, focusing on problems that confound LLMs. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions that stump LLMs by mimicking falsehoods to which they were exposed during training. For example, an LLM may answer "No" to the question "Can you teach an old dog new tricks?" because of its exposure to the English idiom you can't teach an old dog new tricks, even though this is not literally true.
Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model. The resulting problems are trivial for humans but defeated LLMs. Sample questions:
We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man...
demonstrates how to increase efficient exercise work by running up and down balls.
moves all his arms and legs and builds up a lot of muscle.
then plays the ball and we see a graphics and hedge trimming demonstration.
performs sit ups while on the ball and talking.
BERT selects b) as the most likely completion, though the correct answer is d).
== Wider impact ==
In 2023, Nature Biomedical Engineering wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time." Goldman Sachs suggested in 2023 that generative language AI could increase global GDP by 7% in the next ten years, and could expose to automation 300 million jobs globally. Brinkmann et al. (2023) also argue that LLMs are transforming processes of cultural evolution by shaping processes of variation, transmission, and selection.
=== Memorization and copyright ===
Memorization is an emergent behavior in LLMs in which long strings of text are occasionally output verbatim from training data, contrary to typical behavior of traditional artificial neural nets. Evaluations of controlled LLM output measure the amount memorized from training data (focused on GPT-2-series models) as variously over 1% for exact duplicates or up to about 7%.
A 2023 study showed that when ChatGPT 3.5 turbo was prompted to repeat the same word indefinitely, after a few hundred repetitions it would start outputting excerpts from its training data.
=== Security ===
Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse. For example, the availability of large language models could reduce the skill-level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens.
The potential presence of "sleeper agents" within LLMs is another emerging security concern. These are hidden functionalities built into the model that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior and takes insecure actions.
LLM applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, a 2023 study proposed a method for circumventing LLM safety systems. In 2025, The American Sunlight Project, a non-profit, published a study showing evidence that the so-called Pravda network, a pro-Russia propaganda aggregator, was strategically placing web content through mass publication and duplication with the intention of biasing LLM outputs. The American Sunlight Project coined this technique "LLM grooming," and pointed to it as a new tool of weaponizing AI to spread disinformation and harmful content. Similarly, Yongge Wang illustrated in 2024 how a criminal could potentially bypass ChatGPT 4o's safety controls to obtain information on establishing a drug trafficking operation. External filters, circuit breakers and overrides have been proposed as solutions.
=== Algorithmic bias ===
While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups. Since English data is overrepresented in current large language models' training data, it may also downplay non-English views.
==== Stereotyping ====
AI models can reinforce a wide range of stereotypes, including those based on gender, ethnicity, age, nationality, religion, or occupation. This can lead to outputs that homogenize, or unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.
Notably, gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. Large language models often assign roles and characteristics based on traditional gender norms. For example, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men.
==== Selection bias ====
Selection bias refers to the inherent tendency of large language models to favor certain option identifiers irrespective of the actual content of the options. This bias primarily stems from token bias—that is, the model assigns a higher a priori probability to specific answer tokens (such as "A") when generating responses. As a result, when the ordering of options is altered (for example, by systematically moving the correct answer to different positions), the model's performance can fluctuate significantly. This phenomenon undermines the reliability of large language models in multiple-choice settings.
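One simple way to probe such bias is to re-ask the same question under every permutation of the answer options and measure how stable the model's accuracy is. In the sketch below, ask_model is a placeholder for a real model call that returns the chosen option text.

```python
import itertools

def measure_selection_bias(ask_model, question, options, answer):
    """Present the same options in every order and report the fraction
    of orderings under which the model still answers correctly.
    Assumes at most four options (labels A-D); 1.0 means the model's
    answer is invariant to option position."""
    orders = list(itertools.permutations(options))
    correct = 0
    for order in orders:
        labels = "ABCD"[: len(order)]
        prompt = question + "\n" + "\n".join(
            f"{label}. {option}" for label, option in zip(labels, order)
        )
        if ask_model(prompt) == answer:
            correct += 1
    return correct / len(orders)
```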
==== Political bias ====
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.
=== Energy demands ===
The energy demands of LLMs have grown along with their size and capabilities. Data centers that enable LLM training require substantial amounts of electricity. Much of that electricity is generated by non-renewable resources that create greenhouse gases and contribute to climate change. Nuclear power and geothermal energy are two options tech companies are exploring to meet the sizable energy demands of LLM training. The significant expense of investing in geothermal solutions has led to major shale producers like Chevron and Exxon Mobil advocating for tech companies to use electricity produced via natural gas to fuel their large energy demands.
== See also ==
Foundation models
List of large language models
List of chatbots
Language model benchmark
Reinforcement learning
Small language model
== References ==
== Further reading ==
Jurafsky, Dan; Martin, James H. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 3rd edition draft, 2023.
Zhao, Wayne Xin; et al. (2023). "A Survey of Large Language Models". arXiv:2303.18223 [cs.CL].
Kaddour, Jean; et al. (2023). "Challenges and Applications of Large Language Models". arXiv:2307.10169 [cs.CL].
Yin, Shukang; Fu, Chaoyou; Zhao, Sirui; Li, Ke; Sun, Xing; Xu, Tong; Chen, Enhong (2024). "A Survey on Multimodal Large Language Models". National Science Review. 11 (12): nwae403. arXiv:2306.13549. doi:10.1093/nsr/nwae403. PMC 11645129. PMID 39679213.
"AI Index Report 2024 – Artificial Intelligence Index". aiindex.stanford.edu. Retrieved 2024-05-05.
Frank, Michael C. (27 June 2023). "Baby steps in evaluating the capacities of large language models". Nature Reviews Psychology. 2 (8): 451–452. doi:10.1038/s44159-023-00211-x. ISSN 2731-0574. S2CID 259713140. Retrieved 2 July 2023.
Anwar, U.; Saparov, A.; Rando, J.; Paleka, D.; Turpin, M.; Hase, P.; Lubana, E. S.; Jenner, E.; Casper, S.; Sourbut, O.; Edelman, B. L.; Zhang, Z.; Günther, M.; Korinek, A.; Hernandez-Orallo, J.; Hammond, L.; Bigelow, E.; Pan, A.; Langosco, L.; Krueger, D. (2024). "Foundational Challenges in Assuring Alignment and Safety of Large Language Models". arXiv:2404.09932 [cs.LG].
A model act, also called a model law or a piece of model legislation, is a suggested example for a law, drafted centrally to be disseminated and suggested for enactment in multiple independent legislatures. The motivation classically has been the hope of fostering more legal uniformity among jurisdictions, and better practice in legislative wording, than would otherwise occur; another motivation sometimes has been lobbying disguised under such ideals. Model laws can be intended to be enacted verbatim, to be enacted after minor modification, or to serve more as general guides for the legislatures.
Model laws are especially prevalent in federations, because the federal subjects (for example, states or provinces) are autonomous or semi-autonomous but nonetheless can benefit from a substantial degree of uniformity of laws among jurisdictions. For example, in the United States, because the country consists of 50 semi-autonomous states, each with its own legislature and set of laws, avoidance of needless variation is valuable, reserving variation only for essential autonomous differences. There, model laws are referred to as model acts or model bills. Many American special interest groups draft model acts which they lobby lawmakers to pass. In particular, the conservative American Legislative Exchange Council (ALEC) has had hundreds of model acts passed since 2010. Uniform acts are model acts intended to be enacted exactly as written. They are drafted by the Uniform Law Commission (ULC), a state-run non-profit organization whose purpose is to draft laws in areas where uniformity is important (for example, to facilitate interstate commerce).
The concept is not specific to federations, and international organizations such as the United Nations Commission on International Trade Law, the International Red Cross and Red Crescent Movement, and the European Union have also written model laws to harmonize laws between different countries.
Although model acts inherently can serve valid purposes (such as for uniform justice, with less capriciousness), their distortion into disguised lobbying has been criticized. American critics of such model laws have thus referred to them as "copycat laws", "fill-in-the-blanks laws", and "copy-paste laws". The concept caused some controversy in 2019 when a coalition of 30 investigative journalists published a series called "Copy, Paste, Legislate", investigating the corporate interests behind many model laws.
== American drafters of model law ==
=== Harry H. Laughlin's Model Eugenical Sterilization Law ===
One early example of a model law was eugenicist Harry H. Laughlin's Model Eugenical Sterilization Law. In 1922, he published the book Eugenical Sterilization in the United States, whose purpose was to persuade state legislatures to pass sterilization laws, a goal it achieved. In chapter XV of the book he included the bill Model Eugenical Sterilization Law. Two years later, Laughlin's sterilization act was enacted almost unmodified as the Virginia Sterilization Act of 1924. The Supreme Court upheld the constitutionality of the law in Buck v. Bell in 1927, paving the way for similar sterilization laws in other states.
=== Uniform Law Commission ===
The non-profit Uniform Law Commission (ULC), formerly known as the National Conference of Commissioners on Uniform State Laws, was founded in 1892 to provide American jurisdictions with robust legislation. ULC promotes enactment of uniform acts in areas of state law where uniformity is desirable and practical. ULC produces both model and uniform acts. Since its inception it has produced over 250 uniform acts.
ULC drafted the Model Tribal Secured Transactions Act in 2005 which served as a template for tribal legal infrastructure on reservations to provide consistency and greater accessibility in lending and credit transactions.
=== American Bar Association ===
The American Bar Association is an association of American lawyers and law students which has published a large number of model acts. Its most successful model law is probably the Model Business Corporation Act, published in 1950. As of 2020, the act is followed by 24 states. Another influential act the ABA has drafted is the 1979 Model Procurement Code for State and Local Governments, which as of 2000 had been adopted in full by 16 states and in part by several more. The act went through a major update in 2000.
Other model acts drafted by ABA include the Model Airspace Act in 1973, and the Model Code for Public Infrastructure Procurement in 2007.
=== American Law Institute ===
The American Law Institute (ALI) is most famous for its Restatements of the Law but has also produced model acts. A well-known example is the Model Penal Code published in 1962 seeking to harmonize state criminal law.
=== American Legislative Exchange Council ===
The American Legislative Exchange Council (ALEC), an American nonprofit organization whose members include conservative state legislators and private-sector representatives, is a prolific producer of model state-level laws for conservative causes. ALEC has deep ties to the State Policy Network (SPN), an umbrella organization for a consortium of conservative and libertarian think tanks that focus on state-level policy, which is one of ALEC's sponsors.
One of ALEC's earliest model acts was the 1981 Animal and Ecological Terrorism Act, intended to prohibit acts that would make agricultural business operations more difficult. It sought to impose harsh penalties, including a terrorism registry, on instances of direct action performed by organizations such as the Animal Liberation Front.
ALEC's model acts concern many topics important to conservatives, such as Stand Your Ground, Voter ID, illegal immigration, truth in sentencing, three strikes, right to know, and cutting taxes. ALEC has also drafted and distributed state-level legislation opposing the creation or expansion of municipal broadband networks.
ALEC has been very successful in getting its laws passed; according to Brendan Greeley, lawmakers introduce bills based on the organization's model acts about 1,000 times per year in state legislatures and about 200 of them become law. In 2015, ALEC model bills were reflected in about 172 measures introduced in 42 states, according to the Center for Media and Democracy, publishers of the ALEC Exposed series.
ALEC has also been criticized for being funded by big corporations and over alleged underhandedness. The Guardian has described it as a "dating agency for Republican state legislators and big corporations" to "frame rightwing legislative agendas".
== Notable model acts ==
Some notable model acts not drafted by the above-mentioned organizations:
National Notary Association provided the draft for the Uniform Notary Act in 1973. It was renamed to the Model Notary Act and expanded in 1984, 2002, and 2010. The act has been adopted in its entirety in several jurisdictions.
The United States Department of Justice issued the Revised Model Tribal Sex Offender Registry Code/Ordinance in 2017. It offered guidelines to American Indian tribes on how to implement the Adam Walsh Child Protection and Safety Act.
The National Association of Civil-Law Notaries issued the Model Civil Law Notary Act to streamline law relating to civil law notary. The act has been enacted in Alabama and Florida.
The National Association of Insurance Commissioners issued the Uniform Health Carrier External Review Model Act. The 2010 Patient Protection and Affordable Care Act requires states to enact laws based on the model act.
President Franklin D. Roosevelt wrote the Standard State Soil Conservation Districts Law, a model act he submitted to the states. It led to the establishment of the New York State Soil and Water Conservation Committee on April 23, 1940, and the soil and water conservation districts were authorized with the enactment of the Soil Conservation District Law, a law based on that model act.
In 2013 and 2014, the Tenther movement introduced bills based on its model act, the Fourth Amendment Protection Act, in several state legislatures via Republican and Democratic lawmakers. The intent of the bills was to prevent state governments from co-operating with the National Security Agency's mass surveillance program.
In 2006, the National Auctioneers Association proposed the Uniform Auction and Auctioneer Licensing Act, model legislation governing auctions and auctioneers.
The National Committee on Uniform Traffic Laws and Ordinances introduced, and periodically updates, the Uniform Vehicle Code.
Model State Emergency Health Powers Act, drafted by the Centers for Disease Control and Prevention in 1999.
Federal Rules of Civil Procedure (FRCP), published in 1938 to harmonize the rules governing civil procedure. 35 states have adopted rules based on FRCP.
Partisanship Out of Civics Act (POCA), written by Stanley Kurtz, a National Review education writer and Senior Fellow at the Ethics and Public Policy Center, and published in February 2021 by the National Association of Scholars (NAS), to limit the teaching of critical race theory in schools. This model bill, particularly Section 7, which specifically bans certain concepts, has been incorporated into state legislation in Idaho, Oklahoma, Iowa, Tennessee, Texas, Florida, Montana, Utah, Georgia, and South Carolina.
== International model laws ==
An example of an international model law is the UNCITRAL Model Law on International Commercial Arbitration. Model legislative provisions on privately financed infrastructure projects were drafted by UNCITRAL and recommended for states to use by the United Nations General Assembly in 2004. Other UNCITRAL Legislative Guides, which make recommendations for efficient approaches to addressing an area of law within a national or local context, are listed at United Nations Commission on International Trade Law#Legislative Guides.
== "Copy, Paste, Legislate" ==
In 2019, a team of 30 reporters from the Center for Public Integrity (CPI), USA TODAY, and The Arizona Republic published the result of a two-year-long investigation into model acts entitled "Copy, Paste, Legislate". The investigation raised concerns over the role of ALEC and other corporate-sponsored organizations on the American legislative process.
The investigation used text analysis software called Legislative Influence Detector, created by Joe Walsh, a former data scientist at the University of Chicago, to spot similarities between model acts and enacted legislation. Its main finding was that during the period 2010 to 2018, lawmakers had introduced bills based on model acts at least 10,000 times. Another 10,000 bills were likely copied but were more dissimilar. The investigation identified over 2,100 model acts but speculated that the real number is far higher, since many organizations keep their model acts secret. In many states, the use of model bills was found to have supplanted the traditional way of writing legislation "from scratch".
Mississippi was found to be the state with the highest number of bills introduced based on model acts, at 744, about 200 more than the next highest state. Of these, 288 came from the non-partisan Council of State Governments and 255 from ALEC, but only 57 became law, according to the investigation.
=== Open recall disclosure ===
The "Copy, Paste, Legislate" investigation uncovered a legal initiative by the car industry to enact laws that would require dealers to disclose if a bought used car were under open recall, something most states do not require. The car industry's initiative was in response to other legal initiatives that called for banning the sales of used cars under open recall entirely.
The first bill produced by the initiative was introduced in 2014 by New Jersey Speaker Paul D. Moriarty and called for "a fine for failing to disclose open recalls to customers." It was based on a model law crafted by a lobbyist who headed the New Jersey Coalition of Automotive Retailers. The lobbyist said that their "model legislation" provided "suggested language" and was never intended to be a copy-and-paste exercise. Similar model legislation was drafted by the Washington, D.C.–based Automotive Trade Association Executives (ATAE), representing over 100 "executives from regional auto dealer associations". The bill allowed dealers to continue selling recalled cars as long as they disclosed open recalls. The dealers worked with over 600 lobbyists in 43 states to assist in getting the legislation passed. From 2014 through 2019, lawmakers in eleven states introduced similar bills into their state legislatures.
=== "Right-to-try" ===
The libertarian, Arizona-based Goldwater Institute drafted the "right-to-try" law that was signed into law in Ohio in 2016 by then-Governor John Kasich. It allows patients with terminal illnesses to try drugs that the federal Food and Drug Administration has not approved. The law was passed at the federal level in 2018.
=== Anti-BDS laws ===
The "Copy, Paste, Legislate" investigation also documented the Israel lobby's largely successful attempts to get statehouses to pass legislation to curb the Palestinian-led BDS movement. BDS calls for comprehensive boycotts of Israel until it stops its human rights violations against Palestinians. The legislation that the Israel lobby promotes requires state contractors to pledge not to boycott Israel and state pension funds to divest from entities that do.
One of the first anti-BDS laws was sponsored by Republican lawmaker Alan Clemmons, who introduced it in 2015. He worked with the Israeli-American Coalition for Action's (AIC) Joe Sabag, his "buddy and wordsmith-in-chief", to prepare the bill. Eugene Kontorovich, a George Mason University law professor, assisted in drafting the legislation. He has also helped other states with their anti-BDS laws and frequently defends their constitutionality in the media. By May 2019, 25 other states had adopted similar measures, and many of the bills shared exact wording. The anti-BDS initiatives, undertaken by activist groups concerned about the rise of antisemitism, such as the Jewish Federations of North America (JFNA) and the Israeli-American Coalition for Action, have been largely successful in pushing the anti-boycott legislation through state legislatures, according to the two-year collaborative investigative report. A JFNA lobbyist wrote the "anti-boycott executive order and news release" for the governor of Louisiana. A pro-Israel lobbyist closely helped edit the bill and guided the lawmaker who introduced and supported the anti-boycott legislation in Nevada.
== See also ==
Conflict of laws
EU Harmonization, a somewhat similar concept in European Union law
Model State Constitution
Project Blitz
Uniform act
== References ==
An art model is a person who poses, often nude, for visual artists as part of the creative process, providing a reference for the human body in a work of art. As an occupation, modeling requires the often strenuous 'physical work' of holding poses for the required length of time, the 'aesthetic work' of performing a variety of interesting poses, and the 'emotional work' of maintaining a socially ambiguous role. While the role of nude models is well-established as a necessary part of artistic practice, public nudity remains transgressive, and models may be vulnerable to stigmatization or exploitation.: 1 Family and friends may pose for artists, in particular for works with costumed figures.
Much of the public perception of art models and their role in the production of artworks is based upon mythology, the conflation of art modeling with fashion modeling or erotic performances, and representations of art models in popular media.: 15–18 One of the perennial tropes is that in addition to providing a subject for an artwork, models may be thought of as muses, or sources of inspiration without whom the art would not exist.: 68–79, 102–115 Another popular narrative is the female model as a male artist's mistress, some of whom become wives.: 3 None of these public perceptions include the professional model's own experience of modelling as work,: 44–45 the performance of which has little to do with sexuality.: Ch. 10
Beginning with the Renaissance, drawing the human figure has been considered the most effective way to develop the skills of drawing. In the modern era it became established that it is best to draw from life, rather than from plaster casts or copying two dimensional images such as photographs. In addition, an artist has an emotional: 32 or empathic: 4 connection to drawing another human being that cannot exist with any other subject. What is called the life class became an essential part of the curriculum in art school. In the classroom setting, where the purpose is to learn how to draw or paint the human form in all the different shapes, ages and ethnicities, anyone who can hold a pose may be a model.
== Role of the model ==
Although artists may also rely on friends and family to pose, art models are most often paid professionals with skill and experience. Rarely employed full-time, they must be gig workers or independent contractors if modeling is to be a major source of income. Paid art models are usually anonymous and unacknowledged subjects of the work. Models are most frequently employed by institutions of higher learning or by informal groups of artists that gather to share the expense of a model. Models are also employed privately by professional artists. Although commercial motives dominate over aesthetics in illustration, its artwork commonly employs models. For example, Norman Rockwell employed his friends and neighbors as models for both his commercial and fine-art work.
In the second half of the 20th century, the dominance of abstraction in the art world reduced the need for models by professional artists except for the remaining representational artists. However, drawing from life remained part of the training needed for a complete visual arts education at the majority of art schools.: 8–9 In recent years, art modeling has expanded from educational settings to non-traditional art spaces and sometimes bars, blurring the line between art and entertainment.: 9 With the increasing presence of sexual imagery in popular culture, effort is required to maintain the desexualized context of nude modeling in studio classes.: 21–22
=== Training and selection ===
In some countries there are figure model guilds that concern themselves with the competence, conduct and reliability of their members. An example is the Register of Artists' Models (RAM) in the United Kingdom. Some basic training is offered to beginners and membership is by audition – to test competence, not to discriminate on grounds of physical characteristics. RAM also acts as an important employment exchange for models and publishes the 'RAM Guidelines', which are widely referred to by models and employers. A similar organization in the United States, the Bay Area Models Guild in California, was founded in 1946 by Florence Wysinger Allen. Groups also exist in Australia and Sweden. These groups may also attempt to establish minimum rates of pay and working conditions, but only rarely have models been sufficiently organized to go on strike.
=== Diversity of models and students ===
Unlike commercial modeling, modeling in an art school classroom is for the purpose of teaching students of art how to draw humans of all physical types, genders, ages, and ethnicities.: 11, 77, 81
Children are generally excluded from modeling nude for classes. The minimum age can vary, but is often 15 to 18. Although the modeling is nonsexual in nature, the minimum may be influenced by the age of consent (i.e., set at or slightly below it). Younger children are not good candidates for art modeling since they lack the ability to hold still.: 9
Gender roles and stereotypes in society are reflected in different experiences for male and female art models, and different responses when those not in the arts learn that someone is a nude model. However, both male and female models tend to keep their modeling careers distinct from their other social interactions, if for different reasons. Attitudes toward male nudity, issues of homosexuality when male artists work with male models, and some bias in favor of the female form in art may lead to less opportunity for male models. Works of art that include male nudity are much less marketable.
In classrooms with predominantly white students, a model of color may present difficulties not due to overt racism, but due to students' unfamiliarity with different skin tones and body types.
Figure on Diversity is an organization that seeks to diversify the field of figurative representation in art education by leading workshops for models and artists. Founded in Boston in 2018, it has since moved to Florida, but has an increasing presence online.
=== Working as a model ===
Posing nude is physically and emotionally challenging, but models find the effort worthwhile and appreciate having a role in the creative arts.
==== Physical work ====
While posing, a model is expected to remain essentially motionless, and return to the same pose after a break.: 47–55 : 111–113 While posing for a class, models do not talk, and should not be spoken to by students, maintaining the serious atmosphere of the studio.: 64–67 Poses can range in length from seconds to many hours—with appropriate breaks—but the shortest is usually one minute. Short dynamic poses are used for gesture drawing exercises or warm-ups, with the model taking strenuous or precarious positions that could not be sustained for a longer pose. Sessions proceed through groups of poses increasing in duration. Active, gestural, or challenging standing poses are often scheduled at the beginning of a session when the models' energy level is highest. Specific exercises or lesson plans may require a particular type of pose, but more often the model is expected to do a series of poses with little direction. The more a model knows about the types of exercises used to teach art, the better they become at posing. Occasionally a pose will cause unexpected problems, such as constricting blood flow that could result in a model passing out. While posing for the first time may cause anxiety, most continue due to the relatively high pay. The most significant characteristic of the job mentioned by models is the physical exertion required.
Poses fall into three basic categories: standing, seated and reclining. Within each of these, there are varying levels of difficulty, so one kind is not always easier than another. Artists and life drawing instructors will often prefer poses in which the body is being exerted, for a more dynamic and aesthetically interesting subject. Common poses such as standing twists, slouched seated poses and especially the classical contrapposto are difficult to sustain accurately for any amount of time, although it is often surprising what a skilled model can do. The model's level of experience and skill may be taken into account in determining the length of the posing session and the difficulty of the poses.: 9–10
Models usually pose on a raised platform called the model stand or dais. When artists are working standing at easels, a model stand is essential to avoid a distorted perspective. If the model is posed standing on the floor, the artist should draw while seated.: 14–15 In sculpture studios this platform may be built to rotate periodically through the session to allow for a 360° view for every artist. Long poses are generally required for painting (hours) and sculpture (perhaps days).: 9–10
==== Aesthetic work ====
The most creative aspect of modeling is being able to think of an endless variety of new and interesting poses. A typical short-pose session may begin with five or ten gestures, followed by two 5-, two 10-, and five 25-minute poses separated by five-minute breaks.: 42 When modeling for the same group, new poses are expected at each session. Most models learn on the job, but many have experience in the performing arts, athletics, or yoga that provide a basis for posing, such as strength, flexibility, and a well-developed sense of body position.
Those that try modeling casually may find it to be rewarding, and then seek to learn more about the job. Some may have previously taken an art class and seen other models, but others rely upon fine-art museums and books for suggestions on how to pose.: 103–104 Experienced models work for many employers, gaining a wider knowledge of methods and practices than most individual artists or art teachers. Many models are visual artists themselves, and come to think of modeling as part of their visual arts practice, or as a creative activity in its own right.: 36
==== Emotional work ====
In social science terms, an art model is recognized as having a valuable role in the art world as a sub-culture, with norms of behavior and a definition of the situation that ideally support models' being proud of their work. However, stereotypes and prejudices of the larger culture may threaten these norms and definitions.: 8 Pride in being a model comes from identification with fine arts education and creativity as having social value, which is dependent on the quality of teaching, which models experience first-hand in a myriad of settings.: 37
Sexuality is an issue in an art studio where naked models are present, and has become more so with the sexualization of the body in contemporary cultures. The traditional definition of the situation in art studios has been that the nudity of models is functional, not sexual. The norms and behaviors that support this understanding included models being naked only while posing, quickly disrobing/robing and not interacting with others while naked. This understanding is less strict when student artists are also models, either in classes or posing for each other outside of class. The other aspect of sex in the arts is gender, including feminist critiques of the performance of gender in the classroom and representations of gender in figurative works.: 127–131
A common experience for young first-time participants in a figure class, both models and students, is overcoming anxiety for the initial session due to preconceptions regarding public nudity. Occasionally the class is the first time a student has seen someone of the opposite sex entirely nude in real life, but they quickly get used to it.
== Types of modeling ==
The major distinction in types of art modeling is between posing for art classes or other groups versus posing for an individual artist in the creation of a particular work. There is also the distinction between models who pose for an hourly fee versus those that pose for friends, family, or significant others. These types apply to all the media used, figure drawing, figure painting, sculpture and figure photography.
=== Academic modeling ===
Job descriptions for modeling posted by art schools list basic requirements of being willing to pose nude or clothed, being able to hold poses for the requested time (from minutes to hours with breaks), and to follow cues from the instructor. These basic requirements hold true at large universities, liberal arts colleges, and schools of art and design. The hourly rate paid by schools for nude modeling may be significantly higher than for clothed modeling. Some colleges have a model coordinator assigned to supervise the selection and scheduling of models for all classes.
At many public universities in the United States, "Art Model" is listed in the human resources system as would be any part-time temporary job. Sometimes modeling jobs are reserved for students. At Indiana University, however, current students at the Eskenazi School of Art, Architecture + Design may not pose nude, only clothed, while students in other departments may pose nude. At other institutions students cannot be models, even if they are not art students, to avoid any possibility of conflict of interest.
In some institutions, guidelines for the conduct of all participants in a nude model session may be specified in a handbook, and are observed to maintain decorum and emphasize the serious intent of figure studies. Admission to and visibility of the area where a nude model is posing is tightly controlled. Disrobing is done discreetly, and the model wears a robe when not posing. Models may not be accompanied by non-class members. It is generally prohibited for anyone (including the instructor) to touch a model. Very close examinations are only made with the permission of the model. Some institutions allow only the instructor to speak directly with a model. Experienced models avoid any sexually suggestive poses. Art instructors and institutions may consider the incident of a male model gaining an erection while posing cause for termination, or grounds for not hiring him again. Guidelines at St. Olaf College discourage students from making comments on a model's appearance. Photography is generally forbidden.
Any of these policies may vary in different parts of the world. In Europe and South America attitudes are more relaxed than in the United States, while in China, Taiwan and Korea attitudes are more conservative.: 39 A figure class held in Singapore is conducted as it would be in other parts of the world.
=== Artist's groups ===
While otherwise similar to art school modeling, groups variously called "open studios" or "drop-in sessions" lack instruction. They may be sponsored by arts organizations or galleries, or meet in an artist's private studio or home. Generally the attendees are experienced artists who want to continue the practice of life drawing, and find an informal group easier and more economical, paying a fee for each session or a series.: 18–19
In many locations there may be few opportunities for figure drawing, and also few people willing to model. Those who do model seek an additional source of income, but also find validation in being able to hold poses and in contributing to the artistic process. However, they are more likely to avoid letting it be known that they model, given the negative associations attached to nudity. The Philbrook Museum of Art in Tulsa, Oklahoma has been holding a weekly session for as long as anyone can remember. It is otherwise a typical open session, except that a professor at the University of Tulsa offers instruction once each month. The models for these sessions tend to be middle-aged or older, and the artists are generally experienced in drawing nude models, with only the occasional new participant.
=== Modeling for individual artists ===
In non-academic settings, models may pose as requested by artists within the limits of the law and their own comfort, including work that requires physical contact with other models, the artist, or the public. French artist Yves Klein applied paint to models' bodies which were then pressed into or dragged across canvas both as performance art and as painting technique. In 2010 at the Museum of Modern Art, a retrospective of the work of Marina Abramović included two nude models, male and female, standing in a narrow doorway through which visitors passed, replicating a work performed by the artist and a partner in 1977.
Models who work for individual artists in a private studio tend to observe art school norms in order to maintain the definition of modeling as serious artistic work. However, there are no longer strict rules, so a more informal working relationship may be established over time. This may include not undressing in a separate room, or not wearing a robe during breaks. In addition, silence is no longer necessary if the artist is comfortable working and conversing with the model. A more collegial relationship may develop in which artist and model feel that they are collaborating. On the other hand, in a private studio environment, with an artist on a deadline or working to commission guidelines, stricter work standards may apply regarding punctuality and holding longer, more demanding poses, but also higher rates of pay. Such private studio work is rare outside of major cities.: 49–54
Chuck Close apologized in 2017 when several women accused him of making inappropriate comments when they came to his studio to pose, but initially denied any wrongdoing. Following his death in 2021, it was revealed that Close suffered from a form of dementia, which could account for his behavior.
==== Family members, wives and life partners ====
Throughout history, artists have used family members as models, both nude and otherwise, in creating their works. The Dutch Golden Age painter Jan de Bray specialized in the portrait historié, "portraits" of historical figures using contemporary figures as models, including himself and his family, as in two versions of The Banquet of Cleopatra (1652 and 1669). Rose Beuret was the subject of several portrait sculptures by Auguste Rodin and his companion for 53 years, but his wife only in the final year of her life. Camille Doncieux, first wife of Claude Monet, also posed for paintings by Pierre-Auguste Renoir and Édouard Manet. Hortense Fiquet, companion and later wife of Cézanne, is rarely mentioned in art history. Lucian Freud painted many of his 14 children, sometimes nude; the most controversial being his daughter Annie Freud, painted in 1963 when she was 14. However, she now looks back upon posing for her father as a positive experience.
=== Clothed modeling ===
Painting classes and artists doing historically themed works often require clothed or costumed models who take poses that may be sustained until the work is completed. This creates some demand for clothed models in those schools that continue to teach academic painting methods. Some models may promote their services based upon having interesting or varied costumes. Clothing is required in public venues, such as Dr Sketchy's Anti-Art School, but clothed modeling occurs in more traditional settings as well, such as the fund-raising marathons sponsored by the Bay Area Models Guild.: 39
Usually an individual who is having their own portrait painted or sculpted is called a "sitter" rather than a model; when they are not being paid to pose, it is frequently the case that the artist is being paid to create a likeness. Modern portraits are done from photographs at least in part, although artists prefer to have at least some hours of live sitting at the beginning to better capture the personality, and at the end for final touches. In some cases, the sitter may reject a portrait as unflattering, and destroy it.
=== Photography ===
There has been controversy regarding the status of photography as a fine-arts medium, which is reflected in the unwillingness of some models to pose nude for photography as they would for drawing or painting.: 18–25 The experience of nude modeling for an amateur photographer is different from that of posing for figure drawing or painting. Traditional media create a single image that is not a true likeness of the individual model, whereas a photograph is a literal likeness and therefore requires a release in order to protect the model's right to privacy. The hourly rate of pay for models posing for fine-art photography is much higher than for other media, although less than for commercial photography.
Photographer Sally Mann published the book Immediate Family, in which 13 of the 65 images are of her children nude. Mary Gordon characterized many of these images as sexualizing children regardless of artistic merit. Mann's response to this criticism has been that the images were spontaneous and natural, having no sexual connotations other than those supplied by the viewer. Less well-known photographers have been charged, but not convicted, for suspected child abuse for similar photographs of their own children. Jock Sturges photographed entire families of naturists, which led to an FBI investigation when a photo-lab employee reported the images; however, no charges were made.
The relationship between male photographers and their wives as models is studied in Arthur Ollman's book, The Model Wife. It focuses on the photographers Baron Adolph de Meyer (whose wife was Olga de Meyer), Alfred Stieglitz (whose wife was Georgia O'Keeffe), Edward Weston and model Charis Wilson, Harry Callahan, Emmet Gowin, Lee Friedlander, Masahisa Fukase, Seiichi Furuya, and Nicholas Nixon.
Occasionally the distinction of participating in Fine Art may make a young amateur model willing to pose for a well-known photographer, examples being Vanessa Williams and Madonna. A signed print of one of the nude photographs of Madonna taken by Lee Friedlander in 1979 sold at auction in 2012 for $37,000. Although largely a result of her fame, the model does not share in this increased value of the artwork.
=== Online modeling ===
During the COVID-19 pandemic, life drawing classes began to appear on online platforms, most frequently on Zoom. This shift to virtual spaces created new, global communities and increased access to artists who were able to join sessions from their homes. Although remote sessions suffer from some difficulties, such as the flattening and distortion of the camera and the lack of direct communications, there has been an expansion of the community willing and able to participate, both as models and artists.
Models at the Government College of Art & Craft in India for whom posing for classes is their only income do not have the online option, but have been supported by donations from artists.
== Nudity and body image ==
In recent years, a connection has been made between the social issues of body image and sexualization and art modeling, with some promoting wider participation in life drawing, including at a younger age, to provide an experience of real nude people as an alternative to social media representations of idealized bodies. The social benefits of life drawing had been suggested by David B. Manzella in the 1970s while he was director of the Rhode Island School of Design. Nude models were introduced to the young people's classes with the permission of parents. Models often cite acceptance of their bodies as one of the benefits of modeling. While younger women continue to be the typical model, men and older models are welcomed in cities with an active arts community, such as Glasgow, Scotland. Figure On Diversity is one initiative which aims to increase representation in studio art and studio art education by creating resources in support of models who hold visible marginalized identities. Sociologist Sarah R. Phillips, in a 2020 follow-up to her 2005 book Modeling Life, notes that models who have contacted her during these years generally experience posing nude in a classroom as empowering.
== Alternative views ==
The mainstream view of art modeling is based upon a moderate position regarding the value of figure studies and nudity in art. There are also schools or studios that may be more conservative, or more liberal.
Many art programs in Christian institutions consider nudity in any form to be in conflict with their beliefs, and therefore hire only clothed models for art classes. None of the Protestant Evangelical colleges in the United States were found to include nude models in their arts and graphic design programs, citing it as an immodest practice; yet similar institutions in Australia held life drawing classes.
At Louisiana State University (LSU), there are rare objections to nudity by religious or conservative students, but the faculty assert that drawing the body is necessary training for art in general and to understand the structure underneath clothing. Models at LSU are full-time students who learn about modeling from other students or artists. Brigham Young University does not allow nude models, describing their policy as self-censorship within the context of the school's honor code. Other institutions view the absence of figure studies as bringing into question the completeness of the art education offered. Some recognize that an appreciation of the beauty of the human body is compatible with a Christian education. Gordon College not only maintains the need for nude figure studies as part of a complete classical art education, but sees the use of models clad in swimwear or other revealing garments as placing the activity in the context of advertisement and sexual exploitation.
James Elkins voices an alternative to classical "dispassionate" figure study by stating that the nude is never devoid of erotic meaning, and it is a fiction to pretend otherwise. Even the advocate of classical aesthetics Kenneth Clark recognized that "biological urges" were never absent even in the most chaste nude, nor should they be unless all life is drained from the work. Most models maintain that posing nude need not be any more sexual than any other coed social situation as long as all participants maintain a mature attitude. However, decorum is not always maintained when either a model or the students are not familiar with the often unspoken rules. Models may be apprehensive about posing for incoming freshmen who, having never encountered classroom nudity, respond immaturely.
Acceptance of the erotic is apparent in the work and behavior of some artists. For example, Picasso was also famous for having a series of model/muse/mistresses throughout his life: Marie-Thérèse Walter, Fernande Olivier, Dora Maar, and Françoise Gilot. The painter John Currin, whose work is often erotic, combines images from popular culture with references to his wife, Rachel Feinstein.
A feminist view is the male gaze, which asserts that nudes are inherently voyeuristic, with the viewer in the place of the powerful male gazing upon the passive female subject.
== History ==
The role of art models has changed through different eras as the meaning and importance of the human figure in art and society has changed. Nude modeling, nude art and nudity in general have at times been subject to social disapproval, at least by some elements in society.: 3 When the nude in art was most popular, the models that made these artworks possible might be of low status and poorly paid. The stereotype of the female art model was part of bohemianism in late 19th- and early 20th-century Europe. The combination of nakedness and the exchange of money led others to associate nude modeling with prostitution, particularly in the United States.: 6–7
As the 20th century progressed, models gained more recognition and status, including forming the first organizations with some of the functions of labor unions, thus making modeling a professional occupation. It became possible for individuals to gain fame, such as Audrey Munson, who was the model or inspiration for more than 15 statues in New York City in the 1910s. Quentin Crisp began a thirty-year career as a model in 1942.: 20–21
=== Ancient and Post-classical ===
It was probably in Ancient Greece that models were first used. The Greeks, who had the naked body constantly before them in the exercises of the gymnasium, had far less need of professional models than the moderns; but it is scarcely likely that they could have attained the high level reached by their works without constant study from nature. The story that Valerius Maximus tells of Zeuxis, who had five of the most beautiful virgins of the city of Crotone offered to him as models for his picture of Helen, proves their occasional use. The remark of Eupompus, quoted by Pliny, who advised Lysippos, "Let nature be your model, not an artist", directing his attention to the crowd instead of to his own work, also suggests a use of models which the many portrait statues of Greek and Roman times show to have been not unknown. The names of some models of the era are themselves known, such as the beautiful Phryne, who modeled for many paintings and sculptures.
The nude almost disappeared from Western art during the Middle Ages, largely due to the attitude of the early Christians, although in Kenneth Clark's famous distinction "naked" figures were still required for some subjects, especially the Last Judgment. This changed with the Renaissance and the rediscovery of classical antiquity, when painters initially used their male apprentices (garzoni) as models, for figures of both genders, as is often clear from their drawings. Leon Battista Alberti recommends drawing from the nude in his De pictura of 1435; as remained usual until the end of the century, he seems only to mean using male models.: 49–50
=== Early modern ===
Possibly the first images of nude women done from the life are a number of drawings and prints by Albrecht Dürer from the 1490s, which were ahead of Italian practice.: 51–55 The production of female nudes suddenly became important in Venetian painting in the decade after 1500, with works such as Giorgione's Dresden Venus of c. 1510. Venetian painters made relatively little use of drawings, and it has been thought that these works did not involve much use of live models, but this view has recently been challenged.: 55–56 The first Italian artist to regularly use female models for studies is usually thought to have been Raphael, whose drawings of the female nude clearly do not use teenage boys.: 56–60 Michelangelo's earlier Study of a Kneeling Nude Girl for The Entombment (c. 1500) may or may not have used a female model, but if it did this was not his normal practice.
The story of the love between Raphael and his mistress-model Margarita Luti (La Fornarina) is "the archetypal artist-model relationship of Western tradition". There was also a tradition of incorporating donor portraits as minor figures into religious narrative scenes, and several Virgin and Child compositions by court painters are thought to use princesses or other court figures as models for the Virgin Mary; these are sometimes called "disguised portraits".: 3–4, 137 The most notorious of these is the portrayal as the Virgo lactans (or just post-lactans) of Agnès Sorel (died 1450), the mistress of Charles VII of France, in a panel by Jean Fouquet.: 3–4
Raphael's relationship was probably somewhat untypical, although the Autobiography of Benvenuto Cellini records his use, in both Rome and Paris, of servant girls as model, mistress and maid. However, when he broke with one he had difficulty in finding another model, and was forced to rehire her just to pose.: 60–61 Lorenzo Lotto records payments to prostitutes to pose in Venice in 1541, perhaps the earliest record of what long remained an option for artists.: 60
Art modeling as an occupation appeared in the late Renaissance when the establishment of schools for the study of the human figure created a regular demand, and since that time the remuneration offered ensured a continual supply. However, academy models were usually only men until the late 19th century, as were the students. The Académie royale de peinture et de sculpture only allowed female models, clothed, from 1759.: 61 In London the students at the female branch of the Royal Academy of Art were not allowed to study the undraped figure until the later 19th century.
=== Late modern and contemporary ===
In 19th-century Paris, a number of models earned a place in art history. Victorine Meurent became a painter herself after posing for several works, including two of the most infamous: Manet's Olympia and Le déjeuner sur l'herbe. Joanna Hiffernan was an Irish artists' model and muse who was romantically linked with American painter James Abbott McNeill Whistler and French painter Gustave Courbet. She is the model for Whistler's painting Symphony in White, No. 1: The White Girl and is rumored to be the model for Courbet's painting L'Origine du monde. Suzanne Valadon, also a painter, modeled for Pierre-Auguste Renoir (most notably in Dance at Bougival), Henri de Toulouse-Lautrec, Pierre Puvis de Chavannes, and Edgar Degas. She was the mother of the painter Maurice Utrillo.
The second Bal des Quat'z'Arts held in 1893 was a costume ball featuring nude models among the crowd, blurring the distinction between the idealized images in works of art and the real people who posed for them. This was symbolic of other social changes that marked the fin de siècle. Four studio models were convicted of public indecency, which was followed by protests of censorship by students of the École des Beaux-Arts.
When Victorian attitudes took hold in England, studies with a live model became more restrictive than they had been in the prior century, limited to advanced classes of students that had already proved their worthiness by copying old master paintings and drawing from plaster casts.: 9 This is in part because many schools were publicly funded, so decisions were under the scrutiny of non-artists.: 12 Modeling was not respectable, and even less so for women. During the same period, the French art atelier system allowed any art student to work from life in a less formal atmosphere, and also admitted women as students. In England, the life class became well established as a central element in art education only with the approach of the 20th century.: 14–16
In the United States, Victorian modesty sometimes required the female model to pose nude with her face draped (Masked Nude by Thomas Eakins, for example).: 84 In 1886, Eakins was dismissed from the Pennsylvania Academy of the Fine Arts for removing the loincloth from a male model in a mixed classroom.
In the postmodern era, the nude has returned to gain some acceptance in the art world, but not necessarily the art model. Figure drawing is offered in most art schools, but may not be required for a fine art degree. Peter Steinhart says that in trendy galleries, the nude has become passé,: 21 while according to Wendy Steiner there has been a revival in the importance of the figure as a source of beauty in contemporary art. Some established living artists work from models, but more work from photographs, or their imagination. Yet privately held open drawing sessions with a live model remain as popular as ever.
== In popular culture ==
=== Films ===
While there have been a number of films that exploited the artist/model stereotype, a few have more accurately portrayed the working relationship.
The Artist and the Model (2012) — Set during WWII, an elderly sculptor is prompted to resume working by the arrival of a beautiful Spanish refugee who is willing to pose.
Camille Claudel (1988) — Depicts Auguste Rodin and Camille Claudel working in their studio with models.
La Belle Noiseuse (1991) — An aging artist is coaxed out of retirement by an aspiring young artist's suggestion that his girlfriend pose nude for a new painting.
Maze (2000) — The film opens with New York painter and sculptor Lyle Maze (Rob Morrow), who has Tourette syndrome, drawing from a model. Later a friend, Callie (Laura Linney), also poses for Maze.
The Prime of Miss Jean Brodie (1969) — One of Miss Brodie's teenage students, Sandy (Pamela Franklin), poses nude for the art instructor Mr. Lloyd (Robert Stephens).
Renoir (2012) — Tells the story of Catherine Hessling, the last model of Pierre-Auguste Renoir and the first actress in the films of his son, Jean Renoir.
== See also ==
The Helga Pictures
Russell Nesbit
== References ==
== External links ==
Dr. Sketchy's Anti-Art School
LIFE magazine compilation of photos of artist and models
Smithsonian Institution: Artists and Their Models
== Further reading ==
Meskimmon, Marsha; Desmarais, Jane; Postle, Martin; Vaughan, William; Vaughan, Martin; West, Shearer; Barringer, Tim (2006). Model and Supermodel: The Artists' Model in British Art and Culture. Manchester University Press. ISBN 978-0-7190-6662-7.
Waller, Susan (2006). The Invention of the Model: Artists and Models in Paris, 1830–1870. Burlington: Ashgate. ISBN 978-0-7546-3484-3.
A language model is a model of the human brain's ability to produce natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.
Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models, such as the word n-gram language model.
== History ==
Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars.
In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances.
In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that the words that are closer in the vector space are expected to be similar in meaning, and common relationships between pairs of words like plurality or gender.
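A toy illustration of such relationships as vector offsets, using made-up 3-dimensional embeddings (real models learn hundreds of dimensions from data):

```python
import numpy as np

# Invented toy embeddings for illustration only.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(v, exclude=()):
    """Word whose embedding has the highest cosine similarity to v."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], v))

# A relationship such as gender appears as an offset: king - man + woman ≈ queen.
print(nearest(emb["king"] - emb["man"] + emb["woman"],
              exclude={"king", "man", "woman"}))  # prints "queen"
```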
== Pure statistical models ==
In 1980, the first significant statistical language model was proposed, and during the decade IBM performed "Shannon-style" experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.
=== Models based on word n-grams ===
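A word n-gram model estimates the probability of a word from the few words preceding it, using counts from a corpus. Below is a minimal sketch (a maximum-likelihood bigram model on a toy corpus, with no smoothing):

```python
from collections import Counter

def train_bigram(corpus):
    """Maximum-likelihood bigram model: P(w2 | w1) = count(w1 w2) / count(w1)."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(tokens[:-1])             # context counts
        bigrams.update(zip(tokens, tokens[1:]))  # adjacent word pairs
    return lambda w1, w2: bigrams[w1, w2] / unigrams[w1] if unigrams[w1] else 0.0

p = train_bigram(["the cat sat", "the cat ran", "the dog sat"])
print(p("the", "cat"))  # 2/3: "cat" follows "the" in two of three sentences
```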
=== Exponential ===
Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is
P(w_m \mid w_1, \ldots, w_{m-1}) = \frac{1}{Z(w_1, \ldots, w_{m-1})} \exp\left(a^{T} f(w_1, \ldots, w_m)\right)
where Z(w_1, \ldots, w_{m-1}) is the partition function, a is the parameter vector, and f(w_1, \ldots, w_m) is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on a or some form of regularization.
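A minimal sketch of this model, assuming a toy vocabulary, a hand-picked set of indicator bigram features, and illustrative weights (none of which come from the article):

```python
import math

# Hypothetical vocabulary, features, and weights (illustrative values).
vocab = ["the", "cat", "sat"]
# Each feature is an indicator for one bigram (previous word, next word).
features = [("the", "cat"), ("cat", "sat"), ("the", "sat")]
a = [1.2, 0.7, -0.3]  # parameter vector, one weight per feature

def f(history, w):
    """Indicator feature vector: 1 where the (last history word, w) bigram matches."""
    prev = history[-1] if history else None
    return [1.0 if (prev, w) == feat else 0.0 for feat in features]

def prob(history, w):
    """P(w | history) = exp(a . f) / Z, with Z summing over the whole vocabulary."""
    score = lambda u: math.exp(sum(ai * fi for ai, fi in zip(a, f(history, u))))
    z = sum(score(u) for u in vocab)  # partition function Z(history)
    return score(w) / z

print(prob(["the"], "cat"))  # higher than prob(["the"], "sat")
```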
The log-bilinear model is another example of an exponential language model.
=== Skip-gram model ===
== Neural models ==
=== Recurrent neural network ===
Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.
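A minimal sketch of a single recurrent step in such a model (the dimensions, random initialization, and toy word-id sequence are illustrative assumptions, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, h = 100, 16, 32  # hypothetical vocabulary, embedding, and hidden sizes

E = rng.normal(size=(V, d))           # word embeddings (continuous representations)
W_xh = rng.normal(size=(h, d)) * 0.1  # input-to-hidden weights
W_hh = rng.normal(size=(h, h)) * 0.1  # hidden-to-hidden (recurrent) weights
W_hy = rng.normal(size=(V, h)) * 0.1  # hidden-to-vocabulary weights

def step(h_prev, word_id):
    """One recurrent step: consume a word, return new state and next-word distribution."""
    x = E[word_id]
    h_new = np.tanh(W_xh @ x + W_hh @ h_prev)
    logits = W_hy @ h_new
    p = np.exp(logits - logits.max())  # softmax over the vocabulary
    return h_new, p / p.sum()

state = np.zeros(h)
for w in [3, 17, 42]:  # a toy word-id sequence
    state, p_next = step(state, w)
print(p_next[:5])  # probabilities over the first few vocabulary items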
=== Large language models ===
Although large language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do.
== Evaluation and benchmarks ==
Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks created from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.
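One widely used intrinsic quality measure, not detailed in this paragraph but standard in the field, is perplexity: the exponentiated average negative log-probability that the model assigns to held-out text. A minimal sketch, with a deliberately trivial uniform model standing in for a real language model:

```python
import math

def perplexity(model_prob, words):
    """Perplexity = exp(-(1/N) * sum_i log P(w_i | w_1..w_{i-1}))."""
    log_sum = 0.0
    for i, w in enumerate(words):
        log_sum += math.log(model_prob(words[:i], w))
    return math.exp(-log_sum / len(words))

# Hypothetical uniform model over a 10,000-word vocabulary:
uniform = lambda history, w: 1.0 / 10_000
print(perplexity(uniform, ["a", "b", "c"]))  # ~10000.0: a uniform model's perplexity
```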
Various data sets have been developed for use in evaluating language processing systems. These include:
Massive Multitask Language Understanding (MMLU)
Corpus of Linguistic Acceptability
GLUE benchmark
Microsoft Research Paraphrase Corpus
Multi-Genre Natural Language Inference
Question Natural Language Inference
Quora Question Pairs
Recognizing Textual Entailment
Semantic Textual Similarity Benchmark
SQuAD question answering Test
Stanford Sentiment Treebank
Winograd NLI
BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs
== See also ==
== References ==
== Further reading == | Wikipedia/Language_model |
Algebraic varieties are the central objects of study in algebraic geometry, a sub-field of mathematics. Classically, an algebraic variety is defined as the set of solutions of a system of polynomial equations over the real or complex numbers. Modern definitions generalize this concept in several different ways, while attempting to preserve the geometric intuition behind the original definition.: 58
Conventions regarding the definition of an algebraic variety differ slightly. For example, some definitions require an algebraic variety to be irreducible, which means that it is not the union of two smaller sets that are closed in the Zariski topology. Under this definition, non-irreducible algebraic varieties are called algebraic sets. Other conventions do not require irreducibility.
The fundamental theorem of algebra establishes a link between algebra and geometry by showing that a monic polynomial (an algebraic object) in one variable with complex number coefficients is determined by the set of its roots (a geometric object) in the complex plane. Generalizing this result, Hilbert's Nullstellensatz provides a fundamental correspondence between ideals of polynomial rings and algebraic sets. Using the Nullstellensatz and related results, mathematicians have established a strong correspondence between questions on algebraic sets and questions of ring theory. This correspondence is a defining feature of algebraic geometry.
Many algebraic varieties are differentiable manifolds, but an algebraic variety may have singular points while a differentiable manifold cannot. Algebraic varieties can be characterized by their dimension. Algebraic varieties of dimension one are called algebraic curves and algebraic varieties of dimension two are called algebraic surfaces.
In the context of modern scheme theory, an algebraic variety over a field is an integral (irreducible and reduced) scheme over that field whose structure morphism is separated and of finite type.
== Overview and definitions ==
An affine variety over an algebraically closed field is conceptually the easiest type of variety to define, which will be done in this section. Next, one can define projective and quasi-projective varieties in a similar way. The most general definition of a variety is obtained by patching together smaller quasi-projective varieties. It is not obvious that one can construct genuinely new examples of varieties in this way, but Nagata gave an example of such a new variety in the 1950s.
=== Affine varieties ===
For an algebraically closed field K and a natural number n, let An be an affine n-space over K, identified with K^n through the choice of an affine coordinate system. The polynomials f in the ring K[x1, ..., xn] can be viewed as K-valued functions on An by evaluating f at the points in An, i.e. by choosing values in K for each xi. For each set S of polynomials in K[x1, ..., xn], define the zero-locus Z(S) to be the set of points in An on which the functions in S simultaneously vanish, that is to say
Z(S) = \left\{ x \in \mathbf{A}^{n} \mid f(x) = 0 \text{ for all } f \in S \right\}.
A subset V of An is called an affine algebraic set if V = Z(S) for some S.: 2 A nonempty affine algebraic set V is called irreducible if it cannot be written as the union of two proper algebraic subsets.: 3 An irreducible affine algebraic set is also called an affine variety.: 3 (Some authors use the phrase affine variety to refer to any affine algebraic set, irreducible or not.)
Affine varieties can be given a natural topology by declaring the closed sets to be precisely the affine algebraic sets. This topology is called the Zariski topology.: 2
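As a computational aside (a sketch, not part of the formal development; the sympy usage and the choice of S are illustrative), membership of a point in a zero-locus Z(S) is checked by evaluating every polynomial of S at the point:

```python
import sympy as sp

x, y = sp.symbols('x y')
S = [x**2 + y**2 - 1, x - y]  # a set of polynomials in K[x, y]

def in_zero_locus(point, polys):
    """A point lies in Z(S) iff every polynomial in S vanishes there."""
    subs = dict(zip((x, y), point))
    return all(p.subs(subs) == 0 for p in polys)

print(in_zero_locus((sp.sqrt(2)/2, sp.sqrt(2)/2), S))  # True: both polynomials vanish
print(in_zero_locus((1, 0), S))                        # False: x - y does not vanish
```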
Given a subset V of An, we define I(V) to be the ideal of all polynomial functions vanishing on V:
I(V) = \left\{ f \in K[x_1, \ldots, x_n] \mid f(x) = 0 \text{ for all } x \in V \right\}.
For any affine algebraic set V, the coordinate ring or structure ring of V is the quotient of the polynomial ring by this ideal.: 4
=== Projective varieties and quasi-projective varieties ===
Let k be an algebraically closed field and let Pn be the projective n-space over k. Let f in k[x0, ..., xn] be a homogeneous polynomial of degree d. It is not well-defined to evaluate f on points in Pn in homogeneous coordinates. However, because f is homogeneous, meaning that f (λx0, ..., λxn) = λd f (x0, ..., xn), it does make sense to ask whether f vanishes at a point [x0 : ... : xn]. For each set S of homogeneous polynomials, define the zero-locus of S to be the set of points in Pn on which the functions in S vanish:
Z(S) = \left\{ x \in \mathbf{P}^{n} \mid f(x) = 0 \text{ for all } f \in S \right\}.
A subset V of Pn is called a projective algebraic set if V = Z(S) for some S.: 9 An irreducible projective algebraic set is called a projective variety.: 10
Projective varieties are also equipped with the Zariski topology by declaring all algebraic sets to be closed.
Given a subset V of Pn, let I(V) be the ideal generated by all homogeneous polynomials vanishing on V. For any projective algebraic set V, the coordinate ring of V is the quotient of the polynomial ring by this ideal.: 10
A quasi-projective variety is a Zariski open subset of a projective variety. Notice that every affine variety is quasi-projective. Notice also that the complement of an algebraic set in an affine variety is a quasi-projective variety; in the context of affine varieties, such a quasi-projective variety is usually not called a variety but a constructible set.
=== Abstract varieties ===
In classical algebraic geometry, all varieties were by definition quasi-projective varieties, meaning that they were open subvarieties of closed subvarieties of a projective space. For example, in Chapter 1 of Hartshorne a variety over an algebraically closed field is defined to be a quasi-projective variety,: 15 but from Chapter 2 onwards, the term variety (also called an abstract variety) refers to a more general object, which locally is a quasi-projective variety, but when viewed as a whole is not necessarily quasi-projective; i.e. it might not have an embedding into projective space.: 105 So classically the definition of an algebraic variety required an embedding into projective space, and this embedding was used to define the topology on the variety and the regular functions on the variety. The disadvantage of such a definition is that not all varieties come with natural embeddings into projective space. For example, under this definition, the product P1 × P1 is not a variety until it is embedded into a larger projective space; this is usually done by the Segre embedding. Furthermore, any variety that admits one embedding into projective space admits many others, for example by composing the embedding with the Veronese embedding; thus many notions that should be intrinsic, such as that of a regular function, are not obviously so.
The earliest successful attempt to define an algebraic variety abstractly, without an embedding, was made by André Weil, who in his Foundations of Algebraic Geometry defined varieties using valuations. Claude Chevalley made a definition of a scheme, which served a similar purpose, but was more general. However, Alexander Grothendieck's definition of a scheme is more general still and has received the most widespread acceptance. In Grothendieck's language, an abstract algebraic variety is usually defined to be an integral, separated scheme of finite type over an algebraically closed field,: 104–105 although some authors drop the irreducibility or the reducedness or the separateness condition or allow the underlying field to be not algebraically closed. Classical algebraic varieties are the quasiprojective integral separated finite type schemes over an algebraically closed field.
==== Existence of non-quasiprojective abstract algebraic varieties ====
One of the earliest examples of a non-quasiprojective algebraic variety was given by Nagata. Nagata's example was not complete (the analog of compactness), but soon afterwards he found an algebraic surface that was complete and non-projective.: Remark 4.10.2 p.105 Since then other examples have been found: for example, it is straightforward to construct toric varieties that are not quasi-projective but complete.
== Examples ==
=== Subvariety ===
A subvariety is a subset of a variety that is itself a variety (with respect to the topological structure induced by the ambient variety). For example, every open subset of a variety is a variety. See also closed immersion.
Hilbert's Nullstellensatz says that closed subvarieties of an affine or projective variety are in one-to-one correspondence with the prime ideals or non-irrelevant homogeneous prime ideals of the coordinate ring of the variety.
=== Affine variety ===
==== Example 1 ====
Let k = C, and A2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A2 by evaluating at the points in A2. Let the subset S of C[x, y] contain a single element f(x, y):
f(x, y) = x + y - 1.
The zero-locus of f (x, y) is the set of points in A2 on which this function vanishes: it is the set of all pairs of complex numbers (x, y) such that y = 1 − x. This is called a line in the affine plane. (In the classical topology coming from the topology on the complex numbers, a complex line is a real manifold of dimension two.) This is the set Z( f ):
Z(f) = \left\{ (x, 1 - x) \in \mathbf{C}^{2} \right\}.
Thus the subset V = Z( f ) of A2 is an algebraic set. The set V is not empty. It is irreducible, as it cannot be written as the union of two proper algebraic subsets. Thus it is an affine algebraic variety.
==== Example 2 ====
Let k = C, and A2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A2 by evaluating at the points in A2. Let the subset S of C[x, y] contain a single element g(x, y):
g(x, y) = x^2 + y^2 - 1.
The zero-locus of g(x, y) is the set of points in A2 on which this function vanishes, that is the set of points (x,y) such that x2 + y2 = 1. As g(x, y) is an absolutely irreducible polynomial, this is an algebraic variety. The set of its real points (that is the points for which x and y are real numbers), is known as the unit circle; this name is also often given to the whole variety.
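The irreducibility claim for small examples like this one can be checked by machine. A sketch using sympy's multivariate factorization (illustrative; factoring over Q and over Q(i) is indicative for a conic, though absolute irreducibility over C formally needs a further argument):

```python
import sympy as sp

x, y = sp.symbols('x y')
g = x**2 + y**2 - 1

# factor_list returns the irreducible factors over the rationals.
print(sp.factor_list(g))            # (1, [(x**2 + y**2 - 1, 1)]) — no nontrivial factor

# gaussian=True extends the coefficients to Q(i), catching factors like (x + i*y).
print(sp.factor(g, gaussian=True))  # still x**2 + y**2 - 1: it does not split
```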
==== Example 3 ====
The following example is neither a hypersurface, nor a linear space, nor a single point. Let A3 be the three-dimensional affine space over C. The set of points (x, x2, x3) for x in C is an algebraic variety, and more precisely an algebraic curve that is not contained in any plane: the twisted cubic. It may be defined by the equations
y - x^2 = 0,
z - x^3 = 0.
The irreducibility of this algebraic set needs a proof. One approach in this case is to check that the projection (x, y, z) → (x, y) is injective on the set of the solutions and that its image is an irreducible plane curve.
For more difficult examples, a similar proof may always be given, but may imply a difficult computation: first a Gröbner basis computation to compute the dimension, followed by a random linear change of variables (not always needed); then a Gröbner basis computation for another monomial ordering to compute the projection and to prove that it is generically injective and that its image is a hypersurface, and finally a polynomial factorization to prove the irreducibility of the image.
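For the twisted cubic itself, the Gröbner-basis step can be reproduced directly. A sketch (the sympy calls are illustrative; note that this computation eliminates x, so the projection exhibited is (x, y, z) → (y, z) rather than the one used in the argument above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
ideal = [y - x**2, z - x**3]  # defining equations of the twisted cubic

# A lexicographic Groebner basis with x > y > z exposes the elimination ideal.
G = sp.groebner(ideal, x, y, z, order='lex')
print(G.exprs)  # [x**2 - y, x*y - z, x*z - y**2, y**3 - z**2]

# The x-free element y**3 - z**2 generates the elimination ideal: the image of
# the projection (x, y, z) -> (y, z) is the irreducible plane curve y**3 = z**2.
```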
==== General linear group ====
The set of n-by-n matrices over the base field k can be identified with the affine n2-space \mathbb{A}^{n^2} with coordinates x_{ij} such that x_{ij}(A) is the (i, j)-th entry of the matrix A. The determinant det is then a polynomial in the x_{ij} and thus defines the hypersurface H = V(det) in \mathbb{A}^{n^2}. The complement of H is then an open subset of \mathbb{A}^{n^2} that consists of all the invertible n-by-n matrices, the general linear group GL_n(k). It is an affine variety, since, in general, the complement of a hypersurface in an affine variety is affine. Explicitly, consider \mathbb{A}^{n^2} \times \mathbb{A}^1, where the affine line is given coordinate t. Then GL_n(k) amounts to the zero-locus in \mathbb{A}^{n^2} \times \mathbb{A}^1 of the polynomial in the x_{ij} and t:
t \cdot \det[x_{ij}] - 1,
i.e., the set of matrices A such that t \det(A) = 1 has a solution. This is best seen algebraically: the coordinate ring of GL_n(k) is the localization k[x_{ij} \mid 1 \le i, j \le n][\det^{-1}], which can be identified with k[x_{ij}, t \mid 1 \le i, j \le n]/(t \det - 1).
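A concrete check of this presentation for n = 2 with sympy (a sketch; the numeric matrix is an arbitrary invertible example):

```python
import sympy as sp

x11, x12, x21, x22, t = sp.symbols('x11 x12 x21 x22 t')
A = sp.Matrix([[x11, x12], [x21, x22]])

d = A.det()                 # x11*x22 - x12*x21, a polynomial in the coordinates
rel = sp.expand(t * d - 1)  # the defining relation of GL_2 inside A^4 x A^1

# A point of the zero locus: an invertible matrix together with t = 1/det.
M = {x11: 2, x12: 1, x21: 0, x22: 3}
det_val = d.subs(M)  # 6, nonzero, so the matrix is invertible
print(rel.subs(M).subs(t, sp.Rational(1, det_val)))  # 0: the relation is satisfied
```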
The multiplicative group k* of the base field k is the same as GL_1(k) and thus is an affine variety. A finite product of copies of it, (k*)^r, is an algebraic torus, which is again an affine variety.
A general linear group is an example of a linear algebraic group, an affine variety that has a structure of a group in such a way that the group operations are morphisms of varieties.
==== Characteristic variety ====
Let A be a not-necessarily-commutative algebra over a field k. Even if A is not commutative, it can still happen that A has a \mathbb{Z}-filtration so that the associated graded ring
\operatorname{gr} A = \bigoplus_{i=-\infty}^{\infty} A_i / A_{i-1}
is commutative, reduced and finitely generated as a k-algebra; i.e., gr A is the coordinate ring of an affine (reducible) variety X. For example, if A is the universal enveloping algebra of a finite-dimensional Lie algebra \mathfrak{g}, then gr A is a polynomial ring (the PBW theorem); more precisely, the coordinate ring of the dual vector space \mathfrak{g}^*.
Let M be a filtered module over A (i.e., A_i M_j \subset M_{i+j}). If gr M is finitely generated as a gr A-module, then the support of gr M in X, i.e. the locus where gr M does not vanish, is called the characteristic variety of M. The notion plays an important role in the theory of D-modules.
=== Projective variety ===
A projective variety is a closed subvariety of a projective space. That is, it is the zero locus of a set of homogeneous polynomials that generate a prime ideal.
==== Example 1 ====
A plane projective curve is the zero locus of an irreducible homogeneous polynomial in three indeterminates. The projective line P1 is an example of a projective curve; it can be viewed as the curve in the projective plane P2 = {[x, y, z]} defined by x = 0. For another example, first consider the affine cubic curve
y^2 = x^3 - x
in the 2-dimensional affine space (over a field of characteristic not two). It has the associated cubic homogeneous polynomial equation:
y^2 z = x^3 - x z^2,
which defines a curve in P2 called an elliptic curve. The curve has genus one (genus formula); in particular, it is not isomorphic to the projective line P1, which has genus zero. Using genus to distinguish curves is very basic: in fact, the genus is the first invariant one uses to classify curves (see also the construction of moduli of algebraic curves).
==== Example 2: Grassmannian ====
Let V be a finite-dimensional vector space. The Grassmannian variety Gn(V) is the set of all n-dimensional subspaces of V. It is a projective variety: it is embedded into a projective space via the Plücker embedding:
G_n(V) \hookrightarrow \mathbf{P}\left(\wedge^n V\right), \qquad \langle b_1, \ldots, b_n \rangle \mapsto [b_1 \wedge \cdots \wedge b_n]
where b_i are any set of linearly independent vectors in V, \wedge^n V is the n-th exterior power of V, and the bracket [w] means the line spanned by the nonzero vector w.
The Grassmannian variety comes with a natural vector bundle (or locally free sheaf in other terminology) called the tautological bundle, which is important in the study of characteristic classes such as Chern classes.
==== Jacobian variety and abelian variety ====
Let C be a smooth complete curve and Pic(C) the Picard group of it; i.e., the group of isomorphism classes of line bundles on C. Since C is smooth, Pic(C) can be identified as the divisor class group of C and thus there is the degree homomorphism deg: Pic(C) → \mathbb{Z}. The Jacobian variety Jac(C) of C is the kernel of this degree map; i.e., the group of the divisor classes on C of degree zero. A Jacobian variety is an example of an abelian variety, a complete variety with a compatible abelian group structure on it (the name "abelian" is however not because it is an abelian group). An abelian variety turns out to be projective (in short, algebraic theta functions give an embedding into a projective space; see equations defining abelian varieties); thus, Jac(C) is a projective variety. The tangent space to Jac(C) at the identity element is naturally isomorphic to H^1(C, \mathcal{O}_C); hence, the dimension of Jac(C) is the genus of C.
Fix a point P_0 on C. For each integer n > 0, there is a natural morphism
C^n \to \operatorname{Jac}(C), \quad (P_1, \dots, P_n) \mapsto [P_1 + \cdots + P_n - n P_0]
where C^n is the product of n copies of C. For g = 1 (i.e., C is an elliptic curve), the above morphism for n = 1 turns out to be an isomorphism;: Ch. IV, Example 1.3.7. in particular, an elliptic curve is an abelian variety.
==== Moduli varieties ====
Given an integer g ≥ 0, the set of isomorphism classes of smooth complete curves of genus g is called the moduli of curves of genus g and is denoted as \mathfrak{M}_g. There are a few ways to show this moduli has a structure of a possibly reducible algebraic variety; for example, one way is to use geometric invariant theory, which ensures a set of isomorphism classes has a (reducible) quasi-projective variety structure. Moduli such as the moduli of curves of fixed genus is typically not a projective variety; roughly the reason is that a degeneration (limit) of a smooth curve tends to be non-smooth or reducible. This leads to the notion of a stable curve of genus g ≥ 2, a not-necessarily-smooth complete curve with no terribly bad singularities and not-so-large automorphism group. The moduli of stable curves \overline{\mathfrak{M}}_g, the set of isomorphism classes of stable curves of genus g ≥ 2, is then a projective variety which contains \mathfrak{M}_g as an open dense subset. Since \overline{\mathfrak{M}}_g is obtained by adding boundary points to \mathfrak{M}_g, \overline{\mathfrak{M}}_g is colloquially said to be a compactification of \mathfrak{M}_g. Historically a paper of Mumford and Deligne introduced the notion of a stable curve to show \mathfrak{M}_g is irreducible when g ≥ 2.
The moduli of curves exemplifies a typical situation: a moduli of nice objects tends not to be projective but only quasi-projective. Another case is a moduli of vector bundles on a curve. Here, there are the notions of stable and semistable vector bundles on a smooth complete curve C. The moduli of semistable vector bundles of a given rank n and a given degree d (degree of the determinant of the bundle) is then a projective variety denoted as SU_C(n, d), which contains the set U_C(n, d) of isomorphism classes of stable vector bundles of rank n and degree d as an open subset. Since a line bundle is stable, such a moduli is a generalization of the Jacobian variety of C.
In general, in contrast to the case of moduli of curves, a compactification of a moduli need not be unique and, in some cases, different non-equivalent compactifications are constructed using different methods and by different authors. An example over \mathbb{C} is the problem of compactifying D/Γ, the quotient of a bounded symmetric domain D by an action of an arithmetic discrete group Γ. A basic example of D/Γ is when D = \mathfrak{H}_g, Siegel's upper half-space, and Γ is commensurable with Sp(2g, \mathbb{Z}); in that case, D/Γ has an interpretation as the moduli \mathfrak{A}_g of principally polarized complex abelian varieties of dimension g (a principal polarization identifies an abelian variety with its dual). The theory of toric varieties (or torus embeddings) gives a way to compactify D/Γ, a toroidal compactification of it. But there are other ways to compactify D/Γ; for example, there is the minimal compactification of D/Γ due to Baily and Borel: it is the projective variety associated to the graded ring formed by modular forms (in the Siegel case, Siegel modular forms; see also Siegel modular variety). The non-uniqueness of compactifications is due to the lack of moduli interpretations of those compactifications; i.e., they do not represent (in the category-theory sense) any natural moduli problem or, in the precise language, there is no natural moduli stack that would be an analog of the moduli stack of stable curves.
=== Non-affine and non-projective example ===
An algebraic variety can be neither affine nor projective. To give an example, let X = P1 × A1 and p: X → A1 the projection. Here X is an algebraic variety since it is a product of varieties. It is not affine since P1 is a closed subvariety of X (as the zero locus of p), but an affine variety cannot contain a projective variety of positive dimension as a closed subvariety. It is not projective either, since there is a nonconstant regular function on X; namely, p.
Another example of a non-affine non-projective variety is X = A2 − (0, 0) (cf. Morphism of varieties § Examples.)
=== Non-examples ===
Consider the affine line A1 over \mathbb{C}. The complement of the circle \{ z \in \mathbb{C} : |z|^2 = 1 \} in A1 = \mathbb{C} is not an algebraic variety (nor even an algebraic set). Note that |z|^2 - 1 is not a polynomial in z (although it is a polynomial in the real coordinates x, y). On the other hand, the complement of the origin in A1 = \mathbb{C} is an algebraic (affine) variety, since the origin is the zero-locus of z. This may be explained as follows: the affine line has dimension one and so any subvariety of it other than itself must have strictly less dimension; namely, zero.
For similar reasons, a unitary group (over the complex numbers) is not an algebraic variety, while the special linear group SL_n(\mathbb{C}) is a closed subvariety of GL_n(\mathbb{C}), the zero-locus of det − 1. (Over a different base field, a unitary group can however be given a structure of a variety.)
== Basic results ==
An affine algebraic set V is a variety if and only if I(V) is a prime ideal; equivalently, V is a variety if and only if its coordinate ring is an integral domain.: 52 : 4
Every nonempty affine algebraic set may be written uniquely as a finite union of algebraic varieties (where none of the varieties in the decomposition is a subvariety of any other).: 5
The dimension of a variety may be defined in various equivalent ways. See Dimension of an algebraic variety for details.
A product of finitely many algebraic varieties (over an algebraically closed field) is an algebraic variety. A finite product of affine varieties is affine and a finite product of projective varieties is projective.
== Isomorphism of algebraic varieties ==
Let V1, V2 be algebraic varieties. We say V1 and V2 are isomorphic, and write V1 ≅ V2, if there are regular maps φ : V1 → V2 and ψ : V2 → V1 such that the compositions ψ ∘ φ and φ ∘ ψ are the identity maps on V1 and V2 respectively.
== Discussion and generalizations ==
The basic definitions and facts above enable one to do classical algebraic geometry. To be able to do more — for example, to deal with varieties over fields that are not algebraically closed — some foundational changes are required. The modern notion of a variety is considerably more abstract than the one above, though equivalent in the case of varieties over algebraically closed fields. An abstract algebraic variety is a particular kind of scheme; the generalization to schemes on the geometric side enables an extension of the correspondence described above to a wider class of rings. A scheme is a locally ringed space such that every point has a neighbourhood that, as a locally ringed space, is isomorphic to a spectrum of a ring. Basically, a variety over k is a scheme whose structure sheaf is a sheaf of k-algebras with the property that the rings R that occur above are all integral domains and are all finitely generated k-algebras, that is to say, they are quotients of polynomial algebras by prime ideals.
This definition works over any field k. It allows you to glue affine varieties (along common open sets) without worrying whether the resulting object can be put into some projective space. This also leads to difficulties since one can introduce somewhat pathological objects, e.g. an affine line with zero doubled. Such objects are usually not considered varieties, and are eliminated by requiring the schemes underlying a variety to be separated. (Strictly speaking, there is also a third condition, namely, that one needs only finitely many affine patches in the definition above.)
Some modern researchers also remove the restriction on a variety having integral domain affine charts, and when speaking of a variety only require that the affine charts have trivial nilradical.
A complete variety is a variety such that any map from an open subset of a nonsingular curve into it can be extended uniquely to the whole curve. Every projective variety is complete, but not vice versa.
These varieties have been called "varieties in the sense of Serre", since Serre's foundational paper FAC on sheaf cohomology was written for them. They remain typical objects to start studying in algebraic geometry, even if more general objects are also used in an auxiliary way.
One way that leads to generalizations is to allow reducible algebraic sets (and fields k that aren't algebraically closed), so the rings R may not be integral domains. A more significant modification is to allow nilpotents in the sheaf of rings, that is, rings which are not reduced. This is one of several generalizations of classical algebraic geometry that are built into Grothendieck's theory of schemes.
Allowing nilpotent elements in rings is related to keeping track of "multiplicities" in algebraic geometry. For example, the closed subscheme of the affine line defined by x2 = 0 is different from the subscheme defined by x = 0 (the origin). More generally, the fiber of a morphism of schemes X → Y at a point of Y may be non-reduced, even if X and Y are reduced. Geometrically, this says that fibers of good mappings may have nontrivial "infinitesimal" structure.
There are further generalizations called algebraic spaces and stacks.
== Algebraic manifolds ==
An algebraic manifold is an algebraic variety that is also an m-dimensional manifold, and hence every sufficiently small local patch is isomorphic to km. Equivalently, the variety is smooth (free from singular points). When k is the real numbers, R, algebraic manifolds are called Nash manifolds. Algebraic manifolds can be defined as the zero set of a finite collection of analytic algebraic functions. Projective algebraic manifolds give an equivalent definition of smooth projective varieties. The Riemann sphere is one example.
== See also ==
Variety (disambiguation) — listing also several mathematical meanings
Function field of an algebraic variety
Birational geometry
Motive (algebraic geometry)
Analytic variety
Zariski–Riemann space
Semi-algebraic set
Fano variety
Mnëv's universality theorem
== Notes ==
== References ==
=== Sources ===
This article incorporates material from Isomorphism of varieties on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | Wikipedia/Projective_algebraic_set |
In theoretical physics, there are many theories with supersymmetry (SUSY) which also have internal gauge symmetries. Supersymmetric gauge theory generalizes this notion.
== Gauge theory ==
A gauge theory is a field theory with gauge symmetry. Roughly, there are two types of symmetries, global and local. A global symmetry is a symmetry applied uniformly (in some sense) to each point of a manifold. A local symmetry is a symmetry which is position dependent. Gauge symmetry is an example of a local symmetry, with the symmetry described by a Lie group (which mathematically describe continuous symmetries), which in the context of gauge theory is called the gauge group of the theory.
Quantum chromodynamics and quantum electrodynamics are famous examples of gauge theories.
== Supersymmetry ==
In particle physics, there exist particles with two kinds of particle statistics, bosons and fermions. Bosons carry integer spin values, and are characterized by the ability to have any number of identical bosons occupy a single point in space. They are thus identified with forces. Fermions carry half-integer spin values, and by the Pauli exclusion principle, identical fermions cannot occupy a single position in spacetime; fermion fields are accordingly interpreted as matter. Thus, supersymmetry is considered a strong candidate for the unification of radiation (boson-mediated forces) and matter.
This unification is given by an operator Q (or typically many operators), known as a supercharge or supersymmetry generator, which acts schematically as
Q|boson⟩ = |fermion⟩,
Q|fermion⟩ = |boson⟩.
For instance, the supersymmetry generator can take a photon as an argument and transform it into a photino and vice versa. This happens through translation in the (parameter) superspace. This superspace is a \mathbb{Z}_2-graded vector space \mathcal{W} = \mathcal{W}^0 \oplus \mathcal{W}^1, where \mathcal{W}^0 is the bosonic Hilbert space and \mathcal{W}^1 is the fermionic Hilbert space.
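For orientation, the supercharges are not independent of the spacetime symmetries: in four dimensions the minimal (N = 1) algebra closes on translations. This standard relation is stated here for context rather than taken from the text above:

```latex
\{ Q_\alpha , \bar{Q}_{\dot{\beta}} \} = 2\, \sigma^{\mu}_{\alpha \dot{\beta}} P_\mu , \qquad
\{ Q_\alpha , Q_\beta \} = \{ \bar{Q}_{\dot{\alpha}} , \bar{Q}_{\dot{\beta}} \} = 0 ,
```

where P_μ is the generator of translations; this is why composing two supersymmetry transformations moves a field in spacetime.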
== SUSY gauge theory ==
The motivation for a supersymmetric version of gauge theory is the fact that gauge invariance is consistent with supersymmetry.
The first examples were discovered by Bruno Zumino and Sergio Ferrara, and independently by Abdus Salam and James Strathdee in 1974.
Both the half-integer spin fermions and the integer spin bosons can become gauge particles. The gauge vector field and its spinorial superpartner can be made to reside in the same representation of the internal symmetry group.
Suppose we have a U(1) gauge transformation V_μ → V_μ + ∂_μ A, where V_μ is a vector field and A is the gauge function. The main difficulty in the construction of a SUSY gauge theory is to extend the above transformation in a way that is consistent with SUSY transformations.
The Wess–Zumino gauge (a prescription for supersymmetric gauge fixing) provides a successful solution to this problem. Once such a suitable gauge is obtained, the dynamics of the SUSY gauge theory work as follows: we seek a Lagrangian that is invariant under the super-gauge transformations (these transformations are an important tool needed to develop the supersymmetric version of a gauge theory). Then we can integrate the Lagrangian using the Berezin integration rules and thus obtain the action, which further leads to the equations of motion and hence provides a complete analysis of the dynamics of the theory.
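The Berezin integration rules mentioned here amount, for a single Grassmann variable θ, to the standard definitions:

```latex
\int d\theta \; 1 = 0 , \qquad \int d\theta \; \theta = 1 ,
```

so integration over the fermionic coordinates simply projects out the highest component of the superfield Lagrangian, which is how the action is extracted.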
== N = 1 SUSY in 4D (with 4 real generators) ==
In four dimensions, the minimal N = 1 supersymmetry may be written using a superspace. This superspace involves four extra fermionic coordinates θ^1, θ^2, θ̄^1, θ̄^2, transforming as a two-component spinor and its conjugate.
Every superfield, i.e. a field that depends on all coordinates of the superspace, may be expanded with respect to the new fermionic coordinates. There exists a special kind of superfields, the so-called chiral superfields, that only depend on the variables θ but not their conjugates (more precisely, \overline{D} f = 0). However, a vector superfield depends on all coordinates. It describes a gauge field and its superpartner, namely a Weyl fermion that obeys a Dirac equation.
V = C + i\theta\chi - i\overline{\theta}\overline{\chi} + \tfrac{i}{2}\theta^{2}(M + iN) - \tfrac{i}{2}\overline{\theta}^{2}(M - iN) - \theta\sigma^{\mu}\overline{\theta}\, v_{\mu} + i\theta^{2}\overline{\theta}\left(\overline{\lambda} - \tfrac{i}{2}\overline{\sigma}^{\mu}\partial_{\mu}\chi\right) - i\overline{\theta}^{2}\theta\left(\lambda + \tfrac{i}{2}\sigma^{\mu}\partial_{\mu}\overline{\chi}\right) + \tfrac{1}{2}\theta^{2}\overline{\theta}^{2}\left(D + \tfrac{1}{2}\Box C\right)
V is the vector superfield (prepotential) and is real (V = V̄). The fields on the right-hand side are component fields.
The gauge transformations act as
V \to V + \Lambda + \overline{\Lambda}
where Λ is any chiral superfield.
It's easy to check that the chiral superfield
W_{\alpha} \equiv -\tfrac{1}{4}\overline{D}^{2} D_{\alpha} V
is gauge invariant. So is its complex conjugate \overline{W}_{\dot{\alpha}}.
A non-supersymmetric covariant gauge which is often used is the Wess–Zumino gauge. Here, C, χ, M and N are all set to zero. The residual gauge symmetries are gauge transformations of the traditional bosonic type.
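Setting C, χ, M and N to zero in the expansion of V displayed above leaves, in the Wess–Zumino gauge,

```latex
V_{\text{WZ}} = -\theta \sigma^{\mu} \bar{\theta}\, v_{\mu}
  + i \theta^{2} \bar{\theta} \bar{\lambda}
  - i \bar{\theta}^{2} \theta \lambda
  + \tfrac{1}{2} \theta^{2} \bar{\theta}^{2} D ,
```

so the surviving components are the gauge field v_μ, the gaugino λ, and the auxiliary field D. (This is a direct specialization of the expansion above, written out for convenience.)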
A chiral superfield X with a charge of q transforms as
X \to e^{q\Lambda} X, \qquad \overline{X} \to e^{q\overline{\Lambda}} \overline{X}
Therefore X̄e^{−qV}X is gauge invariant. Here e^{−qV} is called a bridge since it "bridges" a field which transforms under Λ only with a field which transforms under Λ̄ only.
More generally, if we have a real gauge group G that we wish to supersymmetrize, we first have to complexify it to G_c; e^{−qV} then acts as a compensator for the complex gauge transformations, in effect absorbing them and leaving only the real parts. This is what is being done in the Wess–Zumino gauge.
=== Differential superforms ===
Let's rephrase everything to look more like a conventional Yang–Mills gauge theory. We have a U(1) gauge symmetry acting upon full superspace with a 1-superform gauge connection A. In the analytic basis for the tangent space, the covariant derivative is given by
D_{M} = d_{M} + iq A_{M}. Integrability conditions for chiral superfields with the chiral constraint \overline{D}_{\dot{\alpha}} X = 0 leave us with
\left\{ \overline{D}_{\dot{\alpha}}, \overline{D}_{\dot{\beta}} \right\} = F_{\dot{\alpha}\dot{\beta}} = 0.
A similar constraint for antichiral superfields leaves us with F_{αβ} = 0. This means that we can either gauge fix A_{\dot{\alpha}} = 0 or A_{\alpha} = 0 but not both simultaneously. Call the two different gauge-fixing schemes I and II respectively. In gauge I, \overline{d}_{\dot{\alpha}} X = 0, and in gauge II, d_{\alpha} X = 0. Now, the trick is to use two different gauges simultaneously: gauge I for chiral superfields and gauge II for antichiral superfields. In order to bridge between the two different gauges, we need a gauge transformation. Call it e^{−V} (by convention). If we were using one gauge for all fields, X̄X would be gauge invariant. However, we need to convert gauge I to gauge II, transforming X to (e^{−V})^q X. So, the gauge invariant quantity is X̄e^{−qV}X.
In gauge I, we still have the residual gauge e^Λ where \overline{d}_{\dot{\alpha}} \Lambda = 0, and in gauge II, we have the residual gauge e^{Λ̄} satisfying d_{\alpha} \overline{\Lambda} = 0. Under the residual gauges, the bridge transforms as
e^{-V} \to e^{-\overline{\Lambda} - V - \Lambda}.
Without any additional constraints, the bridge e^{−V} wouldn't give all the information about the gauge field. However, with the additional constraint F_{\dot{\alpha}\beta} = 0, there is only one unique gauge field which is compatible with the bridge modulo gauge transformations. Now, the bridge gives exactly the same information content as the gauge field.
== Theories with 8 or more SUSY generators (N > 1) ==
In theories with higher supersymmetry (and perhaps higher dimension), a vector superfield typically describes not only a gauge field and a Weyl fermion but also at least one complex scalar field.
== Examples ==
=== Pure supersymmetric gauge theories ===
N = 1 Super Yang–Mills
N = 2 Super Yang–Mills
N = 4 Super Yang–Mills
=== Supersymmetric gauge theories with matter ===
Super QCD
MSSM (Minimal supersymmetric Standard Model)
NMSSM (Next-to-minimal supersymmetric Standard Model)
== See also ==
superpotential
D-term
F-term
Supermultiplet
Supersymmetric quantum mechanics
== References ==
Stephen P. Martin. A Supersymmetry Primer, arXiv:hep-ph/9709356 .
Bailin, David; Love, Alexander (1994). Supersymmetric Gauge Field Theory and String Theory. Taylor & Francis. ISBN 9780367805807.
Kulshreshtha, D. S.; Mueller-Kirsten, H. J. W. (1991). "Quantization of systems with constraints: The Faddeev–Jackiw method versus Dirac's method applied to superfields". Physical Review D. 43 (10): 3376–3383. Bibcode:1991PhRvD..43.3376K. doi:10.1103/PhysRevD.43.3376. PMID 10013289.
The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain why there is more matter than anti-matter, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses.
The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
== Historical background ==
In 1928, Paul Dirac introduced the Dirac equation, which implied the existence of antimatter.
In 1954, Yang Chen-Ning and Robert Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to nonabelian groups to provide an explanation for strong interactions. In 1957, Chien-Shiung Wu demonstrated parity was not conserved in the weak interaction.
In 1961, Sheldon Glashow combined the electromagnetic and weak interactions. In 1964, Murray Gell-Mann and George Zweig introduced quarks and that same year Oscar W. Greenberg implicitly introduced color charge of quarks. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form.
In 1970, Sheldon Glashow, John Iliopoulos, and Luciano Maiani introduced the GIM mechanism, predicting the charm quark. In 1973, David Gross and Frank Wilczek, and independently David Politzer, discovered that non-Abelian gauge theories, like the color theory of the strong force, have asymptotic freedom. In 1976, Martin Perl discovered the tau lepton at SLAC. In 1977, a team led by Leon Lederman at Fermilab discovered the bottom quark.
The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons.
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W± and Z0 bosons were discovered experimentally in 1983; and the ratio of their masses was found to be as the Standard Model predicted.
The theory of the strong interaction (i.e. quantum chromodynamics, QCD), to which many contributed, acquired its modern form in 1973–74 when asymptotic freedom was proposed (a development that made QCD the main focus of theoretical research) and experiments confirmed that the hadrons were composed of fractionally charged quarks.
The term "Standard Model" was introduced by Abraham Pais and Sam Treiman in 1975, with reference to the electroweak theory with four quarks. Steven Weinberg has since claimed priority, explaining that he chose the term Standard Model out of a sense of modesty and used it in 1973 during a talk in Aix-en-Provence in France.
== Particle content ==
The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as color charge.
All particles can be summarized as follows:
Notes:
[†] An anti-electron (e+) is conventionally called a "positron".
=== Fermions ===
The Standard Model includes 12 elementary particles of spin 1⁄2, known as fermions. Fermions respect the Pauli exclusion principle, meaning that two identical fermions cannot simultaneously occupy the same quantum state in the same atom. Each fermion has a corresponding antiparticle, which are particles that have corresponding properties with the exception of opposite charges. Fermions are classified based on how they interact, which is determined by the charges they carry, into two groups: quarks and leptons. Within each group, pairs of particles that exhibit similar physical behaviors are then grouped into generations (see the table). Each member of a generation has a greater mass than the corresponding particle of generations prior. Thus, there are three generations of quarks and leptons. As first-generation particles do not decay, they comprise all of ordinary (baryonic) matter. Specifically, all atoms consist of electrons orbiting around the atomic nucleus, ultimately constituted of up and down quarks. On the other hand, second- and third-generation charged particles decay with very short half-lives and can only be observed in high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter.
There are six quarks: up, down, charm, strange, top, and bottom. Quarks carry color charge, and hence interact via the strong interaction. The color confinement phenomenon results in quarks being strongly bound together such that they form color-neutral composite particles called hadrons; quarks cannot individually exist and must always bind with other quarks. Hadrons can contain either a quark-antiquark pair (mesons) or three quarks (baryons). The lightest baryons are the nucleons: the proton and neutron. Quarks also carry electric charge and weak isospin, and thus interact with other fermions through electromagnetism and weak interaction. The six leptons consist of the electron, electron neutrino, muon, muon neutrino, tau, and tau neutrino. The leptons do not carry color charge, and do not respond to strong interaction. The charged leptons carry an electric charge of −1 e, while the three neutrinos carry zero electric charge. Thus, the neutrinos' motions are influenced by only the weak interaction and gravity, making them difficult to observe.
=== Gauge bosons ===
The Standard Model includes 4 kinds of gauge bosons of spin 1, bosons being quantum particles with integer spin. The gauge bosons are defined as force carriers, as they are responsible for mediating the fundamental interactions. The Standard Model explains the four fundamental forces as arising from the interactions, with fermions exchanging virtual force carrier particles, thus mediating the forces. At a macroscopic scale, this manifests as a force. As a result, gauge bosons do not follow the Pauli exclusion principle that constrains fermions; bosons do not have a theoretical limit on their spatial density. The types of gauge bosons are described below.
Electromagnetism: Photons mediate the electromagnetic force, responsible for interactions between electrically charged particles. The photon is massless and is described by the theory of quantum electrodynamics (QED).
Strong Interactions: Gluons mediate the strong interactions, which binds quarks to each other by influencing the color charge, with the interactions being described in the theory of quantum chromodynamics (QCD). They have no mass, and there are eight distinct gluons, with each being denoted through a color-anticolor charge combination (e.g. red–antigreen). As gluons have an effective color charge, they can also interact amongst themselves.
Weak Interactions: The W+, W−, and Z gauge bosons mediate the weak interactions between all fermions, being responsible for radioactivity. They have mass, with the Z having more mass than the W±. The weak interactions involving the W± act only on left-handed particles and right-handed antiparticles. The W± carries an electric charge of +1 and −1 and couples to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and right-handed antiparticles. These three gauge bosons along with the photons are grouped together, as collectively mediating the electroweak interaction.
Gravity: Gravity is currently unexplained in the Standard Model; a hypothetical mediating particle, the graviton, has been proposed but not observed. This is due to the incompatibility of quantum mechanics and Einstein's theory of general relativity, regarded as being the best explanation for gravity. In general relativity, gravity is explained as being the geometric curving of spacetime.
The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons. The interactions between all the particles described by the Standard Model are summarized by the diagrams on the right of this section.
=== Higgs boson ===
The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs (and others) in 1964, when he showed that Goldstone's 1962 theorem (generic continuous symmetry, which is spontaneously broken) provides a third polarisation of a massive vector field. Hence, Goldstone's original scalar doublet, the massive spin-zero particle, was proposed as the Higgs boson, and is a key building block in the Standard Model. It has no intrinsic spin, and for that reason is classified as a boson with spin-0.
The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself.
Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles must become visible at energies above 1.4 TeV; therefore, the LHC (designed to collide two 7 TeV proton beams) was built to answer the question of whether the Higgs boson actually exists.
On 4 July 2012, two of the experiments at the LHC (ATLAS and CMS) both reported independently that they had found a new particle with a mass of about 125 GeV/c2 (about 133 proton masses, on the order of 10−25 kg), which is "consistent with the Higgs boson". On 13 March 2013, it was confirmed to be the searched-for Higgs boson.
== Theoretical aspects ==
=== Construction of the Standard Model Lagrangian ===
Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time.
The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries.
The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3) × SU(2) × U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). Upon writing the most general Lagrangian, one finds that the dynamics depends on 19 parameters, whose numerical values are established by experiment. The parameters are summarized in the table (made visible by clicking "show") above.
==== Quantum chromodynamics sector ====
The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons; it is a Yang–Mills gauge theory with SU(3) symmetry, generated by {\displaystyle T^{a}=\lambda ^{a}/2}. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by
{\displaystyle {\mathcal {L}}_{\text{QCD}}={\overline {\psi }}i\gamma ^{\mu }D_{\mu }\psi -{\frac {1}{4}}G_{\mu \nu }^{a}G_{a}^{\mu \nu },}
where {\displaystyle \psi } is a three-component column vector of Dirac spinors, each element of which refers to a quark field with a specific color charge (i.e. red, blue, and green); summation over flavor (i.e. up, down, strange, etc.) is implied.
The gauge covariant derivative of QCD is defined by
{\displaystyle D_{\mu }\equiv \partial _{\mu }-ig_{\text{s}}{\frac {1}{2}}\lambda ^{a}G_{\mu }^{a}}, where
{\displaystyle \gamma ^{\mu }} are the Dirac matrices,
{\displaystyle G_{\mu }^{a}} is the 8-component ({\displaystyle a=1,2,\dots ,8}) SU(3) gauge field,
{\displaystyle \lambda ^{a}} are the 3 × 3 Gell-Mann matrices, generators of the SU(3) color group,
{\displaystyle G_{\mu \nu }^{a}} represents the gluon field strength tensor (written out just below this list), and
{\displaystyle g_{\text{s}}} is the strong coupling constant.
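For completeness, the gluon field strength tensor named in the list takes the standard Yang–Mills form (a standard definition, added here for illustration; it is not spelled out in the original list):

```latex
% Standard Yang-Mills field strength for the gluon field:
G_{\mu\nu}^{a} = \partial_{\mu}G_{\nu}^{a} - \partial_{\nu}G_{\mu}^{a}
               + g_{\text{s}}\, f^{abc}\, G_{\mu}^{b} G_{\nu}^{c}
```

where f^{abc} are the SU(3) structure constants, fixed by [λ^a, λ^b] = 2i f^{abc} λ^c. The quadratic term in the gauge field is what makes gluons self-interacting, unlike the photon.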
The QCD Lagrangian is invariant under local SU(3) gauge transformations, i.e., transformations of the form {\displaystyle \psi \rightarrow \psi '=U\psi }, where {\displaystyle U=e^{-ig_{\text{s}}\lambda ^{a}\phi ^{a}(x)}} is a 3 × 3 unitary matrix with determinant 1, making it a member of the group SU(3), and {\displaystyle \phi ^{a}(x)} is an arbitrary function of spacetime.
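As a concrete numerical check of the generators quoted at the start of this section (an illustrative sketch, not part of the article), the following snippet builds the eight Gell-Mann matrices and verifies that the T^a = λ^a/2 are Hermitian, traceless, and normalized as Tr(T^a T^b) = δ^{ab}/2:

```python
# Numerical check that T^a = lambda^a / 2 satisfy the standard SU(3)
# normalization Tr(T^a T^b) = delta^{ab} / 2 (illustrative sketch).
import numpy as np

# The eight 3x3 Gell-Mann matrices lambda^1 ... lambda^8.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3)

T = [l / 2 for l in (l1, l2, l3, l4, l5, l6, l7, l8)]  # generators T^a

for a in range(8):
    assert abs(np.trace(T[a])) < 1e-12          # traceless
    assert np.allclose(T[a], T[a].conj().T)     # Hermitian
    for b in range(8):
        expected = 0.5 if a == b else 0.0
        assert abs(np.trace(T[a] @ T[b]) - expected) < 1e-12
print("Gell-Mann generators satisfy Tr(T^a T^b) = delta^{ab}/2")
```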
==== Electroweak sector ====
The electroweak sector is a Yang–Mills gauge theory with the symmetry group U(1) × SU(2)L,
{\displaystyle {\mathcal {L}}_{\text{EW}}={\overline {Q}}_{{\text{L}}j}i\gamma ^{\mu }D_{\mu }Q_{{\text{L}}j}+{\overline {u}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }u_{{\text{R}}j}+{\overline {d}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }d_{{\text{R}}j}+{\overline {\ell }}_{{\text{L}}j}i\gamma ^{\mu }D_{\mu }\ell _{{\text{L}}j}+{\overline {e}}_{{\text{R}}j}i\gamma ^{\mu }D_{\mu }e_{{\text{R}}j}-{\tfrac {1}{4}}W_{a}^{\mu \nu }W_{\mu \nu }^{a}-{\tfrac {1}{4}}B^{\mu \nu }B_{\mu \nu },}
where the subscript {\displaystyle j} sums over the three generations of fermions; {\displaystyle Q_{\text{L}},u_{\text{R}}}, and {\displaystyle d_{\text{R}}} are the left-handed doublet, right-handed singlet up-type, and right-handed singlet down-type quark fields; and {\displaystyle \ell _{\text{L}}} and {\displaystyle e_{\text{R}}} are the left-handed doublet and right-handed singlet lepton fields.
The electroweak gauge covariant derivative is defined as
{\displaystyle D_{\mu }\equiv \partial _{\mu }-ig'{\tfrac {1}{2}}Y_{\text{W}}B_{\mu }-ig{\tfrac {1}{2}}{\vec {\tau }}_{\text{L}}{\vec {W}}_{\mu }}, where
{\displaystyle B_{\mu }} is the U(1) gauge field,
{\displaystyle Y_{\text{W}}} is the weak hypercharge – the generator of the U(1) group,
{\displaystyle {\vec {W}}_{\mu }} is the 3-component SU(2) gauge field,
{\displaystyle {\vec {\tau }}_{\text{L}}} are the Pauli matrices – infinitesimal generators of the SU(2) group – with subscript L to indicate that they only act on left-chiral fermions,
g′ and g are the U(1) and SU(2) coupling constants respectively,
{\displaystyle W^{a\mu \nu }} ({\displaystyle a=1,2,3}) and {\displaystyle B^{\mu \nu }} are the field strength tensors for the weak isospin and weak hypercharge fields, written out below.
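The two field strength tensors just named have the standard abelian and non-abelian forms (standard definitions, added here for completeness):

```latex
% U(1) and SU(2) field strengths:
B_{\mu\nu} = \partial_{\mu}B_{\nu} - \partial_{\nu}B_{\mu}, \qquad
W_{\mu\nu}^{a} = \partial_{\mu}W_{\nu}^{a} - \partial_{\nu}W_{\mu}^{a}
               + g\,\epsilon^{abc}\, W_{\mu}^{b} W_{\nu}^{c}
```

The ε^{abc} term (the SU(2) structure constants) makes the W bosons self-interacting, in contrast to the abelian B field.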
Notice that the addition of fermion mass terms into the electroweak Lagrangian is forbidden, since terms of the form {\displaystyle m{\overline {\psi }}\psi } do not respect U(1) × SU(2)L gauge invariance. Neither is it possible to add explicit mass terms for the U(1) and SU(2) gauge fields. The Higgs mechanism is responsible for the generation of the gauge boson masses, and the fermion masses result from Yukawa-type interactions with the Higgs field.
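To spell out the obstruction (a standard one-line argument, added here as the missing step), decompose the fermion into chiral halves; the mass term couples the left-handed field, an SU(2)L doublet, to the right-handed field, a singlet, so it cannot be gauge invariant on its own:

```latex
% Chiral decomposition of a Dirac mass term (standard identity):
m\overline{\psi}\psi
  = m\left(\overline{\psi}_{\text{L}}\psi_{\text{R}}
         + \overline{\psi}_{\text{R}}\psi_{\text{L}}\right),
\qquad
\psi_{\text{L,R}} = \tfrac{1}{2}\left(1 \mp \gamma^{5}\right)\psi
```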
==== Higgs sector ====
In the Standard Model, the Higgs field is an SU(2)L doublet of complex scalar fields with four degrees of freedom:
{\displaystyle \varphi ={\begin{pmatrix}\varphi ^{+}\\\varphi ^{0}\end{pmatrix}}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}\varphi _{1}+i\varphi _{2}\\\varphi _{3}+i\varphi _{4}\end{pmatrix}},}
where the superscripts + and 0 indicate the electric charge {\displaystyle Q} of the components. The weak hypercharge {\displaystyle Y_{\text{W}}} of both components is 1. Before symmetry breaking, the Higgs Lagrangian is
{\displaystyle {\mathcal {L}}_{\text{H}}=\left(D_{\mu }\varphi \right)^{\dagger }\left(D^{\mu }\varphi \right)-V(\varphi ),}
where {\displaystyle D_{\mu }} is the electroweak gauge covariant derivative defined above and {\displaystyle V(\varphi )} is the potential of the Higgs field. The square of the covariant derivative leads to three- and four-point interactions between the electroweak gauge fields {\displaystyle W_{\mu }^{a}} and {\displaystyle B_{\mu }} and the scalar field {\displaystyle \varphi }. The scalar potential is given by
{\displaystyle V(\varphi )=-\mu ^{2}\varphi ^{\dagger }\varphi +\lambda \left(\varphi ^{\dagger }\varphi \right)^{2},}
where {\displaystyle \mu ^{2}>0}, so that {\displaystyle \varphi } acquires a non-zero vacuum expectation value, which generates masses for the electroweak gauge fields (the Higgs mechanism), and {\displaystyle \lambda >0}, so that the potential is bounded from below. The quartic term describes self-interactions of the scalar field {\displaystyle \varphi }.
The minimum of the potential is degenerate, with an infinite number of equivalent ground-state solutions, which occur when {\displaystyle \varphi ^{\dagger }\varphi ={\tfrac {\mu ^{2}}{2\lambda }}}.
It is possible to perform a gauge transformation on {\displaystyle \varphi } such that the ground state is transformed to a basis where {\displaystyle \varphi _{1}=\varphi _{2}=\varphi _{4}=0} and {\displaystyle \varphi _{3}={\tfrac {\mu }{\sqrt {\lambda }}}\equiv v}.
This breaks the symmetry of the ground state. The expectation value of {\displaystyle \varphi } now becomes
{\displaystyle \langle \varphi \rangle ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}0\\v\end{pmatrix}},}
where {\displaystyle v} has units of mass and sets the scale of electroweak physics. This is the only dimensional parameter of the Standard Model and has a measured value of ~246 GeV/c².
After symmetry breaking, the masses of the W and Z are given by
{\displaystyle m_{\text{W}}={\frac {1}{2}}gv} and {\displaystyle m_{\text{Z}}={\frac {1}{2}}{\sqrt {g^{2}+g'^{2}}}v}, which can be viewed as predictions of the theory. The photon remains massless. The mass of the Higgs boson is
{\displaystyle m_{\text{H}}={\sqrt {2\mu ^{2}}}={\sqrt {2\lambda }}v}. Since {\displaystyle \mu } and {\displaystyle \lambda } are free parameters, the Higgs boson's mass could not be predicted beforehand and had to be determined experimentally.
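To make the tree-level relations above concrete, here is a small numerical sketch (illustrative only; the coupling values g, g′ and the vev v are approximate inputs quoted near the Z mass scale, and loop corrections are ignored, so the outputs only roughly match the measured masses):

```python
# Tree-level mass relations of the Higgs sector (illustrative sketch).
import math

v = 246.22        # Higgs vacuum expectation value, GeV
g = 0.652         # SU(2) coupling (approximate, near the Z mass scale)
g_prime = 0.357   # U(1) hypercharge coupling (approximate)
lam = 0.129       # quartic self-coupling inferred from m_H ~ 125 GeV

m_W = 0.5 * g * v                               # m_W = g v / 2
m_Z = 0.5 * math.sqrt(g**2 + g_prime**2) * v    # m_Z = sqrt(g^2 + g'^2) v / 2
m_H = math.sqrt(2 * lam) * v                    # m_H = sqrt(2 lambda) v

print(f"m_W ~ {m_W:.1f} GeV (measured ~80.4)")
print(f"m_Z ~ {m_Z:.1f} GeV (measured ~91.2)")
print(f"m_H ~ {m_H:.1f} GeV (measured ~125.3)")
```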
==== Yukawa sector ====
The Yukawa interaction terms are:
{\displaystyle {\mathcal {L}}_{\text{Yukawa}}=(Y_{\text{u}})_{mn}({\bar {Q}}_{\text{L}})_{m}{\tilde {\varphi }}(u_{\text{R}})_{n}+(Y_{\text{d}})_{mn}({\bar {Q}}_{\text{L}})_{m}\varphi (d_{\text{R}})_{n}+(Y_{\text{e}})_{mn}({\bar {\ell }}_{\text{L}})_{m}{\varphi }(e_{\text{R}})_{n}+\mathrm {h.c.} }
where {\displaystyle Y_{\text{u}}}, {\displaystyle Y_{\text{d}}}, and {\displaystyle Y_{\text{e}}} are 3 × 3 matrices of Yukawa couplings, with the mn term giving the coupling of the generations m and n, and h.c. means the Hermitian conjugate of the preceding terms. The fields {\displaystyle Q_{\text{L}}} and {\displaystyle \ell _{\text{L}}} are the left-handed quark and lepton doublets. Likewise, {\displaystyle u_{\text{R}},d_{\text{R}}} and {\displaystyle e_{\text{R}}} are the right-handed up-type quark, down-type quark, and lepton singlets. Finally, {\displaystyle \varphi } is the Higgs doublet and {\displaystyle {\tilde {\varphi }}=i\tau _{2}\varphi ^{*}} is its charge-conjugate state.
The Yukawa terms are invariant under the SU(2)L × U(1)Y gauge symmetry of the Standard Model and generate masses for all fermions after spontaneous symmetry breaking.
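As an illustration of the last point (a standard consequence of the terms above, not spelled out in the article), substituting the Higgs vacuum expectation value into the Yukawa terms turns each coupling into a fermion mass:

```latex
% Fermion mass from a Yukawa coupling, with the top quark as an example:
m_f = \frac{y_f\, v}{\sqrt{2}}
\qquad\Longrightarrow\qquad
y_t = \frac{\sqrt{2}\, m_t}{v}
    \approx \frac{\sqrt{2}\times 172.5\ \text{GeV}}{246.2\ \text{GeV}}
    \approx 0.99
```

The top quark is thus the only fermion with a Yukawa coupling of order one; all others are much smaller, which is part of the unexplained hierarchy discussed under Challenges below.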
== Fundamental interactions ==
The Standard Model describes three of the four fundamental interactions in nature; only gravity remains unexplained. In the Standard Model, such an interaction is described as an exchange of bosons between the objects affected, such as a photon for the electromagnetic force and a gluon for the strong interaction. Those particles are called force carriers or messenger particles.
=== Gravity ===
Despite being perhaps the most familiar fundamental interaction, gravity is not described by the Standard Model, due to contradictions that arise when combining general relativity, the modern theory of gravity, and quantum mechanics. However, gravity is so weak at microscopic scales that it is essentially unmeasurable there. The graviton is postulated to be the mediating particle, but it has not yet been proved to exist.
=== Electromagnetism ===
Electromagnetism is the only long-range force in the Standard Model. It is mediated by photons and couples to electric charge. Electromagnetism is responsible for a wide range of phenomena including atomic electron shell structure, chemical bonds, electric circuits and electronics. Electromagnetic interactions in the Standard Model are described by quantum electrodynamics.
=== Weak nuclear force ===
The weak interaction is responsible for various forms of particle decay, such as beta decay. It is weak and short-range because its mediating particles, the W and Z bosons, have mass. W bosons have electric charge and mediate interactions that change the particle type (referred to as flavor) and charge; interactions mediated by W bosons are called charged current interactions. Z bosons are neutral and mediate neutral current interactions, which do not change particle flavor. Thus Z bosons are similar to the photon, apart from being massive and interacting with the neutrino. The weak interaction is also the only interaction to violate parity and CP. Parity violation is maximal for charged current interactions, since the W boson interacts exclusively with left-handed fermions and right-handed antifermions.
In the Standard Model, the weak force is understood in terms of the electroweak theory, which states that the weak and electromagnetic interactions become united into a single electroweak interaction at high energies.
=== Strong nuclear force ===
The strong nuclear force is responsible for hadronic and nuclear binding. It is mediated by gluons, which couple to color charge. Since gluons themselves carry color charge, the strong force exhibits confinement and asymptotic freedom. Confinement means that only color-neutral particles can exist in isolation; therefore, at low energies quarks exist only within hadrons, never in isolation. Asymptotic freedom means that the strong force becomes weaker as the energy scale increases. At their respective scales, the strong force overpowers the electrostatic repulsion of protons within nuclei and of quarks within hadrons.
While quarks are bound in hadrons by the fundamental strong interaction, which is mediated by gluons, nucleons are bound by an emergent phenomenon termed the residual strong force or nuclear force. This interaction is mediated by mesons, such as the pion. The color charges inside the nucleon cancel out, meaning most of the gluon and quark fields cancel out outside of the nucleon. However, some residue "leaks" out and appears as the exchange of virtual mesons, which causes the attractive force between nucleons. The (fundamental) strong interaction is described by quantum chromodynamics, which is a component of the Standard Model.
== Tests and predictions ==
The Standard Model predicted the existence of the W and Z bosons, gluon, top quark and charm quark, and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision.
The Standard Model also predicted the existence of the Higgs boson, which was found in 2012 at the Large Hadron Collider, the final fundamental particle predicted by the Standard Model to be experimentally confirmed.
== Challenges ==
Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proved. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. A key question related to the consistency is the Yang–Mills existence and mass gap problem.
Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow. To accommodate this finding, the classic Standard Model can be modified to include neutrino mass, although it is not obvious exactly how this should be done.
If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson. On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory.
This is natural in the left-right symmetric extension of the Standard Model and in certain grand unified theories. As long as new physics appears below or around 10¹⁴ GeV, the neutrino masses can be of the right order of magnitude.
Theoretical and experimental research has attempted to extend the Standard Model into a unified field theory or a theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include:
The model does not explain gravitation, although physical confirmation of a theoretical particle known as a graviton would account for it to a degree. Though it addresses strong and electroweak interactions, the Standard Model does not consistently explain the canonical theory of gravitation, general relativity, in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe.
Some physicists consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters.
The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles.
The model is inconsistent with the emerging Lambda-CDM model of cosmology. Contentions include the absence of an explanation in the Standard Model of particle physics for the observed amount of cold dark matter (CDM) and its contributions to dark energy, which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model.
Currently, no proposed theory of everything has been widely accepted or verified.
== See also ==
== Notes ==
== References ==
== Further reading ==
Oerter, Robert (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics. Plume. ISBN 978-0-452-28786-0.
Schumm, Bruce A. (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. ISBN 978-0-8018-7971-5.
"The Standard Model of Particle Physics Interactive Graphic".
=== Introductory textbooks ===
Robert Mann (2009). An Introduction to Particle Physics and the Standard Model. CRC Press. ISBN 9780429141225.
W. Greiner; B. Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0.
J.E. Dodd; B.M. Gripaios (2020). The Ideas of Particle Physics: An Introduction for Scientists. Cambridge University Press. ISBN 978-1-108-72740-2.
D.J. Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 978-0-471-60386-3.
W. N. Cottingham and D. A. Greenwood (2023). An Introduction to the Standard Model of Particle Physics. Cambridge University Press. ISBN 9781009401685.
=== Advanced textbooks ===
T.P. Cheng; L.F. Li (2006). Gauge theory of elementary particle physics. Oxford University Press. ISBN 978-0-19-851961-4. Highlights the gauge theory aspects of the Standard Model.
J.F. Donoghue; E. Golowich; B.R. Holstein (1994). Dynamics of the Standard Model. Cambridge University Press. ISBN 978-0-521-47652-2. Highlights dynamical and phenomenological aspects of the Standard Model.
Ken J. Barnes (2010). Group Theory for the Standard Model of Particle Physics and Beyond. Taylor & Francis. ISBN 9780429184550.
Nagashima, Yorikiyo (2013). Elementary Particle Physics: Foundations of the Standard Model, Volume 2. Wiley. ISBN 978-3-527-64890-0. 920 pages.
Schwartz, Matthew D. (2014). Quantum Field Theory and the Standard Model. Cambridge University. ISBN 978-1-107-03473-0. 952 pages.
Langacker, Paul (2009). The Standard Model and Beyond. CRC Press. ISBN 978-1-4200-7907-4. 670 pages. Highlights group-theoretical aspects of the Standard Model.
=== Journal articles ===
E.S. Abers; B.W. Lee (1973). "Gauge theories". Physics Reports. 9 (1): 1–141. Bibcode:1973PhR.....9....1A. doi:10.1016/0370-1573(73)90027-6.
M. Baak; et al. (2012). "The Electroweak Fit of the Standard Model after the Discovery of a New Boson at the LHC". The European Physical Journal C. 72 (11): 2205. arXiv:1209.2716. Bibcode:2012EPJC...72.2205B. doi:10.1140/epjc/s10052-012-2205-9. S2CID 15052448.
Y. Hayato; et al. (1999). "Search for Proton Decay through p → νK+ in a Large Water Cherenkov Detector". Physical Review Letters. 83 (8): 1529–1533. arXiv:hep-ex/9904020. Bibcode:1999PhRvL..83.1529H. doi:10.1103/PhysRevLett.83.1529. S2CID 118326409.
S.F. Novaes (2000). "Standard Model: An Introduction". arXiv:hep-ph/0001283.
D.P. Roy (1999). "Basic Constituents of Matter and their Interactions – A Progress Report". arXiv:hep-ph/9912523.
F. Wilczek (2004). "The Universe Is A Strange Place". Nuclear Physics B: Proceedings Supplements. 134: 3. arXiv:astro-ph/0401347. Bibcode:2004NuPhS.134....3W. doi:10.1016/j.nuclphysbps.2004.08.001. S2CID 28234516.
== External links ==
"The Standard Model explained in Detail by CERN's John Ellis" omega tau podcast.
The Standard Model on the CERN website explains how the basic building blocks of matter interact, governed by four fundamental forces.
Particle Physics: Standard Model, Leonard Susskind lectures (2010). | Wikipedia/Standard_model |
Continuum (pl.: continua or continuums) theories or models explain variation as involving gradual quantitative transitions without abrupt changes or discontinuities. In contrast, categorical theories or models explain variation using qualitatively different states.
== In physics ==
In physics, for example, the space-time continuum model describes space and time as part of the same continuum rather than as separate entities. A spectrum in physics, such as the electromagnetic spectrum, is often termed either continuous (with energy at all wavelengths) or discrete (with energy at only certain wavelengths).
In contrast, quantum mechanics uses quanta, certain defined amounts (i.e. categorical amounts) which are distinguished from continuous amounts.
== In mathematics and philosophy ==
A good introduction to the philosophical issues involved is John Lane Bell's essay in the Stanford Encyclopedia of Philosophy. A significant divide is provided by the law of excluded middle. It determines the divide between intuitionistic continua such as Brouwer's and Lawvere's, and classical ones such as Stevin's and Robinson's. Bell isolates two distinct historical conceptions of infinitesimal, one by Leibniz and one by Nieuwentijdt, and argues that Leibniz's conception was implemented in Robinson's hyperreal continuum, whereas Nieuwentijdt's, in Lawvere's smooth infinitesimal analysis, characterized by the presence of nilsquare infinitesimals: "It may be said that Leibniz recognized the need for the first, but not the second type of infinitesimal and Nieuwentijdt, vice versa. It is of interest to note that Leibnizian infinitesimals (differentials) are realized in nonstandard analysis, and nilsquare infinitesimals in smooth infinitesimal analysis".
== In social sciences, psychology and psychiatry ==
In social sciences in general, psychology and psychiatry included, data about differences between individuals, like any data, can be collected and measured using different levels of measurement. Those levels include dichotomous (a person either has a personality trait or not) and non-dichotomous approaches. While the non-dichotomous approach allows for understanding that everyone lies somewhere on a particular personality dimension, the dichotomous (nominal categorical and ordinal) approaches only seek to confirm that a particular person either has or does not have a particular mental disorder.
Expert witnesses in particular are trained to help courts translate the data into the legal dichotomy (e.g. 'guilty' vs. 'not guilty'), which applies to law, sociology and ethics.
== In linguistics ==
In linguistics, the range of dialects spoken over a geographical area that differ slightly between neighboring areas is known as a dialect continuum. A language continuum is a similar description for the merging of neighboring languages without a clear defined boundary. Examples of dialect or language continuums include the varieties of Italian or German; and the Romance languages, Arabic languages, or Bantu languages.
== References ==
== External links ==
Continuity and infinitesimals, John Bell, Stanford Encyclopedia of Philosophy | Wikipedia/Continuum_(theory) |
In theoretical physics, a super-Poincaré algebra is an extension of the Poincaré algebra to incorporate supersymmetry, a relation between bosons and fermions. They are examples of supersymmetry algebras (without central charges or internal symmetries), and are Lie superalgebras. Thus a super-Poincaré algebra is a Z2-graded vector space with a graded Lie bracket such that the even part is a Lie algebra containing the Poincaré algebra, and the odd part is built from spinors on which there is an anticommutation relation with values in the even part.
== Informal sketch ==
The Poincaré algebra describes the isometries of Minkowski spacetime. From the representation theory of the Lorentz group, it is known that the Lorentz group admits two inequivalent complex spinor representations, dubbed {\displaystyle 2} and {\displaystyle {\overline {2}}}. Taking their tensor product, one obtains {\displaystyle 2\otimes {\overline {2}}=3\oplus 1}; such decompositions of tensor products of representations into direct sums are given by the Littlewood–Richardson rule.
Normally, one treats such a decomposition as relating to specific particles: so, for example, the pion, which is a chiral vector particle, is composed of a quark–antiquark pair. However, one could also identify {\displaystyle 3\oplus 1} with Minkowski spacetime itself. This leads to a natural question: if Minkowski space-time belongs to the adjoint representation, then can Poincaré symmetry be extended to the fundamental representation? It can: this is exactly the super-Poincaré algebra. There is a corresponding experimental question: if we live in the adjoint representation, then where is the fundamental representation hiding? This is the program of supersymmetry, which has not been found experimentally.
== History ==
The super-Poincaré algebra was first proposed in the context of the Haag–Łopuszański–Sohnius theorem, as a means of avoiding the conclusions of the Coleman–Mandula theorem. That is, the Coleman–Mandula theorem is a no-go theorem that states that the Poincaré algebra cannot be extended with additional symmetries that might describe the internal symmetries of the observed physical particle spectrum. However, the Coleman–Mandula theorem assumed that the algebra extension would be by means of a commutator; this assumption, and thus the theorem, can be avoided by considering the anti-commutator, that is, by employing anti-commuting Grassmann numbers. The proposal was to consider a supersymmetry algebra, defined as the semidirect product of a central extension of the super-Poincaré algebra by a compact Lie algebra of internal symmetries.
== Definition ==
The simplest supersymmetric extension of the Poincaré algebra contains two Weyl spinors with the following anti-commutation relation:
{\displaystyle \{Q_{\alpha },{\bar {Q}}_{\dot {\beta }}\}=2{\sigma ^{\mu }}_{\alpha {\dot {\beta }}}P_{\mu }}
and all other anti-commutation relations between the Qs and Ps vanish. The operators {\displaystyle Q_{\alpha },{\bar {Q}}_{\dot {\alpha }}} are known as supercharges. In the above expression {\displaystyle P_{\mu }} are the generators of translation and {\displaystyle \sigma ^{\mu }} are the Pauli matrices. The index {\displaystyle \alpha } runs over the values {\displaystyle \alpha =1,2.} A dot is used over the index {\displaystyle {\dot {\beta }}} as a reminder that this index transforms according to the inequivalent conjugate spinor representation; one must never accidentally contract these two types of indices. The Pauli matrices can be considered to be a direct manifestation of the Littlewood–Richardson rule mentioned before: they indicate how the tensor product {\displaystyle 2\otimes {\overline {2}}} of the two spinors can be re-expressed as a vector. The index {\displaystyle \mu } of course ranges over the spacetime dimensions {\displaystyle \mu =0,1,2,3.}
It is convenient to work with Dirac spinors instead of Weyl spinors; a Dirac spinor can be thought of as an element of {\displaystyle 2\oplus {\overline {2}}}; it has four components. The Dirac matrices are thus also four-dimensional, and can be expressed as direct sums of the Pauli matrices. The tensor product then gives an algebraic relation to the Minkowski metric {\displaystyle g^{\mu \nu }}, which is expressed as:
{\displaystyle \{\gamma ^{\mu },\gamma ^{\nu }\}=2g^{\mu \nu }}
and
{\displaystyle \sigma ^{\mu \nu }={\frac {i}{2}}\left[\gamma ^{\mu },\gamma ^{\nu }\right]}
This then gives the full algebra
{\displaystyle {\begin{aligned}\left[M^{\mu \nu },Q_{\alpha }\right]&={\frac {1}{2}}(\sigma ^{\mu \nu })_{\alpha }^{\;\;\beta }Q_{\beta }\\\left[Q_{\alpha },P^{\mu }\right]&=0\\\{Q_{\alpha },{\bar {Q}}_{\dot {\beta }}\}&=2(\sigma ^{\mu })_{\alpha {\dot {\beta }}}P_{\mu }\\\end{aligned}}}
which are to be combined with the normal Poincaré algebra. It is a closed algebra, since all Jacobi identities are satisfied, and it admits explicit matrix representations. Following this line of reasoning leads to supergravity.
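The Clifford algebra relation above is easy to verify numerically. The following sketch (illustrative, not from the article) builds the gamma matrices in the standard Weyl (chiral) basis from the Pauli matrices and checks the anticommutator:

```python
# Check of {gamma^mu, gamma^nu} = 2 g^{mu nu} in the Weyl basis.
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s0, s1, s2, s3]          # sigma^mu = (1, sigma_i)
sigma_bar = [s0, -s1, -s2, -s3]   # sigma-bar^mu = (1, -sigma_i)

Z = np.zeros((2, 2), dtype=complex)
def gamma(mu):
    # gamma^mu = [[0, sigma^mu], [sigma-bar^mu, 0]] in the Weyl basis
    return np.block([[Z, sigma[mu]], [sigma_bar[mu], Z]])

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

for mu in range(4):
    for nu in range(4):
        anticomm = gamma(mu) @ gamma(nu) + gamma(nu) @ gamma(mu)
        assert np.allclose(anticomm, 2 * g[mu, nu] * np.eye(4))
print("{gamma^mu, gamma^nu} = 2 g^{mu nu} holds in the Weyl basis")
```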
=== Extended supersymmetry ===
It is possible to add more supercharges. That is, we fix a number, which by convention is labelled {\displaystyle {\mathcal {N}}}, and define supercharges {\displaystyle Q_{\alpha }^{I},{\bar {Q}}_{\dot {\alpha }}^{I}} with {\displaystyle I=1,\cdots ,{\mathcal {N}}.}
These can be thought of as many copies of the original supercharges, and hence satisfy
{\displaystyle [M^{\mu \nu },Q_{\alpha }^{I}]=(\sigma ^{\mu \nu })_{\alpha }{}^{\beta }Q_{\beta }^{I}}
{\displaystyle [P^{\mu },Q_{\alpha }^{I}]=0}
and
{\displaystyle \{Q_{\alpha }^{I},{\bar {Q}}_{\dot {\alpha }}^{J}\}=2\sigma _{\alpha {\dot {\alpha }}}^{\mu }P_{\mu }\delta ^{IJ}}
but can also satisfy
{\displaystyle \{Q_{\alpha }^{I},Q_{\beta }^{J}\}=\epsilon _{\alpha \beta }Z^{IJ}}
and
{\displaystyle \{{\bar {Q}}_{\dot {\alpha }}^{I},{\bar {Q}}_{\dot {\beta }}^{J}\}=\epsilon _{{\dot {\alpha }}{\dot {\beta }}}Z^{\dagger IJ}}
where {\displaystyle Z^{IJ}=-Z^{JI}} is the central charge.
== Super-Poincaré group and superspace ==
Just as the Poincaré algebra generates the Poincaré group of isometries of Minkowski space, the super-Poincaré algebra, an example of a Lie superalgebra, generates what is known as a supergroup. This can be used to define superspace with {\displaystyle {\mathcal {N}}} supercharges: these are the right cosets of the Lorentz group within the {\displaystyle {\mathcal {N}}} super-Poincaré group.
Just as {\displaystyle P_{\mu }} has the interpretation as the generator of spacetime translations, the charges {\displaystyle Q_{\alpha }^{I},{\bar {Q}}_{\dot {\alpha }}^{I}}, with {\displaystyle I=1,\cdots ,{\mathcal {N}}}, have the interpretation as generators of superspace translations in the 'spin coordinates' of superspace. That is, we can view superspace as the direct sum of Minkowski space with 'spin dimensions' labelled by coordinates {\displaystyle \theta _{\alpha }^{I},{\bar {\theta }}^{I{\dot {\alpha }}}}. The supercharge {\displaystyle Q_{\alpha }^{I}} generates translations in the direction labelled by the coordinate {\displaystyle \theta _{\alpha }^{I}.} By counting, there are {\displaystyle 4{\mathcal {N}}} spin dimensions.
=== Notation for superspace ===
The superspace consisting of Minkowski space with {\displaystyle {\mathcal {N}}} supercharges is therefore labelled {\displaystyle \mathbb {R} ^{1,3|4{\mathcal {N}}}} or sometimes simply {\displaystyle \mathbb {R} ^{4|4{\mathcal {N}}}}.
== SUSY in 3 + 1 Minkowski spacetime ==
In (3 + 1) Minkowski spacetime, the Haag–Łopuszański–Sohnius theorem states that the SUSY algebra with N spinor generators is as follows.
The even part of the star Lie superalgebra is the direct sum of the Poincaré algebra and a reductive Lie algebra B (such that its self-adjoint part is the tangent space of a real compact Lie group). The odd part of the algebra would be
{\displaystyle \left({\frac {1}{2}},0\right)\otimes V\oplus \left(0,{\frac {1}{2}}\right)\otimes V^{*}}
where {\displaystyle (1/2,0)} and {\displaystyle (0,1/2)} are specific representations of the Poincaré algebra. (Compared to the notation used earlier in the article, these correspond to {\displaystyle {\overline {2}}\oplus 1} and {\displaystyle 1\oplus 2}, respectively; also see the footnote where the previous notation was introduced.) Both components are conjugate to each other under the * conjugation. V is an N-dimensional complex representation of B and V* is its dual representation. The Lie bracket for the odd part is given by a symmetric equivariant pairing {.,.} on the odd part with values in the even part. In particular, its reduced intertwiner from
{\displaystyle \left[\left({\frac {1}{2}},0\right)\otimes V\right]\otimes \left[\left(0,{\frac {1}{2}}\right)\otimes V^{*}\right]}
to the ideal of the Poincaré algebra generated by translations is given as the product of a nonzero intertwiner from
{\displaystyle \left({\frac {1}{2}},0\right)\otimes \left(0,{\frac {1}{2}}\right)}
to (1/2,1/2) by the "contraction intertwiner" from
{\displaystyle V\otimes V^{*}}
to the trivial representation. On the other hand, its reduced intertwiner from
{\displaystyle \left[\left({\frac {1}{2}},0\right)\otimes V\right]\otimes \left[\left({\frac {1}{2}},0\right)\otimes V\right]}
is the product of an (antisymmetric) intertwiner from
{\displaystyle \left({\frac {1}{2}},0\right)\otimes \left({\frac {1}{2}},0\right)}
to (0,0) and an antisymmetric intertwiner A from
{\displaystyle N^{2}}
to B. Conjugate it to get the corresponding case for the other half.
=== N = 1 ===
B is now {\displaystyle {\mathfrak {u}}(1)} (called R-symmetry) and V is the 1D representation of {\displaystyle {\mathfrak {u}}(1)} with charge 1. A (the intertwiner defined above) would have to be zero since it is antisymmetric.
Actually, there are two versions of N = 1 SUSY, one without the {\displaystyle {\mathfrak {u}}(1)} (i.e. B is zero-dimensional) and the other with {\displaystyle {\mathfrak {u}}(1)}.
=== N = 2 ===
B is now {\displaystyle {\mathfrak {su}}(2)\oplus {\mathfrak {u}}(1)} and V is the 2D doublet representation of {\displaystyle {\mathfrak {su}}(2)} with a zero {\displaystyle {\mathfrak {u}}(1)} charge. Now, A is a nonzero intertwiner to the {\displaystyle {\mathfrak {u}}(1)} part of B.
Alternatively, V could be a 2D doublet with a nonzero {\displaystyle {\mathfrak {u}}(1)} charge. In this case, A would have to be zero.
Yet another possibility would be to let B be {\displaystyle {\mathfrak {u}}(1)_{A}\oplus {\mathfrak {u}}(1)_{B}\oplus {\mathfrak {u}}(1)_{C}}. V is invariant under {\displaystyle {\mathfrak {u}}(1)_{B}} and {\displaystyle {\mathfrak {u}}(1)_{C}} and decomposes into a 1D rep with {\displaystyle {\mathfrak {u}}(1)_{A}} charge 1 and another 1D rep with charge −1. The intertwiner A would be complex, with the real part mapping to {\displaystyle {\mathfrak {u}}(1)_{B}} and the imaginary part mapping to {\displaystyle {\mathfrak {u}}(1)_{C}}.
Or we could have B being {\displaystyle {\mathfrak {su}}(2)\oplus {\mathfrak {u}}(1)_{A}\oplus {\mathfrak {u}}(1)_{B}} with V being the doublet rep of {\displaystyle {\mathfrak {su}}(2)} with zero {\displaystyle {\mathfrak {u}}(1)} charges and A being a complex intertwiner with the real part mapping to {\displaystyle {\mathfrak {u}}(1)_{A}} and the imaginary part to {\displaystyle {\mathfrak {u}}(1)_{B}}.
This doesn't even exhaust all the possibilities. We see that there is more than one N = 2 supersymmetry; likewise, the SUSYs for N > 2 are also not unique (in fact, it only gets worse).
=== N = 3 ===
It is theoretically allowed, but the multiplet structure automatically becomes the same as that of an N = 4 supersymmetric theory, so it is less often discussed than the N = 1, 2, 4 versions.
=== N = 4 ===
This is the maximal number of supersymmetries in a theory without gravity.
=== N = 8 ===
This is the maximal number of supersymmetries in any supersymmetric theory. Beyond {\displaystyle {\mathcal {N}}=8}, any massless supermultiplet contains a sector with helicity {\displaystyle \lambda } such that {\displaystyle |\lambda |>2}. Such theories on Minkowski space must be free (non-interacting).
== SUSY in various dimensions ==
In 0 + 1, 2 + 1, 3 + 1, 4 + 1, 6 + 1, 7 + 1, 8 + 1, and 10 + 1 dimensions, a SUSY algebra is classified by a positive integer N.
In 1 + 1, 5 + 1 and 9 + 1 dimensions, a SUSY algebra is classified by two nonnegative integers (M, N), at least one of which is nonzero. M represents the number of left-handed SUSYs and N represents the number of right-handed SUSYs.
The reason for this has to do with the reality conditions of the spinors.
Hereafter d = 9 means d = 8 + 1 in Minkowski signature, etc. The structure of a supersymmetry algebra is mainly determined by the number of fermionic generators, that is, the number N times the real dimension of the spinor in d dimensions. This is because one can easily obtain a supersymmetry algebra of lower dimension from that of higher dimensionality by dimensional reduction.
=== Upper bound on dimension of supersymmetric theories ===
The maximum allowed dimension of theories with supersymmetry is {\displaystyle d=11=10+1}, which admits a unique theory called eleven-dimensional supergravity, the low-energy limit of M-theory. This incorporates supergravity: without supergravity, the maximum allowed dimension is {\displaystyle d=10=9+1}.
=== d = 11 ===
The only example is the N = 1 supersymmetry with 32 supercharges.
=== d = 10 ===
From d = 11, N = 1 SUSY, one obtains N = (1, 1) nonchiral SUSY algebra, which is also called the type IIA supersymmetry. There is also N = (2, 0) SUSY algebra, which is called the type IIB supersymmetry. Both of them have 32 supercharges.
N = (1, 0) SUSY algebra with 16 supercharges is the minimal susy algebra in 10 dimensions. It is also called the type I supersymmetry. Type IIA / IIB / I superstring theory has the SUSY algebra of the corresponding name. The supersymmetry algebra for the heterotic superstrings is that of type I.
== Remarks ==
== Notes ==
== References ==
Aitchison, Ian J R (2005). "Supersymmetry and the MSSM: An Elementary Introduction". arXiv:hep-ph/0505105.
Gol'fand, Y. A.; Likhtman, E. P. (1971). "Extension of the algebra of the Poincare group generators and violation of P invariance". JETP Lett. 13: 323–326. Bibcode:1971JETPL..13..323G.
van Nieuwenhuizen, P. (1981). "Supergravity". Phys. Rep. 68 (4): 189–398. Bibcode:1981PhR....68..189V. doi:10.1016/0370-1573(81)90157-5.
Volkov, D. V.; Akulov, V. P. (1972). "Possible Universal Neutrino Interaction". JETP Lett. 16 (11): 621 pp.
Volkov, D. V.; Akulov, V. P. (1973). "Is the neutrino a goldstone particle". Phys. Lett. B. 46 (1): 109–110. Bibcode:1973PhLB...46..109V. doi:10.1016/0370-2693(73)90490-5.
Weinberg, Steven (2000). Supersymmetry. The Quantum Theory of Fields. Vol. 3 (1st ed.). Cambridge: Cambridge University Press. ISBN 978-0521670555.
Wess, J.; Zumino, B. (1974). "Supergauge transformations in four dimensions". Nuclear Physics B. 70 (1): 39–50. Bibcode:1974NuPhB..70...39W. doi:10.1016/0550-3213(74)90355-1. | Wikipedia/Super-Poincaré_algebra |
A Grand Unified Theory (GUT) is any model in particle physics that merges the electromagnetic, weak, and strong forces (the three gauge interactions of the Standard Model) into a single force at high energies. Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct.
Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single combined electroweak interaction. GUT models predict that at even higher energy, the strong and electroweak interactions will unify into one electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a more comprehensive theory of everything (TOE) rather than a Grand Unified Theory. Thus, GUTs are often seen as an intermediate step towards a TOE.
The novel particles predicted by GUT models are expected to have extremely high masses—around the GUT scale of 10¹⁶ GeV/c² (only three orders of magnitude below the Planck scale of 10¹⁹ GeV/c²)—and so are well beyond the reach of any foreseen particle collider experiments. Therefore, the particles predicted by GUT models cannot be observed directly, and instead the effects of grand unification might be detected through indirect observations of the following:
proton decay,
electric dipole moments of elementary particles,
or the properties of neutrinos.
Some GUTs, such as the Pati–Salam model, predict the existence of magnetic monopoles.
While GUTs might be expected to offer simplicity over the complications present in the Standard Model, realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence of family symmetries beyond the conventional GUT models. Due to this and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model.
Models that do not unify the three interactions using one simple group as the gauge symmetry but do so using semisimple groups can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well.
== History ==
Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974. The Georgi–Glashow model was preceded by the semisimple Lie algebra Pati–Salam model of Abdus Salam and Jogesh Pati, also from 1974, who pioneered the idea of unifying gauge interactions.
The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos; however, in the final version of their paper they opted for the less anatomical GUM (Grand Unification Mass). Later that year, Nanopoulos was the first to use the acronym in a paper.
== Motivation ==
The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2) which allow only discrete charges, the remaining component, the weak hypercharge interaction is described by an abelian symmetry U(1) which in principle allows for arbitrary charge assignments. The observed charge quantization, namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge, has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular, the weak mixing angle, grand unification ideally reduces the number of independent input parameters but is also constrained by observations.
Grand unification is reminiscent of the unification of electric and magnetic forces by Maxwell's field theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different.
== Unification of matter particles ==
=== SU(5) ===
SU(5) is the simplest GUT. The smallest simple Lie group which contains the standard model, and upon which the first Grand Unified Theory was based, is
{\displaystyle {\rm {SU(5)\supset SU(3)\times SU(2)\times U(1).}}}
Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges, is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature.
The two smallest irreducible representations of SU(5) are 5 (the defining representation) and 10. (These bold numbers indicate the dimension of the representation.) In the standard assignment, the 5 contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet, while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron. This scheme has to be replicated for each of the three known generations of matter. It is notable that the theory is anomaly free with this matter content.
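The anomaly-freedom claim can be checked directly, as sketched below (illustrative, not from the article). The sums run over one generation written entirely in terms of left-handed Weyl fermions, using the hypercharge convention Q = T3 + Y/2, and confirm that the U(1)³ and mixed gravitational anomaly coefficients cancel:

```python
# One Standard Model generation -- exactly the content of 5-bar + 10 of
# SU(5) -- is anomaly free: the hypercharge sums below vanish.
from fractions import Fraction as F

# (number of Weyl components, hypercharge Y), all fields left-handed
# (right-handed fields entered as their conjugates)
fields = [
    (6, F(1, 3)),    # quark doublet Q_L: 3 colors x 2 isospin
    (3, F(-4, 3)),   # u_R^c
    (3, F(2, 3)),    # d_R^c
    (2, F(-1)),      # lepton doublet l_L
    (1, F(2)),       # e_R^c
]

assert sum(n * y for n, y in fields) == 0      # mixed gravitational-U(1) anomaly
assert sum(n * y**3 for n, y in fields) == 0   # U(1)^3 anomaly
print("One generation (5-bar + 10 of SU(5)) is anomaly free")
```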
The hypothetical right-handed neutrino is a singlet of SU(5), which means its mass is not forbidden by any symmetry; it does not need spontaneous electroweak symmetry breaking, which explains why its mass can be heavy (see seesaw mechanism).
=== SO(10) ===
The next simple Lie group which contains the standard model is
{\displaystyle {\rm {SO(10)\supset SU(5)\supset SU(3)\times SU(2)\times U(1).}}}
Here, the unification of matter is even more complete, since the irreducible spinor representation 16 contains both the 5 and 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended standard model with neutrino masses. This is already the largest simple group that achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector).
Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark for SU(5) and SO(10). Some of these mass relations hold approximately, but most don't (see Georgi-Jarlskog mass relation).
The boson matrix for SO(10) is found by taking the 15 × 15 matrix from the 10 + 5 representation of SU(5) and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of SO(10).
=== E6 ===
In some forms of string theory, including E8 × E8 heterotic string theory, the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E6. Notably, E6 is the only exceptional simple Lie group to have any complex representations, a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four (G2, F4, E7, and E8) cannot be the gauge group of a GUT.
=== Extended Grand Unified Theories ===
Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra, which naturally appear in the higher SU(N) GUTs, considerably modify the desert physics and lead to realistic (string-scale) grand unification for the conventional three quark–lepton families even without using supersymmetry (see below). On the other hand, due to a new missing-VEV mechanism emerging in the supersymmetric SU(8) GUT, a simultaneous solution to the gauge hierarchy (doublet–triplet splitting) problem and to the problem of flavor unification can be argued for.
GUTs with four families / generations, SU(8): Assuming 4 generations of fermions instead of 3 makes a total of 64 types of particles. These can be put into 64 = 8 + 56 representations of SU(8). This can be divided into SU(5) × SU(3)F × U(1) which is the SU(5) theory together with some heavy bosons which act on the generation number.
GUTs with four families / generations, O(16): Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of O(16).
=== Symplectic groups and quaternion representations ===
Symplectic gauge groups could also be considered. For example, Sp(8) (which is called Sp(4) in the article symplectic group) has a representation in terms of 4 × 4 quaternion unitary matrices, which has a 16-dimensional real representation and so might be considered as a candidate for a gauge group. Sp(8) has 32 charged bosons and 4 neutral bosons. Its subgroups include SU(4), so it can at least contain the gluons and photon of SU(3) × U(1), although it is probably not possible to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions might be:
{\displaystyle {\begin{bmatrix}e+i\ {\overline {e}}+j\ v+k\ {\overline {v}}\\u_{r}+i\ {\overline {u}}_{\mathrm {\overline {r}} }+j\ d_{\mathrm {r} }+k\ {\overline {d}}_{\mathrm {\overline {r}} }\\u_{g}+i\ {\overline {u}}_{\mathrm {\overline {g}} }+j\ d_{\mathrm {g} }+k\ {\overline {d}}_{\mathrm {\overline {g}} }\\u_{b}+i\ {\overline {u}}_{\mathrm {\overline {b}} }+j\ d_{\mathrm {b} }+k\ {\overline {d}}_{\mathrm {\overline {b}} }\\\end{bmatrix}}_{\mathrm {L} }}
A further complication with quaternion representations of fermions is that there are two types of multiplication: left multiplication and right multiplication which must be taken into account. It turns out that including left and right-handed 4 × 4 quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion which adds an extra SU(2) and so has an extra neutral boson and two more charged bosons. Thus the group of left- and right-handed 4 × 4 quaternion matrices is Sp(8) × SU(2) which does include the standard model bosons:
{\displaystyle \mathrm {SU(4,\mathbb {H} )_{L}\times \mathbb {H} _{R}=Sp(8)\times SU(2)\supset SU(4)\times SU(2)\supset SU(3)\times SU(2)\times U(1)} }
If {\displaystyle \psi } is a quaternion-valued spinor, {\displaystyle A_{\mu }^{ab}} is a quaternion-Hermitian 4 × 4 matrix coming from Sp(8), and {\displaystyle B_{\mu }} is a pure vector quaternion (both of which are 4-vector bosons), then the interaction term is:
{\displaystyle \ {\overline {\psi ^{a}}}\gamma _{\mu }\left(A_{\mu }^{ab}\psi ^{b}+\psi ^{a}B_{\mu }\right)\ }
=== Octonion representations ===
It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3 × 3 hermitian matrix with certain additions for the diagonal elements, then these matrices form an exceptional (Grassmann) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (F4, E6, E7, or E8) depending on the details.
{\displaystyle \psi ={\begin{bmatrix}a&e&\mu \\{\overline {e}}&b&\tau \\{\overline {\mu }}&{\overline {\tau }}&c\end{bmatrix}}}
{\displaystyle \ [\psi _{A},\psi _{B}]\subset \mathrm {J} _{3}(\mathbb {O} )\ }
Because they are fermions, the anti-commutators of the Jordan algebra become commutators. It is known that E6 has subgroup O(10) and so is big enough to include the Standard Model. An E8 gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of E8, these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles' spin direction. Each of these options poses theoretical problems.
=== Beyond Lie groups ===
Other structures have been suggested, including Lie 3-algebras and Lie superalgebras. Neither of these fits with Yang–Mills theory. In particular, Lie superalgebras would introduce bosons with incorrect statistics. Supersymmetry, however, does fit with Yang–Mills.
== Unification of forces and the role of supersymmetry ==
The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group "running", which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale.
The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with SU(5) or SO(10) GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy, also known as the GUT scale:
{\displaystyle \Lambda _{\text{GUT}}\approx 10^{16}\,{\text{GeV}}.}
It is commonly believed that this matching is unlikely to be a coincidence, and is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem—i.e., it stabilizes the electroweak Higgs mass against radiative corrections.
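The running described above can be sketched at one loop (an illustrative calculation with approximate inputs and standard one-loop beta coefficients; the MSSM running should properly start at the superpartner mass scale rather than at M_Z, which is ignored here):

```python
# One-loop running of the inverse gauge couplings,
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i / (2 pi) * ln(mu / M_Z).
# Inputs are approximate, with GUT-normalized hypercharge; a sketch,
# not a precision fit.
import math

M_Z = 91.19                        # GeV
alpha_inv_MZ = [59.0, 29.6, 8.5]   # ~1/alpha at M_Z: U(1)_Y (GUT norm.), SU(2), SU(3)
b = {
    "SM":   [41 / 10, -19 / 6, -7],
    "MSSM": [33 / 5, 1, -3],
}

def alpha_inv(model, mu):
    t = math.log(mu / M_Z) / (2 * math.pi)
    return [a - bi * t for a, bi in zip(alpha_inv_MZ, b[model])]

for model in ("SM", "MSSM"):
    vals = alpha_inv(model, 2e16)  # evaluate near the GUT scale
    print(model, [f"{v:.1f}" for v in vals])
# The MSSM values come out nearly equal (~24), while the SM values miss
# each other: the near-unification described in the text.
```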
== Neutrino masses ==
Since Majorana masses of the right-handed neutrino are forbidden by SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios.
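For scale, a rough seesaw estimate (standard, and included here only for illustration) shows why a right-handed Majorana mass near or somewhat below the GUT scale gives light neutrinos in the observed range:

```latex
% Seesaw estimate: Dirac mass m_D of order the electroweak scale,
% right-handed Majorana mass M_R near the unification scale.
m_\nu \sim \frac{m_D^{2}}{M_R}
       \sim \frac{(100\ \text{GeV})^{2}}{10^{14}\ \text{GeV}}
       \sim 0.1\ \text{eV}
```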
== Proposed theories ==
Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes all fundamental forces, including gravitation, is termed a theory of everything. Some common mainstream GUT models are:
Pati–Salam model – SU(4) × SU(2) × SU(2)
Georgi–Glashow model – SU(5); and Flipped SU(5) – SU(5) × U(1)
SO(10) model; and Flipped SO(10) – SO(10) × U(1)
E6 model; and Trinification – SU(3) × SU(3) × SU(3)
minimal left-right model – SU(3)C × SU(2)L × SU(2)R × U(1)B−L
331 model – SU(3)C × SU(3)L × U(1)X
chiral color
Note: These models refer to Lie algebras, not to Lie groups. The Lie group could be {\displaystyle [\mathrm {SU} (4)\times \mathrm {SU} (2)\times \mathrm {SU} (2)]/\mathbb {Z} _{2}}, just to take a random example.
The most promising candidate is SO(10).
(Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of SO(10). They are the minimal left-right model, SU(5), flipped SU(5) and the Pati–Salam model. The GUT group E6 contains SO(10), but models based upon it are significantly more complicated. The primary reason for studying E6 models comes from E8 × E8 heterotic string theory.
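To make this unification concrete: under the SU(5) subgroup, the 16-dimensional spinor representation of SO(10) that carries one generation decomposes by the standard branching rule into the Georgi–Glashow multiplets plus a singlet, which is identified with the right-handed neutrino:

\mathbf{16} \to \mathbf{10} \oplus \bar{\mathbf{5}} \oplus \mathbf{1}, \qquad \mathbf{10} \supset (Q,\, u^{c},\, e^{c}), \quad \bar{\mathbf{5}} \supset (d^{c},\, L), \quad \mathbf{1} = \nu^{c}.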
GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, and domain walls, but none have been observed. Their absence is known as the monopole problem in cosmology. Many GUT models also predict proton decay, although not the Pati–Salam model; as of now, proton decay has never been experimentally observed. The current experimental lower limit on the proton's lifetime essentially rules out minimal SU(5) and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models.
Some GUT theories, like SU(5) and SO(10), suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale). In unifying quarks with leptons, the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (implying lifetimes far below current experimental lower bounds) and prevent the gauge coupling strengths from running together in the renormalization group.
Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations.
== Ingredients ==
A GUT model consists of: a gauge group, which is a compact Lie group; a connection form for that Lie group; a Yang–Mills action for that connection, given by an invariant symmetric bilinear form over its Lie algebra (specified by a coupling constant for each factor); a Higgs sector consisting of a number of scalar fields taking values in real or complex representations of the Lie group; and chiral Weyl fermions taking values in a complex representation of the Lie group. The Lie group contains the Standard Model group, and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking down to the Standard Model. The Weyl fermions represent matter.
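Schematically, these ingredients assemble into a Lagrangian of the following generic form (a sketch only, with gauge group G, Higgs fields φ, chiral fermions ψ, and a model-dependent Yukawa matrix Y; the details vary between models):

\mathcal{L} = -\frac{1}{4g^{2}}\,\operatorname{Tr} F_{\mu\nu}F^{\mu\nu} + |D_{\mu}\phi|^{2} - V(\phi) + i\,\bar{\psi}\gamma^{\mu}D_{\mu}\psi + \left(\bar{\psi}\,Y(\phi)\,\psi + \mathrm{h.c.}\right),

where the potential V(φ) is chosen so that its minimum breaks G down to the Standard Model group SU(3) × SU(2) × U(1).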
== Current evidence ==
The discovery of neutrino oscillations indicates that the Standard Model is incomplete, but there is currently no clear evidence that nature is described by any Grand Unified Theory. Neutrino oscillations have led to renewed interest toward certain GUT such as SO(10).
One of the few possible experimental tests of certain GUTs is proton decay, together with the fermion masses; there are a few more specialized tests for supersymmetric GUTs. However, the minimum proton lifetimes established by experiment (at or exceeding the 10^34–10^35 year range) have ruled out simpler GUTs and most non-SUSY models.
The maximum upper limit on the proton lifetime (if unstable) is calculated at 6×10^39 years for SUSY models and 1.4×10^36 years for minimal non-SUSY GUTs.
The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common length scale called the GUT scale, approximately equal to 10^16 GeV (slightly less than the Planck energy of 10^19 GeV), which is somewhat suggestive. This numerical observation is called gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same result by postulating, for instance, that ordinary (non-supersymmetric) SO(10) models break with an intermediate gauge scale, such as that of the Pati–Salam group.
== See also ==
B − L quantum number
Classical unified field theories
Paradigm shift
Physics beyond the Standard Model
Theory of everything
X and Y bosons
== Notes ==
== References ==
== Further reading ==
Stephen Hawking, A Brief History of Time, includes a brief popular overview.
Langacker, Paul (2012). "Grand unification". Scholarpedia. 7 (10): 11419. Bibcode:2012SchpJ...711419L. doi:10.4249/scholarpedia.11419.
== External links ==
The Algebra of Grand Unified Theories
The concept of supergroup is a generalization of that of group. In other words, every supergroup carries a natural group structure, but there may be more than one way to structure a given group as a supergroup. A supergroup is like a Lie group in that there is a well-defined notion of smooth function defined on it.
However, the functions may have even and odd parts. Moreover, a supergroup has a super Lie algebra, which plays a role similar to that of a Lie algebra for Lie groups in that it determines most of the representation theory and is the starting point for classification.
== Details ==
More formally, a Lie supergroup is a supermanifold G together with a multiplication morphism \mu : G \times G \to G, an inversion morphism i : G \to G and a unit morphism e : 1 \to G, which make G a group object in the category of supermanifolds. This means that, formulated as commutative diagrams, the usual associativity and inversion axioms of a group continue to hold. Since every manifold is a supermanifold, a Lie supergroup generalises the notion of a Lie group.
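Spelled out in equations rather than diagrams, the group-object condition amounts to morphism identities such as the following (a compact restatement; here 1 is the terminal object, Δ the diagonal morphism, and ! : G → 1 the unique morphism to the terminal object):

\mu \circ (\mu \times \mathrm{id}) = \mu \circ (\mathrm{id} \times \mu), \qquad \mu \circ (e \times \mathrm{id}) = \mathrm{id} = \mu \circ (\mathrm{id} \times e), \qquad \mu \circ (\mathrm{id} \times i) \circ \Delta = e \circ {!},

with the evident identifications 1 × G ≅ G ≅ G × 1.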
There are many possible supergroups. The ones of most interest in theoretical physics are the ones which extend the Poincaré group or the conformal group. Of particular interest are the orthosymplectic groups Osp(M|N) and the superunitary groups SU(M|N).
An equivalent algebraic approach starts from the observation that a supermanifold is determined by its ring of supercommutative smooth functions, and that a morphism of supermanifolds corresponds one-to-one with an algebra homomorphism between their function algebras in the opposite direction, i.e. that the category of supermanifolds is opposite to the category of algebras of smooth graded-commutative functions. Reversing all the arrows in the commutative diagrams that define a Lie supergroup then shows that functions over the supergroup have the structure of a Z2-graded Hopf algebra. Likewise the representations of this Hopf algebra turn out to be Z2-graded comodules. This Hopf algebra gives the global properties of the supergroup.
There is another related Hopf algebra which is the dual of the previous Hopf algebra. It can be identified with the Hopf algebra of graded differential operators at the origin. It only gives the local properties of the symmetries i.e., it only gives information about infinitesimal supersymmetry transformations. The representations of this Hopf algebra are modules. Like in the non-graded case, this Hopf algebra can be described purely algebraically as the universal enveloping algebra of the Lie superalgebra.
In a similar way one can define an affine algebraic supergroup as a group object in the category of superalgebraic affine varieties. An affine algebraic supergroup has a similar one-to-one relation to its Hopf algebra of superpolynomials. Using the language of schemes, which combines the geometric and algebraic points of view, algebraic supergroup schemes can be defined, including super Abelian varieties.
== Examples ==
The super-Poincaré group is the group of isometries of superspace (specifically, Minkowski superspace with \mathcal{N} supercharges, where \mathcal{N} is often taken to be 1). It is most often treated at the algebra level, and is generated by the super-Poincaré algebra.
The super-conformal group is the group of conformal symmetries of superspace, generated by the super-conformal algebra.
== Notes ==
== References ==
supergroup in nLab
Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics. It deals with environments in which neither gravitational nor quantum effects can be ignored, such as in the vicinity of black holes or similar compact astrophysical objects, as well as in the early stages of the universe moments after the Big Bang.
Three of the four fundamental forces of nature are described within the framework of quantum mechanics and quantum field theory: the electromagnetic interaction, the strong force, and the weak force; this leaves gravity as the only interaction that has not been fully accommodated. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which incorporates his theory of special relativity and deeply modifies the understanding of concepts like time and space. Although general relativity is highly regarded for its elegance and accuracy, it has limitations: the gravitational singularities inside black holes, the ad hoc postulation of dark matter, as well as dark energy and its relation to the cosmological constant are among the current unsolved mysteries regarding gravity, all of which signal the breakdown of the general theory of relativity at different scales and highlight the need for a gravitational theory that extends into the quantum realm. At distances close to the Planck length, like those near the center of a black hole, quantum fluctuations of spacetime are expected to play an important role. Finally, the discrepancy between the predicted and observed values of the vacuum energy (which, depending on considerations, can be 60 or 120 orders of magnitude) highlights the necessity for a quantum theory of gravity.
The field of quantum gravity is actively developing, and theorists are exploring a variety of approaches to the problem of quantum gravity, the most popular being M-theory and loop quantum gravity. All of these approaches aim to describe the quantum behavior of the gravitational field, which does not necessarily include unifying all fundamental interactions into a single mathematical framework. However, many approaches to quantum gravity, such as string theory, try to develop a framework that describes all fundamental forces. Such a theory is often referred to as a theory of everything. Some of the approaches, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. Other lesser-known but no less important theories include causal dynamical triangulation, noncommutative geometry, and twistor theory.
One of the difficulties of formulating a quantum gravity theory is that direct observation of quantum gravitational effects is thought to only appear at length scales near the Planck scale, around 10^−35 meters, a scale far smaller, and hence only accessible with far higher energies, than those currently available in high energy particle accelerators. Therefore, physicists lack experimental data which could distinguish between the competing theories which have been proposed.
Thought experiment approaches have been suggested as a testing tool for quantum gravity theories. In the field of quantum gravity there are several open questions – e.g., it is not known how spin of elementary particles sources gravity, and thought experiments could provide a pathway to explore possible resolutions to these questions, even in the absence of lab experiments or physical observations.
In the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades. This field of study is called phenomenological quantum gravity.
== Overview ==
Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make on how the universe works. General relativity models gravity as curvature of spacetime: in the slogan of John Archibald Wheeler, "Spacetime tells matter how to move; matter tells spacetime how to curve." On the other hand, quantum field theory is typically formulated in the flat spacetime used in special relativity. No theory has yet proven successful in describing the general situation where the dynamics of matter, modeled with quantum mechanics, affect the curvature of spacetime. If one attempts to treat gravity as simply another quantum field, the resulting theory is not renormalizable. Even in the simpler case where the curvature of spacetime is fixed a priori, developing quantum field theory becomes more mathematically challenging, and many ideas physicists use in quantum field theory on flat spacetime are no longer applicable.
It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior of black holes, and the origin of the universe.
One major obstacle is that for quantum field theory in curved spacetime with a fixed metric, bosonic/fermionic operator fields supercommute for spacelike separated points. (This is a way of imposing a principle of locality.) However, in quantum gravity, the metric is dynamical, so that whether two points are spacelike separated depends on the state. In fact, they can be in a quantum superposition of being spacelike and not spacelike separated.
== Quantum mechanics and general relativity ==
=== Graviton ===
The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as the graviton. These particles act as a force particle similar to the photon of the electromagnetic interaction. Under mild assumptions, the structure of general relativity requires them to follow the quantum mechanical description of interacting theoretical spin-2 massless particles. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. The Weinberg–Witten theorem places some constraints on theories in which the graviton is a composite particle. While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly.
=== Nonrenormalizability of gravity ===
General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a corresponding quantum field theory.
However, gravity is perturbatively nonrenormalizable. For a quantum field theory to be well defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of finitely many parameters, which could, in principle, be set by experiment. For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale.
On the other hand, in quantizing gravity there are, in perturbation theory, infinitely many independent parameters (counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinite experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of the renormalization group tells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, then every one of the infinitely many unknown parameters would begin to matter, and we could make no predictions at all.
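The dimensional-analysis origin of this problem can be made explicit in a standard sketch (units with ħ = c = 1): Newton's constant has mass dimension −2, so the loop expansion is controlled by a dimensionful coupling, and each order requires new higher-curvature counterterms with undetermined coefficients:

G_{N} = \frac{1}{M_{\text{Pl}}^{2}}, \qquad \text{expansion parameter} \sim \frac{E^{2}}{M_{\text{Pl}}^{2}}, \qquad \mathcal{L}_{\text{eff}} = \frac{R}{16\pi G_{N}} + c_{1}R^{2} + c_{2}R_{\mu\nu}R^{\mu\nu} + \cdots,

where the coefficients c_i form the infinite list of parameters referred to above.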
It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, finding a reliable answer is difficult, pursued in the asymptotic safety program. Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries.
=== Quantum gravity as an effective field theory ===
In an effective field theory, not all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory. Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally.
By treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses. Another example is the calculation of the corrections to the Bekenstein-Hawking entropy formula.
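The correction to the Newtonian potential referred to here takes the following form in the effective-field-theory treatment (with the long-distance coefficients as computed by Bjerrum-Bohr, Donoghue and Holstein; earlier papers quote convention-dependent values, so the numbers should be read as illustrative):

V(r) = -\frac{G m_{1} m_{2}}{r}\left[\,1 + 3\,\frac{G(m_{1}+m_{2})}{r c^{2}} + \frac{41}{10\pi}\,\frac{G\hbar}{r^{2}c^{3}}\,\right].

The first correction is a classical post-Newtonian term; only the last, ħ-dependent term is a genuine quantum-gravitational effect.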
=== Spacetime background dependence ===
A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While simple to grasp in principle, this is a complex idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in spacetime.
On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory.
==== String theory ====
String theory can be seen as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to space-time in a dynamic way.
Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory: it may show a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence), which is a weak form of background dependence.
==== Background independent theories ====
Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory.
Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks.
=== Semi-classical quantum gravity ===
Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation.
Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles).
=== Problem of time ===
A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories, time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time. In contrast, general relativity treats time as a dynamical variable which relates directly with matter and moreover requires the Hamiltonian constraint to vanish. Because this variability of time has been observed macroscopically, it removes any possibility of employing a fixed notion of time, similar to the conception of time in quantum theory, at the macroscopic level.
== Candidate theories ==
There are a number of proposed quantum gravity theories. Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments become available.
=== String theory ===
The central idea of string theory is to replace the classical concept of a point particle in quantum field theory with a quantum theory of one-dimensional extended objects: string theory. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time.
In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. As presently understood, however, string theory admits a very large number (10^500 by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge.
=== Loop quantum gravity ===
Loop quantum gravity seriously considers general relativity's insight that spacetime is a dynamical field and is therefore a quantum object. Its second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) also affects the structure of space.
The main result of loop quantum gravity is that there is a granular structure of space at the Planck length. This is derived from the following considerations: In the case of electromagnetism, the quantum operator representing the energy of each frequency of the field has a discrete spectrum. Thus the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have discrete spectra. Thus area and volume of any portion of space are also quantized, where the quanta are elementary quanta of space. It follows, then, that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory.
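For instance, in the standard loop quantization the area operator has the discrete spectrum below (quoted for illustration; γ is the Immirzi parameter and the sum runs over the spins j_i of the network edges puncturing the surface):

A = 8\pi\gamma\,\ell_{\mathrm{P}}^{2}\sum_{i}\sqrt{j_{i}(j_{i}+1)}, \qquad j_{i} \in \{\tfrac{1}{2}, 1, \tfrac{3}{2}, \dots\},

so no measurable area can be smaller than a fixed multiple of the Planck area.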
The quantum state of spacetime is described in the theory by means of a mathematical structure called spin networks. Spin networks were initially introduced by Roger Penrose in abstract form, and later shown by Carlo Rovelli and Lee Smolin to derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime.
The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields. In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps.
The dynamics of the theory is today constructed in several versions. One version starts with the canonical quantization of general relativity. The analogue of the Schrödinger equation is a Wheeler–DeWitt equation, which can be defined within the theory. In the covariant, or spinfoam formulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks.
=== Other theories ===
There are a number of other approaches to quantum gravity. The theories differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified. Such theories include:
== Experimental tests ==
As was emphasized above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, the possibility of experimentally testing quantum gravity had not received much attention prior to the late 1990s. However, since the 2000s, physicists have realized that evidence for quantum gravitational effects can guide the development of the theory. Since theoretical development has been slow, the field of phenomenological quantum gravity, which studies the possibility of experimental tests, has obtained increased attention.
The most widely pursued possibilities for quantum gravity phenomenology include gravitationally mediated entanglement, violations of Lorentz invariance, imprints of quantum gravitational effects in the cosmic microwave background (in particular its polarization), and decoherence induced by fluctuations in the space-time foam. The latter scenario has been searched for in light from gamma-ray bursts and both astrophysical and atmospheric neutrinos, placing limits on phenomenological quantum gravity parameters.
ESA's INTEGRAL satellite measured the polarization of photons of different wavelengths and was able to place a limit on the granularity of space of less than 10^−48 m, or 13 orders of magnitude below the Planck scale.
The BICEP2 experiment detected what was initially thought to be primordial B-mode polarization caused by gravitational waves in the early universe. Had the signal in fact been primordial in origin, it could have been an indication of quantum gravitational effects, but it soon transpired that the polarization was due to interstellar dust interference.
== See also ==
== Notes ==
== References ==
== Sources ==
Green, Michael B.; Schwarz, John H.; Witten, Edward (2012) [1987]. Superstring Theory. Vol. 1: Introduction (25th Anniversary ed.). Cambridge University Press. ISBN 978-1-107-02911-8.
Penrose, Roger (2005). The road to reality: a complete guide to the laws of the universe. New York: Knopf. ISBN 978-0-679-45443-4.
== Further reading ==
Ahluwalia, D. V. (2002). "Interface of Gravitational and Quantum Realms". Modern Physics Letters A. 17 (15–17): 1135–1145. arXiv:gr-qc/0205121. Bibcode:2002MPLA...17.1135A. doi:10.1142/S021773230200765X. S2CID 119358167.
Ashtekar, Abhay (2005). "The winding road to quantum gravity" (PDF). Current Science. 89 (12): 2064–2074. JSTOR 24111069.
Carlip, Steven (2001). "Quantum Gravity: a Progress Report". Reports on Progress in Physics. 64 (8): 885–942. arXiv:gr-qc/0108040. Bibcode:2001RPPh...64..885C. doi:10.1088/0034-4885/64/8/301. S2CID 118923209.
Hamber, H. W. (2009). Hamber, Herbert W. (ed.). Quantum gravitation: the Feynman path integral approach. Berlin: Springer. doi:10.1007/978-3-540-85293-3. hdl:11858/00-001M-0000-0013-471D-A. ISBN 978-3-540-85292-6. OCLC 248994165.
Kiefer, Claus (2007). Quantum Gravity. Oxford University Press. ISBN 978-0-19-921252-1.
Kiefer, Claus (2005). "Quantum Gravity: General Introduction and Recent Developments". Annalen der Physik. 15 (1): 129–148. arXiv:gr-qc/0508120. Bibcode:2006AnP...518..129K. doi:10.1002/andp.200510175. S2CID 12984346.
Lämmerzahl, Claus, ed. (2003). Quantum Gravity: From Theory to Experimental Search. Lecture Notes in Physics. Springer. ISBN 978-3-540-40810-9.
Rovelli, Carlo (2004). Quantum Gravity. Cambridge University Press. ISBN 978-0-521-83733-0.
== External links ==
Weinstein, Steven; Rickles, Dean. "Quantum Gravity". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
"Planck Era" and "Planck Time" Archived 2018-11-28 at the Wayback Machine (up to 10−43 seconds after birth of Universe) (University of Oregon).
"Quantum Gravity", BBC Radio 4 discussion with John Gribbin, Lee Smolin and Janna Levin (In Our Time, February 22, 2001) | Wikipedia/Quantum_theory_of_gravity |
In particle physics and physical cosmology, Planck units are a system of units of measurement defined exclusively in terms of four universal physical constants: c, G, ħ, and kB (described further below). Expressing one of these physical constants in terms of Planck units yields a numerical value of 1. They are a system of natural units, defined using fundamental properties of nature (specifically, properties of free space) rather than properties of a chosen prototype object. Originally proposed in 1899 by German physicist Max Planck, they are relevant in research on unified theories such as quantum gravity.
The term Planck scale refers to quantities of space, time, energy and other units that are similar in magnitude to corresponding Planck units. This region may be characterized by particle energies of around 10^19 GeV or 10^9 J, time intervals of around 5×10^−44 s and lengths of around 10^−35 m (approximately the energy-equivalent of the Planck mass, the Planck time and the Planck length, respectively). At the Planck scale, the predictions of the Standard Model, quantum field theory and general relativity are not expected to apply, and quantum effects of gravity are expected to dominate. One example is represented by the conditions in the first 10^−43 seconds of our universe after the Big Bang, approximately 13.8 billion years ago.
The four universal constants that, by definition, have a numeric value 1 when expressed in these units are:
c, the speed of light in vacuum,
G, the gravitational constant,
ħ, the reduced Planck constant, and
kB, the Boltzmann constant.
Variants of the basic idea of Planck units exist, such as alternate choices of normalization that give other numeric values to one or more of the four constants above.
== Introduction ==
Any system of measurement may be assigned a mutually independent set of base quantities and associated base units, from which all other quantities and units may be derived. In the International System of Units, for example, the SI base quantities include length with the associated unit of the metre. In the system of Planck units, a similar set of base quantities and associated units may be selected, in terms of which other quantities and coherent units may be expressed.: 1215 The Planck unit of length has become known as the Planck length, and the Planck unit of time is known as the Planck time, but this nomenclature has not been established as extending to all quantities.
All Planck units are derived from the dimensional universal physical constants that define the system, and in a convention in which these units are omitted (i.e. treated as having the dimensionless value 1), these constants are then eliminated from equations of physics in which they appear. For example, Newton's law of universal gravitation,
F = G\,\frac{m_{1}m_{2}}{r^{2}} = \left(\frac{F_{\text{P}}\,l_{\text{P}}^{2}}{m_{\text{P}}^{2}}\right)\frac{m_{1}m_{2}}{r^{2}},
can be expressed as:
\frac{F}{F_{\text{P}}} = \frac{\left(\dfrac{m_{1}}{m_{\text{P}}}\right)\left(\dfrac{m_{2}}{m_{\text{P}}}\right)}{\left(\dfrac{r}{l_{\text{P}}}\right)^{2}}.
Both equations are dimensionally consistent and equally valid in any system of quantities, but the second equation, with G absent, is relating only dimensionless quantities since any ratio of two like-dimensioned quantities is a dimensionless quantity. If, by a shorthand convention, it is understood that each physical quantity is the corresponding ratio with a coherent Planck unit (or "expressed in Planck units"), the ratios above may be expressed simply with the symbols of physical quantity, without being scaled explicitly by their corresponding unit:
F' = \frac{m_{1}'\,m_{2}'}{r'^{2}}.
This last equation (without G) is valid with F′, m1′, m2′, and r′ being the dimensionless ratio quantities corresponding to the standard quantities, written e.g. F′ ≘ F or F′ = F/FP, but not as a direct equality of quantities. This may seem to be "setting the constants c, G, etc., to 1" if the correspondence of the quantities is thought of as equality. For this reason, Planck or other natural units should be employed with care. Referring to "G = c = 1", Paul S. Wesson wrote that, "Mathematically it is an acceptable trick which saves labour. Physically it represents a loss of information and can lead to confusion."
== History and definition ==
The concept of natural units was introduced in 1874, when George Johnstone Stoney, noting that electric charge is quantized, derived units of length, time, and mass, now named Stoney units in his honor. Stoney chose his units so that G, c, and the electron charge e would be numerically equal to 1. In 1899, one year before the advent of quantum theory, Max Planck introduced what became later known as the Planck constant. At the end of the paper, he proposed the base units that were later named in his honor. The Planck units are based on the quantum of action, now usually known as the Planck constant, which appeared in the Wien approximation for black-body radiation. Planck underlined the universality of the new unit system, writing:
... die Möglichkeit gegeben ist, Einheiten für Länge, Masse, Zeit und Temperatur aufzustellen, welche, unabhängig von speciellen Körpern oder Substanzen, ihre Bedeutung für alle Zeiten und für alle, auch ausserirdische und aussermenschliche Culturen nothwendig behalten und welche daher als »natürliche Maasseinheiten« bezeichnet werden können.
... it is possible to set up units for length, mass, time and temperature, which are independent of special bodies or substances, necessarily retaining their meaning for all times and for all civilizations, including extraterrestrial and non-human ones, which can be called "natural units of measure".
Planck considered only the units based on the universal constants G, h, c, and k_B to arrive at natural units for length, time, mass, and temperature. His definitions differ from the modern ones by a factor of \sqrt{2\pi}, because the modern definitions use \hbar rather than h.
Unlike the case with the International System of Units, there is no official entity that establishes a definition of a Planck unit system. Some authors define the base Planck units to be those of mass, length and time, regarding an additional unit for temperature to be redundant. Other tabulations add, in addition to a unit for temperature, a unit for electric charge, so that either the Coulomb constant k_e or the vacuum permittivity \epsilon_0 is normalized to 1. Thus, depending on the author's choice, this charge unit is given by

q_{\text{P}} = \sqrt{4\pi\epsilon_{0}\hbar c} \approx 1.875546 \times 10^{-18}\ \text{C} \approx 11.7\,e

for k_e = 1, or

q_{\text{P}} = \sqrt{\epsilon_{0}\hbar c} \approx 5.290818 \times 10^{-19}\ \text{C} \approx 3.3\,e

for \epsilon_0 = 1. Some of these tabulations also replace mass with energy when doing so.
In SI units, the values of c, h, e and kB are exact and the values of ε0 and G in SI units respectively have relative uncertainties of 1.6×10^−10 and 2.2×10^−5. Hence, the uncertainties in the SI values of the Planck units derive almost entirely from uncertainty in the SI value of G.
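As a numerical check of the definitions, the following minimal Python sketch computes the base Planck units directly from the defining constants (the constant values are the standard 2019 SI figures, reproduced here from memory):

import math

# Defining constants in SI units; c, hbar and kB are exact by definition,
# while G carries a relative uncertainty of about 2.2e-5.
c = 299_792_458.0         # speed of light, m/s
G = 6.674_30e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054_571_817e-34  # reduced Planck constant, J s
kB = 1.380_649e-23        # Boltzmann constant, J/K

l_P = math.sqrt(hbar * G / c**3)  # Planck length       ~ 1.616e-35 m
t_P = math.sqrt(hbar * G / c**5)  # Planck time         ~ 5.391e-44 s
m_P = math.sqrt(hbar * c / G)     # Planck mass         ~ 2.176e-8 kg
T_P = m_P * c**2 / kB             # Planck temperature  ~ 1.417e32 K

print(f"l_P = {l_P:.4e} m")
print(f"t_P = {t_P:.4e} s")
print(f"m_P = {m_P:.4e} kg")
print(f"T_P = {T_P:.4e} K")

Since G enters every expression under a square root, each unit inherits roughly half of G's relative uncertainty, which is the point made above.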
Compared to Stoney units, Planck base units are all larger by a factor \sqrt{1/\alpha} \approx 11.7, where \alpha is the fine-structure constant.
== Derived units ==
In any system of measurement, units for many physical quantities can be derived from base units. Table 2 offers a sample of derived Planck units, some of which are seldom used. As with the base units, their use is mostly confined to theoretical physics because most of them are too large or too small for empirical or practical use and there are large uncertainties in their values.
Some Planck units, such as of time and length, are many orders of magnitude too large or too small to be of practical use, so that Planck units as a system are typically only relevant to theoretical physics. In some cases, a Planck unit may suggest a limit to a range of a physical quantity where present-day theories of physics apply. For example, our understanding of the Big Bang does not extend to the Planck epoch, i.e., when the universe was less than one Planck time old. Describing the universe during the Planck epoch requires a theory of quantum gravity that would incorporate quantum effects into general relativity. Such a theory does not yet exist.
Several quantities are not "extreme" in magnitude, such as the Planck mass, which is about 22 micrograms: very large in comparison with subatomic particles, and within the mass range of living organisms.: 872 Similarly, the related units of energy and of momentum are in the range of some everyday phenomena.
== Significance ==
Planck units have little anthropocentric arbitrariness, but do still involve some arbitrary choices in terms of the defining constants. Unlike the metre and second, which exist as base units in the SI system for historical reasons, the Planck length and Planck time are conceptually linked at a fundamental physical level. Consequently, natural units help physicists to reframe questions. Frank Wilczek puts it succinctly:
We see that the question [posed] is not, "Why is gravity so feeble?" but rather, "Why is the proton's mass so small?" For in natural (Planck) units, the strength of gravity simply is what it is, a primary quantity, while the proton's mass is the tiny number 1/13 quintillion.
While it is true that the electrostatic repulsive force between two protons (alone in free space) greatly exceeds the gravitational attractive force between the same two protons, this is not about the relative strengths of the two fundamental forces.
When Planck proposed his units, the goal was only that of establishing a universal ("natural") way of measuring objects, without giving any special meaning to quantities that measured one single unit. During the 1950s, multiple authors including Lev Landau and Oskar Klein argued that quantities on the order of the Planck scale indicated the limits of the validity of quantum field theory. John Archibald Wheeler proposed in 1955 that quantum fluctuations of spacetime become significant at the Planck scale, though at the time he was unaware of the Planck units.
== Planck scale ==
In particle physics and physical cosmology, the Planck scale is an energy scale around 1.22×10^28 eV (the Planck energy, corresponding to the energy equivalent of the Planck mass, 2.17645×10^−8 kg) at which quantum effects of gravity become significant. At this scale, present descriptions and theories of sub-atomic particle interactions in terms of quantum field theory break down and become inadequate, due to the impact of the apparent non-renormalizability of gravity within current theories.
=== Relationship to gravity ===
At the Planck length scale, the strength of gravity is expected to become comparable with the other forces, and it has been theorized that all the fundamental forces are unified at that scale, but the exact mechanism of this unification remains unknown. The Planck scale is therefore the point at which the effects of quantum gravity can no longer be ignored in other fundamental interactions, where current calculations and approaches begin to break down, and a means to take account of its impact is necessary. On these grounds, it has been speculated that it may be an approximate lower limit at which a black hole could be formed by collapse.
While physicists have a fairly good understanding of the other fundamental interactions of forces on the quantum level, gravity is problematic, and cannot be integrated with quantum mechanics at very high energies using the usual framework of quantum field theory. At lesser energy levels it is usually ignored, while for energies approaching or exceeding the Planck scale, a new theory of quantum gravity is necessary. Approaches to this problem include string theory and M-theory, loop quantum gravity, noncommutative geometry, and causal set theory.
=== In cosmology ===
In Big Bang cosmology, the Planck epoch or Planck era is the earliest stage of the Big Bang, before the time passed was equal to the Planck time, tP, or approximately 10^−43 seconds. There is no currently available physical theory to describe such short times, and it is not clear in what sense the concept of time is meaningful for values smaller than the Planck time. It is generally assumed that quantum effects of gravity dominate physical interactions at this time scale. At this scale, the unified force of the Standard Model is assumed to be unified with gravitation. Immeasurably hot and dense, the state of the Planck epoch was succeeded by the grand unification epoch, where gravitation is separated from the unified force of the Standard Model, in turn followed by the inflationary epoch, which ended after about 10^−32 seconds (or about 10^11 tP).
Table 3 lists properties of the observable universe today expressed in Planck units.
After the measurement of the cosmological constant (Λ) in 1998, estimated at 10^−122 in Planck units, it was noted that this is suggestively close to the reciprocal of the age of the universe (T) squared. Barrow and Shaw proposed a modified theory in which Λ is a field evolving in such a way that its value remains Λ ~ T^−2 throughout the history of the universe.
=== Analysis of the units ===
==== Planck length ====
The Planck length, denoted ℓP, is a unit of length defined as:
\ell_{\mathrm{P}} = \sqrt{\frac{\hbar G}{c^{3}}}
It is equal to 1.616255(18)×10^−35 m (the two digits enclosed by parentheses are the estimated standard error associated with the reported numerical value) or about 10^−20 times the diameter of a proton. It can be motivated in various ways, such as considering a particle whose reduced Compton wavelength is comparable to its Schwarzschild radius, though whether those concepts are in fact simultaneously applicable is open to debate. (The same heuristic argument simultaneously motivates the Planck mass.)
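The heuristic can be made explicit: equating a particle's reduced Compton wavelength to its Schwarzschild radius (an order-of-magnitude argument only, ignoring the factor of 2) singles out the Planck mass, and the corresponding wavelength is the Planck length:

\frac{\hbar}{mc} \sim \frac{2Gm}{c^{2}} \;\Longrightarrow\; m \sim \sqrt{\frac{\hbar c}{G}} = m_{\mathrm{P}}, \qquad \frac{\hbar}{m_{\mathrm{P}}c} = \sqrt{\frac{\hbar G}{c^{3}}} = \ell_{\mathrm{P}}.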
The Planck length is a distance scale of interest in speculations about quantum gravity. The Bekenstein–Hawking entropy of a black hole is one-fourth the area of its event horizon in units of Planck length squared.: 370 Since the 1950s, it has been conjectured that quantum fluctuations of the spacetime metric might make the familiar notion of distance inapplicable below the Planck length. This is sometimes expressed by saying that "spacetime becomes a foam at the Planck scale". It is possible that the Planck length is the shortest physically measurable distance, since any attempt to investigate the possible existence of shorter distances, by performing higher-energy collisions, would result in black hole production. Higher-energy collisions, rather than splitting matter into finer pieces, would simply produce bigger black holes.
The strings of string theory are modeled to be on the order of the Planck length. In theories with large extra dimensions, the Planck length calculated from the observed value of G can be smaller than the true, fundamental Planck length.: 61
==== Planck time ====
The Planck time, denoted tP, is defined as:
t_{\mathrm{P}} = \frac{\ell_{\mathrm{P}}}{c} = \sqrt{\frac{\hbar G}{c^{5}}}
This is the time required for light to travel a distance of 1 Planck length in vacuum, which is a time interval of approximately 5.39×10^−44 s. No current physical theory can describe timescales shorter than the Planck time, such as the earliest events after the Big Bang. Some conjectures state that the structure of time need not remain smooth on intervals comparable to the Planck time.
==== Planck energy ====
The Planck energy EP is approximately equal to the energy released in the combustion of the fuel in an automobile fuel tank (57.2 L at 34.2 MJ/L of chemical energy). The ultra-high-energy cosmic ray observed in 1991 had a measured energy of about 50 J, equivalent to about 2.5×10^−8 EP.
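The fuel-tank comparison is a matter of simple arithmetic (a worked check of the figures quoted above):

E_{\mathrm{P}} = \sqrt{\frac{\hbar c^{5}}{G}} \approx 1.96 \times 10^{9}\ \text{J}, \qquad 57.2\ \text{L} \times 34.2\ \text{MJ/L} \approx 1.96 \times 10^{9}\ \text{J}.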
Proposals for theories of doubly special relativity posit that, in addition to the speed of light, an energy scale is also invariant for all inertial observers. Typically, this energy scale is chosen to be the Planck energy.
==== Planck unit of force ====
The Planck unit of force may be thought of as the derived unit of force in the Planck system if the Planck units of time, length, and mass are considered to be base units.
F_{\text{P}} = \frac{m_{\text{P}}\,c}{t_{\text{P}}} = \frac{c^{4}}{G} \approx 1.2103 \times 10^{44}\ \text{N}
It is the gravitational attractive force of two bodies of 1 Planck mass each that are held 1 Planck length apart. One convention for the Planck charge is to choose it so that the electrostatic repulsion of two objects with Planck charge and mass that are held 1 Planck length apart balances the Newtonian attraction between them.
Some authors have argued that the Planck force is on the order of the maximum force that can occur between two bodies. However, the validity of these conjectures has been disputed.
==== Planck temperature ====
The Planck temperature TP is 1.416784(16)×10^32 K. At this temperature, the wavelength of light emitted by thermal radiation reaches the Planck length. There are no known physical models able to describe temperatures greater than TP; a quantum theory of gravity would be required to model the extreme energies attained. Hypothetically, a system in thermal equilibrium at the Planck temperature might contain Planck-scale black holes, constantly being formed from thermal radiation and decaying via Hawking evaporation. Adding energy to such a system might decrease its temperature by creating larger black holes, whose Hawking temperature is lower.
== Nondimensionalized equations ==
Physical quantities that have different dimensions (such as time and length) cannot be equated even if they are numerically equal (e.g., 1 second is not the same as 1 metre). In theoretical physics, however, this scruple may be set aside, by a process called nondimensionalization. The effective result is that many fundamental equations of physics, which often include some of the constants used to define Planck units, become equations where these constants are replaced by a 1.
Examples include the energy–momentum relation E^{2} = (mc^{2})^{2} + (pc)^{2} (which becomes E^{2} = m^{2} + p^{2}) and the Dirac equation (i\hbar\gamma^{\mu}\partial_{\mu} - mc)\psi = 0 (which becomes (i\gamma^{\mu}\partial_{\mu} - m)\psi = 0).
== Alternative choices of normalization ==
As already stated above, Planck units are derived by "normalizing" the numerical values of certain fundamental constants to 1. These normalizations are neither the only ones possible nor necessarily the best. Moreover, the choice of what factors to normalize, among the factors appearing in the fundamental equations of physics, is not evident, and the values of the Planck units are sensitive to this choice.
The factor 4π is ubiquitous in theoretical physics because in three-dimensional space, the surface area of a sphere of radius r is 4πr2. This, along with the concept of flux, are the basis for the inverse-square law, Gauss's law, and the divergence operator applied to flux density. For example, gravitational and electrostatic fields produced by point objects have spherical symmetry, and so the electric flux through a sphere of radius r around a point charge will be distributed uniformly over that sphere. From this, it follows that a factor of 4πr2 will appear in the denominator of Coulomb's law in rationalized form.: 214–15 (Both the numerical factor and the power of the dependence on r would change if space were higher-dimensional; the correct expressions can be deduced from the geometry of higher-dimensional spheres.: 51 ) Likewise for Newton's law of universal gravitation: a factor of 4π naturally appears in Poisson's equation when relating the gravitational potential to the distribution of matter.: 56
Hence a substantial body of physical theory developed since Planck's 1899 paper suggests normalizing not G but 4πG (or 8πG) to 1. Doing so would introduce a factor of 1/4π (or 1/8π) into the nondimensionalized form of the law of universal gravitation, consistent with the modern rationalized formulation of Coulomb's law in terms of the vacuum permittivity. In fact, alternative normalizations frequently preserve the factor of 1/4π in the nondimensionalized form of Coulomb's law as well, so that the nondimensionalized Maxwell's equations for electromagnetism and gravitoelectromagnetism both take the same form as those for electromagnetism in SI, which do not have any factors of 4π. When this is applied to electromagnetic constants, ε0, this unit system is called "rationalized". When applied additionally to gravitation and Planck units, these are called rationalized Planck units and are seen in high-energy physics.
The rationalized Planck units are defined so that c = 4πG = ħ = ε0 = kB = 1.
There are several possible alternative normalizations.
=== Gravitational constant ===
In 1899, Newton's law of universal gravitation was still seen as exact, rather than as a convenient approximation holding for "small" velocities and masses (the approximate nature of Newton's law was shown following the development of general relativity in 1915). Hence Planck normalized to 1 the gravitational constant G in Newton's law. In theories emerging after 1899, G nearly always appears in formulae multiplied by 4π or a small integer multiple thereof. Hence, a choice to be made when designing a system of natural units is which, if any, instances of 4π appearing in the equations of physics are to be eliminated via the normalization.
Normalizing 4πG to 1 (and therefore setting G = 1/4π):
Gauss's law for gravity becomes Φg = −M (rather than Φg = −4πM in Planck units).
Eliminates 4πG from the Poisson equation.
Eliminates 4πG in the gravitoelectromagnetic (GEM) equations, which hold in weak gravitational fields or locally flat spacetime. These equations have the same form as Maxwell's equations (and the Lorentz force equation) of electromagnetism, with mass density replacing charge density, and with 1/4πG replacing ε0.
Normalizes the characteristic impedance Zg of gravitational radiation in free space to 1 (normally expressed as 4πG/c).
Eliminates 4πG from the Bekenstein–Hawking formula (for the entropy of a black hole in terms of its mass mBH and the area of its event horizon ABH), which simplifies to SBH = πABH = (mBH)^2.
Setting 8πG = 1 (and therefore setting G = 1/8π). This would eliminate 8πG from the Einstein field equations, Einstein–Hilbert action, and the Friedmann equations, for gravitation. Planck units modified so that 8πG = 1 are known as reduced Planck units, because the Planck mass is divided by \sqrt{8\pi}. Also, the Bekenstein–Hawking formula for the entropy of a black hole simplifies to SBH = (mBH)^2/2 = 2πABH.
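Numerically, the reduced Planck mass that results from this choice is (standard values, quoted for orientation):

m_{\mathrm{P}}^{\text{reduced}} = \sqrt{\frac{\hbar c}{8\pi G}} \approx 4.34 \times 10^{-9}\ \text{kg} \approx 2.44 \times 10^{18}\ \text{GeV}/c^{2}.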
== See also ==
cGh physics
Dimensional analysis
Doubly special relativity
Trans-Planckian problem
Zero-point energy
== Explanatory notes ==
== References ==
== External links ==
Value of the fundamental constants, including the Planck units, as reported by the National Institute of Standards and Technology (NIST).
The Planck scale: relativity meets quantum mechanics meets gravity from 'Einstein Light' at UNSW
In quantum field theory, the term moduli (sg.: modulus; more properly moduli fields) is sometimes used to refer to scalar fields whose potential energy function has continuous families of global minima. Such potential functions frequently occur in supersymmetric systems. The term "modulus" is borrowed from mathematics (or more specifically, moduli space is borrowed from algebraic geometry), where it is used synonymously with "parameter". The word moduli (Moduln in German) first appeared in 1857 in Bernhard Riemann's celebrated paper "Theorie der Abel'schen Functionen".
== Moduli spaces in quantum field theories ==
In quantum field theories, the possible vacua are usually labeled by the vacuum expectation values of scalar fields, as Lorentz invariance forces the vacuum expectation values of any higher spin fields to vanish. These vacuum expectation values can take any value for which the potential function is a minimum. Consequently, when the potential function has continuous families of global minima, the space of vacua for the quantum field theory is a manifold (or orbifold), usually called the vacuum manifold. This manifold is often called the moduli space of vacua, or just the moduli space, for short.
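The simplest illustrative example (not specific to any one theory): a single complex scalar with a Mexican-hat potential has a circle of global minima, so the vacuum manifold is S^1, parametrized by the angular field θ:

V(\phi) = \lambda\left(|\phi|^{2} - v^{2}\right)^{2}, \qquad \text{minima at } \phi = v\,e^{i\theta}, \quad \theta \in [0, 2\pi).

In this particular case the minima are related by a U(1) symmetry; the moduli spaces discussed below are more interesting precisely because their vacua need not be related by any symmetry.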
The term moduli is also used in string theory to refer to various continuous parameters that label possible string backgrounds: the expectation value of the dilaton field, the parameters (e.g. the radius and complex structure) which govern the shape of the compactification manifold, et cetera. These parameters are represented, in the quantum field theory that approximates the string theory at low energies, by the vacuum expectation values of massless scalar fields, making contact with the usage described above. In string theory, the term "moduli space" is often used specifically to refer to the space of all possible string backgrounds.
== Moduli spaces of supersymmetric gauge theories ==
In general quantum field theories, even if the classical potential energy is minimized over a large set of possible expectation values, once quantum corrections are included it is generically the case that nearly all of these configurations cease to minimize the energy. The result is that the set of vacua of the quantum theory is generally much smaller than that of the classical theory. A notable exception occurs when the various vacua in question are related by a symmetry which guarantees that their energy levels remain exactly degenerate.
The situation is very different in supersymmetric quantum field theories. In general, these possess large moduli spaces of vacua which are not related by any symmetry; for example, the masses of the various excitations may differ at various points on the moduli space. The moduli spaces of supersymmetric gauge theories are in general easier to calculate than those of nonsupersymmetric theories because supersymmetry restricts the allowed geometries of the moduli space even when quantum corrections are included.
=== Allowed moduli spaces of 4-dimensional theories ===
The more supersymmetry there is, the stronger the restriction on the vacuum manifold. Therefore, if a restriction appears below for a given number N of spinor supercharges, then it also holds for all greater values of N.
==== N=1 Theories ====
The first restriction on the geometry of a moduli space was found in 1979 by Bruno Zumino and published in the article "Supersymmetry and Kähler Manifolds". He considered an N=1 theory in 4-dimensions with global supersymmetry. N=1 means that the fermionic components of the supersymmetry algebra can be assembled into a single Majorana supercharge. The only scalars in such a theory are the complex scalars of the chiral superfields. He found that the vacuum manifold of allowed vacuum expectation values for these scalars is not only complex but also a Kähler manifold.
If gravity is included in the theory, so that there is local supersymmetry, then the resulting theory is called a supergravity theory and the restriction on the geometry of the moduli space becomes stronger. The moduli space must not only be Kähler, but also the Kähler form must lift to integral cohomology. Such manifolds are called Hodge manifolds. The first example appeared in the 1979 article "Spontaneous Symmetry Breaking and Higgs Effect in Supergravity Without Cosmological Constant" and the general statement appeared 3 years later in "Quantization of Newton's Constant in Certain Supergravity Theories".
==== N=2 Theories ====
In extended 4-dimensional theories with N=2 supersymmetry, corresponding to a single Dirac spinor supercharge, the conditions are stronger. The N=2 supersymmetry algebra contains two representations with scalars, the vector multiplet which contains a complex scalar and the hypermultiplet which contains two complex scalars. The moduli space of the vector multiplets is called the Coulomb branch while that of the hypermultiplets is called the Higgs branch. The total moduli space is locally a product of these two branches, as nonrenormalization theorems imply that the metric of each is independent of the fields of the other multiplet. (See, for example, Argyres, Non-Perturbative Dynamics Of Four-Dimensional Supersymmetric Field Theories, pp. 6–7, for further discussion of the local product structure.)
In the case of global N=2 supersymmetry, in other words in the absence of gravity, the Coulomb branch of the moduli space is a special Kähler manifold. The first example of this restriction appeared in the 1984 article Potentials and Symmetries of General Gauged N=2 Supergravity: Yang-Mills Models by Bernard de Wit and Antoine Van Proeyen, while a general geometric description of the underlying geometry, called special geometry, was presented by Andrew Strominger in his 1990 paper Special Geometry.
The Higgs branch is a hyperkähler manifold as was shown by Luis Alvarez-Gaume and Daniel Freedman in their 1981 paper Geometrical Structure and Ultraviolet Finiteness in the Supersymmetric Sigma Model. Including gravity the supersymmetry becomes local. Then one needs to add the same Hodge condition to the special Kahler Coulomb branch as in the N=1 case. Jonathan Bagger and Edward Witten demonstrated in their 1982 paper Matter Couplings in N=2 Supergravity that in this case, the Higgs branch must be a quaternionic Kähler manifold.
==== N>2 Supersymmetry ====
In extended supergravities with N>2 the moduli space must always be a symmetric space.
== References ==
Andrianopoli, L.; Bertolini, M.; Ceresole, A.; D'Auria, R.; Ferrara, S.; Fré, P.; Magri, T. (Sep 1997). "N = 2 supergravity and N = 2 super Yang-Mills theory on general scalar manifolds: Symplectic covariance gaugings and the momentum map". Journal of Geometry and Physics. 23 (2): 111–189. arXiv:hep-th/9605032. Bibcode:1997JGP....23..111A. doi:10.1016/S0393-0440(97)00002-8, contains a review of restrictions on moduli spaces in various supersymmetric gauge theories. | Wikipedia/Moduli_(physics) |
Number theory is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers).
Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory can often be understood through the study of analytical objects, such as the Riemann zeta function, that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, as for instance how irrational numbers can be approximated by fractions (Diophantine approximation).
Number theory is one of the oldest branches of mathematics alongside geometry. One quirk of number theory is that it deals with statements that are simple to understand but are very difficult to solve. Examples of this are Fermat's Last Theorem, which was proved 358 years after the original formulation, and Goldbach's conjecture, which remains unsolved since the 18th century. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." It was regarded as the example of pure mathematics with no applications outside mathematics until the 1970s, when it became known that prime numbers would be used as the basis for the creation of public-key cryptography algorithms.
== History ==
Number theory is the branch of mathematics that studies integers and their properties and relations. The integers comprise a set that extends the set of natural numbers {\displaystyle \{1,2,3,\dots \}} to include the number {\displaystyle 0} and the negations of the natural numbers {\displaystyle \{-1,-2,-3,\dots \}}. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers).
Number theory is closely related to arithmetic and some authors use the terms as synonyms. However, the word "arithmetic" is used today to mean the study of numerical operations and extends to the real numbers. In a more specific sense, number theory is restricted to the study of integers and focuses on their properties and relationships. Traditionally, it is known as higher arithmetic. By the early twentieth century, the term number theory had been widely adopted. The term number means whole numbers, which refers to either the natural numbers or the integers.
Elementary number theory studies aspects of integers that can be investigated using elementary methods such as elementary proofs. Analytic number theory, by contrast, relies on complex numbers and techniques from analysis and calculus. Algebraic number theory employs algebraic structures such as fields and rings to analyze the properties of and relations between numbers. Geometric number theory uses concepts from geometry to study numbers. Further branches of number theory are probabilistic number theory, combinatorial number theory, computational number theory, and applied number theory, which examines the application of number theory to science and technology.
=== Origins ===
==== Ancient Mesopotamia ====
The earliest historical find of an arithmetical nature is a fragment of a table: Plimpton 322 (Larsa, Mesopotamia, c. 1800 BC), a broken clay tablet, contains a list of "Pythagorean triples", that is, integers {\displaystyle (a,b,c)} such that {\displaystyle a^{2}+b^{2}=c^{2}}. The triples are too numerous and too large to have been obtained by brute force. The heading over the first column reads: "The takiltum of the diagonal which has been subtracted such that the width..."
The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity
{\displaystyle \left({\frac {1}{2}}\left(x-{\frac {1}{x}}\right)\right)^{2}+1=\left({\frac {1}{2}}\left(x+{\frac {1}{x}}\right)\right)^{2},}
which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by {\displaystyle c/a}, presumably for actual use as a "table", for example, with a view to applications.
It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own many centuries later. It has been suggested instead that the table was a source of numerical examples for school problems. Plimpton 322 tablet is the only surviving evidence of what today would be called number theory within Babylonian mathematics, though a kind of Babylonian algebra was much more developed.
==== Ancient Greece ====
Although other civilizations probably influenced Greek mathematics at the beginning, all evidence of such borrowings appear relatively late, and it is likely that Greek arithmētikḗ (the theoretical or philosophical study of numbers) is an indigenous tradition. Aside from a few fragments, most of what is known about Greek mathematics in the 6th to 4th centuries BC (the Archaic and Classical periods) comes through either the reports of contemporary non-mathematicians or references from mathematical works in the early Hellenistic period. In the case of number theory, this means largely Plato, Aristotle, and Euclid.
Plato had a keen interest in mathematics, and distinguished clearly between arithmētikḗ and calculation (logistikē). Plato reports in his dialogue Theaetetus that Theodorus had proven that {\displaystyle {\sqrt {3}},{\sqrt {5}},\dots ,{\sqrt {17}}} are irrational. Theaetetus, a disciple of Theodorus's, worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. Aristotle further claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: Platonem ferunt didicisse Pythagorea omnia ("They say Plato learned all things Pythagorean").
Euclid devoted part of his Elements (Books VII–IX) to topics that belong to elementary number theory, including prime numbers and divisibility. He gave an algorithm, the Euclidean algorithm, for computing the greatest common divisor of two numbers (Prop. VII.2) and a proof implying the infinitude of primes (Prop. IX.20). There is also older material likely based on Pythagorean teachings (Prop. IX.21–34), such as "odd times even is even" and "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it". This is all that is needed to prove that {\displaystyle {\sqrt {2}}} is irrational. Pythagoreans apparently gave great importance to the odd and the even. The discovery that {\displaystyle {\sqrt {2}}} is irrational is credited to the early Pythagoreans, sometimes assigned to Hippasus, who was expelled or split from the Pythagorean community as a result. This forced a distinction between numbers (integers and the rationals, the subjects of arithmetic) and lengths and proportions (which may be identified with real numbers, whether rational or not).
The Pythagorean tradition also spoke of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th centuries).
An epigram published by Lessing in 1773 appears to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as it is known, such equations were first successfully treated by Indian mathematicians. It is not known whether Archimedes himself had a method of solution.
===== Late Antiquity =====
Aside from the elementary work of Neopythagoreans such as Nicomachus and Theon of Smyrna, the foremost authority in arithmētikḗ in Late Antiquity was Diophantus of Alexandria, who probably lived in the 3rd century AD, approximately five hundred years after Euclid. Little is known about his life, but he wrote two works that are extant: On Polygonal Numbers, a short treatise written in the Euclidean manner on the subject, and the Arithmetica, a work on pre-modern algebra (namely, the use of algebra to solve numerical problems). Six out of the thirteen books of Diophantus's Arithmetica survive in the original Greek and four more survive in an Arabic translation. The Arithmetica is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form {\displaystyle f(x,y)=z^{2}} or {\displaystyle f(x,y,z)=w^{2}}. In modern parlance, Diophantine equations are polynomial equations to which rational or integer solutions are sought.
==== Asia ====
The Chinese remainder theorem appears as an exercise in Sunzi Suanjing (between the third and fifth centuries). (There is one important step glossed over in Sunzi's solution: it is the problem that was later solved by Āryabhaṭa's Kuṭṭaka – see below.) The result was later generalized with a complete solution called Da-yan-shu (大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections, which was translated into English in the early nineteenth century by the British missionary Alexander Wylie. There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere.
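A minimal sketch of the Chinese remainder theorem, solving Sunzi's classic exercise (a number leaving remainder 2 mod 3, 3 mod 5, and 2 mod 7); the helper name crt is ours, and the construction uses the modern modular-inverse formulation rather than Sunzi's or Qin Jiushao's original procedure.

```python
from math import gcd

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise coprime moduli."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        assert gcd(m, mi) == 1, "moduli must be pairwise coprime"
        # Find t with x + m*t = r (mod mi), i.e. t = (r - x) * m^-1 (mod mi).
        t = ((r - x) * pow(m, -1, mi)) % mi
        x += m * t
        m *= mi
    return x % m

# Sunzi's exercise: remainder 2 mod 3, 3 mod 5, 2 mod 7.
print(crt([2, 3, 2], [3, 5, 7]))  # -> 23
```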
While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an autochthonous tradition; in particular, there is no evidence that Euclid's Elements reached India before the eighteenth century. Āryabhaṭa (476–550 AD) showed that pairs of simultaneous congruences {\displaystyle n\equiv a_{1}{\bmod {m}}_{1}}, {\displaystyle n\equiv a_{2}{\bmod {m}}_{2}} could be solved by a method he called kuṭṭaka, or pulveriser; this is a procedure close to (a generalization of) the Euclidean algorithm, which was probably discovered independently in India. Āryabhaṭa seems to have had in mind applications to astronomical calculations.
Brahmagupta (628 AD) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century).
Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke.
==== Arithmetic in the Islamic golden age ====
In the early ninth century, the caliph al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the Sindhind, which may or may not be Brahmagupta's Brāhmasphuṭasiddhānta).
Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912).
Part of the treatise al-Fakhri (by al-Karajī, 953 – c. 1029) builds on it to some extent. According to Roshdi Rashed, Al-Karajī's contemporary Ibn al-Haytham knew what would later be called Wilson's theorem.
==== Western Europe in the Middle Ages ====
Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in north Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus' Arithmetica.
=== Early modern number theory ===
==== Fermat ====
Pierre de Fermat (1607–1665) never published his writings but communicated through correspondence instead. Accordingly, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes. Although he drew inspiration from classical sources, in his notes and letters Fermat scarcely wrote any proofs—he had no models in the area.
Over his lifetime, Fermat made the following contributions to the field:
One of Fermat's first interests was perfect numbers (which appear in Euclid, Elements IX) and amicable numbers; these topics led him to work on integer divisors, which were from the beginning among the subjects of the correspondence (1636 onwards) that put him in touch with the mathematical community of the day.
In 1638, Fermat claimed, without proof, that all whole numbers can be expressed as the sum of four squares or fewer.
Fermat's little theorem (1640): if a is not divisible by a prime p, then {\displaystyle a^{p-1}\equiv 1{\bmod {p}}.}
If a and b are coprime, then {\displaystyle a^{2}+b^{2}} is not divisible by any prime congruent to −1 modulo 4; and every prime congruent to 1 modulo 4 can be written in the form {\displaystyle a^{2}+b^{2}}. These two statements also date from 1640; in 1659, Fermat stated to Huygens that he had proven the latter statement by the method of infinite descent.
In 1657, Fermat posed the problem of solving {\displaystyle x^{2}-Ny^{2}=1} as a challenge to English mathematicians. The problem was solved in a few months by Wallis and Brouncker. Fermat considered their solution valid, but pointed out they had provided an algorithm without a proof (as had Jayadeva and Bhaskara, though Fermat was not aware of this). He stated that a proof could be found by infinite descent.
Fermat stated and proved (by infinite descent) in the appendix to Observations on Diophantus (Obs. XLV) that {\displaystyle x^{4}+y^{4}=z^{4}} has no non-trivial solutions in the integers. Fermat also mentioned to his correspondents that {\displaystyle x^{3}+y^{3}=z^{3}} has no non-trivial solutions, and that this could also be proven by infinite descent. The first known proof is due to Euler (1753; indeed by infinite descent).
Fermat claimed (Fermat's Last Theorem) to have shown there are no solutions to {\displaystyle x^{n}+y^{n}=z^{n}} for all {\displaystyle n\geq 3}; this claim appears in his annotations in the margins of his copy of Diophantus.
==== Euler ====
The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following:
Proofs for Fermat's statements. This includes Fermat's little theorem (generalised by Euler to non-prime moduli); the fact that {\displaystyle p=x^{2}+y^{2}} if and only if {\displaystyle p\equiv 1{\bmod {4}}}; initial work towards a proof that every integer is the sum of four squares (the first complete proof is by Joseph-Louis Lagrange (1770), soon improved by Euler himself); the lack of non-zero integer solutions to {\displaystyle x^{4}+y^{4}=z^{2}} (implying the case n=4 of Fermat's last theorem, the case n=3 of which Euler also proved by a related method).
Pell's equation, to which Euler was the first to attach Pell's name (mistakenly). He wrote on the link between continued fractions and Pell's equation.
First steps towards analytic number theory. In his work of sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function.
Quadratic forms. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form {\displaystyle x^{2}+Ny^{2}}, some of it prefiguring quadratic reciprocity.
Diophantine equations. Euler worked on some Diophantine equations of genus 0 and 1. In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated.
==== Lagrange, Legendre, and Gauss ====
Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations; for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them). He also studied quadratic forms in full generality (as opposed to {\displaystyle mX^{2}+nY^{2}}), including defining their equivalence relation, showing how to put them in reduced form, etc.
Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation {\displaystyle ax^{2}+by^{2}+cz^{2}=0} and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove Fermat's Last Theorem for {\displaystyle n=5} (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain).
Carl Friedrich Gauss (1777–1855) worked in a wide variety of fields in both mathematics and physics including number theory, analysis, differential geometry, geodesy, magnetism, astronomy and optics. The Disquisitiones Arithmeticae (1801), which he had completed three years before its publication, when he was 21, had an immense influence in the area of number theory and set its agenda for much of the 19th century. Gauss proved in this work the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the Disquisitiones established a link between roots of unity and number theory:
The theory of the division of the circle...which is treated in sec. 7 does not belong by itself to arithmetic, but its principles can only be drawn from higher arithmetic.
In this way, Gauss arguably made forays towards Évariste Galois's work and the area of algebraic number theory.
=== Maturity and division into subfields ===
Starting early in the nineteenth century, the following developments gradually took place:
The rise to self-consciousness of number theory (or higher arithmetic) as a field of study.
The development of much of modern mathematics necessary for basic modern number theory: complex analysis, group theory, Galois theory—accompanied by greater rigor in analysis and abstraction in algebra.
The rough subdivision of number theory into its modern subfields—in particular, analytic and algebraic number theory.
Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable. The first use of analytic ideas in number theory actually goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of complex analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms).
The American Mathematical Society awards the Cole Prize in Number Theory. Moreover, number theory is one of the three mathematical subdisciplines rewarded by the Fermat Prize.
== Main subdivisions ==
=== Elementary number theory ===
Elementary number theory deals with the topics in number theory by means of basic methods in arithmetic. Its primary subjects of study are divisibility, factorization, and primality, as well as congruences in modular arithmetic. Other topics in elementary number theory include Diophantine equations, continued fractions, integer partitions, and Diophantine approximations.
Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithms. Multiplication, for instance, is an operation that combines two numbers, referred to as factors, to form a single number, termed the product, such as {\displaystyle 2\times 3=6}.
Divisibility is a property between two nonzero integers related to division. An integer {\displaystyle a} is said to be divisible by a nonzero integer {\displaystyle b} if {\displaystyle a} is a multiple of {\displaystyle b}; that is, if there exists an integer {\displaystyle q} such that {\displaystyle a=bq}. An equivalent formulation is that {\displaystyle b} divides {\displaystyle a}, denoted by a vertical bar: {\displaystyle b|a}. Conversely, if this were not the case, then {\displaystyle a} would not be divided evenly by {\displaystyle b}, resulting in a remainder. Euclid's division lemma asserts that {\displaystyle a} and {\displaystyle b} can generally be written as {\displaystyle a=bq+r}, where the remainder {\displaystyle r<b} accounts for the leftover quantity. Elementary number theory studies divisibility rules in order to quickly identify whether a given integer is divisible by a fixed divisor. For instance, it is known that any integer is divisible by 3 if its decimal digit sum is divisible by 3.
A common divisor of several nonzero integers is an integer that divides all of them. The greatest common divisor (gcd) is the largest of such divisors. Two integers are said to be coprime or relatively prime to one another if their greatest common divisor is 1, that is, if they share no common positive divisor other than 1. The Euclidean algorithm computes the greatest common divisor of two integers {\displaystyle a,b} by means of repeatedly applying the division lemma and shifting the divisor and remainder after every step. The algorithm can be extended to solve a special case of linear Diophantine equations {\displaystyle ax+by=1}. A Diophantine equation is an equation with several unknowns and integer coefficients. Another kind of Diophantine equation is described in the Pythagorean theorem, {\displaystyle x^{2}+y^{2}=z^{2}}, whose solutions are called Pythagorean triples if they are all integers.
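A short sketch of the Euclidean algorithm and its extended form, which produces x and y with ax + by = gcd(a, b); the function names are illustrative, and nonnegative inputs are assumed.

```python
def euclid_gcd(a, b):
    """Greatest common divisor by repeated use of the division lemma."""
    while b != 0:
        a, b = b, a % b          # replace (a, b) by (divisor, remainder)
    return a

def extended_euclid(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_euclid(b, a % b)
    return g, y, x - (a // b) * y

print(euclid_gcd(252, 105))              # 21
g, x, y = extended_euclid(240, 46)
print(g, x, y, 240 * x + 46 * y)         # 2 -9 47 2
```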
Elementary number theory studies the divisibility properties of integers such as parity (even and odd numbers), prime numbers, and perfect numbers. Important number-theoretic functions include the divisor-counting function, the divisor summatory function and its modifications, and Euler's totient function. A prime number is an integer greater than 1 whose only positive divisors are 1 and the prime itself. A positive integer greater than 1 that is not prime is called a composite number. Euclid's theorem demonstrates that there are infinitely many prime numbers that comprise the set {2, 3, 5, 7, 11, ...}. The sieve of Eratosthenes was devised as an efficient algorithm for identifying all primes up to a given natural number by eliminating all composite numbers.
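A minimal sketch of the sieve of Eratosthenes as just described; the function name is ours.

```python
def sieve_of_eratosthenes(n):
    """Return all primes up to n by striking out composite numbers."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False  # p*p, p*p + p, ... are composite
    return [k for k, flag in enumerate(is_prime) if flag]

print(sieve_of_eratosthenes(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```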
Factorization is a method of expressing a number as a product. Specifically in number theory, integer factorization is the decomposition of an integer into a product of integers. The process of repeatedly applying this procedure until all factors are prime is known as prime factorization. A fundamental property of primes is shown in Euclid's lemma. It is a consequence of the lemma that if a prime divides a product of integers, then that prime divides at least one of the factors in the product. The unique factorization theorem is the fundamental theorem of arithmetic that relates to prime factorization. The theorem states that every integer greater than 1 can be factorised into a product of prime numbers and that this factorisation is unique up to the order of the factors. For example,
120 is expressed uniquely as {\displaystyle 2\times 2\times 2\times 3\times 5} or simply {\displaystyle 2^{3}\times 3\times 5}.
Modular arithmetic works with finite sets of integers and introduces the concepts of congruence and residue classes. A congruence of two integers {\displaystyle a,b} modulo {\displaystyle n} (a positive integer called the modulus) is an equivalence relation whereby {\displaystyle n|(a-b)} is true. Performing Euclidean division on both {\displaystyle a} and {\displaystyle n}, and on {\displaystyle b} and {\displaystyle n}, yields the same remainder. This is written as {\textstyle a\equiv b{\pmod {n}}}. In a manner analogous to the 12-hour clock, the sum of 4 and 9 is equal to 13, yet congruent to 1 modulo 12. A residue class modulo {\displaystyle n} is a set that contains all integers congruent to a specified {\displaystyle r} modulo {\displaystyle n}. For example, {\displaystyle 6\mathbb {Z} +1} contains all multiples of 6 incremented by 1. Modular arithmetic provides a range of formulas for rapidly solving congruences of very large powers. An influential theorem is Fermat's little theorem, which states that if a prime {\displaystyle p} is coprime to some integer {\displaystyle a}, then {\textstyle a^{p-1}\equiv 1{\pmod {p}}} is true. Euler's theorem extends this to assert that every integer {\displaystyle a} coprime to {\displaystyle n} satisfies the congruence {\displaystyle a^{\varphi (n)}\equiv 1{\pmod {n}},} where Euler's totient function {\displaystyle \varphi } counts all positive integers up to {\displaystyle n} that are coprime to {\displaystyle n}. Modular arithmetic also provides formulas that are used to solve congruences with unknowns in a similar vein to equation solving in algebra, such as the Chinese remainder theorem.
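These congruences can be checked directly; a small sketch using Python's built-in modular exponentiation, with a deliberately naive totient for clarity.

```python
from math import gcd

def totient(n):
    """Euler's phi(n): count of 1 <= k <= n with gcd(k, n) == 1 (naive)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p, a = 13, 6
print(pow(a, p - 1, p))          # Fermat's little theorem: 6^12 = 1 (mod 13)

n, a = 15, 4                     # gcd(4, 15) == 1
print(pow(a, totient(n), n))     # Euler's theorem: 4^phi(15) = 4^8 = 1 (mod 15)

print((4 + 9) % 12)              # clock arithmetic: 13 is congruent to 1 mod 12
```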
=== Analytic number theory ===
Analytic number theory, in contrast to elementary number theory, relies on complex numbers and techniques from analysis and calculus. Analytic number theory may be defined
in terms of its tools, as the study of the integers by means of tools from real and complex analysis; or
in terms of its concerns, as the study within number theory of estimates on the size and density of certain numbers (e.g., primes), as opposed to identities.
It studies the distribution of primes, the behavior of number-theoretic functions, and irrational numbers.
Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, many of the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics. The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture, the twin prime conjecture, the Hardy–Littlewood conjectures, the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory.
Analysis is the branch of mathematics that studies the limit, defined as the value to which a sequence or function tends as the argument (or index) approaches a specific value. For example, the limit of the sequence 0.9, 0.99, 0.999, ... is 1. In the context of functions, the limit of {\textstyle {\frac {1}{x}}} as {\displaystyle x} approaches infinity is 0. The complex numbers extend the real numbers with the imaginary unit {\displaystyle i} defined as the solution to {\displaystyle i^{2}=-1}. Every complex number can be expressed as {\displaystyle x+iy}, where {\displaystyle x} is called the real part and {\displaystyle y} is called the imaginary part.
The distribution of primes, described by the function {\displaystyle \pi } that counts all primes up to a given real number, is unpredictable and is a major subject of study in number theory. Elementary formulas for a partial sequence of primes, including Euler's prime-generating polynomials, have been developed. However, these cease to function as the primes become too large. The prime number theorem in analytic number theory provides a formalisation of the notion that prime numbers appear less commonly as their numerical value increases. One formulation states, informally, that the function {\displaystyle {\frac {x}{\log(x)}}} approximates {\displaystyle \pi (x)}. Another involves an offset logarithmic integral, which converges to {\displaystyle \pi (x)} more quickly.
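The approximation x/log x can be compared against an exact count; a brief sketch (the sieve-based counting helper is ours, chosen for simplicity).

```python
import math

def prime_count(n):
    """pi(n) via a simple sieve of Eratosthenes."""
    flags = [True] * (n + 1)
    flags[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if flags[p]:
            flags[p*p::p] = [False] * len(flags[p*p::p])
    return sum(flags)

for x in (10**3, 10**4, 10**5):
    pi_x = prime_count(x)
    approx = x / math.log(x)
    print(x, pi_x, round(approx, 1), round(pi_x / approx, 3))
# The ratio pi(x) / (x / log x) drifts toward 1: about 1.161, 1.132, 1.104 here.
```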
The zeta function has been demonstrated to be connected to the distribution of primes. It is defined as the series
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}={\frac {1}{1^{s}}}+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+\cdots }
that converges if {\displaystyle s} is greater than 1. Euler demonstrated a link involving the infinite product over all prime numbers, expressed as the identity
{\displaystyle \zeta (s)=\prod _{p{\text{ prime}}}\left(1-{\frac {1}{p^{s}}}\right)^{-1}.}
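Truncating both the series and the product shows the identity numerically at s = 2; the truncation points below are arbitrary illustrative choices.

```python
import math

def zeta_partial_sum(s, terms):
    return sum(1.0 / n**s for n in range(1, terms + 1))

def euler_partial_product(s, primes):
    product = 1.0
    for p in primes:
        product *= 1.0 / (1.0 - p**(-s))
    return product

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
print(zeta_partial_sum(2, 10**6))        # ~1.6449331 (tends to pi^2 / 6)
print(euler_partial_product(2, primes))  # approaches the same limit
print(math.pi**2 / 6)                    # 1.6449340668...
```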
Riemann extended the definition to a complex variable and conjectured that all nontrivial zeros of the function (those in the strip {\displaystyle 0<\Re (s)<1}) have real part equal to {\textstyle {\frac {1}{2}}}. He established a connection between the nontrivial zeros and the prime-counting function. This conjecture, now known as the Riemann hypothesis, remains unsolved; a proof would have direct consequences for understanding the distribution of primes.
One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function.
Elementary number theory works with elementary proofs, a term that excludes the use of complex numbers but may include basic analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous. For example, proofs based on complex Tauberian theorems, such as Wiener–Ikehara, are often seen as quite enlightening but not elementary despite using Fourier analysis, not complex analysis. Here as elsewhere, an elementary proof may be longer and more difficult for most readers than a more advanced proof.
Some subjects generally considered to be part of analytic number theory (e.g., sieve theory) are better covered by the second rather than the first definition. Small sieves, for instance, use little analysis and yet still belong to analytic number theory.
=== Algebraic number theory ===
An algebraic number is any complex number that is a solution to some polynomial equation {\displaystyle f(x)=0} with rational coefficients; for example, every solution {\displaystyle x} of {\displaystyle x^{5}+(11/2)x^{3}-7x^{2}+9=0} is an algebraic number. Fields of algebraic numbers are also called algebraic number fields, or shortly number fields. Algebraic number theory studies algebraic number fields.
It could be argued that the simplest kind of number fields, namely quadratic fields, were already studied by Gauss, as the discussion of quadratic forms in Disquisitiones Arithmeticae can be restated in terms of ideals and norms in quadratic fields. (A quadratic field consists of all numbers of the form {\displaystyle a+b{\sqrt {d}}}, where {\displaystyle a} and {\displaystyle b} are rational numbers and {\displaystyle d} is a fixed rational number whose square root is not rational.)
For that matter, the eleventh-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such.
The grounds of the subject were set in the late nineteenth century, when ideal numbers, the theory of ideals and valuation theory were introduced; these are three complementary ways of dealing with the lack of unique factorization in algebraic number fields. (For example, in the field generated by the rationals and {\displaystyle {\sqrt {-5}}}, the number {\displaystyle 6} can be factorised both as {\displaystyle 6=2\cdot 3} and {\displaystyle 6=(1+{\sqrt {-5}})(1-{\sqrt {-5}})}; all of {\displaystyle 2}, {\displaystyle 3}, {\displaystyle 1+{\sqrt {-5}}} and {\displaystyle 1-{\sqrt {-5}}} are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalizations of quadratic reciprocity.
Number fields are often studied as extensions of smaller number fields: a field L is said to be an extension of a field K if L contains K.
(For example, the complex numbers C are an extension of the reals R, and the reals R are an extension of the rationals Q.)
Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions L of K such that the Galois group Gal(L/K) of L over K is an abelian group—are relatively well understood.
Their classification was the object of the programme of class field theory, which was initiated in the late nineteenth century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950.
An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields.
=== Diophantine geometry ===
The central problem of Diophantine geometry is to determine when a Diophantine equation has integer or rational solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object.
For example, an equation in two variables defines a curve in the plane. More generally, an equation or system of equations in two or more variables defines a curve, a surface, or some other such object in n-dimensional space. In Diophantine geometry, one asks whether there are any rational points (points all of whose coordinates are rationals) or
integral points (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely
or infinitely many rational points on a given curve or surface.
Consider, for instance, the Pythagorean equation {\displaystyle x^{2}+y^{2}=1}. One would like to know its rational solutions, namely {\displaystyle (x,y)} such that x and y are both rational. This is the same as asking for all integer solutions to {\displaystyle a^{2}+b^{2}=c^{2}}; any solution to the latter equation gives us a solution {\displaystyle x=a/c}, {\displaystyle y=b/c} to the former. It is also the same as asking for all points with rational coordinates on the curve described by {\displaystyle x^{2}+y^{2}=1} (a circle of radius 1 centered on the origin).
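The correspondence between triples and rational points can be made explicit; the sketch below uses the classical parametrization a = m² − n², b = 2mn, c = m² + n² and verifies the points in exact arithmetic.

```python
from fractions import Fraction

def rational_circle_points(bound):
    """Rational points on x^2 + y^2 = 1 obtained from Pythagorean triples
    a = m^2 - n^2, b = 2*m*n, c = m^2 + n^2 (the classical parametrization)."""
    points = []
    for m in range(2, bound):
        for n in range(1, m):
            a, b, c = m*m - n*n, 2*m*n, m*m + n*n
            x, y = Fraction(a, c), Fraction(b, c)
            assert x*x + y*y == 1            # exact arithmetic check
            points.append((x, y))
    return points

for x, y in rational_circle_points(4):
    print(x, y)                              # 3/5 4/5, 4/5 3/5, 5/13 12/13
```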
The rephrasing of questions on equations in terms of points on curves is felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve (that is, rational or integer solutions to an equation {\displaystyle f(x,y)=0}, where {\displaystyle f} is a polynomial in two variables) depends crucially on the genus of the curve. A major achievement of this approach is Wiles's proof of Fermat's Last Theorem, for which other geometrical notions are just as crucial.
There is also the closely linked area of Diophantine approximations: given a number {\displaystyle x}, determine how well it can be approximated by rational numbers. One seeks approximations that are good relative to the amount of space required to write the rational number: call {\displaystyle a/q} (with {\displaystyle \gcd(a,q)=1}) a good approximation to {\displaystyle x} if {\displaystyle |x-a/q|<{\frac {1}{q^{c}}}}, where {\displaystyle c} is large. This question is of special interest if {\displaystyle x} is an algebraic number. If {\displaystyle x} cannot be approximated well, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) are critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be approximated better than any algebraic number, then it is a transcendental number. It is by this argument that π and e have been shown to be transcendental.
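A brute-force search makes the definition concrete; the exponent c = 2.5 and the search bound below are arbitrary illustrative choices.

```python
from math import gcd, pi

def good_approximations(x, c, q_max):
    """Brute-force search for reduced fractions a/q with |x - a/q| < 1/q**c."""
    hits = []
    for q in range(1, q_max + 1):
        a = round(x * q)                     # nearest numerator for this q
        if gcd(a, q) == 1 and abs(x - a / q) < 1.0 / q**c:
            hits.append((a, q))
    return hits

# 22/7 and 355/113 stand out as unusually good approximations to pi.
print(good_approximations(pi, 2.5, 400))     # e.g. [(3, 1), (22, 7), (355, 113)]
```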
Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. Arithmetic geometry is a contemporary term for the same domain covered by Diophantine geometry, particularly when one wishes to emphasize the connections to modern algebraic geometry (for example, in Faltings's theorem) rather than to techniques in Diophantine approximations.
=== Other subfields ===
Probabilistic number theory starts with questions such as the following: Take an integer n at random between one and a million. How likely is it to be prime? (this is just another way of asking how many primes there are between one and a million). How many prime divisors will n have on average? What is the probability that it will have many more or many fewer divisors or prime divisors than the average?
Combinatorics in number theory starts with questions like the following: Does a fairly "thick" infinite set {\displaystyle A} contain many elements in arithmetic progression {\displaystyle a,a+b,a+2b,a+3b,\ldots ,a+10b}? Should it be possible to write large integers as sums of elements of {\displaystyle A}?
Computational number theory asks two main questions: "Can this be computed?" and "Can it be computed rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. Fast algorithms for testing primality are now known, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring is known.
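To illustrate this asymmetry, the sketch below runs a Fermat probable-prime test, which is fast even for numbers far beyond the reach of trial-division factoring; production systems use the stronger Miller–Rabin test instead.

```python
import random

def is_probable_prime(n, trials=20):
    """Fermat probable-prime test: fast even for enormous n. (A sketch
    only; real code uses the stronger Miller-Rabin test.)"""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:      # fast modular exponentiation
            return False
    return True

# 2**521 - 1 is a (Mersenne) prime with 157 digits; testing it is nearly
# instant, while factoring a general number of this size is infeasible.
print(is_probable_prime(2**521 - 1))   # True
```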
== Applications ==
For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime numbered gear teeth to distribute wear evenly. In particular, number theorists such as British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance. The number-theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory.
This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. Schemes such as RSA are based on the difficulty of factoring large composite numbers into their prime factors. These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators.
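A toy sketch of the RSA idea with the small textbook parameters p = 61 and q = 53; real deployments use primes hundreds of digits long, which is precisely what makes the factoring-based attack infeasible.

```python
# Toy RSA with tiny primes (illustrative only).
p, q = 61, 53
n = p * q                       # public modulus: 3233
phi = (p - 1) * (q - 1)         # 3120
e = 17                          # public exponent, coprime to phi
d = pow(e, -1, phi)             # private exponent: 2753, since e*d = 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)        # encrypt: 65**17 mod 3233 -> 2790
recovered = pow(ciphertext, d, n)      # decrypt: 2790**2753 mod 3233 -> 65
print(ciphertext, recovered)
```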
In 1974, Donald Knuth said "virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations".
Elementary number theory is taught in discrete mathematics courses for computer scientists. It also has applications to the continuous in numerical analysis.
Number theory has now several modern applications spanning diverse areas such as:
Computer science: The fast Fourier transform (FFT) algorithm, which is used to efficiently compute the discrete Fourier transform, has important applications in signal processing and data analysis.
Physics: The Riemann hypothesis has connections to the distribution of prime numbers and has been studied for its potential implications in physics.
Error correction codes: The theory of finite fields and algebraic geometry have been used to construct efficient error-correcting codes.
Communications: The design of cellular telephone networks requires knowledge of the theory of modular forms, which is a part of analytic number theory.
Study of musical scales: the concept of "equal temperament", which is the basis for most modern Western music, involves dividing the octave into 12 equal parts. This has been studied using number theory and in particular the properties of the 12th root of 2.
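The arithmetic behind this last item is a one-liner; a small sketch assuming the common A4 = 440 Hz reference.

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12),
# so 12 steps exactly double the frequency (one octave).
ratio = 2 ** (1 / 12)
a4 = 440.0
names = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A"]
for i, name in enumerate(names):
    print(f"{name:<2} {a4 * ratio**i:8.2f} Hz")   # last line: 880.00 Hz
```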
== See also ==
Arithmetic dynamics
Algebraic function field
Arithmetic topology
Finite field
p-adic number
List of number theoretic algorithms
== Notes ==
== References ==
=== Sources ===
This article incorporates material from the Citizendium article "Number theory", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
== Further reading ==
Two of the most popular introductions to the subject are:
Hardy, G. H.; Wright, E. M. (2008) [1938]. An introduction to the theory of numbers (rev. by D. R. Heath-Brown and J. H. Silverman, 6th ed.). Oxford University Press. ISBN 978-0-19-921986-5.
Vinogradov, I. M. (2003) [1954]. Elements of Number Theory (reprint of the 1954 ed.). Mineola, NY: Dover Publications.
Hardy and Wright's book is a comprehensive classic, though its clarity sometimes suffers due to the authors' insistence on elementary methods (Apostol 1981).
Vinogradov's main attraction consists in its set of problems, which quickly lead to Vinogradov's own research interests; the text itself is very basic and close to minimal. Other popular first introductions are:
Ivan M. Niven; Herbert S. Zuckerman; Hugh L. Montgomery (2008) [1960]. An introduction to the theory of numbers (reprint of the 5th 1991 ed.). John Wiley & Sons. ISBN 978-81-265-1811-1. Retrieved 2016-02-28.
Rosen, Kenneth H. (2010). Elementary Number Theory (6th ed.). Pearson Education. ISBN 978-0-321-71775-7. Retrieved 2016-02-28.
Popular choices for a second textbook include:
Borevich, A. I.; Shafarevich, Igor R. (1966). Number theory. Pure and Applied Mathematics. Vol. 20. Boston, MA: Academic Press. ISBN 978-0-12-117850-5. MR 0195803.
Serre, Jean-Pierre (1996) [1973]. A course in arithmetic. Graduate Texts in Mathematics. Vol. 7. Springer. ISBN 978-0-387-90040-7.
== External links ==
Number Theory entry in the Encyclopedia of Mathematics
Number Theory Web | Wikipedia/number_theory |
Wildfire modeling is concerned with numerical simulation of wildfires to comprehend and predict fire behavior. Wildfire modeling aims to aid wildfire suppression, increase the safety of firefighters and the public, and minimize damage. Wildfire modeling can also aid in protecting ecosystems, watersheds, and air quality.
Using computational science, wildfire modeling involves the statistical analysis of past fire events to predict spotting risks and front behavior. Various wildfire propagation models have been proposed in the past, including simple ellipses and egg- and fan-shaped models. Early attempts to determine wildfire behavior assumed terrain and vegetation uniformity. However, the exact behavior of a wildfire's front is dependent on a variety of factors, including wind speed and slope steepness. Modern growth models utilize a combination of past ellipsoidal descriptions and Huygens' Principle to simulate fire growth as a continuously expanding polygon. Extreme value theory may also be used to predict the size of large wildfires. However, large fires that exceed suppression capabilities are often regarded as statistical outliers in standard analyses, even though fire policies are more influenced by large wildfires than by small fires.
== Objectives ==
Wildfire modeling attempts to reproduce fire behavior, such as how quickly the fire spreads, in which direction it moves, and how much heat it generates. A key input to behavior modeling is the Fuel Model, or type of fuel, through which the fire is burning. Behavior modeling can also include whether the fire transitions from the surface (a "surface fire") to the tree crowns (a "crown fire"), as well as extreme fire behavior, including rapid rates of spread, fire whirls, and tall, well-developed convection columns. Fire modeling also attempts to estimate fire effects, such as the ecological and hydrological effects of the fire, fuel consumption, tree mortality, and the amount and rate of smoke produced.
== Environmental factors ==
Wildland fire behavior is affected by weather, fuel characteristics, and topography.
Weather influences fire through wind and moisture. Wind increases the fire spread in the wind direction, higher temperature makes the fire burn faster, while higher relative humidity, and precipitation (rain or snow) may slow it down or extinguish it altogether. Weather involving fast wind changes can be particularly dangerous, since they can suddenly change the fire direction and behavior. Such weather includes cold fronts, foehn winds, thunderstorm downdrafts, sea and land breeze, and diurnal slope winds.
Wildfire fuel includes grass, wood, and anything else that can burn. Small dry twigs burn faster while large logs burn slower; dry fuel ignites more easily and burns faster than wet fuel.
Topography factors that influence wildfires include the orientation toward the sun, which influences the amount of energy received from the sun, and the slope (fire spreads faster uphill). Fire can accelerate in narrow canyons and it can be slowed down or stopped by barriers such as creeks and roads.
These factors act in combination. Rain or snow increases the fuel moisture, high relative humidity slows the drying of the fuel, while winds can make fuel dry faster. Wind can change the fire-accelerating effect of slopes to effects such as downslope windstorms (called Santa Anas, foehn winds, East winds, depending on the geographic location). Fuel properties may vary with topography as plant density varies with elevation or aspect with respect to the sun.
It has long been recognized that "fires create their own weather." That is, the heat and moisture created by the fire feed back into the atmosphere, creating intense winds that drive the fire behavior. The heat produced by the wildfire changes the temperature of the atmosphere and creates strong updrafts, which can change the direction of surface winds. The water vapor released by the fire changes the moisture balance of the atmosphere. The water vapor can be carried away, where the latent heat stored in the vapor is released through condensation.
== Approaches ==
Like all models in computational science, fire models need to strike a balance between fidelity, availability of data, and fast execution. Wildland fire models span a vast range of complexity, from simple cause-and-effect principles to the most physically complex models, which pose a difficult supercomputing challenge and cannot hope to run faster than real time.
Forest-fire models have been developed continually since 1940, but many chemical and thermodynamic questions related to fire behaviour remain unresolved. Scientists and their forest-fire models from 1940 to 2003 are surveyed in the literature. Models can be divided into three groups: empirical, semi-empirical, and physically based.
=== Empirical models ===
Conceptual models drawn from experience and intuition about past fires can be used to anticipate the future. Many semi-empirical fire spread equations, such as those published by the USDA Forest Service, Forestry Canada, Nobel, Bary, and Gill, and Cheney, Gould, and Catchpole for Australasian fuel complexes, have been developed for quick estimation of fundamental parameters of interest such as fire spread rate, flame length, and fireline intensity of surface fires at a point for specific fuel complexes, assuming a representative point-location wind and terrain slope. Based on the work of Fons in 1946, and Emmons in 1963, the quasi-steady equilibrium spread rate calculated for a surface fire on flat ground in no-wind conditions was calibrated using data from piles of sticks burned in a flame chamber/wind tunnel to represent other wind and slope conditions for the fuel complexes tested.
Two-dimensional fire growth models such as FARSITE and Prometheus, the Canadian wildland fire growth model designed to work in Canadian fuel complexes, have been developed that apply such semi-empirical relationships and others regarding ground-to-crown transitions to calculate fire spread and other parameters along the surface. Certain assumptions must be made in models such as FARSITE and Prometheus to shape the fire growth. For example, Prometheus and FARSITE use the Huygens principle of wave propagation. A set of equations that can be used to propagate (shape and direction) a fire front using an elliptical shape was developed by Richards in 1990. Although more sophisticated applications use a three-dimensional numerical weather prediction system to provide inputs such as wind velocity to one of the fire growth models listed above, the input was passive and the feedback of the fire upon the atmospheric wind and humidity are not accounted for.
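The Huygens-style construction can be caricatured in a few lines: each vertex of a polygonal front moves outward at a rate that is largest downwind. This is a crude stand-in for the elliptical wavelets of Richards (1990) used in FARSITE and Prometheus, not the production algorithm; all rates here are arbitrary.

```python
import math

def propagate_front(front, wind, dt, r_head=2.0, r_flank=0.5):
    """One Huygens-style step: move each vertex of a counterclockwise
    polygonal fire front along its outward normal, at a rate of r_head
    directly downwind tapering to r_flank crosswind and upwind."""
    n = len(front)
    new_front = []
    for i, (x, y) in enumerate(front):
        # Outward normal estimated from the two neighbouring vertices.
        x0, y0 = front[i - 1]
        x1, y1 = front[(i + 1) % n]
        tx, ty = x1 - x0, y1 - y0            # tangent along the front
        norm = math.hypot(tx, ty)
        nx, ny = ty / norm, -tx / norm       # outward for CCW ordering
        align = max(0.0, nx * wind[0] + ny * wind[1])
        rate = r_flank + (r_head - r_flank) * align
        new_front.append((x + rate * nx * dt, y + rate * ny * dt))
    return new_front

# Circular ignition perimeter; unit wind vector blowing toward +x.
front = [(math.cos(2 * math.pi * k / 16), math.sin(2 * math.pi * k / 16))
         for k in range(16)]
for _ in range(10):
    front = propagate_front(front, wind=(1.0, 0.0), dt=0.1)
print(max(x for x, _ in front), min(x for x, _ in front))  # head outruns the back
```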
=== Physically based models and coupling with the atmosphere ===
Simplified physically based two-dimensional fire spread models, based upon conservation laws and using radiation as the dominant heat transfer mechanism and convection to represent the effects of wind and slope, lead to reaction–diffusion systems of partial differential equations.
More complex physical models join computational fluid dynamics models with a wildland fire component and allow the fire to feed back upon the atmosphere. These models include NCAR's Coupled Atmosphere-Wildland Fire-Environment (CAWFE) model developed in 2005; WRF-Fire at NCAR and the University of Colorado Denver, which combines the Weather Research and Forecasting Model with a spread model by the level-set method; the University of Utah's Coupled Atmosphere-Wildland Fire Large Eddy Simulation, developed in 2009; Los Alamos National Laboratory's FIRETEC; the WUI (wildland–urban interface) Fire Dynamics Simulator (WFDS), developed in 2007; and, to some degree, the two-dimensional model FIRESTAR. These tools have different emphases and have been applied to better understand fundamental aspects of fire behavior, such as the effect of fuel inhomogeneities on fire behavior and feedbacks between the fire and the atmospheric environment as the basis for the universal fire shape, and they are beginning to be applied to wildland–urban interface house-to-house fire spread at the community scale.
The cost of added physical complexity is a corresponding increase in computational cost, so much so that a full three-dimensional explicit treatment of combustion in wildland fuels by direct numerical simulation (DNS) at scales relevant for atmospheric modeling does not exist, is beyond current supercomputers, and does not currently make sense to attempt because of the limited skill of weather models at spatial resolutions under 1 km. Consequently, even these more complex models parameterize the fire in some way; for example, papers by Clark use equations developed by Rothermel for the USDA Forest Service to calculate local fire spread rates using fire-modified local winds. And although FIRETEC and WFDS carry prognostic conservation equations for the reacting fuel and oxygen concentrations, the computational grid cannot be fine enough to resolve the rate-limiting mixing of fuel and oxygen, so approximations must be made concerning the subgrid-scale temperature distribution or the combustion reaction rates themselves. These models are also too small-scale to interact with a weather model, so the fluid motions use a computational fluid dynamics model confined to a box much smaller than the typical wildfire.
Attempts to create the most complete theoretical models were made by F. A. Albini in the USA and A. M. Grishin in Russia. Grishin's work is based on the fundamental laws of physics and conservation, and theoretical justifications are provided. A simplified two-dimensional model of a running crown forest fire was developed at Belarusian State University by D. V. Barovik and V. B. Taranchuk.
== Data assimilation ==
Data assimilation periodically adjusts the model state to incorporate new data using statistical methods. Because fire is highly nonlinear and irreversible, data assimilation for fire models poses special challenges, and standard methods, such as the ensemble Kalman filter (EnKF), do not work well. Statistical variability of corrections, and especially large corrections, may result in nonphysical states, which tend to be preceded or accompanied by large spatial gradients. In order to ease this problem, the regularized EnKF penalizes large changes of spatial gradients in the Bayesian update of the EnKF. The regularization technique has a stabilizing effect on the simulations in the ensemble, but it does not much improve the ability of the EnKF to track the data: the posterior ensemble is made out of linear combinations of the prior ensemble, and if a reasonably close location and shape of the fire cannot be found among the linear combinations, the data assimilation is simply out of luck, and the ensemble cannot approach the data. From that point on, the ensemble evolves essentially without regard to the data; this is called filter divergence. So there is clearly a need to adjust the simulation state by a position change rather than an additive correction only. The morphing EnKF combines the ideas of data assimilation with image registration and morphing to provide both additive and position correction in a natural manner, and can be used to change a model state reliably in response to data.
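For concreteness, here is a minimal sketch in Python of the plain (stochastic, perturbed-observations) EnKF analysis step that the regularized and morphing variants build upon; the tiny state and observation sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step.
    X: (n, N) ensemble of model states (n state variables, N members)
    y: (m,) observation vector; H: (m, n) observation operator
    R: (m, m) observation error covariance."""
    n, N = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                               # ensemble anomalies
    P_HT = A @ (H @ A).T / (N - 1)           # sample estimate of P H^T
    S = H @ P_HT + R                         # innovation covariance
    K = P_HT @ np.linalg.inv(S)              # Kalman gain
    # Perturb observations so the analysis ensemble keeps correct spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

# Toy use: 3-variable state, ensemble of 20, observing the first variable.
X = rng.normal(size=(3, 20))
H = np.array([[1.0, 0.0, 0.0]])
X_a = enkf_update(X, y=np.array([0.5]), H=H, R=np.array([[0.1]]))
```

Note that the update is a linear combination of prior members, which is exactly why, as described above, a fire whose simulated position is far from the data cannot be recovered by additive corrections alone.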
== Limitations and practical use ==
The limitations on fire modeling are not entirely computational. At this level, the models encounter limits in knowledge about the composition of pyrolysis products and reaction pathways, in addition to gaps in basic understanding about some aspects of fire behavior such as fire spread in live fuels and surface-to-crown fire transition.
Thus, while more complex models have value in studying fire behavior and testing fire spread in a range of scenarios, from the application point of view FARSITE and Palm-based applications of BEHAVE have shown great utility as practical in-the-field tools because of their ability to provide estimates of fire behavior in real time. While the coupled fire–atmosphere models can incorporate the ability of the fire to affect its own local weather and can model many aspects of the explosive, unsteady nature of fires that cannot be captured by current tools, it remains a challenge to apply these more complex models in a faster-than-real-time operational environment. Also, although they have reached a certain degree of realism when simulating specific natural fires, they must yet address issues such as identifying what specific, relevant operational information they could provide beyond current tools; how the simulation time could fit the operational time frame for decisions (the simulation must therefore run substantially faster than real time); what temporal and spatial resolution the model must use; and how to estimate the inherent uncertainty of numerical weather prediction in their forecasts. These operational constraints must be used to steer model development.
== See also ==
Catastrophe modeling
Extreme value theory
Fuel model
== References ==
== External links ==
PROMETHEUS fire growth simulator
WRF-Fire
Wildfire Visualizations collected links
Wildfire simulations on Youtube
Wildfire visualizations at NCAR
Coupled Weather-Wildfire Modeling - Basic aspects of wildfire behavior Archived 2010-06-10 at the Wayback Machine
Coupled Weather-Wildfire Modeling - Wildfire Case Studies Archived 2010-06-10 at the Wayback Machine
Fire research links
McKenzie D, Gedalof Z, Peterson DL, Mote P (2004). "Climatic change, wildfire, and conservation" (PDF). Conservation Biology. 18 (4): 890–902. Bibcode:2004ConBi..18..890M. doi:10.1111/j.1523-1739.2004.00492.x. S2CID 54617780.
Why are wildfires defying long-standing computer models? September 2012 | Wikipedia/Wildfire_modeling |
An agent-based model (ABM) is a computational model for simulating the actions and interactions of autonomous agents (either individual or collective entities, such as organizations or groups) in order to understand the behavior of a system and what governs its outcomes. It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems, and evolutionary programming. Monte Carlo methods are used to understand the stochasticity of these models. Particularly within ecology, ABMs are also called individual-based models (IBMs). A review of recent literature on individual-based models, agent-based models, and multiagent systems shows that ABMs are used in many scientific domains including biology, ecology and social science. Agent-based modeling is related to, but distinct from, the concept of multi-agent systems or multi-agent simulation in that the goal of ABM is to search for explanatory insight into the collective behavior of agents obeying simple rules, typically in natural systems, rather than in designing agents or solving specific practical or engineering problems.
Agent-based models are a kind of microscale model that simulate the simultaneous operations and interactions of multiple agents in an attempt to re-create and predict the appearance of complex phenomena. The process is one of emergence, which some express as "the whole is greater than the sum of its parts". In other words, higher-level system properties emerge from the interactions of lower-level subsystems. Or, macro-scale state changes emerge from micro-scale agent behaviors. Or, simple behaviors (meaning rules followed by agents) generate complex behaviors (meaning state changes at the whole system level).
Individual agents are typically characterized as boundedly rational, presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules. ABM agents may experience "learning", adaptation, and reproduction.
Most agent-based models are composed of: (1) numerous agents specified at various scales (typically referred to as agent-granularity); (2) decision-making heuristics; (3) learning rules or adaptive processes; (4) an interaction topology; and (5) an environment. ABMs are typically implemented as computer simulations, either as custom software, or via ABM toolkits, and this software can be then used to test how changes in individual behaviors will affect the system's emerging overall behavior.
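A minimal sketch of that anatomy in Python (a toy wealth-exchange model with a random-mixing interaction topology; the class and parameter names are illustrative, not from any particular toolkit):

```python
import random

class Agent:
    """An agent with a simple decision heuristic and an adaptive state."""
    def __init__(self, wealth=1):
        self.wealth = wealth

    def step(self, neighbors):
        # Decision heuristic: give one unit of wealth to a random neighbor.
        if self.wealth > 0 and neighbors:
            other = random.choice(neighbors)   # interaction topology
            other.wealth += 1
            self.wealth -= 1

class Model:
    """The environment: holds the agents and advances the simulation."""
    def __init__(self, n=100):
        self.agents = [Agent() for _ in range(n)]

    def step(self):
        for agent in self.agents:
            others = [a for a in self.agents if a is not agent]
            agent.step(others)

model = Model()
for _ in range(1000):
    model.step()
# A macro-level observable emerging from micro-level exchanges:
print(sorted(a.wealth for a in model.agents)[-5:])  # the richest agents
```

Even this toy produces a skewed wealth distribution from identical agents and a symmetric rule, a small example of emergence at the system level.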
== History ==
The idea of agent-based modeling was developed as a relatively simple concept in the late 1940s. Since it requires computation-intensive procedures, it did not become widespread until the 1990s.
=== Early developments ===
The history of the agent-based model can be traced back to the Von Neumann machine, a theoretical machine capable of reproduction. The device von Neumann proposed would follow precisely detailed instructions to fashion a copy of itself. The concept was then built upon by von Neumann's friend Stanislaw Ulam, also a mathematician; Ulam suggested that the machine be built on paper, as a collection of cells on a grid. The idea intrigued von Neumann, who drew it up—creating the first of the devices later termed cellular automata.
Another advance was introduced by the mathematician John Conway. He constructed the well-known Game of Life. Unlike von Neumann's machine, Conway's Game of Life operated by simple rules in a virtual world in the form of a 2-dimensional checkerboard.
The Simula programming language, developed in the mid 1960s and widely implemented by the early 1970s, was the first framework for automating step-by-step agent simulations.
=== 1970s and 1980s: the first models ===
One of the earliest agent-based models in concept was Thomas Schelling's segregation model, which was discussed in his paper "Dynamic Models of Segregation" in 1971. Though Schelling originally used coins and graph paper rather than computers, his models embodied the basic concept of agent-based models as autonomous agents interacting in a shared environment with an observed aggregate, emergent outcome.
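A minimal sketch of a Schelling-style computation in Python (the grid size, vacancy fraction, tolerance threshold, and random-relocation rule are illustrative choices):

```python
import random

SIZE, EMPTY_FRAC, THRESHOLD = 20, 0.1, 0.3   # illustrative parameters

cells = ['A', 'B'] * int(SIZE * SIZE * (1 - EMPTY_FRAC) / 2)
cells += [None] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(x, y):
    """An agent is unhappy if fewer than THRESHOLD of its occupied
    neighbors (8-neighborhood, wrapping edges) share its type."""
    me = grid[x][y]
    neigh = [grid[(x + dx) % SIZE][(y + dy) % SIZE]
             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    occupied = [n for n in neigh if n is not None]
    return occupied and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

for _ in range(100_000):                      # many single-agent updates
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    if grid[x][y] is not None and unhappy(x, y):
        ex, ey = random.randrange(SIZE), random.randrange(SIZE)
        if grid[ex][ey] is None:              # move to a random empty cell
            grid[ex][ey], grid[x][y] = grid[x][y], None
```

Even with the mild tolerance of 0.3, the grid typically segregates into homogeneous clusters, the aggregate outcome Schelling demonstrated with coins and graph paper.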
In the late 1970s, Paulien Hogeweg and Bruce Hesper began experimenting with individual models of ecology. One of their first results was to show that the social structure of bumble-bee colonies emerged as a result of simple rules that govern the behaviour of individual bees.
They introduced the ToDo principle, referring to the way agents "do what there is to do" at any given time.
In the early 1980s, Robert Axelrod hosted a tournament of Prisoner's Dilemma strategies and had them interact in an agent-based manner to determine a winner. Axelrod would go on to develop many other agent-based models in the field of political science that examine phenomena from ethnocentrism to the dissemination of culture.
By the late 1980s, Craig Reynolds' work on flocking models contributed to the development of some of the first biological agent-based models that contained social characteristics. He tried to model the reality of lively biological agents, known as artificial life, a term coined by Christopher Langton.
The first use of the word "agent" and a definition as it is currently used today is hard to track down. One candidate appears to be John Holland and John H. Miller's 1991 paper "Artificial Adaptive Agents in Economic Theory", based on an earlier conference presentation of theirs. A stronger and earlier candidate is Allen Newell, who in the first Presidential Address of AAAI (published as The Knowledge Level) discussed intelligent agents as a concept.
At the same time, during the 1980s, social scientists, mathematicians, operations researchers, and a scattering of people from other disciplines developed Computational and Mathematical Organization Theory (CMOT). This field grew as a special interest group of The Institute of Management Sciences (TIMS) and its sister society, the Operations Research Society of America (ORSA).
=== 1990s: expansion ===
The 1990s were especially notable for the expansion of ABM within the social sciences. One notable effort was the large-scale ABM Sugarscape, developed by Joshua M. Epstein and Robert Axtell to simulate and explore the role of social phenomena such as seasonal migrations, pollution, sexual reproduction, combat, transmission of disease, and even culture. Other notable 1990s developments included an ABM by Carnegie Mellon University's Kathleen Carley to explore the co-evolution of social networks and culture. The Santa Fe Institute (SFI) was important in encouraging the development of the ABM modeling platform Swarm under the leadership of Christopher Langton. Research conducted through SFI allowed the expansion of ABM techniques to a number of fields, including the study of the social and spatial dynamics of small-scale human societies and primates. During the 1990s, Nigel Gilbert published the first textbook on social simulation, Simulation for the Social Scientist (1999), and established a journal from the perspective of the social sciences: the Journal of Artificial Societies and Social Simulation (JASSS). Other than JASSS, agent-based models of any discipline are within the scope of the SpringerOpen journal Complex Adaptive Systems Modeling (CASM).
Through the mid-1990s, the social sciences thread of ABM began to focus on such issues as designing effective teams, understanding the communication required for organizational effectiveness, and the behavior of social networks. CMOT—later renamed Computational Analysis of Social and Organizational Systems (CASOS)—incorporated more and more agent-based modeling. Samuelson (2000) is a good brief overview of the early history, and Samuelson (2005) and Samuelson and Macal (2006) trace the more recent developments.
In the late 1990s, the merger of TIMS and ORSA to form INFORMS, and the move by INFORMS from two meetings each year to one, helped to spur the CMOT group to form a separate society, the North American Association for Computational Social and Organizational Sciences (NAACSOS). Kathleen Carley was a major contributor, especially to models of social networks, obtaining National Science Foundation funding for the annual conference and serving as the first President of NAACSOS. She was succeeded by David Sallach of the University of Chicago and Argonne National Laboratory, and then by Michael Prietula of Emory University. At about the same time NAACSOS began, the European Social Simulation Association (ESSA) and the Pacific Asian Association for Agent-Based Approach in Social Systems Science (PAAA), counterparts of NAACSOS, were organized. As of 2013, these three organizations collaborate internationally. The First World Congress on Social Simulation was held under their joint sponsorship in Kyoto, Japan, in August 2006. The Second World Congress was held in the northern Virginia suburbs of Washington, D.C., in July 2008, with George Mason University taking the lead role in local arrangements.
=== 2000s ===
More recently, Ron Sun developed methods for basing agent-based simulation on models of human cognition, known as cognitive social simulation. Bill McKelvey, Suzanne Lohmann, Dario Nardi, Dwight Read and others at UCLA have also made significant contributions in organizational behavior and decision-making. Since 1991, UCLA has arranged a conference at Lake Arrowhead, California, that has become another major gathering point for practitioners in this field.
=== 2020 and later ===
After the advent of large language models, researchers began applying interacting language models to agent based modeling. In one widely cited paper, agentic language models interacted in a sandbox environment to perform activities like planning birthday parties and holding elections.
== Theory ==
Most computational modeling research describes systems in equilibrium or as moving between equilibria. Agent-based modeling, however, using simple rules, can result in different sorts of complex and interesting behavior. The three ideas central to agent-based models are agents as objects, emergence, and complexity.
Agent-based models consist of dynamically interacting rule-based agents. The systems within which they interact can create real-world-like complexity. Typically agents are situated in space and time and reside in networks or in lattice-like neighborhoods. The location of the agents and their responsive behavior are encoded in algorithmic form in computer programs. In some cases, though not always, the agents may be considered as intelligent and purposeful. In ecological ABM (often referred to as "individual-based models" in ecology), agents may, for example, be trees in a forest, and would not be considered intelligent, although they may be "purposeful" in the sense of optimizing access to a resource (such as water).
The modeling process is best described as inductive. The modeler makes those assumptions thought most relevant to the situation at hand and then watches phenomena emerge from the agents' interactions. Sometimes that result is an equilibrium. Sometimes it is an emergent pattern. Sometimes, however, it is an unintelligible mangle.
In some ways, agent-based models complement traditional analytic methods. Where analytic methods enable humans to characterize the equilibria of a system, agent-based models allow the possibility of generating those equilibria. This generative contribution may be the most mainstream of the potential benefits of agent-based modeling. Agent-based models can explain the emergence of higher-order patterns—network structures of terrorist organizations and the Internet, power-law distributions in the sizes of traffic jams, wars, and stock-market crashes, and social segregation that persists despite populations of tolerant people. Agent-based models also can be used to identify lever points, defined as moments in time in which interventions have extreme consequences, and to distinguish among types of path dependency.
Rather than focusing on stable states, many models consider a system's robustness—the ways that complex systems adapt to internal and external pressures so as to maintain their functionalities. The task of harnessing that complexity requires consideration of the agents themselves—their diversity, connectedness, and level of interactions.
=== Framework ===
Recent work on the modeling and simulation of complex adaptive systems has demonstrated the need for combining agent-based and complex network-based models. One proposed framework consists of four levels of developing models of complex adaptive systems, described using several example multidisciplinary case studies:
Complex Network Modeling Level for developing models using interaction data of various system components.
Exploratory Agent-based Modeling Level for developing agent-based models for assessing the feasibility of further research. This can be useful, for example, for developing proof-of-concept models such as for funding applications, without requiring an extensive learning curve for the researchers.
Descriptive Agent-based Modeling (DREAM) for developing descriptions of agent-based models by means of using templates and complex network-based models. Building DREAM models allows model comparison across scientific disciplines.
Validated agent-based modeling using Virtual Overlay Multiagent system (VOMAS) for the development of verified and validated models in a formal manner.
Other methods of describing agent-based models include code templates and text-based methods such as the ODD (Overview, Design concepts, and Design Details) protocol.
The role of the environment where agents live, both macro and micro, is also becoming an important factor in agent-based modelling and simulation work. A simple environment affords simple agents, but complex environments generate diversity of behaviour.
=== Multi-scale modelling ===
One strength of agent-based modelling is its ability to mediate information flow between scales. When additional details about an agent are needed, a researcher can integrate the agent-based model with models describing the extra details. When one is interested in the emergent behaviours demonstrated by the agent population, one can combine the agent-based model with a continuum model describing population dynamics. For example, in a study about CD4+ T cells (a key cell type in the adaptive immune system), the researchers modelled biological phenomena occurring at different spatial (intracellular, cellular, and systemic), temporal, and organizational scales (signal transduction, gene regulation, metabolism, cellular behaviors, and cytokine transport). In the resulting modular model, signal transduction and gene regulation are described by a logical model, metabolism by constraint-based models, cell population dynamics by an agent-based model, and systemic cytokine concentrations by ordinary differential equations. In this multi-scale model, the agent-based model occupies the central place and orchestrates every stream of information flow between scales.
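A minimal sketch of such scale coupling in Python: an agent layer and an ODE layer exchange one summary statistic per step. The equations and constants are illustrative assumptions, not taken from the T-cell study described above.

```python
import random

# Micro scale: agents whose activation is driven by a shared signal level.
class Cell:
    def __init__(self):
        self.active = False
    def step(self, signal):
        # Stochastic activation: probability grows with the signal.
        if not self.active and random.random() < min(1.0, 0.1 * signal):
            self.active = True

# Macro scale: a systemic signal obeying a simple ODE,
#   d(signal)/dt = production_per_active_cell * n_active - decay * signal,
# integrated with forward Euler.
cells = [Cell() for _ in range(500)]
signal, dt = 1.0, 0.01
for _ in range(1000):
    for c in cells:
        c.step(signal)                       # macro -> micro coupling
    n_active = sum(c.active for c in cells)  # micro -> macro coupling
    signal += dt * (0.02 * n_active - 0.5 * signal)
print(n_active, round(signal, 3))
```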
== Applications ==
=== In biology ===
Agent-based modeling has been used extensively in biology, including the analysis of the spread of epidemics, and the threat of biowarfare, biological applications including population dynamics, stochastic gene expression, plant-animal interactions, vegetation ecology, migratory ecology, landscape diversity, sociobiology, the growth and decline of ancient civilizations, evolution of ethnocentric behavior, forced displacement/migration, language choice dynamics, cognitive modeling, and biomedical applications including modeling 3D breast tissue formation/morphogenesis, the effects of ionizing radiation on mammary stem cell subpopulation dynamics, inflammation, and the human immune system, and the evolution of foraging behaviors. Agent-based models have also been used for developing decision support systems such as for breast cancer. Agent-based models are increasingly being used to model pharmacological systems in early stage and pre-clinical research to aid in drug development and gain insights into biological systems that would not be possible a priori. Military applications have also been evaluated. Moreover, agent-based models have been recently employed to study molecular-level biological systems. Agent-based models have also been written to describe ecological processes at work in ancient systems, such as those in dinosaur environments and more recent ancient systems as well.
=== In epidemiology ===
Agent-based models now complement traditional compartmental models, the usual type of epidemiological model. ABMs have been shown to be superior to compartmental models with regard to the accuracy of predictions. Recently, ABMs such as CovidSim by epidemiologist Neil Ferguson have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs have been criticized for simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures in cases when ABMs are accurately calibrated. The ABMs for such simulations are mostly based on synthetic populations, since data on the actual population is not always available.
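A minimal sketch of an agent-based counterpart to an SIR compartmental model, in Python with random mixing (the contact count and probabilities are illustrative), showing how disease state is carried by individual agents rather than by aggregate compartments:

```python
import random

P_INFECT, P_RECOVER, CONTACTS = 0.05, 0.01, 10   # illustrative parameters

# Each agent carries an explicit state: 'S'usceptible, 'I'nfected,
# 'R'ecovered; heterogeneity could be added per agent.
state = ['S'] * 9990 + ['I'] * 10
for day in range(300):
    n = len(state)
    for i in range(n):
        if state[i] == 'I':
            if random.random() < P_RECOVER:
                state[i] = 'R'
            else:
                for _ in range(CONTACTS):         # random daily contacts
                    j = random.randrange(n)
                    if state[j] == 'S' and random.random() < P_INFECT:
                        state[j] = 'I'
print(state.count('S'), state.count('I'), state.count('R'))
```

Unlike the differential equations of a compartmental model, per-agent attributes (age, location, contact network) can be attached here directly, which is what synthetic populations provide in practice.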
=== In business, technology and network theory ===
Agent-based models have been used since the mid-1990s to solve a variety of business and technology problems. Examples of applications include marketing, organizational behaviour and cognition, team working, supply chain optimization and logistics, modeling of consumer behavior, including word of mouth, social network effects, distributed computing, workforce management, and portfolio management. They have also been used to analyze traffic congestion.
Recently, agent-based modelling and simulation has been applied to various domains, such as studying the impact of publication venues by researchers in the computer science domain (journals versus conferences). In addition, ABMs have been used to simulate information delivery in ambient assisted environments. A November 2016 arXiv article analyzed an agent-based simulation of the spread of posts on Facebook. In the domain of peer-to-peer, ad hoc and other self-organizing and complex networks, the usefulness of agent-based modeling and simulation has been shown. The use of a computer science-based formal specification framework coupled with wireless sensor networks and an agent-based simulation has recently been demonstrated.
Agent-based evolutionary search algorithms are a new research topic for solving complex optimization problems.
=== In team science ===
In the realm of team science, agent-based modeling has been utilized to assess the effects of team members' characteristics and biases on team performance across various settings. By simulating interactions between agents—each representing individual team members with distinct traits and biases—this modeling approach enables researchers to explore how these factors collectively influence the dynamics and outcomes of team performance. Consequently, agent-based modeling provides a nuanced understanding of team science, facilitating a deeper exploration of the subtleties and variabilities inherent in team-based collaborations.
=== In economics and social sciences ===
Prior to and during the 2008 financial crisis, interest grew in ABMs as possible tools for economic analysis. ABMs do not assume the economy can achieve equilibrium, and "representative agents" are replaced by agents with diverse, dynamic, and interdependent behavior, including herding. ABMs take a "bottom-up" approach and can generate extremely complex and volatile simulated economies. ABMs can represent unstable systems with crashes and booms that develop out of non-linear (disproportionate) responses to proportionally small changes. A July 2010 article in The Economist looked at ABMs as alternatives to DSGE models. The journal Nature also encouraged agent-based modeling with an editorial that suggested ABMs can do a better job of representing financial markets and other economic complexities than standard models, along with an essay by J. Doyne Farmer and Duncan Foley that argued ABMs could fulfill both the desire of Keynes to represent a complex economy and that of Robert Lucas to construct models based on microfoundations. Farmer and Foley pointed to progress that has been made using ABMs to model parts of an economy, but argued for the creation of a very large model that incorporates low-level models. By modeling a complex system of analysts based on three distinct behavioral profiles – imitating, anti-imitating, and indifferent – financial markets were simulated to high accuracy. Results showed a correlation between network morphology and the stock market index. However, the ABM approach has been criticized for its lack of robustness between models, where similar models can yield very different results.
ABMs have been deployed in architecture and urban planning to evaluate design, to simulate pedestrian flow in the urban environment, and to examine public policy applications to land use. There is also a growing field of socio-economic analysis of infrastructure investment impact using ABMs' ability to discern systemic impacts upon a socio-economic network. Heterogeneity and dynamics can be easily built into ABM models to address wealth inequality and social mobility.
ABMs have also been proposed as applied educational tools for diplomats in the field of international relations and for domestic and international policymakers to enhance their evaluation of public policy.
=== In water management ===
ABMs have also been applied in water resources planning and management, particularly for exploring, simulating, and predicting the performance of infrastructure design and policy decisions, and in assessing the value of cooperation and information exchange in large water resources systems.
=== Organizational ABM: agent-directed simulation ===
The agent-directed simulation (ADS) metaphor distinguishes between two categories, namely "Systems for Agents" and "Agents for Systems." Systems for Agents (sometimes referred to as agent systems) are systems implementing agents for use in engineering, human and social dynamics, military applications, and others. Agents for Systems are divided into two subcategories. Agent-supported systems deal with the use of agents as a support facility to enable computer assistance in problem solving or to enhance cognitive capabilities. Agent-based systems focus on the use of agents for the generation of model behavior in a system evaluation (system studies and analyses).
=== Self-driving cars ===
Hallerbach et al. discussed the application of agent-based approaches for the development and validation of automated driving systems via a digital twin of the vehicle-under-test and microscopic traffic simulation based on independent agents. Waymo has created a multi-agent simulation environment Carcraft to test algorithms for self-driving cars. It simulates traffic interactions between human drivers, pedestrians and automated vehicles. People's behavior is imitated by artificial agents based on data of real human behavior. The basic idea of using agent-based modeling to understand self-driving cars was discussed as early as 2003.
== Implementation ==
Many ABM frameworks are designed for serial von Neumann computer architectures, limiting the speed and scalability of implemented models. Since emergent behavior in large-scale ABMs is dependent on population size, scalability restrictions may hinder model validation. Such limitations have mainly been addressed using distributed computing, with frameworks such as Repast HPC specifically dedicated to this type of implementation. While such approaches map well to cluster and supercomputer architectures, issues related to communication and synchronization, as well as deployment complexity, remain potential obstacles to their widespread adoption.
A recent development is the use of data-parallel algorithms on graphics processing units (GPUs) for ABM simulation. The extreme memory bandwidth combined with the sheer number-crunching power of multi-processor GPUs has enabled simulation of millions of agents at tens of frames per second.
=== Integration with other modeling forms ===
Since Agent-Based Modeling is more of a modeling framework than a particular piece of software or platform, it has often been used in conjunction with other modeling forms. For instance, agent-based models have also been combined with Geographic Information Systems (GIS). This provides a useful combination where the ABM serves as a process model and the GIS system can provide a model of pattern. Similarly, Social Network Analysis (SNA) tools and agent-based models are sometimes integrated, where the ABM is used to simulate the dynamics on the network while the SNA tool models and analyzes the network of interactions. Tools like GAMA provide a natural way to integrate system dynamics and GIS with ABM.
== Verification and validation ==
Verification and validation (V&V) of simulation models is extremely important. Verification involves making sure the implemented model matches the conceptual model, whereas validation ensures that the implemented model has some relationship to the real world. Face validation, sensitivity analysis, calibration, and statistical validation are different aspects of validation. A discrete-event simulation framework approach for the validation of agent-based systems has been proposed. Comprehensive resources on the empirical validation of agent-based models are available.
As an example of a V&V technique, consider VOMAS (virtual overlay multi-agent system), a software-engineering-based approach, where a virtual overlay multi-agent system is developed alongside the agent-based model. Niazi et al. also provide an example of using VOMAS for verification and validation of a forest fire simulation model. Another software engineering method, test-driven development, has been adapted for agent-based model validation. This approach has the further advantage of allowing automatic validation using unit-test tools.
== See also ==
== References ==
=== General ===
== External links ==
=== Articles/general information ===
Agent-based models of social networks, java applets.
On-Line Guide for Newcomers to Agent-Based Modeling in the Social Sciences
Introduction to Agent-based Modeling and Simulation. Argonne National Laboratory, November 29, 2006.
Agent-based models in Ecology – Using computer models as theoretical tools to analyze complex ecological systems
Network for Computational Modeling in the Social and Ecological Sciences' Agent Based Modeling FAQ
Multiagent Information Systems – Article on the convergence of SOA, BPM and Multi-Agent Technology in the domain of the Enterprise Information Systems. Jose Manuel Gomez Alvarez, Artificial Intelligence, Technical University of Madrid – 2006
Artificial Life Framework
Article providing methodology for moving real world human behaviors into a simulation model where agent behaviors are represented
Agent-based Modeling Resources, an information hub for modelers, methods, and philosophy for agent-based modeling
An Agent-Based Model of the Flash Crash of May 6, 2010, with Policy Implications, Tommi A. Vuorenmaa (Valo Research and Trading), Liang Wang (University of Helsinki - Department of Computer Science), October, 2013
=== Simulation models ===
Multi-agent Meeting Scheduling System Model by Qasim Siddique
Multi-firm market simulation by Valentino Piana
List of COVID-19 simulation models | Wikipedia/Agent_based_modeling |
The lattice Boltzmann methods (LBM), which originated from the lattice gas automata (LGA) method (Hardy-Pomeau-Pazzis and Frisch-Hasslacher-Pomeau models), are a class of computational fluid dynamics (CFD) methods for fluid simulation. Instead of solving the Navier–Stokes equations directly, a fluid density on a lattice is simulated with streaming and collision (relaxation) processes. The method is versatile, as the model fluid can straightforwardly be made to mimic common fluid behaviour like vapour/liquid coexistence, so fluid systems such as liquid droplets can be simulated. Also, fluids in complex environments such as porous media can be straightforwardly simulated, whereas complex boundaries can be hard to work with in other CFD methods.
== Algorithm ==
Unlike CFD methods that solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, LBM models the fluid as consisting of fictive particles, which perform consecutive propagation and collision processes over a discrete lattice. Due to its particulate nature and local dynamics, LBM has several advantages over other conventional CFD methods, especially in dealing with complex boundaries, incorporating microscopic interactions, and parallelizing the algorithm. A different interpretation of the lattice Boltzmann equation is that of a discrete-velocity Boltzmann equation. The numerical methods of solution of the system of partial differential equations then give rise to a discrete map, which can be interpreted as the propagation and collision of fictitious particles.
In an algorithm, there are collision and streaming steps. These evolve the density of the fluid $\rho(\vec{x},t)$, where $\vec{x}$ is the position and $t$ the time. As the fluid is on a lattice, the density has a number of components $f_i$, $i = 0, \ldots, a$, equal to the number of lattice vectors connected to each lattice point. As an example, consider the lattice vectors for a simple lattice used in two-dimensional simulations. This lattice is usually denoted D2Q9, for two dimensions and nine vectors: four vectors along north, east, south and west, plus four vectors to the corners of a unit square, plus a vector with both components zero. Then, for example, the vector $\vec{e}_4 = (0,-1)$ points due south and so has no $x$ component but a $y$ component of $-1$. So one of the nine components of the total density at the central lattice point, $f_4(\vec{x},t)$, is that part of the fluid at point $\vec{x}$ moving due south, at a speed in lattice units of one.
Then the steps that evolve the fluid in time are:
The collision step:

$f_i^{\ast}(\vec{x},t) = f_i(\vec{x},t) + \frac{f_i^{eq}(\vec{x},t) - f_i(\vec{x},t)}{\tau_f}$

which is the Bhatnagar–Gross–Krook (BGK) model for relaxation to equilibrium via collisions between the molecules of a fluid.
$f_i^{eq}(\vec{x},t)$ is the equilibrium density along direction $i$ at the current density there; this can be expressed by a Taylor approximation (see below, in Mathematical equations for simulations):

$f_i^{eq} = \omega_i \rho \left(1 + \frac{3\,\vec{e}_i \cdot \vec{u}}{c^2} + \frac{9\,(\vec{e}_i \cdot \vec{u})^2}{2c^4} - \frac{3\,\vec{u}^2}{2c^2}\right)$

The model assumes that the fluid locally relaxes to equilibrium over a characteristic timescale $\tau_f$. This timescale determines the kinematic viscosity: the larger it is, the larger the kinematic viscosity.
The streaming step:

$f_i(\vec{x} + \vec{e}_i, t + \delta_t) = f_i^{\ast}(\vec{x}, t)$
As $f_i^{\ast}(\vec{x},t)$ is, by definition, the fluid density at point $\vec{x}$ at time $t$ moving at a velocity of $\vec{e}_i$ per time step, at the next time step $t + \delta_t$ it will have flowed to point $\vec{x} + \vec{e}_i$.
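To make the collision and streaming steps concrete, here is a minimal sketch in Python with NumPy of a D2Q9 BGK update on a periodic domain. The grid size, relaxation time, and uniform initial state are illustrative choices; the lattice vectors and weights are the standard D2Q9 values.

```python
import numpy as np

# D2Q9 lattice vectors (rest, axis, diagonal directions) and weights.
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.6                        # relaxation time (sets the viscosity)

nx, ny = 64, 64
f = np.ones((9, nx, ny)) * w[:, None, None]   # uniform rest-state density

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution f_i^eq (lattice units, c = 1)."""
    feq = np.empty((9, nx, ny))
    usq = ux**2 + uy**2
    for i in range(9):
        eu = e[i, 0]*ux + e[i, 1]*uy
        feq[i] = w[i]*rho*(1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    return feq

def step(f):
    # Macroscopic moments.
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    # Collision: BGK relaxation toward local equilibrium.
    f += (equilibrium(rho, ux, uy) - f) / tau
    # Streaming: each component advects one lattice site along e_i.
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
    return f

for _ in range(100):
    f = step(f)
```

Here np.roll gives periodic boundaries; a real simulation would add, for example, bounce-back walls or inflow/outflow conditions, and a non-trivial initial velocity field.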
== Advantages ==
The LBM was designed from scratch to run efficiently on massively parallel architectures, ranging from inexpensive embedded FPGAs and DSPs up to GPUs and heterogeneous clusters and supercomputers (even with a slow interconnection network). It enables complex physics and sophisticated algorithms. Efficiency leads to a qualitatively new level of understanding, since it allows solving problems that previously could not be approached (or could be approached only with insufficient accuracy).
The method originates from a molecular description of a fluid and can directly incorporate physical terms stemming from a knowledge of the interaction between molecules. Hence it is an indispensable instrument in fundamental research, as it keeps the cycle between the elaboration of a theory and the formulation of a corresponding numerical model short.
Automated data pre-processing and lattice generation in a time that accounts for a small fraction of the total simulation.
Parallel data analysis, post-processing and evaluation.
Fully resolved multi-phase flow with small droplets and bubbles.
Fully resolved flow through complex geometries and porous media.
Complex, coupled flow with heat transfer and chemical reactions.
== Limitations and development ==
As with Navier–Stokes-based CFD, LBM methods have been successfully coupled with thermal-specific solutions to enable heat transfer (solids-based conduction, convection and radiation) simulation capability. For multiphase/multicomponent models, the interface thickness is usually large and the density ratio across the interface is small when compared with real fluids. Recently this problem has been resolved by Yuan and Schaefer, who improved on models by Shan and Chen; Swift; and He, Chen, and Zhang. They were able to reach density ratios of 1000:1 by simply changing the equation of state. It has been proposed to apply a Galilean transformation to overcome the limitation of modelling high-speed fluid flows.
Rapid advances of this method have also allowed successful simulation of microfluidics. However, as of now, LBM is still limited in simulating high-Knudsen-number flows, where Monte Carlo methods are used instead; high-Mach-number flows in aerodynamics are still difficult for LBM, and a consistent thermo-hydrodynamic scheme is absent.
== Development from the LGA method ==
LBM originated from the lattice gas automata (LGA) method, which can be considered as a simplified fictitious molecular dynamics model in which space, time, and particle velocities are all discrete. For example, in the 2-dimensional FHP Model each lattice node is connected to its neighbors by 6 lattice velocities on a triangular lattice; there can be either 0 or 1 particles at a lattice node moving with a given lattice velocity. After a time interval, each particle will move to the neighboring node in its direction; this process is called the propagation or streaming step. When more than one particle arrives at the same node from different directions, they collide and change their velocities according to a set of collision rules. Streaming steps and collision steps alternate. Suitable collision rules should conserve the particle number (mass), momentum, and energy before and after the collision. LGA suffer from several innate defects for use in hydrodynamic simulations: lack of Galilean invariance for fast flows, statistical noise and poor Reynolds number scaling with lattice size. LGA are, however, well suited to simplify and extend the reach of reaction diffusion and molecular dynamics models.
The main motivation for the transition from LGA to LBM was the desire to remove the statistical noise by replacing the Boolean particle number in a lattice direction with its ensemble average, the so-called density distribution function. Accompanying this replacement, the discrete collision rule is also replaced by a continuous function known as the collision operator. In the LBM development, an important simplification is to approximate the collision operator with the Bhatnagar-Gross-Krook (BGK) relaxation term. This lattice BGK (LBGK) model makes simulations more efficient and allows flexibility of the transport coefficients. On the other hand, it has been shown that the LBM scheme can also be considered as a special discretized form of the continuous Boltzmann equation. From Chapman-Enskog theory, one can recover the governing continuity and Navier–Stokes equations from the LBM algorithm.
== Lattices and the DnQm classification ==
Lattice Boltzmann models can be operated on a number of different lattices, both cubic and triangular, and with or without rest particles in the discrete distribution function.
A popular way of classifying the different methods by lattice is the DnQm scheme. Here "Dn" stands for "n dimensions", while "Qm" stands for "m speeds". For example, D3Q15 is a 3-dimensional lattice Boltzmann model on a cubic grid, with rest particles present. Each node has a crystal shape and can deliver particles to 15 nodes: each of the 6 neighboring nodes that share a surface, the 8 neighboring nodes sharing a corner, and itself. (The D3Q15 model does not contain particles moving to the 12 neighboring nodes that share an edge; adding those would create a "D3Q27" model.)
Real quantities such as space and time need to be converted to lattice units prior to simulation. Nondimensional quantities, like the Reynolds number, remain the same.
== Lattice units conversion ==
In most lattice Boltzmann simulations, $\delta_x$ is the basic unit for lattice spacing, so if the domain of length $L$ has $N$ lattice units along its entire length, the space unit is simply defined as $\delta_x = L/N$. Speeds in lattice Boltzmann simulations are typically given in terms of the speed of sound. The discrete time unit can therefore be given as $\delta_t = \delta_x / C_s$, where the denominator $C_s$ is the physical speed of sound.
For small-scale flows (such as those seen in porous media mechanics), operating with the true speed of sound can lead to unacceptably short time steps. It is therefore common to raise the lattice Mach number to something much larger than the real Mach number, and to compensate for this by raising the viscosity as well, in order to preserve the Reynolds number.
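As a worked example with assumed illustrative values: a domain of length $L = 1\,\mathrm{m}$ resolved by $N = 100$ lattice sites, with the physical speed of sound $C_s \approx 343\,\mathrm{m/s}$, gives

$\delta_x = L/N = 0.01\,\mathrm{m}, \qquad \delta_t = \delta_x / C_s \approx 2.9 \times 10^{-5}\,\mathrm{s},$

which illustrates why, for slow flows, raising the lattice Mach number (and the viscosity with it, to hold the Reynolds number fixed) is attractive.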
== Simulation of mixtures ==
Simulating multiphase/multicomponent flows has always been a challenge to conventional CFD because of the moving and deformable interfaces. More fundamentally, the interfaces between different phases (liquid and vapor) or components (e.g., oil and water) originate from the specific interactions among fluid molecules. Therefore, it is difficult to implement such microscopic interactions into the macroscopic Navier–Stokes equation. However, in LBM, the particulate kinetics provides a relatively easy and consistent way to incorporate the underlying microscopic interactions by modifying the collision operator. Several LBM multiphase/multicomponent models have been developed. Here phase separations are generated automatically from the particle dynamics and no special treatment is needed to manipulate the interfaces as in traditional CFD methods. Successful applications of multiphase/multicomponent LBM models can be found in various complex fluid systems, including interface instability, bubble/droplet dynamics, wetting on solid surfaces, interfacial slip, and droplet electrohydrodynamic deformations.
A lattice Boltzmann model for the simulation of gas mixture combustion, capable of accommodating significant density variations in the low-Mach-number regime, has recently been proposed.
In this respect, it is worth noting that, since LBM deals with a larger set of fields (as compared to conventional CFD), the simulation of reactive gas mixtures presents some additional challenges in terms of memory demand as far as large detailed combustion mechanisms are concerned. Those issues may be addressed, though, by resorting to systematic model reduction techniques.
== Thermal lattice-Boltzmann method ==
Currently (2009), a thermal lattice-Boltzmann method (TLBM) falls into one of three categories: the multi-speed approach, the passive scalar approach, and the thermal energy distribution.
== Derivation of Navier–Stokes equation from discrete LBE ==
Starting with the discrete lattice Boltzmann equation (also referred to as the LBGK equation, due to the collision operator used), we first perform a second-order Taylor series expansion about the left side of the LBE. This is chosen over a simpler first-order Taylor expansion because the discrete LBE cannot be recovered from the latter. When doing the second-order Taylor series expansion, the zero-derivative term and the first term on the right will cancel, leaving only the first- and second-derivative terms of the Taylor expansion and the collision operator:

$f_i(\vec{x} + \vec{e}_i\delta_t, t + \delta_t) = f_i(\vec{x},t) + \frac{\delta_t}{\tau_f}\left(f_i^{eq} - f_i\right).$
For simplicity, write $f_i(\vec{x},t)$ as $f_i$. The slightly simplified Taylor series expansion is then as follows, where ":" is the colon product between dyads:

$\frac{\partial f_i}{\partial t} + \vec{e}_i \cdot \nabla f_i + \left(\frac{1}{2}\vec{e}_i\vec{e}_i : \nabla\nabla f_i + \vec{e}_i \cdot \nabla\frac{\partial f_i}{\partial t} + \frac{1}{2}\frac{\partial^2 f_i}{\partial t^2}\right) = \frac{1}{\tau}\left(f_i^{eq} - f_i\right).$
By expanding the particle distribution function into equilibrium and non-equilibrium components and using the Chapman–Enskog expansion, where $K$ is the Knudsen number, the Taylor-expanded LBE can be decomposed into different magnitudes of order for the Knudsen number in order to obtain the proper continuum equations:

$f_i = f_i^{\text{eq}} + K f_i^{\text{neq}},$

$f_i^{\text{neq}} = f_i^{(1)} + K f_i^{(2)} + O(K^2).$
The equilibrium and non-equilibrium distributions satisfy the following relations to their macroscopic variables (these will be used later, once the particle distributions are in the "correct form" in order to scale from the particle to the macroscopic level):

$\rho = \sum_i f_i^{\text{eq}},$

$\rho\vec{u} = \sum_i f_i^{\text{eq}}\vec{e}_i,$

$0 = \sum_i f_i^{(k)} \quad \text{for } k = 1, 2,$

$0 = \sum_i f_i^{(k)}\vec{e}_i.$
The Chapman–Enskog expansion is then:

$\frac{\partial}{\partial t} = K\frac{\partial}{\partial t_1} + K^2\frac{\partial}{\partial t_2} \quad \text{for } t_2\ (\text{diffusive time-scale}) \ll t_1\ (\text{convective time-scale}),$

$\frac{\partial}{\partial x} = K\frac{\partial}{\partial x_1}.$
By substituting the expanded equilibrium and non-equilibrium parts into the Taylor expansion and separating into different orders of $K$, the continuum equations are nearly derived.

For order $K^0$:

$\frac{\partial f_i^{\text{eq}}}{\partial t_1} + \vec{e}_i \cdot \nabla_1 f_i^{\text{eq}} = -\frac{f_i^{(1)}}{\tau}.$
For order $K^1$:

$\frac{\partial f_i^{(1)}}{\partial t_1} + \frac{\partial f_i^{\text{eq}}}{\partial t_2} + \vec{e}_i \cdot \nabla f_i^{(1)} + \frac{1}{2}\vec{e}_i\vec{e}_i : \nabla\nabla f_i^{\text{eq}} + \vec{e}_i \cdot \nabla\frac{\partial f_i^{\text{eq}}}{\partial t_1} + \frac{1}{2}\frac{\partial^2 f_i^{\text{eq}}}{\partial t_1^2} = -\frac{f_i^{(2)}}{\tau}.$
The second equation can then be simplified, using some algebra and the first equation, into the following:

$\frac{\partial f_i^{\text{eq}}}{\partial t_2} + \left(1 - \frac{1}{2\tau}\right)\left[\frac{\partial f_i^{(1)}}{\partial t_1} + \vec{e}_i \cdot \nabla_1 f_i^{(1)}\right] = -\frac{f_i^{(2)}}{\tau}.$
Applying the relations between the particle distribution functions and the macroscopic properties from above, the mass and momentum equations are obtained:

$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\vec{u}) = 0,$

$\frac{\partial(\rho\vec{u})}{\partial t} + \nabla\cdot\Pi = 0.$
The momentum flux tensor $\Pi$ then has the following form:

$\Pi_{xy} = \sum_i e_{ix} e_{iy}\left[f_i^{eq} + \left(1 - \frac{1}{2\tau}\right)f_i^{(1)}\right],$
where $e_{ix} e_{iy}$ is shorthand for the square of the sum of all the components of $\vec{e}_i$ (i.e. $\left(\sum_x e_{ix}\right)^2 = \sum_x\sum_y e_{ix} e_{iy}$), and the equilibrium particle distribution with second order, to be comparable to the Navier–Stokes equation, is:
$f_i^{\text{eq}} = \omega_i\rho\left(1 + \frac{\vec{e}_i \cdot \vec{u}}{c_s^2} + \frac{(\vec{e}_i \cdot \vec{u})^2}{2c_s^4} - \frac{\vec{u}^2}{2c_s^2}\right).$
The equilibrium distribution is only valid for small velocities or small Mach numbers. Inserting the equilibrium distribution back into the flux tensor leads to:
$\Pi_{xy}^{(0)} = \sum_i e_{ix} e_{iy} f_i^{eq} = p\,\delta_{xy} + \rho u_x u_y,$

$\Pi_{xy}^{(1)} = \left(1 - \frac{1}{2\tau}\right)\sum_i e_{ix} e_{iy} f_i^{(1)} = \nu\left(\nabla_x(\rho u_y) + \nabla_y(\rho u_x)\right).$
Finally, the Navier–Stokes equation is recovered under the assumption that density variation is small:

$\rho\left(\frac{\partial u_x}{\partial t} + \nabla_y\cdot(u_x u_y)\right) = -\nabla_x p + \nu\nabla_y\cdot\left(\nabla_x(\rho u_y) + \nabla_y(\rho u_x)\right).$
This derivation follows the work of Chen and Doolen.
== Mathematical equations for simulations ==
The continuous Boltzmann equation is an evolution equation for a single-particle probability distribution function $f(\vec{x}, \vec{e}, t)$; together with the internal energy density distribution function $g(\vec{x}, \vec{e}, t)$ (He et al.), the two evolution equations are respectively:

$\partial_t f + (\vec{e}\cdot\nabla)f + F\,\partial_v f = \Omega(f),$

$\partial_t g + (\vec{e}\cdot\nabla)g + G\,\partial_v f = \Omega(g),$
where $g(\vec{x}, \vec{e}, t)$ is related to $f(\vec{x}, \vec{e}, t)$ by

$g(\vec{x}, \vec{e}, t) = \frac{(\vec{e}-\vec{u})^2}{2}\,f(\vec{x}, \vec{e}, t),$

$F$ is an external force, $\Omega$ is a collision integral, and $\vec{e}$ (also labeled $\vec{\xi}$ in the literature) is the microscopic velocity. The external force $F$ is related to the temperature external force $G$ by the relation below. A typical test for one's model is the Rayleigh–Bénard convection for $G$.

$F = \frac{\vec{G}\cdot(\vec{e}-\vec{u})}{RT}\,f^{\text{eq}},$

$\vec{G} = \beta g_0 (T - T_{\text{avg}})\,\vec{k}.$
Macroscopic variables such as density
ρ
{\displaystyle \rho }
, velocity
u
→
{\displaystyle {\vec {u}}}
, and temperature
T
{\displaystyle T}
can be calculated as the moments of the density distribution function:
ρ
=
∫
f
d
e
→
,
{\displaystyle \rho =\int f\,d{\vec {e}},}
{\displaystyle \rho {\vec {u}}=\int {\vec {e}}f\,d{\vec {e}},}
{\displaystyle {\frac {\rho DRT}{2}}=\rho \epsilon =\int g\,d{\vec {e}}.}
The lattice Boltzmann method discretizes this equation by limiting space to a lattice and the velocity space to a discrete set of microscopic velocities (i.e., {\displaystyle {\vec {e}}_{i}=({\vec {e}}_{ix},{\vec {e}}_{iy})}). The microscopic velocities in D2Q9, D3Q15, and D3Q19, for example, are given as:
{\displaystyle {\vec {e}}_{i}=c\times {\begin{cases}(0,0)&i=0\\(1,0),(0,1),(-1,0),(0,-1)&i=1,2,3,4\\(1,1),(-1,1),(-1,-1),(1,-1)&i=5,6,7,8\\\end{cases}}}
{\displaystyle {\vec {e}}_{i}=c\times {\begin{cases}(0,0,0)&i=0\\(\pm 1,0,0),(0,\pm 1,0),(0,0,\pm 1)&i=1,2,...,5,6\\(\pm 1,\pm 1,\pm 1)&i=7,8,...,13,14\\\end{cases}}}
{\displaystyle {\vec {e}}_{i}=c\times {\begin{cases}(0,0,0)&i=0\\(\pm 1,0,0),(0,\pm 1,0),(0,0,\pm 1)&i=1,2,...,5,6\\(\pm 1,\pm 1,0),(\pm 1,0,\pm 1),(0,\pm 1,\pm 1)&i=7,8,...,17,18\\\end{cases}}}
The single-phase discretized Boltzmann equations for the mass density and the internal energy density are:
{\displaystyle f_{i}({\vec {x}}+{\vec {e}}_{i}\delta _{t},t+\delta _{t})-f_{i}({\vec {x}},t)+F_{i}=\Omega (f),}
{\displaystyle g_{i}({\vec {x}}+{\vec {e}}_{i}\delta _{t},t+\delta _{t})-g_{i}({\vec {x}},t)+G_{i}=\Omega (g).}
The collision operator is often approximated by a BGK collision operator, under the condition that it also satisfies the conservation laws:
{\displaystyle \Omega (f)={\frac {1}{\tau _{f}}}(f_{i}^{\text{eq}}-f_{i}),}
{\displaystyle \Omega (g)={\frac {1}{\tau _{g}}}(g_{i}^{\text{eq}}-g_{i}).}
In the collision operator, {\displaystyle f_{i}^{\text{eq}}} is the discrete equilibrium particle probability distribution function. For D2Q9 and D3Q19, it is shown below for an incompressible flow in continuous and discrete form, where D, R, and T are the dimension, universal gas constant, and absolute temperature respectively. The passage from the continuous to the discrete form proceeds through a simple derivation to second-order accuracy.
{\displaystyle f^{\text{eq}}={\frac {\rho }{(2\pi RT)^{D/2}}}e^{-{\frac {({\vec {e}}-{\vec {u}})^{2}}{2RT}}}}
{\displaystyle ={\frac {\rho }{(2\pi RT)^{D/2}}}e^{-{\frac {({\vec {e}})^{2}}{2RT}}}e^{{\frac {{\vec {e}}{\vec {u}}}{RT}}-{\frac {{\vec {u}}^{2}}{2RT}}}}
{\displaystyle ={\frac {\rho }{(2\pi RT)^{D/2}}}e^{-{\frac {({\vec {e}})^{2}}{2RT}}}\left(1+{\frac {{\vec {e}}{\vec {u}}}{RT}}+{\frac {({\vec {e}}{\vec {u}})^{2}}{2(RT)^{2}}}-{\frac {{\vec {u}}^{2}}{2RT}}+...\right)}
Letting {\displaystyle c={\sqrt {3RT}}} yields the final result:
{\displaystyle f_{i}^{eq}=\omega _{i}\rho \left(1+{\frac {3{\vec {e}}_{i}{\vec {u}}}{c^{2}}}+{\frac {9({\vec {e}}_{i}{\vec {u}})^{2}}{2c^{4}}}-{\frac {3({\vec {u}})^{2}}{2c^{2}}}\right)}
{\displaystyle g^{eq}={\frac {\rho ({\vec {e}}-{\vec {u}})^{2}}{2(2\pi RT)^{D/2}}}e^{-{\frac {({\vec {e}}-{\vec {u}})^{2}}{2RT}}}}
{\displaystyle \omega _{i}={\begin{cases}4/9&i=0\\1/9&i=1,2,3,4\\1/36&i=5,6,7,8\\\end{cases}}}
{\displaystyle \omega _{i}={\begin{cases}1/3&i=0\\1/18&i=1,2,...,5,6\\1/36&i=7,8,...,17,18\\\end{cases}}}
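As an illustration, the D2Q9 velocity set, weights, and second-order equilibrium above can be written out directly. The following is a minimal sketch in Python (assuming NumPy; the density and velocity values are illustrative, in lattice units with c = 1). The assertions check that the discrete equilibrium reproduces the density and momentum moments, as required by the derivation above.

import numpy as np

# D2Q9 velocities (in units of c) and weights, from the case definitions above
e = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

rho = 1.0                              # illustrative density
u = np.array([0.05, 0.02])             # illustrative (small) velocity
eu = e @ u                             # e_i . u for each direction
feq = w * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*(u @ u))

assert np.isclose(feq.sum(), rho)      # zeroth moment: density
assert np.allclose(e.T @ feq, rho*u)   # first moment: momentum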
As much work has already been done on single-component flow, the multicomponent/multiphase LBM is discussed next; it is also more intriguing and useful than the simple one-component case. To be in line with current research, define the set of all components of the system (i.e., walls of porous media, multiple fluids/gases, etc.) as {\displaystyle \Psi } with elements {\displaystyle \sigma _{j}}.
{\displaystyle f_{i}^{\sigma }({\vec {x}}+{\vec {e}}_{i}\delta _{t},t+\delta _{t})-f_{i}^{\sigma }({\vec {x}},t)+F_{i}={\frac {1}{\tau _{f}^{\sigma }}}(f_{i}^{\sigma ,eq}(\rho ^{\sigma },v^{\sigma })-f_{i}^{\sigma })}
The relaxation parameter, {\displaystyle \tau _{f}^{\sigma _{j}}}, is related to the kinematic viscosity, {\displaystyle \nu _{f}^{\sigma _{j}}}, by the following relationship:
{\displaystyle \nu _{f}^{\sigma _{j}}=(\tau _{f}^{\sigma _{j}}-0.5)c_{s}^{2}\delta _{t}.}
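In practice this relation is used in reverse, choosing the relaxation parameter that realizes a target viscosity; a one-line sketch (illustrative lattice units, with the D2Q9 value c_s² = 1/3):

# Relaxation time implied by a target kinematic viscosity,
# from nu = (tau - 0.5) * c_s^2 * dt (illustrative lattice units)
cs2, dt = 1.0/3.0, 1.0
nu = 0.1
tau = nu / (cs2 * dt) + 0.5
print(tau)   # 0.8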
The moments of the {\displaystyle f_{i}} give the local conserved quantities. The density is given by
{\displaystyle \rho =\sum _{\sigma }\sum _{i}f_{i}\,\!}
{\displaystyle \rho \epsilon =\sum _{i}g_{i}\,\!}
{\displaystyle \rho ^{\sigma }=\sum _{i}f_{i}^{\sigma }\,\!}
and the weighted average velocity, {\displaystyle {\vec {u'}}}, and the local momentum are given by
{\displaystyle {\vec {u'}}=\left(\sum _{\sigma }{\frac {\rho ^{\sigma }{\vec {u^{\sigma }}}}{\tau _{f}^{\sigma }}}\right)/\left(\sum _{\sigma }{\frac {\rho ^{\sigma }}{\tau _{f}^{\sigma }}}\right)}
{\displaystyle \rho ^{\sigma }{\vec {u^{\sigma }}}=\sum _{i}f_{i}^{\sigma }{\vec {e}}_{i}.}
{\displaystyle v^{\sigma }={\vec {u'}}+{\frac {\tau _{f}^{\sigma }}{\rho ^{\sigma }}}{\vec {F}}^{\sigma }}
In the above equation for the equilibrium velocity {\displaystyle v^{\sigma }}, the {\displaystyle {\vec {F}}^{\sigma }} term is the interaction force between a component and the other components. It is still the subject of much discussion, as it is typically a tuning parameter that determines how fluid–fluid, fluid–gas, etc., interact. Frank et al. list current models for this force term. The commonly used derivations are the Gunstensen chromodynamic model, Swift's free-energy-based approach for both liquid/vapor systems and binary fluids, He's intermolecular interaction-based model, the Inamuro approach, and the Lee and Lin approach. The following is the general description of {\displaystyle {\vec {F}}^{\sigma }} as given by several authors:
{\displaystyle {\vec {F}}^{\sigma }=-\psi ^{\sigma }({\vec {x}})\sum _{\sigma _{j}}H^{\sigma \sigma _{j}}({\vec {x}},{\vec {x}}')\sum _{i}\psi ^{\sigma _{j}}({\vec {x}}+{\vec {e}}_{i}){\vec {e}}_{i}\,\!}
Here {\displaystyle \psi ({\vec {x}})} is the effective mass and {\displaystyle H({\vec {x}},{\vec {x}}')} is a Green's function representing the interparticle interaction, with {\displaystyle {\vec {x}}'} the neighboring site. It satisfies {\displaystyle H({\vec {x}},{\vec {x}}')=H({\vec {x}}',{\vec {x}})}, and {\displaystyle H({\vec {x}},{\vec {x}}')>0} represents repulsive forces. For D2Q9 and D3Q19, this leads to
{\displaystyle H^{\sigma \sigma _{j}}({\vec {x}},{\vec {x}}')={\begin{cases}h^{\sigma \sigma _{j}}&\left|{\vec {x}}-{\vec {x}}'\right|\leq c\\0&\left|{\vec {x}}-{\vec {x}}'\right|>c\\\end{cases}}}
{\displaystyle H^{\sigma \sigma _{j}}({\vec {x}},{\vec {x}}')={\begin{cases}h^{\sigma \sigma _{j}}&\left|{\vec {x}}-{\vec {x}}'\right|=c\\h^{\sigma \sigma _{j}}/2&\left|{\vec {x}}-{\vec {x}}'\right|={\sqrt {2}}c\\0&{\text{otherwise}}\\\end{cases}}}
Shan and Chen proposed the following effective mass for a single-component, multiphase system; the corresponding equation of state, valid under the same single-component, multiphase conditions, is also given.
{\displaystyle \psi ({\vec {x}})=\psi (\rho ^{\sigma })=\rho _{0}^{\sigma }\left[1-e^{(-\rho ^{\sigma }/\rho _{0}^{\sigma })}\right]\,\!}
{\displaystyle p=c_{s}^{2}\rho +c_{0}h[\psi ({\vec {x}})]^{2}\,\!}
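A short sketch of these two expressions follows; the parameter values are illustrative assumptions, with the coupling h chosen strong enough (h < −2/9 in these units) that p(ρ) becomes non-monotonic, the hallmark of liquid–vapor coexistence.

import numpy as np

# Shan-Chen effective mass and the resulting equation of state,
# with c_0 = 3.0 (D2Q9/D3Q19) and c_s^2 = 1/3; rho0 and h are illustrative
rho0, h, c0, cs2 = 1.0, -0.3, 3.0, 1.0/3.0

def psi(rho):
    return rho0 * (1.0 - np.exp(-rho / rho0))

def pressure(rho):
    return cs2 * rho + c0 * h * psi(rho)**2

rho = np.linspace(0.05, 3.0, 7)
print(pressure(rho))   # non-monotonic in rho for this coupling strength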
So far, it appears that {\displaystyle \rho _{0}^{\sigma }} and {\displaystyle h^{\sigma \sigma _{j}}} are free constants to tune, but once plugged into the system's equation of state (EOS), they must satisfy the thermodynamic relationships at the critical point, namely {\displaystyle (\partial P/\partial {\rho })_{T}=(\partial ^{2}P/\partial {\rho ^{2}})_{T}=0} and {\displaystyle p=p_{c}}. For the EOS, {\displaystyle c_{0}} is 3.0 for D2Q9 and D3Q19, while it equals 10.0 for D3Q15.
It was later shown by Yuan and Schaefer that the effective mass density needs to be changed to simulate multiphase flow more accurately. They compared the Shan and Chen (SC), Carnahan-Starling (C–S), van der Waals (vdW), Redlich–Kwong (R–K), Redlich–Kwong Soave (RKS), and Peng–Robinson (P–R) EOS. Their results revealed that the SC EOS was insufficient and that C–S, P–R, R–K, and RKS EOS are all more accurate in modeling multiphase flow of a single component.
For the popular isothermal lattice Boltzmann methods, these are the only conserved quantities. Thermal models also conserve energy and therefore have an additional conserved quantity:
{\displaystyle \rho \theta +\rho uu=\sum _{i}f_{i}{\vec {e}}_{i}{\vec {e}}_{i}.}
== Unstructured grids ==
Normally, the lattice Boltzmann method is implemented on regular grids. However, unstructured grids, made of triangles or tetrahedra and their variants, can help in handling complex boundaries.
Assuming {\displaystyle \Omega ^{j}} is the volume formed by all barycenters of the tetrahedra, faces, and edges connected to vertex {\displaystyle {\boldsymbol {v}}^{j}}, the discrete velocity density function evolves as:
{\displaystyle f_{i}({\boldsymbol {v}}^{j},t+\delta t)=f_{i}({\boldsymbol {v}}^{j},t)-\delta t\sum _{k}S_{i}^{jk}f_{i}({\boldsymbol {v}}^{k},t)-{\delta t \over \tau }\sum _{k}C^{jk}(f_{i}({\boldsymbol {v}}^{k},t)-f_{i}^{eq}({\boldsymbol {v}}^{k}))}
where {\displaystyle {\boldsymbol {v}}^{k}} are the positions of a vertex and its neighbors, and:
{\displaystyle C^{jk}={1 \over V^{j}}\int _{\Omega ^{j}}w_{k}({\boldsymbol {x}})d\Omega }
{\displaystyle S_{i}^{jk}={1 \over V^{j}}\oint _{\partial \Omega ^{j}}({\vec {e_{i}}}{\vec {n}})w_{k}({\boldsymbol {x}})d\Omega }
where {\displaystyle w_{k}({\boldsymbol {x}})} are the weights of a linear interpolation of {\displaystyle {\boldsymbol {x}}} by the vertices of the triangle or tetrahedron within which {\displaystyle {\boldsymbol {x}}} lies.
== Applications ==
In recent years, the LBM has proven to be a powerful tool for solving problems at different length and time scales.
Some of the applications of LBM include:
Porous media flows
Biomedical flows
Earth sciences (soil filtration)
Energy sciences (fuel cells)
== Example implementation ==
The following is a barebones implementation of the LBM on a 100×100 grid, using Python.
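No code accompanied the text here, so what follows is a minimal sketch of what such an implementation might look like: a D2Q9 BGK scheme with periodic boundaries, relaxing an initial sinusoidal shear wave (all parameter values are illustrative). The wave decays at a rate set by the viscosity ν = (τ − 1/2)/3, which is a standard correctness check for LBM codes.

import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

nx, ny = 100, 100          # 100x100 grid
tau = 0.6                  # relaxation time; nu = (tau - 0.5)/3
nsteps = 1000

def equilibrium(rho, ux, uy):
    # Second-order equilibrium distribution (c_s^2 = 1/3, lattice units)
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

# Initial condition: uniform density with a small sinusoidal shear wave
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny) * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(nsteps):
    # Streaming: shift each population along its velocity (periodic wrap)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # Macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward the local equilibrium
    f += (equilibrium(rho, ux, uy) - f) / tau

print("mean density:", rho.mean())   # conserved, stays at 1.0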
== External links ==
LBM Method
Entropic Lattice Boltzmann Method (ELBM)
dsfd.org: Website of the annual DSFD conference series (1986–present), where advances in theory and application of the lattice Boltzmann method are discussed
Website of the annual ICMMES conference on lattice Boltzmann methods and their applications
== Further reading ==
Deutsch, Andreas; Sabine Dormann (2004). Cellular Automaton Modeling of Biological Pattern Formation. Birkhäuser Verlag. ISBN 978-0-8176-4281-5.
Succi, Sauro (2001). The Lattice Boltzmann Equation for Fluid Dynamics and Beyond. Oxford University Press. ISBN 978-0-19-850398-9.
Wolf-Gladrow, Dieter (2000). Lattice-Gas Cellular Automata and Lattice Boltzmann Models. Springer Verlag. ISBN 978-3-540-66973-9.
Sukop, Michael C.; Daniel T. Thorne, Jr. (2007). Lattice Boltzmann Modeling: An Introduction for Geoscientists and Engineers. Springer. ISBN 978-3-540-27981-5.
Jian Guo Zhou (2004). Lattice Boltzmann Methods for Shallow Water Flows. Springer. ISBN 978-3-540-40746-1.
He, X.; Chen, S.; Doolen, G. (1998). A Novel Thermal Model for the Lattice Boltzmann Method in Incompressible Limit. Academic Press.
Guo, Z. L.; Shu, C (2013). Lattice Boltzmann Method and Its Applications in Engineering. World Scientific Publishing.
Huang, H.; M.C. Sukop; X-Y. Lu (2015). Multiphase Lattice Boltzmann Methods: Theory and Application. Wiley-Blackwell. ISBN 978-1-118-97133-8.
Krüger, T.; Kusumaatmaja, H.; Kuzmin, A.; Shardt, O.; Silva, G.; Viggen, E. M. (2017). The Lattice Boltzmann Method: Principles and Practice. Springer Verlag. ISBN 978-3-319-44647-9.
== Notes == | Wikipedia/Lattice_Boltzmann_methods |
Computational physics is the study and implementation of numerical analysis to solve problems in physics. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. It is sometimes regarded as a subdiscipline (or offshoot) of theoretical physics, but others consider it an intermediate branch between theoretical and experimental physics — an area of study which supplements both theory and experiment.
== Overview ==
In physics, different theories based on mathematical models provide very precise predictions on how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression, or is too complicated. In such cases, numerical approximations are required. Computational physics is the subject that deals with these numerical approximations: the approximation of the solution is written as a finite (and typically large) number of simple mathematical operations (algorithm), and a computer is used to perform these operations and compute an approximated solution and respective error.
=== Status in physics ===
There is a debate about the status of computation within the scientific method. Sometimes it is regarded as more akin to theoretical physics; some regard computer simulation as "computer experiments", while still others consider it an intermediate or different branch between theoretical and experimental physics, a third way that supplements theory and experiment. While computers can be used in experiments for the measurement and recording (and storage) of data, this clearly does not constitute a computational approach.
== Challenges in computational physics ==
Computational physics problems are in general very difficult to solve exactly. This is due to several (mathematical) reasons: lack of algebraic and/or analytic solvability, complexity, and chaos. For example, even apparently simple problems, such as calculating the wavefunction of an electron orbiting an atom in a strong electric field (Stark effect), may require great effort to formulate a practical algorithm (if one can be found); other cruder or brute-force techniques, such as graphical methods or root finding, may be required. On the more advanced side, mathematical perturbation theory is also sometimes used (a worked example is shown for this particular case). In addition, the computational cost and computational complexity for many-body problems (and their classical counterparts) tend to grow quickly. A macroscopic system typically has a size of the order of {\displaystyle 10^{23}} constituent particles, so simulating it particle by particle is a problem in itself. Solving quantum mechanical problems is generally of exponential order in the size of the system, and for classical N-body problems it is of order N-squared. Finally, many physical systems are inherently nonlinear at best, and at worst chaotic: this means it can be difficult to ensure any numerical errors do not grow to the point of rendering the 'solution' useless.
== Methods and algorithms ==
Because computational physics addresses a broad class of problems, it is generally divided amongst the different mathematical problems it numerically solves, or the methods it applies. Among them, one can consider:
root finding (using e.g. Newton-Raphson method)
system of linear equations (using e.g. LU decomposition)
ordinary differential equations (using e.g. Runge–Kutta methods)
integration (using e.g. Romberg method and Monte Carlo integration)
partial differential equations (using e.g. finite difference method and relaxation method)
matrix eigenvalue problem (using e.g. Jacobi eigenvalue algorithm and power iteration)
All these methods (and several others) are used to calculate physical properties of the modeled systems.
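For instance, one of the ordinary-differential-equation solvers listed above, the classical fourth-order Runge–Kutta method, can be written in a few lines. The sketch below (assuming NumPy; step size and initial state are illustrative) integrates a simple harmonic oscillator over one period, after which the state returns close to its starting point.

import numpy as np

# One step of the classical fourth-order Runge-Kutta method
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

# Simple harmonic oscillator x'' = -x as a first-order system y = (x, v)
def oscillator(t, y):
    x, v = y
    return np.array([v, -x])

y, t, h = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(628):                  # integrate to t ~ 2*pi (one period)
    y = rk4_step(oscillator, t, y, h)
    t += h
print(y)                              # close to the initial state (1, 0)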
Computational physics also borrows a number of ideas from computational chemistry - for example, the density functional theory used by computational solid state physicists to calculate properties of solids is basically the same as that used by chemists to calculate the properties of molecules.
Furthermore, computational physics encompasses the tuning of the software/hardware structure to solve the problems (as the problems usually can be very large, in processing power need or in memory requests).
== Divisions ==
It is possible to find a corresponding computational branch for every major field in physics:
Computational mechanics consists of computational fluid dynamics (CFD), computational solid mechanics and computational contact mechanics.
Computational electrodynamics is the process of modeling the interaction of electromagnetic fields with physical objects and the environment. One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics.
Computational chemistry is a rapidly growing field that was developed due to the quantum many-body problem.
Computational solid state physics is a very important division of computational physics dealing directly with material science.
Computational statistical mechanics is a field related to computational condensed matter which deals with the simulation of models and theories (such as percolation and spin models) that are difficult to solve otherwise.
Computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly (particularly through the use of agent-based modeling and cellular automata), it also applies its techniques in the social sciences, network theory, and mathematical models for the propagation of disease (most notably, the SIR model) and the spread of forest fires.
Numerical relativity is a (relatively) new field interested in finding numerical solutions to the field equations of both special relativity and general relativity.
Computational particle physics deals with problems motivated by particle physics.
Computational astrophysics is the application of these techniques and methods to astrophysical problems and phenomena.
Computational biophysics is a branch of biophysics and computational biology itself, applying methods of computer science and physics to large complex biological problems.
== Applications ==
Due to the broad class of problems computational physics deals with, it is an essential component of modern research in different areas of physics, namely: accelerator physics, astrophysics, general theory of relativity (through numerical relativity), fluid mechanics (computational fluid dynamics), lattice field theory/lattice gauge theory (especially lattice quantum chromodynamics), plasma physics (see plasma modeling), simulating physical systems (using e.g. molecular dynamics), nuclear engineering computer codes, protein structure prediction, weather prediction, solid state physics, soft condensed matter physics, hypervelocity impact physics etc.
Computational solid state physics, for example, uses density functional theory to calculate properties of solids, a method similar to that used by chemists to study molecules. Other quantities of interest in solid state physics, such as the electronic band structure, magnetic properties and charge densities can be calculated by this and several methods, including the Luttinger-Kohn/k.p method and ab-initio methods.
On top of advanced physics software, there is also a myriad of analysis tools available to beginning students of physics, such as the PASCO Capstone software.
== See also ==
Advanced Simulation Library
CECAM - Centre européen de calcul atomique et moléculaire
Division of Computational Physics (DCOMP) of the American Physical Society
Important publications in computational physics
List of quantum chemistry and solid-state physics software
Mathematical and theoretical physics
Open Source Physics, computational physics libraries and pedagogical tools
Timeline of computational physics
Car–Parrinello molecular dynamics
== References ==
== Further reading ==
A.K. Hartmann, Practical Guide to Computer Simulations, World Scientific (2009)
International Journal of Modern Physics C (IJMPC): Physics and Computers Archived 2004-11-03 at the Wayback Machine, World Scientific
Steven E. Koonin, Computational Physics, Addison-Wesley (1986)
T. Pang, An Introduction to Computational Physics, Cambridge University Press (2010)
B. Stickler, E. Schachinger, Basic concepts in computational physics, Springer Verlag (2013). ISBN 9783319024349.
E. Winsberg, Science in the Age of Computer Simulation. Chicago: University of Chicago Press, 2010.
== External links ==
C20 IUPAP Commission on Computational Physics Archived 2015-11-15 at the Wayback Machine
American Physical Society: Division of Computational Physics
Institute of Physics: Computational Physics Group Archived 2015-02-13 at the Wayback Machine
SciDAC: Scientific Discovery through Advanced Computing
Open Source Physics
SCINET Scientific Software Framework
Computational Physics Course with youtube videos | Wikipedia/Computational_biophysics |
Computational astrophysics refers to the methods and computing tools developed and used in astrophysics research. Like computational chemistry or computational physics, it is both a specific branch of theoretical astrophysics and an interdisciplinary field relying on computer science, mathematics, and wider physics. Computational astrophysics is most often studied through an applied mathematics or astrophysics programme at PhD level.
Well-established areas of astrophysics employing computational methods include magnetohydrodynamics, astrophysical radiative transfer, stellar and galactic dynamics, and astrophysical fluid dynamics. A recently developed field with interesting results is numerical relativity.
== Research ==
Many astrophysicists use computers in their work, and a growing number of astrophysics departments now have research groups specially devoted to computational astrophysics. Important research initiatives include the US Department of Energy (DoE) SciDAC collaboration for astrophysics and the now defunct European AstroSim collaboration. A notable active project is the international Virgo Consortium, which focuses on cosmology.
In August 2015, during the general assembly of the International Astronomical Union, a new Commission C.B1 on Computational Astrophysics was inaugurated, thereby recognizing the importance of astronomical discovery by computing.
Important techniques of computational astrophysics include particle-in-cell (PIC) and the closely related particle-mesh (PM), N-body simulations, Monte Carlo methods, as well as grid-free (with smoothed particle hydrodynamics (SPH) being an important example) and grid-based methods for fluids. In addition, methods from numerical analysis for solving ODEs and PDEs are also used.
Simulation of astrophysical flows is of particular importance as many objects and processes of astronomical interest such as stars and nebulae involve gases. Fluid computer models are often coupled with radiative transfer, (Newtonian) gravity, nuclear physics and (general) relativity to study highly energetic phenomena such as supernovae, relativistic jets, active galaxies and gamma-ray bursts and are also used to model stellar structure, planetary formation, evolution of stars and of galaxies, and exotic objects such as neutron stars, pulsars, magnetars and black holes. Computer simulations are often the only means to study stellar collisions, galaxy mergers, as well as galactic and black hole interactions.
In recent years the field has made increasing use of parallel and high performance computers.
== Tools ==
Computational astrophysics as a field makes extensive use of software and hardware technologies. These systems are often highly specialized and made by dedicated professionals, and so generally find limited popularity in the wider (computational) physics community.
=== Hardware ===
Like other similar fields, computational astrophysics makes extensive use of supercomputers and computer clusters. Even on the scale of a normal desktop it is possible to accelerate the hardware. Perhaps the most notable computer architecture built specially for astrophysics is the GRAPE (gravity pipe) in Japan.
As of 2010, the biggest N-body simulations, such as DEGIMA, do general-purpose computing on graphics processing units.
=== Software ===
Many codes and software packages exist, along with the various researchers and consortia that maintain them. Most codes tend to be N-body packages or fluid solvers of some sort. Examples of N-body codes include ChaNGa, MODEST, nbodylab.org and Starlab.
For hydrodynamics there is usually a coupling between codes, as the motion of the fluids usually has some other effect (such as gravity, or radiation) in astrophysical situations. For example, for SPH/N-body there is GADGET and SWIFT; for grid-based/N-body RAMSES, ENZO, FLASH, and ART.
AMUSE takes a different approach (called Noah's Ark) than the other packages, by providing an interface structure to a large number of publicly available astronomical codes for addressing stellar dynamics, stellar evolution, hydrodynamics and radiative transport.
== See also ==
Millennium Simulation, Eris, and Bolshoi cosmological simulation are astrophysical supercomputer simulations
Plasma modeling
Computational physics
Theoretical astronomy and theoretical astrophysics
Center for Computational Relativity and Gravitation
University of California High-Performance AstroComputing Center
== References ==
== Further reading ==
Beginner/intermediate level:
Astrophysics with a PC: An Introduction to Computational Astrophysics, Paul Hellings. Willmann-Bell; 1st English ed edition.
Practical Astronomy with your Calculator, Peter Duffett-Smith. Cambridge University Press; 3rd edition 1988.
Advanced/graduate level:
Numerical Methods in Astrophysics: An Introduction (Series in Astronomy and Astrophysics): Peter Bodenheimer, Gregory P. Laughlin, Michal Rozyczka, Harold. W Yorke. Taylor & Francis, 2006.
Open cluster membership probability based on K-means clustering algorithm, Mohamed Abd El Aziz & I. M. Selim & A. Essam, Exp Astron., 2016
Automatic Detection of Galaxy Type From Datasets of Galaxies Image Based on Image Retrieval Approach, Mohamed Abd El Aziz, I. M. Selim & Shengwu Xiong Scientific Reports 7, 4463, 2017
Journals (Open Access):
Living Reviews in Computational Astrophysics
Computational Astrophysics and Cosmology | Wikipedia/Computational_astrophysics |
In numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems.
Relaxation methods were developed for solving large sparse linear systems, which arose as finite-difference discretizations of differential equations. They are also used for the solution of linear equations for linear least-squares problems and also for systems of linear inequalities, such as those arising in linear programming. They have also been developed for solving nonlinear systems of equations.
Relaxation methods are important especially in the solution of linear systems used to model elliptic partial differential equations, such as Laplace's equation and its generalization, Poisson's equation. These equations describe boundary-value problems, in which the solution-function's values are specified on boundary of a domain; the problem is to compute a solution also on its interior. Relaxation methods are used to solve the linear equations resulting from a discretization of the differential equation, for example by finite differences.
Iterative relaxation of solutions is commonly dubbed smoothing because with certain equations, such as Laplace's equation, it resembles repeated application of a local smoothing filter to the solution vector. These are not to be confused with relaxation methods in mathematical optimization, which approximate a difficult problem by a simpler problem whose "relaxed" solution provides information about the solution of the original problem.
== Model problem of potential theory ==
When φ is a smooth real-valued function on the real numbers, its second derivative can be approximated by:
{\displaystyle {\frac {d^{2}\varphi (x)}{{dx}^{2}}}={\frac {\varphi (x{-}h)-2\varphi (x)+\varphi (x{+}h)}{h^{2}}}\,+\,{\mathcal {O}}(h^{2})\,.}
Using this in both dimensions for a function φ of two arguments at the point (x, y), and solving for φ(x, y), results in:
{\displaystyle \varphi (x,y)={\tfrac {1}{4}}\left(\varphi (x{+}h,y)+\varphi (x,y{+}h)+\varphi (x{-}h,y)+\varphi (x,y{-}h)\,-\,h^{2}{\nabla }^{2}\varphi (x,y)\right)\,+\,{\mathcal {O}}(h^{4})\,.}
To approximate the solution of the Poisson equation:
{\displaystyle {\nabla }^{2}\varphi =f\,}
numerically on a two-dimensional grid with grid spacing h, the relaxation method assigns the given values of function φ to the grid points near the boundary and arbitrary values to the interior grid points, and then repeatedly performs the assignment φ := φ* on the interior points, where φ* is defined by:
{\displaystyle \varphi ^{*}(x,y)={\tfrac {1}{4}}\left(\varphi (x{+}h,y)+\varphi (x,y{+}h)+\varphi (x{-}h,y)+\varphi (x,y{-}h)\,-\,h^{2}f(x,y)\right)\,,}
until convergence.
The method is easily generalized to other numbers of dimensions.
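A minimal sketch of the two-dimensional scheme follows, assuming NumPy and illustrative choices for the grid size, source term, and iteration count (on the unit square with zero boundary values). Note that each slice assignment evaluates its right-hand side in full before writing, so this is a true simultaneous (Jacobi-style) sweep.

import numpy as np

# Jacobi-style relaxation for the Poisson equation on the unit square
# with homogeneous Dirichlet boundary values, following phi* above
n = 50
h = 1.0 / (n - 1)
f = np.ones((n, n))                  # source term f(x, y)
phi = np.zeros((n, n))               # boundary values stay fixed at 0

for _ in range(2000):
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2]
                              - h**2 * f[1:-1, 1:-1])

print(phi[n // 2, n // 2])           # approximate solution at the center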
== Convergence and acceleration ==
While the method converges under general conditions, it typically makes slower progress than competing methods. Nonetheless, the study of relaxation methods remains a core part of linear algebra, because the transformations of relaxation theory provide excellent preconditioners for new methods. Indeed, the choice of preconditioner is often more important than the choice of iterative method.
Multigrid methods may be used to accelerate the methods. One can first compute an approximation on a coarser grid – usually the double spacing 2h – and use that solution with interpolated values for the other grid points as the initial assignment. This can then also be done recursively for the coarser computation.
== See also ==
In linear systems, the two main classes of relaxation methods are stationary iterative methods, and the more general Krylov subspace methods.
The Jacobi method is a simple relaxation method.
The Gauss–Seidel method is an improvement upon the Jacobi method.
Successive over-relaxation can be applied to either of the Jacobi and Gauss–Seidel methods to speed convergence.
Multigrid methods
== Notes ==
== References ==
Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM. ISBN 0-89871-321-8.
Ortega, J. M.; Rheinboldt, W. C. (2000). Iterative solution of nonlinear equations in several variables. Classics in Applied Mathematics. Vol. 30 (Reprint of the 1970 Academic Press ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM). pp. xxvi+572. ISBN 0-89871-461-3. MR 1744713.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 18.3. Relaxation Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
Yousef Saad, Iterative Methods for Sparse Linear Systems, 1st edition, PWS, 1996.
Richard S. Varga 2002 Matrix Iterative Analysis, Second ed. (of 1962 Prentice Hall edition), Springer-Verlag.
David M. Young, Jr. Iterative Solution of Large Linear Systems, Academic Press, 1971. (reprinted by Dover, 2003)
== Further reading ==
Southwell, R.V. (1940) Relaxation Methods in Engineering Science. Oxford University Press, Oxford.
Southwell, R.V. (1946) Relaxation Methods in Theoretical Physics. Oxford University Press, Oxford.
John. D. Jackson (1999). Classical Electrodynamics. New Jersey: Wiley. ISBN 0-471-30932-X.
M.N.O. Sadiku (1992). Numerical Techniques in Electromagnetics. Boca Raton: CRC Pres.
P.-B. Zhou (1993). Numerical Analysis of Electromagnetic Fields. New York: Springer.
P. Grivet, P.W. Hawkes, A.Septier (1972). Electron Optics, 2nd edition. Pergamon Press. ISBN 9781483137858.
D. W. O. Heddle (2000). Electrostatic Lens Systems, 2nd edition. CRC Press. ISBN 9781420034394.
Erwin Kasper (2001). Advances in Imaging and Electron Physics, Vol. 116, Numerical Field Calculation for Charged Particle Optics. Academic Press. ISBN 978-0-12-014758-8. | Wikipedia/Relaxation_(iterative_method) |
Computational electromagnetics (CEM), computational electrodynamics or electromagnetic modeling is the process of modeling the interaction of electromagnetic fields with physical objects and the environment using computers.
It typically involves using computer programs to compute approximate solutions to Maxwell's equations to calculate antenna performance, electromagnetic compatibility, radar cross section and electromagnetic wave propagation when not in free space. A large subfield is antenna modeling computer programs, which calculate the radiation pattern and electrical properties of radio antennas, and are widely used to design antennas for specific applications.
== Background ==
Several real-world electromagnetic problems like electromagnetic scattering, electromagnetic radiation, modeling of waveguides etc., are not analytically calculable, for the multitude of irregular geometries found in actual devices. Computational numerical techniques can overcome the inability to derive closed form solutions of Maxwell's equations under various constitutive relations of media, and boundary conditions. This makes computational electromagnetics (CEM) important to the design, and modeling of antenna, radar, satellite and other communication systems, nanophotonic devices and high speed silicon electronics, medical imaging, cell-phone antenna design, among other applications.
CEM typically solves the problem of computing the E (electric) and H (magnetic) fields across the problem domain (e.g., to calculate antenna radiation pattern for an arbitrarily shaped antenna structure). Also calculating power flow direction (Poynting vector), a waveguide's normal modes, media-generated wave dispersion, and scattering can be computed from the E and H fields. CEM models may or may not assume symmetry, simplifying real world structures to idealized cylinders, spheres, and other regular geometrical objects. CEM models extensively make use of symmetry, and solve for reduced dimensionality from 3 spatial dimensions to 2D and even 1D.
An eigenvalue problem formulation of CEM allows us to calculate steady state normal modes in a structure. Transient response and impulse field effects are more accurately modeled by CEM in time domain, by FDTD. Curved geometrical objects are treated more accurately as finite elements FEM, or non-orthogonal grids. Beam propagation method (BPM) can solve for the power flow in waveguides. CEM is application specific, even if different techniques converge to the same field and power distributions in the modeled domain.
== Overview of methods ==
The most common numerical approach is to discretize ("mesh") the problem space in terms of grids or regular shapes ("cells"), and solve Maxwell's equations simultaneously across all cells. Discretization consumes computer memory, and solving the relevant equations takes significant time. Large-scale CEM problems face memory and CPU limitations, and combating these limitations is an active area of research. High performance clustering, vector processing, and/or parallelism is often required to make the computation practical. Some typical methods involve: time-stepping through the equations over the whole domain for each time instant; banded matrix inversion to calculate the weights of basis functions (when modeled by finite element methods); matrix products (when using transfer matrix methods); calculating numerical integrals (when using the method of moments); using fast Fourier transforms; and time iterations (when calculating by the split-step method or by BPM).
== Choice of methods ==
Choosing the right technique for solving a problem is important, as choosing the wrong one can either result in incorrect results, or results which take excessively long to compute. However, the name of a technique does not always tell one how it is implemented, especially for commercial tools, which will often have more than one solver.
Davidson gives two tables comparing the FEM, MoM and FDTD techniques in the way they are normally implemented. One table is for both open region (radiation and scattering problems) and another table is for guided wave problems.
== Maxwell's equations in hyperbolic PDE form ==
Maxwell's equations can be formulated as a hyperbolic system of partial differential equations. This gives access to powerful techniques for numerical solutions.
It is assumed that the waves propagate in the (x,y)-plane, that the magnetic field is restricted to be parallel to the z-axis, and thus that the electric field is parallel to the (x,y)-plane. Such a wave is called a transverse magnetic (TM) wave. In 2D, with no polarization terms present, Maxwell's equations can then be formulated as:
{\displaystyle {\frac {\partial }{\partial t}}{\bar {u}}+A{\frac {\partial }{\partial x}}{\bar {u}}+B{\frac {\partial }{\partial y}}{\bar {u}}+C{\bar {u}}={\bar {g}}}
where u, A, B, and C are defined as
{\displaystyle {\begin{aligned}{\bar {u}}&=\left({\begin{matrix}E_{x}\\E_{y}\\H_{z}\end{matrix}}\right),\\[1ex]A&=\left({\begin{matrix}0&0&0\\0&0&{\frac {1}{\epsilon }}\\0&{\frac {1}{\mu }}&0\end{matrix}}\right),\\[1ex]B&=\left({\begin{matrix}0&0&{\frac {-1}{\epsilon }}\\0&0&0\\{\frac {-1}{\mu }}&0&0\end{matrix}}\right),\\[1ex]C&=\left({\begin{matrix}{\frac {\sigma }{\epsilon }}&0&0\\0&{\frac {\sigma }{\epsilon }}&0\\0&0&0\end{matrix}}\right).\end{aligned}}}
In this representation, {\displaystyle {\bar {g}}} is the forcing function, and is in the same space as {\displaystyle {\bar {u}}}. It can be used to express an externally applied field or to describe an optimization constraint. As formulated above:
{\displaystyle {\bar {g}}=\left({\begin{matrix}E_{x,{\text{constraint}}}\\E_{y,{\text{constraint}}}\\H_{z,{\text{constraint}}}\end{matrix}}\right).}
{\displaystyle {\bar {g}}} may also be explicitly set to zero to simplify certain problems, or to find a characteristic solution, which is often the first step in a method to find the particular inhomogeneous solution.
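As a concrete sketch, the matrices of the TM system can be assembled numerically; the material values below are those of vacuum and are purely illustrative. The eigenvalues of A (and B) are 0 and ±1/√(εμ), the characteristic wave speeds of the hyperbolic system.

import numpy as np

# Matrices of the TM-mode hyperbolic system above (vacuum parameters)
eps, mu, sigma = 8.854e-12, 4e-7 * np.pi, 0.0

A = np.array([[0, 0,    0    ],
              [0, 0,    1/eps],
              [0, 1/mu, 0    ]])
B = np.array([[ 0,    0, -1/eps],
              [ 0,    0,  0    ],
              [-1/mu, 0,  0    ]])
C = np.array([[sigma/eps, 0,         0],
              [0,         sigma/eps, 0],
              [0,         0,         0]])

# Characteristic speeds: 0 and +-1/sqrt(eps*mu) ~ +-3e8 m/s
print(np.sort(np.linalg.eigvals(A).real))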
== Integral equation solvers ==
=== The discrete dipole approximation ===
The discrete dipole approximation is a flexible technique for computing scattering and absorption by targets of arbitrary geometry. The formulation is based on integral form of Maxwell equations. The DDA is an approximation of the continuum target by a finite array of polarizable points. The points acquire dipole moments in response to the local electric field. The dipoles of course interact with one another via their electric fields, so the DDA is also sometimes referred to as the coupled dipole approximation. The resulting linear system of equations is commonly solved using conjugate gradient iterations. The discretization matrix has symmetries (the integral form of Maxwell equations has form of convolution) enabling fast Fourier transform to multiply matrix times vector during conjugate gradient iterations.
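The convolution structure is what makes the acceleration possible: the matrix-vector product inside each conjugate-gradient iteration costs O(N log N) with the FFT instead of O(N²). A one-dimensional toy sketch follows; the kernel is a hypothetical short-range interaction, not the physical dipole tensor, and the size and coupling strength are illustrative, chosen so the system stays well conditioned.

import numpy as np

# FFT-accelerated matrix-vector product for a convolution-structured
# system, as exploited in conjugate-gradient solvers for the DDA
n = 256
d = np.minimum(np.arange(n), n - np.arange(n))
kernel = 0.1 / (1.0 + d**2)          # illustrative symmetric circulant kernel
kernel_hat = np.fft.fft(kernel)

def matvec(x):
    # (I + K) x, where K x is a circular convolution done in O(n log n)
    return x + np.fft.ifft(kernel_hat * np.fft.fft(x)).real

b = np.ones(n)                        # right-hand side (incident-field analogue)
x, r = np.zeros(n), b.copy()
p = r.copy()
for _ in range(30):                   # plain conjugate gradient iterations
    Ap = matvec(p)
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
print("residual norm:", np.linalg.norm(b - matvec(x)))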
=== Method of moments and boundary element method ===
The method of moments (MoM) or boundary element method (BEM) is a numerical computational method of solving linear partial differential equations which have been formulated as integral equations (i.e. in boundary integral form). It can be applied in many areas of engineering and science including fluid mechanics, acoustics, electromagnetics, fracture mechanics, and plasticity.
MoM has become more popular since the 1980s. Because it requires calculating only boundary values, rather than values throughout the space, it is significantly more efficient in terms of computational resources for problems with a small surface/volume ratio. Conceptually, it works by constructing a "mesh" over the modeled surface. However, for many problems, MoM is significantly less computationally efficient than volume-discretization methods (finite element method, finite difference method, finite volume method). Boundary element formulations typically give rise to fully populated matrices. This means that the storage requirements and computational time will tend to grow according to the square of the problem size. By contrast, finite element matrices are typically banded (elements are only locally connected) and the storage requirements for the system matrices typically grow linearly with the problem size. Compression techniques (e.g. multipole expansions or adaptive cross approximation/hierarchical matrices) can be used to ameliorate these problems, though at the cost of added complexity and with a success-rate that depends heavily on the nature and geometry of the problem.
MoM is applicable to problems for which Green's functions can be calculated. These usually involve fields in linear homogeneous media. This places considerable restrictions on the range and generality of problems suitable for boundary elements. Nonlinearities can be included in the formulation, although they generally introduce volume integrals which require the volume to be discretized before solution, removing an oft-cited advantage of MoM.
=== Fast multipole method ===
The fast multipole method (FMM) is an alternative to MoM or Ewald summation. It is an accurate simulation technique and requires less memory and processor power than MoM. The FMM was first introduced by Greengard and Rokhlin and is based on the multipole expansion technique. The first application of the FMM in computational electromagnetics was by Engheta et al. (1992). The FMM also has applications in computational bioelectromagnetics, in the charge-based boundary element fast multipole method. FMM can also be used to accelerate MoM.
=== Plane wave time-domain ===
While the fast multipole method is useful for accelerating MoM solutions of integral equations with static or frequency-domain oscillatory kernels, the plane wave time-domain (PWTD) algorithm employs similar ideas to accelerate the MoM solution of time-domain integral equations involving the retarded potential. The PWTD algorithm was introduced in 1998 by Ergin, Shanker, and Michielssen.
=== Partial element equivalent circuit method ===
The partial element equivalent circuit (PEEC) is a 3D full-wave modeling method suitable for combined electromagnetic and circuit analysis. Unlike MoM, PEEC is a full spectrum method valid from dc to the maximum frequency determined by the meshing. In the PEEC method, the integral equation is interpreted as Kirchhoff's voltage law applied to a basic PEEC cell which results in a complete circuit solution for 3D geometries. The equivalent circuit formulation allows for additional SPICE type circuit elements to be easily included. Further, the models and the analysis apply to both the time and the frequency domains. The circuit equations resulting from the PEEC model are easily constructed using a modified loop analysis (MLA) or modified nodal analysis (MNA) formulation. Besides providing a direct current solution, it has several other advantages over a MoM analysis for this class of problems since any type of circuit element can be included in a straightforward way with appropriate matrix stamps. The PEEC method has recently been extended to include nonorthogonal geometries. This model extension, which is consistent with the classical orthogonal formulation, includes the Manhattan representation of the geometries in addition to the more general quadrilateral and hexahedral elements. This helps in keeping the number of unknowns at a minimum and thus reduces computational time for nonorthogonal geometries.
=== Cagniard-deHoop method of moments ===
The Cagniard-deHoop method of moments (CdH-MoM) is a 3-D full-wave time-domain integral-equation technique that is formulated via the Lorentz reciprocity theorem. Since the CdH-MoM heavily relies on the Cagniard-deHoop method, a joint-transform approach originally developed for the analytical analysis of seismic wave propagation in the crustal model of the Earth, this approach is well suited for the TD EM analysis of planarly-layered structures. The CdH-MoM has been originally applied to time-domain performance studies of cylindrical and planar antennas and, more recently, to the TD EM scattering analysis of transmission lines in the presence of thin sheets and electromagnetic metasurfaces, for example.
== Differential equation solvers ==
=== Finite-Difference Frequency-Domain ===
Finite-difference frequency-domain (FDFD) provides a rigorous solution to Maxwell's equations in the frequency domain using the finite-difference method. FDFD is arguably the simplest numerical method that still provides a rigorous solution. It is extremely versatile and able to solve virtually any problem in electromagnetics. The primary drawback of FDFD is poor efficiency compared to other methods. On modern computers, however, a huge array of problems is easily handled, such as calculating guided modes in waveguides, calculating scattering from an object, calculating transmission and reflection from photonic crystals, calculating photonic band diagrams, simulating metamaterials, and much more.
FDFD may be the best "first" method to learn in computational electromagnetics (CEM). It involves almost all the concepts encountered with other methods, but in a much simpler framework. Concepts include boundary conditions, linear algebra, injecting sources, representing devices numerically, and post-processing field data to calculate meaningful things. This will help a person learn other techniques as well as provide a way to test and benchmark those other techniques.
FDFD is very similar to finite-difference time-domain (FDTD). Both methods represent space as an array of points and enforces Maxwell’s equations at each point. FDFD puts this large set of equations into a matrix and solves all the equations simultaneously using linear algebra techniques. In contrast, FDTD continually iterates over these equations to evolve a solution over time. Numerically, FDFD and FDTD are very similar, but their implementations are very different.
=== Finite-difference time-domain ===
Finite-difference time-domain (FDTD) is a popular CEM technique. It is easy to understand. It has an exceptionally simple implementation for a full-wave solver. It is at least an order of magnitude less work to implement a basic FDTD solver than either an FEM or MoM solver. FDTD is the only technique that one person can realistically implement by themselves in a reasonable time frame, although even then this will be for a quite specific problem. Since it is a time-domain method, solutions can cover a wide frequency range with a single simulation run, provided the time step is small enough to satisfy the Nyquist–Shannon sampling theorem for the desired highest frequency.
FDTD belongs in the general class of grid-based differential time-domain numerical modeling methods. Maxwell's equations (in partial differential form) are modified to central-difference equations, discretized, and implemented in software. The equations are solved in a cyclic manner: the electric field is solved at a given instant in time, then the magnetic field is solved at the next instant in time, and the process is repeated over and over again.
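A one-dimensional sketch of this leapfrog cycle follows, in normalized units (c = Δx = Δt = 1, the so-called magic time step); the source position, timing, and width are illustrative.

import numpy as np

# Minimal 1-D FDTD (Yee) loop: E and H are advanced alternately
n, nsteps = 400, 600
Ez = np.zeros(n)                      # electric field at integer points
Hy = np.zeros(n - 1)                  # magnetic field at half points

for t in range(nsteps):
    Hy += Ez[1:] - Ez[:-1]            # update H from the curl of E
    Ez[1:-1] += Hy[1:] - Hy[:-1]      # update E from the curl of H
    Ez[n // 4] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source

print("peak |Ez|:", np.abs(Ez).max())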
The basic FDTD algorithm traces back to a seminal 1966 paper by Kane Yee in IEEE Transactions on Antennas and Propagation. Allen Taflove originated the descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym in a 1980 paper in IEEE Trans. Electromagn. Compat. Since about 1990, FDTD techniques have emerged as the primary means to model many scientific and engineering problems addressing electromagnetic wave interactions with material structures. An effective technique based on a time-domain finite-volume discretization procedure was introduced by Mohammadian et al. in 1991. Current FDTD modeling applications range from near-DC (ultralow-frequency geophysics involving the entire Earth-ionosphere waveguide) through microwaves (radar signature technology, antennas, wireless communications devices, digital interconnects, biomedical imaging/treatment) to visible light (photonic crystals, nanoplasmonics, solitons, and biophotonics). Approximately 30 commercial and university-developed software suites are available.
=== Discontinuous time-domain method ===
Among many time domain methods, discontinuous Galerkin time domain (DGTD) method has become popular recently since it integrates advantages of both the finite volume time domain (FVTD) method and the finite element time domain (FETD) method. Like FVTD, the numerical flux is used to exchange information between neighboring elements, thus all operations of DGTD are local and easily parallelizable. Similar to FETD, DGTD employs unstructured mesh and is capable of high-order accuracy if the high-order hierarchical basis function is adopted. With the above merits, DGTD method is widely implemented for the transient analysis of multiscale problems involving large number of unknowns.
=== Multiresolution time-domain ===
MRTD is an adaptive alternative to the finite difference time domain method (FDTD) based on wavelet analysis.
=== Finite element method ===
The finite element method (FEM) is used to find approximate solution of partial differential equations (PDE) and integral equations. The solution approach is based either on eliminating the time derivatives completely (steady state problems), or rendering the PDE into an equivalent ordinary differential equation, which is then solved using standard techniques such as finite differences, etc.
In solving partial differential equations, the primary challenge is to create an equation which approximates the equation to be studied, but which is numerically stable, meaning that errors in the input data and intermediate calculations do not accumulate and destroy the meaning of the resulting output. There are many ways of doing this, with various advantages and disadvantages. The finite element method is a good choice for solving partial differential equations over complex domains or when the desired precision varies over the entire domain.
=== Finite integration technique ===
The finite integration technique (FIT) is a spatial discretization scheme to numerically solve electromagnetic field problems in time and frequency domain. It preserves basic topological properties of the continuous equations such as conservation of charge and energy. FIT was proposed in 1977 by Thomas Weiland and has been enhanced continually over the years. This method covers the full range of electromagnetics (from static up to high frequency) and optic applications and is the basis for commercial simulation tools: CST Studio Suite developed by Computer Simulation Technology (CST AG) and Electromagnetic Simulation solutions developed by Nimbic.
The basic idea of this approach is to apply the Maxwell equations in integral form to a set of staggered grids. This method stands out due to high flexibility in geometric modeling and boundary handling as well as incorporation of arbitrary material distributions and material properties such as anisotropy, non-linearity and dispersion. Furthermore, the use of a consistent dual orthogonal grid (e.g. Cartesian grid) in conjunction with an explicit time integration scheme (e.g. leap-frog-scheme) leads to compute and memory-efficient algorithms, which are especially adapted for transient field analysis in radio frequency (RF) applications.
=== Pseudo-spectral time domain ===
This class of marching-in-time computational techniques for Maxwell's equations uses either discrete Fourier or discrete Chebyshev transforms to calculate the spatial derivatives of the electric and magnetic field vector components that are arranged in either a 2-D grid or 3-D lattice of unit cells. PSTD causes negligible numerical phase velocity anisotropy errors relative to FDTD, and therefore allows problems of much greater electrical size to be modeled.
=== Pseudo-spectral spatial domain ===
PSSD solves Maxwell's equations by propagating them forward in a chosen spatial direction. The fields are therefore held as a function of time, and (possibly) any transverse spatial dimensions. The method is pseudo-spectral because temporal derivatives are calculated in the frequency domain with the aid of FFTs. Because the fields are held as functions of time, this enables arbitrary dispersion in the propagation medium to be rapidly and accurately modelled with minimal effort. However, the choice to propagate forward in space (rather than in time) brings with it some subtleties, particularly if reflections are important.
=== Transmission line matrix ===
Transmission line matrix (TLM) can be formulated in several ways: as a direct set of lumped elements solvable directly by a circuit solver (à la SPICE, HSPICE, and others), as a custom network of elements, or via a scattering matrix approach. TLM is a very flexible analysis strategy akin to FDTD in capabilities, though more codes tend to be available with FDTD engines.
=== Locally one-dimensional ===
This is an implicit method. In this method, in the two-dimensional case, Maxwell's equations are computed in two steps, whereas in the three-dimensional case they are divided into the three spatial coordinate directions. Stability and dispersion analysis of the three-dimensional LOD-FDTD method have been discussed in detail.
== Other methods ==
=== Eigenmode expansion ===
Eigenmode expansion (EME) is a rigorous bi-directional technique to simulate electromagnetic propagation which relies on the decomposition of the electromagnetic fields into a basis set of local eigenmodes. The eigenmodes are found by solving Maxwell's equations in each local cross-section. Eigenmode expansion can solve Maxwell's equations in 2D and 3D and can provide a fully vectorial solution provided that the mode solvers are vectorial. It offers very strong benefits compared with the FDTD method for the modelling of optical waveguides, and it is a popular tool for the modelling of fiber optics and silicon photonics devices.
=== Physical optics ===
Physical optics (PO) is the name of a high frequency approximation (short-wavelength approximation) commonly used in optics, electrical engineering and applied physics. It is an intermediate method between geometric optics, which ignores wave effects, and full wave electromagnetism, which is a precise theory. The word "physical" means that it is more physical than geometrical optics and not that it is an exact physical theory.
The approximation consists of using ray optics to estimate the field on a surface and then integrating that field over the surface to calculate the transmitted or scattered field. This resembles the Born approximation, in that the details of the problem are treated as a perturbation.
=== Uniform theory of diffraction ===
The uniform theory of diffraction (UTD) is a high frequency method for solving electromagnetic scattering problems from electrically small discontinuities or discontinuities in more than one dimension at the same point.
The uniform theory of diffraction approximates near field electromagnetic fields as quasi optical and uses ray diffraction to determine diffraction coefficients for each diffracting object-source combination. These coefficients are then used to calculate the field strength and phase for each direction away from the diffracting point. These fields are then added to the incident fields and reflected fields to obtain a total solution.
== Validation ==
Validation is one of the key issues facing electromagnetic simulation users. Users must understand and master the validity domain of their simulations. The question to answer is, "how far from reality are the results?"
Answering this question involves three steps: comparison between simulation results and analytical formulation, cross-comparison between codes, and comparison of simulation results with measurement.
=== Comparison between simulation results and analytical formulation ===
For example, assessing the value of the radar cross section of a plate with the analytical formula:
{\displaystyle {\text{RCS}}_{\text{Plate}}={\frac {4\pi A^{2}}{\lambda ^{2}}},}
where A is the area of the plate and λ is the wavelength. A curve presenting the RCS of a plate computed at 35 GHz can then be used as a reference example.
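A quick numeric evaluation of this formula is straightforward; the plate size below is an assumed example, not a value from any particular benchmark:

from math import pi, log10

# Analytical broadside RCS of a flat plate, valid in the high-frequency limit
c = 299_792_458.0     # speed of light, m/s
f = 35e9              # frequency, Hz
lam = c / f           # wavelength, ~8.57 mm
A = 0.10 * 0.10       # 10 cm x 10 cm plate area, m^2 (assumed size)
rcs = 4 * pi * A**2 / lam**2     # ~17.1 m^2
rcs_dbsm = 10 * log10(rcs)       # ~12.3 dBsm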
=== Cross-comparison between codes ===
One example is the cross comparison of results from method of moments and asymptotic methods in their validity domains.
=== Comparison of simulation results with measurement ===
The final validation step is made by comparison between measurements and simulation, for example the calculation and measurement of the RCS of a complex metallic object at 35 GHz. The computation implements geometrical optics (GO), physical optics (PO) and the physical theory of diffraction (PTD) for the edges.
Validation often reveals that some differences between simulation and measurement can be explained by differences between the experimental setup and its reproduction in the simulation environment.
== Light scattering codes ==
There are now many efficient codes for solving electromagnetic scattering problems. They are typically grouped as:
discrete dipole approximation codes,
codes for electromagnetic scattering by cylinders,
codes for electromagnetic scattering by spheres.
Analytical solutions, such as the Mie solution for scattering by spheres or cylinders, can be used to validate more involved techniques.
== See also ==
== References ==
== Further reading ==
R. F. Harrington (1993). Field Computation by Moment Methods. Wiley-IEEE Press. ISBN 978-0-7803-1014-8.
W. C. Chew; J.-M. Jin; E. Michielssen; J. Song (2001). Fast and Efficient Algorithms in Computational Electromagnetics. Artech House Publishers. ISBN 978-1-58053-152-8.
J. Jin (2002). The Finite Element Method in Electromagnetics, 2nd. ed. Wiley-IEEE Press. ISBN 978-0-471-43818-2.
Allen Taflove and Susan C. Hagness (2005). Computational Electrodynamics: The Finite-Difference Time-Domain Method, 3rd ed. Artech House Publishers. ISBN 978-1-58053-832-9.
== External links ==
Computational electromagnetics at the Open Directory Project
Computational electromagnetics: a review Archived 2016-03-15 at the Wayback Machine
Open Source Physics, or OSP, is a project sponsored by the National Science Foundation and Davidson College whose mission is to spread the use of open source code libraries, tools, and compiled simulations for physics and other numerical simulations. The libraries take care of much of the heavy lifting for physics computing: drawing and plotting, differential equation solvers, exporting to animated GIFs and movies, and so on. The OSP collection provides curriculum resources that engage students in physics, computation, and computer modeling. The core library is written in the Java programming language and licensed under the GNU General Public License (GNU GPL, or simply GPL). The site now serves over 10,000 visitors per month. The Open Source Physics Project is an extension of the Physlet Project.
== Sub-projects ==
The project comprises four sub-projects:
OSP libraries: Java code libraries for numerical simulations. The OSP code library was created to meet the need by the broader science education community for a synthesis of curriculum development, computational physics, computer science, and physics education that will be useful for scientists and students wishing to write their own simulations and develop their own curricular material. The OSP code library is described in the OSP User's Guide by Wolfgang Christian and in An Introduction to Computer Simulation Methods by Harvey Gould, Jan Tobochnik, and Wolfgang Christian.
Easy Java Simulations (EJS) (New name: Easy JavaScript Simulations = EJSS): A free and open source computer-based modeling environment used to generate automatically Java and JavaScript code. Easy JavaScript Simulations is an authoring and modeling tool that allows users to create Java or JavaScript programs with minimal programming. EjsS creates programs that other people can easily inspect or modify.
Tracker: An open-source software video analysis and modeling tool designed for use in physics education and distributed under the GNU General Public License. In the context of physics education, video analysis means tracking the motions of objects in videos to obtain their 2-D position-time data and associated physical quantities such as velocity, acceleration, momentum and energy. Computerized video analysis has been used widely in physics education since the 1990s. By contrast, video modeling involves defining a theoretical model and drawing it as an animation directly on a video and was introduced only in 2009. Tracker has a built-in dynamic model builder to define particles that move according to Newton's laws. External models built with spreadsheets, Easy Java Simulations or other modeling program can also be used. Tracker also has a line profile tool for measuring light spectra and other optical phenomena. Tracker 1.0 was distributed on disc at the 2003 Summer Meeting of the American Association of Physics Teachers. The current version, 6.0, was released in 2021.
OSP Curricular Development: A set of programs, packages, and worksheets for the teaching of advanced physics topics. Many instructors do not teach (or do research in) computational physics. For these instructors, the project makes the various physical models available in an easily accessible, modifiable, and distributable form for the teaching of physics. For convenience, OSP programs are almost always packaged in Java archive (jar) files. These jar files contain compiled code and resources such as curricular materials, images, and data files.
== Awards ==
In 2011, the project received the Science Prize for Online Resources in Education (SPORE) from Science magazine.
In 2015, the project received the UNESCO King Hamad Bin Isa Al-Khalifa Prize for the Use of ICTs in Education and the Excellence Award at the Multimedia in Physics Teaching and Learning Conference (MPTL20). In 2020, the project received the Excellence in Physics Education Award from the American Physical Society.
== References ==
Notes
== External links ==
Official website of the Open Source Physics Project
Science education prize goes to Open Source Physics, By John Timmer
NTNUJAVA Virtual Physics Laboratory is the largest library of EJS simulations by Fu-Kwun Hwang
Open Educational Resources / Open Source Physics @ Singapore is the largest library of EJSS simulations outside USA by Loo Kang Lawrence WEE
Easy JavaScript Simulation website by Francisco Esquembre and Félix Jesús Garcia Clemente
Tracker website by Douglas Brown
Educational Resources - Simulations collection of EJS by Marcelo José Rodrigues and Paulo Simeão Carvalho
EJS collection collection of EJS by José Ignacio Fernández Palop
EJS collection collection of EJS by Juan M. Aguirregabiria
EJS collection by teachers in NS548 Computer Modeling course at Boston University.
EJS collection by Eugene Butikov
EJS collection by Andrew Duffy
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle's gambling habits.
Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.
Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results.
== Overview ==
Monte Carlo methods vary, but tend to follow a particular pattern:
Define a domain of possible inputs.
Generate inputs randomly from a probability distribution over the domain.
Perform a deterministic computation of the outputs.
Aggregate the results.
For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is π/4, the value of π can be approximated using the Monte Carlo method:
Draw a square, then inscribe a quadrant within it.
Uniformly scatter a given number of points over the square.
Count the number of points inside the quadrant, i.e. having a distance from the origin of less than 1.
The ratio of the inside-count and the total-sample-count is an estimate of the ratio of the two areas, π/4. Multiply the result by 4 to estimate π.
In this procedure, the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square, then performing a computation on each input to test whether it falls within the quadrant. Aggregating the results yields our final result, the approximation of π.
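This procedure translates directly into a few lines of code. A minimal sketch in Python (the sample count is an arbitrary choice):

import random

# Estimate pi by uniformly scattering points over the unit square and
# counting how many fall inside the inscribed quarter circle.
n = 1_000_000
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 < 1.0)
pi_estimate = 4 * inside / n   # converges to pi as n grows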
There are two important considerations:
If the points are not uniformly distributed, the approximation will be poor.
The approximation improves as more points are randomly placed in the whole square.
Uses of Monte Carlo methods require large amounts of random numbers, and their use benefitted greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had been previously employed.
== Application ==
Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances, a flow of probability distributions with an increasing level of sampling complexity arise (path spaces models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
== Simple Monte Carlo ==
Suppose one wants to know the expected value μ of a population (and knows that μ exists), but does not have a formula available to compute it. The simple Monte Carlo method gives an estimate m for μ by running n simulations and averaging the simulations' results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated and are independent of each other and that μ exists. A sufficiently large n will produce a value for m that is arbitrarily close to μ; more formally, it will be the case that, for any ϵ > 0, |μ − m| ≤ ϵ.
Typically, the algorithm to obtain m is:
s = 0;
for i = 1 to n do
run the simulation for the ith time, giving result ri;
s = s + ri;
repeat
m = s / n;
=== An example ===
Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T. We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:
s = 0;
for i = 1 to n do
throw the three dice until T is met or first exceeded; ri = the number of throws;
s = s + ri;
repeat
m = s / n;
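A concrete Python version of this example might look as follows; the target total T and sample count n are illustrative choices:

import random

# Simple Monte Carlo estimate of the expected number of throws of three
# eight-sided dice needed for the running total to reach at least T.
def throws_until(T):
    total, count = 0, 0
    while total < T:
        total += sum(random.randint(1, 8) for _ in range(3))
        count += 1
    return count

n, T = 100_000, 100
m = sum(throws_until(T) for _ in range(n)) / n
# Each throw adds 13.5 on average, so m should land a little above
# 100 / 13.5 ≈ 7.4 (slightly higher because the total overshoots T)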
If n is large enough, m will be within ϵ of μ for any ϵ > 0.
=== Determining a sufficiently large n ===
==== General formula ====
Let ϵ = |μ − m| > 0. Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m is indeed within ϵ of μ. Let z be the z-score corresponding to that confidence level.
Let s² be the estimated variance, sometimes called the "sample" variance; it is the variance of the results obtained from a relatively small number k of "sample" simulations. Choose a k; Driels and Shin observe that "even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable."
The following algorithm computes s² in one pass while minimizing the possibility that accumulated numerical error produces erroneous results:
s1 = 0;
run the simulation for the first time, producing result r1;
m1 = r1; // mi is the mean of the first i simulations
for i = 2 to k do
run the simulation for the ith time, producing result ri;
δi = ri − mi−1;
mi = mi−1 + (1/i)δi;
si = si−1 + ((i − 1)/i)(δi)²;
repeat
s² = sk/(k − 1);
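A direct transcription in Python may make the recurrence easier to follow; the stand-in simulation here (one throw of an eight-sided die) is only a placeholder:

import random

def sample_variance(k, simulate):
    # One-pass (Welford-style) variance of k simulation results,
    # mirroring the pseudocode above; assumes k >= 2.
    m = simulate()          # running mean m_i, initialized with r_1
    s = 0.0                 # running sum of squared deviations s_i
    for i in range(2, k + 1):
        r = simulate()
        delta = r - m
        m += delta / i
        s += (i - 1) / i * delta * delta
    return s / (k - 1), m   # (estimated variance s^2, mean of k results)

var, mean = sample_variance(1000, lambda: random.randint(1, 8))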
Note that, when the algorithm completes, mk is the mean of the k results.
The value n is sufficiently large when
{\displaystyle n\geq s^{2}z^{2}/\epsilon ^{2}.}
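A short helper illustrates the computation, using SciPy's normal quantile function for the z-score; the numbers in the example call are arbitrary:

from math import ceil
from scipy.stats import norm

def required_samples(sample_var, confidence, epsilon):
    # Smallest n satisfying n >= s^2 z^2 / eps^2 for a two-sided
    # confidence level given as a fraction (e.g. 0.99).
    z = norm.ppf(0.5 + confidence / 2)   # two-sided z-score
    return ceil(sample_var * z * z / (epsilon * epsilon))

# e.g. sample variance 5.25, 99% confidence, tolerance 0.1
n = required_samples(5.25, 0.99, 0.1)    # roughly 3.5 thousand samples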
If n ≤ k, then mk = m; sufficient sample simulations were done to ensure that mk is within ϵ of μ. If n > k, then n simulations can be run "from scratch," or, since k simulations have already been done, one can just run n − k more simulations and add their results into those from the sample simulations:
s = mk * k;
for i = k + 1 to n do
run the simulation for the ith time, giving result ri;
s = s + ri;
m = s / n;
==== A formula when simulations' results are bounded ====
An alternative formula can be used in the special case where all simulation results are bounded above and below.
Choose a value for ϵ that is twice the maximum allowed difference between μ and m. Let 0 < δ < 100 be the desired confidence level, expressed as a percentage. Let every simulation result r1, r2, …, ri, …, rn be such that a ≤ ri ≤ b for finite a and b. To have confidence of at least δ that |μ − m| < ϵ/2, use a value for n such that
{\displaystyle n\geq 2(b-a)^{2}\ln(2/(1-(\delta /100)))/\epsilon ^{2}}
For example, if δ = 99%, then
{\displaystyle n\geq 2(b-a)^{2}\ln(2/0.01)/\epsilon ^{2}\approx 10.6(b-a)^{2}/\epsilon ^{2}}.
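This Hoeffding-type bound is likewise simple to evaluate; the bounds and tolerance in the example call are arbitrary:

from math import ceil, log

def bounded_samples(a, b, delta_pct, epsilon):
    # n >= 2 (b - a)^2 ln(2 / (1 - delta/100)) / eps^2 for simulation
    # results known to lie in [a, b].
    return ceil(2 * (b - a) ** 2
                * log(2 / (1 - delta_pct / 100)) / epsilon ** 2)

# Results in [0, 1], 99% confidence, |mu - m| < 0.005 (so eps = 0.01)
n = bounded_samples(0.0, 1.0, 99, 0.01)   # 105_967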
== Computational costs ==
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
== History ==
Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing).
An early variant of the Monte Carlo method was devised to solve the Buffon's needle problem, in which π can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work.
In the late 1940s, Stanisław Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows:
The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than "abstract thinking" might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations.
Being secret, the work of von Neumann and Ulam required a code name. A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble.
Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948. In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.
The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies. Mean-field genetic type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.
Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993 that Gordon et al. published, in their seminal work, the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state-space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter", and the ones by Pierre Del Moral and Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut on particle filters published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on radar/sonar and GPS signal processing problems. These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism.
From 1950 to 1996, all the publications on Sequential Monte Carlo methodologies, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor a discussion on the bias of the estimates and on genealogical and ancestral tree based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996.
Branching type particle methodologies with varying population sizes were also developed in the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. Further developments in this field were described in 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo.
== Definitions ==
There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior).
Here are some examples:
Simulation: Drawing one pseudo-random uniform variable from the interval [0,1] can be used to simulate the tossing of a coin: If the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation.
Monte Carlo method: Pouring out a box of coins on a table, and then computing the ratio of coins that land heads versus tails is a Monte Carlo method of determining the behavior of repeated coin tosses, but it is not a simulation.
Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.
Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."
Convergence of the Monte Carlo simulation can be checked with the Gelman-Rubin statistic.
=== Monte Carlo and random numbers ===
The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. A Monte Carlo simulation is, in effect, a set of random experiments performed when the outcomes of those experiments are not well known in advance.
Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary.
Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation:
the (pseudo-random) number generator has certain characteristics (e.g. a long "period" before the sequence repeats)
the (pseudo-random) number generator produces values that pass tests for randomness
there are enough samples to ensure accurate results
the proper sampling technique is used
the algorithm used is valid for what is being modeled
it simulates the phenomenon in question.
Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution.
Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods.
In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms, like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 10^7 random numbers.
=== Monte Carlo simulation versus "what if" scenarios ===
There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.
By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events".
== Applications ==
Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include:
=== Physical sciences ===
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms as well as in modeling radiation transport for radiation dosimetry calculations. In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems. Quantum Monte Carlo methods solve the many-body problem for quantum systems. In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution and microwave radiation transmission through a rough planetary surface. Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.
=== Engineering ===
Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example,
In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits.
In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis.
In fluid dynamics, in particular rarefied gas dynamics, where the Boltzmann equation is solved for finite Knudsen number fluid flows using the direct simulation Monte Carlo method in combination with highly efficient computational algorithms.
In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm.
In telecommunications, when planning a wireless network, the design must be proven to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process.
In reliability engineering, Monte Carlo simulation is used to compute system-level response given the component-level response.
In signal processing and Bayesian inference, particle filters and sequential Monte Carlo techniques are a class of mean-field particle methods for sampling and computing the posterior distribution of a signal process given some noisy and partial observations using interacting empirical measures.
=== Climate change and radiative forcing ===
The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing.
=== Computational biology ===
Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins, or membranes.
The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy.
Computer simulations allow monitoring of the local environment of a particular molecule to see if some chemical reaction is happening for instance. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields).
=== Computer graphics ===
Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.
=== Applied statistics ===
The standards for Monte Carlo experiments in statistics were set by Sawilowsky. In applied statistics, Monte Carlo methods may be used for at least four purposes:
To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions.
To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.
To provide a random sample from the posterior distribution in Bayesian inference. This sample then approximates and summarizes all the essential features of the posterior.
To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the Fisher information matrix.
Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected).
=== Artificial intelligence for games ===
Monte Carlo methods have been developed into a technique called Monte-Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves.
The Monte Carlo tree search (MCTS) method has four steps:
Starting at root node of the tree, select optimal child nodes until a leaf node is reached.
Expand the leaf node and choose one of its children.
Play a simulated game starting with that node.
Use the results of that simulated game to update the node and its ancestors.
The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move.
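A stripped-down sketch may make the idea concrete. The following flat Monte Carlo evaluator omits the tree building and selection policy of full MCTS and simply scores each legal move by random playouts; the game-state interface (is_terminal, legal_moves, play, result) is hypothetical:

import random

def best_move(state, n_playouts=200):
    # Score each legal move by random playouts from the resulting
    # position and pick the highest-scoring one -- a simplified,
    # treeless stand-in for full Monte Carlo tree search.
    def playout(s):
        while not s.is_terminal():
            s = s.play(random.choice(s.legal_moves()))
        return s.result()   # assumed +1 for a win, 0 for a loss
    scores = {}
    for move in state.legal_moves():
        child = state.play(move)
        scores[move] = sum(playout(child) for _ in range(n_playouts))
    return max(scores, key=scores.get)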
Monte Carlo Tree Search has been used successfully to play games such as Go, Tantrix, Battleship, Havannah, and Arimaa.
=== Design and visuals ===
Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects.
=== Search and rescue ===
The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables. Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources.
=== Finance and business ===
Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law.
Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project. Monte Carlo methods are also used in option pricing, default risk analysis. Additionally, they can be used to estimate the financial impact of medical interventions.
=== Law ===
A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole.
=== Library science ===
A Monte Carlo approach has also been used to simulate the number of book publications by genre in Malaysia. The simulation used previously published national book publication data and book prices by genre in the local market. The Monte Carlo results were used to determine which book genres Malaysians favour and to compare book publication between Malaysia and Japan.
=== Other ===
Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one.
== Use in mathematics ==
In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
=== Integration ===
Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10^100 points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral. 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom.
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays 1/√N convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.
A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm.
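A one-dimensional sketch shows the mechanics: to estimate an integral whose integrand is concentrated near one end of the interval, sample from a proposal density that concentrates there too and weight each sample by the ratio of integrand to density. The integrand and proposal below are illustrative choices:

import random
from math import exp, log

# Importance sampling for I = ∫_0^1 20 e^(-20x) dx ≈ 1. The integrand is
# concentrated near x = 0, where uniform sampling wastes most points, so
# we draw from a truncated exponential g that also concentrates there.
f = lambda x: 20 * exp(-20 * x)
Z = 1 - exp(-10)                       # normalizer of g on [0, 1]
g = lambda x: 10 * exp(-10 * x) / Z    # proposal density
n = 100_000
total = 0.0
for _ in range(n):
    u = random.random()
    x = -log(1 - u * Z) / 10           # inverse-transform sample from g
    total += f(x) / g(x)               # weight corrects the bias of g
estimate = total / n   # ≈ 1.0, with far lower variance than uniform draws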
A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly.
Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers.
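A minimal random-walk Metropolis sampler in one dimension illustrates the idea; the target density (an unnormalized standard normal) and the step size are arbitrary choices:

import random
from math import exp

def metropolis(target, x0, n, step=1.0):
    # Random-walk Metropolis: propose x' = x + noise, accept with
    # probability min(1, target(x') / target(x)).
    samples, x = [], x0
    px = target(x)
    for _ in range(n):
        xp = x + random.uniform(-step, step)
        pxp = target(xp)
        if random.random() < pxp / px:
            x, px = xp, pxp
        samples.append(x)
    return samples

chain = metropolis(lambda x: exp(-x * x / 2), x0=0.0, n=50_000)
mean = sum(chain) / len(chain)   # ≈ 0 after the chain mixes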
=== Simulation and optimization ===
Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration spaces. Comprehensive reviews of many issues related to simulation and optimization are available in the literature.
The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. Suppose, however, that instead of minimizing the total distance traveled to visit each desired destination, we wanted to minimize the total time needed to reach each destination. This goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine the optimal path a different simulation is required: Monte Carlo simulation is first used to understand the range of potential times it could take to go from one point to another (represented by a probability distribution in this case rather than a specific distance), and the travel decisions are then optimized to identify the best path to follow taking that uncertainty into account.
=== Inverse problems ===
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data).
As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.
=== Philosophy ===
Popular exposition of the Monte Carlo Method was conducted by McCracken. The method's general philosophy was discussed by Elishakoff and Grüne-Yanoff and Weirich.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
In computational modelling, multiphysics simulation (often shortened to simply "multiphysics") is defined as the simultaneous simulation of different aspects of a physical system or systems and the interactions among them. For example, simultaneous simulation of the physical stress on an object, the temperature distribution of the object and the thermal expansion which leads to the variation of the stress and temperature distributions would be considered a multiphysics simulation. Multiphysics simulation is related to multiscale simulation, which is the simultaneous simulation of a single process on either multiple time or distance scales.
As an interdisciplinary field, multiphysics simulation can span many science and engineering disciplines. Simulation methods frequently include numerical analysis, partial differential equations and tensor analysis.
== Multiphysics simulation process ==
The implementation of a multiphysics simulation follows a typical series of steps:
Identify the aspects of the system to be simulated, including physical processes, starting conditions, and the coupling or boundary conditions among these processes.
Create a discrete mathematical model of the system.
Numerically solve the model.
Process the resulting data.
== Mathematical models ==
Mathematical models used in multiphysics simulations are generally a set of coupled equations. The equations can be divided into three categories according to their nature and intended role: governing equations, auxiliary equations, and boundary/initial conditions. A governing equation describes a major physical mechanism or process. Multiphysics simulations are numerically implemented with discretization methods such as the finite element method, finite difference method, or finite volume method.
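As a toy illustration of such a coupled model, consider one-way coupling of one-dimensional heat conduction to thermal stress in a clamped bar: the temperature field is advanced by finite differences, and the stress field is then evaluated from the thermal strain. All material constants and grid parameters below are made-up placeholders:

import numpy as np

# One-way coupled toy multiphysics problem: heat conduction in a bar
# whose ends are clamped, followed by evaluation of the thermal stress.
nx, nt = 50, 2000
alpha, dx, dt = 1e-4, 0.01, 0.1          # diffusivity, grid and time step
E, a_exp, T_ref = 200e9, 1.2e-5, 20.0    # Young's modulus, expansion coeff.

T = np.full(nx, T_ref)
T[0], T[-1] = 100.0, 20.0                # fixed-temperature boundaries
for _ in range(nt):                      # explicit finite-difference step
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# Fully constrained bar: thermal strain is converted entirely to stress
stress = -E * a_exp * (T - T_ref)        # sigma = -E * alpha * dT, in Pa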
== Challenges of multiphysics simulation ==
Generally speaking, multiphysics simulation is much harder than simulating the individual physical processes separately.
The main extra issue is how to integrate the multiple aspects of the processes with proper handling of the interactions among them.
Such issues become quite difficult when different types of numerical methods are used for the simulations of individual physical aspects.
This occurs, for example, when a fluid-structure interaction problem is simulated with a typical Eulerian finite volume method for the flow and a Lagrangian finite element method for the structure dynamics.
== See also ==
Finite difference time-domain method
== References ==
Susan L. Graham, Marc Snir, and Cynthia A. Patterson (Editors), Getting Up to Speed: The Future of Supercomputing, Appendix D. The National Academies Press, Washington DC, 2004. ISBN 0-309-09502-6.
Paul Lethbridge, Multiphysics Analysis, p. 26, The Industrial Physicist, Dec 2004/Jan 2005.
A spin model is a mathematical model used in physics primarily to explain magnetism. Spin models may either be classical or quantum mechanical in nature. Spin models have been studied in quantum field theory as examples of integrable models. Spin models are also used in quantum information theory and computability theory in theoretical computer science. The theory of spin models is a far reaching and unifying topic that cuts across many fields.
== Introduction ==
In ordinary materials, the magnetic dipole moments of individual atoms produce magnetic fields that cancel one another, because each dipole points in a random direction. Ferromagnetic materials below their Curie temperature, however, exhibit magnetic domains in which the atomic dipole moments are locally aligned, producing a macroscopic, non-zero magnetic field from the domain. These are the ordinary "magnets" with which we are all familiar.
The study of the behavior of such "spin models" is a thriving area of research in condensed matter physics. For instance, the Ising model describes spins (dipoles) that have only two possible states, up and down, whereas in the Heisenberg model the spin vector is allowed to point in any direction. In certain magnets, the magnetic dipoles are only free to rotate in a 2D plane, a system which can be adequately described by the so-called xy-model.
The lack of a unified theory of magnetism forces scientists to model magnetic systems theoretically with one, or a combination, of these spin models in order to understand the intricate behavior of atomic magnetic interactions. Numerical implementation of these models has led to several interesting results, such as quantitative studies of phase transitions.
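The Ising model is the usual entry point for such numerical work. A minimal Metropolis sweep on a small two-dimensional lattice might look as follows (the lattice size, inverse temperature, and sweep count are arbitrary):

import random
import math

# Metropolis sampling of the 2-D Ising model on a periodic L x L lattice
L_size, beta, sweeps = 20, 0.44, 1000   # beta near the critical coupling
spins = [[random.choice((-1, 1)) for _ in range(L_size)]
         for _ in range(L_size)]

for _ in range(sweeps * L_size * L_size):
    i, j = random.randrange(L_size), random.randrange(L_size)
    # Sum of the four nearest neighbors with periodic boundaries
    nb = (spins[(i + 1) % L_size][j] + spins[(i - 1) % L_size][j]
          + spins[i][(j + 1) % L_size] + spins[i][(j - 1) % L_size])
    dE = 2 * spins[i][j] * nb           # energy cost of flipping spin (i, j)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        spins[i][j] = -spins[i][j]      # accept the flip

magnetization = abs(sum(map(sum, spins))) / L_size**2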
== Quantum ==
A quantum spin model is a quantum Hamiltonian model that describes a system of spins, interacting or not; such models are an active area of research in the fields of strongly correlated electron systems, quantum information theory, and quantum computing. The physical observables in these quantum models are operators in a Hilbert space acting on state vectors, as opposed to the physical observables in the corresponding classical spin models – like the Ising model – which are commutative variables.
== See also ==
== References ==
== Bibliography ==
Bethe, H. (March 1931). "Zur Theorie der Metalle". Zeitschrift für Physik. 71 (3–4): 205–226. Bibcode:1931ZPhy...71..205B. doi:10.1007/BF01341708. S2CID 124225487.
R.J. Baxter, Exactly solved models in statistical mechanics, London, Academic Press, 1982.
Affleck, Ian; Marston, J. Brad (1 March 1988). "Large-n limit of the Heisenberg-Hubbard model: Implications for high-Tc superconductors". Physical Review B. 37 (7): 3774–3777. Bibcode:1988PhRvB..37.3774A. doi:10.1103/PhysRevB.37.3774. PMID 9944997.
== External links ==
Introduction to classical and Ising Spin Models
Quantum Field Theory of Many-Body Systems
Institute of Quantum Information Caltech | Wikipedia/Spin_model |
Plasma modeling refers to solving equations of motion that describe the state of a plasma. It is generally coupled with Maxwell's equations for electromagnetic fields or Poisson's equation for electrostatic fields. There are several main types of plasma models: single particle, kinetic, fluid, hybrid kinetic/fluid, gyrokinetic and as system of many particles.
== Single particle description ==
The single-particle model describes the plasma as individual electrons and ions moving in imposed (rather than self-consistent) electric and magnetic fields. The motion of each particle is thus described by the Lorentz Force Law.
In many cases of practical interest, this motion can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point.
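A minimal sketch of this single-particle description (charge, mass, time step, and the uniform field values are assumptions for illustration) integrates the Lorentz force law with the standard Boris scheme; the computed trajectory shows the fast gyration superposed on the slow E×B drift of the guiding center:

```python
import numpy as np

# Sketch of single-particle motion under the Lorentz force law using the
# standard Boris pusher. Charge, mass, time step, and the uniform E and B
# fields below are assumptions chosen for illustration.
q, m, dt = 1.0, 1.0, 0.01
E = np.array([0.0, 1.0, 0.0])          # uniform electric field
B = np.array([0.0, 0.0, 1.0])          # uniform magnetic field
x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])

for _ in range(5000):
    # half electric kick, magnetic rotation, half electric kick
    v_minus = v + 0.5 * q * E / m * dt
    t_vec = 0.5 * q * B / m * dt
    s_vec = 2.0 * t_vec / (1.0 + t_vec @ t_vec)
    v_prime = v_minus + np.cross(v_minus, t_vec)
    v = v_minus + np.cross(v_prime, s_vec) + 0.5 * q * E / m * dt
    x = x + v * dt

# The position shows fast gyration about the guiding center plus the slow
# E×B drift (here in +x, since E×B/B² = (1, 0, 0)).
print("final position:", x)
```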
== Kinetic description ==
The kinetic model is the most fundamental way to describe a plasma, producing a distribution function $f(\vec{x}, \vec{v}, t)$, where the independent variables $\vec{x}$ and $\vec{v}$ are position and velocity, respectively.
A kinetic description is achieved by solving the Boltzmann equation or, when the correct description of long-range Coulomb interaction is necessary, by the Vlasov equation which contains self-consistent collective electromagnetic field, or by the Fokker–Planck equation, in which approximations have been used to derive manageable collision terms. The charges and currents produced by the distribution functions self-consistently determine the electromagnetic fields via Maxwell's equations.
== Fluid description ==
To reduce the complexities in the kinetic description, the fluid model describes the plasma based on macroscopic quantities (velocity moments of the distribution such as density, mean velocity, and mean energy). The equations for macroscopic quantities, called fluid equations, are obtained by taking velocity moments of the Boltzmann equation or the Vlasov equation. The fluid equations are not closed without the determination of transport coefficients such as mobility, diffusion coefficient, averaged collision frequencies, and so on. To determine the transport coefficients, the velocity distribution function must be assumed/chosen. But this assumption can lead to a failure of capturing some physics.
== Hybrid kinetic/fluid description ==
Although the kinetic model describes the physics accurately, it is more complex (and in the case of numerical simulations, more computationally intensive) than the fluid model. The hybrid model is a combination of fluid and kinetic models, treating some components of the system as a fluid, and others kinetically. The hybrid model is sometimes applied in space physics, when the simulation domain exceeds thousands of ion gyroradius scales, making it impractical to solve kinetic equations for electrons. In this approach, magnetohydrodynamic fluid equations describe electrons, while the kinetic Vlasov equation describes ions.
== Gyrokinetic description ==
In the gyrokinetic model, which is appropriate to systems with a strong background magnetic field, the kinetic equations are averaged over the fast circular motion of the gyroradius. This model has been used extensively for simulation of tokamak plasma instabilities (for example, the GYRO and Gyrokinetic ElectroMagnetic codes), and more recently in astrophysical applications.
== Quantum mechanical methods ==
Quantum methods are not yet very common in plasma modeling. They can be used to solve modeling problems where other methods do not apply, and involve the application of quantum field theory to plasma. In these cases, the electric and magnetic fields made by particles are modeled as a field: a web of forces. Particles that move, or are removed from the population, push and pull on this field. The mathematical treatment involves Lagrangian mathematics.
Collisional-radiative modeling is used to calculate quantum state densities and the emission/absorption properties of a plasma. This plasma radiation physics is critical for the diagnosis and simulation of astrophysical and nuclear fusion plasma. It is one of the most general approaches and lies between the extrema of a local thermal equilibrium and a coronal picture.
In a local thermal equilibrium the population of excited states is distributed according to a Boltzmann distribution. However, this holds only if densities are high enough for an excited hydrogen atom to undergo many collisions such that the energy is distributed before the radiative process sets in. In a coronal picture the timescale of the radiative process is small compared to the collisions, since densities are very small. The use of the term coronal equilibrium is ambiguous and may also refer to the non-transport ionization balance of recombination and ionization. The only thing they have in common is that a coronal equilibrium is not sufficient for tokamak plasma.
== Commercial plasma physics modeling codes ==
Quantemol-VT
VizGlow
VizSpark
CFD-ACE+
COMSOL
LSP
Magic
Starfish
USim
VSim
STAR-CCM+
== See also ==
Particle-in-cell
== References ==
Francis F. Chen (2006). Introduction to Plasma Physics and Controlled Fusion (2nd ed.). Springer. ISBN 978-0-306-41332-2.
Nicholas Krall & Alvin Trivelpiece (1986). Principles of Plasma Physics. San Francisco Press. ISBN 978-0-911302-58-5.
Ledvina, S. A.; Y.-J. Ma; E. Kallio (2008). "Modeling and Simulating Flowing Plasmas and Related Phenomena". Space Science Reviews. 139 (1–4): 143. Bibcode:2008SSRv..139..143L. doi:10.1007/s11214-008-9384-6. S2CID 121999061. | Wikipedia/Plasma_modeling |
Computational geophysics is the field of study that uses any type of numerical computations to generate and analyze models of complex geophysical systems. It can be considered an extension, or sub-field, of both computational physics and geophysics. In recent years, computational power, data availability, and modelling capabilities have all improved exponentially, making computational geophysics a more populated discipline. Due to the large computational size of many geophysical problems, high-performance computing can be required to handle analysis. Modeling applications of computational geophysics include atmospheric modelling, oceanic modelling, general circulation models, and geological modelling. In addition to modelling, some problems in remote sensing fall within the scope of computational geophysics such as tomography, inverse problems, and 3D reconstruction.
== Geophysical models ==
The generation of geophysical models is a key component of computational geophysics. Geophysical models are defined as "physical-mathematical descriptions of temporal and/or spatial changes in important geological variables, as derived from accepted laws, theories, and empirical relationships." Geophysical models are frequently used by researchers in all disciplines of environmental science.
In climate science, atmospheric, oceanic, and general circulation models are a crucial standby for researchers. Although remote sensing has been steadily providing more and more in-situ measurements of geophysical variables, nothing comes close to the temporal and geospatial resolution of data provided by models. Although data can be subject to accuracy issues due to the extrapolation techniques used, the usage of modeled data is a commonly accepted practice in climate and meteorological sciences. Oftentimes, these models will be used in concert with in-situ measurements.
A few well-known models are
NCEP/NCAR Reanalysis Project, an atmospheric model
Global Forecast System, a numerical weather prediction model
HYCOM, a general ocean circulation model
Geological system models are frequently used in research, but have less public data availability than climatic and meteorological models. There is a wide range of software available that allows for geomodelling.
== Remote sensing ==
The United States Geological Survey (USGS) defines remote sensing as the measurement of some property by transmitting some type of radiation at a distance, and measuring the emitted and reflected radiation. Remote sensing can involve satellites, cameras, and sound wave emission. Remote sensing is inherently a type of indirect measurement, meaning that some type of computation must be completed in order to obtain a measurement of the property of interest. For some applications, these computations can be highly complex. In addition, the analysis of these data products can be classified as computational geophysics.
== Programs of study ==
In Canada, computational geophysics is offered as a university major in the form of a BSc (Hon.) with co-op at Carleton University.
Elsewhere, Rice University has a Center for Computational Geophysics, while Princeton University, the University of Texas, and California Institute of Technology have similar research centers. Experts, laboratories, projects, internships, undergraduate programs, graduate programs and/or facilities in the program exist at the University of Queensland, Wyoming University, Boston University, Stanford University, Uppsala University, Kansas State University, Kingston University, Australian National University, University of California, San Diego, University of Washington, Nanyang Technological University, ETH Zurich, University of Sydney, Appalachian State University, University of Minnesota, University of Tasmania, Bahria University, Boise State University, University of Michigan, University of Oulu, University of Utah, and others.
== Laboratories ==
Federal organizations that study or apply computational geophysics include
Earth System Research Laboratories at NOAA
Earth Sciences Division at NASA
Computational Geophysics Lab at the Earth Observatory of Singapore
== References ==
== See also ==
Computational fluid dynamics
History of geophysics
List of ocean circulation models
Meteorological reanalysis
Numerical weather prediction | Wikipedia/Computational_geophysics |
Computational Transportation Science (CTS) is an emerging discipline that combines computer science and engineering with the modeling, planning, and economic aspects of transport. The discipline studies how to improve the safety, mobility, and sustainability of the transport system by taking advantage of information technologies and ubiquitous computing.
Computational Transportation Science is an emerging discipline going beyond vehicular technology, addressing pedestrian systems on hand-held devices as well as issues such as transport data mining (or movement analysis) and data management. CTS allows for increasing flexibility of the system, as local and autonomous negotiations among transport peers, partners, and supporting infrastructure become possible. Thus, CTS provides means to study localized computing, self-organization, cooperation, and simulation of transport systems.
Several academic conferences on CTS have been held to date:
The Fourth ACM SIGSPATIAL International Workshop on Computational Transportation Science
The Third ACM SIGSPATIAL International Workshop on Computational Transportation Science
Dagstuhl Seminar 10121 on Computational Transportation Science
The Second International Workshop on Computational Transportation Science
The First International Workshop on Computational Transportation Science
There is also an IGERT PhD program in Computational Transportation Science at the University of Illinois at Chicago.
== References ==
== External links ==
Computational Transportation Science | Wikipedia/Computational_transportation_science |
This is a list of graphical methods with a mathematical basis.
Included are diagram techniques, chart techniques, plot techniques, and other forms of visualization.
There is also a list of computer graphics and descriptive geometry topics.
== Simple displays ==
Area chart
Bar chart
Histogram
Variable-width bar chart
Box plot
Dispersion fan diagram
Graph of a function
Logarithmic graph paper
Heatmap
Line chart
Pie chart
Plotting
Radar chart
Scatterplot
Sparkline
Spiral graphic
Stemplot
Stripe graphic
== Set theory ==
Venn diagram
Karnaugh diagram
== Descriptive geometry ==
Isometric projection
Orthographic projection
Perspective (graphical)
== Engineering drawing ==
Technical drawing
Graphical projection
Mohr's circle
Pantograph
Circuit diagram
Smith chart
Sankey diagram
== Systems analysis ==
Binary decision diagram
Control-flow graph
Functional flow block diagram
Information flow diagram
IDEF
N2 chart
Sankey diagram
State diagram
System context diagram
Data-flow diagram
== Cartography ==
Map projection
Orthographic projection (cartography)
Robinson projection
Stereographic projection
Dymaxion map
Topographic map
Craig retroazimuthal projection
Hammer retroazimuthal projection
== Biological sciences ==
Cladogram
Punnett square
Systems Biology Graphical Notation
== Physical sciences ==
Free body diagram
Greninger chart
Phase diagram
Wavenumber-frequency diagram
Bode plot
Nyquist plot
Dalitz plot
Feynman diagram
Carnot Plot
== Business methods ==
Flowchart
Workflow
Gantt chart
Growth-share matrix (often called BCG chart)
Work breakdown structure
Control chart
Ishikawa diagram
Pareto chart (often used to prioritise outputs of an Ishikawa diagram)
== Conceptual analysis ==
Mind mapping
Concept mapping
Conceptual graph
Entity-relationship diagram
Tag cloud, also known as word cloud
== Statistics ==
Autocorrelation plot
Bar chart
Biplot
Box plot
Bullet graph
Chernoff faces
Control chart
Fan chart
Forest plot
Funnel plot
Galbraith plot
Histogram
Mosaic plot
Multidimensional scaling
np-chart
p-chart
Pie chart
Probability plot
Normal probability plot
Poincaré plot
Probability plot correlation coefficient plot
Q–Q plot
Rankit
Run chart
Seasonal subseries plot
Scatter plot
Skewplot
Ternary plot
Recurrence plot
Waterfall chart
Violin plot
== Other ==
Ulam spiral
Nomogram
Fitness landscape
Weather map
Predominance diagram
One-line diagram
Autostereogram
Edgeworth box
Lineweaver-Burk diagram
Eadie-Hofstee diagram
Population pyramid
Parametric plot
Causality loop diagram
Ramachandran plot
V model
Sentence diagram
Tree structure
Treemapping
Airfield traffic pattern diagram
== See also ==
List of information graphics software
Data and information visualization
== External links ==
A periodic table of visualization methods.
Speaking of Graphics. | Wikipedia/Graphical_method |
Car–Parrinello molecular dynamics or CPMD refers to either a method used in molecular dynamics (also known as the Car–Parrinello method) or the computational chemistry software package used to implement this method.
The CPMD method is one of the major methods for calculating ab-initio molecular dynamics (ab-initio MD or AIMD).
Ab initio molecular dynamics (ab initio MD) is a computational method that uses first principles, or fundamental laws of nature, to simulate the motion of atoms in a system. It is a type of molecular dynamics (MD) simulation that does not rely on empirical potentials or force fields to describe the interactions between atoms, but rather calculates these interactions directly from the electronic structure of the system using quantum mechanics.
In an ab initio MD simulation, the total energy of the system is calculated at each time step using density functional theory (DFT) or another method of quantum chemistry. The forces acting on each atom are then determined from the gradient of the energy with respect to the atomic coordinates, and the equations of motion are solved to predict the trajectory of the atoms.
AIMD permits chemical bond breaking and forming events to occur and accounts for electronic polarization effects. Therefore, ab initio MD simulations can be used to study a wide range of phenomena, including the structural, thermodynamic, and dynamic properties of materials and chemical reactions. They are particularly useful for systems that are not well described by empirical potentials or force fields, such as systems with strong electronic correlation or systems with many degrees of freedom. However, ab initio MD simulations are computationally demanding and require significant computational resources.
The CPMD method is related to the more common Born–Oppenheimer molecular dynamics (BOMD) method in that the quantum mechanical effect of the electrons is included in the calculation of energy and forces for the classical motion of the nuclei. CPMD and BOMD are different types of AIMD. However, whereas BOMD treats the electronic structure problem within the time-independent Schrödinger equation, CPMD explicitly includes the electrons as active degrees of freedom, via (fictitious) dynamical variables.
The software is a parallelized plane wave / pseudopotential implementation of density functional theory, particularly designed for ab initio molecular dynamics.
== Car–Parrinello method ==
The Car–Parrinello method is a type of molecular dynamics, usually employing periodic boundary conditions, planewave basis sets, and density functional theory, proposed by Roberto Car and Michele Parrinello in 1985 while working at SISSA, who were subsequently awarded the Dirac Medal by ICTP in 2009.
In contrast to Born–Oppenheimer molecular dynamics wherein the nuclear (ions) degree of freedom are propagated using ionic forces which are calculated at each iteration by approximately solving the electronic problem with conventional matrix diagonalization methods, the Car–Parrinello method explicitly introduces the electronic degrees of freedom as (fictitious) dynamical variables, writing an extended Lagrangian for the system which leads to a system of coupled equations of motion for both ions and electrons. In this way, an explicit electronic minimization at each time step, as done in Born–Oppenheimer MD, is not needed: after an initial standard electronic minimization, the fictitious dynamics of the electrons keeps them on the electronic ground state corresponding to each new ionic configuration visited along the dynamics, thus yielding accurate ionic forces. In order to maintain this adiabaticity condition, it is necessary that the fictitious mass of the electrons is chosen small enough to avoid a significant energy transfer from the ionic to the electronic degrees of freedom. This small fictitious mass in turn requires that the equations of motion are integrated using a smaller time step than the one (1–10 fs) commonly used in Born–Oppenheimer molecular dynamics.
Currently, the CPMD method can be applied to systems that consist of a few tens or hundreds of atoms and access timescales on the order of tens of picoseconds.
== General approach ==
In CPMD the core electrons are usually described by a pseudopotential, and the wavefunctions of the valence electrons are approximated by a plane-wave basis set.
The ground state electronic density (for fixed nuclei) is calculated self-consistently, usually using the density functional theory method. Kohn-Sham equations are often used to calculate the electronic structure, where electronic orbitals are expanded in a plane-wave basis set. Then, using that density, forces on the nuclei can be computed, to update the trajectories (using, e.g. the Verlet integration algorithm). In addition, however, the coefficients used to obtain the electronic orbital functions can be treated as a set of extra spatial dimensions, and trajectories for the orbitals can be calculated in this context.
== Fictitious dynamics ==
CPMD is an approximation of the Born–Oppenheimer MD (BOMD) method. In BOMD, the electrons' wave function must be minimized via matrix diagonalization at every step in the trajectory. CPMD uses fictitious dynamics to keep the electrons close to the ground state, preventing the need for a costly self-consistent iterative minimization at each time step. The fictitious dynamics relies on the use of a fictitious electron mass (usually in the range of 400 – 800 a.u.) to ensure that there is very little energy transfer from nuclei to electrons, i.e. to ensure adiabaticity. Any increase in the fictitious electron mass resulting in energy transfer would cause the system to leave the ground-state BOMD surface.
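The following toy sketch illustrates the mechanism (it is not an electronic-structure code: the quadratic "energy functional", masses, and coupling constant are assumptions chosen so the example stays self-contained). A single nuclear coordinate R with mass M is coupled to one "electronic" variable psi carrying a fictitious mass mu much smaller than M; after starting psi at its instantaneous minimum (the initial "minimization"), velocity Verlet integration keeps psi tracking the "ground state" psi = g*R as R evolves, mimicking adiabaticity:

```python
import numpy as np

# Toy model, not an electronic-structure code: one nuclear coordinate R (mass M)
# coupled to one "electronic" variable psi (fictitious mass mu << M), integrated
# with velocity Verlet. The quadratic energy functional E(psi, R) is an assumption.
M, mu, dt = 100.0, 0.5, 0.01
k_e, g = 4.0, 1.0               # electronic stiffness and coupling (assumed values)

def forces(psi, R):
    # E(psi, R) = 0.5*k_e*(psi - g*R)**2 + 0.5*R**2
    f_psi = -k_e * (psi - g * R)           # fictitious electronic force, -dE/dpsi
    f_R = g * k_e * (psi - g * R) - R      # nuclear force, -dE/dR
    return f_psi, f_R

R, psi = 1.0, 1.0               # psi starts at its minimum (initial "minimization")
vR = vpsi = 0.0
f_psi, f_R = forces(psi, R)
for _ in range(10000):
    vpsi += 0.5 * dt * f_psi / mu
    vR += 0.5 * dt * f_R / M
    psi += dt * vpsi
    R += dt * vR
    f_psi, f_R = forces(psi, R)
    vpsi += 0.5 * dt * f_psi / mu
    vR += 0.5 * dt * f_R / M

# With mu << M, psi stays close to the instantaneous "ground state" g*R,
# so accurate forces on R are obtained without re-minimizing at every step.
print(R, psi, g * R)
```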
=== Lagrangian ===
{\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\sum _{I}^{\mathrm {nuclei} }\ M_{I}{\dot {\mathbf {R} }}_{I}^{2}+\mu \sum _{i}^{\mathrm {orbitals} }\int d\mathbf {r} \ |{\dot {\psi }}_{i}(\mathbf {r} ,t)|^{2}\right)-E\left[\{\psi _{i}\},\{\mathbf {R} _{I}\}\right]+\sum _{ij}\Lambda _{ij}\left(\int d\mathbf {r} \ \psi _{i}\psi _{j}-\delta _{ij}\right),}
where $\mu$ is the fictitious mass parameter and $E[\{\psi_i\},\{\mathbf{R}_I\}]$ is the Kohn–Sham energy density functional, which outputs energy values when given Kohn–Sham orbitals and nuclear positions.
=== Orthogonality constraint ===
{\displaystyle \int d\mathbf {r} \ \psi _{i}^{*}(\mathbf {r} ,t)\psi _{j}(\mathbf {r} ,t)=\delta _{ij},}
where δij is the Kronecker delta.
=== Equations of motion ===
The equations of motion are obtained by finding the stationary point of the Lagrangian under variations of ψi and RI, with the orthogonality constraint.
{\displaystyle M_{I}{\ddot {\mathbf {R} }}_{I}=-\nabla _{I}\,E\left[\{\psi _{i}\},\{\mathbf {R} _{I}\}\right]}
{\displaystyle \mu {\ddot {\psi }}_{i}(\mathbf {r} ,t)=-{\frac {\delta E}{\delta \psi _{i}^{*}(\mathbf {r} ,t)}}+\sum _{j}\Lambda _{ij}\psi _{j}(\mathbf {r} ,t),}
where Λij is a Lagrangian multiplier matrix to comply with the orthonormality constraint.
=== Born–Oppenheimer limit ===
In the formal limit where μ → 0, the equations of motion approach Born–Oppenheimer molecular dynamics.
== Software packages ==
There are a number of software packages available for performing AIMD simulations. Some of the most widely used packages include:
CP2K: an open-source software package for AIMD.
Quantum Espresso: an open-source package for performing DFT calculations. It includes a module for AIMD.
VASP: a commercial software package for performing DFT calculations. It includes a module for AIMD.
Gaussian: a commercial software package that can perform AIMD.
NWChem: an open-source software package for AIMD.
LAMMPS: an open-source software package for performing classical and ab initio MD simulations.
SIESTA: an open-source software package for AIMD.
== Application ==
Studying the behavior of water near a hydrophobic graphene sheet.
Investigating the structure and dynamics of liquid water at ambient temperature.
Solving the heat transfer problems (heat conduction and thermal radiation) between Si/Ge superlattices.
Probing the proton transfer along 1D water chains inside carbon nanotubes.
Evaluating the critical point of aluminum.
Predicting the amorphous phase of the phase-change memory material GeSbTe.
Studying the combustion process of lignite-water systems.
Computing and analyzing the IR spectra in terms of H-bond interactions.
== See also ==
Computational physics
Density functional theory
Computational chemistry
Molecular dynamics
Quantum chemistry
Ab initio quantum chemistry methods
Quantum chemistry computer programs
List of software for molecular mechanics modeling
List of quantum chemistry and solid-state physics software
CP2K
== References ==
== External links ==
Car-Parrinello Molecular Dynamics
CP2K Open Source Molecular Dynamics | Wikipedia/Car–Parrinello_molecular_dynamics |
In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. New samples are added to the sequence in two steps: first a new sample is proposed based on the previous sample, then the proposed sample is either added to the sequence or rejected depending on the value of the probability distribution at that point. The resulting sequence can be used to approximate the distribution (e.g. to generate a histogram) or to compute an integral (e.g. an expected value).
Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods (e.g. adaptive rejection sampling) that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods.
== History ==
The algorithm is named in part for Nicholas Metropolis, the first coauthor of a 1953 paper, entitled Equation of State Calculations by Fast Computing Machines, with Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller and Edward Teller. For many years the algorithm was known simply as the Metropolis algorithm. The paper proposed the algorithm for the case of symmetrical proposal distributions, but in 1970, W.K. Hastings extended it to the more general case. The generalized method was eventually identified by both names, although the first use of the term "Metropolis-Hastings algorithm" is unclear.
Some controversy exists with regard to credit for development of the Metropolis algorithm. Metropolis, who was familiar with the computational aspects of the method, had coined the term "Monte Carlo" in an earlier article with Stanisław Ulam, and led the group in the Theoretical Division that designed and built the MANIAC I computer used in the experiments in 1952. However, prior to 2003 there was no detailed account of the algorithm's development. Shortly before his death, Marshall Rosenbluth attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics". Further historical clarification is made by Gubernatis in a 2005 journal article recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time.
This contradicts an account by Edward Teller, who states in his memoirs that the five authors of the 1953 article worked together for "days (and nights)". In contrast, the detailed account by Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of statistical mechanics and take ensemble averages instead of following detailed kinematics". This, says Rosenbluth, started him thinking about the generalized Monte Carlo approach – a topic which he says he had discussed often with John Von Neumann. Arianna Rosenbluth recounted (to Gubernatis in 2003) that Augusta Teller started the computer work, but that Arianna herself took it over and wrote the code from scratch. In an oral history recorded shortly before his death, Rosenbluth again credits Teller with posing the original problem, himself with solving it, and Arianna with programming the computer.
== Description ==
The Metropolis–Hastings algorithm can draw samples from any probability distribution with probability density $P(x)$, provided that we know a function $f(x)$ proportional to the density $P$ and the values of $f(x)$ can be calculated. The requirement that $f(x)$ must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because it removes the need to calculate the density's normalization factor, which is often extremely difficult in practice.
The Metropolis–Hastings algorithm generates a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. These sample values are produced iteratively, with the distribution of the next sample depending only on the current sample value, which makes the sequence of samples a Markov chain. Specifically, at each iteration, the algorithm proposes a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted, in which case the candidate value is used in the next iteration, or rejected, in which case the candidate value is discarded and the current value is reused in the next iteration. The probability of acceptance is determined by comparing the values of the function $f(x)$ of the current and candidate sample values with respect to the desired distribution.
The method used to propose new candidates is characterized by the probability distribution $g(x \mid y)$ (sometimes written $Q(x \mid y)$) of a new proposed sample $x$ given the previous sample $y$. This is called the proposal density, proposal function, or jumping distribution. A common choice for $g(x \mid y)$ is a Gaussian distribution centered at $y$, so that points closer to $y$ are more likely to be visited next, making the sequence of samples into a Gaussian random walk. In the original paper by Metropolis et al. (1953), $g(x \mid y)$ was suggested to be a uniform distribution limited to some maximum distance from $y$. More complicated proposal functions are also possible, such as those of Hamiltonian Monte Carlo, Langevin Monte Carlo, or preconditioned Crank–Nicolson.
For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below.
Metropolis algorithm (symmetric proposal distribution)
Let $f(x)$ be a function that is proportional to the desired probability density function $P(x)$ (a.k.a. a target distribution).
Initialization: Choose an arbitrary point $x_t$ to be the first observation in the sample and choose a proposal function $g(x \mid y)$. In this section, $g$ is assumed to be symmetric; in other words, it must satisfy $g(x \mid y) = g(y \mid x)$.
For each iteration t:
Propose a candidate $x'$ for the next sample by picking from the distribution $g(x' \mid x_t)$.
Calculate the acceptance ratio $\alpha = f(x')/f(x_t)$, which will be used to decide whether to accept or reject the candidate. Because $f$ is proportional to the density of $P$, we have that $\alpha = f(x')/f(x_t) = P(x')/P(x_t)$.
Accept or reject:
Generate a uniform random number $u \in [0, 1]$.
If $u \leq \alpha$, then accept the candidate by setting $x_{t+1} = x'$.
If $u > \alpha$, then reject the candidate and set $x_{t+1} = x_t$ instead.
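A compact sketch of these steps in code (the standard-normal target $f$ and the Gaussian proposal width are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def f(x):
    # Unnormalized target density; a standard normal is assumed for illustration
    return np.exp(-0.5 * x**2)

def metropolis(n_samples, step=1.0, x0=0.0):
    """Metropolis sampler with a symmetric Gaussian proposal g(x'|x)."""
    samples = np.empty(n_samples)
    x = x0
    for t in range(n_samples):
        x_prop = x + step * rng.standard_normal()   # propose from g(x'|x_t)
        alpha = f(x_prop) / f(x)                    # acceptance ratio f(x')/f(x_t)
        if rng.random() <= alpha:                   # accept with probability min(1, alpha)
            x = x_prop
        samples[t] = x                              # on rejection, reuse current value
    return samples

s = metropolis(50_000)
print(s[10_000:].mean(), s[10_000:].std())          # ≈ 0 and 1 after burn-in
```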
This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place.
The density $P(x)$ at a specific point $x$ is proportional to the number of iterations the algorithm spends at that point. Note that the acceptance ratio $\alpha$ indicates how probable the new proposed sample is with respect to the current sample, according to the distribution whose density is $P(x)$. If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of $P(x)$, corresponding to $\alpha > 1 \geq u$), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of $P(x)$, while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution with density $P(x)$.
Compared with an algorithm like adaptive rejection sampling that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
The samples are autocorrelated. Even though over the long term they do correctly follow $P(x)$, a set of nearby samples will be correlated with each other and not correctly reflect the distribution. This means that effective sample sizes can be significantly lower than the number of samples actually taken, leading to large errors.
Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a burn-in period is typically necessary, where an initial number of samples are thrown away.
On the other hand, most simple rejection sampling methods suffer from the "curse of dimensionality", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, do not have this problem to such a degree, and thus are often the only solutions available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from hierarchical Bayesian models and other high-dimensional statistical models used nowadays in many disciplines.
In multivariate distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the suitable jumping distribution to use can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as Gibbs sampling, involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. That way, the problem of sampling from potentially high-dimensional space will be reduced to a collection of problems to sample from small dimensionality. This is especially applicable when the multivariate distribution is composed of a set of individual random variables in which each variable is conditioned on only a small number of other variables, as is the case in most typical hierarchical models. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the adaptive rejection sampling methods, the adaptive rejection Metropolis sampling algorithm, a simple one-dimensional Metropolis–Hastings step, or slice sampling.
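As an illustration of Gibbs sampling, the following sketch samples a bivariate normal by drawing each coordinate from its exact one-dimensional full conditional (the target distribution and its correlation value are assumptions chosen for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
rho = 0.8                        # correlation of the assumed bivariate normal target
x = y = 0.0
samples = []
for _ in range(20_000):
    # Each full conditional of the bivariate normal is an exact 1-D Gaussian:
    x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))   # x | y ~ N(rho*y, 1-rho^2)
    y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))   # y | x ~ N(rho*x, 1-rho^2)
    samples.append((x, y))

s = np.array(samples[2_000:])
print(np.corrcoef(s.T)[0, 1])    # ≈ rho
```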
== Formal derivation ==
The purpose of the Metropolis–Hastings algorithm is to generate a collection of states according to a desired distribution $P(x)$. To accomplish this, the algorithm uses a Markov process, which asymptotically reaches a unique stationary distribution $\pi(x)$ such that $\pi(x) = P(x)$.
A Markov process is uniquely defined by its transition probabilities $P(x' \mid x)$, the probability of transitioning from any given state $x$ to any other given state $x'$. It has a unique stationary distribution $\pi(x)$ when the following two conditions are met:
Existence of stationary distribution: there must exist a stationary distribution $\pi(x)$. A sufficient but not necessary condition is detailed balance, which requires that each transition $x \to x'$ is reversible: for every pair of states $x, x'$, the probability of being in state $x$ and transitioning to state $x'$ must be equal to the probability of being in state $x'$ and transitioning to state $x$; that is, $\pi(x) P(x' \mid x) = \pi(x') P(x \mid x')$.
Uniqueness of stationary distribution: the stationary distribution $\pi(x)$ must be unique. This is guaranteed by ergodicity of the Markov process, which requires that every state must (1) be aperiodic – the system does not return to the same state at fixed intervals; and (2) be positive recurrent – the expected number of steps for returning to the same state is finite.
The Metropolis–Hastings algorithm involves designing a Markov process (by constructing transition probabilities) that fulfills the two above conditions, such that its stationary distribution $\pi(x)$ is chosen to be $P(x)$. The derivation of the algorithm starts with the condition of detailed balance:
{\displaystyle P(x'\mid x)P(x)=P(x\mid x')P(x'),}
which is re-written as
{\displaystyle {\frac {P(x'\mid x)}{P(x\mid x')}}={\frac {P(x')}{P(x)}}.}
The approach is to separate the transition into two sub-steps: the proposal and the acceptance-rejection. The proposal distribution $g(x' \mid x)$ is the conditional probability of proposing a state $x'$ given $x$, and the acceptance distribution $A(x', x)$ is the probability of accepting the proposed state $x'$. The transition probability can be written as the product of the two:
{\displaystyle P(x'\mid x)=g(x'\mid x)A(x',x).}
Inserting this relation in the previous equation, we have
{\displaystyle {\frac {A(x',x)}{A(x,x')}}={\frac {P(x')}{P(x)}}{\frac {g(x\mid x')}{g(x'\mid x)}}.}
The next step in the derivation is to choose an acceptance ratio that fulfills the condition above. One common choice is the Metropolis choice:
{\displaystyle A(x',x)=\min \left(1,{\frac {P(x')}{P(x)}}{\frac {g(x\mid x')}{g(x'\mid x)}}\right).}
For this Metropolis acceptance ratio $A$, either $A(x', x) = 1$ or $A(x, x') = 1$ and, either way, the condition is satisfied.
The Metropolis–Hastings algorithm can thus be written as follows:
Initialise
Pick an initial state $x_0$.
Set $t = 0$.
Iterate
Generate a random candidate state $x'$ according to $g(x' \mid x_t)$.
Calculate the acceptance probability $A(x', x_t) = \min\left(1, \frac{P(x')}{P(x_t)} \frac{g(x_t \mid x')}{g(x' \mid x_t)}\right)$.
Accept or reject:
generate a uniform random number $u \in [0, 1]$;
if $u \leq A(x', x_t)$, then accept the new state and set $x_{t+1} = x'$;
if $u > A(x', x_t)$, then reject the new state, and copy the old state forward: $x_{t+1} = x_t$.
Increment: set $t = t + 1$.
Provided that specified conditions are met, the empirical distribution of saved states $x_0, \ldots, x_T$ will approach $P(x)$. The number of iterations ($T$) required to effectively estimate $P(x)$ depends on a number of factors, including the relationship between $P(x)$ and the proposal distribution and the desired accuracy of estimation. For distributions on discrete state spaces, it has to be of the order of the autocorrelation time of the Markov process.
It is important to notice that, in a general problem, it is not clear which distribution $g(x' \mid x)$ one should use or how many iterations are necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem at hand.
== Use in numerical integration ==
A common use of the Metropolis–Hastings algorithm is to compute an integral. Specifically, consider a space $\Omega \subset \mathbb{R}$ and a probability distribution $P(x)$ over $\Omega$, $x \in \Omega$. Metropolis–Hastings can estimate an integral of the form
{\displaystyle P(E)=\int _{\Omega }A(x)P(x)\,dx,}
where $A(x)$ is a (measurable) function of interest.
For example, consider a statistic $E(x)$ and its probability distribution $P(E)$, which is a marginal distribution. Suppose that the goal is to estimate $P(E)$ for $E$ on the tail of $P(E)$. Formally, $P(E)$ can be written as
{\displaystyle P(E)=\int _{\Omega }P(E\mid x)P(x)\,dx=\int _{\Omega }\delta {\big (}E-E(x){\big )}P(x)\,dx=E{\big (}P(E\mid X){\big )}}
and, thus, estimating $P(E)$ can be accomplished by estimating the expected value of the indicator function $A_E(x) \equiv \mathbf{1}_E(x)$, which is 1 when $E(x) \in [E, E + \Delta E]$ and zero otherwise.
Because $E$ is on the tail of $P(E)$, the probability of drawing a state $x$ with $E(x)$ on the tail of $P(E)$ is proportional to $P(E)$, which is small by definition. The Metropolis–Hastings algorithm can be used here to sample (rare) states more likely and thus increase the number of samples used to estimate $P(E)$ on the tails. This can be done e.g. by using a sampling distribution $\pi(x)$ to favor those states (e.g. $\pi(x) \propto e^{aE}$ with $a > 0$).
== Step-by-step instructions ==
Suppose that the most recent value sampled is $x_t$. To follow the Metropolis–Hastings algorithm, we next draw a new proposal state $x'$ with probability density $g(x' \mid x_t)$ and calculate a value $a = a_1 a_2$, where $a_1 = \frac{P(x')}{P(x_t)}$ is the probability (e.g., Bayesian posterior) ratio between the proposed sample $x'$ and the previous sample $x_t$, and $a_2 = \frac{g(x_t \mid x')}{g(x' \mid x_t)}$ is the ratio of the proposal density in the two directions (from $x_t$ to $x'$ and conversely). This is equal to 1 if the proposal density is symmetric.
Then the new state $x_{t+1}$ is chosen according to the following rules.
If $a \geq 1$: $x_{t+1} = x'$;
else: $x_{t+1} = x'$ with probability $a$, and $x_{t+1} = x_t$ with probability $1 - a$.
The Markov chain is started from an arbitrary initial value $x_0$, and the algorithm is run for many iterations until this initial state is "forgotten". These samples, which are discarded, are known as burn-in. The remaining set of accepted values of $x$ represents a sample from the distribution $P(x)$.
The algorithm works best if the proposal density matches the shape of the target distribution $P(x)$, from which direct sampling is difficult; that is, $g(x' \mid x_t) \approx P(x')$.
If a Gaussian proposal density $g$ is used, the variance parameter $\sigma^2$ has to be tuned during the burn-in period. This is usually done by calculating the acceptance rate, which is the fraction of proposed samples that is accepted in a window of the last $N$ samples. The desired acceptance rate depends on the target distribution; however, it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is about 50%, decreasing to about 23% for an $N$-dimensional Gaussian target distribution. These guidelines can work well when sampling from sufficiently regular Bayesian posteriors, as they often follow a multivariate normal distribution, as can be established using the Bernstein–von Mises theorem.
If $\sigma^2$ is too small, the chain will mix slowly (i.e., the acceptance rate will be high, but successive samples will move around the space slowly, and the chain will converge only slowly to $P(x)$). On the other hand, if $\sigma^2$ is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so $a_1$ will be very small, and again the chain will converge very slowly. One typically tunes the proposal distribution so that the algorithm accepts on the order of 30% of all samples – in line with the theoretical estimates mentioned in the previous paragraph.
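A rough sketch of such tuning (the window size and the multiplicative update rule are assumptions for illustration, not a method prescribed above):

```python
import numpy as np

rng = np.random.default_rng(7)
f = lambda x: np.exp(-0.5 * x**2)     # unnormalized 1-D target (assumed)
sigma, x, accepts = 5.0, 0.0, 0
target = 0.5                           # ~50% is the ideal rate for a 1-D Gaussian

for t in range(1, 10_001):
    x_prop = x + sigma * rng.standard_normal()
    if rng.random() < f(x_prop) / f(x):
        x, accepts = x_prop, accepts + 1
    if t % 200 == 0:                   # after each window, nudge sigma toward target
        rate = accepts / 200
        sigma *= np.exp(rate - target) # widen if accepting too often, narrow otherwise
        accepts = 0

print("tuned proposal width:", sigma)
```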
== Bayesian Inference ==
MCMC can be used to draw samples from the posterior distribution of a statistical model.
The acceptance probability is given by:
{\displaystyle P_{acc}(\theta _{i}\to \theta ^{*})=\min \left(1,{\frac {{\mathcal {L}}(y|\theta ^{*})P(\theta ^{*})}{{\mathcal {L}}(y|\theta _{i})P(\theta _{i})}}{\frac {Q(\theta _{i}|\theta ^{*})}{Q(\theta ^{*}|\theta _{i})}}\right),}
where $\mathcal{L}$ is the likelihood, $P(\theta)$ the prior probability density, and $Q$ the (conditional) proposal probability.
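A minimal sketch of this use (the Gaussian likelihood with known unit variance, the standard-normal prior, and the symmetric proposal, whose $Q$ ratio cancels, are all assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(3.0, 1.0, size=50)      # synthetic data; sigma = 1 assumed known

def log_post(theta):
    # log posterior = log likelihood + log prior (standard-normal prior assumed)
    return -0.5 * np.sum((y - theta)**2) - 0.5 * theta**2

theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + 0.5 * rng.standard_normal()       # symmetric proposal: Q ratio = 1
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                                  # accept
    chain.append(theta)

print(np.mean(chain[5_000:]))          # ≈ posterior mean, close to the sample mean of y
```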
== See also ==
Genetic algorithms
Mean-field particle methods
Metropolis light transport
Multiple-try Metropolis
Parallel tempering
Sequential Monte Carlo
Simulated annealing
== References ==
== Notes ==
== Further reading ==
Bernd A. Berg. Markov Chain Monte Carlo Simulations and Their Statistical Analysis. Singapore, World Scientific, 2004.
Chib, Siddhartha; Greenberg, Edward (1995). "Understanding the Metropolis–Hastings Algorithm". The American Statistician, 49(4), 327–335.
David D. L. Minh and Do Le Minh. "Understanding the Hastings Algorithm." Communications in Statistics - Simulation and Computation, 44:2 332–349, 2015
Bolstad, William M. (2010) Understanding Computational Bayesian Statistics, John Wiley & Sons ISBN 0-470-04609-0 | Wikipedia/Metropolis–Hastings_algorithm |
In fluid dynamics, turbulence modeling is the construction and use of a mathematical model to predict the effects of turbulence. Turbulent flows are commonplace in most real-life scenarios. In spite of decades of research, there is no analytical theory to predict the evolution of these turbulent flows. The equations governing turbulent flows can only be solved directly for simple cases of flow. For most real-life turbulent flows, CFD simulations use turbulence models to predict the evolution of turbulence. These turbulence models are simplified constitutive equations that predict the statistical evolution of turbulent flows.
== Closure problem ==
The Navier–Stokes equations govern the velocity and pressure of a fluid flow. In a turbulent flow, each of these quantities may be decomposed into a mean part and a fluctuating part. Averaging the equations gives the Reynolds-averaged Navier–Stokes (RANS) equations, which govern the mean flow. However, the nonlinearity of the Navier–Stokes equations means that the velocity fluctuations still appear in the RANS equations, in the nonlinear term $-\rho \overline{v_i' v_j'}$ from the convective acceleration. This term is known as the Reynolds stress, $R_{ij}$. Its effect on the mean flow is like that of a stress term, such as from pressure or viscosity.
To obtain equations containing only the mean velocity and pressure, we need to close the RANS equations by modelling the Reynolds stress term $R_{ij}$ as a function of the mean flow, removing any reference to the fluctuating part of the velocity. This is the closure problem.
== Eddy viscosity ==
Joseph Valentin Boussinesq was the first to attack the closure problem, by introducing the concept of eddy viscosity. In 1877 Boussinesq proposed relating the turbulence stresses to the mean flow to close the system of equations. Here the Boussinesq hypothesis is applied to model the Reynolds stress term. Note that a new proportionality constant $\nu_t > 0$, the (kinematic) turbulence eddy viscosity, has been introduced. Models of this type are known as eddy viscosity models (EVMs).
{\displaystyle -{\overline {v_{i}^{\prime }v_{j}^{\prime }}}=\nu _{t}\left({\frac {\partial {\overline {v_{i}}}}{\partial x_{j}}}+{\frac {\partial {\overline {v_{j}}}}{\partial x_{i}}}\right)-{\frac {2}{3}}k\delta _{ij}}
which can be written in shorthand as
{\displaystyle -{\overline {v_{i}^{\prime }v_{j}^{\prime }}}=2\nu _{t}S_{ij}-{\tfrac {2}{3}}k\delta _{ij}}
where $S_{ij}$ is the mean rate-of-strain tensor, $\nu_t$ is the (kinematic) turbulence eddy viscosity, $k = \tfrac{1}{2}\overline{v_i' v_i'}$ is the turbulence kinetic energy, and $\delta_{ij}$ is the Kronecker delta.
In this model, the additional turbulence stresses are given by augmenting the molecular viscosity with an eddy viscosity. This can be a simple constant eddy viscosity (which works well for some free shear flows such as axisymmetric jets, 2-D jets, and mixing layers).
The Boussinesq hypothesis – although not explicitly stated by Boussinesq at the time – effectively consists of the assumption that the Reynolds stress tensor is aligned with the strain tensor of the mean flow (i.e.: that the shear stresses due to turbulence act in the same direction as the shear stresses produced by the averaged flow). It has since been found to be significantly less accurate than most practitioners would assume. Still, turbulence models which employ the Boussinesq hypothesis have demonstrated significant practical value. In cases with well-defined shear layers, this is likely due to the dominance of streamwise shear components, so that considerable relative errors in flow-normal components are still negligible in absolute terms. Beyond this, most eddy viscosity turbulence models contain coefficients which are calibrated against measurements, and thus produce reasonably accurate overall outcomes for flow fields of similar type as used for calibration.
== Prandtl's mixing-length concept ==
Later, Ludwig Prandtl introduced the additional concept of the mixing length, along with the idea of a boundary layer. For wall-bounded turbulent flows, the eddy viscosity must vary with distance from the wall, hence the addition of the concept of a 'mixing length'. In the simplest wall-bounded flow model, the eddy viscosity is given by the equation:
{\displaystyle \nu _{t}=\left|{\frac {\partial u}{\partial y}}\right|l_{m}^{2}}
where $\frac{\partial u}{\partial y}$ is the partial derivative of the streamwise velocity $u$ with respect to the wall-normal direction $y$, and $l_m$ is the mixing length.
This simple model is the basis for the "law of the wall", which is a surprisingly accurate model for wall-bounded, attached (not separated) flow fields with small pressure gradients.
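A short numerical sketch of the mixing-length model near a wall (the log-law velocity profile, the flow parameters, and the choice $l_m = \kappa y$ are illustrative assumptions):

```python
import numpy as np

# Mixing-length eddy viscosity near a wall, assuming the log-law profile and
# l_m = kappa * y; the flow parameters below are illustrative assumptions.
kappa = 0.41                     # von Kármán constant
u_tau, nu = 0.05, 1.5e-5         # friction velocity and molecular viscosity
y = np.linspace(1e-5, 1e-2, 200) # wall-normal distances

dudy = u_tau / (kappa * y)       # gradient of the log-law profile du/dy
l_m = kappa * y                  # mixing length
nu_t = l_m**2 * np.abs(dudy)     # eddy viscosity: reduces to kappa * u_tau * y

print(nu_t[0], nu_t[-1], nu_t[-1] / nu)   # eddy viscosity dwarfs nu away from the wall
```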
More general turbulence models have evolved over time, with most modern turbulence models given by field equations similar to the Navier–Stokes equations.
== Smagorinsky model for the sub-grid scale eddy viscosity ==
Joseph Smagorinsky was the first who proposed a formula for the eddy viscosity in Large Eddy Simulation models, based on the local derivatives of the velocity field and the local grid size:
{\displaystyle \nu _{t}=\Delta x\Delta y{\sqrt {\left({\frac {\partial u}{\partial x}}\right)^{2}+\left({\frac {\partial v}{\partial y}}\right)^{2}+{\frac {1}{2}}\left({\frac {\partial u}{\partial y}}+{\frac {\partial v}{\partial x}}\right)^{2}}}}
In the context of Large Eddy Simulation, turbulence modeling refers to the need to parameterize the subgrid scale stress in terms of features of the filtered velocity field. This field is called subgrid-scale modeling.
== Spalart–Allmaras, k–ε and k–ω models ==
The Boussinesq hypothesis is employed in the Spalart–Allmaras (S–A), k–ε (k–epsilon), and k–ω (k–omega) models and offers a relatively low-cost computation for the turbulence viscosity $\nu_t$. The S–A model uses only one additional equation to model turbulence viscosity transport, while the k–ε and k–ω models use two.
== Common models ==
The following is a brief overview of commonly employed models in modern engineering applications.
In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.
== Eigenvalues and eigenvectors ==
Given an n × n square matrix A of real or complex numbers, an eigenvalue λ and its associated generalized eigenvector v are a pair obeying the relation
{\displaystyle \left(A-\lambda I\right)^{k}{\mathbf {v} }=0,}
where v is a nonzero n × 1 column vector, I is the n × n identity matrix, k is a positive integer, and both λ and v are allowed to be complex even when A is real. When k = 1, the vector is called simply an eigenvector, and the pair is called an eigenpair. In this case, Av = λv. Any eigenvalue λ of A has ordinary eigenvectors associated to it, for if k is the smallest integer such that (A − λI)^k v = 0 for a generalized eigenvector v, then (A − λI)^(k−1) v is an ordinary eigenvector. The value k can always be taken as less than or equal to n. In particular, (A − λI)^n v = 0 for all generalized eigenvectors v associated with λ.
For each eigenvalue λ of A, the kernel ker(A − λI) consists of all eigenvectors associated with λ (along with 0), called the eigenspace of λ, while the vector space ker((A − λI)^n) consists of all generalized eigenvectors, and is called the generalized eigenspace. The geometric multiplicity of λ is the dimension of its eigenspace. The algebraic multiplicity of λ is the dimension of its generalized eigenspace. The latter terminology is justified by the equation
{\displaystyle p_{A}\left(z\right)=\det \left(zI-A\right)=\prod _{i=1}^{k}(z-\lambda _{i})^{\alpha _{i}},}
where det is the determinant function, the λi are all the distinct eigenvalues of A and the αi are the corresponding algebraic multiplicities. The function pA(z) is the characteristic polynomial of A. So the algebraic multiplicity is the multiplicity of the eigenvalue as a zero of the characteristic polynomial. Since any eigenvector is also a generalized eigenvector, the geometric multiplicity is less than or equal to the algebraic multiplicity. The algebraic multiplicities sum up to n, the degree of the characteristic polynomial. The equation pA(z) = 0 is called the characteristic equation, as its roots are exactly the eigenvalues of A. By the Cayley–Hamilton theorem, A itself obeys the same equation: pA(A) = 0. As a consequence, the columns of the matrix
{\textstyle \prod _{i\neq j}(A-\lambda _{i}I)^{\alpha _{i}}}
must be either 0 or generalized eigenvectors of the eigenvalue λj, since they are annihilated by {\displaystyle (A-\lambda _{j}I)^{\alpha _{j}}}. In fact, the column space is the generalized eigenspace of λj.
Any collection of generalized eigenvectors of distinct eigenvalues is linearly independent, so a basis for all of Cn can be chosen consisting of generalized eigenvectors. More particularly, this basis {v_i} (i = 1, ..., n) can be chosen and organized so that
if vi and vj have the same eigenvalue, then so does vk for each k between i and j, and
if vi is not an ordinary eigenvector, and if λi is its eigenvalue, then (A − λiI)vi = vi−1 (in particular, v1 must be an ordinary eigenvector).
If these basis vectors are placed as the column vectors of a matrix V = [v1 v2 ⋯ vn], then V can be used to convert A to its Jordan normal form:
{\displaystyle V^{-1}AV={\begin{bmatrix}\lambda _{1}&\beta _{1}&0&\ldots &0\\0&\lambda _{2}&\beta _{2}&\ldots &0\\0&0&\lambda _{3}&\ldots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\ldots &\lambda _{n}\end{bmatrix}},}
where the λi are the eigenvalues, and βi = 1 if (A − λi+1I)vi+1 = vi while βi = 0 otherwise.
More generally, if W is any invertible matrix, and λ is an eigenvalue of A with generalized eigenvector v, then (W^(−1)AW − λI)^k W^(−k)v = 0. Thus λ is an eigenvalue of W^(−1)AW with generalized eigenvector W^(−k)v. That is, similar matrices have the same eigenvalues.
=== Normal, Hermitian, and real-symmetric matrices ===
The adjoint M* of a complex matrix M is the transpose of the conjugate of M: M* = (M̄)^T. A square matrix A is called normal if it commutes with its adjoint: A*A = AA*. It is called Hermitian if it is equal to its adjoint: A* = A. All Hermitian matrices are normal. If A has only real elements, then the adjoint is just the transpose, and A is Hermitian if and only if it is symmetric. When applied to column vectors, the adjoint can be used to define the canonical inner product on Cn: w ⋅ v = w* v. Normal, Hermitian, and real-symmetric matrices have several useful properties:
Every generalized eigenvector of a normal matrix is an ordinary eigenvector.
Any normal matrix is similar to a diagonal matrix, since its Jordan normal form is diagonal.
Eigenvectors of distinct eigenvalues of a normal matrix are orthogonal.
The null space and the image (or column space) of a normal matrix are orthogonal to each other.
For any normal matrix A, Cn has an orthonormal basis consisting of eigenvectors of A. The corresponding matrix of eigenvectors is unitary.
The eigenvalues of a Hermitian matrix are real, since (λ − λ̄)v = (A* − A)v = (A − A)v = 0 for a non-zero eigenvector v.
If A is real, there is an orthonormal basis for Rn consisting of eigenvectors of A if and only if A is symmetric.
It is possible for a real or complex matrix to have all real eigenvalues without being Hermitian. For example, a real triangular matrix has its eigenvalues along its diagonal, but in general is not symmetric.
== Condition number ==
Any problem of numeric calculation can be viewed as the evaluation of some function f for some input x. The condition number κ(f, x) of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be very ill-conditioned. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.
For the problem of solving the linear equation Av = b where A is invertible, the matrix condition number κ(A^(−1), b) is given by ||A||_op ||A^(−1)||_op, where || · ||_op is the operator norm subordinate to the normal Euclidean norm on Cn. Since this number is independent of b and is the same for A and A^(−1), it is usually just called the condition number κ(A) of the matrix A. This value κ(A) is also the absolute value of the ratio of the largest singular value of A to its smallest. If A is unitary, then ||A||_op = ||A^(−1)||_op = 1, so κ(A) = 1. For general matrices, the operator norm is often difficult to calculate. For this reason, other matrix norms are commonly used to estimate the condition number.
For the eigenvalue problem, Bauer and Fike proved that if λ is an eigenvalue for a diagonalizable n × n matrix A with eigenvector matrix V, then the absolute error in calculating λ is bounded by the product of κ(V) and the absolute error in A. As a result, the condition number for finding λ is κ(λ, A) = κ(V) = ||V||_op ||V^(−1)||_op. If A is normal, then V is unitary, and κ(λ, A) = 1. Thus the eigenvalue problem for all normal matrices is well-conditioned.
The condition number for the problem of finding the eigenspace of a normal matrix A corresponding to an eigenvalue λ has been shown to be inversely proportional to the minimum distance between λ and the other distinct eigenvalues of A. In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.
== Algorithms ==
The most reliable and most widely used algorithm for computing eigenvalues is John G. F. Francis' and Vera N. Kublanovskaya's QR algorithm, considered one of the top ten algorithms of the 20th century.
Any monic polynomial is the characteristic polynomial of its companion matrix. Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The Abel–Ruffini theorem shows that any such algorithm for dimensions greater than 4 must either be infinite, or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. For general matrices, algorithms are iterative, producing better approximate solutions with each iteration.
Some algorithms produce every eigenvalue, others will produce a few, or only one. However, even the latter algorithms can be used to find all eigenvalues. Once an eigenvalue λ of a matrix A has been identified, it can be used to either direct the algorithm towards a different solution next time, or to reduce the problem to one that no longer has λ as a solution.
Redirection is usually accomplished by shifting: replacing A with A − μI for some constant μ. The eigenvalue found for A − μI must have μ added back in to get an eigenvalue for A. For example, for power iteration, μ = λ. Power iteration finds the largest eigenvalue in absolute value, so even when λ is only an approximate eigenvalue, power iteration is unlikely to find it a second time. Conversely, inverse iteration based methods find the lowest eigenvalue, so μ is chosen well away from λ and hopefully closer to some other eigenvalue.
Reduction can be accomplished by restricting A to the column space of the matrix A − λI, which A carries to itself. Since A - λI is singular, the column space is of lesser dimension. The eigenvalue algorithm can then be applied to the restricted matrix. This process can be repeated until all eigenvalues are found.
If an eigenvalue algorithm does not produce eigenvectors, a common practice is to use an inverse iteration based algorithm with μ set to a close approximation to the eigenvalue. This will quickly converge to the eigenvector of the closest eigenvalue to μ. For small matrices, an alternative is to look at the column space of the product of A − λ'I for each of the other eigenvalues λ'.
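As a small illustration of the shift-and-invert idea just described, the sketch below (function name, starting vector, and tolerance are our choices) repeatedly solves (A − μI)w = v; the normalized iterates converge to an eigenvector of the eigenvalue closest to μ, and a Rayleigh quotient then estimates that eigenvalue.

```python
import numpy as np

def inverse_iteration(A, mu, tol=1e-12, max_iter=200):
    """Sketch of shifted inverse iteration with shift mu."""
    n = A.shape[0]
    v = np.random.default_rng(0).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        # One step: solve (A - mu*I) w = v, then renormalize.
        w = np.linalg.solve(A - mu * np.eye(n), v)
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - np.sign(w @ v) * v) < tol:
            v = w
            break
        v = w
    return v @ A @ v, v   # Rayleigh quotient, eigenvector estimate

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, v = inverse_iteration(A, mu=1.0)        # eigenvalue nearest 1 is 3 - sqrt(3)
print(lam, np.linalg.norm(A @ v - lam * v))  # residual ~ 0
```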
A formula for the norm of unit eigenvector components of normal matrices was discovered by Robert Thompson in 1966 and rediscovered independently by several others.
If A is an {\textstyle n\times n} normal matrix with eigenvalues λi(A) and corresponding unit eigenvectors vi whose component entries are vi,j, let Aj be the {\textstyle (n-1)\times (n-1)} matrix obtained by removing the j-th row and column from A, and let λk(Aj) be its k-th eigenvalue. Then
{\displaystyle |v_{i,j}|^{2}\prod _{k=1,k\neq i}^{n}(\lambda _{i}(A)-\lambda _{k}(A))=\prod _{k=1}^{n-1}(\lambda _{i}(A)-\lambda _{k}(A_{j}))}
If {\displaystyle p,p_{j}} are the characteristic polynomials of {\displaystyle A} and {\displaystyle A_{j}}, the formula can be re-written as
{\displaystyle |v_{i,j}|^{2}={\frac {p_{j}(\lambda _{i}(A))}{p'(\lambda _{i}(A))}}}
assuming the derivative {\displaystyle p'} is not zero at {\displaystyle \lambda _{i}(A)}.
== Hessenberg and tridiagonal matrices ==
Because the eigenvalues of a triangular matrix are its diagonal elements, for general matrices there is no finite method like Gaussian elimination to convert a matrix to triangular form while preserving eigenvalues. But it is possible to reach something close to triangular. An upper Hessenberg matrix is a square matrix for which all entries below the subdiagonal are zero. A lower Hessenberg matrix is one for which all entries above the superdiagonal are zero. Matrices that are both upper and lower Hessenberg are tridiagonal. Hessenberg and tridiagonal matrices are the starting points for many eigenvalue algorithms because the zero entries reduce the complexity of the problem. Several methods are commonly used to convert a general matrix into a Hessenberg matrix with the same eigenvalues. If the original matrix was symmetric or Hermitian, then the resulting matrix will be tridiagonal.
When only eigenvalues are needed, there is no need to calculate the similarity matrix, as the transformed matrix has the same eigenvalues. If eigenvectors are needed as well, the similarity matrix may be needed to transform the eigenvectors of the Hessenberg matrix back into eigenvectors of the original matrix.
For symmetric tridiagonal eigenvalue problems all eigenvalues (without eigenvectors) can be computed numerically in time O(n log(n)), using bisection on the characteristic polynomial.
== Iterative algorithms ==
Iterative algorithms solve the eigenvalue problem by producing sequences that converge to the eigenvalues. Some algorithms also produce sequences of vectors that converge to the eigenvectors. Most commonly, the eigenvalue sequences are expressed as sequences of similar matrices which converge to a triangular or diagonal form, allowing the eigenvalues to be read easily. The eigenvector sequences are expressed as the corresponding similarity matrices.
== Direct calculation ==
While there is no simple algorithm to directly calculate eigenvalues for general matrices, there are numerous special classes of matrices where eigenvalues can be directly calculated. These include:
=== Triangular matrices ===
Since the determinant of a triangular matrix is the product of its diagonal entries, if T is triangular, then {\textstyle \det(\lambda I-T)=\prod _{i}(\lambda -T_{ii})}. Thus the eigenvalues of T are its diagonal entries.
=== Factorable polynomial equations ===
If p is any polynomial and p(A) = 0, then the eigenvalues of A also satisfy the same equation. If p happens to have a known factorization, then the eigenvalues of A lie among its roots.
For example, a projection is a square matrix P satisfying P^2 = P. The roots of the corresponding scalar polynomial equation, λ^2 = λ, are 0 and 1. Thus any projection has 0 and 1 for its eigenvalues. The multiplicity of 0 as an eigenvalue is the nullity of P, while the multiplicity of 1 is the rank of P.
Another example is a matrix A that satisfies A^2 = α^2 I for some scalar α. The eigenvalues must be ±α. The projection operators
{\displaystyle P_{+}={\frac {1}{2}}\left(I+{\frac {A}{\alpha }}\right)}
{\displaystyle P_{-}={\frac {1}{2}}\left(I-{\frac {A}{\alpha }}\right)}
satisfy
{\displaystyle AP_{+}=\alpha P_{+}\quad AP_{-}=-\alpha P_{-}}
and
{\displaystyle P_{+}P_{+}=P_{+}\quad P_{-}P_{-}=P_{-}\quad P_{+}P_{-}=P_{-}P_{+}=0.}
The column spaces of P+ and P− are the eigenspaces of A corresponding to +α and −α, respectively.
=== 2×2 matrices ===
For dimensions 2 through 4, formulas involving radicals exist that can be used to find the eigenvalues. While a common practice for 2×2 and 3×3 matrices, for 4×4 matrices the increasing complexity of the root formulas makes this approach less attractive.
For the 2×2 matrix
{\displaystyle A={\begin{bmatrix}a&b\\c&d\end{bmatrix}},}
the characteristic polynomial is
{\displaystyle \det {\begin{bmatrix}\lambda -a&-b\\-c&\lambda -d\end{bmatrix}}=\lambda ^{2}-\left(a+d\right)\lambda +\left(ad-bc\right)=\lambda ^{2}-\lambda \,{\rm {tr}}(A)+\det(A).}
Thus the eigenvalues can be found by using the quadratic formula:
{\displaystyle \lambda ={\frac {{\rm {tr}}(A)\pm {\sqrt {{\rm {tr}}^{2}(A)-4\det(A)}}}{2}}.}
Defining {\textstyle {\rm {gap}}\left(A\right)={\sqrt {{\rm {tr}}^{2}(A)-4\det(A)}}} to be the distance between the two eigenvalues, it is straightforward to calculate
{\displaystyle {\frac {\partial \lambda }{\partial a}}={\frac {1}{2}}\left(1\pm {\frac {a-d}{{\rm {gap}}(A)}}\right),\qquad {\frac {\partial \lambda }{\partial b}}={\frac {\pm c}{{\rm {gap}}(A)}}}
with similar formulas for c and d. From this it follows that the calculation is well-conditioned if the eigenvalues are isolated.
Eigenvectors can be found by exploiting the Cayley–Hamilton theorem. If λ1, λ2 are the eigenvalues, then (A − λ1I)(A − λ2I) = (A − λ2I)(A − λ1I) = 0, so the columns of (A − λ2I) are annihilated by (A − λ1I) and vice versa. Assuming neither matrix is zero, the columns of each must include eigenvectors for the other eigenvalue. (If either matrix is zero, then A is a multiple of the identity and any non-zero vector is an eigenvector.)
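A direct transcription of this 2×2 procedure might look as follows (a sketch; the function name is ours). It computes the eigenvalues from the trace/determinant quadratic and reads eigenvectors off the columns of (A − λI), per the Cayley–Hamilton argument above:

```python
import numpy as np

def eig2x2(A):
    """Sketch: eigenpairs of a 2x2 matrix via trace, determinant and the
    Cayley-Hamilton theorem (assumes A is not a multiple of the identity;
    for brevity it takes the first column of each factor, while a robust
    version would pick a nonzero column)."""
    tr, det = np.trace(A), np.linalg.det(A)
    gap = np.sqrt(complex(tr**2 - 4.0 * det))  # complex sqrt covers all cases
    lam1, lam2 = (tr + gap) / 2.0, (tr - gap) / 2.0
    I = np.eye(2)
    # Columns of (A - lam2*I) are annihilated by (A - lam1*I), so they are
    # eigenvectors for lam1, and vice versa.
    v1 = (A - lam2 * I)[:, 0]
    v2 = (A - lam1 * I)[:, 0]
    return (lam1, v1), (lam2, v2)

A = np.array([[4.0, 3.0], [-2.0, -3.0]])    # the worked example below
for lam, v in eig2x2(A):
    print(lam, np.linalg.norm(A @ v - lam * v))   # residuals ~ 0
```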
For example, suppose
{\displaystyle A={\begin{bmatrix}4&3\\-2&-3\end{bmatrix}},}
then tr(A) = 4 − 3 = 1 and det(A) = 4(−3) − 3(−2) = −6, so the characteristic equation is
{\displaystyle 0=\lambda ^{2}-\lambda -6=(\lambda -3)(\lambda +2),}
and the eigenvalues are 3 and -2. Now,
{\displaystyle A-3I={\begin{bmatrix}1&3\\-2&-6\end{bmatrix}},\qquad A+2I={\begin{bmatrix}6&3\\-2&-1\end{bmatrix}}.}
In both matrices, the columns are multiples of each other, so either column can be used. Thus, (1, −2) can be taken as an eigenvector associated with the eigenvalue -2, and (3, −1) as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by A.
=== Symmetric 3×3 matrices ===
The characteristic equation of a symmetric 3×3 matrix A is:
{\displaystyle \det \left(\alpha I-A\right)=\alpha ^{3}-\alpha ^{2}{\rm {tr}}(A)-\alpha {\frac {1}{2}}\left({\rm {tr}}(A^{2})-{\rm {tr}}^{2}(A)\right)-\det(A)=0.}
This equation may be solved using the methods of Cardano or Lagrange, but an affine change to A will simplify the expression considerably, and lead directly to a trigonometric solution. If A = pB + qI, then A and B have the same eigenvectors, and β is an eigenvalue of B if and only if α = pβ + q is an eigenvalue of A. Letting
{\textstyle q={\rm {tr}}(A)/3} and {\textstyle p=\left({\rm {tr}}\left((A-qI)^{2}\right)/6\right)^{1/2}}, gives
{\displaystyle \det \left(\beta I-B\right)=\beta ^{3}-3\beta -\det(B)=0.}
The substitution β = 2cos θ and some simplification using the identity cos 3θ = 4cos^3 θ − 3cos θ reduces the equation to cos 3θ = det(B) / 2. Thus
{\displaystyle \beta =2{\cos }\left({\frac {1}{3}}{\arccos }\left(\det(B)/2\right)+{\frac {2k\pi }{3}}\right),\quad k=0,1,2.}
If det(B) is complex or is greater than 2 in absolute value, the arccosine should be taken along the same branch for all three values of k. This issue doesn't arise when A is real and symmetric, resulting in a simple algorithm:
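The algorithm listing itself is not reproduced here; the sketch below shows one common way to implement the steps just described (function and variable names are ours), including a clamp on the arccosine argument to guard against rounding:

```python
import numpy as np

def symmetric_3x3_eigenvalues(A):
    """Sketch: eigenvalues of a real symmetric 3x3 matrix A via the
    affine change A = p*B + q*I and the trigonometric solution above."""
    q = np.trace(A) / 3.0
    B0 = A - q * np.eye(3)
    p = np.sqrt(np.trace(B0 @ B0) / 6.0)
    if p == 0.0:                       # A is already q*I
        return np.array([q, q, q])
    B = B0 / p
    # For real symmetric A, det(B)/2 lies in [-1, 1]; clip guards rounding.
    r = np.clip(np.linalg.det(B) / 2.0, -1.0, 1.0)
    theta = np.arccos(r) / 3.0
    beta = 2.0 * np.cos(theta + 2.0 * np.pi * np.arange(3) / 3.0)
    return q + p * beta                # alpha = p*beta + q

A = np.array([[3.0, 2.0, 6.0],
              [2.0, 2.0, 5.0],
              [6.0, 5.0, -4.0]])       # an illustrative symmetric matrix
print(np.sort(symmetric_3x3_eigenvalues(A)))
print(np.sort(np.linalg.eigvalsh(A)))  # agrees with LAPACK
```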
Once again, the eigenvectors of A can be obtained by recourse to the Cayley–Hamilton theorem. If α1, α2, α3 are distinct eigenvalues of A, then (A − α1I)(A − α2I)(A − α3I) = 0. Thus the columns of the product of any two of these matrices will contain an eigenvector for the third eigenvalue. However, if α3 = α1, then (A − α1I)^2(A − α2I) = 0 and (A − α2I)(A − α1I)^2 = 0. Thus the generalized eigenspace of α1 is spanned by the columns of A − α2I while the ordinary eigenspace is spanned by the columns of (A − α1I)(A − α2I). The ordinary eigenspace of α2 is spanned by the columns of (A − α1I)^2.
For example, let
{\displaystyle A={\begin{bmatrix}3&2&6\\2&2&5\\-2&-1&-4\end{bmatrix}}.}
The characteristic equation is
{\displaystyle 0=\lambda ^{3}-\lambda ^{2}-\lambda +1=(\lambda -1)^{2}(\lambda +1),}
with eigenvalues 1 (of multiplicity 2) and -1. Calculating,
{\displaystyle A-I={\begin{bmatrix}2&2&6\\2&1&5\\-2&-1&-5\end{bmatrix}},\qquad A+I={\begin{bmatrix}4&2&6\\2&3&5\\-2&-1&-3\end{bmatrix}}}
and
{\displaystyle (A-I)^{2}={\begin{bmatrix}-4&0&-8\\-4&0&-8\\4&0&8\end{bmatrix}},\qquad (A-I)(A+I)={\begin{bmatrix}0&4&4\\0&2&2\\0&-2&-2\end{bmatrix}}}
Thus (−4, −4, 4) is an eigenvector for −1, and (4, 2, −2) is an eigenvector for 1. (2, 3, −1) and (6, 5, −3) are both generalized eigenvectors associated with 1, either one of which could be combined with (−4, −4, 4) and (4, 2, −2) to form a basis of generalized eigenvectors of A. Once found, the eigenvectors can be normalized if needed.
==== Eigenvectors of normal 3×3 matrices ====
If a 3×3 matrix {\displaystyle A} is normal, then the cross-product can be used to find eigenvectors. If {\displaystyle \lambda } is an eigenvalue of {\displaystyle A}, then the null space of {\displaystyle A-\lambda I} is perpendicular to its column space. The cross product of two independent columns of {\displaystyle A-\lambda I} will be in the null space. That is, it will be an eigenvector associated with {\displaystyle \lambda }. Since the column space is two dimensional in this case, the eigenspace must be one dimensional, so any other eigenvector will be parallel to it.
If {\displaystyle A-\lambda I} does not contain two independent columns but is not 0, the cross-product can still be used. In this case {\displaystyle \lambda } is an eigenvalue of multiplicity 2, so any vector perpendicular to the column space will be an eigenvector. Suppose {\displaystyle \mathbf {v} } is a non-zero column of {\displaystyle A-\lambda I}. Choose an arbitrary vector {\displaystyle \mathbf {u} } not parallel to {\displaystyle \mathbf {v} }. Then {\displaystyle \mathbf {v} \times \mathbf {u} } and {\displaystyle (\mathbf {v} \times \mathbf {u} )\times \mathbf {v} } will be perpendicular to {\displaystyle \mathbf {v} } and thus will be eigenvectors of {\displaystyle \lambda }.
This does not work when {\displaystyle A} is not normal, as the null space and column space do not need to be perpendicular for such matrices.
== See also ==
List of eigenvalue algorithms
In numerical analysis, Romberg's method is used to estimate the definite integral
{\displaystyle \int _{a}^{b}f(x)\,dx}
by applying Richardson extrapolation repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate a triangular array. Romberg's method is a Newton–Cotes formula – it evaluates the integrand at equally spaced points.
The integrand must have continuous derivatives, though fairly good results
may be obtained if only a few derivatives exist.
If it is possible to evaluate the integrand at unequally spaced points, then other methods such as Gaussian quadrature and Clenshaw–Curtis quadrature are generally more accurate.
The method is named after Werner Romberg, who published the method in 1955.
== Method ==
Using
{\textstyle h_{n}={\frac {(b-a)}{2^{n}}}}, the method can be inductively defined by
{\displaystyle {\begin{aligned}R(0,0)&=h_{1}(f(a)+f(b))\\R(n,0)&={\tfrac {1}{2}}R(n-1,0)+h_{n}\sum _{k=1}^{2^{n-1}}f(a+(2k-1)h_{n})\\R(n,m)&=R(n,m-1)+{\tfrac {1}{4^{m}-1}}(R(n,m-1)-R(n-1,m-1))\\&={\frac {1}{4^{m}-1}}(4^{m}R(n,m-1)-R(n-1,m-1))\end{aligned}}}
where
{\displaystyle n\geq m} and {\displaystyle m\geq 1}.
In big O notation, the error for R(n, m) is:
{\displaystyle O\left(h_{n}^{2m+2}\right).}
The zeroth extrapolation, R(n, 0), is equivalent to the trapezoidal rule with 2^n + 1 points; the first extrapolation, R(n, 1), is equivalent to Simpson's rule with 2^n + 1 points. The second extrapolation, R(n, 2), is equivalent to Boole's rule with 2^n + 1 points. The further extrapolations differ from Newton–Cotes formulas: further Romberg extrapolations expand on Boole's rule in very slight ways, modifying weights into ratios similar to those in Boole's rule. In contrast, further Newton–Cotes methods produce increasingly differing weights, eventually leading to large positive and negative weights. This is indicative of how large-degree interpolating polynomial Newton–Cotes methods fail to converge for many integrals, while Romberg integration is more stable.
By labelling our
{\textstyle O(h^{2})} approximations as {\textstyle A_{0}{\big (}{\frac {h}{2^{n}}}{\big )}} instead of {\textstyle R(n,0)}, we can perform Richardson extrapolation with the error formula defined below:
{\displaystyle \int _{a}^{b}f(x)\,dx=A_{0}{\bigg (}{\frac {h}{2^{n}}}{\bigg )}+a_{0}{\bigg (}{\frac {h}{2^{n}}}{\bigg )}^{2}+a_{1}{\bigg (}{\frac {h}{2^{n}}}{\bigg )}^{4}+a_{2}{\bigg (}{\frac {h}{2^{n}}}{\bigg )}^{6}+\cdots }
Once we have obtained our {\textstyle O(h^{2(m+1)})} approximations {\textstyle A_{m}{\big (}{\frac {h}{2^{n}}}{\big )}}, we can label them as {\textstyle R(n,m)}.
When function evaluations are expensive, it may be preferable to replace the polynomial interpolation of Richardson with the rational interpolation proposed by Bulirsch & Stoer (1967).
== A geometric example ==
To estimate the area under a curve the trapezoid rule is applied first to one-piece, then two, then four, and so on.
After trapezoid rule estimates are obtained, Richardson extrapolation is applied.
For the first iteration the two piece and one piece estimates are used in the formula (4 × (more accurate) − (less accurate))/3. The same formula is then used to compare the four piece and the two piece estimate, and likewise for the higher estimates.
For the second iteration the values of the first iteration are used in the formula (16 × (more accurate) − (less accurate))/15.
The third iteration uses the next power of 4: (64 × (more accurate) − (less accurate))/63, on the values derived by the second iteration.
The pattern is continued until there is one estimate.
== Example ==
As an example, the Gaussian function is integrated from 0 to 1, i.e. the error function erf(1) ≈ 0.842700792949715. The triangular array is calculated row by row and calculation is terminated if the two last entries in the last row differ less than 10−8.
0.77174333
0.82526296 0.84310283
0.83836778 0.84273605 0.84271160
0.84161922 0.84270304 0.84270083 0.84270066
0.84243051 0.84270093 0.84270079 0.84270079 0.84270079
The result in the lower right corner of the triangular array is accurate to the digits shown.
It is remarkable that this result is derived from the less accurate approximations
obtained by the trapezium rule in the first column of the triangular array.
== Implementation ==
Here is an example of a computer implementation of the Romberg method (in the Python programming language):
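The listing below is a sketch of one such implementation; the stopping rule and the example integrand (which reproduces the worked example above) are our choices, not part of the method.

```python
import numpy as np

def romberg(f, a, b, max_rows=10, tol=1e-8):
    """Sketch of Romberg integration following the recurrence above."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]          # R(0,0) = h_1 (f(a)+f(b))
    for n in range(1, max_rows):
        h = (b - a) / 2**n                          # h_n
        # R(n,0): halve the step, reusing all previous function values.
        total = sum(f(a + (2 * k - 1) * h) for k in range(1, 2**(n - 1) + 1))
        row = [0.5 * R[n - 1][0] + h * total]
        # R(n,m): Richardson extrapolation along the row.
        for m in range(1, n + 1):
            row.append(row[m - 1] + (row[m - 1] - R[n - 1][m - 1]) / (4**m - 1))
        R.append(row)
        if abs(row[-1] - row[-2]) < tol:            # compare last two entries
            break
    return R[-1][-1]

gaussian = lambda x: 2.0 / np.sqrt(np.pi) * np.exp(-x * x)
print(romberg(gaussian, 0.0, 1.0))   # ~0.842700792949715 = erf(1)
```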
== External links ==
ROMBINT – code for MATLAB (author: Martin Kacenak)
Free online integration tool using Romberg, Fox–Romberg, Gauss–Legendre and other numerical methods
SciPy implementation of Romberg's method
Romberg.jl — Julia implementation (supporting arbitrary factorizations, not just {\displaystyle 2^{n}+1} points)
Computational thermodynamics is the use of computers to simulate thermodynamic problems specific to materials science, particularly used in the construction of phase diagrams.
Several open and commercial programs exist to perform these operations. The central concept of the technique is the minimization of the Gibbs free energy of the system; the success of this method is due not only to properly measured thermodynamic properties, such as those in the list of thermodynamic properties, but also to the extrapolation of the properties of metastable allotropes of the chemical elements.
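As a toy illustration of Gibbs-energy minimization (not how production CALPHAD codes are implemented), the sketch below finds the two coexisting compositions of a binary regular solution by minimizing the total Gibbs energy of a two-phase mixture; the interaction parameter and temperature are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch (all parameters invented; no real thermodynamic database):
# find the miscibility gap of a binary A-B regular solution by minimizing
# the total Gibbs energy of a two-phase mixture at fixed overall composition.
R_GAS, T, OMEGA = 8.314, 600.0, 15_000.0   # J/(mol K), K, J/mol

def g_mix(x):
    """Molar Gibbs energy of mixing of one phase at mole fraction x of B."""
    return OMEGA * x * (1 - x) + R_GAS * T * (x * np.log(x) + (1 - x) * np.log(1 - x))

X_OVERALL = 0.5

def total_g(p):
    """Total Gibbs energy of two phases with compositions p = (x1, x2)."""
    x1, x2 = p
    f = (X_OVERALL - x2) / (x1 - x2)       # lever rule: fraction of phase 1
    return f * g_mix(x1) + (1 - f) * g_mix(x2)

res = minimize(total_g, x0=(0.1, 0.9),
               bounds=[(1e-4, 0.4999), (0.5001, 1 - 1e-4)])
print(res.x)   # coexisting compositions, roughly (0.07, 0.93) at this T
```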
== History ==
The computational modeling of metal-based phase diagrams dates back to the beginning of the 20th century, notably to Johannes van Laar and the modeling of regular solutions, and has evolved in more recent years into the CALPHAD (CALculation of PHAse Diagrams) approach, pioneered by the American metallurgist Larry Kaufman since the 1970s.
== Current state ==
Computational thermodynamics may be considered a part of materials informatics and is a cornerstone of the concepts behind the materials genome project. While crystallographic databases are used mainly as a reference source, thermodynamic databases represent one of the earliest examples of informatics, as these databases were integrated into thermochemical computations to map phase stability in binary and ternary alloys. Many concepts and software used in computational thermodynamics are credited to the SGTE Group, a consortium devoted to the development of thermodynamic databases; the open elements database is freely available based on the paper by Dinsdale. This so-called "unary" system proves to be a common basis for the development of binary and multiple systems and is used by both commercial and open software in this field.
However, as stated in recent CALPHAD papers and meetings, such a Dinsdale/SGTE database will likely need to be corrected over time, despite the utility of keeping a common base. In that case, most published assessments would have to be revised, much like rebuilding a house on a severely broken foundation; this concept has also been depicted as an "inverted pyramid". Merely extending the current approach (limited to temperatures above room temperature) is a complex task. PyCalphad, a Python library, was designed to facilitate simple computational thermodynamics calculations using open-source code. In complex systems, computational methods such as CALPHAD are employed to model thermodynamic properties for each phase and simulate multicomponent phase behavior. CALPHAD has also been applied at high pressures: computational thermodynamic calculations of phase relations in the Fe–C system at high pressures, for example, confirm experimental results. Other scientists have even considered viscosity and other physical parameters, which are beyond the domain of thermodynamics.
== Future developments ==
There is still a gap between ab initio methods and operative computational thermodynamics databases. In the past, a simplified approach introduced by the early works of Larry Kaufman, based on Miedema's model, was employed to check the correctness of even the simplest binary systems. However, connecting the two communities of solid-state physics and materials science remains a challenge, as it has been for many years. Promising results from ab initio quantum-mechanical molecular simulation packages like VASP can be readily integrated into thermodynamic databases with approaches like Zentool.
A relatively easy way to collect data for intermetallic compounds is now possible by using the Open Quantum Materials Database. A series of papers on the concept of Zentropy has recently been proposed by Prof. Z.K. Liu and his research group.
== See also ==
Phase diagram
Gibbs energy
Enthalpy of mixing
Miedema's Model
Materials Genome
UNIQUAC
UNIFAC
== External links ==
Gaye, Henri; Lupis, C.H.P (1970). "Computer calculations of multicomponent phase diagrams". Scripta Metallurgica. 4 (9): 685–91. doi:10.1016/0036-9748(70)90207-3.
Official CALPHAD website
Cool, Thomas; Bartol, Alexander; Kasenga, Matthew; Modi, Kunal; García, R. Edwin (2010). "Gibbs: Phase equilibria and symbolic computation of thermodynamic properties". Calphad. 34 (4): 393–404. doi:10.1016/j.calphad.2010.07.005.
Python-based libraries for the calculation of phase diagrams and thermodynamic properties
Computational Phase Diagram Database (CPDDB), binary databases, free access with a registration
Open Calphad
Thermocalc for Students
Pandat (free up to three components)
Matcalc (free up to three components, open databases available) Archived 24 May 2018 at the Wayback Machine
FactSage Education 7.2
Thermodynamic Modeling of Multicomponent Phase Equilibria
NIST
Thermodynamic Modeling using the Calphad Method at ETH Zurich
MELTS Software for thermodynamic modeling of phase equilibria in magmatic systems
SGTE Scientific Group Thermodata Europe
Larry Kaufman at Hmolpedia
Miodownik, Peter (2012). "Working with Larry Kaufman: Some thoughts on his 80th birthday". Calphad. 36: iii–iv. doi:10.1016/j.calphad.2011.08.008.
Kaufman, Larry; Ågren, John (2014). "CALPHAD, first and second generation – Birth of the materials genome". Scripta Materialia. 70: 3–6. doi:10.1016/j.scriptamat.2012.12.003.
Kirklin, Scott; Saal, James E.; Meredig, Bryce; Thompson, Alex; Doak, Jeff W.; Aykol, Muratahan; Rühl, Stephan; Wolverton, Chris (11 December 2015). "The Open Quantum Materials Database (OQMD): assessing the accuracy of DFT formation energies". npj Computational Materials. 1 (1): 15010. Bibcode:2015npjCM...115010K. doi:10.1038/npjcompumats.2015.10.
Open Quantum Materials Database (OQMD)
== University Courses on Computational Thermodynamics ==
Computational Thermodynamics for Materials Design KTH, Sweden
MatSE580: Computational Thermodynamics of Materials, Pennsylvania State University, USA
Computational Thermodynamics University of Brno, Czech Republic
In solid-state physics, the k·p perturbation theory is an approximated semi-empirical approach for calculating the band structure (particularly effective mass) and optical properties of crystalline solids. It is pronounced "k dot p", and is also called the k·p method. This theory has been applied specifically in the framework of the Luttinger–Kohn model (after Joaquin Mazdak Luttinger and Walter Kohn), and of the Kane model (after Evan O. Kane).
== Background and derivation ==
=== Bloch's theorem and wavevectors ===
According to quantum mechanics (in the single-electron approximation), the quasi-free electrons in any solid are characterized by wavefunctions which are eigenstates of the following stationary Schrödinger equation:
{\displaystyle \left({\frac {p^{2}}{2m}}+V\right)\psi =E\psi }
where p is the quantum-mechanical momentum operator, V is the potential, and m is the vacuum mass of the electron. (This equation neglects the spin–orbit effect; see below.)
In a crystalline solid, V is a periodic function, with the same periodicity as the crystal lattice. Bloch's theorem proves that the solutions to this differential equation can be written as follows:
{\displaystyle \psi _{n,\mathbf {k} }(\mathbf {x} )=e^{i\mathbf {k} \cdot \mathbf {x} }u_{n,\mathbf {k} }(\mathbf {x} )}
where k is a vector (called the wavevector), n is a discrete index (called the band index), and un,k is a function with the same periodicity as the crystal lattice.
For any given n, the associated states are called a band. In each band, there will be a relation between the wavevector k and the energy of the state En,k, called the band dispersion. Calculating this dispersion is one of the primary applications of k·p perturbation theory.
=== Perturbation theory ===
The periodic function un,k satisfies the following Schrödinger-type equation (simply, a direct expansion of the Schrödinger equation with a Bloch-type wave function):
{\displaystyle H_{\mathbf {k} }u_{n,\mathbf {k} }=E_{n,\mathbf {k} }u_{n,\mathbf {k} }}
where the Hamiltonian is
{\displaystyle H_{\mathbf {k} }={\frac {p^{2}}{2m}}+{\frac {\hbar \mathbf {k} \cdot \mathbf {p} }{m}}+{\frac {\hbar ^{2}k^{2}}{2m}}+V}
Note that k is a vector consisting of three real numbers with dimensions of inverse length, while p is a vector of operators; to be explicit,
{\displaystyle \mathbf {k} \cdot \mathbf {p} =k_{x}\left(-i\hbar {\frac {\partial }{\partial x}}\right)+k_{y}\left(-i\hbar {\frac {\partial }{\partial y}}\right)+k_{z}\left(-i\hbar {\frac {\partial }{\partial z}}\right)}
In any case, we write this Hamiltonian as the sum of two terms:
{\displaystyle H_{\mathbf {k} }=H_{0}+H_{\mathbf {k} }',\;\;H_{0}={\frac {p^{2}}{2m}}+V,\;\;H_{\mathbf {k} }'={\frac {\hbar ^{2}k^{2}}{2m}}+{\frac {\hbar \mathbf {k} \cdot \mathbf {p} }{m}}}
This expression is the basis for perturbation theory. The "unperturbed Hamiltonian" is H0, which in fact equals the exact Hamiltonian at k = 0 (i.e., at the gamma point). The "perturbation" is the term {\displaystyle H_{\mathbf {k} }'}. The analysis that results is called k·p perturbation theory, due to the term proportional to k·p. The result of this analysis is an expression for En,k and un,k in terms of the energies and wavefunctions at k = 0.
Note that the "perturbation" term {\displaystyle H_{\mathbf {k} }'} gets progressively smaller as k approaches zero. Therefore, k·p perturbation theory is most accurate for small values of k. However, if enough terms are included in the perturbative expansion, then the theory can in fact be reasonably accurate for any value of k in the entire Brillouin zone.
=== Expression for a nondegenerate band ===
For a nondegenerate band (i.e., a band which has a different energy at k = 0 from any other band), with an extremum at k = 0, and with no spin–orbit coupling, the result of k·p perturbation theory is (to lowest nontrivial order):
{\displaystyle u_{n,\mathbf {k} }=u_{n,0}+{\frac {\hbar }{m}}\sum _{n'\neq n}{\frac {\langle u_{n',0}|\mathbf {k} \cdot \mathbf {p} |u_{n,0}\rangle }{E_{n,0}-E_{n',0}}}u_{n',0}}
{\displaystyle E_{n,\mathbf {k} }=E_{n,0}+{\frac {\hbar ^{2}k^{2}}{2m}}+{\frac {\hbar ^{2}}{m^{2}}}\sum _{n'\neq n}{\frac {|\langle u_{n,0}|\mathbf {k} \cdot \mathbf {p} |u_{n',0}\rangle |^{2}}{E_{n,0}-E_{n',0}}}}
Since k is a vector of real numbers (rather than a vector of more complicated linear operators), the matrix element in these expressions can be rewritten as:
{\displaystyle \langle u_{n,0}|\mathbf {k} \cdot \mathbf {p} |u_{n',0}\rangle =\mathbf {k} \cdot \langle u_{n,0}|\mathbf {p} |u_{n',0}\rangle }
Therefore, one can calculate the energy at any k using only a few unknown parameters, namely En,0 and {\displaystyle \langle u_{n,0}|\mathbf {p} |u_{n',0}\rangle }. The latter are called "optical matrix elements", closely related to transition dipole moments. These parameters are typically inferred from experimental data.
In practice, the sum over n often includes only the nearest one or two bands, since these tend to be the most important (due to the denominator). However, for improved accuracy, especially at larger k, more bands must be included, as well as more terms in the perturbative expansion than the ones written above.
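As a minimal numerical sketch of the nondegenerate formula above (all band energies and matrix elements below are invented for illustration, not taken from any real material), the second-order sum can be evaluated directly:

```python
import numpy as np

hbar = 1.0546e-34   # J s
m = 9.109e-31       # free-electron mass, kg
eV = 1.602e-19      # J

E_n0 = 1.5 * eV                        # band edge of interest at k = 0
E_others = np.array([0.0, 3.2]) * eV   # k = 0 energies of two nearby bands
p_els = np.array([1.2e-24, 0.6e-24])   # assumed |<u_n,0|p|u_n',0>|, kg m/s

def E_n(k):
    """Second-order k.p energy along the direction of the matrix elements."""
    correction = sum(
        (hbar * k * p) ** 2 / (m**2 * (E_n0 - E0))
        for p, E0 in zip(p_els, E_others)
    )
    return E_n0 + (hbar * k) ** 2 / (2.0 * m) + correction

for k in [0.0, 5e7, 1e8]:              # wavevectors in 1/m
    print(k, E_n(k) / eV)              # dispersion in eV
```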
==== Effective mass ====
Using the expression above for the energy dispersion relation, a simplified expression for the effective mass in the conduction band of a semiconductor can be found. To approximate the dispersion relation in the case of the conduction band, take the energy En0 as the minimum conduction band energy Ec0 and include in the summation only terms with energies near the valence band maximum, where the energy difference in the denominator is smallest. (These terms are the largest contributions to the summation.) This denominator is then approximated as the band gap Eg, leading to an energy expression:
{\displaystyle E_{c}({\boldsymbol {k}})\approx E_{c0}+{\frac {(\hbar k)^{2}}{2m}}+{\frac {\hbar ^{2}}{{E_{g}}m^{2}}}\sum _{n}{|\langle u_{c,0}|\mathbf {k} \cdot \mathbf {p} |u_{n,0}\rangle |^{2}}}
The effective mass in direction ℓ is then:
{\displaystyle {\frac {1}{{m}_{\ell }}}={{1} \over {\hbar ^{2}}}\sum _{m}{{\partial ^{2}E_{c}({\boldsymbol {k}})} \over {\partial k_{\ell }\partial k_{m}}}\approx {\frac {1}{m}}+{\frac {2}{E_{g}m^{2}}}\sum _{m,\ n}{\langle u_{c,0}|p_{\ell }|u_{n,0}\rangle }{\langle u_{n,0}|p_{m}|u_{c,0}\rangle }}
Ignoring the details of the matrix elements, the key consequences are that the effective mass varies with the smallest bandgap and goes to zero as the gap goes to zero. A useful approximation for the matrix elements in direct gap semiconductors is:
{\displaystyle {\frac {2}{E_{g}m^{2}}}\sum _{m,\ n}{|\langle u_{c,0}|p_{\ell }|u_{n,0}\rangle |}{|\langle u_{c,0}|p_{m}|u_{n,0}\rangle |}\approx 20\,\mathrm {eV} \,{\frac {1}{mE_{g}}}\ ,}
which applies within about 15% or better to most group-IV, III-V and II-VI semiconductors.
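Combining this approximation with the effective-mass expression above gives the rough rule 1/m* ≈ (1/m)(1 + 20 eV/E_g). A one-line check (the gap values are standard textbook numbers; the agreement is only approximate, per the 15% caveat above):

```python
# Rough effective-mass estimate m*/m ~ 1/(1 + 20 eV / E_g) for direct gaps.
for name, E_g in [("GaAs", 1.42), ("InAs", 0.354)]:
    m_eff = 1.0 / (1.0 + 20.0 / E_g)
    print(f"{name}: m*/m ~ {m_eff:.3f}")   # GaAs: ~0.066 (measured ~0.067)
```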
In contrast to this simple approximation, in the case of valence band energy the spin–orbit interaction must be introduced (see below) and many more bands must be individually considered. The calculation is provided in Yu and Cardona. In the valence band the mobile carriers are holes. One finds there are two types of hole, named heavy and light, with anisotropic masses.
=== k·p model with spin–orbit interaction ===
Including the spin–orbit interaction, the Schrödinger equation for u is:
{\displaystyle H_{\mathbf {k} }u_{n,\mathbf {k} }=E_{n,\mathbf {k} }u_{n,\mathbf {k} }}
where
{\displaystyle H_{\mathbf {k} }={\frac {p^{2}}{2m}}+{\frac {\hbar }{m}}\mathbf {k} \cdot \mathbf {p} +{\frac {\hbar ^{2}k^{2}}{2m}}+V+{\frac {\hbar }{4m^{2}c^{2}}}(\nabla V\times (\mathbf {p} +\hbar \mathbf {k} ))\cdot {\vec {\sigma }}}
where {\displaystyle {\vec {\sigma }}=(\sigma _{x},\sigma _{y},\sigma _{z})} is a vector consisting of the three Pauli matrices. This Hamiltonian can be subjected to the same sort of perturbation-theory analysis as above.
=== Calculation in degenerate case ===
For degenerate or nearly degenerate bands, in particular the valence bands in certain materials such as gallium arsenide, the equations can be analyzed by the methods of degenerate perturbation theory. Models of this type include the "Luttinger–Kohn model" (a.k.a. "Kohn–Luttinger model"), and the "Kane model".
Generally, an effective Hamiltonian {\displaystyle H^{\rm {eff}}} is introduced, and to the first order, its matrix elements can be expressed as
{\displaystyle H_{\mathbf {k} ,mn}^{\rm {eff}}=\langle u_{m,0}|H_{0}|u_{n,0}\rangle +\mathbf {k} \cdot \langle u_{m,0}|\nabla _{\mathbf {k} }H_{\mathbf {k} }'|u_{n,0}\rangle }
After solving it, the wave functions and energy bands are obtained.
Compartmental models are a mathematical framework used to simulate how populations move between different states or "compartments." While widely applied in various fields, they have become particularly fundamental to the mathematical modelling of infectious diseases. In these models, the population is divided into compartments labeled with shorthand notation – most commonly S, I, and R, representing Susceptible, Infectious, and Recovered individuals. The sequence of letters typically indicates the flow patterns between compartments; for example, an SEIS model represents progression from susceptible to exposed to infectious and then back to susceptible again.
These models originated in the early 20th century through pioneering epidemiological work by several mathematicians. Key developments include Hamer's work in 1906, Ross's contributions in 1916, collaborative work by Ross and Hudson in 1917, the seminal Kermack and McKendrick model in 1927, and Kendall's work in 1956. The historically significant Reed–Frost model, though often overlooked, also substantially influenced modern epidemiological modeling approaches.
Most implementations of compartmental models use ordinary differential equations (ODEs), providing deterministic results that are mathematically tractable. However, they can also be formulated within stochastic frameworks that incorporate randomness, offering more realistic representations of population dynamics at the cost of greater analytical complexity.
Epidemiologists and public health officials use these models for several critical purposes: analyzing disease transmission dynamics, projecting the total number of infections and recoveries over time, estimating key epidemiological parameters such as the basic reproduction number (R₀) or effective reproduction number (Rt), evaluating potential impacts of different public health interventions before implementation, and informing evidence-based policy decisions during disease outbreaks. Beyond infectious disease modeling, the approach has been adapted for applications in population ecology, pharmacokinetics, chemical kinetics, and other fields requiring the study of transitions between defined states.
== SIR model ==
The SIR model is one of the simplest compartmental models, and many models are derivatives of this basic form. The model consists of three compartments:
S: The number of susceptible individuals. When a susceptible and an infectious individual come into "infectious contact", the susceptible individual contracts the disease and transitions to the infectious compartment.
I: The number of infectious individuals. These are individuals who have been infected and are capable of infecting susceptible individuals.
R: The number of removed (and immune) or deceased individuals. These are individuals who have been infected and have either recovered from the disease and entered the removed compartment, or died. It is assumed that the number of deaths is negligible with respect to the total population. This compartment may also be called "recovered" or "resistant".
This model is reasonably predictive for infectious diseases that are transmitted from human to human, and where recovery confers lasting resistance, such as measles, mumps, and rubella.
These variables (S, I, and R) represent the number of people in each compartment at a particular time. To represent that the number of susceptible, infectious, and removed individuals may vary over time (even if the total population size remains constant), we make the precise numbers a function of t (time): S(t), I(t), and R(t). For a specific disease in a specific population, these functions may be worked out in order to predict possible outbreaks and bring them under control. Note that in the SIR model,
{\displaystyle R(0)} and {\displaystyle R_{0}} are different quantities – the former describes the number of recovered at t = 0 whereas the latter describes the ratio between the frequency of contacts to the frequency of recovery.
As implied by the variable function of t, the model is dynamic in that the numbers in each compartment may fluctuate over time. The importance of this dynamic aspect is most obvious in an endemic disease with a short infectious period, such as measles in the UK prior to the introduction of a vaccine in 1968. Such diseases tend to occur in cycles of outbreaks due to the variation in number of susceptibles (S(t)) over time. During an epidemic, the number of susceptible individuals falls rapidly as more of them are infected and thus enter the infectious and removed compartments. The disease cannot break out again until the number of susceptibles has built back up, e.g. as a result of offspring being born into the susceptible compartment.
Each member of the population typically progresses from susceptible to infectious to recovered. This can be shown as a flow diagram in which the boxes represent the different compartments and the arrows the transition between compartments (see diagram).
=== Transition rates ===
For the full specification of the model, the arrows should be labeled with the transition rates between compartments. Between S and I, the transition rate is assumed to be {\displaystyle d(S/N)/dt=-\beta SI/N^{2}}, where {\displaystyle N} is the total population, {\displaystyle \beta } is the average number of contacts per person per time, multiplied by the probability of disease transmission in a contact between a susceptible and an infectious subject, and {\displaystyle SI/N^{2}} is the fraction of all possible contacts that involves an infectious and susceptible individual. (This is mathematically similar to the law of mass action in chemistry in which random collisions between molecules result in a chemical reaction and the fractional rate is proportional to the concentration of the two reactants.)
Between I and R, the transition rate is assumed to be proportional to the number of infectious individuals, which is {\displaystyle \gamma I}. If an individual is infectious for an average time period {\displaystyle D}, then {\displaystyle \gamma =1/D}. This is also equivalent to the assumption that the length of time spent by an individual in the infectious state is a random variable with an exponential distribution. The "classical" SIR model may be modified by using more complex and realistic distributions for the I-R transition rate (e.g. the Erlang distribution).
For the special case in which there is no removal from the infectious compartment ({\displaystyle \gamma =0}), the SIR model reduces to a very simple SI model, which has a logistic solution, in which every individual eventually becomes infected.
=== The SIR model without birth and death ===
The dynamics of an epidemic, for example, the flu, are often much faster than the dynamics of birth and death, therefore, birth and death are often omitted in simple compartmental models. The SIR system without so-called vital dynamics (birth and death, sometimes called demography) described above can be expressed by the following system of ordinary differential equations:
{\displaystyle \left\{{\begin{aligned}&{\frac {dS}{dt}}=-{\frac {\beta }{N}}IS,\\[6pt]&{\frac {dI}{dt}}={\frac {\beta }{N}}IS-\gamma I,\\[6pt]&{\frac {dR}{dt}}=\gamma I,\end{aligned}}\right.}
where {\displaystyle S} is the stock of susceptible population in unit number of people, {\displaystyle I} is the stock of infected in unit number of people, {\displaystyle R} is the stock of removed population (either by death or recovery) in unit number of people, and {\displaystyle N} is the sum of these three in unit number of people. {\displaystyle \beta } is the infection rate constant in the unit number of people infected per day per infected person, and {\displaystyle \gamma } is the recovery rate constant in the unit fraction of a person recovered per day per infected person, when time is in unit day.
This model was first proposed by William Ogilvy Kermack and Anderson Gray McKendrick as a special case of what we now call Kermack–McKendrick theory, and followed work McKendrick had done with Ronald Ross.
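The system is easy to integrate numerically; the sketch below uses illustrative parameter values (β = 0.4/day, γ = 0.1/day, so R0 = 4) that are not tied to any particular disease.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, beta, gamma = 1000.0, 0.4, 0.1      # people, 1/day, 1/day  (R0 = 4)

def sir_rhs(t, y):
    """Right-hand side of the SIR system above."""
    S, I, R = y
    return [-beta * I * S / N,
            beta * I * S / N - gamma * I,
            gamma * I]

sol = solve_ivp(sir_rhs, (0.0, 160.0), [999.0, 1.0, 0.0],
                max_step=0.5)           # one initial infectious individual
S, I, R = sol.y
print(f"peak infectious: {I.max():.0f} people")
print(f"final susceptible fraction: {S[-1] / N:.4f}")   # ~0.02 for R0 = 4
```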
This system is nonlinear; however, it is possible to derive its analytic solution in implicit form. Firstly note that from:
{\displaystyle {\frac {dS}{dt}}+{\frac {dI}{dt}}+{\frac {dR}{dt}}=0,}
it follows that:
{\displaystyle S(t)+I(t)+R(t)={\text{constant}}=N,}
expressing in mathematical terms the constancy of population {\displaystyle N}. Note that the above relationship implies that one need only study the equation for two of the three variables.
Secondly, we note that the dynamics of the infectious class depends on the following ratio:
{\displaystyle R_{0}={\frac {\beta }{\gamma }},}
the so-called basic reproduction number (also called basic reproduction ratio). This ratio is derived as the expected number of new infections (these new infections are sometimes called secondary infections) from a single infection in a population where all subjects are susceptible. This idea can probably be more readily seen if we say that the typical time between contacts is {\displaystyle T_{c}=\beta ^{-1}}, and the typical time until removal is {\displaystyle T_{r}=\gamma ^{-1}}. From here it follows that, on average, the number of contacts by an infectious individual with others before the infectious has been removed is:
{\displaystyle T_{r}/T_{c}.}
By dividing the first differential equation by the third, separating the variables and integrating we get
{\displaystyle S(t)=S(0)e^{-R_{0}(R(t)-R(0))/N},}
where {\displaystyle S(0)} and {\displaystyle R(0)} are the initial numbers of, respectively, susceptible and removed subjects.
Writing {\displaystyle s_{0}=S(0)/N} for the initial proportion of susceptible individuals, and {\displaystyle s_{\infty }=S(\infty )/N} and {\displaystyle r_{\infty }=R(\infty )/N} for the proportion of susceptible and removed individuals respectively in the limit {\displaystyle t\to \infty ,} one has
{\displaystyle s_{\infty }=1-r_{\infty }=s_{0}e^{-R_{0}(r_{\infty }-r_{0})}}
(note that the infectious compartment empties in this limit).
This transcendental equation has a solution in terms of the Lambert W function, namely
{\displaystyle s_{\infty }=1-r_{\infty }=-R_{0}^{-1}\,W(-s_{0}R_{0}e^{-R_{0}(1-r_{0})}).}
This shows that at the end of an epidemic that conforms to the simple assumptions of the SIR model, unless {\displaystyle s_{0}=0}, not all individuals of the population have been removed, so some must remain susceptible. A driving force leading to the end of an epidemic is a decline in the number of infectious individuals. The epidemic does not typically end because of a complete lack of susceptible individuals.
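For example, the final-size formula can be evaluated directly with the principal branch of the Lambert W function (parameter values are illustrative and match the integration sketch above):

```python
import numpy as np
from scipy.special import lambertw

R0, s0, r0 = 4.0, 0.999, 0.0
s_inf = float(np.real(-lambertw(-s0 * R0 * np.exp(-R0 * (1 - r0))) / R0))
print(s_inf)   # ~0.020: about 2% of the population is never infected
```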
The roles of both the basic reproduction number and the initial susceptibility are extremely important. In fact, upon rewriting the equation for infectious individuals as follows:
{\displaystyle {\frac {dI}{dt}}=\left(R_{0}{\frac {S}{N}}-1\right)\gamma I,}
it yields that if:
{\displaystyle R_{0}\cdot S(0)>N,}
then:
{\displaystyle {\frac {dI}{dt}}(0)>0,}
i.e., there will be a proper epidemic outbreak with an increase of the number of the infectious (which can reach a considerable fraction of the population). On the contrary, if
{\displaystyle R_{0}\cdot S(0)<N,}
then
{\displaystyle {\frac {dI}{dt}}(0)<0,}
i.e., independently from the initial size of the susceptible population the disease can never cause a proper epidemic outbreak. As a consequence, it is clear that both the basic reproduction number and the initial susceptibility are extremely important.
==== The force of infection ====
Note that in the above model the function:
{\displaystyle F=\beta I,}
models the transition rate from the compartment of susceptible individuals to the compartment of infectious individuals, so that it is called the force of infection. However, for large classes of communicable diseases it is more realistic to consider a force of infection that does not depend on the absolute number of infectious subjects, but on their fraction (with respect to the total constant population {\displaystyle N}):
{\displaystyle F=\beta {\frac {I}{N}}.}
Capasso and, afterwards, other authors have proposed nonlinear forces of infection to model more realistically the contagion process.
==== Exact analytical solutions to the SIR model ====
In 2014, Harko and coauthors derived an exact so-called analytical solution (involving an integral that can only be calculated numerically) to the SIR model. In the setup without vital dynamics, for {\displaystyle {\mathcal {S}}(u)=S(t)}, etc., it corresponds to the following time parametrization
{\displaystyle {\mathcal {S}}(u)=S(0)u}
{\displaystyle {\mathcal {I}}(u)=N-{\mathcal {R}}(u)-{\mathcal {S}}(u)}
{\displaystyle {\mathcal {R}}(u)=R(0)-\rho \ln(u)}
for
{\displaystyle t={\frac {N}{\beta }}\int _{u}^{1}{\frac {du^{*}}{u^{*}{\mathcal {I}}(u^{*})}},\quad \rho ={\frac {\gamma N}{\beta }},}
with initial conditions
{\displaystyle ({\mathcal {S}}(1),{\mathcal {I}}(1),{\mathcal {R}}(1))=(S(0),N-R(0)-S(0),R(0)),\quad u_{T}<u<1,}
where {\displaystyle u_{T}} satisfies {\displaystyle {\mathcal {I}}(u_{T})=0}. By the transcendental equation for {\displaystyle R_{\infty }} above, it follows that {\displaystyle u_{T}=e^{-(R_{\infty }-R(0))/\rho }\;(=S_{\infty }/S(0)}, if {\displaystyle S(0)\neq 0)}, and {\displaystyle I_{\infty }=0}.
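As a rough numerical check of this parametrization, one can tabulate {\displaystyle t(u)} with a standard quadrature routine. The following is a minimal sketch (not from the cited work); all parameter values are assumed for illustration.

```python
# Minimal sketch of the time parametrization above; values are assumed.
import numpy as np
from scipy.integrate import quad

N, beta, gamma = 1000.0, 0.5, 0.2
S0, R0init = 999.0, 0.0
rho = gamma * N / beta

def I_of(u):
    # I(u) = N - R(u) - S(u), with R(u) = R(0) - rho*ln(u), S(u) = S(0)*u
    return N - (R0init - rho * np.log(u)) - S0 * u

def t_of(u):
    # t = (N/beta) * integral_u^1 du*/(u* I(u*))
    val, _ = quad(lambda x: 1.0 / (x * I_of(x)), u, 1.0)
    return N / beta * val

for u in (0.9, 0.7, 0.5):
    print(f"u={u:.1f}  t={t_of(u):6.2f}  S={S0*u:6.1f}  I={I_of(u):6.1f}")
```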
An equivalent so-called analytical solution (involving an integral that can only be calculated numerically) found by Miller yields
{\displaystyle {\begin{aligned}S(t)&=S(0)e^{-\xi (t)}\\[8pt]I(t)&=N-S(t)-R(t)\\[8pt]R(t)&=R(0)+\rho \xi (t)\\[8pt]\xi (t)&={\frac {\beta }{N}}\int _{0}^{t}I(t^{*})\,dt^{*}\end{aligned}}}
Here {\displaystyle \xi (t)} can be interpreted as the expected number of transmissions an individual has received by time {\displaystyle t}. The two solutions are related by {\displaystyle e^{-\xi (t)}=u}.
Effectively the same result can be found in the original work by Kermack and McKendrick.
These solutions may be easily understood by noting that all of the terms on the right-hand sides of the original differential equations are proportional to {\displaystyle I}. The equations may thus be divided through by {\displaystyle I}, and the time rescaled so that the differential operator on the left-hand side becomes simply {\displaystyle d/d\tau }, where {\displaystyle d\tau =I\,dt}, i.e. {\displaystyle \tau =\int I\,dt}. The differential equations are now all linear, and the third equation, of the form {\displaystyle dR/d\tau =} const., shows that {\displaystyle \tau } and {\displaystyle R} (and {\displaystyle \xi } above) are simply linearly related.
A highly accurate analytic approximant of the SIR model as well as exact analytic expressions for the final values
{\displaystyle S_{\infty }}, {\displaystyle I_{\infty }}, and {\displaystyle R_{\infty }} were provided by Kröger and Schlickeiser, so that there is no need to perform a numerical integration to solve the SIR model, to obtain its parameters from existing data, or to predict the future dynamics of an epidemic modeled by the SIR model. The approximant involves the Lambert W function, which is available in standard software such as Microsoft Excel, MATLAB, and Mathematica.
While Kendall considered the so-called all-time SIR model where the initial conditions
{\displaystyle S(0)}, {\displaystyle I(0)}, and {\displaystyle R(0)} are coupled through the above relations, Kermack and McKendrick proposed to study the more general semi-time case, for which {\displaystyle S(0)} and {\displaystyle I(0)} are both arbitrary. This latter version, denoted as the semi-time SIR model, makes predictions only for future times {\displaystyle t>0}. An analytic approximant and exact expressions for the final values are available for the semi-time SIR model as well.
==== Numerical solutions to the SIR model with approximations ====
Numerical solutions to the SIR model can be found in the literature. An example is using the model to analyze COVID-19 spreading data. Three reproduction numbers can be pulled out from the data analyzed with numerical approximation,
the basic reproduction number:
{\displaystyle R_{0}={\frac {\beta _{0}}{\gamma _{0}}}}
the real-time reproduction number:
{\displaystyle R_{t}={\frac {\beta _{t}}{\gamma _{t}}}}
and the real-time effective reproduction number:
{\displaystyle R_{e}={\frac {\beta _{t}S}{\gamma _{t}N}}}
{\displaystyle R_{0}} represents the reproduction rate at the beginning of the spreading, when the whole population is assumed susceptible. For example, if {\displaystyle \beta _{0}=0.4\,\mathrm{day}^{-1}} and {\displaystyle \gamma _{0}=0.2\,\mathrm{day}^{-1}}, one infectious person on average infects 0.4 susceptible people per day and recovers in 1/0.2 = 5 days. Thus, when this person has recovered, two people infected directly by this person are on average still infectious, so {\displaystyle R_{0}=2}, i.e. the number of infectious people doubles in one cycle of 5 days. Data simulated by the model with {\displaystyle R_{0}=2}, or fitted real data, will show the number of infectious people doubling faster than every 5 days, because the newly infected people themselves infect others. From the SIR model, we can tell that
{\displaystyle \beta } is determined by the nature of the disease, but is also a function of the frequency of interaction between the infectious person {\displaystyle I} and the susceptible people {\displaystyle S}, and of the intensity and duration of those interactions (how close people interact, for how long, and whether or not they wear masks); thus, it changes over time as the average behavior of the carriers and of the susceptible people changes. The model uses {\displaystyle SI} to represent these factors, but this product is referenced to the initial stage, when no action is taken to prevent the spread and the whole population is susceptible; thus, all changes are absorbed by the change of {\displaystyle \beta }.
{\displaystyle \gamma } is usually more stable over time, assuming that when an infectious person shows symptoms, they will seek medical attention or self-isolate. So if we find {\displaystyle R_{t}} changing, most probably the behaviors of people in the community have changed from their normal patterns before the outbreak, or the disease has mutated to a new form. Costly massive detection and isolation of susceptible close contacts can reduce {\displaystyle 1/\gamma }, but their efficiency is under debate. This debate largely concerns the uncertainty in the number of days gained between the moment an infected person becomes infectious or detectable (whichever comes first) and the moment symptoms show up. If the person is only infectious after symptoms show up, or if detection only works for a person with symptoms, then these prevention methods are unnecessary, and self-isolation and/or medical attention is the best way to cut the {\displaystyle 1/\gamma } values. The typical onset of the COVID-19 infectious period is of the order of one day from the symptoms showing up, making massive detection at a typical frequency of once every few days of little use.
{\displaystyle R_{t}} does not tell us whether the spreading will speed up or slow down in the later stages, when the fraction of susceptible people in the community has dropped significantly after recovery or vaccination. {\displaystyle R_{e}} corrects this dilution effect by multiplying by the fraction of the susceptible population over the total population. It corrects the effective/transmissible interaction between an infectious person and the rest of the community when many of the interactions are with immune individuals in the middle to late stages of the disease spreading. Thus, when {\displaystyle R_{e}>1}, we will see an exponential-like outbreak; when {\displaystyle R_{e}=1}, a steady state is reached and the number of infectious people does not change over time; and when {\displaystyle R_{e}<1}, the disease decays and fades away over time.
Using the differential equations of the SIR model and converting them to numerical discrete forms, one can set up recursive equations and calculate the S, I, and R populations for any given initial conditions, although errors accumulate the longer the calculation runs from the reference point, and sometimes a convergence test is needed to estimate them. Given a set of initial conditions and disease-spreading data, one can also fit the data with the SIR model and pull out the three reproduction numbers; in this case the errors are usually negligible because of the short time step from the reference point. Any point in time can be used as the initial condition to predict the future beyond it with this numerical model, assuming time-evolved parameters such as the population, {\displaystyle R_{t}}, and {\displaystyle \gamma }. However, away from this reference point errors accumulate over time, so a convergence test is needed to find an optimal time step for more accurate results.
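A minimal sketch of such a recursive (forward-Euler) update is shown below; the population, rates, and step size are assumed illustrative values, not fitted data. Halving the time step and checking that the result barely changes is the kind of convergence test mentioned above.

```python
# Minimal forward-Euler recursion for the SIR model; values are assumed.
N, beta, gamma, dt = 1_000_000, 0.4, 0.2, 0.1   # dt in days

S, I, R = N - 100.0, 100.0, 0.0
for _ in range(int(100 / dt)):        # simulate 100 days
    new_inf = beta * S * I / N * dt   # S -> I flow during one step
    new_rec = gamma * I * dt          # I -> R flow during one step
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(f"day 100: S={S:.0f}  I={I:.0f}  R={R:.0f}")
```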
Among these three reproduction numbers, {\displaystyle R_{0}} is very useful for judging the control pressure, e.g., a large value means the disease will spread very fast and be very difficult to control. {\displaystyle R_{t}} is most useful in predicting future trends; for example, if we know that social interactions have dropped to 50% of their pre-outbreak frequency and that the interaction intensities among people are unchanged, then we can set {\displaystyle R_{t}=0.5R_{0}}. If social distancing and masks add another 50% cut in infection efficiency, we can set {\displaystyle R_{t}=0.25R_{0}}. {\displaystyle R_{e}} correlates with the waves of the spreading: whenever {\displaystyle R_{e}>1} the spreading accelerates, and when {\displaystyle R_{e}<1} the spreading slows down, so it is useful for predicting short-term trends. It can also be used to directly calculate the threshold population for vaccination/immunization to reach the herd-immunity stage, by setting {\displaystyle R_{t}=R_{0}} and {\displaystyle R_{e}=1}, i.e. {\displaystyle S=N/R_{0}}.
=== The SIR model with vital dynamics and constant population ===
Consider a population characterized by a death rate
{\displaystyle \mu } and birth rate {\displaystyle \Lambda }, and where a communicable disease is spreading. The model with mass-action transmission is:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\Lambda -\mu S-{\frac {\beta IS}{N}}\\[8pt]{\frac {dI}{dt}}&={\frac {\beta IS}{N}}-\gamma I-\mu I\\[8pt]{\frac {dR}{dt}}&=\gamma I-\mu R\end{aligned}}}
for which the disease-free equilibrium (DFE) is:
{\displaystyle \left(S(t),I(t),R(t)\right)=\left({\frac {\Lambda }{\mu }},0,0\right).}
In this case, we can derive a basic reproduction number:
{\displaystyle R_{0}={\frac {\beta }{\mu +\gamma }},}
which has threshold properties. In fact, independently of biologically meaningful initial values, one can show that:
{\displaystyle R_{0}\leq 1\Rightarrow \lim _{t\to \infty }(S(t),I(t),R(t))={\textrm {DFE}}=\left({\frac {\Lambda }{\mu }},0,0\right)}
{\displaystyle R_{0}>1,I(0)>0\Rightarrow \lim _{t\to \infty }(S(t),I(t),R(t))={\textrm {EE}}=\left({\frac {\gamma +\mu }{\beta }},{\frac {\mu }{\beta }}\left(R_{0}-1\right),{\frac {\gamma }{\beta }}\left(R_{0}-1\right)\right).}
The point EE is called the endemic equilibrium (the disease is not totally eradicated and remains in the population). With heuristic arguments, one may show that {\displaystyle R_{0}} may be read as the average number of infections caused by a single infectious subject in a wholly susceptible population. The above relationship biologically means that if this number is less than or equal to one the disease goes extinct, whereas if this number is greater than one the disease will remain permanently endemic in the population.
=== The SIR model ===
In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible,
{\displaystyle S(t)}; infected, {\displaystyle I(t)}; and recovered, {\displaystyle R(t)}. The compartments used for this model consist of three classes:
{\displaystyle S(t)} is used to represent the individuals not yet infected with the disease at time t, i.e. those in the population susceptible to the disease.
{\displaystyle I(t)} denotes the individuals of the population who have been infected with the disease and are capable of spreading the disease to those in the susceptible category.
{\displaystyle R(t)} is the compartment used for the individuals of the population who have been infected and then removed from the disease, either due to immunization or due to death. Those in this category are not able to be infected again or to transmit the infection to others.
The flow of this model may be considered as follows:
{\displaystyle {\color {blue}{{\mathcal {S}}\rightarrow {\mathcal {I}}\rightarrow {\mathcal {R}}}}}
Using a fixed population,
{\displaystyle N=S(t)+I(t)+R(t)} in the three functions resolves that the value {\displaystyle N} should remain constant within the simulation, if a simulation is used to solve the SIR model. Alternatively, the analytic approximant can be used without performing a simulation. The model is started with values of {\displaystyle S(t=0)}, {\displaystyle I(t=0)} and {\displaystyle R(t=0)}. These are the numbers of people in the susceptible, infected and removed categories at time equals zero. If the SIR model is assumed to hold at all times, these initial conditions are not independent. Subsequently, the flow model updates the three variables for every time point with set values for {\displaystyle \beta } and {\displaystyle \gamma }. The simulation first updates the infected from the susceptible, and then the removed category is updated from the infected category for the next time point (t=1). This describes the flow of persons between the three categories. During an epidemic the susceptible category is not shifted with this model; {\displaystyle \beta } changes over the course of the epidemic, and so does {\displaystyle \gamma }. These variables determine the length of the epidemic and would have to be updated with each cycle.
{\displaystyle {\frac {dS}{dt}}=-\beta SI}
{\displaystyle {\frac {dI}{dt}}=\beta SI-\gamma I}
{\displaystyle {\frac {dR}{dt}}=\gamma I}
Several assumptions were made in the formulation of these equations: First, each individual in the population is considered to have the same probability as every other individual of contracting the disease, with a rate {\displaystyle a}, and to make contact with the same fraction {\displaystyle b} of people per unit time. Then, let {\displaystyle \beta } be the product of {\displaystyle a} and {\displaystyle b}. This is the transmission probability times the contact rate. Besides, an infected individual makes contact with {\displaystyle b} persons per unit time, whereas only a fraction {\displaystyle S/N} of them are susceptible. Thus, every infective can infect {\displaystyle abS=\beta S} susceptible persons, and therefore the whole number of susceptibles infected by infectives per unit time is {\displaystyle \beta SI}. For the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. However, a number equal to the fraction {\displaystyle \gamma } (which represents the mean recovery/death rate, or {\displaystyle 1/\gamma } the mean infective period) of infectives leaves this class per unit time to enter the removed class. These processes, which occur simultaneously, are referred to as the law of mass action, the widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned. Finally, it is assumed that the rates of infection and recovery are much faster than the time scale of births and deaths, and therefore these factors are ignored in this model.
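As a minimal sketch (not part of the original formulation), the three equations can be integrated numerically; the parameter values below are assumed, and the infection term is written in the frequency-dependent form {\displaystyle \beta SI/N} used elsewhere in the article.

```python
# Minimal numerical integration of the SIR equations; values are assumed.
from scipy.integrate import solve_ivp

N = 1000.0
beta, gamma = 0.4, 0.2   # assumed contact and removal rates (per day)

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

sol = solve_ivp(sir, (0, 160), [N - 1.0, 1.0, 0.0], max_step=0.5)
S, I, R = sol.y
print(f"peak infectious: {I.max():.0f}, final susceptible: {S[-1]:.0f}")
```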
=== Steady-state solutions ===
The only steady-state solution to the classic SIR model as defined by the differential equations above is I = 0; S and R can then take any values. The model can be changed while retaining three compartments to give a steady-state endemic solution by adding some input to the S compartment.
For example, one may postulate that the expected duration of susceptibility will be
{\displaystyle \operatorname {E} [\min(T_{L}\mid T_{S})]} where {\displaystyle T_{L}} reflects the time alive (life expectancy) and {\displaystyle T_{S}} reflects the time in the susceptible state before becoming infected, which can be simplified to:
{\displaystyle \operatorname {E} [\min(T_{L}\mid T_{S})]=\int _{0}^{\infty }e^{-(\mu +\delta )x}\,dx={\frac {1}{\mu +\delta }},}
such that the number of susceptible persons is the number entering the susceptible compartment {\displaystyle \mu N} times the duration of susceptibility:
{\displaystyle S={\frac {\mu N}{\mu +\lambda }}.}
Analogously, the steady-state number of infected persons is the number entering the infected state from the susceptible state (number susceptible, times rate of infection {\displaystyle \lambda ={\tfrac {\beta I}{N}}}) times the duration of infectiousness {\displaystyle {\tfrac {1}{\mu +v}}}:
{\displaystyle I={\frac {\mu N}{\mu +\lambda }}\lambda {\frac {1}{\mu +v}}.}
=== Other compartmental models ===
There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease where the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR). Compartmental models can also be used to model multiple risk groups, and even the interaction of multiple pathogens.
== Variations on the basic SIR model ==
=== SIS model ===
Some infections, for example, those from the common cold and influenza, do not confer any long-lasting immunity. Such infections may give temporary resistance but do not give long-term immunity upon recovery from infection, and individuals become susceptible again.
We have the model:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=-{\frac {\beta SI}{N}}+\gamma I\\[6pt]{\frac {dI}{dt}}&={\frac {\beta SI}{N}}-\gamma I\end{aligned}}}
Note that denoting with N the total population it holds that:
{\displaystyle {\frac {dS}{dt}}+{\frac {dI}{dt}}=0\Rightarrow S(t)+I(t)=N}.
It follows that:
{\displaystyle {\frac {dI}{dt}}=(\beta -\gamma )I-{\frac {\beta }{N}}I^{2}},
i.e. the dynamics of the infectious class is ruled by a logistic function, so that {\displaystyle \forall I(0)>0}:
{\displaystyle {\begin{aligned}&{\frac {\beta }{\gamma }}\leq 1\Rightarrow \lim _{t\to +\infty }I(t)=0,\\[6pt]&{\frac {\beta }{\gamma }}>1\Rightarrow \lim _{t\to +\infty }I(t)=\left(1-{\frac {\gamma }{\beta }}\right)N.\end{aligned}}}
It is possible to find an analytical solution to this model (by making a transformation of variables: {\displaystyle I=y^{-1}} and substituting this into the mean-field equations), valid when the basic reproduction rate is greater than unity. The solution is given as
{\displaystyle I(t)={\frac {I_{\infty }}{1+Ve^{-\chi t}}}},
where {\displaystyle I_{\infty }=(1-\gamma /\beta )N} is the endemic infectious population, {\displaystyle \chi =\beta -\gamma }, and {\displaystyle V=I_{\infty }/I_{0}-1}. As the system is assumed to be closed, the susceptible population is then {\displaystyle S(t)=N-I(t)}.
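The closed-form solution can be evaluated directly, as in this minimal sketch with assumed illustrative parameters:

```python
# Minimal evaluation of the closed-form SIS solution; values are assumed.
import numpy as np

N, beta, gamma, I0 = 1000.0, 0.5, 0.2, 1.0
I_inf = (1 - gamma / beta) * N      # endemic infectious population
chi = beta - gamma
V = I_inf / I0 - 1

t = np.linspace(0, 60, 7)
I = I_inf / (1 + V * np.exp(-chi * t))   # logistic approach to I_inf
print(np.round(I, 1))                    # tends to (1 - gamma/beta)*N = 600
```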
Whenever the integer nature of the number of agents is evident (populations with fewer than tens of thousands of individuals), inherent fluctuations in the disease spreading process caused by discrete agents result in uncertainties. In this scenario, the evolution of the disease predicted by compartmental equations deviates significantly from the observed results. These uncertainties may even cause the epidemic to end earlier than predicted by the compartmental equations.
As a special case, one obtains the usual logistic function by assuming
{\displaystyle \gamma =0}. This can also be considered in the SIR model with {\displaystyle R=0}, i.e. no removal will take place. That is the SI model. The differential equation system using {\displaystyle S=N-I} thus reduces to:
{\displaystyle {\frac {dI}{dt}}\propto I\cdot (N-I).}
In the long run, in the SI model, all individuals will become infected.
=== SIRD model ===
The Susceptible-Infectious-Recovered-Deceased model differentiates between Recovered (meaning specifically individuals having survived the disease and now immune) and Deceased. The SIRD model has semi-analytical solutions based on the four parts method. This model uses the following system of differential equations:
{\displaystyle {\begin{aligned}&{\frac {dS}{dt}}=-{\frac {\beta IS}{N}},\\[6pt]&{\frac {dI}{dt}}={\frac {\beta IS}{N}}-\gamma I-\mu I,\\[6pt]&{\frac {dR}{dt}}=\gamma I,\\[6pt]&{\frac {dD}{dt}}=\mu I,\end{aligned}}}
where {\displaystyle \beta ,\gamma ,\mu } are the rates of infection, recovery, and mortality, respectively.
=== SIRV model ===
The Susceptible-Infectious-Recovered-Vaccinated model is an extended SIR model that accounts for vaccination of the susceptible population. This model uses the following system of differential equations:
{\displaystyle {\begin{aligned}&{\frac {dS}{dt}}=-{\frac {\beta (t)IS}{N}}-v(t)S,\\[6pt]&{\frac {dI}{dt}}={\frac {\beta (t)IS}{N}}-\gamma (t)I,\\[6pt]&{\frac {dR}{dt}}=\gamma (t)I,\\[6pt]&{\frac {dV}{dt}}=v(t)S,\end{aligned}}}
where {\displaystyle \beta ,\gamma ,v} are the rates of infection, recovery, and vaccination, respectively. For the semi-time initial conditions {\displaystyle S(0)=(1-\eta )N}, {\displaystyle I(0)=\eta N}, {\displaystyle R(0)=V(0)=0} and constant ratios {\displaystyle k=\gamma (t)/\beta (t)} and {\displaystyle b=v(t)/\beta (t)}, the model has been solved approximately. The occurrence of a pandemic outburst requires {\displaystyle k+b<1-2\eta }, and there is a critical reduced vaccination rate {\displaystyle b_{c}} beyond which the steady-state size {\displaystyle S_{\infty }} of the susceptible compartment remains relatively close to {\displaystyle S(0)}. Arbitrary initial conditions satisfying {\displaystyle S(0)+I(0)+R(0)+V(0)=N} can be mapped to the solved special case with {\displaystyle R(0)=V(0)=0}.
The numerical solution of this model to calculate the real-time reproduction number {\displaystyle R_{t}} of COVID-19 can be carried out based on information from the different populations in a community. Numerical solution is a commonly used method to analyze complicated kinetic networks when the analytical solution is difficult to obtain or is limited by requirements such as boundary conditions or special parameters. It uses recursive equations to calculate the next step by converting the numerical integration into a Riemann sum of discrete time steps, e.g., using yesterday's principal and interest rate to calculate today's interest, which assumes the interest rate is fixed during the day. The calculation contains projected errors if analytical corrections on the numerical step size are not included; e.g., when the interest rate of annual collection is simplified to 12 times the monthly rate, a projected error is introduced. The calculated results thus carry cumulative errors when the time step is far from the reference point, and a convergence test is needed to estimate the error. However, this error is usually acceptable for data fitting. When fitting a set of data with a close time step, the error is relatively small because the reference point is nearby, compared to when predicting a long period of time after a reference point. Once the real-time {\displaystyle R_{t}} is pulled out, one can compare it to the basic reproduction number {\displaystyle R_{0}}. Before the vaccination, {\displaystyle R_{t}} gives the policy maker and the general public a measure of the efficiency of social mitigation activities such as social distancing and face masking, simply by evaluating {\displaystyle {\frac {R_{t}}{R_{0}}}}. Under massive vaccination, the goal of disease control is to reduce the effective reproduction number {\displaystyle R_{e}={\frac {R_{t}S}{N}}<1}, where {\displaystyle S} is the number of susceptible people at the time and {\displaystyle N} is the total population. When {\displaystyle R_{e}<1}, the spreading decays and the daily number of infected cases goes down.
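A minimal sketch of such a recursive scheme applied to the SIRV equations above is given below; the rates are taken constant and all values are assumed for illustration.

```python
# Minimal forward-Euler integration of the SIRV model; values are assumed.
N, beta, gamma, v, dt = 1_000_000, 0.4, 0.2, 0.01, 0.1   # per-day rates

S, I, R, V = N - 100.0, 100.0, 0.0, 0.0
for _ in range(int(200 / dt)):                  # simulate 200 days
    dS = (-beta * I * S / N - v * S) * dt
    dI = (beta * I * S / N - gamma * I) * dt
    dR = gamma * I * dt
    dV = v * S * dt
    S, I, R, V = S + dS, I + dI, R + dR, V + dV

Re = (beta / gamma) * S / N                     # effective reproduction number
print(f"S={S:.0f}  V={V:.0f}  Re={Re:.2f}")
```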
=== SIRVD model ===
The susceptible-infected-recovered-vaccinated-deceased (SIRVD) epidemic compartment model extends the SIR model to include the effects of vaccination campaigns and time-dependent fatality rates on epidemic outbreaks. It encompasses the SIR, SIRV, SIRD, and SI models as special cases, with individual time-dependent rates governing transitions between different fractions. This model uses the following system of differential equations for the population fractions
{\displaystyle S,I,R,V,D}:
{\displaystyle {\begin{aligned}&{\frac {dS}{dt}}=-a(t)SI-v(t)S,\\[6pt]&{\frac {dI}{dt}}=a(t)SI-\mu (t)I-\psi (t)I,\\[6pt]&{\frac {dR}{dt}}=\mu (t)I,\\[6pt]&{\frac {dV}{dt}}=v(t)S,\\[6pt]&{\frac {dD}{dt}}=\psi (t)I\end{aligned}}}
where {\displaystyle a(t),v(t),\mu (t),\psi (t)} are the infection, vaccination, recovery, and fatality rates, respectively. For the semi-time initial conditions {\displaystyle S(0)=1-\eta }, {\displaystyle I(0)=\eta }, {\displaystyle R(0)=V(0)=D(0)=0} and constant ratios {\displaystyle k=\mu (t)/a(t)}, {\displaystyle b=v(t)/a(t)}, and {\displaystyle q=\psi (t)/a(t)}, the model has been solved approximately, and exactly for some special cases, irrespective of the functional form of {\displaystyle a(t)}. This is achieved upon rewriting the above SIRVD model equations in the equivalent, but reduced, form
{\displaystyle {\begin{aligned}&{\frac {dS}{d\tau }}=-SI-b(\tau )S,\\[6pt]&{\frac {dI}{d\tau }}=SI-[k(\tau )+q(\tau )]I,\\[6pt]&{\frac {dR}{d\tau }}=k(\tau )I,\\[6pt]&{\frac {dV}{d\tau }}=b(\tau )S,\\[6pt]&{\frac {dD}{d\tau }}=q(\tau )I\end{aligned}}}
where {\displaystyle \tau (t)=\int _{0}^{t}a(\xi )\,d\xi } is a reduced, dimensionless time. The temporal dependence of the infected fraction {\displaystyle I(\tau )} and of the rate of new infections {\displaystyle j(\tau )=S(\tau )I(\tau )} differs when considering the effects of vaccinations and when the real-time dependences of the fatality and recovery rates diverge. These differences have been highlighted for stationary ratios and gradually decreasing fatality rates. The case of stationary ratios allows one to construct a diagnostics method to extract analytically all SIRVD model parameters from measured COVID-19 data of a completed pandemic wave.
=== SIRVB model ===
The SIRVB model adds a breakthrough pathway to the SIRV model.
The kinetic equations become:
{\displaystyle {\begin{aligned}&{\frac {dS}{dt}}=-a(t)SI-v(t)S+b(t)[\mu (t)I+v(t)S],\\[6pt]&{\frac {dI}{dt}}=a(t)SI-\mu (t)I,\\[6pt]&{\frac {dR}{dt}}=[1-b(t)]\mu (t)I,\\[6pt]&{\frac {dV}{dt}}=[1-b(t)]v(t)S,\\[6pt]\end{aligned}}}
where the infection rate {\displaystyle a(t)} can be written as {\displaystyle \beta (t)/N}, the recovery rate {\displaystyle \mu (t)} can be simplified to a constant {\displaystyle \gamma }, {\displaystyle v(t)} is the vaccination rate, and {\displaystyle b(t)} is the breakthrough ratio, i.e. the fraction of immune people susceptible to reinfection (<1).
=== MSIR model ===
For many infections, including measles, babies are not born into the susceptible compartment but are immune to the disease for the first few months of life due to protection from maternal antibodies (passed across the placenta and additionally through colostrum). This is called passive immunity. This added detail can be shown by including an M class (for maternally derived immunity) at the beginning of the model.
To indicate this mathematically, an additional compartment is added, M(t). This results in the following differential equations:
{\displaystyle {\begin{aligned}{\frac {dM}{dt}}&=\Lambda -\delta M-\mu M\\[8pt]{\frac {dS}{dt}}&=\delta M-{\frac {\beta SI}{N}}-\mu S\\[8pt]{\frac {dI}{dt}}&={\frac {\beta SI}{N}}-\gamma I-\mu I\\[8pt]{\frac {dR}{dt}}&=\gamma I-\mu R\end{aligned}}}
=== Carrier state ===
Some people who have had an infectious disease such as tuberculosis never completely recover and continue to carry the infection, whilst not suffering the disease themselves. They may then move back into the infectious compartment and suffer symptoms (as in tuberculosis) or they may continue to infect others in their carrier state, while not suffering symptoms. The most famous example of this is probably Mary Mallon, who infected 22 people with typhoid fever. The carrier compartment is labelled C.
=== SEIR model ===
For many important infections, there is a significant latency period during which individuals have been infected but are not yet infectious themselves. During this period the individual is in compartment E (for exposed).
Assuming that the latency period is a random variable with exponential distribution with parameter {\displaystyle a} (i.e. the average latency period is {\displaystyle a^{-1}}), and also assuming the presence of vital dynamics with birth rate {\displaystyle \Lambda } equal to death rate {\displaystyle N\mu } (so that the total number {\displaystyle N} is constant), we have the model:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\mu N-\mu S-{\frac {\beta IS}{N}}\\[8pt]{\frac {dE}{dt}}&={\frac {\beta IS}{N}}-(\mu +a)E\\[8pt]{\frac {dI}{dt}}&=aE-(\gamma +\mu )I\\[8pt]{\frac {dR}{dt}}&=\gamma I-\mu R.\end{aligned}}}
We have {\displaystyle S+E+I+R=N,} but this is only constant because of the simplifying assumption that birth and death rates are equal; in general {\displaystyle N} is a variable.
For this model, the basic reproduction number is:
{\displaystyle R_{0}={\frac {a}{\mu +a}}{\frac {\beta }{\mu +\gamma }}.}
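As a quick numerical illustration of this formula (the rates are assumed values):

```python
# Minimal evaluation of the SEIR basic reproduction number; values assumed.
mu, a, beta, gamma = 0.0002, 0.2, 0.5, 0.1   # per day

R0 = a / (mu + a) * beta / (mu + gamma)
print(f"R0 = {R0:.2f}")   # close to beta/gamma when mu is small
```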
Similarly to the SIR model, in this case we have a disease-free equilibrium (N,0,0,0) and an endemic equilibrium EE, and one can show that, independently of biologically meaningful initial conditions
{\displaystyle \left(S(0),E(0),I(0),R(0)\right)\in \left\{(S,E,I,R)\in [0,N]^{4}:S\geq 0,E\geq 0,I\geq 0,R\geq 0,S+E+I+R=N\right\}}
it holds that:
{\displaystyle R_{0}\leq 1\Rightarrow \lim _{t\to +\infty }\left(S(t),E(t),I(t),R(t)\right)=DFE=(N,0,0,0),}
{\displaystyle R_{0}>1,I(0)>0\Rightarrow \lim _{t\to +\infty }\left(S(t),E(t),I(t),R(t)\right)=EE.}
In case of periodically varying contact rate
{\displaystyle \beta (t)} the condition for the global attractiveness of the DFE is that the following linear system with periodic coefficients:
{\displaystyle {\begin{aligned}{\frac {dE_{1}}{dt}}&=\beta (t)I_{1}-(\gamma +a)E_{1}\\[8pt]{\frac {dI_{1}}{dt}}&=aE_{1}-(\gamma +\mu )I_{1}\end{aligned}}}
is stable (i.e. it has its Floquet's eigenvalues inside the unit circle in the complex plane).
=== SEIS model ===
The SEIS model is like the SEIR model (above) except that no immunity is acquired at the end.
{\displaystyle {\color {blue}{{\mathcal {S}}\to {\mathcal {E}}\to {\mathcal {I}}\to {\mathcal {S}}}}}
In this model an infection does not leave any immunity, thus individuals that have recovered return to being susceptible, moving back into the S(t) compartment. The following differential equations describe this model:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\Lambda -{\frac {\beta SI}{N}}-\mu S+\gamma I\\[6pt]{\frac {dE}{dt}}&={\frac {\beta SI}{N}}-(\varepsilon +\mu )E\\[6pt]{\frac {dI}{dt}}&=\varepsilon E-(\gamma +\mu )I\end{aligned}}}
=== MSEIR model ===
For the case of a disease with the factors of passive immunity and a latency period, there is the MSEIR model.
{\displaystyle \color {blue}{{\mathcal {M}}\to {\mathcal {S}}\to {\mathcal {E}}\to {\mathcal {I}}\to {\mathcal {R}}}}
{\displaystyle {\begin{aligned}{\frac {dM}{dt}}&=\Lambda -\delta M-\mu M\\[6pt]{\frac {dS}{dt}}&=\delta M-{\frac {\beta SI}{N}}-\mu S\\[6pt]{\frac {dE}{dt}}&={\frac {\beta SI}{N}}-(\varepsilon +\mu )E\\[6pt]{\frac {dI}{dt}}&=\varepsilon E-(\gamma +\mu )I\\[6pt]{\frac {dR}{dt}}&=\gamma I-\mu R\end{aligned}}}
=== MSEIRS model ===
An MSEIRS model is similar to the MSEIR, but the immunity in the R class would be temporary, so that individuals would regain their susceptibility when the temporary immunity ended.
{\displaystyle {\color {blue}{{\mathcal {M}}\to {\mathcal {S}}\to {\mathcal {E}}\to {\mathcal {I}}\to {\mathcal {R}}\to {\mathcal {S}}}}}
=== Variable contact rates ===
It is well known that the probability of getting a disease is not constant in time. As a pandemic progresses, reactions to the pandemic may change the contact rates which are assumed constant in the simpler models. Counter-measures such as masks, social distancing, and lockdown will alter the contact rate in a way to reduce the speed of the pandemic.
In addition, some diseases are seasonal, such as the common cold viruses, which are more prevalent during winter. With childhood diseases, such as measles, mumps, and rubella, there is a strong correlation with the school calendar, so that during the school holidays the probability of getting such a disease dramatically decreases. As a consequence, for many classes of diseases, one should consider a force of infection with a periodically ('seasonally') varying contact rate
{\displaystyle F=\beta (t){\frac {I}{N}},\quad \beta (t+T)=\beta (t)}
with period T equal to one year.
Thus, our model becomes
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\mu N-\mu S-\beta (t){\frac {I}{N}}S\\[8pt]{\frac {dI}{dt}}&=\beta (t){\frac {I}{N}}S-(\gamma +\mu )I\end{aligned}}}
(the dynamics of the recovered class easily follows from {\displaystyle R=N-S-I}), i.e. a nonlinear set of differential equations with periodically varying parameters. It is well known that this class of dynamical systems may undergo very interesting and complex phenomena of nonlinear parametric resonance. It is easy to see that if:
{\displaystyle {\frac {1}{T}}\int _{0}^{T}{\frac {\beta (t)}{\mu +\gamma }}\,dt<1\Rightarrow \lim _{t\to +\infty }(S(t),I(t))=DFE=(N,0),}
whereas if the integral is greater than one the disease will not die out and there may be such resonances. For example, considering the periodically varying contact rate as the 'input' of the system one has that the output is a periodic function whose period is a multiple of the period of the input.
This has contributed to explaining the poly-annual (typically biennial) epidemic outbreaks of some infectious diseases as an interplay between the period of the contact rate oscillations and the pseudo-period of the damped oscillations near the endemic equilibrium. Remarkably, in some cases, the behavior may also be quasi-periodic or even chaotic.
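A minimal sketch (not from the cited literature) of such a seasonally forced SIR system, with an assumed sinusoidal contact rate, can be integrated numerically to look for multi-annual patterns:

```python
# Minimal SIR model with periodic contact rate beta(t); values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

N, gamma, mu = 1.0, 52.0, 0.02     # per year: ~1-week infectious period
beta0, eps, T = 104.0, 0.2, 1.0    # mean contact rate, seasonal amplitude

def beta(t):
    return beta0 * (1 + eps * np.cos(2 * np.pi * t / T))

def rhs(t, y):
    S, I = y
    dS = mu * N - mu * S - beta(t) * I / N * S
    dI = beta(t) * I / N * S - (gamma + mu) * I
    return [dS, dI]

sol = solve_ivp(rhs, (0, 20), [0.9, 1e-4], max_step=1e-3)
# Sample I(t) once per year and look for annual/biennial patterns.
yearly = [sol.y[1][np.searchsorted(sol.t, yr)] for yr in range(10, 20)]
print(np.round(np.log10(yearly), 2))
```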
=== SIR model with diffusion ===
Spatiotemporal compartmental models describe not the total number, but the density of susceptible/infective/recovered persons. Consequently, they also make it possible to model the distribution of infected persons in space. In most cases, this is done by combining the SIR model with a diffusion equation
{\displaystyle {\begin{aligned}&\partial _{t}S=D_{S}\nabla ^{2}S-{\frac {\beta IS}{N}},\\[6pt]&\partial _{t}I=D_{I}\nabla ^{2}I+{\frac {\beta IS}{N}}-\gamma I,\\[6pt]&\partial _{t}R=D_{R}\nabla ^{2}R+\gamma I,\end{aligned}}}
where {\displaystyle D_{S}}, {\displaystyle D_{I}} and {\displaystyle D_{R}} are diffusion constants. Thereby, one obtains a reaction-diffusion equation. (Note that, for dimensional reasons, the parameter {\displaystyle \beta } has to be changed compared to the simple SIR model.) Early models of this type have been used to model the spread of the Black Death in Europe. Extensions of this model have been used to incorporate, e.g., effects of nonpharmaceutical interventions such as social distancing.
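A minimal one-dimensional finite-difference sketch of this reaction-diffusion system is given below; the grid, rates, and diffusion constants are assumed illustrative values, np.roll gives periodic boundaries, and the R field is omitted since R = N - S - I.

```python
# Minimal 1D reaction-diffusion SIR; all values are assumed for illustration.
import numpy as np

nx, dx, dt = 200, 1.0, 0.05
beta, gamma = 0.8, 0.2
D_S, D_I = 1.0, 1.0               # diffusion constants

S = np.ones(nx)                   # density scaled so that N = 1 everywhere
I = np.zeros(nx)
I[nx // 2] = 0.01                 # seed an infection in the middle
S -= I

def lap(f):                       # discrete Laplacian, periodic boundaries
    return (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2

for _ in range(1000):             # integrate up to t = 50
    inf = beta * S * I            # local infection term (N = 1)
    S = S + dt * (D_S * lap(S) - inf)
    I = I + dt * (D_I * lap(I) + inf - gamma * I)

print(f"peak I = {I.max():.3f} at cell {I.argmax()} (traveling front)")
```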
=== Interacting Subpopulation SEIR Model ===
As social contacts, disease severity and lethality, as well as the efficacy of prophylactic measures may differ substantially between interacting subpopulations, e.g., the elderly versus the young, separate SEIR models for each subgroup may be used that are mutually connected through interaction links. Such Interacting Subpopulation SEIR models have been used for modeling the COVID-19 pandemic at continent scale to develop personalized, accelerated, subpopulation-targeted vaccination strategies that promise a shortening of the pandemic and a reduction of case and death counts in the setting of limited access to vaccines during a wave of virus Variants of Concern.
=== SIR Model on Networks ===
The SIR model has been studied on networks of various kinds in order to model a more realistic form of connection than the homogeneous mixing condition which is usually required. A simple model for epidemics on networks, in which an individual has a probability p of being infected by each of his infected neighbors in a given time step, leads to results similar to giant component formation on Erdős–Rényi random graphs.
A stochastic compartment model with a transmission pathway via vectors has been developed recently, in which a multiple-random-walkers approach is implemented to investigate the spreading dynamics in random graphs of the Watts–Strogatz and Barabási–Albert type, mimicking human mobility patterns in complex real-world environments such as cities, streets, and transportation networks. This model captures the class of vector-transmitted infectious diseases such as dengue and malaria (transmission by mosquitoes), pestilence (transmission by fleas), and others.
=== SIRSS model - combination of SIR with modelling of social stress ===
Dynamics of epidemics depend on how people's behavior changes in time. For example, at the beginning of the epidemic, people are ignorant and careless, then, after the outbreak of epidemics and alarm, they begin to comply with the various restrictions and the spreading of epidemics may decline. Over time, some people get tired/frustrated by the restrictions and stop following them (exhaustion), especially if the number of new cases drops down. After resting for some time, they can follow the restrictions again. But during this pause the second wave can come and become even stronger than the first one. Social dynamics should be considered. The social physics models of social stress complement the classical epidemics models.
The simplest SIR-social stress (SIRSS) model is organised as follows. The susceptible individuals (S) can be split into three subgroups by type of behavior: ignorant or unaware of the epidemic (Sign), rationally resistant (Sres), and exhausted (Sexh), who do not react to external stimuli (this is a sort of refractory period). In other words: S(t) = Sign(t) + Sres(t) + Sexh(t). Symbolically, the social stress model can be presented by the "reaction scheme" (where I denotes the infected individuals):
{\displaystyle {\color {blue}{{\mathcal {S_{ign}}}+2{\mathcal {I}}\to {\mathcal {S_{res}}}+2{\mathcal {I}}}}} – mobilization reaction (the autocatalytic form here means that the transition rate is proportional to the square of the infected fraction I);
{\displaystyle {\color {blue}{{\mathcal {S_{res}}}\to {\mathcal {S_{exh}}}}}} – exhaustion process due to fatigue from anti-epidemic restrictions;
{\displaystyle {\color {blue}{{\mathcal {S_{exh}}}\to {\mathcal {S_{ign}}}}}} – slow relaxation to the initial state (end of the refractory period).
The main SIR epidemic reaction
{\displaystyle {\color {blue}{{\mathcal {S_{...}}}+{\mathcal {I}}\to {\mathcal {2I}}}}}
has different reaction rate constants {\displaystyle \beta } for Sign, Sres, and Sexh. Presumably, for Sres, {\displaystyle \beta } is lower than for Sign and Sexh.
The differences between countries are concentrated in two kinetic constants: the rate of mobilization and the rate of exhaustion, calculated for the COVID-19 epidemic in 13 countries. These constants for this epidemic can be extracted in all countries by fitting the SIRSS model to publicly available data.
=== KdV-SIR equation ===
Based on the classical SIR model, a Korteweg-de Vries (KdV)–SIR equation and its analytical solution have been proposed to illustrate the fundamental dynamics of an epidemic wave, the dependence of solutions on parameters, and the dependence of predictability horizons on various types of solutions. The KdV-SIR equation is written as follows:
{\displaystyle {\frac {d^{2}I}{dt^{2}}}-\sigma _{o}^{2}I+{\frac {3}{2}}{\frac {\sigma _{o}^{2}}{I_{max}}}I^{2}=0}.
Here, {\displaystyle \sigma _{o}=\gamma (R_{o}-1)}, {\displaystyle R_{o}={\frac {\beta }{\gamma }}{\frac {S_{o}}{N}}}, and {\displaystyle I_{max}={\frac {S_{o}}{2}}{\frac {(R_{o}-1)^{2}}{R_{o}^{2}}}}.
{\displaystyle S_{o}} indicates the initial value of the state variable {\displaystyle S}. Parameters {\displaystyle \sigma _{o}} (σ-naught) and {\displaystyle R_{o}} (R-naught) are the time-independent relative growth rate and basic reproduction number, respectively. {\displaystyle I_{max}} represents the maximum of the state variable {\displaystyle I} (the number of infected persons). The KdV-SIR equation shares the same form as the Korteweg–De Vries equation in the traveling-wave coordinate. An analytical solution to the KdV-SIR equation is written as follows:
{\displaystyle I=I_{max}\,\operatorname{sech} ^{2}\left({\frac {\sigma _{o}}{2}}t\right)},
which represents a solitary wave solution.
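The solitary-wave profile can be evaluated directly, as in this minimal sketch with assumed parameter values:

```python
# Minimal evaluation of the KdV-SIR solitary-wave solution; values assumed.
import numpy as np

N, S0, beta, gamma = 1000.0, 999.0, 0.4, 0.2
R_o = beta / gamma * S0 / N                  # basic reproduction number
sigma_o = gamma * (R_o - 1)                  # relative growth rate
I_max = S0 / 2 * (R_o - 1) ** 2 / R_o ** 2   # peak number of infected

t = np.linspace(-60, 60, 7)                  # peak centered at t = 0
I = I_max / np.cosh(sigma_o / 2 * t) ** 2    # sech^2 profile
print(f"I_max = {I_max:.1f}")
print(np.round(I, 1))
```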
== Heterogeneous (structured, Bayesian) model ==
Modeling a full population of possibly millions of people using just two constants {\displaystyle \beta } and {\displaystyle \gamma } seems far-fetched; each individual has personal characteristics that influence the propagation: immunity status, contact habits and so on. So it is interesting to know what happens if, for instance, {\displaystyle \beta } and {\displaystyle \gamma } are not two constants but random variables (a pair for each individual). This procedure has several names: "heterogeneous model", "structuration" (see also below for age-structured models) or "Bayesian" view. Surprising results emerge; for instance, it was proved that the number of infected at the peak of a heterogeneous epidemic is smaller than in the deterministic epidemic having the same average {\displaystyle \beta }; the same holds true for the total epidemic size {\displaystyle S(0)-S(\infty )} and for other models, e.g. SEIR.
== Modelling vaccination ==
The SIR model can be modified to model vaccination. Typically these introduce an additional compartment to the SIR model,
{\displaystyle V}, for vaccinated individuals. Below are some examples.
=== Vaccinating newborns ===
In the presence of a communicable disease, one of the main tasks is that of eradicating it via prevention measures and, if possible, via the establishment of a mass vaccination program. Consider a disease for which the newborn are vaccinated (with a vaccine giving lifelong immunity) at a rate {\displaystyle P\in (0,1)}:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\nu N(1-P)-\mu S-\beta {\frac {I}{N}}S\\[8pt]{\frac {dI}{dt}}&=\beta {\frac {I}{N}}S-(\mu +\gamma )I\\[8pt]{\frac {dV}{dt}}&=\nu NP-\mu V\end{aligned}}}
where {\displaystyle V} is the class of vaccinated subjects. It is immediate to show that:
{\displaystyle \lim _{t\to +\infty }V(t)=NP,}
thus we shall deal with the long-term behavior of {\displaystyle S} and {\displaystyle I}, for which it holds that:
{\displaystyle R_{0}(1-P)\leq 1\Rightarrow \lim _{t\to +\infty }\left(S(t),I(t)\right)=DFE=\left(N\left(1-P\right),0\right)}
{\displaystyle R_{0}(1-P)>1,\quad I(0)>0\Rightarrow \lim _{t\to +\infty }\left(S(t),I(t)\right)=EE=\left({\frac {N}{R_{0}(1-P)}},N\left(R_{0}(1-P)-1\right)\right).}
In other words, if
{\displaystyle P<P^{*}=1-{\frac {1}{R_{0}}}}
the vaccination program is not successful in eradicating the disease; on the contrary, it will remain endemic, although at lower levels than in the case of absence of vaccinations. This means that the mathematical model suggests that for a disease whose basic reproduction number may be as high as 18, one should vaccinate at least 94.4% of newborns in order to eradicate the disease.
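A one-line check of this threshold for a few values of {\displaystyle R_{0}} (the values are illustrative; 18 is the upper estimate quoted above):

```python
# Critical vaccination coverage P* = 1 - 1/R0 for assumed R0 values.
for R0 in (1.5, 2.0, 5.0, 18.0):
    print(f"R0 = {R0:4.1f}  ->  P* = {1 - 1 / R0:.1%}")
# R0 = 18 gives P* ~ 94.4%, matching the figure quoted above.
```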
=== Vaccination and information ===
Modern societies are facing the challenge of "rational" exemption, i.e. the family's decision not to vaccinate children as a consequence of a "rational" comparison between the perceived risk from infection and that of being harmed by the vaccine. In order to assess whether this behavior is really rational, i.e. if it can equally lead to the eradication of the disease, one may simply assume that the vaccination rate is an increasing function of the number of infectious subjects:
{\displaystyle P=P(I),\quad P'(I)>0.}
In such a case the eradication condition becomes:
{\displaystyle P(0)\geq P^{*},}
i.e. the baseline vaccination rate should be greater than the "mandatory vaccination" threshold, which, in case of exemption, cannot hold. Thus, "rational" exemption might be myopic, since it is based only on the current low incidence due to high vaccine coverage, instead of taking into account a future resurgence of infection due to coverage decline.
=== Vaccination of non-newborns ===
In case there are also vaccinations of non-newborns at a rate ρ, the equations for the susceptible and vaccinated subjects have to be modified as follows:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\mu N(1-P)-\mu S-\rho S-\beta {\frac {I}{N}}S\\[8pt]{\frac {dV}{dt}}&=\mu NP+\rho S-\mu V\end{aligned}}}
leading to the following eradication condition:
{\displaystyle P\geq 1-\left(1+{\frac {\rho }{\mu }}\right){\frac {1}{R_{0}}}}
=== Pulse vaccination strategy ===
This strategy repeatedly vaccinates a defined age-cohort (such as young children or the elderly) in a susceptible population over time. Using this strategy, the block of susceptible individuals is immediately removed, making it possible to eliminate an infectious disease (such as measles) from the entire population. Every T time units a constant fraction p of susceptible subjects is vaccinated in a relatively short (with respect to the dynamics of the disease) time. This leads to the following impulsive differential equations for the susceptible and vaccinated subjects:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=\mu N-\mu S-\beta {\frac {I}{N}}S,\quad S(nT^{+})=(1-p)S(nT^{-}),&&n=0,1,2,\ldots \\[8pt]{\frac {dV}{dt}}&=-\mu V,\quad V(nT^{+})=V(nT^{-})+pS(nT^{-}),&&n=0,1,2,\ldots \end{aligned}}}
It is easy to see that by setting I = 0 one obtains that the dynamics of the susceptible subjects is given by:
{\displaystyle S^{*}(t)=1-{\frac {p}{1-(1-p)E^{-\mu T}}}E^{-\mu MOD(t,T)}}
and that the eradication condition is:
{\displaystyle R_{0}\int _{0}^{T}S^{*}(t)\,dt<1}
=== Vaccination games ===
A huge literature recognizes that vaccination can be seen as a game: in a population where everybody is vaccinated, any epidemic will die off immediately, so an additional person has no interest in vaccinating at all. On the contrary, a person arriving in a population where nobody is vaccinated has every incentive to vaccinate (the epidemic would break loose in such a population). So, it seems that the individual has an interest in doing the opposite of the population as a whole. But the population is the sum of all individuals, so the previous affirmation cannot strictly hold; in fact, a Nash equilibrium is reached. Technical tools to treat such situations involve game theory or modern tools such as mean-field game theory.
== The influence of age: age-structured models ==
Age has a deep influence on the disease spread rate in a population, especially the contact rate. This rate summarizes the effectiveness of contacts between susceptible and infectious subjects. Taking into account the ages of the epidemic classes
{\displaystyle s(t,a),i(t,a),r(t,a)} (to limit ourselves to the susceptible-infectious-removed scheme) such that:
{\displaystyle S(t)=\int _{0}^{a_{M}}s(t,a)\,da}
{\displaystyle I(t)=\int _{0}^{a_{M}}i(t,a)\,da}
{\displaystyle R(t)=\int _{0}^{a_{M}}r(t,a)\,da}
(where {\displaystyle a_{M}\leq +\infty } is the maximum admissible age), one finds that their dynamics is not described, as one might think, by "simple" partial differential equations, but by integro-differential equations:
{\displaystyle \partial _{t}s(t,a)+\partial _{a}s(t,a)=-\mu (a)s(a,t)-s(a,t)\int _{0}^{a_{M}}k(a,a_{1};t)i(a_{1},t)\,da_{1}}
{\displaystyle \partial _{t}i(t,a)+\partial _{a}i(t,a)=s(a,t)\int _{0}^{a_{M}}k(a,a_{1};t)i(a_{1},t)\,da_{1}-\mu (a)i(a,t)-\gamma (a)i(a,t)}
{\displaystyle \partial _{t}r(t,a)+\partial _{a}r(t,a)=-\mu (a)r(a,t)+\gamma (a)i(a,t)}
where:
{\displaystyle F(a,t,i(\cdot ,\cdot ))=\int _{0}^{a_{M}}k(a,a_{1};t)i(a_{1},t)\,da_{1}}
is the force of infection, which, of course, will depend, through the contact kernel {\displaystyle k(a,a_{1};t)}, on the interactions between the ages.
Complexity is added by the initial conditions for newborns (i.e. for a=0), that are straightforward for infectious and removed:
{\displaystyle i(t,0)=r(t,0)=0}
but that are nonlocal for the density of susceptible newborns:
{\displaystyle s(t,0)=\int _{0}^{a_{M}}\left(\varphi _{s}(a)s(a,t)+\varphi _{i}(a)i(a,t)+\varphi _{r}(a)r(a,t)\right)\,da}
where {\displaystyle \varphi _{j}(a),j=s,i,r} are the fertilities of the adults.
Moreover, defining now the density of the total population
{\displaystyle n(t,a)=s(t,a)+i(t,a)+r(t,a)}
one obtains:
{\displaystyle \partial _{t}n(t,a)+\partial _{a}n(t,a)=-\mu (a)n(a,t)}
In the simplest case of equal fertilities in the three epidemic classes, in order to have demographic equilibrium the following necessary and sufficient condition linking the fertility {\displaystyle \varphi (.)} with the mortality {\displaystyle \mu (a)} must hold:
{\displaystyle 1=\int _{0}^{a_{M}}\varphi (a)\exp \left(-\int _{0}^{a}{\mu (q)dq}\right)\,da}
and the demographic equilibrium is
{\displaystyle n^{*}(a)=C\exp \left(-\int _{0}^{a}\mu (q)\,dq\right),}
automatically ensuring the existence of the disease-free solution:
{\displaystyle DFS(a)=(n^{*}(a),0,0).}
A basic reproduction number can be calculated as the spectral radius of an appropriate functional operator.
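As a rough numerical illustration (not part of the standard treatment above), one can discretize the age variable and approximate this operator by a matrix whose spectral radius is then computed. In the Julia sketch below, the contact kernel k, the mortality mu and the recovery rate gam are invented illustrative choices, and ageing during the infectious period is neglected (a reasonable simplification only when the infectious period is short compared with demographic time scales):
using LinearAlgebra
# Crude sketch: R0 as the spectral radius of a discretized
# next-generation operator. All rates below are illustrative assumptions.
na, aM = 100, 80.0
da = aM / na
ages = [(i - 0.5) * da for i in 1:na]        # age mid-points
mu(a) = 1 / 80                                # constant mortality (per year)
gam(a) = 52.0                                 # ~one-week infectious period
k(a, a1) = 0.2 * exp(-abs(a - a1) / 10)       # assumed contact kernel
sstar = [exp(-mu(a) * a) for a in ages]       # demographic-equilibrium shape
G = [sstar[i] * k(ages[i], ages[j]) / (mu(ages[j]) + gam(ages[j])) * da
     for i in 1:na, j in 1:na]
R0 = maximum(abs.(eigvals(G)))                # spectral radius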
=== Next-generation method ===
One way to calculate {\displaystyle R_{0}} is to average the expected number of new infections over all possible infected types. The next-generation method is a general method of deriving {\displaystyle R_{0}} when more than one class of infectives is involved. This method, originally introduced by Diekmann et al. (1990), can be used for models with underlying age structure or spatial structure, among other possibilities. In this picture, the spectral radius of the next-generation matrix {\displaystyle G} gives the basic reproduction number, {\displaystyle R_{0}=\rho (G).}
Consider a sexually transmitted disease. In a naive population, where almost everyone is susceptible apart from the initial infection seed, if the expected number of newly infected individuals of gender 1 is {\displaystyle f} and the expected number of newly infected individuals of gender 2 is {\displaystyle m}, we can determine how many would be infected in the next generation. The next-generation matrix {\displaystyle G} can then be written as:
{\displaystyle G={\begin{pmatrix}0&f\\m&0\end{pmatrix}},}
where each element {\displaystyle g_{ij}} is the expected number of secondary infections of gender {\displaystyle i} caused by a single infected individual of gender {\displaystyle j}, assuming that the population of gender {\displaystyle i} is entirely susceptible. Diagonal elements are zero because people of the same gender cannot transmit the disease to each other, but, for example, each {\displaystyle f} can transmit the disease to {\displaystyle m}, on average. This means that each element {\displaystyle g_{ij}} is a reproduction number, but one where who infects whom is accounted for. If generation {\displaystyle a} is represented with {\displaystyle \phi _{a}}, then the next generation {\displaystyle \phi _{a+1}} would be {\displaystyle G\phi _{a}}.
The spectral radius of the next-generation matrix is the basic reproduction number, {\displaystyle R_{0}=\rho (G)={\sqrt {mf}}}, that is, here, the geometric mean of the expected number of infections of each gender in the next generation. Note that the multiplication factors {\displaystyle f} and {\displaystyle m} alternate because the infectious person has to 'pass through' a second gender before it can enter a new host of the first gender. In other words, it takes two generations to get back to the same type, and every two generations the numbers are multiplied by {\displaystyle m\times f}. The average per-generation multiplication factor is therefore {\displaystyle {\sqrt {mf}}}. Note that {\displaystyle G} is a non-negative matrix, so it has a single, unique, positive, real eigenvalue which is strictly greater than all the others.
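A minimal numerical check of this two-gender picture, sketched in Julia with invented values for {\displaystyle f} and {\displaystyle m} (illustrative assumptions, not a calibrated model):
using LinearAlgebra
f, m = 4.0, 1.5                      # illustrative transmission counts
G = [0.0 f; m 0.0]                   # next-generation matrix
R0 = maximum(abs.(eigvals(G)))       # spectral radius rho(G)
R0 ≈ sqrt(m * f)                     # equals the geometric mean of m and f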
=== Next-generation matrix for compartmental models ===
In mathematical modelling of infectious disease, the dynamics of spreading is usually described through a set of non-linear ordinary differential equations (ODEs), so there are {\displaystyle n} coupled equations of the form {\displaystyle {\dot {C_{i}}}={\operatorname {d} \!C_{i} \over \operatorname {d} \!t}=f(C_{1},C_{2},...,C_{n})}
which shows how the number of people in compartment {\displaystyle C_{i}} changes over time. For example, in a SIR model, {\displaystyle C_{1}=S}, {\displaystyle C_{2}=I}, and {\displaystyle C_{3}=R}. Compartmental models have a disease-free equilibrium (DFE), meaning that it is possible to find an equilibrium while setting the number of infected people to zero, {\displaystyle I=0}; in other words, as a rule, there is an infection-free steady state. There is another fixed point known as an endemic equilibrium (EE), where the disease is not totally eradicated and remains in the population. Mathematically, {\displaystyle R_{0}} is a threshold for stability of the disease-free equilibrium such that:
{\displaystyle R_{0}\leq 1\Rightarrow \lim _{t\to \infty }(C_{1}(t),C_{2}(t),\cdots ,C_{n}(t))={\textrm {DFE}}}
{\displaystyle R_{0}>1,I(0)>0\Rightarrow \lim _{t\to \infty }(C_{1}(t),C_{2}(t),\cdots ,C_{n}(t))={\textrm {EE}}.}
To calculate {\displaystyle R_{0}}, the first step is to linearise around the disease-free equilibrium (DFE), but for the infected subsystem of non-linear ODEs which describe the production of new infections and changes in state among infected individuals. Epidemiologically, the linearisation reflects that {\displaystyle R_{0}} characterizes the potential for initial spread of an infectious person in a naive population, assuming the change in the susceptible population is negligible during the initial spread. A linear system of ODEs can always be described by a matrix. So, the next step is to construct a linear positive operator that provides the next generation of infected people when applied to the present generation. Note that this operator (matrix) is responsible for the number of infected people, not all the compartments. Iteration of this operator describes the initial progression of infection within the heterogeneous population. So comparing the spectral radius of this operator to unity determines whether the generations of infected people grow or not.
{\displaystyle R_{0}} can be written as a product of the infection rate near the disease-free equilibrium and the average duration of infectiousness. It is used to find the peak and final size of an epidemic.
==== The SEIR model with vital dynamics and constant population ====
As described in the example above, many epidemic processes can be described with a SIR model. However, for many important infections, such as COVID-19, there is a significant latency period during which individuals have been infected but are not yet infectious themselves. During this period the individual is in compartment E (for exposed). Here, the formation of the next-generation matrix from the SEIR model involves determining two compartments, infected and non-infected, since they are the populations that spread the infection. So we only need to model the exposed, E, and infected, I, compartments. Consider a population characterized by a death rate {\displaystyle \mu } and birth rate {\displaystyle \lambda } where a communicable disease is spreading. As in the previous example, we can use the transition rates between the compartments per capita such that {\displaystyle \beta } is the infection rate, {\displaystyle \gamma } the recovery rate, and {\displaystyle \kappa } the rate at which a latent individual becomes infectious. Then, we can define the model dynamics using the following equations:
{\displaystyle {\begin{cases}{\dot {S}}=\lambda -\mu S-\beta SI,\\\\{\dot {E}}=\beta SI-(\mu +\kappa )E,\\\\{\dot {I}}=\kappa E-(\mu +\gamma )I,\\\\{\dot {R}}=\gamma I-\mu R.\end{cases}}}
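A minimal Julia sketch of these dynamics, integrated with a fixed-step Euler scheme (the parameter values and initial state are illustrative assumptions, and a real study would use a proper ODE solver):
function seir_euler(; lam=100.0, mu=0.01, bet=2.0e-5, kap=0.2, gam=0.1,
                    S=9999.0, E=0.0, I=1.0, R=0.0, dt=0.1, nsteps=100_000)
    for _ in 1:nsteps
        dS = lam - mu*S - bet*S*I          # births minus deaths and infections
        dE = bet*S*I - (mu + kap)*E        # new exposures minus progression
        dI = kap*E - (mu + gam)*I          # progression minus recovery/death
        dR = gam*I - mu*R                  # recoveries minus deaths
        S += dt*dS; E += dt*dE; I += dt*dI; R += dt*dR
    end
    return (S=S, E=E, I=I, R=R)
end
seir_euler()    # long-time state under the assumed rates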
Here we have 4 compartments and we can define the vector {\displaystyle \mathrm {x} =(S,E,I,R)} where {\displaystyle \mathrm {x} _{i}} denotes the number or proportion of individuals in the {\displaystyle i}-th compartment. Let {\displaystyle F_{i}(\mathrm {x} )} be the rate of appearance of new infections in compartment {\displaystyle i}, such that it includes only infections that are newly arising, but does not include terms which describe the transfer of infectious individuals from one infected compartment to another. Then if {\displaystyle V_{i}^{+}} is the rate of transfer of individuals into compartment {\displaystyle i} by all other means and {\displaystyle V_{i}^{-}} is the rate of transfer of individuals out of the {\displaystyle i}-th compartment, then the difference {\displaystyle F_{i}(\mathrm {x} )-V_{i}(\mathrm {x} )} gives the rate of change of {\displaystyle \mathrm {x} _{i}}, such that {\displaystyle V_{i}(\mathrm {x} )=V_{i}^{-}(\mathrm {x} )-V_{i}^{+}(\mathrm {x} )}.
We can now make matrices of partial derivatives of {\displaystyle F} and {\displaystyle V} such that {\displaystyle F_{ij}={\partial \!\ F_{i}(\mathrm {x} ^{*}) \over \partial \!\ \mathrm {x} _{j}}} and {\displaystyle V_{ij}={\partial \!\ V_{i}(\mathrm {x} ^{*}) \over \partial \!\ \mathrm {x} _{j}}}, where {\displaystyle \mathrm {x} ^{*}=(S^{*},E^{*},I^{*},R^{*})=(\lambda /\mu ,0,0,0)} is the disease-free equilibrium.
We now can form the next-generation matrix (operator) {\displaystyle G=FV^{-1}}. Basically, {\displaystyle F} is a non-negative matrix which represents the infection rates near the equilibrium, and {\displaystyle V} is an M-matrix for linear transition terms, making {\displaystyle V^{-1}} a matrix which represents the average duration of infectiousness. Therefore, {\displaystyle G_{ij}} gives the rate at which infected individuals in {\displaystyle \mathrm {x} _{j}} produce new infections in {\displaystyle \mathrm {x} _{i}}, times the average length of time an individual spends in a single visit to compartment {\displaystyle j.}
Finally, for this SEIR process we can have:
{\displaystyle F={\begin{pmatrix}0&\beta S^{*}\\0&0\end{pmatrix}}}
and
{\displaystyle V={\begin{pmatrix}\mu +\kappa &0\\-\kappa &\gamma +\mu \end{pmatrix}}}
and so
{\displaystyle R_{0}=\rho (FV^{-1})={\frac {\kappa \beta S^{*}}{(\mu +\kappa )(\mu +\gamma )}}.}
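A short numerical check of this result in Julia, with the same illustrative parameter values assumed in the Euler sketch above (assumptions, not data from any particular disease):
using LinearAlgebra
lam, mu, bet, kap, gam = 100.0, 0.01, 2.0e-5, 0.2, 0.1   # assumed rates
Sstar = lam / mu                      # susceptibles at the DFE
F = [0.0 bet*Sstar; 0.0 0.0]          # new infections (rows/cols: E, I)
V = [mu+kap 0.0; -kap mu+gam]         # transitions between infected states
R0 = maximum(abs.(eigvals(F * inv(V))))
R0 ≈ kap * bet * Sstar / ((mu + kap) * (mu + gam))   # closed form above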
== Estimation methods ==
The basic reproduction number can be estimated through examining detailed transmission chains or through genomic sequencing. However, it is most frequently calculated using epidemiological models. During an epidemic, typically the number of diagnosed infections {\displaystyle N(t)} over time {\displaystyle t} is known. In the early stages of an epidemic, growth is exponential, with a logarithmic growth rate {\displaystyle K:={\frac {d\ln(N)}{dt}}.}
For exponential growth, {\displaystyle N} can be interpreted as the cumulative number of diagnoses (including individuals who have recovered) or the present number of infection cases; the logarithmic growth rate is the same for either definition. In order to estimate {\displaystyle R_{0}}, assumptions are necessary about the time delay between infection and diagnosis and the time between infection and starting to be infectious.
In exponential growth, {\displaystyle K} is related to the doubling time {\displaystyle T_{d}} as {\displaystyle K={\frac {\ln(2)}{T_{d}}}.}
=== Simple model ===
If an individual, after getting infected, infects exactly {\displaystyle R_{0}} new individuals only after exactly a time {\displaystyle \tau } (the serial interval) has passed, then the number of infectious individuals over time grows as
{\displaystyle n_{E}(t)=n_{E}(0)\,R_{0}^{t/\tau }=n_{E}(0)\,e^{Kt}}
or
{\displaystyle \ln(n_{E}(t))=\ln(n_{E}(0))+\ln(R_{0})t/\tau .}
The underlying matching differential equation is
{\displaystyle {\frac {dn_{E}(t)}{dt}}=n_{E}(t){\frac {\ln(R_{0})}{\tau }}.}
or
{\displaystyle {\frac {d\ln(n_{E}(t))}{dt}}={\frac {\ln(R_{0})}{\tau }}.}
In this case, {\displaystyle R_{0}=e^{K\tau }} or {\displaystyle K={\frac {\ln R_{0}}{\tau }}}.
For example, with {\displaystyle \tau =5~\mathrm {d} } and {\displaystyle K=0.183~\mathrm {d} ^{-1}}, we would find {\displaystyle R_{0}=2.5}.
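In code this is a one-line computation; a Julia sketch using the example's numbers:
tau = 5.0              # serial interval, days
K = 0.183              # logarithmic growth rate, per day
R0 = exp(K * tau)      # ≈ 2.5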
If {\displaystyle R_{0}} is time dependent,
{\displaystyle \ln(n_{E}(t))=\ln(n_{E}(0))+{\frac {1}{\tau }}\int \limits _{0}^{t}\ln(R_{0}(t))dt}
showing that it may be important to keep {\displaystyle \ln(R_{0})} below 0, time-averaged, to avoid exponential growth.
=== Latent infectious period, isolation after diagnosis ===
In this model, an individual infection has the following stages:
Exposed: an individual is infected, but has no symptoms and does not yet infect others. The average duration of the exposed state is {\displaystyle \tau _{E}}.
Latent infectious: an individual is infected, has no symptoms, but does infect others. The average duration of the latent infectious state is {\displaystyle \tau _{I}}. The individual infects {\displaystyle R_{0}} other individuals during this period.
Isolation after diagnosis: measures are taken to prevent further infections, for example by isolating the infected person.
This is a SEIR model and {\displaystyle R_{0}} may be written in the following form:
{\displaystyle R_{0}=1+K(\tau _{E}+\tau _{I})+K^{2}\tau _{E}\tau _{I}.}
This estimation method has been applied to COVID-19 and SARS. It follows from the differential equation for the number of exposed individuals {\displaystyle n_{E}} and the number of latent infectious individuals {\displaystyle n_{I}},
{\displaystyle {\frac {d}{dt}}{\begin{pmatrix}n_{E}\\n_{I}\end{pmatrix}}={\begin{pmatrix}-1/\tau _{E}&R_{0}/\tau _{I}\\1/\tau _{E}&-1/\tau _{I}\end{pmatrix}}{\begin{pmatrix}n_{E}\\n_{I}\end{pmatrix}}.}
The largest eigenvalue of the matrix is the logarithmic growth rate {\displaystyle K}, which can be solved for {\displaystyle R_{0}}.
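A quick consistency check in Julia (the durations and reproduction number below are illustrative assumptions): compute {\displaystyle K} as the largest eigenvalue of the matrix and recover {\displaystyle R_{0}} from the quadratic relation above.
using LinearAlgebra
tauE, tauI, R0 = 3.0, 2.0, 2.5                 # assumed values
M = [-1/tauE  R0/tauI; 1/tauE  -1/tauI]
K = maximum(real.(eigvals(M)))                  # logarithmic growth rate
R0_back = 1 + K * (tauE + tauI) + K^2 * tauE * tauI   # recovers 2.5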
In the special case {\displaystyle \tau _{I}=0}, this model results in {\displaystyle R_{0}=1+K\tau _{E}}, which is different from the simple model above ({\displaystyle R_{0}=\exp(K\tau _{E})}). For example, with the same values {\displaystyle \tau =5~\mathrm {d} } and {\displaystyle K=0.183~\mathrm {d} ^{-1}}, we would find {\displaystyle R_{0}=1.9}, rather than the true value of {\displaystyle 2.5}. The difference is due to a subtle difference in the underlying growth model; the matrix equation above assumes that newly infected patients are already contributing to infections, while in fact infections only occur due to the number infected at {\displaystyle \tau _{E}} ago. A more correct treatment would require the use of delay differential equations.
The latent period is the transition time between the contagion event and disease manifestation. In cases of diseases with varying latent periods, the basic reproduction number can be calculated as the sum of the reproduction numbers for each transition time into the disease. An example of this is tuberculosis (TB). Blower and coauthors calculated from a simple model of TB the following reproduction number:
{\displaystyle R_{0}=R_{0}^{\text{FAST}}+R_{0}^{\text{SLOW}}}
In their model, it is assumed that the infected individuals can develop active TB by either direct progression (the disease develops immediately after infection) considered above as FAST tuberculosis or endogenous reactivation (the disease develops years after the infection) considered above as SLOW tuberculosis.
== Other considerations within compartmental epidemic models ==
=== Vertical transmission ===
In the case of some diseases such as AIDS and hepatitis B, it is possible for the offspring of infected parents to be born infected. This transmission of the disease down from the mother is referred to as vertical transmission. The influx of additional members into the infected category can be considered within the model by including a fraction of the newborn members in the infected compartment.
=== Vector transmission ===
Diseases transmitted from human to human indirectly, e.g. malaria spread by way of mosquitoes, are transmitted through a vector. In these cases, the infection transfers from human to insect, and an epidemic model must include both species, generally requiring many more compartments than a model for direct transmission.
=== Others ===
Other occurrences which may need to be considered when modeling an epidemic include things such as the following:
Non-homogeneous mixing
Variable infectivity
Distributions that are spatially non-uniform
Diseases caused by macroparasites
== Deterministic versus stochastic epidemic models ==
The deterministic models presented here are valid only in the case of sufficiently large populations, and as such should be used cautiously. These models are only valid in the thermodynamic limit, where the population is effectively infinite. In stochastic models, the long-time endemic equilibrium derived above does not hold, as there is a finite probability that the number of infected individuals drops below one in a system. In a true system, then, the pathogen may not propagate, as no host will be infected. But in deterministic mean-field models, the number of infected can take on real, namely non-integer, values of infected hosts, and the number of hosts in the model can be less than one but more than zero, thereby allowing the pathogen in the model to propagate. The reliability of compartmental models is therefore limited to applications where these population assumptions are appropriate.
One of the possible extensions of mean-field models considers the spreading of epidemics on a network based on percolation theory concepts. Stochastic epidemic models have been studied on different networks and more recently applied to the COVID-19 pandemic.
== See also ==
Attack rate
Basic reproduction number
Flatten the curve
List of COVID-19 simulation models
Mathematical modelling in epidemiology
Modifiable areal unit problem
Next-generation matrix
Risk assessment
== References ==
== Further reading ==
May RM, Anderson RM (1991). Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press. ISBN 0-19-854040-X.
Vynnycky E, White RG, eds. (2010). An Introduction to Infectious Disease Modelling. Oxford: Oxford University Press. ISBN 978-0-19-856576-5.
Capasso V (2008). Mathematical Structures of Epidemic Systems. 2nd Printing. Heidelberg: Springer. ISBN 978-3-540-56526-0.
Carlson CS, Rubin DM, Heikkilä V, Postema M (2021). "Extracting transmission and recovery parameters for an adaptive global system dynamics model of the COVID-19 pandemic". 2021 IEEE Africon (PDF). pp. 456–459. doi:10.1109/AFRICON51333.2021.9570946. ISBN 978-1-6654-1984-0. S2CID 239899862.
== External links ==
SIR model: Online experiments with JSXGraph
"Simulating an epidemic". 3Blue1Brown. March 27, 2020 – via YouTube. | Wikipedia/Compartmental_models_in_epidemiology |
Modelling biological systems is a significant task of systems biology and mathematical biology. Computational systems biology aims to develop and use efficient algorithms, data structures, visualization and communication tools with the goal of computer modelling of biological systems. It involves the use of computer simulations of biological systems, including cellular subsystems (such as the networks of metabolites and enzymes which comprise metabolism, signal transduction pathways and gene regulatory networks), to both analyze and visualize the complex connections of these cellular processes.
An unexpected emergent property of a complex system may be a result of the interplay of the cause-and-effect among simpler, integrated parts (see biological organisation). Biological systems manifest many important examples of emergent properties in the complex interplay of components. Traditional study of biological systems requires reductive methods in which quantities of data are gathered by category, such as concentration over time in response to a certain stimulus. Computers are critical to analysis and modelling of these data. The goal is to create accurate real-time models of a system's response to environmental and internal stimuli, such as a model of a cancer cell in order to find weaknesses in its signalling pathways, or modelling of ion channel mutations to see effects on cardiomyocytes and in turn, the function of a beating heart.
== Standards ==
By far the most widely accepted standard format for storing and exchanging models in the field is the Systems Biology Markup Language (SBML). The SBML.org website includes a guide to many important software packages used in computational systems biology. A large number of models encoded in SBML can be retrieved from BioModels. Other markup languages with different emphases include BioPAX, CellML and MorpheusML.
== Particular tasks ==
=== Cellular model ===
Creating a cellular model has been a particularly challenging task of systems biology and mathematical biology. It involves the use of computer simulations of the many cellular subsystems such as the networks of metabolites, enzymes which comprise metabolism and transcription, translation, regulation and induction of gene regulatory networks.
The complex network of biochemical reaction/transport processes and their spatial organization make the development of a predictive model of a living cell a grand challenge for the 21st century, listed as such by the National Science Foundation (NSF) in 2006.
A whole cell computational model for the bacterium Mycoplasma genitalium, including all its 525 genes, gene products, and their interactions, was built by scientists from Stanford University and the J. Craig Venter Institute and published on 20 July 2012 in Cell.
A dynamic computer model of intracellular signaling was the basis for Merrimack Pharmaceuticals to discover the target for their cancer medicine MM-111.
Membrane computing is the task of modelling specifically a cell membrane.
=== Multi-cellular organism simulation ===
An open source simulation of C. elegans at the cellular level is being pursued by the OpenWorm community. So far the physics engine Gepetto has been built and models of the neural connectome and a muscle cell have been created in the NeuroML format.
=== Protein folding ===
Protein structure prediction is the prediction of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of a protein's tertiary structure from its primary structure. It is one of the most important goals pursued by bioinformatics and theoretical chemistry. Protein structure prediction is of high importance in medicine (for example, in drug design) and biotechnology (for example, in the design of novel enzymes). Every two years, the performance of current methods is assessed in the CASP experiment.
=== Human biological systems ===
==== Brain model ====
The Blue Brain Project is an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. The aim of this project, founded in May 2005 by the Brain and Mind Institute of the École Polytechnique in Lausanne, Switzerland, is to study the brain's architectural and functional principles. The project is headed by the Institute's director, Henry Markram. Using a Blue Gene supercomputer running Michael Hines's NEURON software, the simulation does not consist simply of an artificial neural network, but involves a partially biologically realistic model of neurons. It is hoped by its proponents that it will eventually shed light on the nature of consciousness.
There are a number of sub-projects, including the Cajal Blue Brain, coordinated by the Supercomputing and Visualization Center of Madrid (CeSViMa), and others run by universities and independent laboratories in the UK, U.S., and Israel. The Human Brain Project builds on the work of the Blue Brain Project. It is one of six pilot projects in the Future Emerging Technologies Research Program of the European Commission, competing for a billion euro funding.
==== Model of the immune system ====
The last decade has seen the emergence of a growing number of simulations of the immune system.
==== Virtual liver ====
The Virtual Liver project is a 43 million euro research program funded by the German Government, made up of seventy research groups distributed across Germany. The goal is to produce a virtual liver, a dynamic mathematical model that represents human liver physiology, morphology and function.
=== Tree model ===
Electronic trees (e-trees) usually use L-systems to simulate growth. L-systems are very important in the field of complexity science and A-life.
A universally accepted system for describing changes in plant morphology at the cellular or modular level has yet to be devised.
The most widely implemented tree-generating algorithms are described in the papers "Creation and Rendering of Realistic Trees" and "Real-Time Tree Rendering".
=== Ecological models ===
Ecosystem models are mathematical representations of ecosystems. Typically they simplify complex foodwebs down to their major components or trophic levels, and quantify these as either numbers of organisms, biomass or the inventory/concentration of some pertinent chemical element (for instance, carbon or a nutrient species such as nitrogen or phosphorus).
=== Models in ecotoxicology ===
The purpose of models in ecotoxicology is the understanding, simulation and prediction of effects caused by toxicants in the environment. Most current models describe effects on one of many different levels of biological organization (e.g. organisms or populations). A challenge is the development of models that predict effects across biological scales. Ecotoxicology and models discusses some types of ecotoxicological models and provides links to many others.
=== Modelling of infectious disease ===
It is possible to model the progress of most infectious diseases mathematically to discover the likely outcome of an epidemic or to help manage them by vaccination. This field tries to find parameters for various infectious diseases and to use those parameters to make useful calculations about the effects of a mass vaccination programme.
== See also ==
Biological data visualization
Biosimulation
Gillespie algorithm
Molecular modelling software
Stochastic simulation
== Notes ==
== References ==
== Sources ==
Antmann, S. S.; Marsden, J. E.; Sirovich, L., eds. (2009). Mathematical Physiology (2nd ed.). New York, New York: Springer. ISBN 978-0-387-75846-6.
Barnes, D.J.; Chu, D. (2010), Introduction to Modelling for Biosciences, Springer Verlag
An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White. An introductory book on infectious disease modelling and its applications.
== Further reading ==
== External links ==
The Center for Modeling Immunity to Enteric Pathogens (MIEP) | Wikipedia/Modelling_biological_systems |
In numerical linear algebra, the Jacobi eigenvalue algorithm is an iterative method for the calculation of the eigenvalues and eigenvectors of a real symmetric matrix (a process known as diagonalization). It is named after Carl Gustav Jacob Jacobi, who first proposed the method in 1846, but only became widely used in the 1950s with the advent of computers.
This algorithm is inherently a dense matrix algorithm: it draws little or no advantage from being applied to a sparse matrix, and it will destroy sparseness by creating fill-in. Similarly, it will not preserve structure, such as bandedness, of the matrix on which it operates.
== Description ==
Let {\displaystyle S} be a symmetric matrix, and {\displaystyle G=G(i,j,\theta )} be a Givens rotation matrix. Then:
{\displaystyle S'=G^{\top }SG\,}
is symmetric and similar to {\displaystyle S}.
Furthermore, {\displaystyle S^{\prime }} has entries:
{\displaystyle {\begin{aligned}S'_{ii}&=c^{2}\,S_{ii}-2\,sc\,S_{ij}+s^{2}\,S_{jj}\\S'_{jj}&=s^{2}\,S_{ii}+2sc\,S_{ij}+c^{2}\,S_{jj}\\S'_{ij}&=S'_{ji}=(c^{2}-s^{2})\,S_{ij}+sc\,(S_{ii}-S_{jj})\\S'_{ik}&=S'_{ki}=c\,S_{ik}-s\,S_{jk}&k\neq i,j\\S'_{jk}&=S'_{kj}=s\,S_{ik}+c\,S_{jk}&k\neq i,j\\S'_{kl}&=S_{kl}&k,l\neq i,j\end{aligned}}}
where {\displaystyle s=\sin(\theta )} and {\displaystyle c=\cos(\theta )}.
Since {\displaystyle G} is orthogonal, {\displaystyle S} and {\displaystyle S^{\prime }} have the same Frobenius norm {\displaystyle ||\cdot ||_{F}} (the square-root sum of squares of all components); however, we can choose {\displaystyle \theta } such that {\displaystyle S_{ij}^{\prime }=0}, in which case {\displaystyle S^{\prime }} has a larger sum of squares on the diagonal:
{\displaystyle S'_{ij}=\cos(2\theta )S_{ij}+{\tfrac {1}{2}}\sin(2\theta )(S_{ii}-S_{jj})}
Set this equal to 0, and rearrange:
{\displaystyle \tan(2\theta )={\frac {2S_{ij}}{S_{jj}-S_{ii}}}}
If {\displaystyle S_{jj}=S_{ii}}, then {\displaystyle \theta ={\frac {\pi }{4}}}.
In order to optimize this effect, Sij should be the off-diagonal element with the largest absolute value, called the pivot.
The Jacobi eigenvalue method repeatedly performs rotations until the matrix becomes almost diagonal. Then the elements in the diagonal are approximations of the (real) eigenvalues of S.
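For a 2 × 2 example the rotation angle can be computed directly from the formula above; a small Julia sketch (using the two-argument arctangent, so the degenerate case {\displaystyle S_{jj}=S_{ii}} automatically gives θ = π/4):
S = [2.0 1.0; 1.0 3.0]
i, j = 1, 2
theta = 0.5 * atan(2 * S[i,j], S[j,j] - S[i,i])   # tan(2θ) = 2S_ij/(S_jj − S_ii)
c, s = cos(theta), sin(theta)
G = [c s; -s c]                                   # Givens rotation
Sp = G' * S * G                                   # Sp[1,2] is now zero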
== Convergence ==
If {\displaystyle p=S_{kl}} is a pivot element, then by definition {\displaystyle |S_{ij}|\leq |p|} for {\displaystyle 1\leq i,j\leq n,i\neq j}. Let {\displaystyle \Gamma (S)^{2}} denote the sum of squares of all off-diagonal entries of {\displaystyle S}. Since {\displaystyle S} has exactly {\displaystyle 2N:=n(n-1)} off-diagonal elements, we have {\displaystyle p^{2}\leq \Gamma (S)^{2}\leq 2Np^{2}} or {\displaystyle 2p^{2}\geq \Gamma (S)^{2}/N}. Now {\displaystyle \Gamma (S^{J})^{2}=\Gamma (S)^{2}-2p^{2}}. This implies {\displaystyle \Gamma (S^{J})^{2}\leq (1-1/N)\Gamma (S)^{2}} or {\displaystyle \Gamma (S^{J})\leq (1-1/N)^{1/2}\Gamma (S)}; that is, the sequence of Jacobi rotations converges at least linearly by a factor {\displaystyle (1-1/N)^{1/2}} to a diagonal matrix.
A number of {\displaystyle N} Jacobi rotations is called a sweep; let {\displaystyle S^{\sigma }} denote the result. The previous estimate yields {\displaystyle \Gamma (S^{\sigma })\leq \left(1-{\frac {1}{N}}\right)^{N/2}\Gamma (S)}; that is, the sequence of sweeps converges at least linearly with a factor ≈ {\displaystyle e^{-1/2}}.
However the following result of Schönhage yields locally quadratic convergence. To this end let S have m distinct eigenvalues {\displaystyle \lambda _{1},...,\lambda _{m}} with multiplicities {\displaystyle \nu _{1},...,\nu _{m}} and let d > 0 be the smallest distance of two different eigenvalues. Let us call a number of
{\displaystyle N_{S}:={\frac {n(n-1)}{2}}-\sum _{\mu =1}^{m}{\frac {1}{2}}\nu _{\mu }(\nu _{\mu }-1)\leq N}
Jacobi rotations a Schönhage-sweep. If {\displaystyle S^{s}} denotes the result then
{\displaystyle \Gamma (S^{s})\leq {\sqrt {{\frac {n}{2}}-1}}\left({\frac {\gamma ^{2}}{d-2\gamma }}\right),\quad \gamma :=\Gamma (S)}.
Thus convergence becomes quadratic as soon as
{\displaystyle \Gamma (S)<{\frac {d}{2+{\sqrt {{\frac {n}{2}}-1}}}}}
== Cost ==
Each Givens rotation can be done in {\displaystyle O(n)} steps when the pivot element p is known. However the search for p requires inspection of all N ≈ n²/2 off-diagonal elements, which means this search dominates the overall complexity and pushes the computational complexity of a sweep in the classical Jacobi algorithm to {\displaystyle O(n^{4})}. Competing algorithms attain {\displaystyle O(n^{3})} complexity for a full diagonalisation.
=== Caching row maximums ===
We can reduce the complexity of finding the pivot element from O(N) to O(n) if we introduce an additional index array {\displaystyle m_{1},\,\dots \,,\,m_{n-1}} with the property that {\displaystyle m_{i}} is the index of the largest element in row i, (i = 1, ..., n − 1) of the current S. Then the indices of the pivot (k, l) must be one of the pairs {\displaystyle (i,m_{i})}. Also the updating of the index array can be done in O(n) average-case complexity: First, the maximum entry in the updated rows k and l can be found in O(n) steps. In the other rows i, only the entries in columns k and l change. Looping over these rows, if {\displaystyle m_{i}} is neither k nor l, it suffices to compare the old maximum at {\displaystyle m_{i}} to the new entries and update {\displaystyle m_{i}} if necessary. If {\displaystyle m_{i}} should be equal to k or l and the corresponding entry decreased during the update, the maximum over row i has to be found from scratch in O(n) complexity. However, this will happen on average only once per rotation. Thus, each rotation has O(n) and one sweep O(n³) average-case complexity, which is equivalent to one matrix multiplication. Additionally the {\displaystyle m_{i}} must be initialized before the process starts, which can be done in n² steps.
Typically the Jacobi method converges within numerical precision after a small number of sweeps. Note that multiple eigenvalues reduce the number of iterations since {\displaystyle N_{S}<N}.
=== Cyclic and parallel Jacobi ===
An alternative approach is to forego the search entirely, and simply have each sweep pivot every off-diagonal element once, in some predetermined order. It has been shown that this cyclic Jacobi attains quadratic convergence, just like the classical Jacobi.
The opportunity for parallelisation that is particular to Jacobi is based on combining cyclic Jacobi with the observation that Givens rotations for disjoint sets of indices commute, so that several can be applied in parallel. Concretely, if {\displaystyle G_{1}} pivots between indices {\displaystyle i_{1},j_{1}} and {\displaystyle G_{2}} pivots between indices {\displaystyle i_{2},j_{2}}, then from {\displaystyle \{i_{1},j_{1}\}\cap \{i_{2},j_{2}\}=\varnothing } follows {\displaystyle G_{1}G_{2}=G_{2}G_{1}}, because in computing {\displaystyle G_{1}G_{2}A} or {\displaystyle G_{2}G_{1}A} the {\displaystyle G_{1}} rotation only needs to access rows {\displaystyle i_{1},j_{1}} and the {\displaystyle G_{2}} rotation only needs to access rows {\displaystyle i_{2},j_{2}}. Two processors can perform both rotations in parallel, because no matrix element is accessed by both.
Partitioning the set of index pairs of a sweep into classes that are pairwise disjoint is equivalent to partitioning the edge set of a complete graph into matchings, which is the same thing as edge colouring it; each colour class then becomes a round within the sweep. The minimal number of rounds is the chromatic index of the complete graph, and equals {\displaystyle n} for odd {\displaystyle n} but {\displaystyle n-1} for even {\displaystyle n}. A simple rule for odd {\displaystyle n} is to handle the pairs {\displaystyle \{i_{1},j_{1}\}} and {\displaystyle \{i_{2},j_{2}\}} in the same round if {\displaystyle i_{1}+j_{1}\equiv i_{2}+j_{2}\textstyle {\pmod {n}}}. For even {\displaystyle n} one may create {\displaystyle n-1} rounds {\displaystyle k=0,1,\dotsc ,n-2} where a pair {\displaystyle \{i,j\}} for {\displaystyle 1\leqslant i<j\leqslant n-1} goes into round {\displaystyle (i+j){\bmod {(}}n-1)} and additionally a pair {\displaystyle \{i,n\}} for {\displaystyle 1\leqslant i\leqslant n-1} goes into round {\displaystyle 2i{\bmod {(}}n-1)}. This brings the time complexity of a sweep down from {\displaystyle O(n^{3})} to {\displaystyle O(n^{2})}, if {\displaystyle n/2} processors are available.
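The even-n rule above is easy to realise in code; the following Julia sketch (the helper name rounds is ours) groups all index pairs of a sweep into n − 1 rounds of n/2 disjoint pairs each:
# Group all pairs {i,j} of a sweep into n−1 rounds of disjoint pairs (n even),
# following the rule quoted above.
function rounds(n::Int)
    R = [Vector{Tuple{Int,Int}}() for _ in 1:n-1]
    for i in 1:n-1, j in i+1:n-1
        push!(R[mod(i + j, n - 1) + 1], (i, j))
    end
    for i in 1:n-1
        push!(R[mod(2i, n - 1) + 1], (i, n))
    end
    return R
end
rounds(6)   # 5 rounds, each containing 3 disjoint pairs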
A round would consist of each processor first calculating {\displaystyle (c,s)} for its rotation, and then applying the rotation from the left (rotating between rows). Next, the processors synchronise before applying the transpose rotation from the right (rotating between columns), and finally synchronising again.
Further parallelisation is possible by dividing the work for a single rotation between several processors, but that might be getting too fine-grained to be practical.
== Algorithm ==
The following algorithm is a description of the Jacobi method in math-like notation.
It calculates a vector e which contains the eigenvalues and a matrix E which contains the corresponding eigenvectors; that is, {\displaystyle e_{i}} is an eigenvalue and the column {\displaystyle E_{i}} an orthonormal eigenvector for {\displaystyle e_{i}}, i = 1, ..., n.
procedure jacobi(S ∈ Rn×n; out e ∈ Rn; out E ∈ Rn×n)
var
i, k, l, m, state ∈ N
s, c, t, p, y, d, r ∈ R
ind ∈ Nn
changed ∈ Ln
function maxind(k ∈ N) ∈ N ! index of largest off-diagonal element in row k
m := k+1
for i := k+2 to n do
if │Ski│ > │Skm│ then m := i endif
endfor
return m
endfunc
procedure update(k ∈ N; t ∈ R) ! update ek and its status
y := ek; ek := y+t
if changedk and (y=ek) then changedk := false; state := state−1
elsif (not changedk) and (y≠ek) then changedk := true; state := state+1
endif
endproc
procedure rotate(k,l,i,j ∈ N) ! perform rotation of Sij, Skl
┌ ┐ ┌ ┐┌ ┐
│Skl│ │c −s││Skl│
│ │ := │ ││ │
│Sij│ │s c││Sij│
└ ┘ └ ┘└ ┘
endproc
! init e, E, and arrays ind, changed
E := I; state := n
for k := 1 to n do indk := maxind(k); ek := Skk; changedk := true endfor
while state≠0 do ! next rotation
m := 1 ! find index (k,l) of pivot p
for k := 2 to n−1 do
if │Sk indk│ > │Sm indm│ then m := k endif
endfor
k := m; l := indm; p := Skl
! calculate c = cos φ, s = sin φ
y := (el−ek)/2; d := │y│+√(p²+y²)
r := √(p²+d²); c := d/r; s := p/r; t := p²/d
if y<0 then s := −s; t := −t endif
Skl := 0.0; update(k,−t); update(l,t)
! rotate rows and columns k and l
for i := 1 to k−1 do rotate(i,k,i,l) endfor
for i := k+1 to l−1 do rotate(k,i,i,l) endfor
for i := l+1 to n do rotate(k,i,l,i) endfor
! rotate eigenvectors
for i := 1 to n do
┌ ┐ ┌ ┐┌ ┐
│Eik│ │c −s││Eik│
│ │ := │ ││ │
│Eil│ │s c││Eil│
└ ┘ └ ┘└ ┘
endfor
! update all potentially changed indi
for i := 1 to n do indi := maxind(i) endfor
loop
endproc
=== Notes ===
1. The logical array changed holds the status of each eigenvalue. If the numerical value of {\displaystyle e_{k}} or {\displaystyle e_{l}} changes during an iteration, the corresponding component of changed is set to true, otherwise to false. The integer state counts the number of components of changed which have the value true. Iteration stops as soon as state = 0. This means that none of the approximations {\displaystyle e_{1},\,...\,,e_{n}} has recently changed its value and thus it is not very likely that this will happen if iteration continues. Here it is assumed that floating point operations are optimally rounded to the nearest floating point number.
2. The upper triangle of the matrix S is destroyed while the lower triangle and the diagonal are unchanged. Thus it is possible to restore S if necessary according to
for k := 1 to n−1 do ! restore matrix S
for l := k+1 to n do
Skl := Slk
endfor
endfor
3. The eigenvalues are not necessarily in descending order. This can be achieved by a simple sorting algorithm.
for k := 1 to n−1 do
m := k
for l := k+1 to n do
if el > em then
m := l
endif
endfor
if k ≠ m then
swap em,ek
swap Em,Ek
endif
endfor
4. The algorithm is written using matrix notation (1 based arrays instead of 0 based).
5. When implementing the algorithm, the part specified using matrix notation must be performed simultaneously.
6. This implementation does not correctly account for the case in which one dimension is an independent subspace. For example, if given a diagonal matrix, the above implementation will never terminate, as none of the eigenvalues will change. Hence, in real implementations, extra logic must be added to account for this case.
=== Example ===
Let
{\displaystyle S={\begin{pmatrix}4&-30&60&-35\\-30&300&-675&420\\60&-675&1620&-1050\\-35&420&-1050&700\end{pmatrix}}}
Then jacobi produces the following eigenvalues and eigenvectors after 3 sweeps (19 iterations):
{\displaystyle e_{1}=2585.25381092892231}
{\displaystyle E_{1}={\begin{pmatrix}0.0291933231647860588\\-0.328712055763188997\\0.791411145833126331\\-0.514552749997152907\end{pmatrix}}}
{\displaystyle e_{2}=37.1014913651276582}
{\displaystyle E_{2}={\begin{pmatrix}-0.179186290535454826\\0.741917790628453435\\-0.100228136947192199\\-0.638282528193614892\end{pmatrix}}}
{\displaystyle e_{3}=1.4780548447781369}
{\displaystyle E_{3}={\begin{pmatrix}-0.582075699497237650\\0.370502185067093058\\0.509578634501799626\\0.514048272222164294\end{pmatrix}}}
{\displaystyle e_{4}=0.1666428611718905}
{\displaystyle E_{4}={\begin{pmatrix}0.792608291163763585\\0.451923120901599794\\0.322416398581824992\\0.252161169688241933\end{pmatrix}}}
== Applications for real symmetric matrices ==
When the eigenvalues (and eigenvectors) of a symmetric matrix are known, the following values are easily calculated.
Singular values
The singular values of a (square) matrix {\displaystyle A} are the square roots of the (non-negative) eigenvalues of {\displaystyle A^{T}A}. In case of a symmetric matrix {\displaystyle S} we have {\displaystyle S^{T}S=S^{2}}, hence the singular values of {\displaystyle S} are the absolute values of the eigenvalues of {\displaystyle S}.
2-norm and spectral radius
The 2-norm of a matrix A is the norm based on the Euclidean vector norm; that is, the largest value {\displaystyle \|Ax\|_{2}} when x runs through all vectors with {\displaystyle \|x\|_{2}=1}. It is the largest singular value of {\displaystyle A}. In case of a symmetric matrix it is the largest absolute value of its eigenvalues and thus equal to its spectral radius.
Condition number
The condition number of a nonsingular matrix {\displaystyle A} is defined as {\displaystyle {\mbox{cond}}(A)=\|A\|_{2}\|A^{-1}\|_{2}}. In case of a symmetric matrix it is the absolute value of the quotient of the largest and smallest eigenvalue. Matrices with large condition numbers can cause numerically unstable results: small perturbations can result in large errors. Hilbert matrices are the most famous ill-conditioned matrices. For example, the fourth-order Hilbert matrix has a condition number of 15514, while for order 8 it is 2.7 × 10⁸.
Rank
A matrix {\displaystyle A} has rank {\displaystyle r} if it has {\displaystyle r} columns that are linearly independent while the remaining columns are linearly dependent on these. Equivalently, {\displaystyle r} is the dimension of the range of {\displaystyle A}. Furthermore it is the number of nonzero singular values. In case of a symmetric matrix, r is the number of nonzero eigenvalues. Unfortunately, because of rounding errors, numerical approximations of zero eigenvalues may not be zero (it may also happen that a numerical approximation is zero while the true value is not). Thus one can only calculate the numerical rank by making a decision which of the eigenvalues are close enough to zero.
Pseudo-inverse
The pseudo-inverse of a matrix {\displaystyle A} is the unique matrix {\displaystyle X=A^{+}} for which {\displaystyle AX} and {\displaystyle XA} are symmetric and for which {\displaystyle AXA=A,XAX=X} holds. If {\displaystyle A} is nonsingular, then {\displaystyle A^{+}=A^{-1}}.
When procedure jacobi (S, e, E) is called, then the relation {\displaystyle S=E^{T}{\mbox{Diag}}(e)E} holds, where Diag(e) denotes the diagonal matrix with vector e on the diagonal. Let {\displaystyle e^{+}} denote the vector where {\displaystyle e_{i}} is replaced by {\displaystyle 1/e_{i}} if {\displaystyle e_{i}\neq 0} and by 0 if {\displaystyle e_{i}} is (numerically close to) zero. Since matrix E is orthogonal, it follows that the pseudo-inverse of S is given by {\displaystyle S^{+}=E^{T}{\mbox{Diag}}(e^{+})E}.
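In Julia this reads as follows; note that Julia's eigen uses the transposed convention S = E Diag(e) Eᵀ with eigenvectors as columns, and the tolerance 1e-12 is an arbitrary choice for "numerically close to zero":
using LinearAlgebra
S = [1.0 1.0; 1.0 1.0]                    # rank-deficient symmetric matrix
e, E = eigen(Symmetric(S))                # S == E * Diagonal(e) * E'
eplus = [abs(x) > 1e-12 ? 1/x : 0.0 for x in e]   # invert nonzero eigenvalues
Splus = E * Diagonal(eplus) * E'
Splus ≈ pinv(S)                           # matches the library pseudo-inverse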
Least squares solution
If matrix {\displaystyle A} does not have full rank, there may not be a solution of the linear system {\displaystyle Ax=b}. However one can look for a vector x for which {\displaystyle \|Ax-b\|_{2}} is minimal. The solution is {\displaystyle x=A^{+}b}. In case of a symmetric matrix S as before, one has {\displaystyle x=S^{+}b=E^{T}{\mbox{Diag}}(e^{+})Eb}.
Matrix exponential
From {\displaystyle S=E^{T}{\mbox{Diag}}(e)E} one finds {\displaystyle \exp S=E^{T}{\mbox{Diag}}(\exp e)E} where exp {\displaystyle e} is the vector where {\displaystyle e_{i}} is replaced by {\displaystyle \exp e_{i}}. In the same way, {\displaystyle f(S)} can be calculated in an obvious way for any (analytic) function {\displaystyle f}.
Linear differential equations
The differential equation {\displaystyle x'=Ax,x(0)=a} has the solution {\displaystyle x(t)=\exp(tA)a}. For a symmetric matrix {\displaystyle S}, it follows that {\displaystyle x(t)=E^{T}{\mbox{Diag}}(\exp te)Ea}. If {\displaystyle a=\sum _{i=1}^{n}a_{i}E_{i}} is the expansion of {\displaystyle a} by the eigenvectors of {\displaystyle S}, then {\displaystyle x(t)=\sum _{i=1}^{n}a_{i}\exp(te_{i})E_{i}}.
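A small numerical sketch of this solution formula in Julia (again with the column-eigenvector convention of eigen):
using LinearAlgebra
S = [0.0 1.0; 1.0 0.0]
a = [1.0, 0.0]
e, E = eigen(Symmetric(S))
x(t) = E * Diagonal(exp.(t .* e)) * (E' * a)   # x(t) = E Diag(exp te) E' a
x(1.0) ≈ exp(1.0 * S) * a                      # agrees with the matrix exponential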
Let {\displaystyle W^{s}} be the vector space spanned by the eigenvectors of {\displaystyle S} which correspond to a negative eigenvalue and {\displaystyle W^{u}} analogously for the positive eigenvalues. If {\displaystyle a\in W^{s}} then {\displaystyle {\mbox{lim}}_{t\rightarrow \infty }x(t)=0}; that is, the equilibrium point 0 is attractive to {\displaystyle x(t)}. If {\displaystyle a\in W^{u}} then {\displaystyle {\mbox{lim}}_{t\rightarrow \infty }x(t)=\infty }; that is, 0 is repulsive to {\displaystyle x(t)}. {\displaystyle W^{s}} and {\displaystyle W^{u}} are called stable and unstable manifolds for {\displaystyle S}. If {\displaystyle a} has components in both manifolds, then one component is attracted and one component is repelled. Hence {\displaystyle x(t)} approaches {\displaystyle W^{u}} as {\displaystyle t\to \infty }.
== Julia implementation ==
The following code is a straightforward implementation of the mathematical description of the Jacobi eigenvalue algorithm in the Julia programming language.
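A compact sketch along these lines (this variant, named jacobi_eigen here, performs cyclic sweeps over all off-diagonal pairs rather than searching for the largest pivot as the pseudocode above does, and accumulates the eigenvectors as columns of E, so that S ≈ E Diagonal(e) E'):
using LinearAlgebra
function jacobi_eigen(S; maxsweeps=30, tol=1e-12)
    A = Matrix(float.(S))
    n = size(A, 1)
    E = Matrix{Float64}(I, n, n)
    thresh = tol * norm(A)
    for sweep in 1:maxsweeps
        offA = sqrt(sum(abs2, A) - sum(abs2, diag(A)))   # Γ(A)
        offA < thresh && break
        for i in 1:n-1, j in i+1:n
            A[i, j] == 0 && continue
            θ = 0.5 * atan(2A[i, j], A[j, j] - A[i, i])
            c, s = cos(θ), sin(θ)
            G = Matrix{Float64}(I, n, n)
            G[i, i] = c; G[j, j] = c; G[i, j] = s; G[j, i] = -s
            A = G' * A * G      # zeroes A[i,j]; O(n^3) here for clarity
            E = E * G
        end
    end
    return diag(A), E
end
S = [4.0 -30 60 -35; -30 300 -675 420; 60 -675 1620 -1050; -35 420 -1050 700]
e, E = jacobi_eigen(S)
S * E ≈ E * Diagonal(e)        # eigendecomposition check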
== Generalizations ==
The Jacobi Method has been generalized to complex Hermitian matrices, general nonsymmetric real and complex matrices as well as block matrices.
Since singular values of a real matrix are the square roots of the eigenvalues of the symmetric matrix {\displaystyle S=A^{T}A}, it can also be used for the calculation of these values. For this case, the method is modified in such a way that S need not be explicitly calculated, which reduces the danger of round-off errors. Note that {\displaystyle JSJ^{T}=JA^{T}AJ^{T}=JA^{T}J^{T}JAJ^{T}=B^{T}B} with {\displaystyle B\,:=JAJ^{T}}.
== References ==
== Further reading ==
== External links ==
Matlab implementation of Jacobi algorithm that avoids trigonometric functions
C++11 implementation | Wikipedia/Jacobi_eigenvalue_algorithm |
Dissipative particle dynamics (DPD) is an off-lattice mesoscopic simulation technique which involves a set of particles moving in continuous space and discrete time. Particles represent whole molecules or fluid regions, rather than single atoms, and atomistic details are not considered relevant to the processes addressed. The particles' internal degrees of freedom are integrated out and replaced by simplified pairwise dissipative and random forces, so as to conserve momentum locally and ensure correct hydrodynamic behaviour. The main advantage of this method is that it gives access to longer time and length scales than are possible using conventional MD simulations. Simulations of polymeric fluids in volumes up to 100 nm in linear dimension for tens of microseconds are now common.
DPD was initially devised by Hoogerbrugge and Koelman to avoid the lattice artifacts of the so-called lattice gas automata and to tackle hydrodynamic time and space scales beyond those available with molecular dynamics (MD). It was subsequently reformulated and slightly modified by P. Español to ensure the proper thermal equilibrium state. A series of newer DPD algorithms with reduced computational complexity and better control of transport properties have also been presented; these choose a particle pair at random for applying the DPD thermostat, thus reducing the computational complexity.
== Equations ==
The total non-bonded force acting on a DPD particle i is given by a sum over all particles j that lie within a fixed cut-off distance, of three pairwise-additive forces:
{\displaystyle f_{i}=\sum _{j\neq i}(F_{ij}^{C}+F_{ij}^{D}+F_{ij}^{R})}
where the first term in the above equation is a conservative force, the second a dissipative force and the third a random force. The conservative force acts to give beads a chemical identity, while the dissipative and random forces together form a thermostat that keeps the mean temperature of the system constant. A key property of all of the non-bonded forces is that they conserve momentum locally, so that hydrodynamic modes of the fluid emerge even for small particle numbers. Local momentum conservation requires that the random force between two interacting beads be antisymmetric. Each pair of interacting particles therefore requires only a single random force calculation. This distinguishes DPD from Brownian dynamics, in which each particle experiences a random force independently of all other particles. Beads can be connected into 'molecules' by tying them together with soft (often Hookean) springs. The most common applications of DPD keep the particle number, volume and temperature constant, and so take place in the NVT ensemble. Alternatively, the pressure instead of the volume is held constant, so that the simulation is in the NPT ensemble.
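As an illustration, a commonly used concrete choice of these three forces (the Groot–Warren form, with a linear weight function) can be sketched in a few lines of Julia. The parameter values, the weight function, and the helper name dpd_pair_force below are illustrative modelling choices rather than the only possibility; the relation σ² = 2γk_BT ties the random and dissipative amplitudes together (fluctuation–dissipation).
# Sketch of the three pairwise DPD forces (Groot–Warren form; parameters
# are illustrative). Returns the force on particle i due to particle j.
function dpd_pair_force(ri, rj, vi, vj; a=25.0, gamma=4.5, sigma=3.0,
                        rc=1.0, dt=0.01)
    rij = ri .- rj
    r = sqrt(sum(abs2, rij))
    r >= rc && return zeros(length(ri))          # outside the cut-off
    rhat = rij ./ r
    w = 1 - r / rc                               # weight; w^D = (w^R)^2 = w^2
    FC = a * w .* rhat                           # conservative (soft repulsion)
    FD = -gamma * w^2 * sum(rhat .* (vi .- vj)) .* rhat   # dissipative
    FR = sigma * w * randn() / sqrt(dt) .* rhat           # random, antisymmetric
    return FC .+ FD .+ FR
end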
== Parallelization ==
In principle, simulations of very large systems, approaching a cubic micron for milliseconds, are possible using a parallel implementation of DPD running on multiple processors in a Beowulf-style cluster. Because the non-bonded forces are short-ranged in DPD, it is possible to parallelize a DPD code very efficiently using a spatial domain decomposition technique. In this scheme, the total simulation space is divided into a number of cuboidal regions each of which is assigned to a distinct processor in the cluster. Each processor is responsible for integrating the equations of motion of all beads whose centres of mass lie within its region of space. Only beads lying near the boundaries of each processor's space require communication between processors. In order to ensure that the simulation is efficient, the crucial requirement is that the number of particle-particle interactions that require inter-processor communication be much smaller than the number of particle-particle interactions within the bulk of each processor's region of space. Roughly speaking, this means that the volume of space assigned to each processor should be sufficiently large that its surface area (multiplied by a distance comparable to the force cut-off distance) is much less than its volume.
== Applications ==
A wide variety of complex hydrodynamic phenomena have been simulated using DPD, the list here is necessarily incomplete. The goal of these simulations often is to relate the macroscopic non-Newtonian flow properties of the fluid to its microscopic structure. Such DPD applications range from modeling the rheological properties of concrete to simulating liposome formation in biophysics to other recent three-phase phenomena such as dynamic wetting.
The DPD method has also found popularity in modeling heterogeneous multi-phase flows containing deformable objects such as blood cells and polymer micelles.
== Further reading ==
The full trace of the developments of various important aspects of the DPD methodology since it was first proposed in the early 1990s can be found in "Dissipative Particle Dynamics: Introduction, Methodology and Complex Fluid Applications – A Review".
The state-of-the-art in DPD was captured in a CECAM workshop in 2008. Innovations to the technique presented there include DPD with energy conservation; non-central frictional forces that allow the fluid viscosity to be tuned; an algorithm for preventing bond crossing between polymers; and the automated calibration of DPD interaction parameters from atomistic molecular dynamics. Recently, examples of automated calibration and parameterisation have been shown against experimental observables. Additionally, datasets for the purpose of interaction potential calibration and parameterisation have been explored. Swope et al. have provided a detailed analysis of literature data and an experimental dataset based on critical micelle concentration (CMC) and micellar mean aggregation number (Nagg). Examples of micellar simulations using DPD have been well documented previously.
== References ==
== Available packages ==
Some available simulation packages that can (also) perform DPD simulations are:
CULGI: The Chemistry Unified Language Interface, Culgi B.V., The Netherlands
DL_MESO: Open-source mesoscale simulation software.
DPDmacs
ESPResSo: Extensible Simulation Package for the Research on Soft Matter Systems - Open-source
Fluidix: The Fluidix simulation suite available from OneZero Software.
GPIUTMD: Graphical processors for Many-Particle Dynamics
Gromacs-DPD: A modified version of Gromacs including DPD.
HOOMD-blue Archived 2011-11-11 at the Wayback Machine: Highly Optimized Object-oriented Many-particle Dynamics—Blue Edition
LAMMPS
Materials Studio: Materials Studio - Modeling and simulation for studying chemicals and materials, Accelrys Software Inc.
OSPREY-DPD: Open Source Polymer Research Engine-DPD
SYMPLER: A freeware SYMbolic ParticLE simulatoR from the University of Freiburg.
SunlightDPD: Open-source (GPL) DPD software.
== External links ==
DPD simulation technique by MatDL (Materials Digital Library Pathway)
The Luttinger–Kohn model is a flavor of the k·p perturbation theory used for calculating the structure of multiple, degenerate electronic bands in bulk and quantum well semiconductors. The method is a generalization of the single band k·p theory.
In this model, the influence of all other bands is taken into account by using Löwdin's perturbation method.
== Background ==
All bands can be subdivided into two classes:
Class A: six valence bands (heavy hole, light hole, split-off band and their spin counterparts) and two conduction bands.
Class B: all other bands.
The method concentrates on the bands in Class A, and takes into account Class B bands perturbatively.
We can write the perturbed solution,
ϕ
{\displaystyle \phi _{}^{}}
, as a linear combination of the unperturbed eigenstates
ϕ
i
(
0
)
{\displaystyle \phi _{i}^{(0)}}
:
ϕ
=
∑
n
A
,
B
a
n
ϕ
n
(
0
)
{\displaystyle \phi =\sum _{n}^{A,B}a_{n}\phi _{n}^{(0)}}
Assuming the unperturbed eigenstates are orthonormalized, the eigenequations are:
(
E
−
H
m
m
)
a
m
=
∑
n
≠
m
A
H
m
n
a
n
+
∑
α
≠
m
B
H
m
α
a
α
{\displaystyle (E-H_{mm})a_{m}=\sum _{n\neq m}^{A}H_{mn}a_{n}+\sum _{\alpha \neq m}^{B}H_{m\alpha }a_{\alpha }}
,
where
H
m
n
=
∫
ϕ
m
(
0
)
†
H
ϕ
n
(
0
)
d
3
r
=
E
n
(
0
)
δ
m
n
+
H
m
n
′
{\displaystyle H_{mn}=\int \phi _{m}^{(0)\dagger }H\phi _{n}^{(0)}d^{3}\mathbf {r} =E_{n}^{(0)}\delta _{mn}+H_{mn}^{'}}
.
From this expression, we can write:
a
m
=
∑
n
≠
m
A
H
m
n
E
−
H
m
m
a
n
+
∑
α
≠
m
B
H
m
α
E
−
H
m
m
a
α
{\displaystyle a_{m}=\sum _{n\neq m}^{A}{\frac {H_{mn}}{E-H_{mm}}}a_{n}+\sum _{\alpha \neq m}^{B}{\frac {H_{m\alpha }}{E-H_{mm}}}a_{\alpha }}
,
where the first sum on the right-hand side is over the states in class A only, while the second sum is over the states on class B. Since we are interested in the coefficients
a
m
{\displaystyle a_{m}}
for m in class A, we may eliminate those in class B by an iteration procedure to obtain:
a
m
=
∑
n
A
U
m
n
A
−
δ
m
n
H
m
n
E
−
H
m
m
a
n
{\displaystyle a_{m}=\sum _{n}^{A}{\frac {U_{mn}^{A}-\delta _{mn}H_{mn}}{E-H_{mm}}}a_{n}}
,
U
m
n
A
=
H
m
n
+
∑
α
≠
m
B
H
m
α
H
α
n
E
−
H
α
α
+
∑
α
,
β
≠
m
,
n
;
α
≠
β
H
m
α
H
α
β
H
β
n
(
E
−
H
α
α
)
(
E
−
H
β
β
)
+
…
{\displaystyle U_{mn}^{A}=H_{mn}+\sum _{\alpha \neq m}^{B}{\frac {H_{m\alpha }H_{\alpha n}}{E-H_{\alpha \alpha }}}+\sum _{\alpha ,\beta \neq m,n;\alpha \neq \beta }{\frac {H_{m\alpha }H_{\alpha \beta }H_{\beta n}}{(E-H_{\alpha \alpha })(E-H_{\beta \beta })}}+\ldots }
Equivalently, for
a
n
{\displaystyle a_{n}}
(
n
∈
A
{\displaystyle n\in A}
):
a
n
=
∑
n
A
(
U
m
n
A
−
E
δ
m
n
)
a
n
=
0
,
m
∈
A
{\displaystyle a_{n}=\sum _{n}^{A}(U_{mn}^{A}-E\delta _{mn})a_{n}=0,m\in A}
and
a
γ
=
∑
n
A
U
γ
n
A
−
H
γ
n
δ
γ
n
E
−
H
γ
γ
a
n
=
0
,
γ
∈
B
{\displaystyle a_{\gamma }=\sum _{n}^{A}{\frac {U_{\gamma n}^{A}-H_{\gamma n}\delta _{\gamma n}}{E-H_{\gamma \gamma }}}a_{n}=0,\gamma \in B}
.
When the coefficients
a
n
{\displaystyle a_{n}}
belonging to Class A are determined, so are
a
γ
{\displaystyle a_{\gamma }}
.
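For readers who want to experiment numerically, the elimination of class-B states can be sketched as an energy-dependent matrix downfolding, $U^A(E_0) = H_{AA} + H_{AB}(E_0 - H_{BB})^{-1} H_{BA}$, whose expansion in powers of the coupling reproduces Löwdin's series term by term. The following NumPy sketch assumes a generic Hermitian matrix and hypothetical index sets; it is an illustration, not a band-structure code.

```python
import numpy as np

def downfolded_hamiltonian(H, A_idx, B_idx, E0):
    """Effective class-A Hamiltonian U^A(E0) = H_AA + H_AB (E0 - H_BB)^{-1} H_BA.

    Expanding the matrix inverse in powers of the A-B coupling reproduces
    Löwdin's perturbation series term by term."""
    H_AA = H[np.ix_(A_idx, A_idx)]
    H_AB = H[np.ix_(A_idx, B_idx)]
    H_BA = H[np.ix_(B_idx, A_idx)]
    H_BB = H[np.ix_(B_idx, B_idx)]
    G_B = np.linalg.inv(E0 * np.eye(len(B_idx)) - H_BB)   # class-B resolvent
    return H_AA + H_AB @ G_B @ H_BA

# Toy example: an 8-state class A embedded in a 20-state Hamiltonian.
rng = np.random.default_rng(0)
H = rng.normal(size=(20, 20)); H = (H + H.T) / 2          # Hermitian toy model
U_A = downfolded_hamiltonian(H, list(range(8)), list(range(8, 20)), E0=0.0)
E, a = np.linalg.eigh(U_A)                                # class-A spectrum
```

Evaluating the resolvent at a fixed reference energy $E_0$, rather than at the exact eigenvalue $E$, is precisely the approximation made in Löwdin's method.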
== Schrödinger equation and basis functions ==
The Hamiltonian including the spin-orbit interaction can be written as:
$$H = H_0 + \frac{\hbar}{4 m_0^2 c^2}\, \bar{\sigma} \cdot \nabla V \times \mathbf{p} ,$$

where $\bar{\sigma}$ is the Pauli spin matrix vector. Substituting into the Schrödinger equation in the Bloch approximation, we obtain
$$H u_{n\mathbf{k}}(\mathbf{r}) = \left( H_0 + \frac{\hbar}{m_0}\, \mathbf{k} \cdot \mathbf{\Pi} + \frac{\hbar^2 k^2}{2 m_0} \right) u_{n\mathbf{k}}(\mathbf{r}) = E_n(\mathbf{k})\, u_{n\mathbf{k}}(\mathbf{r}) ,$$

where

$$\mathbf{\Pi} = \mathbf{p} + \frac{\hbar}{4 m_0^2 c^2}\, \bar{\sigma} \times \nabla V$$
and the perturbation Hamiltonian can be defined as
$$H' = \frac{\hbar}{m_0}\, \mathbf{k} \cdot \mathbf{\Pi} .$$
The unperturbed Hamiltonian refers to the band-edge spin-orbit system (for k = 0). At the band edge, the conduction band Bloch waves exhibit s-like symmetry, while the valence band states are p-like (threefold degenerate without spin). Let us denote these states as
$|S\rangle$, and $|X\rangle$, $|Y\rangle$ and $|Z\rangle$, respectively. These Bloch functions can be pictured as a periodic repetition of atomic orbitals, repeated at intervals corresponding to the lattice spacing. The Bloch function can be expanded in the following manner:
$$u_{n\mathbf{k}}(\mathbf{r}) = \sum_{j'}^{A} a_{j'}(\mathbf{k})\, u_{j'0}(\mathbf{r}) + \sum_{\gamma}^{B} a_{\gamma}(\mathbf{k})\, u_{\gamma 0}(\mathbf{r}) ,$$

where $j'$ is in class A and $\gamma$ is in class B. The basis functions can be chosen to be
$$u_{10}(\mathbf{r}) = u_{el}(\mathbf{r}) = \left|S{\tfrac{1}{2}},{\tfrac{1}{2}}\right\rangle = \left|S\uparrow\right\rangle$$

$$u_{20}(\mathbf{r}) = u_{SO}(\mathbf{r}) = \left|{\tfrac{1}{2}},{\tfrac{1}{2}}\right\rangle = \tfrac{1}{\sqrt{3}}\left|(X + iY)\downarrow\right\rangle + \tfrac{1}{\sqrt{3}}\left|Z\uparrow\right\rangle$$

$$u_{30}(\mathbf{r}) = u_{lh}(\mathbf{r}) = \left|{\tfrac{3}{2}},{\tfrac{1}{2}}\right\rangle = -\tfrac{1}{\sqrt{6}}\left|(X + iY)\downarrow\right\rangle + \sqrt{\tfrac{2}{3}}\left|Z\uparrow\right\rangle$$

$$u_{40}(\mathbf{r}) = u_{hh}(\mathbf{r}) = \left|{\tfrac{3}{2}},{\tfrac{3}{2}}\right\rangle = -\tfrac{1}{\sqrt{2}}\left|(X + iY)\uparrow\right\rangle$$

$$u_{50}(\mathbf{r}) = \bar{u}_{el}(\mathbf{r}) = \left|S{\tfrac{1}{2}},-{\tfrac{1}{2}}\right\rangle = -\left|S\downarrow\right\rangle$$

$$u_{60}(\mathbf{r}) = \bar{u}_{SO}(\mathbf{r}) = \left|{\tfrac{1}{2}},-{\tfrac{1}{2}}\right\rangle = \tfrac{1}{\sqrt{3}}\left|(X - iY)\uparrow\right\rangle - \tfrac{1}{\sqrt{3}}\left|Z\downarrow\right\rangle$$

$$u_{70}(\mathbf{r}) = \bar{u}_{lh}(\mathbf{r}) = \left|{\tfrac{3}{2}},-{\tfrac{1}{2}}\right\rangle = \tfrac{1}{\sqrt{6}}\left|(X - iY)\uparrow\right\rangle + \sqrt{\tfrac{2}{3}}\left|Z\downarrow\right\rangle$$

$$u_{80}(\mathbf{r}) = \bar{u}_{hh}(\mathbf{r}) = \left|{\tfrac{3}{2}},-{\tfrac{3}{2}}\right\rangle = -\tfrac{1}{\sqrt{2}}\left|(X - iY)\downarrow\right\rangle .$$
Using Löwdin's method, only the following eigenvalue problem needs to be solved
$$\sum_{j'}^{A} \left( U_{jj'}^{A} - E\, \delta_{jj'} \right) a_{j'}(\mathbf{k}) = 0 ,$$
where
$$U_{jj'}^{A} = H_{jj'} + \sum_{\gamma \neq j, j'}^{B} \frac{H_{j\gamma} H_{\gamma j'}}{E_0 - E_\gamma} = H_{jj'} + \sum_{\gamma \neq j, j'}^{B} \frac{H'_{j\gamma} H'_{\gamma j'}}{E_0 - E_\gamma} ,$$
$$H'_{j\gamma} = \left\langle u_{j0} \right| \frac{\hbar}{m_0}\, \mathbf{k} \cdot \left( \mathbf{p} + \frac{\hbar}{4 m_0 c^2}\, \bar{\sigma} \times \nabla V \right) \left| u_{\gamma 0} \right\rangle \approx \sum_{\alpha} \frac{\hbar k_\alpha}{m_0}\, p_{j\gamma}^{\alpha} .$$
The second term of $\mathbf{\Pi}$ can be neglected compared to the similar term with $\mathbf{p}$ instead of $\mathbf{k}$. Similarly to the single band case, we can write for $U_{jj'}^{A}$
$$D_{jj'} \equiv U_{jj'}^{A} = E_j(0)\, \delta_{jj'} + \sum_{\alpha\beta} D_{jj'}^{\alpha\beta} k_\alpha k_\beta ,$$

$$D_{jj'}^{\alpha\beta} = \frac{\hbar^2}{2 m_0} \left[ \delta_{jj'} \delta_{\alpha\beta} + \sum_{\gamma}^{B} \frac{p_{j\gamma}^{\alpha} p_{\gamma j'}^{\beta} + p_{j\gamma}^{\beta} p_{\gamma j'}^{\alpha}}{m_0 (E_0 - E_\gamma)} \right] .$$
We now define the following parameters
$$A_0 = \frac{\hbar^2}{2 m_0} + \frac{\hbar^2}{m_0^2} \sum_{\gamma}^{B} \frac{p_{x\gamma}^{x} p_{\gamma x}^{x}}{E_0 - E_\gamma} ,$$

$$B_0 = \frac{\hbar^2}{2 m_0} + \frac{\hbar^2}{m_0^2} \sum_{\gamma}^{B} \frac{p_{x\gamma}^{y} p_{\gamma x}^{y}}{E_0 - E_\gamma} ,$$

$$C_0 = \frac{\hbar^2}{m_0^2} \sum_{\gamma}^{B} \frac{p_{x\gamma}^{x} p_{\gamma y}^{y} + p_{x\gamma}^{y} p_{\gamma y}^{x}}{E_0 - E_\gamma} ,$$
and the band structure parameters (or the Luttinger parameters) can be defined to be
$$\gamma_1 = -\frac{1}{3} \frac{2 m_0}{\hbar^2} (A_0 + 2 B_0) ,$$

$$\gamma_2 = -\frac{1}{6} \frac{2 m_0}{\hbar^2} (A_0 - B_0) ,$$

$$\gamma_3 = -\frac{1}{6} \frac{2 m_0}{\hbar^2} C_0 .$$
These parameters are very closely related to the effective masses of the holes in the various valence bands. $\gamma_1$ and $\gamma_2$ describe the coupling of the $|X\rangle$, $|Y\rangle$ and $|Z\rangle$ states to the other states. The third parameter, $\gamma_3$, relates to the anisotropy of the energy band structure around the $\Gamma$ point when $\gamma_2 \neq \gamma_3$.
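For orientation, a commonly quoted consequence of these definitions (a standard textbook result, not derived in this article) is that, along the [001] direction, the heavy- and light-hole effective masses follow directly from the Luttinger parameters:

$$m_{hh} = \frac{m_0}{\gamma_1 - 2\gamma_2} , \qquad m_{lh} = \frac{m_0}{\gamma_1 + 2\gamma_2} .$$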
== Explicit Hamiltonian matrix ==
The Luttinger–Kohn Hamiltonian $\mathbf{D}_{jj'}$ can be written explicitly as an 8×8 matrix (taking into account 8 bands: 2 conduction, 2 heavy-hole, 2 light-hole and 2 split-off):
$$\mathbf{H} = \begin{pmatrix}
E_{el} & P_z & \sqrt{2}P_z & -\sqrt{3}P_+ & 0 & \sqrt{2}P_- & P_- & 0 \\
P_z^{\dagger} & P+\Delta & \sqrt{2}Q^{\dagger} & -S^{\dagger}/\sqrt{2} & -\sqrt{2}P_+^{\dagger} & 0 & -\sqrt{3/2}\,S & -\sqrt{2}R \\
E_{el} & P_z & \sqrt{2}P_z & -\sqrt{3}P_+ & 0 & \sqrt{2}P_- & P_- & 0 \\
E_{el} & P_z & \sqrt{2}P_z & -\sqrt{3}P_+ & 0 & \sqrt{2}P_- & P_- & 0 \\
E_{el} & P_z & \sqrt{2}P_z & -\sqrt{3}P_+ & 0 & \sqrt{2}P_- & P_- & 0 \\
E_{el} & P_z & \sqrt{2}P_z & -\sqrt{3}P_+ & 0 & \sqrt{2}P_- & P_- & 0 \\
E_{el} & P_z & \sqrt{2}P_z & -\sqrt{3}P_+ & 0 & \sqrt{2}P_- & P_- & 0 \\
E_{el} & P_z & \sqrt{2}P_z & -\sqrt{3}P_+ & 0 & \sqrt{2}P_- & P_- & 0
\end{pmatrix}$$
== References ==
Luttinger, J. M.; Kohn, W., "Motion of Electrons and Holes in Perturbed Periodic Fields", Phys. Rev. 97, 869–883 (1955). https://journals.aps.org/pr/abstract/10.1103/PhysRev.97.869
Smoothed-particle hydrodynamics (SPH) is a computational method used for simulating the mechanics of continuum media, such as solid mechanics and fluid flows. It was developed by Gingold and Monaghan and Lucy in 1977, initially for astrophysical problems. It has been used in many fields of research, including astrophysics, ballistics, volcanology, and oceanography. It is a meshfree Lagrangian method (where the co-ordinates move with the fluid), and the resolution of the method can easily be adjusted with respect to variables such as density.
== Method ==
=== Advantages ===
By construction, SPH is a meshfree method, which makes it ideally suited to simulate problems dominated by complex boundary dynamics, like free surface flows, or large boundary displacement.
The lack of a mesh significantly simplifies the model implementation and its parallelization, even for many-core architectures.
SPH can be easily extended to a wide variety of fields, and hybridized with some other models, as discussed in Modelling Physics.
As discussed in section on weakly compressible SPH, the method has great conservation features.
The computational cost of SPH simulations per number of particles is significantly less than the cost of grid-based simulations per number of cells when the metric of interest is related to fluid density (e.g., the probability density function of density fluctuations). This is the case because in SPH the resolution is put where the matter is.
=== Limitations ===
Setting boundary conditions in SPH such as inlets and outlets and walls is more difficult than with grid-based methods. In fact, it has been stated that "the treatment of boundary conditions is certainly one of the most difficult technical points of the SPH method". This challenge is partly because in SPH the particles near the boundary change with time. Nonetheless, wall boundary conditions for SPH are available.
The computational cost of SPH simulations per number of particles is significantly larger than the cost of grid-based simulations per number of cells when the metric of interest is not (directly) related to density (e.g., the kinetic-energy spectrum). Therefore, overlooking issues of parallel speedup, the simulation of constant-density flows (e.g., external aerodynamics) is more efficient with grid-based methods than with SPH.
== Examples ==
=== Fluid dynamics ===
Smoothed-particle hydrodynamics is being increasingly used to model fluid motion as well. This is due to several benefits over traditional grid-based techniques. First, SPH guarantees conservation of mass without extra computation since the particles themselves represent mass. Second, SPH computes pressure from weighted contributions of neighboring particles rather than by solving linear systems of equations. Finally, unlike grid-based techniques, which must track fluid boundaries, SPH creates a free surface for two-phase interacting fluids directly since the particles represent the denser fluid (usually water) and empty space represents the lighter fluid (usually air). For these reasons, it is possible to simulate fluid motion using SPH in real time. However, both grid-based and SPH techniques still require the generation of renderable free surface geometry using a polygonization technique such as metaballs and marching cubes, point splatting, or 'carpet' visualization. For gas dynamics it is more appropriate to use the kernel function itself to produce a rendering of gas column density (e.g., as done in the SPLASH visualisation package).
One drawback over grid-based techniques is the need for large numbers of particles to produce simulations of equivalent resolution. In the typical implementation of both uniform grids and SPH particle techniques, many voxels or particles will be used to fill water volumes that are never rendered. However, accuracy can be significantly higher with sophisticated grid-based techniques, especially those coupled with particle methods (such as particle level sets), since it is easier to enforce the incompressibility condition in these systems. SPH for fluid simulation is being used increasingly in real-time animation and games where accuracy is not as critical as interactivity.
Recent work in SPH for fluid simulation has increased performance, accuracy, and areas of application:
B. Solenthaler, 2009, develops Predictive-Corrective SPH (PCISPH) to allow for better incompressibility constraints
M. Ihmsen et al., 2010, introduce boundary handling and adaptive time-stepping for PCISPH for accurate rigid body interactions
K. Bodin et al., 2011, replace the standard equation of state pressure with a density constraint and apply a variational time integrator
R. Hoetzlein, 2012, develops efficient GPU-based SPH for large scenes with Fluids v.3
N. Akinci et al., 2012, introduce a versatile boundary handling and two-way SPH-rigid coupling technique that is completely based on hydrodynamic forces; the approach is applicable to different types of SPH solvers
M. Macklin et al., 2013, simulate incompressible flows inside the Position Based Dynamics framework, allowing bigger timesteps
N. Akinci et al., 2013, introduce a versatile surface tension and two-way fluid-solid adhesion technique that allows simulating a variety of interesting physical effects that are observed in reality
J. Kyle and E. Terrell, 2013, apply SPH to Full-Film Lubrication
A. Mahdavi and N. Talebbeydokhti, 2015, propose a hybrid algorithm for implementation of solid boundary condition and simulate flow over a sharp crested weir
S. Tavakkol et al., 2016, develop curvSPH, which makes the horizontal and vertical size of particles independent and generates uniform mass distribution along curved boundaries
W. Kostorz and A. Esmail-Yakas, 2020, propose a general, efficient and simple method for evaluating normalization factors near piecewise-planar boundaries
Colagrossi et al., 2019, study flow around a cylinder close to a free-surface and compare with other techniques
=== Astrophysics ===
Smoothed-particle hydrodynamics's adaptive resolution, numerical conservation of physically conserved quantities, and ability to simulate phenomena covering many orders of magnitude make it ideal for computations in theoretical astrophysics.
Simulations of galaxy formation, star formation, stellar collisions, supernovae and meteor impacts are some of the wide variety of astrophysical and cosmological uses of this method.
SPH is used to model hydrodynamic flows, including possible effects of gravity. Incorporating other astrophysical processes which may be important, such as radiative transfer and magnetic fields, is an active area of research in the astronomical community, and has had some limited success.
=== Solid mechanics ===
Libersky and Petschek extended SPH to solid mechanics. The main advantage of SPH in this application is the possibility of dealing with larger local distortion than grid-based methods. This feature has been exploited in many applications in solid mechanics: metal forming, impact, crack growth, fracture, fragmentation, etc. Another important advantage of meshfree methods in general, and of SPH in particular, is that mesh dependence problems are naturally avoided given the meshfree nature of the method. In particular, mesh alignment is related to problems involving cracks and is avoided in SPH due to the isotropic support of the kernel functions. However, classical SPH formulations suffer from tensile instabilities and lack of consistency.
Over the past years, different corrections have been introduced to improve the accuracy of the SPH solution, leading to the RKPM of Liu et al. Randles and Libersky and Johnson and Beissel tried to solve the consistency problem in their studies of impact phenomena. Dyka et al. and Randles and Libersky introduced stress-point integration into SPH, and Ted Belytschko et al. showed that the stress-point technique removes the instability due to spurious singular modes, while tensile instabilities can be avoided by using a Lagrangian kernel. Many other recent studies can be found in the literature devoted to improving the convergence of the SPH method.
Recent improvements in understanding the convergence and stability of SPH have allowed for more widespread applications in Solid Mechanics. Other examples of applications and developments of the method include:
Metal forming simulations.
SPH-based method SPAM (Smoothed Particle Applied Mechanics) for impact fracture in solids by William G. Hoover.
Modified SPH (SPH/MLSPH) for fracture and fragmentation.
Taylor-SPH (TSPH) for shock wave propagation in solids.
Generalized coordinate SPH (GSPH) allocates particles inhomogeneously in the Cartesian coordinate system and arranges them via mapping in a generalized coordinate system in which the particles are aligned at a uniform spacing.
== Numerical tools ==
=== Interpolations ===
The smoothed-particle hydrodynamics (SPH) method works by dividing the fluid into a set of discrete moving elements $i, j, \ldots$, referred to as particles. Their Lagrangian nature allows setting their position $\mathbf{r}_i$ by integration of their velocity $\mathbf{v}_i$ as:

$$\frac{\mathrm{d}\mathbf{r}_i}{\mathrm{d}t} = \mathbf{v}_i .$$
These particles interact through a kernel function with characteristic radius known as the "smoothing length", typically represented in equations by $h$. This means that the physical quantity of any particle can be obtained by summing the relevant properties of all the particles that lie within the range of the kernel, the latter being used as a weighting function $W$. This can be understood in two steps. First, an arbitrary field $A$ is written as a convolution with $W$:
$$A(\mathbf{r}) = \int A(\mathbf{r}')\, W(|\mathbf{r} - \mathbf{r}'|, h)\, \mathrm{d}V(\mathbf{r}') .$$
The error in making the above approximation is of order $h^2$. Secondly, the integral is approximated using a Riemann summation over the particles:
$$A(\mathbf{r}) = \sum_j V_j A_j\, W(|\mathbf{r} - \mathbf{r}_j|, h) ,$$
where the summation over $j$ includes all particles in the simulation. $V_j$ is the volume of particle $j$, $A_j$ is the value of the quantity $A$ for particle $j$, and $\mathbf{r}$ denotes position. For example, the density $\rho_i$ of particle $i$ can be expressed as:
$$\rho_i = \rho(\mathbf{r}_i) = \sum_j m_j W_{ij} ,$$
where $m_j = \rho_j V_j$ denotes the particle mass and $\rho_j$ the particle density, while $W_{ij} = W_{ji}$ is a short notation for $W(|\mathbf{r}_i - \mathbf{r}_j|, h)$. The error made in approximating the integral by a discrete sum depends on $h$, on the particle size (i.e. $V_j^{1/d}$, $d$ being the space dimension), and on the particle arrangement in space. The latter effect is still poorly known.
Kernel functions commonly used include the Gaussian function, the quintic spline and the Wendland $C^2$ kernel. The latter two kernels are compactly supported (unlike the Gaussian, where there is a small contribution at any finite distance away), with support proportional to $h$. This has the advantage of saving computational effort by not including the relatively minor contributions from distant particles.
Although the size of the smoothing length can be fixed in both space and time, this does not take advantage of the full power of SPH. By assigning each particle its own smoothing length and allowing it to vary with time, the resolution of a simulation can be made to automatically adapt itself depending on local conditions. For example, in a very dense region where many particles are close together, the smoothing length can be made relatively short, yielding high spatial resolution. Conversely, in low-density regions where individual particles are far apart and the resolution is low, the smoothing length can be increased, optimising the computation for the regions of interest.
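The density summation above translates almost directly into code. The following Python sketch evaluates $\rho_i = \sum_j m_j W_{ij}$ with a standard cubic spline kernel; the O(N²) pair loop and the fixed smoothing length are simplifying assumptions for illustration, not how production codes are organized.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard cubic spline kernel W(r, h) in 3-D (support radius 2h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)            # 3-D normalisation constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Density at every particle: rho_i = sum_j m_j W(|r_i - r_j|, h).

    O(N^2) for clarity; production codes use cell lists or trees to
    restrict the sum to neighbours within the kernel support."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)
```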
=== Discretization of governing equations ===
For particles of constant mass, differentiating the interpolated density $\rho_i$ with respect to time yields

$$\frac{\mathrm{d}\rho_i}{\mathrm{d}t} = \sum_j m_j \left( \mathbf{v}_i - \mathbf{v}_j \right) \cdot \nabla W_{ij} ,$$
where $\nabla W_{ij} = -\nabla W_{ji}$ is the gradient of $W_{ij}$ with respect to $\mathbf{r}_i$. Comparing this equation with the continuity equation in the Lagrangian description (using material derivatives),

$$\frac{\mathrm{d}\rho}{\mathrm{d}t} = -\rho \nabla \cdot \mathbf{v} ,$$
it is apparent that its right-hand side is an approximation of $-\rho \nabla \cdot \mathbf{v}$; hence one defines a discrete divergence operator as follows:

$$\operatorname{D}_i \{ \mathbf{v}_j \} = -\frac{1}{\rho_i} \sum_j m_j \left( \mathbf{v}_i - \mathbf{v}_j \right) \cdot \nabla W_{ij} .$$
This operator gives an SPH approximation of $\nabla \cdot \mathbf{v}$ at the particle $i$ for a given set of particles with given masses $m_j$, positions $\mathbf{r}_j$ and velocities $\mathbf{v}_j$.
The other important equation for a compressible inviscid fluid is the Euler equation for momentum balance:

$$\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} = -\frac{1}{\rho} \nabla p + \mathbf{g} .$$
Similarly to continuity, the task is to define a discrete gradient operator in order to write

$$\frac{\mathrm{d}\mathbf{v}_i}{\mathrm{d}t} = -\frac{1}{\rho_i} \operatorname{\mathbf{G}}_i \{ p_j \} + \mathbf{g} .$$
One choice is

$$\operatorname{\mathbf{G}}_i \{ p_j \} = \rho_i \sum_j m_j \left( \frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2} \right) \nabla W_{ij} ,$$
which has the property of being skew-adjoint with the divergence operator above, in the sense that

$$\sum_i V_i \mathbf{v}_i \cdot \operatorname{\mathbf{G}}_i \{ p_j \} = -\sum_i V_i p_i \operatorname{D}_i \{ \mathbf{v}_j \} ,$$

this being a discrete version of the continuum identity

$$\int \mathbf{v} \cdot \operatorname{grad} p = -\int p \operatorname{div} \mathbf{v} .$$
This property leads to nice conservation properties. Notice also that this choice leads to a symmetric divergence operator and an antisymmetric gradient. Although there are several ways of discretizing the pressure gradient in the Euler equations, the above antisymmetric form is the most acknowledged one. It supports strict conservation of linear and angular momentum: the force exerted on particle $i$ by particle $j$ equals the one exerted on particle $j$ by particle $i$, up to a sign change of the effective direction, thanks to the antisymmetry property $\nabla W_{ij} = -\nabla W_{ji}$.
Nevertheless, other operators have been proposed, which may perform better numerically or physically. For instance, one drawback of these operators is that while the divergence $\operatorname{D}$ is zero-order consistent (i.e. yields zero when applied to a constant vector field), the gradient $\operatorname{\mathbf{G}}$ is not. Several techniques have been proposed to circumvent this issue, leading to renormalized operators.
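As a concrete illustration, the discrete divergence and antisymmetric pressure-gradient operators defined above can be sketched as follows; the `grad_W` callback and the O(N²) loops are assumptions for clarity, not an optimized implementation.

```python
import numpy as np

def divergence_and_gradient(pos, vel, p, m, rho, grad_W):
    """SPH divergence of velocity and (antisymmetric) pressure gradient.

    grad_W(r_ij) must return the kernel gradient with respect to r_i;
    any differentiable kernel from the previous section can be used."""
    n = len(pos)
    div_v = np.zeros(n)
    grad_p = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            gw = grad_W(pos[i] - pos[j])
            # D_i{v} = -(1/rho_i) sum_j m_j (v_i - v_j) . grad W_ij
            div_v[i] -= m[j] * np.dot(vel[i] - vel[j], gw) / rho[i]
            # G_i{p} = rho_i sum_j m_j (p_i/rho_i^2 + p_j/rho_j^2) grad W_ij
            grad_p[i] += rho[i] * m[j] * (p[i] / rho[i]**2 + p[j] / rho[j]**2) * gw
    return div_v, grad_p
```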
=== Variational principle ===
The above SPH governing equations can be derived from a least action principle, starting from the Lagrangian of a particle system:

$$\mathcal{L} = \sum_j m_j \left( \tfrac{1}{2} \mathbf{v}_j^2 - e_j + \mathbf{g} \cdot \mathbf{r}_j \right) ,$$

where $e_j$ is the particle specific internal energy. The Euler–Lagrange equation of variational mechanics reads, for each particle:

$$\frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial \mathcal{L}}{\partial \mathbf{v}_i} = \frac{\partial \mathcal{L}}{\partial \mathbf{r}_i} .$$
When applied to the above Lagrangian, it gives the following momentum equation:

$$m_i \frac{\mathrm{d}\mathbf{v}_i}{\mathrm{d}t} = -\sum_j m_j \frac{\partial e_j}{\partial \mathbf{r}_i} + m_i \mathbf{g} = -\sum_j m_j \frac{\partial e_j}{\partial \rho_j} \frac{\partial \rho_j}{\partial \mathbf{r}_i} + m_i \mathbf{g} ,$$

where the chain rule has been used, since $e_j$ depends on $\rho_j$, and the latter on the position of the particles.
Using the thermodynamic property $\mathrm{d}e = (p/\rho^2)\, \mathrm{d}\rho$ we may write

$$m_i \frac{\mathrm{d}\mathbf{v}_i}{\mathrm{d}t} = -\sum_j m_j \frac{p_j}{\rho_j^2} \frac{\partial \rho_j}{\partial \mathbf{r}_i} + m_i \mathbf{g} .$$
Plugging in the SPH density interpolation and differentiating $\partial \rho_j / \partial \mathbf{r}_i$ explicitly leads to

$$\frac{\mathrm{d}\mathbf{v}_i}{\mathrm{d}t} = -\sum_j m_j \left( \frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2} \right) \nabla W_{ij} + \mathbf{g} ,$$

which is the SPH momentum equation already mentioned, where we recognize the $\operatorname{\mathbf{G}}$ operator. This explains why linear momentum is conserved, and allows angular momentum and energy to be conserved as well.
=== Time integration ===
From the work done in the 1980s and 1990s on the numerical integration of point-like particles in large accelerators, appropriate time integrators have been developed with accurate conservation properties on the long term; they are called symplectic integrators. The most popular in the SPH literature is the leapfrog scheme, which reads for each particle $i$:
$$\begin{aligned} \mathbf{v}_i^{n+1/2} &= \mathbf{v}_i^{n} + \mathbf{a}_i^{n} \frac{\Delta t}{2} , \\ \mathbf{r}_i^{n+1} &= \mathbf{r}_i^{n} + \mathbf{v}_i^{n+1/2} \Delta t , \\ \mathbf{v}_i^{n+1} &= \mathbf{v}_i^{n+1/2} + \mathbf{a}_i^{n+1} \frac{\Delta t}{2} , \end{aligned}$$

where $\Delta t$ is the time step, superscripts stand for time iterations, and $\mathbf{a}_i$ is the particle acceleration, given by the right-hand side of the momentum equation.
Other symplectic integrators exist (see the reference textbook). It is recommended to use a symplectic (even low-order) scheme instead of a high order non-symplectic scheme, to avoid error accumulation after many iterations.
Integration of density has not been studied extensively (see below for more details).
Symplectic schemes are conservative but explicit, thus their numerical stability requires stability conditions, analogous to the Courant-Friedrichs-Lewy condition (see below).
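A minimal sketch of one leapfrog (kick-drift-kick) step, assuming a user-supplied `acceleration` callback for the right-hand side of the momentum equation:

```python
def leapfrog_step(r, v, a, dt, acceleration):
    """One kick-drift-kick leapfrog step for arrays of positions r,
    velocities v and accelerations a."""
    v_half = v + a * (dt / 2.0)          # half kick
    r_new = r + v_half * dt              # drift
    a_new = acceleration(r_new)          # e.g. pressure gradient + gravity
    v_new = v_half + a_new * (dt / 2.0)  # half kick
    return r_new, v_new, a_new
```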
=== Boundary techniques ===
When the SPH convolution is applied close to a boundary, i.e. closer than s · h, the integral support is truncated. Indeed, when the convolution is affected by a boundary, it must be split into two integrals:
$$A(\mathbf{r}) = \int_{\Omega(\mathbf{r})} A(\mathbf{r}')\, W(|\mathbf{r} - \mathbf{r}'|, h)\, \mathrm{d}\mathbf{r}' + \int_{B(\mathbf{r}) - \Omega(\mathbf{r})} A(\mathbf{r}')\, W(|\mathbf{r} - \mathbf{r}'|, h)\, \mathrm{d}\mathbf{r}' ,$$
where B(r) is the compact support ball centered at r, with radius s · h, and Ω(r) denotes the part of the compact support inside the computational domain, Ω ∩ B(r). Hence, imposing boundary conditions in SPH is completely based on approximating the second integral on the right hand side. The same can be of course applied to the differential operators computation,
$$\nabla A(\mathbf{r}) = \int_{\Omega(\mathbf{r})} A(\mathbf{r}')\, \nabla W(\mathbf{r} - \mathbf{r}', h)\, \mathrm{d}\mathbf{r}' + \int_{B(\mathbf{r}) - \Omega(\mathbf{r})} A(\mathbf{r}')\, \nabla W(\mathbf{r} - \mathbf{r}', h)\, \mathrm{d}\mathbf{r}' .$$
Several techniques have been introduced in the past to model boundaries in SPH.
==== Integral neglect ====
The most straightforward boundary model is to neglect the integral,

$$\int_{B(\mathbf{r}) - \Omega(\mathbf{r})} A(\mathbf{r}')\, \nabla W(\mathbf{r} - \mathbf{r}', h)\, \mathrm{d}\mathbf{r}' \simeq \mathbf{0} ,$$
such that just the bulk interactions are taken into account,

$$\nabla A_i = \sum_{j \in \Omega_i} V_j A_j \nabla W_{ij} .$$
This is a popular approach when a free surface is considered in single-phase simulations. The main benefit of this boundary condition is its obvious simplicity. However, several consistency issues must be considered when this boundary technique is applied, which in practice heavily limits its potential applications.
==== Fluid Extension ====
Probably the most popular methodology, or at least the most traditional one, to impose boundary conditions in SPH is the fluid extension technique. This technique is based on populating the compact support across the boundary with so-called ghost particles, conveniently imposing their field values. Along this line, the integral neglect methodology can be considered as a particular case of fluid extension in which the field, A, vanishes outside the computational domain. The main benefit of this methodology is its simplicity, provided that the boundary contribution is computed as part of the bulk interactions. Also, this methodology has been deeply analyzed in the literature. On the other hand, deploying ghost particles in the truncated domain is not a trivial task, so that modelling complex boundary shapes becomes cumbersome. The two most popular approaches to populate the empty domain with ghost particles are mirrored particles and fixed particles.
==== Boundary Integral ====
The newest boundary technique is the boundary integral methodology. In this methodology, the empty volume integral is replaced by a surface integral and a renormalization:

$$\nabla A_i = \frac{1}{\gamma_i} \left( \sum_{j \in \Omega_i} V_j A_j \nabla W_{ij} + \sum_{j \in \partial \Omega_i} S_j A_j \mathbf{n}_j W_{ij} \right) ,$$

$$\gamma_i = \sum_{j \in \Omega_i} V_j W_{ij} ,$$
with $\mathbf{n}_j$ the normal of the generic j-th boundary element. The surface term can also be solved considering a semi-analytic expression.
== Modelling physics ==
=== Hydrodynamics ===
==== Weakly compressible approach ====
Another way to determine the density is based on the SPH smoothing operator itself: the density is estimated from the particle distribution using the SPH interpolation. To overcome undesired errors at the free surface through kernel truncation, the density formulation can again be integrated in time.
The weakly compressible SPH in fluid dynamics is based on the discretization of the Navier–Stokes equations or Euler equations for compressible fluids. To close the system, an appropriate equation of state is utilized to link pressure $p$ and density $\rho$. Generally, the so-called Cole equation (sometimes mistakenly referred to as the "Tait equation") is used in SPH. It reads

$$p = \frac{\rho_0 c^2}{\gamma} \left( \left( \frac{\rho}{\rho_0} \right)^{\gamma} - 1 \right) + p_0 ,$$
where $\rho_0$ is the reference density and $c$ the speed of sound. For water, $\gamma = 7$ is commonly used. The background pressure $p_0$ is added to avoid negative pressure values.
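A direct transcription of the Cole equation of state, together with the weak-compressibility rule of thumb discussed below; the numeric values are placeholders for illustration:

```python
def cole_pressure(rho, rho0, c, gamma=7.0, p0=0.0):
    """Cole equation of state used in weakly compressible SPH."""
    return rho0 * c**2 / gamma * ((rho / rho0)**gamma - 1.0) + p0

# Weak-compressibility rule of thumb: choose the numerical sound speed so
# that density fluctuations stay below about 1% (Mach number ~ 0.1).
v_max = 2.0                    # hypothetical estimate of the flow speed, m/s
c_num = 10.0 * v_max           # numerical speed of sound
p = cole_pressure(1001.0, 1000.0, c_num)   # pressure at a 0.1% density excess
```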
Real nearly incompressible fluids such as water are characterized by very high speeds of sound, of the order of $10^3\ \mathrm{m/s}$. Hence, pressure information travels fast compared to the actual bulk flow, which leads to very small Mach numbers $M$. The momentum equation leads to the following relation:

$$\frac{\delta\rho}{\rho_0} \approx \frac{|\mathbf{v}|^2}{c^2} = M^2 ,$$
where $\delta\rho$ is the density change and $\mathbf{v}$ the velocity vector.
In practice, a value of c smaller than the real one is adopted to avoid time steps that are too small in the time integration scheme. Generally, a numerical speed of sound is adopted such that density variations smaller than 1% are allowed. This is the so-called weak-compressibility assumption. It corresponds to a Mach number smaller than 0.1, which implies:
$$c = 10\, v_{\text{max}} ,$$

where the maximum velocity $v_{\text{max}}$ needs to be estimated, e.g. by Torricelli's law or an educated guess. Since only small density variations occur, a linear equation of state can be adopted:
$$p = c^2 (\rho - \rho_0) .$$
Usually the weakly-compressible schemes are affected by high-frequency spurious noise on the pressure and density fields. This phenomenon is caused by the nonlinear interaction of acoustic waves and by the fact that the scheme is explicit in time and centered in space.
Through the years, several techniques have been proposed to get rid of this problem. They can be classified in three different groups:
the schemes that adopt density filters,
the models that add a diffusive term in the continuity equation,
the schemes that employ Riemann solvers to model the particle interaction.
===== Density filter technique =====
The schemes of the first group apply a filter directly on the density field to remove the spurious numerical noise. The most used filters are the MLS (moving least squares) and the Shepard filter, which can be applied at each time step or every n time steps. The more frequently the filtering procedure is applied, the more regular the resulting density and pressure fields. On the other hand, this leads to an increase in computational cost. In long simulations, the use of the filtering procedure may lead to the disruption of the hydrostatic pressure component and to an inconsistency between the global volume of fluid and the density field. Further, it does not ensure the enforcement of the dynamic free-surface boundary condition.
===== Diffusive term technique =====
A different way to smooth out the density and pressure fields is to add a diffusive term inside the continuity equation (group 2):

$$\frac{\mathrm{d}\rho_i}{\mathrm{d}t} = \sum_j m_j \left( \mathbf{v}_i - \mathbf{v}_j \right) \cdot \nabla W_{ij} + \mathcal{D}_i(\rho) ,$$
The first schemes that adopted such an approach were described in Ferrari and in Molteni, where the diffusive term was modeled as a Laplacian of the density field. A similar approach was also used in Fatehi and Manzari. In Antuono et al., a correction to the diffusive term of Molteni was proposed to remove some inconsistencies close to the free surface. In this case the adopted diffusive term is equivalent to a high-order differential operator on the density field. The scheme is called δ-SPH and preserves all the conservation properties of the SPH without diffusion (e.g., linear and angular momenta, total energy) along with a smooth and regular representation of the density and pressure fields.
In the third group there are those SPH schemes which employ numerical fluxes obtained through Riemann solvers to model the particle interactions.
===== Riemann solver technique =====
For an SPH method based on Riemann solvers, an inter-particle Riemann problem is constructed along a unit vector $\mathbf{e}_{ij} = -\mathbf{r}_{ij}/r_{ij}$ pointing from particle $i$ to particle $j$. In this Riemann problem the initial left and right states are on particles $i$ and $j$, respectively. The $L$ and $R$ states are

$$\begin{cases} (\rho_L, U_L, P_L) = (\rho_i, \mathbf{v}_i \cdot \mathbf{e}_{ij}, P_i) \\ (\rho_R, U_R, P_R) = (\rho_j, \mathbf{v}_j \cdot \mathbf{e}_{ij}, P_j) . \end{cases}$$
The solution of the Riemann problem results in three waves emanating from the discontinuity. Two waves, which can be shock or rarefaction waves, travel with the smallest and largest wave speeds. The middle wave is always a contact discontinuity and separates two intermediate states, denoted by
$(\rho_L^{\ast}, U_L^{\ast}, P_L^{\ast})$ and $(\rho_R^{\ast}, U_R^{\ast}, P_R^{\ast})$. By assuming that the intermediate state satisfies $U_L^{\ast} = U_R^{\ast} = U^{\ast}$ and $P_L^{\ast} = P_R^{\ast} = P^{\ast}$, a linearized Riemann solver for smooth flows or with only moderately strong shocks can be written as
$$\begin{cases} U^{\ast} = \overline{U} + \dfrac{1}{2} \dfrac{P_L - P_R}{\bar{\rho} c_0} \\ P^{\ast} = \overline{P} + \dfrac{1}{2} \bar{\rho} c_0 (U_L - U_R) , \end{cases}$$
where $\overline{U} = (U_L + U_R)/2$ and $\overline{P} = (P_L + P_R)/2$ are inter-particle averages. With the solution of the Riemann problem, i.e. $U^{\ast}$ and $P^{\ast}$, the discretization of the SPH method is
$$\frac{\mathrm{d}\rho_i}{\mathrm{d}t} = 2 \rho_i \sum_j \frac{m_j}{\rho_j} (\mathbf{v}_i - \mathbf{v}^{\ast}) \cdot \nabla_i W_{ij} ,$$

$$\frac{\mathrm{d}\mathbf{v}_i}{\mathrm{d}t} = -2 \sum_j m_j \left( \frac{P^{\ast}}{\rho_i \rho_j} \right) \nabla_i W_{ij} ,$$
where $\mathbf{v}^{\ast} = U^{\ast} \mathbf{e}_{ij} + (\overline{\mathbf{v}}_{ij} - \overline{U} \mathbf{e}_{ij})$. This indicates that the inter-particle average velocity and pressure are simply replaced by the solution of the Riemann problem. By comparing the two, it can be seen that the intermediate velocity and pressure from the inter-particle averages amount to implicit dissipation, i.e. density regularization and numerical viscosity, respectively.
Since the above discretization is very dissipative, a straightforward modification is to apply a limiter to decrease the implicit numerical dissipation by limiting the intermediate pressure:

$$P^{\ast} = \overline{P} + \frac{1}{2} \beta \overline{\rho} (U_L - U_R) ,$$
where the limiter is defined as

$$\beta = \min \big( \eta \max(U_L - U_R, 0),\ \overline{c} \big) .$$
Note that $\beta$ ensures that there is no dissipation when the fluid is under the action of an expansion wave, i.e. $U_L < U_R$, and that the parameter $\eta$ is used to modulate dissipation when the fluid is under the action of a compression wave, i.e. $U_L \geq U_R$. Numerical experiments have found that $\eta = 3$ is generally effective. Also note that the dissipation introduced by the intermediate velocity is not limited.
==== Incompressible approach ====
==== Viscosity modelling ====
In general, the description of hydrodynamic flows requires a convenient treatment of diffusive processes to model the viscosity in the Navier–Stokes equations. This needs special consideration because it involves the Laplacian differential operator. Since direct computation does not provide satisfactory results, several approaches to model the diffusion have been proposed.
Artificial viscosity

Introduced by Monaghan and Gingold, the artificial viscosity was used to deal with high Mach number fluid flows. It reads

$$\Pi_{ij} = \begin{cases} \dfrac{-\alpha \bar{c}_{ij} \phi_{ij} + \beta \phi_{ij}^2}{\bar{\rho}_{ij}} & \mathbf{v}_{ij} \cdot \mathbf{r}_{ij} < 0 \\ 0 & \mathbf{v}_{ij} \cdot \mathbf{r}_{ij} \geq 0 \end{cases}$$
Here, $\alpha$ controls a volume viscosity while $\beta$ acts similarly to the von Neumann–Richtmyer artificial viscosity. The term $\phi_{ij}$ is defined by

$$\phi_{ij} = \frac{h\, \mathbf{v}_{ij} \cdot \mathbf{r}_{ij}}{\Vert \mathbf{r}_{ij} \Vert^2 + \eta_h^2} ,$$
where $\eta_h$ is a small fraction of $h$ (e.g. $0.01h$) to prevent possible numerical infinities at close distances.
The artificial viscosity has also been shown to improve the overall stability of general flow simulations. Therefore, it is applied to inviscid problems in the following form:
$$\Pi_{ij} = \alpha h c\, \frac{\mathbf{v}_{ij} \cdot \mathbf{r}_{ij}}{\Vert \mathbf{r}_{ij} \Vert^2 + \eta_h^2} .$$
It is possible not only to stabilize inviscid simulations but also to model the physical viscosity by this approach. To do so,

$$\alpha h c = 2 (n + 2) \frac{\mu}{\rho}$$

is substituted in the equation above, where $n$ is the number of spatial dimensions of the model. This approach introduces the bulk viscosity $\zeta = \frac{5}{3} \mu$.
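A pairwise sketch of the Monaghan-Gingold artificial viscosity defined above; the default $\alpha$, $\beta$ and $\eta_h = 0.01h$ values are common choices for illustration, not universal constants:

```python
import numpy as np

def artificial_viscosity(v_ij, r_ij, c_bar, rho_bar, h,
                         alpha=1.0, beta=2.0, eta_h=None):
    """Monaghan-Gingold artificial viscosity Pi_ij for one particle pair.

    v_ij, r_ij     : relative velocity and position vectors
    c_bar, rho_bar : pair-averaged sound speed and density
    """
    if eta_h is None:
        eta_h = 0.01 * h                  # guards against r_ij -> 0
    vr = np.dot(v_ij, r_ij)
    if vr >= 0.0:                         # receding pair: no viscosity
        return 0.0
    phi = h * vr / (np.dot(r_ij, r_ij) + eta_h**2)
    return (-alpha * c_bar * phi + beta * phi**2) / rho_bar
```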
Morris

For low Reynolds numbers, the viscosity model by Morris was proposed:

$$[\nu \Delta \mathbf{v}]_{ij} = 2 \nu \frac{m_j}{\rho_j}\, \frac{\mathbf{r}_{ij} \cdot \nabla w_{h,ij}}{\Vert \mathbf{r}_{ij} \Vert^2 + \eta_h^2}\, \mathbf{v}_{ij} .$$
LoShao
==== Additional physics ====
Surface tension
Heat transfer
Turbulence
==== Multiphase extensions ====
=== Astrophysics ===
Often in astrophysics, one wishes to model self-gravity in addition to pure hydrodynamics. The particle-based nature of SPH makes it ideal to combine with a particle-based gravity solver, for instance tree gravity code, particle mesh, or particle-particle particle-mesh.
=== Solid mechanics and fluid-structure interaction (FSI) ===
==== Total Lagrangian formulation for solid mechanics ====
To discretize the governing equations of solid dynamics, a correction matrix $\mathbb{B}^0$ is first introduced to reproduce rigid-body rotation, where

$$\nabla_a^0 W_a = \frac{\partial W(|\mathbf{r}_{ab}^0|, h)}{\partial |\mathbf{r}_{ab}^0|}\, \mathbf{e}_{ab}^0$$

stands for the gradient of the kernel function evaluated at the initial reference configuration.
Note that subscripts $a$ and $b$ are used to denote solid particles, and the smoothing length $h$ is identical to that in the discretization of the fluid equations.
Using the initial configuration as the reference, the solid density is directly evaluated, where $J = \det(\mathbb{F})$ is the Jacobian determinant of the deformation tensor $\mathbb{F}$.
We can now discretize the momentum equation in a form built on the inter-particle averaged first Piola–Kirchhoff stress $\tilde{\mathbb{P}}$. Here $\mathbf{f}_a^{F:p}$ and $\mathbf{f}_a^{F:v}$ correspond to the fluid pressure and viscous forces acting on the solid particle $a$, respectively.
==== Fluid-structure coupling ====
In fluid-structure coupling, the surrounding solid structure behaves as a moving boundary for the fluid, and the no-slip boundary condition is imposed at the fluid-structure interface. The interaction forces $\mathbf{f}_i^{S:p}$ and $\mathbf{f}_i^{S:v}$ acting on a fluid particle $i$, due to the presence of the neighboring solid particle $a$, are obtained from an imaginary pressure $p_a^d$ and velocity $\mathbf{v}_a^d$, where $\mathbf{n}^S$ denotes the surface normal direction of the solid structure, and the imaginary particle density $\rho_a^d$ is calculated through the equation of state.
Accordingly, the interaction forces $\mathbf{f}_a^{F:p}$ and $\mathbf{f}_a^{F:v}$ acting on a solid particle $a$ are given in the same way. The anti-symmetric property of the derivative of the kernel function ensures momentum conservation for each pair of interacting particles $i$ and $a$.
=== Others ===
The discrete element method, used for simulating granular materials, is related to SPH.
== Variants of the method ==
== References ==
== Further reading ==
Hoover, W. G. (2006); Smooth Particle Applied Mechanics: The State of the Art, World Scientific.
Stellingwerf, R. F.; Wingate, C. A.; "Impact Modelling with SPH", Memorie della Societa Astronomia Italiana, Vol. 65, p. 1117 (1994).
Amada, T.; Imura, M.; Yasumuro, Y.; Manabe, Y.; and Chihara, K. (2004); "Particle-based fluid simulation on GPU", in Proceedings of ACM Workshop on General-purpose Computing on Graphics Processors (August, 2004, Los Angeles, California).
Desbrun, M.; and Cani, M.-P. (1996). "Smoothed Particles: a new paradigm for animating highly deformable bodies" in Proceedings of Eurographics Workshop on Computer Animation and Simulation (August 1996, Poitiers, France).
Hegeman, K.; Carr, N. A.; and Miller, G. S. P.; "Particle-based fluid simulation on the GPU", in Proceedings of International Conference on Computational Science (Reading, UK, May 2006), Lecture Notes in Computer Science v. 3994/2006 (Springer-Verlag).
Kelager, M. (2006) Lagrangian Fluid Dynamics Using Smoothed Particle Hydrodynamics, MSc Thesis, Univ. Copenhagen.
Kolb, A.; and Cuntz, N. (2005); "Dynamic particle coupling for GPU-based fluid simulation", in Proceedings of the 18th Symposium on Simulation Techniques (2005) pp. 722–727.
Liu, G. R.; and Liu, M. B.; Smoothed Particle Hydrodynamics: a meshfree particle method, Singapore: World Scientific (2003).
Monaghan, Joseph J. (1992); "Smoothed Particle Hydrodynamics", Annual Review of Astronomy and Astrophysics, 30: 543–74.
Muller, M.; Charypar, D.; and Gross, M.; "Particle-based Fluid Simulation for Interactive Applications", in Breen, D; and Lin, M. (eds.), Proceedings of Eurographics/SIGGRAPH Symposium on Computer Animation (2003).
Vesterlund, M.; Simulation and Rendering of a Viscous Fluid Using Smoothed Particle Hydrodynamics, MSc Thesis, Umea University, Sweden.
Violeau, D.; Fluid Mechanics and the SPH method, Oxford University Press (2012).
== External links ==
First large simulation of star formation using SPH
SPHERIC (SPH rEsearch and engineeRing International Community)
ITVO is the web-site of The Italian Theoretical Virtual Observatory created to query a database of numerical simulation archive.
SPHC Image Gallery depicts a wide variety of test cases, experimental validations, and commercial applications of the SPH code SPHC.
A derivation of the SPH model starting from Navier-Stokes equations
=== Software ===
Algodoo is a 2D simulation framework for education using SPH
AQUAgpusph is the free (GPLv3) SPH of the researchers, by the researchers, for the researchers
dive solutions is a commercial web-based SPH engineering software for CFD purposes
DualSPHysics is a mostly open source SPH code based on SPHysics and using GPU computing. The open source components are available under the LGPL.
FLUIDS v.1 is a simple, open source (Zlib), real-time 3D SPH implementation in C++ for liquids for CPU and GPU.
Fluidix is a GPU-based particle simulation API available from OneZero Software
GADGET is a freely available (GPL) code for cosmological N-body/SPH simulations
GPUSPH SPH simulator with viscosity (GPLv3)
Pasimodo is a program package for particle-based simulation methods, e.g. SPH
LAMMPS is a massively parallel, open-source classical molecular dynamics code that can perform SPH simulations
Physics Abstraction Layer is an open source abstraction system that supports real time physics engines with SPH support
PreonLab is a commercial engineering software developed by FIFTY2 Technology implementing an implicit SPH method
Punto is a freely available visualisation tool for particle simulations
pysph Open Source Framework for Smoothed Particle Hydrodynamics in Python (New BSD License)
Py-SPHViewer Open Source python visualisation tool for Smoothed Particle Hydrodynamics simulations.
RealFlow Commercial SPH solver for the cinema industry.
RheoCube is a commercial SaaS product by Lorenz Research for the study and prediction of complex-fluid rheology and stability
SimPARTIX is a commercial simulation package for SPH and Discrete element method (DEM) simulations from Fraunhofer IWM
SPH-flow
SPHERA
SPHinXsys is an open source multi-physics, multi-resolution SPH library. It provides C++ APIs for physical accurate simulation and aims to model coupled industrial dynamic systems including fluid, solid, multi-body dynamics and beyond.
SPHysics is an open source SPH implementation in Fortran
SPLASH is an open source (GPL) visualisation tool for SPH simulations
SYMPLER: A freeware SYMbolic ParticLE simulatoR from the University of Freiburg.
Nauticle is a general-purpose computational tool for particle-based numerical methods.
NDYNAMICS is commercial fluid simulation software based on implicit SPH, developed by CENTROID LAB and currently used for internal/external flooding, nuclear, and chemical engineering applications. | Wikipedia/Smoothed-particle_hydrodynamics |
In physics, lattice gauge theory is the study of gauge theories on a spacetime that has been discretized into a lattice.
Gauge theories are important in particle physics, and include the prevailing theories of elementary particles: quantum electrodynamics, quantum chromodynamics (QCD) and the Standard Model of particle physics. Non-perturbative gauge theory calculations in continuous spacetime formally involve evaluating an infinite-dimensional path integral, which is computationally intractable. By working on a discrete spacetime, the path integral becomes finite-dimensional, and can be evaluated by stochastic simulation techniques such as the Monte Carlo method. When the size of the lattice is taken infinitely large and its sites infinitesimally close to each other, the continuum gauge theory is recovered.
== Basics ==
In lattice gauge theory, the spacetime is Wick rotated into Euclidean space and discretized into a lattice with sites separated by distance {\displaystyle a} and connected by links. In the most commonly considered cases, such as lattice QCD, fermion fields are defined at lattice sites (which leads to fermion doubling), while the gauge fields are defined on the links. That is, an element U of the compact Lie group G (not the algebra) is assigned to each link. Hence, to simulate QCD with the Lie group SU(3), a 3×3 unitary matrix is defined on each link. The link is assigned an orientation, with the inverse element corresponding to the same link with the opposite orientation. Each node is given a value in {\displaystyle \mathbb {C} ^{3}} (a color 3-vector, the space on which the fundamental representation of SU(3) acts), a bispinor (Dirac 4-spinor), an {\displaystyle n_{f}} vector, and a Grassmann variable.
Thus, the composition of links' SU(3) elements along a path (i.e. the ordered multiplication of their matrices) approximates a path-ordered exponential (geometric integral), from which Wilson loop values can be calculated for closed paths.
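To make the path-ordered product concrete, the following minimal numerical sketch (added here as an illustration, not part of any lattice QCD package; the helper names random_su3 and wilson_loop are invented, and Haar-random links stand in for a real gauge configuration) multiplies SU(3) link matrices around a 1×1 loop and takes the trace:

```python
import numpy as np

def random_su3():
    # Haar-random U(3) via QR of a complex Gaussian matrix, with phases
    # fixed to make the decomposition unique; dividing by the cube root
    # of the determinant then gives an SU(3) element.
    z = (np.random.randn(3, 3) + 1j * np.random.randn(3, 3)) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.linalg.det(q) ** (1 / 3)

def wilson_loop(path):
    # Ordered product of link matrices along a closed path. Each entry is
    # (U, sign): sign=+1 follows the link's orientation, sign=-1 traverses
    # it backwards, which uses the inverse (the conjugate transpose).
    P = np.eye(3, dtype=complex)
    for U, sign in path:
        P = P @ (U if sign == +1 else U.conj().T)
    return np.trace(P)

# A 1x1 Wilson loop (plaquette): two links forward, two links backward.
U = [random_su3() for _ in range(4)]
W = wilson_loop([(U[0], +1), (U[1], +1), (U[2], -1), (U[3], -1)])
print("Re tr W =", W.real)
```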
== Yang–Mills action ==
The Yang–Mills action is written on the lattice using Wilson loops (named after Kenneth G. Wilson), so that the limit {\displaystyle a\to 0} formally reproduces the original continuum action. Given a faithful irreducible representation ρ of G, the lattice Yang–Mills action, known as the Wilson action, is the sum over all lattice faces F (plaquettes) of the (real component of the) trace over the n links e1, ..., en in the Wilson loop around each face,
{\displaystyle S=\sum _{F}-\Re \{\chi ^{(\rho )}(U(e_{1})\cdots U(e_{n}))\}.}
Here, χ^{(ρ)} is the character of the representation ρ. If ρ is a real (or pseudoreal) representation, taking the real component is redundant, because even if the orientation of a Wilson loop is flipped, its contribution to the action remains unchanged.
There are many possible Wilson actions, depending on which Wilson loops are used in the action. The simplest Wilson action uses only the 1×1 Wilson loop, and differs from the continuum action by "lattice artifacts" proportional to the small lattice spacing {\displaystyle a}. By using more complicated Wilson loops to construct "improved actions", lattice artifacts can be reduced to be proportional to {\displaystyle a^{2}}, making computations more accurate.
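The scaling of these artifacts can be illustrated by analogy (a sketch added here, not lattice code): just as improved actions cancel the leading discretization error, adding one point to a one-sided finite-difference stencil reduces its error from order {\displaystyle a} to order {\displaystyle a^{2}}.

```python
import numpy as np

# Approximate f'(x0) with a "simple" one-sided difference (error ~ a) and
# an "improved" three-point stencil (error ~ a^2); halving a should halve
# the first error but quarter the second. Test function chosen arbitrarily.
f, x0, exact = np.sin, 0.5, np.cos(0.5)
for a in (0.1, 0.05, 0.025):
    simple = (f(x0 + a) - f(x0)) / a
    improved = (4 * f(x0 + a) - f(x0 + 2 * a) - 3 * f(x0)) / (2 * a)
    print(f"a={a:5.3f}  simple error={abs(simple - exact):.2e}  "
          f"improved error={abs(improved - exact):.2e}")
```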
== Measurements and calculations ==
Quantities such as particle masses are stochastically calculated using techniques such as the Monte Carlo method. Gauge field configurations are generated with probabilities proportional to {\displaystyle e^{-\beta S}}, where {\displaystyle S} is the lattice action and {\displaystyle \beta } is related to the lattice spacing {\displaystyle a}. The quantity of interest is calculated for each configuration, and averaged. Calculations are often repeated at different lattice spacings {\displaystyle a} so that the result can be extrapolated to the continuum, {\displaystyle a\to 0}.
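The sampling step can be sketched in a toy setting (an illustration added here, far simpler than production lattice QCD codes; all parameter values are arbitrary): compact U(1) gauge theory on a small two-dimensional lattice, updated link by link with the Metropolis algorithm so that configurations occur with probability proportional to {\displaystyle e^{-S}} with {\displaystyle S=\beta \sum _{P}(1-\cos \theta _{P})}.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, n_sweeps = 8, 2.0, 200
theta = np.zeros((L, L, 2))  # one U(1) angle per site and direction mu=0,1

def plaquette(t, x, y):
    # Oriented sum of link angles around the elementary square at (x, y).
    return (t[x, y, 0] + t[(x + 1) % L, y, 1]
            - t[x, (y + 1) % L, 0] - t[x, y, 1])

def local_action(t, x, y, mu):
    # beta * sum of (1 - cos theta_P) over the two plaquettes containing
    # link (x, y, mu); only these terms change when that link is updated.
    if mu == 0:
        plaqs = (plaquette(t, x, y), plaquette(t, x, (y - 1) % L))
    else:
        plaqs = (plaquette(t, x, y), plaquette(t, (x - 1) % L, y))
    return beta * sum(1.0 - np.cos(p) for p in plaqs)

for _ in range(n_sweeps):
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old = theta[x, y, mu]
                s_old = local_action(theta, x, y, mu)
                theta[x, y, mu] = old + rng.uniform(-0.5, 0.5)
                # Metropolis accept/reject with probability min(1, e^{-dS}).
                if rng.random() >= np.exp(-(local_action(theta, x, y, mu) - s_old)):
                    theta[x, y, mu] = old

plaq = [np.cos(plaquette(theta, x, y)) for x in range(L) for y in range(L)]
print("average plaquette:", np.mean(plaq))
```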
Such calculations are often extremely computationally intensive, and can require the use of the largest available supercomputers. To reduce the computational burden, the so-called quenched approximation can be used, in which the fermionic fields are treated as non-dynamic "frozen" variables. While this was common in early lattice QCD calculations, "dynamical" fermions are now standard. These simulations typically use algorithms based on molecular dynamics or microcanonical ensemble methods. An alternative approach could be simulation on quantum computers.
Lattice QCD computations show, for example, that in a meson not only the particles (quarks and antiquarks) but also the "fluxtubes" of the gluon fields are important.
== Quantum triviality ==
Lattice gauge theory is also important for the study of quantum triviality by the real-space renormalization group. The most important information in the RG flow is its set of fixed points.
The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to be trivial or noninteracting. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
Triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for it. This fact is important because quantum triviality can be used to bound or even predict parameters such as the mass of the Higgs boson. Lattice calculations have been useful in this context.
== Other applications ==
Solvable two-dimensional lattice gauge theories were introduced as early as 1971, as models with interesting statistical properties, by the theorist Franz Wegner, who worked in the field of phase transitions.
When only 1×1 Wilson loops appear in the action, lattice gauge theory can be shown to be exactly dual to spin foam models.
== See also ==
Hamiltonian lattice gauge theory
Lattice field theory
Lattice QCD
Quantum triviality
Wilson action
== References ==
== Further reading ==
Creutz, M., Quarks, gluons and lattices, Cambridge University Press, Cambridge, (1985). ISBN 978-0521315357
Montvay, I., Münster, G., Quantum Fields on a Lattice, Cambridge University Press, Cambridge, (1997). ISBN 978-0521599177
Makeenko, Y., Methods of contemporary gauge theory, Cambridge University Press, Cambridge, (2002). ISBN 0-521-80911-8.
Smit, J., Introduction to Quantum Fields on a Lattice, Cambridge University Press, Cambridge, (2002). ISBN 978-0521890519
Rothe, H., Lattice Gauge Theories, An Introduction, World Scientific, Singapore, (2005). ISBN 978-9814365857
DeGrand, T., DeTar, C., Lattice Methods for Quantum Chromodynamics, World Scientific, Singapore, (2006). ISBN 978-9812567277
Gattringer, C., Lang, C. B., Quantum Chromodynamics on the Lattice, Springer, (2010). ISBN 978-3642018497
Knechtli, F., Günther, M., Peardon, M., Lattice Quantum Chromodynamics: Practical Essentials, Springer, (2016). ISBN 978-9402409970
Weisz Peter, Majumdar Pushan (2012). "Lattice gauge theories". Scholarpedia. 7 (4): 8615. Bibcode:2012SchpJ...7.8615W. doi:10.4249/scholarpedia.8615.
== External links ==
The FermiQCD Library for Lattice Field theory
US Lattice Quantum Chromodynamics Software Libraries | Wikipedia/Lattice_gauge_theory |
Materials science is an interdisciplinary field of researching and discovering materials. Materials engineering is an engineering field of finding uses for materials in other fields and industries.
The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.
Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy.
Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.
== History ==
The material of choice of a given era is often a defining point. Phrases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race; the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.
Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s, "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent materials science field focused on addressing materials from the macro-level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. Due to the expanded knowledge of the link between atomic and molecular processes as well as the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. A prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties, and understand phenomena.
== Fundamentals ==
A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us; new and advanced materials being developed include nanomaterials, biomaterials, and energy materials, to name a few.
The basis of materials science is studying the interplay between the structure of materials, the processing methods to make that material, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements to the microstructure and the macroscopic features from processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials.
=== Structure ===
Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc.
Structure is studied in the following levels.
==== Atomic structure ====
Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Much of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.
===== Bonding =====
To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.
===== Crystallography =====
Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. One of the fundamental concepts regarding the crystal structure of a material is the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials include parallelepiped and hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include dislocations (edge and screw types), vacancies, and self-interstitials, among other linear, planar, and three-dimensional types of defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
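As a worked example of unit-cell reasoning (an illustrative sketch, not text from the source): in the face-centered cubic lattice, atoms touch along the face diagonal, which fixes the lattice parameter in terms of the atomic radius and yields the familiar atomic packing factor of about 0.74.

```python
import math

r = 1.0                                     # atomic radius (arbitrary units)
a = 4 * r / math.sqrt(2)                    # atoms touch along the face diagonal: 4r = a*sqrt(2)
atoms_per_cell = 8 * (1 / 8) + 6 * (1 / 2)  # corner + face-center contributions = 4
apf = atoms_per_cell * (4 / 3) * math.pi * r**3 / a**3
print(f"FCC: a = {a:.4f} r, atomic packing factor = {apf:.4f}")  # ~0.7405
```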
==== Nanostructure ====
Materials whose atoms and molecules form constituents in the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.
Nanostructure deals with objects and structures that are in the 1 – 100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties.
In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale.
Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm.
Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater.
Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) are often used synonymously, although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.
==== Microstructure ====
Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.
The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties.
==== Macrostructure ====
Macrostructure is the appearance of a material at the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.
=== Properties ===
Materials exhibit myriad properties, including the following.
Mechanical properties, see Strength of materials
Chemical properties, see Chemistry
Electrical properties, see Electricity
Thermal properties, see Thermodynamics
Optical properties, see Optics and Photonics
Magnetic properties, see Magnetism
The properties of a material determine its usability and hence its engineering application.
=== Processing ===
Synthesis and processing involves the creation of a material with the desired micro- and nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.
=== Thermodynamics ===
Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics.
The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium.
=== Kinetics ===
Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. When applied to materials science, it deals with how a material changes with time (moves from non-equilibrium to equilibrium state) due to application of a certain field. It details the rate of various processes evolving in materials including shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change. Kinetics is essential in processing of materials because, among other things, it details how the microstructure changes with application of heat.
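A short worked example of this temperature dependence (the constants below are textbook-style values for carbon diffusing in α-iron, quoted here as an assumption): diffusion coefficients typically follow an Arrhenius law, D = D0 exp(−Q/RT), so modest heating accelerates diffusion by many orders of magnitude.

```python
import math

def diffusivity(d0, q, temp):
    # Arrhenius form: D = D0 * exp(-Q / (R * T)), with D0 in m^2/s,
    # activation energy Q in J/mol, and temperature T in kelvin.
    R = 8.314  # gas constant, J/(mol K)
    return d0 * math.exp(-q / (R * temp))

d0, q = 6.2e-7, 80_000  # assumed values for C in alpha-iron
for temp in (300, 600, 900):
    print(f"T = {temp} K: D = {diffusivity(d0, q, temp):.3e} m^2/s")
```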
== Research ==
Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas.
=== Nanomaterials ===
Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10⁻⁹ m), but is usually 1–100 nm. Nanomaterials research takes a materials science based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc.
=== Biomaterials ===
A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.
Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches using metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material.
=== Electronic, optical, and magnetic ===
Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance.
Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer.
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics.
=== Computational materials science ===
With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more.
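To give a flavor of what the simplest such simulations evaluate (a toy sketch, not any specific package mentioned above): the Lennard-Jones pair potential is a standard ingredient of molecular dynamics, and a few lines suffice to locate the energy-minimizing separation of a two-atom system.

```python
import numpy as np

def lj_energy(r, epsilon=1.0, sigma=1.0):
    # Lennard-Jones pair potential: 4*eps*((sigma/r)**12 - (sigma/r)**6).
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

r = np.linspace(0.9, 2.0, 2001)
e = lj_energy(r)
print(f"minimum energy {e.min():.4f} eps at r = {r[e.argmin()]:.4f} sigma")
# Analytic check: the minimum sits at r = 2**(1/6) sigma ~ 1.1225, E = -eps.
```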
== Industry ==
Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).
Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced.
Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass.
=== Ceramics and glasses ===
Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO2 (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components.
Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties.
Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries.
=== Composites ===
Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases.
Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured-pyrolized to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide.
Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose.
=== Polymers ===
Polymers are chemical compounds made up of a large number of identical components linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics.
Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.
Polycarbonate would be normally considered an engineering plastic (other examples include PEEK, ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics.
Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc.
The dividing lines between the various types of plastics are based not on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low-friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.
=== Metal alloys ===
The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value.
Iron alloyed with various proportions of carbon gives low, mid and high carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. Cast iron is defined as an iron–carbon alloy with more than 2.00%, but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added in stainless steels.
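Restating the numeric boundaries just quoted as a small check (the function name classify_fe_c_alloy is invented for this sketch):

```python
def classify_fe_c_alloy(wt_pct_carbon: float) -> str:
    # Boundaries as quoted in the text: steel is 0.01-2.00 wt% carbon,
    # cast iron is above 2.00 and below 6.67 wt% carbon.
    if 0.01 <= wt_pct_carbon <= 2.00:
        return "steel"
    if 2.00 < wt_pct_carbon < 6.67:
        return "cast iron"
    return "outside the usual steel / cast-iron ranges"

for c in (0.05, 0.8, 3.2, 7.0):
    print(f"{c:5.2f} wt% C -> {classify_fe_c_alloy(c)}")
```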
Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been relatively recently developed. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.
=== Semiconductors ===
A semiconductor is a material that has a resistivity between a conductor and insulator. Modern day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. Its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate.
Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most popular semiconductor used. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride and have various applications.
== Relation with other fields ==
Materials science evolved, starting from the 1950s because it was recognized that to create, discover and design new materials, one had to approach it in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more.
The field of materials science and engineering is important both from a scientific perspective and from an applications perspective. Materials are of the utmost importance for engineers (and other applied fields), because the usage of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education.
Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and are also areas in which materials physicists work.
The field is inherently interdisciplinary, and materials scientists or engineers must be aware of, and make use of, the methods of the physicist, chemist and engineer. Fields such as life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches; thus, there remain close relationships with these fields. Conversely, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields.
== Emerging technologies ==
== Subdisciplines ==
The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites.
Ceramic engineering
Metallurgy
Polymer science and engineering
Composite engineering
There are additionally broadly applicable, materials-independent endeavors.
Materials characterization (spectroscopy, microscopy, diffraction)
Computational materials science
Materials informatics and selection
There are also relatively broad focuses across materials on specific phenomena and techniques.
Crystallography
Surface science
Tribology
Microelectronics
== Related or interdisciplinary fields ==
Condensed matter physics, solid-state physics and solid-state chemistry
Nanotechnology
Mineralogy
Supramolecular chemistry
Biomaterials science
== Professional societies ==
American Ceramic Society
ASM International
Association for Iron and Steel Technology
Materials Research Society
The Minerals, Metals & Materials Society
== See also ==
== References ==
=== Citations ===
=== Bibliography ===
Ashby, Michael; Hugh Shercliff; David Cebon (2007). Materials: engineering, science, processing and design (1st ed.). Butterworth-Heinemann. ISBN 978-0-7506-8391-3.
Askeland, Donald R.; Pradeep P. Phulé (2005). The Science & Engineering of Materials (5th ed.). Thomson-Engineering. ISBN 978-0-534-55396-8.
Callister, Jr., William D. (2000). Materials Science and Engineering – An Introduction (5th ed.). John Wiley and Sons. ISBN 978-0-471-32013-5.
Eberhart, Mark (2003). Why Things Break: Understanding the World by the Way It Comes Apart. Harmony. ISBN 978-1-4000-4760-4.
Gaskell, David R. (1995). Introduction to the Thermodynamics of Materials (4th ed.). Taylor and Francis Publishing. ISBN 978-1-56032-992-3.
González-Viñas, W. & Mancini, H.L. (2004). An Introduction to Materials Science. Princeton University Press. ISBN 978-0-691-07097-1.
Gordon, James Edward (1984). The New Science of Strong Materials or Why You Don't Fall Through the Floor (eissue ed.). Princeton University Press. ISBN 978-0-691-02380-9.
Mathews, F.L. & Rawlings, R.D. (1999). Composite Materials: Engineering and Science. Boca Raton: CRC Press. ISBN 978-0-8493-0621-1.
Lewis, P.R.; Reynolds, K. & Gagg, C. (2003). Forensic Materials Engineering: Case Studies. Boca Raton: CRC Press. ISBN 9780849311826.
Wachtman, John B. (1996). Mechanical Properties of Ceramics. New York: Wiley-Interscience, John Wiley & Son's. ISBN 978-0-471-13316-2.
Walker, P., ed. (1993). Chambers Dictionary of Materials Science and Technology. Chambers Publishing. ISBN 978-0-550-13249-9.
Mahajan, S. (2015). "The role of materials science in the evolution of microelectronics". MRS Bulletin. 12 (40): 1079–1088. Bibcode:2015MRSBu..40.1079M. doi:10.1557/mrs.2015.276.
== Further reading ==
Timeline of Materials Science Archived 2011-07-27 at the Wayback Machine at The Minerals, Metals & Materials Society (TMS) – accessed March 2007
Burns, G.; Glazer, A.M. (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 978-0-12-145761-7.
Cullity, B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 978-0-534-55396-8.
Giacovazzo, C; Monaco HL; Viterbo D; Scordari F; Gilli G; Zanotti G; Catti M (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 978-0-19-855578-0.
Green, D.J.; Hannink, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 978-0-8493-6594-2.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 1: Neutron Scattering. Oxford: Clarendon Press. ISBN 978-0-19-852015-3.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 2: Condensed Matter. Oxford: Clarendon Press. ISBN 978-0-19-852017-7.
O'Keeffe, M.; Hyde, B.G. (1996). "Crystal Structures; I. Patterns and Symmetry". Zeitschrift für Kristallographie – Crystalline Materials. 212 (12). Washington, DC: Mineralogical Society of America, Monograph Series: 899. Bibcode:1997ZK....212..899K. doi:10.1524/zkri.1997.212.12.899. ISBN 978-0-939950-40-9.
Squires, G.L. (1996). Introduction to the Theory of Thermal Neutron Scattering (2nd ed.). Mineola, New York: Dover Publications Inc. ISBN 978-0-486-69447-4.
Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 978-0-19-855577-3.
== External links ==
MS&T conference organized by the main materials societies
MIT OpenCourseWare for MSE | Wikipedia/Material_science |
Computational magnetohydrodynamics (CMHD) is a rapidly developing branch of magnetohydrodynamics that uses numerical methods and algorithms to solve and analyze problems that involve electrically conducting fluids. Most of the methods used in CMHD are borrowed from the well-established techniques employed in computational fluid dynamics. The complexity mainly arises due to the presence of a magnetic field and its coupling with the fluid. One of the important issues is to numerically maintain the {\displaystyle \nabla \cdot {\mathbf {B} }=0} (conservation of magnetic flux) condition from Maxwell's equations, to avoid the presence of unrealistic effects, namely magnetic monopoles, in the solutions.
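One common way to respect this constraint, constrained transport, stores field components on cell faces so that a discrete divergence cancels exactly. The sketch below (the helper name max_div_b is invented; assumptions: a 2D staggered grid and a field built from a discrete vector potential) verifies the cancellation to machine precision.

```python
import numpy as np

def max_div_b(bx, by, dx, dy):
    # Discrete divergence per cell of a face-centered field:
    # bx has shape (nx+1, ny) on x-faces, by has shape (nx, ny+1) on y-faces.
    div = (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy
    return np.abs(div).max()

nx, ny, dx, dy = 32, 32, 1.0 / 32, 1.0 / 32
x = np.linspace(0.0, 1.0, nx + 1)  # cell-corner coordinates
y = np.linspace(0.0, 1.0, ny + 1)
# Build B from a vector potential A_z at cell corners: Bx = dA/dy, By = -dA/dx,
# using the same finite differences, so the discrete div B vanishes identically.
Az = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * y)[None, :]
bx = (Az[:, 1:] - Az[:, :-1]) / dy
by = -(Az[1:, :] - Az[:-1, :]) / dx
print("max |div B| =", max_div_b(bx, by, dx, dy))  # effectively zero (roundoff)
```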
== Open-source MHD software ==
Pencil Code: compressible resistive MHD, intrinsically divergence-free, with an embedded particles module; finite-difference explicit scheme with high-order derivatives; written in Fortran95 and C; parallelized up to hundreds of thousands of cores. Source code is available.
RAMSES is an open source program to model astrophysical systems, featuring self-gravitating, magnetised, compressible, radiative fluid flows. It is based on the Adaptive Mesh Refinement (AMR) technique on a fully threaded graded octree. RAMSES is written in Fortran 90 and makes intensive use of the Message Passing Interface (MPI) library. Source code is available.
RamsesGPU is an MHD program written in C++, based on the original RAMSES but only for regular grids (no AMR). The code has been designed to run on large clusters of GPUs (NVIDIA graphics processors), so parallelization relies on MPI for distributed memory processing, as well as the CUDA programming language for efficient usage of GPU resources. Static gravity fields are supported. Different finite volume methods are implemented. Source code is available.
Athena is a grid-based program for astrophysical magnetohydrodynamics (MHD). It was developed primarily for studies of the interstellar medium, star formation, and accretion flows. Source code is available.
EOF-Library is software that couples the Elmer FEM and OpenFOAM simulation packages. It enables efficient internal field interpolation and communication between the finite element and the finite volume frameworks. Potential applications are MHD, convective cooling of electrical devices, industrial plasma physics and microwave heating of liquids.
== Closed-source MHD software ==
USim
MACH2
STAR-CCM+
== See also ==
Magnetohydrodynamic turbulence
Magnetic flow meter
Plasma modeling
== References ==
Brio, M.; Wu, C. C. (1988), "An upwind differencing scheme for the equations of ideal magnetohydrodynamics", Journal of Computational Physics, 75, 400–422.
Damevin, H.-M.; Hoffmann, K. A. (2002), "Development of a Runge-Kutta Scheme with TVD for Magnetogasdynamics", Journal of Spacecraft and Rockets, 34, No. 4, 624–632.
MacCormack, R. W. (1999), "An upwind conservation form method for ideal magnetohydrodynamics equations", AIAA-99-3609.
MacCormack, R. W. (2001), "A conservation form method for magneto-fluid dynamics", AIAA-2001-0195.
== Further reading ==
Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag.
Ledvina, S. A.; Y.-J. Ma; E. Kallio (2008). "Modeling and Simulating Flowing Plasmas and Related Phenomena". Space Science Reviews. 139 (1–4): 143–189. Bibcode:2008SSRv..139..143L. doi:10.1007/s11214-008-9384-6. S2CID 121999061.
== External links ==
NCBI | Wikipedia/Computational_magnetohydrodynamics |
The Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) is an academic conference in the fields of algorithm design and discrete mathematics. It is considered to be one of the top conferences for research in algorithms. SODA has been organized annually since 1990, typically in January. SODA is jointly sponsored by the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT) and the SIAM Activity Group on Discrete Mathematics, and in format is more similar to a theoretical computer science conference than to a mathematics conference.
== History ==
The first Symposium on Discrete Algorithms was held in 1990 in San Francisco, organized by David Johnson.
In 2012, the ACM Special Interest Group on Algorithms and Computation Theory (ACM SIGACT) and the SIAM Activity Group on Discrete Mathematics (SIAG/DM) jointly established the SODA Steering Committee to work with SIAM and ACM on organizing SODA.
== References == | Wikipedia/Symposium_on_Discrete_Algorithms |
The Symposium on Theoretical Aspects of Computer Science (STACS) is an academic conference in the field of computer science. It has been held annually since 1984, alternately in Germany and France. Typical themes of the conference include algorithms, computational and structural complexity, automata, formal languages and logic.
STACS proceedings from 1984 to 2007 were published by Springer Science+Business Media in the Lecture Notes in Computer Science series. The proceedings since 2008 are published by the Leibniz Center for Informatics in the open access series Leibniz International Proceedings in Informatics. These proceedings are freely available from the conference portal, as well as from DROPS, the Dagstuhl Research Online Publication Server, and from Hyper Articles en Ligne.
The conference is indexed by several bibliographic databases, including the DBLP, Google Scholar and The Collection of Computer Science Bibliographies.
== See also ==
The list of computer science conferences contains other academic conferences in computer science.
== External links ==
Official website .
STACS proceedings from 1984 to 2007 at Springer Link.
STACS at DBLP.
STACS proceedings.
Leibniz International Proceedings in Informatics home page | Wikipedia/Symposium_on_Theoretical_Aspects_of_Computer_Science |
ACM Transactions on Computation Theory (ACM ToCT) is a quarterly peer-reviewed scientific journal devoted to the study of computational complexity theory and allied fields. It was established in 2009 and is published by the Association for Computing Machinery, a premier scientific and educational society on computer science and computational technology in the United States.
The editor-in-chief is Prahladh Harsha (Tata Institute of Fundamental Research).
== Abstracting and indexing ==
The journal is abstracted and indexed in the Science Citation Index Expanded, and Scopus. According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7.
== Past editors ==
The following persons have been editors-in-chief of the journal:
Lance Fortnow (2009–2010)
Eric Allender (2010–2017)
Venkatesan Guruswami (2017–2019)
Ryan O'Donnell (2020–2022)
Prahladh Harsha (2023–)
== References ==
== External links ==
Official website | Wikipedia/ACM_Transactions_on_Computation_Theory |
Protein–protein interactions (PPIs) are physical contacts of high specificity established between two or more protein molecules as a result of biochemical events steered by interactions that include electrostatic forces, hydrogen bonding and the hydrophobic effect. Many are physical contacts with molecular associations between chains that occur in a cell or in a living organism in a specific biomolecular context.
Proteins rarely act alone, as their functions tend to be regulated. Many molecular processes within a cell are carried out by molecular machines that are built from numerous protein components organized by their PPIs. These physiological interactions make up the so-called interactome of the organism, while aberrant PPIs are the basis of multiple aggregation-related diseases, such as Creutzfeldt–Jakob and Alzheimer's diseases.
PPIs have been studied with many methods and from different perspectives: biochemistry, quantum chemistry, molecular dynamics, signal transduction, among others. All this information enables the creation of large protein interaction networks – similar to metabolic or genetic/epigenetic networks – that empower the current knowledge on biochemical cascades and molecular etiology of disease, as well as the discovery of putative protein targets of therapeutic interest.
== Examples ==
=== Electron transfer proteins ===
In many metabolic reactions, a protein that acts as an electron carrier binds to an enzyme that acts as its reductase. After it receives an electron, it dissociates and then binds to the next enzyme that acts as its oxidase (i.e. an acceptor of the electron). These interactions between proteins are dependent on highly specific binding between proteins to ensure efficient electron transfer. Examples: mitochondrial oxidative phosphorylation chain system components cytochrome c-reductase / cytochrome c / cytochrome c oxidase; microsomal and mitochondrial P450 systems.
In the case of the mitochondrial P450 systems, the specific residues involved in the binding of the electron transfer protein adrenodoxin to its reductase were identified as two basic Arg residues on the surface of the reductase and two acidic Asp residues on the adrenodoxin.
More recent work on the phylogeny of the reductase has shown that these residues involved in protein–protein interactions have been conserved throughout the evolution of this enzyme.
=== Signal transduction ===
The activity of the cell is regulated by extracellular signals. Signal propagation inside and/or along the interior of cells depends on PPIs between the various signaling molecules. The recruitment of signaling pathways through PPIs is called signal transduction and plays a fundamental role in many biological processes and in many diseases including Parkinson's disease and cancer.
=== Membrane transport ===
A protein may carry another protein (for example, from the cytoplasm to the nucleus or vice versa, as in the case of the nuclear pore importins).
=== Cell metabolism ===
In many biosynthetic processes enzymes interact with each other to produce small compounds or other macromolecules.
=== Muscle contraction ===
The physiology of muscle contraction involves several interactions. Myosin filaments act as molecular motors and, by binding to actin, enable filament sliding. Furthermore, members of the skeletal muscle lipid droplet-associated protein family associate with other proteins, such as the activator of adipose triglyceride lipase and its coactivator comparative gene identification-58, to regulate lipolysis in skeletal muscle.
== Types ==
To describe the types of protein–protein interactions (PPIs) it is important to consider that proteins can interact in a "transient" way (to produce some specific effect in a short time, like signal transduction) or interact with other proteins in a "stable" way to form complexes that become molecular machines within living systems. A protein complex assembly can result in the formation of homo-oligomeric or hetero-oligomeric complexes. In addition to the conventional complexes, such as enzyme–inhibitor and antibody–antigen, interactions can also be established between domain and domain, and between domain and peptide. Another important distinction is how the interactions are determined: techniques that measure direct physical interactions between protein pairs are named "binary" methods, while techniques that measure physical interactions among groups of proteins, without pairwise determination of protein partners, are named "co-complex" methods.
=== Homo-oligomers vs. hetero-oligomers ===
Homo-oligomers are macromolecular complexes constituted by only one type of protein subunit. Protein subunit assembly is guided by the establishment of non-covalent interactions in the quaternary structure of the protein. Disruption of homo-oligomers in order to return to the initial individual monomers often requires denaturation of the complex. Several enzymes, carrier proteins, scaffolding proteins, and transcriptional regulatory factors carry out their functions as homo-oligomers.
Distinct protein subunits interact in hetero-oligomers, which are essential to control several cellular functions. The importance of the communication between heterologous proteins is even more evident during cell signaling events and such interactions are only possible due to structural domains within the proteins (as described below).
=== Stable interactions vs. transient interactions ===
Stable interactions involve proteins that interact for a long time, forming part of permanent complexes as subunits in order to carry out functional roles. This is usually the case for homo-oligomers (e.g. cytochrome c) and for some hetero-oligomeric proteins, such as the subunits of ATPase. On the other hand, a protein may interact briefly and reversibly with other proteins in only certain cellular contexts – cell type, cell cycle stage, external factors, presence of other binding proteins, etc. – as happens with most of the proteins involved in biochemical cascades. These are called transient interactions. For example, some G protein–coupled receptors only transiently bind to Gi/o proteins when they are activated by extracellular ligands, while some Gq-coupled receptors, such as the muscarinic receptor M3, pre-couple with Gq proteins before receptor–ligand binding. Interactions between intrinsically disordered protein regions and globular protein domains (i.e. MoRFs) are also transient.
=== Covalent vs. non-covalent ===
Covalent interactions are those with the strongest association and are formed by disulphide bonds or electron sharing. While rare, these interactions are determinant in some posttranslational modifications, such as ubiquitination and SUMOylation. Non-covalent bonds are usually established during transient interactions by the combination of weaker bonds, such as hydrogen bonds, ionic interactions, van der Waals forces, or hydrophobic bonds.
=== Role of water ===
Water molecules play a significant role in the interactions between proteins. The crystal structures of complexes, obtained at high resolution from different but homologous proteins, have shown that some interface water molecules are conserved between homologous complexes. The majority of the interface water molecules make hydrogen bonds with both partners of each complex. Some interface amino acid residues or atomic groups of one protein partner engage in both direct and water mediated interactions with the other protein partner. Doubly indirect interactions, mediated by two water molecules, are more numerous in the homologous complexes of low affinity. Carefully conducted mutagenesis experiments, e.g. changing a tyrosine residue into a phenylalanine, have shown that water mediated interactions can contribute to the energy of interaction. Thus, water molecules may facilitate the interactions and cross-recognitions between proteins.
== Structure ==
The molecular structures of many protein complexes have been unlocked by the technique of X-ray crystallography. The first structure to be solved by this method was that of sperm whale myoglobin, by Sir John Cowdery Kendrew. In this technique, the angles and intensities of a beam of X-rays diffracted by crystalline atoms are detected on a film, producing a three-dimensional picture of the density of electrons within the crystal.
Later, nuclear magnetic resonance also started to be applied with the aim of unravelling the molecular structure of protein complexes. One of the first examples was the structure of calmodulin-binding domains bound to calmodulin. This technique is based on the study of the magnetic properties of atomic nuclei, thus determining physical and chemical properties of the corresponding atoms or molecules. Nuclear magnetic resonance is advantageous for characterizing weak PPIs.
=== Protein–protein interaction domains ===
Some proteins have specific structural domains or sequence motifs that provide binding to other proteins. Here are some examples of such domains:
Src homology 2 (SH2) domain
SH2 domains are structurally composed of a three-stranded twisted beta sheet flanked by two alpha helices. The existence of a deep binding pocket with high affinity for phosphotyrosine, but not for phosphoserine or phosphothreonine, is essential for the recognition of tyrosine-phosphorylated proteins, mainly autophosphorylated growth factor receptors. Growth factor receptor binding proteins and phospholipase Cγ are examples of proteins that have SH2 domains.
Src homology 3 (SH3) domain
Structurally, SH3 domains are constituted by a beta barrel formed by two orthogonal beta sheets and three anti-parallel beta strands. These domains recognize proline-enriched sequences, such as polyproline type II helical structures (PXXP motifs), in cell signaling proteins like protein tyrosine kinases and the growth factor receptor-bound protein 2 (Grb2).
Phosphotyrosine-binding (PTB) domain
PTB domains interact with sequences that contain a phosphotyrosine group. These domains can be found in the insulin receptor substrate.
LIM domain
LIM domains were initially identified in three homeodomain transcription factors (lin-11, isl-1 and mec-3). In addition to these homeodomain proteins and other proteins involved in development, LIM domains have also been identified in non-homeodomain proteins with relevant roles in cellular differentiation, association with the cytoskeleton, and senescence. These domains contain a tandem cysteine-rich Zn²⁺-finger motif and embrace the consensus sequence CX₂CX₁₆₋₂₃HX₂CX₂CX₂CX₁₆₋₂₁CX₂(C/H/D). LIM domains bind to PDZ domains, bHLH transcription factors, and other LIM domains.
Sterile alpha motif (SAM) domain
SAM domains are composed of five helices forming a compact package with a conserved hydrophobic core. These domains, which can be found in the Eph receptor and the stromal interaction molecule (STIM), for example, bind to non-SAM-domain-containing proteins and also appear to have the ability to bind RNA.
PDZ domain
PDZ domains were first identified in three guanylate kinases: PSD-95, DlgA and ZO-1. These domains recognize carboxy-terminal tripeptide motifs (S/TXV), other PDZ domains, or LIM domains, and bind them through a short peptide sequence that has a C-terminal hydrophobic residue. Some of the proteins identified as having PDZ domains are scaffolding proteins or seem to be involved in assembling ion receptors and forming receptor–enzyme complexes.
FERM domain
FERM domains contain basic residues capable of binding PtdIns(4,5)P2. Talin and focal adhesion kinase (FAK) are two of the proteins that present FERM domains.
Calponin homology (CH) domain
CH domains are mainly present in cytoskeletal proteins such as parvin.
Pleckstrin homology domain
Pleckstrin homology domains bind to phosphoinositides and acid domains in signaling proteins.
WW domain
WW domains bind to proline-enriched sequences.
WSxWS motif
The WSxWS motif is found in cytokine receptors.
== Properties of the interface ==
The study of the molecular structure can give fine details about the interface that enables the interaction between proteins. When characterizing PPI interfaces it is important to take into account the type of complex.
Parameters evaluated include size (measured in absolute dimensions, Å², or in solvent-accessible surface area (SASA)), shape, complementarity between surfaces, residue interface propensities, hydrophobicity, segmentation and secondary structure, and conformational changes on complex formation.
The great majority of PPI interfaces reflect the composition of protein surfaces rather than protein cores, despite frequently being enriched in hydrophobic residues, particularly aromatic residues. PPI interfaces are dynamic and frequently planar, although they can be globular and protruding as well. Based on three structures – the insulin dimer, the trypsin–pancreatic trypsin inhibitor complex, and oxyhaemoglobin – Cyrus Chothia and Joel Janin found that between 1,130 and 1,720 Å² of surface area was removed from contact with water, indicating that hydrophobicity is a major factor in the stabilization of PPIs. Later studies refined the buried surface area of the majority of interactions to 1,600±350 Å². However, much larger interaction interfaces have also been observed, and these were associated with significant conformational changes in one of the interaction partners. PPI interfaces exhibit both shape and electrostatic complementarity.
== Regulation ==
Protein–protein interactions are regulated by several factors, including:
Protein concentration, which in turn is affected by expression levels and degradation rates;
Protein affinity for proteins or other binding ligands;
Ligand concentrations (substrates, ions, etc.);
The presence of other proteins, nucleic acids, and ions;
Electric fields around proteins;
The occurrence of covalent modifications.
== Experimental methods ==
There are a multitude of methods to detect protein–protein interactions. Each approach has its own strengths and weaknesses, especially with regard to the sensitivity and specificity of the method. The most conventional and widely used high-throughput methods are yeast two-hybrid screening and affinity purification coupled to mass spectrometry.
=== Yeast two-hybrid screening ===
This system was first described in 1989 by Fields and Song, using Saccharomyces cerevisiae as the biological model. Yeast two-hybrid allows the identification of pairwise PPIs (a binary method) in vivo, in which the two proteins are tested for direct biophysical interaction. Y2H is based on the functional reconstitution of the yeast transcription factor Gal4 and subsequent activation of a selective reporter such as His3. To test two proteins for interaction, two protein expression constructs are made: one protein (X) is fused to the Gal4 DNA-binding domain (DB) and a second protein (Y) is fused to the Gal4 activation domain (AD). In the assay, yeast cells are transformed with these constructs. Transcription of reporter genes does not occur unless bait (DB-X) and prey (AD-Y) interact with each other and form a functional Gal4 transcription factor. Thus, the interaction between proteins can be inferred from the presence of the products resulting from reporter gene expression. In cases in which the reporter gene expresses enzymes that allow the yeast to synthesize essential amino acids or nucleotides, yeast growth under selective media conditions indicates that the two proteins tested are interacting. Recently, software to detect and prioritize protein interactions was published.
Despite its usefulness, the yeast two-hybrid system has limitations. It uses yeast as the main host system, which can be a problem when studying proteins that contain mammalian-specific post-translational modifications. The number of PPIs identified is usually low because of a high false-negative rate, and the method under-represents membrane proteins, for example.
In initial studies that utilized Y2H, proper controls for false positives (e.g. when DB-X activates the reporter gene without the presence of AD-Y) were frequently not done, leading to a higher-than-normal false-positive rate. An empirical framework must be implemented to control for these false positives. The limited coverage of membrane proteins has been overcome by the emergence of yeast two-hybrid variants, such as the membrane yeast two-hybrid (MYTH) and the split-ubiquitin system, which are not limited to interactions that occur in the nucleus, and the bacterial two-hybrid system, performed in bacteria.
=== Affinity purification coupled to mass spectrometry ===
Affinity purification coupled to mass spectrometry mostly detects stable interactions and thus better indicates functional in vivo PPIs. This method starts with the purification of the tagged protein, which is expressed in the cell usually at in vivo concentrations, along with its interacting proteins (affinity purification). One of the most advantageous and widely used methods to purify proteins with very low contaminating background is tandem affinity purification, developed by Bertrand Séraphin and Matthias Mann and their respective colleagues. PPIs can then be analysed by mass spectrometry using different methods: chemical incorporation, biological or metabolic incorporation (SILAC), and label-free methods. Furthermore, network theory has been used to study the whole set of identified protein–protein interactions in cells.
=== Nucleic acid programmable protein array (NAPPA) ===
This system was first developed by LaBaer and colleagues in 2004, using an in vitro transcription and translation system. A DNA template encoding the gene of interest fused to GST is immobilized on a solid surface. Anti-GST antibody and biotinylated plasmid DNA are bound to an aminopropyltriethoxysilane (APTES)-coated slide; BSA can improve the binding efficiency of the DNA, and the biotinylated plasmid DNA is bound by avidin. The new protein is synthesized using a cell-free expression system, i.e. rabbit reticulocyte lysate (RRL), and is then captured by the anti-GST antibody bound to the slide. To test a protein–protein interaction, the target protein cDNA and query protein cDNA are immobilized on the same coated slide. Using the in vitro transcription and translation system, the target and query proteins are synthesized by the same extract. The target protein is bound to the array by the antibody coating the slide, and the query protein, tagged with a hemagglutinin (HA) epitope, is used to probe the array. Thus, the interaction between the two proteins is visualized with an antibody against HA.
=== Intragenic complementation ===
When multiple copies of a polypeptide encoded by a gene form a complex, this protein structure is referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. In such a case, the phenomenon is referred to as intragenic complementation (also called inter-allelic complementation). Intragenic complementation has been demonstrated in many different genes in a variety of organisms, including the fungi Neurospora crassa, Saccharomyces cerevisiae and Schizosaccharomyces pombe; the bacterium Salmonella typhimurium; the bacteriophage T4; an RNA virus; and humans. In such studies, numerous mutations defective in the same gene were often isolated and mapped in a linear order on the basis of recombination frequencies to form a genetic map of the gene. Separately, the mutants were tested in pairwise combinations to measure complementation. An analysis of the results from such studies led to the conclusion that intragenic complementation, in general, arises from the interaction of differently defective polypeptide monomers to form a multimer. Genes that encode multimer-forming polypeptides appear to be common. One interpretation of the data is that polypeptide monomers are often aligned in the multimer in such a way that mutant polypeptides defective at nearby sites in the genetic map tend to form a mixed multimer that functions poorly, whereas mutant polypeptides defective at distant sites tend to form a mixed multimer that functions more effectively. Direct interaction of two nascent proteins emerging from nearby ribosomes appears to be a general mechanism for homo-oligomer (multimer) formation. Hundreds of protein oligomers have been identified that assemble in human cells by such an interaction. The most prevalent form of interaction is between the N-terminal regions of the interacting proteins. Dimer formation appears to be able to occur independently of dedicated assembly machines. The intermolecular forces likely responsible for self-recognition and multimer formation were discussed by Jehle.
=== Other potential methods ===
Diverse techniques to identify PPIs have been emerging along with technology progression. These include co-immunoprecipitation, protein microarrays, analytical ultracentrifugation, light scattering, fluorescence spectroscopy, luminescence-based mammalian interactome mapping (LUMIER), resonance-energy transfer systems, mammalian protein–protein interaction trap, electro-switchable biosurfaces, protein–fragment complementation assay, as well as real-time label-free measurements by surface plasmon resonance, and calorimetry.
== Computational methods ==
=== Computational prediction of protein–protein interactions ===
The experimental detection and characterization of PPIs is labor-intensive and time-consuming. However, many PPIs can also be predicted computationally, usually using experimental data as a starting point. Methods have also been developed that allow the prediction of PPIs de novo, that is, without prior evidence for these interactions.
==== Genomic context methods ====
The Rosetta Stone or Domain Fusion method is based on the hypothesis that interacting proteins are sometimes fused into a single protein in another genome. Therefore, two proteins can be predicted to interact by determining whether they each have non-overlapping sequence similarity to a region of a single protein sequence in another genome.
The Conserved Neighborhood method is based on the hypothesis that if genes encoding two proteins are neighbors on a chromosome in many genomes, then they are likely functionally related (and possibly physically interacting).
The Phylogenetic Profile method is based on the hypothesis that if two or more proteins are concurrently present or absent across several genomes, then they are likely functionally related. Therefore, potentially interacting proteins can be identified by determining the presence or absence of genes across many genomes and selecting those genes which are always present or absent together.
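As an illustration of the idea, a minimal Python sketch follows; the genome set, protein names, and presence/absence profiles are hypothetical, and real applications typically score profile similarity statistically rather than requiring near-exact matches.

```python
# Minimal sketch of the phylogenetic profile method (hypothetical data).
# Each protein is represented by a presence/absence vector across genomes;
# proteins with identical (or highly similar) profiles are predicted to be
# functionally related.

profiles = {
    # protein: presence (1) or absence (0) in genomes G1..G5 (hypothetical)
    "P1": (1, 0, 1, 1, 0),
    "P2": (1, 0, 1, 1, 0),   # same profile as P1 -> predicted link
    "P3": (0, 1, 1, 0, 1),
}

def hamming(a, b):
    """Number of genomes in which two presence/absence profiles differ."""
    return sum(x != y for x, y in zip(a, b))

def predicted_partners(profiles, max_distance=0):
    """Yield protein pairs whose profiles differ in at most max_distance genomes."""
    names = sorted(profiles)
    for i, p in enumerate(names):
        for q in names[i + 1:]:
            if hamming(profiles[p], profiles[q]) <= max_distance:
                yield p, q

print(list(predicted_partners(profiles)))  # [('P1', 'P2')]
```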
=== Text mining methods ===
Publicly available information from biomedical documents is readily accessible through the internet and is becoming a powerful resource for collecting known protein–protein interactions (PPIs), PPI prediction and protein docking. Text mining is much less costly and time-consuming compared to other high-throughput techniques. Currently, text mining methods generally detect binary relations between interacting proteins from individual sentences using rule/pattern-based information extraction and machine learning approaches. A wide variety of text mining applications for PPI extraction and/or prediction are available for public use, as well as repositories which often store manually validated and/or computationally predicted PPIs. Text mining can be implemented in two stages: information retrieval, where texts containing names of either or both interacting proteins are retrieved; and information extraction, where targeted information (interacting proteins, implicated residues, interaction types, etc.) is extracted.
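A toy sketch of these two stages might look as follows; the protein dictionary, verb pattern, and sentences are illustrative placeholders, not a production extraction system.

```python
import re

# Toy illustration of the two text mining stages described above:
# retrieval (find sentences mentioning known proteins) and extraction
# (apply a simple interaction-verb pattern).

PROTEINS = {"MDM2", "TP53", "BRCA1"}  # hypothetical protein dictionary
PATTERN = re.compile(r"\b(\w+)\s+(?:binds|interacts with|phosphorylates)\s+(\w+)\b")

def extract_ppis(sentences):
    """Yield (protein, protein) pairs matched by the interaction pattern."""
    for s in sentences:
        m = PATTERN.search(s)
        if m and m.group(1) in PROTEINS and m.group(2) in PROTEINS:
            yield m.group(1), m.group(2)

docs = ["MDM2 binds TP53 in the nucleus.", "BRCA1 is a tumor suppressor."]
print(list(extract_ppis(docs)))  # [('MDM2', 'TP53')]
```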
There are also studies using phylogenetic profiling, based on the theory that proteins involved in common pathways co-evolve in a correlated fashion across species. Some more complex text mining methodologies use advanced Natural Language Processing (NLP) techniques and build knowledge networks (for example, considering gene names as nodes and verbs as edges). Other developments involve kernel methods to predict protein interactions.
=== Machine learning methods ===
Many computational methods have been suggested and reviewed for predicting protein–protein interactions. Prediction approaches can be grouped into categories based on predictive evidence: protein sequence, comparative genomics, protein domains, protein tertiary structure, and interaction network topology. The construction of a positive set (known interacting protein pairs) and a negative set (non-interacting protein pairs) is needed for the development of a computational prediction model. Prediction models using machine learning techniques can be broadly classified into two main groups: supervised and unsupervised, based on the labeling of input variables according to the expected outcome.
In 2005, integral membrane proteins of Saccharomyces cerevisiae were analyzed using the mating-based ubiquitin system (mbSUS). The system detects interactions of membrane proteins with extracellular signaling proteins. Of the 705 integral membrane proteins, 1,985 different interactions were traced that involved 536 proteins. To sort and classify interactions, a support vector machine was used to define high-, medium- and low-confidence interactions. The split-ubiquitin membrane yeast two-hybrid system uses transcriptional reporters to identify yeast transformants that encode pairs of interacting proteins.
In 2006, random forest, an example of a supervised technique, was found to be the most effective machine learning method for protein interaction prediction. Such methods have been applied for discovering protein interactions in the human interactome, specifically the interactome of membrane proteins and the interactome of schizophrenia-associated proteins.
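A minimal sketch of such a supervised pipeline, using scikit-learn's random forest on synthetic feature vectors, follows; in a real system, the features would be derived from sequence, domain, or network evidence, and the labels from curated positive and negative sets.

```python
# Sketch of supervised PPI prediction with a random forest (synthetic data).
# Each protein pair is reduced to a numeric feature vector; labels mark
# known interacting (1) versus non-interacting (0) pairs. The features here
# are random placeholders, so the accuracy is only illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # 200 protein pairs, 10 features each
y = rng.integers(0, 2, size=200)      # 1 = interacting, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```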
As of 2020, a model using residue cluster classes (RCCs), constructed from the 3DID and Negatome databases, resulted in 96–99% correctly classified instances of protein–protein interactions. RCCs form a computational vector space that mimics protein fold space and includes all simultaneously contacted residue sets, which can be used to analyze protein structure–function relations and evolution.
== Databases ==
Large scale identification of PPIs generated hundreds of thousands of interactions, which were collected together in specialized biological databases that are continuously updated in order to provide complete interactomes. The first of these databases was the Database of Interacting Proteins (DIP).
Primary databases collect information about published PPIs proven to exist via small-scale or large-scale experimental methods. Examples: DIP, Biomolecular Interaction Network Database (BIND), Biological General Repository for Interaction Datasets (BioGRID), Human Protein Reference Database (HPRD), IntAct Molecular Interaction Database, Molecular Interactions Database (MINT), MIPS Protein Interaction Resource on Yeast (MIPS-MPact), and MIPS Mammalian Protein–Protein Interaction Database (MIPS-MPPI).
Meta-databases normally result from the integration of information from primary databases, but can also collect some original data.
Prediction databases include many PPIs that are predicted using several techniques (main article). Examples: Human Protein–Protein Interaction Prediction Database (PIPs), Interlogous Interaction Database (I2D), Known and Predicted Protein–Protein Interactions (STRING-db), and Unified Human Interactive (UniHI).
The aforementioned computational methods all depend on source databases whose data can be extrapolated to predict novel protein–protein interactions. Coverage differs greatly between databases. In general, primary databases have the fewest total protein interactions recorded, as they do not integrate data from multiple other databases, while prediction databases have the most, because they include other forms of evidence in addition to experimental data. For example, the primary database IntAct has 572,063 interactions, the meta-database APID has 678,000 interactions, and the predictive database STRING has 25,914,693 interactions. However, it is important to note that some of the interactions in the STRING database are only predicted by computational methods such as Genomic Context and are not experimentally verified.
== Interaction networks ==
Information found in PPI databases supports the construction of interaction networks. Although the PPI network of a given query protein can be represented in textbooks, diagrams of whole-cell PPIs are highly complex and difficult to generate.
One example of a manually produced molecular interaction map is Kurt Kohn's 1999 map of cell cycle control. Drawing on Kohn's map, Schwikowski et al. in 2000 published a paper on PPIs in yeast, linking 1,548 interacting proteins determined by two-hybrid screening. They used a layered graph drawing method to find an initial placement of the nodes and then improved the layout using a force-based algorithm.
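A small sketch of force-based placement, using the networkx library (assumed installed) on a hypothetical four-protein network, might look like this:

```python
# Sketch of force-directed placement of a small PPI network using networkx;
# the edges are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])

# spring_layout implements a Fruchterman-Reingold force-based algorithm,
# analogous to the layout-refinement step described above.
pos = nx.spring_layout(G, seed=42)
for node, (x, y) in pos.items():
    print(f"{node}: ({x:.2f}, {y:.2f})")
```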
Bioinformatic tools have been developed to simplify the difficult task of visualizing molecular interaction networks and to complement them with other types of data. For instance, Cytoscape is widely used open-source software for which many plugins are currently available. Pajek software is advantageous for the visualization and analysis of very large networks.
Identification of functional modules in PPI networks is an important challenge in bioinformatics. A functional module is a set of proteins that are highly connected to each other in the PPI network. This is almost the same problem as community detection in social networks. There are methods such as jActiveModules and MoBaS: jActiveModules integrates the PPI network with gene expression data, whereas MoBaS integrates the PPI network with genome-wide association studies.
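As a rough illustration of module detection as community detection, the following sketch applies networkx's greedy modularity method to a hypothetical PPI graph:

```python
# Sketch of functional-module detection as community detection, using
# networkx's greedy modularity method on a hypothetical PPI graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),   # densely connected module 1
    ("D", "E"), ("E", "F"), ("D", "F"),   # densely connected module 2
    ("C", "D"),                           # sparse link between modules
])

modules = greedy_modularity_communities(G)
print([sorted(m) for m in modules])  # e.g. [['A', 'B', 'C'], ['D', 'E', 'F']]
```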
Protein–protein relationships are often the result of multiple types of interactions or are deduced from different approaches, including co-localization, direct interaction, suppressive genetic interaction, additive genetic interaction, physical association, and other associations.
=== Signed interaction networks ===
Protein–protein interactions often result in one of the interacting proteins either being 'activated' or 'repressed'. Such effects can be indicated in a PPI network by "signs" (e.g. "activation" or "inhibition"). Although such attributes have been added to networks for a long time, Vinayagam et al. (2014) coined the term Signed network for them. Signed networks are often expressed by labeling the interaction as either positive or negative. A positive interaction is one where the interaction results in one of the proteins being activated. Conversely, a negative interaction indicates that one of the proteins is inactivated.
Protein–protein interaction networks are often constructed as a result of lab experiments such as yeast two-hybrid screens or affinity purification and subsequent mass spectrometry. However, these methods do not provide the layer of information needed to determine which type of interaction is present and thus to attribute signs to the network diagrams.
==== RNA interference screens ====
RNA interference (RNAi) screens (repression of individual proteins between transcription and translation) are one method that can be utilized in the process of providing signs to protein–protein interactions. Individual proteins are repressed and the resulting phenotypes are analyzed. A correlating phenotypic relationship (i.e. where the inhibition of either of two proteins results in the same phenotype) indicates a positive, or activating, relationship. Phenotypes that do not correlate (i.e. where the inhibition of either of two proteins results in two different phenotypes) indicate a negative, or inactivating, relationship. If protein A is dependent on protein B for activation, then the inhibition of either protein A or B will result in a cell losing the service that is provided by protein A, and the phenotypes will be the same for the inhibition of either A or B. If, however, protein A is inactivated by protein B, then the phenotypes will differ depending on which protein is inhibited: inhibit protein B and it can no longer inactivate protein A, leaving A active; inactivate A, however, and there is nothing for B to inactivate, since A is already inactive, and the phenotype changes. Multiple RNAi screens need to be performed in order to reliably assign a sign to a given protein–protein interaction. Vinayagam et al., who devised this technique, state that a minimum of nine RNAi screens are required, with confidence increasing as one carries out more screens.
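The core sign-assignment logic can be sketched as follows; the phenotype readouts are hypothetical, and a real analysis would correlate quantitative phenotypes over many screens rather than compare single labels.

```python
# Toy sketch of the sign-inference logic described above: if knocking down
# either protein yields the same phenotype, the interaction is scored as
# activating (+); if the phenotypes differ, as inactivating (-).

phenotypes = {           # protein -> observed phenotype upon RNAi knockdown
    "A": "small_wing",
    "B": "small_wing",   # same as A -> A-B scored positive
    "C": "normal_wing",  # differs from A -> A-C scored negative
}

def sign(p, q, phenotypes):
    """Return '+' for correlating phenotypes, '-' otherwise."""
    return "+" if phenotypes[p] == phenotypes[q] else "-"

print(sign("A", "B", phenotypes))  # '+'
print(sign("A", "C", phenotypes))  # '-'
```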
== As therapeutic targets ==
Modulation of PPIs is challenging and is receiving increasing attention from the scientific community. Several properties of PPIs, such as allosteric sites and hotspots, have been incorporated into drug-design strategies. Nevertheless, very few PPIs are directly targeted by FDA-approved small-molecule PPI inhibitors, emphasizing a huge untapped opportunity for drug discovery.
In 2014, Amit Jaiswal and others developed 30 peptides that inhibit the recruitment of telomerase to telomeres, utilizing protein–protein interaction studies. Arkin and others developed antibody fragment-based inhibitors to regulate specific protein–protein interactions.
As the "modulation" of PPIs not only includes the inhibition, but also the stabilization of quaternary protein complexes, molecules with this mechanism of action (so called molecular glues) are also intensively studied.
=== Examples ===
Tirofiban, an inhibitor of glycoprotein IIb/IIIa, used as a cardiovascular drug.
Maraviroc, an inhibitor of the CCR5–gp120 interaction, used as an anti-HIV drug.
AMG-176, AZD5991 and S64315, inhibitors of the myeloid cell leukemia 1 (Mcl-1) protein and its interactions.
== See also ==
== References ==
== Further reading ==
== External links ==
Protein–Protein Interaction Databases
Library of Modulators of Protein–Protein Interactions (PPI) | Wikipedia/Protein–protein_interaction |
A phenomenon (pl. phenomena), sometimes spelled phaenomenon, is an observable event. The term came into its modern philosophical usage through Immanuel Kant, who contrasted it with the noumenon, which cannot be directly observed. Kant was heavily influenced by Gottfried Wilhelm Leibniz in this part of his philosophy, in which phenomenon and noumenon serve as interrelated technical terms. Far predating this, the ancient Greek Pyrrhonist philosopher Sextus Empiricus also used phenomenon and noumenon as interrelated technical terms.
== Common usage ==
In popular usage, a phenomenon often refers to an extraordinary, unusual or notable event. According to the Dictionary of Visual Discourse: "In ordinary language 'phenomenon/phenomena' refer to any occurrence worthy of note and investigation, typically an untoward or unusual event, person or fact that is of special significance or otherwise notable."
== Philosophy ==
In modern philosophical use, the term phenomena means things as they are experienced through the senses and processed by the mind as distinct from things in and of themselves (noumena). In his inaugural dissertation, titled On the Form and Principles of the Sensible and Intelligible World, Immanuel Kant (1770) theorizes that the human mind is restricted to the logical world and thus can only interpret and understand occurrences according to their physical appearances. He wrote that humans could infer only as much as their senses allowed, but not experience the actual object itself. Thus, the term phenomenon refers to any incident deserving of inquiry and investigation, especially processes and events which are particularly unusual or of distinctive importance.
== Science ==
In scientific usage, a phenomenon is any event that is observable, including the use of instrumentation to observe, record, or compile data. Especially in physics, the study of a phenomenon may be described as measurements related to matter, energy, or time, such as Isaac Newton's observations of the Moon's orbit and of gravity; or Galileo Galilei's observations of the motion of a pendulum.
In the natural sciences, a phenomenon is an observable happening or event. Often, this term is used without considering the causes of a particular event. An example of a physical phenomenon is the observable phenomenon of the lunar orbit or the oscillation of a pendulum.
A mechanical phenomenon is a physical phenomenon associated with the equilibrium or motion of objects. Some examples are Newton's cradle, engines, and double pendulums.
== Sociology ==
Group phenomena concern the behavior of a particular group of individual entities, usually organisms and most especially people. The behavior of individuals often changes in a group setting in various ways, and a group may have its own behaviors not possible for an individual because of the herd mentality.
Social phenomena apply especially to organisms and people in that subjective states are implicit in the term. Attitudes and events particular to a group may have effects beyond the group, and either be adapted by the larger society, or seen as aberrant, being punished or shunned.
== See also ==
== References ==
== External links ==
The dictionary definition of phenomenon at Wiktionary
Quotations related to Phenomenon at Wikiquote
Media related to Phenomena at Wikimedia Commons | Wikipedia/Phenomena |
ACM SIGACT or SIGACT is the Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory, whose purpose is support of research in theoretical computer science. It was founded in 1968 by Patrick C. Fischer.
== Publications ==
SIGACT publishes a quarterly print newsletter, SIGACT News. Its online version, SIGACT News Online, has been available since 1996 for SIGACT members, with unrestricted access to some features.
== Conferences ==
SIGACT sponsors or has sponsored several annual conferences.
COLT: Conference on Learning Theory, until 1999
PODC: ACM Symposium on Principles of Distributed Computing (jointly sponsored by SIGOPS)
PODS: ACM Symposium on Principles of Database Systems (jointly sponsored by SIGAI and SIGACT)
POPL: ACM Symposium on Principles of Programming Languages
SOCG: ACM Symposium on Computational Geometry (jointly sponsored by SIGGRAPH), until 2014
SODA: ACM/SIAM Symposium on Discrete Algorithms (jointly sponsored by the Society for Industrial and Applied Mathematics). Two annual workshops held in conjunction with SODA also have the same joint sponsorship:
ALENEX: Workshop on Algorithms and Experiments
ANALCO: Workshop on Analytic Algorithms and Combinatorics
SPAA: ACM Symposium on Parallelism in Algorithms and Architectures
STOC: ACM Symposium on the Theory of Computing
COLT, PODC, PODS, POPL, SODA, and STOC are all listed as highly cited venues by both citeseerx and libra.
== Awards and prizes ==
Gödel Prize, for outstanding papers in theoretical computer science (sponsored jointly with EATCS)
Donald E. Knuth Prize, for outstanding contributions to the foundations of computer science (sponsored jointly with IEEE Computer Society's Technical Committee on the Mathematical Foundations of Computing)
Edsger W. Dijkstra Prize in distributed computing (sponsored jointly with SIGOPS, EATCS, and companies)
Paris Kanellakis Theory and Practice Award, for theoretical accomplishments of significant and demonstrable effect on the practice of computing (ACM Award co-sponsored by SIGACT)
Eugene L. Lawler Award for Humanitarian Contributions within Computer Science and Informatics (ACM Award co-sponsored by SIGACT)
Danny Lewin Best Student Paper Award
Best Paper Award for ACM STOC and IEEE FOCS conference papers
ACM SIGACT Distinguished Service Award
== References ==
== External links ==
Official website
SIGACT News on ACM Digital Library | Wikipedia/Special_Interest_Group_on_Algorithms_and_Computation_Theory |
A computer network is a collection of communicating computers and other devices, such as printers and smart phones. In order to communicate, the computers and devices must be connected by wired media like copper cables, optical fibers, or by wireless communication. The devices may be connected in a variety of network topologies. In order to communicate over the network, computers use agreed-on rules, called communication protocols, over whatever medium is used.
The computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames. Hostnames serve as memorable labels for the nodes and are rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol.
Computer networks may be classified by many criteria, including the transmission medium used to carry signals, bandwidth, communications protocols to organize network traffic, the network size, the topology, traffic control mechanisms, and organizational intent.
Computer networks support many applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers, printers and fax machines, and use of email and instant messaging applications.
== History ==
Computer networking may be considered a branch of computer science, computer engineering, and telecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technological developments and historical milestones.
In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s).
In 1959, Christopher Strachey filed a patent application for time-sharing in the United Kingdom and John McCarthy initiated the first project to implement time-sharing of user programs at MIT. Strachey passed the concept on to J. C. R. Licklider at the inaugural UNESCO Information Processing Conference in Paris that year. McCarthy was instrumental in the creation of three of the earliest time-sharing systems (the Compatible Time-Sharing System in 1961, the BBN Time-Sharing System in 1962, and the Dartmouth Time-Sharing System in 1963).
In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers. Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project.
In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes.
In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric.
Throughout the 1960s, Paul Baran and Donald Davies independently invented the concept of packet switching for data communication between computers over a network. Baran's work addressed adaptive routing of message blocks across a distributed network, but did not include routers with software switches, nor the idea that users, rather than the network itself, would provide the reliability. Davies' hierarchical network design included high-speed routers, communication protocols and the essence of the end-to-end principle. The NPL network, a local area network at the National Physical Laboratory (United Kingdom), pioneered the implementation of the concept in 1968-69 using 768 kbit/s links. Both Baran's and Davies' inventions were seminal contributions that influenced the development of computer networks.
In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah. Designed principally by Bob Kahn, the network's routing, flow control, software design and network control were developed by the IMP team working for Bolt Beranek & Newman. In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
In 1972, commercial services were first deployed on experimental public data networks in Europe.
In 1973, the French CYCLADES network, directed by Louis Pouzin was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.
In 1973, Peter Kirstein put internetworking into practice at University College London (UCL), connecting the ARPANET to British academic networks, the first international heterogeneous computer network.
In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a local area networking system he created with David Boggs. It was inspired by the packet radio ALOHAnet, started by Norman Abramson and Franklin Kuo at the University of Hawaii in the late 1960s. Metcalfe and Boggs, with John Shoch and Edward Taft, also developed the PARC Universal Packet for internetworking.
In 1974, Vint Cerf and Bob Kahn published their seminal 1974 paper on internetworking, A Protocol for Packet Network Intercommunication. Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, RFC 675, coining the term Internet as a shorthand for internetworking.
In July 1976, Metcalfe and Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and in December 1977, together with Butler Lampson and Charles P. Thacker, they received U.S. patent 4063220A for their invention.
Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75. This underlying infrastructure was used for expanding TCP/IP networks in the 1980s.
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.
In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California.
In 1979, Robert Metcalfe pursued making Ethernet an open standard.
In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus, and Yogen Dalal.
In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added (as of 2018). The scaling of Ethernet has been a contributing factor to its continued use.
== Use ==
Computer networks enhance how users communicate with each other by using various electronic methods like email, instant messaging, online chat, voice and video calls, and video conferencing. Networks also enable the sharing of computing resources. For example, a user can print a document on a shared printer or use shared storage devices. Additionally, networks allow for the sharing of files and information, giving authorized users access to data stored on other computers. Distributed computing leverages resources from multiple computers across a network to perform tasks collaboratively.
== Network packet ==
Most modern computer networks use protocols based on packet-mode transmission. A network packet is a formatted unit of data carried by a packet-switched network.
Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
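As a minimal sketch of this header/payload/trailer layout, the following Python snippet uses a made-up packet format (2-byte source, destination and sequence fields plus a CRC32 trailer); it is not any real protocol.

```python
# Minimal sketch of a packet as header + payload + trailer. The format is
# invented for illustration: header = source, destination, sequence number
# (2 bytes each, network byte order); trailer = 4-byte CRC32 checksum.
import struct
import zlib

def make_packet(src, dst, seq, payload: bytes) -> bytes:
    header = struct.pack("!HHH", src, dst, seq)
    trailer = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + trailer

def parse_packet(packet: bytes):
    src, dst, seq = struct.unpack("!HHH", packet[:6])
    payload = packet[6:-4]
    (crc,) = struct.unpack("!I", packet[-4:])
    assert crc == zlib.crc32(packet[:-4]), "corrupted packet"
    return src, dst, seq, payload

pkt = make_packet(1, 2, 7, b"hello")
print(parse_packet(pkt))  # (1, 2, 7, b'hello')
```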
With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free.
The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred and once the packets arrive, they are reassembled to construct the original message.
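A simple sketch of fragmentation and reassembly follows; the framing is illustrative, whereas real protocols such as IPv4 carry fragment offsets and flags in their headers.

```python
# Sketch of fragmentation and reassembly around an MTU limit.

def fragment(message: bytes, mtu: int):
    """Split a message into MTU-sized fragments."""
    return [message[i:i + mtu] for i in range(0, len(message), mtu)]

def reassemble(fragments):
    """Concatenate in-order fragments back into the original message."""
    return b"".join(fragments)

msg = b"a longer message that exceeds the maximum transmission unit"
frags = fragment(msg, mtu=16)
assert reassemble(frags) == msg
print(len(frags), "fragments")
```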
== Network topology ==
The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is; but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology which is the map of logical interconnections of network hosts.
Common topologies are:
Bus network: all nodes are connected to a common medium and communicate along it. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2. This is still a common topology at the data link layer, although modern physical layer variants use point-to-point links instead, forming a star or a tree.
Star network: all nodes are connected to a special central node. This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
Ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology.
Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
Fully connected network: each node is connected to every other node in the network.
Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing.
The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding.
=== Overlay network ===
An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.
Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed.
The most striking example of an overlay network is the Internet itself, which was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow the mapping of a fully connected IP overlay network to its underlying network.
Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
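A minimal sketch of such a key-to-node mapping, using consistent hashing on a ring with hypothetical node names, might look like this:

```python
# Sketch of the key-to-node mapping in a distributed hash table, using
# consistent hashing: nodes and keys are hashed onto the same integer ring,
# and each key is stored on the first node clockwise from its hash.
import bisect
import hashlib

def h(value: str) -> int:
    """Hash a string onto a fixed 32-bit integer ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** 32)

nodes = ["node-a", "node-b", "node-c"]          # hypothetical node names
ring = sorted((h(n), n) for n in nodes)         # node positions on the ring
points = [p for p, _ in ring]

def node_for(key: str) -> str:
    """Map a key to the first node clockwise from its hash position."""
    i = bisect.bisect(points, h(key)) % len(ring)
    return ring[i][1]

for key in ["alpha", "beta", "gamma"]:
    print(key, "->", node_for(key))
```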
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality-of-service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance, largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others.
== Network links ==
The transmission media (often referred to in the literature as the physical medium) used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2 — the physical layer and the data link layer.
A widely adopted family of technologies that use copper and fiber media in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Wireless LAN standards use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
=== Wired ===
The following classes of wired technologies are used in computer networking.
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed local area network.
Twisted pair cabling is used for wired Ethernet and other standards. It typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
An optical fiber is a glass fiber. It carries pulses of light that represent data via lasers and optical amplifiers. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate at which data can be sent, up to trillions of bits per second. Optic fibers can be used for long cable runs carrying very high data rates, and are used for undersea communications cables to interconnect continents. There are two basic types of fiber optics: single-mode optical fiber (SMF) and multi-mode optical fiber (MMF). Single-mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. Multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozen meters, depending on the data rate and cable grade.
=== Wireless ===
Network connections can be established wirelessly using radio or other electromagnetic means of communication.
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 40 miles (64 km) apart.
Communications satellites – Satellites also communicate via microwave. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular networks use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area is served by a low-power transceiver.
Radio and spread spectrum technologies – Wireless LANs use a high-frequency radio technology similar to digital cellular. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
Extending the Internet to interplanetary dimensions via radio waves and optical means, the Interplanetary Internet.
IP over Avian Carriers was a humorous April fool's Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput).
== Network nodes ==
Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions.
=== Network interfaces ===
A network interface controller (NIC) is computer hardware that connects the computer to the network media and has the ability to process low-level network information. For example, the NIC may have a connector for plugging in a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
In Ethernet networks, each NIC has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
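As a small illustration, the following sketch splits an example MAC address (a made-up value, not a real device) into its manufacturer prefix and the manufacturer-assigned interface part described above:

```python
# Sketch of splitting a MAC address into its manufacturer (OUI) prefix
# (first three octets) and the manufacturer-assigned part (last three).

def split_mac(mac: str):
    octets = mac.lower().split(":")
    assert len(octets) == 6, "an Ethernet MAC address has six octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

print(split_mac("00:1A:2B:3C:4D:5E"))  # ('00:1a:2b', '3c:4d:5e')
```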
=== Repeaters and hubs ===
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of obstruction so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
Repeaters work on the physical layer of the OSI model but still require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters used in a network, e.g., the Ethernet 5-4-3 rule.
An Ethernet repeater with multiple ports is known as an Ethernet hub. In addition to reconditioning and distributing network signals, a repeater hub assists with collision detection and fault isolation for the network. Hubs and repeaters in LANs have been largely obsoleted by modern network switches.
=== Bridges and switches ===
Network bridges and network switches are distinct from a hub in that they only forward frames to the ports involved in the communication whereas a hub forwards to all ports. Bridges only have two ports but a switch can be thought of as a multi-port bridge. Switches normally have numerous ports, facilitating a star topology for devices, and for cascading additional switches.
Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame.
They learn the association of physical ports to MAC addresses by examining the source addresses of received frames and only forward the frame when necessary. If a frame is addressed to a MAC address the device has not yet learned, it floods the frame to all ports except the source and learns the destination's location from the reply traffic.
Bridges and switches divide the network's collision domain but maintain a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks.
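The learning-and-flooding behaviour described above can be sketched in a few lines of Python; the frame representation and port numbering here are illustrative assumptions, since real switches implement this in hardware:

```python
# A toy model of a learning switch: frames are (source MAC, destination MAC,
# ingress port) tuples, and the return value is the list of egress ports.

class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}  # MAC address -> port

    def handle_frame(self, src: str, dst: str, in_port: int) -> list[int]:
        self.mac_table[src] = in_port         # learn from the source address
        if dst in self.mac_table:             # known destination: forward to one port
            return [self.mac_table[dst]]
        # unknown destination: flood to every port except the one it arrived on
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.handle_frame("aa", "bb", 0))  # 'bb' unknown -> flood: [1, 2, 3]
print(sw.handle_frame("bb", "aa", 2))  # 'aa' was learned -> [0]
```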
=== Routers ===
A router is an internetworking device that forwards packets between networks by processing the addressing or routing information included in the packet. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets, rather than broadcasting them, which would be inefficient for very large networks.
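As a sketch of routing-table lookup, the following Python snippet uses the standard ipaddress module to perform a longest-prefix match over an illustrative table; the prefixes and interface names are assumptions:

```python
# A sketch of routing-table lookup via longest-prefix match.

import ipaddress

routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),  # default route
]

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    candidates = [(net, ifc) for net, ifc in routing_table if addr in net]
    # the most specific (longest) matching prefix wins
    return max(candidates, key=lambda e: e[0].prefixlen)[1]

print(lookup("10.1.2.3"))   # eth2 (the /16 beats the /8 and the default)
print(lookup("192.0.2.1"))  # eth0 (only the default route matches)
```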
=== Modems ===
Modems (modulator-demodulators) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for telephone lines, using digital subscriber line technology, and for cable television systems, using DOCSIS technology.
=== Firewalls ===
A firewall is a network device or software for controlling network security and access rules. Firewalls are inserted in connections between secure internal networks and potentially insecure external networks such as the Internet. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
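As a toy sketch of the rule model just described, the following Python snippet checks traffic against an ordered rule list with a default-deny final rule; the addresses, ports, and rule format are illustrative assumptions:

```python
# A toy packet filter: rules are checked in order, and anything from an
# unrecognized source falls through to the final default-deny rule.

import ipaddress

RULES = [
    ("allow", ipaddress.ip_network("192.168.1.0/24"), 443),  # internal HTTPS clients
    ("allow", ipaddress.ip_network("203.0.113.0/24"), 25),   # partner mail relay
    ("deny", ipaddress.ip_network("0.0.0.0/0"), None),       # default: reject
]

def decide(src: str, port: int) -> str:
    addr = ipaddress.ip_address(src)
    for action, network, rule_port in RULES:
        if addr in network and rule_port in (None, port):
            return action
    return "deny"

print(decide("192.168.1.7", 443))   # allow
print(decide("198.51.100.9", 443))  # deny (unrecognized source)
```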
== Communication protocols ==
A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.
In a protocol stack, often constructed per the OSI model, communications functions are divided up into protocol layers, where each layer leverages the services of the layer below it until the lowest layer controls the hardware that sends information across the media. The use of protocol layering is ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
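A minimal sketch of this layering, using Python's standard socket module: the application-layer HTTP request rides on a TCP connection, while IP and the link layer are handled by the operating system (the snippet assumes network access to example.com):

```python
# HTTP (application layer) over TCP (transport layer); IP and the link
# layer below are provided by the operating system's network stack.

import socket

with socket.create_connection(("example.com", 80)) as s:  # TCP over IP
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := s.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```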
There are many communication protocols, a few of which are described below.
=== Common protocols ===
==== Internet protocol suite ====
The Internet protocol suite, also called TCP/IP, is the foundation of all modern networking. It offers connection-less and connection-oriented services over an inherently unreliable network traversed by datagram transmission using Internet protocol (IP). At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability. The Internet protocol suite is the defining set of protocols for the Internet.
==== IEEE 802 ====
IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model.
For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".
===== Ethernet =====
Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.
===== Wireless LAN =====
Wireless LAN based on the IEEE 802.11 standards, also widely known as WLAN or WiFi, is probably the most well-known member of the IEEE 802 protocol family for home users today. IEEE 802.11 shares many properties with wired Ethernet.
==== SONET/SDH ====
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support circuit-switched digital telephony. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.
==== Asynchronous Transfer Mode ====
Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user.
==== Cellular standards ====
There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN).
=== Routing ===
Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks.
In packet-switched networks, routing protocols direct packet forwarding through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though because they lack specialized hardware, may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.
Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks.
== Geographic scale ==
Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale.
=== Nanoscale network ===
A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for other communication techniques.
=== Personal area network ===
A personal area network (PAN) is a computer network used for communication among computers and other devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
=== Local area network ===
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Wired LANs are most commonly based on Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.
A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s, standardized by IEEE in 2010.
=== Home area network ===
A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable Internet access or digital subscriber line (DSL) provider.
=== Storage area network ===
A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the storage appears as locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
=== Campus area network ===
A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.).
For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.
=== Backbone network ===
A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.
For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. Another example of a backbone network is the Internet backbone, which is a massive, global system of fiber-optic cable and optical networking that carries the bulk of data between wide area networks (WANs), metro, regional, national and transoceanic networks.
=== Metropolitan area network ===
A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area.
=== Wide area network ===
A wide area network (WAN) is a computer network that covers a large geographic area such as a city, country, or spans even intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer.
=== Enterprise private network ===
An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.
=== Virtual private network ===
A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
A VPN may have best-effort performance or a defined service level agreement (SLA) between the VPN customer and the VPN service provider.
=== Global area network ===
A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.
== Organizational scope ==
Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.
=== Intranet ===
An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information.
=== Extranet ===
An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology.
=== Internet ===
An internetwork is the connection of multiple different types of computer networks to form a single computer network, using higher-layer network protocols and connecting the networks together with routers.
The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services.
Participants on the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
=== Darknet ===
A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers — sometimes called friends (F2F) — using non-standard protocols and ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.
== Network service ==
Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.
The World Wide Web, E-mail, printing and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) give names for IP addresses (people remember names like nm.lan better than numbers like 210.121.67.18), while the Dynamic Host Configuration Protocol (DHCP) ensures that the equipment on the network has a valid IP address.
Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
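As a small illustration of a name-resolution service, the following Python snippet asks the system resolver (which speaks DNS) to translate a name into an IP address; it assumes a working resolver and network access:

```python
# The resolver maps a human-readable name to an IP address, as DNS does.

import socket

print(socket.gethostbyname("example.com"))  # e.g. "93.184.216.34"
```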
== Network performance ==
=== Bandwidth ===
Bandwidth in bit/s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (using, for example, bandwidth allocation protocol and dynamic bandwidth allocation).
=== Network delay ===
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay:
Processing delay – time it takes a router to process the packet header
Queuing delay – time the packet spends in routing queues
Transmission delay – time it takes to push the packet's bits onto the link
Propagation delay – time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds.
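A worked example, with assumed but plausible numbers, shows how the four components above add up for a single 1500-byte packet on a 100 Mbit/s link 100 km long:

```python
# Summing the four delay components for one 1500-byte packet; the
# processing and queuing figures are illustrative assumptions.

packet_bits = 1500 * 8          # packet size in bits
link_rate = 100e6               # bits per second
distance = 100e3                # metres
signal_speed = 2e8              # roughly 2/3 the speed of light in copper or fibre

processing = 10e-6                       # assumed router processing time
queuing = 50e-6                          # assumed time spent in routing queues
transmission = packet_bits / link_rate   # time to push the bits onto the link
propagation = distance / signal_speed    # time for the signal to cross the medium

total = processing + queuing + transmission + propagation
print(f"total one-way delay: {total * 1e6:.0f} microseconds")  # 680 microseconds
```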
=== Performance metrics ===
The parameters that affect performance typically include throughput, jitter, bit error rate and latency.
In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo.
In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements.
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.
=== Network congestion ===
Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service. When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers.
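The exponential backoff technique can be sketched as follows; the slot time and the cap of 10 doublings follow classic Ethernet, but the snippet is an illustration rather than a faithful MAC-layer implementation:

```python
# After each successive collision, wait a random number of slot times
# drawn from a window that doubles, capped at 10 doublings.

import random

SLOT_TIME = 51.2e-6  # seconds; the classic 10 Mbit/s Ethernet slot time

def backoff_delay(collisions: int) -> float:
    k = min(collisions, 10)              # cap the window growth
    slots = random.randint(0, 2**k - 1)  # pick a slot in [0, 2^k - 1]
    return slots * SLOT_TIME

for attempt in range(1, 5):
    print(f"after collision {attempt}: wait {backoff_delay(attempt) * 1e6:.1f} us")
```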
Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard.
For the Internet, RFC 2914 addresses the subject of congestion control in detail.
=== Network resilience ===
Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."
== Security ==
Computer networks are also used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.
=== Network security ===
Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals.
=== Network surveillance ===
Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.
Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.
Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.
However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".
=== End to end encryption ===
End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.
Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.
Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox.
The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent.
=== SSL/TLS ===
The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called secure socket layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an extensive list of trusted root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session now takes place in an encrypted tunnel between the SSL server and the SSL client.
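A sketch of this handshake as seen by a modern client, using Python's standard ssl module (which checks the server certificate against preloaded roots and negotiates the symmetric session cipher); it assumes network access to example.com:

```python
# Establish a TLS session: certificate verification against trusted roots,
# then a negotiated symmetric cipher for the encrypted tunnel.

import socket
import ssl

context = ssl.create_default_context()  # loads the system's trusted root certificates

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # the negotiated symmetric cipher suite
```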
== Views of networks ==
Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest is less tied to a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and who possibly also communicate via peer-to-peer technologies.
Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application-layer gateways) that interconnect via the transmission media. Logical networks, called, in the TCP/IP architecture, subnets, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using VLANs.
Users and administrators are aware, to varying extents, of a network's trust and scope characteristics. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).
Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, that share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
Over the Internet, there can be business-to-business, business-to-consumer and consumer-to-consumer communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure VPN technology.
== See also ==
== References ==
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22.
== Further reading ==
Kurose, James F.; Ross, Keith W.: Computer Networking: A Top-Down Approach Featuring the Internet, Pearson Education, 2005.
Stallings, William: Computer Networking with Internet Protocols and Technology, Pearson Education, 2004.
Bertsekas, Dimitri; Gallager, Robert: Data Networks, Prentice Hall, 1992. | Wikipedia/Computer_networking |
In computer science, an instruction set architecture (ISA) is an abstract model that generally defines how software controls the CPU in a computer or a family of computers. A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation of that ISA.
In general, an ISA defines the supported instructions, data types, registers, the hardware support for managing main memory, fundamental features (such as the memory consistency, addressing modes, virtual memory), and the input/output model of implementations of the ISA.
An ISA specifies the behavior of machine code running on implementations of that ISA in a fashion that does not depend on the characteristics of that implementation, providing binary compatibility between implementations. This enables multiple implementations of an ISA that differ in characteristics such as performance, physical size, and monetary cost (among other things), but that are capable of running the same machine code, so that a lower-performance, lower-cost machine can be replaced with a higher-cost, higher-performance machine without having to replace software. It also enables the evolution of the microarchitectures of the implementations of that ISA, so that a newer, higher-performance implementation of an ISA can run software that runs on previous generations of implementations.
If an operating system maintains a standard and compatible application binary interface (ABI) for a particular ISA, machine code will run on future implementations of that ISA and operating system. However, if an ISA supports running multiple operating systems, it does not guarantee that machine code for one operating system will run on another operating system, unless the first operating system supports running machine code built for the other operating system.
An ISA can be extended by adding instructions or other capabilities, or adding support for larger addresses and data values; an implementation of the extended ISA will still be able to execute machine code for versions of the ISA without those extensions. Machine code using those extensions will only run on implementations that support those extensions.
The binary compatibility that they provide makes ISAs one of the most fundamental abstractions in computing.
== Overview ==
An instruction set architecture is distinguished from a microarchitecture, which is the set of processor design techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but they have radically different internal designs.
The concept of an architecture, distinct from the design of a specific machine, was developed by Fred Brooks at IBM during the design phase of System/360. Prior to NPL [System/360], the company's computer designers had been free to honor cost objectives not only by selecting technologies but also by fashioning functional and architectural refinements. The SPREAD compatibility objective, in contrast, postulated a single architecture for a series of five processors spanning a wide range of cost and performance. None of the five engineering design teams could count on being able to bring about adjustments in architectural specifications as a way of easing difficulties in achieving cost and performance objectives. (p. 137)
Some virtual machines that support bytecode as their ISA, such as Smalltalk, the Java virtual machine, and Microsoft's Common Language Runtime, implement this by translating the bytecode for commonly used code paths into native machine code. In addition, these virtual machines execute less frequently used code paths by interpretation (see Just-in-time compilation). Transmeta implemented the x86 instruction set atop very long instruction word (VLIW) processors in this fashion.
== Classification of ISAs ==
An ISA may be classified in a number of different ways. A common classification is by architectural complexity. A complex instruction set computer (CISC) has many specialized instructions, some of which may only be rarely used in practical programs. A reduced instruction set computer (RISC) simplifies the processor by efficiently implementing only the instructions that are frequently used in programs, while the less common operations are implemented as subroutines, having their resulting additional processor execution time offset by infrequent use.
Other types include VLIW architectures, and the closely related long instruction word (LIW) and explicitly parallel instruction computing (EPIC) architectures. These architectures seek to exploit instruction-level parallelism with less hardware than RISC and CISC by making the compiler responsible for instruction issue and scheduling.
Architectures with even less complexity have been studied, such as the minimal instruction set computer (MISC) and one-instruction set computer (OISC). These are theoretically important types, but have not been commercialized.
== Instructions ==
Machine language is built up from discrete statements or instructions. Depending on the processor architecture, a given instruction may specify:
opcode (the instruction to be performed) e.g. add, copy, test
any explicit operands:
registers
literal/constant values
addressing modes used to access memory
More complex operations are built up by combining these simple instructions, which are executed sequentially, or as otherwise directed by control flow instructions.
=== Instruction types ===
Examples of operations common to many instruction sets include:
==== Data handling and memory operations ====
Set a register to a fixed constant value.
Copy data from a memory location or a register to a memory location or a register (a machine instruction is often called move; however, the term is misleading). They are used to store the contents of a register, the contents of another memory location or the result of a computation, or to retrieve stored data to perform a computation on it later. They are often called load or store operations.
Read or write data from hardware devices.
==== Arithmetic and logic operations ====
Add, subtract, multiply, or divide the values of two registers, placing the result in a register, possibly setting one or more condition codes in a status register.
Increment or decrement a value (in some ISAs), saving an operand fetch in trivial cases.
Perform bitwise operations, e.g., taking the conjunction and disjunction of corresponding bits in a pair of registers, taking the negation of each bit in a register.
Compare two values in registers (for example, to see if one is less, or if they are equal).
Floating-point instructions for arithmetic on floating-point numbers.
==== Control flow operations ====
Branch to another location in the program and execute instructions there.
Conditionally branch to another location if a certain condition holds.
Indirectly branch to another location.
Skip one or more instructions, depending on conditions.
Trap: explicitly cause an interrupt, either conditionally or unconditionally.
Call another block of code, while saving, e.g., the location of the next instruction, as a point to return to.
==== Coprocessor instructions ====
Load/store data to and from a coprocessor or exchanging with CPU registers.
Perform coprocessor operations.
=== Complex instructions ===
Processors may include "complex" instructions in their instruction set. A single "complex" instruction does something that may take many instructions on other computers. Such instructions are typified by instructions that take multiple steps, control multiple functional units, or otherwise appear on a larger scale than the bulk of simple instructions implemented by the given processor. Some examples of "complex" instructions include:
transferring multiple registers to or from memory (especially the stack) at once
moving large blocks of memory (e.g. string copy or DMA transfer)
complicated integer and floating-point arithmetic (e.g. square root, or transcendental functions such as logarithm, sine, cosine, etc.)
SIMD instructions, a single instruction performing an operation on many homogeneous values in parallel, possibly in dedicated SIMD registers
performing an atomic test-and-set instruction or other read–modify–write atomic instruction
instructions that perform ALU operations with an operand from memory rather than a register
Complex instructions are more common in CISC instruction sets than in RISC instruction sets, but RISC instruction sets may include them as well. RISC instruction sets generally do not include ALU operations with memory operands, or instructions to move large blocks of memory, but most RISC instruction sets include SIMD or vector instructions that perform the same arithmetic operation on multiple pieces of data at the same time. SIMD instructions can manipulate large vectors and matrices in minimal time. SIMD instructions allow easy parallelization of algorithms commonly involved in sound, image, and video processing. Various SIMD implementations have been brought to market under trade names such as MMX, 3DNow!, and AltiVec.
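The idea behind SIMD can be illustrated in software with a "SIMD within a register" trick: the following Python sketch performs four independent 8-bit additions inside one 32-bit integer operation, using masks to keep carries from crossing lane boundaries (an illustration of the concept, not an actual SIMD instruction):

```python
# Four independent 8-bit lane additions carried out in one 32-bit
# integer operation; masking stops carries crossing lane boundaries.

LOW7 = 0x7F7F7F7F   # the low 7 bits of each 8-bit lane
HIGH1 = 0x80808080  # the top bit of each lane

def add_4x8(a: int, b: int) -> int:
    # add the low 7 bits of each lane, then fix up each lane's top bit
    return ((a & LOW7) + (b & LOW7)) ^ ((a ^ b) & HIGH1)

a = 0x010203FF  # lanes 1, 2, 3, 255
b = 0x01010101  # add 1 to every lane
print(hex(add_4x8(a, b)))  # 0x2030400 -> lanes 2, 3, 4, 0 (255 + 1 wraps to 0)
```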
=== Instruction encoding ===
On traditional architectures, an instruction includes an opcode that specifies the operation to perform, such as add contents of memory to register—and zero or more operand specifiers, which may specify registers, memory locations, or literal data. The operand specifiers may have addressing modes determining their meaning or may be in fixed fields. In very long instruction word (VLIW) architectures, which include many microcode architectures, multiple simultaneous opcodes and operands are specified in a single instruction.
Some exotic instruction sets do not have an opcode field, such as transport triggered architectures (TTA), only operand(s).
Most stack machines have "0-operand" instruction sets in which arithmetic and logical operations lack any operand specifier fields; only instructions that push operands onto the evaluation stack or that pop operands from the stack into variables have operand specifiers. The instruction set carries out most ALU actions with postfix (reverse Polish notation) operations that work only on the expression stack, not on data registers or arbitrary main memory cells. This can be very convenient for compiling high-level languages, because most arithmetic expressions can be easily translated into postfix notation.
Conditional instructions often have a predicate field—a few bits that encode the specific condition to cause an operation to be performed rather than not performed. For example, a conditional branch instruction will transfer control if the condition is true, so that execution proceeds to a different part of the program, and not transfer control if the condition is false, so that execution continues sequentially. Some instruction sets also have conditional moves, so that the move will be executed, and the data stored in the target location, if the condition is true, and not executed, and the target location not modified, if the condition is false. Similarly, IBM z/Architecture has a conditional store instruction. A few instruction sets include a predicate field in every instruction. Having predicates for non-branch instructions is called predication.
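The effect of a conditional move can be sketched in Python with a mask-based selection, mirroring how the data path updates the target only when the predicate is true (an illustration of the semantics, not of any particular ISA):

```python
# Select a result by condition mask, with no branch in the data path.

def conditional_move(condition: bool, new_value: int, old_value: int) -> int:
    mask = -int(condition)  # all ones (-1) if true, all zeros (0) if false
    return (new_value & mask) | (old_value & ~mask)

print(conditional_move(True, 42, 7))   # 42: predicate true, the move happens
print(conditional_move(False, 42, 7))  # 7: predicate false, target unchanged
```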
==== Number of operands ====
Instruction sets may be categorized by the maximum number of operands explicitly specified in instructions.
(In the examples that follow, a, b, and c are (direct or calculated) addresses referring to memory cells, while reg1 and so on refer to machine registers.)
C = A+B
0-operand (zero-address machines), so called stack machines: All arithmetic operations take place using the top one or two positions on the stack: push a, push b, add, pop c.
C = A+B needs four instructions. For stack machines, the terms "0-operand" and "zero-address" apply to arithmetic instructions, but not to all instructions, as 1-operand push and pop instructions are used to access memory.
1-operand (one-address machines), so called accumulator machines, include early computers and many small microcontrollers: most instructions specify a single right operand (that is, constant, a register, or a memory location), with the implicit accumulator as the left operand (and the destination if there is one): load a, add b, store c.
C = A+B needs three instructions.
2-operand — many CISC and RISC machines fall under this category:
CISC — move A to C; then add B to C.
C = A+B needs two instructions. This effectively 'stores' the result without an explicit store instruction.
CISC — Often machines are limited to one memory operand per instruction: load a,reg1; add b,reg1; store reg1,c; This requires a load/store pair for any memory movement regardless of whether the add result is an augmentation stored to a different place, as in C = A+B, or the same memory location: A = A+B.
C = A+B needs three instructions.
RISC — Requiring explicit memory loads, the instructions would be: load a,reg1; load b,reg2; add reg1,reg2; store reg2,c.
C = A+B needs four instructions.
3-operand, allowing better reuse of data:
CISC — It becomes either a single instruction: add a,b,c
C = A+B needs one instruction.
CISC — Or, on machines limited to two memory operands per instruction, move a,reg1; add reg1,b,c;
C = A+B needs two instructions.
RISC — arithmetic instructions use registers only, so explicit 2-operand load/store instructions are needed: load a,reg1; load b,reg2; add reg1+reg2->reg3; store reg3,c;
C = A+B needs four instructions.
Unlike 2-operand or 1-operand, this leaves all three values a, b, and c in registers available for further reuse.
more operands—some CISC machines permit a variety of addressing modes that allow more than 3 operands (registers or memory accesses), such as the VAX "POLY" polynomial evaluation instruction.
Due to the large number of bits needed to encode the three registers of a 3-operand instruction, RISC architectures that have 16-bit instructions are invariably 2-operand designs, such as the Atmel AVR, TI MSP430, and some versions of ARM Thumb. RISC architectures that have 32-bit instructions are usually 3-operand designs, such as the ARM, AVR32, MIPS, Power ISA, and SPARC architectures.
Each instruction specifies some number of operands (registers, memory locations, or immediate values) explicitly. Some instructions give one or both operands implicitly, such as by being stored on top of the stack or in an implicit register. If some of the operands are given implicitly, fewer operands need be specified in the instruction. When a "destination operand" explicitly specifies the destination, an additional operand must be supplied. Consequently, the number of operands encoded in an instruction may differ from the mathematically necessary number of arguments for a logical or arithmetic operation (the arity). Operands are either encoded in the "opcode" representation of the instruction, or else are given as values or addresses following the opcode.
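The contrast between these styles can be sketched by carrying out C = A + B under two of the models above; the Python below illustrates the instruction sequences, not any real machine:

```python
# C = A + B on a 0-operand stack machine (four instructions) and on a
# 3-operand register machine (also four instructions, but all three
# values remain in registers for reuse).

mem = {"a": 2, "b": 3, "c": 0}

# 0-operand stack machine: push a, push b, add, pop c
stack = []
stack.append(mem["a"])                   # push a
stack.append(mem["b"])                   # push b
stack.append(stack.pop() + stack.pop())  # add (operands implicit on the stack)
mem["c"] = stack.pop()                   # pop c

# 3-operand RISC machine: load a,reg1; load b,reg2; add reg1,reg2,reg3; store reg3,c
reg = [0] * 4
reg[1] = mem["a"]          # load a,reg1
reg[2] = mem["b"]          # load b,reg2
reg[3] = reg[1] + reg[2]   # add reg1,reg2,reg3 (all operands explicit)
mem["c"] = reg[3]          # store reg3,c

print(mem["c"])  # 5 in both cases
```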
=== Register pressure ===
Register pressure measures the availability of free registers at any point in time during the program execution. Register pressure is high when a large number of the available registers are in use; thus, the higher the register pressure, the more often the register contents must be spilled into memory. Increasing the number of registers in an architecture decreases register pressure but increases the cost.
While embedded instruction sets such as Thumb suffer from extremely high register pressure because they have small register sets, general-purpose RISC ISAs like MIPS and Alpha enjoy low register pressure. CISC ISAs like x86-64 offer low register pressure despite having smaller register sets. This is due to the many addressing modes and optimizations (such as sub-register addressing, memory operands in ALU instructions, absolute addressing, PC-relative addressing, and register-to-register spills) that CISC ISAs offer.
=== Instruction length ===
The size or length of an instruction varies widely, from as little as four bits in some microcontrollers to many hundreds of bits in some VLIW systems. Processors used in personal computers, mainframes, and supercomputers have minimum instruction sizes between 8 and 64 bits. The longest possible instruction on x86 is 15 bytes (120 bits). Within an instruction set, different instructions may have different lengths. In some architectures, notably most reduced instruction set computers (RISC), instructions are a fixed length, typically corresponding with that architecture's word size. In other architectures, instructions have variable length, typically integral multiples of a byte or a halfword. Some, such as ARM with its Thumb extension, have mixed variable encoding, that is, two fixed encodings (usually 32-bit and 16-bit) whose instructions cannot be mixed freely but must be switched between on a branch (or on an exception boundary in ARMv8).
Fixed-length instructions are less complicated to handle than variable-length instructions for several reasons (not having to check whether an instruction straddles a cache line or virtual memory page boundary, for instance), and are therefore somewhat easier to optimize for speed.
=== Code density ===
In early 1960s computers, main memory was expensive and very limited, even on mainframes. Minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the size of the instructions needed to perform a particular task, the code density, was an important characteristic of any instruction set. It remained important on the initially-tiny memories of minicomputers and then microprocessors. Density remains important today, for smartphone applications, applications downloaded into browsers over slow Internet connections, and in ROMs for embedded applications. A more general advantage of increased density is improved effectiveness of caches and instruction prefetch.
Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named Complex Instruction Set Computers, CISC). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using addressing modes such as direct, indirect, indexed, etc.). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment, etc. Software-implemented instruction sets may have even more complex and powerful instructions.
Reduced instruction-set computers, RISC, were first widely implemented during a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory location into a register. A RISC instruction set normally has a fixed instruction length, whereas a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories.
Certain embedded RISC ISAs like Thumb and AVR32 typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which is then unpacked at the decode stage and executed as two instructions.
Minimal instruction set computers (MISC) are commonly a form of stack machine, where there are few separate instructions (8–32), so that multiple instructions can be fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA (field-programmable gate array) or in a multi-core form. The code density of MISC is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.
There has been research into executable compression as a mechanism for improving code density. The mathematics of Kolmogorov complexity describes the challenges and limits of this.
In practice, code density is also dependent on the compiler. Most optimizing compilers have options that control whether to optimize code generation for execution speed or for code density. For instance GCC has the option -Os to optimize for small machine code size, and -O3 to optimize for execution speed at the cost of larger machine code.
=== Representation ===
The instructions constituting a program are rarely specified using their internal, numeric form (machine code); they may be specified by programmers using an assembly language or, more commonly, may be generated from high-level programming languages by compilers.
== Design ==
The design of instruction sets is a complex issue. Historically, the microprocessor passed through two broad stages. The first was the CISC (complex instruction set computer), which had many different instructions. In the 1970s, however, researchers at IBM and elsewhere found that many instructions in the set could be eliminated. The result was the RISC (reduced instruction set computer), an architecture that uses a smaller set of instructions. A simpler instruction set may offer the potential for higher speeds, reduced processor size, and reduced power consumption. However, a more complex set may optimize common operations, improve memory and cache efficiency, or simplify programming.
Some instruction set designers reserve one or more opcodes for some kind of system call or software interrupt. For example, the MOS Technology 6502 uses 00H, the Zilog Z80 uses the eight codes C7H, CFH, D7H, DFH, E7H, EFH, F7H, and FFH, while the Motorola 68000 uses codes in the range A000H..AFFFH.
Fast virtual machines are much easier to implement if an instruction set meets the Popek and Goldberg virtualization requirements.
The NOP slide used in immunity-aware programming is much easier to implement if the "unprogrammed" state of the memory is interpreted as a NOP.
On systems with multiple processors, non-blocking synchronization algorithms are much easier to implement if the instruction set includes support for something such as "fetch-and-add", "load-link/store-conditional" (LL/SC), or "atomic compare-and-swap".
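Compare-and-swap semantics can be sketched as follows. Python exposes no hardware CAS, so the snippet emulates the all-or-nothing update with a lock, purely to illustrate the retry loop that a lock-free counter built on the real instruction would use:

```python
# Emulated compare-and-swap and the retry loop typical of lock-free code.

import threading

class Cell:
    def __init__(self, value: int):
        self.value = value
        self._guard = threading.Lock()  # stands in for hardware atomicity

    def compare_and_swap(self, expected: int, new: int) -> bool:
        with self._guard:
            if self.value == expected:
                self.value = new
                return True
            return False

def atomic_increment(cell: Cell) -> None:
    while True:                        # retry until the CAS succeeds
        seen = cell.value
        if cell.compare_and_swap(seen, seen + 1):
            return

counter = Cell(0)
threads = [threading.Thread(target=lambda: [atomic_increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # 4000
```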
== Instruction set implementation ==
A given instruction set can be implemented in a variety of ways. All ways of implementing a particular instruction set provide the same programming model, and all implementations of that instruction set are able to run the same executables. The various ways of implementing an instruction set give different tradeoffs between cost, performance, power consumption, size, etc.
When designing the microarchitecture of a processor, engineers use blocks of "hard-wired" electronic circuitry (often designed separately) such as adders, multiplexers, counters, registers, ALUs, etc. Some kind of register transfer language is then often used to describe the decoding and sequencing of each instruction of an ISA using this physical microarchitecture.
There are two basic ways to build a control unit to implement this description (although many designs use middle ways or compromises):
Some computer designs "hardwire" the complete instruction set decoding and sequencing (just like the rest of the microarchitecture).
Other designs employ microcode routines or tables (or both) to do this, using ROMs or writable RAMs (writable control store), PLAs, or both.
Some microcoded CPU designs with a writable control store use it to allow the instruction set to be changed (for example, the Rekursiv processor and the Imsys Cjip).
CPUs designed for reconfigurable computing may use field-programmable gate arrays (FPGAs).
An ISA can also be emulated in software by an interpreter. Naturally, due to the interpretation overhead, this is slower than directly running programs on the emulated hardware, unless the hardware running the emulator is an order of magnitude faster. Today, it is common practice for vendors of new ISAs or microarchitectures to make software emulators available to software developers before the hardware implementation is ready.
Often the details of the implementation have a strong influence on the particular instructions selected for the instruction set. For example, many implementations of the instruction pipeline only allow a single memory load or memory store per instruction, leading to a load–store architecture (RISC). For another example, some early ways of implementing the instruction pipeline led to a delay slot.
The demands of high-speed digital signal processing have pushed in the opposite direction—forcing instructions to be implemented in a particular way. For example, to perform digital filters fast enough, the MAC instruction in a typical digital signal processor (DSP) must use a kind of Harvard architecture that can fetch an instruction and two data words simultaneously, and it requires a single-cycle multiply–accumulate multiplier.
== See also ==
Comparison of instruction set architectures
Compressed instruction set
Computer architecture
Emulator
Instruction set simulator
Micro-operation
No instruction set computing
OVPsim – full systems simulator providing ability to create/model/emulate any instruction set using C and standard APIs
Processor design
Simulation
Register transfer language (RTL)
== References ==
== Further reading ==
Bowen, Jonathan P. (July–August 1985). "Standard Microprocessor Programming Cards". Microprocessors and Microsystems. 9 (6): 274–290. doi:10.1016/0141-9331(85)90116-4.
Hennessy, John L.; Patterson, David A. (2003). Computer Architecture: A Quantitative Approach (Third ed.). Morgan Kaufmann Publishers. ISBN 1-55860-724-2. Retrieved 2023-03-04.
== External links ==
Media related to Instruction set architectures at Wikimedia Commons
Programming Textfiles: Bowen's Instruction Summary Cards
Mark Smotherman's Historical Computer Designs Page
In computer science, synchronization is the task of coordinating multiple processes to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of action.
== Motivation ==
The need for synchronization arises not only in multi-processor systems but with any kind of concurrent processes, even on single-processor systems. Some of the main needs for synchronization are listed below:
Forks and Joins: When a job arrives at a fork point, it is split into N sub-jobs which are then serviced by N tasks. After being serviced, each sub-job waits until all other sub-jobs are done processing. Then, they are joined again and leave the system. Thus, parallel programming requires synchronization, as all the parallel processes wait for several other processes to occur.
Producer-Consumer: In a producer-consumer relationship, the consumer process is dependent on the producer process until the necessary data has been produced (a minimal sketch follows this list).
Exclusive use resources: When multiple processes depend on a resource and need to access it at the same time, the operating system needs to ensure that only one process accesses it at a given point in time. This reduces concurrency.
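As referenced in the list above, here is a minimal Java sketch of the producer-consumer relationship: the bounded BlockingQueue supplies the synchronization, with put blocking when the buffer is full and take blocking until data has been produced. The buffer size and item count are arbitrary choices for the example.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) buffer.put(i); // blocks if the buffer is full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    int item = buffer.take();                // blocks until data is produced
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
    }
}
```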
== Requirements ==
Thread synchronization is defined as a mechanism which ensures that two or more concurrent processes or threads do not simultaneously execute some particular program segment, known as the critical section. Access to the critical section is controlled by using synchronization techniques. When one thread starts executing the critical section (a serialized segment of the program), the other thread should wait until the first thread finishes. If proper synchronization techniques are not applied, a race condition may occur, in which the values of variables become unpredictable and vary depending on the timing of context switches between the processes or threads.
For example, suppose that there are three processes, namely 1, 2, and 3. All three of them are concurrently executing, and they need to share a common resource (critical section). Synchronization should be used here to avoid any conflicts when accessing this shared resource. Hence, when Process 1 and Process 2 both try to access that resource, it should be assigned to only one process at a time. If it is assigned to Process 1, the other process (Process 2) needs to wait until Process 1 frees that resource.
Another synchronization requirement which needs to be considered is the order in which particular processes or threads should be executed. For example, one cannot board a plane before buying a ticket. Similarly, one cannot check e-mails before validating the appropriate credentials (for example, user name and password). In the same way, an ATM will not provide any service until it receives a correct PIN.
Other than mutual exclusion, synchronization also deals with the following:
deadlock, which occurs when many processes are waiting for a shared resource (critical section) which is being held by some other process. In this case, the processes just keep waiting and execute no further;
starvation, which occurs when a process is waiting to enter the critical section, but other processes monopolize the critical section, and the first process is forced to wait indefinitely;
priority inversion, which occurs when a high-priority process is in the critical section, and it is interrupted by a medium-priority process. This violation of priority rules can happen under certain circumstances and may lead to serious consequences in real-time systems;
busy waiting, which occurs when a process frequently polls to determine if it has access to a critical section. This frequent polling robs processing time from other processes.
== Minimization ==
One of the challenges for exascale algorithm design is to minimize or reduce synchronization.
Synchronization takes more time than computation, especially in distributed computing. Reducing synchronization has drawn attention from computer scientists for decades, and it has become an increasingly significant problem as the gap between the improvement of computing speed and latency widens. Experiments have shown that (global) communication due to synchronization on distributed computers takes a dominant share of the time in a sparse iterative solver. This problem has received increasing attention after the emergence of a new benchmark metric, the High Performance Conjugate Gradient (HPCG), for ranking the top 500 supercomputers.
== Problems ==
The following are some classic problems of synchronization:
The Producer–Consumer Problem (also called The Bounded Buffer Problem);
The Readers–Writers Problem;
The Dining Philosophers Problem.
These problems are used to test nearly every newly proposed synchronization scheme or primitive.
=== Overhead ===
Synchronization overheads can significantly impact performance in parallel computing environments, where merging data from multiple processes can incur costs substantially higher—often by two or more orders of magnitude—than processing the same data on a single thread, primarily due to the additional overhead of inter-process communication and synchronization mechanisms.
== Hardware synchronization ==
Many systems provide hardware support for critical section code.
On a single-processor (uniprocessor) system, interrupts can be disabled so that the currently running code executes without preemption; this approach does not scale and is very inefficient on multiprocessor systems.
"The key ability we require to implement synchronization in a multiprocessor is a set of hardware primitives with the ability to atomically read and modify a memory location. Without such a capability, the cost of building basic synchronization primitives will be too high and will increase as the processor count increases. There are a number of alternative formulations of the basic hardware primitives, all of which provide the ability to atomically read and modify a location, together with some way to tell if the read and write were performed atomically. These hardware primitives are the basic building blocks that are used to build a wide variety of user-level synchronization operations, including things such as locks and barriers. In general, architects do not expect users to employ the basic hardware primitives, but instead expect that the primitives will be used by system programmers to build a synchronization library, a process that is often complex and tricky." Many modern pieces of hardware provide such atomic instructions, two common examples being: test-and-set, which operates on a single memory word, and compare-and-swap, which swaps the contents of two memory words.
== Support in programming languages ==
In Java, one way to prevent thread interference and memory consistency errors is by prefixing a method signature with the synchronized keyword, in which case the lock of the declaring object is used to enforce synchronization. A second way is to wrap a block of code in a synchronized(someObject){...} section, which offers finer-grained control. This forces any thread to acquire the lock of someObject before it can execute the contained block. The lock is automatically released when the thread which acquired the lock leaves this block or enters a waiting state within the block. Any variable updates made by a thread in a synchronized block become visible to other threads when they similarly acquire the lock and execute the block. For either implementation, any object may be used to provide a lock because all Java objects have an intrinsic lock or monitor lock associated with them when instantiated.
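A minimal example of both forms described above; the class and field names are illustrative.

```java
public class SynchronizedCounter {
    private int count = 0;
    private final Object lock = new Object();

    // First form: the intrinsic lock of 'this' guards the whole method.
    public synchronized void increment() {
        count++;
    }

    // Second form: a synchronized block on an explicit lock object
    // offers finer-grained control over which statements are guarded.
    public void incrementBlock() {
        synchronized (lock) {
            count++;
        }
    }

    public synchronized int get() {
        return count;
    }
}
```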
Java synchronized blocks, in addition to enabling mutual exclusion and memory consistency, enable signaling—i.e. sending events from threads which have acquired the lock and are executing the code block to those which are waiting for the lock within the block. Java synchronized sections, therefore, combine the functionality of both mutexes and events to ensure synchronization. Such a construct is known as a synchronization monitor.
The .NET Framework also uses synchronization primitives. "Synchronization is designed to be cooperative, demanding that every thread follow the synchronization mechanism before accessing protected resources for consistent results. Locking, signaling, lightweight synchronization types, spinwait and interlocked operations are mechanisms related to synchronization in .NET."
Many programming languages support synchronization and entire specialized languages have been written for embedded application development where strictly deterministic synchronization is paramount.
== Implementation ==
=== Spinlocks ===
Another effective way of implementing synchronization is by using spinlocks. Before accessing any shared resource or piece of code, every processor checks a flag. If the flag is reset, then the processor sets the flag and continues executing the thread. But if the flag is set (locked), the thread keeps spinning in a loop, repeatedly checking whether the flag has been cleared. Spinlocks are effective only if the flag is likely to be cleared after a small number of cycles; otherwise, they cause performance problems, since the waiting wastes many processor cycles.
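A minimal spinlock of this kind can be sketched in Java with an atomic test-and-set flag; this is an illustration, not a production-quality lock.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until the flag is successfully changed from false (reset) to true (set).
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that this is a busy-wait loop (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false); // clear the flag so another thread may enter
    }
}
```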
=== Barriers ===
Barriers are simple to implement and provide good responsiveness. They are based on the concept of implementing wait cycles to provide synchronization. Consider three threads running simultaneously, starting from barrier 1. After time t, thread 1 reaches barrier 2, but it still has to wait for threads 2 and 3 to reach barrier 2, as it does not have the correct data. Once all the threads reach barrier 2, they all start again. After time t, thread 1 reaches barrier 3, but it will have to wait for threads 2 and 3 and the correct data again.
Thus, in barrier synchronization of multiple threads, there will always be a few threads that end up waiting for other threads, as in the above example, where thread 1 keeps waiting for threads 2 and 3. This results in severe degradation of process performance.
The barrier synchronization wait function for the ith thread can be represented as:
$(W_{\text{barrier}})_i = f((T_{\text{barrier}})_i, (R_{\text{thread}})_i)$
where $W_{\text{barrier}}$ is the wait time for a thread, $T_{\text{barrier}}$ is the number of threads that have arrived, and $R_{\text{thread}}$ is the arrival rate of threads.
Experiments show that 34% of the total execution time is spent in waiting for other slower threads.
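The barrier behaviour described above corresponds to java.util.concurrent.CyclicBarrier; a minimal sketch with three threads and two barrier phases follows (the thread count and number of phases are arbitrary choices for the example).

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierExample {
    public static void main(String[] args) {
        CyclicBarrier barrier = new CyclicBarrier(3); // all three threads must arrive

        for (int id = 1; id <= 3; id++) {
            final int threadId = id;
            new Thread(() -> {
                try {
                    for (int phase = 1; phase <= 2; phase++) {
                        System.out.println("thread " + threadId + " reached barrier " + phase);
                        barrier.await(); // fast threads wait here for the slower ones
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```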
=== Semaphores ===
Semaphores are signalling mechanisms which can allow one or more threads/processors to access a section. A semaphore maintains a counter initialized to a fixed value; each time a thread wishes to access the section, it decrements the counter. Similarly, when the thread leaves the section, the counter is incremented. If the counter is zero, the thread cannot access the section and gets blocked if it chooses to wait.
Some semaphores allow only one thread or process into the code section. Such semaphores are called binary semaphores and are very similar to a mutex. Here, if the value of the semaphore is 1, the thread is allowed access, and if the value is 0, access is denied.
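Both the counting and the binary variant are provided by java.util.concurrent.Semaphore; a minimal sketch follows (the permit counts are illustrative).

```java
import java.util.concurrent.Semaphore;

public class SemaphoreExample {
    // Counting semaphore: at most 3 threads may use the section at once.
    private static final Semaphore slots = new Semaphore(3);

    // A binary semaphore (initial value 1) behaves much like a mutex.
    private static final Semaphore mutex = new Semaphore(1);

    public static void useResource() throws InterruptedException {
        slots.acquire();      // decrements the counter; blocks if it is zero
        try {
            // ... access the shared section ...
        } finally {
            slots.release();  // increments the counter on the way out
        }
    }
}
```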
== Distributed transaction ==
In event-driven architectures, synchronous transactions can be achieved through a request-response paradigm, which can be implemented in two ways:
Creating two separate queues: one for requests and the other for replies. The event producer must wait until it receives the response.
Creating one dedicated ephemeral queue for each request.
== Mathematical foundations ==
Synchronization was originally a process-based concept whereby a lock could be obtained on an object. Its primary usage was in databases. There are two types of (file) lock: read-only and read–write. Read-only locks may be obtained by many processes or threads. Read–write locks are exclusive, as they may only be used by a single process/thread at a time.
Although locks were derived for file databases, data is also shared in memory between processes and threads. Sometimes more than one object (or file) is locked at a time. If the objects are not locked simultaneously, the lock acquisitions can overlap, causing a deadlock.
Java and Ada only have exclusive locks because they are thread based and rely on the compare-and-swap processor instruction.
An abstract mathematical foundation for synchronization primitives is given by the history monoid. There are also many higher-level theoretical devices, such as process calculi and Petri nets, which can be built on top of the history monoid.
== Examples ==
Following are some synchronization examples with respect to different platforms.
=== In Windows ===
Windows provides:
interrupt masks, which protect access to global resources (critical section) on uniprocessor systems;
spinlocks, which prevent, in multiprocessor systems, the spinlocking thread from being preempted;
dispatcher objects, which act like mutexes, semaphores, events, and timers.
=== In Linux ===
Linux provides:
semaphores;
spinlock;
barriers;
mutex;
readers–writer locks, for longer sections of code that are read very frequently but seldom modified;
read-copy-update (RCU).
Enabling and disabling of kernel preemption replaced spinlocks on uniprocessor systems. Prior to kernel version 2.6, Linux disabled interrupts to implement short critical sections. Since version 2.6, Linux has been fully preemptive.
=== In Solaris ===
Solaris provides:
semaphores
condition variables
adaptive mutexes – binary semaphores that are implemented differently depending upon the conditions
readers-writer locks
turnstiles – queues of threads that are waiting on an acquired lock
=== In Pthreads ===
Pthreads is a platform-independent API that provides:
mutexes;
condition variables;
readers–writer locks;
spinlocks;
barriers.
== See also ==
Futures and promises, synchronization mechanisms in pure functional paradigms
Memory barrier
== References ==
Schneider, Fred B. (1997). On concurrent programming. Springer-Verlag New York, Inc. ISBN 978-0-387-94942-0.
== External links ==
Anatomy of Linux synchronization methods at IBM developerWorks
The Little Book of Semaphores, by Allen B. Downey
Need of Process Synchronization
SPAA, the ACM Symposium on Parallelism in Algorithms and Architectures, is an academic conference in the fields of parallel computing and distributed computing. It is sponsored by the Association for Computing Machinery special interest groups SIGACT and SIGARCH, and it is organized in cooperation with the European Association for Theoretical Computer Science (EATCS).
== History ==
SPAA was first organised on 18–21 June 1989, in Santa Fe, New Mexico, United States. In 1989–2002, SPAA was known as Symposium on Parallel Algorithms and Architectures. In 2003, the name changed to Symposium on Parallelism in Algorithms and Architectures to reflect the extended scope of the conference.
In 2003 and 2007, SPAA was part of the Federated Computing Research Conference (FCRC). In 1998, 2005, 2009, and 2024, SPAA was co-located with the ACM Symposium on Principles of Distributed Computing (PODC).
== See also ==
The list of distributed computing conferences contains other academic conferences in parallel and distributed computing.
The list of computer science conferences contains other academic conferences in computer science.
== Notes ==
== External links ==
SPAA proceedings in ACM Digital Library.
SPAA proceedings information in DBLP.
This article is a list of notable unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known or when experts in the field disagree about proposed solutions.
== Computational complexity ==
P versus NP problem – The P vs NP problem is a major unsolved question in computer science that asks whether every problem whose solution can be quickly verified by a computer (NP) can also be quickly solved by a computer (P). This question has profound implications for fields such as cryptography, algorithm design, and computational theory.
What is the relationship between BQP and NP?
NC = P problem
NP = co-NP problem
P = BPP problem
P = PSPACE problem
L = NL problem
PH = PSPACE problem
L = P problem
L = RL problem
Unique games conjecture
Is the exponential time hypothesis true?
Is the strong exponential time hypothesis (SETH) true?
Do one-way functions exist?
Is public-key cryptography possible?
Log-rank conjecture
== Polynomial versus nondeterministic-polynomial time for specific algorithmic problems ==
Can integer factorization be done in polynomial time on a classical (non-quantum) computer?
Can the discrete logarithm be computed in polynomial time on a classical (non-quantum) computer?
Can the shortest vector of a lattice be computed in polynomial time on a classical or quantum computer?
Can the graph isomorphism problem be solved in polynomial time on a classical computer?
The graph isomorphism problem involves determining whether two finite graphs are isomorphic, meaning there is a one-to-one correspondence between their vertices and edges that preserves adjacency. While the problem is known to be in NP, it is not known whether it is NP-complete or solvable in polynomial time. This uncertainty places it in a unique complexity class, making it a significant open problem in computer science.
Is graph canonization polynomial-time equivalent to the graph isomorphism problem?
Can leaf powers and k-leaf powers be recognized in polynomial time?
Can parity games be solved in polynomial time?
Can the rotation distance between two binary trees be computed in polynomial time?
Can graphs of bounded clique-width be recognized in polynomial time?
Can one find a simple closed quasigeodesic on a convex polyhedron in polynomial time?
Can a simultaneous embedding with fixed edges for two given graphs be found in polynomial time?
Can the square-root sum problem be solved in polynomial time in the Turing machine model?
== Other algorithmic problems ==
The dynamic optimality conjecture: Do splay trees have a bounded competitive ratio?
Can a depth-first search tree be constructed in NC?
Can the fast Fourier transform be computed in o(n log n) time?
What is the fastest algorithm for multiplication of two n-digit numbers?
What is the lowest possible average-case time complexity of Shellsort with a deterministic fixed gap sequence?
Can 3SUM be solved in strongly sub-quadratic time, that is, in time $O(n^{2-\epsilon})$ for some $\epsilon > 0$?
Can the edit distance between two strings of length n be computed in strongly sub-quadratic time? (This is only possible if the strong exponential time hypothesis is false.)
Can X + Y sorting be done in $o(n^{2} \log n)$ time?
What is the fastest algorithm for matrix multiplication?
Can all-pairs shortest paths be computed in strongly sub-cubic time, that is, in time $O(V^{3-\epsilon})$ for some $\epsilon > 0$?
Can the Schwartz–Zippel lemma for polynomial identity testing be derandomized?
Does linear programming admit a strongly polynomial-time algorithm? (This is problem #9 in Smale's list of problems.)
How many queries are required for envy-free cake-cutting?
What is the algorithmic complexity of the minimum spanning tree problem? Equivalently, what is the decision tree complexity of the MST problem? The optimal algorithm to compute MSTs is known, but it relies on decision trees, so its complexity is unknown.
Gilbert–Pollak conjecture: Is the Steiner ratio of the Euclidean plane equal to $2/\sqrt{3}$?
== Programming language theory ==
Barendregt–Geuvers–Klop conjecture: Is every weakly normalizing pure type system also strongly normalizing?
== Other problems ==
Is the Aanderaa–Karp–Rosenberg conjecture true?
Černý conjecture: If a deterministic finite automaton with $n$ states has a synchronizing word, must it have one of length at most $(n-1)^{2}$?
Generalized star-height problem: Can all regular languages be expressed using generalized regular expressions with a limited nesting depth of Kleene stars?
Separating words problem: How many states are needed in a deterministic finite automaton that behaves differently on two given strings of length $n$?
What is the Turing completeness status of all unique elementary cellular automata?
Determine whether the length of the minimal uncompletable word of $M$ is polynomial in $l(M)$, or even in $sl(M)$. It is known that $M$ is a variable-length code if for all $u_{1},\ldots,u_{n},v_{1},\ldots,v_{m} \in M$, $u_{1} \cdots u_{n} = v_{1} \cdots v_{m}$ implies $n = m$ and $u_{i} = v_{i}$ for all $0 < i \leq n$. In such cases, we do not yet know if a polynomial bound exists. This is a possible weakening of the Restivo conjecture (already disproven in general, though upper bounds remain unknown).
Determine all positive integers $n$ such that the concatenation of $n$ and $n^{2}$ in base $b$ uses at most $k$ distinct characters, for fixed $b$ and $k$. Many other problems in coding theory are also listed among the unsolved problems in mathematics.
== References ==
== External links ==
Woeginger, Gerhard J. "Open problems around exact algorithms". Discrete Applied Mathematics. 156 (2008), 397–405.
The RTA list of open problems – Open problems in rewriting.
The TLCA List of Open Problems – Open problems in the area of typed lambda calculus.
In engineering, a function is interpreted as a specific process, action or task that a system is able to perform.
== In engineering design ==
In the lifecycle of engineering projects, a Requirements document and a Functional specification document are usually distinguished, produced one after the other. The Requirements document usually specifies the most important attributes of the requested system. In the Design specification documents, physical or software processes and systems are frequently the requested functions.
== In products ==
For advertising and marketing of technical products, the number of functions they can perform is often counted and used for promotion. For example, a calculator capable of the basic mathematical operations of addition, subtraction, multiplication, and division would be called a "four-function" model; when other operations are added, for example for scientific, financial, or statistical calculations, advertisers speak of "57 scientific functions", etc. A wristwatch with stopwatch and timer facilities would similarly claim a specified number of functions. To maximise the claim, trivial operations which do not significantly enhance the functionality of a product may be counted.
== References ==
== See also ==
Process
System
Utility
The ACM–IEEE Symposium on Logic in Computer Science (LICS) is an annual academic conference on the theory and practice of computer science in relation to mathematical logic. Extended versions of selected papers of each year's conference appear in renowned international journals such as Logical Methods in Computer Science and ACM Transactions on Computational Logic.
== History ==
LICS was originally sponsored solely by the IEEE, but as of the 2014 founding of the ACM Special Interest Group on Logic and Computation LICS has become the flagship conference of SIGLOG, under the joint sponsorship of ACM and IEEE.
From the third installment in 1988 until 2013, the cover page of the conference proceedings has featured an artwork entitled Irrational Tiling by Logical Quantifiers, by Alvy Ray Smith.
Since 1995, each year the Kleene award is given to the best student paper. In addition, since 2006, the LICS Test-of-Time Award is given annually to one among the twenty-year-old LICS papers that have best met the test of time.
== LICS Awards ==
=== Test-of-Time Award ===
Each year, since 2006, the LICS Test-of-Time Award recognizes those articles from LICS proceedings 20 years earlier, which have become influential.
==== 2006 ====
Leo Bachmair, Nachum Dershowitz, Jieh Hsiang, "Orderings for Equational Proofs"
E. Allen Emerson, Chin-Laung Lei, "Efficient Model Checking in Fragments of the Propositional Mu-Calculus (Extended Abstract)"
Moshe Y. Vardi, Pierre Wolper, "An Automata-Theoretic Approach to Automatic Program Verification (Preliminary Report)"
==== 2007 ====
Samson Abramsky, "Domain theory in Logical Form"
Robert Harper, Furio Honsell, Gordon D. Plotkin, "A Framework for Defining Logics"
==== 2008 ====
Martin Abadi, Leslie Lamport, "The existence of refinement mappings"
==== 2009 ====
Eugenio Moggi, "Computational lambda-calculus and monads"
==== 2010 ====
Rajeev Alur, Costas Courcoubetis, David L. Dill, "Model-checking for real-time systems"
Jerry R. Burch, Edmund Clarke, Kenneth L. McMillan, David L. Dill, James Hwang, "Symbolic model checking: 10^20 states and beyond"
Max Dauchet, Sophie Tison, "The theory of ground rewrite systems is decidable"
Peter Freyd, "Recursive types reduced to inductive types"
==== 2011 ====
Patrice Godefroid, Pierre Wolper, "A partial approach to model checking"
Joshua Hodas, Dale Miller, "Logic programming in a fragment of intuitionistic linear logic"
Dexter Kozen, "A completeness theorem for Kleene algebras and the algebra of regular events"
==== 2012 ====
Thomas Henzinger, Xavier Nicollin, Joseph Sifakis, Sergio Yovine, "Symbolic model checking for real-time systems"
Jean-Pierre Talpin, Pierre Jouvelot, "The type and effect discipline"
==== 2013 ====
Leo Bachmair, Harald Ganzinger, Uwe Waldmann, "Set constraints are the monadic class"
André Joyal, Mogens Nielsen, Glynn Winskel, "Bisimulation and open maps"
Benjamin C. Pierce, Davide Sangiorgi, "Typing and subtyping for mobile processes"
==== 2014 ====
Martin Hofmann, Thomas Streicher, "The groupoid model refutes uniqueness of identity proofs"
Dale Miller, "A multiple-conclusion meta-logic"
==== 2015 ====
Igor Walukiewicz, "Completeness of Kozen's Axiomatisation of the Propositional Mu-Calculus"
==== 2016 ====
Parosh A. Abdulla, Karlis Cerans, Bengt Jonsson, Yih-Kuen Tsay, "General decidability theorems for infinite-state systems"
Iliano Cervesato, Frank Pfenning, "A Linear Logical Framework"
==== 2017 ====
Richard Blute, Josée Desharnais, Abbas Edalat, Prakash Panangaden, "Bisimulation for Labelled Markov Processes"
Daniele Turi, Gordon D. Plotkin, "Towards a Mathematical Operational Semantics"
==== 2018 ====
Martín Abadi, Cédric Fournet, Georges Gonthier, "Secure Implementation of Channel Abstractions"
Samson Abramsky, Kohei Honda, Guy McCusker, "A Fully Abstract Game Semantics for General References"
==== 2019 ====
Marcelo P. Fiore, Gordon D. Plotkin, Daniele Turi, "Abstract Syntax and Variable Binding"
Murdoch Gabbay, Andrew M. Pitts, "A New Approach to Abstract Syntax Involving Binders"
==== 2020 ====
Luca de Alfaro, Thomas A. Henzinger, "Concurrent Omega-Regular Games"
Hiroshi Nakano, "A Modality for Recursion"
==== 2021 ====
Aaron Stump, Clark W. Barrett, David L. Dill, Jeremy R. Levitt, "A Decision Procedure for an Extensional Theory of Arrays"
Hongwei Xi, "Dependent Types for Program Termination Verification"
=== Kleene award ===
At each conference the Kleene award, in honour of S.C. Kleene, is given for the best student paper.
== See also ==
The list of computer science conferences contains other academic conferences in computer science.
== Notes ==
== External links ==
LICS home page
Evolutionary algorithms (EAs) reproduce essential elements of biological evolution in a computer algorithm in order to solve "difficult" problems, at least approximately, for which no exact or satisfactory solution methods are known. They belong to the class of metaheuristics and are a subset of population-based bio-inspired algorithms and evolutionary computation, which themselves are part of the field of computational intelligence. The mechanisms of biological evolution that an EA mainly imitates are reproduction, mutation, recombination and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators.
Evolutionary algorithms often perform well at approximating solutions to all types of problems because they ideally do not make any assumption about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes and planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor. In fact, this computational complexity is due to fitness function evaluation. Fitness approximation is one of the solutions to overcome this difficulty. However, seemingly simple EAs can often solve complex problems; therefore, there may be no direct link between algorithm complexity and problem complexity.
== Generic definition ==
The following is an example of a generic evolutionary algorithm:
Randomly generate the initial population of individuals, the first generation.
Evaluate the fitness of each individual in the population.
Check, if the goal is reached and the algorithm can be terminated.
Select individuals as parents, preferably of higher fitness.
Produce offspring with optional crossover (mimicking reproduction).
Apply mutation operations on the offspring.
Select individuals preferably of lower fitness for replacement with new individuals (mimicking natural selection).
Return to 2
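The generic scheme above can be made concrete in a few dozen lines. The following minimal Java sketch evolves bit strings toward the all-ones string (the illustrative "OneMax" toy problem); the choices of tournament selection, one-point crossover, and a per-bit mutation rate of 1/length are assumptions made for the example, not prescribed by the generic algorithm.

```java
import java.util.Random;

public class SimpleEA {
    static final Random RNG = new Random();
    static final int LEN = 32, POP = 50, GENS = 200;

    // Fitness: number of 1-bits (the "OneMax" toy problem).
    static int fitness(boolean[] x) {
        int f = 0;
        for (boolean b : x) if (b) f++;
        return f;
    }

    static boolean[] randomIndividual() {
        boolean[] x = new boolean[LEN];
        for (int i = 0; i < LEN; i++) x[i] = RNG.nextBoolean();
        return x;
    }

    // Tournament selection: prefer the fitter of two random individuals (step 4).
    static boolean[] select(boolean[][] pop) {
        boolean[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][];
        for (int i = 0; i < POP; i++) pop[i] = randomIndividual();   // step 1

        for (int gen = 0; gen < GENS; gen++) {
            boolean[][] next = new boolean[POP][];
            for (int i = 0; i < POP; i++) {
                boolean[] p1 = select(pop), p2 = select(pop);
                boolean[] child = new boolean[LEN];
                int cut = RNG.nextInt(LEN);                          // step 5: one-point crossover
                for (int j = 0; j < LEN; j++) child[j] = j < cut ? p1[j] : p2[j];
                for (int j = 0; j < LEN; j++)                        // step 6: bit-flip mutation
                    if (RNG.nextDouble() < 1.0 / LEN) child[j] = !child[j];
                next[i] = child;                                     // step 7: replacement
            }
            pop = next;                                              // steps 2-3 and 8: re-evaluate, loop
        }
        int best = 0;
        for (boolean[] x : pop) best = Math.max(best, fitness(x));
        System.out.println("best fitness: " + best + " / " + LEN);
    }
}
```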
== Types ==
Similar techniques differ in genetic representation and other implementation details, and the nature of the particular applied problem.
Genetic algorithm – This is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of EA is often used in optimization problems.
Genetic programming – Here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem. There are many variants of Genetic Programming:
Cartesian genetic programming
Gene expression programming
Grammatical evolution
Linear genetic programming
Multi expression programming
Evolutionary programming – Similar to evolution strategy, but with a deterministic selection of all parents.
Evolution strategy (ES) – Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates. The method is mainly used for numerical optimization, although there are also variants for combinatorial tasks.
CMA-ES
Natural evolution strategy
Differential evolution – Based on vector differences and is therefore primarily suited for numerical optimization problems.
Coevolutionary algorithm – Similar to genetic algorithms and evolution strategies, but the created solutions are compared on the basis of their outcomes from interactions with other solutions. Solutions can either compete or cooperate during the search process. Coevolutionary algorithms are often used in scenarios where the fitness landscape is dynamic, complex, or involves competitive interactions.
Neuroevolution – Similar to genetic programming but the genomes represent artificial neural networks by describing structure and connection weights. The genome encoding can be direct or indirect.
Learning classifier system – Here the solution is a set of classifiers (rules or conditions). A Michigan-LCS evolves at the level of individual classifiers whereas a Pittsburgh-LCS uses populations of classifier-sets. Initially, classifiers were only binary, but now include real, neural net, or S-expression types. Fitness is typically determined with either a strength or accuracy based reinforcement learning or supervised learning approach.
Quality–Diversity algorithms – QD algorithms simultaneously aim for high-quality and diverse solutions. Unlike traditional optimization algorithms that solely focus on finding the best solution to a problem, QD algorithms explore a wide variety of solutions across a problem space and keep those that are not just high performing, but also diverse and unique.
== Theoretical background ==
The following theoretical principles apply to all or almost all EAs.
=== No free lunch theorem ===
The no free lunch theorem of optimization states that all optimization strategies are equally effective when the set of all optimization problems is considered. Under the same condition, no evolutionary algorithm is fundamentally better than another. This can only be the case if the set of all problems is restricted. This is exactly what is inevitably done in practice. Therefore, to improve an EA, it must exploit problem knowledge in some form (e.g. by choosing a certain mutation strength or a problem-adapted coding). Thus, if two EAs are compared, this constraint is implied. In addition, an EA can use problem specific knowledge by, for example, not randomly generating the entire start population, but creating some individuals through heuristics or other procedures. Another possibility to tailor an EA to a given problem domain is to involve suitable heuristics, local search procedures or other problem-related procedures in the process of generating the offspring. This form of extension of an EA is also known as a memetic algorithm. Both extensions play a major role in practical applications, as they can speed up the search process and make it more robust.
=== Convergence ===
For EAs in which, in addition to the offspring, at least the best individual of the parent generation is used to form the subsequent generation (so-called elitist EAs), there is a general proof of convergence under the condition that an optimum exists. Without loss of generality, a maximum search is assumed for the proof:
From the property of elitist offspring acceptance and the existence of the optimum it follows that per generation $k$ an improvement of the fitness $F$ of the respective best individual $x'$ will occur with a probability $P > 0$. Thus:
$F(x'_{1}) \leq F(x'_{2}) \leq F(x'_{3}) \leq \cdots \leq F(x'_{k}) \leq \cdots$
I.e., the fitness values represent a monotonically non-decreasing sequence, which is bounded due to the existence of the optimum. From this follows the convergence of the sequence against the optimum.
Since the proof makes no statement about the speed of convergence, it is of little help in practical applications of EAs. But it does justify the recommendation to use elitist EAs. However, when using the usual panmictic population model, elitist EAs tend to converge prematurely more than non-elitist ones. In a panmictic population model, mate selection (see step 4 of the generic definition) is such that every individual in the entire population is eligible as a mate. In non-panmictic populations, selection is suitably restricted, so that the dispersal speed of better individuals is reduced compared to panmictic ones. Thus, the general risk of premature convergence of elitist EAs can be significantly reduced by suitable population models that restrict mate selection.
=== Virtual alphabets ===
With the theory of virtual alphabets, David E. Goldberg showed in 1990 that by using a representation with real numbers, an EA that uses classical recombination operators (e.g. uniform or n-point crossover) cannot reach certain areas of the search space, in contrast to a coding with binary numbers. This results in the recommendation for EAs with real representation to use arithmetic operators for recombination (e.g. arithmetic mean or intermediate recombination). With suitable operators, real-valued representations are more effective than binary ones, contrary to earlier opinion.
== Comparison to other concepts ==
=== Biological processes ===
A possible limitation of many evolutionary algorithms is their lack of a clear genotype–phenotype distinction. In nature, the fertilized egg cell undergoes a complex process known as embryogenesis to become a mature phenotype. This indirect encoding is believed to make the genetic search more robust (i.e. reduce the probability of fatal mutations), and also may improve the evolvability of the organism. Such indirect (also known as generative or developmental) encodings also enable evolution to exploit the regularity in the environment. Recent work in the field of artificial embryogeny, or artificial developmental systems, seeks to address these concerns. And gene expression programming successfully explores a genotype–phenotype system, where the genotype consists of linear multigenic chromosomes of fixed length and the phenotype consists of multiple expression trees or computer programs of different sizes and shapes.
=== Monte-Carlo methods ===
Both method classes have in common that their individual search steps are determined by chance. The main difference, however, is that EAs, like many other metaheuristics, learn from past search steps and incorporate this experience into the execution of the next search steps in a method-specific form. With EAs, this is done firstly through the fitness-based selection operators for partner choice and the formation of the next generation. And secondly, in the type of search steps: in EAs, they start from a current solution and change it, or they mix the information of two solutions. In contrast, when new solutions are randomly generated in Monte-Carlo methods, there is usually no connection to existing solutions.
If, on the other hand, the search space of a task is such that there is nothing to learn, Monte-Carlo methods are an appropriate tool, as they do not contain any algorithmic overhead that attempts to draw suitable conclusions from the previous search. An example of such tasks is the proverbial search for a needle in a haystack, e.g. in the form of a flat (hyper)plane with a single narrow peak.
== Applications ==
The areas in which evolutionary algorithms are practically used are almost unlimited and range from industry, engineering, complex scheduling, agriculture, robot movement planning and finance to research and art. The application of an evolutionary algorithm requires some rethinking from the inexperienced user, as the approach to a task using an EA is different from conventional exact methods and this is usually not part of the curriculum of engineers or other disciplines. For example, the fitness calculation must not only formulate the goal but also support the evolutionary search process towards it, e.g. by rewarding improvements that do not yet lead to a better evaluation of the original quality criteria. For instance, if peak utilisation of resources such as personnel deployment or energy consumption is to be avoided in a scheduling task, it is not sufficient to assess the maximum utilisation. Rather, the number and duration of exceedances of a still acceptable level should also be recorded in order to reward reductions below the actual maximum peak value. There are therefore some publications that are aimed at the beginner and aim to help avoid beginners' mistakes as well as to lead an application project to success. This includes clarifying the fundamental question of when an EA should be used to solve a problem and when it is better not to.
== Related techniques and other global search methods ==
There are some other proven and widely used methods of nature inspired global search techniques such as
Memetic algorithm – A hybrid method, inspired by Richard Dawkins's notion of a meme. It commonly takes the form of a population-based algorithm (frequently an EA) coupled with individual learning procedures capable of performing local refinements. Emphasizes the exploitation of problem-specific knowledge and tries to orchestrate local and global search in a synergistic way.
A cellular evolutionary or memetic algorithm uses a topological neighbourhood relation between the individuals of a population to restrict mate selection and thereby reduce the propagation speed of above-average individuals. The idea is to maintain genotypic diversity in the population over a longer period of time to reduce the risk of premature convergence.
Ant colony optimization is based on the ideas of ant foraging by pheromone communication to form paths. Primarily suited for combinatorial optimization and graph problems.
Particle swarm optimization is based on the ideas of animal flocking behaviour. Also primarily suited for numerical optimization problems.
Gaussian adaptation – Based on information theory. Used for maximization of manufacturing yield, mean fitness or average information. See for instance Entropy in thermodynamics and information theory.
In addition, many new nature-inspired or metaphor-guided algorithms have been proposed since the beginning of this century. For criticism of most publications on these, see the remarks at the end of the introduction to the article on metaheuristics.
== Examples ==
In 2020, Google stated that their AutoML-Zero can successfully rediscover classic algorithms such as the concept of neural networks.
The computer simulations Tierra and Avida attempt to model macroevolutionary dynamics.
== Gallery ==
== References ==
== Bibliography ==
Ashlock, D. (2006), Evolutionary Computation for Modeling and Optimization, Springer, New York, doi:10.1007/0-387-31909-3 ISBN 0-387-22196-4.
Bäck, T. (1996), Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford Univ. Press, New York, ISBN 978-0-19-509971-3.
Bäck, T., Fogel, D., Michalewicz, Z. (1999), Evolutionary Computation 1: Basic Algorithms and Operators, CRC Press, Boca Raton, USA, ISBN 978-0-7503-0664-5.
Bäck, T., Fogel, D., Michalewicz, Z. (2000), Evolutionary Computation 2: Advanced Algorithms and Operators, CRC Press, Boca Raton, USA, doi:10.1201/9781420034349 ISBN 978-0-3678-0637-8.
Banzhaf, W., Nordin, P., Keller, R., Francone, F. (1998), Genetic Programming - An Introduction, Morgan Kaufmann, San Francisco, ISBN 978-1-55860-510-7.
Eiben, A.E., Smith, J.E. (2003), Introduction to Evolutionary Computing, Springer, Heidelberg, New York, doi:10.1007/978-3-662-44874-8 ISBN 978-3-662-44873-1.
Holland, J. H. (1992), Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, ISBN 978-0-262-08213-6.
Michalewicz, Z.; Fogel, D.B. (2004), How To Solve It: Modern Heuristics. Springer, Berlin, Heidelberg, ISBN 978-3-642-06134-9, doi:10.1007/978-3-662-07807-5.
Benko, Attila; Dosa, Gyorgy; Tuza, Zsolt (2010). "Bin Packing/Covering with Delivery, solved with the evolution of algorithms". 2010 IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA). pp. 298–302. doi:10.1109/BICTA.2010.5645312. ISBN 978-1-4244-6437-1. S2CID 16875144.
Poli, R.; Langdon, W. B.; McPhee, N. F. (2008). A Field Guide to Genetic Programming. Lulu.com, freely available from the internet. ISBN 978-1-4092-0073-4. Archived from the original on 2016-05-27. Retrieved 2011-03-05.
Price, K., Storn, R.M., Lampinen, J.A., (2005). Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin, Heidelberg, ISBN 978-3-642-42416-8, doi:10.1007/3-540-31306-0.
Ingo Rechenberg (1971), Evolutionsstrategie - Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Fromman-Holzboog (1973). ISBN 3-7728-1642-8
Hans-Paul Schwefel (1974), Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977).
Hans-Paul Schwefel (1995), Evolution and Optimum Seeking. Wiley & Sons, New York. ISBN 0-471-57148-2
Simon, D. (2013), Evolutionary Optimization Algorithms Archived 2014-03-10 at the Wayback Machine, Wiley & Sons, ISBN 978-0-470-93741-9
Kruse, Rudolf; Borgelt, Christian; Klawonn, Frank; Moewes, Christian; Steinbrecher, Matthias; Held, Pascal (2013), Computational Intelligence: A Methodological Introduction. Springer, London. ISBN 978-1-4471-5012-1, doi:10.1007/978-1-4471-5013-8.
Rahman, Rosshairy Abd.; Kendall, Graham; Ramli, Razamin; Jamari, Zainoddin; Ku-Mahamud, Ku Ruhana (2017). "Shrimp Feed Formulation via Evolutionary Algorithm with Power Heuristics for Handling Constraints". Complexity. 2017: 1–12. doi:10.1155/2017/7053710.
== External links ==
An Overview of the History and Flavors of Evolutionary Algorithms
DNA computing is an emerging branch of unconventional computing which uses DNA, biochemistry, and molecular biology hardware, instead of the traditional electronic computing. Research and development in this area concerns theory, experiments, and applications of DNA computing. Although the field originally started with the demonstration of a computing application by Len Adleman in 1994, it has now been expanded to several other avenues such as the development of storage technologies, nanoscale imaging modalities, synthetic controllers and reaction networks, etc.
== History ==
Leonard Adleman of the University of Southern California initially developed this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have occurred and various Turing machines have been proven to be constructible.
Since then the field has expanded into several avenues. In 1995, the idea for DNA-based memory was proposed by Eric Baum who conjectured that a vast amount of data can be stored in a tiny amount of DNA due to its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology although the in vitro demonstrations were made after almost a decade.
The field of DNA computing can be categorized as a sub-field of the broader DNA nanoscience field started by Ned Seeman about a decade before Len Adleman's demonstration. Seeman's original idea in the 1980s was to build arbitrary structures using bottom-up DNA self-assembly for applications in crystallography. However, it morphed into the field of structural DNA self-assembly, which as of 2020 is extremely sophisticated. Self-assembled structures from a few nanometers tall all the way up to several tens of micrometers in size were demonstrated in 2018.
In 1994, Seeman's group demonstrated early DNA lattice structures using a small set of DNA components. While the demonstration by Adleman showed the possibility of DNA-based computers, the DNA design was trivial: as the number of nodes in a graph grows, the number of DNA components required in Adleman's implementation would grow exponentially. Therefore, computer scientists and biochemists started exploring tile assembly, where the goal was to use a small set of DNA strands as tiles to perform arbitrary computations upon growth. Other avenues that were theoretically explored in the late 1990s include DNA-based security and cryptography, the computational capacity of DNA systems, DNA memories and disks, and DNA-based robotics.
Before 2002, Lila Kari showed that the DNA operations performed by genetic recombination in some organisms are Turing complete.
In 2003, John Reif's group first demonstrated the idea of a DNA-based walker that traversed along a track similar to a line follower robot. They used molecular biology as a source of energy for the walker. Since this first demonstration, a wide variety of DNA-based walkers have been demonstrated.
== Applications, examples, and recent developments ==
In 1994 Leonard Adleman presented the first prototype of a DNA computer. The TT-100 was a test tube filled with 100 microliters of a DNA solution. He managed to solve an instance of the directed Hamiltonian path problem. In Adleman's experiment, the Hamiltonian path problem was implemented notationally as the "travelling salesman problem". For this purpose, different DNA fragments were created, each one of them representing a city that had to be visited. Every one of these fragments is capable of a linkage with the other fragments created. These DNA fragments were produced and mixed in a test tube. Within seconds, the small fragments form bigger ones, representing the different travel routes. Through a chemical reaction, the DNA fragments representing the longer routes were eliminated. The remains are the solution to the problem, but overall, the experiment lasted a week. Technical limitations at the time prevented a full evaluation of the results, so the experiment was not suitable for practical application, but it is nevertheless a proof of concept.
=== Combinatorial problems ===
The first results on these problems were obtained by Leonard Adleman.
In 1994: solving a Hamiltonian path problem in a graph with seven vertices.
In 2002: solving an NP-complete problem, namely a 3-SAT instance with 20 variables.
=== Tic-tac-toe game ===
In 2002, J. Macdonald, D. Stefanović and M. Stojanović created a DNA computer able to play tic-tac-toe against a human player. The calculator consists of nine bins corresponding to the nine squares of the game. Each bin contains a substrate and various combinations of DNA enzymes. The substrate itself is composed of a DNA strand onto which a fluorescent chemical group was grafted at one end and a repressor group at the other. Fluorescence is only active if the molecules of the substrate are cut in half. The DNA enzymes simulate logical functions. For example, a DNA enzyme will unfold only if two specific types of DNA strand are introduced, reproducing the logic function AND.
By default, the computer is considered to have played first in the central square. The human player starts with eight different types of DNA strands corresponding to the eight remaining boxes that may be played. To play box number i, the human player pours into all bins the strands corresponding to input #i. These strands bind to certain DNA enzymes present in the bins, resulting, in one of these bins, in the deformation of the DNA enzymes which binds to the substrate and cuts it. The corresponding bin becomes fluorescent, indicating which box is being played by the DNA computer. The DNA enzymes are divided among the bins in such a way as to ensure that the best the human player can achieve is a draw, as in real tic-tac-toe.
=== Neural network based computing ===
Kevin Cherry and Lulu Qian at Caltech developed a DNA-based artificial neural network that can recognize 100-bit hand-written digits. They achieved this by programming on a computer in advance with the appropriate set of weights, represented by varying concentrations of weight molecules, which are later added to the test tube that holds the input DNA strands.
=== Improved speed with Localized (cache-like) Computing ===
One of the challenges of DNA computing is its slow speed. While DNA is a biologically compatible substrate, i.e., it can be used at places where silicon technology cannot, its computational speed is still very slow. For example, the square-root circuit used as a benchmark in the field takes over 100 hours to complete. While newer ways with external enzyme sources are reporting faster and more compact circuits, Chatterjee et al. demonstrated an interesting idea in the field to speed up computation through localized DNA circuits, a concept being further explored by other groups. This idea, while originally proposed in the field of computer architecture, has been adopted in this field as well. In computer architecture, it is very well known that if instructions execute in sequence, having them loaded in the cache will inevitably lead to fast performance, also called the principle of locality. This is because with instructions in fast cache memory, there is no need to swap them in and out of main memory, which can be slow. Similarly, in localized DNA computing, the DNA strands responsible for computation are fixed on a breadboard-like substrate, ensuring physical proximity of the computing gates. Such localized DNA computing techniques have been shown to potentially reduce the computation time by orders of magnitude.
=== Renewable (or reversible) DNA computing ===
Subsequent research on DNA computing has produced reversible DNA computing, bringing the technology one step closer to the silicon-based computing used in (for example) PCs. In particular, John Reif and his group at Duke University have proposed two different techniques to reuse the computing DNA complexes. The first design uses dsDNA gates, while the second design uses DNA hairpin complexes.
While both designs face some issues (such as reaction leaks), this appears to represent a significant breakthrough in the field of DNA computing. Some other groups have also attempted to address the gate reusability problem.
Using strand displacement reactions (SRDs), reversible proposals are presented in the "Synthesis Strategy of Reversible Circuits on DNA Computers" paper for implementing reversible gates and circuits on DNA computers by combining DNA computing and reversible computing techniques. This paper also proposes a universal reversible gate library (URGL) for synthesizing n-bit reversible circuits on DNA computers with an average length and cost of the constructed circuits better than the previous methods.
== Methods ==
There are multiple methods for building a computing device based on DNA, each with its own advantages and disadvantages. Most of these build the basic logic gates (AND, OR, NOT) associated with digital logic from a DNA basis. Some of the different bases include DNAzymes, deoxyoligonucleotides, enzymes, and toehold exchange.
=== Strand displacement mechanisms ===
The most fundamental operation in DNA computing and molecular programming is the strand displacement mechanism. Currently, there are two ways to perform strand displacement:
Toehold mediated strand displacement (TMSD)
Polymerase-based strand displacement (PSD)
=== Toehold exchange ===
Besides simple strand displacement schemes, DNA computers have also been constructed using the concept of toehold exchange. In this system, an input DNA strand binds to a sticky end, or toehold, on another DNA molecule, which allows it to displace another strand segment from the molecule. This allows the creation of modular logic components such as AND, OR, and NOT gates and signal amplifiers, which can be linked into arbitrarily large computers. This class of DNA computers does not require enzymes or any chemical capability of the DNA.
=== Chemical reaction networks (CRNs) ===
The full stack for DNA computing looks very similar to a traditional computer architecture. At the highest level, a C-like general-purpose programming language is expressed using a set of chemical reaction networks (CRNs). This intermediate representation gets translated to a domain-level DNA design and then implemented using a set of DNA strands. In 2010, Erik Winfree's group showed that DNA can be used as a substrate to implement arbitrary chemical reactions. This opened the way to the design and synthesis of biochemical controllers, since the expressive power of CRNs is equivalent to that of a Turing machine. Such controllers can potentially be used in vivo for applications such as preventing hormonal imbalance.
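To make the CRN abstraction concrete, the following minimal Java sketch simulates a single toy reaction A + B → C with Gillespie's stochastic simulation algorithm; the species names, initial counts, and rate constant are illustrative assumptions for the example, not a published DNA design.

```java
import java.util.Random;

public class ToyCRN {
    public static void main(String[] args) {
        Random rng = new Random();
        double k = 0.001;                 // illustrative rate constant for A + B -> C
        int a = 100, b = 80, c = 0;       // initial molecule counts (arbitrary)
        double t = 0;

        // Gillespie's stochastic simulation algorithm for a single reaction.
        while (a > 0 && b > 0) {
            double propensity = k * a * b;                  // rate of A + B -> C
            t += -Math.log(rng.nextDouble()) / propensity;  // exponential waiting time
            a--; b--; c++;                                  // fire the reaction
        }
        System.out.printf("t = %.2f: A=%d B=%d C=%d%n", t, a, b, c);
    }
}
```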
=== DNAzymes ===
Catalytic DNA (deoxyribozyme or DNAzyme) catalyzes a reaction when interacting with the appropriate input, such as a matching oligonucleotide. These DNAzymes are used to build logic gates analogous to digital logic in silicon; however, DNAzymes are limited to one-, two-, and three-input gates with no current implementation for evaluating statements in series.
The DNAzyme logic gate changes its structure when it binds to a matching oligonucleotide and the fluorogenic substrate it is bonded to is cleaved free. While other materials can be used, most models use a fluorescence-based substrate because it is very easy to detect, even at the single molecule limit. The amount of fluorescence can then be measured to tell whether or not a reaction took place. The DNAzyme that changes is then "used", and cannot initiate any more reactions. Because of this, these reactions take place in a device such as a continuous stirred-tank reactor, where old product is removed and new molecules added.
Two commonly used DNAzymes are named E6 and 8-17. These are popular because they allow cleaving of a substrate in any arbitrary location. Stojanovic and MacDonald have used E6 DNAzymes to build the MAYA I and MAYA II machines, respectively; Stojanovic has also demonstrated logic gates using the 8-17 DNAzyme. While these DNAzymes have been demonstrated to be useful for constructing logic gates, they are limited by the need for a metal cofactor to function, such as Zn2+ or Mn2+, and thus are not useful in vivo.
A design called a stem loop, consisting of a single strand of DNA with a loop at one end, is a dynamic structure that opens and closes when a piece of DNA bonds to the loop part. This effect has been exploited to create several logic gates. These logic gates have been used to create the computers MAYA I and MAYA II, which can play tic-tac-toe to some extent.
=== Enzymes ===
Enzyme-based DNA computers are usually of the form of a simple Turing machine; there is analogous hardware, in the form of an enzyme, and software, in the form of DNA.
Benenson, Shapiro and colleagues have demonstrated a DNA computer using the FokI enzyme and expanded on their work by going on to show automata that diagnose and react to prostate cancer: under-expression of the genes PPAP2B and GSTP1 and over-expression of PIM1 and HPN. Their automata evaluated the expression of each gene, one gene at a time, and on a positive diagnosis released a single-stranded DNA molecule (ssDNA) that is an antisense for MDM2. MDM2 is a repressor of protein 53 (p53), which itself is a tumor suppressor. On a negative diagnosis, the automaton released a suppressor of the positive-diagnosis drug instead of doing nothing. A limitation of this implementation is that two separate automata are required, one to administer each drug. The entire process of evaluation until drug release took around an hour to complete. This method also requires transition molecules as well as the FokI enzyme to be present. The requirement for the FokI enzyme limits application in vivo, at least for use in "cells of higher organisms". The "software" molecules, however, can be reused in this case.
=== Algorithmic self-assembly ===
DNA nanotechnology has been applied to the related field of DNA computing. DNA tiles can be designed to contain multiple sticky ends with sequences chosen so that they act as Wang tiles. A DX array has been demonstrated whose assembly encodes an XOR operation; this allows the DNA array to implement a cellular automaton which generates a fractal called the Sierpinski gasket. This shows that computation can be incorporated into the assembly of DNA arrays, increasing its scope beyond simple periodic arrays.
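The XOR computation carried by the tile set is equivalent to an elementary cellular automaton in which each cell is the exclusive-or of its two upper neighbours. A few lines reproduce the Sierpinski pattern the DX array grows (a software analogue only; tile attachment kinetics and assembly errors are ignored):

```python
def sierpinski(rows=16, width=33):
    """Each new cell is the XOR of its two upper neighbours, as encoded
    by the sticky ends of the XOR tile set."""
    row = [0] * width
    row[width // 2] = 1                     # single seed tile
    for _ in range(rows):
        print("".join("#" if c else "." for c in row))
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]

sierpinski()
```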
== Capabilities ==
DNA computing is a form of parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once. For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. Furthermore, particular mathematical computations have been demonstrated to work on a DNA computer.
DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable using different models of computation.
For example, if the space required for the solution of a problem grows exponentially with the size of the problem (EXPSPACE problems) on von Neumann machines, it still grows exponentially with the size of the problem on DNA machines. For very large EXPSPACE problems, the amount of DNA required is too large to be practical.
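A back-of-the-envelope calculation makes the point concrete. Suppose a brute-force search encodes each of the 2^n candidate solutions in its own 100-nucleotide strand; the strand length is an illustrative assumption, and an average nucleotide mass of roughly 330 daltons is used:

```python
DALTON_KG = 1.66054e-27          # one unified atomic mass unit, in kg
NT_MASS_KG = 330 * DALTON_KG     # approximate mass of one nucleotide
STRAND_NT = 100                  # assumed strand length per candidate
EARTH_KG = 5.97e24

for n in (40, 80, 120, 160):
    mass = 2**n * STRAND_NT * NT_MASS_KG
    print(f"n={n:3d}: {mass:.1e} kg ({mass / EARTH_KG:.1e} Earth masses)")
# n=160 already needs on the order of ten Earth masses of DNA.
```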
== Alternative technologies ==
A partnership between IBM and Caltech was established in 2009, aimed at the production of "DNA chips". A Caltech group is working on the manufacture of these nucleic-acid-based integrated circuits. One of these chips can compute whole (integer) square roots. A compiler has been written in Perl.
== Pros and cons ==
The slow processing speed of a DNA computer (the response time is measured in minutes, hours or days, rather than milliseconds) is compensated for by its potential to perform a massive number of parallel computations. This allows the system to take a similar amount of time for a complex calculation as for a simple one, because millions or billions of molecules interact with one another simultaneously. However, it is much harder to analyze the answers given by a DNA computer than by a digital one.
== See also ==
== References ==
== Further reading ==
== External links ==
DNA modeled computing
How Stuff Works explanation
Dirk de Pol: "DNS – Ein neuer Supercomputer?" [DNA – a new supercomputer?]. In: Die Neue Gesellschaft / Frankfurter Hefte, ISSN 0177-6738, issue 2/96, February 1996, pp. 170–172
'DNA computer' cracks code, Physics Web
Ars Technica
The New York Times: DNA Computer for detecting Cancer
Bringing DNA computers to life, in Scientific American
Japanese Researchers store information in bacteria DNA
International Meeting on DNA Computing and Molecular Programming
LiveScience.com: How DNA Could Power Computers
Artificial immune systems (AIS) are a class of rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for problem-solving, and are related to computational techniques such as evolutionary computation.
== Definition ==
The field of artificial immune systems (AIS) is concerned with abstracting the structure and function of the immune system into computational systems, and with investigating the application of these systems to solving computational problems from fields such as mathematics, engineering, and information technology. AIS is a sub-field of biologically inspired computing and natural computation, with interests in machine learning, and belongs to the broader field of artificial intelligence.
Artificial immune systems (AIS) are adaptive systems, inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving.
AIS is distinct from computational immunology and theoretical biology, which are concerned with simulating immunology using computational and mathematical models towards better understanding the immune system, although such models initiated the field of AIS and continue to provide a fertile ground for inspiration. Finally, the field of AIS is not concerned with the investigation of the immune system as a substrate for computation, unlike other fields such as DNA computing.
== History ==
AIS emerged in the mid-1980s with articles authored by Farmer, Packard and Perelson (1986) and Bersini and Varela (1990) on immune networks. However, it was only in the mid-1990s that AIS became a field in its own right. Forrest et al. (on negative selection) and Kephart et al. published their first papers on AIS in 1994, and Dasgupta conducted extensive studies on Negative Selection Algorithms. Hunt and Cooke started the works on Immune Network models in 1995; Timmis and Neal continued this work and made some improvements. De Castro & Von Zuben's and Nicosia & Cutello's work (on clonal selection) became notable in 2002. The first book on Artificial Immune Systems was edited by Dasgupta in 1999.
Currently, new ideas along AIS lines, such as danger theory and algorithms inspired by the innate immune system, are also being explored. Some believe, however, that these new ideas do not yet offer any truly "new" abstraction beyond existing AIS algorithms. This is hotly debated, and the debate provides one of the main driving forces for AIS development at the moment. Other recent developments involve the exploration of degeneracy in AIS models, which is motivated by its hypothesized role in open-ended learning and evolution.
Originally AIS set out to find efficient abstractions of processes found in the immune system, but more recently the field has become interested in modelling the biological processes themselves and in applying immune algorithms to bioinformatics problems.
In 2008, Dasgupta and Nino published a textbook on immunological computation which presents a compendium of up-to-date work related to immunity-based techniques and describes a wide variety of applications.
== Techniques ==
The common techniques are inspired by specific immunological theories that explain the function and behavior of the mammalian adaptive immune system.
Clonal selection algorithm: A class of algorithms inspired by the clonal selection theory of acquired immunity, which explains how B and T lymphocytes improve their response to antigens over time, a process called affinity maturation. These algorithms focus on the Darwinian attributes of the theory, where selection is inspired by the affinity of antigen–antibody interactions, reproduction is inspired by cell division, and variation is inspired by somatic hypermutation. Clonal selection algorithms are most commonly applied to optimization and pattern recognition domains; some of these algorithms resemble parallel hill climbing or a genetic algorithm without the recombination operator. A minimal sketch follows.
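In the sketch below, the population sizes, cloning and mutation schedules, and the affinity function are illustrative assumptions rather than a faithful reproduction of any published algorithm such as CLONALG:

```python
import random

def clonal_selection(affinity, dim=2, pop=20, n_select=5, generations=100):
    """Keep the highest-affinity antibodies, clone them in proportion to
    rank, and hypermutate clones inversely to their affinity."""
    antibodies = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(generations):
        antibodies.sort(key=affinity, reverse=True)
        clones = []
        for rank, ab in enumerate(antibodies[:n_select]):
            n_clones = n_select - rank          # better rank -> more clones
            sigma = 0.1 * (rank + 1)            # better rank -> gentler mutation
            for _ in range(n_clones):
                clones.append([x + random.gauss(0, sigma) for x in ab])
        # Replace the worst antibodies with the best new clones.
        clones.sort(key=affinity, reverse=True)
        antibodies = antibodies[:pop - n_select] + clones[:n_select]
    return max(antibodies, key=affinity)

best = clonal_selection(lambda v: -sum(x * x for x in v))  # peak at the origin
print(best)   # near [0, 0]
```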
Negative selection algorithm: Inspired by the positive and negative selection processes that occur during the maturation of T cells in the thymus, called T cell tolerance. Negative selection refers to the identification and deletion (apoptosis) of self-reacting cells, that is, T cells that may select for and attack self tissues. This class of algorithms is typically used for classification and pattern recognition problem domains where the problem space is modeled in the complement of available knowledge. For example, in an anomaly detection domain, the algorithm prepares a set of exemplar pattern detectors trained on normal (non-anomalous) patterns that model and detect unseen or anomalous patterns, as in the sketch below.
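A toy negative-selection sketch over binary strings (the string length, detector count, and Hamming-distance matching rule are assumptions made for the example):

```python
import random

def matches(detector, pattern, threshold=2):
    """A detector fires when it is within a small Hamming distance."""
    return sum(a != b for a, b in zip(detector, pattern)) <= threshold

def censor(self_set, n_detectors=200, length=12):
    """Generate random detectors, discarding any that react to 'self'."""
    detectors = []
    while len(detectors) < n_detectors:
        d = tuple(random.randint(0, 1) for _ in range(length))
        if not any(matches(d, s) for s in self_set):
            detectors.append(d)
    return detectors

# "Self" is a cluster of similar normal patterns around a base pattern.
base = tuple(random.randint(0, 1) for _ in range(12))
self_set = {tuple(b ^ (random.random() < 0.05) for b in base) for _ in range(50)}

detectors = censor(self_set)
anomaly = tuple(1 - b for b in base)     # far from every self pattern
print(any(matches(d, anomaly) for d in detectors))               # likely True
print(any(matches(d, s) for d in detectors for s in self_set))   # False by design
```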
Immune network algorithms: Algorithms inspired by the idiotypic network theory proposed by Niels Kaj Jerne, which describes the regulation of the immune system by anti-idiotypic antibodies (antibodies that select for other antibodies). This class of algorithms focuses on the network graph structures involved, where antibodies (or antibody-producing cells) represent the nodes and the training algorithm involves growing or pruning edges between the nodes based on affinity (similarity in the problem's representation space). Immune network algorithms have been used in clustering, data visualization, control, and optimization domains, and share properties with artificial neural networks.
Dendritic cell algorithms: The dendritic cell algorithm (DCA) is an example of an immune inspired algorithm developed using a multi-scale approach. This algorithm is based on an abstract model of dendritic cells (DCs). The DCA is abstracted and implemented through a process of examining and modeling various aspects of DC function, from the molecular networks present within the cell to the behaviour exhibited by a population of cells as a whole. Within the DCA information is granulated at different layers, achieved through multi-scale processing.
== See also ==
Biologically inspired computing
Computational immunology
Computational intelligence
Evolutionary computation
Immunocomputing
Natural computation
Swarm intelligence
Learning classifier system
Rule-based machine learning
== Notes ==
== References ==
J. D. Farmer, N. Packard and A. Perelson (1986), "The immune system, adaptation and machine learning", Physica D, vol. 22, pp. 187–204
H. Bersini, F.J. Varela, Hints for adaptive problem solving gleaned from immune networks. Parallel Problem Solving from Nature, First Workshop PPSW 1, Dortmund, FRG, October, 1990.
D. Dasgupta (Editor), Artificial Immune Systems and Their Applications, Springer-Verlag, Inc. Berlin, January 1999, ISBN 3-540-64390-7
V. Cutello and G. Nicosia (2002) "An Immunological Approach to Combinatorial Optimization Problems" Lecture Notes in Computer Science, Springer vol. 2527, pp. 361–370.
L. N. de Castro and F. J. Von Zuben, (1999) "Artificial Immune Systems: Part I -Basic Theory and Applications", School of Computing and Electrical Engineering, State University of Campinas, Brazil, No. DCA-RT 01/99.
S. Garrett (2005) "How Do We Evaluate Artificial Immune Systems?" Evolutionary Computation, vol. 13, no. 2, pp. 145–178. http://mitpress.mit.edu/journals/pdf/EVCO_13_2_145_0.pdf Archived 2011-06-29 at the Wayback Machine
V. Cutello, G. Nicosia, M. Pavone, J. Timmis (2007) An Immune Algorithm for Protein Structure Prediction on Lattice Models, IEEE Transactions on Evolutionary Computation, vol. 11, no. 1, pp. 101–117. https://web.archive.org/web/20120208130715/http://www.dmi.unict.it/nicosia/papers/journals/Nicosia-IEEE-TEVC07.pdf
Villalobos-Arias M., Coello C.A.C., Hernández-Lerma O. (2004) Convergence Analysis of a Multiobjective Artificial Immune System Algorithm. In: Nicosia G., Cutello V., Bentley P.J., Timmis J. (eds) Artificial Immune Systems. ICARIS 2004. Lecture Notes in Computer Science, vol 3239. Springer, Berlin, Heidelberg. DOI https://doi.org/10.1007/978-3-540-30220-9_19
== External links ==
AISWeb: The Online Home of Artificial Immune Systems Information about AIS in general and links to a variety of resources including ICARIS conference series, code, teaching material and algorithm descriptions.
ARTIST: Network for Artificial Immune Systems Provides information about the UK AIS network, ARTIST. It provides technical and financial support for AIS in the UK and beyond, and aims to promote AIS projects.
Computer Immune Systems Archived 2017-02-20 at the Wayback Machine Group at the University of New Mexico led by Stephanie Forrest.
AIS: Artificial Immune Systems Group at the University of Memphis led by Dipankar Dasgupta.
IBM Antivirus Research: Early work in AIS for computer security.
The Conference on Innovations in Theoretical Computer Science is an academic conference about theoretical computer science. The conference was initiated by Andrew Yao in 2010, and was originally called Innovations in Computer Science. The proceedings were hosted online in 2010 and 2011, were published in the ACM Digital Library from 2012 to 2016, and were published as open access in the LIPIcs collection from 2017 onwards.
As of 2022, the conference is listed by Google Scholar as the 8th venue in theoretical computer science according to the h5-index metric. It is indexed by the DBLP bibliographical database.
== External links ==
Website
DBLP entry
== References ==
Silicon Graphics, Inc. (stylized as SiliconGraphics before 1999, later rebranded SGI, historically known as Silicon Graphics Computer Systems or SGCS) was an American high-performance computing manufacturer, producing computer hardware and software. It was founded in Mountain View, California, in November 1981 by James H. Clark, the computer scientist and entrepreneur perhaps best known for later founding Netscape (with Marc Andreessen). Its initial market was 3D graphics computer workstations, but its products, strategies and market positions developed significantly over time.
Early systems were based on the Geometry Engine that Clark and Marc Hannah had developed at Stanford University, and were derived from Clark's broader background in computer graphics. The Geometry Engine was the first very-large-scale integration (VLSI) implementation of a geometry pipeline, specialized hardware that accelerated the "inner-loop" geometric computations needed to display three-dimensional images. For much of its history, the company focused on 3D imaging and was a major supplier of both hardware and software in this market.
Silicon Graphics reincorporated as a Delaware corporation in January 1990. Through the mid to late-1990s, the rapidly improving performance of commodity Wintel machines began to erode SGI's stronghold in the 3D market. The porting of Maya to other platforms was a major event in this process. SGI made several attempts to address this, including a disastrous move from their existing MIPS platforms to the Intel Itanium, as well as introducing their own Linux-based Intel IA-32 based workstations and servers that failed in the market. In the mid-2000s the company repositioned itself as a supercomputer vendor, a move that also failed.
On April 1, 2009, SGI filed for Chapter 11 bankruptcy protection and announced that it would sell substantially all of its assets to Rackable Systems, a deal finalized on May 11, 2009, with Rackable assuming the name Silicon Graphics International. The remnants of Silicon Graphics, Inc. became Graphics Properties Holdings, Inc.
== History ==
=== Early years ===
James H. Clark left his position as an electrical engineering associate professor at Stanford University to found SGI in 1982 along with a group of seven graduate students and research staff from Stanford University: Kurt Akeley, David J. Brown, Tom Davis, Rocky Rhodes, Marc Hannah, Herb Kuta, and Mark Grossman; along with Abbey Silverstone and a few others.
=== Growth ===
Ed McCracken was CEO of Silicon Graphics from 1984 to 1997. During those years, SGI grew from annual revenues of $5.4 million to $3.7 billion.
=== Decline ===
The addition of 3D graphic capabilities to PCs, and the ability of clusters of Linux- and BSD-based PCs to take on many of the tasks of larger SGI servers, ate into SGI's core markets. The porting of Maya to Linux, Mac OS and Microsoft Windows further eroded the low end of SGI's product line.
In response to challenges faced in the marketplace and a falling share price, Ed McCracken was fired and SGI brought in Richard Belluzzo to replace him. Under Belluzzo's leadership a number of initiatives were taken which are considered to have accelerated the corporate decline.
One such initiative was trying to sell workstations running Windows NT called Visual Workstations in addition to workstations running IRIX, the company's version of UNIX. This put the company in even more direct competition with the likes of Dell, making it more difficult to justify a price premium. The product line was unsuccessful and abandoned a few years later.
SGI's premature announcement of its migration from MIPS to Itanium and its abortive ventures into IA-32 architecture systems (the Visual Workstation line, the ex-Intergraph Zx10 range and the SGI 1000-series Linux servers) damaged SGI's credibility in the market.
In 1999, in an attempt to clarify their current market position as more than a graphics company, Silicon Graphics Inc. changed its corporate identity to "SGI", although its legal name was unchanged.
At the same time, SGI announced a new logo consisting of only the letters "sgi" in a proprietary font called "SGI", created by branding and design consulting firm Landor Associates, in collaboration with designer Joe Stitzlein. SGI continued to use the "Silicon Graphics" name for its workstation product line, and later re-adopted the cube logo for some workstation models.
In November 2005, SGI announced that it had been delisted from the New York Stock Exchange because its common stock had fallen below the minimum share price for listing on the exchange. SGI's market capitalization dwindled from a peak of over seven billion dollars in 1995 to just $120 million at the time of delisting. In February 2006, SGI noted that it could run out of cash by the end of the year.
=== Re-emergence ===
In mid-2005, SGI hired Alix Partners to advise it on returning to profitability and received a new line of credit. SGI announced it was postponing its scheduled annual December stockholders meeting until March 2006. It proposed a reverse stock split to deal with the de-listing from the New York Stock Exchange.
In January 2006, SGI hired Dennis McKenna as its new CEO and chairman of the board of directors. Mr. McKenna succeeded Robert Bishop, who remained vice chairman of the board of directors.
On May 8, 2006, SGI announced that it had filed for Chapter 11 bankruptcy protection for itself and U.S. subsidiaries as part of a plan to reduce debt by $250 million. Two days later, the U.S. Bankruptcy Court approved its first day motions and its use of a $70 million financing facility provided by a group of its bondholders. Foreign subsidiaries were unaffected.
On September 6, 2006, SGI announced the end of development for the MIPS/IRIX line and the IRIX operating system. Production would end on December 29 and the last orders would be fulfilled by March 2007. Support for these products would end after December 2013.
SGI emerged from bankruptcy protection on October 17, 2006. Its stock symbol on Pink Sheets at that point, SGID, was canceled, and new stock was issued on the NASDAQ exchange under the symbol SGIC. This new stock was distributed to the company's creditors, and the SGID common stockholders were left with worthless shares. At the end of that year, the company moved its headquarters from Mountain View to Sunnyvale. Its earlier North Shoreline headquarters is now occupied by the Computer History Museum; the newer Amphitheatre Parkway headquarters was sold to Google (which had already subleased and moved into the facility in 2003). Both of these locations were award-winning designs by Studios Architecture.
In April 2008, SGI re-entered the visualization market with the SGI Virtu range of visualization servers and workstations, which were re-badged systems from BOXX Technologies based on Intel Xeon or AMD Opteron processors and Nvidia Quadro graphics chipsets, running Red Hat Enterprise Linux, SUSE Linux Enterprise Server or Windows Compute Cluster Server.
=== Final bankruptcy and acquisition by Rackable Systems ===
In December 2008, SGI received a delisting notification from NASDAQ, as its market value had been below the minimum $35 million requirement for 10 consecutive trading days, and also did not meet NASDAQ's alternative requirements of a minimum stockholders' equity of $2.5 million or annual net income from continuing operations of $500,000 or more.
On April 1, 2009, SGI filed for Chapter 11 again, and announced that it would sell substantially all of its assets to Rackable Systems for $25 million. The sale, ultimately for $42.5 million, was finalized on May 11, 2009; at the same time, Rackable announced their adoption of "Silicon Graphics International" as their global name and brand. The Bankruptcy Court scheduled continuing proceedings and hearings for June 3 and 24, 2009, and July 22, 2009.
After the Rackable acquisition, Vizworld magazine published a series of six articles that chronicle the downfall of SGI.
Hewlett Packard Enterprise acquired Silicon Graphics International in November 2016, which allowed HPE to place the SGI Pleiades, a TOP500 supercomputer at NASA Ames Research Center, in its portfolio.
=== Graphics Properties Holdings, Inc. era ===
During Silicon Graphics Inc.'s second bankruptcy phase, it was renamed Graphics Properties Holdings, Inc. (GPHI) in June 2009.
In June 2010, GPHI announced that it had won a significant favorable ruling in its litigation with ATI Technologies and AMD, following the patent lawsuit originally filed during the Silicon Graphics, Inc. era. Following the 2008 appeal by ATI over the validity of U.S. patent 6,650,327 ('327) and Silicon Graphics Inc.'s voluntary dismissal of U.S. patent 6,885,376 ('376) from the lawsuit, the Federal Circuit upheld the jury verdict on the validity of GPHI's U.S. Patent No. 6,650,327, and furthermore found that AMD had lost its right to challenge patent validity in future proceedings. On January 31, 2011, the District Court entered an order that permitted AMD to pursue its invalidity affirmative defense at trial and did not permit SGI to accuse AMD's Radeon R700 series of graphics products of infringement in this case. On April 18, 2011, GPHI and AMD entered into a confidential Settlement and License Agreement that resolved the litigation for an immaterial amount and that provides immunity under all GPHI patents for alleged infringement by AMD products, including components, software and designs. On April 26, 2011, the Court entered an order granting the parties' agreed motion for dismissal and final judgment.
In November 2011, GPHI filed another patent infringement lawsuit against Apple Inc. in Delaware, involving more patents than its original patent infringement case against Apple filed the previous November, alleging violation of U.S. patents 6,650,327 ('327), 6,816,145 ('145) and 5,717,881 ('881).
In 2012, GPHI filed a lawsuit against Apple, Sony, HTC Corp., LG Electronics Inc., Samsung Electronics Co. and Research in Motion Ltd. for allegedly violating patents relating to a computer graphics process that turns text and images into pixels to be displayed on screens. Affected devices include the Apple iPhone, HTC EVO 4G, LG Thrill, Research in Motion Torch, Samsung Galaxy S and Galaxy S II, and Sony Xperia Play smartphones.
U.S. patent 6,650,327 - 1998 Display system having floating point rasterization and floating point ..
U.S. patent 6,885,376 - 2002 System, method, and computer program product for near-real time load ..
U.S. patent 6,816,145 - 1998 Large area wide aspect ratio flat panel monitor having high resolution for ..
U.S. patent 5,717,881 - 1995 Data processing system for processing one and two parcel instructions
== Technology ==
=== Motorola 680x0-based systems ===
SGI's first generation products, starting with the IRIS (Integrated Raster Imaging System) 1000 series of high-performance graphics terminals, were based on the Motorola 68000 family of microprocessors. The later IRIS 2000 and 3000 models developed into full UNIX workstations.
==== IRIS 1000 series ====
The first entries in the 1000 series (models 1000 and 1200, introduced in 1984) were graphics terminals, peripherals to be connected to a general-purpose computer such as a Digital Equipment Corporation VAX, to provide graphical raster display abilities. They used 8 MHz Motorola 68000 CPUs with 768 kB of RAM and had no disk drives. They booted over the network (via an Excelan EXOS/101 Ethernet card) from their controlling computer. They used the "PM1" CPU board, which was a variant of the board that was used in Stanford University's SUN workstation and later in the Sun-1 workstation from Sun Microsystems. The graphics system was composed of the GF1 frame buffer, the UC3 "Update Controller", DC3 "Display Controller", and the BP2 bitplane. The 1000-series machines were designed around the Multibus standard.
Later 1000-series machines, the 1400 and 1500, ran at 10 MHz and had 1.5 MB of RAM. The 1400 had a 72 MB ST-506 disk drive, while the 1500 had a 474 MB SMD-based disk drive with a Xylogics 450 disk controller. They may have used the PM2 CPU and PM2M1 RAM board from the 2000 series. The usual monitor for the 1000 series ran at 30 Hz interlaced. Six beta-test units of the 1400 workstation were produced, and the first production unit (SGI's first commercial computer) was shipped to Carnegie-Mellon University's Electronic Imaging Laboratory in 1984.
==== IRIS 2000 and 3000 series ====
SGI rapidly developed its machines into workstations with its second product line — the IRIS 2000 series, first released in August 1985. SGI began using the UNIX System V operating system. There were five models in two product ranges, the 2000/2200/2300/2400/2500 range which used 68010 CPUs (the PM2 CPU module), and the later "Turbo" systems, the 2300T, 2400T and 2500T, which had 68020s (the IP2 CPU module). All used the Excelan EXOS/201 Ethernet card, the same graphics hardware (GF2 Frame Buffer, UC4 Update Controller, DC4 Display Controller, BP3 Bitplane). Their main differences were the CPU, RAM, and Weitek Floating Point Accelerator boards, disk controllers and disk drives (both ST-506 and SMD were available). These could be upgraded, for example from a 2400 to a 2400T. The 2500 and 2500T had a larger chassis, a standard 6' 19" EIA rack with space at the bottom for two SMD disk drives weighing approximately 68 kg each. The non-Turbo models used the Multibus for the CPU to communicate with the floating point accelerator, while the Turbos added a ribbon cable dedicated for this. 60 Hz monitors were used for the 2000 series.
The height of the machines using Motorola CPUs was reached with the IRIS 3000 series (models 3010/3020/3030 and 3110/3115/3120/3130, the 30s both being full-size rack machines). They used the same graphics subsystem and Ethernet as the 2000s, but could also use up to 12 "geometry engines", the first widespread use of hardware graphics accelerators. The standard monitor was a 19" 60 Hz non-interlaced unit with a tilt/swivel base; 19" 30 Hz interlaced and a 15" 60 Hz non-interlaced (with tilt/swivel base) were also available.
The IRIS 3130 and its smaller siblings were impressive for the time, being complete UNIX workstations. The 3130 was powerful enough to support a complete 3D animation and rendering package without mainframe support. With large capacity hard drives by standards of the day (two 300 MB drives), streaming tape and Ethernet, it could be the centerpiece of an animation operation.
The line was formally discontinued in November 1989, with about 3,500 systems shipped of all 2000 and 3000 models combined.
=== RISC era ===
With the introduction of the IRIS 4D series, SGI switched to MIPS microprocessors. These machines were more powerful and came with powerful on-board floating-point capability. As 3D graphics became more popular in television and film during this time, these systems were responsible for establishing much of SGI's reputation.
SGI produced a broad range of MIPS-based workstations and servers during the 1990s, running SGI's version of UNIX System V, now called IRIX. These included the massive Onyx visualization systems, the size of refrigerators and capable of supporting up to 64 processors while managing up to three streams of high resolution, fully realized 3D graphics.
In October 1991, MIPS announced the first commercially available 64-bit microprocessor, the R4000. SGI used the R4000 in its Crimson workstation. IRIX 6.2 was the first fully 64-bit IRIX release, including 64-bit pointers.
To secure the supply of future generations of MIPS microprocessors (the 64-bit R4000), SGI acquired MIPS Computer Systems in 1992 for $333 million and renamed it MIPS Technologies Inc., a wholly owned subsidiary of SGI.
In 1993, Silicon Graphics (SGI) signed a deal with Nintendo to develop the Reality Coprocessor (RCP) GPU used in the Nintendo 64 (N64) video game console. The deal was signed in early 1993, and it was later made public in August of that year. The console itself was later released in 1996. The RCP was developed by SGI's Nintendo Operations department, led by engineer Dr. Wei Yen. In 1997, twenty SGI employees, led by Yen, left SGI and founded ArtX (later acquired by ATI Technologies in 2000).
In 1998, SGI relinquished some ownership of MIPS Technologies, Inc in a Re-IPO, and fully divested itself in 2000.
In the late 1990s, when much of the industry expected the Itanium to replace both CISC and RISC architectures in non-embedded computers, SGI announced their intent to phase out MIPS in their systems. Development of new MIPS microprocessors stopped, and the existing R12000 design was extended multiple times until 2003 to provide existing customers more time to migrate to Itanium.
In August 2006, SGI announced the end of production for MIPS/IRIX systems, and by the end of the year MIPS/IRIX products were no longer generally available from SGI.
=== IRIS GL and OpenGL ===
Until the second generation Onyx Reality Engine machines, SGI offered access to its high performance 3D graphics subsystems through a proprietary API known as IRIS Graphics Library (IRIS GL). As more features were added over the years, IRIS GL became harder to maintain and more cumbersome to use. In 1992, SGI decided to clean up and reform IRIS GL and made the bold move of allowing the resulting OpenGL API to be cheaply licensed by SGI's competitors, and set up an industry-wide consortium to maintain the OpenGL standard (the OpenGL Architecture Review Board).
This meant that for the first time, fast, efficient, cross-platform graphics programs could be written. For over 20 years – until the introduction of the Vulkan API – OpenGL remained the only real-time 3D graphics standard to be portable across a variety of operating systems.
=== ACE Consortium ===
SGI was part of the Advanced Computing Environment initiative, formed in the early 1990s with 20 other companies, including Compaq, Digital Equipment Corporation, MIPS Computer Systems, Groupe Bull, Siemens, NEC, NeTpower, Microsoft and Santa Cruz Operation. Its intent was to introduce workstations based on the MIPS architecture and able to run Windows NT and SCO UNIX. The group produced the Advanced RISC Computing (ARC) specification, but began to unravel little more than a year after its formation.
=== Entertainment industry ===
For eight consecutive years (1995–2002), all films nominated for an Academy Award for Distinguished Achievement in Visual Effects were created on Silicon Graphics computer systems. The technology was also used in commercials for a host of companies.
An SGI Crimson system with the fsn three-dimensional file system navigator appeared in the 1993 movie Jurassic Park.
In the movie Twister, protagonists can be seen using an SGI laptop computer; however, the unit shown was not an actual working computer, but rather a fake laptop shell built around an SGI Corona LCD flat screen display.
The 1995 film Congo also features an SGI laptop computer being used by Dr. Ross (Laura Linney) to communicate via satellite to TraviCom HQ.
The purple, lowercased "sgi" logo can be seen at the beginning of the opening credits of the HBO series Silicon Valley, before being taken down and replaced by the Google logo as the intro graphics progress. Google leased the former SGI buildings in 2003 for their headquarters in Mountain View, CA until they purchased the buildings outright in 2006.
Once inexpensive PCs began to have graphics performance close to the more expensive specialized graphical workstations which were SGI's core business, SGI shifted its focus to high performance servers for digital video and the Web. Many SGI graphics engineers left to work at other computer graphics companies such as ATI and Nvidia, contributing to the PC 3D graphics revolution.
=== Free software ===
SGI was a promoter of free software, supporting several projects such as Linux and Samba, and opening some of its own previously proprietary code such as the XFS filesystem and the Open64 compiler.
SGI also made an important contribution to the C++ Standard Template Library (STL), adding many useful extensions in its MIT-style licensed SGI STL implementation. These extensions continue to be carried by its direct descendant STLport and by GNU's libstdc++.
=== Acquisition of Alias, Wavefront, Cray and Intergraph ===
In 1995, SGI purchased Alias Research, Kroyer Films, and Wavefront Technologies in a deal totaling approximately $500 million and merged the companies into Alias|Wavefront. In June 2004, SGI sold the business, by then renamed Alias, to the private equity investment firm Accel-KKR for $57.5 million. In October 2005, Autodesk announced that it had signed a definitive agreement to acquire Alias for $182 million in cash.
In February 1996, SGI purchased the well-known supercomputer manufacturer Cray Research for $740 million, and began to use marketing names such as "CrayLink" for (SGI-developed) technology integrated into the SGI server line. Three months later, it sold the Cray Business Systems Division, responsible for the CS6400 SPARC/Solaris server, to Sun Microsystems for an undisclosed amount (acknowledged later by a Sun executive to be "significantly less than $100 million"). Many of the Cray T3E engineers designed and developed the SGI Altix and NUMAlink technology. SGI sold the Cray brand and product lines to Tera Computer Company on March 31, 2000, for $35 million plus one million shares. SGI also distributed its remaining interest in MIPS Technologies through a spin-off effective June 20, 2000.
In September 2000, SGI acquired the Zx10 series of Windows workstations and servers from Intergraph Computer Systems (for a rumored $100 million), and rebadged them as SGI systems. The product line was discontinued in June 2001.
=== SGI Visual Workstations ===
Another attempt by SGI in the late 1990s to introduce its own family of Intel-based workstations running Windows NT or Red Hat Linux (see also SGI Visual Workstation) proved to be a financial disaster, and shook customer confidence in SGI's commitment to its own MIPS-based line.
=== Switch to Itanium ===
In 1998, SGI announced that future generations of its machines would be based not on their own MIPS processors, but the upcoming "super-chip" from Intel, code-named "Merced" and later called Itanium. Funding for its own high-end processors was reduced, and it was planned that the R10000 would be the last MIPS mainstream processor. MIPS Technologies would focus entirely on the embedded market, where it was having some success, and SGI would no longer have to fund development of a CPU that, since the failure of ARC, found use only in their own machines. This plan quickly went awry. As early as 1999, it was clear the Itanium was going to be delivered very late and would have nowhere near the performance originally expected. As the production delays increased, MIPS' existing R10000-based machines grew increasingly uncompetitive. It was eventually forced to introduce faster MIPS processors, the R12000, R14000 and R16000, which were used in a series of models from 1999 through 2006.
SGI's first Itanium-based system was the short-lived SGI 750 workstation, launched in 2001. SGI's MIPS-based systems were not to be superseded until the launch of the Itanium 2-based Altix servers and Prism workstations some time later. Unlike the MIPS systems, which ran IRIX, the Itanium systems used SuSE Linux Enterprise Server with SGI enhancements as their operating system. SGI used Transitive Corporation's QuickTransit software to allow their old MIPS/IRIX applications to run (in emulation) on the new Itanium/Linux platform.
In the server market, the Itanium 2-based Altix eventually replaced the MIPS-based Origin product line. In the workstation market, the switch to Itanium was not completed before SGI exited the market.
The Altix was the most powerful computer in the world in 2006, assuming that a "computer" is defined as a collection of hardware running under a single instance of an operating system. The Altix had 512 Itanium processors running under a single instance of Linux. A cluster of 20 such machines was then the eighth-fastest supercomputer. All faster supercomputers were clusters, but none had as many FLOPS per machine. More recent supercomputers, however, are very large clusters of machines that are individually less capable. SGI acknowledged this and in 2007 moved away from the "massive NUMA" model to clusters.
=== Switch to Xeon ===
Although SGI continued to market Itanium-based machines, its more recent machines were based on the Intel Xeon processor. The first Altix XE systems were relatively low-end machines, but by December 2006 the XE systems were more capable than the Itanium machines by some measures (e.g., power consumption in FLOPS/W, density in FLOPS/m3, cost/FLOPS). The XE1200 and XE1300 servers used a cluster architecture. This was a departure from the pure NUMA architectures of the earlier Itanium and MIPS servers.
In June 2007, SGI announced the Altix ICE 8200, a blade-based Xeon system with up to 512 Xeon cores per rack. An Altix ICE 8200 installed at New Mexico Computing Applications Center (with 14336 processors) ranked at number 3 on the TOP500 list of November 2007.
== User base and core market ==
Conventional wisdom holds that SGI's core market was traditionally Hollywood visual effects studios. In fact, SGI's largest revenue has always been generated by government and defense applications, energy, and scientific and technical computing. Silicon Graphics' largest single sale ever was to the United States Postal Service, whose SGI servers powered an artificial intelligence program to mechanically read, tag and sort hand-written and block-printed mail at a number of USPS's key mail centers. The rise of cheap yet powerful commodity workstations running Linux, Windows and Mac OS X, and the availability of diverse professional software for them, effectively pushed SGI out of the visual effects industry in all but the most niche markets.
== High-end server market ==
SGI continued to enhance its line of servers (including some supercomputers) based on the SN architecture. SN, for Scalable Node, is a technology developed by SGI in the mid-1990s that uses cache-coherent non-uniform memory access (cc-NUMA). In an SN system, processors, memory, and a bus- and memory-controller are coupled together into an entity called a node, usually on a single circuit board. Nodes are connected by a high-speed interconnect called NUMAlink (originally marketed as CrayLink). There is no internal bus, and instead access between processors, memory, and I/O devices is done through a switched fabric of links and routers.
Thanks to the cache coherence of the distributed shared memory, SN systems scale along several axes at once: as CPU count increases, so does memory capacity, I/O capacity, and system bisection bandwidth. This allows the combined memory of all the nodes to be accessed under a single OS image using standard shared-memory synchronization methods. This makes an SN system far easier to program and able to achieve higher sustained-to-peak performance than non-cache-coherent systems like conventional clusters or massively parallel computers which require applications code to be written (or re-written) to do explicit message-passing communication between their nodes.
The first SN system, known as SN-0, was released in 1996 under the product name Origin 2000. Based on the MIPS R10000 processor, it scaled from 2 to 128 processors and a smaller version, the Origin 200 (SN-00), scaled from 1 to 4. Later enhancements enabled systems of as large as 512 processors.
The second generation system, originally called SN-1 but later SN-MIPS, was released in July 2000, as the Origin 3000. It scaled from 4 to 512 processors, and 1,024-processor configurations were delivered by special order to some customers. A smaller, less scalable implementation followed, called Origin 300.
In November 2002, SGI announced a repackaging of its SN system, under the name Origin 3900. It quadrupled the processor area density of the SN-MIPS system, from 32 up to 128 processors per rack while moving to a "fat tree" interconnect topology.
In January 2003, SGI announced a variant of the SN platform called the Altix 3000 (internally called SN-IA). It used Intel Itanium 2 processors and ran the Linux operating system kernel. At the time it was released, it was the world's most scalable Linux-based computer, supporting up to 64 processors in a single system node. Nodes could be connected using the same NUMAlink technology to form what SGI predictably termed "superclusters".
In February 2004, SGI announced general support for 128 processor nodes to be followed by 256 and 512 processor versions that year.
In April 2004, SGI announced the sale of its Alias software business for approximately $57 million.
In October 2004, SGI built the supercomputer Columbia, which broke the world record for computer speed, for the NASA Ames Research Center. It was a cluster of 20 Altix supercomputers each with 512 Intel Itanium 2 processors running Linux, and achieved sustained speed of 42.7 trillion floating-point operations per second (teraflops), easily topping Japan's famed Earth Simulator's record of 35.86 teraflops. (A week later, IBM's upgraded Blue Gene/L clocked in at 70.7 teraflops.)
In July 2006, SGI announced an SGI Altix 4700 system with 1,024 processors and 4 TB of memory running a single Linux system image.
== Hardware products ==
Some 68k- and MIPS-based models were also rebadged by other vendors, including CDC, Tandem Computers, Prime Computer and Siemens-Nixdorf.
SGI Onyx and SGI Indy series systems were used for video game development for the Nintendo 64.
=== Motorola 68k-based systems ===
IRIS 1000 series graphics terminals (diskless 1000/1200, 1400/1500 with disks)
IRIS 2000 series workstations (2000/2200/2300/2400/2500 non-Turbo and 2300T/2400T/2500T "Turbo" models)
IRIS 3000 series workstations (3010/3020/3030 and 3110/3115/3120/3130)
=== MIPS-based systems ===
=== Intel IA-32-based systems ===
=== Itanium-based systems ===
SGI 750 workstation
Altix 330 entry-level server
Altix 350 mid-range server
Altix 3000 high-end server
Altix 450 mid-range server
Altix 4000 high-end server, capable of up to 2048 CPUs
Prism (deskside and rackmount systems)
=== Intel/AMD x86-64 systems ===
Altix XE210 server
Altix XE240 server
Altix XE310 server
Altix XE1200 cluster
Altix XE1300 cluster
Altix ICE 8200
Altix ICE 8400
Virtu VN200 visualization node
Virtu VS100 workstation
Virtu VS200 workstation
Virtu VS300 workstation
Virtu VS350 workstation
=== FPGA-based accelerators ===
RASC Application Acceleration
=== Storage systems ===
InfiniteStorage 10000
InfiniteStorage 6700
InfiniteStorage 4600
InfiniteStorage 4500
InfiniteStorage 4000
InfiniteStorage 350
InfiniteStorage 220
InfiniteStorage 120
SGI Infinite Data Cluster
=== Storage solutions ===
InfiniteStorage NEXIS 500
InfiniteStorage NEXIS 2000
InfiniteStorage NEXIS 7000
InfiniteStorage NEXIS 7000-HA
InfiniteStorage NEXIS 9000
InfiniteStorage Server 3500
=== Displays ===
1600SW, a multi-award-winning wide screen video monitor
=== Accelerator cards ===
IrisVision, one of the first 3D graphics accelerators for high-end PCs
=== Other ===
Espressigo, an espresso maker made in collaboration with Gaggia
== SGI timeline ==
== See also ==
SCO and SGI
Rick Belluzzo, SGI CEO from January 1998 to August 1999
Silicon Graphics Image
== References ==
== External links ==
SGI official website (pre-acquisition) at the Wayback Machine (archived March 27, 2009)
Whatever Happened to SGI?
SGI timeline
Irix Network - information, forums, and archive for SGI machines
TechPubs Wiki for SGI and IRIX
IRIS 2000/3000 FAQ
A collection of SGI equipment images
Silicon Graphics User Group
Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
== Comparison to CPUs and GPUs ==
Compared to a graphics processing unit, TPUs are designed for a high volume of low precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, without hardware for rasterisation/texture mapping. The TPU ASICs are mounted in a heatsink assembly, which can fit in a hard drive slot within a data center rack, according to Norman Jouppi.
Different types of processors are suited for different types of machine learning models. TPUs are well suited for CNNs, while GPUs have benefits for some fully-connected neural networks, and CPUs can have advantages for RNNs.
== History ==
According to Jonathan Ross, one of the original TPU engineers, and later the founder of Groq, three separate groups at Google were developing AI accelerators, with the TPU being the design that was ultimately selected. He was not aware of systolic arrays at the time and upon learning the term thought "Oh, that's called a systolic array? It just seemed to make sense."
The tensor processing unit was announced in May 2016 at Google I/O, when the company said that the TPU had already been used inside their data centers for over a year. Google's 2017 paper describing its creation cites previous systolic matrix multipliers of similar architecture built in the 1990s. The chip has been specifically designed for Google's TensorFlow framework, a symbolic math library which is used for machine learning applications such as neural networks. However, as of 2017 Google still used CPUs and GPUs for other types of machine learning. Other AI accelerator designs are appearing from other vendors also and are aimed at embedded and robotics markets.
Google's TPUs are proprietary. Some models are commercially available, and on February 12, 2018, The New York Times reported that Google "would allow other companies to buy access to those chips through its cloud-computing service." Google has said that they were used in the AlphaGo versus Lee Sedol series of human-versus-machine Go games, as well as in the AlphaZero system, which produced Chess, Shogi and Go playing programs from the game rules alone and went on to beat the leading programs in those games. Google has also used TPUs for Google Street View text processing and was able to find all the text in the Street View database in less than five days. In Google Photos, an individual TPU can process over 100 million photos a day. It is also used in RankBrain which Google uses to provide search results.
Google provides third parties access to TPUs through its Cloud TPU service as part of the Google Cloud Platform and through its notebook-based services Kaggle and Colaboratory.
== Products ==
=== First generation TPU ===
The first-generation TPU is an 8-bit matrix multiplication engine, driven with CISC instructions by the host processor across a PCIe 3.0 bus. It is manufactured on a 28 nm process with a die size ≤ 331 mm2. The clock speed is 700 MHz and it has a thermal design power of 28–40 W. It has 28 MiB of on chip memory, and 4 MiB of 32-bit accumulators taking the results of a 256×256 systolic array of 8-bit multipliers. Within the TPU package is 8 GiB of dual-channel 2133 MHz DDR3 SDRAM offering 34 GB/s of bandwidth. Instructions transfer data to or from the host, perform matrix multiplications or convolutions, and apply activation functions.
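The dataflow of such a systolic array can be sketched in software. In the toy simulation below (with the array shrunk and the scheduling idealized; this illustrates the general weight-stationary systolic dataflow, not Google's actual implementation), weights stay resident in the grid while each activation enters its row on a fixed cycle, sweeps across one column per cycle, and feeds 32-bit accumulators:

```python
import numpy as np

def systolic_matvec(x, w):
    """y = x @ w on a weight-stationary systolic array.
    x: (K,) int8 activations; w: (K, N) int8 weights; returns int32 (N,).
    PE (k, j) holds w[k, j]; x[k] enters row k at cycle k and reaches
    column j at cycle k + j, adding its product into column j's sum."""
    K, N = w.shape
    acc = np.zeros(N, dtype=np.int32)
    for cycle in range(K + N - 1):
        for k in range(K):
            j = cycle - k
            if 0 <= j < N:
                acc[j] += np.int32(x[k]) * np.int32(w[k, j])
    return acc

rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=8, dtype=np.int8)
w = rng.integers(-128, 128, size=(8, 4), dtype=np.int8)
assert np.array_equal(systolic_matvec(x, w),
                      x.astype(np.int32) @ w.astype(np.int32))
```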
=== Second generation TPU ===
The second-generation TPU was announced in May 2017. Google stated the first-generation TPU design was limited by memory bandwidth and using 16 GB of High Bandwidth Memory in the second-generation design increased bandwidth to 600 GB/s and performance to 45 teraFLOPS. The TPUs are then arranged into four-chip modules with a performance of 180 teraFLOPS. Then 64 of these modules are assembled into 256-chip pods with 11.5 petaFLOPS of performance. Notably, while the first-generation TPUs were limited to integers, the second-generation TPUs can also calculate in floating point, introducing the bfloat16 format invented by Google Brain. This makes the second-generation TPUs useful for both training and inference of machine learning models. Google has stated these second-generation TPUs will be available on the Google Compute Engine for use in TensorFlow applications.
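bfloat16 keeps float32's sign bit and 8-bit exponent but truncates the significand to 8 bits (7 stored), trading precision for float32's full dynamic range. A sketch of the conversion by direct bit manipulation (round-to-nearest-even on the discarded bits; this follows the IEEE 754 float32 layout and is not any TPU API):

```python
import numpy as np

def to_bfloat16(x):
    """Emulate bfloat16 by rounding float32 to 8 significand bits.
    The result is returned in a float32 container with the low 16
    mantissa bits zeroed, which is exactly representable in bfloat16."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    round_bias = ((bits >> 16) & 1) + np.uint32(0x7FFF)  # nearest-even
    return ((bits + round_bias) & np.uint32(0xFFFF0000)).view(np.float32)

print(to_bfloat16(3.1415926))   # ~3.140625: only ~3 decimal digits survive
print(to_bfloat16(1e30))        # fine: same exponent range as float32
print(np.float16(1e30))         # inf: float16's 5-bit exponent overflows
```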
=== Third generation TPU ===
The third-generation TPU was announced on May 8, 2018. Google announced that the processors themselves were twice as powerful as the second-generation TPUs, and would be deployed in pods with four times as many chips as the preceding generation. This results in an 8-fold increase in performance per pod (with up to 1,024 chips per pod) compared to the second-generation TPU deployment.
=== Fourth generation TPU ===
On May 18, 2021, Google CEO Sundar Pichai spoke about TPU v4 Tensor Processing Units during his keynote at the Google I/O virtual conference. TPU v4 improved performance by more than 2x over TPU v3 chips. Pichai said "A single v4 pod contains 4,096 v4 chips, and each pod has 10x the interconnect bandwidth per chip at scale, compared to any other networking technology." An April 2023 paper by Google claims TPU v4 is 5-87% faster than an Nvidia A100 at machine learning benchmarks.
There is also an "inference" version, called v4i, that does not require liquid cooling.
=== Fifth generation TPU ===
In 2021, Google revealed that the physical layout of TPU v5 was being designed with the assistance of a novel application of deep reinforcement learning. Google claims TPU v5 is nearly twice as fast as TPU v4, and based on that and the relative performance of TPU v4 over the A100, some speculate that TPU v5 is as fast as or faster than an H100.
Similar to the v4i being a lighter-weight version of the v4, the fifth generation has a "cost-efficient" version called v5e. In December 2023, Google announced TPU v5p which is claimed to be competitive with the H100.
=== Sixth generation TPU ===
In May 2024, at the Google I/O conference, Google announced TPU v6, which became available in preview in October 2024. Google claimed a 4.7 times performance increase relative to TPU v5e, via larger matrix multiplication units and an increased clock speed. High bandwidth memory (HBM) capacity and bandwidth have also doubled. A pod can contain up to 256 Trillium units.
=== Seventh generation TPU ===
In April 2025, at Google Cloud Next conference, Google unveiled TPU v7. This new chip, called Ironwood, will come in two configurations: a 256-chip cluster and a 9,216-chip cluster. Ironwood will have a peak computational performance rate of 4,614 TFLOP/s.
=== Edge TPU ===
In July 2018, Google announced the Edge TPU. The Edge TPU is Google's purpose-built ASIC chip designed to run machine learning (ML) models for edge computing, meaning it is much smaller and consumes far less power compared to the TPUs hosted in Google datacenters (also known as Cloud TPUs). In January 2019, Google made the Edge TPU available to developers with a line of products under the Coral brand. The Edge TPU is capable of 4 trillion operations per second with 2 W of electrical power.
The product offerings include a single-board computer (SBC), a system on module (SoM), a USB accessory, a mini PCI-e card, and an M.2 card. The SBC Coral Dev Board and Coral SoM both run Mendel Linux OS – a derivative of Debian. The USB, PCI-e, and M.2 products function as add-ons to existing computer systems, and support Debian-based Linux systems on x86-64 and ARM64 hosts (including Raspberry Pi).
The machine learning runtime used to execute models on the Edge TPU is based on TensorFlow Lite. The Edge TPU is only capable of accelerating forward-pass operations, which means it is primarily useful for performing inferences (although it is possible to perform lightweight transfer learning on the Edge TPU). The Edge TPU also only supports 8-bit math, meaning that for a network to be compatible with the Edge TPU, it needs to either be trained using the TensorFlow quantization-aware training technique, or, since late 2019, use post-training quantization.
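The post-training route can be sketched as a per-tensor affine mapping from floats onto int8 (real toolchains such as TensorFlow Lite add calibration and per-axis options not modeled here; the helper names are made up for the example):

```python
import numpy as np

def quantize_int8(x):
    """Map a float tensor onto int8 with an affine scale and zero point."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant tensors
    zero_point = -128 - int(round(lo / scale))  # lo maps to -128, hi to 127
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_int8(weights)
print(np.max(np.abs(weights - dequantize(q, s, z))))  # error <= scale / 2
```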
On November 12, 2019, Asus announced a pair of single-board computer (SBCs) featuring the Edge TPU. The Asus Tinker Edge T and Tinker Edge R Board designed for IoT and edge AI. The SBCs officially support Android and Debian operating systems. ASUS has also demonstrated a mini PC called Asus PN60T featuring the Edge TPU.
On January 2, 2020, Google announced the Coral Accelerator Module and Coral Dev Board Mini, to be demonstrated at CES 2020 later the same month. The Coral Accelerator Module is a multi-chip module featuring the Edge TPU, PCIe and USB interfaces for easier integration. The Coral Dev Board Mini is a smaller SBC featuring the Coral Accelerator Module and MediaTek 8167s SoC.
=== Pixel Neural Core ===
On October 15, 2019, Google announced the Pixel 4 smartphone, which contains an Edge TPU called the Pixel Neural Core. Google describes it as "customized to meet the requirements of key camera features in Pixel 4", using a neural network search that sacrifices some accuracy in favor of minimizing latency and power use.
=== Google Tensor ===
Google followed the Pixel Neural Core by integrating an Edge TPU into a custom system-on-chip named Google Tensor, which was released in 2021 with the Pixel 6 line of smartphones. The Google Tensor SoC demonstrated "extremely large performance advantages over the competition" in machine learning-focused benchmarks; although instantaneous power consumption also was relatively high, the improved performance meant less energy was consumed due to shorter periods requiring peak performance.
== Lawsuit ==
In 2019, Singular Computing, founded in 2009 by Joseph Bates, a visiting professor at MIT, filed suit against Google alleging patent infringement in TPU chips. By 2020, Google had successfully lowered the number of claims the court would consider to just two: claim 53 of US 8407273 filed in 2012 and claim 7 of US 9218156 filed in 2013, both of which claim a dynamic range of 10⁻⁶ to 10⁶ for floating-point numbers, which the standard float16 cannot represent (without resorting to subnormal numbers) as it has only five bits for the exponent. In a 2023 court filing, Singular Computing specifically called out Google's use of bfloat16, as that format exceeds the dynamic range of float16. Singular claimed that non-standard floating-point formats were non-obvious in 2009, but Google countered that the VFLOAT format, with a configurable number of exponent bits, existed as prior art in 2002. By January 2024, subsequent lawsuits by Singular had brought the number of patents being litigated up to eight. Towards the end of the trial later that month, Google agreed to a settlement with undisclosed terms.
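The range claim can be illustrated with a short calculation (an illustration, not material from the filings): the positive normal range of an IEEE-754-style format follows directly from its exponent and mantissa widths.

```python
def normal_range(exponent_bits, mantissa_bits):
    """Smallest and largest positive normal values of an IEEE-754-style format."""
    bias = 2 ** (exponent_bits - 1) - 1
    smallest = 2.0 ** (1 - bias)
    largest = (2.0 - 2.0 ** -mantissa_bits) * 2.0 ** bias
    return smallest, largest

# float16 (5 exponent bits, 10 mantissa bits): ~6.10e-05 .. 65504.0,
# so it cannot reach 1e-6 without subnormal numbers.
print(normal_range(5, 10))
# bfloat16 (8 exponent bits, 7 mantissa bits): ~1.18e-38 .. 3.39e+38,
# which easily covers the claimed 1e-6 .. 1e6 range.
print(normal_range(8, 7))
```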
== See also ==
Cognitive computer
AI accelerator
Structure tensor, a mathematical foundation for TPUs
Tensor Core, a similar architecture by Nvidia
TrueNorth, a similar device simulating spiking neurons instead of low-precision tensors
Vision processing unit, a similar device specialised for vision processing
== References ==
== External links ==
Cloud Tensor Processing Units (TPUs) (Documentation from Google Cloud)
Photo of Google's TPU chip and board
Photo of Google's TPU v2 board
Photo of Google's TPU v3 board
Photo of Google's TPU v2 pod
The GeForce 10 series is a series of graphics processing units developed by Nvidia, initially based on the Pascal microarchitecture announced in March 2014. This design series succeeded the GeForce 900 series, and is succeeded by the GeForce 16 series and GeForce 20 series using the Turing microarchitecture.
== Architecture ==
The Pascal microarchitecture, named after Blaise Pascal, was announced in March 2014 as a successor to the Maxwell microarchitecture. The first graphics cards from the series, the GeForce GTX 1080 and 1070, were announced on May 6, 2016, and were released several weeks later on May 27 and June 10, respectively. The architecture incorporates either 16 nm FinFET (TSMC) or 14 nm FinFET (Samsung) technologies. Initially, chips were only produced in TSMC's 16 nm process, but later chips were made with Samsung's newer 14 nm process (GP107, GP108).
New Features in GP10x:
CUDA Compute Capability 6.0 (GP100 only), 6.1 (GP102, GP104, GP106, GP107, GP108)
DisplayPort 1.4 (No DSC)
HDMI 2.0b
Fourth generation Delta Color Compression
PureVideo Feature Set H hardware video decoding of HEVC Main10 (10 bit), Main12 (12 bit) and VP9 (GM200 & GM204 did not support HEVC Main10/Main12 or VP9 hardware decoding)
HDCP 2.2 support for 4K DRM protected content playback & streaming (Maxwell GM200 & GM204 lack HDCP 2.2 support, GM206 supports HDCP 2.2)
NVENC HEVC Main10 10 bit hardware encoding (except GP108 which doesn't support NVENC)
GPU Boost 3.0
Simultaneous Multi-Projection
HB SLI Bridge Technology
New memory controller with GDDR5X & GDDR5 support (GP102, GP104, GP106)
Dynamic load balancing scheduling system. This allows the scheduler to dynamically adjust the amount of the GPU assigned to multiple tasks, ensuring that the GPU remains saturated with work except when there is no more work that can safely be distributed. Nvidia therefore has safely enabled asynchronous compute in Pascal's driver.
Instruction-level preemption. In graphics tasks, the driver restricts this to pixel-level preemption, because pixel tasks typically finish quickly and the overhead costs of pixel-level preemption are much lower than those of instruction-level preemption. Compute tasks get either thread-level or instruction-level preemption. Instruction-level preemption is useful because compute tasks can take a long time to finish and there are no guarantees on when a compute task finishes, so the driver enables the very expensive instruction-level preemption for these tasks.
Triple buffering implemented at the driver level, which Nvidia calls "Fast Sync". The GPU maintains three frame buffers per monitor and renders frames continuously, and the most recently completed frame is sent to a monitor each time it needs one. This removes the initial delay that double buffering with vsync causes and prevents tearing. The costs are that more memory is consumed for the buffers and that the GPU will consume power drawing frames that might be wasted, because two or more frames could be drawn between the time a monitor is sent a frame and the time the same monitor needs to be sent another frame. In that case, the latest frame is picked, and any frames drawn after the previously displayed frame but before the picked frame are completely wasted. This feature has been backported to Maxwell-based GPUs in driver version 372.70.
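As a rough illustration of this policy (a simplified model, not Nvidia's driver code), the following sketch keeps only the newest completed frame at each display refresh:

```python
# Simplified model of "Fast Sync" triple buffering: the GPU renders
# continuously, and each display refresh takes the newest finished frame.
class FastSync:
    def __init__(self):
        self.completed = []       # frames finished since the last scan-out
        self.displayed = None

    def gpu_finished_frame(self, frame_id):
        self.completed.append(frame_id)

    def scanout(self):
        if self.completed:
            self.displayed = self.completed[-1]   # newest frame wins
            wasted = self.completed[:-1]          # drawn but never shown
            self.completed = []
            return self.displayed, wasted
        return self.displayed, []                 # repeat last frame; no tearing

fs = FastSync()
for frame in (1, 2, 3):        # the GPU outpaces the monitor
    fs.gpu_finished_frame(frame)
print(fs.scanout())            # (3, [1, 2]): frames 1 and 2 were wasted
```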
Nvidia announced that the Pascal GP100 GPU features four High Bandwidth Memory stacks, allowing a total of 16 GB of HBM2 on the highest-end models, as well as 16 nm technology, Unified Memory and NVLink.
Starting with Windows 10 version 2004, support has been added for using a hardware graphics scheduler to reduce latency and improve performance, which requires a driver level of WDDM 2.7.
== Products ==
=== Founders Edition ===
Announcing the GeForce 10 series products, Nvidia introduced Founders Edition versions of the GTX 1060, 1070, 1070 Ti, 1080 and 1080 Ti. These are what were previously known as reference cards, i.e. cards designed and built by Nvidia itself rather than by its authorized board partners, and used as a reference against which the performance of partner cards is measured. The Founders Edition cards have a die-cast, machine-finished aluminum body with a single radial fan, vapor-chamber cooling (1070 Ti, 1080 and 1080 Ti only), an upgraded power supply and a new low-profile backplate (1070, 1070 Ti, 1080 and 1080 Ti only). Nvidia also released a limited supply of Founders Edition cards for the GTX 1060 that were only available directly from Nvidia's website. Founders Edition prices (with the exception of the GTX 1070 Ti and 1080 Ti) are higher than the MSRP of partner cards; however, some partner cards with complex designs, such as liquid or hybrid cooling, may cost more than the Founders Edition.
=== GeForce 10 (10xx) series for desktops ===
Supported display standards are: DP 1.3/1.4, HDMI 2.0b, dual link DVI
Supported APIs are: Direct3D 12 (feature level 12_1), OpenGL 4.6, OpenCL 3.0 and Vulkan 1.3
=== GeForce 10 (10xx) series for notebooks ===
The biggest highlight of this line of notebook GPUs is that their specifications are close to (for the GTX 1060–1080) or even exceed (for the GTX 1050/1050 Ti) those of their desktop counterparts, as opposed to the "cut-down" specifications of previous generations. As a result, the "M" suffix is completely removed from the naming scheme, denoting that these notebook GPUs deliver performance similar to their desktop equivalents, including the ability for the user to overclock the core frequencies, something not possible with previous generations of notebook GPUs. This was made possible by lower Thermal Design Power (TDP) ratings compared to the desktop equivalents, making these desktop-level GPUs thermally feasible in OEM notebook chassis with improved heat-dissipation designs; as such, they are only available through the OEMs. In addition, the entire line of GTX notebook GPUs is also available in lower-TDP, quieter variants called "Max-Q Design", launched on 27 June 2017. Max-Q parts are made specifically for ultra-thin gaming systems, developed in conjunction with OEM partners that incorporate enhanced heat-dissipation mechanisms with lower operating noise, and are also offered as an additional, more powerful option for existing gaming notebooks.
In addition, the GT series line of notebook GPUs is no longer offered starting from this generation, being replaced by the MX series of notebook GPUs. Only the MX150 is based on Pascal's GP108 die used in the desktop GT 1030, with higher clock frequencies than its desktop counterpart, while the other chips in the MX series are re-branded versions of previous-generation GPUs (the MX130 is a re-branded GeForce 940MX, while the MX110 is a re-branded 920MX).
Supported APIs are: Direct3D 12 (feature level 12_1 or 11_0 on MX110 and MX130), OpenGL 4.6, OpenCL 3.0 and Vulkan 1.3
Only GTX 1070 and GTX 1080 have SLI support.
== Reintroduction ==
Due to production problems surrounding the RTX 30-series cards, a general shortage of semiconductor chips caused by the ongoing COVID-19 pandemic, and increased demand for graphics cards driven by cryptocurrency mining, the GTX 1050 Ti, alongside the RTX 2060 and its Super counterpart, was brought back into production in 2021.
In addition, Nvidia quietly released the GeForce GT 1010 in January 2021.
== Discontinued support ==
Nvidia stopped releasing 32-bit drivers for 32-bit operating systems after driver 391.35 in March 2018.
Nvidia announced that after release of the 470 drivers, it would transition driver support for the Windows 7 and Windows 8.1 operating systems to legacy status and continue to provide critical security updates for these operating systems through September 2024. The GeForce 10 series is the last Nvidia GPU generation to support Windows 7/8.x or any 32-bit operating system; beginning with the Turing architecture, newer Nvidia GPUs now require a 64-bit operating system.
In May 2025, Nvidia discontinued developer support for the Maxwell, Pascal, and Volta architectures, which include the GeForce 10 series. Driver updates are expected to continue for a limited time.
== See also ==
GeForce RTX 30 series
GeForce RTX 40 series
GeForce RTX 50 series
Nvidia Quadro
Nvidia Tesla
Pascal (microarchitecture)
List of Nvidia graphics processing units
== References ==
== External links ==
Official website
NVIDIA GeForce GTX 1080 whitepaper
Media related to Nvidia GeForce 10 series video cards at Wikimedia Commons
A die, in the context of integrated circuits, is a small block of semiconducting material on which a given functional circuit is fabricated. Typically, integrated circuits are produced in large batches on a single wafer of electronic-grade silicon (EGS) or other semiconductor (such as GaAs) through processes such as photolithography. The wafer is cut (diced) into many pieces, each containing one copy of the circuit. Each of these pieces is called a die.
There are three commonly used plural forms: dice, dies, and die. To simplify handling and integration onto a printed circuit board, most dies are packaged in various forms.
== Manufacturing process ==
Most dies are composed of silicon and used for integrated circuits. The process begins with the production of monocrystalline silicon ingots. These ingots are then sliced into disks with a diameter of up to 300 mm.
These wafers are then polished to a mirror finish before going through photolithography. In many steps the transistors are manufactured and connected with metal interconnect layers. These prepared wafers then go through wafer testing to test their functionality. The wafers are then sliced and sorted to filter out the faulty dies. Functional dies are then packaged and the completed integrated circuit is ready to be shipped.
== Uses ==
A die can host many types of circuits. One common use case of an integrated circuit die is in the form of a central processing unit (CPU). Through advances in modern technology, the number of transistors that fit on a die has grown exponentially, in line with Moore's law, as individual transistors have shrunk. Other uses for dies can range from LED lighting to power semiconductor devices.
== Images ==
Images of dies are commonly called die shots.
== See also ==
Die preparation
Integrated circuit design
Wire bonding and ball bonding
== References ==
== External links ==
Wedge Bonding Process on YouTube – animation
Intel Graphics Technology (GT) is the collective name for a series of integrated graphics processors (IGPs) produced by Intel that are manufactured on the same package or die as the central processing unit (CPU). It was first introduced in 2010 as Intel HD Graphics and renamed in 2017 as Intel UHD Graphics.
Intel Iris Graphics and Intel Iris Pro Graphics are the IGP series introduced in 2013 with some models of Haswell processors as the high-performance versions of HD Graphics. Iris Pro Graphics was the first in the series to incorporate embedded DRAM. Since 2016 Intel refers to the technology as Intel Iris Plus Graphics with the release of Kaby Lake.
In the fourth quarter of 2013, Intel integrated graphics represented, in units, 65% of all PC graphics processor shipments. However, this percentage does not represent actual adoption as a number of these shipped units end up in systems with discrete graphics cards.
== History ==
Before the introduction of Intel HD Graphics, Intel integrated graphics were built into the motherboard's northbridge, as part of Intel's Hub Architecture. They were known as Intel Extreme Graphics and Intel GMA. As part of the Platform Controller Hub (PCH) design, the northbridge was eliminated and graphics processing was moved to the same die as the central processing unit (CPU).
The previous Intel integrated graphics solution, Intel GMA, had a reputation for lacking performance and features, and therefore was not considered a good choice for more demanding graphics applications, such as 3D gaming. The performance increases brought by Intel's HD Graphics made the products competitive with integrated graphics adapters made by its rivals, Nvidia and ATI/AMD. Intel HD Graphics, with its low power consumption, was capable enough that PC manufacturers often stopped offering discrete graphics options in both low-end and high-end laptop lines, where reduced dimensions and low power consumption are important.
== Generations ==
Intel HD and Iris Graphics are divided into generations, and within each generation are divided into 'tiers' of increasing performance, denominated by the 'GTx' label. Each generation corresponds to the implementation of a Gen graphics microarchitecture with a corresponding GEN instruction set architecture since Gen4.
=== Gen5 architecture ===
==== Westmere ====
In January 2010, Clarkdale and Arrandale processors with Ironlake graphics were released, branded as Celeron, Pentium, or Core with HD Graphics. There was only one specification: 12 execution units, with up to 43.2 GFLOPS at 900 MHz. It can decode an H.264 1080p video at up to 40 fps.
Its direct predecessor, the GMA X4500, featured 10 EUs at 800 MHz, but it lacked some capabilities.
=== Gen6 architecture ===
==== Sandy Bridge ====
In January 2011, the Sandy Bridge processors were released, introducing the "second generation" HD Graphics:
Sandy Bridge Celeron and Pentium have Intel HD, while Core i3 and above have either HD 2000 or HD 3000. HD Graphics 2000 and 3000 include hardware video encoding and HD postprocessing effects.
=== Gen7 architecture ===
==== Ivy Bridge ====
On 24 April 2012, Ivy Bridge was released, introducing the "third generation" of Intel's HD graphics:
Ivy Bridge Celeron and Pentium have Intel HD, while Core i3 and above have either HD 2500 or HD 4000. HD Graphics 2500 and 4000 include hardware video encoding and HD postprocessing effects.
For some low-power mobile CPUs there is limited video decoding support, while none of the desktop CPUs have this limitation. HD P4000 is featured on the Ivy Bridge E3 Xeon processors with the 12X5 v2 descriptor, and supports unbuffered ECC RAM.
=== Gen7.5 architecture ===
==== Haswell ====
In June 2013, Haswell CPUs were announced, with four tiers of integrated GPUs:
The 128 MB of eDRAM in the Iris Pro GT3e is in the same package as the CPU, but on a separate die manufactured in a different process. Intel refers to this as a Level 4 cache, available to both CPU and GPU, naming it Crystalwell. The Linux drm/i915 driver is aware and capable of using this eDRAM since kernel version 3.12.
=== Gen8 architecture ===
==== Broadwell ====
In November 2013, it was announced that Broadwell-K desktop processors (aimed at enthusiasts) would also carry Iris Pro Graphics.
The following models of integrated GPU are announced for Broadwell processors:
==== Braswell ====
=== Gen9 architecture ===
==== Skylake ====
The Skylake line of processors, launched in August 2015, retires VGA support, while supporting multi-monitor setups of up to three monitors connected via HDMI 1.4, DisplayPort 1.2 or Embedded DisplayPort (eDP) 1.3 interfaces.
The following models of integrated GPU are available or announced for the Skylake processors:
==== Apollo Lake ====
The Apollo Lake line of processors was launched in August 2016.
=== Gen9.5 architecture ===
==== Kaby Lake ====
The Kaby Lake line of processors was introduced in August 2016. New features: speed increases, support for 4K UHD "premium" (DRM encoded) streaming services, media engine with full hardware acceleration of 8- and 10-bit HEVC and VP9 decode.
==== Kaby Lake Refresh / Amber Lake / Coffee Lake / Coffee Lake Refresh / Whiskey Lake / Comet Lake ====
The Kaby Lake Refresh line of processors was introduced in October 2017. New features: HDCP 2.2 support
==== Gemini Lake/Gemini Lake Refresh ====
New features: HDMI 2.0 support, VP9 10-bit Profile2 hardware decoder
=== Gen11 architecture ===
==== Ice Lake ====
New features: 10 nm Gen 11 GPU microarchitecture, two HEVC 10-bit encode pipelines, three 4K display pipelines (or 2× 5K60, 1× 4K120), variable rate shading (VRS), and integer scaling.
While the microarchitecture continues to support double-precision floating-point as previous versions did, the mobile configurations of it do not include the feature and therefore on these it is supported only through emulation.
=== Xe-LP architecture (Gen12) ===
These are based on the Intel Xe-LP microarchitecture, the low power variant of the Intel Xe GPU architecture also known as Gen 12. New features include Sampler Feedback, Dual Queue Support, DirectX12 View Instancing Tier2, and AV1 8-bit and 10-bit fixed-function hardware decoding. Support for FP64 was removed.
=== Arc Alchemist Tile GPU (Gen12.7) ===
Intel Meteor Lake and Arrow Lake use Intel Arc Alchemist Tile GPU microarchitecture.
New features: DirectX 12 Ultimate Feature Level 12_2 support, 8K 10-bit AV1 hardware encoder, HDMI 2.1 48Gbps native support
==== Meteor Lake ====
=== Arc Battlemage Tile GPU ===
Intel Lunar Lake will use Intel Arc Battlemage Tile GPU microarchitecture.
== Features ==
=== Intel Insider ===
Beginning with Sandy Bridge, the graphics processors include a form of digital copy protection and digital rights management (DRM) called Intel Insider, which allows decryption of protected media within the processor. Previously there was a similar technology called Protected Audio Video Path (PAVP).
=== HDCP ===
Intel Graphics Technology supports the HDCP technology, but the actual HDCP support depends on the computer's motherboard.
=== Intel Quick Sync Video ===
Intel Quick Sync Video is Intel's hardware video encoding and decoding technology, which is integrated into some of the Intel CPUs. The name "Quick Sync" refers to the use case of quickly transcoding ("syncing") a video from, for example, a DVD or Blu-ray Disc to a format appropriate to, for example, a smartphone. Quick Sync was introduced with the Gen 6 in Sandy Bridge microprocessors on 9 January 2011.
=== Graphics Virtualization Technology ===
Graphics Virtualization Technology (GVT) was announced 1 January 2014 and introduced at the same time as Intel Iris Pro. Intel integrated GPUs support the following sharing methods:
Direct passthrough (GVT-d): the GPU is available for a single virtual machine without sharing with other machines
Paravirtualized API forwarding (GVT-s): the GPU is shared by multiple virtual machines using a virtual graphics driver; few supported graphics APIs (OpenGL, DirectX), no support for GPGPU
Full GPU virtualization (GVT-g): the GPU is shared by multiple virtual machines (and by the host machine) on a time-sharing basis using a native graphics driver; similar to AMD's MxGPU and Nvidia's vGPU, which are available only on professional line cards (Radeon Pro and Nvidia Quadro)
Full GPU virtualization in hardware (SR-IOV): the GPU can be partitioned and used/shared by multiple virtual machines and the host with support built into the hardware, unlike GVT-g, which does this in software (in the driver).
Gen9 graphics (i.e. the graphics powering 6th- through 9th-generation Intel processors) is the last generation to support the software-based vGPU solution GVT-g (Intel Graphics Virtualization Technology -g).
SR-IOV (Single Root I/O Virtualization) is supported only on platforms with 11th-generation Intel Core "G" processors (products formerly known as Tiger Lake) or newer. This leaves Rocket Lake (the 11th-generation desktop processors) without support for either GVT-g or SR-IOV, and thus without any full GPU virtualization support. Starting with 12th-generation Intel Core processors, both desktop and laptop Intel CPUs have SR-IOV support.
=== Multiple monitors ===
==== Ivy Bridge ====
HD 2500 and HD 4000 GPUs in Ivy Bridge CPUs are advertised as supporting three active monitors, but this only works if two of the monitors are configured identically, which covers many but not all three-monitor configurations. The reason for this is that the chipsets only include two phase-locked loops (PLLs) for generating the pixel clocks timing the data being transferred to the displays.
Therefore, three simultaneously active monitors can only be achieved when at least two of them share the same pixel clock, such as:
Using two or three DisplayPort connections, as they require only a single pixel clock for all connections. Passive adapters from DisplayPort to some other connector do not count as a DisplayPort connection, as they rely on the chipset being able to emit a non-DisplayPort signal through the DisplayPort connector. Active adapters that contain additional logic to convert the DisplayPort signal to some other format count as a DisplayPort connection.
Using two non-DisplayPort connections of the same connection type (for example, two HDMI connections) and the same clock frequency (like when connected to two identical monitors at the same resolution), so that a single unique pixel clock can be shared between both connections.
Another possible three-monitor solution uses the Embedded DisplayPort on a mobile CPU (which does not use a chipset PLL at all) along with any two chipset outputs.
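These constraints amount to a small feasibility check, sketched below as an illustration (the function name and connector encoding are hypothetical): DisplayPort outputs collectively consume one pixel clock, Embedded DisplayPort consumes none, and every other distinct combination of connector type and pixel clock needs a PLL of its own, of which the chipset has only two.

```python
def plls_needed(outputs):
    """outputs: list of (connector, pixel_clock_mhz) tuples.
    All DisplayPort outputs share one pixel clock; eDP uses no chipset PLL;
    each remaining distinct (connector, pixel clock) pair needs its own PLL."""
    clocks = set()
    has_dp = False
    for connector, clock in outputs:
        if connector == "eDP":
            continue                     # driven without a chipset PLL
        elif connector == "DP":
            has_dp = True                # all DP outputs share one clock
        else:
            clocks.add((connector, clock))
    return len(clocks) + (1 if has_dp else 0)

# Two identical HDMI monitors plus one DisplayPort monitor: 2 PLLs, feasible.
print(plls_needed([("HDMI", 148.5), ("HDMI", 148.5), ("DP", 148.5)]) <= 2)   # True
# Three differently clocked non-DP monitors: 3 PLLs, not possible here.
print(plls_needed([("HDMI", 148.5), ("DVI", 162.0), ("HDMI", 74.25)]) <= 2)  # False
```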
==== Haswell ====
ASRock Z87- and H87-based motherboards support three displays simultaneously. Asus H87-based motherboards are also advertised to support three independent monitors at once.
== Capabilities (GPU hardware) ==
OpenCL 2.1 and 2.2 are possible on OpenCL 2.0 hardware (Broadwell and later) with software updates.
Support in Mesa is provided by two Gallium3D-style drivers, with the Iris driver supporting Broadwell hardware and later, while the Crocus driver supports Haswell and earlier. The classic Mesa i965 driver was removed in Mesa 22.0, although it would continue to see further maintenance as part of the Amber branch.
The newer OpenCL driver in Mesa, RustiCL, is written in Rust and is OpenCL 3.0 conformant for Intel Xe graphics as of Mesa 22.3. Intel Broadwell and later are also expected to become conformant to OpenCL 3.0 with many 2.x features, while the target for Ivy Bridge and Haswell is OpenCL 1.2. The current development state is tracked on mesamatrix.
The NEO compute runtime driver supports OpenCL 3.0 (with the 1.2, 2.0 and 2.1 feature sets included) for Broadwell and later, and the Level Zero API 1.3 for Skylake and later.
All GVT virtualization methods are supported since the Broadwell processor family with KVM and Xen.
== Capabilities (GPU video acceleration) ==
Intel developed a dedicated SIP core which implements multiple video decompression and compression algorithms branded Intel Quick Sync Video. Some are implemented completely, some only partially.
=== Hardware-accelerated algorithms ===
=== Intel Pentium and Celeron family ===
=== Intel Atom family ===
== Documentation ==
Intel releases programming manuals for most of Intel HD Graphics devices via its Open Source Technology Center. This allows various open source enthusiasts and hackers to contribute to driver development, and port drivers to various operating systems, without the need for reverse engineering.
== See also ==
Graphics card
AMD APU
Free and open-source graphics device driver
List of Intel graphics processing units
List of Nvidia graphics processing units
List of AMD graphics processing units
== Notes ==
== References ==
== External links ==
Intel Graphics Performance Analyzers 2024.1
Intel's Embedded DRAM
Intel Open Source Technology Center: Linux graphics documentation (includes the GPU manuals)
A display resolution standard is a commonly used width and height dimension (display resolution) of an electronic visual display device, measured in pixels. This information is used for electronic devices such as a computer monitor. Certain combinations of width and height are standardized (e.g. by VESA) and typically given a name and an initialism which is descriptive of its dimensions.
The graphics display resolution is also known as the display mode or the video mode, although these terms usually include further specifications such as the image refresh rate and the color depth.
The resolution itself only indicates the number of distinct pixels that can be displayed on a screen, which affects the sharpness and clarity of the image. It can be controlled by various factors, such as the type of display device, the signal format, the aspect ratio, and the refresh rate.
Some graphics display resolutions are frequently referenced with a single number (e.g. in "1080p" or "4K"), which represents the number of horizontal or vertical pixels. More generally, any resolution can be expressed as two numbers separated by a multiplication sign (e.g. "1920×1080"), which represent the width and height in pixels. Since most screens have a landscape format to accommodate the human field of view, the first number for the width (in columns) is larger than the second for the height (in lines), and this conventionally holds true for handheld devices that are predominantly or even exclusively used in portrait orientation.
The graphics display resolution is influenced by the aspect ratio, which is the ratio of the width to the height of the display. The aspect ratio determines how the image is scaled and stretched or cropped to fit the screen. The most common aspect ratios for graphics displays are 4:3, 16:10 (equal to 8:5), 16:9, and 21:9. The aspect ratio also affects the perceived size of objects on the screen.
The native screen resolution together with the physical dimensions of the graphics display can be used to calculate its pixel density. An increase in the pixel density often correlates with a decrease in the size of individual pixels on a display.
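For example, the pixel density in pixels per inch (PPI) follows directly from the native resolution and the diagonal size; a minimal calculation:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density from the native resolution and the diagonal size in inches."""
    diagonal_px = math.hypot(width_px, height_px)   # pixels along the diagonal
    return diagonal_px / diagonal_in

# A 27-inch 2560 x 1440 monitor has a density of about 109 PPI.
print(round(pixels_per_inch(2560, 1440, 27)))  # 109
```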
Some graphics displays support multiple resolutions and aspect ratios, which can be changed by the user or by the software. In particular, some devices use a hardware/native resolution that is a simple multiple of the recommended software/virtual resolutions in order to show finer details; marketing terms for this include "Retina display".
== Table of display resolution standards ==
== Aspect ratio ==
The favored aspect ratio of mass-market display industry products has changed gradually from 4:3, then to 16:10, then to 16:9, and has now changed to 18:9 for smartphones. The 4:3 aspect ratio generally reflects older products, especially the era of the cathode ray tube (CRT). The 16:10 aspect ratio had its largest use in the 1995–2010 period, and the 16:9 aspect ratio tends to reflect post-2010 mass-market computer monitor, laptop, and entertainment products displays. On CRTs, there was often a difference between the aspect ratio of the computer resolution and the aspect ratio of the display causing non-square pixels (e.g. 320 × 200 or 1280 × 1024 on a 4:3 display).
The 4:3 aspect ratio was common in older television cathode ray tube (CRT) displays, which were not easily adaptable to a wider aspect ratio. When good-quality alternative technologies (i.e., liquid crystal displays (LCDs) and plasma displays) became more available and less costly around the year 2000, common computer displays and entertainment products moved to a wider aspect ratio, first to 16:10. The 16:10 ratio was a compromise, still showing older 4:3 broadcast TV shows acceptably while allowing better viewing of widescreen movies. However, around the year 2005, home entertainment displays (i.e., TV sets) gradually moved from 16:10 to the 16:9 aspect ratio, for further improvement of viewing widescreen movies. By about 2007, virtually all mass-market entertainment displays were 16:9. In 2011, 1920 × 1080 (Full HD, the native resolution of Blu-ray) was the favored resolution in the most heavily marketed entertainment market displays. The next standard, 3840 × 2160 (4K UHD), was first sold in 2013.
Also in 2013, displays with 2560 × 1080 (aspect ratio 64:27 or 2.370, however commonly referred to as "21:9" for easy comparison with 16:9) appeared, which closely approximate the common CinemaScope movie standard aspect ratio of 2.35–2.40. In 2014, "21:9" screens with pixel dimensions of 3440 × 1440 (actual aspect ratio 43:18 or 2.38) became available as well.
The computer display industry maintained the 16:10 aspect ratio longer than the entertainment industry, but in the 2005–2010 period, computers were increasingly marketed as dual-use products, with uses in the traditional computer applications, but also as means of viewing entertainment content. In this time frame, with the notable exception of Apple, almost all desktop, laptop, and display manufacturers gradually moved to promoting only 16:9 aspect ratio displays. By 2011, the 16:10 aspect ratio had virtually disappeared from the Windows laptop display market (although Mac laptops are still mostly 16:10, including the 2880 × 1800 15" Retina MacBook Pro and the 2560 × 1600 13" Retina MacBook Pro). One consequence of this transition was that the highest available resolutions moved generally downward (i.e., the move from 1920 × 1200 laptop displays to 1920 × 1080 displays).
In response to usability flaws of now common 16:9 displays in office/professional applications, Microsoft and Huawei started to offer notebooks with a 3:2 aspect ratio. By 2021, Huawei also offers a monitor display offering this aspect ratio, targeted towards professional uses.
== High-definition ==
All standard HD resolutions share a 16∶9 aspect ratio, although some derived resolutions with smaller or larger ratios also exist, e.g. 4∶3 and 64∶27, respectively. Most of the narrower resolutions are only used for storing, not for displaying videos, while the wider resolutions are often available as physical displays. YouTube, for instance, recommends users upload videos in a 16:9 format with 240, 360, 480 (SD), 720, 1080 (HD), 1440, 2160 (4K) or 4320 (8K) lines.
While the monikers for those resolutions originally all used a letter prefix with "HD" for the multiplier, and possibly a "+" suffix for intermediate or taller formats, the newer, larger formats tend to be used with "K" notation for thousands of pixels of horizontal resolution, but may be disambiguated by a system qualifier that includes "HD", e.g. "8K UHD" instead of just "8K".
=== 960 × 540 (qHD) ===
Note: qHD is quarter HD; QHD is quad HD
qHD is a display resolution of 960 × 540 pixels, exactly one-quarter of a Full HD (1080p) frame, in a 16:9 aspect ratio. Notably, despite the name, it is a quarter of Full HD rather than of 720p "HD" (one-quarter of 1280 × 720 would be 640 × 360), yet it is not written "qFHD".
Some of the few tabletop TVs to use this as their native resolution, from around 2005, were the Sony XEL-1 and the Sharp Aquos P50. Sharp marketed its ED TV sets with this resolution as PAL optimal.
Similar to DVGA, this resolution became popular for high-end smartphone displays in early 2011. Mobile phones including the Jolla, Sony Xperia C, HTC Sensation, Motorola Droid RAZR, LG Optimus L9, Microsoft Lumia 535, and Samsung Galaxy S4 Mini have displays with the qHD resolution, as does the PlayStation Vita portable game system.
=== 1280 × 720 (HD) ===
The HD or 720p resolution of 1280 × 720 pixels stems from high-definition television (HDTV), where it originally used 50 or 60 frames per second. With its 16:9 aspect ratio, it is exactly 2 times the width and 1+1/2 times the height of 4:3 VGA (640 × 480), which shares its aspect ratio and 480 line count with NTSC. HD, therefore, has exactly 3 times as many pixels as VGA, i.e. almost 1 megapixel.
In the mid-2000s, when the digital HD technology and standard debuted on the market, this type of resolution was often referred to by the branded name "HD ready" or "HDr" for short, which specified it as the minimum resolution for devices to qualify for the certification. However, few screens have been built that use this resolution natively. Most employ 16:9 panels with 768 lines instead (WXGA), which results in odd numbers of pixels per line: the exact value of 1365+1⁄3 is rounded to 1360, 1364, 1366 or even 1376, the next multiple of 16.
=== 1600 × 900 (HD+) ===
The HD+ resolution of 1600 × 900 pixels in a 16:9 aspect ratio is often referred to as "900p".
=== 1920 × 1080 (FHD) ===
FHD (Full HD) is the resolution 1920 × 1080 used by the 1080p and 1080i HDTV video formats. It has a 16:9 aspect ratio and 2,073,600 total pixels, i.e. very close to 2 megapixels, and is exactly 50% larger than 720p HD (1280 × 720) in each dimension for a total of 2.25 times as many pixels. When using interlacing, the uncompressed bandwidth requirements are similar to those of 720p at the same field rate (a 12.5% increase, as one field of 1080i video is 1,036,800 pixels, and one frame of 720p video is 921,600 pixels). Although the number of pixels is the same for 1080p and 1080i, the effective resolution is somewhat lower for the interlaced format, as it is necessary to use some vertical low-pass filtering to reduce temporal artifacts such as interline twitter.
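The pixel arithmetic behind these comparisons is simple enough to verify directly:

```python
field_1080i = 1920 * 1080 // 2        # one interlaced field: 1,036,800 pixels
frame_720p = 1280 * 720               # one progressive frame: 921,600 pixels
print(field_1080i / frame_720p)       # 1.125, i.e. a 12.5% increase
print((1920 * 1080) / (1280 * 720))   # 2.25 times as many pixels per full frame
```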
Sometimes, this resolution is referred to simply as HD, as is evident from derived terms like qHD (quarter), which has half of the lines and columns of its common base 1920 × 1080, whereas QHD (quadruple) has double the dimensions of 1280 × 720 instead.
When set in relation to higher resolutions, 1920 × 1080 is also referred to as 2K because it has roughly 2000 pixels of horizontal resolution.
The next bigger resolution from 1920 × 1080 in vertical direction is 1920 × 1200 (16∶10), which is hence called FHD+ by some producers, but is elsewhere known as WUXGA, the wider variant of 1600 × 1200 UXGA.
=== 2048 × 1080 (DCI 2K) ===
DCI 2K is a standardized format established by the Digital Cinema Initiatives consortium in 2005 for 2K video projection. This format has a resolution of 2048 × 1080 (2.2 megapixels) with an aspect ratio of 256∶135 (1.8962) or roughly "17∶9". This is the native resolution for DCI-compliant 2K digital projectors – active displays with this resolution are rare. The display aspect ratio is frequently wider than the native one, requiring non-square pixels.
=== 2560 × 1080 (UWFHD) ===
The resolution 2560 × 1080 is equivalent to Full HD (1920 × 1080) extended in width by one third, with an aspect ratio of 64:27 (2.370, or 21.3:9). Monitors at this resolution usually contain built-in firmware to divide the screen into two 1280 × 1080 screens.
There are other, non-standard display resolutions with 1080 lines whose aspect ratios fall between the usual 16∶9 and the ultra-wide 64∶27, e.g. 18∶9, 18.5∶9, 19∶9 and 19.5∶9. They are mostly used in smartphones or phablets and do not have established names, but may be subsumed under the umbrella term ultra-wide (full) HD.
=== 2560 × 1440 (QHD) ===
Note: qHD is quarter HD; QHD is quad HD
QHD (Quad HD) or 1440p is a display resolution of 2560 × 1440 pixels. The name "QHD" reflects the fact that it has four times as many pixels as HD (720p). It is also sometimes called "WQHD" to distinguish it from qHD (960 × 540), otherwise it is technically redundant since the HD resolutions are all widescreen.
This resolution was under consideration by the ATSC in the late 1980s to become the standard HDTV format, because it is exactly 3 times the height of SDTV NTSC television signals, with a wider aspect ratio. Pragmatic technical constraints made them choose the now well-known 16:9 formats of 1280 × 720 (1.5x NTSC/VGA height) and 1920 × 1080 (2x PAL height of 540 lines) instead.
In October 2006, Chi Mei Optoelectronics (CMO) announced a 47-inch 1440p LCD panel to be released in Q2 2007; the panel was planned to finally debut at FPD International 2008 in the form of an autostereoscopic 3D display. As of the end of 2013, monitors with this resolution were becoming more common.
The 27-inch version of the Apple Cinema Display monitor introduced in July 2010 has a native resolution of 2560 × 1440, as did its successor, the 27-inch Apple Thunderbolt Display.
The resolution is also used in portable devices. In September 2012, Samsung announced the Series 9 WQHD laptop with a 13-inch 2560 × 1440 display. In August 2013, LG announced a 5.5-inch QHD smartphone display, which was used in the LG G3. In October 2013 Vivo announced a smartphone with a 2560 × 1440 display.
Other phone manufacturers followed in 2014, such as Samsung with the Galaxy Note 4, and Google and Motorola with the Nexus 6 smartphone. By the mid-2010s, it was a common resolution among flagship phones such as the HTC 10, the Lumia 950, and the Galaxy S6 and S7.
==== 5120 × 1440 DQHD ====
Ultrawide (curved) monitors with a 32:9 aspect ratio and a 5120 × 1440 resolution have been referred to as Dual QHD or DQHD for short. It is sometimes also called "Super-Ultrawide" for marketing purposes.
=== 3200 × 1800 (QHD+) ===
The resolution 3200 × 1800 has a 16:9 aspect ratio and is exactly four times as many pixels as the 1600 × 900 HD+ resolution, and is therefore referred to as "QHD+" (Quad HD+). It has also been referred to as simply "QHD" by some companies.
The first products announced to use this resolution were the 2013 HP Envy 14 TouchSmart Ultrabook and the 13.3-inch Samsung Ativ Q.
=== 3440 × 1440 (UWQHD) ===
The resolution 3440 × 1440 is equivalent to QHD (2560 × 1440) extended in width by 34%, giving it an aspect ratio of 43:18 (2.38:1, or 21.5:9; commonly marketed as simply "21:9"). The first monitor to support this resolution was the 34-inch LG 34UM95-P. This monitor was first released in Germany in late December 2013, before being officially announced at CES 2014.
=== 3840 × 1080 ===
The resolution 3840 × 1080 is equivalent to two Full HD (1920 × 1080) displays side by side or one vertical half of a 4K UHD (3840 × 2160) display. It has an aspect ratio of 32:9 (3.56:1), close to the 3.6:1 ratio of IMAX UltraWideScreen 3.6. Samsung monitors at this resolution contain built-in firmware to divide the screen into two 1920 × 1080 screens, or one 2560 × 1080 and one 1280 × 1080 screen.
=== 3840 × 1600 ===
The resolution 3840 × 1600 has a 12:5 aspect ratio, i.e. 2.4 or 21.6:9, which is commonly marketed as simply "21:9". It is equivalent to WQXGA (2560 × 1600) extended in width by 50%, or 4K UHD (3840 × 2160) reduced in height by 26%. This resolution is commonly encountered in cinematic 4K content that has been cropped vertically to a widescreen aspect ratio. The first monitor to support this resolution was the 37.5-inch LG 38UC99-W. Other vendors followed, with Dell U3818DW, HP Z38c, and Acer XR382CQK.
This resolution has been referred to as UW4K, WQHD+, UWQHD+ or QHD+, though no single name is agreed upon.
=== 3840 × 2160 (4K UHD) ===
The resolution 3840 × 2160, sometimes referred to as 4K UHD or 4K × 2K, has a 16:9 aspect ratio and 8,294,400 pixels. It is double the size of Full HD (1920 × 1080) in both dimensions for a total of four times as many pixels, and triple the size of HD (1280 × 720) in both dimensions for a total of nine times as many pixels. It is the lowest common multiple of the HDTV resolutions.
3840 × 2160 was chosen as the resolution of the UHDTV1 format defined in SMPTE ST 2036-1, as well as the 4K UHDTV system defined in ITU-R BT.2020 and the UHD-1 broadcast standard from DVB. It is also the minimum resolution requirement for CEA's definition of an Ultra HD display. Before the publication of these standards, it was sometimes casually referred to as "QFHD" (Quad Full HD).
The first commercial displays capable of this resolution include an 82-inch LCD TV revealed by Samsung in early 2008, the Sony SRM-L560, a 56-inch LCD reference monitor announced in October 2009, an 84-inch display demonstrated by LG in mid-2010, and a 27.84-inch 158 PPI 4K IPS monitor for medical purposes launched by Innolux in November 2010. In October 2011 Toshiba announced the REGZA 55x3, which is claimed to be the first 4K glasses-free 3D TV.
DisplayPort supports 3840 × 2160 at 30 Hz in version 1.1 and added support for up to 75 Hz in version 1.2 (2009) and 120 Hz in version 1.3 (2014), while HDMI added support for 3840 × 2160 at 30 Hz in version 1.4 (2009) and 60 Hz in version 2.0 (2013).
When support for 4K at 60 Hz was added in DisplayPort 1.2, no DisplayPort timing controllers (TCONs) existed which were capable of processing the necessary amount of data from a single video stream. As a result, the first 4K monitors from 2013 and early 2014, such as the Sharp PN-K321, Asus PQ321Q, and Dell UP2414Q and UP3214Q, were addressed internally as two 1920 × 2160 monitors side by side instead of a single display and made use of DisplayPort's Multi-Stream Transport (MST) feature to multiplex a separate signal for each half over the connection, splitting the data between two timing controllers. Newer timing controllers became available in 2014, and after mid-2014 new 4K monitors such as the Asus PB287Q no longer relied on the MST tiling technique to achieve 4K at 60 Hz, instead using the standard SST (Single-Stream Transport) approach.
In 2015, Sony announced the Xperia Z5 Premium, the first smartphone with a 4K display, and in 2017 Sony announced the Xperia XZ Premium, the first smartphone with a 4K HDR display.
=== 4096 × 2160 (DCI 4K) ===
4096 × 2160, referred to as DCI 4K, Cinema 4K or 4K × 2K, is the resolution used by the 4K container format defined by the Digital Cinema Initiatives Digital Cinema System Specification, a prominent standard in the cinema industry. This resolution has an aspect ratio of 256:135 (1.8962:1), and 8,847,360 total pixels. This is the native resolution for DCI 4K digital projectors and displays.
HDMI added support for 4096 × 2160 at 24 Hz in version 1.4 and 60 Hz in version 2.0.
=== 5120 × 2160 ===
The resolution 5120 × 2160 is equivalent to 4K UHD (3840 × 2160) extended in width by one third, giving it a 64:27 aspect ratio (2.370 or 21.3:9, commonly marketed as simply "21:9") and 11,059,200 total pixels. It is exactly double the size of 2560 × 1080 in both dimensions, for a total of four times as many pixels. The first displays to support this resolution were 105-inch televisions, the LG 105UC9 and the Samsung UN105S9W. In December 2017, LG announced a 34-inch 5120 × 2160 monitor, the 34WK95U, and in January 2021 the 40-inch 40WP95C. LG refers to this resolution as "5K2K WUHD".
=== 5120 × 2880 (5K) ===
The resolution 5120 × 2880, commonly referred to as 5K or 5K × 3K, has a 16:9 aspect ratio and 14,745,600 pixels. Although it is not established by any of the UHDTV standards, some manufacturers such as Dell have referred to it as "UHD+". It is exactly double the pixel count of QHD (2560 × 1440) in both dimensions for a total of four times as many pixels, and is one third larger than 4K UHD (3840 × 2160) in both dimensions for a total of 1.77 times as many pixels. The line count of 2880 is also the least common multiple of 480 and 576, the scanline count of NTSC and PAL, respectively. Such a resolution can vertically scale SD content to fit by natural numbers (6 for NTSC and 5 for PAL). Horizontal scaling of SD is always fractional (non-anamorphic: 5.33...5.47, anamorphic: 7.11...7.29).
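This integer-scaling property can be checked with Python's math.lcm (available since Python 3.9):

```python
import math

print(math.lcm(480, 576))        # 2880: least common multiple of the NTSC and PAL line counts
print(2880 // 480, 2880 // 576)  # 6 and 5: integer vertical scale factors for NTSC and PAL
```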
The first display with this resolution was the Dell UltraSharp UP2715K, announced on September 5, 2014. On October 16, 2014, Apple announced the iMac with Retina 5K display.
DisplayPort version 1.3 added support for 5K at 60 Hz over a single cable, whereas version 1.2 was only capable of 5K at 30 Hz. Early 5K 60 Hz displays such as the Dell UltraSharp UP2715K and HP DreamColor Z27q that lacked DisplayPort 1.3 support required two DisplayPort 1.2 connections to operate at 60 Hz, in a tiled display mode similar to early 4K displays using DP MST.
=== 7680 × 4320 (8K UHD) ===
The resolution 7680 × 4320, sometimes referred to as 8K UHD, has a 16:9 aspect ratio and 33,177,600 pixels. It is exactly double the size of 4K UHD (3840 × 2160) in each dimension for a total of four times as many pixels, and quadruple the size of Full HD (1920 × 1080) in each dimension for a total of sixteen times as many pixels. 7680 × 4320 was chosen as the resolution of the UHDTV2 format defined in SMPTE ST 2036-1, as well as the 8K UHDTV system defined in ITU-R BT.2020 and the UHD-2 broadcast standard from DVB.
DisplayPort 1.3, finalized by VESA in late 2014, added support for 7680 × 4320 at 30 Hz (or 60 Hz with Y′CBCR 4:2:0 subsampling). VESA's Display Stream Compression (DSC), which was part of early DisplayPort 1.3 drafts and would have enabled 8K at 60 Hz without subsampling, was cut from the specification prior to publication of the final draft.
DSC support was reintroduced with the publication of DisplayPort 1.4 in March 2016. Using DSC, a "visually lossless" form of compression, formats up to 7680 × 4320 (8K UHD) at 60 Hz with HDR and 30 bit/px color depth are possible without subsampling.
== Video Graphics Array (VGA and derivatives) ==
=== 160 × 120 (QQVGA) ===
Quarter-QVGA (QQVGA or qqVGA) denotes a resolution of 160 × 120 (4:3 storage aspect ratio) or 120 × 160 pixels, usually used in displays of handheld devices. The term Quarter-QVGA signifies a resolution of one fourth the number of pixels in a QVGA display (half the number of vertical and half the number of horizontal pixels) which itself has one fourth the number of pixels in a VGA display. There are also devices with QQVGA 160 × 128 (5:4 storage aspect ratio).
The abbreviation qqVGA may be used to distinguish quarter from quad, just like qVGA.
=== 240 × 160 ===
HQVGA (or Half-QVGA) denotes a display screen resolution of 240 × 160 or 160 × 240 pixels, as seen on the Game Boy Advance. This resolution is half of QVGA, which is itself a quarter of VGA, which is 640 × 480 pixels.
=== 320 × 240 (QVGA) ===
Quarter VGA (QVGA or qVGA) is a popular term for a computer display with 320 × 240 display resolution. QVGA displays were most often used in mobile phones, personal digital assistants (PDA), and some handheld game consoles. Often the displays are in a "portrait" orientation (i.e., taller than they are wide, as opposed to "landscape") and are referred to as 240 × 320.
The name comes from having a quarter of the 640 × 480 maximum resolution of the original IBM Video Graphics Array display technology, which became a de facto industry standard in the late 1980s. QVGA is not a standard mode offered by the VGA BIOS, even though VGA and compatible chipsets support a QVGA-sized Mode X. The term refers only to the display's resolution and thus the abbreviated term QVGA or Quarter VGA is more appropriate to use.
QVGA resolution is also used in digital video recording equipment as a low-resolution mode requiring less data storage capacity than higher resolutions, typically in still digital cameras with video recording capability, and some mobile phones. Each frame is an image of 320 × 240 pixels. QVGA video is typically recorded at 15 or 30 frames per second. QVGA mode describes the size of an image in pixels, commonly called the resolution; numerous video file formats support this resolution.
While QVGA is a lower resolution than VGA, at higher resolutions the "Q" prefix commonly means quad(ruple) or four times higher display resolution (e.g., QXGA is four times higher resolution than XGA). To distinguish quarter from quad, lowercase "q" is sometimes used for "quarter" and uppercase "Q" for "Quad", by analogy with SI prefixes like m/M and p/P, but this is not a consistent usage.
Some examples of devices that use QVGA display resolution include the iPod Classic, Samsung i5500, LG Optimus L3-E400, Galaxy Fit, Y and Pocket, HTC Wildfire, Sony Ericsson Xperia X10 Mini and Mini pro and Nintendo 3DS' bottom screen.
=== 400 × 240 (WQVGA) ===
Wide QVGA or WQVGA are some display resolutions having the same height in pixels as QVGA, but wider.
Since QVGA is 320 pixels wide and 240 pixels high (aspect ratio of 4:3), the resolution of a WQVGA screen might be 360 × 240 (3:2 aspect ratio), 384 × 240 (16:10 aspect ratio), 400 × 240 (5:3 – such as the Nintendo 3DS screen), 426 × 240, 428 × 240 (≈16:9 ratio) or 432 × 240 (18:10 aspect ratio). As with WVGA, exact ratios of n:9 are difficult because of the way VGA controllers internally deal with pixels. For instance, when using graphical combinatorial operations on pixels, VGA controllers will use 1 bit per pixel. Since bits cannot be accessed individually but in chunks of 16 or an even higher power of 2, this limits the horizontal resolution to a 16-pixel granularity, i.e., the horizontal resolution must be divisible by 16. In the case of the 16:9 ratio, with 240 pixels high, the horizontal resolution should be 240 / 9 × 16 = 426.7 (426+2⁄3); the closest multiple of 16 is 432.
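The rounding rule described above reduces to a short calculation; the helper name below is illustrative:

```python
def aligned_width(height, ratio_w, ratio_h, granularity=16):
    """Width closest to height * ratio_w / ratio_h, rounded to a multiple of granularity."""
    exact = height * ratio_w / ratio_h
    return round(exact / granularity) * granularity

print(aligned_width(240, 16, 9))  # 432, from the exact value of 426+2/3
print(aligned_width(480, 16, 9))  # 848, matching the 848 x 480 WVGA variant
```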
WQVGA has also been used to describe displays that are not 240 pixels high, for example, Sixteenth HD1080 displays which are 480 pixels wide and 270 or 272 pixels high. This may be due to WQVGA having the nearest screen height.
WQVGA resolutions were commonly used in touchscreen mobile phones, such as 400 × 240, 432 × 240, and 480 × 240. For example, the Hyundai MB 490i, Sony Ericsson Aino and the Samsung Instinct have WQVGA screen resolutions – 240 × 432. Other devices such as the Apple iPod Nano also use a WQVGA screen, 240 × 376 pixels. The Nintendo 3DS line is probably the most famous device to have a WQVGA screen.
=== 480 × 320 (HVGA) ===
HVGA (Half-size VGA) screens have 480 × 320 pixels (3:2 aspect ratio), 480 × 360 pixels (4:3 aspect ratio), 480 × 272 (≈16:9 aspect ratio), or 640 × 240 pixels (8:3 aspect ratio). The 480 × 320 variant is used by a variety of PDA devices, starting with the Sony CLIÉ PEG-NR70 in 2002, and standalone PDAs by Palm, while the 640 × 240 variant was used by a variety of handheld PC devices. VGA resolution is 640 × 480.
Examples of devices that use HVGA include the Apple iPhone (1st generation through 3GS), iPod Touch (1st Generation through 3rd), BlackBerry Bold 9000, HTC Dream, Hero, Wildfire S, LG GW620 Eve, MyTouch 3G Slide, Nokia 6260 Slide, Palm Pre, Samsung M900 Moment, Sony Ericsson Xperia X8, mini, mini pro, active and live and the Sony PlayStation Portable.
Texas Instruments produces the DLP pico projector which supports HVGA resolution.
HVGA was the only resolution supported in the first versions of Google Android, up to release 1.5. Other higher and lower resolutions became available starting on release 1.6, like the popular WVGA resolution on the Motorola Droid or the QVGA resolution on the HTC Tattoo.
Three-dimensional computer graphics common on television throughout the 1980s were mostly rendered at this resolution, causing objects to have jagged edges on the top and bottom when edges were not anti-aliased.
=== 640 × 480 (VGA) ===
Video Graphics Array (VGA) refers specifically to the display hardware first introduced with the IBM PS/2 line of computers in 1987. Through its widespread adoption, VGA has also come to mean either an analog computer display standard, the 15-pin D-subminiature VGA connector, or the 640 × 480 resolution itself. While the VGA resolution was superseded in the personal computer market in the 1990s, it was used by the Sega Dreamcast in 1998 and became a popular resolution on mobile devices in the 2000s. VGA is still the universal fallback troubleshooting mode in the case of trouble with graphics device drivers in operating systems.
In the field of video, the 480i format supports 640 samples per line (i.e. 640 × 480), corresponding to standard definition (SD), in contrast to high-definition (HD) resolutions such as 1280 × 720 and 1920 × 1080.
=== 800 × 480 (WVGA) ===
Wide VGA or WVGA, sometimes just WGA are some display resolutions with the same 480-pixel height as VGA but wider, such as 720 × 480 (3:2 aspect ratio), 800 × 480 (5:3), 848 × 480, 852 × 480, 853 × 480, or 854 × 480 (≈16:9).
It was a common resolution among LCD projectors and later portable and hand-held internet-enabled devices (such as MIDs and netbooks), as it is capable of rendering websites designed for an 800-pixel-wide window at full page width. Examples of hand-held internet devices, without phone capability, with this resolution include: Spice Stellar Nhance Mi-435, ASUS Eee PC 700 series, Dell XCD35, Nokia 770, N800, and N810.
=== 854 × 480 (FWVGA) ===
FWVGA is an abbreviation for Full Wide Video Graphics Array which refers to a display resolution of 854 × 480 pixels. 854 × 480 is approximately the 16:9 aspect ratio of anamorphically "un-squeezed" NTSC DVD widescreen video and is considered a "safe" resolution that does not crop any of the image. It is called Full WVGA to distinguish it from other, narrower WVGA resolutions which require cropping 16:9 aspect ratio high-definition video (i.e. it is full width, albeit with a considerable reduction in size).
The 854 pixel width is rounded up from 853.3:
480 × 16⁄9 = 7680⁄9 = 853+1⁄3.
Since a pixel must be a whole number, rounding up to 854 ensures inclusion of the entire image. 853 × 480 is the 16:9 equivalent for NTSC (480 lines) on a display with square pixels. Plasma and other digital TV sets with this resolution were marketed as enhanced-definition television (EDTV) at the time.
In 2010, mobile phones with FWVGA display resolution started to become more common. (See also: list of mobile phones with FWVGA display.) In addition, the Wii U GamePad for Nintendo's Wii U gaming console includes a 6.2-inch FWVGA display.
=== 800 × 600 (SVGA) ===
Super Video Graphics Array, abbreviated to Super VGA or SVGA, also known as Ultra Video Graphics Array early on, abbreviated to Ultra VGA or UVGA, is a broad term that covers a wide range of computer display standards.
Originally, it was an extension to the VGA standard first released by IBM in 1987. Unlike VGA – a purely IBM-defined standard – Super VGA was defined by the Video Electronics Standards Association (VESA), an open consortium set up to promote interoperability and define standards. When used as a resolution specification, in contrast to VGA or XGA for example, the term SVGA normally refers to a resolution of 800 × 600 pixels.
The marginally higher resolution 832 × 624 is the highest 4:3 resolution not greater than 2¹⁹ (524,288) pixels whose horizontal dimension is a multiple of 32 pixels. This enables it to fit within a framebuffer of 512 KB (512 × 2¹⁰ bytes) at 8 bits per pixel, and the multiple-of-32-pixels constraint is related to memory alignment. For these reasons, this resolution was available on the Macintosh LC III and other systems.
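A brute-force search (illustrative) confirms that claim:

```python
best = None
for width in range(32, 2048, 32):    # horizontal dimension: a multiple of 32 pixels
    height = width * 3 // 4          # 4:3 aspect ratio
    if width * height <= 2 ** 19:    # at most 2^19 = 524,288 pixels (512 KB at 8 bpp)
        best = (width, height)
print(best)                          # (832, 624)
```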
=== 1024 × 576, 1024 × 600 (WSVGA) ===
The wide version of SVGA is known as WSVGA (Wide Super VGA or Wide SVGA), featured on Ultra-Mobile PCs, netbooks, and tablet computers. The resolution is either 1024 × 576 (aspect ratio 16:9) or 1024 × 600 (128:75) with screen sizes normally ranging from 7 to 10 inches. It has full XGA width of 1024 pixels.
Although digital broadcast content in former PAL/SECAM regions has 576 active lines, several mobile TV sets with a DVB-T2 tuner use the 600-line variant with a diagonal of 7, 9 or 10 inches (18 to 26 cm).
1024 × 576 is the 16:9 equivalent for PAL (576 lines) on a display with square pixels, resulting in a pixel aspect ratio of 16∶11 or 64∶45 depending on the native resolution of PAL.
=== 960 × 640 ===
DVGA (Double VGA) screens have 960 × 640 pixels (3:2 aspect ratio). Both dimensions are double those of HVGA, hence the pixel count is quadrupled.
Examples of devices that use DVGA include the Meizu MX mobile phone and Apple's iPhone 4 and 4S and iPod Touch (4th generation), where the screen is called the "Retina Display".
The iPhone 5 introduced a wider, 16:9 variant at 1136 × 640 pixels, which also has no official acronym.
=== 1280 × 960 (QuadVGA) ===
QuadVGA (also labelled as Quad VGA or Quad-VGA) is a non-standard term used to refer to a resolution of 1280 × 960, since both sides are doubled from VGA. However, it is usually not abbreviated QVGA, because that abbreviation is strongly associated with the alternate meaning Quarter VGA (320 × 240).
It is sometimes unofficially called SXGA− to avoid confusion with the SXGA standard (1280 × 1024). Elsewhere, this 4:3 resolution was supposedly also called UVGA (Ultra VGA), or SXVGA (Super eXtended VGA).
== Extended Graphics Array (XGA and derivatives) ==
=== 1024 × 768 (XGA) ===
The Extended Graphics Array (XGA), originally also called Extended Video Graphics Array (Extended-VGA, EVGA), is an IBM display standard introduced in 1990. It later became the most common appellation for the 1024 × 768 display resolution.
The initial version of XGA expanded upon IBM's older VGA by adding support for four new screen modes, including one new resolution:
640 × 480 pixels in direct 16 bits-per-pixel (65,536 color) RGB hi-color and 8 bit/px (256 color) palette-indexed mode.
1024 × 768 pixels with a 16- or 256-color (4 or 8 bit/px) palette, using a low frequency interlaced refresh rate.
XGA-2 added a 24-bit DAC, but this was used only to extend the available master palette in 256-color mode, e.g. to allow true 256-greyscale output. Other improvements included the provision of the previously missing 800 × 600 resolution in up to 65,536 colors, faster screen refresh rates in all modes (including non-interlace, flicker-free output for 1024 × 768), and improved accelerator performance and versatility.
All standard XGA modes have a 4:3 aspect ratio with square pixels, although this does not hold for certain standard VGA and third-party extended modes (640 × 400, 1280 × 1024).
=== WXGA ===
Wide XGA (WXGA) is a set of non-standard resolutions derived from XGA (1024 × 768) by widening it to 1366 × 768 with a widescreen aspect ratio of nearly 16:9 or to 1280 × 800 with an aspect ratio of 16:10. WXGA is commonly used for low-end LCD TVs and LCD computer monitors for widescreen presentation. The exact resolution offered by a device described as "WXGA" can be somewhat variable owing to a proliferation of several closely related timings optimised for different uses and derived from different bases.
In Microsoft Windows specifically, the larger taskbar of Windows 7 occupies an additional 16 pixel lines by default, which may compromise the usability of programs that already demanded a full 1024 × 768 (instead of, e.g., 800 × 600) unless the taskbar is specifically set to use small icons; an "oddball" 784-line resolution would compensate for this, but 1280 × 800 has a simpler aspect ratio and also gives the slight bonus of 16 more usable lines. Also, the Windows Sidebar in Windows Vista and 7 can use the additional 256 or 336 horizontal pixels to display informational "widgets" without compromising the display width of other programs, and Windows 8 is specifically designed around a "two-pane" concept where the full 16:9 or 16:10 screen width is not required. Typically, this consists of a 4:3 main program area (typically 1024 × 768, 1000 × 800 or 1440 × 1080) plus a narrow sidebar running a second program, showing a toolbox for the main program, or a pop-out OS shortcut panel taking up the remainder.
==== 1366 × 768 (WXGA) ====
When referring to televisions and other monitors intended for consumer entertainment use, WXGA is often understood to refer to a resolution of 1366 × 768, with an aspect ratio of very nearly 16:9. The basis for this otherwise odd-seeming resolution is similar to that of other "wide" standards – the line scan (refresh) rate of the well-established XGA standard (1024 × 768 pixels, 4:3 aspect ratio) extended to give square pixels on the increasingly popular 16:9 widescreen display ratio, without having to effect major signalling changes other than a faster pixel clock, or manufacturing changes other than extending panel width by one third. As 768 is not divisible by 9, the aspect ratio is not quite 16:9 – this would require a width of 1365+1⁄3 (≈1365.33) pixels. However, at only 0.05%, the resulting error is insignificant. It is also occasionally referred to as FWXGA (Full Wide XGA), so it can be distinguished from other, narrower WXGA resolutions.
Following the introduction of the European HD ready logo in 2005, 1366 × 768 became, a year later, the most popular resolution for liquid crystal display televisions (versus XGA for plasma flat-panel displays).
By 2013, even this was relegated to only being used in smaller or cheaper displays (e.g. "bedroom" LCD TVs, or low-cost, large-format plasmas), cheaper laptop and mobile tablet computers, and midrange home cinema projectors, having otherwise been overtaken by higher "full HD" resolutions such as 1920 × 1080.
A common variant on this resolution is also 1360 × 768 (unnamed or named FWXGA), which confers several technical benefits, most significantly a reduction in memory requirements from just over to just under 1 MB per 8-bit channel (1366 × 768 needs 1024.5 KB per channel; 1360 × 768 needs 1020 KB; 1 MB is equal to 1024 KB), which simplifies architecture and can significantly reduce the amount–and speed–of VRAM required with only a very minor change in available resolution, as memory chips are usually only available in fixed megabyte capacities. For example, at 32-bit color, a 1360 × 768 framebuffer would require only 4 MB, whilst a 1366 × 768 one may need 5, 6, or even 8 MB depending on the exact display circuitry architecture and available chip capacities. The 6-pixel reduction also means each line's width is divisible by 8 pixels, simplifying numerous routines used in both computer and broadcast/theatrical video processing, which operate on 8-pixel blocks. Historically, many video cards also mandated screen widths divisible by 8 for their lower-color, planar modes to accelerate memory accesses and simplify pixel position calculations (e.g. fetching 4-bit pixels from 32-bit memory is much faster when performed 8 pixels at a time, and calculating exactly where a particular pixel is within a memory block is much easier when lines do not end partway through a memory word), and this convention persisted in low-end hardware even into the early days of widescreen, LCD HDTVs; thus, most 1366-width displays also quietly support display of 1360-width material, with a thin border of unused pixel columns at each side. This narrower mode is even further removed from the 16:9 ideal, but the error is still less than 0.5% (technically, the mode is either 15.94:9.00 or 16.00:9.04) and should be imperceptible.
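The per-channel and whole-framebuffer figures above follow directly from the pixel counts; a small illustrative Python sketch (an editorial example, using the 1 KB = 1024 bytes convention stated above) makes the comparison explicit:

def per_channel_kb(width, height):
    # One byte per pixel per 8-bit channel.
    return width * height / 1024

def framebuffer_mib(width, height, bits_per_pixel=32):
    # Total framebuffer size in MiB at the given color depth.
    return width * height * bits_per_pixel / 8 / 2**20

for width in (1366, 1360):
    print(width, per_channel_kb(width, 768), framebuffer_mib(width, 768))
# 1366: 1024.5 KB per channel, ~4.002 MiB at 32 bpp (just over a 4 MB part)
# 1360: 1020.0 KB per channel, ~3.984 MiB at 32 bpp (fits within 4 MB)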
==== 1280 × 800 (WXGA) ====
When referring to laptop displays or independent displays and projectors intended primarily for use with computers, WXGA is also used to describe a resolution of 1280 × 800 pixels, with an aspect ratio of 16:10. This was once particularly popular for laptop screens, usually with a diagonal screen size of between 12 and 15 inches, as it provided a useful compromise between 4:3 XGA and 16:9 WXGA, with improved resolution in both dimensions vs. the old standard (especially useful in portrait mode, or for displaying two standard pages of text side by side), a perceptibly "wider" appearance and the ability to display 720p HD video "native" with only very thin letterbox borders (usable for on-screen playback controls) and no stretching. Additionally, it required only 1000 KB (just under 1 MB) of memory per 8-bit channel; thus, a typical double-buffered 32-bit color screen could fit within 8 MB, limiting everyday demands on the complexity (and cost, energy use) of integrated graphics chipsets and their shared use of typically sparse system memory (generally allocated to the video system in relatively large blocks), at least when only the internal display was in use (external monitors generally being supported in "extended desktop" mode to at least 1600 × 1200 resolution). 16:10 (or 8:5) is itself a rather "classic" computer aspect ratio, harking back to early 320 × 200 modes (and their derivatives) as seen in the Commodore 64, IBM CGA card and others. However, as of mid-2013, this standard was becoming increasingly rare, crowded out by the more standardized and thus more economical-to-produce 1366 × 768 panels, as its previously beneficial features became less important with improvements to hardware, gradual loss of general backwards software compatibility, and changes in interface layout. As of February 2024, panels with a 1280 × 800 native resolution had been generally relegated to handheld gaming computers: the resolution is used by Valve's Steam Deck, as well as several other handheld gaming computers.
==== Other WXGA ====
Additionally, at least three other resolutions are sometimes labelled as WXGA:
The first variant, 1280 × 768, can be seen as a compromise resolution: a halfway point between the older 1024 × 768 and 1280 × 1024 resolutions, and a stepping stone to 1366 × 768 (being one-quarter wider than 1024, not one-third) and 1280 × 800, that never quite caught on in the same way as either of its arguably derivative successors. Its square-pixel aspect ratio is 15:9 (or 5:3), in contrast to HDTV's 16:9 and 1280 × 800's 16:10. It is also the lowest resolution that might be found in an "Ultrabook" standard laptop, as it satisfies the minimum horizontal and vertical pixel resolutions required to officially qualify for the designation.
Second, the HDTV-standard 1280 × 720 (otherwise commonly described as "720p"), which offers an exact 16:9 aspect ratio with square pixels; naturally, it displays standard 720p HD video material without stretching or letterboxing and 1080i/1080p with a simple 2:3 downscale. This resolution has found some use in tablets and modern, high-pixel-density mobile phones, as well as small-format "netbook" or "ultralight" laptop computers. However, its use is uncommon in larger, mainstream devices as it has an insufficient vertical resolution for the proper use of modern operating systems such as Windows 7 whose UI design assumes a minimum of 768 lines. For certain uses such as word processing, it can even be considered a slight downgrade (reducing the number of simultaneously visible lines of text without granting any significant benefit as even 640 pixels is sufficient horizontal resolution to legibly render a full page width, especially with the addition of subpixel anti-aliasing).
Another notable resolution is 1152 × 768, with a 3:2 aspect ratio.
Likewise, 1344 × 768, with a 7:4 aspect ratio (close to 16:9), is sometimes used.
Some 1440 × 900 resolution displays have also been found labeled as WXGA; however, the "correct" label is WXGA+.
=== 1152 × 864 (XGA+) ===
XGA+ stands for Extended Graphics Array Plus and is a computer display standard, usually understood to refer to the 1152 × 864 resolution with an aspect ratio of 4:3. Until the advent of widescreen LCDs, XGA+ was often used on 17-inch desktop CRT monitors. It is the highest 4:3 resolution not greater than 2²⁰ pixels (≈1.05 megapixels), with its horizontal dimension a multiple of 32 pixels. This enables it to fit closely into a video memory or framebuffer of 1 MB (2²⁰ bytes), assuming the use of one byte per pixel. The common multiple of 32 pixels constraint is related to alignment.
Historically, the resolution also relates to the earlier standard of 1152 × 900 pixels, which was adopted by Sun Microsystems for the Sun-2 workstation in the early 1980s. A decade later, Apple Computer selected the resolution of 1152 × 870 for their 21-inch CRT monitors, intended for use as two-page displays on the Macintosh II computer. These resolutions are even closer to the limit of a 1 MB framebuffer, but their aspect ratios differ slightly from the common 4:3.
XGA+ is the next step after XGA (1024 × 768), although it is not approved by any standard organizations. The next step with an aspect ratio of 4:3 is 1280 × 960 (QuadVGA) or 1400 × 1050 (SXGA+).
=== 1440 × 900 (WXGA+, WSXGA) ===
WXGA+ and WSXGA are terms referring to a computer display resolution of 1440 × 900. Occasionally manufacturers use other terms to refer to this resolution. The Standard Panels Working Group refers to the 1440 × 900 resolution as WXGA (but also applies WXGA to 1280 × 800).
WXGA+ can be considered an enhanced version of WXGA with more pixels. The aspect ratio is 16:10 (widescreen). WXGA+ resolution is common in 19-inch widescreen desktop monitors (a very small number of such monitors use WSXGA+), and is also optional, although less common, in laptop LCDs, in sizes ranging from 12.1 to 17 inches.
==== 1600 × 1024 ====
The name WSXGA is also used to describe a resolution of 1600 × 1024, which has an aspect ratio of 25:16 (5²:4² = 1.5625, between 14:9 and 16:10).
==== 1280 × 854 ====
WXGA+ has also been used to refer to a resolution of 1280 × 854, which has an aspect ratio very close to 3:2 (1.5).
=== 1280 × 1024 (SXGA) ===
Super XGA (SXGA) is a standard monitor resolution of 1280 × 1024 pixels. This display resolution is the "next step" above the XGA resolution that IBM developed in 1990.
The 1280 × 1024 resolution does not have the standard 4:3 aspect ratio, but 5:4 (1.25:1 instead of approximately 1.33:1). A standard 4:3 monitor using this resolution will have rectangular rather than square pixels, meaning that unless the software compensates for this the picture will be distorted, causing circles to appear elliptical.
SXGA is the most common native resolution of 17-inch and 19-inch LCD monitors. An LCD monitor with SXGA native resolution will typically have a physical 5:4 aspect ratio, preserving a 1:1 pixel aspect ratio.
Sony manufactured a 17-inch CRT monitor with a 5:4 aspect ratio designed for this resolution. It was sold under the Apple brand name.
SXGA was also a popular resolution for cell phone cameras, such as the Motorola Razr and most Samsung and LG phones. Although since overtaken by newer UXGA (2.0-megapixel) cameras, 1.3-megapixel SXGA cameras were the most common around 2007.
Any CRT that can run 1280 × 1024 can also run 1280 × 960 (QuadVGA or sometimes SXGA−), which has the standard 4:3 ratio. A flat-panel TFT screen, including one designed for 1280 × 1024, will show stretching distortion when set to display any resolution other than its native one, as the image needs to be interpolated to fit the fixed grid of the display. Some TFT displays do not allow a user to disable this, and will prevent the upper and lower portions of the screen from being used, forcing a "letterbox" format when set to a 4:3 ratio.
The 1280 × 1024 resolution became popular because at 24 bit/px color depth it fit well into 4 megabytes of video RAM. At the time, memory was extremely expensive; 1280 × 1024 at 24-bit color depth required 3.75 MB of video RAM, fitting nicely within the 4 MB VRAM chip sizes which were available at the time:
(1280 × 1024) px × 24 bit/px ÷ 8 bit/byte ÷ 2²⁰ byte/MB = 3.75 MB
=== 1400 × 1050 (SXGA+) ===
SXGA+ stands for Super Extended Graphics Array Plus and is a computer display standard. An SXGA+ display is commonly used on 14-inch or 15-inch laptop LCD screens with a resolution of 1400 × 1050 pixels. An SXGA+ display is used on a few 12-inch laptop screens such as the ThinkPad X60 and X61 (both only as tablets) as well as the Toshiba Portégé M200 and M400, but those are far less common. At 14.1 inches, Dell offered SXGA+ on many of its Latitude C-series laptops, such as the C640, as did IBM beginning with the ThinkPad T21. Sony also used SXGA+ in its Z1 series, but no longer produces them, as widescreen has become more predominant.
In desktop LCDs, SXGA+ is used on some low-end 20-inch monitors, whereas most of the 20-inch LCDs use UXGA (standard screen ratio), or WSXGA+ (widescreen ratio).
A rare resolution of 2800 × 2100, i.e. with double the pixels horizontally and vertically, is known as QSXGA+.
=== 1680 × 1050 (WSXGA+) ===
WSXGA+ stands for Widescreen Super Extended Graphics Array Plus. WSXGA+ displays were commonly used on Widescreen 20-, 21-, and 22-inch LCD monitors from numerous manufacturers (and a very small number of 19-inch widescreen monitors), as well as widescreen 15.4-inch and 17-inch laptop LCD screens like the Thinkpad T61p, the late 17" Apple PowerBook G4 and the unibody Apple 15" MacBook Pro. The resolution is 1680 × 1050 pixels (1,764,000 pixels) with a 16:10 aspect ratio.
WSXGA+ is the widescreen version of SXGA+. The next highest resolution (for widescreen) after it is WUXGA, which is 1920 × 1200 pixels.
=== 1600 × 1200 (UXGA) ===
UXGA (sometimes UGA) is an abbreviation for Ultra Extended Graphics Array referring to a standard monitor resolution of 1600 × 1200 pixels (totaling 1,920,000 pixels), which is exactly four times the default image resolution of SVGA (800 × 600) (totaling 480,000 pixels). Dell Inc. refers to the same resolution of 1,920,000 pixels as UGA. It is generally considered to be the next step above SXGA (1280 × 960 or 1280 × 1024), but some resolutions (such as the unnamed 1366 × 1024 and SXGA+ at 1400 × 1050) fit between the two.
UXGA has been the native resolution of many fullscreen monitors of 15 inches or more, including laptop LCDs such as the ones in the IBM ThinkPad A21p, A30p, A31p, T42p, T43p, T60p, Dell Inspiron 8000/8100/8200 and Latitude/Precision equivalents; some Panasonic Toughbook CF-51 models; and the original Alienware Area 51M gaming laptop. However, in more recent times, UXGA is not used in laptops at all but rather in desktop monitors that have been made in sizes of 20 inches and 21.3 inches. Some 14-inch laptop LCDs with UXGA have also existed (such as the Dell Inspiron 4100), but these are very rare.
There are two different widescreen cousins of UXGA, one called UWXGA with 1600 × 768 (750) and one called WUXGA with 1920 × 1200 resolution.
=== 1920 × 1200 (WUXGA) ===
WUXGA stands for Widescreen Ultra Extended Graphics Array and is a display resolution of 1920 × 1200 pixels (2,304,000 pixels) with a 16:10 screen aspect ratio. It is a wide version of UXGA. Some producers call it FHD+, as it is the next larger resolution vertically after FHD (1920 × 1080). WUXGA/FHD+ can be used for viewing high-definition television (HDTV) content, which uses a 16:9 aspect ratio and a 1280 × 720 (720p) or 1920 × 1080 (1080i or 1080p) resolution.
The 16:10 aspect ratio (as opposed to the 16:9 used in widescreen televisions) was chosen because this aspect ratio is appropriate for displaying two full pages of text side by side.
WUXGA resolution has a total of 2,304,000 pixels. One frame of uncompressed 8 bpc RGB WUXGA is 6,912,000 bytes (6.912 MB, or ≈6.59 MiB). Initially, it was available in widescreen CRTs such as the Sony GDM-FW900 and the Hewlett-Packard A7217A (introduced in 2003), and in 17-inch laptops. Most QXGA displays support 1920 × 1200. WUXGA is also available in some mobile phablet devices such as the Huawei Honor X2 Gem.
The next lower standard resolution (for widescreen) before it is WSXGA+, which is 1680 × 1050 pixels (1,764,000 pixels, or 30.61% fewer than WUXGA); the next higher resolution widescreen is an unnamed 2304 × 1440 resolution (supported by the above GDM-FW900 and A7217A) and then the more common WQXGA, which has 2560 × 1600 pixels (4,096,000 pixels, or 77.78% more than WUXGA).
=== 2048 × 1152 (QWXGA) ===
QWXGA (for Quad-WXGA or Quad Wide Extended Graphics Array) is a display resolution of 2048 × 1152 pixels with a 16:9 aspect ratio.
Taking WXGA as a starting point, at either 1366 × 768 or 1280 × 800, a display with four times as many pixels would have 2732 × 1536 or 2560 × 1600 pixels; the first is non-existent and the latter is named WQXGA. Conversely, a quarter of QWXGA (2048 × 1152) would have 1024 × 576 pixels, but this is named WSVGA.
A few QWXGA LCD monitors were available in 2009 with 23- and 27-inch displays, such as the Acer B233HU (23-inch) and B273HU (27-inch), the Dell SP2309W, and the Samsung 2343BWX. As of 2011, most 2048 × 1152 monitors have been discontinued, and as of 2013, no major manufacturer produces monitors with this resolution.
=== 2048 × 1536 (QXGA) ===
QXGA (for Quad-XGA or Quad Extended Graphics Array) is a display resolution of 2048 × 1536 pixels with a 4:3 aspect ratio as XGA. The name comes from it having four times as many pixels as an XGA display of 1024 × 768.
Examples of LCDs with this resolution are the IBM T210 and the Eizo G33 and R31 screens, but in CRT monitors this resolution is much more common; some examples include the Sony F520, ViewSonic G225fB, NEC FP2141SB or Mitsubishi DP2070SB, Iiyama Vision Master Pro 514, and Dell and HP P1230. Of these monitors, none are still in production.
A related display size is WQXGA, which is a widescreen version.
IDTech manufactured a 15-inch QXGA IPS panel, used in the IBM ThinkPad R50p. NEC sold laptops with QXGA screens in 2002–05 for the Japanese market.
The iPad (3rd through 6th generation) and iPad Mini 2 also have a QXGA display.
=== 2560 × 1600 (WQXGA) ===
WQXGA (Wide Quad Extended Graphics Array) is a display resolution of 2560 × 1600 pixels with a 16:10 aspect ratio. The name suggests a widened QXGA (2048 × 1536), but the resolution is not derived that way. Instead, WQXGA has exactly four times as many pixels as WXGA (1280 × 800), so the name "Quad-WXGA" would fit; however, QWXGA is already defined as 2048 × 1152 pixels.
Some producers call it QHD+, referring to QHD (2560 × 1440), although QHD+ is sometimes also used for the resolution 3200 × 1800.
To obtain a vertical refresh rate higher than 40 Hz with DVI, this resolution requires dual-link DVI cables and devices. To avoid cable problems, monitors are sometimes shipped with an appropriate dual-link cable already plugged in. Many video cards support this resolution. One feature that was unique to the 30-inch WQXGA monitors is the ability to function as the centerpiece and main display of a three-monitor array of complementary aspect ratios, with two UXGA (1600 × 1200) 20-inch monitors turned vertically on either side. The resolutions are equal, and the size of the 1600-pixel edges is within a tenth of an inch (16 inches vs. 15.9 inches), presenting a "picture window view" without the extreme lateral dimensions, small central panel, asymmetry, resolution differences, or dimensional difference of other three-monitor combinations. The resulting 4960 × 1600 composite image has a 3.1:1 aspect ratio. This also means one UXGA 20-inch monitor in portrait orientation can be flanked by two 30-inch WQXGA monitors for a 6320 × 1600 composite image with an 11.85:3 (79:20, 3.95:1) aspect ratio.
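The dual-link requirement follows from the pixel clock: single-link DVI is capped at a 165 MHz pixel clock. A rough, hedged Python estimate, assuming reduced-blanking totals of about 2720 × 1646 for a 2560 × 1600 mode (exact blanking figures vary by timing standard):

SINGLE_LINK_LIMIT_HZ = 165e6   # single-link DVI pixel clock ceiling
H_TOTAL, V_TOTAL = 2720, 1646  # assumed totals including blanking intervals

refresh_hz = SINGLE_LINK_LIMIT_HZ / (H_TOTAL * V_TOTAL)
print(round(refresh_hz, 1))    # ~36.9 Hz: below 40 Hz, hence the dual-link requirement

A second DVI link doubles the available pixel rate, comfortably allowing 60 Hz.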
An early consumer WQXGA monitor was the 30-inch Apple Cinema Display, unveiled by Apple in June 2004. At the time, dual-link DVI was uncommon on consumer hardware, so Apple partnered with Nvidia to develop a special graphics card that had two dual-link DVI ports, allowing simultaneous use of two 30-inch Apple Cinema Displays. The nature of this graphics card, being an add-in AGP card, meant that the monitors could only be used in a desktop computer, like the Power Mac G5, that could have the add-in card installed, and could not be immediately used with laptop computers that lacked this expansion capability.
In March 2009, Apple updated several Macintosh computers with a Mini DisplayPort adapter, such as the Mac mini and iMac. These allow an external connection to a 2560 × 1600 display.
In 2010, WQXGA made its debut in a handful of home theater projectors targeted at the Constant Height Screen application market. Both Digital Projection Inc and projectiondesign released models based on a Texas Instruments DLP chip with a native WQXGA resolution, alleviating the need for an anamorphic lens to achieve 1:2.35 image projection. Many manufacturers have 27–30-inch models that are capable of WQXGA, albeit at a much higher price than lower resolution monitors of the same size. Several mainstream WQXGA monitors are or were available with 30-inch displays, such as the Dell 3007WFP-HC, 3008WFP, U3011, U3014, UP3017, the Hewlett-Packard LP3065, the Gateway XHD3000, LG W3000H, and the Samsung 305T. Specialist manufacturers like NEC, Eizo, Planar Systems, Barco (LC-3001), and possibly others offer similar models. As of 2016, LG Display make a 10-bit 30-inch AH-IPS panel, with wide color gamut, used in monitors from Dell, NEC, HP, Lenovo and Iiyama.
Released in November 2012, Google's Nexus 10 is the first consumer tablet to feature WQXGA resolution. Before its release, the highest resolution available on a tablet was QXGA (2048 × 1536), available on the Apple iPad 3rd and 4th generations devices. Several Samsung Galaxy tablets, including the Note 10.1 (2014 Edition), Tab S 8.4, 10.5 and TabPRO 8.4, 10.1 and Note Pro 12.2, as well as the Gigaset QV1030, also feature a WQXGA resolution display.
In 2012, Apple released the 13-inch MacBook Pro with Retina display, which features a WQXGA display; the redesigned MacBook Air followed in 2018.
The LG Gram 17 introduced in 2019 uses a 17-inch WQXGA display.
=== 2560 × 2048 (QSXGA) ===
QSXGA (Quad Super Extended Graphics Array) is a display resolution of 2560 × 2048 pixels with a 5:4 aspect ratio. Grayscale monitors with a 2560 × 2048 resolution, primarily for medical use, are available from Planar Systems (Dome E5), Eizo (Radiforce G51), Barco (Nio 5, MP), WIDE (IF2105MP), IDTech (IAQS80F), and possibly others.
Recent medical displays such as the Barco Coronis Fusion 10MP or NDS Dome S10 have a native panel resolution of 4096 × 2560. These are driven by two dual-link DVI or DisplayPort outputs and can be considered to be two seamless virtual QSXGA displays, as both connections must drive the panel simultaneously; a single dual-link DVI or DisplayPort connection cannot display 10 megapixels on its own. A similar resolution of 2560 × 1920 (4:3) was supported by a small number of CRT displays via VGA, such as the ViewSonic P225f, when paired with the right graphics card.
=== 2880 × 1800 (WQXGA+) ===
Doubling the width and height of WXGA+ (1440 × 900) for a higher pixel density yields WQXGA+ (2880 × 1800).
=== 3200 × 2048 (WQSXGA) ===
WQSXGA (Wide Quad Super Extended Graphics Array) describes a display standard that can support a resolution up to 3200 × 2048 pixels, assuming a 25:16 (1.5625:1) aspect ratio. The Coronis Fusion 6MP DL by Barco supports a slightly wider 3280 × 2048 (approximately 16:10).
=== 3200 × 2400 (QUXGA) ===
QUXGA (Quad Ultra Extended Graphics Array) describes a display standard that can support a resolution up to 3200 × 2400 pixels, assuming a 4:3 aspect ratio.
=== 3840 × 2400 (WQUXGA) ===
WQUXGA (Wide Quad Ultra Extended Graphics Array) describes a display standard that supports a resolution of 3840 × 2400 pixels, which provides a 16:10 aspect ratio. This resolution is exactly four times 1920 × 1200 pixels (WUXGA).
Some manufacturers refer to this resolution as UHD+ because it has some additional lines compared to UHD (3840 × 2160).
Most display cards with a DVI connector are capable of supporting the 3840 × 2400 resolution. However, the maximum refresh rate will be limited by the number of DVI links connected to the monitor. 1, 2, or 4 DVI connectors are used to drive the monitor using various tile configurations. Only the IBM T221-DG5 and IDTech MD22292B5 support the use of dual-link DVI ports through an external converter box. Many systems using these monitors use at least two DVI connectors to send video to the monitor. These DVI connectors can be from the same graphics card, different graphics cards, or even different computers. Motion across the tile boundary(ies) can show tearing if the DVI links are not synchronized. The display panel can be updated at a speed between 0 Hz and 41 Hz (48 Hz for the IBM T221-DG5, -DGP, and IDTech MD22292B5). The refresh rate of the video signal can be higher than 41 Hz (or 48 Hz) but the monitor will not update the display any faster even if graphics card(s) do so.
In June 2001, WQUXGA was introduced in the IBM T220 LCD monitor using a LCD panel built by IDTech. LCD displays that support WQUXGA resolution include: IBM T220, IBM T221, Iiyama AQU5611DTBK, ViewSonic VP2290, ADTX MD22292B, and IDTech MD22292 (models B0, B1, B2, B5, C0, C2). IDTech was the original equipment manufacturer which sold these monitors to ADTX, IBM, Iiyama, and ViewSonic. However, none of the WQUXGA monitors (IBM, ViewSonic, Iiyama, ADTX) are in production anymore: they had prices that were well above even the higher end displays used by graphic professionals, and the lower refresh rates, 41 Hz and 48 Hz, made them less attractive for many applications.
== Unsystematic resolutions ==
Some hardware devices, smartphones in particular, use non-standard resolutions for their displays. Still, their aspect ratio or one of the dimensions is often derived from one of the standards. Many of them have curved edges, rounded corners, notches, or islands for sensors, which may make some pixels invisible or unused.
After having used VGA-based 3∶2 resolutions HVGA (480 × 320) and "Retina" DVGA (960 × 640) for several years in their iPhone and iPod products with a screen diagonal of 9 cm or 3.5 inches, Apple started using more exotic variants when they adopted the 16∶9 aspect ratio to provide a consistent pixel density across screen sizes: first 1136 × 640 with the iPhone 5(c/s) and SE 1st for 10 cm or 4 inch screens, and later the 1-megapixel resolution of 1334 × 750 with the iPhone 6(s)/7/8 and SE 2nd/3rd for 12 cm or 4.7 inch screens, while devices with 14 cm or 5.5 inch screens used standard 1920 × 1080 with the iPhone 6(s)/7/8 Plus.
Keeping the pixel density of previous models, the iPhone X(s) and 11 Pro introduced a 2436 × 1125 resolution for 15 cm or 5.8 inch screens, while the iPhone XS Max and 11 Pro Max introduced a 2688 × 1242 resolution for 17 cm or 6.5 inch screens (with a notch) all at an aspect ratio of roughly 13∶6 or, for marketing, 19.5∶9.
Subsequent Apple smartphones and phablets stayed with that aspect ratio but increased screen size slightly with approximately constant pixel density. The resulting resolutions have longer sides divisible by 6, while the shorter sides are hardly round numbers:
1792 × 828 (iPhone 11, Xr),
2532 × 1170 (12/13 (Pro), 14),
2556 × 1179 (14(Pro), 15 Pro),
2778 × 1284 (12/13 Pro Max, 14 Plus),
2796 × 1290 (14/15 Pro Max, 15 Plus).
The only Apple smartphone models that shared an ultra-wide 19½∶9 resolution with Android phones were the iPhone 12/13 Mini with 2340 × 1080.
Other manufacturers have also introduced phones with irregular display resolutions and aspect ratios, such as Samsung's various "Infinity" displays with 37∶18 = 18½∶9 aspect ratios (Galaxy S8/S9 and A8/A9) at resolutions of 2960 × 1440 and 2220 × 1080.
2160 × 1080 is a resolution used by many smartphones since 2018. It has an aspect ratio of 18:9, matching that of the Univisium film format.
Other phones feature a 19∶9 aspect ratio with resolutions like 3040 × 1440 (e.g. S10) and 2280 × 1080 (S10e).
Even wider resolutions with the same aspect ratio of 19½∶9 as iPhones are 3120 × 1440 (e.g. S24+) or 2340 × 1080 (Poco M3).
Some phones have an aspect ratio of ca. 20∶9 at resolutions like 2400 × 1080 (e.g. S10 Lite) and 3200 × 1440 (e.g. S20).
Phones with foldable displays, e.g. Samsung Galaxy Z series, usually have non-systematic resolutions and aspect ratios, which are either roughly square when folded along the longer edge (Fold) or extremely tall when folded along the smaller edge (Flip).
Some air traffic control monitors use displays with a resolution of 2048 × 2048, with an aspect ratio of 1:1, and similar consumer monitors at resolution of 1920 × 1920 are also available aimed primarily at productivity tasks.
== See also ==
Dot pitch
List of common display resolutions
Pixel density
Ultrawide formats for history and comparison of video formats and displays, which are growing wider
== References == | Wikipedia/Graphics_display_resolution |
A physics processing unit (PPU) is a dedicated microprocessor designed to handle the calculations of physics, especially in the physics engine of video games. It is an example of hardware acceleration.
Examples of calculations involving a PPU might include rigid body dynamics, soft body dynamics, collision detection, fluid dynamics, hair and clothing simulation, finite element analysis, and fracturing of objects.
The idea is that specialized processors offload time-consuming tasks from a computer's CPU, much like how a GPU performs graphics operations in the main CPU's place. The term was coined by Ageia to describe its PhysX chip. Several other technologies in the CPU-GPU spectrum have some features in common with it, although Ageia's product was the only complete one designed, marketed, supported, and placed within a system exclusively as a PPU.
== History ==
An early academic PPU research project named SPARTA (Simulation of Physics on A Real-Time Architecture) was carried out at Penn State and University of Georgia. This was a simple FPGA based PPU that was limited to two dimensions. This project was extended into a considerably more advanced ASIC-based system named HELLAS.
February 2006 saw the release of the first dedicated PPU PhysX from Ageia (later merged into Nvidia). The unit is most effective in accelerating particle systems, with only a small performance improvement measured for rigid body physics. The Ageia PPU is documented in depth in their US patent application #20050075849. Nvidia/Ageia no longer produces PPUs and hardware acceleration for physics processing, although it is now supported through some of their graphics processing units.
== AGEIA PhysX ==
The first processor advertised as a PPU was the PhysX chip, introduced by a fabless semiconductor company called AGEIA. Games wishing to take advantage of the PhysX PPU must use AGEIA's PhysX SDK (formerly known as the NovodeX SDK).
It consists of a general purpose RISC core controlling an array of custom SIMD floating point VLIW processors working in local banked memories, with a switch-fabric to manage transfers between them. There is no cache-hierarchy like in a CPU or GPU.
The PhysX was available from three companies akin to the way video cards are manufactured. ASUS, BFG Technologies, and ELSA Technologies were the primary manufacturers. PCs with the cards already installed were available from system builders such as Alienware, Dell, and Falcon Northwest.
In February 2008, after Nvidia bought Ageia Technologies and eventually cut off the ability to process PhysX on the AGEIA PPU and on Nvidia GPUs in systems with active ATI/AMD GPUs, it seemed that PhysX had gone entirely to Nvidia. But in March 2008, Nvidia announced that it would make PhysX an open standard for everyone, so that the main graphics-processor manufacturers would have PhysX support in their next-generation graphics cards. Nvidia also announced that PhysX would be available for some of its already-released graphics cards simply by downloading new drivers.
See physics engine for a discussion of academic research PPU projects.
=== PhysX P1 (PPU) hardware specifications ===
ASUS and BFG Technologies bought licenses to manufacture alternate versions of AGEIA's PPU, the PhysX P1 with 128 MB GDDR3:
Multi-core device based on the MIPS architecture with integrated physics acceleration hardware and memory subsystem with "tons of cores"
125 million transistors
182 mm² die size
Fabrication process: 130 nm
Peak power consumption: 30 W
Memory: 128 MB GDDR3 RAM with 128-bit interface
32-bit PCI 3.0 (ASUS also made a PCI Express version card)
Sphere collision tests: 530 million per second (maximum capability)
Convex collision tests: 530,000 per second (maximum capability)
Peak instruction bandwidth: 20 billion per second
== Havok FX ==
The Havok SDK is a major competitor to the PhysX SDK, used in more than 150 games, including major titles like Half-Life 2, Halo 3 and Dead Rising.
To compete with the PhysX PPU, an edition known as Havok FX was to take advantage of multi-GPU technology from ATI (AMD CrossFire) and NVIDIA (SLI) using existing cards to accelerate certain physics calculations.
Havok divides the physics simulation into effect and gameplay physics, with effect physics being offloaded (if possible) to the GPU as Shader Model 3.0 instructions and gameplay physics being processed on the CPU as normal. The important distinction between the two is that effect physics do not affect gameplay (dust or small debris from an explosion, for example); the vast majority of physics operations are still performed in software. This approach differs significantly from the PhysX SDK, which moves all calculations to the PhysX card if it is present.
Since Havok's acquisition by Intel, Havok FX appears to have been shelved or cancelled.
== PPU vs. GPUs ==
The drive toward GPGPU has made GPUs more suitable for the job of a PPU; DX10 added integer data types, a unified shader architecture, and a geometry shader stage, which allow a broader range of algorithms to be implemented. Modern GPUs support compute shaders, which run across an indexed space and do not require any graphical resources, just general-purpose data buffers. Nvidia CUDA provides a little more in the way of inter-thread communication and scratchpad-style workspace associated with the threads.
Nonetheless, GPUs are built around a larger number of longer-latency, slower threads, are designed around texture and framebuffer data paths, and have poor branching performance; this distinguishes them from PPUs and the Cell processor as being less well optimized for taking over game world simulation tasks.
The Codeplay Sieve compiler supported the PPU, indicating that the Ageia PhysX chip would have been suitable for GPGPU-type tasks. However, Ageia seemed unlikely to pursue this market.
== PS2 – VU0 ==
Although very different from the PhysX, one could argue the PlayStation 2's VU0 is an early, limited implementation of a PPU. Conversely, one could describe a PPU to a PS2 programmer as an evolved replacement for VU0. Its feature-set and placement within the system is geared toward accelerating game update tasks including physics and AI; it can offload such calculations working off its own instruction stream whilst the CPU is operating on something else. Being a DSP however, it is much more dependent on the CPU to do useful work in a game engine, and would not be capable of implementing a full physics API, so it cannot be classed as a PPU. Also VU0 is capable of providing additional vertex processing power, though this is more a property of the pathways in the system rather than the unit itself.
This usage is similar to Havok FX or GPU physics in that an auxiliary unit's general purpose floating point power is used to complement the CPU in either graphics or physics roles.
== See also ==
Adapteva
Digital signal processor
General-purpose computing on graphics processing units (GPGPU) – for applications of existing GPUs to the same physics problems PPUs are designed for
Microsoft Robotics Studio
OpenCL
Physics Abstraction Layer
Scratchpad RAM – relevant to the distributed memory architecture of the Ageia PhysX PPU
Vision processing unit
== References ==
== External links ==
AGEIA Official Website (no longer available)
AGEIA Physx Processor Website (no longer available)
Projects using PhysX SDK (no longer available)
BFG AGEIA PhysX Card Review
Planet PhysX News & Information Page (no longer available)
PC Hardware: AGEIA PhysX Interview (no longer available)
PC Perspective: AGEIA PhysX Physics Processing Unit Preview (no longer available)
Havok FX physics engine (middleware library) SDK (no longer available)
NVIDIA CUDA Toolkit and SDK
PhysX Toolkit and SDK | Wikipedia/Physics_processing_unit |
Accelerated Graphics Port (AGP) is a parallel expansion card standard, designed for attaching a video card to a computer system to assist in the acceleration of 3D computer graphics. It was originally designed as a successor to PCI-type connections for video cards. Since 2004, AGP was progressively phased out in favor of PCI Express (PCIe), which is serial, as opposed to parallel; by mid-2008, PCI Express cards dominated the market and only a few AGP models were available, with GPU manufacturers and add-in board partners eventually dropping support for the interface in favor of PCI Express.
== Advantages over PCI ==
AGP is a superset of the PCI standard, designed to overcome PCI's limitations in serving the requirements of the era's high-performance graphics cards.
The primary advantage of AGP is that it does not share the PCI bus, providing a dedicated, point-to-point pathway between the expansion slot(s) and the motherboard chipset. The direct connection also allows for higher clock speeds.
The second major change is the use of split transactions, wherein the address and data phases are separated. The card may send many address phases, so the host can process them in order, avoiding any long delays caused by the bus being idle during read operations.
Third, PCI bus handshaking is simplified. Unlike PCI bus transactions, whose length is negotiated on a cycle-by-cycle basis using the FRAME# and STOP# signals, AGP transfers are always a multiple of 8 bytes long, with the total length included in the request. Further, rather than using the IRDY# and TRDY# signals for each word, data is transferred in blocks of 4 clock cycles (32 words at AGP 8× speed), and pauses are allowed only between blocks.
Finally, AGP allows (mandatory only in AGP 3.0) sideband addressing, meaning that the address and data buses are separated, so the address phase does not use the main address/data (AD) lines at all. This is done by adding an extra 8-bit "SideBand Address" bus, over which the graphics controller can issue new AGP requests while other AGP data is flowing over the main 32 address/data (AD) lines. This results in improved overall AGP data throughput.
This great improvement in memory read performance makes it practical for an AGP card to read textures directly from system RAM, while a PCI graphics card must copy it from system RAM to the card's video memory. System memory is made available using the graphics address remapping table (GART), which apportions main memory as needed for texture storage. The maximum amount of system memory available to AGP is defined as the AGP aperture.
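As a conceptual illustration of how a GART remaps addresses, here is a minimal Python sketch (an editorial example with made-up page addresses; real GART entries and page-table formats are chipset-specific):

PAGE_SIZE = 4096  # typical page granularity

# Hypothetical GART: aperture page index -> physical address of a system RAM page.
gart = {0: 0x1F3000, 1: 0x087000, 2: 0x2B4000}  # scattered, non-contiguous pages

def aperture_to_physical(aperture_offset):
    # Translate an offset inside the contiguous AGP aperture
    # to the scattered physical page backing it.
    page_index, offset = divmod(aperture_offset, PAGE_SIZE)
    return gart[page_index] + offset

print(hex(aperture_to_physical(0x1234)))  # 0x87234: aperture page 1, offset 0x234

The card thus sees one contiguous texture region while the chipset scatters the accesses across ordinary system RAM pages.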
== History ==
The AGP slot first appeared on x86-compatible system boards based on Socket 7 Intel P5 Pentium and Slot 1 P6 Pentium II processors. Intel introduced AGP support with the i440LX Slot 1 chipset on August 26, 1997, and a flood of products followed from all the major system board vendors.
The first Socket 7 chipsets to support AGP were the VIA Apollo VP3, SiS 5591/5592, and the ALI Aladdin V. Intel never released an AGP-equipped Socket 7 chipset. FIC demonstrated the first Socket 7 AGP system board in November 1997 as the FIC PA-2012 based on the VIA Apollo VP3 chipset, followed very quickly by the EPoX P55-VP3 also based on the VIA VP3 chipset which was first to market.
Early video chipsets featuring AGP support included the Rendition Vérité V2200, 3dfx Voodoo Banshee, Nvidia RIVA 128, 3Dlabs PERMEDIA 2, Intel i740, ATI Rage series, Matrox Millennium II, and S3 ViRGE GX/2. Some early AGP boards used graphics processors built around PCI and were simply bridged to AGP. This resulted in the cards benefiting little from the new bus, with the only improvement used being the 66 MHz bus clock, with its resulting doubled bandwidth over PCI, and bus exclusivity. Intel's i740 was explicitly designed to exploit the new AGP feature set; in fact it was designed to texture only from AGP memory, making PCI versions of the board difficult to implement (local board RAM had to emulate AGP memory), though this was eventually accomplished much later in the form of AGP-to-PCI bridges.
Microsoft first introduced AGP support into Windows via the USB Supplement patch for OSR2 of Windows 95 in 1997, also known as OSR2.1. The first Windows NT-based operating system to receive AGP support was Windows NT 4.0 with Service Pack 3, also in 1997. Linux support for AGP-enhanced fast data transfers was first added in 1999 with the implementation of the AGPgart kernel module.
=== Later use ===
With the increasing adoption of PCIe, graphics card manufacturers continued to produce AGP cards even as the standard became obsolete. As GPUs began to be designed to connect to PCIe, an additional PCIe-to-AGP bridge chip was required to create an AGP-compatible graphics card. The inclusion of a bridge, and the need for a separate AGP card design, incurred additional board costs.
The GeForce 6600 and ATI Radeon X800 XL, released during 2004–2005, were the first bridged cards. In 2009 AGP cards from Nvidia had a ceiling of the GeForce 7 series. In 2011 DirectX 10-capable AGP cards from AMD vendors (Club 3D, HIS, Sapphire, Jaton, Visiontek, Diamond, etc.) included the Radeon HD 2400, 3450, 3650, 3850, 4350, 4650, and 4670. The HD 5000 AGP series mentioned in the AMD Catalyst software was never available. There were many problems with the AMD Catalyst 11.2 - 11.6 AGP hotfix drivers under Windows 7 with the HD 4000 series AGP video cards; use of 10.12 or 11.1 AGP hotfix drivers is a possible workaround. Several of the vendors listed above make available past versions of the AGP drivers.
By 2010, no new motherboard chipsets supported AGP and few new motherboards had AGP slots; however, some continued to be produced with older AGP-supporting chipsets.
In 2016, Windows 10 version 1607 dropped support for AGP. Possible future removal of support for AGP from open-source Linux kernel drivers was considered in 2020.
== Versions ==
Intel released "AGP specification 1.0" in 1997. It specified 3.3 V signals and 1× and 2× speeds. Specification 2.0 documented 1.5 V signaling, which could be used at 1×, 2× and the additional 4× speed and 3.0 added 0.8 V signaling, which could be operated at 4× and 8× speeds. (1× and 2× speeds are physically possible, but were not specified.)
Each speed grade doubles the peak transfer rate over the 32-bit, 66 MHz bus: approximately 266 MB/s at 1×, 533 MB/s at 2×, 1066 MB/s at 4×, and 2133 MB/s at 8×.
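These rates follow directly from the bus parameters; a short Python sketch (illustrative arithmetic only):

BASE_CLOCK_HZ = 66.67e6  # AGP base clock, shared with 66 MHz PCI
BUS_WIDTH_BYTES = 4      # 32-bit AD bus

for multiplier in (1, 2, 4, 8):
    # Data transfers per clock scale with the speed multiplier.
    mb_per_s = BASE_CLOCK_HZ * BUS_WIDTH_BYTES * multiplier / 1e6
    print(f"AGP {multiplier}x: ~{mb_per_s:.0f} MB/s")
# AGP 1x: ~267 MB/s ... AGP 8x: ~2133 MB/s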
AGP version 3.5 is only publicly mentioned by Microsoft under Universal Accelerated Graphics Port (UAGP), which specifies mandatory supports of extra registers once marked optional under AGP 3.0. Upgraded registers include PCISTS, CAPPTR, NCAPID, AGPSTAT, AGPCMD, NISTAT, NICMD. New required registers include APBASELO, APBASEHI, AGPCTRL, APSIZE, NEPG, GARTLO, GARTHI.
There are various physical interfaces (connectors); see the Compatibility section.
=== Official extensions ===
==== AGP Pro ====
An official extension for cards that required more electrical power, with a longer slot with additional pins for that purpose. AGP Pro cards were usually workstation-class cards used to accelerate professional computer-aided design applications employed in the fields of architecture, machining, engineering, simulations, and similar fields.
==== 64-bit AGP ====
A 64-bit channel was once proposed as an optional standard for AGP 3.0 in draft documents, but it was dropped in the final version of the standard.
As proposed, it would have allowed 64-bit transfers for AGP 8× reads, writes, and fast writes, and 32-bit transfers for PCI operations.
=== Unofficial variations ===
A number of non-standard variations of the AGP interface have been produced by manufacturers.
==== Internal AGP interface ====
Ultra-AGP, Ultra-AGPII
It is an internal AGP interface standard used by SiS for north bridge controllers with integrated graphics. The original version supports the same bandwidth as AGP 8×, while Ultra-AGPII has a maximum bandwidth of 3.2 GB/s.
==== PCI-based AGP ports ====
AGP Express
Not a true AGP interface, but allows an AGP card to be connected over the legacy PCI bus on a PCI Express motherboard. It is a technology used on motherboards made by ECS, intended to allow an existing AGP card to be used in a new motherboard instead of requiring a PCIe card to be obtained (since the introduction of PCIe graphics cards few motherboards provide AGP slots). An "AGP Express" slot is basically a PCI slot (with twice the electrical power) with an AGP connector. It offers backward compatibility with AGP cards, but provides incomplete support (some AGP cards do not work with AGP Express) and reduced performance—the card is forced to use the shared PCI bus at its lower bandwidth, rather than having exclusive use of the faster AGP.
AGI
The ASRock Graphics Interface (AGI) is a proprietary variant of the Accelerated Graphics Port (AGP) standard. Its purpose is to provide AGP-support for ASRock motherboards that use chipsets lacking native AGP support. However, it is not fully compatible with AGP, and several video card chipsets are known not to be supported.
AGX
The EPoX Advanced Graphics eXtended (AGX) is another proprietary AGP variant with the same advantages and disadvantages as AGI. User manuals recommend not using AGP 8× ATI cards with AGX slots.
XGP
The Biostar Xtreme Graphics Port is another AGP variant, also with the same advantages and disadvantages as AGI and AGX.
==== PCIe based AGP ports ====
AGR
The Advanced Graphics Riser is a variation of the AGP port used in some PCIe motherboards made by MSI to offer limited backward compatibility with AGP. It is, effectively, a modified PCIe slot allowing for performance comparable to an AGP 4×/8× slot, but does not support all AGP cards; the manufacturer published a list of some cards and chipsets that work with the modified slot.
== Compatibility ==
AGP cards are backward and forward compatible within limits. 1.5 V-only keyed cards will not go into 3.3 V slots and vice versa, though "Universal" cards exist which will fit into either type of slot. There are also unkeyed "Universal" slots that will accept either type of card. When an AGP Universal card is plugged-into an AGP Universal slot, only the 1.5 V portion of the card is used. Some cards, like Nvidia's GeForce 6 series (except the 6200) or ATI's Radeon X800 series, only have keys for 1.5 V to prevent them from being installed in older mainboards without 1.5 V support. Some of the last modern cards with 3.3 V support were:
the Nvidia GeForce FX series (FX 5200, FX 5500, FX 5700, some FX 5800, FX 5900 and some FX 5950),
certain Nvidia GeForce 6 series and 7 series cards (some 6600, 6800, 7300, 7600, 7800, 7900 and 7950 cards, rather uncommon compared to their 1.5 V-only AGP versions; the GeForce 6200 is the only exception, as it was the most common card with 3.3 V support),
the ATI Radeon 9000 series (Radeon 9500/9700/9800 (R300/R350), but not 9600/9800 (R360/RV360)).
Some cards incorrectly have dual notches, and some motherboards incorrectly have fully open slots, allowing a card to be plugged into a slot that does not support the correct signaling voltage, which may damage card or motherboard. Some incorrectly designed older 3.3 V cards have the 1.5 V key.
AGP Pro cards will not fit into standard slots, but standard AGP cards will work in a Pro slot. Motherboards equipped with a Universal AGP Pro slot will accept a 1.5 V or 3.3 V card in either the AGP Pro or standard AGP configuration, a Universal AGP card, or a Universal AGP Pro card.
There are some proprietary systems incompatible with standard AGP; for example, Apple Power Macintosh computers with the Apple Display Connector (ADC) have an extra connector which delivers power to the attached display. Some cards designed to work with a specific CPU architecture (e.g., PC, Apple) may not work with others due to firmware issues.
Mark Allen of Playtools.com has made the following comments regarding practical AGP compatibility for AGP 3.0 and AGP 2.0: ... nobody makes AGP 3.0 cards, and nobody makes AGP 3.0 motherboards. At least not any manufacturers I can find. Every single video card I could find which claimed to be an AGP 3.0 card was actually a universal 1.5V AGP 3.0 card. And every motherboard which claimed to be an AGP 3.0 motherboard turned out to be a universal 1.5V AGP 3.0 motherboard. It makes sense, if you think about it, because if anyone actually shipped a consumer-oriented product which supported only 0.8 volts, they would end up with lots of confused customers and a support nightmare. In the consumer market, you'd have to be crazy to ship a 0.8 volt only product.
== Power consumption ==
Actual power supplied by an AGP slot depends upon the card used. The maximum current drawn from the various rails is given in the specifications for the various versions. For example, if maximum current is drawn from all supplies and all voltages are at their specified upper limits, an AGP 3.0 slot can supply up to 48.25 watts; this figure can be used to specify a power supply conservatively, but in practice a card is unlikely ever to draw more than 40 W from the slot, with many using less. AGP Pro provides additional power up to 110 W. Many AGP cards had additional power connectors to supply them with more power than the slot could provide.
== Protocol ==
An AGP bus is a superset of a 66 MHz conventional PCI bus and, immediately after reset, follows the same protocol. The card must act as a PCI target, and optionally may act as a PCI master. (AGP 2.0 added a "fast writes" extension which allows PCI writes from the motherboard to the card to transfer data at higher speed.)
After the card is initialized using PCI transactions, AGP transactions are permitted. For these, the card is always the AGP master and the motherboard is always the AGP target. The card queues multiple requests which correspond to the PCI address phase, and the motherboard schedules the corresponding data phases later. An important part of initialization is telling the card the maximum number of outstanding AGP requests which may be queued at a given time.
AGP requests are similar to PCI memory read and write requests, but use a different encoding on command lines C/BE[3:0] and are always 8-byte aligned; their starting address and length are always multiples of 8 bytes (64 bits). The three low-order bits of the address are used instead to communicate the length of the request.
Whenever the PCI GNT# signal is asserted, granting the bus to the card, three additional status bits ST[2:0] indicate the type of transfer to be performed next. If the bits are 0xx, a previously queued AGP transaction's data is to be transferred; if the three bits are 111, the card may begin a PCI transaction or (if sideband addressing is not in use) queue a request in-band using PIPE#.
=== AGP command codes ===
Like PCI, each AGP transaction begins with an address phase, communicating an address and 4-bit command code. The possible commands are different from PCI, however:
000p (Read): Read 8×(AD[2:0]+1) = 8, 16, 24, ..., 64 bytes. The least significant bit p is 0 for low-priority, 1 for high.
001x: (reserved)
010p (Write): Write 8×(AD[2:0]+1) = 8–64 bytes.
011x: (reserved)
100p (Long read): Read 32×(AD[2:0]+1) = 32, 64, 96, ..., 256 bytes. This is the same as a read request, but the length is multiplied by four.
1010 (Flush): Force previously written data to memory, for synchronization. This acts as a low-priority read, taking a queue slot and returning 8 bytes of random data to indicate completion. The address and length supplied with this command are ignored.
1011: (reserved)
1100 (Fence): This acts as a memory fence, requiring that all earlier AGP requests complete before any following requests. Ordinarily, for increased performance, AGP uses a very weak consistency model, and allows a later write to pass an earlier read. (E.g. after sending "write 1, write 2, read, write 3, write 4" requests, all to the same address, the read may return any value from 2 to 4. Only returning 1 is forbidden, as writes must complete before following reads.) This operation does not require any queue slots.
1101 (Dual address cycle): When making a request to an address above 2³², this is used to indicate that a second address cycle will follow with additional address bits. This operates like a regular PCI dual address cycle; it is accompanied by the low-order 32 bits of the address (and the length), and the following cycle includes the high 32 address bits and the desired command. The two cycles make one request, and take only one slot in the request queue. This request code is not used with side-band addressing.
111x: (reserved)
AGP 3.0 dropped high-priority requests and the long read commands, as they were little used. It also mandated side-band addressing, thus dropping the dual address cycle, leaving only four request types: low-priority read (0000), low-priority write (0100), flush (1010) and fence (1100).
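As a worked illustration of this request encoding, the following Python sketch (editorial; real bus signalling involves far more than this) decodes the command and the 8-byte-aligned address and length from the C/BE[3:0] and AD[31:0] lines:

# The four request types remaining in AGP 3.0.
COMMANDS = {0b0000: "low-priority read",
            0b0100: "low-priority write",
            0b1010: "flush",
            0b1100: "fence"}

def decode_request(cbe, ad):
    # AD[31:3] carry the 8-byte-aligned start address;
    # AD[2:0] encode the length as 8 x (AD[2:0] + 1) bytes.
    command = COMMANDS.get(cbe, "reserved")
    address = ad & ~0x7
    length = 8 * ((ad & 0x7) + 1)
    return command, address, length

print(decode_request(0b0000, 0x12345678 | 0x3))
# ('low-priority read', 0x12345678, 32): a 32-byte read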
=== In-band AGP requests using PIPE# ===
To queue a request in-band, the card must request the bus using the standard PCI REQ# signal, and receive GNT# plus bus status ST[2:0] equal to 111. Then, instead of asserting FRAME# to begin a PCI transaction, the card asserts the PIPE# signal while driving the AGP command, address, and length on the C/BE[3:0], AD[31:3] and AD[2:0] lines, respectively. (If the address is 64 bits, a dual address cycle similar to PCI is used.) For every cycle that PIPE# is asserted, the card sends another request without waiting for acknowledgement from the motherboard, up to the configured maximum queue depth. The last cycle is marked by deasserting REQ#, and PIPE# is deasserted on the following idle cycle.
=== Side-band AGP requests using SBA[7:0] ===
If side-band addressing is supported and configured, the PIPE# signal is not used. (And the signal is re-used for another purpose in the AGP 3.0 protocol, which requires side-band addressing.) Instead, requests are broken into 16-bit pieces which are sent as two bytes across the SBA bus. There is no need for the card to ask permission from the motherboard; a new request may be sent at any time as long as the number of outstanding requests is within the configured maximum queue depth. The possible values are:
0aaa aaaa aaaa alll
Queue a request with the given low-order address bits A[14:3] and length 8×(L[2:0]+1). The command and high-order bits are as previously specified. Any number of requests may be queued by sending only this pattern, as long as the command and higher address bits remain the same.
10cc ccra aaaa aaaa
Use command C[3:0] and address bits A[23:15] for future requests. (Bit R is reserved.) This does not queue a request, but sets values that will be used in all future queued requests.
110r aaaa aaaa aaaa
Use address bits A[35:24] for future requests.
1110 aaaa aaaa aaaa
Use address bits A[47:36] for future requests.
1111 0xxx, 1111 10xx, 1111 110x
Reserved, do not use.
1111 1110
Synchronization pattern used when starting the SBA bus after an idle period.
1111 1111
No operation; no request. At AGP 1× speed, this may be sent as a single byte and a following 16-bit side-band request started one cycle later. At AGP 2× and higher speeds, all side-band requests, including this NOP, are 16 bits long.
Sideband address bytes are sent at the same rate as data transfers, up to 8× the 66 MHz basic bus clock. Sideband addressing has the advantage that it mostly eliminates the need for turnaround cycles on the AD bus between transfers, in the usual case when read operations greatly outnumber writes.
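To make the bit layout concrete, here is a minimal Python sketch that packs the "queue a request" side-band pattern described above; the helper name and example values are hypothetical, not part of any AGP driver API.

```python
def sba_queue_request(addr, nbytes):
    """Pack the 16-bit '0aaa aaaa aaaa alll' side-band pattern.

    addr must be 8-byte aligned and nbytes a multiple of 8 (8-64 bytes),
    per the AGP request rules above; only address bits A[14:3] fit here,
    the command and higher address bits come from the other SBA patterns.
    """
    assert addr % 8 == 0 and nbytes % 8 == 0 and 8 <= nbytes <= 64
    a_14_3 = (addr >> 3) & 0xFFF      # 12 low-order address bits A[14:3]
    lll = nbytes // 8 - 1             # length field: transfers 8*(L+1) bytes
    return (a_14_3 << 3) | lll        # top bit 0 marks this pattern type

print(hex(sba_queue_request(0x1238, 16)))   # 0x1239: A[14:3]=0x247, L=1
```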
=== AGP responses ===
While asserting GNT#, the motherboard may instead indicate via the ST bits that a data phase for a queued request will be performed next. There are four queues: two priorities (low- and high-priority) for each of reads and writes, and each is processed in order. Obviously, the motherboard will attempt to complete high-priority requests first, but there is no limit on the number of low-priority responses which may be delivered while the high-priority request is processed.
For each cycle when the GNT# is asserted and the status bits have the value 00p, a read response of the indicated priority is scheduled to be returned. At the next available opportunity (typically the next clock cycle), the motherboard will assert TRDY# (target ready) and begin transferring the response to the oldest request in the indicated read queue. (Other PCI bus signals like FRAME#, DEVSEL# and IRDY# remain deasserted.) Up to four clock cycles worth of data (16 bytes at AGP 1× or 128 bytes at AGP 8×) are transferred without waiting for acknowledgement from the card. If the response is longer than that, both the card and motherboard must indicate their ability to continue on the third cycle by asserting IRDY# (initiator ready) and TRDY#, respectively. If either one does not, wait states will be inserted until two cycles after they both do. (The value of IRDY# and TRDY# at other times is irrelevant and they are usually deasserted.)
The C/BE# byte enable lines may be ignored during read responses, but are held asserted (all bytes valid) by the motherboard.
The card may also assert the RBF# (read buffer full) signal to indicate that it is temporarily unable to receive more low-priority read responses. The motherboard will refrain from scheduling any more low-priority read responses. The card must still be able to receive the end of the current response, and the first four-cycle block of the following one if scheduled, plus any high-priority responses it has requested.
For each cycle when GNT# is asserted and the status bits have the value 01p, write data is scheduled to be sent across the bus. At the next available opportunity (typically the next clock cycle), the card will assert IRDY# (initiator ready) and begin transferring the data portion of the oldest request in the indicated write queue. If the data is longer than four clock cycles, the motherboard will indicate its ability to continue by asserting TRDY# on the third cycle. Unlike reads, there is no provision for the card to delay the write; if it did not have the data ready to send, it should not have queued the request.
The C/BE# lines are used with write data, and may be used by the card to select which bytes should be written to memory.
The multiplier in AGP 2×, 4× and 8× indicates the number of data transfers across the bus during each 66 MHz clock cycle. Such transfers use source synchronous clocking with a "strobe" signal (AD_STB[0], AD_STB[1], and SB_STB) generated by the data source. AGP 4× adds complementary strobe signals.
Because AGP transactions may be as short as two transfers, at AGP 4× and 8× speeds it is possible for a request to complete in the middle of a clock cycle. In such a case, the cycle is padded with dummy data transfers (with the C/BE# byte enable lines held deasserted).
== Connector pinout ==
The AGP connector contains almost all PCI signals, plus several additions. The connector has 66 contacts on each side, although 4 are removed for each keying notch. Pin 1 is closest to the I/O bracket, and the B and A sides are as in the table, looking down at the motherboard connector.
Contacts are spaced at 1 mm intervals; however, they are arranged in two staggered vertical rows so that there is 2 mm of space between pins in each row. Odd-numbered A-side contacts and even-numbered B-side contacts are in the lower row (1.0 to 3.5 mm from the card edge). The others are in the upper row (3.7 to 6.0 mm from the card edge).
PCI signals omitted are:
The −12 V supply
The third and fourth interrupt requests (INTC#, INTD#)
The JTAG pins (TRST#, TCK, TMS, TDI, TDO)
The SMBus pins (SMBCLK, SMBDAT)
The IDSEL pin; an AGP card connects AD[16] to IDSEL internally
The 64-bit extension (REQ64#, ACK64#) and 66 MHz (M66EN) pins
The LOCK# pin for locked transaction support
Signals added are:
Data strobes AD_STB[1:0] (and AD_STB[1:0]# in AGP 2.0)
The sideband address bus SBA[7:0] and SB_STB (and SB_STB# in AGP 2.0)
The ST[2:0] status signals
USB+ and USB− (and OVERCNT# in AGP 2.0)
The PIPE# signal (removed in AGP 3.0 for 0.8 V signaling)
The RBF# signal
The TYPEDET#, Vregcg and Vreggc pins (AGP 2.0 for 1.5 V signaling)
The DBI_HI and DBI_LO signals (AGP 3.0 for 0.8 V signaling only)
The GC_DET# and MB_DET# pins (AGP 3.0 for 0.8 V signaling)
The WBF# signal (AGP 3.0 fast write extension)
== See also ==
List of device bandwidths
Serial Digital Video Out for ADD DVI adapter cards
AGP Inline Memory Module
== Notes ==
== References ==
== External links ==
Archived AGP Implementors Forum
AGP specifications: 1.0, 2.0, 3.0, Pro 1.0, Pro 1.1a
AGP Compatibility For Sticklers
AGP pinout
AGP expansion slots
AGP compatibility (with pictures)
Universal Accelerated Graphics Port (UAGP)
How Stuff Works - AGP
A discussion from 2003 of what AGP aperture is, how it works, and how much memory should be allocated to it. | Wikipedia/Accelerated_Graphics_Port |
In computer graphics, a texel, texture element, or texture pixel is the fundamental unit of a texture map. Textures are represented by arrays of texels representing the texture space, just as other images are represented by arrays of pixels.
Texels can also be described by image regions that are obtained through simple procedures such as thresholding. Voronoi tessellation can be used to define their spatial relationships: divisions are made at the midpoints between the centroids of each texel and the centroids of every surrounding texel for the entire texture. This results in each texel centroid having a Voronoi polygon surrounding it, which consists of all points that are closer to its own texel centroid than to any other centroid.
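To make the tessellation concrete, here is a minimal sketch using SciPy's Voronoi routine; the texel centroid coordinates are made-up placeholders standing in for centroids obtained by thresholding.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical texel centroids (e.g. obtained by thresholding an image):
centroids = np.array([[1.0, 1.0], [4.0, 1.5], [2.5, 4.0], [5.0, 5.0]])

vor = Voronoi(centroids)
# Each centroid gets a Voronoi region: all points closer to it than to
# any other centroid, with boundaries at the midpoints between centroids.
print(vor.point_region)    # index of the region owned by each centroid
print(vor.vertices)        # corners of the polygon boundaries
```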
== Rendering ==
When texturing a 3D surface or surfaces (a process known as texture mapping), the renderer maps texels to appropriate pixels in the geometric fragment (typically a triangle) in the output picture. On modern computers, this operation is accomplished on the graphics processing unit.
The texturing process starts with a location in space. The location can be in world space, but typically it is local to a model space so that the texture moves with the model. A projector function is applied to the location to change the location from a three-element vector $(u, v, z)$ to a two-element vector $(x, y)$ with values ranging from zero to one (uv). These values are multiplied by the resolution of the texture to obtain the location of the texel. When a texel is requested that is not on an integer position, texture filtering is applied.
When a texel is requested that is outside of the texture, one of two techniques is used: clamping or wrapping. Clamping limits the texel to the texture size, moving it to the nearest edge if it is more than the texture size. Wrapping moves the texel in increments of the texture's size to bring it back into the texture. Wrapping causes a texture to be repeated; clamping causes it to be in one spot only.
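The two modes can be sketched in a few lines of Python (a nearest-neighbour fetch with made-up helper names; real renderers combine this with texture filtering):

```python
import numpy as np

def sample(texture, u, v, mode="wrap"):
    """Nearest-neighbour texel fetch for uv coordinates (filtering omitted).

    mode: 'wrap' repeats the texture; 'clamp' pins coordinates to the edge.
    """
    h, w = texture.shape
    x, y = int(u * w), int(v * h)
    if mode == "wrap":
        x, y = x % w, y % h                           # step back by whole texture sizes
    else:
        x, y = min(max(x, 0), w - 1), min(max(y, 0), h - 1)
    return texture[y, x]

tex = np.arange(16.0).reshape(4, 4)
print(sample(tex, 1.3, 0.0))                  # 1.0 (wrapped back into the texture)
print(sample(tex, 1.3, 0.0, mode="clamp"))    # 3.0 (pinned to the right edge)
```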
== See also ==
Pixel
Resel
Voxel
== References == | Wikipedia/Texel_(graphics) |
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
A DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input or output data are shifted by half a sample.
There are eight standard DCT variants, of which four are common.
The most common variant of discrete cosine transform is the type-II DCT, which is often called simply the DCT. This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply the inverse DCT or the IDCT. Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) are developed to extend the concept of DCT to multidimensional signals. A variety of fast algorithms have been developed to reduce the computational complexity of implementing DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT, used in several ISO/IEC and ITU-T international standards.
DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks. DCT block sizes include 8×8 pixels for the standard DCT, and varied integer DCT sizes between 4×4 and 32×32 pixels. The DCT has a strong energy compaction property, capable of achieving high quality at high data compression ratios. However, blocky compression artifacts can appear when heavy DCT compression is applied.
== History ==
The DCT was first conceived by Nasir Ahmed while working at Kansas State University. The concept was proposed to the National Science Foundation in 1972. The DCT was originally intended for image compression. Ahmed developed a practical DCT algorithm with T. Natarajan and K. R. Rao at the University of Texas at Arlington in 1973. They presented their results in a January 1974 paper, titled Discrete Cosine Transform. It described what is now called the type-II DCT (DCT-II), as well as the type-III inverse DCT (IDCT).
Since its introduction in 1974, there has been significant research on the DCT. In 1977, Wen-Hsiung Chen published a paper with C. Harrison Smith and Stanley C. Fralick presenting a fast DCT algorithm. Further developments include a 1978 paper by M. J. Narasimha and A. M. Peterson, and a 1984 paper by B. G. Lee. These research papers, along with the original 1974 Ahmed paper and the 1977 Chen paper, were cited by the Joint Photographic Experts Group as the basis for JPEG's lossy image compression algorithm in 1992.
The discrete sine transform (DST) was derived from the DCT by replacing the Neumann condition at x=0 with a Dirichlet condition. The DST was described in the 1974 DCT paper by Ahmed, Natarajan and Rao. A type-I DST (DST-I) was later described by Anil K. Jain in 1976, and a type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978.
In 1975, John A. Roese and Guner S. Robinson adapted the DCT for inter-frame motion-compensated video coding. They experimented with the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for both, and found the DCT to be the most efficient due to its reduced complexity, capable of compressing image data down to 0.25 bit per pixel for a videotelephone scene with image quality comparable to that of an intra-frame coder requiring 2 bits per pixel. In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression, also called block motion compensation. This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards.
A DCT variant, the modified discrete cosine transform (MDCT), was developed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, following earlier work by Princen and Bradley in 1986. The MDCT is used in most modern audio compression formats, such as Dolby Digital (AC-3), MP3 (which uses a hybrid DCT-FFT algorithm), Advanced Audio Coding (AAC), and Vorbis (Ogg).
Nasir Ahmed also developed a lossless DCT algorithm with Giridhar Mandyam and Neeraj Magotra at the University of New Mexico in 1995. This allows the DCT technique to be used for lossless compression of images. It is a modification of the original DCT algorithm, and incorporates elements of inverse DCT and delta modulation. It is a more effective lossless compression algorithm than entropy coding. Lossless DCT is also known as LDCT.
== Applications ==
The DCT is the most widely used transformation technique in signal processing, and by far the most widely used linear transform in data compression. Uncompressed digital media as well as lossless compression have high memory and bandwidth requirements, which are significantly reduced by the DCT lossy compression technique, capable of achieving data compression ratios from 8:1 to 14:1 for near-studio-quality content and up to 100:1 for acceptable-quality content. DCT compression standards are used in digital media technologies, such as digital images, digital photos, digital video, streaming media, digital television, streaming television, video on demand (VOD), digital cinema, high-definition video (HD video), and high-definition television (HDTV).
The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy compression, because it has a strong energy compaction property. In typical applications, most of the signal information tends to be concentrated in a few low-frequency components of the DCT. For strongly correlated Markov processes, the DCT can approach the compaction efficiency of the Karhunen-Loève transform (which is optimal in the decorrelation sense). As explained below, this stems from the boundary conditions implicit in the cosine functions.
DCTs are widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to slightly different even and odd boundary conditions at the two ends of the array.
DCTs are closely related to Chebyshev polynomials, and fast DCT algorithms (below) are used in Chebyshev approximation of arbitrary functions by series of Chebyshev polynomials, for example in Clenshaw–Curtis quadrature.
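As a sketch of this connection, the coefficients of a Chebyshev interpolant sampled at the Chebyshev extreme points can be computed with a DCT-I; the example below assumes SciPy's unnormalized DCT conventions and an arbitrary test function.

```python
import numpy as np
from scipy.fft import dct

# Chebyshev interpolation of f(x) = exp(x) on [-1, 1] via a DCT-I
# (extrema grid; a standard identity, sketched with SciPy's conventions):
n = 16
k = np.arange(n + 1)
xk = np.cos(np.pi * k / n)              # Chebyshev extreme points
f = np.exp(xk)                          # samples of the target function

c = dct(f, type=1) / n                  # DCT-I yields the series coefficients
c[0] /= 2
c[-1] /= 2                              # endpoint terms are halved

# Evaluate the Chebyshev series at x = 0 and compare with exp(0) = 1:
approx = sum(c[j] * np.cos(j * np.arccos(0.0)) for j in range(n + 1))
print(approx)                           # ~1.0 to near machine precision
```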
=== General applications ===
The DCT is widely used in many applications, which include the following.
=== Visual media standards ===
The DCT-II is an important image compression technique. It is used in image compression standards such as JPEG, and video compression standards such as H.26x, MJPEG, MPEG, DV, Theora and Daala. There, the two-dimensional DCT-II of $N \times N$ blocks is computed and the results are quantized and entropy coded. In this case, $N$ is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the $(0,0)$ element (top-left) is the DC (zero-frequency) component, and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies.
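A small numerical illustration of this layout with SciPy (orthogonal normalization; the sample block is an arbitrary horizontal ramp): all the energy of a block that varies only horizontally lands in the first row of coefficients, with the DC term at (0, 0).

```python
import numpy as np
from scipy.fft import dctn

block = np.add.outer(np.zeros(8), np.arange(8.0))  # 8x8 block: horizontal ramp
coeffs = dctn(block, norm='ortho')

print(abs(coeffs[0, 0]))               # DC term, proportional to the block mean
print(np.allclose(coeffs[1:, :], 0))   # True: no vertical variation, rows 1-7 vanish
```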
The integer DCT, an integer approximation of the DCT, is used in Advanced Video Coding (AVC), introduced in 2003, and High Efficiency Video Coding (HEVC), introduced in 2013. The integer DCT is also used in the High Efficiency Image Format (HEIF), which uses a subset of the HEVC video coding format for coding still images. AVC uses 4×4 and 8×8 blocks. HEVC and HEIF use varied block sizes between 4×4 and 32×32 pixels. As of 2019, AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers.
==== Image formats ====
==== Video formats ====
=== MDCT audio standards ===
==== General audio ====
==== Speech coding ====
=== Multidimensional DCT ===
Multidimensional DCTs (MD DCTs) have several applications, mainly 3-D DCTs such as the 3-D DCT-II, which has several new applications like hyperspectral imaging coding systems, variable temporal length 3-D DCT coding, video coding algorithms, adaptive video coding and 3-D compression. Due to enhancements in hardware and software and the introduction of several fast algorithms, the need for MD DCTs is rapidly increasing. The DCT-IV has gained popularity for its applications in fast implementation of real-valued polyphase filter banks, the lapped orthogonal transform and cosine-modulated wavelet bases.
=== Digital signal processing ===
The DCT plays an important role in digital signal processing, specifically in data compression. The DCT is widely implemented in digital signal processors (DSP), as well as digital signal processing software. Many companies have developed DSPs based on DCT technology. DCTs are widely used for applications such as encoding, decoding, video, audio, multiplexing, control signals, signaling, and analog-to-digital conversion. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips.
=== Compression artifacts ===
A common issue with DCT compression in digital media is blocky compression artifacts, caused by DCT blocks. In a DCT algorithm, an image (or frame in an image sequence) is divided into square blocks which are processed independently from each other; the DCT of each block is then taken, and the resulting DCT coefficients are quantized. This process can cause blocking artifacts, primarily at high data compression ratios. This can also cause the mosquito noise effect, commonly found in digital video.
DCT blocks are often used in glitch art. The artist Rosa Menkman makes use of DCT-based compression artifacts in her glitch art, particularly the DCT blocks found in most digital media formats such as JPEG digital images and MP3 audio. Another example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.
== Informal overview ==
Like any Fourier-related transform, DCTs express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the DFT, a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions from the DFT or other related transforms.
The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DCT or a Fourier series, can be thought of as implicitly defining an extension of that function outside the domain. That is, once you write a function $f(x)$ as a sum of sinusoids, you can evaluate that sum at any $x$, even for $x$ where the original $f(x)$ was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DCT, like a cosine transform, implies an even extension of the original function.
However, because DCTs operate on finite, discrete sequences, two issues arise that do not apply for the continuous cosine transform. First, one has to specify whether the function is even or odd at both the left and right boundaries of the domain (i.e. the min-n and max-n boundaries in the definitions below, respectively). Second, one has to specify around what point the function is even or odd. In particular, consider a sequence abcd of four equally spaced data points, and say that we specify an even left boundary. There are two sensible possibilities: either the data are even about the sample a, in which case the even extension is dcbabcd, or the data are even about the point halfway between a and the previous point, in which case the even extension is dcbaabcd (a is repeated).
Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of 2 × 2 × 2 × 2 = 16 possibilities. These choices lead to all the standard variations of DCTs and also discrete sine transforms (DSTs). Half of these possibilities, those where the left boundary is even, correspond to the 8 types of DCT; the other half are the 8 types of DST.
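The two even-left-boundary choices can be spelled out directly; a minimal NumPy sketch, with numbers standing in for a, b, c, d:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])       # the four points "abcd"

# Even about the first sample itself (whole-sample symmetry, DCT-I style):
ws_even = np.concatenate([x[:0:-1], x])   # d c b a b c d
# Even about the point half a sample before the first (half-sample, DCT-II style):
hs_even = np.concatenate([x[::-1], x])    # d c b a a b c d  (a is repeated)

print(ws_even)   # [4. 3. 2. 1. 2. 3. 4.]
print(hs_even)   # [4. 3. 2. 1. 1. 2. 3. 4.]
```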
These different boundary conditions strongly affect the applications of the transform and lead to uniquely useful properties for the various DCT types. Most directly, when using Fourier-related transforms to solve partial differential equations by spectral methods, the boundary conditions are directly specified as a part of the problem being solved. Or, for the MDCT (based on the type-IV DCT), the boundary conditions are intimately involved in the MDCT's critical property of time-domain aliasing cancellation. In a more subtle fashion, the boundary conditions are responsible for the energy compactification properties that make DCTs useful for image and audio compression, because the boundaries affect the rate of convergence of any Fourier-like series.
In particular, it is well known that any discontinuities in a function reduce the rate of convergence of the Fourier series so that more sinusoids are needed to represent the function with a given accuracy. The same principle governs the usefulness of the DFT and other transforms for signal compression; the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed. However, the implicit periodicity of the DFT means that discontinuities usually occur at the boundaries: any random segment of a signal is unlikely to have the same value at both the left and right boundaries. In contrast, a DCT where both boundaries are even always yields a continuous extension at the boundaries (although the slope is generally discontinuous). This is why DCTs, and in particular DCTs of types I, II, V, and VI (the types that have two even boundaries) generally perform better for signal compression than DFTs and DSTs. In practice, a type-II DCT is usually preferred for such applications, in part for reasons of computational convenience.
== Formal definition ==
Formally, the discrete cosine transform is a linear, invertible function $f : \mathbb{R}^N \to \mathbb{R}^N$ (where $\mathbb{R}$ denotes the set of real numbers), or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers $x_0, \ldots, x_{N-1}$ are transformed into the N real numbers $X_0, \ldots, X_{N-1}$ according to one of the formulas:
=== DCT-I ===
$$X_k = \frac{1}{2}\left(x_0 + (-1)^k x_{N-1}\right) + \sum_{n=1}^{N-2} x_n \cos\left[\frac{\pi}{N-1}\,nk\right] \qquad \text{for } k = 0,\ \ldots,\ N-1.$$
Some authors further multiply the $x_0$ and $x_{N-1}$ terms by $\sqrt{2}$ and correspondingly multiply the $X_0$ and $X_{N-1}$ terms by $1/\sqrt{2}$, which, if one further multiplies by an overall scale factor of $\sqrt{\tfrac{2}{N-1}}$, makes the DCT-I matrix orthogonal but breaks the direct correspondence with a real-even DFT.
The DCT-I is exactly equivalent (up to an overall scale factor of 2) to a DFT of $2(N-1)$ real numbers with even symmetry. For example, a DCT-I of $N = 5$ real numbers $a\,b\,c\,d\,e$ is exactly equivalent to a DFT of eight real numbers $a\,b\,c\,d\,e\,d\,c\,b$ (even symmetry), divided by two. (In contrast, DCT types II–IV involve a half-sample shift in the equivalent DFT.)
Note, however, that the DCT-I is not defined for $N$ less than 2, while all other DCT types are defined for any positive $N$.

Thus, the DCT-I corresponds to the boundary conditions: $x_n$ is even around $n = 0$ and even around $n = N-1$; similarly for $X_k$.
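This equivalence is easy to verify numerically; the sketch below assumes SciPy's unnormalized DCT-I convention, which carries the overall factor of 2 mentioned above.

```python
import numpy as np
from scipy.fft import dct

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # "a b c d e", N = 5
ext = np.concatenate([x, x[-2:0:-1]])          # a b c d e d c b, length 2(N-1)

# The DFT of the even-symmetric extension is purely real, and its first
# N bins equal SciPy's unnormalized DCT-I (i.e. twice the formula above):
dft_bins = np.fft.fft(ext).real[:len(x)]
print(np.allclose(dft_bins, dct(x, type=1)))   # True
```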
=== DCT-II ===
$$X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)k\right] \qquad \text{for } k = 0,\ \ldots,\ N-1.$$
The DCT-II is probably the most commonly used form, and is often simply referred to as the DCT.
This transform is exactly equivalent (up to an overall scale factor of 2) to a DFT of $4N$ real inputs of even symmetry, where the even-indexed elements are zero. That is, it is half of the DFT of the $4N$ inputs $y_n$, where $y_{2n} = 0$, $y_{2n+1} = x_n$ for $0 \le n < N$, $y_{2N} = 0$, and $y_{4N-n} = y_n$ for $0 < n < 2N$. A DCT-II can also be obtained from a DFT of a $2N$ signal followed by a multiplication by half-shift phase factors; this is demonstrated by Makhoul.
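A minimal numerical check of this construction, assuming SciPy's default unnormalized DCT-II (which equals the full first-$N$-bins DFT here, i.e. twice the formula above):

```python
import numpy as np
from scipy.fft import dct

x = np.random.rand(8)
N = len(x)

# Length-4N real-even sequence from the text: zeros at even indices,
# x at odd indices 1, 3, ..., 2N-1, then mirrored so y[4N-n] = y[n].
y = np.zeros(4 * N)
y[1:2*N:2] = x
y[4*N-1:2*N:-2] = x

# The first N DFT bins are real and equal SciPy's unnormalized DCT-II,
# i.e. twice the article's formula ("half of the DFT of the 4N inputs"):
print(np.allclose(np.fft.fft(y).real[:N], dct(x, type=2)))   # True
```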
Some authors further multiply the $X_0$ term by $1/\sqrt{N}$ and multiply the rest of the matrix by an overall scale factor of $\sqrt{2/N}$ (see below for the corresponding change in DCT-III). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. This is the normalization used by Matlab. In many applications, such as JPEG, the scaling is arbitrary because scale factors can be combined with a subsequent computational step (e.g. the quantization step in JPEG), and a scaling can be chosen that allows the DCT to be computed with fewer multiplications.
The DCT-II implies the boundary conditions: $x_n$ is even around $n = -1/2$ and even around $n = N - 1/2$; $X_k$ is even around $k = 0$ and odd around $k = N$.
=== DCT-III ===
$$X_k = \frac{1}{2}x_0 + \sum_{n=1}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(k + \frac{1}{2}\right)n\right] \qquad \text{for } k = 0,\ \ldots,\ N-1.$$
Because it is the inverse of DCT-II up to a scale factor (see below), this form is sometimes simply referred to as "the inverse DCT" ("IDCT").
Some authors divide the $x_0$ term by $\sqrt{2}$ instead of by 2 (resulting in an overall $x_0/\sqrt{2}$ term) and multiply the resulting matrix by an overall scale factor of $\sqrt{2/N}$ (see above for the corresponding change in DCT-II), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output.
The DCT-III implies the boundary conditions: $x_n$ is even around $n = 0$ and odd around $n = N$; $X_k$ is even around $k = -1/2$ and even around $k = N - 1/2$.
=== DCT-IV ===
$$X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)\left(k + \frac{1}{2}\right)\right] \qquad \text{for } k = 0,\ \ldots,\ N-1.$$
The DCT-IV matrix becomes orthogonal (and thus, being clearly symmetric, its own inverse) if one further multiplies by an overall scale factor of $\sqrt{2/N}$. A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT).
The DCT-IV implies the boundary conditions: $x_n$ is even around $n = -1/2$ and odd around $n = N - 1/2$; similarly for $X_k$.
=== DCT V-VIII ===
DCTs of types I–IV treat both boundaries consistently regarding the point of symmetry: they are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary.
In other words, DCT types I–IV are equivalent to real-even DFTs of even order (regardless of whether $N$ is even or odd), since the corresponding DFT is of length $2(N-1)$ (for DCT-I) or $4N$ (for DCT-II and III) or $8N$ (for DCT-IV). The four additional types of discrete cosine transform correspond essentially to real-even DFTs of logically odd order, which have factors of $N \pm 1/2$ in the denominators of the cosine arguments.
However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below.
(The trivial real-even array, a length-one DFT (odd length) of a single number $a$, corresponds to a DCT-V of length $N = 1$.)
== Inverse transforms ==
Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N − 1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N and vice versa.
Like for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by $\sqrt{2/N}$ so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of $\sqrt{2}$ (see above), this can be used to make the transform matrix orthogonal.
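A quick check of these relations with SciPy (whose unnormalized transforms are twice the formulas above, so the DCT-II/DCT-III round trip picks up a factor of $2N$):

```python
import numpy as np
from scipy.fft import dct, idct

x = np.random.rand(16)
N = len(x)

# SciPy's unnormalized DCT-III inverts its unnormalized DCT-II up to 2N:
print(np.allclose(dct(dct(x, type=2), type=3) / (2 * N), x))      # True

# With norm='ortho' both matrices are orthogonal and exact inverses:
print(np.allclose(idct(dct(x, norm='ortho'), norm='ortho'), x))   # True
```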
== Multidimensional DCTs ==
Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension.
=== M-D DCT-II ===
For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2D DCT-II is given by the formula (omitting normalization and other scale factors, as above):
$$\begin{aligned}X_{k_1,k_2} &= \sum_{n_1=0}^{N_1-1}\left(\sum_{n_2=0}^{N_2-1} x_{n_1,n_2}\cos\left[\frac{\pi}{N_2}\left(n_2+\frac{1}{2}\right)k_2\right]\right)\cos\left[\frac{\pi}{N_1}\left(n_1+\frac{1}{2}\right)k_1\right]\\ &= \sum_{n_1=0}^{N_1-1}\sum_{n_2=0}^{N_2-1} x_{n_1,n_2}\cos\left[\frac{\pi}{N_1}\left(n_1+\frac{1}{2}\right)k_1\right]\cos\left[\frac{\pi}{N_2}\left(n_2+\frac{1}{2}\right)k_2\right].\end{aligned}$$
The inverse of a multi-dimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs (see above), e.g. the one-dimensional inverses applied along one dimension at a time in a row-column algorithm.
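For instance, the row-column computation of the 2D DCT-II can be checked against SciPy's separable n-dimensional transform (orthogonal normalization assumed):

```python
import numpy as np
from scipy.fft import dctn, dct

A = np.random.rand(8, 8)

# Row-column computation: 1-D DCT-II along columns, then along rows,
# matches the separable 2-D DCT-II computed in one call.
rc = dct(dct(A, axis=0, norm='ortho'), axis=1, norm='ortho')
print(np.allclose(rc, dctn(A, norm='ortho')))    # True
```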
The 3-D DCT-II is simply the extension of the 2-D DCT-II to three-dimensional space; mathematically it can be calculated by the formula
$$X_{k_1,k_2,k_3} = \sum_{n_1=0}^{N_1-1}\sum_{n_2=0}^{N_2-1}\sum_{n_3=0}^{N_3-1} x_{n_1,n_2,n_3}\cos\left[\frac{\pi}{N_1}\left(n_1+\frac{1}{2}\right)k_1\right]\cos\left[\frac{\pi}{N_2}\left(n_2+\frac{1}{2}\right)k_2\right]\cos\left[\frac{\pi}{N_3}\left(n_3+\frac{1}{2}\right)k_3\right],\quad \text{for } k_i = 0,1,2,\dots,N_i-1.$$
The inverse of 3-D DCT-II is 3-D DCT-III and can be computed from the formula given by
$$x_{n_1,n_2,n_3} = \sum_{k_1=0}^{N_1-1}\sum_{k_2=0}^{N_2-1}\sum_{k_3=0}^{N_3-1} X_{k_1,k_2,k_3}\cos\left[\frac{\pi}{N_1}\left(n_1+\frac{1}{2}\right)k_1\right]\cos\left[\frac{\pi}{N_2}\left(n_2+\frac{1}{2}\right)k_2\right]\cos\left[\frac{\pi}{N_3}\left(n_3+\frac{1}{2}\right)k_3\right],\quad \text{for } n_i = 0,1,2,\dots,N_i-1.$$
Technically, computing a two-, three- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order (i.e. interleaving/combining the algorithms for the different dimensions). Owing to the rapid growth in applications based on the 3-D DCT, several fast algorithms have been developed for its computation. Vector-radix algorithms are applied for computing the M-D DCT to reduce the computational complexity and to increase the computational speed. To compute the 3-D DCT-II efficiently, a fast algorithm, the Vector-Radix Decimation-in-Frequency (VR DIF) algorithm, was developed.
==== 3-D DCT-II VR DIF ====
In order to apply the VR DIF algorithm, the input data is to be formulated and rearranged as follows. The transform size N × N × N is assumed to be a power of 2.
$$\begin{array}{l}\tilde{x}(n_1,n_2,n_3) = x(2n_1,\,2n_2,\,2n_3)\\ \tilde{x}(n_1,n_2,N-n_3-1) = x(2n_1,\,2n_2,\,2n_3+1)\\ \tilde{x}(n_1,N-n_2-1,n_3) = x(2n_1,\,2n_2+1,\,2n_3)\\ \tilde{x}(n_1,N-n_2-1,N-n_3-1) = x(2n_1,\,2n_2+1,\,2n_3+1)\\ \tilde{x}(N-n_1-1,n_2,n_3) = x(2n_1+1,\,2n_2,\,2n_3)\\ \tilde{x}(N-n_1-1,n_2,N-n_3-1) = x(2n_1+1,\,2n_2,\,2n_3+1)\\ \tilde{x}(N-n_1-1,N-n_2-1,n_3) = x(2n_1+1,\,2n_2+1,\,2n_3)\\ \tilde{x}(N-n_1-1,N-n_2-1,N-n_3-1) = x(2n_1+1,\,2n_2+1,\,2n_3+1)\end{array}$$

where $0 \le n_1, n_2, n_3 \le \frac{N}{2} - 1$.
The adjacent figure shows the four stages involved in calculating the 3-D DCT-II using the VR DIF algorithm. The first stage is the 3-D reordering using the index mapping illustrated by the above equations. The second stage is the butterfly calculation. Each butterfly calculates eight points together, as shown in the figure just below, where $c(\varphi_i) = \cos(\varphi_i)$.
The original 3-D DCT-II can now be written as

$$X(k_1,k_2,k_3) = \sum_{n_1=1}^{N-1}\sum_{n_2=1}^{N-1}\sum_{n_3=1}^{N-1} \tilde{x}(n_1,n_2,n_3)\cos(\varphi k_1)\cos(\varphi k_2)\cos(\varphi k_3)$$
where $\varphi_i = \frac{\pi}{2N}(4N_i + 1)$, and $i = 1, 2, 3$.
If the even and the odd parts of $k_1, k_2$ and $k_3$ are considered, the general formula for the calculation of the 3-D DCT-II can be expressed as
$$X(k_1,k_2,k_3) = \sum_{n_1=1}^{\frac{N}{2}-1}\sum_{n_2=1}^{\frac{N}{2}-1}\sum_{n_3=1}^{\frac{N}{2}-1} \tilde{x}_{ijl}(n_1,n_2,n_3)\cos(\varphi(2k_1+i))\cos(\varphi(2k_2+j))\cos(\varphi(2k_3+l))$$
where

$$\begin{aligned}\tilde{x}_{ijl}(n_1,n_2,n_3) ={}& \tilde{x}(n_1,n_2,n_3) + (-1)^{l}\,\tilde{x}\!\left(n_1,n_2,n_3+\tfrac{N}{2}\right)\\ &+ (-1)^{j}\,\tilde{x}\!\left(n_1,n_2+\tfrac{N}{2},n_3\right) + (-1)^{j+l}\,\tilde{x}\!\left(n_1,n_2+\tfrac{N}{2},n_3+\tfrac{N}{2}\right)\\ &+ (-1)^{i}\,\tilde{x}\!\left(n_1+\tfrac{N}{2},n_2,n_3\right) + (-1)^{i+j}\,\tilde{x}\!\left(n_1+\tfrac{N}{2},n_2+\tfrac{N}{2},n_3\right)\\ &+ (-1)^{i+l}\,\tilde{x}\!\left(n_1+\tfrac{N}{2},n_2,n_3+\tfrac{N}{2}\right) + (-1)^{i+j+l}\,\tilde{x}\!\left(n_1+\tfrac{N}{2},n_2+\tfrac{N}{2},n_3+\tfrac{N}{2}\right)\end{aligned}$$

where $i, j, l = 0$ or $1$.
===== Arithmetic complexity =====
The whole 3-D DCT calculation needs $\log_2 N$ stages, and each stage involves $\tfrac{1}{8}N^3$ butterflies, so the whole 3-D DCT requires $\tfrac{1}{8}N^3\log_2 N$ butterflies to be computed. Each butterfly requires seven real multiplications (including trivial multiplications) and 24 real additions (including trivial additions). Therefore, the total number of real multiplications needed is $\tfrac{7}{8}N^3\log_2 N$, and the total number of real additions, including the post-additions (recursive additions) which can be calculated directly after the butterfly stage or after the bit-reverse stage, is given by

$$\underbrace{\left[\frac{3}{2}N^3\log_2 N\right]}_{\text{Real}} + \underbrace{\left[\frac{3}{2}N^3\log_2 N - 3N^3 + 3N^2\right]}_{\text{Recursive}} = \left[\frac{9}{2}N^3\log_2 N - 3N^3 + 3N^2\right].$$
The conventional method to calculate the MD-DCT-II is the row-column-frame (RCF) approach, which is computationally complex and less productive on most advanced recent hardware platforms. The VR DIF algorithm requires far fewer multiplications than the RCF algorithm. The numbers of multiplications and additions involved in the RCF approach are given by
$$\left[\frac{3}{2}N^3\log_2 N\right] \quad \text{and} \quad \left[\frac{9}{2}N^3\log_2 N - 3N^3 + 3N^2\right],$$

respectively. From Table 1, it can be seen that the total number of multiplications associated with the 3-D DCT VR algorithm is less than that associated with the RCF approach by more than 40%. In addition, the RCF approach involves matrix transposes and more indexing and data swapping than the new VR algorithm. This makes the 3-D DCT VR algorithm more efficient and better suited for 3-D applications that involve the 3-D DCT-II, such as video compression and other 3-D image processing applications.
The main consideration in choosing a fast algorithm is to avoid computational and structural complexities. As the technology of computers and DSPs advances, the execution time of arithmetic operations (multiplications and additions) is becoming very fast, and regular computational structure becomes the most important factor. Therefore, although the above proposed 3-D VR algorithm does not achieve the theoretical lower bound on the number of multiplications, it has a simpler computational structure compared to other 3-D DCT algorithms. It can be implemented in place using a single butterfly and possesses the properties of the Cooley–Tukey FFT algorithm in 3-D. Hence, the 3-D VR algorithm is a good choice for reducing arithmetic operations in the calculation of the 3-D DCT-II, while keeping the simple structure that characterizes butterfly-style Cooley–Tukey FFT algorithms.
The image to the right shows a combination of horizontal and vertical frequencies for an 8 × 8 ($N_1 = N_2 = 8$) two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle. For example, moving right one from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data (8×8) is transformed to a linear combination of these 64 frequency squares.
=== MD-DCT-IV ===
The M-D DCT-IV is just an extension of the 1-D DCT-IV onto an M-dimensional domain. The 2-D DCT-IV of a matrix or an image is given by
$$X_{k,\ell} = \sum_{n=0}^{N-1}\sum_{m=0}^{M-1} x_{n,m}\cos\left(\frac{(2m+1)(2k+1)\pi}{4N}\right)\cos\left(\frac{(2n+1)(2\ell+1)\pi}{4M}\right),$$

for $k = 0, 1, 2, \ldots, N-1$ and $\ell = 0, 1, 2, \ldots, M-1$.
We can compute the MD DCT-IV using the regular row-column method, or we can use the polynomial transform method for fast and efficient computation. The main idea of this algorithm is to use the polynomial transform to convert the multidimensional DCT into a series of 1-D DCTs directly. The MD DCT-IV also has several applications in various fields.
== Computation ==
Although the direct application of these formulas would require $\mathcal{O}(N^2)$ operations, it is possible to compute the same thing with only $\mathcal{O}(N \log N)$ complexity by factorizing the computation similarly to the fast Fourier transform (FFT). One can also compute DCTs via FFTs combined with $\mathcal{O}(N)$ pre- and post-processing steps. In general, $\mathcal{O}(N \log N)$ methods to compute DCTs are known as fast cosine transform (FCT) algorithms.
The most efficient algorithms, in principle, are usually those that are specialized directly for the DCT, as opposed to using an ordinary FFT plus $\mathcal{O}(N)$ extra operations (see below for an exception). However, even "specialized" DCT algorithms (including all of those that achieve the lowest known arithmetic counts, at least for power-of-two sizes) are typically closely related to FFT algorithms – since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. This can even be done automatically (Frigo & Johnson 2005). Algorithms based on the Cooley–Tukey FFT algorithm are most common, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and a similar algorithm was proposed by (Feig & Winograd 1992a) for the DCT. Because the algorithms for DFTs, DCTs, and similar transforms are all so closely related, any improvement in algorithms for one transform will theoretically lead to immediate gains for the other transforms as well (Duhamel & Vetterli 1990).
While DCT algorithms that employ an unmodified FFT often have some theoretical overhead compared to the best specialized DCT algorithms, the former also have a distinct advantage: Highly optimized FFT programs are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths N with FFT-based algorithms.
Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes such as the 8 × 8 DCT-II used in JPEG compression, or the small DCTs (or MDCTs) typically used in audio compression. (Reduced code size may also be a reason to use a specialized DCT for embedded-device applications.)
In fact, even the DCT algorithms using an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size $4N$ with real-even symmetry whose even-indexed elements are zero. One of the most common methods for computing this via an FFT (e.g. the method used in FFTPACK and FFTW) was described by Narasimha & Peterson (1978) and Makhoul (1980), and this method in hindsight can be seen as one step of a radix-4 decimation-in-time Cooley–Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II.
Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step. If the subsequent size-$N$ real-data FFT is also performed by a real-data split-radix algorithm (as in Sorensen et al. (1987)), then the resulting algorithm actually matches what was long the lowest published arithmetic count for the power-of-two DCT-II ($2N\log_2 N - N + 2$ real-arithmetic operations).
A recent reduction in the operation count to $\tfrac{17}{9}N\log_2 N + \mathcal{O}(N)$ also uses a real-data FFT. So, there is nothing intrinsically bad about computing the DCT via an FFT from an arithmetic perspective – it is sometimes merely a question of whether the corresponding FFT algorithm is optimal. (As a practical matter, the function-call overhead in invoking a separate FFT routine might be significant for small $N$, but this is an implementation rather than an algorithmic question since it can be solved by unrolling or inlining.)
== Example of IDCT ==
Consider this 8x8 grayscale image of capital letter A.
Each basis function is multiplied by its coefficient and then this product is added to the final image.
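A minimal sketch of this reconstruction using SciPy's inverse transform; the coefficient values are arbitrary placeholders, not those of the letter-A example.

```python
import numpy as np
from scipy.fft import idctn

coeffs = np.zeros((8, 8))
coeffs[0, 0] = 4.0      # DC term sets the average grey level
coeffs[0, 1] = -1.5     # one horizontal cosine basis function
coeffs[1, 0] = 0.7      # one vertical cosine basis function

# idctn sums coefficient * basis-function over all 64 terms at once:
image = idctn(coeffs, norm='ortho')
print(image.shape)      # (8, 8)
```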
== See also ==
Discrete wavelet transform
JPEG - Discrete cosine transform - Contains a potentially easier to understand example of DCT transformation
List of Fourier-related transforms
Modified discrete cosine transform
== Notes ==
== References ==
== Further reading ==
Narasimha, M.; Peterson, A. (June 1978). "On the Computation of the Discrete Cosine Transform". IEEE Transactions on Communications. 26 (6): 934–936. doi:10.1109/TCOM.1978.1094144.
Makhoul, J. (February 1980). "A fast cosine transform in one and two dimensions". IEEE Transactions on Acoustics, Speech, and Signal Processing. 28 (1): 27–34. doi:10.1109/TASSP.1980.1163351.
Sorensen, H.; Jones, D.; Heideman, M.; Burrus, C. (June 1987). "Real-valued fast Fourier transform algorithms". IEEE Transactions on Acoustics, Speech, and Signal Processing. 35 (6): 849–863. CiteSeerX 10.1.1.205.4523. doi:10.1109/TASSP.1987.1165220.
Plonka, G.; Tasche, M. (January 2005). "Fast and numerically stable algorithms for discrete cosine transforms". Linear Algebra and Its Applications. 394 (1): 309–345. doi:10.1016/j.laa.2004.07.015.
Duhamel, P.; Vetterli, M. (April 1990). "Fast fourier transforms: A tutorial review and a state of the art". Signal Processing (Submitted manuscript). 19 (4): 259–299. Bibcode:1990SigPr..19..259D. doi:10.1016/0165-1684(90)90158-U.
Ahmed, N. (January 1991). "How I came up with the discrete cosine transform". Digital Signal Processing. 1 (1): 4–9. Bibcode:1991DSP.....1....4A. doi:10.1016/1051-2004(91)90086-Z.
Feig, E.; Winograd, S. (September 1992b). "Fast algorithms for the discrete cosine transform". IEEE Transactions on Signal Processing. 40 (9): 2174–2193. Bibcode:1992ITSP...40.2174F. doi:10.1109/78.157218.
Malvar, Henrique (1992), Signal Processing with Lapped Transforms, Boston: Artech House, ISBN 978-0-89006-467-2
Martucci, S. A. (May 1994). "Symmetric convolution and the discrete sine and cosine transforms". IEEE Transactions on Signal Processing. 42 (5): 1038–1051. Bibcode:1994ITSP...42.1038M. doi:10.1109/78.295213.
Oppenheim, Alan; Schafer, Ronald; Buck, John (1999), Discrete-Time Signal Processing (2nd ed.), Upper Saddle River, N.J: Prentice Hall, ISBN 978-0-13-754920-7
Frigo, M.; Johnson, S. G. (February 2005). "The Design and Implementation of FFTW3" (PDF). Proceedings of the IEEE. 93 (2): 216–231. Bibcode:2005IEEEP..93..216F. CiteSeerX 10.1.1.66.3097. doi:10.1109/JPROC.2004.840301. S2CID 6644892.
Boussakta, Said.; Alshibami, Hamoud O. (April 2004). "Fast Algorithm for the 3-D DCT-II" (PDF). IEEE Transactions on Signal Processing. 52 (4): 992–1000. Bibcode:2004ITSP...52..992B. doi:10.1109/TSP.2004.823472. S2CID 3385296.
Cheng, L. Z.; Zeng, Y. H. (2003). "New fast algorithm for multidimensional type-IV DCT". IEEE Transactions on Signal Processing. 51 (1): 213–220. doi:10.1109/TSP.2002.806558.
Wen-Hsiung Chen; Smith, C.; Fralick, S. (September 1977). "A Fast Computational Algorithm for the Discrete Cosine Transform". IEEE Transactions on Communications. 25 (9): 1004–1009. doi:10.1109/TCOM.1977.1093941.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 12.4.2. Cosine Transform", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 2011-08-11, retrieved 2011-08-13
== External links ==
Syed Ali Khayam: The Discrete Cosine Transform (DCT): Theory and Application
Implementation of MPEG integer approximation of 8x8 IDCT (ISO/IEC 23002-2)
Matteo Frigo and Steven G. Johnson: FFTW, FFTW Home Page. A free (GPL) C library that can compute fast DCTs (types I-IV) in one or more dimensions, of arbitrary size.
Takuya Ooura: General Purpose FFT Package, FFT Package 1-dim / 2-dim. Free C & FORTRAN libraries for computing fast DCTs (types II–III) in one, two or three dimensions, power of 2 sizes.
Tim Kientzle: Fast algorithms for computing the 8-point DCT and IDCT, Algorithm Alley.
LTFAT is a free Matlab/Octave toolbox with interfaces to the FFTW implementation of the DCTs and DSTs of type I-IV. | Wikipedia/Inverse_discrete_cosine_transform |
RDNA 2 is a GPU microarchitecture designed by AMD, released with the Radeon RX 6000 series on November 18, 2020. Alongside powering the RX 6000 series, RDNA 2 is also featured in the SoCs designed by AMD for the PlayStation 5, Xbox Series X/S, and Steam Deck consoles.
== Background ==
On July 7, 2019, AMD released the first iteration of the RDNA microarchitecture, a new graphics architecture designed specifically for gaming that replaced the aging Graphics Core Next (GCN) microarchitecture. With RDNA, AMD sought to reduce latency and improve power efficiency over their previous Vega series based on GCN 5th gen and Nvidia's competing Turing microarchitecture.
RDNA 2 was first publicly announced in January 2020, with AMD initially calling it a "refresh" of the original RDNA architecture from the previous year. At AMD's Financial Analysts Day held on March 5, 2020, AMD showed a client GPU roadmap detailing that RDNA's successor, RDNA 2, would again be built using TSMC's 7 nm process and would arrive in 2020. AMD told investors that it was targeting a 50% uplift in performance-per-watt and increased IPC with the RDNA 2 microarchitecture.
On October 28, 2020, AMD held an online unveiling event for the RDNA 2 architecture and Radeon RX 6000 series. The event came 20 days after AMD's unveiling event for Ryzen 5000 series processors built on the Zen 3 microarchitecture.
== Architectural details ==
=== Compute Unit ===
RDNA 2 contains a significant increase in the number of Compute Units (CUs), with a maximum of 80, double the maximum of 40 in the Radeon RX 5700 XT. Each Compute Unit contains 64 shader cores. CUs are organized into groups of two called Work Group Processors (WGPs), with 32 KB of shared L0 cache per WGP. Each CU contains two sets of a SIMD32 vector unit, an SISD scalar unit, texture units, and a stack of various caches. Low-precision data types such as INT4 and INT8 are newly supported by RDNA 2 CUs.
The RDNA 2 graphics pipeline has been reconfigured and reordered for greater performance-per-watt and more efficient rendering by moving the caches closer to the shader engines. A new mesh shader model allows shader rendering to be done in parallel using smaller batches of primitives called "meshlets"; as a result, the mesh shader feature enables greater control of the GPU geometry pipeline.
=== Ray tracing ===
Real-time hardware-accelerated ray tracing is a new feature of RDNA 2, handled by a dedicated ray accelerator inside each CU. Ray tracing on RDNA 2 relies on the more open DirectX Raytracing (DXR) API rather than Nvidia's proprietary RTX platform.
In February 2023, it was reported that driver updates had boosted ray tracing performance by up to 40% using DirectX Raytracing.
=== Clock speeds ===
With RDNA 2 using the same 7 nm node as RDNA, AMD claims that RDNA 2 achieves a 30% frequency increase over its predecessor while using the same power.
=== Cache and memory subsystem ===
In addition to the traditional L1 and L2 caches that GPUs possess, RDNA 2 adds a new global L3 cache that AMD calls "Infinity Cache". This was done to avoid the use of a wider memory bus while still maintaining the same data bandwidth. Product technology architect Sam Naffziger said that, without Infinity Cache, "We were looking at the daunting prospect of having to put a 512-bit interface and all the power, area and expense associated with that". Using a wider memory bus requires more power, which conflicts with AMD's increased performance-per-watt goals for RDNA 2. AMD engineers ran tests comparing RDNA 2 silicon featuring a large on-die cache against designs with wider memory buses, and discovered that such a cache aids the re-use of temporal and spatial data when the GPU is rendering a complex image. It is beneficial for the GPU's compute units to have fast access to a physically close cache rather than searching for data in video memory. AMD claims that RDNA 2's 128 MB of on-die Infinity Cache "dramatically reduces latency and power consumption". Access to a large L2 or L3 cache lets the GPU reach needed data more quickly than accessing VRAM or system RAM. The Infinity Cache is made up of two 64 MB sets that can run at their own clock rate, independent of the GPU cores. It has a peak internal transfer bandwidth of 1986.6 GB/s and reduces the reliance placed on the GPU's GDDR6 memory controllers. Each Shader Engine now has two sets of L1 caches. The large cache of RDNA 2 GPUs gives them a higher overall effective memory bandwidth compared to Nvidia's GeForce RTX 30 series GPUs.
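The logic of that trade-off can be sketched as a hit-rate-weighted blend of the two bandwidth figures. In the sketch below, the 1986.6 GB/s value is the peak quoted above, while the 512 GB/s GDDR6 figure and the hit rates are illustrative assumptions, not AMD specifications:

```c
/* Back-of-the-envelope model of why a large on-die cache raises effective
 * bandwidth: requests that hit the Infinity Cache are served at its internal
 * rate, the remainder fall through to GDDR6. */
#include <stdio.h>

int main(void) {
    double cache_bw = 1986.6;  /* GB/s, peak Infinity Cache transfer rate (quoted) */
    double vram_bw  = 512.0;   /* GB/s, assumed GDDR6 subsystem (illustrative)     */
    for (double hit = 0.0; hit <= 1.0; hit += 0.25) {
        double effective = hit * cache_bw + (1.0 - hit) * vram_bw;
        printf("hit rate %.2f -> effective bandwidth %7.1f GB/s\n", hit, effective);
    }
    return 0;
}
```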
=== Power efficiency ===
AMD claims that RDNA 2 achieves up to a 54% increase in performance-per-watt over the first RDNA microarchitecture. 21% of that 54% improvement is attributed to performance-per-clock enhancements, in part due to the addition of Infinity Cache.
=== Media engine ===
RDNA 2 uses the VCN 3.0, VCN 3.1, and VCN 3.1.2 video decoding blocks in its media engine. It adds support for AV1 decoding at up to 8K resolution, though AV1 hardware encoding support would not arrive until RDNA 3 in 2022. However, the low-end Navi 24 die and iGPUs based on RDNA 2 contain no media encoders and also lack AV1 decoding.
== Navi 2x dies ==
== Products ==
=== Desktop ===
=== Mobile ===
=== Workstation ===
==== Desktop Workstation ====
==== Mobile Workstation ====
=== Integrated graphics processing units (iGPUs) ===
=== Consoles ===
== See also ==
Ampere – competing Nvidia microarchitecture released in a similar time-frame, powering the GeForce RTX 30 series
== References == | Wikipedia/RDNA_2 |
In computing, a cryptographic accelerator is a co-processor designed specifically to perform computationally intensive cryptographic operations, doing so far more efficiently than the general-purpose CPU. Because many servers' system loads consist mostly of cryptographic operations, this can greatly increase performance.
Intel's AES-NI is by far the most common cryptographic accelerator in commodity hardware. VIA PadLock is an earlier example.
== Operating system support ==
Several operating systems provide some support for cryptographic hardware. The BSD family of systems has the OpenBSD Cryptographic Framework (OCF), Linux systems have the Crypto API, Solaris OS has the Solaris Cryptographic Framework (SCF) and Microsoft Windows has the Microsoft CryptoAPI.
Some cryptographic accelerators offer new machine instructions and can therefore be used directly by programs. Libraries such as OpenSSL and LibreSSL support some such cryptographic accelerators.
Almost all Unix-like operating systems use OpenSSL or the fork LibreSSL as their cryptography library. These libraries use cryptographic accelerators such as AES-NI if available.
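As an illustration, the following minimal C sketch encrypts a buffer through OpenSSL's EVP interface; on a CPU with AES-NI, OpenSSL's runtime dispatch transparently routes these calls to the accelerated implementation, so the calling code is identical with or without the accelerator. Error handling is abbreviated, and the key and IV are placeholders:

```c
/* Minimal AES-128-CTR encryption via OpenSSL EVP. Compile with -lcrypto. */
#include <openssl/evp.h>
#include <stdio.h>

int main(void) {
    unsigned char key[16] = "0123456789abcdef";   /* placeholder key, not secret   */
    unsigned char iv[16]  = {0};                  /* placeholder IV; never reuse   */
    unsigned char plain[] = "hello, accelerated world";
    unsigned char cipher[sizeof plain + 16];
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    if (!ctx) return 1;
    if (EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv) != 1) return 1;
    if (EVP_EncryptUpdate(ctx, cipher, &len, plain, sizeof plain) != 1) return 1;
    total = len;
    if (EVP_EncryptFinal_ex(ctx, cipher + total, &len) != 1) return 1;
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    printf("encrypted %d bytes\n", total);
    return 0;
}
```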
== See also ==
TLS acceleration
Hardware-based Encryption
== References == | Wikipedia/Cryptographic_accelerator |
Super VGA (SVGA) or Extended VGA is a broad term that covers a wide range of computer display standards that extended IBM's VGA specification.
When used as shorthand for a resolution, as VGA and XGA often are, SVGA refers to a resolution of 800 × 600.
== History ==
In the late 1980s, after the release of IBM's VGA, third-party manufacturers began making graphics cards based on its specifications with extended capabilities. As these cards grew in popularity, they began to be referred to as "Super VGA".
This term was not an official standard, but a shorthand for enhanced VGA cards, which had become common by 1988. The first cards that explicitly used the term were Genoa Systems' SuperVGA and SuperVGA HiRes in 1987.
Super VGA cards broke compatibility with the IBM VGA standard, requiring software developers to provide specific display drivers and implementations for each card their software might run on. Initially, the heavy burden this placed on software developers slowed the uptake of Super VGA cards, which motivated VESA to produce a unifying standard, the VESA BIOS Extensions (VBE), first introduced in 1989, to provide a common software interface to all cards implementing the VBE specification.
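As an illustration of that common interface, the following minimal sketch selects a VBE mode through video interrupt 10h. It assumes a Borland-style 16-bit DOS compiler (dos.h, int86()) and is illustrative only; 0x101 is the VBE-defined 640 × 480 256-color mode number:

```c
/* Setting an SVGA mode through the VESA BIOS Extensions from real-mode DOS. */
#include <dos.h>

int vbe_set_mode(unsigned mode) {
    union REGS r;
    r.x.ax = 0x4F02;          /* VBE function 4F02h: set SuperVGA video mode */
    r.x.bx = mode;            /* e.g. 0x101 = 640x480, 256 colors            */
    int86(0x10, &r, &r);      /* video BIOS interrupt                        */
    return r.x.ax == 0x004F;  /* AL=4Fh supported, AH=00h successful         */
}

int main(void) {
    return vbe_set_mode(0x101) ? 0 : 1;
}
```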
Eventually, Super VGA graphics adapters supported innumerable modes.
== Specifications ==
The Super VGA standardized the following resolutions:
640 × 400 or 640 × 480 with 256 colors
800 × 600 with 24-bit color depth
1024 × 768 with 24-bit color depth
1280 × 1024 with 24-bit color depth
SVGA uses the same DE-15 VGA connector as the original standard, and otherwise operates over the same cabling and interfaces as VGA.
== Early manufacturers ==
Some early Super VGA manufacturers and some of their models, where available:
Ahead Technologies (Not related to Nero AG, formerly Ahead Software)
Amdek: VGA ADAPTER/132 (Tseng Labs chipset)
AST Research, Inc.: VGA Plus (rebranded Paradise)
ATI Technologies: VIP (82C451), VGA Wonder
Chips and Technologies: 82C451
Cirrus Logic: CL-GD410/420
Compaq: VGC Board (Paradise chipset)
Everex
Genoa Systems: Genoa VGA 5100-5400 (ET3000)
Orchid Technology: Designer VGA (ET3000), Pro Designer Plus
Western Digital's Paradise Inc.: VGA Plus (PVGA1), VGA Plus 16, VGA Pro
Sigma Designs: SigmaVGA (ET3000)
STB Systems: VGA Extra/EM (ET3000)
Video Seven: V-RAM VGA
Willow Peripherals: VGA-TV/Publisher's, VGA-TV + Genlock
Trident Microsystems: TVGA8800, TVGA8900, and TVGA9000 series
== References == | Wikipedia/Super_video_graphics_array |
Graphics Core Next (GCN) is the codename for a series of microarchitectures and an instruction set architecture that were developed by AMD for its GPUs as the successor to its TeraScale microarchitecture. The first product featuring GCN was launched on January 9, 2012.
GCN is a reduced instruction set SIMD microarchitecture, in contrast to the very long instruction word SIMD architecture of TeraScale. GCN requires considerably more transistors than TeraScale, but offers advantages for general-purpose GPU (GPGPU) computation due to a simpler compiler.
GCN graphics chips were fabricated with CMOS at 28 nm, and with FinFET at 14 nm (by Samsung Electronics and GlobalFoundries) and 7 nm (by TSMC), available on selected models in AMD's Radeon HD 7000, HD 8000, 200, 300, 400, 500 and Vega series of graphics cards, including the separately released Radeon VII. GCN was also used in the graphics portion of Accelerated Processing Units (APUs), including those in the PlayStation 4 and Xbox One.
GCN was succeeded by the RDNA microarchitecture and instruction set architecture in 2019.
== Instruction set ==
The GCN instruction set is owned by AMD and was developed specifically for GPUs. It has no micro-operation for division.
Documentation is available for:
the Graphics Core Next 1 instruction set,
the Graphics Core Next 2 instruction set,
the Graphics Core Next 3 and 4 instruction sets,
the Graphics Core Next 5 instruction set, and
the "Vega" 7nm instruction set architecture (also referred to as Graphics Core Next 5.1).
An LLVM compiler back end is available for the GCN instruction set. It is used by Mesa 3D.
GNU Compiler Collection 9 supports GCN 3 and GCN 5 since 2019 for single-threaded, stand-alone programs, with GCC 10 also offloading via OpenMP and OpenACC.
MIAOW is an open-source RTL implementation of the AMD Southern Islands GPGPU microarchitecture.
In November 2015, AMD announced its Boltzmann Initiative, which aims to enable the porting of CUDA-based applications to a common C++ programming model.
At the Super Computing 15 event, AMD displayed a Heterogeneous Compute Compiler (HCC), a headless Linux driver and HSA runtime infrastructure for cluster-class high-performance computing, and a Heterogeneous-compute Interface for Portability (HIP) tool for porting CUDA applications to the aforementioned common C++ model.
== Microarchitectures ==
As of July 2017, the Graphics Core Next instruction set has seen five iterations. The differences between the first four generations are rather minimal, but the fifth-generation GCN architecture features heavily modified stream processors to improve performance and support the simultaneous processing of two lower-precision numbers in place of a single higher-precision number.
=== Command processing ===
==== Graphics Command Processor ====
The Graphics Command Processor (GCP) is a functional unit of the GCN microarchitecture. Among other tasks, it is responsible for the handling of asynchronous shaders.
==== Asynchronous Compute Engine ====
The Asynchronous Compute Engine (ACE) is a distinct functional block serving computing purposes, whose purpose is similar to that of the Graphics Command Processor.
==== Schedulers ====
Since the third iteration of GCN, the hardware contains two schedulers: one to schedule "wavefronts" during shader execution (the CU Scheduler, or Compute Unit Scheduler) and the other to schedule execution of draw and compute queues. The latter helps performance by executing compute operations when the compute units (CUs) are underutilized due to graphics commands limited by fixed function pipeline speed or bandwidth. This functionality is known as Async Compute.
For a given shader, the GPU drivers may also schedule instructions on the CPU to minimize latency.
=== Geometric processor ===
The geometry processor contains a Geometry Assembler, a Tessellator, and a Vertex Assembler.
The Tessellator is capable of doing tessellation in hardware as defined by Direct3D 11 and OpenGL 4.5, and succeeded ATI TruForm and the hardware tessellation in TeraScale as AMD's then-latest semiconductor intellectual property core.
=== Compute units ===
One compute unit (CU) combines 64 shader processors with 4 texture mapping units (TMUs). The compute units are separate from, but feed into, the render output units (ROPs). Each compute unit consists of the following:
a CU scheduler
a Branch & Message Unit
4 16-lane-wide SIMD Vector Units (SIMD-VUs)
4 64 KiB vector general-purpose register (VGPR) files
1 scalar unit (SU)
an 8 KiB scalar GPR file
a local data share of 64 KiB
4 Texture Filter Units
16 Texture Fetch Load/Store Units
a 16 KiB level 1 (L1) cache
Four compute units are wired to share a 32 KiB L1 instruction cache and a 16 KiB scalar L1 data cache, both of which are read-only. A SIMD-VU operates on 16 elements at a time (per cycle), while a SU can operate on one at a time (one per cycle). In addition, the SU handles some other operations, such as branching.
Every SIMD-VU has some private memory where it stores its registers. There are two types of registers: scalar registers (S0, S1, etc.), which each hold a 4-byte number, and vector registers (V0, V1, etc.), which each represent a set of 64 4-byte numbers. On the vector registers, every operation is done in parallel on the 64 numbers, which correspond to 64 inputs. For example, it may work on 64 different pixels at a time (for each of them the inputs are slightly different, and thus a slightly different color is obtained at the end).
Every SIMD-VU has room for 512 scalar registers and 256 vector registers.
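The scalar/vector split can be modelled in plain C as follows. This is an illustrative sketch only; the v_add_u32 name merely echoes the style of GCN mnemonics, and the lane loop stands in for what the hardware performs in parallel:

```c
/* Model of GCN register types: a scalar register holds one 32-bit value,
 * a vector register is logically 64 lanes of 32 bits, one per wavefront
 * thread, and a single vector instruction applies to all lanes at once. */
#include <stdint.h>
#include <stdio.h>

#define LANES 64  /* threads per GCN wavefront */

typedef uint32_t sreg;         /* scalar register: one value          */
typedef uint32_t vreg[LANES];  /* vector register: one value per lane */

/* one "instruction", 64 lane-wise additions */
static void v_add_u32(vreg dst, const vreg a, const vreg b) {
    for (int lane = 0; lane < LANES; lane++)
        dst[lane] = a[lane] + b[lane];
}

int main(void) {
    vreg a, b, c;
    for (int lane = 0; lane < LANES; lane++) { a[lane] = lane; b[lane] = 100; }
    v_add_u32(c, a, b);
    printf("lane 0: %u, lane 63: %u\n", c[0], c[63]);  /* prints 100 and 163 */
    return 0;
}
```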
AMD has claimed that each GCN compute unit (CU) has 64 KiB Local Data Share (LDS).
==== CU scheduler ====
The CU scheduler is the hardware functional block, choosing which wavefronts the SIMD-VU executes. It picks one SIMD-VU per cycle for scheduling. This is not to be confused with other hardware or software schedulers.
==== Wavefront ====
A shader is a small program written in GLSL that performs graphics processing, and a kernel is a small program written in OpenCL that performs GPGPU processing. These processes don't need that many registers, but they do need to load data from system or graphics memory. This operation comes with significant latency. AMD and Nvidia chose similar approaches to hide this unavoidable latency: the grouping of multiple threads. AMD calls such a group a "wavefront", whereas Nvidia calls it a "warp". A group of threads is the most basic unit of scheduling of GPUs that implement this approach to hide latency. It is the minimum size of the data processed in SIMD fashion, the smallest executable unit of code, and the way to process a single instruction over all of its threads at the same time.
In all GCN GPUs, a "wavefront" consists of 64 threads, and in all Nvidia GPUs, a "warp" consists of 32 threads.
AMD's solution is to attribute multiple wavefronts to each SIMD-VU. The hardware distributes the registers to the different wavefronts, and when one wavefront is waiting on some result, which lies in memory, the CU Scheduler assigns the SIMD-VU another wavefront. Wavefronts are attributed per SIMD-VU. SIMD-VUs do not exchange wavefronts. A maximum of 10 wavefronts can be attributed per SIMD-VU (thus 40 per CU).
AMD CodeXL shows tables relating the number of SGPRs and VGPRs in use to the number of resident wavefronts; essentially, between 104 and 512 SGPRs and 256 VGPRs are available per SIMD-VU to be divided among the wavefronts.
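The vector-register side of this arithmetic can be sketched as follows, assuming the 256 vector registers per SIMD-VU and the 10-wavefront cap mentioned above, and ignoring SGPR limits and the allocation-granularity rules a real compiler applies:

```c
/* Occupancy sketch: a kernel using N VGPRs per thread can keep at most
 * min(10, 256 / N) wavefronts resident on one SIMD-VU. */
#include <stdio.h>

static int max_wavefronts(int vgprs_per_thread) {
    int limit = 256 / vgprs_per_thread;  /* VGPR budget per SIMD-VU   */
    return limit < 10 ? limit : 10;      /* hardware cap of 10 waves  */
}

int main(void) {
    int usage[] = {24, 32, 64, 128, 256};
    for (int i = 0; i < 5; i++)
        printf("%3d VGPRs/thread -> %2d wavefronts per SIMD-VU\n",
               usage[i], max_wavefronts(usage[i]));
    return 0;
}
```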
Note that, as with SIMD CPU instruction sets such as SSE, this concept of the most basic level of parallelism is often called the "vector width". The vector width is characterized by the total number of bits in it.
==== SIMD Vector Unit ====
Each SIMD Vector Unit has:
a 16-lane integer and floating point vector Arithmetic Logic Unit (ALU)
64 KiB Vector General Purpose Register (VGPR) file
10× 48-bit Program Counters
Instruction buffer for 10 wavefronts (each wavefront is a group of 64 threads, or the size of one logical VGPR)
A 64-thread wavefront issues to a 16-lane SIMD Unit over four cycles
Each SIMD-VU has 10 wavefront instruction buffers, and it takes 4 cycles to execute one wavefront.
=== Audio and video acceleration blocks ===
Many implementations of GCN are accompanied by several of AMD's other ASIC blocks, including but not limited to the Unified Video Decoder, Video Coding Engine, and AMD TrueAudio.
==== Video Coding Engine ====
The Video Coding Engine is a video encoding ASIC, first introduced with the Radeon HD 7000 series.
The initial version of the VCE added support for encoding H.264 I and P frames in the YUV420 pixel format, along with SVC temporal encode and Display Encode Mode, while the second version added B-frame support for YUV420 and support for YUV444 I-frames.
VCE 3.0 formed a part of the third generation of GCN, adding high-quality video scaling and the HEVC (H.265) codec.
VCE 4.0 was part of the Vega architecture, and was subsequently succeeded by Video Core Next.
==== TrueAudio ====
=== Unified virtual memory ===
In a preview in 2011, AnandTech wrote about the unified virtual memory, supported by Graphics Core Next.
=== Heterogeneous System Architecture (HSA) ===
Some of the specific HSA features implemented in the hardware need support from the operating system's kernel (its subsystems) and/or from specific device drivers. For example, in July 2014, AMD published a set of 83 patches to be merged into Linux kernel mainline 3.17 for supporting their Graphics Core Next-based Radeon graphics cards. The so-called HSA kernel driver resides in the directory /drivers/gpu/hsa, while the DRM graphics device drivers reside in /drivers/gpu/drm and augment the already existing DRM drivers for Radeon cards. This very first implementation focuses on a single "Kaveri" APU and works alongside the existing Radeon kernel graphics driver (kgd).
=== Lossless Delta Color Compression ===
=== Hardware schedulers ===
Hardware schedulers are used to perform scheduling and to offload the assignment of compute queues to the ACEs from the driver to hardware. They buffer these queues until there is at least one empty queue slot in at least one ACE, at which point the HWS immediately assigns buffered queues to the ACEs until all queues are full or there are no more queues to safely assign.
Part of the scheduling work performed includes prioritized queues, which allow critical tasks to run at a higher priority than other tasks without requiring the lower-priority tasks to be preempted. High-priority tasks are scheduled to occupy the GPU as much as possible, while other tasks run concurrently using the resources the high-priority tasks leave free. These schedulers are essentially Asynchronous Compute Engines that lack dispatch controllers. They were first introduced in the fourth-generation GCN microarchitecture, but were also present in the third generation for internal testing purposes. A driver update enabled the hardware schedulers in third-generation GCN parts for production use.
=== Primitive Discard Accelerator ===
This unit discards degenerate triangles before they enter the vertex shader and triangles that do not cover any fragments before they enter the fragment shader. This unit was introduced with the fourth generation GCN microarchitecture.
== Generations ==
=== Graphics Core Next 1 ===
The GCN 1 microarchitecture was used in several Radeon HD 7000 series graphics cards.
support for 64-bit addressing (x86-64 address space) with unified address space for CPU and GPU
support for PCIe 3.0
GPU sends interrupt requests to CPU on various events (such as page faults)
support for Partially Resident Textures, which enable virtual memory support through DirectX and OpenGL extensions
AMD PowerTune support, which dynamically adjusts performance to stay within a specific TDP
support for Mantle (API)
There are Asynchronous Compute Engines controlling computation and dispatching.
==== ZeroCore Power ====
ZeroCore Power is a long idle power saving technology, shutting off functional units of the GPU when not in use. AMD ZeroCore Power technology supplements AMD PowerTune.
==== Chips ====
Discrete GPUs (Southern Islands family):
Hainan
Oland
Cape Verde
Pitcairn
Tahiti
=== Graphics Core Next 2 ===
The 2nd generation of GCN was introduced with the Radeon HD 7790 and is also found in the Radeon HD 8770, R7 260/260X, R9 290/290X, R9 295X2, R7 360, and R9 390/390X, as well as Steamroller-based desktop "Kaveri" APUs and mobile "Kaveri" APUs and in the Puma-based "Beema" and "Mullins" APUs. It has multiple advantages over the original GCN, including FreeSync support, AMD TrueAudio and a revised version of AMD PowerTune technology.
GCN 2nd generation introduced an entity called a "Shader Engine" (SE). A Shader Engine comprises one geometry processor, up to 44 CUs (Hawaii chip), rasterizers, ROPs, and L1 cache. The Graphics Command Processor, the 8 ACEs, the L2 cache and memory controllers, as well as the audio and video accelerators, the display controllers, the 2 DMA controllers, and the PCIe interface are not part of a Shader Engine.
The A10-7850K "Kaveri" contains 8 CUs (compute units) and 8 Asynchronous Compute Engines for independent scheduling and work item dispatching.
At the AMD Developer Summit (APU13) in November 2013, Michael Mantor presented the Radeon R9 290X.
==== Chips ====
Discrete GPUs (Sea Islands family):
Bonaire
Hawaii
integrated into APUs:
Temash
Kabini
Liverpool (i.e. the APU found in the PlayStation 4)
Durango (i.e. the APU found in the Xbox One and Xbox One S)
Kaveri
Godavari
Mullins
Beema
Carrizo-L
=== Graphics Core Next 3 ===
GCN 3rd generation was introduced in 2014 with the Radeon R9 285 and R9 M295X, which have the "Tonga" GPU. It features improved tessellation performance, lossless delta color compression to reduce memory bandwidth usage, an updated and more efficient instruction set, a new high-quality scaler for video, HEVC encoding (VCE 3.0) and HEVC decoding (UVD 6.0), and a new multimedia engine (video encoder/decoder). Delta color compression is supported in Mesa. However, its double-precision performance is worse than that of the previous generation.
==== Chips ====
discrete GPUs:
Tonga (Volcanic Islands family), comes with UVD 5.0 (Unified Video Decoder)
Fiji (Pirate Islands family), comes with UVD 6.0 and High Bandwidth Memory (HBM 1)
integrated into APUs:
Carrizo, comes with UVD 6.0
Bristol Ridge
Stoney Ridge
=== Graphics Core Next 4 ===
GPUs of the Arctic Islands family were introduced in Q2 2016 with the AMD Radeon 400 series. The 3D engine (i.e. the GCA (Graphics and Compute Array), or GFX) is identical to that found in the Tonga chips, but Polaris features a newer Display Controller engine, UVD version 6.3, and other updates.
All Polaris-based chips other than Polaris 30 are produced on the 14 nm FinFET process, developed by Samsung Electronics and licensed to GlobalFoundries. The slightly newer, refreshed Polaris 30 is built on the 12 nm LP FinFET process node, developed by Samsung and GlobalFoundries. The fourth-generation GCN instruction set architecture is compatible with the third generation; it is an optimization for the 14 nm FinFET process, enabling higher GPU clock speeds than the 3rd GCN generation. Architectural improvements include new hardware schedulers, a new primitive discard accelerator, a new display controller, and an updated UVD that can decode HEVC at 4K resolution at 60 frames per second with 10 bits per color channel.
==== Chips ====
discrete GPUs:
Polaris 10 (also codenamed Ellesmere) found on "Radeon RX 470" and "Radeon RX 480"-branded graphics cards
Polaris 11 (also codenamed Baffin) found on "Radeon RX 460"-branded graphics cards (also Radeon RX 560D)
Polaris 12 (also codenamed Lexa) found on "Radeon RX 550" and "Radeon RX 540"-branded graphics cards
Polaris 20, which is a refreshed (14 nm LPP Samsung/GloFo FinFET process) Polaris 10 with higher clocks, used for "Radeon RX 570" and "Radeon RX 580"-branded graphics cards
Polaris 21, which is a refreshed (14 nm LPP Samsung/GloFo FinFET process) Polaris 11, used for "Radeon RX 560"-branded graphics cards
Polaris 22, found on "Radeon RX Vega M GH" and "Radeon RX Vega M GL"-branded graphics cards (as part of Kaby Lake-G)
Polaris 23, which is a refreshed (14 nm LPP Samsung/GloFo FinFET process) Polaris 12, used for "Radeon Pro WX 3200" and "Radeon RX 540X"-branded graphics cards (also Radeon RX 640)
Polaris 30, which is a refreshed (12 nm LP GloFo FinFET process) Polaris 20 with higher clocks, used for "Radeon RX 590"-branded graphics cards
In addition to dedicated GPUs, Polaris is utilized in the APUs of the PlayStation 4 Pro and Xbox One X, titled "Neo" and "Scorpio", respectively.
==== Precision Performance ====
FP64 performance of all GCN 4th generation GPUs is 1/16 of FP32 performance.
=== Graphics Core Next 5 ===
AMD began releasing details of their next generation of GCN architecture, termed the "Next-Generation Compute Unit", in January 2017. The new design was expected to increase instructions per clock, reach higher clock speeds, and add support for HBM2 and a larger memory address space. The discrete graphics chipsets also include an "HBCC (High Bandwidth Cache Controller)", which is absent when the design is integrated into APUs. Additionally, the new chips were expected to include improvements in the rasterisation and render output units. The stream processors are heavily modified from the previous generations to support packed-math "Rapid Packed Math" technology for 8-bit, 16-bit, and 32-bit numbers. This yields a significant performance advantage when lower precision is acceptable (for example, processing two half-precision numbers at the same rate as a single single-precision number).
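The packed-math idea can be illustrated in ordinary C with a SWAR (SIMD within a register) sketch: two independent 16-bit lanes stored in one 32-bit word and added with a single 32-bit operation, masked so that no carry crosses the lane boundary. This is a conceptual analogue only; Vega implements packed FP16/INT16 arithmetic natively in its stream processors:

```c
/* Two 16-bit lanes processed by one 32-bit operation. */
#include <stdint.h>
#include <stdio.h>

static uint32_t packed_add16(uint32_t a, uint32_t b) {
    /* add the low 15 bits of each lane, then patch in the top bits so
     * that no carry propagates from the low lane into the high lane */
    return ((a & 0x7FFF7FFFu) + (b & 0x7FFF7FFFu)) ^ ((a ^ b) & 0x80008000u);
}

int main(void) {
    uint32_t a = (1000u << 16) | 2000u;  /* lanes: hi=1000, lo=2000 */
    uint32_t b = (3000u << 16) | 4000u;  /* lanes: hi=3000, lo=4000 */
    uint32_t s = packed_add16(a, b);
    printf("hi lane: %u, lo lane: %u\n", s >> 16, s & 0xFFFFu);  /* 4000, 6000 */
    return 0;
}
```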
Nvidia introduced tile-based rasterization and binning with Maxwell, and this was a big reason for Maxwell's efficiency increase. In January 2017, AnandTech speculated that Vega would finally catch up with Nvidia in energy-efficiency optimizations thanks to the new "DSBR (Draw Stream Binning Rasterizer)" to be introduced with Vega.
It also added support for a new shader stage, primitive shaders, which provide more flexible geometry processing and replace the vertex and geometry shaders in a rendering pipeline. As of December 2018, primitive shaders could not be used because the required API changes had not yet been made.
Vega 10 and Vega 12 use the 14 nm FinFET process, developed by Samsung Electronics and licensed to GlobalFoundries. Vega 20 uses the 7 nm FinFET process developed by TSMC.
==== Chips ====
discrete GPUs:
Vega 10 (14 nm Samsung/GloFo FinFET process) (also codenamed Greenland) found on "Radeon RX Vega 64", "Radeon RX Vega 56", "Radeon Vega Frontier Edition", "Radeon Pro V340", Radeon Pro WX 9100, and Radeon Pro WX 8200 graphics cards
Vega 12 (14 nm Samsung/GloFo FinFET process) found on "Radeon Pro Vega 20" and "Radeon Pro Vega 16"-branded mobile graphics cards
Vega 20 (7 nm TSMC FinFET process) found on "Radeon Instinct MI50" and "Radeon Instinct MI60"-branded accelerator cards, "Radeon Pro Vega II", and "Radeon VII"-branded graphics cards.
integrated into APUs:
Raven Ridge came with VCN 1 which supersedes VCE and UVD and allows full fixed-function VP9 decode.
Picasso
Renoir
Cezanne
==== Precision performance ====
Double-precision floating-point (FP64) performance of all GCN 5th generation GPUs, except for Vega 20, is one-sixteenth of FP32 performance. For Vega 20 with Radeon Instinct this is half of FP32 performance. For Vega 20 with Radeon VII this is a quarter of FP32 performance. All GCN 5th generation GPUs support half-precision floating-point (FP16) calculations at twice the FP32 rate.
== Comparison of GCN GPUs ==
Table contains only discrete GPUs (including mobile). APU(IGP) and console SoCs are not listed.
1 Old code names such as Treasure (Lexa) or Hawaii Refresh (Ellesmere) are not listed.
2 Initial launch date. Launch dates of variant chips such as Polaris 20 (April 2017) are not listed.
== See also ==
List of AMD graphics processing units
== External links ==
Official AMD.com Graphics Core Next (GCN) website
== References == | Wikipedia/Graphics_Core_Next |
A video display controller (VDC), also called a display engine or display interface, is an integrated circuit which is the main component in a video-signal generator, a device responsible for the production of a TV video signal in a computing or game system. Some VDCs also generate an audio signal, but that is not their main function.
VDCs were used in the home computers of the 1980s and also in some early video picture systems.
The VDC is the main component of the video signal generator logic, responsible for generating the timing of video signals such as the horizontal and vertical synchronization signals and the blanking interval signal. Sometimes other supporting chips were necessary to build a complete system, such as RAM to hold pixel data, ROM to hold character fonts, or some discrete logic such as shift registers.
Most often the VDC chip is completely integrated in the logic of the main computer system (its video RAM appears in the memory map of the main CPU), but sometimes it functions as a coprocessor that can manipulate the video RAM contents independently.
== Video display controller vs. graphics processing unit ==
The difference between a display controller, a graphics accelerator, and a video compression/decompression IC is substantial, but since all of this logic is usually found on the chip of a graphics processing unit and is usually not available separately to the end customer, these very different functional blocks are often confused.
GPUs with hardware acceleration became popular during the 1990s, including the S3 ViRGE, the Matrox Mystique, and the Voodoo Graphics, though earlier examples such as the NEC μPD7220 had already existed for some time. VDCs often had special hardware for the creation of "sprites", a function that in more modern VDP chips is done with the "Bit Blitter" using the "Bit blit" function.
One example of a typical video display processor is the "VDP2 32-bit background and scroll plane video display processor" of the Sega Saturn.
Another example is the Lisa (AGA) chip that was used for the improved graphics of the later generation Amiga computers.
That said, it is not completely clear when a "video chip" is a "video display controller" and when it is a "video display processor". For example, the TMS9918 is sometimes called a "video display controller" and sometimes a "video display processor". In general, however, a "video display processor" has some power to "process" the contents of the video RAM (filling an area of RAM, for example), while a "video display controller" only controls the timing of the video synchronization signals and the access to the video RAM.
The graphics processing unit (GPU) goes one step further than the VDP and normally also supports 3D functionality. This is the kind of chip that is used in modern personal computers.
== Types ==
Video display controllers can be divided into several different types, listed here from simplest to most complex:
Video shifters, or "video shift register based systems" (there is no generally agreed-upon name for these types of devices), are the simplest type of video controller. They are directly or indirectly responsible for the video timing signals, but they normally do not access the video RAM directly. They get the video data from the main CPU, a byte at a time, and convert it to a serial bitstream, hence the technical name "video shifter". This serial data stream is then used together with the synchronization signals to output a video signal. The main CPU needs to do the bulk of the work. Normally these chips only support a very low-resolution raster graphics mode.
A CRTC, or cathode-ray tube controller, generates the video timings and reads video data from RAM attached to the CRTC, outputting it via an external character generator ROM (for text modes) or directly to the video output shift register (for high-resolution graphics modes). Because the actual capabilities of the video generator depend to a large degree on the external logic, a video generator based on a CRTC chip can have a wide range of capabilities, from simple text-mode-only systems to high-resolution systems supporting a wide range of colours. Sprites, however, are normally not supported by these systems.
Video interface controllers are much more complex than CRT controllers, and the external circuitry that is needed with a CRTC is embedded in the video controller chip. Sprites are often supported, as are (RAM-based) character generators and video RAM dedicated to colour attributes and palette registers (colour lookup tables) for the high-resolution or text modes.
Video coprocessors have their own internal CPU dedicated to reading (and writing) their own video RAM (which may be shared with the CPU), and converting the contents of this video RAM to a video signal. The main CPU can give commands to the coprocessor, for example to change the video modes or to manipulate the video RAM contents. The video coprocessor also controls the (most often RAM-based) character generator, the colour attribute RAM, palette registers, and the sprite logic (as long as these exist of course).
== List of example VDCs ==
Examples of video display controllers are:
Video shifters
The RCA CDP1861 was a very simple chip, built in CMOS technology (which was unusual for the mid-1970s) to complement the RCA 1802 microprocessor; it was mainly used in the COSMAC VIP and could only support a very low-resolution monochrome graphics mode.
The Television Interface Adaptor (TIA) is the custom video chip at the heart of the Atari 2600 games console, a primitive chip that relied on the 6502 microprocessor to do most of the work; it was also used to generate the audio.
CRT Controllers
The Intel 8275 CRT controller was used in the Convergent Technologies AWS / Burroughs B20, along with some S-100 bus systems.
The Motorola 6845 (MC6845) is a video address generator first introduced by Motorola and used in the Amstrad CPC and the BBC Micro. It was also used for almost all the early video adapters for the PC, such as the MDA, CGA and EGA adapters. The MDA and CGA use an actual Motorola chip, while the EGA has a custom IBM chipset of five LSI chips; one of those chips includes IBM's reimplementation of the CRTC, which operates like an MC6845 but differs in a few register addresses and functions so it is not 100% compatible. In all later VGA compatible adapters the function of the 6845 is still reproduced inside the video chip, so in a sense all current IBM PC compatible PCs still incorporate the logic of the 6845 CRTC (see the programming sketch following this list).
Video interface controllers
The Signetics 2636 and 2637 are video controllers best known for their use in the Interton VC 4000 and Emerson Arcadia 2001 respectively.
The MC6847 is a video display generator (VDG) first introduced by Motorola and used in the TRS-80 Color Computer, Dragon 32/64, Laser 200 and Acorn Atom among others.
The MOS Technology 6560 (NTSC) and 6561 (PAL) are known as the video interface controller (VIC) and used in the VIC-20.
The MOS Technology 6567/8562/8564 (NTSC versions) and 6569/8565/8566 (PAL) were known as the VIC-II and were used in the Commodore 64.
The MOS Technology 8563/8568 was used in the Commodore 128 (8563) and Commodore 128D (8568) to create an 80 column text display, as well as several high resolution graphics modes. The Commodore 128 models included a VIC-II to support Commodore 64 compatible video modes.
The MOS Technology 7360 text editing device (TED) was used in the Commodore Plus/4, Commodore 16 and Commodore 116 computers and had an integrated audio capability.
The Philips semiconductors SCC66470 was a VSC (Video- and Systems Controller) used in conjunction with their 68070-Microcontroller e.g. in CD-i systems.
Video coprocessors
The ANTIC (Alpha-Numeric Television Interface Circuit) was an early video system chip used in Atari 8-bit computers. It could read a "Display list" with its own built in CPU and use this data to generate a complex video signal.
The TMS9918 is known as the Video Display Processor (VDP) and was first designed for the Texas Instruments TI-99/4, but was later also used in systems like the MSX (MSX-1), ColecoVision, Memotech MTX series, and for the Sega SG-1000 and SC-3000. The Master System uses an enhanced VDP based on the TMS9918, and the Sega 315-5313 (Yamaha YM7101) VDP used in the Sega Genesis and some arcade machines is a further advancement of the Master System VDP with the original (inferior) TMS9918 modes removed.
The NEC μPD7220 was used in some high-end graphics boards for the IBM PC in the mid-1980s, notably in products from Number Nine Visual Technology.
The RP2C02 (NTSC) or RP2C07 (PAL) was a video coprocessor designed by Ricoh for Nintendo's use in the Famicom and Nintendo Entertainment System. It was connected to 2048 bytes of dedicated video RAM, and had a dedicated address bus that allowed additional RAM or ROM to be accessed from the game cartridge. A scrollable playfield of 256×240 pixels was supported, along with a display list of 64 OBJs (sprites), of which 8 could be displayed per scanline.
The Yamaha V9938 is an improved version of the TMS9918, and was mainly used in the MSX2.
The Yamaha V9958 is the Video Display Processor (VDP) mainly used in the MSX2+ and MSX turboR computers.
The VLSI VS21S010D-L is a 128kB SPI/parallel SRAM with an integrated video display controller with variable-bit-depth pixels and a block-move blitter.
The Thomson EF936x series of Graphic Display Processor (GDP), which offers a draw rate of 1 million pixels per second and resolutions up to 1024×512.
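As referenced in the MC6845 entry above, CRTC-family chips are programmed through an index/data register pair; on CGA-class PC adapters these appear at I/O ports 0x3D4/0x3D5. The following minimal sketch, assuming a Borland-style 16-bit DOS compiler with outportb() and intended as illustration only, reprograms the 6845 start-address registers (R12/R13), the mechanism behind hardware scrolling:

```c
/* 6845-style CRTC programming on a PC-class adapter. */
#include <dos.h>

static void crtc_write(unsigned char reg, unsigned char val) {
    outportb(0x3D4, reg);  /* CRTC index port */
    outportb(0x3D5, val);  /* CRTC data port  */
}

/* Scroll by pointing the CRTC at a new start offset (in character cells),
 * without moving any data in video RAM. */
void set_start_address(unsigned offset) {
    crtc_write(12, offset >> 8);    /* R12: start address, high byte */
    crtc_write(13, offset & 0xFF);  /* R13: start address, low byte  */
}
```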
== Alternatives to a VDC chip ==
Note that many early home computers did not use a VDC chip, but built the whole video display controller from many discrete logic chips (examples are the Apple II, PET, and TRS-80). Because these methods are very flexible, such video display generators could be very capable (or extremely primitive, depending on the quality of the design), but they also needed many components.
Many early systems used some form of an early programmable logic array to create a video system; examples include the ZX Spectrum and ZX81 systems and the Elektronika BK-0010, but there were many others. Early implementations were often very primitive, but later implementations sometimes resulted in fairly advanced video systems, like the one in the SAM Coupé. On the lower end, as in the ZX81, the hardware would only perform electrical functions, and the timing and level of the video stream were provided by the microprocessor. As the video data rate was high relative to the processor speed, the computer could only perform actual non-display computations during the retrace period between display frames. This limited performance to at most 25% of overall available CPU cycles.
These systems could thus build a very capable system with relatively few components, but the low transistor count of early programmable logic meant that the capabilities of early PLA-based systems were often less impressive than those using the video interface controllers or video coprocessors that were available at the same time. Later PLA solutions, such as those using CPLDs or FPGAs, could result in much more advanced video systems, surpassing those built using off-the-shelf components.
An often-used hybrid solution was to use a video interface controller (often the Motorola 6845) as a basis and expand its capabilities with programmable logic or an ASIC. An example of such a hybrid solution is the original VGA card, that used a 6845 in combination with an ASIC. That is why all current VGA based video systems still use the hardware registers that were provided by the 6845.
== Modern solutions ==
With the advancements made in semiconductor device fabrication, more and more functionality is implemented as integrated circuits, often licensable as semiconductor intellectual property core (SIP core). Display controller System In Package (SiP) blocks can be found on the die of GPUs, APUs and SoCs.
They support a variety of interfaces: VGA, DVI, HDMI, DisplayPort, VHDCI, DMS-59 and more. The PHY options include LVDS, Embedded DisplayPort, TMDS, Flat Panel Display Link, OpenLDI and CML. A modern computer monitor may have a built-in LCD or OLED controller.
For example, a VGA signal created by the GPU is transported over a VGA cable to the monitor's built-in controller; both ends of the cable terminate in a VGA connector. Laptops and other mobile computers use different interfaces between the display controller and the display. A display controller usually supports multiple computer display standards.
KMS driver is an example of a device driver for display controllers and AMD Eyefinity is a special brand of display controller with multi-monitor support.
RandR (resize and rotate) is a method to configure the screen resolution and refresh rate of each individual output separately, while at the same time configuring the settings of the windowing system accordingly.
An example for this dichotomy is offered by ARM Holdings: they offer SIP core for 3D rendering acceleration and for display controller independently. The former has marketing names such as Mali-200 or Mali-T880 while the latter is available as Mali-DP500, Mali-DP550 and Mali-DP650.
== History ==
In 1982, NEC released the NEC μPD7220, one of the most widely used video display controllers in 1980s personal computers. It was used in the NEC PC-9801, APC III, IBM PC compatibles, DEC Rainbow, Tulip System-1, and Epson QX-10. Intel licensed the design and called it the 82720 graphics display controller.
Previously, graphics cards were also called graphics adapters, and the chips used on these ISA/EISA cards consisted solely of a display controller, as this was the only functionality required to connect a computer to a display. Later cards included ICs to perform calculations related to 2D rendering in parallel with the CPU; these cards were referred to as graphics accelerator cards. Similarly, ICs for 3D rendering eventually followed. Such cards were available with VLB, PCI, and AGP interfaces; modern cards typically use the PCI Express bus, as they require much greater bandwidth than the ISA bus can deliver.
== See also ==
List of home computers by video hardware
List of color palettes
== References ==
== External links ==
Embedded Linux Conference 2013 – Anatomy of an Embedded KMS driver (on YouTube); KMS drivers are device drivers for display controllers. | Wikipedia/Video_display_controller
Video Graphics Array (VGA) is a video display controller and accompanying de facto graphics standard, first introduced with the IBM PS/2 line of computers in 1987, which became ubiquitous in the IBM PC compatible industry within three years. The term can now refer to the computer display standard, the 15-pin D-subminiature VGA connector, or the 640 × 480 resolution characteristic of the VGA hardware.
VGA was the last IBM graphics standard to which the majority of IBM PC compatible computer manufacturers conformed, making it the lowest common denominator that virtually all post-1990 PC graphics hardware can be expected to implement.
VGA was adapted into many extended forms by third parties, collectively known as Super VGA, then gave way to custom graphics processing units which, in addition to their proprietary interfaces and capabilities, continue to implement common VGA graphics modes and interfaces to the present day.
The VGA analog interface standard has been extended to support resolutions of up to 2048 × 1536 for general usage, with specialized applications improving it further still.
== Hardware design ==
The color palette random access memory (RAM) and its corresponding digital-to-analog converter (DAC) were integrated into one chip (the RAMDAC) and the cathode-ray tube controller (CRTC) was integrated into a main VGA chip, which eliminated several other chips in previous graphics adapters, so VGA only additionally required external video RAM and timing crystals.
This small part count allowed IBM to include VGA directly on the PS/2 motherboard, in contrast to prior IBM PC models – PC, PC/XT, and PC AT – which required a separate display adapter installed in a slot in order to connect a monitor. The term "array" rather than "adapter" in the name denoted that it was not a complete independent expansion device, but a single component that could be integrated into a system.
Unlike the graphics adapters that preceded it (MDA, CGA, EGA and many third-party options) there was initially no discrete VGA card released by IBM. The first commercial implementation of VGA was a built-in component of the IBM PS/2, in which it was accompanied by 256 KB of video RAM, and a new DE-15 connector replacing the DE-9 used by previous graphics adapters. IBM later released the standalone IBM PS/2 Display Adapter, which utilized the VGA but could be added to machines that did not have it built in.
On some machines and cables, pin 9 was missing. Pin 9 only supplies power to an EEPROM chip in the monitor which tells the graphics card the capabilities of the monitor; systems or cables missing it are likely using an older version of VGA.
== Capabilities ==
The VGA supports all graphics modes supported by the MDA, CGA and EGA cards, as well as multiple new modes.
=== Standard graphics modes ===
320 × 200 in 4 or 16 colors (CGA/EGA compatibility)
320 × 200 in 256 colors (Mode 13h)
640 × 200 and 640 × 350 in 16 colors or monochrome (CGA/EGA compatibility)
640 × 480 in 16 colors or monochrome
The 640 × 480 16-color and 320 × 200 256-color modes had fully redefinable palettes, with each entry selected from an 18-bit (262,144-color) gamut.
The other modes defaulted to standard EGA or CGA compatible palettes and instructions, but still permitted remapping of the palette with VGA-specific commands.
==== 640 × 480 graphics mode ====
The 640 × 480 resolution (at 256 colors rather than 16) was originally used by IBM in PGC graphics (which VGA offers no backward compatibility for) but did not see wide adoption until VGA was introduced. As the VGA began to be cloned in great quantities by manufacturers who added ever-increasing capabilities, its 640 × 480, 16-color mode became the de facto lowest common denominator of graphics cards. By the mid-1990s, a 640 × 480 16-color graphics mode using the VGA memory and register specifications was expected by operating systems such as Windows 95 and OS/2 Warp 3.0, which provided no support for lower resolutions or bit depths, or support for other memory or register layouts without additional drivers. Well into the 2000s, even after the VESA standard for graphics cards became commonplace, the "VGA" graphics mode remained a compatibility option for PC operating systems.
==== Other graphics modes ====
Nonstandard display modes can be implemented, with horizontal resolutions of:
512 to 800 pixels wide, in 16 colors
256 to 400 pixels wide, in 256 colors
And heights of:
200, or 350 to 410 lines (including 400-line) at 70 Hz refresh rate, or
224 to 256, or 448 to 512 lines (including 240 or 480-line) at 60 Hz refresh rate
512 to 600 lines at reduced vertical refresh rates (down to 50 Hz, and including e.g. 528, 544, 552, 560, 576-line), depending on individual monitor compatibility.
For example, high resolution modes with square pixels are available at 768 × 576 or 704 × 528 in 16 colors, or medium-low resolution at 320 × 240 with 256 colors. Alternatively, extended resolution is available with "fat" pixels and 256 colors using, e.g. 400 × 600 (50 Hz) or 360 × 480 (60 Hz), and "thin" pixels, 16 colors and the 70 Hz refresh rate with e.g. 736 × 410 mode.
"Narrow" modes such as 256 × 224 tend to preserve the same pixel ratio as in e.g. 320 × 240 mode unless the monitor is adjusted to stretch the image out to fill the screen, as they are derived simply by masking down the wider mode instead of altering pixel or line timings, but can be useful for reducing memory requirements and pixel addressing calculations for arcade game conversions or console emulators.
The PC version of Pinball Fantasies has the option to use non-standard "high res" modes, such as 640 × 350, allowing it to display a larger portion of the pinball table on screen.
=== Standard text modes ===
VGA also implements several text modes:
80 × 25, rendered with a 9 × 16 pixel font, with an effective resolution of 720 × 400
40 × 25, with a 9 × 16 font, with an effective resolution of 360 × 400
80 × 43 or 80 × 50, with an 8 × 8 font grid, with an effective resolution of 640 × 344 or 640 × 400 pixels.
As with the pixel-based graphics modes, additional text modes are possible by programming the VGA correctly, with an overall maximum of about 100 × 80 cells and an active area spanning about 88 × 64 cells.
One variant that is sometimes seen is 80 × 30 or 80 × 60, using an 8 × 16 or 8 × 8 font and an effective 640 × 480 pixel display, which trades use of the more flickery 60 Hz mode for an additional 5 or 10 lines of text and square character blocks (or, at 80 × 30, square half-blocks).
== Technical details ==
Unlike the cards that preceded it, which used binary TTL signals to interface with a monitor (and also composite, in the case of the CGA), the VGA introduced a video interface using pure analog RGB signals, with a maximum range of 0.7 volts peak-to-peak. In conjunction with an 18-bit RAMDAC (6 bits per RGB channel), this produced a color gamut of 262,144 colors.
The original VGA specifications follow:
Selectable 25.175 MHz or 28.322 MHz master pixel clock
Maximum of 640 horizontal pixels in graphics mode, and 720 pixels in text mode
Maximum of 480 lines
Refresh rates at 60 or 70 Hz
Vertical blank interrupt (Not all clone cards support this.)
Planar mode: up to 16 colors (4 bit planes)
Packed-pixel mode: 256 colors (Mode 13h)
Hardware smooth scrolling support
No Blitter
Supports fast data transfers via "VGA latch" registers
Barrel shifter
Split screen support
=== Signal timings ===
The intended standard value for the horizontal frequency of VGA's 640 × 480 mode is exactly double the value used in the NTSC-M video system, as this made it much easier to offer optional TV-out solutions or external VGA-to-TV converter boxes at the time of VGA's development. It is also at least nominally twice that of CGA, which also supported composite monitors.
All derived VGA timings (i.e. those which use the master 25.175 and 28.322 MHz crystals and, to a lesser extent, the nominal 31.469 kHz line rate) can be varied by software that bypasses the VGA firmware interface and communicates directly with the VGA hardware, as many MS-DOS based games did. However, only the standard modes, or modes that at least use almost exactly the same H-sync and V-sync timings as one of the standard modes, can be expected to work with the original late-1980s and early-1990s VGA monitors. The use of other timings may in fact damage such monitors and thus was usually avoided by software publishers.
Third-party "multisync" CRT monitors were more flexible, and in combination with "super EGA", VGA, and later SVGA graphics cards using extended modes, could display a much wider range of resolutions and refresh rates at arbitrary sync frequencies and pixel clock rates.
For the most common VGA mode (640 × 480, 60 Hz, non-interlaced), the horizontal timings can be found in the HP Super VGA Display Installation Guide and in other places.
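Those figures follow directly from the master pixel clock. The sketch below checks the arithmetic, using the conventional totals of 800 pixel clocks per scanline and 525 lines per frame (visible area plus blanking) for this mode:

```c
/* Worked example of the standard 640x480 VGA timing: dividing the
 * 25.175 MHz clock by the total pixels per line gives the ~31.469 kHz
 * line rate, and dividing again by the total line count gives ~59.94 Hz. */
#include <stdio.h>

int main(void) {
    double pixel_clock = 25175000.0;  /* Hz, VGA master pixel clock     */
    int h_total = 800;                /* pixels per line incl. blanking */
    int v_total = 525;                /* lines per frame incl. blanking */

    double h_rate = pixel_clock / h_total;  /* ~31468.75 Hz */
    double v_rate = h_rate / v_total;       /* ~59.94 Hz    */
    printf("line rate: %.3f kHz, refresh: %.2f Hz\n", h_rate / 1000.0, v_rate);
    return 0;
}
```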
=== Typical uses of selected modes ===
640 × 400 @ 70 Hz is traditionally the video mode used for booting VGA-compatible x86 personal computers that show a graphical boot screen, while text-mode boot uses 720 × 400 @ 70 Hz.
This convention has been eroded in recent years, however, with POST and BIOS screens moving to higher resolutions, taking advantage of EDID data to match the resolution to a connected monitor.
640 × 480 @ 60 Hz is the default Windows graphics mode (usually with 16 colors), up to Windows 2000. It remains an option in XP and later versions via the boot menu "low resolution video" option and per-application compatibility mode settings, despite newer versions of Windows now defaulting to 1024 × 768 and generally not allowing any resolution below 800 × 600 to be set.
The need for such a low-quality, universally compatible fallback has diminished since the turn of the millennium, as VGA-signalling-standard screens or adaptors unable to show anything beyond the original resolutions have become increasingly rare.
320 × 200 at 70 Hz was the most common mode for early 1990s PC games, with pixel-doubling and line-doubling performed in hardware to present a 640 × 400 at 70 Hz signal to the monitor.
The Windows 95/98/Me LOGO.SYS boot-up image was 320 × 400 resolution, displayed with pixel-doubling to present a 640 × 400 at 70 Hz signal to the monitor. The 400-line signal was the same as the standard 80 × 25 text mode, which meant that pressing Esc to return to text mode didn't change the frequency of the video signal, and thus the monitor did not have to resynchronize (which could otherwise have taken several seconds).
== Connector ==
The standard VGA monitor interface is a 15-pin D-subminiature connector in the "E" shell, variously referred to as "DE-15", "HD-15" and erroneously "DB-15(HD)".
All VGA connectors carry analog RGBHV (red, green, blue, horizontal sync, vertical sync) video signals. Modern connectors also include VESA DDC pins, for identifying attached display devices.
Because VGA uses low-voltage analog signals, signal degradation becomes a factor with low-quality or overly long cables. Solutions include shielded cables, cables that include a separate internal coaxial cable for each color signal, and "broken out" cables utilizing a separate coaxial cable with a BNC connector for each color signal.
BNC breakout cables typically use five connectors, one each for Red, Green, Blue, Horizontal Sync, and Vertical Sync, and do not include the other signal lines of the VGA interface. With BNC, the coaxial wires are fully shielded end-to-end and through the interconnect so that virtually no crosstalk and very little external interference can occur. The use of BNC RGB video cables predates VGA in other markets and industries.
== Color palette ==
The VGA color system uses register-based palettes to map colors in various bit depths to its 18-bit output gamut. It is backward compatible with the EGA and CGA adapters, but supports extra bit depth for the palette when in these modes.
For instance, when in EGA 16-color modes, VGA offers 16 palette registers, and in 256-color modes, it offers 256 registers. Each palette register contains a 3 × 6-bit RGB value, selecting a color from the 18-bit gamut of the DAC.
These color registers are initialized to default values IBM expected to be most useful for each mode. For instance, EGA 16-color modes initialize to the default CGA 16-color palette, and the 256-color mode initializes to a palette consisting of 16 CGA colors, 16 grey shades, and then 216 colors chosen by IBM to fit expected use cases.
After initialization they can be redefined at any time without altering the contents of video RAM, permitting palette cycling.
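On the hardware side, a palette entry is rewritten by sending the index to the DAC write port (0x3C8) followed by three 6-bit color components (0x3C9). A minimal sketch, assuming a Borland-style 16-bit DOS compiler with outportb() and intended as illustration only:

```c
/* Redefining one VGA palette register through the DAC ports. Because only
 * the register changes, on-screen pixels using that index recolour
 * instantly -- the basis of palette cycling. */
#include <dos.h>

void set_palette_entry(unsigned char index,
                       unsigned char r, unsigned char g, unsigned char b) {
    outportb(0x3C8, index);  /* DAC write index       */
    outportb(0x3C9, r);      /* red,   0..63 (6 bits) */
    outportb(0x3C9, g);      /* green, 0..63          */
    outportb(0x3C9, b);      /* blue,  0..63          */
}
```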
In the 256-color modes, the DAC is set to combine four 2-bit color values, one from each plane, into an 8-bit value representing an index into the 256-color palette. The CPU interface combines the 4 planes in the same way, a feature called "chain-4", so that each pixel appears to the CPU as a packed 8-bit value representing the palette index.
== Use ==
The video memory of the VGA is mapped to the PC's memory via a window in the range between segments 0xA0000 and 0xBFFFF in the PC's real mode address space (A000:0000 and B000:FFFF in segment:offset notation). Typically, these starting segments are:
0xA0000 for EGA/VGA graphics modes (64 KB)
0xB0000 for monochrome text mode (32 KB)
0xB8000 for color text mode and CGA-compatible graphics modes (32 KB)
A typical VGA card also decodes this port-mapped I/O range:
0x3B0 to 0x3DF
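As a minimal illustration of this mapping, the sketch below (assuming a Borland-style 16-bit DOS compiler, and intended as illustration only) selects Mode 13h through the video BIOS and writes one pixel directly into the 64 KB window at segment 0xA000, which in this mode is a linear 320 × 200 array of palette indices:

```c
/* Entering Mode 13h and writing a pixel straight into video memory. */
#include <dos.h>

int main(void) {
    union REGS r;
    unsigned char far *vram = (unsigned char far *)MK_FP(0xA000, 0);

    r.x.ax = 0x0013;          /* INT 10h, AH=00h: set video mode 13h */
    int86(0x10, &r, &r);

    int x = 160, y = 100;
    vram[y * 320 + x] = 15;   /* one byte per pixel: palette index 15 (white) */

    r.x.ax = 0x0003;          /* back to 80x25 colour text mode */
    int86(0x10, &r, &r);
    return 0;
}
```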
Due to the use of different address mappings for different modes, it is possible to have a monochrome adapter (i.e. MDA or Hercules) and a color adapter such as the VGA, EGA, or CGA installed in the same machine.
At the beginning of the 1980s, this was typically used to display Lotus 1-2-3 spreadsheets in high-resolution text on a monochrome display and associated graphics on a low-resolution CGA display simultaneously. Many programmers also used such a setup with the monochrome card displaying debugging information while a program ran in graphics mode on the other card. Several debuggers, like Borland's Turbo Debugger, D86 and Microsoft's CodeView could work in a dual monitor setup. Either Turbo Debugger or CodeView could be used to debug Windows.
There were also device drivers such as ox.sys, which implemented a serial interface simulation on the monochrome display and, for example, allowed the user to receive crash messages from debugging versions of Windows without using an actual serial terminal.
It is also possible to use the "MODE MONO" command at the command prompt to redirect the output to the monochrome display. When a monochrome adapter was not present, it was possible to use the 0xB000–0xB7FF address space as additional memory for other programs.
A VGA-capable PCI/PCIe graphics card can provide legacy VGA registers in its PCI configuration space, which may be remapped by the BIOS or the operating system.
=== Programming ===
"Unchaining" the 256 KB VGA memory into four separate "planes" makes VGA's 256 KB of RAM available in 256-color modes. There is a trade-off for extra complexity and performance loss in some types of graphics operations, but this is mitigated by other operations becoming faster in certain situations:
Single-color polygon filling could be accelerated due to the ability to set four pixels with a single write to the hardware.
The video adapter could assist in copying video RAM regions, which was sometimes faster than doing this with the relatively slow CPU-to-VGA interface.
The use of multiple video pages in hardware allowed double buffering, triple buffering or split screens, which, while available in VGA's 320 × 200 16-color mode, was not possible using stock Mode 13h.
Most particularly, several higher, arbitrary-resolution display modes were possible, all the way up to the programmable limit of 800 × 600 with 16 colors (or 400 × 600 with 256 colors), as well as other custom modes using unusual combinations of horizontal and vertical pixel counts in either color mode.
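The unchaining itself is a handful of register writes. The sketch below follows the widely documented Mode X tweaks; the sequencer and CRTC ports are standard, while the outp/outpw/int86 calls again assume a 16-bit DOS toolchain and the function names are illustrative:

```c
#include <dos.h> /* int86, outp, outpw: assumes a Borland/Watcom-style DOS compiler */

/* Unchain BIOS mode 13h into a four-plane layout (the core of "Mode X"). */
void unchain_mode13h(void)
{
    union REGS r;
    r.x.ax = 0x0013;
    int86(0x10, &r, &r);     /* start from chained mode 13h */

    outpw(0x3C4, 0x0604);    /* Sequencer Memory Mode: chain-4 off */
    outpw(0x3D4, 0x0014);    /* CRTC Underline Location: doubleword mode off */
    outpw(0x3D4, 0xE317);    /* CRTC Mode Control: byte addressing on */
}

/* In the unchained layout, each plane holds every fourth column. */
void put_pixel(int x, int y, unsigned char color)
{
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
    outp(0x3C4, 0x02);               /* Sequencer index: Map Mask */
    outp(0x3C5, 1 << (x & 3));       /* enable only the plane owning column x */
    vga[y * 80 + (x >> 2)] = color;  /* 320/4 = 80 bytes per scanline per plane */
}
```

Setting the Map Mask to 0x0F before a fill is what lets one write touch four pixels at once, the acceleration mentioned in the list above.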
Software such as Fractint, Xlib and ColoRIX also supported tweaked 256-color modes on standard adapters, freely combining widths of 256, 320, and 360 pixels with heights of 200, 240, and 256 lines (or 400, 480, and 512 lines), extending still further to widths of 384 or 400 pixels and heights of 576 or 600 (or 288 and 300) lines. However, 320 × 240 was the best known and most frequently used, as it offered a standard 40-column resolution and a 4:3 aspect ratio with square pixels. The "320 × 240 × 8" resolution was commonly called Mode X, the name used by Michael Abrash when he presented it in Dr. Dobb's Journal.
The highest resolution modes were only used in special, opt-in cases rather than as standard, especially where high line counts were involved. Standard VGA monitors had a fixed line scan (H-scan) rate – "multisync" monitors being, at the time, expensive rarities – and so the vertical/frame (V-scan) refresh rate had to be reduced to accommodate them, which increased visible flicker and thus eye strain. For example, the highest 800 × 600 mode, being otherwise based on the matching SVGA resolution (with 628 total lines), reduced the refresh rate from 60 Hz to about 50 Hz (and 832 × 624, the theoretical maximum resolution achievable with 256 KB at 16 colors, would have reduced it to about 48 Hz, barely above the 43.5 Hz interlaced frame rate that XGA monitors mitigated with a doubled field-scanning frequency).
These modes were also outright incompatible with some monitors, producing display problems such as picture detail disappearing into overscan (especially in the horizontal dimension), vertical roll, poor horizontal sync or even a complete lack of picture depending on the exact mode attempted. Due to these potential issues, most VGA tweaks used in commercial products were limited to more standards-compliant, "monitor-safe" combinations, such as 320 × 240 (square pixels, three video pages, 60 Hz), 320 × 400 (double resolution, two video pages, 70 Hz), and 360 × 480 (highest resolution compatible with both standard VGA monitors and cards, one video page, 60 Hz) in 256 colors, or double the horizontal resolution in 16-color mode.
== Hardware manufacturers ==
Several companies produced VGA-compatible graphics boards:
ATI: Graphics Solution Plus, Wonder series, Mach series
S3 Graphics: S3 911, 911A, 924, 801, 805, 805i, 928, 805p, 928p, S3 Vision series, S3 Trio series
Matrox: MAGIC RGB
Plantronics: Colorplus
Paradise Systems: PEGA 1, PEGA 1a, PEGA 2a
Tseng Labs: ET3000, ET4000, ET6000
Cirrus Logic: CL-GD400, CL-GD500 and CL-GD5000 series
Trident Microsystems: TVGA 8000 series, TVGA 9000 series, TGUI9000 series
IIT
NEC
Chips and Technologies
SiS
Tamerack
Realtek
Oak Technology
LSI
Hualon
Cornerstone Imaging
Winbond
AMD
Western Digital
Intergraph
Texas Instruments
Gemini (defunct)
Genoa Systems (defunct)
== Successors ==
=== Super VGA (SVGA) ===
Super VGA (SVGA) is a display standard developed in 1988, when NEC Home Electronics announced its creation of the Video Electronics Standards Association (VESA). The development of SVGA was led by NEC, along with other VESA members including ATI Technologies and Western Digital. SVGA enabled graphics display resolutions up to 800 × 600 pixels, 56% more pixels than VGA's maximum of 640 × 480.
=== Extended Graphics Array (XGA) ===
Extended Graphics Array (XGA) is an IBM display standard introduced in 1990. The name later became the most common appellation for the 1024 × 768 display resolution.
== See also ==
VGA text mode
Graphic display resolutions
List of color palettes
List of video connectors
List of monochrome and RGB color formats
List of 16-bit computer hardware palettes
List of defunct graphics chips and card companies
Super VGA
AX-VGA (for Japanese AX architecture computers)
DOS/V
DisplayPort and HDMI (which have largely replaced VGA)
== References ==
== Further reading ==
J. D. Neal (1997). "VGA Chipset Reference". Hardware Level VGA and SVGA Video Programming Information Page.
Jordan Brown and John Kingman (6 May 1996). "CHRP VGA Display Device Binding to IEEE 1275–1994 Standard for Boot (Initialization, Configuration) Firmware". 1.0. Archived from the original on 9 September 2006. Retrieved 22 June 2006.
Hinner. "VGA Interface and video signal documents". Signal Level VGA and SVGA Video Information Page.
"IBM VGA Technical Reference Manual" (PDF). This is the original IBM reference. The document provides good overview of VGA functionality and is fairly complete, including a detailed description of standard BIOS modes and some programming techniques.
== External links ==
VGA pinout and signals descriptions | Wikipedia/Video_Graphics_Array |
In computer graphics, tessellation is the dividing of datasets of polygons (sometimes called vertex sets) representing objects in a scene into suitable structures for rendering. Especially for real-time rendering, data is tessellated into triangles, for example in OpenGL 4.0 and Direct3D 11.
== In graphics rendering ==
A key advantage of tessellation for realtime graphics is that it allows detail to be dynamically added and subtracted from a 3D polygon mesh and its silhouette edges based on control parameters (often camera distance). In previously leading realtime techniques such as parallax mapping and bump mapping, surface details could be simulated at the pixel level, but silhouette edge detail was fundamentally limited by the quality of the original dataset.
In the Direct3D 11 pipeline (a part of DirectX 11), the graphics primitive is the patch. The tessellator generates a triangle-based tessellation of the patch according to tessellation parameters such as the TessFactor, which controls the degree of fineness of the mesh. The tessellation, along with shaders such as a Phong shader, allows for producing smoother surfaces than would be generated by the original mesh. By offloading the tessellation process onto the GPU hardware, smoothing can be performed in real time. Tessellation can also be used for implementing subdivision surfaces, level-of-detail scaling and fine displacement mapping. OpenGL 4.0 uses a similar pipeline, where tessellation into triangles is controlled by the Tessellation Control Shader and a set of four tessellation parameters.
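To make the OpenGL 4.0 side concrete, the hedged C fragment below shows the host-side patch setup around a tessellated draw call; it assumes a GL 4.0 context, a function loader, and a bound shader program already exist (the function name and level values are illustrative), and uses only standard GL 4.0 entry points:

```c
#include <GL/glew.h> /* assumption: GLEW, or any other GL 4.0 function loader */

/* Issue a tessellated draw: vertices are fed to the pipeline as patches,
   which the fixed-function tessellator subdivides into triangles. */
void draw_tessellated(GLuint vao, GLsizei nvertices)
{
    glBindVertexArray(vao);
    glPatchParameteri(GL_PATCH_VERTICES, 3);   /* 3 control points per patch */

    /* Default levels, used only when no tessellation control shader is bound;
       a TCS would instead write gl_TessLevelOuter/Inner per patch. */
    const GLfloat outer[4] = { 4.0f, 4.0f, 4.0f, 4.0f };
    const GLfloat inner[2] = { 4.0f, 4.0f };
    glPatchParameterfv(GL_PATCH_DEFAULT_OUTER_LEVEL, outer);
    glPatchParameterfv(GL_PATCH_DEFAULT_INNER_LEVEL, inner);

    glDrawArrays(GL_PATCHES, 0, nvertices);    /* GL_PATCHES, not GL_TRIANGLES */
}
```

Raising the levels from 4 toward the implementation's GL_MAX_TESS_GEN_LEVEL produces a correspondingly finer mesh.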
== In computer-aided design ==
In computer-aided design the constructed design is represented by a boundary representation topological model, in which analytical 3D surfaces and curves, limited by faces, edges, and vertices, constitute a continuous boundary of a 3D body.
Arbitrary 3D bodies are often too complicated to analyze directly, so they are approximated (tessellated) with a mesh of small, easy-to-analyze pieces of 3D volume, usually either irregular tetrahedra or irregular hexahedra. The mesh is used for finite element analysis.
The mesh of a surface is usually generated per individual face and edge (edges being approximated as polylines) so that the original boundary vertices are included in the mesh. To ensure that the approximation of the original surface suits the needs of further processing, three basic parameters are usually defined for the surface mesh generator:
The maximum allowed distance between the planar approximation polygon and the surface (known as "sag"). This parameter ensures that the mesh is similar enough to the original analytical surface (or that the polyline is similar to the original curve); a worked example follows this list.
The maximum allowed size of the approximation polygon (for triangulations, this can be the maximum allowed length of triangle sides). This parameter ensures enough detail for further analysis.
The maximum allowed angle between two adjacent approximation polygons (on the same face). This parameter ensures that even very small humps or hollows that can have a significant effect on the analysis will not disappear in the mesh.
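To illustrate how the sag tolerance drives mesh density, the following self-contained C sketch (illustrative only, not taken from any particular CAD kernel; the function name and numbers are made up) computes the coarsest segmentation of a circle whose chord-to-arc distance stays within a given sag:

```c
#include <math.h>
#include <stdio.h>

/* For a circular arc of radius r cut into chords spanning angle theta,
   the sag (maximum chord-to-arc distance) is r * (1 - cos(theta / 2)).
   Inverting gives the largest chord angle, hence the fewest segments,
   that still respects a sag tolerance. */
static int segments_for_sag(double r, double sag_tol)
{
    const double PI = 3.14159265358979323846;
    double theta = 2.0 * acos(1.0 - sag_tol / r); /* largest allowed angle */
    return (int)ceil(2.0 * PI / theta);
}

int main(void)
{
    /* Illustrative numbers: a radius-100 circle meshed to sag 0.1 needs
       71 chords; tightening the sag tenfold roughly triples the count,
       since the segment count grows as 1/sqrt(sag). */
    printf("%d\n", segments_for_sag(100.0, 0.1));  /* prints 71 */
    printf("%d\n", segments_for_sag(100.0, 0.01)); /* prints 223 */
    return 0;
}
```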
An algorithm generating a mesh is typically controlled by the above three and other parameters. Some types of computer analysis of a constructed design require an adaptive mesh refinement, which is a mesh made finer (using stronger parameters) in regions where the analysis needs more detail.
== See also ==
ATI TruForm – brand for their hardware tessellation unit from 2001
TeraScale (microarchitecture) § Hardware tessellation – newer unit from June 2007
Graphics Core Next § Geometric processor – most current unit from January 2011
Tessellation shader
Progressive mesh
Mesh generation
Tiled rendering
== External links ==
GPUOpen: OpenGL sample that demonstrates terrain tessellation on the GPU
== References == | Wikipedia/Tessellation_(computer_graphics) |