The cup is a cooking measure of volume, commonly associated with cooking and serving sizes. In the US, it is traditionally equal to one-half US pint (236.6 ml). Because actual drinking cups may differ greatly from the size of this unit, standard measuring cups may be used, with a metric cup commonly being rounded up to 240 millilitres (the legal cup), but 250 ml is also used depending on the measuring scale.
United States
Customary cup
In the United States, the customary cup is half of a US liquid pint, or 8 US customary fluid ounces (236.59 ml).
Legal cup
The cup currently used in the United States for nutrition labelling is defined in United States law as 240 ml.
Conversion table to US legal cup
The US legal cup of 240 ml can be expressed in a number of other customary and metric units.
Coffee cup
A "cup" of coffee in the US is usually 4 fluid ounces (118 ml), brewed using 5 fluid ounces (148 ml) of water. Coffee carafes used with drip coffee makers, e.g. Black and Decker models, have markings for both water and brewed coffee as the carafe is also used for measuring water prior to brewing. A 12-cup carafe, for example, has markings for 4, 6, 8, 10, and 12 cups of water or coffee, which correspond to 20, 30, 40, 50, and 60 US fluid ounces (0.59, 0.89, 1.18, 1.48, and 1.77 litres) of water or 16, 24, 32, 40, and 48 US fluid ounces (0.47, 0.71, 0.95, 1.18, and 1.42 litres) of brewed coffee respectively, the difference being the volume absorbed by the coffee grounds and lost to evaporation during brewing.
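The carafe arithmetic above (5 fl oz of water in, 4 fl oz of brewed coffee out per "cup") can be sketched as follows. The helper name is ours, not any manufacturer's API; it simply reproduces the marking figures quoted in the text.

```python
# Illustrative sketch of the carafe arithmetic described above.
# Assumption: a US coffee "cup" marking means 5 fl oz of water brewed
# down to 4 fl oz of coffee.
US_FLOZ_L = 0.0295735  # litres per US fluid ounce

def carafe_marking(cups):
    """Return (water_floz, coffee_floz) for a given cup marking."""
    return cups * 5, cups * 4

for cups in (4, 6, 8, 10, 12):
    water, coffee = carafe_marking(cups)
    print(f"{cups:2d} cups: {water} fl oz water ({water * US_FLOZ_L:.2f} L) -> "
          f"{coffee} fl oz coffee ({coffee * US_FLOZ_L:.2f} L)")
```

For a 12-cup marking this gives 60 fl oz of water (1.77 L) brewing down to 48 fl oz of coffee (1.42 L), matching the figures in the text.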
Commonwealth of Nations
Metric cup
Australia, Canada, New Zealand, and some other members of the Commonwealth of Nations, being former British colonies that have since metricated, employ a "metric cup" of 250 millilitres. Although derived from the metric system, it is not an SI unit.
A "coffee cup" is 1.5 dL (i.e. 150 millilitres or 5.07 US customary fluid ounces), and is occasionally used in recipes; in older recipes, cup may mean "coffee cup". It is also used in the US to specify coffeemaker sizes (what can be referred to as a Tasse à café). A "12-cup" US coffeemaker makes 57.6 US customary fluid ounces of coffee, which is equal to 6.8 metric cups of coffee.
Canadian cup
Canada now usually employs the metric cup of 250 ml, but its conventional cup was somewhat smaller than both American and imperial units.
1 Canadian cup = 8 imperial fluid ounces = 1⁄20 imperial gallon = 227.3 ml
= ⅘ UK tumbler = 1 UK breakfast cup = 1⅓ UK cups = 1⅗ UK teacups = 4 UK coffee cups = 4 UK wine glasses
≈ 0.96 US customary cup
≈ 0.91 metric cup
1 Canadian tablespoon = ½ imperial fluid ounce = 14.2 ml
= 1 UK tablespoon
≈ 0.96 US customary tablespoon
≈ 0.95 international metric tablespoon ≈ 0.71 Australian metric tablespoon
1 Canadian teaspoon = ⅙ imperial fluid ounce = 4.7 ml
= 1⅓ UK teaspoons
≈ 0.96 US customary teaspoon
≈ 0.95 metric teaspoon
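The Canadian cup equivalences above follow from the exact legal definitions of the imperial fluid ounce and the US pint; a quick sketch (variable names are ours):

```python
# Recomputing the Canadian cup conversions from unit definitions.
IMP_FLOZ_ML = 28.4130625    # exact: imperial gallon (4.54609 L) / 160
US_CUP_ML = 236.5882365     # exact: half a US liquid pint
METRIC_CUP_ML = 250.0

canadian_cup_ml = 8 * IMP_FLOZ_ML                  # 8 imperial fluid ounces
print(round(canadian_cup_ml, 1))                   # 227.3 ml
print(round(canadian_cup_ml / US_CUP_ML, 2))       # 0.96 US customary cup
print(round(canadian_cup_ml / METRIC_CUP_ML, 2))   # 0.91 metric cup
```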
British cup
In the United Kingdom, 1 cup is traditionally 6 imperial fluid ounces. The unit is named after a typical drinking cup.
There are three related British culinary measurement units of volume bearing names containing the word ‘cup’: the breakfast cup (8 imperial fluid ounces), the teacup (5 imperial fluid ounces), and the coffee cup (2 imperial fluid ounces).
Further, there are two related British culinary measurement units of volume without the word ‘cup’ in their names: the tumbler (10 imperial fluid ounces) and the wine glass (2 imperial fluid ounces).
All six units are the traditional British equivalents of the US customary cup and the metric cup, used in situations where a US cook would use the US customary cup and a cook using metric units the metric cup. The breakfast cup is the most similar in size to the US customary cup and the metric cup. Which of these six units is used depends on the quantity or volume of the ingredient: there is a division of labour between these six units, like that between the tablespoon and the teaspoon. British cookery books and recipes, especially those from the days before the UK's partial metrication, commonly use two or more of the aforesaid units simultaneously: for example, the same recipe may call for a ‘tumblerful’ of one ingredient and a ‘wineglassful’ of another; or a ‘breakfastcupful’ or ‘cupful’ of one ingredient, a ‘teacupful’ of a second, and a ‘coffeecupful’ of a third. Unlike the US customary cup and the metric cup, the tumbler, breakfast cup, cup, teacup, coffee cup, and wine glass are not measuring cups: they are simply everyday drinking vessels commonly found in British households and typically having the respective aforementioned capacities; through long-term and widespread use they have come to serve as measurement units for cooking. There is no British culinary measuring cup based on imperial units.
International
Similar units in other languages and cultures are sometimes translated "cup", usually with values of around 1⁄5 to 1⁄4 of a litre.
Latin American cup
In Latin America, the amount of a "cup" varies from country to country: a cup of 200 ml (about 7.04 British imperial fluid ounces or 6.76 US customary fluid ounces), a cup of 250 ml (about 8.80 British imperial fluid ounces or 8.45 US customary fluid ounces), or the US legal or customary amount may be used.
Japanese cup
The traditional Japanese unit equated with a "cup" size is the gō, legally equated with 2401⁄13310 litre (about 180.4 ml, 6.35 British imperial fluid ounces or 6.1 US customary fluid ounces) in 1891, and still used for reckoning amounts of rice and sake. The Japanese later defined a "cup" as 200 ml.
Russian cup
The traditional Russian measurement system included two cup sizes: the "charka" (cup proper) and the "stakan" ("glass"). The charka was usually used for alcoholic drinks and measured 123 ml (about 4.33 British imperial fluid ounces or 4.16 US customary fluid ounces), while the stakan, used for other liquids, was twice as big at 246 ml (about 8.66 British imperial fluid ounces or 8.32 US customary fluid ounces).
Since metrication, the charka has been informally redefined as 100 ml (about 3.52 British imperial fluid ounces or 3.38 US customary fluid ounces), acquiring the new name "stopka" (related to the traditional Russian measurement unit "stopa"), while there are currently two widely used glass sizes of 250 ml (about 8.80 British imperial fluid ounces or 8.45 US customary fluid ounces) and 200 ml (about 7.04 British imperial fluid ounces or 6.76 US customary fluid ounces).
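The parenthetical fluid-ounce figures in this section all derive from two constants, the imperial and US fluid ounces; a sketch checking the Russian values (the function name is ours):

```python
# Converting millilitres to imperial and US fluid ounces, as in the
# parentheticals above. Constants are the exact legal definitions.
IMP_FLOZ_ML = 28.4130625       # imperial gallon (4.54609 L) / 160
US_FLOZ_ML = 29.5735295625     # US gallon (3.785411784 L) / 128

def to_floz(ml):
    """Return (imperial_floz, us_floz) rounded to two decimals."""
    return round(ml / IMP_FLOZ_ML, 2), round(ml / US_FLOZ_ML, 2)

for ml in (100, 123, 200, 246, 250):
    imp, us = to_floz(ml)
    print(f"{ml} ml = about {imp} imperial fl oz or {us} US fl oz")
```

Running this reproduces the figures quoted in the text, e.g. 123 ml is about 4.33 imperial or 4.16 US customary fluid ounces.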
Dutch cup
In the Netherlands, a "cup" (Dutch: kopje) traditionally amounts to 150 ml (about 5.28 British imperial fluid ounces or 5.07 US customary fluid ounces). However, in modern recipes, the US legal cup of 240 ml (about 8.45 British imperial fluid ounces or 8.12 US customary fluid ounces) is more commonly used.
Dry measure
In Europe, recipes normally weigh non-liquid ingredients in grams rather than measuring volume. For example, where an American recipe might specify "1 cup of sugar and 2 cups of milk", a European recipe might specify "200 g sugar and 500 ml of milk". A precise conversion between the two measures takes into account the density of the ingredients, and some recipes specify both weight and volume to facilitate this conversion. Many European measuring cups have markings that indicate the weight of common ingredients for a given volume.
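A volume-to-weight conversion of this kind is a single multiplication by density. A minimal sketch; the density values below are typical kitchen approximations chosen by us for illustration, not figures from this article:

```python
# Cup-to-gram conversion via ingredient density.
# Densities here are rough kitchen approximations (our assumption).
US_CUP_ML = 236.5882365  # half a US liquid pint

DENSITY_G_PER_ML = {
    "granulated sugar": 0.85,
    "milk": 1.03,
}

def cups_to_grams(cups, ingredient):
    """Convert a US-cup measure of an ingredient to grams."""
    return cups * US_CUP_ML * DENSITY_G_PER_ML[ingredient]

print(round(cups_to_grams(1, "granulated sugar")))  # about 200 g of sugar
print(round(cups_to_grams(2, "milk")))              # about 490 g of milk
```

With these assumed densities, 1 cup of sugar comes out near the "200 g" of the example above, which is why such conversions must be per-ingredient rather than a single universal factor.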
Thrinaxodon is an extinct genus of cynodonts, including the species T. liorhinus, which lived in what are now South Africa and Antarctica during the Late Permian and Early Triassic. Thrinaxodon lived just after the Permian–Triassic mass extinction event; its survival during the extinction may have been due to its burrowing habits.
Similar to other therapsids, Thrinaxodon adopted a semi-sprawling posture, an intermediary form between the sprawling position of basal tetrapods and the more upright posture present in current mammals. Thrinaxodon is prevalent in the fossil record in part because it was one of the few carnivores of its time, and was larger than similar cynodont carnivores.
Description
Thrinaxodon was a small synapsid, roughly the size of a fox, and possibly covered in hair. The dentition suggests that it was a carnivore, feeding mostly on insects, small herbivores and other invertebrates. Its unique secondary palate separated the nasal passages from the rest of the mouth, allowing Thrinaxodon to continue mastication without pausing to breathe, an adaptation important for digestion.
Skull
The nasals of Thrinaxodon are pitted with a large number of foramina. The nasals narrow anteriorly, expand posteriorly, and articulate directly with the frontals, prefrontals and lacrimals; however, there is no contact with the jugals or the orbitals. The maxilla of Thrinaxodon is also heavily pitted with foramina. The arrangement of foramina on the snout of Thrinaxodon resembles that of lizards, such as Tupinambis, and also bears a single large infraorbital foramen. As such, Thrinaxodon would have had non-muscular lips like those of lizards, not mobile, muscular ones like those of mammals. Without the infraorbital foramen and its associated facial flexibility, it is unlikely that Thrinaxodon would have had whiskers.
On the skull roof of Thrinaxodon, the fronto-nasal suture forms an arrow shape instead of the transverse suture seen in more primitive skull morphologies. The prefrontals, slightly anterior and ventral to the frontals, are very small and contact the post-orbitals, frontals, nasals and lacrimals. More posteriorly on the skull, the parietals lack a sagittal crest. The cranial roof is narrowest just posterior to the parietal foramen, which is very nearly circular in shape. The temporal crests remain quite discrete throughout the length of the skull. The temporal fenestrae have been found with ossified fasciae, giving evidence of some type of temporal muscle attachment.
The upper jaw contains a secondary palate which separates the nasal passages from the rest of the mouth, which would have given Thrinaxodon the ability to breathe uninterrupted, even if food had been kept in its mouth. This adaptation would have allowed Thrinaxodon to mash its food more thoroughly, decreasing the amount of time necessary for digestion. The maxillae and palatines meet medially in the upper jaw, developing a midline suture. The maxillopalatine suture also includes a posterior palatine foramen. The large palatal roof component of the vomer in Thrinaxodon is just dorsal to the choana, or interior nasal passages. The pterygoid bones extend in the upper jaw and enclose small interpterygoid vacuities that are present on each side of the cultriform processes of the parasphenoids. The parasphenoid and basisphenoid are fused, except for the most anterior/dorsal end of the fused bones, in which there is a slight separation in the trabecular attachment of the basisphenoid.
The otic region is defined by the regions surrounding the temporal fenestrae. Most notable is evidence of a deep recess just anterior to the fenestra ovalis, containing evidence of smooth muscle interactions with the skull. Such interactions have been interpreted as indicative of the tympanum, implying that this recess, in conjunction with the fenestra ovalis, outlines the origin of the ear in Thrinaxodon. This is a new synapomorphy, as this physiology arose in Thrinaxodon and was conserved through late Cynodontia. The stapes contained a heavy cartilage plug, which fit into the sides of the fenestra ovalis; however, only one half of the articular end of the stapes was able to cover the fenestra ovalis. The remainder of this pit opens to an "un-ossified" region which comes somewhat close to the cochlear recess, suggesting that inner ear articulation occurred directly within this region.
The skull of Thrinaxodon is an important transitional fossil which supports the simplification of synapsid skulls over time. The most notable jump in bone-number reduction occurred between Thrinaxodon and Probainognathus, a change so dramatic that the fossil record for this particular transition is most likely incomplete. Thrinaxodon contains fewer skull bones than its pelycosaurian ancestors.
Dentition
Data on the dentition of Thrinaxodon liorhinus were compiled using a micro-CT scanner on a large sample of Thrinaxodon skulls spanning a range of lengths. These dentition patterns are similar to those of Morganucodon, suggesting that they arose within Thrinaxodontidae and extended into early Mammalia. Adult T. liorhinus has a dental formula of four incisors, one canine and six postcanines on each side of the upper jaw, and three incisors, one canine and seven or eight postcanines on each side of the lower jaw. With this formula, adult Thrinaxodon had anywhere between 44 and 46 teeth in total.
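The 44–46 total follows directly from the dental formula; a one-line arithmetic check:

```python
# Total tooth count implied by the adult dental formula above:
# upper: 4 incisors + 1 canine + 6 postcanines per side
# lower: 3 incisors + 1 canine + 7 or 8 postcanines per side
upper = 4 + 1 + 6
lower_min, lower_max = 3 + 1 + 7, 3 + 1 + 8
total_min = 2 * (upper + lower_min)   # both sides of the jaw
total_max = 2 * (upper + lower_max)
print(total_min, total_max)           # 44 46
```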
Upper incisors in T. liorhinus have a backwards-directed cusp, being curved and pointed at their most distal point, and becoming broader and rounder towards their proximal insertion point into the premaxilla. The fourth upper incisor is roughly homologous with a small canine tooth in form, but is positioned too far anteriorly to be a functional canine, ruling it out as an instance of convergent evolution. Lower incisors possess a very broad base, which is progressively reduced heading distally towards the tip of the tooth. The lingual face of the lower incisors is most often concave while the labial face is often convex, and these lower incisors are oriented anteriorly, except in some cases the third lower incisor, which can assume a more dorsoventral orientation. The incisors are, for the most part, single functional teeth with a broad, cone-like morphology. The canines of T. liorhinus possess small dorsoventrally-directed facets on their surfaces, which appear to be involved with occlusion (dentition alignment in upper- and lower-jaw closure). Each canine possesses a replacement canine located within the jaw, posterior to the existing canine; neither the replacement nor the functional canines possess serrated margins, only the small facets. It is important to note that the lower canine is directed almost vertically (dorsoventrally) while the upper canine is directed slightly anteriorly.
The upper and lower postcanines in T. liorhinus share some common features but also vary considerably in comparison to one another. The first postcanine (just posterior to the canine) is most often smaller than the other postcanines and is most often bicuspid. If any of the postcanines, including the first, is bicuspid, then the posterior accessory cusp is present and that tooth will have no cingular or labial cusps. If, however, the tooth is tricuspid, then cingular cusps may develop; if this occurs, the anterior cusp is the first to appear and is the most pronounced. Among the upper postcanines, no teeth possess more than three cusps, and no labial cusps occur. The majority of upper postcanines in the juvenile Thrinaxodon are bicuspid, while only one of these upper teeth is tricuspid. The upper postcanines of an intermediate (between juvenile and adult) Thrinaxodon are all tricuspid with no labial or cingular cusps. The adult upper postcanines retain the intermediate form and possess only tricuspid teeth; however, cingular cusps may develop in these adult teeth. The ultimate (posterior-most) upper postcanine is often the smallest of these teeth in the entire jaw system. Little is known of the juvenile and intermediate forms of the lower postcanines in Thrinaxodon, but the adult lower postcanines all possess multiple (more than three) cusps as well as the only occurrence of labial cusps. Some older specimens have been found that possess no multiple-cusped lower postcanines, possibly a response to old age or tooth replacement.
Thrinaxodon shows one of the first occurrences of replacement teeth in cynodonts. This was discerned by the presence of replacement pits, which are situated lingual to the functional tooth in the incisors and postcanines. While a replacement canine does exist, more often than not it is not erupted and the original functional canine remains.
Histology
The bone tissue of Thrinaxodon consists of fibro-lamellar bone, to a varying degree across the separate limbs, most of which develops into parallel-fibred bone tissue towards the periphery. Each of the bones contains a large abundance of globular osteocyte lacunae which radiate a multitude of branched canaliculi. Ontogenetically early bones, mostly consisting of fibro-lamellar tissue, possessed a large number of vascular canals. These canals are oriented longitudinally within primary osteons that contain radial anastomoses. Regions consisting mostly of parallel-fibred bone tissue contain few simple vascular canals in comparison to the nearby fibro-lamellar tissues. Parallel-fibred peripheral bone tissue indicates that bone growth had begun to slow, presumably with the age of the specimen in question. Combined with the greater organization of osteocyte lacunae in the periphery of adult T. liorhinus, this suggests that the animal grew very quickly in order to reach adulthood at an accelerated rate. Such an ontogenetic pattern had not been seen before Thrinaxodon, establishing the idea that rapidly reaching peak size was an adaptively advantageous trait that arose with Thrinaxodon.
Within the femur of Thrinaxodon, there is no major region of the bone made of parallel-fibred tissue; however, there is a small ring of parallel-fibred bone within the mid-cortex. The remainder of the femur is made of fibro-lamellar tissue, although the globular osteocyte lacunae become much more organized and the primary osteons assume less vasculature than in many other bones as one approaches the subperiosteal surface. The femur contains very few bony trabeculae. The humerus differs from the femur in many regards, one being a more extensive network of bony trabeculae near the medullary cavity of the bone. The globular osteocyte lacunae become more flattened closer to the midshaft of the humerus. While vasculature is present, the humerus contains no secondary osteons. The radii and ulnae of Thrinaxodon show roughly the same histological patterns. In contrast to the humeri and femora, the parallel-fibred region is far more distinct in the distal bones of the forelimb. The medullary cavities are surrounded by multiple layers of very poorly vascularized endosteal lamellar tissue, along with very large cavities near the medullary cavities of the metaphyses.
Discovery and naming
Thrinaxodon was originally discovered in the Lystrosaurus Assemblage Zone of the Beaufort Group of South Africa. The genoholotype, BMNH R 511, was described in 1887 by Richard Owen as a plesiotype of Galesaurus planiceps. In 1894, Harry Govier Seeley made it a separate genus, with Thrinaxodon liorhinus as the type species. The generic name was taken from the Ancient Greek for "trident tooth", from thrinax and odon. The specific name is Latinised Greek for "smooth-nosed".
Thrinaxodon was initially believed to be isolated to that region. Other fossils in South Africa were recovered from the Normandien and Katberg Formations. It was not until 1977 that additional fossils of Thrinaxodon were discovered in the Fremouw Formation of Antarctica. Upon the discovery there, numerous analyses were done to determine whether a new species of Thrinaxodontidae had been found, or whether this was another area that T. liorhinus called home. The first was to evaluate the average number of pre-sacral vertebrae in the Antarctic versus the African Thrinaxodon. The data showed a slight difference between the two: the African T. liorhinus had 26 pre-sacrals, while the Antarctic Thrinaxodon had 27. In comparison to other cynodonts, 27 pre-sacrals appeared to be the norm throughout this sub-section of the fossil record. The next step was to evaluate the size of the skull in the two discovery groups, and this study found no difference between them, the first indication that they might in fact be the same species. The ribs were the final feature to be cross-examined, and while they showed slight differences in the expanded ribs, the most important synapomorphy remained consistent between the two: the intercostal plates overlapped with one another. These evaluations led to the conclusion that a new species of Thrinaxodontidae had not been found; rather, Thrinaxodon had occupied two different geographical regions, which today are separated by an immense expanse of ocean. This discovery was one of many to support the idea of a connected land mass, implying that during the Early Triassic, Africa and Antarctica must have been linked in some way.
Classification
Thrinaxodon belongs to the clade Epicynodontia, a subdivision of the greater clade Cynodontia. Cynodontia eventually led to the evolution of Morganucodon and all other Mammalia. Cynodontia belongs to the clade Therapsida, the first major clade along the synapsid line. Synapsida represents one of two major branches of the clade Amniota, the other being Sauropsida, the larger clade containing today's reptiles, birds and crocodilians. Thrinaxodon thus represents a fossil transitional in morphology on the road to humans and other extant mammals.
Paleobiology
Ontogeny
There appear to be nine cranial features that successfully separate Thrinaxodon into four ontogenetic stages. In general, the Thrinaxodon skull increased in size isometrically, except for four regions, one of which is the optic region. Much of the data suggest that the sagittal crest lengthened at a greater rate than the rest of the skull. The posterior sagittal crest appears at an earlier ontogenetic stage than the more anterior crest, and in conjunction with the dorsal deposition of bone, a unified sagittal crest developed rather than a single suture spanning the entire length of the skull.
The bone histology of Thrinaxodon indicates that it most likely had very rapid bone growth during juvenile development, and much slower development throughout adulthood, giving rise to the idea that Thrinaxodon reached peak size very early in its life. | Thrinaxodon | Wikipedia | 329 | 2969781 | https://en.wikipedia.org/wiki/Thrinaxodon | Biology and health sciences | Proto-mammals | Animals |
Posture
The posture of Thrinaxodon is an interesting subject, because it represents a transition between the sprawling posture of the more lizard-like pelycosaurs and the more upright posture found in modern, and many extinct, Mammalia. In cynodonts such as Thrinaxodon, the distal femoral condyle articulates with the acetabulum in a way that permits the hindlimb to be held at a 45-degree angle to the rest of the body. This is a large difference from the distal femoral condyle of pelycosaurs, which forces the femur to be parallel with the ground and the animal to assume a sprawling posture. More interesting is an adaptation that has only been observed within Thrinaxodontidae, which allowed them to assume an upright posture, similar to that of early Mammalia, within their burrows. These changes in posture are supported by physiological changes in the torso of Thrinaxodon, such as the first appearance of a segmented rib compartment, in which Thrinaxodon expresses both thoracic and lumbar vertebrae. The thoracic segment of the vertebral column contains ribs with large intercostal plates that most likely assisted with either protection or support of the main frame of the back. This newly developed arrangement left appropriate space for a diaphragm; however, without proper soft-tissue records, the presence of a diaphragm is purely speculative.
Burrowing
Thrinaxodon has been identified as a burrowing cynodont through numerous discoveries of preserved burrow hollows. There is evidence that the burrows were in fact built by Thrinaxodon itself, rather than being leftover burrows of other creatures that it simply inhabited. Due to the evolution of a vertebral column segmented into thoracic, lumbar and sacral vertebrae, Thrinaxodon was able to achieve flexibility that permitted it to rest comfortably within smaller burrows, which may have led to habits such as aestivation or torpor. This evolution of a segmented rib cage suggests that this may have been the first instance of a diaphragm in the synapsid fossil record; however, without proper soft-tissue impressions this is nothing more than an assumption.
The earliest discovery of a burrowing Thrinaxodon dates the specimen to around 251 million years ago, a time frame surrounding the Permian–Triassic extinction event. Many of these fossils were found in the flood plains of the Karoo Basin in South Africa. This behaviour occurred relatively rarely in the pre-Cenozoic, dominated by therapsids, Early Triassic cynodonts and some early Mammalia. Thrinaxodon was in fact the first burrowing cynodont to be found, showing behavioural patterns similar to those of Trirachodon. The first burrowing vertebrate on record was the dicynodont synapsid Diictodon, and it is possible that these burrowing patterns passed on to later cynodonts owing to the adaptive advantage of burrowing during the extinction. The burrow of Thrinaxodon consists of two laterally sloping halves, a pattern that has only been observed in burrowing non-mammalian Cynodontia. The changes in vertebral and rib anatomy that arose in Thrinaxodon permitted the animal a greater range of flexibility and the ability to place its snout underneath its hindlimbs, an adaptive response to small living quarters, in order to preserve warmth and/or for aestivation purposes.
A Thrinaxodon burrow contained an injured temnospondyl, Broomistega. The burrow was scanned using a synchrotron, which allowed its contents to be observed without damaging the intact specimens. The scan revealed an injured rhinesuchid, Broomistega putterilli, showing signs of broken or damaged limbs and two skull perforations, most likely inflicted by the canines of another carnivore. The distance between the perforations was measured against the distance between the canines of the Thrinaxodon in question, and no such relation was found. We may therefore assume that the temnospondyl found refuge in the burrow after a traumatic experience, and that the T. liorhinus allowed it to stay in its burrow until both ultimately met their deaths. Interspecific shelter sharing is a rare anomaly within the fossil record; this T. liorhinus shows one of the first occurrences of this type of behaviour, but it is currently unknown whether the temnospondyl inhabited the burrow before or after the death of the nesting Thrinaxodon.
Gnetum gnemon is a gymnosperm species of Gnetum; its native range spans from Mizoram and Assam in India south through the Malay Peninsula, the Malay Archipelago and the Philippines in Southeast Asia to the western Pacific islands. Common names include gnetum, joint fir, two leaf, melinjo/belinjo (Indonesian), bago (Filipino), and tulip (Tok Pisin).
Description
This species can easily be confused with an angiosperm: through convergent evolution, its female strobili are fruit-like, its leaves broad, and its male strobili flower-like.
Tree
It is a small to medium-sized tree (unlike most other Gnetum species, which are lianas), growing to 15–22 metres tall and with a trunk diameter of up to 40 cm (16 in). In addition to the tree form, there are also varieties that include shrub forms (brunonianum, griffithii, and tenerum). The leaves are evergreen, opposite, 8–20 cm long and 3–10 cm broad, entire, emerging bronze-coloured and maturing glossy dark green.
The tree does not flower but instead grows male and female sporing organs on single long stems 3–6 centimetres long. Male strobili are small and arranged on long stalks, which are often mistaken for flowers; melinjo fruit are instead produced from the fertilised female strobili.
Fruit
The oval fruit (technically a strobilus) measures 1–3.5 cm long; it consists of a thin velvety integument and a large nut-like endosperm 2–4 cm long inside. Fleshy strobili weigh about 5.5 g, the endosperm alone 3.8 g. The fruit changes colour from yellow to orange, purple or pink when ripe. The melinjo season in Indonesia comes three times a year, in March–April, June–July, and September–October, while the fruiting season in the northeastern Philippines runs mainly from June to September.
Uses
Culinary
Gnetum nuts are eaten boiled, roasted, or raw in most parts of Southeast Asia and Melanesia. The young leaves, flowers, and the outer flesh of the fruits are also edible when cooked and are eaten in Indonesia, the Philippines, Thailand, Vanuatu, Papua New Guinea, the Solomon Islands, and Fiji. They have a slightly sour taste and are commonly eaten in soups and stews.
Gnetum is most widely used in Indonesian cuisine, where it is known as melinjo or belinjo. The seeds are used for sayur asem (sour vegetable soup) and are also made into raw chips that are later deep-fried as crackers (emping, a type of krupuk). The crackers have a slightly bitter taste and are frequently served as a snack or accompaniment to Indonesian dishes.
This plant is commonly cultivated throughout the Aceh region and is regarded as a vegetable of high status. Its male strobili, young leaves and female strobili are used as ingredients in a traditional vegetable curry called . This dish is served on all important traditional occasions, such as and . In the Pidie district, the women pick the red-skinned ripe fruit and make from it.
Phytochemicals
Melinjo strobili have been found to be rich in a stilbenoid identified as a resveratrol dimer. This result was reported at the XXIII International Conference on Polyphenols in Canada in 2006.
Melinjo resveratrol has antibacterial and antioxidative activity and works as a food preservative, off-flavour inhibitor and taste enhancer. The species may therefore have applications in food industries that avoid synthetic chemicals in their processes.
Four new stilbene oligomers, gnemonols G, H, I and J, were isolated from an acetone extract of the root of Gnetum gnemon, along with five known stilbenoids: ampelopsin E, cis-ampelopsin E, and gnetins C, D and E.
Extraction of the dried leaves of Gnetum gnemon with 1:1 acetone–water gave the C-glycosylflavones isovitexin, vicenin II, isoswertisin, swertisin, swertiajaponin and isoswertiajaponin.
Separation of a 50% ethanol extract of the dried endosperms yielded gnetin C, gnetin L (a new stilbenoid), gnemonosides A, C and D, and resveratrol, which were tested for DPPH radical scavenging, antimicrobial activity, and inhibition of porcine pancreatic lipase and α-amylase. Gnetin C showed the strongest effects among these stilbenoids.
Oral administration of the 50% ethanol extract of melinjo fruit at 100 mg/kg/day significantly enhanced the production of the Th1 cytokines IL-2 and IFN-γ irrespective of concanavalin-A stimulation, whereas the production of the Th2 cytokines IL-4 and IL-5 was not affected. New stilbene glucosides gnemonoside L and gnemonoside M, and known stilbenoids resveratrol, isorhapontigenin, gnemonoside D, gnetins C and E were isolated from the extract. Gnemonoside M strongly enhanced Th1 cytokine production in cultured Peyer's patch cells from mice at 10 mg/kg/day. | Gnetum gnemon | Wikipedia | 257 | 1500225 | https://en.wikipedia.org/wiki/Gnetum%20gnemon | Biology and health sciences | Gymnosperms (except conifers) | Plants |
In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s).
It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.
There are several definitions of R2 that are only sometimes equivalent. One class of such cases includes that of simple linear regression, where r2 is used instead of R2. When an intercept is included, then r2 is simply the square of the sample correlation coefficient (i.e., r) between the observed outcomes and the observed predictor values. If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1.
There are cases where R2 can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept, or when a non-linear function is used to fit the data. In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion.
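A minimal numerical sketch (with hypothetical observed and predicted values that were not derived from fitting these data) shows how the definition R2 = 1 − SSres/SStot can go negative:

```python
import numpy as np

# Hypothetical data: the predictions were NOT fit to these observations,
# so nothing forces SSres <= SStot, and R^2 may be negative.
y_obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([5.0, 4.0, 3.0, 2.0, 1.0])  # anti-correlated "model"

ss_res = np.sum((y_obs - y_pred) ** 2)        # 40.0
ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)  # 10.0
r2 = 1 - ss_res / ss_tot
print(r2)  # -3.0: the mean of y_obs fits far better than these predictions
```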
The coefficient of determination can be more intuitively informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared to SMAPE on certain test datasets. | Coefficient of determination | Wikipedia | 399 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
When evaluating the goodness-of-fit of simulated (Ypred) versus measured (Yobs) values, it is not appropriate to base this on the R2 of the linear regression between them (i.e., Yobs = m·Ypred + b). The R2 quantifies the degree of any linear correlation between Yobs and Ypred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: Yobs = 1·Ypred + 0 (i.e., the 1:1 line).
Definitions
A data set has n values marked y1, ..., yn (collectively known as yi or as a vector y = [y1, ..., yn]T), each associated with a fitted (or modeled, or predicted) value f1, ..., fn (known as fi, or sometimes ŷi, as a vector f).
Define the residuals as ei = yi − fi (forming a vector e).
If ȳ is the mean of the observed data:
ȳ = (1/n) Σ yi,
then the variability of the data set can be measured with two sums of squares formulas:
The sum of squares of residuals, also called the residual sum of squares:
SSres = Σ (yi − fi)² = Σ ei²
The total sum of squares (proportional to the variance of the data):
SStot = Σ (yi − ȳ)²
The most general definition of the coefficient of determination is
R2 = 1 − SSres/SStot.
In the best case, the modeled values exactly match the observed values, which results in SSres = 0 and R2 = 1. A baseline model, which always predicts ȳ, will have R2 = 0.
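These definitions translate directly into code; the following sketch (sample data hypothetical) computes R2 as 1 − SSres/SStot:

```python
import numpy as np

def r_squared(y, f):
    """Coefficient of determination: R^2 = 1 - SSres / SStot."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    ss_res = np.sum((y - f) ** 2)         # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))          # 1.0  (model matches observations exactly)
print(r_squared(y, [2.5] * 4))  # 0.0  (baseline: always predict the mean)
```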
Relation to unexplained variance
In a general form, R2 can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (variance of the model's errors) with the total variance (of the data):
R2 = 1 − FVU = 1 − SSres/SStot.
As explained variance
A larger value of R2 implies a more successful regression model.
Suppose R2 = 0.49. This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for.
For regression models, the regression sum of squares, also called the explained sum of squares, is defined as
SSreg = Σ (fi − ȳ)²
In some cases, as in simple linear regression, the total sum of squares equals the sum of the two other sums of squares defined above:
SSres + SSreg = SStot.
See Partitioning in the general OLS model for a derivation of this result for one case where the relation holds. When this relation does hold, the above definition of R2 is equivalent to
R2 = SSreg/SStot,
where n is the number of observations (cases) on the variables.
In this form R2 is expressed as the ratio of the explained variance (variance of the model's predictions, which is SSreg/n) to the total variance (sample variance of the dependent variable, which is SStot/n).
This partition of the sum of squares holds for instance when the model values ƒi have been obtained by linear regression. A milder sufficient condition reads as follows: The model has the form
fi = α̂ + β̂qi,
where the qi are arbitrary values that may or may not depend on i or on other free parameters (the common choice qi = xi is just one special case), and the coefficient estimates α̂ and β̂ are obtained by minimizing the residual sum of squares.
This set of conditions is an important one and it has a number of implications for the properties of the fitted residuals and the modelled values. In particular, under these conditions:
As squared correlation coefficient
In linear least squares multiple regression (with a fitted intercept), R2 equals the square of the Pearson correlation coefficient between the observed and modeled (predicted) data values of the dependent variable.
In a linear least squares regression with a single explanator (with fitted intercept and slope), this is also equal to the squared Pearson correlation coefficient between the dependent variable y and the explanatory variable x.
It should not be confused with the correlation coefficient between two coefficient estimates, defined as
ρ(β̂1, β̂2) = cov(β̂1, β̂2) / (σβ̂1 · σβ̂2),
where the covariance between the two coefficient estimates, as well as their standard deviations, are obtained from the covariance matrix of the coefficient estimates, Cov(β̂).
Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, an R2 value can be calculated as the square of the correlation coefficient between the original and modeled data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values (by creating a revised predictor of the form α + β·fi). According to Everitt, this usage is specifically the definition of the term "coefficient of determination": the square of the correlation between two (general) variables.
Interpretation
R2 is a measure of the goodness of fit of a model. In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data.
Values of R2 outside the range 0 to 1 occur when the model fits the data worse than a horizontal hyperplane at a height equal to the mean of the observed data (i.e., worse than always predicting the mean). This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. If equation 1 of Kvålseth is used (this is the equation used most often), R2 can be less than zero. If equation 2 of Kvålseth is used, R2 can be greater than one.
In all instances where R2 is used, the predictors are calculated by ordinary least-squares regression: that is, by minimizing SSres. In this case, R2 increases as the number of variables in the model is increased (R2 is monotone increasing with the number of variables included—it will never decrease). This illustrates a drawback to one possible use of R2, where one might keep adding variables (kitchen sink regression) to increase the R2 value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one can include probably irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car because the R2 will never decrease as variables are added and will likely experience an increase due to chance alone.
This leads to the alternative approach of looking at the adjusted R2. The explanation of this statistic is almost the same as R2 but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, the R2 statistic can be calculated as above and may still be a useful measure. If fitting is by weighted least squares or generalized least squares, alternative versions of R2 can be calculated appropriate to those statistical frameworks, while the "raw" R2 may still be useful if it is more easily interpreted. Values for R2 can be calculated for any type of predictive model, which need not have a statistical basis. | Coefficient of determination | Wikipedia | 485 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
In a multiple linear model
Consider a linear model with more than a single explanatory variable, of the form
Yi = β0 + Σj βj Xi,j + εi,
where, for the ith case, Yi is the response variable, Xi,1, ..., Xi,p are p regressors, and εi is a mean zero error term. The quantities β0, ..., βp are unknown coefficients, whose values are estimated by least squares. The coefficient of determination R2 is a measure of the global fit of the model. Specifically, R2 is an element of [0, 1] and represents the proportion of variability in Yi that may be attributed to some linear combination of the regressors (explanatory variables) in X.
R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, R2 = 1 indicates that the fitted model explains all variability in y, while R2 = 0 indicates no 'linear' relationship (for straight line regression, this means that the straight line model is a constant line (slope = 0, intercept = ȳ) between the response variable and regressors). An interior value such as R2 = 0.7 may be interpreted as follows: "Seventy percent of the variance in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or inherent variability."
A caution that applies to R2, as to other statistical descriptions of correlation and association is that "correlation does not imply causation." In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches (or a lighter) is correlated with incidence of lung cancer, but carrying matches does not cause cancer (in the standard sense of "cause").
In case of a single regressor, fitted by least squares, R2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, R2 is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, the R2 can be referred to as the coefficient of multiple determination. | Coefficient of determination | Wikipedia | 468 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
Inflation of R2
In least squares regression using typical data, R2 is at least weakly increasing with an increase in the number of regressors in the model. Because increases in the number of regressors increase the value of R2, R2 alone cannot be used as a meaningful comparison of models with very different numbers of independent variables. For a meaningful comparison between two models, an F-test can be performed on the residual sum of squares SSres, similar to the F-tests in Granger causality, though this is not always appropriate. As a reminder of this, some authors denote R2 by Rq2, where q is the number of columns in X (the number of explanators including the constant).
To demonstrate this property, first recall that the objective of least squares linear regression is
min over b of SSres(b) = Σi (yi − Xib)²,
where Xi is a row vector of values of explanatory variables for case i and b is a column vector of coefficients of the respective elements of Xi.
The optimal value of the objective is weakly smaller as more explanatory variables are added and hence additional columns of X (the explanatory data matrix whose ith row is Xi) are added, by the fact that less constrained minimization leads to an optimal cost which is weakly smaller than more constrained minimization does. Given the previous conclusion and noting that SStot depends only on y, the non-decreasing property of R2 follows directly from the definition above.
The intuitive reason that using an additional explanatory variable cannot lower the R2 is this: Minimizing SSres is equivalent to maximizing R2. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and the R2 unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves the R2.
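This non-decreasing property is easy to demonstrate numerically; the sketch below (synthetic data, illustrative names) fits an OLS model with and without an additional pure-noise regressor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

def ols_r2(columns, y):
    """R^2 of an OLS fit with intercept on the given regressor columns."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

r2_small = ols_r2([x], y)
noise = rng.normal(size=n)       # regressor unrelated to y
r2_large = ols_r2([x, noise], y)
print(r2_large >= r2_small)      # True: R^2 never decreases when adding a regressor
```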
The above gives an analytical explanation of the inflation of R2. Next, an example based on ordinary least square from a geometric perspective is shown below. | Coefficient of determination | Wikipedia | 404 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
A simple case to be considered first:
y = β1x1 + e.
This equation describes the ordinary least squares regression model with one regressor. The prediction is shown as the red vector in the figure on the right. Geometrically, it is the projection of the observed vector y onto the model space spanned by x1 (without intercept). The residual is shown as the red line.
y = β1x1 + β2x2 + e.
This equation corresponds to the ordinary least squares regression model with two regressors. The prediction is shown as the blue vector in the figure on the right. Geometrically, it is the projection of y onto the larger model space spanned by x1 and x2 (without intercept). Noticeably, the fitted values of β1 and β2 are not the same as in the equation for the smaller model space as long as x1 and x2 are not zero vectors. Therefore, the equations are expected to yield different predictions (i.e., the blue vector is expected to be different from the red vector). The least squares regression criterion ensures that the residual is minimized. In the figure, the blue line representing the residual is orthogonal to the model space spanned by x1 and x2, giving the minimal distance from the space.
The smaller model space is a subspace of the larger one, and thereby the residual of the smaller model is guaranteed to be at least as large. Comparing the red and blue lines in the figure, the blue line is orthogonal to the larger space, and any other residual line would be longer than the blue one. Considering the calculation for R2, a smaller value of SSres will lead to a larger value of R2, meaning that adding regressors will result in inflation of R2.
Caveats
R2 does not indicate whether:
the independent variables are a cause of the changes in the dependent variable;
omitted-variable bias exists;
the correct regression was used;
the most appropriate set of independent variables has been chosen;
there is collinearity present in the data on the explanatory variables;
the model might be improved by using transformed versions of the existing set of independent variables;
there are enough data points to make a solid conclusion;
there are a few outliers in an otherwise good sample.
Extensions
Adjusted R2 | Coefficient of determination | Wikipedia | 416 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
The use of an adjusted R2 (one common notation is R̄2, pronounced "R bar squared") is an attempt to account for the phenomenon of the R2 automatically increasing when extra explanatory variables are added to the model. There are many different ways of adjusting. By far the most used one, to the point that it is typically just referred to as the adjusted R2, is the correction proposed by Mordecai Ezekiel.
The adjusted R2 is defined as
R̄2 = 1 − (SSres/dfres) / (SStot/dftot),
where dfres is the degrees of freedom of the estimate of the population variance around the model, and dftot is the degrees of freedom of the estimate of the population variance around the mean. dfres is given in terms of the sample size n and the number of variables p in the model: dfres = n − p − 1. dftot is given in the same way, but with p being zero for the mean, i.e. dftot = n − 1.
Inserting the degrees of freedom and using the definition of R2, it can be rewritten as:
R̄2 = 1 − (1 − R2)(n − 1)/(n − p − 1),
where p is the total number of explanatory variables in the model (excluding the intercept), and n is the sample size.
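A minimal sketch of Ezekiel's correction, R̄2 = 1 − (1 − R2)(n − 1)/(n − p − 1), with hypothetical values of R2, n and p:

```python
def adjusted_r2(r2, n, p):
    """Ezekiel's adjusted R^2: n observations, p regressors (intercept excluded)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# The same raw R^2 is penalized more heavily as regressors are added:
print(adjusted_r2(0.80, n=25, p=2))   # ~0.7818
print(adjusted_r2(0.80, n=25, p=10))  # ~0.6571
```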
The adjusted R2 can be negative, and its value will always be less than or equal to that of R2. Unlike R2, the adjusted R2 increases only when the increase in R2 (due to the inclusion of a new explanatory variable) is more than one would expect to see by chance. If a set of explanatory variables with a predetermined hierarchy of importance are introduced into a regression one at a time, with the adjusted R2 computed each time, the level at which adjusted R2 reaches a maximum, and decreases afterward, would be the regression with the ideal combination of having the best fit without excess/unnecessary terms. | Coefficient of determination | Wikipedia | 370 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
The adjusted R2 can be interpreted as an instance of the bias-variance tradeoff. When we consider the performance of a model, a lower error represents a better performance. When the model becomes more complex, the variance will increase whereas the square of bias will decrease, and these two metrics add up to the total error. Combining these two trends, the bias-variance tradeoff describes a relationship between the performance of the model and its complexity, which is shown as a u-shaped curve on the right. For the adjusted R2 specifically, the model complexity (i.e. the number of parameters) affects both R2 and the ratio (n − 1)/(n − p − 1), and the adjusted statistic thereby captures their combined effect on the overall performance of the model.
R2 can be interpreted as the variance of the model, which is influenced by the model complexity. A high R2 indicates a lower bias error, because the model can better explain the change of Y with predictors. For this reason, we make fewer (erroneous) assumptions, and this results in a lower bias error. Meanwhile, to accommodate fewer assumptions, the model tends to be more complex. Based on the bias-variance tradeoff, a higher complexity will lead to a decrease in bias and a better performance (below the optimal line). In the adjusted R2, the term (1 − R2) will be lower with high complexity, resulting in a higher adjusted R2, consistently indicating a better performance.
On the other hand, the ratio (n − 1)/(n − p − 1) is affected by model complexity in the opposite direction: it increases when regressors are added (i.e. increased model complexity) and leads to worse performance. Based on the bias-variance tradeoff, a higher model complexity (beyond the optimal line) leads to increasing errors and a worse performance.
Considering the calculation of the adjusted R2, more parameters will increase the R2 and lead to an increase in the adjusted R2. Nevertheless, adding more parameters will also increase the ratio (n − 1)/(n − p − 1) and thus decrease the adjusted R2. These two trends construct a reverse u-shaped relationship between model complexity and the adjusted R2, which is consistent with the u-shaped trend of model complexity versus overall performance. Unlike R2, which will always increase when model complexity increases, the adjusted R2 will increase only when the bias eliminated by the added regressor is greater than the variance introduced simultaneously. Using the adjusted R2 instead of R2 could thereby prevent overfitting.
Following the same logic, adjusted R2 can be interpreted as a less biased estimator of the population R2, whereas the observed sample R2 is a positively biased estimate of the population value. Adjusted R2 is more appropriate when evaluating model fit (the variance in the dependent variable accounted for by the independent variables) and in comparing alternative models in the feature selection stage of model building.
The principle behind the adjusted R2 statistic can be seen by rewriting the ordinary R2 as
R2 = 1 − VARres/VARtot,
where VARres = SSres/n and VARtot = SStot/n are the sample variances of the estimated residuals and the dependent variable respectively, which can be seen as biased estimates of the population variances of the errors and of the dependent variable. These estimates are replaced by statistically unbiased versions: VARres = SSres/(n − p − 1) and VARtot = SStot/(n − 1).
Despite using unbiased estimators for the population variances of the error and the dependent variable, adjusted R2 is not an unbiased estimator of the population R2, which would result from using the population variances of the errors and the dependent variable instead of estimating them. Ingram Olkin and John W. Pratt derived the minimum-variance unbiased estimator for the population R2, which is known as the Olkin–Pratt estimator. Comparisons of different approaches for adjusting R2 concluded that in most situations either an approximate version of the Olkin–Pratt estimator or the exact Olkin–Pratt estimator should be preferred over the (Ezekiel) adjusted R2.
Coefficient of partial determination
The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model. This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model.
The calculation for the partial R2 is relatively straightforward after estimating two models and generating the ANOVA tables for them. The calculation for the partial R2 is
(SSres, reduced − SSres, full) / SSres, reduced,
which is analogous to the usual coefficient of determination:
(SStot − SSres) / SStot.
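With the residual sums of squares of a reduced and a full model in hand, the partial R2 is (SSres,reduced − SSres,full)/SSres,reduced; a sketch on synthetic data (variable names and data are illustrative):

```python
import numpy as np

def ss_res(columns, y):
    """Residual sum of squares of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

rng = np.random.default_rng(1)
n = 40
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

ss_reduced = ss_res([x1], y)   # model without x2
ss_full = ss_res([x1, x2], y)  # model with x2
partial_r2 = (ss_reduced - ss_full) / ss_reduced
print(partial_r2)  # share of the reduced model's residual variation explained by x2
```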
Generalizing and decomposing R2 | Coefficient of determination | Wikipedia | 416 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
As explained above, model selection heuristics such as the adjusted R2 criterion and the F-test examine whether the total R2 sufficiently increases to determine if a new regressor should be added to the model. If a regressor is added to the model that is highly correlated with other regressors which have already been included, then the total R2 will hardly increase, even if the new regressor is of relevance. As a result, the above-mentioned heuristics will ignore relevant regressors when cross-correlations are high.
Alternatively, one can decompose a generalized version of R2 to quantify the relevance of deviating from a hypothesis. As Hoornweg (2018) shows, several shrinkage estimators – such as Bayesian linear regression, ridge regression, and the (adaptive) lasso – make use of this decomposition of R2 when they gradually shrink parameters from the unrestricted OLS solutions towards the hypothesized values. Let us first define the linear regression model as
y = Xβ + ε.
It is assumed that the matrix X is standardized with Z-scores and that the column vector y is centered to have a mean of zero. Let the column vector b0 refer to the hypothesized regression parameters and let the column vector b denote the estimated parameters. We can then define
R2 = 1 − (y − Xb)ᵀ(y − Xb) / (y − Xb0)ᵀ(y − Xb0).
An R2 of 75% means that the in-sample accuracy improves by 75% if the data-optimized b solutions are used instead of the hypothesized b0 values. In the special case that b0 is a vector of zeros, we obtain the traditional R2 again.
The individual effect on R2 of deviating from a hypothesis can be computed with R⊗ ('R-outer'), a p × p matrix. The diagonal elements of R⊗ exactly add up to R2. If regressors are uncorrelated and b0 is a vector of zeros, then the jth diagonal element of R⊗ simply corresponds to the r2 value between xj and y. When regressors xj and xk are correlated, the jth diagonal element might increase at the cost of a decrease in the kth. As a result, the diagonal elements of R⊗ may be smaller than 0 and, in more exceptional cases, larger than 1. To deal with such uncertainties, several shrinkage estimators implicitly take a weighted average of the diagonal elements of R⊗ to quantify the relevance of deviating from a hypothesized value.
R2 in logistic regression
In the case of logistic regression, usually fit by maximum likelihood, there are several choices of pseudo-R2.
One is the generalized R2 originally proposed by Cox & Snell, and independently by Magee:
R2 = 1 − (L(0)/L(θ̂))^(2/n),
where L(0) is the likelihood of the model with only the intercept, L(θ̂) is the likelihood of the estimated model (i.e., the model with a given set of parameter estimates) and n is the sample size. It is easily rewritten to:
R2 = 1 − e^(−D/n),
where D = 2 ln(L(θ̂)/L(0)) is the test statistic of the likelihood ratio test.
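Given the log-likelihoods of the intercept-only and fitted models (the numeric values below are hypothetical), the Cox & Snell statistic follows directly from R2 = 1 − exp(−D/n):

```python
import math

def cox_snell_r2(ll_null, ll_model, n):
    """Cox & Snell generalized R^2 from log-likelihoods:
    R^2 = 1 - (L0/L1)^(2/n) = 1 - exp(-D/n), with D = 2*(ll_model - ll_null)."""
    d = 2.0 * (ll_model - ll_null)
    return 1.0 - math.exp(-d / n)

# Hypothetical fit: null log-likelihood -68.3, model log-likelihood -52.1, n = 100
print(cox_snell_r2(-68.3, -52.1, 100))  # ~0.277
```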
Nico Nagelkerke noted that it had the following properties:
It is consistent with the classical coefficient of determination when both can be computed;
Its value is maximised by the maximum likelihood estimation of a model;
It is asymptotically independent of the sample size;
The interpretation is the proportion of the variation explained by the model;
The values are between 0 and 1, with 0 denoting that the model does not explain any variation and 1 denoting that it perfectly explains the observed variation;
It does not have any unit.
However, in the case of a logistic model, where L(θ̂) cannot be greater than 1, R2 is between 0 and R2max = 1 − L(0)^(2/n): thus, Nagelkerke suggested the possibility to define a scaled R2 as R2/R2max.
Comparison with residual statistics
Occasionally, residual statistics are used for indicating goodness of fit. The norm of residuals is calculated as the square root of the sum of squares of residuals (SSR):
norm of residuals = √SSR.
Similarly, the reduced chi-square is calculated as the SSR divided by the degrees of freedom.
Both R2 and the norm of residuals have their relative merits. For least squares analysis R2 varies between 0 and 1, with larger numbers indicating better fits and 1 representing a perfect fit. The norm of residuals varies from 0 to infinity, with smaller numbers indicating better fits and zero indicating a perfect fit. One advantage and disadvantage of R2 is that the SStot term acts to normalize the value. If the yi values are all multiplied by a constant, the norm of residuals will also change by that constant but R2 will stay the same. As a basic example, consider the linear least squares fit to the set of data:
{| class="wikitable"
! x
| 1 || 2 || 3 || 4 || 5
|-
! y
| 1.9 || 3.7 || 5.8 || 8.0 || 9.6
|} | Coefficient of determination | Wikipedia | 506 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
R2 = 0.998, and norm of residuals = 0.302.
If all values of y are multiplied by 1000 (for example, in an SI prefix change), then R2 remains the same, but norm of residuals = 302.
Another single-parameter indicator of fit is the RMSE of the residuals, or standard deviation of the residuals. This would have a value of 0.135 for the above example given that the fit was linear with an unforced intercept.
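The figures quoted above can be reproduced with a short NumPy sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 3.7, 5.8, 8.0, 9.6])

slope, intercept = np.polyfit(x, y, 1)  # least-squares line y = 1.97x - 0.11
resid = y - (slope * x + intercept)

ss_res = np.sum(resid ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot      # ~0.998
norm = np.sqrt(ss_res)          # ~0.302
rmse = np.sqrt(ss_res / len(y)) # ~0.135 (standard deviation of the residuals)

# Multiplying y by 1000 scales the residual norm by 1000 but leaves R^2 unchanged:
r2_scaled = 1.0 - (ss_res * 1e6) / (ss_tot * 1e6)
```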
History
The creation of the coefficient of determination has been attributed to the geneticist Sewall Wright and was first published in 1921. | Coefficient of determination | Wikipedia | 128 | 1500869 | https://en.wikipedia.org/wiki/Coefficient%20of%20determination | Mathematics | Probability | null |
SN 1006 was a supernova that is likely the brightest observed stellar event in recorded history, reaching an estimated −7.5 visual magnitude, and exceeding roughly sixteen times the brightness of Venus. Appearing between April 30 and May 1, 1006, in the constellation of Lupus, this "guest star" was described by observers across China, Japan, modern-day Iraq, Egypt, and Europe, and was possibly recorded in North American petroglyphs. Some reports state it was clearly visible in the daytime. Modern astronomers now consider its distance from Earth to be about 7,200 light-years or 2,200 parsecs.
Historic reports
Egyptian astrologer and astronomer Ali ibn Ridwan, writing in a commentary on Ptolemy's Tetrabiblos, stated that the "spectacle was a large circular body, 2 to 3 times as large as Venus. The sky was shining because of its light. The intensity of its light was a little more than a quarter that of Moon light" (or perhaps "than the light of the Moon when one-quarter illuminated").
Like all other observers, Ali ibn Ridwan noted that the new star was low on the southern horizon. Some astrologers interpreted the event as a portent of plague and famine.
The most northerly sighting is recorded in the Annales Sangallenses maiores of the Abbey of Saint Gall in Switzerland, at a latitude of 47.5° north.
Monks at St. Gall provided independent data as to its magnitude and location in the sky, writing that
"[i]n a wonderful manner this was sometimes contracted, sometimes diffused, and moreover sometimes extinguished ... It was seen likewise for three months in the inmost limits of the south, beyond all the constellations which are seen in the sky".
This description is often taken as probable evidence that the supernova was of type Ia.
In The Book of Healing, the Iranian philosopher Ibn Sina reported observing this supernova from northeastern Iran. He described it as a transient celestial object that was stationary and tailless ("a star among the stars"); that it remained for close to three months, growing fainter and fainter until it disappeared; that it threw out sparks, that is, it scintillated and was very bright; and that its color changed with time.
Some sources state that the star was bright enough to cast shadows; it was certainly seen during daylight hours for some time.
According to the Songshi, the official history of the Song dynasty (sections 56 and 461), the star seen on May 1, 1006, appeared to the south of the constellation Di, between Lupus and Centaurus. It shone so brightly that objects on the ground could be seen at night.
By December, it was again sighted in the constellation Di. The Chinese astrologer Zhou Keming, who was on his return to Kaifeng from his duty in Guangdong, interpreted the star to the emperor on May 30 as an auspicious star, yellow in color and brilliant in its brightness, that would bring great prosperity to the state over which it appeared. The reported color yellow should be taken with some suspicion, however, because Zhou may have chosen a favorable color for political reasons.
There appear to have been two distinct phases in the early evolution of this supernova: first a three-month period during which it was at its brightest; after this period it diminished, then returned for a period of about eighteen months.
Petroglyphs by the Hohokam in White Tank Mountain Regional Park, Arizona, and by the Ancestral Puebloans in Chaco Culture National Historical Park, New Mexico, have been interpreted as the first known North American representations of the supernova, though other researchers remain skeptical. The White Tank Mountain Regional Park petroglyph depicts a "star-like object" over a scorpion symbol. It has been contested that the scorpion represents the constellation Scorpius given a lack of evidence that the Native Americans interpreted the stars of that constellation as a scorpion.
Earlier observations discovered from Yemen may indicate a sighting of SN 1006 on April 17, two weeks before its previously assumed earliest observation.
Remnant
SN 1006's associated supernova remnant from this event was not identified until 1965, when Doug Milne and Frank Gardner used the Parkes radio telescope to demonstrate a connection to known radio source PKS 1459−41.
The remnant is located near the star Beta Lupi and displays a 30 arcmin circular shell.
X-ray and optical emission from this remnant have also been detected, and during 2010 the H.E.S.S. gamma-ray observatory announced the detection of very-high-energy gamma-ray emission from the remnant.
No associated neutron star or black hole has been found, which is the situation expected for the remnant of a Type Ia supernova (a class of explosion believed to completely disrupt its progenitor star).
A survey in 2012 to find any surviving companions of the SN 1006 progenitor found no subgiant or giant companion stars, indicating that SN 1006 most likely had a double-degenerate progenitor; that is, it resulted from the merger of two white dwarf stars.
Remnant SNR G327.6+14.6 has an estimated distance of 2.2 kpc from Earth, making the true linear diameter approximately 20 parsecs.
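The roughly 20-parsec figure can be reproduced from the quoted angular size and distance with the small-angle approximation; the numbers below are taken from the text above.

```python
import math

# Small-angle relation: linear size ≈ distance × angular size in radians.
distance_pc = 2200.0            # ~2.2 kpc from Earth
angular_size_arcmin = 30.0      # ~30 arcmin circular shell

angular_size_rad = math.radians(angular_size_arcmin / 60.0)
diameter_pc = distance_pc * angular_size_rad
# About 19 pc, consistent with the "approximately 20 parsecs" in the text.
```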
Effect on Earth
Research has suggested that type Ia supernovae can irradiate the Earth with significant amounts of gamma-ray flux, compared with the typical flux from the Sun, up to distances on the order of 1 kiloparsec. SN 1006 lies well beyond 1 kiloparsec, and it did not appear to have significant effects on Earth. However, a signal of its outburst can be found in nitrate deposits in Antarctic ice.
Destructive distillation is a chemical process in which decomposition of unprocessed material is achieved by heating it to a high temperature; the term generally applies to processing of organic material in the absence of air or in the presence of limited amounts of oxygen or other reagents, catalysts, or solvents, such as steam or phenols. It is an application of pyrolysis. The process breaks up or "cracks" large molecules. Coke, coal gas, gas carbon, coal tar, ammonia liquor, and coal oil are examples of commercial products historically produced by the destructive distillation of coal.
Destructive distillation of any particular inorganic feedstock produces only a small range of products as a rule, but destructive distillation of many organic materials commonly produces very many compounds, often hundreds, although not all products of any particular process are of commercial importance. The distillates are generally of lower molecular weight than the feedstock. Some fractions, however, polymerise or condense small molecules into larger molecules, including heat-stable tarry substances and chars. Cracking feedstocks into liquid and volatile compounds, and polymerising or forming chars and solids, may both occur in the same process, and either class of product might be of commercial interest.
Currently the major industrial application of destructive distillation is to coal.
Historically, the process of destructive distillation and other forms of pyrolysis led to the discovery of many chemical compounds, or the elucidation of their structures, before contemporary organic chemists had developed the processes to synthesise or specifically investigate the parent molecules. Especially in the early days, investigation of the products of destructive distillation, like those of other destructive processes, helped chemists deduce the chemical nature of many natural materials. Well-known examples include the deduction of the structures of pyranoses and furanoses.
History
In his encyclopedic work Natural History, the Roman naturalist and author Pliny the Elder (23/24–79 CE) describes how, in the destructive distillation of pine wood, two liquid fractions are produced: a lighter one (aromatic oils) and a heavier one (pitch). The lighter fraction is released in the form of gases, which are condensed and collected.
Process
The process of pyrolysis can be conducted in a distillation apparatus (retort) to form the volatile products for collection. The mass of the product will represent only a part of the mass of the feedstock, because much of the material remains as char, ash, and non-volatile tars. In contrast, combustion consumes most of the organic matter, and the net weight of the products amounts to roughly the same mass as the fuel and oxidant consumed.
Destructive distillation and related processes are in effect the modern industrial descendants of traditional charcoal burning crafts. As such they are of industrial significance in many regions, such as Scandinavia. The modern processes are sophisticated and require careful engineering to produce the most valuable possible products from the available feedstocks.
Applications
Destructive distillation of wood produces methanol and acetic acid, together with a solid residue of charcoal.
Destructive distillation of a tonne of coal can produce 700 kg of coke, 100 liters of ammonia liquor, 50 liters of coal tar, and 400 m3 of coal gas.
Destructive distillation is an increasingly promising method for recycling monomers derived from waste polymers.
Destructive distillation of natural rubber resulted in the discovery of isoprene, which led to the creation of synthetic rubbers such as neoprene.
Dinocephalians (terrible heads) are a clade of large-bodied early therapsids that flourished in the Early and Middle Permian between 279.5 and 260 million years ago (Ma), but became extinct during the Capitanian mass extinction event. Dinocephalians included herbivorous, carnivorous, and omnivorous forms. Many species had thickened skulls with many knobs and bony projections. Dinocephalians were the first non-mammalian therapsids to be scientifically described and their fossils are known from Russia, China, Brazil, South Africa, Zimbabwe, and Tanzania.
Description
Apart from the biarmosuchians, the dinocephalians are the least advanced therapsids, although still uniquely specialised in their own way. They retain a number of primitive characteristics (e.g. no secondary palate, small dentary) shared with their pelycosaur ancestors, although they are also more advanced in possessing therapsid adaptations like the expansion of the ilium and more erect limbs.
They include carnivorous, herbivorous, and omnivorous forms. Some, like Keratocephalus, Moschops, Struthiocephalus and Jonkeria were semiaquatic, others, like Anteosaurus, were more terrestrial.
Dinocephalians were among the largest animals of the Permian period; only the biggest caseids and pareiasaurs reached them in size.
Size
Dinocephalians were generally large. The biggest herbivores (Tapinocephalus) and omnivores (Titanosuchus) may have weighed up to , and were some long, while the largest carnivores (such as Titanophoneus and Anteosaurus) were at least as long, with heavy skulls long, and overall masses of around a half-tonne.
Skull
All dinocephalians are distinguished by the interlocking incisor (front) teeth. Correlated features are the distinctly downturned facial region, a deep temporal region, and forwardly rotated suspensorium. Shearing contact between the upper and lower teeth (allowing food to be more easily sliced into small bits for digestion) is achieved through keeping a fixed quadrate and a hinge-like movement at the jaw articulation. The lower teeth are inclined forward, and occlusion is achieved by the interlocking of the incisors. The later dinocephalians improved on this system by developing heels on the lingual sides of the incisor teeth that met against one another to form a crushing surface when the jaws were shut.
Most dinocephalians also developed pachyostosis of the bones in the skull, which seems to have been an adaptation for intra-specific behaviour (head-butting), perhaps for territory or a mate. In some types, such as Estemmenosuchus and Styracocephalus, there are also horn-like structures, which evolved independently in each case.
Evolutionary history
The dinocephalians are an ancient group and their ancestry is not clear. It is assumed that they must have evolved during the earlier part of the Roadian, or possibly even the Kungurian epoch, but no trace of these earliest forms has been found. These animals radiated at the expense of the declining pelycosaurs, which dominated during the early part of the Permian and may even have gone extinct due to competition with therapsids, especially the short-lived but highly dominant dinocephalians. Even the earliest members, the estemmenosuchids and early brithopodids of the Russian Ocher fauna, were already a diverse group of herbivores and carnivores.
During the Wordian and early Capitanian, advanced dinocephalians radiated into a large number of herbivorous forms, representing a diverse megafauna. This is well known from the Tapinocephalus Assemblage Zone of the Southern African Karoo.
At the height of their diversity (middle or late Capitanian age) all the dinocephalians suddenly died out, during the Capitanian mass extinction event. The reason for their extinction is not clear; although disease, sudden climatic change, or other factors of environmental stress may have brought about their end. They were replaced by much smaller therapsids; herbivorous dicynodonts and carnivorous biarmosuchians, gorgonopsians and therocephalians.
Taxonomy
Class Synapsida
Order Therapsida
Suborder Dinocephalia
?Driveria
?Mastersonia
Family Estemmenosuchidae
Estemmenosuchus
Molybdopygus
?Parabradysaurus
?Family Phreatosuchidae
Phreatosaurus
Phreatosuchus
?Family Phthinosuchidae
Phthinosuchus
?Phthinosaurus
Family Rhopalodontidae
?Phthinosaurus
Rhopalodon
Clade Anteosauria
Family Anteosauridae
Family Brithopodidae
Family Deuterosauridae
Clade Tapinocephalia
?Dimacrodon
?Driveria
?Mastersonia
Family Styracocephalidae
Family Tapinocephalidae
Family Titanosuchidae
Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting the motion when a body (such as a ball, tire, or wheel) rolls on a surface. It is mainly caused by non-elastic effects; that is, not all the energy needed for deformation (or movement) of the wheel, roadbed, etc., is recovered when the pressure is removed. Two forms of this are hysteresis losses (see below), and permanent (plastic) deformation of the object or the surface (e.g. soil). Note that the slippage between the wheel and the surface also results in energy dissipation. Although some researchers have included this term in rolling resistance, some suggest that this dissipation term should be treated separately from rolling resistance because it is due to the applied torque to the wheel and the resultant slip between the wheel and ground, which is called slip loss or slip resistance. In addition, only the so-called slip resistance involves friction, therefore the name "rolling friction" is to an extent a misnomer.
Analogous with sliding friction, rolling resistance is often expressed as a coefficient times the normal force. This coefficient of rolling resistance is generally much smaller than the coefficient of sliding friction.
Any coasting wheeled vehicle will gradually slow down due to rolling resistance including that of the bearings, but a train car with steel wheels running on steel rails will roll farther than a bus of the same mass with rubber tires running on tarmac/asphalt. Factors that contribute to rolling resistance are the (amount of) deformation of the wheels, the deformation of the roadbed surface, and movement below the surface. Additional contributing factors include wheel diameter, load on wheel, surface adhesion, sliding, and relative micro-sliding between the surfaces of contact. The losses due to hysteresis also depend strongly on the material properties of the wheel or tire and the surface. For example, a rubber tire will have higher rolling resistance on a paved road than a steel railroad wheel on a steel rail. Also, sand on the ground will give more rolling resistance than concrete. Soil rolling resistance factor is not dependent on speed.
Primary cause
The primary cause of pneumatic tire rolling resistance is hysteresis:
A characteristic of a deformable material such that the energy of deformation is greater than the energy of recovery. The rubber compound in a tire exhibits hysteresis. As the tire rotates under the weight of the vehicle, it experiences repeated cycles of deformation and recovery, and it dissipates the hysteresis energy loss as heat. Hysteresis is the main cause of energy loss associated with rolling resistance and is attributed to the viscoelastic characteristics of the rubber.
— National Academy of Sciences
This main principle is illustrated in the figure of the rolling cylinders. If two equal cylinders are pressed together then the contact surface is flat. In the absence of surface friction, contact stresses are normal (i.e. perpendicular) to the contact surface. Consider a particle that enters the contact area at the right side, travels through the contact patch and leaves at the left side. Initially its vertical deformation is increasing, which is resisted by the hysteresis effect. Therefore, an additional pressure is generated to avoid interpenetration of the two surfaces. Later its vertical deformation is decreasing. This is again resisted by the hysteresis effect. In this case this decreases the pressure that is needed to keep the two bodies separate.
The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion.
Materials that have a large hysteresis effect, such as rubber, which bounce back slowly, exhibit more rolling resistance than materials with a small hysteresis effect that bounce back more quickly and more completely, such as steel or silica. Low rolling resistance tires typically incorporate silica in place of carbon black in their tread compounds to reduce low-frequency hysteresis without compromising traction. Note that railroads also have hysteresis in the roadbed structure.
Definitions
In the broad sense, specific "rolling resistance" (for vehicles) is the force per unit vehicle weight required to move the vehicle on level ground at a constant slow speed where aerodynamic drag (air resistance) is insignificant and also where there are no traction (motor) forces or brakes applied. In other words, the vehicle would be coasting if it were not for the force to maintain constant speed. This broad sense includes wheel bearing resistance, the energy dissipated by vibration and oscillation of both the roadbed and the vehicle, and sliding of the wheel on the roadbed surface (pavement or a rail).
But there is an even broader sense that would include energy wasted by wheel slippage due to the torque applied from the engine. This includes the increased power required due to the increased velocity of the wheels where the tangential velocity of the driving wheel(s) becomes greater than the vehicle speed due to slippage. Since power is equal to force times velocity and the wheel velocity has increased, the power required has increased accordingly.
The pure "rolling resistance" for a train is that which happens due to deformation and possible minor sliding at the wheel-road contact. For a rubber tire, an analogous energy loss happens over the entire tire, but it is still called "rolling resistance". In the broad sense, "rolling resistance" includes wheel bearing resistance, energy loss by shaking both the roadbed (and the earth underneath) and the vehicle itself, and by sliding of the wheel, road/rail contact. Railroad textbooks seem to cover all these resistance forces but do not call their sum "rolling resistance" (broad sense) as is done in this article. They just sum up all the resistance forces (including aerodynamic drag) and call the sum basic train resistance (or the like).
Since railroad rolling resistance in the broad sense may be a few times larger than just the pure rolling resistance, reported values may be in serious conflict, since they may be based on different definitions of "rolling resistance". The train's engines must, of course, provide the energy to overcome this broad-sense rolling resistance.
For tires, rolling resistance is defined as the energy consumed by a tire per unit distance covered. It is also called rolling friction or rolling drag. It is one of the forces that act to oppose the motion of a vehicle. The main reason for this is that when the tires are in motion and touch the surface, the surface changes shape and causes deformation of the tire.
For highway motor vehicles, there is some energy dissipated in shaking the roadway (and the earth beneath it), the shaking of the vehicle itself, and the sliding of the tires. But, other than the additional power required due to torque and wheel bearing friction, non-pure rolling resistance doesn't seem to have been investigated, possibly because the "pure" rolling resistance of a rubber tire is several times higher than the neglected resistances.
Rolling resistance coefficient
The "rolling resistance coefficient" is defined by the following equation:

F = Crr N

where
F is the rolling resistance force (shown in figure 1),
Crr is the dimensionless rolling resistance coefficient or coefficient of rolling friction (CRF), and
N is the normal force, the force perpendicular to the surface on which the wheel is rolling.
Crr is the force needed to push (or tow) a wheeled vehicle forward (at constant speed on a level surface, or zero grade, with zero air resistance) per unit force of weight. It is assumed that all wheels are the same and bear identical weight. Thus Crr = 0.01 means that it would take only 0.01 pounds to tow a vehicle weighing one pound. For a 1000-pound vehicle, it would take 1000 times more tow force, i.e. 10 pounds. One could say that Crr is in lb(tow-force)/lb(vehicle weight). Since this lb/lb is force divided by force, Crr is dimensionless. Multiplying it by 100 gives the percent (%) of the weight of the vehicle required to maintain slow steady speed. Crr is often multiplied by 1000 to get parts per thousand, which is the same as kilograms (kg force) per metric ton (tonne = 1000 kg), which is the same as pounds of resistance per 1000 pounds of load, or newtons per kilonewton, etc. For the US railroads, lb/ton has traditionally been used; this is just 2000·Crr. Thus, these are all just measures of resistance per unit vehicle weight. While they are all "specific resistances", sometimes they are just called "resistance", although they are really a coefficient (ratio) or a multiple thereof. If using pounds or kilograms as force units, mass is equal to weight (in Earth's gravity a mass of one kilogram weighs one kilogram and exerts one kilogram of force), so one could also claim that Crr is the force per unit mass in such units. The SI system would use N/tonne (N/T, N/t), which is 1000·g·Crr and is force per unit mass, where g is the acceleration of gravity in SI units (metres per second squared).
The above shows resistance proportional to the normal force (the vehicle weight) but does not explicitly show any variation with speed, loads, torque, surface roughness, diameter, tire inflation/wear, etc., because Crr itself varies with those factors. It might seem from the above definition of Crr that the rolling resistance is directly proportional to vehicle weight, but it is not.
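As a concrete sketch of the defining relation F = Crr N and of the unit conversions discussed above (the car mass and Crr value are the illustrative figures used elsewhere in this article):

```python
# Rolling resistance force from the dimensionless coefficient: F = Crr * N.
g = 9.81  # standard gravity, m/s^2

def rolling_force(crr, mass_kg):
    normal_force = mass_kg * g       # level ground, so N = m * g
    return crr * normal_force        # force in newtons

force_n = rolling_force(0.01, 1000.0)   # ~98 N for a 1000 kg car, Crr = 0.01

# The same Crr expressed as the specific resistances mentioned above:
percent_of_weight = 0.01 * 100       # 1% of vehicle weight
kg_per_tonne = 0.01 * 1000           # 10 kgf of tow force per tonne
newtons_per_tonne = 0.01 * 1000 * g  # ~98 N per tonne
```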
Measurement
There are at least two popular models for calculating rolling resistance.
"Rolling resistance coefficient (RRC). The value of the rolling resistance force divided by the wheel load. The Society of Automotive Engineers (SAE) has developed test practices to measure the RRC of tires. These tests (SAE J1269 and SAE J2452) are usually performed on new tires. When measured by using these standard test practices, most new passenger tires have reported RRCs ranging from 0.007 to 0.014." In the case of bicycle tires, values of 0.0025 to 0.005 are achieved. These coefficients are measured on rollers, with power meters on road surfaces, or with coast-down tests. In the latter two cases, the effect of air resistance must be subtracted or the tests performed at very low speeds.
The coefficient of rolling resistance b, which has the dimension of length, is approximately (due to the small-angle approximation tan(θ) ≈ θ) equal to the value of the rolling resistance force times the radius of the wheel divided by the wheel load.
ISO 18164:2005 is used to test rolling resistance in Europe.
The results of these tests can be hard for the general public to obtain as manufacturers prefer to publicize "comfort" and "performance".
Physical formulae
The coefficient of rolling resistance for a slow rigid wheel on a perfectly elastic surface, not adjusted for velocity, can be calculated by

Crr = sqrt(z/d)

where
z is the sinkage depth
d is the diameter of the rigid wheel
The empirical formula for Crr for cast iron mine car wheels on steel rails is:

Crr = 0.0048 (18/D)^(1/2) (100/W)^(1/4)

where
D is the wheel diameter in inches
W is the load on the wheel in pounds-force
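Both relationships can be sketched in code. The functional forms used here, Crr = sqrt(z/d) for the rigid wheel on an elastic surface and a power-law fit in D and W for the mine-car case, are reconstructions consistent with the variables listed above, not authoritative statements of the original formulas.

```python
import math

# Rigid wheel on a perfectly elastic surface (reconstructed form):
#   Crr = sqrt(z / d), sinkage depth z and wheel diameter d in the same units.
def crr_sinkage(z, d):
    return math.sqrt(z / d)

# Cast-iron mine-car wheels on steel rails (assumed empirical power law,
# D = wheel diameter in inches, W = wheel load in pounds-force):
def crr_mine_car(D_in, W_lbf):
    return 0.0048 * (18.0 / D_in) ** 0.5 * (100.0 / W_lbf) ** 0.25

c_elastic = crr_sinkage(0.001, 1.0)   # 1 mm sinkage on a 1 m wheel -> ~0.032
c_mine = crr_mine_car(18.0, 100.0)    # reference point of the fit -> 0.0048
```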
As an alternative to using Crr, one can use b, which is a different rolling resistance coefficient, or coefficient of rolling friction, with dimension of length. It is defined by the following formula:

F = N b / r

where
F is the rolling resistance force (shown in figure 1),
r is the wheel radius,
b is the rolling resistance coefficient or coefficient of rolling friction with dimension of length, and
N is the normal force (equal to W, not R, as shown in figure 1).
The above equation, where resistance is inversely proportional to radius, seems to be based on the discredited "Coulomb's law" (neither Coulomb's inverse square law nor Coulomb's law of friction); see dependence on diameter. Equating this equation with the force per the rolling resistance coefficient, F = Crr N, and solving for b, gives b = Crr·r. Therefore, if a source gives a rolling resistance coefficient (Crr) as a dimensionless coefficient, it can be converted to b, having units of length, by multiplying Crr by the wheel radius r.
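The conversion between the two coefficients is a one-liner; the radius and Crr values here are arbitrary illustrations.

```python
# b = Crr * r converts the dimensionless Crr into the length-valued
# coefficient b; dividing by r recovers Crr.
crr = 0.01            # dimensionless coefficient (illustrative)
wheel_radius_m = 0.3  # wheel radius, m (illustrative)

b_m = crr * wheel_radius_m          # 0.003 m, i.e. 3 mm
crr_back = b_m / wheel_radius_m     # recovers the original Crr
```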
Rolling resistance coefficient examples
Table of rolling resistance coefficient examples:
For example, in earth gravity, a car of 1000 kg on asphalt will need a force of around 100 newtons for rolling (1000 kg × 9.81 m/s2 × 0.01 = 98.1 N).
Dependence on diameter
Stagecoaches and railroads
According to Dupuit (1837), rolling resistance (of wheeled carriages with wooden wheels with iron tires) is approximately inversely proportional to the square root of wheel diameter. This rule has been experimentally verified for cast iron wheels (8″–24″ diameter) on steel rail and for 19th-century carriage wheels, but there are other tests on carriage wheels that do not agree. Theory of a cylinder rolling on an elastic roadway also gives this same rule. These contradict earlier (1785) tests by Coulomb of rolling wooden cylinders, in which Coulomb reported that rolling resistance was inversely proportional to the diameter of the wheel (known as "Coulomb's law"). This disputed (or wrongly applied) "Coulomb's law" is still found in handbooks, however.
Pneumatic tires
For pneumatic tires on hard pavement, it is reported that the effect of diameter on rolling resistance is negligible (within a practical range of diameters).
Dependence on applied torque
The driving torque T to overcome rolling resistance and maintain steady speed on level ground (with no air resistance) can be calculated by:

T = (V / Ω) F

where
V is the linear speed of the body (at the axle),
Ω is its rotational speed, and
F is the rolling resistance force.

It is noteworthy that V/Ω is usually not equal to the radius of the rolling body, as a result of wheel slip. Slip between the wheel and the ground inevitably occurs whenever a driving or braking torque is applied to the wheel. Consequently, the linear speed of the vehicle differs from the wheel's circumferential speed. Notably, slip does not occur in free-rolling (non-driven) wheels, which are not subjected to driving torque, except under braking. Therefore, rolling resistance, namely hysteresis loss, is the main source of energy dissipation in free-rolling wheels or axles, whereas in the drive wheels and axles slip resistance, namely the loss due to wheel slip, plays a role as well as rolling resistance. The significance of rolling or slip resistance depends largely on the tractive force, coefficient of friction, normal load, etc.
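The distinction between the geometric wheel radius and the effective rolling radius under slip can be made concrete with a small sketch (all numbers are illustrative):

```python
# With slip, the wheel surface moves faster than the vehicle, so the
# effective rolling radius V/omega is smaller than the geometric radius r.
r = 0.3        # geometric wheel radius, m (illustrative)
V = 20.0       # vehicle linear speed, m/s (illustrative)
slip = 0.02    # 2% slip: circumferential speed exceeds V by 2%

omega = V * (1.0 + slip) / r     # wheel angular speed, rad/s
effective_radius = V / omega     # = r / (1 + slip), less than r

F_rr = 100.0                     # assumed rolling resistance force, N
torque = effective_radius * F_rr # driving torque T = (V / omega) * F_rr
```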
All wheels
"Applied torque" may either be driving torque applied by a motor (often through a transmission) or a braking torque applied by brakes (including regenerative braking). Such torques result in energy dissipation above that due to the basic rolling resistance of a freely rolling wheel; this extra loss is the slip resistance. It arises in part because there is some slipping of the wheel, and, for pneumatic tires, because there is more flexing of the sidewalls due to the torque. Slip is defined such that a 2% slip means that the circumferential speed of the driving wheel exceeds the speed of the vehicle by 2%.
A small percentage slip can result in a slip resistance which is much larger than the basic rolling resistance. For example, for pneumatic tires, a 5% slip can translate into a 200% increase in rolling resistance. This is partly because the tractive force applied during this slip is many times greater than the rolling resistance force and thus much more power per unit velocity is being applied (recall power = force x velocity so that power per unit of velocity is just force). So just a small percentage increase in circumferential velocity due to slip can translate into a loss of traction power which may even exceed the power loss due to basic (ordinary) rolling resistance. For railroads, this effect may be even more pronounced due to the low rolling resistance of steel wheels. | Rolling resistance | Wikipedia | 485 | 1503750 | https://en.wikipedia.org/wiki/Rolling%20resistance | Physical sciences | Classical mechanics | Physics |
It has been shown that for a passenger car, when the tractive force is about 40% of the maximum traction, the slip resistance is almost equal to the basic rolling resistance (hysteresis loss). But in the case of a tractive force equal to 70% of the maximum traction, the slip resistance becomes 10 times larger than the basic rolling resistance.
Railroad steel wheels
In order to apply any traction to the wheels, some slippage of the wheel is required. For trains climbing up a grade, this slip is normally 1.5% to 2.5%.
Slip (also known as creep) is normally roughly directly proportional to tractive effort. An exception is when the tractive effort is so high that the wheel is close to substantial slipping (more than just a few percent, as discussed above); then slip rapidly increases with tractive effort and is no longer linear. With slightly higher applied tractive effort, the wheel spins out of control and the adhesion drops, resulting in the wheel spinning even faster. This is the type of slipping that is observable by eye; the slip of, say, 2% for traction is observed only by instruments. Such rapid slip may result in excessive wear or damage.
Pneumatic tires
Rolling resistance greatly increases with applied torque. At high torques, which apply a tangential force to the road of about half the weight of the vehicle, the rolling resistance may triple (a 200% increase). This is in part due to a slip of about 5%. The rolling resistance increase with applied torque is not linear, but increases at a faster rate as the torque becomes higher.
Dependence on wheel load
Railroad steel wheels
The rolling resistance coefficient, Crr, significantly decreases as the weight of the rail car per wheel increases. For example, an empty freight car has about twice the Crr of a loaded car (Crr=0.002 vs. Crr=0.001). This same "economy of scale" shows up in testing of mine rail cars. The theoretical Crr for a rigid wheel rolling on an elastic roadbed shows Crr to be inversely proportional to the square root of the load.
If Crr is itself dependent on wheel load per an inverse square-root rule, then for an increase in load of 2% only a 1% increase in rolling resistance occurs.
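The inverse square-root rule can be sketched numerically. The proportionality constant k below is hypothetical, chosen only for illustration; the point is the scaling, not the absolute values:

```python
import math

def crr(load_n, k=0.06):
    # Rolling resistance coefficient under the inverse-square-root rule;
    # k is a hypothetical proportionality constant, not a measured value.
    return k / math.sqrt(load_n)

def rolling_force(load_n, k=0.06):
    # F = Crr * N, so the force grows only with the square root of the load.
    return crr(load_n, k) * load_n

base = rolling_force(100_000)      # a 100 kN wheel load
heavier = rolling_force(102_000)   # the same load increased by 2%
print(f"{(heavier / base - 1) * 100:.2f}% increase in rolling resistance")
```

Because F = Crr·N and Crr ∝ 1/√N, the force scales as √N: a 2% load increase yields roughly a 1% force increase, as stated above.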
Pneumatic tires
For pneumatic tires, the direction of change in Crr (rolling resistance coefficient) depends on whether or not tire inflation is increased with increasing load. It is reported that, if inflation pressure is increased with load according to an (undefined) "schedule", then a 20% increase in load decreases Crr by 3%. But, if the inflation pressure is not changed, then a 20% increase in load results in a 4% increase in Crr. The rolling resistance force then rises by 20% from the added load plus 1.2 × 4% from the increased Crr, giving a 24.8% overall increase in rolling resistance.
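The combined effect is just the multiplication of the two factors described above (this is arithmetic, not a tire model):

```python
load_factor = 1.20  # load increased by 20%
crr_factor = 1.04   # Crr up 4% when inflation pressure is left unchanged

# Rolling resistance force F = Crr * N, so the two factors multiply.
force_factor = load_factor * crr_factor
print(f"{(force_factor - 1) * 100:.1f}% increase")  # 24.8% increase
```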
Dependence on curvature of roadway
General
When a vehicle (motor vehicle or railroad train) goes around a curve, rolling resistance usually increases. If the curve is not banked so as to exactly balance the centrifugal effect with the centripetal force supplied by the banking, there will be a net unbalanced sideways force on the vehicle.
This will result in increased rolling resistance. Banking is also known as "superelevation" or "cant" (not to be confused with rail cant of a rail). For railroads, this is called curve resistance but for roads it has (at least once) been called rolling resistance due to cornering.
Sound
Rolling friction generates sound as mechanical energy is converted to acoustic (vibrational) energy by the friction. One of the most common examples is the movement of motor vehicle tires on a roadway, a process which generates sound as a by-product. The sound generated by automobile and truck tires as they roll (especially noticeable at highway speeds) is mostly due to the percussion of the tire treads and the compression (and subsequent decompression) of air temporarily captured within the treads.
Factors that contribute to rolling resistance in tires
Several factors affect the magnitude of rolling resistance a tire generates:
As mentioned in the introduction: wheel radius, forward speed, surface adhesion, and relative micro-sliding.
Material - different fillers and polymers in tire composition can improve traction while reducing hysteresis. The replacement of some carbon black with higher-priced silica–silane is one common way of reducing rolling resistance. The use of exotic materials including nano-clay has been shown to reduce rolling resistance in high performance rubber tires. Solvents may also be used to swell solid tires, decreasing the rolling resistance.
Dimensions - rolling resistance in tires is related to the flex of sidewalls and the contact area of the tire. For example, at the same pressure, wider bicycle tires flex less in the sidewalls as they roll and thus have lower rolling resistance (although higher air resistance).
Extent of inflation - Lower pressure in tires results in more flexing of the sidewalls and higher rolling resistance. This energy conversion in the sidewalls increases resistance and can also lead to overheating and may have played a part in the infamous Ford Explorer rollover accidents.
Over-inflating tires (such as bicycle tires) may not lower the overall rolling resistance, as the tire may skip and hop over the road surface. Traction is sacrificed, and overall rolling friction may not be reduced as the wheel rotational speed changes and slippage increases.
Sidewall deflection is not a direct measurement of rolling friction. A high quality tire with a high quality (and supple) casing will flex more per unit of energy loss than a cheap tire with a stiff sidewall. Again, on a bicycle, a quality tire with a supple casing will still roll easier than a cheap tire with a stiff casing. Similarly, as Goodyear notes for truck tires, a tire with a "fuel saving" casing will benefit the fuel economy through many tread lives (i.e. retreading), while a tire with a "fuel saving" tread design will only benefit until the tread wears down.
In tires, tread thickness and shape has much to do with rolling resistance. The thicker and more contoured the tread, the higher the rolling resistance. Thus, the "fastest" bicycle tires have very little tread, and heavy duty trucks get the best fuel economy as the tire tread wears out.
Diameter effects seem to be negligible, provided the pavement is hard and the range of diameters is limited. See dependence on diameter.
Virtually all world speed records have been set on relatively narrow wheels, probably because of their aerodynamic advantage at high speed, which is much less important at normal speeds.
Temperature: with both solid and pneumatic tires, rolling resistance has been found to decrease as temperature increases (within a range of temperatures; i.e., there is an upper limit to this effect). For a rise in temperature from 30 °C to 70 °C, the rolling resistance decreased by 20–25%. Racers heat their tires before racing, but this is primarily to increase tire friction rather than to decrease rolling resistance.
Railroads: Components of rolling resistance
In a broad sense, rolling resistance can be defined as the sum of the following components:
Wheel bearing torque losses.
Pure rolling resistance.
Sliding of the wheel on the rail.
Loss of energy to the roadbed (and earth).
Loss of energy to oscillation of railway rolling stock.
Wheel bearing torque losses can be measured as a rolling resistance at the wheel rim, Crr. Railroads normally use roller bearings which are either cylindrical (Russia) or tapered (United States). The specific rolling resistance in bearings varies with both wheel loading and speed. Wheel bearing rolling resistance is lowest with high axle loads and intermediate speeds of 60–80 km/h with a Crr of 0.00013 (axle load of 21 tonnes). For empty freight cars with axle loads of 5.5 tonnes, Crr goes up to 0.00020 at 60 km/h but at a low speed of 20 km/h it increases to 0.00024 and at a high speed (for freight trains) of 120 km/h it is 0.00028. The Crr obtained above is added to the Crr of the other components to obtain the total Crr for the wheels.
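Summing the components can be sketched as follows. Only the bearing figure (Crr = 0.00013 at a 21-tonne axle load, 60–80 km/h) comes from the text above; the other component values are placeholders for illustration:

```python
# Component Crr values; only "wheel_bearings" is taken from the text above,
# the remaining magnitudes are assumed placeholders.
components = {
    "wheel_bearings": 0.00013,
    "pure_rolling": 0.00035,
    "wheel_rail_sliding": 0.00010,
    "roadbed_and_earth": 0.00015,
    "oscillation": 0.00007,
}

total_crr = sum(components.values())
axle_load_newtons = 21_000 * 9.81  # 21-tonne axle load
force_per_axle = total_crr * axle_load_newtons
print(f"total Crr = {total_crr:.5f}, resistance per axle = {force_per_axle:.0f} N")
```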
Comparing rolling resistance of highway vehicles and trains
The rolling resistance of a train's steel wheels on steel rail is far less than that of the rubber-tired wheels of an automobile or truck. The weight of trains varies greatly; in some cases they may be much heavier per passenger or per net ton of freight than an automobile or truck, but in other cases they may be much lighter.
As an example of a very heavy passenger train, in 1975, Amtrak passenger trains weighed a little over 7 tonnes per passenger, which is much heavier than an average of a little over one ton per passenger for an automobile. This means that for an Amtrak passenger train in 1975, much of the energy savings of the lower rolling resistance was lost to its greater weight.
An example of a very light high-speed passenger train is the N700 Series Shinkansen, which weighs 715 tonnes and carries 1323 passengers, resulting in a per-passenger weight of about half a tonne. This lighter weight per passenger, combined with the lower rolling resistance of steel wheels on steel rail means that an N700 Shinkansen is much more energy efficient than a typical automobile. | Rolling resistance | Wikipedia | 474 | 1503750 | https://en.wikipedia.org/wiki/Rolling%20resistance | Physical sciences | Classical mechanics | Physics |
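The per-passenger figures quoted above follow directly from the train's mass and capacity:

```python
# Figures from the text: N700 Series Shinkansen vs. 1975 Amtrak trains.
n700_mass_tonnes = 715
n700_passengers = 1323
amtrak_tonnes_per_passenger = 7  # "a little over 7 tonnes per passenger"

n700_tonnes_per_passenger = n700_mass_tonnes / n700_passengers
print(f"N700: {n700_tonnes_per_passenger:.2f} t per passenger")
print(f"Amtrak (1975) was roughly "
      f"{amtrak_tonnes_per_passenger / n700_tonnes_per_passenger:.0f}x heavier per passenger")
```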
In the case of freight, CSX ran an advertisement campaign in 2013 claiming that their freight trains move "a ton of freight 436 miles on a gallon of fuel", whereas some sources claim trucks move a ton of freight about 130 miles per gallon of fuel, indicating trains are more efficient overall.
Tip of the red-giant branch (TRGB) is a primary distance indicator used in astronomy. It uses the luminosity of the brightest red-giant-branch stars in a galaxy as a standard candle to gauge the distance to that galaxy. It has been used in conjunction with observations from the Hubble Space Telescope to determine the relative motions of the Local Cluster of galaxies within the Local Supercluster. Ground-based, 8-meter-class telescopes like the VLT are also able to measure the TRGB distance within reasonable observation times in the local universe.
Method
The Hertzsprung–Russell diagram (HR diagram) is a plot of stellar luminosity versus surface temperature for a population of stars. During the core hydrogen burning phase of a Sun-like star's lifetime, it will appear on the HR diagram at a position along a diagonal band called the main sequence. When the hydrogen at the core is exhausted, energy will continue to be generated by hydrogen fusion in a shell around the core. The center of the star will accumulate the helium "ash" from this fusion and the star will migrate along an evolutionary branch of the HR diagram that leads toward the upper right. That is, the surface temperature will decrease and the total energy output (luminosity) of the star will increase as the surface area increases.
At a certain point, the helium at the core of the star will reach a pressure and temperature where it can begin to undergo nuclear fusion through the triple-alpha process. For a star with less than 1.8 times the mass of the Sun, this will occur in a process called the helium flash. The evolutionary track of the star will then carry it toward the left of the HR diagram as the surface temperature increases under the new equilibrium. The result is a sharp discontinuity in the evolutionary track of the star on the HR diagram. This discontinuity is called the tip of the red-giant branch.
When distant stars at the TRGB are measured in the I-band (in the infrared), their luminosity is somewhat insensitive to their composition of elements heavier than helium (metallicity) or their mass; they are a standard candle with an I-band absolute magnitude of –4.0±0.1. This makes the technique especially useful as a distance indicator. The TRGB indicator uses stars in the old stellar populations (Population II).
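As a sketch of how the standard candle is used: once the apparent I-band magnitude of the TRGB discontinuity is measured in a galaxy, the distance follows from the distance modulus with M_I ≈ −4.0. The apparent magnitude below is a hypothetical example, not a measurement:

```python
import math

M_I = -4.0  # I-band absolute magnitude of the TRGB (+-0.1, from the text)

def trgb_distance_parsecs(apparent_mag_i):
    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    mu = apparent_mag_i - M_I
    return 10 ** (mu / 5 + 1)

# Hypothetical galaxy whose TRGB is observed at apparent magnitude m_I = 20.5
d_pc = trgb_distance_parsecs(20.5)
print(f"distance = {d_pc / 1e6:.2f} Mpc")  # 0.79 Mpc
```

The ±0.1 mag uncertainty on M_I translates to roughly a 5% distance uncertainty, since distance scales as 10^(Δm/5).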
Albedo is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation). Surface albedo is defined as the ratio of radiosity Je to the irradiance Ee (flux per unit area) received by a surface. The proportion reflected is not only determined by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth's surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun).
While directional-hemispherical reflectance factor is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages.
Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4–0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation).
Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, therefore it reflects far more solar energy back to space than the other types of land area or open water. Ice–albedo feedback plays an important role in global climate change. Albedo is an important concept in climate science.
Terrestrial albedo
Any albedo in visible light falls within a range of about 0.9 for fresh snow to about 0.04 for charcoal, one of the darkest substances. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body. When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean primarily because of the contribution of clouds.
Earth's surface albedo is regularly estimated via Earth observation satellite sensors such as NASA's MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. As the amount of reflected radiation is only measured for a single direction by satellite, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance. These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. BRDF can facilitate translations of observations of reflectance into albedo.
Earth's average surface temperature due to its albedo and the greenhouse effect is currently about . If Earth were frozen entirely (and hence be more reflective), the average temperature of the planet would drop below . If only the continental land masses became covered by glaciers, the mean temperature of the planet would drop to about . In contrast, if the entire Earth was covered by water – a so-called ocean planet – the average temperature on the planet would rise to almost .
In 2021, scientists reported that Earth dimmed by ~0.5% over two decades (1998–2017) as measured by earthshine using modern photometric techniques. This may have been co-caused by climate change as well as a substantial increase in global warming. However, the link to climate change has not been explored to date, and it is unclear whether or not this represents an ongoing trend.
White-sky, black-sky, and blue-sky albedo
For land surfaces, it has been shown that the albedo at a particular solar zenith angle θi can be approximated by the proportionate sum of two terms:
the directional-hemispherical reflectance at that solar zenith angle, here written α_bs(θi), sometimes referred to as black-sky albedo, and
the bi-hemispherical reflectance, here written α_ws, sometimes referred to as white-sky albedo.
With (1 − D) being the proportion of direct radiation from a given solar angle, and D being the proportion of diffuse illumination, the actual albedo α (also called blue-sky albedo) can then be given as:

α = (1 − D) α_bs(θi) + D α_ws
This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface.
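As a minimal sketch (all numbers hypothetical), the blue-sky albedo is simply the diffuse-fraction-weighted mix of the black-sky and white-sky albedos:

```python
def blue_sky_albedo(black_sky, white_sky, diffuse_fraction):
    # Weighted sum: the direct fraction (1 - D) uses the black-sky albedo,
    # the diffuse fraction D uses the white-sky albedo.
    return (1 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky

# Hypothetical surface: black-sky albedo 0.20 at this solar angle,
# white-sky albedo 0.25, with 30% of the illumination being diffuse.
alpha = blue_sky_albedo(0.20, 0.25, diffuse_fraction=0.30)
print(f"blue-sky albedo = {alpha:.3f}")  # 0.215
```

Under fully overcast skies (D = 1) the blue-sky albedo reduces to the white-sky albedo; under a perfectly clear, direct beam (D = 0) it reduces to the black-sky albedo at that solar angle.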
Changes to albedo due to human activities
Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. Human impacts to "the physical properties of the land surface can perturb the climate by altering the Earth’s radiative energy balance" even on a small scale or when undetected by satellites.
Urbanization generally decreases albedo (commonly being 0.01–0.02 lower than adjacent croplands), which contributes to global warming. Deliberately increasing albedo in urban areas can mitigate the urban heat island effect. An estimate in 2022 found that on a global scale, "an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing ~44 Gt of CO2 emissions."
Intentionally enhancing the albedo of the Earth's surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy, known as passive daytime radiative cooling (PDRC), to mitigate energy crises and global warming. Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved.
The tens of thousands of hectares of greenhouses in Almería, Spain form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the local surface area temperature of the high-albedo area, although changes were localized. A follow-up study found that "CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation" and can reduce surface temperature increases associated with climate change.
Examples of terrestrial albedo effects
Illumination
Albedo is not directly dependent on the illumination because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth's surface at that location (e.g. through melting of reflective ice). However, albedo and illumination both vary by latitude. Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics.
Insolation effects
The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, will be hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a big role in the heating and cooling effects of albedo, high insolation areas like the tropics will tend to show a more pronounced fluctuation in local temperature when local albedo changes.
Arctic regions notably release more heat back into space than they absorb, effectively cooling the Earth. This has been a concern because Arctic ice and snow have been melting at higher rates due to higher temperatures, creating regions in the Arctic that are notably darker (open water or ground of a darker colour) and reflect less heat back into space. This feedback loop results in a reduced albedo and further warming.
Climate and weather
Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather.
The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds.
Albedo–temperature feedback
When an area's albedo changes due to snowfall, a snow–temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight, leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow–temperature feedback. However, because local weather is dynamic due to the change of seasons, eventually warm air masses and a more direct angle of sunlight (higher insolation) cause melting. When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting and thus reducing the albedo further, resulting in still more heating.
Snow
Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica, snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo, and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (referred to as the ice–albedo positive feedback).
In Switzerland, citizens have been protecting their glaciers with large white tarpaulins to slow down the ice melt. These large white sheets help to reflect the sun's rays and deflect the heat. Although this method is very expensive, it has been shown to work, reducing snow and ice melt by 60%.
Just as fresh snow has a higher albedo than does dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more surface of sea water is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the process of melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming.
Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets.
The dynamical nature of albedo in response to positive feedback, together with the effects of small errors in the measurement of albedo, can lead to large errors in energy estimates. Because of this, in order to reduce the error of energy estimates, it is important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single value for albedo over broad regions.
Small-scale effects
Albedo works on a smaller scale, too. In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing.
Solar photovoltaic effects
Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the spectrally weighted albedo relevant to photovoltaic technology based on hydrogenated amorphous silicon (a-Si:H) and on crystalline silicon (c-Si) can differ from traditional spectrally integrated albedo predictions. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells, where rear surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis of the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops and residential pitched-roof applications.
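A spectrally weighted albedo can be sketched as a response-weighted average over wavelength. All numbers below are illustrative assumptions, not measured spectra:

```python
# Coarse, hypothetical spectra on a shared wavelength grid (micrometres).
wavelengths_um = [0.4, 0.7, 1.0, 1.3]
irradiance     = [1.5, 1.4, 0.7, 0.4]  # incident spectral irradiance (assumed)
surface_albedo = [0.9, 0.8, 0.5, 0.3]  # spectral albedo of the surface (assumed)
response       = [0.6, 0.9, 0.2, 0.0]  # device spectral response; an a-Si:H-like
                                       # cell that cuts off in the infrared (assumed)

# Effective albedo seen by the device: the surface albedo weighted by the
# energy the device can actually convert at each wavelength.
num = sum(r * a * e for r, a, e in zip(response, surface_albedo, irradiance))
den = sum(r * e for r, e in zip(response, irradiance))
effective_albedo = num / den
print(f"spectrally weighted albedo = {effective_albedo:.3f}")
```

A broadband (spectrally integrated) albedo would weight by irradiance alone; the gap between the two weightings is what drives the performance differences described above.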
Trees
Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: The climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo).
In the case of evergreen forests with seasonal snow cover, albedo reduction may be significant enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as strong greenhouse gas, and can increase albedo when it condenses into clouds. Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate.
Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit.
In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18 whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy.
Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and (cooling) effect of carbon sequestration on planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming.
Research in 2023, drawing from 176 flux stations globally, revealed a climate trade-off: increased carbon uptake from afforestation results in reduced albedo. Initially, this reduction may lead to moderate global warming over a span of approximately 20 years, but it is expected to transition into significant cooling thereafter.
Water
Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations.
At the scale of the wavelength of light even wavy water is always smooth so the light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs.-incident-angle curve and a locally increased average incident angle.
Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light.
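The angular behaviour described above can be reproduced directly from the Fresnel equations for a smooth air-water interface (the refractive index of water taken as 1.33):

```python
import math

def water_reflectance(theta_i_deg, n1=1.0, n2=1.33):
    # Unpolarized Fresnel reflectance at a smooth air-water interface.
    ti = math.radians(theta_i_deg)
    tt = math.asin(math.sin(ti) * n1 / n2)  # Snell's law for the refracted angle
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return (rs + rp) / 2  # average of s- and p-polarized reflectances

for angle in (0, 30, 60, 85):
    print(f"{angle:2d} deg: R = {water_reflectance(angle):.3f}")
```

At normal incidence the reflectance is only about 2%, but it climbs steeply toward grazing incidence, matching the behaviour described in the text.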
Note that white caps on waves look white (and have high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh 'black' ice exhibits Fresnel reflection.
Snow on top of this sea ice increases the albedo to 0.9.
Clouds
Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. "On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth."
Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during Iraqi occupation showed that temperatures under the burning oil fires were as much as colder than temperatures several miles away under clear skies.
Aerosol effects
Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth's radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain.
Black carbon
Another albedo-related effect on the climate is from black carbon particles. The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m⁻², with a range of +0.1 to +0.4 W m⁻². Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo.
Astronomical albedo
In astronomy, the term albedo can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved.
Optical or visual albedo
The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle ("phase angle"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids.
Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds.
The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies.
Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion.
In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles. It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation. | Albedo | Wikipedia | 499 | 39 | https://en.wikipedia.org/wiki/Albedo | Physical sciences | Astrometry | null |
An important relationship between an object's astronomical (geometric) albedo A, absolute magnitude H, and diameter D is given by:

D = (1329 / √A) × 10^(−H/5)

where A is the astronomical albedo, D is the diameter in kilometers, and H is the absolute magnitude.
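The albedo–magnitude–diameter relation just described is commonly written as D = (1329 / √A) × 10^(−H/5), with D in kilometers. A minimal Python sketch (the function name is illustrative):

```python
import math

def asteroid_diameter_km(albedo: float, abs_magnitude: float) -> float:
    """Diameter in km from V-band geometric albedo A and absolute
    magnitude H, via D = (1329 / sqrt(A)) * 10**(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-abs_magnitude / 5.0)

# A body with H = 3.34 and albedo 0.09 comes out near 950 km,
# roughly the size of Ceres.
print(asteroid_diameter_km(0.09, 3.34))
```

Note how strongly the estimate depends on albedo: halving A inflates the inferred diameter by √2, which is why albedo measurements matter for sizing unresolved objects.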
Radar albedo
In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g. Moon, asteroid, etc.) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, σ_OC, σ_SC, or σ_T (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (perfect reflector) at the same distance as the target that would return the same echo power.
Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering.
For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo):
σ̂_OC = σ_OC / (π R²)

where the denominator is the effective cross-sectional area of the target object with mean radius R. A smooth metallic sphere would have σ̂_OC = 1.
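The normalization described here is just the OC radar cross-section divided by the target's projected area; a small Python sketch (names are illustrative):

```python
import math

def oc_radar_albedo(sigma_oc: float, mean_radius: float) -> float:
    """Normalized OC radar albedo: OC radar cross-section divided by
    the projected area pi * R**2 of a sphere with the target's mean
    radius. Units of sigma_oc and mean_radius must be consistent
    (e.g. km^2 and km)."""
    return sigma_oc / (math.pi * mean_radius ** 2)

# An echo whose OC cross-section is 10% of the projected area of a
# body with 10 km mean radius gives a radar albedo of 0.1.
sigma = 0.1 * math.pi * 10.0 ** 2
print(oc_radar_albedo(sigma, 10.0))
```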
Radar albedos of Solar System objects
The values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in those references.
Relationship to surface bulk density
In the event that most of the echo is from first-surface reflections, the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (also called reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths; the wavelength is typically at the decimeter scale) using empirical relationships between reflectivity and bulk density.
History
The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work Photometria. | Albedo | Wikipedia | 491 | 39 | https://en.wikipedia.org/wiki/Albedo | Physical sciences | Astrometry | null |
International Atomic Time (abbreviated TAI, from its French name temps atomique international) is a high-precision atomic coordinate time standard based on the notional passage of proper time on Earth's geoid. TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. It is a continuous scale of time, without leap seconds, and it is the principal realisation of Terrestrial Time (with a fixed offset of epoch). It is the basis for Coordinated Universal Time (UTC), which is used for civil timekeeping all over the Earth's surface and which has leap seconds.
UTC deviates from TAI by a number of whole seconds. Since the most recent leap second took effect on 31 December 2016, UTC has been exactly 37 seconds behind TAI. The 37 seconds result from the initial difference of 10 seconds at the start of 1972, plus 27 leap seconds in UTC since 1972. In 2022, the General Conference on Weights and Measures decided to abandon the leap second by or before 2035, at which point the difference between TAI and UTC will remain fixed.
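The arithmetic behind the 37-second figure is simple bookkeeping, sketched here in Python:

```python
# TAI - UTC offset: 10 s initial difference when leap seconds began
# in 1972, plus one second per leap second inserted since then.
INITIAL_OFFSET_1972 = 10      # seconds
LEAP_SECONDS_SINCE_1972 = 27  # count through the most recent leap second

tai_minus_utc = INITIAL_OFFSET_1972 + LEAP_SECONDS_SINCE_1972
print(tai_minus_utc)  # 37
```

The offset only changes when a new leap second is inserted, so between leap seconds the conversion UTC = TAI − 37 s is exact.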
TAI may be reported using traditional means of specifying days, carried over from non-uniform time standards based on the rotation of the Earth. Specifically, both Julian days and the Gregorian calendar are used. TAI in this form was synchronised with Universal Time at the beginning of 1958, and the two have drifted apart ever since, due primarily to the slowing rotation of the Earth.
Operation
TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. The majority of the clocks involved are caesium clocks; the International System of Units (SI) definition of the second is based on caesium. The clocks are compared using GPS signals and two-way satellite time and frequency transfer. Due to the signal averaging TAI is an order of magnitude more stable than its best constituent clock.
The participating institutions each broadcast, in real time, a frequency signal with timecodes, which is their estimate of TAI. Time codes are usually published in the form of UTC, which differs from TAI by a well-known integer number of seconds. These time scales are denoted in the form UTC(NPL) in the UTC form, where NPL here identifies the National Physical Laboratory, UK. The TAI form may be denoted TAI(NPL). The latter is not to be confused with TA(NPL), which denotes an independent atomic time scale, not synchronised to TAI or to anything else. | International Atomic Time | Wikipedia | 508 | 334 | https://en.wikipedia.org/wiki/International%20Atomic%20Time | Technology | Timekeeping | null |
The clocks at different institutions are regularly compared against each other. The International Bureau of Weights and Measures (BIPM, France), combines these measurements to retrospectively calculate the weighted average that forms the most stable time scale possible. This combined time scale is published monthly in "Circular T", and is the canonical TAI. This time scale is expressed in the form of tables of differences UTC − UTC(k) (equal to TAI − TAI(k)) for each participating institution k. The same circular also gives tables of TAI − TA(k), for the various unsynchronised atomic time scales.
Errors in publication may be corrected by issuing a revision of the faulty Circular T or by errata in a subsequent Circular T. Aside from this, once published in Circular T, the TAI scale is not revised. In hindsight, it is possible to discover errors in TAI and to make better estimates of the true proper time scale. Since the published circulars are definitive, better estimates do not create another version of TAI; it is instead considered to be creating a better realisation of Terrestrial Time (TT).
History
Early atomic time scales consisted of quartz clocks with frequencies calibrated by a single atomic clock; the atomic clocks were not operated continuously. Atomic timekeeping services started experimentally in 1955, using the first caesium atomic clock at the National Physical Laboratory, UK (NPL). It was used as a basis for calibrating the quartz clocks at the Royal Greenwich Observatory and to establish a time scale, called Greenwich Atomic (GA). The United States Naval Observatory began the A.1 scale on 13 September 1956, using an Atomichron commercial atomic clock, followed by the NBS-A scale at the National Bureau of Standards, Boulder, Colorado on 9 October 1957.
The International Time Bureau (BIH) began a time scale, Tm or AM, in July 1955, using both local caesium clocks and comparisons to distant clocks using the phase of VLF radio signals. The BIH scale, A.1, and NBS-A were defined by an epoch at the beginning of 1958. The procedures used by the BIH evolved, and the name for the time scale changed: A3 in 1964 and TA(BIH) in 1969. | International Atomic Time | Wikipedia | 462 | 334 | https://en.wikipedia.org/wiki/International%20Atomic%20Time | Technology | Timekeeping | null |
The SI second was defined in terms of the caesium atom in 1967. From 1971 to 1975 the General Conference on Weights and Measures and the International Committee for Weights and Measures made a series of decisions that designated the BIPM time scale International Atomic Time (TAI).
In the 1970s, it became clear that the clocks participating in TAI were ticking at different rates due to gravitational time dilation, and the combined TAI scale, therefore, corresponded to an average of the altitudes of the various clocks. Starting from the Julian Date 2443144.5 (1 January 1977 00:00:00 TAI), corrections were applied to the output of all participating clocks, so that TAI would correspond to proper time at the geoid (mean sea level). Because the clocks were, on average, well above sea level, this meant that TAI slowed by about one part in a trillion. The former uncorrected time scale continues to be published under the name EAL (Échelle Atomique Libre, meaning Free Atomic Scale).
The instant that the gravitational correction started to be applied serves as the epoch for Barycentric Coordinate Time (TCB), Geocentric Coordinate Time (TCG), and Terrestrial Time (TT), which represent three fundamental time scales in the solar system. All three of these time scales were defined to read JD 2443144.5003725 (1 January 1977 00:00:32.184) exactly at that instant. TAI was henceforth a realisation of TT, with the equation TT(TAI) = TAI + 32.184 s.
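Combining the fixed 32.184 s offset of TT(TAI) with the current TAI − UTC difference gives a simple conversion chain, sketched here in Python (the 37 s value holds only between leap seconds):

```python
TT_MINUS_TAI = 32.184  # seconds, fixed by definition: TT(TAI) = TAI + 32.184 s
TAI_MINUS_UTC = 37     # seconds, valid since the most recent leap second

def tt_minus_utc() -> float:
    """Offset of Terrestrial Time from UTC, going through TAI."""
    return TAI_MINUS_UTC + TT_MINUS_TAI

print(tt_minus_utc())
```

Timekeeping software typically stores the TAI − UTC term in a leap-second table rather than as a constant, since it steps whenever a leap second is inserted.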
The continued existence of TAI was questioned in a 2007 letter from the BIPM to the ITU-R which stated, "In the case of a redefinition of UTC without leap seconds, the CCTF would consider discussing the possibility of suppressing TAI, as it would remain parallel to the continuous UTC." | International Atomic Time | Wikipedia | 384 | 334 | https://en.wikipedia.org/wiki/International%20Atomic%20Time | Technology | Timekeeping | null |
Relation to UTC
Contrary to TAI, UTC is a discontinuous time scale. It is occasionally adjusted by leap seconds. Between these adjustments, it is composed of segments that are mapped to atomic time by a constant offset. From its beginning in 1961 through December 1971, the adjustments were made regularly in fractional leap seconds so that UTC approximated UT2. Afterwards, these adjustments were made only in whole seconds to approximate UT1. This was a compromise arrangement in order to enable a publicly broadcast time scale. The less frequent whole-second adjustments meant that the time scale would be more stable and easier to synchronize internationally. The fact that it continues to approximate UT1 means that tasks such as navigation which require a source of Universal Time continue to be well served by the public broadcast of UTC. | International Atomic Time | Wikipedia | 161 | 334 | https://en.wikipedia.org/wiki/International%20Atomic%20Time | Technology | Timekeeping | null |
Agricultural science (or agriscience for short) is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences that are used in the practice and understanding of agriculture. Professionals of the agricultural sciences are called agricultural scientists or agriculturists.
History
In the 18th century, Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulfate) as a fertilizer.
In 1843, John Bennet Lawes and Joseph Henry Gilbert began a set of long-term field experiments at Rothamsted Research in England, some of which are still running as of 2018.
In the United States, a scientific revolution in agriculture began with the Hatch Act of 1887, which used the term "agricultural science". The Hatch Act was driven by farmers' interest in knowing the constituents of early artificial fertilizer. The Smith–Hughes Act of 1917 shifted agricultural education back to its vocational roots, but the scientific foundation had been built. For the next 44 years after 1906, federal expenditures on agricultural research in the United States outpaced private expenditures.
Prominent agricultural scientists
Wilbur Olin Atwater
Robert Bakewell
Norman Borlaug
Luther Burbank
George Washington Carver
Carl Henry Clerk
George C. Clerk
René Dumont
Sir Albert Howard
Kailas Nath Kaul
Thomas Lecky
Justus von Liebig
Jay Laurence Lush
Gregor Mendel
Louis Pasteur
M. S. Swaminathan
Jethro Tull
Artturi Ilmari Virtanen
Sewall Wright
Fields or related disciplines
Scope
Agriculture, agricultural science, and agronomy are closely related. However, they cover different concepts:
Agriculture is the set of activities that transform the environment for the production of animals and plants for human use. Agriculture concerns techniques, including the application of agronomic research.
Agronomy is research and development related to studying and improving plant-based crops.
is the science of cultivating the earth.
Hydroponics involves growing plants without soil, by using water-based mineral nutrient solutions in an artificial environment. | Agricultural science | Wikipedia | 416 | 572 | https://en.wikipedia.org/wiki/Agricultural%20science | Technology | Basics | null |
Research topics
Agricultural sciences include research and development on:
Improving agricultural productivity in terms of quantity and quality (e.g., selection of drought-resistant crops and animals, development of new pesticides, yield-sensing technologies, simulation models of crop growth, in-vitro cell culture techniques)
Minimizing the effects of pests (weeds, insects, pathogens, mollusks, nematodes) on crop or animal production systems.
Transformation of primary products into end-consumer products (e.g., production, preservation, and packaging of dairy products)
Prevention and correction of adverse environmental effects (e.g., soil degradation, waste management, bioremediation)
Theoretical production ecology, relating to crop production modeling
Traditional agricultural systems, sometimes termed subsistence agriculture, which feed most of the poorest people in the world. These systems are of interest as they sometimes retain a level of integration with natural ecological systems greater than that of industrial agriculture, and so may be more sustainable than some modern agricultural systems.
Food production and demand globally, with particular attention paid to the primary producers, such as China, India, Brazil, the US, and the EU.
Various sciences relating to agricultural resources and the environment (e.g. soil science, agroclimatology); biology of agricultural crops and animals (e.g. crop science, animal science and their included sciences, e.g. ruminant nutrition, farm animal welfare); such fields as agricultural economics and rural sociology; various disciplines encompassed in agricultural engineering. | Agricultural science | Wikipedia | 307 | 572 | https://en.wikipedia.org/wiki/Agricultural%20science | Technology | Basics | null |
Alchemy (from the Arabic word al-kīmiyā, الكيمياء) is an ancient branch of natural philosophy, a philosophical and protoscientific tradition that was historically practised in China, India, the Muslim world, and Europe. In its Western form, alchemy is first attested in a number of pseudepigraphical texts written in Greco-Roman Egypt during the first few centuries AD. Greek-speaking alchemists often referred to their craft as "the Art" (τέχνη) or "Knowledge" (ἐπιστήμη), and it was often characterised as mystic (μυστική), sacred (ἱερά), or divine (θεία).
Alchemists attempted to purify, mature, and perfect certain materials. Common aims were chrysopoeia, the transmutation of "base metals" (e.g., lead) into "noble metals" (particularly gold); the creation of an elixir of immortality; and the creation of panaceas able to cure any disease. The perfection of the human body and soul was thought to result from the alchemical magnum opus ("Great Work"). The concept of creating the philosophers' stone was variously connected with all of these projects.
Islamic and European alchemists developed a basic set of laboratory techniques, theories, and terms, some of which are still in use today. They did not abandon the Ancient Greek philosophical idea that everything is composed of four elements, and they tended to guard their work in secrecy, often making use of cyphers and cryptic symbolism. In Europe, the 12th-century translations of medieval Islamic works on science and the rediscovery of Aristotelian philosophy gave birth to a flourishing tradition of Latin alchemy. This late medieval tradition of alchemy would go on to play a significant role in the development of early modern science (particularly chemistry and medicine). | Alchemy | Wikipedia | 400 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
Modern discussions of alchemy are generally split into an examination of its exoteric practical applications and its esoteric spiritual aspects, despite criticisms by scholars such as Eric J. Holmyard and Marie-Louise von Franz that they should be understood as complementary. The former is pursued by historians of the physical sciences, who examine the subject in terms of early chemistry, medicine, and charlatanism, and the philosophical and religious contexts in which these events occurred. The latter interests historians of esotericism, psychologists, and some philosophers and spiritualists. The subject has also made an ongoing impact on literature and the arts.
Etymology
The word alchemy comes from Old French alquemie, alkimie, used in Medieval Latin as alchimia. This name was itself adopted from the Arabic word al-kīmiyā (الكيمياء). The Arabic in turn was a borrowing of the Late Greek term khēmeía (χημεία), also spelled khumeia (χυμεία) and khēmía (χημία), with al- being the Arabic definite article 'the'. Together this association can be interpreted as 'the process of transmutation by which to fuse or reunite with the divine or original form'. Several etymologies have been proposed for the Greek term. The first was proposed by Zosimos of Panopolis (3rd–4th centuries), who derived it from the name of a book, the Khemeu. Hermann Diels argued in 1914 that it rather derived from χύμα, used to describe metallic objects formed by casting.
Others trace its roots to the Egyptian name (hieroglyphic 𓆎𓅓𓏏𓊖), meaning 'black earth', which refers to the fertile and auriferous soil of the Nile valley, as opposed to red desert sand. According to the Egyptologist Wallis Budge, the Arabic word al-kīmiyāʾ actually means "the Egyptian [science]", borrowing from the Coptic word for "Egypt" (or its equivalent in the Mediaeval Bohairic dialect of Coptic). This Coptic word derives from Demotic, itself from ancient Egyptian. The ancient Egyptian word referred to both the country and the colour "black" (Egypt was the "black Land", by contrast with the "red Land", the surrounding desert). | Alchemy | Wikipedia | 448 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
History
Alchemy encompasses several philosophical traditions spanning some four millennia and three continents. These traditions' general penchant for cryptic and symbolic language makes it hard to trace their mutual influences and genetic relationships. One can distinguish at least three major strands, which appear to be mostly independent, at least in their earlier stages: Chinese alchemy, centered in China; Indian alchemy, centered on the Indian subcontinent; and Western alchemy, which occurred around the Mediterranean and whose center shifted over the millennia from Greco-Roman Egypt to the Islamic world, and finally medieval Europe. Chinese alchemy was closely connected to Taoism and Indian alchemy with the Dharmic faiths. In contrast, Western alchemy developed its philosophical system mostly independent of but influenced by various Western religions. It is still an open question whether these three strands share a common origin, or to what extent they influenced each other.
Hellenistic Egypt
The start of Western alchemy may generally be traced to ancient and Hellenistic Egypt, where the city of Alexandria was a center of alchemical knowledge, and retained its pre-eminence through most of the Greek and Roman periods. Following the work of André-Jean Festugière, modern scholars see alchemical practice in the Roman Empire as originating from the Egyptian goldsmith's art, Greek philosophy and different religious traditions. Tracing the origins of the alchemical art in Egypt is complicated by the pseudepigraphic nature of texts from the Greek alchemical corpus. The treatises of Zosimos of Panopolis, the earliest historically attested author (fl. c. 300 AD), can help in situating the other authors. Zosimus based his work on that of older alchemical authors, such as Mary the Jewess, Pseudo-Democritus, and Agathodaimon, but very little is known about any of these authors. The most complete of their works, The Four Books of Pseudo-Democritus, were probably written in the first century AD. | Alchemy | Wikipedia | 408 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
Recent scholarship tends to emphasize the testimony of Zosimos, who traced the alchemical arts back to Egyptian metallurgical and ceremonial practices. It has also been argued that early alchemical writers borrowed the vocabulary of Greek philosophical schools but did not implement any of their doctrines in a systematic way. Zosimos of Panopolis addressed this history in the Final Abstinence (also known as the "Final Count"). Zosimos explains that the ancient practice of "tinctures" (the technical Greek name for the alchemical arts) had been taken over by certain "demons" who taught the art only to those who offered them sacrifices. Since Zosimos also called the demons "the guardians of places" and those who offered them sacrifices "priests", it is fairly clear that he was referring to the gods of Egypt and their priests. While critical of the kind of alchemy he associated with the Egyptian priests and their followers, Zosimos nonetheless saw the tradition's recent past as rooted in the rites of the Egyptian temples.
Mythology
Zosimos of Panopolis asserted that alchemy dated back to Pharaonic Egypt where it was the domain of the priestly class, though there is little to no evidence for his assertion. Alchemical writers used Classical figures from Greek, Roman, and Egyptian mythology to illuminate their works and allegorize alchemical transmutation. These included the pantheon of gods related to the Classical planets, Isis, Osiris, Jason, and many others.
The central figure in the mythology of alchemy is Hermes Trismegistus (or Thrice-Great Hermes). His name is derived from the god Thoth and his Greek counterpart Hermes. Hermes and his caduceus or serpent-staff, were among alchemy's principal symbols. According to Clement of Alexandria, he wrote what were called the "forty-two books of Hermes", covering all fields of knowledge. The Hermetica of Thrice-Great Hermes is generally understood to form the basis for Western alchemical philosophy and practice, called the hermetic philosophy by its early practitioners. These writings were collected in the first centuries of the common era. | Alchemy | Wikipedia | 451 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
Technology
The dawn of Western alchemy is sometimes associated with that of metallurgy, extending back to 3500 BC. Many writings were lost when the Roman emperor Diocletian ordered the burning of alchemical books after suppressing a revolt in Alexandria (AD 292). Few original Egyptian documents on alchemy have survived, most notable among them the Stockholm papyrus and the Leyden papyrus X. Dating from AD 250–300, they contained recipes for dyeing and making artificial gemstones, cleaning and fabricating pearls, and manufacturing of imitation gold and silver. These writings lack the mystical, philosophical elements of alchemy, but do contain the works of Bolus of Mendes (or Pseudo-Democritus), which aligned these recipes with theoretical knowledge of astrology and the classical elements. Between the time of Bolus and Zosimos, the change took place that transformed this metallurgy into a Hermetic art.
Philosophy
Alexandria acted as a melting pot for philosophies of Pythagoreanism, Platonism, Stoicism and Gnosticism which formed the origin of alchemy's character. An important example of alchemy's roots in Greek philosophy, originated by Empedocles and developed by Aristotle, was that all things in the universe were formed from only four elements: earth, air, water, and fire. According to Aristotle, each element had a sphere to which it belonged and to which it would return if left undisturbed. The four elements of the Greeks were mostly qualitative aspects of matter, not quantitative, as our modern elements are; "...True alchemy never regarded earth, air, water, and fire as corporeal or chemical substances in the present-day sense of the word. The four elements are simply the primary, and most general, qualities by means of which the amorphous and purely quantitative substance of all bodies first reveals itself in differentiated form." Later alchemists extensively developed the mystical aspects of this concept.
Alchemy coexisted alongside emerging Christianity. Lactantius believed Hermes Trismegistus had prophesied its birth. St Augustine later affirmed this in the 4th and 5th centuries, but also condemned Trismegistus for idolatry. Examples of Pagan, Christian, and Jewish alchemists can be found during this period. | Alchemy | Wikipedia | 483 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
Most of the Greco-Roman alchemists preceding Zosimos are known only by pseudonyms, such as Moses, Isis, Cleopatra, Democritus, and Ostanes. Other authors, such as Komarios and Chymes, are known only through fragments of text. After AD 400, Greek alchemical writers occupied themselves solely in commenting on the works of these predecessors. By the middle of the 7th century alchemy was almost an entirely mystical discipline. It was at that time that Khalid Ibn Yazid sparked its migration from Alexandria to the Islamic world, facilitating the translation and preservation of Greek alchemical texts in the 8th and 9th centuries.
Byzantium
Greek alchemy was preserved in medieval Byzantine manuscripts after the fall of Egypt, and yet historians have only relatively recently begun to pay attention to the study and development of Greek alchemy in the Byzantine period.
India
The Vedas, texts from the 2nd millennium BC, describe a connection between eternal life and gold. A considerable knowledge of metallurgy is exhibited in a third-century AD text called Arthashastra, which provides ingredients of explosives (Agniyoga) and salts extracted from fertile soils and plant remains (Yavakshara) such as saltpetre/nitre, perfume making (different qualities of perfumes are mentioned), and granulated (refined) sugar. Buddhist texts from the 2nd to 5th centuries mention the transmutation of base metals to gold. According to some scholars, Greek alchemy may have influenced Indian alchemy, but there is no hard evidence to back this claim.
The 11th-century Persian chemist and physician Abū Rayhān Bīrūnī, who visited Gujarat as part of the court of Mahmud of Ghazni, reported that they
The goals of alchemy in India included the creation of a divine body (Sanskrit divya-deham) and immortality while still embodied (Sanskrit jīvan-mukti). Sanskrit alchemical texts include much material on the manipulation of mercury and sulphur, that are homologized with the semen of the god Śiva and the menstrual blood of the goddess Devī.
Some early alchemical writings seem to have their origins in the Kaula tantric schools associated to the teachings of the personality of Matsyendranath. Other early writings are found in the Jaina medical treatise Kalyāṇakārakam of Ugrāditya, written in South India in the early 9th century. | Alchemy | Wikipedia | 495 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
Two famous early Indian alchemical authors were Nāgārjuna Siddha and Nityanātha Siddha. Nāgārjuna Siddha was a Buddhist monk. His book, Rasendramangalam, is an example of Indian alchemy and medicine. Nityanātha Siddha wrote Rasaratnākara, also a highly influential work. In Sanskrit, rasa translates to "mercury", and Nāgārjuna Siddha was said to have developed a method of converting mercury into gold.
A significant work of scholarship on Indian alchemy is The Alchemical Body by David Gordon White.
A modern bibliography on Indian alchemical studies has been written by White.
The contents of 39 Sanskrit alchemical treatises have been analysed in detail in G. Jan Meulenbeld's History of Indian Medical Literature. The discussion of these works in HIML gives a summary of the contents of each work, their special features, and where possible the evidence concerning their dating. Chapter 13 of HIML, Various works on rasaśāstra and ratnaśāstra (or Various works on alchemy and gems) gives brief details of a further 655 (six hundred and fifty-five) treatises. In some cases Meulenbeld gives notes on the contents and authorship of these works; in other cases references are made only to the unpublished manuscripts of these titles.
A great deal remains to be discovered about Indian alchemical literature. The content of the Sanskrit alchemical corpus has not yet (2014) been adequately integrated into the wider general history of alchemy.
Islamic world
After the fall of the Roman Empire, the focus of alchemical development moved to the Islamic World. Much more is known about Islamic alchemy because it was better documented: indeed, most of the earlier writings that have come down through the years were preserved as Arabic translations. The word alchemy itself was derived from the Arabic word al-kīmiyā (الكيمياء). The early Islamic world was a melting pot for alchemy. Platonic and Aristotelian thought, which had already been somewhat appropriated into hermetical science, continued to be assimilated during the late 7th and early 8th centuries through Syriac translations and scholarship. | Alchemy | Wikipedia | 454 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
In the late ninth and early tenth centuries, the Arabic works attributed to Jābir ibn Hayyān (Latinized as "Geber" or "Geberus") introduced a new approach to alchemy. Paul Kraus, who wrote the standard reference work on Jabir, put it as follows:
Islamic philosophers also made great contributions to alchemical hermeticism. The most influential author in this regard was arguably Jabir. Jabir's ultimate goal was Takwin, the artificial creation of life in the alchemical laboratory, up to, and including, human life. He analysed each Aristotelian element in terms of four basic qualities of hotness, coldness, dryness, and moistness. According to Jabir, in each metal two of these qualities were interior and two were exterior. For example, lead was externally cold and dry, while gold was hot and moist. Thus, Jabir theorized, by rearranging the qualities of one metal, a different metal would result. By this reasoning, the search for the philosopher's stone was introduced to Western alchemy. Jabir developed an elaborate numerology whereby the root letters of a substance's name in Arabic, when treated with various transformations, held correspondences to the element's physical properties.
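The quality-rearrangement scheme described above can be sketched as a toy lookup. This is purely illustrative: the dictionary, function, and structure are invented here, and only the two example quality assignments (lead and gold) come from the text.

```python
# Illustrative sketch of Jabir's quality-rearrangement idea, as the
# passage describes it. Only lead's and gold's exterior qualities are
# given in the text; the names and structure here are invented.
exterior_qualities = {
    "lead": frozenset({"cold", "dry"}),
    "gold": frozenset({"hot", "moist"}),
}

def metal_with_qualities(qualities):
    """Return the metal whose exterior qualities match, or None."""
    wanted = frozenset(qualities)
    for metal, quals in exterior_qualities.items():
        if quals == wanted:
            return metal
    return None

# "Rearranging" lead's cold/dry qualities into hot/moist yields gold.
print(metal_with_qualities({"hot", "moist"}))
```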
The elemental system used in medieval alchemy also originated with Jabir. His original system consisted of seven elements, which included the five classical elements (aether, air, earth, fire, and water) in addition to two chemical elements representing the metals: sulphur, "the stone which burns", which characterized the principle of combustibility, and mercury, which contained the idealized principle of metallic properties. Shortly thereafter, this evolved into eight elements, with the Arabic concept of the three metallic principles: sulphur giving flammability or combustion, mercury giving volatility and stability, and salt giving solidity. The atomic theory of corpuscularianism, where all physical bodies possess an inner and outer layer of minute particles or corpuscles, also has its origins in the work of Jabir.
From the 9th to 14th centuries, alchemical theories faced criticism from a variety of practical Muslim chemists, including Alkindus, Abū al-Rayhān al-Bīrūnī, Avicenna and Ibn Khaldun. In particular, they wrote refutations against the idea of the transmutation of metals. | Alchemy | Wikipedia | 496 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
From the 14th century onwards, many materials and practices originally belonging to Indian alchemy (Rasayana) were assimilated in the Persian texts written by Muslim scholars.
East Asia
Researchers have found evidence that Chinese alchemists and philosophers discovered complex mathematical phenomena that were shared with Arab alchemists during the medieval period. First recorded in China before the Common Era, the "magic square of three" was propagated to followers of Abū Mūsā Jābir ibn Ḥayyān at some point over the following several hundred years. Other commonalities shared between the two alchemical schools of thought include discrete naming for ingredients and heavy influence from the natural elements. The Silk Road provided a clear path for the exchange of goods, ideas, ingredients, religion, and many other aspects of life with which alchemy is intertwined.
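The "magic square of three" mentioned above is the 3×3 arrangement of the digits 1–9 (the Luoshu of Chinese tradition) in which every row, column, and diagonal sums to 15. A minimal check, assuming nothing beyond that definition:

```python
# The Luoshu ("magic square of three"): every row, column, and
# diagonal shares the same sum, the magic constant 15.
square = [
    [4, 9, 2],
    [3, 5, 7],
    [8, 1, 6],
]

def is_magic(sq):
    """Check that all rows, columns, and both diagonals share one sum."""
    n = len(sq)
    target = sum(sq[0])
    rows = all(sum(row) == target for row in sq)
    cols = all(sum(sq[r][c] for r in range(n)) == target for c in range(n))
    diags = (sum(sq[i][i] for i in range(n)) == target
             and sum(sq[i][n - 1 - i] for i in range(n)) == target)
    return rows and cols and diags

print(is_magic(square))  # True, with magic constant 15
```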
Whereas European alchemy eventually centered on the transmutation of base metals into noble metals, Chinese alchemy had a more obvious connection to medicine. The philosopher's stone of European alchemists can be compared to the Grand Elixir of Immortality sought by Chinese alchemists. In the hermetic view, these two goals were not unconnected, and the philosopher's stone was often equated with the universal panacea; therefore, the two traditions may have had more in common than initially appears. | Alchemy | Wikipedia | 268 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
As early as 317 AD, Ge Hong documented the use of metals, minerals, and elixirs in early Chinese medicine. Hong identified three ancient Chinese documents, titled Scripture of Great Clarity, Scripture of the Nine Elixirs, and Scripture of the Golden Liquor, as texts containing fundamental alchemical information. He also described alchemy, along with meditation, as the sole spiritual practices that could allow one to gain immortality or to transcend. In his work Inner Chapters of the Book of the Master Who Embraces Spontaneous Nature (317 AD), Hong argued that alchemical solutions such as elixirs were preferable to traditional medicinal treatment due to the spiritual protection they could provide. In the centuries following Ge Hong's death, the emphasis placed on alchemy as a spiritual practice among Chinese Daoists was reduced. In 499 AD, Tao Hongjing refuted Hong's statement that alchemy is as important a spiritual practice as Shangqing meditation. While Hongjing did not deny the power of alchemical elixirs to grant immortality or provide divine protection, he ultimately found the Scripture of the Nine Elixirs to be ambiguous and spiritually unfulfilling, aiming instead to implement more accessible practices.
In the early 700s, Neidan (also known as internal alchemy) was adopted by Daoists as a new form of alchemy. Neidan emphasized appeasing the inner gods that inhabit the human body by practising alchemy with compounds found in the body, rather than the mixing of natural resources that was emphasized in early Dao alchemy. For example, saliva was often considered nourishment for the inner gods and did not require any conscious alchemical reaction to produce. The inner gods were not thought of as physical presences occupying each person, but rather a collection of deities that are each said to represent and protect a specific body part or region. Although those who practised Neidan prioritized meditation over external alchemical strategies, many of the same elixirs and constituents from previous Daoist alchemical schools of thought continued to be utilized in tandem with meditation. Eternal life remained a consideration for Neidan alchemists, as it was believed that one would become immortal if an inner god were to be immortalized within them through spiritual fulfilment. | Alchemy | Wikipedia | 477 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
Black powder may have been an important invention of Chinese alchemists. It is said that the Chinese invented gunpowder while trying to find a potion for eternal life. Described in 9th-century texts and used in fireworks in China by the 10th century, it was used in cannons by 1290. From China, the use of gunpowder spread to Japan, the Mongols, the Muslim world, and Europe. Gunpowder was used by the Mongols against the Hungarians in 1241, and in Europe by the 14th century.
Chinese alchemy was closely connected to Taoist forms of traditional Chinese medicine, such as acupuncture and moxibustion. In the early Song dynasty, followers of this Taoist idea (chiefly the elite and upper class) would ingest mercuric sulfide, which, though tolerable in low levels, led many to suicide. Thinking that this consequential death would lead to freedom and access to the Taoist heavens, the ensuing deaths encouraged people to eschew this method of alchemy in favour of external practices such as Tai Chi Chuan and the mastering of qi. Chinese alchemy was introduced to the West by Obed Simon Johnson.
Medieval Europe
The introduction of alchemy to Latin Europe may be dated to 11 February 1144, with the completion of Robert of Chester's translation of the Book on the Composition of Alchemy from an Arabic work attributed to Khalid ibn Yazid. Although European craftsmen and technicians had long existed, Robert notes in his preface that alchemy (here still referring to the elixir rather than to the art itself) was unknown in Latin Europe at the time of his writing. The translation of Arabic texts concerning numerous disciplines including alchemy flourished in 12th-century Toledo, Spain, through contributors like Gerard of Cremona and Adelard of Bath. Translations of the time included the Turba Philosophorum and the works of Avicenna and Muhammad ibn Zakariya al-Razi. These brought with them many new words to the European vocabulary for which there was no previous Latin equivalent. Alcohol, carboy, elixir, and athanor are examples.
Meanwhile, theologian contemporaries of the translators made strides towards the reconciliation of faith and experimental rationalism, thereby priming Europe for the influx of alchemical thought. The 11th-century St Anselm put forth the opinion that faith and rationalism were compatible and encouraged rationalism in a Christian context. In the early 12th century, Peter Abelard followed Anselm's work, laying down the foundation for acceptance of Aristotelian thought before the first works of Aristotle had reached the West. In the early 13th century, Robert Grosseteste used Abelard's methods of analysis and added the use of observation, experimentation, and conclusions when conducting scientific investigations. Grosseteste also did much work to reconcile Platonic and Aristotelian thinking.
Through much of the 12th and 13th centuries, alchemical knowledge in Europe remained centered on translations, and new Latin contributions were not made. The efforts of the translators were succeeded by those of the encyclopaedists. In the 13th century, Albertus Magnus and Roger Bacon were the most notable of these, their work summarizing and explaining the newly imported alchemical knowledge in Aristotelian terms. Albertus Magnus, a Dominican friar, is known to have written works such as the Book of Minerals where he observed and commented on the operations and theories of alchemical authorities like Hermes Trismegistus, pseudo-Democritus and unnamed alchemists of his time. Albertus critically compared these to the writings of Aristotle and Avicenna, where they concerned the transmutation of metals. From the time shortly after his death through to the 15th century, more than 28 alchemical tracts were misattributed to him, a common practice giving rise to his reputation as an accomplished alchemist. Likewise, alchemical texts have been attributed to Albert's student Thomas Aquinas.
Roger Bacon, a Franciscan friar who wrote on a wide variety of topics including optics, comparative linguistics, and medicine, composed his Great Work (Opus Majus) for Pope Clement IV as part of a project towards rebuilding the medieval university curriculum to include the new learning of his time. While alchemy was not more important to him than other sciences and he did not produce allegorical works on the topic, he did consider it and astrology to be important parts of both natural philosophy and theology and his contributions advanced alchemy's connections to soteriology and Christian theology. Bacon's writings integrated morality, salvation, alchemy, and the prolongation of life. His correspondence with Clement highlighted this, noting the importance of alchemy to the papacy. Like the Greeks before him, Bacon acknowledged the division of alchemy into practical and theoretical spheres. He noted that the theoretical lay outside the scope of Aristotle, the natural philosophers, and all Latin writers of his time. The practical confirmed the theoretical, and Bacon advocated its uses in natural science and medicine. In later European legend, he became an archmage. In particular, along with Albertus Magnus, he was credited with the forging of a brazen head capable of answering its owner's questions.
Soon after Bacon, the influential work of Pseudo-Geber (sometimes identified as Paul of Taranto) appeared. His Summa Perfectionis remained a staple summary of alchemical practice and theory through the medieval and renaissance periods. It was notable for its inclusion of practical chemical operations alongside sulphur-mercury theory, and the unusual clarity with which they were described. By the end of the 13th century, alchemy had developed into a fairly structured system of belief. Adepts believed in the macrocosm-microcosm theories of Hermes, that is to say, they believed that processes that affect minerals and other substances could have an effect on the human body (for example, if one could learn the secret of purifying gold, one could use the technique to purify the human soul). They believed in the four elements and the four qualities as described above, and they had a strong tradition of cloaking their written ideas in a labyrinth of coded jargon set with traps to mislead the uninitiated. Finally, the alchemists practised their art: they actively experimented with chemicals and made observations and theories about how the universe operated. Their entire philosophy revolved around their belief that man's soul was divided within himself after the fall of Adam. By purifying the two parts of man's soul, man could be reunited with God. | Alchemy | Wikipedia | 278 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
In the 14th century, alchemy became more accessible to Europeans outside the confines of Latin-speaking churchmen and scholars. Alchemical discourse shifted from scholarly philosophical debate to an exposed social commentary on the alchemists themselves. Dante, Piers Plowman, and Chaucer all painted unflattering pictures of alchemists as thieves and liars. Pope John XXII's 1317 edict, Spondent quas non exhibent, forbade the false promises of transmutation made by pseudo-alchemists. Roman Catholic Inquisitor General Nicholas Eymerich's Directorium Inquisitorum, written in 1376, associated alchemy with the performance of demonic rituals, which Eymerich differentiated from magic performed in accordance with scripture. This did not, however, lead to any change in the Inquisition's monitoring or prosecution of alchemists. In 1404, Henry IV of England banned the practice of multiplying metals by statute (5 Hen. 4. c. 4), although it was possible to buy a licence to attempt to make gold alchemically, and a number were granted by Henry VI and Edward IV. These critiques and regulations centered more around pseudo-alchemical charlatanism than the actual study of alchemy, which continued with an increasingly Christian tone. The 14th century saw the Christian imagery of death and resurrection employed in the alchemical texts of Petrus Bonus, John of Rupescissa, and in works written in the name of Raymond Lull and Arnold of Villanova.
Nicolas Flamel became so well known as an alchemist that he attracted many pseudepigraphic imitators. Although the historical Flamel existed, the writings and legends assigned to him only appeared in 1612.
A common idea in European alchemy in the medieval era was a metaphysical "Homeric chain of wise men that link[ed] heaven and earth" that included ancient pagan philosophers and other important historical figures.
Renaissance and early modern Europe
During the Renaissance, Hermetic and Platonic foundations were restored to European alchemy. The dawn of medical, pharmaceutical, occult, and entrepreneurial branches of alchemy followed. | Alchemy | Wikipedia | 452 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
In the late 15th century, Marsilio Ficino translated the Corpus Hermeticum and the works of Plato into Latin. These works had previously been unavailable to Europeans, who now for the first time had a full picture of the alchemical theory that Bacon had declared absent. Renaissance Humanism and Renaissance Neoplatonism guided alchemists away from physics to refocus on mankind as the alchemical vessel.
Esoteric systems developed that blended alchemy into a broader occult Hermeticism, fusing it with magic, astrology, and Christian cabala. A key figure in this development was German Heinrich Cornelius Agrippa (1486–1535), who received his Hermetic education in Italy in the schools of the humanists. In his De Occulta Philosophia, he attempted to merge Kabbalah, Hermeticism, and alchemy. He was instrumental in spreading this new blend of Hermeticism outside the borders of Italy.
Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim, 1493–1541) cast alchemy into a new form, rejecting some of Agrippa's occultism and moving away from chrysopoeia. Paracelsus pioneered the use of chemicals and minerals in medicine and wrote, "Many have said of Alchemy, that it is for the making of gold and silver. For me such is not the aim, but to consider only what virtue and power may lie in medicines."
His hermetical views were that sickness and health in the body relied on the harmony of man the microcosm and Nature the macrocosm. He took an approach different from those before him, using this analogy not in the manner of soul-purification but in the manner that humans must have certain balances of minerals in their bodies, and that certain illnesses of the body had chemical remedies that could cure them. Iatrochemistry refers to the pharmaceutical applications of alchemy championed by Paracelsus. | Alchemy | Wikipedia | 414 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
John Dee (13 July 1527 – December 1608) followed Agrippa's occult tradition. Although better known for angel summoning, divination, and his role as astrologer, cryptographer, and consultant to Queen Elizabeth I, Dee's alchemical Monas Hieroglyphica, written in 1564, was his most popular and influential work. His writing portrayed alchemy as a sort of terrestrial astronomy in line with the Hermetic axiom "as above, so below". During the 17th century, a short-lived "supernatural" interpretation of alchemy became popular, including support from fellows of the Royal Society: Robert Boyle and Elias Ashmole. Proponents of the supernatural interpretation of alchemy believed that the philosopher's stone might be used to summon and communicate with angels.
Entrepreneurial opportunities were common for the alchemists of Renaissance Europe. Alchemists were contracted by the elite for practical purposes related to mining, medical services, and the production of chemicals, medicines, metals, and gemstones. Rudolf II, Holy Roman Emperor, in the late 16th century, famously received and sponsored various alchemists at his court in Prague, including Dee and his associate Edward Kelley. King James IV of Scotland, Julius, Duke of Brunswick-Lüneburg, Henry V, Duke of Brunswick-Lüneburg, Augustus, Elector of Saxony, Julius Echter von Mespelbrunn, and Maurice, Landgrave of Hesse-Kassel all contracted alchemists. John's son Arthur Dee worked as a court physician to Michael I of Russia and Charles I of England but also compiled the alchemical book Fasciculus Chemicus.
Although most of these appointments were legitimate, the trend of pseudo-alchemical fraud continued through the Renaissance. Betrüger would use sleight of hand, or claims of secret knowledge to make money or secure patronage. Legitimate mystical and medical alchemists such as Michael Maier and Heinrich Khunrath wrote about fraudulent transmutations, distinguishing themselves from the con artists. False alchemists were sometimes prosecuted for fraud. | Alchemy | Wikipedia | 427 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
The terms "chemia" and "alchemia" were used as synonyms in the early modern period, and the differences between alchemy, chemistry and small-scale assaying and metallurgy were not as neat as in the present day. There were important overlaps between practitioners, and trying to classify them into alchemists, chemists and craftsmen is anachronistic. For example, Tycho Brahe (1546–1601), an alchemist better known for his astronomical and astrological investigations, had a laboratory built at his Uraniborg observatory/research institute. Michael Sendivogius (Michał Sędziwój, 1566–1636), a Polish alchemist, philosopher, medical doctor and pioneer of chemistry wrote mystical works but is also credited with distilling oxygen in a lab sometime around 1600. Sendivogius taught his technique to Cornelius Drebbel who, in 1621, applied this in a submarine. Isaac Newton devoted considerably more of his writing to the study of alchemy (see Isaac Newton's occult studies) than he did to either optics or physics. Other early modern alchemists who were eminent in their other studies include Robert Boyle, and Jan Baptist van Helmont. Their Hermeticism complemented rather than precluded their practical achievements in medicine and science.
Later modern period
The decline of European alchemy was brought about by the rise of modern science with its emphasis on rigorous quantitative experimentation and its disdain for "ancient wisdom". Although the seeds of these events were planted as early as the 17th century, alchemy still flourished for some two hundred years, and in fact may have reached its peak in the 18th century. As late as 1781, James Price claimed to have produced a powder that could transmute mercury into silver or gold. Early modern European alchemy continued to exhibit a diversity of theories, practices, and purposes: "Scholastic and anti-Aristotelian, Paracelsian and anti-Paracelsian, Hermetic, Neoplatonic, mechanistic, vitalistic, and more—plus virtually every combination and compromise thereof."
Robert Boyle (1627–1691) pioneered the scientific method in chemical investigations. He assumed nothing in his experiments and compiled every piece of relevant data. Boyle would note the place in which the experiment was carried out, the wind characteristics, the position of the Sun and Moon, and the barometer reading, all just in case they proved to be relevant. This approach eventually led to the founding of modern chemistry in the 18th and 19th centuries, based on revolutionary discoveries and ideas of Lavoisier and John Dalton.
Beginning around 1720, a rigid distinction began to be drawn for the first time between "alchemy" and "chemistry". By the 1740s, "alchemy" was restricted to the realm of gold making, leading to the popular belief that alchemists were charlatans, and the tradition itself nothing more than a fraud. In order to protect the developing science of modern chemistry from the censure to which alchemy was being subjected, academic writers during the 18th-century scientific Enlightenment attempted to divorce and separate the "new" chemistry from the "old" practices of alchemy. This move was mostly successful, and the consequences of this continued into the 19th, 20th and 21st centuries.
During the occult revival of the early 19th century, alchemy received new attention as an occult science. The esoteric or occultist school that arose during the 19th century held the view that the substances and operations mentioned in alchemical literature are to be interpreted in a spiritual sense rather than as a practical tradition or protoscience. This interpretation claimed that the obscure language of the alchemical texts, which 19th-century practitioners were not always able to decipher, was an allegorical guise for spiritual, moral, or mystical processes.
Two seminal figures during this period were Mary Anne Atwood and Ethan Allen Hitchcock, who independently published similar works regarding spiritual alchemy. Both rebuffed the growing successes of chemistry, developing a completely esoteric view of alchemy. Atwood wrote: "No modern art or chemistry, notwithstanding all its surreptitious claims, has any thing in common with Alchemy." Atwood's work influenced subsequent authors of the occult revival including Eliphas Levi, Arthur Edward Waite, and Rudolf Steiner. Hitchcock, in his Remarks Upon Alchymists (1855), attempted to make a case for his spiritual interpretation with his claim that the alchemists wrote about a spiritual discipline under a materialistic guise in order to avoid accusations of blasphemy from the church and state. In 1845, Baron Carl Reichenbach published his studies on Odic force, a concept with some similarities to alchemy, but his research did not enter the mainstream of scientific discussion.
In 1946, Louis Cattiaux published the Message Retrouvé, a work that was at once philosophical, mystical and highly influenced by alchemy. In his lineage, many researchers, including Emmanuel and Charles d'Hooghvorst, are updating alchemical studies in France and Belgium.
Women
Several women appear in the earliest history of alchemy. Michael Maier names four women who were able to make the philosophers' stone: Mary the Jewess, Cleopatra the Alchemist, Medera, and Taphnutia. Zosimos' sister Theosebia (later known as Euthica the Arab) and Isis the Prophetess also played roles in early alchemical texts. | Alchemy | Wikipedia | 341 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
The first alchemist whose name we know was Mary the Jewess. Early sources claim that Mary (or Maria) devised a number of improvements to alchemical equipment and tools as well as novel techniques in chemistry. Her best known advances were in heating and distillation processes. The laboratory water-bath, known eponymously (especially in France) as the bain-marie, is said to have been invented or at least improved by her. Essentially a double-boiler, it was (and is) used in chemistry for processes that required gentle heating. The tribikos (a modified distillation apparatus) and the kerotakis (a more intricate apparatus used especially for sublimations) are two other advancements in the process of distillation that are credited to her. Although we have no writing from Mary herself, she is known from the early-fourth-century writings of Zosimos of Panopolis. After the Greco-Roman period, women's names appear less frequently in alchemical literature.
Towards the end of the Middle Ages and the beginning of the Renaissance, thanks to the emergence of print, women were able to access the alchemical knowledge of texts from the preceding centuries. Caterina Sforza, the Countess of Forlì and Lady of Imola, is one of the few confirmed female alchemists after Mary the Jewess. As she owned an apothecary, she would practice science and conduct experiments in her botanic gardens and laboratories. Being knowledgeable in alchemy and pharmacology, she recorded all of her alchemical ventures in a manuscript named Experiments. The manuscript contained more than four hundred recipes covering alchemy as well as cosmetics and medicine. One of these recipes was for the water of talc. Talc, which makes up talcum powder, is a mineral which, when combined with water and distilled, was said to produce a solution which yielded many benefits, including turning silver to gold and rejuvenation. If its powder form was mixed and drunk with white wine, it was said to be a source of protection from any poison, sickness, or plague. Other recipes were for making hair dyes, lotions, and lip colours. There was also information on how to treat a variety of ailments, from fevers and coughs to epilepsy and cancer. In addition, there were instructions on producing the quintessence (or aether), an elixir believed to be able to heal all sicknesses, defend against diseases, and perpetuate youthfulness. She also wrote about creating the illustrious philosophers' stone.
Some women known for their interest in alchemy were Catherine de' Medici, the Queen of France, and Marie de' Medici, the succeeding Queen of France, who carried out experiments in her personal laboratory. Also, Isabella d'Este, the Marchioness of Mantua, made perfumes herself to serve as gifts. Due to the proliferation in alchemical literature of pseudepigrapha and anonymous works, however, it is difficult to know which of the alchemists were actually women. This contributed to a broader pattern in which male authors credited prominent noblewomen for beauty products with the purpose of appealing to a female audience. For example, in one "Gallant Recipe-Book", the distillation of lemons and roses was attributed to Elisabetta Gonzaga, the duchess of Urbino. In the same book, Isabella d'Aragona, the daughter of Alfonso II of Naples, is credited with recipes involving alum and mercury. Ippolita Maria Sforza is even referred to in an anonymous manuscript about a hand lotion created with rose powder and crushed bones.
As the sixteenth century went on, scientific culture flourished and people began collecting "secrets". During this period, "secrets" referred to experiments, and the most coveted were not those which were bizarre, but those which had been proven to yield the desired outcome. In this period, the only book of secrets ascribed to a woman was The Secrets of Signora Isabella Cortese. This book contained information on how to turn base metals into gold, medicine, and cosmetics. However, it is rumoured that a man, Girolamo Ruscelli, was the real author and only used a female voice to attract female readers.
In the nineteenth century, Mary Anne Atwood's A Suggestive Inquiry into the Hermetic Mystery (1850) marked the return of women during the occult revival.
Modern historical research
The history of alchemy has become a recognized subject of academic study. As the language of the alchemists is analysed, historians are becoming more aware of the connections between that discipline and other facets of Western cultural history, such as the evolution of science and philosophy, the sociology and psychology of the intellectual communities, kabbalism, spiritualism, Rosicrucianism, and other mystic movements. Institutions involved in this research include The Chymistry of Isaac Newton project at Indiana University, the University of Exeter Centre for the Study of Esotericism (EXESESO), the European Society for the Study of Western Esotericism (ESSWE), and the University of Amsterdam's Sub-department for the History of Hermetic Philosophy and Related Currents. A large collection of books on alchemy is kept in the Bibliotheca Philosophica Hermetica in Amsterdam.
Journals which publish regularly on the topic of alchemy include Ambix, published by the Society for the History of Alchemy and Chemistry, and Isis, published by the History of Science Society.
Core concepts
Western alchemical theory corresponds to the worldview of late antiquity in which it was born. Concepts were imported from Neoplatonism and earlier Greek cosmology. As such, the classical elements appear in alchemical writings, as do the seven classical planets and the corresponding seven metals of antiquity. Similarly, the gods of the Roman pantheon who are associated with these luminaries are discussed in alchemical literature. The concepts of prima materia and anima mundi are central to the theory of the philosopher's stone.
Magnum opus
The Great Work of Alchemy is often described as a series of four stages represented by colours.
nigredo, a blackening or melanosis
albedo, a whitening or leucosis
citrinitas, a yellowing or xanthosis
rubedo, a reddening, purpling, or iosis
Modernity
Due to the complexity and obscurity of alchemical literature, and the 18th-century diffusion of remaining alchemical practitioners into the area of chemistry, the general understanding of alchemy in the 19th and 20th centuries was influenced by several distinct and radically different interpretations. Those focusing on the exoteric, such as historians of science Lawrence M. Principe and William R. Newman, have interpreted the 'Decknamen' (or code words) of alchemy as physical substances. These scholars have reconstructed physicochemical experiments that they say are described in medieval and early modern texts. At the opposite end of the spectrum, focusing on the esoteric, scholars such as Florin George Călian and Anna Marie Roos, who question the reading of Principe and Newman, interpret these same Decknamen as spiritual, religious, or psychological concepts.
New interpretations of alchemy are still perpetuated, sometimes merging concepts from New Age or radical environmentalism movements. Groups like the Rosicrucians and Freemasons have a continued interest in alchemy and its symbolism. Since the Victorian revival of alchemy, "occultists reinterpreted alchemy as a spiritual practice, involving the self-transformation of the practitioner and only incidentally or not at all the transformation of laboratory substances", which has contributed to a merger of magic and alchemy in popular thought.
Esoteric interpretations of historical texts
In the eyes of a variety of modern esoteric and Neo-Hermetic practitioners, alchemy is primarily spiritual. In this interpretation, transmutation of lead into gold is presented as an analogy for personal transmutation, purification, and perfection. | Alchemy | Wikipedia | 346 | 573 | https://en.wikipedia.org/wiki/Alchemy | Physical sciences | Chemistry: General | null |
According to this view, early alchemists such as Zosimos of Panopolis highlighted the spiritual nature of the alchemical quest, symbolic of a religious regeneration of the human soul. This approach is held to have continued in the Middle Ages, as metaphysical aspects, substances, physical states, and material processes are supposed to have been used as metaphors for spiritual entities, spiritual states, and, ultimately, transformation. In this sense, the literal meanings of 'Alchemical Formulas' hid a spiritual philosophy. In the Neo-Hermeticist interpretation, both the transmutation of common metals into gold and the universal panacea are held to symbolize evolution from an imperfect, diseased, corruptible, and ephemeral state toward a perfect, healthy, incorruptible, and everlasting state, so the philosopher's stone then represented a mystic key that would make this evolution possible. Applied to the alchemist, the twin goal symbolized their evolution from ignorance to enlightenment, and the stone represented a hidden spiritual truth or power that would lead to that goal. In texts that are believed to have been written according to this view, the cryptic alchemical symbols, diagrams, and textual imagery of late alchemical works are supposed to contain multiple layers of meanings, allegories, and references to other equally cryptic works, which must be laboriously decoded to discover their true meaning.
In his 1766 Alchemical Catechism, Théodore Henri de Tschudi suggested that the usage of the metals was symbolic:
Psychology
Alchemical symbolism has been important in analytical psychology, where it was revived and popularized from near extinction by the Swiss psychologist Carl Gustav Jung. Jung was initially confounded by and at odds with alchemy and its images, but after being given a copy of The Secret of the Golden Flower, a Chinese alchemical text translated by his friend Richard Wilhelm, he discovered a direct parallel between the symbolic images in the alchemical drawings and the inner, symbolic images coming up in his patients' dreams, visions, or fantasies. He observed these alchemical images occurring during the psychic process of transformation, a process that Jung called "individuation". Specifically, he regarded the conjuring up of images of gold or Lapis as symbolic expressions of the origin and goal of this "process of individuation". Together with his alchemical mystica soror (mystical sister), the Jungian Swiss analyst Marie-Louise von Franz, Jung began collecting old alchemical texts, compiled a lexicon of key phrases with cross-references, and pored over them. The volumes he wrote shed new light on the art of transubstantiation and renewed alchemy's popularity as a symbolic process of coming into wholeness as a human being, in which opposites are brought into contact and inner and outer, spirit and matter, are reunited in the hieros gamos, or divine marriage. His writings are influential in general psychology, but especially for those who have an interest in understanding the importance of dreams, symbols, and the unconscious archetypal forces (archetypes) that comprise all psychic life.
Both von Franz and Jung have contributed significantly to the subject and work of alchemy and its continued presence in psychology as well as contemporary culture. Among the volumes Jung wrote on alchemy, his magnum opus is Volume 14 of his Collected Works, Mysterium Coniunctionis.
Literature
Alchemy has had a long-standing relationship with art, seen both in alchemical texts and in mainstream entertainment. Literary alchemy appears throughout the history of English literature from Shakespeare to J. K. Rowling, and also in the popular Japanese manga Fullmetal Alchemist. Here, characters or plot structure follow an alchemical magnum opus. In the 14th century, Chaucer began a trend of alchemical satire that can still be seen in recent fantasy works like those of the late Sir Terry Pratchett. Another literary work taking inspiration from the alchemical tradition is the 1988 novel The Alchemist by Brazilian writer Paulo Coelho.
Visual artists have had a similar relationship with alchemy. While some used it as a source of satire, others worked with the alchemists themselves or integrated alchemical thought or symbols in their work. Music was also present in the works of alchemists and continues to influence popular performers. In the last hundred years, alchemists have been portrayed in a magical and spagyric role in fantasy fiction, film, television, novels, comics and video games.
Science
One goal of alchemy, the transmutation of base substances into gold, is now known to be impossible by means of traditional chemistry, but possible by other physical means. Although not financially worthwhile, gold was synthesized in particle accelerators as early as 1941.