https://en.wikipedia.org/wiki/Procellariidae
Procellariidae
The family Procellariidae is a group of seabirds that comprises the fulmarine petrels, the gadfly petrels, the diving petrels, the prions, and the shearwaters. This family is part of the bird order Procellariiformes (or tubenoses), which also includes the albatrosses and the storm petrels. The procellariids are the most numerous family of tubenoses, and the most diverse. They range in size from the giant petrels, with a wingspan of around , which are almost as large as the albatrosses, to the diving petrels, with a wingspan of around , which are similar in size to the little auks or dovekies in the family Alcidae. Male and female birds are identical in appearance. The plumage colour is generally dull, with blacks, whites, browns and greys. The birds feed on fish, squid and crustacea, with many also taking fisheries discards and carrion. Although they are agile swimmers and perform well in water, petrels have weak legs and can only shuffle on land; the giant petrels of the genus Macronectes are the only two species capable of proper terrestrial locomotion. All species are accomplished long-distance foragers, and many undertake long trans-equatorial migrations. They are colonial breeders, exhibiting long-term mate fidelity and site philopatry. In all species, a single white egg is laid each breeding season. The parents take it in turns to incubate the egg and to forage for food. The feeding area can be at a great distance from the nest site. The incubation times and chick-rearing periods are exceptionally long compared with those of other birds. Many procellariids have breeding populations of several million pairs; others number fewer than 200 birds. Humans have traditionally exploited several species of fulmar and shearwater (known as muttonbirds) for food, fuel, and bait, a practice that continues in a controlled fashion today. Several species are threatened by introduced species attacking adults and chicks in breeding colonies and by long-line fisheries. Taxonomy and evolution The family Procellariidae was introduced (as Procellaridæ) by the English zoologist William Elford Leach in a guide to the contents of the British Museum published in 1820. The name is derived from the type genus Procellaria, which in turn is derived from the Latin word procella meaning "storm" or "gale". The type genus was named in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. Procellariidae is one of the families that make up the order Procellariiformes. Before the introduction of molecular phylogenetics, the traditional arrangement was to divide the Procellariiformes into four families: Diomedeidae containing the albatrosses, Hydrobatidae containing all the storm petrels, Pelecanoididae containing the diving petrels and Procellariidae containing the petrels, shearwaters and fulmars. The family Hydrobatidae was further divided into two subfamilies, the northern storm petrels in Hydrobatinae and the southern or austral storm petrels in Oceanitinae. A 1998 analysis of mitochondrial cytochrome b sequences found that there was deep genetic divergence between the two subfamilies. Subsequent large-scale multigene studies found that the two subfamilies were not sister taxa. The storm petrels were therefore split into two families: Hydrobatidae containing the northern storm petrels and Oceanitidae containing the southern storm petrels. The multigene genetic studies found that the diving petrels in the family Pelecanoididae were nested within the family Procellariidae. 
As a result, the diving petrels were merged into Procellariidae. The molecular evidence suggests that the albatrosses were the first to diverge from the ancestral stock, and the austral storm petrels next, with the procellariids and northern storm petrels splitting most recently. Within the procellariid family, a genetic analysis based on the cytochrome b gene published in 2004 indicated that the genus Puffinus contained two distinct clades and was polyphyletic. The genus was therefore split and a group of species moved into the resurrected genus Ardenna. The other genera within the family were found to be monophyletic, but the relationships between the genera remained unclear. This changed when a multigene genetic study published in 2021 provided a genus-level phylogeny of the family. There are 99 species of procellariid in 16 genera. The family has usually been broken up into four fairly distinct groups: the fulmarine petrels, the gadfly petrels, the prions, and the shearwaters. With the inclusion of the diving petrels there are now five main groups. The fulmarine petrels include the largest procellariids, the giant petrels, as well as the two fulmar species, the snow petrel, the Antarctic petrel, and the Cape petrel. The fulmarine petrels are a diverse group with differing habits and appearances, but are linked morphologically by their skull features, particularly the long prominent nasal tubes. The four diving petrels are the smallest procellariids, with lengths of around and wingspans of . They are compact birds with short wings that are adapted for use under water. They have a characteristic whirring flight and dive into the water without settling. They probably remain all year in the seas near their breeding sites. The gadfly petrels, so named for their helter-skelter flight, are the 37 species in the genus Pterodroma. The species vary from small to medium in size, in length, and are long-winged with short hooked bills. They are most closely related to the Kerguelen petrel, which is placed in its own genus Aphrodroma. The prions comprise six species of true prion in the genus Pachyptila and the closely related blue petrel. Often known in the past as whalebirds, three species have large bills filled with lamellae that they use to filter plankton somewhat as baleen whales do; the old name derives from their association with whales rather than from their bills, whereas the name "prion" does refer to the bill, deriving from the Ancient Greek for "saw". They are small procellariids, in length, with a prominent dark M-shaped mark across the upperwing of their grey plumage. All are restricted to the southern hemisphere. The shearwaters are adapted for diving after prey instead of foraging on the ocean's surface; several species have been recorded diving deeper than . They are known for the long trans-equatorial migrations undertaken by many species. The shearwaters include the 20 or so species of the genus Puffinus, seven species in the genus Ardenna, as well as the five large Procellaria species and the four Calonectris species. While all four of these genera are often known collectively as shearwaters, the Procellaria are called petrels in their common names. Morphology and flight The procellariids are small- to medium-sized seabirds. The largest, the southern giant petrel, with a wingspan of , is almost as large as the albatrosses; the smallest, the diving petrels, have a wingspan of and are similar in size to the little auks or dovekies in the family Alcidae. 
There are no obvious differences between the sexes, although females tend to be slighter. Like all Procellariiformes, the procellariids have a characteristic tubular nasal passage used for olfaction. This ability to smell helps to locate patchily distributed prey at sea and may help locate nesting colonies. The plumage of the procellariids is usually dull, with greys, bluish greys, blacks and browns being the usual colours, although some species have striking patterns such as the Cape petrel. The technique of flight among procellariids depends on foraging methods. Compared to an average bird, all procellariids have a high aspect ratio (meaning their wings are long and narrow) and a heavy wing loading. Therefore, they must maintain a high speed in order to remain in the air. Most procellariids use two techniques to do this, namely, dynamic soaring and slope soaring. Dynamic soaring involves gliding across wave fronts, thus taking advantage of the vertical wind gradient and minimising the effort required to stay in the air. Slope soaring is more straightforward: the procellariid turns to the wind, gaining height, from where it can then glide back down to the sea. Most procellariids aid their flight by means of flap-glides, where bursts of flapping are followed by a period of gliding; the amount of flapping dependent on the strength of the wind and the choppiness of the water. Because of the high speeds required for flight, procellariids need to either run or face into a strong wind in order to take off. The giant petrels share with the albatrosses an adaptation known as a shoulder-lock: a sheet of tendon that locks the wing when fully extended, allowing the wing to be kept up and out without any muscle effort. Gadfly petrels often feed on the wing, snapping prey without landing on the water. The flight of the smaller prions is similar to that of the storm petrels, being highly erratic and involving weaving and even looping the loop. The wings of all species are long and stiff. In some species of shearwater the wings are used to power the birds underwater while diving for prey. Their heavier wing loadings, in comparison with surface-feeding procellariids, allow these shearwaters to achieve considerable depths (below in the case of the short-tailed shearwater). Procellariids generally have weak legs that are set back, and many species move around on land by resting on the breast and pushing themselves forward, often with the help of their wings. The exceptions to this are the two species of giant petrel, which have strong legs used when they feed on land. Distribution and migration The procellariids are present in all the world's oceans and most of the seas. They are absent from the Bay of Bengal and Hudson Bay, but are present year round or seasonally in the rest. The seas north of New Zealand are the centre of procellariid biodiversity, with the most species. Among the groups, the fulmarine petrels have a mostly polar distribution, with most species living around Antarctica and one, the northern fulmar ranging in the Northern Atlantic and Pacific Oceans. Of the four species of diving petrel, two are found along the coasts of South America, while the remaining two have circumpolar distributions in the Southern Ocean. The prions are restricted to the Southern Ocean, and the gadfly petrels are found mostly in the tropics with some temperate species. The shearwaters are the most widespread group and breed in most temperate and tropical seas. 
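The aerodynamic reasoning in the flight passage above can be made concrete with two standard definitions from fixed-wing aerodynamics. The relations below are textbook results rather than anything specific to procellariids, and the symbols (wingspan $b$, wing area $S$, body mass $m$, air density $\rho$, lift coefficient $C_L$, airspeed $v$) are introduced here purely for illustration.

\[ \text{aspect ratio} = \frac{b^2}{S}, \qquad \text{wing loading} = \frac{mg}{S}, \qquad mg = \tfrac{1}{2}\rho v^2 S C_L \;\Longrightarrow\; v = \sqrt{\frac{2\,(mg/S)}{\rho\,C_L}} . \]

Because the lift coefficient and the air density are bounded, a bird with a heavy wing loading can only satisfy the lift condition at a correspondingly high airspeed, which is why procellariids must keep moving fast in flight and must run or face into a strong wind in order to take off.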
Many procellariids undertake long annual migrations in the non-breeding season. Southern species of shearwater such as the sooty shearwater and short-tailed shearwater, breeding on islands off Australia, New Zealand and Chile, undertake trans-equatorial migrations involving millions of birds, travelling to the waters off Alaska and back each year during the austral winter. Manx shearwaters from the North Atlantic also undertake trans-equatorial migrations, from Western Europe and North America to the waters off Brazil in the South Atlantic. The mechanisms of navigation are poorly understood, but displacement experiments, in which individuals were removed from colonies and flown to far-flung release sites, have shown that they are able to home in on their colonies with remarkable precision. A Manx shearwater released in Boston returned to its colony on Skomer, Wales, within 13 days, a distance of 5,150 kilometres (3,200 mi). Behaviour Food and feeding The diet of the procellariids is the most diverse of all the Procellariiformes, as are the methods employed to obtain it. With the exception of the giant petrels, all procellariids are exclusively marine, and the diet of all species is dominated by fish, squid, crustaceans and carrion, or some combination of these. The majority of species are surface feeders, obtaining food that has been pushed to the surface by other predators or currents, or that has floated to the surface after death. Among the surface feeders, some, principally the gadfly petrels, can obtain food by dipping from flight, while most of the rest feed while sitting on the water. These surface feeders are dependent on their prey being close to the surface, and for this reason procellariids are often found in association with other predators or oceanic convergences. Studies have shown strong associations between many different kinds of seabirds, including wedge-tailed shearwaters, and dolphins and tuna, which push shoaling fish up towards the surface. The gadfly petrels and the Kerguelen petrel mainly feed at night. In so doing they can take advantage of the nocturnal migration of cephalopods and other food species towards the surface. The fulmarine petrels are generalists, which for the most part take many species of fish and crustacea. The giant petrels, uniquely for Procellariiformes, will feed on land, eating the carrion of other seabirds and seals. They will also attack the chicks of other seabirds. The diet of the giant petrels varies according to sex, with the females taking more krill and the males more carrion. All the fulmarine petrels readily feed on fisheries discards at sea, a habit that has been implicated in (but not proved to have caused) the expansion in range of the northern fulmar in the Atlantic. The three larger prion species have bills filled with lamellae, which act as filters to sift zooplankton from the water. Water is forced through the lamellae and small prey items are collected. This technique is often used in conjunction with a method known as hydroplaning, where the bird dips its bill beneath the surface and propels itself forward with wings and feet as if walking on the water. The diving petrels and many of the shearwaters are proficient divers. While it has long been known that they regularly dive from the surface to pursue prey, using their wings for propulsion, the depth that they are able to dive to was not appreciated (or anticipated) until scientists began to deploy maximum-depth recorders on foraging birds. 
Studies of both long-distance migrants such as the sooty shearwater and more sedentary species such as the black-vented shearwater have shown maximum diving depths of and . Tropical shearwaters, such as the wedge-tailed shearwater and the Audubon's shearwater, also dive in order to hunt, making the shearwaters the only tropical seabirds capable of exploiting that ecological niche (all other tropical seabirds feed close to the surface). Many other species of procellariid, from white-chinned petrels to slender-billed prions, dive to a couple of metres below the surface, though not as proficiently or as frequently as the shearwaters. Breeding Colonies The procellariids are colonial, nesting for the most part on islands. These colonies vary in size from over a million birds to just a few pairs, and can be densely concentrated or widely spaced. At one extreme the greater shearwater nests in concentrations of one pair per square metre in three colonies of more than one million pairs, whereas the giant petrels nest in clumped but widely spaced territories that barely qualify as colonial. Colonies are usually located near the coast, but some species nest far inland and even at high altitudes. Hutton's shearwater (Puffinus huttoni) breeds in burrows on the sea-facing mountainside of the Kaikoura Ranges on South Island, New Zealand. The colonies are above sea level at a distance of from the coast. Other exceptions are Barau's petrel (Pterodroma baraui) that breeds at on the island of Réunion in the Indian Ocean, and the snow petrel (Pagodroma nivea) that breeds in Antarctica on mountain ledges up to from the open sea. Most seabirds are colonial, and the reasons for colonial behaviour are assumed to be similar, if incompletely understood by scientists. Procellariids for the most part have weak legs and are unable to easily take off, making them highly vulnerable to mammalian predators. Most procellariid colonies are located on islands that have historically been free of mammals; for this reason some species cannot help but be colonial as they are limited to a few locations to breed. Even species that breed on continental Antarctica, such as the Antarctic petrel, are forced by habitat preference (snow-free north-facing rock) to breed in just a few locations. Most procellariids' nests are in burrows or on the surface on open ground, with a smaller number nesting under the cover of vegetation (such as in a forest). All the fulmarine petrels bar the snow petrel nest in the open, the snow petrel instead nesting inside natural crevices. Of the rest of the procellariids the majority nest in burrows or crevices, with a few tropical species nesting in the open. There are several reasons for these differences. The fulmarine petrels are probably precluded from burrowing by their large size (the crevice-nesting snow petrel is the smallest fulmarine petrel) and the high latitudes they breed in, where frozen ground is difficult to burrow into. The smaller size of the other species, and their lack of agility on land, mean that even on islands free from mammal predators they are still vulnerable to skuas, gulls and other avian predators, something the aggressive oil-spitting fulmars are not. The chicks of all species are vulnerable to predation, but the chicks of fulmarine petrels can defend themselves in a similar fashion to their parents. In the higher latitudes there are thermal advantages to burrow nesting, as the temperature is more stable than on the surface, and there is no wind-chill to contend with. 
The absence of skuas, gulls and other predatory birds on tropical islands is why some shearwaters and two species of gadfly petrel (Kermadec petrel and the herald petrel) can nest in the open. This has the advantages of reducing competition with burrow nesters from other species and allowing open-ground nesters to nest on coralline islets without soil for burrowing. Procellariids that burrow in order to avoid predation almost always attend their colonies nocturnally in order to reduce predation as well. Procellariids display high levels of philopatry, exhibiting both natal philopatry and site fidelity. Natal philopatry, the tendency of a bird to breed close to where it hatched, is strong among all the Procellariiformes. The evidence for natal philopatry comes from several sources, not the least of which is the existence of several procellariid species that are endemic to a single island. The study of mitochondrial DNA provides evidence of restricted gene flow between different colonies, and has been used to show philopatry in fairy prions. Bird ringing provides compelling evidence of philopatry; a study of Cory's shearwaters nesting near Corsica found that nine out of 61 male chicks that returned to breed at their natal colony actually bred in the burrow they were raised in. This tendency towards philopatry is stronger in some species than others, and several species readily prospect potential new colony sites and colonise them. It is hypothesised that there is a cost to dispersing to a new site, the chance of not finding a mate of the same species, that selects against it for rarer species, whereas there is probably an advantage to dispersal for species that have colony sites that change dramatically during periods of glacial advance or retreat. There are differences in the tendency to disperse based on sex, with females being more likely to breed away from the natal site. Mate and site fidelity Procellariids, as well as having strong natal philopatry, exhibit strong site fidelity, returning to the same nesting site, burrow or territory in sequential years. The figure varies for different species but is high for most species, an estimated 91% for Bulwer's petrels. The strength of this fidelity can also vary with sex; almost 85% of male Cory's shearwaters return to the same burrow to breed the year after a successful breeding attempt, while the figure for females is around 76%. This tendency towards using the same site from year to year is matched by strong mate fidelity, with birds breeding with the same partner for many years; it has been suggested that the two are linked, with site fidelity acting as a means in which partnered birds could meet at the beginning of the breeding season. One pair of northern fulmars bred as a pair in the same site for 25 years. Like the albatrosses the procellariids take several years to reach sexual maturity, though due to the greater variety of sizes and lifestyles, the age of first breeding stretches from two or three years in the smaller species to 12 years in the larger ones. The procellariids lack the elaborate breeding dances of the albatrosses, in no small part due to the tendency of most of them to attend colonies at night and breed in burrows, where visual displays are useless. 
The fulmarine petrels, which nest on the surface and attend their colonies diurnally, do use a repertoire of stereotyped behaviours such as cackling, preening, head waving and nibbling, but for most species courtship interactions are limited to some billing (rubbing the two bills together) in the burrow and the vocalisations made by all species. The calls serve a number of functions: they are used territorially to protect burrows or territories and to call for mates. Each call type is unique to a particular species and indeed it is possible for procellariids to identify the sex of the bird calling. It may also be possible to assess the quality of potential mates; a study of blue petrels found a link between the rhythm and duration of calls and the body mass of the bird. The ability of an individual to recognise its mate has been demonstrated in several species. Breeding season Like most seabirds, the majority of procellariids breed once a year. There are exceptions; many individuals of the larger species, such as the white-headed petrel, will skip a breeding season after successfully fledging a chick, and some of the smaller species, such as the Christmas shearwaters, breed on a nine-month schedule. Among those that breed annually, there is considerable variation as to the timing; some species breed in a fixed season while others breed all year round. Climate and the availability of food resources are important influences on the timing of procellariid breeding; species that breed at higher latitudes always breed in the summer as conditions are too harsh in the winter. At lower latitudes many, but not all, species breed continuously. Some species breed seasonally to avoid competition with other species for burrows, to avoid predation or to take advantage of seasonally abundant food. Others, such as the tropical wedge-tailed shearwater, breed seasonally for unknown reasons. Among the species that exhibit seasonal breeding there can be high levels of synchronization, both of time of arrival at the colony and of lay date. Procellariids begin to attend their nesting colony around one month prior to laying. Males will arrive first and attend the colony more frequently than females, partly in order to protect a site or burrow from potential competitors. Prior to laying there is a period known as the pre-laying exodus in which both the male and female are away from the colony, building up reserves in order to lay and undertake the first incubation stint respectively. This pre-laying exodus can vary in length from 9 days (as in the Cape petrel) to around 50 days in Atlantic petrels. All procellariids lay a single white egg per pair per breeding season, in common with the rest of the Procellariiformes. The egg is large compared to that of other birds, weighing 6–24% of the female's weight. Immediately after laying the female goes back to sea to feed while the male takes over incubation. Incubation duties are shared by both sexes in shifts that vary in length between species, individuals and the stage of incubation. The longest recorded shift was 29 days by a Murphy's petrel from Henderson Island; the typical length of a gadfly petrel stint is between 13 and 19 days. Fulmarine petrels, shearwaters and prions tend to have shorter stints, averaging between 3 and 13 days. Incubation takes a long time, from 40 days for the smaller species (such as prions) to around 55 days for the larger species. 
The incubation period is longer if eggs are abandoned temporarily; procellariid eggs are resistant to chilling and can still hatch after being left unattended for a few days. After hatching the chick is brooded by a parent until it is large enough to thermoregulate efficiently, and in some cases defend itself from predation. This guard stage lasts a short while for burrow-nesting species (2–3 days) but longer for surface-nesting fulmars (around 16–20 days) and giant petrels (20–30 days). After the guard stage both parents feed the chick. In many species the parents' foraging strategy alternates between short trips lasting 1–3 days and longer trips of 5 days. The shorter trips, which are taken over the continental shelf, benefit the chick with faster growth, but longer trips to more productive pelagic feeding grounds are needed for the parents to maintain their own body condition. The meals are composed of both prey items and stomach oil, an energy-rich food that is lighter to carry than undigested prey items. This oil is created in a stomach organ known as a proventriculus from digested prey items, and gives procellariids and other Procellariiformes their distinctive musty smell. Chick development is quite slow compared with that of other birds, with fledging taking place at around two months after hatching for the smaller species and four months for the largest species. The chicks of some species are abandoned by the parents; parents of other species continue to bring food to the nesting site after the chick has left. Chicks put on weight quickly and some can outweigh their parents, although they will slim down before they leave the nest. All procellariid chicks fledge by themselves, and there is no further parental care after fledging. The life expectancy of procellariids is between 15 and 20 years; the oldest recorded member of the family was a northern fulmar that was over 50 years old. Relationship with humans Exploitation Procellariids have been a seasonally abundant source of food for people wherever people have been able to reach their colonies. Early records of human exploitation of shearwaters (along with albatrosses and cormorants) come from the remains of hunter-gatherer middens in southern Chile, where sooty shearwaters were taken 5000 years ago. More recently, procellariids have been hunted for food by Europeans (particularly the northern fulmar in Europe), by Inuit, and by sailors around the world. The hunting pressure on the Bermuda petrel, or cahow, was so intense that the species nearly became extinct and went unrecorded for 300 years. The name of one species, the providence petrel, is derived from its (seemingly) miraculous arrival on Norfolk Island, where it provided a windfall for starving European settlers; within ten years the providence petrel was extinct on Norfolk. Several species of procellariid have gone extinct in the Pacific since the arrival of humans, and their remains have been found in middens dated to that time. More sustainable shearwater-harvesting industries developed in Tasmania and New Zealand, where the practice of harvesting what are known as muttonbirds continues today. Threats and conservation While some species of procellariid have populations that number in the millions, many species are much less common and several are threatened with extinction. Human activities have caused dramatic declines in the numbers of some species, particularly species that were originally restricted to one island. 
According to the IUCN, 43 species are listed as vulnerable or worse, with 12 critically endangered. Procellariids face many threats, but as measured by the number of species affected, the main ones are introduced species on their breeding grounds, light pollution, marine fisheries (particularly bycatch), pollution, exploitation and climate change. The most pressing threat for many species, particularly the smaller ones, comes from species introduced to their colonies. Procellariids overwhelmingly breed on islands away from land predators such as mammals, and for the most part have lost the defensive adaptations needed to deal with them (with the exception of the oil-spitting fulmarine petrels). The introduction of mammal predators such as feral cats, rats, mongooses and mice can have disastrous results for ecologically naïve seabirds. These predators can either directly attack and kill breeding adults, or, more commonly, attack eggs and chicks. Burrowing species that leave their young unattended at a very early stage are particularly vulnerable to attack. Studies on grey-faced petrels breeding on New Zealand's Whale Island (Moutohora) have shown that a population under heavy pressure from Norway rats will produce virtually no young during a breeding season, whereas if the rats are controlled (through the use of poison), breeding success is much higher. That study highlighted the role that non-predatory introduced species can play in harming seabirds; introduced rabbits on the island caused little damage to the petrels, other than damaging their burrows, but they acted as a food source for the rats during the non-breeding season, which allowed rat numbers to be higher than they otherwise would be, resulting in more predators for the petrels to contend with. Interactions with introduced species can be quite complex. Gould's petrels breed only on two islands, Cabbage Tree Island and Boondelbah Island off Port Stephens (New South Wales). Introduced rabbits destroyed the forest understory on Cabbage Tree Island; this both increased the vulnerability of the petrels to natural predators and left them vulnerable to the sticky fruits of the birdlime tree (Pisonia umbellifera), a native plant. In the natural state these fruits lodge in the understory of the forest, but with the understory removed the fruits fall to the ground where the petrels move about, sticking to their feathers and making flight impossible. Larger species of procellariid face similar problems to the albatrosses with long-line fisheries. These species readily take offal from fishing boats and will steal bait from the long lines as they are being set, risking becoming snared on the hooks and drowning. In the case of the spectacled petrel this has led to the species undergoing a large decline and its listing as vulnerable. Diving species, most especially the shearwaters, are also vulnerable to gill-net fisheries. Studies of gill-net fisheries show that shearwaters (sooty and short-tailed) made up 60% of the seabirds killed by gill-nets in Japanese waters and 40% in Monterey Bay, California, in the 1980s, with the total number of shearwaters killed in Japan being between 65,000 and 125,000 per annum over the study period (1978–1981). Procellariids are vulnerable to other threats as well. Ingestion of plastic flotsam is a problem for the family, as it is for many other seabirds. 
Once swallowed, this plastic can cause a general decline in the fitness of the bird, or in some cases lodge in the gut and cause a blockage, leading to death by starvation. Procellariids are also vulnerable to general marine pollution, as well as oil spills. Some species, such as Barau's petrel, Newell's shearwater and Cory's shearwater, which nest high up on large developed islands, are victims of light pollution. Fledging chicks are attracted to streetlights and are then unable to reach the sea. An estimated 20–40% of fledging Barau's petrels are attracted to the streetlights on Réunion. Conservationists are working with governments and fisheries to prevent further declines and increase populations of endangered procellariids. Progress has been made in protecting many colonies, the sites where species are most vulnerable. On 20 June 2001, the Agreement on the Conservation of Albatrosses and Petrels was signed by seven major fishing nations. The agreement lays out a plan to manage fisheries by-catch, protect breeding sites, promote conservation in the industry, and research threatened species. The developing field of island restoration, in which introduced species are removed and native species and habitats restored, has been used in several procellariid recovery programmes. Invasive species such as rats, feral cats and pigs have been either removed or controlled on many remote islands in the tropical Pacific (such as the Northwestern Hawaiian Islands), around New Zealand (where island restoration was developed), and in the south Atlantic and Indian Oceans. The grey-faced petrels of Whale Island (mentioned above) have achieved much higher fledging success since the introduced Norway rats were completely removed. At sea, procellariids threatened by long-line fisheries can be protected by techniques such as setting long-line bait at night, dyeing the bait blue, setting the bait underwater, increasing the amount of weight on lines and using bird scarers, all of which can reduce the seabird by-catch. The Agreement on the Conservation of Albatrosses and Petrels came into force in 2004 and has been ratified by eight countries: Australia, Ecuador, New Zealand, Spain, South Africa, France, Peru and the United Kingdom. The treaty requires these countries to take specific actions to reduce by-catch and pollution and to remove introduced species from nesting islands.
Biology and health sciences
Procellariiformes
Animals
https://en.wikipedia.org/wiki/Peramelemorphia
Peramelemorphia
The order Peramelemorphia includes the bandicoots and bilbies. All members of the order are endemic to Australia-New Guinea and most have the characteristic bandicoot shape: a plump, arch-backed body with a long, delicately tapering snout, very large upright ears, relatively long, thin legs, and a thin tail. Their size varies from about 140 grams up to 4 kilograms, but most species are about one kilogram. Phylogeny Placement within Marsupialia The position of the Peramelemorphia within the marsupial family tree has long been puzzling and controversial. There are two morphological features in the order that appear to show a clear evolutionary link with another marsupial group: the type of foot, and the teeth. Unfortunately, these clear signposts point in opposite directions. All members of the order are polyprotodont (have several pairs of lower front teeth); in the case of the Peramelemorphia, three pairs. This suggests that they evolved within Dasyuromorphia (the marsupial carnivores). On the other hand, they also have an unusual feature in their feet: the second and third toes are fused together. This condition is called syndactyly, and is characteristic of the Diprotodontia (the order of marsupial herbivores that includes kangaroos, wombats, possums, and many others). Attempts to resolve this puzzle include the view that the bandicoot group evolved from the carnivores, retaining the polyprotodont dentition and independently evolving a syndactyl hind foot, and the contrary view that syndactyly is so unusual that it is unlikely to have evolved twice, and that the bandicoot group must therefore have evolved from a possum-like diprotodont creature and re-evolved its extra teeth. A third view suggests that the bandicoot group evolved from a primitive carnivore, developed the syndactylous hind foot as a specialisation for climbing, and that the diprotodonts then split off and evolved the two-tooth jaw that gives them their name. Recent molecular-level investigations do not so far appear to have resolved the puzzle, but do strongly suggest that whatever the relationship of the bandicoot group to the other marsupial orders may be, it is a distant one. Relationships within Peramelemorphia Recent molecular analyses have resulted in a phylogenetic reconstruction of the members of Peramelemorphia with quite strong support. The most basal split separates Thylacomyidae (Macrotis) from all other bandicoots. Probably the next to diverge was the recently extinct Chaeropodidae (Chaeropus). The remaining taxa comprise the Peramelidae, which divides into the subfamily Peramelinae (Isoodon and Perameles) and a clade in which the Echymiperinae (Echymipera and Microperoryctes) form a sister group to the Peroryctinae (Peroryctes); a sketch of this topology is given at the end of this article. Fossil record Many specimens of modern peramelemorphians (e.g. Perameles spp. and Isoodon spp.) have been recovered in the fossil record from Pleistocene and Holocene fossil localities. However, very few fossil species have been recovered to date. The first fossil peramelemorphian species was described by R. A. Stirton in 1955. The specimen Stirton described was a partial lower jaw from the Tirari Desert in Central Australia, Pliocene in age. The lower jaw morphology suggested a relationship with the bilbies (family Thylacomyidae), and it was named Ischnodon australis. It was not until 1976 that Archer and Wade described the next fossil bandicoot. A single upper molar was recovered from the Bluff Downs fossil site, Allingham Formation, in northern Queensland, also Pliocene in age. 
The tooth was similar to that of species of Perameles, and was therefore named Perameles allinghamensis. In 1995, the first Miocene species was described from Riversleigh, and was named Yarala burchfieldi by Dr Jeannette Muirhead. The species was represented by several upper and lower jaws, which were smaller than those of any living bandicoots and had a very primitive dentition. A skull was later recovered in 2000, the first for any fossil peramelemorphian to date. Features of the skull and dentition suggested that Yarala burchfieldi was distinct from other peramelemorphians, and for this reason a new superfamily Yaraloidea and family Yaralidae were erected to classify this species. In 1997, Muirhead, Dawson and Archer described a new species of Perameles, Perameles bowensis, from teeth recovered from two Pliocene fossil localities, Bow and Wellington Caves. The same species was later reported in 2000 from Chinchilla, Queensland, by Mackness and colleagues. In 2002, Price described a new species of Perameles, Perameles sobbei, from the Darling Downs (Pleistocene in age), south-eastern Queensland. This species was represented by a lower jaw and a few isolated lower molars. Additional material, including upper molars, was later described in 2005 from the same site. A second species of Yarala, Yarala kida, was described in 2006 by Schwartz. This species was recovered from Kangaroo Well, a late Oligocene site in the Northern Territory of Australia. This species is thought to be even more primitive than Yarala burchfieldi. The second skull of any fossil peramelemorphian was also recovered from the Miocene sites of Riversleigh. In fact, more than one skull of this new species was found, along with several lower and upper jaws, and it was sufficiently different from any other bandicoot to warrant the erection of a new genus, Galadi. The species was named Galadi speciosus by Travouillon and colleagues. It was short-snouted, unlike modern bandicoots, suggesting that it was more carnivorous than its omnivorous modern relatives. Its relationship to other bandicoots is unclear, but it was likely to be less primitive than Yarala and more primitive than living bandicoots. An additional three species of Galadi were later described in 2013 and named Galadi grandis, Galadi amplus and Galadi adversus. Gurovich et al. (2013) described a new species of mouse-sized bandicoot from Riversleigh and from Kutjamarpu, South Australia. The species, named Bulungu palara, is represented by a skull and several lower and upper jaws. Two other species in this genus were also described from the Etadunna Formation in South Australia: Bulungu muirheadae, which was the oldest fossil bandicoot recovered as of 2013 (about 24 million years old), and Bulungu campbelli. The oldest modern bandicoot (peramelid) and the oldest bilby (thylacomyid) were later discovered by Travouillon et al. (2014) from the Riversleigh World Heritage Area, in middle Miocene fossil deposits (around 15 million years old). The peramelid, Crash bandicoot, was named after the famous video game character and is represented only by a single upper jaw. The bilby, Liyamayi dayi, named after geologist and philanthropist Robert Day, is known only from three teeth (two upper molars and one lower molar). The first record of sexual dimorphism (difference in size between males and females) in a fossil bandicoot was reported from two new species from Riversleigh (Travouillon et al. 2014). 
Named Madju variae and Madju encorensis, they are closely related to modern bandicoots but, like Galadi and Bulungu, do not fall within any modern family. Instead they are classified as perameloids, together with all other known peramelemorphians, to the exclusion of the yaralids. Madju variae is also unusual in preserving an ontogenetic series (an age series from pouch young to adult), only the second known for any fossil marsupial mammal in Australia. The study of this ontogenetic series led researchers to think that Madju variae developed more slowly than modern bandicoots, much more like a bilby, and therefore that the rapid development of modern bandicoots must have evolved after the middle Miocene, when Australia started to become more arid.
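The topology summarised above under Relationships within Peramelemorphia (which was originally accompanied by a cladogram) can be written down as a nested structure. The short Python sketch below is purely illustrative: the nesting follows the prose summary, and the labels and helper function are ad hoc rather than taken from any published analysis.

# Illustrative only: nesting follows the molecular phylogeny summarised above.
TOPOLOGY = (
    "Thylacomyidae (Macrotis)",
    (
        "Chaeropodidae (Chaeropus)",
        (   # Peramelidae
            "Peramelinae (Isoodon, Perameles)",
            (
                "Echymiperinae (Echymipera, Microperoryctes)",
                "Peroryctinae (Peroryctes)",
            ),
        ),
    ),
)

def show(node, depth=0):
    """Print a nested-tuple tree as an indented outline."""
    if isinstance(node, tuple):
        print("  " * depth + "clade")
        for child in node:
            show(child, depth + 1)
    else:
        print("  " * depth + node)

if __name__ == "__main__":
    show(TOPOLOGY)

Running the sketch prints an indented outline in which Thylacomyidae splits off first, Chaeropodidae next, and Echymiperinae and Peroryctinae appear as sister groups within Peramelidae, matching the arrangement described in the text.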
Biology and health sciences
Marsupials
https://en.wikipedia.org/wiki/Supersymmetry
Supersymmetry
Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature. If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics. A supersymmetric theory is a theory in which the equations for force and the equations for matter are identical. In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. In theory, supersymmetry is a type of spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. The names of bosonic partners of fermions are prefixed with s-, because they are scalar particles. For example, if the electron exists in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron. In supersymmetry, each particle from the class of fermions would have an associated particle in the class of bosons, and vice versa, known as a superpartner. The spin of a particle's superpartner is different by a half-integer. In the simplest supersymmetry theories, with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass. Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. Supersymmetry has also been applied to high energy physics, where a supersymmetric extension of the Standard Model is a possible candidate for physics beyond the Standard Model. However, no supersymmetric extensions of the Standard Model have been experimentally verified. History A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa in 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time. J. L. Gervais and B. Sakita (in 1971), Yu. A. Golfand and E. P. Likhtman (also in 1971), and D. V. Volkov and V. P. Akulov (1972), independently rediscovered supersymmetry in the context of quantum field theory, a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure on which the Gervais−Sakita rediscovery was based directly first arose in 1971 in the context of an early version of string theory by Pierre Ramond, John H. Schwarz and André Neveu. 
In 1974, Julius Wess and Bruno Zumino identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which marked them out as remarkable QFTs, and they, Abdus Salam and their fellow researchers introduced early particle physics applications. The mathematical structure of supersymmetry (graded Lie superalgebras) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics, critical phenomena and quantum mechanics to statistical physics, and supersymmetry remains a vital part of many proposed theories in many branches of physics. In particle physics, the first realistic supersymmetric version of the Standard Model was proposed in 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model, or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem. The term supersymmetry was coined by Abdus Salam and John Strathdee in 1974 as a simplification of the term super-gauge symmetry used by Wess and Zumino, although Zumino also used the same term at around the same time. The term supergauge was in turn coined by Neveu and Schwarz in 1971 when they devised supersymmetry in the context of string theory. Applications Extension of possible symmetry groups One reason that physicists explored supersymmetry is that it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries, and the Coleman–Mandula theorem showed that, under certain assumptions, the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or, if there is no mass gap, of the conformal group with a compact internal symmetry group. In 1971 Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through the introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. In 1975, the Haag–Łopuszański–Sohnius theorem analyzed all possible superalgebras in the general form, including those with an extended number of supergenerators and central charges. This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories. The supersymmetry algebra Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations. According to the spin-statistics theorem, bosonic fields commute while fermionic fields anticommute. Combining the two kinds of fields into a single algebra requires the introduction of a Z2-grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra. The simplest supersymmetric extension of the Poincaré algebra is the super-Poincaré algebra. Expressed in terms of two Weyl-spinor supercharges $Q_\alpha$ and $\bar{Q}_{\dot{\beta}}$, it has the anti-commutation relation $\{Q_\alpha, \bar{Q}_{\dot{\beta}}\} = 2(\sigma^\mu)_{\alpha\dot{\beta}} P_\mu$, and all other anti-commutation relations between the $Q$s and commutation relations between the $Q$s and $P$s vanish. In the above expression the $P_\mu$ are the generators of translation and the $\sigma^\mu$ are the Pauli matrices. There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup. 
Supersymmetric quantum mechanics Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons, and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right. SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy; a small numerical sketch of this pairing is given below. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy. In finance In 2021, supersymmetric quantum mechanics was applied to option pricing and the analysis of markets in finance, and to financial networks. Supersymmetry in quantum field theory In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently. While supersymmetry has not been discovered at high energy (see the section Supersymmetry in particle physics below), supersymmetry was found to be effectively realized at the intermediate energy of hadronic physics, where baryons and mesons are superpartners. An exception is the pion, which appears as a zero mode in the mass spectrum and is thus protected by the supersymmetry: it has no baryonic partner. The realization of this effective supersymmetry is readily explained in quark–diquark models: because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.g. anti-green), a diquark cluster viewed with coarse resolution (i.e., at the energy-momentum scale used to study hadron structure) effectively appears as an antiquark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves like a meson. Supersymmetry in condensed matter physics SUSY concepts have provided useful extensions to the WKB approximation. 
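The partner-Hamiltonian pairing described above under Supersymmetric quantum mechanics can be illustrated numerically. The following Python sketch assumes NumPy is available and uses the superpotential W(x) = x purely as an illustrative choice; in units with hbar = 2m = 1 the partner potentials are V∓ = W² ∓ W′, and the excited spectrum of one Hamiltonian should coincide with the spectrum of the other.

import numpy as np

# Spatial grid (illustrative choices).
n, box = 1200, 12.0
x = np.linspace(-box / 2, box / 2, n)
dx = x[1] - x[0]

# Superpotential W(x) = x gives partner potentials V_minus = W**2 - W', V_plus = W**2 + W'.
W = x
Wp = np.ones_like(x)                      # dW/dx
V_minus, V_plus = W**2 - Wp, W**2 + Wp

def hamiltonian(V):
    """Finite-difference H = -d^2/dx^2 + V(x), in units with hbar = 2m = 1."""
    main = 2.0 / dx**2 + V
    off = np.full(n - 1, -1.0 / dx**2)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E_minus = np.linalg.eigvalsh(hamiltonian(V_minus))
E_plus = np.linalg.eigvalsh(hamiltonian(V_plus))

# SUSY QM prediction: H_minus has an unpaired zero-energy ground state, and its
# excited levels are degenerate with the levels of the partner Hamiltonian H_plus.
print("lowest E_minus:", np.round(E_minus[:4], 3))   # approx. 0, 2, 4, 6
print("lowest E_plus: ", np.round(E_plus[:3], 3))    # approx. 2, 4, 6

The unpaired zero mode and the level-by-level degeneracy of the remaining states are exactly the content of the introductory theorem mentioned above.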
Additionally, SUSY has been applied to disorder-averaged systems, both quantum and non-quantum (through statistical mechanics), the Fokker–Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling one particle and as such the 'statistics' do not matter. The supersymmetry method provides a mathematically rigorous alternative to the replica trick (which attempts to address the so-called 'problem of the denominator' under disorder averaging), but only in non-interacting systems. For more on the applications of supersymmetry in condensed matter physics see Efetov (1997). In 2021, a group of researchers showed that, in theory, SUSY could be realised at the edge of a Moore–Read quantum Hall state. However, to date, no experiment has been done to realise it at the edge of a Moore–Read state. In 2022, a different group of researchers created a computer simulation of atoms in one dimension that had supersymmetric topological quasiparticles. Supersymmetry in optics In 2013, integrated optics was found to provide a fertile ground on which certain ramifications of SUSY can be explored in readily accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching, mode conversion and space-division multiplexing becomes possible. SUSY transformations have also been proposed as a way to address inverse scattering problems in optics and as a form of one-dimensional transformation optics. Supersymmetry in dynamical systems All stochastic (partial) differential equations, the models for all types of continuous-time dynamical systems, possess topological supersymmetry. In the operator representation of stochastic evolution, the topological supersymmetry is the exterior derivative, which commutes with the stochastic evolution operator defined as the stochastically averaged pullback induced on differential forms by SDE-defined diffeomorphisms of the phase space. The topological sector of the so-emerging supersymmetric theory of stochastic dynamics can be recognized as the Witten-type topological field theory. The meaning of the topological supersymmetry in dynamical systems is the preservation of phase space continuity: infinitely close points will remain close during continuous time evolution even in the presence of noise. When the topological supersymmetry is broken spontaneously, this property is violated in the limit of infinitely long temporal evolution and the model can be said to exhibit (the stochastic generalization of) the butterfly effect. From a more general perspective, spontaneous breakdown of the topological supersymmetry is the theoretical essence of the ubiquitous dynamical phenomenon variously known as chaos, turbulence, self-organized criticality, etc. The Goldstone theorem explains the associated emergence of the long-range dynamical behavior that manifests itself as noise, the butterfly effect, and the scale-free statistics of sudden (instantonic) processes, such as earthquakes, neuroavalanches, and solar flares, known as Zipf's law and the Richter scale. 
Supersymmetry in mathematics SUSY is also sometimes studied mathematically for its intrinsic properties. This is because it describes complex fields satisfying a property known as holomorphy, which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful "toy models" of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories that interchanges particles and monopoles. The proof of the Atiyah–Singer index theorem is much simplified by the use of supersymmetric quantum mechanics. Supersymmetry in string theory Supersymmetry is an integral part of string theory, a possible theory of everything. There are two types of string theory, supersymmetric string theory or superstring theory, and non-supersymmetric string theory. By definition of superstring theory, supersymmetry is required in superstring theory at some level. However, even in non-supersymmetric string theory, a type of supersymmetry called misaligned supersymmetry is still required in the theory in order to ensure no physical tachyons appear. Any string theories without some kind of supersymmetry, such as bosonic string theory and the , , and heterotic string theories, will have a tachyon and therefore the spacetime vacuum itself would be unstable and would decay into some tachyon-free string theory usually in a lower spacetime dimension. There is no experimental evidence that either supersymmetry or misaligned supersymmetry holds in our universe, and many physicists have moved on from supersymmetry and string theory entirely due to the non-detection of supersymmetry at the LHC. Despite the null results for supersymmetry at the LHC so far, some particle physicists have nevertheless moved to string theory in order to resolve the naturalness crisis for certain supersymmetric extensions of the Standard Model. According to the particle physicists, there exists a concept of "stringy naturalness" in string theory, where the string theory landscape could have a power law statistical pull on soft SUSY breaking terms to large values (depending on the number of hidden sector SUSY breaking fields contributing to the soft terms). If this is coupled with an anthropic requirement that contributions to the weak scale not exceed a factor between 2 and 5 from its measured value (as argued by Agrawal et al.), then the Higgs mass is pulled up to the vicinity of 125 GeV while most sparticles are pulled to values beyond the current reach of LHC. (The Higgs was determined to have a mass of 125 GeV ±0.15 GeV in 2022.) An exception occurs for higgsinos which gain mass not from SUSY breaking but rather from whatever mechanism solves the SUSY mu problem. Light higgsino pair production in association with hard initial state jet radiation leads to a soft opposite-sign dilepton plus jet plus missing transverse energy signal. Supersymmetry in particle physics In particle physics, a supersymmetric extension of the Standard Model is a possible candidate for undiscovered particle physics, and seen by some physicists as an elegant solution to many current problems in particle physics if confirmed correct, which could resolve various areas where current theories are believed to be incomplete and where limitations of current theories are well established. 
In particular, one supersymmetric extension of the Standard Model, the Minimal Supersymmetric Standard Model (MSSM), became popular in theoretical particle physics, as it is the simplest supersymmetric extension of the Standard Model that could resolve the hierarchy problem within the Standard Model, by guaranteeing that quadratic divergences of all orders will cancel out in perturbation theory. If a supersymmetric extension of the Standard Model is correct, superpartners of the existing elementary particles would be new and undiscovered particles, and supersymmetry is expected to be spontaneously broken. There is no experimental evidence that a supersymmetric extension to the Standard Model is correct, nor whether other extensions to current models might be more accurate. It is only since around 2010 that particle accelerators specifically designed to study physics beyond the Standard Model have become operational (i.e. the Large Hadron Collider (LHC)), and it is not known where exactly to look, nor the energies required for a successful search. However, the negative results from the LHC since 2010 have already ruled out some supersymmetric extensions to the Standard Model, and many physicists believe that the Minimal Supersymmetric Standard Model, while not ruled out, is no longer able to fully resolve the hierarchy problem. Supersymmetric extensions of the Standard Model Incorporating supersymmetry into the Standard Model requires doubling the number of particles, since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest possible supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM), which can include the necessary additional new particles that are able to be superpartners of those in the Standard Model. One of the original motivations for the Minimal Supersymmetric Standard Model came from the hierarchy problem. In the Standard Model, the quantum mechanical interactions of the Higgs boson give quadratically divergent contributions to the Higgs mass squared, causing a large renormalization of the Higgs mass; unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. Furthermore, the electroweak scale receives enormous Planck-scale quantum corrections. The observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine tuning. This problem is known as the hierarchy problem. Supersymmetry close to the electroweak scale, such as in the Minimal Supersymmetric Standard Model, would solve the hierarchy problem that afflicts the Standard Model. It would reduce the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions, and Planck-scale quantum corrections would cancel between partners and superpartners (owing to a minus sign associated with fermionic loops). The hierarchy between the electroweak scale and the Planck scale would be achieved in a natural manner, without extraordinary fine-tuning. If supersymmetry were restored at the weak scale, then the Higgs mass would be related to supersymmetry breaking, which can be induced from small non-perturbative effects, explaining the vastly different scales in the weak interactions and gravitational interactions.
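As a schematic illustration of the cancellation just described (a standard textbook estimate using a momentum cutoff Λ; the couplings, signs and numerical factors shown here are illustrative and are not taken from this article), a fermion f with Yukawa coupling λ_f to the Higgs contributes a quadratically divergent one-loop correction to the Higgs mass squared, while each accompanying scalar with quartic coupling λ_S contributes with the opposite sign:

```latex
\Delta m_H^2 \Big|_{\text{fermion loop}} \simeq -\,\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda^2 + \dots ,
\qquad
\Delta m_H^2 \Big|_{\text{scalar loop}} \simeq +\,\frac{\lambda_S}{16\pi^2}\,\Lambda^2 + \dots
```

With two scalar superpartners per Dirac fermion and the supersymmetric relation λ_S = |λ_f|², the Λ² terms cancel, leaving only corrections that grow logarithmically with the cutoff.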
Another motivation for the Minimal Supersymmetric Standard Model comes from grand unification, the idea that the gauge symmetry groups should unify at high energy. In the Standard Model, however, the weak, strong and electromagnetic gauge couplings fail to unify at high energy. In particular, the renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the present particle content of the theory. These coupling constants do not quite meet together at a common energy scale if we run the renormalization group using the Standard Model. After incorporating minimal SUSY at the electroweak scale, the running of the gauge couplings is modified, and joint convergence of the gauge coupling constants is projected to occur at approximately 10¹⁶ GeV (see the numerical sketch below). The modified running also provides a natural mechanism for radiative electroweak symmetry breaking. In many supersymmetric extensions of the Standard Model, such as the Minimal Supersymmetric Standard Model, there is a heavy stable particle (such as the neutralino) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is related closely to R-parity. Supersymmetry at the electroweak scale (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations. The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, but the ground state of the theory does not respect the symmetry and supersymmetry is broken spontaneously. The supersymmetry breaking cannot be accomplished by the particles of the MSSM as they currently appear. This means that there is a new sector of the theory that is responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry spontaneously and must give superparticles TeV-scale masses. There are many models that can do this and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY breaking terms are added to the theory; these break SUSY explicitly but should ultimately arise from a complete theory of spontaneous supersymmetry breaking. Searches and constraints for supersymmetry SUSY extensions of the Standard Model are constrained by a variety of experiments, including measurements of low-energy observables – for example, the anomalous magnetic moment of the muon at Fermilab; the WMAP dark matter density measurement and direct detection experiments – for example, XENON-100 and LUX; and by particle collider experiments, including B-physics, Higgs phenomenology and direct searches for superpartners (sparticles), at the Large Electron–Positron Collider, Tevatron and the LHC. In fact, CERN publicly states that if a supersymmetric model of the Standard Model "is correct, supersymmetric particles should appear in collisions at the LHC." Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were made at CERN by the UA1 experiment and the UA2 experiment at the Super Proton Synchrotron. LEP later set very strong limits, which in 2006 were extended by the D0 experiment at the Tevatron.
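The gauge-coupling unification mentioned above can be illustrated numerically. The following minimal sketch (a rough illustration, not a precision fit: the input values at M_Z are approximate, the U(1) coupling is GUT-normalized, and the MSSM spectrum is assumed to sit right at the weak scale) runs the one-loop renormalization group equations for the inverse couplings and prints the scale at which each pair of couplings meets:

```python
import numpy as np

# One-loop running of the inverse gauge couplings:
#   alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2*pi) * ln(mu/MZ)
MZ = 91.19                                     # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.45])    # (U(1)_Y GUT-normalized, SU(2)_L, SU(3)_c), approximate

b_SM   = np.array([41/10, -19/6, -7.0])        # Standard Model one-loop coefficients
b_MSSM = np.array([33/5,   1.0,  -3.0])        # MSSM one-loop coefficients

def crossing_scale(b, i, j):
    """Scale (GeV) at which couplings i and j become equal at one loop."""
    t = 2 * np.pi * (alpha_inv_MZ[i] - alpha_inv_MZ[j]) / (b[i] - b[j])
    return MZ * np.exp(t)

for name, b in (("SM", b_SM), ("MSSM", b_MSSM)):
    scales = [crossing_scale(b, i, j) for i, j in ((0, 1), (0, 2), (1, 2))]
    print(name, ["%.1e GeV" % s for s in scales])
```

With the MSSM coefficients the three pairwise crossing scales cluster around 2 × 10¹⁶ GeV, whereas with the Standard Model coefficients they are spread over several orders of magnitude, which is the usual numerical statement of the unification argument.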
From 2003 to 2015, WMAP's and Planck's dark matter density measurements have strongly constrained supersymmetric extensions of the Standard Model, which, if they explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density. Prior to the beginning of the LHC, in 2009, fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV. The first runs of the LHC surpassed existing experimental limits from the Large Electron–Positron Collider and Tevatron and partially excluded the aforementioned expected ranges. In 2011–12, the LHC discovered a Higgs boson with a mass of about 125 GeV, and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson, and, in the absence of fine tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 135 GeV. The LHC found no previously unknown particles other than the Higgs boson, which was already suspected to exist as part of the Standard Model, and therefore no evidence for any supersymmetric extension of the Standard Model. Indirect methods include the search for a permanent electric dipole moment (EDM) in the known Standard Model particles, which can arise when the Standard Model particle interacts with the supersymmetric particles. The current best constraint on the electron electric dipole moment puts it at smaller than 10⁻²⁸ e·cm, equivalent to a sensitivity to new physics at the TeV scale and matching that of the current best particle colliders. A permanent EDM in any fundamental particle points towards time-reversal violating physics, and therefore also CP-symmetry violation via the CPT theorem. Such EDM experiments are also much more scalable than conventional particle accelerators and offer a practical alternative to detecting physics beyond the Standard Model as accelerator experiments become increasingly costly and complicated to maintain. The current best limit for the electron's EDM has already reached a sensitivity to rule out so-called 'naive' versions of supersymmetric extensions of the Standard Model. Research in the late 2010s and early 2020s from experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests it is very unlikely that there are any new particles with masses much higher than those which can be found in the Standard Model or at the LHC. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics at the TeV scale. Current status The negative findings in the experiments disappointed many physicists, who believed that supersymmetric extensions of the Standard Model (and other theories relying upon it) were by far the most promising theories for "new" physics beyond the Standard Model, and had hoped for signs of unexpected results from the experiments.
In particular, the LHC result seems problematic for the Minimal Supersymmetric Standard Model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks, which many theorists consider to be "unnatural" (see naturalness and fine tuning). In response to the so-called "naturalness crisis" in the Minimal Supersymmetric Standard Model, some researchers have abandoned naturalness and the original motivation to solve the hierarchy problem naturally with supersymmetry, while other researchers have moved on to other supersymmetric models such as split supersymmetry. Still others have moved to string theory as a result of the naturalness crisis. Former enthusiastic supporter Mikhail Shifman went so far as to urge the theoretical community to search for new ideas and accept that supersymmetry was a failed theory in particle physics. However, some researchers suggested that this "naturalness" crisis was premature because various calculations were too optimistic about the limits of masses which would allow a supersymmetric extension of the Standard Model as a solution. General supersymmetry Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions. Extended supersymmetry It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8...). In four dimensions, a spinor has four degrees of freedom, so the minimal number of supersymmetry generators in four dimensions is four, and having eight copies of supersymmetry means that there are 32 supersymmetry generators. The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2. It is not known how to make massless fields with spin greater than two interact, so the maximal number of supersymmetry generators considered is 32. This is due to the Weinberg–Witten theorem. This corresponds to an N = 8 supersymmetry theory. Theories with 32 supersymmetries automatically have a graviton.
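The counting behind these statements can be written compactly (a back-of-the-envelope restatement of the text above, not additional material from the source):

```latex
\text{(number of supercharges)} \;=\; 4N \;\le\; 32
\quad\Longrightarrow\quad N \le 8 ,
```

with N = 8 saturating the bound imposed by the absence of interacting massless fields of spin greater than two.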
For four dimensions there are the following theories, with the corresponding multiplets listed by the helicities of their component states (CPT adds a copy whenever a multiplet is not invariant under that symmetry):
N = 1: chiral multiplet (0, 1/2); vector multiplet (1/2, 1); gravitino multiplet (1, 3/2); graviton multiplet (3/2, 2)
N = 2: hypermultiplet (−1/2, 0, 1/2); vector multiplet (0, 1/2, 1); supergravity multiplet (1, 3/2, 2)
N = 4: vector multiplet (−1, −1/2, 0, 1/2, 1); supergravity multiplet (0, 1/2, 1, 3/2, 2)
N = 8: supergravity multiplet (−2, −3/2, −1, −1/2, 0, 1/2, 1, 3/2, 2)
Supersymmetry in alternate numbers of dimensions It is possible to have supersymmetry in dimensions other than four. Because the properties of spinors change drastically between different dimensions, each dimension has its own characteristics. In d dimensions, the size of spinors is approximately 2^(d/2) or 2^((d − 1)/2). Since the maximum number of supersymmetries is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven. Fractional supersymmetry Fractional supersymmetry is a generalization of the notion of supersymmetry in which the minimal positive amount of spin does not have to be 1/2 but can be an arbitrary 1/N for integer values of N. Such a generalization is possible in two or fewer spacetime dimensions.
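As a quick check of the dimension counting above (a hedged restatement using the approximate spinor-size formula quoted in the text, ignoring refinements from Majorana and Weyl conditions in particular dimensions):

```latex
d = 11:\ \ 2^{(11-1)/2} = 32 \ \ \text{(exactly the maximum number of supercharges)},
\qquad
d = 12:\ \ 2^{12/2} = 64 > 32 ,
```

which is why eleven is the largest spacetime dimension usually quoted for a supersymmetric theory.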
Physical sciences
Particle physics: General
null
224729
https://en.wikipedia.org/wiki/Potassium%20oxide
Potassium oxide
Potassium oxide (K2O) is an ionic compound of potassium and oxygen. It is a base. This pale yellow solid is the simplest oxide of potassium. It is a highly reactive compound that is rarely encountered. Some industrial materials, such as fertilizers and cements, are assayed assuming the percent composition that would be equivalent to K2O. Production Potassium oxide is produced from the reaction of oxygen and potassium; this reaction affords potassium peroxide, K2O2. Treatment of the peroxide with potassium produces the oxide: K2O2 + 2 K → 2 K2O Alternatively and more conveniently, K2O is synthesized by heating potassium nitrate with metallic potassium: 2 KNO3 + 10 K → 6 K2O + N2↑ Another possibility is to heat potassium peroxide to 500 °C, at which temperature it decomposes to give pure potassium oxide and oxygen: 2 K2O2 → 2 K2O + O2↑ Potassium hydroxide cannot be further dehydrated to the oxide, but it can react with molten potassium to produce it, releasing hydrogen as a byproduct: 2 KOH + 2 K ⇌ 2 K2O + H2↑ Properties and reactions K2O crystallises in the antifluorite structure. In this motif the positions of the anions and cations are reversed relative to their positions in CaF2, with potassium ions coordinated to 4 oxide ions and oxide ions coordinated to 8 potassium ions. K2O is a basic oxide and reacts with water violently to produce the caustic potassium hydroxide. It is deliquescent and will absorb water from the atmosphere, initiating this vigorous reaction. Term use in industry The chemical formula K2O (or simply 'K') is used in several industrial contexts: the N-P-K numbers for fertilizers, in cement formulas, and in glassmaking formulas. Potassium oxide is often not used directly in these products, but the amount of potassium is reported in terms of the K2O equivalent for whatever type of potash was used, such as potassium carbonate. For example, potassium oxide is about 83% potassium by weight, while potassium chloride is only 52%. Potassium chloride provides less potassium than an equal amount of potassium oxide. Thus, if a fertilizer is 30% potassium chloride by weight, its standard potassium rating, based on potassium oxide, would be only 18.8% (a worked calculation is sketched below).
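The K2O-equivalent arithmetic above can be reproduced with a short calculation. This is an illustrative sketch only; the helper function and the exact molar masses used here are assumptions, not taken from the article:

```python
# Express a fertilizer's KCl content as its K2O-equivalent rating.
M_K, M_O, M_Cl = 39.098, 15.999, 35.453        # approximate molar masses, g/mol

k_frac_k2o = 2 * M_K / (2 * M_K + M_O)         # ~0.83: K2O is about 83% potassium by weight
k_frac_kcl = M_K / (M_K + M_Cl)                # ~0.52: KCl is about 52% potassium by weight

def k2o_equivalent(kcl_weight_percent: float) -> float:
    """K2O-equivalent weight percent for a fertilizer containing this much KCl."""
    potassium = kcl_weight_percent * k_frac_kcl   # actual potassium content
    return potassium / k_frac_k2o                 # re-expressed as if supplied by K2O

print(f"{k2o_equivalent(30.0):.1f}")  # prints about 19.0; the article's 18.8% uses the rounded 83% and 52% figures
```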
Physical sciences
Alkali oxide salts
Chemistry
224785
https://en.wikipedia.org/wiki/Strelitzia
Strelitzia
Strelitzia is a genus of five species of perennial plants, native to South Africa. It belongs to the plant family Strelitziaceae. A common name of the genus is bird of paradise flower/plant, because of a resemblance of its flowers to birds-of-paradise. In South Africa, it is commonly known as a crane flower. Two of the species, S. nicolai and S. reginae, are frequently grown as houseplants. It is the floral emblem of the City of Los Angeles and is featured on the reverse of the South African 50-cent coin. Taxonomy The genus was named by Joseph Banks in honour of the British queen Charlotte of Mecklenburg-Strelitz. Description The species S. nicolai is the largest in the genus, reaching 10 m (33 ft) tall, with stately white and blue flowers; the other species are typically much smaller, except S. caudata, which is also a tree, of a typically smaller size than S. nicolai. The leaves are large, long and broad, similar to a banana leaf in appearance, but with a longer petiole, and arranged strictly in two ranks to form a fan-like crown of evergreen foliage. The flowers are produced in a horizontal inflorescence emerging from a stout spathe. Biology and propagation They are pollinated by sunbirds and blue-faced honeyeaters, which perch on and drink from the spathe. The weight of the bird when standing on the spathe opens it to release the pollen onto the bird's feet, which is then deposited on the next spathe it visits. Strelitzia species lack natural insect pollinators; in areas without sunbirds, plants in this genus generally need hand pollination to successfully set seed. Species and hybrids Five species are recognised, although one—S. juncea—has been shown to be genetically nested within another, S. reginae. It is possibly a mutation that is in the process of speciating. Strelitzia × kewensis (artificial hybrid between S. reginae and S. augusta) Allergenicity Plants in the genus Strelitzia produce no wind-borne pollen, and have an OPALS allergy scale rating of 1, meaning a very low risk of causing allergic reaction. Journal Strelitzia is also the name of the botanic journal of the Pretoria-based National Botanical Institute, which has since been converted into the South African National Biodiversity Institute (SANBI). The Strelitzia journal replaced Memoirs of the Botanical Survey of South Africa and Annals of the Kirstenbosch Botanic Gardens.
Biology and health sciences
Zingiberales
Plants
225127
https://en.wikipedia.org/wiki/Proximity%20fuze
Proximity fuze
A proximity fuze (also VT fuze or "variable time fuze") is a fuze that detonates an explosive device automatically when it approaches within a certain distance of its target. Proximity fuzes are designed for elusive military targets such as aircraft and missiles, as well as ships at sea and ground forces. This sophisticated trigger mechanism may increase lethality by 5 to 10 times compared to the common contact fuze or timed fuze. Background Before the invention of the proximity fuze, detonation was induced by direct contact, a timer set at launch, or an altimeter. All of these earlier methods have disadvantages. The probability of a direct hit on a small moving target is low; a shell that just misses the target will not explode. A time- or height-triggered fuze requires good prediction by the gunner and accurate timing by the fuze. If either is wrong, then even accurately aimed shells may explode harmlessly before reaching the target or after passing it. At the start of the Blitz, it was estimated that it took 20,000 rounds to shoot down a single aircraft; other estimates put the figure as high as 100,000 or as low as 2,500. With a proximity fuze, the shell or missile need only pass close by the target at some time during its flight. The proximity fuze makes the problem simpler than the previous methods. Proximity fuzes are also useful for producing air bursts against ground targets. A contact fuze would explode when it hit the ground; it would not be very effective at scattering shrapnel. A timer fuze can be set to explode a few meters above the ground, but the timing is vital and usually requires observers to provide information for adjusting the timing. Observers may not be practical in many situations, the ground may be uneven, and the practice is slow in any event. Proximity fuzes fitted to such weapons as artillery and mortar shells solve this problem by offering a range of set burst heights above ground that are selected by gun crews. The shell bursts at the appropriate height above ground. World War II The idea of a proximity fuze had long been considered militarily useful. Several ideas had been considered, including optical systems that shone a light, sometimes infrared, and triggered when the reflection reached a certain threshold, various ground-triggered means using radio signals, and capacitive or inductive methods similar to a metal detector. All of these suffered from the large size of pre-WWII electronics and their fragility, as well as the complexity of the required circuitry. British military researchers at the Telecommunications Research Establishment (TRE), Samuel Curran, William Butement, Edward Shire, and Amherst Thomson, conceived of the idea of a proximity fuze in the early stages of World War II. Their system involved a small, short-range, Doppler radar. British tests were then carried out with "unrotated projectiles" (the contemporary British term for unguided rockets). However, British scientists were uncertain whether a fuze could be developed for anti-aircraft shells, which had to withstand much higher accelerations than rockets. The British shared a wide range of possible ideas for designing a fuze, including a photoelectric fuze and a radio fuze, with the United States during the Tizard Mission in late 1940. To work in shells, a fuze needed to be miniaturized, survive the high acceleration of cannon launch, and be reliable. The National Defense Research Committee assigned the task to the physicist Merle Tuve at the Department of Terrestrial Magnetism.
Also eventually pulled in were researchers from the National Bureau of Standards (this research unit of NBS later became part of the Army Research Laboratory). Work was split in 1942, with Tuve's group working on proximity fuzes for shells, while the National Bureau of Standards researchers focused on the technically easier task of bombs and rockets. Work on the radio shell fuze was completed by Tuve's group, known as Section T, at The Johns Hopkins University Applied Physics Lab (APL). Over 100 American companies were mobilized to build some 20 million shell fuzes. The proximity fuze was one of the most important technological innovations of World War II. It was so important that it was a secret guarded at a level similar to that of the atom bomb project or the D-Day invasion. The fuze was later found to be able to detonate artillery shells in air bursts, greatly increasing their anti-personnel effects. In Germany, more than 30 (perhaps as many as 50) different proximity fuze designs were developed, or researched, for anti-aircraft use, but none saw service. These included acoustic fuzes triggered by engine sound, one developed by Rheinmetall-Borsig based on electrostatic fields, and radio fuzes. In mid-November 1939, a German neon lamp tube and a design of a prototype proximity fuze based on capacitive effects were received by British Intelligence as part of the Oslo Report. In the post-World War II era, a number of new proximity fuze systems were developed, using radio, optical, and other detection methods. A common form used in modern air-to-air weapons uses a laser as an optical source and time-of-flight for ranging. Design in the UK The first reference to the concept of radar in the UK was made by W. A. S. Butement and P. E. Pollard, who constructed a small breadboard model of a pulsed radar in 1931. They suggested the system would be useful for coast artillery units to accurately measure the range to shipping even at night. The War Office was not interested in the concept, and told the two to work on other issues. In 1936, the Air Ministry took over Bawdsey Manor in Suffolk to further develop their prototype radar systems that emerged the next year as Chain Home. The Army was suddenly extremely interested in the topic of radar, and sent Butement and Pollard to Bawdsey to form what became known as the "Army Cell". Their first project was a revival of their original work on coast defense, but they were soon told to start a second project to develop a range-only radar to aid anti-aircraft guns. As these projects moved from development into prototype form in the late 1930s, Butement turned his attention to other concepts, and among these was the idea of a proximity fuze. In May 1940, a formal proposal from Butement, Edward Shire, and Amherst Thomson was sent to the British Air Defence Establishment based on the second of the two concepts. A breadboard circuit was constructed, and the concept was tested in the laboratory by moving a sheet of tin at various distances. Early field testing connected the circuit to a thyratron trigger operating a tower-mounted camera which photographed passing aircraft to determine the distance of fuze function. Prototype fuzes were then constructed in June 1940, and installed in "unrotated projectiles", the British cover name for solid-fueled rockets, and fired at targets supported by balloons.
Rockets have relatively low acceleration and no spin creating centrifugal force, so the stresses on the delicate electronic fuze are relatively benign. It was understood that the limited application was not ideal; a proximity fuze would be useful on all types of artillery and especially anti-aircraft artillery, but those had very high accelerations. As early as September 1939, John Cockcroft began a development effort at Pye Ltd. to develop thermionic valves (electron tubes) capable of withstanding these much greater forces. Pye's research was transferred to the United States as part of the technology package delivered by the Tizard Mission when the United States entered the war. Pye's group was apparently unable to get their rugged pentodes to function reliably under high pressures until 6 August 1941, which was after the successful tests by the American group. Looking for a short-term solution to the valve problem, in 1940 the British ordered 20,000 miniature electron tubes intended for use in hearing aids from the Western Electric Company and the Radio Corporation of America. An American team under Admiral Harold G. Bowen, Sr. correctly deduced that they were meant for experiments with proximity fuzes for bombs and rockets. In September 1940, the Tizard Mission travelled to the US to introduce their researchers to a number of UK developments, and the topic of proximity fuzes was raised. The details of the British experiments were passed to the United States Naval Research Laboratory and the National Defense Research Committee (NDRC). Information was also shared with Canada in 1940, and the National Research Council of Canada delegated work on the fuze to a team at the University of Toronto. Development in the US Prior to and following receipt of circuitry designs from the British, various experiments were carried out by Richard B. Roberts, Henry H. Porter, and Robert B. Brode under the direction of NDRC Section T Chairman Merle Tuve. Tuve's group was known as Section T, which was located at APL throughout the war. As Tuve later put it in an interview: "We heard some rumors of circuits they were using in the rockets over in England, then they gave us the circuits, but I had already articulated the thing into the rockets, the bombs and shell." As Tuve understood, the circuitry of the fuze was rudimentary. In his words, "The one outstanding characteristic in this situation is the fact that success of this type of fuze is not dependent on a basic technical idea; all of the ideas are simple and well known everywhere." The critical work of adapting the fuze for anti-aircraft shells was done in the United States, not in England. Tuve said that despite being pleased by the outcome of the Butement et al. vs. Varian patent suit, which affirmed that the fuze was a UK invention and thereby saved the U.S. Navy millions of dollars by waiving royalty fees, the fuze design delivered by the Tizard Mission was "not the one we made to work!". A key improvement was introduced by Lloyd Berkner, who developed a system using separate transmitter and receiver circuits. In December 1940, Tuve invited Harry Diamond and Wilbur S. Hinman, Jr, of the United States National Bureau of Standards (NBS) to investigate Berkner's improved fuze and develop a proximity fuze for rockets and bombs to use against German Luftwaffe aircraft. In just two days, Diamond was able to come up with a new fuze design and managed to demonstrate its feasibility through extensive testing at the Naval Proving Ground at Dahlgren, Virginia.
On 6 May 1941, the NBS team built six fuzes which were placed in air-dropped bombs and successfully tested over water. Given their previous work on radio and radiosondes at NBS, Diamond and Hinman developed a proximity fuze which employed the Doppler effect of reflected radio waves. The use of the Doppler effect developed by this group was later incorporated in all radio proximity fuzes for bomb, rocket, and mortar applications. Later, the Ordnance Development Division of the National Bureau of Standards (which in subsequent years became the Harry Diamond Laboratories, named in honor of its former chief, and later merged into the Army Research Laboratory) developed the first automated production techniques for manufacturing radio proximity fuzes at low cost. While working for a defense contractor in the mid-1940s, Soviet spy Julius Rosenberg stole a working model of an American proximity fuze and delivered it to Soviet intelligence. It was not a fuze for anti-aircraft shells, the most valuable type. In the US, NDRC focused on radio fuzes for use with anti-aircraft artillery, where acceleration was up to 20,000 g, compared to about 100 g for rockets and much less for dropped bombs. In addition to extreme acceleration, artillery shells were spun by the rifling of the gun barrels to close to 30,000 rpm, creating immense centrifugal force. Working with the Western Electric Company and the Raytheon Company, miniature hearing-aid tubes were modified to withstand this extreme stress. The T-3 fuze had a 52% success rate against a water target when tested in January 1942. The United States Navy accepted that failure rate. A simulated battle conditions test was started on 12 August 1942. Gun batteries aboard a cruiser tested proximity-fuzed ammunition against radio-controlled drone aircraft targets over Chesapeake Bay. The tests were to be conducted over two days, but the testing stopped when the drones were destroyed early on the first day. The three drones were destroyed with just four projectiles. A particularly successful application was the 90 mm shell with VT fuze, used with the SCR-584 automatic tracking radar and the M9 Gun Director fire control computer. The combination of these three inventions was successful in shooting down many V-1 flying bombs aimed at London and Antwerp, otherwise difficult targets for anti-aircraft guns due to their small size and high speed. VT (Variable Time) The Allied fuze used constructive and destructive interference to detect its target. The design had four or five electron tubes. One tube was an oscillator connected to an antenna; it functioned as both a transmitter and an autodyne detector (receiver). When the target was far away, little of the oscillator's transmitted energy would be reflected to the fuze. When a target was nearby, it would reflect a significant portion of the oscillator's signal. The amplitude of the reflected signal corresponded to the closeness of the target. This reflected signal would affect the oscillator's plate current, thereby enabling detection. However, the phase relationship between the oscillator's transmitted signal and the signal reflected from the target varied depending on the round-trip distance between the fuze and the target. When the reflected signal was in phase, the oscillator amplitude would increase and the oscillator's plate current would also increase. But when the reflected signal was out of phase, the combined radio signal amplitude would decrease, which would decrease the plate current.
So the changing phase relationship between the oscillator signal and the reflected signal complicated the measurement of the amplitude of that small reflected signal. This problem was resolved by taking advantage of the change in frequency of the reflected signal. The distance between the fuze and the target was not constant but rather constantly changing due to the high speed of the fuze and any motion of the target. When the distance between the fuze and the target changed rapidly, then the phase relationship also changed rapidly. The signals were in phase one instant and out of phase a few hundred microseconds later. The result was a heterodyne beat frequency which corresponded to the velocity difference. Viewed another way, the received signal frequency was Doppler-shifted from the oscillator frequency by the relative motion of the fuze and target. Consequently, a low-frequency signal, corresponding to the frequency difference between the oscillator and the received signal, developed at the oscillator's plate terminal. Two of the four tubes in the VT fuze were used to detect, filter, and amplify this low-frequency signal. Note here that the amplitude of this low-frequency 'beat' signal corresponds to the amplitude of the signal reflected from the target. If the amplified beat frequency signal's amplitude was large enough, indicating a nearby object, then it triggered the fourth tube – a gas-filled thyratron. Upon being triggered, the thyratron conducted a large current that set off the electrical detonator. In order to be used with gun projectiles, which experience extremely high acceleration and centrifugal forces, the fuze design also needed to utilize many shock-hardening techniques. These included planar electrodes and packing the components in wax and oil to equalize the stresses. To prevent premature detonation, the inbuilt battery that armed the shell had a several-millisecond delay before its electrolytes were activated, giving the projectile time to clear the area of the gun. The designation VT means 'variable time'. Captain S. R. Shumaker, Director of the Bureau of Ordnance's Research and Development Division, coined the term to be descriptive without hinting at the technology. Development The anti-aircraft artillery range at Kirtland Air Force Base in New Mexico was used as one of the test facilities for the proximity fuze, where almost 50,000 test firings were conducted from 1942 to 1945. Testing also occurred at Aberdeen Proving Ground in Maryland, where about 15,000 bombs were dropped. Other locations included Fort Fisher in North Carolina and Blossom Point, Maryland. US Navy development and early production were outsourced to the Wurlitzer company, at their barrel organ factory in North Tonawanda, New York. Production The first large-scale production of tubes for the new fuzes was at a General Electric plant in Cleveland, Ohio, formerly used for the manufacture of Christmas-tree lamps. Fuze assembly was completed at General Electric plants in Schenectady, New York and Bridgeport, Connecticut. Once inspections of the finished product were complete, a sample of the fuzes produced from each lot was shipped to the National Bureau of Standards, where they were subjected to a series of rigorous tests at the specially built Control Testing Laboratory. These tests included low- and high-temperature tests, humidity tests, and sudden jolt tests. By 1944, a large proportion of the American electronics industry was concentrated on making the fuzes.
Procurement contracts increased from US$60 million in 1942, to $200 million in 1943, to $300 million in 1944, and were topped by $450 million in 1945. As volume increased, efficiency came into play and the cost per fuze fell from $732 in 1942 to $18 in 1945. This permitted the purchase of over 22 million fuzes for approximately one billion dollars ($14.6 billion in 2021 USD). The main suppliers were Crosley, RCA, Eastman Kodak, McQuay-Norris and Sylvania. There were also over two thousand suppliers and subsuppliers, ranging from powder manufacturers to machine shops. It was among the first mass-production applications of printed circuits. Deployment Vannevar Bush, head of the U.S. Office of Scientific Research and Development (OSRD) during the war, credited the proximity fuze with three significant effects. It was important in defense against Japanese kamikaze attacks in the Pacific. Bush estimated a sevenfold increase in the effectiveness of 5-inch anti-aircraft artillery with this innovation. It was an important part of the radar-controlled anti-aircraft batteries that finally neutralized the German V-1 attacks on England. It was used in Europe starting in the Battle of the Bulge, where it was very effective in artillery shells fired against German infantry formations, and changed the tactics of land warfare. At first the fuzes were only used in situations where they could not be captured by the Germans. They were used in land-based artillery in the South Pacific in 1944. Also in 1944, fuzes were allocated to the British Army's Anti-Aircraft Command, which was engaged in defending Britain against the V-1 flying bomb. As most of the British heavy anti-aircraft guns were deployed in a long, thin coastal strip (leaving the area inland free for fighter interceptors), dud shells fell into the sea, safely out of reach of capture. Over the course of the German V-1 campaign, the proportion of flying bombs that were destroyed flying through the coastal gun belt rose from 17% to 74%, reaching 82% during one day. A minor problem encountered by the British was that the fuze was sensitive enough to detonate the shell if it passed too close to a seabird, and a number of seabird "kills" were recorded. The Pentagon refused to allow the Allied field artillery to use the fuzes in 1944, although the United States Navy fired proximity-fuzed anti-aircraft shells in the July 1943 Battle of Gela during the invasion of Sicily. After General Dwight D. Eisenhower demanded that he be allowed to use the fuzes, 200,000 shells with VT fuzes (code named "POZIT") were used in the Battle of the Bulge in December 1944. They made the Allied heavy artillery far more devastating, as all the shells now exploded just before hitting the ground. German divisions were caught out in the open, as they had felt safe from timed fire because it was thought that the bad weather would prevent accurate observation. U.S. General George S. Patton credited the introduction of proximity fuzes with saving Liège and stated that their use required a revision of the tactics of land warfare. Bombs and rockets fitted with radio proximity fuzes were in limited service with both the USAAF and USN at the end of WWII. The main targets for these proximity fuze detonated bombs and rockets were anti-aircraft emplacements and airfields. Sensor types Radio Radio frequency sensing (radar) is the main sensing principle for artillery shells.
The device described in a World War II patent works as follows: The shell contains a micro-transmitter which uses the shell body as an antenna and emits a continuous wave of roughly 180–220 MHz. As the shell approaches a reflecting object, an interference pattern is created. This pattern changes with shrinking distance: every half wavelength in distance (a half wavelength at this frequency is about 0.7 meters), the transmitter is in or out of resonance. This causes a small cycling of the radiated power and consequently of the oscillator supply current, at about 200–800 Hz, the Doppler frequency (a numerical illustration appears at the end of this article). This signal is sent through a band-pass filter, amplified, and triggers the detonation when it exceeds a given amplitude. Optical Optical sensing was developed in 1935, and patented in the United Kingdom in 1936, by a Swedish inventor, probably Edward W. Brandt, using a petoscope. It was first tested as a part of a detonation device for bombs that were to be dropped over bomber aircraft, part of the UK's Air Ministry's "bombs on bombers" concept. It was considered (and later patented by Brandt) for use with anti-aircraft missiles fired from the ground. It then used a toroidal lens that concentrated all light from a plane perpendicular to the missile's main axis onto a photocell. When the cell current changed by a certain amount in a certain time interval, the detonation was triggered. Some modern air-to-air missiles (e.g., the ASRAAM and AA-12 Adder) use lasers to trigger detonation. They project narrow beams of laser light perpendicular to the flight of the missile. As the missile cruises towards its target, the laser energy simply beams out into space. As the missile passes its target, some of the energy strikes the target and is reflected to the missile, where detectors sense it and detonate the warhead. Acoustic Acoustic proximity fuzes are actuated by the acoustic emissions from a target (for example, an aircraft's engine or a ship's propeller). Actuation can be either through an electronic circuit coupled to a microphone or hydrophone, or mechanically using a resonating vibratory reed connected to a diaphragm tone filter. During WW2, the Germans had at least five acoustic fuzes for anti-aircraft use under development, though none saw operational service. The most developmentally advanced of the German acoustic fuze designs was the Rheinmetall-Borsig Kranich (German for "crane"), which was a mechanical device utilizing a diaphragm tone filter sensitive to frequencies between 140 and 500 Hz connected to a resonating vibratory reed switch used to fire an electrical igniter. The Schmetterling, Enzian, Rheintochter and X4 guided missiles were all designed for use with the Kranich acoustic proximity fuze. During WW2, the National Defense Research Committee (NDRC) investigated the use of acoustic proximity fuzes for anti-aircraft weapons but concluded that there were more promising technological approaches. The NDRC research highlighted the speed of sound as a major limitation in the design and use of acoustic fuzes, particularly in relation to missiles and high-speed aircraft. Hydroacoustic influence is widely used as a detonation mechanism for naval mines and torpedoes. A ship's propeller rotating in water produces a powerful hydroacoustic noise which can be picked up using a hydrophone and used for homing and detonation. Influence firing mechanisms often use a combination of acoustic and magnetic induction receivers. Magnetic Magnetic sensing can only be applied to detect huge masses of iron such as ships.
It is used in mines and torpedoes. Fuzes of this type can be defeated by degaussing, by using non-metal hulls for ships (especially minesweepers), or by magnetic induction loops fitted to aircraft or towed buoys. Pressure Some naval mines use pressure fuzes which are able to detect the pressure wave of a ship passing overhead. Pressure sensors are usually used in combination with other fuze detonation technologies such as acoustic and magnetic induction. During WW2, pressure-activated fuzes were developed for sticks (or trains) of bombs to create above-ground airbursts. The first bomb in the stick was fitted with an impact fuze, while the other bombs were fitted with pressure-sensitive diaphragm-actuated detonators. The blast from the first bomb was used to trigger the fuze of the second bomb, which would explode above ground and in turn detonate the third bomb, with the process repeated all the way to the last bomb in the stick. Due to the forward speed of the bomber, bombs fitted with pressure detonators would all explode at about the same height above ground along a horizontal trajectory. This design was used in both the British No. 44 "Pistol" and the German Rheinmetall-Borsig BAZ 55A fuzes.
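As a rough numerical illustration of the beat-frequency figures quoted in the radio-sensing description above (a sketch only: the closing speeds below are assumed values, and only the 180–220 MHz carrier band comes from the text):

```python
# Doppler beat frequency seen by a continuous-wave radio proximity fuze:
#   f_beat = 2 * closing_speed / wavelength, with wavelength = c / f_carrier
C = 3.0e8  # speed of light, m/s

def beat_frequency(f_carrier_hz: float, closing_speed_ms: float) -> float:
    wavelength = C / f_carrier_hz
    return 2 * closing_speed_ms / wavelength

# Assumed shell-to-target closing speeds in m/s (illustrative, not from the article).
for v in (150, 300, 600):
    lo = beat_frequency(180e6, v)
    hi = beat_frequency(220e6, v)
    print(f"{v:4d} m/s -> {lo:5.0f} to {hi:5.0f} Hz")
# Closing speeds of 150-600 m/s give beats of roughly 180-880 Hz,
# bracketing the 200-800 Hz figure quoted in the patent description.
```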
Technology
Explosive weapons
null
225170
https://en.wikipedia.org/wiki/Gymnotiformes
Gymnotiformes
The Gymnotiformes are an order of teleost bony fishes commonly known as Neotropical knifefish or South American knifefish. They have long bodies and swim using undulations of their elongated anal fin. Found almost exclusively in fresh water (the only exceptions are species that occasionally may visit brackish water to feed), these mostly nocturnal fish are capable of producing electric fields to detect prey, for navigation, for communication, and, in the case of the electric eel (Electrophorus electricus), for attack and defense. A few species are familiar to the aquarium trade, such as the black ghost knifefish (Apteronotus albifrons), the glass knifefish (Eigenmannia virescens), and the banded knifefish (Gymnotus carapo). Description Anatomy and locomotion Aside from the electric eel (Electrophorus electricus), Gymnotiformes are slender fish with narrow bodies and tapering tails, hence the common name of "knifefishes". They have neither pelvic fins nor dorsal fins, but do possess greatly elongated anal fins that stretch along almost the entire underside of their bodies. The fish swim by rippling this fin, keeping their bodies rigid. This means of propulsion allows them to move backwards as easily as they move forward. The knifefish has approximately one hundred and fifty fin rays along its ribbon-fin. These individual fin rays can be curved to nearly twice the maximum curvature recorded for ray-finned fish fin rays during locomotion. These fin rays are curved into the direction of motion, indicating that the knifefish has active control of the fin ray curvature, and that this curvature is not the result of passive bending due to fluid loading. Different wave patterns produced along the length of the elongated anal fin allow for various forms of thrust. The wave motion of the fin resembles traveling sinusoidal waves. A forward traveling wave can be associated with forward motion, while a wave in the reverse direction produces thrust in the opposite direction. This undulating motion of the fin produces a system of linked vortex tubes along the bottom edge of the fin. A jet is produced at an angle to the fin, directly related to the vortex tubes, and this jet provides the propulsion that moves the fish forward. The wave motion of the fin is similar to that of other marine creatures, such as the undulation of the body of an eel; however, the wake vortex produced by the knifefish was found to be a reverse Kármán vortex. This type of vortex is also produced by some fish, such as trout, through the oscillations of their caudal fins. The speed at which the fish moved through the water had no correlation with the amplitude of its undulations; however, it was directly related to the frequency of the waves generated. Studies have shown that the natural angle between the body of the knifefish and its fin is essential for efficient forward motion, for if the anal fin were located directly underneath, then an upwards force would be generated along with forward thrust, which would require an additional downwards force in order to maintain neutral buoyancy. A combination of forward and reverse wave patterns, which meet towards the center of the anal fin, produces a heave force allowing for hovering or upwards movement. The ghost knifefish can vary the undulation of the waves, as well as the angle of attack of the fin, to achieve various directional changes. The pectoral fins of these fishes help to control roll and pitch.
By rolling, they can generate vertical thrust to quickly and efficiently ambush their prey. The forward movement is determined exclusively by the ribbon fin, and the contribution of the pectoral fins to forward movement is negligible. The body is kept relatively rigid, and there is very little motion of the center of mass during locomotion compared to the body size of the fish. The caudal fin is absent or, in the apteronotids, greatly reduced. The gill opening is restricted. The anal opening is under the head or the pectoral fins. Electroreception and electrogenesis These fish possess electric organs that allow them to produce electric fields, which are usually weak. In most gymnotiforms, the electric organs are derived from muscle cells. However, adult apteronotids are one exception, as theirs are derived from nerve cells (spinal electromotor neurons). In gymnotiforms, the electric organ discharge may be continuous or pulsed. If continuous, it is generated day and night throughout the entire life of the individual. Certain aspects of the electric signal are unique to each species, especially a combination of the pulse waveform, duration, amplitude, phase and frequency. The electric organs of most Gymnotiformes produce tiny discharges of just a few millivolts, far too weak to cause any harm to other fish. Instead, they are used to help navigate the environment, including locating the bottom-dwelling invertebrates that compose their diets. They may also be used to send signals between fish of the same species. In addition to this low-level field, the electric eel also has the capability to produce much more powerful discharges to stun prey. Taxonomy There are currently about 250 valid gymnotiform species in 34 genera and five families, with many additional species yet to be formally described. The actual number of species in the wild is unknown. Gymnotiformes is thought to be the sister group to the Siluriformes, from which they diverged in the Cretaceous period (about 120 million years ago). The families have traditionally been classified over suborders and superfamilies as below. Order Gymnotiformes Suborder Gymnotoidei Family Gymnotidae (banded knifefishes and electric eels) Suborder Sternopygoidei Superfamily Rhamphichthyoidea Family Rhamphichthyidae (sand knifefishes) Family Hypopomidae (bluntnose knifefishes) Superfamily Apteronotoidea Family Sternopygidae (glass and rat-tail knifefishes) Family Apteronotidae (ghost knifefishes) Phylogeny Most gymnotiforms are weakly electric, capable of active electrolocation but not of delivering shocks. The electric eels, genus Electrophorus, are strongly electric, and are not closely related to the Anguilliformes, the true eels. Their relationships were analysed by sequencing their mitochondrial genomes in 2019. This showed that, contrary to earlier ideas, the Apteronotidae and Sternopygidae are not sister taxa, and that the Gymnotidae are deeply nested among the other families. There are also other electric fishes in other families. Distribution and habitat Gymnotiform fishes inhabit freshwater rivers and streams throughout the humid Neotropics, ranging from southern Mexico to northern Argentina. They are nocturnal fishes.
The families Gymnotidae and Hypopomidae are most diverse (numbers of species) and abundant (numbers of individuals) in small non-floodplain streams and rivers, and in floodplain "floating meadows" of aquatic macrophytes (e.g., Eichhornia, the Amazonian water hyacinth). On the other hand, the families Apteronotidae and Sternopygidae are most diverse and abundant in large rivers. Species of Rhamphichthyidae are moderately diverse in all these habitat types. Evolution Gymnotiformes are among the more derived members of Ostariophysi, a lineage of primary freshwater fishes. The only known fossils are from the Miocene of Bolivia, about 7 million years ago (Mya). Gymnotiformes has no extant species in Africa. This may be because they did not spread into Africa before South America and Africa split, or it may be that they were out-competed by Mormyridae, which are similar in that they also use electrolocation. Approximately 150 Mya, the common ancestor of modern-day Gymnotiformes and Siluriformes is estimated to have convergently evolved ampullary receptors, allowing for passive electroreceptive capabilities. As this characteristic arose after the prior loss of electroreception in the subclass Neopterygii, having been present in the common ancestor of vertebrates, the ampullary receptors of Gymnotiformes are not homologous with those of other jawed non-teleost species, such as chondrichthyans. Gymnotiformes and Mormyridae have developed their electric organs and electrosensory systems (ESSs) through convergent evolution. As Arnegard et al. (2005) and Albert and Crampton (2005) show, their last common ancestor lived roughly 140 to 208 Mya, and at this time did not possess ESSs. Each species of Mormyrus (family: Mormyridae) and Gymnotus (family: Gymnotidae) has evolved a unique waveform that allows the individual fish to discriminate between species, sexes and individuals, and even between mates with better fitness levels. The differences include the direction of the initial phase of the wave (positive or negative, which correlates to the direction of the current through the electrocytes in the electric organ), the amplitude of the wave, the frequency of the wave, and the number of phases of the wave. One significant force driving this evolution is predation. The most common predators of Gymnotiformes include the closely related Siluriformes (catfish), as well as predators from within the order itself (E. electricus is one of the largest predators of Gymnotus). These predators sense electric fields, but only at low frequencies; thus certain species of Gymnotiformes, such as those in Gymnotus, have shifted the frequency of their signals so they can be effectively invisible. Sexual selection is another driving force, with an unusual influence, in that females exhibit preference for males with low-frequency signals (which are more easily detected by predators), but most males exhibit this frequency only intermittently. Females prefer males with low-frequency signals because they indicate a higher fitness of the male. Since these low-frequency signals are more conspicuous to predators, the emitting of such signals by males shows that they are capable of evading predation. Therefore, the production of low-frequency signals is under competing evolutionary forces: it is selected against due to the eavesdropping of electric predators, but is favored by sexual selection due to its attractiveness to females. Females also prefer males with longer pulses, which are also energetically expensive, and with greater tail lengths.
These traits indicate some ability to exploit resources, and thus indicate better lifetime reproductive success. Genetic drift is also a factor contributing to the diversity of electric signals observed in Gymnotiformes. Reduced gene flow due to geographical barriers has led to vast differences in signal morphology in different streams and drainages.
Biology and health sciences
Gymnotiformes
Animals
225256
https://en.wikipedia.org/wiki/Silicon%20carbide
Silicon carbide
Silicon carbide (SiC), also known as carborundum (), is a hard chemical compound containing silicon and carbon. A wide bandgap semiconductor, it occurs in nature as the extremely rare mineral moissanite, but has been mass-produced as a powder and crystal since 1893 for use as an abrasive. Grains of silicon carbide can be bonded together by sintering to form very hard ceramics that are widely used in applications requiring high endurance, such as car brakes, car clutches and ceramic plates in bulletproof vests. Large single crystals of silicon carbide can be grown by the Lely method and they can be cut into gems known as synthetic moissanite. Electronic applications of silicon carbide such as light-emitting diodes (LEDs) and detectors in early radios were first demonstrated around 1907. SiC is used in semiconductor electronics devices that operate at high temperatures or high voltages, or both. Natural occurrence Naturally occurring moissanite is found in only minute quantities in certain types of meteorite, corundum deposits, and kimberlite. Virtually all the silicon carbide sold in the world, including moissanite jewels, is synthetic. Natural moissanite was first found in 1893 as a small component of the Canyon Diablo meteorite in Arizona by Ferdinand Henri Moissan, after whom the material was named in 1905. Moissan's discovery of naturally occurring SiC was initially disputed because his sample may have been contaminated by silicon carbide saw blades that were already on the market at that time. While rare on Earth, silicon carbide is remarkably common in space. It is a common form of stardust found around carbon-rich stars, and examples of this stardust have been found in pristine condition in primitive (unaltered) meteorites. The silicon carbide found in space and in meteorites is almost exclusively the beta-polymorph. Analysis of SiC grains found in the Murchison meteorite, a carbonaceous chondrite meteorite, has revealed anomalous isotopic ratios of carbon and silicon, indicating that these grains originated outside the solar system. History Early experiments Non-systematic, less-recognized and often unverified syntheses of silicon carbide include: César-Mansuète Despretz's passing an electric current through a carbon rod embedded in sand (1849) Robert Sydney Marsden's dissolution of silica in molten silver in a graphite crucible (1881) Paul Schuetzenberger's heating of a mixture of silicon and silica in a graphite crucible (1881) Albert Colson's heating of silicon under a stream of ethylene (1882). Wide-scale production Wide-scale production is credited to Edward Goodrich Acheson in 1891. Acheson was attempting to prepare artificial diamonds when he heated a mixture of clay (aluminium silicate) and powdered coke (carbon) in an iron bowl. He called the blue crystals that formed carborundum, believing it to be a new compound of carbon and aluminium, similar to corundum. Moissan also synthesized SiC by several routes, including dissolution of carbon in molten silicon, melting a mixture of calcium carbide and silica, and by reducing silica with carbon in an electric furnace. Acheson patented the method for making silicon carbide powder on February 28, 1893. Acheson also developed the electric batch furnace by which SiC is still made today and formed the Carborundum Company to manufacture bulk SiC, initially for use as an abrasive. 
In 1900 the company settled with the Electric Smelting and Aluminum Company when a judge's decision gave "priority broadly" to its founders "for reducing ores and other substances by the incandescent method". The first use of SiC was as an abrasive. This was followed by electronic applications. In the beginning of the 20th century, silicon carbide was used as a detector in the first radios. In 1907 Henry Joseph Round produced the first LED by applying a voltage to a SiC crystal and observing yellow, green and orange emission at the cathode. The effect was later rediscovered by O.V. Losev in the Soviet Union, in 1923. Production Because natural moissanite is extremely scarce, most silicon carbide is synthetic. Silicon carbide is used as an abrasive, as well as a semiconductor and diamond simulant of gem quality. The simplest process to manufacture silicon carbide is to combine silica sand and carbon in an Acheson graphite electric resistance furnace at a high temperature, between and . Fine SiO2 particles in plant material (e.g. rice husks) can be converted to SiC by heating in the excess carbon from the organic material. The silica fume, which is a byproduct of producing silicon metal and ferrosilicon alloys, can also be converted to SiC by heating with graphite at . The material formed in the Acheson furnace varies in purity, according to its distance from the graphite resistor heat source. Colorless, pale yellow and green crystals have the highest purity and are found closest to the resistor. The color changes to blue and black at greater distance from the resistor, and these darker crystals are less pure. Nitrogen and aluminium are common impurities, and they affect the electrical conductivity of SiC. Pure silicon carbide can be made by the Lely process, in which SiC powder is sublimed into high-temperature species of silicon, carbon, silicon dicarbide (SiC2), and disilicon carbide (Si2C) in an argon gas ambient at 2,500 °C and redeposited into flake-like single crystals, sized up to 2 × 2 cm, at a slightly colder substrate. This process yields high-quality single crystals, mostly of 6H-SiC phase (because of high growth temperature). A modified Lely process involving induction heating in graphite crucibles yields even larger single crystals of 4 inches (10 cm) in diameter, having a section 81 times larger compared to the conventional Lely process. Cubic SiC is usually grown by the more expensive process of chemical vapor deposition (CVD) of silane, hydrogen and nitrogen. Homoepitaxial and heteroepitaxial SiC layers can be grown employing both gas and liquid phase approaches. To form complexly shaped SiC, preceramic polymers can be used as precursors which form the ceramic product through pyrolysis at temperatures in the range 1,000–1,100 °C. Precursor materials to obtain silicon carbide in such a manner include polycarbosilanes, poly(methylsilyne) and polysilazanes. Silicon carbide materials obtained through the pyrolysis of preceramic polymers are known as polymer derived ceramics or PDCs. Pyrolysis of preceramic polymers is most often conducted under an inert atmosphere at relatively low temperatures. Relative to the CVD process, the pyrolysis method is advantageous because the polymer can be formed into various shapes prior to thermalization into the ceramic. SiC can also be made into wafers by cutting a single crystal either using a diamond wire saw or by using a laser. SiC is a useful semiconductor used in power electronics. 
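The Acheson synthesis described above amounts to a carbothermal reduction of silica; the overall reaction, written out here for clarity (standard stoichiometry rather than text quoted from the source), is

\[
\mathrm{SiO_2 + 3\,C \;\longrightarrow\; SiC + 2\,CO}
\]

and the same overall stoichiometry applies to the rice-husk and silica-fume routes mentioned above.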
Structure and properties Silicon carbide exists in about 250 crystalline forms. Through inert atmospheric pyrolysis of preceramic polymers, silicon carbide in a glassy amorphous form is also produced. The polymorphism of SiC is characterized by a large family of similar crystalline structures called polytypes. They are variations of the same chemical compound that are identical in two dimensions and differ in the third. Thus, they can be viewed as layers stacked in a certain sequence. Alpha silicon carbide (α-SiC) is the most commonly encountered polymorph, and is formed at temperatures greater than 1,700 °C and has a hexagonal crystal structure (similar to Wurtzite). The beta modification (β-SiC), with a zinc blende crystal structure (similar to diamond), is formed at temperatures below 1,700 °C. Until recently, the beta form has had relatively few commercial uses, although there is now increasing interest in its use as a support for heterogeneous catalysts, owing to its higher surface area compared to the alpha form. Pure SiC is colorless. The brown to black color of the industrial product results from iron impurities. The rainbow-like luster of the crystals is due to the thin-film interference of a passivation layer of silicon dioxide that forms on the surface. The high sublimation temperature of SiC (approximately 2,700 °C) makes it useful for bearings and furnace parts. Silicon carbide does not melt but begins to sublimate near 2,700 °C like graphite, having an appreciable vapor pressure near that temp. It is also highly inert chemically, partly due to the formation of a thin passivated layer of SiO2. There is currently much interest in its use as a semiconductor material in electronics, where its high thermal conductivity, high electric field breakdown strength and high maximum current density make it more promising than silicon for high-powered devices. SiC has a very low coefficient of thermal expansion of about 2.3 × 10−6 K−1 near 300 K (for 4H and 6H SiC) and experiences no phase transitions in the temperature range 5 K to 340 K that would cause discontinuities in the thermal expansion coefficient. Electrical conductivity Silicon carbide is a semiconductor, which can be doped n-type by nitrogen or phosphorus and p-type by beryllium, boron, aluminium, or gallium. Metallic conductivity has been achieved by heavy doping with boron, aluminium or nitrogen. Superconductivity has been detected in 3C-SiC:Al, 3C-SiC:B and 6H-SiC:B at similar temperatures ~1.5 K. A crucial difference is however observed for the magnetic field behavior between aluminium and boron doping: 3C-SiC:Al is type-II. In contrast, 3C-SiC:B is type-I, as is 6H-SiC:B. Thus the superconducting properties seem to depend more on dopant (B vs. Al) than on polytype (3C- vs 6H-). In an attempt to explain this dependence, it was noted that B substitutes at C sites in SiC, but Al substitutes at Si sites. Therefore, Al and B "see" different environments, in both polytypes. Uses Abrasive and cutting tools In manufacturing, it is used for its hardness in abrasive machining processes such as grinding, honing, water-jet cutting and sandblasting. SiC provides a much sharper and harder alternative for sand blasting as compared to aluminium oxide. Particles of silicon carbide are laminated to paper to create sandpapers and the grip tape on skateboards. In the arts, silicon carbide is a popular abrasive in modern lapidary due to the durability and low cost of the material. 
In 1982 an exceptionally strong composite of aluminium oxide and silicon carbide whiskers was discovered. Development of this laboratory-produced composite to a commercial product took only three years. In 1985, the first commercial cutting tools made from this alumina and silicon carbide whisker-reinforced composite were introduced into the market. Structural material In the 1980s and 1990s, silicon carbide was studied in several research programs for high-temperature gas turbines in Europe, Japan and the United States. The components were intended to replace nickel superalloy turbine blades or nozzle vanes. However, none of these projects resulted in a production quantity, mainly because of its low impact resistance and its low fracture toughness. Like other hard ceramics (namely alumina and boron carbide), silicon carbide is used in composite armor (e.g. Chobham armor), and in ceramic plates in bulletproof vests. Dragon Skin, which was produced by Pinnacle Armor, used disks of silicon carbide. Improved fracture toughness in SiC armor can be facilitated through the phenomenon of abnormal grain growth or AGG. The growth of abnormally long silicon carbide grains may serve to impart a toughening effect through crack-wake bridging, similar to whisker reinforcement. Similar AGG-toughening effects have been reported in Silicon nitride (Si3N4). Silicon carbide is used as a support and shelving material in high temperature kilns such as for firing ceramics, glass fusing, or glass casting. SiC kiln shelves are considerably lighter and more durable than traditional alumina shelves. In December 2015, infusion of silicon carbide nano-particles in molten magnesium was mentioned as a way to produce a new strong and plastic alloy suitable for use in aeronautics, aerospace, automobile and micro-electronics. Automobile parts Silicon-infiltrated carbon-carbon composite is used for high performance "ceramic" brake discs, as they are able to withstand extreme temperatures. The silicon reacts with the graphite in the carbon-carbon composite to become carbon-fiber-reinforced silicon carbide (C/SiC). These brake discs are used on some road-going sports cars, supercars, as well as other performance cars including the Porsche Carrera GT, the Bugatti Veyron, the Chevrolet Corvette ZR1, the McLaren P1, Bentley, Ferrari, Lamborghini and some specific high-performance Audi cars. Silicon carbide is also used in a sintered form for diesel particulate filters. It is also used as an oil additive to reduce friction, emissions, and harmonics. Foundry crucibles SiC is used in crucibles for holding melting metal in small and large foundry applications. Electric systems The earliest electrical application of SiC was as a surge protection in lightning arresters in electric power systems. These devices must exhibit high resistance until the voltage across them reaches a certain threshold VT at which point their resistance must drop to a lower level and maintain this level until the applied voltage drops below VT flushing current into the ground. It was recognized early on that SiC had such a voltage-dependent resistance, and so columns of SiC pellets were connected between high-voltage power lines and the earth. When a lightning strike to the line raises the line voltage sufficiently, the SiC column will conduct, allowing strike current to pass harmlessly to the earth instead of along the power line. 
The SiC columns proved to conduct significantly at normal power-line operating voltages and thus had to be placed in series with a spark gap. This spark gap is ionized and rendered conductive when lightning raises the voltage of the power line conductor, thus effectively connecting the SiC column between the power conductor and the earth. Spark gaps used in lightning arresters are unreliable, either failing to strike an arc when needed or failing to turn off afterwards, in the latter case due to material failure or contamination by dust or salt. Usage of SiC columns was originally intended to eliminate the need for the spark gap in lightning arresters. Gapped SiC arresters were used for lightning-protection and sold under the GE and Westinghouse brand names, among others. The gapped SiC arrester has been largely displaced by no-gap varistors that use columns of zinc oxide pellets. Electronic circuit elements Silicon carbide was the first commercially important semiconductor material. A crystal radio "carborundum" (synthetic silicon carbide) detector diode was patented by Henry Harrison Chase Dunwoody in 1906. It found much early use in shipboard receivers. Power electronic devices In 1993, the silicon carbide was considered a semiconductor in both research and early mass production providing advantages for fast, high-temperature and/or high-voltage devices. The first devices available were Schottky diodes, followed by junction-gate FETs and MOSFETs for high-power switching. Bipolar transistors and thyristors were described. A major problem for SiC commercialization has been the elimination of defects: edge dislocations, screw dislocations (both hollow and closed core), triangular defects and basal plane dislocations. As a result, devices made of SiC crystals initially displayed poor reverse blocking performance, though researchers have been tentatively finding solutions to improve the breakdown performance. Apart from crystal quality, problems with the interface of SiC with silicon dioxide have hampered the development of SiC-based power MOSFETs and insulated-gate bipolar transistors. Although the mechanism is still unclear, nitriding has dramatically reduced the defects causing the interface problems. In 2008, the first commercial JFETs rated at 1,200 V were introduced to the market, followed in 2011 by the first commercial MOSFETs rated at 1200 V. JFETs are now available rated 650 V to 1,700 V with resistance as low as 25 mΩ. Beside SiC switches and SiC Schottky diodes (also Schottky barrier diode, SBD) in the popular TO-247 and TO-220 packages, companies started even earlier to implement the bare chips into their power electronic modules. SiC SBD diodes found wide market spread being used in PFC circuits and IGBT power modules. Conferences such as the International Conference on Integrated Power Electronics Systems (CIPS) report regularly about the technological progress of SiC power devices. Major challenges for fully unleashing the capabilities of SiC power devices are: Gate drive: SiC devices often require gate drive voltage levels that are different from their silicon counterparts and may be even unsymmetric, for example, +20 V and −5 V. Packaging: SiC chips may have a higher power density than silicon power devices and are able to handle higher temperatures exceeding the silicon limit of 150 °C. New die attach technologies such as sintering are required to efficiently get the heat out of the devices and ensure a reliable interconnection. 
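The 25 mΩ on-resistance quoted above translates directly into conduction loss. The snippet below is a minimal, illustrative estimate that assumes a hypothetical device at that resistance carrying a steady current; real converter losses also include switching and gate-drive terms and the temperature dependence of the resistance.

# Rough conduction-loss estimate for a SiC switch, using the 25 mOhm
# on-resistance figure quoted above for commercial JFETs. Illustrative only.

def conduction_loss(current_a: float, r_on_ohm: float = 0.025,
                    duty_cycle: float = 1.0) -> float:
    """Average conduction loss P = D * I^2 * R_on, in watts."""
    return duty_cycle * current_a ** 2 * r_on_ohm

if __name__ == "__main__":
    for i in (10, 20, 40):
        print(f"{i:>3} A -> {conduction_loss(i):.1f} W")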
Beginning with the Tesla Model 3, the inverters in the drive unit use 24 pairs of silicon carbide (SiC) MOSFET chips rated for 650 volts each. Silicon carbide in this instance gave Tesla a significant advantage over chips made of silicon in terms of size and weight. A number of automobile manufacturers are planning to incorporate silicon carbide into power electronic devices in their products. A significant increase in production of silicon carbide is projected, beginning with a large plant opened in 2022 by Wolfspeed in upstate New York. LEDs The phenomenon of electroluminescence was discovered in 1907 using silicon carbide, and some of the first commercial LEDs were based on this material. When General Electric of America introduced its SSL-1 Solid State Lamp in March 1967, using a tiny chip of semiconducting SiC to emit a point of yellow light, it was then the world's brightest LED. By 1970 it had been surpassed by brighter red LEDs, but yellow LEDs made from 3C-SiC continued to be manufactured in the Soviet Union in the 1970s and blue LEDs (6H-SiC) worldwide in the 1980s. Carbide LED production soon stopped when a different material, gallium nitride, showed 10–100 times brighter emission. This difference in efficiency is due to the unfavorable indirect bandgap of SiC, whereas GaN has a direct bandgap, which favors light emission. However, SiC is still an important LED component: it is a popular substrate for growing GaN devices, and it also serves as a heat spreader in high-power LEDs. Astronomy The low thermal expansion coefficient, high hardness, rigidity and thermal conductivity make silicon carbide a desirable mirror material for astronomical telescopes. The growth technology (chemical vapor deposition) has been scaled up to produce disks of polycrystalline silicon carbide up to in diameter, and several telescopes, such as the Herschel Space Telescope, are already equipped with SiC optics; the spacecraft subsystems of the Gaia space observatory are likewise mounted on a rigid silicon carbide frame, which provides a stable structure that will not expand or contract due to heat. Thin-filament pyrometry Silicon carbide fibers are used to measure gas temperatures in an optical technique called thin-filament pyrometry. It involves the placement of a thin filament in a hot gas stream. Radiative emissions from the filament can be correlated with filament temperature. Filaments are SiC fibers with a diameter of 15 micrometers, about one fifth that of a human hair. Because the fibers are so thin, they do little to disturb the flame, and their temperature remains close to that of the local gas. Temperatures of about 800–2,500 K can be measured. Heating elements
Physical sciences
Ceramic compounds
Chemistry
225574
https://en.wikipedia.org/wiki/Swordfish
Swordfish
The swordfish (Xiphias gladius), also known as the broadbill in some countries, are large, highly migratory predatory fish characterized by a long, flat, pointed bill. They are a popular sport fish of the billfish category, though elusive. Swordfish are elongated, round-bodied, and lose all teeth and scales by adulthood. These fish are found widely in tropical and temperate parts of the Atlantic, Pacific, and Indian Oceans, and can typically be found from near the surface to a depth of , and exceptionally up to depths of 2,234 m. They commonly reach in length, and the maximum reported is in length and in weight. They are the sole member of their family, Xiphiidae. Taxonomy and etymology The swordfish is named after its long pointed, flat bill, which resembles a sword. The species name, Xiphias gladius, derives from Greek (xiphias, "swordfish"), itself from (xiphos, "sword") and from Latin ("sword"). This makes it superficially similar to other billfish such as marlin, but upon examination, their physiology is quite different and they are members of different families. Several extinct genera are known, such as a large sized Xiphiorhynchus and Aglyptorhynchus. Unlike modern taxa these have equally long lower jaws. Description They commonly reach in length, and the maximum reported is in length and in weight. The International Game Fish Association's all-tackle angling record for a swordfish was a specimen taken off Chile in 1953. Females are larger than males, and Pacific swordfish reach a greater size than northwest Atlantic and Mediterranean swordfish. They reach maturity at 4–5 years of age and the maximum age is believed to be at least 9 years. The oldest swordfish found in a recent study were a 16-year-old female and 12-year-old male. Swordfish ages are derived, with difficulty, from annual rings on fin rays rather than otoliths, since their otoliths are small in size. Temperature regulation Swordfish are ectothermic animals. Along with some species of sharks, they have special organs next to their eyes called heater cells which function to heat their eyes and brains. Their eyes are heated to temperatures measured between 10 and 15 °C (18 and 27 °F) above the surrounding water temperature; this heating greatly improves their vision and, consequently, their predatory efficacy. The swordfish is one of 22 species of fish – including the marlin, tuna, and some sharks – known to have a heat-conservation mechanism. Behavior and ecology Movements and feeding The popular image of the swordfish skewering its prey with its nose is based on little evidence. In a typical environment, swordfish most likely use their noses to slash at prey and inflict weakening injuries. The hypothesis that they may use their noses as spears in a defensive capacity against sharks and other predators is still under review. Mainly, the swordfish relies on its great speed and agility in the water to catch its prey. It is no doubt among the fastest fish, but the basis for the frequently-quoted speed of is unreliable. Research on related marlin (Istiophorus platypterus) suggest a maximum value of is more likely. Swordfish are not schooling fish. They swim alone or in very loose aggregations, separated by as much as from a neighboring swordfish. They are frequently found basking at the surface, airing their first dorsal fin. Boaters report this to be a beautiful sight, as is the powerful jumping for which the species is known. 
This jumping, also called breaching, may be an effort to dislodge pests, such as remoras or lampreys. Swordfish prefer water temperatures between , but have the widest tolerance among billfish, and can be found from . This highly migratory species typically moves towards colder regions to feed during the summer. Swordfish feed daily, most often at night, when they rise to surface and near-surface waters in search of smaller fish. During the day, they commonly occur to depths of and have exceptionally been recorded as deep as . Adults feed on a wide range of pelagic fish, such as mackerel, barracudinas, silver hake, rockfish, herring, and lanternfishes, but they also take demersal fish, squid, and crustaceans. In the northwestern Atlantic, a survey based on the stomach content of 168 individuals found 82% had eaten squid and 53% had eaten fish, including gadids, scombrids, butterfish, bluefish, and sand lance. Large prey are typically slashed with the sword, while small are swallowed whole. Threats and parasites Almost 50 species of parasites have been documented in swordfish. In addition to remoras, lampreys, and cookiecutter sharks, this includes a wide range of invertebrates, such as tapeworms, roundworms, Myxozoans and copepods. A comparison of the parasites of swordfish in the Atlantic and in the Mediterranean indicated that some parasites, particularly Anisakis spp. larvae identified by genetic markers, could be used as biological tags and support the existence of a Mediterranean swordfish stock. Fully adult swordfish have few natural predators. Among marine mammals, killer whales sometimes prey on adult swordfish. It is believed that sperm whales may also prey on swordfish on rare occasions. The shortfin mako, an exceptionally fast species of shark, sometimes take on swordfish; dead or dying shortfin makos have been found with broken-off swords in their heads, revealing the danger of this type of prey. Juvenile swordfish are far more vulnerable to predation, and are eaten by a wide range of predatory fish. Intensive fishery may be driving swordfishes and sharks into harder competition for reduced amounts of prey and therefore pitting them to fight more. Human fishery is a major predator of swordfishes. The annual reported catch in 2019 of the North Atlantic swordfish amounted to a total of . Breeding In the North Pacific, batch spawning mainly occurs in water warmer than during the spring and summer, and year-round in the equatorial Pacific. In the North Atlantic, spawning is known from the Sargasso Sea, and in water warmer than and less than deep. Spawning occurs from November to February in the South Atlantic off southern Brazil. Spawning is year-round in the Caribbean Sea and other warm regions of the west Atlantic. Large females can carry more eggs than small females, and between 1 million and 29 million eggs have been recorded. The pelagic eggs measure in diameter and days after fertilization, the embryonic development occurs. The surface-living and unique-looking larvae are long at hatching. The bill is evident when the larvae reach in length. Fisheries Swordfish were harvested by a variety of methods at small scale (notably harpoon fishing) until the global expansion of long-line fishing. They have been fished widely since ancient times in places such as the Strait of Messina, where they are still fished with traditional wooden boats called feluccas and are part of the cuisine in that area. Swordfish are vigorous, powerful fighters. 
Although no unprovoked attacks on humans have been reported, swordfish can be very dangerous when harpooned. They have run their swords through the planking of small boats when hurt. In 2015, a Hawaiian fisherman was killed by a swordfish after attempting to spear the animal. Recreational fishing Recreational fishing has developed a subspecialty called swordfishing. Because of a ban on long-lining along many parts of seashore, swordfish populations are showing signs of recovery from the overfishing caused by long-lining along the coast. Various ways are used to fish for swordfish, but the most common method is deep-drop fishing, since swordfish spend most daylight hours very deep, in the deep scattering layer. The boat is allowed to drift to present a more natural bait. Swordfishing requires strong fishing rods and reels, as swordfish can become quite large, and it is not uncommon to use or more of weight to get the baits deep enough during the day, up to is common. Night fishing baits are usually fished much shallower, often less than . Standard baits are whole mackerel, herring, mullet, bonito, or squid; one can also use live bait. Imitation squids and other imitation fish lures can also be used, and specialized lures made specifically for swordfishing often have battery-powered or glow lights. Even baits are typically presented using glow sticks or specialized deepwater-proof battery operated lights. As food Swordfish are classified as oily fish. Many sources, including the United States Food and Drug Administration, warn about potential toxicity from high levels of methylmercury in swordfish. The FDA recommends that young children, pregnant women, and women planning to become pregnant not eat swordfish. The flesh of some swordfish can acquire an orange tint, reportedly from their diet of shrimp or other prey. Such fish are sold as "pumpkin swordfish", and command a premium over their whitish counterparts. Swordfish is a particularly popular fish for cooking. Since swordfish are large, meat is usually sold as steaks, which are often grilled. Swordfish meat is relatively firm, and can be cooked in ways more fragile types of fish cannot (such as over a grill on skewers). The color of the flesh varies by diet, with fish caught on the East Coast of North America often being rosier. Kashrut A dispute exists as to whether swordfish should be considered a kosher fish according to the laws of kashrut. Standard Orthodox opinion is that swordfish is not kosher, while Conservative Judaism does consider swordfish kosher. All kosher fish must have both fins and scales. The Talmud and the Tosefta are believed by some to present swordfish ("achsaftias") as an example of a kosher fish without scales because swordfish are born with scales they later shed once attaining a length of about 1 meter. The 17th-century Turkish Sephardi halakhic authority Rabbi Chaim ben Yisrael Benvenisti wrote that "It is a widespread custom among all Jews to eat the fish with the sword, known in vernacular as fishei espada, even though it does not have any scales. Because it is said that when it comes out of the water, due to its anger, it shakes and throws off its scales." A 1933 list of kosher fish published by the Agudas HaRabbonim includes swordfish. The following year, Rabbi Yosef Kanowitz published the same list of kosher fish with swordfish still included. Swordfish was widely considered kosher by halakhic authorities until the 1950s. 
Orthodox opinion began to shift in 1951, when Rabbi Moshe Tendler examined swordfish and decided it was not kosher due to the lack of scales. Tendler's opinion provoked strong debate among halakhic authorities during the 1960s. Among Mediterranean Jews there was a longstanding minhag of considering swordfish kosher. Swordfish was and possibly still is consumed by Jews in Italy, Turkey, Gibraltar, Morocco, Tunisia, and England. Due to Tendler's opinion, swordfish are generally not considered kosher by Orthodox Jews in the United States and Israel. Conservation status In 1998, the U.S. Natural Resources Defense Council and SeaWeb hired Fenton Communications to conduct an advertising campaign to promote their assertion that the swordfish population was in danger due to its popularity as a restaurant entree. The resulting "Give Swordfish a Break" promotion was wildly successful, with 750 prominent U.S. chefs agreeing to remove North Atlantic swordfish from their menus, and also persuaded many supermarkets and consumers across the country. The advertising campaign was repeated by the national media in hundreds of print and broadcast stories, as well as extensive regional coverage. It earned the Silver Anvil award from the Public Relations Society of America, as well as Time magazine's award for the top five environmental stories of 1998. Subsequently, the U.S. National Marine Fisheries Service proposed a swordfish protection plan that incorporated the campaign's policy suggestions. Then-US President Bill Clinton called for a ban on the sale and import of swordfish and in a landmark decision by the federal government, of the Atlantic Ocean were placed off-limits to fishing as recommended by the sponsors. In the North Atlantic, the swordfish stock is fully rebuilt, with biomass estimates currently 5% above the target level. No robust stock assessments for swordfish in the northwestern Pacific or South Atlantic have been made, and data concerning stock status in these regions are lacking. These stocks are considered unknown and a moderate conservation concern. The southwestern Pacific stock is a moderate concern due to model uncertainty, increasing catches, and declining catch per unit effort. Overfishing is likely occurring in the Indian Ocean, and fishing mortality exceeds the maximum recommended level in the Mediterranean, thus these stocks are considered of high conservation concern. In 2010, Greenpeace International added the swordfish to its seafood red list. Extinct Xiphiorhynchoides Relationship with humans Notable incidents In 2007, a fisherman died after being attacked by a swordfish which pierced his eye and its bill penetrated into the man's skull. In 2024, Giulia Manfrini, an Italian surfer died in a rare incident after being struck by a swordfish while surfing off the coast of West Sumatra, Indonesia. In culture The swordfish (Xiphias) has been used by astronomers as another name for the constellation of Dorado. The word swordfish is used as a password in the 1932 Marx Brothers film Horse Feathers. It has since appeared as a password in many films, television series, books, and videogames.
Biology and health sciences
Acanthomorpha
null
225606
https://en.wikipedia.org/wiki/Thomisidae
Thomisidae
The Thomisidae are a family of spiders, including about 170 genera and over 2,100 species. The common name crab spider is often linked to species in this family, but is also applied loosely to many other families of spiders. Many members of this family are also known as flower spiders or flower crab spiders. Description Members of this family of spiders do not spin webs, and are ambush predators. The two front legs are usually longer and more robust than the rest of the legs. The back two legs are smaller, and are usually covered in a series of strong spines. They have dull colorations such as brown, grey, or very bright green, pink, white or yellow. They gain their name from the shape of their body, and they usually move sideways or backwards. These spiders are quite easy to identify and can very rarely be confused with Sparassidae family, though the crab spiders are usually smaller. Etymology Spiders in this family are called "crab spiders" due to their resemblance to crabs, the way such spiders hold their two front pairs of legs, and their ability to scuttle sideways or backwards. The Thomisidae are the family most generally referred to as "crab spiders", though some members of the Sparassidae are called "giant crab spiders", the Selenopidae are called "wall crab spiders", and various members of the Sicariidae are sometimes called "six-eyed crab spiders". Some distantly related orb-weaver spider species such as Gasteracantha cancriformis also are sometimes called "crab spiders". Behavior Thomisidae do not build webs to trap prey, though all of them produce silk for drop lines and sundry reproductive purposes; some are wandering hunters and the most widely known are ambush predators. Some species sit on or beside flowers or fruit, where they grab visiting insects. Individuals of some species, such as Misumena vatia and Thomisus spectabilis, are able to change color over a period of some days, to match the flower on which they are sitting. Some species frequent promising positions among leaves or bark, where they await prey, and some of them sit in the open, where they are startlingly good mimics of bird droppings. However, these members of the family Thomisidae are not to be confused with the spiders that generally are called bird-dropping spiders, not all of which are close relatives of crab spiders. Other species of crab spiders with flattened bodies either hunt in the crevices of tree trunks or under loose bark, or shelter under such crevices by day, and come out at night to hunt. Members of the genus Xysticus hunt in the leaf litter on the ground. In each case, crab spiders use their powerful front legs to grab and hold on to prey while paralysing it with a venomous bite. The spider family Aphantochilidae was incorporated into the Thomisidae in the late 1980s. Aphantochilus species mimic Cephalotes ants, on which they prey. The spiders of Thomisidae are not known to be harmful to humans. However, spiders of a distantly related genus, Sicarius, which are sometimes referred to as "crab spiders", or "six-eyed crab spiders", are close cousins to the recluse spiders, and are highly venomous, though human bites are rare. Sexual dimorphism Several different types of sexual dimorphism have been recorded in crab spiders. Some species exhibit color dimorphisms; however, the most apparent dimorphism is the difference in size between males and females. In some species, this is relatively small; females of Misumena vatia are roughly twice the size of their male counterparts. 
In other cases, the difference is extreme; on average, female Thomisus onustus are more than 60 times as massive as the males. Several hypotheses have been proposed to explain the evolution of sexual size dimorphism in the Thomisidae and related taxa. The most widely acknowledged hypothesis for female size is the fecundity hypothesis: selection favors larger females because they can produce more eggs and healthier offspring. Because males do not carry and lay eggs, an increase in size does not confer the same fitness advantage. Alternatively, sexual size dimorphism may be a result of male dwarfism. The gravity hypothesis states that a smaller size allows the male to travel with greater ease, providing him with more opportunities to find mates. Females are comparatively stationary, and their larger size allows them to capture larger prey, such as butterflies and bees, granting them the additional nutrients necessary for egg production. Other hypotheses propose that sexual size dimorphism evolved by chance, and that no selective advantage exists for larger females or smaller males. Taxonomy This large family contains around 171 genera.
Biology and health sciences
Spiders
Animals
225617
https://en.wikipedia.org/wiki/Third%20law%20of%20thermodynamics
Third law of thermodynamics
The third law of thermodynamics states that the entropy of a closed system at thermodynamic equilibrium approaches a constant value when its temperature approaches absolute zero. This constant value cannot depend on any other parameters characterizing the system, such as pressure or applied magnetic field. At absolute zero (zero kelvins) the system must be in a state with the minimum possible energy. Entropy is related to the number of accessible microstates, and there is typically one unique state (called the ground state) with minimum energy. In such a case, the entropy at absolute zero will be exactly zero. If the system does not have a well-defined order (if its order is glassy, for example), then there may remain some finite entropy as the system is brought to very low temperatures, either because the system becomes locked into a configuration with non-minimal energy or because the minimum energy state is non-unique. The constant value is called the residual entropy of the system. Formulations The third law has many formulations, some more general than others, some equivalent, and some neither more general nor equivalent. The Planck statement applies only to perfect crystalline substances:As temperature falls to zero, the entropy of any pure crystalline substance tends to a universal constant. That is, , where is a universal constant that applies for all possible crystals, of all possible sizes, in all possible external constraints. So it can be taken as zero, giving . The Nernst statement concerns thermodynamic processes at a fixed, low temperature, for condensed systems, which are liquids and solids: The entropy change associated with any condensed system undergoing a reversible isothermal process approaches zero as the temperature at which it is performed approaches 0 K. That is, . Or equivalently, At absolute zero, the entropy change becomes independent of the process path. That is, where represents a change in the state variable . The unattainability principle of Nernst: It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its absolute-zero value in a finite number of operations. This principle implies that cooling a system to absolute zero would require an infinite number of steps or an infinite amount of time. The statement in adiabatic accessibility: It is impossible to start from a state of positive temperature, and adiabatically reach a state with zero temperature. The Einstein statement: The entropy of any substance approaches a finite value as the temperature approaches absolute zero. That is, where is the entropy, the zero-point entropy is finite-valued, is the temperature, and represents other relevant state variables. This implies that the heat capacity of a substance must (uniformly) vanish at absolute zero, as otherwise the entropy would diverge. There is also a formulation as the impossibility of "perpetual motion machines of the third kind". History The third law was developed by chemist Walther Nernst during the years 1906 to 1912 and is therefore often referred to as the Nernst heat theorem, or sometimes the Nernst-Simon heat theorem to include the contribution of Nernst's doctoral student Francis Simon. The third law of thermodynamics states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state, so that its entropy is determined only by the degeneracy of the ground state. 
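The displayed formulas accompanying the statements above did not survive extraction. The following forms are reconstructions of the standard expressions, matching the surrounding definitions rather than quoting the source:

\[
\text{Planck:}\qquad \lim_{T \to 0} S = S_0 = 0
\]
\[
\text{Nernst:}\qquad \lim_{T \to 0} \Delta S = 0 \quad \text{(reversible isothermal processes in condensed systems)}
\]
\[
\text{Einstein:}\qquad \lim_{T \to 0} S(T, X) = S_0 \quad \text{with } S_0 \text{ finite, for any other state variables } X
\]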
In 1912 Nernst stated the law thus: "It is impossible for any procedure to lead to the isotherm in a finite number of steps." An alternative version of the third law of thermodynamics was enunciated by Gilbert N. Lewis and Merle Randall in 1923: If the entropy of each element in some (perfect) crystalline state be taken as zero at the absolute zero of temperature, every substance has a finite positive entropy; but at the absolute zero of temperature the entropy may become zero, and does so become in the case of perfect crystalline substances. This version states not only will reach zero at 0 K, but itself will also reach zero as long as the crystal has a ground state with only one configuration. Some crystals form defects which cause a residual entropy. This residual entropy disappears when the kinetic barriers to transitioning to one ground state are overcome. With the development of statistical mechanics, the third law of thermodynamics (like the other laws) changed from a fundamental law (justified by experiments) to a derived law (derived from even more basic laws). The basic law from which it is primarily derived is the statistical-mechanics definition of entropy for a large system: where is entropy, is the Boltzmann constant, and is the number of microstates consistent with the macroscopic configuration. The counting of states is from the reference state of absolute zero, which corresponds to the entropy of . Explanation In simple terms, the third law states that the entropy of a perfect crystal of a pure substance approaches zero as the temperature approaches zero. The alignment of a perfect crystal leaves no ambiguity as to the location and orientation of each part of the crystal. As the energy of the crystal is reduced, the vibrations of the individual atoms are reduced to nothing, and the crystal becomes the same everywhere. The third law provides an absolute reference point for the determination of entropy at any other temperature. The entropy of a closed system, determined relative to this zero point, is then the absolute entropy of that system. Mathematically, the absolute entropy of any system at zero temperature is the natural log of the number of ground states times the Boltzmann constant . The entropy of a perfect crystal lattice as defined by Nernst's theorem is zero provided that its ground state is unique, because . If the system is composed of one-billion atoms that are all alike and lie within the matrix of a perfect crystal, the number of combinations of one billion identical things taken one billion at a time is . Hence: The difference is zero; hence the initial entropy can be any selected value so long as all other such calculations include that as the initial entropy. As a result, the initial entropy value of zero is selected is used for convenience. Example: Entropy change of a crystal lattice heated by an incoming photon Suppose a system consisting of a crystal lattice with volume of identical atoms at , and an incoming photon of wavelength and energy . Initially, there is only one accessible microstate: Let us assume the crystal lattice absorbs the incoming photon. There is a unique atom in the lattice that interacts and absorbs this photon. So after absorption, there are possible microstates accessible by the system, each corresponding to one excited atom, while the other atoms remain at ground state. The entropy, energy, and temperature of the closed system rises and can be calculated. 
The entropy change is From the second law of thermodynamics: Hence Calculating entropy change: We assume and . The energy change of the system as a result of absorbing the single photon whose energy is : The temperature of the closed system rises by This can be interpreted as the average temperature of the system over the range from . A single atom is assumed to absorb the photon, but the temperature and entropy change characterizes the entire system. Systems with non-zero entropy at absolute zero An example of a system that does not have a unique ground state is one whose net spin is a half-integer, for which time-reversal symmetry gives two degenerate ground states. For such systems, the entropy at zero temperature is at least (which is negligible on a macroscopic scale). Some crystalline systems exhibit geometrical frustration, where the structure of the crystal lattice prevents the emergence of a unique ground state. Ground-state helium (unless under pressure) remains liquid. Glasses and solid solutions retain significant entropy at 0 K, because they are large collections of nearly degenerate states, in which they become trapped out of equilibrium. Another example of a solid with many nearly-degenerate ground states, trapped out of equilibrium, is ice Ih, which has "proton disorder". For the entropy at absolute zero to be zero, the magnetic moments of a perfectly ordered crystal must themselves be perfectly ordered; from an entropic perspective, this can be considered to be part of the definition of a "perfect crystal". Only ferromagnetic, antiferromagnetic, and diamagnetic materials can satisfy this condition. However, ferromagnetic materials do not, in fact, have zero entropy at zero temperature, because the spins of the unpaired electrons are all aligned and this gives a ground-state spin degeneracy. Materials that remain paramagnetic at 0 K, by contrast, may have many nearly degenerate ground states (for example, in a spin glass), or may retain dynamic disorder (a quantum spin liquid). Consequences Absolute zero The third law is equivalent to the statement that It is impossible by any procedure, no matter how idealized, to reduce the temperature of any closed system to zero temperature in a finite number of finite operations. The reason that cannot be reached according to the third law is explained as follows: Suppose that the temperature of a substance can be reduced in an isentropic process by changing the parameter X from X2 to X1. One can think of a multistage nuclear demagnetization setup where a magnetic field is switched on and off in a controlled way. If there were an entropy difference at absolute zero, could be reached in a finite number of steps. However, at there is no entropy difference, so an infinite number of steps would be needed. The process is illustrated in Fig. 1. Example: magnetic refrigeration To be concrete, we imagine that we are refrigerating magnetic material. Suppose we have a large bulk of paramagnetic salt and an adjustable external magnetic field in the vertical direction. Let the parameter represent the external magnetic field. At the same temperature, if the external magnetic field is strong, then the internal atoms in the salt would strongly align with the field, so the disorder (entropy) would decrease. Therefore, in Fig. 1, the curve for is the curve for lower magnetic field, and the curve for is the curve for higher magnetic field. The refrigeration process repeats the following two steps: Isothermal process. 
Here, we have a chunk of salt in magnetic field and temperature . We divide the chunk into two parts: a large part playing the role of "environment", and a small part playing the role of "system". We slowly increase the magnetic field on the system to , but keep the magnetic field constant on the environment. The atoms in the system would lose directional degrees of freedom (DOF), and the energy in the directional DOF would be squeezed out into the vibrational DOF. This makes it slightly hotter, and then it would lose thermal energy to the environment, to remain in the same temperature . (The environment is now discarded.) Isentropic cooling. Here, the system is wrapped in adiathermal covering, and the external magnetic field is slowly lowered to . This frees up the direction DOF, absorbing some energy from the vibrational DOF. The effect is that the system has the same entropy, but reaches a lower temperature . At every two-step of the process, the mass of the system decreases, as we discard more and more salt as the "environment". However, if the equations of state for this salt is as shown in Fig. 1 (left), then we can start with a large but finite amount of salt, and end up with a small piece of salt that has . Specific heat A non-quantitative description of his third law that Nernst gave at the very beginning was simply that the specific heat of a material can always be made zero by cooling it down far enough. A modern, quantitative analysis follows. Suppose that the heat capacity of a sample in the low temperature region has the form of a power law asymptotically as , and we wish to find which values of are compatible with the third law. We have By the discussion of third law above, this integral must be bounded as , which is only possible if . So the heat capacity must go to zero at absolute zero if it has the form of a power law. The same argument shows that it cannot be bounded below by a positive constant, even if we drop the power-law assumption. On the other hand, the molar specific heat at constant volume of a monatomic classical ideal gas, such as helium at room temperature, is given by with the molar ideal gas constant. But clearly a constant heat capacity does not satisfy Eq. (). That is, a gas with a constant heat capacity all the way to absolute zero violates the third law of thermodynamics. We can verify this more fundamentally by substituting in Eq. (), which yields In the limit this expression diverges, again contradicting the third law of thermodynamics. The conflict is resolved as follows: At a certain temperature the quantum nature of matter starts to dominate the behavior. Fermi particles follow Fermi–Dirac statistics and Bose particles follow Bose–Einstein statistics. In both cases the heat capacity at low temperatures is no longer temperature independent, even for ideal gases. For Fermi gases with the Fermi temperature TF given by Here is the Avogadro constant, the molar volume, and the molar mass. For Bose gases with given by The specific heats given by Eq. () and () both satisfy Eq. (). Indeed, they are power laws with and respectively. Even within a purely classical setting, the density of a classical ideal gas at fixed particle number becomes arbitrarily high as goes to zero, so the interparticle spacing goes to zero. The assumption of non-interacting particles presumably breaks down when they are sufficiently close together, so the value of gets modified away from its ideal constant value. Vapor pressure The only liquids near absolute zero are 3He and 4He. 
Their heat of evaporation has a limiting value given by with and constant. If we consider a container partly filled with liquid and partly gas, the entropy of the liquid–gas mixture is where is the entropy of the liquid and is the gas fraction. Clearly the entropy change during the liquid–gas transition ( from 0 to 1) diverges in the limit of T→0. This violates Eq. (). Nature solves this paradox as follows: at temperatures below about 100 mK, the vapor pressure is so low that the gas density is lower than the best vacuum in the universe. In other words, below 100 mK there is simply no gas above the liquid. Miscibility If liquid helium with mixed 3He and 4He were cooled to absolute zero, the liquid must have zero entropy. This either means they are ordered perfectly as a mixed liquid, which is impossible for a liquid, or that they fully separate out into two layers of pure liquid. This is precisely what happens. For example, if a solution with 3 3He to 2 4He atoms were cooled, it would start the separation at 0.9 K, purifying more and more, until at absolute zero, when the upper layer becomes purely 3He, and the lower layer becomes purely 4He. Surface tension Let be the surface tension of liquid, then the entropy per area is . So if a liquid can exist down to absolute zero, then since its entropy is constant no matter its shape at absolute zero, its entropy per area must converge to zero. That is, its surface tension would become constant at low temperatures. In particular, the surface tension of 3He is well-approximated by for some parameters . Latent heat of melting The melting curves of 3He and 4He both extend down to absolute zero at finite pressure. At the melting pressure, liquid and solid are in equilibrium. The third law demands that the entropies of the solid and liquid are equal at . As a result, the latent heat of melting is zero, and the slope of the melting curve extrapolates to zero as a result of the Clausius–Clapeyron equation. Thermal expansion coefficient The thermal expansion coefficient is defined as With the Maxwell relation and Eq. () with it is shown that So the thermal expansion coefficient of all materials must go to zero at zero kelvin.
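Two of the arguments above can be made explicit. These are standard derivations written out for clarity, with the power-law prefactor and exponent denoted c and n (to avoid clashing with the expansion coefficient); they are not text recovered from the source. If the heat capacity behaves as \(C(T) \simeq c\,T^{\,n}\) near absolute zero, then

\[
S(T) - S(0) = \int_0^T \frac{C(T')}{T'}\,\mathrm{d}T'
  = \int_0^T c\,T'^{\,n-1}\,\mathrm{d}T' = \frac{c}{n}\,T^{\,n},
\]

which converges at the lower limit only if \(n > 0\); hence a power-law heat capacity must vanish at absolute zero, while a constant heat capacity (the classical ideal gas) makes the integral diverge logarithmically. For the thermal expansion coefficient,

\[
\alpha_V = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p
  = -\frac{1}{V}\left(\frac{\partial S}{\partial p}\right)_T \;\longrightarrow\; 0
  \qquad (T \to 0),
\]

since the Nernst statement makes the entropy independent of pressure at absolute zero, and the Maxwell relation \((\partial V/\partial T)_p = -(\partial S/\partial p)_T\) then forces the expansion coefficient to vanish.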
Physical sciences
Thermodynamics
Physics
19218592
https://en.wikipedia.org/wiki/Sodium%20metaborate
Sodium metaborate
Sodium metaborate is a chemical compound of sodium, boron, and oxygen with formula . However, the metaborate ion is trimeric in the anhydrous solid, therefore a more correct formula is or . The formula can be written also as to highlight the relation to the main oxides of sodium and boron. The name is also applied to several hydrates whose formulas can be written for various values of n. The anhydrous and hydrates are colorless crystalline solids. The anhydrous form is hygroscopic. Hydrates and solubility The following hydrates crystallize from solutions of the proper composition in various temperature ranges: tetrahydrate from −6 to 53.6 °C dihydrate from 53.6 °C to 105 °C hemihydrate from 105 °C to the boiling point. Early reports of a monohydrate have not been confirmed. Structure Anhydrous Solid anhydrous sodium metaborate has the hexagonal crystal system with space group . It actually contains a six-membered rings with the formula , consisting of alternating boron and oxygen atoms with one negatively charged extra oxygen atom attached to each boron atom. All nine atoms lie on a plane. The six oxygen atoms are evenly divided into two distinct structural sites, with different B–O bond lengths: B–O(external) 128.0 pm and B–O(bridge) 143.3 pm. The density is 2.348 ± 0.005 g/cm3. The approximate dimensions of the hexagonal cell are a = 1275 pm, c = 733 pm. However, the true unit cell is rhombohedral and has dimensions: ar= 776 pm, α = 110.6°, Z = 6 (5.98) molecules KB0 Dihydrate The dihydrate crystallizes in the triclinic crystal system, but is nearly monoclinic, with both α and γ very close to 90°. The cell parameters are a = 678 pm , b = 1058A pm, c = 588 pm, α = 91.5°, β = 22.5°, γ = 89°, Z = 4, density 1.905 g/cm3. The refractive indices at 25°C and wavelength 589.3 nm are α = 1.439, β = 1.473, γ = 1.484. The dispersion is strong, greater at red than at violet. The transition temperature between the dihydrate and the hemihydrate is 54 °C. However, the crystalline dihydrate will remain metastable until 106 °C to 110 °C, and change slowly above that temperature. Vapor Infrared spectroscopy of the vapor from anhydrous sodium metaborate, heated to between 900 °C and 1400 °C, shows mostly isolated clusters with formula , and some dimers thereof. Electron diffraction studies by Akishin and Spiridonov showed a structure with linear anion and angle of 90-110°. The atomic distances are : 120 pm, : 136 pm,: 214 pm Preparation Sodium metaborate is prepared by the fusion of sodium carbonate and boron oxide or borax . Another way to create the compound is by the fusion of borax with sodium hydroxide at 700 °C: The boiling point of sodium metaborate (1434 °C) is lower than that of boron oxide (1860 °C) and borax (1575 °C) In fact, while the metaborate boils without change of composition, borax gives off a vapor of sodium metaborate with a small excess of sodium oxide . The anhydrous salt can also be prepared from the tetraborate by heating to 270 °C in vacuum. Although not performed industrially, hydrolysis of sodium borohydride with a suitable catalyst gives sodium metaborate and hydrogen gas: (ΔH = −217 kJ/mol) Reactions With water When sodium metaborate is dissolved in water, the anion combines with two water molecules to form the tetrahydroxyborate anion . 
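The preparation fusions, the catalysed borohydride hydrolysis, and the hydration of the metaborate anion described above correspond to the following balanced equations, written out here for clarity; they are textbook stoichiometry rather than text recovered from the source:

\[
\mathrm{Na_2CO_3 + B_2O_3 \;\longrightarrow\; 2\,NaBO_2 + CO_2}
\]
\[
\mathrm{Na_2B_4O_7 + 2\,NaOH \;\longrightarrow\; 4\,NaBO_2 + H_2O}
\]
\[
\mathrm{NaBH_4 + 2\,H_2O \;\longrightarrow\; NaBO_2 + 4\,H_2} \qquad (\Delta H \approx -217\ \mathrm{kJ/mol})
\]
\[
\mathrm{BO_2^- + 2\,H_2O \;\longrightarrow\; B(OH)_4^-}
\]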
Electrochemical conversion to borax Electrolysis of a concentrated (20%) aqueous sodium metaborate solution with an anion exchange membrane and an inert anode (such as gold, palladium, or boron-doped diamond) converts the metaborate anion to the tetraborate anion [B4O7]2−, and the sodium salt of the latter (borax) precipitates as a white powder. Reduction to sodium borohydride Sodium metaborate can be converted to sodium borohydride by several methods, including the reaction with various reducing agents at high temperatures and pressure, or with magnesium hydride by ball milling at room temperature, followed by extraction of the borohydride NaBH4 with isopropylamine. Another method is the electrolytic reduction of a concentrated sodium metaborate solution, namely BO2− + 6 H2O + 8 e− → BH4− + 8 OH−. However, this method is not efficient, since it competes with the reduction of water to hydrogen and hydroxide: 2 H2O + 2 e− → H2 + 2 OH−. Nanofiltration membranes can effectively separate the borohydride from the metaborate. Reaction with alcohols Anhydrous sodium metaborate refluxed with methanol yields the corresponding sodium tetramethoxyborate (melting point: 253–258 °C, CAS number: 18024-69-6): NaBO2 + 4 CH3OH → NaB(OCH3)4 + 2 H2O. The analogous reaction with ethanol yields sodium tetraethoxyborate. Uses Current and proposed applications of sodium metaborate include: Manufacture of borosilicate glasses, which are resistant to uneven or fast heating because of their small coefficient of thermal expansion. Composition of herbicides. Raising the pH of injected fluids for oil extraction.
Physical sciences
Boric oxyanions
Chemistry
19230414
https://en.wikipedia.org/wiki/Taraxacum
Taraxacum
Taraxacum is a large genus of flowering plants in the family Asteraceae, which consists of species commonly known as dandelions. The scientific and hobby study of the genus is known as taraxacology. The genus is native to Eurasia but the two most common species worldwide, T. officinale (the common dandelion) and T. erythrospermum (the red-seeded dandelion), were introduced from Europe into North America, where they are an invasive species. Dandelions thrive in temperate regions and can be found in yards, gardens, sides of roads, among crops, and in many other habitats. Both species are edible in their entirety and have a long history of consumption. The common name dandelion (from the French dent de lion, "lion's tooth", referring to the jagged leaves) is also given to specific members of the genus. Like other members of the family Asteraceae, they have very small flowers collected together into a composite flower head. Each single flower in a head is called a floret. In part due to their abundance, along with being a generalist species, dandelions are one of the most vital early spring nectar sources for a wide host of pollinators. Many Taraxacum species produce seeds asexually by apomixis, where the seeds are produced without pollination, resulting in offspring that are genetically identical to the parent plant. In general, the leaves are simple, lobed-to-pinnatisect, and form a basal rosette above the central taproot. The flower heads are yellow to orange coloured, and are open in the daytime but closed at night. The heads are borne singly on a hollow stem (scape) that is usually leafless and rises above the leaves. Stems and leaves exude a white, milky latex when broken. A rosette may produce several flowering stems at a time. The flower heads consist entirely of ray florets. The flower heads mature into spherical seed heads sometimes called blowballs or clocks (in both British and American English) containing many single-seeded fruits called cypselae, similar to achenes. Each cypsela is attached to a pappus of fine hair-like material which enables anemochorous (wind-aided) dispersal over long distances. The flower head is surrounded by bracts (sometimes mistakenly called sepals) in two series. The inner bracts are erect until the seeds mature, then flex downward to allow the seeds to disperse. The outer bracts are often reflexed downward, but remain appressed in plants of the sections Palustria and Spectabilia. Between the pappus and the achene is a stalk called a beak, which elongates as the fruit matures. The beak breaks off from the achene quite easily, separating the seed from the parachute. Description The species of Taraxacum are tap-rooted, perennial, herbaceous plants, native to temperate areas of the Northern Hemisphere. The genus contains many species, which usually (or in the case of triploids, obligately) reproduce by apomixis, resulting in many local populations and endemism. In the British Isles alone, 234 microspecies (i.e. morphologically distinct clonal populations) are recognised in nine loosely defined sections, of which 40 are "probably endemic". A number of species of Taraxacum are seed-dispersed ruderals that rapidly colonize disturbed soil, especially the common dandelion (T. officinale), which has been introduced over much of the temperate world. After flowering is finished, the dandelion flower head dries out for a day or two.
The dried petals and stamens drop off, the bracts reflex (curve backwards), and the parachute ball opens into a full sphere. When development is complete, the mature seeds are attached to white, fluffy "parachutes" which easily detach from the seedhead and glide by wind, dispersing. The seeds are able to cover large distances when dispersed due to the unique morphology of the pappus which works to create a unique type of vortex ring that stays attached to the seed rather than being sent downstream. In addition to the creation of this vortex ring, the pappus can adjust its morphology depending on the moisture in the air. This allows the plume of seeds to close up and reduce the chance to separate from the stem, waiting for optimal conditions that will maximize dispersal and germination. Similar plants Many similar plants in the family Asteraceae with yellow flowers are sometimes known as false dandelions. Dandelion flowers are very similar to those of cat's ears (Hypochaeris). Both plants carry similar flowers, which form into windborne seeds. However, dandelion flowers are borne singly on unbranched, hairless and leafless, hollow stems, while cat's ear flowering stems are branched, solid, and carry bracts. Both plants have a basal rosette of leaves and a central taproot. However, the leaves of dandelions are smooth or glabrous, whereas those of cat's ears are coarsely hairy. Early-flowering dandelions may be distinguished from coltsfoot (Tussilago farfara) by their basal rosette of leaves, their lack of disc florets, and the absence of scales on the flowering stem. Other plants with superficially similar flowers include hawkweeds (Hieracium) and hawksbeards (Crepis). These are readily distinguished by branched flowering stems, which are usually hairy and bear leaves. Classification The genus is taxonomically complex due to the presence of apomixis: any morphologically distinct clonal population would deserve its own microspecies. Phylogenetic approaches are also complicated by the accelerated mutation in apomixic lines and repeated ancient hybridization events in the genus. As of 1970, the group is divided into about 34 macrospecies or sections, and about 2000 microspecies; some botanists take a much narrower view and only accept a total of about 60 (macro)species. By 2015, the number has been revised to include 60 sections and about 2800 microspecies. 30 of these sections are known to reproduce sexually. About 235 apomictic and polyploid microspecies have been recorded in Great Britain and Ireland alone. Botanists specialising in the genus Taraxacum are sometimes called taraxacologists, for example Gunnar Marklund, Johannes Leendert van Soest or A.J. Richards. Selected species Taraxacum albidum, the white-flowered Japanese dandelion, a hybrid between T. coreanum and T. japonicum Taraxacum algarbiense Taraxacum aphrogenes, the Paphos dandelion Taraxacum arcticum Taraxacum balticum Taraxacum brachyceras Taraxacum brevicorniculatum, frequently misidentified as T. kok-saghyz and a poor rubber producer Taraxacum californicum, the California dandelion, an endangered species Taraxacum carneocoloratum Taraxacum centrasiaticum, the Xinjiang dandelion Taraxacum ceratophorum, the horned dandelion, considered by some sources to be a North American subspecies of T. officinale (T. officinale subsp. ceratophorum) Taraxacum coreanum Taraxacum desertorum Taraxacum erythrospermum, the red-seeded dandelion, often considered a variety of T. laevigatum (i.e., T. laevigatum var. 
erythrospermum) Taraxacum farinosum, the Turkish dandelion Taraxacum holmboei, the Troödos dandelion Taraxacum hybernum Taraxacum japonicum, the Japanese dandelion, no ring of smallish, downward-turned leaves under the flower head Taraxacum kok-saghyz, the Kazakh dandelion, which produces rubber Taraxacum laevigatum, the rock dandelion, achenes reddish brown and leaves deeply cut throughout the length, inner bracts' tips are hooded Taraxacum lissocarpum Taraxacum minimum Taraxacum mirabile Taraxacum officinale (syn. T. officinale subsp. vulgare), the common dandelion, found in many forms Taraxacum pankhurstianum, the St. Kilda dandelion Taraxacum platycarpum, the Korean dandelion Taraxacum pseudoroseum Taraxacum rubifolium, a near-extinct species. Taraxacum suecicum Cultivars 'Amélioré à Coeur Plein' yields an abundant crop without taking up much ground, and tends to blanch itself naturally, due to its clumping growth habit. 'Broad-leaved' - The leaves are thick and tender and easily blanched. In rich soils, they can be up to 60 mm (2') wide. Plants do not go to seed as quickly as French types. 'Vert de Montmagny' is a large-leaved, vigorous grower, which matures early. History Dandelions are thought to have evolved about 30 million years ago in Eurasia. Fossil seeds of Taraxacum tanaiticum have been recorded from the Pliocene of southern Belarus. Dandelions have been used by humans for food and as an herb for much of recorded history. They were well known to ancient Egyptians, Greeks and Romans, and are recorded to have been used in traditional Chinese medicine for over a thousand years. The plant was used as food and medicine by Native Americans. Dandelions were probably brought to North America on the Mayflower for their supposed medicinal benefits. Purposeful cultivation of dandelions seems to have begun in the United States in the early mid-19th century. Etymology The Latin name Taraxacum originates in medieval Arabic writings on pharmacy. The scientist Al-Razi around 900 CE wrote "the tarashaquq is like chicory". The scientist and philosopher Ibn Sīnā around 1000 CE wrote a book chapter on Taraxacum. Gerard of Cremona, in translating Arabic to Latin around 1170, spelled it tarasacon. Common names The English name, dandelion, is a corruption of the French dent de lion meaning "lion's tooth", referring to the coarsely toothed leaves. The plant is also known as blowball, cankerwort, doon-head-clock, witch's gowan, milk witch, lion's-tooth, yellow-gowan, Irish daisy, monks-head, priest's-crown, and puff-ball; other common names include faceclock, pee-a-bed, wet-a-bed, swine's snout, white endive, and wild endive. The English folk name "piss-a-bed" (and indeed the equivalent French ) refers to the strong diuretic effect of the plant's roots. In various northeastern Italian dialects, the plant is known as pisacan ("dog pisses"), because they are found at the side of pavements. In Swedish, it is called maskros (worm rose) after the nymphs of small insects (thrips larvae) usually present in the flowers. Nutrition Raw dandelion greens contain high amounts of vitamins A, C, and K, and are moderate sources of calcium, potassium, iron, and manganese. Raw dandelion greens are 86% water, 9% carbohydrates, 3% protein, and 1% fat. A 100 gram (oz) reference amount supplies 45 Calories. 
Phytochemicals The raw flowers contain diverse phytochemicals, including polyphenols, such as flavonoids apigenin, isoquercitrin (a quercetin-like compound), and caffeic acid, as well as terpenoids, triterpenes, and sesquiterpenes. The roots contain a substantial amount of the prebiotic fiber inulin. Dandelion greens contain lutein. Taraxalisin, a serine proteinase, is found in the latex of dandelion roots. Maximal activity of the proteinase in the roots is attained in April, at the beginning of plant development after the winter period. Each dandelion seed has a mass(weight) of 500 micrograms or 0.0005g (1/125 of a grain). Properties Edibility The entire plant, including the leaves, stems, flowers, and roots, is edible and rich in nutrients such as calcium, iron, and vitamins A and K. Dandelions grow wild on every continent except Antarctica. Most commercial varieties are native to Eurasia. It's a perennial plant with a taproot, so the greens can be repeatedly harvested if the root remains in the ground. Dandelions contain bitter but water-soluble sesquiterpenes. The bitterness increases later in the season, after the flowers bloom, and as the leaves mature. To make dandelion greens more palatable, they can be blanched, picked young, served with other strong flavors, or some combination. In the Southern United States, they are traditionally served with a hot bacon dressing (similar to spinach salad). In Italy, the leaves are sauteed, added to soups, or added raw to salads. Dandelion greens have been a part of traditional Kashmiri cuisine, Lebanese cuisine, Spanish cuisine, Italian cuisine, Albanian cuisine, Slovenian, Sephardic Jewish, Chinese, Greek cuisine () and Korean cuisines. In Crete, the leaves of a variety called 'Mari' (), 'Mariaki' (), or 'Koproradiko' () are eaten by locals, either raw or boiled, in salads. T. megalorhizon, a species endemic to Crete, is eaten in the same way; it is found only at high altitudes () and in fallow sites, and is called () or (). The flower petals, along with other ingredients, usually including citrus, are used to make dandelion wine. Its ground, roasted roots can be used as a caffeine-free coffee alternative. Dandelion was also commonly used to make the traditional British soft drink dandelion and burdock, and is one of the ingredients of root beer. Dye The yellow flowers can be dried and ground into a yellow-pigmented powder and used as a dye. Allergies Dandelions may cause allergic reactions for sensitive individuals when consumed or coming into contact with skin, but the risk is mild. Latex containing sesquiterpene lactones are present in high concentrations in the main root and stems of the common dandelion. However, only a few researchers have mentioned the possible risk of mild allergic contact dermatitis for people with lactone hypersensitivity. Herbalism Dandelion has been used in traditional medicine in Europe, North America, and China. Food for wildlife Dandelions do not depend on wildlife for distribution or pollination; however much of wildlife benefits from the abundance of the plant. Rabbits, wild turkeys, white-tailed deer, eastern chipmunks, bobwhite quail, and many species of bird will consume the seeds and foliage. Additionally, many insects will collect nectar from the flower, especially in early spring when there are very few other flowers in bloom. Seeds Taraxacum seeds are an important food source for certain birds (linnets, Linaria spp.). 
Nectar Szabo studied nectar secretion in a dandelion patch over two years ( in 1981 and 1982). He measured average nectar volume at 7.4 μl/flower in 1981 and 3.7 μl/flower in 1982. The flowers tended to open in the morning and close in the afternoon with the concentrations significantly higher on the second day. Leaves Dandelions are used as food plants by the larvae of some species of Lepidoptera (butterflies and moths). Invasive species Dandelions can cause significant economic damage as an invasive species and infestation of other crops worldwide; in some jurisdictions, the species T. officinale is listed as a noxious weed. It can also be considered invasive in protected areas such as national parks. For example, Denali National Park and Preserve in Alaska lists Taraxacum officinale as the most common invasive species in the park and hosts an annual "Dandelion Demolition" event where volunteers are trained to remove the plant from the park's roadsides. Benefits to gardeners With a wide range of uses, the dandelion is cultivated in small gardens to massive farms. It is kept as a companion plant; its taproot brings up nutrients for shallow-rooting plants. It is also known to attract pollinating insects and release ethylene gas, which helps fruit to ripen. Cultural importance It has been a Western tradition for someone to blow out a dandelion seedhead and think of a wish they want to come true. Five dandelion flowers are the emblem of White Sulphur Springs, West Virginia. The citizens celebrate spring with an annual Dandelion Festival. The dandelion is the official flower of the University of Rochester in New York State, and "Dandelion Yellow" is one of the school's official colors. "The Dandelion Yellow" is an official University of Rochester song. Inspiration for engineering The ability of dandelion seeds to travel as far as a kilometer in dry, windy and warm conditions, has been an inspiration for designing light-weight passive drones. In 2018, researchers discovered that dandelion seeds have a separated vortex ring. This work provided evidence that dandelion seeds have fluid behavior around fluid-immersed bodies that may help understand locomotion, weight reduction and particle retention in biological and man-made structures. In 2022, researchers at the University of Washington demonstrated battery-free wireless sensors and computers that mimic dandelion seeds and can float in the wind and disperse across a large area. As a source of natural rubber Dandelions secrete latex when the tissues are cut or broken, yet in the wild type, the latex content is low and varies greatly. Taraxacum kok-saghyz, the Russian dandelion, is a species that produced industrially useful amounts during WW2. Using modern cultivation methods and optimization techniques, scientists in the Fraunhofer Institute for Molecular Biology and Applied Ecology (IME) in Germany developed a cultivar of the Russian dandelion that is suitable for current commercial production of natural rubber. The latex produced exhibits the same quality as the natural rubber from rubber trees. In collaboration with Continental AG, IME is building a pilot facility. , the first prototype test tires made with blends from dandelion-rubber are scheduled for testing on public roads over the next few years. In December 2017, Linglong Group Co. Ltd., a Chinese company, invested $450 million into making commercially viable rubber from dandelions.
Biology and health sciences
Asterales
null
2449741
https://en.wikipedia.org/wiki/Lightning%20strike
Lightning strike
A lightning strike or lightning bolt is a lightning event in which an electric discharge takes place between the atmosphere and the ground. Most originate in a cumulonimbus cloud and terminate on the ground, called cloud-to-ground (CG) lightning. A less common type of strike, ground-to-cloud (GC) lightning, is upward-propagating lightning initiated from a tall grounded object and reaching into the clouds. About 25% of all lightning events worldwide are strikes between the atmosphere and earth-bound objects. Most are intracloud (IC) lightning and cloud-to-cloud (CC), where discharges only occur high in the atmosphere. Lightning strikes the average commercial aircraft at least once a year, but modern engineering and design means this is rarely a problem. The movement of aircraft through clouds can even cause lightning strikes. A single lightning event is a "flash", which is a complex, multistage process, some parts of which are not fully understood. Most CG flashes only "strike" one physical location, referred to as a "termination". The primary conducting channel, the bright, coursing light that may be seen and is called a "strike", is only about one inch (ca. 2.5 cm) in diameter, but because of its extreme brilliance, it often looks much larger to the human eye and in photographs. Lightning discharges are typically miles long, but certain types of horizontal discharges can be tens of miles in length. The entire flash lasts only a fraction of a second. Epidemiology Lightning strikes can injure humans in several different ways: Direct Direct strike – the person is part of a flash channel. Enormous quantities of energy pass through the body very quickly, resulting in internal burns, organ damage, explosions of flesh and bone, and nervous system damage. Depending on the flash strength and access to medical services, it may be instantaneously fatal or cause permanent injury and impairment. Contact injury – an object (generally a conductor) that a person is touching is electrified by a strike. Side splash – branches of currents "jumping" from the primary flash channel electrify the person. Blast injuries – being thrown and suffering blunt-force trauma from the shock wave (if very close) and possible hearing damage from the thunder. Indirect Ground current or "step potential" – Earth surface charges race towards the flash channel during discharge. Because the ground has high impedance, the current "chooses" a better conductor, often a person's legs, passing through the body. The near-instantaneous rate of discharge causes a potential (difference) over distance, which may amount to several thousand volts per linear foot. This phenomenon (also responsible for reports of mass reindeer deaths due to lightning storms) leads to more injuries and deaths than all direct strike effects combined. EMPs – the discharge process produces an electromagnetic pulse (EMP), which may damage an artificial pacemaker, or otherwise affect normal biological processes. Visual artefacts may be induced in the retinas of people located within 200 m (650 ft) of a severe lightning storm. Secondary or resultant: Explosions, fires, accidents. Warning signs of an impending strike nearby can include a crackling sound, sensations of static electricity in the hair or skin, the pungent smell of ozone, or the appearance of a blue haze around persons or objects (St. Elmo's fire). 
People caught in such extreme situations – without having been able to flee to a safer, fully enclosed space – are advised to assume the "lightning position", which involves "sitting or crouching with knees and feet close together to create only one point of contact with the ground" (with the feet off the ground if sitting; if a standing position is needed, the feet must be touching). Lightning strikes can produce severe injuries in humans, and are lethal in between 10 and 30% of cases, with up to 80% of survivors sustaining long-term injuries. These severe injuries are not usually caused by thermal burns, since the current is too brief to greatly heat up tissues; instead, nerves and muscles may be directly damaged by the high voltage producing holes in their cell membranes, a process called electroporation. In a direct strike, the electrical currents in the flash channel pass directly through the victim. The relatively high voltage drop around poorer electrical conductors (such as a human being), causes the surrounding air to ionize and break down, and the external flashover diverts most of the main discharge current so that it passes "around" the body, reducing injury. Metallic objects in contact with the skin may "concentrate" the lightning's energy, given it is a better natural conductor and the preferred pathway, resulting in more serious injuries, such as burns from molten or evaporating metal. At least two cases have been reported where a strike victim wearing an iPod suffered more serious injuries as a result. During a flash, though, the current flowing through the channel and around the body can generate large electromagnetic fields and EMPs, which may induce electrical transients (surges) within the nervous system or pacemaker of the heart, upsetting normal operations. This effect might explain cases where cardiac arrest or seizures followed a lightning strike that produced no external injuries. It may also point to the victim not being directly struck at all, but just being very close to the strike termination. Another effect of lightning on bystanders is to their hearing. The resulting shock wave of thunder can damage the ears. Also, electrical interference to telephones or headphones may result in damaging acoustic noise. According to the CDC there are about 6,000 lightning strikes per minute, or more than 8 million strikes every day. As of 2008 there were about 240,000 "lightning strikes incidents" around the world each year. According to National Geographic in 2009, about 2,000 people were killed annually worldwide by lightning. If all eight billion humans have an equal chance of being killed over a 70-year lifespan, this gives a lifetime probability of about 1 in 60,000. However, due to increased awareness and improved lightning conductors and protection, the number of annual lightning deaths has been decreasing steadily year by year. According to the National Oceanic and Atmospheric Administration in 2012, over the twenty years to 2012 the United States averaged 51 annual lightning strike fatalities, making it the second-most frequent cause of weather-related death after floods. In the US, as of 1999, between 9 and 10% of those struck died, with an annual average of 25 deaths in the 2010s decade (16 in 2017). In the United States in the period 2009 to 2018 an average of 27 lightning fatalities occurred per year. In the United States an average of 23 people died from lightning per year from 2012 to 2021. Some people suffer from lifelong brain injuries. 
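The 1-in-60,000 lifetime figure quoted above follows directly from the numbers given in the text; a minimal back-of-the-envelope check (the variable names and the assumption of equal risk are my own):

annual_deaths = 2000            # worldwide lightning deaths per year (figure quoted above)
population = 8_000_000_000      # people assumed to share the risk equally
lifespan_years = 70
lifetime_probability = annual_deaths * lifespan_years / population
print(f"about 1 in {round(1 / lifetime_probability):,}")   # about 1 in 57,000, i.e. roughly 1 in 60,000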
As of 2005, in Kisii, Kenya, some 30 people die each year from lightning strikes. Kisii's high rate of lightning fatalities occurs because of the frequency of thunderstorms and because many of the area's structures have metal roofs. These statistics do not reflect the difference between direct strikes, where the victim was part of the lightning pathway, indirect effects of being close to the termination point, such as ground currents, and resultant, where the casualty arose from subsequent events, such as fires or explosions. Even the most knowledgeable first responders may not recognize a lightning-related injury, let alone particulars, which a medical examiner, police investigator, or on the rare occasion a trained lightning expert may have difficulty identifying to record accurately. As of 2013, direct-strike casualties could be much higher than reported numbers. In 2015 it was reported that between five and ten deaths from lightning occur in Australia every year with over 100 injuries occurring. In 2018, it was reported that "a direct strike accounts for only 3 to 5 per cent of all injuries and death, while ground currents, which spread out over the ground after lightning strikes, account for up to 50 per cent... ...Where the lightning strikes the ground, the ground becomes highly electrified and if you're within that area of ground electrification..." you can receive an electrical shock from the lightning. As of 2021, it has been reported that "30-60 people are struck by lightning each year in Britain, and on average, 3 (5-10%) of these strikes are fatal." In 2021, it was estimated that "...one in four people struck by lightning were sheltering under trees." Effect on nature Impact on vegetation Trees are frequent conductors of lightning to the ground. Since sap is a relatively poor conductor, its electrical resistance causes it to be heated explosively into steam, which blows off the bark outside the lightning's path. In following seasons, trees overgrow the damaged area and may cover it completely, leaving only a vertical scar. If the damage is severe, the tree may not be able to recover, and decay sets in, eventually killing the tree. In sparsely populated areas such as the Russian Far East and Siberia, lightning strikes are one of the major causes of forest fires. The smoke and mist expelled by a very large forest fire can cause secondary lightning strikes, starting additional fires many kilometers downwind. Shattering of rocks When water in fractured rock is rapidly heated by a lightning strike, the resulting steam explosion can cause rock disintegration and shift boulders. It may be a significant factor in erosion of tropical and subtropical mountains that have never been glaciated. Evidence of lightning strikes includes erratic magnetic fields. Electrical and structural damage Telephones, modems, computers, and other electronic devices can be damaged by lightning, as harmful overcurrent can reach them through the phone jack, Ethernet cable, or electricity outlet. Close strikes can also generate EMPs, especially during "positive" lightning discharges. Lightning currents have a very fast rise time, on the order of 40 kA per microsecond. Hence, although lightning is a form of direct current, conductors of such currents exhibit marked skin effect as with an alternating current, causing most of the currents to flow through the outer surface of the conductor. 
In addition to electrical wiring damage, the other types of possible damage to consider include structural, fire, and property damage. Prevention and mitigations The field of lightning-protection systems is an enormous industry worldwide due to the impacts lightning can have on the constructs and activities of humankind. Lightning, as varied in properties measured across orders of magnitude as it is, can cause direct effects or have secondary impacts; lead to the complete destruction of a facility or process or simply cause the failure of a remote electronic sensor; it can result in outdoor activities being halted for safety concerns to employees as a thunderstorm nears an area and until it has sufficiently passed; it can ignite volatile commodities stored in large quantities or interfere with the normal operation of a piece of equipment at critical periods of time. Most lightning-protection devices and systems protect physical structures on the earth, aircraft in flight being the notable exception. While some attention has been paid to attempting to control lightning in the atmosphere, all attempts proved extremely limited in success. Chaff and silver iodide crystal concepts were devised to deal directly with the cloud cells, and were dispensed directly into the clouds from an overflying aircraft. The chaff was devised to deal with the electrical manifestations of the storm from within, while the silver iodide salting technique was devised to deal with the mechanical forces of the storm. Protection systems Hundreds of devices, including lightning rods and charge transfer systems, are used to mitigate lightning damage and influence the path of a lightning flash. A lightning rod (or lightning protector) is a metal strip or rod connected to earth through conductors and a grounding system, used to provide a preferred pathway to ground if lightning terminates on a structure. The class of these products is often called a "finial" or "air terminal". A lightning rod or "Franklin rod" in honor of its famous inventor, Benjamin Franklin, is simply a metal rod, and without being connected to the lightning protection system, as was sometimes the case in the past, will provide no added protection to a structure. Other names include "lightning conductor", "arrester", and "discharger"; however, over the years these names have been incorporated into other products or industries with a stake in lightning protection. Lightning arrester, for example, often refers to fused links that explode when a strike occurs to a high-voltage overhead power line to protect the more expensive transformers down the line by opening the circuit. In reality, it was an early form of a heavy duty surge-protection device. Modern arresters, constructed with metal oxides, are capable of safely shunting abnormally high voltage surges to ground while preventing normal system voltages from being shorted to ground. In 1962, the USAF placed protective lightning strike-diversion tower arrays at all of the Italian and Turkish Jupiter MRBM nuclear armed missiles sites after two strikes partially arming the missiles. Monitoring and warning systems The exact location of a lightning strike and when it will occur are still impossible to predict. However, products and systems have been designed of varying complexities to alert people as the probability of a strike increases above a set level determined by a risk assessment for the location's conditions and circumstances. 
One significant improvement has been in the area of detection of flashes through both ground- and satellite-based observation devices. The strikes and atmospheric flashes are not predicted, but the level of detail recorded by these technologies has vastly improved in the past 20 years. Although commonly associated with thunderstorms at close range, lightning strikes can occur on a day that seems devoid of clouds. This occurrence is known as "a bolt from the blue [sky]"; lightning can strike up to 10 miles from a cloud. Lightning interferes with amplitude modulation (AM) radio signals much more than frequency modulation (FM) signals, providing an easy way to gauge local lightning strike intensity. To do so, one should tune a standard AM medium wave receiver to a frequency with no transmitting stations, and listen for crackles among the static. Stronger or nearby lightning strikes will also cause crackling even if the receiver is tuned to a station. As lower frequencies propagate further along the ground than higher ones, the lower medium wave (MW) band frequencies (in the 500–600 kHz range) can detect lightning strikes at longer distances; if the longwave band (153–279 kHz) is available, using it can increase this range even further. Lightning-detection systems have been developed and may be deployed in locations where lightning strikes present special risks, such as public parks. Such systems are designed to detect the conditions which are believed to favor lightning strikes and provide a warning to those in the vicinity to allow them to take appropriate cover. Personal safety The U.S. National Lightning Safety Institute advises American citizens to have a plan for their safety when a thunderstorm occurs and to commence it as soon as the first lightning is seen or thunder heard. This is important, as lightning can strike without rain actually falling and a storm being overhead, contrary to popular belief. If thunder can be heard at all, then a risk of lightning exists. The National Lightning Safety Institute also recommends using the F-B (flash to boom) method to gauge distance to a lightning strike. The flash of a lightning strike and the resulting thunder occur at roughly the same time, but light travels at 300,000 km/s, almost a million times the speed of sound. Sound travels at the slower speed of about 340 m/s (depending on the temperature), so the flash of lightning is seen before the thunder is heard. The distance between the lightning strike and the viewer can therefore be estimated by counting the seconds between the lightning flash and the thunder, then dividing by three to obtain the distance in kilometers, or by five for miles (a short worked example is given below). Immediate precautions against lightning should be taken if the F-B time is 25 seconds or less, that is, if the lightning is closer than 8 km or 5 miles. A 2014 report suggested that whether a person was standing up, squatting, or lying down when outside during a thunderstorm does not matter, because lightning can travel along the ground; this report suggested being inside a solid structure or vehicle was safest. The riskiest activities include fishing, boating, camping, and golf. A person injured by lightning does not carry an electrical charge, and can be safely handled to apply first aid before emergency services arrive. Lightning can affect the brainstem, which controls breathing. Several studies conducted in South Asia and Africa suggest that the dangers of lightning are not taken sufficiently seriously there.
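The flash-to-boom rule described above is simply the travel time of sound; a minimal sketch of the same calculation (the function and constant names are my own, and 340 m/s is the approximate speed of sound used in the text):

SPEED_OF_SOUND = 340.0   # m/s, approximate; varies with temperature

def flash_to_boom_distance(delay_seconds: float) -> tuple[float, float]:
    """Estimate the distance to a lightning strike from the flash-to-thunder delay.

    Light arrives essentially instantly, so the delay is dominated by the travel
    time of the thunder. Returns the distance in (kilometres, miles).
    """
    metres = delay_seconds * SPEED_OF_SOUND
    return metres / 1000.0, metres / 1609.344

# A 25 s delay gives roughly 8.5 km (about 5.3 miles), matching the divide-by-three (km)
# / divide-by-five (miles) rule of thumb and the 8 km / 5 mile threshold quoted above.
print(flash_to_boom_distance(25))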
A research team from the University of Colombo found that even in neighborhoods that had experienced deaths from lightning, no precautions were taken against future storms. An expert forum convened in 2007 to address how to raise awareness of lightning and improve lightning-protection standards, and expressed concern that many countries had no official standards for the installation of lightning rods. Safety measures Do not be next to a high object such as a tree or near metal objects like poles and fences. Do not take shelter in car ports, open garages, covered patios, picnic shelters, beach pavilions, tents, sheds, greenhouses, golf shelters and baseball dugouts. Take shelter in a building or a vehicle. It was reported that "The steel frame of a hard topped vehicle can protect you from lightning..." and to "avoid using electronic equipment inside the car and avoid touching anything metal." If inside a building, avoid electrical equipment and plumbing including taking a shower. Risk remains for up to 30 minutes after the last observed lightning or thunder. It has been reported that "If you are on water, get to the shore and off wide, open beaches as quickly as possible as water will transmit strikes from further away. Studies have shown that proximity to water is a common factor in lightning strikes." It has been reported that "If you do not have anywhere to go, then you should make for the lowest possible ground like a valley or ravine." Do not huddle up "...with other people in a group — spread out from your friends as much as you can." If your hair stands on end, lightning is about to strike you or in your vicinity. Get indoors as fast as possible. If not, drop to your knees and bend forward but don't lie flat on the ground. You may also feel a tingling sensation of static electricity on your skin. Notable incidents All events associated or suspected of causing damage are called "lightning incidents" due to four important factors. Forensic evidence of a lightning termination, in the best investigated examples, are minuscule (a pit in metal smaller than a pen point) or inconclusive (dark coloration). The object struck may explode or subsequent fires destroy all of the little evidence that may have been available immediately after the strike itself. The flash channel and discharge itself are not the only causes of injury, ignition, or damages, i.e., ground currents or explosions of flammables. Human sensory acuity is not as fine as that of the milliseconds in duration of a lightning flash, and people's ability to observe this event is subject to the brain's inability to comprehend it. Lightning-detection systems are coming online, both satellite and land-based, but their accuracy is still measured in the hundreds to thousands of feet, rarely allowing them to pinpoint the exact location of the termination. As such it is often inconclusive, albeit highly probable a lightning flash was involved, hence categorizing it as a "lightning incident" covers all bases. Earth-bound 1660s: In 1660, lightning ignited the gunpowder magazine at Osaka Castle, Japan; the resultant explosion set the castle on fire. In 1665, lightning again terminated on the main tower of the castle, igniting a fire, which subsequently burned it to its foundation. 1769: A particularly deadly lightning incident occurred in Brescia, Italy. Lightning struck the Church of St. Nazaire, igniting the 90 tonnes of gunpowder in its vaults; the resulting explosion killed up to 3,000 people and destroyed a sixth of the city. 
1901: 11 killed and one was paralyzed below the hips by a strike in Chicago. 1902: A lightning strike damaged the upper section of the Eiffel Tower, requiring the reconstruction of its top. 1916 June 9: At least one man named only as "Johnson" is killed following a lightning strike at his home near San Antonio, Texas. 1970 July 12: The central mast of the Orlunda radio transmitter in central Sweden collapsed after a lightning strike destroyed its foundation insulator. 1976 July 18: During a celebration, a sudden lightning strike killed 9 people at Alpe di Catenaia on the Apennine Mountains in Italy. 1980 June 30: A lightning incident killed 11 pupils in Biego primary school in Kenya in present-day Nyamira County. Another 50 pupils were injured, while others were left traumatized. 1994 November 2: A lightning incident led to the explosion of fuel tanks in Durunka, Egypt, causing 469 fatalities. 2005 October 31: Sixty-eight dairy cows died on a farm at Fernbrook on the Waterfall Way near Dorrigo, New South Wales, after being involved in a lightning incident. Three others were temporarily paralyzed for several hours, later making a full recovery. The cows were sheltering near a tree when it was struck by lightning. Soil resistivity is generally higher than that of animal tissue. When immense amounts of energy are released into the soil, just the few meters up an animal's leg, through its body and down other legs can present a markedly reduced resistance to electrical current and a proportionally higher amount will flow through the animal than the soil on which it is standing. This phenomenon, called earth potential rise, can cause significant and damaging electrical shock, enough to kill large animals. 2007 July: A lightning incident killed up to 30 people when it struck Ushari Dara, a remote mountain village in northwestern Pakistan. 2011 June 8: A lightning strike sent 77 Air Force cadets to the hospital when it struck in the middle of a training camp at Camp Shelby, Mississippi. 2013 February: Nine South African children were hospitalized after a lightning incident occurred on a cricket field at their school, injuring five children on the pitch and four girls who were walking home. 2016 May–June: Rock am Ring festival near Frankfurt was cancelled after at least 80 people were injured due to lightning in the area. Additionally, 11 children in France and three adults in Germany were injured and one man killed in southern Poland around the same dates. 2016 August 26: A herd of wild reindeer was struck on the Hardangervidda in central Norway, killing 323. Norwegian Environment Agency spokesman Kjartan Knutsen said it had never heard of such a death toll before. He said he did not know if multiple strikes occurred, but that they all died in "one moment". 2017: The first live recording of a lightning strike on a cardiac rhythm strip occurred in a teenaged male who had an implanted loop recorder as a cardiac monitor for neurocardiogenic syncope. 2018: A lightning strike killed at least 16 people and injured dozens more at a Seventh-Day Adventist church in Rwanda. 2021: A lightning strike killed a 9-year-old boy in a field in Blackpool, England. 2021: In April, at least 76 people across India were killed by lightning strike on a single weekend; 23 people died on the watchtower of Amer Fort, a popular tourist spot in Rajasthan, and 42 were killed in Uttar Pradesh with the highest toll of 14 happening in the city of Allahabad. 
Lastly, 11 were killed in Madhya Pradesh, two of them while sheltering under trees as they tended sheep. 2021: On August 4, 17 people were killed by a single lightning strike in Shibganj Upazila of Chapainawabganj district in Bangladesh; 16 people died on the spot and the other died of a heart attack upon seeing the victims. 2022: On August 4, 3 people were killed and another person was injured after lightning struck a tree in Lafayette Square, Washington, D.C. 2022: On August 5, lightning struck a fuel tank at an oil storage facility in Matanzas, causing a fire and a series of explosions that resulted in at least one death and up to 125 injuries. In addition, 17 firefighters were reported missing. 2022: On August 18, a woman was killed and two people hospitalized after lightning struck a tree in Winter Springs, Florida. 2023: On September 18, a Mexican tourist and a local hammock salesman were struck and killed by a lightning bolt on a beach in Michoacán, Mexico. In-flight Airplanes are commonly struck by lightning without damage, with the typical commercial aircraft hit at least once a year. Sometimes, though, the effects of a strike are serious. 1963 December 8: Pan Am Flight 214 crashed outside Elkton, Maryland, during a severe electrical storm, with a loss of all 81 passengers and crew. The Boeing 707-121, registered as N709PA, was on the final leg of a San Juan–Baltimore–Philadelphia flight. 1969 November 14: The Apollo 12 mission's Saturn V rocket and its ionized exhaust plume became part of a lightning flash channel 36.5 seconds after lift-off. Although the discharge occurred "through" the metal skin and framework of the vehicle, it did not ignite the rocket's highly combustible fuel. 1971 December 24: LANSA Flight 508, a Lockheed L-188A Electra turboprop, registered OB-R-941, operated as a scheduled domestic passenger flight by Lineas Aéreas Nacionales Sociedad Anonima (LANSA), crashed after a lightning strike ignited a fuel tank while it was en route from Lima, Peru, to Pucallpa, Peru, killing 91 people – all of its 6 crew members and 85 of its 86 passengers. The sole survivor was Juliane Koepcke, who fell into the Amazon rainforest strapped to her seat and remarkably survived the fall, and was then able to walk through the jungle for 10 days until she was rescued by local fishermen. 2012 November 4: a plane was reported as exploding off the coast of Herne Bay, Kent, while in flight. This did not turn out to be the case; rather, the plane became part of the flash channel, causing observers to report that the plane and surrounding sky appeared bright pink. 2019 May 5: Aeroflot Flight 1492, a Sukhoi Superjet 100, was, according to the flight captain, struck by lightning on take-off, damaging electrical systems and forcing the pilots to attempt an emergency landing. The plane hit the ground hard and caught fire, which engulfed it on the runway. Of the 78 people on board, 41 were killed. Most-stricken human Roy Sullivan, a national park ranger who died in 1983, holds a Guinness World Record for surviving seven different lightning strikes. He had multiple injuries across his body. Longest lightning bolt A 2020 lightning bolt across the southern United States set the record for the longest lightning bolt ever detected. The bolt stretched for 477 miles (768 kilometers) over Mississippi, Louisiana, and Texas, although it was between clouds and did not strike the ground. The World Meteorological Organization confirmed its record-breaking status in January 2022.
Physical sciences
Storms
Earth science
2452325
https://en.wikipedia.org/wiki/Fine-art%20photography
Fine-art photography
Fine-art photography is photography created in line with the vision of the photographer as artist, using photography as a medium for creative expression. The goal of fine-art photography is to express an idea, a message, or an emotion. This stands in contrast to representational photography, such as photojournalism, which provides a documentary visual account of specific subjects and events, literally representing objective reality rather than the subjective intent of the photographer; and commercial photography, the primary focus of which is to advertise products or services. History Invention through 1940s One photography historian claimed that "the earliest exponent of 'Fine Art' or composition photography was John Edwin Mayall", who exhibited daguerreotypes illustrating the Lord's Prayer in 1851. Successful attempts to make fine art photography can be traced to Victorian era practitioners such as Julia Margaret Cameron, Charles Lutwidge Dodgson, and Oscar Gustave Rejlander and others. In the U.S. F. Holland Day, Alfred Stieglitz and Edward Steichen were instrumental in making photography a fine art, and Stieglitz was especially notable in introducing it into museum collections. In the UK as recently as 1960, photography was not really recognised as a fine art. S. D. Jouhar said, when he formed the Photographic Fine Art Association at that time: "At the moment, photography is not generally recognized as anything more than a craft. In the USA photography has been openly accepted as Fine Art in certain official quarters. It is shown in galleries and exhibitions as an Art. There is not corresponding recognition in this country. The London Salon shows pictorial photography, but it is not generally understood as an art. Whether a work shows aesthetic qualities or not it is designated 'Pictorial Photography' which is a very ambiguous term. The photographer himself must have confidence in his work and in its dignity and aesthetic value, to force recognition as an Art rather than a Craft". Until the late 1970s several genres predominated, such as nudes, portraits, and natural landscapes (exemplified by Ansel Adams). Breakthrough 'star' artists in the 1970s and 80s, such as Sally Mann, Robert Mapplethorpe, Robert Farber and Cindy Sherman, still relied heavily on such genres, although seeing them with fresh eyes. Others investigated a snapshot aesthetic approach. In the mid-1970s Josef H. Neumann developed chemograms, which are products of both photographic processing and painting on photographic paper. Before the spread of computers and the use of image processing software the process of creating chemograms can be considered an early form of analog post-production, in which the original image is altered after the enlarging process. Unlike works of digital post-production each chemogram is a unique piece. American organizations, such as the Aperture Foundation and the Museum of Modern Art(MoMA), have done much to keep photography at the forefront of the fine arts. MoMA's establishment of a department of photography in 1940 and appointment of Beaumont Newhall as its first curator are often cited as institutional confirmation of photography's status as an art. 1950s to present day There is now a trend toward a careful staging and lighting of the picture, rather than hoping to "discover" it ready-made. Photographers such as Gregory Crewdson, and Jeff Wall are noted for the quality of their staged pictures. 
Additionally, new technological trends in digital photography have opened a new direction in full spectrum photography, where careful filtering choices across the ultraviolet, visible and infrared lead to new artistic visions. As printing technologies have improved since around 1980, a photographer's art prints reproduced in a finely-printed limited-edition book have now become an area of strong interest to collectors. This is because books usually have high production values, a short print run, and their limited market means they are almost never reprinted. The collector's market in photography books by individual photographers is developing rapidly. According to Art Market Trends 2004 7,000 photographs were sold in auction rooms in 2004, and photographs averaged a 7.6 percent annual price rise from 1994 and 2004. Around 80 percent were sold in the United States, although auction sales only record a fraction of total private sales. There is now a thriving collectors' market for which the most sought-after art photographers will produce high quality archival prints in strictly limited editions. Attempts by online art retailers to sell fine photography to the general public alongside prints of paintings have had mixed results, with strong sales coming only from the traditional major photographers such as Ansel Adams. In addition to the "digital movement" towards manipulation, filtering, or resolution changes, some fine artists deliberately seek a "naturalistic", including "natural lighting" as a value in itself. Sometimes the art work as in the case of Gerhard Richter consists of a photographic image that has been subsequently painted over with oil paints and/or contains some political or historical significance beyond the image itself. The existence of "photographically-projected painting" now blurs the line between painting and photography which traditionally was absolute. Framing and print size Until the mid-1950s it was widely considered vulgar and pretentious to frame a photograph for a gallery exhibition. Prints were usually simply pasted onto blockboard or plywood, or given a white border in the darkroom and then pinned at the corners onto display boards. Prints were thus shown without any glass reflections obscuring them. Steichen's famous The Family of Man exhibition was unframed, the pictures pasted to panels. Even as late as 1966 Bill Brandt's MoMA show was unframed, with simple prints pasted to thin plywood. From the mid-1950s to about 2000 most gallery exhibitions had prints behind glass. Since about 2000 there has been a noticeable move toward once again showing contemporary gallery prints on boards and without glass. In addition, throughout the twentieth century, there was a noticeable increase in the size of prints. Politics Fine art photography is created primarily as an expression of the artist's vision, but as a byproduct it has also been important in advancing certain causes. The work of Ansel Adams in Yosemite and Yellowstone provides an example. Adams is one of the most widely recognized fine art photographers of the 20th century, and was an avid promoter of conservation. While his primary focus was on photography as art, some of his work raised public awareness of the beauty of the Sierra Nevada and helped to build political support for their protection. Such photography has also had effects in the area of censorship law and free expression, due to its concern with the nude body. 
Overlap with other genres Although fine art photography may overlap with many other genres of photography, the overlaps with fashion photography and photojournalism merit special attention. In 1996 it was stated that there had been a "recent blurring of lines between commercial illustrative photography and fine art photography," especially in the area of fashion. Evidence for the overlap of fine art photography and fashion photography includes lectures, exhibitions, trade fairs such as Art Basel Miami Beach, and books. Photojournalism and fine art photography overlapped beginning in the "late 1960s and 1970s, when... news photographers struck up liaisons with art photography and painting". In 1974 the International Center of Photography opened, with emphases on both "humanitarian photojournalism" and "art photography". By 1987, "pictures that were taken on assignments for magazines and newspapers now regularly reappear[ed] – in frames – on the walls of museums and galleries". Smartphone apps such as Snapchat sometimes are used for fine-art photography. Attitudes of artists in other fields The reactions of artists and writers have contributed significantly to perceptions of photography as fine art. Prominent painters have asserted their interest in the medium: Noted authors, similarly, have responded to the artistic potential of photography: List of definitions Here is a list of definitions of the related terms "art photography", "artistic photography", and "fine art photography". In reference books Among the definitions that can be found in reference books are: "Art photography": "Photography that is done as a fine art – that is, done to express the artist's perceptions and emotions and to share them with others". "Fine art photography": "A picture that is produced for sale or display rather than one that is produced in response to a commercial commission". "Fine art photography": "The production of images to fulfill the creative vision of a photographer. ... Synonymous with art photography". "Art photography": A definition "is elusive," but "when photographers refer to it, they have in mind the photographs seen in magazines such as American Photo, Popular Photography, and Print, and in salons and exhibitions. Art (or artful) photography is salable.". "Artistic photography": "A frequently used but somewhat vague term. The idea underlying it is that the producer of a given picture has aimed at something more than a merely realistic rendering of the subject, and has attempted to convey a personal impression". "Fine art photography": Also called "decor photography," or "photo decor," this "involves selling large photos... that can be used as wall art". In scholarly articles Among the definitions that can be found in scholarly articles are: In 1961, S. D. Jouhar founded the Photographic Fine Art Association, and he was its chairman. Their definition of Fine Art was "Creating images that evoke emotion by a photographic process in which one's mind and imagination are freely but competently exercised." Two studies by Christopherson in 1974 defined "fine art photographers" as "those persons who create and distribute photographs specifically as 'art. A 1986 ethnographic and historical study by Schwartz did not directly define "fine art photography" but did compare it with "camera club photography". 
It found that fine art photography "is tied to other media" such as painting; "responds to its own history and traditions" (as opposed to "aspir[ing] to the same achievements made by their predecessors"); "has its own vocabulary"; "conveys ideas" (e.g., "concern with form supersedes concern with subject matter"); "is innovative"; "is personal"; "is a lifestyle"; and "participates in the world of commerce." On the World Wide Web Among the definitions that can be found on the World Wide Web are: The Library of Congress Subject Headings use "art photography" as "photography of art," and "artistic photography" (i.e., "Photography, artistic") as "photography as a fine art, including aesthetic theory". The Art & Architecture Thesaurus states that "fine art photography" (preferred term) or "art photography" or "artistic photography" is "the movement in England and the United States, from around 1890 into the early 20th century, which promoted various aesthetic approaches. Historically, has sometimes been applied to any photography whose intention is aesthetic, as distinguished from scientific, commercial, or journalistic; for this meaning, use 'photography'." Definitions of "fine art photography" on photographers' static Web pages vary from "the subset of fine art that is created with a camera" to "limited-reproduction photography, using materials and techniques that will outlive the artist". On the concept of limited reproduction, the French legal system gives a very precise definition of when a fine art photograph is considered an artwork. The tax code states that "are considered as artworks the photographs taken by the artist, printed by him/herself or under his/her control, signed and numbered in a maximum of thirty copies, including all sizes and mountings."
Technology
Optical instruments
null
2453519
https://en.wikipedia.org/wiki/Headstander
Headstander
A headstander is any of several species of South American fish, including Anostomus ternetzi, Anostomus anostomus (family Anostomidae) and members of the genus Chilodus from the family Chilodontidae. The name derives from their habit of swimming at a 45° angle, head pointed downwards, as if "standing on their heads". About Headstanders are a group of freshwater fishes that live in streams of South America. Some species, such as Chilodus punctatus and C. gracilis, are common aquarium fishes as well. In nature, they are predominantly found in shallow streams with strong currents and a lot of algae, on which they feed. They prefer slightly acidic water of medium hardness. The headstander will eat almost any kind of food, but mostly enjoys hair algae. Some headstanders can reach up to 12 cm (4 3/4 inches) in length. They tend to be very active, sensitive to shadows, and prone to jumping. They also have a tendency to be slightly aggressive. In aquaria, they are most peaceful when kept as single specimens or in groups of more than six.
Biology and health sciences
Characiformes
Animals
2454447
https://en.wikipedia.org/wiki/Fata%20Morgana%20%28mirage%29
Fata Morgana (mirage)
A Fata Morgana is a complex form of superior mirage visible in a narrow band right above the horizon. The term Fata Morgana is the Italian translation of "Morgan the Fairy" (Morgan le Fay of Arthurian legend). These mirages are often seen in the Italian Strait of Messina, and were described as fairy castles in the air or false land conjured by her magic. Fata Morgana mirages significantly distort the object or objects on which they are based, often such that the object is completely unrecognizable. A Fata Morgana may be seen on land or at sea, in polar regions, or in deserts. It may involve almost any kind of distant object, including boats, islands, and the coastline. Often, a Fata Morgana changes rapidly. The mirage comprises several inverted (upside down) and upright images stacked on top of one another. Fata Morgana mirages also show alternating compressed and stretched zones. The optical phenomenon occurs because rays of light bend when they pass through air layers of different temperatures in a steep thermal inversion where an atmospheric duct has formed. In calm weather, a layer of significantly warmer air may rest over colder dense air, forming an atmospheric duct that acts like a refracting lens, producing a series of both inverted and erect images. A Fata Morgana requires a duct to be present; thermal inversion alone is not enough to produce this kind of mirage. While a thermal inversion often takes place without there being an atmospheric duct, an atmospheric duct cannot exist without there first being a thermal inversion. Observing a Fata Morgana A Fata Morgana is most commonly seen in polar regions, especially over large sheets of ice that have a uniform low temperature. It may, however, be observed in almost any area. In polar regions the Fata Morgana phenomenon is observed on relatively cold days. In deserts, over oceans, and over lakes, a Fata Morgana may be observed on hot days. To generate the Fata Morgana phenomenon, the thermal inversion has to be strong enough that the curvature of the light rays within the inversion layer is stronger than the curvature of the Earth. Under these conditions, the rays bend and create arcs. An observer needs to be within or below an atmospheric duct in order to be able to see a Fata Morgana. A Fata Morgana may be observed from any altitude within the Earth's atmosphere, from sea level up to mountaintops, and even from airplanes. A Fata Morgana may be described as a very complex superior mirage with more than three distorted erect and inverted images. Because of the constantly changing conditions of the atmosphere, a Fata Morgana may change in various ways within just a few seconds of time, including changing to become a straightforward superior mirage. The sequential image here shows sixteen photographic frames of a mirage of the Farallon Islands as seen from San Francisco; the images were all taken on the same day. In the first fourteen frames, elements of the Fata Morgana mirage display alternations of compressed and stretched zones. The last two frames were photographed a few hours later, around sunset time. At that point in time, the air was cooler while the ocean was probably a little bit warmer, which caused the thermal inversion to be not as extreme as it was a few hours before. A mirage was still present at that point, but it was not so complex as a few hours before sunset: the mirage was no longer a Fata Morgana, but instead had become a simple superior mirage. 
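The requirement that the rays' curvature exceed the curvature of the Earth can be put into rough numbers. The following sketch compares the two curvatures using a commonly quoted approximation for the vertical gradient of air's refractive index; the sea-level pressure, the example temperature, the chosen gradients and the function names are illustrative assumptions rather than figures taken from this article.

```python
# Rough, back-of-the-envelope estimate of when a thermal inversion can duct
# light rays. A horizontal ray in air is bent with curvature of roughly
# -dn/dz (the refractive index n is very close to 1). A commonly used
# approximation for visible light, with pressure P in hPa, temperature T in
# kelvin and the temperature gradient dT/dz in K/m, is:
#   dn/dz ~ -79e-6 * (P / T**2) * (0.0342 + dT/dz)
# where 0.0342 K/m comes from the hydrostatic pressure lapse. All numeric
# inputs below are illustrative assumptions.

EARTH_RADIUS_M = 6.371e6
EARTH_CURVATURE = 1.0 / EARTH_RADIUS_M   # about 1.57e-7 per metre

def ray_curvature(pressure_hpa: float, temp_k: float, dT_dz: float) -> float:
    """Approximate curvature (1/m) of a horizontal light ray in air."""
    return 79e-6 * (pressure_hpa / temp_k**2) * (0.0342 + dT_dz)

def can_duct(pressure_hpa: float, temp_k: float, dT_dz: float) -> bool:
    """True if the ray bends more sharply than the Earth's surface curves."""
    return ray_curvature(pressure_hpa, temp_k, dT_dz) > EARTH_CURVATURE

# Example: air near 0 degrees C at standard sea-level pressure.
for gradient in (0.05, 0.12, 0.20):      # temperature increase with height, K/m
    print(f"dT/dz = {gradient:.2f} K/m -> duct: {can_duct(1013.0, 273.0, gradient)}")
# Under these assumptions the threshold falls near 0.11 K/m, i.e. an
# inversion of roughly 11 degrees C per 100 m.
```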
Fata Morgana mirages are visible to the naked eye, but in order to be able to see the detail within them, it is best to view them through binoculars, a telescope, or as is the case in the images here, through a telephoto lens. Gabriel Gruber (1740–1805) and (1744–1806), who observed Fata Morgana above Lake Cerknica, were the first to study it in a laboratory setting. Etymology La Fata Morgana ("The Fairy Morgana") is the Italian name of Morgan le Fay, also known as Morgana and other variants, who was described as a powerful sorceress in Arthurian legend. As her name indicates, the figure of Morgan appears to have been originally a fairy figure rather than a human woman. The early works featuring Morgan do not elaborate on her nature, other than describing her role as that of a fairy or magician. Later, she was described as a King Arthur's half-sister and an enchantress. After King Arthur's final battle at Camlann, Morgan takes her half-brother Arthur to Avalon. In medieval times, suggestions for the location of Avalon included the other side of the Earth at the antipodes, Sicily, and other locations in the Mediterranean. Legends claimed that sirens in the waters around Sicily lured the unwary to their death. Morgan is associated not only with Sicily's Mount Etna (the supposedly hollow mountain locally identified as Avalon since the 12th century), but also with sirens. In a medieval French Arthurian romance of the 13th century, Floriant et Florete, she is called "mistress of the fairies of the salt sea" (La mestresse [des] fées de la mer salée). Ever since that time, Fata Morgana has been associated with Sicily in the Italian folklore and literature. For example, a local legend connects Morgan and her magical mirages with Roger I of Sicily and the Norman conquest of the island from the Arabs. Walter Charleton, in his 1654 treatise "Physiologia Epicuro-Gassendo-Charltoniana", devotes several pages to the description of the Morgana of Rhegium, in the Strait of Messina (Book III, Chap. II, Sect. II). He records that a similar phenomenon was reported in Africa by Diodorus Siculus, a Greek historian writing in the first century BC, and that the Rhegium Fata Morgana was described by Damascius, a Greek philosopher of the sixth century AD. In addition, Charleton tells us that Athanasius Kircher described the Rhegium mirage in his book of travels. An early mention of the term Fata Morgana in English, in 1818, referred to such a mirage noticed in the Strait of Messina, between Calabria and Sicily. Famous legends and observations The Flying Dutchman The Flying Dutchman, according to folklore, is a ghost ship that can never go home, and is doomed to sail the seven seas forever. The Flying Dutchman is usually spotted from afar, sometimes seen to be glowing with ghostly light. One of the possible explanations of the origin of the Flying Dutchman legend is a Fata Morgana mirage seen at sea. A Fata Morgana superior mirage of a ship can take many different forms. Even when the boat in the mirage does not seem to be suspended in the air, it still looks ghostly, and unusual, and what is even more important, it is ever-changing in its appearance. Sometimes a Fata Morgana causes a ship to appear to float inside the waves, at other times an inverted ship appears to sail above its real companion. 
In fact, with a Fata Morgana it can be hard to say which individual segment of the mirage is real and which is not real: when a real ship is out of sight because it is below the horizon line, a Fata Morgana can cause the image of it to be elevated, and then everything which is seen by the observer is a mirage. On the other hand, if the real ship is still above the horizon, the image of it can be duplicated many times and elaborately distorted by a Fata Morgana. Phantom islands In the 19th and early 20th centuries, Fata Morgana mirages may have played a role in a number of unrelated "discoveries" of arctic and antarctic land masses which were later shown not to exist. Icebergs frozen into the pack ice, or the uneven surface of the ice itself, may have contributed to the illusion of distant land features. Sannikov Land Yakov Sannikov and Matvei Gedenschtrom claimed to have seen a land mass north of Kotelny Island during their 1809–1810 cartographic expedition to the New Siberian Islands. Sannikov reported this sighting of a "new land" in 1811, and the supposed island was named after him. Three-quarters of a century later, in 1886, Baron Eduard Toll, a Baltic German explorer in Russian service, reported observing Sannikov Land during another expedition to the New Siberian Islands. In 1900, he would lead still another expedition to the region, which had among its objectives the location and exploration of Sannikov Land. The expedition was unsuccessful in this respect. Toll and three others were lost after they departed their ship, which was stuck in ice for the winter, and embarked on a risky expedition by dog sled. In 1937, the Soviet icebreaker Sadko also tried and failed to find Sannikov Land. Some historians and geographers have theorised that the land mass that Sannikov and Toll saw was actually a Fata Morgana of Bennett Island. Croker Mountains In 1818, Sir John Ross led an expedition to discover the long-sought-after Northwest Passage. When he reached Lancaster Sound in Canada, he sighted, in the distance, a land mass with mountains, directly ahead in the ship's course. He named the mountain range the Croker Mountains, after First Secretary to the Admiralty John Wilson Croker, and ordered the ship to turn around and return to England. Several of his officers protested, including First Mate William Edward Parry and Edward Sabine, but they could not dissuade him. The account of Ross's voyage, published a year later, brought to light this disagreement, and the ensuing controversy over the existence of the Croker Mountains ruined Ross's reputation. The year after Ross's expedition, in 1819, Parry was given command of his own Arctic expedition, and proved Ross wrong by continuing west beyond where Ross had turned back, and sailing through the supposed location of the Croker Mountains. The mountain range that had caused Ross to abandon his mission had been a mirage. Ross made two errors. First, he refused to listen to the counsel of his officers, who may have been more familiar with mirages than he was. Second, his attempt to honour Croker by naming a mountain range after him backfired when the mountains turned out to be non-existent. Ross could not obtain ships or funds from the government for his subsequent expeditions, and was forced to rely on private backers instead. New South Greenland Benjamin Morrell reported that, in March 1823, while on a voyage to the Antarctic and southern Pacific Ocean, he had explored what he thought was the east coast of New South Greenland. 
The west coast of New South Greenland had been explored two years earlier by Robert Johnson, who had given the land its name. This name was not adopted, however, and the area, which is the northern part of the Antarctic Peninsula, is now known as Graham Land. Morrell's reported position was actually far to the east of Graham Land. Searches for the land that Morrell claimed to have explored would continue into the early 20th century before New South Greenland's existence was conclusively disproven. Why Morrell reported exploring a non-existent land is unclear, but one possibility is that he mistook a Fata Morgana for actual land. Crocker Land Robert Peary claimed to have seen, while on a 1906 Arctic expedition, a land mass in the distance. He said that it was north-west from the highest point of Cape Thomas Hubbard, which is situated in what is now the northern Canadian territory of Nunavut, and he estimated it to be away, at about 83 degrees N, longitude 100 degrees W. He named it Crocker Land, after George Crocker of the Peary Arctic Club. As Peary's diary contradicts his public claim that he had sighted land, it is now believed that Crocker Land was a fraudulent invention of Peary, created in an unsuccessful attempt to secure further funding from Crocker. In 1913, unaware that Crocker Land was merely an invention, Donald Baxter MacMillan organised the Crocker Land Expedition, which set out to reach and explore the supposed land mass. On 21 April, the members of the expedition did, in fact, see what appeared to be a huge island on the north-western horizon. As MacMillan later said, "Hills, valleys, snow-capped peaks extending through at least one hundred and twenty degrees of the horizon". Piugaattoq, a member of the expedition and an Inuit hunter with 20 years of experience of the area, explained that it was just an illusion. He called it poo-jok, which means 'mist'. However, MacMillan insisted that they press on, even though it was late in the season and the sea ice was breaking up. For five days they went on, following the mirage. Finally, on 27 April, after they had covered some of dangerous sea ice, MacMillan was forced to admit that Piugaattoq was right—the land that they had sighted was in fact a mirage (probably a Fata Morgana). Later, MacMillan wrote: The expedition collected interesting samples, but is still considered to be a failure and a very expensive mistake. The final cost was $100,000 (equivalent to $ million in ). Hy Brasil Hy Brasil is an island that was said to appear once every few years off the coast of County Kerry, Ireland. Hy Brasil has been drawn on ancient maps as a perfectly circular island with a river running directly through it. Lake Ontario Lake Ontario is said to be famous for mirages, with opposite shorelines becoming clearly visible during the events. In July 1866, mirages of boats and islands were seen from Kingston, Ontario. Here the described mirages of vessels "could only be seen with the aid of a telescope". It is often the case when observing a Fata Morgana that one needs to use a telescope or binoculars to really make out the mirage. The "cloud" that the article mentions a few times probably refers to a duct. On 25 August 1894, Scientific American described a "remarkable mirage" seen by the citizens of Buffalo, New York. This description might refer to looming owing to inversion rather than to an actual mirage. 
McMurdo Sound and Antarctica From McMurdo Station in Antarctica, Fata Morganas are often seen during the Antarctic spring and summer, across McMurdo Sound. An Antarctic Fata Morgana, seen from a C-47 transport flight, was recounted: UFOs Fata Morgana mirages may continue to trick some observers and are still sometimes mistaken for otherworldly objects such as UFOs. A Fata Morgana can display an object that is located below the astronomical horizon as an apparent object hovering in the sky. A Fata Morgana can also magnify such an object vertically and make it look absolutely unrecognizable. Some UFOs which are seen on radar may also be due to Fata Morgana mirages. Official UFO investigations in France indicate: Australia Fata Morgana mirages could explain the mysterious Australian Min Min light phenomenon. This would also explain the way in which the legend has changed over time: The first reports were of a stationary light, which in a Fata Morgana effect would be an image of a campfire. In more recent reports this has changed to moving lights, which in an inversion reflection such as Fata Morgana would be headlights over the horizon being reflected by the inversion. Greenland Fata Morgana Land is a phantom island in the Arctic, reported first in 1907. After an unfruitful search, it was deemed to be Tobias Island. In literature A Fata Morgana is usually associated with something mysterious, something that never could be approached. In the lines, "the weary traveller sees / In desert or prairie vast, / Blue lakes, overhung with trees / That a pleasant shadow cast", because of the mention of blue lakes, it is clear that the author is actually describing not a Fata Morgana, but rather a common inferior or desert mirage. The 1886 drawing shown here of a "Fata Morgana" in a desert might have been an imaginative illustration for the poem, but in reality no mirage ever looks like this. Andy Young writes, "They're always confined to a narrow strip of sky—less than a finger's width at arm's length—at the horizon." The 18th-century poet Christoph Martin Wieland wrote about "Fata Morgana's castles in the air". The idea of castles in the air was probably so irresistible that many languages still use the phrase Fata Morgana to describe a mirage. In the book Thunder Below! about the submarine , the crew sees a Fata Morgana (called an "arctic mirage" in the book) of four ships trapped in the ice. As they try to approach the ships the mirage vanishes. The Fata Morgana is briefly mentioned in the 1936 H. P. Lovecraft horror novel At the Mountains of Madness, in which the narrator states: "On many occasions the curious atmospheric effects enchanted me vastly; these including a strikingly vivid mirage—the first I had ever seen—in which distant bergs became the battlements of unimaginable cosmic castles."
Physical sciences
Atmospheric optics
Earth science
1769573
https://en.wikipedia.org/wiki/Dark%20galaxy
Dark galaxy
A dark galaxy is a hypothesized galaxy with no (or very few) stars. They received their name because they have no visible stars but may be detectable if they contain significant amounts of gas. Astronomers have long theorized the existence of dark galaxies, but there are no confirmed examples to date. Dark galaxies are distinct from intergalactic gas clouds caused by galactic tidal interactions, since these gas clouds do not contain dark matter, so they do not technically qualify as galaxies. Distinguishing between intergalactic gas clouds and galaxies is difficult; most candidate dark galaxies turn out to be tidal gas clouds. The best candidate dark galaxies to date include HI1225+01, AGC229385, and numerous gas clouds detected in studies of quasars. On 25 August 2016, astronomers reported that Dragonfly 44, an ultra diffuse galaxy (UDG) with the mass of the Milky Way galaxy, but with nearly no discernible stars or galactic structure, is made almost entirely of dark matter. Observational evidence Large surveys with sensitive but low-resolution radio telescopes like Arecibo or the Parkes Telescope look for 21-cm emission from atomic hydrogen in galaxies. These surveys are then matched to optical surveys to identify any objects with no optical counterpart—that is, sources with no stars. Another way astronomers search for dark galaxies is to look for hydrogen absorption lines in the spectra of background quasars. This technique has revealed many intergalactic clouds of hydrogen, but following up on candidate dark galaxies is difficult, since these sources tend to be too far away and are often optically drowned out by the bright light from the quasars. Nature of dark galaxies Origin In 2005, astronomers discovered gas cloud VIRGOHI21 and attempted to determine what it was and why it exerted such a massive gravitational pull on galaxy NGC 4254. After years of ruling out other possible explanations, some have concluded that VIRGOHI21 is a dark galaxy. Size The actual size of dark galaxies is unknown because they cannot be observed with normal telescopes. There have been various estimations, ranging from double the size of the Milky Way to the size of a small quasar. Structure Dark galaxies are theoretically composed of dark matter, hydrogen, and dust. Some scientists support the idea that dark galaxies may contain stars. Yet the exact composition of dark galaxies remains unknown because there is no conclusive way to identify them. Nevertheless, astronomers estimate that the mass of the gas in these galaxies is approximately one billion times that of the Sun. Methodology to observe dark bodies Dark galaxies contain no visible stars and are invisible to optical telescopes. The Arecibo Galaxy Environment Survey (AGES) harnessed the Arecibo radio telescope to search for dark galaxies, which are predicted to contain detectable amounts of neutral hydrogen. The Arecibo radio telescope was useful where others are not because of its ability to detect the emission from this neutral hydrogen, specifically the 21-cm line. Alternative theories Scientists say that the galaxies we see today only began to create stars after dark galaxies. Based on numerous scientific assertions, dark galaxies played a big role in many of the galaxies astronomers and scientists see today. 
Martin Haehnel, from the Kavli Institute for Cosmology at the University of Cambridge, claims that the precursor to the Milky Way galaxy was actually a much smaller bright galaxy that had merged with dark galaxies nearby to form the Milky Way we currently see. Multiple scientists agree that dark galaxies are building blocks of modern galaxies. Sebastian Cantalupo of the University of California, Santa Cruz, agrees with this theory. He goes on to say, "In our current theory of galaxy formation, we believe that big galaxies form from the merger of smaller galaxies. Dark galaxies bring to big galaxies a lot of gas, which then accelerates star formation in the bigger galaxies." Scientists have specific techniques they use to locate these dark galaxies. These techniques can also teach us more about other phenomena in the universe, for instance the cosmic web. This "web" is made of invisible filaments of gas and dark matter believed to permeate the universe, as well as "feeding and building galaxies and galaxy clusters where the filaments intersect." Potential dark galaxies FAST J0139+4328 Located 94 million light years away from Earth, this galaxy is visible in radio waves with minimal visible light. HE0450-2958 HE0450-2958 is a quasar at redshift z=0.285. Hubble Space Telescope images showed that the quasar is located at the edge of a large cloud of gas, but no host galaxy was detected for the quasar. The authors of the Hubble study suggested that one possible scenario was that the quasar is located in a dark galaxy. However, subsequent analysis by other groups found no evidence that the host galaxy is anomalously dark, and demonstrated that a normal host galaxy is probably present, so the observations do not support the dark galaxy interpretation. HVC 127-41-330 HVC 127-41-330 is a cloud rotating at high speed between Andromeda and the Triangulum Galaxy. Astronomer Josh Simon considers this cloud to be a dark galaxy because of the speed of its rotation and its predicted mass. J0613+52 J0613+52 is a possible dark galaxy, discovered with the Green Bank Telescope when it was accidentally pointed to the wrong coordinates. Stars could possibly exist within it, but none had been observed as of January 2024. Nube Nube was discovered in 2023 by analyzing deep optical imagery of an area in Stripe 82. Due to its low surface brightness, Nube is classified as an "almost dark galaxy." Smith's Cloud Smith's Cloud is a candidate to be a dark galaxy, due to its projected mass and survival of encounters with the Milky Way. VIRGOHI21 Initially discovered in 2000, VIRGOHI21 was announced in February 2005 as a good candidate to be a true dark galaxy. It was detected in 21-cm surveys, and was suspected to be a possible cosmic partner to the galaxy NGC 4254. This unusual-looking galaxy appears to be one partner in a cosmic collision, and appeared to show dynamics consistent with a dark galaxy (and apparently inconsistent with the predictions of the Modified Newtonian Dynamics (MOND) theory). However, further observations revealed that VIRGOHI21 was an intergalactic gas cloud, stripped from NGC 4254 by a high-speed collision. The high-speed interaction was caused by infall into the Virgo cluster.
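Many of the candidates above were identified from 21-cm radio emission rather than starlight, and their gas content is normally quoted as a neutral-hydrogen mass derived from the integrated line flux. The sketch below applies the standard optically-thin conversion; the distance, flux and redshift values are placeholder assumptions for illustration, not measurements of any object named above.

```python
# Convert an integrated 21-cm line flux into a neutral-hydrogen (HI) mass
# using the standard optically-thin relation:
#   M_HI [solar masses] = 2.356e5 * D[Mpc]**2 * S[Jy km/s]
# The HI hyperfine line has a rest frequency of 1420.405751 MHz, so a source
# at redshift z is observed at f = f_rest / (1 + z).
# All input numbers below are illustrative placeholders.

HI_REST_FREQ_MHZ = 1420.405751

def hi_mass_solar(distance_mpc: float, integrated_flux_jy_km_s: float) -> float:
    """HI mass in solar masses from distance (Mpc) and integrated flux (Jy km/s)."""
    return 2.356e5 * distance_mpc**2 * integrated_flux_jy_km_s

def observed_frequency_mhz(redshift: float) -> float:
    """Observed frequency (MHz) of the redshifted 21-cm line."""
    return HI_REST_FREQ_MHZ / (1.0 + redshift)

distance_mpc, flux = 30.0, 1.5            # placeholder values
print(f"M_HI ~ {hi_mass_solar(distance_mpc, flux):.2e} solar masses")
print(f"21-cm line at z = 0.01 observed near {observed_frequency_mhz(0.01):.1f} MHz")
```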
Physical sciences
Galaxy classification
Astronomy
1771156
https://en.wikipedia.org/wiki/Container%20format
Container format
A container format (informally, sometimes called a wrapper) or metafile is a file format that allows multiple data streams to be embedded into a single file, usually along with metadata for identifying and further detailing those streams. Notable examples of container formats include archive files (such as the ZIP format) and formats used for multimedia playback (such as Matroska, MP4, and AVI). Among the earliest cross-platform container formats were Distinguished Encoding Rules and the 1985 Interchange File Format. Design Although containers may identify how data or metadata is encoded, they do not actually provide instructions about how to decode that data. A program that can open a container must also use an appropriate codec to decode its contents. If the program doesn't have the required algorithm, it can't use the contained data. In these cases, programs usually emit an error message that complains of a missing codec, which users may be able to acquire. Container formats can be made to wrap any kind of data. Though there are some examples of such file formats (e.g. Microsoft Windows's DLL files), most container formats are specialized for specific data requirements. For example, since audio and video streams can be coded and decoded with many different algorithms, a container format may be used to provide the appearance of a single file format to users of multimedia playback software. Considerations The differences between various container formats arise from five main issues: Popularity; how widely supported a container is. Overhead. This is the difference in file-size between two files with the same content in a different container. Support for advanced codec functionality. Older formats such as AVI do not support new codec features like B-frames, VBR audio or VFR video natively. The format may be "hacked" to add support, but this creates compatibility problems. Support for advanced content, such as chapters, subtitles, meta-tags, user-data. Support of streaming media. Single coding formats In addition to pure container formats, which specify only the wrapper but not the coding, a number of file formats specify both a storage layer and the coding, as part of modular design and forward compatibility. Examples include the JPEG File Interchange Format (JFIF), for containing JPEG data, and Portable Network Graphics (PNG) formats. In principle, coding can be changed while the storage layer is retained; for example, Multiple-image Network Graphics (MNG) uses the PNG container format but provides animation, while JPEG Network Graphics (JNG) puts JPEG encoded data in a PNG container; in both cases however, the different formats have different magic numbers – the format specifies the coding, though a MNG can contain both PNG-encoded images and JPEG-encoded images. Multimedia container formats The container file is used to identify and interleave different data types. Simpler container formats can contain different types of audio formats, while more advanced container formats can support multiple audio and video streams, subtitles, chapter-information, and meta-data (tags) — along with the synchronization information needed to play back the various streams together. In most cases, the file header, most of the metadata and the synchro chunks are specified by the container format. For example, container formats exist for optimized, low-quality, internet video streaming which differs from high-quality Blu-ray streaming requirements. 
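As a minimal illustration of the earlier point that a container only labels how each stream is coded while playback still requires a matching decoder, the sketch below checks a file's declared stream tags against a decoder registry and reports the familiar missing-codec situation; the tag names and the registry are invented for illustration and do not come from any real player or library.

```python
# Toy model of the container/codec split: demuxing a container yields streams
# tagged with a coding name, and playback needs a decoder registered for each
# tag. The tag names and the registry below are illustrative placeholders.

AVAILABLE_DECODERS = {"pcm_s16le", "vorbis", "vp8"}   # decoders this "player" has

def missing_codecs(declared_stream_tags):
    """Return the declared coding tags for which no decoder is available."""
    return [tag for tag in declared_stream_tags if tag not in AVAILABLE_DECODERS]

# A hypothetical file whose container declares one video and one audio stream:
declared = ["vp8", "opus"]
missing = missing_codecs(declared)
if missing:
    print("Cannot play file; missing codec(s):", ", ".join(missing))
else:
    print("All declared streams can be decoded.")
```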
Container format parts have various names: "chunks" as in RIFF and PNG, "atoms" in QuickTime/MP4, "packets" in MPEG-TS (from the communications term), and "segments" in JPEG. The main content of a chunk is called the "data" or "payload". Most container formats have chunks in sequence, each with a header, while TIFF instead stores offsets. Modular chunks make it easy to recover other chunks in case of file corruption or dropped frames or bit slip, while offsets result in framing errors in cases of bit slip. Some containers are exclusive to audio: AIFF (IFF file format, widely used on the macOS platform) WAV (RIFF file format, widely used on Windows platform) XMF (Extensible Music Format) Other containers are exclusive to still images: FITS (Flexible Image Transport System) still images, raw data, and associated metadata. TIFF (Tag Image File Format) still images and associated metadata. Macintosh PICT resource (PICT), superseded by PDF in Mac OS X Windows Metafile (WMF) = (EMF) Enhanced Metafile Encapsulated PostScript (EPS) Computer Graphics Metafile (CGM) Portable Document Format (PDF) Corel Draw File (CDR) Scalable Vector Graphics (SVG) Rich Text Format file (RTF) Other flexible containers can hold many types of audio and video, as well as other media. The most popular multi-media containers are: 3GP (used by many mobile phones; based on the ISO base media file format) ASF (container for Microsoft WMA and WMV, which today usually do not use a container) AVI (the standard Microsoft Windows container, also based on RIFF) DVR-MS ("Microsoft Digital Video Recording", proprietary video container format developed by Microsoft based on ASF) Flash Video (FLV, F4V) (container for video and audio from Adobe Systems) IFF (first platform-independent container format) Matroska (MKV) (not limited to any coding format, as it can hold virtually anything; it is an open standard container format) MJ2 - Motion JPEG 2000 file format, based on the ISO base media file format which is defined in MPEG-4 Part 12 and JPEG 2000 Part 12 QuickTime File Format (standard QuickTime video container from Apple Inc.) MPEG program stream (standard container for MPEG-1 and MPEG-2 elementary streams on reasonably reliable media such as disks; used also on DVD-Video discs) MPEG-2 transport stream (a.k.a. MPEG-TS) (standard container for digital broadcasting and for transportation over unreliable media; used also on Blu-ray Disc video; typically contains multiple video and audio streams, and an electronic program guide) MP4 (standard audio and video container for the MPEG-4 multimedia portfolio, based on the ISO base media file format defined in MPEG-4 Part 12 and JPEG 2000 Part 12) which in turn was based on the QuickTime file format. Ogg (standard container for Xiph.org audio formats Vorbis and Opus and video format Theora) RM (RealMedia; standard container for RealVideo and RealAudio) WebM (subset of Matroska, used for web-based media distribution on online platforms; container for royalty-free audio formats Vorbis/Opus and video formats VP8/VP9/AV1) There are many other container formats, such as NUT, MXF, GXF, ratDVD, SVI, VOB and DivX Media Format
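The sequence-of-chunks layout described above can be inspected directly in RIFF-based files such as WAV, where every top-level chunk starts with a four-character identifier followed by a little-endian 32-bit size. The sketch below walks those chunks; the file path is a placeholder and the code assumes a well-formed RIFF file.

```python
import struct

def list_riff_chunks(path: str) -> None:
    """Print the top-level chunks of a RIFF/WAV file (4-byte ID, 4-byte size, payload)."""
    with open(path, "rb") as f:
        riff_tag, _total_size, form_type = struct.unpack("<4sI4s", f.read(12))
        if riff_tag != b"RIFF":
            raise ValueError("not a RIFF container")
        print("form type:", form_type.decode("ascii"))
        while True:
            header = f.read(8)
            if len(header) < 8:
                break                              # end of file
            chunk_id, size = struct.unpack("<4sI", header)
            print(f"chunk {chunk_id.decode('ascii', 'replace')}: {size} bytes")
            f.seek(size + (size & 1), 1)           # chunk payloads are padded to even length

# Example usage (placeholder path):
# list_riff_chunks("example.wav")
```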
Technology
File formats
null
1771169
https://en.wikipedia.org/wiki/Citronellal
Citronellal
Citronellal or rhodinal (C10H18O) is a monoterpenoid aldehyde, the main component in the mixture of terpenoid chemical compounds that give citronella oil its distinctive lemon scent. Citronellal is a main isolate in distilled oils from plants of the genus Cymbopogon (excepting C. citratus, culinary lemongrass), lemon-scented gum, and lemon-scented teatree. The (S)-(−)-enantiomer of citronellal makes up as much as 80% of the oil from kaffir lime leaves and is the compound responsible for its characteristic aroma. Citronellal has insect repellent properties, and research shows it is highly effective as a repellent against mosquitoes. Other research shows that citronellal has strong antifungal qualities. Compendial status British Pharmacopoeia
Physical sciences
Terpenes and terpenoids
Chemistry
1771297
https://en.wikipedia.org/wiki/Citral
Citral
Citral is an acyclic monoterpene aldehyde. Being a monoterpene, it is made of two isoprene units. Citral is a collective term which covers two geometric isomers that have their own separate names; the E-isomer is named geranial (trans-citral; α-citral) or citral A. The Z-isomer is named neral (cis-citral; β-citral) or citral B. These stereoisomers occur as a mixture, often not in equal proportions; e.g. in the essential oil of Australian ginger, the neral to geranial ratio is 0.61. Occurrence Citral is present in the volatile oils of several plants, including lemon myrtle (90–98%), Litsea citrata (90%), Litsea cubeba (70–85%), lemongrass (65–85%), lemon tea-tree (70–80%), Ocimum gratissimum (66.5%), Lindera citriodora (about 65%), Calypranthes parriculata (about 62%), petitgrain (36%), lemon verbena (30–35%), lemon ironbark (26%), lemon balm (11%), lime (6–9%), lemon (2–5%), and orange. It is also found in the lipid fraction (essential oil) of Australian ginger (51–71%). Of the many sources of citral, the Australian myrtaceous tree, lemon myrtle, Backhousia citriodora F. Muell. (of the family Myrtaceae), is considered superior. Uses Citral is a precursor in the industrial production of vitamin A, vitamin E, and vitamin K. Citral is also a precursor to lycopene, ionone and methylionone. Fragrances Citral has a strong lemon (citrus) scent and is used as an aroma compound in perfumery. It is used to fortify lemon oil. (Nerol, another perfumery compound, has a less intense but sweeter lemon note.) The aldehydes citronellal and citral are considered key components responsible for the lemon note, with citral preferred. It also has pheromonal effects in acari and insects. The herb Cymbopogon citratus has shown promising insecticidal and antifungal activity against storage pests. Food additive Citral is commonly used as a food additive ingredient. It has been tested (2016) in vitro against the food-borne pathogen Cronobacter sakazakii.
Physical sciences
Terpenes and terpenoids
Chemistry
18207739
https://en.wikipedia.org/wiki/Anoplotherium
Anoplotherium
Anoplotherium is the type genus of the extinct Palaeogene artiodactyl family Anoplotheriidae, which was endemic to Western Europe. It lived from the Late Eocene to the earliest Oligocene. It was the fifth fossil mammal genus to be described with official taxonomic authority, with a history extending back to 1804 when its fossils from Montmartre in Paris, France were first described by the French naturalist Georges Cuvier. Discoveries of incomplete skeletons of A. commune in 1807 led Cuvier to thoroughly describe unusual features for which there are no modern analogues. His drawn skeletal and muscle reconstructions of A. commune in 1812 were amongst the first instances of anatomical reconstructions based on fossil evidence. Cuvier's contributions to palaeontology based on his works on the genus were revolutionary for the field, not only proving the developing ideas of extinction and ecological succession but also paving the way for subfields such as palaeoneurology. Today, there are four known species. Anoplotherium was amongst the largest non-whippomorph artiodactyls of the Palaeogene period, weighing on average to and measuring at least in head and body length and in shoulder height. It was an evolutionarily advanced and unusual artiodactyl, sporting three-toed feet in certain species like A. latipes, a long and robust tail, and a highly-developed brain with strong support for both sense of smell and sensory perception. Its overall robust build may have allowed it to stand bipedally to browse on plants at greater heights, reaching approximately tall, effectively competing with the few other medium to large herbivores it lived with. The full extent of its bipedalism needs to be confirmed by more research, however. The larger, two-toed A. commune and slightly smaller, three-toed A. latipes may be sexual dimorphs in that the former is female and the latter male, but this idea remains speculative. Its closest relative was Diplobune, which similarly is hypothesized to have had specialized behaviours. The artiodactyl lived in western Europe back when it was an archipelago that was isolated from the rest of Eurasia, meaning that it lived in an environment with various other faunas that also evolved with strong levels of endemism. Its exact origins are unknown, but it arose long after a shift towards drier but still subhumid conditions that led to abrasive plants and the extinctions of the large-sized Lophiodontidae, achieving gigantism and establishing itself as a dominant herbivore throughout the entirety of the western European region given its abundant fossil evidence. Its success was abruptly halted by the Grande Coupure extinction and faunal turnover event in the earliest Oligocene of western Europe, which was caused by shifts towards further glaciation and seasonality. Tropical and subtropical forests were rapidly replaced by more temperate environments, and most ocean barriers previously separating western Europe from eastern Eurasia closed, allowing for large faunal dispersals from Asia. Although the specific causes are uncertain, Anoplotherium was likely unable to adapt to these major changes and succumbed to extinction. Taxonomy Research history Identifications While Georges Cuvier knew about fossil bones from the gypsum quarries of the outskirts of Paris (known as the Paris Basin) as early as at least 1800, it was not until 1804 that he would describe them. 
After describing Palaeotherium, he wrote about the next set of fossils that he was able to discern as being different from Palaeotherium based on dentition form, including the apparent lack of canines that left a large gap between the incisors and premolars. He observed that the hemimandible (half a mandible) had three lower incisors instead of four incisors or none which he said characterized other "pachyderms". Cuvier, basing the name on its apparent lack of suitable arms and canines for offensive attacks, erected the name Anoplotherium. The genus name Anoplotherium means "unarmed beast" and is a compound of the Greek words (, 'not'), (, 'armor, large shield'), and (, 'beast, wild animal'). Cuvier named three species of Anoplotherium in the same year, the first of which was the "sheep-sized" A. commune and the other three of which were "smaller species" that he named A. medium, A. minus, and A. minimum. The etymology of the species name A. commune refers to how "common" fossils of the species were while the etymologies of the other two species were based on sizes compared to A. commune. He also attributed a cloven hoof (or didactyl hoof) to A. commune since the specimen appeared to be large-sized. He thought that Anoplotherium had didactyl hooves instead of tridactyl hooves, which would have separated it from Palaeotherium. Based on the hooves and dentition, he concluded that Anoplotherium was similar to ruminants or camelids. However, in 1807, Cuvier found out that Anoplotherium commune had three toes on its hind limbs, although the third index toes were of smaller sizes compared to the other two. Skeletons In 1807, Cuvier wrote about two incomplete skeletons that were recently uncovered, although the first was partially damaged because it was not collected carefully (which he expressed as having frustrated his understanding of the skeletal anatomy of Anoplotherium initially). The first skeleton, found in the quarries of Montmartre in the commune of Pantin, helped to confirm Cuvier's earlier diagnoses of Anoplotherium as correct. The embedded skeleton was the size of a small horse and helped to confirm the large didactyl feet and the 44 total teeth that it had (11 in each side of its jaw). It also had 11 complete ribs and a fragment of a 12th, matching with the number of ribs of camelids. The most surprising element to Cuvier, however, was the enormous tail with 22 vertebrae in the skeleton, a feature that he said he would not have known about previously, as there are no modern analogues of the elongated and thick tail in any large quadrupedal mammal. The second incomplete skeleton came from Antony, this time more carefully removed with supervision from experts than the first skeleton. In it, he was able to confirm six lumbar vertebrae and three sacral vertebrae, all of which were extremely strong and probably supported the long tail. Most notable to Cuvier was the confirmation that Anoplotherium had two large fingers and one small finger on its front legs, which was unusual for mammals related to it. Significance in palaeontological history Although Palaeotherium and Anoplotherium are not well-recognized compared to fossil animals of other periods (i.e. Mesozoic dinosaurs and Neogene-Quaternary mammals), their fossil discoveries in Montmartre and formal descriptions by Cuvier are recognized as critical moments that pioneered palaeontology to the modern era. 
Unlike Pleistocene fossil genera in the Americas in early palaeontological history such as Megatherium and Mammut, Palaeotherium and Anoplotherium were not found in surface-level deposits but embedded in deeper, harder rock deposits dating to the Eocene. People in Paris had been familiar with animal skeletons in their area for centuries, some of which were later kept and formally described. However, it was Cuvier who formally erected two fossil genera that came from older deposits, and from his homeland in the continent of Europe instead of the Americas where Megatherium and Mammut were found. The Palaeogene-aged fossils left no evidence of any later descendants, extinct or extant, although the similarities of Palaeotherium to tapirs made proving the theory more difficult. He noticed that below the gypsum were older sediments of seashells and reptiles like what Cuvier described as a giant "crocodile", which would later be known as Mosasaurus. Cuvier knew then that the world that Anoplotherium and Palaeotherium came from occupied a distinct span of time, falling after the earlier age of sea reptiles and before the later times of Megatherium and Mammut, thereby proving the concept of natural extinction. Cuvier's descriptions of an endocast (an internal cast of the braincase) of a cerebral hemisphere belonging to a broken skull of A. commune from Montmartre, starting from 1804 up to 1822, are recognized as the first true instance of palaeoneurology, the study of brain evolution. The very first definition of an "endocast" dates back to 1822 when Cuvier described a mould of the brain of A. commune, noticing that it offered hints to the true shape of the brain of the now-extinct mammal (although it was later found to be a portion of the brain rather than the entirety of it). Since the first endocast study, many other brain studies have been conducted for other fossil mammals from the second half of the 19th century onward. An 1822 description by Cuvier of a healed fractured femur of A. commune is cited as an early instance of palaeopathology, the study of ancient diseases and injuries on prehistoric organisms. Early depictions In 1812, Cuvier published his drawing of a skeletal reconstruction of A. commune based on known fossil remains of the species including the aforementioned incomplete skeletons. Based on the robust build of the mammal species, he hypothesized that its body structure was similar to otters except for its legs, that it was adapted for semi-aquatic life by swimming for consumption of aquatic plants, lacking long ears similar to semi-aquatic mammals, and living in marshy environments. Cuvier suggested that its lifestyle was therefore similar to semi-aquatic quadrupedal mammals like hippopotamuses and muroid rodents. He thought that in comparison, other species of Anoplotherium such as A. medium and A. minus were adapted for terrestrial behaviours and mixed feeding (browsing and grazing). Today, the reconstruction for the skeletal anatomy has aged well, mostly standing the test of time since 1812. Anoplotherium and Palaeotherium were also depicted in 1822 drawings by the French palaeontologist Charles Léopold Laurillard under the direction of Cuvier, although the restorations were not as detailed as Cuvier's. The reconstruction of Anoplotherium as an aquatic swimmer was supported by multiple 19th century European palaeontologists and persisted for over a century until 1938 when M. 
Dor rejected the theory that the genus was aquatic-adapted, based on anatomical differences from otters and hippopotamuses that contradict semi-aquatic behaviours and are more consistent with terrestrial life. This rejection was supported by Jerry J. Hooker in 2007 and Svitozar Davydenko et al. in 2023 based on anatomical traits, although the former disagreed with Dor's observations on the tail. Hooker argued that although the distal caudal vertebrae of the anoplothere are less prominent than those of kangaroos (Macropus), the vertebral patterns of Anoplotherium are more similar to those of Macropus than to ungulates like Bos or Equus. Today, Anoplotherium is thought to be a terrestrial browser with specialized behaviours. A. commune was notably depicted in the Crystal Palace Dinosaurs attraction in the Crystal Palace Park in the United Kingdom, open to the public since 1854 and constructed by English sculptor Benjamin Waterhouse Hawkins. More specifically, three statues of A. commune were made, two of which are standing and the third of which is in a reposed position. These statues resemble hybrids of deer and big cats and measure long. Its inclusion in the Crystal Palace Park reflects the popularity of and public interest in Anoplotherium in the 19th century; it was such an icon of palaeontology, geology, and natural history that it was regularly incorporated into palaeontological texts and classrooms (its popularity has diminished since the 20th century). The sculptures of A. commune were overall based on Hawkins closely following Cuvier's description of the genus from known remains, including Cuvier's unpublished speculations about its robust musculature, which are seen as accurate by modern-day standards. Hawkins also deviated from Cuvier's descriptions, however, likely basing the facial design and the inaccurate tetradactyl limbs (four toes on each foot), instead of didactyl or tridactyl limbs, on extant camelids. Besides these errors, the statues have largely been accurate to modern-day depictions of Anoplotherium. Confusions with other mammal groups For much of the 19th century, palaeontologists confused mammals of other families with Anoplotherium, largely because palaeontology was still at an early stage. One of the earlier examples dates to 1822, when Cuvier erected the names A. gracile, A. murinum, A. obliquum, A. leporinum, and A. secundaria, replacing earlier species names within Anoplotherium outside of A. commune. In A. gracile, he noticed differences in the molars, for which he erected the subgenus Xiphodon. For A. leporinum, A. murinum, and A. obliquum, the subgenus Dichobune was created by Cuvier based on their small sizes. In 1848, French palaeontologist Auguste Pomel promoted the two subgenera to genus rank and erected an additional genus Amphimeryx for A. murinus and A. obliquus. The revised taxonomies were followed by subsequent palaeontologists such as Paul Gervais. Therefore, the species are no longer classified within Anoplotherium but in other genera. Other mammals initially confused with the genus Anoplotherium but eventually reclassified within the 19th century represented the endemic European artiodactyl family Cainotheriidae (Cainotherium), European and Indian subcontinental members of the perissodactyl family Chalicotheriidae (Anisodon and Nestoritherium), and even endemic South American members of the order Litopterna (Scalabrinitherium and Proterotherium). 
Revisions within the Anoplotheriidae In 1851, Pomel observed that Anoplotherium species could be determined as having either didactyl hooves (reduced third index) or tridactyl hooves (better-developed third index) and that the only previously erected species that are valid are A. commune and A. secundaria. In addition, he erected new species based on additional remains: A. duvernoyi (based on Cuvier's fossil illustrations of A. commune), A. platypus, A. laurillardi (convex incisors on the anterior surface), and A. cuvieri. The species name A. laurillardi derives from Charles Laurillard. French palaeontologist Paul Gervais in 1852 named the genus Eurytherium based on its presence of tridactyl rather than didactyl hooves; he made the new species E. latipes the type species and A. platypus a synonym of it. Henri Filhol would follow Gervais by erecting E. quercyi and E. minus based on dental sizes and reclassifying A. secundarium (or A. secundaria) to Eurytherium. In 1862, Ludwig Rütimeyer erected the subgenus Diplobune for the genus Dichobune on the basis that it was an evolutionary transition between Anoplotherium secundarium and the dichobunid. However, it was promoted to a distinct genus by Oscar Fraas by 1870, with one species, D. bavaricum, being placed into it. In 1883, Max Schlosser made Eurytherium a synonym of Anoplotherium because he argued that the differences in limb anatomy and dentition distinguished species rather than defined an entire genus. Schlosser pointed out that all species of Anoplotherium in some form had three indexes, despite A. commune having a less developed third index than A. latipes. He also reinforced the idea that "A. platypus" is a synonym of A. latipes. The name A. latipes takes priority over A. platypus to the modern day because Pomel in 1851 did not list any specimen for the species, effectively making it a nomen dubium. He also mentioned that the status of A. duvernoyi was not stable due to being based on illustrations, which he considered a "hopeless effort". He also supported Diplobune as a valid genus, arguing that A. secundaria should be renamed D. secundaria based on its dentition and smaller size. Schlosser also said that A. cuvieri was an invalid species because the diagnosis, based on isolated metatarsal bones, was not sufficient. Richard Lydekker erected the species A. cayluxense in 1885 based on its smaller size and unique variations in the molar cusps. He also demoted the genus Diplobune to a synonym of Anoplotherium, meaning that the former's species were added or readded to Anoplotherium as A. secundarium, A. quercyi, A. modicum, A. bavaricum, and A. minus (= A. minor, Filhol 1877). The synonymy of Diplobune with Anoplotherium was not supported by Hans Georg Stehlin in 1910, as he argued that the former was generically distinct from the latter despite their close relations, thus restoring the previous species into Diplobune (with the exception of D. modicum, which he synonymized with D. bavarica) and adding "A. secundarium" into Diplobune as D. secundaria. He also wrote that A. cayluxense was a synonym of D. secundaria. Stehlin also tentatively referred "A." obliquum to the genus Haplomeryx as H? obliquum. As a result of the revisions, the only valid species of Anoplotherium were A. commune, A. latipes, and A. laurillardi. In 1922, Wilhelm Otto Dietrich erected the fourth species A. 
pompeckji from the locality of Mähringen in Germany, named in honor of German palaeontologist Josef Felix Pompeckj. The species was described as a medium-sized tridactyl species with 4-fingered front limbs and 3-toed hind limbs with slimmer hand bone proportions and a smaller astragalus. A. pompeckji is the least characterized species and has similar dentition to A. laurillardi, making its status less certain compared to the three other species. In 1964, palaeontologist Louis de Bonis reviewed briefly the taxonomic synonyms of Anoplotherium, considering that A. duvernoyi was based on a young individual with incisor characteristics that Pomel did not specify and that A. cuvieri does not differ in metacarpal dimensions from A. laurillardi. He followed Stehlin in recognizing the three main species of Anoplotherium, although he did not mention A. pompeckji in his review. Classification Anoplotherium is the type genus of the Anoplotheriidae, a Palaeogene artiodactyl family endemic to western Europe that lived from the Middle Eocene to the Early Oligocene (~44 to 30 Ma, possible earliest record at ~48 Ma). The exact evolutionary origins and dispersals of the anoplotheriids are uncertain, but they exclusively resided within the continent when it was an archipelago that was isolated by seaway barriers from other regions such as Balkanatolia and the rest of eastern Eurasia. The Anoplotheriidae's relations with other members of the Artiodactyla are not well-resolved, with some determining it to be either a tylopod (which includes camelids and merycoidodonts of the Palaeogene) or a close relative to the infraorder and some others believing that it may have been closer to the Ruminantia (which includes tragulids and other close Palaeogene relatives). The Anoplotheriidae consists of two subfamilies, the Dacrytheriinae and Anoplotheriinae, the latter of which is the younger subfamily that Anoplotherium belongs to. The Dacrytheriinae is the older subfamily of the two that first appeared in the Middle Eocene (since the Mammal Palaeozone Zones unit MP13, possibly up to MP10), although some authors consider them to be a separate family in the form of the Dacrytheriidae. Anoplotheriines made their first appearances by the Late Eocene (MP15-MP16), or ~41-40 Ma, within western Europe with Duerotherium and Robiatherium. By MP17a-MP17b, however, there is a notable gap in the fossil record of anoplotheriines overall as the former two genera seemingly made their last appearances by the previous MP level MP16. By MP18, Anoplotherium and Diplobune made their first appearances in western Europe, but their exact origins are unknown. The two genera were widespread throughout western Europe based on abundant fossil evidence spanning from Portugal, Spain, United Kingdom, France, Germany, and Switzerland for much of pre-Grande Coupure Europe (prior to MP21), meaning that they were typical elements of the Late Eocene up until the earliest Oligocene. The earlier anoplotheriines are considered to be smaller species whereas the later anoplotheriines were larger. Anoplotherium and Diplobune are considered the most derived (or evolutionarily recent) anoplotheriids based on dental morphology and achieved gigantism amongst non-whippomorph artiodactyls, making them some of the largest non-whippomorph artiodactyls of the Palaeogene as well as amongst the largest mammals to roam western Europe at the time (all species of Anoplotherium were large to very large whereas not all species of Diplobune were large). 
Conducting studies focused on the phylogenetic relations within the Anoplotheriidae has proven difficult due to the general scarcity of fossil specimens of most genera. The phylogenetic relations of the Anoplotheriidae as well as the Xiphodontidae, Mixtotheriidae, and Cainotheriidae have also been elusive due to the selenodont morphologies of the molars, which were convergent with tylopods or ruminants. Some researchers considered the selenodont families Anoplotheriidae, Xiphodontidae, and Cainotheriidae to be within Tylopoda due to postcranial features that were similar to the tylopods from North America in the Palaeogene. Other researchers tie them as being more closely related to ruminants than tylopods based on dental morphology. Different phylogenetic analyses have produced different results for the "derived" selenodont Eocene European artiodactyl families, making it uncertain whether they were closer to the Tylopoda or Ruminantia. In an article published in 2019, Romain Weppe et al. conducted a phylogenetic analysis on the Cainotherioidea within the Artiodactyla based on mandibular and dental characteristics, specifically in terms of relationships with artiodactyls of the Palaeogene. The results retrieved that the superfamily was closely related to the Mixtotheriidae and Anoplotheriidae. They determined that the Cainotheriidae, Robiacinidae, Anoplotheriidae, and Mixtotheriidae formed a clade that was the sister group to the Ruminantia while Tylopoda, along with the Amphimerycidae and Xiphodontidae split earlier in the tree. The phylogenetic tree used for the journal and another published work about the cainotherioids is outlined below: In 2020, Vincent Luccisano et al. created a phylogenetic tree of the basal artiodactyls, a majority endemic to western Europe, from the Palaeogene. In one clade, the "bunoselenodont endemic European" Mixtotheriidae, Anoplotheriidae, Xiphodontidae, Amphimerycidae, Cainotheriidae, and Robiacinidae are grouped together with the Ruminantia. The phylogenetic tree as produced by the authors is shown below: In 2022, Weppe created a phylogenetic analysis in his academic thesis regarding Palaeogene artiodactyl lineages, focusing most specifically on the endemic European families. The phylogenetic tree, according to Weppe, is the first to conduct phylogenetic affinities of all anoplotheriid genera, although not all individual species were included. He found that the Anoplotheriidae, Mixtotheriidae, and Cainotherioidea form a clade based on synapomorphic dental traits (traits thought to have originated from their most recent common ancestor). The result, Weppe mentioned, matches up with previous phylogenetic analyses on the Cainotherioidea with other endemic European Palaeogene artiodactyls that support the families as a clade. As a result, he argued that the proposed superfamily Anoplotherioidea, composing of the Anoplotheriidae and Xiphodontidae as proposed by Alan W. Gentry and Hooker in 1988, is invalid due to the polyphyly of the lineages in the phylogenetic analysis. However, the Xiphodontidae was still found to compose part of a wider clade with the three other groups. Anoplotherium and Diplobune compose a clade of the Anoplotheriidae because of their derived dental traits, supported by them being the latest-appearing anoplotheriids. Description Skull The Anoplotheriidae is characterized in part by low-proportioned skulls with elongated muzzles (the muzzle aligns with the top of the cranium in the case of Anoplotherium), and a wide-open skull orbit. 
Anoplotherium lacks bony processes and lacrimal fossae. It has large paroccipital processes and shorter postorbital process projections of the lacrimal bone. The skull of Anoplotherium is narrow and elongated, with a constricted postorbital bone indicating poor brain development. It features robust sagittal and nuchal crests, the former having high elevations and emerging from low postorbital ridges and the latter having complicated elevation shifts. The back has a circular foramen magnum and large occipital condyles. The underside has an elongated palate with glenoid surfaces and strong post-glenoid processes of the squamosal bone. The skull's bones are robust, with the spongy diploë bone being greatly developed. The skull's strength is attributed to massive temporal muscles as part of an overall strong body build. The skull has a shallow sella turcica, a pear-shaped cranial fossa, extensive parietal bones, large squamosal bone, narrow occipital bone, and two small occipital buns for muscle attachment. Many cranial traits seen in Anoplotherium are also found in the closely related Diplobune. In the auditory region (including the temporal bones), the periotic bone of the inner ear is extensive, the internal auditory meatus and facial canal openings of the temporal bone being visible in the lower triangular area of the periotic bone. The tympanic part of the temporal bone is connected partially to the squamosal bone, remains separate from the periotic bone, and consists of a small but thick auditory bulla (hollow bony structure of the auditory region), which projects underneath the petrous part of the temporal bone. In a skull fragment of A. laurillardi with incisors and canine alveoli, the known length of the nasal region is large, measuring . The trait of large nasals is similar to what was observed in a skull of Diplobune secundaria, which are recorded to be massive, elongated, and connected to each other and the maxilla. Cyril Gagnaison and Jean-Jacques Leroux proposed in the case of D. secundaria that the elongated nasal region supports the presence of a very tapered tongue, which similar to giraffes may have allowed it to pull plant branches. Endocast anatomy In 1913, R.W. Palmer conducted studies on the brain cast from a cranium of Anoplotherium commune, originating from the Phosphorites of Quercy within the British Museum collections (the endocast is now in the National Museum of Natural History, France as the specimen BMNH 3753). The individual in question was estimated to have weighed by its death similar to extant llamas, weighing considerably less than typical estimates of adult Anoplotherium. The total length of the brain is under , its volume measuring approximately . The form of the brain is naturally narrow and elongated. The cerebellum and cerebrum are both at high positions compared to modern ungulates that have brain hemispheres located above the cerebellum. Palmer noticed that the brain was similar to the modern aardvark (Orycteropus afer). The highly-developed cerebrum that enables a strong sense of smell from Anoplotherium makes it macrosmatic (derived in sense of smell), as also indicated by the enlarged olfactory bulbs and the small size of the neocortex. In both Anoplotherium and Diplobune, the rhinal fissure divides the brain hemisphere horizontally and equally in half. The cerebellar vermis of the cerebellum is divided almost equally by the primary fissure of cerebellum (or "fissura prima"). 
Additionally, the olfactory bulbs are thick, and the olfactory tubercles take the form of smooth circular elevations that are curved more backwards than those of the aardvark and are easily noticeable. In another endocast of Anoplotherium, the olfactory bulbs compose 7.5% of the total volume of the brain, above average for both extinct and extant artiodactyls. The neocortex area of the brain, responsible for sensory perception and related brain functions, covers 28% of the medium-sized A. commune endocast's surface area. Another endocast, which belongs to Anoplotherium sp., has recorded measurements for its cerebrum surface area, neopallium surface area, and endocranial volume. The ratio of neopallium surface to cerebrum surface gives a neocortical proportion of 61.6% of the total brain surface area, meaning that adult Anoplotherium had large brain and neocortical surface areas compared to most Palaeogene artiodactyls, the latter measurement being on par with or less than those of modern artiodactyls. Anoplotherium and other anoplotheriids share traits of generally elongated and parallel sulci (shallow furrows) in the cerebral cortex, as well as a vertical (cordial) sulcus corresponding to the lateral (side) sulcus. The fissures (deep furrows) on the surface of the central area of the brain show clear formations of a complex lateral sulcus (also known as the Sylvian fissure) in a process known as operculization. The operculization of the brain of anoplotheriids is similar to the Anthracotheriidae but does not indicate any close phylogenetic relation, which means that the similarities are an instance of parallel evolution. The measurements of the endocasts of Anoplotherium are larger than those of other Palaeogene artiodactyls in a 2015 study by Ghislain Thiery and Stéphane Ducrocq. Dentition Unlike most mammal fossil genera, Anoplotherium is diagnosed mainly on postcranial rather than dental morphology, but it does have diagnoses based on the latter. The dental formula of Anoplotherium and other anoplotheriids is 3.1.4.3 in each quadrant for a total of 44 teeth, consistent with the primitive dental formula for early-middle Palaeogene placental mammals. Anoplotheriids have selenodont (crescent-shaped ridge form) or bunoselenodont (bunodont and selenodont) premolars (P/p) and molars (M/m) made for leaf-browsing diets. The canines (C/c) of the Anoplotheriidae are overall undifferentiated from the incisors (I/i). The lower premolars of the family are piercing and elongated. The upper molars are bunoselenodont in form while the lower molars have selenodont labial cuspids and bunodont (or rounded) lingual cuspids. The subfamily Anoplotheriinae differs from the Dacrytheriinae by the molariform premolars with crescent-shaped paraconules and the lower molars that lack a third cusp between the metaconid and entoconid. The upper molars of Anoplotherium are characterized by trapezoidal outlines in occlusal views (or top views of the tooth enamel), W-shaped ectolophs (crests or ridges of upper molar teeth), and specific differences in cusps. More specifically, the upper molars of the genus contain near-central and conical protocone cusps closely aligned with the mesostyle cusps, conical paraconules that are connected to the parastyle by posterior crests, and compressed parastyles and mesostyles. The lower molars of the anoplotheriid contain the paraconid and metaconid cusps, which are markedly separated by a valley between them. 
Vertebrae and ribs Anoplotherium has 7 total cervical vertebrae for a series of C1-C7, typical of most mammals. The atlas (C1) is similar to those of camelids such as Lama in form as well as the position of the "alar foramina" in association with facet joint connections involving the axis (C2). An axis that was attributed to A. commune (but also possibly belonging to its close relative Diplobune secundaria) is elongated in length and has a diminished spinous process. The vertebrae C3-C7 are analogous to Cainotherium. The C4 vertebra appears slanted, which hints towards the neck changing in orientation from vertebra C3 to C4 as a potential bending in the front area of the neck, similar to modern bears. As a result of the neck vertebrae morphology, Anoplotherium likely had a sloped, upward position of the neck. Anoplotherium also had 12 thoracic vertebrae, 6 lumbar vertebrae, and 3 sacral vertebrae. The lumbar vertebrae, especially L4-L6, contain transverse processes that are wide, long, and point slightly towards a forward direction. The 3 sacral vertebrae are robust and contain apophyses for strong attachments to the long tail. The vertebrae of the anoplotheriid genus are built for typical ungulate movement. The most unusual postcranial aspect of Anoplotherium compared to other artiodactyls is the long and thick tail, which is made up of 22 caudal vertebrae for strong muscle support. The frontal vertebrae had well-pronounced process, and all vertebrae except for the farthest distal ones have haemal arches on them. Like the chalicothere Chalicotherium and unlike other mammals like caprines of the genus Ovis and Cainotherium, the ribs curve in wider areas and their tubercles do not project as much in the dorsal direction. The ribs of Anoplotherium form a barrel-shaped trunk, meaning that the rib cage is much wider than those of modern ruminants. The ribs generally project sideways due to the very curved positions of them, the position of the tubercle, and the thoracic vertebrae projecting on the upper sides. Limbs Anoplotherium has short limbs and is thought to have been unguligrade in limb positions, with most species having three toes on both their front and hind limbs. A. commune is differentiated from the similar A. latipes by its didactyl ("two-toed") as opposed to tridactyl ("three-toed") digits. Front limbs The scapula (or shoulder blade) has a convex coracoid border and is similar to that of Diplobune. Similar to camels (Camelus), the supraspinous fossa is broader than the infraspinous fossa, but camels have narrower scapulae, especially in distal (back) ends of the supraspinous fossa. The scapular spine is robust, thick, and gradually rises in height distally up until it reaches the edge of the glenoid cavities like camels but unlike most other modern artiodactyls. The coracoid process (normally resembling a small hooklike structure) is reduced to a blunt knob that only slightly projects. The wide supraspinous fossa and broadly curved coracoid edge of the scapulae of Anoplotherium are unlike Cainotherium and Merycoidodon because Anoplotherium shares neither any triangular shape of the shoulder blades nor narrow supraspinous fossae. The elbow morphology of Anoplotherium, based on the structures and articulations of elbow bones like the humerus, radius, and ulna, shows evidence of adaptations to moving the elbow up and down in supination-pronation rotations by 13° maximum. 
A fully extended elbow could make an angle between the ulna and humerus that measures approximately 135°, indicating high flexibility compared to other artiodactyls, including the already semi-flexible elbows of Cainotherium. Similar in wrist morphology to pigs of the genus Sus, the hooves of Anoplotherium spread out by ~16° when pointed downward, as supported by footprint morphology. The wrist may have been able to rotate up and down but only to a limited degree and nowhere near the flexible wrist morphologies of primates, suggesting that the adaptation was not a main feature of the artiodactyl genus but the result of regaining a primitive trait. The carpus consists of the scaphoid, lunate, triquetrum, and pisiform in its first row and the trapezium, trapezoid, capitate, and hamate in its second. Anoplotherium has four digits, but those of digit V and, in the case of A. commune, digit II are poorly developed. The second finger (digit II) of Anoplotherium has no capability of rotation or flexible movements, which signifies that it did not play any thumb-like role as in primates or the giant panda. Hind limbs The ilium, part of the hip bone of the greater pelvis, is broad and has a firmly rounded iliac crest that meets with the concave underside edge at a sharp angle. The ilium of Anoplotherium can be differentiated from Palaeotherium by the shorter iliac body, the longer ischium (the lower and back area of the hip bone), and a straighter back edge of the pelvis that results in a longer pubic symphysis. The acetabular fossa region of the acetabulum surface of the pelvis is large, its acetabular notch being in a posterior (or back) position similar to that in Chalicotherium. The femur is larger than the tibia, has only two trochanters similar to other basal artiodactyls, has a narrow gap between its femoral head and greater trochanter, and has a long femoral neck. The trochanteric fossa, a hollow at the surface of the greater trochanter, is deep and narrow in shape, deepening towards the sides. The tibia is robust, strongly supporting muscle attachments based on its crests and processes. The distal end of the fibula plus the medial malleolus prominence of the tibia enclose the center area of the astragalus in order to prevent it from moving sideways. Anoplotheriids with known postcranial fossils have proportionally wide, stocky, and oblique astragali (or talus or ankle bone), differing widely from other artiodactyls. A. latipes differs from A. commune in part by the morphologies of the facets and fossae of the astragalus and a shorter and more robust calcaneum (heel bone). The astragali of anoplotheres share levels of elevations and positions of specific facets with the merycoidodonts that no modern artiodactyls share, possibly an instance of convergent evolution. The medial (sustentacular) facet of Anoplotherium and Diplobune is concave, contrasting with the flat to slightly convex facet of Dacrytherium. The tarsus consists of the navicular, three cuneiform bones, and a cuboid bone. The foot of A. commune consisted of two toes, as indicated by the relatively small outermost and middle cuneiform bones. Footprints Large-sized footprints from southern France and northern Spain that date to the Late Eocene may have been from Anoplotherium. The ichnogenus is named Anoplotheriipus and was first described from the department of Gard in France by Paul Ellenberger in 1980. The derivation of the genus name refers to the ichnotaxon being closest in affinity to the Anoplotheriidae. 
The ichnogenus is diagnosed as belonging to a very large artiodactyl, the autopod area exceeding that of A. commune by ~33%, the subparallel position of the two hooves, and the posterior area of the pedal sole being as transversely wide as the anterior area of the pedal sole. Anoplotheriipus is round to rectangular in shape with broad and anteriorly-pronounced cloven digit imprints that resemble poorly-preserved camel tracks. The similar artiodactyl ichnogenus Diplartiopus differs from it by the parallelism of the two fingers that are more elongated. The type species is Anoplotheriipus lavocati, which Ellenberger named in honor of palaeontologist René Lavocat and considered the "most majestic" of the three ichnospecies due to the displayed specific mobility of the metatarsals. It measures to in length and in width, is stocky in shape, and measures 12° in toe divergence. The two fingers are nearly equal in length and, at minimum, measure without the metatarsal bones being taken into account and with the metatarsals. The measurements are considerably higher than typical measurements of the toes of A. commune, which are without the metatarsals and with. Anoplotheriipus similicommunis, deriving in species etymology from "similis" (similar in Latin) and A. commune, is similar to the type ichnospecies but is smaller, corresponding more directly to typical foot measurements of A. commune by its length of and width of . The angle of divergence between the two main toes is 10°, and the minimum lengths of the fingers are without the metatarsals and with. Anoplotheriipus compactus is the third ichnospecies, which in species etymology derives from the Latin word "compactus" meaning "compact" in English due to the short and rounded autopod. It has a less definitive diagnosis compared to the other two ichnotaxa but is similar in size to A. similicommunis and has a nearly circular pedal sole for supporting slightly shorter fingers. Its length is while the width is , and the finger lengths measure - without the metatarsals and - with. The footprints may have been produced by A. latipes although the answer is still uncertain. Size Anoplotherium species were particularly large in the Late Eocene, reaching sizes unusual for most artiodactyl groups in the Palaeogene. The large size estimates began in 1995 when Martinez and Sudre made weight estimates of Palaeogene artiodactyls based on the dimensions of their astragali and M1 teeth. The astragali are common bones in fossil assemblages due to their reduced vulnerability to fragmentation as a result of their stocky shape and compact structure, explaining their choice for using it. The two measurements for A. commune yielded different results, with the M1 giving the body mass of and the astragalus yielding . These estimates are far larger than those of most other Palaeogene artiodactyls in the study, although the researchers pointed out that the M1 measurements could be overestimated compared to the astragalus estimate. In 2014, Takehisa Tsubamoto reexamined the relationship between astragalus size and estimated body mass based on extensive studies of extant terrestrial mammals, reapplying the methods to Palaeogene artiodactyls previously tested by Sudre and Martinez. The researcher used linear measurements and their products with adjusted correction factors. 
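The logic of these astragalus-based body-mass estimates can be sketched as a simple log-log (allometric) regression with a correction factor, of the general kind used in both studies. The sketch below is illustrative only: the intercept, slope, correction factor, and the astragalus measurement are hypothetical placeholders, not the values published by Martinez and Sudre (1995) or Tsubamoto (2014).

```python
import math

def estimate_body_mass_kg(astragalus_length_mm, intercept=0.7, slope=2.8,
                          correction_factor=1.05):
    """Body mass (kg) from a log-log allometric regression on an astragalus measurement.

    The intercept, slope, and correction factor are hypothetical placeholders,
    not the coefficients published in the studies discussed in the text.
    """
    ln_mass_g = intercept + slope * math.log(astragalus_length_mm)
    return correction_factor * math.exp(ln_mass_g) / 1000.0  # grams -> kilograms

# Hypothetical 55 mm astragalus measurement:
print(round(estimate_body_mass_kg(55.0), 1), "kg")
```

The correction factor in such regressions compensates for the bias introduced when back-transforming a prediction from logarithmic to linear units.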
Tsubamoto's recalculations resulted in somewhat lower estimates compared to the 1995 results (with the exception of Diplobune minor, which has a shorter astragalus proportion than most other artiodactyls). In 2022, Weppe calculated the body mass of A. commune, yielding . In 2023, Ainara Badiola et al. estimated that the weight of Anoplotherium ranges between and . In their calculations, A. laurillardi was the smallest species, weighing on average . A. latipes was larger, with an average weight estimate of , and A. commune has the heaviest weight estimates at . In 2007, Hooker made size estimates of A. latipes based on an incomplete skeleton of an immature individual from the Hamstead Member of the Bouldnor Formation in the Isle of Wight, United Kingdom. The reconstructed Hamstead level 3 individual gave size measurements of in head and body length. The immature Anoplotherium individual's humerus measures long, so the humeri of mature individuals may have measured about long. As a result, adult A. latipes may have measured in head and body length and in shoulder height. When standing up bipedally on its hind limbs with the back, neck and head at an angle of about 15°, the Hamstead level 3 individual might have reached when browsing while more mature A. latipes individuals might have stood just over . Palaeobiology Since 2007, Anoplotherium has been thought to have been a quadruped that could have stood on its hind legs as a bipedal browser thanks to the strong pelvis, long and robust tail for balance, and splayed hind legs. The bipedal adaptations show some degree of convergence with other animals like chalicotheres, various genera of ground sloths, giant pandas (Ailuropoda melanoleuca), gorillas (Gorilla), and the gerenuk (Litocranius walleri). Otherwise, the general body form appears to resemble that of the Canidae. As a result of the bend at the C3-C4 cervical vertebrae, the neck and head could have maintained horizontal orientations while standing bipedally. The forelimbs could have extended horizontally beyond the snout while the individual stood bipedally, although it could not have reached upward and did not have claws or prehensile organs on the manus unlike Chalicotherium. Therefore, the forearms may not have been used for ripping and tearing plants but as bipedal support. It may have browsed while standing up at a steep angle more comparable to the gerenuk than to Chalicotherium. Its large size and ability to bipedally browse may have given Anoplotherium few sources of terrestrial competition other than from Palaeotherium magnum, a large-sized palaeothere with a long neck that may have reached in body mass. The subspecies P. magnum magnum would have reached just over in browsing height in quadruped stance, and there is no evidence for any bipedal adaptation in palaeotheres. Anoplotherium likely engaged in degrees of niche partitioning with the Late Eocene palaeotheres and Diplobune. While all were folivorous browsers, the palaeotheres Plagiolophus and Palaeotherium may have had small degrees of frugivory while Diplobune was likely adapted to arborealism. How well-adapted Anoplotherium was to abrasive leaves and drier but still subhumid conditions in the Late Eocene is not well-known and requires future research in dentition for answers. Hooker proposed the possibility that the didactyl A. commune and the tridactyl A. latipes may have been sexual dimorphs of the same species (in which case A. latipes would be a synonym of A. commune). 
There are few consistent differences in dental morphology between the two species, and any small differences may simply reflect individual variation. In this scenario, the difference in toe number, with A. latipes being three-toed and A. commune two-toed, would reflect sexual dimorphism. The palaeontologist explained that while there is no evidence for the extra digit touching the ground while the individual was walking, the extra digit of A. latipes may have served as extra balance while browsing bipedally. The third digit might have also served as part of sparring in intraspecific competition between male individuals. However, he noted that despite the apparent "advantage" of A. latipes in bipedal browsing, there is no evidence of sexual differences in dietary behaviours or preferences. In addition, both species are found in the same localities, namely Bouldnor in the United Kingdom as well as La Débruge and Montmartre in France; although A. latipes is more common at La Débruge than at Montmartre, this may be the result of behavioural and/or taphonomic factors. Grégoire Métais expressed being unconvinced that the third toe of A. latipes is a sexually dimorphic adaptation for bipedal browsing, suggesting instead that it was used in male sparring if A. latipes and A. commune were sexual dimorphs. Some of the inferences drawn from the morphology of Anoplotherium have been criticized. In their study of the morphology that allows the gerenuk to browse bipedally, researchers Matt Cartmill and Kaye Brown argued that several postcranial features supposedly representing bipedal adaptations in Litocranius and other bipedal genera do not distinguish the gerenuk from other bovids. Ciaran Clark et al. (including J.J. Hooker) found from micro-CT scans that the trabecular architecture of the proximal area of the femur did not support Anoplotherium being a facultative bipedal browser. This may have been the result of poor scan quality and a small sample size, and higher-contrast micro-CT data may provide better postural information. The footprint track patterns of Anoplotheriipus suggest that individuals of Anoplotherium walked at very similar speeds. Based on groupings of the footprint ichnotaxon within the locality of Fondota in the municipality of Abiego in Spain, Anoplotherium may have commonly walked in small groups, which may imply some gregarious (or sociable) behaviour. Palaeoecology Early pre–Grande Coupure Europe For much of the Eocene, a hothouse climate prevailed, with humid, tropical environments and consistently high precipitation. Modern mammalian orders including the Perissodactyla, Artiodactyla, and Primates (or the suborder Euprimates) appeared already by the Early Eocene, diversifying rapidly and developing dentitions specialized for folivory. The omnivorous forms mostly either switched to folivorous diets or went extinct by the Middle Eocene (47–37 million years ago) along with the archaic "condylarths". By the Late Eocene (approx. 37–33 mya), the dentitions of most ungulate forms shifted from bunodont (or rounded) cusps to cutting ridges (i.e. lophs) for folivorous diets. Land connections between western Europe and North America were interrupted around 53 Ma. From the Early Eocene up until the Grande Coupure extinction event (56–33.9 mya), western Eurasia was separated into three landmasses: western Europe (an archipelago), Balkanatolia (in-between the Paratethys Sea of the north and the Neotethys Ocean of the south), and eastern Eurasia. 
The Holarctic mammalian faunas of western Europe were therefore mostly isolated from other landmasses including Greenland, Africa, and eastern Eurasia, allowing for endemism to develop. As a result, the European mammals of the Late Eocene (MP17–MP20 of the Mammal Palaeogene zones) were mostly descendants of endemic Middle Eocene groups. The appearances of derived anoplotheriines by MP18 occurred long after the extinction of the endemic European perissodactyl family Lophiodontidae (including the largest lophiodont, Lophiodon lautricense) in MP16, likely the result of a shift from humid and highly tropical environments to drier and more temperate forests with open areas and more abrasive vegetation. The surviving herbivorous faunas shifted their dentitions and dietary strategies accordingly to adapt. The environments were still subhumid and full of subtropical evergreen forests, however. The Palaeotheriidae was the sole remaining European perissodactyl group, and frugivorous-folivorous or purely folivorous artiodactyls became the dominant group in western Europe. MP16 also marked the last appearances of most European crocodylomorphs, of which the alligatoroid Diplocynodon was the only survivor due to seemingly adapting to the general decline of tropical climates of the Late Eocene. Late Eocene After a considerable gap in anoplotheriine fossils in MP17a and MP17b, the derived anoplotheriines Anoplotherium and Diplobune made their first known appearances in the MP18 unit. They were exclusive to the western European archipelago, but their exact origins and dispersal routes are unknown. By then, Anoplotherium and Diplobune lived in Central Europe (then an island) and the Iberian Peninsula, only the former genus of which later dispersed into southern England by MP19 due to the apparent lack of ocean barriers. Anoplotherium coexisted with a wide diversity of artiodactyls in western Europe by MP18, ranging from the more widespread Dichobunidae, Tapirulidae, and Anthracotheriidae to many other endemic families consisting of the Xiphodontidae, Choeropotamidae, Cebochoeridae, Amphimerycidae, and Cainotheriidae. Anoplotherium also coexisted with the Palaeotheriidae, the remaining perissodactyl family of western Europe. Late Eocene European groups of the clade Ferae were represented predominantly by the Hyaenodonta (Hyaenodontinae, Hyainailourinae, and Proviverrinae) but also included the Carnivoramorpha (Miacidae) and Carnivora (small-sized Amphicyonidae). Other mammal groups present in the Late Eocene of western Europe included the leptictidans (Pseudorhyncocyonidae), primates (Adapoidea and Omomyoidea), eulipotyphlans (Nyctitheriidae), chiropterans, herpetotheriids, apatotherians, and endemic rodents (Pseudosciuridae, Theridomyidae, and Gliridae). The alligatoroid Diplocynodon, present only in Europe since the upper Paleocene, coexisted with pre-Grande Coupure faunas as well. In addition to snakes, frogs, and salamandrids, rich assemblages of lizards are also known in western Europe from MP16-MP20, representing the Iguanidae, Lacertidae, Gekkonidae, Agamidae, Scincidae, Helodermatidae, and Varanoidea. In the MP18 locality of Zambrana in Spain, A. laurillardi and A. sp. 
remains were found with undetermined frog and squamate groups, alligatoroid Diplocynodon, the herpetotheriid Peratherium, rodents (Theridomys, Elfomys, Pseudoltinomys, Remys), omomyid Microchoerus, carnivoraformes Quercygale and Paramiacis, dichobunid Dichobune, xiphodonts Xiphodon and Haplomeryx, and palaeotheres (Palaeotherium, Leptolophus, Iberolophus, Pachynolophus, Paranchilophus). As part of a separate landmass at the time, La Débruge of France, dating to MP18, yielded slightly different faunas that coexisted with A. commune, A. latipes, and A. laurillardi, namely the herpetotheriid Peratherium, rodents (Blainvillimys, Theridomys, Plesiarctomys, Glamys), hyaenodonts (Hyaenodon and Pterodon), amphicyonid Cynodictis, palaeotheres (Plagiolophus, Anchilophus, Palaeotherium), dichobunid Dichobune, choeropotamid Choeropotamus, cebochoerids Cebochoerus and Acotherulum, anoplotheriids Dacrytherium and Diplobune, tapirulid Tapirulus, xiphodonts Xiphodon and Dichodon, cainothere Oxacron, amphimerycid Amphimeryx, and anthracothere Elomeryx. Extinction The Grande Coupure event during the latest Eocene to earliest Oligocene (MP20-MP21) is one of the largest and most abrupt faunal turnovers in the Cenozoic of Western Europe and is coincident with climate forcing events that brought cooler and more seasonal climates. The event led to the extinction of 60% of western European mammalian lineages, which were subsequently replaced by Asian immigrants. The Grande Coupure is often dated directly to the Eocene-Oligocene boundary at 33.9 Ma, although some estimate that the event began slightly later, at 33.6–33.4 mya. The event occurred during or after the Eocene-Oligocene transition, an abrupt shift from a hot greenhouse world that characterised much of the Palaeogene to a coolhouse/icehouse world from the Early Oligocene onwards. The massive drop in temperatures resulted from the first major expansion of the Antarctic ice sheets that caused drastic pCO2 decreases and an estimated drop of ~ in sea level. Many palaeontologists agree that glaciation and the resulting drops in sea level allowed for increased migrations between Balkanatolia and western Europe. The Turgai Strait, which once separated much of Europe from Asia, is often proposed as the main European seaway barrier prior to the Grande Coupure, but some researchers have recently challenged this perception, arguing that it had already receded completely by 37 Ma, long before the Eocene-Oligocene transition. In 2022, Alexis Licht et al. suggested that the Grande Coupure could have possibly been synchronous with the Oi-1 glaciation (33.5 Ma), which records a decline in atmospheric CO2, boosting the Antarctic glaciation that already started by the Eocene-Oligocene transition. The Grande Coupure event also marked a large faunal turnover with the arrivals of later anthracotheres, entelodonts, ruminants (Gelocidae, Lophiomerycidae), rhinocerotoids (Rhinocerotidae, Amynodontidae, Eggysodontidae), carnivorans (later Amphicyonidae, Amphicynodontidae, Nimravidae, and Ursidae), eastern Eurasian rodents (Eomyidae, Cricetidae, and Castoridae), and eulipotyphlans (Erinaceidae). The Eocene-Oligocene transition of western Europe, as a result of the global climatic conditions, is marked by a transition from tropical and subtropical forests to more open, temperate or mixed deciduous habitats with adaptations to increased seasonality. 
While Anoplotherium did not last long in the earliest Oligocene, there are disagreements as to whether it survived the Grande Coupure or went extinct at the event. While evidence points towards Anoplotherium being extirpated from areas like France and the United Kingdom by the Grande Coupure (last occurrences MP20), the perception is complicated by the apparent last survival of A. commune in the MP21 locality of Möhren 19 in southern Germany (the edge of western Europe) along with Palaeotherium medium and Diplobune quercyi (slightly younger localities indicate their extinctions and replacements by Grande Coupure immigrants such as the anthracothere Anthracotherium and the rhinocerotid Epiaceratherium). Hooker pointed out that localities like Möhren 19 span earlier times where the surviving endemic faunas are accompanied by some Grande Coupure immigrants but otherwise were not yet joined by certain immigrants such as Anthracotherium. Additionally, the surviving endemics of the locality are missing from other areas dating to MP21. Therefore, he argued that certain older MP21 localities with surviving endemic faunas fill the long gap between the youngest pre-Grande Coupure Lower Hamstead Member and the younger post-Grande Coupure Upper Hamstead Member within the Bouldnor Formation. This interpretation, Hooker explained, means that the localities represented very brief moments of survival of endemic faunas during the Grande Coupure, therefore supporting the idea of a major and rapid faunal extinction and immigration event, including the extinction of Anoplotherium in the event. The extinctions of a majority of endemic artiodactyls, including Anoplotherium, have been attributed to competition with immigrant faunas, environmental changes from cooling climates, or some combination of the two. Sarah C. Joomun et al. determined that certain faunas may have arrived later and therefore may have not played roles in the extinctions. They concluded that climate change, which led to increased seasonality and changes in plant food availability, led the artiodactyls to become unable to adapt to the major changes and go extinct. Weppe made similar arguments towards climate change being the main cause of the Grande Coupure extinction event, arguing that the cooling climates displaced the previously stable subtropical environments of western Europe and caused a collapse in the artiodactyl community, which after their extinctions left empty ecological niches that were passively filled by immigrant faunas.
Biology and health sciences
Other artiodactyla
Animals
18209401
https://en.wikipedia.org/wiki/Aurelia%20%28cnidarian%29
Aurelia (cnidarian)
Aurelia is a genus of jellyfish that are commonly called moon jellies, which are in the class Scyphozoa. There are currently 25 accepted species and many that are still not formally described. The genus was first described in 1816 by Jean-Baptiste Lamarck in his book Histoire Naturelle des Animaux sans Vertèbres (Natural History of Invertebrates). It has been suggested that Aurelia is the best-studied group of gelatinous zooplankton, with Aurelia aurita the best-studied species in the genus; two other species, Aurelia labiata and Aurelia limbata, were also traditionally investigated throughout the 20th century. In the early 2000s, studies that considered genetic data showed that diversity in Aurelia was higher than expected based solely on morphology, so one cannot confidently attribute the results from most of the previous studies to the species named. More recently, studies have highlighted the morphological variability (including the potential for phenotypic plasticity) in this genus, emphasizing the difficulty of identifying cryptic species. Species of Aurelia can be found in the Atlantic, Arctic, Pacific and Indian Oceans, and seem to be more common in temperate regions, such as in the waters off northern China, Japan, Korea, New Zealand, the northeastern and northwestern coasts of the United States, and those of northern Europe. Aurelia undergoes alternation of generations, whereby the sexually-reproducing pelagic medusa stage is either male or female, and the benthic polyp stage reproduces asexually. Meanwhile, life cycle reversal, in which polyps are formed directly from juvenile and sexually mature medusae or their fragments, was also observed in Aurelia coerulea (= Aurelia sp. 1). Appearance The similar appearances of moon jellyfish are what have made them so hard to identify. They occur in a variety of sizes, but medusae typically average in diameter and in height. The polyps of these jellyfish can grow to tall and their ephyrae have an average diameter of . The adult medusae are typically translucent, but the color of their gut can change based on what they eat; for example, crustaceans can give them a pink or lavender tint, while brine shrimp can give them a more orange tint. Their polyps usually have around 16 tentacles (although Aurelia insularia has 27–33 tentacles), which mostly help with feeding. Feeding The diet of Aurelia is similar to that of other jellyfish. They primarily feed on zooplankton. They may prey on or compete with commercially important fish and their larvae, as well as cause problems for trawling boats when large aggregations occur, as they may clog and damage fishing nets and force fishermen to relocate. Characteristics They are able to sense light and dark and up and down due to rhopalia around the bell margin. After many tests on frogs, it was determined that A. aurita has a proteinaceous venom that causes muscle twitching by inducing irreversible depolarization of the muscle membrane, which is believed to be caused by an increase in the membrane's permeability to sodium ions. Reproduction The medusa stage of the jellyfish reproduces sexually. The males release strings of sperm and the females ingest them. Once the ciliated larvae develop from the egg, they settle on or near the sea floor and develop into benthic polyps. The polyps then reproduce asexually and bud into ephyrae, which later turn into medusae.
Biology and health sciences
Cnidarians
Animals
5985207
https://en.wikipedia.org/wiki/Expansion%20of%20the%20universe
Expansion of the universe
The expansion of the universe is the increase in distance between gravitationally unbound parts of the observable universe with time. It is an intrinsic expansion, so it does not mean that the universe expands "into" anything or that space exists "outside" it. To any observer in the universe, it appears that all but the nearest galaxies (which are bound to each other by gravity) move away at speeds that are proportional to their distance from the observer, on average. While objects cannot move faster than light, this limitation applies only with respect to local reference frames and does not limit the recession rates of cosmologically distant objects. Cosmic expansion is a key feature of Big Bang cosmology. It can be modeled mathematically with the Friedmann–Lemaître–Robertson–Walker metric (FLRW), where it corresponds to an increase in the scale of the spatial part of the universe's spacetime metric tensor (which governs the size and geometry of spacetime). Within this framework, the separation of objects over time is associated with the expansion of space itself. However, this is not a generally covariant description but rather only a choice of coordinates. Contrary to common misconception, it is equally valid to adopt a description in which space does not expand and objects simply move apart while under the influence of their mutual gravity. Although cosmic expansion is often framed as a consequence of general relativity, it is also predicted by Newtonian gravity. According to inflation theory, the universe suddenly expanded during the inflationary epoch (about 10⁻³² of a second after the Big Bang), and its volume increased by a factor of at least 10⁷⁸ (an expansion of distance by a factor of at least 10²⁶ in each of the three dimensions). This would be equivalent to expanding an object 1 nanometer across (about half the width of a molecule of DNA) to one approximately 10.6 light-years across (roughly 62 trillion miles). Cosmic expansion subsequently decelerated to much slower rates, until around 9.8 billion years after the Big Bang (4 billion years ago) it began to gradually expand more quickly, and is still doing so. Physicists have postulated the existence of dark energy, appearing as a cosmological constant in the simplest gravitational models, as a way to explain this late-time acceleration. According to the simplest extrapolation of the currently favored cosmological model, the Lambda-CDM model, this acceleration becomes dominant in the future. History In 1912–1914, Vesto Slipher discovered that light from remote galaxies was redshifted, a phenomenon later interpreted as galaxies receding from the Earth. In 1922, Alexander Friedmann used the Einstein field equations to provide theoretical evidence that the universe is expanding. Swedish astronomer Knut Lundmark was the first person to find observational evidence for expansion, in 1924. According to Ian Steer of the NASA/IPAC Extragalactic Database of Galaxy Distances, "Lundmark's extragalactic distance estimates were far more accurate than Hubble's, consistent with an expansion rate (Hubble constant) that was within 1% of the best measurements today." In 1927, Georges Lemaître independently reached a similar conclusion to Friedmann on a theoretical basis, and also presented observational evidence for a linear relationship between distance to galaxies and their recessional velocity. Edwin Hubble observationally confirmed Lundmark's and Lemaître's findings in 1929. 
Assuming the cosmological principle, these findings would imply that all galaxies are moving away from each other. Astronomer Walter Baade recalculated the size of the known universe in the 1940s, doubling the previous calculation made by Hubble in 1929. He announced this finding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome. For most of the second half of the 20th century, the value of the Hubble constant was estimated to be between . On 13 January 1994, NASA formally announced a completion of its repairs related to the main mirror of the Hubble Space Telescope, allowing for sharper images and, consequently, more accurate analyses of its observations. Shortly after the repairs were made, Wendy Freedman's 1994 Key Project analyzed the recession velocity of M100 from the core of the Virgo Cluster, offering a Hubble constant measurement of . Later the same year, Adam Riess et al. used an empirical method of visual-band light-curve shapes to more finely estimate the luminosity of Type Ia supernovae. This further minimized the systematic measurement errors of the Hubble constant, to . Riess's measurements on the recession velocity of the nearby Virgo Cluster more closely agree with subsequent and independent analyses of Cepheid variable calibrations of Type Ia supernovae, which estimate a Hubble constant of . In 2003, David Spergel's analysis of the cosmic microwave background during the first year observations of the Wilkinson Microwave Anisotropy Probe satellite (WMAP) further agreed with the estimated expansion rates for local galaxies, . Structure of cosmic expansion The universe at the largest scales is observed to be homogeneous (the same everywhere) and isotropic (the same in all directions), consistent with the cosmological principle. These constraints demand that any expansion of the universe accord with Hubble's law, in which objects recede from each observer with velocities proportional to their positions with respect to that observer. That is, recession velocities scale with (observer-centered) positions according to $\vec{v} = H\vec{x}$, where the Hubble rate $H$ quantifies the rate of expansion. $H$ is a function of cosmic time. Dynamics of cosmic expansion Mathematically, the expansion of the universe is quantified by the scale factor, $a$, which is proportional to the average separation between objects, such as galaxies. The scale factor is a function of time and is conventionally set to be $a = 1$ at the present time. Because the universe is expanding, $a$ is smaller in the past and larger in the future. Extrapolating back in time with certain cosmological models will yield a moment when the scale factor was zero; our current understanding of cosmology sets this time at 13.787 ± 0.020 billion years ago. If the universe continues to expand forever, the scale factor will approach infinity in the future. It is also possible in principle for the universe to stop expanding and begin to contract, which corresponds to the scale factor decreasing in time. The scale factor is a parameter of the FLRW metric, and its time evolution is governed by the Friedmann equations. The second Friedmann equation, $$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3c^2}\left(\rho + 3p\right) + \frac{\Lambda c^2}{3},$$ shows how the contents of the universe influence its expansion rate. Here, $G$ is the gravitational constant, $\rho$ is the energy density within the universe, $p$ is the pressure, $c$ is the speed of light, and $\Lambda$ is the cosmological constant. A positive energy density leads to deceleration of the expansion ($\ddot{a} < 0$), and a positive pressure further decelerates expansion. 
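How the Friedmann equations fix the expansion history can be illustrated with a minimal numerical sketch. It uses the first Friedmann equation (the standard companion of the second equation quoted above) for a spatially flat universe, $H(a) = H_0\sqrt{\Omega_m a^{-3} + \Omega_r a^{-4} + \Omega_\Lambda}$, together with round, roughly Planck-like density parameters chosen purely for illustration; they are assumptions here, not values taken from this article. Integrating $dt = da/(aH)$ from the Big Bang to the present then recovers an age close to the 13.787 billion years quoted above.

```python
import math

# Illustrative, assumed parameters for a flat Lambda-CDM universe (not from the article).
H0_km_s_Mpc = 67.4
H0 = H0_km_s_Mpc * 1000 / 3.0857e22      # Hubble constant converted to 1/s (1 Mpc = 3.0857e22 m)
Omega_m, Omega_r, Omega_L = 0.315, 9e-5, 0.685

def hubble_rate(a):
    """Hubble rate H(a) in 1/s from the first Friedmann equation (flat Lambda-CDM)."""
    return H0 * math.sqrt(Omega_m / a**3 + Omega_r / a**4 + Omega_L)

# Age of the universe: midpoint-rule integration of dt = da / (a * H(a)) from a ~ 0 to a = 1.
n = 200_000
age_s = sum(1.0 / (a * hubble_rate(a)) for a in ((i + 0.5) / n for i in range(n))) / n

print(age_s / 3.156e16, "billion years")  # roughly 13.8 billion years
```

The same integral, stopped at any intermediate value of the scale factor, gives the cosmic time at which the universe had that size, which is how the transition times mentioned later in the article can be estimated.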
Conversely, sufficiently negative pressure with $p < -\rho/3$ leads to accelerated expansion, and the cosmological constant also accelerates expansion. Nonrelativistic matter is essentially pressureless, with $p \approx 0$, while a gas of ultrarelativistic particles (such as a photon gas) has positive pressure $p = \rho/3$. Negative-pressure fluids, like dark energy, are not experimentally confirmed, but the existence of dark energy is inferred from astronomical observations. Distances in the expanding universe Comoving coordinates In an expanding universe, it is often useful to study the evolution of structure with the expansion of the universe factored out. This motivates the use of comoving coordinates, which are defined to grow proportionally with the scale factor. If an object is moving only with the Hubble flow of the expanding universe, with no other motion, then it remains stationary in comoving coordinates. The comoving coordinates are the spatial coordinates in the FLRW metric. Shape of the universe The universe is a four-dimensional spacetime, but within a universe that obeys the cosmological principle, there is a natural choice of three-dimensional spatial surface. These are the surfaces on which observers who are stationary in comoving coordinates agree on the age of the universe. In a universe governed by special relativity, such surfaces would be hyperboloids, because relativistic time dilation means that rapidly receding distant observers' clocks are slowed, so that spatial surfaces must bend "into the future" over long distances. However, within general relativity, the shape of these comoving synchronous spatial surfaces is affected by gravity. Current observations are consistent with these spatial surfaces being geometrically flat (so that, for example, the angles of a triangle add up to 180 degrees). Cosmological horizons An expanding universe typically has a finite age. Light, and other particles, can have propagated only a finite distance. The comoving distance that such particles can have covered over the age of the universe is known as the particle horizon, and the region of the universe that lies within our particle horizon is known as the observable universe. If the dark energy that is inferred to dominate the universe today is a cosmological constant, then the particle horizon converges to a finite value in the infinite future. This implies that the amount of the universe that we will ever be able to observe is limited. Many systems exist whose light can never reach us, because there is a cosmic event horizon induced by the repulsive gravity of the dark energy. Within the study of the evolution of structure within the universe, a natural scale emerges, known as the Hubble horizon. Cosmological perturbations much larger than the Hubble horizon are not dynamical, because gravitational influences do not have time to propagate across them, while perturbations much smaller than the Hubble horizon are straightforwardly governed by Newtonian gravitational dynamics. Consequences of cosmic expansion Velocities and redshifts An object's peculiar velocity is its velocity with respect to the comoving coordinate grid, i.e., with respect to the average expansion-associated motion of the surrounding material. It is a measure of how a particle's motion deviates from the Hubble flow of the expanding universe. The peculiar velocities of nonrelativistic particles decay as the universe expands, in inverse proportion with the cosmic scale factor. This can be understood as a self-sorting effect. 
A particle that is moving in some direction gradually overtakes the Hubble flow of cosmic expansion in that direction, asymptotically approaching material with the same velocity as its own. More generally, the peculiar momenta of both relativistic and nonrelativistic particles decay in inverse proportion with the scale factor. For photons, this leads to the cosmological redshift. While the cosmological redshift is often explained as the stretching of photon wavelengths due to "expansion of space", it is more naturally viewed as a consequence of the Doppler effect. Temperature The universe cools as it expands. This follows from the decay of particles' peculiar momenta, as discussed above. It can also be understood as adiabatic cooling. The temperature of ultrarelativistic fluids, often called "radiation" and including the cosmic microwave background, scales inversely with the scale factor (i.e. $T \propto a^{-1}$). The temperature of nonrelativistic matter drops more sharply, scaling as the inverse square of the scale factor (i.e. $T \propto a^{-2}$). Density The contents of the universe dilute as it expands. The number of particles within a comoving volume remains fixed (on average), while the volume expands. For nonrelativistic matter, this implies that the energy density drops as $a^{-3}$, where $a$ is the scale factor. For ultrarelativistic particles ("radiation"), the energy density drops more sharply, as $a^{-4}$. This is because in addition to the volume dilution of the particle count, the energy of each particle (including the rest mass energy) also drops significantly due to the decay of peculiar momenta. In general, we can consider a perfect fluid with pressure $p = w\rho$, where $\rho$ is the energy density. The parameter $w$ is the equation of state parameter. The energy density of such a fluid drops as $\rho \propto a^{-3(1+w)}$. Nonrelativistic matter has $w = 0$, while radiation has $w = 1/3$. For an exotic fluid with negative pressure, like dark energy, the energy density drops more slowly; if $w = -1$, it remains constant in time. If $w < -1$, corresponding to phantom energy, the energy density grows as the universe expands. Expansion history Cosmic inflation Inflation is a period of accelerated expansion hypothesized to have occurred at a time of around 10⁻³² seconds. It would have been driven by the inflaton, a field that has a positive-energy false vacuum state. Inflation was originally proposed to explain the absence of exotic relics predicted by grand unified theories, such as magnetic monopoles, because the rapid expansion would have diluted such relics. It was subsequently realized that the accelerated expansion would also solve the horizon problem and the flatness problem. Additionally, quantum fluctuations during inflation would have created initial variations in the density of the universe, which gravity later amplified to yield the observed spectrum of matter density variations. During inflation, the cosmic scale factor grew exponentially in time. In order to solve the horizon and flatness problems, inflation must have lasted long enough that the scale factor grew by at least a factor of e⁶⁰ (about 10²⁶). Radiation epoch The history of the universe after inflation but before a time of about 1 second is largely unknown. However, the universe is known to have been dominated by ultrarelativistic Standard Model particles, conventionally called radiation, by the time of neutrino decoupling at about 1 second. During radiation domination, cosmic expansion decelerated, with the scale factor growing proportionally with the square root of the time. 
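The dilution law just described can be made concrete with a short sketch that evaluates $\rho \propto a^{-3(1+w)}$ for matter, radiation, and a cosmological constant. Nothing here is specific to a particular data set; it simply restates the scaling relations above in runnable form.

```python
def density_ratio(a, w):
    """Energy density relative to today for a perfect fluid with equation-of-state parameter w."""
    return a ** (-3.0 * (1.0 + w))

# Evaluate at a = 0.5, i.e. when the universe was half its present linear size.
for label, w in [("matter (w = 0)", 0.0),
                 ("radiation (w = 1/3)", 1.0 / 3.0),
                 ("cosmological constant (w = -1)", -1.0)]:
    print(label, density_ratio(0.5, w))
# matter was 8 times denser, radiation 16 times denser, and dark energy unchanged
```

The same one-line function also reproduces the statement about phantom energy: any $w < -1$ makes the returned ratio smaller in the past and larger in the future, i.e. a density that grows as the universe expands.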
Matter epoch Since radiation redshifts as the universe expands, eventually nonrelativistic matter came to dominate the energy density of the universe. This transition happened at a time of about 50 thousand years after the Big Bang. During the matter-dominated epoch, cosmic expansion also decelerated, with the scale factor growing as the 2/3 power of the time ($a \propto t^{2/3}$). Also, gravitational structure formation is most efficient when nonrelativistic matter dominates, and this epoch is responsible for the formation of galaxies and the large-scale structure of the universe. Dark energy Around 3 billion years ago, at a time of about 11 billion years, dark energy is believed to have begun to dominate the energy density of the universe. This transition came about because dark energy does not dilute as the universe expands, instead maintaining a constant energy density. Similarly to inflation, dark energy drives accelerated expansion, such that the scale factor grows exponentially in time. Measuring the expansion rate The most direct way to measure the expansion rate is to independently measure the recession velocities and the distances of distant objects, such as galaxies. The ratio between these quantities gives the Hubble rate, in accordance with Hubble's law. Typically, the distance is measured using a standard candle, which is an object or event for which the intrinsic brightness is known. The object's distance can then be inferred from the observed apparent brightness. Meanwhile, the recession speed is measured through the redshift. Hubble used this approach for his original measurement of the expansion rate, by measuring the brightness of Cepheid variable stars and the redshifts of their host galaxies. More recently, using Type Ia supernovae, the expansion rate was measured to be H0 = . This means that for every million parsecs of distance from the observer, the recessional velocity of objects at that distance increases by about . Supernovae are observable at such great distances that the light travel time therefrom can approach the age of the universe. Consequently, they can be used to measure not only the present-day expansion rate but also the expansion history. In work that was awarded the 2011 Nobel Prize in Physics, supernova observations were used to determine that cosmic expansion is accelerating in the present epoch. By assuming a cosmological model, e.g. the Lambda-CDM model, another possibility is to infer the present-day expansion rate from the sizes of the largest fluctuations seen in the cosmic microwave background. A higher expansion rate would imply a smaller characteristic size of CMB fluctuations, and vice versa. The Planck collaboration measured the expansion rate this way and determined H0 = . There is a disagreement between this measurement and the supernova-based measurements, known as the Hubble tension. A third option, proposed recently, is to use information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817) to measure the expansion rate. Such measurements do not yet have the precision to resolve the Hubble tension. In principle, the cosmic expansion history can also be measured by studying how redshifts, distances, fluxes, angular positions, and angular sizes of astronomical objects change over the course of the time that they are being observed. These effects are too small to have yet been detected. 
However, changes in redshift or flux could be observed by the Square Kilometre Array or Extremely Large Telescope in the mid-2030s. Conceptual considerations and misconceptions Measuring distances in expanding space At cosmological scales, the present universe conforms to Euclidean space, what cosmologists describe as geometrically flat, to within experimental error. Consequently, the rules of Euclidean geometry associated with Euclid's fifth postulate hold in the present universe in 3D space. It is, however, possible that the geometry of past 3D space could have been highly curved. The curvature of space is often modeled using a non-zero Riemann curvature tensor in curvature of Riemannian manifolds. Euclidean "geometrically flat" space has a Riemann curvature tensor of zero. "Geometrically flat" space has three dimensions and is consistent with Euclidean space. However, spacetime has four dimensions; it is not flat according to Einstein's general theory of relativity. Einstein's theory postulates that "matter and energy curve spacetime, and there is enough matter and energy to provide for curvature." In part to accommodate such different geometries, the expansion of the universe is inherently general-relativistic. It cannot be modeled with special relativity alone: Though such models exist, they may be at fundamental odds with the observed interaction between matter and spacetime seen in the universe. The images to the right show two views of spacetime diagrams that show the large-scale geometry of the universe according to the ΛCDM cosmological model. Two of the dimensions of space are omitted, leaving one dimension of space (the dimension that grows as the cone gets larger) and one of time (the dimension that proceeds "up" the cone's surface). The narrow circular end of the diagram corresponds to a cosmological time of 700 million years after the Big Bang, while the wide end is a cosmological time of 18 billion years, where one can see the beginning of the accelerating expansion as a splaying outward of the spacetime, a feature that eventually dominates in this model. The purple grid lines mark cosmological time at intervals of one billion years from the Big Bang. The cyan grid lines mark comoving distance at intervals of one billion light-years in the present era (less in the past and more in the future). The circular curling of the surface is an artifact of the embedding with no physical significance and is done for illustrative purposes; a flat universe does not curl back onto itself. (A similar effect can be seen in the tubular shape of the pseudosphere.) The brown line on the diagram is the worldline of Earth (or more precisely its location in space, even before it was formed). The yellow line is the worldline of the most distant known quasar. The red line is the path of a light beam emitted by the quasar about 13 billion years ago and reaching Earth at the present day. The orange line shows the present-day distance between the quasar and Earth, about 28 billion light-years, which is a larger distance than the age of the universe multiplied by the speed of light, ct. According to the equivalence principle of general relativity, the rules of special relativity are locally valid in small regions of spacetime that are approximately flat. In particular, light always travels locally at the speed c; in the diagram, this means, according to the convention of constructing spacetime diagrams, that light beams always make an angle of 45° with the local grid lines. 
It does not follow, however, that light travels a distance ct in a time t, as the red worldline illustrates. While it always moves locally at c, its time in transit (about 13 billion years) is not related to the distance traveled in any simple way, since the universe expands as the light beam traverses space and time. The distance traveled is thus inherently ambiguous because of the changing scale of the universe. Nevertheless, there are two distances that appear to be physically meaningful: the distance between Earth and the quasar when the light was emitted, and the distance between them in the present era (taking a slice of the cone along the dimension defined as the spatial dimension). The former distance is about 4 billion light-years, much smaller than ct, whereas the latter distance (shown by the orange line) is about 28 billion light-years, much larger than ct. In other words, if space were not expanding today, it would take 28 billion years for light to travel between Earth and the quasar, while if the expansion had stopped at the earlier time, it would have taken only 4 billion years. The light took much longer than 4 billion years to reach us though it was emitted from only 4 billion light-years away. In fact, the light emitted towards Earth was actually moving away from Earth when it was first emitted; the metric distance to Earth increased with cosmological time for the first few billion years of its travel time, also indicating that the expansion of space between Earth and the quasar at the early time was faster than the speed of light. None of this behavior originates from a special property of metric expansion, but rather from local principles of special relativity integrated over a curved surface. Topology of expanding space Over time, the space that makes up the universe is expanding. The words 'space' and 'universe', sometimes used interchangeably, have distinct meanings in this context. Here 'space' is a mathematical concept that stands for the three-dimensional manifold into which our respective positions are embedded, while 'universe' refers to everything that exists, including the matter and energy in space, the extra dimensions that may be wrapped up in various strings, and the time through which various events take place. The expansion of space is in reference to this 3D manifold only; that is, the description involves no structures such as extra dimensions or an exterior universe. The ultimate topology of space is a posteriori – something that in principle must be observed – as there are no constraints that can simply be reasoned out (in other words there cannot be any a priori constraints) on how the space in which we live is connected or whether it wraps around on itself as a compact space. Though certain cosmological models such as Gödel's universe even permit bizarre worldlines that intersect with themselves, ultimately the question as to whether we are in something like a "Pac-Man universe", where if traveling far enough in one direction would allow one to simply end up back in the same place like going all the way around the surface of a balloon (or a planet like the Earth), is an observational question that is constrained as measurable or non-measurable by the universe's global geometry. At present, observations are consistent with the universe having infinite extent and being a simply connected space, though cosmological horizons limit our ability to distinguish between simple and more complicated proposals. 
The universe could be infinite in extent or it could be finite; but the evidence that leads to the inflationary model of the early universe also implies that the "total universe" is much larger than the observable universe. Thus any edges or exotic geometries or topologies would not be directly observable, since light has not reached scales on which such aspects of the universe, if they exist, are still allowed. For all intents and purposes, it is safe to assume that the universe is infinite in spatial extent, without edge or strange connectedness. Regardless of the overall shape of the universe, the question of what the universe is expanding into is one that does not require an answer, according to the theories that describe the expansion; the way we define space in our universe in no way requires additional exterior space into which it can expand, since an expansion of an infinite expanse can happen without changing the infinite extent of the expanse. All that is certain is that the manifold of space in which we live simply has the property that the distances between objects are getting larger as time goes on. This only implies the simple observational consequences associated with the metric expansion explored below. No "outside" or embedding in hyperspace is required for an expansion to occur. The visualizations often seen of the universe growing as a bubble into nothingness are misleading in that respect. There is no reason to believe there is anything "outside" the expanding universe into which the universe expands. Even if the overall spatial extent is infinite and thus the universe cannot get any "larger", we still say that space is expanding because, locally, the characteristic distance between objects is increasing. As an infinite space grows, it remains infinite. Effects of expansion on small scales The expansion of space is sometimes described as a force that acts to push objects apart. Though this is an accurate description of the effect of the cosmological constant, it is not an accurate picture of the phenomenon of expansion in general. In addition to slowing the overall expansion, gravity causes local clumping of matter into stars and galaxies. Once objects are formed and bound by gravity, they "drop out" of the expansion and do not subsequently expand under the influence of the cosmological metric, there being no force compelling them to do so. There is no difference between the inertial expansion of the universe and the inertial separation of nearby objects in a vacuum; the former is simply a large-scale extrapolation of the latter. Once objects are bound by gravity, they no longer recede from each other. Thus, the Andromeda Galaxy, which is bound to the Milky Way Galaxy, is actually falling towards us and is not expanding away. Within the Local Group, the gravitational interactions have changed the inertial patterns of objects such that there is no cosmological expansion taking place. Beyond the Local Group, the inertial expansion is measurable, though systematic gravitational effects imply that larger and larger parts of space will eventually fall out of the "Hubble Flow" and end up as bound, non-expanding objects up to the scales of superclusters of galaxies. Such future events are predicted by knowing the precise way the Hubble Flow is changing as well as the masses of the objects to which we are being gravitationally pulled. 
Currently, the Local Group is being gravitationally pulled towards either the Shapley Supercluster or the "Great Attractor", with which we would eventually merge if dark energy were not acting. A consequence of metric expansion being due to inertial motion is that a uniform local "explosion" of matter into a vacuum can be locally described by the FLRW geometry, the same geometry that describes the expansion of the universe as a whole and was also the basis for the simpler Milne universe, which ignores the effects of gravity. In particular, general relativity predicts that light will move at the speed c with respect to the local motion of the exploding matter, a phenomenon analogous to frame dragging. The situation changes somewhat with the introduction of dark energy or a cosmological constant. A cosmological constant due to a vacuum energy density has the effect of adding a repulsive force between objects that is proportional (not inversely proportional) to distance. Unlike inertia it actively "pulls" on objects that have clumped together under the influence of gravity, and even on individual atoms. However, this does not cause the objects to grow steadily or to disintegrate; unless they are very weakly bound, they will simply settle into an equilibrium state that is slightly (undetectably) larger than it would otherwise have been. As the universe expands and the matter in it thins, the gravitational attraction decreases (since it is proportional to the density), while the cosmological repulsion increases. Thus, the ultimate fate of the ΛCDM universe is a near-vacuum expanding at an ever-increasing rate under the influence of the cosmological constant. However, gravitationally bound objects like the Milky Way do not expand, and the Andromeda Galaxy is moving fast enough towards us that it will still merge with the Milky Way in around 3 billion years. Metric expansion and speed of light At the end of the early universe's inflationary period, all the matter and energy in the universe was set on an inertial trajectory consistent with the equivalence principle and Einstein's general theory of relativity. This is when the precise and regular form of the universe's expansion had its origin (that is, matter in the universe is separating because it was separating in the past due to the inflaton field). While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity, which allows the separation between two distant objects to increase faster than the speed of light, although the definition of "distance" here is somewhat different from that used in an inertial frame. The definition of distance used here is the summation or integration of local comoving distances, all done at constant local proper time. For example, galaxies that are farther than the Hubble radius, approximately 4.5 gigaparsecs or 14.7 billion light-years, away from us have a recession speed that is faster than the speed of light. Visibility of these objects depends on the exact expansion history of the universe. Light that is emitted today from galaxies beyond the more-distant cosmological event horizon, about 5 gigaparsecs or 16 billion light-years, will never reach us, although we can still see the light that these galaxies emitted in the past. 
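As a rough illustration of these scales, the following sketch computes the Hubble radius c/H0 and the recession speed v = H0·D implied by Hubble's law at several proper distances. The Hubble constant is an assumed round value, so the resulting radius differs slightly from the figures quoted above.

```python
# Sketch: Hubble's law v = H0 * D and the Hubble radius c / H0, beyond which
# the recession speed exceeds the speed of light. H0 is an assumed round value.
C_KM_S = 299_792.458          # speed of light, km/s
H0 = 70.0                     # assumed Hubble constant, km/s per Mpc
MPC_PER_GLY = 306.6           # roughly how many Mpc make one billion light-years

hubble_radius_mpc = C_KM_S / H0
print(f"Hubble radius ≈ {hubble_radius_mpc:,.0f} Mpc "
      f"≈ {hubble_radius_mpc / MPC_PER_GLY:.1f} billion light-years")

for d_gly in (1, 5, 14, 20, 30):                 # proper distances in Gly
    v_km_s = H0 * d_gly * MPC_PER_GLY
    print(f"D = {d_gly:2d} Gly: v ≈ {v_km_s / C_KM_S:.2f} c")
```

With this assumed H0 the Hubble radius comes out near 14 billion light-years; objects at greater proper distances have recession speeds exceeding c, consistent with the discussion above.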
Because of the high rate of expansion, it is also possible for a distance between two objects to be greater than the value calculated by multiplying the speed of light by the age of the universe. These details are a frequent source of confusion among amateurs and even professional physicists. Due to the non-intuitive nature of the subject and what has been described by some as "careless" choices of wording, certain descriptions of the metric expansion of space and the misconceptions to which such descriptions can lead are an ongoing subject of discussion within the fields of education and communication of scientific concepts. Common analogies for cosmic expansion The expansion of the universe is often illustrated with conceptual models where an expanding object is taken to represent expanding space. These models can be misleading to the extent that they give the false impression that expanding space must carry objects with it. In reality, the expansion of the universe can alternatively be thought of as corresponding only to the inertial motion of objects away from one another. In the "ant on a rubber rope model" one imagines an ant (idealized as pointlike) crawling at a constant speed on a perfectly elastic rope that is constantly stretching. If we stretch the rope in accordance with the ΛCDM scale factor and think of the ant's speed as the speed of light, then this analogy is conceptually accurate – the ant's position over time will match the path of the red line on the embedding diagram above. In the "rubber sheet model", one replaces the rope with a flat two-dimensional rubber sheet that expands uniformly in all directions. The addition of a second spatial dimension allows for the possibility of showing local perturbations of the spatial geometry by local curvature in the sheet. In the "balloon model" the flat sheet is replaced by a spherical balloon that is inflated from an initial size of zero (representing the Big Bang). A balloon has positive Gaussian curvature, even though observations suggest that the real universe is spatially flat, but this inconsistency can be eliminated by making the balloon very large so that it is locally flat within the limits of observation. This analogy is potentially confusing since it could wrongly suggest that the Big Bang took place at the center of the balloon. In fact points off the surface of the balloon have no meaning, even if they were occupied by the balloon at an earlier time or will be occupied later. In the "raisin bread model", one imagines a loaf of raisin bread expanding in an oven. The loaf (space) expands as a whole, but the raisins (gravitationally bound objects) do not expand; they merely move farther away from each other. This analogy has the disadvantage of wrongly implying that the expansion has a center and an edge.
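The ant-on-a-rubber-rope analogy above can also be made quantitative with a short numerical sketch: writing the ant's progress as a fraction f of the rope's current length L(t), it advances according to df/dt = u/L(t), so whether it ever reaches the far end depends on how the rope stretches. The speeds, initial length and stretching laws below are toy choices for illustration, not the ΛCDM scale factor.

```python
import math

# Sketch of the ant-on-a-rubber-rope analogy: the ant's progress as a fraction
# f of the rope's current length L(t) obeys df/dt = u / L(t). The speed u,
# initial length and stretching laws are toy values chosen for illustration.
def fraction_reached(length_at, u=1.0, t_max=1000.0, dt=0.01):
    """Euler-integrate df/dt = u / L(t); stop if the ant reaches the end."""
    f, t = 0.0, 0.0
    while f < 1.0 and t < t_max:
        f += u / length_at(t) * dt
        t += dt
    return f, t

laws = {
    "linear stretch":      lambda t: 10.0 + 5.0 * t,          # rope grows linearly
    "exponential stretch": lambda t: 10.0 * math.exp(0.5 * t), # rope grows exponentially
}

for name, length_at in laws.items():
    f, t = fraction_reached(length_at)
    if f >= 1.0:
        print(f"{name}: ant reaches the end at t ≈ {t:.0f}")
    else:
        print(f"{name}: ant never reaches the end (fraction stalls near {f:.2f})")
```

With a linearly stretching rope the covered fraction grows only logarithmically but the ant always arrives eventually; with exponential stretching, loosely analogous to dark-energy-dominated expansion, the fraction saturates below one and sufficiently distant points become unreachable, mirroring the cosmological event horizon.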
Physical sciences
Physical cosmology
Astronomy
5985739
https://en.wikipedia.org/wiki/Carrot
Carrot
The carrot (Daucus carota subsp. sativus) is a root vegetable, typically orange in colour, though heirloom variants including purple, black, red, white, and yellow cultivars exist, all of which are domesticated forms of the wild carrot, Daucus carota, native to Europe and Southwestern Asia. The plant probably originated in Iran and was originally cultivated for its leaves and seeds. The carrot is a biennial plant in the umbellifer family, Apiaceae. World production of carrots (combined with turnips) for 2022 was 42 million tonnes, led by China producing 44% of the total. The characteristic orange colour is from beta-carotene, making carrots a rich source of vitamin A. A myth that carrots help people to see in the dark was spread as propaganda in the Second World War, to account for the ability of British pilots to fight in the dark; the real explanation was the introduction of radar. Etymology The word is first recorded in English around 1530 and was borrowed from the Middle French , itself from the Late Latin , from the ancient Greek (), originally from the Proto-Indo-European root ('horn'), due to its horn-like shape. In Old English, carrots (typically white at the time) were not clearly distinguished from parsnips. The word's use as a colour name in English was first recorded around 1670, originally referring to yellowish-red hair. Description Daucus carota is a biennial plant. In the first year, energy is stored in the taproot to enable the plant to flower in its second year. Soon after germination, carrot seedlings show a distinct demarcation between taproot and stem: the stem is thicker and lacks lateral roots. At the upper end of the stem is the seed leaf. The first true leaf appears about 10–15 days after germination. Subsequent leaves are alternate (with a single leaf attached to a node), spirally arranged, and pinnately compound, with leaf bases sheathing the stem. As the plant grows, the bases of the seed leaves, near the taproot, are pushed apart. The stem, located just above the ground, is compressed and the internodes are not distinct. When the seed stalk elongates for flowering, the tip of the stem narrows and becomes pointed, and the stem extends upward to become a highly branched inflorescence up to tall. Most of the taproot consists of a pulpy outer cortex (phloem) and an inner core (xylem). High-quality carrots have a large proportion of cortex compared to core. Although a completely xylem-free carrot is not possible, some cultivars have small and deeply pigmented cores; the taproot can appear to lack a core when the colour of the cortex and core are similar in intensity. Taproots are typically long and conical, although cylindrical and nearly spherical cultivars are available. The root diameter can range from to as much as at the widest part. The root length ranges from , although most are between . Flower development begins when the flat meristem changes from producing leaves to an uplifted, conical meristem capable of producing stem elongation and a cluster of flowers. The cluster is a compound umbel, and each umbel contains several smaller umbels (umbellets). The first (primary) umbel occurs at the end of the main floral stem; smaller secondary umbels grow from the main branch, and these further branch into third, fourth, and even later-flowering umbels. A large, primary umbel can contain up to 50 umbellets, each of which may have as many as 50 flowers; subsequent umbels have fewer flowers. 
Individual flowers are small and white, sometimes with a light green or yellow tint. They consist of five petals, five stamens, and an entire calyx. The stamens usually split and fall off before the stigma becomes receptive to receive pollen. The stamens of the brown, male, sterile flowers degenerate and shrivel before the flower fully opens. In the other type of male sterile flower, the stamens are replaced by petals, and these petals do not fall off. A nectar-containing disc is present on the upper surface of the carpels. Flowers change sex in their development, so the stamens release their pollen before the stigma of the same flower is receptive. The arrangement is centripetal, meaning the oldest flowers are near the edge and the youngest flowers are in the center. Flowers usually first open at the outer edge of the primary umbel, followed about a week later on the secondary umbels, and then in subsequent weeks in higher-order umbels. The usual flowering period of individual umbels is 7 to 10 days, so a plant can be in the process of flowering for 30–50 days. The distinctive umbels and floral nectaries attract pollinating insects. After fertilization and as seeds develop, the outer umbellets of an umbel bend inward causing the umbel shape to change from slightly convex or fairly flat to concave, and when cupped it resembles a bird's nest. The fruit that develops is a schizocarp consisting of two mericarps; each mericarp is a true seed. The paired mericarps are easily separated when they are dry. Premature separation (shattering) before harvest is undesirable because it can result in seed loss. Mature seeds are flattened on the commissural side that faced the septum of the ovary. The flattened side has five longitudinal ribs. The bristly hairs that protrude from some ribs are usually removed by abrasion during milling and cleaning. Seeds also contain oil ducts and canals. Seeds vary somewhat in size, ranging from less than 500 to more than 1000 seeds per gram. The carrot is a diploid species, and has nine relatively short, uniform-length chromosomes (2n=18). The genome size is estimated to be 473 mega base pairs, which is four times larger than Arabidopsis thaliana, one-fifth the size of the maize genome, and about the same size as the rice genome. Chemistry Polyacetylenes can be found in Apiaceae vegetables like carrots where they show cytotoxic activities. Falcarinol and falcarindiol (cis-heptadeca-1,9-diene-4,6-diyne-3,8-diol) are such compounds. This latter compound shows antifungal activity towards Mycocentrospora acerina and Cladosporium cladosporioides. Falcarindiol is the main compound responsible for bitterness in carrots. Other compounds include pyrrolidine present in the leaves and 6-hydroxymellein. Taxonomy Both written history and molecular genetic studies indicate that the domestic carrot has a single origin in Central Asia. Its wild ancestors probably originated in Greater Iran (regions of which are now Iran and Afghanistan), which remains the centre of diversity for the wild carrot Daucus carota. A naturally occurring subspecies of the wild carrot was presumably bred selectively over the centuries to reduce bitterness, increase sweetness and minimise the woody core; this process produced the familiar garden vegetable. Cultivation History When first cultivated, carrots were grown for their aromatic leaves and seeds rather than their roots. Carrot seeds have been found in Switzerland and Southern Germany dating back to 2000–3000 BC. 
Some close relatives of the carrot are still grown for their leaves and seeds, such as parsley, coriander (cilantro), fennel, anise, dill and cumin. The first mention of the root in classical sources is from the 1st century AD; the Romans ate a root vegetable called pastinaca, which may have been either the carrot or the closely related parsnip. The plant is depicted and described in the Eastern Roman Juliana Anicia Codex, a 6th-century AD Constantinopolitan copy of the Greek physician Dioscorides' 1st-century pharmacopoeia of herbs and medicines, . The text states that "the root can be cooked and eaten". Another copy of this work, Codex Neapolitanes from late 6th or early 7th century, has basically the same illustrations but with roots in purple. The plant was introduced into Spain by the Moors in the 8th century. In the 10th century, roots from West Asia, India and Europe were purple. The modern carrot originated in Afghanistan at about this time. The 11th-century Jewish scholar Simeon Seth describes both red and yellow carrots, as does the 12th-century Arab-Andalusian agriculturist, Ibn al-'Awwam. Cultivated carrots appeared in China in the 12th century, and in Japan in the 16th or 17th century. The orange carrot was created by Dutch growers. There is pictorial evidence that the orange carrot existed at least in 512, but it is probable that it was not a stable variety until the Dutch bred the cultivar termed the "Long Orange" at the start of the 18th century. Some claim that the Dutch created the orange carrots to honor the Dutch flag at the time and William of Orange, but other authorities argue these claims lack convincing evidence and it is possible that the orange carrot was favored by the Europeans because it does not brown the soups and stews as the purple carrot does and, as such, was more visually attractive. Modern carrots were described at about this time by the English antiquary John Aubrey (1626–1697): "Carrots were first sown at Beckington in Somersetshire. Some very old Man there [in 1668] did remember their first bringing hither." European settlers introduced the carrot to colonial America in the 17th century. Outwardly purple carrots, still orange on the inside, were sold in British stores beginning in 2002. Propagation Carrots are grown from seed and can take up to four months (120 days) to mature, but most cultivars mature within 70 to 80 days under the right conditions. They grow best in full sun but tolerate some shade. The optimum temperature is . The ideal soil is deep, loose and well-drained, sandy or loamy, with a pH of 6.3 to 6.8. Fertilizer should be applied according to soil type because the crop requires low levels of nitrogen, moderate phosphate and high potash. Rich or rocky soils should be avoided, as these will cause the roots to become hairy and/or misshapen. Irrigation is applied when needed to keep the soil moist. After sprouting, the crop is eventually thinned to a spacing of and weeded to prevent competition beneath the soil. Pests and diseases There are several diseases that can reduce the yield and market value of carrots. The most devastating carrot disease is Alternaria leaf blight, which has been known to eradicate entire crops. A bacterial leaf blight caused by Xanthomonas campestris can also be destructive in warm, humid areas. Root knot nematodes (Meloidogyne species) can cause stubby or forked roots, or galls. 
Cavity spot, caused by the oomycetes Pythium violae and Pythium sulcatum, results in irregularly shaped, depressed lesions on the taproots. Physical damage can also reduce the value of carrot crops. The two main forms of damage are splitting, whereby a longitudinal crack develops during growth that can be a few centimetres to the entire length of the root, and breaking, which occurs postharvest. These disorders can affect over 30% of commercial crops. Factors associated with high levels of splitting include wide plant spacing, early sowing, lengthy growth durations, and genotype. Carrots can be good companions for other plants; if left to flower, the carrot, like any umbellifer, attracts predatory wasps that kill many garden pests. Cultivars Carrot cultivars can be grouped into two broad classes: "Eastern" carrots and "Western" carrots. A number of novelty cultivars have been bred for particular characteristics. "Eastern" (a European and American continent reference) carrots were domesticated in Persia (probably in the lands of modern-day Iran and Afghanistan within West Asia) during the 10th century, or possibly earlier. Specimens of the Eastern carrot that survive to the present day are commonly purple or yellow, and often have branched roots. The purple colour common in these carrots comes from anthocyanin pigments. The "Western" carrot emerged in the Netherlands in the 16th or 17th century. There is a popular belief that its orange colour made it popular in those countries as an emblem of the House of Orange and the struggle for Dutch independence, although there is little evidence for this beyond oral tradition and the timing. Western carrot cultivars are commonly classified by their root shape. The four general types are: Chantenay. Although the roots are shorter than other cultivars, they have vigorous foliage and greater girth, being broad in the shoulders and tapering towards a blunt, rounded tip. They store well, have a pale core, and are mostly used for processing. Danvers. These have strong foliage, and the roots are longer than Chantenay types, and they have a conical shape with a well-defined shoulder, tapering to a point. They are somewhat shorter than Imperator cultivars, but more tolerant of heavy soil conditions. Danvers cultivars store well and are used both fresh and for processing. They were developed in 1871 in Danvers, Massachusetts. Imperator. This cultivar has vigorous foliage, is of high sugar content, and has long and slender roots, tapering to a pointed tip. Imperator types are the most widely cultivated by commercial growers. Nantes. These have sparse foliage, are cylindrical, short with a blunter tip than Imperator types, and attain high yields in a range of conditions. The skin is easily damaged and the core is deeply pigmented. They are brittle, high in sugar, and store less well than other types. Breeding programs have developed new cultivars to have dense amounts of chemically-stable acylated pigments, such as anthocyanins, which can produce different colours. One particular cultivar lacks the usual orange pigment due to carotene, owing its white colour to a recessive gene for tocopherol (vitamin E), but this cultivar and wild carrots do not provide nutritionally significant amounts of vitamin E. Storage Carrots can be stored for several months in the refrigerator or over winter in a cool dry place. For long term storage, unwashed carrots can be placed in a bucket between dry layers of sand, a 50/50 mix of sand and wood shavings, or in soil. 
A temperature range of and 90–95% humidity is best. During storage, carrots may be subject to the development of bitterness, white blush, and browning, leading to carrot losses. Bitterness can be prevented by storage in well-ventilated rooms with low ethylene content (for example, without ethylene-producing fruit and vegetables). White blush and browning can be countered with application of edible films, heat treatment, application of hydrogen sulfide, and ultraviolet irradiation. Production In 2022, world production of carrots (combined with turnips) was 42 million tonnes, led by China with 44% of the total. Uzbekistan, the United States, and Russia were the only other countries producing over 1 million tonnes annually (table). Uses Nutrition Raw carrots are 88% water, 9% carbohydrates, 1% protein, and contain negligible fat (table). In a reference amount of , raw carrots supply 41 calories and have a rich content (20% or more of the Daily Value, DV) of vitamin A (93% DV) and a moderate amount (10–19% DV) of vitamin K (11% DV) and potassium (11% DV), but otherwise have low content of micronutrients (table). As a common dietary source of beta-carotene, carrots are a provitamin A source: beta-carotene is converted into vitamin A by an enzyme in the small intestine. Culinary Carrots can be eaten in a variety of ways. Only 3 percent of the β-carotene in raw carrots is released during digestion: this can be improved to 39% by pulping, cooking and adding cooking oil. Alternatively they may be chopped and boiled, fried or steamed, and cooked in soups and stews, as well as baby and pet foods. A well-known dish is carrots julienne. Together with onion and celery, carrots are one of the primary vegetables used in a mirepoix to make broths. The greens are edible as a leaf vegetable, but are rarely eaten by humans; some sources suggest that the greens contain toxic alkaloids. When used for this purpose, they are harvested young in high-density plantings, before significant root development, and typically used stir-fried, or in salads. Some people are allergic to carrots. In a 2010 study on the prevalence of food allergies in Europe, 3.6 percent of young adults showed some degree of sensitivity to carrots. Because the major carrot allergen, the protein Dauc c 1.0104, is cross-reactive with homologues in birch pollen (Bet v 1) and mugwort pollen (Art v 1), most carrot allergy sufferers are also allergic to pollen from these plants. In India, carrots are used in a variety of ways, as salads or as vegetables added to spicy rice or dal dishes. A popular variation in north India is the Gajar Ka Halwa carrot dessert, in which carrots are grated and cooked in milk until the whole mixture is solid, after which nuts and butter are added. Carrot salads are usually made with grated carrots with a seasoning of mustard seeds and green chillies popped in hot oil. Carrots can also be cut into thin strips and added to rice, can form part of a dish of mixed roast vegetables, or can be blended with tamarind to make chutney. Since the late 1980s, baby carrots or mini-carrots (carrots that have been peeled and cut into uniform cylinders) have been a popular ready-to-eat snack food available in many supermarkets. Carrot juice is widely marketed, especially as a health drink, either stand-alone or blended with juices from fruits and other vegetables. The sweetness of carrots allows the vegetable to be used in some fruit-like roles. 
They are used grated in carrot cakes, as well as carrot puddings, an English dish thought to have originated in the early 19th century. Carrots can be used alone or blended with fruits in jams and preserves. In the European Union, there is a rule specifying that only fruits can be used in making jams; to preserve the Portuguese carrot jam delicacy (or Doce de Cenoura in Portuguese), the Council of the European Union adopted a directive that changed the legal statute of carrot from "vegetable" into "fruit". Very high consumption of carrots over a long period of time can result in carotenemia, a harmless yellow-orange discoloration of the skin caused by a buildup of carotenoids. In culture Despite popular belief, the provitamin A beta-carotene from carrots does not actually help people to see in the dark unless they suffer from vitamin A deficiency. This myth was propaganda used by the Royal Air Force during the Second World War to explain why British pilots had improved night vision which enabled their success during nighttime air battles; in reality, it was thanks to newly adopted radar technology. The consumption of carrots was advocated in Britain at the time as part of a Dig for Victory campaign. A radio program called The Kitchen Front encouraged people to grow, store and use carrots in various novel ways, including making carrot jam and Woolton pie, named after the Lord Woolton, the Minister for Food. The British public during WWII generally believed that eating carrots would help them see better at night and in 1942 there was a 100,000-ton surplus of carrots from the extra production.
Biology and health sciences
Apiales
null
5988148
https://en.wikipedia.org/wiki/Reproductive%20medicine
Reproductive medicine
Reproductive medicine is a branch of medicine concerning the male and female reproductive systems. It encompasses a variety of reproductive conditions, their prevention and assessment, as well as their subsequent treatment and prognosis. Reproductive medicine has enabled the development of artificial reproductive techniques (ARTs), which have allowed advances in overcoming human infertility and are also used in agriculture and in wildlife conservation. Some examples of ARTs include IVF, artificial insemination (AI) and embryo transfer, as well as genome resource banking. History The study of reproductive medicine is thought to date back to Aristotle, who proposed the "Haematogenous Reproduction Theory". However, evidence-based reproductive medicine is traceable back to the 1970s. Since then, there have been many milestones for reproductive medicine, including the birth of Louise Brown, the first baby to be conceived through IVF in 1978. Despite this, it was not until 1989 that it became a clinical discipline thanks to the work of Iain Chalmers in developing the systematic review and the Cochrane collection. Scope Reproductive medicine addresses issues of sexual education, puberty, family planning, birth control, infertility, reproductive system disease (including sexually transmitted infections) and sexual dysfunction. In women, reproductive medicine also covers menstruation, ovulation, pregnancy and menopause, as well as gynecologic disorders that affect fertility. The field cooperates with and overlaps mainly with reproductive endocrinology and infertility, sexual medicine and andrology, but also to some degree with gynecology, obstetrics, urology, genitourinary medicine, medical endocrinology, pediatric endocrinology, genetics, and psychiatry. Conditions Reproductive medicine deals with prevention, diagnosis and management of the following conditions. This section will give examples of a number of common conditions affecting the human reproductive system. Infectious diseases Reproductive tract infections (RTIs) are infections that affect the reproductive tract. There are three types of RTIs: endogenous RTIs, iatrogenic RTIs and sexually transmitted infections. Endogenous RTIs are caused by an overgrowth of bacteria that are normally present. An example of an endogenous RTI is bacterial vaginosis. Iatrogenic RTIs are infections contracted as a result of a medical procedure. Sexually transmitted infections (STIs) are infections spread by sexual activity, usually by vaginal intercourse, anal sex, oral sex, and rarely manual sex. Many STIs are curable; however, some STIs such as HIV are incurable. STIs can be bacterial, viral or fungal and affect both men and women. Some examples of STIs are listed below: Bacterial STIs Chlamydia Gonorrhoea Syphilis Viral STIs Herpes Human Papilloma Virus (HPV) Human Immunodeficiency Virus (HIV) Cancer Many parts of the reproductive system can be affected by cancer. Below are some examples of reproductive cancers: Reproductive cancers affecting women Breast cancer Ovarian cancer Uterine cancer Cervical cancer Reproductive cancers affecting men Prostate cancer Penile cancer Testicular cancer Male breast cancer Benign prostatic hyperplasia Conditions affecting fertility A significant part of reproductive medicine involves promoting fertility in both men and women. 
Causes of infertility or subfertility in women Ovulatory dysfunction Polycystic ovary syndrome (PCOS) Hypergonadotropic hypogonadism Hypogonadotropic hypogonadism Tubular dysfunction Pelvic inflammatory disease Endometriosis Previous sterilisation Previous surgery Cervical or uterine dysfunction Congenital abnormalities Fibroids Asherman's syndrome Hormonal issues Hypothyroidism Hyperthyroidism Cushing's syndrome Congenital adrenal hyperplasia Causes of infertility or subfertility in men Problems with sperm number or function Cryptorchidism Y chromosome micro-deletions Varicocele Hypogonadotropic hypogonadism Hypergonadotropic hypogonadism Tubular dysfunction Congenital abnormalities Prior sexually transmitted infections Vasectomy Problems with sperm delivery Premature ejaculation Damage to the reproductive organs Retrograde ejaculation Certain genetic diseases Disorders of sex development Congenital abnormalities Congenital abnormalities of the female reproductive system Cervical abnormalities Cervical agenesis Cervical duplication Hymen abnormalities Imperforate hymen Microperforate hymen Septate hymen Uterine abnormalities Duplicate uterus Unicornate uterus Septate uterus Vaginal abnormalities Transverse vaginal septum Vertical vaginal septum Vaginal agenesis Mayer-Rokitansky-Küster-Hauser syndrome Vulvar abnormalities Labial hypoplasia Labial hypertrophy Congenital abnormalities of the male reproductive system Cryptorchidism Hypospadias Epispadias Endocrine Disorders Disorders due to hormone excess Polycystic ovarian syndrome (PCOS) Granulosa cell tumour Leydig cell tumour Teratoma Disorders due to hormone deficiency Hypogonadism Turner's syndrome Klinefelter's syndrome Disorders due to hormone hypersensitivity Idiopathic hirsutism Disorders due to hormone resistance Androgen insensitivity syndrome 5a-reductase deficiency Non-functioning endocrine tumours Ovarian cysts Carcinoma Teratoma Seminoma Secondary endocrine disorders (originating in the pituitary gland) Pituitary gonadotrophinoma Hypopituitarism Kallmann's syndrome Assessment and treatment Assessment and treatment of reproductive conditions is a key area of reproductive medicine. Female assessment starts with a full medical history (anamnesis) which provides details of the woman's general health, sexual history and relevant family history. A physical examination will also take place to identify abnormalities such as hirsutism, abdominal masses, infection, cysts or fibroids. A blood test can inform the clinician of the endocrine status of the patient. Progesterone levels are measured to check for ovulation, and other ovulatory hormones can also be measured. Imaging techniques such as pelvic ultrasounds can also be used to assess the internal anatomy. Male assessment also starts with a history and physical examination to look for any visible abnormalities. Investigations of semen samples also take place to assess the volume, motility and number of sperm, as well as identifying infections. Once the investigations are complete, treatment of identified conditions can occur. For fertility issues, this may involve assisted reproductive technology (ART) such as in-vitro fertilisation (IVF) or fertility medication. There are surgical methods that can be used as treatment however these are now performed less frequently due to the increasing success of the less invasive techniques. Treatment is also required for sexually transmitted infections (STIs). 
These can take the form of antibiotics for bacterial infections such as chlamydia, or highly active anti-retroviral therapy (HAART) for HIV. Education and training Before starting a career in reproductive medicine, individuals must first obtain an undergraduate degree. The next step is medical school, where they earn a Doctor of Medicine (MD) or Doctor of Osteopathic Medicine (DO) degree. Specialists in reproductive medicine usually undergo medical residency training in obstetrics and gynecology followed by medical fellowship training in reproductive endocrinology and infertility. An alternative path to practicing reproductive medicine after medical school involves a medical residency in urology, followed by a medical fellowship in male infertility. The education and training required to practice reproductive medicine is typically 15–16 years in duration. After completing medical fellowship, physicians can obtain board certification and must maintain continuing medical education (CME). CME is necessary in reproductive medicine as advancements in technology and treatment options require ongoing learning and skill development. For reproductive medicine specialists in contraception, other methods of training are possible. Specialists tend to be organized in specialty organizations such as the American Society for Reproductive Medicine (ASRM) and the European Society of Human Reproduction and Embryology (ESHRE). Anamnesis The anamnesis, or medical history taking, of issues related to reproductive or sexual medicine may be inhibited by a person's reluctance to disclose intimate or uncomfortable information. Even if such an issue is on the person's mind, they often do not start talking about it without the physician initiating the subject by a specific question about sexual or reproductive health. Some familiarity with the doctor generally makes it easier for a person to talk about intimate issues such as sexual subjects, but for some people, a very high degree of familiarity may make the person reluctant to reveal such intimate issues. When visiting a health provider about sexual issues, having both partners of a couple present is often necessary, and is typically a good thing, but may also prevent the disclosure of certain subjects, and, according to one report, increases the stress level. Ethical and medicolegal issues There are many ethical and legal issues surrounding reproductive medicine. In the UK, the Human Fertilisation and Embryology Authority (HFEA) regulates many aspects of reproductive medicine, including IVF, artificial insemination, storage of reproductive tissue and research in this field. The HFEA was established by the Human Fertilisation and Embryology Act (1990). This act was reviewed, and the Human Fertilisation and Embryology Act (2008) was passed through parliament as an update to the 1990 act. For therapies such as IVF, many countries have strict guidelines. In the UK, referrals are only given to women under 40 who have either undergone 12 cycles of artificial insemination, or have tried and failed to conceive for 2 years. While NICE recommends that NHS clinical commissioning groups (CCGs) provide 3 NHS-funded cycles of IVF, many only offer 1 cycle, with some only offering IVF in exceptional circumstances on the NHS. 
If an individual does not meet the criteria or has gone through the maximum number of NHS-funded cycles, they will have to pay for private treatment. Many reproductive technologies are seen to raise ethical problems, including IVF, mitochondrial replacement therapy, germline modification and preimplantation genetic diagnosis. There are many groups around the world which oppose ARTs, including religious groups and pro-life charities such as LIFE.
Biology and health sciences
Fields of medicine
Health
5989592
https://en.wikipedia.org/wiki/Maximum%20cardinality%20matching
Maximum cardinality matching
Maximum cardinality matching is a fundamental problem in graph theory. We are given a graph , and the goal is to find a matching containing as many edges as possible; that is, a maximum cardinality subset of the edges such that each vertex is adjacent to at most one edge of the subset. As each edge will cover exactly two vertices, this problem is equivalent to the task of finding a matching that covers as many vertices as possible. An important special case of the maximum cardinality matching problem is when is a bipartite graph, whose vertices are partitioned between left vertices in and right vertices in , and edges in always connect a left vertex to a right vertex. In this case, the problem can be efficiently solved with simpler algorithms than in the general case. Algorithms for bipartite graphs Flow-based algorithm The simplest way to compute a maximum cardinality matching is to follow the Ford–Fulkerson algorithm. This algorithm solves the more general problem of computing the maximum flow. A bipartite graph can be converted to a flow network as follows. Add a source vertex ; add an edge from to each vertex in . Add a sink vertex ; add an edge from each vertex in to . Assign a capacity of 1 to each edge. Since each edge in the network has integral capacity, there exists a maximum flow where all flows are integers; these integers must be either 0 or 1 since the all capacities are 1. Each integral flow defines a matching in which an edge is in the matching if and only if its flow is 1. It is a matching because: The incoming flow into each vertex in is at most 1, so the outgoing flow is at most 1 too, so at most one edge adjacent to each vertex in is present. The outgoing flow from each vertex in is at most 1, so the incoming flow is at most 1 too, so at most one edge adjacent to each vertex in is present. The Ford–Fulkerson algorithm proceeds by repeatedly finding an augmenting path from some to some and updating the matching by taking the symmetric difference of that path with (assuming such a path exists). As each path can be found in time, the running time is , and the maximum matching consists of the edges of that carry flow from to . Advanced algorithms An improvement to this algorithm is given by the more elaborate Hopcroft–Karp algorithm, which searches for multiple augmenting paths simultaneously. This algorithm runs in time. The algorithm of Chandran and Hochbaum for bipartite graphs runs in time that depends on the size of the maximum matching , which for is Using Boolean operations on words of size the complexity is further improved to More efficient algorithms exist for special kinds of bipartite graphs: For sparse bipartite graphs, the maximum matching problem can be solved in with Madry's algorithm based on electric flows. For planar bipartite graphs, the problem can be solved in time where is the number of vertices, by reducing the problem to maximum flow with multiple sources and sinks. Algorithms for arbitrary graphs The blossom algorithm finds a maximum-cardinality matching in general (not necessarily bipartite) graphs. It runs in time . A better performance of for general graphs, matching the performance of the Hopcroft–Karp algorithm on bipartite graphs, can be achieved with the much more complicated algorithm of Micali and Vazirani. The same bound was achieved by an algorithm by and an algorithm by Gabow and Tarjan. An alternative approach uses randomization and is based on the fast matrix multiplication algorithm. 
This gives a randomized algorithm for general graphs with complexity . This is better in theory for sufficiently dense graphs, but in practice the algorithm is slower. Other algorithms for the task are reviewed by Duan and Pettie (see Table I). In terms of approximation algorithms, they also point out that the blossom algorithm and the algorithms by Micali and Vazirani can be seen as approximation algorithms running in linear time for any fixed error bound. Applications and generalizations By finding a maximum-cardinality matching, it is possible to decide whether there exists a perfect matching. The problem of finding a matching with maximum weight in a weighted graph is called the maximum weight matching problem, and its restriction to bipartite graphs is called the assignment problem. If each vertex can be matched to several vertices at once, then this is a generalized assignment problem. A priority matching is a particular maximum-cardinality matching in which prioritized vertices are matched first. The problem of finding a maximum-cardinality matching in hypergraphs is NP-complete even for 3-uniform hypergraphs.
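As an illustration of the augmenting-path idea underlying the flow-based and Hopcroft–Karp algorithms for bipartite graphs described above, the sketch below repeatedly searches for an augmenting path from each unmatched left vertex and flips it. This simple variant runs in O(V·E) time; the graph representation and function names are illustrative choices, not taken from any particular library.

```python
# Sketch of maximum-cardinality bipartite matching by repeatedly searching for
# augmenting paths (the simple O(V*E) approach; Hopcroft-Karp is faster).
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side neighbours of left vertex u."""
    match_of_right = [-1] * n_right       # right vertex -> matched left vertex

    def try_augment(u, visited):
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                # Use v if it is free, or if its current partner can be rerouted.
                if match_of_right[v] == -1 or try_augment(match_of_right[v], visited):
                    match_of_right[v] = u
                    return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            size += 1
    return size

# Example: left vertices 0-3, right vertices 0-3.
adj = {0: [0, 1], 1: [0], 2: [1, 2], 3: [2, 3]}
print(max_bipartite_matching(adj, 4, 4))   # prints 4 (a perfect matching exists)
```

Each successful search corresponds to one unit of flow in the Ford–Fulkerson formulation above: taking the symmetric difference of the current matching with the augmenting path increases the matching size by exactly one.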
Mathematics
Graph theory
null
1162141
https://en.wikipedia.org/wiki/Facies
Facies
In geology, a facies ( , ; same pronunciation and spelling in the plural) is a body of rock with distinctive characteristics. The characteristics can be any observable attribute of rocks (such as their overall appearance, composition, or condition of formation) and the changes that may occur in those attributes over a geographic area. A facies encompasses all the characteristics of a rock including its chemical, physical, and biological features that distinguish it from adjacent rock. The term "facies" was introduced by the Swiss geologist Amanz Gressly in 1838 and was part of his significant contribution to the foundations of modern stratigraphy, which replaced the earlier notions of Neptunism. Walther's law Walther's law of facies, or simply Walther's law, named after the geologist Johannes Walther, states that the vertical succession of facies reflects lateral changes in environment. Conversely, it states that when a depositional environment "migrates" laterally, sediments of one depositional environment come to lie on top of another. In Russia the law is known as Golovkinsky-Walther's law, honoring also Nikolai A. Golovkinsky. A classic example of this law is the vertical stratigraphic succession that typifies marine transgressions and regressions. Types Sedimentary Ideally, a sedimentary facies is a distinctive rock unit that forms under certain conditions of sedimentation, reflecting a particular process or environment. Sedimentary facies are either descriptive or interpretative. Sedimentary facies are bodies of sediment that are recognizably distinct from adjacent sediments that resulted from different depositional environments. Generally, geologists distinguish facies by the aspect of the rock or sediment being studied. Facies based on petrological characters (such as grain size and mineralogy) are called lithofacies, whereas facies based on fossil content are called biofacies. A facies is usually further subdivided. The characteristics of the rock unit come from the depositional environment and from the original composition. Sedimentary facies reflect their depositional environment, each facies being a distinct kind of sediment for that area or environment. Since its inception in 1838, the facies concept has been extended to related geological concepts. For example, characteristic associations of organic microfossils, and particulate organic material, in rocks or sediments, are called palynofacies. Discrete seismic units are similarly referred to as seismic facies. Sedimentary facies are described in a group of "facies descriptors" which must be distinct, reproducible and exhaustive. A reliable facies description of an outcrop in the field would include: composition, texture, sedimentary structure(s), bedding geometry, nature of bedding contact, fossil content and colour. Metamorphic The sequence of minerals that develop during progressive metamorphism (that is, metamorphism at progressively higher temperatures and/or pressures) define a facies series.
Physical sciences
Stratigraphy
Earth science
1162170
https://en.wikipedia.org/wiki/Stratigraphic%20unit
Stratigraphic unit
A stratigraphic unit is a volume of rock of identifiable origin and relative age range that is defined by the distinctive and dominant, easily mapped and recognizable petrographic, lithologic or paleontologic features (facies) that characterize it. Units must be mappable and distinct from one another, but the contact need not be particularly distinct. For instance, a unit may be defined by terms such as "when the sandstone component exceeds 75%". Lithostratigraphic units Sequences of sedimentary and volcanic rocks are subdivided on the basis of their shared or associated lithology. Formally identified lithostratigraphic units are structured in a hierarchy of lithostratigraphic rank, higher rank units generally comprising two or more units of lower rank. Going from smaller to larger in rank, the main lithostratigraphic ranks are bed, member, formation, group and supergroup. Formal names of lithostratigraphic units are assigned by geological surveys. Units of formation or higher rank are usually named for the unit's type location, and the formal name usually also states the unit's rank or lithology. A lithostratigraphic unit may change in rank over some distance; a group may thin to a formation in another region, and a formation may reduce in rank to a member or bed as it "pinches out". Bed A bed is a lithologically distinct layer within a member or formation and is the smallest recognisable stratigraphic unit. These are not normally named, but may be in the case of a marker horizon. Member A member is a named lithologically distinct part of a formation. Not all formations are subdivided in this way and even where they are recognized, they may only form part of the formation. A member need not be mappable at the same scale as a formation. Formation Formations are the primary units used in the subdivision of a sequence and may vary in scale from tens of centimetres to kilometres. They should be distinct lithologically from other formations, although the boundaries do not need to be sharp. To be formally recognised, a formation must have sufficient extent to be useful in mapping an area. Group A group is a set of two or more formations that share certain lithological characteristics. A group may be made up of different formations in different geographical areas and individual formations may appear in more than one group. Groups are occasionally divided into subgroups, but subgroups are not mentioned in the North American Stratigraphic Code, and are permitted under International Commission on Stratigraphy guidelines only in exceptional circumstances. Supergroup A supergroup is a set of two or more associated groups and/or formations that share certain lithological characteristics. A supergroup may be made up of different groups in different geographical areas. Biostratigraphic units A sequence of fossil-bearing sedimentary rocks can be subdivided on the basis of the occurrence of particular fossil taxa. A unit defined in this way is known as a biostratigraphic unit, generally shortened to biozone. The five commonly used types of biozone are assemblage, range, abundance, interval and lineage zones. An assemblage zone is a stratigraphic interval characterised by an assemblage of three or more coexisting fossil taxa that distinguish it from surrounding strata. A range zone is a stratigraphic interval that represents the occurrence range of a specific fossil taxon, based on the localities where it has been recognised. 
An abundance zone is a stratigraphic interval in which the abundance of a particular taxon (or group of taxa) is significantly greater than seen in neighbouring parts of the succession. An interval zone is a stratigraphic interval whose top and base are defined by horizons that mark the first or last occurrence of two different taxa. A lineage zone is a stratigraphic interval that contains fossils that represent parts of the evolutionary lineage of a particular fossil group. This is a special case of a range zone.
Physical sciences
Stratigraphy
Earth science
1162226
https://en.wikipedia.org/wiki/Potts%20model
Potts model
In statistical mechanics, the Potts model, a generalization of the Ising model, is a model of interacting spins on a crystalline lattice. By studying the Potts model, one may gain insight into the behaviour of ferromagnets and certain other phenomena of solid-state physics. The strength of the Potts model is not so much that it models these physical systems well; it is rather that the one-dimensional case is exactly solvable, and that it has a rich mathematical formulation that has been studied extensively. The model is named after Renfrey Potts, who described the model near the end of his 1951 Ph.D. thesis. The model was related to the "planar Potts" or "clock model", which was suggested to him by his advisor, Cyril Domb. The four-state Potts model is sometimes known as the Ashkin–Teller model, after Julius Ashkin and Edward Teller, who considered an equivalent model in 1943. The Potts model is related to, and generalized by, several other models, including the XY model, the Heisenberg model and the N-vector model. The infinite-range Potts model is known as the Kac model. When the spins are taken to interact in a non-Abelian manner, the model is related to the flux tube model, which is used to discuss confinement in quantum chromodynamics. Generalizations of the Potts model have also been used to model grain growth in metals, coarsening in foams, and statistical properties of proteins. A further generalization of these methods by James Glazier and Francois Graner, known as the cellular Potts model, has been used to simulate static and kinetic phenomena in foam and biological morphogenesis. Definition Vector Potts model The Potts model consists of spins that are placed on a lattice; the lattice is usually taken to be a two-dimensional rectangular Euclidean lattice, but is often generalized to other dimensions and lattice structures. Originally, Domb suggested that the spin takes one of possible values , distributed uniformly about the circle, at angles where and that the interaction Hamiltonian is given by with the sum running over the nearest neighbor pairs over all lattice sites, and is a coupling constant, determining the interaction strength. This model is now known as the vector Potts model or the clock model. Potts provided the location in two dimensions of the phase transition for . In the limit , this becomes the XY model. Standard Potts model What is now known as the standard Potts model was suggested by Potts in the course of his study of the model above and is defined by a simpler Hamiltonian: where is the Kronecker delta, which equals one whenever and zero otherwise. The standard Potts model is equivalent to the Ising model and the 2-state vector Potts model, with . The standard Potts model is equivalent to the three-state vector Potts model, with . Generalized Potts model A generalization of the Potts model is often used in statistical inference and biophysics, particularly for modelling proteins through direct coupling analysis. This generalized Potts model consists of 'spins' that each may take on states: (with no particular ordering). The Hamiltonian is, where is the energetic cost of spin being in state while spin is in state , and is the energetic cost of spin being in state . Note: . This model resembles the Sherrington-Kirkpatrick model in that couplings can be heterogeneous and non-local. There is no explicit lattice structure in this model. 
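Returning to the standard lattice model, the following sketch gives a concrete illustration of the Hamiltonian H = −J Σ δ(s_i, s_j), evaluated over nearest-neighbour pairs of a q-state configuration on a small square lattice with periodic boundaries. The lattice size, coupling constant and value of q are illustrative choices.

```python
import numpy as np

# Sketch: energy of a q-state standard Potts configuration on an L x L square
# lattice with periodic boundaries, H = -J * (number of aligned nearest-
# neighbour pairs). The values of q, L and J are illustrative.
def potts_energy(spins, J=1.0):
    right = np.roll(spins, -1, axis=1)    # neighbour to the right (wraps around)
    down = np.roll(spins, -1, axis=0)     # neighbour below (wraps around)
    aligned_bonds = np.count_nonzero(spins == right) + np.count_nonzero(spins == down)
    return -J * aligned_bonds

rng = np.random.default_rng(seed=0)
q, L = 3, 8
random_config = rng.integers(0, q, size=(L, L))
ordered_config = np.zeros((L, L), dtype=int)     # all spins in the same state

print("random configuration:", potts_energy(random_config))
print("fully ordered state :", potts_energy(ordered_config))   # -J * 2 * L * L
```

The fully ordered configuration satisfies every one of the 2L² bonds and therefore minimises the Hamiltonian, which is why the ferromagnetic (J > 0) model orders at low temperature.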
Physical properties Phase transitions Despite its simplicity as a model of a physical system, the Potts model is useful as a model system for the study of phase transitions. For example, for the standard ferromagnetic Potts model in two dimensions, a phase transition exists for all real values q \ge 1, with the critical point at \beta J = \log(1 + \sqrt{q}). The phase transition is continuous (second order) for 1 \le q \le 4 and discontinuous (first order) for q > 4. For the clock model, there is evidence that the corresponding phase transitions are infinite order BKT transitions, and a continuous phase transition is observed when q \le 4. Further use is found through the model's relation to percolation problems and the Tutte and chromatic polynomials found in combinatorics. For integer values of q \ge 3, the model displays the phenomenon of 'interfacial adsorption' with intriguing critical wetting properties when fixing opposite boundaries in two different states. Relation with the random cluster model The Potts model has a close relation to the Fortuin-Kasteleyn random cluster model, another model in statistical mechanics. Understanding this relationship has helped develop efficient Markov chain Monte Carlo methods for numerical exploration of the model at small q, and led to the rigorous proof of the critical temperature of the model. At the level of the partition function Z_p = \sum_{\{s_i\}} e^{-\beta H_p}, the relation amounts to transforming the sum over spin configurations \{s_i\} into a sum over edge configurations \omega, i.e. sets of nearest neighbor pairs of the same color. The transformation is done using the identity e^{\beta J_p \delta(s_i, s_j)} = 1 + v\,\delta(s_i, s_j), with v = e^{\beta J_p} - 1. This leads to rewriting the partition function as Z_p = \sum_\omega v^{\#\mathrm{edges}(\omega)}\, q^{\#\mathrm{clusters}(\omega)}, where the FK clusters are the connected components of the union of closed segments \bigcup_{(i,j)\in\omega} [i,j]. This is proportional to the partition function of the random cluster model with the open edge probability p = \frac{v}{1+v} = 1 - e^{-\beta J_p}. An advantage of the random cluster formulation is that q can be an arbitrary complex number, rather than a natural integer. Alternatively, instead of FK clusters, the model can be formulated in terms of spin clusters, using the identity e^{\beta J_p \delta(s_i, s_j)} = e^{\beta J_p}\left(1 - (1 - e^{-\beta J_p})(1 - \delta(s_i, s_j))\right). A spin cluster is the union of neighbouring FK clusters with the same color: two neighbouring spin clusters have different colors, while two neighbouring FK clusters are colored independently. Measure-theoretic description The one dimensional Potts model may be expressed in terms of a subshift of finite type, and thus gains access to all of the mathematical techniques associated with this formalism. In particular, it can be solved exactly using the techniques of transfer operators. (However, Ernst Ising used combinatorial methods to solve the Ising model, which is the "ancestor" of the Potts model, in his 1924 PhD thesis). This section develops the mathematical formalism, based on measure theory, behind this solution. While the example below is developed for the one-dimensional case, many of the arguments, and almost all of the notation, generalizes easily to any number of dimensions. Some of the formalism is also broad enough to handle related models, such as the XY model, the Heisenberg model and the N-vector model. Topology of the space of states Let Q = {1, ..., q} be a finite set of symbols, and let Q^Z be the set of all bi-infinite strings of values from the set Q. This set is called a full shift. For defining the Potts model, either this whole space, or a certain subset of it, a subshift of finite type, may be used.
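The random-cluster rewriting quoted above can be checked numerically on a tiny graph, where the spin sum and the edge-configuration sum can both be enumerated exactly. The sketch below is illustrative only: the four-site cycle, the value of q and the coupling are arbitrary choices, and the helper name is an assumption. It also prints the square-lattice critical coupling log(1 + sqrt(q)) quoted above.

```python
import itertools
import numpy as np

def n_clusters(n_sites, edges):
    """Connected components (including isolated sites) via a tiny union-find."""
    parent = list(range(n_sites))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(x) for x in range(n_sites)})

q, beta_J = 3, 0.7
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # a 4-site cycle, purely illustrative
v = np.exp(beta_J) - 1.0

# spin sum: sum over all q^4 configurations of prod_edges exp(beta*J*delta)
Z_spin = sum(np.prod([np.exp(beta_J * (s[a] == s[b])) for a, b in edges])
             for s in itertools.product(range(q), repeat=4))

# edge sum: sum over all 2^4 edge subsets omega of v^{#edges} * q^{#clusters}
Z_fk = sum(v ** len(omega) * q ** n_clusters(4, omega)
           for r in range(len(edges) + 1)
           for omega in itertools.combinations(edges, r))

print(Z_spin, Z_fk)                      # the two sums agree up to rounding
print(np.log(1 + np.sqrt(q)))            # critical beta*J of the 2d square-lattice model
```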
Shifts get this name because there exists a natural operator on this space, the shift operator τ : Q^Z → Q^Z, acting as (\tau s)_k = s_{k+1}. This set has a natural product topology; the base for this topology are the cylinder sets C_t[\xi_0, \ldots, \xi_k] = \{ s \in Q^Z : s_t = \xi_0, \ldots, s_{t+k} = \xi_k \}, that is, the set of all possible strings where k+1 spins match up exactly to a given, specific set of values ξ_0, ..., ξ_k. Explicit representations for the cylinder sets can be gotten by noting that the string of values corresponds to a q-adic number; however, the natural topology of the q-adic numbers is finer than the above product topology. Interaction energy The interaction between the spins is then given by a continuous function V : Q^Z → R on this topology. Any continuous function will do; for example V(s) = -J\,\delta(s_0, s_1) will be seen to describe the interaction between nearest neighbors. Of course, different functions give different interactions; so a function of s_0, s_1 and s_2 will describe a next-nearest neighbor interaction. A function V gives interaction energy between a set of spins; it is not the Hamiltonian, but is used to build it. The argument to the function V is an element s ∈ Q^Z, that is, an infinite string of spins. In the above example, the function V just picked out two spins out of the infinite string: the values s_0 and s_1. In general, the function V may depend on some or all of the spins; currently, only those that depend on a finite number are exactly solvable. Define the function H_n : Q^Z → R as H_n(s) = \sum_{k=0}^{n} V(\tau^k s). This function can be seen to consist of two parts: the self-energy of a configuration [s_0, s_1, ..., s_n] of spins, plus the interaction energy of this set and all the other spins in the lattice. The limit of this function as n → ∞ is the Hamiltonian of the system; for finite n, these are sometimes called the finite state Hamiltonians. Partition function and measure The corresponding finite-state partition function is given by Z_n(V) = \sum_{\sigma_0, \ldots, \sigma_n \in Q} \exp\left(-\beta H_n(C_0[\sigma_0, \ldots, \sigma_n])\right), with C_0 being the cylinder sets defined above. Here, β = 1/kT, where k is the Boltzmann constant, and T is the temperature. It is very common in mathematical treatments to set β = 1, as it is easily regained by rescaling the interaction energy. This partition function is written as a function of the interaction V to emphasize that it is only a function of the interaction, and not of any specific configuration of spins. The partition function, together with the Hamiltonian, are used to define a measure on the Borel σ-algebra in the following way: The measure of a cylinder set, i.e. an element of the base, is given by \mu(C_k[\xi_0, \ldots, \xi_k]) = \frac{1}{Z_k(V)} \exp\left(-\beta H_k(C_k[\xi_0, \ldots, \xi_k])\right). One can then extend by countable additivity to the full σ-algebra. This measure is a probability measure; it gives the likelihood of a given configuration occurring in the configuration space Q^Z. By endowing the configuration space with a probability measure built from a Hamiltonian in this way, the configuration space turns into a canonical ensemble. Most thermodynamic properties can be expressed directly in terms of the partition function. Thus, for example, the Helmholtz free energy is given by A_n(V) = -kT \log Z_n(V). Another important related quantity is the topological pressure, defined as P(V) = \lim_{n\to\infty} \frac{1}{n} \log Z_n(V), which will show up as the logarithm of the leading eigenvalue of the transfer operator of the solution. Free field solution The simplest model is the model where there is no interaction at all, and so V = c and Hn = c (with c constant and independent of any spin configuration).
The partition function becomes Z_n(V) = \sum_{\sigma_0, \ldots, \sigma_n \in Q} e^{-\beta c}. If all states are allowed, that is, the underlying set of states is given by a full shift, then the sum may be trivially evaluated as Z_n(V) = q^{n+1} e^{-\beta c}. If neighboring spins are only allowed in certain specific configurations, then the state space is given by a subshift of finite type. The partition function may then be written as Z_n(V) = e^{-\beta c}\,\mathrm{card}(\mathrm{Fix}(\tau^n)) = e^{-\beta c}\,\mathrm{Tr}(A^n), where card is the cardinality or count of a set, and Fix is the set of fixed points of the iterated shift function: \mathrm{Fix}(\tau^n) = \{ s \in Q^Z : \tau^n s = s \}. The q × q matrix A is the adjacency matrix specifying which neighboring spin values are allowed. Interacting model The simplest case of the interacting model is the Ising model, where the spin can only take on one of two values, s_n ∈ {−1, 1}, and only nearest neighbor spins interact. The interaction potential is given by V(s) = -J_p\, s_0 s_1. This potential can be captured in a 2 × 2 matrix with matrix elements M_{\sigma\sigma'} = e^{\beta J_p \sigma\sigma'}, with the index σ, σ′ ∈ {−1, 1}. The partition function is then given by Z_n(V) = \mathrm{Tr}(M^n). The general solution for an arbitrary number of spins, and an arbitrary finite-range interaction, is given by the same general form. In this case, the precise expression for the matrix M is a bit more complex. The goal of solving a model such as the Potts model is to give an exact closed-form expression for the partition function and an expression for the Gibbs states or equilibrium states in the limit of n → ∞, the thermodynamic limit. Applications Signal and image processing The Potts model has applications in signal reconstruction. Assume that we are given a noisy observation of a piecewise constant signal g in R^n. To recover g from the noisy observation vector f in R^n, one seeks a minimizer of the corresponding inverse problem, the L^p-Potts functional P_γ(u), which is defined by P_\gamma(u) = \gamma\,\| \nabla u \|_0 + \| u - f \|_p^p = \gamma\,\#\{ i : u_i \neq u_{i+1} \} + \sum_{i=1}^{n} |u_i - f_i|^p. The jump penalty \gamma\,\| \nabla u \|_0 forces piecewise constant solutions and the data term \| u - f \|_p^p couples the minimizing candidate u to the data f. The parameter γ > 0 controls the tradeoff between regularity and data fidelity. There are fast algorithms for the exact minimization of the L1 and the L2-Potts functional. In image processing, the Potts functional is related to the segmentation problem. However, in two dimensions the problem is NP-hard.
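For the one-dimensional signal-reconstruction problem just described, the exact minimizer of the L2-Potts functional can be computed by a classical O(n^2) dynamic program over the position of the last jump. The sketch below is illustrative; the function name and this particular formulation are assumptions rather than a specific published implementation.

```python
import numpy as np

def potts_l2_denoise(f, gamma):
    """Exact minimizer of gamma * (#jumps of u) + sum_i (u_i - f_i)^2
    for a 1-D signal f, via dynamic programming over the last jump position."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    # prefix sums give the squared error of the best constant fit on f[l..r] in O(1)
    s1 = np.concatenate(([0.0], np.cumsum(f)))
    s2 = np.concatenate(([0.0], np.cumsum(f * f)))

    def seg_err(l, r):  # 0-based, inclusive
        m = r - l + 1
        tot = s1[r + 1] - s1[l]
        return (s2[r + 1] - s2[l]) - tot * tot / m

    best = np.empty(n + 1)        # best[r] = optimal value for the prefix f[0..r-1]
    jump = np.empty(n + 1, int)   # start index of the last segment, for backtracking
    best[0] = -gamma              # so the first segment carries no jump penalty
    for r in range(1, n + 1):
        cands = [best[l] + gamma + seg_err(l, r - 1) for l in range(r)]
        l_star = int(np.argmin(cands))
        best[r], jump[r] = cands[l_star], l_star

    # backtrack: fill each recovered segment with its mean
    u, r = np.empty(n), n
    while r > 0:
        l = jump[r]
        u[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return u

# illustrative usage on a noisy two-level step signal
noisy = np.r_[np.zeros(60), 2 * np.ones(40)] + 0.3 * np.random.default_rng(1).standard_normal(100)
denoised = potts_l2_denoise(noisy, gamma=2.0)
```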
Physical sciences
Magnetostatics
Physics
1162781
https://en.wikipedia.org/wiki/Chemical%20reactor
Chemical reactor
A chemical reactor is an enclosed volume in which a chemical reaction takes place. In chemical engineering, it is generally understood to be a process vessel used to carry out a chemical reaction, which is one of the classic unit operations in chemical process analysis. The design of a chemical reactor deals with multiple aspects of chemical engineering. Chemical engineers design reactors to maximize net present value for the given reaction. Designers ensure that the reaction proceeds with the highest efficiency towards the desired output product, producing the highest yield of product while requiring the least amount of money to purchase and operate. Normal operating expenses include energy input, energy removal, raw material costs, labor, etc. Energy changes can come in the form of heating or cooling, pumping to increase pressure, frictional pressure loss or agitation. Chemical reaction engineering is the branch of chemical engineering which deals with chemical reactors and their design, especially by application of chemical kinetics to industrial systems. Overview The most common basic types of chemical reactors are tanks (where the reactants mix in the whole volume) and pipes or tubes (for laminar flow reactors and plug flow reactors). Both types can be used as continuous reactors or batch reactors, and either may accommodate one or more solids (reagents, catalysts, or inert materials), but the reagents and products are typically fluids (liquids or gases). Reactors in continuous processes are typically run at steady-state, whereas reactors in batch processes are necessarily operated in a transient state. When a reactor is brought into operation, either for the first time or after a shutdown, it is in a transient state, and key process variables change with time. There are three idealised models used to estimate the most important process variables of different chemical reactors: Batch reactor model, Continuous stirred-tank reactor model (CSTR), and Plug flow reactor model (PFR). Many real-world reactors can be modeled as a combination of these basic types. Key process variables include: Residence time (τ, lower case Greek tau) Volume (V) Temperature (T) Pressure (P) Concentrations of chemical species (C1, C2, C3, ... Cn) Heat transfer coefficients (h, U) A tubular reactor can often be a packed bed. In this case, the tube or channel contains particles or pellets, usually a solid catalyst. The reactants, in liquid or gas phase, are pumped through the catalyst bed. A chemical reactor may also be a fluidized bed; see Fluidized bed reactor. Chemical reactions occurring in a reactor may be exothermic, meaning giving off heat, or endothermic, meaning absorbing heat. A tank reactor may have a cooling or heating jacket or cooling or heating coils (tubes) wrapped around the outside of its vessel wall to cool down or heat up the contents, while tubular reactors can be designed like heat exchangers if the reaction is strongly exothermic, or like furnaces if the reaction is strongly endothermic. Types Batch reactor The simplest type of reactor is a batch reactor. Materials are loaded into a batch reactor, and the reaction proceeds with time. A batch reactor does not reach a steady state, and control of temperature, pressure and volume is often necessary. Many batch reactors therefore have ports for sensors and material input and output. Batch reactors are typically used in small-scale production and reactions with biological materials, such as in brewing, pulping, and production of enzymes.
One example of a batch reactor is a pressure reactor. CSTR (continuous stirred-tank reactor) In a CSTR, one or more fluid reagents are introduced into a tank reactor which is typically stirred with an impeller to ensure proper mixing of the reagents while the reactor effluent is removed. Dividing the volume of the tank by the average volumetric flow rate through the tank gives the space time, or the time required to process one reactor volume of fluid. Using chemical kinetics, the reaction's expected percent completion can be calculated. Some important aspects of the CSTR: At steady-state, the mass flow rate in must equal the mass flow rate out, otherwise the tank will overflow or go empty (transient state). While the reactor is in a transient state the model equation must be derived from the differential mass and energy balances. The reaction proceeds at the reaction rate associated with the final (output) concentration, since the concentration is assumed to be homogeneous throughout the reactor. Often, it is economically beneficial to operate several CSTRs in series. This allows, for example, the first CSTR to operate at a higher reagent concentration and therefore a higher reaction rate. In these cases, the sizes of the reactors may be varied in order to minimize the total capital investment required to implement the process. It can be demonstrated that an infinite number of infinitely small CSTRs operating in series would be equivalent to a PFR. The behavior of a CSTR is often approximated or modeled by that of a Continuous Ideally Stirred-Tank Reactor (CISTR). All calculations performed with CISTRs assume perfect mixing. If the residence time is 5–10 times the mixing time, this approximation is considered valid for engineering purposes. The CISTR model is often used to simplify engineering calculations and can be used to describe research reactors. In practice it can only be approached, particularly in industrial size reactors in which the mixing time may be very large. A loop reactor is a hybrid type of catalytic reactor that physically resembles a tubular reactor, but operates like a CSTR. The reaction mixture is circulated in a loop of tube, surrounded by a jacket for cooling or heating, and there is a continuous flow of starting material in and product out. PFR (plug flow reactor) In a PFR, sometimes called continuous tubular reactor (CTR), one or more fluid reagents are pumped through a pipe or tube. The chemical reaction proceeds as the reagents travel through the PFR. In this type of reactor, the changing reaction rate creates a gradient with respect to distance traversed; at the inlet to the PFR the rate is very high, but as the concentrations of the reagents decrease and the concentration of the product(s) increases the reaction rate slows. Some important aspects of the PFR: The idealized PFR model assumes no axial mixing: any element of fluid traveling through the reactor doesn't mix with fluid upstream or downstream from it, as implied by the term "plug flow". Reagents may be introduced into the PFR at locations in the reactor other than the inlet. In this way, a higher efficiency may be obtained, or the size and cost of the PFR may be reduced. A PFR has a higher theoretical efficiency than a CSTR of the same volume. That is, given the same space-time (or residence time), a reaction will proceed to a higher percentage completion in a PFR than in a CSTR. This is not always true for reversible reactions.
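The contrast between a single CSTR, several CSTRs in series and a PFR can be made quantitative for the simplest case. The sketch below assumes an isothermal, constant-density, irreversible first-order reaction A → products with rate r = kC_A and space time τ = V/Q; the numerical values are illustrative, not from the article.

```python
import numpy as np

def conversion_cstr(k, tau):
    """Single ideal CSTR, first-order kinetics: X = k*tau / (1 + k*tau)."""
    return k * tau / (1.0 + k * tau)

def conversion_pfr(k, tau):
    """Ideal PFR, first-order kinetics: X = 1 - exp(-k*tau)."""
    return 1.0 - np.exp(-k * tau)

def conversion_cstr_series(k, tau, n):
    """n equal CSTRs in series sharing the same total space time tau."""
    return 1.0 - 1.0 / (1.0 + k * tau / n) ** n

k, tau = 0.5, 4.0                          # rate constant [1/min], space time [min]
print(conversion_cstr(k, tau))             # ~0.667 for a single CSTR
print(conversion_pfr(k, tau))              # ~0.865 for a PFR of the same space time
for n in (1, 2, 5, 50):
    print(n, conversion_cstr_series(k, tau, n))   # approaches the PFR value as n grows
```

The series calculation illustrates the statement above that an infinite number of infinitely small CSTRs in series behaves like a PFR.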
For most chemical reactions of industrial interest, it is impossible for the reaction to proceed to 100% completion. The rate of reaction decreases as the reactants are consumed until the point where the system reaches dynamic equilibrium (no net reaction, or change in chemical species occurs). The equilibrium point for most systems is less than 100% complete. For this reason a separation process, such as distillation, often follows a chemical reactor in order to separate any remaining reagents or byproducts from the desired product. These reagents may sometimes be reused at the beginning of the process, such as in the Haber process. In some cases, very large reactors would be necessary to approach equilibrium, and chemical engineers may choose to separate the partially reacted mixture and recycle the leftover reactants. Under laminar flow conditions, the assumption of plug flow is highly inaccurate, as the fluid traveling through the center of the tube moves much faster than the fluid at the wall. The continuous oscillatory baffled reactor (COBR) achieves thorough mixing by the combination of fluid oscillation and orifice baffles, allowing plug flow to be approximated under laminar flow conditions. Semibatch reactor A semibatch reactor is operated with both continuous and batch inputs and outputs. A fermenter, for example, is loaded with a batch of medium and microbes which constantly produces carbon dioxide that must be removed continuously. Similarly, reacting a gas with a liquid is usually difficult, because a large volume of gas is required to react with an equal mass of liquid. To overcome this problem, a continuous feed of gas can be bubbled through a batch of a liquid. In general, in semibatch operation, one chemical reactant is loaded into the reactor and a second chemical is added slowly (for instance, to prevent side reactions), or a product which results from a phase change is continuously removed, for example a gas formed by the reaction, a solid that precipitates out, or a hydrophobic product that forms in an aqueous solution. Catalytic reactor Although catalytic reactors are often implemented as plug flow reactors, their analysis requires more complicated treatment. The rate of a catalytic reaction is proportional to the amount of catalyst the reagents contact, as well as the concentration of the reactants. With a solid phase catalyst and fluid phase reagents, this is proportional to the exposed area, efficiency of diffusion of reagents in and products out, and efficacy of mixing. Perfect mixing usually cannot be assumed. Furthermore, a catalytic reaction pathway often occurs in multiple steps with intermediates that are chemically bound to the catalyst; and as the chemical binding to the catalyst is also a chemical reaction, it may affect the kinetics. Catalytic reactions often display so-called falsified kinetics, when the apparent kinetics differ from the actual chemical kinetics due to physical transport effects. The behavior of the catalyst is also a consideration. Particularly in high-temperature petrochemical processes, catalysts are deactivated by processes such as sintering, coking, and poisoning. A common example of a catalytic reactor is the catalytic converter that processes toxic components of automobile exhausts. 
However, most petrochemical reactors are catalytic, and are responsible for most industrial chemical production, with extremely high-volume examples including sulfuric acid, ammonia, reformate/BTEX (benzene, toluene, ethylbenzene and xylene), and fluid catalytic cracking. Various configurations are possible, see Heterogeneous catalytic reactor.
Physical sciences
Chemical engineering
Chemistry
1162949
https://en.wikipedia.org/wiki/Pedetes
Pedetes
Pedetes is a genus of rodent, the springhares, in the family Pedetidae. Members of the genus are distributed across southern and eastern Africa. Species A number of species both extant and extinct are classified in the genus Pedetes. They include: South African springhare or springhaas (Pedetes capensis) East African springhare (Pedetes surdaster) Pedetes laetoliensis (Davies, 1987) (Pliocene fossil) Throughout the 20th century, the living species (and occasionally the prehistoric one) were merged into P. capensis, making the genus monotypic. Ecology These rodents are generally nocturnal and sleep through the day in burrows they dig. They feed on foliage, roots and other vegetable matter, and occasionally arthropods. Outside the burrow they usually move around by hopping on their hind legs. When only one springhare species was recognized, it was listed as vulnerable by the IUCN in 1996 due to an approximately 20% decrease in the population over the previous ten years. This has been caused by intense hunting and the loss of habitat. However, the negative trend has not persisted, and both species are now listed as Species of Least Concern. The coat of these rodents is known to fluoresce when viewed under black light. Vocalisations This rodent has a range of vocalizations at its disposal. They can grunt and bleat. They also have a piping contact call.
Biology and health sciences
Rodents
Animals
1163049
https://en.wikipedia.org/wiki/Lumen%20%28unit%29
Lumen (unit)
The lumen (symbol: lm) is the unit of luminous flux, a measure of the perceived power of visible light emitted by a source, in the International System of Units (SI). Luminous flux differs from power (radiant flux), which encompasses all electromagnetic waves emitted, including non-visible ones such as thermal radiation (infrared). By contrast, luminous flux is weighted according to a model (a "luminosity function") of the human eye's sensitivity to various wavelengths; this weighting is standardized by the CIE and ISO. The lumen is defined as equivalent to one candela-steradian (symbol cd·sr): 1 lm = 1 cd·sr. A full sphere has a solid angle of 4π steradians (≈ 12.56637 sr), so an isotropic light source (that uniformly radiates in all directions) with a luminous intensity of one candela has a total luminous flux of 4π lumens ≈ 12.57 lm. One lux is one lumen per square metre. Explanation If a light source emits one candela of luminous intensity uniformly across a solid angle of one steradian, the total luminous flux emitted into that angle is one lumen (1 cd·1 sr = 1 lm). Alternatively, an isotropic one-candela light-source emits a total luminous flux of exactly 4π lumens. If the source were partly covered by an ideal absorbing hemisphere, that system would radiate half as much luminous flux—only 2π lumens. The luminous intensity would still be one candela in those directions that are not obscured. The lumen can be thought of casually as a measure of the total amount of visible light in some defined beam or angle, or emitted from some source. The number of candelas or lumens from a source also depends on its spectrum, via the nominal response of the human eye as represented in the luminosity function. The difference between the units lumen and lux is that the lux takes into account the area over which the luminous flux is spread. A flux of 1,000 lumens, concentrated into an area of one square metre, lights up that square metre with an illuminance of 1,000 lux. The same 1,000 lumens, spread out over ten square metres, produces a dimmer illuminance of only 100 lux. In equation form, 1 lx = 1 lm/m². A source radiating a power of one watt of light in the color for which the eye is most efficient (a wavelength of 555 nm, in the green region of the optical spectrum) has luminous flux of 683 lumens. So a lumen represents at least 1/683 watts of visible light power, depending on the spectral distribution. Lighting Lamps used for lighting are commonly labelled with their light output in lumens; in many jurisdictions, this is required by law. A 23 W spiral compact fluorescent lamp emits about 1,400–1,600 lm. Many compact fluorescent lamps and other alternative light sources are labelled as being equivalent to an incandescent bulb with a specific power. Below is a table that shows typical luminous flux for common incandescent bulbs and their equivalents. The typical luminous efficacy of fluorescent lighting systems is 50–100 lumens per watt. On 1 September 2010, European Union legislation came into force mandating that lighting equipment must be labelled primarily in terms of luminous flux (lm), instead of electric power (W). That change is a result of the EU's Eco-design Directive for Energy-using Products (EuP). For example, according to the European Union standard, an energy-efficient bulb that claims to be the equivalent of a 60 W tungsten bulb must have a minimum light output of 700–810 lm. Projector output ANSI lumens The light output of projectors (including video projectors) is typically measured in lumens.
A standardized procedure for testing projectors has been established by the American National Standards Institute, which involves averaging together several measurements taken at different positions. For marketing purposes, the luminous flux of projectors that have been tested according to this procedure may be quoted in "ANSI lumens", to distinguish them from those tested by other methods. ANSI lumen measurements are in general more accurate than the other measurement techniques used in the projector industry. This allows projectors to be more easily compared on the basis of their brightness specifications. The method for measuring ANSI lumens is defined in the IT7.215 document which was created in 1992. First, the projector is set up to display an image in a room at a specified ambient temperature. The brightness and contrast of the projector are adjusted so that on a full white field, it is possible to distinguish between a 5% screen area block of 95% peak white, and two identically sized 100% and 90% peak white boxes at the center of the white field. The light output is then measured on a full white field at nine specific locations around the screen and averaged. This average is then multiplied by the screen area to give the brightness of the projector in "ANSI lumens". Peak lumens Peak lumens is a measure of light output normally used with CRT video projectors. The testing uses a test pattern with typically either 10 or 20 percent of the image area as white at the center of the screen, and the rest black. The light output is measured just in this center area. Limitations with CRT video projectors result in them producing greater brightness when just a fraction of the image content is at peak brightness. For example, the Sony VPH-G70Q CRT video projector produces 1200 "peak" lumens but just 200 ANSI lumens. Color light output Brightness (white light output) measures the total amount of light projected in lumens. The color brightness specification Color Light Output measures red, green, and blue each on a nine-point grid, using the same approach as that used to measure brightness.
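The unit relations and the ANSI averaging procedure described above reduce to a few lines of arithmetic. The sketch below is illustrative only; the nine illuminance readings and the screen area are made-up placeholder values, not measured data.

```python
import math

def flux_from_intensity(candela, solid_angle_sr):
    """Luminous flux (lm) of a source of given intensity over a solid angle (1 lm = 1 cd*sr)."""
    return candela * solid_angle_sr

def illuminance(lumens, area_m2):
    """Illuminance (lx) when a flux is spread evenly over an area (1 lx = 1 lm/m^2)."""
    return lumens / area_m2

# an isotropic 1 cd source radiates into the full sphere of 4*pi steradians
print(flux_from_intensity(1.0, 4 * math.pi))        # ~12.57 lm

# 1000 lm concentrated on 1 m^2 versus spread over 10 m^2
print(illuminance(1000, 1), illuminance(1000, 10))  # 1000 lx, 100 lx

# ANSI lumens: average of nine illuminance readings multiplied by the screen area
readings_lx = [480, 500, 510, 495, 505, 490, 500, 515, 485]   # placeholder values
screen_area_m2 = 1.2                                          # placeholder value
print(sum(readings_lx) / len(readings_lx) * screen_area_m2)   # projector flux in lm
```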
Physical sciences
Light
null
1163775
https://en.wikipedia.org/wiki/Black%20ghost%20knifefish
Black ghost knifefish
The black ghost knifefish (Apteronotus albifrons) is a tropical fish belonging to the ghost knifefish family (Apteronotidae). They originate in freshwater habitats in South America where they range from Venezuela to the Paraguay–Paraná River, including the Amazon Basin. They are popular in aquaria. The fish is all black except for two white rings on its tail, and a white blaze on its nose, which can occasionally extend into a stripe down its back. It moves mainly by undulating a long fin on its underside. It will grow to a length of . Black ghost knifefish are nocturnal. They are a weakly electric fish which use an electric organ and receptors distributed over the length of their body in order to locate prey, including insect larvae. Natural habitat The black ghost knifefish natively lives in sandy bottom creeks in South America. Natives believe that the ghosts of the departed take up residence in these fish, hence the name. In aquaria Black ghost knifefish require a medium sized tank of when smaller, though as they grow larger will require a much larger tank of around . A lid is needed as they have been known to jump out. Black Ghost knifefish get to a maximum size of in the wild, but usually stop growing at in home aquaria, although they may grow to in a larger tank. They should be provided with a shelter (such as a plastic tube or driftwood) in which to hide. They prefer a dimly lit tank as their eyesight is optimized for low light. Black ghost knife fish are weakly electric fish. They will eat smaller fish in the tank and are intolerant of conspecifics. As with other scaleless fish, they are vulnerable to parasite infestations such as ich (Ichthyophthirius multifiliis). They can reproduce in captivity, although there are only a few reports of Black Ghost Knifefish reproducing. It is possible to use a device to convert a captive fish's electrical signals into audible sound, allowing listeners to hear the fish "talk". The Bakken Museum in Minneapolis has a display with such a device and a black ghost knifefish. Electricity The black ghost knifefish is a weakly electric fish as a result of the electromotor and electrosensory systems it possesses. While some fish can only receive electric signals, the black ghost knifefish can both produce and sense the electrical impulses. Electrogenesis occurs when a specialized electric organ found in the tail of the fish generates electrical signals, which are thus called electric organ discharges (EODs). Then, for these EODs to be sensed by the fish, electroreception occurs when groups of sensory cells embedded in the skin, known as electroreceptor organs, detect the electrical change. The EODs are used for two major purposes: electrolocation and communication. The kind of EOD produced can be used to distinguish between two types of weakly electric fish: the pulse-type and the wave-type. The black ghost knifefish are considered to be the latter type, because they can continuously generate EODs in small intervals. Wave-type EODs have a narrow power spectra, and can be heard as a tonal sound, where the discharge rate establishes the fundamental frequency. By emitting its own continuous sinusoidal train of EODs, the fish can determine the presence of nearby objects by sensing perturbations in timing and amplitude of electric fields, an ability known as active electrolocation. The particular organs used to sense the self-generated high-frequency EODs are tuberous electroreceptor organs. 
On the other hand, when low-frequency electric fields are generated by external sources instead of the fish itself, a different class of electroreceptor organs is used for this passive electrolocation, called ampullary organs. Therefore, the black ghost knifefish uses an active and a passive electrosystem, each with its own corresponding receptor organs. The fish can also use a mechanosensory lateral line system, which detects water disturbances created by the motion of the fish's body. As nocturnal hunters, the fish can rely on all three systems to navigate through dark environments and detect their prey. Each species has a characteristic EOD baseline frequency range, which varies with sex and age within the species, as well. The baseline frequency is maintained to be almost constant at stable temperature, but will usually be changed due to the presence of others of the same species. Such changes in frequency relevant to social interaction are called frequency modulations (FMs). The role these FMs have in communication is significant, as black ghost knifefish have developed jamming avoidance responses, which are behavioral responses that avoid the overlapping of EOD frequencies between conspecific individuals to prevent sensory confusion. Moreover, a study was conducted that focused on sexual dimorphism in electrocommunication signals. Female black ghost knifefish generate EODs at a higher frequency than the males, an FM which can be used for gender recognition. A study found the subordinate black ghost knifefish exhibited noticeable gradual frequency rises (GFRs) in their EODs whereas the dominant fish did not, supporting the researchers' hypothesis that GFRs during communication are indicative of submissive signals.
Biology and health sciences
Gymnotiformes
Animals
1164051
https://en.wikipedia.org/wiki/Capra%20%28genus%29
Capra (genus)
Capra is a genus of mammals, the goats, comprising ten species, including the markhor and several species known as ibexes. The domestic goat (Capra hircus) is a domesticated species derived from the bezoar ibex (Capra aegagrus aegagrus). It is one of the oldest domesticated species of animal—according to archaeological evidence its earliest domestication occurred in Iran at 10,000 calibrated calendar years ago. Wild goats are animals of mountain habitats. They are very agile and hardy, able to climb on bare rock and survive on sparse vegetation. They can be distinguished from the genus Ovis, which includes sheep, by the presence of scent glands close to the feet, in the groin, and in front of the eyes, and the absence of other facial glands, and by the presence of a beard in some specimens, and of hairless calluses on the knees of the forelegs. Taxonomy All members of the genus Capra are bovids (members of the family Bovidae), and more specifically caprines (subfamily Caprinae). As such they are ruminants, meaning they chew the cud, and have four-chambered stomachs which play a vital role in digesting, regurgitating, and redigesting their food. The genus has sometimes been taken to include Ovis (sheep) and Ammotragus (Barbary sheep), but these are usually regarded as distinct genera, leaving Capra for ibexes. In this smaller genus, some authors have recognized only two species, the markhor on one side and all other forms included in one species on the other side. Today, nine wild species are usually accepted to which is added the domestic goat: West Asian ibex also known as the wild goat (Capra aegagrus) Bezoar ibex (Capra aegagrus aegagrus) Sindh ibex (Capra aegagrus blythi) Domestic goat (Capra hircus; includes feral goat; sometimes considered a subspecies of C. aegagrus) Asian ibex also known as the Siberian ibex (Capra sibirica) Markhor (Capra falconeri) West Caucasian tur (Capra caucasica) East Caucasian tur (Capra cylindricornis) Alpine ibex (Capra ibex) Iberian ibex also known as the Spanish ibex (Capra pyrenaica) Nubian ibex (Capra nubiana) Walia ibex (Capra walie) The goats of the genus Capra have complex systematic relationships, which are still not completely resolved. Recent studies based on mitochondrial DNA suggest that the Asian ibex and the Nubian ibex represent distinct species, which are not very closely related to the physically similar Alpine ibex. The Alpine ibex forms a group with the Iberian ibex. The West Caucasian tur appears to be more closely related to the wild goat than to the East Caucasian tur. The markhor is relatively little separated from other forms—previously it had been considered to be a separate branch of the genus. Almost all wild goat species are allopatric (geographically separated)—the only geographical overlaps are the wild goat (Capra aegagrus) with the East Caucasian tur (Capra cylindricornis), and the markhor (Capra falconeri) with the Asian ibex (Capra sibirica). In both cases, the overlapping species do not usually interbreed in the wild, but in captivity, all Capra species can interbreed, producing fertile offspring. Species and subspecies Domestication and uses Along with sheep, goats were among the first domesticated animals. The domestication process started at least 10,000 years ago in what is now northern Iran. Easy human access to goat hair, meat, and milk were the primary motivations. 
Goat skins were popularly used until the Middle Ages for water and wine bottles when traveling and camping, and in certain regions as parchment for writing.
Biology and health sciences
Artiodactyla
null
1164226
https://en.wikipedia.org/wiki/Markhor
Markhor
The markhor (Capra falconeri) is a large wild Capra (goat) species native to South Asia and Central Asia, mainly within Pakistan, the Karakoram range, parts of Afghanistan, and the Himalayas. It is listed on the IUCN Red List as Near Threatened since 2015. The markhor is the national animal of Pakistan, where it is also known as the screw-horn or screw-horned goat. The word mārkhor comes from both Pashto and classical Persian and is traditionally interpreted as "snake-eater", referencing the ancient belief that the markhor would actively kill and consume snakes. This regional myth is believed to stem from the "snake-like" form of the male markhor's horns, twisting and curling like a snake, possibly leading ancient peoples to associate them with snakes. On 2 May 2024, the United Nations General Assembly declared 24 May as the International Day of the Markhor. Description Markhor stand at the shoulder, are long and weigh from . They have the highest maximum shoulder height among the species in the genus Capra, but are surpassed in length and weight by the Siberian ibex. The coat is of a grizzled, light brown to black colour, and is smooth and short in summer, while growing longer and thicker in winter. The fur of the lower legs is black and white. Markhor are sexually dimorphic, with males having longer hair on the chin, throat, chest and shanks. Females are redder in colour, with shorter hair, a short black beard, and are maneless. Both sexes have tightly curled, corkscrew-like horns, which close together at the head, but spread upwards toward the tips. The horns of males can grow up to long, and up to in females. The males have a pungent smell, which surpasses that of the domestic goat. Behaviour and ecology Markhor are adapted to mountainous terrain, and can be found between in elevation. They typically inhabit shrub forests made up primarily of oaks (Quercus ilex), pines (Pinus gerardiana), and junipers (Juniperus macropoda). They are diurnal, and are mainly active in the early morning and late afternoon. Their diets shift seasonally: in the spring and summer periods they graze, but turn to browsing in winter, sometimes standing on their hind legs to reach high branches. The mating season is during winter, when the males fight each other by lunging, locking of horns, and attempting to push each other off balance. The gestation period lasts 135–170 days, and usually results in the birth of one or two kids, and occasionally three. Markhor live in herds, usually numbering nine animals, composed of adult females and their young. Adult males are largely solitary. Adult females and kids comprise most of the markhor population, with adult females making up 32% and kids making up 31%. Adult males comprise 19% of the population, while subadults (males aged 2–3 years) make up 12%, and yearlings (females aged 12–24 months) 9%. Their alarm call closely resembles the bleating of domestic goats. Early in the season the males and females may be found together on the open grassy patches and clear slopes among the forest. During the summer, the males remain in the forest, while the females generally climb to the highest rocky ridges above. In the spring, the females stay closer to cliffs in areas with more rock coverage to provide protection for their offspring. The males stay in higher elevated areas with more access to vegetation for foraging so as to improve their body's condition.
Predators Eurasian lynx (Lynx lynx), snow leopard (Panthera uncia), Himalayan wolf (Canis lupus chanco) and brown bear (Ursus arctos) are the main predators of the markhor. The golden eagle (Aquila chrysaetos) has been reported to prey upon young markhor. The markhor possess keen eyesight and a strong sense of smell to detect nearby predators. Markhor are very aware of their surroundings and on high alert; in exposed areas, they are quick to spot and flee from predators. Taxonomy Aegoceros (Capra) Falconeri was the scientific name proposed by Johann Andreas Wagner in 1839 based on a female specimen from the Indian Himalayas. Multiple subspecies have been recognized, often based on horn configuration, but it has been shown that this can vary greatly even within the same population confined to one mountain range. Astor markhor or Astore markhor (C. f. falconeri) Bukharan markhor (C. f. heptneri) Kabul markhor (C. f. megaceros) Kashmir markhor (C. f. cashmiriensis) Sulaiman markhor (C. f. jerdoni) Astor markhor The Astor markhor has large, flat horns, branching widely and then going up nearly straight with only a half turn. It is synonymous with Capra falconeri cashmiriensis or Pir Panjal markhor, which has heavy, flat horns, twisted like a corkscrew. The Astor markhor also has a tendency to sexually segregate outside the mating season because of multiple different mechanisms. The females are usually confined to cliffs with less forage coverage, while the males live in areas with a lot more forage coverage. Within Afghanistan, the Astor markhor is limited to the east in the high and mountainous monsoon forests of Laghman and Nuristan. In India, this subspecies is restricted to a portion of the Pir Panjal range in southwestern Jammu and Kashmir. Throughout this range, Astor markhor populations are scattered, starting east of the Banihal Pass (50 km from the Chenab River) on the Jammu–Srinagar highway westward to the disputed border with Pakistan. Recent surveys indicate it still occurs in catchments of the Limber and Lachipora Rivers in the Jhelum Valley Forest Division, and around Shupiyan to the south of Srinagar. In Pakistan, the Astor markhor there is restricted to the Indus and its tributaries, as well as to the Kunar (Chitral) River and its tributaries. Along the Indus, it inhabits both banks from Jalkot (Kohistan District) upstream to near the Tungas village (Baltistan), with Gakuch being its western limit up the Gilgit River, Chalt up the Hunza River, and the Parishing Valley up the Astore River. It has been said to occur on the right side of the Yasin Valley (Gilgit District), though this is unconfirmed. The flare-horned markhor is also found around Chitral and the border areas with Afghanistan, where it inhabits a number of valleys along the Kunar River (Chitral District), from Arandu on the west bank and Drosh on the east bank, up to Shoghor along the Lutkho River, and as far as Barenis along the Mastuj River. The largest population is currently found in Chitral National Park in Pakistan. Bukharan markhor Although the Bukharan markhor or Tajik markhur (Capra falconeri heptneri) formerly lived in most of the mountains stretching along the north banks of the Upper Amu Darya and the Pyanj Rivers from Turkmenistan to Tajikistan, two to three scattered populations now occur in a greatly reduced distribution. It is limited to the region between lower Pyanj and the Vakhsh Rivers near Kulyab in Tajikistan (near ), and in the Kugitangtau Range in Uzbekistan and Turkmenistan (around ). 
This subspecies may possibly exist in the Darwaz Peninsula of northern Afghanistan near the border with Tajikistan. Before 1979, almost nothing was known of this subspecies or its distribution in Afghanistan, and no new information has been established in Afghanistan since that time. Kabul markhor The Kabul markhor (Capra falconeri megaceros) has horns with a slight corkscrew, as well as a twist. A junior synonym is Capra falconeri jerdoni. Until 1978, the Kabul markhor survived in Afghanistan only in the Kabul Gorge and the Kohe Safi area of Kapissa, and in some isolated pockets in between. It now lives in the most inaccessible regions of its once wider range in the mountains of Kapissa and Kabul Provinces, after having been driven from its original habitat by intensive poaching. In Pakistan, its present range consists only of small isolated areas in Baluchistan, Khyber Pakhtunkhwa (KPK) province and in Dera Ghazi Khan District (Punjab Province). The KPK Forest Department considered that the areas of Mardan and Sheikh Buddin were still inhabited by the subspecies. At least 100 animals are thought to live on the Pakistani side of the Safed Koh range (Districts of Kurram and Khyber). Relationship with the domestic goat Certain authors have postulated that the markhor is the ancestor of some breeds of domestic goat. The Angora goat has been regarded by some as a direct descendant of the Central Asian markhor (Olive Schreiner (1898). Angora goat ... : and, A paper on the ostrich ... London : Longmans, 1898). Charles Darwin postulated that modern goats arose from crossbreeding markhor with wild goats. Evidence for markhors crossbreeding with domestic goats has been found. One study suggested that 35.7% of captive markhors in the analysis (from three different zoos) had mitochondrial DNA from domestic goats. Other authors have suggested that markhor may have been the ancestor of some Egyptian goat breeds, based on their similar horns, though the lack of an anterior keel on the horns of the markhor belies any close relationship. The Changthangi domestic goat of Ladakh and Tibet may derive from the markhor. The Girgentana goat of Sicily is thought to have been bred from markhor, as is the Bilberry goat of Ireland. The Kashmiri feral herd of about 200 individuals on the Great Orme limestone headland of Wales are derived from a herd maintained at Windsor Great Park belonging to Queen Victoria. Fecal samples taken from markhor and domestic goats indicate that there is a serious level of competition for food between the two species. The competition for food between herbivores is believed to have significantly reduced the standing crop of forage in the Himalaya–Karkoram–Hindukush ranges. Domestic livestock have an advantage over wild herbivores since the density of their herds often pushes their competitors out of the best grazing areas, and decreased forage availability has a negative effect on female fertility. Threats Hunting for meat as a means of subsistence or trade in wildlife parts adds to the growing problem for wildlife managers in many countries. Poaching, with its indirect impacts such as disturbance, increased fleeing distances and the resulting reduction of effective habitat size, is by far the most important factor threatening the survival of the markhor populations. The most important types of poachers seem to be local inhabitants, state border guards, the latter usually relying on local hunting guides, and Afghans, illegally crossing the border.
Poaching causes fragmentation of the population into small islands where the remaining subpopulations are prone to extinction. The markhor is a valued trophy hunting prize for its spiral horns. The Pakistani government issued several hunting tags in an attempt to save the species, which has seen a remarkable rebound since the introduction of this regulated hunting. The continuing declines of markhor populations finally caught the attention of the international community. Hunting In British India, markhor were considered to be among the most challenging game species, because of the danger involved in stalking and pursuing them in high, mountainous terrain. According to Arthur Brinckman in his The Rifle in Cashmere, "a man who is a good walker will never wish for any finer sport than ibex or markhoor shooting". Elliot Roosevelt wrote of how he shot two markhor in 1881, his first on 8 July, his second on 1 August. Although it is illegal to hunt markhor in Afghanistan, they have been traditionally hunted in Nuristan and Laghman Provinces, and this may have intensified during the War in Afghanistan. In Pakistan, hunting markhor is legal as part of a conservation process: expensive hunting licenses are available from the Pakistani government that allow the hunting of old markhors, which are no longer good for breeding purposes. In India, it is illegal to hunt markhor but they are poached for food and for their horns, which are thought to have medicinal properties. Markhor have also been successfully introduced to private game ranches in Texas. Unlike the aoudad, blackbuck, nilgai, ibex, and axis deer, however, markhor have not escaped in sufficient numbers to establish free-range wild populations in Texas. The International Union for the Conservation of Nature and Natural Resources currently classifies the markhor as a near threatened species, because of its relatively small population (2013 estimate: ~5,800 individuals), the absence of a projected total population decline, and its reliance on ongoing conservation efforts to maintain population levels. There are reserves in Tajikistan to protect the markhors. In 1973, two reserves were established. The Dashtijum Strict Reserve (also called the Zapovednik in Russian) offers markhor protection across 20,000 ha. The Dashtijum Reserve (called the Zakasnik in Russian) covers 53,000 ha. Though these reserves exist to protect and conserve the markhor population, the regulations are poorly enforced, making poaching and habitat destruction common. Although markhors still face ongoing threats, recent studies have shown considerable success with regards to the conservation approach. The approach began in the 1900s when a local hunter was convinced by a hunting tourist to stop poaching markhors. The local hunter established a conservancy that inspired two other local organizations called Morkhur and Muhofiz. The two organizations expect that their conservancies will not only protect, but allow them to sustainably exploit the markhor species. This approach has been effective compared to the protection of lands that lack enforcement and security. In India, the markhor is a fully protected (Schedule I) species under Jammu and Kashmir's Wildlife (Protection) Act of 1978. In culture The markhor is the national animal of Pakistan. It was one of the 72 animals featured on the World Wide Fund for Nature Conservation Coin Collection in 1976. Markhor marionettes are used in the Afghan puppet shows known as buz-baz.
The markhor has also been mentioned in a Pakistani computer-animated film known as Allahyar and the Legend of Markhor. Etymology The name is thought to be derived from the Persian language — a conjunction of mār ("snake, serpent") and the suffix -khor ("-eater"), interpreted to represent the animal's alleged ability to kill snakes, or as a reference to its corkscrew-like horns, which are somewhat reminiscent of coiling snakes. In folklore the markhor is believed to kill and eat serpents. Thereafter, while chewing the cud, a foam-like substance comes out of its mouth that drops on the ground and dries. This foam-like substance is sought after by the local people, who believe it is useful in extracting the poison from snakebites.
Biology and health sciences
Bovidae
Animals
1164286
https://en.wikipedia.org/wiki/Sea%20lettuce
Sea lettuce
The sea lettuces comprise the genus Ulva, a group of edible green algae that is widely distributed along the coasts of the world's oceans. The type species within the genus Ulva is Ulva lactuca, lactuca being Latin for "lettuce". The genus also includes the species previously classified under the genus Enteromorpha, the former members of which are known under the common name green nori. Description Individual blades of Ulva can grow to be more than 400 mm (16 in) in size, but this occurs only when the plants are growing in sheltered areas. A macroscopic alga which is light to dark green in colour, it is attached by disc holdfast. Their structure is a leaflike flattened thallus. Nutrition and contamination Sea lettuce is eaten by a number of different sea animals, including manatees and the sea slugs known as sea hares. Many species of sea lettuce are a food source for humans in Scandinavia, Great Britain, Ireland, China, and Japan (where this food is known as aosa). Sea lettuce as a food for humans is eaten raw in salads and cooked in soups. It is high in protein, soluble dietary fiber, and a variety of vitamins and minerals, especially iron. However, contamination with toxic heavy metals at certain sites where it can be collected makes it dangerous for human consumption. Aquarium trade Sea lettuce species are commonly found in the saltwater aquarium trade, where the plants are valued for their high nutrient uptake and edibility. Many reef aquarium keepers use sea lettuce species in refugia or grow it as a food source for herbivorous fish. Sea lettuce is very easy to keep, tolerating a wide range of lighting and temperature conditions. In the refugium, sea lettuce can be attached to live rock or another surface, or simply left to drift in the water. Health concerns In August 2009, unprecedented amounts of these algae washed up on the beaches of Brittany, France, causing a major public health scare as it decomposed. The rotting leaves produced large quantities of hydrogen sulfide, a toxic gas. In one incident near Saint-Michel-en-Grève, a horse rider lost consciousness and his horse died after breathing the seaweed fumes; in another, a lorry driver driving a load of decomposing sea lettuce passed out, crashed, and died, with toxic fumes claimed to be the cause. Environmentalists blamed the phenomenon on excessive nitrogenous compounds washed out to sea from improper disposal of pig and poultry animal waste from industrial farms. Species Species in the genus Ulva include: Accepted species Ulva acanthophora (Kützing) Hayden, Blomster, Maggs, P.C. Silva, Stanhope & J.R. Waaland, 2003 Ulva anandii Amjad & Shameel, 1993 Ulva arasakii Chihara, 1969 Ulva atroviridis Levring, 1938 Ulva australis Areschoug, 1854 Ulva beytensis Thivy & Sharma, 1966 Ulva bifrons Ardré, 1967 Ulva brevistipita V.J. Chapman, 1956 Ulva burmanica (Zeller) De Toni, 1889 Ulva californica Wille, 1899 Ulva chaetomorphoides (Børgesen) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003 Ulva clathrata (Roth) C. Agardh, 1811 Ulva compressa Linnaeus, 1753 Ulva conglobata Kjellman, 1897 Ulva cornuta Lightfoot, 1777 Ulva covelongensis V. Krishnamurthy & H. Joshi, 1969 Ulva crassa V.J. Chapman, 1956 Ulva crassimembrana (V.J. Chapman) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003 Ulva curvata (Kützing) De Toni, 1889 Ulva denticulata P.J.L. 
Dangeard, 1959 Ulva diaphana Hudson, 1778 Ulva elegans Gayral, 1960 Ulva enteromorpha Le Jolis, 1863 Ulva erecta (Lyngbye) Fries Ulva expansa (Setchell) Setchell & N.L. Gardner, 1920 Ulva fasciata Delile, 1813 Ulva flexuosa Wulfen, 1803 Ulva geminoidea V.J. Chapman, 1956 Ulva gigantea (Kützing) Bliding, 1969 Ulva grandis Saifullah & Nizamuddin, 1977 Ulva hookeriana (Kützing) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland Ulva hopkirkii (M'Calla ex Harvey) P. Crouan & H. Crouan Ulva howensis (A.H.S. Lucas) Kraft, 2007 Ulva indica Roth, 1806 Ulva intestinalis Linnaeus, 1753 Ulva intestinaloides (R.P.T. Koeman & Hoek) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003 Ulva javanica N.L. Burman, 1768 Ulva kylinii (Bliding) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003 Ulva lactuca Linnaeus, 1753 Ulva laetevirens J.E. Areschoug, 1854 Ulva laingii V.J. Chapman, 1956 Ulva linearis P.J.L. Dangeard, 1957 Ulva linza Linnaeus, 1753 Ulva lippii Lamouroux Ulva litoralis Suhr ex Kützing Ulva littorea Suhr Ulva lobata (Kützing) Harvey, 1855 Ulva maeotica (Proshkina-Lavrenko) P.M.Tsarenko, 2011 Ulva marginata (J. Agardh) Le Jolis Ulva micrococca (Kützing) Gobi Ulva mutabilis Föyn, 1958 Ulva neapolitana Bliding, 1960 Ulva nematoidea Bory de Saint-Vincent, 1828 Ulva ohnoi Hiraoka & Shimada, 2004 Ulva olivascens P.J.L. Dangeard Ulva pacifica Endlicher Ulva papenfussii Pham-Hoang Hô, 1969 Ulva parva V.J. Chapman, 1956 Ulva paschima Bast Ulva patengensis Salam & Khan, 1981 Ulva percursa (C. Agardh) C. Agardh Ulva pertusa Kjellman, 1897 Ulva phyllosa (V.J. Chapman) Papenfuss Ulva polyclada Kraft, 2007 Ulva popenguinensis P.J.L. Dangeard, 1958 Ulva porrifolia (S.G. Gmelin) J.F. Gmelin Ulva profunda W.R. Taylor, 1928 Ulva prolifera O.F.Müller, 1778 Ulva pseudocurvata Koeman & Hoek, 1981 Ulva pseudolinza (R.P.T. Koeman & Hoek) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003 Ulva pulchra Jaasund, 1976 Ulva quilonensis Sindhu & Panikkar, 1995 Ulva radiata (J. Agardh) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003 Ulva ralfsii (Harvey) Le Jolis, 1863 Ulva ranunculata Kraft & A.J.K. Millar, 2000 Ulva reticulata Forsskål, 1775 Ulva rhacodes (Holmes) Papenfuss, 1960 Ulva rigida C. Agardh, 1823 Ulva rotundata Bliding, 1968 Ulva saifullahii Amjad & Shameel, 1993 Ulva serrata A.P.de Candolle Ulva simplex (K.L. Vinogradova) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003 Ulva sorensenii V.J. Chapman, 1956 Ulva spinulosa Okamura & Segawa, 1936 Ulva stenophylla Setchell & N.L. Gardner, 1920 Ulva sublittoralis Segawa, 1938 Ulva subulata (Wulfen) Naccari Ulva taeniata (Setchell) Setchell & N.L. Gardner, 1920 Ulva tanneri H.S. Hayden & J.R. Waaland, 2003 Ulva tenera Kornmann & Sahling Ulva torta (Mertens) Trevisan, 1841 Ulva tuberosa Palisot de Beauvois Ulva uncialis (Kützing) Montagne, 1850 Ulva uncinata Mohr Ulva uncinata Mertens Ulva usneoides Bonnemaison Ulva utricularis (Roth) C. Agardh Ulva utriculosa C. Agardh Ulva uvoides Bory de Saint-Vincent Ulva ventricosa A.P.de Candolle Nomina dubia Ulva costata Wollny, 1881 Ulva repens Clemente, 1807 Ulva tetragona A.P.de Candolle, 1807 A newly discovered Indian endemic species of Ulva with tubular thallus indistinguishable from Ulva intestinalis has been formally established in 2014 as Ulva paschima Bast. 
Ten new species have been discovered in New Caledonia, including Ulva arbuscula, Ulva planiramosa, Ulva batuffolosa, Ulva tentaculosa, Ulva finissima, Ulva pluriramosa, Ulva scolopendra and Ulva spumosa.
Biology and health sciences
Green algae
Plants
1164446
https://en.wikipedia.org/wiki/African%20sacred%20ibis
African sacred ibis
The African sacred ibis (Threskiornis aethiopicus) is a species of ibis, a wading bird of the family Threskiornithidae. It is native to much of Africa, as well as small parts of Iraq, Iran and Kuwait. It is especially known for its role in Ancient Egyptian religion, where it was linked to the god Thoth. The species is currently extirpated from Egypt. Taxonomy It is very closely related to the black-headed ibis and the Australian white ibis, with which it forms a superspecies complex, so much so that the three species are considered conspecific by some ornithologists. In mixed flocks these ibises often hybridise. The Australian white ibis is often called the sacred ibis colloquially. Although known to the ancient civilisations of Greece, Rome and especially Africa, ibises were unfamiliar to western Europeans from the fall of Rome until the 19th century, and mentions of this bird in the ancient works of these civilisations were assumed to describe some type of curlew or other bird, and were thus translated as such. In 1758, Linnaeus was convinced that the ancient authors were describing a cattle egret (Bubulcus ibis), which he thus described as Ardea ibis. Following the work of Mathurin Jacques Brisson, who called it Ibis candida in 1760, Linnaeus classified it as Tantalus ibis in the 12th edition of his Systema Naturae of 1766. These were also unfamiliar birds that did not occur in Europe at the time; in English they were then called the 'Egyptian ibis' by Latham, and the 'emseesy' or 'ox-bird' by George Shaw. In 1790, John Latham provided the first unambiguous modern scientific description of the sacred ibis as Tantalus aethiopicus, mentioning James Bruce of Kinnaird who called it 'abou hannes' in his writings describing his travels in Sudan and Ethiopia, and also described Tantalus melanocephalus of India. Georges Cuvier named it Ibis religiosus in his Le Règne Animal of 1817. In 1842, George Robert Gray reclassified the bird under the new genus Threskiornis, because the type of the genus Tantalus had been designated as the wood stork, also formerly known as the wood ibis or wood pelican, and Gray decided these birds could not be classified in the same genus. In a comprehensive review of plumage patterns by Holyoak in 1970, it was noted that the three taxa were extremely similar and that the Australian birds resembled Threskiornis aethiopicus in adult plumage and T. melanocephalus in juvenile plumage; he thus proposed that they all be considered part of a single species, T. aethiopicus. At the time, this was generally accepted by the scientific community; however, in 'The Birds of the Western Palearctic' compendium of 1977, Roselaar advocated splitting the group into four species, recognising T. bernieri, based again on the then-known geographical and morphological differences. In 1990, Sibley & Monroe, in the general reference 'Distribution and Taxonomy of Birds of the World', followed Roselaar in recognising four species, which they repeated in 'A World Checklist of Birds' of 1993. This taxon being split from T. melanocephalus and T.
molucca was further advocated by another morphological study by Lowe and Richards in 1991, where they looked at plumage, bill and neck sack morphology and used many more skins. They concluded that the differences were such to merit separate species status for the three taxa, especially as they could find no intergradation in morphological characters in possible contact zones in SE Asia. They also cite observed differences in courtship displays between the Australian and African birds. Based on these characteristics they recommended the Madagascan birds T. bernieri and T. abbotti be considered a subspecies of T. aethiopicus. Description An adult individual is long with all-white body plumage apart from dark plumes on the rump. Wingspan is and body weight . Males are generally slightly larger than females. The bald head and neck, thick curved bill and legs are black. The white wings show a black rear border in flight. The eyes are brown with a dark red orbital ring. Sexes are similar, but juveniles have dirty white plumage, a smaller bill and some feathering on the neck, greenish-brown scapulars and more black on the primary coverts. This bird is usually silent, but occasionally makes puppy-like yelping noises, unlike its vocal relative, the hadada ibis. Distribution Native The sacred ibis breeds in Sub-Saharan Africa and southeastern Iraq. A number of populations are migrant with the rains; some of the South African birds migrate 1,500 km as far north as Zambia, the African birds north of the equator migrate in the opposite direction. The Iraqi population usually migrates to southwestern Iran, but wandering vagrants have been seen as far south as Oman (rare, but regular) and as far north as the Caspian coasts of Kazakhstan and Russia (before 1945). Africa It was formerly found in North Africa including Egypt, where it was commonly venerated and mummified as a votive offering to the god Thoth. For many centuries until the Roman period the main temples buried a few dozen of thousands of birds a year, and to sustain sufficient numbers for the demand for sacrifices by pilgrims from all over Egypt, it was for some time believed that ibis breeding farms (called ibiotropheia by Herodotus) existed. Aristotle mentions in c. 350 BC that many sacred ibises are found all over Egypt. Strabo, writing around 20 AD, mentions large amounts of the birds in the streets of Alexandria, where he was living at the time; picking through the trash, attacking provisions, and defiling everything with their dung. Pierre Belon notes the many ibises in Egypt during his travels there in the late 1540s (he thought they were an odd type of stork). Benoît de Maillet, in his Description de l'Egypte (1735) relates that at the turn of the 17th century, when the great caravans travelled yearly to Mecca, great clouds of ibises would follow them from Egypt for over a hundred leagues into the desert to feed on the dung left at the encampments. By 1850, however, the species had disappeared from Egypt both as a breeding and migrant population, with the last, albeit questionable, sighting in 1864. An examination of the genetic diversity among mummified ibises suggested that there was no reduction in genetic diversity as would be caused if they were bred in captivity and further studies on isotopes suggest that the birds were not just wild caught but came from a wide geographic range. 
The species did not breed in southern Africa before the beginning of the 20th century, but it has benefited from irrigation, dams, and commercial agricultural practices such as dung heaps, carrion and refuse tips. It began to breed in the early 20th century, and in the 1970s the first colonies of ibises were recorded in Zimbabwe and South Africa. Its population for example expanded 2-3-fold during the period between 1972 and 1995 in Orange Free State. It is now found throughout southern Africa. The species is a common resident in most parts of South Africa. Local numbers are swollen in summer by individuals migrating southwards from the equator. Elsewhere in Africa it occurs throughout the continent south of the Sahara, but it is largely absent in the deserts of southwestern Africa (i.e. the Namib, the Karoo, the Kalahari) and probably the rainforests of the Congo. In west Africa it is fairly uncommon across the Sahel, except for the major floodplain systems. It can commonly found breeding along the Niger, in the Inner Niger Delta of Mali, the Logone of C.A.R., Lac Fitri in Chad, the Saloum Delta of Senegal, and other localities in relatively small numbers such as in The Gambia. It is common across eastern Africa and southern Africa. Large numbers can be found in the Sudd swamps and Lake Kundi in Sudan in the dry season. It is fairly widespread along the upper Nile River, and is quite common around Mogadishu, Somalia. In Tanzania there are a number of sites with 500 to 1,000+ birds, totalling some 20,000 birds. Asia The bird is also native to Yemen; in 2003 it bred in large numbers on small islands near Haramous and along the Red Sea coast near Hodeidah and Aden, where it was often found at waste-water treatment plants. It has been recorded nesting on a shipwreck in the Red Sea. It is also seen as a vagrant on Socotra. With the Yemen Civil War and famine, there have been no new census reports on the species in Yemen, though an estimate of approximately 30 mature individuals was given in 2015. The species was fairly common in Iraq in the first half of the 20th century, but by the late 1960s it had become very scarce, with the population thought to number no more than 200 birds. The population was thought to have suffered greatly during the draining of the Mesopotamian Marshes of southeastern Iraq starting in the late 1980s, and feared to have disappeared entirely, but it has continuously been observed breeding in a colony in the Hawizeh Marshes (a part of the Mesopotamian Marshes) as of 2008, numbering up to 27 adults. It is also native to Kuwait, where it occurs as an extremely rare migrant, with only two known sightings, the last being a flock of 17 in 2007. There are no records of the bird in Iran before the 1970s; however, small numbers were found overwintering in Khuzestan in 1970. Since the 1990s numbers appear to have slowly increased to a few dozen. Introduced The first African sacred ibises brought to Europe were two imported from Egypt to France in the mid-1700s. In the 1800s the first escapes were sighted in Europe (in Austria, Italy). In the 1970s it became fashionable for many zoos in Europe and elsewhere to keep their birds in free-flying colonies, which were allowed to forage in the area but would return to roost in the zoo every day. As such feral populations were established in Italy, France, Spain, Portugal, the Netherlands, the Canary Islands, Florida, Taiwan, the United Arab Emirates and possibly Bahrain. 
Some studies indicate that the introduced populations in Europe have significant economic and ecological impacts, while others suggest that they constitute no substantial threat to native European bird species. Europe In Europe, the African sacred ibis is included since 2016 in the list of Invasive Alien Species of Union concern (the Union list). This implies that this species cannot be imported, bred, transported, commercialized, or intentionally released into the environment in the whole of the European Union. In France the African sacred ibises have become established along its Atlantic coast following the feral breeding of birds which were the offspring of a large free-flying population originating from the Branféré Zoological Gardens in southern Brittany. The first successful breeding was in 1993 at two sites, the Golfe du Morbihan and Lac de Grand-Lieu, and respectively from Branféré. By 2005 the Atlantic French breeding population was estimated at 1,100 pairs and winter censuses led to an estimated total population of up to 3,000 birds. A separate population originated from a zoo at Sigean on the Mediterranean coast of France and by 2005 the colony at the Étang de Bages-Sigean was estimated at 250 pairs. A cull was begun and by 2011 the population had fallen to 560–600 pairs. By January 2017 the eradication programme had lowered the number of birds in roosts in western France to 300–500 birds and the Lac de Grand-Lieu was the only regular breeding site in the region; as the programme has progressed the birds have become warier and the reduced numbers mean the effort and cost per bird has increased and complete eradication may never be achieved. The population near Sigean was eradicated by killing and capturing the birds with only a few remaining in the Camargue. This species is not considered established in mainland Spain. The Barcelona Zoo kept a small free-flying population which bred in the zoo and at least once in 1974 in the surrounding city park. Between 1983 and 1985 they had increased to 18 birds, but these subsequently declined to 4–6 pairs in the 1990s and the birds were permanently caged by the end of the 1990s (the zoo still has some). In 2001 the remaining birds in the surroundings were culled, thus ending the occurrence of the species in the 'wild' in the area. However, in the early 2000s vagrants most probably from France were recorded in northern Catalonia, and sporadic observations throughout the year have been recorded since then along the Mediterranean and Cantabrian coasts. There were a total of about twenty approved records of sightings between 1994 and 2004. As of 2009, birds entering Spain from France are shot. The population in Italy may have been introduced from the zoo Le Cornelle which has kept a free-flying group since the early 1980s, or possibly from Brittany, but this is unclear. The first pair was seen breeding in the nearby heronry at Oldenico, in Lame del Sesia Regional Park in Novara, NW Italy, in 1989. By 1998 there was a colony of 9 pairs and 48 birds there; by 2000 there were 24–26 pairs, and by 2003 there were 25–30 breeding pairs. A second colony appeared in 2004 at another nearby heronry at Casalbeltrame. These birds would mostly feed in the rice fields in the area, but would also migrate elsewhere during the summer, with the population at the roosts increasing in the winter. In 2008, the number of breeding ibis was estimated at 80–100 pairs, and at least 300 birds. 
That same year, six individuals, consisting of three pairs, were observed roosting at a heronry in Casaleggio. By 2009 they were said to be one of the most characteristic animals of the rice-growing area of Novara and Vercellese. In 2010 the species was reported attempting to breed in the Po Delta, northeast Italy. By 2014 reports of individuals and small flocks were recorded in various areas from the Po Valley down to Tuscany. Outside the Piedmont Region, cases of possible nesting are reported in Emilia-Romagna, Veneto and Lombardy. As of 2017 there do not seem to be coordinated control efforts in Italy. In the Netherlands, sacred ibises were introduced from three sources; primarily from the free-flying flock at the aviary zoo Avifauna, and another group of 11 birds which escaped from a private bird trader in Weert when a tree fell on their enclosure sometime between 1998 and 2000 which would all return to their cage each winter. Furthermore, in 2000 a group of sacred ibises escaped from a zoo near Münster, some of which apparently crossed the border into Overijssel, as the colours of their rings closely matched. The free-flying Avifauna flock numbered 12 in 2001, 30 in 2003, and an estimated maximum of 41 birds escaped the zoo eventually. There had been sightings throughout the country for many years, but in 2002 successful breeding was first reported in a nature reserve some 40 km from Avifauna. By 2007 the feral population in the Netherlands had increased to 15 pairs breeding at three locations, including in a tree just outside the zoo. Pairs would regularly move from the zoo to the nature reserve in the summer and vice versa. The next year, in 2008, the tree outside the zoo was cut down, and free-flying birds were recaptured, clipped and caged. 2008/2009 was also a cold winter and many birds died. By 2009 37 birds had been recaptured and by 2010 there were no more birds breeding in the wild. The birds in Weert were halved in number after the 2008–2009 winter and had disappeared somewhere between 2011 and 2015. As of 2016 a few birds survive, some still attempting to breed in Overijssel, and handful sightings of less than three reported. Possible vagrants from France have also been noted (by their rings) after 2010. Elsewhere The sacred ibis is not considered invasive on the Canary Islands. It is kept in zoos on Tenerife, Gran Canaria, Lanzarote and Fuerteventura, two of which kept their collections free-flying. In 1989, the first ibis was seen in the wild. In 1997, the first pair was seen breeding outside a zoo, the population reached a maximum of 5 pairs between then and 2005, and 30 pairs is given by Clergeau & Yésou in 2006 (though this last number is untrustworthy). The birds are divided between the islands Lanzarote (near Arrecife in an old heron colony) and Fuerteventura (in the zoo near La Lajita but free-flying). On both islands, these birds have remained very near to the zoos. The breeding is 'controlled'. There is disagreement regarding the origin of other records, especially during the migratory period. Ibises have been seen on all four of the islands where there are zoos that keep them. Introduced sacred ibises bred in the United Arab Emirates in the wildlife reserve on Sir Bani Yas Island, where 6 were introduced in the early 1980s, and which did not leave the island. There was only one left in 1989 and it died that year. Al Ain Zoo has had a flock since 1976, which had increased to some 70 birds by 1991. There are records of ibises showing up in Dubai since the 1980s. 
Birds in Al Ain initially stayed at the zoo, but began to fly from the zoo to the sewage treatment plant and a shallow wet area in the former public park, now luxury villa park, Ain Al Fayda, where their numbers increased slowly up to 32 in 1997 and they had bred by 1998. They were not numerous outside these locations in 2002, but by 2001 1–5 ibises would show up regularly in Dubai in such places as the golf course, the sewage treatment plant, and the construction site of the now completed Dubai International City. Breeding has since occurred in Dubai. The Dubai birds especially may be partially vagrants arrived from the Iraqi marshes, as they often show up during the migrating season. On the other hand, a bird showing up in Iran is suspected to be from the introduced UAE population. As of 2010 the population in Al Ain numbers over 75 birds, and the free-flying zoo birds roost in two subcolonies on top of their aviary. Birds regularly show up throughout the city and surrounding villages and can often be seen in the early morning in parks and roundabouts picking up scraps left by people the night before. A breeding population was listed as introduced on Bahrain since at least 2006, but it is also said to be a vagrant on the island. In Taiwan, the founding population escaped from a zoo prior to 1984, at which time the first wild birds were seen at Guandu in Taipei. In 1998 it was estimated that some 200 birds roamed freely, primarily in northern Taiwan. In 2010 it was added to the Checklist of Birds of Taiwan with the status of 'uncommon' (as opposed to 'rare'). By 2010 the birds were also occasionally sighted on the Matsu Islands, which are only 19 km off the coast of Fujian province in mainland China (and only a few kilometres from other coastal Chinese islands), but 190 km from Taiwan. In 2012 the population was estimated to be 500–600 individuals, and had spread to the west of Taiwan. The first attempts at culling were performed in 2012 using the egg-oiling method (unsuccessful), and by killing chicks from nests (successful). By 2016 the number was estimated at 1000 individuals, of whom around 500 inhabited a wetland in Changhua County. In 2018, the Forestry Bureau embarked on the removal of the population by cooperating with the indigenous hunters, and by August 2021, at least 16,205 birds had been removed by the program. In Florida five individuals of the species are thought to have escaped Miami Metro Zoo, and perhaps more from private collections, after Hurricane Andrew in 1992. These birds lived in the surroundings but would return to roost at the zoo at night, and the population slowly increased to 30 or 40 by 2005. That same year two pairs were found nesting in the Everglades. Two or three years later the decision was made to remove the species. By 2009 75 birds were removed from Florida, and the birds are believed to be eradicated. Ecology Habitat The African sacred ibis occurs in marshy wetlands and mud flats, both inland and on the coast. It preferably nests on trees in or near water. It feeds wading in very shallow wetlands or slowly stomping in wet pastures with soft soil. It will also visit cultivation and rubbish dumps. Diet The species are predators which feed primarily by day, generally in flocks. The diet consists of mainly insects, worms, crustaceans, molluscs and other invertebrates, as well as various fish, frogs, reptiles, small mammals and carrion. It may also probe into the soil with its long beak for invertebrates such as earthworms. 
It even sometimes feeds on seeds. Sacred ibises were observed to occasionally feed on the contents of pelican eggs broken by Egyptian vultures in the mixed colonies of the ibises, cormorants, pelicans and Abdim's storks at Lake Shala in Ethiopia. On Central Island in Lake Turkana sacred ibises were noted to incidentally eat Nile crocodile eggs excavated by Nile monitors. Most recently, in 2006, observations were reported from a large mixed colony on Bird Island (called Penguin Island in the article) in South Africa, where 10,000 pairs of gannets nested, together with 4800 pairs of Cape cormorant and other species such as gulls and jackass penguin. Within a period of 3 years, a few specialized sacred ibis individuals out of the 400 that roosted on the island had fed on at least 152 eggs of the cormorant (other species were even more ovivorous). In a study of pellets and stomachs contents of nestlings in the Free State, South Africa, food is mostly reported to consist of frogs (mainly Amietia angolensis and Xenopus laevis), Potamonautes warreni crabs, blow fly maggots, Sphingidae caterpillars, and adult beetles. During the first 10 days of life nestlings fed mainly on crabs and beetles, and later mainly on Sphingidae caterpillars and more beetles. The breeding colony collected different (proportions of) prey the subsequent year. The food of one one-month-old nestlings at Lake Shala, Ethiopia, consisted of beetle larvae, caterpillars and beetles. In France, adult ibises fed largely on the invasive crayfish Procambarus clarkii, for nestlings larvae of Eristalis species are important. In France, they sometimes supplement their diet by feeding at rubbish tips in the winter. Predators The most important predator of nestlings of the sacred ibis in Kenya is the African fish eagle, which preferentially searches for the largest (sub-)colonies to attack, but in Ethiopia and South Africa it poses less of a threat. Diseases This species was reported to be susceptible to avian botulism in a list of dead animals found around a man-made lake in South Africa which tested positive for the pathogen in the late 1960s and early 1970s. During a large scale mortality of Cape cormorants from avian cholera in 1991 in western South Africa, small numbers of sacred ibis were killed. The new species Chlamydia ibidis was isolated from feral sacred ibis in France in 2013; it infected 6–7 of the 70 birds tested. In 1887, the Italian scientist Corrado Parona reported a 3 cm Physaloptera species of nematode in the orbital cavity of a sacred ibis collected in Metemma, Abyssinia (now Ethiopia) in 1882. He thought it was perhaps a new species, as it differed morphologically from earlier seen worms. A single adult female was recovered, and it has never been seen again. As the Physaloptera species infecting birds are generally parasites of the intestines of raptors; it might be an artefact, or perhaps a misidentification, or possibly a dead-end host infection. The digenean trematode Patagifer bilobus, a fluke, has been reported from sacred ibis in Sudan before 1949. It lives in the small intestine of this species, among numerous other ibises, spoonbills, and a few other waterfowl. It has a complicated life history involving three hosts: the eggs hatch in fresh water where they infect a ram's horn snail in which they multiply and produce cercariae, which exit and encyst in a larger snail such as a Lymnaea, waiting to be eaten by a bird. Reproduction The species usually breeds once per year in the wet season. 
Breeding season is from March to August in Africa, from April to May in Iraq. It builds a stick nest, often in a baobab tree. The bird nests in tree colonies, often with other large wading birds such as storks, herons, African spoonbills, African darters, cormorants. It may also form single-species groups on offshore islands or abandoned buildings. Island nests are often made on the ground. Large colonies consist of numerous subcolonies and can number 1000 birds. Females lay one to five eggs per season, incubated by both parents for 21 to 29 days. After hatching, one parent continuously stays at the nest for the first seven days. Chicks fledge after 35 to 40 days and are independent after 44 to 48 days, reaching sexual maturity one to five years after hatching. Conservation The African sacred ibis is classified as "Least Concern" by the IUCN. The global population is estimated at 200,000–450,000 individuals but appears to be decreasing. It is covered by the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA). In myth and legend For many centuries, sacred ibis, along with two other species in lesser numbers, were commonly mummified by the Ancient Egyptians as a votive offering to the god Thoth. Thoth, whose head is that of an ibis, is the Ancient Egyptian god of wisdom and reason, and thus of truth, knowledge, learning and study, and writing and mathematics. The sacred ibis was considered the living incarnation of Thoth on earth. Pilgrims from all over Egypt brought thousands of ibis offerings to four or more main temples, which at their peak mummified and buried thousands of birds a year in gigantic and ancient catacombs (one complex was in operation for 700 years). Eventually, an estimated eight million birds were mummified and entombed by the Ancient Egyptians. It has long been thought, to sustain sufficient numbers for the large and sometimes growing demand for sacrifices by the people, dozens of ibis breeding farms (called ibiotropheia by Herodotus) were established, initially throughout the regions of Egypt, but later centralised around the main temples, each producing around a thousand birds for mummies annually. An examination of the mitochondrial DNA disputes this and suggests that not only were wild birds caught and added to the captive flocks, but that they provided the bulk of the supply. The mummified birds were often young and were usually killed by breaking their neck. The head and bill were often placed between the tail feathers, and a piece of food was often placed in the bill (often a snail). The particulars of the mummification ritual often differed. The mummies could be stored in ceramic jars, wooden chests or stone sarcophagi. Not all mummies contain whole birds; some (cheaper ones) contain only a leg, an eggshell, or even dried grass from the nest. Birds were given different burials according to their status; as pets, offerings or holy individuals. Special sacred birds were afforded special mummification, transported from their cities to the temples long after normal offerings were sourced from temple farm flocks, and honoured with more luxurious burial. Different regions in Egypt observed slightly different practises regarding the ritual beliefs. Ibis mummification started by at least 1,100 BC and petered out by approximately 30 BC. Although the numbers of burials peaked at different times depending on the region and temple, the rituals were most popular from the Late Period to the Ptolemaic Period. 
Mummified specimens of the sacred ibis were brought back to Europe by Napoleon's army, where they became part of an early debate about evolution. According to Herodotus and Pliny the Elder the ibis was invoked against incursions of winged serpents. Herodotus wrote: Josephus tells us that when Moses led the Hebrews to make war upon the Æthiopians, he brought a great number of the birds in cages of papyrus to oppose any serpents. Due to perhaps a mistranslation of the Greek of Herodotus, before the early 18th century Europeans were convinced that these ibises had human feet. Pliny the Elder tells us that it was said that the flies that brought pestilence died immediately upon propitiatory sacrifices of this bird. According to Claudius Aelianus in De Natura Animalium, and Gaius Julius Solinus, both quoting much earlier but now lost authors, the sacred ibis procreates with its bill, and thus the bird is always a virgin. Aristotle, writing some 500 years earlier, also mentions this theory, but repudiates it. Picrius mentions how the venomous basilisk is hatched from the eggs of ibis, nurtured from the poisons of all the serpents the birds devour. These authors and many others also mention how crocodiles and snakes are rendered motionless after being touched by the feather of an ibis. Claudius Aelianus also says the ibis is consecrated to the moon. Pliny and Galen ascribe the invention of the clyster (enema) to the ibis, as according to them it gave such treatments to hippopotami. Plutarch assures us it uses only salt water for this purpose. 1600 years later this was still accepted science, as Claude Perrault, in his anatomical descriptions of the bird, claimed to have found a hole in the bill which the bird used for that purpose. In the century before the time of Christ and for at least a century after, the worship of Isis had become quite popular in Rome, especially among women, and the ibis had become one of her associated symbols. A number of frescoes and mosaics in the patrician villas of Pompeii and Herculaneum of 50BC-79AD show these birds. According to some translations of the septuagint, the ibis is one of the unclean birds which may not be eaten (, ).
Biology and health sciences
Pelecanimorphae
Animals
1164768
https://en.wikipedia.org/wiki/Loam
Loam
Loam (in geology and soil science) is soil composed mostly of sand (particle size > ), silt (particle size > ), and a smaller amount of clay (particle size < ). By weight, its mineral composition is about 40–40–20% concentration of sand–silt–clay, respectively. These proportions can vary to a degree, however, and result in different types of loam soils: sandy loam, silty loam, clay loam, sandy clay loam, silty clay loam, and loam. In the United States Department of Agriculture textural classification triangle, the only soil that is not predominantly sand, silt, or clay is called "loam". Loam soils generally contain more nutrients, moisture, and humus than sandy soils, have better drainage and infiltration of water and air than silt- and clay-rich soils, and are easier to till than clay soils. In fact, the primary definition of loam in most dictionaries is soils containing humus (organic content) with no mention of particle size or texture, and this definition is used by many gardeners. The different types of loam soils each have slightly different characteristics, with some draining liquids more efficiently than others. The soil's texture, especially its ability to retain nutrients and water, is crucial. Loam soil is suitable for growing most plant varieties. Bricks made of loam, mud, sand, and water, with an added binding material such as rice husks or straw, have been used in construction since ancient times. Classifications Loam soils can be classified into more specific subtypes. Some examples are sandy loam, silt loam, clay loam, and silty clay loam. Different soil phases have some variation in characteristics like stoniness and erosion that are too minor to affect native vegetative growth but can be significant for crop cultivation. Use in farming Loam is considered ideal for gardening and agricultural uses because it retains nutrients well and retains water while still allowing excess water to drain away. A soil dominated by one or two of the three particle size groups can behave like loam if it has a strong granular structure, promoted by a high content of organic matter. However, a soil that meets the textural (geological) definition of loam can lose its characteristic desirable qualities when it is compacted, depleted of organic matter, or has clay dispersed throughout its fine-earth fraction. For example, peas can be cultivated in sandy loam and clay loam soils, but not in more compacted sandy soils. Use in house construction Loam (the high-humus definition, not the soil texture definition) may be used for the construction of houses, for example in loam post and beam construction. Building crews can build a layer of loam on the inside of walls, which can help to control air humidity. Loam, combined with straw, can be used as rough construction material to build walls. This is one of the oldest technologies for house construction in the world. Within this there are two broad methods: the use of rammed earth, or unfired bricks (adobe).
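To illustrate the textural definition described above (roughly a 40–40–20 sand–silt–clay mixture placed in the USDA classification triangle), the following is a minimal Python sketch. It is not part of the article: the numeric boundaries are only an approximation of the USDA "loam" region, and the function name and percentage ranges are assumptions made for illustration.

```python
# Minimal sketch (assumption-laden): a simplified check of whether a soil
# sample's texture falls roughly in the "loam" class of the USDA textural
# triangle. The real class boundaries are irregular polygons; the ranges
# below only approximate the region around the 40-40-20 sand-silt-clay mix.

def is_roughly_loam(sand_pct: float, silt_pct: float, clay_pct: float) -> bool:
    """Return True if the sand/silt/clay percentages fall near the loam region."""
    total = sand_pct + silt_pct + clay_pct
    if abs(total - 100.0) > 1.0:
        # The three fractions describe the whole fine-earth sample by weight.
        raise ValueError("sand + silt + clay should sum to about 100%")
    return (23.0 <= sand_pct <= 52.0 and
            28.0 <= silt_pct <= 50.0 and
            7.0 <= clay_pct <= 27.0)

# Example: the nominal 40-40-20 mixture mentioned above classifies as loam.
print(is_roughly_loam(40.0, 40.0, 20.0))  # True
```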
Physical sciences
Sedimentology
Earth science
1165029
https://en.wikipedia.org/wiki/Collimator
Collimator
A collimator is a device which narrows a beam of particles or waves. To narrow can mean either to cause the directions of motion to become more aligned in a specific direction (i.e., make collimated light or parallel rays), or to cause the spatial cross section of the beam to become smaller (beam limiting device). History The English physicist Henry Kater was the inventor of the floating collimator, which rendered a great service to practical astronomy. He reported about his invention in January 1825. In his report, Kater mentioned previous work in this area by Carl Friedrich Gauss and Friedrich Bessel. Optical collimators In optics, a collimator may consist of a curved mirror or lens with some type of light source and/or an image at its focus. This can be used to replicate a target focused at infinity with little or no parallax. In lighting, collimators are typically designed using the principles of nonimaging optics. Optical collimators can be used to calibrate other optical devices, to check if all elements are aligned on the optical axis, to set elements at proper focus, or to align two or more devices such as binoculars or gun barrels and gunsights. A surveying camera may be collimated by setting its fiduciary markers so that they define the principal point, as in photogrammetry. Optical collimators are also used as gun sights in the collimator sight, which is a simple optical collimator with a cross hair or some other reticle at its focus. The viewer only sees an image of the reticle. They have to use it either with both eyes open and one eye looking into the collimator sight, with one eye open and moving the head to alternately see the sight and the target, or with one eye to partially see the sight and target at the same time. Adding a beam splitter allows the viewer to see the reticle and the field of view, making a reflector sight. Collimators may be used with laser diodes and CO2 cutting lasers. Proper collimation of a laser source with long enough coherence length can be verified with a shearing interferometer. X-ray, gamma ray, and neutron collimators In X-ray optics, gamma ray optics, and neutron optics, a collimator is a device that filters a stream of rays so that only those traveling parallel to a specified direction are allowed through. Collimators are used for X-ray, gamma-ray, and neutron imaging because it is difficult to focus these types of radiation into an image using lenses, as is routine with electromagnetic radiation at optical or near-optical wavelengths. Collimators are also used in radiation detectors in nuclear power stations to make them directionally sensitive. Applications The figure to the right illustrates how a Söller collimator is used in neutron and X-ray machines. The upper panel shows a situation where a collimator is not used, while the lower panel introduces a collimator. In both panels the source of radiation is to the right, and the image is recorded on the gray plate at the left of the panels. Without a collimator, rays from all directions will be recorded; for example, a ray that has passed through the top of the specimen (to the right of the diagram) but happens to be travelling in a downwards direction may be recorded at the bottom of the plate. The resultant image will be so blurred and indistinct as to be useless. In the lower panel of the figure, a collimator has been added (blue bars). 
This may be a sheet of lead or other material opaque to the incoming radiation with many tiny holes bored through it or in the case of neutrons it can be a sandwich arrangement (which can be up to several feet long; see ENGIN-X) with many layers alternating between neutron absorbing material (e.g., gadolinium) with neutron transmitting material. This can be something simple, such as air; alternatively, if mechanical strength is needed, a material such as aluminium may be used. If this forms part of a rotating assembly, the sandwich may be curved. This allows energy selection in addition to collimation; the curvature of the collimator and its rotation will present a straight path only to one energy of neutrons. Only rays that are travelling nearly parallel to the holes will pass through them—any others will be absorbed by hitting the plate surface or the side of a hole. This ensures that rays are recorded in their proper place on the plate, producing a clear image. For industrial radiography using gamma radiation sources such as iridium-192 or cobalt-60, a collimator (beam limiting device) allows the radiographer to control the exposure of radiation to expose a film and create a radiograph, to inspect materials for defects. A collimator in this instance is most commonly made of tungsten, and is rated according to how many half value layers it contains, i.e., how many times it reduces undesirable radiation by half. For instance, the thinnest walls on the sides of a 4 HVL tungsten collimator thick will reduce the intensity of radiation passing through them by 88.5%. The shape of these collimators allows emitted radiation to travel freely toward the specimen and the x-ray film, while blocking most of the radiation that is emitted in undesirable directions such as toward workers. Limitations Although collimators improve resolution, they also reduce intensity by blocking incoming radiation, which is undesirable for remote sensing instruments that require high sensitivity. For this reason, the gamma ray spectrometer on the Mars Odyssey is a non-collimated instrument. Most lead collimators let less than 1% of incident photons through. Attempts have been made to replace collimators with electronic analysis. In radiation therapy Collimators (beam limiting devices) are used in linear accelerators used for radiotherapy treatments. They help to shape the beam of radiation emerging from the machine and can limit the maximum field size of a beam. The treatment head of a linear accelerator consists of both a primary and secondary collimator. The primary collimator is positioned after the electron beam has reached a vertical orientation. When using photons, it is placed after the beam has passed through the X-ray target. The secondary collimator is positioned after either a flattening filter (for photon therapy) or a scattering foil (for electron therapy). The secondary collimator consists of two jaws which can be moved to either enlarge or minimize the size of the treatment field. New systems involving multileaf collimators (MLCs) are used to further shape a beam to localise treatment fields in radiotherapy. MLCs consist of approximately 50–120 leaves of heavy, metal collimator plates which slide into place to form the desired field shape. 
Computing the spatial resolution To find the spatial resolution of a parallel hole collimator with a hole length l, a hole diameter d and a distance to the imaged object z, the following formula can be used: R = d(l_eff + z) / l_eff, where the effective length is defined as l_eff = l − 2/μ, where μ is the linear attenuation coefficient of the material from which the collimator is made.
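As a concrete illustration of the resolution formula above, here is a short Python sketch. It is not from the article: the function names and the numerical values in the example are illustrative assumptions; only the relationship R = d(l_eff + z)/l_eff with l_eff = l − 2/μ is taken from the text.

```python
# Minimal sketch: evaluating the parallel-hole collimator resolution formula.
# Units only need to be mutually consistent (here everything is in mm, so the
# attenuation coefficient mu is given per mm).

def effective_length(hole_length_mm: float, mu_per_mm: float) -> float:
    """l_eff = l - 2/mu, the hole length corrected for septal penetration."""
    return hole_length_mm - 2.0 / mu_per_mm

def collimator_resolution(hole_diameter_mm: float,
                          hole_length_mm: float,
                          object_distance_mm: float,
                          mu_per_mm: float) -> float:
    """Geometric spatial resolution R = d * (l_eff + z) / l_eff."""
    l_eff = effective_length(hole_length_mm, mu_per_mm)
    return hole_diameter_mm * (l_eff + object_distance_mm) / l_eff

# Example with assumed, illustrative values: 1.5 mm holes, 25 mm long,
# an object 100 mm from the collimator face, and mu of roughly 2.5 per mm.
print(round(collimator_resolution(1.5, 25.0, 100.0, 2.5), 2), "mm")
```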
Technology
Optical components
null
1166049
https://en.wikipedia.org/wiki/Quenching
Quenching
In materials science, quenching is the rapid cooling of a workpiece in water, gas, oil, polymer, air, or other fluids to obtain certain material properties. A type of heat treating, quenching prevents undesired low-temperature processes, such as phase transformations, from occurring. It does this by reducing the window of time during which these undesired reactions are both thermodynamically favorable and kinetically accessible; for instance, quenching can reduce the crystal grain size of both metallic and plastic materials, increasing their hardness. In metallurgy, quenching is most commonly used to harden steel by inducing a martensite transformation, where the steel must be rapidly cooled through its eutectoid point, the temperature at which austenite becomes unstable. Rapid cooling prevents the formation of cementite structure, instead forcibly dissolving carbon atoms in the ferrite lattice. In steel alloyed with metals such as nickel and manganese, the eutectoid temperature becomes much lower, but the kinetic barriers to phase transformation remain the same. This allows quenching to start at a lower temperature, making the process much easier. High-speed steel also has added tungsten, which serves to raise kinetic barriers, which, among other effects, gives material properties (hardness and abrasion resistance) as though the workpiece had been cooled more rapidly than it really has. Even cooling such alloys slowly in the air has most of the desired effects of quenching; high-speed steel weakens much less from heat cycling due to high-speed cutting. Extremely rapid cooling can prevent the formation of all crystal structures, resulting in amorphous metal or "metallic glass". Quench hardening Quench hardening is a mechanical process in which steel and cast iron alloys are strengthened and hardened. These metals consist of ferrous metals and alloys. This is done by heating the material to a certain temperature, depending on the material. This produces a harder material by either surface hardening or through-hardening varying on the rate at which the material is cooled. The material is then often tempered to reduce the brittleness that may increase from the quench hardening process. Items that may be quenched include gears, shafts, and wear blocks. Purpose Before hardening, cast steels and iron are of a uniform and lamellar (or layered) pearlitic grain structure. This is a mixture of ferrite and cementite formed when steel or cast iron are manufactured and cooled at a slow rate. Pearlite is not an ideal material for many common applications of steel alloys as it is quite soft. By heating pearlite past its eutectoid transition temperature of 727 °C and then rapidly cooling, some of the material's crystal structure can be transformed into a much harder structure known as martensite. Steels with this martensitic structure are often used in applications when the workpiece must be highly resistant to deformation, such as the cutting edge of blades. This is very efficient. Process The process of quenching is a progression, beginning with heating the sample. Most materials are heated to between , with careful attention paid to keeping temperatures throughout the workpiece uniform. Minimizing uneven heating and overheating is key to imparting desired material properties. The second step in the quenching process is soaking. Workpieces can be soaked in air (air furnace), a liquid bath, or a vacuum. The recommended time allocation in salt or lead baths is up to 6 minutes. 
Soaking times can range a little higher within a vacuum. As in the heating step, it is important that the temperature throughout the sample remains as uniform as possible during soaking. Once the workpiece has finished soaking, it moves on to the cooling step. During this step, the part is submerged into some kind of quenching fluid; different quenching fluids can have a significant effect on the final characteristics of a quenched part. Water is one of the most efficient quenching media where maximum hardness is desired, but there is a small chance that it may cause distortion and tiny cracking. When hardness can be sacrificed, mineral oils are often used. These oil-based fluids often oxidize and form sludge during quenching, which consequently lowers the efficiency of the process. The cooling rate of oil is much less than water. Intermediate rates between water and oil can be obtained with a purpose-formulated quenchant, a substance with an inverse solubility that therefore deposits on the object to slow the rate of cooling. Quenching can also be accomplished using inert gases, such as nitrogen and noble gases. Nitrogen is commonly used at greater than atmospheric pressure ranging up to 20 bar absolute. Helium is also used because its thermal capacity is greater than nitrogen. Alternatively, argon can be used; however, its density requires significantly more energy to move, and its thermal capacity is less than the alternatives. To minimize distortion in the workpiece, long cylindrical workpieces are quenched vertically; flat workpieces are quenched on the edge; and thick sections should enter the bath first. To prevent steam bubbles the bath is agitated. Often, after quenching, an iron or steel alloy will be excessively hard and brittle due to an overabundance of martensite. In these cases, another heat treatment technique known as tempering is performed on the quenched material to increase the toughness of iron-based alloys. Tempering is usually performed after hardening, to reduce some of the excess hardness, and is done by heating the metal to some temperature below the critical point for a certain period of time, then allowing it to cool in still air. Mechanism of heat removal during quenching Heat is removed in three particular stages: Stage A: Vapor bubbles formed over metal and starts cooling During this stage, due to the Leidenfrost effect, the object is fully surrounded by vapor which insulates it from the rest of the liquid. Stage B: Vapor-transport cooling Once the temperature has dropped enough, the vapor layer will destabilize and the liquid will be able to fully contact the object and heat will be removed much more quickly. Stage C: Liquid cooling This stage occurs when the temperature of the object is below the boiling point of the liquid. History There is evidence of the use of quenching processes by blacksmiths stretching back into the middle of the Iron Age, but little detailed information exists related to the development of these techniques and the procedures employed by early smiths. Although early ironworkers must have swiftly noticed that processes of cooling could affect the strength and brittleness of iron, and it can be claimed that heat treatment of steel was known in the Old World from the late second millennium BC, it is hard to identify deliberate uses of quenching archaeologically. 
Moreover, it appears that, at least in Europe, "quenching and tempering separately do not seem to have become common until the 15th century"; it is helpful to distinguish between "full quenching" of steel, where the quenching is so rapid that only martensite forms, and "slack quenching", where the quenching is slower or interrupted, which also allows pearlite to form and results in a less brittle product. The earliest examples of quenched steel may come from ancient Mesopotamia, with a relatively secure example of a fourth-century BC quench-hardened chisel from Al Mina in Turkey. Book 9, lines 389-94 of Homer's Odyssey is widely cited as an early, possibly the first, written reference to quenching: as when a man who works as a blacksmith plunges a screaming great axe blade or adze into cold water, treating it for temper, since this is the way steel is made strong, even so Cyclops' eye sizzled about the beam of the olive. However, it is not beyond doubt that the passage describes deliberate quench-hardening, rather than simply cooling. Likewise, there is a prospect that the Mahabharata refers to the oil-quenching of iron arrowheads, but the evidence is problematic. Pliny the Elder addressed the topic of quenchants, distinguishing the water of different rivers. Chapters 18-21 of the twelfth-century De diversis artis by Theophilus Presbyter mentions quenching, recommending amongst other things that 'tools are also given a harder tempering in the urine of a small, red-headed boy than in ordinary water'. One of the fuller early discussions of quenching is the first Western printed book on metallurgy, Von Stahel und Eysen, published in 1532, which is characteristic of late-medieval technical treatises. The modern scientific study of quenching began to gain real momentum from the seventeenth century, with a major step being the observation-led discussion by Giambattista della Porta in his 1558 Magia Naturalis.
Technology
Metallurgy
null
11588774
https://en.wikipedia.org/wiki/Coal-fired%20power%20station
Coal-fired power station
A coal-fired power station or coal power plant is a thermal power station which burns coal to generate electricity. Worldwide there are about 2,500 coal-fired power stations, on average capable of generating a gigawatt each. They generate about a third of the world's electricity, but cause many illnesses and the most early deaths per unit of energy produced, mainly from air pollution. World installed capacity doubled from 2000 to 2023 and increased 2% in 2023. A coal-fired power station is a type of fossil fuel power station. The coal is usually pulverized and then burned in a pulverized coal-fired boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines that turn generators. Thus chemical energy stored in coal is converted successively into thermal energy, mechanical energy and, finally, electrical energy. Coal-fired power stations are a significant source of greenhouse gas emissions, releasing approximately 12 billion tonnes of carbon dioxide annually, representing about one-fifth of global emissions. This makes them the largest single contributor to climate change. China accounts for over half of global coal-fired electricity generation. While the total number of operational coal plants began declining in 2020, due to retirements in Europe and the Americas, construction continues in Asia, primarily in China. The profitability of some plants is maintained by externalities, as the health and environmental costs of coal production and use are not fully reflected in electricity prices. However, newer plants face the risk of becoming stranded assets. The UN Secretary General has called for OECD nations to phase out coal-fired generation by 2030, and the rest of the world by 2040. History The first coal-fired power stations were built in the late 19th century and used reciprocating engines to generate direct current. Steam turbines allowed much larger plants to be built in the early 20th century and alternating current was used to serve wider areas. Transport and delivery of coal Coal is delivered by highway truck, rail, barge, collier ship or coal slurry pipeline. Generating stations are sometimes built next to a mine; especially one mining coal, such as lignite, which is not valuable enough to transport long-distance; so may receive coal by conveyor belt or massive diesel-electric-drive trucks. A large coal train called a "unit train" may be 2 km long, containing 130-140 cars with around 100 tonnes of coal in each one, for a total load of over 10,000 tonnes. A large plant under full load requires at least one coal delivery this size every day. Plants may get as many as three to five trains a day, especially in "peak season" during the hottest summer or coldest winter months (depending on local climate) when power consumption is high. Modern unloaders use rotary dump devices, which eliminate problems with coal freezing in bottom dump cars. The unloader includes a train positioner arm that pulls the entire train to position each car over a coal hopper. The dumper clamps an individual car against a platform that swivels the car upside down to dump the coal. Swiveling couplers enable the entire operation to occur while the cars are still coupled together. Unloading a unit train takes about three hours. Shorter trains may use railcars with an "air-dump", which relies on air pressure from the engine plus a "hot shoe" on each car. 
This "hot shoe" when it comes into contact with a "hot rail" at the unloading trestle, shoots an electric charge through the air dump apparatus and causes the doors on the bottom of the car to open, dumping the coal through the opening in the trestle. Unloading one of these trains takes anywhere from an hour to an hour and a half. Older unloaders may still use manually operated bottom-dump rail cars and a "shaker" attached to dump the coal. A collier (cargo ship carrying coal) may hold of coal and takes several days to unload. Some colliers carry their own conveying equipment to unload their own bunkers; others depend on equipment at the plant. For transporting coal in calmer waters, such as rivers and lakes, flat-bottomed barges are often used. Barges are usually unpowered and must be moved by tugboats or towboats. For start up or auxiliary purposes, the plant may use fuel oil as well. Fuel oil can be delivered to plants by pipeline, tanker, tank car or truck. Oil is stored in vertical cylindrical steel tanks with capacities as high as . The heavier no. 5 "bunker" and no. 6 fuels are typically steam-heated before pumping in cold climates. Operation As a type of thermal power station, a coal-fired power station converts chemical energy stored in coal successively into thermal energy, mechanical energy and, finally, electrical energy. The coal is usually pulverized and then burned in a pulverized coal-fired boiler. The heat from the burning pulverized coal converts boiler water to steam, which is then used to spin turbines that turn generators. Compared to a thermal power station burning other fuel types, coal specific fuel processing and ash disposal is required. For units over about 200 MW capacity, redundancy of key components is provided by installing duplicates of the forced and induced draft fans, air preheaters, and fly ash collectors. On some units of about 60 MW, two boilers per unit may instead be provided. The hundred largest coal power stations range in size from 3,000 MW to 6,700 MW. Coal processing Coal is prepared for use by crushing the rough coal to pieces less than in size. The coal is then transported from the storage yard to in-plant storage silos by conveyor belts at rates up to 4,000 tonnes per hour. In plants that burn pulverized coal, silos feed coal to pulverizers (coal mills) that take the larger 5 cm pieces, grind them to the consistency of talcum powder, sort them, and mix them with primary combustion air, which transports the coal to the boiler furnace and preheats the coal in order to drive off excess moisture content. A 500 MWe plant may have six such pulverizers, five of which can supply coal to the furnace at 250 tonnes per hour under full load. In plants that do not burn pulverized coal, the larger 5 cm pieces may be directly fed into the silos which then feed either mechanical distributors that drop the coal on a traveling grate or the cyclone burners, a specific kind of combustor that can efficiently burn larger pieces of fuel. Boiler operation Plants designed for lignite (brown coal) are used in locations as varied as Germany, Victoria, Australia, and North Dakota. Lignite is a much younger form of coal than black coal. It has a lower energy density than black coal and requires a much larger furnace for equivalent heat output. Such coals may contain up to 70% water and ash, yielding lower furnace temperatures and requiring larger induced-draft fans. 
The firing systems also differ from those used for black coal and typically draw hot gas from the furnace-exit level and mix it with the incoming coal in fan-type mills that inject the pulverized coal and hot gas mixture into the boiler. Ash disposal The ash is often stored in ash ponds. Although the use of ash ponds in combination with air pollution controls (such as wet scrubbers) decreases the amount of airborne pollutants, the structures pose serious health risks for the surrounding environment. Power utility companies have often built the ponds without liners, especially in the United States, and therefore chemicals in the ash can leach into groundwater and surface waters. Since the 1990s, power utilities in the U.S. have designed many of their new plants with dry ash handling systems. The dry ash is disposed of in landfills, which typically include liners and groundwater monitoring systems. Dry ash may also be recycled into products such as concrete, structural fills for road construction and grout. Fly ash collection Fly ash is captured and removed from the flue gas by electrostatic precipitators or fabric bag filters (or sometimes both) located at the outlet of the furnace and before the induced draft fan. The fly ash is periodically removed from the collection hoppers below the precipitators or bag filters. Generally, the fly ash is pneumatically transported to storage silos and stored on site in ash ponds, or transported by trucks or railroad cars to landfills. Bottom ash collection and disposal At the bottom of the furnace, there is a hopper for collection of bottom ash. This hopper is kept filled with water to quench the ash and clinkers falling down from the furnace. Arrangements are included to crush the clinkers and convey the crushed clinkers and bottom ash to on-site ash ponds, or off-site to landfills. Ash extractors are used to discharge ash from municipal solid waste–fired boilers. Flexibility Effective energy policy, law and electricity markets are essential for grid flexibility. While the flexibility of some coal-fired power stations can be enhanced, they generally offer less dispatchable generation than most gas-fired power plants. A key aspect of flexibility is low minimum load; however, certain flexibility upgrades for coal plants may be more costly than deploying renewable energy sources with battery storage. Coal power generation , coal was the largest single source of electricity generation, fueling over one-third of global electricity generation and representing 34% of the total supply. Over half of global coal-fired generation in 2020 occurred in China, and coal provided approximately 60% of electricity in China, India and Indonesia. Globally in 2020, 2,059 GW of coal-fired capacity was operational, with 50 GW newly commissioned and 25 GW under construction (primarily in China), while 38 GW was retired (mainly in the US and EU). By 2023, global coal power capacity had increased to 2,130 GW, largely due to 47.4 GW of additions in China. While some nations pledged to transition away from coal power at the 2021 United Nations Climate Change Conference (COP26) through the Global Coal to Clean Power Transition Statement, significant challenges persist, especially in developing countries such as Indonesia and Vietnam. Efficiency There are four main types of coal-fired power station; in increasing order of efficiency they are: subcritical, supercritical, ultra-supercritical and cogeneration (also called combined heat and power or CHP).
Subcritical is the least efficient type; however, recent innovations have allowed retrofits of older subcritical plants to meet or even exceed the efficiency of supercritical plants. Integrated gasification combined cycle design Integrated gasification combined cycle (IGCC) is a coal-based power generation technology that uses a high-pressure gasifier to convert coal (or other carbon-based fuels) into pressurized synthesis gas (syngas). The gasification process allows the use of a combined cycle generator, typically achieving higher efficiency. IGCC also facilitates removal of certain pollutants from the syngas before power generation. However, this technology is more expensive than conventional coal-fired power stations. Carbon dioxide emissions As coal is mainly carbon, coal-fired power stations have a high carbon intensity. On average, coal power stations emit far more greenhouse gas per unit of electricity generated compared with other energy sources (see also life-cycle greenhouse-gas emissions of energy sources). In 2018 coal burnt to generate electricity emitted over 10 Gt of the 34 Gt total from fuel combustion (the overall total greenhouse gas emissions for 2018 were 55 Gt CO2e). Mitigation Phase out From 2015 to 2020, although coal generation hardly fell in absolute terms, some of its market share was taken by wind and solar. In 2020 only China increased coal power generation, and globally it fell by 4%. However, in 2021, China declared that it would limit coal generation until 2025 and subsequently phase it down over time. The UN Secretary General has said that OECD countries should stop generating electricity from coal by 2030 and the rest of the world by 2040, otherwise limiting global warming to 1.5 °C, a target of the Paris Agreement, would be extremely difficult. A 2024 analysis by The Economist concluded that financing phase-out would be cheaper than carbon offsets. However, phasing out coal in Asia can be a financial challenge, as plants there are relatively young; in China the co-benefits of closing a plant vary greatly depending on its location. Vietnam is among the few coal-dependent, fast-developing countries that have fully pledged to phase out unabated coal power by the 2040s or as soon as possible thereafter. Ammonia co-firing Ammonia has a high hydrogen density and is easy to handle. It can be used as a carbon-free fuel in gas turbine power generation and can help significantly reduce CO₂ emissions. In Japan, the first major four-year test project was started in June 2021 to develop technology to enable co-firing a significant amount of ammonia at a large-scale commercial coal-fired plant. However, low-carbon hydrogen and ammonia are in demand for sustainable shipping, which, unlike electricity generation, has few other clean options. Conversion Some power stations are being converted to burn gas, biomass or waste, and conversion to thermal storage will be trialed in 2023. Carbon capture Retrofitting some existing coal-fired power stations with carbon capture and storage was being considered in China in 2020, but this is very expensive, reduces the energy output and for some plants is not technically feasible. Pollution Coal-burning power plants kill many thousands of people every year with their emissions of particulates, microscopic air pollutants that enter human lungs and other organs and induce a variety of adverse medical conditions, including asthma, heart disease, low birth weight and cancers. In the U.S. 
alone, such particulates, known as PM2.5 (particulates with a diameter of 2.5 μm or less), caused at least 460,000 excess deaths over two decades. In some countries pollution is somewhat controlled by best available techniques, for example those in the EU through its Industrial Emissions Directive. In the United States, coal-fired plants are governed at the national level by several air pollution regulations, including the Mercury and Air Toxics Standards (MATS) regulation, by effluent guidelines for water pollution, and by solid waste regulations under the Resource Conservation and Recovery Act (RCRA). Coal-fired power stations continue to pollute in lightly regulated countries such as the Western Balkans, India, Russia and South Africa, causing over a hundred thousand early deaths each year. Local air pollution Damage to health from particulates, sulfur dioxide and nitrogen oxides occurs mainly in Asia and is often due to burning low-quality coal, such as lignite, in plants lacking modern flue gas treatment. Early deaths due to air pollution have been estimated at 200 per GW-year; however, they may be higher around power plants where scrubbers are not used, or lower if the plants are far from cities. Evidence indicates that exposure to sulfur, sulfates, or PM2.5 from coal emissions may be associated with higher relative morbidity or mortality risk, per unit concentration, than exposure to other PM2.5 constituents or to PM2.5 from other sources. Water pollution Heavy metals and other pollutants leaching into groundwater from unlined coal ash storage ponds or landfills can pollute water, possibly for decades or centuries. Pollutant discharges from ash ponds to rivers (or other surface water bodies) typically include arsenic, lead, mercury, selenium, chromium, and cadmium. Mercury emissions from coal-fired power plants can fall back onto the land and water in rain, and then be converted into methylmercury by bacteria. Through biomagnification, this mercury can then reach dangerously high levels in fish. More than half of atmospheric mercury comes from coal-fired power plants. Coal-fired power plants also emit sulfur dioxide and nitrogen oxides. These emissions lead to acid rain, which can restructure food webs and lead to the collapse of fish and invertebrate populations. Mitigation of local pollution Local pollution in China, which has by far the most coal-fired power stations, is forecast to be reduced further in the 2020s and 2030s, especially if small and low-efficiency plants are retired early. Economics Subsidies Coal power plants tend to serve as base load technology, as they have high availability factors, and are relatively difficult and expensive to ramp up and down. As such, they perform poorly in real-time energy markets, where they are unable to respond to changes in the locational marginal price. In the United States, this has been especially true since the advent of cheap natural gas, which can serve as a fuel in dispatchable power plants that can substitute for baseload generation on the grid. In 2020 the coal industry was subsidized with US$18 billion. Finance Coal financing is the financial support provided for coal-related projects, encompassing coal mining and coal-fired power stations. Its role in shaping the global energy landscape and its environmental and climate impacts have made it a subject of concern. The misalignment of coal financing with international climate objectives, particularly the Paris Agreement, has garnered attention. 
The Paris Agreement aims to restrict global warming to well below 2 degrees Celsius and ideally limit it to 1.5 degrees Celsius. Achieving these goals necessitates a substantial reduction in coal-related activities. Studies, including finance-based accounting of coal emissions, have revealed a misalignment of coal financing with climate objectives. Major nations, such as China, Japan, and the U.S., have extended financial support to overseas coal power infrastructure. The largest backers are Chinese banks under the Belt and Road Initiative (BRI). This support has led to significant long-term climate and financial risks and harms the objectives of reducing CO2 emissions set by the Paris Agreement, of which China, the United States and Japan are signatories. A substantial portion of the associated emissions is anticipated to occur after 2019. Coal financing poses challenges to the global decarbonization of the power generation sector. As renewable energy technologies become cost-competitive, the economic viability of coal projects diminishes, making past fossil fuel investments less attractive. To address these concerns and align with climate goals, there is a growing call for stricter policies regarding overseas coal financing. Countries, including Japan and the U.S., have faced criticism for permitting the financing of certain coal projects. Strengthening the policies, potentially by banning public financing of coal projects entirely, would enhance their climate efforts and credibility. In addition, Enhanced transparency in disclosing financing details is crucial for evaluating their environmental impacts. Capacity factors In India capacity factors are below 60%. In 2020 coal-fired power stations in the United States had an overall capacity factor of 40%; that is, they operated at a little less than half of their cumulative nameplate capacity. Stranded assets If global warming is limited to well below 2 °C as specified in the Paris Agreement, coal plant stranded assets of over US$500 billion are forecast by 2050, mostly in China. In 2020 think tank Carbon Tracker estimated that 39% of coal-fired plants were already more expensive than new renewables and storage and that 73% would be by 2025. about half of China's coal power companies are losing money and old and small power plants "have no hope of making profits". India is keeping potential stranded assets operating by subsidizing them. Politics In May 2021, the G7 committed to end support for coal-fired power stations within the year. The G7's commitment to end coal support is significant as their coal capacity decreased from 23% (443 GW) in 2015 to 15% (310 GW) in 2023, reflecting a shift towards greener policies. This contrasts with China and India, where coal remains central to energy policy. As of 2023, the Group of Twenty (G20) holds 92% of the world's operating coal capacity (1,968 GW) and 88% of pre-construction capacity (336 GW). The energy policy of China regarding coal and coal in China are the most important factors regarding the future of coal-fired power stations, because the country has so many. According to one analysis local officials overinvested in coal-fired power in the mid-2010s because central government guaranteed operating hours and set a high wholesale electricity price. In democracies coal power investment follows an environmental Kuznets curve. The energy policy of India about coal is an issue in the politics of India. 
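The capacity factors quoted above are simply the ratio of the electricity a plant actually generated to what it would have generated running continuously at nameplate capacity. A minimal sketch in Python, using a hypothetical unit for illustration rather than data for any real plant:

```python
def capacity_factor(energy_generated_mwh: float, nameplate_mw: float, hours: float) -> float:
    """Fraction of the energy the plant would produce running at nameplate capacity for the whole period."""
    return energy_generated_mwh / (nameplate_mw * hours)

# Hypothetical 1,000 MW unit generating 3.5 TWh over one year (8,760 hours):
cf = capacity_factor(3_500_000, 1_000, 8_760)
print(f"capacity factor = {cf:.0%}")  # ~40%, comparable to the 2020 US fleet average noted above
```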
Protests In the 21st century people have often protested against opencast mining, for example at Hambach Forest, Akbelen Forest and Ffos-y-fran; and at sites of proposed new plants, such as in Kenya and China.
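As a rough numerical illustration of the energy-conversion chain described under Operation (chemical to thermal to mechanical to electrical energy), the sketch below estimates how much coal a plant burns per megawatt-hour delivered. Both input figures are assumptions chosen for illustration, not values taken from this article.

```python
# Assumed values for illustration only.
coal_heating_value_mj_per_kg = 24.0  # assumed heating value of a typical bituminous coal
overall_efficiency = 0.38            # assumed fuel-to-electricity efficiency of the plant

mj_per_mwh = 3600.0
fuel_energy_needed_mj = mj_per_mwh / overall_efficiency        # coal energy needed per MWh of electricity
coal_per_mwh_kg = fuel_energy_needed_mj / coal_heating_value_mj_per_kg
print(f"~{coal_per_mwh_kg:.0f} kg of coal per MWh of electricity")  # roughly 400 kg under these assumptions
```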
Technology
Power generation
null
95913
https://en.wikipedia.org/wiki/Coltan
Coltan
Coltan (short for columbite–tantalites and known industrially as tantalite) is a dull black metallic ore from which the elements niobium and tantalum are extracted. The niobium-dominant mineral in coltan is columbite (after niobium's original American name columbium), and the tantalum-dominant mineral is tantalite. Tantalum from coltan is used to manufacture tantalum capacitors which are used for mobile phones, personal computers, automotive electronics, and cameras. Coltan mining is widespread in the Democratic Republic of the Congo. Production and supply Approximately 71% of the global tantalum supply in 2008 was newly mined, 20% was from recycling, and the remainder was from tin slag and inventory. Tantalum minerals are mined in the Democratic Republic of the Congo, Colombia, Rwanda, Australia, Brazil, China, Ethiopia, Mozambique and Kenya. Tantalum is also produced in Thailand and Malaysia as a by-product of tin mining and smelting. Potential future mines, in descending order of magnitude, are being explored in Egypt, Greenland, China, Australia, Finland, Canada, Nigeria and Brazil. Globally, 60% of all mining companies have registered with the highly regulated stock exchanges in Toronto and Vancouver. However, due to environmental regulations, no mining of coltan is currently taking place in Canada itself, with the exception of a single proposed mine in Blue River, British Columbia. In Canada, Tanco Mine near Bernic Lake in Manitoba has tantalum reserves, is the world's largest producer of caesium, and is operated by Global Advanced Metals Pty Ltd. A discussion of Canadian mining by Natural Resources Canada, updated in 2017, does not mention either coltan or tantalum. A Rwandan official discussing prospective mines in his country said that Canada had 4% of global production in 2009; but in rock so hard that the ore is too expensive to extract. In 2009, Rwanda had 9% of the world's tantalum production. In 2016, Rwanda accounted for 50% of global tantalum production. In 2016, Rwanda announced that AB Minerals Corporation would open a coltan separation plant in Rwanda by mid-2017, the first to operate on the African continent. Uganda and Rwanda both exported coltan in the early 2000s after they invaded the DRC, but the bulk of this coltan was not mined within those countries but smuggled from Congolese mines, according to the final report of the UN Panel of Experts on the Illegal Exploitation of Natural Resources and Other Forms of Wealth in the Democratic Republic of Congo. In 2013, Highland African Mining Company (HAMC), now Noventa, closed its Marropino mine in the Gilé District of Zambézia Province, Mozambique, citing poor-quality infrastructure and ore that was both very radioactive and mostly depleted. HAMC was losing US$3.00 on every ton extracted and had reported accumulated losses of around US$150 million by June 2013. Reserves have been identified in Afghanistan, but the ongoing war there precludes either general exploration or exploring specifically for coltan for the foreseeable future. The United States does not produce tantalum due to the poor quality of its reserves. Australian mining company Sons of Gwalia once produced half the world's tantalum but went into administration in 2004. Talison Minerals paid $205 million to buy the Wodgina and Greenbushes tantalum business of Sons of Gwalia but temporarily closed Wodgina because of falling tantalum prices. The mine re-opened in 2011 but closed again after less than a year. 
Atlas Iron began mining iron ore there in 2010 and ceased operations there in April 2017. Global Advanced Mining announced in 2018 that it planned to restart tantalum production at the Greenbushes mine within a year. Talison Lithium, 51% owned by Chinese company Tianqi Lithium Industries, Inc. (SZSE:002466) and 49% by the US-based Albemarle Corporation, will continue to mine lithium at Greenbushes in parallel with the GAM tantalum operation. Venezuelan President Hugo Chávez announced in 2009 that a significant reserve of coltan was discovered in western Venezuela, although at least one coltan mining operation had previously been authorized in the area. Nonetheless, he outlawed private mines in the region and, saying that the FARC was financing itself with illegal mining, sent 15,000 troops in to deal with them. Technical advisers for the mining project were allegedly provided by a subsidiary of Khatam-al Anbiya Construction Headquarters, a fully owned enterprise of the Iranian Revolutionary Guard which had been under US sanctions since October 25, 2007. Also in 2009, the Colombian government announced coltan reserves had been found in Colombia's eastern provinces. Director of the Colombian Police Oscar Naranjo Trujillo stated in October 2011 that the FARC and the Sinaloa Cartel are working together in the unlicensed coltan mining in Colombia. Colombia announced a joint operation with the United States to arrest three suspects who, according to Semana, inherited the illegal business run from their brother, Francisco Cifuentes Villa, alias 'Pancho Cifuentes', who once worked for Pablo Escobar. In 2012 Colombian police seized 17 tons of coltan in Guainía Department. The police said it had been mined on an indigenous reserve and bought for $10 a kilo and sold for $80 to 100 dollars a kilo, after smuggling it across the border into Brazil, where there are smelters, and sold on through the black market to buyers in Germany, Belgium, Kazakhstan and the United States. Colombia has 5% of global coltan reserves. One of the regions suffering from illegal gold and coltan mining in Colombia is the wetland known as Estrella Fluvial del Inírida (Inírida Fluvial Star), a Ramsar protected wetland. Use and demand Coltan is used primarily for the production of tantalum capacitors, used in mobile phones and almost every kind of electronic device. Niobium and tantalum have a wide range of uses, including refractive lenses for glasses, cameras, phones and printers. They are also used in semiconductor circuits, and capacitors for small electronic devices such as hearing aids, pacemakers, and MP3 players, as well as in computer hard drives, automobile electronics, and surface acoustic wave (SAW) filters for mobile phones. Coltan is also used to make high-temperature alloys for jet engines, air-based turbines, and land-based turbines. More recently, in the late 2000s, the nickel-tantalum super-alloys used in jet engines account for 15% of tantalum consumption, but pending orders for the Airbus and the 787 Dreamliner may increase this proportion, as well as China's pending order for 62 787-8 airplanes. In 2012, electronics companies that used coltan included Acer Inc., AMP, Apple Inc., Canon Inc., Dell, HP Inc., HTC, IBM, Intel, Lenovo, LG, Microsoft, Motorola, Nikon, Nintendo, Nokia, Panasonic, Philips, RIM (now Blackberry Limited), Samsung, Sandisk, Sharp Corporation, Sony, and Toshiba. 
Some companies have taken steps to reduce their use of conflict minerals by tracing the source of minerals in their supply chains, auditing smelters, and certifying conflict-free coltan mines. As of 2012, the companies that lagged behind these efforts the most were Nintendo, HTC, Sharp Corporation, Nikon, and Canon Inc. Resource curse Certain countries rich in natural resources have been said to suffer from the apparently paradoxical "resource curse" - showing worse economic development than countries with fewer resources. Wealth of resources may also correspond to "... the likelihood of weak democratic development, corruption, and civil war". High levels of corruption lead to great political instability because whoever controls the assets (usually the political leaders and the government, in the case of the Democratic Republic of the Congo) can use them for their own benefit. The resources generate wealth, which the leaders use to stay in power "... either through legal means, or coercive ones (e.g. funding militias)". The increased importance of coltan in electronics "occurred as warlords and armies in the eastern Congo converted artisanal mining operations ... into slave labour regimes to earn hard currency to finance their militias," as one anthropological study put it in 2008. When much of eastern Congo came under the control of Rwandan forces in the 1990s, Rwanda suddenly became a major exporter of coltan, benefiting from the weakness of the Congolese government. The soaring price "brought in as much as $20 million a month to rebel groups" and other factions trading coltan mined in northeastern Congo, according to a U.N. report. Mining For Congolese, mining is the readiest source of income, because the work is consistently available, even if only for a dollar a day. The work can be laborious; miners can walk for days into the forest to reach the ore, scratch it from the earth with hand tools, and pan it. About 90% of young men in Congo have done this. Research found that many Congolese leave farming because they need money quickly and cannot wait for crops to grow. Farming also presents its own obstacles. For example, the lack of roads in the Congolese interior makes it extremely difficult to transport produce to market and a harvest can be seized by militias or the military. With their food gone, people resort to mining to survive. But organized mines may be run by corrupt groups such as militias. The Congolese mine coltan with few tools, no safety procedures, and often no mining experience. No government aid or intervention is available in many unethical and abusive circumstances. Miners consider coltan mining a way to provide for themselves in the face of widespread war and conflict and a government that has no concern for their welfare. A 2007 study of the radioactivity of the coltan mined in Masisi and other parts of the North Kivu Province found "that grinding and sieving coltan can give rise to high occupational doses, up to 18 mSv per year on average." Ethics of mining in the Democratic Republic of Congo Conflicts in the Democratic Republic of Congo (DRC) have made it difficult for the DRC to benefit from the exploitation of its coltan reserves. Mining of coltan is mainly artisanal and small-scale and vulnerable to extortion and human trafficking. A 2003 UN Security Council report stated that much of the ore is mined illegally and smuggled across Congo's eastern border by militias from neighbouring Uganda, Burundi and Rwanda. 
All three countries named by the United Nations as coltan smugglers denied doing this. An Austrian journalist, however, has documented links between multinational companies such as Bayer and the smuggling and illegal coltan mines. A United Nations committee investigating the plunder of gems and minerals from the Congo listed in its final report in 2003 approximately 125 companies and individuals whose business activities breached international norms. Companies accused of irresponsible corporate behavior included Cabot Corporation, Eagle Wings Resources International, the Forrest Group and OM Group. Some of the fighters were eventually tried before the International Criminal Court in The Hague on charges of crimes against humanity. Income from coltan smuggling likely financed the military occupation of Congo, and prolonged the civil conflict afterwards. A UN panel studied the eastern Congo for months before releasing a remarkably sharp condemnation of the ongoing military occupation of eastern Congo by Ugandan, Rwandan, and other foreign military forces, as well as the many bands of Congolese rebels fighting with one another. The UN report accused the fighters of massively looting Congolese natural resources, and said that the war persisted because the fighters were enriching themselves by mining and smuggling out coltan, timber, gold, and diamonds. It also said that smuggled minerals financed the fighting and provided money for weapons. A 2005 report on the Rwandan economy by the South African Institute for Security Studies found that Rwanda's official coltan production soared nearly tenfold between 1999 and 2001, from 147 tons to 1,300 tons, and for the first time provided more revenue than the country's traditional primary exports, tea and coffee. Similarly, Uganda exported 2.5 tons of coltan a year before the conflict broke out in 1997; in 1999 its export volume exploded to nearly 70 tons. Many of the corporations participating in the 1999–2000 business stampede caused by $400 coltan prices were in fact participants in the conflict. The Rwandan army, trading as Rwanda Metals, exported at least 100 tons per month. Because coltan extraction causes problems that adjoin or overlap those caused by blood diamonds, and uses similar methods such as smuggling across the porous Rwandan border, environmentalists and human rights workers began to speak of "conflict minerals" or "conflict resources" more generally. It is difficult to verify the sourcing of fungible materials like ores, so some processors, Cabot Corporation (USA) for example, have announced that they would avoid unsourced Central African coltan altogether. A UN panel estimated that the Rwandan army could have made $20 million a month, and must have made at least $250 million over 18 months. "This is substantial enough to finance the war," the panel noted in its report. In 2009, DRC coltan was going to China to be manufactured into wires and electronic-grade tantalum powder. Coltan imports from the DRC into Europe usually went to Russia or Central/Eastern Europe, via the route through Dar es Salaam in Tanzania and Piraeus in Greece to the Balkans. An offshore consortium registered in the British Virgin Islands named Nova Dies controlled most of the trans-Balkan trade route. This export pipeline mostly carries unprocessed coltan mined in unsafe artisanal mines, so this market hinders development of safer extraction infrastructure in the DRC. 
The Balkan trade route, therefore, poses a long-term threat to the DRC's economy; it finances and validates the vast harm done to DR Congo by the violent and corrupt past and current system. Estimates of Congo's coltan deposits range upwards from 64% of global reserves. but estimates at the high end of the range are difficult to trace to reliable data. Professional bodies like the British Geological Survey estimate that Central Africa as a whole has 9% of global assets. Tantalum, the primary element extracted from coltan, can also be obtained from other sources, but Congolese coltan represented around 10% of world production in 2008. The United States responded to conflict minerals with section 1501 of the 2010 Dodd-Frank Act, which required companies that might have conflict minerals including Coltan in their supply chain to register with the US Securities and Exchange Commission and disclose their suppliers. The legislation appears to have had limited success. Based on extensive qualitative fieldwork conducted from 2014 to 2016 with coltan buyers operating in Bukama Territory, Kalemie and Lubumbashi, Katanga Province, one researcher suggested that conflict mineral reforms resulted in better oversight and organization of supply chains, but that inaction by the Congolese government had led to locally negotiated solutions and territorialization, leading to secretive mining activities. Environmental concerns Uncontrolled mining in the DRC causes soil erosion and pollutes lakes and rivers, affecting the hydrology and ecology of the region. The eastern mountain gorilla's population has diminished as well. Miners, far from food sources and often hungry, hunt gorillas. The gorilla population in the DRC fell from 17,000 to 5,000 in the decade prior to 2009, and Mountain Gorillas in the Great Lakes region numbered only 700, UNEP said in 2009. Hunted for bushmeat, a prized delicacy in western Africa, and threatened by logging, slash-and-burn agriculture and armed conflict, the gorilla population was critically endangered, they said. The population of Grauer's gorillas were particularly threatened by changes in their environment, with a population in January 2018 of only about 3,800. An estimated 3–5 million tons of bushmeat is obtained by killing animals, including gorillas, every year. Demand for bushmeat comes from urban dwellers who consider it a delicacy, as well as from remote populations of artisanal miners. Environmentalists who interviewed miners in and around Kahuzi-Biéga National Park and the Itombwe Nature Reserve found that the miners did confirm that they had been eating bushmeat and that they did think that the practice had caused a decline in primate numbers. Since the miners said they would cease the practice if they had another food supply, the authors suggested that efforts to stop the gorilla population decline should consider addressing this issue to reduce the depredations of subsistence hunting. The mines in these nature reserves were producing cassiterite, gold, coltan and wolframite, and "most mines were controlled by armed groups." Health concerns There is a high prevalence of respiratory complaints in Congolese informal coltan miners. It has been suggested that efficient occupational safety measures be implemented. Also, there is a need to regulate the informal mining business due to a high death toll. 
Price increases and changes in demand The production and sale of coltan and niobium from African mines dropped significantly after the dramatic price spike in 2000 from the dot-com frenzy, from $400 to the current price level of around $100. Figures from the United States Geological Survey partially confirm this. The Tantalum-Niobium International Study Centre in Belgium, the country that colonized the DRC, has encouraged international buyers to avoid Congolese coltan on ethical grounds: "take care in obtaining ... raw materials from lawful sources. Harm, or the threat of harm, to local people, wildlife or the environment is unacceptable." In addition to the environmental harm caused by erosion, pollution and deforestation, agriculture, and as a result food security, suffered in the DRC as a result of mining. A follow-up UN report in 2003 noted a sharp increase in 1999 and 2000 in the global price of tantalum, which naturally increased coltan production. Some of the increased production came from the eastern DRC, where there are "rebel groups and unscrupulous business people" forcing farmers and their families to leave land where the rebels wanted to mine, "forcing them to work in artisanal mines...widespread destruction of agriculture and devastating social effects occurred, which in a number of instances were akin to slavery." A shift also took place from traditional sources such as Australia to new suppliers such as Egypt; the bankruptcy of the world's biggest supplier, Australia's Sons of Gwalia, may have caused or contributed to this change. The operations previously owned by Gwalia in Wodgina and Greenbushes continue to operate in some capacity.
Physical sciences
Minerals
Earth science
96049
https://en.wikipedia.org/wiki/Alcohol%20proof
Alcohol proof
Alcohol proof (usually termed simply "proof" in relation to a beverage) is a measure of the content of ethanol (alcohol) in an alcoholic beverage. The term was originally used in England and from 1816 was equal to about 1.75 times the percentage of alcohol by volume (ABV). The United Kingdom today uses ABV instead of proof. In the United States, alcohol proof is defined as twice the percentage of ABV. The definition of proof in terms of ABV varies from country to country. The measurement of alcohol content and the statement of content on bottles of alcoholic beverages are regulated by law in many countries. In 1972, Canada phased out the use of "proof"; in 1973, the European Union followed suit; and the United Kingdom, where the concept originated, started using ABV instead in 1980. The United States Code mandates the use of ABV, but permits proof to be used also. The degree symbol (°) is sometimes used to indicate alcohol proof, either alone (e.g. 10°) or after a space and joined to the letter P as a unit name (e.g. 13 °P). History The term proof dates back to 16th century England, when spirits were taxed at different rates depending on their alcohol content. Similar terminology and methodology spread to other nations as spirit distillation, and taxation, became common. In England, spirits were originally tested with a basic "burn-or-no-burn" test, in which an alcohol-containing liquid that would ignite was said to be "above proof", and one which would not was said to be "under proof". A liquid just alcoholic enough to maintain combustion was defined as 100 proof and was the basis for taxation. Because the flash point of alcohol is highly dependent on temperature, 100 proof defined this way ranges from roughly 20% to 96% alcohol by weight (ABW) depending on the temperature of the test; at typical ambient temperatures, 100 proof corresponds to about 50% ABW. Another early method for testing liquor's alcohol content was the "gunpowder method". Gunpowder was soaked in a spirit, and if the gunpowder could still burn, the spirit was rated above proof. This test relies on the fact that potassium nitrate (a chemical in gunpowder) is significantly more soluble in water than in alcohol. While less influenced by temperature than the simpler burn-or-no-burn test, gunpowder tests also lacked true reproducibility. Factors including the grain size of the gunpowder and the time it sat in the spirit affect the dissolution of potassium nitrate and therefore what would be defined as 100 proof. However, the gunpowder method is significantly less variable than the burn-or-no-burn method, and 100 proof defined by it is traditionally defined as 57.15% ABV. By the end of the 17th century, England had introduced tests based on specific gravity for defining proof. However, it was not until 1816 that a legal standard based on specific density was defined in England: 100 proof was defined as a spirit with 12/13 the specific gravity of pure water at the same temperature. From the 19th century until 1 January 1980, the UK officially measured alcohol content by proof spirit, defined as spirit with 12/13 the gravity of water, equivalent to 57.15% ABV. The value 57.15% is very close to the fraction 4/7 (about 57.14%). This led to the approximation that 100-proof spirit has an ABV of about 57.14%. From this, it follows that to convert the ABV expressed as a percentage to degrees proof, it is only necessary to multiply the ABV by 7/4 (that is, by 1.75). Thus pure 100% alcohol will have 100×(7/4) = 175 proof, and a spirit containing 40% ABV will have 40×(7/4) = 70 proof. 
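The two conversion rules stated above, the historical UK factor of 7/4 and the US factor of 2, can be captured in a few lines. A minimal sketch in Python, using only the definitions given in this article:

```python
def uk_degrees_proof(abv_percent: float) -> float:
    """Historical UK definition: degrees proof = ABV x 7/4 (so 100% ABV = 175 proof)."""
    return abv_percent * 7 / 4

def us_proof(abv_percent: float) -> float:
    """US definition: proof = twice the ABV percentage."""
    return abv_percent * 2

for abv in (40.0, 57.15, 100.0):
    print(f"{abv}% ABV -> UK {uk_degrees_proof(abv):.0f} proof, US {us_proof(abv):.0f} proof")
# 40.0% ABV -> UK 70 proof, US 80 proof
# 57.15% ABV -> UK 100 proof, US 114 proof
# 100.0% ABV -> UK 175 proof, US 200 proof
```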
The proof system in the United States was established around 1848 and was based on percent alcohol rather than specific gravity. Fifty percent alcohol by volume was defined as 100 proof. Note that this is different from 50% volume fraction (expressed as a percentage); the latter does not take into account change in volume on mixing, whereas the former does. To make 50% ABV from pure alcohol, one would take 50 parts of alcohol and dilute to 100 parts of solution with water, all the while mixing the solution. To make 50% alcohol by volume fraction, one would take 50 parts alcohol and 50 parts water, measured separately, and then mix them together. The resulting volume will not be 100 parts but between 96 and 97 parts, since the smaller water molecules can take up some of the space between the larger alcohol molecules (see volume change). The use of proof as a measure of alcohol content is now mostly linguistic and historical. Today, liquor is sold in most locations with labels that state its percentage alcohol by volume. Governmental regulation European Union The European Union (EU) follows recommendations of the International Organization of Legal Metrology (OIML). OIML's International Recommendation No. 22 (1973) provides standards for measuring alcohol strength by volume and by mass. A preference for one method over the other is not stated in the document, but if alcohol strength by volume is used, it must be expressed as a percentage of total volume at a temperature of . The document does not address alcohol proof or the labeling of bottles. United Kingdom On 1 January 1980, Britain adopted the ABV system of measurement prescribed by the European Union, of which it was then a member. The OIML recommendation for ABV used by the EU states the alcohol by volume in a mixture containing alcohol as a percentage of the total volume of the mixture at a temperature of . It replaced the Sikes hydrometer method of measuring the proof of spirits, which had been used in Britain for over 160 years. United States In the United States, alcohol content is legally mandated to be specified as an ABV percentage. For bottled spirits over containing no solids, actual alcohol content is allowed to vary by up to 0.15% of the ABV stated on the label. By contrast, bottled spirits which are less than 100 ml (as well as those which otherwise contain solids) may vary by up to 0.25%. Proof (the term degrees proof is not used), defined as being twice the percentage of alcohol by volume, may be optionally stated in conjunction with the ABV. For example, whisky may be labeled as 50% ABV and as 100 proof; 86-proof whisky contains 43% ABV. The most typical bottling proof for spirits in the United States is 80 US proof, and there is special legal recognition of 100-proof spirits in the bottled in bond category defined since 1897. The Code of Federal Regulations requires that liquor labels state the percentage of ABV at a temperature of . The regulation permits, but does not require, a statement of the proof, provided that it is printed close to the ABV number. In practice, proof levels continue to be stated on nearly all spirits labels in the United States, and are more commonly used than ABV when describing spirits in journalism and informal settings. Canada Beverages were labelled by alcohol proof in Canada until 1972, then replaced by ABV.
Physical sciences
Other_2
Basics and measurement
96558
https://en.wikipedia.org/wiki/Maxwell%27s%20demon
Maxwell's demon
Maxwell's demon is a thought experiment that appears to disprove the second law of thermodynamics. It was proposed by the physicist James Clerk Maxwell in 1867. In his first letter, Maxwell referred to the entity as a "finite being" or a "being who can play a game of skill with the molecules". Lord Kelvin would later call it a "demon". In the thought experiment, a demon controls a door between two chambers containing gas. As individual gas molecules (or atoms) approach the door, the demon quickly opens and closes the door to allow only fast-moving molecules to pass through in one direction, and only slow-moving molecules to pass through in the other. Because the kinetic temperature of a gas depends on the velocities of its constituent molecules, the demon's actions cause one chamber to warm up and the other to cool down. This would decrease the total entropy of the system, seemingly without applying any work, thereby violating the second law of thermodynamics. The concept of Maxwell's demon has provoked substantial debate in the philosophy of science and theoretical physics, which continues to the present day. It stimulated work on the relationship between thermodynamics and information theory. Most scientists argue that, on theoretical grounds, no practical device can violate the second law in this way. Other researchers have implemented forms of Maxwell's demon in experiments, though they all differ from the thought experiment to some extent and none has been shown to violate the second law. Origin and history of the idea The thought experiment first appeared in a letter Maxwell wrote to Peter Guthrie Tait on 11 December 1867. It appeared again in a letter to John William Strutt in 1871, before it was presented to the public in Maxwell's 1872 book on thermodynamics titled Theory of Heat. In his letters and books, Maxwell described the agent opening the door between the chambers as a "finite being". Being a deeply religious man, he never used the word "demon". Instead, William Thomson (Lord Kelvin) was the first to use it for Maxwell's concept, in the journal Nature in 1874, and implied that he intended the Greek mythology interpretation of a daemon, a supernatural being working in the background, rather than a malevolent being. Original thought experiment The second law of thermodynamics ensures (through statistical probability) that two bodies of different temperature, when brought into contact with each other and isolated from the rest of the Universe, will evolve to a thermodynamic equilibrium in which both bodies have approximately the same temperature. The second law is also expressed as the assertion that in an isolated system, entropy never decreases. Maxwell conceived a thought experiment as a way of furthering the understanding of the second law. His description of the experiment is as follows: In other words, Maxwell imagines one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon will let it pass from B to A. The average speed of the molecules in B will have increased while in A they will have slowed down on average. 
Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs A and B could extract useful work from this temperature difference. The demon must allow molecules to pass in both directions in order to produce only a temperature difference; one-way passage only of faster-than-average molecules from A to B will cause higher temperature and pressure to develop on the B side. Criticism and development Several physicists have presented calculations that show that the second law of thermodynamics will not actually be violated, if a more complete analysis is made of the whole system including the demon. The essence of the physical argument is to show, by calculation, that any demon must "generate" more entropy segregating the molecules than it could ever eliminate by the method described. That is, it would take more thermodynamic work to gauge the speed of the molecules and selectively allow them to pass through the opening between A and B than the amount of energy gained by the difference of temperature caused by doing so. One of the most famous responses to this question was suggested in 1929 by Leó Szilárd, and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Since the demon and the gas are interacting, we must consider the total entropy of the gas and the demon combined. The expenditure of energy by the demon will cause an increase in the entropy of the demon, which will be larger than the lowering of the entropy of the gas. In 1960, Rolf Landauer raised an exception to this argument. He realized that some measuring processes need not increase thermodynamic entropy as long as they were thermodynamically reversible. He suggested these "reversible" measurements could be used to sort the molecules, violating the Second Law. However, due to the connection between entropy in thermodynamics and information theory, this also meant that the recorded measurement must not be erased. In other words, to determine whether to let a molecule through, the demon must acquire information about the state of the molecule and either discard it or store it. Discarding it leads to immediate increase in entropy, but the demon cannot store it indefinitely. In 1982, Charles Bennett showed that, however well prepared, eventually the demon will run out of information storage space and must begin to erase the information it has previously gathered. Erasing information is a thermodynamically irreversible process that increases the entropy of a system. Although Bennett had reached the same conclusion as Szilard's 1929 paper, that a Maxwellian demon could not violate the second law because entropy would be created, he had reached it for different reasons. Regarding Landauer's principle, the minimum energy dissipated by deleting information was experimentally measured by Eric Lutz et al. in 2012. Furthermore, Lutz et al. confirmed that in order to approach the Landauer's limit, the system must asymptotically approach zero processing speed. Recently, Landauer's principle has also been invoked to resolve an apparently unrelated paradox of statistical physics, Loschmidt’s paradox. John Earman and John D. 
Norton have argued that Szilárd and Landauer's explanations of Maxwell's demon begin by assuming that the second law of thermodynamics cannot be violated by the demon, and derive further properties of the demon from this assumption, including the necessity of consuming energy when erasing information, etc. It would therefore be circular to invoke these derived properties to defend the second law from the demonic argument. Bennett later acknowledged the validity of Earman and Norton's argument, while maintaining that Landauer's principle explains the mechanism by which real systems do not violate the second law of thermodynamics. Recent progress Although the argument by Landauer and Bennett only answers the consistency between the second law of thermodynamics and the whole cyclic process of the entire system of a Szilard engine (a composite system of the engine and the demon), a recent approach based on the non-equilibrium thermodynamics for small fluctuating systems has provided deeper insight on each information process with each subsystem. From this viewpoint, the measurement process is regarded as a process where the correlation (mutual information) between the engine and the demon increases, decreasing the entropy of the system in an amount given by the mutual information. If the correlation changes, thermodynamic relations such as the second law of thermodynamics and the fluctuation theorem for each subsystem should be modified, and for the case of external control a second-law like inequality and a generalized fluctuation theorem with mutual information are satisfied. For more general information processes including biological information processing, both inequality and equality with mutual information hold. When repeated measurements are performed, the entropy reduction of the system is given by the entropy of the sequence of measurements, which takes into account the reduction of information due to the correlation between the measurements. Applications Real-life versions of Maxwellian demons occur, but all such "real demons" or molecular demons have their entropy-lowering effects duly balanced by increase of entropy elsewhere. Molecular-sized mechanisms are no longer found only in biology; they are also the subject of the emerging field of nanotechnology. Single-atom traps used by particle physicists allow an experimenter to control the state of individual quanta in a way similar to Maxwell's demon. If hypothetical mirror matter exists, Zurab Silagadze proposes that demons can be envisaged, "which can act like perpetuum mobiles of the second kind: extract heat energy from only one reservoir, use it to do work and be isolated from the rest of ordinary world. Yet the Second Law is not violated because the demons pay their entropy cost in the hidden (mirror) sector of the world by emitting mirror photons." Experimental work In 2007, David Leigh announced the creation of a nano-device based on the Brownian ratchet popularized by Richard Feynman. Leigh's device is able to drive a chemical system out of equilibrium, but it must be powered by an external source (light in this case) and therefore does not violate thermodynamics. Previously, researchers including Nobel Prize winner Fraser Stoddart had created ring-shaped molecules called rotaxanes which could be placed on an axle connecting two sites, A and B. Particles from either site would bump into the ring and move it from end to end. 
If a large collection of these devices were placed in a system, half of the devices had the ring at site A and half at B, at any given moment in time. Leigh made a minor change to the axle so that if a light is shone on the device, the center of the axle will thicken, restricting the motion of the ring. It keeps the ring from moving, however, only if it is at A. Over time, therefore, the rings will be bumped from B to A and get stuck there, creating an imbalance in the system. In his experiments, Leigh was able to take a pot of "billions of these devices" from 50:50 equilibrium to a 70:30 imbalance within a few minutes. In 2009, Mark G. Raizen developed a laser atomic cooling technique which realizes the process Maxwell envisioned of sorting individual atoms in a gas into different containers based on their energy. The new concept is a one-way wall for atoms or molecules that allows them to move in one direction, but not go back. The operation of the one-way wall relies on an irreversible atomic and molecular process of absorption of a photon at a specific wavelength, followed by spontaneous emission to a different internal state. The irreversible process is coupled to a conservative force created by magnetic fields and/or light. Raizen and collaborators proposed using the one-way wall in order to reduce the entropy of an ensemble of atoms. In parallel, Gonzalo Muga and Andreas Ruschhaupt independently developed a similar concept. Their "atom diode" was not proposed for cooling, but rather for regulating the flow of atoms. The Raizen Group demonstrated significant cooling of atoms with the one-way wall in a series of experiments in 2008. Subsequently, the operation of a one-way wall for atoms was demonstrated by Daniel Steck and collaborators later in 2008. Their experiment was based on the 2005 scheme for the one-way wall, and was not used for cooling. The cooling method realized by the Raizen Group was called "single-photon cooling", because only one photon on average is required in order to bring an atom to near-rest. This is in contrast to other laser cooling techniques which use the momentum of the photon and require a two-level cycling transition. In 2006, Raizen, Muga, and Ruschhaupt showed in a theoretical paper that as each atom crosses the one-way wall, it scatters one photon, and information is provided about the turning point and hence the energy of that particle. The entropy increase of the radiation field scattered from a directional laser into a random direction is exactly balanced by the entropy reduction of the atoms as they are trapped by the one-way wall. This technique is widely described as a "Maxwell's demon" because it realizes Maxwell's process of creating a temperature difference by sorting high and low energy atoms into different containers. However, scientists have pointed out that it does not violate the second law of thermodynamics, does not result in a net decrease in entropy, and cannot be used to produce useful energy. This is because the process requires more energy from the laser beams than could be produced by the temperature difference generated. The atoms absorb low entropy photons from the laser beam and emit them in a random direction, thus increasing the entropy of the environment. In 2014, Pekola et al. demonstrated an experimental realization of a Szilárd engine. 
Only a year later and based on an earlier theoretical proposal, the same group presented the first experimental realization of an autonomous Maxwell's demon, which extracts microscopic information from a system and reduces its entropy by applying feedback. The demon is based on two capacitively coupled single-electron devices, both integrated on the same electronic circuit. The operation of the demon is directly observed as a temperature drop in the system, with a simultaneous temperature rise in the demon arising from the thermodynamic cost of generating the mutual information. In 2016, Pekola et al. demonstrated a proof-of-principle of an autonomous demon in coupled single-electron circuits, showing a way to cool critical elements in a circuit with information as a fuel. Pekola et al. have also proposed that a simple qubit circuit, e.g., made of a superconducting circuit, could provide a basis to study a quantum Szilard's engine. As metaphor Daemons in computing, generally processes that run on servers to respond to users, are named for Maxwell's demon. Historian Henry Brooks Adams, in his manuscript The Rule of Phase Applied to History, attempted to use Maxwell's demon as a historical metaphor, though he misunderstood and misapplied the original principle. Adams interpreted history as a process moving towards "equilibrium", but he saw militaristic nations (he felt Germany pre-eminent in this class) as tending to reverse this process, a Maxwell's demon of history. Adams made many attempts to respond to the criticism of his formulation from his scientific colleagues, but the work remained incomplete at his death in 1918 and was published posthumously.
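As a quantitative footnote to the Landauer–Bennett resolution discussed above: Landauer's principle puts the minimum heat dissipated by erasing one bit of information at k·T·ln 2. A minimal sketch, assuming room temperature of 300 K:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

landauer_limit_j = k_B * T * math.log(2)  # minimum heat dissipated per erased bit
print(f"Landauer limit at {T:.0f} K: {landauer_limit_j:.2e} J per bit")  # about 2.9e-21 J

# Erasing one gigabyte (8e9 bits) at this theoretical floor dissipates only ~2.3e-11 J,
# far less than real memory hardware dissipates, which is why the bound was so hard to measure.
```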
Physical sciences
Statistical mechanics
Physics
96590
https://en.wikipedia.org/wiki/Ammonium%20nitrate
Ammonium nitrate
Ammonium nitrate is a chemical compound with the formula . It is a white crystalline salt consisting of ions of ammonium and nitrate. It is highly soluble in water and hygroscopic as a solid, although it does not form hydrates. It is predominantly used in agriculture as a high-nitrogen fertilizer. Its other major use is as a component of explosive mixtures used in mining, quarrying, and civil construction. It is the major constituent of ANFO, a popular industrial explosive which accounts for 80% of explosives used in North America; similar formulations have been used in improvised explosive devices. Many countries are phasing out its use in consumer applications due to concerns over its potential for misuse. Accidental ammonium nitrate explosions have killed thousands of people since the early 20th century. Global production was estimated at 21.6 million tonnes in 2017. By 2021, global production of ammonium nitrate was down to 16.7 million tonnes. Occurrence Ammonium nitrate is found as the natural mineral gwihabaite (formerly known as nitrammite) – the ammonium analogue of saltpetre (mineralogical name: niter) – in the driest regions of the Atacama Desert in Chile, often as a crust on the ground or in conjunction with other nitrate, iodate, and halide minerals. Ammonium nitrate was mined there until the Haber–Bosch process made it possible to synthesize nitrates from atmospheric nitrogen, thus rendering nitrate mining obsolete. Production, reactions and crystalline phases The industrial production of ammonium nitrate entails the acid-base reaction of ammonia with nitric acid: HNO3 + NH3 → NH4NO3 The ammonia required for this process is obtained by the Haber process from nitrogen and hydrogen. Ammonia produced by the Haber process can be oxidized to nitric acid by the Ostwald process. Ammonia is used in its anhydrous form (a gas) and the nitric acid is concentrated. The reaction is violent owing to its highly exothermic nature. After the solution is formed, typically at about 83% concentration, the excess water is evaporated off to leave an ammonium nitrate (AN) content of 95% to 99.9% concentration (AN melt), depending on grade. The AN melt is then made into "prills" or small beads in a spray tower, or into granules by spraying and tumbling in a rotating drum. The prills or granules may be further dried, cooled, and then coated to prevent caking. These prills or granules are the typical AN products in commerce. Another production method is a variant of the nitrophosphate process: Ca(NO3)2 + 2 NH3 + CO2 + H2O → 2 NH4NO3 + CaCO3 The products, calcium carbonate and ammonium nitrate, may be separately purified or sold combined as calcium ammonium nitrate. Ammonium nitrate can also be made via metathesis reactions: (NH4)2SO4 + Ba(NO3)2 → 2 NH4NO3 + BaSO4 (NH4)2SO4 + Ca(NO3)2 → 2 NH4NO3 + CaSO4 NH4Cl + AgNO3 → NH4NO3 + AgCl Reactions As ammonium nitrate is a salt, both the cation, , and the anion, , may take part in chemical reactions. Solid ammonium nitrate decomposes on heating. At temperatures below around 300 °C, the decomposition mainly produces nitrous oxide and water: NH4NO3 → N2O + 2 H2O At higher temperatures, the following reaction predominates. 2 NH4NO3 → 2 N2 + O2 + 4 H2O Both decomposition reactions are exothermic and their products are gas. Under certain conditions, this can lead to a runaway reaction, with the decomposition process becoming explosive. See for details. Many ammonium nitrate disasters, with loss of lives, have occurred. 
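A minimal stoichiometric sketch (Python, using standard atomic masses) shows why the salt is both a concentrated nitrogen source and a copious gas generator on decomposition; it uses the low-temperature decomposition reaction given above.

```python
# Standard atomic masses in g/mol
m_N, m_H, m_O = 14.007, 1.008, 15.999

molar_mass_an = 2 * m_N + 4 * m_H + 3 * m_O   # NH4NO3
nitrogen_fraction = 2 * m_N / molar_mass_an
print(f"M(NH4NO3) = {molar_mass_an:.2f} g/mol, nitrogen content = {nitrogen_fraction:.1%}")
# ~80.04 g/mol and ~35% nitrogen by mass

# Low-temperature decomposition: NH4NO3 -> N2O + 2 H2O (3 mol of gas per mol of salt)
moles_gas_per_kg = 3 * 1000 / molar_mass_an
print(f"~{moles_gas_per_kg:.0f} mol of gaseous products per kg of ammonium nitrate")  # ~37 mol
```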
The red–orange colour in an explosion cloud is due to nitrogen dioxide, a secondary reaction product. Crystalline phases A number of crystalline phases of ammonium nitrate have been observed. The following occur under atmospheric pressure: liquid above 169.6 °C; phase I (cubic) from 169.6 to 125.2 °C; phase II (tetragonal) from 125.2 to 84.2 °C; phase III (α-rhombic) from 84.2 to 32.3 °C; phase IV (β-rhombic) from 32.3 to −16.8 °C; and phase V (tetragonal) below −16.8 °C. The transition between the β-rhombic and α-rhombic forms (at 32.3 °C) occurs at ambient temperature in many parts of the world. These forms have a 3.6% difference in density, and hence the transition between them causes a change in volume. One practical consequence of this is that ammonium nitrate cannot be used as a solid rocket motor propellant, as it develops cracks. Phase-stabilized ammonium nitrate (PSAN) was developed as a solution to this; it incorporates metal halide stabilisers, which prevent density fluctuations. Applications Fertilizer Ammonium nitrate is an important fertilizer with NPK rating 34-0-0 (34% nitrogen). It is less concentrated than urea (46-0-0), giving ammonium nitrate a slight transportation disadvantage. Ammonium nitrate's advantage over urea is that it is more stable and does not rapidly lose nitrogen to the atmosphere. Explosives Ammonium nitrate readily forms explosive mixtures with varying properties when combined with explosives such as TNT or with fuels like aluminum powder or fuel oil. Examples of explosives containing ammonium nitrate include: Amatex (ammonium nitrate, TNT and RDX); Amatol (ammonium nitrate and TNT); Ammonal (ammonium nitrate and aluminum powder); ANFO (ammonium nitrate and fuel oil); Astrolite (ammonium nitrate and hydrazine rocket fuel); Goma-2 (ammonium nitrate, nitroglycol, nitrocellulose, dibutyl phthalate and fuel); Minol (ammonium nitrate, TNT and aluminum powder); Nitrolite (ammonium nitrate, TNT and nitroglycerin); DBX (ammonium nitrate, RDX, TNT and aluminum powder); and Tovex (ammonium nitrate and methylammonium nitrate). Mixture with fuel oil ANFO is a mixture of 94% ammonium nitrate ("AN") and 6% fuel oil ("FO") widely used as a bulk industrial explosive. It is used in coal mining, quarrying, metal mining, and civil construction in undemanding applications where the advantages of ANFO's low cost, relative safety, and ease of use matter more than the benefits offered by conventional industrial explosives, such as water resistance, oxygen balance, high detonation velocity, and performance in small diameters. Terrorism Ammonium nitrate-based explosives were used in the Sterling Hall bombing in Madison, Wisconsin, in 1970, the Oklahoma City bombing in 1995, the 2011 Delhi bombings, the 2011 bombing in Oslo, the Myyrmanni bombing and the 2013 Hyderabad blasts. In November 2009, the government of the North West Frontier Province (NWFP) of Pakistan imposed a ban on ammonium sulfate, ammonium nitrate, and calcium ammonium nitrate fertilizers in the former Malakand Division – comprising the Upper Dir, Lower Dir, Swat, Chitral, and Malakand districts of the NWFP – following reports that those chemicals were used by militants to make explosives. Due to these bans, "Potassium chlorate – the material which allows safety matches to catch fire – has surpassed fertilizer as the explosive of choice for insurgents." Niche uses Ammonium nitrate is used in some instant cold packs, as its dissolution in water is highly endothermic. 
In 2021, King Abdullah University of Science and Technology in Saudi Arabia conducted experiments to study the potential for dissolving ammonium nitrate in water for off-grid cooling systems and as a refrigerant. They suggested that the water could be distilled and reused using solar energy to avoid water wastage in severe environments. It was once used, in combination with independently explosive "fuels" such as guanidine nitrate, as a cheaper (but less stable) alternative to 5-aminotetrazole in the inflators of airbags manufactured by Takata Corporation, which were recalled as unsafe after killing 14 people. The current USA death total is 27. Safety, handling, and storage Numerous safety guidelines are available for storing and handling ammonium nitrate. Health and safety data are shown on the safety data sheets available from suppliers and from various governments. Pure ammonium nitrate does not burn, but as a strong oxidizer, it supports and accelerates the combustion of organic (and some inorganic) material. It should not be stored near combustible substances. While ammonium nitrate is stable at ambient temperature and pressure under many conditions, it may detonate from a strong initiation charge. It should not be stored near high explosives or blasting agents. Molten ammonium nitrate is very sensitive to shock and detonation, particularly if it becomes contaminated with incompatible materials such as combustibles, flammable liquids, acids, chlorates, chlorides, sulfur, metals, charcoal and sawdust. Contact with certain substances such as chlorates, mineral acids and metal sulfides, can lead to vigorous or even violent decomposition capable of igniting nearby combustible material or detonating. Ammonium nitrate begins decomposition after melting, releasing , HNO3, NH and H2O. It should not be heated in a confined space. The resulting heat and pressure from decomposition increases the sensitivity to detonation and increases the speed of decomposition. Detonation may occur at 80 atmospheres. Contamination can reduce this to 20 atmospheres. Ammonium nitrate has a critical relative humidity of 59.4% at 30 °C. At higher humidity it will absorb moisture from the atmosphere. Therefore, it is important to store ammonium nitrate in a tightly sealed container. Otherwise, it can coalesce into a large, solid mass. Ammonium nitrate can absorb enough moisture to liquefy. Blending ammonium nitrate with certain other fertilizers can lower the critical relative humidity. The potential for use of the material as an explosive has prompted regulatory measures. For example, in Australia, the Dangerous Goods Regulations came into effect in August 2005 to enforce licensing in dealing with such substances. Licenses are granted only to applicants (industry) with appropriate security measures in place to prevent any misuse. Additional uses such as education and research purposes may also be considered, but individual use will not. Employees of those with licenses to deal with the substance are still required to be supervised by authorized personnel and are required to pass a security and national police check before a license may be granted. Health hazards Ammonium nitrate is not hazardous to health and is usually used in fertilizer products. Ammonium nitrate has an LD50 of 2217 mg/kg, which for comparison is about two-thirds that of table salt. Disasters Ammonium nitrate decomposes, non-explosively, into the gases nitrous oxide and water vapor when heated. 
However, it can be induced to decompose explosively by detonation. Large stockpiles of the material can also be a major fire risk due to their supporting oxidation, a situation which can easily escalate to detonation. Explosions are not uncommon: relatively minor incidents occur most years, and several large and devastating explosions have also occurred. Examples include the Oppau explosion of 1921 (one of the largest artificial non-nuclear explosions), the Texas City disaster of 1947, the 2015 Tianjin explosions in China, and the 2020 Beirut explosion. Ammonium nitrate can explode through two mechanisms: Shock induced detonation. An explosive charge within or in contact with a mass of ammonium nitrate causes the ammonium nitrate to detonate. Examples of such disasters are Kriewald, Morgan (present-day Sayreville, New Jersey), Oppau, and Tessenderlo. Deflagration to detonation transition. The ammonium nitrate explosion results from a fire that spreads into the ammonium nitrate (Texas City, TX; Brest; West, TX; Tianjin; Beirut), or from ammonium nitrate mixing with a combustible material during the fire (Gibbstown, Cherokee, Nadadores). The fire must be confined at least to a degree for successful transition from a fire to an explosion.
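Drawing on two figures given earlier – the critical relative humidity of 59.4% at 30 °C and the phase transition at 32.3 °C – the sketch below flags storage conditions that invite moisture uptake or repeated volume change. It is purely illustrative: the thresholds come from this article, but the function and its pass/fail logic are an assumption, not an industry procedure.

#include <cstdio>

// Illustrative storage check for ammonium nitrate using two thresholds
// quoted above: the critical relative humidity (59.4% at 30 degC) and the
// IV/III crystalline phase transition (32.3 degC).
bool storageConditionsOk(double relativeHumidityPercent, double temperatureC)
{
    const double criticalRH = 59.4;        // moisture uptake above this
    const double phaseTransitionC = 32.3;  // volume-changing transition
    bool dryEnough  = relativeHumidityPercent < criticalRH;
    bool coolEnough = temperatureC < phaseTransitionC;
    return dryEnough && coolEnough;
}

int main()
{
    std::printf("30 degC, 50%% RH: %s\n", storageConditionsOk(50, 30) ? "ok" : "not ok");
    std::printf("35 degC, 70%% RH: %s\n", storageConditionsOk(70, 35) ? "ok" : "not ok");
    return 0;
}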
https://en.wikipedia.org/wiki/Offspring
Offspring
In biology, offspring are the young produced by living organisms, either by sexual or asexual reproduction. Collective offspring may be known as a brood or progeny. This can refer to a set of simultaneous offspring, such as the chicks hatched from one clutch of eggs, or to all offspring produced over time, as with the honeybee. Offspring can occur after mating, artificial insemination, or as a result of cloning. Human offspring (descendants) are referred to as children; male children are sons and female children are daughters (see Kinship). Overview Offspring produced by sexual reproduction, also called the child or F1 generation, inherit genes from both the father and the mother, who together make up the parental generation. Each offspring carries numerous genes that code for specific traits and functions. Males and females contribute equally to the genotypes of their offspring when their gametes fuse. A key structure in this inheritance is the chromosome, a structure of DNA that carries many genes. One inheritance pattern that shapes the F1 generation is sex linkage, in which a gene is located on a sex chromosome, so its pattern of inheritance differs between males and females. Evidence that offspring carry genes from both parents comes from crossing over: during meiosis, homologous chromosomes exchange segments, so that paternal and maternal genes are recombined before the chromosomes are divided evenly between gametes. The sex of the offspring is determined by which sex chromosomes it inherits. The female always contributes an X chromosome, whereas the male contributes either an X chromosome or a Y chromosome. A male offspring carries an X and a Y chromosome, and a female offspring carries two X chromosomes. Cloning is the production of an offspring that carries genes identical to those of its parent. Reproductive cloning begins with the removal of the nucleus from an egg, which holds the genetic material. To clone an organ, a stem cell is produced and then used to grow that specific organ. A common misconception is that cloning produces an exact copy of the parent being cloned. Cloning copies the parent's DNA and creates a genetic duplicate, but the clone will not be identical in every respect, because it grows up in different surroundings from the parent and may encounter different opportunities and experiences that can result in epigenetic changes. Although cloning has promising applications, it also faces setbacks in terms of ethics and human health. Though cell division and DNA replication are vital to survival, they involve many steps, and mutations can occur that permanently change the DNA of an organism and its offspring. Some mutations can be beneficial and contribute to evolution, but most are harmful, changing the genotypes of offspring in ways that can damage the species.
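A minimal sketch of the sex-chromosome pattern described above: the mother always passes an X chromosome, while the father passes an X or a Y with equal probability, so roughly half of the offspring are XX (female) and half XY (male). The simulation below is illustrative only; the sample size and random seed are arbitrary.

#include <cstdio>
#include <random>

// Simulate sex determination: the mother always contributes an X chromosome,
// the father contributes X or Y with equal probability.
int main()
{
    std::mt19937 rng(12345);
    std::bernoulli_distribution fatherGivesY(0.5);

    int males = 0, females = 0;
    for (int i = 0; i < 10000; ++i)
        fatherGivesY(rng) ? ++males : ++females;  // XY vs XX offspring

    std::printf("XY (male): %d, XX (female): %d\n", males, females);
    return 0;
}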
https://en.wikipedia.org/wiki/Seahorse
Seahorse
A seahorse (also written sea-horse and sea horse) is any of 46 species of small marine bony fish in the genus Hippocampus. The genus name comes from the Ancient Greek (), itself from () meaning "horse" and () meaning "sea monster" or "sea animal". Having a head and neck suggestive of a horse, seahorses also feature segmented bony armour, an upright posture and a curled prehensile tail. Along with the pipefishes and seadragons (Phycodurus and Phyllopteryx) they form the family Syngnathidae. Habitat Seahorses are mainly found in shallow tropical and temperate salt water throughout the world, from about 45°S to 45°N. They live in sheltered areas such as seagrass beds, estuaries, coral reefs, and mangroves. Four species are found in Pacific waters from North America to South America. In the Atlantic, Hippocampus erectus ranges from Nova Scotia to Uruguay. H. zosterae, known as the dwarf seahorse, is found in the Bahamas. Colonies have been found in European waters such as the Thames Estuary. Two species live in the Mediterranean Sea: H. guttulatus (the long-snouted seahorse), H. hippocampus (the short-snouted seahorse). These species form territories; males stay within of habitat, while females range over about one hundred times that. Description Seahorses range in size from . They are named for their equine appearance, with bent necks and long snouted heads and a distinctive trunk and tail. Although they are bony fish, they do not have scales, but rather thin skin stretched over a series of bony plates, which are arranged in rings throughout their bodies. Each species has a distinct number of rings. The armor of bony plates also protects them against predators, and because of this outer skeleton, they no longer have ribs. Seahorses swim upright, propelling themselves using the dorsal fin, another characteristic not shared by their close pipefish relatives, which swim horizontally. Razorfish are the only other fish that swim vertically. The pectoral fins, located on either side of the head behind their eyes, are used for steering. They lack the caudal fin typical of fishes. Their prehensile tail is composed of square-like rings that can be unlocked only in the most extreme conditions. They are adept at camouflage, and can grow and reabsorb spiny appendages depending on their habitat. Unusual among fish, a seahorse has a flexible, well-defined neck. It also sports a crown-like spine or horn on its head, termed a "coronet", which is distinct for each species. Seahorses swim very poorly, rapidly fluttering a dorsal fin and using pectoral fins to steer. The slowest-moving fish in the world is H. zosterae (the dwarf seahorse), with a top speed of about per hour. Since they are poor swimmers, they are most likely to be found resting with their prehensile tail wound around a stationary object. They have long snouts, which they use to suck up food, and their eyes can move independently of each other like those of a chameleon. Evolution and fossil record Anatomical evidence, supported by molecular, physical, and genetic evidence, demonstrates that seahorses are highly modified pipefish. The fossil record of seahorses, however, is very sparse. The best known and best studied fossils are specimens of Hippocampus guttulatus (though literature more commonly refers to them under the synonym of H. ramulosus), from the Marecchia River formation of Rimini Province, Italy, dating back to the Lower Pliocene, about 3 million years ago. The earliest known seahorse fossils are of two pipefish-like species, H. 
sarmaticus and H. slovenicus, from the coprolitic horizon of Tunjice Hills, a middle Miocene lagerstätte in Slovenia dating back about 13 million years. Molecular dating implies that pipefish and seahorses diverged during the Late Oligocene. This has led to speculation that seahorses evolved in response to large areas of shallow water, newly created as the result of tectonic events. The shallow water would have allowed the expansion of seagrass habitats that served as camouflage for the seahorses' upright posture. These tectonic changes occurred in the western Pacific Ocean, pointing to an origin there, with molecular data suggesting two later, separate invasions of the Atlantic Ocean. In 2016, a study published in Nature found the seahorse genome to be the most rapidly evolving fish genome studied so far. The evolution of seahorses from pipefish may have been an adaptation related to the biomechanics of prey capture. The unique posture of the seahorse allows them to capture small shrimps at larger distances than the pipefish is capable of. Reproduction The male seahorse is equipped with a brood pouch on the ventral, or front-facing, side of the tail. When mating, the female seahorse deposits up to 1,500 eggs in the male's pouch. The male carries the eggs for 9 to 45 days until the seahorses emerge fully developed, but very small. The young are then released into the water, and the male often mates again within hours or days during the breeding season. Courtship Before breeding, seahorses may court for several days. Scientists believe the courtship behavior synchronizes the animals' movements and reproductive states, so that the male can receive the eggs when the female is ready to deposit them. During this time, they may change color, swim side by side holding tails or grip the same strand of sea grass with their tails, and wheel around in unison in what is known as a "predawn dance". They eventually engage in a "true courtship dance" lasting about 8 hours, during which the male pumps water through the egg pouch on his trunk which expands and opens to display its emptiness. When the female's eggs reach maturity, she and her mate let go of any anchors and drift upward snout-to-snout, out of the sea grass, often spiraling as they rise. They interact for about 6 minutes, reminiscent of courtship. The female inserts her ovipositor into the male's brood pouch and deposits dozens to thousands of eggs. As the female releases her eggs, her body slims while his swells. Both animals then sink back into the sea grass and she swims away. Phases of courtship Seahorses exhibit four phases of courtship that are indicated by clear behavioral changes and changes in the intensity of the courtship act. Phase 1, the initial courtship phase, typically takes place in the early morning one or two days before physical copulation. During this phase the potential mates brighten in colour, quiver, and display rapid side-to-side body vibrations. These displays are performed alternately by both the male and the female seahorse. The following phases, 2 through 4, happen sequentially on the day of copulation. Phase 2 is marked by the female pointing, a behaviour in which the female will raise her head to form an oblique angle with her body. In phase 3 males will also begin the same pointing behaviour in response to the female. 
Finally, the male and female will repeatedly rise upward together in a water column and end in mid-water copulation, in which the female will transfer her eggs directly into the male's brood pouch. Phase 1: Initial courtship This initial courtship behaviour takes place about 30 minutes after dawn on each courtship day, until the day of copulation. During this phase the males and females will remain apart during the night, but after dawn they will come together in a side-by-side position, brighten, and engage in courtship behaviour for about 2 to 38 minutes. There is repeated reciprocal quivering. This starts when the male approaches the female, brightens and begins to quiver. The female will follow the male with her own display, in which she will also brighten and quiver about 5 seconds later. As the male quivers, he will rotate his body towards the female who will then rotate her body away. During phase 1 the tails of both seahorses are positioned within 1 cm of each other on the same hold-fast and both of their bodies are angled slightly outward from the point of attachment. However, the female will shift her tail attachment site, causing the pair to circle their common hold-fast. Phase 2: Pointing and pumping This phase begins with the female beginning her pointing posture, by leaning her body towards the male, who will simultaneously lean away and quiver. This phase can last up to 54 minutes. Following phase 2 is a latency period (typically between 30 minutes and four hours), during which the seahorses display no courtship behaviour and females are not bright; males will usually display a pumping motion with their body. Phase 3: Pointing – pointing The third phase begins with the females brightening and assuming the pointing position. The males respond with their own brightening and pointing display. This phase ends with the male departing. It usually lasts nine minutes and can occur one to six times during courtship. Phase 4: Rising and copulation The final courtship phase includes 5–8 bouts of courtship. Each bout of courtship begins with both the male and female anchored to the same plant about 3 cm apart; usually they are facing each other and are still bright in colour from the previous phase. During the first bout, following the facing behaviour, the seahorses will rise upward together anywhere from 2 to 13 cm in a water column. During the final rise, the female will insert her ovipositor and transfer her eggs through an opening into the male's brood pouch. Fertilization During fertilization in Hippocampus kuda, the brood pouch was found to be open for only six seconds while egg deposition occurred. During this time seawater entered the pouch where the spermatozoa and eggs meet in a seawater milieu. This hyperosmotic environment facilitates sperm activation and motility. The fertilization is therefore regarded as being physiologically 'external' within a physically 'internal' environment after the closure of the pouch. It is believed that this protected form of fertilization reduces sperm competition among males. Within the Syngnathidae (pipefishes and seahorses) protected fertilization has not been documented in the pipefishes but the lack of any distinct differences in the relation of testes size to body size suggests that pipefishes may also have evolved mechanisms for more efficient fertilization with reduced sperm competition. Gestation The fertilized eggs are then embedded in the pouch wall and become surrounded by a spongy tissue. 
The pouch provides oxygen, as well as a controlled environment incubator. Though the egg yolk contributes nourishment to the developing embryo, the male sea horses contribute additional nutrients such as energy-rich lipids and also calcium to allow them to build their skeletal system, by secreting them into the brood pouch that are absorbed by the embryos. Further they also offer immunological protection, osmoregulation, gas exchange and waste transport. The eggs then hatch in the pouch, where the salinity of the water is regulated; this prepares the newborns for life in the sea. Birth The number of young released by the male seahorse averages 100–1000 for most species, but may be as low as 5 for the smaller species, or as high as 2,500. When the fry are ready to be born, the male expels them with muscular contractions. He typically gives birth at night and is ready for the next batch of eggs by morning when his mate returns. Like almost all other fish species, seahorses do not nurture their young after birth. Infants are susceptible to predators or ocean currents which wash them away from feeding grounds or into temperatures too extreme for their delicate bodies. Less than 0.5% of infants survive to adulthood, explaining why litters are so large. These survival rates are actually fairly high compared to other fish, because of their protected gestation, making the process worth the great cost to the father. The eggs of most other fish are abandoned immediately after fertilization. Reproductive roles Reproduction is energetically costly to the male. This brings into question why the sexual role reversal even takes place. In an environment where one partner incurs more energy costs than the other, Bateman's principle suggests that the lesser contributor takes the role of the aggressor. Male seahorses are more aggressive and sometimes fight for female attention. According to Amanda Vincent of Project Seahorse, only males tail-wrestle and snap their heads at each other. This discovery prompted further study of energy costs. To estimate the female's direct contribution, researchers chemically analyzed the energy stored in each egg. To measure the burden on the males, oxygen consumption was used. By the end of incubation, the male consumed almost 33% more oxygen than before mating. The study concluded that the female's energy expenditure while generating eggs is twice that of males during incubation, confirming the standard hypothesis. Why the male seahorse (and other members of the Syngnathidae) carries the offspring through gestation is unknown, though some researchers believe it allows for shorter birthing intervals, in turn resulting in more offspring. Given an unlimited number of ready and willing partners, males have the potential to produce 17% more offspring than females in a breeding season. Also, females have "time-outs" from the reproductive cycle 1.2 times longer than those of males. This seems to be based on mate choice, rather than physiology. When the female's eggs are ready, she must lay them in a few hours or eject them into the water column. Making eggs is a huge cost to her physically, since they amount to about a third of her body weight. To protect against losing a clutch, the female demands a long courtship. The daily greetings help to cement the bond between the pair. Monogamy Though seahorses are not known to mate for life, many species form pair bonds that last through at least the breeding season. Some species show a higher level of mate fidelity than others. 
However, many species readily switch mates when the opportunity arises. H. abdominalis and H. breviceps have been shown to breed in groups, showing no continuous mate preference. Many more species' mating habits have not been studied, so it is unknown how many species are actually monogamous, or how long those bonds actually last. Although monogamy within fish is not common, it does appear to exist for some. In this case, the mate-guarding hypothesis may be an explanation. This hypothesis states, "males remain with a single female because of ecological factors that make male parental care and protection of offspring especially advantageous." Because the rates of survival for newborn seahorses are so low, incubation is essential. Though not proven, males could have taken on this role because of the lengthy period the females require to produce their eggs. If males incubate while females prepare the next clutch (amounting to a third of body weight), they can reduce the interval between clutches. Feeding habits Seahorses use their long snouts to eat their food with ease. However, they are slow to consume their food and have extremely simple digestive systems that lack a stomach, so they must eat constantly to stay alive. Seahorses are not very good swimmers, and for this reason they need to anchor themselves to seaweed, coral or anything else that will keep the seahorse in place. They do this by using their prehensile tails to grasp their object of choice. Seahorses feed on small crustaceans floating in the water or crawling on the bottom. With excellent camouflage seahorses ambush prey that floats within striking range, sitting and waiting until an optimal moment. Mysid shrimp and other small crustaceans are favorites, but some seahorses have been observed eating other kinds of invertebrates and even larval fish. In a study of seahorses, the distinctive head morphology was found to give them a hydrodynamic advantage that creates minimal interference while approaching an evasive prey. Thus the seahorse can get very close to the copepods on which it preys. After successfully closing in on the prey without alerting it, the seahorse gives an upward thrust and rapidly rotates the head aided by large tendons that store and release elastic energy, to bring its long snout close to the prey. This step is crucial for prey capture, as oral suction only works at a close range. This two-phase prey capture mechanism is termed pivot-feeding. Seahorses have three distinctive feeding phases: preparatory, expansive, and recovery. During the preparatory phase, the seahorse slowly approaches the prey while in an upright position, after which it slowly flexes its head ventrally. In the expansive phase, the seahorse captures its prey by simultaneously elevating its head, expanding the buccal cavity, and sucking in the prey item. During the recovery phase, the jaws, head, and hyoid apparatus of the seahorse return to their original positions. The amount of available cover influences the seahorse's feeding behaviour. For example, in wild areas with small amounts of vegetation, seahorses will sit and wait, but an environment with extensive vegetation will prompt the seahorse to inspect its environment, feeding while swimming rather than sitting and waiting. Conversely, in an aquarium setting with little vegetation, the seahorse will fully inspect its environment and makes no attempt to sit and wait. 
Threats of extinction Because data is lacking on the sizes of the various seahorse populations, as well as other issues including how many seahorses are dying each year, how many are being born, and the number used for souvenirs, there is insufficient information to assess their risk of extinction, and the risk of losing more seahorses remains a concern. Coral reefs and seagrass beds are deteriorating, reducing viable habitats for seahorses. Additionally, bycatch in many areas causes high cumulative effects on seahorses, with an estimated 37 million individuals being removed annually over 21 countries. Aquaria While many aquarium hobbyists keep them as pets, seahorses collected from the wild tend to fare poorly in home aquaria. Many eat only live foods such as brine shrimp and are prone to stress, which damages their immune systems and makes them susceptible to disease. In recent years, however, captive breeding has become more popular. Such seahorses survive better in captivity, and are less likely to carry diseases. They eat frozen mysidacea (crustaceans) that are readily available from aquarium stores, and do not experience the stress of moving out of the wild. Although captive-bred seahorses are more expensive, they take no toll on wild populations. Seahorses should be kept in an aquarium with low flow and placid tank mates. They are slow feeders, so fast, aggressive feeders will leave them without food. Seahorses can coexist with many species of shrimp and other bottom-feeding creatures. Gobies also make good tank-mates. Keepers are generally advised to avoid eels, tangs, triggerfish, squid, octopus, and sea anemones. Water quality is very important for the survival of seahorses in an aquarium. They are delicate species which should not be added to a new tank. The water parameters are recommended to be as follows although these fish may acclimatise to different water over time: Temperature: pH: 8.1–8.4 Ammonia: 0 mg/L (0 ppm) (0.01 mg/L (0.01 ppm) may be tolerated for short periods) Nitrite: 0 mg/L (0 ppm) (0.125 mg/L (0.125 ppm) may be tolerated for short periods) S.G.: 1.021–1.024 at A water-quality problem will affect fish behaviour and can be shown by clamped fins, reduced feeding, erratic swimming, and gasping at the surface. Seahorses require vertical swimming space to perform reproductive functions and to prevent depth-related health conditions like gas bubble disease, so a refugium that is at least 20 inches by 51 centimeters deep is recommended inside an aquarium. Animals sold as "freshwater seahorses" are usually the closely related pipefish, of which a few species live in the lower reaches of rivers. The supposed true "freshwater seahorse" called H. aimei is not a valid species, but a synonym sometimes used for Barbour's and hedgehog seahorses. The latter, which is often confused with the former, can be found in estuarine environments, but is not actually a freshwater fish. Consumption Seahorse populations are thought to be endangered as a result of overfishing and habitat destruction. Despite a lack of scientific studies or clinical trials, the consumption of seahorses is widespread in traditional Chinese medicine, primarily in connection with impotence, wheezing, nocturnal enuresis, and pain, as well as labor induction. Up to 20 million seahorses may be caught each year to be sold for such uses. Preferred species of seahorses include H. kellogii, H. histrix, H. kuda, H. trimaculatus, and H. mohnikei. 
Seahorses are also consumed by Indonesians, central Filipinos, and many other ethnic groups. Import and export of seahorses has been controlled under CITES since 15 May 2004. However, Indonesia, Japan, Norway, and South Korea have chosen to opt out of the trade rules set by CITES. The problem may be exacerbated by the growth of pills and capsules as the preferred method of ingesting seahorses. Pills are cheaper and more available than traditional, individually tailored prescriptions of whole seahorses, but the contents are harder to track. Seahorses once had to be of a certain size and quality before they were accepted by TCM practitioners and consumers. Declining availability of the preferred large, pale, and smooth seahorses has been offset by the shift towards prepackaged preparations, which makes it possible for TCM merchants to sell previously unused, or otherwise undesirable juvenile, spiny, and dark-coloured animals. Dried seahorse retails from US$600 to $3000 per kilogram, with larger, paler, and smoother animals commanding the highest prices. In terms of value based on weight, seahorses retail for more than the price of silver and almost that of gold in Asia. Species On the basis of the newest overall taxonomic review of the genus Hippocampus with further new species and partial taxonomic review, the number of recognized species in this genus is considered to be 46 (retrieved May 2020): Hippocampus abdominalis Lesson, 1827 (big-belly seahorse) Hippocampus algiricus Kaup, 1856 (West African seahorse) Hippocampus angustus Günther, 1870 (narrow-bellied seahorse) Hippocampus barbouri Jordan & Richardson, 1908 (Barbour's seahorse) Hippocampus bargibanti Whitley, 1970 (pygmy seahorse) Hippocampus breviceps Peters, 1869 (short-headed seahorse) Hippocampus camelopardalis Bianconi, 1854 (giraffe seahorse) Hippocampus capensis Boulenger, 1900 (Knysna seahorse) Hippocampus casscsio Zhang, Qin, Wang & Lin, 2016 (Beibu Bay seahorse) Hippocampus colemani Kuiter, 2003 (Coleman's pygmy seahorse) Hippocampus comes Cantor, 1850 (tiger-tail seahorse) Hippocampus coronatus Temminck & Schlegel, 1850 (crowned seahorse) Hippocampus curvicuspis Fricke, 2004 (New Caledonian seahorse) Hippocampus dahli J. D. Ogilby, 1908 (lowcrown seahorse) Hippocampus debelius Gomon & Kuiter, 2009 (softcoral seahorse) Hippocampus denise Lourie & Randall, 2003 (Denise's pygmy seahorse) Hippocampus erectus Perry, 1810 (lined seahorse) Hippocampus fisheri Jordan & Evermann, 1903 (Fisher's seahorse) Hippocampus guttulatus Cuvier, 1829 (long-snouted seahorse) Hippocampus haema Han, Kim, Kai & Senou, 2017 (Korean seahorse) Hippocampus hippocampus (Linnaeus, 1758) (short-snouted seahorse) Hippocampus histrix Kaup, 1856 (spiny seahorse) Hippocampus ingens Girard, 1858 (Pacific seahorse) Hippocampus japapigu Short, R. Smith, Motomura, Harasti & H. Hamilton, 2018 (Japanese pygmy seahorse) Hippocampus jayakari Boulenger, 1900 (Jayakar's seahorse) Hippocampus jugumus Kuiter, 2001 (collared seahorse) Hippocampus kelloggi Jordan & Snyder, 1901 (great seahorse) Hippocampus kuda Bleeker, 1852 (spotted seahorse) Hippocampus minotaur Gomon, 1997 (bullneck seahorse) Hippocampus mohnikei Bleeker, 1854 (Japanese seahorse) Hippocampus nalu Short, Claassens, R. Smith, De Brauwer, H. 
Hamilton, Stat & Harasti, 2020 (South African pygmy seahorse or Sodwana pygmy seahorse) Hippocampus paradoxus Foster & Gomon, 2010 (paradoxical seahorse) Hippocampus patagonicus Piacentino & Luzzatto, 2004 (Patagonian seahorse) Hippocampus planifrons Peters, 1877 (flatface seahorse, false-eye seahorse) Hippocampus pontohi Lourie & Kuiter, 2008 (Pontoh's pygmy seahorse) Hippocampus pusillus Fricke, 2004 (pygmy thorny seahorse) Hippocampus reidi Ginsburg, 1933 (longsnout seahorse) Hippocampus satomiae Lourie & Kuiter, 2008 (Satomi's pygmy seahorse) Hippocampus sindonis Jordan & Snyder, 1901 (Sindo's seahorse) Hippocampus spinosissimus Weber, 1913 (hedgehog seahorse) Hippocampus subelongatus Castelnau, 1873 (West Australian seahorse) Hippocampus trimaculatus Leach, 1814 (longnose seahorse) Hippocampus tristis Castelnau, 1872 (Lazarus Seahorse) Hippocampus tyro Randall & Lourie, 2009 (Tyro seahorse) Hippocampus waleananus Gomon & Kuiter, 2009 (Walea soft coral pygmy seahorse) Hippocampus whitei Bleeker, 1855 (White's seahorse) Hippocampus zebra Whitley, 1964 (zebra seahorse) Hippocampus zosterae Jordan & Gilbert, 1882 (dwarf seahorse) Pygmy seahorses Pygmy seahorses are those members of the genus that are less than tall and wide. Previously the term was applied exclusively to the species H. bargibanti but since 1997, discoveries have made this usage obsolete. The species H. minotaur, H. denise, H. colemani, H. pontohi, H. severnsi, H. satomiae, H. waleananus, H. nalu, H. japapigu have been described. Other species that are believed to be unclassified have also been reported in books, dive magazines and on the Internet. They can be distinguished from other species of seahorse by their 12 trunk rings, low number of tail rings (26–29), the location in which young are brooded in the trunk region of males and their extremely small size. Molecular analysis (of ribosomal RNA) of 32 Hippocampus species found that H. bargibanti belongs in a separate clade from other members of the genus and therefore that the species diverged from the other species in the ancient past. Most pygmy seahorses are well camouflaged and live in close association with other organisms including colonial hydrozoans (Lytocarpus and Antennellopsis), coralline algae (Halimeda), and sea fans (Muricella, Annella, and Acanthogorgia). This combined with their small size accounts for why most species have only been noticed and classified since 2001.
https://en.wikipedia.org/wiki/Direct3D
Direct3D
Direct3D is a graphics application programming interface (API) for Microsoft Windows. Part of DirectX, Direct3D is used to render three-dimensional graphics in applications where performance is important, such as games. Direct3D uses hardware acceleration if available on the graphics card, allowing for hardware acceleration of the entire 3D rendering pipeline or even only partial acceleration. Direct3D exposes the advanced graphics capabilities of 3D graphics hardware, including Z-buffering, W-buffering, stencil buffering, spatial anti-aliasing, alpha blending, color blending, mipmapping, texture blending, clipping, culling, atmospheric effects, perspective-correct texture mapping, programmable HLSL shaders and effects. Integration with other DirectX technologies enables Direct3D to deliver such features as video mapping, hardware 3D rendering in 2D overlay planes, and even sprites, providing the use of 2D and 3D graphics in interactive media ties. Direct3D contains many commands for 3D computer graphics rendering; however, since version 8, Direct3D has superseded the DirectDraw framework and also taken responsibility for the rendering of 2D graphics. Microsoft strives to continually update Direct3D to support the latest technology available on 3D graphics cards. Direct3D offers full vertex software emulation but no pixel software emulation for features not available in hardware. For example, if software programmed using Direct3D requires pixel shaders and the video card on the user's computer does not support that feature, Direct3D will not emulate it, although it will compute and render the polygons and textures of the 3D models, albeit at a usually degraded quality and performance compared to the hardware equivalent. The API does include a Reference Rasterizer (or REF device), which emulates a generic graphics card in software, although it is too slow for most real-time 3D applications and is typically only used for debugging. A new real-time software rasterizer, WARP, designed to emulate the complete feature set of Direct3D 10.1, is included with Windows 7 and Windows Vista Service Pack 2 with the Platform Update; its performance is said to be on par with lower-end 3D cards on multi-core CPUs. As part of DirectX, Direct3D is available for Windows 95 and above, and is the base for the vector graphics API on the different versions of Xbox console systems. The Wine compatibility layer, a free software reimplementation of several Windows APIs, includes an implementation of Direct3D. Direct3D's main competitor is Khronos' OpenGL and its follow-on Vulkan. Fahrenheit was an attempt by Microsoft and SGI to unify OpenGL and Direct3D in the 1990s, but was eventually canceled. 
Overview Direct3D 6.0 – Multitexturing Direct3D 7.0 – Hardware Transformation, Clipping and Lighting (TCL/T&L), DXVA 1.0 Direct3D 8.0 – Pixel Shader 1.0/1.1 & Vertex Shader 1.0/1.1 Direct3D 8.1 – Pixel Shader 1.2/1.3/1.4 Direct3D 9.0 – Shader Model 2.0 (Pixel Shader 2.0 & Vertex Shader 2.0) Direct3D 9.0a – Shader Model 2.0a (Pixel Shader 2.0a & Vertex Shader 2.0a) Direct3D 9.0b – Pixel Shader 2.0b, H.264 Direct3D 9.0c – last version supported for Windows 98/ME (early releases) and for Windows 2000/XP (all releases); Shader Model 3.0 (Pixel Shader 3.0 & Vertex Shader 3.0) Direct3D 9.0L – Windows Vista only; Direct3D 9.0c, Shader Model 3.0, Windows Graphics Foundation 1.0, GPGPU Direct3D 10.0 – Windows Vista/Windows 7; Shader Model 4.0, Windows Graphics Foundation 2.0, DXVA 2.0 Direct3D 10.1 – Windows Vista SP1/Windows 7; Shader Model 4.1, Windows Graphics Foundation 2.1, DXVA 2.1 Direct3D 11.0 – Windows Vista SP2/Windows 7; Shader Model 5.0, Tessellation, Multithreaded rendering, Compute shaders, implemented by hardware and software running Direct3D 9/10/10.1 Direct3D 11.1 – Windows 8 (partially supported on Windows 7 SP1 also); Stereoscopic 3D Rendering, H.265 Direct3D 11.2 – Windows 8.1; Tiled resources Direct3D 11.3 – Windows 10 Direct3D 12.0 – Windows 10; low-level rendering API, Shader Model 5.1 and 6.0 Direct3D 12.1 – Windows 10; DirectX Raytracing Direct3D 12.2 – Windows 10; DirectX 12 Ultimate Direct3D 2.0 and 3.0 In 1992, Servan Keondjian, Doug Rabson and Kate Seekings started a company named RenderMorphics, which developed a 3D graphics API named Reality Lab, which was used in medical imaging and CAD software. Two versions of this API were released. Microsoft bought RenderMorphics in February 1995, bringing its staff on board to implement a 3D graphics engine for Windows 95. The first version of Direct3D shipped in DirectX 2.0 (June 2, 1996) and DirectX 3.0 (September 26, 1996). Direct3D initially implemented an "immediate mode" 3D API and layered upon it a "retained mode" 3D API. Both types of API were already offered with the second release of Reality Lab before Direct3D was released. Like other DirectX APIs, such as DirectDraw, both were based on COM. The retained mode API was a scene graph API that attained little adoption. Game developers clamored for more direct control of the hardware's activities than the Direct3D retained mode could provide. Only two games that sold a significant volume, Lego Island and Lego Rock Raiders, were based on the Direct3D retained mode, so Microsoft did not update the retained mode API after DirectX 3.0. For DirectX 2.0 and 3.0, the Direct3D immediate mode used an "execute buffer" programming model that Microsoft hoped hardware vendors would support directly. Execute buffers were intended to be allocated in hardware memory and parsed by the hardware to perform the 3D rendering. They were considered extremely awkward to program at the time, however, hindering adoption of the new API and prompting calls for Microsoft to adopt OpenGL as the official 3D rendering API for games as well as workstation applications. (see OpenGL vs. Direct3D) Rather than adopt OpenGL as a gaming API, Microsoft chose to continue improving Direct3D, not only to be competitive with OpenGL, but to compete more effectively with other proprietary APIs such as 3dfx's Glide. From the beginning, the immediate mode also supported Talisman's tiled rendering with the BeginScene/EndScene methods of the IDirect3DDevice interface. 
Direct3D 4.0 No substantive changes were planned to Direct3D for DirectX 4.0, which was scheduled to ship in late 1996 and then canceled. Direct3D 5.0 In December 1996, a team in Redmond took over development of the Direct3D Immediate Mode, while the London-based RenderMorphics team continued work on the Retained Mode. The Redmond team added the DrawPrimitive API that eliminated the need for applications to construct execute buffers, making Direct3D more closely resemble other immediate mode rendering APIs such as Glide and OpenGL. The first beta of DrawPrimitive shipped in February 1997, and the final version shipped with DirectX 5.0 in August 1997. Besides introducing an easier-to-use immediate mode API, DirectX 5.0 added the SetRenderTarget method that enabled Direct3D devices to write their graphical output to a variety of DirectDraw surfaces. Direct3D 6.0 DirectX 6.0 (released in August, 1998) introduced numerous features to cover contemporary hardware (such as multitexture and stencil buffers) as well as optimized geometry pipelines for x87, SSE and 3DNow! and optional texture management to simplify programming. Direct3D 6.0 also included support for features that had been licensed by Microsoft from specific hardware vendors for inclusion in the API, in exchange for the time-to-market advantage to the licensing vendor. S3 texture compression support was one such feature, renamed as DXTC for purposes of inclusion in the API. Another was TriTech's proprietary bump mapping technique. Microsoft included these features in DirectX, then added them to the requirements needed for drivers to get a Windows logo to encourage broad adoption of the features in other vendors' hardware. A minor update to DirectX 6.0 came in the February, 1999 DirectX 6.1 update. Besides adding DirectMusic support for the first time, this release improved support for Intel Pentium III 3D extensions. A confidential memo sent in 1997 shows Microsoft planning to announce full support for Talisman in DirectX 6.0, but the API ended up being canceled (See the Microsoft Talisman page for details). Direct3D 7.0 DirectX 7.0 (released in September, 1999) introduced the .dds texture format and support for transform and lighting hardware acceleration (first available on PC hardware with Nvidia's GeForce 256), as well as the ability to allocate vertex buffers in hardware memory. Hardware vertex buffers represent the first substantive improvement over OpenGL in DirectX history. Direct3D 7.0 also augmented DirectX support for multitexturing hardware, and represents the pinnacle of fixed-function multitexture pipeline features: although powerful, it was so complicated to program that a new programming model was needed to expose the shading capabilities of graphics hardware. Direct3D 7.0 also introduced DXVA features. Direct3D 8.0 DirectX 8.0 (released in November, 2000) introduced programmability in the form of vertex and pixel shaders, enabling developers to write code without worrying about superfluous hardware state. The complexity of the shader programs depended on the complexity of the task, and the display driver compiled those shaders to instructions that could be understood by the hardware. Direct3D 8.0 and its programmable shading capabilities were the first major departure from an OpenGL-style fixed-function architecture, where drawing is controlled by a complicated state machine. Direct3D 8.0 also eliminated DirectDraw as a separate API. 
Direct3D subsumed all remaining DirectDraw API calls still needed for application development, such as Present(), the function used to display rendering results. Direct3D was not considered to be user friendly, but as of DirectX version 8.1, many usability problems were resolved. Direct3D 8 contained many powerful 3D graphics features, such as vertex shaders, pixel shaders, fog, bump mapping and texture mapping. Direct3D 9 Direct3D 9.0 (released in December, 2002) added a new version of the High Level Shader Language support for floating-point texture formats, Multiple Render Targets (MRT), Multiple-Element Textures, texture lookups in the vertex shader and stencil buffer techniques. Direct3D 9Ex Direct3D 9Ex (previously versioned 9.0L ("L" standing for Longhorn, the codename for Windows Vista)), an extension only available in Windows Vista and beyond, allows the use of the advantages offered by Windows Vista's Windows Display Driver Model (WDDM) and is used for Windows Aero. Direct3D 9Ex, in conjunction with DirectX 9 class WDDM drivers allows graphics memory to be virtualized and paged out to system memory, allows graphics operations to be interrupted and scheduled and allow DirectX surfaces to be shared across processes. Direct3D 9Ex was previously known as version 1.0 of Windows Graphics Foundation (WGF). Direct3D 9Ex improvements - Win32 apps Direct3D 10 Windows Vista includes a major update to the Direct3D API. Originally called WGF 2.0 (Windows Graphics Foundation 2.0), then DirectX 10 and DirectX Next, Direct3D 10 features an updated shader model 4.0 and optional interruptibility for shader programs. In this model shaders still consist of fixed stages as in previous versions, but all stages support a nearly unified interface, as well as a unified access paradigm for resources such as textures and shader constants. The language itself has been extended to be more expressive, including integer operations, a greatly increased instruction count, and more C-like language constructs. In addition to the previously available vertex and pixel shader stages, the API includes a geometry shader stage that breaks the old model of one vertex in/one vertex out, to allow geometry to be generated from within a shader, thus allowing for complex geometry to be generated entirely by the graphics hardware. Windows XP and earlier are not supported by DirectX 10.0 and above. Furthermore, Direct3D 10 dropped support for the retained mode API which had been a part of Direct3D since the beginning, making Windows Vista incompatible with 3D games that had used the retained mode API as their rendering engine. Unlike prior versions of the API, Direct3D 10 no longer uses "capability bits" (or "caps") to indicate which features are supported on a given graphics device. Instead, it defines a minimum standard of hardware capabilities which must be supported for a display system to be "Direct3D 10 compatible". This is a significant departure, with the goal of streamlining application code by removing capability-checking code and special cases based on the presence or absence of specific capabilities. Because Direct3D 10 hardware was comparatively rare after the initial release of Windows Vista and because of the massive install base of non-Direct3D 10 compatible graphics cards, the first Direct3D 10-compatible games still provide Direct3D 9 render paths. 
Examples of such titles are games originally written for Direct3D 9 and ported to Direct3D 10 after their release, such as Company of Heroes, or games originally developed for Direct3D 9 with a Direct3D 10 path retrofitted later during their development, such as Hellgate: London or Crysis. The DirectX 10 SDK became available in February 2007. Direct3D 10.0 Direct3D 10.0 level hardware must support the following features: the ability to process entire primitives in the new geometry-shader stage, the ability to output pipeline-generated vertex data to memory using the stream-output stage, multisampled alpha-to-coverage support, readback of a depth/stencil surface or a multisampled resource once it is no longer bound as a render target, full HLSL integration – all Direct3D 10 shaders are written in HLSL and implemented with the common-shader core, integer and bitwise shader operations, organization of pipeline state into 5 immutable state objects, organization of shader constants into constant buffers, increased number of render targets, textures, and samplers, no shader length limit, new resource types and resource formats, layered runtime/API layers, option to perform per-primitive material swapping and setup using a geometry shader, increased generalization of resource access using a view, removed legacy hardware capability bits (caps). Fixed pipelines are being done away with in favor of fully programmable pipelines (often referred to as unified pipeline architecture), which can be programmed to emulate the same. New state object to enable (mostly) the CPU to change states efficiently. Unified shader model enhances the programmability of the graphics pipeline. It adds instructions for integer and bitwise calculations. The common shader core provides a full set of IEEE-compliant 32-bit integer and bitwise operations. These operations enable a new class of algorithms in graphics hardware—examples include compression and packing techniques, FFTs, and bitfield program-flow control. Geometry shaders, which work on adjacent triangles which form a mesh. Texture arrays enable swapping of textures in GPU without CPU intervention. Predicated rendering allows drawing calls to be ignored based on some other conditions. This enables rapid occlusion culling, which prevents objects from being rendered if it is not visible or too far to be visible. Instancing 2.0 support, allowing multiple instances of similar meshes, such as armies, or grass or trees, to be rendered in a single draw call, reducing the processing time needed for multiple similar objects to that of a single one. Direct3D 10.1 Direct3D 10.1 was announced by Microsoft shortly after the release of Direct3D 10 as a minor update. The specification was finalized with the release of November 2007 DirectX SDK and the runtime was shipped with the Windows Vista SP1, which is available since mid-March 2008. Direct3D 10.1 sets a few more image quality standards for graphics vendors, and gives developers more control over image quality. Features include finer control over anti-aliasing (both multisampling and supersampling with per sample shading and application control over sample position) and more flexibilities to some of the existing features (cubemap arrays and independent blending modes). 
Direct3D 10.1 level hardware must support the following features: Multisampling has been enhanced to generalize coverage based transparency and make multisampling work more effectively with multi-pass rendering, better culling behavior – Zero-area faces are automatically culled; this affects wireframe rendering only, independent blend modes per render target, new sample-frequency pixel shader execution with primitive rasterization, increased pipeline stage bandwidth, both color and depth/stencil MSAA surfaces can now be used with CopyResource as either a source or destination, MultisampleEnable only affects line rasterization (points and triangles are unaffected), and is used to choose a line drawing algorithm. This means that some multisample rasterization from Direct3D 10 are no longer supported, Texture Sampling – sample_c and sample_c_lz instructions are defined to work with both Texture2DArrays and TextureCubeArrays use the Location member (the alpha component) to specify an array index, support for TextureCubeArrays. Mandatory 32-bit floating point filtering. Floating Point Rules – Uses the same IEEE-754 rules for floating-point EXCEPT 32-bit floating point operations have been tightened to produce a result within 0.5 unit-last-place (0.5 ULP) of the infinitely precise result. This applies to addition, subtraction, and multiplication. (accuracy to 0.5 ULP for multiply, 1.0 ULP for reciprocal). Formats – The precision of float16 blending has increased to 0.5 ULP. Blending is also required for UNORM16/SNORM16/SNORM8 formats. Format Conversion while copying between certain 32/64/128 bit prestructured, typed resources and compressed representations of the same bit widths. Mandatory support for 4x MSAA for all render targets except R32G32B32A32 and R32G32B32. Shader model 4.1 Unlike Direct3D 10 which strictly required Direct3D 10-class hardware and driver interfaces, Direct3D 10.1 runtime can run on Direct3D 10.0 hardware using a concept of "feature levels", but new features are supported exclusively by new hardware which expose feature level 10_1. The only available Direct3D 10.1 hardware as of June 2008 were the Radeon HD 3000 series and Radeon HD 4000 series from ATI; in 2009, they were joined by Chrome 430/440GT GPUs from S3 Graphics and select lower-end models in GeForce 200 series from Nvidia. In 2011, Intel chipsets started supporting Direct3D 10.1 with the introduction of Intel HD Graphics 2000 (GMA HD). Direct3D 11 Direct3D 11 was released as part of Windows 7. It was presented at Gamefest 2008 on July 22, 2008 and demonstrated at the Nvision 08 technical conference on August 26, 2008. The Direct3D 11 Technical Preview has been included in November 2008 release of DirectX SDK. AMD previewed working DirectX11 hardware at Computex on June 3, 2009, running some DirectX 11 SDK samples. The Direct3D 11 runtime is able to run on Direct3D 9 and 10.x-class hardware and drivers using the concept of "feature levels", expanding on the functionality first introduced in Direct3D 10.1 runtime. Feature levels allow developers to unify the rendering pipeline under Direct3D 11 API and make use of API improvements such as better resource management and multithreading even on entry-level cards, though advanced features such as new shader models and rendering stages will only be exposed on up-level hardware. 
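A minimal sketch of how an application typically uses feature levels with the Direct3D 11 runtime: pass an ordered array of acceptable levels to D3D11CreateDevice, and fall back to the WARP software rasterizer if no suitable hardware device is available. Error handling and device release are abbreviated.

#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Create a Direct3D 11 device, letting the runtime pick the highest
// feature level in the array that the driver and GPU expose; fall back
// to the WARP software rasterizer if hardware creation fails.
HRESULT CreateDeviceWithFallback(ID3D11Device** device,
                                 ID3D11DeviceContext** context,
                                 D3D_FEATURE_LEVEL* chosenLevel)
{
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,  D3D_FEATURE_LEVEL_9_2,  D3D_FEATURE_LEVEL_9_1 };
    const UINT numLevels = sizeof(levels) / sizeof(levels[0]);

    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   levels, numLevels, D3D11_SDK_VERSION,
                                   device, chosenLevel, context);
    if (FAILED(hr)) {
        // No usable hardware device: fall back to WARP, the software
        // rasterizer mentioned earlier in this article.
        hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                               levels, numLevels, D3D11_SDK_VERSION,
                               device, chosenLevel, context);
    }
    return hr;
}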
There are three "10 Level 9" profiles which encapsulate various capabilities of popular DirectX 9.0a cards, and Direct3D 10, 10.1, and 11 each have a separate feature level; each upper level is a strict superset of a lower level. Tessellation was earlier considered for Direct3D 10, but was later abandoned. GPUs such as Radeon R600 feature a tessellation engine that can be used with Direct3D 9/10/10.1 and OpenGL, but it's not compatible with Direct3D 11 (according to Microsoft). Older graphics hardware such as Radeon 8xxx, GeForce 3/4 had support for another form of tesselation (RT patches, N patches) but those technologies never saw substantial use. As such, their support was dropped from newer hardware. Microsoft has also hinted at other features such as order independent transparency, which was never exposed by the Direct3D API but supported almost transparently by early Direct3D hardware such as Videologic's PowerVR line of chips. Direct3D 11.0 Direct3D 11.0 features include: Support for Shader Model 5.0, Dynamic shader linking, addressable resources, additional resource types, subroutines, geometry instancing, coverage as pixel shader input, programmable interpolation of inputs, new texture compression formats (1 new LDR format and 1 new HDR format), texture clamps to limit WDDM preload, require 8-bits of subtexel and sub-mip precision on texture filtering, 16K texture limits, Gather4(support for multi-component textures, support for programmable offsets), DrawIndirect, conservative oDepth, Depth Bias, addressable stream output, per-resource mipmap clamping, floating-point viewports, shader conversion instructions, improved multithreading. Shader Model 5 Support for Tessellation and Tessellation Shaders to increase at runtime the number of visible polygons from a low detail polygonal model Multithreaded rendering — to render to the same Direct3D device object from different threads for multi core CPUs Compute shaders — which exposes the shader pipeline for non-graphical tasks such as stream processing and physics acceleration, similar in spirit to what OpenCL, Nvidia CUDA, ATI Stream, and HLSL Shader Model 5 achieve among others. Mandatory support for 4x MSAA for all render targets and 8x MSAA for all render target formats except R32G32B32A32 formats. Other notable features are the addition of two new texture compression algorithms for more efficient packing of high quality and HDR/alpha textures and an increased texture cache. First seen in the Release Candidate version, Windows 7 integrates the first released Direct3D 11 support. The Platform Update for Windows Vista includes full-featured Direct3D 11 runtime and DXGI 1.1 update, as well as other related components from Windows 7 like WARP, Direct2D, DirectWrite, and WIC. Direct3D 11.1 Direct3D 11.1 is an update to the API that ships with Windows 8. The Direct3D runtime in Windows 8 features DXGI 1.2 and requires new WDDM 1.2 device drivers. Preliminary version of the Windows SDK for Windows 8 Developer Preview was released on September 13, 2011. 
The new API features shader tracing and HLSL compiler enhancements, support for minimum precision HLSL scalar data types, UAVs (Unordered Access Views) at every pipeline stage, target-independent rasterization (TIR), option to map SRVs of dynamic buffers with NO_OVERWRITE, shader processing of video resources, option to use logical operations in a render target, option to bind a subrange of a constant buffer to a shader and retrieve it, option to create larger constant buffers than a shader can access, option to discard resources and resource views, option to change subresources with new copy options, option to force the sample count to create a rasterizer state, option to clear all or part of a resource view, option to use Direct3D in Session 0 processes, option to specify user clip planes in HLSL on feature level 9 and higher, support for shadow buffer on feature level 9, support for video playback, extended support for shared Texture2D resources, and on-the-fly swapping between Direct3D 10 and 11 contexts and feature levels. Direct3D 11.1 includes new feature level 11_1, which brings minor updates to the shader language, such as larger constant buffers and optional double-precision instructions, as well as improved blending modes and mandatory support for 16-bit color formats to improve the performance of entry-level GPUs such as Intel HD Graphics. WARP has been updated to support feature level 11_1. The Platform Update for Windows 7 includes a limited set of features from Direct3D 11.1, though components that depend on WDDM 1.2 – such as feature level 11_1 and its related APIs, or quad buffering for stereoscopic rendering – are not present. Direct3D 11.2 Direct3D 11.2 was shipped with Windows 8.1. New hardware features require DXGI 1.3 with WDDM 1.3 drivers and include runtime shader modification and linking, function linking graph (FLG), inbox HLSL compiler, and the option to annotate graphics commands. Feature levels 11_0 and 11_1 introduce optional support for tiled resources with shader level of detail clamp (Tier 2). The latter feature effectively provides control over the hardware page tables present in many current GPUs. WARP was updated to fully support the new features. There is no feature level 11_2, however; the new features are dispersed across existing feature levels. Those that are hardware-dependent can be checked individually via CheckFeatureSupport. Some of the "new" features in Direct3D 11.2 actually expose some old hardware features in a more granular way; for example, D3D11_FEATURE_D3D9_SIMPLE_INSTANCING_SUPPORT exposes partial support for instancing on feature level 9_1 and 9_2 hardware, otherwise fully supported from feature level 9_3 onward. Direct3D 11.X Direct3D 11.X is a superset of DirectX 11.2 running on the Xbox One. It includes some features, such as draw bundles, that were later announced as part of DirectX 12. Direct3D 11.3 Direct3D 11.3 shipped in July 2015 with Windows 10; it includes minor rendering features from Direct3D 12, while keeping the overall structure of the Direct3D 11.x API. Direct3D 11.3 introduces optional Shader Specified Stencil Reference Value, Typed Unordered Access View Loads, Rasterizer Ordered Views (ROVs), optional Standard Swizzle, optional Default Texture Mapping, Conservative Rasterization (out of three tiers), optional Unified Memory Access (UMA) support, and additional Tiled Resources (tier 2) (volume tiled resources). 
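Optional, hardware-dependent capabilities such as these are queried at run time through ID3D11Device::CheckFeatureSupport. The following minimal C++ sketch (an illustration under assumed conditions, not taken from any SDK sample) checks the tiled-resources tier introduced with Direct3D 11.2; the ID3D11Device pointer is assumed to have been created already, and error handling is reduced to the return value:

    #include <d3d11_2.h>

    // Returns true if the device reports any tiled-resources tier (a Direct3D 11.2 feature).
    bool SupportsTiledResources(ID3D11Device* device)
    {
        D3D11_FEATURE_DATA_D3D11_OPTIONS1 options1 = {};
        HRESULT hr = device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                                 &options1, sizeof(options1));
        // TiledResourcesTier reports NOT_SUPPORTED, TIER_1 or TIER_2 depending on the hardware.
        return SUCCEEDED(hr) &&
               options1.TiledResourcesTier != D3D11_TILED_RESOURCES_NOT_SUPPORTED;
    }

The same call pattern works for any of the optional capabilities listed above; only the D3D11_FEATURE enumerant and the corresponding data structure change.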
Direct3D 11.4 Direct3D 11.4 version 1511 – Initial Direct3D 11.4 was introduced with the Windows 10 Threshold 2 update (version 1511), improving support for external graphics adapters, and DXGI 1.5. Direct3D 11.4 version 1607 – Updated Direct3D 11.4 with the Windows 10 Anniversary Update (version 1607) includes support for WDDM 2.1, the UHDTV HDR10 format (ST 2084), and variable refresh rates for UWP applications. Direct3D 12 Direct3D 12 allows a lower level of hardware abstraction than earlier versions, enabling future applications to significantly improve multithreaded scaling and decrease CPU utilization. This is achieved by better matching the Direct3D abstraction layer with the underlying hardware, through new features such as Indirect Drawing, descriptor tables, concise pipeline state objects, and draw call bundles. Reducing driver overhead is the main attraction of Direct3D 12, similarly to AMD's Mantle. In the words of its lead developer Max McMullen, the main goal of Direct3D 12 is to achieve "console-level efficiency" and improved CPU parallelism. Although Nvidia has announced broad support for Direct3D 12, they were also somewhat reserved about the universal appeal of the new API, noting that while game engine developers may be enthusiastic about directly managing GPU resources from their application code, "a lot of [other] folks wouldn't" be happy to have to do that. Some new hardware features are also in Direct3D 12, including Shader Model 5.1, Volume Tiled Resources (Tier 2), Shader Specified Stencil Reference Value, Typed UAV Load, Conservative Rasterization (Tier 1), better collision and culling with Conservative Rasterization, Rasterizer Ordered Views (ROVs), Standard Swizzles, Default Texture Mapping, Swap Chains, swizzled resources and compressed resources, additional blend modes, programmable blend and efficient order-independent transparency (OIT) with pixel ordered UAV. Pipeline state objects (PSOs) have evolved from Direct3D 11, and the new concise pipeline states mean that the process has been simplified. DirectX 11 offered flexibility in how its states could be altered, to the detriment of performance. Simplifying the process and unifying the pipeline states (e.g. pixel shader states) leads to a more streamlined workflow, significantly reducing overhead and allowing the graphics card to process more draw calls per frame. Once created, the PSO is immutable. Root signatures introduce configurations to link command lists to the resources required by shaders. They define the layout of resources that shaders will use and specify what resources will be bound to the pipeline. A graphics command list has both a graphics and a compute root signature, while a compute command list has only a compute root signature. These root signatures are completely independent of each other. While the root signature lays out the types of data for shaders to use, it does not define or map the actual memory or data. Root parameters are one type of entry in a root signature. The actual values of the root parameters that are modified at runtime are called root arguments. This is the data that the shaders read. Within Direct3D 11, the commands are sent from the CPU to the GPU one by one, and the GPU works through these commands sequentially. This means that commands are bottlenecked by the speed at which the CPU can send them in a linear fashion. Within DirectX 12 these commands are sent as command lists, containing all the required information within a single package. 
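As a rough illustration of that flow (a minimal sketch under assumed conditions, not production code, with the device, command queue and pipeline state object presumed to exist and all error handling and synchronization omitted), an application records its commands into an ID3D12GraphicsCommandList, closes it, and hands the whole package to the command queue in a single call:

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Record a command list on the CPU and submit it to the GPU as one package.
    void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                         ID3D12PipelineState* pso)
    {
        ComPtr<ID3D12CommandAllocator> allocator;
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocator));

        ComPtr<ID3D12GraphicsCommandList> list;
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocator.Get(), pso, IID_PPV_ARGS(&list));

        // ... set the root signature, descriptor heaps, barriers and draw calls here ...

        list->Close();                            // the recorded list is now a closed package
        ID3D12CommandList* lists[] = { list.Get() };
        queue->ExecuteCommandLists(1, lists);     // one submission carries all recorded commands
    }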
The GPU is then able to process and execute the whole list in a single pass, without having to wait for any additional information from the CPU. Within these command lists are bundles. Where previously commands were just taken, used, and then forgotten by the GPU, bundles can be reused. This decreases the workload of the GPU and means repeated assets can be used much faster. While resource binding in Direct3D 11 is fairly convenient for developers, its inefficiency means several modern hardware capabilities are drastically underused. When a game engine needed resources in DX11, it had to set up the data from scratch every time, meaning repeated work and unnecessary overhead. In Direct3D 12, descriptor heaps and tables mean the most often used resources can be allocated by developers in tables, which the GPU can quickly and easily access. This can contribute to better performance than Direct3D 11 on equivalent hardware, but it also entails more work for the developer. Dynamic Heaps are also a feature of Direct3D 12. Direct3D 12 features explicit multi-adapter support, allowing explicit control of systems with multiple GPUs. Such configurations can be built with graphics adapters from the same hardware vendor as well as from different hardware vendors. Experimental support for D3D 12 on Windows 7 SP1 was released by Microsoft in 2019 via a dedicated NuGet package. Direct3D 12 version 1607 – With the Windows 10 Anniversary Update (version 1607), released on August 2, 2016, the Direct3D 12 runtime has been updated to support constructs for explicit multithreading and inter-process communication, allowing developers to take advantage of modern massively parallel GPUs. Other features include updated root signatures version 1.1, as well as support for the HDR10 format and variable refresh rates. Direct3D 12 version 1703 – With the Windows 10 Creators Update (version 1703), released on April 11, 2017, the Direct3D 12 runtime has been updated to support Shader Model 6.0 and DXIL. Shader Model 6.0 requires the Windows 10 Anniversary Update (version 1607) and WDDM 2.1. New graphical features are Depth Bounds Testing and Programmable MSAA. Direct3D 12 version 1709 – Direct3D in the Windows 10 Fall Creators Update (version 1709), released on October 17, 2017, includes improved debugging. Direct3D 12 version 1809 – Windows 10 October 2018 Update (version 1809) brings support for DirectX Raytracing so GPUs can benefit from its API. Direct3D 12 version 1903 – Windows 10 May 2019 Update (version 1903) brings support for DirectML and NPUs. DirectML can support both compute shaders and tensor shaders. Direct3D 12 version 2004 – Windows 10 May 2020 Update (version 2004) brings support for DirectX 12 Ultimate, Mesh & Amplification Shaders, Sampler Feedback, as well as DirectX Raytracing Tier 1.1 and memory allocation improvements. Direct3D 12 version 21H2 – Windows 10 version 21H2 and Windows 11 version 21H2 bring support for DirectStorage. Architecture Direct3D is a Microsoft DirectX API subsystem component. The aim of Direct3D is to abstract the communication between a graphics application and the graphics hardware drivers. It is presented as a thin abstraction layer at a level comparable to GDI (see attached diagram). Direct3D contains numerous features that GDI lacks. Direct3D is an immediate mode graphics API. It provides a low-level interface to every video card 3D function (transformations, clipping, lighting, materials, textures, depth buffering and so on). 
It once had a higher-level Retained mode component, now officially discontinued. Direct3D immediate mode presents three main abstractions: devices, resources and swap chains (see attached diagram). Devices are responsible for rendering the 3D scene. They provide an interface with different rendering capabilities. For example, the mono device provides black-and-white rendering, while the RGB device renders in color. There are four types of devices: HAL (hardware abstraction layer) device: For devices supporting hardware acceleration. Reference device: Simulates new functions not yet available in hardware. It is necessary to install the Direct3D SDK to use this device type. Null reference device: Does nothing. This device is used when the SDK is not installed and a reference device is requested. Pluggable software device: Performs software rendering. This device was introduced with DirectX 9.0. Every device contains at least one swap chain. A swap chain is made up of one or more back buffer surfaces. Rendering occurs in the back buffer. Moreover, devices contain a collection of resources: specific data used during rendering. Each resource has four attributes: Type: Determines the type of resource: surface, volume, texture, cube texture, volume texture, surface texture, index buffer or vertex buffer. Pool: Describes how the resource is managed by the runtime and where it is stored. In the Default pool the resource will exist only in device memory. Resources in the managed pool will be stored in system memory, and will be sent to the device when required. Resources in the system memory pool will only exist in system memory. Finally, the scratch pool is basically the same as the system memory pool, but resources are not bound by hardware restrictions. Format: Describes the layout of the resource data in memory. For example, the D3DFMT_R8G8B8 format value means a 24-bit color depth (8 bits for red, 8 bits for green and 8 bits for blue). Usage: Describes, with a collection of flag bits, how the resource will be used by the application. These flags dictate whether resources are used in dynamic or static access patterns. Static resource values don't change after being loaded, whereas dynamic resource values may be modified. Direct3D implements two display modes: Fullscreen mode: The Direct3D application generates all of the graphical output for a display device. In this mode Direct3D automatically captures Alt-Tab and sets/restores screen resolution and pixel format without programmer intervention. This also creates plenty of problems for debugging due to the 'Exclusive Cooperative Mode'. Windowed mode: The result is shown inside the area of a window. Direct3D communicates with GDI to generate the graphical output in the display. Windowed mode can have the same level of performance as full-screen, depending on driver support. Pipeline The Microsoft Direct3D 11 API defines a process to convert a group of vertices, textures, buffers, and state into an image on the screen. This process is described as a rendering pipeline with several distinct stages. The different stages of the Direct3D 11 pipeline are: Input-Assembler: Reads in vertex data from an application-supplied vertex buffer and feeds it down the pipeline. Vertex Shader: Performs operations on a single vertex at a time, such as transformations, skinning, or lighting. Hull-Shader: Performs operations on sets of patch control points, and generates additional data known as patch constants. 
Tessellator: Subdivides geometry to create higher-order representations of the hull. Domain-Shader: Performs operations on vertices output by the tessellation stage, in much the same way as a vertex shader. Geometry Shader: Processes entire primitives such as triangles, points, or lines. Given a primitive, this stage discards it, or generates one or more new primitives. Stream-Output: Can write out the previous stage's results to memory. This is useful to recirculate data back into the pipeline. Rasterizer: Converts primitives into pixels, feeding these pixels into the pixel shader. The Rasterizer may also perform other tasks such as clipping what is not visible, or interpolating vertex data into per-pixel data. Pixel Shader: Determines the final pixel color to be written to the render target and can also calculate a depth value to be written to the depth buffer. Output-Merger: Merges various types of output data (pixel shader values, alpha blending, depth/stencil...) to build the final result. The pipeline stages illustrated with a round box are fully programmable. The application provides a shader program that describes the exact operations to be completed for that stage. Many stages are optional and can be disabled altogether. Feature levels In Direct3D 5 to 9, when new versions of the API introduced support for new hardware capabilities, most of them were optional – each graphics vendor maintained their own set of supported features in addition to the basic required functionality. Support for individual features had to be determined using "capability bits" or "caps", making cross-vendor graphics programming a complex task. Direct3D 10 introduced a much simplified set of mandatory hardware requirements based on most popular Direct3D 9 capabilities which all supporting graphics cards had to adhere to, with only a few optional capabilities for supported texture formats and operations. Direct3D 10.1 added a few new mandatory hardware requirements, and to remain compatible with 10.0 hardware and drivers, these features were encapsulated in two sets called "feature levels", with 10.1 level forming a superset of 10.0 level. As Direct3D 11.0, 11.1 and 12 added support for new hardware, new mandatory capabilities were further grouped in upper feature levels. Direct3D 11 also introduced "10level9", a subset of the Direct3D 10 API with three feature levels encapsulating various Direct3D 9 cards with WDDM drivers, and Direct3D 11.1 re-introduced a few optional features for all levels, which were expanded in Direct3D 11.2 and later versions. This approach allows developers to unify the rendering pipeline and use a single version of the API on both newer and older hardware, taking advantage of performance and usability improvements in the newer runtime. New feature levels are introduced with updated versions of the API and typically encapsulate: major mandatory features – (Direct3D 11.0, 12), a few minor features (Direct3D 10.1, 11.1), or a common set of previously optional features (Direct3D 11.0 "10 level 9"). Each upper level is a strict superset of a lower level, with only a few new or previously optional features that move to the core functionality on an upper level. More advanced features in a major revision of the Direct3D API such as new shader models and rendering stages are only exposed on up-level hardware. 
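To make the feature-level mechanism concrete, the following is a brief hypothetical C++ sketch of device creation against the Direct3D 11 API (illustrative only; names beyond the documented D3D11CreateDevice call are placeholders): the caller passes the feature levels it is prepared to work with, in descending order of preference, and the runtime returns the highest level that the installed hardware and driver support.

    #include <d3d11.h>

    // Create a device on the default adapter, accepting anything from 11_1 down to 9_1.
    bool CreateDeviceWithBestFeatureLevel(ID3D11Device** device,
                                          ID3D11DeviceContext** context,
                                          D3D_FEATURE_LEVEL* chosenLevel)
    {
        const D3D_FEATURE_LEVEL requested[] = {
            D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
            D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
            D3D_FEATURE_LEVEL_9_3,  D3D_FEATURE_LEVEL_9_2, D3D_FEATURE_LEVEL_9_1
        };

        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
            device, chosenLevel, context);

        // *chosenLevel now names the "10 level 9", 10_x or 11_x level actually granted.
        return SUCCEEDED(hr);
    }

One practical caveat: on a machine with only the Direct3D 11.0 runtime, including level 11_1 in the requested array is rejected outright, so real applications typically retry without it; that detail is omitted here for brevity.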
Separate capabilities exist to indicate support for specific texture operations and resource formats; these are specified per texture format using a combination of capability flags. Feature levels use an underscore as a delimiter (i.e. "12_1"), while API/runtime versions use a dot (i.e. "Direct3D 11.4"). Direct3D 11 levels In Direct3D 11.4 for Windows 10, there are nine feature levels, provided by the D3D_FEATURE_LEVEL enumeration; levels 9_1, 9_2 and 9_3 (collectively known as Direct3D 10 Level 9) re-encapsulate various features of popular Direct3D 9 cards, levels 10_0 and 10_1 refer to the respective legacy versions of Direct3D 10, levels 11_0 and 11_1 reflect the features introduced with the Direct3D 11 and Direct3D 11.1 APIs and runtimes, while levels 12_0 and 12_1 correspond to the new feature levels introduced with the Direct3D 12 API. Direct3D 12 levels Direct3D 12 for Windows 10 requires graphics hardware conforming to feature levels 11_0 and 11_1, which support virtual memory address translations, and requires WDDM 2.0 drivers. There are two new feature levels, 12_0 and 12_1, which include some new features exposed by Direct3D 12 that are optional on levels 11_0 and 11_1. Some previously optional features are realigned as baseline on levels 11_0 and 11_1. Shader Model 6.0 has been released with the Windows 10 Creators Update and requires the Windows 10 Anniversary Update and WDDM 2.1 drivers. Direct3D 12 introduces a revamped resource binding model which allows explicit control of memory. Abstract "resource view" objects are now represented with resource descriptors, which are allocated using memory heaps and tables. Resource Binding tiers define the maximum number of resources that can be addressed using CBV (constant buffer view), SRV (shader resource view) and UAV (unordered access view), as well as texture sampler units. Tier 3 hardware allows fully bindless resources restricted only by the size of the descriptor heap, while Tier 1 and Tier 2 hardware impose some limits on the number of descriptors ("views") that can be used simultaneously. Multithreading The WDDM driver model in Windows Vista and higher supports an arbitrarily large number of execution contexts (or threads) in hardware or in software. Windows XP only supported multitasked access to Direct3D, where separate applications could execute in different windows and be hardware accelerated, and the OS had limited control over what the GPU could do and the driver could switch execution threads arbitrarily. The ability to execute the runtime in a multi-threaded mode was introduced with the Direct3D 11 runtime. Each execution context is presented with a resource view of the GPU. Execution contexts are protected from each other; however, a rogue or badly written app can take control of the execution in the user-mode driver and could potentially access data from another process within GPU memory by sending modified commands. Though protected from access by another app, a well-written app still needs to protect itself against failures and device loss caused by other applications. The OS manages the threads all by itself, allowing the hardware to switch from one thread to the other when appropriate, and also handles memory management and paging (to system memory and to disk) via integrated OS-kernel memory management. Finer-grained context switching, i.e. being able to switch two execution threads at the shader-instruction level instead of the single-command level or even batch of commands, was introduced in WDDM/DXGI 1.2, which shipped with Windows 8. 
This overcomes a potential scheduling problem that occurs when an application has a very long-running command or batch of commands that would otherwise have to be terminated by the OS watchdog timer. WDDM 2.0 and DirectX 12 have been reengineered to allow fully multithreaded draw calls. This was achieved by making all resources immutable (i.e. read-only), serializing the rendering states and using draw call bundles. This avoids complex resource management in the kernel-mode driver, making multiple reentrant calls to the user-mode driver possible via concurrent execution contexts supplied by separate rendering threads in the same application. Direct3D Mobile Direct3D Mobile is derived from Direct3D but has a smaller memory footprint. Windows CE provides Direct3D Mobile support. Alternative implementations The following alternative implementations of the Direct3D API exist. They are useful for non-Windows platforms and for hardware without some versions of DX support: WineD3D – The Wine open source project has working implementations of the Direct3D APIs via translation to OpenGL. Wine's implementation can also be run on Windows under certain conditions. vkd3d – vkd3d is an open source 3D graphics library built on top of Vulkan which allows Direct3D 12 applications to run on top of Vulkan. It is primarily used by the Wine project, and is now included with Valve's Proton project bundled with Steam on Linux. DXVK – An open source Vulkan-based translation layer for Direct3D 9/10/11 which allows running 3D applications on Linux using Wine. It is used by Proton/Steam for Linux. DXVK is able to run a large number of modern Windows games under Linux. D9VK – An obsolete fork of DXVK for adding Direct3D 9 support, included with Steam/Proton on Linux. On December 16, 2019 D9VK was merged into DXVK. D8VK – An obsolete fork of DXVK for adding Direct3D 8 support on Linux. It was merged with DXVK version 2.4, which was released on July 10, 2024. Gallium Nine – Gallium Nine makes it possible to run Direct3D 9 applications on Linux natively, i.e. without any call translation, which allows for near-native speed. It depends on Wine and Mesa. Related tools D3DX Direct3D comes with D3DX, a library of tools designed to perform common mathematical calculations on vectors, matrices and colors, calculating look-at and projection matrices, spline interpolations, and several more complicated tasks, such as compiling or assembling shaders used for 3D graphic programming, compressed skeletal animation storage and matrix stacks. There are several functions that provide complex operations over 3D meshes like tangent-space computation, mesh simplification, precomputed radiance transfer, optimizing for vertex cache friendliness and stripification, and generators for 3D text meshes. 2D features include classes for drawing screen-space lines, text and sprite-based particle systems. Spatial functions include various intersection routines, conversion from/to barycentric coordinates and bounding box/sphere generators. D3DX is provided as a dynamic link library (DLL). D3DX is deprecated from Windows 8 onward and can't be used in Windows Store apps. Some features present in previous versions of D3DX were removed in Direct3D 11 and are now provided as separate sources, distributed through the Windows SDK and Visual Studio: A large part of the math library has been removed; Microsoft recommends use of the DirectX Math library instead. Spherical harmonics math has been removed and is now distributed as source. 
The Effect framework has been removed and is now distributed as source via CodePlex. The Mesh interface and geometry functions have been removed and are now distributed as source via CodePlex under the DirectXMesh geometry processing library. Texture functions have been removed and are now distributed as source via CodePlex under the DirectXTex texture processing library. General helpers have been removed and are now distributed as source via CodePlex under the DirectX Tool Kit (DirectXTK) project. The isochart texture atlas has been removed and is now distributed as source via CodePlex under the UVAtlas project. DXUT DXUT (also called the sample framework) is a layer built on top of the Direct3D API. The framework is designed to help the programmer spend less time with mundane tasks, such as creating a window, creating a device, processing Windows messages and handling device events. DXUT has been removed with the Windows SDK 8.0 and is now distributed as source via CodePlex.
Technology
Software development: General
null
97026
https://en.wikipedia.org/wiki/Breadth-first%20search
Breadth-first search
Breadth-first search (BFS) is an algorithm for searching a tree data structure for a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. Extra memory, usually a queue, is needed to keep track of the child nodes that were encountered but not yet explored. For example, in a chess endgame, a chess engine may build the game tree from the current position by applying all possible moves and use breadth-first search to find a win position for White. Implicit trees (such as game trees or other problem-solving trees) may be of infinite size; breadth-first search is guaranteed to find a solution node if one exists. In contrast, (plain) depth-first search (DFS), which explores the node branch as far as possible before backtracking and expanding other nodes, may get lost in an infinite branch and never make it to the solution node. Iterative deepening depth-first search avoids the latter drawback at the price of exploring the tree's top parts over and over again. On the other hand, both depth-first algorithms typically require far less extra memory than breadth-first search. Breadth-first search can be generalized to both undirected graphs and directed graphs with a given start node (sometimes referred to as a 'search key'). In state space search in artificial intelligence, repeated searches of vertices are often allowed, while in theoretical analysis of algorithms based on breadth-first search, precautions are typically taken to prevent repetitions. BFS and its application in finding connected components of graphs were invented in 1945 by Konrad Zuse, in his (rejected) Ph.D. thesis on the Plankalkül programming language, but this was not published until 1972. It was reinvented in 1959 by Edward F. Moore, who used it to find the shortest path out of a maze, and later developed by C. Y. Lee into a wire routing algorithm (published in 1961). Pseudocode Input: A graph G and a starting vertex root of G. Output: Goal state. The parent links trace the shortest path back to root.

    procedure BFS(G, root) is
        let Q be a queue
        label root as explored
        Q.enqueue(root)
        while Q is not empty do
            v := Q.dequeue()
            if v is the goal then
                return v
            for all edges from v to w in G.adjacentEdges(v) do
                if w is not labeled as explored then
                    label w as explored
                    w.parent := v
                    Q.enqueue(w)

More details This non-recursive implementation is similar to the non-recursive implementation of depth-first search, but differs from it in two ways: it uses a queue (First In First Out) instead of a stack (Last In First Out) and it checks whether a vertex has been explored before enqueueing the vertex rather than delaying this check until the vertex is dequeued from the queue. If G is a tree, replacing the queue of this breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one. The Q queue contains the frontier along which the algorithm is currently searching. Nodes can be labelled as explored by storing them in a set, or by an attribute on each node, depending on the implementation. Note that the word node is usually interchangeable with the word vertex. 
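For concreteness, the following is a minimal, illustrative C++ version of the procedure above (a sketch, not drawn from the cited sources). It assumes the graph is stored as adjacency lists indexed 0..n-1 and records each vertex's parent, so that a shortest path (in number of edges) can be read back afterwards by walking from the goal vertex to the root.

    #include <queue>
    #include <vector>

    // Breadth-first search from 'root' over an adjacency-list graph.
    // parent[root] == root; vertices never reached keep parent == -1.
    std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int root) {
        std::vector<int> parent(adj.size(), -1);
        std::queue<int> q;
        parent[root] = root;            // mark the root as explored
        q.push(root);
        while (!q.empty()) {
            int v = q.front();
            q.pop();
            for (int w : adj[v]) {
                if (parent[w] == -1) {  // w has not been explored yet
                    parent[w] = v;      // remember how w was reached
                    q.push(w);
                }
            }
        }
        return parent;
    }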
The parent attribute of each node is useful for accessing the nodes in a shortest path, for example by backtracking from the destination node up to the starting node, once the BFS has been run and the predecessor nodes have been set. Breadth-first search produces a so-called breadth-first tree, as illustrated in the following example. Example The following is an example of the breadth-first tree obtained by running a BFS on German cities starting from Frankfurt: Analysis Time and space complexity The time complexity can be expressed as O(|V| + |E|), since every vertex and every edge will be explored in the worst case. |V| is the number of vertices and |E| is the number of edges in the graph. Note that |E| may vary between O(1) and O(|V|^2), depending on how sparse the input graph is. When the number of vertices in the graph is known ahead of time, and additional data structures are used to determine which vertices have already been added to the queue, the space complexity can be expressed as O(|V|), where |V| is the number of vertices. This is in addition to the space required for the graph itself, which may vary depending on the graph representation used by an implementation of the algorithm. When working with graphs that are too large to store explicitly (or infinite), it is more practical to describe the complexity of breadth-first search in different terms: to find the nodes that are at distance d from the start node (measured in number of edge traversals), BFS takes O(b^d) time and memory, where b is the "branching factor" of the graph (the average out-degree). Completeness In the analysis of algorithms, the input to breadth-first search is assumed to be a finite graph, represented as an adjacency list, adjacency matrix, or similar representation. However, in the application of graph traversal methods in artificial intelligence the input may be an implicit representation of an infinite graph. In this context, a search method is described as being complete if it is guaranteed to find a goal state if one exists. Breadth-first search is complete, but depth-first search is not. When applied to infinite graphs represented implicitly, breadth-first search will eventually find the goal state, but depth-first search may get lost in parts of the graph that have no goal state and never return. BFS ordering An enumeration of the vertices of a graph is said to be a BFS ordering if it is a possible output of the application of BFS to this graph. Let G = (V, E) be a graph with n vertices. Recall that N(v) is the set of neighbors of v. Let σ = (v_1, …, v_m) be a list of distinct elements of V; for v in V \ {v_1, …, v_m}, let ν_σ(v) be the least i such that v_i is a neighbor of v, if such an i exists, and be ∞ otherwise. Let σ = (v_1, …, v_n) be an enumeration of the vertices of V. The enumeration σ is said to be a BFS ordering (with source v_1) if, for all 1 < i ≤ n, v_i is the vertex w in V \ {v_1, …, v_{i−1}} such that ν_{(v_1, …, v_{i−1})}(w) is minimal. Equivalently, σ is a BFS ordering if, for all 1 ≤ i < j < k ≤ n with v_i in N(v_k) \ N(v_j), there exists a neighbor v_m of v_j such that m < i. Applications Breadth-first search can be used to solve many problems in graph theory, for example: Copying garbage collection, Cheney's algorithm Finding the shortest path between two nodes u and v, with path length measured by number of edges (an advantage over depth-first search) (Reverse) Cuthill–McKee mesh numbering Ford–Fulkerson method for computing the maximum flow in a flow network Serialization/Deserialization of a binary tree vs serialization in sorted order, allows the tree to be re-constructed in an efficient manner. Construction of the failure function of the Aho-Corasick pattern matcher. Testing bipartiteness of a graph (see the sketch after this list). 
Implementing parallel algorithms for computing a graph's transitive closure.
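As an illustration of the bipartiteness test mentioned above, the following short C++ sketch (an informal example, not from the cited sources) two-colors each connected component with BFS and reports failure as soon as an edge joins two vertices of the same color:

    #include <queue>
    #include <vector>

    // Returns true if the adjacency-list graph can be properly 2-colored, i.e. is bipartite.
    bool isBipartite(const std::vector<std::vector<int>>& adj) {
        std::vector<int> color(adj.size(), -1);            // -1 means "not yet colored"
        for (int start = 0; start < static_cast<int>(adj.size()); ++start) {
            if (color[start] != -1) continue;              // component already processed
            color[start] = 0;
            std::queue<int> q;
            q.push(start);
            while (!q.empty()) {
                int v = q.front();
                q.pop();
                for (int w : adj[v]) {
                    if (color[w] == -1) {                  // first visit: give the opposite color
                        color[w] = 1 - color[v];
                        q.push(w);
                    } else if (color[w] == color[v]) {     // same color across an edge
                        return false;
                    }
                }
            }
        }
        return true;
    }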
Mathematics
Algorithms
null
97034
https://en.wikipedia.org/wiki/Depth-first%20search
Depth-first search
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. Extra memory, usually a stack, is needed to keep track of the nodes discovered so far along a specified branch, which helps in backtracking through the graph. A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux as a strategy for solving mazes. Properties The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time O(|V| + |E|), where |V| is the number of vertices and |E| the number of edges. This is linear in the size of the graph. In these applications it also uses space O(|V|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for breadth-first search and the choice of which of these two algorithms to use depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce. For applications of DFS in relation to specific domains, such as searching for solutions in artificial intelligence or web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer from non-termination). In such cases, search is only performed to a limited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search. For such applications, DFS also lends itself much better to heuristic methods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori, iterative deepening depth-first search applies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with a branching factor greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known, due to the geometric growth of the number of nodes per level. DFS may also be used to collect a sample of graph nodes. However, incomplete DFS, similarly to incomplete BFS, is biased towards nodes of high degree. Example For the following graph: a depth-first search starting at the node A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory. 
Performing the same search without remembering previously visited nodes results in visiting the nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G. Iterative deepening is one technique to avoid this infinite loop and would reach all nodes. Output of a depth-first search The result of a depth-first search of a graph can be conveniently described in terms of a spanning tree of the vertices reached during the search. Based on this spanning tree, the edges of the original graph can be divided into three classes: forward edges, which point from a node of the tree to one of its descendants, back edges, which point from a node to one of its ancestors, and cross edges, which do neither. Sometimes tree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. If the original graph is undirected then all of its edges are tree edges or back edges. Vertex orderings It is also possible to use depth-first search to linearly order the vertices of a graph or tree. There are four possible ways of doing this: A preordering is a list of the vertices in the order that they were first visited by the depth-first search algorithm. This is a compact and natural way of describing the progress of the search, as was done earlier in this article. A preordering of an expression tree is the expression in Polish notation. A postordering is a list of the vertices in the order that they were last visited by the algorithm. A postordering of an expression tree is the expression in reverse Polish notation. A reverse preordering is the reverse of a preordering, i.e. a list of the vertices in the opposite order of their first visit. Reverse preordering is not the same as postordering. A reverse postordering is the reverse of a postordering, i.e. a list of the vertices in the opposite order of their last visit. Reverse postordering is not the same as preordering. For binary trees there is additionally in-ordering and reverse in-ordering. For example, when searching the directed graph below beginning at node A, the sequence of traversals is either A B D B A C A or A C D C A B A (choosing to first visit B or C from A is up to the algorithm). Note that repeat visits in the form of backtracking to a node, to check if it has still unvisited neighbors, are included here (even if it is found to have none). Thus the possible preorderings are A B D C and A C D B, while the possible postorderings are D B C A and D C B A, and the possible reverse postorderings are A C B D and A B C D. Reverse postordering produces a topological sorting of any directed acyclic graph. This ordering is also useful in control-flow analysis as it often represents a natural linearization of the control flows. The graph above might represent the flow of control in the code fragment below, and it is natural to consider this code in the order A B C D or A C B D but not natural to use the order A B D C or A C D B. 
if (A) then { B } else { C } D

Pseudocode A recursive implementation of DFS:

    procedure DFS(G, v) is
        label v as discovered
        for all directed edges from v to w that are in G.adjacentEdges(v) do
            if vertex w is not labeled as discovered then
                recursively call DFS(G, w)

A non-recursive implementation of DFS with worst-case space complexity O(|E|), with the possibility of duplicate vertices on the stack:

    procedure DFS_iterative(G, v) is
        let S be a stack
        S.push(v)
        while S is not empty do
            v = S.pop()
            if v is not labeled as discovered then
                label v as discovered
                for all edges from v to w in G.adjacentEdges(v) do
                    if w is not labeled as discovered then
                        S.push(w)

These two variations of DFS visit the neighbors of each vertex in the opposite order from each other: the first neighbor of v visited by the recursive variation is the first one in the list of adjacent edges, while in the iterative variation the first visited neighbor is the last one in the list of adjacent edges. The recursive implementation will visit the nodes from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive implementation will visit the nodes as: A, E, F, B, D, C, G. The non-recursive implementation is similar to breadth-first search but differs from it in two ways: it uses a stack instead of a queue, and it delays checking whether a vertex has been discovered until the vertex is popped from the stack rather than making this check before adding the vertex. If G is a tree, replacing the queue of the breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one. Another possible implementation of iterative depth-first search uses a stack of iterators of the list of neighbors of a node, instead of a stack of nodes. This yields the same traversal as recursive DFS.

    procedure DFS_iterative(G, v) is
        let S be a stack
        label v as discovered
        S.push(iterator of G.adjacentEdges(v))
        while S is not empty do
            if S.peek().hasNext() then
                w = S.peek().next()
                if w is not labeled as discovered then
                    label w as discovered
                    S.push(iterator of G.adjacentEdges(w))
            else
                S.pop()

Applications Algorithms that use depth-first search as a building block include: Finding connected components. Topological sorting. Finding 2-(edge or vertex)-connected components. Finding 3-(edge or vertex)-connected components. Finding the bridges of a graph. Generating words in order to plot the limit set of a group. Finding strongly connected components. Determining whether a species is closer to one species or another in a phylogenetic tree. Planarity testing. Solving puzzles with only one solution, such as mazes. (DFS can be adapted to find all solutions to a maze by only including nodes on the current path in the visited set.) Maze generation may use a randomized DFS. Finding biconnectivity in graphs. Succession to the throne shared by the Commonwealth realms. Complexity The computational complexity of DFS was investigated by John Reif. More precisely, given a graph G, let O be the ordering computed by the standard recursive DFS algorithm. This ordering is called the lexicographic depth-first search ordering. John Reif considered the complexity of computing the lexicographic depth-first search ordering, given a graph and a source. 
A decision version of the problem (testing whether some vertex occurs before some vertex in this order) is P-complete, meaning that it is "a nightmare for parallel processing". A depth-first search ordering (not necessarily the lexicographic one), can be computed by a randomized parallel algorithm in the complexity class RNC. As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm, in the complexity class NC.
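For comparison with the pseudocode given earlier, here is a short illustrative C++ rendering (an informal sketch, not drawn from the cited works) of the recursive variant and of the iterative variant that keeps an explicit stack of vertices; as noted above, the two may visit the neighbors of a vertex in opposite orders.

    #include <stack>
    #include <vector>

    // Recursive depth-first search over an adjacency-list graph.
    void dfsRecursive(const std::vector<std::vector<int>>& adj, int v,
                      std::vector<bool>& discovered) {
        discovered[v] = true;
        for (int w : adj[v]) {
            if (!discovered[w]) {
                dfsRecursive(adj, w, discovered);
            }
        }
    }

    // Iterative depth-first search; a vertex may be pushed more than once,
    // so the "discovered" check is repeated when a vertex is popped.
    void dfsIterative(const std::vector<std::vector<int>>& adj, int root) {
        std::vector<bool> discovered(adj.size(), false);
        std::stack<int> s;
        s.push(root);
        while (!s.empty()) {
            int v = s.top();
            s.pop();
            if (!discovered[v]) {
                discovered[v] = true;
                for (int w : adj[v]) {
                    if (!discovered[w]) {
                        s.push(w);
                    }
                }
            }
        }
    }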
Mathematics
Algorithms
null
97114
https://en.wikipedia.org/wiki/Heterotroph
Heterotroph
A heterotroph is an organism that cannot produce its own food, instead taking nutrition from other sources of organic carbon, mainly plant or animal matter. In the food chain, heterotrophs are primary, secondary and tertiary consumers, but not producers. Living organisms that are heterotrophic include all animals and fungi, some bacteria and protists, and many parasitic plants. The term heterotroph arose in microbiology in 1946 as part of a classification of microorganisms based on their type of nutrition. The term is now used in many fields, such as ecology, in describing the food chain. Heterotrophs may be subdivided according to their energy source. If the heterotroph uses chemical energy, it is a chemoheterotroph (e.g., humans and mushrooms). If it uses light for energy, then it is a photoheterotroph (e.g., green non-sulfur bacteria). Heterotrophs represent one of the two mechanisms of nutrition (trophic levels), the other being autotrophs (auto = self, troph = nutrition). Autotrophs use energy from sunlight (photoautotrophs) or oxidation of inorganic compounds (lithoautotrophs) to convert inorganic carbon dioxide to organic carbon compounds and energy to sustain their life. Comparing the two in basic terms, heterotrophs (such as animals) eat either autotrophs (such as plants) or other heterotrophs, or both. Detritivores are heterotrophs which obtain nutrients by consuming detritus (decomposing plant and animal parts as well as feces). Saprotrophs (also called lysotrophs) are chemoheterotrophs that use extracellular digestion in processing decayed organic matter. The process is most often facilitated through the active transport of such materials through endocytosis within the internal mycelium and its constituent hyphae (Advanced Biology Principles, p. 296). Types Heterotrophs can be organotrophs or lithotrophs. Organotrophs exploit reduced carbon compounds as electron sources, like carbohydrates, fats, and proteins from plants and animals. On the other hand, lithoheterotrophs use inorganic compounds, such as ammonium, nitrite, or sulfur, to obtain electrons. Another way of classifying different heterotrophs is by assigning them as chemotrophs or phototrophs. Phototrophs utilize light to obtain energy and carry out metabolic processes, whereas chemotrophs use the energy obtained by the oxidation of chemicals from their environment. Photoorganoheterotrophs, such as Rhodospirillaceae and purple non-sulfur bacteria, synthesize organic compounds using sunlight coupled with oxidation of organic substances. They use organic compounds to build structures. They do not fix carbon dioxide and apparently do not have the Calvin cycle. Chemolithoheterotrophs like Oceanithermus profundus obtain energy from the oxidation of inorganic compounds, including hydrogen sulfide, elemental sulfur, thiosulfate, and molecular hydrogen. Mixotrophs (or facultative chemolithotrophs) can use either carbon dioxide or organic carbon as the carbon source, meaning that mixotrophs have the ability to use both heterotrophic and autotrophic methods. Although mixotrophs have the ability to grow under both heterotrophic and autotrophic conditions, C. 
vulgaris has higher biomass and lipid productivity when grown under heterotrophic compared to autotrophic conditions. Heterotrophs, by consuming reduced carbon compounds, are able to use all the energy that they obtain from food for growth and reproduction, unlike autotrophs, which must use some of their energy for carbon fixation. Both heterotrophs and autotrophs alike are usually dependent on the metabolic activities of other organisms for nutrients other than carbon, including nitrogen, phosphorus, and sulfur, and can die from lack of food that supplies these nutrients. This applies not only to animals and fungi but also to bacteria. Origin and diversification The chemical origin of life hypothesis suggests that life originated in a prebiotic soup with heterotrophs. The summary of this theory is as follows: early Earth had a highly reducing atmosphere and energy sources such as electrical energy in the form of lightning, which resulted in reactions that formed simple organic compounds, which further reacted to form more complex compounds and eventually resulted in life. Alternative theories of an autotrophic origin of life contradict this theory. The theory of a chemical origin of life beginning with heterotrophic life was first proposed in 1924 by Alexander Ivanovich Oparin and eventually published as "The Origin of Life". It was independently proposed for the first time in English in 1929 by John Burdon Sanderson Haldane. While these authors agreed on the gases present and the progression of events to a point, Oparin championed a progressive complexity of organic matter prior to the formation of cells, while Haldane had more considerations about the concept of genes as units of heredity and the possibility of light playing a role in chemical synthesis (autotrophy). Evidence grew to support this theory in 1953, when Stanley Miller conducted an experiment in which he added gases that were thought to be present on early Earth – water (H2O), methane (CH4), ammonia (NH3), and hydrogen (H2) – to a flask and stimulated them with electricity that resembled the lightning present on early Earth. The experiment resulted in the discovery that early Earth conditions were supportive of the production of amino acids, with recent re-analyses of the data recognizing that over 40 different amino acids were produced, including several not currently used by life. This experiment heralded the beginning of the field of synthetic prebiotic chemistry, and is now known as the Miller–Urey experiment. On early Earth, oceans and shallow waters were rich with organic molecules that could have been used by primitive heterotrophs. This method of obtaining energy was energetically favorable until organic carbon became more scarce than inorganic carbon, providing a potential evolutionary pressure to become autotrophic. Following the evolution of autotrophs, heterotrophs were able to utilize them as a food source instead of relying on the limited nutrients found in their environment. Eventually, autotrophic and heterotrophic cells were engulfed by these early heterotrophs and formed a symbiotic relationship. The endosymbiosis of autotrophic cells is suggested to have evolved into the chloroplasts while the endosymbiosis of smaller heterotrophs developed into the mitochondria, allowing the differentiation of tissues and development into multicellularity. This advancement allowed the further diversification of heterotrophs. 
Today, many heterotrophs and autotrophs also utilize mutualistic relationships that provide needed resources to both organisms. One example of this is the mutualism between corals and algae, where the former provides protection and necessary compounds for photosynthesis while the latter provides oxygen. However, this hypothesis is controversial, as CO2 was the main carbon source on the early Earth, suggesting that early cellular life consisted of autotrophs that relied upon inorganic substrates as an energy source and lived at alkaline hydrothermal vents or acidic geothermal ponds. Simple biomolecules transported from space were considered to have been either too reduced to have been fermented or too heterogeneous to support microbial growth. Heterotrophic microbes likely originated at low H2 partial pressures. Bases, amino acids, and ribose are considered to be the first fermentation substrates. Heterotrophs are currently found in each domain of life: Bacteria, Archaea, and Eukarya. Domain Bacteria includes a variety of metabolic activities, including photoheterotrophs, chemoheterotrophs, organotrophs, and heterolithotrophs. Within Domain Eukarya, kingdoms Fungi and Animalia are entirely heterotrophic, though most fungi absorb nutrients from their environment. Most organisms within Kingdom Protista are heterotrophic while Kingdom Plantae is almost entirely autotrophic, except for myco-heterotrophic plants. Lastly, Domain Archaea varies immensely in metabolic functions and contains many methods of heterotrophy. Flowchart: autotrophs are subdivided into chemoautotrophs and photoautotrophs, and heterotrophs into chemoheterotrophs and photoheterotrophs. Ecology Many heterotrophs are chemoorganoheterotrophs that use organic carbon (e.g. glucose) as their carbon source, and organic chemicals (e.g. carbohydrates, lipids, proteins) as their electron sources. Heterotrophs function as consumers in the food chain: they obtain nutrients through saprotrophic, parasitic, or holozoic nutrition. They break down complex organic compounds (e.g., carbohydrates, fats, and proteins) produced by autotrophs into simpler compounds (e.g., carbohydrates into glucose, fats into fatty acids and glycerol, and proteins into amino acids). They release the chemical energy of nutrient molecules by oxidizing carbon and hydrogen atoms from carbohydrates, lipids, and proteins to carbon dioxide and water, respectively. They can catabolize organic compounds by respiration, fermentation, or both. Fermenting heterotrophs are either facultative or obligate anaerobes that carry out fermentation in low-oxygen environments, in which the production of ATP is commonly coupled with substrate-level phosphorylation and the production of end products (e.g. alcohol, CO2, sulfide). These products can then serve as the substrates for other bacteria in anaerobic digestion, and be converted into CO2 and CH4, which is an important step in the carbon cycle for removing organic fermentation products from anaerobic environments. Heterotrophs can undergo respiration, in which ATP production is coupled with oxidative phosphorylation. This leads to the release of oxidized carbon wastes such as CO2 and reduced wastes like H2O, H2S, or N2O into the atmosphere. Heterotrophic microbes' respiration and fermentation account for a large portion of the release of CO2 into the atmosphere, making it available to autotrophs as a source of nutrients and to plants as a substrate for cellulose synthesis. 
Respiration in heterotrophs is often accompanied by mineralization, the process of converting organic compounds to inorganic forms. When the organic nutrient source taken in by the heterotroph contains essential elements such as N, S, and P in addition to C, H, and O, these are often removed first so that oxidation of the organic nutrient and production of ATP via respiration can proceed. S and N in the organic carbon source are transformed into H2S and NH4+ through desulfurylation and deamination, respectively. Heterotrophs also allow for dephosphorylation as part of decomposition. The conversion of N and S from organic form to inorganic form is a critical part of the nitrogen and sulfur cycles. H2S formed from desulfurylation is further oxidized by lithotrophs and phototrophs, while NH4+ formed from deamination is further oxidized by lithotrophs to the forms available to plants. Heterotrophs' ability to mineralize essential elements is critical to plant survival. Most opisthokonts and prokaryotes are heterotrophic; in particular, all animals and fungi are heterotrophs. Some animals, such as corals, form symbiotic relationships with autotrophs and obtain organic carbon in this way. Furthermore, some parasitic plants have also turned fully or partially heterotrophic, while carnivorous plants consume animals to augment their nitrogen supply while remaining autotrophic. Animals are classified as heterotrophs by ingestion, while fungi are classified as heterotrophs by absorption.
Biology and health sciences
Ecology
Biology
97169
https://en.wikipedia.org/wiki/Port
Port
A port is a maritime facility comprising one or more wharves or loading areas, where ships load and discharge cargo and passengers. Although usually situated on a sea coast or estuary, ports can also be found far inland, such as Hamburg, Manchester and Duluth; these access the sea via rivers or canals. Because of their roles as ports of entry for immigrants as well as soldiers in wartime, many port cities have experienced dramatic multi-ethnic and multicultural changes throughout their histories. Ports are extremely important to the global economy; 70% of global merchandise trade by value passes through a port. For this reason, ports are also often densely populated settlements that provide the labor for processing and handling goods and related services for the ports. Today by far the greatest growth in port development is in Asia, the continent with some of the world's largest and busiest ports, such as Singapore and the Chinese ports of Shanghai and Ningbo-Zhoushan. As of 2020, the busiest passenger port in Europe is the Port of Helsinki in Finland. Nevertheless, countless smaller ports do exist that may only serve their local tourism or fishing industries. Ports can have a wide environmental impact on local ecologies and waterways, most importantly water quality, which can be degraded by dredging, spills and other pollution. Ports are heavily affected by changing environmental factors caused by climate change, as most port infrastructure is extremely vulnerable to sea level rise and coastal flooding. Internationally, global ports are beginning to identify ways to improve coastal management practices and integrate climate change adaptation practices into their construction. Historical ports Wherever ancient civilisations engaged in maritime trade, they tended to develop sea ports. One of the world's oldest known artificial harbors is at Wadi al-Jarf on the Red Sea. Along with harbor structures, ancient anchors have also been found. Other ancient ports include Guangzhou during Qin dynasty China and Canopus, the principal Egyptian port for Greek trade before the foundation of Alexandria. In Ancient Greece, Athens' port of Piraeus was the base for the Athenian fleet which played a crucial role in the Battle of Salamis against the Persians in 480 BCE. In ancient India from 3700 BCE, Lothal was a prominent city of the Indus valley civilisation, located in the Bhal region of the modern state of Gujarāt. Ostia Antica was the port of ancient Rome, with Portus established by Claudius and enlarged by Trajan to supplement the nearby port of Ostia. In Japan, during the Edo period, the island of Dejima was the only port open for trade with Europe and received only a single Dutch ship per year, whereas Osaka was the largest domestic port and the main trade hub for rice. Ostia Antica is an ancient Roman city and the port of Rome, located at the mouth of the Tiber. It is near modern Ostia, southwest of Rome. Due to silting and the invasion of sand, the site now lies some distance from the sea. The name Ostia (the plural of ostium) derives from Latin os 'mouth'. Ostia is now a large archaeological site noted for the excellent preservation of its ancient buildings, magnificent frescoes and impressive mosaics. The city's decline after antiquity led to harbor deterioration, marshy conditions, and reduced population. Sand dunes covering the site aided its preservation. Its remains provide insights into a city of commercial importance. 
As in Pompeii, Ostia's ruins provide details about Roman urbanism that are not accessible within the city of Rome itself. Post-classical Swahili kingdoms are known to have had trade port islands and trade routes with the Islamic world and Asia. They were described by Greek historians as "metropolises". Famous African trade ports such as Mombasa, Zanzibar, Mogadishu and Kilwa were known to Chinese sailors such as Zheng He and medieval Islamic historians such as the Berber Islamic voyager Abu Abdullah ibn Battuta. Many of these ancient sites no longer exist or function as modern ports. Even in more recent times, ports sometimes fall out of use. Rye, East Sussex, was an important English port in the Middle Ages, but the coastline changed and it is now from the sea, while the ports of Ravenspurn and Dunwich have been lost to coastal erosion. The maritime republics (), also called merchant republics (), were Italian thalassocratic port cities which, starting from the Middle Ages, enjoyed political autonomy and economic prosperity brought about by their maritime activities. The term, coined during the 19th century, generally refers to four Italian cities, whose coats of arms have been shown since 1947 on the flags of the Italian Navy and the Italian Merchant Navy: Amalfi, Genoa, Pisa, and Venice. In addition to the four best-known cities, Ancona, Gaeta, Noli, and, in Dalmatia, Ragusa are also considered maritime republics; in certain historical periods they were of no lesser importance than some of the better-known cities. Spread fairly evenly along the Italian peninsula, the maritime republics were important not only for the history of navigation and commerce: besides carrying precious goods otherwise unobtainable in Europe, they spread new artistic ideas and news of distant countries. From the 10th century, they built fleets of ships both for their own protection and to support extensive trade networks across the Mediterranean, giving them an essential role in reestablishing contacts between Europe, Asia, and Africa, which had been interrupted during the early Middle Ages. They also had an essential role in the Crusades and produced renowned explorers and navigators such as Marco Polo and Christopher Columbus. Over the centuries, the maritime republics — both the best known and the lesser known but not always less important — experienced fluctuating fortunes. In the 9th and 10th centuries, this phenomenon began with Amalfi and Gaeta, which soon reached their heyday. Meanwhile, Venice began its gradual ascent, while the other cities were still experiencing the long gestation that would lead them to their autonomy and to the pursuit of their seafaring vocation. After the 11th century, Amalfi and Gaeta declined rapidly, while Genoa and Venice became the most powerful republics. Pisa followed and experienced its most flourishing period in the 13th century, and Ancona and Ragusa allied to resist Venetian power. After the 14th century, while Pisa declined to the point of losing its autonomy, Venice and Genoa continued to dominate navigation, followed by Ragusa and Ancona, which experienced their golden age in the 15th century. In the 16th century, with Ancona's loss of autonomy, only the republics of Venice, Genoa, and Ragusa remained; they continued to enjoy great moments of splendor until the mid-17th century, followed by over a century of slow decline that ended with the Napoleonic invasion. 
Modern ports Whereas early ports tended to be just simple harbours, modern ports tend to be multimodal distribution hubs, with transport links using sea, river, canal, road, rail and air routes. Successful ports are located to optimize access to an active hinterland, such as the London Gateway. Ideally, a port will grant easy navigation to ships, and will give shelter from wind and waves. Ports are often on estuaries, where the water may be shallow and may need regular dredging. Deep water ports such as Milford Haven are less common, but can handle larger ships with a greater draft, such as super tankers, Post-Panamax vessels and large container ships. Other businesses such as regional distribution centres, warehouses and freight-forwarders, canneries and other processing facilities find it advantageous to be located within a port or nearby. Modern ports will have specialised cargo-handling equipment, such as gantry cranes, reach stackers and forklift trucks. Ports usually have specialised functions: some tend to cater mainly for passenger ferries and cruise ships; some specialise in container traffic or general cargo; and some ports play an important military role for their nation's navy. Some third world countries and small islands such as Ascension and St Helena still have limited port facilities, so that ships must anchor off while their cargo and passengers are taken ashore by barge or launch (respectively). In modern times, ports survive or decline, depending on current economic trends. In the UK, both the ports of Liverpool and Southampton were once significant in the transatlantic passenger liner business. Once airliner traffic decimated that trade, both ports diversified to container cargo and cruise ships. Up until the 1950s the Port of London was a major international port on the River Thames, but changes in shipping and the use of containers and larger ships have led to its decline. Thamesport, a small semi-automated container port (with links to the Port of Felixstowe, the UK's largest container port) thrived for some years, but has been hit hard by competition from the emergent London Gateway port and logistics hub. In mainland Europe, it is normal for ports to be publicly owned, so that, for instance, the ports of Rotterdam and Amsterdam are owned partly by the state and partly by the cities themselves. Even though modern ships tend to have bow-thrusters and stern-thrusters, many port authorities still require vessels to use pilots and tugboats for manoeuvering large ships in tight quarters. For instance, ships approaching the Belgian port of Antwerp, an inland port on the River Scheldt, are obliged to use Dutch pilots when navigating on that part of the estuary that belongs to the Netherlands. Ports with international traffic have customs facilities. Types The terms "port" and "seaport" are used for different types of facilities handling ocean-going vessels, and river port is used for river traffic, such as barges and other shallow-draft vessels. Inland port An inland port is a port on a navigable lake, river (fluvial port), or canal with access to a sea or ocean, which therefore allows a ship to sail from the ocean inland to the port to load or unload its cargo. An example of this is the St. Lawrence Seaway which allows ships to travel from the Atlantic Ocean several thousand kilometers inland to Great Lakes ports like Toronto, Duluth-Superior, and Chicago. The term inland port is also used for dry ports. Seaport A seaport is a port located on the shore of a sea or ocean. 
Seaports are further categorized as commercial or non-commercial: commercial seaports include "cruise ports" and "cargo ports", while non-commercial seaports include marinas and fishing ports. A cruise port serves either as a "home port" or a "port of call", and a cargo port is further categorized as a "bulk" or "break bulk" port or as a "container port". Cargo port Cargo ports are quite different from cruise ports, because each handles very different cargo, which has to be loaded and unloaded by a variety of mechanical means. Bulk cargo ports may handle one particular type of cargo or numerous cargoes, such as grains, liquid fuels, liquid chemicals, wood, automobiles, etc. Such ports are known as "bulk" or "break bulk" ports. Ports that handle containerized cargo are known as container ports. Most cargo ports handle all sorts of cargo, but some ports are very specific as to what cargo they handle. Additionally, individual cargo ports may be divided into different operating terminals which handle different types of cargo and may be operated by different companies, also known as terminal operators or stevedores. Cruise port A cruise home port is the port where cruise ship passengers board (or embark) to start their cruise and disembark the cruise ship at the end of their cruise. It is also where the cruise ship's supplies are loaded for the cruise, which includes everything from fresh water and fuel to fruits, vegetables, champagne, and any other supplies needed for the cruise. Cruise home ports are very busy places during the day the cruise ship is in port, because disembarking passengers leave with their baggage and embarking passengers board the ship while all the supplies are being loaded. Cruise home ports tend to have large passenger terminals to handle the large number of passengers passing through the port. The busiest cruise home port in the world is the Port of Miami, Florida. Port of call A port of call is an intermediate stop for a ship on its sailing itinerary. At these ports, cargo ships may take on supplies or fuel and load or unload cargo, while cruise liners embark or disembark passengers. Fishing port A fishing port is a port or harbor for landing and distributing fish. It may be a recreational facility, but it is usually commercial. A fishing port is the only type of port that depends on an ocean product, and depletion of fish stocks may make a fishing port uneconomical. Marina A marina is a port for recreational boating. Warm-water port A warm-water port (also known as an ice-free port) is one where the water does not freeze in winter. The term is used mainly in the context of countries with cold winters where parts of the coastline freeze over every year. Because they are available year-round, warm-water ports can be of great geopolitical or economic interest. Such settlements as Narvik in Norway, Dalian in China, Murmansk, Novorossiysk, Petropavlovsk-Kamchatsky and Vostochny Port in Russia, Odesa in Ukraine, Kushiro in Japan and Valdez at the terminus of the Alaska Pipeline owe their very existence to being ice-free ports. Ports on the Baltic Sea and in similar areas have been available year-round since the 20th century thanks to icebreakers, but earlier access problems prompted Russia to expand its territory toward the Black Sea. Dry port A dry port is an inland intermodal terminal directly connected by road or rail to a seaport and operating as a centre for the transshipment of sea cargo to inland destinations. 
Smart port A smart port uses technologies, including the Internet of Things (IoT) and artificial intelligence (AI), to handle goods more efficiently. Smart ports usually deploy cloud-based software as part of a broader push toward automation, helping to coordinate the flow of operations so that the port runs smoothly. At present, most of the world's ports have embedded some of this technology, even if few are leaders in the field. However, thanks to government initiatives worldwide and rapid growth in maritime trade, the number of smart ports has gradually increased. A report by business intelligence provider Visiongain assessed that smart-port market spending would reach $1.5 billion in 2019. Environmental issues Ports and their operations are often a cause of environmental problems, such as sediment contamination and spills from ships, and are themselves susceptible to larger environmental pressures, such as human-caused climate change and its effects. Dredging Every year 100 million cubic metres of marine sediment are dredged to improve waterways around ports. Dredging disturbs local ecosystems, brings sediment into the water column, and can stir up pollutants trapped in the sediment. Invasive species Invasive species are often spread in ballast water and by organisms attached to the hulls of ships. It is estimated that over 7,000 invasive species are transported in ballast water around the world on a daily basis. Invasive species can have direct or indirect interactions with native sea life. A direct interaction, such as predation, occurs when a native species with no natural predators suddenly becomes prey for an invasive species. Indirect interactions include diseases or other health conditions introduced by invasive species. Air pollution Ports are also a source of increased air pollution as a result of ships and land transportation at the port. Transportation corridors around ports have higher exhaust emissions, and this can have related health effects on local communities. Water quality Water quality around ports is often lower because of both direct and indirect pollution from shipping and other challenges caused by the port's community, such as trash washing into the ocean. Spills, pollution and contamination Sewage from ships and leaks of oil and chemicals from vessels can contaminate local water and cause other effects such as nutrient pollution. Climate change and sea level rise Ports and their infrastructure are very vulnerable to climate change and sea level rise, because many of them are in low-lying areas designed around existing water levels. Variable weather, coastal erosion, and sea level rise all put pressure on existing infrastructure, resulting in subsidence, coastal flooding and other direct pressures on the port. Reducing impact There are several initiatives to decrease the negative environmental impacts of ports. The World Port Sustainability Program points to all of the Sustainable Development Goals as potential ways of addressing port sustainability. Such initiatives include SIMPYC, the World Ports Climate Initiative, the African Green Port Initiative, EcoPorts and Green Marine. World's major ports Africa The port of Tangier Med is the largest port on the Mediterranean and in Africa by capacity and went into service in July 2007. The busiest port in Africa is Port Said in Egypt. Asia The port of Shanghai is the largest port in the world in both cargo tonnage and activity. 
It regained its position as the world's busiest port by cargo tonnage and the world's busiest container port in 2009 and 2010, respectively. It is followed by the ports of Singapore, Hong Kong and Kaohsiung, Taiwan, all of which are in East and Southeast Asia. The port of Singapore is the world's second-busiest port in terms of total shipping tonnage; it also transships a third of the world's shipping containers and half of the world's annual supply of crude oil, and is the world's busiest transshipment port. Europe Europe's busiest container port and biggest port by cargo tonnage by far is the Port of Rotterdam, in the Netherlands. It is followed by the Belgian Port of Antwerp or the German Port of Hamburg, depending on which metric is used. In turn, the Spanish Port of Valencia is the busiest port in the Mediterranean basin, while the Portuguese Port of Sines is the busiest Atlantic port. The Port of Trieste, Italy, is the main port of the northern Adriatic and the starting point of the Transalpine Pipeline. North America The largest ports include the Port of South Louisiana, a vast, sprawling port centered in the New Orleans area; Houston; the Port of New York and New Jersey; and Los Angeles in the U.S., along with Manzanillo in Mexico and Vancouver in Canada. Panama also has the Panama Canal, which connects the Pacific and Atlantic Oceans and is a key conduit for international trade. Oceania The largest port in Oceania is the Port of Melbourne. South America According to ECLAC's "Maritime and Logistics Profile of Latin America and the Caribbean", the largest ports in South America are the Port of Santos in Brazil, Cartagena in Colombia, Callao in Peru, Guayaquil in Ecuador, and the Port of Buenos Aires in Argentina.
Technology
Coastal infrastructure
null
97218
https://en.wikipedia.org/wiki/John%20F.%20Kennedy%20International%20Airport
John F. Kennedy International Airport
John F. Kennedy International Airport is a major international airport serving New York City and its metropolitan area. JFK Airport is located on the southwestern shore of Long Island, in Queens, New York City, bordering Jamaica Bay. It is the busiest of the seven airports in the New York airport system, the sixth-busiest airport in the United States, and the busiest international commercial airport in North America. The airport, which covers , is the largest in the New York metropolitan area. Over 90 airlines operate from JFK Airport, with nonstop or direct flights to destinations on all six inhabited continents. JFK Airport is located in the Jamaica neighborhood of Queens, southeast of Midtown Manhattan. The airport features five passenger terminals and four runways. It is primarily accessible by car, bus, shuttle, or other road transport via the JFK Expressway or Interstate 678 (Van Wyck Expressway), or by train. JFK is a hub for American Airlines and Delta Air Lines as well as the primary operating base for JetBlue. The airport is also a former hub for Braniff, Eastern, Flying Tigers, National, Northeast, Northwest, Pan Am, Seaboard World, Tower Air, and TWA. The facility opened in 1948 as New York International Airport and was commonly known as Idlewild Airport. Following the assassination of John F. Kennedy in 1963, the airport was renamed John F. Kennedy International Airport in tribute to him. History Construction What would become known as John F. Kennedy International Airport opened in 1948 as New York International Airport, though it was commonly known as Idlewild Airport after the Idlewild Beach Golf Course that it displaced. It was built to relieve LaGuardia Field, which had become overcrowded after its 1939 opening. In late 1941, Mayor Fiorello La Guardia announced that the city had tentatively chosen a large area of marshland on Jamaica Bay, which included the Idlewild Golf Course as well as a summer hotel and a landing strip called the Jamaica Sea-Airport, for a new airfield. Title to the land was conveyed to the city at the end of December 1941. Construction began in 1943, though the airport's final layout was not yet decided upon. About US$60 million in government funding was initially spent, but only of the Idlewild Golf Course site were earmarked for use. The project was renamed Major General Alexander E. Anderson Airport in 1943 after a Queens resident who had commanded a Federalized National Guard unit in the southern United States and died in late 1942. The renaming was vetoed by Mayor La Guardia and reinstated by the New York City Council; in common usage, the airport was still called "Idlewild". In 1944, the New York City Board of Estimate authorized the condemnation of another for Idlewild. The Port of New York Authority (now the Port Authority of New York and New Jersey) leased the Idlewild property from the City of New York in 1947 and maintains this lease today. In March 1948, the City Council changed the official name to New York International Airport, Anderson Field, but the common name remained "Idlewild" until December 24, 1963. The airport was intended as the world's largest and most efficient, with "no confusion and no congestion". Early operations The first flight from Idlewild was on July 1, 1948, with the opening ceremony attended by U.S. President Harry S. Truman and Governor of New York Thomas E. Dewey, who were both running for president in that year's presidential election. 
The Port Authority cancelled foreign airlines' permits to use LaGuardia, forcing them to move to Idlewild during the next couple of years. Idlewild at the time had a single terminal building; by 1949, the terminal building was being expanded to . Further expansions came in the following years, including a control tower in 1952, as well as new and expanded buildings and taxiways. Idlewild opened with six runways and a seventh under construction; runways 1L and 7L were held in reserve and never came into use as runways. Runway 31R (originally ) is still in use; runway 31L (originally ) opened soon after the rest of the airport and is still in use; runway 1R closed in 1957 and runway 7R closed around 1966. Runway 4 (originally 8,000 ft, now runway 4L) opened in June 1949 and runway 4R was added ten years later. A smaller runway 14/32 was built after runway 7R closed and was used until 1990 by general aviation, STOL, and smaller commuter flights. The first jet airliner to land at Idlewild was an Avro Jetliner flying from Malton Airport in Toronto, carrying the world's first cargo of jet airmail, on April 18, 1950. A 1951 policy instituted by the Port Authority effectively prohibited jets from landing at the city's airports. After tests demonstrated that it was no noisier than the loudest of the then-current propeller planes, approval was granted for a Sud Aviation Caravelle prototype to be the next jet airliner to land at Idlewild, on May 2, 1957. Later in 1957, the Soviet Union sought approval for two jet-powered Tupolev Tu-104 flights carrying diplomats to land at Idlewild; the Port Authority did not allow them, saying noise tests had to be done first. In 1951, the airport averaged 73 daily airline operations (takeoffs plus landings); the October 1951 Airline Guide shows nine domestic departures a day on National and Northwest. Much of Newark Airport's traffic shifted to Idlewild (which averaged 242 daily airline operations in 1952) when Newark was temporarily closed in February 1952 after a series of three plane crashes in Elizabeth in the two preceding months, all of which had fatalities; flights were shifted to Idlewild and La Guardia, which were both able to have planes take off and land over the water rather than over the densely populated areas surrounding Newark Airport. Newark Airport remained closed until November 1952, reopening with new flight patterns that took planes away from Elizabeth. L-1049 Constellations and DC-7s appeared between 1951 and 1953 and did not use LaGuardia for their first several years, bringing more traffic to Idlewild. The April 1957 Airline Guide cites a total of 1,283 departures a week, including about 250 from Eastern Air Lines, 150 from National Airlines and 130 from Pan American. Separate terminals By 1954, Idlewild had the highest volume of international air traffic of any airport globally. The Port of New York Authority originally planned a single 55-gate terminal, but the major airlines did not agree with this plan, arguing that the terminal would be far too small for future traffic. Architect Wallace Harrison then designed a plan under which each major airline at the airport would be given its own space to develop its own terminal. This scheme made construction more practical, made terminals more navigable, and introduced incentives for airlines to compete with each other for the best design. The revised plan met airline approval in 1955, with seven terminals initially planned. 
Five terminals were for individual airlines, one was for three airlines, and one was for international arrivals (National Airlines and British Airways arrived later). In addition, there would be an 11-story control tower, roadways, parking lots, taxiways, and a reflecting lagoon in the center. The airport was designed for aircraft up to gross weight, and it had to be modified in the late 1960s to accommodate the Boeing 747's weight. The International Arrivals Building, or IAB, was the first new terminal at the airport, opening in December 1957. The building was designed by SOM. The terminal stretched nearly and was parallel to runway 7R. The terminal had "finger" piers at right angles to the main building allowing more aircraft to park, an innovation at the time. The building was expanded in 1970 to accommodate jetways. However, by the 1990s the overcrowded building was showing its age and did not provide adequate space for security checkpoints. It was demolished in 2000 and replaced with Terminal 4. United Airlines and Delta Air Lines opened Terminal 7 (later renumbered Terminal 9), a SOM design similar to the IAB, in October 1959. It was demolished in 2008. Eastern Air Lines opened its Chester L. Churchill-designed Terminal 1 in November 1959. The terminal was demolished in 1995 and replaced with the current Terminal 1. American Airlines opened Terminal 8 in February 1960. It was designed by Kahn and Jacobs and had a stained-glass facade designed by Robert Sowers, the largest stained-glass installation in the world until 1979. The facade was removed in 2007 as the terminal was demolished to make room for the new Terminal 8; American cited the prohibitive cost of preserving the enormous installation. Pan American World Airways opened the Worldport (later Terminal 3) in 1960, designed by Tippetts-Abbett-McCarthy-Stratton. It featured a large, elliptical roof suspended by 32 sets of radial posts and cables; the roof extended beyond the base of the terminal to cover the passenger loading area. It was one of the first airline terminals in the world to feature jetways that connected to the terminal and could be moved to provide an easy walkway for passengers from the terminal to a docked aircraft. Jetways eliminated the need to board the plane outside via airstairs that descend from an aircraft, truck-mounted mobile stairs, or wheeled stairs. The Worldport was demolished in 2013. Trans World Airlines opened the TWA Flight Center in 1962, designed by Eero Saarinen with a distinctive winged-bird shape. With the demise of TWA in 2001, the terminal remained vacant until 2005, when JetBlue and the Port Authority of New York and New Jersey (PANYNJ) financed the construction of a new 26-gate terminal partly encircling the Saarinen building. Called Terminal 5 (now T5), the new terminal opened on October 22, 2008. T5 is connected to the Saarinen central building through the original passenger departure-arrival tubes that connected the building to the outlying gates. The original Saarinen terminal, also known as the head house, has since been converted into the TWA Hotel. Northwest Orient, Braniff International Airways, and Northeast Airlines opened a joint terminal in November 1962 (later Terminal 2). It was demolished in 2023 to make way for a new Terminal 1. National Airlines opened the Sundrome (later Terminal 6) in 1969. The terminal was designed by I. M. Pei. It was unique for its use of all-glass mullions dividing the window sections, unprecedented at the time. 
On October 30, 2000, United Airlines and the Port Authority of New York and New Jersey announced plans to redevelop this terminal and the TWA Flight Center as a new United terminal. Terminal 6 was used by JetBlue from 2001 until JetBlue moved to Terminal 5 in 2008. The Sundrome was demolished in October 2011 to make room for additional gates at JetBlue's Terminal 5. Later operation The airport was renamed John F. Kennedy International Airport on December 24, 1963, a month and two days after the assassination of President John F. Kennedy; Mayor Robert F. Wagner Jr. proposed the renaming. The IDL and KIDL codes have since been reassigned to Indianola Municipal Airport in Mississippi, and the now-renamed Kennedy Airport was given the codes JFK and KJFK, the fallen president's initials. Airlines began scheduling jets to Idlewild in 1958–59; LaGuardia did not get jets until 1964, and JFK became New York's busiest airport. It had more airline takeoffs and landings than LaGuardia and Newark combined from 1962 to 1967 and was the second-busiest airport in the country, peaking at 403,981 airline operations in 1967. LaGuardia received a new terminal and longer runways from 1960 to 1966. By the mid-1970s, the two airports had roughly equal airline traffic (by flight count); Newark was in third place until the 1980s, except during LaGuardia's reconstruction. Concorde, operated by Air France and British Airways, made scheduled trans-Atlantic supersonic flights to JFK from November 22, 1977, until its retirement by British Airways on October 24, 2003. Air France had retired the aircraft in May 2003. Construction of the AirTrain JFK people-mover system began in 1998, after decades of planning for a direct rail link to the airport. Although the system was originally scheduled to open in 2002, it opened on December 17, 2003, after delays caused by construction and a fatal crash. The rail network links each airport terminal to the New York City Subway and the Long Island Rail Road at Howard Beach and Jamaica. The airport's new Terminal 1 opened on May 28, 1998; Terminal 4, the $1.4 billion replacement for the International Arrivals Building, opened on May 24, 2001. JetBlue's Terminal 5 incorporates the TWA Flight Center, and Terminals 8 and 9 were demolished and rebuilt as Terminal 8 for the American Airlines hub. The Port Authority Board of Commissioners approved a $20 million planning study for the redevelopment of Terminals 2 and 3, the Delta Air Lines hub, in 2008. On March 19, 2007, JFK was the first airport in the United States to receive a passenger Airbus A380 flight. The route, with an over-500-passenger capacity, was operated by Lufthansa and Airbus and arrived at Terminal 1. On August 1, 2008, it received the first regularly scheduled commercial A380 flight to the United States (on Emirates' New York–Dubai route) at Terminal 4. Although the service was suspended in 2009 due to poor demand, the aircraft was reintroduced in November 2010. Airlines operating A380s to JFK include Singapore Airlines (on its New York–Frankfurt–Singapore route), Lufthansa (on its New York–Frankfurt route), Korean Air (on its New York–Seoul route), Asiana Airlines (on its New York–Seoul route), Etihad Airways (on its New York–Abu Dhabi route), and Emirates (on its New York–Milan–Dubai and New York–Dubai routes). On December 8, 2015, JFK was the first U.S. airport to receive a commercial Airbus A350 flight when Qatar Airways began using the aircraft on one of its New York–Doha routes. 
The airport currently hosts the world's longest flight, Singapore Airlines Flights 23 and 24 (SQ23 and SQ24). The route was launched in 2020 between Singapore and New York JFK and uses the Airbus A350-900ULR. Major robberies The Air France robbery took place in April 1967, when associates of the Lucchese crime family stole $420,000 (equivalent of approximately $ million in ) from the Air France cargo terminal at the airport. It was the largest cash robbery in the United States at the time. It was carried out by Henry Hill, Robert McMahon, Tommy DeSimone and Montague Montemurro, on a tip-off from McMahon. Hill believed it was the Air France robbery that endeared him to the Mafia. Air France was contracted to transport American currency that had been exchanged in Southeast Asia for deposit in the United States. Its aircraft regularly delivered three or four $60,000 packages at a time. Hill and his associates obtained a key to a cement-block strong room where the money was stored. They entered the unsecured cargo terminal and reached the strong room unchallenged. They took seven bags in a large suitcase. The theft was not discovered until the following Monday. The Lufthansa heist took place on December 11, 1978, at the airport. The robbery netted an estimated US$5.875 million (equivalent to US$ million in ), including US$5 million in cash and US$875,000 in jewelry. It was the largest cash robbery committed on American soil at the time. James Burke, an associate of the Lucchese crime family of New York, was believed to be the mastermind behind the robbery, but was never charged with the crime. Burke is also alleged to have either committed or ordered the murders of many of those involved in the robbery, both to avoid being implicated in the heist and to keep their shares of the money for himself. The only person convicted in the Lufthansa heist was Louis Werner, an airport worker involved with the planning. The money and jewelry have never been recovered. The heist's magnitude made it one of the longest-investigated crimes in U.S. history; the latest arrest associated with the robbery was made in 2014 and resulted in an acquittal. Access Rail All lines of AirTrain JFK, the airport's dedicated rail network, stop at each passenger terminal. The system also serves Federal Circle, the JFK long-term parking lot, and two multimodal rapid transit stations: Howard Beach and Jamaica. While AirTrain travel within airport property is complimentary, external transfers at the latter two locations are paid via OMNY or MetroCard and provide access to the New York City Subway, Long Island Rail Road, and MTA Bus services. Bus , only the bus serves Terminal 8. The serve JFK's cargo terminals. The Q10 and B15 serve the Lefferts Boulevard station on the AirTrain, with a free transfer included. The B15, Q3, and Q10 buses will return to Terminal 5 in 2026 due to construction. Bus fares are paid via OMNY or MetroCard, with free transfers provided to New York City Subway services. Vehicle Vehicles primarily access the airport via the Van Wyck Expressway (I-678) or JFK Expressway, both of which are connected to the Belt Parkway and various surface streets in South Ozone Park and Springfield Gardens. The airport operates parking facilities consisting of multi-level terminal garages, surface spaces in the Central Terminal Area, and a long-term parking lot with total accommodation for more than 17,000 vehicles. A travel plaza on airport property also contains a food court, filling station, and originally four Tesla Superchargers. 
The original 4 Tesla Superchargers were later replaced with a new station with 12 stalls. Taxis and other for-hire vehicles (FHV) serving JFK are licensed by the New York City Taxi & Limousine Commission. In 2019, PANYNJ approved the implementation of "airport access fee" surcharges on FHV and taxi trips, with the revenue earmarked to support the agency's capital programs. Terminals Overview JFK has five active terminals, containing 130 gates in total. The terminals are numbered 1, 4, 5, 7, and 8. The terminal buildings, except for the former Tower Air terminal, are arranged in a deformed U-shaped wavy pattern around a central area containing parking, a power plant, and other airport facilities. The terminals are connected by the AirTrain system and access roads. Directional signage throughout the terminals was designed by Paul Mijksenaar. A 2006 survey by J.D. Power and Associates in conjunction with Aviation Week found that JFK ranked second in overall traveller satisfaction among large airports in the United States, behind Harry Reid International Airport, which serves the Las Vegas metropolitan area. Until the early 1990s, each terminal was known by the primary airline that served it, except for Terminal 4, which was known as the International Arrivals Building. In the early 1990s, all terminals were given numbers except for the Tower Air terminal, which sat outside the Central Terminals area and was not numbered. Like the other airports controlled by the Port Authority, JFK's terminals are sometimes managed and maintained by independent terminal operators. At JFK, all terminals are managed by airlines or consortiums of the airlines serving them, except for the Schiphol Group-operated Terminal 4. All terminals can handle international arrivals that are not pre-cleared. Most inter-terminal connections require passengers to exit security, then walk, use a shuttle bus, or use the AirTrain JFK to get to the other terminal, then re-clear security. Terminal 1 Terminal 1 opened in 1998, 50 years after the opening of JFK, at the direction of the Terminal One Group, a consortium of four key operating carriers: Air France, Japan Airlines, Korean Air, and Lufthansa. This partnership was founded after the four airlines reached an agreement that the then-existing international carrier facilities were inadequate for their needs. The Eastern Air Lines terminal was located on the site of present-day Terminal 1. Terminal 1 is served by SkyTeam carriers Air France, China Eastern Airlines, ITA Airways, Korean Air, Saudia, and Scandinavian Airlines; Star Alliance carriers Air China, Air New Zealand, Asiana Airlines, Austrian Airlines, Brussels Airlines, Egyptair, EVA Air, Lufthansa, Swiss International Air Lines, TAP Air Portugal, and Turkish Airlines; and Oneworld carrier Royal Air Maroc. Other airlines serving Terminal 1 include Air Serbia, Azores Airlines, Cayman Airways, Flair Airlines, Neos, Philippine Airlines, VivaAerobús, and Volaris. Terminal 1 was designed by William Nicholas Bodouva + Associates. It and Terminal 4 are the two terminals at JFK Airport with the capability of handling the Airbus A380 aircraft, which Korean Air flies on the route from Seoul–Incheon and Lufthansa from Munich. Air France operated Concorde here until 2003. Terminal 1 has 11 gates. Terminal 4 Terminal 4, developed by LCOR, Inc., is managed by JFKIAT (IAT) LLC, a subsidiary of the Schiphol Group and was the first in the United States to be managed by a foreign airport operator. 
Terminal 4 currently contains 48 gates in two concourses and functions as the hub for Delta Air Lines at JFK. Concourse A (gates A2–A12, A14–A17, A19, and A21) serves primarily Asian and some European airlines along with Delta Connection flights. Concourse B (gates B20, B22-B55) primarily serves both domestic and international flights of Delta and its SkyTeam partners. Airlines servicing Terminal 4 include SkyTeam carriers Aeromexico, Air Europa, China Airlines, Delta Air Lines, Kenya Airways, KLM, Virgin Atlantic, and XiamenAir; Star Alliance carriers Air India, Avianca, Copa Airlines, and Singapore Airlines; and non-alliance carriers Caribbean Airlines, El Al, Emirates, Etihad Airways, Hawaiian Airlines, JetBlue (late night international arrivals only), LATAM Brasil, LATAM Chile, LATAM Peru, Uzbekistan Airways, and WestJet. Like Terminal 1, the facility is Airbus A380-compatible with service currently provided by Emirates to Dubai (both non-stop and one-stop via Milan), and Etihad Airways to Abu Dhabi. Opened in early 2001 and designed by SOM, the facility was built for $1.4 billion and replaced JFK's old International Arrivals Building (IAB), which opened in 1957 and was designed by the same architectural firm. The new construction incorporated a mezzanine-level AirTrain station, an expansive check-in hall, and a four-block-long retail area. Terminal 4 has seen multiple expansions over the years. On May 24, 2013, the completion of a $1.4 billion project added mechanized checked-bag screening, a centralized security checkpoint (consolidating two checkpoints into one new fourth-floor location), nine international gates, improved U.S. Customs and Border Protection facilities, and, at the time, the largest Sky Club lounge in Delta's network. Later that year, the expansion also improved passenger connectivity with Terminal 2 by bolstering inter-terminal JFK Jitney shuttle bus service and building a dedicated 8,000 square-foot bus holdroom facility adjacent to gate B20. Also in 2013, Delta, JFKIAT and the Port Authority agreed to a further $175 million Phase II expansion, which called for 11 new regional jet gates to supersede capacity previously provided by the soon-to-be-demolished Terminal 2 hardstands and Terminal 3. Delta sought funding from the New York City Industrial Development Agency, and work on Phase II was completed in January 2015. By 2017, plans to expand Terminal 4's passenger capacity were being floated in conjunction with a more significant JFK modernization proposal. In early 2020, Governor Cuomo announced that the Port Authority and Delta/IAT had agreed to terms extending Concourse A by 16 domestic gates, renovating the arrival/departure halls, and improving land-side roadways for $3.8 billion. By April 2021, that plan had been scaled-back to $1.5 billion worth of improvements as a result of financial hardships imposed by the COVID-19 pandemic. The revised plan called for arrival/departure hall modernization and just ten new gates in Concourse A. Consolidation of Delta's operations within T4 occurred in early 2023, along with the new gates opening. Delta also opened a new Sky Club in Concourse A. The airline plans to open a lounge exclusive to Delta One customers by June 2024. It would be the largest in the airline's network. In 2019, American Express began construction of a Centurion lounge that subsequently opened in October 2020. 
The structural addition extends the headhouse between the control tower and gate A2, and includes 15,000 square feet of dining, bars, and fitness facilities. In 2024, Terminal 4 announced an expansion of its Arts & Culture program with a digital and static photography exhibit in collaboration with the Cradle of Aviation Museum; a mural representing Queens by local artist Zeehan Wazed; a series of photographs by Terminal 4 employees; and, in partnership with the hologram company Proto, the first-ever freestanding hologram device in an airport, which shows animals from the Bronx Zoo and has been used to beam in comedian Howie Mandel as a live hologram to surprise passengers. Terminal 5 Terminal 5 opened in 2008 for JetBlue, the manager and primary tenant of the building, functioning as its operating base at JFK. The terminal is also used by Cape Air. On November 12, 2014, JetBlue opened the International Arrivals Concourse (T5i) at the terminal. The terminal was redesigned by Gensler and constructed by Turner Construction, and sits behind the preserved Eero Saarinen-designed terminal originally known as the TWA Flight Center, which is now connected to the new structure and is considered part of Terminal 5. The TWA Flight Center reopened as the TWA Hotel in May 2019. The active Terminal 5 building has 30 gates: 1 through 12 and 14 through 30, with gates 25 through 30 handling international flights that are not pre-cleared (gates 28–30 opened in November 2014). Aer Lingus opened an airport lounge in 2015. That same year, the terminal opened the T5 Rooftop & Wooftop Lounge, a rooftop space open to all passengers, located near Gate 28. In August 2016, Fraport USA was selected by JetBlue as the concessions developer to help attract and manage concessions tenants that align with JetBlue's vision for Terminal 5. During the summer of 2016, JetBlue renovated Terminal 5, completely overhauling the check-in lobby. 
British Airways planned to join its Oneworld partners in Terminal 8, however, and did not exercise its lease options on Terminal 7. The terminal is now operated by JFK Millennium Partners, a consortium including JetBlue, RXR Realty, and Vantage Airport Group, who will eventually demolish the current terminal. At the same time, a new Terminal 6 will begin to be built to serve as a direct replacement. In late 2020, United Airlines announced they would return to JFK in February 2021 after a 5-year hiatus. As of March 28, 2021, United operated transcontinental nonstop service from Terminal 7 to its west coast hubs in San Francisco and Los Angeles. On October 29, 2022, however, United suspended service to JFK once again. Terminal 8 Terminal 8 is a major Oneworld hub with American Airlines operating its hub here. In 1999, American Airlines began an eight-year program to build the largest passenger terminal at JFK, designed by DMJM Aviation to replace both Terminal 8 and Terminal 9. The new terminal was built in four phases, which involved the construction of a new midfield concourse and the demolition of old Terminals 8 and 9. It was built in stages between 2005 and its official opening in August 2007. American Airlines, the third-largest carrier at JFK, manages Terminal 8 and is the largest carrier at the terminal. Other Oneworld airlines that operate out of Terminal 8 include British Airways, Cathay Pacific, Finnair, Iberia, Japan Airlines, Qantas, Qatar Airways, and Royal Jordanian. Non-alliance carrier China Southern Airlines also uses the terminal. In 2019, it was announced that British Airways and Iberia would move into Terminal 8 preceding the demolition of Terminal 7 and that the terminal would be expanded and changed to accommodate more widebody aircraft that British Airways, Iberia and other Oneworld airlines regularly send to JFK. On January 7, 2020, construction began expanding and improving Terminal 8 with construction completed in 2022. This construction marked the first phase in the airport's expansion; the terminal had the same number of gates as before, plus four hardstands. British Airways began operating some flights out of Terminal 8 on November 17, 2022, while all flights moved from Terminal 7 on December 1, 2022. Iberia also moved to Terminal 8 on December 1, while Japan Airlines moved to the terminal on May 28, 2023. The terminal is twice the size of Madison Square Garden. It offers dozens of retail and food outlets, 84 ticket counters, 44 self-service kiosks, ten security checkpoint lanes, and a U.S. Customs and Border Protection facility that can process more than 1,600 people an hour. Terminal 8 has an annual capacity of 12.8M passengers. It has one American Airlines Admirals Club and three lounges for premium class passengers as well as frequent flyers (Greenwich, Soho, and Chelsea lounges). Terminal 8 has 31 gates: 14 gates in Concourse B (1–8, 10, 12, 14, 16, 18, and 20) and 17 gates in Concourse C (31–47). Passenger access to and from Concourse C is by a tunnel that includes moving walkways. Reconstruction On January 4, 2017, the office of then-New York governor Andrew Cuomo announced a plan to renovate most of the airport's existing infrastructure for $7 to $10 billion. The Airport Master Plan Advisory Panel had reported that JFK, ranked 59th out of the world's top 100 airports by Skytrax, was expected to experience severe capacity constraints from increased use. 
The airport was expected to serve about 75 million annual passengers in 2020 and 100 million by 2050, up from 60 million when the report was published. The panel made several recommendations, including enlarging the newer terminals; relocating older terminals; reconfiguring highway ramps and increasing the number of lanes on the Van Wyck Expressway; lengthening AirTrain JFK trainsets or connecting the line to the New York City transportation system; and rebuilding the Jamaica station with direct connections to the Long Island Rail Road and the New York City Subway. No start date has yet been proposed for the project; in July 2017, Cuomo's office began accepting proposals for master plans to renovate the airport. When all the construction is finished, the airport will have 149 total gates: 145 with jetways and four hardstands. Notably, previous plans included adding cars to AirTrain trainsets; widening connector ramps between the Van Wyck Expressway and Grand Central Parkway in Kew Gardens; and adding another lane in each direction to the Van Wyck, at a combined cost of $1.5 billion. It is unclear how many, if any, of those proposals are still being considered. New Terminal 1 In October 2018, Cuomo released details of a $13 billion plan to rebuild passenger facilities and approaches to JFK Airport. Two all-new international terminals would be built. One of the terminals would be a $7 billion, , 23-gate structure replacing Terminals 1 and 2 and the vacant space of Terminal 3. It will connect to Terminal 4, and it will be financed and built by a partnership between Munich Airport Group, Lufthansa, Air France, Korean Air, and Japan Airlines. All 23 gates are international gates; 22 are widebody gates (four of which can accommodate an Airbus A380) and one is a narrowbody gate. The plan would also require reconfiguring portions of the roadway network to accommodate the new terminal. On December 13, 2021, New York Governor Kathy Hochul gave a further update on the plans to build a new Terminal 1, which in its further developed form would cost US$9.5 billion. The new facility is inspired by the new Terminal B at LaGuardia Airport and, like that terminal, will feature New York City-inspired art. The New Terminal 1 began construction on September 8, 2022, and will open in phases, with the first 14 gates on its east side, along with the departures and arrivals hall, scheduled to open in 2026 on the site of the demolished Terminal 2. The current Terminal 1 will then be demolished, and in its place, the next five gates on the west side of the terminal will open in 2028, and the final four gates will open in 2030. An additional extension of the terminal on its west side with a further four gates (including an extra A380 gate) has been proposed in the event of excess traffic. Expanded Terminal 4 On February 11, 2020, Cuomo and the Port Authority, along with Delta Air Lines, announced a $3.8 billion plan to add sixteen domestic, regional gates to the 'A' side of Terminal 4, replacing Terminal 2. The main headhouse would have been expanded to accommodate additional passengers and open in 2022. The airport finished construction on a downsized plan in 2023, allowing the demolition of Terminal 2, the consolidation of flights for Delta, and the ability to build the new Terminal 1. An expanded roadway will be completed in 2025. Delta consolidated their operations into Terminal 4 in January 2023, along with opening 10 new gates in Terminal 4's Concourse A. 
An additional expansion to Concourse B was expected to be completed by the fall of 2023. New Terminal 6 Construction on a new Terminal 6 began in February 2023. The terminal was designed by Corgan and will have ten gates, nine of which will be wide-body gates. The terminal will be opened in multiple phases; the first phase is expected to be completed by 2026 and, , is projected to cost $4.2 billion. The full terminal is expected to open in 2028. The new terminal will connect to Terminal 5; Terminal 7 will be demolished after the new Terminal 6's first phase of construction is completed. The terminal will be built under a public–private partnership between the Port Authority and a consortium, known as JFK Millennium Partners, comprising JetBlue, RXR Realty, and Vantage Airport Group. Former terminals JFK Airport was originally built with ten terminals, compared to the five it has today. Ten terminals remained until the late 1990s, then nine remained until the early 2000s, followed by eight until 2011, seven until 2013 and six until 2023. Terminal 1 (1959–1995) The original Terminal 1 opened in November 1959 for Eastern Air Lines. It was designed by Chester L. Churchill. Eastern was the primary tenant of this terminal until its collapse on January 19, 1991. Shortly after Eastern's collapse, the terminal became vacant; it was finally demolished in 1995. It was located on the site of today's Terminal 1, which opened in 1998. Terminal 2 (1962–2023) Terminal 2 opened in November 1962 as the home of Northeast Airlines, Braniff International Airways, and Northwest Orient, and was last occupied by Delta Air Lines. The facility contained 11 jetbridge-equipped gates (C60–C70) and one mezzanine-level airline club, and it formerly housed several hardstands for smaller regional airliners. The terminal did not have a U.S. Customs and Border Protection processing facility and could not accept arriving international flights unless they had been subject to US customs preclearance. It was designed by the architectural firm White & Mariani. Delta moved to Terminal 2, swapping places with Braniff, following its merger with Northeast Airlines; Pan Am moved its domestic flights to this terminal in 1986. Upon the completion of Terminal 4, T2's gates were prefaced with the letter 'C', and airside shuttle buses provided passenger connectivity between the terminals. Before 2013, Terminal 2 hosted most of Delta's operations in conjunction with Terminal 3. However, the 2013–2015 expansion of Terminal 4 allowed the airline to consolidate most of its operations in the newer, larger facility, including international and transcontinental flights. In mid-2020, following drastic schedule reductions in the wake of the COVID-19 pandemic, Delta suspended all operations from Terminal 2; the terminal re-opened to flights in July 2021. Terminal 2 permanently closed for departures on January 10, 2023, and for arrivals on January 15, 2023. Terminal 2 was demolished to make room for the new Terminal 1. Terminal 3 (1960–2013) Terminal 3 opened as the Worldport on May 24, 1960, for Pan American World Airways (Pan Am); it was expanded after the introduction of the Boeing 747 in 1971. After Pan Am's demise in 1991, Delta Air Lines took over ownership of the terminal and was its only occupant until its closure on May 23, 2013. It had a connector to Terminal 2, Delta's other terminal, used mainly for domestic flights. Terminal 3 had 16 Jetway-equipped gates (1–10, 12, and 14–18), with two hardstand gates (Gate 11) and a helipad on Taxiway KK. 
A $1.2 billion project was completed in 2013, under which Terminal 4 was expanded, and Delta subsequently moved its T3 operations to T4. On May 23, 2013, the final departure from the terminal, Delta Air Lines Flight 268, a Boeing 747-400 to Tel Aviv Ben Gurion Airport, departed from Gate 6 at 23:25 local time. The terminal ceased operations on May 24, 2013, exactly fifty-three years after its opening. Demolition began soon after that and was completed by Summer 2014. The site where Terminal 3 used to stand is now used for aircraft parking by Delta Air Lines. There has been a major media outcry, particularly in other countries, over the demolition of the Worldport. Several online petitions requesting the restoration of the original 'flying saucer' gained popularity. International Arrivals Building The International Arrivals Building (IAB) was opened in December 1957 and was replaced with the new Terminal 4 in 2001. It was designed by SOM. TWA Flight Center The TWA Flight Center was opened in 1962 and closed in 2001 after its primary tenant, Trans World Airlines, went out of business; the terminal had seen increased capacity issues in the years prior. It was designed by renowned architect Eero Saarinen, with extensions designed by Roche-Dinkeloo opening in 1970. The TWA Flight Center was not demolished after closure, as it had been named a New York City designated landmark in 1994. Instead, it sat abandoned until it was incorporated into the current JetBlue Terminal 5. It was then converted into the Jet Age-themed TWA Hotel, which opened in 2019. Terminal 6 (1969–2011) Terminal 6 opened as the Sundrome on November 30, 1969, for National Airlines. National was the tenant of this terminal until it was fully acquired by Pan American World Airways (Pan Am) on January 7, 1980. Terminal 6 had 14 gates. It was designed by architect I.M. Pei. Trans World Airlines (TWA) then expanded into the terminal, referring to it as the TWA Terminal Annex, later called the TWA Domestic Terminal. It was eventually connected to the TWA Flight Center. Later, after TWA reduced flights at JFK, Terminal 6 was used by United Airlines (SFO and LAX transcontinental flights), ATA Airlines, a reincarnated Pan Am II, Carnival Air Lines, Vanguard Airlines, and America West Airlines. In 2000, JetBlue began service from Terminal 6, later opening a temporary complex in 2006 that increased its capacity by adding seven gates. Until 2008, JetBlue was the tenant of Terminal 6. It became vacant on October 22, 2008, when JetBlue moved to Terminal 5 and was finally demolished in 2011. The international arrivals annex of Terminal 5 now uses a portion of the site, and the rest of the site is used for aircraft parking by JetBlue, but will be occupied by the new Terminal 6, an annex to Terminal 5, planned to be fully opened by 2027. Terminal 8 (1960–2008) The original Terminal 8 opened in February 1960; its stained-glass façade was the largest at the time. It was always used by American Airlines, and, in later years, it was used by other Oneworld airlines that did not use Terminal 7. This terminal, along with Terminal 9, was demolished in 2008 and replaced with the current Terminal 8. Terminal 9 (1959–2008) Terminal 9 opened in October 1959 as the home of United Airlines and Delta Air Lines. Braniff International Airways moved over to Terminal 9 in 1972 after swapping terminals with Delta following Delta's acquisition of Northeast Airlines. It operated out of Terminal 9 until its collapse on May 12, 1982. 
United used Terminal 9 from its opening in 1959 until it vacated the terminal in 1991 and became a tenant at British Airways' Terminal 7. Northwest Airlines used Terminal 9 from 1986 to 1991. Terminal 9 became the home of American Airlines' domestic operations and American Eagle flights for the remainder of its life. This terminal, along with the original Terminal 8, was demolished in 2008 and replaced with the current Terminal 8. Tower Air terminal The Tower Air terminal, unlike other terminals at JFK Airport, sat outside the Central Terminals area in Building 213 in Cargo Area A. Originally used by Pan Am until the expansion of the Worldport (later Terminal 3), it was later used by Tower Air and TWA shuttle until the airline was acquired by American Airlines in 2001. Building 213 has not been used since 2000. Runways and taxiways The airport covers 5,200 acres or . Over of paved taxiways allow aircraft to move around the airfield. The standard width of these taxiways is , with heavy-duty shoulders and erosion control pavement on each side. The taxiways are generally of asphalt concrete composition thick. Painted markings, lighted signage, and embedded pavement lighting, including runway status lights, provide both position and directional information for taxiing aircraft. There are four runways (two pairs of parallel runways) surrounding the airport's central terminal area. Operational facilities Air navigation The air traffic control tower, designed by Pei Cobb Freed & Partners and constructed on the ramp-side of Terminal 4, began full FAA operations in October 1994. An Airport Surface Detection Equipment (ASDE) radar unit sits atop the tower. At the time of its completion, the JFK tower, at , was the world's tallest control tower. It was subsequently displaced from that position by towers at other airports in both the United States and overseas, including those at Hartsfield–Jackson Atlanta International Airport, currently the tallest tower at any U.S. airport, at and at KLIA2 in Kuala Lumpur, Malaysia, currently the world's tallest control tower at . A VOR-DME station, identified as JFK, is located on the airport property between runways 4R/22L and 4L/22R. Physical plant JFK is supplied with electricity by the Kennedy International Airport Power Plant, owned and operated by Calpine Corporation. The natural gas-fired electric cogeneration facility uses two General Electric LM6000 gas turbine engines to supply a total of 110 megawatts, which is purchased by the Port Authority for airport operations. Excess energy is also sold to the New York Independent System Operator. The facility was authorized in 1990, designed by RMJM, and first entered commercial service in February 1995. Heating and cooling for all of JFK's passenger terminals is provided by a co-located Central Heating and Refrigeration Plant (CHRP) in conjunction with a Thermal Distribution System (TDS) that entered service in August 1994. Waste heat from the power plant powers two heat recovery steam generators and a 25-megawatt steam turbine, which in turn run chillers to generate 28,000 tons of refrigeration, or heat exchangers to create 225 million Btu/hour. Aviation ground service Aircraft service facilities include seven aircraft hangars, an engine overhaul building, a aircraft fuel storage facility, and a truck garage. Fixed-base operation service for general aviation flights is provided by Modern Aviation, which possesses the airport's exclusive helipad. 
Other facilities The airport hosts an extensive array of administrative, government, and air cargo support buildings. In 2002, the New York metropolitan area accounted for 18 percent of import (and over 24 percent of all) air cargo volume in the nation. At that time, JFK itself was reported to have 4.5 million ft2 (418,064 m2) of warehouse space with another under construction. Three chapels, including Our Lady of the Skies Chapel, provide for the religious needs of airline passengers. In January 2017, the Ark at JFK Airport, a luxury terminal for pets, opened for $65 million. Ark was built ostensibly so that people who were transporting pets and other animals would be able to provide luxurious accommodations for these animals. At the time, it was supposed to be the only such facility in the U.S. In January 2018, Ark's owner sued the Port Authority for violating a clause that would have given Ark the exclusive rights to inspect all animals who arrive at JFK from other countries. In the lawsuit, the owner stated that Ark had incurred significant operational losses because many animals were instead being transported to a United States Department of Agriculture facility in Newburgh. Airport hotels Several hotels are adjacent to JFK Airport, including the Courtyard by Marriott and the Crowne Plaza. The former Ramada Plaza JFK Hotel is Building 144, and it was formerly the only on-site hotel at JFK Airport. It was previously a part of Forte Hotels and previously the Travelodge New York JFK. Due to its role in housing friends and relatives of aircraft crash victims in the 1990s and 2000s, the hotel became known as the "Heartbreak Hotel". In 2009 the PANYNJ stated in its preliminary 2010 budget that it was closing the hotel due to "declining aviation activity and a need for substantial renovation" and that it expected to save $1 million per month. The hotel closed on December 1, 2009. Almost 200 employees lost their jobs. On July 27, 2015, Governor Andrew Cuomo announced in a press conference that the TWA Flight Center building would be used by the TWA Hotel, a 505-room hotel with of conference, event, or meeting space. The new hotel is estimated to have cost $265 million. The hotel has a observation deck with an infinity pool. Groundbreaking for the hotel occurred on December 15, 2016, and it opened on May 15, 2019. Airlines and destinations Passenger Cargo When ranked by the value of shipments passing through it, JFK is the number three freight gateway in the United States (after the Port of Los Angeles and the Port of New York and New Jersey), and the number one international air freight gateway in the United States. Almost 21% of all U.S. international air freight by value and 9.6% by tonnage moved through JFK in 2008. The JFK air cargo complex is a Foreign Trade Zone, which legally lies outside the customs area of the United States. JFK is a major hub for air cargo between the United States and Europe. London, Brussels and Frankfurt are JFK's three top trade routes. The European airports are mostly a link in a global supply chain, however. The top destination markets for cargo flying out of JFK in 2003 were Tokyo, Seoul and London. Similarly, the top origin markets for imports at JFK were Seoul, Hong Kong, Taipei and London. 
20 cargo airlines operate out of JFK, among them: Air ACT, Air China Cargo, ABX Air, Asiana Cargo, Atlas Air, CAL Cargo Air Lines, Cargolux, Cathay Cargo, China Airlines, EVA Air Cargo, Emirates SkyCargo, Nippon Cargo Airlines, FedEx Express, DHL Aviation, Kalitta Air, Korean Air Cargo, Lufthansa Cargo, UPS Airlines, Southern Air, National Airlines, Icelandair Cargo, and, formerly, World Airways. Top 5 carriers together transported 33.1% of all revenue freight in 2005: American Airlines (10.9% of the total), FedEx Express (8.8%), Lufthansa Cargo (5.2%), Korean Air Cargo (4.9%), and China Airlines (3.8%). There are also some on-demand cargo charter services to JFK, operated by carriers such as Silk Way West Airlines. Most cargo and maintenance facilities at JFK are located north and west of the main terminal area. DHL, FedEx Express, Japan Airlines, Lufthansa, Nippon Cargo Airlines and United Airlines have cargo facilities at JFK. In 2000, Korean Air Cargo opened a new $102 million cargo terminal at JFK with total floor area of and capability of handling 200,000 tons annually. In 2007, American Airlines opened a new priority parcel service facility at their Terminal 8, featuring 30-minute drop-offs and pick-ups for priority parcel shipments within the US. Statistics Passenger numbers Top destinations Airline market share Other Information services In the immediate vicinity of the airport, parking and other information can be obtained by tuning to a highway advisory radio station at 1630 AM. A second station at 1700 AM provides information on traffic concerns for drivers leaving the airport. Kennedy Airport, along with the other Port Authority airports (LaGuardia and Newark), uses a uniform style of signage throughout the airport properties. Yellow signs direct passengers to airline gates, ticketing and other flight services; green signs direct passengers to ground transportation services and black signs lead to restrooms, telephones and other passenger amenities. In addition, the Port Authority operates "Welcome Centers" and taxi dispatch booths in each airline terminal, where staff provide customers with information on taxis, limousines, other ground transportation and hotels. Former New York City traffic reporter Bernie Wagenblast provides the voice for the airport's radio stations and the messages heard on board AirTrain JFK and in its stations. Notable staff Stephen Abraham, colloquially known as Kennedy Steve, was an air traffic controller at JFK between 1994 and 2017. Abraham was known for his distinct "informal" tone and controlling-style while handling ground traffic at the airport. Many of his interactions with pilots were recorded and featured on various social media platforms, including various YouTube channels. In 2017, Abraham was awarded the Dale Wright Award by the National Air Traffic Controllers Association (NATCA) for distinguished professionalism and exceptional career service to NATCA and the National Airspace System. In 2019, he was hired as Airside Operations and Ramp Manager at JFK's Terminal 1. Accidents and incidents
Jacobi symbol
Jacobi symbol for various k (along top) and n (along left side). Only are shown, since due to rule (2) below any other k can be reduced modulo n. Quadratic residues are highlighted in yellow — note that no entry with a Jacobi symbol of −1 is a quadratic residue, and if k is a quadratic residue modulo a coprime n, then , but not all entries with a Jacobi symbol of 1 (see the and rows) are quadratic residues. Notice also that when either n or k is a square, all values are nonnegative. The Jacobi symbol is a generalization of the Legendre symbol. Introduced by Jacobi in 1837, it is of theoretical interest in modular arithmetic and other branches of number theory, but its main use is in computational number theory, especially primality testing and integer factorization; these in turn are important in cryptography. Definition For any integer a and any positive odd integer n, the Jacobi symbol is defined as the product of the Legendre symbols corresponding to the prime factors of n: where is the prime factorization of n. The Legendre symbol is defined for all integers a and all odd primes p by Following the normal convention for the empty product, = 1. When the lower argument is an odd prime, the Jacobi symbol is equal to the Legendre symbol. Table of values The following is a table of values of Jacobi symbol with n ≤ 59, k ≤ 30, n odd. Properties The following facts, even the reciprocity laws, are straightforward deductions from the definition of the Jacobi symbol and the corresponding properties of the Legendre symbol. The Jacobi symbol is defined only when the upper argument ("numerator") is an integer and the lower argument ("denominator") is a positive odd integer. 1. If n is (an odd) prime, then the Jacobi symbol is equal to (and written the same as) the corresponding Legendre symbol. 2. If , then 3. If either the top or bottom argument is fixed, the Jacobi symbol is a completely multiplicative function in the remaining argument: 4. 5. The law of quadratic reciprocity: if m and n are odd positive coprime integers, then 6. and its supplements 7. , and 8. Combining properties 4 and 8 gives: 9. Like the Legendre symbol: If  = −1 then a is a quadratic nonresidue modulo n. If a is a quadratic residue modulo n and gcd(a,n) = 1, then  = 1. But, unlike the Legendre symbol: If  = 1 then a may or may not be a quadratic residue modulo n. This is because for a to be a quadratic residue modulo n, it has to be a quadratic residue modulo every prime factor of n. However, the Jacobi symbol equals one if, for example, a is a non-residue modulo exactly two of the prime factors of n. Although the Jacobi symbol cannot be uniformly interpreted in terms of squares and non-squares, it can be uniformly interpreted as the sign of a permutation by Zolotarev's lemma. The Jacobi symbol is a Dirichlet character to the modulus n. Calculating the Jacobi symbol The above formulas lead to an efficient algorithm for calculating the Jacobi symbol, analogous to the Euclidean algorithm for finding the gcd of two numbers. (This should not be surprising in light of rule 2.) Reduce the "numerator" modulo the "denominator" using rule 2. Extract any even "numerator" using rule 9. If the "numerator" is 1, rules 3 and 4 give a result of 1. If the "numerator" and "denominator" are not coprime, rule 3 gives a result of 0. Otherwise, the "numerator" and "denominator" are now odd positive coprime integers, so we can flip the symbol using rule 6, then return to step 1. In addition to the codes below, Riesel has it in Pascal. 
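As a brief illustration of the procedure before turning to the implementations, the symbol (19/45) can be evaluated without factoring 45 (a small example worked here for concreteness, not taken from the original text):

(19/45) = (45/19)        [flip; no sign change, since 45 ≡ 1 (mod 4)]
        = (7/19)         [reduce 45 modulo 19]
        = −(19/7)        [flip; sign changes, since 7 ≡ 19 ≡ 3 (mod 4)]
        = −(5/7)         [reduce 19 modulo 7]
        = −(7/5)         [flip; no sign change, since 5 ≡ 1 (mod 4)]
        = −(2/5)         [reduce 7 modulo 5]
        = −(−1)·(1/5)    [extract the factor 2; (2/5) = −1, since 5 ≡ 5 (mod 8)]
        = 1.

The result is consistent with 19 being a quadratic residue modulo 45, since 8² = 64 ≡ 19 (mod 45).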
Implementation in Lua

function jacobi(n, k)
  assert(k > 0 and k % 2 == 1)
  n = n % k
  local t = 1
  while n ~= 0 do
    while n % 2 == 0 do              -- extract factors of 2 from the "numerator"
      n = n / 2
      local r = k % 8
      if r == 3 or r == 5 then       -- (2/k) = -1 when k = 3 or 5 (mod 8)
        t = -t
      end
    end
    n, k = k, n                      -- flip the symbol (quadratic reciprocity)
    if n % 4 == 3 and k % 4 == 3 then
      t = -t                         -- sign changes only when both arguments are 3 (mod 4)
    end
    n = n % k
  end
  if k == 1 then
    return t
  else
    return 0                         -- the arguments were not coprime
  end
end

Implementation in C++

#include <cassert>

// a/n is represented as (a,n)
int jacobi(int a, int n) {
    assert(n > 0 && n % 2 == 1);
    // Step 1
    a = (a % n + n) % n; // Handle (a < 0)
    // Step 3
    int t = 0; // XOR of bits 1 and 2 determines sign of return value
    while (a != 0) {
        // Step 2
        while (a % 4 == 0)
            a /= 4;
        if (a % 2 == 0) {
            t ^= n; // Could be "^= n & 6"; we only care about bits 1 and 2
            a /= 2;
        }
        // Step 4
        t ^= a & n & 2; // Flip sign if a % 4 == n % 4 == 3
        int r = n % a;
        n = a;
        a = r;
    }
    if (n != 1)
        return 0;
    else if ((t ^ (t >> 1)) & 2)
        return -1;
    else
        return 1;
}

Example of calculations
The Legendre symbol is only defined for odd primes p. It obeys the same rules as the Jacobi symbol (i.e., reciprocity, the supplementary formulas, and multiplicativity of the "numerator").
Problem: Given that 9907 is prime, calculate .
Using the Legendre symbol
Using the Jacobi symbol
The difference between the two calculations is that when the Legendre symbol is used the "numerator" has to be factored into prime powers before the symbol is flipped. This makes the calculation using the Legendre symbol significantly slower than the one using the Jacobi symbol, as there is no known polynomial-time algorithm for factoring integers. In fact, this is why Jacobi introduced the symbol.
Primality testing
There is another way the Jacobi and Legendre symbols differ. If the Euler's criterion formula is used modulo a composite number, the result may or may not be the value of the Jacobi symbol, and in fact may not even be −1 or 1. So if it is unknown whether a number n is prime or composite, we can pick a random number a, calculate the Jacobi symbol and compare it with Euler's formula; if they differ modulo n, then n is composite; if they have the same residue modulo n for many different values of a, then n is "probably prime". This is the basis for the probabilistic Solovay–Strassen primality test and refinements such as the Baillie–PSW primality test and the Miller–Rabin primality test. As an indirect use, it is possible to use it as an error detection routine during the execution of the Lucas–Lehmer primality test which, even on modern computer hardware, can take weeks to complete when processing Mersenne numbers over (the largest known Mersenne prime as of October 2024). In nominal cases, the Jacobi symbol of the term is −1; this also holds for the final residue and hence can be used as a verification of probable validity. However, if an error occurs in the hardware, there is a 50% chance that the result will become 0 or 1 instead, and won't change with subsequent terms (unless another error occurs and changes it back to −1).
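To make the connection to the Solovay–Strassen test concrete, the sketch below reuses the jacobi() routine from the C++ implementation above. The modpow() helper and the function names are additions for illustration, and n is assumed to be odd and small enough that the products inside modpow() do not overflow 64-bit arithmetic; this is a minimal sketch rather than a production-ready test.

// Modular exponentiation by repeated squaring: returns (base^exp) mod m.
long long modpow(long long base, long long exp, long long m) {
    long long result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

// One round of the Solovay–Strassen test for an odd n > 2 and a witness 2 <= a <= n - 2.
// Returns false if n is proven composite, true if n passes this round ("probably prime").
bool solovay_strassen_round(int n, int a) {
    int j = jacobi(a, n);                         // Jacobi symbol (a/n)
    if (j == 0)
        return false;                             // a shares a factor with n, so n is composite
    long long euler = modpow(a, (n - 1) / 2, n);  // Euler's criterion: a^((n-1)/2) mod n
    long long expected = (j == 1) ? 1 : n - 1;    // -1 is represented as n - 1 modulo n
    return euler == expected;                     // any mismatch proves n composite
}

Each passed round with an independently chosen witness a at most halves the probability that a composite n is accepted, which is why the test is repeated for many values of a.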
Nautilus
Nautilus (, ) are the ancient pelagic marine mollusc species of the cephalopod family Nautilidae. This is the sole extant family of the superfamily Nautilaceae and the suborder Nautilina. It comprises nine living species in two genera, the type of which is the genus Nautilus. Though it more specifically refers to the species Nautilus pompilius, the name chambered nautilus is also used for any of the Nautilidae. All are protected under CITES Appendix II. Depending on species, adult shell diameter is between . Nautilidae, both extant and extinct, are characterized by involute or more or less convolute shells that are generally smooth, with compressed or depressed whorl sections, straight to sinuous sutures, and a tubular, generally central siphuncle. Having survived relatively unchanged for hundreds of millions of years, nautiluses represent the only living members of the subclass Nautiloidea, and are often considered "living fossils". The word nautilus is derived from the Greek word nautílos "sailor", it originally referred to a type of octopus of the genus Argonauta, also known as 'paper nautilus', which were thought to use two of their arms as sails. Anatomy Tentacles The arm crown of modern nautilids (genera Nautilus and Allonautilus) is very distinct in comparison to coleoids. Unlike the ten-armed Decabrachia or the eight-armed Octopodiformes, nautilus may possess any number of tentacles (cirri) from 50 to over 90 tentacles depending on the sex and individual. These tentacles are classified into three distinct categories: ocular, digital, and labial (buccal). There are two sets of ocular tentacles: one set in front of the eye (pre-ocular) and one set behind the eye (post-ocular). The digital and labial tentacles are arrayed circularly around the mouth, with the digital tentacles forming the outermost ring and the labial tentacles in between the digital tentacles and the mouth. There are 19 pairs of digital tentacles that, together with the ocular tentacles, make up the 42 appendages that are visible when observing the animal (not counting the modified tentacles that form the hood). The labial tentacles are generally not visible, being smaller than the digital tentacles, and more variable both in number and in shape. Males modify three of their labial tentacles into the spadix, which delivers spermatophores into the female during copulation. The tentacle is composed of two distinct structures: the first structure, a fleshy sheath that contains the second structure: an extendable cirrus (plural: cirri). The sheaths of the digital tentacles are fused at their base into a single mass referred to as the cephalic sheath. The digital cirri can be fully withdrawn into the sheath and are highly flexible, capable of extending just over double their fully retracted length and show a high degree of allowable bendability and torsion. Despite not having suckers, the digital tentacles show strong adhesive capabilities. Adhesion is achieved through the secretion of a neutral (rather than acidic) mucopolysaccharide from secretory cells in the ridges of the digital cirri. Release is triggered through contraction of the tentacle musculature rather than the secretion of a chemical solvent, similar to the adhesion/release system in Euprymna, though it is unclear whether these adhesives are homologous. The ocular tentacles show no adhesive capability but operate as sensory organs. 
Both the ocular tentacles and the eight lateral digital tentacles show chemoreceptive abilities; the preocular tentacles detect distant odor and the lateral digital tentacles detect nearby odor. Digestive system The radula is wide and distinctively has nine teeth. The mouth consists of a parrot-like beak made up of two interlocking jaws capable of ripping the animal's food— mostly crustaceans— from the rocks to which they are attached. Males can be superficially differentiated from females by examining the arrangement of tentacles around the buccal cone: males have a spadix organ (shaped like a spike or shovel) located on the left side of the cone making the cone look irregular, whereas the buccal cone of the female is bilaterally symmetrical. The crop is the largest portion of the digestive tract, and is highly extensible. From the crop, food passes to the small muscular stomach for crushing, and then goes past a digestive caecum before entering the relatively brief intestine. Circulatory system Like all cephalopods, the blood of the nautilus contains hemocyanin, which is blue in its oxygenated state. There are two pairs of gills which are the only remnants of the ancestral metamerism to be visible in extant cephalopods. Oxygenated blood arrives at the heart through four ventricles and flows out to the animal's organs through distinct aortas but returns through veins which are too small and varied to be specifically described. The one exception to this is the vena cava, a single large vein running along the underside of the crop into which nearly all other vessels containing deoxygenated blood empty. All blood passes through one of the four sets of filtering organs (composed of one pericardial appendage and two renal appendages) upon leaving the vena cava and before arriving at the gills for re-oxygenation. Blood waste is emptied through a series of corresponding pores into the pallial cavity. Nervous system The central component of the nautilus nervous system is the oesophageal nerve ring which is a collection of ganglia, commissures, and connectives that together form a ring around the animal's oesophagus. From this ring extend all of the nerves forward to the mouth, tentacles, and funnel; laterally to the eyes and rhinophores; and posteriorly to the remaining organs. The nerve ring does not constitute what is typically considered a cephalopod "brain": the upper portion of the nerve ring lacks differentiated lobes, and most of the nervous tissue appears to focus on finding and consuming food (i.e., it lacks a "higher learning" center). Nautili also tend to have rather short memory spans, and the nerve ring is not protected by any form of brain case. Shell Nautili are the sole living cephalopods whose bony body structure is externalized as a planispiral shell. The animal can withdraw completely into its shell and close the opening with a leathery hood formed from two specially folded tentacles. The shell is coiled, aragonitic, nacreous and pressure-resistant, imploding at a depth of about . The nautilus shell is composed of two layers: a matte white outer layer with dark orange stripes, and a striking white iridescent inner layer. The innermost portion of the shell is a pearlescent blue-gray. The osmeña pearl, contrarily to its name, is not a pearl, but a jewellery product derived from this part of the shell. Internally, the shell divides into camerae (chambers), the chambered section being called the phragmocone. 
The divisions are defined by septa, each of which is pierced in the middle by a duct, the siphuncle. As the nautilus matures, it creates new, larger camerae and moves its growing body into the larger space, sealing the vacated chamber with a new septum. The camerae increase in number from around 4 at the moment of hatching to 30 or more in adults. The shell coloration also keeps the animal cryptic in the water. When seen from above, the shell is darker in color and marked with irregular stripes, which helps it blend into the dark water below. The underside is almost completely white, making the animal indistinguishable from brighter waters near the surface. This mode of camouflage is called countershading. The nautilus shell presents one of the finest natural examples of a logarithmic spiral, although it is not a golden spiral. The use of nautilus shells in art and literature is covered at nautilus shell. Size N. pompilius is the largest species in the genus. One form from Indonesia and northern Australia, once called N. repertus, may reach in diameter. However, most nautilus species never exceed . Nautilus macromphalus is the smallest species, usually measuring only . A dwarf population from the Sulu Sea (Nautilus pompilius suluensis) is even smaller, with a mean shell diameter of . Physiology Buoyancy and movement To swim, the nautilus draws water into and out of the living chamber with its hyponome, which uses jet propulsion. This mode of propulsion is generally considered inefficient compared to propulsion with fins or undulatory locomotion, however, the nautilus has been found to be particularly efficient compared to other jet-propelled marine animals like squid and jellyfish, or even salmon at low speeds. It is thought that this is related to the use of asymmetrical contractile cycles and may be an adaptation to mitigate metabolic demands and protect against hypoxia when foraging at depth. While water is inside the chamber, the siphuncle extracts salt from it and diffuses it into the blood. The animal adjusts its buoyancy only in long term density changes by osmosis, either removing liquid from its chambers or allowing water from the blood in the siphuncle to slowly refill the chambers. This is done in response to sudden changes in buoyancy that can occur with predatory attacks of fish, which can break off parts of the shell. This limits nautiluses in that they cannot operate under the extreme hydrostatic pressures found at depths greater than approximately , and in fact implode at about that depth, causing instant death. The gas also contained in the chambers is slightly below atmospheric pressure at sea level. The maximum depth at which they can regulate buoyancy by osmotic removal of chamber liquid is not known. The nautilus has the extremely rare ability to withstand being brought to the surface from its deep natural habitat without suffering any apparent damage from the experience. Whereas fish or crustaceans brought up from such depths inevitably arrive dead, a nautilus will be unfazed despite the pressure change of as much as . The exact reasons for this ability, which is thought to be coincidental rather than specifically functional, are not known, though the perforated structure of the animal's vena cava is thought to play an important role. Senses Unlike many other cephalopods, nautiluses do not have what many consider to be good vision; their eye structure is highly developed but lacks a solid lens. 
Whereas a sealed lens allows for the formation of highly focused and clear, detailed surrounding imagery, nautiluses have a simple pinhole eye open to the environment which only allows for the creation of correspondingly simple imagery. Instead of vision, the animal is thought to use olfaction (smell) as the primary sense for foraging and for locating and identifying potential mates. The "ear" of the nautilus consists of structures called otocysts located immediately behind the pedal ganglia near the nerve ring. They are oval structures densely packed with elliptical calcium carbonate crystals. Brain and intelligence Nautiluses are much closer to the first cephalopods that appeared about 500 million years ago than the early modern cephalopods that appeared maybe 100 million years later (ammonoids and coleoids). They have a seemingly simple brain, not the large complex brains of octopus, cuttlefish and squid, and had long been assumed to lack intelligence. But the cephalopod nervous system is quite different from that of other animals, and recent experiments have shown not only memory, but a changing response to the same event over time. In a study in 2008, a group of nautiluses (N. pompilius) were given food as a bright blue light flashed until they began to associate the light with food, extending their tentacles every time the blue light was flashed. The blue light was again flashed without the food 3 minutes, 30 minutes, 1 hour, 6 hours, 12 hours, and 24 hours later. The nautiluses continued to respond excitedly to the blue light for up to 30 minutes after the experiment. An hour later they showed no reaction to the blue light. However, between 6 and 12 hours after the training, they again responded to the blue light, but more tentatively. The researchers concluded that nautiluses had memory capabilities similar to the "short-term" and "long-term memories" of the more advanced cephalopods, despite having different brain structures. However, the long-term memory capability of nautiluses was much shorter than that of other cephalopods. The nautiluses completely forgot the earlier training 24 hours later, in contrast to octopuses, for example, which can remember conditioning for weeks afterwards. However, this may be simply the result of the conditioning procedure being suboptimal for sustaining long-term memories in nautiluses. Nevertheless, the study showed that scientists had previously underestimated the memory capabilities of nautiluses. Reproduction and lifespan Nautiluses reproduce by laying eggs. Gravid females attach the fertilized eggs, either singly or in small batches, to rocks in warmer waters (21–25 Celsius), whereupon the eggs take eight to twelve months to develop until the juveniles hatch. Females spawn once per year and regenerate their gonads, making nautiluses the only cephalopods to present iteroparity or polycyclic spawning. Nautiluses are sexually dimorphic, in that males have four tentacles modified into an organ, called the "spadix", which transfers sperm into the female's mantle during mating. At sexual maturity, the male shell becomes slightly larger than the female's. Males have been found to greatly outnumber females in practically all published studies, accounting for 60 to 94% of all recorded individuals at different sites. The lifespan of nautiluses may exceed 20 years, which is exceptionally lengthy for a cephalopod, many of whom live less than three even in captivity and under ideal living conditions. 
However, nautiluses typically do not reach sexual maturity until they are about 15 years old, limiting their reproductive lifespan to often less than five years. Nautilus male has a reproductive organ named Van der Hoeven's organ. Nautilus female has two reproductive organs whose functions are unknown, the Organ of Valenciennes and Owen's laminated organ. Ecology Range and habitat Nautiluses are found only in the Indo-Pacific, from 30° N to 30° S latitude and 90° E to 175° E longitude. They inhabit the deep slopes of coral reefs. Nautiluses usually inhabit depths of several hundred metres. It has long been believed that nautiluses rise at night to feed, mate, and lay eggs, but it appears that, in at least some populations, the vertical movement patterns of these animals are far more complex. The greatest depth at which a nautilus has been sighted is (N. pompilius). Implosion depth for nautilus shells is thought to be around . Only in New Caledonia, the Loyalty Islands, and Vanuatu can nautiluses be observed in very shallow water, at depths of as little as . This is due to the cooler surface waters found in these southern hemisphere habitats as compared to the many equatorial habitats of other nautilus populations – these usually being restricted to depths greater than . Nautiluses generally avoid water temperatures above . Diet Nautiluses are scavengers and opportunistic predators. They eat lobster molts, hermit crabs, and carrion of any kind. Evolutionary history Fossil records indicate that nautiloids have experienced minimal morphological changes over the past 500 million years. Many were initially straight-shelled, as in the extinct genus Lituites. They developed in the Late Cambrian period and became a significant group of sea predators during the Ordovician period. Certain species reached over in size. The other cephalopod subclass, Coleoidea, diverged from the nautiloids long ago and the nautilus has remained relatively unchanged since. Nautiloids were much more extensive and varied 200 million years ago. The ancestors of all Coleoidea (shell-less Cephalopods) once possessed shells, and many early cephalopod species are only known from shell remains. Following the K-Pg extinction event most nautiloid species went extinct, while members of Coleoidea managed to survive. Following the mass extinction, the nautilus became the only extant species of nautiloids. The family Nautilidae has its origin in the Trigonocerataceae (Centroceratina), specifically in the Syringonautilidae of the Late Triassic and continues to this day with Nautilus, the type genus, and its close relative, Allonautilus. Fossil genera The fossil record of Nautilidae begins with Cenoceras in the Late Triassic, a highly varied genus that makes up the Jurassic Cenoceras complex. Cenoceras is evolute to involute, and globular to lentincular; with a suture that generally has a shallow ventral and lateral lobe and a siphuncle that is variable in position but never extremely ventral or dorsal. Cenoceras is not found above the Middle Jurassic and is followed by the Upper Jurassic-Miocene Eutrephoceras. Eutrephoceras is generally subglobular, broadly rounded laterally and ventrally, with a small to occluded umbilicus, broadly rounded hyponomic sinus, only slightly sinuous sutures, and a small siphuncle that is variable in position. Next to appear is the Lower Cretaceous Strionautilus from India and the European ex-USSR, named by Shimankiy in 1951. Strionautilus is compressed, involute, with fine longitudinal striations. 
Whorl sections are subrectangular, sutures sinuous, the siphuncle subcentral. Also from the Cretaceous is Pseudocenoceras, named by Spath in 1927. Pseudocenoceras is compressed, smooth, with subrectangular whorl sections, flattened venter, and a deep umbilicus. The suture crosses the venter essentially straight and has a broad, shallow, lateral lobe. The siphuncle is small and subcentral. Pseudocenoceras is found in the Crimea and in Libya. Carinonautilus is a genus from the Upper Cretaceous of India, named by Spengler in 1919. Carinonautilus is a very involute form with high whorl section and flanks that converge on a narrow venter that bears a prominent rounded keel. The umbilicus is small and shallow, the suture only slightly sinuous. The siphuncle is unknown. Obinautilus has also been placed in Nautilidae by some authorities, though it may instead be an argonautid octopus. Taxonomy The family Nautilidae contains up to nine extant species and several extinct species: Genus Allonautilus A. perforatus A. scrobiculatus Genus Nautilus †N. altifrons †N. balcombensis N. belauensis †N. butonensis †N. campbelli †N. cookanus †N. geelongensis †N. javanus N. macromphalus N. pompilius (type) N. p. pompilius N. p. suluensis †N. praepompilius Nautilus samoaensis Barord et al., 2023 – Samoa N. stenomphalus †Nautilus taiwanus Huang, 2002– Taiwan Nautilus vanuatuensis Barord et al., 2023 – Vanuatu Nautilus vitiensis Barord et al., 2023 – Fiji Genetic data collected in 2011 pointed to there being only three extant species: A. scrobiculatus, N. macromphalus, and N. pompilius, with N. belauensis and N. stenomphalus both subsumed under N. pompilius, possibly as subspecies, though this was prior to the description of three additional species (samoaensis, vanuatuensis and vitiensis). Dubious or uncertain taxa The following taxa associated with the family Nautilidae are of uncertain taxonomic status: Conservation status and human use Nautilus are collected or fished for sale as live animals or to carve the shells for souvenirs and collectibles, not for just the shape of their shells, but also the nacreous inner shell layer, which is used as a pearl substitute. In Samoa, nautilus shells decorate the forehead band of a traditional headdress called tuiga. Nautilus shells were popular items in the Renaissance and Baroque cabinet of curiosities and were often mounted by goldsmiths on a thin stem to make extravagant nautilus shell cups. The low fecundity, late maturity, long gestation period and long life-span of nautiluses suggest that these species are vulnerable to overexploitation and demand for the ornamental shell is causing population declines. The threats from trade in these shells has led to countries such as Indonesia legally protecting the chambered nautilus with fines of up to US$8,500 and/or 5 years in prison for trading in this species. Despite their legal protection, these shells were reported to be openly sold at tourist areas in Bali as of 2014. The continued trade of these animals has led to a call for increased protection and in 2016 all species in Family Nautilidae were added to CITES Appendix II, regulating international trade. In human culture Palauans see nautili () as a symbol of vulnerable or fragile character from a belief that they easily die even from slight bumps on ocean rocks; hence someone who gets quickly angry after being pranked is compared to one ().
Isoprene
Isoprene, or 2-methyl-1,3-butadiene, is a common volatile organic compound with the formula CH2=C(CH3)−CH=CH2. In its pure form it is a colorless volatile liquid. It is produced by many plants and animals (including humans) and its polymers are the main component of natural rubber. History and etymology C. G. Williams named the compound in 1860 after obtaining it from the pyrolysis of natural rubber. He correctly deduced the mass shares of carbon and hydrogen (but, because the modern atomic weight of carbon had not yet been adopted at the Karlsruhe Congress, he arrived at an incorrect formula, C10H8). He did not specify the reasons for the name, but it is hypothesized that it came from "propylene", with which isoprene shares some physical and chemical properties. The recombination of isoprene into a rubber-like substance was first observed in 1879, and William A. Tilden identified its structure five years later. Natural occurrences Isoprene is produced and emitted by many species of trees (major producers are oaks, poplars, eucalyptus, and some legumes). Yearly production of isoprene emissions by vegetation is around 600 million metric tons, half from tropical broadleaf trees and the remainder primarily from shrubs. This is about equivalent to methane emissions and accounts for around one-third of all hydrocarbons released into the atmosphere. In deciduous forests, isoprene makes up approximately 80% of hydrocarbon emissions. While their contribution is small compared to trees, microscopic and macroscopic algae also produce isoprene. Plants Isoprene is made through the methyl-erythritol 4-phosphate pathway (MEP pathway, also called the non-mevalonate pathway) in the chloroplasts of plants. One of the two end-products of the MEP pathway, dimethylallyl pyrophosphate (DMAPP), is cleaved by the enzyme isoprene synthase to form isoprene and diphosphate. Therefore, inhibitors that block the MEP pathway, such as fosmidomycin, also block isoprene formation. Isoprene emission increases dramatically with temperature and reaches a maximum at around 40 °C. This has led to the hypothesis that isoprene may protect plants against heat stress (the thermotolerance hypothesis, see below). Emission of isoprene is also observed in some bacteria, where it is thought to come from non-enzymatic degradation of DMAPP. Global emission of isoprene by plants is estimated at 350 million tons per year. Regulation Isoprene emission in plants is controlled both by the availability of the substrate (DMAPP) and by enzyme (isoprene synthase) activity. In particular, the light, CO2 and O2 dependencies of isoprene emission are controlled by substrate availability, whereas the temperature dependency of isoprene emission is regulated both by substrate level and by enzyme activity. Humans and other organisms Isoprene is the most abundant hydrocarbon measurable in the breath of humans. The estimated production rate of isoprene in the human body is 0.15 μmol/(kg·h), equivalent to approximately 17 mg/day for a person weighing 70 kg. Human breath isoprene originates from lipolytic cholesterol metabolism within skeletal-muscle peroxisomes, and the IDI2 gene acts as the production determinant. Due to the absence of the IDI2 gene, animals such as pigs and bottlenose dolphins do not exhale isoprene. Isoprene is common in low concentrations in many foods. Many species of soil and marine bacteria, such as Actinomycetota, are capable of degrading isoprene and using it as a fuel source. Biological roles Isoprene emission appears to be a mechanism that trees use to combat abiotic stresses. 
In particular, isoprene has been shown to protect against moderate heat stress (around 40 °C). It may also protect plants against large fluctuations in leaf temperature. Isoprene is incorporated into and helps stabilize cell membranes in response to heat stress. Isoprene also confers resistance to reactive oxygen species. The amount of isoprene released from isoprene-emitting vegetation depends on leaf mass, leaf area, light (particularly photosynthetic photon flux density, or PPFD) and leaf temperature. Thus, during the night, little isoprene is emitted from tree leaves, whereas daytime emissions are expected to be substantial during hot and sunny days, up to 25 μg/(g dry-leaf-weight)/hour in many oak species. Isoprenoids The isoprene skeleton can be found in naturally occurring compounds called terpenes and terpenoid (oxygenated terpenes), collectively called isoprenoids. These compounds do not arise from isoprene itself. Instead, the precursor to isoprene units in biological systems is dimethylallyl pyrophosphate (DMAPP) and its isomer isopentenyl pyrophosphate (IPP). The plural 'isoprenes' is sometimes used to refer to terpenes in general. Examples of isoprenoids include carotene, phytol, retinol (vitamin A), tocopherol (vitamin E), dolichols, and squalene. Heme A has an isoprenoid tail, and lanosterol, the sterol precursor in animals, is derived from squalene and hence from isoprene. The functional isoprene units in biological systems are dimethylallyl pyrophosphate (DMAPP) and its isomer isopentenyl pyrophosphate (IPP), which are used in the biosynthesis of naturally occurring isoprenoids such as carotenoids, quinones, lanosterol derivatives (e.g. steroids) and the prenyl chains of certain compounds (e.g. phytol chain of chlorophyll). Isoprenes are used in the cell membrane monolayer of many Archaea, filling the space between the diglycerol tetraether head groups. This is thought to add structural resistance to harsh environments in which many Archaea are found. Similarly, natural rubber is composed of linear polyisoprene chains of very high molecular weight and other natural molecules. Industrial production Isoprene is most readily available industrially as a byproduct of the thermal cracking of petroleum naphtha or oil, as a side product in the production of ethylene. Where thermal cracking of oil is less common, isoprene can be produced by dehydrogenation of isopentane. Isoprene can be synthesized in two steps from isobutylene, starting with its ene reaction with formaldehyde to give isopentenol, which can be dehydrated to isoprene: Where cheap acetylene is produced from coal-derived calcium carbide, it may be combined with acetone to make 3-methylbutynol which is then hydrogenated and dehydrated to isoprene. About 800,000 metric tons are produced annually. About 95% of isoprene production is used to produce cis-1,4-polyisoprene—a synthetic version of natural rubber. Natural rubber consists mainly of poly-cis-isoprene with a molecular mass of 100,000 to 1,000,000 g/mol. Typically natural rubber contains a few percent of other materials, such as proteins, fatty acids, resins, and inorganic materials. Some natural rubber sources, called gutta percha, are composed of trans-1,4-polyisoprene, a structural isomer that has similar, but not identical, properties.
Grinding (abrasive cutting)
Grinding is a type of abrasive machining process that uses a grinding wheel as the cutting tool. A wide variety of machines are used for grinding, best classified as portable or stationary: portable power tools such as angle grinders, die grinders and cut-off saws; stationary power tools such as bench grinders and cut-off saws; and stationary hydro- or hand-powered sharpening stones. Grinding practice is a large and diverse area of manufacturing and toolmaking. It can produce very fine finishes and very accurate dimensions; yet in mass production contexts, it can also rough out large volumes of metal quite rapidly. It is usually better suited to the machining of very hard materials than is "regular" machining (that is, cutting larger chips with cutting tools such as tool bits or milling cutters), and until recent decades it was the only practical way to machine such materials as hardened steels. Compared to "regular" machining, it is usually better suited to taking very shallow cuts, such as reducing a shaft's diameter by half a thousandth of an inch or 12.7 μm. Grinding is a subset of cutting, as grinding is a true metal-cutting process. Each grain of abrasive functions as a microscopic single-point cutting edge (although of high negative rake angle), and shears a tiny chip that is analogous to what would conventionally be called a "cut" chip (turning, milling, drilling, tapping, etc.). However, among people who work in the machining fields, the term cutting is often understood to refer to the macroscopic cutting operations, and grinding is often mentally categorized as a "separate" process. This is why the terms are usually used separately in shop-floor practice. Lapping and sanding are subsets of grinding. Processes The choice of grinding operation is determined by the size, shape and features of the workpiece and by the desired production rate. Creep-feed grinding Creep-feed grinding (CFG) is a grinding process that was invented in Germany in the late 1950s by Edmund and Gerhard Lang. Normal grinding is used primarily to finish surfaces, but CFG is used for high rates of material removal, competing with milling and turning as a manufacturing process choice. CFG uses a grinding depth of up to 6 mm (0.236 in) and a low workpiece speed. A softer-grade resin bond is used to keep workpiece temperature low and to achieve an improved surface finish of up to 1.6 μm Rmax. CFG can take 117 s to remove of material. Precision grinding would take more than 200 s to do the same. CFG has the disadvantage of a wheel that is constantly degrading, requires high spindle power, and is limited in the length of part it can machine. To address the problem of wheel sharpness, continuous-dress creep-feed grinding (CDCF) was developed in the 1970s. In the CDCF process the wheel is dressed constantly during machining, which keeps it in a state of specified sharpness. It takes only 17 s to remove of material, a huge gain in productivity. 38 hp (28 kW) of spindle power is required, with low-to-conventional spindle speeds. The limit on part length was also eliminated. High-efficiency deep grinding (HEDG) is another type of grinding. This process uses plated superabrasive wheels, which never need dressing and last longer than other wheels. This reduces capital equipment investment costs. HEDG can be used on long part lengths and removes material at a rate of in 83 s. HEDG requires high spindle power and high spindle speeds. 
Peel grinding, patented under the name of Quickpoint in 1985 by Erwin Junker Maschinenfabrik, GmbH in Nordrach, Germany, uses a thin superabrasive grinding disk oriented almost parallel to a cylindrical workpiece and operates somewhat like a lathe turning tool. Ultra-high speed grinding (UHSG) can run at speeds higher than 40,000 fpm (200 m/s), taking 41 s to remove of material, but is still in the research-and-development (R&D) stage. It also requires high spindle power and high spindle speeds. Cylindrical grinding Cylindrical grinding (also called center-type grinding) is used to grind the cylindrical surfaces and shoulders of the workpiece. The workpiece is mounted on centers and rotated by a device known as a lathe dog or center driver. The abrasive wheel and the workpiece are rotated by separate motors and at different speeds. The table can be adjusted to produce tapers. The wheel head can be swiveled. The five types of cylindrical grinding are: outside diameter (OD) grinding, inside diameter (ID) grinding, plunge grinding, creep feed grinding, and centerless grinding. A cylindrical grinder has a grinding (abrasive) wheel, two centers that hold the workpiece, and a chuck, grinding dog, or other mechanism to drive the work. Most cylindrical grinding machines include a swivel to allow the forming of tapered pieces. The wheel and workpiece move parallel to one another in both the radial and longitudinal directions. The abrasive wheel can have many shapes. Standard disk-shaped wheels can be used to create a tapered or straight workpiece geometry, while formed wheels are used to create more elaborate shapes and produces less vibration than using a regular disk-shaped wheel. Tolerances for cylindrical grinding are held within ± for diameter and ± for roundness. Precision work can reach tolerances as high as ± for diameter and ± for roundness. Surface finishes can range from to , with typical finishes ranging from . Surface grinding Surface grinding uses a rotating abrasive wheel to remove material, creating a flat surface. The tolerances that are normally achieved with surface grinding are ± for grinding a flat material and ± for a parallel surface. The surface grinder is composed of an abrasive wheel, a workholding device known as a chuck, either electromagnetic or vacuum, and a reciprocating table. Grinding is commonly used on cast iron and various types of steel. These materials lend themselves to grinding because they can be held by the magnetic chuck commonly used on grinding machines and do not melt into the cutting wheel, which clogs it and prevents it from cutting. Materials that are less commonly ground are aluminum, stainless steel, brass, and plastics. These all tend to clog the cutting wheel more than steel and cast iron, but can be ground with special techniques. Others Centerless grinding: the workpiece is supported by a blade instead of by centers or chucks. Two wheels are used; the larger one is used to grind the surface of the workpiece, and the smaller wheel is used to regulate the axial movement of the workpiece. Types of centerless grinding include through-feed grinding, in-feed/plunge grinding, and internal centerless grinding. Electrochemical grinding: a positively-charged workpiece in a conductive fluid is eroded by a negatively-charged grinding wheel. The pieces from the workpiece are dissolved into the conductive fluid. 
Electrolytic in-process dressing (ELID) grinding: in this ultra-precision grinding technology, the grinding wheel is dressed electrochemically and in-process to maintain the accuracy of the grinding. An ELID cell consists of a metal-bonded grinding wheel, a cathode electrode, a pulsed DC power supply, and electrolyte. The wheel is connected to the positive terminal of the DC power supply through a carbon brush, and the electrode is connected to the negative pole of the power supply. Usually, alkaline liquids are used as both electrolytes and coolant for grinding. A nozzle is used to inject the electrolyte into the gap between wheel and electrode. The gap is usually maintained to be approximately 0.1 mm to 0.3 mm. During the grinding operation one side of the wheel takes part in the grinding operation whereas the other side of the wheel is being dressed by an electrochemical reaction. The dissolution of the metallic bond material is caused by the dressing which in turns results the continuous protrusion of new sharp grits. is a specialized type of cylindrical grinding where the grinding wheel has the exact shape of the final product. The grinding wheel does not traverse the workpiece. Internal grinding is used to grind the internal diameter of the workpiece. Tapered holes can be ground with the use of internal grinders that can swivel on the horizontal. Pre-grinding: when a new tool has been built and has been heat-treated, it is pre-ground before welding or hardfacing commences. This usually involves grinding the outside diameter (OD) slightly higher than the finish grind OD to ensure the correct finish size. Grinding wheel A grinding wheel is an expendable wheel used for various grinding and abrasive machining operations. It is generally made from a matrix of coarse abrasive particles pressed and bonded together to form a solid, circular shape; various profiles and cross-sections are available depending on the intended usage for the wheel. Grinding wheels may also be made from a solid steel or aluminium disc with particles bonded to the surface. Lubrication The use of fluids in a grinding process is often necessary to cool and lubricate the wheel and workpiece as well as remove the chips produced in the grinding process. The most common grinding fluids are water-soluble chemical fluids, water-soluble oils, synthetic oils, and petroleum-based oils. It is imperative that the fluid be applied directly to the cutting area to prevent the fluid being blown away from the piece due to rapid rotation of the wheel. The workpiece Workholding methods The workpiece is manually clamped to a lathe dog, powered by the faceplate, that holds the piece in between two centers and rotates the piece. The piece and the grinding wheel rotate in opposite directions and small bits of the piece are removed as it passes along the grinding wheel. In some instances special drive centers may be used to allow the edges to be ground. The workholding method affects the production time as it changes set up times. Workpiece materials Typical workpiece materials include aluminum, brass, plastics, cast iron, mild steel, and stainless steel. Aluminum, brass, and plastics can have poor-to-fair machinability characteristics for cylindrical grinding. Cast Iron and mild steel have very good characteristics for cylindrical grinding. Stainless steel is very difficult to grind due to its toughness and ability to work harden, but can be worked with the right grade of grinding wheels. 
Workpiece geometry The final shape of a workpiece is the mirror image of the grinding wheel, with cylindrical wheels creating cylindrical pieces and formed wheels creating formed pieces. Typical sizes on workpieces range from 0.75 in to 20 in (18 mm to 1 m) and 0.80 in to 75 in (2 cm to 4 m) in length, although pieces from 0.25 in to 60 in (6 mm to 1.5 m) in diameter and 0.30 in to 100 in (8 mm to 2.5 m) in length can be ground. The resulting shapes can be straight cylinders, straight-edged conical shapes, or even crankshafts for engines that experience relatively low torque. Effects on workpiece materials Chemical property changes include an increased susceptibility to corrosion because of high surface stress. Mechanical properties will change due to stresses put on the part during finishing. High grinding temperatures may cause a thin martensitic layer to form on the part, which will lead to reduced material strength from microcracks. Physical property changes include the possible loss of magnetic properties on ferromagnetic materials.
Minor planet
According to the International Astronomical Union (IAU), a minor planet is an astronomical object in direct orbit around the Sun that is exclusively classified as neither a planet nor a comet. Before 2006, the IAU officially used the term minor planet, but that year's meeting reclassified minor planets and comets into dwarf planets and small Solar System bodies (SSSBs). In contrast to the eight official planets of the Solar System, all minor planets fail to clear their orbital neighborhood. Minor planets include asteroids (near-Earth objects, Earth trojans, Mars trojans, Mars-crossers, main-belt asteroids and Jupiter trojans), as well as distant minor planets (Uranus trojans, Neptune trojans, centaurs and trans-Neptunian objects), most of which reside in the Kuiper belt and the scattered disc. , there are known objects, divided into 740,000 numbered, with only one of them recognized as a dwarf planet (secured discoveries) and 652,085 unnumbered minor planets, with only five of those officially recognized as a dwarf planet. The first minor planet to be discovered was Ceres in 1801, though it was called a 'planet' at the time and an 'asteroid' soon after; the term minor planet was not introduced until 1841, and was considered a subcategory of 'planet' until 1932. The term planetoid has also been used, especially for larger, planetary objects such as those the IAU has called dwarf planets since 2006. Historically, the terms asteroid, minor planet, and planetoid have been more or less synonymous. This terminology has become more complicated by the discovery of numerous minor planets beyond the orbit of Jupiter, especially trans-Neptunian objects that are generally not considered asteroids. A minor planet seen releasing gas may be dually classified as a comet. Objects are called dwarf planets if their own gravity is sufficient to achieve hydrostatic equilibrium and form an ellipsoidal shape. All other minor planets and comets are called small Solar System bodies. The IAU stated that the term minor planet may still be used, but the term small Solar System body will be preferred. However, for purposes of numbering and naming, the traditional distinction between minor planet and comet is still used. Populations Hundreds of thousands of minor planets have been discovered within the Solar System and thousands more are discovered each month. The Minor Planet Center has documented over 213 million observations and 794,832 minor planets, of which 541,128 have orbits known well enough to be assigned permanent official numbers. Of these, 21,922 have official names. , the lowest-numbered unnamed minor planet is , and the highest-numbered named minor planet is 594913 ꞌAylóꞌchaxnim. There are various broad minor-planet populations: Asteroids; traditionally, most have been bodies in the inner Solar System. Near-Earth asteroids, those whose orbits take them inside the orbit of Mars. Further subclassification of these, based on orbital distance, is used: Apohele asteroids orbit inside of Earth's perihelion distance and thus are contained entirely within the orbit of Earth. Aten asteroids, those that have a semimajor axis of less than Earth's and an aphelion (furthest distance from the Sun) greater than 0.983 AU. Apollo asteroids are those asteroids with a semimajor axis greater than Earth's while having a perihelion distance of 1.017 AU or less. Like Aten asteroids, Apollo asteroids are Earth-crossers. 
Amor asteroids are those near-Earth asteroids that approach the orbit of Earth from beyond but do not cross it. Amor asteroids are further subdivided into four subgroups, depending on where their semimajor axis falls between Earth's orbit and the asteroid belt. Earth trojans, asteroids sharing Earth's orbit and gravitationally locked to it. As of 2022, two Earth trojans are known: 2010 TK7 and 2020 XL5. Mars trojans, asteroids sharing Mars's orbit and gravitationally locked to it. As of 2007, eight such asteroids are known. Asteroid belt, whose members follow roughly circular orbits between Mars and Jupiter. These are the original and best-known group of asteroids. Jupiter trojans, asteroids sharing Jupiter's orbit and gravitationally locked to it. Numerically they are estimated to equal the main-belt asteroids. Distant minor planets, an umbrella term for minor planets in the outer Solar System. Centaurs, bodies in the outer Solar System between Jupiter and Neptune. They have unstable orbits due to the gravitational influence of the giant planets, and therefore must have come from elsewhere, probably outside Neptune. Neptune trojans, bodies sharing Neptune's orbit and gravitationally locked to it. Although only a handful are known, there is evidence that Neptune trojans are more numerous than either the asteroids in the asteroid belt or the Jupiter trojans. Trans-Neptunian objects, bodies at or beyond the orbit of Neptune, the outermost planet. The Kuiper belt, objects inside an apparent population drop-off approximately 55 AU from the Sun. Classical Kuiper belt objects like Makemake, also known as cubewanos, are in primordial, relatively circular orbits that are not in resonance with Neptune. Resonant Kuiper belt objects. Plutinos, bodies like Pluto that are in a 2:3 resonance with Neptune. Scattered disc objects like Eris, with aphelia outside the Kuiper belt. These are thought to have been scattered by Neptune. Resonant scattered disc objects. Detached objects such as Sedna, with both an aphelion and a perihelion outside the Kuiper belt. Sednoids, detached objects with a perihelion greater than 75 AU (Sedna, 2012 VP113, and Leleākūhonua). The Oort cloud, a hypothetical population thought to be the source of long-period comets and that may extend to 50,000 AU from the Sun. Naming conventions All astronomical bodies in the Solar System need a distinct designation. The naming of minor planets runs through a three-step process. First, a provisional designation is given upon discovery—because the object still may turn out to be a false positive or become lost later on—called a provisionally designated minor planet. After the observation arc is accurate enough to predict its future location, a minor planet is formally designated and receives a number. It is then a numbered minor planet. Finally, in the third step, it may be named by its discoverers. However, only a small fraction of all minor planets have been named. The vast majority are either numbered or have still only a provisional designation. Example of the naming process: 1932 HA – provisional designation upon discovery on 24 April 1932; (1862) 1932 HA – formal designation, receives an official number; 1862 Apollo – named minor planet, receives a name, the alphanumeric code is dropped. Provisional designation A newly discovered minor planet is given a provisional designation. The provisional designation consists of the year of discovery (for example, 2002) and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. 
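The half-month/sequence scheme just described can be unpacked mechanically. Below is a minimal Python sketch (not an official Minor Planet Center tool) that decodes a designation such as 1932 HA or 2002 AT4; it assumes the standard MPC convention that half-months are lettered A to Y skipping I, that the second letter (also skipping I) gives the order within the half-month, and that any trailing digits count completed 25-letter cycles. These conventions are not spelled out in the text above.

# A minimal sketch that unpacks a provisional designation into its parts:
# discovery year, half-month of discovery, and order within that half-month.
HALF_MONTH = "ABCDEFGHJKLMNOPQRSTUVWXY"   # 24 half-months; "I" is skipped
ORDER      = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # second letter; "I" is skipped

def decode_provisional(designation: str):
    year_str, code = designation.split()
    half_idx = HALF_MONTH.index(code[0])            # 0 = first half of January
    month = half_idx // 2 + 1
    half = "first" if half_idx % 2 == 0 else "second"
    cycles = int(code[2:]) if len(code) > 2 else 0  # trailing digits = completed 25-letter cycles
    order = ORDER.index(code[1]) + 1 + 25 * cycles
    return int(year_str), month, half, order

# "1932 HA": H -> second half of April, A with no digits -> 1st object of that half-month
print(decode_provisional("1932 HA"))   # (1932, 4, 'second', 1)
print(decode_provisional("2002 AT4"))  # (2002, 1, 'first', 119)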
Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name (e.g. 433 Eros). The formal naming convention uses parentheses around the number, but dropping the parentheses is quite common. Informally, it is common to drop the number altogether or to drop it after the first mention when a name is repeated in running text. Minor planets that have been given a number but not a name keep their provisional designation, e.g. (29075) 1950 DA. Because modern discovery techniques are finding vast numbers of new asteroids, they are increasingly being left unnamed. The earliest-discovered asteroid to be left unnamed was for a long time (3360) 1981 VA, now 3360 Syrinx. In November 2006 its position as the lowest-numbered unnamed asteroid passed to (now 3708 Socus), and in May 2021 to . On rare occasions, a small object's provisional designation may become used as a name in itself: the then-unnamed (15760) 1992 QB1 gave its "name" to a group of objects that became known as classical Kuiper belt objects ("cubewanos") before it was finally named 15760 Albion in January 2018. A few objects are cross-listed as both comets and asteroids, such as 4015 Wilson–Harrington, which is also listed as 107P/Wilson–Harrington. Numbering Minor planets are awarded an official number once their orbits are confirmed. With the increasing rapidity of discovery, these are now six-figure numbers. The switch from five figures to six figures arrived with the publication of the Minor Planet Circular (MPC) of October 19, 2005, which saw the highest-numbered minor planet jump from 99947 to 118161. Naming The first few asteroids were named after figures from Greek and Roman mythology, but as such names started to dwindle the names of famous people, literary characters, discoverers' spouses, children, colleagues, and even television characters were used. Gender The first asteroid to be given a non-mythological name was 20 Massalia, named after the Greek name for the city of Marseille. The first to be given an entirely non-Classical name was 45 Eugenia, named after Empress Eugénie de Montijo, the wife of Napoleon III. For some time only female (or feminized) names were used; Alexander von Humboldt was the first man to have an asteroid named after him, but his name was feminized to 54 Alexandra. This unspoken tradition lasted until 334 Chicago was named; even then, female names showed up in the list for years after. Eccentric As the number of asteroids began to run into the hundreds, and eventually the thousands, discoverers began to give them increasingly frivolous names. The first hints of this were 482 Petrina and 483 Seppina, named after the discoverer's pet dogs. However, there was little controversy about this until 1971, upon the naming of 2309 Mr. Spock (the name of the discoverer's cat). Although the IAU subsequently discouraged the use of pet names as sources, eccentric asteroid names are still being proposed and accepted, such as 4321 Zero, 6042 Cheshirecat, 9007 James Bond, 13579 Allodd and 24680 Alleven, and 26858 Misterrogers. Discoverer's name A well-established rule is that, unlike comets, minor planets may not be named after their discoverer(s). One way to circumvent this rule has been for astronomers to exchange the courtesy of naming their discoveries after each other. Rare exceptions to this rule are 1927 Suvanto and 96747 Crespodasilva. 1927 Suvanto was named after its discoverer, Rafael Suvanto, posthumously by the Minor Planet Center. 
He died four years after the discovery, in the last days of the Finnish Winter War of 1939-40. 96747 Crespodasilva was named after its discoverer, Lucy d'Escoffier Crespo da Silva, because she died shortly after the discovery, at age 22. Languages Names were adapted to various languages from the beginning. 1 Ceres, Ceres being its Anglo-Latin name, was actually named Cerere, the Italian form of the name. German, French, Arabic, and Hindi use forms similar to the English, whereas Russian uses a form, Tserera, similar to the Italian. In Greek, the name was translated to Δήμητρα (Demeter), the Greek equivalent of the Roman goddess Ceres. In the early years, before it started causing conflicts, asteroids named after Roman figures were generally translated into Greek; other examples are Ἥρα (Hera) for 3 Juno, Ἑστία (Hestia) for 4 Vesta, Χλωρίς (Chloris) for 8 Flora, and Πίστη (Pistis) for 37 Fides. In Chinese, the names are not given the Chinese forms of the deities they are named after, but rather typically have a syllable or two for the character of the deity or person, followed by 神 'god(dess)' or 女 'woman' if just one syllable, plus 星 'star/planet', so that most asteroid names are written with three Chinese characters. Thus Ceres is 穀神星 'grain goddess planet', Pallas is 智神星 'wisdom goddess planet', etc. Physical properties of comets and minor planets Commission 15 of the International Astronomical Union is dedicated to the Physical Study of Comets & Minor Planets. Archival data on the physical properties of comets and minor planets are found in the PDS Asteroid/Dust Archive. This includes standard asteroid physical characteristics such as the properties of binary systems, occultation timings and diameters, masses, densities, rotation periods, surface temperatures, albedos, spin vectors, taxonomy, and absolute magnitudes and slopes. In addition, the European Asteroid Research Node (E.A.R.N.), an association of asteroid research groups, maintains a Data Base of Physical and Dynamical Properties of Near Earth Asteroids. Environmental properties Environmental characteristics have three aspects: the space environment, the surface environment and the internal environment, covering geological, optical, thermal and radiological properties. These characteristics are the basis for understanding the fundamental nature of minor planets and for carrying out scientific research, and they are also an important reference for designing the payloads of exploration missions. Radiation environment Without the protection of an atmosphere or a strong intrinsic magnetic field, a minor planet's surface is directly exposed to the surrounding radiation environment. The radiation reaching the surface can be divided into two categories according to its source: one part comes from the Sun, including solar electromagnetic radiation and ionizing radiation from the solar wind and solar energetic particles; the other comes from outside the Solar System, namely galactic cosmic rays. Optical environment During one rotation period, the apparent albedo of a minor planet changes slightly because of its irregular shape and the uneven distribution of its surface composition. This small change shows up as a periodic variation in the body's light curve, which can be observed with ground-based equipment to obtain the planet's magnitude, rotation period, rotation axis orientation, shape, albedo distribution, and scattering properties. 
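Magnitude and albedo together also constrain a body's size. The sketch below uses the relation conventionally applied in asteroid work, D(km) = 1329 / sqrt(p_V) * 10^(-H/5); the formula itself is not quoted in the text above, and the example absolute magnitude H = 18 is an arbitrary illustrative value.

import math

# Standard size estimate from absolute magnitude H and geometric albedo p_V.
def diameter_km(abs_magnitude_h: float, geometric_albedo: float) -> float:
    return 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-abs_magnitude_h / 5.0)

# The same H corresponds to very different sizes depending on whether the surface
# is dark (C-type-like) or brighter (S-type-like), which is why albedo matters.
for albedo in (0.035, 0.15):   # average C-type and S-type albedos quoted below
    print(f"H = 18, p_V = {albedo}: D ≈ {diameter_km(18.0, albedo):.2f} km")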
The albedo of minor planets is generally low, and its overall statistical distribution is bimodal, corresponding to C-type (average 0.035) and S-type (average 0.15) minor planets. In exploration missions, measuring the albedo and color variations of the surface is also the most basic way of directly identifying differences in the material composition of the surface. Geological environment The surface geological environment of minor planets is similar to that of other airless celestial bodies, the most widespread geomorphological feature being impact craters; however, because most minor planets are loose, porous rubble-pile structures, impacts on their surfaces have some unique characteristics. On highly porous minor planets, small impact events produce ejecta blankets much as ordinary impacts do, whereas large impact events are dominated by compaction and ejecta blankets are difficult to form; the longer a body is subjected to such large impacts, the greater its overall density becomes. In addition, statistical analysis of impact craters is an important means of obtaining information on the age of a surface. Although the Crater Size-Frequency Distribution (CSFD) method of dating commonly used on minor planet surfaces does not allow absolute ages to be obtained, it can be used to determine the relative ages of different geological bodies for comparison. In addition to impacts, a variety of other geological processes act on the surface of minor planets, such as mass wasting on slopes and impact crater walls, large-scale linear features associated with grabens, and electrostatic transport of dust. By analysing these geological processes, it is possible to learn about possible present-day internal activity and about the body's long-term interaction with the external environment, which may in turn give some indication of the origin and nature of its parent body. Many of the larger minor planets are covered by a layer of soil (regolith) of unknown thickness. Compared to other atmosphere-free bodies in the Solar System (e.g. the Moon), minor planets have weaker gravity and are less able to retain fine-grained material, so the grains of their surface soil layers tend to be somewhat larger. Soil layers are inevitably subject to intense space weathering that alters their physical and chemical properties due to direct exposure to the surrounding space environment. In silicate-rich soils, iron in the outer layers of grains is reduced to nanophase Fe (np-Fe), the main product of space weathering. On some smaller bodies, the weaker gravitational pull leaves the surface dominated by exposed boulders of varying sizes, up to 100 metres in diameter. These boulders are of high scientific interest, as they may be either deeply buried material excavated by impact action or fragments of the planet's parent body that have survived. The rocks provide more direct and primitive information than the soil layer about the material inside the minor planet and the nature of its parent body, and their different colours and forms indicate different sources of surface material or different evolutionary processes. Magnetic environment In the interior of a planet, convection of conductive fluid usually generates a large and strong magnetic field. 
However, minor planets are generally small and most of them have rubble-pile structures with essentially no internal "dynamo", so they do not generate a self-sustained dipole magnetic field like the Earth's. Some minor planets nevertheless do have magnetic fields. On the one hand, some have remanent magnetism: if the parent body had a magnetic field, or if a nearby planetary body had a strong one, rocks on the parent body were magnetised as they cooled, and a minor planet formed by the break-up of that parent body still retains this remanence, which can also be detected in meteorites derived from minor planets. On the other hand, if a minor planet is composed of electrically conductive material, with an internal conductivity similar to that of carbon- or iron-bearing meteorites, its interaction with the solar wind is likely to take the form of unipolar induction, producing an external magnetic field around the body. In addition, the magnetic fields of minor planets are not static; impact events, space weathering and changes in the thermal environment can all alter them. At present there are few direct observations of minor-planet magnetic fields; the few existing exploration missions generally carry magnetometers, and some targets such as Gaspra and Braille have been measured to have strong magnetic fields nearby, while others such as Lutetia show no detectable magnetic field.
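Returning to the crater statistics discussed above: the sketch below illustrates, in the simplest possible form, how crater size-frequency data yield relative (not absolute) ages by comparing cumulative crater densities on two terrains. All crater lists and areas here are invented for illustration; real CSFD work involves careful crater mapping, saturation checks and uncertainty estimates.

# Relative age from crater counts: more craters per unit area -> longer exposure.
def cumulative_density(crater_diameters_km, area_km2, reference_km=1.0):
    """Craters at least `reference_km` across, per square kilometre of surface."""
    count = sum(1 for d in crater_diameters_km if d >= reference_km)
    return count / area_km2

terrain_a = [0.4, 0.7, 1.1, 1.3, 2.0, 3.5, 1.8]   # hypothetical mapped craters (km)
terrain_b = [0.5, 0.9, 1.2]

density_a = cumulative_density(terrain_a, area_km2=50.0)
density_b = cumulative_density(terrain_b, area_km2=50.0)

# No absolute age follows from this comparison alone; only which surface is older.
older = "A" if density_a > density_b else "B"
print(f"N(>=1 km)/km^2: A = {density_a:.3f}, B = {density_b:.3f}; terrain {older} is relatively older")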
Physical sciences
Planetary science
null
3361149
https://en.wikipedia.org/wiki/Ophthalmosaurus
Ophthalmosaurus
Ophthalmosaurus (Greek ὀφθάλμος ophthalmos 'eye' and σαῦρος sauros 'lizard') is a genus of ichthyosaur known from the Middle-Late Jurassic. Possible remains from the earliest Cretaceous, around 145 million years ago, are also known. It was a medium-sized ichthyosaur, measuring long and weighing . Named for its extremely large eyes, it had a jaw containing many small but robust teeth. Major fossil finds of this genus have been recorded in Europe, with a second species possibly being found in North America. Description Ophthalmosaurus was a medium-sized ichthyosaur, growing to measure in length and weighing between . It had a robust, streamlined body that was nearly as wide as it was tall in frontal view. Like other derived ichthyosaurs, Ophthalmosaurus had a powerful tail ending in a pronounced bi-lobed caudal fluke whose lower half was formed around the caudal spine, whereas the upper lobe was made up entirely of soft tissue. The limbs of Ophthalmosaurus were short and rounded, with the forelimbs being noticeably larger than the hind limbs. The combination of rather inflexible trunk, powerful caudal fluke and reduced limbs suggests a tail-propelled mode of locomotion with the limbs helping with steering, differing from the anguilliform (eel-like) way more basal ichthyosaurs swam. The skull of Ophthalmosaurus was long with a slender, toothed rostrum and an enlarged posterior portion of the cranium. The dentition was relatively small with robust tooth crowns, and the lateral area of the cranium was almost entirely occupied by the animal's massive eyes that gave the genus its name. The proportionally large eyes of Ophthalmosaurus measured in diameter at the outer margin of the bony sclerotic ring, while the sclerotic aperture itself measured in diameter. Discovery and species Ophthalmosaurus was first described by Harry Seeley in 1874 with particular focus on the morphology of the clavicular bones. Over the years following its description, a variety of genera have been sunk into Ophthalmosaurus. Among them, Apatodontosaurus, Ancanamunia, Baptanodon, Mollesaurus, Paraophthalmosaurus, Undorosaurus and Yasykovia were all considered junior synonyms of Ophthalmosaurus in a study published by Maisch & Matzke in 2000. However, more recent cladistic analyses have contested Maisch & Matzke's conclusion. Mollesaurus periallus from Argentina was considered a valid genus of ophthalmosaurid by Druckenmiller and Maxwell (2010), Paraophthalmosaurus and Yasykovia were both recovered as distinct genera by Storrs et al., but were later sunk into Nannopterygius, while Undorosaurus's validity is now accepted by most authors, including Maisch (2010) who originally proposed the synonymy. The two other Russian taxa might be also valid. Likewise, the Mexican ophthalmosaurid Jabalisaurus had also been referred to Ophthalmosaurus before being described as a distinct species and genus in 2021. Ophthalmosaurus natans was described as Sauranodon, then later renamed to Baptanodon by Marsh in 1880. However, this decision was questioned not long afterwards, with Baptanodon instead being considered an American species of Ophthalmosaurus. Recent analyses have recovered the species as closer to other ophthalmosaurines than to the Ophthalmosaurus type species, suggesting that the previous name should be reinstated. Similarly, Ophthalmosaurus chrisorum, whose holotype has been recovered in Canada and described by Russell in 1993, was moved to its own genus Arthropterygius in 2010 by Maxwell. 
While primarily known from the Jurassic, material from the Spilsby Sandstone dating to the early Berriasian stage of the Lower Cretaceous has been referred to cf. Ophthalmosaurus (i.e., either Ophthalmosaurus or a closely related species). Classification Within Ophthalmosauridae, Ophthalmosaurus was once considered most closely related to Aegirosaurus. However, many recent cladistic analyses found Ophthalmosaurus to nest in a clade with Acamptonectes and Mollesaurus. Aegirosaurus was found to be more closely related to Platypterygius, and thus does not belong to the Ophthalmosaurinae. Phylogeny The cladogram below follows Fischer et al. 2012. The following cladogram shows a possible phylogenetic position of Ophthalmosaurus in Ophthalmosauridae according to the analysis performed by Zverkov and Jacobs (2020). Palaeobiology Ophthalmosaurus icenicus possessed small teeth with robust tooth crowns and signs of slight wear, differing notably from the robust teeth of later species of Platypterygius, known to have hunted large prey including turtles and birds, and the minute teeth of Baptanodon, interpreted to be a soft prey specialist. Fischer et al. (2016) conclude that this intermediate tooth morphology indicates that Ophthalmosaurus icenicus was most likely a generalist predator, feeding on a variety of smaller prey items. Ophthalmosaurus could likely dive for around 20 minutes. Assuming a conservative cruising speed of ( being more likely), Ophthalmosaurus could reach depths of or more during a dive, reaching the mesopelagic zone. However, while studies on the biomechanics of Ophthalmosaurus suggest that such feats could be physically achieved, studies on the environment of the Peterborough member of the Oxford Clay suggest that Ophthalmosaurus instead inhabited relatively shallow waters there, being determined to have been just deep at a distance of from the shore.
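The dive-depth reasoning above is simple kinematics: roughly half the dive is spent descending and half ascending at about the cruising speed. A minimal sketch follows; the 20-minute dive duration comes from the text, while the 1 m/s cruising speed is an assumed, illustrative figure, since the original value is not preserved here.

# Back-of-the-envelope dive depth: descend for half the dive, ascend for the other half.
dive_duration_s = 20 * 60        # ~20-minute dive (from the text)
cruising_speed_m_s = 1.0         # assumed cruising speed, purely illustrative

max_depth_m = cruising_speed_m_s * dive_duration_s / 2
print(f"Maximum reachable depth ≈ {max_depth_m:.0f} m")   # ≈ 600 m under these assumptions,
                                                           # well within the mesopelagic zone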
Biology and health sciences
Prehistoric marine reptiles
Animals
3364761
https://en.wikipedia.org/wiki/Ray%20%28optics%29
Ray (optics)
In optics, a ray is an idealized geometrical model of light or other electromagnetic radiation, obtained by choosing a curve that is perpendicular to the wavefronts of the actual light, and that points in the direction of energy flow. Rays are used to model the propagation of light through an optical system, by dividing the real light field up into discrete rays that can be computationally propagated through the system by the techniques of ray tracing. This allows even very complex optical systems to be analyzed mathematically or simulated by computer. Ray tracing uses approximate solutions to Maxwell's equations that are valid as long as the light waves propagate through and around objects whose dimensions are much greater than the light's wavelength. Ray optics or geometrical optics does not describe phenomena such as diffraction, which require wave optics theory. Some wave phenomena such as interference can be modeled in limited circumstances by adding phase to the ray model. Definition A light ray is a line (straight or curved) that is perpendicular to the light's wavefronts; its tangent is collinear with the wave vector. Light rays in homogeneous media are straight. They bend at the interface between two dissimilar media and may be curved in a medium in which the refractive index changes. Geometric optics describes how rays propagate through an optical system. Objects to be imaged are treated as collections of independent point sources, each producing spherical wavefronts and corresponding outward rays. Rays from each object point can be mathematically propagated to locate the corresponding point on the image. A slightly more rigorous definition of a light ray follows from Fermat's principle, which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. Special rays There are many special rays that are used in optical modelling to analyze an optical system. These are defined and described below, grouped by the type of system they are used to model. Interaction with surfaces An incident ray is a ray of light that strikes a surface. The angle between this ray and the perpendicular or normal to the surface is the angle of incidence. The reflected ray corresponding to a given incident ray is the ray that represents the light reflected by the surface. The angle between the surface normal and the reflected ray is known as the angle of reflection. The Law of Reflection says that for a specular (non-scattering) surface, the angle of reflection is always equal to the angle of incidence. The refracted ray or transmitted ray corresponding to a given incident ray represents the light that is transmitted through the surface. The angle between this ray and the normal is known as the angle of refraction, and it is given by Snell's law. Conservation of energy requires that the power in the incident ray must equal the sum of the power in the refracted ray, the power in the reflected ray, and any power absorbed at the surface. If the material is birefringent, the refracted ray may split into ordinary and extraordinary rays, which experience different indexes of refraction when passing through the birefringent material. Optical systems A meridional ray or tangential ray is a ray that is confined to the plane containing the system's optical axis and the object point from which the ray originated. This plane is called the meridional plane or tangential plane. 
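The surface interactions described above (before the discussion of meridional rays) lend themselves to a small numerical sketch: the law of reflection fixes the reflected ray, and Snell's law, n1*sin(theta1) = n2*sin(theta2), fixes the refracted ray, with total internal reflection when Snell's law has no solution. This is a minimal illustration, not a full ray tracer.

import math

def reflect_and_refract(theta_incidence_deg, n1, n2):
    theta_reflection_deg = theta_incidence_deg            # law of reflection
    s = n1 * math.sin(math.radians(theta_incidence_deg)) / n2
    if abs(s) > 1.0:
        return theta_reflection_deg, None                 # total internal reflection: no refracted ray
    return theta_reflection_deg, math.degrees(math.asin(s))

# Air (n=1.0) into glass (n=1.5): the transmitted ray bends toward the normal.
print(reflect_and_refract(45.0, 1.0, 1.5))   # (45.0, ~28.1 degrees)
# Glass into air beyond the critical angle (~41.8 degrees): only a reflected ray remains.
print(reflect_and_refract(60.0, 1.5, 1.0))   # (60.0, None)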
A skew ray is a ray that does not propagate in a plane that contains both the object point and the optical axis (meridional or tangential plane). Such rays do not cross the optical axis anywhere and are not parallel to it. The marginal ray (sometimes known as an a ray or a marginal axial ray) in an optical system is the meridional ray that starts from an on-axis object point (the point where an object to be imaged crosses the optical axis) and touches an edge of the aperture stop of the system. This ray is useful, because it crosses the optical axis again at the location where a real image will be formed, or the backward extension of the ray path crosses the axis where a virtual image will be formed. Since the entrance pupil and exit pupil are images of the aperture stop, for a real image pupil, the lateral distance of the marginal ray from the optical axis at the pupil location defines the pupil size. For a virtual image pupil, an extended line, forward along the marginal ray before the first optical element or backward along the marginal ray after the last optical element, determines the size of the entrance or exit pupil, respectively. The principal ray or chief ray (sometimes known as the b ray) in an optical system is the meridional ray that starts at an edge of an object and passes through the center of the aperture stop. The distance between the chief ray (or an extension of it for a virtual image) and the optical axis at an image location defines the size of the image. This ray (or forward and backward extensions of it for virtual image pupils) crosses the optical axis at the locations of the entrance and exit pupils. The marginal and chief rays together define the Lagrange invariant, which characterizes the throughput or etendue of the optical system. Some authors define a "principal ray" for each object point, and in this case, the principal ray starting at an edge point of the object may then be called the marginal principal ray. A sagittal ray or transverse ray from an off-axis object point is a ray propagating in the plane that is perpendicular to the meridional plane for this object point and contains the principal ray (for the object point) before refraction (so along the original principal ray direction). This plane is called sagittal plane. Sagittal rays intersect the pupil along a line that is perpendicular to the meridional plane for the ray's object point and passes through the optical axis. If the axis direction is defined to be the z axis, and the meridional plane is the y-z plane, sagittal rays intersect the pupil at yp= 0. The principal ray is both sagittal and meridional. All other sagittal rays are skew rays. A paraxial ray is a ray that makes a small angle to the optical axis of the system and lies close to the axis throughout the system. Such rays can be modeled reasonably well by using the paraxial approximation. When discussing ray tracing this definition is often reversed: a "paraxial ray" is then a ray that is modeled using the paraxial approximation, not necessarily a ray that remains close to the axis. A finite ray or real ray is a ray that is traced without making the paraxial approximation. A parabasal ray is a ray that propagates close to some defined "base ray" rather than the optical axis. This is more appropriate than the paraxial model in systems that lack symmetry about the optical axis. In computer modeling, parabasal rays are "real rays", that is rays that are treated without making the paraxial approximation. 
Parabasal rays about the optical axis are sometimes used to calculate first-order properties of optical systems. Fiber optics A meridional ray is a ray that passes through the axis of an optical fiber. A skew ray is a ray that travels in a non-planar zig-zag path and never crosses the axis of an optical fiber. A guided ray, bound ray, or trapped ray is a ray in a multi-mode optical fiber, which is confined by the core. For step index fiber, light entering the fiber will be guided if it makes an angle with the fiber axis that is less than the fiber's acceptance angle. A leaky ray or tunneling ray is a ray in an optical fiber that geometric optics predicts would totally reflect at the boundary between the core and the cladding, but which suffers loss due to the curved core boundary.
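The guided-ray condition for a step-index fiber can be made concrete. The sketch below uses the standard relation, not stated explicitly above, that a meridional ray launched from a medium of index n0 is accepted when sin(theta) does not exceed NA/n0, where NA = sqrt(n_core^2 - n_cladding^2) is the numerical aperture; the refractive indices used are assumed, typical-looking values.

import math

def acceptance_angle_deg(n_core, n_cladding, n_outside=1.0):
    numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)
    return math.degrees(math.asin(numerical_aperture / n_outside))

def is_guided(theta_deg, n_core, n_cladding, n_outside=1.0):
    # A ray steeper than the acceptance angle is not bound by the core.
    return theta_deg < acceptance_angle_deg(n_core, n_cladding, n_outside)

print(acceptance_angle_deg(1.48, 1.46))     # ~14 degrees for these assumed indices
print(is_guided(10.0, 1.48, 1.46))          # True  -> guided (bound) ray
print(is_guided(20.0, 1.48, 1.46))          # False -> not accepted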
Physical sciences
Optics
Physics
716631
https://en.wikipedia.org/wiki/Melanoma
Melanoma
Melanoma is the most dangerous type of skin cancer; it develops from the melanin-producing cells known as melanocytes. It typically occurs in the skin, but may rarely occur in the mouth, intestines, or eye (uveal melanoma). In women, melanomas most commonly occur on the legs; in men, on the back. Melanoma is frequently referred to as malignant melanoma. However, the medical community stresses that there is no such thing as a 'benign melanoma' and recommends that the term 'malignant melanoma' should be avoided as redundant. About 25% of melanomas develop from moles. Changes in a mole that can indicate melanoma include an increase, especially a rapid increase, in size, irregular edges, change in color, itchiness, or skin breakdown. The primary cause of melanoma is ultraviolet light (UV) exposure in those with low levels of the skin pigment melanin. The UV light may be from the sun or other sources, such as tanning devices. Those with many moles, a history of affected family members, and poor immune function are at greater risk. A number of rare genetic conditions, such as xeroderma pigmentosum, also increase the risk. Diagnosis is by biopsy and analysis of any skin lesion that has signs of being potentially cancerous. Avoiding UV light and using sunscreen when UV levels are high may prevent melanoma. Treatment is typically surgical removal of the melanoma and the potentially affected adjacent tissue. In those with slightly larger cancers, nearby lymph nodes may be tested for spread (metastasis). Most people are cured if metastasis has not occurred. For those in whom melanoma has spread, immunotherapy, biologic therapy, radiation therapy, or chemotherapy may improve survival. With treatment, the five-year survival rates in the United States are 99% among those with localized disease, 65% when the disease has spread to lymph nodes, and 25% among those with distant spread. The likelihood that melanoma will recur or spread depends on its thickness, how fast the cells are dividing, and whether or not the overlying skin has broken down. Globally, in 2012, melanoma newly occurred in 232,000 people. In 2015, 3.1 million people had active disease, which resulted in 59,800 deaths. Australia and New Zealand have the highest rates of melanoma in the world. High rates also occur in Northern Europe and North America, while it is less common in Asia, Africa, and Latin America. In the United States, melanoma occurs about 1.6 times more often in men than women. Melanoma has become more common since the 1960s in areas mostly populated by people of European descent. Signs and symptoms Early signs of melanoma are changes to the shape or color of existing moles or, in the case of nodular melanoma, the appearance of a new lump anywhere on the skin. At later stages, the mole may itch, ulcerate, or bleed. Early signs of melanoma are summarized by the mnemonic "ABCDE": Asymmetry Borders (irregular with edges and corners) Colour (variegated) Diameter (greater than , about the size of a pencil eraser) Evolving over time This classification does not apply to nodular melanoma, which has its own classifications: Elevated above the skin surface Firm to the touch Growing Metastatic melanoma may cause nonspecific paraneoplastic symptoms, including loss of appetite, nausea, vomiting, and fatigue. Metastasis (spread) of early melanoma is possible, but relatively rare; less than a fifth of melanomas diagnosed early become metastatic. 
Brain metastases are particularly common in patients with metastatic melanoma. It can also spread to the liver, bones, abdomen, or distant lymph nodes. Cause Melanomas are usually caused by DNA damage resulting from exposure to UV light from the sun. Genetics also play a role. Melanoma can also occur in skin areas with little sun exposure (e.g. the mouth, soles of the feet, palms of the hands, genital areas). People with dysplastic nevus syndrome, also known as familial atypical multiple mole melanoma, are at increased risk for the development of melanoma. Having more than 50 moles indicates an increased risk of melanoma. A weakened immune system makes cancer development easier because the body is less able to fight cancer cells. UV radiation UV radiation exposure from tanning beds increases the risk of melanoma. The International Agency for Research on Cancer finds that tanning beds are "carcinogenic to humans" and that people who begin using tanning devices before the age of thirty years are 75% more likely to develop melanoma. Those who work in airplanes also appear to have an increased risk, believed to be due to greater exposure to UV. UVB light, emanating from the sun at wavelengths between 315 and 280 nm, is absorbed directly by DNA in skin cells, which results in a type of direct DNA damage called cyclobutane pyrimidine dimers. Thymine-thymine, cytosine-cytosine, or cytosine-thymine dimers are formed by the joining of two adjacent pyrimidine bases within a strand of DNA. UVA light, at wavelengths longer than UVB (between 400 and 315 nm), can also be absorbed directly by DNA in skin cells, but at lower efficiencies, about 1/100 to 1/1000 of UVB. Exposure to UV radiation (UVA and UVB) is a major contributor to developing melanoma. Occasional extreme sun exposure that results in "sunburn" on areas of the human body is causally related to melanoma, and such intermittently exposed areas apparently explain why melanoma is more common on the back in men and on the legs in women. The risk appears to be strongly influenced by socioeconomic conditions rather than indoor versus outdoor occupations; it is more common in professional and administrative workers than unskilled workers. Other factors are mutations in (or total loss of) tumor suppressor genes. Using sunbeds with their deeply penetrating UVA rays has been linked to the development of skin cancers, including melanoma. Possible significant elements in determining risk include the intensity and duration of sun exposure, the age at which sun exposure occurs, and the degree of skin pigmentation. Melanoma rates tend to be highest in countries settled by migrants from Europe which have a large amount of direct, intense sunlight to which the skin of the settlers is not adapted, most notably Australia. Exposure during childhood is a more important risk factor than exposure in adulthood. This is seen in migration studies in Australia. Incurring multiple severe sunburns increases the likelihood that future sunburns develop into melanoma due to cumulative damage. UV-rich sunlight and tanning beds are the main sources of UV radiation that increase the risk of melanoma, and living close to the equator increases exposure to UV radiation. Genetics A number of rare mutations, which often run in families, greatly increase melanoma susceptibility. Several genes increase risks. Some rare genes have a relatively high risk of causing melanoma; some more common genes, such as the MC1R gene that causes red hair, confer a lower but still elevated risk. 
Genetic testing can be used to search for the mutations. One class of mutations affects the gene CDKN2A. An alternative reading frame mutation in this gene leads to the destabilization of p53, a transcription factor involved in apoptosis and in 50% of human cancers. Another mutation in the same gene results in a nonfunctional inhibitor of CDK4, a cyclin-dependent kinase that promotes cell division. Mutations that cause the skin condition xeroderma pigmentosum (XP) also increase melanoma susceptibility. Scattered throughout the genome, these mutations reduce a cell's ability to repair DNA. Both CDKN2A and XP mutations are highly penetrant (the chances of a carrier to express the phenotype is high). Familial melanoma is genetically heterogeneous, and loci for familial melanoma appear on the chromosome arms 1p, 9p and 12q. Multiple genetic events have been related to melanoma's pathogenesis (disease development). The multiple tumor suppressor 1 (CDKN2A/MTS1) gene encodes p16INK4a – a low-molecular weight protein inhibitor of cyclin-dependent protein kinases (CDKs) – which has been localised to the p21 region of human chromosome 9. FAMMM is typically characterized by having 50 or more combined moles in addition to a family history of melanoma. It is transmitted autosomal dominantly and mostly associated with the CDKN2A mutations. People who have CDKN2A mutation associated FAMMM have a 38 fold increased risk of pancreatic cancer. Other mutations confer lower risk, but are more common in the population. People with mutations in the MC1R gene are two to four times more likely to develop melanoma than those with two wild-type (typical unaffected type) copies. MC1R mutations are very common, and all red-haired people have a mutated copy. Mutation of the MDM2 SNP309 gene is associated with increased risks for younger women. Fair- and red-haired people, persons with multiple atypical nevi or dysplastic nevi and persons born with giant congenital melanocytic nevi are at increased risk. A family history of melanoma greatly increases a person's risk, because mutations in several genes have been found in melanoma-prone families. People with a history of one melanoma are at increased risk of developing a second primary tumor. Fair skin is the result of having less melanin in the skin, which means less protection from UV radiation exists. Pathophysiology The earliest stage of melanoma starts when melanocytes begin out-of-control growth. Melanocytes are found between the outer layer of the skin (the epidermis) and the next layer (the dermis). This early stage of the disease is called the radial growth phase, when the tumor is less than 1 mm thick, and spreads at the level of the basal epidermis. Because the cancer cells have not yet reached the blood vessels deeper in the skin, it is very unlikely that this early-stage melanoma will spread to other parts of the body. If the melanoma is detected at this stage, then it can usually be completely removed with surgery. When the tumor cells start to move in a different direction – vertically up into the epidermis and into the papillary dermis – cell behaviour changes dramatically. The next step in the evolution is the invasive radial growth phase, in which individual cells start to acquire invasive potential. From this point on, melanoma is capable of spreading. The Breslow's depth of the lesion is usually less than , while the Clark level is usually 2. The vertical growth phase (VGP) following is invasive melanoma. 
The tumor becomes able to grow into the surrounding tissue and can spread around the body through blood or lymph vessels. The tumor thickness is usually more than , and the tumor involves the deeper parts of the dermis. The host elicits an immunological reaction against the tumor during the VGP, which is judged by the presence and activity of the tumor infiltrating lymphocytes (TILs). These cells sometimes completely destroy the primary tumor; this is called regression, which is the latest stage of development. In certain cases, the primary tumor is completely destroyed and only the metastatic tumor is discovered. About 40% of human melanomas contain activating mutations affecting the structure of the B-Raf protein, resulting in constitutive signaling through the Raf to MAP kinase pathway. A cause common to most cancers is damage to DNA. UVA light mainly causes thymine dimers. UVA also produces reactive oxygen species and these inflict other DNA damage, primarily single-strand breaks, oxidized pyrimidines and the oxidized purine 8-oxoguanine (a mutagenic DNA change) at 1/10, 1/10, and 1/3rd the frequencies of UVA-induced thymine dimers, respectively. If unrepaired, cyclobutane pyrimidine dimer (CPD) photoproducts can lead to mutations by inaccurate translesion synthesis during DNA replication or repair. The most frequent mutations due to inaccurate synthesis past CPDs are cytosine to thymine (C>T) or CC>TT transition mutations. These are commonly referred to as UV fingerprint mutations, as they are the most specific mutation caused by UV, being frequently found in sun-exposed skin, but rarely found in internal organs. Errors in DNA repair of UV photoproducts, or inaccurate synthesis past these photoproducts, can also lead to deletions, insertions, and chromosomal translocations. The entire genomes of 25 melanomas were sequenced. On average, about 80,000 mutated bases (mostly C>T transitions) and about 100 structural rearrangements were found per melanoma genome. This is much higher than the roughly 70 mutations across generations (parent to child). Among the 25 melanomas, about 6,000 protein-coding genes had missense, nonsense, or splice site mutations. The transcriptomes of over 100 melanomas has also been sequenced and analyzed. Almost 70% of all human protein-coding genes are expressed in melanoma. Most of these genes are also expressed in other normal and cancer tissues, with some 200 genes showing a more specific expression pattern in melanoma compared to other forms of cancer. Examples of melanoma specific genes are tyrosinase, MLANA, and PMEL. UV radiation causes damage to the DNA of cells, typically thymine dimerization, which when unrepaired can create mutations in the cell's genes. This strong mutagenic factor makes cutaneous melanoma the tumor type with the highest number of mutations. When the cell divides, these mutations are propagated to new generations of cells. If the mutations occur in protooncogenes or tumor suppressor genes, the rate of mitosis in the mutation-bearing cells can become uncontrolled, leading to the formation of a tumor. Data from patients suggest that aberrant levels of activating transcription factor in the nucleus of melanoma cells are associated with increased metastatic activity of melanoma cells; studies from mice on skin cancer tend to confirm a role for activating transcription factor-2 in cancer progression. Cancer stem cells may also be involved. 
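As a small illustration of the UV "fingerprint" described above, the sketch below tallies what fraction of a set of single-base substitutions are C>T transitions or CC>TT tandem changes; the mutation list is entirely made up, and real analyses work from sequenced genomes rather than hand-written lists.

from collections import Counter

# Fraction of substitutions carrying the UV signature (C>T or CC>TT changes).
mutations = ["C>T", "C>T", "G>T", "C>T", "CC>TT", "A>G", "C>T", "T>A", "C>T", "CC>TT"]

counts = Counter(mutations)
uv_signature = counts["C>T"] + counts["CC>TT"]
fraction = uv_signature / len(mutations)

# A spectrum dominated by C>T / CC>TT changes points toward UV-induced damage,
# as in sun-exposed skin, rather than other mutational processes.
print(f"UV-signature substitutions: {uv_signature}/{len(mutations)} ({fraction:.0%})")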
Gene mutations Large-scale studies, such as The Cancer Genome Atlas, have characterized recurrent somatic alterations likely driving initiation and development of cutaneous melanoma. The Cancer Genome Atlas study has established four subtypes: BRAF mutant, RAS mutant, NF1 mutant, and triple wild-type. The most frequent mutation occurs in the 600th codon of BRAF (50% of cases). BRAF is normally involved in cell growth, and this specific mutation renders the protein constitutively active and independent of normal physiological regulation, thus fostering tumor growth. RAS genes (NRAS, HRAS and KRAS) are also recurrently mutated (30% of TCGA cases) and mutations in the 61st or 12th codons trigger oncogenic activity. Loss-of-function mutations often affect tumor suppressor genes such as NF1, TP53 and CDKN2A. Other oncogenic alterations include fusions involving various kinases such as BRAF, RAF1, ALK, RET, ROS1, NTRK1, NTRK3 and MET. BRAF, RAS, and NF1 mutations and kinase fusions are remarkably mutually exclusive, as they occur in different subsets of patients. Assessment of mutation status can, therefore, improve patient stratification and inform targeted therapy with specific inhibitors. In some cases (3–7%) mutated versions of BRAF and NRAS undergo copy-number amplification. Metastasis Research by Sarna's team showed that heavily pigmented melanoma cells have a Young's modulus of about 4.93, whereas in non-pigmented ones it was only 0.98. In another experiment they found that the elasticity of melanoma cells is important for metastasis and growth: non-pigmented tumors were bigger than pigmented ones and spread much more easily. They showed that melanoma tumors contain both pigmented and non-pigmented cells, so that they can be both drug-resistant and metastatic. Diagnosis Visual inspection of the area in question is the most common way a melanoma is first suspected. Moles that are irregular in color or shape are typically treated as candidates. To detect melanomas (and increase survival rates), it is recommended to learn to recognize them (see "ABCDE" mnemonic), to regularly examine moles for changes (shape, size, color, itching or bleeding) and to consult a qualified physician when a candidate appears. In-person inspection of suspicious skin lesions is more accurate than visual inspection of images of suspicious skin lesions. When used by trained specialists, dermoscopy is more helpful for identifying malignant lesions than use of the naked eye alone. Reflectance confocal microscopy may have better sensitivity and specificity than dermoscopy in diagnosing cutaneous melanoma, but more studies are needed to confirm this result. However, many melanomas present as lesions smaller than 6 mm in diameter, and all melanomas are malignant when they first appear as a small dot. Physicians typically examine all moles, including those less than 6 mm in diameter. Seborrheic keratosis may meet some or all of the ABCD criteria, and can lead to false alarms. Doctors can generally distinguish seborrheic keratosis from melanoma upon examination or with dermatoscopy. Some advocate replacing "enlarging" with "evolving": moles that change and evolve are a concern. Alternatively, some practitioners prefer "elevation". Elevation can help identify a melanoma, but lack of elevation does not mean that the lesion is not a melanoma. Most melanomas in the US are detected before they become elevated. By the time elevation is visible, they may have progressed to the more dangerous invasive stage. 
Ugly duckling One method is the "ugly duckling sign": the characteristics of a person's lesions are compared, and lesions that deviate from the common characteristics are labeled an "ugly duckling", requiring a further professional exam. The "Little Red Riding Hood" sign suggests that individuals with fair skin and light-colored hair might have difficult-to-diagnose amelanotic melanomas. Extra care is required when examining such individuals, as they might have multiple melanomas and severely dysplastic nevi. A dermatoscope must be used to detect "ugly ducklings", as many melanomas in these individuals resemble nonmelanomas or are considered to be "wolves in sheep's clothing". These fair-skinned individuals often have lightly pigmented or amelanotic melanomas that do not present easy-to-observe color changes and variations. Their borders are often indistinct, complicating visual identification without a dermatoscope. Amelanotic melanomas and melanomas arising in fair-skinned individuals are very difficult to detect, as they fail to show many of the characteristics in the ABCD rule, break the "ugly duckling" sign, and are hard to distinguish from acne scarring, insect bites, dermatofibromas, or lentigines. Biopsy Following a visual examination and a dermatoscopic exam, or in vivo diagnostic tools such as a confocal microscope, the doctor may biopsy the suspicious mole. A skin biopsy performed under local anesthesia is often required to assist in making or confirming the diagnosis and in defining severity. Elliptical excisional biopsies may remove the tumor, followed by histological analysis and Breslow scoring. Incisional biopsies such as punch biopsies are usually contraindicated in suspected melanomas, because of the possibility of sampling error or local implantation causing misestimation of tumour thickness. However, fears that such biopsies may increase the risk of metastatic disease seem unfounded. Total body photography, which involves photographic documentation of as much body surface as possible, is often used during follow-up for high-risk patients. The technique has been reported to enable early detection and provides a cost-effective approach (with any digital camera), but its efficacy has been questioned due to its inability to detect macroscopic changes. The diagnosis method should be used in conjunction with (and not as a replacement for) dermoscopic imaging, with a combination of both methods appearing to give extremely high rates of detection. Histopathologic types Melanoma is a type of neuroectodermal neoplasm. There are four main types of melanoma: superficial spreading melanoma, nodular melanoma, lentigo maligna melanoma, and acral lentiginous melanoma. Other histopathologic types are: Mucosal melanoma, when melanoma occurs on mucous membranes. Desmoplastic melanoma Melanoma with small nevus-like cells Melanoma with features of a Spitz nevus Uveal melanoma Vaginal melanoma Polypoid melanoma, a subclass of nodular melanoma. In situ or invasive A melanoma in situ has not invaded beyond the basement membrane, whereas an invasive melanoma has spread beyond it. Some histopathological types of melanoma are inherently invasive, including nodular melanoma and lentigo maligna melanoma, where the in situ counterpart to lentigo maligna melanoma is lentigo maligna. Lentigo maligna is sometimes classified as a very early melanoma, and sometimes a precursor to melanoma. Superficial spreading melanomas and acral lentiginous melanomas can be either in situ or invasive, but acral lentiginous melanomas are almost always invasive. Staging Further context on cancer staging is available at TNM. 
Metastatic melanomas can be detected by X-rays, CT scans, MRIs, PET and PET/CTs, ultrasound, LDH testing and photoacoustic detection. However, there is lack of evidence in the accuracy of staging of people with melanoma with various imaging methods. Melanoma stages according to AJCC, 8th edition: TX: Primary tumor thickness cannot be assessed (such as a diagnosis by curettage) T0: No evidence of primary tumor (such as unknown primary or completely regressed melanoma) Stage 1 and 2 require an N (lymph node) class of: N0 – No regional metastases. Stage 1, 2 and 3 require an M (metastasis status) of: M0: No evidence of distant metastasis Older systems include "Clark level" and "Breslow's depth", quantifying microscopic depth of tumor invasion. Laboratory Lactate dehydrogenase (LDH) tests are often used to screen for metastases, although many patients with metastases (even end-stage) have a normal LDH; extraordinarily high LDH often indicates the metastatic spread of the disease to the liver. It is common for patients diagnosed with melanoma to have chest X-rays and an LDH test, and in some cases CT, MRI, and/or PET scans. Although controversial, sentinel lymph node biopsies and examination of the lymph nodes are also performed in patients to assess spread to the lymph nodes. A diagnosis of melanoma is supported by the presence of the S-100 protein marker. HMB-45 is a monoclonal antibody that reacts against an antigen present in melanocytic tumors such as melanomas. It is used in anatomic pathology as a marker for such tumors. The antibody was generated to an extract of melanoma. It reacts positively against melanocytic tumors but not other tumors, thus demonstrating specificity and sensitivity. The antibody also reacts positively against junctional nevus cells but not intradermal nevi, and against fetal melanocytes but not normal adult melanocytes. HMB-45 is nonreactive with almost all non-melanoma human malignancies, with the exception of rare tumors showing evidence of melanogenesis (e.g., pigmented schwannoma, clear cell sarcoma) or tumors associated with tuberous sclerosis complex (angiomyolipoma and lymphangiomyoma). Prevention There is no evidence to support or refute adult population screening for melanoma. Ultraviolet radiation Minimizing exposure to sources of ultraviolet radiation (the sun and sunbeds), following sun protection measures and wearing sun protective clothing (long-sleeved shirts, long trousers, and broad-brimmed hats) can offer protection. Using artificial light for tanning was once believed to help prevent skin cancers, but it can actually lead to an increased incidence of melanomas. UV nail lamps, which are used in nail salons to dry nail polish, are another common and widespread source of UV radiation that could be avoided. Although the risk of developing skin cancer through UV nail lamp use is low, it is still recommended to wear fingerless gloves and/or apply SPF 30 or greater sunscreen to the hands before using a UV nail lamp. The body uses UV light to generate vitamin D so there is a need to balance getting enough sunlight to maintain healthy vitamin D levels and reducing the risk of melanoma; it takes around a half-hour of sunlight for the body to generate its vitamin D for the day and this is about the same amount of time it takes for fair-skinned people to get a sunburn. Exposure to sunlight can be intermittent instead of all at one time. Sunscreen Sunscreen appears to be effective in preventing melanoma. 
In the past, use of sunscreens with a sun protection factor (SPF) rating of 50 or higher on exposed areas were recommended; as older sunscreens more effectively blocked UVA with higher SPF. Currently, newer sunscreen ingredients (avobenzone, zinc oxide, and titanium dioxide) effectively block both UVA and UVB even at lower SPFs. Sunscreen also protects against squamous cell carcinoma, another skin cancer. Concerns have been raised that sunscreen might create a false sense of security against sun damage. Medications A 2005 review found tentative evidence that statin and fibrate medication may decrease the risk of melanoma. A 2006 review however did not support any benefit. Treatment Confirmation of the clinical diagnosis is done with a skin biopsy. This is usually followed up with a wider excision of the scar or tumor. Depending on the stage, a sentinel lymph node biopsy may be performed. Controversy exists around trial evidence for sentinel lymph node biopsy; with unclear evidence of benefit as of 2015. Treatment of advanced melanoma is performed from a multidisciplinary approach. Surgery Excisional biopsies may remove the tumor, but further surgery is often necessary to reduce the risk of recurrence. Complete surgical excision with adequate surgical margins and assessment for the presence of detectable metastatic disease along with short- and long-term followup is standard. Often this is done by a wide local excision (WLE) with margins. Melanoma-in-situ and lentigo malignas are treated with narrower surgical margins, usually . Many surgeons consider the standard of care for standard excision of melanoma-in-situ, but margin might be acceptable for margin controlled surgery (Mohs surgery, or the double-bladed technique with margin control). The wide excision aims to reduce the rate of tumor recurrence at the site of the original lesion. This is a common pattern of treatment failure in melanoma. Considerable research has aimed to elucidate appropriate margins for excision with a general trend toward less aggressive treatment during the last decades. A 2009 meta-analysis of randomized controlled trials found a small difference in survival rates favoring wide excision of primary cutaneous melanomas, but these results were not statistically significant. Mohs surgery has been reported with cure rate as low as 77% and as high as 98.0% for melanoma-in-situ. CCPDMA and the "double scalpel" peripheral margin controlled surgery is equivalent to Mohs surgery in effectiveness on this "intra-epithelial" type of melanoma. Melanomas that spread usually do so to the lymph nodes in the area of the tumor before spreading elsewhere. Attempts to improve survival by removing lymph nodes surgically (lymphadenectomy) were associated with many complications, but no overall survival benefit. Recently, the technique of sentinel lymph node biopsy has been developed to reduce the complications of lymph node surgery while allowing assessment of the involvement of nodes with tumor. Biopsy of sentinel lymph nodes is a widely used procedure when treating cutaneous melanoma. Neither sentinel lymph node biopsy nor other diagnostic tests should be performed to evaluate early, thin melanoma, including melanoma in situ, T1a melanoma or T1b melanoma ≤ 0.5mm. People with these conditions are unlikely to have the cancer spread to their lymph nodes or anywhere else and have a 5-year survival rate of 97%. Because of these considerations, sentinel lymph node biopsy is considered unnecessary health care for them. 
Furthermore, baseline blood tests and radiographic studies should not be performed only based on identifying this kind of melanoma, as there are more accurate tests for detecting cancer and these tests have high false-positive rates. To potentially correct false positives, gene expression profiling may be used as auxiliary testing for ambiguous and small lesions. Sentinel lymph node biopsy is often performed, especially for T1b/T2+ tumors, mucosal tumors, ocular melanoma and tumors of the limbs. A process called lymphoscintigraphy is performed in which a radioactive tracer is injected at the tumor site to localize the sentinel node(s). Further precision is provided using a blue tracer dye, and surgery is performed to biopsy the node(s). Routine hematoxylin and eosin (H&E) and immunoperoxidase staining will be adequate to rule out node involvement. Polymerase chain reaction (PCR) tests on nodes, usually performed to test for entry into clinical trials, now demonstrate that many patients with a negative sentinel lymph node actually had a small number of positive cells in their nodes. Alternatively, a fine-needle aspiration biopsy may be performed and is often used to test masses. If a lymph node is positive, depending on the extent of lymph node spread, a radical lymph node dissection will often be performed. If the disease is completely resected, the patient will be considered for adjuvant therapy. Excisional skin biopsy is the management of choice. Here, the suspect lesion is totally removed with an adequate (but minimal, usually 1 or 2 mm) ellipse of surrounding skin and tissue. To avoid disruption of the local lymphatic drainage, the preferred surgical margin for the initial biopsy should be narrow (1 mm). The biopsy should include the epidermal, dermal, and subcutaneous layers of the skin. This enables the histopathologist to determine the thickness of the melanoma by microscopic examination. This is described by Breslow's thickness (measured in millimeters). However, for large lesions, such as suspected lentigo maligna, or for lesions in surgically difficult areas (face, toes, fingers, eyelids), a small punch biopsy in representative areas will give adequate information and will not disrupt the final staging or depth determination. In no circumstances should the initial biopsy include the final surgical margin (0.5 cm, 1.0 cm, or 2 cm), as a misdiagnosis can result in excessive scarring and morbidity from the procedure. A large initial excision will disrupt the local lymphatic drainage and can affect further lymphangiogram-directed lymphnode dissection. A small punch biopsy can be used at any time where for logistical and personal reasons a patient refuses more invasive excisional biopsy. Small punch biopsies are minimally invasive and heal quickly, usually without noticeable scarring. Add on treatment Adjuvant treatment after surgery may reduce the risk of recurrence after surgery, especially in high-risk melanomas. Routines vary in different countries, but today (2024) the most common adjuvant treatment is immune checkpoint inhibitor treatment for up to a year post-surgery. In the early 2000s, a relatively common strategy was to treat patients with high risk of recurrence with up to a year of high-dose interferon treatment, which has severe side effects, but may improve the patient's prognosis slightly. A 2013 meta-analysis suggested that the addition of interferon alpha increased disease-free and overall survival for people with AJCC TNM stage II-III cutaneous melanoma. 
A 2011 meta-analysis showed that interferon could lengthen the time before a melanoma comes back but increased survival by only 3% at 5 years. The unpleasant side effects also greatly decrease quality of life. In the European Union, interferon is usually not used outside the scope of clinical trials. Chemotherapy Chemotherapy drugs such as dacarbazine have been the backbone of metastatic melanoma treatment since FDA approval in 1975; however, its efficacy in terms of survival has never been proven in an RCT. Since the approval of immune checkpoint inhibitors, dacarbazine and its oral counterpart temozolomide constitute potential treatment options in later lines of therapy. Multiple drugs are available to patients to decrease the size of the tumor. By lessening the size of the tumor, some symptoms can be relieved; however, this does not necessarily lead to remission. Some of these drugs are dacarbazine, temozolomide, and fotemustine. Combinations of drugs are also used and, in some cases, present higher remission rates. These medication combinations can have harmful side effects. To maintain quality of life, patients require assistive treatments and observation. Although combinations of drugs increase remission rates, the survival rate does not show an increase. In people with locally advanced cutaneous malignancies and sarcoma, isolated limb infusion (ILI) has been found to be a minimally invasive and well-tolerated procedure for delivering regional chemotherapy. Targeted therapy Melanoma cells have mutations that allow them to survive and grow indefinitely in the body. Small-molecule targeted therapies work by blocking the genes involved in pathways for tumor proliferation and survival. The main treatments are BRAF, C-Kit and NRAS inhibitors. These inhibitors work to inhibit the downstream pathways involved in cell proliferation and tumour development due to specific gene mutations. People can be treated with small-molecule targeted inhibitors if they are positive for the specific mutation. BRAF inhibitors, such as vemurafenib and dabrafenib and a MEK inhibitor trametinib are the most effective, approved treatments for BRAF positive melanoma. Melanoma tumors can develop resistance during therapy which can make therapy no longer effective, but combining the use of BRAF and MEK inhibitors may create a fast and lasting melanoma therapy response. A number of treatments improve survival over traditional chemotherapy. Biochemotherapy (chemotherapy with cytokines IL-2 and IFN-α) combined with BRAF inhibitors improved survival for people with BRAF positive melanoma. Biochemotherapy alone did not improve overall survival and had higher toxicity than chemotherapy. Combining multiple chemotherapy agents (polychemotherapy) did not improve survival over monochemotherapy. Targeted therapies result in relatively short progression-free survival (PFS) times. The therapy combination of dabrafenib and trametinib has a 3-year PFS of 23%, and 5-year PFS of 13%. Lifileucel (Amtagvi) is a tumor-derived autologous T cell immunotherapy that was approved for medical use in the United States in February 2024. Immunotherapy Immunotherapy is aimed at stimulating the person's immune system against the tumor, by enhancing the body's own ability to recognize and kill cancer cells. The current approach to treating melanoma with immunotherapy includes three broad categories of treatments including cytokines, immune check point inhibitors, and adoptive cell transfer. 
These treatment options are most often used in people with metastatic melanoma and significantly improve overall survival. However, these treatments are often costly. For example, one immune checkpoint inhibitor treatment, pembrolizumab, costs US$10,000 to $12,000 for a single dose administered every 3 weeks. Cytokine therapies used for melanoma include IFN-a and IL-2. IL-2 (Proleukin) was the first new therapy approved (1990 EU, 1992 US) for the treatment of metastatic melanoma in 20 years. IL-2 may offer the possibility of a complete and long-lasting remission in this disease in a small percentage of people with melanoma. Intralesional IL-2 for in-transit metastases has a high complete response rate ranging from 40 to 100%. Similarly, IFN-a has shown only modest survival benefits and high toxicity, limiting its use as a stand-alone therapy. Immune checkpoint inhibitors include anti-CTLA-4 monoclonal antibodies (ipilimumab and tremelimumab), toll-like receptor (TLR) agonists, CD40 agonists, anti-PD-1 (pembrolizumab, pidilizumab, and nivolumab) and anti-PD-L1 antibodies. Evidence suggests that anti-PD-1 antibodies are more effective than anti-CTLA-4 antibodies with less systemic toxicity. The five-year progression-free survival for immunotherapy with pembrolizumab is 21%. A therapeutic approach that includes the combination of different therapies improves overall survival and progression-free survival compared to treatment with the separate immunotherapy drugs alone. Ongoing research is looking at treatment by adoptive cell transfer. Adoptive cell transfer refers to the application of pre-stimulated, modified T cells or dendritic cells and is presently used to minimize complications from graft-versus-host disease. The combination nivolumab/relatlimab (Opdualag) was approved for medical use in the United States in March 2022. Lentigo maligna Standard excision is still being done by most surgeons. Unfortunately, the recurrence rate is exceedingly high (up to 50%). This is due to the ill-defined visible surgical margin, and the facial location of the lesions (often forcing the surgeon to use a narrow surgical margin). The narrow surgical margin used, combined with the limitation of the standard "bread-loafing" technique of fixed tissue histology, results in a high "false negative" error rate and frequent recurrences. Margin control (peripheral margins) is necessary to eliminate the false negative errors. If bread loafing is used, distances between sections should approach 0.1 mm to assure that the method approaches complete margin control. A meta-analysis of the literature in 2014 found no randomized controlled trials of surgical interventions to treat lentigo maligna or melanoma in-situ, even though surgery is the most widely used treatment. Mohs surgery has been done with a cure rate reported to be as low as 77%, and as high as 95% by another author. The "double scalpel" peripheral margin controlled excision method approximates the Mohs method in margin control, but requires a pathologist intimately familiar with the complexity of managing the vertical margin on the thin peripheral sections and staining methods. Some melanocytic nevi and melanomas-in-situ (lentigo maligna) have resolved with an experimental treatment, imiquimod (Aldara) topical cream, an immune-enhancing agent. Some derma-surgeons are combining the two methods: surgically excising the cancer and then treating the area with Aldara cream postoperatively for three months. 
While some studies have suggested the adjuvant use of topical tazarotene, the current evidence is insufficient to recommend it and suggests that it increases topical inflammation, leading to lower patient compliance. Radiation Radiation therapy is often used after surgical resection for patients with locally or regionally advanced melanoma or for patients with un-resectable distant metastases. Kilovoltage x-ray beams are often used for these treatments and have the property of the maximum radiation dose occurring close to the skin surface. It may reduce the rate of local recurrence but does not prolong survival. Radioimmunotherapy of metastatic melanoma is currently under investigation. Radiotherapy has a role in the palliation of metastatic melanoma. Prognosis Factors that affect prognosis include: tumor thickness in millimeters (Breslow's depth), depth related to skin structures (Clark level), type of melanoma, presence of ulceration, presence of lymphatic/perineural invasion, presence of tumor-infiltrating lymphocytes (if present, prognosis is better), location of lesion, presence of satellite lesions, and presence of regional or distant metastasis. Certain types of melanoma have worse prognoses but this is explained by their thickness. Less invasive melanomas even with lymph node metastases carry a better prognosis than deep melanomas without regional metastasis at time of staging. Local recurrences tend to behave similarly to a primary unless they are at the site of a wide local excision (as opposed to a staged excision or punch/shave excision) since these recurrences tend to indicate lymphatic invasion. When melanomas have spread to the lymph nodes, one of the most important factors is the number of nodes with malignancy. Extent of malignancy within a node is also important; micrometastases in which malignancy is only microscopic have a more favorable prognosis than macrometastases. In some cases micrometastases may only be detected by special staining, and if malignancy is only detectable by a rarely employed test known as the polymerase chain reaction (PCR), the prognosis is better. Macro-metastases in which malignancy is clinically apparent (in some cases cancer completely replaces a node) have a far worse prognosis, and if nodes are matted or if there is extracapsular extension, the prognosis is worse still. In addition to these variables, expression levels and copy number variations of a number of relevant genes may be used to support assessment of melanoma prognosis. Stage IV melanoma, in which it has metastasized, is the most deadly skin malignancy: five-year survival is 22.5%. When there is distant metastasis, the cancer is generally considered incurable. The five-year survival rate is less than 10%. The median survival is 6–12 months. Treatment is palliative, focusing on life extension and quality of life. In some cases, patients may live many months or even years with metastatic melanoma (depending on the aggressiveness of the treatment). Metastases to skin and lungs have a better prognosis. Metastases to brain, bone and liver are associated with a worse prognosis. Survival is better with metastasis in which the location of the primary tumor is unknown. There is not enough definitive evidence to adequately stage, and thus give a prognosis for, ocular melanoma and melanoma of soft parts, or mucosal melanoma (e.g., rectal melanoma), although these tend to metastasize more easily. 
Even though regression may increase survival, when a melanoma has regressed, it is impossible to know its original size and thus the original tumor is often worse than a pathology report might indicate. About 200 genes are prognostic in melanoma, with both unfavorable genes where high expression is correlated with poor survival and favorable genes where high expression is associated with longer survival times. Examples of unfavorable genes are MCM6 and TIMELESS; an example of a favorable gene is WIPI1. An increased neutrophil-to-lymphocyte ratio is associated with worse outcomes. Epidemiology Globally, in 2012, melanoma occurred in 232,000 people and resulted in 55,000 deaths. Australia and New Zealand have the highest rates of melanoma in the world. It has become more common in the last 20 years in areas that are mostly Caucasian. The rate of melanoma has increased in recent years, but it is not clear to what extent changes in behavior, in the environment, or in early detection are involved. Australia Australia has a very high – and increasing – rate of melanoma. In 2012, deaths from melanoma occurred at a rate of 7.3–9.8 per 100,000 population. In Australia, melanoma is the third most common cancer in either sex; indeed, its incidence is higher than for lung cancer, although the latter accounts for more deaths. It is estimated that in 2012, more than 12,000 Australians were diagnosed with melanoma: given Australia's modest population, this is better expressed as 59.6 new cases per 100,000 population per year; >1 in 10 of all new cancer cases were melanomas. Melanoma incidence in Australia is a matter of significance, for the following reasons: Australian melanoma incidence increased by more than 30 per cent between 1991 and 2009. Australian melanoma age-standardized incidence rates were, as of 2008, at least 12 times higher than the world average. Australian melanoma incidence is, by some margin, the highest in the world. Overall age-standardized cancer incidence in Australia is the highest in the world, and this is attributable to melanoma alone. Age-standardized overall cancer incidence is similar to that of New Zealand, but there is a statistically significant difference between Australia and all other parts of the developed world including North America, Western Europe, and the Mediterranean. United States In the United States, about 9,000 people die from melanoma a year. In 2011, it affected 19.7 per 100,000, and resulted in death in 2.7 per 100,000. In 2013: 71,943 people in the United States were diagnosed with melanomas of the skin, including 42,430 men and 29,513 women. 9,394 people in the United States died from melanomas of the skin, including 6,239 men and 3,155 women. The American Cancer Society's estimates for melanoma incidence in the United States for 2017 are: About 87,110 new melanomas will be diagnosed (about 52,170 in men and 34,940 in women). About 9,730 people are expected to die of melanoma (about 6,380 men and 3,350 women). Melanoma is more than 20 times more common in whites than in African Americans. Overall, the lifetime risk of getting melanoma is about 2.5% (1 in 40) for whites, 0.1% (1 in 1,000) for African Americans, and 0.5% (1 in 200) for Mexicans. The risk of melanoma increases as people age. The average age of people when the disease is diagnosed is 63. History Although melanoma is not a new disease, evidence for its occurrence in antiquity is rather scarce. 
However, one example lies in a 1960s examination of nine Peruvian mummies, radiocarbon dated to be approximately 2400 years old, which showed apparent signs of melanoma: melanotic masses in the skin and diffuse metastases to the bones. John Hunter is reported to be the first to operate on metastatic melanoma in 1787. Although not knowing precisely what it was, he described it as a "cancerous fungous excrescence". The excised tumor was preserved in the Hunterian Museum of the Royal College of Surgeons of England. It was not until 1968 that microscopic examination of the specimen revealed it to be an example of metastatic melanoma. The French physician René Laennec was the first to describe melanoma as a disease entity. His report was initially presented during a lecture for the Faculté de Médecine de Paris in 1804 and then published as a bulletin in 1806. The first English-language report of melanoma was presented by an English general practitioner from Stourbridge, William Norris in 1820. In his later work in 1857 he remarked that there is a familial predisposition for development of melanoma (Eight Cases of Melanosis with Pathological and Therapeutical Remarks on That Disease). Norris was also a pioneer in suggesting a link between nevi and melanoma and the possibility of a relationship between melanoma and environmental exposures, by observing that most of his patients had pale complexions. He also described that melanomas could be amelanotic and later showed the metastatic nature of melanoma by observing that they can disseminate to other visceral organs. The first formal acknowledgment of advanced melanoma as untreatable came from Samuel Cooper in 1840. He stated that the only chance for a cure depends upon the early removal of the disease (i.e., early excision of the malignant mole) ...' More than one and a half centuries later this situation remains largely unchanged. Terminology The word melanoma came to English from 19th-century Neo-Latin and uses combining forms derived from ancient Greek roots: melano- (denoting melanin) + -oma (denoting a tissue mass and especially a neoplasm), in turn from Greek μέλας melas, "dark", and -ωμα oma, "process". The word melanoma has a long history of being used in a broader sense to refer to any melanocytic tumor, typically, but not always malignant, but today the narrower sense referring only to malignant types has become so dominant that benign tumors are usually not called melanomas anymore and the word melanoma is now usually taken to mean malignant melanoma unless otherwise specified. Terms such as "benign melanocytic tumor" unequivocally label the benign types, and modern histopathologic tumor classifications used in medicine do not use the word for benign tumors. Research Pharmacotherapy research for un-resectable or metastatic melanoma is ongoing. Targeted therapies In clinical research, adoptive cell therapy and gene therapy, are being tested. Two kinds of experimental treatments developed at the National Cancer Institute (NCI), have been used in metastatic melanoma with tentative success. The first treatment involves adoptive cell therapy (ACT) using TILs immune cells (tumor-infiltrating lymphocytes) isolated from a person's own melanoma tumor. These cells are grown in large numbers in a laboratory and returned to the patient after a treatment that temporarily reduces normal T cells in the patient's body. TIL therapy following lymphodepletion can result in durable complete response in a variety of setups. 
The second treatment, adoptive transfer of genetically altered autologous lymphocytes, depends on delivering genes that encode so-called T cell receptors (TCRs) into the patient's lymphocytes. After that manipulation, the lymphocytes recognize and bind to certain molecules found on the surface of melanoma cells and kill them. A cancer vaccine showed modest benefit in late-stage testing in 2009 against melanoma. BRAF inhibitors About 60% of melanomas contain a mutation in the B-Raf gene. Early clinical trials suggested that B-Raf inhibitors including Plexxicon's vemurafenib could lead to substantial tumor regression in a majority of patients if their tumor contains the B-Raf mutation. In June 2011, a large clinical trial confirmed the positive findings from those earlier trials. In August 2011, vemurafenib received FDA approval for the treatment of late-stage melanoma. In May 2013 the US FDA approved dabrafenib as a single agent treatment for patients with BRAF V600E mutation-positive advanced melanoma. Some researchers believe that combination therapies that simultaneously block multiple pathways may improve efficacy by making it more difficult for the tumor cells to mutate before being destroyed. In October 2012 a study reported that combining dabrafenib with the MEK inhibitor trametinib led to even better outcomes. Compared to dabrafenib alone, progression-free survival was increased to 41% from 9%, and the median progression-free survival increased to 9.4 months versus 5.8 months. Some side effects were, however, increased in the combined study. In January 2014, the FDA approved the combination of dabrafenib and trametinib for the treatment of people with BRAF V600E/K-mutant metastatic melanoma. In June 2018, the FDA approved the combination of the BRAF inhibitor encorafenib and the MEK inhibitor binimetinib for the treatment of un-resectable or metastatic melanoma with a BRAF V600E or V600K mutation. Eventual resistance to BRAF and MEK inhibitors may be due to a cell surface protein known as EphA2, which is now being investigated. Ipilimumab At the American Society of Clinical Oncology Conference in June 2010, the Bristol Myers Squibb pharmaceutical company reported the clinical findings of their drug ipilimumab. The study found an increase in median survival from 6.4 to 10 months in patients with advanced melanomas treated with the monoclonal antibody ipilimumab, versus an experimental vaccine. It also found a one-year survival rate of 25% in the control group using the vaccine, 44% in the vaccine and ipilimumab group, and 46% in the group treated with ipilimumab alone. However, some have raised concerns about this study for its use of the unconventional control arm, rather than comparing the drug against a placebo or standard treatment. The criticism was that although ipilimumab performed better than the vaccine, the vaccine had not been tested before and may have been causing toxicity, making the drug appear better by comparison. Ipilimumab was approved by the FDA in March 2011 to treat patients with late-stage melanoma that has spread or cannot be removed by surgery. In June 2011, a clinical trial of ipilimumab plus dacarbazine combined this immune system booster with the standard chemotherapy drug that targets cell division. It showed an increase in median survival for these late-stage patients to 11 months instead of the 9 months normally seen. Researchers were also hopeful of improving the five-year survival rate, though serious adverse side-effects were seen in some patients. 
A course of treatment costs $120,000. The drug's brand name is Yervoy. Surveillance methods Advances in high-resolution ultrasound scanning have enabled surveillance of metastatic burden to the sentinel lymph nodes. The Screening and Surveillance of Ultrasound in Melanoma trial (SUNMEL) is evaluating ultrasound as an alternative to invasive surgical methods. Oncolytic virotherapy In some countries, oncolytic virotherapy methods are studied and used to treat melanoma. Oncolytic virotherapy is a promising branch of virotherapy, where oncolytic viruses are used to treat diseases; viruses can increase metabolism, reduce anti-tumor immunity and disorganize vasculature. Talimogene laherparepvec (T-VEC), a herpes simplex virus type 1–derived oncolytic immunotherapy, was shown to be useful against metastatic melanoma in 2015, with an increased survival of 4.4 months. Antivirals Antiretrovirals have been tested in vitro against melanoma. The rationale behind this lies in their potential to inhibit human endogenous retroviruses, whose activity has been associated with the development of melanoma. Evidence from studies on melanoma cell lines indicates that antiretroviral drugs, including lamivudine, doravirine, and cabotegravir, can effectively downregulate the expression of human endogenous retroviruses (HERV-K). These drugs not only reduce cell growth and invasiveness but also enhance the potential of immune checkpoint therapies. Furthermore, they have shown promise in addressing resistance mechanisms that emerge following prolonged treatment with BRAF inhibitors like dabrafenib and AZ628. By restoring apoptosis, decreasing cell viability, and influencing tumor suppressor proteins, these antiretrovirals offer a compelling strategy to tackle therapeutic resistance in melanoma. Further developments are awaited through animal model testing.
Biology and health sciences
Cancer
null
716765
https://en.wikipedia.org/wiki/Wild%20rice
Wild rice
Wild rice, also called manoomin, mnomen, psíŋ, Canada rice, Indian rice, or water oats, is any of four species of grasses that form the genus Zizania, and the grain that can be harvested from them. The grain has historically been, and still is, gathered and eaten in North America and, to a lesser extent, in China, where the plant's stem is used as a vegetable. Wild rice is not directly related to domesticated rice (Oryza sativa and Oryza glaberrima), although both belong to the same botanical tribe Oryzeae. Wild-rice grains have a chewy outer sheath with a tender inner grain that has a slightly vegetal taste. The plants grow in shallow water in small lakes and slow-flowing streams; often, only the flowering head of wild rice rises above the water. The grain is eaten by dabbling ducks and other aquatic wildlife. Species Three species of wild rice are native to North America: Northern wild rice (Zizania palustris) is an annual plant native to the Great Lakes region of North America, the aquatic areas of the Boreal Forest regions of Northern Ontario, Alberta, Saskatchewan and Manitoba in Canada, and Minnesota, Wisconsin, Michigan and Idaho in the US. Southern or annual wild rice (Z. aquatica), also an annual, grows in the Saint Lawrence River, the state of Florida, and on the Atlantic and Gulf coasts of the United States. Texas wild rice (Z. texana) is a perennial plant found only in a small area along the San Marcos River in central Texas. One species is native to Asia: Manchurian wild rice (Z. latifolia; incorrect synonym: Z. caduciflora) is a perennial native to China. Texas wild rice is in danger of extinction due to loss of suitable habitat in its limited range and to pollution. The pollen of Texas wild rice can only travel about 30 inches away from a parent plant. If pollen does not land on a receptive female flower within that distance, no seeds are produced. Manchurian wild rice has almost disappeared from the wild in its native range, but has been accidentally introduced into the wild in New Zealand and is considered an invasive species there. The genomes of northern and Manchurian wild rices have been sequenced. There appears to have been a whole-genome duplication after the genus split from Oryza. Culinary use The species most commonly harvested as grain are the annual species: Zizania palustris and Zizania aquatica. The former, though now domesticated and grown commercially, is still often gathered from lakes in the traditional manner, especially by indigenous peoples in North America; the latter was also used extensively in the past. The stems and root shoots also contain an edible portion on the interior. Use by Native Americans Native Americans and others harvest wild rice by canoeing into a stand of plants, and bending the ripe grain heads with two small wooden poles/sticks called "knockers" or "flails", so as to thresh the seeds into the canoe. One person knocks rice into the canoe while the other paddles slowly or uses a push pole. The plants are not beaten with the knockers, but require only a gentle brushing to dislodge the mature grain. Some seeds fall to the muddy bottom and germinate later in the year. The size of the knockers, as well as other details, is prescribed in state and tribal law; Minnesota statute, for example, sets maximum limits on their diameter, length, and weight. Several Native American cultures, such as the Ojibwe, consider wild rice to be a sacred component of their culture. 
The Ojibwe people call this plant manoomin, meaning "harvesting berry" (commonly translated "good berry"). In 2018, the White Earth Nation of Ojibwe granted manoomin certain rights (sometimes compared to rights of nature or to granting it legal personhood), including the right to exist and flourish; in August 2021, the Ojibwe filed a lawsuit on behalf of wild rice to stop the Enbridge Line 3 oil sands pipeline, which puts the plant's habitat at risk. Tribes that are recorded as historically harvesting Zizania aquatica are the Dakota, Menominee, Meskwaki, Ojibwe, Cree, Omaha, Ponca, Thompson, and Ho-Chunk (Winnebago). Native people who utilized Zizania palustris are the Ojibwe, Ottawa/Odawa and Potawatomi. Ways of preparing it varied from stewing the grains with venison stock and/or maple syrup, making it into stuffings for wild birds, or even steaming it into sweets like puffed rice, or rice pudding sweetened with maple syrup. For these groups, the harvest of wild rice is an important cultural (and often economic) event. The Omǣqnomenēwak tribe were named Omanoominii by the neighboring Ojibwa after this plant. Many places in Illinois, Indiana, Manitoba, Michigan, Minnesota, Ontario, Saskatchewan, and Wisconsin are named after this plant, including Mahnomen, Minnesota, and Menomonie, Wisconsin; many lakes and streams bear the name "Rice", "Wildrice", "Wild Rice", or "Zizania". Commercialisation Because of its nutritional value and taste, wild rice increased in popularity in the late 20th century, and commercial cultivation began in the U.S. and Canada to supply the increased demand. In 1950, James and Gerald Godward started experimenting with wild rice in a one-acre meadow north of Brainerd, Minnesota. They constructed dikes around the acre, dug ditches for drainage, and put in water controls. In the fall, they tilled the soil. Then, in the spring of 1951, they acquired of seed from Wildlife Nurseries Inc. They scattered the seed onto the soil, diked it in, and flooded the paddy. Much to their surprise, since they were told wild rice needs flowing water to grow well, the seeds sprouted and produced a crop. They continued to experiment with wild rice throughout the early 1950s and were the first to officially cultivate the previously wild crop. In the United States, the main producers are California and Minnesota (where it is the official state grain), and it is mainly cultivated in paddy fields. In Canada, it is usually harvested from natural bodies of water; the largest producer is Saskatchewan. Wild rice is also produced in Hungary and Australia. In Hungary, cultivation started in 1974 on the rice field of Szarvas. Manchurian wild rice Manchurian wild rice (), gathered from the wild, was once an important grain in ancient China. It is now very rare in the wild, and its use as a grain has completely disappeared in China, though it continues to be cultivated for its stems. The swollen crisp white stems of Manchurian wild rice are grown as a vegetable, popular in East and Southeast Asia. The swelling occurs because of infection with the smut fungus Ustilago esculenta. The fungus prevents the plant from flowering, so the crop is propagated asexually, the infection being passed from mother plant to daughter plant. Harvest must be made between about 120 days and 170 days after planting, after the stem begins to swell, but before the infection reaches its reproductive stage, when the stem will begin to turn black and eventually disintegrate into fungal spores. 
The vegetable is especially common in China, where it is known as gāosǔn (高筍) or jiāobái (茭白). In Japan it is known as makomodake (マコモダケ). Other names which may be used in English include coba and water bamboo. Importation of the vegetable to the United States is prohibited in order to protect North American species from the smut fungus. Nutrition Wild rice is relatively high in protein, the amino acid lysine and dietary fiber, and low in fat. Nutritional analysis shows wild rice to be the grain second only to oats in protein content per 100 calories. Like true rice, it does not contain gluten. It is also a good source of certain minerals and B vitamins. One cup of cooked wild rice provides 5% or more of the daily value of thiamin, riboflavin, iron, and potassium; 10% or more of the daily value of niacin, vitamin B6, folate, magnesium, and phosphorus; 15% of zinc; and over 20% of manganese. Safety Wild rice seeds can be infected by the highly toxic fungus ergot, which is dangerous if eaten. Infected grains have pink or purplish blotches or growths of the fungus, from the size of a seed to several times larger. Archaeology of Minnesota wild rice Food source Anthropologists since the early 1900s have focused on wild rice as a food source, often with an emphasis on the harvesting of the aquatic plant in the Lake Superior region by the Anishinaabe people, also known as the Chippewa, Ojibwa and Ojibwe. The Smithsonian Institution's Bureau of American Ethnology published The Wild Rice Gatherers in the Upper Great Lakes: A Study in American Primitive Economics by Albert Ernest Jenks in 1901. In addition to his fieldwork interviewing members of various tribal communities, Jenks examined the accounts of explorers, fur traders and government agents from the early 1600s to the late 1800s to detail an "aboriginal economic activity which is absolutely unique, and in which no article is employed not of aboriginal conception and workmanship". His study further notes wild rice's importance in the fur-trading era because the region would have been nearly inaccessible if not for the availability of wild rice and the ability to store it for long periods of time. Wild rice's social and economic importance has continued into present times for the Anishinaabe and other north woods tribal members despite the availability of more easily obtainable food sources. Processing by various cultures The continued use of wild rice from ancient to modern times has provided opportunities to examine the plant's processing by various cultures through the archaeological record they left behind during their occupation of seasonal ricing camps. Early ethnographic reports, tribal accounts and historical writings also inform archaeological research in the human use of wild rice. For example, geographer and ethnologist Henry Schoolcraft in the mid-1800s wrote about depressions in the ground on the shore of a lake with wild rice growing in the water. He wrote that wild rice processors placed animal hides in the holes, filled them with rice and stomped on the rice to thresh it. These jigging pits are part of the husking needed to process wild rice, and archaeologists see these holes in the soil stratigraphy in archaeological excavations today. Such historical records from the post-contact period in the Lake Superior region focus on Anishinaabe harvesting and processing techniques. 
Archaeological investigations of wild rice processing from the American era, before and after the creation of federal Indian reservations, also provide information on the loss of traditional harvesting areas, as 1800s fur trader and Indian interpreter Benjamin G. Armstrong wrote about outsiders "who claimed to have acquired title to all the swamps and overflowed lakes on the reservations, depriving the Indians of their rice fields, cranberry marshes and hay meadows". Despite the close association of the Anishinaabe and wild rice today, indigenous use of this food for subsistence also predates their arrival in the Lake Superior region. The Anishinaabe today were part of a larger Algonquian group who left eastern North America on a centuries-long journey to the west along the St. Lawrence River and Great Lakes. The Anishinaabe migration story details a vision to follow a giant clam shell in the sky to a place where the food grows on the water. This journey ended between the late 1400s and early 1600s in the Lake Superior wild rice country when they encountered the plant. Prehistory Archaeological and other scientific investigations have focused on the prehistoric exploitation of wild rice by humans, including: 1) the Anishinaabe, 2) so-called proto-Anishinaabe who may have later transformed into this culture from an earlier form, 3) other indigenous groups who exist today such as the Sioux people, and 4) archaeological-categorized cultures from the Initial and Terminal Woodland periods whose living lineages today are more difficult to identify. A seminal 1969 archaeological study indicated the prehistoric nature of indigenous wild rice harvesting and processing through radiocarbon dating, putting to rest argument made by some European-Americans that wild rice production did not begin until post-contact times. Researchers tested clay linings of thermal features and jigging pits associated with parching and threshing of the plant. But a more precise dating of the antiquity of human use of wild rice and the appearance of the plant itself in lakes and streams have been the subjects of continuing academic debates. These disputes may be framed around these questions: When did wild rice first appear in various areas of the region? When was it plentiful enough to be harvested in quantities to be a significant food source? What is the relationship of wild rice to the introduction of pottery and to increases in indigenous populations in the past 2,000 years? "The use of wild rice by and its influence on prehistoric people in northeast Minnesota has led to much argument among archaeologists and paleoecologists". As an example, archaeologists divide human occupation of northeast Minnesota into numerous time periods. They are: the Paleo-Indian period from 7,000 years ago (5000 BC) extending back to an uncertain time after the glaciers receded from the last Ice Age; the Archaic period from 2,500 to 7,000 years ago (5000–500 BC); the Initial Woodland period from 2,500 to 1,300 years ago (500 BC–700 AD); the Terminal Woodland period from 1,300 to 400 years ago (700–1600 AD); and the historical period after that time. These rough dates are open to debate and vary by location in the state. 
In general, two lines of inquiry have focused on archaeological wild rice: 1) The radiocarbon dating of charred wild rice seeds or the associated charcoal left behind during the parching stage of rice production, and 2) Examination of preserved wild rice seeds associated with specific prehistoric pottery styles found in excavations of processing sites. Different pottery styles in northern Minnesota are linked to certain times in the Initial and Terminal Woodland periods stretching from around 500 BC to the time of contact between indigenous peoples and Europeans. To place this in context, "Although ceramics may have appeared as early as 2,000 BC in the southeastern United States, it is about 1,500 years later that they became evident in the Midwest". After European contact, indigenous wild rice processors generally abandoned ceramic vessels in favor of metal kettles. Woodland period The Initial Woodland period in northeast Minnesota marks the beginning of the use of pottery and burial mound building in the archaeological record. The Initial Woodland also experienced an increase in indigenous population. One hypothesis is that wild rice as a food source was related to these three developments. An example of a northeast Minnesota wild rice location, the Big Rice site in the Superior National Forest, considered a classic Initial and Terminal Woodland period type site, illustrates the methods of archaeological investigations into the plant's use by humans through time. Archaeological techniques along with ethnographic records and tribal oral testimony, when taken together, suggest use of this particular lakeside site since 50 BC. On its own, accelerator mass spectrometry (AMS) radiocarbon dating of wild rice seeds and charcoal samples from the Big Rice itself indicated indigenous use of this site dating to 2,050 years ago. Furthermore, all excavation levels that solely contained ceramics only used during the Initial Woodland period (known as Laurel pottery complex) also included wild rice seeds. This indicated the use of wild rice during the Initial Woodland period, according to the study. Excavators have documented more than 50,000 pottery shards from the site from the Initial and Terminal Woodland periods. Specifically, researchers analyzed ceramic rimsherds of Laurel pottery from the Initial Woodland period and Blackduck, Sandy Lake and Selkirk pottery styles from the Terminal Woodland period. Each pottery type had wild rice seeds associated with it in the soil layers of archaeological deposits. These soil layers were not contaminated with pottery from other eras. This suggests intensive exploitation of the site for wild rice processing through these time periods by different cultures. For example, archaeologists often associate Sandy Lake pottery with the Sioux people, who were later displaced by the Anishinaabe and possibly other Algonquian migrants. Archaeologists often associate Selkirk pottery with the Cree people, an Algonquian group. An examination of the pollen sequence at Big Rice indicates that wild rice existed in "harvestable quantities" 3,600 years ago during the Archaic period. This date is 1,600 years before the AMS radiocarbon date of human-processed charred wild rice seeds at the site during the Initial Woodland period, although there is no archaeological evidence of human use of the wild rice at the site that far back in time as of yet.
Biology and health sciences
Grains
Plants
717849
https://en.wikipedia.org/wiki/Spoiler%20%28aeronautics%29
Spoiler (aeronautics)
In aeronautics, a spoiler (sometimes called a lift spoiler or lift dumper) is a device which intentionally reduces the lift component of an airfoil in a controlled way. Most often, spoilers are plates on the top surface of a wing that can be extended upward into the airflow to spoil the streamline flow. By so doing, the spoiler creates a controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers differ from airbrakes in that airbrakes are designed to increase drag without disrupting the lift distribution across the wing span, while spoilers disrupt the lift distribution as well as increasing drag. Spoilers fall into two categories: those that are deployed at controlled angles during flight to increase descent rate or control roll, and those that are fully deployed immediately on landing to greatly reduce lift ("lift dumpers") and increase drag. In modern fly-by-wire aircraft, the same set of control surfaces serve both functions. Spoilers were used by most gliders (sailplanes) until the 1960s to control their rate of descent and thus achieve a controlled landing. Since then, spoilers on gliders have almost entirely been replaced by airbrakes, usually of the Schempp-Hirth type. Spoilers and airbrakes enable the glide angle to be altered during the approach while leaving the speed unchanged. Airliners are almost always fitted with spoilers. Spoilers are used to increase descent rate without increasing speed. Their use is often limited, however, as the turbulent airflow that develops behind them causes noise and vibration, which may cause discomfort to passengers. Spoilers may also be differentially operated for roll control instead of ailerons; Martin Aircraft was the first company to develop such spoilers in 1948. On landing, however, the spoilers are nearly always fully deployed to help slow the aircraft. The increase in form drag created by the spoilers directly assists the braking effect. However, the most gain comes as the spoilers cause a dramatic loss of lift and hence the weight of the aircraft is transferred from the wings to the undercarriage, allowing the wheels to be mechanically braked with less tendency to skid. In air-cooled piston engine aircraft, spoilers may be needed to avoid shock cooling the engines. In a descent without spoilers, air speed is increased and the engine will be at low power, producing less heat than normal. The engine may cool too rapidly, resulting in stuck valves, cracked cylinders or other problems. Spoilers alleviate the situation by allowing the aircraft to descend at a desired rate while letting the engine run at a power setting that keeps it from cooling too quickly (especially true for turbocharged piston engines, which generate higher temperatures than normally aspirated engines). Spoiler controls Spoiler controls can be used for roll control (outboard or mid-span spoilers) or descent control (inboard spoilers). Some aircraft use spoilers in combination with or in lieu of ailerons for roll control, primarily to reduce adverse yaw when rudder input is limited by higher speeds. For such spoilers the term spoileron has been coined. In the case of a spoileron, in order for it to be used as a control surface, it is raised on one wing only, thus decreasing lift and increasing drag, causing roll and yaw. Eliminating dedicated ailerons also avoids the problem of control reversal and allows flaps to occupy a greater portion of the wing trailing edge. 
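As a rough illustration of the spoileron mixing just described, the sketch below maps a normalised roll command to left and right spoiler deflections, raising a panel only on the wing that is meant to drop; the deflection limit and sign convention are made-up assumptions for this example, not any particular aircraft's control law (Python):

def spoileron_mix(roll_command, max_deflection_deg=45.0):
    """Map a roll command in [-1, 1] to (left, right) spoiler deflections.

    A positive command means roll right: the right-wing panel rises,
    spoiling lift on that wing so it drops, while the left panel stays
    flush. The 45-degree limit is an arbitrary illustrative figure.
    """
    if not -1.0 <= roll_command <= 1.0:
        raise ValueError("roll command must be between -1 and 1")
    deflection = abs(roll_command) * max_deflection_deg
    if roll_command > 0:
        return 0.0, deflection   # roll right: raise right spoiler only
    if roll_command < 0:
        return deflection, 0.0   # roll left: raise left spoiler only
    return 0.0, 0.0              # wings level: both panels flush

print(spoileron_mix(0.5))   # half right command -> (0.0, 22.5)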
Almost all modern jet airliners are fitted with inboard lift spoilers which are used together during descent to increase the rate of descent and control speed. Some aircraft use lift spoilers on landing approach to control descent without changing the aircraft's attitude. One jet airliner not fitted with lift spoilers was the Douglas DC-8 which used reverse thrust in flight on the two inboard engines to control descent speed (however the aircraft was fitted with lift dumpers). The Lockheed Tristar was fitted with a system called Direct Lift Control that used the spoilers on landing approach to control descent. Airbus aircraft with fly-by-wire control utilise wide-span spoilers for descent control, spoilerons, gust alleviation, and lift dumpers. Especially on landing approach, the full width of spoilers can be seen controlling the aircraft's descent rate and bank. Lift dumpers Lift dumpers are a special type of spoiler extending along much of the wing's length and designed to dump as much lift as possible on landing. Lift dumpers have only two positions, deployed and retracted. Lift dumpers have three main functions: putting most of the weight of the aircraft on the wheels for maximum braking effect, increasing form drag, and preventing aircraft "bounce" on landing. Lift dumpers are almost always deployed automatically on touch down. The flight deck control has three positions: off, automatic ("armed"), and manual (rarely used). On landing approach "automatic" is selected and, on touchdown, a sensor called a weight-on-wheels switch signals the lift dumpers to be raised. The flight control spoilers are also raised as additional lift dumpers. Virtually all modern jet aircraft are fitted with lift dumpers. The British Aerospace 146 is fitted with particularly wide-span spoilers to generate additional drag and make reverse thrust unnecessary. A number of accidents have been caused either by inadvertently deploying lift dumpers on landing approach, or forgetting to set them to "automatic". Incidents and accidents Air Canada Flight 621 – Premature deployment of the spoilers at low altitude contributed to this crash in Toronto on 5 July 1970. United Airlines Flight 553 – Forgetting to deactivate the spoilers contributed to crash at Chicago Midway International Airport on 8 December 1972. Loftleiðir Icelandic Airlines Flight 509  – Deployment of lift dumpers while attempting to arm them 40 feet above the runway caused this accident at John F. Kennedy International Airport on 23 June 1973. American Airlines Flight 965 – Forgetting to deactivate the spoilers while climbing to avoid a mountain contributed to this crash on 20 December 1995. American Airlines Flight 1420 – Forgetting to deploy the spoilers contributed to this crash at Little Rock National Airport on 1 June 1999. Atlantic Airways Flight 670 – The spoilers did not deploy during landing on a fairly short wet runway, causing overrun and falling over a cliff, on 10 October 2006. TAM Airlines Flight 3054 – This Airbus A320's pilots were aware of their deactivated starboard engine #2 thrust reverser, and so apparently did not attempt to use it to brake when attempting to land at São Paulo's Congonhas Airport on 17 July 2007; under one theory of the cause, they used an old procedure, which reduced the required runway length for landing but was superseded because it invited pilot error, which required them to leave the engine in idle rather than reverse thrust, and mistakenly left the engine at full power. 
The plane's spoilers may have been their only method of braking at speed. The plane slid off the runway, over a major highway, and ploughed into a warehouse, killing all 186 on board as well as several on the ground. It was Brazil's worst aviation disaster. 2023 Elmina Beechcraft 390 crash – Inadvertent deployment of spoilers of a Beechcraft 390 Premier I business jet while attempting to arm them before landing, resulting in a sudden loss of lift and subsequent crash in Sungai Buloh, Selangor, Malaysia, on 17 August 2023.
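Several of the accidents above involve the arming logic described in the lift dumpers section, so a minimal state sketch may help fix the idea: the selector has off, armed and manual positions, and only the armed position lets the weight-on-wheels switch trigger deployment at touchdown. The class and method names here are illustrative assumptions, not a real avionics interface (Python):

from dataclasses import dataclass

@dataclass
class LiftDumperController:
    """Toy model of the off / armed / manual selector described above."""
    mode: str = "OFF"        # "OFF", "ARMED" or "MANUAL"
    deployed: bool = False

    def select(self, mode):
        if mode not in ("OFF", "ARMED", "MANUAL"):
            raise ValueError("unknown selector position")
        self.mode = mode
        # Manual selection raises the panels immediately; OFF retracts them.
        self.deployed = (mode == "MANUAL")

    def weight_on_wheels(self, on_ground):
        # When armed, the weight-on-wheels switch closing at touchdown
        # triggers full deployment of the lift dumpers.
        if self.mode == "ARMED" and on_ground:
            self.deployed = True
        return self.deployed

ctrl = LiftDumperController()
ctrl.select("ARMED")                 # armed on landing approach
print(ctrl.weight_on_wheels(False))  # still airborne -> False
print(ctrl.weight_on_wheels(True))   # touchdown -> True (panels deploy)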
Technology
Aircraft components
null
718081
https://en.wikipedia.org/wiki/Off-road%20vehicle
Off-road vehicle
An off-road vehicle (ORV) also referred to as an off-highway vehicle (OHV), overland vehicle, or adventure vehicle, is any vehicle designed to drive on non-paved roads and surfaces, such as trails and forest roads, that have rough, uneven, and low-traction surfaces. Off-road vehicles have been popularized through competitive off-road events, such as the annual Dakar Rally, which challenges drivers to navigate a variety of terrain across various countries. History One of the first modified off-road vehicles was the Kégresse track, a conversion undertaken first by Adolphe Kégresse, who designed the track while working for Tsar Nicholas II of Russia, between 1906 and 1916. The system uses a caterpillar track with a flexible belt, rather than interlocking metal segments. It can be fitted to a conventional vehicle to turn it into a half-track, suitable for use over rough or soft ground. After the Russian Revolution of 1917, Kégresse returned to his native France where the system was used on Citroën cars between 1921 and 1937 for off-road and military vehicles. Citroën sponsored several overland expeditions with vehicles crossing North Africa and Central Asia. A huge wheeled vehicle designed from 1937 to 1939 under the direction of Thomas Poulter, called the Antarctic Snow Cruiser, was intended to facilitate transport in Antarctica. The project featured several innovative aspects but faced operational challenges under harsh conditions in Antarctica, leading to its eventual discontinuation. Early off-road vehicles, such as the U.S. Jeep Wagoneer and Ford Bronco, the British Range Rover, and the station wagon-bodied Japanese Toyota Land Cruiser, Nissan Patrol, and Suzuki Lj's series all had bodies similar to those of a station wagon, on a body comparable to that of a light truck, with four-wheel-drive drivetrains. During the 1990s, as off-road vehicles became more popular, more companies started to produce their own line of what became known as Sport Utility Vehicles. Manufacturers began to add more features to allow off-road vehicles to compete in the consumer market with regular vehicles. Over time, this evolved into modern SUV. It also evolved into the newer crossover vehicle, where utility and off-road capability were sacrificed for better on-road handling and luxury. Technical details To be able to drive off the pavement, off-road vehicles need low ground pressure, high ground clearance, and a way to keep their wheels or tracks grounded on uneven surfaces. Wheeled vehicles manage this by using large or additional tires, combined with high and flexible suspension. Tracked vehicles have wide tracks, and flexible suspension on the road wheels. The choice of wheels versus tracks is one of cost and suitability. A tracked drive-train is more expensive to produce and maintain, but has greater off-road performance. Wheeled drive-trains are cheaper and enables higher speeds. Tires play a significant role for any wheeled off-road vehicle, with off-road tire tread types varying depending on the terrain type. Common types of off-road tires are A/T (All Terrain) and M/T (Mud Terrain). While the A/T tires perform well on the sand, they are less capable in mud. Sand Blaster and Mud bogging tires can be used for the most challenging terrains such as dirt, sand, and water to maintain traction at high angles and speeds (off-road motorsport). Most off-road vehicles are fitted with low gearing, allowing the operator to optimise the engine's available power while moving slowly through challenging terrain. 
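To make the effect of low gearing concrete, the following sketch multiplies hypothetical gearbox, transfer-case and final-drive ratios into an overall reduction and converts engine speed into road speed; the figures are illustrative assumptions rather than any vehicle's specification, and the mechanisms that provide the extra reduction are discussed next (Python):

import math

def crawl_speed_kmh(engine_rpm, gearbox_ratio, transfer_ratio,
                    final_drive_ratio, tire_diameter_m):
    """Road speed for a given engine speed and total drivetrain reduction.

    The overall reduction is the product of the three ratios: wheel torque
    is multiplied, and wheel speed divided, by the same factor.
    """
    overall_reduction = gearbox_ratio * transfer_ratio * final_drive_ratio
    wheel_rpm = engine_rpm / overall_reduction
    meters_per_minute = wheel_rpm * math.pi * tire_diameter_m
    return meters_per_minute * 60 / 1000

# Hypothetical figures: a 4:1 first gear, a 2.5:1 low-range transfer case,
# a 4:1 final drive and 0.8 m tires give a walking-pace crawl at 2,000 rpm.
print(round(crawl_speed_kmh(2000, 4.0, 2.5, 4.0, 0.8), 1))  # about 7.5 km/h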
An internal combustion engine coupled to a standard gearbox often has an output speed that is too high; this is resolved using either a very low ("granny") first gear (like the all-wheel drive Volkswagen Transporter versions) or an additional gearbox in line with the first, called a reduction drive. Some vehicles, like the Bv206, also have torque converters to reduce the gearing. Criticism Safety SUVs have a higher center of gravity, so they are more likely to be in rollover accidents than passenger cars. According to a study conducted in the United States, SUVs have twice the fatality rate of passenger cars and have nearly triple the fatality rate in rollover accidents. In the United States, light trucks (including SUVs) represent 36 percent of all registered vehicles. They are involved in about half of the fatal two-vehicle crashes with passenger cars, and 80 percent of these fatalities are to occupants of passenger cars. Environment In the United States, the number of ORV users since 1972 has climbed sevenfold—from five million to 36 million in 2000. Government policies that protect wilderness but also allow recreational ORVs have been the subject of some debate within the United States and other countries. All trail and off-trail activities impact natural vegetation and wildlife, which can lead to erosion, invasive species, habitat loss, and, ultimately, species loss, decreasing an ecosystem's ability to maintain homeostasis. ORVs cause greater stress to the environment than foot traffic alone, and ORV operators who attempt to test their vehicles against natural obstacles can do significantly more damage than those who follow legal trails. Illegal use of off-road vehicles has been identified as a serious land management problem ranked with dumping garbage and other forms of vandalism. Many user organisations, such as Tread Lightly! and the Sierra Club, publish and encourage appropriate trail ethics. ORVs have also been criticised for producing more pollution in areas that might normally have none, in addition to noise pollution that can cause hearing impairment and stress in wildlife. In 2002, the United States Environmental Protection Agency adopted emissions standards for all-terrain vehicles that "when fully implemented in 2012... are expected to prevent the release of more than two million tons of air pollution each year—the equivalent of removing the pollution from more than 32 million cars every year." Civilian use Common commercial vehicles used for off-roading include four-wheel-drive pickup trucks and SUVs such as the Ford F-Series, Jeep Wrangler, and Toyota Land Cruiser, among others. Typically, owners will perform additional modifications to the wheels, tires, suspension, and body to improve their performance off-road. Several decommissioned military vehicles have also been used by civilians, including the Jeep CJ and the AM General Hummer. Some, like the early Land Rovers, were adapted to military use from civilian specifications. Specialised off-road vehicles include utility terrain vehicles (UTVs), all-terrain vehicles (ATVs), dirt bikes, dune buggies, rock crawlers, and sandrails. Other applications Military vehicle The military market for off-road vehicles used to be large, but, since the fall of the Iron Curtain in the 1990s, it has dried up to some extent. The U.S. jeeps, developed during World War II, coined the word many people use for any light off-road vehicle. 
In the U.S., the Jeeps' successor from the mid-1980s was the AM General HMMWV series. The Red Army used the GAZ-61 and GAZ-64 during World War II. The Eastern Bloc used the GAZ-69 and UAZ-469 in similar roles.
Experimental vehicle
Commercial vehicle
Examples include the OKA bus, the Coober Pedy Oodnadatta One Day Mail Run, and Arctic Trucks.
Scientific vehicle
Examples include the Northwest Passage Drive Expedition and the WindSled.
Expedition vehicle
Vehicles used as the primary transport in an expedition, not for profit, scientific research or personal use. Examples include the American Expedition Vehicles AEV Brute and AEV Prospector, and the EarthCruiser.
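The low gearing described in the technical section above can be made concrete with a small worked calculation. The sketch below is illustrative only: the gear ratios, engine speed, tyre size, and the helper function are assumed round numbers for this example, not the specification of any particular vehicle.

```python
# Illustrative sketch: how a low-range reduction drive lowers wheel speed for crawling.
# All numbers are assumed example values, not the figures of any real vehicle.

import math

def wheel_speed_kmh(engine_rpm, first_gear, final_drive, transfer_case, tire_diameter_m):
    """Road speed (km/h) for a given engine speed and overall gear reduction."""
    overall_ratio = first_gear * final_drive * transfer_case
    wheel_rpm = engine_rpm / overall_ratio
    wheel_circumference_m = math.pi * tire_diameter_m
    return wheel_rpm * wheel_circumference_m * 60 / 1000

ENGINE_RPM = 2000        # engine held near its torque peak (assumed)
FIRST_GEAR = 4.0         # transmission first-gear ratio (assumed)
FINAL_DRIVE = 4.1        # axle (differential) ratio (assumed)
TIRE_DIAMETER_M = 0.80   # roughly a 31-inch off-road tyre

# Transfer case in 1:1 "high range": about 18 km/h at 2000 rpm
print(wheel_speed_kmh(ENGINE_RPM, FIRST_GEAR, FINAL_DRIVE, 1.0, TIRE_DIAMETER_M))

# With an additional 2.7:1 "low range" reduction, speed drops to roughly 7 km/h
print(wheel_speed_kmh(ENGINE_RPM, FIRST_GEAR, FINAL_DRIVE, 2.7, TIRE_DIAMETER_M))
```

Whether the reduction comes from a very low "granny" first gear or from a separate low-range gearbox, the effect is the same: the engine stays in its usable power band while the wheels turn slowly.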
Technology
Motorized road transport
null
718855
https://en.wikipedia.org/wiki/Self-organized%20criticality
Self-organized criticality
Self-organized criticality (SOC) is a property of dynamical systems that have a critical point as an attractor. Their macroscopic behavior thus displays the spatial or temporal scale-invariance characteristic of the critical point of a phase transition, but without the need to tune control parameters to a precise value, because the system, effectively, tunes itself as it evolves towards criticality. The concept was put forward by Per Bak, Chao Tang and Kurt Wiesenfeld ("BTW") in a paper published in 1987 in Physical Review Letters, and is considered to be one of the mechanisms by which complexity arises in nature. Its concepts have been applied across fields as diverse as geophysics, physical cosmology, evolutionary biology and ecology, bio-inspired computing and optimization (mathematics), economics, quantum gravity, sociology, solar physics, plasma physics, neurobiology and others. SOC is typically observed in slowly driven non-equilibrium systems with many degrees of freedom and strongly nonlinear dynamics. Many individual examples have been identified since BTW's original paper, but to date there is no known set of general characteristics that guarantee a system will display SOC. Overview Self-organized criticality is one of a number of important discoveries made in statistical physics and related fields over the latter half of the 20th century, discoveries which relate particularly to the study of complexity in nature. For example, the study of cellular automata, from the early discoveries of Stanislaw Ulam and John von Neumann through to John Conway's Game of Life and the extensive work of Stephen Wolfram, made it clear that complexity could be generated as an emergent feature of extended systems with simple local interactions. Over a similar period of time, Benoît Mandelbrot's large body of work on fractals showed that much complexity in nature could be described by certain ubiquitous mathematical laws, while the extensive study of phase transitions carried out in the 1960s and 1970s showed how scale invariant phenomena such as fractals and power laws emerged at the critical point between phases. The term self-organized criticality was first introduced in Bak, Tang and Wiesenfeld's 1987 paper, which clearly linked together those factors: a simple cellular automaton was shown to produce several characteristic features observed in natural complexity (fractal geometry, pink (1/f) noise and power laws) in a way that could be linked to critical-point phenomena. Crucially, however, the paper emphasized that the complexity observed emerged in a robust manner that did not depend on finely tuned details of the system: variable parameters in the model could be changed widely without affecting the emergence of critical behavior: hence, self-organized criticality. Thus, the key result of BTW's paper was its discovery of a mechanism by which the emergence of complexity from simple local interactions could be spontaneous—and therefore plausible as a source of natural complexity—rather than something that was only possible in artificial situations in which control parameters are tuned to precise critical values. An alternative view is that SOC appears when the criticality is linked to a value of zero of the control parameters. Despite the considerable interest and research output generated from the SOC hypothesis, there remains no general agreement with regards to its mechanisms in abstract mathematical form. Bak Tang and Wiesenfeld based their hypothesis on the behavior of their sandpile model. 
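Because the sandpile model is the canonical illustration of SOC, a minimal simulation helps make the idea concrete. The sketch below implements the standard Abelian (BTW-style) toppling rule on a square grid with open boundaries; it is a toy reconstruction under those textbook assumptions, not the original authors' code, and the grid size and step counts are arbitrary choices.

```python
# Minimal sketch of a Bak-Tang-Wiesenfeld-style sandpile on an L x L grid.
# Rule: drop one grain on a random site; any site holding >= 4 grains "topples",
# sending one grain to each of its 4 neighbours (grains fall off at the edges).
# The avalanche size is the number of topplings triggered by a single dropped grain.

import random
from collections import Counter

def avalanche(grid, L, x, y):
    """Drop one grain at (x, y), relax the pile, and return the number of topplings."""
    grid[x][y] += 1
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    topplings = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:        # may already have relaxed via another toppling
            continue
        grid[i][j] -= 4
        topplings += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < L and 0 <= nj < L:   # grains leaving the grid are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topplings

L = 40
grid = [[0] * L for _ in range(L)]
sizes = Counter()
for step in range(100_000):
    s = avalanche(grid, L, random.randrange(L), random.randrange(L))
    if step > 30_000:             # discard the transient before the stationary state
        sizes[s] += 1

# Small avalanches are common; large, system-spanning ones are rare but far from
# exponentially suppressed, the hallmark of the scale-invariant critical state.
for s in (1, 2, 4, 8, 16, 32, 64):
    print(s, sizes[s])
```

Run long enough, the pile organizes itself into a stationary state in which avalanche sizes span many orders of magnitude without any parameter being tuned, which is the scale-invariance referred to above.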
Models of self-organized criticality In chronological order of development: Stick-slip model of fault failure Bak–Tang–Wiesenfeld sandpile Forest-fire model Olami–Feder–Christensen model Bak–Sneppen model Early theoretical work included the development of a variety of alternative SOC-generating dynamics distinct from the BTW model, attempts to prove model properties analytically (including calculating the critical exponents), and examination of the conditions necessary for SOC to emerge. One of the important issues for the latter investigation was whether conservation of energy was required in the local dynamical exchanges of models: the answer in general is no, but with (minor) reservations, as some exchange dynamics (such as those of BTW) do require local conservation at least on average . It has been argued that the energy released in the BTW "sandpile" model should actually generate 1/f2 noise rather than 1/f noise. This claim was based on untested scaling assumptions, and a more rigorous analysis showed that sandpile models generally produce 1/fa spectra, with a<2. However, the dynamics of the accumulated stress does exhibit the 1/f noise in the BTW model. Other simulation models were proposed later that could also produce true 1/f noise. In addition to the nonconservative theoretical model mentioned above , other theoretical models for SOC have been based upon information theory, mean field theory, the convergence of random variables, and cluster formation. A continuous model of self-organised criticality is proposed by using tropical geometry. Key theoretical issues yet to be resolved include the calculation of the possible universality classes of SOC behavior and the question of whether it is possible to derive a general rule for determining if an arbitrary algorithm displays SOC. Self-organized criticality in nature SOC has become established as a strong candidate for explaining a number of natural phenomena, including: The magnitude of earthquakes (Gutenberg–Richter law) and frequency of aftershocks (Omori law) Fluctuations in economic systems such as financial markets (references to SOC are common in econophysics) The evolution of proteins Forest fires Neuronal avalanches in the cortex Acoustic emission from fracturing materials Despite the numerous applications of SOC to understanding natural phenomena, the universality of SOC theory has been questioned. For example, experiments with real piles of rice revealed their dynamics to be far more sensitive to parameters than originally predicted. Furthermore, it has been argued that 1/f scaling in EEG recordings are inconsistent with critical states, and whether SOC is a fundamental property of neural systems remains an open and controversial topic. Self-organized criticality and optimization It has been found that the avalanches from an SOC process make effective patterns in a random search for optimal solutions on graphs. An example of such an optimization problem is graph coloring. The SOC process apparently helps the optimization from getting stuck in a local optimum without the use of any annealing scheme, as suggested by previous work on extremal optimization.
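The closing remark about SOC-driven search can be illustrated with a deliberately simplified, extremal-optimization-flavoured heuristic for graph colouring. The function names, the always-pick-the-worst-vertex rule, and the toy graph are assumptions made for this sketch; published extremal optimization selects among vertices probabilistically by fitness rank, which this simplification omits.

```python
# Toy extremal-optimization-style search for graph colouring: repeatedly force the
# vertex involved in the most conflicts to take a fresh random colour, letting
# cascades of local changes do the work instead of an annealing schedule.

import random

def conflicts(vertex, colouring, adjacency):
    """Number of neighbours sharing this vertex's colour."""
    return sum(colouring[vertex] == colouring[n] for n in adjacency[vertex])

def eo_colouring(adjacency, k, steps=20_000, seed=0):
    rng = random.Random(seed)
    vertices = list(adjacency)
    colouring = {v: rng.randrange(k) for v in vertices}
    best_cost = sum(conflicts(v, colouring, adjacency) for v in vertices) // 2
    best_col = dict(colouring)
    for _ in range(steps):
        # "Extremal" move: the locally worst vertex is forced to change.
        worst = max(vertices, key=lambda v: conflicts(v, colouring, adjacency))
        colouring[worst] = rng.randrange(k)
        cost = sum(conflicts(v, colouring, adjacency) for v in vertices) // 2
        if cost < best_cost:
            best_cost, best_col = cost, dict(colouring)
        if best_cost == 0:
            break
    return best_cost, best_col

# Tiny example: a 5-cycle needs 3 colours; the search should typically reach 0 conflicts.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
cost, _ = eo_colouring(ring, k=3)
print("conflicting edges remaining:", cost)
```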
Physical sciences
Phase transitions
Physics
719187
https://en.wikipedia.org/wiki/Abies%20nordmanniana
Abies nordmanniana
Abies nordmanniana, the Nordmann fir or Caucasian fir, is a fir indigenous to the mountains south and east of the Black Sea, in Turkey, Georgia and the Russian Caucasus. It occurs at altitudes of 900–2,200 m on mountains with precipitation of over 1,000 mm. The current distribution of the Nordmann fir is associated with the forest refugia that existed during the Ice Age at the eastern and southern Black Sea coast. In spite of currently suitable climate, the species is not found in areas of the Eastern Greater Caucasus, which are separated from the Black Sea Coast by more than 400–500 km. Description It is a large evergreen coniferous tree growing to 55–61 m tall and with a trunk diameter of up to 2 m. In the Western Caucasus Reserve, some specimens have been reported to be and even tall, the tallest trees in the Caucasus, Anatolia, the Russian Federation and the continent of Europe. The leaves are needle-like, flattened, 1.8–3.5 cm long and 2 mm wide by 0.5 mm thick, glossy dark green above, and with two blue-white bands of stomata below. The tip of the leaf is usually blunt, often slightly notched at the tip, but can be pointed, particularly on strong-growing shoots on young trees. The cones are 10–20 cm long and 4–5 cm broad, with about 150–200 scales, each scale with an exserted bract and two winged seeds; they disintegrate when mature to release the seeds. Taxonomy The species is named by Christian von Steven after his compatriot, the Finnish zoologist Alexander von Nordmann (1803–1866), who was the director of the Odessa Botanical Gardens. Subspecies There are two subspecies (treated as distinct species by some botanists), intergrading where they meet in northern Turkey at about 36°E longitude: Caucasian fir (Abies nordmanniana subsp. nordmanniana). Native to the Caucasus mountains and eastern Pontic Mountains of northeastern Turkey west to about 36°E. Shoots often pubescent (hairy). Turkish fir (Abies nordmanniana subsp. equi-trojani). Native to northwestern Turkey, including the western Pontic Mountains as well as Uludağ and other mountains southeast of the Sea of Marmara. Often treated as a separate species, Abies bornmuelleriana. In Turkey this subspecies is treated as a distinct species (Abies equi-trojani Asch. & Sint. ex Bois.). It is endemic to a single location on Kaz Dağı (Mount Ida) in Balıkesir Province, northwestern Turkey. This subspecies occupies an area of only 164 km2 and is assessed as "Endangered". Its shoots are usually glabrous (hairless). Uses The Nordmann fir is one of the most important species grown for Christmas trees, being favoured for its attractive foliage, with needles that are not sharp and do not drop readily when the tree dries out. It is also a popular ornamental tree in parks and large gardens, and along with the cultivar 'Golden Spreader' has gained the Royal Horticultural Society's Award of Garden Merit. In Europe, the tree has also been used for reforestation as a way to mitigate expected forest decline caused by climate changes. The wood is soft and white, and is used for general construction, paper, etc. Gallery
Biology and health sciences
Pinaceae
Plants
719534
https://en.wikipedia.org/wiki/Biological%20carbon%20fixation
Biological carbon fixation
Biological carbon fixation, or сarbon assimilation, is the process by which living organisms convert inorganic carbon (particularly carbon dioxide, ) to organic compounds. These organic compounds are then used to store energy and as structures for other biomolecules. Carbon is primarily fixed through photosynthesis, but some organisms use chemosynthesis in the absence of sunlight. Chemosynthesis is carbon fixation driven by chemical energy rather than from sunlight. The process of biological carbon fixation plays a crucial role in the global carbon cycle, as it serves as the primary mechanism for removing from the atmosphere and incorporating it into living biomass. The primary production of organic compounds allows carbon to enter the biosphere. Carbon is considered essential for life as a base element for building organic compounds. The element of carbon forms the bases biogeochemical cycles (or nutrient cycles) and drives communities of living organisms. Understanding biological carbon fixation is essential for comprehending ecosystem dynamics, climate regulation, and the sustainability of life on Earth. Organisms that grow by fixing carbon, such as most plants and algae, are called autotrophs. These include photoautotrophs (which use sunlight) and lithoautotrophs (which use inorganic oxidation). Heterotrophs, such as animals and fungi, are not capable of carbon fixation but are able to grow by consuming the carbon fixed by autotrophs or other heterotrophs. Seven natural autotrophic carbon fixation pathways are currently known. They are the: i) Calvin-Benson-Bassham (Calvin Cycle), ii) Reverse Krebs (rTCA) cycle, iii) the reductive acetyl-CoA (Wood-Ljungdahl pathway), iv) 3-hydroxy propionate [3-HP] bicycle, v) 3-hydroypropionate/4- hydroxybutyrate (3-HP/4-HB) cycle, vi) the dicarboxylate/ 4-hydroxybutyrate (DC/4-HB) cycle, and vii) the reductive glycine (rGly) pathway. "Fixed carbon," "reduced carbon," and "organic carbon" may all be used interchangeably to refer to various organic compounds. Net vs. gross CO2 fixation The primary form of fixed inorganic carbon is carbon dioxide (CO2). It is estimated that approximately 250 billion tons of carbon dioxide are converted by photosynthesis annually. The majority of the fixation occurs in terrestrial environments, especially the tropics. The gross amount of carbon dioxide fixed is much larger since approximately 40% is consumed by respiration following photosynthesis. Historically, it is estimated that approximately 2×1011 billion tons of carbon has been fixed since the origin of life. Overview of the carbon fixation cycles Seven autotrophic carbon fixation pathways are known: the Calvin Cycle, the Reverse Krebs Cycle, the reductive acetyl-CoA, the 3-HP bicycle, the 3-HP/4-HB cycle, the DC/4-HB cycles, and the reductive glycine pathway. The organisms the Calvin cycle is found in are plants, algae, cyanobacteria, aerobic proteobacteria, and purple bacteria. The Calvin cycle fixes carbon in the chloroplasts of plants and algae, and in the cyanobacteria. It also fixes carbon in the anoxygenic photosynthesis in one type of Pseudomonadota called purple bacteria, and in some non-phototrophic Pseudomonadota. Of the other autotrophic pathways, three are known only in bacteria (the reductive citric acid cycle, the 3-hydroxypropionate cycle, and the reductive glycine pathway), two only in archaea (two variants of the 3-hydroxypropionate cycle), and one in both bacteria and archaea (the reductive acetyl CoA pathway). 
Sulfur- and hydrogen-oxidizing bacteria often use the Calvin cycle or the reductive citric acid cycle. List of pathways Calvin cycle The Calvin cycle accounts for 90% of biological carbon fixation. Consuming adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADPH), the Calvin cycle in plants accounts for the predominance of carbon fixation on land. In algae and cyanobacteria, it accounts for the dominance of carbon fixation in the oceans. The Calvin cycle converts carbon dioxide into sugar, as triose phosphate (TP), which is glyceraldehyde 3-phosphate (GAP) together with dihydroxyacetone phosphate (DHAP): 3 CO2 + 12 e− + 12 H+ + Pi → TP + 4 H2O An alternative perspective accounts for NADPH (source of e−) and ATP: 3 CO2 + 6 NADPH + 6 H+ + 9 ATP + 5 H2O → TP + 6 NADP+ + 9 ADP + 8 Pi The formula for inorganic phosphate (Pi) is HOPO32− + 2 H+.Formulas for triose and TP are C2H3O2-CH2OH and C2H3O2-CH2OPO32− + 2 H+. Reverse Krebs cycle The reverse Krebs cycle, also known as the reverse TCA cycle (rTCA) or reductive citric acid cycle, is an alternative to the standard Calvin-Benson cycle for carbon fixation. It has been found in strict anaerobic or microaerobic bacteria (as Aquificales) and anaerobic archea. It was discovered by Evans, Buchanan and Arnon in 1966 working with the photosynthetic green sulfur bacterium Chlorobium limicola. In particular, it is one of the most used pathways in hydrothermal vents by the Campylobacterota. This feature allows primary production in the ocean's aphotic environments, or "dark primary production." Without it, there would be no primary production in aphotic environments, which would lead to habitats without life. The cycle involves the biosynthesis of acetyl-CoA from two molecules of CO2. The key steps of the reverse Krebs cycle are: Oxaloacetate to malate, using NADH + H+ Oxaloacetate + NADH/H+ -> Malate + NAD+ Fumarate to succinate, catalyzed by an oxidoreductase, Fumarate reductase Fumarate + FADH2 <=> Succinate + FAD Succinate to succinyl-CoA, an ATP-dependent step Succinate + ATP + CoA -> Succinyl-CoA + ADP + Pi Succinyl-CoA to alpha-ketoglutarate, using one molecule of CO2 Succinyl-CoA + CO2 + Fd{(red)} -> alpha-ketoglutarate + Fd{(ox)} Alpha-ketoglutarate to isocitrate, using NADPH + H+ and another molecule of CO2 Alpha-ketoglutarate + CO2 + NAD(P)H/H+ -> Isocitrate + NAD(P)+ Citrate converted into oxaloacetate and acetyl-CoA, this is an ATP dependent step and the key enzyme is the ATP citrate lyase Citrate + ATP + CoA -> Oxaloacetate + Acetyl-CoA + ADP + Pi This pathway is cyclic due to the regeneration of the oxaloacetate. The bacteria Gammaproteobacteria and Riftia pachyptila switch from the Calvin-Benson cycle to the rTCA cycle in response to concentrations of H2S. Reductive acetyl CoA pathway The reductive acetyl CoA pathway (CoA) pathway, also known as the Wood-Ljungdahl pathway uses CO2 as electron acceptor and carbon source, and H2 as an electron donor to form acetic acid. This metabolism is widespread within the phylum Bacillota, especially in the Clostridia. The pathway is also used by methanogens, which are mainly Euryarchaeota, and several anaerobic chemolithoautotrophs, such as sulfate-reducing bacteria and archaea. It is probably performed also by the Brocadiales, an order of Planctomycetota that oxidize ammonia in anaerobic conditions. Hydrogenotrophic methanogenesis, which is only found in certain archaea and accounts for 80% of global methanogenesis, is also based on the reductive acetyl CoA pathway. 
The Carbon Monoxide Dehydrogenase/Acetyl-CoA Synthase is the oxygen-sensitive enzyme that permits the reduction of CO2 to CO and the synthesis of acetyl-CoA in several reactions. One branch of this pathway, the methyl branch, is similar but non-homologous between bacteria and archaea. In this branch happens the reduction of CO2 to a methyl residue bound to a cofactor. The intermediates are formate for bacteria and formyl-methanofuran for archaea, and also the carriers, tetrahydrofolate and tetrahydropterins respectively in bacteria and archaea, are different, such as the enzymes forming the cofactor-bound methyl group. Otherwise, the carbonyl branch is homologous between the two domains and consists of the reduction of another molecule of CO2 to a carbonyl residue bound to an enzyme, catalyzed by the CO dehydrogenase/acetyl-CoA synthase. This key enzyme is also the catalyst for the formation of acetyl-CoA starting from the products of the previous reactions, the methyl and the carbonyl residues. This carbon fixation pathway requires only one molecule of ATP for the production of one molecule of pyruvate, which makes this process one of the main choice for chemolithoautotrophs limited in energy and living in anaerobic conditions. 3-Hydroxypropionate [3-HP] bicycle The 3-hydroxypropionate bicycle, also known as 3-HP/malyl-CoA cycle, discovered only in 1989, is utilized by green non-sulfur phototrophs of Chloroflexaceae family, including the maximum exponent of this family Chloroflexus auranticus by which this way was discovered and demonstrated. The 3-hydroxypropionate bicycle is composed of two cycles, and the name of this way comes from the 3-hydroxypropionate, which corresponds to an intermediate characteristic of it. The first cycle is a way of synthesis of glyoxylate. During this cycle, two equivalents of bicarbonate are fixed by the action of two enzymes: the acetyl-CoA carboxylase catalyzes the carboxylation of the acetyl-CoA to malonyl-CoA and propionyl-CoA carboxylase catalyses the carboxylation of propionyl-CoA to methylamalonyl-CoA. From this point, a series of reactions lead to the formation of glyoxylate, which will thus become part of the second cycle. In the second cycle, glyoxylate is approximately one equivalent of propionyl-CoA forming methylamalonyl-CoA. This, in turn, is then converted through a series of reactions into citramalyl-CoA. The citramalyl-CoA is split into pyruvate and acetyl-CoA thanks to the enzyme MMC lyase. The pyruvate is released at this point, while the acetyl-CoA is reused and carboxylated again at malonyl-CoA, thus reconstituting the cycle. A total of 19 reactions are involved in the 3-hydroxypropionate bicycle, and 13 multifunctional enzymes are used. The multi-functionality of these enzymes is an important feature of this pathway which thus allows the fixation of three bicarbonate molecules. It is a costly pathway: 7 ATP molecules are consumed to synthesise the new pyruvate and 3 ATP for the phosphate triose. An important characteristic of this cycle is that it allows the co-assimilation of numerous compounds, making it suitable for the mixotrophic organisms. Cycles related to the 3-hydroxypropionate cycle A variant of the 3-hydroxypropionate cycle was found to operate in the aerobic extreme thermoacidophile archaeon Metallosphaera sedula. This pathway is called the 3-hydroxypropionate/4-hydroxybutyrate (3-HP/4-HB) cycle. Yet another variant of the 3-hydroxypropionate cycle is the dicarboxylate/4-hydroxybutyrate (DC/4-HB) cycle. 
It was discovered in anaerobic archaea. It was proposed in 2008 for the hyperthermophile archeon Ignicoccus hospitalis. Enoyl-CoA carboxylases/reductases fixation is catalyzed by enoyl-CoA carboxylases/reductases. Non-autotrophic pathways Although no heterotrophs use carbon dioxide in biosynthesis, some carbon dioxide is incorporated in their metabolism. Notably pyruvate carboxylase consumes carbon dioxide (as bicarbonate ions) as part of gluconeogenesis, and carbon dioxide is consumed in various anaplerotic reactions. 6-phosphogluconate dehydrogenase catalyzes the reductive carboxylation of ribulose 5-phosphate to 6-phosphogluconate in E. coli under elevated CO2 concentrations. Carbon isotope discrimination Some carboxylases, particularly RuBisCO, preferentially bind the lighter carbon stable isotope carbon-12 over the heavier carbon-13. This is known as carbon isotope discrimination and results in carbon-12 to carbon-13 ratios in the plant that are higher than in the free air. Measurement of this isotopic ratio is important in the evaluation of water use efficiency in plants, and also in assessing the possible or likely sources of carbon in global carbon cycle studies. Biological carbon fixation in soils In addition to photosynthetic and chemosynthetic processes, biological carbon fixation occurs in soil through the activity of microorganisms, such as bacteria and fungi. These soil microbes play a crucial role in the global carbon cycle by sequestering carbon from decomposed organic matter and recycling it back into the soil, thereby contributing to soil fertility and ecosystem productivity. In soil environments, organic matter derived from dead plant and animal material undergoes decomposition, a process carried out by a diverse community of microorganisms. During decomposition, complex organic compounds are broken down into simpler molecules by the action of enzymes produced by bacteria, fungi, and other soil organisms. As organic matter is decomposed, carbon is released in various forms, including carbon dioxide () and dissolved organic carbon (DOC). However, not all of the carbon released during decomposition is immediately lost to the atmosphere; a significant portion is retained in the soil through processes collectively known as soil carbon sequestration. Soil microbes, particularly bacteria and fungi, play a pivotal role in this process by incorporating decomposed organic carbon into their biomass or by facilitating the formation of stable organic compounds, such as humus and soil organic matter. One key mechanism by which soil microbes sequester carbon is through the production of microbial biomass. Bacteria and fungi assimilate carbon from decomposed organic matter into their cellular structures as they grow and reproduce. This microbial biomass serves as a reservoir for stored carbon in the soil, effectively sequestering carbon from the atmosphere. Additionally, soil microbes contribute to the formation of stable soil organic matter through the synthesis of extracellular polymers, enzymes, and other biochemical compounds. These substances help bind together soil particles, forming aggregates that protect organic carbon from microbial decomposition and physical erosion. Over time, these aggregates accumulate in the soil, resulting in the formation of soil organic matter, which can persist for centuries to millennia. 
The sequestration of carbon in soil not only helps mitigate the accumulation of atmospheric CO2, and with it climate change, but also enhances soil fertility, water retention, and nutrient cycling, thereby supporting plant growth and ecosystem productivity. Consequently, understanding the role of soil microbes in biological carbon fixation is essential for managing soil health, mitigating climate change, and promoting sustainable land management practices. Biological carbon fixation is a fundamental process that sustains life on Earth by regulating atmospheric CO2 levels, supporting the growth of plants and other photosynthetic organisms, and maintaining ecological balance.
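The overall Calvin cycle equation quoted earlier (3 CO2 plus 6 NADPH plus 9 ATP yielding one triose phosphate) lends itself to a quick cost calculation. The sketch below only restates that stoichiometry; the per-hexose figure rests on the common textbook simplification that two triose phosphates are condensed into one six-carbon sugar.

```python
# Back-of-the-envelope energetics of the Calvin cycle, using the overall equation
# quoted above: 3 CO2 + 6 NADPH + 9 ATP -> 1 triose phosphate (TP).
# The per-glucose line assumes the simplification that two TP make one hexose.

CO2_PER_TP = 3
ATP_PER_TP = 9
NADPH_PER_TP = 6

print("per CO2 fixed:", ATP_PER_TP / CO2_PER_TP, "ATP and",
      NADPH_PER_TP / CO2_PER_TP, "NADPH")                      # 3 ATP and 2 NADPH
print("per hexose   :", 2 * ATP_PER_TP, "ATP and",
      2 * NADPH_PER_TP, "NADPH from", 2 * CO2_PER_TP, "CO2")   # 18 ATP, 12 NADPH, 6 CO2
```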
Biology and health sciences
Metabolic processes
Biology
720240
https://en.wikipedia.org/wiki/Potassium%20bitartrate
Potassium bitartrate
Potassium bitartrate, also known as potassium hydrogen tartrate, with formula KC4H5O6, is a chemical compound with a number of uses. It is the potassium acid salt of tartaric acid (a carboxylic acid). Especially in cooking, it is also known as cream of tartar. It is used as a component of baking powders and baking mixes, as mordant in textile dyeing, as reducer of chromium trioxide in mordants for wool, as a metal processing agent that prevents oxidation, as an intermediate for other potassium tartrates, as a cleaning agent when mixed with a weak acid such as vinegar, and as reference standard pH buffer. Medical uses include as a medical cathartic, as a diuretic, and as a historic veterinary laxative and diuretic. It is produced as a byproduct of winemaking by purifying the precipitate that is deposited in wine barrels. It arises from the tartaric acid and potassium naturally occurring in grapes. In culinary applications, potassium bitartrate is valued for its role in stabilizing egg whites, which enhances the volume and texture of meringues and soufflés. Its acidic properties prevent sugar syrups from crystallizing, aiding in the production of smooth confections such as candies and frostings. When combined with baking soda, it acts as a leavening agent, producing carbon dioxide gas that helps baked goods rise. Additionally, potassium bitartrate is used to stabilize whipped cream, allowing it to retain its shape for longer periods. History Potassium bitartrate was first characterized by Swedish chemist Carl Wilhelm Scheele (1742–1786). This was a result of Scheele's work studying fluorite and hydrofluoric acid. Scheele may have been the first scientist to publish work on potassium bitartrate, but use of potassium bitartrate has been reported to date back 7000 years to an ancient village in northern Iran. Modern applications of cream of tartar started in 1768 after it gained popularity when the French started using it regularly in their cuisine. In 2021, a connection between potassium bitartrate and canine and feline toxicity of grapes was first proposed. Since then, it has been deemed likely as the source of grape and raisin toxicity to pets. Occurrence Potassium bitartrate is naturally formed in grapes from the acid dissociation of tartaric acid into bitartrate and tartrate ions. Potassium bitartrate has a low solubility in water. It crystallizes in wine casks during the fermentation of grape juice, and can precipitate out of wine in bottles. The rate of potassium bitartrate precipitation depends on the rates of nuclei formation and crystal growth, which varies based on a wine's alcohol, sugar, and extract content. The crystals (wine diamonds) will often form on the underside of a cork in wine-filled bottles that have been stored at temperatures below , and will seldom, if ever, dissolve naturally into the wine. Over time, crystal formation is less likely to occur due to the decreasing supersaturation of potassium bitartrate, with the greatest amount of precipitation occurring in the initial few days of cooling. Historically, it was known as beeswing for its resemblance to the sheen of bees' wings. It was collected and purified to produce the white, odorless, acidic powder used for many culinary and other household purposes. These crystals also precipitate out of fresh grape juice that has been chilled or allowed to stand for some time. 
To prevent crystals from forming in homemade grape jam or jelly, the prerequisite fresh grape juice should be chilled overnight to promote crystallization. The potassium bitartrate crystals are removed by filtering through two layers of cheesecloth. The filtered juice may then be made into jam or jelly. In some cases they adhere to the side of the chilled container, making filtering unnecessary. The presence of crystals is less prevalent in red wines than in white wines. This is because red wines have a higher amount of tannin and colouring matter present as well as a higher sugar and extract content than white wines. Various methods such as promoting crystallization and filtering, removing the active species required for potassium bitartrate precipitation, and adding additives have been implemented to reduce the presence of potassium bitartrate crystals in wine. Applications In food In food, potassium bitartrate is used for: Stabilizing egg whites, increasing their warmth-tolerance and volume Stabilizing whipped cream, maintaining its texture and volume Anti-caking and thickening Preventing sugar syrups from crystallizing by causing some of the sucrose to break down into glucose and fructose Reducing discoloration of boiled vegetables Additionally, it is used as a component of: Baking powder, as an acid ingredient to activate baking soda Salt substitutes, in combination with potassium chloride A similar acid salt, sodium acid pyrophosphate, can be confused with cream of tartar because of its common function as a component of baking powder. Baking Adding cream of tartar to egg whites gives volume to cakes, and makes them more tender. As cream of tartar is added, the pH decreases to around the isoelectric point of the foaming proteins in egg whites. Foaming properties of egg whites are optimal at this pH due to increased protein-protein interactions. The low pH also results in a whiter crumb in cakes due to flour pigments that respond to these pH changes. However, adding too much cream of tartar (>2.4% weight of egg white) can affect the texture and taste of cakes. The optimal cream of tartar concentration to increase volume and the whiteness of interior crumbs without making the cake too tender, is about 1/4 tsp per egg white. As an acid, cream of tartar with heat reduces sugar crystallization in invert syrups by helping to break down sucrose into its monomer components - fructose and glucose in equal parts. Preventing the formation of sugar crystals makes the syrup have a non-grainy texture, shinier and less prone to break and dry. However, a downside of relying on cream of tartar to thin out crystalline sugar confections (like fudge) is that it can be hard to add the right amount of acid to get the desired consistency. Cream of tartar is used as a type of acid salt that is crucial in baking powder. Upon dissolving in batter or dough, the tartaric acid that is released reacts with baking soda to form carbon dioxide that is used for leavening. Since cream of tartar is fast-acting, it releases over 70 percent of carbon dioxide gas during mixing. Household use Potassium bitartrate can be mixed with an acidic liquid, such as lemon juice or white vinegar, to make a paste-like cleaning agent for metals, such as brass, aluminium, or copper, or with water for other cleaning applications, such as removing light stains from porcelain. 
This mixture is sometimes mistakenly made with vinegar and sodium bicarbonate (baking soda), which actually react to neutralize each other, creating carbon dioxide and a sodium acetate solution. Cream of tartar was often used in traditional dyeing where the complexing action of the tartrate ions was used to adjust the solubility and hydrolysis of mordant salts such as tin chloride and alum. Cream of tartar, when mixed into a paste with hydrogen peroxide, can be used to clean rust from some hand tools, notably hand files. The paste is applied, left to set for a few hours, and then washed off with a baking soda/water solution. After another rinse with water and thorough drying, a thin application of oil will protect the file from further rusting. Slowing the set time of plaster of Paris products (most widely used in gypsum plaster wall work and artwork casting) is typically achieved by the simple introduction of almost any acid diluted into the mixing water. A commercial retardant premix additive sold by USG to trade interior plasterers includes at least 40% potassium bitartrate. The remaining ingredients are the same plaster of Paris and quartz-silica aggregate already prominent in the main product. This means that the only active ingredient is the cream of tartar. Cosmetics For dyeing hair, potassium bitartrate can be mixed with henna as the mild acid needed to activate the henna. Medicinal use Cream of tartar has been used internally as a purgative, but this is dangerous because an excess of potassium, or hyperkalemia, may occur. Chemistry Potassium bitartrate is the United States' National Institute of Standards and Technology's primary reference standard for a pH buffer. Using an excess of the salt in water, a saturated solution is created with a pH of 3.557 at . Upon dissolution in water, potassium bitartrate will dissociate into acid tartrate, tartrate, and potassium ions. Thus, a saturated solution creates a buffer with standard pH. Before use as a standard, it is recommended that the solution be filtered or decanted between and . Potassium carbonate can be made by burning cream of tartar, which produces "pearl ash". This process is now obsolete but produced a higher quality (reasonable purity) than "potash" extracted from wood or other plant ashes. Production It is produced as a byproduct of winemaking by purifying the precipitate that is deposited in wine barrels. It arises from the tartaric acid and potassium naturally occurring in grapes.
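The leavening reaction described above, cream of tartar activating baking soda to release carbon dioxide, can be sketched as a limiting-reagent calculation. The gram amounts are arbitrary, the 1:1 acid-base reaction and the roughly 24 L per mole gas volume near room temperature are simplifications, and the molar masses are standard reference values.

```python
# Rough sketch of the leavening chemistry, treated as a simple 1:1 neutralisation:
#   KHC4H4O6 (cream of tartar) + NaHCO3 (baking soda) -> KNaC4H4O6 + H2O + CO2
# The gas volume uses ~24 L/mol near 20 degrees C; the gas expands further in a hot oven.

M_CREAM_OF_TARTAR = 188.18   # g/mol, KHC4H4O6
M_BAKING_SODA     = 84.01    # g/mol, NaHCO3
MOLAR_VOLUME_L    = 24.0     # L/mol of gas at roughly 20 degrees C

def co2_released(grams_tartar, grams_soda):
    moles_tartar = grams_tartar / M_CREAM_OF_TARTAR
    moles_soda   = grams_soda / M_BAKING_SODA
    moles_co2 = min(moles_tartar, moles_soda)    # 1:1 reaction, limiting reagent wins
    return moles_co2, moles_co2 * MOLAR_VOLUME_L

# A common home-baking ratio of about 2:1 tartar to soda by mass sits close to
# stoichiometric, since 188.18 : 84.01 is roughly 2.24 : 1.
moles, litres = co2_released(grams_tartar=6.0, grams_soda=3.0)
print(f"{moles:.3f} mol CO2, about {litres:.2f} L of gas before oven expansion")
```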
Physical sciences
Organic salts
Chemistry
720452
https://en.wikipedia.org/wiki/Deadbolt
Deadbolt
A deadbolt or deadlock is a type of lock morticed into a wooden door where a bolt is thrown into the door frame, using a key from either side, to secure the door. It is distinct from a spring bolt lock because a deadbolt can only be opened by a key or handle. The more common spring bolt lock uses a spring to hold the bolt in place, allowing retraction by applying force to the bolt itself. A deadbolt can therefore make a door more resistant to entry without the correct key, as well as make the door more resistant to forced entry. A deadbolt is often used to complement a spring-bolt lock on an entry door to a building. Common types A deadlock, if it is cylinder operated, may be either single cylinder or double cylinder. A single cylinder deadlock will accept a key on one side of the lock, but is operated by a twist knob on the other side. Double cylinder locks will accept a key on both sides and therefore do not require (and often do not have) any twist knob. This prevents unwanted unlocking of the door by forced access to the interior twist knob (via a nearby window, for example). Double cylinder locks are sometimes banned from areas because they can be difficult to open from the inside and violate fire safety regulations. Some lock manufacturers also have a "lockable" knob: a key is always needed on one side (usually external), and a twist knob can be used on the other (internal), unless a button has been pressed, in which case a key is also needed on the internal side. A variant of the standard deadbolt is the vertical deadbolt, invented by Samuel Segal. Vertical deadbolts resist jimmying, in which an intruder inserts a crowbar between the door and the jamb and attempts to prise the bolt out of the door. Other types of deadbolts include: Classroom-function (thumb-turn only unlocks door) Exit-only function (no external cylinder) Push-button deadbolt (mechanical or electrical) Single cylinder with removable thumb-turn Safety The double cylinder design raises a safety issue. In the event of a fire, occupants will be prevented from escaping through double-cylinder locked doors unless the correct key is used. This is an avoidable cause of death in house fires. The risk can be mitigated by locking the deadlock only when there are no occupants inside the building, or leaving the key near the keyhole. Some fire departments suggest putting the key on a small nail or screw near the door at floor level, since the cleanest air is at floor level and one may be crawling to get to the exit, thus placing the key where it is easiest to find. Note that single cylinder dead locks (with an unlocked twist mechanism on the inside of the door) do not have this problem, and therefore are most commonly used on fire exits. Some areas have fire safety codes that do not allow a locked exit.
Technology
Mechanisms
null
720752
https://en.wikipedia.org/wiki/Meganeura
Meganeura
Meganeura is a genus of extinct insects from the Late Carboniferous (approximately 300 million years ago). It is a member of the extinct order Meganisoptera, which are closely related to and resemble dragonflies and damselflies (dragonflies, damselflies and meganisopterans together form the broader group Odonatoptera). Like other odonatopterans, they were predatory, with their diet mainly consisting of other insects. The genus belongs to the Meganeuridae, a family including other similarly giant dragonfly-like insects ranging from the Late Carboniferous to the Middle Permian. With a single wing length reaching and a wingspan of about , M. monyi is one of the largest-known flying insect species. Fossils of Meganeura were first discovered in the Late Carboniferous (Stephanian) Coal Measures of Commentry, France, in 1880. In 1885, the French paleontologist Charles Brongniart described and named the fossil "Meganeura" (great-nerved), which refers to the network of veins on the insect's wings. Another fine fossil specimen was found in 1979 at Bolsover in Derbyshire. The holotype is housed in the National Museum of Natural History, in Paris. Despite being the iconic "giant dragonfly", fossils of Meganeura are poorly preserved in comparison to those of other meganeurids. Lifestyle Research on the close relatives Meganeurula and Meganeurites suggests that Meganeura was adapted to open habitats and similar in behaviour to extant hawkers. The eyes of Meganeura were likely enlarged relative to body size. Meganeura had spines on the tibia and tarsi of the legs, which would have functioned as a "flying trap" to capture prey. An engineering examination estimated the mass of the largest specimens, with wingspans over 70 cm, at 100 to 150 grams. The analysis also suggested that Meganeura would have been susceptible to overheating. Studies indicate that Meganeura typically lived near the edges of bodies of water such as streams and ponds. As carnivores, they mainly ate other insects, small amphibians, and other small vertebrates, using their long spine-like legs to grab and hold their prey. Size There has been some controversy as to how insects of the Carboniferous period were able to grow so large. Oxygen levels and atmospheric density. The way oxygen is diffused through the insect's body via its tracheal breathing system puts an upper limit on body size, which prehistoric insects seem to have well exceeded. It was originally proposed that Meganeura was able to fly only because the atmosphere of Earth at that time contained more oxygen than the present 20 percent. This hypothesis was initially dismissed by fellow scientists, but has found approval more recently through further study into the relationship between gigantism and oxygen availability. If this hypothesis is correct, these insects would have been susceptible to falling oxygen levels and certainly could not survive in our modern atmosphere. Other research indicates that insects really do breathe, with "rapid cycles of tracheal compression and expansion". Recent analysis of the flight energetics of modern insects and birds suggests that both the oxygen levels and air density provide an upper bound on size. The presence of very large Meganeuridae with wingspans rivaling those of Meganeura during the Permian, when the oxygen content of the atmosphere was already much lower than in the Carboniferous, presented a problem for the oxygen-related explanations in the case of the giant dragonflies. 
However, although meganeurids had the largest known wingspans, their bodies were not very heavy, being less massive than those of several living Coleoptera; they were therefore not true giant insects, only giant in comparison with their living relatives. Lack of predators. Other explanations for the large size of meganeurids compared to their living relatives are therefore warranted. It has been suggested that the lack of aerial vertebrate predators allowed pterygote insects to evolve to maximum sizes during the Carboniferous and Permian periods, perhaps accelerated by an evolutionary "arms race" for increased body size between plant-feeding Palaeodictyoptera and the Meganisoptera that preyed on them. Aquatic larval stage. Another theory suggests that insects that developed in water before becoming terrestrial as adults grew bigger as a way to protect themselves against the high oxygen levels.
Biology and health sciences
Fossil arthropods
Animals
19232035
https://en.wikipedia.org/wiki/Gazelle
Gazelle
A gazelle is one of many antelope species in the genus Gazella . There are also seven species included in two further genera; Eudorcas and Nanger, which were formerly considered subgenera of Gazella. A third former subgenus, Procapra, includes three living species of Asian gazelles. Gazelles are known as swift animals. Some can run at bursts as high as or run at a sustained speed of . Gazelles are found mostly in the deserts, grasslands, and savannas of Africa, but they are also found in southwest and central Asia and the Indian subcontinent. They tend to live in herds, and eat fine, easily digestible plants and leaves. Gazelles are relatively small antelopes, most standing high at the shoulder, and are generally fawn-colored. The gazelle genera are Gazella, Eudorcas, and Nanger. The taxonomy of these genera is confused, and the classification of species and subspecies has been an unsettled issue. Currently, the genus Gazella is widely considered to contain about 10 species. One species is extinct: the Queen of Sheba's gazelle. Most surviving gazelle species are considered threatened to varying degrees. Closely related to the true gazelles are the Tibetan goa, and Mongolian gazelles (species of the genus Procapra), the blackbuck of Asia, and the African springbok. One widely familiar gazelle is the African species Thomson's gazelle (Eudorcas thomsonii), sometimes referred to as a "tommie". It is around in shoulder height and is coloured brown and white with a distinguishing black stripe. The males have long, often curved, horns. Like many other prey species, tommies exhibit a distinctive behaviour of stotting (running and jumping high before fleeing) when they are threatened by predators such as cheetahs, lions, African wild dogs, crocodiles, hyenas, and leopards. Etymology and their name Gazelle is derived from French gazelle, Old French gazel, probably via Old Spanish gacel, probably from North African pronunciation of , Maghrebi pronunciation . To Europe it first came to Old Spanish and Old French, and then around 1600 the word entered the English language. The Arab people traditionally hunted the gazelle. Later appreciated for its grace, however, it became a symbol most commonly associated in Arabic literature with human beauty. In many countries in northwestern Sub-Saharan Africa, the gazelle is commonly referred to as "dangelo", meaning "swift deer". Symbolism or totemism in African families The gazelle, like the antelope to which it is related, is the totem of many African families. Some examples include the Joof family of the Senegambia region, the Bagananoa of Botswana in Southern Africa (said to be descended from the BaHurutshe), and the Eraraka (or Erarak) clan of Uganda. As is common in many African societies, it is forbidden for the Joof or Eraraka to kill or touch the family totem. Poetry One of the traditional themes of Arabic love poetry involves comparing the gazelle with the beloved, and linguists theorize ghazal, the word for love poetry in Arabic, is related to the word for gazelle. It is related that the Caliph Abd al-Malik (646–705) freed a gazelle that he had captured because of her resemblance to his beloved: The theme is found in the ancient Hebrew Song of Songs. (8:14) Species The gazelles are divided into three genera and numerous species. Prehistoric species Fossils of genus Gazella are found in Miocene, Pliocene and Pleistocene deposits of Eurasia and Africa, which occupuied a broader distribution that modern members of the genus. 
The earliest members of the genus are known from the Middle Miocene of Africa, around 14 million years ago, with members of the genus inhabiting Europe from the Late Miocene until their extinction in the region during the Early Pleistocene, around 1.8 million years ago.
Genus Gazella
Gazella borbonica - Early Pleistocene Europe
Gazella capricornis - Miocene Asia
Gazella harmonae - Pliocene of Ethiopia, unusual spiral horns
Gazella praethomsoni - Pliocene Africa
Gazella negevensis - Early Miocene Asia
Gazella thomasi - Thomas's gazelle
Gazella vanhoepeni - Pliocene Africa
Subgenus Vetagazella
Gazella altidens
Gazella blacki - Pliocene Asia
Gazella deperdita - Late Miocene Europe
Gazella dorcadoides - Middle Miocene Asia
Gazella pilgrimi - Late Miocene Europe
Gazella gaudryi - Middle Miocene Eurasia
Gazella kueitensis - Pliocene Asia
Gazella lydekkeri - Mid to Late Miocene Asia
Gazella paotehensis - Middle Miocene Asia
Gazella paragutturosa - Pleistocene Asia
Gazella parasinensis - Pliocene Asia
Gazella praegaudryi - Pleistocene Africa
Gazella sinensis - Pliocene Asia
Gazella brianus - Pliocene Asia
Subgenus Gazella
Gazella janenschi - Pliocene Africa
Subgenus Trachelocele
Gazella atlantica - Pleistocene Africa
Gazella tingitana - Pleistocene Africa
Subgenus Deprezia
Gazella psolea - Pliocene Africa
Gallery
Biology and health sciences
Artiodactyla
null
18214141
https://en.wikipedia.org/wiki/Western%20honey%20bee
Western honey bee
The western honey bee or European honey bee (Apis mellifera) is the most common of the 7–12 species of honey bees worldwide. The genus name Apis is Latin for 'bee', and mellifera is the Latin for 'honey-bearing' or 'honey-carrying', referring to the species' production of honey. Like all honey bee species, the western honey bee is eusocial, creating colonies with a single fertile female (or "queen"), many normally non-reproductive females or "workers", and a small proportion of fertile males or "drones". Individual colonies can house tens of thousands of bees. Colony activities are organized by complex communication between individuals, through both pheromones and the dance language. The western honey bee was one of the first domesticated insects, and it is the primary species maintained by beekeepers to this day for both its honey production and pollination activities. With human assistance, the western honey bee now occupies every continent except Antarctica. Western honey bees are threatened by pests and diseases, especially the Varroa mite and colony collapse disorder. There are indications that the species is rare, if not extinct in the wild in Europe and as of 2014, the western honey bee was assessed as "Data Deficient" on the IUCN Red List. Numerous studies indicate that the species has undergone significant declines in Europe; however, it is not clear if they refer to population reduction of wild or managed colonies. Further research is required to enable differentiation between wild and non-wild colonies in order to determine the conservation status of the species in the wild, meaning self sustaining, without treatments or management. Western honey bees are an important model organism in scientific studies, particularly in the fields of social evolution, learning, and memory; they are also used in studies of pesticide toxicity, especially via pollen, to assess non-target impacts of commercial pesticides. Distribution and habitat The western honey bee can be found on every continent except Antarctica. The species is believed to have originated in Africa or Asia, and it spread naturally through Africa, the Middle East and Europe. Humans are responsible for its considerable additional range, introducing European subspecies into North America (early 1600s), South America, Australia, New Zealand, and eastern Asia. Subspecies Western honey bees adapted to the local environments as they spread geographically. These adaptations include synchronizing colony cycles to the timing of local flower resources, forming a winter cluster in colder climates, migratory swarming in Africa, and enhanced foraging behavior in desert areas. All together, these variations resulted in 31 recognized subspecies. Previously it was believed that the various subspecies were all cross-fertile, but in 2013 it was found that the A. m. mellifera queens do not mate with non-A. m. mellifera drones. The subspecies are divided into four major branches, based on work by Ruttner and confirmed by mitochondrial DNA analysis. African subspecies belong to branch A, northwestern European subspecies branch M, southwestern European subspecies branch C and Middle Eastern subspecies branch O. Life cycle Colony life cycle Unlike most other bee species, western honey bees have perennial colonies which persist year after year. Because of this high degree of sociality and permanence, western honey bee colonies can be considered superorganisms. 
This means that reproduction of the colony, rather than individual bees, is the biologically significant unit. Western honey bee colonies reproduce through a process called "swarming". In most climates, western honey bees swarm in the spring and early summer, when there is an abundance of blooming flowers from which to collect nectar and pollen. In response to these favorable conditions, the hive creates one to two dozen new queens. Just as the pupal stages of these "daughter queens" are nearly complete, the old queen and approximately two-thirds of the adult workers leave the colony in a swarm, traveling some distance to find a new location suitable for building a hive (e.g., a hollow tree trunk). In the old colony, the daughter queens often start "piping", just prior to emerging as adults, and, when the daughter queens eventually emerge, they fight each other until only one remains; the survivor then becomes the new queen. If one of the sisters emerges before the others, she may kill her siblings while they are still pupae, before they have a chance to emerge as adults. Once she has dispatched all of her rivals, the new queen, the only fertile female, lays all the eggs for the old colony, which her mother has left. Virgin females are able to lay eggs, which develop into males (a trait found in bees, wasps, and ants because of haplodiploidy). However, she requires a mate to produce female offspring, which comprise 90% or more of bees in the colony at any given time. Thus, the new queen goes on one or more nuptial flights, each time mating with 1–17 drones. Once she has finished mating, usually within two weeks of emerging, she remains in the hive, playing the primary role of laying eggs. Throughout the rest of the growing season, the colony produces many workers, who gather pollen and nectar as cold-season food; the average population of a healthy hive in midsummer may be as high as 40,000 to 80,000 bees. Nectar from flowers is processed by worker bees, who evaporate it until the moisture content is low enough to discourage mold, transforming it into honey, which can then be capped over with wax and stored almost indefinitely. In the temperate climates to which western honey bees are adapted, the bees gather in their hive and wait out the cold season, during which the queen may stop laying. During this time, activity is slow, and the colony consumes its stores of honey used for metabolic heat production in the cold season. In mid- through late winter, the queen starts laying again. This is probably triggered by day length. Depending on the subspecies, new queens (and swarms) may be produced every year, or less frequently, depending on local environmental conditions and a number of characteristics inside the hive. Individual bee life cycle Like other insects that undergo complete metamorphosis, the western honey bee has four distinct life stages: egg, larva, pupa and adult. The complex social structure of western honey bee hives means that all of these life stages occur simultaneously throughout much of the year. The queen deposits a single egg into each cell of a honeycomb prepared by worker bees. The egg hatches into a legless, eyeless larva fed by "nurse" bees (worker bees who maintain the interior of the colony). After about a week, the larva is sealed in its cell by the nurse bees and begins its pupal stage. After another week, it emerges as an adult bee. 
It is common for defined regions of the comb to be filled with young bees (also called "brood"), while others are filled with pollen and honey stores. Worker bees secrete the wax used to build the hive, clean, maintain and guard it, raise the young and forage for nectar and pollen; the nature of the worker's role varies with age. For the first 10 days of their lives, worker bees clean the hive and feed the larvae. After this, they begin building comb cells. On days 16 through 20, workers receive nectar and pollen from older workers and store it. After the 20th day, a worker leaves the hive and spends the remainder of its life as a forager. Although worker bees are usually infertile females, when some subspecies are stressed they may lay fertile eggs. Since workers are not fully sexually developed, they do not mate with drones and thus can only produce haploid (male) offspring. Queens and workers have a modified ovipositor called a stinger, with which they defend the hive. Unlike those of bees of any other genus and of the queens of their species, the stinger of worker western honey bees is barbed. Contrary to popular belief, a bee does not always die soon after stinging; this misconception is based on the fact that a bee will usually die after stinging a human or other mammals. The stinger and its venom sac, with musculature and a ganglion allowing them to continue delivering venom after they are detached, are designed to pull free of the body when they lodge. This apparatus (including barbs on the stinger) is thought to have evolved in response to predation by vertebrates, since the barbs do not function (and the stinger apparatus does not detach) unless the stinger is embedded in elastic material. The barbs do not always "catch", so a bee may occasionally pull its stinger free and fly off unharmed (or sting again). Although the average lifespan of a queen in most subspecies is three to five years, reports from the German honey bee subspecies (A. m. mellifera) previously used for beekeeping indicate that a queen can live up to eight years. Because a queen's store of sperm is depleted near the end of her life, she begins laying more unfertilised eggs; for this reason, beekeepers often replace queens every year or two. The lifespan of workers varies considerably over the year in regions with long winters. Workers born in spring and summer work hard, and live only a few weeks, but those born in autumn remain inside for several months as the colony clusters. On average during the year, about 1% of a colony's worker bees die naturally per day. Except for the queen, all of a colony's workers are replaced about every four months. Social caste Behavioral and physiological differences between castes and subcastes arise from phenotypic plasticity, which relies on gene expression rather than heritable genotypic differences. Queens The queen bee is a fertile female, who, unlike workers (which are also female), has a fully developed reproductive system. She is larger than her workers, and has a characteristic rounder, longer abdomen. A female egg can become either a queen or a worker bee. Workers and queen larvae are both fed royal jelly, which is high in protein and low in flavonoids, during the first three days. After that, larval prospective workers are switched to a diet of mixed pollen and nectar (often called "bee bread"), while prospective queens continue to receive royal jelly. 
In the absence of flavonoids and the presence of a high-protein diet, female bees grow into queens by developing the vigorous reproductive system necessary to maintain a colony of tens of thousands of daughter workers. Periodically, the colony determines that a new queen is needed. There are three general causes: The hive is filled with honey, leaving little room for new eggs. This will trigger a swarm, where the old queen will take about half the worker bees to establish a new colony, and leave a new queen with the other half of the workers to continue the old one. The old queen begins to fail, which is thought to be demonstrated by a decrease in queen pheromones throughout the hive. This is known as supersedure, and at its end, the old queen is usually killed. The old queen dies suddenly, a situation known as emergency supersedure. The worker bees find several eggs (or larvae) of the appropriate age range and feed them royal jelly to try to develop them into new queens. Emergency supersedure can generally be recognized because new queen cells are built out from comb cells, instead of hanging from the bottom of a frame. Regardless of the trigger, workers develop existing larvae into queens by continuing to feed them royal jelly, rather than switching them to bee bread, and by extending the selected larvae's cells to house the developing larger-bodied queens. Queens are not raised in the typical horizontal brood cells of the honeycomb. A queen cell is larger and oriented vertically. If workers sense that an old queen is weakening, they produce emergency cells (known as supersedure cells) from cells with eggs or young larvae and which protrude from the comb. When the queen finishes her larval feeding and pupates, she moves into a head-downward position and later chews her way out of the cell. At pupation, workers cap (seal) the cell. The queen asserts control over the worker bees by releasing a complex suite of pheromones, known as queen scent. After several days of orientation in and around the hive, the young queen flies to a drone congregation area – a site near a clearing and generally about above the ground – where drones from different hives congregate. They detect the presence of a queen in their congregation area by her smell, find her by sight and mate with her in midair; drones can be induced to mate with "dummy" queens with the queen pheromone. A queen will mate multiple times, and may leave to mate several days in a row (weather permitting) until her spermatheca is full. The queen lays all the eggs in a healthy colony. The number and pace of egg-laying is controlled by weather, resource availability and specific racial characteristics. Queens generally begin to slow egg-laying in the early fall, and may stop during the winter. Egg-laying generally resumes in late winter when the days lengthen, peaking in the spring. At the height of the season, the queen may lay over 2,500 eggs per day (more than her body mass). She fertilizes each egg (with stored sperm from the spermatheca) as it is laid in a worker-sized cell. Eggs laid in drone-sized (larger) cells are left unfertilized; these unfertilized eggs, with half as many genes as queen or worker eggs, develop into drones. Workers Workers are females produced by the queen that develop from fertilized, diploid eggs. Workers are essential for social structure and proper colony functioning. They carry out the main tasks of the colony, because the queen is occupied solely with reproducing. 
These females raise their sister workers and the future queens that eventually leave the nest to start their own colonies. They also forage and return to the nest with nectar and pollen to feed the young, and they defend the colony. Drones Drones are the colony's male bees. Since they do not have ovipositors, they do not have stingers. Drone honey bees do not forage for nectar or pollen. The primary purpose of a drone is to fertilize a new queen. Many drones mate with a given queen in flight; each dies immediately after mating, since the process of insemination requires a lethally convulsive effort. Drone honey bees are haploid (single, unpaired chromosomes) in their genetic structure and are descended only from their mother (the queen). In temperate regions, drones are generally expelled from the hive before winter, dying of cold and starvation since they cannot forage, produce honey or care for themselves. Given their larger size (1.5 times that of worker bees), drones are believed to play a significant role in thermoregulation inside the hive. Drones are typically located near the center of hive clusters, for reasons that remain unclear. One postulated reason is the maintenance of sperm viability, which may be compromised at cooler temperatures. Another possible explanation is that a more central location allows drones to contribute to warmth, since their ability to contribute warmth declines at lower temperatures. Queen–worker conflict When a fertile female worker produces drones, a conflict arises between her interests and those of the queen. The worker shares one-half of her genes with her own sons but only one-quarter with her brothers, the queen's sons, and therefore favours her own offspring over those of the queen. The queen, in turn, shares one-half of her genes with her sons and only one-quarter with the sons of fertile female workers. This pits the worker against the queen and against other workers, each of whom tries to maximize reproductive fitness by rearing the offspring most closely related to them. This relationship leads to a phenomenon called "worker policing". In these rare situations, other worker bees in the hive, who are genetically more related to the queen's sons than to those of the fertile workers, patrol the hive and remove worker-laid eggs. Another form of worker policing is aggression toward fertile females. Some studies suggest a queen pheromone which may help workers distinguish worker-laid and queen-laid eggs, but others indicate egg viability as the key factor in eliciting the behavior. Worker policing is an example of forced altruism, where the benefits of worker reproduction are minimized and those of rearing the queen's offspring are maximized. In very rare instances, workers subvert the policing mechanisms of the hive, laying eggs faster than other workers remove them; this is known as anarchic syndrome. Anarchic workers can activate their ovaries at a higher rate and contribute a greater proportion of males to the hive. Although an increase in the number of drones decreases the overall productivity of the hive, it increases the reproductive fitness of the drones' mother. Anarchic syndrome is an example of selection working in opposite directions at the individual and group levels for the stability of the hive. Under ordinary circumstances, if the queen dies or is removed, reproduction in workers increases because a significant proportion of workers then have activated ovaries. The workers produce a last batch of drones before the hive collapses. Although worker policing is usually absent during this period, in certain groups of bees it continues.
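The relatedness fractions described above can be made concrete with a short worked calculation. The sketch below uses the standard haplodiploid relatedness coefficients and treats the queen's effective number of mates as a free parameter; the function names and that parameter are illustrative assumptions for this example, not something taken from the source.

```python
# Relatedness from a worker's point of view under haplodiploidy.
# Assumption: a worker's relatedness to another worker's son (a nephew)
# is half her relatedness to that worker.

def sister_relatedness(paternity: int) -> float:
    """Average worker-worker relatedness when the queen has `paternity` mates.
    Full sisters share 3/4 of their genes; half sisters share 1/4."""
    p_same_father = 1.0 / paternity
    return p_same_father * 0.75 + (1.0 - p_same_father) * 0.25

def relatedness_to_males(paternity: int) -> dict:
    return {
        "own sons": 0.5,                                  # worker -> her own sons
        "queen's sons (brothers)": 0.25,                  # worker -> the queen's sons
        "other workers' sons (nephews)": sister_relatedness(paternity) / 2.0,
    }

for mates in (1, 2, 10):
    r = relatedness_to_males(mates)
    policing_favored = r["queen's sons (brothers)"] > r["other workers' sons (nephews)"]
    print(mates, r, "policing favored:", policing_favored)

# With a single mating, nephews (0.375) are closer kin than brothers (0.25), so removing
# worker-laid eggs is not favored; once the queen has more than two effective mates,
# the queen's sons become the closer kin and worker policing is favored.
```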
According to the strategy of kin selection, worker policing is not favored if a queen mates just once. In that case, workers are related by three-quarters of their genes, and the sons of workers are related more than usual to sons of the queen. Then the benefit of policing is negated. Experiments confirming this hypothesis have shown a correlation between higher mating rates and increased rates of worker policing in many species of social hymenoptera. Behavior Thermoregulation The western honey bee needs an internal body temperature of to fly; this temperature is maintained in the nest to develop the brood, and is the optimal temperature for the creation of wax. The temperature on the periphery of the cluster varies with outside air temperature, and the winter cluster's internal temperature may be as low as . Western honey bees can forage over a air-temperature range because of behavioral and physiological mechanisms for regulating the temperature of their flight muscles. From low to high air temperatures, the mechanisms are: shivering before flight and stopping flight for additional shivering, passive body-temperature regulation based on work, and evaporative cooling from regurgitated honey-sac contents. Body temperatures differ, depending on caste and expected foraging rewards. The optimal air temperature for foraging is . During flight, the bee's relatively large flight muscles create heat which must be dissipated. The honey bee uses evaporative cooling to release heat through its mouth. Under hot conditions, heat from the thorax is dissipated through the head; the bee regurgitates a droplet of warm internal fluid — a "honeycrop droplet" – which reduces the temperature of its head by . Below bees are immobile, and above their activity slows. Western honey bees can tolerate temperatures up to for short periods. They lack the thermal defense exhibited by Apis cerana, but at least one subspecies, Apis mellifera cypria, is capable of killing invading hornets through asphyxiation, despite not being able to attain lethal temperatures. Aging Apis mellifera honey bees with high amounts of flight experience exhibit increased DNA damage in flight muscle, as measured by elevated 8-Oxo-2'-deoxyguanosine, compared to bees with less flight experience. This increased DNA damage is likely due to an imbalance of pro- and anti-oxidants during flight-associated oxidative stress. Flight induced oxidative DNA damage appears to hasten senescence and limit lifespan in A. mellifera. Communication Western honey bee behavior has been extensively studied. Karl von Frisch, who received the 1973 Nobel Prize in Physiology or Medicine for his study of honey bee communication, noticed that bees communicate with dance. Through these dances, bees communicate information regarding the distance, the situation, and the direction of a food source by the dances of the returning (honey bee) worker bee on the vertical comb of the hive. Honey bees direct other bees to food sources with the round dance and the waggle dance. Although the round dance tells other foragers that food is within of the hive, it provides insufficient information about direction. The waggle dance, which may be vertical or horizontal, provides more detail about the distance and direction of a food source. Foragers are also thought to rely on their olfactory sense to help locate a food source after they are directed by the dances. Western honey bees also change the precision of the waggle dance to indicate the type of site that is set as a new goal. 
Their close relatives, dwarf honey bees, do not. Therefore, western honey bees seem to have evolved a better means of conveying information than their common ancestors with the dwarf honey bee. Another means of communication is the shaking signal, also known as the jerking dance, vibration dance or vibration signal. Although the shaking signal is most common in worker communication, it also appears in reproductive swarming. A worker bee vibrates its body dorsoventrally while holding another bee with its front legs. Jacobus Biesmeijer, who examined shaking signals in a forager's life and the conditions leading to its performance, found that experienced foragers executed 92% of observed shaking signals and 64% of these signals were made after the discovery of a food source. About 71% of shaking signals occurred before the first five successful foraging flights of the day; other communication signals, such as the waggle dance, were performed more often after the first five successes. Biesmeijer demonstrated that most shakers are foragers and the shaking signal is most often executed by foraging bees on pre-foraging bees, concluding that it is a transfer message for several activities (or activity levels). Sometimes the signal increases activity, as when active bees shake inactive ones. At other times, such as the end of the day, the signal is an inhibitory mechanism. However, the shaking signal is preferentially directed towards inactive bees. All three forms of communication among honey bees are effective in foraging and task management. Pheromones Pheromones (substances involved in chemical communication) are essential to honey bee survival. Western honey bees rely on pheromones for nearly all behaviors, including mating, alarm, defense, orientation, kin and colony recognition, food production and integrating colony activities. The alarm pheromone has shown to be attractive to the small hive beetle. Therefore, there is a tradeoff between recruiting guards bees to defend the invaders and attract more beetles. The small hive beetle has a lower sensing threshold for the honeybee pheromone, which exacerbates the damage to honeybee hive. Sociality There is some degree of variability of sociality between individuals. Like a great many other social insects, A. mellifera engages in trophallaxis. When the duration of trophallaxis pairings was measured, it was found that like human social interactions, there are durable long-term trends for each individual bee. There is less inter-individual variation than found in humans however, possibly reflecting the higher genetic relatedness between hivemates. Domestication Humans have been collecting honey from western honey bees for thousands of years, with evidence in the form of rock art found in France and Spain, dating to around 7,000 BCE. The western honey bee is one of the few invertebrate animals to have been domesticated. Bees were likely first domesticated in ancient Egypt, where tomb paintings depict beekeeping, before 2600 BC. Europeans brought bees to North America in 1622. 
Beekeepers have selected western honey bees for several desirable features: the ability of a colony to survive periods with little food the ability of a colony to survive cold weather resistance to disease increased honey production reduced aggressiveness reduced tendency to swarm reduced nest building easy pacification with smoke These modifications, along with artificial change of location, have improved western honey bees from the point of view of the beekeeper, and simultaneously made them more dependent on beekeepers for their survival. In Europe, cold weather survival was likely selected for, consciously or not, while in Africa, selection probably favoured the ability to survive heat, drought, and heavy rain. Authors do not agree on whether this degree of artificial selection constitutes genuine domestication. In 1603, John Guillim wrote "The Bee I may well reckon a domestic insect, being so pliable to the benefit of the keeper." More recently, many biologists working on pollination take the domesticated status of western honey bees for granted. For example, Rachael Winfree and colleagues write "We used crop pollination as a model system, and investigated whether the loss of a domesticated pollinator (the honey bee) could be compensated for by native, wild bee species." Similarly, Brian Dennis and William Kemp write: "Although the domestication of the honey bee is closely connected to the evolution of food-based socio-economic systems in many cultures throughout the world, in current economic terms, and in the U.S. alone, the estimated wholesale value of honey, more than $317 million dollars in 2013, pales in comparison to aggregate estimated annual value of pollination services, variously valued at $11–15 billion." On the other hand, P. R. Oxley and B. P. Oldroyd (2010) consider the domestication of western honey bees, at best, partial. Oldroyd observes that the lack of full domestication is somewhat surprising, given that people have kept bees for at least 7,000 years. Instead, beekeepers have found ways to manage bees using hives, while the bees remain "largely unchanged from their wild cousins". Leslie Bailey and B. V. Ball, in their book Honey Bee Pathology, call western honey bees "feral insects", in contrast to the domestic silk moth (Bombyx mori) which they call "the only insect that has been domesticated", and refer to the "popular belief among many biologists as well as beekeepers that bees are domesticated". They argue that western honey bees are able to survive without human help, and in fact require to "be left at liberty" to survive. Further, they argue that even if bees could be raised away from the wild, they would still have to fly freely to gather nectar and pollinate plants. Therefore, they argue, beekeeping is "the exploitation of colonies of a wild insect", with little more than the provision of a weatherproof cavity for them to nest in. Likewise, Pilar de la Rua and colleagues argue that western honey bees are not fully domesticated, because "endemic subspecies-specific genetic footprints can still be identified in Europe and Africa", making conservation of wild bee diversity important. Further, they argue that the difficulty of controlling drones for mating is a serious handicap and a sign that domestication is not complete, in particular as "extensive gene flow usually occurs between wild/feral and managed honeybee populations". Beekeeping The western honey bee is a colonial insect which is housed, transported by and sometimes fed by beekeepers. 
Honey bees do not survive and reproduce individually, but as part of the colony (a superorganism). Western honey bees collect flower nectar and convert it to honey, which is stored in the hive. The nectar, transported in the bees' stomachs, is converted with the addition of digestive enzymes and storage in a honey cell for partial dehydration. Nectar and honey provide the energy for the bees' flight muscles and for heating the hive during the winter. Western honey bees also collect pollen which, after being processed to bee bread, supplies protein and fat for the bee brood to grow. Centuries of selective breeding by humans have created western honey bees which produce far more honey than the colony needs, and beekeepers (also known as apiarists) harvest the surplus honey. Beekeepers provide a place for the colony to live and store honey. There are seven basic types of beehive: skeps, Langstroth hives, top-bar hives, box hives, log gums, D. E. hives, and miller hives. All U.S. states require beekeepers to use movable frames to allow bee inspectors to check the brood for disease. This allows beekeepers to keep Langstroth, top-bar and D.E. hives without special permission, granted for purposes such as museum use. Modern hives also enable beekeepers to transport bees, moving from field to field as crops require pollinating (a source of income for beekeepers). In cold climates, some beekeepers have kept colonies alive (with varying degrees of success) by moving them indoors for winter. While this can protect the colonies from extremes of temperature and make winter care and feeding more convenient for the beekeeper, it increases the risk of dysentery and causes an excessive buildup of carbon dioxide from the bees' respiration. Inside wintering has been refined by Canadian beekeepers, who use large barns solely for the wintering of bees; automated ventilation systems assist in carbon dioxide dispersal. Products Honey bees Honey bees are one of the products of a beehive. They can be purchased as mated queens, in spring packages of a queen along with two to five pounds (0.91 to 2.27 kg) of honey bees, as nucleus colonies (which include frames of brood), or as full colonies. Commerce of western honey bees dates back to as early as 1622, when the first colonies of bees were shipped from England to Virginia. Modern methods of producing queens and dividing colonies for increase date back to the late 1800s. Honey was extracted by killing off the hive, and bees and bee products were mainly an object of local trade. The first commercial beekeeper in the United States is considered Moses Quinby of New York, who experimented with movable box hives, which allow extraction without killing the hive. The improvements in roads and motor vehicles after World War I allowed commercial beekeepers to expand the size of their businesses. Pollination The western honey bee is an important pollinator of crops; this service accounts for much of the species' commercial value. In 2005, the estimated commercial value of western honey bees was just under $200 billion worldwide. A large number of the crop species farmed worldwide depend on it. Although orchards and fields have increased in size, wild pollinators have dwindled. In a number of regions the pollination shortage is addressed by migratory beekeepers, who supply hives during a crop bloom and move them after the blooming period. Commercial beekeepers plan their movements and wintering locations according to anticipated pollination services. 
At higher latitudes it is difficult (or impossible) to overwinter sufficient bees, or to have them ready for early blooming plants. Much migration is seasonal, with hives wintering in warmer climates and moving to follow the bloom at higher latitudes. In California, almond pollination occurs in February, early in the growing season before local hives have built up their populations. Almond orchards require two hives per acre, or per hive, for maximum yield, and pollination is dependent on the importation of hives from warmer climates. Almond pollination (in February and March in the U.S.) is the largest managed pollination event in the world, requiring more than one-third of all managed honey bees in the country. Bees are also moved en masse for pollination of apples in New York, Michigan, and Washington. Despite honey bees' inefficiency as blueberry pollinators, large numbers are moved to Maine because they are the only pollinators who can be easily moved and concentrated for this and other monoculture crops. Bees and other insects maintain flower constancy by transferring pollen to other biologically specific plants; this prevents flower stigmas from being clogged with pollen from other species. In 2000, Drs. Roger Morse and Nicholas Calderone of Cornell University attempted to quantify the effects of the western honey bee on American food crops. Their calculations came up with a figure of US$14.6 billion in food crop value. Honey Honey is the complex substance made from nectar and sweet deposits from plants and trees, which are gathered, modified and stored in the comb by honey bees. Honey is a biological mixture of inverted sugars, primarily glucose and fructose. It has antibacterial and anti-fungal properties. Honey from the western honey bee, along with the bee Tetragonisca angustula, has specific antibacterial activity towards an infection-causing bacteria, Staphylococcus aureus. Honey will not rot or ferment when stored under normal conditions, but it will crystallize over time. Although crystallized honey is acceptable for human use, bees can only use liquid honey and will remove and discard any crystallized honey from the hive. Bees produce honey by collecting nectar, a clear liquid consisting of nearly 80 percent water and complex sugars. The collecting bees store the nectar in a second stomach and return to the hive, where worker bees remove the nectar. The worker bees digest the raw nectar for about 30 minutes, using digestive enzymes to break down the complex sugars into simpler ones. Raw honey is then spread in empty honeycomb cells to dry, reducing its water content to less than 20 percent. When nectar is being processed, honey bees create a draft through the hive by fanning with their wings. When the honey has dried, the honeycomb cells are sealed (capped) with wax to preserve it. Beeswax Mature worker bees secrete beeswax from glands on their abdomen, using it to form the walls and caps of the comb. When honey is harvested, the wax can be collected for use in products like candles and seals. Bee bread Bees collect pollen in a pollen basket and carry it back to the hive where, after undergoing fermentation and turning into bee bread, it becomes a protein source for brood-rearing. Excess pollen can be collected from the hive; although it is sometimes consumed as a dietary supplement by humans, bee pollen may cause an allergic reaction in susceptible individuals. Bee brood Bee brood, the eggs, larvae, or pupae of honey bees, is edible and highly nutritious. 
Bee brood contains the same amount of protein that beef or poultry does. Bee brood is often harvested as a byproduct when the beekeeper has excess bees and does not wish to have any more. Propolis Propolis is a resinous mixture collected by honey bees from tree buds, sap flows or other botanical sources, which is used as a sealant for unwanted open spaces in the hive. Although propolis is alleged to have health benefits (tincture of propolis is marketed as a cold and flu remedy), it may cause severe allergic reactions in some individuals. Propolis is also used in wood finishes, and gives a Stradivarius violin its unique red color. Royal jelly Royal jelly is a honey bee secretion used to nourish the larvae and queen. It is marketed for its alleged but unsupported claims of health benefits. On the other hand, it may cause severe allergic reactions in some individuals. Genome Female bees are diploid and have 32 chromosomes, whereas males are haploid and have only 16. As of October 28, 2006, the Honey Bee Genome Sequencing Consortium fully sequenced and analyzed the genome of Apis mellifera, the western honey bee. Since 2007, attention has been devoted to colony collapse disorder, a decline in western honey bee colonies in a number of regions. The western honey bee is the third insect, after the fruit fly and the mosquito, whose genome has been mapped. According to scientists who analyzed its genetic code, the honey bee originated in Africa and spread to Europe in two ancient migrations. Scientists have found that genes related to smell outnumber those for taste, and the European honey bee has fewer genes regulating immunity than the fruit fly and the mosquito. The genome sequence also revealed that several groups of genes, particularly those related to circadian rhythm, resembled those of vertebrates more than other insects. Another significant finding from the honey bee genome study was that the western honey bee was the first insect to be discovered with a functional DNA methylation system since functional key enzymes (DNA methyltransferase-1 and -3) were identified in the genome. DNA methylation is one of the important mechanisms in epigenetics to study gene expression and regulation without changing the DNA sequence, but modifications on DNA activity. DNA methylation later was identified to play an important role in gene regulation and gene splicing. The genome is unusual in having few transposable elements, although they were present in the evolutionary past (remains and fossils have been found) and evolved more slowly than those in fly species. Since 2018 a new version of the honey bee genome is available on NCBI (Amel_HAv3.1, BioProject ID: PRJNA471592). This assembly contains full chromosome length scaffolds, which means that the sequence data for each chromosome is contiguous, and not split between multiple pieces called scaffolds. The existence of a highly contiguous reference genome for a species enables more detailed investigations of evolutionary processes that affect the genome as well as more accurate estimations of for example differentiation between populations and diversity within populations. An important process that shapes the honey bee genome is meiotic recombination, the rate of which is strongly elevated in honey bees and other social insects of the Hymenoptera order compared to most other eukaryotic species except fungi and protozoa. 
The reason for elevated recombination rates in social Hymenoptera is not fully understood, but one theory is that it is related to their social behaviour. The increased genetic diversity resulting from high recombination rates could make the workers less vulnerable to parasites and facilitate their specialisations to different tasks in the colony. Hazards and survival Parasites, diseases and pesticides Western honey bee populations face threats to their survival, increasing interests into other pollinator species, like the common eastern bumblebee. North American and European populations were severely depleted by Varroa mite infestations during the early 1990s, and U.S. beekeepers were further affected by colony collapse disorder in 2006 and 2007. Some subspecies of Apis mellifera show naturally varroa sensitive hygiene, for example Apis mellifera lamarckii and Apis mellifera carnica. Improved cultural practices and chemical treatments against Varroa mites saved most commercial operations; new bee breeds are beginning to reduce beekeeper dependence on acaricides. Feral bee populations were greatly reduced during this period; they are slowly recovering, primarily in mild climates, due to natural selection for Varroa resistance and repopulation by resistant breeds. Although it is generally believed that insecticides have also depleted bee populations, particularly when used in excess of label directions, as bee pests and diseases (including American foulbrood and tracheal mites) are becoming resistant to medications, research in this regard has not been conclusive. A 2012 study of the effect of neonicotinoid-based insecticides showed "no effects observed in field studies at field-realistic dosages." A new study in 2020 found that neonicotinoid insecticides affected the developmental stability of honey bees, particularly haploid males were more susceptible to neonicotinoids than diploid females. The 2020 study also found that heterozygosity may play a key role in buffering insecticide exposure. Milkweed In North America, various native milkweed species may be found with dead western honey bees stuck to their flowers. The non-native western honey bees are attracted to the flowers but are not adapted to their pollination mechanisms. The milkweed pollinium is collected when the tarsus (foot) of an insect falls into one of the flower's stigmatic slits as it obtains nectar from the flower's hood. If the insect is unable to remove its tarsus from the stigmatic slit it is likely to die due to predation or starvation/exhaustion. If the insect is able to escape with damaged or missing tarsi it may also be likely to die from its injuries. Western honey bees which escape with their tarsi intact may have their nectar gathering ability obstructed by parts of the pollinia being stuck to the bee's proboscis, resulting in starvation. The pollinia may also stick to the bee's tarsal claws, causing a lack of climbing ability and honey gathering which may result in expulsion from the colony leading to death. Native butterflies, moths, flies, beetles, bees and wasps are common milkweed visitors which are often able to escape without issue, though some species of Megachile, Halictus, Astata, Lucilia, Trichius, Pamphila and Scepsis have been found dead on the flowers. After removing over 140 dead bees from a patch of A. sullivantii, entomologist Charles Robertson quipped "... it seems that the flowers are better adapted to kill hive-bees than to produce fruit through their aid." 
Predators Insect predators of western honey bees include the Asian giant hornet and other wasps, robber flies, dragonflies such as the green darner, some mantises, water striders and the European beewolf. Arachnid predators of western honey bees include fishing spiders, lynx spiders, goldenrod spiders and St. Andrew's cross spiders. Reptile and amphibian predators include the black girdled lizard, anoles, and other lizards, and various anuran amphibians including the American toad, the American bullfrog and the wood frog. Specialist bird predators of western honey bees include the bee-eaters; other birds that may take western honey bees include grackles, hummingbirds, tyrant flycatchers and the summer tanager. Most birds that eat bees do so opportunistically; however, summer tanagers will sit on a limb and catch dozens of bees from the hive entrance as they emerge. Mammals that sometimes prey on western honey bees include giant armadillos, opossums, raccoons, skunks, the North American least shrew and the honey badger. Immune mechanisms Innate immune mechanisms Humoral and cellular immune mechanisms of western honey bees are similar to those of other insects. Trans-generational immune priming (TGIP) is an approach that insects use to pass specific immunity from one generation to the next. The offspring are more likely to overcome pathogens that their parents have encountered. TGIP resembles adaptive immune responses but with different underlying mechanisms. TGIP against Paenibacillus larvae, which causes American foulbrood, has been demonstrated. The egg-yolk protein vitellogenin (Vg) plays an important role in TGIP in honey bees, as it participates in the transmission of information between queen and offspring. Immune elicitors such as microbial fragments or whole microbes are recognized as pathogen-associated molecular patterns (PAMPs). Vg can bind PAMPs and deliver them to the offspring, thereafter leading to the induction of immunity-related genes. In laboratory experiments, injecting heat-killed P. larvae into honey bee queens reduced deaths among their offspring by 26%. Offspring produced by queens vaccinated orally were 30%–50% more likely to survive infection. Immune priming in queens triples the number of differentiated hemocytes in their offspring. Social immune mechanisms Grooming The behavior of bees using their legs and mandibles to remove parasites such as mites, as well as dust-like materials, from their bodies is referred to as grooming. Grooming includes self-grooming (auto-grooming) and inter-grooming (allo-grooming) between nest mates. Self-grooming involves pulling on the antennae, rubbing the head with the forelegs, and rubbing the thorax or abdomen with the middle or hind legs. Inter-grooming is a colony-level behavior: individuals within the colony gain benefits from their nest mates in this manner. By exhibiting a grooming dance, a bee attracts other nest mates, which assist in removing parasites by stroking with their antennae or legs and by licking. Grooming limits the ectoparasite load within colonies, and is especially effective at eliminating Varroa mites. Hygienic behavior Hygienic behavior targeting brood cells consists of three main steps: detection, uncapping and removal. Adults are able to identify the distinct odors associated with healthy or unhealthy brood and subsequently remove the unhealthy brood from the hive. Hygienic behavior is an effective response to Varroa mites, the fungus Ascosphaera apis, which causes chalkbrood disease, and P. larvae.
The freeze-killed brood assay is a simple strategy for assessing the hygienic behavior of honey bee colonies. As an environmental threat Some entomologists have observed that non-native, feral western honey bees can have negative impacts within their non-native environment. Imported bees may displace native bees, and may also promote the reproduction of invasive plants ignored by native pollinators. Honey bees are not native to the Americas, arriving with colonists in North America in the 17th century. Thomas Jefferson mentioned this in his
Biology and health sciences
Hymenoptera
null
18223985
https://en.wikipedia.org/wiki/Spiral%20separator
Spiral separator
The term spiral separator can refer either to a device for separating slurry components by density (wet spiral separators) or to a device for sorting particles by shape (dry spiral separators). Wet spiral separators Spiral separators of the wet type, also called spiral concentrators, are devices that separate the solid components of a slurry based upon a combination of the solid particles' density and their hydrodynamic properties (e.g. drag). The device consists of a tower around which a sluice is wound, with slots or channels placed in the base of the sluice to extract solid particles that have come out of suspension. Because larger and heavier particles sink to the bottom of the sluice faster and experience more drag from the bottom, they travel more slowly and so move towards the center of the spiral. Conversely, light particles stay towards the outside of the spiral with the water and quickly reach the bottom. At the bottom, a "cut" is made with a set of adjustable bars, channels, or slots, separating the low- and high-density fractions. Efficiency Typical spiral concentrators use a slurry of about 20%–40% solids by weight, with a particle size somewhere between 0.075 and 1.5 mm (17–340 mesh), though somewhat larger particle sizes are sometimes used. The spiral separator is less efficient, however, at particle sizes of 0.074–0.1 mm. For efficient separation, the density difference between the heavy minerals and the light minerals in the feedstock should be at least 1 g/cm3, and because the separation depends on both size and density, spiral separators are most effective at purifying ore if its particles are of uniform size and shape. A spiral separator may process a couple of tons of ore per hour per flight, and multiple flights may be stacked in the same space as one to improve capacity. Many things can be done to improve the separation efficiency, including: changing the rate of material feed; changing the grain size of the material; changing the slurry mass percentage; adjusting the cutter bar positions; running the output of one spiral separator (often a third, intermediate cut) through a second; adding washwater inlets along the length of the spiral, to aid in separating light minerals; adding multiple outlets along the length, to improve the ability of the spiral to remove heavy contaminants; and adding ridges on the sluice at an angle to the direction of flow. Dry spiral separators Dry spiral separators, capable of distinguishing round particles from non-round ones, are used to sort the feed by shape. The device consists of a tower around which an inwardly inclined flight is wound. A catchment funnel is placed around this inner flight. Round particles roll at a higher speed than other objects and so are flung off the inner flight and into the collection funnel. Shapes which are not round enough are collected at the bottom of the flight. Separators of this type may be used for removing weed seeds from the intended harvest, or for removing deformed lead shot.
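The wet-type operating figures quoted above (roughly 20–40% solids by weight, particle sizes of about 0.075–1.5 mm, and a density contrast of at least 1 g/cm3) can be gathered into a simple feed-screening helper. The following is a minimal sketch; the function name, the exact thresholds and the warning strings are illustrative assumptions, not part of any standard specification or vendor method.

```python
# Rough check of a proposed slurry feed against the typical wet-spiral operating
# window described above; the thresholds are rules of thumb, not a design method.

def feed_warnings(solids_wt_pct: float,
                  particle_size_mm: float,
                  heavy_density_g_cm3: float,
                  light_density_g_cm3: float) -> list[str]:
    """Return a list of warnings; an empty list means the feed looks typical."""
    warnings = []
    if not 20.0 <= solids_wt_pct <= 40.0:
        warnings.append("slurry solids outside the usual 20-40% by weight")
    if not 0.075 <= particle_size_mm <= 1.5:
        warnings.append("particle size outside the usual 0.075-1.5 mm range")
    if heavy_density_g_cm3 - light_density_g_cm3 < 1.0:
        warnings.append("density difference below ~1 g/cm3; separation will be poor")
    return warnings

# Example: separating ilmenite (~4.7 g/cm3) from quartz sand (~2.65 g/cm3)
# at 30% solids and a 0.5 mm feed size.
print(feed_warnings(30.0, 0.5, 4.7, 2.65))   # -> [] (within the typical window)
```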
Technology
Metallurgy
null
4548056
https://en.wikipedia.org/wiki/Pleurotus
Pleurotus
Pleurotus is a genus of gilled mushrooms which includes one of the most widely eaten mushrooms, P. ostreatus. Species of Pleurotus may be called oyster, abalone, or tree mushrooms, and are some of the most commonly cultivated edible mushrooms in the world. Pleurotus fungi have also been used in mycoremediation of pollutants, such as petroleum and polycyclic aromatic hydrocarbons. Description The caps may be laterally attached (with no stipe). If there is a stipe, it is normally eccentric and the gills are decurrent along it. The term pleurotoid is used for any mushroom with this general shape. The spores are smooth and elongated (described as "cylindrical"). Where hyphae meet, they are joined by clamp connections. Pleurotus is not considered to be a bracket fungus, and most of the species are monomitic (with a soft consistency). However, remarkably, P. dryinus can sometimes be dimitic, meaning that it has additional skeletal hyphae, which give it a tougher consistency like bracket fungi. In the American Pacific Northwest, oysters can be found from March to May. Taxonomy The classification of species within the genus Pleurotus is difficult due to high phenotypic variability across wide geographic ranges, geographic overlap of species, and ongoing evolution and speciation. Early taxonomic efforts placed the oyster mushrooms within a very broad Agaricus as Agaricus ostreatus (Jacq. 1774). Paul Kummer defined the genus Pleurotus in 1871; since then, the genus has been narrowed with some species reclassified to other genera, such as Favolaschia, Hohenbuehelia, Lentinus, Marasmiellus, Omphalotus, Panellus, Pleurocybella, and Resupinatus. See Singer (1986) for an example of Pleurotus taxonomy based on morphological characteristics. Phylogeny More recently, molecular phylogenetics has been utilized to determine genetic and evolutionary relationships between groups within the genus, delineating discrete clades. Pleurotus, along with the closely related genus Hohenbuehelia, has been shown to be monophyletic. Tests of cross-breeding viability between groups have been used to further define which groups are deserving of species rank, as opposed to subspecies, variety, or synonymy. If two groups of morphologically distinct Pleurotus fungi are able to cross-breed and produce fertile offspring, they meet one definition of species. These reproductively discrete groups, referred to as intersterility groups, have begun to be defined in Pleurotus. Many binomial names used in literature are now being grouped together as species complexes using this technique, and may change. Phylogenetic species The following species list is organized according to 1. phylogenetic clade, 2. intersterility group (group number in Roman numerals) or sub-clade, and then 3. any older binomial names that have been found to be closely related, reproductively compatible, or synonymous, although they may no longer be taxonomically valid. This list is likely to be incomplete. P. ostreatus clade I. P. ostreatus (oyster or pearl oyster mushroom) – North America and northern Eurasia P. florida II. P. pulmonarius (phoenix or Indian oyster mushroom) – North America, Eurasia, and Australasia P. columbinus P. sapidus III. P. populinus – North America VI. P. eryngii (king oyster mushroom) – Europe and the Middle East P. ferulae P. fossulatus – Afghanistan P. nebrodensis XII. P. abieticola – Asia XIII. P. albidus – Caribbean, Central America, South America P. djamor-cornucopiae clade IV. P. cornucopiae (branched oyster mushroom) – Europe P. 
citrinopileatus (golden oyster mushroom) – eastern Asia P. euosmus (tarragon oyster mushroom) V. P. djamor (pink oyster mushroom) – pantropical P. flabellatus P. salmoneo-stramineus P. salmonicolor XI. P. opuntiae – North America, New Zealand XVI. P. calyptratus P. cystidiosus clade VII. P. cystidiosus (abalone mushroom) – global P. abalonus – Taiwan P. fuscosquamulosus – Africa, Europe P. smithii – Mexico IX. P. dryinus – North America, Europe, and New Zealand VIII. Lentinus levis – subtropical to tropical, moved to genus Lentinus. X. P. tuber-regium (king tuber mushroom) – Africa, Asia, Australasia XIV. P. australis (brown oyster mushroom) – Australia and New Zealand XV. P. purpureo-olivaceus – Australia and New Zealand P. rattenburyi Incertae sedis species P. parsonsii P. velatus Former species P. gardneri was reclassified to the genus Neonothopanus in 2011. P. levis was reclassified to the genus Lentinus. P. sajor-caju was reclassified to the genus Lentinus. P. nidiformis was reclassified to the genus Omphalotus in 1994. Etymology The genus name Pleurotus refers to the mushroom caps being laterally attached to the substrate. It is derived of the Ancient Greek word : pleurón rib, side. Ecology Pleurotus fungi are found in both tropical and temperate climates throughout the world. Most species of Pleurotus are white-rot fungi on hardwood trees, although some also decay conifer wood. Pleurotus eryngii is unusual in being a weak parasite of herbaceous plants, and P. tuber-regium produces underground sclerotia. In addition to being saprotrophic, all species of Pleurotus are also nematophagous, catching nematodes by paralyzing them with a toxin. In the case of the carnivorous mushroom Pleurotus ostreatus, it was shown that small, fragile lollipop-shaped structures (toxocysts) on fungal hyphae contain a volatile ketone, 3-octanone, which disrupts the cell membrane integrity of nematodes, leading to rapid cell and organismal death, hypothetically either to defend themselves and/or to acquire nutrients. Uses Culinary Oyster mushrooms are popular for cooking, torn up or sliced, especially in stir fry or sauté, because they are consistently thin, and so will cook more evenly than uncut mushrooms of other types. They are often used in vegetarian cuisine. Bioremediation The 2007 Cosco Busan oil spill was remediated partly by using 1000 mats of human hair collected from Bay Area salons woven into mats, then used to grow oyster mushrooms, helping to absorb the oil. After the 2017 Tubbs Fire in California, oyster mushrooms were grown to help remediate toxic ash run-off.
Biology and health sciences
Edible fungi
Plants
4548379
https://en.wikipedia.org/wiki/Atmosphere%20of%20Mars
Atmosphere of Mars
The atmosphere of Mars is the layer of gases surrounding Mars. It is primarily composed of carbon dioxide (95%), molecular nitrogen (2.85%), and argon (2%). It also contains trace levels of water vapor, oxygen, carbon monoxide, hydrogen, and noble gases. The atmosphere of Mars is much thinner and colder than Earth's, with a maximum density of about 20 g/m3 (roughly 2% of Earth's value) and temperatures that are generally below freezing, down to about −60 °C. The average surface pressure is about 0.6% of the Earth's value. The currently thin Martian atmosphere prohibits the existence of liquid water on the surface of Mars, but many studies suggest that the Martian atmosphere was much thicker in the past. The density, higher during spring and fall, is reduced by about 25% during the winter, when carbon dioxide partly freezes out at the polar caps. The highest atmospheric density on Mars, ≈0.020 kg/m3, is equal to the density found at high altitude above the Earth's surface. The atmosphere of Mars has been losing mass to space since the planet's core slowed down, and the leakage of gases still continues today. Because Mars is farther from the Sun, it receives less solar energy, and its atmosphere is colder than Earth's, with a lower effective temperature. The average surface emission temperature of Mars is comparable to that of inland Antarctica. Although Mars' atmosphere consists primarily of carbon dioxide, the greenhouse effect in the Martian atmosphere is much weaker than Earth's because of the much lower density of carbon dioxide, leading to less greenhouse warming. Owing to the low thermal inertia, the daily range of temperature in the lower atmosphere is large near the surface in some regions. The temperature of the upper part of the Martian atmosphere is also significantly lower than Earth's because of the absence of stratospheric ozone and the radiative cooling effect of carbon dioxide at higher altitudes. Dust devils and dust storms are prevalent on Mars; they are sometimes observable by telescopes from Earth, and in 2018 even with the naked eye, as a change in the colour and brightness of the planet. Planet-encircling dust storms (global dust storms) occur on average every 5.5 Earth years (every 3 Martian years) and can threaten the operation of Mars rovers. However, the mechanism responsible for the development of large dust storms is still not well understood. It has been suggested to be loosely related to the gravitational influence of both moons, somewhat similar to the creation of tides on Earth. The Martian atmosphere is an oxidized atmosphere: photochemical reactions in the atmosphere tend to oxidize organic species and turn them into carbon dioxide or carbon monoxide. Although the most sensitive methane probe on the ExoMars Trace Gas Orbiter failed to find methane in the atmosphere over the whole of Mars, several previous missions and ground-based telescopes detected unexpected levels of methane in the Martian atmosphere, which may even be a biosignature for life on Mars. However, the interpretation of the measurements is still highly controversial and lacks a scientific consensus. Atmospheric evolution The mass and composition of the Martian atmosphere are thought to have changed over the course of the planet's lifetime. A thicker, warmer and wetter atmosphere is required to explain several apparent features in the earlier history of Mars, such as the existence of liquid water bodies.
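A rough way to picture an atmosphere at 0.6% of Earth's surface pressure is to convert pressure into the mass of gas overhead. The sketch below assumes the commonly quoted round values of about 600 Pa for Mars, 101,325 Pa for Earth, and surface gravities of 3.71 and 9.81 m/s2; these numbers are assumptions for illustration and are not taken from the text above.

```python
# Hydrostatic balance: the mass of atmosphere above one square metre is m = P / g.
# Assumed round values (not from the text): P_mars ~ 600 Pa, P_earth ~ 101,325 Pa,
# g_mars ~ 3.71 m/s^2, g_earth ~ 9.81 m/s^2.

def column_mass_kg_per_m2(pressure_pa: float, gravity_m_s2: float) -> float:
    return pressure_pa / gravity_m_s2

mars = column_mass_kg_per_m2(600.0, 3.71)          # ~160 kg/m^2
earth = column_mass_kg_per_m2(101_325.0, 9.81)     # ~10,300 kg/m^2
print(f"Mars:  {mars:,.0f} kg per m^2 of surface")
print(f"Earth: {earth:,.0f} kg per m^2 of surface")
print(f"pressure ratio:    {600.0 / 101_325.0:.2%}")  # ~0.6%, the figure quoted above
print(f"column-mass ratio: {mars / earth:.2%}")       # ~1.6%; larger, since Mars' gravity is weaker
```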
Observations of the Martian upper atmosphere, measurements of isotopic composition and analyses of Martian meteorites provide evidence of the long-term changes of the atmosphere and constraints on the relative importance of different processes. Atmosphere in the early history In general, the gases found on modern Mars are depleted in lighter stable isotopes, indicating that the Martian atmosphere has been altered by mass-selective processes over its history. Scientists often rely on these measurements of isotopic composition to reconstruct the conditions of the Martian atmosphere in the past. While Mars and Earth have similar 12C / 13C and 16O / 18O ratios, 14N is much more depleted in the Martian atmosphere. It is thought that photochemical escape processes are responsible for the isotopic fractionation and have caused a significant loss of nitrogen on geological timescales. Estimates suggest that the initial partial pressure of N2 may have been up to 30 hPa. Hydrodynamic escape in the early history of Mars may explain the isotopic fractionation of argon and xenon. On modern Mars, the atmosphere is not leaking these two noble gases to outer space owing to their heavier mass. However, the higher abundance of hydrogen in the early Martian atmosphere and the high fluxes of extreme UV from the young Sun could together have driven a hydrodynamic outflow and dragged away these heavy gases. Hydrodynamic escape also contributed to the loss of carbon, and models suggest that a substantial amount of CO2 could have been lost in this way within one to ten million years under the much stronger solar extreme UV of early Mars. Meanwhile, more recent observations made by the MAVEN orbiter suggested that sputtering escape is very important for the escape of heavy gases on the nightside of Mars and could have contributed to a 65% loss of argon over the history of Mars. The Martian atmosphere is particularly prone to impact erosion owing to the low escape velocity of Mars. An early computer model suggested that Mars could have lost 99% of its initial atmosphere by the end of the Late Heavy Bombardment, based on a hypothetical bombardment flux estimated from lunar crater density. In terms of the relative abundance of carbon, the ratio on Mars is only 10% of that on Earth and Venus. Assuming the three rocky planets had the same initial volatile inventory, this low ratio implies that the mass of CO2 in the early Martian atmosphere should have been ten times higher than the present value. The huge enrichment of radiogenic 40Ar over primordial 36Ar is also consistent with the impact erosion theory. One of the ways to estimate the amount of water lost by hydrogen escape in the upper atmosphere is to examine the enrichment of deuterium over hydrogen. Isotope-based studies estimate that a 12 m to over 30 m global equivalent layer of water has been lost to space via hydrogen escape over Mars' history. An atmospheric-escape-based approach, however, provides only a lower limit on the estimated early water inventory. To explain the coexistence of liquid water and the faint young Sun during early Mars' history, a much stronger greenhouse effect must have operated in the Martian atmosphere to warm the surface above the freezing point of water. Carl Sagan first proposed that a 1 bar H2 atmosphere could produce enough warming for Mars. The hydrogen could have been produced by vigorous outgassing from a highly reduced early Martian mantle, and the presence of CO2 and water vapor would lower the abundance of hydrogen needed to generate such a greenhouse effect.
Nevertheless, photochemical modeling showed that maintaining an atmosphere with this high level of H2 is difficult. SO2 has also been one of the proposed effective greenhouse gases in the early history of Mars. However, other studies suggested that the high solubility of SO2, the efficient formation of H2SO4 aerosol and surface deposition prohibit the long-term build-up of SO2 in the Martian atmosphere, and hence reduce its potential warming effect. Atmospheric escape on modern Mars Despite the lower gravity, Jeans escape is not efficient in the modern Martian atmosphere owing to the relatively low temperature at the exobase (≈200 K at 200 km altitude). It can only explain the escape of hydrogen from Mars. Other non-thermal processes are needed to explain the observed escape of oxygen, carbon and nitrogen. Hydrogen escape Molecular hydrogen (H2) is produced from the dissociation of H2O or other hydrogen-containing compounds in the lower atmosphere and diffuses to the exosphere. The exospheric H2 then decomposes into hydrogen atoms, and the atoms that have sufficient thermal energy can escape from the gravitation of Mars (Jeans escape). The escape of atomic hydrogen is evident from the UV spectrometers on different orbiters. While most studies suggested that the escape of hydrogen is close to diffusion-limited on Mars, more recent studies suggest that the escape rate is modulated by dust storms and has a large seasonality. The estimated escape flux of hydrogen ranges from 10⁷ cm⁻² s⁻¹ to 10⁹ cm⁻² s⁻¹. Carbon escape Photochemistry of CO2 and CO in the ionosphere can produce CO2+ and CO+ ions, respectively: CO2 + hν ⟶ CO2+ + e− and CO + hν ⟶ CO+ + e−. An ion and an electron can recombine and produce electronically neutral products. The products gain extra kinetic energy because of the Coulomb attraction between ions and electrons. This process is called dissociative recombination. Dissociative recombination can produce carbon atoms that travel faster than the escape velocity of Mars, and those moving upward can then escape the Martian atmosphere, for example via CO+ + e− ⟶ C + O and CO2+ + e− ⟶ C + O2. UV photolysis of carbon monoxide is another crucial mechanism for carbon escape on Mars: CO + hν (λ < 116 nm) ⟶ C + O. Other potentially important mechanisms include the sputtering escape of CO2 and the collision of carbon with fast oxygen atoms. The estimated overall escape flux is about 0.6 × 10⁷ cm⁻² s⁻¹ to 2.2 × 10⁷ cm⁻² s⁻¹ and depends heavily on solar activity. Nitrogen escape Like carbon, dissociative recombination of N2+ is important for nitrogen escape on Mars. In addition, other photochemical escape mechanisms also play an important role: photodissociation (N2 + hν ⟶ N + N) and photodissociative ionization (N2 + hν ⟶ N+ + N + e−) can both produce nitrogen atoms energetic enough to escape. The nitrogen escape rate is very sensitive to the mass of the atom and to solar activity. The overall estimated escape rate of 14N is 4.8 × 10⁵ cm⁻² s⁻¹. Oxygen escape Dissociative recombination of CO2+ and O2+ (the latter produced from the CO2+ reaction as well) can generate oxygen atoms that travel fast enough to escape: CO2+ + e− ⟶ CO + O and O2+ + e− ⟶ O + O. However, observations showed that there are not enough fast oxygen atoms in the Martian exosphere as predicted by the dissociative recombination mechanism. Model estimates of the oxygen escape rate suggest that it can be more than 10 times lower than the hydrogen escape rate. Ion pickup and sputtering have been suggested as alternative mechanisms for oxygen escape, but modeling suggests that they are less important than dissociative recombination at present.
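The statement that thermal (Jeans) escape matters only for hydrogen can be illustrated with a back-of-the-envelope comparison of thermal speeds with the escape velocity at the exobase. The sketch below uses the ≈200 K exobase temperature quoted above together with assumed round values for Mars' mass and exobase radius; it is an illustrative calculation, not a model taken from the source.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23        # Boltzmann constant, J/K
AMU = 1.661e-27        # atomic mass unit, kg

M_MARS = 6.417e23              # kg (assumed round value)
R_EXOBASE = 3.39e6 + 2.0e5     # Mars radius + ~200 km exobase altitude, m (assumed)
T_EXOBASE = 200.0              # K, as quoted in the text

def jeans_parameter(mass_amu: float) -> float:
    """Ratio of gravitational binding energy to thermal energy at the exobase.
    Large values (roughly >15) mean essentially no thermal escape."""
    m = mass_amu * AMU
    return G * M_MARS * m / (K_B * T_EXOBASE * R_EXOBASE)

v_esc = math.sqrt(2 * G * M_MARS / R_EXOBASE)
print(f"escape velocity at exobase: {v_esc / 1000:.1f} km/s")
for name, amu in [("H", 1.008), ("H2", 2.016), ("N", 14.007), ("O", 15.999)]:
    v_thermal = math.sqrt(2 * K_B * T_EXOBASE / (amu * AMU))   # most probable speed
    print(f"{name:>2}: most probable speed {v_thermal / 1000:.2f} km/s, "
          f"Jeans parameter {jeans_parameter(amu):.0f}")

# Hydrogen's Jeans parameter is only ~7, so a meaningful fraction of H atoms exceed the
# escape velocity; for N and O it is on the order of 100, which is why non-thermal
# processes such as dissociative recombination are needed to explain their escape.
```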
Ionospheric escape The interaction of the solar wind and the interplanetary magnetic field with the conductive Martian ionosphere induces electrodynamic currents, which have been mapped and studied in detail using MAVEN. These currents can drive ionospheric species to high altitudes, where the solar wind is able to sweep them away from the planet, resulting in global-scale ion outflows. These outflows are, however, not sufficient to explain the atmospheric and ionospheric losses of Mars over its lifetime. Current chemical composition Carbon dioxide CO2 is the main component of the Martian atmosphere. It has a mean volume (molar) ratio of 94.9%. In winter polar regions, the surface temperature can be lower than the frost point of CO2. CO2 gas in the atmosphere can condense on the surface to form solid dry ice 1–2 m thick. In summer, the polar dry ice cap can undergo sublimation and release the CO2 back to the atmosphere. As a result, significant annual variability in atmospheric pressure (≈25%) and atmospheric composition can be observed on Mars. The condensation process can be approximated by the Clausius–Clapeyron relation for CO2. There is also the potential for adsorption of CO2 into and out of the regolith to contribute to the annual atmospheric variability. Although the sublimation and deposition of CO2 ice in the polar caps is the driving force behind the seasonal cycle, other processes such as dust storms, atmospheric tides, and transient eddies also play a role. Understanding each of these more minor processes and how they contribute to the overall atmospheric cycle will give a clearer picture of how the Martian atmosphere works as a whole. It has been suggested that the regolith on Mars has a high internal surface area, implying that it might have a relatively high capacity for the storage of adsorbed gas. Since adsorption works through the adhesion of a film of molecules onto a surface, the amount of surface area for any given volume of material is the main factor determining how much adsorption can occur. A solid block of material, for example, would have no internal surface area, but a porous material, like a sponge, would have a high internal surface area. Given the loose, finely grained nature of the Martian regolith, there is the possibility of significant levels of CO2 adsorption into it from the atmosphere. Adsorption from the atmosphere into the regolith has previously been proposed as an explanation for the observed cycles in the methane and water mixing ratios. More research is needed to determine whether CO2 adsorption is occurring and, if so, the extent of its impact on the overall atmospheric cycle. Despite the high concentration of CO2 in the Martian atmosphere, the greenhouse effect is relatively weak on Mars (about 5 °C) because of the low concentration of water vapor and the low atmospheric pressure. While water vapor in Earth's atmosphere has the largest contribution to the greenhouse effect on modern Earth, it is present in only very low concentrations in the Martian atmosphere. Moreover, under low atmospheric pressure greenhouse gases cannot absorb infrared radiation effectively, because the pressure-broadening effect is weak. In the presence of solar UV radiation (hν, photons with wavelength shorter than 225 nm), CO2 in the Martian atmosphere can be photolyzed via the following reaction: CO2 + hν (λ < 225 nm) ⟶ CO + O. If there were no chemical reproduction of CO2, all the CO2 in the current Martian atmosphere would be removed by photolysis in about 3,500 years.
The hydroxyl radicals (OH) produced from the photolysis of water vapor, together with the other odd hydrogen species (e.g. H, HO2), can convert carbon monoxide (CO) back to CO2. The reaction cycle can be described as CO + OH ⟶ CO2 + H, followed by H + O2 + M ⟶ HO2 + M and HO2 + O ⟶ OH + O2, which regenerates OH and gives the net reaction CO + O ⟶ CO2. Mixing also plays a role in regenerating CO2 by bringing the O, CO, and O2 in the upper atmosphere downward. The balance between photolysis and this redox regeneration keeps the average concentration of CO2 stable in the modern Martian atmosphere. CO2 ice clouds can form in winter polar regions and at very high altitudes (>50 km) in tropical regions, where the air temperature is lower than the frost point of CO2. Nitrogen N2 is the second most abundant gas in the Martian atmosphere, with a mean volume ratio of 2.6%. Various measurements have shown that the Martian atmosphere is enriched in 15N. The enrichment of heavy isotopes of nitrogen is possibly caused by mass-selective escape processes. Argon Argon is the third most abundant gas in the Martian atmosphere, with a mean volume ratio of 1.9%. In terms of stable isotopes, Mars is enriched in 38Ar relative to 36Ar, which can be attributed to hydrodynamic escape. One of argon's isotopes, 40Ar, is produced from the radioactive decay of 40K. In contrast, 36Ar is primordial: it was present in the atmosphere after the formation of Mars. Observations indicate that Mars is enriched in 40Ar relative to 36Ar, which cannot be attributed to mass-selective loss processes. A possible explanation for the enrichment is that a significant amount of the primordial atmosphere, including 36Ar, was lost by impact erosion in the early history of Mars, while 40Ar was emitted to the atmosphere after the impacts. Oxygen and ozone The estimated mean volume ratio of molecular oxygen (O2) in the Martian atmosphere is 0.174%. It is one of the products of the photolysis of CO2, water vapor, and ozone (O3), and it can react with atomic oxygen (O) to re-form ozone (O3). In 2010, the Herschel Space Observatory detected molecular oxygen in the Martian atmosphere. Atomic oxygen is produced by photolysis of CO2 in the upper atmosphere and can escape the atmosphere via dissociative recombination or ion pickup. In early 2016, the Stratospheric Observatory for Infrared Astronomy (SOFIA) detected atomic oxygen in the atmosphere of Mars, which had not been observed since the Viking and Mariner missions in the 1970s. In 2019, NASA scientists working on the Curiosity rover mission, who had been taking measurements of the gas, discovered that the amount of oxygen in the Martian atmosphere rose by 30% in spring and summer. Similar to stratospheric ozone in Earth's atmosphere, the ozone present in the Martian atmosphere can be destroyed by catalytic cycles involving odd hydrogen species, such as H + O3 ⟶ OH + O2 and O + OH ⟶ O2 + H, with the net reaction O + O3 ⟶ 2 O2. Since water is an important source of these odd hydrogen species, a higher abundance of ozone is usually observed in regions with lower water vapor content. Measurements have shown that the total column of ozone can reach 2–30 μm-atm around the poles in winter and spring, where the air is cold and has a low water saturation ratio. The actual reactions between ozone and odd hydrogen species may be further complicated by heterogeneous reactions that take place in water-ice clouds. It is thought that the vertical distribution and seasonality of ozone in the Martian atmosphere are driven by the complex interactions between chemistry and the transport of oxygen-rich air from sunlit latitudes to the poles.
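For readers unfamiliar with the μm-atm unit used for the ozone column above, it is the thickness the gas would occupy if compressed to standard conditions, so it converts directly to molecules per square centimetre via the Loschmidt constant. The short sketch below also compares the Martian values with Earth's typical ozone column of roughly 300 Dobson units; the conversion factors are standard physical constants, while the Earth comparison is an approximate, commonly quoted figure rather than a number from this article.

  LOSCHMIDT = 2.687e19   # molecules cm^-3 of an ideal gas at 0 degC and 1 atm

  def um_atm_to_column(um_atm):
      """Convert a gas column in micrometre-atmospheres to molecules cm^-2.

      1 um-atm corresponds to a 1e-4 cm thick layer at standard conditions."""
      return um_atm * 1e-4 * LOSCHMIDT

  def um_atm_to_dobson(um_atm):
      """1 Dobson unit (DU) is a 0.01 mm layer at standard conditions, i.e. 10 um-atm."""
      return um_atm / 10.0

  for col in (2.0, 30.0):   # the polar winter/spring range quoted above
      print(f"{col:4.0f} um-atm = {um_atm_to_column(col):.1e} molecules cm^-2 "
            f"= {um_atm_to_dobson(col):.1f} DU (Earth: roughly 300 DU)")

Even the largest Martian ozone columns are therefore only a few Dobson units, some two orders of magnitude thinner than Earth's ozone layer, which is consistent with the patchy, seasonal layers described next.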
The UV/IR spectrometer on Mars Express (SPICAM) has shown the presence of two distinct ozone layers at low-to-mid latitudes: a persistent layer near the surface and a separate layer that is only present in northern spring and summer at altitudes varying from 30 to 60 km. Another separate layer exists 40–60 km above the southern pole in winter, with no counterpart above Mars's north pole. This third ozone layer shows an abrupt decrease in elevation between 75 and 50 degrees south. SPICAM detected a gradual increase in ozone concentration in this polar layer until midwinter, after which it slowly decreased to very low concentrations, with no layer remaining detectable. Water vapor Water vapor is a trace gas in the Martian atmosphere and has huge spatial, diurnal and seasonal variability. Measurements made by the Viking orbiters in the late 1970s suggested that the entire global mass of water vapor is equivalent to about 1 to 2 km3 of ice. More recent measurements by the Mars Express orbiter showed that the globally and annually averaged column abundance of water vapor is about 10–20 precipitable microns (pr. μm). The maximum abundance of water vapor (50–70 pr. μm) is found in the northern polar regions in early summer, due to the sublimation of water ice in the polar cap. Unlike in Earth's atmosphere, liquid-water clouds cannot exist in the Martian atmosphere because of the low atmospheric pressure. Cirrus-like water-ice clouds have been observed by the cameras on the Opportunity rover and the Phoenix lander. Measurements made by the Phoenix lander showed that water-ice clouds can form at the top of the planetary boundary layer at night and precipitate back to the surface as ice crystals in the northern polar region. Methane As a gas of possible volcanic and biogenic origin, methane is of interest to geologists and astrobiologists. However, methane is chemically unstable in an oxidizing atmosphere with UV radiation, and its lifetime in the Martian atmosphere is about 400 years. The detection of methane in a planetary atmosphere may therefore indicate recent geological activity or the presence of living organisms. Since 2004, trace amounts of methane (ranging from 60 ppb down to below the detection limit of 0.05 ppb) have been reported by various missions and observational studies. The source of methane on Mars and the explanation for the enormous discrepancy in the observed methane concentrations are still under active debate.
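One way to see why the reported variability is hard to explain with the ~400-year photochemical lifetime alone is to treat that lifetime as a simple e-folding time for an isolated release. The back-of-the-envelope sketch below does exactly that; it ignores atmospheric transport, dilution and any surface sinks, and is only an illustrative idealization, not a published model.

  import math

  LIFETIME_YEARS = 400.0   # approximate photochemical lifetime of CH4 on Mars

  def methane_mixing_ratio(initial_ppb, years):
      """Remaining CH4 mixing ratio after an isolated release, assuming pure
      exponential loss with a fixed e-folding lifetime (no resupply, no transport)."""
      return initial_ppb * math.exp(-years / LIFETIME_YEARS)

  def time_to_reach(initial_ppb, target_ppb):
      """Years for a release to decay from initial_ppb to target_ppb."""
      return LIFETIME_YEARS * math.log(initial_ppb / target_ppb)

  # A 60 ppb spike would need roughly 2,800 years of photochemical loss alone
  # to drop below the 0.05 ppb detection limit, so rapid observed decreases
  # would point to additional sinks or to localized, quickly diluted plumes.
  print(f"60 ppb after 1 year: {methane_mixing_ratio(60.0, 1.0):.1f} ppb")
  print(f"Years from 60 ppb to 0.05 ppb: {time_to_reach(60.0, 0.05):.0f}")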
Physical sciences
Solar System
Astronomy
4548438
https://en.wikipedia.org/wiki/Incisivosaurus
Incisivosaurus
Incisivosaurus ("incisor lizard") is a genus of small, probably herbivorous theropod dinosaurs from the early Cretaceous Period of what is now the People's Republic of China. The first specimen to be described (by Xu et al. in 2002), IVPP V13326, is a skull that was collected from the lowermost levels (the fluvial Lujiatun beds) of the Yixian Formation (dating to the Barremian stage about 126 million years ago) in the Sihetun area, near Beipiao City, in western Liaoning Province. The most significant, and highly unusual, characteristic of this dinosaur is its apparent adaptation to an herbivorous or omnivorous lifestyle. It was named for its prominent, rodent-like front teeth, which show wear patterns commonly found in plant-eating dinosaurs. The specific name gauthieri honors Dr. Jacques Gauthier, a pioneer of the phylogenetic method of classification. Description The initial description of Incisivosaurus by Xu et al. showed that the skull, which measures approximately in length, preserves the most complete dentition known for any oviraptorosaurian. Their cladistic analysis indicated that Incisivosaurus lies at the base of the oviraptorosaurian group, making it more primitive than Caudipteryx and the oviraptorids. A subsequent study by Osmolska et al. in 2004 described the distinguishing skeletal features of Incisivosaurus, including a long snout that made up about half the total length of the skull, a slender lower jaw with a long fenestra (opening), and its distinctive, large, flattened front teeth. In addition to these unique features, Incisivosaurus shared many traits with more typical oviraptorosaurs, allowing its classification with that group. Several features, including its numerous teeth (most advanced oviraptorids were toothless), show that it was a primitive member of the group, and several features of the skull even support a relationship with the therizinosaurs, another theropod group that was probably herbivorous. In 2009 the holotype skull was scanned and analyzed in three dimensions. The results indicated that Incisivosaurus had less bird - like air spaces in the skull bones than later oviraptorosaurs did. It also found that Incisivosaurus had reduced olfactory lobes and expanded optic lobes similar to ornithomimosaurs. It suggested that the most birdlike features of oviraptorosaurs may have been convergent with birds. Incisivosaurus is assumed to have been feathered like most other maniraptoran theropods. Its total body length has been estimated at and its weight at 2–4.6 kg (4.4–10 lbs). Feathered juvenile specimens In 2010, two feathered oviraptorosaur specimens were described, both of which preserved feather traces. These specimens (both juveniles, though one closer to maturity than the other) showed that the feathers were similar to the related Caudipteryx, with long (symmetrical) vaned feathers on the hand and tail, and the rest of the body covered in simpler, downy feathers. Though initially interpreted as specimens of Similicaudipteryx, later research suggested that they could instead be referred to Incisivosaurus. The nature of the feathers preserved in the two Yixian specimens appeared to Xu and colleagues, who described the two feathered specimens, to change with age. The youngest specimen had relatively short primary feathers (those anchored to the hand) compared to its tail feathers. In the older specimen, the primary feathers were the same length as the tail feathers, and secondary feathers (those anchored to the lower arm) were also present. 
The primary feathers may have grown more slowly than the tail feathers, not reaching equal size until the animal was close to maturity, and the secondary feathers would not appear at all until this more mature stage. This suggests that the wing feathers had little use at a young age, only becoming fully developed with maturity. Additionally, the youngest specimen's vaned feathers appeared to lack barbs except at the tip, instead consisting of a solid sheet. Xu and colleagues interpreted the stark differences in the feathers of the two specimens as primarily age-related. They speculated that hatchlings would have been covered in natal down like modern birds. As the animal aged, the down would be replaced by vaned pennaceous feathers on the hands and tail, but ribbon-like and primitive in form, similar to the tail feathers of Confuciusornis, Epidexipteryx, and some enantiornithines. These feathers would be lost through moulting as the animal aged, and replaced with more modern-style barbed feathers. The primary feathers grew more slowly than the tail feathers, not reaching equal size until the animal was close to maturity, and the secondary feathers would not appear at all until this more mature stage. This suggests that the wing feathers had little use at a young age, only becoming fully developed with maturity. However, feather development specialist Richard Prum disputed the above interpretation of the feathers in a November 2010 letter to the journal Nature. Prum noted that the apparently ribbon-like structure of the juvenile's feathers were consistent with pennaceous feathers in the midst of moulting. In modern birds, new vaned feathers emerge from the feather follicle enclosed in a "pin feather", a solid tube covered in keratin. Usually, the tip of this tube will fall away first, leaving a structure identical to that seen in the fossil. Later, the rest of the sheath falls away when the entire feather has fully developed. Prum also noted, as did Xu and his team, that the structure of the oviraptorosaur feathers is fundamentally different from other prehistoric birds with ribbon-like tail feathers. In those other species, the ribbon portion is formed from a flattened and expanded rachis, or central quill, of the feather, with the feather barbs expanding out at the tip. In the fossil specimen, however, the "ribbon" like portion is the same width as the vaned tip. This is consistent with what is seen in feathers in the process of moulting. Prum concluded that rather than representing an instance of feathers changing in form as the animal aged, this specimen represents the first known fossil evidence of feather moulting. Prum also noted that in modern birds, tail feathers moult sequentially, not simultaneously as in the oviraptorosaur specimen. However, the sequential moulting of modern birds is because the birds need to retain their ability to fly during the moult (except in penguins). For lineages more primitive than the advent of flight, like oviraptorosaurs, this would not have been an issue, and all the wing and tail feathers of primitive feathered theropods may have moulted simultaneously, more like penguins than flying birds. However, Xu et al. (2010) rebutted that the purported moulting evidence is problematic due to the complete absence of previous-generation feathers, and suggested that the feather is too large to be considered as a "pin feather". Other authors agreed with the reply by Xu et al. 
(2010) that the structures do not represent the "pin feather", though they considered that the specimen might represent a mid to late immature stage. Classification Incisivosaurus, as well as its potential synonym Protarchaeopteryx, were included in the phylogenetic analysis of a 2014 study on the group Paraves and its relatives. In the unweight cladogram, Incisivosaurus was rendered as the sister taxon to Protarchaeopteryx, with their group being the most primitive oviraptorosaurians. In both weighted analyses however, Protarchaeopteryx was found to be the most primitive oviraptorosaurian, with Incisivosaurus as the next most basal. One of the weighted cladograms, using TNT, is shown below. Paleobiology A 2022 study of the bite force of Incisivosaurus and comparisons with other oviraptorosaurs such as Citipati, Khaan, and Conchoraptor suggests that Incisivosaurus had a very strong bite force similar to ornithomimosaurs 33 times its weight. The moderate jaw gape seen in oviraptorosaurs is indicative of herbivory, but it is clear they were feeding on much tougher vegetation than other herbivorous theropods in their environment, such as ornithomimosaurs and therizinosaurs. The examinations suggest oviraptorosaurs may have been powerful-biting generalists or specialists that partook of niche partitioning both in body size and jaw function.
Biology and health sciences
Theropods
Animals
7876894
https://en.wikipedia.org/wiki/Nairobi%20fly
Nairobi fly
Nairobi fly is the common name for two species of rove beetle in the genus Paederus, native to East Africa originating from Tanzania. The beetles contain a corrosive substance known as pederin, which can cause chemical burns if it comes into contact with skin. Because of these burns, the Nairobi fly is sometimes referred to as a "dragon bug." Description Adult beetles are predominantly black and red in colour, and measure 6–10 mm in length and 0.5–1.0 mm in width. Their head, lower abdomen, and elytra are black, with the thorax and upper abdomen red. Biology The beetles live in moist habitats and are often beneficial to agriculture because they will eat crop pests. Adults are attracted to artificial light sources, and as a result, inadvertently come into contact with humans. Heavy rains, sometimes brought on by El Niño events, provide the conditions for the Nairobi fly to thrive. Outbreaks have occurred in 1998, 2007, 2019, and 2020. Relationship to humans Paederus dermatitis The beetles neither sting nor bite, but their haemolymph contains pederin, a potent toxin that causes blistering and Paederus dermatitis. The toxin is released when the beetle is crushed against the skin, often at night, when sleepers inadvertently brush the insect from their faces. People are advised to gently brush or blow the insect off their skin to prevent irritation. Research from a group at the University of Hyderabad in 2024 suggest that the use of LED lights at night may be a solution to prevent acid fly attacks. The study however warns that there may be other unknown factors that may still attract the flies into living areas.
Biology and health sciences
Beetles (Coleoptera)
Animals
7878457
https://en.wikipedia.org/wiki/Computer
Computer
A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs. These programs enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation; or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Pre-20th century Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately . Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, . The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. First computer Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". He also designed to aid in navigational calculations, in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. Electromechanical calculating machine In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable to calculate formulas like , for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. Analog computers During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Digital computers Electromechanical Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well. 
Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22 bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Vacuum tubes and digital electronic circuits Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). 
It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. Modern computers Concept of modern computer The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Stored programs Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. 
John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. Transistors The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. 
With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. Integrated circuits The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. 
While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel. In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. System on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. Mobile computers The first mobile computers were heavy and ran from mains power. The IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. 
Types Computers can be classified in a number of different ways, including: By architecture Analog computer Digital computer Hybrid computer Harvard architecture Von Neumann architecture Complex instruction set computer Reduced instruction set computer By size, form-factor and purpose Supercomputer Mainframe computer Minicomputer (term no longer used), Midrange computer Server Rackmount server Blade server Tower server Personal computer Workstation Microcomputer (term no longer used) Home computer (term fallen into disuse) Desktop computer Tower desktop Slimline desktop Multimedia computer (non-linear editing system computers, video editing PCs and the like, this term is no longer used) Gaming computer All-in-one PC Nettop (Small form factor PCs, Mini PCs) Home theater PC Keyboard computer Portable computer Thin client Internet appliance Laptop computer Desktop replacement computer Gaming laptop Rugged laptop 2-in-1 PC Ultrabook Chromebook Subnotebook Smartbook Netbook Mobile computer Tablet computer Smartphone Ultra-mobile PC Pocket PC Palmtop PC Handheld PC Pocket computer Wearable computer Smartwatch Smartglasses Single-board computer Plug computer Stick PC Programmable logic controller Computer-on-module System on module System in a package System-on-chip (Also known as an Application Processor or AP if it lacks circuitry such as radio circuitry) Microcontroller Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. History of computing hardware Other hardware topics A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are: Computer keyboard Digital camera Graphics tablet Image scanner Joystick Microphone Mouse Overlay keyboard Real-time clock Trackball Touchscreen Light pen Output devices The means through which computer gives output are known as output devices. Some examples of output devices are: Computer monitor Printer PC speaker Projector Sound card Video card Control unit The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance. 
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Read the code for the next instruction from the cell indicated by the program counter. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems. Increment the program counter so it points to the next instruction. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code. Provide the necessary data to an ALU or register. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation. Write the result from the ALU back to a memory location or to a register or perhaps an output device. Jump back to step (1). Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. Central processing unit (CPU) The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. Arithmetic logic unit (ALU) The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. 
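Tying together the program counter, the numbered fetch-decode-execute steps and the ALU operations described in this section, the following toy simulator sketches how that cycle runs a small counting loop. The three-instruction machine, its single register and its program are invented purely for illustration and do not correspond to any real instruction set.

  # Toy von Neumann machine: memory holds (opcode, operand) pairs; the program
  # counter (PC) selects the next instruction, exactly as in the steps above.
  ADD, JUMP_IF_LESS, HALT = "ADD", "JUMP_IF_LESS", "HALT"

  memory = [
      (ADD, 5),            # 0: accumulator += 5
      (ADD, 5),            # 1: accumulator += 5
      (JUMP_IF_LESS, 0),   # 2: if accumulator < 30, set PC back to 0 (a loop)
      (HALT, None),        # 3: stop
  ]

  pc = 0                   # program counter: address of the next instruction
  accumulator = 0          # a single data register

  while True:
      opcode, operand = memory[pc]   # 1. fetch the instruction the PC points at
      pc += 1                        # 2. increment the PC to the next instruction
      if opcode == ADD:              # 3. decode and execute
          accumulator += operand
      elif opcode == JUMP_IF_LESS:
          if accumulator < 30:
              pc = operand           # writing the PC implements a conditional jump
      elif opcode == HALT:
          break

  print(accumulator)  # 30: the pair of ADDs executed three times before the jump condition failed

Because the jump simply overwrites the program counter, the same mechanism gives loops and conditional execution, just as described above.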
Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. Memory A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (28 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory or RAM read-only memory or ROM RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary. In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. 
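The unsigned and two's complement ranges quoted above (0 to 255 versus −128 to +127 for one byte) can be made concrete with a small sketch; the helper functions are invented for this illustration and are not part of any standard library.

  def byte_as_unsigned(bits8):
      """Interpret an 8-bit value 0..255 as an unsigned number."""
      assert 0 <= bits8 <= 255
      return bits8

  def byte_as_twos_complement(bits8):
      """Interpret the same 8 bits as a signed number in the range -128..+127."""
      assert 0 <= bits8 <= 255
      return bits8 - 256 if bits8 >= 128 else bits8

  for cell in (0, 1, 127, 128, 255):
      print(f"bits {cell:08b}: unsigned {byte_as_unsigned(cell):3d}, "
            f"two's complement {byte_as_twos_complement(cell):4d}")

  # A byte can hold 2**8 = 256 distinct patterns, so larger numbers need several
  # consecutive bytes; four bytes, for example, give 2**32 distinct patterns.
  print(2 ** 8, 2 ** 32)

Cache memories like those just mentioned sit in the same hierarchy, between the very fast registers and the comparatively slow main memory whose cells hold such bytes.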
Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. Input/output (I/O) I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. Multitasking While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time". then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Multiprocessing Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. 
They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. Software Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". Languages There are thousands of different programming languages—some intended for general purpose, others useful for only highly specialized applications. Programs The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. Stored program architecture This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. 
Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:

begin:
  addi $8, $0, 0       # initialize sum to 0
  addi $9, $0, 1       # set the first number to add = 1
loop:
  slti $10, $9, 1001   # check whether the number is still less than or equal to 1000
  beq $10, $0, finish  # if it is not (the number exceeds 1000), exit the loop
  add $8, $8, $9       # update the sum
  addi $9, $9, 1       # get the next number
  j loop               # repeat the summing process
finish:
  add $2, $8, $0       # put the sum in the output register

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. Machine code In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers, it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
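The fact that a program is ultimately just a list of numbers, and that an assembler only translates mnemonics into those numbers, can be illustrated with a small Python sketch. The three-register machine, its opcodes and its instruction format are invented for this example (they are not MIPS or any real instruction set); the sketch simply re-implements the summing program above on a made-up machine whose program is held as numbers.

# A made-up machine for illustration: four numbers per instruction, three registers.
OPCODES = {"HALT": 0, "LOADI": 1, "ADD": 2, "JGT": 3, "JMP": 4}

def assemble(lines):
    """Translate mnemonic source lines into a list of purely numeric instructions."""
    program = []
    for line in lines:
        mnemonic, *args = line.split()
        fields = [OPCODES[mnemonic]] + [int(a) for a in args]
        program.append(fields + [0] * (4 - len(fields)))   # pad to four numbers
    return program

def run(program):
    """Execute the numeric program; return the final register contents."""
    reg, pc = [0, 0, 0], 0
    while True:
        op, a, b, c = program[pc]
        if op == OPCODES["HALT"]:
            return reg
        if op == OPCODES["LOADI"]:        # reg[a] = b
            reg[a] = b
        elif op == OPCODES["ADD"]:        # reg[a] = reg[b] + reg[c]
            reg[a] = reg[b] + reg[c]
        elif op == OPCODES["JGT"]:        # if reg[a] > b, jump to instruction c
            if reg[a] > b:
                pc = c
                continue
        elif op == OPCODES["JMP"]:        # jump to instruction a
            pc = a
            continue
        pc += 1

source = [
    "LOADI 0 0",       # register 0: the running sum
    "LOADI 1 1",       # register 1: the current number
    "LOADI 2 1",       # register 2: the constant 1
    "JGT 1 1000 7",    # if the current number exceeds 1000, jump to HALT
    "ADD 0 0 1",       # add the current number to the sum
    "ADD 1 1 2",       # advance to the next number
    "JMP 3",           # repeat the test
    "HALT",
]
print(run(assemble(source))[0])   # 500500

The output of assemble is nothing but integers, so it could be stored in the machine's own memory like any other data, which is the stored-program idea described above; for comparison, a high-level language expresses the same computation as the single expression sum(range(1, 1001)).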
Programming language Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. Low-level languages Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC. Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. High-level languages Although considerably easier than writing in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Bugs Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer.
Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design. Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of computers. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s, computer networking became almost ubiquitous, due to the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL. The number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments. Unconventional computers A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer, a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Future There is active research to make unconventional computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However, different designs of computers can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly. Computer architecture paradigms There are many types of computer architectures: Quantum computer vs. Chemical computer Scalar processor vs. Vector processor Non-Uniform Memory Access (NUMA) computers Register machine vs.
Stack machine Harvard architecture vs. von Neumann architecture Cellular architecture Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. Artificial intelligence A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation and the emerging field of on-line marketing. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
Technology
Technology
null
11603215
https://en.wikipedia.org/wiki/Geological%20history%20of%20Earth
Geological history of Earth
The geological history of the Earth follows the major geological events in Earth's past based on the geological time scale, a system of chronological measurement based on the study of the planet's rock layers (stratigraphy). Earth formed about 4.54 billion years ago by accretion from the solar nebula, a disk-shaped mass of dust and gas left over from the formation of the Sun, which also created the rest of the Solar System. Initially, Earth was molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer of the planet cooled to form a solid crust when water began accumulating in the atmosphere. The Moon formed soon afterwards, possibly as a result of the impact of a planetoid with the Earth. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from asteroids, produced the oceans. However, in 2020, researchers reported that sufficient water to fill the oceans may have always been on the Earth since the beginning of the planet's formation. As the surface continually reshaped itself over hundreds of millions of years, continents formed and broke apart. They migrated across the surface, occasionally combining to form a supercontinent. Roughly , the earliest-known supercontinent Rodinia, began to break apart. The continents later recombined to form Pannotia, , then finally Pangaea, which broke apart . The present pattern of ice ages began about , then intensified at the end of the Pliocene. The polar regions have since undergone repeated cycles of glaciation and thawing, repeating every 40,000–100,000 years. The Last Glacial Period of the current ice age ended about 10,000 years ago. Precambrian The Precambrian includes approximately 90% of geologic time. It extends from 4.6 billion years ago to the beginning of the Cambrian Period (about 539 Ma). It includes the first three of the four eons of Earth's prehistory (the Hadean, Archean and Proterozoic) and precedes the Phanerozoic eon. Major volcanic events altering the Earth's environment and causing extinctions may have occurred 10 times in the past 3 billion years. Hadean Eon During Hadean time (4.6–4 Ga), the Solar System was forming, probably within a large cloud of gas and dust around the Sun, called an accretion disc from which Earth formed . The Hadean Eon is not formally recognized, but it essentially marks the era before we have adequate record of significant solid rocks. The oldest dated zircons date from about . Earth was initially molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer of the planet cooled to form a solid crust when water began accumulating in the atmosphere. The Moon formed soon afterwards, possibly as a result of the impact of a large planetoid with the Earth. More recent potassium isotopic studies suggest that the Moon was formed by a smaller, high-energy, high-angular-momentum giant impact cleaving off a significant portion of the Earth. Some of this object's mass merged with Earth, significantly altering its internal composition, and a portion was ejected into space. Some of the material survived to form the orbiting Moon. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from comets, produced the oceans. However, in 2020, researchers reported that sufficient water to fill the oceans may have always been on the Earth since the beginning of the planet's formation. 
During the Hadean the Late Heavy Bombardment occurred (approximately ) during which a large number of impact craters are believed to have formed on the Moon, and by inference on Earth, Mercury, Venus and Mars as well. However, some scientists argue against this hypothetical Late Heavy Bombardment, pointing out that the conclusion has been drawn from data which are not fully representative (only a few crater hotspots on the Moon have been analyzed). Archean Eon The Earth of the early Archean () may have had a different tectonic style. It is widely believed that the early Earth was dominated by vertical tectonic processes, such as stagnant lid, heat-pipe, or sagduction, which eventually transitioned to plate tectonics during the planet's mid-stage evolution. However, an alternative view proposes that Earth never experienced a vertical tectonic phase and that plate tectonics have been active throughout its entire history. During this time, the Earth's crust cooled enough that rocks and continental plates began to form. Some scientists think because the Earth was hotter in the past, plate tectonic activity was more vigorous than it is today, resulting in a much greater rate of recycling of crustal material. This may have prevented cratonization and continent formation until the mantle cooled and convection slowed down. Others argue that the subcontinental lithospheric mantle is too buoyant to subduct and that the lack of Archean rocks is a function of erosion and subsequent tectonic events. Some geologists view the sudden increase in aluminum content in zircons as an indicator of the beginning of plate tectonics. Unlike Proterozoic rocks, Archean rocks are distinguished by the presence of heavily metamorphosed deep-water sediments, such as graywackes, mudstones, volcanic sediments and banded iron formations. Greenstone belts are typical Archean formations, consisting of alternating high- and low-grade metamorphic rocks. The high-grade rocks were derived from volcanic island arcs, while the low-grade metamorphic rocks represent deep-sea sediments eroded from the neighboring island rocks and deposited in a forearc basin. In short, greenstone belts represent sutured protocontinents. The Earth's magnetic field was established 3.5 billion years ago. The solar wind flux was about 100 times the value of the modern Sun, so the presence of the magnetic field helped prevent the planet's atmosphere from being stripped away, which is what probably happened to the atmosphere of Mars. However, the field strength was lower than at present and the magnetosphere was about half the modern radius. Proterozoic Eon The geologic record of the Proterozoic () is more complete than that for the preceding Archean. In contrast to the deep-water deposits of the Archean, the Proterozoic features many strata that were laid down in extensive shallow epicontinental seas; furthermore, many of these rocks are less metamorphosed than Archean-age ones, and plenty are unaltered. Study of these rocks shows that the eon featured massive, rapid continental accretion (unique to the Proterozoic), supercontinent cycles, and wholly modern orogenic activity. Roughly , the earliest-known supercontinent Rodinia, began to break apart. The continents later recombined to form Pannotia, 600–540 Ma. The first-known glaciations occurred during the Proterozoic, one that began shortly after the beginning of the eon, while there were at least four during the Neoproterozoic, climaxing with the Snowball Earth of the Varangian glaciation. 
Phanerozoic The Phanerozoic Eon is the current eon in the geologic timescale. It covers roughly 539 million years. During this period continents drifted apart, but eventually collected into a single landmass known as Pangea, before splitting again into the current continental landmasses. The Phanerozoic is divided into three eras – the Paleozoic, the Mesozoic and the Cenozoic. Most of the evolution of multicellular life occurred during this time period. Paleozoic Era The Paleozoic era spanned roughly (Ma) and is subdivided into six geologic periods: from oldest to youngest, they are the Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian. Geologically, the Paleozoic starts shortly after the breakup of a supercontinent called Pannotia and at the end of a global ice age. Throughout the early Paleozoic, Earth's landmass was broken up into a substantial number of relatively small continents. Toward the end of the era, the continents gathered together into a supercontinent called Pangaea, which included most of Earth's land area. Cambrian Period The Cambrian is a major division of the geologic timescale that begins about 538.8 ± 0.2 Ma. Cambrian continents are thought to have resulted from the breakup of a Neoproterozoic supercontinent called Pannotia. The waters of the Cambrian period appear to have been widespread and shallow. Continental drift rates may have been anomalously high. Laurentia, Baltica and Siberia remained independent continents following the break-up of the supercontinent of Pannotia. Gondwana started to drift toward the South Pole. Panthalassa covered most of the southern hemisphere, and minor oceans included the Proto-Tethys Ocean, Iapetus Ocean and Khanty Ocean. Ordovician period The Ordovician period started at a major extinction event called the Cambrian–Ordovician extinction event some time about 485.4 ± 1.9 Ma. During the Ordovician the southern continents were collected into a single continent called Gondwana. Gondwana started the period in the equatorial latitudes and, as the period progressed, drifted toward the South Pole. Early in the Ordovician the continents Laurentia, Siberia and Baltica were still independent continents (since the break-up of the supercontinent Pannotia earlier), but Baltica began to move toward Laurentia later in the period, causing the Iapetus Ocean to shrink between them. Also, Avalonia broke free from Gondwana and began to head north toward Laurentia. The Rheic Ocean was formed as a result of this. By the end of the period, Gondwana had neared or approached the pole and was largely glaciated. The Ordovician came to a close in a series of extinction events that, taken together, comprise the second-largest of the five major extinction events in Earth's history in terms of percentage of genera that became extinct. The only larger one was the Permian-Triassic extinction event. The extinctions occurred approximately and mark the boundary between the Ordovician and the following Silurian Period. The most-commonly accepted theory is that these events were triggered by the onset of an ice age, in the Hirnantian faunal stage that ended the long, stable greenhouse conditions typical of the Ordovician. The ice age was probably not as long-lasting as once thought; study of oxygen isotopes in fossil brachiopods shows that it was probably no longer than 0.5 to 1.5 million years. The event was preceded by a fall in atmospheric carbon dioxide (from 7000ppm to 4400ppm) which selectively affected the shallow seas where most organisms lived. 
As the southern supercontinent Gondwana drifted over the South Pole, ice caps formed on it. Evidence of these ice caps has been detected in Upper Ordovician rock strata of North Africa and then-adjacent northeastern South America, which were south-polar locations at the time. Silurian Period The Silurian is a major division of the geologic timescale that started about 443.8 ± 1.5 Ma. During the Silurian, Gondwana continued a slow southward drift to high southern latitudes, but there is evidence that the Silurian ice caps were less extensive than those of the late Ordovician glaciation. The melting of ice caps and glaciers contributed to a rise in sea levels, recognizable from the fact that Silurian sediments overlie eroded Ordovician sediments, forming an unconformity. Other cratons and continent fragments drifted together near the equator, starting the formation of a second supercontinent known as Euramerica. The vast ocean of Panthalassa covered most of the northern hemisphere. Other minor oceans include Proto-Tethys, Paleo-Tethys, Rheic Ocean, a seaway of Iapetus Ocean (now in between Avalonia and Laurentia), and newly formed Ural Ocean. Devonian Period The Devonian spanned roughly from 419 to 359 Ma. The period was a time of great tectonic activity, as Laurasia and Gondwana drew closer together. The continent Euramerica (or Laurussia) was created in the early Devonian by the collision of Laurentia and Baltica, which rotated into the natural dry zone along the Tropic of Capricorn. In these near-deserts, the Old Red Sandstone sedimentary beds formed, made red by the oxidized iron (hematite) characteristic of drought conditions. Near the equator Pangaea began to consolidate from the plates containing North America and Europe, further raising the northern Appalachian Mountains and forming the Caledonian Mountains in Great Britain and Scandinavia. The southern continents remained tied together in the supercontinent of Gondwana. The remainder of modern Eurasia lay in the Northern Hemisphere. Sea levels were high worldwide, and much of the land lay submerged under shallow seas. The deep, enormous Panthalassa (the "universal ocean") covered the rest of the planet. Other minor oceans were Paleo-Tethys, Proto-Tethys, Rheic Ocean and Ural Ocean (which was closed during the collision with Siberia and Baltica). Carboniferous Period The Carboniferous extends from about 358.9 ± 0.4 to about 298.9 ± 0.15 Ma. A global drop in sea level at the end of the Devonian reversed early in the Carboniferous; this created the widespread epicontinental seas and carbonate deposition of the Mississippian. There was also a drop in south polar temperatures; southern Gondwana was glaciated throughout the period, though it is uncertain if the ice sheets were a holdover from the Devonian or not. These conditions apparently had little effect in the deep tropics, where lush coal swamps flourished within 30 degrees of the northernmost glaciers. A mid-Carboniferous drop in sea-level precipitated a major marine extinction, one that hit crinoids and ammonites especially hard. This sea-level drop and the associated unconformity in North America separate the Mississippian Period from the Pennsylvanian period. The Carboniferous was a time of active mountain building, as the supercontinent Pangea came together. The southern continents remained tied together in the supercontinent Gondwana, which collided with North America-Europe (Laurussia) along the present line of eastern North America. 
This continental collision resulted in the Hercynian orogeny in Europe, and the Alleghenian orogeny in North America; it also extended the newly uplifted Appalachians southwestward as the Ouachita Mountains. In the same time frame, much of present eastern Eurasian Plate welded itself to Europe along the line of the Ural Mountains. There were two major oceans in the Carboniferous: the Panthalassa and Paleo-Tethys. Other minor oceans were shrinking and eventually closed the Rheic Ocean (closed by the assembly of South and North America), the small, shallow Ural Ocean (which was closed by the collision of Baltica, and Siberia continents, creating the Ural Mountains) and Proto-Tethys Ocean. Permian Period The Permian extends from about 298.9 ± 0.15 to 252.17 ± 0.06 Ma. During the Permian all the Earth's major land masses, except portions of East Asia, were collected into a single supercontinent known as Pangaea. Pangaea straddled the equator and extended toward the poles, with a corresponding effect on ocean currents in the single great ocean (Panthalassa, the universal sea), and the Paleo-Tethys Ocean, a large ocean that was between Asia and Gondwana. The Cimmeria continent rifted away from Gondwana and drifted north to Laurasia, causing the Paleo-Tethys to shrink. A new ocean was growing on its southern end, the Tethys Ocean, an ocean that would dominate much of the Mesozoic Era. Large continental landmasses create climates with extreme variations of heat and cold ("continental climate") and monsoon conditions with highly seasonal rainfall patterns. Deserts seem to have been widespread on Pangaea. Mesozoic Era The Mesozoic extended roughly from . After the vigorous convergent plate mountain-building of the late Paleozoic, Mesozoic tectonic deformation was comparatively mild. Nevertheless, the era featured the dramatic rifting of the supercontinent Pangaea. Pangaea gradually split into a northern continent, Laurasia, and a southern continent, Gondwana. This created the passive continental margin that characterizes most of the Atlantic coastline (such as along the U.S. East Coast) today. Triassic Period The Triassic Period extends from about 252.17 ± 0.06 to 201.3 ± 0.2 Ma. During the Triassic, almost all the Earth's land mass was concentrated into a single supercontinent centered more or less on the equator, called Pangaea ("all the land"). This took the form of a giant "Pac-Man" with an east-facing "mouth" constituting the Tethys sea, a vast gulf that opened farther westward in the mid-Triassic, at the expense of the shrinking Paleo-Tethys Ocean, an ocean that existed during the Paleozoic. The remainder was the world-ocean known as Panthalassa ("all the sea"). All the deep-ocean sediments laid down during the Triassic have disappeared through subduction of oceanic plates; thus, very little is known of the Triassic open ocean. The supercontinent Pangaea was rifting during the Triassic—especially late in the period—but had not yet separated. The first nonmarine sediments in the rift that marks the initial break-up of Pangea—which separated New Jersey from Morocco—are of Late Triassic age; in the U.S., these thick sediments comprise the Newark Supergroup. Because of the limited shoreline of one super-continental mass, Triassic marine deposits are globally relatively rare; despite their prominence in Western Europe, where the Triassic was first studied. In North America, for example, marine deposits are limited to a few exposures in the west. 
Thus Triassic stratigraphy is mostly based on organisms living in lagoons and hypersaline environments, such as Estheria crustaceans and terrestrial vertebrates. Jurassic Period The Jurassic Period extends from about 201.3 ± 0.2 to 145.0 Ma. During the early Jurassic, the supercontinent Pangaea broke up into the northern supercontinent Laurasia and the southern supercontinent Gondwana; the Gulf of Mexico opened in the new rift between North America and what is now Mexico's Yucatan Peninsula. The Jurassic North Atlantic Ocean was relatively narrow, while the South Atlantic did not open until the following Cretaceous Period, when Gondwana itself rifted apart. The Tethys Sea closed, and the Neotethys basin appeared. Climates were warm, with no evidence of glaciation. As in the Triassic, there was apparently no land near either pole, and no extensive ice caps existed. The Jurassic geological record is good in western Europe, where extensive marine sequences indicate a time when much of the continent was submerged under shallow tropical seas; famous locales include the Jurassic Coast World Heritage Site and the renowned late Jurassic lagerstätten of Holzmaden and Solnhofen. In contrast, the North American Jurassic record is the poorest of the Mesozoic, with few outcrops at the surface. Though the epicontinental Sundance Sea left marine deposits in parts of the northern plains of the United States and Canada during the late Jurassic, most exposed sediments from this period are continental, such as the alluvial deposits of the Morrison Formation. The first of several massive batholiths were emplaced in the northern Cordillera beginning in the mid-Jurassic, marking the Nevadan orogeny. Important Jurassic exposures are also found in Russia, India, South America, Japan, Australasia and the United Kingdom. Cretaceous Period The Cretaceous Period extends from circa to . During the Cretaceous, the late Paleozoic-early Mesozoic supercontinent of Pangaea completed its breakup into present day continents, although their positions were substantially different at the time. As the Atlantic Ocean widened, the convergent-margin orogenies that had begun during the Jurassic continued in the North American Cordillera, as the Nevadan orogeny was followed by the Sevier and Laramide orogenies. Though Gondwana was still intact in the beginning of the Cretaceous, Gondwana itself broke up as South America, Antarctica and Australia rifted away from Africa (though India and Madagascar remained attached to each other); thus, the South Atlantic and Indian Oceans were newly formed. Such active rifting lifted great undersea mountain chains along the welts, raising eustatic sea levels worldwide. To the north of Africa the Tethys Sea continued to narrow. Broad shallow seas advanced across central North America (the Western Interior Seaway) and Europe, then receded late in the period, leaving thick marine deposits sandwiched between coal beds. At the peak of the Cretaceous transgression, one-third of Earth's present land area was submerged. The Cretaceous is justly famous for its chalk; indeed, more chalk formed in the Cretaceous than in any other period in the Phanerozoic. Mid-ocean ridge activity—or rather, the circulation of seawater through the enlarged ridges—enriched the oceans in calcium; this made the oceans more saturated, as well as increased the bioavailability of the element for calcareous nanoplankton. These widespread carbonates and other sedimentary deposits make the Cretaceous rock record especially fine. 
Famous formations from North America include the rich marine fossils of Kansas's Smoky Hill Chalk Member and the terrestrial fauna of the late Cretaceous Hell Creek Formation. Other important Cretaceous exposures occur in Europe and China. In the area that is now India, massive lava beds called the Deccan Traps were laid down in the very late Cretaceous and early Paleocene. Cenozoic Era The Cenozoic Era covers the  million years since the Cretaceous–Paleogene extinction event up to and including the present day. By the end of the Mesozoic era, the continents had rifted into nearly their present form. Laurasia became North America and Eurasia, while Gondwana split into South America, Africa, Australia, Antarctica and the Indian subcontinent, which collided with the Asian plate. This impact gave rise to the Himalayas. The Tethys Sea, which had separated the northern continents from Africa and India, began to close up, forming the Mediterranean Sea. Paleogene Period The Paleogene (alternatively Palaeogene) Period is a unit of geologic time that began and ended 23.03 Ma and comprises the first part of the Cenozoic Era. This period consists of the Paleocene, Eocene and Oligocene Epochs. Paleocene Epoch The Paleocene, lasted from to . In many ways, the Paleocene continued processes that had begun during the late Cretaceous Period. During the Paleocene, the continents continued to drift toward their present positions. Supercontinent Laurasia had not yet separated into three continents. Europe and Greenland were still connected. North America and Asia were still intermittently joined by a land bridge, while Greenland and North America were beginning to separate. The Laramide orogeny of the late Cretaceous continued to uplift the Rocky Mountains in the American west, which ended in the succeeding epoch. South and North America remained separated by equatorial seas (they joined during the Neogene); the components of the former southern supercontinent Gondwana continued to split apart, with Africa, South America, Antarctica and Australia pulling away from each other. Africa was heading north toward Europe, slowly closing the Tethys Ocean, and India began its migration to Asia that would lead to a tectonic collision and the formation of the Himalayas. Eocene Epoch During the Eocene ( - ), the continents continued to drift toward their present positions. At the beginning of the period, Australia and Antarctica remained connected, and warm equatorial currents mixed with colder Antarctic waters, distributing the heat around the world and keeping global temperatures high. But when Australia split from the southern continent around 45 Ma, the warm equatorial currents were deflected away from Antarctica, and an isolated cold water channel developed between the two continents. The Antarctic region cooled down, and the ocean surrounding Antarctica began to freeze, sending cold water and ice floes north, reinforcing the cooling. The present pattern of ice ages began about . The northern supercontinent of Laurasia began to break up, as Europe, Greenland and North America drifted apart. In western North America, mountain building started in the Eocene, and huge lakes formed in the high flat basins among uplifts. In Europe, the Tethys Sea finally vanished, while the uplift of the Alps isolated its final remnant, the Mediterranean, and created another shallow sea with island archipelagos to the north. 
Though the North Atlantic was opening, a land connection appears to have remained between North America and Europe since the faunas of the two regions are very similar. India continued its journey away from Africa and began its collision with Asia, creating the Himalayan orogeny. Oligocene Epoch The Oligocene Epoch extends from about to . During the Oligocene the continents continued to drift toward their present positions. Antarctica continued to become more isolated and finally developed a permanent ice cap. Mountain building in western North America continued, and the Alps started to rise in Europe as the African Plate continued to push north into the Eurasian Plate, isolating the remnants of Tethys Sea. A brief marine incursion marks the early Oligocene in Europe. There appears to have been a land bridge in the early Oligocene between North America and Europe since the faunas of the two regions are very similar. During the Oligocene, South America was finally detached from Antarctica and drifted north toward North America. It also allowed the Antarctic Circumpolar Current to flow, rapidly cooling the continent. Neogene Period The Neogene Period is a unit of geologic time starting 23.03 Ma. and ends at 2.588 Ma. The Neogene Period follows the Paleogene Period. The Neogene consists of the Miocene and Pliocene and is followed by the Quaternary Period. Miocene Epoch The Miocene extends from about 23.03 to 5.333 Ma. During the Miocene continents continued to drift toward their present positions. Of the modern geologic features, only the land bridge between South America and North America was absent, the subduction zone along the Pacific Ocean margin of South America caused the rise of the Andes and the southward extension of the Meso-American peninsula. India continued to collide with Asia. The Tethys Seaway continued to shrink and then disappeared as Africa collided with Eurasia in the Turkish-Arabian region between 19 and 12 Ma (ICS 2004). Subsequent uplift of mountains in the western Mediterranean region and a global fall in sea levels combined to cause a temporary drying up of the Mediterranean Sea resulting in the Messinian salinity crisis near the end of the Miocene. Pliocene Epoch The Pliocene extends from to . During the Pliocene continents continued to drift toward their present positions, moving from positions possibly as far as from their present locations to positions only 70 km from their current locations. South America became linked to North America through the Isthmus of Panama during the Pliocene, bringing a nearly complete end to South America's distinctive marsupial faunas. The formation of the Isthmus had major consequences on global temperatures, since warm equatorial ocean currents were cut off and an Atlantic cooling cycle began, with cold Arctic and Antarctic waters dropping temperatures in the now-isolated Atlantic Ocean. Africa's collision with Europe formed the Mediterranean Sea, cutting off the remnants of the Tethys Ocean. Sea level changes exposed the land-bridge between Alaska and Asia. Near the end of the Pliocene, about (the start of the Quaternary Period), the current ice age began. The polar regions have since undergone repeated cycles of glaciation and thaw, repeating every 40,000–100,000 years. Quaternary Period Pleistocene Epoch The Pleistocene extends from to 11,700 years before present. 
The modern continents were essentially at their present positions during the Pleistocene, the plates upon which they sit probably having moved no more than relative to each other since the beginning of the period. Holocene Epoch The Holocene Epoch began approximately 11,700 calendar years before present and continues to the present. During the Holocene, continental motions have been less than a kilometer. The last glacial period of the current ice age ended about 10,000 years ago. Ice melt caused world sea levels to rise about in the early part of the Holocene. In addition, many areas above about 40 degrees north latitude had been depressed by the weight of the Pleistocene glaciers and rose as much as over the late Pleistocene and Holocene, and are still rising today. The sea level rise and temporary land depression allowed temporary marine incursions into areas that are now far from the sea. Holocene marine fossils are known from Vermont, Quebec, Ontario and Michigan. Other than higher latitude temporary marine incursions associated with glacial depression, Holocene fossils are found primarily in lakebed, floodplain and cave deposits. Holocene marine deposits along low-latitude coastlines are rare because the rise in sea levels during the period exceeds any likely upthrusting of non-glacial origin. Post-glacial rebound in Scandinavia resulted in the emergence of coastal areas around the Baltic Sea, including much of Finland. The region continues to rise, still causing weak earthquakes across Northern Europe. The equivalent event in North America was the rebound of Hudson Bay, as it shrank from its larger, immediate post-glacial Tyrrell Sea phase, to near its present boundaries.
Physical sciences
Geological history
null
5994167
https://en.wikipedia.org/wiki/Carnot%20cycle
Carnot cycle
A Carnot cycle is an ideal thermodynamic cycle proposed by French physicist Sadi Carnot in 1824 and expanded upon by others in the 1830s and 1840s. By Carnot's theorem, it provides an upper limit on the efficiency of any classical thermodynamic engine during the conversion of heat into work, or conversely, the efficiency of a refrigeration system in creating a temperature difference through the application of work to the system. In a Carnot cycle, a system or engine transfers energy in the form of heat between two thermal reservoirs at temperatures and (referred to as the hot and cold reservoirs, respectively), and a part of this transferred energy is converted to the work done by the system. The cycle is reversible, and entropy is conserved, merely transferred between the thermal reservoirs and the system without gain or loss. When work is applied to the system, heat moves from the cold to hot reservoir (heat pump or refrigeration). When heat moves from the hot to the cold reservoir, the system applies work to the environment. The work done by the system or engine to the environment per Carnot cycle depends on the temperatures of the thermal reservoirs and the entropy transferred from the hot reservoir to the system per cycle such as , where is heat transferred from the hot reservoir to the system per cycle. Stages A Carnot cycle as an idealized thermodynamic cycle performed by a Carnot heat engine, consisting of the following steps: In this case, since it is a reversible thermodynamic cycle (no net change in the system and its surroundings per cycle) or, This is true as and are both smaller in magnitude and in fact are in the same ratio as . The pressure–volume graph When a Carnot cycle is plotted on a pressure–volume diagram (), the isothermal stages follow the isotherm lines for the working fluid, the adiabatic stages move between isotherms, and the area bounded by the complete cycle path represents the total work that can be done during one cycle. From point 1 to 2 and point 3 to 4 the temperature is constant (isothermal process). Heat transfer from point 4 to 1 and point 2 to 3 are equal to zero (adiabatic process). Properties and significance The temperature–entropy diagram The behavior of a Carnot engine or refrigerator is best understood by using a temperature–entropy diagram (T–S diagram), in which the thermodynamic state is specified by a point on a graph with entropy (S) as the horizontal axis and temperature (T) as the vertical axis (). For a simple closed system (control mass analysis), any point on the graph represents a particular state of the system. A thermodynamic process is represented by a curve connecting an initial state (A) and a final state (B). The area under the curve is: which is the amount of heat transferred in the process. If the process moves the system to greater entropy, the area under the curve is the amount of heat absorbed by the system in that process; otherwise, it is the amount of heat removed from or leaving the system. For any cyclic process, there is an upper portion of the cycle and a lower portion. In T-S diagrams for a clockwise cycle, the area under the upper portion will be the energy absorbed by the system during the cycle, while the area under the lower portion will be the energy removed from the system during the cycle. 
The area inside the cycle is then the difference between the two (the absorbed net heat energy), but since the internal energy of the system must have returned to its initial value, this difference must be the amount of work done by the system per cycle. Referring to , mathematically, for a reversible process, we may write the amount of work done over a cyclic process as: Since dU is an exact differential, its integral over any closed loop is zero and it follows that the area inside the loop on a T–S diagram is (a) equal to the total work performed by the system on the surroundings if the loop is traversed in a clockwise direction, and (b) is equal to the total work done on the system by the surroundings as the loop is traversed in a counterclockwise direction. The Carnot cycle Evaluation of the above integral is particularly simple for a Carnot cycle. The amount of energy transferred as work is The total amount of heat transferred from the hot reservoir to the system (in the isothermal expansion) will be and the total amount of heat transferred from the system to the cold reservoir (in the isothermal compression) will be Due to energy conservation, the net heat transferred, , is equal to the work performed The efficiency is defined to be: where is the work done by the system (energy exiting the system as work), > 0 is the heat taken from the system (heat energy leaving the system), > 0 is the heat put into the system (heat energy entering the system), is the absolute temperature of the cold reservoir, and is the absolute temperature of the hot reservoir. is the maximum system entropy is the minimum system entropy The expression with the temperature can be derived from the expressions above with the entropy: and . Since , a minus sign appears in the final expression for . This is the Carnot heat engine working efficiency definition as the fraction of the work done by the system to the thermal energy received by the system from the hot reservoir per cycle. This thermal energy is the cycle initiator. Reversed Carnot cycle A Carnot heat-engine cycle described is a totally reversible cycle. That is, all the processes that compose it can be reversed, in which case it becomes the Carnot heat pump and refrigeration cycle. This time, the cycle remains exactly the same except that the directions of any heat and work interactions are reversed. Heat is absorbed from the low-temperature reservoir, heat is rejected to a high-temperature reservoir, and a work input is required to accomplish all this. The P–V diagram of the reversed Carnot cycle is the same as for the Carnot heat-engine cycle except that the directions of the processes are reversed. Carnot's theorem It can be seen from the above diagram that for any cycle operating between temperatures and , none can exceed the efficiency of a Carnot cycle. Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs. Thus, Equation gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient. 
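For reference, the work and efficiency relations discussed above can be written out explicitly. With Q_H the heat absorbed from the hot reservoir per cycle, Q_C the heat rejected to the cold reservoir, and both reservoir temperatures measured on the absolute (kelvin) scale, the standard forms are:

W = Q_H - Q_C = \left(1 - \frac{T_C}{T_H}\right) Q_H,
\qquad
\eta = \frac{W}{Q_H} = 1 - \frac{T_C}{T_H} = \frac{T_H - T_C}{T_H}

The rightmost expression for the efficiency is the form whose right side is rearranged in the next paragraph.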
Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. Looking at this formula an interesting fact becomes apparent: Lowering the temperature of the cold reservoir will have more effect on the ceiling efficiency of a heat engine than raising the temperature of the hot reservoir by the same amount. In the real world, this may be difficult to achieve since the cold reservoir is often an existing ambient temperature. In other words, the maximum efficiency is achieved if and only if entropy does not change per cycle. An entropy change per cycle is made, for example, if there is friction leading to dissipation of work into heat. In that case, the cycle is not reversible and the Clausius theorem becomes an inequality rather than an equality. Otherwise, since entropy is a state function, the required dumping of heat into the environment to dispose of excess entropy leads to a (minimal) reduction in efficiency. So Equation gives the efficiency of any reversible heat engine. In mesoscopic heat engines, work per cycle of operation in general fluctuates due to thermal noise. If the cycle is performed quasi-statically, the fluctuations vanish even on the mesoscale. However, if the cycle is performed faster than the relaxation time of the working medium, the fluctuations of work are inevitable. Nevertheless, when work and heat fluctuations are counted, an exact equality relates the exponential average of work performed by any heat engine to the heat transfer from the hotter heat bath. Efficiency of real heat engines Carnot realized that, in reality, it is not possible to build a thermodynamically reversible engine. So, real heat engines are even less efficient than indicated by Equation . In addition, real engines that operate along the Carnot cycle style (isothermal expansion / isentropic expansion / isothermal compression / isentropic compression) are rare. Nevertheless, Equation is extremely useful for determining the maximum efficiency that could ever be expected for a given set of thermal reservoirs. Although Carnot's cycle is an idealization, Equation as the expression of the Carnot efficiency is still useful. Consider the average temperatures, at which the first integral is over a part of a cycle where heat goes into the system and the second integral is over a cycle part where heat goes out from the system. Then, replace TH and TC in Equation by 〈TH〉 and 〈TC〉, respectively, to estimate the efficiency a heat engine. For the Carnot cycle, or its equivalent, the average value 〈TH〉 will equal the highest temperature available, namely TH, and 〈TC〉 the lowest, namely TC. For other less efficient thermodynamic cycles, 〈TH〉 will be lower than TH, and 〈TC〉 will be higher than TC. This can help illustrate, for example, why a reheater or a regenerator can improve the thermal efficiency of steam power plants and why the thermal efficiency of combined-cycle power plants (which incorporate gas turbines operating at even higher temperatures) exceeds that of conventional steam plants. The first prototype of the diesel engine was based on the principles of the Carnot cycle. As a macroscopic construct The Carnot heat engine is, ultimately, a theoretical construct based on an idealized thermodynamic system. 
On a practical human-scale level the Carnot cycle has proven a valuable model, as in advancing the development of the diesel engine. However, on a macroscopic scale limitations placed by the model's assumptions prove it impractical, and, ultimately, incapable of doing any work. As such, per Carnot's theorem, the Carnot engine may be thought as the theoretical limit of macroscopic scale heat engines rather than any practical device that could ever be built.
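As a small numerical illustration of the Carnot bound discussed above, the Python sketch below evaluates 1 − T_C/T_H for a few assumed reservoir temperatures. The temperatures are round example values chosen for illustration only, not data about any real engine or power plant.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum (Carnot) efficiency for reservoirs at absolute temperatures in kelvin."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require t_hot_k > t_cold_k > 0 (absolute temperatures)")
    return 1.0 - t_cold_k / t_hot_k

# Assumed example temperatures (kelvin): cold side near ambient, hot side varied.
for t_hot, t_cold in [(373.0, 293.0), (823.0, 293.0), (1773.0, 293.0)]:
    print(f"Th = {t_hot:6.1f} K, Tc = {t_cold:.1f} K -> ceiling {carnot_efficiency(t_hot, t_cold):.1%}")

Real engines fall well short of these ceilings, but the trend matches the point made above: raising the hot-side temperature (as combined-cycle plants do) raises the theoretical limit, and lowering the cold-side temperature raises it even faster.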
Physical sciences
Thermodynamics
Physics
5994488
https://en.wikipedia.org/wiki/Legless%20lizard
Legless lizard
Legless lizard may refer to any of several groups of lizards that have independently lost limbs or reduced them to the point of being of no use in locomotion. It is the common name for the family Pygopodidae. These lizards are often distinguishable from snakes on the basis of one or more of the following characteristics: possessing eyelids, possessing external ear openings, lack of broad belly scales, notched rather than forked tongue, having two more-or-less-equal lungs, and/or having a very long tail (while snakes have a long body and short tail). Every stage of reduction of the shoulder girdle —including complete loss— occurs among limbless squamates, but the pelvic girdle is never completely lost regardless of the degree of limb reduction or loss. At least the ilium is retained in limbless lizards and most basal snakes. Many families of lizards have independently evolved limblessness or greatly reduced limbs (which are presumably non-functional in locomotion), including the following examples: Anguinae – An entirely legless subfamily native to Europe, Asia, North America and North Africa, contains well-known species such as slowworms, glass snakes/lizards and the scheltopusik; nested within the family Anguidae, which also contains a legged subfamily called Gerrhonotinae. Cordylidae – an African family of 66 species, with one virtually legless genus Chamaesaura, containing five species with hindlimbs reduced to small scaly protuberances. Pygopodidae – all 44 species; they belong to the genera Aprasia, Delma, Lialis, Ophidiocephalus, Paradelma, Pletholax and Pygopus. All are endemic to Australia, except the two species of Lialis, which also occur in New Guinea, one of which is endemic to that island. Pygopodids are not strictly legless since, although they lack forelimbs, they possess hindlimbs that are greatly reduced to small digitless flaps, hence the often used common names of "flap-footed lizards" or "scaly-foot". The pygopodids are considered an advanced evolutionary clade of the Gekkota, which also contains six families of geckos. Dibamidae – all 23 species in the family, which comprises the monotypic Mexican genus Anelytropsis and the Southeast Asian genus Dibamus. All are limbless burrowers that are nearly or completely blind. Anniellidae – comprising the single genus Anniella, which contains six legless lizards that inhabit central / southern California and Baja California, Mexico. Ophiodes – a genus of legless lizard native to South America, nested within the otherwise legged galliwasps (Diploglossidae). Gymnophthalmidae – a large neotropical family containing many species with reduced limbs, the most extreme being the 23 species in the genus Bachia, which escape by making sudden saltatory "figure-8" flicks with the body and tail. Scincidae – commonly known as skinks, the largest lizard family with over 1500 species, of which many are limbless and nearly-limbless species, including (but not confined to) the genera Acontias, Feylinia, Melanoseps, Paracontias, Scelotes and Typhlosaurus from Africa, Lerista, Ophioscincus, Coeranoscincus and Anomalopus from Australia, and some species in the genera Chalcides from southern Europe and North Africa. Amphisbaenia – commonly known as worm lizards, comprising 201 extant species in 6 families, most of which are legless (hindlimbs always absent): Amphisbaenidae Bipedidae Blanidae Cadeidae Rhineuridae Trogonophidae
Biology and health sciences
Lizards and other Squamata
Animals
5998674
https://en.wikipedia.org/wiki/Hyposmia
Hyposmia
Hyposmia, or microsmia, is a reduced ability to smell and to detect odors. A related condition is anosmia, in which no odors can be detected. Some of the causes of olfaction problems are allergies, nasal polyps, viral infections and head trauma. In 2012 an estimated 9.8 million people aged 40 and older in the United States had hyposmia and an additional 3.4 million had anosmia/severe hyposmia. Hyposmia might be a very early sign of Parkinson's disease. Hyposmia is also an early and almost universal finding in Alzheimer's disease and dementia with Lewy bodies. Lifelong hyposmia could be caused by Kallmann syndrome or autism spectrum disorder. Along with other chemosensory disturbances, hyposmia can be a key indicator of COVID-19. Epidemiology The National Health and Nutrition Examination Survey (NHANES) collected data on chemosensory function (taste and smell) in a nationally representative sample of US civilian, non-institutionalized persons in 2012. Olfactory function was assessed in persons aged 40 years and older with an 8-item odor identification test (Pocket Smell Tests, Sensonics, Inc., Haddon Heights, NJ). Odors included food odors (strawberry, chocolate, onion, grape), warning odors (natural gas, smoke) and household odors (leather, soap). The olfactory function score was based on the number of correct identifications. Prevalence (%) of anosmia/severe hyposmia (scores 0 to 3) was 0.3 at age 40–49, rising to 14.1 at age 80+. Prevalence of hyposmia (scores 4 to 5) was much higher: 3.7% at age 40–49 and 25.9% at 80+. Both were more prevalent in individuals of African descent than in those of Caucasian descent. Chemosensory data were also collected in a larger NHANES sample in 2013–2014. The prevalence of smell disorder (scores 0–5 out of 8 correct) was 13.5% in persons aged 40 years and over. If the same prevalence occurred in 2016, an estimated 20.5 million persons 40 and over had hyposmia or anosmia. In addition, multiple demographic, socioeconomic, and lifestyle characteristics were assessed as risk factors for diminished smell. In statistical analyses, greater age and male sex, coupled with either black and/or non-Hispanic ethnicity, low family income, low educational attainment, high alcohol consumption (more than 4 drinks per day), and a history of asthma or cancer were independently associated with a greater prevalence of smell impairment.
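The score thresholds quoted above lend themselves to a compact illustration. The following Python sketch (the function name and the treatment of scores of 6–8 as unimpaired are assumptions for illustration, not NHANES terminology) classifies an 8-item odor-identification score using the cut-offs in the text:

    # Minimal sketch of the cut-offs quoted above: 0-3 anosmia/severe hyposmia,
    # 4-5 hyposmia; scores of 6-8 are treated here as unimpaired (an assumption).

    def classify_smell_score(correct_items: int, total_items: int = 8) -> str:
        """Classify an odor-identification score using the thresholds in the text."""
        if not 0 <= correct_items <= total_items:
            raise ValueError("Score must lie between 0 and the number of test items.")
        if correct_items <= 3:
            return "anosmia/severe hyposmia"
        if correct_items <= 5:
            return "hyposmia"
        return "no measured impairment"

    print(classify_smell_score(5))  # -> hyposmia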
Biology and health sciences
Symptoms and signs
Health
17016531
https://en.wikipedia.org/wiki/Hagen%E2%80%93Poiseuille%20equation
Hagen–Poiseuille equation
In non-ideal fluid dynamics, the Hagen–Poiseuille equation, also known as the Hagen–Poiseuille law, Poiseuille law or Poiseuille equation, is a physical law that gives the pressure drop in an incompressible and Newtonian fluid in laminar flow flowing through a long cylindrical pipe of constant cross section. It can be successfully applied to air flow in lung alveoli, or the flow through a drinking straw or through a hypodermic needle. It was experimentally derived independently by Jean Léonard Marie Poiseuille in 1838 and Gotthilf Heinrich Ludwig Hagen, and published by Hagen in 1839 and then by Poiseuille in 1840–41 and 1846. The theoretical justification of the Poiseuille law was given by George Stokes in 1845. The assumptions of the equation are that the fluid is incompressible and Newtonian; the flow is laminar through a pipe of constant circular cross-section that is substantially longer than its diameter; and there is no acceleration of fluid in the pipe. For velocities and pipe diameters above a threshold, actual fluid flow is not laminar but turbulent, leading to larger pressure drops than calculated by the Hagen–Poiseuille equation. Poiseuille's equation describes the pressure drop due to the viscosity of the fluid; other types of pressure drops may still occur in a fluid. For example, the pressure needed to drive a viscous fluid up against gravity would contain both that needed in Poiseuille's law plus that needed in Bernoulli's equation, such that any point in the flow would have a pressure greater than zero (otherwise no flow would happen). Another example is that when blood flows into a narrower constriction, its speed will be greater than in a larger diameter (due to continuity of volumetric flow rate), and its pressure will be lower than in a larger diameter (due to Bernoulli's equation). However, the viscosity of blood will cause additional pressure drop along the direction of flow, which is proportional to the length traveled (as per Poiseuille's law). Both effects contribute to the actual pressure drop. Equation In standard fluid-kinetics notation: \Delta p = \frac{8 \mu L Q}{\pi R^4} = \frac{8 \pi \mu L Q}{A^2}, where \Delta p is the pressure difference between the two ends, L is the length of pipe, \mu is the dynamic viscosity, Q is the volumetric flow rate, R is the pipe radius, and A = \pi R^2 is the cross-sectional area of the pipe. The equation does not hold close to the pipe entrance. The equation fails in the limit of low viscosity, wide and/or short pipe. Low viscosity or a wide pipe may result in turbulent flow, making it necessary to use more complex models, such as the Darcy–Weisbach equation. The ratio of length to radius of a pipe should be greater than 1/48 of the Reynolds number for the Hagen–Poiseuille law to be valid. If the pipe is too short, the Hagen–Poiseuille equation may result in unphysically high flow rates; the flow is bounded by Bernoulli's principle, under less restrictive conditions, by Q_{\max} = \pi R^2 \sqrt{2 \Delta p / \rho}, because it is impossible to have negative (absolute) pressure (not to be confused with gauge pressure) in an incompressible flow. Relation to the Darcy–Weisbach equation Normally, Hagen–Poiseuille flow implies not just the relation for the pressure drop, above, but also the full solution for the laminar flow profile, which is parabolic. However, the result for the pressure drop can be extended to turbulent flow by inferring an effective turbulent viscosity in the case of turbulent flow, even though the flow profile in turbulent flow is strictly speaking not actually parabolic.
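As a quick numerical check of the relation just given, the following Python sketch (all numerical values are illustrative assumptions, not data from the article) evaluates the laminar pressure drop for water-like properties and confirms that the Reynolds number is low enough for the laminar-flow assumption to hold:

    import math

    # Sketch: Hagen-Poiseuille pressure drop, delta_p = 8 * mu * L * Q / (pi * R**4).
    # All numerical values are illustrative assumptions, not data from the article.
    mu = 1.0e-3    # dynamic viscosity of water at ~20 degC, Pa*s
    rho = 1000.0   # density, kg/m^3
    L = 0.5        # pipe length, m
    R = 1.0e-3     # pipe radius, m
    Q = 1.0e-7     # volumetric flow rate, m^3/s

    delta_p = 8.0 * mu * L * Q / (math.pi * R**4)

    # Mean velocity and diameter-based Reynolds number, to check the laminar assumption.
    v_mean = Q / (math.pi * R**2)
    reynolds = rho * v_mean * (2.0 * R) / mu

    print(f"pressure drop  : {delta_p:.1f} Pa")
    print(f"Reynolds number: {reynolds:.1f} (laminar only if well below ~2000)")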
In both cases, laminar or turbulent, the pressure drop is related to the stress at the wall, which determines the so-called friction factor. The wall stress can be determined phenomenologically by the Darcy–Weisbach equation in the field of hydraulics, given a relationship for the friction factor in terms of the Reynolds number. In the case of laminar flow, for a circular cross section: \Lambda = \frac{64}{Re}, \quad Re = \frac{\rho v (2R)}{\mu}, where Re is the Reynolds number, \rho is the fluid density, and v is the mean flow velocity, which is half the maximal flow velocity in the case of laminar flow. It proves more useful to define the Reynolds number in terms of the mean flow velocity because this quantity remains well defined even in the case of turbulent flow, whereas the maximal flow velocity may not be, or in any case, it may be difficult to infer. In this form the law approximates the Darcy friction factor, the energy (head) loss factor, friction loss factor or Darcy (friction) factor \Lambda in laminar flow at very low velocities in a cylindrical tube. The theoretical derivation of a slightly different form of the law was made independently by Wiedman in 1856 and Neumann and E. Hagenbach in 1858 (1859, 1860). Hagenbach was the first to call this law Poiseuille's law. The law is also very important in hemorheology and hemodynamics, both fields of physiology. Poiseuille's law was later in 1891 extended to turbulent flow by L. R. Wilberforce, based on Hagenbach's work. Derivation The Hagen–Poiseuille equation can be derived from the Navier–Stokes equations. The laminar flow through a pipe of uniform (circular) cross-section is known as Hagen–Poiseuille flow. The equations governing the Hagen–Poiseuille flow can be derived directly from the Navier–Stokes momentum equations in 3D cylindrical coordinates (r, \theta, z) by making the following set of assumptions: The flow is steady (\partial(\ldots)/\partial t = 0). The radial and azimuthal components of the fluid velocity are zero (u_r = u_\theta = 0). The flow is axisymmetric (\partial(\ldots)/\partial\theta = 0). The flow is fully developed (\partial u_z/\partial z = 0); here, however, this can also be proved via mass conservation and the above assumptions. Then the angular equation in the momentum equations and the continuity equation are identically satisfied. The radial momentum equation reduces to \partial p/\partial r = 0, i.e., the pressure p is a function of the axial coordinate z only. For brevity, use u instead of u_z. The axial momentum equation reduces to \frac{1}{r}\frac{d}{dr}\!\left(r \frac{du}{dr}\right) = \frac{1}{\mu}\frac{dp}{dz}, where \mu is the dynamic viscosity of the fluid. In the above equation, the left-hand side is only a function of r and the right-hand side term is only a function of z, implying that both terms must be the same constant. Evaluating this constant is straightforward. If we take the length of the pipe to be L and denote the pressure difference between the two ends of the pipe by \Delta p (high pressure minus low pressure), then the constant is simply defined such that -\frac{dp}{dz} = \frac{\Delta p}{L}, and \Delta p is positive. The solution is u = -\frac{\Delta p}{4 \mu L} r^2 + c_1 \ln r + c_2. Since u needs to be finite at r = 0, c_1 = 0. The no-slip boundary condition at the pipe wall requires that u = 0 at r = R (the radius of the pipe), which yields c_2 = \frac{\Delta p}{4 \mu L} R^2. Thus we have finally the following parabolic velocity profile: u(r) = \frac{\Delta p}{4 \mu L}\left(R^2 - r^2\right). The maximum velocity occurs at the pipe centerline (r = 0), u_{\max} = \frac{\Delta p R^2}{4 \mu L}. The average velocity can be obtained by integrating over the pipe cross section, u_{\mathrm{avg}} = \tfrac{1}{2} u_{\max} = \frac{\Delta p R^2}{8 \mu L}. The easily measurable quantity in experiments is the volumetric flow rate Q = \pi R^2 u_{\mathrm{avg}} = \frac{\pi R^4 \Delta p}{8 \mu L}. Rearrangement of this gives the Hagen–Poiseuille equation \Delta p = \frac{8 \mu L Q}{\pi R^4}. Although more lengthy than directly using the Navier–Stokes equations, an alternative method of deriving the Hagen–Poiseuille equation is as follows. Liquid flow through a pipe Assume the liquid exhibits laminar flow.
Laminar flow in a round pipe prescribes that there are a bunch of circular layers (lamina) of liquid, each having a velocity determined only by their radial distance from the center of the tube. Also assume the center is moving fastest while the liquid touching the walls of the tube is stationary (due to the no-slip condition). To figure out the motion of the liquid, all forces acting on each lamina must be known: The pressure force pushing the liquid through the tube is the change in pressure multiplied by the area: . This force is in the direction of the motion of the liquid. The negative sign comes from the conventional way we define . Viscosity effects will pull from the faster lamina immediately closer to the center of the tube. Viscosity effects will drag from the slower lamina immediately closer to the walls of the tube. Viscosity When two layers of liquid in contact with each other move at different speeds, there will be a shear force between them. This force is proportional to the area of contact , the velocity gradient perpendicular to the direction of flow , and a proportionality constant (viscosity) and is given by The negative sign is in there because we are concerned with the faster moving liquid (top in figure), which is being slowed by the slower liquid (bottom in figure). By Newton's third law of motion, the force on the slower liquid is equal and opposite (no negative sign) to the force on the faster liquid. This equation assumes that the area of contact is so large that we can ignore any effects from the edges and that the fluids behave as Newtonian fluids. Faster lamina Assume that we are figuring out the force on the lamina with radius . From the equation above, we need to know the area of contact and the velocity gradient. Think of the lamina as a ring of radius , thickness , and length . The area of contact between the lamina and the faster one is simply the surface area of the cylinder: . We don't know the exact form for the velocity of the liquid within the tube yet, but we do know (from our assumption above) that it is dependent on the radius. Therefore, the velocity gradient is the change of the velocity with respect to the change in the radius at the intersection of these two laminae. That intersection is at a radius of . So, considering that this force will be positive with respect to the movement of the liquid (but the derivative of the velocity is negative), the final form of the equation becomes where the vertical bar and subscript following the derivative indicates that it should be taken at a radius of . Slower lamina Next let's find the force of drag from the slower lamina. We need to calculate the same values that we did for the force from the faster lamina. In this case, the area of contact is at instead of . Also, we need to remember that this force opposes the direction of movement of the liquid and will therefore be negative (and that the derivative of the velocity is negative). Putting it all together To find the solution for the flow of a laminar layer through a tube, we need to make one last assumption. There is no acceleration of liquid in the pipe, and by Newton's first law, there is no net force. If there is no net force then we can add all of the forces together to get zero or First, to get everything happening at the same point, use the first two terms of a Taylor series expansion of the velocity gradient: The expression is valid for all laminae. 
Grouping like terms and dropping the vertical bar since all derivatives are assumed to be at radius , Finally, put this expression in the form of a differential equation, dropping the term quadratic in . The above equation is the same as the one obtained from the Navier–Stokes equations and the derivation from here on follows as before. Startup of Poiseuille flow in a pipe When a constant pressure gradient is applied between two ends of a long pipe, the flow will not immediately obtain Poiseuille profile, rather it develops through time and reaches the Poiseuille profile at steady state. The Navier–Stokes equations reduce to with initial and boundary conditions, The velocity distribution is given by where is the Bessel function of the first kind of order zero and are the positive roots of this function and is the Bessel function of the first kind of order one. As , Poiseuille solution is recovered. Poiseuille flow in an annular section If is the inner cylinder radii and is the outer cylinder radii, with constant applied pressure gradient between the two ends , the velocity distribution and the volume flux through the annular pipe are When , , the original problem is recovered. Poiseuille flow in a pipe with an oscillating pressure gradient Flow through pipes with an oscillating pressure gradient finds applications in blood flow through large arteries. The imposed pressure gradient is given by where , and are constants and is the frequency. The velocity field is given by where where and are the Kelvin functions and . Plane Poiseuille flow Plane Poiseuille flow is flow created between two infinitely long parallel plates, separated by a distance with a constant pressure gradient is applied in the direction of flow. The flow is essentially unidirectional because of infinite length. The Navier–Stokes equations reduce to with no-slip condition on both walls Therefore, the velocity distribution and the volume flow rate per unit length are Poiseuille flow through some non-circular cross-sections Joseph Boussinesq derived the velocity profile and volume flow rate in 1868 for rectangular channel and tubes of equilateral triangular cross-section and for elliptical cross-section. Joseph Proudman derived the same for isosceles triangles in 1914. Let be the constant pressure gradient acting in direction parallel to the motion. The velocity and the volume flow rate in a rectangular channel of height and width are The velocity and the volume flow rate of tube with equilateral triangular cross-section of side length are The velocity and the volume flow rate in the right-angled isosceles triangle , are The velocity distribution for tubes of elliptical cross-section with semiaxes and is Here, when , Poiseuille flow for circular pipe is recovered and when , plane Poiseuille flow is recovered. More explicit solutions with cross-sections such as snail-shaped sections, sections having the shape of a notch circle following a semicircle, annular sections between homofocal ellipses, annular sections between non-concentric circles are also available, as reviewed by . Poiseuille flow through arbitrary cross-section The flow through arbitrary cross-section satisfies the condition that on the walls. The governing equation reduces to If we introduce a new dependent variable as then it is easy to see that the problem reduces to that integrating a Laplace equation satisfying the condition on the wall. 
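The reduction to a Poisson/Laplace problem described just above invites a direct numerical treatment. The sketch below is illustrative only: the cross-section (a square duct), grid resolution, fluid properties and the symbol G for the constant pressure gradient are all assumptions, not values from the article. It solves mu * laplacian(u) = -G with u = 0 on the wall by plain Jacobi iteration and then integrates u over the section to obtain the volume flux:

    import numpy as np

    # Sketch: Poiseuille flow through an arbitrary cross-section, here a square duct.
    # Solves mu * laplacian(u) = -G on the interior with u = 0 on the walls (no slip).
    # All numerical values are illustrative assumptions.
    mu = 1.0e-3          # dynamic viscosity, Pa*s
    G = 10.0             # constant pressure gradient -dp/dz, Pa/m
    side = 0.01          # duct side length, m
    n = 81               # grid points per side
    h = side / (n - 1)   # grid spacing, m

    u = np.zeros((n, n))  # axial velocity; boundary rows/columns stay at zero

    # Jacobi iteration for the discrete Poisson equation laplacian(u) = -G/mu.
    for _ in range(10_000):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                + h**2 * G / mu)

    # Volume flux: integrate the velocity over the cross-sectional area.
    Q = u.sum() * h * h
    print(f"max velocity: {u.max():.4f} m/s, volume flux: {Q:.3e} m^3/s")

For a circular cross-section the same procedure reproduces the parabolic profile and the Hagen–Poiseuille flow rate derived earlier.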
Poiseuille's equation for an ideal isothermal gas For a compressible fluid in a tube the volumetric flow rate and the axial velocity are not constant along the tube; but the mass flow rate is constant along the tube length. The volumetric flow rate is usually expressed at the outlet pressure. As fluid is compressed or expanded, work is done and the fluid is heated or cooled. This means that the flow rate depends on the heat transfer to and from the fluid. For an ideal gas in the isothermal case, where the temperature of the fluid is permitted to equilibrate with its surroundings, an approximate relation for the pressure drop can be derived. Using ideal gas equation of state for constant temperature process (i.e., is constant) and the conservation of mass flow rate (i.e., is constant), the relation can be obtained. Over a short section of the pipe, the gas flowing through the pipe can be assumed to be incompressible so that Poiseuille law can be used locally, Here we assumed the local pressure gradient is not too great to have any compressibility effects. Though locally we ignored the effects of pressure variation due to density variation, over long distances these effects are taken into account. Since is independent of pressure, the above equation can be integrated over the length to give Hence the volumetric flow rate at the pipe outlet is given by This equation can be seen as Poiseuille's law with an extra correction factor expressing the average pressure relative to the outlet pressure. Electrical circuits analogy Electricity was originally understood to be a kind of fluid. This hydraulic analogy is still conceptually useful for understanding circuits. This analogy is also used to study the frequency response of fluid-mechanical networks using circuit tools, in which case the fluid network is termed a hydraulic circuit. Poiseuille's law corresponds to Ohm's law for electrical circuits, . Since the net force acting on the fluid is equal to , where , i.e. , then from Poiseuille's law, it follows that . For electrical circuits, let be the concentration of free charged particles (in m−3) and let be the charge of each particle (in coulombs). (For electrons, .) Then is the number of particles in the volume , and is their total charge. This is the charge that flows through the cross section per unit time, i.e. the current . Therefore, . Consequently, , and But , where is the total charge in the volume of the tube. The volume of the tube is equal to , so the number of charged particles in this volume is equal to , and their total charge is . Since the voltage , it follows then This is exactly Ohm's law, where the resistance is described by the formula . It follows that the resistance is proportional to the length of the resistor, which is true. However, it also follows that the resistance is inversely proportional to the fourth power of the radius , i.e. the resistance is inversely proportional to the second power of the cross section area of the resistor, which is different from the electrical formula. The electrical relation for the resistance is where is the resistivity; i.e. the resistance is inversely proportional to the cross section area of the resistor. The reason why Poiseuille's law leads to a different formula for the resistance is the difference between the fluid flow and the electric current. Electron gas is inviscid, so its velocity does not depend on the distance to the walls of the conductor. 
The resistance is due to the interaction between the flowing electrons and the atoms of the conductor. Therefore, Poiseuille's law and the hydraulic analogy are useful only within certain limits when applied to electricity. Both Ohm's law and Poiseuille's law illustrate transport phenomena. Medical applications – intravenous access and fluid delivery The Hagen–Poiseuille equation is useful in determining the vascular resistance and hence flow rate of intravenous (IV) fluids that may be achieved using various sizes of peripheral and central cannulas. The equation states that flow rate is proportional to the radius to the fourth power, meaning that a small increase in the internal diameter of the cannula yields a significant increase in flow rate of IV fluids. The radius of IV cannulas is typically measured in "gauge", which is inversely proportional to the radius. Peripheral IV cannulas are typically available as (from large to small) 14G, 16G, 18G, 20G, 22G, 26G. As an example, assuming cannula lengths are equal, the flow of a 14G cannula is 1.73 times that of a 16G cannula, and 4.16 times that of a 20G cannula. It also states that flow is inversely proportional to length, meaning that longer lines have lower flow rates. This is important to remember as in an emergency, many clinicians favor shorter, larger catheters compared to longer, narrower catheters. While of less clinical importance, an increased change in pressure () — such as by pressurizing the bag of fluid, squeezing the bag, or hanging the bag higher (relative to the level of the cannula) — can be used to speed up flow rate. It is also useful to understand that viscous fluids will flow slower (e.g. in blood transfusion).
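To make the proportionality described in this section concrete, here is a minimal Python sketch comparing two hypothetical catheters using the scaling Q ~ r^4 / L at a fixed driving pressure and viscosity. The radii and lengths are placeholder values, not real gauge specifications, so the printed ratio is not meant to reproduce the exact figures quoted above for particular gauges:

    # Sketch: relative IV flow rates from the Hagen-Poiseuille scaling Q ~ r^4 / L.
    # The radii and lengths below are placeholder values, not actual gauge specs.

    def relative_flow(radius_mm: float, length_mm: float) -> float:
        """Flow rate up to a constant factor (same fluid, same driving pressure)."""
        return radius_mm**4 / length_mm

    wide_short = relative_flow(radius_mm=0.8, length_mm=30.0)   # hypothetical large-bore, short cannula
    narrow_long = relative_flow(radius_mm=0.5, length_mm=45.0)  # hypothetical narrower, longer cannula

    print(f"wide/short cannula delivers {wide_short / narrow_long:.1f}x the flow of the narrow/long one")

This is the same reasoning clinicians apply when they prefer shorter, larger-bore catheters for rapid fluid delivery.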
Physical sciences
Fluid mechanics
Physics
1771587
https://en.wikipedia.org/wiki/Pregnancy
Pregnancy
Pregnancy is the time during which one or more offspring develops (gestates) inside a woman's uterus (womb). A multiple pregnancy involves more than one offspring, such as with twins. Pregnancy usually occurs by sexual intercourse, but can also occur through assisted reproductive technology procedures. A pregnancy may end in a live birth, a miscarriage, an induced abortion, or a stillbirth. Childbirth typically occurs around 40 weeks from the start of the last menstrual period (LMP), a span known as the gestational age. This is just over nine months. Counting by fertilization age, the length is about 38 weeks. Pregnancy is "the presence of an implanted human embryo or fetus in the uterus"; implantation occurs on average 8–9 days after fertilization. An embryo is the term for the developing offspring during the first seven weeks following implantation (i.e. ten weeks' gestational age), after which the term fetus is used until birth. Signs and symptoms of early pregnancy may include missed periods, tender breasts, morning sickness (nausea and vomiting), hunger, implantation bleeding, and frequent urination. Pregnancy may be confirmed with a pregnancy test. Methods of birth control—or, more accurately, contraception—are used to avoid pregnancy. Pregnancy is divided into three trimesters of approximately three months each. The first trimester includes conception, which is when the sperm fertilizes the egg. The fertilized egg then travels down the fallopian tube and attaches to the inside of the uterus, where it begins to form the embryo and placenta. During the first trimester, the possibility of miscarriage (natural death of embryo or fetus) is at its highest. Around the middle of the second trimester, movement of the fetus may be felt. At 28 weeks, more than 90% of babies can survive outside of the uterus if provided with high-quality medical care, though babies born at this time will likely experience serious health complications such as heart and respiratory problems and long-term intellectual and developmental disabilities. Prenatal care improves pregnancy outcomes. Nutrition during pregnancy is important to ensure healthy growth of the fetus. Prenatal care also include avoiding recreational drugs (including tobacco and alcohol), taking regular exercise, having blood tests, and regular physical examinations. Complications of pregnancy may include disorders of high blood pressure, gestational diabetes, iron-deficiency anemia, and severe nausea and vomiting. In the ideal childbirth, labor begins on its own "at term". Babies born before 37 weeks are "preterm" and at higher risk of health problems such as cerebral palsy. Babies born between weeks 37 and 39 are considered "early term" while those born between weeks 39 and 41 are considered "full term". Babies born between weeks 41 and 42 weeks are considered "late-term" while after 42 weeks they are considered "post-term". Delivery before 39 weeks by labor induction or caesarean section is not recommended unless required for other medical reasons. Terminology Associated terms for pregnancy are gravid and parous. Gravidus and gravid come from the Latin word meaning "heavy" and a pregnant female is sometimes referred to as a gravida. Gravidity refers to the number of times that a female has been pregnant. Similarly, the term parity is used for the number of times that a female carries a pregnancy to a viable stage. Twins and other multiple births are counted as one pregnancy and birth. 
A woman who has never been pregnant is referred to as a nulligravida. A woman who is (or has been only) pregnant for the first time is referred to as a primigravida, and a woman in subsequent pregnancies as a multigravida or as multiparous. Therefore, during a second pregnancy a woman would be described as gravida 2, para 1 and upon live delivery as gravida 2, para 2. In-progress pregnancies, abortions, miscarriages and/or stillbirths account for parity values being less than the gravida number. Women who have never carried a pregnancy more than 20 weeks are referred to as nulliparous. A pregnancy is considered term at 37 weeks of gestation. It is preterm if less than 37 weeks and post-term at or beyond 42 weeks of gestation. The American College of Obstetricians and Gynecologists has recommended further division, with early term 37 weeks up to 39 weeks, full term 39 weeks up to 41 weeks, and late term 41 weeks up to 42 weeks. The terms preterm and post-term have largely replaced the earlier terms of premature and postmature. Preterm and postterm are defined above, whereas premature and postmature have historical meaning and relate more to the infant's size and state of development rather than to the stage of pregnancy. Demographics and statistics About 213 million pregnancies occurred in 2012, of which 190 million (89%) were in the developing world and 23 million (11%) were in the developed world. The number of pregnancies in women aged between 15 and 44 is 133 per 1,000 women. About 10% to 15% of recognized pregnancies end in miscarriage. In 2016, complications of pregnancy resulted in 230,600 maternal deaths, down from 377,000 deaths in 1990. Common causes include bleeding, infections, hypertensive diseases of pregnancy, obstructed labor, miscarriage, abortion, and ectopic pregnancy. Globally, 44% of pregnancies are unplanned. Over half (56%) of unplanned pregnancies are aborted. Among unintended pregnancies in the United States, 60% of the women used birth control to some extent during the month pregnancy began. Signs and symptoms Each person's pregnancy can be different and many women do not experience all of the common signs and symptoms. The usual signs and symptoms of pregnancy do not significantly interfere with activities of daily living or pose a health threat to the mother or baby. Complications during pregnancy can cause other more severe symptoms, such as those associated with anemia. Early signs and symptoms of pregnancy may include: Tiredness or fatigue (one of the most common symptoms) Missed period Nausea or morning sickness, which may or may not include vomiting Breast tenderness (common during the first trimester) Increased frequency of urination Other signs and symptoms that some people may experience at different stages of pregnancy: Constipation Mood swings Regurgitation, heartburn, and bloating Headaches Food cravings and/or food aversions Light spotting, also called implantation bleeding, which may sometimes be an early sign of pregnancy in some women Pelvic girdle pain Back pain Darkening of the areolas Braxton Hicks contractions. Occasional, irregular, and often painless contractions that occur several times per day. Peripheral edema (swelling of the lower limbs). Common complaint in advancing pregnancy. Can be caused by inferior vena cava syndrome resulting from compression of the inferior vena cava and pelvic veins by the uterus, leading to increased hydrostatic pressure in the lower extremities.
Low blood pressure, often caused by compression of both the inferior vena cava and the abdominal aorta (aortocaval compression syndrome). Increased urinary frequency. A common complaint, caused by increased intravascular volume, elevated glomerular filtration rate, and compression of the bladder by the expanding uterus. Urinary tract infection Varicose veins. Common complaint caused by relaxation of the venous smooth muscle and increased intravascular pressure. Hemorrhoids (piles). Swollen veins at or inside the anal area. Caused by impaired venous return, straining associated with constipation, or increased intra-abdominal pressure in later pregnancy. Stretch marks Melasma, also known as the mask of pregnancy, is a discoloration, most often of the face. It usually begins to fade several months after giving birth. Timeline The chronology of pregnancy is, unless otherwise specified, generally given as gestational age, where the starting point is the beginning of the woman's last menstrual period (LMP), or the corresponding age of the gestation as estimated by a more accurate method if available. This model means that the woman is counted as being "pregnant" two weeks before conception and three weeks before implantation. Sometimes, timing may also use the fertilization age, which is the age of the embryo since conception. Start of gestational age The American Congress of Obstetricians and Gynecologists recommends the following methods to calculate gestational age: Directly calculating the days since the beginning of the last menstrual period. Early obstetric ultrasound, comparing the size of an embryo or fetus to that of a reference group of pregnancies of known gestational age (such as calculated from last menstrual periods), and using the mean gestational age of other embryos or fetuses of the same size. If the gestational age as calculated from an early ultrasound contradicts the one calculated directly from the last menstrual period, it is still the one from the early ultrasound that is used for the rest of the pregnancy. In the case of in vitro fertilization, calculating days since oocyte retrieval or co-incubation and adding 14 days. Trimesters Pregnancy is divided into three trimesters, each lasting for approximately three months. The exact length of each trimester can vary between sources. The first trimester begins with the start of gestational age as described above, that is, the beginning of week 1, or 0 weeks + 0 days of gestational age (GA). It ends at the end of week 12 (11 weeks + 6 days of GA) or the end of week 14 (13 weeks + 6 days of GA). The second trimester is defined as starting between the beginning of week 13 (12 weeks + 0 days of GA) and the beginning of week 15 (14 weeks + 0 days of GA). It ends at the end of week 27 (26 weeks + 6 days of GA) or the end of week 28 (27 weeks + 6 days of GA). The third trimester is defined as starting between the beginning of week 28 (27 weeks + 0 days of GA) and the beginning of week 29 (28 weeks + 0 days of GA). It lasts until childbirth. Estimation of due date Due date estimation essentially follows two steps: Determination of which time point is to be used as the origin for gestational age, as described in the section above. Adding the estimated gestational age at childbirth to the above time point. Childbirth on average occurs at a gestational age of 280 days (40 weeks), which is therefore often used as a standard estimation for individual pregnancies. However, alternative durations as well as more individualized methods have also been suggested.
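Since the 280-day convention just mentioned amounts to simple date arithmetic, a minimal Python sketch follows (the function name and example date are illustrative assumptions); it also notes the equivalence with the "add one year, subtract three months, add seven days" formulation, Naegele's rule, described in the next paragraph:

    from datetime import date, timedelta

    # Sketch: estimated due date from the first day of the last menstrual period (LMP),
    # using the standard 280-day (40-week) gestational-age convention quoted above.
    # The example date is an illustrative assumption.

    def estimated_due_date(lmp: date) -> date:
        """Add 280 days to the LMP; equivalent to Naegele's rule
        (+1 year, -3 months, +7 days) to within a day or two,
        depending on month lengths and leap years."""
        return lmp + timedelta(days=280)

    print(estimated_due_date(date(2024, 1, 10)))  # 2024-10-16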
The American College of Obstetricians and Gynecologists divides full term into three divisions: Early-term: 37 weeks and 0 days through 38 weeks and 6 days; Full-term: 39 weeks and 0 days through 40 weeks and 6 days; Late-term: 41 weeks and 0 days through 41 weeks and 6 days; Post-term: greater than or equal to 42 weeks and 0 days. Naegele's rule is a standard way of calculating the due date for a pregnancy when assuming a gestational age of 280 days at childbirth. The rule estimates the expected date of delivery (EDD) by adding a year, subtracting three months, and adding seven days to the origin of gestational age. Alternatively, there are mobile apps, which essentially always give consistent estimations compared to each other and correct for leap years, while pregnancy wheels made of paper can differ from each other by 7 days and generally do not correct for leap years. Furthermore, actual childbirth has only a certain probability of occurring within the limits of the estimated due date. A study of singleton live births found that childbirth has a standard deviation of 14 days when gestational age is estimated by first-trimester ultrasound, and 16 days when estimated directly by last menstrual period. Physiology Capacity Fertility and fecundity are the respective capacities to fertilize and establish a clinical pregnancy and to have a live birth. Infertility is an impaired ability to establish a clinical pregnancy, and sterility is the permanent inability to establish a clinical pregnancy. The capacity for pregnancy depends on the reproductive system, its development and its variation, as well as on the condition of the person. Women as well as intersex and transgender people who have a functioning female reproductive system are capable of pregnancy. In some cases, someone might be able to produce fertilizable eggs but might not have a womb, or not one that can sufficiently gestate a pregnancy, in which case they might pursue surrogacy. Initiation Through an interplay of hormones that includes follicle stimulating hormone, folliculogenesis and oogenesis create a mature egg cell, the female gamete. Fertilization is the event in which the egg cell fuses with the male gamete, the spermatozoon. After the point of fertilization, the fused product of the female and male gamete is referred to as a zygote or fertilized egg. The fusion of female and male gametes usually occurs following the act of sexual intercourse. Pregnancy rates for sexual intercourse are highest during the part of the menstrual cycle from about 5 days before until 1 to 2 days after ovulation. Fertilization can also occur by assisted reproductive technology such as artificial insemination and in vitro fertilisation. Fertilization (conception) is sometimes used as the initiation of pregnancy, with the derived age being termed fertilization age. Fertilization usually occurs about two weeks before the next expected menstrual period. A third point in time is also considered by some people to be the true beginning of a pregnancy: this is the time of implantation, when the future fetus attaches to the lining of the uterus. This is about a week to ten days after fertilization. Development of embryo and fetus The sperm and the egg cell, which has been released from one of the female's two ovaries, unite in one of the two fallopian tubes. The fertilized egg, known as a zygote, then moves toward the uterus, a journey that can take up to a week to complete. Cell division begins approximately 24 to 36 hours after the female and male cells unite.
Cell division continues at a rapid rate and the cells then develop into what is known as a blastocyst. The blastocyst arrives at the uterus and attaches to the uterine wall, a process known as implantation. The development of the mass of cells that will become the infant is called embryogenesis during the first approximately ten weeks of gestation. During this time, cells begin to differentiate into the various body systems. The basic outlines of the organ, body, and nervous systems are established. By the end of the embryonic stage, the beginnings of features such as fingers, eyes, mouth, and ears become visible. Also during this time, there is development of structures important to the support of the embryo, including the placenta and umbilical cord. The placenta connects the developing embryo to the uterine wall to allow nutrient uptake, waste elimination, and gas exchange via the mother's blood supply. The umbilical cord is the connecting cord from the embryo or fetus to the placenta. After about ten weeks of gestational age—which is the same as eight weeks after conception—the embryo becomes known as a fetus. At the beginning of the fetal stage, the risk of miscarriage decreases sharply. At this stage, a fetus is about in length, the heartbeat is seen via ultrasound, and the fetus makes involuntary motions. During continued fetal development, the early body systems, and structures that were established in the embryonic stage continue to develop. Sex organs begin to appear during the third month of gestation. The fetus continues to grow in both weight and length, although the majority of the physical growth occurs in the last weeks of pregnancy. Electrical brain activity is first detected at the end of week 5 of gestation, but as in brain-dead patients, it is primitive neural activity rather than the beginning of conscious brain activity. Synapses do not begin to form until week 17. Neural connections between the sensory cortex and thalamus develop as early as 24 weeks' gestational age, but the first evidence of their function does not occur until around 30 weeks, when minimal consciousness, dreaming, and the ability to feel pain emerges. Although the fetus begins to move during the first trimester, it is not until the second trimester that movement, known as quickening, can be felt. This typically happens in the fourth month, more specifically in the 20th to 21st week, or by the 19th week if the woman has been pregnant before. It is common for some women not to feel the fetus move until much later. During the second trimester, when the body size changes, maternity clothes may be worn. Maternal changes During pregnancy, a woman undergoes many normal physiological changes, including behavioral, cardiovascular, hematologic, metabolic, renal, and respiratory changes. Increases in blood sugar, breathing, and cardiac output are all required. Levels of progesterone and estrogens rise continually throughout pregnancy, suppressing the hypothalamic axis and therefore the menstrual cycle. A full-term pregnancy at an early age (less than 25 years) reduces the risk of breast, ovarian, and endometrial cancer, and the risk declines further with each additional full-term pregnancy. The fetus is genetically different from its mother and can therefore be viewed as an unusually successful allograft. The main reason for this success is increased immune tolerance during pregnancy, which prevents the mother's body from mounting an immune system response against certain triggers. 
During the first trimester, minute ventilation increases by 40 percent. The womb will grow to the size of a lemon by eight weeks. Many symptoms and discomforts of pregnancy, such as nausea and tender breasts, appear in the first trimester. During the second trimester, most women feel more energized and put on weight as the symptoms of morning sickness subside. They begin to feel regular fetal movements, which can become strong and even disruptive. Braxton Hicks contractions are sporadic uterine contractions that may start around six weeks into a pregnancy; however, they are usually not felt until the second or third trimester. Final weight gain takes place during the third trimester; this is the most weight gain throughout the pregnancy. The woman's abdomen will transform in shape as the fetus turns in a downward position ready for birth. The woman's navel will sometimes become convex, "popping" out, due to the expanding abdomen. The uterus, the muscular organ that holds the developing fetus, can expand up to 20 times its normal size during pregnancy. Head engagement, also called "lightening" or "dropping", occurs as the fetal head descends into a cephalic presentation. While it relieves pressure on the upper abdomen and gives a renewed ease in breathing, it also severely reduces bladder capacity, resulting in a need to void more frequently, and increases pressure on the pelvic floor and the rectum. It is not possible to predict when lightening will occur. In a first pregnancy it may happen a few weeks before the due date, though it may happen later or even not until labor begins, as is typical with subsequent pregnancies. It is during the third trimester that maternal activity and sleep positions may affect fetal development due to restricted blood flow. For instance, the enlarged uterus may impede blood flow by compressing the vena cava when lying flat, a condition that can be relieved by lying on the left side. Childbirth Childbirth, referred to as labor and delivery in the medical field, is the process whereby an infant is born. A woman is considered to be in labor when she begins experiencing regular uterine contractions, accompanied by changes of her cervix—primarily effacement and dilation. While childbirth is widely experienced as painful, some women do report painless labors, while others find that concentrating on the birth helps to quicken labor and lessen the sensations. Most births are successful vaginal births, but sometimes complications arise and a woman may undergo a cesarean section. During the time immediately after birth, both the mother and the baby are hormonally cued to bond, the mother through the release of oxytocin, a hormone also released during breastfeeding. Studies show that skin-to-skin contact between a mother and her newborn immediately after birth is beneficial for both the mother and baby. A review done by the World Health Organization found that skin-to-skin contact between mothers and babies after birth reduces crying, improves mother–infant interaction, and helps mothers to breastfeed successfully. They recommend that neonates be allowed to bond with the mother during their first two hours after birth, the period that they tend to be more alert than in the following hours of early life. Childbirth maturity stages In the ideal childbirth, labor begins on its own when a woman is "at term". Events before completion of 37 weeks are considered preterm. Preterm birth is associated with a range of complications and should be avoided if possible. 
Sometimes if a woman's water breaks or she has contractions before 39 weeks, birth is unavoidable. However, spontaneous birth after 37 weeks is considered term and is not associated with the same risks of a preterm birth. Planned birth before 39 weeks by caesarean section or labor induction, although "at term", results in an increased risk of complications. This is from factors including underdeveloped lungs of newborns, infection due to underdeveloped immune system, feeding problems due to underdeveloped brain, and jaundice from underdeveloped liver. Babies born between 39 and 41 weeks' gestation have better outcomes than babies born either before or after this range. This special time period is called "full term". Whenever possible, waiting for labor to begin on its own in this time period is best for the health of the mother and baby. The decision to perform an induction must be made after weighing the risks and benefits, but is safer after 39 weeks. Events after 42 weeks are considered postterm. When a pregnancy exceeds 42 weeks, the risk of complications for both the woman and the fetus increases significantly. Therefore, in an otherwise uncomplicated pregnancy, obstetricians usually prefer to induce labor at some stage between 41 and 42 weeks. Postnatal period The postpartum period also referred to as the puerperium, is the postnatal period that begins immediately after delivery and extends for about six weeks. During this period, the mother's body begins the return to pre-pregnancy conditions that includes changes in hormone levels and uterus size. Diagnosis The beginning of pregnancy may be detected either based on symptoms by the woman herself, or by using pregnancy tests. However, an important condition with serious health implications that is quite common is the denial of pregnancy by the pregnant woman. About 1 in 475 denials will last until around the 20th week of pregnancy. The proportion of cases of denial, persisting until delivery is about 1 in 2500. Conversely, some non-pregnant women have a very strong belief that they are pregnant along with some of the physical changes. This condition is known as a false pregnancy. Physical signs Most pregnant women experience a number of symptoms, which can signify pregnancy. A number of early medical signs are associated with pregnancy. These signs include: the presence of human chorionic gonadotropin (hCG) in the blood and urine missed menstrual period implantation bleeding that occurs at implantation of the embryo in the uterus during the third or fourth week after last menstrual period increased basal body temperature sustained for over two weeks after ovulation Chadwick's sign (bluish discolouration of the cervix, vagina, and vulva) Goodell's sign (softening of the vaginal portion of the cervix) Hegar's sign (softening of the uterine isthmus) Pigmentation of the linea alba, called linea nigra (darkening of the skin in a midline of the abdomen, resulting from hormonal changes, usually appearing around the middle of pregnancy). Darkening of the nipples and areolas due to an increase in hormones. Biomarkers Pregnancy detection can be accomplished using one or more various pregnancy tests, which detect hormones generated by the newly formed placenta, serving as biomarkers of pregnancy. Blood and urine tests can detect pregnancy by 11 and 14 days, respectively, after fertilization. Blood pregnancy tests are more sensitive than urine tests (giving fewer false negatives). 
Home pregnancy tests are urine tests, and normally detect a pregnancy 12 to 15 days after fertilization. A quantitative blood test can determine approximately the date the embryo was fertilized because hCG levels double every 36 to 72 hours before 8 weeks' gestation. A single test of progesterone levels can also help determine how likely it is that a fetus will survive in those with a threatened miscarriage (bleeding in early pregnancy), but only if the ultrasound result was inconclusive. Ultrasound Obstetric ultrasonography can detect fetal abnormalities, detect multiple pregnancies, and improve gestational dating at 24 weeks. The resultant estimated gestational age and due date of the fetus are slightly more accurate than methods based on last menstrual period. Ultrasound is used to measure the nuchal fold in order to screen for Down syndrome. Management Prenatal care Pre-conception counseling is care that is provided to a woman or couple to discuss conception, pregnancy, current health issues and recommendations for the period before pregnancy. Prenatal medical care is the medical and nursing care recommended for women during pregnancy; the time intervals and exact goals of each visit differ by country. Women who are high risk have better outcomes if they are seen regularly and frequently by a medical professional than women who are low risk. A woman can be labeled as high risk for different reasons, including previous complications in pregnancy, complications in the current pregnancy, current medical diseases, or social issues. The aim of good prenatal care is prevention, early identification, and treatment of any medical complications. A basic prenatal visit consists of measurement of blood pressure, fundal height, weight and fetal heart rate, checking for symptoms of labor, and guidance for what to expect next. Nutrition Nutrition during pregnancy is important to ensure healthy growth of the fetus. Nutrition during pregnancy is different from the non-pregnant state. There are increased energy requirements and specific micronutrient requirements. Women benefit from education to encourage a balanced energy and protein intake during pregnancy. Some women may need professional medical advice if their diet is affected by medical conditions, food allergies, or specific religious/ethical beliefs. Further studies are needed to assess the effect of dietary advice to prevent gestational diabetes, although low-quality evidence suggests some benefit. Adequate periconceptional (time before and right after conception) folic acid (also called folate or vitamin B9) intake has been shown to decrease the risk of fetal neural tube defects, such as spina bifida. L-methylfolate, the bioavailable form of folate, is also considered acceptable to take. L-methylfolate is best used by the 40% to 60% of the population with genetic polymorphisms that reduce or impair conversion of folic acid into its active form. The neural tube develops during the first 28 days of pregnancy, while a urine pregnancy test is not usually positive until 14 days post-conception, which explains the need to guarantee adequate folate intake before conception. Folate is abundant in green leafy vegetables, legumes, and citrus. In the United States and Canada, most wheat products (flour, noodles) are fortified with folic acid. Weight gain The amount of healthy weight gain during a pregnancy varies. Weight gain is related to the weight of the baby, the placenta, extra circulatory fluid, larger tissues, and fat and protein stores.
Most needed weight gain occurs later in pregnancy. The Institute of Medicine recommends an overall pregnancy weight gain for those of normal weight (body mass index of 18.5–24.9), of 11.3–15.9 kg (25–35 pounds) having a singleton pregnancy. Women who are underweight (BMI of less than 18.5), should gain between 12.7 and 18 kg (28–40 lb), while those who are overweight (BMI of 25–29.9) are advised to gain between 6.8 and 11.3 kg (15–25 lb) and those who are obese (BMI ≥ 30) should gain between 5–9 kg (11–20 lb). These values reference the expectations for a term pregnancy. During pregnancy, insufficient or excessive weight gain can compromise the health of the mother and fetus. The most effective intervention for weight gain in underweight women is not clear. Being or becoming overweight in pregnancy increases the risk of complications for mother and fetus, including cesarean section, gestational hypertension, pre-eclampsia, macrosomia and shoulder dystocia. Excessive weight gain can make losing weight after the pregnancy difficult. Some of these complications are risk factors for stroke. Around 50% of women of childbearing age in developed countries like the United Kingdom are overweight or obese before pregnancy. Diet modification is the most effective way to reduce weight gain and associated risks in pregnancy. Medication Drugs used during pregnancy can have temporary or permanent effects on the fetus. Anything (including drugs) that can cause permanent deformities in the fetus are labeled as teratogens. In the U.S., drugs were classified into categories A, B, C, D and X based on the Food and Drug Administration (FDA) rating system to provide therapeutic guidance based on potential benefits and fetal risks. Drugs, including some multivitamins, that have demonstrated no fetal risks after controlled studies in humans are classified as Category A. On the other hand, drugs like thalidomide with proven fetal risks that outweigh all benefits are classified as Category X. Recreational drugs The use of recreational drugs in pregnancy can cause various pregnancy complications. Alcoholic drinks consumed during pregnancy can cause one or more fetal alcohol spectrum disorders. According to the CDC, there is no known safe amount of alcohol during pregnancy and no safe time to drink during pregnancy, including before a woman knows that she is pregnant. Tobacco smoking during pregnancy can cause a wide range of behavioral, neurological, and physical difficulties. Smoking during pregnancy causes twice the risk of premature rupture of membranes, placental abruption and placenta previa. Smoking is associated with 30% higher odds of preterm birth. Prenatal cocaine exposure is associated with premature birth, birth defects and attention deficit disorder. Prenatal methamphetamine exposure can cause premature birth and congenital abnormalities. Short-term neonatal outcomes in methamphetamine babies show small deficits in infant neurobehavioral function and growth restriction. Long-term effects in terms of impaired brain development may also be caused by methamphetamine use. Cannabis in pregnancy has been shown to be teratogenic in large doses in animals, but has not shown any teratogenic effects in humans. Exposure to toxins Intrauterine exposure to environmental toxins in pregnancy has the potential to cause adverse effects on prenatal development, and to cause pregnancy complications. Air pollution has been associated with low birth weight infants. 
Conditions of particular severity in pregnancy include mercury poisoning and lead poisoning. To minimize exposure to environmental toxins, the American College of Nurse-Midwives recommends: checking whether the home has lead paint, washing all fresh fruits and vegetables thoroughly and buying organic produce, and avoiding cleaning products labeled "toxic" or any product with a warning on the label. Pregnant women can also be exposed to toxins in the workplace, including airborne particles. The effects of wearing an N95 filtering facepiece respirator are similar for pregnant women as for non-pregnant women, and wearing a respirator for one hour does not affect the fetal heart rate. Death by violence Pregnant women or those who have recently given birth in the U.S. are more likely to be murdered than to die from obstetric causes. These homicides are a combination of intimate partner violence and firearms. Health authorities have called the violence "a health emergency for pregnant women", but say that pregnancy-related homicides are preventable if healthcare providers identify those women at risk and offer assistance to them. Sexual activity Most women can continue to engage in sexual activity, including sexual intercourse, throughout pregnancy. Research suggests that during pregnancy both sexual desire and frequency of sexual relations decrease during the first and third trimester, with a rise during the second trimester. Sex during pregnancy is a low-risk behavior except when the healthcare provider advises that sexual intercourse be avoided for particular medical reasons. For a healthy pregnant woman, there is no single safe or right way to have sex during pregnancy. Exercise Regular aerobic exercise during pregnancy appears to improve (or maintain) physical fitness. Physical exercise during pregnancy appears to decrease the need for C-section and reduce time in labor, and even vigorous exercise carries no significant risks to babies while providing significant health benefits to the mother. Studies show that performing light moderate intensity and strength exercises while pregnant does not harm the mother’s cardiovascular system and may limit excessive weight gain. The American College of Sports and Medicine recommends pregnant women should participate in at least 150 minutes/week of moderate exercise. These forms of exercise should avoid heavy lifting, hot temperatures, and high impact sports. The Clinical Practice Obstetrics Committee of Canada recommends that "All women without contraindications should be encouraged to participate in aerobic and strength-conditioning exercises as part of a healthy lifestyle during their pregnancy". Although an upper level of safe exercise intensity has not been established, women who were regular exercisers before pregnancy and who have uncomplicated pregnancies should be able to engage in high intensity exercise programs without a higher risk of prematurity, lower birth weight, or gestational weight gain. In general, participation in a wide range of recreational activities appears to be safe, with the avoidance of those with a high risk of falling such as horseback riding or skiing or those that carry a risk of abdominal trauma, such as soccer or hockey. Bed rest, outside of research studies, is not recommended as there is potential harm and no evidence of benefit. High intensity exercise During pregnancy, women can experience a loss of postural stability, pelvic incontinence, back pain, and fatigue, among other symptoms. 
Resistance training has been found to reduce pregnancy symptoms and reduce postpartum complications. Provided that women also regularly participate in low-impact training, strength training can reduce the severity of pelvic girdle pain postpartum. Exercises that focus on pelvic muscle strength can help reduce pain and stress urinary incontinence. Engaging in regular exercise and physical activity has been shown to be beneficial during pregnancy. Acute bouts of high intensity interval training can help decrease the risks of health complications associated with pregnancy, maintain a healthy body fat percentage during pregnancy, as well as improve overall well-being. Pregnant women who participated in high intensity interval training have shown improvements in body composition after intervention, as well as general improvement in cardiorespiratory fitness and exercise tolerance. Taking part in this style of exercise, similarly to moderate intensity continuous training, has also been shown to improve glycemic response and insulin sensitivity. There are specific concerns to be avoided with exercise during pregnancy, such as overheating, fall risk, and remaining in a supine position for an extended period of time. Inexperienced individuals new to high-intensity interval training could potentially increase their risk for negative conditions associated with hypertension, such as pre-eclampsia. Sleep It has been suggested that shift work and exposure to bright light at night should be avoided at least during the last trimester of pregnancy to decrease the risk of psychological and behavioral problems in the newborn. Stress The children of women who had high stress levels during pregnancy are slightly more likely to have externalizing behavioral problems such as impulsivity. The behavioral effect was most pronounced during early childhood. Dental care The increased levels of progesterone and estrogen during pregnancy make gingivitis more likely; the gums become edematous, red in colour, and tend to bleed. Also, a pyogenic granuloma or "pregnancy tumor" is commonly seen on the labial surface of the papilla. Lesions can be treated by local debridement or deep incision depending on their size, and by following adequate oral hygiene measures. There have been suggestions that severe periodontitis may increase the risk of preterm birth and low birth weight; however, a Cochrane review found insufficient evidence to determine whether periodontitis leads to adverse birth outcomes. Flying In low-risk pregnancies, most health care providers approve flying until about 36 weeks of gestational age. Most airlines allow pregnant women to fly short distances at less than 36 weeks, and long distances at less than 32 weeks. Many airlines require a doctor's note that approves flying, especially at over 28 weeks. During flights, the risk of deep vein thrombosis is decreased by getting up and walking occasionally, as well as by avoiding dehydration. The exposure to cosmic radiation is negligible for most travelers. For pregnant women, even the longest intercontinental flight would expose them to less than 15% of both the NCRPM and ICRP limits. Full body scanners do not use ionizing radiation, and are safe in pregnancy. Pregnancy classes and birth plan To prepare for the birth of the baby, health care providers recommend that parents attend antenatal classes during the third trimester of pregnancy. 
Classes include information about the process of labor and birth and the various kinds of births, including both vaginal and caesarean delivery, the use of forceps, and other interventions that may be needed to safely deliver the infant. Types of pain relief, including relaxation techniques, are discussed. Partners or others who may plan to support a woman during her labor and delivery learn how to assist in the birth. It is also suggested that a birth plan be written at this time. A birth plan is a written statement that outlines the desires of the mother during labor and delivery of the baby. Discussing the birth plan with the midwife or other care provider gives parents a chance to ask questions and learn more about the process of labour. In 1991 the WHO launched the Baby-Friendly Hospital Initiative, a global program that recognizes birthing centers and hospitals that offer optimal levels of care for giving birth. Facilities that have been certified as "Baby Friendly" accept visits from expecting parents to familiarize them with the facility and the staff. Complications Each year, ill health as a result of pregnancy is experienced (sometimes permanently) by more than 20 million women around the world. In 2016, complications of pregnancy resulted in 230,600 deaths down from 377,000 deaths in 1990. Common causes include bleeding (72,000), infections (20,000), hypertensive diseases of pregnancy (32,000), obstructed labor (10,000), and pregnancy with abortive outcome (20,000), which includes miscarriage, abortion, and ectopic pregnancy. The following are some examples of pregnancy complications: Pregnancy induced hypertension Anemia Postpartum depression, a common but solvable complication following childbirth that may result from decreased hormonal levels. Postpartum psychosis Thromboembolic disorders, with an increased risk due to hypercoagulability in pregnancy. These are the leading cause of death in pregnant women in the US. Pruritic urticarial papules and plaques of pregnancy (PUPPP), a skin disease that develops around the 32nd week. Signs are red plaques, papules, and itchiness around the belly button that then spreads all over the body except for the inside of hands and face. Ectopic pregnancy, including abdominal pregnancy, implantation of the embryo outside the uterus Hyperemesis gravidarum, excessive nausea and vomiting that is more severe than normal morning sickness. Pulmonary embolism, a blood clot that forms in the legs and migrates to the lungs. Acute fatty liver of pregnancy is a rare complication thought to be brought about by a disruption in the metabolism of fatty acids by mitochondria. There is also an increased susceptibility and severity of certain infections in pregnancy. Miscarriage and stillbirth Miscarriage is the most common complication of early pregnancy. It is defined as the loss of an embryo or fetus before it is able to survive independently. The most common symptom of miscarriage is vaginal bleeding with or without pain. The miscarriage may be evidenced by a clot-like material passing through and out of the vagina. About 80% of miscarriages occur in the first 12 weeks of pregnancy. The underlying cause in about half of cases involves chromosomal abnormalities. Stillbirth is defined as fetal death after 20 or 28 weeks of pregnancy, depending on the source. It results in a baby born without signs of life. Each year about 21,000 babies are stillborn in the U.S. Sadness, anxiety, and guilt may occur after a miscarriage or a stillbirth. 
Emotional support may help with processing the loss. Fathers may experience grief over the loss as well. A large study found that there is a need to increase the accessibility of support services available for fathers. Diseases in pregnancy A pregnant woman may have a pre-existing disease, which is not directly caused by the pregnancy, but may cause complications to develop that include a potential risk to the pregnancy; or a disease may develop during pregnancy. Diabetes mellitus and pregnancy deals with the interactions of diabetes mellitus (not restricted to gestational diabetes) and pregnancy. Risks for the child include miscarriage, growth restriction, growth acceleration, large for gestational age (macrosomia), polyhydramnios (too much amniotic fluid), and birth defects. Thyroid disease in pregnancy can, if uncorrected, cause adverse effects on fetal and maternal well-being. The deleterious effects of thyroid dysfunction can also extend beyond pregnancy and delivery to affect neurointellectual development in the early life of the child. Demand for thyroid hormones is increased during pregnancy, which may cause a previously unnoticed thyroid disorder to worsen. Untreated celiac disease can cause a miscarriage, intrauterine growth restriction, small for gestational age, low birthweight and preterm birth. Often reproductive disorders are the only manifestation of undiagnosed celiac disease and most cases are not recognized. Complications or failures of pregnancy cannot be explained simply by malabsorption, but by the autoimmune response elicited by the exposure to gluten, which causes damage to the placenta. The gluten-free diet avoids or reduces the risk of developing reproductive disorders in pregnant women with celiac disease. Also, pregnancy can be a trigger for the development of celiac disease in genetically susceptible women who are consuming gluten. Lupus in pregnancy confers an increased rate of fetal death in utero, miscarriage, and of neonatal lupus. Hypercoagulability in pregnancy is the propensity of pregnant women to develop thrombosis (blood clots). Pregnancy itself is a factor of hypercoagulability (pregnancy-induced hypercoagulability), as a physiologically adaptive mechanism to prevent postpartum bleeding. However, in combination with an underlying hypercoagulable state, the risk of thrombosis or embolism may become substantial. Abortion An abortion is the termination of an embryo or fetus via medical method. It is usually done within the first trimester, sometimes in the second, and rarely in the third. Reasons for pregnancies being undesired are broad. Many jurisdictions restrict or prohibit abortion, with rape being the most legally permissible exception. Birth control and education Family planning, as well as the availability and use of contraception, along with increased comprehensive sex education, has enabled many to prevent pregnancies when they are not desired. Schemes and funding to support education and the means to prevent pregnancies when they are not intended have been instrumental and are part of the third of the Sustainable Development Goals (SDGs) advanced by the United Nations. Technologies and science Assisted reproductive technology Modern reproductive medicine offers many forms of assisted reproductive technology for couples who stay childless against their will, such as fertility medication, artificial insemination, in vitro fertilization and surrogacy. 
Medical imaging Medical imaging may be indicated in pregnancy because of pregnancy complications, disease, or routine prenatal care. Medical ultrasonography, including obstetric ultrasonography, and magnetic resonance imaging (MRI) without contrast agents are not associated with any risk for the mother or the fetus, and are the imaging techniques of choice for pregnant women. Projectional radiography, CT scan and nuclear medicine imaging result in some degree of ionizing radiation exposure, but in most cases the absorbed doses are not associated with harm to the baby. At higher dosages or frequency, effects can include miscarriage, birth defects and intellectual disability. Epidemiology About 213 million pregnancies occurred in 2012, of which 190 million were in the developing world and 23 million were in the developed world. This is about 133 pregnancies per 1,000 women aged 15 to 44. About 10% to 15% of recognized pregnancies end in miscarriage. Globally, 44% of pregnancies are unplanned. Over half (56%) of unplanned pregnancies are aborted. In countries where abortion is prohibited, or only carried out in circumstances where the mother's life is at risk, 48% of unplanned pregnancies are aborted illegally, compared with a rate of 69% in countries where abortion is legal. Of pregnancies in 2012, 120 million occurred in Asia, 54 million in Africa, 19 million in Europe, 18 million in Latin America and the Caribbean, 7 million in North America, and 1 million in Oceania. Pregnancy rates are 140 per 1000 women of childbearing age in the developing world and 94 per 1000 in the developed world. The rate of pregnancy, as well as the ages at which it occurs, differs by country and region. It is influenced by a number of factors, such as cultural, social and religious norms; access to contraception; and rates of education. The total fertility rate (TFR) in 2013 was estimated to be highest in Niger (7.03 children/woman) and lowest in Singapore (0.79 children/woman). In Europe, the average childbearing age has been rising continuously for some time. In Western, Northern, and Southern Europe, first-time mothers are on average 26 to 29 years old, up from 23 to 25 years at the start of the 1970s. In a number of European countries, such as Spain, the mean age of women at first childbirth has crossed the 30-year threshold. This process is not restricted to Europe. Asia, Japan and the United States are all seeing the average age at first birth rise, and increasingly the process is spreading to countries in the developing world like China, Turkey and Iran. In the US, the average age of first childbirth was 25.4 in 2010. In the United States and United Kingdom, 40% of pregnancies are unplanned, and between a quarter and half of those unplanned pregnancies were unwanted pregnancies. In the US, a woman's educational attainment and her marital status are historically correlated with childbearing: the percentage of women unmarried at the time of first birth drops with increasing educational level. Three studies conducted between 2015 and 2018 indicate a large fraction (~80%) of women without a high school diploma or local equivalent in the US are unmarried at the time of their first birth. By contrast, the same studies indicated fewer women with a bachelor's degree or higher (~24%) have their first child while unmarried. 
However, this phenomenon also has a strong generational component: a 1996 study found 48.2% of US women without a bachelor's degree had their first child whilst unmarried, and only 4% of women with a bachelor's degree had their first child whilst unmarried. These studies indicate a rising trend for US women of all educational levels to be unmarried at the time of their first birth, and thus a recent weakening of the correlation between educational attainment, marital status, and childbearing. Legal and social aspects Legal protection Many countries have various legal regulations in place to protect pregnant women and their children. Many countries have laws against pregnancy discrimination. Maternity Protection Convention ensures that pregnant women are exempt from activities such as night shifts or carrying heavy stocks. Maternity leave typically provides paid leave from work during roughly the last trimester of pregnancy and for some time after birth. Notable extreme cases include Norway (8 months with full pay) and the United States (no paid leave at all except in some states). In the United States, some actions that result in miscarriage or stillbirth, such as beating a pregnant woman, are considered crimes. One law that does so is the federal Unborn Victims of Violence Act. In 2014, the American state of Tennessee passed a law which allows prosecutors to charge a woman with criminal assault if she uses illegal drugs during her pregnancy and her fetus or newborn is harmed as a result. However, protections are not universal. In Singapore, the Employment of Foreign Manpower Act forbids current and former work permit holders from becoming pregnant or giving birth in Singapore without prior permission. Violation of the Act is punishable by a fine of up to S$10,000 (US$) and deportation, and until 2010, their employers would lose their $5,000 security bond. Teenage pregnancy Teenage pregnancy is also known as adolescent pregnancy. The WHO defines adolescence as the period between the ages of 10 and 19 years. Adolescents face higher health risks than women who give birth at age 20 to 24 and their infants are at a higher risk for preterm birth, low birth weight, and other severe neonatal conditions. Their children continue to face greater challenges, both behavioral and physical, throughout their lives. Teenage pregnancies are also related to social issues, including social stigma, lower educational levels, and poverty. Studies show that female adolescents are often in abusive relationships at the time of their conceiving. Nurse-Family Partnership (NFP) is a non-profit organization operating in the United States and the UK designed to serve the needs of low income young mothers who may have special needs in their first pregnancy. Each mother served is partnered with a registered nurse early in her pregnancy and receives ongoing nurse home visits that continue through her child's second birthday. NFP intervention has been associated with improvements in maternal health, child health, and economic security. Racial disparities There are significant racial imbalances in pregnancy and neonatal care systems. Midwifery guidance, treatment, and care have been related to better birth outcomes. Diminishing racial inequities in health is an increasingly large public health challenge in the United States. Despite the fact that average rates have decreased, data on neonatal mortality demonstrates that racial disparities have persisted and grown. 
The death rate for African American babies is nearly double that of white neonates. According to studies, congenital defects, SIDS, preterm birth, and low birth weight are all more common among African American babies. Midwifery care has been linked to better birth and postpartum outcomes for both mother and child. It caters to the needs of the woman and provides competent, sympathetic care, and is essential for maternal health improvement. The presence of a doula, or birth assistant, during labor and delivery, has also been associated with improved levels of satisfaction with medical birth care. Providers recognized their profession from a historical standpoint, a link to African origins, the diaspora, and prevailing African American struggles. Providers participated in both direct clinical experience and activist involvement. Advocacy efforts aimed to enhance the number of minority birth attendants and to promote the benefits of woman-centered birth care to neglected areas. Transgender people Transgender people have experienced significant advances in societal acceptance in recent years leaving many health professionals unprepared to provide quality care. A 2015 report suggests that "numbers of transgender individuals who are seeking family planning, fertility, and pregnancy services could certainly be quite large". Regardless of prior hormone replacement therapy treatments, the progression of pregnancy and birthing procedures for transgender people who carry pregnancies are typically the same as those of cisgender women. However, transgender people may be subjected to discrimination, which can include a variety of negative social, emotional, and medical experiences, as pregnancy is regarded as an exclusively female activity. According to a study by the American College of Obstetricians and Gynecologists, there is a lack of awareness, services, and medical assistance available to pregnant trans men. Culture In most cultures, pregnant women have a special status in society and receive particularly gentle care. At the same time, they are subject to expectations that may exert great psychological pressure, such as having to produce a son and heir. In many traditional societies, pregnancy must be preceded by marriage, on pain of ostracism of mother and (illegitimate) child. Overall, pregnancy is accompanied by numerous customs that are often subject to ethnological research, often rooted in traditional medicine or religion. The baby shower is an example of a modern custom. Contrary to common misconception, women historically in the United States were not expected to seclude themselves during pregnancy, as was popularized by Gone With the Wind. Pregnancy is an important topic in sociology of the family. The prospective child may preliminarily be placed into numerous social roles. The parents' relationship and the relation between parents and their surroundings are also affected. A belly cast may be made during pregnancy as a keepsake. Arts Images of pregnant women, especially small figurines, were made in traditional cultures in many places and periods, though it is rarely one of the most common types of image. These include ceramic figures from some Pre-Columbian cultures, and a few figures from most of the ancient Mediterranean cultures. Many of these seem to be connected with fertility. Identifying whether such figures are actually meant to show pregnancy is often a problem, as well as understanding their role in the culture concerned. 
Among the oldest surviving examples of the depiction of pregnancy are prehistoric figurines found across much of Eurasia and collectively known as Venus figurines. Some of these appear to be pregnant. Due to the important role of the Mother of God in Christianity, the Western visual arts have a long tradition of depictions of pregnancy, especially in the biblical scene of the Visitation, and devotional images called a Madonna del Parto. The unhappy scene usually called Diana and Callisto, showing the moment of discovery of Callisto's forbidden pregnancy, is sometimes painted from the Renaissance onwards. Gradually, portraits of pregnant women began to appear, with a particular fashion for "pregnancy portraits" in elite portraiture of the years around 1600. Pregnancy, and especially pregnancy of unmarried women, is also an important motif in literature. Notable examples include Thomas Hardy's 1891 novel Tess of the d'Urbervilles and Goethe's 1808 play Faust.
Biology and health sciences
Biology
null
1773278
https://en.wikipedia.org/wiki/Model%20of%20computation
Model of computation
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology. Categories Models of computation can be classified into three categories: sequential models, functional models, and concurrent models. Sequential models Sequential models include: Finite-state machines Post machines (Post–Turing machines and tag machines). Pushdown automata Register machines Random-access machines Turing machines Decision tree model Functional models Functional models include: Abstract rewriting systems Combinatory logic General recursive functions Lambda calculus Concurrent models Concurrent models include: Actor model Cellular automaton Interaction nets Kahn process networks Logic gates and digital circuits Petri nets Process calculus Synchronous Data Flow Some of these models have both deterministic and nondeterministic variants. Nondeterministic models correspond to limits of certain sequences of finite computers, but do not correspond to any subset of finite computers; they are used in the study of computational complexity of algorithms. Models differ in their expressive power; for example, each function that can be computed by a finite-state machine can also be computed by a Turing machine, but not vice versa. Uses In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of primitive operations allowed which have unit cost, or simply unit-cost operations. A commonly used example is the random-access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs from the above-mentioned Turing machine model.
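To make the distinction between these models concrete, the following is a minimal, illustrative Python sketch (not drawn from the article; the states, alphabet and transition table are invented for the example) of a deterministic finite-state machine, one of the sequential models listed above. Each transition is a unit-cost operation in the sense used in runtime analysis; a language such as balanced parentheses would need a more expressive model, such as a pushdown automaton or a Turing machine.

```python
# Minimal sketch of a deterministic finite-state machine (a sequential model).
# States, alphabet and transitions are illustrative, not from the article.

def make_dfa():
    start = "even"
    accepting = {"even"}
    # transition table: (state, symbol) -> next state
    delta = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    return start, accepting, delta

def accepts(word: str) -> bool:
    """Run the DFA; each step is a unit-cost operation in this model."""
    state, accepting, delta = make_dfa()
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

if __name__ == "__main__":
    print(accepts("1011"))  # False: three 1s
    print(accepts("1001"))  # True: an even number of 1s
```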
Mathematics
Computability theory
null
1774970
https://en.wikipedia.org/wiki/Multigraph
Multigraph
In mathematics, and more specifically in graph theory, a multigraph is a graph which is permitted to have multiple edges (also called parallel edges), that is, edges that have the same end nodes. Thus two vertices may be connected by more than one edge. There are two distinct notions of multiple edges: Edges without own identity: The identity of an edge is defined solely by the two nodes it connects. In this case, the term "multiple edges" means that the same edge can occur several times between these two nodes. Edges with own identity: Edges are primitive entities just like nodes. When multiple edges connect two nodes, these are different edges. A multigraph is different from a hypergraph, which is a graph in which an edge can connect any number of nodes, not just two. For some authors, the terms pseudograph and multigraph are synonymous. For others, a pseudograph is a multigraph that is permitted to have loops. Undirected multigraph (edges without own identity) A multigraph G is an ordered pair G := (V, E) with V a set of vertices or nodes, E a multiset of unordered pairs of vertices, called edges or lines. Undirected multigraph (edges with own identity) A multigraph G is an ordered triple G := (V, E, r) with V a set of vertices or nodes, E a set of edges or lines, r : E → {{x,y} : x, y ∈ V}, assigning to each edge an unordered pair of endpoint nodes. Some authors allow multigraphs to have loops, that is, an edge that connects a vertex to itself, while others call these pseudographs, reserving the term multigraph for the case with no loops. Directed multigraph (edges without own identity) A multidigraph is a directed graph which is permitted to have multiple arcs, i.e., arcs with the same source and target nodes. A multidigraph G is an ordered pair G := (V, A) with V a set of vertices or nodes, A a multiset of ordered pairs of vertices called directed edges, arcs or arrows. A mixed multigraph G := (V, E, A) may be defined in the same way as a mixed graph. Directed multigraph (edges with own identity) A multidigraph or quiver G is an ordered 4-tuple G := (V, A, s, t) with V a set of vertices or nodes, A a set of edges or lines, s : A → V, assigning to each edge its source node, and t : A → V, assigning to each edge its target node. This notion might be used to model the possible flight connections offered by an airline. In this case the multigraph would be a directed graph with pairs of directed parallel edges connecting cities to show that it is possible to fly both to and from these locations. In category theory a small category can be defined as a multidigraph (with edges having their own identity) equipped with an associative composition law and a distinguished self-loop at each vertex serving as the left and right identity for composition. For this reason, in category theory the term graph is standardly taken to mean "multidigraph", and the underlying multidigraph of a category is called its underlying digraph. Labeling Multigraphs and multidigraphs also support the notion of graph labeling, in a similar way. However there is no unity in terminology in this case. The definitions of labeled multigraphs and labeled multidigraphs are similar, and we define only the latter ones here. Definition 1: A labeled multidigraph is a labeled graph with labeled arcs. Formally: A labeled multidigraph G is a multigraph with labeled vertices and arcs. Formally it is an 8-tuple G = (Σ_V, Σ_A, V, A, s, t, ℓ_V, ℓ_A) where V is a set of vertices and A is a set of arcs. 
Σ_V and Σ_A are finite alphabets of the available vertex and arc labels, s : A → V and t : A → V are two maps indicating the source and target vertex of an arc, and ℓ_V : V → Σ_V and ℓ_A : A → Σ_A are two maps describing the labeling of the vertices and arcs. Definition 2: A labeled multidigraph is a labeled graph with multiple labeled arcs, i.e. arcs with the same end vertices and the same arc label (note that this notion of a labeled graph is different from the notion given by the article graph labeling).
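To make the "edges with own identity" convention concrete, here is a small illustrative Python sketch of a directed multigraph (quiver) modelling the airline example above; the city codes and flight numbers are invented. Each arc is a primitive entity carrying its own identity together with the values of the source map s and the target map t.

```python
# Sketch of a directed multigraph (quiver): arcs are primitive entities with
# their own identity, plus maps s (source) and t (target). Names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Arc:
    ident: str      # the arc's own identity, e.g. a flight number
    source: str     # s(arc)
    target: str     # t(arc)

V = {"LHR", "JFK", "CDG"}
A = {
    Arc("BA117", "LHR", "JFK"),
    Arc("BA112", "JFK", "LHR"),   # the paired return connection
    Arc("AA100", "LHR", "JFK"),   # a second, distinct arc between the same pair
}

def arcs_between(u: str, v: str):
    """All arcs from u to v; in a multigraph more than one may exist."""
    return [a for a in A if a.source == u and a.target == v]

print([a.ident for a in arcs_between("LHR", "JFK")])  # e.g. ['BA117', 'AA100']
```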
Mathematics
Graph theory
null
1776396
https://en.wikipedia.org/wiki/Ready-mix%20concrete
Ready-mix concrete
Ready-mix concrete (RMC) is concrete that is manufactured in a batch plant, according to each specific job requirement, then delivered to the job site "ready to use". There are two types with the first being the barrel truck or in–transit mixers. This type of truck delivers concrete in a plastic state to the site. The second is the volumetric concrete mixer. This delivers the ready mix in a dry state and then mixes the concrete on site. However, other sources divide the material into three types: Transit Mix, Central Mix or Shrink Mix concrete. Ready-mix concrete refers to concrete that is specifically manufactured for customers' construction projects, and supplied to the customer on site as a single product. It is a mixture of Portland or other cements, water and aggregates: sand, gravel, or crushed stone. All aggregates should be of a washed type material with limited amounts of fines or dirt and clay. An admixture is often added to improve workability of the concrete and/or increase setting time of concrete (using retarders) to factor in the time required for the transit mixer to reach the site. The global market size is disputed depending on the source. It was estimated at 650 billion dollars in 2019. However it was estimated at just under 500 billion dollars in 2018. History There is some dispute as to when the first ready-mix delivery was made and when the first factory was built. Some sources suggest as early as 1913 in Baltimore. By 1929 there were over 100 plants operating in the United States. The industry did not expand significantly until the 1960s, and has continued to grow since then. Design Batch plants combine a precise amount of gravel, sand, water and cement by weight (as per a mix design formulation for the grade of concrete recommended by the structural engineer or architect), allowing specialty concrete mixtures to be developed and implemented on construction sites. Ready-mix concrete is often used instead of other materials due to the cost and wide range of uses in building, particularly in large projects like high-rise buildings and bridges. It has a long life span when compared to other products of a similar use, like roadways. It has an average life span of 30 years under high traffic areas compared to the 10 to 12 year life of asphalt concrete with the same traffic. Ready-mixed concrete is used in construction projects where the construction site is not willing, or is unable, to mix concrete on site. Using ready-mixed concrete means product is delivered finished, on demand, in the specific quantity required, in the specific mix design required. For a small to medium project, the cost and time of hiring mixing equipment, labour, plus purchase and storage for the ingredients of concrete, added to environmental concerns (cement dust is an airborne health hazard) may simply be not worthwhile when compared to the cost of ready-mixed concrete, where the customer pays for what they use, and allows others do the work up to that point. For a large project, outsourcing concrete production to ready-mixed concrete suppliers means delegating the quality control and testing, material logistics and supply chain issues and mix design, to specialists who are already established for those tasks, trading off against introducing another contracted external supplier who needs to make a profit, and losing the control and immediacy of on-site mixing. Ready-mix concrete is bought and sold by volume – usually expressed in cubic meters (cubic yards in the US). 
Batching and mixing is done under controlled conditions. In the UK, ready-mixed concrete is specified either informally, by constituent weight or volume (1-2-4 or 1-3-6 being common mixes), or using the formal specification standards of the European standard EN 206+A1, which is supplemented in the UK by BS 8500. This allows the customer to specify what the concrete has to be able to withstand in terms of ground conditions, exposure, and strength, and allows the concrete manufacturer to design a mix that meets that requirement using the materials locally available to a batching plant. This is verified by laboratory testing, such as cube tests to verify compressive strength and flexural tests, and supplemented by field testing, such as slump tests done on site to verify plasticity of the mix. The performance of a concrete mix can be altered by use of admixtures. Admixtures can be used to reduce water requirements, entrain air into a mixture, improve surface durability, or even superplasticise concrete to make it self-levelling, as self-consolidating concrete. The use of admixtures requires precision in dosing and mix design, which is more difficult without the dosing/measuring equipment and laboratory backing of a batching plant, which means they are not easily used outside of ready-mixed concrete. Concrete has a limited lifespan between batching/mixing and curing. This means that ready-mixed concrete should be placed within 30 to 45 minutes of the batching process to hold slump and mix design specifications in the US, though in the UK, environmental and material factors, plus in-transit mixing, allow for up to two hours to elapse. Modern admixtures and water reducers can modify that time span to some degree. Ready-mixed concrete can be transported and placed on site using a number of methods. The most common and simplest is the chute fitted to the back of transit mixer trucks, which is suitable for placing concrete near locations where a truck can back in. Dumper trucks, crane hoppers, truck-mounted conveyors, and, in extremis, wheelbarrows can be used to place concrete from trucks where access is not direct. Some concrete mixes are suitable for pumping with a concrete pump. In 2011, there were 2,223 companies employing 72,925 workers that produced ready-mix concrete in the United States. Advantages of ready-mix concrete Materials are combined in a batch plant, and the hydration process begins at the moment water meets the cement, so the travel time from the plant to the site, and the time before the concrete is placed on-site, is critical over longer distances. Some sites are just too distant. Admixtures, retarders, and cement-like pulverized fly ash or ground granulated blast-furnace slag (GGBFS) can be used to slow the hydration process, allowing for longer transit and waiting time. Concrete is formable and pourable, but a steady supply is needed for large forms. If there is a supply interruption, and the concrete cannot be poured all at once, a cold joint may appear in the finished form. The biggest advantage is that concrete is produced under controlled conditions. Therefore, quality concrete is obtained, as a ready-mix concrete plant makes use of sophisticated equipment and consistent methods. There is strict control over the testing of materials, process parameters, and continuous monitoring of key practices during the manufacturing process. 
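As a rough, hypothetical illustration of how a mix specification translates into batch quantities by weight, the sketch below converts a target pour volume into approximate batching weights. It treats the nominal 1-2-4 cement:sand:aggregate proportions as weight ratios for simplicity, and the cement content per cubic metre and the water–cement ratio are placeholder assumptions, not figures taken from EN 206 or BS 8500.

```python
# Illustrative batching arithmetic for a nominal 1-2-4 mix.
# Cement content, water/cement ratio and the weight-ratio simplification are
# assumed placeholder values for the example only.

def batch_weights(volume_m3, ratio=(1, 2, 4), cement_per_m3_kg=320, wc_ratio=0.55):
    """Return approximate kg of cement, sand, aggregate and water for a pour."""
    c, s, a = ratio
    cement = cement_per_m3_kg * volume_m3
    sand = cement * (s / c)
    aggregate = cement * (a / c)
    water = cement * wc_ratio
    return {"cement": cement, "sand": sand, "aggregate": aggregate, "water": water}

# Example: a 6 m3 truckload
for material, kg in batch_weights(6.0).items():
    print(f"{material:>9}: {kg:8.0f} kg")
```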
The poor control over input materials, batching and mixing methods that can affect site-mixed concrete is avoided in ready-mix concrete production. Construction also proceeds faster because a ready-mix concrete plant runs continuously with mechanized operations. The output of a site-mix operation using an 8/12 mixer is 4 to 5 cubic metres per hour, compared with 30–60 cubic metres per hour from a ready-mix concrete plant. Better handling and proper mixing practice will help reduce the consumption of cement by 10–12%. The use of admixtures and other cementitious materials helps reduce the amount of cement required to make the desired grade of concrete. Less consumption of cement indirectly results in less environmental pollution. Ready-mix concrete manufacture is less dependent on human labor, so the chances of human error are reduced, as is the need for intensive labor. Cracking and shrinkage. Concrete shrinks as it cures. It can shrink noticeably over a 10-foot (3.05 m) length. This causes stress internally on the concrete and must be accounted for by the engineers and finishers placing the concrete, and may require the use of steel reinforcement or pre-stressed concrete elements where this is critical. Access roads and site access have to be able to carry the weight of the ready-mix truck plus load, which can be up to 32 tonnes for an eight-wheel 9 m3 truck. (Green concrete is approximately .) This problem can be overcome by utilizing so-called "mini mix" trucks which use smaller 4 m3 capacity mixers able to reach more weight-restricted sites. Even smaller mixers are used to allow a 7.5 tonne truck to hold approximately 1.25 m3, to reach restricted inner city areas with bans on larger trucks. Metered concrete An alternative to the centralized batch plant system is the volumetric mobile mixer. This is often referred to as on-site concrete, site mixed concrete or mobile mix concrete. This is a mobile, miniaturized version of the large stationary batch plant. They are used to provide ready mix concrete utilizing a continuous batching process or metered concrete system. The volumetric mobile mixer is a truck that holds sand, rock, cement, water, fiber, and sometimes admixtures and color, depending on how the unit is outfitted. These trucks mix or batch the ready mix on the job site. This type of truck can mix as much or as little concrete as needed. The on-site mixing eliminates the travel-time hydration that can cause transit-mixed concrete to become unusable. These trucks are as precise as the centralized batch plant system, since the trucks are calibrated and tested using the same ASTM test methods as all other ready-mix manufacturers. This is a hybrid approach between centralized batch plants and traditional on-site mixing. Each type of system has advantages and disadvantages, depending on the location, size of the job, and mix design set forth by the engineer. Transit mixed ready mix versus volumetric mixed ready mix A centralized concrete batching plant can serve a wide area. Site-mix trucks can serve an even larger area including very remote locations that standard trucks cannot. The batch plants are located in areas zoned for industrial use, while the delivery trucks can service residential districts or inner cities. Site-mix trucks have the same capabilities. Volumetric trucks often have a lower water demand during the batching process. 
This will produce a concrete that can be significantly stronger in compressive strength compared to the centralized batch plant for the same mix design using the ASTM C109 test method. Centralized batch systems are limited by the size of the fleet. It may take upwards of 10 minutes to batch and load out one truck depending on the plant size and type. They are unable to change mix designs in the middle of an individual batching process, but can quickly offer a greater range of mixes overall as a central yard has more stock capacity for different types of cement, aggregates, and admixtures than a single truck has room for on site. Volumetric mixers can seamlessly change all aspects of the mix design while still producing concrete, as long as the raw materials are on site. They can continuously mix quality concrete for an indefinite time while being continuously loaded with fresh materials. They can produce 1 yard of concrete in as little as 40 seconds depending on the mix design and batch plant size outfitted. Centralised batching, using the same supply of materials over a long period (a fixed plant will likely have a fixed set of suppliers in its locality), the same scales which can be calibrated by weighbridges, the same measuring equipment for admixtures, moisture etc., and often the same batching operator, can have tighter tolerances for mixes, use a centralised lab to design and verify dozens of mixes to different specifications across multiple jobs for that plant, and can therefore produce a very predictable, consistent result for major projects. Each plant will have a batching recipe book (or equivalent automated batching program) to batch and load any quantity of any mix design on demand. Centralized batching can scale quickly with less movement than on site mixers, using aggregate trucks, cement tankers and ground stocks to achieve up to 240 cubic metres an hour from a single plant. This allows consistent large-scale pours across a site quickly, as supply logistics for cement, water, and aggregate are fixed to a single point with greater storage capacity, and therefore easier to scale, and more tolerant of short supply interruptions. For small loads (orders under 10 yards) transit mixers typically return to their batch plant after each delivery. Volumetric trucks can go directly from job to job until a truck is emptied, reducing traffic and fuel consumption.
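The throughput comparison can be made concrete with a small back-of-the-envelope calculation; the truck capacity and round-trip cycle time below are illustrative assumptions, while the idea of a fixed per-truck load-out time follows the paragraph above.

```python
# Rough fleet-sizing arithmetic for a continuous pour from a central batch plant.
# Truck capacity and cycle time are illustrative assumptions, not industry figures.
from math import ceil

def trucks_needed(pour_rate_m3_per_h, truck_capacity_m3=8.0, cycle_time_min=60.0):
    """Transit mixers needed to sustain a placement rate, where cycle_time_min
    covers loading, travel, discharge and return."""
    deliveries_per_truck_per_h = 60.0 / cycle_time_min
    m3_per_truck_per_h = truck_capacity_m3 * deliveries_per_truck_per_h
    return ceil(pour_rate_m3_per_h / m3_per_truck_per_h)

print(trucks_needed(120))  # 15 trucks for a 120 m3/h pour under these assumptions
```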
Technology
Building materials
null
1776839
https://en.wikipedia.org/wiki/Sample%20size%20determination
Sample size determination
Sample size determination or estimation is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power. In complex studies, different sample sizes may be allocated, such as in stratified surveys or experimental designs with multiple treatment groups. In a census, data is sought for an entire population, hence the intended sample size is equal to the population. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group. Sample sizes may be chosen in several ways: using experience – small samples, though sometimes unavoidable, can result in wide confidence intervals and risk of errors in statistical hypothesis testing. using a target variance for an estimate to be derived from the sample eventually obtained, i.e., if a high precision is required (narrow confidence interval) this translates to a low target variance of the estimator. the use of a power target, i.e. the power of statistical test to be applied once the sample is collected. using a confidence level, i.e. the larger the required confidence level, the larger the sample size (given a constant precision requirement). Introduction Sample size determination is a crucial aspect of research methodology that plays a significant role in ensuring the reliability and validity of study findings. In order to influence the accuracy of estimates, the power of statistical tests, and the general robustness of the research findings, it entails carefully choosing the number of participants or data points to be included in a study. Consider the case where we are conducting a survey to determine the average satisfaction level of customers regarding a new product. To determine an appropriate sample size, we need to consider factors such as the desired level of confidence, margin of error, and variability in the responses. We might decide that we want a 95% confidence level, meaning we are 95% confident that the true average satisfaction level falls within the calculated range. We also decide on a margin of error, of ±3%, which indicates the acceptable range of difference between our sample estimate and the true population parameter. Additionally, we may have some idea of the expected variability in satisfaction levels based on previous data or assumptions. Importance Larger sample sizes generally lead to increased precision when estimating unknown parameters. For instance, to accurately determine the prevalence of pathogen infection in a specific species of fish, it is preferable to examine a sample of 200 fish rather than 100 fish. Several fundamental facts of mathematical statistics describe this phenomenon, including the law of large numbers and the central limit theorem. In some situations, the increase in precision for larger sample sizes is minimal, or even non-existent. This can result from the presence of systematic errors or strong dependence in the data, or if the data follows a heavy-tailed distribution, or because the data is strongly dependent or biased. Sample sizes may be evaluated by the quality of the resulting estimates, as follows. 
It is usually determined on the basis of the cost, time or convenience of data collection and the need for sufficient statistical power. For example, if a proportion is being estimated, one may wish to have the 95% confidence interval be less than 0.06 units wide. Alternatively, sample size may be assessed based on the power of a hypothesis test. For example, if we are comparing the support for a certain political candidate among women with the support for that candidate among men, we may wish to have 80% power to detect a difference in the support levels of 0.04 units. Estimation Estimation of a proportion A relatively simple situation is estimation of a proportion. It is a fundamental aspect of statistical analysis, particularly when gauging the prevalence of a specific characteristic within a population. For example, we may wish to estimate the proportion of residents in a community who are at least 65 years old. The estimator of a proportion is p̂ = X/n, where X is the number of 'positive' instances (e.g., the number of people out of the n sampled people who are at least 65 years old). When the observations are independent, this estimator has a (scaled) binomial distribution (and is also the sample mean of data from a Bernoulli distribution). The maximum variance of this distribution is 0.25, which occurs when the true parameter is p = 0.5. In practical applications, where the true parameter p is unknown, the maximum variance is often employed for sample size assessments. If a reasonable estimate for p is known, the quantity p̂(1 − p̂) may be used in place of 0.25. As the sample size n grows sufficiently large, the distribution of p̂ will be closely approximated by a normal distribution. Using this and the Wald method for the binomial distribution yields a confidence interval, with Z representing the standard Z-score for the desired confidence level (e.g., 1.96 for a 95% confidence interval), in the form: (p̂ − Z√(0.25/n), p̂ + Z√(0.25/n)). To determine an appropriate sample size n for estimating proportions, the equation Z√(0.25/n) = W/2 can be solved, where W represents the desired width of the confidence interval. This sample size formula is often applied with a conservative estimate of p (e.g., 0.5): solving for n yields the sample size n = Z²/W² in the case of using 0.5 as the most conservative estimate of the proportion. (Note: W/2 = margin of error.) Otherwise, with an estimate available for p, the equation would be Z√(p(1 − p)/n) = W/2, which yields n = 4Z²p(1 − p)/W². For example, in estimating the proportion of the U.S. population supporting a presidential candidate with a 95% confidence interval width of 2 percentage points (0.02), a sample size of (1.96)²/(0.02)² = 9604 is required. It is reasonable to use the 0.5 estimate for p in this case because the presidential races are often close to 50/50, and it is also prudent to use a conservative estimate. The margin of error in this case is 1 percentage point (half of 0.02). In practice, the formula p̂ ± Z√(0.25/n) is commonly used to form a 95% confidence interval for the true proportion. The equation can be solved for n, providing a minimum sample size needed to meet the desired margin of error. The foregoing is commonly simplified (taking Z ≈ 2): n = 4/W² = 1/B², where B is the error bound on the estimate, i.e., the estimate is usually given as within ± B. 
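The proportion formulas above are easy to check with a short script; the following is a sketch of the conservative (p = 0.5) calculation using only the Python standard library, and it reproduces the 9604 figure from the example.

```python
# Conservative sample size for estimating a proportion to within a given
# confidence-interval width W (margin of error = W/2).
from math import ceil
from statistics import NormalDist

def sample_size_proportion(width, confidence=0.95, p=0.5):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # 1.96 for 95% confidence
    return ceil(4 * z**2 * p * (1 - p) / width**2)

print(sample_size_proportion(0.02))   # 9604, matching the presidential-poll example
print(sample_size_proportion(0.06))   # 1068, for the 0.06-wide interval mentioned earlier
```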
For B = 10% one requires n = 100, for B = 5% one needs n = 400, for B = 3% the requirement approximates to n = 1000, while for B = 1% a sample size of n = 10000 is required. These numbers are quoted often in news reports of opinion polls and other sample surveys. However, the results reported may not be the exact value as numbers are preferably rounded up. Knowing that the value of n is the minimum number of sample points needed to acquire the desired result, the number of respondents then must lie on or above the minimum. Estimation of a mean Simply speaking, suppose we are trying to estimate the average time it takes for people to commute to work in a city. Instead of surveying the entire population, we can take a random sample of 100 individuals, record their commute times, and then calculate the mean (average) commute time for that sample. For example, person 1 takes 25 minutes, person 2 takes 30 minutes, ..., person 100 takes 20 minutes. Add up all the commute times and divide by the number of people in the sample (100 in this case). The result is the estimate of the mean commute time for the entire population. This method is practical when it is not feasible to measure everyone in the population, and it provides a reasonable approximation based on a representative sample. In a precisely mathematical way, when estimating the population mean using an independent and identically distributed (iid) sample of size n, where each data value has variance σ², the standard error of the sample mean is σ/√n. This expression describes quantitatively how the estimate becomes more precise as the sample size increases. Using the central limit theorem to justify approximating the sample mean with a normal distribution yields a confidence interval of the form (x̄ − Zσ/√n, x̄ + Zσ/√n), where Z is a standard Z-score for the desired level of confidence (1.96 for a 95% confidence interval). To determine the sample size n required for a confidence interval of width W, with W/2 as the margin of error on each side of the sample mean, the equation Zσ/√n = W/2 can be solved. This yields the sample size formula n = (2Zσ/W)² = 4Z²σ²/W². For instance, if estimating the effect of a drug on blood pressure with a 95% confidence interval that is six units wide, and the known standard deviation of blood pressure in the population is 15, the required sample size would be (2 × 1.96 × 15/6)² = 96.04, which would be rounded up to 97, since sample sizes must be integers and must meet or exceed the calculated minimum value. Understanding these calculations is essential for researchers designing studies to accurately estimate population means within a desired level of confidence. Required sample sizes for hypothesis tests One of the prevalent challenges faced by statisticians revolves around the task of calculating the sample size needed to attain a specified statistical power for a test, all while maintaining a pre-determined Type I error rate α, which signifies the level of significance in hypothesis testing. The required sample size can be estimated using pre-determined tables for certain values, by Mead's resource equation, or, more generally, by the cumulative distribution function: Tables The table shown on the right can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group that are of equal size, that is, the total number of individuals in the trial is twice that of the number given, and the desired significance level is 0.05. 
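The corresponding calculation for a mean can be scripted the same way; this sketch uses the figures from the blood-pressure example above and reproduces the rounded-up sample size of 97.

```python
# Sample size for estimating a mean with known sigma and a target CI width W.
from math import ceil
from statistics import NormalDist

def sample_size_mean(sigma, width, confidence=0.95):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # 1.96 for 95% confidence
    return ceil((2 * z * sigma / width) ** 2)

print(sample_size_mean(sigma=15, width=6))   # 97, as in the blood-pressure example
```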
The parameters used are: The desired statistical power of the trial, shown in the column to the left. Cohen's d (= effect size), which is the expected difference between the means of the target values between the experimental group and the control group, divided by the expected standard deviation. Mead's resource equation Mead's resource equation is often used for estimating sample sizes of laboratory animals, as well as in many other laboratory experiments. It may not be as accurate as using other methods in estimating sample size, but gives a hint of what is the appropriate sample size where parameters such as expected standard deviations or expected differences in values between groups are unknown or very hard to estimate. All the parameters in the equation are in fact the degrees of freedom of the number of their concepts, and hence, their numbers are subtracted by 1 before insertion into the equation. The equation is: E = N − B − T, where: N is the total number of individuals or units in the study (minus 1) B is the blocking component, representing environmental effects allowed for in the design (minus 1) T is the treatment component, corresponding to the number of treatment groups (including control group) being used, or the number of questions being asked (minus 1) E is the degrees of freedom of the error component and should be somewhere between 10 and 20. For example, if a study using laboratory animals is planned with four treatment groups (T=3), with eight animals per group, making 32 animals total (N=31), without any further stratification (B=0), then E would equal 28, which is above the cutoff of 20, indicating that the sample size may be a bit too large, and six animals per group might be more appropriate. Cumulative distribution function Let Xi, i = 1, 2, ..., n be independent observations taken from a normal distribution with unknown mean μ and known variance σ². Consider two hypotheses, a null hypothesis: H0: μ = 0, and an alternative hypothesis: Ha: μ = μ*, for some 'smallest significant difference' μ* > 0. This is the smallest value for which we care about observing a difference. Now, in order (1) to reject H0 with a probability of at least 1 − β when Ha is true (i.e. a power of 1 − β), and (2) to reject H0 with probability α when H0 is true, the following is necessary: If zα is the upper α percentage point of the standard normal distribution, then Pr(x̄ > zα σ/√n | H0 true) = α, and so 'Reject H0 if our sample average (x̄) is more than zα σ/√n' is a decision rule which satisfies (2). (This is a 1-tailed test.) In such a scenario, achieving this with a probability of at least 1 − β when the alternative hypothesis Ha is true becomes imperative. Here, the sample average originates from a normal distribution with a mean of μ*. Thus, the requirement is expressed as: Pr(x̄ > zα σ/√n | Ha true) ≥ 1 − β. Through careful manipulation, this can be shown (see Statistical power § Example) to happen when n ≥ ((zα + Φ⁻¹(1 − β)) / (μ*/σ))², where Φ is the normal cumulative distribution function. Stratified sample size With more complicated sampling techniques, such as stratified sampling, the sample can often be split up into sub-samples. Typically, if there are H such sub-samples (from H different strata) then each of them will have a sample size nh, h = 1, 2, ..., H. These nh must conform to the rule that n1 + n2 + ... + nH = n (i.e., that the total sample size is given by the sum of the sub-sample sizes). Selecting these nh optimally can be done in various ways, using (for example) Neyman's optimal allocation. 
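The closing power formula can likewise be turned into a small calculator; in the sketch below the effect size, significance level and power used in the example call are invented for illustration.

```python
# Sample size for a one-sided Z-test: reject H0: mu = 0 in favour of
# Ha: mu = mu_star with significance alpha and power 1 - beta.
from math import ceil
from statistics import NormalDist

def sample_size_power(mu_star, sigma, alpha=0.05, power=0.80):
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)        # upper alpha percentage point
    z_beta = nd.inv_cdf(power)             # Phi^{-1}(1 - beta)
    return ceil(((z_alpha + z_beta) / (mu_star / sigma)) ** 2)

# Illustrative figures: detect a shift of 0.5 sigma with 80% power at alpha = 0.05
print(sample_size_power(mu_star=0.5, sigma=1.0))   # 25 under these assumptions
```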
There are many reasons to use stratified sampling: to decrease variances of sample estimates, to use partly non-random methods, or to study strata individually. A useful, partly non-random method would be to sample individuals where easily accessible, but, where not, sample clusters to save travel costs. In general, for H strata, a weighted sample mean is x̄_w = Σ_h W_h x̄_h, with Var(x̄_w) = Σ_h W_h² Var(x̄_h). The weights, W_h, frequently, but not always, represent the proportions of the population elements in the strata, and W_h = N_h/N. For a fixed sample size, that is n = Σ_h n_h, Var(x̄_w) = Σ_h W_h² S_h² (1/n_h − 1/N_h), which can be made a minimum if the sampling rate within each stratum is made proportional to the standard deviation within each stratum: n_h/N_h = k S_h, where S_h is the standard deviation within stratum h and k is a constant such that Σ_h n_h = n. An "optimum allocation" is reached when the sampling rates within the strata are made directly proportional to the standard deviations within the strata and inversely proportional to the square root of the sampling cost per element within the strata, C_h: n_h/N_h = K S_h/√C_h, where K is a constant such that Σ_h n_h = n, or, more generally, when n_h is proportional to W_h S_h/√C_h. Qualitative research Qualitative research approaches sample size determination with a distinctive methodology that diverges from quantitative methods. Rather than relying on predetermined formulas or statistical calculations, it involves a subjective and iterative judgment throughout the research process. In qualitative studies, researchers often adopt a subjective stance, making determinations as the study unfolds; sample size is generally a subjective judgment, taken as the research proceeds. One common approach is to continually include additional participants or materials until a point of "saturation" is reached. Saturation occurs when new participants or data cease to provide fresh insights, indicating that the study has adequately captured the diversity of perspectives or experiences within the chosen sample. The number needed to reach saturation has been investigated empirically. Unlike quantitative research, qualitative studies face a scarcity of reliable guidance regarding sample size estimation prior to beginning the research. When conducting in-depth interviews with cancer survivors, for example, qualitative researchers may use data saturation to determine the appropriate sample size. If, over a number of interviews, no fresh themes or insights show up, saturation has been reached and more interviews might not add much to our knowledge of the survivor's experience. Thus, rather than following a preset statistical formula, the concept of attaining saturation serves as a dynamic guide for determining sample size in qualitative research. There is a paucity of reliable guidance on estimating sample sizes before starting the research, with a range of suggestions given. In an effort to introduce some structure to the sample size determination process in qualitative research, a tool analogous to quantitative power calculations has been proposed. This tool, based on the negative binomial distribution, is particularly tailored for thematic analysis.
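A minimal sketch of the allocation rules above: the strata weights, standard deviations and per-element costs in the example call are invented. With equal costs the function reduces to Neyman allocation (n_h proportional to W_h S_h); with costs supplied it follows the optimum allocation n_h proportional to W_h S_h/√C_h.

```python
# Allocate a fixed total sample size n across H strata.
# Neyman allocation (equal costs) and cost-adjusted optimum allocation.

def allocate(n, weights, sds, costs=None):
    """weights W_h sum to 1; sds are within-stratum standard deviations S_h;
    costs C_h are per-element sampling costs (all equal if omitted)."""
    if costs is None:
        costs = [1.0] * len(weights)
    scores = [w * s / c ** 0.5 for w, s, c in zip(weights, sds, costs)]
    total = sum(scores)
    return [round(n * sc / total) for sc in scores]

# Invented example: three strata of relative size 0.5, 0.3, 0.2
print(allocate(1000, [0.5, 0.3, 0.2], sds=[10, 20, 40]))                     # Neyman
print(allocate(1000, [0.5, 0.3, 0.2], sds=[10, 20, 40], costs=[1, 4, 9]))   # with costs
```

Note that simple rounding may leave the allocations one unit off the total n; a production version would repair the rounding, but that detail is omitted from the sketch.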
Mathematics
Statistics
null
14193930
https://en.wikipedia.org/wiki/Animal%20feed
Animal feed
Animal feed is food given to domestic animals, especially livestock, in the course of animal husbandry. There are two basic types: fodder and forage. Used alone, the word feed more often refers to fodder. Animal feed is an important input to animal agriculture, and is frequently the main cost of the raising or keeping of animals. Farms typically try to reduce the cost of this feed by growing their own, grazing animals, or supplementing expensive feeds with substitutes, such as food waste like spent grain from beer brewing. Animal wellbeing is highly dependent on feed that provides well-balanced nutrition. Some modern agricultural practices, such as fattening cows on grains or in feed lots, have detrimental effects on the environment and animals. For example, increasing the amount of corn or other grain in feed for cows makes their microbiomes more acidic, weakening their immune systems and making cows a more likely vector for E. coli, while other feeding practices can improve animal impacts. For example, feeding cows certain kinds of seaweed reduces their production of methane, reducing the greenhouse gases from meat production. When an environmental crisis strikes farmers or herders, such as a drought or extreme weather driven by climate change, farmers often have to shift to more expensive manufactured animal feed, which can negatively affect their economic viability. For example, a 2017 drought in Senegal reduced the availability of grazing lands, leading to skyrocketing demand and prices for manufactured animal feed and causing farmers to sell large portions of their herds. Additionally, agriculture for producing animal feed puts pressure on land use: feed crops need land that otherwise might be used for human food and can be one of the driving factors for deforestation, soil degradation and climate change. Fodder "Fodder" refers particularly to foods or forages given to the animals (including plants cut and carried to them), rather than that which they forage for themselves. It includes hay, straw, silage, compressed and pelleted feeds, oils and mixed rations, and sprouted grains and legumes. Grass and crop residues are the most important sources of animal feed globally. Grains account for 11% of the total dry matter consumed by livestock at the global level, and oilseed crop by-products such as soybean cakes account for 5%. The amount of grain used to produce the same unit of meat varies substantially between species and production systems. According to the FAO, ruminants require an average of 2.8 kg of grain to produce 1 kg of meat while monogastrics require 3.2 kg. These figures vary from 0.1 for extensive ruminant systems to 9.4 in beef feedlots, and from 0.1 in backyard chicken production to 4 in industrial pig production. Farmed fish can also be fed on grain and use even less than poultry. The two most important feed grains are maize and soybean, and the United States is by far the largest exporter of both, averaging about half of the global maize trade and 40% of the global soya trade in the years leading up to the 2012 drought. Other feed grains include wheat, oats, barley, and rice, among many others. Traditional sources of animal feed include household food scraps and the byproducts of food processing industries such as milling and brewing. Material remaining from the milling of oil crops like peanuts, soy, and corn is an important source of fodder. Scraps fed to pigs are called slop, and those fed to chickens are called chicken scratch.
Brewer's spent grain is a byproduct of beer making that is widely used as animal feed. Compound feed is fodder that is blended from various raw materials and additives. These blends are formulated according to the specific requirements of the target animal. They are manufactured by feed compounders as meal, pellets or crumbles. The main ingredients used in commercially prepared feed are the feed grains, which include corn, soybeans, sorghum, oats, and barley. Compound feed may also include premixes, which may also be sold separately. Premixes are composed of microingredients such as vitamins, minerals, chemical preservatives, antibiotics, fermentation products, and other ingredients that are purchased from premix companies, usually in sacked form, for blending into commercial rations. Because of the availability of these products, farmers who use their own grain can formulate their own rations and be assured that their animals are getting the recommended levels of minerals and vitamins, although they are still subject to the Veterinary Feed Directive. According to the American Feed Industry Association, as much as $20 billion worth of feed ingredients are purchased each year. These products range from grain mixes to orange rinds and beet pulps. The feed industry is one of the most competitive businesses in the agricultural sector and is by far the largest purchaser of U.S. corn, feed grains, and soybean meal. Tens of thousands of farmers with feed mills on their own farms are able to compete with huge conglomerates with national distribution. Feed crops generated $23.2 billion in cash receipts on U.S. farms in 2001. At the same time, farmers spent a total of $24.5 billion on feed that year. With progressing climate change and recurring droughts, extensive rangeland agriculture increasingly suffers from forage shortages. Innovative approaches to substituting forage include the harvesting and processing of shrubs into animal feed. This has been extensively researched and applied in Namibia, using waste biomass resulting from woody encroachment. As of 2011, around 734.5 million tons of feed were produced annually around the world. History The US Animal Drug Availability Act of 1996, passed during the Clinton era, was the first attempt in that country to regulate the use of medicated feed. In 1997, in response to outbreaks of bovine spongiform encephalopathy, commonly known as mad cow disease, the United States and Canada banned a range of animal tissues from cattle feed. Feed bans in the United States (2009) and Canada (2007) expanded on this, prohibiting the use of potentially infectious tissue in all animal and pet food and fertilizers. Forage Manufacture Nutrition In agriculture today, the nutritional needs of farm animals are well understood and may be satisfied through natural forage and fodder alone, or augmented by direct supplementation of nutrients in concentrated, controlled form. The nutritional quality of feed is influenced not only by the nutrient content, but also by many other factors such as feed presentation, hygiene, digestibility, and effect on intestinal health. Feed additives provide a mechanism through which nutrient deficiencies can be resolved, improving the animals' rate of growth, health, and well-being. Many farm animals have a diet largely consisting of grain-based ingredients because of the higher costs of quality feed. Major ingredients Chelates Insects Soy By animal Bird feed Cat food Cattle feeding Dog food Equine nutrition Fish feed Pet food Pig farming Poultry feed Sheep husbandry
Technology
Animal husbandry
null
1166642
https://en.wikipedia.org/wiki/Fletching
Fletching
Fletching is the fin-shaped aerodynamic stabilization device attached to arrows, bolts, darts, or javelins, and is typically made from light, semi-flexible materials such as feathers or bark. Each such fin is a fletch, also known as a flight or feather. A fletcher is a person who attaches fletchings to the shafts of arrows; fletchers were traditionally associated with the Worshipful Company of Fletchers, a guild in the City of London. The word is related to the French word flèche, meaning 'arrow', via a shared Old Frankish root. Description As a noun, fletching refers collectively to the fins or vanes, each of which individually is known as a fletch. Traditionally, the fletching consists of three matched half-feathers attached near the back of the arrow or shaft of the dart, equally spaced at 120° intervals around its circumference. Four fletchings have also been used. In English archery, the male feather, from a cock, is used on the outside of the arrow, while the other two stabilizing feathers are from a female, or hen. Traditional archery lore about feather curvature is that a right-handed archer should shoot right-wing feathers fletched in a right-handed helical, and a left-handed archer should use the opposite. Slow-motion cameras show that the arrow does not begin to spin until it is well past the riser (centre section of the bow), and the most important point is to have consistency in fletching. When shooting off a bow with a riser shelf, feathers are a wiser choice than plastic vanes, since the feathers will compress and flatten as the arrow comes off the bow. On compound bows, feathers may be a hindrance, and plastic vanes are a better solution. At the high speeds coming off a compound bow, plastic vanes with no curvature still allow the arrow to fly straight without tumbling. Also, noise is increased with feathers on these higher-powered bows, which can be a problem for hunters. Today, modern plastics may be used instead. Fletches were traditionally attached with glue and silk thread, but with modern glue, thread or tape this is no longer necessary, unless the arrow is a reproduction of a historical arrow. The fletching is used to stabilize the arrow aerodynamically. Feather fletches impart a natural spin to an arrow due to the rough and smooth sides of a feather and its natural curve, determined by which wing the feather came from. Vanes need to be placed at a slight angle (called an offset fletch), or set into a twist (called a helical fletch), to create the same effect, but all are there to impart stability to the projectile and to ensure that the projectile does not tumble during flight. More generally, "fletching" can refer to any structures added to a projectile to aerodynamically stabilize its flight, many of which resemble arrows in form and function. For instance, the feathers at the butt end of a dart (of the type cast using an atlatl) are very similar in purpose and construction to those used in arrows. Most of the techniques of fletching were likely adapted from earlier dart-making techniques. The fins used to stabilize rockets work in a similar manner.
Technology
Archery
null
1167316
https://en.wikipedia.org/wiki/Vienna%20U-Bahn
Vienna U-Bahn
The Vienna U-Bahn, where U-Bahn is an abbreviation of the German word Untergrundbahn ("underground railway"), is a rapid transit system serving Vienna, Austria. The five-line network serves 109 stations. 459.8 million passengers rode the U-Bahn in 2019. The modern-day U-Bahn opened on 25 February 1978, after test operations that began on 8 May 1976. Parts of two of the lines, designated U4 and U6, date back to the Stadtbahn ("city railway") system, which opened in 1898. Parts of the U2 and U6 lines began as subway tunnels built to accommodate earlier tram lines. Only the U1 and U3 were built wholly as new subway lines. Lines are designated by a number and the prefix "U" (for U-Bahn) and identified on station signage and related literature by a colour. There are five lines: U1, U2, U3, U4 and U6. Since the late 1960s there have been numerous suggestions of routings for a line U5, but all of these projects were shelved until the construction of a new U5 was announced in early 2014. Stations are often named after streets, public spaces, or districts, and in some special cases after prominent buildings at or near the station. The policy of the Wiener Linien, however, states that they prefer not to name stations after buildings. Ticketing for the network is integrated under the Verkehrsverbund Ost-Region (VOR) along with other means of public transport in Vienna, including trams and buses. Local tickets are valid on S-Bahn suburban rail services and other train services, but those are operated by the state railway operator, ÖBB. Tickets are not valid on bus services operated by Vienna Airport Lines or on the City Airport Train express train. U-Bahn network With the September 2017 opening of the five-station extension of the U1 line, the five-line U-Bahn network serves 109 stations. Further extensions of the Vienna U-Bahn are scheduled to be completed between 2026 and 2032, finally creating the missing line U5. Upon completion of the U5 and U2 projects, there will then be a network with 116 stations. Some plans have been proposed for the system beyond 2032, when the U2/U5 project is completed, although such plans are currently unfunded. U-Bahn services run between 05:00 and around 01:00 at intervals of between two and five minutes during the day and up to eight minutes after 20:00. Since 4 September 2010, there has been 24-hour service operating at 15-minute intervals on Friday and Saturday evenings, and on evenings prior to a public holiday. The 24-hour U-Bahn is supplemented on these nights by the Vienna NightLine bus service. Stations Map Metropolitan railway and underground trams The stretch between Heiligenstadt and Hütteldorf, covering the modern-day U4, used to be part of the metropolitan railway (Stadtbahn). The stretch between Längenfeldgasse and Spittelau, covering most of the U6, was also part of it, although Spittelau was not yet a station. The stretch between Karlsplatz and Schottenring (covering some of the U2 and, in the future, a section of the U5) was part of an underground tram network (U-Straßenbahn). History and projected expansions Planning for an underground railway in Vienna can be traced back to the 1840s. Since then, there have been numerous plans and concessions to build such a project, making Vienna the city with the most subway planning. The concession request of the engineer Heinrich Sichrowsky dates from 1844, with the idea of a pneumatic, or atmospheric, railway based on the system of Medhurst and Clegg.
The trains would have been advanced pneumatically by stationary steam-engine air pumps. Sichrowsky's route would have led from Lobkowitzplatz, below the Vienna Glacis, and along the Wien River to Hütteldorf. Although such trains had been built in London and Paris, the project found no investors in Vienna for its stock company, so the idea was abandoned. The connecting railway project of Julius Pollak (1849) was also conceived as an atmospheric system. Sichrowsky's request was the starting point for a series of plans that were mostly not approved and could not be implemented. For example, in 1858 the city planner Ludwig Zettl proposed to cover over the former moat instead of filling it in, and then to run a tram line in this enclosed ditch, which would bypass the city. This would have created a connection between the central station and the market halls, while at the same time the gas-lit tunnels were to serve as warehouses for food. By 1873, at least 25 plans for a municipal railway system had been put forward, with only the Verbindungsbahn, which already appeared in the much larger overall plan by Carl Ritter von Ghega in his project for Vienna's urban expansion of 1858, being later implemented as part of the mainline railway network. Ghega had already worked out a belt railway project along the line wall in 1845. The first plans for a subway in deep-level tunnels, by Emil Winkler, date back to 1873, with planning proposals that were also based on the first systematic traffic census in Vienna. Another wave of public transport projects was developed as the ring road neared completion. The proposal of the British engineers James Bunton and Joseph Fogerty won out: their plans, approved in 1881, included trains running in tunnels, in open cuts, and on elevated tracks. In 1883, Siemens & Halske proposed an "electric secondary railway", a small-profile rail system with three lines. The project failed due to the city council's concern that inner-city business life could be disrupted, especially since the project, for the first time ever, included a tunnel under the city centre. The first system to be constructed was a four-line Stadtbahn railway network (which had been planned to have three main and three local lines) using steam trains. Ground was broken in 1892, and the system was opened in stages between 11 May 1898 and 6 August 1901. At Hütteldorf, the Stadtbahn connected to railway service to the west, and at Heiligenstadt, to railway service on the Franz Josef Line, which then ran northwestwards within the Austro-Hungarian Empire to Eger. Some of the Jugendstil stations for this system designed by Otto Wagner are still in use. However, the Stadtbahn proved inadequate for mass transport and was less successful than the tramway. Starting in 1910, plans were considered for an underground system, but were interrupted by the First World War, which also necessitated closing the Stadtbahn to civilian use. After the war, the economic situation of a smaller and poorer country ruled out continuing with the plan. However, starting on 26 May 1924 the Stadtbahn was electrified, something that many had called for before the war, and from autumn 1925 it was integrated with the tramway rather than the railways. The frequency of trains tripled. Plans for a U-Bahn dating to 1912–14 were revived and discussions took place in 1929, but the Great Depression again necessitated abandoning planning.
Both in 1937 and after the Anschluß, when Vienna became the largest city by surface area in Nazi Germany, ambitious plans for a U-Bahn, and a new central railway station, were discussed. Test tunnelling took place, but these plans, too, had to be shelved when the Second World War broke out. Severe war damage caused the Stadtbahn system to be suspended in some areas until 27 May 1945. The redevelopment of stations took until the 1950s. Meanwhile, Vienna was occupied by the four allied powers until 1955, and in 1946 had returned three quarters of the pre-war expanded Greater Vienna to the state of Lower Austria. Two proposals for U-Bahn systems were nonetheless presented, in 1953 and 1954. Increasing car traffic led to cutbacks in the S-Bahn network that were partially made up for by buses. The U-Bahn issue was also politicised: in the 1954 and 1959 city council elections, the conservative Austrian People's Party championed construction of a U-Bahn, but the more powerful Social Democratic Party of Austria campaigned for putting housing first. The city council repeatedly rejected the U-Bahn idea in the late 1950s and early 1960s. Extensions of the Stadtbahn system had always been discussed as an alternative to building a new U-Bahn. But it was not until the late 1960s, when the Stadtbahn and the Schnellbahn were no longer able to adequately serve the ever-increasing public traffic, that the decision to build a new network was taken. On 26 January 1968, the city council voted to begin construction of a basic network (Grundnetz). Construction began on 3 November 1969 on and under Karlsplatz, where three lines of the basic network were to meet, and where central control of the U-Bahn was located. Test operation began on 8 May 1976 on line U4, and the first newly constructed (underground) stretch of line opened on 25 February 1978 (five stations on U1 between Reumannplatz and Karlsplatz). The construction of the Vienna U-Bahn network can be divided into several stages: Initial construction (1969–1982): Basic network (Grundnetz) First, the basic network (Grundnetz) was chosen from the various network designs. During 1967, plans for the U2 were radically reduced and the U3 completely deleted, and the approved basic network was described as a 'closer basic network'. This closer basic network, consisting of the U1, U2 and U4 lines, included: New route between Reumannplatz and Praterstern U2 between Karlsplatz and Schottenring U4 between Hütteldorf and Heiligenstadt, consisting almost entirely of modification and adaptation of the existing Stadtbahn line Construction began on 3 November 1969. On 25 February 1978, the first Vienna U-Bahn route between Karlsplatz and Reumannplatz, the U1, went into operation. With twelve partial commissionings, the Vienna U-Bahn basic network was completed on 3 September 1982. Missing U5 Placeholder line numbers were awarded during initial planning of the Vienna subway network between 1966 and 1973. The designation U5 was used in initial and later plans, but ultimately none of the segments with its numeration were approved for construction. In early expansion variant plans, the U5 would have run between Meidlinger Hauptstraße and St. Marx using the already partially-tunneled southern belt route. It later referred to the current branch of the line U2 from Schottenring to the stadium, which was planned to connect to a new segment to Hernals. 
Currently, there is no U5 line; today's U2 line consists of parts of the previously planned U2 and U5, which are connected by an arc between the stations Rathaus and Schottentor (this was originally planned only as an operating track and is still the narrowest curve in the Vienna subway network). The designation U3 was for a long time a gap in the network, but preliminary work for it was already carried out during construction of the basic network. Thus, the entire tunnel tube of the U3 between Naglergasse/Graben and Stubentor was completed during construction of the U1 (at Stephansplatz), in order to avoid further excavation work in the area of the cathedral. Since 2003, several plans and internal working papers of the City of Vienna have again included long-term provisions for a U5 line, but only in early 2014 were concrete efforts again made to actually realize the line. Finally, in March 2014, it was announced that the U5 line would be constructed in several stages of development as part of a U2/U5 line cross. Starting at Karlsplatz, the new line will use the existing U2 section, with a new section to be built from Rathaus station northwards. In the first expansion step, targeted for 2023, the line will for the time being run only one stop further, to Frankhplatz in the area of the old AKH. Further construction into the 17th district is planned; however, the construction costs must first be negotiated with the federal government (see the fifth stage of the subway network). The current U2 will then continue southwards in newly built tunnels: from Rathaus, it will connect with the U3 at Neubaugasse and the U4 at Pilgramgasse, continuing further south and connecting with the S-Bahn network at Matzleinsdorfer Platz. The financial resources for the construction come from the previously planned, but abandoned, southern extension of the U2 towards Gudrunstraße. The already approved expenditures by the federal government have not expired and could therefore be spent on these revised expansion plans. Proposed U7 Some designs also provided for a line U7, which would have connected the Floridsdorf and Donaustadt districts east of the Danube, running via Floridsdorf station, Kagran and Aspern. However, due to insufficient urbanization, this project was not considered worthwhile and was never pursued, as almost the same number of people could be carried by a much cheaper tram line: the replacement for tram line 26 east of Wagramer Straße, which runs six stops further on its own right of way to Ziegelhofstraße and continues via the Gewerbepark Stadlau to the Hausfeldstraße subway station on the northern edge of Aspern. Second expansion phase (1982–2000): Lines U3 and U6 The second phase involved the expansion of the U3 and U6 lines. The groundbreaking ceremony for this phase took place on 7 September 1983 on Pottendorfer Straße at the Philadelphiabrücke, and after six years the central section of the U6 between Philadelphiabrücke and Heiligenstadt/Friedensbrücke went into operation. After completion of the basic network, the Vienna subway system was thus extended in 1989 by the line U6, with the route Heiligenstadt–Philadelphiabrücke. For this, the belt line, the last remaining line of the light rail, had been modernized and converted to right-hand running. In order to preserve the valuable building fabric, the line was not rebuilt for operation with the underground railcars of the other lines; tram or metro-like trains with overhead power lines were used.
The northern endpoint of the line was now only Heiligenstadt; the alternative northern terminus of the last light rail line, Friedensbrücke (U4), was no longer served. In 1995 the first extension of this line to the south followed: from Philadelphiabrücke (now Meidling station) to Siebenhirten, including the elevated railway line of the former express tram line 64. In 1996, the U6 was extended in the north to Floridsdorf, and the previous terminus Heiligenstadt (U4) was no longer served. The two remaining, abandoned links of the former light rail are, like much of the U6, listed structures and are now partly used as a bike path. In 1991, the completely newly built line U3 between Erdberg and Volkstheater was opened, which, after the U1, was the second line to cross the first district. The western terminus, Ottakring, was reached in 1998, and since 2000 the southeastern end of the U3 has been at Simmering station. The total length of the network increased with these construction measures of the second stage. Third expansion phase (2001–2010): The first extensions of U1 and U2 In 1996, a new U-Bahn contract, known as the "30 billion package", was agreed. For the first time in Europe, a U-Bahn project had to undergo a costly and lengthy environmental impact assessment, owing to the length of the U2 extension. This expansion phase involved: U1 extension to Leopoldau On 19 October 2001, the groundbreaking ceremony for the extension of the U1 was held, for which the two districts had been waiting for 20 years. After five years of construction, the extension of the U1 was opened on 2 September 2006. U2 extension from Schottenring to Stadium On 12 June 2003, the groundbreaking ceremony took place outside the Stadion (stadium). Because of the 2008 European Football Championships in Austria, there was enormous pressure to complete the construction on time. The Wiener Linien met the deadline, and on 10 May 2008 the U2 extension to the stadium was opened. U2 extension from Stadium to Aspern On 2 October 2010, a further six stations were opened, taking the U2 across the Danube via the Donaustadtbrücke to Aspernstraße in the twenty-second district (Donaustadt). An additional three-station extension of the U2 to Aspern Seestadt was officially opened on 5 October 2013. Fourth expansion phase (since 2010): Further extension of the Vienna U-Bahn Planning for a fourth U-Bahn expansion phase began in 2001 and concrete ideas were put forth in the 2003 Transport Master Plan. Plans published in 2007 provided for the following extensions: The extended U2 from Aspernstraße to Seestadt Aspern (then time horizon 2013) The extended U1 from Reumannplatz to Rothneusiedl (then time horizon 2015) The extended U2 from Karlsplatz to Gudrunstraße (then time horizon 2019) 2012 package In March 2012, it was officially announced that the southern branch of the U1 would instead be extended to Oberlaa and not to the originally planned Rothneusiedl. This was achieved by building along the pre-existing route of tram line 67. The change to the original plans was thought to be due to cost issues or the incomplete development of the area surrounding Rothneusiedl. This extension was ultimately opened to the public on 2 September 2017, thereby expanding the Vienna metro network by five stations. In the area of Alaudagasse station, preparations for a future line bifurcation were made, should further development in Rothneusiedl warrant a branch line there.
2014 package The originally planned southern extension of the U2 to Gudrunstraße was indefinitely delayed for financial reasons and because the projected need is no longer there. With the budgeted funds, the so-called U2/U5 line cross will be created instead. The line U2 coming from Seestadt and Schottentor will receive a new southern branch, leading to the S-Bahn station Matzleinsdorfer Platz. The remaining route of the U2 between Karlsplatz and Universitätsstraße will be taken over by a newly created U5 line, which will be supplemented by the station Frankhplatz (Altes AKH) for the time being. This line is also to operate fully automatically, as is currently the case on the Nuremberg U-Bahn; the U5 will thus be Vienna's first driverless U-Bahn line. Construction of the resulting line cross is scheduled to begin in 2018, with completion planned for 2024 (U5) and 2026 (U2) respectively. Fifth expansion phase: Extension of U2 and U5 It is planned to extend the U2 line from Matzleinsdorfer Platz to Wienerberg and to extend the U5 from Frankhplatz to Hernals. One further possibility is to build a second southern branch of the U1, which would terminate in Rothneusiedl. Further expansion options Other possible expansion options are: Expanding the U5 beyond Hernals to Dornbach Expanding the U5 beyond Karlsplatz as was planned in the 4th expansion phase for the U2 Timeline Rolling stock The Vienna U-Bahn has three types of rolling stock, as well as permanent way equipment. The U1, U2, U3, and U4 have two types of rolling stock: the older U/U1/U2 type (introduced in 1972) and the newer V type (introduced in 2002). The U6 has one class of train, the T/T1 type (introduced in 1993), the older E6/C6 having been retired in 2008 and now mostly operating in Utrecht in the Netherlands and Kraków in Poland, with a single set being preserved at Vienna's tramway museum ("Remise"). U/U11/U2 class The first cars of the type U, developed by Simmering-Graz-Pauker (SGP), were delivered in 1972. The smallest unit is a permanently coupled twin railcar made up of two motor cars; a train is made up of three such double cars. Until 2008, shorter trains with two double cars were used during off-peak times or on the U2 line. Technically, the cars are very similar to the Munich and Nuremberg subway trains. However, there are significant differences in the award-winning car design. By 1982, a total of 135 Type U double railcars had been delivered; they are now retired. From 1987, SGP supplied a second generation, the type U1 (later referred to as U11), which outwardly looks like its predecessor. The technical equipment was further developed and includes water-cooled three-phase motors, brakes with energy recovery, and modernized emergency braking and safety equipment. In the years 2000 to 2010, trains of the later series of the type U were rebuilt and equipped with new three-phase motors, which is expected to extend their life by another 20 years. The converted trains are called Type U2. These vehicles operate on the lines U2 and U3. The interior of a car consists of eight pairs of vis-à-vis seats in the middle section, nine seats at the cab-less end and two pairs of seats facing each other at the opposite end of the car. In 2006, LED displays replaced the original illuminated indicators inside and outside the U1 and U2 type cars. In addition, the trains will gradually be retrofitted with plastic seats, video surveillance and warning lights to signal the door-closing operation. No such conversions are being made to the Type U, because those vehicles are being successively withdrawn from service.
An individual railcar has 49 seats and 91 standing places; a train consisting of three double railcars therefore has 294 seats and 546 standing places. The design of the "Silver Arrow" cars comes from the railway designer Johann Benda. V-Cars: Newer Generation In the late 1990s, a consortium of Siemens, ELIN and Adtranz developed a new train called Type V or "V-Car". It is a continuous, permanently coupled six-car train consisting of two non-motorized control cars and four motorized intermediate cars; this corresponds to the length of three double cars of the Ux type family. After a prototype had been in use, mostly on line U3, from December 2000, 25 sets were purchased in June 2002 and a further 15 trains of this type in December 2007. The first of these sets were delivered from February 2005 and, after several delays, received their operating license in mid-August 2006. At the end of September 2009, another 20 vehicles were ordered. In contrast to the prototype, the interiors of the production cars were adapted to the new standard, with gray instead of white sidewalls and red plastic seats instead of the originally installed fabric seats. The newer Type V trains also feature yellow instead of gray-red handrails, improved interior displays and warning lights to signal the door-closing operation. A car consists of eight pairs of vis-à-vis seats in the middle section and six seats each at the car transitions. At the beginning and end of the trains there are multipurpose compartments with four folding seats each and automatically extending ramps at each station to close the platform gap. They are the first Vienna subway cars to have air conditioning and are factory-equipped with video surveillance. To keep station dwell times short and to avoid doors being blocked by passengers, the doors have only sensitive sensor edges as anti-trap protection, instead of light barriers. An individually opened door therefore only closes again as part of a central closing operation. All entrances can also be opened centrally from the driver's seat. The trains are equipped with extensive safety technology, such as fire detectors in the roof areas, temperature sensors and dry extinguishing pipes on the undercarriage. Smoke alarms or temperature exceedances are immediately reported to the driver. The Type V cars have 260 seats and 618 standing places. The exterior design is the responsibility of Porsche Design. A similar variant of this type is also in service in Oslo, Norway, as the OS MX3000. X-Cars In September 2017, Siemens was contracted to deliver and maintain 34 six-car Type X trains. The order includes an option for an additional eleven trains. The vehicles are suited for both fully automated operation and driver operation. They will be used on the future Line U5 in a driverless configuration, and will serve on Lines U1 to U4 with drivers. Delivery started in the spring of 2020 with a pre-series vehicle, with the last trains in this order scheduled for delivery at the end of 2030. The first trainset entered service on 16 June 2023. The design is based on the Siemens Inspiro platform. Line U6 Line U6 was originally slated for rapid transit conversion like Line U4. However, taking into account the historic preservation of original Vienna Stadtbahn stations and structures, construction costs and disruption to existing services, it was decided to keep Line U6 with much of its original mode of operation.
Today Line U6 is unique compared to the other U-Bahn lines, with overhead lines, low-floor LRVs and optical signals (no LZB). T-Class Since 1993, Bombardier Wien has built double-articulated low-floor vehicles of the type T, which are also deployed as Type 400 on the Lokalbahn Wien–Baden and serve as the basis for the successful Flexity Swift vehicle family. A set consists of three permanently coupled cars, and a train of four sets. Until 2008, shortened trains of three sets also operated during off-peak hours. The T-cars initially ran in combination with the older E6/c6 cars, so that each train included a low-floor car; today only trains made up entirely of T- and T1-cars operate. Seats: 232, standing room: 544. A further development of the Type T, with video surveillance, air conditioning, electronic interior and exterior displays and a new design, has been in use since May 2008 as the Type T1, replacing the old E6/c6 high-floor sets. Since 24 December 2008, only Type T and T1 vehicles have run on the U6. The T and T1 cars can be coupled with each other, so that trains can run with a mix of T and T1 cars. At the end of 2009, the T-cars began to be retrofitted with electronic interior and exterior displays and with video surveillance of the interiors, improving safety for passengers and staff, deterring vandalism, and bringing them visually into line with the T1. Also, the older cloth seats in the T-cars are gradually being replaced by new red plastic seats with yellow handles, which can also be found in the T1 cars and in the Type V metro cars. The vehicles of the Tx type family will also successively receive warning lights for signaling the door-closing operation. So that they can be transferred via the tram network to the main workshop of the Wiener Linien, the T and T1 cars are equipped for tram operation. Former Trains From the light rail operation, the six-axle articulated cars of type E6 (railcar) and c6 (trailer), "Type Mannheim", were taken over; they were built in 1979 by Lohner and Rotax under Duewag license. An entire train offered 192 seats and 432 standing places. Until the end of 2008, these trains still operated in conjunction with T-cars, i.e. E6 + c6 + T + c6 + E6. In May 2008, delivery of the Type T1 began, which would completely replace the Type E6/c6. On 23 December 2008, E6/c6 cars ran for the last time on the U6. Most of the vehicles were sold to Utrecht or Krakow; a train consisting of a railcar and a trailer is preserved in the Remise museum. The E6 and c6 in Utrecht were sold to Krakow in 2014. Art In common with many urban transit systems, the Vienna U-Bahn has artworks in stations. These include: Altes Landgut: Face Surveillance Snails by Yves Netzhammer. Aspern Nord: Aspern Affairs, two big artistic maps of Vienna at the end of the platform, one showing the situation in 1809 during the Napoleonic Wars and one of 1912, in which the airport in Aspern (at the time the biggest airport in Europe) can be seen. There are also colored "lifelines" above the tracks that show the names of famous people and the dates of their birth and death. The artworks were created by Stephan Huber.
Erdberg: Mosaics Stadteinwärts and Stadtauswärts by Peter Atanasov Hütteldorfer Straße: U-BauAlphabet by Georg Salner Johnstraße: übertragung by Michael Schneider Karlsplatz: Pi by Ken Lum Karlsplatz: Spatial installation by Peter Kogler Karlsplatz: Frieze Unisono di colori by Ernst Friedrich und Eleonor Friedrich Landstraße: Enamel wall by Oswald Oberhuber Landstraße: Installation Planet der Pendler mit den drei Zeitmonden by Kurt Hofstetter Laurenzgasse: Mural by Heimo Zobernig Museumsquartier: Lauf der Geschöpfe, Der Jubilierende, Wächter, Lebenskeim and Tor des Verborgenen by Rudi Wach Ottakring: U-Turn by Margot Pilz Ottakring: Graffiti wall by Wiener Graffiti Union Praterstern: Einen Traum träumen und ihn mit anderen teilen ... by Susanne Zemrosser Schottentor: varying installations in glass case Schweglerstraße: Kunst der Technik by Nam June Paik Stadlau: Nepomuk by Werner Feiersinger Stubentor: Bewegungen der Seele by Michael Hedwig Südtiroler Platz – Hauptbahnhof: SUED by Franz Graf Taborstraße: ein Garten (zum Beispiel) by Ingeborg Strobl Volkstheater: Das Werden der Natur by Anton Lehmden Westbahnhof: Cirka 55 Schritte durch Europa by Adolf Frohner Zippererstraße: Kid's Kunst – Mobilität im kommenden Jahrtausend (children's art) Rochusgasse: Roman archaeological remains
Technology
Europe_2
null
1168300
https://en.wikipedia.org/wiki/Amber%20%28color%29
Amber (color)
The color amber is a pure chroma color, located on the color wheel midway between the colors of yellow and orange. The color name is derived from the material also known as amber, which is commonly found in a range of yellow-orange-brown-red colors; likewise, as a color, amber can refer to a range of yellow-orange colors. In English, the first recorded use of the term as a color name, rather than a reference to the specific substance, was in 1500. SAE/ECE amber Amber is one of several technically defined colors used in automotive signal lamps. In North America, SAE standard J578 governs the colorimetry of vehicle lights, while outside North America the internationalized European ECE regulations hold force. Both standards designate a range of orange-yellow hues in the CIE color space as "amber". In the past, the ECE amber definition was more restrictive than the SAE definition, but the current ECE definition is identical to the more permissive SAE standard. The SAE formally uses the term "yellow amber", though the color is most often referred to as "yellow". This is not the same as selective yellow, a color used in some fog lamps and headlamps. Formal definitions Previously, ECE amber was defined according to the 1968 Convention on Road Traffic. Recent revisions to the ECE regulations have aligned ECE amber with SAE yellow. The entirety of these definitions lies outside the gamut of the sRGB color space — such a pure color cannot be represented using RGB primaries. The color box shown above is a desaturated approximation, produced by taking the centroid of the standard definition and moving it towards the D65 white point until it meets the sRGB gamut triangle (a short computational sketch of this procedure is given at the end of this article). Lighting LEDs are called amber when their wavelength is approximately 590 nm. Monochromatic low-pressure sodium lamps emit at 580 to 590 nm. Cultural use Computers The Digital Equipment Corporation (DEC) VT220 computer terminals were available with amber phosphors in their CRTs. Interior design The original Amber Room in the Catherine Palace of Tsarskoye Selo near Saint Petersburg was a complete chamber decoration of amber panels backed with gold leaf and mirrors. Due to its singular beauty, it was sometimes dubbed the "Eighth Wonder of the World". Sports In Gaelic games, Armagh play in a darker amber color (the amber that is prevalent in the Irish flag), Offaly play in the original colors of the Irish flag (green, white and amber) and Kilkenny also play in black and amber, albeit a more yellow amber. Amber is a color worn by the English football clubs Hull City AFC, Bradford City AFC, Barnet FC, Shrewsbury Town FC (as part of stripes), Mansfield Town, Cambridge United FC and Sutton United. Everton has incorporated amber in away and third kits, and as an accent color, since at least 1967. The color is also worn by the Scottish football club Motherwell FC, as well as many other sports clubs around the world. Traffic engineering Amber is used in traffic lights and turn signals. Theatre Amber, sometimes named "Bastard Amber", along with 'Moonlight Blue', is one of the two most common colors used in stage lighting.
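The desaturation procedure described under "Formal definitions" can be sketched computationally. The following Python sketch is an illustration under assumptions: the starting chromaticity in the example is a made-up amber-like value, not the centroid of either standard's actual definition, while the D65 white point, the XYZ-to-sRGB matrix and the sRGB transfer function are the usual published values.

```python
import numpy as np

# D65 white point chromaticity and the XYZ -> linear sRGB matrix (IEC 61966-2-1).
D65_XY = np.array([0.3127, 0.3290])
XYZ_TO_RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def xy_to_xyz(x, y, Y=1.0):
    """Convert a chromaticity (x, y) with luminance Y to CIE XYZ."""
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

def desaturate_to_srgb(x, y, steps=1000):
    """Move a chromaticity towards the D65 white point until its linear sRGB
    components are all non-negative, then return gamma-encoded sRGB values
    scaled so that the largest channel equals 1."""
    start = np.array([x, y])
    for t in np.linspace(0.0, 1.0, steps):
        candidate = (1.0 - t) * start + t * D65_XY
        rgb = XYZ_TO_RGB @ xy_to_xyz(*candidate)
        if np.all(rgb >= 0.0):                      # inside the sRGB gamut triangle
            rgb = rgb / rgb.max()
            return np.where(rgb <= 0.0031308,
                            12.92 * rgb,
                            1.055 * rgb ** (1.0 / 2.4) - 0.055)
    raise ValueError("no in-gamut chromaticity found")

# Hypothetical amber-like chromaticity, chosen only to illustrate the procedure.
print(desaturate_to_srgb(0.575, 0.420))
```

The first in-gamut point along the straight line towards white is the kind of desaturated approximation the article refers to for its color swatch.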
Physical sciences
Colors
Physics
1168570
https://en.wikipedia.org/wiki/Joint%20dislocation
Joint dislocation
A joint dislocation, also called luxation, occurs when there is an abnormal separation in the joint, where two or more bones meet. A partial dislocation is referred to as a subluxation. Dislocations are often caused by sudden trauma to the joint like an impact or fall. A joint dislocation can cause damage to the surrounding ligaments, tendons, muscles, and nerves. Dislocations can occur in any major joint (shoulder, knees, etc.) or minor joint (toes, fingers, etc.). The most common joint dislocation is a shoulder dislocation. The treatment for joint dislocation is usually by closed reduction, that is, skilled manipulation to return the bones to their normal position. Only trained medical professionals should perform reductions since the manipulation can cause injury to the surrounding soft tissue, nerves, or vascular structures. Signs and Symptoms The following symptoms are common with any type of dislocation. Intense pain Joint instability Deformity of the joint area Reduced muscle strength Bruising or redness of joint area Difficulty moving joint Stiffness Complications Joint dislocations can have associated injuries to surrounding tissues and structures, including muscle strains, ligament and tendon injuries, neurovascular injuries, and fractures. Depending on the location of the dislocation, there are different complications to consider. In the shoulder, vessel and nerve injuries are rare, but can cause many impairments and requires a longer recovery process. Knee dislocations are rare, but can be complicated by injuries to arteries and nerves, leading to limb-threatening complications. Degenerative changes following injury to the wrist are common, with many developing arthritis. Persistent nerve pain years after the initial trauma is not uncommon. Most finger dislocations occur in the middle of the finger (PIP) and are complicated by ligamentous injury (volar plate). Since most dislocations involving the joint near the fingertip (DIP joint) are due to trauma, there is often an associated fracture or tissue injury. Hip dislocations are at risk for osteonecrosis of the femoral head, femoral head fractures, the development of osteoarthritis, and sciatic nerve injury. Given the strength of ligaments in the foot and ankle, ankle dislocation-fractures can occur. Causes Joint dislocations are caused by trauma to the joint or when an individual falls on a specific joint. Great and sudden force applied, by either a blow or fall, to the joint can cause the bones in the joint to be displaced or dislocated from their normal position. With each dislocation, the ligaments keeping the bones fixed in the correct position can be damaged or loosened, making it easier for the joint to be dislocated in the future. Risk Factors A variety of risk factors can predispose individuals to joint dislocation. They can vary depending on location of the joint. Genetic factors and underlying medical conditions can further increase risk. Genetic conditions, such as hypermobility syndrome and Ehlers-Danlos Syndrome put individuals at increased risk for dislocations. Hypermobility syndrome is an inherited disorder that affects the ligaments around joints. The loosened or stretched ligaments in the joint provide less stability and allow for the joint to dislocate more easily. Dislocation can also occur because of conditions such as Rheumatoid arthritis. In Rheumatoid arthritis the production of synovial fluid decreases, gradually causing pain, swollen joints, and stiffness. 
A forceful push causes friction and can dislocate the joint. Notably, joint instability in the neck is a potential complication of rheumatoid arthritis. Participation in sports, being male, variations in the shape of the joint, being older, and joint hypermobility in males are risk factors associated with an increased risk of first time dislocation. Participation in sports, being a young male, history of a previous dislocation with an associated injury, and any history of previous dislocation are risk factors associated with recurrent dislocations. Diagnosis Initial evaluation of a suspected joint dislocation begins with a thorough patient history, including mechanism of injury, and physical examination. Special attention should be focused on the neurovascular exam both before and after reduction, as injury to these structures may occur during the injury or during the reduction process. Imaging studies are frequently obtained to assist with diagnosis and to determine the extent of injury. Imaging Types Standard plain radiographs, usually a minimum of 2-views Generally, pre- and post-reduction X-rays are recommended. Initial X-ray can confirm the diagnosis as well as evaluate for any concomitant fractures. Post-reduction radiographs confirm successful reduction alignment and can exclude any other bony injuries that may have been caused during the reduction procedure. In certain instances if initial X-rays are normal but injury is suspected, there is possible benefit of stress/weight-bearing views to further assess for disruption of ligamentous structures and/or need for surgical intervention. This may be utilized with AC joint separations. Nomenclature: Joint dislocations are named based on the distal component in relation to the proximal one. Ultrasound Ultrasound may be useful in an acute setting, particularly with suspected shoulder dislocations. Although it may not be as accurate in detecting any associated fractures, in one observational study ultrasonography identified 100% of shoulder dislocations, and was 100% sensitive in identifying successful reduction when compared to plain radiographs. Ultrasound may also have utility in diagnosing AC joint dislocations. In infants <6 months of age with suspected developmental dysplasia of the hip (congenital hip dislocation), ultrasound is the imaging study of choice as the proximal femoral epiphysis has not significantly ossified at this age. Cross-sectional imaging (CT or MRI) Plain films are generally sufficient in making a joint dislocation diagnosis. However, cross-sectional imaging can subsequently be used to better define and evaluate abnormalities that may be missed or not clearly seen on plain X-rays. CT is not commonly used, however it is useful in further analyzing any bony aberrations, and CT angiogram may be utilized if vascular injury is suspected. In addition to improved visualization of bony abnormalities, MRI permits for a more detailed inspection of the joint-supporting structures in order to assess for ligamentous and other soft tissue injury. Classification Dislocations can either be full, referred to as luxation, or partial, referred to as subluxation. Simple dislocations are dislocations without an associated fracture, while complex dislocations have an associated fracture. Depending on the type of joint involved (i.e. ball-and-socket, hinge), the dislocation can further be classified by anatomical position, such as an anterior hip dislocation. 
Prevention Avoiding positions and activities that place the joint at risk for dislocation are effective strategies to prevent dislocation. Strengthening exercises targeting muscles surrounding the joint are important to prevent dislocation. Treatment Pain Control Pain control is an important component of managing joint dislocations. Joint dislocations can be painful and appropriate pain control is helpful during joint reduction. Non-operative Reduction/Repositioning X-rays are usually taken to confirm a diagnosis and detect any fractures which may also have occurred at the time of dislocation. A dislocation is easily seen on an X-ray. Once a diagnosis is confirmed, the joint is usually manipulated back into position. This can be a very painful process, therefore this is typically done either in the emergency department under sedation or in an operating room under a general anaesthetic. A dislocated joint should be reduced into its normal position only by a trained medical professional. Trying to reduce a joint without any training could worsen the injury. It is important to reduce the joint as soon as possible. Delaying reduction can compromise the blood supply to the joint. This is especially true in the case of a dislocated ankle, due to the anatomy of the blood supply to the foot. On field reduction is crucial for joint dislocations. As they are extremely common in sports events, managing them correctly at the game at the time of injury, can reduce long term issues. They require prompt evaluation, diagnosis, reduction, and postreduction management before the person can be evaluated at a medical facility. After a dislocation, injured joints are usually held in place by a splint (for straight joints like fingers and toes) or a bandage (for complex joints like shoulders). Immobilization Immobilization is a method of treatment to place the injured joint in a sling or in another immobilizing device in order to keep the joint stable. A 2012 Cochrane review, found no statistically significant difference in healing or long-term joint mobility between simple shoulder dislocations treated conservatively versus surgically. Shorter immobilization periods are encouraged, with the goal of return to increased range-of-motion activities as soon as possible. Shorter immobilization periods is linked to increased ranges of motion in some joints. Rehabilitation Additionally, the joint muscles, tendons and ligaments must also be strengthened. This is usually done through a course of physiotherapy, which will also help reduce the chances of repeated dislocations of the same joint. The shoulder is a prime example of this. Any shoulder dislocation should be followed up with thorough physical therapy. The most common treatment method for a dislocation of the Glenohumeral Joint (GH Joint/Shoulder Joint) is exercise based management. For glenohumeral instability, the therapeutic program depends on specific characteristics of the instability pattern, severity, recurrence and direction with adaptations made based on the needs of the patient. In general, the therapeutic program should focus on restoration of strength, normalization of range of motion and optimization of flexibility and muscular performance. Throughout all stages of the rehabilitation program, it is important to take all related joints and structures into consideration. Operative Surgery is often considered in extensive injuries or after failure of conservative management with strengthening exercises. 
The need for surgery will depend on the location of the dislocation and the extent of the injury. Shoulder injuries can also be surgically stabilized, depending on the severity, using arthroscopic surgery. Prognosis Prognosis varies depending on the location and extent of the dislocation. The prognosis of a shoulder dislocation is dependent on various factors including age, strength, connective tissue health and severity of the injury causing the dislocation. There is a good prognosis in simple elbow dislocations in younger people. Older people report more pain and stiffness on average. Wrist dislocations are often difficult to manage due to the difficulty in healing the small bones in the wrist. Finger displacement towards the back of the hand is often irreducible due to associated injuries, while finger displacement towards the palm of the hand is more readily reducible. Epidemiology Each joint in the body can be dislocated, however, there are common sites where most dislocations occur. The most common dislocated parts of the body are discussed as follows: Dislocated shoulder Anterior shoulder dislocation is the most common type of shoulder dislocation, accounting for at least 90% of shoulder dislocations. Anterior shoulder dislocations have a recurrence rate around 39%, with younger age at initial dislocation, male sex, and joint hyperlaxity being risk factors for increased recurrence. The incidence rate of anterior shoulder dislocations is roughly 23.1 to 23.9 per 100,000 person-years. Young males have a higher incidence rate, roughly four times that of the overall population. Recurrent anterior shoulder dislocations have a higher rate of labrum tears (Bankart lesion) and humerus fractures/dents (Hill-Sachs lesion) compared to initial dislocations. Shoulder dislocations account for 45% of all dislocation visits to the emergency room. Elbow The incidence rate of elbow dislocations is 5 to 6 per 100,000 persons per year. Posterior dislocations are the most common type of elbow dislocations, comprising 90% of all elbow dislocations. Wrist Overall, injuries to the small bones and ligaments in the wrist are uncommon. Lunate dislocations are the most common. Finger Interphalangeal (IP) or metacarpophalangeal (MCP) joint dislocations In the United States, men are most likely to sustain a finger dislocation with an incidence rate of 17.8 per 100,000 person-years. Women have an incidence rate of 4.65 per 100,000 person-years. The average age group that sustain a finger dislocation are between 15 and 19 years old. The most common dislocations are in the proximal interphalangeal (PIP) joints. Hip Posterior and anterior hip dislocation Anterior dislocations are less common than posterior dislocations. 10% of all dislocations are anterior and this is broken down into superior and inferior types. Superior dislocations account for 10% of all anterior dislocations, and inferior dislocations account for 90%. 16-40 year old males are more likely to receive dislocations due to a car accident. When an individual receives a hip dislocation, there is an incidence rate of 95% that they will receive an injury to another part of their body as well. 46–84% of hip dislocations occur secondary to traffic accidents, the remaining percentage is due based on falls, industrial accidents or sporting injury. Knee The majority of knee dislocations (64.5%) are caused by trauma to the knee, with more than half caused by car and motorcycle accidents. 
The incidence rate of initial patellar dislocations is roughly 32.8 per 100,000 person years. Nearly 41% of knee dislocations have an associated fracture, with the majority of these fractures in one of the legs. Nerve injury occurs in about 15.3% of knee dislocations, while major artery injury occurs in 7.8% of knee dislocations. More than half (53.5%) of knee dislocations have an associated torn meniscus. Tendon rupture occurs up to 13.1% of the time. Foot and Ankle Lisfranc injury is a dislocation or fracture-dislocation injury at the tarsometatarsal joints Subtalar dislocation, or talocalcaneonavicular dislocation, is a simultaneous dislocation of the talar joints at the talocalcaneal and talonavicular levels. Subtalar dislocations without associated fractures represent about 1% of all traumatic injuries of the foot and 1-2% of all dislocations, and they are associated with high energy trauma. Early closed reduction is recommended, otherwise open reduction without further delay. Total talar dislocation is very rare and has very high rates of complications. Ankle Sprains primarily occur as a result of tearing the ATFL (anterior talofibular ligament) in the Talocrural Joint. The ATFL tears most easily when the foot is in plantarflexion and inversion. Ankle dislocation without fracture is rare. Gallery
Biology and health sciences
Types
Health
1168808
https://en.wikipedia.org/wiki/Scarlet%20%28color%29
Scarlet (color)
Scarlet is a bright red color, sometimes with a slightly orange tinge. In the spectrum of visible light, and on the traditional color wheel, it is one-quarter of the way between red and orange, slightly less orange than vermilion. According to surveys in Europe and the United States, scarlet and other bright shades of red are the colors most associated with courage, force, passion, heat, and joy. In the Roman Catholic Church, scarlet is the color worn by a cardinal, and is associated with the blood of Christ and the Christian martyrs, and with sacrifice. Scarlet is also associated with immorality and sin, particularly prostitution or adultery, largely because of a passage referring to "The Great Harlot", "dressed in purple and scarlet", in the Bible (Revelation 17:1–6). Etymology The word comes from the Middle English "scarlat", from the Old French escarlate, from the Latin "scarlatum", from the Persian saqerlât. The term scarlet was also used in the Middle Ages for a type of cloth that was often bright red. An early recorded use of scarlet as a color name in the English language dates to 1250. History Ancient world Scarlet has been a color of power, wealth and luxury since ancient times. Scarlet dyes were first mentioned in the 8th century BC, under the name Armenian Red, and they were described in Persian and Assyrian writings. The color was exported from Persia to Rome. During the Roman Empire, it was second in prestige only to the purple worn by the Emperors. Roman officers wore scarlet cloaks called paludamenta, and persons of high rank were referred to as the coccinati, the people of red. The color is also mentioned several times in the Bible, in both the Old and New Testaments; in the Latin Vulgate version of the book of Isaiah (1:18) it says, "If your sins be as scarlet (si fuerint peccata vestra ut coccinum) they shall be made white as snow", and in the book of Revelation (17:1–6) it describes the "Great Harlot" (meretricius magnus) dressed in scarlet and purple (circumdata purpura et coccino), and riding upon a scarlet beast (bestiam coccineam). The Latin term for scarlet used in the Bible comes from coccus, a "tiny grain". The finest scarlets in ancient times were made from the tiny scale insect called kermes, which fed on certain oak trees in Turkey, Persia, Armenia and other parts of the Middle East. The insects contained a very strong natural dye, also called kermes, which produced the scarlet color. The insects were so small they were historically thought to be a kind of grain. This was the origin of the expression "dyed in the grain." Middle Ages and Renaissance The early Christian church adopted many of the symbols of the Roman Empire, including the importance of the color scarlet. The flag of the Crusaders was a scarlet cross on a white background, with scarlet indicating blood and sacrifice. By a church edict in 1295, Cardinals of the church, second in authority to the Pope, wore red robes, but a red closer in color to the purple of the Byzantine Emperors, a color coming from murex, a type of mollusk. After the fall of Constantinople to the Turks in 1453, however, the imperial purple was no longer available, and Cardinals began instead to wear scarlet made from kermes. During the Middle Ages and Renaissance, scarlet was the color worn by Kings, princes and the wealthy, partly because of its color and partly because of its high price. The exact shade, which varied widely, was not as important as the brilliance and richness of the color. 
The finest scarlet, called scarlatto or Venetian scarlet, came from Venice, where it was made from kermes by a specific guild which closely guarded the formula. Cloth dyed scarlet cost as much as ten times more than cloth dyed with blue. 16th to 19th century In the Assumption, by Titian (1516–1518), the figures of God, the Virgin Mary and two apostles are highlighted by their scarlet costumes, painted with vermilion pigment from Venice. The young Queen Elizabeth I (painted in about 1563) liked to wear bright reds, before she adopted the more sober image of the Virgin Queen. Her satin gown was probably dyed with kermes. In the 16th century, an even more vivid scarlet began to arrive in Europe from the New World. When the Spanish conquistadores conquered Mexico, they found that the Aztecs were making brilliant red shades from another variety of scale insect called cochineal, similar to the European kermes vermilio, but producing better shades of red at lower costs. The first shipments were sent from Mexico to Seville in 1523. The Venetian guilds at first tried to block the use of the cochineal in Europe, but before the century was over, it was being used to make scarlet dye in Spain, France, Italy, and Holland, and almost all the fine scarlet garments of Europe were made with cochineal. Scarlet was the traditional color of the British nobility in the 17th and 18th centuries. The members of the House of Lords wore red ceremonial gowns for the opening of Parliament, and today sit on red benches. The red military uniform was adopted by the English New Model Army in 1645, and was still worn as a dress uniform until the outbreak of the First World War in August 1914. Ordinary soldiers wore red coats dyed with madder, while officers wore scarlet coats dyed with the more expensive cochineal. This led to British soldiers being known as red coats. After 1873 all ranks of the regiments wearing red tunics changed to the more vivid shade of scarlet. 20th and 21st century From the 8th century until the early 20th century, the most important scarlet pigment used in western art was vermilion, made from the mineral cinnabar. It was used, along with red lake pigments, by artists from Botticelli and Raphael to Renoir. However, in 1919 commercial production began of an intense new synthetic pigment, cadmium red, made from cadmium sulfide and selenium. The new pigment became the standard red of Henri Matisse and the other important painters of the 20th century. In the 20th century, scarlet also became associated with revolution. Red flags were first used as revolutionary emblems, symbolizing the blood of martyrs, during the French Revolution and the Paris uprisings of 1848. Red became the color of socialism, then communism, and became the color of the flags of both the Soviet Union and Communist China. China still uses a scarlet flag; in Chinese culture red is also the color of happiness. Since the fall of the Soviet Union, the flag of Russia consists of red, blue and white, the colors of the historic Russian flag from the time of Peter the Great that were adapted by him from the colors of the flag of the Netherlands. Scarlet in culture Academic dress Scarlet is the color worn in traditional academic dress in the United Kingdom for those awarded doctorates. It is also the color of many of the undergraduate gowns worn by students of the ancient universities of Scotland. 
In academic dress in the United States, scarlet is used for hood bindings (borders) and, depending on the university or school, other parts of the dress (velvet chevrons, facings, etc.) to denote a degree in some form or branch of Theology (e.g., Sacred Theology, Canon Law, Divinity, Ministry). In the French academic dress system, the five traditional fields of study (Arts, Science, Medicine, Law and Divinity) are each symbolized by a distinctive color, which appears in the academic dress of the people who graduated in this field. Scarlet is the distinctive color for Law. As such, it is also the color worn on their court dress by French high magistrates. Film and television Captain Scarlet and the Mysterons was a British 1960s science-fiction marionette TV series. The captains are all named after a color, including Scarlet, Magenta, White and Grey. The Scarlet Pumpernickel is a 1950 Looney Tunes short starring Daffy Duck, a parody of The Scarlet Pimpernel. Wanda Maximoff, also known as the Scarlet Witch, is the protagonist of WandaVision, the 2021 television miniseries. Literature The novel The Scarlet Letter by 19th-century American writer Nathaniel Hawthorne depicts the life of the fictional character Hester Prynne, who wears a prominent scarlet letter "A" (for "adulteress") on her chest as a punishment for adultery. A Study in Scarlet was an 1888 detective-mystery novel by Sir Arthur Conan Doyle, introducing Sherlock Holmes, who solves a baffling mystery whose main clue is a message painted on a wall in blood. The Scarlet Pimpernel was a popular play and novel about intrigue and adventure during the French Revolution, written by Baroness Emmuska Orczy. The play, first staged in London in 1903, is about an English lord, Sir Percy Blakeney, who wore a disguise and rescued French nobles from the guillotine during the French Revolution. He was supported by a secret club, the League of the Scarlet Pimpernel, and left the red flower of that name as his calling card. The Baroness later converted her play into a highly successful novel, upon which several movies were based. The hero, who lived a double life as a foppish British aristocrat by day and a disguised fighter for justice by night, inspired later heroes such as Batman and Superman. The novel The Scarlet Plague by Jack London tells of a terrible epidemic that has decimated mankind in the future world of 2073. In the novel Scarlet Sails by Russian writer Alexander Grin, the protagonist, an imaginative young girl named Assol, dreams of a beautiful ship with scarlet sails that will come and take her away from her dismal fishing village. Partly due to the association with this novel, scarlet has come to be viewed as the color of hope and passion in Russian and related cultures. Military In the modern British army, scarlet is still worn by the Foot Guards, the Life Guards, and by some regimental bands or drummers for ceremonial purposes. Officers and NCOs of those regiments which previously wore red retain scarlet as the color of their "mess" or formal evening jackets. The Royal Gibraltar Regiment has a scarlet tunic in its winter dress. Scarlet is worn for some full dress, military band or mess uniforms in the modern armies of a number of the countries that made up the former British Empire. These include the Australian, Jamaican, New Zealand, Fijian, Canadian, Kenyan, Ghanaian, Indian, Singaporean, Sri Lankan and Pakistani armies. 
The musicians of the United States Marine Corps Band wear red, following an 18th-century military tradition that the uniforms of band members are the reverse of the uniforms of the other soldiers in their unit. Since the US Marine uniform is blue with red facings, the band wears the reverse. The Brazilian Marine Corps wears a red dress uniform. Red Serge is the uniform of the Royal Canadian Mounted Police, created in 1873 as the North-West Mounted Police, and given its present name in 1920. The uniform was adapted from the tunic of the British Army. Cadets at the Royal Military College of Canada also wear red dress uniforms. Scarlet is the branch color of the United States Army Field Artillery Corps. Scarlet and gold are the colors of the United States Marine Corps. Scarlet is the color of the beret given to United States Air Force Combat Controllers, after completion of Combat Control School at Pope Air Force Base. Orders and decorations Scarlet is the color of the robes and sash of the Order of the Bath in the United Kingdom, and the Order has a Gentleman Usher of the Scarlet Rod. Religion In the Roman Catholic Church, scarlet robes—symbolizing the color of the blood of Christ and the Christian martyrs—are worn by cardinals as a symbol of their willingness to defend their faith with their own blood. Scarlet red, with or without the use of gold stripes, is the proper color in the Catholic church's liturgy for Palm Sunday, for Good Friday, for Pentecost, for memorials and feasts of saints who were martyred, and for funerals of the Pope or for cardinals. In the Lutheran tradition, scarlet is the color for paraments for Palm or Passion Sunday, and for all of Holy Week through Maundy Thursday. Prostitution In countries that have traditionally been dominated by Christian ideas, scarlet is associated with prostitution. The Book of Revelation refers to the Whore of Babylon riding upon a "scarlet beast" and dressed in purple and scarlet. The phrase Great Scarlet Whore was used by Puritans in the 17th century, and the phrase The Scarlet Woman was used by many Protestants and later Mormons in North America well into the 20th century. Scarlet and crimson are also linked to the Judeo-Christian concept of sin in the Book of Isaiah, rendered in the King James Version "though your sins be as scarlet, they shall be as white as snow; though they be red like crimson, they shall be as wool." The connection of red or scarlet with prostitution was very common in Europe and America. Prostitutes were obliged to wear red in some European cities, and even today areas in European cities where prostitutes can work legally are known as red-light districts. Sex worker advocacy groups like the Scarlet Alliance use the striking color to associate themselves with prostitution. Sports The Scarlets are a Welsh professional rugby union team and play in scarlet. Scarlet is the sole official color of Rutgers University as an institution. The NCAA Division I athletic teams for its main campus, Rutgers University–New Brunswick, are called the Scarlet Knights, while the NCAA Division III teams for its satellite campuses at Newark and Camden are called the Scarlet Raiders and Scarlet Raptors, respectively. The Atlanta Braves use scarlet red as one of their team colors. The official colors of the University of Nebraska–Lincoln athletic teams are scarlet and cream. The official colors of Ohio State University athletic teams are scarlet and gray. 
The official colors of Illinois Institute of Technology athletic teams are scarlet and gray. The official colors of Texas Tech University athletic teams are scarlet and black, as noted in the fight song. The official colors of the Boston University athletic teams are scarlet and white. The official color of the Scuderia Ferrari is scarlet red. Variations of scarlet Websafe scarlet This is a variation on the standard RGB or hex combination that produces a truer scarlet on some monitors. It is slightly more orange than the standard scarlet RGB value of 255, 36, 0, but gives a truer color on displays where the red dominates over the orange and would otherwise make the color appear as a normal red rather than a genuine scarlet (see the conversion sketch below). Torch red This is the color now called scarlet in Crayola crayons. It was originally formulated as torch red in 1998 and then renamed scarlet by Crayola in 2000. Flame The first recorded use of flame as a color name in English was in 1590. The source of this color is the ISCC-NBS Dictionary of Color Names (1955), a color dictionary used by stamp collectors to identify the colors of stamps. A sample of the color "Flame" (color sample #34) is also displayed in the dictionary's online version. Fire brick The web color fire brick is a medium dark shade of scarlet/red. Boston University Scarlet Boston University Scarlet is the color which, along with white, is symbolic of Boston University. The color is identical to Utah Crimson.
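RGB triplets such as the one quoted above map directly to the hexadecimal color codes used on the web. The following Python sketch is purely illustrative: the first triplet is the 255, 36, 0 value mentioned in the text, while the second is a hypothetical "more orange" variant included only to show the comparison, not the actual websafe scarlet definition.

```python
def rgb_to_hex(r: int, g: int, b: int) -> str:
    """Convert an 8-bit-per-channel RGB triplet to a web hex code."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

# Standard scarlet RGB value cited in the text.
print(rgb_to_hex(255, 36, 0))   # -> #FF2400

# A slightly more orange triplet, shown only as a hypothetical comparison;
# it is not the actual "websafe scarlet" value.
print(rgb_to_hex(255, 51, 0))   # -> #FF3300
```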
Physical sciences
Colors
Physics
1169181
https://en.wikipedia.org/wiki/Spectacled%20bear
Spectacled bear
The spectacled bear (Tremarctos ornatus), also known as the South American bear, Andean bear, Andean short-faced bear or mountain bear and locally as jukumari (Aymara and Quechua), ukumari (Quechua) or ukuku, is a species of bear native to the Andes Mountains in northern and western South America. It is the only living species of bear native to South America, and the last remaining short-faced bear (subfamily Tremarctinae). Its closest relatives are the extinct Tremarctos floridanus, and the giant short-faced bears (Arctodus and Arctotherium), which became extinct at the end of the Pleistocene around 12,000 years ago. Unlike other omnivorous bears, the diet of the spectacled bear is mostly herbivorous. The species is classified as Vulnerable by the IUCN because of habitat loss. Description The spectacled bear is the only bear native to South America and is the largest land carnivore in that part of the world, although as little as 5% of its diet is composed of meat. Among South America's extant, native land animals, only the Baird's tapir, South American tapir and mountain tapir are heavier than the bear. The spectacled bear is a mid-sized species of bear. Overall, its fur is blackish in colour, though bears may vary from jet black to dark brown and to even a reddish hue. The species typically has distinctive beige or ginger-coloured markings across its face and upper chest, though not all spectacled bears have "spectacle" markings. The pattern and extent of pale markings are slightly different on each individual bear, and bears can be readily distinguished by this. Males are a third larger than females in dimensions and sometimes twice their weight. Males can weigh from , and females can weigh from . Head-and-body length can range from , though mature males do not measure less than . On average males weigh about and females average about , thus it rivals the polar bear for the most sexually dimorphic modern bear. A male in captivity that was considered obese weighed . The tail is a mere in length, and the shoulder height is from . Compared to other living bears, this species has a more rounded face with a relatively short and broad snout. In some extinct species of the Tremarctinae subfamily, this facial structure has been thought to be an adaptation to a largely carnivorous diet, despite the modern spectacled bears' herbivorous dietary preferences. The spectacled bear's sense of smell is extremely sensitive. They can perceive from the ground when a tree is loaded with ripe fruit. On the other hand, their hearing is moderate and their vision is short. Distribution and habitat Despite some rare spilling-over into eastern Panama, spectacled bears are mostly restricted to specific regions in northern and western South America. They can range in western Venezuela, Colombia, Ecuador, Peru, western Bolivia, and northwestern Argentina. Its elongated geographical distribution is only wide but with a length of more than . The species is found almost entirely in the Andes Mountains. Before spectacled bear populations became fragmented during the last 500 years, the species had a reputation for being adaptable, as it is found in a wide variety of habitats and altitudes throughout its range, including cloud forests, high-altitude grasslands, dry forests and scrub deserts. A single spectacled bear population on the border of Peru and Ecuador inhabited as great a range of habitat types as the world's brown bears (Ursus arctos) now occupy. 
The best habitats for spectacled bears are humid to very humid montane forests. These cloud forests typically occupy an elevational band between , depending on latitude. Generally, the wetter these forests are, the more food species there are that can support bears. Occasionally, they may reach altitudes as low as , but are not typically found below in the foothills. They can even range up to the mountain snow line at over in elevation. Bears are known to use all these types of habitats in regional movements; however, the seasonal patterns of these movements are still unknown. Nowadays, the distribution of Tremarctos ornatus is influenced by human presence, mainly due to habitat destruction and degradation, hunting and fragmentation of populations. This fragmentation is mainly found in Venezuela, Colombia, Ecuador and Argentina. It poses several problems for the species because, first, the persistence of small, isolated populations is compromised even without further habitat loss or hunting. Second, transformation of the landscape reduces the availability of the types of habitat spectacled bears need. Third, fragmentation makes bears more accessible and so exposes them to hunting and killing. Naming and etymology Tremarctos ornatus is commonly referred to in English as the "spectacled bear", a reference to the light colouring on its chest, neck and face, which may resemble spectacles in some individuals, or the "Andean bear" for its distribution along the Andes. The root trem- comes from a Greek word meaning "hole"; arctos is the Greek word for "bear". Tremarctos is a reference to an unusual hole on the animal's humerus. Ornatus, Latin for "decorated", is a reference to the markings that give the bear its common English name. Phylogeny A 2007 investigation into the mitochondrial DNA of bear species indicates that the subfamily Tremarctinae, which includes the extant spectacled bear, diverged from the Ursinae subfamily approximately 5.7 million years ago. Tremarctinae includes the extinct American giant short-faced and Florida short-faced bears. Behaviour and diet Spectacled bears are one of four extant bear species that are habitually arboreal, alongside the American black bear (Ursus americanus) and Asian black bear (U. thibetanus), and the sun bear (Helarctos malayanus). In Andean cloud forests, spectacled bears may be active both during the day and night, but specimens in the Peruvian desert are reported to bed down under vegetative cover during the day. Their continued survival alongside humans has depended mostly on their ability to climb even the tallest trees of the Andes. They usually retreat from the presence of humans, often by climbing trees. Once up a tree, they may often build a platform, perhaps to aid in concealment, as well as to rest and store food on. Although spectacled bears are solitary and tend to isolate themselves from one another to avoid competition, they are not territorial. They have even been recorded feeding in small groups at abundant food sources. Males are reported to have an average home range of during the wet season and during the dry season. Females are reported to have an average home range of in the wet season and in the dry season. When encountered by humans or other spectacled bears, they will react in a docile but cautious manner, unless the intruder is seen as a threat or a mother's cubs are endangered. Like other bears, mothers are protective of their young and have attacked poachers. 
There is only a single reported human death due to a spectacled bear, which occurred while the bear was being hunted and had already been shot. The only known predators of cubs are cougars (Puma concolor) and possibly male spectacled bears. The bears "appear to avoid" jaguars, but the jaguar has considerably different habitat preferences, does not overlap with the spectacled bear in altitude on any specific mountain slope, and only overlaps slightly (900m) in altitude if the entire Cordillera Oriental is considered, based upon unpublished data. Generally, the only threat against adult bears is humans. The longest-lived captive bear, at the Salisbury Zoo in Salisbury, Maryland, in the US, attained a lifespan of 37 years and 11 months. Lifespan in the wild has not been studied, but bears are believed to commonly live to 20 years or more unless they run foul of humans. Spectacled bears are more herbivorous than most other bears; normally about 5 to 7% of their diet is meat. The most common foods for these bears include cactus, bromeliads (especially Puya spp., Tillandsia spp. and Guzmania spp.), palm nuts, bamboo hearts, frailejon (Espeletia spp.), orchid bulbs, fallen fruit on the forest floor, unopened palm leaves, and moss. They will also peel back tree bark to eat the nutritious second layer. Much of this vegetation is very tough to open or digest for most animals, and the bear is one of the few species in its range to exploit these food sources. The spectacled bear has the largest zygomatic mandibular muscles relative to its body size and the shortest muzzle of any living bear, slightly surpassing even the giant panda (Ailuropoda melanoleuca) in these respects. Not coincidentally, both species are known for extensively consuming tough, fibrous plants. Unlike the ursid bears, whose fourth premolar has a better-developed protoconid, an adaptation for shearing flesh, the fourth premolar of spectacled bears has blunt lophs with three pulp cavities instead of two, and can have three roots instead of the two that characterize ursid bears. The musculature and tooth characteristics are adapted to support the stresses of grinding and crushing vegetation. Besides the giant panda, the spectacled bear is perhaps the most herbivorous living bear species. These bears also eat agricultural products, such as sugarcane (Saccharum spp.), honey (made by Apis spp.), and maize (Zea mays), and have been known to travel above the tree line for berries and more ground-based bromeliads. When food is abundant, such as in large corn fields, up to nine individual bears have fed close by each other in a single vicinity. Animal prey is usually quite small, but these bears can prey on adult deer, llamas (Lama glama), domestic cattle (Bos taurus) and horses (Equus caballus). A spectacled bear was captured on a remote video-monitor predaceously attacking an adult mountain tapir perhaps nearly twice its own body mass, and adult horses and cattle killed by spectacled bears have been even heavier. Animal prey has included rabbits, mice, other rodents, birds at the nest (especially ground-nesting birds like tinamous or lapwings (Vanellus spp.)), arthropods, and carrion. They are occasionally accused of killing livestock, especially cattle, and raiding corn fields. Allegedly, some bears become habituated to eating cattle, but the bears are actually more likely to eat cattle as carrion, and some farmers may mistakenly assume the spectacled bear killed them. 
Due to fear of loss of stock, bears may be killed on sight. Reproduction Most of the information available about the reproduction of this species has come from observation of captive animals. In captivity, mating is concentrated between February and September, depending on latitude. In the wild, mating has been observed at almost any time of the year, but activity normally peaks between April and June, at the beginning of the wet season and coinciding with the peak of fruit ripening. The mating pair are together for one to two weeks, during which they will copulate multiple times for 12–45 minutes at a time. The courtship is based on games and non-aggressive fights, while intercourse can be accompanied by loud sounds from both animals. In the wild, births usually occur in the dry season, between December and February, but in captivity they occur year-round throughout the species' distribution. The gestation period is 5.5 to 8.5 months. From one to three cubs may be born, with four being rare and two being the average. The cubs are born with their eyes closed and weigh about each. Although this species does not give birth during the hibernation cycle as do northern bear species, births usually occur in a small den, and the female waits until the cubs can see and walk before she leaves with them, which occurs between three and four months after birth. Females grow more slowly than males. The size of the litter has been positively correlated with both the weight of the female and the abundance and variety of food sources, particularly the degree to which fruiting is temporally predictable. The cubs often stay with the female for one year before striking out on their own. This corresponds to the period of nursing (about one year), although mothers keep providing maternal care for an additional year. Breeding maturity is estimated to be reached at between four and seven years of age for both sexes, based solely on captive bears. Females usually give birth for the first time when they are five years old, and their reproductive lifespan is shorter than that of males, which remain fertile for almost their entire lives. One factor favoring the persistence of bear populations is their longevity, since a female is typically able to raise at least two cubs to adulthood, contributing to population replacement. Wild bears can live for an average of 20 years. Conservation Threats The Andean bear is threatened due to poaching and habitat loss, attributable to agricultural expansion and illegal mining. Poaching may have several motivations: trophy hunting, pet trade, religious or magical beliefs, natural products trade and conflicts with humans. Trophy hunting of Andean bears was apparently popular during the 19th century in some rural areas of Latin America. In the costumbrist novel María by Colombian writer Jorge Isaacs, it was portrayed as an activity for privileged young men in Colombia. Tales regarding pet bears are also known from documents about the Ecuadorian aristocracy of that time. These threats might have diminished in recent years, but there are still isolated reports of captive bears confiscated in rural areas, which are usually unable to readapt to their natural habitat and must be kept in zoological facilities. Religious or magical beliefs might be motivations for killing Andean bears, especially in places where bears are related to myths of disappearing women or children, or where bear parts are related to traditional medicine or superstitions. 
In this context, bear parts can acquire commercial value in trade. Their gall bladders appear to be valued in traditional Chinese medicine and can fetch a high price on the international market. Conflicts with humans, however, appear to be the most common cause of poaching in large portions of the species' distribution. Andean bears are often suspected of attacking cattle and raiding crops, and are killed in retaliation or to prevent further damage. It has been argued that attacks on cattle attributed to Andean bears are partly the work of other predators. Raiding of crops can be frequent in areas with diminishing natural resources and extensive crops in former bear habitat, or when problematic individuals get used to human environments. Intense poaching can create ecological traps for Andean bears, that is, situations in which bears are attracted to areas of high habitat quality that also carry a high poaching risk. Perhaps the most widespread problem for the species is extensive logging and farming, which has led to habitat loss for the largely tree-dependent bears. Shortage of natural food sources might push bears to feed on crops or livestock, increasing the conflict that usually results in poaching of individuals. The impacts of climate change on bear habitat and food sources are not fully understood, but may become negative in the near future. As stated, one of the major limitations to the viability of bear populations is human-caused mortality, mainly poaching and habitat loss; the other major limitation is population size. Therefore, the most effective actions for their viability will be to increase population size and decrease poaching. For these actions to be effective, it is necessary to understand where they should be carried out, identifying areas where habitat protection and landscape management are realistically capable of maintaining large bear populations. Perception of the Andean bear There are two views of the Andean bear. One is ex situ, held by people who live far from bear habitat; for them, spectacled bears are usually charismatic symbols of the wilderness, animals that are not aggressive and are mainly vegetarian. The other view is in situ, held by people who live where the bears occur; for them, bears are cattle predators and pests that should be killed as a preventive measure, and any cattle loss is immediately attributed to them, so they are persecuted and hunted. Conservation actions and plans The IUCN has recommended the following courses for spectacled bear conservation: expansion and implementation of conservation land to prevent further development, greater species-level research and monitoring of trends and threats, more concerted management of current conservation areas, stewardship programs for bears which engage local residents, and education of the public regarding spectacled bears, especially the benefits of conserving the species due to its effect on natural resources. National governments, NGOs and rural communities have made different commitments to conservation of this species along its distribution. Conservation actions in Venezuela date back to the early 1990s, and have been based mostly on environmental education at several levels and the establishment of protected areas. The efforts of several organisations have led to widespread recognition of the Andean bear in Venezuelan society, raising it as an emblematic species of conservation efforts in the country, and to the establishment of a 10-year action plan. 
Evidence regarding the objective effectiveness of these programs (like reducing poaching risk, maintaining population viability, and reducing extinction risk) is subject to debate and needs to be further evaluated. Legislation against bear hunting exists, but is rarely enforced. This leads to persistence of the poaching problem, even inside protected areas. In 2006, the Spectacled Bear Conservation Society was established in Peru to study and protect the spectacled bear. Spectacled bear and protected areas To assess the protected status of Andean bears, researchers evaluated in 1998 the percentage of their habitat included in national parks and other protected areas. This evaluation showed that only 18.5% of the bear range was located in 58 protected areas, highlighting that many of them were small, especially those in the northern Andes. The largest park had an area of while the median size of 43 parks from Venezuela, Colombia and Ecuador was , which may be too small to maintain a sustainable bear population. These researchers therefore stressed the importance of creating habitat blocks outside protected areas, since these might provide further opportunities for protecting the species. Other suggested conservation strategies Researchers suggest the following spectacled bear conservation strategies: Protect high-quality habitats while maintaining connectivity between their different elevational zones. In reality, it is not possible to manage all the undisturbed habitat the bears need in the long term. As such, it is important to identify those high-quality habitats that maximize biodiversity gain. Alleviate human–bear conflicts through conflict management that takes into account the spatial configuration of bear habitat. Mitigate human impacts on protected areas through the design of comprehensive management strategies. Sustain landscape diversity in bear conservation areas to ensure food and seasonal access to resources in all the habitats the bears frequent. Maintain bear population connectivity, emphasizing those conservation areas that connect different ecosystems, such as the cloud forest and the paramo. Rethink roads: where they are built, how, and for what purpose, understanding that they define the broad configuration of bear habitat and are a barrier to bear movement and population connectivity. Integrate hydrological criteria at a landscape scale; this will benefit bears and other biotic communities associated with aquatic environments, including humans. Linking bear habitat conservation and water management can be effective for the development of conservation strategies that benefit all. In places where it is almost impossible to establish new protected areas, mainly because many people already live there, the creation of natural corridors is possibly the best tool for the conservation of species with migratory patterns such as the endangered Andean bear. Spectacled bear in Ecuador Spectacled bears in Ecuador live in approximately of paramo and cloud forest habitats. About one-third of this area is part of the National System of Protected Areas, and the remaining 67% lies in unprotected, undeveloped areas that have been reduced by approximately 40% from their original extent. Due to this conversion of land to agricultural use, large amounts of spectacled bear habitat have been lost. This has fragmented their range and isolated populations in small areas, which might result in local extirpations in the long term. 
Therefore, the distribution of the species in the country is spread across numerous habitat patches, many of which are small. In popular culture The children's character Paddington Bear is a spectacled bear from Peru. Stephen Fry authored a book, Rescuing the Spectacled Bear: A Peruvian Diary, following two BBC programmes documenting his visit to Peru to participate in rescuing a pair of spectacled bears.
Biology and health sciences
Bears
Animals
1169469
https://en.wikipedia.org/wiki/AC%20adapter
AC adapter
An AC adapter or AC/DC adapter (also called a wall charger, power adapter, power brick, or wall wart) is a type of external power supply, often enclosed in a case similar to an AC plug. AC adapters deliver electric power to devices that lack internal components to draw voltage and power from mains power themselves. The internal circuitry of an external power supply is often very similar to the design that would be used for a built-in or internal supply. When used with battery-powered equipment, adapters typically charge the battery as well as powering the equipment. Aside from obviating the need for internal power supplies, adapters offer flexibility: a device can draw power from 120 VAC or 230 VAC mains, a vehicle battery, or an aircraft battery, just by using different adapters. Safety can be another advantage, as hazardous 120 or 240 volt mains power is transformed to a lower, safer voltage at the wall outlet before going into the appliance handled by the user. Modes of operation Originally, most AC/DC adapters were linear power supplies, containing a transformer to convert the mains electricity voltage to a lower voltage, a rectifier to convert it to pulsating DC, and a filter to smooth the pulsating waveform to DC, with residual ripple variations small enough to leave the powered device unaffected. The size and weight of the device were largely determined by the transformer, which in turn was determined by the power output and mains frequency. Ratings over a few watts made the devices too large and heavy to be physically supported by a wall outlet. The output voltage of these adapters varied with load; for equipment requiring a more stable voltage, linear voltage regulator circuitry was added. Losses in the transformer and the linear regulator were considerable; efficiency was relatively low, and significant power dissipated as heat even when not driving a load. Early in the twenty-first century, switched-mode power supplies (SMPSs) became almost ubiquitous for this purpose due to their compact size and light weight relative to their power output capability. Mains voltage is rectified to a high direct voltage driving a switching circuit, which contains a transformer operating at a high frequency and outputs direct current at the desired voltage. The high-frequency ripple is more easily filtered out than mains-frequency ripple. The high frequency allows the transformer to be small, which reduces its losses; and the switching regulator can be much more efficient than a linear regulator. The result is a much more efficient, smaller, and lighter device. Safety is ensured, as in the older linear circuit, because a transformer still provides galvanic isolation. A linear circuit must be designed for a specific, narrow range of input voltages (e.g., 220–240 VAC) and must use a transformer appropriate for the frequency (usually 50 or 60 Hz), but a switched-mode supply can work efficiently over a very wide range of voltages and frequencies; a single 100–240 VAC unit will handle almost any mains supply in the world. Many inexpensive switched-mode AC adapters do not implement adequate filtering and/or shielding for the electromagnetic interference they generate. The nature of these high speed, high-energy switching designs is such that when these preventative measures are not implemented, relatively high energy harmonics can be generated, and radiated, well into the radio portion of the spectrum. 
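The heat produced by a linear design follows directly from the voltage the regulator has to drop: dissipation is roughly the dropped voltage multiplied by the load current. The sketch below is illustrative only; the 9 V rail, 5 V output and 1 A load are hypothetical round numbers, not measurements of any particular adapter, and transformer and rectifier losses are ignored.

```python
def linear_dissipation(v_unregulated: float, v_out: float, i_load: float) -> tuple[float, float]:
    """Return (heat dissipated in the pass element, best-case efficiency) for an
    ideal linear regulator dropping v_unregulated down to v_out at i_load amps.
    Transformer and rectifier losses are ignored, so real efficiency is lower."""
    p_out = v_out * i_load
    p_loss = (v_unregulated - v_out) * i_load
    efficiency = p_out / (p_out + p_loss)
    return p_loss, efficiency

# Hypothetical example: a 9 V unregulated rail regulated down to 5 V at 1 A.
heat, eff = linear_dissipation(9.0, 5.0, 1.0)
print(f"{heat:.1f} W lost as heat, {eff:.0%} efficiency at best")  # 4.0 W, 56%
```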
The amount of RF energy typically decreases with frequency; so, for instance, interference in the medium wave (US AM) broadcast band in the one megahertz region may be strong, while interference with the FM broadcast band around 100 megahertz may be considerably less. Distance is a factor; the closer the interference is to a radio receiver, the more intense it will be. Even WiFi reception in the gigahertz range can be degraded if the receiving antennae are very close to a radiating AC adapter. Whether interference is coming from a specific AC adapter can be determined simply by unplugging the suspect adapter while observing the amount of interference received in the problem radio band. In a modern household or business environment, there may be multiple AC adapters in use; in such a case, unplug them all, then plug them back in one by one until the culprit or culprits are found. Advantages External AC adapters are widely used to power small or portable electronic devices. The advantages include: Safety – External power adapters can free product designers from worrying about some safety issues. Much of this style of equipment uses only voltages low enough not to be a safety hazard internally, although the power supply must out of necessity use dangerous mains voltage. If an external power supply is used (usually via a power connector, often of coaxial type), the equipment need not be designed with concern for hazardous voltages inside the enclosure. This is particularly relevant for equipment with lightweight cases which may break and expose internal electrical parts. Heat reduction – Heat reduces reliability and longevity of electronic components, and can cause sensitive circuits to become inaccurate or malfunction. A separate power supply removes a source of heat from the apparatus. Electrical noise reduction – Because radiated electrical noise falls off with the square of the distance, it is to the manufacturer's advantage to convert potentially noisy AC line power or automotive power to "clean", filtered DC in an external adapter, at a safe distance from noise-sensitive circuitry. Weight and size reduction – Removing power components and the mains connection plug from equipment powered by rechargeable batteries reduces the weight and size which must be carried. Ease of replacement – Power supplies are more prone to failure than other circuitry due to their exposure to power spikes and their internal generation of waste heat. External power supplies can be replaced quickly by a user without the need to have the powered device repaired. Configuration versatility – Externally powered electronic products can be used with different power sources as needed (e.g. 120 VAC, 240 VAC, 12 VDC, or external battery pack), for convenient use in the field, or when traveling. Simplified product inventory, distribution, and certification – An electronic product that is sold and used internationally must be powered from a wide range of power sources, and must meet product safety regulations in many jurisdictions, usually requiring expensive certification by national or regional safety agencies such as Underwriters Laboratories (UL) or TÜV. A single version of a device may be used in many markets, with the different power requirements met by different external power supplies, so that only one version of the device need be manufactured, stocked, and tested. If the design of the device is modified over time (a frequent occurrence), the power supply design itself need not be retested (and vice versa). 
Constant voltage is produced by a specific type of adapter used for computers and laptops. These types of adapters are commonly known as eliminators. Problems A survey of consumers showed widespread dissatisfaction with the cost, inconvenience, and wastefulness of the profusion of power adapters used by electronic devices. Efficiency The issue of inefficiency of some power supplies has become well known, with U.S. president George W. Bush referring in 2001 to such devices as "Energy Vampires". Legislation is being enacted in the EU and a number of U.S. states to reduce the level of energy wasted by some of these devices. Such initiatives include standby power and the One Watt Initiative. But others have argued that these inefficient devices are low-powered, e.g., devices that are used for small battery chargers, so even if they have a low efficiency, the amount of energy they waste is less than 1% of household consumption of electric energy. Considering the total efficiency of power supplies for small electronic equipment, the older mains-frequency linear transformer-based power supply was found in a 2002 report to have efficiencies from 20 to 75%, and have considerable energy loss even when powered up but not supplying power. Switched-mode power supplies (SMPSs) are much more efficient; a good design can be 80–90% efficient, and is also much smaller and lighter. In 2002 most external plug-in "wall wart" power adapters commonly used for low-power consumer electronics devices were of linear design, as well as supplies built into some equipment. External supplies are usually left plugged in even when not in use, and consume from a few watts to 35 watts of power in that state. The report concluded that about 32 billion kilowatt-hours (kWh) per year, about 1% of total electrical energy consumption, could be saved in the United States by replacing all linear power supplies (average efficiency 40–50%) with advanced switching designs (efficiency 80–90%), by replacing older switching supplies (efficiencies of less than 70%) with advanced designs (efficiency of at least 80%), and by reducing standby consumption of supplies to not more than 1 watt. Since the report was published, SMPSs have indeed replaced linear supplies to a great extent, even in wall warts. The 2002 report estimated that 6% of electrical energy used in the U.S. "flows through" power supplies (not counting only the wall warts). The website where the report was published said in 2010 that despite the spread of SMPSs, "today's power supplies consume at least 2% of all U.S. electricity production. More efficient power supply designs could cut that usage in half". Since wasted electrical energy is released as heat, an inefficient power supply is hot to the touch, as is one that wastes power without an electrical load. This waste heat is itself a problem in warm weather, since it may require additional air conditioning to prevent overheating, and even to remove the unwanted heat from large supplies. Universal power adapters External power adapters can fail, or can become separated from the product they are intended to power. Consequently, there is a market for replacement adapters. The replacement must match input and output voltages, match or exceed current capability, and be fitted with a matching connector. 
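The savings estimates above reduce to simple energy arithmetic: idle power multiplied by the hours the adapter stays plugged in, plus the efficiency gap multiplied by the energy actually delivered. The sketch below uses hypothetical round numbers (a 2 W idle draw, a 10 W load for four hours a day, and 50% versus 85% conversion efficiency); these are illustrative values, not figures taken from the report.

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_idle_energy_kwh(idle_watts: float) -> float:
    """Energy wasted per year by an adapter left plugged in with no load."""
    return idle_watts * HOURS_PER_YEAR / 1000.0

def annual_conversion_loss_kwh(load_watts: float, hours_loaded: float,
                               efficiency: float) -> float:
    """Energy lost as heat while actually powering a load at a given efficiency."""
    delivered_kwh = load_watts * hours_loaded / 1000.0
    drawn_kwh = delivered_kwh / efficiency
    return drawn_kwh - delivered_kwh

# Hypothetical adapter: 2 W idle draw all year, plus a 10 W load for 4 hours a day
# at 50% (older linear) versus 85% (switched-mode) efficiency.
idle = annual_idle_energy_kwh(2.0)                              # ~17.5 kWh
linear_loss = annual_conversion_loss_kwh(10.0, 4 * 365, 0.50)   # ~14.6 kWh
smps_loss = annual_conversion_loss_kwh(10.0, 4 * 365, 0.85)     # ~2.6 kWh
print(f"Idle waste: {idle:.1f} kWh/yr; linear loss: {linear_loss:.1f} kWh/yr; "
      f"SMPS loss: {smps_loss:.1f} kWh/yr")
```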
Many electrical products are poorly labeled with information concerning the power supply they require, so it is prudent to record the specifications of the original power supply in advance, to ease replacement if the original is later lost. Careful labeling of power adapters can also reduce the likelihood of a mixup which could cause equipment damage. Some "universal" replacement power supplies allow output voltage and polarity to be switched to match a range of equipment. With the advent of switch-mode supplies, adapters which can work with any voltage from 110 VAC to 240 VAC became widely available; previously either 100–120 VAC or 200–240 VAC versions were used. Adapters which can also be used with motor vehicle and aircraft power (see EmPower) are available. Four-way X connectors or six-way star connectors, also known as spider connectors, with multiple plug sizes and types are common on generic power supplies. Other replacement power supplies have arrangements for changing the power connector, with four to nine different alternatives available when purchased in a set. This allows many different configurations of AC adapters to be put together, without requiring soldering. Philmore and other competing brands offer similar AC adapters with interchangeable connectors. The label on a power supply may not be a reliable guide to the actual voltage it supplies under varying conditions. Many low-cost power supplies are "unregulated", in that their voltage can change considerably with load. If they are lightly loaded, they may put out much more than the nominal "name plate" voltage, which could damage the load. If they are heavily loaded, the output voltage may droop appreciably, in some cases well below the nominal label voltage even within the nominal rated current, causing the equipment being supplied to malfunction or be damaged. Supplies with linear (as against switched) regulators are heavy, bulky, and expensive. Modern switched-mode power supplies (SMPSs) are smaller, lighter, and more efficient. They put out a much more constant voltage than unregulated supplies as the input voltage and the load current vary. When introduced, their prices were high, but by the early 21st century the prices of switch-mode components had dropped to a degree which allowed even cheap supplies to use this technology, saving the cost of a larger and heavier mains-frequency transformer. Auto-sensing adapters Some universal adapters automatically set their output voltage and maximum current according to which of a range of interchangeable tips is fitted; tips are available to fit and supply appropriate power to many notebook computers and mobile devices. Different tips may use the same connector, but automatically supply different power; it is essential to use the right tip for the apparatus being powered, but no switch needs to be set correctly by the user. The advent of switch-mode power supplies has allowed adapters to work from any AC mains supply from 100 to 240 V with an appropriate plug; operation from standard 12 V DC vehicle and aircraft supplies can also be supported. With the appropriate adapter, accessories, and tips, a variety of equipment can be powered from almost any source of power. A "Green Plug" system has been proposed, based on USB technology, by which the consuming device would tell the external power supply what kind of power is needed. 
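The selection rules described above (match the output voltage, meet or exceed the current rating, and match the connector and polarity) can be expressed as a simple check. The sketch below is illustrative only; the field names and the 5% voltage tolerance are hypothetical choices, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class AdapterSpec:
    output_volts: float
    max_amps: float        # current the supply can deliver (or the device can draw)
    connector: str         # e.g. "5.5x2.1mm barrel"
    center_positive: bool

def is_suitable_replacement(device_needs: AdapterSpec, candidate: AdapterSpec,
                            volt_tolerance: float = 0.05) -> bool:
    """Voltage must match within a small tolerance, current capability must meet
    or exceed the device's requirement, and connector/polarity must match."""
    voltage_ok = abs(candidate.output_volts - device_needs.output_volts) \
                 <= volt_tolerance * device_needs.output_volts
    current_ok = candidate.max_amps >= device_needs.max_amps
    physical_ok = (candidate.connector == device_needs.connector and
                   candidate.center_positive == device_needs.center_positive)
    return voltage_ok and current_ok and physical_ok

# Hypothetical device needing 12 V at up to 1.5 A on a centre-positive barrel plug.
need = AdapterSpec(12.0, 1.5, "5.5x2.1mm barrel", True)
spare = AdapterSpec(12.0, 2.0, "5.5x2.1mm barrel", True)
print(is_suitable_replacement(need, spare))  # True: a higher current rating is fine
```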
Battery eliminator A battery eliminator is an adapter that allows a device designed for battery operation, such as a radio, to be operated from an AC outlet. All radios, except crystal sets, used inconvenient and messy vacuum tube batteries until the mid- to late-1920s. Battery eliminators that plugged into light sockets became very popular. Early commercial units were produced by the Edward S. Rogers, Sr. company in 1925 as a complement to its line of batteryless radios. Another early producer of battery eliminators was the Galvin Manufacturing Corporation (later known as Motorola), which was opened on September 25, 1928 by Paul Galvin and his brother Joseph E. Galvin. Eliminators became obsolete for radios after RCA introduced AC tubes in 1927, enabling receivers to plug into household power. The industry rapidly adopted AC tubes, and companies which had launched exclusively to manufacture battery eliminators, such as Philco, had to pivot quickly to radio manufacturing to remain in business. Laptop charger In early laptop computers, the power supply units were internal, as in desktop computers. To improve portability by saving space and reducing weight, power supply units were moved outside the case. When a laptop computer is operated while recharging, the integrated circuitry that controls the charging uses the power supply unit's remaining current capacity, allowing the device's components to be powered during use while charging continues at a constant rate. Use of USB The USB connector (and voltage) has emerged as a de facto standard in low-power AC adapters for many portable devices. In addition to serial digital data exchange, the USB standard also provides , up to ( over USB 3.0). Numerous accessory gadgets ("USB decorations") were designed to connect to USB only for DC power and not for data interchange. In March 2007, the USB Implementers Forum released the USB Battery Charging Specification, which defines "...limits as well as detection, control and reporting mechanisms to permit devices to draw current in excess of the USB 2.0 specification for charging ...". Electric fans, lamps, alarms, coffee warmers, battery chargers, and even toys have been designed to tap power from a USB connector. Plug-in adapters equipped with USB receptacles are widely available to convert mains power or automotive power to USB power. The trend towards more-compact electronic devices has driven a shift towards the micro-USB and mini-USB connectors, which are electrically compatible with the original USB connector but physically smaller. In 2012, a USB Power Delivery Specification was proposed to standardize delivery of up to 100 watts, suitable for devices such as laptop computers that usually depend on proprietary adapters. 
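Since USB power is simply the bus voltage multiplied by the allowed current, the per-port budgets can be worked out directly. The limits in the sketch below are the commonly cited figures for USB 2.0, USB 3.0 and USB Power Delivery; they are shown as illustrative defaults rather than a substitute for the specifications themselves.

```python
def usb_power_watts(volts: float, amps: float) -> float:
    """Power available from a USB port, P = V * I."""
    return volts * amps

# Commonly cited per-port limits (illustrative; consult the specifications).
profiles = {
    "USB 2.0 (5 V, 0.5 A)": usb_power_watts(5.0, 0.5),    # 2.5 W
    "USB 3.0 (5 V, 0.9 A)": usb_power_watts(5.0, 0.9),    # 4.5 W
    "USB PD  (20 V, 5 A)":  usb_power_watts(20.0, 5.0),   # 100 W
}
for name, watts in profiles.items():
    print(f"{name}: {watts:.1f} W")
```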
The European Union defined a Common external power supply for "hand-held data-enabled mobile phones" (smartphones) sold from 2010, intended to replace the many incompatible proprietary power supplies and eliminate waste by reducing the total number of supplies manufactured. Conformant supplies deliver 5 VDC via a micro-USB connector, with preferred input voltage handled ranging from 90 to 264 VAC. In 2006 Larry Page, a founder of Google, proposed a and up to standard for almost all equipment requiring an external converter, with new buildings fitted with wiring, making external AC-to-DC adapter circuitry unnecessary. IEC has created a standard for interchangeable laptop power supplies, IEC 62700 (full name "IEC Technical Specification 62700: DC Power supply for notebook computer"), which was published on February 6, 2014.
Technology
Consumer electronics
null
1170166
https://en.wikipedia.org/wiki/Chirality%20%28chemistry%29
Chirality (chemistry)
In chemistry, a molecule or ion is called chiral if it cannot be superposed on its mirror image by any combination of rotations, translations, and some conformational changes. This geometric property is called chirality. The terms are derived from Ancient Greek cheir, 'hand', the hand being the canonical example of an object with this property. A chiral molecule or ion exists in two stereoisomers that are mirror images of each other, called enantiomers; they are often distinguished as either "right-handed" or "left-handed" by their absolute configuration or some other criterion. The two enantiomers have the same chemical properties, except when reacting with other chiral compounds. They also have the same physical properties, except that they often have opposite optical activities. A homogeneous mixture of the two enantiomers in equal parts is said to be racemic, and it usually differs chemically and physically from the pure enantiomers. Chiral molecules will usually have a stereogenic element from which chirality arises. The most common type of stereogenic element is a stereogenic center, or stereocenter. In the case of organic compounds, stereocenters most frequently take the form of a carbon atom with four distinct (different) groups attached to it in a tetrahedral geometry. Less commonly, other atoms like N, P, S, and Si can also serve as stereocenters, provided they have four distinct substituents (including lone pair electrons) attached to them. A given stereocenter has two possible configurations (R and S), which give rise to stereoisomers (diastereomers and enantiomers) in molecules with one or more stereocenters. For a chiral molecule with one or more stereocenters, the enantiomer corresponds to the stereoisomer in which every stereocenter has the opposite configuration. An organic compound with only one stereogenic carbon is always chiral. On the other hand, an organic compound with multiple stereogenic carbons is typically, but not always, chiral. In particular, if the stereocenters are configured in such a way that the molecule can take a conformation having a plane of symmetry or an inversion point, then the molecule is achiral and is known as a meso compound. Molecules with chirality arising from one or more stereocenters are classified as possessing central chirality. There are two other types of stereogenic elements that can give rise to chirality: a stereogenic axis (axial chirality) and a stereogenic plane (planar chirality). Finally, the inherent curvature of a molecule can also give rise to chirality (inherent chirality). These types of chirality are far less common than central chirality. BINOL is a typical example of an axially chiral molecule, while trans-cyclooctene is a commonly cited example of a planar chiral molecule. Finally, helicene possesses helical chirality, which is one type of inherent chirality. Chirality is an important concept for stereochemistry and biochemistry. Most substances relevant to biology are chiral, such as carbohydrates (sugars, starch, and cellulose), all but one of the amino acids that are the building blocks of proteins, and the nucleic acids. Naturally occurring triglycerides are often chiral, but not always. In living organisms, one typically finds only one of the two enantiomers of a chiral compound. For that reason, organisms that consume a chiral compound usually can metabolize only one of its enantiomers. For the same reason, the two enantiomers of a chiral pharmaceutical usually have vastly different potencies or effects. 
Definition The chirality of a molecule is based on the molecular symmetry of its conformations. A conformation of a molecule is chiral if and only if it belongs to the Cn, Dn, T, O, I point groups (the chiral point groups). However, whether the molecule itself is considered to be chiral depends on whether its chiral conformations are persistent isomers that could be isolated as separated enantiomers, at least in principle, or the enantiomeric conformers rapidly interconvert at a given temperature and timescale through low-energy conformational changes (rendering the molecule achiral). For example, despite having chiral gauche conformers that belong to the C2 point group, butane is considered achiral at room temperature because rotation about the central C–C bond rapidly interconverts the enantiomers (3.4 kcal/mol barrier). Similarly, cis-1,2-dichlorocyclohexane consists of chair conformers that are nonidentical mirror images, but the two can interconvert via the cyclohexane chair flip (~10 kcal/mol barrier). As another example, amines with three distinct substituents (R1R2R3N:) are also regarded as achiral molecules because their enantiomeric pyramidal conformers rapidly undergo pyramidal inversion. However, if the temperature in question is low enough, the process that interconverts the enantiomeric chiral conformations becomes slow compared to a given timescale. The molecule would then be considered to be chiral at that temperature. The relevant timescale is, to some degree, arbitrarily defined: 1000 seconds is sometimes employed, as this is regarded as the lower limit for the amount of time required for chemical or chromatographic separation of enantiomers in a practical sense. Molecules that are chiral at room temperature due to restricted rotation about a single bond (barrier to rotation ≥ ca. 23 kcal/mol) are said to exhibit atropisomerism. A chiral compound can contain no improper axis of rotation (Sn), which includes planes of symmetry and inversion center. Chiral molecules are always dissymmetric (lacking Sn) but not always asymmetric (lacking all symmetry elements except the trivial identity). Asymmetric molecules are always chiral. The following table shows some examples of chiral and achiral molecules, with the Schoenflies notation of the point group of the molecule. In the achiral molecules, X and Y (with no subscript) represent achiral groups, whereas X and X or Y and Y represent enantiomers. Note that there is no meaning to the orientation of an S axis, which is just an inversion. Any orientation will do, so long as it passes through the center of inversion. Also note that higher symmetries of chiral and achiral molecules also exist, and symmetries that do not include those in the table, such as the chiral C or the achiral S. An example of a molecule that does not have a mirror plane or an inversion and yet would be considered achiral is 1,1-difluoro-2,2-dichlorocyclohexane (or 1,1-difluoro-3,3-dichlorocyclohexane). This may exist in many conformers (conformational isomers), but none of them has a mirror plane. In order to have a mirror plane, the cyclohexane ring would have to be flat, widening the bond angles and giving the conformation a very high energy. This compound would not be considered chiral because the chiral conformers interconvert easily. 
An achiral molecule having chiral conformations could theoretically form a mixture of right-handed and left-handed crystals, as often happens with racemic mixtures of chiral molecules (see Chiral resolution#Spontaneous resolution and related specialized techniques), or as when achiral liquid silicon dioxide is cooled to the point of becoming chiral quartz. Stereogenic centers A stereogenic center (or stereocenter) is an atom such that swapping the positions of two ligands (connected groups) on that atom results in a molecule that is stereoisomeric to the original. For example, a common case is a tetrahedral carbon bonded to four distinct groups a, b, c, and d (Cabcd), where swapping any two groups (e.g., Cbacd) leads to a stereoisomer of the original, so the central C is a stereocenter. Many chiral molecules have point chirality, namely a single chiral stereogenic center that coincides with an atom. This stereogenic center usually has four or more bonds to different groups, and may be carbon (as in many biological molecules), phosphorus (as in many organophosphates), silicon, or a metal (as in many chiral coordination compounds). However, a stereogenic center can also be a trivalent atom whose bonds are not in the same plane, such as phosphorus in P-chiral phosphines (PRR′R″) and sulfur in S-chiral sulfoxides (OSRR′), because a lone-pair of electrons is present instead of a fourth bond. Similarly, a stereogenic axis (or plane) is defined as an axis (or plane) in the molecule such that the swapping of any two ligands attached to the axis (or plane) gives rise to a stereoisomer. For instance, the C2-symmetric species 1,1′-bi-2-naphthol (BINOL) and 1,3-dichloroallene have stereogenic axes and exhibit axial chirality, while (E)-cyclooctene and many ferrocene derivatives bearing two or more substituents have stereogenic planes and exhibit planar chirality. Chirality can also arise from isotopic differences between atoms, such as in the deuterated benzyl alcohol PhCHDOH; which is chiral and optically active ([α]D = 0.715°), even though the non-deuterated compound PhCH2OH is not. If two enantiomers easily interconvert, the pure enantiomers may be practically impossible to separate, and only the racemic mixture is observable. This is the case, for example, of most amines with three different substituents (NRR′R″), because of the low energy barrier for nitrogen inversion. When the optical rotation for an enantiomer is too low for practical measurement, the species is said to exhibit cryptochirality. Chirality is an intrinsic part of the identity of a molecule, so the systematic name includes details of the absolute configuration (R/S, D/L, or other designations). Manifestations of chirality Flavor: the artificial sweetener aspartame has two enantiomers. L-aspartame tastes sweet whereas D-aspartame is tasteless. Odor: R-(–)-carvone smells like spearmint whereas S-(+)-carvone smells like caraway. Drug effectiveness: the antidepressant drug citalopram is sold as a racemic mixture. However, studies have shown that only the (S)-(+) enantiomer (escitalopram) is responsible for the drug's beneficial effects. Drug safety: D‑penicillamine is used in chelation therapy and for the treatment of rheumatoid arthritis whereas L‑penicillamine is toxic as it inhibits the action of pyridoxine, an essential B vitamin. In biochemistry Many biologically active molecules are chiral, including the naturally occurring amino acids (the building blocks of proteins) and sugars. 
The origin of this homochirality in biology is the subject of much debate. Most scientists believe that Earth life's "choice" of chirality was purely random, and that if carbon-based life forms exist elsewhere in the universe, their chemistry could theoretically have opposite chirality. However, there is some suggestion that early amino acids could have formed in comet dust. In this case, circularly polarised radiation (which makes up 17% of stellar radiation) could have caused the selective destruction of one chirality of amino acids, leading to a selection bias which ultimately resulted in all life on Earth being homochiral. Enzymes, which are chiral, often distinguish between the two enantiomers of a chiral substrate. One could imagine an enzyme as having a glove-like cavity that binds a substrate. If this glove is right-handed, then one enantiomer will fit inside and be bound, whereas the other enantiomer will have a poor fit and is unlikely to bind. -forms of amino acids tend to be tasteless, whereas -forms tend to taste sweet. Spearmint leaves contain the -enantiomer of the chemical carvone or R-(−)-carvone and caraway seeds contain the -enantiomer or S-(+)-carvone. The two smell different to most people because our olfactory receptors are chiral. Chirality is important in context of ordered phases as well, for example the addition of a small amount of an optically active molecule to a nematic phase (a phase that has long range orientational order of molecules) transforms that phase to a chiral nematic phase (or cholesteric phase). Chirality in context of such phases in polymeric fluids has also been studied in this context. In inorganic chemistry Chirality is a symmetry property, not a property of any part of the periodic table. Thus many inorganic materials, molecules, and ions are chiral. Quartz is an example from the mineral kingdom. Such noncentric materials are of interest for applications in nonlinear optics. In the areas of coordination chemistry and organometallic chemistry, chirality is pervasive and of practical importance. A famous example is tris(bipyridine)ruthenium(II) complex in which the three bipyridine ligands adopt a chiral propeller-like arrangement. The two enantiomers of complexes such as [Ru(2,2′-bipyridine)3]2+ may be designated as Λ (capital lambda, the Greek version of "L") for a left-handed twist of the propeller described by the ligands, and Δ (capital delta, Greek "D") for a right-handed twist (pictured). dextro- and levo-rotation (the clockwise and counterclockwise optical rotation of plane-polarized light) uses similar notation, but shouldn't be confused. Chiral ligands confer chirality to a metal complex, as illustrated by metal-amino acid complexes. If the metal exhibits catalytic properties, its combination with a chiral ligand is the basis of asymmetric catalysis. Methods and practices The term optical activity is derived from the interaction of chiral materials with polarized light. In a solution, the (−)-form, or levorotatory form, of an optical isomer rotates the plane of a beam of linearly polarized light counterclockwise. The (+)-form, or dextrorotatory form, of an optical isomer does the opposite. The rotation of light is measured using a polarimeter and is expressed as the optical rotation. Enantiomers can be separated by chiral resolution. 
This often involves forming crystals of a salt composed of one of the enantiomers and an acid or base from the so-called chiral pool of naturally occurring chiral compounds, such as malic acid or the amine brucine. Some racemic mixtures spontaneously crystallize into right-handed and left-handed crystals that can be separated by hand. Louis Pasteur used this method to separate left-handed and right-handed sodium ammonium tartrate crystals in 1849. Sometimes it is possible to seed a racemic solution with a right-handed and a left-handed crystal so that each will grow into a large crystal. Liquid chromatography (HPLC and TLC) may also be used as an analytical method for the direct separation of enantiomers and the control of enantiomeric purity, e.g. of active pharmaceutical ingredients (APIs) that are chiral. Miscellaneous nomenclature Any non-racemic chiral substance is called scalemic. Scalemic materials can be enantiopure or enantioenriched. A chiral substance is enantiopure when only one of two possible enantiomers is present so that all molecules within a sample have the same chirality sense. Use of homochiral as a synonym is strongly discouraged. A chiral substance is enantioenriched or heterochiral when its enantiomeric ratio is greater than 50:50 but less than 100:0. Enantiomeric excess, or e.e., is the difference between the percentages of the two enantiomers present in a sample (a short worked example is given at the end of this article). For example, a sample with 40% e.e. of R contains 70% R and 30% S (70% − 30% = 40%). History The rotation of plane polarized light by chiral substances was first observed by Jean-Baptiste Biot in 1812, and gained considerable importance in the sugar industry, analytical chemistry, and pharmaceuticals. Louis Pasteur deduced in 1848 that this phenomenon has a molecular basis. The term chirality itself was coined by Lord Kelvin in 1894. Different enantiomers or diastereomers of a compound were formerly called optical isomers due to their different optical properties. At one time, chirality was thought to be restricted to organic chemistry, but this misconception was overthrown by the resolution of a purely inorganic compound, a cobalt complex called hexol, by Alfred Werner in 1911. In the early 1970s, various groups established that the human olfactory organ is capable of distinguishing chiral compounds.
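The enantiomeric-excess arithmetic above can be spelled out in a few lines of Python (a minimal sketch; the function names are illustrative).

def enantiomeric_excess(major_fraction):
    # e.e. = (fraction of major enantiomer) - (fraction of minor enantiomer), in percent.
    minor_fraction = 1.0 - major_fraction
    return 100.0 * (major_fraction - minor_fraction)

def composition_from_ee(ee_percent):
    # Inverse relation: x% e.e. corresponds to (50 + x/2)% major and (50 - x/2)% minor.
    major = 50.0 + ee_percent / 2.0
    return major, 100.0 - major

print(enantiomeric_excess(0.70))   # 40.0  -> 70% R and 30% S give 40% e.e. of R
print(composition_from_ee(40.0))   # (70.0, 30.0)
print(enantiomeric_excess(0.50))   # 0.0   -> racemic
print(enantiomeric_excess(1.00))   # 100.0 -> enantiopure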
Chirality (physics)
A chiral phenomenon is one that is not identical to its mirror image (see the article on mathematical chirality). The spin of a particle may be used to define a handedness, or helicity, for that particle, which, in the case of a massless particle, is the same as chirality. A symmetry transformation between the two is called parity transformation. Invariance under parity transformation by a Dirac fermion is called chiral symmetry. Chirality and helicity The helicity of a particle is positive ("right-handed") if the direction of its spin is the same as the direction of its motion. It is negative ("left-handed") if the directions of spin and motion are opposite. So a standard clock, with its spin vector defined by the rotation of its hands, has left-handed helicity if tossed with its face directed forwards. Mathematically, helicity is the sign of the projection of the spin vector onto the momentum vector: "left" is negative, "right" is positive. The chirality of a particle is more abstract: It is determined by whether the particle transforms in a right- or left-handed representation of the Poincaré group. For massless particles – photons, gluons, and (hypothetical) gravitons – chirality is the same as helicity; a given massless particle appears to spin in the same direction along its axis of motion regardless of point of view of the observer. For massive particles – such as electrons, quarks, and neutrinos – chirality and helicity must be distinguished: In the case of these particles, it is possible for an observer to change to a reference frame moving faster than the spinning particle, in which case the particle will then appear to move backwards, and its helicity (which may be thought of as "apparent chirality") will be reversed. That is, helicity is a constant of motion, but it is not Lorentz invariant. Chirality is Lorentz invariant, but is not a constant of motion: a massive left-handed spinor, when propagating, will evolve into a right handed spinor over time, and vice versa. A massless particle moves with the speed of light, so no real observer (who must always travel at less than the speed of light) can be in any reference frame where the particle appears to reverse its relative direction of spin, meaning that all real observers see the same helicity. Because of this, the direction of spin of massless particles is not affected by a change of inertial reference frame (a Lorentz boost) in the direction of motion of the particle, and the sign of the projection (helicity) is fixed for all reference frames: The helicity of massless particles is a relativistic invariant (a quantity whose value is the same in all inertial reference frames) which always matches the massless particle's chirality. The discovery of neutrino oscillation implies that neutrinos have mass, so the photon is the only confirmed massless particle; gluons are expected to also be massless, although this has not been conclusively tested. Hence, these are the only two particles now known for which helicity could be identical to chirality, and only the photon has been confirmed by measurement. All other observed particles have mass and thus may have different helicities in different reference frames. Chiral theories Particle physicists have only observed or inferred left-chiral fermions and right-chiral antifermions engaging in the charged weak interaction. In the case of the weak interaction, which can in principle engage with both left- and right-chiral fermions, only two left-handed fermions interact. 
Interactions involving right-handed or opposite-handed fermions have not been shown to occur, implying that the universe has a preference for left-handed chirality. This preferential treatment of one chiral realization over another violates parity, as first noted by Chien Shiung Wu in her famous experiment known as the Wu experiment. This is a striking observation, since parity is a symmetry that holds for all other fundamental interactions. Chirality for a Dirac fermion ψ is defined through the operator γ5, which has eigenvalues ±1; the eigenvalue's sign is equal to the particle's chirality: +1 for right-handed, −1 for left-handed. Any Dirac field can thus be projected into its left- or right-handed component by acting with the projection operators (1 − γ5)/2 or (1 + γ5)/2 on ψ. The coupling of the charged weak interaction to fermions is proportional to the first projection operator, which is responsible for this interaction's parity symmetry violation. A common source of confusion is due to conflating the γ5 chirality operator with the helicity operator. Since the helicity of massive particles is frame-dependent, it might seem that the same particle would interact with the weak force according to one frame of reference, but not another. The resolution to this paradox is that the chirality operator is equivalent to helicity only for massless fields, for which helicity is not frame-dependent. By contrast, for massive particles chirality is not the same as helicity; helicity is not Lorentz invariant but chirality is, so there is no frame dependence of the weak interaction: a particle that couples to the weak force in one frame does so in every frame. A theory that is asymmetric with respect to chiralities is called a chiral theory, while a non-chiral (i.e., parity-symmetric) theory is sometimes called a vector theory. Many pieces of the Standard Model of physics are non-chiral, which is traceable to anomaly cancellation in chiral theories. Quantum chromodynamics is an example of a vector theory, since both chiralities of all quarks appear in the theory, and couple to gluons in the same way. The electroweak theory, developed in the mid 20th century, is an example of a chiral theory. Originally, it assumed that neutrinos were massless, and assumed the existence of only left-handed neutrinos and right-handed antineutrinos. After the observation of neutrino oscillations, which imply that neutrinos are massive (like all other fermions), the revised theories of the electroweak interaction now include both right- and left-handed neutrinos. However, it is still a chiral theory, as it does not respect parity symmetry. The exact nature of the neutrino is still unsettled and so the electroweak theories that have been proposed are somewhat different, but most accommodate the chirality of neutrinos in the same way as was already done for all other fermions. Chiral symmetry Vector gauge theories with massless Dirac fermion fields ψ exhibit chiral symmetry, i.e., rotating the left-handed and the right-handed components independently makes no difference to the theory. We can write this as the action of rotation on the fields: ψ_L → e^(iθ_L) ψ_L and ψ_R → ψ_R, or ψ_L → ψ_L and ψ_R → e^(iθ_R) ψ_R. With N flavors, we have unitary rotations instead: U(N)_L × U(N)_R. More generally, we write the right-handed and left-handed states as a projection operator acting on a spinor. The right-handed and left-handed projection operators are P_R = (1 + γ5)/2 and P_L = (1 − γ5)/2, respectively. Massive fermions do not exhibit chiral symmetry, as the mass term in the Lagrangian, m ψ̄ψ = m (ψ̄_L ψ_R + ψ̄_R ψ_L), couples the two chiralities and breaks chiral symmetry explicitly. Spontaneous chiral symmetry breaking may also occur in some theories, as it most notably does in quantum chromodynamics.
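The projector algebra just described can be verified numerically. The sketch below builds the gamma matrices in the standard Dirac representation (a conventional basis choice assumed here, not prescribed by the text) and checks that γ5 has eigenvalues ±1 and that (1 ∓ γ5)/2 behave as left/right chirality projectors.

import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[I2, Z2], [Z2, -I2]])                  # Dirac representation
gammas = [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

# gamma5 = i * gamma0 gamma1 gamma2 gamma3; its eigenvalues are the chiralities +-1.
gamma5 = 1j * gamma0 @ gammas[0] @ gammas[1] @ gammas[2]
print(np.round(np.linalg.eigvals(gamma5).real, 12))       # two +1 and two -1

P_L = 0.5 * (np.eye(4) - gamma5)                          # left-handed projector
P_R = 0.5 * (np.eye(4) + gamma5)                          # right-handed projector

assert np.allclose(P_L @ P_L, P_L)                        # idempotent
assert np.allclose(P_R @ P_R, P_R)
assert np.allclose(P_L @ P_R, np.zeros((4, 4)))           # mutually orthogonal
assert np.allclose(P_L + P_R, np.eye(4))                  # complete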
The chiral symmetry transformation can be divided into a component that treats the left-handed and the right-handed parts equally, known as vector symmetry, and a component that actually treats them differently, known as axial symmetry. (cf. Current algebra.) A scalar field model encoding chiral symmetry and its breaking is the chiral model. The most common application is expressed as equal treatment of clockwise and counter-clockwise rotations from a fixed frame of reference. The general principle is often referred to by the name chiral symmetry. This rule holds exactly in the classical mechanics of Newton and Einstein, but results from quantum mechanical experiments show a difference in the behavior of left-chiral versus right-chiral subatomic particles. Example: u and d quarks in QCD Consider quantum chromodynamics (QCD) with two massless quarks u and d (massive fermions do not exhibit chiral symmetry). The Lagrangian reads L = ū iγ^μ D_μ u + d̄ iγ^μ D_μ d. In terms of left-handed and right-handed spinors, it reads L = ū_L iγ^μ D_μ u_L + ū_R iγ^μ D_μ u_R + d̄_L iγ^μ D_μ d_L + d̄_R iγ^μ D_μ d_R. (Here, i is the imaginary unit and γ^μ D_μ the Dirac operator.) Defining the flavor doublet q = (u, d), it can be written as L = q̄_L iγ^μ D_μ q_L + q̄_R iγ^μ D_μ q_R. The Lagrangian is unchanged under a rotation of q_L by any 2×2 unitary matrix L, and of q_R by any 2×2 unitary matrix R. This symmetry of the Lagrangian is called flavor chiral symmetry, and denoted as U(2)_L × U(2)_R. It decomposes into SU(2)_L × SU(2)_R × U(1)_V × U(1)_A. The singlet vector symmetry, U(1)_V, acts as q_L → e^(iα) q_L and q_R → e^(iα) q_R, the same phase on both chiralities. This corresponds to baryon number conservation. The singlet axial group, U(1)_A, transforms as the following global transformation: q_L → e^(iα) q_L and q_R → e^(−iα) q_R. However, it does not correspond to a conserved quantity, because the associated axial current is not conserved. It is explicitly violated by a quantum anomaly. The remaining chiral symmetry SU(2)_L × SU(2)_R turns out to be spontaneously broken by a quark condensate formed through nonperturbative action of QCD gluons, into the diagonal vector subgroup SU(2)_V known as isospin. The Goldstone bosons corresponding to the three broken generators are the three pions. As a consequence, the effective theory of QCD bound states like the baryons must now include mass terms for them, ostensibly disallowed by unbroken chiral symmetry. Thus, this chiral symmetry breaking induces the bulk of hadron masses, such as those for the nucleons — in effect, the bulk of the mass of all visible matter. In the real world, because of the nonvanishing and differing masses of the quarks, SU(2)_L × SU(2)_R is only an approximate symmetry to begin with, and therefore the pions are not massless, but have small masses: they are pseudo-Goldstone bosons. More flavors For more "light" quark species, N flavors in general, the corresponding chiral symmetries are U(N)_L × U(N)_R, decomposing into SU(N)_L × SU(N)_R × U(1)_V × U(1)_A and exhibiting a very analogous chiral symmetry breaking pattern. Most usually, N = 3 is taken, the u, d, and s quarks taken to be light (the eightfold way), and thus approximately massless for the symmetry to be meaningful to a lowest order, while the other three quarks are sufficiently heavy to barely have a residual chiral symmetry be visible for practical purposes. An application in particle physics In theoretical physics, the electroweak model breaks parity maximally. All its fermions are chiral Weyl fermions, which means that the charged weak gauge bosons W+ and W− only couple to left-handed quarks and leptons. Some theorists found this objectionable, and so conjectured a GUT extension of the weak force which has new, high energy W′ and Z′ bosons, which do couple with right-handed quarks and leptons: the electroweak group SU(2)_W × U(1)_Y is extended to SU(2)_L × SU(2)_R × U(1)_{B−L}. Here, SU(2)_L (pronounced "SU(2) left") is the SU(2)_W from above, while B − L is the baryon number minus the lepton number.
The electric charge formula in this model is given by Q = T_{3L} + T_{3R} + (B − L)/2, where T_{3L} and T_{3R} are the left and right weak isospin values of the fields in the theory. There is also the chromodynamic SU(3)_C. The idea was to restore parity by introducing a left-right symmetry. This is a group extension of ℤ_2 (the left-right symmetry) by SU(3)_C × SU(2)_L × SU(2)_R × U(1)_{B−L} to the semidirect product (SU(3)_C × SU(2)_L × SU(2)_R × U(1)_{B−L}) ⋊ ℤ_2. This has two connected components, where ℤ_2 acts as an automorphism, which is the composition of an involutive outer automorphism of SU(3)_C with the interchange of the left and right copies of SU(2) and with the reversal of U(1)_{B−L}. It was shown by Mohapatra & Senjanovic (1975) that left-right symmetry can be spontaneously broken to give a chiral low energy theory, which is the Standard Model of Glashow, Weinberg, and Salam, and also connects the small observed neutrino masses to the breaking of left-right symmetry via the seesaw mechanism. In this setting, the left- and right-chiral quarks are unified into an irreducible representation ("irrep") of the enlarged gauge group, and the leptons are likewise unified into an irreducible representation. Suitable Higgs multiplets are introduced to implement the breaking of left-right symmetry down to the Standard Model. This then provides three sterile neutrinos which are perfectly consistent with neutrino oscillation data. Within the seesaw mechanism, the sterile neutrinos become superheavy without affecting physics at low energies. Because the left–right symmetry is spontaneously broken, left–right models predict domain walls. This left-right symmetry idea first appeared in the Pati–Salam model (1974) and Mohapatra–Pati models (1975).
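As a quick arithmetic check of the charge formula above, the script below evaluates Q = T_{3L} + T_{3R} + (B − L)/2 for one generation of fermions; the quantum-number assignments are the standard ones used in left-right symmetric models and are supplied here as an assumption rather than taken from the text.

fermions = {
    #  name     (T3L,   T3R,   B-L)
    "u_L":     (+0.5,   0.0,  +1/3),
    "d_L":     (-0.5,   0.0,  +1/3),
    "u_R":     ( 0.0,  +0.5,  +1/3),
    "d_R":     ( 0.0,  -0.5,  +1/3),
    "nu_L":    (+0.5,   0.0,  -1.0),
    "e_L":     (-0.5,   0.0,  -1.0),
    "nu_R":    ( 0.0,  +0.5,  -1.0),
    "e_R":     ( 0.0,  -0.5,  -1.0),
}

for name, (t3l, t3r, b_minus_l) in fermions.items():
    q = t3l + t3r + b_minus_l / 2.0
    print(f"{name:5s} Q = {q:+.3f}")
# Prints +2/3 for u, -1/3 for d, 0 for neutrinos and -1 for charged leptons,
# with equal charges for the left- and right-handed member of each pair.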
Finite element method
Finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. Computers are usually used to perform the calculations required. With high-speed supercomputers, better solutions can be achieved and are often required to solve the largest and most complex problems. FEM is a general numerical method for solving partial differential equations in two- or three-space variables (i.e., some boundary value problems). There are also studies about using FEM to solve high-dimensional problems. To solve a problem, FEM subdivides a large system into smaller, simpler parts called finite elements. This is achieved by a particular space discretization in the space dimensions, which is implemented by the construction of a mesh of the object: the numerical domain for the solution that has a finite number of points. FEM formulation of a boundary value problem finally results in a system of algebraic equations. The method approximates the unknown function over the domain. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then approximates a solution by minimizing an associated error function via the calculus of variations. Studying or analyzing a phenomenon with FEM is often referred to as finite element analysis (FEA). Basic concepts The subdivision of a whole domain into simpler parts has several advantages: Accurate representation of complex geometry; Inclusion of dissimilar material properties; Easy representation of the total solution; and Capture of local effects. A typical approach using the method involves the following: Step 1: Dividing the domain of the problem into a collection of subdomains, with each subdomain represented by a set of element equations for the original problem. Step 2: Systematically recombining all sets of element equations into a global system of equations for the final calculation. The global system of equations uses known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer. In the first step above, the element equations are simple equations that locally approximate the original complex equations to be studied, where the original equations are often partial differential equations (PDEs). To explain the approximation of this process, FEM is commonly introduced as a special case of the Galerkin method. The process, in mathematical language, is to construct an integral of the inner product of the residual and the weight functions; then, set the integral to zero. In simple terms, it is a procedure that minimizes the approximation error by fitting trial functions into the PDE. The residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. The process eliminates all the spatial derivatives from the PDE, thus approximating the PDE locally using the following: a set of algebraic equations for steady-state problems; and a set of ordinary differential equations for transient problems. These equation sets are element equations. They are linear if the underlying PDE is linear and vice versa. Algebraic equation sets that arise in the steady-state problems are solved using numerical linear algebraic methods. 
In contrast, ordinary differential equation sets that occur in the transient problems are solved by numerical integrations using standard techniques such as Euler's method or the Runge–Kutta method. In the second step above, a global system of equations is generated from the element equations by transforming coordinates from the subdomains' local nodes to the domain's global nodes. This spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is often carried out using FEM software with coordinate data generated from the subdomains. The practical application of FEM is known as finite element analysis (FEA). FEA, as applied in engineering, is a computational tool for performing engineering analysis. It includes the use of mesh generation techniques for dividing a complex problem into smaller elements, as well as the use of software coded with a FEM algorithm. When applying FEA, the complex problem is usually a physical system with the underlying physics, such as the Euler–Bernoulli beam equation, the heat equation, or the Navier–Stokes equations, expressed in either PDEs or integral equations, while the divided, smaller elements of the complex problem represent different areas in the physical system. FEA may be used for analyzing problems over complicated domains (e.g., cars and oil pipelines) when the domain changes (e.g., during a solid-state reaction with a moving boundary), when the desired precision varies over the entire domain, or when the solution lacks smoothness. FEA simulations provide a valuable resource, as they remove multiple instances of creating and testing complex prototypes for various high-fidelity situations. For example, in a frontal crash simulation, it is possible to increase prediction accuracy in important areas, like the front of the car, and reduce it in the rear of the car, thus reducing the cost of the simulation. Another example would be in numerical weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena, such as tropical cyclones in the atmosphere or eddies in the ocean, rather than relatively calm areas. A clear, detailed, and practical presentation of this approach can be found in the textbook The Finite Element Method for Engineers. History While it is difficult to quote the date of the invention of FEM, the method originated from the need to solve complex elasticity and structural analysis problems in civil and aeronautical engineering. Its development can be traced back to work by Alexander Hrennikoff and Richard Courant in the early 1940s. Another pioneer was Ioannis Argyris. In the USSR, the introduction of the practical application of FEM is usually connected with Leonard Oganesyan. It was also independently rediscovered in China by Feng Kang in the late 1950s and early 1960s, based on the computations of dam constructions, where it was called the "finite difference method" based on variation principles. Although the approaches used by these pioneers are different, they share one essential characteristic: the mesh discretization of a continuous domain into a set of discrete sub-domains, usually called elements. Hrennikoff's work discretizes the domain by using a lattice analogy, while Courant's approach divides the domain into finite triangular sub-regions to solve second-order elliptic partial differential equations that arise from the problem of the torsion of a cylinder. 
Courant's contribution was evolutionary, drawing on a large body of earlier results for PDEs developed by Lord Rayleigh, Walther Ritz, and Boris Galerkin. The application of FEM gained momentum in the 1960s and 1970s due to the developments of J. H. Argyris and his co-workers at the University of Stuttgart; R. W. Clough and his co-workers at University of California Berkeley; O. C. Zienkiewicz and his co-workers Ernest Hinton, Bruce Irons, and others at Swansea University; Philippe G. Ciarlet at the University of Paris 6; and Richard Gallagher and his co-workers at Cornell University. During this period, additional impetus was provided by the available open-source FEM programs. NASA sponsored the original version of NASTRAN. University of California Berkeley made the finite element programs SAP IV and, later, OpenSees widely available. In Norway, the ship classification society Det Norske Veritas (now DNV GL) developed Sesam in 1969 for use in the analysis of ships. A rigorous mathematical basis for FEM was provided in 1973 with a publication by Gilbert Strang and George Fix. The method has since been generalized for the numerical modeling of physical systems in a wide variety of engineering disciplines, such as electromagnetism, heat transfer, and fluid dynamics. Technical discussion The structure of finite element methods A finite element method is characterized by a variational formulation, a discretization strategy, one or more solution algorithms, and post-processing procedures. Examples of the variational formulation are the Galerkin method, the discontinuous Galerkin method, mixed methods, etc. A discretization strategy is understood to mean a clearly defined set of procedures that cover (a) the creation of finite element meshes, (b) the definition of basis functions on reference elements (also called shape functions), and (c) the mapping of reference elements onto the elements of the mesh. Examples of discretization strategies are the h-version, p-version, hp-version, x-FEM, isogeometric analysis, etc. Each discretization strategy has certain advantages and disadvantages. A reasonable criterion in selecting a discretization strategy is to realize nearly optimal performance for the broadest set of mathematical models in a particular model class. Various numerical solution algorithms can be classified into two broad categories: direct and iterative solvers. These algorithms are designed to exploit the sparsity of matrices that depend on the variational formulation and discretization strategy choices. Post-processing procedures are designed to extract the data of interest from a finite element solution. To meet the requirements of solution verification, postprocessors need to provide for a posteriori error estimation in terms of the quantities of interest. When the errors of approximation are larger than what is considered acceptable, then the discretization has to be changed either by an automated adaptive process or by the action of the analyst. Some very efficient postprocessors provide for the realization of superconvergence. Illustrative problems P1 and P2 The following two problems demonstrate the finite element method. P1 is a one-dimensional problem: find u such that u″(x) = f(x) on (0, 1) with u(0) = u(1) = 0, where f is given, u is an unknown function of x, and u″ is the second derivative of u with respect to x. P2 is a two-dimensional problem (Dirichlet problem): find u such that u_xx(x, y) + u_yy(x, y) = f(x, y) in Ω with u = 0 on the boundary of Ω, where Ω is a connected open region in the (x, y) plane whose boundary is nice (e.g., a smooth manifold or a polygon), and u_xx and u_yy denote the second derivatives with respect to x and y, respectively. (A small numerical sketch of the piecewise-linear discretization of P1 is given at the end of this article.)
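As a quick symbolic sanity check of the P1 statement as reconstructed above (u″ = f with homogeneous boundary values), the following sketch verifies a simple candidate solution and previews the antiderivative route discussed next; SymPy is assumed to be available and the names are illustrative.

import sympy as sp

x = sp.symbols("x")
u = x * (1 - x)                      # candidate satisfying u(0) = u(1) = 0

f = sp.diff(u, x, 2)                 # for u'' = f this gives the constant load f = -2
print(f, u.subs(x, 0), u.subs(x, 1))

# Solving P1 "directly by computing antiderivatives": integrate f twice and fix
# the two integration constants from the boundary conditions.
c1, c2 = sp.symbols("c1 c2")
u_direct = sp.integrate(sp.integrate(f, x), x) + c1 * x + c2
consts = sp.solve([u_direct.subs(x, 0), u_direct.subs(x, 1)], [c1, c2])
print(sp.expand(u_direct.subs(consts)))   # -x**2 + x, i.e. the original u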
The problem P1 can be solved directly by computing antiderivatives. However, this method of solving the boundary value problem (BVP) works only when there is one spatial dimension. It does not generalize to higher-dimensional problems or problems like . For this reason, we will develop the finite element method for P1 and outline its generalization to P2. Our explanation will proceed in two steps, which mirror two essential steps one must take to solve a boundary value problem (BVP) using the FEM. In the first step, one rephrases the original BVP in its weak form. Little to no computation is usually required for this step. The transformation is done by hand on paper. The second step is discretization, where the weak form is discretized in a finite-dimensional space. After this second step, we have concrete formulae for a large but finite-dimensional linear problem whose solution will approximately solve the original BVP. This finite-dimensional problem is then implemented on a computer. Weak formulation The first step is to convert P1 and P2 into their equivalent weak formulations. The weak form of P1 If solves P1, then for any smooth function that satisfies the displacement boundary conditions, i.e. at and , we have Conversely, if with satisfies (1) for every smooth function then one may show that this will solve P1. The proof is easier for twice continuously differentiable (mean value theorem) but may be proved in a distributional sense as well. We define a new operator or map by using integration by parts on the right-hand-side of (1): where we have used the assumption that . The weak form of P2 If we integrate by parts using a form of Green's identities, we see that if solves P2, then we may define for any by where denotes the gradient and denotes the dot product in the two-dimensional plane. Once more can be turned into an inner product on a suitable space of once differentiable functions of that are zero on . We have also assumed that (see Sobolev spaces). The existence and uniqueness of the solution can also be shown. A proof outline of the existence and uniqueness of the solution We can loosely think of to be the absolutely continuous functions of that are at and (see Sobolev spaces). Such functions are (weakly) once differentiable, and it turns out that the symmetric bilinear map then defines an inner product which turns into a Hilbert space (a detailed proof is nontrivial). On the other hand, the left-hand-side is also an inner product, this time on the Lp space . An application of the Riesz representation theorem for Hilbert spaces shows that there is a unique solving (2) and, therefore, P1. This solution is a-priori only a member of , but using elliptic regularity, will be smooth if is. Discretization P1 and P2 are ready to be discretized, which leads to a common sub-problem (3). The basic idea is to replace the infinite-dimensional linear problem: Find such that with a finite-dimensional version: where is a finite-dimensional subspace of . There are many possible choices for (one possibility leads to the spectral method). However, we take as a space of piecewise polynomial functions for the finite element method. For problem P1 We take the interval , choose values of with and we define by: where we define and . Observe that functions in are not differentiable according to the elementary definition of calculus. Indeed, if then the derivative is typically not defined at any , . 
However, the derivative exists at every other value of , and one can use this derivative for integration by parts. For problem P2 We need to be a set of functions of . In the figure on the right, we have illustrated a triangulation of a 15-sided polygonal region in the plane (below), and a piecewise linear function (above, in color) of this polygon which is linear on each triangle of the triangulation; the space would consist of functions that are linear on each triangle of the chosen triangulation. One hopes that as the underlying triangular mesh becomes finer and finer, the solution of the discrete problem (3) will, in some sense, converge to the solution of the original boundary value problem P2. To measure this mesh fineness, the triangulation is indexed by a real-valued parameter which one takes to be very small. This parameter will be related to the largest or average triangle size in the triangulation. As we refine the triangulation, the space of piecewise linear functions must also change with . For this reason, one often reads instead of in the literature. Since we do not perform such an analysis, we will not use this notation. Choosing a basis To complete the discretization, we must select a basis of . In the one-dimensional case, for each control point we will choose the piecewise linear function in whose value is at and zero at every , i.e., for ; this basis is a shifted and scaled tent function. For the two-dimensional case, we choose again one basis function per vertex of the triangulation of the planar region . The function is the unique function of whose value is at and zero at every . Depending on the author, the word "element" in the "finite element method" refers to the domain's triangles, the piecewise linear basis function, or both. So, for instance, an author interested in curved domains might replace the triangles with curved primitives and so might describe the elements as being curvilinear. On the other hand, some authors replace "piecewise linear" with "piecewise quadratic" or even "piecewise polynomial". The author might then say "higher order element" instead of "higher degree polynomial". The finite element method is not restricted to triangles (tetrahedra in 3-d or higher-order simplexes in multidimensional spaces). Still, it can be defined on quadrilateral subdomains (hexahedra, prisms, or pyramids in 3-d, and so on). Higher-order shapes (curvilinear elements) can be defined with polynomial and even non-polynomial shapes (e.g., ellipse or circle). Examples of methods that use higher degree piecewise polynomial basis functions are the hp-FEM and spectral FEM. More advanced implementations (adaptive finite element methods) utilize a method to assess the quality of the results (based on error estimation theory) and modify the mesh during the solution aiming to achieve an approximate solution within some bounds from the exact solution of the continuum problem. Mesh adaptivity may utilize various techniques; the most popular are: moving nodes (r-adaptivity) refining (and unrefined) elements (h-adaptivity) changing order of base functions (p-adaptivity) combinations of the above (hp-adaptivity). Small support of the basis The primary advantage of this choice of basis is that the inner products and will be zero for almost all . (The matrix containing in the location is known as the Gramian matrix.) In the one dimensional case, the support of is the interval . Hence, the integrands of and are identically zero whenever . 
Similarly, in the planar case, if and do not share an edge of the triangulation, then the integrals and are both zero. Matrix form of the problem If we write and then problem (3), taking for , becomes If we denote by and the column vectors and , and if we let and be matrices whose entries are and then we may rephrase (4) as It is not necessary to assume . For a general function , problem (3) with for becomes actually simpler, since no matrix is used, where and for . As we have discussed before, most of the entries of and are zero because the basis functions have small support. So we now have to solve a linear system in the unknown where most of the entries of the matrix , which we need to invert, are zero. Such matrices are known as sparse matrices, and there are efficient solvers for such problems (much more efficient than actually inverting the matrix.) In addition, is symmetric and positive definite, so a technique such as the conjugate gradient method is favored. For problems that are not too large, sparse LU decompositions and Cholesky decompositions still work well. For instance, MATLAB's backslash operator (which uses sparse LU, sparse Cholesky, and other factorization methods) can be sufficient for meshes with a hundred thousand vertices. The matrix is usually referred to as the stiffness matrix, while the matrix is dubbed the mass matrix. General form of the finite element method In general, the finite element method is characterized by the following process. One chooses a grid for . In the preceding treatment, the grid consisted of triangles, but one can also use squares or curvilinear polygons. Then, one chooses basis functions. We used piecewise linear basis functions in our discussion, but it is common to use piecewise polynomial basis functions. Separate consideration is the smoothness of the basis functions. For second-order elliptic boundary value problems, piecewise polynomial basis function that is merely continuous suffice (i.e., the derivatives are discontinuous.) For higher-order partial differential equations, one must use smoother basis functions. For instance, for a fourth-order problem such as , one may use piecewise quadratic basis functions that are . Another consideration is the relation of the finite-dimensional space to its infinite-dimensional counterpart in the examples above . A conforming element method is one in which space is a subspace of the element space for the continuous problem. The example above is such a method. If this condition is not satisfied, we obtain a nonconforming element method, an example of which is the space of piecewise linear functions over the mesh, which are continuous at each edge midpoint. Since these functions are generally discontinuous along the edges, this finite-dimensional space is not a subspace of the original . Typically, one has an algorithm for subdividing a given mesh. If the primary method for increasing precision is to subdivide the mesh, one has an h-method (h is customarily the diameter of the largest element in the mesh.) In this manner, if one shows that the error with a grid is bounded above by , for some and , then one has an order p method. Under specific hypotheses (for instance, if the domain is convex), a piecewise polynomial of order method will have an error of order . If instead of making h smaller, one increases the degree of the polynomials used in the basis function, one has a p-method. If one combines these two refinement types, one obtains an hp-method (hp-FEM). 
In the hp-FEM, the polynomial degrees can vary from element to element. High-order methods with large uniform p are called spectral finite element methods (SFEM). These are not to be confused with spectral methods. For vector partial differential equations, the basis functions may take values in . Various types of finite element methods AEM The Applied Element Method or AEM combines features of both FEM and Discrete element method or (DEM). A-FEM Yang and Lui introduced the Augmented-Finite Element Method, whose goal was to model the weak and strong discontinuities without needing extra DoFs, as PuM stated. CutFEM The Cut Finite Element Approach was developed in 2014. The approach is "to make the discretization as independent as possible of the geometric description and minimize the complexity of mesh generation, while retaining the accuracy and robustness of a standard finite element method." Generalized finite element method The generalized finite element method (GFEM) uses local spaces consisting of functions, not necessarily polynomials, that reflect the available information on the unknown solution and thus ensure good local approximation. Then a partition of unity is used to “bond” these spaces together to form the approximating subspace. The effectiveness of GFEM has been shown when applied to problems with domains having complicated boundaries, problems with micro-scales, and problems with boundary layers. Mixed finite element method The mixed finite element method is a type of finite element method in which extra independent variables are introduced as nodal variables during the discretization of a partial differential equation problem. Variable – polynomial The hp-FEM combines adaptively elements with variable size h and polynomial degree p to achieve exceptionally fast, exponential convergence rates. hpk-FEM The hpk-FEM combines adaptively elements with variable size h, polynomial degree of the local approximations p, and global differentiability of the local approximations (k-1) to achieve the best convergence rates. XFEM The extended finite element method (XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method by enriching the solution space for solutions to differential equations with discontinuous functions. Extended finite element methods enrich the approximation space to naturally reproduce the challenging feature associated with the problem of interest: the discontinuity, singularity, boundary layer, etc. It was shown that for some problems, such an embedding of the problem's feature into the approximation space can significantly improve convergence rates and accuracy. Moreover, treating problems with discontinuities with XFEMs suppresses the need to mesh and re-mesh the discontinuity surfaces, thus alleviating the computational costs and projection errors associated with conventional finite element methods at the cost of restricting the discontinuities to mesh edges. Several research codes implement this technique to various degrees: GetFEM++ xfem++ openxfem++ XFEM has also been implemented in codes like Altair Radios, ASTER, Morfeo, and Abaqus. It is increasingly being adopted by other commercial finite element software, with a few plugins and actual core implementations available (ANSYS, SAMCEF, OOFELIE, etc.). 
Scaled boundary finite element method (SBFEM) The introduction of the scaled boundary finite element method (SBFEM) came from Song and Wolf (1997). The SBFEM has been one of the most profitable contributions in the area of numerical analysis of fracture mechanics problems. It is a semi-analytical fundamental-solutionless method combining the advantages of finite element formulations and procedures and boundary element discretization. However, unlike the boundary element method, no fundamental differential solution is required. S-FEM The S-FEM, Smoothed Finite Element Methods, is a particular class of numerical simulation algorithms for the simulation of physical phenomena. It was developed by combining mesh-free methods with the finite element method. Spectral element method Spectral element methods combine the geometric flexibility of finite elements and the acute accuracy of spectral methods. Spectral methods are the approximate solution of weak-form partial equations based on high-order Lagrangian interpolants and used only with certain quadrature rules. Meshfree methods Discontinuous Galerkin methods Finite element limit analysis Stretched grid method Loubignac iteration Loubignac iteration is an iterative method in finite element methods. Crystal plasticity finite element method (CPFEM) The crystal plasticity finite element method (CPFEM) is an advanced numerical tool developed by Franz Roters. Metals can be regarded as crystal aggregates, which behave anisotropy under deformation, such as abnormal stress and strain localization. CPFEM, based on the slip (shear strain rate), can calculate dislocation, crystal orientation, and other texture information to consider crystal anisotropy during the routine. It has been applied in the numerical study of material deformation, surface roughness, fractures, etc. Virtual element method (VEM) The virtual element method (VEM), introduced by Beirão da Veiga et al. (2013) as an extension of mimetic finite difference (MFD) methods, is a generalization of the standard finite element method for arbitrary element geometries. This allows admission of general polygons (or polyhedra in 3D) that are highly irregular and non-convex in shape. The name virtual derives from the fact that knowledge of the local shape function basis is not required and is, in fact, never explicitly calculated. Link with the gradient discretization method Some types of finite element methods (conforming, nonconforming, mixed finite element methods) are particular cases of the gradient discretization method (GDM). Hence the convergence properties of the GDM, which are established for a series of problems (linear and nonlinear elliptic problems, linear, nonlinear, and degenerate parabolic problems), hold as well for these particular FEMs. Comparison to the finite difference method The finite difference method (FDM) is an alternative way of approximating solutions of PDEs. The differences between FEM and FDM are: The most attractive feature of the FEM is its ability to handle complicated geometries (and boundaries) with relative ease. While FDM in its basic form is restricted to handle rectangular shapes and simple alterations thereof, the handling of geometries in FEM is theoretically straightforward. FDM is not usually used for irregular CAD geometries but more often for rectangular or block-shaped models. FEM generally allows for more flexible mesh adaptivity than FDM. The most attractive feature of finite differences is that it is straightforward to implement. 
One could consider the FDM a particular case of the FEM approach in several ways. E.g., first-order FEM is identical to FDM for Poisson's equation if the problem is discretized by a regular rectangular mesh with each rectangle divided into two triangles. There are reasons to consider the mathematical foundation of the finite element approximation more sound, for instance, because the quality of the approximation between grid points is poor in FDM. The quality of a FEM approximation is often higher than in the corresponding FDM approach, but this is highly problem-dependent, and several examples to the contrary can be provided. Generally, FEM is the method of choice in all types of analysis in structural mechanics (i.e., solving for deformation and stresses in solid bodies or dynamics of structures). In contrast, computational fluid dynamics (CFD) tend to use FDM or other methods like finite volume method (FVM). CFD problems usually require discretization of the problem into a large number of cells/gridpoints (millions and more). Therefore the cost of the solution favors simpler, lower-order approximation within each cell. This is especially true for 'external flow' problems, like airflow around the car, airplane, or weather simulation. Finite element and fast fourier transform (FFT) methods Another method used for approximating solutions to a partial differential equation is the Fast Fourier Transform (FFT), where the solution is approximated by a fourier series computed using the FFT. For approximating the mechanical response of materials under stress, FFT is often much faster, but FEM may be more accurate. One example of the respective advantages of the two methods is in simulation of rolling a sheet of aluminum (an FCC metal), and drawing a wire of tungsten (a BCC metal). This simulation did not have a sophisticated shape update algorithm for the FFT method. In both cases, the FFT method was more than 10 times as fast as FEM, but in the wire drawing simulation, where there were large deformations in grains, the FEM method was much more accurate. In the sheet rolling simulation, the results of the two methods were similar. FFT has a larger speed advantage in cases where the boundary conditions are given in the materials strain, and loses some of its efficiency in cases where the stress is used to apply the boundary conditions, as more iterations of the method are needed. The FE and FFT methods can also be combined in a voxel based method (2) to simulate deformation in materials, where the FE method is used for the macroscale stress and deformation, and the FFT method is used on the microscale to deal with the effects of microscale on the mechanical response. Unlike FEM, FFT methods’ similarities to image processing methods means that an actual image of the microstructure from a microscope can be input to the solver to get a more accurate stress response. Using a real image with FFT avoids meshing the microstructure, which would be required if using FEM simulation of the microstructure, and might be difficult. Because fourier approximations are inherently periodic, FFT can only be used in cases of periodic microstructure, but this is common in real materials. FFT can also be combined with FEM methods by using fourier components as the variational basis for approximating the fields inside an element, which can take advantage of the speed of FFT based solvers. 
Application Various specializations under the umbrella of the mechanical engineering discipline (such as aeronautical, biomechanical, and automotive industries) commonly use integrated FEM in the design and development of their products. Several modern FEM packages include specific components such as thermal, electromagnetic, fluid, and structural working environments. In a structural simulation, FEM helps tremendously in producing stiffness and strength visualizations and minimizing weight, materials, and costs. FEM allows detailed visualization of where structures bend or twist, indicating the distribution of stresses and displacements. FEM software provides a wide range of simulation options for controlling the complexity of modeling and system analysis. Similarly, the desired level of accuracy required and associated computational time requirements can be managed simultaneously to address most engineering applications. FEM allows entire designs to be constructed, refined, and optimized before the design is manufactured. The mesh is an integral part of the model and must be controlled carefully to give the best results. Generally, the higher the number of elements in a mesh, the more accurate the solution of the discretized problem. However, there is a value at which the results converge, and further mesh refinement does not increase accuracy. This powerful design tool has significantly improved both the standard of engineering designs and the design process methodology in many industrial applications. The introduction of FEM has substantially decreased the time to take products from concept to the production line. Testing and development have been accelerated primarily through improved initial prototype designs using FEM. In summary, benefits of FEM include increased accuracy, enhanced design and better insight into critical design parameters, virtual prototyping, fewer hardware prototypes, a faster and less expensive design cycle, increased productivity, and increased revenue. In the 1990s FEM was proposed for use in stochastic modeling for numerically solving probability models and later for reliability assessment. FEM is widely applied for approximating differential equations that describe physical systems. This method is very popular in the community of Computational fluid dynamics, and there are many applications for solving Navier–Stokes equations with FEM. Recently, the application of FEM has been increasing in the researches of computational plasma. Promising numerical results using FEM for Magnetohydrodynamics, Vlasov equation, and Schrödinger equation have been proposed.
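To make the P1 discretization concrete, here is the minimal numerical sketch referred to earlier in the article. It assumes the u″ = f form of P1 with a uniform mesh of piecewise-linear "hat" functions, assembles the tridiagonal stiffness matrix and a lumped load vector, and solves the resulting linear system; the function and variable names are illustrative.

import numpy as np

def solve_p1(f, n):
    # Piecewise-linear FEM for P1:  u''(x) = f(x) on (0, 1),  u(0) = u(1) = 0,
    # on a uniform mesh with n interior nodes and a hat-function basis.
    h = 1.0 / (n + 1)
    nodes = np.linspace(0.0, 1.0, n + 2)
    interior = nodes[1:-1]

    # Stiffness matrix K_jk = integral of v_j' v_k' dx (tridiagonal for hat functions).
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

    # Load vector F_j = integral of f v_j dx, approximated by the lumped rule f(x_j) * h.
    F = f(interior) * h

    # Integration by parts on u'' = f gives the weak form -sum_k u_k K_jk = F_j,
    # i.e. the linear system K u = -F for the interior nodal values.
    u_interior = np.linalg.solve(K, -F)
    return nodes, np.concatenate(([0.0], u_interior, [0.0]))

# Test with the constant load f = -2, whose exact solution is u(x) = x (1 - x).
nodes, u_h = solve_p1(lambda x: -2.0 * np.ones_like(x), 20)
print(np.max(np.abs(u_h - nodes * (1.0 - nodes))))   # ~machine precision: nodal values are exact here

For larger meshes one would store the stiffness matrix in a sparse format and use a sparse factorization or the conjugate gradient method, as discussed in the matrix-form section above.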