972966
https://en.wikipedia.org/wiki/Gerridae
Gerridae
The Gerridae are a family of insects in the order Hemiptera, commonly known as water striders, water skeeters, water scooters, water bugs, pond skaters, water skippers, water gliders, water skimmers or puddle flies. Consistent with the classification of the Gerridae as true bugs (i.e., suborder Heteroptera), gerrids have mouthparts evolved for piercing and sucking, and distinguish themselves by having the unusual ability to walk on water, making them pleuston (surface-living) animals. They are anatomically built to transfer their weight to be able to run on top of the water's surface. As a result, one could likely find water striders present in any pond, river, or lake. Over 1,700 species of gerrids have been described, 10% of them being marine. While 90% of the Gerridae are freshwater bugs, the oceanic Halobates makes the family quite exceptional among insects. The genus Halobates was first heavily studied between 1822 and 1883 when Francis Buchanan White collected several different species during the Challenger Expedition. Around this time, Eschscholtz discovered three species of the Gerridae, bringing attention to the species, though little of their biology was known. Since then, the Gerridae have been continuously studied due to their ability to walk on water and unique social characteristics. Description The family Gerridae is physically characterized by having hydrofuge hairpiles, retractable preapical claws, and elongated legs and body. Hydrofuge hairpiles are small, hydrophobic microhairs. These are tiny hairs with more than one thousand microhairs per mm. The entire body is covered by these hairpiles, providing the water strider resistance to splashes or drops of water. These hairs repel the water, preventing drops from weighing down the body. Size They are generally small, long-legged insects and the body length of most species is between . A few are between . Among widespread genera, the North Hemisphere Aquarius includes the largest species, generally exceeding , at least among females, and the largest species averaging about . Females typically average larger than males of their own species, but it appears to be reversed in the largest species, the relatively poorly known Gigantometra gigas of streams in northern Vietnam and adjacent southern China. It typically reaches a body length of about in wingless males and in winged females (winged males, however, only average marginally larger than females). In this species each middle and hind leg can surpass . Antennae Water striders have two antennae with four segments on each. Antennal segments are numbered from closest to the head to farthest. The antennae have short, stiff bristles in segment III. Relative lengths of the antennae segments can help identify unique species within the family Gerridae, but in general, segment I is longer and stockier than the remaining three. The four segments combined are usually no longer than the length of the water strider head. Thorax The thorax of water striders is generally long, narrow, and small in size. It generally ranges from 1.6 mm to 3.6 mm long across the species, with some bodies more cylindrical or rounder than others. The pronotum, or outer layer of the thorax, of the water strider can be either shiny or dull depending on the species, and covered with microhairs to help repel water. The abdomen of a water strider can have several segments and contains both the metasternum and omphalium. Appendages Gerridae have front, middle, and back legs. 
The front legs are shortest and have preapical claws adapted to puncture prey. Preapical claws are claws that are not at the end of the leg, but rather about halfway along it, as in mantises. The middle legs are longer than the first pair and shorter than the last pair and are adapted for propulsion through the water. The hind pair is the longest and is used for spreading weight over a large surface area, as well as steering the bug across the surface of the water. The front legs are attached just posterior to the eyes, while the middle legs are attached closer to the back legs, which attach midthorax but extend beyond the terminal end of the body. Wings Some water striders have wings present on the dorsal side of their thorax, while other species of Gerridae do not, particularly Halobates. Water striders experience wing length polymorphism that has affected their flight ability and evolved in a phylogenetic manner, where populations are either long-winged, wing-dimorphic, or short-winged. Wing dimorphism consists of summer gerrid populations evolving different wing lengths than winter populations within the same species. Habitats with rougher waters are likely to hold gerrids with shorter wings, while habitats with calm waters are likely to hold long-winged gerrids. This reflects both the potential for wing damage and the need for dispersal. Evolution Cretogerris, from the Cretaceous (Albian) Charentese amber of France, was initially suggested as a gerrid. However, it was later interpreted as an indeterminate member of Gerroidea. They are morphologically similar to the unrelated Chresmoda, an enigmatic genus of insect known from the Late Jurassic to the Mid Cretaceous with a presumably similar lifestyle. Molecular analyses suggest an origin of the family Gerridae about 128 million years ago (Mya) in the Cretaceous, splitting from the sister group Veliidae, with which they share a single origin of rowing as a locomotive mechanism. According to transcriptome-based phylogeny, Gerridae is a monophyletic group. Wing polymorphism Wing polymorphism (i.e., the presence of multiple wing morphs in a given species) has independently evolved multiple times in Gerridae, as has complete wing loss; both have been important for the evolution of the variety of species we see today and for the dispersal of Gerridae. The existence of wing polymorphism in a given species can be explained as a particular case of the oogenesis-flight syndrome. Following this rationale, which is commonly applied to insects, developing short wings gives the individual the capacity to dedicate the energy stores that would usually be used for wing and wing muscle development to increasing egg production and reproducing early, ultimately enhancing the individual's fitness. The ability of one brood to have winged young and the next not allows water striders to adapt to changing environments. Long, medium, short, and nonexistent wing forms are all necessary depending on the environment and season. Long wings allow flight to a neighboring water body when one gets too crowded, but they can get wet and weigh a water strider down. Short wings may allow for short travel, but limit how far a gerrid can disperse. Nonexistent wings prevent a gerrid from being weighed down, but also prevent dispersal. Wing polymorphism is common in the Gerridae, although most univoltine populations are either completely apterous (wingless) or macropterous (with wings).
Apterous populations of gerrids would be restricted to stable aquatic habitats that experience little change in environment, while macropterous populations can inhabit more changing, variable water supplies. Stable waters are usually large lakes and rivers, while unstable waters are generally small and seasonal. Gerrids produce winged forms for dispersal purposes and macropterous individuals are maintained due to their ability to survive in changing conditions. Wings are necessary if the body of water is likely to dry since the gerrid must fly to a new source of water. However, wingless forms are favored due to competition for ovarian development and wings and reproductive success is the main goal due to the selfish gene theory. Overwintering gerrids usually are macropterous, or with wings, so they can fly back to their aquatic habitat after winter. An environmental switch mechanism controls seasonal dimorphism observed in bivoltine species, or species having two broods per year. This switch mechanism is what helps determine whether or not a brood with wings will evolve. Temperature also plays an important role in photoperiodic switch. Temperatures signify the seasons and thus when wings are needed since they hibernate during winter. Ultimately, these switching mechanisms alter genetic alleles for wing characteristics, helping to maintain biological dispersal. Ability to walk on water Water striders are able to walk on top of water due to a combination of several factors. Water striders use the high surface tension of water and long, hydrophobic legs to help them stay above water. Gerridae species use this surface tension to their advantage through their highly adapted legs and distributed weight. The legs of a water strider are long and slender, allowing the weight of the water strider body to be distributed over a large surface area. The legs are strong, but have flexibility that allows the water striders to keep their weight evenly distributed and flow with the water movement. Hydrofuge hairs line the body surface of the water strider. There are several thousand hairs per square millimeter, providing the water strider with a hydrofuge body that prevents wetting from waves, rain, or spray, which could inhibit their ability to keep their entire body above the water surface if the water stuck and weighed down the body. This position of keeping the majority of the body above the water surface, called epipleustonic, is a defining characteristic of water striders. If the body of the water strider were to accidentally become submerged, for instance by a large wave, the tiny hairs would trap air. Tiny air bubbles throughout the body act as buoyancy to bring the water strider to the surface again, while also providing air bubbles to breathe from underwater. Despite their success in overcoming submergence in water, however, water striders are not as competent in oil, and experimental oil spills have suggested that oil spilled in freshwater systems can drive water strider immobility and death. The tiny hairs on the legs provide both a hydrophobic surface as well as a larger surface area to spread their weight over the water. The middle legs used for rowing have particularly well developed fringe hairs on the tibia and tarsus to help increase movement through the ability to thrust. 
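As a rough illustration of the surface-tension argument above, the following back-of-envelope sketch (in Python) compares a water strider's weight with the vertical force that water's surface tension could supply along its wetted legs. The mass, wetted leg length, and surface tension value are assumed, order-of-magnitude figures chosen for illustration; they are not measurements from this article.

```python
# Back-of-envelope check (not from the article): can surface tension
# plausibly support a water strider? All numbers below are assumed,
# order-of-magnitude values for illustration only.

SIGMA = 0.072          # N/m, surface tension of clean water at ~20 C
G = 9.81               # m/s^2

strider_mass = 0.01e-3      # kg (~10 mg, assumed typical adult)
wetted_length = 4 * 0.01    # m: assume ~1 cm of wetted leg on each of 4 support legs

weight = strider_mass * G                 # downward force
# The water surface pulls up along both sides of each leg's contact line,
# so the maximum vertical force scales as ~ 2 * sigma * total contact length.
max_support = 2 * SIGMA * wetted_length

print(f"weight        ~ {weight:.1e} N")      # ~1e-4 N
print(f"max support   ~ {max_support:.1e} N") # ~6e-3 N
print(f"safety factor ~ {max_support / weight:.0f}x")
```

Under these assumed values the available surface-tension force exceeds the insect's weight by well over an order of magnitude, which is consistent with the weight-distribution argument in the text.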
The hind pair of legs is used for steering. When the rowing stroke begins, the middle tarsi of gerrids are quickly pressed down and backwards to create a circular surface wave in which the crest can be used to propel a forward thrust. The semicircular wave created is essential to the ability of the water strider to move rapidly since it acts as a counteracting force to push against. As a result, water striders often move at 1 meter per second or faster. Life cycle Gerrids generally lay their eggs on submerged rocks or vegetation using a gelatinous substance as a glue. Gravid females carry between two and twenty eggs. The eggs are creamy white or translucent at first, but become bright orange. Gerrids go through the egg stage, five instar stages of nymphal forms, and then the adult stage. Instar durations of water striders are highly correlated throughout the larval period, meaning that individuals tend to develop at the same rate through each instar stage. Each nymphal stage lasts 7–10 days, after which the water strider molts, shedding its old cuticle through a Y-shaped suture dorsal to the head and thorax. Nymphs are very similar to adults in behavior and diet, but are smaller (1 mm long), paler, and lack differentiation in tarsal and genital segments. It takes approximately 60 to 70 days for a water strider to reach adulthood, though this development rate is highly correlated with the temperature of the water in which the eggs are laid. Ecology Habitat Gerridae generally inhabit the surfaces of calm waters. The majority of water striders inhabit freshwater areas, with the exception of Asclepios, Halobates, Stenobates and a few other genera, which inhabit marine waters. The marine species are generally coastal, but a few Halobates live offshore (oceanic) and are the only insects of this habitat. Gerridae prefer an environment abundant with insects or zooplankton and one that contains rocks or plants on which to oviposit. Studies of water strider prevalence across varying environments indicate that water striders most prefer waters around . Any water temperature lower than is unfavorable. This is likely because development rates of the young are temperature dependent [5]. The cooler the surrounding waters, the slower the development of the young. Prominent genera Gerridae are present in Europe, the former USSR, Canada, the US, South Africa, South America, Australia, China and Malaysia [5]. None have yet been identified in New Zealand waters. Diet Gerrids are aquatic predators and feed on invertebrates, mainly spiders and insects, that fall onto the water surface. Water striders are attracted to this food source by the ripples produced by the struggling prey. The water strider uses its front legs as sensors for the vibrations produced by the ripples in the water. The water strider punctures the prey item's body with its proboscis, injects salivary enzymes that break down the prey's internal structures, and then sucks out the resulting fluid. Gerrids prefer living prey, though they are indiscriminate feeders when it comes to terrestrial insect type. Halobates, which are found on the open sea, feed on floating insects, zooplankton, and occasionally resort to cannibalism of their own nymphs. Cannibalism is frequent and helps control population sizes and restrict conflicting territories. During the non-mating season, when gerrids live in cooperative groups and cannibalism rates are lower, water striders will openly share large kills with others around them.
Some gerrids are collectors, feeding off sediment or deposit surface. Predators Gerrids, or water striders, are preyed upon largely by birds and some fish. Petrels, terns, and some marine fish prey on Halobates. Fish do not appear to be the main predators of water striders, but will eat them in cases of starvation. Scent gland secretions from the thorax are responsible for repelling fish from eating them. Gerrids are largely hunted by birds of a wide range of species dependent on habitat. Some water striders are hunted by frogs, but they are not their main food source. Water striders are also sometimes hunted by each other. Water strider cannibalism involves mainly hunting nymphs for mating territory and sometimes for food. Parasites Several endoparasites have been found in gerrids. Trypanosamatid flagellates, nematodes, and parasitic Hymenoptera all act as endoparasites. Water mite larvae act as ectoparasites of water striders. Dispersal Sudden increases in salt concentration in the water of gerrid habitats can trigger migration of water striders. Water striders will move to areas of lower salt concentration, resulting in the mix of genes within brackish and freshwater bodies. Nymphal population density also affects the dispersal of water striders. The higher density of water striders in the nymphal stage results in a higher percentage of brachypterous adults developing flight muscles. These flight muscles allow for the water striders to fly to neighboring bodies of water and mate, resulting in the spread of genes. This spread and mixing of genes can be beneficial due to a heterozygotic advantage. Generally, water striders will try to disperse in such a way to lower the density of gerrids in one area or pool of water. Most do this by flight, but those that lack wings or wing muscles will rely on the current of their water body or flooding. Eggs in Halobates are often laid on floating ocean debris and thus spread across the ocean by this drifting matter. Mating behavior Sex discrimination in some Gerridae species is determined through communication of ripple frequency produced on the water surface. Males predominantly produce these ripples in the water. There are three main frequencies found in ripple communication: 25 Hz as a repel signal, 10 Hz as a threat signal, and 3 Hz as a courtship signal. An approaching gerrid will first give out a repel signal to let the other water strider know they are in its area. If the other gerrid does not return the repel signal, then the bug knows it is a female and will switch to the courtship signal. A receptive female will lower her abdomen and allow the male to mount her and mate. A non-receptive female will raise her abdomen and emit a repel signal. Males that are allowed to mate stay attached to the same female for the entire reproductive season. This is to ensure that the female's young belong to the mounting male and thus guarantee the spread of his genes. Females oviposit, or lay their eggs, by submerging and attaching the eggs to stable surfaces such as plants or stones. Some water strider species will lay the eggs at the water edge if the body of water is calm enough. The amount of eggs laid depends on the amount of food available to the mother during the reproductive season. The availability of food and dominance among other gerrids in the area both play crucial roles in the amount of food obtained and thus, resulting fecundity. 
Water striders will reproduce all year long in tropical regions where it remains warm, but only during the warm months in seasonal habitats. Gerrids that live in environments with winters will overwinter in the adult stage. This is due to the large energy cost which would need to be spent to maintain their body temperature at functional levels. These water striders have been found in leaf litter or under stationary shelters such as logs and rocks during the winter in seasonal areas. This reproductive diapause is a result of shortening day lengths during larval development and seasonal variation in lipid levels. Shorter day length signals the water strider of the coming temperature drops, also acting as a physical signal the body uses to store lipids throughout the body as food sources. Water striders use these lipids to metabolize during their hibernation. The length of the hibernation depends when the environment warms and the days become longer again. Social behavior Kin discrimination is rare in Gerridae, only really being seen in Halobates. Without hunger playing a role, several studies have shown that neither Aquarius remigis nor Limnoporus dissortis parents preferentially cannibalize on non-kin. Those two species are highly prevalent in American waters. These species do not show familial tendencies, leaving their young to forage on their own. Females cannibalize more on young than males do and, in particular, on first-instar nymphs. Young must disperse as soon as their wings are fully developed to avoid cannibalism and other territorial conflicts since neither parents nor siblings can identify members genetically related to themselves. Gerridae are territorial insects and make this known by their vibration patterns. Both female and male adult Gerridae hold separate territories, though usually the male territories are larger than the female. During the mating season, gerrids will emit warning vibrations through the water and defend both their territory and the female in it. Even though gerridae are very conspicuous, making their presence known through repel signals, they often live in large groups. These large groups usually form during the non-mating season since there is less need to compete. Instead of competing to reproduce, water striders can work together to obtain nutrition and shelter outside of the mating season. Water striders will attempt to disperse when these groups become too dense. They do so by flying away or cannibalizing. In popular culture In the video game Super Mario 64, in the level Wet-Dry World, there are enemies named Skeeter that are based on water striders and their movement. The name comes from "water skeeter", an alternative name for water striders. In the 2002 film The Tuxedo, water striders are genetically modified by bioterrorists to have bacteria that can spread from person to person, causing severe dehydration and instant death.
Biology and health sciences
Hemiptera (true bugs)
Animals
973737
https://en.wikipedia.org/wiki/Formatted%20text
Formatted text
In computing, formatted text, styled text, or rich text, as opposed to plain text, is digital text which has styling information beyond the minimum of semantic elements: colours, styles (boldface, italic), sizes, and special features in HTML (such as hyperlinks). Terminology Formatted text cannot rightly be identified with binary files or be distinct from ASCII text. This is because formatted text is not necessarily binary, it may be text-only, such as HTML, RTF or enriched text files, and it may be ASCII-only. Conversely, a plain text file may be non-ASCII (in an encoding such as Unicode UTF-8). Text-only formatted text is achieved by markup which too is textual, while some editors of formatted text like Microsoft Word save in a binary format. Beginnings of formatted text Formatted text has its genesis in the pre-computer use of underscoring to embolden passages in typewritten manuscripts. In the first interactive systems of early computer technology, underlining was not possible, and users made up for this lack (and the lack of formatting in ASCII) by using certain symbols as substitutes. Emphasis, for example, could be achieved in ASCII in a number of ways: Capitalization: Surrounding with underscores: Surrounding with asterisks: Spacing: Surrounding by underscores was also used for book titles: Markup languages Formatting can be marked by tags distinguished from the body text by special characters, such as angle brackets in HTML. For example, this text: The dog is classified as Canis familiaris in taxonomy. is marked up in HTML thus: <p>The dog is classified as <i>Canis familiaris</i> in taxonomy.</p> The italicised text is enclosed by an opening and a closing italics tag. In LaTeX, the text would be marked up like this: The dog is classified as \textit{Canis familiaris} in taxonomy. Most markup languages can be edited with any text editor, needing no special software. Many markup languages can also be edited with specialized software designed to automate some functions or present the output as WYSIWYG. Formatted document files Since the invention of MacWrite, the first WYSIWYG word processor, in which the typist codes the formatting visually rather than by inserting textual markup, word processors have tended to save to binary files. Opening such files with a text editor reveals them embedded with various binary characters, either around the formatted text (e.g. in WordPerfect) or separate from it, at the beginning or end of the file (e.g. in Microsoft Word). Formatted text documents in binary files have, however, the disadvantages of formatting scope and secrecy. Whereas the extent of formatting is accurately marked in markup languages, WYSIWYG formatting is based on memory, that is, keeping for example your pressing of the boldface button until cancelled. This can lead to formatting mistakes and maintenance troubles. As for secrecy, formatted text document file formats tend to be proprietary and undocumented, leading to difficulty in coding compatibility by third parties, and also to unnecessary upgrades because of version changes. WordStar was a popular word processor that did not use binary files with hidden characters. OpenOffice.org Writer saves files in an XML format. However, the resultant file is a binary since it is compressed (a tarball equivalent). PDF is another formatted text file format that is usually binary (using compression for the text, and storing graphics and fonts in binary). 
It is generally an end-user format, written from an application such as Microsoft Word or OpenOffice.org Writer, and not editable by the user once done.
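As a small illustration of the relationship between the ASCII emphasis substitutes and the markup languages described above, the following Python sketch mechanically converts asterisk and underscore conventions into HTML tags. The function name and regular expressions are illustrative, not part of any real lightweight-markup library, and the sketch ignores escaping and nesting.

```python
import re

def ascii_emphasis_to_html(text: str) -> str:
    """Convert plain-ASCII emphasis conventions (*asterisks*, _underscores_)
    into HTML tags, as a minimal sketch of how textual markup can encode
    formatting that plain text otherwise only imitates."""
    # *emphasised text* -> <b>...</b>
    text = re.sub(r"\*(.+?)\*", r"<b>\1</b>", text)
    # _underscored text_ -> <i>...</i> (underscores once stood in for underlining)
    text = re.sub(r"_(.+?)_", r"<i>\1</i>", text)
    return text

print(ascii_emphasis_to_html("The dog is classified as _Canis familiaris_ in taxonomy."))
# -> The dog is classified as <i>Canis familiaris</i> in taxonomy.
```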
Technology
Data storage and memory
973884
https://en.wikipedia.org/wiki/Erinaceidae
Erinaceidae
Erinaceidae is a family in the order Eulipotyphla, consisting of the hedgehogs and moonrats. Until recently, it was assigned to the order Erinaceomorpha, which has been subsumed with the paraphyletic Soricomorpha into Eulipotyphla. Eulipotyphla has been shown to be monophyletic; Soricomorpha is paraphyletic because both Soricidae and Talpidae share a more recent common ancestor with Erinaceidae than with solenodons. Erinaceidae contains the well-known hedgehogs (subfamily Erinaceinae) of Eurasia and Africa and the gymnures or moonrats (subfamily Galericinae) of Southeast Asia. This family was once considered part of the order Insectivora, but that polyphyletic order is now considered defunct. Characteristics Erinaceids are generally shrew-like in form, with long snouts and short tails. They are, however, much larger than shrews, ranging from in body length and in weight, in the case of the short-tailed gymnure, up to and in the moonrat. All but one species have five toes in each foot, in some cases with strong claws for digging, and they have large eyes and ears. Hedgehogs possess hair modified into sharp spines to form a protective covering over the upper body and flanks, while gymnures have only normal hair. Most species have anal scent glands, but these are far better developed in gymnures, which can have a powerful odor. Erinaceids are omnivorous, with the major part of their diet consisting of insects, earthworms, and other small invertebrates. They also eat seeds and fruit, and occasionally birds' eggs, along with any carrion they come across. Their teeth are sharp and suited for impaling invertebrate prey. The dental formula for erinaceids is: Hedgehogs are nocturnal, but gymnures are less so, and may be active during the day. Many species live in simple burrows, while others construct temporary nests on the surface from leaves and grass, or shelter in hollow logs or similar hiding places. Erinaceids are solitary animals outside the breeding season, and the father plays no role in raising the young. Female erinaceids give birth after a gestation period of around six to seven weeks. The young are born blind and hairless, although hedgehogs begin to sprout their spines within 36 hours of birth. Evolution Erinaceids are a group of placental mammals that have retained many of their ancestral traits, having changed little since their origin in the Eocene. The so-called 'giant hedgehog' (actually a gymnure) Deinogalerix, from the Miocene of Gargano Island (part of modern Italy), was the size of a large rabbit, and may have eaten vertebrate prey or carrion, rather than insects. 
Classification Order Eulipotyphla †Family Amphilemuridae †Genus Alsaticopithecus †Genus Amphilemur †Genus Gesneropithex †Genus Macrocranion †Macrocranion germonpreae †Macrocranion junnei †Macrocranion nitens †Macrocranion robinsoni †Macrocranion tenerum †Macrocranion vandebroeki †Genus Pholidocercus †Pholidocercus hassiacus Family Erinaceidae †Genus Silvacola †Silvacola acares †Genus Oligoechinus Subfamily Erinaceinae †Genus Amphechinus †Amphechinus akespensis †Amphechinus arverniensis †Amphechinus baudelotae †Amphechinus edwardsi †Amphechinus ginsburgi †Amphechinus golpeae †Amphechinus horncloudi †Amphechinus intermedius †Amphechinus kreuzae †Amphechinus major †Amphechinus microdus †Amphechinus minutissimus †Amphechinus robinsoni †Amphechinus taatsiingolensis Genus †Ladakhechinus †Ladakhechinus iugummontis Genus Atelerix Four-toed hedgehog, Atelerix albiventris North African hedgehog, Atelerix algirus Southern African hedgehog, Atelerix frontalis Somali hedgehog, Atelerix sclateri Genus Erinaceus Amur hedgehog, Erinaceus amurensis Southern white-breasted hedgehog, Erinaceus concolor European hedgehog, Erinaceus europaeus Northern white-breasted hedgehog, Erinaceus roumanicus Genus Hemiechinus Long-eared hedgehog, Hemiechinus auritus Indian long-eared hedgehog, Hemiechinus collaris Genus Mesechinus Daurian hedgehog, Mesechinus dauuricus Hugh's hedgehog, Mesechinus hughi Gaoligong forest hedgehog, Mesechinus wangi Small-toothed forest hedgehog, Mesechinus miodon Genus Paraechinus Desert hedgehog, Paraechinus aethiopicus Brandt's hedgehog, Paraechinus hypomelas Indian hedgehog, Paraechinus micropus Bare-bellied hedgehog, Paraechinus nudiventris Subfamily Galericinae †Genus Deinogalerix †Deinogalerix brevirostris †Deinogalerix freudenthali †Deinogalerix intermedius †Deinogalerix koenigswaldi †Deinogalerix minor Genus Echinosorex Moonrat, Echinosorex gymnura †Genus Galerix †Galerix aurelianensis †Galerix exilis †Galerix kostakii †Galerix remmerti †Galerix rutlandae †Galerix saratji †Galerix stehlini †Galerix symeonidisi †Galerix uenayae Genus Hylomys Long-eared gymnure, Hylomys megalotis Dwarf gymnure, Hylomys parvus Javan short-tailed gymnure or Lesser Moonrat, Hylomys suillus Genus Neohylomys Hainan gymnure, Neonylomys hainanensis Genus Neotetracus Shrew gymnure, Neotetracus sinensis Genus Podogymnura Dinagat gymnure, Podogymnura aureospinula Eastern Mindanao gymnure, Podogymnura intermedia Mindanao gymnure, Podogymnura truei
Biology and health sciences
Eulipotyphla
Animals
974163
https://en.wikipedia.org/wiki/Backstaff
Backstaff
The backstaff is a navigational instrument that was used to measure the altitude of a celestial body, in particular the Sun or Moon. When observing the Sun, users kept the Sun to their back (hence the name) and observed the shadow cast by the upper vane on a horizon vane. It was invented by the English navigator John Davis, who described it in his book Seaman's Secrets in 1594. Types of backstaffs Backstaff is the name given to any instrument that measures the altitude of the sun by the projection of a shadow. It appears that the idea for measuring the sun's altitude using back observations originated with Thomas Harriot. Many types of instruments evolved from the cross-staff that can be classified as backstaffs. Only the Davis quadrant remains dominant in the history of navigation instruments. Indeed, the Davis quadrant is essentially synonymous with backstaff. However, Davis was neither the first nor the last to design such an instrument and others are considered here as well. Davis quadrant Captain John Davis invented a version of the backstaff in 1594. Davis was a navigator who was quite familiar with the instruments of the day such as the mariner's astrolabe, the quadrant and the cross-staff. He recognized the inherent drawbacks of each and endeavoured to create a new instrument that could reduce those problems and increase the ease and accuracy of obtaining solar elevations. One early version of the quadrant staff is shown in Figure 1. It had an arc affixed to a staff so that it could slide along the staff (the shape is not critical, though the curved shape was chosen). The arc (A) was placed so that it would cast its shadow on the horizon vane (B). The navigator would look along the staff and observe the horizon through a slit in the horizon vane. By sliding the arc so that the shadow aligned with the horizon, the angle of the sun could be read on the graduated staff. This was a simple quadrant, but it was not as accurate as one might like. The accuracy in the instrument is dependent on the length of the staff, but a long staff made the instrument more unwieldy. The maximum altitude that could be measured with this instrument was 45°. The next version of his quadrant is shown in Figure 2. The arc on the top of the instrument in the previous version was replaced with a shadow vane placed on a transom. This transom could be moved along a graduated scale to indicate the angle of the shadow above the staff. Below the staff, a 30° arc was added. The horizon, seen through the horizon vane on the left, is aligned with the shadow. The sighting vane on the arc is moved until it aligns with the view of the horizon. The angle measured is the sum of the angle indicated by the position of the transom and the angle measured on the scale on the arc. The instrument that is now identified with Davis is shown in Figure 3. This form evolved by the mid-17th century. The quadrant arc has been split into two parts. The smaller radius arc, with a span of 60°, was mounted above the staff. The longer radius arc, with a span of 30° was mounted below. Both arcs have a common centre. At the common centre, a slotted horizon vane was mounted (B). A moveable shadow vane was placed on the upper arc so that its shadow was cast on the horizon vane. A moveable sight vane was mounted on the lower arc (C). It is easier for a person to place a vane at a specific location than to read the arc at an arbitrary position. This is due to Vernier acuity, the ability of a person to align two line segments accurately. 
Thus an arc with a small radius, marked with relatively few graduations, can be used to place the shadow vane accurately at a specific angle. On the other hand, moving the sight vane to the location where the line to the horizon meets the shadow requires a large arc. This is because the position may be at a fraction of a degree and a large arc allows one to read smaller graduations with greater accuracy. The large arc of the instrument, in later years, was marked with transversals to allow the arc to be read to greater accuracy than the main graduations allow. Thus Davis was able to optimize the construction of the quadrant to have both a small and a large arc, allowing the effective accuracy of a single arc quadrant of large radius without making the entire instrument so large. This form of the instrument became synonymous with the backstaff. It was one of the most widely used forms of the backstaff. Continental European navigators called it the English Quadrant. A later modification of the Davis quadrant was to use a Flamsteed glass in place of the shadow vane; this was suggested by John Flamsteed. This placed a lens on the vane that projected an image of the sun on the horizon vane instead of a shadow. It was useful under conditions where the sky was hazy or lightly overcast; the dim image of the sun was shown more brightly on the horizon vane where a shadow could not be seen. Usage In order to use the instrument, the navigator would place the shadow vane at a location anticipating the altitude of the sun. Holding the instrument in front of him, with the sun at his back, he holds the instrument so that the shadow cast by the shadow vane falls on the horizon vane at the side of the slit. He then moves the sight vane so that he observes the horizon in a line from the sight vane through the horizon vane's slit while simultaneously maintaining the position of the shadow. This permits him to measure the angle between the horizon and the sun as the sum of the angle read from the two arcs. Since the shadow's edge represents the limb of the sun, he must correct the value for the semidiameter of the sun. Instruments that derived from the Davis quadrant The Elton's quadrant derived from the Davis quadrant. It added an index arm with spirit levels to provide an artificial horizon. Demi-cross The demi-cross was an instrument that was contemporary with the Davis quadrant. It was popular outside England. The vertical transom was like a half-transom on a cross-staff, hence the name demi-cross. It supported a shadow vane (A in Figure 4) that could be set to one of several heights (three according to May, four according to de Hilster). By setting the shadow vane height, the range of angles that could be measured was set. The transom could be slid along the staff and the angle read from one of the graduated scales on the staff. The sight vane (C) and horizon vane (B) were aligned visually with the horizon. With the shadow vane's shadow cast on the horizon vane and aligned with the horizon, the angle was determined. In practice, the instrument was accurate but more unwieldy than the Davis quadrant. Plough The plough was the name given to an unusual instrument that existed for a short time. It was part cross-staff and part backstaff. In Figure 5, A is the transom that casts its shadow on the horizon vane at B. It functions in the same manner as the staff in Figure 1. C is the sighting vane. The navigator uses the sighting vane and the horizon vane to align the instrument horizontally. 
The sighting vane can be moved left to right along the staff. D is a transom just as one finds on a cross-staff. This transom has two vanes on it that can be moved closer or farther from the staff to emulate different-length transoms. The transom can be moved on the staff and used to measure angles. Almucantar staff The Almucantar staff is a device specifically used for measuring the altitude of the sun at low altitudes. Cross-staff The cross-staff was normally a direct observation instrument. However, in later years it was modified for use with back observations. Quadrant There was a variation of the quadrant – the Back observation quadrant – that was used for measuring the sun's altitude by observing the shadow cast on a horizon vane. Thomas Hood cross-staff Thomas Hood invented this cross-staff in 1590. It could be used for surveying, astronomy or other geometric problems. It consists of two components, a transom and a yard. The transom is the vertical component and is graduated from 0° at the top to 45° at the bottom. At the top of the transom, a vane is mounted to cast a shadow. The yard is horizontal and is graduated from 45° to 90°. The transom and yard are joined by a special fitting (the double socket in Figure 6) that permits independent adjustments of the transom vertically and the yard horizontally. It was possible to construct the instrument with the yard at the top of the transom rather than at the bottom. Initially, the transom and yard are set so that the two are joined at their respective 45° settings. The instrument is held so that the yard is horizontal (the navigator can view the horizon along the yard to assist in this). The socket is loosened so that the transom is moved vertically until the shadow of the vane is cast at the yard's 90° setting. If the movement of just the transom can accomplish this, the altitude is given by the transom's graduations. If the sun is too high for this, the yard horizontal opening in the socket is loosened and the yard is moved to allow the shadow to land on the 90° mark. The yard then yields the altitude. It was a fairly accurate instrument, as the graduations were well spaced compared to a conventional cross-staff. However, it was a bit unwieldy and difficult to handle in wind. Benjamin Cole quadrant A late addition to the collection of backstaves in the navigation world, this device was invented by Benjamin Cole in 1748. The instrument consists of a staff with a pivoting quadrant on one end. The quadrant has a shadow vane, which can optionally take a lens like the Davis quadrant's Flamsteed glass, at the upper end of the graduated scale (A in Figure 7). This casts a shadow or projects an image of the sun on the horizon vane (B). The observer views the horizon through a hole in the sight vane (D) and a slit in the horizon vane to ensure the instrument is level. The quadrant component is rotated until the horizon and the sun's image or shadow are aligned. The altitude can then be read from the quadrant's scale. In order to refine the reading, a circular vernier is mounted on the staff (C). The fact that such an instrument was introduced in the middle of the 18th century shows that the quadrant was still a viable instrument even in the presence of the octant. English scientist George Adams created a very similar backstaff at the same time. Adam's version ensured that the distance between the Flamsteed glass and horizon vane was the same as the distance from the vane to the sight vane. 
Cross bow quadrant Edmund Gunter invented the cross bow quadrant, also called the mariner's bow, around 1623. It gets its name from the similarity to the archer's crossbow. This instrument is interesting in that the arc is 120° but is only graduated as a 90° arc. As such, the angular spacing of a degree on the arc is slightly greater than one degree. Examples of the instrument can be found with a 0° to 90° graduation or with two mirrored 0° to 45° segments centred on the midpoint of the arc. The instrument has three vanes, a horizon vane (A in Figure 8) which has an opening in it to observe the horizon, a shadow vane (B) to cast a shadow on the horizon vane and a sighting vane (C) that the navigator uses to view the horizon and shadow at the horizon vane. This serves to ensure the instrument is level while simultaneously measuring the altitude of the sun. The altitude is the difference in the angular positions of the shadow and sighting vanes. With some versions of this instrument, the sun's declination for each day of the year was marked on the arc. This permitted the navigator to set the shadow vane to the date and the instrument would read the altitude directly.
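The Usage section above describes the Davis quadrant reading as the sum of the two arc readings, corrected for the Sun's semidiameter because the shadow edge corresponds to the Sun's limb. The following Python sketch shows that arithmetic with made-up example readings; the semidiameter value and the sign convention for the limb correction are stated assumptions, not figures from this article.

```python
# Sketch of the Davis-quadrant arithmetic described in the Usage section:
# altitude = upper-arc (shadow vane) reading + lower-arc (sight vane) reading,
# corrected for the Sun's semidiameter. Example readings are invented.

SUN_SEMIDIAMETER_DEG = 16.0 / 60.0   # ~16 arcminutes, assumed roughly constant

def davis_quadrant_altitude(upper_arc_deg: float, lower_arc_deg: float,
                            upper_limb: bool = True) -> float:
    """Return the Sun's altitude in degrees from the two arc readings.

    Whether the semidiameter is subtracted or added depends on which limb
    of the Sun forms the shadow edge the navigator aligned (assumption)."""
    raw = upper_arc_deg + lower_arc_deg
    return raw - SUN_SEMIDIAMETER_DEG if upper_limb else raw + SUN_SEMIDIAMETER_DEG

# Example: shadow vane set at 25 deg on the small arc, sight vane read at 13.5 deg.
print(f"altitude ~ {davis_quadrant_altitude(25.0, 13.5):.2f} deg")   # ~38.23 deg
```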
Technology
Navigation
974169
https://en.wikipedia.org/wiki/Algebraic%20function
Algebraic function
In mathematics, an algebraic function is a function that can be defined as the root of an irreducible polynomial equation. Algebraic functions are often algebraic expressions using a finite number of terms, involving only the algebraic operations addition, subtraction, multiplication, division, and raising to a fractional power. Examples of such functions are: Some algebraic functions, however, cannot be expressed by such finite expressions (this is the Abel–Ruffini theorem). This is the case, for example, for the Bring radical, which is the function implicitly defined by . In more precise terms, an algebraic function of degree in one variable is a function that is continuous in its domain and satisfies a polynomial equation of positive degree where the coefficients are polynomial functions of , with integer coefficients. It can be shown that the same class of functions is obtained if algebraic numbers are accepted for the coefficients of the 's. If transcendental numbers occur in the coefficients the function is, in general, not algebraic, but it is algebraic over the field generated by these coefficients. The value of an algebraic function at a rational number, and more generally, at an algebraic number is always an algebraic number. Sometimes, coefficients that are polynomial over a ring are considered, and one then talks about "functions algebraic over ". A function which is not algebraic is called a transcendental function, as it is for example the case of . A composition of transcendental functions can give an algebraic function: . As a polynomial equation of degree n has up to n roots (and exactly n roots over an algebraically closed field, such as the complex numbers), a polynomial equation does not implicitly define a single function, but up to n functions, sometimes also called branches. Consider for example the equation of the unit circle: This determines y, except only up to an overall sign; accordingly, it has two branches: An algebraic function in m variables is similarly defined as a function which solves a polynomial equation in m + 1 variables: It is normally assumed that p should be an irreducible polynomial. The existence of an algebraic function is then guaranteed by the implicit function theorem. Formally, an algebraic function in m variables over the field K is an element of the algebraic closure of the field of rational functions K(x1, ..., xm). Algebraic functions in one variable Introduction and overview The informal definition of an algebraic function provides a number of clues about their properties. To gain an intuitive understanding, it may be helpful to regard algebraic functions as functions which can be formed by the usual algebraic operations: addition, multiplication, division, and taking an nth root. This is something of an oversimplification; because of the fundamental theorem of Galois theory, algebraic functions need not be expressible by radicals. First, note that any polynomial function is an algebraic function, since it is simply the solution y to the equation More generally, any rational function is algebraic, being the solution to Moreover, the nth root of any polynomial is an algebraic function, solving the equation Surprisingly, the inverse function of an algebraic function is an algebraic function. For supposing that y is a solution to for each value of x, then x is also a solution of this equation for each value of y. 
Indeed, interchanging the roles of x and y and gathering terms, Writing x as a function of y gives the inverse function, also an algebraic function. However, not every function has an inverse. For example, y = x2 fails the horizontal line test: it fails to be one-to-one. The inverse is the algebraic "function" . Another way to understand this, is that the set of branches of the polynomial equation defining our algebraic function is the graph of an algebraic curve. The role of complex numbers From an algebraic perspective, complex numbers enter quite naturally into the study of algebraic functions. First of all, by the fundamental theorem of algebra, the complex numbers are an algebraically closed field. Hence any polynomial relation p(y, x) = 0 is guaranteed to have at least one solution (and in general a number of solutions not exceeding the degree of p in y) for y at each point x, provided we allow y to assume complex as well as real values. Thus, problems to do with the domain of an algebraic function can safely be minimized. Furthermore, even if one is ultimately interested in real algebraic functions, there may be no means to express the function in terms of addition, multiplication, division and taking nth roots without resorting to complex numbers (see casus irreducibilis). For example, consider the algebraic function determined by the equation Using the cubic formula, we get For the square root is real and the cubic root is thus well defined, providing the unique real root. On the other hand, for the square root is not real, and one has to choose, for the square root, either non-real square root. Thus the cubic root has to be chosen among three non-real numbers. If the same choices are done in the two terms of the formula, the three choices for the cubic root provide the three branches shown, in the accompanying image. It may be proven that there is no way to express this function in terms of nth roots using real numbers only, even though the resulting function is real-valued on the domain of the graph shown. On a more significant theoretical level, using complex numbers allows one to use the powerful techniques of complex analysis to discuss algebraic functions. In particular, the argument principle can be used to show that any algebraic function is in fact an analytic function, at least in the multiple-valued sense. Formally, let p(x, y) be a complex polynomial in the complex variables x and y. Suppose that x0 ∈ C is such that the polynomial p(x0, y) of y has n distinct zeros. We shall show that the algebraic function is analytic in a neighborhood of x0. Choose a system of n non-overlapping discs Δi containing each of these zeros. Then by the argument principle By continuity, this also holds for all x in a neighborhood of x0. In particular, p(x, y) has only one root in Δi, given by the residue theorem: which is an analytic function. Monodromy Note that the foregoing proof of analyticity derived an expression for a system of n different function elements fi(x), provided that x is not a critical point of p(x, y). A critical point is a point where the number of distinct zeros is smaller than the degree of p, and this occurs only where the highest degree term of p or the discriminant vanish. Hence there are only finitely many such points c1, ..., cm. A close analysis of the properties of the function elements fi near the critical points can be used to show that the monodromy cover is ramified over the critical points (and possibly the point at infinity). 
Thus the holomorphic extension of the fi has at worst algebraic poles and ordinary algebraic branchings over the critical points. Note that, away from the critical points, we have since the fi are by definition the distinct zeros of p. The monodromy group acts by permuting the factors, and thus forms the monodromy representation of the Galois group of p. (The monodromy action on the universal covering space is related but different notion in the theory of Riemann surfaces.) History The ideas surrounding algebraic functions go back at least as far as René Descartes. The first discussion of algebraic functions appears to have been in Edward Waring's 1794 An Essay on the Principles of Human Knowledge in which he writes: let a quantity denoting the ordinate, be an algebraic function of the abscissa x, by the common methods of division and extraction of roots, reduce it into an infinite series ascending or descending according to the dimensions of x, and then find the integral of each of the resulting terms.
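As a concrete illustration of the statement above that a polynomial equation in two variables defines up to n branches rather than a single function, the following SymPy sketch recovers the two branches of the unit-circle relation discussed in the text. The cubic used afterwards is an arbitrary illustrative polynomial, not the article's exact example.

```python
import sympy as sp

x, y = sp.symbols("x y")

# The unit-circle relation discussed above: p(x, y) = x^2 + y^2 - 1 = 0.
# Solving for y recovers the two branches, y = +sqrt(1 - x**2) and y = -sqrt(1 - x**2).
branches = sp.solve(sp.Eq(x**2 + y**2, 1), y)
print(branches)

# A cubic in y (illustrative, chosen here, not from the article): over the
# complex numbers it always has three roots, i.e. up to three branches.
p = y**3 - 3*x*y + 1
roots_at_2 = sp.solve(p.subs(x, 2), y)
print(len(roots_at_2))     # 3
```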
Mathematics
Functions: General
974321
https://en.wikipedia.org/wiki/Rialto%20Bridge
Rialto Bridge
The Rialto Bridge (; ) is the oldest of the four bridges spanning the Grand Canal in Venice, Italy. Connecting the (districts) of San Marco and San Polo, it has been rebuilt several times since its first construction as a pontoon bridge in 1173, and is now a significant tourist attraction in the city. The present stone bridge is a single span designed by Antonio da Ponte. Construction began in 1588 and was completed in 1591. It is similar to the wooden bridge it succeeded. Two ramps lead up to a central portico. On either side of the portico, the covered ramps carry rows of shops. The engineering of the bridge was considered so audacious that architect Vincenzo Scamozzi predicted future ruin. The bridge has defied its critics to become one of the architectural icons, and top tourist attractions, in Venice. History The first dry crossing of the Grand Canal was a pontoon bridge built in 1181 by Nicolò Barattieri. It was called the Ponte della Moneta, presumably because of the mint that stood near its eastern entrance. The development and importance of the Rialto market on the eastern bank increased traffic on the floating bridge, so it was replaced in 1255 by a wooden bridge. This structure had two ramps meeting at a movable central section, that could be raised to allow the passage of tall ships. The connection with the market eventually led to a change of name for the bridge. During the first half of the 15th century, two rows of shops were built along the sides of the bridge. The rents brought an income to the State Treasury, which helped maintain the bridge. Maintenance was vital for the timber bridge. It was partly burnt in the revolt led by Bajamonte Tiepolo in 1310. In 1444, it collapsed under the weight of a crowd rushing to see the marriage of the Marquis of Ferrara and it collapsed again in 1524. The idea of rebuilding the bridge in stone was first proposed in 1503. Several projects were considered over the following decades. In 1551, the authorities requested proposals for the renewal of the Rialto Bridge, among other things. Plans were offered by famous architects, such as Jacopo Sansovino, Palladio and Vignola, but all involved a Classical approach with several arches, which was judged inappropriate to the situation. Michelangelo also was considered as designer of the bridge. Other names It was called Shylock's bridge in Robert Browning's poem "A Toccata of Galuppi's".
Technology
Bridges
974623
https://en.wikipedia.org/wiki/Silver%20%28color%29
Silver (color)
Silver or metallic gray is a color tone resembling gray that is a representation of the color of polished silver. The visual sensation usually associated with the metal silver is its metallic shine. This cannot be reproduced by a simple solid color because the shiny effect is due to the material's brightness varying with the surface angle to the light source. In addition, there is no mechanism for showing metallic or fluorescent colors on a computer without resorting to rendering software that simulates the action of light on a shiny surface. Consequently, in art and in heraldry, one would typically use a metallic paint that glitters like real silver. A matte gray color could also be used to represent silver. History The first recorded use of silver as a color name in English was in 1481. In heraldry, the word argent is used, derived from Latin argentum over Medieval French argent. Silver Displayed at right is the web color silver. Since version 3.2 of HTML "silver" is a name for one of the 16 basic-VGA-colors. HTML-example: <body bgcolor="silver"> CSS-example: body { background-color:silver; } Variations of silver Silver (Crayola) Crayola crayons have a color called silver which is a pale tone of silver color. This silver has been a Crayola color since 1903. Crayola's silver is not a neutral grayscale color but a warm gray with a very slight tinge of orange-red. Silver pink The color silver pink is displayed on the right. The color name silver pink first came into use in 1948. The source of this color is the Plochere Color System, a color system formulated in 1948 that is widely used by interior designers. Silver sand On the right is displayed the color silver sand. The color name silver sand for this silver-tone has been used since 2001 when it was promulgated as one of the colors on the Xona.com Color List. Silver chalice On the right is displayed the color silver chalice. The color name silver chalice for this tone of silver has been in use since 2001 when it was promulgated as one of the colors on the Xona.com Color List. Roman silver On the right is displayed the color Roman silver. Roman silver, a blue-gray tone of silver, is one of the colors on the Resene Color List. Old silver At right is displayed the color old silver. Old silver is a color formulated to resemble tarnished silver. The first recorded use of old silver as a color name in English was in 1905. The normalized color coordinates for old silver are identical to battleship gray. Sonic silver Sonic silver is a tone of silver included in Metallic FX crayons, specialty crayons formulated by Crayola in 2001. Silver in nature Plants A silver birch is a tree in the birch family. The leaves are whitish silver on the underside. A silver fir is a valuable timber tree that originated in Europe. A silver maple is characterized by lacy, delicate leaves that are lighter grayish-green on the underside. These trees get their name from the shimmering effect the two-toned leaves give when fluttering in a breeze. Animals A silverfish is an insect which may eat paper or cloth. Many fish have a silver color. A silver fox is a "genetically determined phase of the common red fox in which the pelt is black tipped with white". A silverback gorilla is an adult male gorilla. Silver in culture Aphorisms The expression "every cloud has a silver lining" is used to point out that something good can often come out of even a bad situation. 
The expression "silver-tongued" refers to a person who possesses the power of fluent, persuasive, eloquent, and witty speech. The expression "born with a silver spoon in his/her mouth" means someone is born into a wealthy or well-to-do family. Astronomy The Chinese name Silver River (銀河) is used throughout East Asia, including Korea and Japan to denote the Milky Way Galaxy (An alternative name for the Milky Way in ancient China, especially in poems, is "Heavenly Han River"(天汉).). In Japanese, "Silver River" (銀河 ginga) means galaxies in general, and the Milky Way is called the "Silver River System" (銀河系 gingakei) or the "River of Heaven" (天の川 Amanokawa or Amanogawa). Film The silver screen is a poetic name for a motion picture screen. This metaphor derived from the early 20th century when all movies were filmed in black and white, and some screens of the era used metallic silver as a reflecting agent. Science fiction films often show spaceship or starship crews wearing silver body suits. Silver City is a 2004 political satire and drama film written and directed by John Sayles. Geography Nevada is referred to as the silver state because of the historically rich silver mines located there, such as the Comstock Lode. Gerontology The aging of the baby boomers has been called the "silver tsunami", although this phrase is controversial due to its ageist connotations. When someone 55 or older gets divorced, it is called a "silver divorce". Heraldry In heraldry there is no distinction between silver and white, represented as "argent". In English heraldry argent (silver) or white signified brightness, purity, virtue, or innocence. Literature The Silver Cord is a 1926 play by Sidney Howard about the emotional tie between a mother and a son, and the term "silver cord" is sometimes used to represent this tie. Silver Child is the first in The Silver Sequence is a fantasy brook trilogy by Cliff McNish consisting of Silver Child, Silver City and Silver World. The Silver Chair is a book in C. S. Lewis's allegorical fantasy series The Chronicles of Narnia. Marriage The 25th wedding anniversary is called the silver anniversary; guests at a 25th wedding anniversary party are expected to bring gifts made of silver. By extension, the 25th anniversary of any significant event is called its Silver Jubilee. Military The Silver Star is the third-highest decoration that can be awarded by the U.S. Military. Music Silver Apples was a psychedelic electronic music duo from New York City that formed in 1967. Silverhead was a British band, led by singer/actor Michael Des Barres. They were a part of the glam rock music scene of the early 1970s. Silver Convention was a popular disco group. Silverchair is a contemporary Australian rock band. Silver Fox is a song by RJD2 from his 2002 album Deadringer. Panelology The Silver Surfer is a popular comic book character. Silver Fox is a character in the Marvel Comics universe. Parapsychology Those who claim to have had out-of-body experiences sometimes report that they observe a silver cord connecting their astral body to their physical body. Politics The Silver Shirts was an American fascist organization during the 1930s. Real estate The Silverdome, a stadium in Pontiac, Michigan constructed in 1975 for $55,000,000 (about $220,000,000 in 2009 dollars), sold in 2009 for $583,000, symbolizing the collapse of real estate prices in the Detroit metropolitan area due to deindustrialization in the rust belt. Religion In Paganism, silver represents wisdom, intelligence, and memory. 
It has a feminine energy and is used to develop psychic ability. Role playing games In Dungeons & Dragons, the silver dragon is one of the metallic dragons. School colors Silver is one of two school colors of Christopher Newport University. Scouting The Silver Wolf Award is the highest award made by The Scout Association "for services of the most exceptional character". The Silver Award is the highest award for Cadettes in the Girl Scouts of the United States of America (GSUSA). Sexuality In the bandana code of the gay leather subculture, wearing a silver lamé bandana on the left means that one is a rock star, movie star, celebrity, or big-time groupie; wearing a silver lamé bandana on the right means that one is a groupie looking to have sex with one of the aforementioned types of people. Sports The Las Vegas Raiders of the National Football League and the San Antonio Spurs of the National Basketball Association use silver as one of their primary colors, along with black. The Detroit Lions football team uses the color silver along with the color Honolulu blue for its team logo and uniforms.
Physical sciences
Colors
Physics
975020
https://en.wikipedia.org/wiki/Messier%20106
Messier 106
Messier 106 (also known as NGC 4258) is an intermediate spiral galaxy in the constellation Canes Venatici. It was discovered by Pierre Méchain in 1781. M106 is at a distance of about 22 to 25 million light-years away from Earth. M106 contains an active nucleus classified as a Type 2 Seyfert, and the presence of a central supermassive black hole has been demonstrated from radio-wavelength observations of the rotation of a disk of molecular gas orbiting within the inner light-year around the black hole. NGC 4217 is a possible companion galaxy of Messier 106. Besides the two visible arms, it has two "anomalous arms" detectable using an X-ray telescope. Characteristics M106 has a water vapor megamaser (the equivalent of a laser operating in microwave instead of visible light and on a galactic scale) that is seen by the 22-GHz line of ortho-H2O that evidences dense and warm molecular gas. Water masers are useful for observing nuclear accretion disks in active galaxies. The water masers in M106 enabled the first case of a direct measurement of the distance to a galaxy, thereby providing an independent anchor for the cosmic distance ladder. M106 has a slightly warped, thin, almost edge-on Keplerian disc which is on a subparsec scale. It surrounds a central area with mass . It is one of the largest and brightest nearby galaxies, similar in size and luminosity to the Andromeda Galaxy. The supermassive black hole at the core has a mass of . M106 has also played an important role in calibrating the cosmic distance ladder. Before, Cepheid variables from other galaxies could not be used to measure distances since they cover ranges of metallicities different from the Milky Way's. M106 contains Cepheid variables similar to both the metallicities of the Milky Way and other galaxies' Cepheids. By measuring the distance of the Cepheids with metallicities similar to our galaxy, astronomers are able to recalibrate the other Cepheids with different metallicities, a key fundamental step in improving quantification of distances to other galaxies in the universe. Supernovae Two supernovae have been observed in M106: SN 1981K (type II, mag. 17) was reported by E. Hummel and verified by Paul Wild by examining archival photos dated 3 November 1981. SN 2014bc (type II, mag. 14.8) was discovered by the PS1 Science Consortium 3Pi survey on 19 May 2014.
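The role of M106's Cepheids in the distance ladder can be illustrated with a short calculation. The sketch below is illustrative only: the period–luminosity (Leavitt-law) coefficients and the input period and apparent magnitude are placeholder assumptions, not measured properties of Cepheids in M106.

import math

def cepheid_distance_mpc(period_days, apparent_mag_v):
    # Generic V-band Leavitt law M_V = a*(log10 P - 1) + b; a and b are
    # placeholder coefficients used only to demonstrate the method.
    a, b = -2.8, -4.2
    abs_mag = a * (math.log10(period_days) - 1.0) + b
    mu = apparent_mag_v - abs_mag          # distance modulus m - M
    d_pc = 10 ** (mu / 5.0 + 1.0)          # parsecs from the distance modulus
    return d_pc / 1e6                      # convert to megaparsecs

# Example with invented inputs: a 30-day Cepheid of apparent magnitude 25.
print(cepheid_distance_mpc(30.0, 25.0))    # roughly 13 Mpc for these assumed values

Recalibrating the zero point b against Cepheids of different metallicity is, in essence, how the metallicity correction described above propagates through such distance estimates.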
Physical sciences
Notable galaxies
Astronomy
11730924
https://en.wikipedia.org/wiki/Merit%20order
Merit order
The merit order is a way of ranking available sources of energy, especially electrical generation, based on ascending order of price (which may reflect the order of their short-run marginal costs of production) and sometimes pollution, together with amount of energy that will be generated. In a centralized management scheme, the ranking is such that those with the lowest marginal costs are the first sources to be brought online to meet demand, and the plants with the highest marginal costs are the last to be brought online. Dispatching power generation in this way, known as economic dispatch, minimizes the cost of production of electricity. Sometimes generating units must be started out of merit order, due to transmission congestion, system reliability or other reasons. In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. The effect of renewable energy on merit order The high demand for electricity during peak demand pushes up the bidding price for electricity, and the often relatively inexpensive baseload power supply mix is supplemented by 'peaking power plants', which produce electrical power at higher cost, and therefore are priced higher for their electrical output. Increasing the supply of renewable energy tends to lower the average price per unit of electricity because wind energy and solar energy have very low marginal costs: they do not have to pay for fuel, and their only marginal cost is operations and maintenance. With costs often further offset by feed-in-tariff revenue, their electricity is, as a result, less costly on the spot market than that from coal or natural gas, and transmission companies typically buy from them first. Solar and wind electricity therefore substantially reduce the amount of highly priced peak electricity that transmission companies need to buy during the times when solar or wind power is available, reducing the overall cost. A study by the Fraunhofer Institute ISI found that this "merit order effect" had allowed solar power to reduce the price of electricity on the German energy exchange by 10% on average, and by as much as 40% in the early afternoon, in 2007; as more solar electricity is fed into the grid, peak prices may come down even further. By 2006, the "merit order effect" meant that the savings in electricity costs to German consumers, on average, more than offset the support payments paid by customers for renewable electricity generation. A 2013 study estimated the merit order effect of both wind and photovoltaic electricity generation in Germany between the years 2008 and 2012. For each additional GWh of renewables fed into the grid, the price of electricity in the day-ahead market was reduced by 0.11–0.13¢/kWh. The total merit order effect of wind and photovoltaics ranged from 0.5¢/kWh in 2010 to more than 1.1¢/kWh in 2012. The near-zero marginal cost of wind and solar energy does not, however, translate into zero marginal cost of peak load electricity in a competitive open electricity market system, as wind and solar supply alone often cannot be dispatched to meet peak demand without incurring marginal transmission costs and potentially the costs of batteries. 
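The merit-order mechanism described above can be sketched in a few lines of Python. The plants, capacities, marginal costs and demand figure below are invented for illustration; the point is only that the most expensive unit needed to cover demand sets the clearing price, and that adding zero-marginal-cost capacity pushes more expensive units off the margin.

def clearing_price(offers, demand_mw):
    # offers: list of (marginal_cost_eur_per_mwh, capacity_mw) pairs
    dispatched = 0.0
    for cost, capacity in sorted(offers):      # ascending merit order
        dispatched += capacity
        if dispatched >= demand_mw:
            return cost                        # the marginal unit sets the price
    raise ValueError("insufficient capacity to meet demand")

offers = [(0, 20000),     # wind and solar: near-zero marginal cost
          (20, 15000),    # nuclear
          (40, 25000),    # coal
          (70, 20000),    # gas
          (150, 10000)]   # oil-fired peakers

print(clearing_price(offers, 65000))                      # 70: a gas plant is on the margin
print(clearing_price([(0, 30000)] + offers[1:], 65000))   # 40: extra renewables push gas off the margin

The drop in the clearing price in the second call is the "merit order effect" discussed above.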
The purpose of the merit order dispatching paradigm was to enable the lowest net cost electricity to be dispatched first, thus minimising overall electricity system costs to consumers. Intermittent wind and solar are sometimes able to fulfil this economic function. If peak wind (or solar) supply and peak demand both coincide in time and quantity, the price reduction is larger. On the other hand, solar energy tends to be most abundant at noon, whereas peak demand is late afternoon in warm climates, leading to the so-called duck curve. A 2008 study by the Fraunhofer Institute ISI in Karlsruhe, Germany found that wind power saves German consumers €5 billion a year. It is estimated to have lowered prices in European countries with high wind generation by between €3 and €23/MWh. On the other hand, renewable energy in Germany has increased the price of electricity: consumers there now pay an additional 52.8 €/MWh solely for renewable energy (see German Renewable Energy Sources Act), and the average price of electricity in Germany has risen to 26¢/kWh. Increasing electrical grid costs for new transmission, market trading and storage associated with wind and solar are not included in the marginal cost of power sources; instead, grid costs are combined with source costs at the consumer end. Economic dispatch Economic dispatch is the short-term determination of the optimal output of a number of electricity generation facilities, to meet the system load, at the lowest possible cost, subject to transmission and operational constraints. The Economic Dispatch Problem can be solved by specialized computer software which should satisfy the operational and system constraints of the available resources and corresponding transmission capabilities. In the US Energy Policy Act of 2005, the term is defined as "the operation of generation facilities to produce energy at the lowest cost to reliably serve consumers, recognising any operational limits of generation and transmission facilities". The main idea is that, in order to satisfy the load at a minimum total cost, the set of generators with the lowest marginal costs must be used first, with the marginal cost of the final generator needed to meet load setting the system marginal cost. This is the cost of delivering one additional MWh of energy onto the system. Due to transmission constraints, this cost can vary at different locations within the power grid; these different cost levels are identified as "locational marginal prices" (LMPs). The historic methodology for economic dispatch was developed to manage fossil fuel burning power plants, relying on calculations involving the input/output characteristics of power stations. Basic mathematical formulation The following is based on an analytical methodology following Biggar and Hesamzadeh (2014) and Kirschen (2010). The economic dispatch problem can be thought of as maximising the economic welfare of a power network whilst meeting system constraints. For a network with n buses (nodes), suppose that G_k is the rate of generation and D_k is the rate of consumption at bus k. Suppose, further, that C_k(G_k) is the cost function of producing power at bus k (i.e., the rate at which the generator incurs costs when producing at rate G_k), and that B_k(D_k) is the rate at which the load at bus k receives value or benefits (expressed in currency units) when consuming at rate D_k. 
The total welfare is then The economic dispatch task is to find the combination of rates of production and consumption () which maximise this expression subject to a number of constraints: The first constraint, which is necessary to interpret the constraints that follow, is that the net injection at each bus is equal to the total production at that bus less the total consumption: The power balance constraint requires that the sum of the net injections at all buses must be equal to the power losses in the branches of the network: The power losses depend on the flows in the branches and thus on the net injections as shown in the above equation. However it cannot depend on the injections on all the buses as this would give an over-determined system. Thus one bus is chosen as the Slack bus and is omitted from the variables of the function . The choice of Slack bus is entirely arbitrary, here bus is chosen. The second constraint involves capacity constraints on the flow on network lines. For a system with lines this constraint is modeled as: where is the flow on branch , and is the maximum value that this flow is allowed to take. Note that the net injection at the slack bus is not included in this equation for the same reasons as above. These equations can now be combined to build the Lagrangian of the optimization problem: where π and μ are the Lagrangian multipliers of the constraints. The conditions for optimality are then: where the last condition is needed to handle the inequality constraint on line capacity. Solving these equations is computationally difficult as they are nonlinear and implicitly involve the solution of the power flow equations. The analysis can be simplified using a linearised model called a DC power flow. There is a special case which is found in much of the literature. This is the case in which demand is assumed to be perfectly inelastic (i.e., unresponsive to price). This is equivalent to assuming that for some very large value of and inelastic demand . Under this assumption, the total economic welfare is maximised by choosing . The economic dispatch task reduces to: Subject to the constraint that and the other constraints set out above. Environmental dispatch In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. Due to the added complexity, a number of algorithms have been employed to optimize this environmental/economic dispatch problem. Notably, a modified bees algorithm implementing chaotic modeling principles was successfully applied not only in silico, but also on a physical model system of generators. Other methods used to address the economic emission dispatch problem include Particle Swarm Optimization (PSO) and neural networks Another notable algorithm combination is used in a real-time emissions tool called Locational Emissions Estimation Methodology (LEEM) that links electric power consumption and the resulting pollutant emissions. The LEEM estimates changes in emissions associated with incremental changes in power demand derived from the locational marginal price (LMP) information from the independent system operators (ISOs) and emissions data from the US Environmental Protection Agency (EPA). 
LEEM was developed at Wayne State University as part of a project aimed at optimizing water transmission systems in Detroit, MI starting in 2010 and has since found a wider application as a load profile management tool that can help reduce generation costs and emissions.
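Because the display equations of the "Basic mathematical formulation" section above were lost in extraction, the following is a compact restatement of the same welfare-maximisation problem. The notation (G_k, D_k, C_k, B_k, I_k, F_l) follows the verbal definitions given above but is chosen here; it is not necessarily the article's original symbols.

\max_{\{G_k\},\,\{D_k\}} \; W = \sum_{k} \bigl[ B_k(D_k) - C_k(G_k) \bigr]

subject to

I_k = G_k - D_k \quad \text{(net injection at bus } k\text{)},
\qquad
\sum_{k} I_k = L(I_2,\dots,I_n) \quad \text{(power balance; bus 1 is the slack bus)},
\qquad
\lvert F_l(I_2,\dots,I_n) \rvert \le F_l^{\max}, \quad l = 1,\dots,m \quad \text{(line capacity limits)}.

In the special case of perfectly inelastic demand described above, each D_k is fixed and the problem reduces to minimising total generation cost, \min \sum_k C_k(G_k), subject to the same network constraints.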
Technology
Concepts
null
2970774
https://en.wikipedia.org/wiki/Radiant%20flux
Radiant flux
In radiometry, radiant flux or radiant power is the radiant energy emitted, reflected, transmitted, or received per unit time, and spectral flux or spectral power is the radiant flux per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant flux is the watt (W), one joule per second (), while that of spectral flux in frequency is the watt per hertz () and that of spectral flux in wavelength is the watt per metre ()—commonly the watt per nanometre (). Mathematical definitions Radiant flux Radiant flux, denoted ('e' for "energetic", to avoid confusion with photometric quantities), is defined as where is the time; is the radiant energy passing out of a closed surface ; is the Poynting vector, representing the current density of radiant energy; is the normal vector of a point on ; represents the area of ; represents the time period. The rate of energy flow through the surface fluctuates at the frequency of the radiation, but radiation detectors only respond to the average rate of flow. This is represented by replacing the Poynting vector with the time average of its norm, giving where is the time average, and is the angle between and Spectral flux Spectral flux in frequency, denoted Φe,ν, is defined as where is the frequency. Spectral flux in wavelength, denoted , is defined as where is the wavelength. SI radiometry units
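The defining relations described verbally above can be written compactly. These are the standard radiometric definitions (with Q_e the radiant energy, t time, ν frequency and λ wavelength), stated here because the article's original formulas were lost:

\Phi_{\mathrm{e}} = \frac{\partial Q_{\mathrm{e}}}{\partial t},
\qquad
\Phi_{\mathrm{e},\nu} = \frac{\partial \Phi_{\mathrm{e}}}{\partial \nu},
\qquad
\Phi_{\mathrm{e},\lambda} = \frac{\partial \Phi_{\mathrm{e}}}{\partial \lambda},
\qquad
\Phi_{\mathrm{e}} = \int_0^{\infty} \Phi_{\mathrm{e},\lambda}\,\mathrm{d}\lambda .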
Physical sciences
Electromagnetic radiation
Physics
2971205
https://en.wikipedia.org/wiki/Van%20Deemter%20equation
Van Deemter equation
The van Deemter equation in chromatography, named for Jan van Deemter, relates the variance per unit length of a separation column to the linear mobile phase velocity by considering physical, kinetic, and thermodynamic properties of a separation. These properties include pathways within the column, diffusion (axial and longitudinal), and mass transfer kinetics between stationary and mobile phases. In liquid chromatography, the mobile phase velocity is taken as the exit velocity, that is, the ratio of the flow rate in ml/second to the cross-sectional area of the ‘column-exit flow path.’ For a packed column, the cross-sectional area of the column exit flow path is usually taken as 0.6 times the cross-sectional area of the column. Alternatively, the linear velocity can be taken as the ratio of the column length to the dead time. If the mobile phase is a gas, then the pressure correction must be applied. The variance per unit length of the column is taken as the ratio of the column length to the column efficiency in theoretical plates. The van Deemter equation is a hyperbolic function that predicts that there is an optimum velocity at which there will be the minimum variance per unit column length and, thence, a maximum efficiency. The van Deemter equation was the result of the first application of rate theory to the chromatography elution process. Van Deemter equation The van Deemter equation relates height equivalent to a theoretical plate (HETP) of a chromatographic column to the various flow and kinetic parameters which cause peak broadening, as follows: Where HETP = a measure of the resolving power of the column [m] A = Eddy-diffusion parameter, related to channeling through a non-ideal packing [m] B = diffusion coefficient of the eluting particles in the longitudinal direction, resulting in dispersion [m2 s−1] C = Resistance to mass transfer coefficient of the analyte between mobile and stationary phase [s] u = speed [m s−1] In open tubular capillaries, the A term will be zero as the lack of packing means channeling does not occur. In packed columns, however, multiple distinct routes ("channels") exist through the column packing, which results in band spreading. In the latter case, A will not be zero. The form of the Van Deemter equation is such that HETP achieves a minimum value at a particular flow velocity. At this flow rate, the resolving power of the column is maximized, although in practice, the elution time is likely to be impractical. Differentiating the van Deemter equation with respect to velocity, setting the resulting expression equal to zero, and solving for the optimum velocity yields the following: Plate count The plate height given as: with the column length and the number of theoretical plates can be estimated from a chromatogram by analysis of the retention time for each component and its standard deviation as a measure for peak width, provided that the elution curve represents a Gaussian curve. In this case the plate count is given by: By using the more practical peak width at half height the equation is: or with the width at the base of the peak: Expanded van Deemter The Van Deemter equation can be further expanded to: Where: H is plate height λ is particle shape (with regard to the packing) dp is particle diameter γ, ω, and R are constants Dm is the diffusion coefficient of the mobile phase dc is the capillary diameter df is the film thickness Ds is the diffusion coefficient of the stationary phase. 
u is the linear velocity Rodrigues equation The Rodrigues equation, named for Alírio Rodrigues, is an extension of the Van Deemter equation used to describe the efficiency of a bed of permeable (large-pore) particles. The equation is: where and is the intraparticular Péclet number.
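For reference, the classical van Deemter relation and the plate-count expressions described above can be written out. These are the standard textbook forms, consistent with the definitions of A, B, C, u, L, t_R and the peak widths given earlier, supplied here because the article's original equations were lost:

\mathrm{HETP} = A + \frac{B}{u} + C\,u,
\qquad
u_{\mathrm{opt}} = \sqrt{\frac{B}{C}},
\qquad
\mathrm{HETP}_{\min} = A + 2\sqrt{BC},

N = \frac{L}{\mathrm{HETP}},
\qquad
N = \left(\frac{t_R}{\sigma}\right)^{2} = 5.54\left(\frac{t_R}{w_{1/2}}\right)^{2} = 16\left(\frac{t_R}{w_b}\right)^{2},

where w_{1/2} is the peak width at half height and w_b the width at the base of the peak.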
Physical sciences
Chromatography
Chemistry
2973295
https://en.wikipedia.org/wiki/Satellite%20galaxy
Satellite galaxy
A satellite galaxy is a smaller companion galaxy that travels on bound orbits within the gravitational potential of a more massive and luminous host galaxy (also known as the primary galaxy). Satellite galaxies and their constituents are bound to their host galaxy, in the same way that planets within the Solar System are gravitationally bound to the Sun. While most satellite galaxies are dwarf galaxies, satellite galaxies of large galaxy clusters can be much more massive. The Milky Way is orbited by about fifty satellite galaxies, the largest of which is the Large Magellanic Cloud. Moreover, satellite galaxies are not the only astronomical objects that are gravitationally bound to larger host galaxies (see globular clusters). For this reason, astronomers have defined galaxies as gravitationally bound collections of stars that exhibit properties that cannot be explained by a combination of baryonic matter (i.e. ordinary matter) and Newton's laws of gravity. For example, measurements of the orbital speed of stars and gas within spiral galaxies result in a velocity curve that deviates significantly from the theoretical prediction. This observation has motivated various explanations such as the theory of dark matter and modifications to Newtonian dynamics. Therefore, despite also being satellites of host galaxies, globular clusters should not be mistaken for satellite galaxies. Satellite galaxies are not only more extended and diffuse compared to globular clusters, but are also enshrouded in massive dark matter halos that are thought to have been endowed to them during the formation process. Satellite galaxies generally lead tumultuous lives due to their chaotic interactions with both the larger host galaxy and other satellites. For example, the host galaxy is capable of disrupting the orbiting satellites via tidal and ram pressure stripping. These environmental effects can remove large amounts of cold gas from satellites (i.e. the fuel for star formation), and this can result in satellites becoming quiescent in the sense that they have ceased to form stars. Moreover, satellites can also collide with their host galaxy resulting in a minor merger (i.e. merger event between galaxies of significantly different masses). On the other hand, satellites can also merge with one another resulting in a major merger (i.e. merger event between galaxies of comparable masses). Galaxies are mostly composed of empty space, interstellar gas and dust, and therefore galaxy mergers do not necessarily involve collisions between objects from one galaxy and objects from the other, however, these events generally result in much more massive galaxies. Consequently, astronomers seek to constrain the rate at which both minor and major mergers occur to better understand the formation of gigantic structures of gravitationally bound conglomerations of galaxies such as galactic groups and clusters. History Early 20th century Prior to the 20th century, the notion that galaxies existed beyond the Milky Way was not well established. In fact, the idea was so controversial at the time that it led to what is now heralded as the "Shapley-Curtis Great Debate" aptly named after the astronomers Harlow Shapley and Heber Doust Curtis that debated the nature of "nebulae" and the size of the Milky Way at the National Academy of Sciences on April 26, 1920. 
Shapley argued that the Milky Way was the entire universe (spanning over 100,000 lightyears or 30 kiloparsec across) and that all of the observed "nebulae" (currently known as galaxies) resided within this region. On the other hand, Curtis argued that the Milky Way was much smaller and that the observed nebulae were in fact galaxies similar to the Milky Way. This debate was not settled until late 1923 when the astronomer Edwin Hubble measured the distance to M31 (currently known as the Andromeda galaxy) using Cepheid Variable stars. By measuring the period of these stars, Hubble was able to estimate their intrinsic luminosity and upon combining this with their measured apparent magnitude he estimated a distance of 300 kpc, which was an order-of-magnitude larger than the estimated size of the universe made by Shapley. This measurement verified that not only was the universe much larger than previously expected, but it also demonstrated that the observed nebulae were actually distant galaxies with a wide range of morphologies (see Hubble sequence). Modern times Despite Hubble's discovery that the universe was teeming with galaxies, a majority of the satellite galaxies of the Milky Way and the Local Group remained undetected until the advent of modern astronomical surveys such as the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES). In particular, the Milky Way is currently known to host 59 satellite galaxies (see satellite galaxies of the Milky Way), of which two known as the Large Magellanic Cloud and Small Magellanic Cloud have been observable in the Southern Hemisphere with the unaided eye since ancient times. Nevertheless, modern cosmological theories of galaxy formation and evolution predict a much larger number of satellite galaxies than what is observed (see missing satellites problem). However, more recent high resolution simulations have demonstrated that the current number of observed satellites pose no threat to the prevalent theory of galaxy formation. Motivations to study satellite galaxies Spectroscopic, photometric and kinematic observations of satellite galaxies have yielded a wealth of information that has been used to study, among other things, the formation and evolution of galaxies, the environmental effects that enhance and diminish the rate of star formation within galaxies and the distribution of dark matter within the dark matter halo. As a result, satellite galaxies serve as a testing ground for prediction made by cosmological models. Classification of satellite galaxies As mentioned above, satellite galaxies are generally categorized as dwarf galaxies and therefore follow a similar Hubble classification scheme as their host with the minor addition of a lowercase "d" in front of the various standard types to designate the dwarf galaxy status. These types include dwarf irregular (dI), dwarf spheroidal (dSph), dwarf elliptical (dE) and dwarf spiral (dS). However, out of all of these types it is believed that dwarf spirals are not satellites, but rather dwarf galaxies that are only found in the field. Dwarf irregular satellite galaxies Dwarf irregular satellite galaxies are characterized by their chaotic and asymmetric appearance, low gas fractions, high star formation rate and low metallicity. Three of the closest dwarf irregular satellites of the Milky Way include the Small Magellanic Cloud, Canis Major Dwarf, and the newly discovered Antlia 2. 
Dwarf elliptical satellite galaxies Dwarf elliptical satellite galaxies are characterized by their oval appearance on the sky, disordered motion of constituent stars, moderate to low metallicity, low gas fractions and old stellar population. Dwarf elliptical satellite galaxies in the Local Group include NGC 147, NGC 185, and NGC 205, which are satellites of our neighboring Andromeda galaxy. Dwarf spheroidal satellite galaxies Dwarf spheroidal satellite galaxies are characterized by their diffuse appearance, low surface brightness, high mass-to-light ratio (i.e. dark matter dominated), low metallicity, low gas fractions and old stellar population. Moreover, dwarf spheroidals make up the largest population of known satellite galaxies of the Milky Way. A few of these satellites include Hercules, Pisces II and Leo IV, which are named after the constellation in which they are found. Transitional types As a result of minor mergers and environmental effects, some dwarf galaxies are classified as intermediate or transitional type satellite galaxies. For example, Phoenix and LGS3 are classified as intermediate types that appear to be transitioning from dwarf irregulars to dwarf spheroidals. Furthermore, the Large Magellanic Cloud is considered to be in the process of transitioning from a dwarf spiral to a dwarf irregular. Formation of satellite galaxies According to the standard model of cosmology (known as the ΛCDM model), the formation of satellite galaxies is intricately connected to the observed large-scale structure of the Universe. Specifically, the ΛCDM model is based on the premise that the observed large-scale structure is the result of a bottom-up hierarchical process that began after the recombination epoch in which electrically neutral hydrogen atoms were formed as a result of free electrons and protons binding together. As the ratio of neutral hydrogen to free protons and electrons grew, so did fluctuations in the baryonic matter density. These fluctuations rapidly grew to the point that they became comparable to dark matter density fluctuations. Moreover, the smaller mass fluctuations grew to nonlinearity, became virialized (i.e. reached gravitational equilibrium), and were then hierarchically clustered within successively larger bound systems. The gas within these bound systems condensed and rapidly cooled into cold dark matter halos that steadily increased in size by coalescing together and accumulating additional gas via a process known as accretion. The largest bound objects formed from this process are known as superclusters, such as the Virgo Supercluster, that contain smaller clusters of galaxies that are themselves surrounded by even smaller dwarf galaxies. Furthermore, in this model dwarfs galaxies are considered to be the fundamental building blocks that give rise to more massive galaxies, and the satellites that are observed around these galaxies are the dwarfs that have yet to be consumed by their host. Accumulation of mass in dark matter halos A crude yet useful method to determine how dark matter halos progressively gain mass through mergers of less massive halos can be explained using the excursion set formalism, also known as the extended Press-Schechter formalism (EPS). 
Among other things, the EPS formalism can be used to infer the fraction of mass that originated from collapsed objects of a specific mass at an earlier time by applying the statistics of Markovian random walks to the trajectories of mass elements in -space, where and represent the mass variance and overdensity, respectively. In particular the EPS formalism is founded on the ansatz that states "the fraction of trajectories with a first upcrossing of the barrier at is equal to the mass fraction at time that is incorporated in halos with masses ". Consequently, this ansatz ensures that each trajectory will upcross the barrier given some arbitrarily large , and as a result it guarantees that each mass element will ultimately become part of a halo. Furthermore, the fraction of mass that originated from collapsed objects of a specific mass at an earlier time can be used to determine average number of progenitors at time within the mass interval that have merged to produce a halo of at time . This is accomplished by considering a spherical region of mass with a corresponding mass variance and linear overdensity , where is the linear growth rate that is normalized to unity at time and is the critical overdensity at which the initial spherical region has collapsed to form a virialized object. Mathematically, the progenitor mass function is expressed as:where and is the Press-Schechter multiplicity function that describes the fraction of mass associated with halos in a range . Various comparisons of the progenitor mass function with numerical simulations have concluded that good agreement between theory and simulation is obtained only when is small, otherwise the mass fraction in high mass progenitors is significantly underestimated, which can be attributed to the crude assumptions such as assuming a perfectly spherical collapse model and using a linear density field as opposed to a non-linear density field to characterize collapsed structures. Nevertheless, the utility of the EPS formalism is that it provides a computationally friendly approach for determining properties of dark matter halos. Halo merger rate Another utility of the EPS formalism is that it can be used to determine the rate at which a halo of initial mass M merges with a halo with mass between M and M+ΔM. This rate is given by where , . In general the change in mass, , is the sum of a multitude of minor mergers. Nevertheless, given an infinitesimally small time interval it is reasonable to consider the change in mass to be due to a single merger events in which transitions to . Galactic cannibalism (minor mergers) Throughout their lifespan, satellite galaxies orbiting in the dark matter halo experience dynamical friction and consequently descend deeper into the gravitational potential of their host as a result of orbital decay. Throughout the course of this descent, stars in the outer region of the satellite are steadily stripped away due to tidal forces from the host galaxy. This process, which is an example of a minor merger, continues until the satellite is completely disrupted and consumed by the host galaxies. Evidence of this destructive process can be observed in stellar debris streams around distant galaxies. Orbital decay rate As satellites orbit their host and interact with each other they progressively lose small amounts of kinetic energy and angular momentum due to dynamical friction. Consequently, the distance between the host and the satellite progressively decreases in order to conserve angular momentum. 
This process continues until the satellite ultimately merges with the host galaxy. Furthermore, if we assume that the host is a singular isothermal sphere (SIS) and the satellite is a SIS that is sharply truncated at the radius at which it begins to accelerate towards the host (known as the Jacobi radius), then the time that it takes for dynamical friction to result in a minor merger can be approximated as follows: where is the initial radius at , is the velocity dispersion of the host galaxy, is the velocity dispersion of the satellite and is the Coulomb logarithm defined as with , and respectively representing the maximum impact parameter, the half-mass radius and the typical relative velocity. Moreover, both the half-mass radius and the typical relative velocity can be rewritten in terms of the radius and velocity dispersion such that and . Using the Faber-Jackson relation, the velocity dispersion of satellites and their host can be estimated individually from their observed luminosity. Therefore, using the equation above it is possible to estimate the time that it takes for a satellite galaxy to be consumed by the host galaxy. Minor merger driven star formation In 1978, pioneering work involving the measurement of the colors of merger remnants by the astronomers Beatrice Tinsley and Richard Larson gave rise to the notion that mergers enhance star formation. Their observations showed that an anomalous blue color was associated with the merger remnants. Prior to this discovery, astronomers had already classified stars (see stellar classifications) and it was known that young, massive stars were bluer due to their light radiating at shorter wavelengths. Furthermore, it was also known that these stars live short lives due to their rapid consumption of fuel to remain in hydrostatic equilibrium. Therefore, the observation that merger remnants were associated with large populations of young, massive stars suggested that mergers induced rapid star formation (see starburst galaxy). Since this discovery was made, various observations have verified that mergers do indeed induce vigorous star formation. Despite major mergers being far more effective at driving star formation than minor mergers, it is known that minor mergers are significantly more common than major mergers, so the cumulative effect of minor mergers over cosmic time is postulated to also contribute heavily to bursts of star formation. Minor mergers and the origins of thick disk components Observations of edge-on galaxies suggest the universal presence of a thin disk, thick disk and halo component of galaxies. Despite the apparent ubiquity of these components, there is still ongoing research to determine if the thick disk and thin disk are truly distinct components. Nevertheless, many theories have been proposed to explain the origin of the thick disk component, and among these theories is one that involves minor mergers. In particular, it is speculated that the preexisting thin disk component of a host galaxy is heated during a minor merger and consequently the thin disk expands to form a thicker disk component.
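As a rough illustration of the orbital-decay timescale discussed in the "Orbital decay rate" paragraph above, the sketch below uses the standard circular-orbit dynamical-friction estimate t_df ≈ 1.17 r_i² v_c / (G M_sat ln Λ) from Binney & Tremaine. This is a simpler stand-in for the truncated-isothermal-sphere expression referenced in the text, and the satellite and host parameters are invented for illustration.

import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
KPC = 3.086e19       # metres per kiloparsec
MSUN = 1.989e30      # kilograms per solar mass
GYR = 3.156e16       # seconds per gigayear

def decay_time_gyr(r_i_kpc, v_c_kms, m_sat_msun, coulomb_log):
    # Dynamical-friction orbital decay time for a satellite on a circular
    # orbit in an isothermal host halo (Binney & Tremaine estimate).
    r = r_i_kpc * KPC
    v = v_c_kms * 1e3
    m = m_sat_msun * MSUN
    return 1.17 * r**2 * v / (G * m * coulomb_log) / GYR

# Example: a fairly massive satellite 50 kpc from a Milky-Way-like host.
print(decay_time_gyr(50.0, 220.0, 2e10, 3.0))   # roughly 2.4 Gyr for these assumed values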
Physical sciences
Basics_2
Astronomy
2977013
https://en.wikipedia.org/wiki/Night%20heron
Night heron
The night herons are medium-sized herons, 58–65 cm, in the genera Nycticorax, Nyctanassa, and Gorsachius. The genus name Nycticorax derives from the Greek for "night raven" and refers to the largely nocturnal feeding habits of this group of birds, and to the croaking, crow-like call (which sounds almost like a bark) of the best-known species, the black-crowned night heron. In Europe and the Western United States, night heron is often used to refer to the black-crowned night heron, since it is the only member of the genus in those regions. The black-crowned night heron was named the official bird of the city of Oakland, California. Adults are short-necked, short-legged, and stout herons with a primarily brown or grey plumage, and, in most, a black crown. Young birds are brown, flecked with white. At least some of the extinct Mascarene taxa appear to have retained this juvenile plumage in adult birds. Night herons nest alone or in colonies, on platforms of sticks in a group of trees, or on the ground in protected locations such as islands or reedbeds. Three to eight eggs are laid. Night herons stand still at the water's edge and wait to ambush prey, mainly at night. They primarily eat small fish, crustaceans, frogs, aquatic insects, and small mammals. During the day, they rest in trees or bushes. There are seven extant species. The genus Nycticorax has suffered more from extinction than any other genus in the Pelecaniformes, mainly because of its species' capability to colonize small, predator-free oceanic islands and their tendency to evolve towards flightlessness. Night herons in Europe breed mainly in southern and southeastern Europe and migrate across the Sahara to winter in central and west Africa. Genera Nyctanassa Nycticorax Gorsachius
Biology and health sciences
Pelecanimorphae
Animals
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Hadley cell
The Hadley cell, also known as the Hadley circulation, is a global-scale tropical atmospheric circulation that features air rising near the equator, flowing poleward near the tropopause at a height of above the Earth's surface, cooling and descending in the subtropics at around 25 degrees latitude, and then returning equatorward near the surface. It is a thermally direct circulation within the troposphere that emerges due to differences in insolation and heating between the tropics and the subtropics. On a yearly average, the circulation is characterized by a circulation cell on each side of the equator. The Southern Hemisphere Hadley cell is slightly stronger on average than its northern counterpart, extending slightly beyond the equator into the Northern Hemisphere. During the summer and winter months, the Hadley circulation is dominated by a single, cross-equatorial cell with air rising in the summer hemisphere and sinking in the winter hemisphere. Analogous circulations may occur in extraterrestrial atmospheres, such as on Venus and Mars. Global climate is greatly influenced by the structure and behavior of the Hadley circulation. The prevailing trade winds are a manifestation of the lower branches of the Hadley circulation, converging air and moisture in the tropics to form the Intertropical Convergence Zone (ITCZ) where the Earth's heaviest rains are located. Shifts in the ITCZ associated with the seasonal variability of the Hadley circulation cause monsoons. The sinking branches of the Hadley cells give rise to the oceanic subtropical ridges and suppress rainfall; many of the Earth's deserts and arid regions are located in the subtropics coincident with the position of the sinking branches. The Hadley circulation is also a key mechanism for the meridional transport of heat, angular momentum, and moisture, contributing to the subtropical jet stream, the moist tropics, and maintaining a global thermal equilibrium. The Hadley circulation is named after George Hadley, who in 1735 postulated the existence of hemisphere-spanning circulation cells driven by differences in heating to explain the trade winds. Other scientists later developed similar arguments or critiqued Hadley's qualitative theory, providing more rigorous explanations and formalism. The existence of a broad meridional circulation of the type suggested by Hadley was confirmed in the mid-20th century once routine observations of the upper troposphere became available via radiosondes. Observations and climate modelling indicate that the Hadley circulation has expanded poleward since at least the 1980s as a result of climate change, with an accompanying but less certain intensification of the circulation; these changes have been associated with trends in regional weather patterns. Model projections suggest that the circulation will widen and weaken throughout the 21st century due to climate change. Mechanism and characteristics The Hadley circulation describes the broad, thermally direct, and meridional overturning of air within the troposphere over the low latitudes. Within the global atmospheric circulation, the meridional flow of air averaged along lines of latitude are organized into circulations of rising and sinking motions coupled with the equatorward or poleward movement of air called meridional cells. These include the prominent "Hadley cells" centered over the tropics and the weaker "Ferrell cells" centered over the mid-latitudes. 
The Hadley cells result from the contrast of insolation between the warm equatorial regions and the cooler subtropical regions. The uneven heating of Earth's surface results in regions of rising and descending air. Over the course of a year, the equatorial regions absorb more radiation from the Sun than they radiate away. At higher latitudes, the Earth emits more radiation than it receives from the Sun. Without a mechanism to exchange heat meridionally, the equatorial regions would warm and the higher latitudes would cool progressively in disequilibrium. The broad ascent and descent of air results in a pressure gradient force that drives the Hadley circulation and other large-scale flows in both the atmosphere and the ocean, distributing heat and maintaining a global long-term and subseasonal thermal equilibrium. The Hadley circulation covers almost half of the Earth's surface area, spanning from roughly the Tropic of Cancer to the Tropic of Capricorn. Vertically, the circulation occupies the entire depth of the troposphere. The Hadley cells comprising the circulation consist of air carried equatorward by the trade winds in the lower troposphere that ascends when heated near the equator, along with air moving poleward in the upper troposphere. Air that is moved into the subtropics cools and then sinks before returning equatorward to the tropics; the position of the sinking air associated with the Hadley cell is often used as a measure of the meridional width of the global tropics. The equatorward return of air and the strong influence of heating make the Hadley cell a thermally-driven and enclosed circulation. Due to the buoyant rise of air near the equator and the sinking of air at higher latitudes, a pressure gradient develops near the surface with lower pressures near the equator and higher pressures in the subtropics; this provides the motive force for the equatorward flow in the lower troposphere. However, the release of latent heat associated with condensation in the tropics also relaxes the decrease in pressure with height, resulting in higher pressures aloft in the tropics compared to the subtropics for a given height in the upper troposphere; this pressure gradient is stronger than its near-surface counterpart and provides the motive force for the poleward flow in the upper troposphere. Hadley cells are most commonly identified using the mass-weighted, zonally-averaged stream function of meridional winds, but they can also be identified by other measurable or derivable physical parameters such as velocity potential or the vertical component of wind at a particular pressure level. Given the latitude and the pressure level , the Stokes stream function characterizing the Hadley circulation is given by where is the radius of Earth, is the acceleration due to the gravity of Earth, and is the zonally averaged meridional wind at the prescribed latitude and pressure level. The value of gives the integrated meridional mass flux between the specified pressure level and the top of the Earth's atmosphere, with positive values indicating northward mass transport. The strength of the Hadley cells can be quantified based on including the maximum and minimum values or averages of the stream function both overall and at various pressure levels. Hadley cell intensity can also be assessed using other physical quantities such as the velocity potential, vertical component of wind, transport of water vapor, or total energy of the circulation. 
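The mass-weighted stream function described above has a standard form. Writing a for Earth's radius, g for the acceleration due to gravity, and [v] for the zonally averaged meridional wind at latitude φ and pressure p, it is usually expressed as (supplied here because the article's original formula was lost):

\Psi(\varphi, p) = \frac{2\pi a \cos\varphi}{g} \int_0^{p} [v](\varphi, p')\,\mathrm{d}p' ,

so that Ψ gives the northward mass flux, in kg s⁻¹, integrated from the top of the atmosphere down to the pressure level p.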
Structure and components The structure of the Hadley circulation and its components can be inferred by graphing zonal and temporal averages of global winds throughout the troposphere. At shorter timescales, individual weather systems perturb wind flow. Although the structure of the Hadley circulation varies seasonally, when winds are averaged annually (from an Eulerian perspective) the Hadley circulation is roughly symmetric and composed of two similar Hadley cells, one in each of the northern and southern hemispheres, sharing a common region of ascending air near the equator; however, the Southern Hemisphere Hadley cell is stronger. The winds associated with the annually-averaged Hadley circulation are on the order of . However, when averaging the motions of air parcels as opposed to the winds at fixed locations (a Lagrangian perspective), the Hadley circulation manifests as a broader circulation that extends farther poleward. Each Hadley cell can be described by four primary branches of airflow within the tropics: an equatorward, lower branch within the planetary boundary layer; an ascending branch near the equator; a poleward, upper branch in the upper troposphere; and a descending branch in the subtropics. The trade winds in the low latitudes of both Earth's northern and southern hemispheres converge air towards the equator, producing a belt of low atmospheric pressure exhibiting abundant storms and heavy rainfall known as the Intertropical Convergence Zone (ITCZ). This equatorward movement of air near the Earth's surface constitutes the lower branch of the Hadley cell. The position of the ITCZ is influenced by the warmth of sea surface temperatures (SST) near the equator and the strength of cross-equatorial pressure gradients. In general, the ITCZ is located near the equator or is offset towards the summer hemisphere where the warmest SSTs are located. On an annual average, the rising branch of the Hadley circulation is slightly offset towards the Northern Hemisphere, away from the equator. Due to the Coriolis force, the trade winds deflect opposite the direction of Earth's rotation, blowing partially westward rather than directly equatorward in both hemispheres. The lower branch accrues moisture resulting from evaporation across Earth's tropical oceans. A warmer environment and converging winds force the moistened air to ascend near the equator, resulting in the rising branch of the Hadley cell. The upward motion is further enhanced by the release of latent heat as the uplift of moist air results in an equatorial band of condensation and precipitation. The Hadley circulation's upward branch largely occurs in thunderstorms occupying only around one percent of the surface area of the tropics. The transport of heat in the Hadley circulation's ascending branch is accomplished most efficiently by hot towers, cumulonimbus clouds bearing strong updrafts that do not mix in the drier air commonly found in the middle troposphere and thus allow the movement of air from the highly moist tropical lower troposphere into the upper troposphere. Approximately 1,500–5,000 hot towers daily near the ITCZ region are required to sustain the vertical heat transport exhibited by the Hadley circulation. The ascending air reaches the upper troposphere at a height of , after which air diverges outward from the ITCZ and towards the poles. The top of the Hadley cell is set by the height of the tropopause, as the stable stratosphere above prevents the continued ascent of air. 
Air rising from the low latitudes has higher absolute angular momentum about Earth's axis of rotation. The perpendicular distance from Earth's axis of rotation decreases poleward; to conserve angular momentum, poleward-moving air parcels must accelerate eastward. The Coriolis effect limits the poleward extent of the Hadley circulation, accelerating air in the direction of the Earth's rotation and forming a jet stream directed zonally rather than continuing the poleward flow of air at each Hadley cell's poleward boundary. Considering only the conservation of angular momentum, a parcel of air at rest along the equator would accelerate to a zonal speed of by the time it reached 30° latitude. However, small-scale turbulence along the parcel's poleward trek and large-scale eddies in the mid-latitudes dissipate angular momentum. The jet associated with the Southern Hemisphere Hadley cell is stronger than its northern counterpart due to the stronger intensity of the Southern Hemisphere cell. The cooler conditions at higher latitudes lead to the cooling of air parcels, causing the poleward-moving air to eventually descend. When the movement of air is averaged annually, the descending branch of the Hadley cell is located roughly over the 25th parallel north and the 25th parallel south. The moisture in the subtropics is then partly advected poleward by eddies and partly advected equatorward by the lower branch of the Hadley cell, where it is later brought towards the ITCZ. Although the zonally-averaged Hadley cell is organized into four main branches, these branches are aggregations of more concentrated air flows and regions of mass transport. Several theories and physical models have attempted to explain the latitudinal width of the Hadley cell. The Held–Hou model provides one theoretical constraint on the meridional extent of the Hadley cells. By assuming a simplified atmosphere composed of a lower layer subject to friction from the Earth's surface and an upper layer free from friction, the model predicts that the Hadley circulation would be restricted to within of the equator if parcels do not have any net heating within the circulation. According to the Held–Hou model, the latitude of the Hadley cell's poleward edge scales according to where is the difference in potential temperature between the equator and the pole in radiative equilibrium, is the height of the tropopause, is the Earth's rotation rate, and is a reference potential temperature. Other compatible models posit that the width of the Hadley cell may scale with other physical parameters such as the vertically-averaged Brunt–Väisälä frequency in the troposphere or the growth rate of baroclinic waves shed by the cell. Seasonality and variability The Hadley circulation varies considerably with seasonal changes. Around the equinoxes in spring and autumn, the Hadley circulation takes the form of two relatively weaker Hadley cells in both hemispheres, sharing a common region of ascent over the ITCZ and moving air aloft towards each cell's respective hemisphere. However, closer to the solstices, the Hadley circulation transitions into a single, stronger cross-equatorial Hadley cell with air rising in the summer hemisphere and broadly descending in the winter hemisphere. The transition between the two-cell and single-cell configuration is abrupt, and during most of the year the Hadley circulation is characterized by a single dominant Hadley cell that transports air across the equator. 
In this configuration, the ascending branch is located in the tropical latitudes of the warmer summer hemisphere and the descending branch is positioned in the subtropics of the cooler winter hemisphere. Two cells are still present, one in each hemisphere, though the winter hemisphere's cell becomes much more prominent while the summer hemisphere's cell becomes displaced poleward. The intensification of the winter hemisphere's cell is associated with a steepening of gradients in geopotential height, leading to an acceleration of trade winds and stronger meridional flows. The presence of continents relaxes temperature gradients in the summer hemisphere, accentuating the contrast between the hemispheric Hadley cells. Reanalysis data from 1979–2001 indicated that the dominant Hadley cell in boreal summer extended from 13°S to 31°N on average. In both boreal and austral winters, the Indian Ocean and the western Pacific Ocean contribute most to the rising and sinking motions in the zonally-averaged Hadley circulation. However, vertical flows over Africa and the Americas are more marked in boreal winter. At longer interannual timescales, variations in the Hadley circulation are associated with variations in the El Niño–Southern Oscillation (ENSO), which impacts the positioning of the ascending branch; the response of the circulation to ENSO is non-linear, with a more marked response to El Niño events than La Niña events. During El Niño, the Hadley circulation strengthens due to the increased warmth of the upper troposphere over the tropical Pacific and the resultant intensification of poleward flow. However, these changes are not uniform: during the same events, the Hadley cells over the western Pacific and the Atlantic are weakened. During the Atlantic Niño, the circulation over the Atlantic is intensified. The Atlantic circulation is also enhanced during periods when the North Atlantic oscillation is strongly positive. The variation in the seasonally-averaged and annually-averaged Hadley circulation from year to year is largely accounted for by two juxtaposed modes of oscillation: an equatorially asymmetric mode characterized by a single cell straddling the equator and an equatorially symmetric mode characterized by two cells on either side of the equator. Energetics and transport The Hadley cell is an important mechanism by which moisture and energy are transported both between the tropics and subtropics and between the northern and southern hemispheres. However, it is not an efficient transporter of energy due to the opposing flows of the lower and upper branch, with the lower branch transporting sensible and latent heat equatorward and the upper branch transporting potential energy poleward. The resulting net energy transport poleward represents around 10 percent of the overall energy transport involved in the Hadley cell. The descending branch of the Hadley cell generates clear skies and a surplus of evaporation relative to precipitation in the subtropics. The lower branch of the Hadley circulation accomplishes most of the transport of the excess water vapor accumulated in the subtropical atmosphere towards the equatorial region. The strength of the Southern Hemisphere Hadley cell relative to its northern counterpart leads to a small net energy transport from the northern to the southern hemisphere; as a result, the transport of energy at the equator is directed southward on average, with an annual net transport of around 0.1 PW. 
In contrast to the higher latitudes where eddies are the dominant mechanism for transporting energy poleward, the meridional flows imposed by the Hadley circulation are the primary mechanism for poleward energy transport in the tropics. As a thermally direct circulation, the Hadley circulation converts available potential energy to the kinetic energy of horizontal winds. Based on data from January 1979 and December 2010, the Hadley circulation has an average power output of 198 TW, with maxima in January and August and minima in May and October. Although the stability of the tropopause largely limits the movement of air from the troposphere to the stratosphere, some tropospheric air penetrates into the stratosphere via the Hadley cells. The Hadley circulation may be idealized as a heat engine converting heat energy into mechanical energy. As air moves towards the equator near the Earth's surface, it accumulates entropy from the surface either by direct heating or the flux of sensible or latent heat. In the ascending branch of a Hadley cell, the ascent of air is approximately an adiabatic process with respect to the surrounding environment. However, as parcels of air move equatorward in the cell's upper branch, they lose entropy by radiating heat to space at infrared wavelengths and descend in response. This radiative cooling occurs at a rate of at least 60  W m−2 and may exceed 100 W m−2 in winter. The heat accumulated during the equatorward branch of the circulation is greater than the heat lost in the upper poleward branch; the excess heat is converted into the mechanical energy that drives the movement of air. This difference in heating also results in the Hadley circulation transporting heat poleward as the air supplying the Hadley cell's upper branch has greater moist static energy than the air supplying the cell's lower branch. Within the Earth's atmosphere, the timescale at which air parcels lose heat due to radiative cooling and the timescale at which air moves along the Hadley circulation are at similar orders of magnitude, allowing the Hadley circulation to transport heat despite cooling in the circulation's upper branch. Air with high potential temperature is ultimately moved poleward in the upper troposphere while air with lower potential temperature is brought equatorward near the surface. As a result, the Hadley circulation is one mechanism by which the disequilibrium produced by uneven heating of the Earth is brought towards equilibrium. When considered as a heat engine, the thermodynamic efficiency of the Hadley circulation averaged around 2.6 percent between 1979–2010, with small seasonal variability. The Hadley circulation also transports planetary angular momentum poleward due to Earth's rotation. Because the trade winds are directed opposite the Earth's rotation, eastward angular momentum is transferred to the atmosphere via frictional interaction between the winds and topography. The Hadley cell then transfers this angular momentum through its upward and poleward branches. The poleward branch accelerates and is deflected east in both the northern and southern hemispheres due to the Coriolis force and the conservation of angular momentum, resulting in a zonal jet stream above the descending branch of the Hadley cell. The formation of such a jet implies the existence of a thermal wind balance supported by the amplification of temperature gradients in the jet's vicinity resulting from the Hadley circulation's poleward heat advection. 
The subtropical jet in the upper troposphere coincides with where the Hadley cell meets the Ferrel cell. The strong wind shear accompanying the jet presents a significant source of baroclinic instability from which waves grow; the growth of these waves transfers heat and momentum polewards. Atmospheric eddies extract westerly angular momentum from the Hadley cell and transport it downward, resulting in the mid-latitude westerly winds. Formulation and discovery The broad structure and mechanism of the Hadley circulation, comprising convective cells that move air in response to temperature differences in a manner influenced by the Earth's rotation, was first proposed by Edmund Halley in 1685 and George Hadley in 1735. Hadley had sought to explain the physical mechanism for the trade winds and the westerlies; the Hadley circulation and the Hadley cells are named in honor of his pioneering work. Hadley's ideas invoked physical concepts that would not be formalized until well after his death, but his model remained largely qualitative and without mathematical rigor. By the 1920s, most meteorologists recognized Hadley's formulation as a simplification of more complicated atmospheric processes. The Hadley circulation may have been the first attempt to explain the global distribution of winds in Earth's atmosphere using physical processes. However, Hadley's hypothesis could not be verified without observations of winds in the upper atmosphere. Data collected by routine radiosondes beginning in the mid-20th century confirmed the existence of the Hadley circulation. Early explanations of the trade winds In the 15th and 16th centuries, observations of maritime weather conditions were of considerable importance to maritime transport. Compilations of these observations showed consistent weather conditions from year to year and significant seasonal variability. The prevalence of dry conditions and weak winds at around 30° latitude and the equatorward trade winds closer to the equator, mirrored in the northern and southern hemispheres, was apparent by 1600. Early efforts by scientists to explain aspects of global wind patterns often focused on the trade winds, as the steadiness of the winds was assumed to suggest a simple physical mechanism. Galileo Galilei proposed that the trade winds resulted from the atmosphere lagging behind the Earth's faster tangential rotation speed in the low latitudes, resulting in the westward trades directed opposite to Earth's rotation. In 1685, English polymath Edmund Halley proposed at a debate organized by the Royal Society that the trade winds resulted from east to west temperature differences produced over the course of a day within the tropics. In Halley's model, as the Earth rotated, the location of maximum heating from the Sun moved west across the Earth's surface. This would cause air to rise, and by conservation of mass, Halley argued that air would move in to replace the evacuated air, generating the trade winds. Halley's hypothesis was criticized by his friends, who noted that his model would lead to changing wind directions throughout the course of a day rather than the steady trade winds. Halley conceded in personal correspondence with John Wallis that "Your questioning my hypothesis for solving the Trade Winds makes me less confident of the truth thereof". Nonetheless, Halley's formulation was incorporated into Chambers's Encyclopaedia and La Grande Encyclopédie, becoming the most widely known explanation for the trade winds until the early 19th century. 
Though his explanation of the trade winds was incorrect, Halley correctly predicted that the surface trade winds should be accompanied by an opposing flow aloft following mass conservation. George Hadley's explanation Unsatisfied with preceding explanations for the trade winds, George Hadley proposed an alternate mechanism in 1735. Hadley's hypothesis was published in the paper "On the Cause of the General Trade Winds" in Philosophical Transactions of the Royal Society. Like Halley, Hadley's explanation viewed the trade winds as a manifestation of air moving to take the place of rising warm air. However, the region of rising air prompting this flow lay along the lower latitudes. Understanding that the tangential rotation speed of the Earth was fastest at the equator and slowed farther poleward, Hadley conjectured that as air with lower momentum from higher latitudes moved equatorward to replace the rising air, it would conserve its momentum and thus curve west. By the same token, the rising air with higher momentum would spread poleward, curving east and then sinking as it cooled to produce westerlies in the mid-latitudes. Hadley's explanation implied the existence of hemisphere-spanning circulation cells in the northern and southern hemispheres extending from the equator to the poles, though he relied on an idealization of Earth's atmosphere that lacked seasonality or the asymmetries of the oceans and continents. His model also predicted rapid easterly trade winds of around , though he argued that the action of surface friction over the course of a few days slowed the air to the observed wind speeds. Colin Maclaurin extended Hadley's model to the ocean in 1740, asserting that meridional ocean currents were subject to similar westward or eastward deflections. Hadley was not widely associated with his theory due to conflation with his older brother, John Hadley, and Halley; his theory failed to gain much traction in the scientific community for over a century due to its unintuitive explanation and the lack of validating observations. Several other natural philosophers independently forwarded explanations for the global distribution of winds soon after Hadley's 1735 proposal. In 1746, Jean le Rond d'Alembert provided a mathematical formulation for global winds, but disregarded solar heating and attributed the winds to the gravitational effects of the Sun and Moon. Immanuel Kant, also unsatisfied with Halley's explanation for the trade winds, published an explanation for the trade winds and westerlies in 1756 with similar reasoning as Hadley. In the latter part of the 18th century, Pierre-Simon Laplace developed a set of equations establishing a direct influence of Earth's rotation on wind direction. Swiss scientist Jean-André Deluc published an explanation of the trade winds in 1787 similar to Hadley's hypothesis, connecting differential heating and the Earth's rotation with the direction of the winds. English chemist John Dalton was the first to clearly credit Hadley's explanation of the trade winds to George Hadley, mentioning Hadley's work in his 1793 book Meteorological Observations and Essays. In 1837, Philosophical Magazine published a new theory of wind currents developed by Heinrich Wilhelm Dove without reference to Hadley but similarly explaining the direction of the trade winds as being influenced by the Earth's rotation. In response, Dalton later wrote a letter to the editor to the journal promoting Hadley's work. 
Dove subsequently credited Hadley so frequently that the overarching theory became known as the "Hadley–Dove principle", popularizing Hadley's explanation for the trade winds in Germany and Great Britain. Critique of Hadley's explanation The work of Gustave Coriolis, William Ferrel, Jean Bernard Foucault, and Henrik Mohn in the 19th century helped establish the Coriolis force as the mechanism for the deflection of winds due to Earth's rotation, emphasizing the conservation of angular momentum in directing flows rather than the conservation of linear momentum as Hadley suggested; Hadley's assumption led to an underestimation of the deflection by a factor of two. The acceptance of the Coriolis force in shaping global winds led to debate among German atmospheric scientists beginning in the 1870s over the completeness and validity of Hadley's explanation, which narrowly explained the behavior of initially meridional motions. Hadley's use of surface friction to explain why the trade winds were much slower than his theory would predict was seen as a key weakness in his ideas. The southwesterly motions observed in cirrus clouds at around 30°N further discounted Hadley's theory, as their movement was far slower than the theory would predict when accounting for the conservation of angular momentum. In 1899, William Morris Davis, a professor of physical geography at Harvard University, gave a speech at the Royal Meteorological Society criticizing Hadley's theory for its failure to account for the transition of an initially unbalanced flow to geostrophic balance. Davis and other meteorologists in the 20th century recognized that the movement of air parcels along Hadley's envisaged circulation was sustained by a constant interplay between the pressure gradient and Coriolis forces rather than the conservation of angular momentum alone. Ultimately, while the atmospheric science community considered the general ideas of Hadley's principle valid, his explanation was viewed as a simplification of more complex physical processes. Hadley's model of the global atmospheric circulation being characterized by hemisphere-wide circulation cells was also challenged by weather observations showing a zone of high pressure in the subtropics and a belt of low pressure at around 60° latitude. This pressure distribution would imply a poleward flow near the surface in the mid-latitudes rather than the equatorward flow implied by Hadley's envisioned cells. Ferrel and James Thomson later reconciled the pressure pattern with Hadley's model by proposing a circulation cell limited to lower altitudes in the mid-latitudes and nestled within the broader, hemisphere-wide Hadley cells. Carl-Gustaf Rossby proposed in 1947 that the Hadley circulation was limited to the tropics, forming one part of a dynamically-driven and multi-celled meridional flow. Rossby's model resembled a similar three-celled model developed by Ferrel in 1860. Direct observation The three-celled model of the global atmospheric circulation, with Hadley's conceived circulation forming its tropical component, had been widely accepted by the meteorological community by the early 20th century. However, the Hadley cell's existence was only validated by weather observations near the surface, and its predictions of winds in the upper troposphere remained untested. The routine sampling of the upper troposphere by radiosondes that emerged in the mid-20th century confirmed the existence of meridional overturning cells in the atmosphere. 
Influence on climate The Hadley circulation is one of the most important influences on global climate and planetary habitability, as well as an important transporter of angular momentum, heat, and water vapor. Hadley cells flatten the temperature gradient between the equator and the poles, making the extratropics milder. The global precipitation pattern of high precipitation in the tropics and a lack of precipitation at higher latitudes is a consequence of the positioning of the rising and sinking branches of Hadley cells, respectively. Near the equator, the ascent of humid air results in the heaviest precipitation on Earth. The periodic movement of the ITCZ and thus the seasonal variation of the Hadley circulation's rising branches produces the world's monsoons. The descending motion of air associating with the sinking branch produces surface divergence consistent with the prominence of subtropical high-pressure areas. These semipermanent regions of high pressure lie primarily over the ocean between 20° and 40° latitude. Arid conditions are associated with the descending branches of the Hadley circulation, with many of the Earth's deserts and semiarid or arid regions underlying the sinking branches of the Hadley circulation. The cloudy marine boundary layer common in the subtropics may be seeded by cloud condensation nuclei exported out of the tropics by the Hadley circulation. Effects of climate change Natural variability Paleoclimate reconstructions of trade winds and rainfall patterns suggest that the Hadley circulation changed in response to natural climate variability. During Heinrich events within the last 100,000 years, the Northern Hemisphere Hadley cell strengthened while the Southern Hemisphere Hadley cell weakened. Variation in insolation during the mid- to late-Holocene resulted in a southward migration of the Northern Hemisphere Hadley cell's ascending and descending branches closer to their present-day positions. Tree rings from the mid-latitudes of the Northern Hemisphere suggest that the historical position of the Hadley cell branches have also shifted in response to shorter oscillations, with the Northern Hemisphere descending branch moving southward during positive phases of the El Niño–Southern Oscillation and Pacific decadal oscillation and northward during the corresponding negative phases. The Hadley cells were displaced southward between 1400–1850, concurrent with drought in parts of the Northern Hemisphere. Hadley cell expansion and intensity changes Observed trends According to the IPCC Sixth Assessment Report (AR6), the Hadley circulation has likely expanded since at least the 1980s in response to climate change, with medium confidence in an accompanying intensification of the circulation. An expansion of the overall circulation poleward by about 0.1°–0.5° latitude per decade since the 1980s is largely accounted for by the poleward shift of the Northern Hemisphere Hadley cell, which in atmospheric reanalysis has shown a more marked expansion since 1992. However, the AR6 also reported medium confidence in the expansion of the Northern Hemisphere Hadley cell being within the range of internal variability. In contrast, the AR6 assessed that it was likely that the Southern Hemisphere Hadley cell's poleward expansion was due to anthropogenic influence; this finding was based on CMIP5 and CMIP6 climate models. 
Studies have produced a large range of estimates for the rate of widening of the tropics due to the use of different metrics; estimates based on upper-tropospheric properties tend to yield a wider range of values. The degree to which the circulation has expanded varies by season, with trends in summer and autumn being larger and statistically significant in both hemispheres. The widening of the Hadley circulation has also resulted in a likely widening of the ITCZ since the 1970s. Reanalyses also suggest that the summer and autumn Hadley cells in both hemispheres have widened and that the global Hadley circulation has intensified since 1979, with a more pronounced intensification in the Northern Hemisphere. Between 1979–2010, the power generated by the global Hadley circulation increased by an average of 0.54 TW per year, consistent with an increased input of energy into the circulation by warming SSTs over the tropical oceans. (For comparison, the Hadley circulation's overall power ranges from 0.5 TW to 218 TW throughout the year in the Northern Hemisphere and from 32 to 204 TW in the Southern.) In contrast to reanalyses, CMIP5 climate models depict a weakening of the Hadley circulation since 1979. The magnitude of long-term changes in the circulation strength are thus uncertain due to the influence of large interannual variability and the poor representation of the distribution of latent heat release in reanalyses. The expansion of the Hadley circulation due to climate change is consistent with the Held–Hou model, which predicts that the latitudinal extent of the circulation is proportional to the square root of the height of the tropopause. Warming of the troposphere raises the tropopause height, enabling the upper poleward branch of the Hadley cells to extend farther and leading to an expansion of the cells. Results from climate models suggest that the impact of internal variability (such as from the Pacific decadal oscillation) and the anthropogenic influence on the expansion of the Hadley circulation since the 1980s have been comparable. Human influence is most evident in the expansion of the Southern Hemisphere Hadley cell; the AR6 assessed medium confidence in associating the expansion of the Hadley circulation in both hemispheres with the added radiative forcing of greenhouse gasses. Physical mechanisms and projected changes The physical processes by which the Hadley circulation expands by human influence are unclear but may be linked to the increased warming of the subtropics relative to other latitudes in both the Northern and Southern hemispheres. The enhanced subtropical warmth could enable expansion of the circulation poleward by displacing the subtropical jet and baroclinic eddies poleward. Poleward expansion of the Southern Hemisphere Hadley cell in the austral summer was attributed by the IPCC Fifth Assessment Report (AR5) to stratospheric ozone depletion based on CMIP5 model simulations, while CMIP6 simulations have not shown as clear of a signal. Ozone depletion could plausibly affect the Hadley circulation through the increase of radiative cooling in the lower stratosphere; this would increase the phase speed of baroclinic eddies and displace them poleward, leading to expansion of Hadley cells. Other eddy-driven mechanisms for expanding Hadley cells have been proposed, involving changes in baroclinicity, wave breaking, and other releases of instability. 
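The Held–Hou scaling referenced above can be written, in its small-angle form, as φ_H = [5gHΔθ / (3Ω²a²θ₀)]^(1/2), so the cell edge widens with the square root of the tropopause height H. Below is a minimal sketch with assumed round values; the temperature contrast Δθ and reference temperature θ₀ are illustrative assumptions, not figures from the text.

```python
import numpy as np

G = 9.81           # gravitational acceleration (m/s^2)
OMEGA = 7.292e-5   # Earth's rotation rate (rad/s)
A = 6.371e6        # Earth's mean radius (m)
THETA0 = 300.0     # reference potential temperature (K, assumed)
DTHETA = 70.0      # equator-to-pole radiative-equilibrium contrast (K, assumed)

def hadley_edge_deg(tropopause_height_m):
    """Held-Hou estimate of the Hadley cell's poleward edge (small-angle form)."""
    phi = np.sqrt(5 * G * tropopause_height_m * DTHETA /
                  (3 * OMEGA**2 * A**2 * THETA0))
    return np.degrees(phi)

for h_km in (14, 15, 16):
    print(f"H = {h_km} km -> poleward edge ~ {hadley_edge_deg(1000 * h_km):.1f} deg")
# A higher tropopause yields a wider cell, in proportion to sqrt(H).
```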
In the extratropics of the Northern Hemisphere, increasing concentrations of black carbon and tropospheric ozone may be a major forcing on that hemisphere's Hadley cell expansion in boreal summer. Projections from climate models indicate that a continued increase in the concentration of greenhouse gas would result in continued widening of the Hadley circulation. However, simulations using historical data suggest that forcing from greenhouse gasses may account for about 0.1° per decade of expansion of the tropics. Although the widening of the Hadley cells due to climate change has occurred concurrent with an increase in their intensity based on atmospheric reanalyses, climate model projections generally depict a weakening circulation in tandem with a widening circulation by the end of the 21st century. A longer term increase in the concentration of carbon dioxide may lead to a weakening of the Hadley circulation as a result of the reduction of radiative cooling in the troposphere near the circulation's sinking branches. However, changes in the oceanic circulation within the tropics may attenuate changes in the intensity and width of the Hadley cells by reducing thermal contrasts. Changes to weather patterns The expansion of the Hadley circulation due to climate change is connected to changes in regional and global weather patterns. A widening of the tropics could displace the tropical rain belt, expand subtropical deserts, and exacerbate wildfires and drought. The documented shift and expansion of subtropical ridges are associated with changes in the Hadley circulation, including a westward extension of the subtropical high over the northwestern Pacific, changes in the intensity and position of the Azores High, and the poleward displacement and intensification of the subtropical high pressure belt in the Southern Hemisphere. These changes have influenced regional precipitation amounts and variability, including drying trends over southern Australia, northeastern China, and northern South Asia. The AR6 assessed limited evidence that the expansion of the Northern Hemisphere Hadley cell may have led in part to drier conditions in the subtropics and a poleward expansion of aridity during boreal summer. Precipitation changes induced by Hadley circulation changes may lead to changes in regional soil moisture, with modelling showing the most significant declines in the Mediterranean Sea, South Africa, and the Southwestern United States. However, the concurrent effects of changing surface temperature patterns over land lead to uncertainties over the influence of Hadley cell broadening on drying over subtropical land areas. Climate modelling suggests that the shift in the position of the subtropical highs induced by Hadley cell broadening may reduce oceanic upwelling at low latitudes and enhance oceanic upwelling at high latitudes. The expansion of subtropical highs in tandem with the circulation's expansion may also entail a widening of oceanic regions of high salinity and low marine primary production. A decline in extratropical cyclones in the storm track regions in model projections is partly influenced by Hadley cell expansion. Poleward shifts in the Hadley circulation are associated with shifts in the paths of tropical cyclones in the Northern and Southern hemispheres, including a poleward trend in the locations where storms attained their peak intensity. 
Extraterrestrial Hadley circulations Outside of Earth, any thermally direct circulation that circulates air meridionally across planetary-scale gradients of insolation may be described as a Hadley circulation. A terrestrial atmosphere subject to excess equatorial heating tends to maintain an axisymmetric Hadley circulation with rising motions near the equator and sinking at higher latitudes. Differential heating is hypothesized to result in Hadley circulations analogous to Earth's on other atmospheres in the Solar System, such as on Venus, Mars, and Titan. As with Earth's atmosphere, the Hadley circulation would be the dominant meridional circulation for these extraterrestrial atmospheres. Though less understood, Hadley circulations may also be present on the gas giants of the Solar System and should in principle materialize on exoplanetary atmospheres. The spatial extent of a Hadley cell on any atmosphere may be dependent on the rotation rate of the planet or moon, with a faster rotation rate leading to more contracted Hadley cells (with a more restrictive poleward extent) and a more cellular global meridional circulation. The slower rotation rate reduces the Coriolis effect, thus reducing the meridional temperature gradient needed to sustain a jet at the Hadley cell's poleward boundary and thus allowing the Hadley cell to extend farther poleward. Venus, which rotates slowly, may have Hadley cells that extend farther poleward than Earth's, spanning from the equator to high latitudes in each of the northern and southern hemispheres. Its broad Hadley circulation would efficiently maintain the nearly isothermal temperature distribution between the planet's pole and equator and vertical velocities of around . Observations of chemical tracers such as carbon monoxide provide indirect evidence for the existence of the Venusian Hadley circulation. The presence of poleward winds with speeds up to around at an altitude of are typically understood to be associated with the upper branch of a Hadley cell, which may be located above the Venusian surface. The slow vertical velocities associated with the Hadley circulation have not been measured, though they may have contributed to the vertical velocities measured by Vega and Venera missions. The Hadley cells may extend to around 60° latitude, equatorward of a mid-latitude jet stream demarcating the boundary between the hypothesized Hadley cell and the polar vortex. The planet's atmosphere may exhibit two Hadley circulations, with one near the surface and the other at the level of the upper cloud deck. The Venusian Hadley circulation may contribute to the superrotation of the planet's atmosphere. Simulations of the Martian atmosphere suggest that a Hadley circulation is also present in Mars' atmosphere, exhibiting a stronger seasonality compared to Earth's Hadley circulation. This greater seasonality results from diminished thermal inertia resulting from the lack of an ocean and the planet's thinner atmosphere. Additionally, Mars' orbital eccentricity leads to a stronger and wider Hadley cell during its northern winter compared to its southern winter. During most of the Martian year, when a single Hadley cell prevails, its rising and sinking branches are located at 30° and 60° latitude, respectively, in global climate modelling. The tops of the Hadley cells on Mars may reach higher (to around altitude) and be less defined compared to on Earth due to the lack of a strong tropopause on Mars. 
While latent heating from phase changes associated with water drives much of the ascending motion in Earth's Hadley circulation, ascent in Mars' Hadley circulation may be driven by radiative heating of lofted dust and intensified by the condensation of carbon dioxide near the polar ice cap of Mars' wintertime hemisphere, steepening pressure gradients. Over the course of the Martian year, the mass flux of the Hadley circulation ranges between 10⁹ kg s−1 at the equinoxes and 10¹⁰ kg s−1 at the solstices. A Hadley circulation may also be present in the atmosphere of Saturn's moon Titan. As on Venus, Titan's slow rotation rate may support a spatially broad Hadley circulation. General circulation modeling of Titan's atmosphere suggests the presence of a cross-equatorial Hadley cell. This configuration is consistent with the meridional winds observed by the Huygens spacecraft when it landed near Titan's equator. During Titan's solstices, its Hadley circulation may take the form of a single Hadley cell that extends from pole to pole, with warm gas rising in the summer hemisphere and sinking in the winter hemisphere. A two-celled configuration with ascent near the equator is present in modelling during a limited transitional period near the equinoxes. The distribution of convective methane clouds on Titan and observations from the Huygens spacecraft suggest that the rising branch of its Hadley circulation occurs in the mid-latitudes of its summer hemisphere. Frequent cloud formation occurs at 40° latitude in Titan's summer hemisphere from ascent analogous to Earth's ITCZ.
Physical sciences
Atmospheric circulation
null
6954092
https://en.wikipedia.org/wiki/Homogeneous%20differential%20equation
Homogeneous differential equation
A differential equation can be homogeneous in either of two respects. A first order differential equation is said to be homogeneous if it may be written f(x, y) dy = g(x, y) dx, where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable y = ux leads to an equation of the form dx/x = h(u) du, which is easy to solve by integration of the two members. Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives. In the case of linear differential equations, this means that there are no constant terms. The solutions of any linear ordinary differential equation of any order may be deduced by integration from the solution of the homogeneous equation obtained by removing the constant term. History The term homogeneous was first applied to differential equations by Johann Bernoulli in section 9 of his 1726 article De integraionibus aequationum differentialium (On the integration of differential equations). Homogeneous first-order differential equations A first-order ordinary differential equation in the form M(x, y) dx + N(x, y) dy = 0 is a homogeneous type if both functions M(x, y) and N(x, y) are homogeneous functions of the same degree n. That is, multiplying each variable by a parameter λ, we find M(λx, λy) = λⁿ M(x, y) and N(λx, λy) = λⁿ N(x, y). Thus, M(λx, λy) / N(λx, λy) = M(x, y) / N(x, y). Solution method In the quotient M(tx, ty) / N(tx, ty) = M(x, y) / N(x, y), we can let t = 1/x to simplify this quotient to a function f of the single variable y/x: M(x, y) / N(x, y) = M(tx, ty) / N(tx, ty) = M(1, y/x) / N(1, y/x) = f(y/x). That is, dy/dx = −f(y/x). Introduce the change of variables y = ux; differentiate using the product rule: dy/dx = d(ux)/dx = x du/dx + u. This transforms the original differential equation into the separable form x du/dx = −f(u) − u, or (1/x) dx = −du / (f(u) + u), which can now be integrated directly: ln x equals the antiderivative of the right-hand side (see ordinary differential equation). Special case A first order differential equation of the form dy/dx = (ax + by + c) / (ex + fy + g) (a, b, c, e, f, g are all constants), where af ≠ be, can be transformed into a homogeneous type by a linear transformation of both variables (α and β are constants): t = x + α, z = y + β. Homogeneous linear differential equations A linear differential equation is homogeneous if it is a homogeneous linear equation in the unknown function and its derivatives. It follows that, if φ(x) is a solution, so is cφ(x), for any (non-zero) constant c. In order for this condition to hold, each nonzero term of the linear differential equation must depend on the unknown function or any derivative of it. A linear differential equation that fails this condition is called inhomogeneous. A linear differential equation can be represented as a linear operator acting on y(x), where x is usually the independent variable and y is the dependent variable. Therefore, the general form of a linear homogeneous differential equation is L(y) = 0, where L is a differential operator, a sum of derivatives (defining the "0th derivative" as the original, non-differentiated function), each multiplied by a function fᵢ of x: L(y) = f₀(x) y + f₁(x) dy/dx + f₂(x) d²y/dx² + ⋯ + fₙ(x) dⁿy/dxⁿ, where the fᵢ may be constants, but not all of the fᵢ may be zero. For example, the following linear differential equation is homogeneous: sin(x) d²y/dx² + 4 dy/dx + y = 0, whereas the following two are inhomogeneous: 2x² d²y/dx² + 4x dy/dx + y = cos(x) and 2x² d²y/dx² − 3x dy/dx + y = 2. The existence of a constant term is a sufficient condition for an equation to be inhomogeneous, as in the above example.
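As an illustration of the substitution method described above, here is a short sketch using the sympy library. The example equation dy/dx = (x² + y²)/(xy) is hypothetical, chosen only because its numerator and denominator are both homogeneous of degree 2; it works with the explicit form dy/dx = f(y/x) rather than the M dx + N dy = 0 sign convention.

```python
import sympy as sp

x, u, C = sp.symbols('x u C', positive=True)

# Hypothetical homogeneous equation: dy/dx = (x**2 + y**2)/(x*y).
# Writing the right-hand side in terms of u = y/x gives f(u) = (1 + u**2)/u.
f = (1 + u**2) / u

# Substituting y = u*x gives dy/dx = u + x*du/dx, so x*du/dx = f(u) - u,
# which separates as du/(f(u) - u) = dx/x.
lhs = sp.integrate(sp.cancel(1 / (f - u)), u)   # integral of the u-side: u**2/2
rhs = sp.integrate(1 / x, x)                    # integral of the x-side: log(x)

implicit_solution = sp.Eq(lhs, rhs + C)
print(implicit_solution)                        # Eq(u**2/2, C + log(x))

# Back-substituting u = y/x gives y**2 = x**2*(2*log(x) + 2*C), which can be
# verified by differentiating and comparing with the original equation.
```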
Mathematics
Differential equations
null
42967
https://en.wikipedia.org/wiki/Ornithology
Ornithology
Ornithology is a branch of zoology that concerns the study of birds. Several aspects of ornithology differ from related disciplines, due partly to the high visibility and the aesthetic appeal of birds. It has also been an area with a large contribution made by amateurs in terms of time, resources, and financial support. Studies on birds have helped develop key concepts in biology including evolution, behaviour and ecology such as the definition of species, the process of speciation, instinct, learning, ecological niches, guilds, insular biogeography, phylogeography, and conservation. While early ornithology was principally concerned with descriptions and distributions of species, ornithologists today seek answers to very specific questions, often using birds as models to test hypotheses or predictions based on theories. Most modern biological theories apply across life forms, and the number of scientists who identify themselves as "ornithologists" has therefore declined. A wide range of tools and techniques are used in ornithology, both inside the laboratory and out in the field, and innovations are constantly made. Most biologists who recognise themselves as "ornithologists" study specific biology research areas, such as anatomy, physiology, taxonomy (phylogenetics), ecology, or behaviour. Definition and etymology The word "ornithology" comes from the late 16th-century Latin ornithologia meaning "bird science" from the Greek ὄρνις ornis ("bird") and λόγος logos ("theory, science, thought"). History The history of ornithology largely reflects the trends in the history of biology, as well as many other scientific disciplines, including ecology, anatomy, physiology, paleontology, and more recently, molecular biology. Trends include the move from mere descriptions to the identification of patterns, thus towards elucidating the processes that produce these patterns. Early knowledge and study Humans have had an observational relationship with birds since prehistory, with some stone-age drawings being amongst the oldest indications of an interest in birds. Birds were perhaps important as food sources, and bones of as many as 80 species have been found in excavations of early Stone Age settlements. Water bird and seabird remains have also been found in shell mounds on the island of Oronsay off the coast of Scotland. Cultures around the world have rich vocabularies related to birds. Traditional bird names are often based on detailed knowledge of the behaviour, with many names being onomatopoeic, and still in use. Traditional knowledge may also involve the use of birds in folk medicine and knowledge of these practices are passed on through oral traditions (see ethnoornithology). Hunting of wild birds as well as their domestication would have required considerable knowledge of their habits. Poultry farming and falconry were practised from early times in many parts of the world. Artificial incubation of poultry was practised in China around 246 BC and around at least 400 BC in Egypt. The Egyptians also made use of birds in their hieroglyphic scripts, many of which, though stylized, are still identifiable to species. Early written records provide valuable information on the past distributions of species. For instance, Xenophon records the abundance of the ostrich in Assyria (Anabasis, i. 5); this subspecies from Asia Minor is extinct and all extant ostrich races are today restricted to Africa. 
Other old writings such as the Vedas (1500–800 BC) demonstrate the careful observation of avian life histories and include the earliest reference to the habit of brood parasitism by the Asian koel (Eudynamys scolopaceus). Like the written records, the early art of China, Japan, Persia, and India also demonstrates such knowledge, with examples of scientifically accurate bird illustrations. In his History of Animals (350 BC), Aristotle noted bird migration, moulting, egg laying, and lifespans, as well as compiling a list of 170 different bird species. However, he also introduced and propagated several myths, such as the idea that swallows hibernated in winter, although he noted that cranes migrated from the steppes of Scythia to the marshes at the headwaters of the Nile. The idea of swallow hibernation became so well established that even as late as 1878, Elliott Coues could list as many as 182 contemporary publications dealing with the hibernation of swallows, with little published evidence to contradict the theory. Similar misconceptions existed regarding the breeding of barnacle geese. Their nests had not been seen, and they were believed to grow by transformations of goose barnacles, an idea that became prevalent from around the 11th century and was noted by Bishop Giraldus Cambrensis (Gerald of Wales) in Topographia Hiberniae (1187). Around 77 AD, Pliny the Elder described birds, among other creatures, in his Historia Naturalis. The earliest record of falconry comes from the reign of Sargon II (722–705 BC) in Assyria. Falconry is thought to have made its entry to Europe only after AD 400, brought in from the east after invasions by the Huns and Alans. Starting from the eighth century, numerous Arabic works on the subject and general ornithology were written, as well as translations of the works of ancient writers from Greek and Syriac. In the 12th and 13th centuries, crusades and conquest had subjugated Islamic territories in southern Italy, central Spain, and the Levant under European rule, and for the first time translations into Latin of the great works of Arabic and Greek scholars were made with the help of Jewish and Muslim scholars, especially in Toledo, which had fallen into Christian hands in 1085 and whose libraries had escaped destruction. Michael Scotus from Scotland made a Latin translation of Aristotle's work on animals from Arabic here around 1215, which was disseminated widely and was the first time in a millennium that this foundational text on zoology became available to Europeans. Falconry was popular in the Norman court in Sicily, and a number of works on the subject were written in Palermo. Emperor Frederick II of Hohenstaufen (1194–1250) learned about falconry during his youth in Sicily and later built up a menagerie and sponsored translations of Arabic texts, among them the popular Arabic work known as the Liber Moaminus by an unknown author, which was translated into Latin by Theodore of Antioch from Syria in 1240–1241 as the De Scientia Venandi per Aves. Michael Scotus, who had moved to Palermo, also translated for the Emperor Ibn Sīnā's Kitāb al-Ḥayawān of 1027, a commentary and scientific update of Aristotle's work that formed part of Ibn Sīnā's massive Kitāb al-Šifāʾ. Frederick II eventually wrote his own treatise on falconry, the De arte venandi cum avibus, in which he related his ornithological observations and the results of the hunts and experiments his court enjoyed performing. 
Several early German and French scholars compiled old works and conducted new research on birds. These included Guillaume Rondelet, who described his observations in the Mediterranean, and Pierre Belon, who described the fish and birds that he had seen in France and the Levant. Belon's Book of Birds (1555) is a folio volume with descriptions of some 200 species. His comparison of the skeleton of humans and birds is considered as a landmark in comparative anatomy. Volcher Coiter (1534–1576), a Dutch anatomist, made detailed studies of the internal structures of birds and produced a classification of birds, De Differentiis Avium (around 1572), that was based on structure and habits. Konrad Gesner wrote the Vogelbuch and Icones avium omnium around 1557. Like Gesner, Ulisse Aldrovandi, an encyclopedic naturalist, began a 14-volume natural history with three volumes on birds, entitled ornithologiae hoc est de avibus historiae libri XII, which was published from 1599 to 1603. Aldrovandi showed great interest in plants and animals, and his work included 3000 drawings of fruits, flowers, plants, and animals, published in 363 volumes. His Ornithology alone covers 2000 pages and included such aspects as the chicken and poultry techniques. He used a number of traits including behaviour, particularly bathing and dusting, to classify bird groups. William Turner's Historia Avium (History of Birds), published at Cologne in 1544, was an early ornithological work from England. He noted the commonness of kites in English cities where they snatched food out of the hands of children. He included folk beliefs such as those of anglers. Anglers believed that the osprey emptied their fishponds and would kill them, mixing the flesh of the osprey into their fish bait. Turner's work reflected the violent times in which he lived, and stands in contrast to later works such as Gilbert White's 1789 The Natural History and Antiquities of Selborne that were written in a tranquil era. In the 17th century, Francis Willughby (1635–1672) and John Ray (1627–1705) created the first major system of bird classification that was based on function and morphology rather than on form or behaviour. Willughby's Ornithologiae libri tres (1676) completed by John Ray is sometimes considered to mark the beginning of scientific ornithology. Ray also worked on Ornithologia, which was published posthumously in 1713 as Synopsis methodica avium et piscium. The earliest list of British birds, Pinax Rerum Naturalium Britannicarum, was written by Christopher Merrett in 1667, but authors such as John Ray considered it of little value. Ray did, however, value the expertise of the naturalist Sir Thomas Browne (1605–82), who not only answered his queries on ornithological identification and nomenclature, but also those of Willoughby and Merrett in letter correspondence. Browne himself in his lifetime kept an eagle, owl, cormorant, bittern, and ostrich, penned a tract on falconry, and introduced the words "incubation" and "oviparous" into the English language. Towards the late 18th century, Mathurin Jacques Brisson (1723–1806) and Comte de Buffon (1707–1788) began new works on birds. Brisson produced a six-volume work Ornithologie in 1760 and Buffon's included nine volumes (volumes 16–24) on birds Histoire naturelle des oiseaux (1770–1785) in his work on science Histoire naturelle générale et particulière (1749–1804). 
Jacob Temminck sponsored François Le Vaillant (1753–1824) to collect bird specimens in Southern Africa, and Le Vaillant's six-volume Histoire naturelle des oiseaux d'Afrique (1796–1808) included many non-African birds. His other bird books, produced in collaboration with the artist Barraband, are considered among the most valuable illustrated guides ever published. Louis Pierre Vieillot (1748–1831) spent 10 years studying North American birds and wrote the Histoire naturelle des oiseaux de l'Amerique septentrionale (1807–1808?). Vieillot pioneered the use of life histories and habits in classification. Alexander Wilson composed a nine-volume work, American Ornithology, published 1808–1814, which was the first such record of North American birds, significantly antedating Audubon. In the early 19th century, Lewis and Clark studied and identified many birds in the western United States. John James Audubon, born in 1785, observed and painted birds in France and later in the Ohio and Mississippi valleys. From 1827 to 1838, Audubon published The Birds of America, which was engraved by Robert Havell Sr. and his son Robert Havell Jr. Containing 435 engravings, it is often regarded as the greatest ornithological work in history. Scientific studies The emergence of ornithology as a scientific discipline began in the 18th century, when Mark Catesby published his two-volume Natural History of Carolina, Florida, and the Bahama Islands, a landmark work which included 220 hand-painted engravings and was the basis for many of the species Carl Linnaeus described in the 1758 Systema Naturae. Linnaeus' work revolutionised bird taxonomy by assigning every species a binomial name, categorising them into different genera. However, ornithology did not emerge as a specialised science until the Victorian era, with the popularization of natural history and the collection of natural objects such as bird eggs and skins. This specialization led to the formation in Britain of the British Ornithologists' Union in 1858. In 1859, the members founded its journal, The Ibis. The sudden spurt in ornithology was also due in part to colonialism. A hundred years later, in 1959, R. E. Moreau noted that ornithology in this period was preoccupied with the geographical distributions of various species of birds. The bird collectors of the Victorian era observed the variations in bird forms and habits across geographic regions, noting local specialization and variation in widespread species. The collections of museums and private collectors grew with contributions from various parts of the world. The naming of species with binomials and the organization of birds into groups based on their similarities became the main work of museum specialists. The variations in widespread birds across geographical regions led to the introduction of trinomial names. The search for patterns in the variations of birds was attempted by many. Friedrich Wilhelm Joseph Schelling (1775–1854), his student Johann Baptist von Spix (1781–1826), and several others believed that a hidden and innate mathematical order existed in the forms of birds. They believed that a "natural" classification was available and superior to "artificial" ones. A particularly popular idea was the Quinarian system popularised by Nicholas Aylward Vigors (1785–1840), William Sharp Macleay (1792–1865), William Swainson, and others. The idea was that nature followed a "rule of five" with five groups nested hierarchically. 
Some had attempted a rule of four, but Johann Jakob Kaup (1803–1873) insisted that the number five was special, noting that other natural entities such as the senses also came in fives. He followed this idea and demonstrated his view of the order within the crow family. Where he failed to find five genera, he left a blank, insisting that a new genus would be found to fill these gaps. These ideas were replaced by more complex "maps" of affinities in works by Hugh Edwin Strickland and Alfred Russel Wallace. A major advance was made by Max Fürbringer in 1888, who established a comprehensive phylogeny of birds based on anatomy, morphology, distribution, and biology. This was developed further by Hans Gadow and others. The Galapagos finches were especially influential in the development of Charles Darwin's theory of evolution. His contemporary Alfred Russel Wallace also noted these variations and the geographical separations between different forms, leading to the study of biogeography. Wallace was influenced by the work of Philip Lutley Sclater on the distribution patterns of birds. For Darwin, the problem was how species arose from a common ancestor, but he did not attempt to find rules for delineation of species. The species problem was tackled by the ornithologist Ernst Mayr, who was able to demonstrate that geographical isolation and the accumulation of genetic differences led to the splitting of species. Early ornithologists were preoccupied with matters of species identification. Only systematics counted as true science, and field studies were considered inferior through much of the 19th century. In 1901, writing in the introduction to The Birds of North and Middle America, Robert Ridgway treated the study of living birds as mere recreation rather than science. This early idea that the study of living birds was merely recreation held sway until ecological theories became the predominant focus of ornithological studies. The study of birds in their habitats was particularly advanced in Germany, with bird ringing stations established as early as 1903. By the 1920s, the Journal für Ornithologie included many papers on behaviour, ecology, anatomy, and physiology, many of them written by Erwin Stresemann. Stresemann changed the editorial policy of the journal, leading both to a unification of field and laboratory studies and a shift of research from museums to universities. Ornithology in the United States continued to be dominated by museum studies of morphological variations, species identities, and geographic distributions, until it was influenced by Stresemann's student Ernst Mayr. In Britain, some of the earliest ornithological works that used the word ecology appeared in 1915. The Ibis, however, resisted the introduction of these new methods of study, and no paper on ecology appeared until 1943. The work of David Lack on population ecology was pioneering. Newer quantitative approaches were introduced for the study of ecology and behaviour, and this was not readily accepted; Claud Ticehurst, for instance, wrote dismissively of the new approaches. David Lack's studies on population ecology sought to find the processes involved in the regulation of population based on the evolution of optimal clutch sizes. He concluded that populations were regulated primarily by density-dependent controls, and also suggested that natural selection produces life-history traits that maximize the fitness of individuals. Others, such as Wynne-Edwards, interpreted population regulation as a mechanism that aided the "species" rather than individuals. 
This led to widespread and sometimes bitter debate on what constituted the "unit of selection". Lack also pioneered the use of many new tools for ornithological research, including the idea of using radar to study bird migration. Birds were also widely used in studies of the niche hypothesis and Georgii Gause's competitive exclusion principle. Work on resource partitioning and the structuring of bird communities through competition were made by Robert MacArthur. Patterns of biodiversity also became a topic of interest. Work on the relationship of the number of species to area and its application in the study of island biogeography was pioneered by E. O. Wilson and Robert MacArthur. These studies led to the development of the discipline of landscape ecology. John Hurrell Crook studied the behaviour of weaverbirds and demonstrated the links between ecological conditions, behaviour, and social systems. Principles from economics were introduced to the study of biology by Jerram L. Brown in his work on explaining territorial behaviour. This led to more studies of behaviour that made use of cost-benefit analyses. The rising interest in sociobiology also led to a spurt of bird studies in this area. The study of imprinting behaviour in ducks and geese by Konrad Lorenz and the studies of instinct in herring gulls by Nicolaas Tinbergen led to the establishment of the field of ethology. The study of learning became an area of interest and the study of bird songs has been a model for studies in neuroethology. The study of hormones and physiology in the control of behaviour has also been aided by bird models. These have helped in finding the proximate causes of circadian and seasonal cycles. Studies on migration have attempted to answer questions on the evolution of migration, orientation, and navigation. The growth of genetics and the rise of molecular biology led to the application of the gene-centered view of evolution to explain avian phenomena. Studies on kinship and altruism, such as helpers, became of particular interest. The idea of inclusive fitness was used to interpret observations on behaviour and life history, and birds were widely used models for testing hypotheses based on theories postulated by W. D. Hamilton and others. The new tools of molecular biology changed the study of bird systematics, which changed from being based on phenotype to the underlying genotype. The use of techniques such as DNA–DNA hybridization to study evolutionary relationships was pioneered by Charles Sibley and Jon Edward Ahlquist, resulting in what is called the Sibley–Ahlquist taxonomy. These early techniques have been replaced by newer ones based on mitochondrial DNA sequences and molecular phylogenetics approaches that make use of computational procedures for sequence alignment, construction of phylogenetic trees, and calibration of molecular clocks to infer evolutionary relationships. Molecular techniques are also widely used in studies of avian population biology and ecology. Rise to popularity The use of field glasses or telescopes for bird observation began in the 1820s and 1830s, with pioneers such as J. Dovaston (who also pioneered in the use of bird feeders), but instruction manuals did not begin to insist on the use of optical aids such as "a first-class telescope" or "field glass" until the 1880s. The rise of field guides for the identification of birds was another major innovation. 
The early guides such as Thomas Bewick's two-volume guide and William Yarrell's three-volume guide were cumbersome, and mainly focused on identifying specimens in the hand. The earliest of the new generation of field guides was prepared by Florence Merriam, sister of Clinton Hart Merriam, the mammalogist. This was published in 1887 as a series, Hints to Audubon Workers: Fifty Birds and How to Know Them, in Grinnell's Audubon Magazine. These were followed by new field guides, from the pioneering illustrated handbooks of Frank Chapman to the classic Field Guide to the Birds by Roger Tory Peterson in 1934, to Birds of the West Indies published in 1936 by Dr. James Bond, whose name the amateur ornithologist Ian Fleming borrowed for his famous literary spy. Interest in birdwatching grew in many parts of the world, and the possibility for amateurs to contribute to biological studies was soon realized. As early as 1916, Julian Huxley wrote a two-part article in The Auk, noting the tensions between amateurs and professionals, and suggested the possibility that the "vast army of bird lovers and bird watchers could begin providing the data scientists needed to address the fundamental problems of biology." The amateur ornithologist Harold F. Mayfield noted that the field was also funded by non-professionals. He noted that in 1975, 12% of the papers in American ornithology journals were written by persons who were not employed in biology-related work. Organizations were started in many countries, and these grew rapidly in membership, most notable among them being the Royal Society for the Protection of Birds (RSPB) in Britain and the Audubon Society in the US, which started in 1885. Both these organizations were started with the primary objective of conservation. The RSPB, founded in 1889, grew from a small Croydon-based group of women, including Eliza Phillips, Etta Lemon, Catherine Hall and Hannah Poland. Calling themselves the "Fur, Fin, and Feather Folk", the group met regularly and took a pledge "to refrain from wearing the feathers of any birds not killed for the purpose of food, the ostrich only exempted." The organization initially did not allow men as members, in retaliation for the British Ornithologists' Union's policy of keeping out women. Unlike the RSPB, which was primarily conservation oriented, the British Trust for Ornithology was started in 1933 with the aim of advancing ornithological research. Members were often involved in collaborative ornithological projects. These projects have resulted in atlases which detail the distribution of bird species across Britain. In Canada, citizen scientist Elsie Cassels studied migratory birds and was involved in establishing the Gaetz Lakes bird sanctuary. In the United States, the Breeding Bird Surveys, conducted by the United States Geological Survey, have also produced atlases with information on breeding densities and changes in the density and distribution over time. Other volunteer collaborative ornithology projects were subsequently established in other parts of the world. Techniques The tools and techniques of ornithology are varied, and new inventions and approaches are quickly incorporated. The techniques may be broadly grouped into those that are applicable to specimens and those that are used in the field, but the classification is rough and many analysis techniques are usable both in the laboratory and the field, or may require a combination of field and laboratory techniques. 
Collections The earliest approaches to modern bird study involved the collection of eggs, a practice known as oology. While collecting became a pastime for many amateurs, the labels associated with these early egg collections made them unreliable for the serious study of bird breeding. To preserve eggs, a tiny hole was made and the contents extracted. This technique became standard with the invention of the blow drill around 1830. Egg collection is no longer popular; however, historic museum collections have been of value in determining the effects of pesticides such as DDT on physiology. Museum bird collections continue to act as a resource for taxonomic studies. The use of bird skins to document species has been a standard part of systematic ornithology. Bird skins are prepared by retaining the key bones of the wings, legs, and skull along with the skin and feathers. In the past, they were treated with arsenic to prevent fungal and insect (mostly dermestid) attack. Arsenic, being toxic, was replaced by less-toxic borax. Amateur and professional collectors became familiar with these skinning techniques and started sending in their skins to museums, some of them from distant locations. This led to the formation of huge collections of bird skins in museums in Europe and North America. Many private collections were also formed. These became references for comparison of species, and the ornithologists at these museums were able to compare species from different locations, often places that they themselves never visited. Morphometrics of these skins, particularly the lengths of the tarsus, bill, tail, and wing became important in the descriptions of bird species. These skin collections have been used in more recent times for studies on molecular phylogenetics by the extraction of ancient DNA. The importance of type specimens in the description of species make skin collections a vital resource for systematic ornithology. However, with the rise of molecular techniques, establishing the taxonomic status of new discoveries, such as the Bulo Burti boubou (Laniarius liberatus, no longer a valid species) and the Bugun liocichla (Liocichla bugunorum), using blood, DNA and feather samples as the holotype material, has now become possible. Other methods of preservation include the storage of specimens in spirit. Such wet specimens have special value in physiological and anatomical study, apart from providing better quality of DNA for molecular studies. Freeze drying of specimens is another technique that has the advantage of preserving stomach contents and anatomy, although it tends to shrink, making it less reliable for morphometrics. In the field The study of birds in the field was helped enormously by improvements in optics. Photography made it possible to document birds in the field with great accuracy. High-power spotting scopes today allow observers to detect minute morphological differences that were earlier possible only by examination of the specimen "in the hand". The capture and marking of birds enable detailed studies of life history. Techniques for capturing birds are varied and include the use of bird liming for perching birds, mist nets for woodland birds, cannon netting for open-area flocking birds, the bal-chatri trap for raptors, decoys and funnel traps for water birds. The bird in the hand may be examined and measurements can be made, including standard lengths and weights. Feather moult and skull ossification provide indications of age and health. 
Sex can be determined by examination of anatomy in some sexually nondimorphic species. Blood samples may be drawn to determine hormonal condition in studies of physiology and to identify DNA markers for studying genetics and kinship in studies of breeding biology and phylogeography. Blood may also be used to identify pathogens and arthropod-borne viruses. Ectoparasites may be collected for studies of coevolution and zoonoses. In many cryptic species, measurements (such as the relative lengths of wing feathers in warblers) are vital in establishing identity. Captured birds are often marked for future recognition. Rings or bands provide long-lasting identification, but require capture for the information on them to be read. Field-identifiable marks such as coloured bands, wing tags, or dyes enable short-term studies where individual identification is required. Mark and recapture techniques make demographic studies possible. Ringing has traditionally been used in the study of migration. In recent times, satellite transmitters provide the ability to track migrating birds in near-real time. Techniques for estimating population density include point counts, transects, and territory mapping. Observations are made in the field using carefully designed protocols and the data may be analysed to estimate bird diversity, relative abundance, or absolute population densities. These methods may be used repeatedly over large timespans to monitor changes in the environment. Camera traps have been found to be a useful tool for the detection and documentation of elusive species, nest predators and in the quantitative analysis of frugivory, seed dispersal and behaviour. In the laboratory Many aspects of bird biology are difficult to study in the field. These include the study of behavioural and physiological changes that require a long duration of access to the bird. Nondestructive samples of blood or feathers taken during field studies may be studied in the laboratory. For instance, the variation in the ratios of stable hydrogen isotopes across latitudes makes it possible to establish the origins of migrant birds using mass spectrometric analysis of feather samples. These techniques can be used in combination with other techniques such as ringing. The first attenuated vaccine developed by Louis Pasteur, for fowl cholera, was tested on poultry in 1878. Anti-malarials were tested on birds, which harbour avian malaria. Poultry continues to be used as a model for many studies in non-mammalian immunology. Studies in bird behaviour include the use of tamed and trained birds in captivity. Studies on bird intelligence and song learning have been largely laboratory-based. Field researchers may make use of a wide range of techniques, such as dummy owls to elicit mobbing behaviour, and dummy males or call playback to elicit territorial behaviour and thereby establish the boundaries of bird territories. Aspects of bird migration, including navigation, orientation, and physiology, are often studied using captive birds in special cages that record their activities. The Emlen funnel, for instance, makes use of a cage with an inkpad at the centre and a conical floor where the ink marks can be counted to identify the direction in which the bird attempts to fly. The funnel can have a transparent top, and visible cues such as the direction of sunlight may be controlled using mirrors, or the positions of the stars simulated in a planetarium.
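As a minimal sketch of how directional data of the kind produced by an Emlen funnel might be summarized, the following Python snippet computes a mean bearing and a mean vector length from counts of ink marks binned into angular sectors. The counts and sector layout are invented for illustration; real studies use more elaborate circular statistics.

```python
import math

# Hypothetical counts of ink marks per 45-degree sector of an Emlen funnel,
# with sector centres given as compass bearings (0 = North, increasing clockwise).
# The numbers are invented for illustration only.
sector_bearings_deg = [0, 45, 90, 135, 180, 225, 270, 315]
mark_counts         = [38, 52, 20,   8,   5,   6,  10,  25]

# Sum unit vectors weighted by the counts (north and east components of each bearing).
north = sum(n * math.cos(math.radians(b)) for b, n in zip(sector_bearings_deg, mark_counts))
east  = sum(n * math.sin(math.radians(b)) for b, n in zip(sector_bearings_deg, mark_counts))

total = sum(mark_counts)
mean_bearing = math.degrees(math.atan2(east, north)) % 360   # mean direction, degrees clockwise from North
r = math.hypot(north, east) / total                          # 0 = no directional preference, 1 = all marks one way

print(f"mean direction ≈ {mean_bearing:.1f} degrees, mean vector length r = {r:.2f}")
```

With the invented counts above, the bird's hops concentrate towards the north-north-east and the mean vector length is moderate, which is the kind of summary typically reported for orientation experiments.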
The entire genome of the domestic fowl (Gallus gallus) was sequenced in 2004, and was followed in 2008 by the genome of the zebra finch (Taeniopygia guttata). Such whole-genome sequencing projects allow for studies on evolutionary processes involved in speciation. Associations between the expression of genes and behaviour may be studied using candidate genes. Variations in the exploratory behaviour of great tits (Parus major) have been found to be linked with a gene orthologous to the human gene DRD4 (Dopamine receptor D4) which is known to be associated with novelty-seeking behaviour. The role of gene expression in developmental differences and morphological variations have been studied in Darwin's finches. The difference in the expression of Bmp4 have been shown to be associated with changes in the growth and shape of the beak. The chicken has long been a model organism for studying vertebrate developmental biology. As the embryo is readily accessible, its development can be easily followed (unlike mice). This also allows the use of electroporation for studying the effect of adding or silencing a gene. Other tools for perturbing their genetic makeup are chicken embryonic stem cells and viral vectors. Collaborative studies With the widespread interest in birds, use of a large number of people to work on collaborative ornithological projects that cover large geographic scales has been possible. These citizen science projects include nationwide projects such as the Christmas Bird Count, Backyard Bird Count, the North American Breeding Bird Survey, the Canadian EPOQ or regional projects such as the Asian Waterfowl Census and Spring Alive in Europe. These projects help to identify distributions of birds, their population densities and changes over time, arrival and departure dates of migration, breeding seasonality, and even population genetics. The results of many of these projects are published as bird atlases. Studies of migration using bird ringing or colour marking often involve the cooperation of people and organizations in different countries. Applications Wild birds impact many human activities, while domesticated birds are important sources of eggs, meat, feathers, and other products. Applied and economic ornithology aim to reduce the ill effects of problem birds and enhance gains from beneficial species. The role of some species of birds as pests has been well known, particularly in agriculture. Granivorous birds such as the queleas in Africa are among the most numerous birds in the world, and foraging flocks can cause devastation. Many insectivorous birds are also noted as beneficial in agriculture. Many early studies on the benefits or damages caused by birds in fields were made by analysis of stomach contents and observation of feeding behaviour. Modern studies aimed to manage birds in agriculture make use of a wide range of principles from ecology. Intensive aquaculture has brought humans in conflict with fish-eating birds such as cormorants. Large flocks of pigeons and starlings in cities are often considered as a nuisance, and techniques to reduce their populations or their impacts are constantly innovated. Birds are also of medical importance, and their role as carriers of human diseases such as Japanese encephalitis, West Nile virus, and influenza H5N1 have been widely recognized. Bird strikes and the damage they cause in aviation are of particularly great importance, due to the fatal consequences and the level of economic losses caused. 
The airline industry incurs worldwide damages of an estimated US$1.2 billion each year. Many species of birds have been driven to extinction by human activities. Being conspicuous elements of the ecosystem, they have been considered as indicators of ecological health. They have also helped in gathering support for habitat conservation. Bird conservation requires specialized knowledge in aspects of biology and ecology, and may require the use of very location-specific approaches. Ornithologists contribute to conservation biology by studying the ecology of birds in the wild and identifying the key threats and ways of enhancing the survival of species. Critically endangered species such as the California condor have had to be captured and bred in captivity. Such ex situ conservation measures may be followed by reintroduction of the species into the wild.
Biology and health sciences
Basics_2
Biology
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Hubble's law
Hubble's law, also known as the Hubble–Lemaître law, is the observation in physical cosmology that galaxies are moving away from Earth at speeds proportional to their distance. In other words, the farther a galaxy is from the Earth, the faster it moves away. A galaxy's recessional velocity is typically determined by measuring its redshift, a shift in the frequency of light emitted by the galaxy. The discovery of Hubble's law is attributed to work published by Edwin Hubble in 1929, but the notion of the universe expanding at a calculable rate was first derived from general relativity equations in 1922 by Alexander Friedmann. The Friedmann equations showed the universe might be expanding, and presented the expansion speed if that were the case. Before Hubble, astronomer Carl Wilhelm Wirtz had, in 1922 and 1924, deduced with his own data that galaxies that appeared smaller and dimmer had larger redshifts and thus that more distant galaxies recede faster from the observer. In 1927, Georges Lemaître concluded that the universe might be expanding by noting the proportionality of the recessional velocity of distant bodies to their respective distances. He estimated a value for this ratio, which—after Hubble confirmed cosmic expansion and determined a more precise value for it two years later—became known as the Hubble constant. Hubble inferred the recession velocity of the objects from their redshifts, many of which were earlier measured and related to velocity by Vesto Slipher in 1917. Combining Slipher's velocities with Henrietta Swan Leavitt's intergalactic distance calculations and methodology allowed Hubble to better calculate an expansion rate for the universe. Hubble's law is considered the first observational basis for the expansion of the universe, and is one of the pieces of evidence most often cited in support of the Big Bang model. The motion of astronomical objects due solely to this expansion is known as the Hubble flow. It is described by the equation v = H0 D, with H0 the constant of proportionality—the Hubble constant—between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its speed of separation v, i.e. the derivative of proper distance with respect to the cosmic time coordinate. Though the Hubble constant H0 is constant at any given moment in time, the Hubble parameter H, of which the Hubble constant is the current value, varies with time, so the term constant is sometimes thought of as somewhat of a misnomer. The Hubble constant is most frequently quoted in km/s/Mpc, which gives the speed in km/s of a galaxy located one megaparsec away. Simplifying the units of the generalized form reveals that H0 specifies a frequency (SI unit: s−1), leading the reciprocal of H0 to be known as the Hubble time (about 14.4 billion years). The Hubble constant can also be stated as a relative rate of expansion. In this form H0 ≈ 7%/Gyr, meaning that, at the current rate of expansion, it takes one billion years for an unbound structure to grow by 7%. Discovery A decade before Hubble made his observations, a number of physicists and mathematicians had established a consistent theory of an expanding universe by using Einstein field equations of general relativity. Applying the most general principles to the nature of the universe yielded a dynamic solution that conflicted with the then-prevalent notion of a static universe. Slipher's observations In 1912, Vesto M.
Slipher measured the first Doppler shift of a "spiral nebula" (the obsolete term for spiral galaxies) and soon discovered that almost all such objects were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside the Milky Way galaxy. FLRW equations In 1922, Alexander Friedmann derived his Friedmann equations from Einstein field equations, showing that the universe might expand at a rate calculable by the equations. The parameter used by Friedmann is known today as the scale factor and can be considered as a scale invariant form of the proportionality constant of Hubble's law. Georges Lemaître independently found a similar solution in his 1927 paper discussed in the following section. The Friedmann equations are derived by inserting the metric for a homogeneous and isotropic universe into Einstein's field equations for a fluid with a given density and pressure. This idea of an expanding spacetime would eventually lead to the Big Bang and Steady State theories of cosmology. Lemaître's equation In 1927, two years before Hubble published his own article, the Belgian priest and astronomer Georges Lemaître was the first to publish research deriving what is now known as Hubble's law. According to the Canadian astronomer Sidney van den Bergh, "the 1927 discovery of the expansion of the universe by Lemaître was published in French in a low-impact journal. In the 1931 high-impact English translation of this article, a critical equation was changed by omitting reference to what is now known as the Hubble constant." It is now known that the alterations in the translated paper were carried out by Lemaître himself. Shape of the universe Before the advent of modern cosmology, there was considerable talk about the size and shape of the universe. In 1920, the Shapley–Curtis debate took place between Harlow Shapley and Heber D. Curtis over this issue. Shapley argued for a small universe the size of the Milky Way galaxy, and Curtis argued that the universe was much larger. The issue was resolved in the coming decade with Hubble's improved observations. Cepheid variable stars outside the Milky Way Edwin Hubble did most of his professional astronomical observing work at Mount Wilson Observatory, home to the world's most powerful telescope at the time. His observations of Cepheid variable stars in "spiral nebulae" enabled him to calculate the distances to these objects. Surprisingly, these objects were discovered to be at distances which placed them well outside the Milky Way. They continued to be called nebulae, and it was only gradually that the term galaxies replaced it. Combining redshifts with distance measurements The velocities and distances that appear in Hubble's law are not directly measured. The velocities are inferred from the redshift of radiation and distance is inferred from brightness. Hubble sought to correlate brightness with parameter . Combining his measurements of galaxy distances with Vesto Slipher and Milton Humason's measurements of the redshifts associated with the galaxies, Hubble discovered a rough proportionality between redshift of an object and its distance. 
Though there was considerable scatter (now known to be caused by peculiar velocities—the 'Hubble flow' is used to refer to the region of space far enough out that the recession velocity is larger than local peculiar velocities), Hubble was able to plot a trend line from the 46 galaxies he studied and obtain a value for the Hubble constant of 500 (km/s)/Mpc (much higher than the currently accepted value due to errors in his distance calibrations; see cosmic distance ladder for details). Hubble diagram Hubble's law can be easily depicted in a "Hubble diagram" in which the velocity (assumed approximately proportional to the redshift) of an object is plotted with respect to its distance from the observer. A straight line of positive slope on this diagram is the visual depiction of Hubble's law. Cosmological constant abandoned After Hubble's discovery was published, Albert Einstein abandoned his work on the cosmological constant, a term he had inserted into his equations of general relativity to coerce them into producing the static solution he previously considered the correct state of the universe. The Einstein equations in their simplest form model either an expanding or contracting universe, so Einstein introduced the constant to counter expansion or contraction and lead to a static and flat universe. After Hubble's discovery that the universe was, in fact, expanding, Einstein called his faulty assumption that the universe is static his "greatest mistake". On its own, general relativity could predict the expansion of the universe, which (through observations such as the bending of light by large masses, or the precession of the orbit of Mercury) could be experimentally observed and compared to his theoretical calculations using particular solutions of the equations he had originally formulated. In 1931, Einstein went to Mount Wilson Observatory to thank Hubble for providing the observational basis for modern cosmology. The cosmological constant has regained attention in recent decades as a hypothetical explanation for dark energy. Interpretation The discovery of the linear relationship between redshift and distance, coupled with a supposed linear relation between recessional velocity and redshift, yields a straightforward mathematical expression for Hubble's law as follows: where is the recessional velocity, typically expressed in km/s. is Hubble's constant and corresponds to the value of (often termed the Hubble parameter which is a value that is time dependent and which can be expressed in terms of the scale factor) in the Friedmann equations taken at the time of observation denoted by the subscript . This value is the same throughout the universe for a given comoving time. is the proper distance (which can change over time, unlike the comoving distance, which is constant) from the galaxy to the observer, measured in mega parsecs (Mpc), in the 3-space defined by given cosmological time. (Recession velocity is just ). Hubble's law is considered a fundamental relation between recessional velocity and distance. However, the relation between recessional velocity and redshift depends on the cosmological model adopted and is not established except for small redshifts. 
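As an illustration of the proportionality just described, the following sketch fits a straight line through the origin to a handful of made-up distance–velocity pairs and recovers a Hubble constant. The data are invented for illustration and are not Hubble's actual 1929 measurements, which covered far smaller distances and gave a much larger value.

```python
# Least-squares fit of v = H0 * D through the origin, on invented data.
# Distances in Mpc, recession velocities in km/s (illustrative values only).
distances  = [15, 30, 60, 120, 250, 400]             # Mpc
velocities = [1100, 2050, 4300, 8200, 17600, 27500]  # km/s

# For a line constrained through the origin, the least-squares slope is
# sum(D * v) / sum(D^2).
H0 = sum(d * v for d, v in zip(distances, velocities)) / sum(d * d for d in distances)
print(f"best-fit H0 = {H0:.1f} (km/s)/Mpc")

# Predicted recession velocity for a galaxy at 100 Mpc under the fitted law:
print(f"v(100 Mpc) = {H0 * 100:.0f} km/s")
```

The slope of such a fit is the Hubble constant; with the illustrative numbers above it comes out near 70 (km/s)/Mpc, close to modern estimates.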
For distances D larger than the radius of the Hubble sphere r_HS = c/H0, objects recede at a rate faster than the speed of light (see Uses of the proper distance for a discussion of the significance of this). Since the Hubble "constant" is a constant only in space, not in time, the radius of the Hubble sphere may increase or decrease over various time intervals. The subscript '0' indicates the value of the Hubble constant today. Current evidence suggests that the expansion of the universe is accelerating (see Accelerating universe), meaning that for any given galaxy, the recession velocity is increasing over time as the galaxy moves to greater and greater distances; however, the Hubble parameter is actually thought to be decreasing with time, meaning that if we were to look at some fixed distance and watch a series of different galaxies pass that distance, later galaxies would pass it at a smaller velocity than earlier ones. Redshift velocity and recessional velocity Redshift can be measured by determining the wavelength of a known transition, such as hydrogen α-lines for distant quasars, and finding the fractional shift compared to a stationary reference. Thus, redshift is a quantity unambiguously acquired from observation. Care is required, however, in translating these to recessional velocities: for small redshift values, a linear relation of redshift to recessional velocity applies, but more generally the redshift–distance law is nonlinear, meaning the relation must be derived specifically for each given model and epoch. Redshift velocity The redshift is often described as a redshift velocity, which is the recessional velocity that would produce the same redshift if it were caused by a linear Doppler effect (which, however, is not the case, as the velocities involved are too large to use a non-relativistic formula for Doppler shift). This redshift velocity can easily exceed the speed of light. In other words, to determine the redshift velocity v_rs, the relation v_rs ≡ cz is used. That is, there is no fundamental difference between redshift velocity and redshift: they are rigidly proportional, and not related by any theoretical reasoning. The motivation behind the "redshift velocity" terminology is that the redshift velocity agrees with the velocity from a low-velocity simplification of the so-called Fizeau–Doppler formula z = λo/λe − 1 ≈ v/c, where λo and λe are the observed and emitted wavelengths respectively. The "redshift velocity" is not so simply related to real velocity at larger velocities, however, and this terminology leads to confusion if interpreted as a real velocity. Next, the connection between redshift or redshift velocity and recessional velocity is discussed. Recessional velocity Suppose a(t) is called the scale factor of the universe, and increases as the universe expands in a manner that depends upon the cosmological model selected. Its meaning is that all measured proper distances D(t) between co-moving points increase proportionally to a(t). (The co-moving points are not moving relative to their local environments.) In other words, D(t) = D(t0) a(t)/a(t0), where t0 is some reference time. If light is emitted from a galaxy at time t_e and received by us at t_0, it is redshifted due to the expansion of the universe, and this redshift z is simply z = a(t_0)/a(t_e) − 1. Suppose a galaxy is at distance D, and this distance changes with time at a rate dD/dt.
We call this rate of recession the "recession velocity" : We now define the Hubble constant as and discover the Hubble law: From this perspective, Hubble's law is a fundamental relation between (i) the recessional velocity associated with the expansion of the universe and (ii) the distance to an object; the connection between redshift and distance is a crutch used to connect Hubble's law with observations. This law can be related to redshift approximately by making a Taylor series expansion: If the distance is not too large, all other complications of the model become small corrections, and the time interval is simply the distance divided by the speed of light: or According to this approach, the relation is an approximation valid at low redshifts, to be replaced by a relation at large redshifts that is model-dependent. See velocity-redshift figure. Observability of parameters Strictly speaking, neither nor in the formula are directly observable, because they are properties of a galaxy, whereas our observations refer to the galaxy in the past, at the time that the light we currently see left it. For relatively nearby galaxies (redshift much less than one), and will not have changed much, and can be estimated using the formula where is the speed of light. This gives the empirical relation found by Hubble. For distant galaxies, (or ) cannot be calculated from without specifying a detailed model for how changes with time. The redshift is not even directly related to the recession velocity at the time the light set out, but it does have a simple interpretation: is the factor by which the universe has expanded while the photon was traveling towards the observer. Expansion velocity vs. peculiar velocity In using Hubble's law to determine distances, only the velocity due to the expansion of the universe can be used. Since gravitationally interacting galaxies move relative to each other independent of the expansion of the universe, these relative velocities, called peculiar velocities, need to be accounted for in the application of Hubble's law. Such peculiar velocities give rise to redshift-space distortions. Time-dependence of Hubble parameter The parameter is commonly called the "Hubble constant", but that is a misnomer since it is constant in space only at a fixed time; it varies with time in nearly all cosmological models, and all observations of far distant objects are also observations into the distant past, when the "constant" had a different value. "Hubble parameter" is a more correct term, with denoting the present-day value. Another common source of confusion is that the accelerating universe does imply that the Hubble parameter is actually increasing with time; since in most accelerating models increases relatively faster than so decreases with time. (The recession velocity of one chosen galaxy does increase, but different galaxies passing a sphere of fixed radius cross the sphere more slowly at later times.) On defining the dimensionless deceleration parameter it follows that From this it is seen that the Hubble parameter is decreasing with time, unless ; the latter can only occur if the universe contains phantom energy, regarded as theoretically somewhat improbable. 
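Referring back to the distinction drawn above between redshift velocity and true recessional velocity, the sketch below compares cz with the velocity implied by the special-relativistic Doppler formula for a few redshifts. Neither quantity is the model-dependent cosmological recession velocity, but the comparison shows why cz should not be read as a physical velocity once z is no longer small.

```python
# Compare the "redshift velocity" c*z with the special-relativistic Doppler
# velocity v = c * ((1+z)^2 - 1) / ((1+z)^2 + 1), for a few redshifts.
C = 299_792.458  # speed of light, km/s

def redshift_velocity(z):
    return C * z

def relativistic_doppler_velocity(z):
    return C * ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)

for z in (0.01, 0.1, 0.5, 1.0, 2.0):
    v_rs = redshift_velocity(z)
    v_sr = relativistic_doppler_velocity(z)
    print(f"z = {z:4}:  cz = {v_rs:9.0f} km/s   relativistic Doppler = {v_sr:9.0f} km/s")

# At z = 0.01 the two agree to better than 1%; by z = 2 the naive cz already
# exceeds the speed of light while the relativistic formula stays below it.
```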
However, in the standard Lambda cold dark matter model (Lambda-CDM or ΛCDM model), will tend to −1 from above in the distant future as the cosmological constant becomes increasingly dominant over matter; this implies that will approach from above to a constant value of ≈ 57 (km/s)/Mpc, and the scale factor of the universe will then grow exponentially in time. Idealized Hubble's law The mathematical derivation of an idealized Hubble's law for a uniformly expanding universe is a fairly elementary theorem of geometry in 3-dimensional Cartesian/Newtonian coordinate space, which, considered as a metric space, is entirely homogeneous and isotropic (properties do not vary with location or direction). Simply stated, the theorem is this: In fact, this applies to non-Cartesian spaces as long as they are locally homogeneous and isotropic, specifically to the negatively and positively curved spaces frequently considered as cosmological models (see shape of the universe). An observation stemming from this theorem is that seeing objects recede from us on Earth is not an indication that Earth is near to a center from which the expansion is occurring, but rather that observer in an expanding universe will see objects receding from them. Ultimate fate and age of the universe The value of the Hubble parameter changes over time, either increasing or decreasing depending on the value of the so-called deceleration parameter , which is defined by In a universe with a deceleration parameter equal to zero, it follows that , where is the time since the Big Bang. A non-zero, time-dependent value of simply requires integration of the Friedmann equations backwards from the present time to the time when the comoving horizon size was zero. It was long thought that was positive, indicating that the expansion is slowing down due to gravitational attraction. This would imply an age of the universe less than (which is about 14 billion years). For instance, a value for of 1/2 (once favoured by most theorists) would give the age of the universe as . The discovery in 1998 that is apparently negative means that the universe could actually be older than . However, estimates of the age of the universe are very close to . Olbers' paradox The expansion of space summarized by the Big Bang interpretation of Hubble's law is relevant to the old conundrum known as Olbers' paradox: If the universe were infinite in size, static, and filled with a uniform distribution of stars, then every line of sight in the sky would end on a star, and the sky would be as bright as the surface of a star. However, the night sky is largely dark. Since the 17th century, astronomers and other thinkers have proposed many possible ways to resolve this paradox, but the currently accepted resolution depends in part on the Big Bang theory, and in part on the Hubble expansion: in a universe that existed for a finite amount of time, only the light of a finite number of stars has had enough time to reach us, and the paradox is resolved. Additionally, in an expanding universe, distant objects recede from us, which causes the light emanated from them to be redshifted and diminished in brightness by the time we see it. Dimensionless Hubble constant Instead of working with Hubble's constant, a common practice is to introduce the dimensionless Hubble constant, usually denoted by and commonly referred to as "little h", then to write Hubble's constant as , all the relative uncertainty of the true value of being then relegated to . 
The dimensionless Hubble constant is often used when giving distances that are calculated from redshift using the formula . Since is not precisely known, the distance is expressed as: In other words, one calculates 2998 × and one gives the units as Mpc  or  Mpc. Occasionally a reference value other than 100 may be chosen, in which case a subscript is presented after to avoid confusion; e.g. denotes  , which implies . This should not be confused with the dimensionless value of Hubble's constant, usually expressed in terms of Planck units, obtained by multiplying by (from definitions of parsec and ), for example for , a Planck unit version of is obtained. Acceleration of the expansion A value for measured from standard candle observations of Type Ia supernovae, which was determined in 1998 to be negative, surprised many astronomers with the implication that the expansion of the universe is currently "accelerating" (although the Hubble factor is still decreasing with time, as mentioned above in the Interpretation section; see the articles on dark energy and the ΛCDM model). Derivation of the Hubble parameter Start with the Friedmann equation: where is the Hubble parameter, is the scale factor, is the gravitational constant, is the normalised spatial curvature of the universe and equal to −1, 0, or 1, and is the cosmological constant. Matter-dominated universe (with a cosmological constant) If the universe is matter-dominated, then the mass density of the universe can be taken to include just matter so where is the density of matter today. From the Friedmann equation and thermodynamic principles we know for non-relativistic particles that their mass density decreases proportional to the inverse volume of the universe, so the equation above must be true. We can also define (see density parameter for ) therefore: Also, by definition, where the subscript refers to the values today, and . Substituting all of this into the Friedmann equation at the start of this section and replacing with gives Matter- and dark energy-dominated universe If the universe is both matter-dominated and dark energy-dominated, then the above equation for the Hubble parameter will also be a function of the equation of state of dark energy. So now: where is the mass density of the dark energy. By definition, an equation of state in cosmology is , and if this is substituted into the fluid equation, which describes how the mass density of the universe evolves with time, then If is constant, then implying: Therefore, for dark energy with a constant equation of state , If this is substituted into the Friedman equation in a similar way as before, but this time set , which assumes a spatially flat universe, then (see shape of the universe) If the dark energy derives from a cosmological constant such as that introduced by Einstein, it can be shown that . The equation then reduces to the last equation in the matter-dominated universe section, with set to zero. In that case the initial dark energy density is given by If dark energy does not have a constant equation-of-state , then and to solve this, must be parametrized, for example if , giving Other ingredients have been formulated. Units derived from the Hubble constant Hubble time The Hubble constant has units of inverse time; the Hubble time is simply defined as the inverse of the Hubble constant, i.e. This is slightly different from the age of the universe, which is approximately 13.8 billion years. 
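Referring back to the matter-plus-dark-energy derivation above, here is a minimal sketch of the Hubble parameter for a spatially flat universe containing matter and a cosmological constant (dark-energy equation of state w = −1), H(z) = H0 · sqrt(Ωm (1+z)³ + ΩΛ). The parameter values are round illustrative numbers, not a fit to any data set.

```python
import math

def hubble_parameter(z, H0=70.0, omega_m=0.3, omega_lambda=0.7):
    """H(z) in (km/s)/Mpc for a flat universe of matter plus a cosmological
    constant (w = -1). Parameter values are illustrative round numbers."""
    return H0 * math.sqrt(omega_m * (1 + z) ** 3 + omega_lambda)

for z in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"z = {z:3}:  H(z) = {hubble_parameter(z):7.1f} (km/s)/Mpc")

# In the far future (scale factor -> infinity) the matter term dies away and
# H tends to H0 * sqrt(omega_lambda), the constant late-time value of roughly
# 55-60 (km/s)/Mpc mentioned above for currently measured H0 values.
```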
The Hubble time is the age it would have had if the expansion had been linear, and it is different from the real age of the universe because the expansion is not linear; it depends on the energy content of the universe (see ). We currently appear to be approaching a period where the expansion of the universe is exponential due to the increasing dominance of vacuum energy. In this regime, the Hubble parameter is constant, and the universe grows by a factor each Hubble time: Likewise, the generally accepted value of 2.27 Es−1 means that (at the current rate) the universe would grow by a factor of in one exasecond. Over long periods of time, the dynamics are complicated by general relativity, dark energy, inflation, etc., as explained above. Hubble length The Hubble length or Hubble distance is a unit of distance in cosmology, defined as — the speed of light multiplied by the Hubble time. It is equivalent to 4,420 million parsecs or 14.4 billion light years. (The numerical value of the Hubble length in light years is, by definition, equal to that of the Hubble time in years.) Substituting into the equation for Hubble's law, reveals that the Hubble distance specifies the distance from our location to those galaxies which are receding from us at the speed of light Hubble volume The Hubble volume is sometimes defined as a volume of the universe with a comoving size of . The exact definition varies: it is sometimes defined as the volume of a sphere with radius , or alternatively, a cube of side . Some cosmologists even use the term Hubble volume to refer to the volume of the observable universe, although this has a radius approximately three times larger. Determining the Hubble constant The value of the Hubble constant, , cannot be measured directly, but is derived from a combination of astronomical observations and model-dependent assumptions. Increasingly accurate observations and new models over many decades have led to two sets of highly precise values which do not agree. This difference is known as the "Hubble tension". Earlier measurements For the original 1929 estimate of the constant now bearing his name, Hubble used observations of Cepheid variable stars as "standard candles" to measure distance. The result he obtained was , much larger than the value astronomers currently calculate. Later observations by astronomer Walter Baade led him to realize that there were distinct "populations" for stars (Population I and Population II) in a galaxy. The same observations led him to discover that there are two types of Cepheid variable stars with different luminosities. Using this discovery, he recalculated Hubble constant and the size of the known universe, doubling the previous calculation made by Hubble in 1929. He announced this finding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome. For most of the second half of the 20th century, the value of was estimated to be between . The value of the Hubble constant was the topic of a long and rather bitter controversy between Gérard de Vaucouleurs, who claimed the value was around 100, and Allan Sandage, who claimed the value was near 50. In one demonstration of vitriol shared between the parties, when Sandage and Gustav Andreas Tammann (Sandage's research colleague) formally acknowledged the shortcomings of confirming the systematic error of their method in 1975, Vaucouleurs responded "It is unfortunate that this sober warning was so soon forgotten and ignored by most astronomers and textbook writers". 
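To make the units discussed above concrete, the following sketch converts a Hubble constant into a Hubble time and a Hubble length, both for a modern value near 70 (km/s)/Mpc and for Hubble's original 500 (km/s)/Mpc, and also prints the age 2/(3·H0) that a matter-dominated (q = 1/2) universe would have. The conversion constants are standard; the choice of example values is illustrative.

```python
# Convert H0 from (km/s)/Mpc into a Hubble time and Hubble length.
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.1557e16  # seconds in one billion (Julian) years

def hubble_units(H0_km_s_Mpc):
    H0_per_s = H0_km_s_Mpc / KM_PER_MPC          # H0 as an SI frequency, 1/s
    t_hubble_gyr = 1.0 / H0_per_s / SECONDS_PER_GYR
    length_gly = t_hubble_gyr                     # Hubble length in Gly equals Hubble time in Gyr
    return t_hubble_gyr, length_gly

for H0 in (70.0, 500.0):
    t, L = hubble_units(H0)
    print(f"H0 = {H0:5.0f} (km/s)/Mpc: Hubble time ≈ {t:5.1f} Gyr, "
          f"Hubble length ≈ {L:5.1f} Gly, matter-dominated age 2/(3*H0) ≈ {2 * t / 3:5.1f} Gyr")
```

For H0 ≈ 70 this gives a Hubble time of about 14 Gyr, whereas Hubble's original 500 (km/s)/Mpc would imply a Hubble time of only about 2 Gyr, which is part of why the early value was revised so heavily.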
In 1996, a debate moderated by John Bahcall between Sidney van den Bergh and Gustav Tammann was held in similar fashion to the earlier Shapley–Curtis debate over these two competing values. This previously wide variance in estimates was partially resolved with the introduction of the ΛCDM model of the universe in the late 1990s. Incorporating the ΛCDM model, observations of high-redshift clusters at X-ray and microwave wavelengths using the Sunyaev–Zel'dovich effect, measurements of anisotropies in the cosmic microwave background radiation, and optical surveys all gave a value of around 50–70 km/s/Mpc for the constant. Precision cosmology and the Hubble tension By the late 1990s, advances in ideas and technology allowed higher precision measurements. However, two major categories of methods, each with high precision, fail to agree. "Late universe" measurements using calibrated distance ladder techniques have converged on a value of approximately 73 km/s/Mpc. Since 2000, "early universe" techniques based on measurements of the cosmic microwave background have become available, and these agree on a value near 67 km/s/Mpc. (This accounts for the change in the expansion rate since the early universe, so it is comparable to the first number.) Initially, this discrepancy was within the estimated measurement uncertainties and thus no cause for concern. However, as techniques have improved, the estimated measurement uncertainties have shrunk, but the discrepancies have not, to the point that the disagreement is now highly statistically significant. This discrepancy is called the Hubble tension. As an example of an "early" measurement, the Planck mission results published in 2018 give a value for H0 of 67.4 ± 0.5 km/s/Mpc. In the "late" camp is the higher value of about 73 km/s/Mpc determined by the Hubble Space Telescope and confirmed by the James Webb Space Telescope in 2023. The "early" and "late" measurements disagree at the >5 σ level, beyond a plausible level of chance. The resolution to this disagreement is an ongoing area of active research. Reducing systematic errors Since 2013 much effort has gone into new measurements to check for possible systematic errors and to improve reproducibility. The "late universe" or distance ladder measurements typically employ three stages or "rungs". In the first rung, distances to Cepheids are determined while trying to reduce luminosity errors from dust and from correlations of metallicity with luminosity. The second rung uses Type Ia supernovae, explosions of an almost constant amount of mass that therefore produce very similar amounts of light; the primary source of systematic error is the limited number of objects that can be observed. The third rung of the distance ladder measures the redshift of the supernovae to extract the Hubble flow and, from that, the constant. At this rung, corrections due to motion other than expansion are applied. As an example of the kind of work needed to reduce systematic errors, photometry on observations from the James Webb Space Telescope of extra-galactic Cepheids confirms the findings from the HST. The higher resolution avoided confusion from crowding of stars in the field of view but came to the same value for H0. The "early universe" or inverse distance ladder measures the observable consequences of spherical sound waves in the primordial plasma density. These pressure waves – called baryon acoustic oscillations (BAO) – ceased once the universe cooled enough for electrons to stay bound to nuclei, ending the plasma and allowing the photons trapped by interaction with the plasma to escape.
The pressure waves then become very small perturbations in density imprinted on the cosmic microwave background and on the large-scale density of galaxies across the sky. Detailed structure in high-precision measurements of the CMB can be matched to physics models of the oscillations. These models depend upon the Hubble constant, such that a match reveals a value for the constant. Similarly, the BAO affects the statistical distribution of matter, observed as distant galaxies across the sky. These two independent kinds of measurements produce similar values for the constant from the current models, giving strong evidence that systematic errors in the measurements themselves do not affect the result. Other kinds of measurements In addition to measurements based on calibrated distance ladder techniques or measurements of the CMB, other methods have been used to determine the Hubble constant. In October 2018, scientists presented a new way of determining the Hubble constant, using information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817). In July 2019, astronomers reported that a new method to determine the Hubble constant, and potentially resolve the discrepancy between earlier methods, had been proposed based on the mergers of pairs of neutron stars, following the detection of the neutron star merger GW170817, an event known as a dark siren. Their measurement of the Hubble constant is (km/s)/Mpc. Also in July 2019, astronomers reported another new method, using data from the Hubble Space Telescope and based on distances to red giant stars calculated using the tip of the red-giant branch (TRGB) distance indicator. Their measurement of the Hubble constant is . In February 2020, the Megamaser Cosmology Project published independent results based on astrophysical masers visible at cosmological distances, which do not require multi-step calibration. That work confirmed the distance ladder results and differed from the early-universe results at a statistical significance level of 95%. In July 2020, measurements of the cosmic background radiation by the Atacama Cosmology Telescope predicted that the Universe should be expanding more slowly than is currently observed. In July 2023, an independent estimate of the Hubble constant was derived from a kilonova, the optical afterglow of a neutron star merger. Due to the blackbody nature of early kilonova spectra, such systems provide strongly constraining estimators of cosmic distance. Using the kilonova AT2017gfo (the aftermath of, once again, GW170817), these measurements indicate a local estimate of the Hubble constant of . Possible resolutions of the Hubble tension The cause of the Hubble tension is unknown, and there are many proposed solutions. The most conservative is that there is an unknown systematic error affecting either early-universe or late-universe observations. Although intuitively appealing, this explanation requires multiple unrelated effects regardless of whether early-universe or late-universe observations are incorrect, and there are no obvious candidates. Furthermore, any such systematic error would need to affect multiple different instruments, since both the early-universe and late-universe observations come from several different telescopes. Alternatively, it could be that the observations are correct, but some unaccounted-for effect is causing the discrepancy.
If the cosmological principle fails (see ), then the existing interpretations of the Hubble constant and the Hubble tension have to be revised, which might resolve the Hubble tension. In particular, we would need to be located within a very large void, up to about a redshift of 0.5, for such an explanation to conflate with supernovae and baryon acoustic oscillation observations. Yet another possibility is that the uncertainties in the measurements could have been underestimated, but given the internal agreements this is neither likely, nor resolves the overall tension. Finally, another possibility is new physics beyond the currently accepted cosmological model of the universe, the ΛCDM model. There are very many theories in this category, for example, replacing general relativity with a modified theory of gravity could potentially resolve the tension, as can a dark energy component in the early universe, dark energy with a time-varying equation of state, or dark matter that decays into dark radiation. A problem faced by all these theories is that both early-universe and late-universe measurements rely on multiple independent lines of physics, and it is difficult to modify any of those lines while preserving their successes elsewhere. The scale of the challenge can be seen from how some authors have argued that new early-universe physics alone is not sufficient; while other authors argue that new late-universe physics alone is also not sufficient. Nonetheless, astronomers are trying, with interest in the Hubble tension growing strongly since the mid 2010s. Measurements of the Hubble constant
Physical sciences
Physical cosmology
null
42986
https://en.wikipedia.org/wiki/Alternating%20current
Alternating current
Alternating current (AC) is an electric current that periodically reverses direction and changes its magnitude continuously with time, in contrast to direct current (DC), which flows only in one direction. Alternating current is the form in which electric power is delivered to businesses and residences, and it is the form of electrical energy that consumers typically use when they plug kitchen appliances, televisions, fans and electric lamps into a wall socket. The abbreviations AC and DC are often used to mean simply alternating and direct, respectively, as when they modify current or voltage. The usual waveform of alternating current in most electric power circuits is a sine wave, whose positive half-period corresponds with the positive direction of the current and vice versa (the full period is called a cycle). "Alternating current" most commonly refers to power distribution, but a wide range of other applications are technically alternating current, although it is less common to describe them by that term. In many applications, like guitar amplifiers, different waveforms are used, such as triangular waves or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. These types of alternating current carry information such as sound (audio) or images (video), sometimes carried by modulation of an AC carrier signal. These currents typically alternate at higher frequencies than those used in power transmission. Transmission, distribution, and domestic power supply Electrical energy is distributed as alternating current because AC voltage may be increased or decreased with a transformer. This allows the power to be transmitted through power lines efficiently at high voltage, which reduces the energy lost as heat due to resistance of the wire, and transformed to a lower, safer voltage for use. Use of a higher voltage leads to significantly more efficient transmission of power. The power losses (P_L) in the wire are a product of the square of the current (I) and the resistance (R) of the wire, described by the formula P_L = I²R. This means that when transmitting a fixed power on a given wire, if the current is halved (i.e. the voltage is doubled), the power loss due to the wire's resistance will be reduced to one quarter. The power transmitted is equal to the product of the current and the voltage (assuming no phase difference); that is, P_T = IV. Consequently, power transmitted at a higher voltage requires less loss-producing current than the same power at a lower voltage. Power is often transmitted at hundreds of kilovolts on pylons, transformed down to tens of kilovolts to be transmitted on lower-level lines, and finally transformed down to 100 V – 240 V for domestic use. High voltages have disadvantages, such as the increased insulation required and generally increased difficulty in their safe handling. In a power plant, energy is generated at a voltage convenient for the design of the generator, and then stepped up to a high voltage for transmission. Near the loads, the transmission voltage is stepped down to the voltages used by equipment. Consumer voltages vary somewhat depending on the country and size of load, but generally motors and lighting are built to use up to a few hundred volts between phases. The voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate.
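As a rough numerical illustration of the loss formula above, the sketch below compares the resistive loss in the same line when a fixed power is sent at three different voltages. The power level, voltages, and line resistance are invented for illustration, and a unity power factor is assumed.

```python
# Resistive loss P_loss = I^2 * R for a fixed transmitted power P = V * I,
# so I = P / V and the loss falls with the square of the transmission voltage.
def line_loss_watts(power_w, voltage_v, line_resistance_ohm):
    current_a = power_w / voltage_v          # assumes unity power factor
    return current_a ** 2 * line_resistance_ohm

P = 10e6   # 10 MW delivered (illustrative)
R = 5.0    # ohms of total line resistance (illustrative)

for kv in (11, 110, 400):
    loss = line_loss_watts(P, kv * 1e3, R)
    print(f"{kv:3d} kV: current = {P / (kv * 1e3):7.1f} A, line loss = {loss / 1e3:9.1f} kW")
```

Raising the transmission voltage from 11 kV to 110 kV in this toy example cuts the current by a factor of ten and the resistive loss by a factor of one hundred, which is the basic motivation for high-voltage transmission.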
Standard power utilization voltages and percentage tolerance vary in the different mains power systems found in the world. High-voltage direct-current (HVDC) electric power transmission systems have become more viable as technology has provided efficient means of changing the voltage of DC power. Transmission with high voltage direct current was not feasible in the early days of electric power transmission, as there was then no economically viable way to step the voltage of DC down for end user applications such as lighting incandescent bulbs. Three-phase electrical generation is very common. The simplest way is to use three separate coils in the generator stator, physically offset by an angle of 120° (one-third of a complete 360° phase) to each other. Three current waveforms are produced that are equal in magnitude and 120° out of phase to each other. If coils are added opposite to these (60° spacing), they generate the same phases with reverse polarity and so can be simply wired together. In practice, higher pole orders are commonly used. For example, a 12-pole machine would have 36 coils (10° spacing). The advantage is that lower rotational speeds can be used to generate the same frequency. For example, a 2-pole machine running at 3600 rpm and a 12-pole machine running at 600 rpm produce the same frequency; the lower speed is preferable for larger machines. If the load on a three-phase system is balanced equally among the phases, no current flows through the neutral point. Even in the worst-case unbalanced (linear) load, the neutral current will not exceed the highest of the phase currents. Non-linear loads (e.g. the switch-mode power supplies widely used) may require an oversized neutral bus and neutral conductor in the upstream distribution panel to handle harmonics. Harmonics can cause neutral conductor current levels to exceed that of one or all phase conductors. For three-phase at utilization voltages a four-wire system is often used. When stepping down three-phase, a transformer with a Delta (3-wire) primary and a Star (4-wire, center-earthed) secondary is often used so there is no need for a neutral on the supply side. For smaller customers (just how small varies by country and age of the installation) only a single phase and neutral, or two phases and neutral, are taken to the property. For larger installations, all three phases and neutral are taken to the main distribution panel. From the three-phase main panel, both single and three-phase circuits may lead off. Three-wire single-phase systems, with a single center-tapped transformer giving two live conductors, is a common distribution scheme for residential and small commercial buildings in North America. This arrangement is sometimes incorrectly referred to as two phase. A similar method is used for a different reason on construction sites in the UK. Small power tools and lighting are supposed to be supplied by a local center-tapped transformer with a voltage of 55 V between each power conductor and earth. This significantly reduces the risk of electric shock in the event that one of the live conductors becomes exposed through an equipment fault whilst still allowing a reasonable voltage of 110 V between the two conductors for running the tools. An additional wire, called the bond (or earth) wire, is often connected between non-current-carrying metal enclosures and earth ground. 
This conductor provides protection from electric shock due to accidental contact of circuit conductors with the metal chassis of portable appliances and tools. Bonding all non-current-carrying metal parts into one complete system ensures there is always a low electrical impedance path to ground sufficient to carry any fault current for as long as it takes for the system to clear the fault. This low impedance path allows the maximum amount of fault current, causing the overcurrent protection device (breakers, fuses) to trip or burn out as quickly as possible, bringing the electrical system to a safe state. All bond wires are bonded to ground at the main service panel, as is the neutral/identified conductor if present. AC power supply frequencies The frequency of the electrical system varies by country and sometimes within a country; most electric power is generated at either 50 or 60 Hertz. Some countries have a mixture of 50 Hz and 60 Hz supplies, notably electricity power transmission in Japan. Low frequency A low frequency eases the design of electric motors, particularly for hoisting, crushing and rolling applications, and commutator-type traction motors for applications such as railways. However, low frequency also causes noticeable flicker in arc lamps and incandescent light bulbs. The use of lower frequencies also provided the advantage of lower transmission losses, which are proportional to frequency. The original Niagara Falls generators were built to produce 25 Hz power, as a compromise between low frequency for traction and heavy induction motors, while still allowing incandescent lighting to operate (although with noticeable flicker). Most of the 25 Hz residential and commercial customers for Niagara Falls power were converted to 60 Hz by the late 1950s, although some 25 Hz industrial customers still existed as of the start of the 21st century. 16.7 Hz power (formerly 16 2/3 Hz) is still used in some European rail systems, such as in Austria, Germany, Norway, Sweden and Switzerland. High frequency Off-shore, military, textile industry, marine, aircraft, and spacecraft applications sometimes use 400 Hz, for benefits of reduced weight of apparatus or higher motor speeds. Computer mainframe systems were often powered by 400 Hz or 415 Hz for benefits of ripple reduction while using smaller internal AC to DC conversion units. Effects at high frequencies A direct current flows uniformly throughout the cross-section of a homogeneous electrically conducting wire. An alternating current of any frequency is forced away from the wire's center, toward its outer surface. This is because an alternating current (which is the result of the acceleration of electric charge) creates electromagnetic waves (a phenomenon known as electromagnetic radiation). Electric conductors are not conducive to electromagnetic waves (a perfect electric conductor prohibits all electromagnetic waves within its boundary), so a wire that is made of a non-perfect conductor (a conductor with finite, rather than infinite, electrical conductivity) pushes the alternating current, along with their associated electromagnetic fields, away from the wire's center. The phenomenon of alternating current being pushed away from the center of the conductor is called skin effect, and a direct current does not exhibit this effect, since a direct current does not create electromagnetic waves. 
At very high frequencies, the current no longer flows in the wire, but effectively flows on the surface of the wire, within a thickness of a few skin depths. The skin depth is the thickness at which the current density is reduced by 63%. Even at relatively low frequencies used for power transmission (50 Hz – 60 Hz), non-uniform distribution of current still occurs in sufficiently thick conductors. For example, the skin depth of a copper conductor is approximately 8.57 mm at 60 Hz, so high-current conductors are usually hollow to reduce their mass and cost. This tendency of alternating current to flow predominantly in the periphery of conductors reduces the effective cross-section of the conductor. This increases the effective AC resistance of the conductor since resistance is inversely proportional to the cross-sectional area. A conductor's AC resistance is higher than its DC resistance, causing a higher energy loss due to ohmic heating (also called I2R loss). Techniques for reducing AC resistance For low to medium frequencies, conductors can be divided into stranded wires, each insulated from the others, with the relative positions of individual strands specially arranged within the conductor bundle. Wire constructed using this technique is called Litz wire. This measure helps to partially mitigate skin effect by forcing more equal current throughout the total cross section of the stranded conductors. Litz wire is used for making high-Q inductors, reducing losses in flexible conductors carrying very high currents at lower frequencies, and in the windings of devices carrying higher radio frequency current (up to hundreds of kilohertz), such as switch-mode power supplies and radio frequency transformers. Techniques for reducing radiation loss As written above, an alternating current is made of electric charge under periodic acceleration, which causes radiation of electromagnetic waves. Energy that is radiated is lost. Depending on the frequency, different techniques are used to minimize the loss due to radiation. Twisted pairs At frequencies up to about 1 GHz, pairs of wires are twisted together in a cable, forming a twisted pair. This reduces losses from electromagnetic radiation and inductive coupling. A twisted pair must be used with a balanced signaling system so that the two wires carry equal but opposite currents. Each wire in a twisted pair radiates a signal, but it is effectively canceled by radiation from the other wire, resulting in almost no radiation loss. Coaxial cables Coaxial cables are commonly used at audio frequencies and above for convenience. A coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer. The current flowing on the surface of the inner conductor is equal and opposite to the current flowing on the inner surface of the outer tube. The electromagnetic field is thus completely contained within the tube, and (ideally) no energy is lost to radiation or coupling outside the tube. Coaxial cables have acceptably small losses for frequencies up to about 5 GHz. For microwave frequencies greater than 5 GHz, the losses (due mainly to the dielectric separating the inner and outer tubes being a non-ideal insulator) become too large, making waveguides a more efficient medium for transmitting energy. Coaxial cables often use a perforated dielectric layer to separate the inner and outer conductors in order to minimize the power dissipated by the dielectric. 
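To connect the skin-depth figure quoted above (roughly 8.5 mm for copper at 60 Hz) with the underlying physics, here is a small sketch of the standard good-conductor skin-depth formula δ = sqrt(2ρ/(ωμ)), using an approximate handbook value for the resistivity of copper.

```python
import math

def skin_depth_m(frequency_hz, resistivity_ohm_m, mu_r=1.0):
    """Skin depth sqrt(2 * rho / (omega * mu)) for a good conductor."""
    mu0 = 4 * math.pi * 1e-7                  # vacuum permeability, H/m
    omega = 2 * math.pi * frequency_hz
    return math.sqrt(2 * resistivity_ohm_m / (omega * mu_r * mu0))

RHO_COPPER = 1.7e-8  # ohm-metres at room temperature (approximate)

for f in (50, 60, 400, 1e6):
    d = skin_depth_m(f, RHO_COPPER)
    print(f"{f:>9.0f} Hz: skin depth ≈ {d * 1e3:7.3f} mm")

# At 60 Hz this gives a value near 8.5 mm, consistent with the figure quoted
# above; at 1 MHz the skin depth has already shrunk to well under 0.1 mm.
```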
Waveguides Waveguides are similar to coaxial cables, as both consist of tubes, with the biggest difference being that waveguides have no inner conductor. Waveguides can have any arbitrary cross section, but rectangular cross sections are the most common. Because waveguides do not have an inner conductor to carry a return current, waveguides cannot deliver energy by means of an electric current, but rather by means of a guided electromagnetic field. Although surface currents do flow on the inner walls of the waveguides, those surface currents do not carry power. Power is carried by the guided electromagnetic fields. The surface currents are set up by the guided electromagnetic fields and have the effect of keeping the fields inside the waveguide and preventing leakage of the fields to the space outside the waveguide. Waveguides have dimensions comparable to the wavelength of the alternating current to be transmitted, so they are feasible only at microwave frequencies. In addition to this mechanical feasibility, electrical resistance of the non-ideal metals forming the walls of the waveguide causes dissipation of power (surface currents flowing on lossy conductors dissipate power). At higher frequencies, the power lost to this dissipation becomes unacceptably large. Fiber optics At frequencies greater than 200 GHz, waveguide dimensions become impractically small, and the ohmic losses in the waveguide walls become large. Instead, fiber optics, which are a form of dielectric waveguides, can be used. For such frequencies, the concepts of voltages and currents are no longer used. Formulation Alternating currents are accompanied (or caused) by alternating voltages. An AC voltage v can be described mathematically as a function of time by the equation v(t) = V_peak sin(ωt), where V_peak is the peak voltage (unit: volt), ω is the angular frequency (unit: radians per second) and t is the time (unit: second). The angular frequency is related to the physical frequency f (unit: hertz), which represents the number of cycles per second, by the equation ω = 2πf. The peak-to-peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Since the maximum value of sin(ωt) is +1 and the minimum value is −1, an AC voltage swings between +V_peak and −V_peak. The peak-to-peak voltage, usually written as V_pp, is therefore 2V_peak. Root mean square voltage Below an AC waveform (with no DC component) is assumed. The RMS voltage is the square root of the mean over one cycle of the square of the instantaneous voltage; for a sinusoidal voltage this works out to V_rms = V_peak / √2. Power The relationship between the instantaneous voltage and the power delivered to a resistive load is p(t) = v(t)² / R, where R represents the load resistance. Rather than using instantaneous power, it is more practical to use a time-averaged power (where the averaging is performed over any integer number of cycles). Therefore, AC voltage is often expressed as a root mean square (RMS) value, written as V_rms, because the time-averaged power is P = V_rms² / R. Power oscillation Because the instantaneous power is proportional to sin²(ωt), it is never negative and pulsates at twice the voltage's frequency, swinging between zero and twice the time-averaged power. Examples of alternating current To illustrate these concepts, consider a 230 V AC mains supply used in many countries around the world. It is so called because its root mean square value is 230 V. This means that the time-averaged power delivered is equivalent to the power delivered by a DC voltage of 230 V. To determine the peak voltage (amplitude), we can rearrange the above equation to V_peak = √2 × V_rms. For 230 V AC, the peak voltage is therefore 230 V × √2, which is about 325 V, while the peak instantaneous power delivered to a resistive load is twice the time-averaged power. 
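To make these relationships concrete, the short Python sketch below (an illustration added here, not part of the source) computes the RMS value of one sampled cycle of a sine wave and converts the nominal 230 V RMS mains figure to its peak and peak-to-peak values.

```python
import math

def rms(samples):
    """Root mean square: square root of the mean of the squared instantaneous values."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

V_RMS = 230.0                    # nominal mains voltage (RMS)
V_PEAK = V_RMS * math.sqrt(2)    # about 325 V
V_PP = 2 * V_PEAK                # about 650 V peak-to-peak

# Sample one full cycle of v(t) = V_peak * sin(wt) at 1000 evenly spaced points.
samples = [V_PEAK * math.sin(2 * math.pi * n / 1000) for n in range(1000)]

print(f"peak = {V_PEAK:.1f} V, peak-to-peak = {V_PP:.1f} V, RMS of samples = {rms(samples):.1f} V")
```

The printed RMS value of the sampled waveform comes back as 230 V, confirming the V_rms = V_peak / √2 relation used in the worked example above.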
During the course of one voltage cycle, the voltage rises from zero to +325 V, falls through zero to −325 V, and returns to zero. The instantaneous power, by contrast, completes two cycles in the same time: it rises from zero to its peak and falls back to zero during each half-cycle of the voltage, whether the voltage is positive or negative. Information transmission Alternating current is used to transmit information, as in the cases of telephone and cable television. Information signals are carried over a wide range of AC frequencies. POTS telephone signals have a frequency of about 3 kHz, close to the baseband audio frequency. Cable television and other cable-transmitted information currents may alternate at frequencies of tens to thousands of megahertz. These frequencies are similar to the electromagnetic wave frequencies often used to transmit the same types of information over the air. History The first alternator to produce alternating current was an electric generator based on Michael Faraday's principles constructed by the French instrument maker Hippolyte Pixii in 1832. Pixii later added a commutator to his device to produce the (then) more commonly used direct current. The earliest recorded practical application of alternating current is by Guillaume Duchenne, inventor and developer of electrotherapy. In 1855, he announced that AC was superior to direct current for electrotherapeutic triggering of muscle contractions. Alternating current technology was developed further by the Hungarian Ganz Works company (1870s), and in the 1880s by Sebastian Ziani de Ferranti, Lucien Gaulard, and Galileo Ferraris. In 1876, Russian engineer Pavel Yablochkov invented a lighting system where sets of induction coils were installed along a high-voltage AC line. Instead of changing voltage, the primary windings transferred power to the secondary windings which were connected to one or several electric candles (arc lamps) of his own design, used to keep the failure of one lamp from disabling the entire circuit. In 1878, the Ganz factory, Budapest, Hungary, began manufacturing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment. Transformers The development of the alternating current transformer to change voltage from low to high level and back allowed generation and consumption at low voltages and transmission, over great distances, at high voltage, with savings in the cost of conductors and energy losses. A bipolar open-core power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. They exhibited an AC system in which arc and incandescent lights were installed along five railway stations of the Metropolitan Railway in London, and demonstrated a single-phase multiple-user AC distribution system in Turin in 1884. These early induction coils with open magnetic circuits were inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. 
Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil. The direct current systems did not have these drawbacks, giving them significant advantages over early AC systems. In the UK, Sebastian de Ferranti, who had been developing AC generators and transformers in London since 1882, redesigned the AC system at the Grosvenor Gallery power station in 1886 for the London Electric Supply Corporation (LESCo), including alternators of his own design and open core transformer designs with serial connections for utilization loads, similar to Gaulard and Gibbs. In 1890, he designed their power station at Deptford and converted the Grosvenor Gallery station across the Thames into an electrical substation, showing the way to integrate older plants into a universal AC supply system. In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three engineers associated with the Ganz Works of Budapest, determined that open-core devices were impractical, as they were incapable of reliably regulating voltage. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments; in their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around a ring core of iron wires or else surrounded by a core of iron wires. In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see toroidal cores). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs. The Ganz factory in 1884 shipped the world's first five high-efficiency AC transformers. The first of these units had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form. The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 140 to 2000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. The other essential milestone was the introduction of 'voltage source, voltage intensive' (VSVI) systems by the invention of constant voltage generators in 1885. In early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores. Ottó Bláthy also invented the first AC electricity meter. Adoption The AC power system was developed and adopted rapidly after 1886. In March of that year, Westinghouse engineer William Stanley, designing a system based on the Gaulard and Gibbs transformer, demonstrated a lighting system in Great Barrington: a Siemens generator's voltage of 500 volts was converted into 3000 volts, and then the voltage was stepped down to 500 volts by six Westinghouse transformers. 
With this setup, the Westinghouse company successfully powered thirty 100-volt incandescent bulbs in twenty shops along the main street of Great Barrington. By the fall of that year Ganz engineers installed a ZBD transformer power system with AC generators in Rome. Based on Stanley's success, the new Westinghouse Electric went on to develop alternating current (AC) electric infrastructure throughout the United States. The spread of Westinghouse and other AC systems triggered a push back in late 1887 by Thomas Edison (a proponent of direct current), who attempted to discredit alternating current as too dangerous in a public campaign called the "war of the currents". In 1888, alternating current systems gained further viability with the introduction of a functional AC motor, something these systems had lacked up till then. The design, an induction motor, was independently invented by Galileo Ferraris and Nikola Tesla (with Tesla's design being licensed by Westinghouse in the US). This design was independently further developed into the modern practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown in Germany on one side, and Jonas Wenström in Sweden on the other, though Brown favored the two-phase system. The Ames Hydroelectric Generating Plant, constructed in 1890, was among the first hydroelectric alternating current power plants. A long-distance transmission of single-phase electricity from a hydroelectric generating plant in Oregon at Willamette Falls sent power fourteen miles downriver to downtown Portland for street lighting in 1890. In 1891, another transmission system was installed in Telluride Colorado. The first three-phase system was established in 1891 in Frankfurt, Germany. The Tivoli–Rome transmission was completed in 1892. The San Antonio Canyon Generator was the third commercial single-phase hydroelectric AC power plant in the United States to provide long-distance electricity. It was completed on December 31, 1892, by Almarian William Decker to provide power to the city of Pomona, California, which was 14 miles away. Meanwhile, the possibility of transferring electrical power from a waterfall at a distance was explored at the Grängesberg mine in Sweden. A fall at Hällsjön, Smedjebackens kommun, where a small iron work had been located, was selected. In 1893, a three-phase system was used to transfer 400 horsepower a distance of , becoming the first commercial application. In 1893, Westinghouse built an alternating current system for the Chicago World Exposition. In 1893, Decker designed the first American commercial three-phase power plant using alternating current—the hydroelectric Mill Creek No. 1 Hydroelectric Plant near Redlands, California. Decker's design incorporated 10 kV three-phase transmission and established the standards for the complete system of generation, transmission and motors used in USA today. The original Niagara Falls Adams Power Plant with three two-phase generators was put into operation in August 1895, but was connected to the remote transmission system only in 1896. The Jaruga Hydroelectric Power Plant in Croatia was set in operation two days later, on 28 August 1895. Its generator (42 Hz, 240 kW) was made and installed by the Hungarian company Ganz, while the transmission line from the power plant to the City of Šibenik was long, and the municipal distribution grid 3000 V/110 V included six transforming stations. 
Alternating current circuit theory developed rapidly in the latter part of the 19th and early 20th century. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, Oliver Heaviside, and many others. Calculations in unbalanced three-phase systems were simplified by the symmetrical components methods discussed by Charles LeGeyt Fortescue in 1918.
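As a small illustration of the symmetrical components method mentioned above, the following Python sketch (not taken from the source; the three phase voltages are arbitrary assumed values) decomposes an unbalanced set of three phasors into zero-, positive- and negative-sequence components using Fortescue's transformation.

```python
import cmath

# Fortescue's symmetrical components: decompose an unbalanced set of three
# phasors (Va, Vb, Vc) into zero-, positive- and negative-sequence components.
a = cmath.exp(2j * cmath.pi / 3)          # 120-degree rotation operator

def symmetrical_components(va, vb, vc):
    v0 = (va + vb + vc) / 3               # zero sequence
    v1 = (va + a * vb + a * a * vc) / 3   # positive sequence
    v2 = (va + a * a * vb + a * vc) / 3   # negative sequence
    return v0, v1, v2

# Example: a slightly unbalanced 230 V three-phase set (assumed values).
va = cmath.rect(230, 0)
vb = cmath.rect(225, -2 * cmath.pi / 3)
vc = cmath.rect(240, 2 * cmath.pi / 3)

for name, v in zip(("V0", "V1", "V2"), symmetrical_components(va, vb, vc)):
    print(f"{name}: {abs(v):6.1f} V at {cmath.phase(v) * 180 / cmath.pi:6.1f} degrees")
```

For a perfectly balanced set the zero- and negative-sequence magnitudes come out as zero, so their size is a convenient measure of how unbalanced the system is.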
Sugar glider
The sugar glider (Petaurus breviceps) is a small, omnivorous, arboreal, and nocturnal gliding possum. The common name refers to its predilection for sugary foods such as sap and nectar and its ability to glide through the air, much like a flying squirrel. They have very similar habits and appearance to the flying squirrel, despite not being closely related—an example of convergent evolution. The scientific name, Petaurus breviceps, translates from Latin as "short-headed rope-dancer", a reference to their canopy acrobatics. The sugar glider is characterised by its pair of gliding membranes, known as patagia, which extend from its forelegs to its hindlegs. Gliding serves as an efficient means of reaching food and evading predators. The animal is covered in soft, pale grey to light brown fur which is countershaded, being lighter in colour on its underside. The sugar glider, as strictly defined in a recent analysis, is only native to a small portion of southeastern Australia, corresponding to southern Queensland and most of New South Wales east of the Great Dividing Range; the extended species group, including populations which may or may not belong to P. breviceps, occupies a larger range covering much of coastal eastern and northern Australia, New Guinea, and nearby islands. Members of Petaurus are popular exotic pets; these pet animals are also frequently referred to as "sugar gliders", but recent research indicates, at least for American pets, that they are not P. breviceps but a closely related species, ultimately originating from a single source near Sorong in West Papua. This would possibly make them members of the Krefft's glider (P. notatus), but the taxonomy of Papuan Petaurus populations is still poorly resolved. Taxonomy and evolution The genus Petaurus is believed to have originated in New Guinea during the mid Miocene epoch, approximately 18 to 24 million years ago. The modern Australian Petaurus, along with New Guinean members of what were formerly considered P. breviceps, diverged from their closest living New Guinean relatives ~9-12 mya. They probably dispersed from New Guinea to Australia between 4.8 and ~8.4 mya, with the oldest Petaurus fossils in Australia being dated to 4.46 million years. This may have been possible due to sea level lowering from about 7 to 10 mya, resulting in land bridges between New Guinea and Australia. The taxonomy of the species is complex, and is still not fully resolved. It was formerly understood to have a wide range across Australia and New Guinea, being the only glider to have this distribution, and to be divided into seven subspecies, with three occurring in Australia and four in New Guinea. This traditional subspecific division was based on small morphological differences, such as colour and body size. However, a 2010 genetic analysis using mitochondrial DNA indicates that these morphologically-defined subspecies may not represent genetically unique populations. Further studies have found significant genetic variation within populations traditionally classified in P. breviceps, sufficient to warrant splitting the species into multiple. The subspecies P. b. biacensis, from Biak Island off of New Guinea, was reclassified as a separate species, the Biak glider (Petaurus biacensis). In 2020, a landmark study suggested that P. 
breviceps actually comprised three cryptic species: the Krefft's glider (Petaurus notatus), found throughout most of eastern Australia and introduced to Tasmania, the savanna glider (Petaurus ariel), native to northern Australia, and a more narrowly defined P. breviceps, restricted to a small section of coastal forest in southern Queensland and most of New South Wales. In addition, other sugar glider populations throughout this range (such as those on New Guinea and the Cape York Peninsula) may represent undescribed species or be conspecific with previously described species. This indicates that contrary to previous findings of a large range (which in fact applied to P. notatus and, to a lesser extent, to P. ariel), P. breviceps is a range-restricted species that is sensitive to ecological disasters, such as the 2019-20 Australian bushfires, which significantly affected large portions of its habitat. P. breviceps and P. notatus are estimated to have diverged ~1 million years ago, a divergence that may have originated from long-term geographic isolation. The early-mid Pleistocene saw an uplifting of the Great Dividing Range, contributing to and coinciding with aridification of the interior of Australia, including on the western side of the range. This, as well as other climatic and geographic factors, may have isolated the ancestors of P. breviceps to refugia on the eastern, coastal side of the Great Dividing Range. This would be an example of allopatric speciation. Distribution and habitat Sugar gliders are distributed in the coastal forests of southeastern Queensland and most of New South Wales. Their distribution extends to altitudes of 2,000 m in the eastern ranges. In parts of its range, the sugar glider may overlap with Krefft's glider (P. notatus). The sugar glider occurs in sympatry with the squirrel glider and yellow-bellied glider, and their coexistence is permitted through niche partitioning where each species has different patterns of resource use. Like all arboreal, nocturnal marsupials, sugar gliders are active at night, and they shelter during the day in tree hollows lined with leafy twigs. The average home range of sugar gliders is , and is largely related to the abundance of food sources; density ranges from two to six individuals per hectare (0.8–2.4 per acre). Native owls (Ninox sp.) are their primary predators; others in their range include kookaburras, goannas, snakes, and quolls. Feral cats (Felis catus) also represent a significant threat. Appearance and anatomy The sugar glider has a squirrel-like body with a long, partially (weakly) prehensile tail. The length from the nose to the tip of the tail is about , and males and females weigh respectively. Heart rate range is 200–300 beats per minute, and respiratory rate is 16–40 breaths per minute. The sugar glider is a sexually dimorphic species, with males typically larger than females. Sexual dimorphism has likely evolved due to increased mate competition arising through social group structure, and is more pronounced in regions of higher latitude, where mate competition is greater due to increased food availability. The fur coat on the sugar glider is thick, soft, and is usually blue-grey, although some have been known to be yellow, tan or (rarely) albino. A black stripe is seen from its nose to midway on its back. Its belly, throat, and chest are cream in colour. 
Males have four scent glands: one on the forehead, one on the chest, and two paracloacal glands (associated with, but not part of, the cloaca, which is the common opening for the intestinal, urinary and genital tracts), which are used for marking group members and territory. Scent glands on the head and chest of males appear as bald spots. Females also have a paracloacal scent gland and a scent gland in the pouch, but do not have scent glands on the chest or forehead. The sugar glider is nocturnal; its large eyes help it to see at night and its ears swivel to help locate prey in the dark. The eyes are set far apart, allowing more precise triangulation from launching to landing locations while gliding. Each foot on the sugar glider has five digits, with an opposable toe on each hind foot. These opposable toes are clawless, and bend such that they can touch all the other digits, like a human thumb, allowing it to firmly grasp branches. The second and third digits of the hind foot are partially syndactylous (fused together), forming a grooming comb. The fourth digit of the forefoot is sharp and elongated, aiding in extraction of insects under the bark of trees. The gliding membrane extends from the outside of the fifth digit of each forefoot to the first digit of each hind foot. When the legs are stretched out, this membrane allows the sugar glider to glide a considerable distance. The membrane is supported by well-developed tibiocarpalis, humerodorsalis and tibioabdominalis muscles, and its movement is controlled by these supporting muscles in conjunction with trunk, limb and tail movement. Lifespan in the wild is up to 9 years; it is typically up to 12 years in captivity, and the maximum reported lifespan is 17.8 years. Biology and behaviour Gliding The sugar glider is one of a number of volplane (gliding) possums in Australia. It glides with the fore- and hind-limbs extended at right angles to the body, with feet flexed upwards. The animal launches itself from a tree, spreading its limbs to expose the gliding membranes. This creates an aerofoil enabling it to glide or more. For every travelled horizontally when gliding, it falls . Steering is controlled by moving limbs and adjusting the tension of the gliding membrane; for example, to turn left, the left forearm is lowered below the right. This form of arboreal locomotion is typically used to travel from tree to tree; the species rarely descends to the ground. Gliding provides three-dimensional avoidance of arboreal predators and minimal contact with ground-dwelling predators, as well as possible benefits in decreasing the time and energy spent foraging for nutrient-poor foods that are irregularly distributed. Young carried in the pouch of females are protected from landing forces by the septum that separates them within the pouch. Torpor Sugar gliders can tolerate ambient air temperatures of up to through behavioural strategies such as licking their coat and exposing the wet area, as well as drinking small quantities of water. In cold weather, sugar gliders will huddle together to avoid heat loss, and will enter torpor to conserve energy. Huddling as an energy conserving mechanism is not as efficient as torpor. Before resorting to torpor, a sugar glider will first reduce activity and let its body temperature fall within the normal range in order to lower energy expenditure and avoid entering torpor. With energetic constraints, the sugar glider will enter into daily torpor for 2–23 hours while in rest phase. Torpor differs from hibernation in that torpor is usually a short-term daily cycle. 
Entering torpor saves energy for the animal by allowing its body temperature to fall to a minimum of to . When food is scarce, as in winter, heat production is lowered in order to reduce energy expenditure. With low energy and heat production, it is important for the sugar glider to reach its peak body mass, in the form of stored fat, in the autumn (May/June) in order to survive the following cold season. In the wild, sugar gliders enter into daily torpor more often than sugar gliders in captivity. The use of torpor is most frequent during winter, likely in response to low ambient temperature, rainfall, and seasonal fluctuation in food sources. Diet and nutrition Sugar gliders are seasonally adaptive omnivores with a wide variety of foods in their diet, and mainly forage in the lower layers of the forest canopy. Sugar gliders may obtain up to half their daily water intake through drinking rainwater, with the remainder obtained through water held in their food. In summer they are primarily insectivorous, and in the winter when insects (and other arthropods) are scarce, they are mostly exudativorous (feeding on acacia gum, eucalyptus sap, manna, honeydew or lerp). Sugar gliders have an enlarged caecum to assist in digestion of complex carbohydrates obtained from gum and sap. To obtain sap or gum from plants, sugar gliders will strip the bark off trees or open bore holes with their teeth to access stored liquid. Little time is spent foraging for insects, as it is an energetically expensive process, and sugar gliders will wait until insects fly into their habitat, or stop to feed on flowers. Gliders consume approximately 11 g of dry food matter per day. This equates to roughly 8% and 9.5% of body weight for males and females, respectively. They are opportunistic feeders and can be carnivorous, preying mostly on lizards and small birds. They eat many other foods when available, such as nectar, acacia seeds, bird eggs, pollen, fungi and native fruits. Pollen can make up a large portion of their diet; therefore, sugar gliders are likely to be important pollinators of Banksia species. Reproduction Like most marsupials, female sugar gliders have two ovaries and two uteri; they are polyestrous, meaning they can go into heat several times a year. The female has a marsupium (pouch) in the middle of her abdomen to carry offspring. The pouch opens anteriorly, and two lateral pockets extend posteriorly when young are present. Four nipples are usually present in the pouch, although reports of individuals with two nipples have been recorded. Male sugar gliders have two pairs of bulbourethral glands and a bifurcated penis to correspond with the two uteri of females. The age of sexual maturity in sugar gliders varies slightly between males and females. Males reach maturity at 4 to 12 months of age, while females require from 8 to 12 months. In the wild, sugar gliders breed once or twice a year depending on the climate and habitat conditions, while they can breed multiple times a year in captivity as a result of consistent living conditions and proper diet. A sugar glider female gives birth to one (19%) or two (81%) babies (joeys) per litter. The gestation period is 15 to 17 days, after which the tiny joey will crawl into its mother's pouch for further development. They are born largely undeveloped and furless, with only the sense of smell being developed. The mother has a scent gland in the external marsupium to attract the sightless joeys from the uterus. 
Joeys have a continuous arch of cartilage in their shoulder girdle which disappears soon after birth; this supports the forelimbs, assisting the climb into the pouch. Young are completely contained in the pouch for 60 days after birth, wherein mammae provide nourishment during the remainder of development. Eyes first open around 80 days after birth, and young will leave the nest around 110 days after birth. By the time young are weaned, the thermoregulatory system is developed, and in conjunction with a large body size and thicker fur, they are able to regulate their own body temperature. Breeding is seasonal in southeast Australia, with young only born in winter and spring (June to November). Unlike animals that move along the ground, the sugar glider and other gliding species produce fewer, but heavier, offspring per litter. This allows female sugar gliders to retain the ability to glide when pregnant. Socialisation Sugar gliders are highly social animals. They live in family groups or colonies consisting of up to seven adults, plus the current season's young. Up to four age classes may exist within each group, although some sugar gliders are solitary, not belonging to a group. They engage in social grooming, which in addition to improving hygiene and health, helps bond the colony and establish group identity. Within social communities, there are two codominant males who suppress subordinate males, but show no aggression towards each other. These co-dominant pairs are more related to each other than to subordinates within the group; and share food, nests, mates, and responsibility for scent marking of community members and territories. Territory and members of the group are marked with saliva and a scent produced by separate glands on the forehead and chest of male gliders. Intruders who lack the appropriate scent marking are expelled violently. Rank is established through scent marking; and fighting does not occur within groups, but does occur when communities come into contact with each other. Within the colony, no fighting typically takes place beyond threatening behaviour. Each colony defends a territory of about where eucalyptus trees provide a staple food source. Sugar gliders are one of the few species of mammals that exhibit male parental care. The oldest codominant male in a social community shows a high level of parental care, as he is the probable father of any offspring due to his social status. This paternal care evolved in sugar gliders as young are more likely to survive when parental investment is provided by both parents. In the sugar glider, biparental care allows one adult to huddle with the young and prevent hypothermia while the other parent is out foraging, as young sugar gliders aren't able to thermoregulate until they are 100 days old (3.5 months). Communication in sugar gliders is achieved through vocalisations, visual signals and complex chemical odours. Chemical odours account for a large part of communication in sugar gliders, similar to many other nocturnal animals. Odours may be used to mark territory, convey health status of an individual, and mark rank of community members. Gliders produce a number of vocalisations including barking and hissing. Human relations Conservation Under the prior taxonomy, the sugar glider was not considered endangered, and its conservation rank was "Least Concern (LC)" on the IUCN Red List. 
However, with newer taxonomic studies indicating that it has a small and restricted range, it is now thought to be far more sensitive to potential threats. For example, the species' native range was hit hard by the 2019-20 Australian bushfires, which occurred just a few months prior to the publishing of the study indicating the true extent of its range. Sugar gliders use tree hollows, making them especially sensitive to intense fires. However, despite the loss of natural habitat in Australia over the last 200 years, it is adaptable and capable of living in small patches of remnant bush, particularly if it does not have to cross large expanses of cleared land to reach them. Sugar gliders may persist in areas that have undergone mild-moderate selective logging, as long as three to five hollow bearing trees are retained per hectare. Although not currently threatened by habitat loss, the ability of sugar gliders to forage and avoid predators successfully may be decreased in areas of high light pollution. Conservation in Australia is enacted at the federal, state and local levels, where sugar gliders are protected as a native species. The central conservation law in Australia is the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act). The National Parks and Wildlife Act 1974 is an example of conservation law in the state of South Australia, where it is legal to keep (only) one sugar glider without a permit, provided it was acquired legally from a source with a permit. A permit is required to obtain or possess more than one glider, or if one wants to sell or give away any glider in their possession. It is illegal to capture or sell wild sugar gliders without a permit. In captivity In captivity, the sugar glider can suffer from calcium deficiencies if not fed an adequate diet. A lack of calcium in the diet causes the body to leach calcium from the bones, with the hind legs first to show noticeable dysfunction. Calcium to phosphorus ratios should be 2:1 to prevent hypocalcemia, sometimes known as hind leg paralysis (HLP). Their diet should be 50% insects (gut-loaded) or other sources of protein, 25% fruit and 25% vegetables. Some of the more recognised diets are Bourbon's Modified Leadbeaters (BML), High Protein Wombaroo (HPW) and various calcium rich diets with Leadbeaters Mixture (LBM). Iron storage disease (hemochromatosis) is another dietary problem that has been reported in captive gliders and can lead to fatal complications if not diagnosed and treated early. A large amount of attention and environmental enrichment may be required for the highly social species, especially for those kept as individuals. Inadequate social interaction can lead to depression and behavioural disorders such as loss of appetite, irritability and self-mutilation. As a pet In several countries, the sugar glider (or what was formerly considered to be the sugar glider) is popular as an exotic pet, and is sometimes referred to as a pocket pet. In Australia, there is opposition to keeping native animals as pets from Australia's largest wildlife rehabilitation organisation (WIRES), and concerns from Australian wildlife conservation organisations regarding animal welfare risks including neglect, cruelty and abandonment. In Australia, sugar gliders can be kept in Victoria, South Australia, and the Northern Territory. However, they are not allowed to be kept as pets in Western Australia, New South Wales, the Australian Capital Territory, Queensland or Tasmania. 
DNA analysis indicates that "the USA (sugar) glider population originates from West Papua, Indonesia with no illegal harvesting from other native areas such as Papua New Guinea or Australia". Given that the West Papuan gliders have been tentatively classified as Krefft's gliders (albeit to be changed in the future), this indicates that at least the captive gliders kept in the United States are Krefft's gliders, not sugar gliders.
Levee
A levee ( or ), dike (American English), dyke (British English; see spelling differences), embankment, floodbank, or stop bank is an elevated ridge, natural or artificial, alongside the banks of a river, often intended to protect against flooding of the area adjoining the river. It is usually earthen and often runs parallel to the course of a river in its floodplain or along low-lying coastlines. Naturally occurring levees form on river floodplains following flooding, where sediment and alluvium are deposited and settle, forming a ridge and increasing the river channel's capacity. Alternatively, levees can be artificially constructed from fill, designed to regulate water levels. In some circumstances, artificial levees can be environmentally damaging. Ancient civilizations in the Indus Valley, ancient Egypt, Mesopotamia and China all built levees. Today, levees can be found around the world, and failures of levees due to erosion or other causes can be major disasters, such as the catastrophic 2005 levee failures in Greater New Orleans that occurred as a result of Hurricane Katrina. Etymology Speakers of American English use the word levee, from the French word (from the feminine past participle of the French verb , 'to raise'). It originated in New Orleans a few years after the city's founding in 1718 and was later adopted by English speakers. The name derives from the trait of the levee's ridges being raised higher than both the channel and the surrounding floodplains. The modern word dike or dyke most likely derives from the Dutch word , with the construction of dikes well attested as early as the 11th century. The Westfriese Omringdijk, completed by 1250, was formed by connecting existing older dikes. The Roman chronicler Tacitus mentions that the rebellious Batavi pierced dikes to flood their land and to protect their retreat (70 CE). The word originally indicated both the trench and the bank. It closely parallels the English verb to dig. In Anglo-Saxon, the word already existed and was pronounced as dick in northern England and as ditch in the south. Similar to Dutch, the English origins of the word lie in digging a trench and forming the upcast soil into a bank alongside it. This practice has meant that the name may be given to either the excavation or to the bank. Thus Offa's Dyke is a combined structure and Car Dyke is a trench – though it once had raised banks as well. In the English Midlands and East Anglia, and in the United States, a dike is what a ditch is in the south of England, a property-boundary marker or drainage channel. Where it carries a stream, it may be called a running dike as in Rippingale Running Dike, which leads water from the catchwater drain, Car Dyke, to the South Forty Foot Drain in Lincolnshire (TF1427). The Weir Dike is a soak dike in Bourne North Fen, near Twenty and alongside the River Glen, Lincolnshire. In the Norfolk and Suffolk Broads, a dyke may be a drainage ditch or a narrow artificial channel off a river or broad for access or mooring, some longer dykes being named, e.g., Candle Dyke. In parts of Britain, particularly Scotland and Northern England, a dyke may be a field wall, generally made with dry stone. Uses The main purpose of artificial levees is to prevent flooding of the adjoining countryside and to slow natural course changes in a waterway to provide reliable shipping lanes for maritime commerce over time; they also confine the flow of the river, resulting in higher and faster water flow. 
Levees can be mainly found along the sea, where dunes are not strong enough, along rivers for protection against high floods, along lakes or along polders. Furthermore, levees have been built for the purpose of impoldering, or as a boundary for an inundation area. The latter can be a controlled inundation by the military or a measure to prevent inundation of a larger area surrounded by levees. Levees have also been built as field boundaries and as military defences. More on this type of levee can be found in the article on dry-stone walls. Levees can be permanent earthworks or emergency constructions (often of sandbags) built hastily in a flood emergency. Some of the earliest levees were constructed by the Indus Valley civilization (in Pakistan and North India from ) on which the agrarian life of the Harappan peoples depended. Levees were also constructed over 3,000 years ago in ancient Egypt, where a system of levees was built along the left bank of the River Nile for more than , stretching from modern Aswan to the Nile Delta on the shores of the Mediterranean. The Mesopotamian civilizations and ancient China also built large levee systems. Because a levee is only as strong as its weakest point, the height and standards of construction have to be consistent along its length. Some authorities have argued that this requires a strong governing authority to guide the work and may have been a catalyst for the development of systems of governance in early civilizations. However, others point to evidence of large-scale water-control earthen works such as canals and/or levees dating from before King Scorpion in Predynastic Egypt, during which governance was far less centralized. Another example of a historical levee that protected the growing city-state of Mēxihco-Tenōchtitlan and the neighboring city of Tlatelōlco, was constructed during the early 1400s, under the supervision of the tlahtoani of the altepetl Texcoco, Nezahualcoyotl. Its function was to separate the brackish waters of Lake Texcoco (ideal for the agricultural technique Chināmitls) from the fresh potable water supplied to the settlements. However, after the Europeans destroyed Tenochtitlan, the levee was also destroyed and flooding became a major problem, which resulted in the majority of The Lake being drained in the 17th century. Levees are usually built by piling earth on a cleared, level surface. Broad at the base, they taper to a level top, where temporary embankments or sandbags can be placed. Because flood discharge intensity increases in levees on both river banks, and because silt deposits raise the level of riverbeds, planning and auxiliary measures are vital. Sections are often set back from the river to form a wider channel, and flood valley basins are divided by multiple levees to prevent a single breach from flooding a large area. A levee made from stones laid in horizontal rows with a bed of thin turf between each of them is known as a spetchel. Artificial levees require substantial engineering. Their surface must be protected from erosion, so they are planted with vegetation such as Bermuda grass in order to bind the earth together. On the land side of high levees, a low terrace of earth known as a banquette is usually added as another anti-erosion measure. On the river side, erosion from strong waves or currents presents an even greater threat to the integrity of the levee. The effects of erosion are countered by planting suitable vegetation or installing stones, boulders, weighted matting, or concrete revetments. 
Separate ditches or drainage tiles are constructed to ensure that the foundation does not become waterlogged. River flood prevention Prominent levee systems have been built along the Mississippi River and Sacramento River in the United States, and the Po, Rhine, Meuse River, Rhône, Loire, Vistula, the delta formed by the Rhine, Maas/Meuse and Scheldt in the Netherlands and the Danube in Europe. During the Chinese Warring States period, the Dujiangyan irrigation system was built by the Qin as a water conservation and flood control project. The system's infrastructure is located on the Min River, which is the longest tributary of the Yangtze River, in Sichuan, China. The Mississippi levee system represents one of the largest such systems found anywhere in the world. It comprises over of levees extending some along the Mississippi, stretching from Cape Girardeau, Missouri, to the Mississippi delta. They were begun by French settlers in Louisiana in the 18th century to protect the city of New Orleans. The first Louisiana levees were about high and covered a distance of about along the riverside. The U.S. Army Corps of Engineers, in conjunction with the Mississippi River Commission, extended the levee system beginning in 1882 to cover the riverbanks from Cairo, Illinois to the mouth of the Mississippi delta in Louisiana. By the mid-1980s, they had reached their present extent and averaged in height; some Mississippi levees are as high as . The Mississippi levees also include some of the longest continuous individual levees in the world. One such levee extends southwards from Pine Bluff, Arkansas, for a distance of some . The scope and scale of the Mississippi levees has often been compared to the Great Wall of China. The United States Army Corps of Engineers (USACE) recommends and supports cellular confinement technology (geocells) as a best management practice. Particular attention is given to the matter of surface erosion, overtopping prevention and protection of levee crest and downstream slope. Reinforcement with geocells provides tensile force to the soil to better resist instability. Artificial levees can lead to an elevation of the natural riverbed over time; whether this happens or not and how fast, depends on different factors, one of them being the amount and type of the bed load of a river. Alluvial rivers with intense accumulations of sediment tend to this behavior. Examples of rivers where artificial levees led to an elevation of the riverbed, even up to a point where the riverbed is higher than the adjacent ground surface behind the levees, are found for the Yellow River in China and the Mississippi in the United States. Coastal flood prevention Levees are very common on the marshlands bordering the Bay of Fundy in New Brunswick and Nova Scotia, Canada. The Acadians who settled the area can be credited with the original construction of many of the levees in the area, created for the purpose of farming the fertile tidal marshlands. These levees are referred to as dykes. They are constructed with hinged sluice gates that open on the falling tide to drain freshwater from the agricultural marshlands and close on the rising tide to prevent seawater from entering behind the dyke. These sluice gates are called "aboiteaux". In the Lower Mainland around the city of Vancouver, British Columbia, there are levees (known locally as dikes, and also referred to as "the sea wall") to protect low-lying land in the Fraser River delta, particularly the city of Richmond on Lulu Island. 
There are also dikes to protect other locations which have flooded in the past, such as the Pitt Polder, land adjacent to the Pitt River, and other tributary rivers. Coastal flood prevention levees are also common along the inland coastline behind the Wadden Sea, an area devastated by many historic floods. Thus people and governments have erected increasingly large and complex flood protection levee systems to stop the sea even during storm floods. The biggest of these are the huge levees in the Netherlands, which have gone beyond just defending against floods, as they have aggressively taken back land that is below mean sea level. Spur dykes or groynes These typically man-made hydraulic structures are situated to protect against erosion. They are typically placed in alluvial rivers perpendicular, or at an angle, to the bank of the channel or the revetment, and are used widely along coastlines. There are two common types of spur dyke, permeable and impermeable, depending on the materials used to construct them. Natural examples Natural levees commonly form around lowland rivers and creeks without human intervention. They are elongated ridges of mud and/or silt that form on the river floodplains immediately adjacent to the cut banks. Like artificial levees, they act to reduce the likelihood of floodplain inundation. Deposition of levees is a natural consequence of the flooding of meandering rivers which carry high proportions of suspended sediment in the form of fine sands, silts, and muds. Because the carrying capacity of a river depends in part on its depth, the shallow water that spills over the flooded banks of the channel is no longer capable of keeping as much fine sediment in suspension as the deeper water of the main thalweg. The extra fine sediments thus settle out quickly on the parts of the floodplain nearest to the channel. Over a significant number of floods, this will eventually result in the building up of ridges in these positions, reducing the likelihood of further flooding and of further episodes of levee building. If aggradation continues to occur in the main channel, this will make levee overtopping more likely again, and the levees can continue to build up. In some cases, this can result in the channel bed eventually rising above the surrounding floodplains, penned in only by the levees around it; an example is the Yellow River in China near the sea, where oceangoing ships appear to sail high above the plain on the elevated river. Levees are common in any river with a high suspended sediment fraction and thus are intimately associated with meandering channels, which also are more likely to occur where a river carries large fractions of suspended sediment. For similar reasons, they are also common in tidal creeks, where tides bring in large amounts of coastal silts and muds. High spring tides will cause flooding, and result in the building up of levees. Failures and breaches Both natural and man-made levees can fail in a number of ways. Factors that cause levee failure include overtopping, erosion, structural failures, and levee saturation. The most frequent (and dangerous) of these is a levee breach. Here, a part of the levee actually breaks or is eroded away, leaving a large opening for water to flood land otherwise protected by the levee. A breach can be a sudden or gradual failure, caused either by surface erosion or by subsurface weakness in the levee. A breach can leave a fan-shaped deposit of sediment radiating away from the breach, described as a crevasse splay. 
In natural levees, once a breach has occurred, the gap in the levee will remain until it is again filled in by levee building processes. This increases the chances of future breaches occurring in the same location. Breaches can be the location of meander cutoffs if the river flow direction is permanently diverted through the gap. Sometimes levees are said to fail when water overtops the crest of the levee. This will cause flooding on the floodplains, but because it does not damage the levee, it has fewer consequences for future flooding. Among various failure mechanisms that cause levee breaches, soil erosion is found to be one of the most important factors. Predicting soil erosion and scour generation when overtopping happens is important in order to design stable levees and floodwalls. There have been numerous studies to investigate the erodibility of soils. Briaud et al. (2008) used the Erosion Function Apparatus (EFA) test to measure the erodibility of soils; afterwards, numerical simulations were performed on the levee using Chen 3D software to find the velocity vectors in the overtopping water and the scour generated when the overtopping water impinges on the levee. By analyzing the results from the EFA test, an erosion chart to categorize the erodibility of soils was developed. Hughes and Nadal in 2009 studied the effect of the combination of wave overtopping and storm surge overflow on erosion and scour generation in levees. The study included hydraulic parameters and flow characteristics such as flow thickness, wave intervals, and surge level above the levee crown in analyzing scour development. According to the laboratory tests, empirical correlations related to average overtopping discharge were derived to analyze the resistance of levees against erosion. These equations strictly fit only situations similar to the experimental tests, although they can give a reasonable estimate when applied to other conditions. Osouli et al. (2014) and Karimpour et al. (2015) conducted lab-scale physical modeling of levees to evaluate scour characterization of different levees due to floodwall overtopping. Another approach applied to prevent levee failures is electrical resistivity tomography (ERT). This non-destructive geophysical method can detect in advance critical saturation areas in embankments. ERT can thus be used in monitoring of seepage phenomena in earth structures and act as an early warning system, e.g., in critical parts of levees or embankments. Negative impacts Large scale structures designed to modify natural processes inevitably have some drawbacks or negative impacts. Ecological impact Levees interrupt floodplain ecosystems that developed under conditions of seasonal flooding. In many cases, the impact is two-fold, as reduced recurrence of flooding also facilitates land-use change from forested floodplain to farms. Increased height In a natural watershed, floodwaters spread over a landscape and slowly return to the river. Downstream, the delivery of water from the area of flooding is spread out in time. If levees keep the floodwaters inside a narrow channel, the water is delivered downstream over a shorter time period. The same volume of water over a shorter time interval means higher river stage (height). As more levees are built upstream, high-water stages in the river increase, often requiring increases in levee height. Levee breaches produce high-energy flooding During natural flooding, water spilling over banks rises slowly. 
When a levee fails, a wall of water held back by the levee suddenly pours out over the landscape, much like a dam break. Impacted areas far from a breach may experience flooding similar to a natural event, while damage near a breach can be catastrophic, including carving out deep holes and channels in the nearby landscape. Prolonged flooding after levee failure Under natural conditions, floodwaters return quickly to the river channel as water levels drop. During a levee breach, water pours out into the floodplain and moves downslope to where it is blocked from returning to the river. Flooding is prolonged over such areas, waiting for floodwater to slowly infiltrate and evaporate. Subsidence and seawater intrusion Natural flooding adds a layer of sediment to the floodplain. The added weight of such layers over many centuries makes the crust sink deeper into the mantle, much like a floating block of wood is pushed deeper into the water if another board is added on top. The momentum of downward movement does not immediately stop when new sediment layers stop being added, resulting in subsidence (sinking of land surface). In coastal areas, this results in land dipping below sea level, the ocean migrating inland, and salt-water intruding into freshwater aquifers. Coastal sediment loss Where a large river spills out into the ocean, the velocity of the water suddenly slows and its ability to transport sand and silt decreases. Sediments begin to settle out, eventually forming a delta and extending the coastline seaward. During subsequent flood events, water spilling out of the channel will find a shorter route to the ocean and begin building a new delta. Wave action and ocean currents redistribute some of the sediment to build beaches along the coast. When levees are constructed all the way to the ocean, sediments from flooding events are cut off, the river never migrates, and elevated river velocity delivers sediment to deep water where wave action and ocean currents cannot redistribute it. Instead of a natural wedge-shaped delta forming, a "bird's-foot delta" extends far out into the ocean. The results for surrounding land include beach depletion, subsidence, salt-water intrusion, and land loss.
Endianness
In computing, endianness is the order in which bytes within a word of digital data are transmitted over a data communication medium or addressed (by rising addresses) in computer memory, counting only byte significance compared to earliness. Endianness is primarily expressed as big-endian (BE) or little-endian (LE), terms introduced by Danny Cohen into computer science for data ordering in an Internet Experiment Note published in 1980. The adjective endian has its origin in the writings of 18th century Anglo-Irish writer Jonathan Swift. In the 1726 novel Gulliver's Travels, he portrays the conflict between sects of Lilliputians divided into those breaking the shell of a boiled egg from the big end or from the little end. By analogy, a CPU may read a digital word big end first, or little end first. Computers store information in various-sized groups of binary bits. Each group is assigned a number, called its address, that the computer uses to access that data. On most modern computers, the smallest data group with an address is eight bits long and is called a byte. Larger groups comprise two or more bytes, for example, a 32-bit word contains four bytes. There are two possible ways a computer could number the individual bytes in a larger group, starting at either end. Both types of endianness are in widespread use in digital electronic engineering. The initial choice of endianness of a new design is often arbitrary, but later technology revisions and updates perpetuate the existing endianness to maintain backward compatibility. A big-endian system stores the most significant byte of a word at the smallest memory address and the least significant byte at the largest. A little-endian system, in contrast, stores the least-significant byte at the smallest address. Of the two, big-endian is thus closer to the way the digits of numbers are written left-to-right in English, comparing digits to bytes. Bi-endianness is a feature supported by numerous computer architectures that feature switchable endianness in data fetches and stores or for instruction fetches. Other orderings are generically called middle-endian or mixed-endian. Big-endianness is the dominant ordering in networking protocols, such as in the Internet protocol suite, where it is referred to as network order, transmitting the most significant byte first. Conversely, little-endianness is the dominant ordering for processor architectures (x86, most ARM implementations, base RISC-V implementations) and their associated memory. File formats can use either ordering; some formats use a mixture of both or contain an indicator of which ordering is used throughout the file. Characteristics Computer memory consists of a sequence of storage cells (smallest addressable units); in machines that support byte addressing, those units are called bytes. Each byte is identified and accessed in hardware and software by its memory address. If the total number of bytes in memory is n, then addresses are enumerated from 0 to n − 1. Computer programs often use data structures or fields that may consist of more data than can be stored in one byte. In the context of this article where its type cannot be arbitrarily complicated, a "field" consists of a consecutive sequence of bytes and represents a "simple data value" which – at least potentially – can be manipulated by one single hardware instruction. On most systems, the address of a multi-byte simple data value is the address of its first byte (the byte with the lowest address). 
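As a concrete illustration of the two orderings, the Python sketch below (added here for illustration and not part of the source; the example value is arbitrary) shows how the same 32-bit integer is laid out as a byte sequence under each convention.

```python
value = 0x0A0B0C0D   # a 32-bit example value

big    = value.to_bytes(4, byteorder="big")     # most significant byte first
little = value.to_bytes(4, byteorder="little")  # least significant byte first

print("big-endian   :", big.hex(" "))    # 0a 0b 0c 0d  (lowest address on the left)
print("little-endian:", little.hex(" ")) # 0d 0c 0b 0a

# Reading the bytes back under the matching convention recovers the same number.
assert int.from_bytes(big, "big") == int.from_bytes(little, "little") == value
```

Either layout is self-consistent; problems arise only when data written under one convention are read back under the other.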
There are exceptions to this rule – for example, the Add instruction of the IBM 1401 addresses variable-length fields at their low-order (highest-addressed) position with their lengths being defined by a word mark set at their high-order (lowest-addressed) position. When an operation such as addition is performed, the processor begins at the low-order positions at the high addresses of the two fields and works its way down to the high-order. Another important attribute of a byte being part of a "field" is its "significance". These attributes of the parts of a field play an important role in the sequence in which the bytes are accessed by the computer hardware, more precisely: by the low-level algorithms contributing to the results of a computer instruction. Numbers Positional number systems (mostly base 2, or less often base 10) are the predominant way of representing and particularly of manipulating integer data by computers. In pure form this is valid for moderate sized non-negative integers, e.g. of C data type unsigned. In such a number system, the value which a digit contributes to the whole number is determined not only by its value as a single digit, but also by the position it holds in the complete number, called its significance. These positions can be mapped to memory mainly in two ways: decreasing numeric significance with increasing memory addresses (or increasing time), known as big-endian, and increasing numeric significance with increasing memory addresses (or increasing time), known as little-endian. In these expressions, the term "end" refers to the extremity where the big or little significance, respectively, is written first, namely where the field starts. The integer data that are directly supported by the computer hardware have a fixed width of a low power of 2, e.g. 8 bits ≙ 1 byte, 16 bits ≙ 2 bytes, 32 bits ≙ 4 bytes, 64 bits ≙ 8 bytes, 128 bits ≙ 16 bytes. The low-level access sequence to the bytes of such a field depends on the operation to be performed. The least-significant byte is accessed first for addition, subtraction and multiplication. The most-significant byte is accessed first for division and comparison. See the Calculation order section below. Text When character (text) strings are to be compared with one another, e.g. in order to support some mechanism like sorting, this is very frequently done lexicographically where a single positional element (character) also has a positional value. Lexicographical comparison means almost everywhere: first character ranks highest – as in the telephone book. Almost all machines which can do this using a single instruction are big-endian or at least mixed-endian. Integer numbers written as text are always represented most significant digit first in memory, which is similar to big-endian, independently of text direction. Byte addressing When memory bytes are printed sequentially from left to right (e.g. in a hex dump), little-endian representation of integers has the significance increasing from right to left. In other words, it appears backwards when visualized, which can be counter-intuitive. This behavior arises, for example, in FourCC or similar techniques that involve packing characters into an integer, so that it becomes a sequence of specific characters in memory. For example, take the string "JOHN", stored in hexadecimal ASCII as the bytes 4A 4F 48 4E. On big-endian machines, the value appears left-to-right, coinciding with the correct string order for reading the result ("J O H N"). But on a little-endian machine, one would see "N H O J". 
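The behaviour described above can be reproduced with a short, self-contained C sketch (hypothetical example code; the variable names are arbitrary). It packs the characters of "JOHN" into a 32-bit integer with 'J' as the most significant byte, so dumping the bytes in memory order prints "JOHN" on a big-endian host but "NHOJ" on a little-endian one.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* FourCC-style packing: 'J' (0x4A) becomes the most significant byte. */
    uint32_t fourcc = ((uint32_t)'J' << 24) | ((uint32_t)'O' << 16)
                    | ((uint32_t)'H' << 8)  |  (uint32_t)'N';

    char dump[5] = {0};
    memcpy(dump, &fourcc, 4);              /* bytes in memory (address) order */

    /* Prints "JOHN" on a big-endian host and "NHOJ" on a little-endian one. */
    printf("0x%08X in memory reads as \"%s\"\n", (unsigned)fourcc, dump);
    return 0;
}
```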
Middle-endian machines complicate this even further; for example, on the PDP-11, the 32-bit value 4A4F484E is stored as two 16-bit words "JO" "HN" in big-endian, with the characters in the 16-bit words being stored in little-endian, resulting in "O J N H". Byte swapping Byte-swapping consists of rearranging bytes to change endianness. Many compilers provide built-ins that are likely to be compiled into native processor instructions (bswap/movbe), such as __builtin_bswap32. Software interfaces for swapping include: Standard network endianness functions htons, htonl, ntohs and ntohl (from/to BE, up to 32-bit); Windows adds 64-bit variants (htonll and ntohll) in winsock2.h. BSD and glibc endian.h functions such as htobe64 and le32toh (from/to BE and LE, up to 64-bit). macOS OSByteOrder.h macros such as OSSwapHostToBigInt64 (from/to BE and LE, up to 64-bit). The std::byteswap function in C++23. Some CPU instruction sets provide native support for endian byte swapping, such as bswap (x86 — 486 and later, i960 — i960Jx and later) and rev (ARMv6 and later). Some compilers have built-in facilities for byte swapping. For example, the Intel Fortran compiler supports the non-standard CONVERT specifier when opening a file, e.g. CONVERT='BIG_ENDIAN'. Other compilers have options for generating code that globally enables the conversion for all file IO operations. This permits the reuse of code on a system with the opposite endianness without code modification. Considerations Simplified access to part of a field On most systems, the address of a multi-byte value is the address of its first byte (the byte with the lowest address); little-endian systems of that type have the property that, for sufficiently low data values, the same value can be read from memory at different lengths without using different addresses (even when alignment restrictions are imposed). For example, a 32-bit memory location with content 4A 00 00 00 can be read at the same address as either 8-bit (value = 4A), 16-bit (004A), 24-bit (00004A), or 32-bit (0000004A), all of which retain the same numeric value. Although this little-endian property is rarely used directly by high-level programmers, it is occasionally employed by code optimizers as well as by assembly language programmers. While not allowed by C++, such type punning code is allowed as "implementation-defined" by the C11 standard and commonly used in code interacting with hardware. Calculation order Some operations in positional number systems have a natural or preferred order in which the elementary steps are to be executed. This order may affect their performance on small-scale byte-addressable processors and microcontrollers. However, high-performance processors usually fetch multi-byte operands from memory in the same amount of time they would have fetched a single byte, so the complexity of the hardware is not affected by the byte ordering. Addition, subtraction, and multiplication start at the least significant digit position and propagate the carry to the subsequent more significant position. On most systems, the address of a multi-byte value is the address of its first byte (the byte with the lowest address). The implementation of these operations is marginally simpler using little-endian machines where this first byte contains the least significant digit. Comparison and division start at the most significant digit and propagate a possible carry to the subsequent less significant digits. For fixed-length numerical values (typically of length 1, 2, 4, 8, 16), the implementation of these operations is marginally simpler on big-endian machines. Some big-endian processors (e.g. 
the IBM System/360 and its successors) contain hardware instructions for lexicographically comparing varying length character strings. The normal data transport by an assignment statement is in principle independent of the endianness of the processor. Hardware Many historical and extant processors use a big-endian memory representation, either exclusively or as a design option. The IBM System/360 uses big-endian byte order, as do its successors System/370, ESA/390, and z/Architecture. The PDP-10 uses big-endian addressing for byte-oriented instructions. The IBM Series/1 minicomputer uses big-endian byte order. The Motorola 6800 / 6801, the 6809 and the 68000 series of processors use the big-endian format. Solely big-endian architectures include the IBM z/Architecture and OpenRISC. The PDP-11 minicomputer, however, uses little-endian byte order, as does its VAX successor. The Datapoint 2200 used simple bit-serial logic with little-endian to facilitate carry propagation. When Intel developed the 8008 microprocessor for Datapoint, they used little-endian for compatibility. However, as Intel was unable to deliver the 8008 in time, Datapoint used a medium-scale integration equivalent, but the little-endianness was retained in most Intel designs, including the MCS-48 and the 8086 and its x86 successors, including IA-32 and x86-64 processors. The MOS Technology 6502 family (including Western Design Center 65802 and 65C816), the Zilog Z80 (including Z180 and eZ80), the Altera Nios II, the Atmel AVR, the Andes Technology NDS32, the Qualcomm Hexagon, and many other processors and processor families are also little-endian. The Intel 8051, unlike other Intel processors, expects 16-bit addresses for LJMP and LCALL in big-endian format; however, xCALL instructions store the return address onto the stack in little-endian format. Bi-endianness Some instruction set architectures feature a setting which allows for switchable endianness in data fetches and stores, instruction fetches, or both; those instruction set architectures are referred to as bi-endian. Architectures that support switchable endianness include PowerPC/Power ISA, SPARC V9, ARM versions 3 and above, DEC Alpha, MIPS, Intel i860, PA-RISC, SuperH SH-4, IA-64, C-Sky, and RISC-V. This feature can improve performance or simplify the logic of networking devices and software. The word bi-endian, when said of hardware, denotes the capability of the machine to compute or pass data in either endian format. Many of these architectures can be switched via software to default to a specific endian format (usually done when the computer starts up); however, on some systems, the default endianness is selected by hardware on the motherboard and cannot be changed via software (e.g. Alpha, which runs only in big-endian mode on the Cray T3E). IBM AIX and IBM i run in big-endian mode on bi-endian Power ISA; Linux originally ran in big-endian mode, but by 2019, IBM had transitioned to little-endian mode for Linux to ease the porting of Linux software from x86 to Power. SPARC has no relevant little-endian deployment, as both Oracle Solaris and Linux run in big-endian mode on bi-endian SPARC systems, and can be considered big-endian in practice. ARM, C-Sky, and RISC-V have no relevant big-endian deployments, and can be considered little-endian in practice. The term bi-endian refers primarily to how a processor treats data accesses. 
Instruction accesses (fetches of instruction words) on a given processor may still assume a fixed endianness, even if data accesses are fully bi-endian, though this is not always the case, such as on Intel's IA-64-based Itanium CPU, which allows both. Some nominally bi-endian CPUs require motherboard help to fully switch endianness. For instance, the 32-bit desktop-oriented PowerPC processors in little-endian mode act as little-endian from the point of view of the executing programs, but they require the motherboard to perform a 64-bit swap across all 8 byte lanes to ensure that the little-endian view of things will apply to I/O devices. In the absence of this unusual motherboard hardware, device driver software must write to different addresses to undo the incomplete transformation and also must perform a normal byte swap. Some CPUs, such as many PowerPC processors intended for embedded use and almost all SPARC processors, allow per-page choice of endianness. SPARC processors since the late 1990s (SPARC v9 compliant processors) allow data endianness to be chosen with each individual instruction that loads from or stores to memory. The ARM architecture supports two big-endian modes, called BE-8 and BE-32. CPUs up to ARMv5 only support BE-32 or word-invariant mode. Here any naturally aligned 32-bit access works like in little-endian mode, but access to a byte or 16-bit word is redirected to the corresponding address and unaligned access is not allowed. ARMv6 introduces BE-8 or byte-invariant mode, where access to a single byte works as in little-endian mode, but accessing a 16-bit, 32-bit or (starting with ARMv8) 64-bit word results in a byte swap of the data. This simplifies unaligned memory access as well as memory-mapped access to registers other than 32-bit. Many processors have instructions to convert a word in a register to the opposite endianness, that is, they swap the order of the bytes in a 16-, 32- or 64-bit word. Recent Intel x86 and x86-64 architecture CPUs have a MOVBE instruction (Intel Core since generation 4, after Atom), which fetches a big-endian format word from memory or writes a word into memory in big-endian format. These processors are otherwise thoroughly little-endian. There are also devices which use different formats in different places. For instance, the BQ27421 Texas Instruments battery gauge uses the little-endian format for its registers and the big-endian format for its random-access memory. SPARC historically used big-endian until version 9, which is bi-endian. Similarly early IBM POWER processors were big-endian, but the PowerPC and Power ISA descendants are now bi-endian. The ARM architecture was little-endian before version 3 when it became bi-endian. Floating point Although many processors use little-endian storage for all types of data (integer, floating point), there are a number of hardware architectures where floating-point numbers are represented in big-endian form while integers are represented in little-endian form. There are ARM processors that have mixed-endian floating-point representation for double-precision numbers: each of the two 32-bit words is stored as little-endian, but the most significant word is stored first. VAX floating point stores little-endian 16-bit words in big-endian order. Because there have been many floating-point formats with no network standard representation for them, the XDR standard uses big-endian IEEE 754 as its representation. 
It may therefore appear strange that the widespread IEEE 754 floating-point standard does not specify endianness. Theoretically, this means that even standard IEEE floating-point data written by one machine might not be readable by another. However, on modern standard computers (i.e., implementing IEEE 754), one may safely assume that the endianness is the same for floating-point numbers as for integers, making the conversion straightforward regardless of data type. Small embedded systems using special floating-point formats may be another matter, however. Variable-length data Most instructions considered so far contain the sizes (lengths) of their operands within the operation code. Frequently available operand lengths are 1, 2, 4, 8, or 16 bytes. But there are also architectures where the length of an operand may be held in a separate field of the instruction or with the operand itself, e.g. by means of a word mark. Such an approach allows operand lengths up to 256 bytes or larger. The data types of such operands are character strings or BCD. Machines able to manipulate such data with one instruction (e.g. compare, add) include the IBM 1401, 1410, 1620, System/360, System/370, ESA/390, and z/Architecture, all of them of type big-endian. Middle-endian Numerous other orderings, generically called middle-endian or mixed-endian, are possible. The PDP-11 is in principle a 16-bit little-endian system. The instructions to convert between floating-point and integer values in the optional floating-point processor of the PDP-11/45, PDP-11/70, and in some later processors, stored 32-bit "double precision integer long" values with the 16-bit halves swapped from the expected little-endian order. The UNIX C compiler used the same format for 32-bit long integers. This ordering is known as PDP-endian. UNIX was one of the first systems to allow the same code to be compiled for platforms with different internal representations. One of the first programs converted was supposed to print out "Unix", but on the Series/1 it printed "nUxi" instead. A way to interpret this endianness is that it stores a 32-bit integer as two little-endian 16-bit words, with a big-endian word ordering. Segment descriptors of IA-32 and compatible processors keep a 32-bit base address of the segment stored in little-endian order, but in four nonconsecutive bytes, at relative positions 2, 3, 4 and 7 of the descriptor start. Software Logic design Hardware description languages (HDLs) used to express digital logic often support arbitrary endianness, with arbitrary granularity. For example, in SystemVerilog, a word can be defined as little-endian or big-endian. Files and filesystems The recognition of endianness is important when reading a file or filesystem created on a computer with different endianness. Fortran sequential unformatted files created with one endianness usually cannot be read on a system using the other endianness because Fortran usually implements a record (defined as the data written by a single Fortran statement) as data preceded and succeeded by count fields, which are integers equal to the number of bytes in the data. An attempt to read such a file using Fortran on a system of the other endianness results in a run-time error, because the count fields are incorrect. Unicode text can optionally start with a byte order mark (BOM) to signal the endianness of the file or stream. Its code point is U+FEFF. In UTF-32 for example, a big-endian file should start with the bytes 00 00 FE FF; a little-endian file should start with FF FE 00 00. 
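Detecting a byte order mark is a simple prefix comparison. The following C sketch (an illustrative helper, not part of any standard API) checks for the UTF-32 patterns given above and, for completeness, the analogous two-byte UTF-16 marks; the longer UTF-32 patterns must be tested first because the UTF-32 little-endian mark begins with the UTF-16 little-endian one.

```c
#include <stdio.h>
#include <string.h>

/* Classify the byte order mark (U+FEFF) at the start of a buffer, if any. */
static const char *bom_kind(const unsigned char *p, size_t n) {
    if (n >= 4 && memcmp(p, "\x00\x00\xFE\xFF", 4) == 0) return "UTF-32 big-endian";
    if (n >= 4 && memcmp(p, "\xFF\xFE\x00\x00", 4) == 0) return "UTF-32 little-endian";
    if (n >= 2 && memcmp(p, "\xFE\xFF", 2) == 0)         return "UTF-16 big-endian";
    if (n >= 2 && memcmp(p, "\xFF\xFE", 2) == 0)         return "UTF-16 little-endian";
    return "no BOM detected";
}

int main(void) {
    const unsigned char sample[] = {0xFF, 0xFE, 0x00, 0x00};
    printf("%s\n", bom_kind(sample, sizeof sample));   /* UTF-32 little-endian */
    return 0;
}
```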
Application binary data formats, such as MATLAB .mat files, or the .bil data format, used in topography, are usually endianness-independent. This is achieved by storing the data always in one fixed endianness or carrying with the data a switch to indicate the endianness. An example of the former is the binary XLS file format that is portable between Windows and Mac systems and always little-endian, requiring the Mac application to swap the bytes on load and save when running on a big-endian Motorola 68K or PowerPC processor. TIFF image files are an example of the second strategy, whose header instructs the application about the endianness of their internal binary integers. If a file starts with the signature "MM" it means that integers are represented as big-endian, while "II" means little-endian. Those signatures need a single 16-bit word each, and they are palindromes, so they are endianness independent. "II" stands for Intel and "MM" stands for Motorola. Intel CPUs are little-endian, while Motorola 680x0 CPUs are big-endian. This explicit signature allows a TIFF reader program to swap bytes if necessary when a given file was generated by a TIFF writer program running on a computer with a different endianness. As a consequence of its original implementation on the Intel 8080 platform, the operating system-independent File Allocation Table (FAT) file system is defined with little-endian byte ordering, even on platforms using another endianness natively, necessitating byte-swap operations for maintaining the FAT on these platforms. ZFS, which combines a filesystem and a logical volume manager, is known to provide adaptive endianness and to work with both big-endian and little-endian systems. Networking Many IETF RFCs use the term network order, meaning the order of transmission for bytes over the wire in network protocols. Among others, the historic RFC 1700 defines the network order for protocols in the Internet protocol suite to be big-endian. However, not all protocols use big-endian byte order as the network order. The Server Message Block (SMB) protocol uses little-endian byte order. In CANopen, multi-byte parameters are always sent least significant byte first (little-endian). The same is true for Ethernet Powerlink. The Berkeley sockets API defines a set of functions to convert 16- and 32-bit integers to and from network byte order: the htons (host-to-network-short) and htonl (host-to-network-long) functions convert 16- and 32-bit values respectively from machine (host) to network order; the ntohs and ntohl functions convert from network to host order. These functions may be a no-op on a big-endian system. While the high-level network protocols usually consider the byte (mostly meant as octet) as their atomic unit, the lowest layers of a network stack may deal with ordering of bits within a byte.
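A minimal sketch of the Berkeley sockets conversion functions named above, assuming a POSIX system that provides arpa/inet.h (Windows offers the same functions through winsock2.h):

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl/ntohl on POSIX systems */

int main(void) {
    uint32_t host = 0x0A0B0C0D;
    uint32_t wire = htonl(host);   /* big-endian "network order" for transmission */
    uint32_t back = ntohl(wire);   /* restore host order on the receiving side    */

    /* On a big-endian host, htonl and ntohl are effectively no-ops. */
    printf("host=0x%08X wire=0x%08X back=0x%08X\n",
           (unsigned)host, (unsigned)wire, (unsigned)back);
    return 0;
}
```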
Technology
Computer architecture concepts
null
43081
https://en.wikipedia.org/wiki/Traffic
Traffic
Traffic comprises pedestrians, vehicles, ridden or herded animals, trains, and other conveyances that use public ways (roads/sidewalks) for travel and transportation. Traffic laws govern and regulate traffic, while rules of the road include traffic laws and informal rules that may have developed over time to facilitate the orderly and timely flow of traffic. Organized traffic generally has well-established priorities, lanes, right-of-way, and traffic control at intersections. (International Regulations for Preventing Collisions at Sea govern the oceans and influence some laws for navigating domestic waters.) Traffic is formally organized in many jurisdictions, with marked lanes, junctions, intersections, interchanges, traffic signals, cones, or signs. Traffic is often classified by type: heavy motor vehicle (e.g., car, truck), other vehicle (e.g., moped, bicycle), and pedestrian. Different classes may share speed limits and easement, or may be segregated. Some jurisdictions may have very detailed and complex rules of the road while others rely more on drivers' common sense and willingness to cooperate. Organization typically produces a better combination of travel safety and efficiency. Events which disrupt the flow and may cause traffic to degenerate into a disorganized mess include road construction, collisions, and debris in the roadway. On particularly busy freeways, a minor disruption may persist in a phenomenon known as traffic waves. A complete breakdown of organization may result in traffic congestion and gridlock. Simulations of organized traffic frequently involve queuing theory, stochastic processes and equations of mathematical physics applied to traffic flow. Etymology and types The word traffic originally meant "trade" (as it still does) and comes from the Old Italian verb trafficare and noun traffico. The origin of the Italian words is unclear. Suggestions include Catalan trafegar "decant", an assumed Vulgar Latin verb transfricare 'rub across', an assumed Vulgar Latin combination of trans- and facere 'make or do', Arabic tafriq 'distribution', and Arabic taraffaqa, which can mean 'seek profit'. Broadly, the term covers many kinds of traffic including network traffic, air traffic, marine traffic and rail traffic, but it is often used narrowly to mean only road traffic. Rules of the road Rules of the road and driving etiquette are the general practices and procedures that road users are required to follow. These rules usually apply to all road users, though they are of special importance to motorists and cyclists. These rules govern interactions between vehicles and pedestrians. The basic traffic rules are defined by an international treaty under the authority of the United Nations, the 1968 Vienna Convention on Road Traffic. Not all countries are signatory to the convention and, even among signatories, local variations in practice may be found. There are also unwritten local rules of the road, which are generally understood by local drivers. As a general rule, drivers are expected to avoid a collision with another vehicle and pedestrians, regardless of whether or not the applicable rules of the road allow them to be where they happen to be. In addition to the rules applicable by default, traffic signs and traffic lights must be obeyed, and instructions may be given by a police officer, either routinely (on a busy crossing instead of traffic lights) or as road traffic control around a construction zone, accident, or other road disruption. 
Directionality Traffic heading in opposite directions should be kept separate so that opposing streams do not obstruct each other's way. The most basic rule is whether traffic uses the left or the right side of the road. Traffic regulations In many countries, the rules of the road are codified, setting out the legal requirements and punishments for breaking them. In the United Kingdom, the rules are set out in the Highway Code, which includes not only obligations but also advice on how to drive sensibly and safely. In the United States, traffic laws are regulated by the states and municipalities through their respective traffic codes. Most of these are based at least in part on the Uniform Vehicle Code, but there are variations from state to state. In states such as Florida, traffic law and criminal law are separate; therefore, unless someone flees the scene of an accident or commits vehicular homicide or manslaughter, they are only guilty of a minor traffic offense. However, states such as South Carolina have completely criminalised their traffic law, so, for example, one is guilty of a misdemeanor simply for travelling 5 mph over the speed limit. Trail ethics (right of way) Trail ethics are a set of informal rules for right of way for users of trails, including hikers, mountaineers, equestrians, cyclists, and mountain bikers. Organised traffic Passage priority (right of way) Vehicles often come into conflict with other vehicles and pedestrians because their intended courses of travel intersect, and thus interfere with each other's routes. The general principle that establishes who has the right to go first is called "right of way" or "priority". It establishes who has the right to use the conflicting part of the road and who has to wait until the other does so. Signs, signals, markings and other features are often used to make priority explicit. Some signs, such as the stop sign, are nearly universal. When there are no signs or markings, different rules are observed depending on the location. These default priority rules differ between countries, and may even vary within countries. Trends toward uniformity are exemplified at an international level by the Vienna Convention on Road Signs and Signals, which prescribes standardised traffic control devices (signs, signals, and markings) for establishing the right of way where necessary. Crosswalks (or pedestrian crossings) are common in populated areas, and may indicate that pedestrians have priority over vehicular traffic. In most modern cities, the traffic signal is used to establish the right of way on the busy roads. Its primary purpose is to give each road a duration of time in which its traffic may use the intersection in an organised way. The intervals of time assigned for each road may be adjusted to take into account factors such as difference in volume of traffic, the needs of pedestrians, or other traffic signals. Pedestrian crossings may be located near other traffic control devices; if they are not also regulated in some way, vehicles must give priority to them when in use. Traffic on a public road usually has priority over other traffic such as traffic emerging from private access; rail crossings and drawbridges are typical exceptions. Uncontrolled traffic Uncontrolled traffic occurs in the absence of lane markings and traffic control signals. On roads without marked lanes, drivers tend to keep to the appropriate side if the road is wide enough. Drivers frequently overtake others. Obstructions are common. 
Intersections have no signals or signage, and a particular road at a busy intersection may be dominant – that is, its traffic flows – until a break in traffic, at which time the dominance shifts to the other road where vehicles are queued. At the intersection of two perpendicular roads, a traffic jam may result if four vehicles face each other side-on. Turning Drivers often seek to turn onto another road or onto private property. The vehicle's blinking turn signals (commonly known as "blinkers" or "indicators") are often used as a way to announce one's intention to turn, thus alerting other drivers. The actual usage of directional signals varies greatly amongst countries, although its purpose is to indicate a driver's intention to depart from the current (and natural) flow of traffic well before the departure is executed (typically 3 seconds as a guideline). This will usually mean that turning traffic must stop and wait for a breach to turn, and this might cause inconvenience for drivers that follow them but do not want to turn. This is why dedicated lanes and protected traffic signals for turning are sometimes provided. On busier intersections where a protected lane would be ineffective or cannot be built, turning may be entirely prohibited, and drivers will be required to "drive around the block" in order to accomplish the turn. Many cities employ this tactic quite often; in San Francisco, due to its common practice, making three right turns is known colloquially as a "San Francisco left turn". Likewise, as many intersections in Taipei City are too busy to allow direct left turns, signs often direct drivers to drive around the block to turn. Turning rules are by no means universal. For example, in New Zealand (a drive-on-the-left country) between 1977 and 2012, left turning traffic had to give way to opposing right-turning traffic wishing to take the same road (unless there were multiple lanes, but then one must take care in case a vehicle jumped lanes). New Zealand abolished this particular rule on 25 March 2012, except at roundabouts or when denoted by a Give Way or Stop sign. Although the rule caused initial driver confusion, and many intersections required or still require modification, the change is predicted to eventually prevent one death and 13 serious injuries annually. On roads with multiple lanes, turning traffic is generally expected to move to the lane closest to the direction they wish to turn. For example, traffic intending to turn right will usually move to the rightmost lane before the intersection. Likewise, left-turning traffic will move to the leftmost lane. Exceptions to this rule may exist where for example the traffic authority decides that the two rightmost lanes will be for turning right, in which case drivers may take whichever of them to turn. Traffic may adapt to informal patterns that rise naturally rather than by force of authority. For example, it is common for drivers to observe (and trust) the turn signals used by other drivers in order to make turns from other lanes. If several vehicles on the right lane are all turning right, a vehicle may come from the next-to-right lane and turn right as well, in parallel with the other right-turning vehicles. Intersections In most of Continental Europe, the default rule is to give priority to the right, but this may be overridden by signs or road markings. 
There, priority was initially given according to the social rank of each traveler, but early in the life of the automobile this rule was deemed impractical and replaced with the priorité à droite (priority to the right) rule, which still applies. At a traffic circle where priorité à droite is not overridden, traffic on what would otherwise be a roundabout gives way to traffic entering the circle. Most French roundabouts now have give-way signs for traffic entering the circle, but there remain some notable exceptions that operate on the old rule, such as the Place de l'Étoile around the Arc de Triomphe. Priority to the right where used in continental Europe may be overridden by an ascending hierarchy of markings, signs, signals, and authorized persons. In the United Kingdom, priority is generally indicated by signs or markings, so that almost all junctions between public roads (except those governed by traffic signals) have a concept of a major road and minor road. The default give-way-to-the-right rule used in Continental Europe causes problems for many British and Irish drivers who are accustomed to having right of way by default unless otherwise indicated. A very small proportion of low-traffic junctions are unmarked – typically on housing estates or in rural areas. Here the rule is to "proceed with great care" i.e. slow the vehicle and check for traffic on the intersecting road. Other countries use various methods similar to the above examples to establish the right of way at intersections. For example, in most of the United States, the default priority is to yield to traffic from the right, but this is usually overridden by traffic control devices or other rules, like the boulevard rule. This rule holds that traffic entering a major road from a smaller road or alley must yield to the traffic of the busier road, but signs are often still posted. The boulevard rule can be compared with the above concept of a major and minor road, or the priority roads that may be found in countries that are parties to the Vienna Convention on Road Signs and Signals. Perpendicular intersections Also known as a "four-way" intersection, this intersection is the most common configuration for roads that cross each other, and the most basic type. If traffic signals do not control a four-way intersection, signs or other features are typically used to control movements and make clear priorities. The most common arrangement is to indicate that one road has priority over the other, but there are complex cases where all traffic approaching an intersection must yield and may be required to stop. In the United States, South Africa, and Canada, there are four-way intersections with a stop sign at every entrance, called four-way stops. A failed signal or a flashing red light is equivalent to a four-way stop, or an all-way stop. Special rules for four-way stops may include: In the countries that use four-way stops, pedestrians always have priority at crosswalks – even at unmarked ones, which exist as the logical continuations of the sidewalks at every intersection with approximately right angles – unless signed or painted otherwise. Whichever vehicle first stops at the stop line – or before the crosswalk, if there is no stop line – has priority. If two vehicles stop at the same time, priority is given to the vehicle on the right. If several vehicles arrive at the same time, a right-of-way conflict may arise wherein no driver has the legal right-of-way. 
This may result in drivers informally signaling to other drivers to indicate their intent to yield, for example by waving or flashing headlights. In Europe and other places, there are similar intersections. These may be marked by special signs (according to the Vienna Convention on Road Signs and Signals), a danger sign with a black X representing a crossroads. This sign informs drivers that the intersection is uncontrolled and that default rules apply. In Europe and in many areas of North America the default rules that apply at uncontrolled four-way intersections are almost identical: Rules for pedestrians differ by country, in the United States and Canada pedestrians generally have priority at such an intersection. All vehicles must give priority to any traffic approaching from their right, Then, if the vehicle is turning right or continuing on the same road it may proceed. Vehicles turning left must also give priority to traffic approaching from the opposite direction, unless that traffic is also turning left. If the intersection is congested, vehicles must alternate directions and/or circulate priority to the right one vehicle at a time. Protected intersection for bicycles A number of features make this protected intersection. A corner refuge island, a setback crossing of the pedestrians and cyclists, generally between 1.5–7 metres of setback, a forward stop bar, which allows cyclists to stop for a traffic light well ahead of motor traffic who must stop behind the crosswalk. Separate signal staging or at least an advance green for cyclists and pedestrians is used to give cyclists and pedestrians no conflicts or a head start over traffic. The design makes a right turn on red, and sometimes left on red depending on the geometry of the intersection in question, possible in many cases, often without stopping. This type of intersection is common in the bicycle-friendly Netherlands. Pedestrian crossings Pedestrians must often cross from one side of a road to the other, and in doing so may come into the way of vehicles traveling on the road. In many places pedestrians are entirely left to look after themselves, that is, they must observe the road and cross when they can see that no traffic will threaten them. Busier cities usually provide pedestrian crossings, which are strips of the road where pedestrians are expected to cross. The actual appearance of pedestrian crossings varies greatly, but the two most common appearances are: (1) a series of lateral white stripes or (2) two longitudinal white lines. The former is usually preferred, as it stands out more conspicuously against the dark pavement. Some pedestrian crossings accompany a traffic signal to make vehicles stop at regular intervals so pedestrians can cross. Some countries have "intelligent" pedestrian signals, where the pedestrian must push a button in order to assert their intention to cross. In some countries, approaching traffic is monitored by radar or by electromagnetic sensors buried in the road surface, and the pedestrian crossing lights are set to red if a speed infringement is detected. This has the effect of enforcing the local speed limit. See Speed Limits below. Pedestrian crossings without traffic signals are also common. In this case, the traffic laws usually states that the pedestrian has the right of way when crossing, and that vehicles must stop when a pedestrian uses the crossing. Countries and driving cultures vary greatly as to the extent to which this is respected. 
In the state of Nevada the car has the right of way when the crosswalk signal specifically forbids pedestrian crossing. Traffic culture is a determinant factor for the behaviors of all road users’ traffic. Specifically, it has a main role in crashes. Some jurisdictions forbid crossing or using the road anywhere other than at crossings, termed jaywalking. In other areas, pedestrians may have the right to cross where they choose, and have right of way over vehicular traffic while crossing. In most areas, an intersection is considered to have a crosswalk, even if not painted, as long as the roads meet at approximate right angles. The United Kingdom and Croatia are among the exceptions. Pedestrian crossings may also be located away from intersections. Level crossings A level crossing is an at-grade intersection of a railway by a road. Because of safety issues, they are often equipped with closable gates, crossing bells and warning signs. Speed limits The higher the speed of a vehicle, the more difficult collision avoidance becomes and the greater the damage if a collision does occur. Therefore, many countries of the world limit the maximum speed allowed on their roads. Vehicles are not supposed to be driven at speeds which are higher than the posted maximum. To enforce speed limits, two approaches are generally employed. In the United States, it is common for the police to patrol the streets and use special equipment (typically a radar unit) to measure the speed of vehicles, and pull over any vehicle found to be in violation of the speed limit. In Brazil, Colombia and some European countries, there are computerized speed-measuring devices spread throughout the city, which will automatically detect speeding drivers and take a photograph of the license plate (or number plate), which is later used for applying and mailing the ticket. Many jurisdictions in the U.S. use this technology as well. A mechanism that was developed in Germany is the Grüne Welle, or green wave, which is an indicator that shows the optimal speed to travel for the synchronized green lights along that corridor. Driving faster or slower than the speed set by the behavior of the lights causes the driver to encounter many red lights. This discourages drivers from speeding or impeding the flow of traffic. See related traffic wave and Pedestrian Crossings, above. Overtaking Overtaking (or passing) refers to a maneuver by which one or more vehicles traveling in the same direction are passed by another vehicle. On two-lane roads, when there is a split line or a dashed line on the side of the overtaker, drivers may overtake when it is safe. On multi-lane roads in most jurisdictions, overtaking is permitted in the "slower" lanes, though many require a special circumstance. See "Lanes" below. In the United Kingdom and Canada, notably on extra-urban roads, a solid white or yellow line closer to the driver is used to indicate that no overtaking is allowed in that lane. A double white or yellow line means that neither side may overtake. In the United States, a solid white line means that lane changes are discouraged and a double white line means that the lane change is prohibited. Lanes When a street is wide enough to accommodate several vehicles traveling side-by-side, it is usual for traffic to organize itself into lanes, that is, parallel corridors of traffic. Some roads have one lane for each direction of travel and others have multiple lanes for each direction. 
Most countries apply pavement markings to clearly indicate the limits of each lane and the direction of travel that it must be used for. In other countries lanes have no markings at all and drivers follow them mostly by intuition rather than visual stimulus. On roads that have multiple lanes going in the same direction, drivers may usually shift amongst lanes as they please, but they must do so in a way that does not cause inconvenience to other drivers. Driving cultures vary greatly on the issue of "lane ownership": in some countries, drivers traveling in a lane will be very protective of their right to travel in it while in others drivers will routinely expect other drivers to shift back and forth. Designation and overtaking The usual designation for lanes on divided highways is the fastest lane is the one closest to the center of the road, and the slowest to the edge of the road. Drivers are usually expected to keep in the slowest lane unless overtaking, though with more traffic congestion all lanes are often used. When driving on the left: The lane designated for faster traffic is on the right. The lane designated for slower traffic is on the left. Most freeway exits are on the left. Overtaking is permitted to the right, and sometimes to the left. When driving on the right: The lane designated for faster traffic is on the left. The lane designated for slower traffic is on the right. Most freeway exits are on the right. Overtaking is permitted to the left, and sometimes to the right. Countries party to the Vienna Convention on Road Traffic have uniform rules about overtaking and lane designation. The convention details (amongst other things) that "Every driver shall keep to the edge of the carriageway appropriate to the direction of traffic", and the "Drivers overtaking shall do so on the side opposite to that appropriate to the direction of traffic", notwithstanding the presence or absence of oncoming traffic. Allowed exceptions to these rules include turning or heavy traffic, traffic in lines, or situation in which signs or markings must dictate otherwise. These rules must be more strictly adhered to on roads with oncoming traffic, but still apply on multi-lane and divided highways. Many countries in Europe are party to the Vienna Conventions on traffic and roads. In Australia (which is not a contracting party), traveling in any lane other than the "slow" lane on a road with a speed limit at or above is an offence, unless signage is posted to the contrary or the driver is overtaking. Many areas in North America do not have any laws about staying to the slowest lanes unless overtaking. In those areas, unlike many parts of Europe, traffic is allowed to overtake on any side, even in a slower lane. This practice is known as "passing on the right" in the United States and "overtaking on the inside" and "undertaking" in the United Kingdom. When referring to individual lanes on dual carriageways, one does not consider traffic travelling the opposite direction. The inside lane (in the British English sense, i.e. the lane beside the hard shoulder) refers to the lane used for normal travel, while the middle lane is used for overtaking cars on the inside lane. The outside lane (i.e. closest to oncoming traffic) is used for overtaking vehicles in the middle lane. The same principle lies with dual carriageways with more than three lanes. 
U.S.-state-specific practices In some US states (such as Louisiana, Massachusetts and New York), although there are laws requiring all traffic on a public way to use the right-most lane unless overtaking, this rule is often ignored and seldom enforced on multi-lane roadways. Some states, such as Colorado, use a combination of laws and signs restricting speeds or vehicles on certain lanes to emphasize overtaking only on the left lane, and to avoid a psychological condition commonly called road rage. In California, cars may use any lane on multi-lane roadways. Drivers moving slower than the general flow of traffic are required to stay in the right-most lanes (by California Vehicle Code (CVC) 21654) to keep the way clear for faster vehicles and thus speed up traffic. However, faster drivers may legally pass in the slower lanes if conditions allow (by CVC 21754). But the CVC also requires trucks to stay in the right lane, or in the right two lanes if the roadway has four or more lanes going in their direction. The oldest freeways in California, and some freeway interchanges, often have ramps on the left, making signs like "TRUCKS OK ON LEFT LANE" or "TRUCKS MAY USE ALL LANES" necessary to override the default rule. Lane splitting, or riding motorcycles in the space between cars in traffic, is permitted as long as it is done in a safe and prudent manner. One-way roadways In order to increase traffic capacity and safety, a route may have two or more separate roads for each direction of traffic. Alternatively, a given road might be declared one-way. High-speed roads In large cities, moving from one part of the city to another by means of ordinary streets and avenues can be time-consuming since traffic is often slowed by at-grade junctions, tight turns, narrow marked lanes and lack of a minimum speed limit. Therefore, it has become common practice for larger cities to build roads for faster through traffic. There are two different types of roads used to provide high-speed access across urban areas: The controlled-access highway (freeway or motorway) is a divided multi-lane highway with fully controlled access and grade-separated intersections (no cross traffic). Some freeways are called expressways, super-highways, or turnpikes, depending on local usage. Access to freeways is fully controlled; entering and leaving the freeway is permitted only at grade-separated interchanges. The limited-access road (often called expressway in areas where the name does not refer to a freeway or motorway) is a lower-grade type of road with some or many of the characteristics of a controlled-access highway: usually a broad multi-lane avenue, frequently divided, with some grade separation at intersections. Motor vehicle drivers wishing to travel over great distances within the city will usually take the freeways or expressways in order to minimize travel time. When a crossing road is at the same grade as the freeway, a bridge (or, less often, an underpass) will be built for the crossing road. If the freeway is elevated, the crossing road will pass underneath it. Minimum speed signs are sometimes posted (although increasingly rare) and usually indicate that any vehicle traveling slower than should indicate a slower speed of travel to other motor vehicles by engaging the vehicle's four-way flashing lights. Alternative slower-than-posted speeds may be in effect, based on the posted speed limit of the highway/freeway. 
Systems of freeways and expressways are also built to connect distant and regional cities, notable systems include the Interstate highways, the Autobahnen and the Expressway Network of the People's Republic of China. One-way streets In more sophisticated systems such as large cities, this concept is further extended: some streets are marked as being one-way, and on those streets all traffic must flow in only one direction. Pedestrians on the sidewalks are generally not limited to one-way movement. Drivers wishing to reach a destination they have already passed must return via other streets. One-way streets, despite the inconveniences to some individual drivers, can greatly improve traffic flow since they usually allow traffic to move faster and tend to simplify intersections. Congested traffic In some places traffic volume is consistently, extremely large, either during periods of time referred to as rush hour or perpetually. Exceptionally, traffic upstream of a vehicular collision or an obstruction, such as construction, may also be constrained, resulting in a traffic jam. Such dynamics in relation to traffic congestion is known as traffic flow. Traffic engineers sometimes gauge the quality of traffic flow in terms of level of service. In measured traffic data, common spatiotemporal empirical features of traffic congestion have been found that are qualitatively the same for different highways in different countries. Some of these common features distinguish the wide moving jam and synchronized flow phases of congested traffic in Kerner's three-phase traffic theory. Rush hour During business days in most major cities, traffic congestion reaches great intensity at predictable times of the day due to the large number of vehicles using the road at the same time. This phenomenon is called rush hour or peak hour, although the period of high traffic intensity often exceeds one hour. Since the advent of car radios, radio programming during rush hour is likely to be called drive time. Congestion mitigation Rush hour policies Some cities adopt policies to reduce rush-hour traffic and pollution and encourage the use of public transportation. For example, in São Paulo, Manila and in Mexico City, each vehicle has a specific day of the week in which it is forbidden from traveling the roads during rush hour. The day for each vehicle is taken from the license plate number, and this rule is enforced by traffic police and also by hundreds of strategically positioned traffic cameras backed by computerized image-recognition systems that issue tickets to offending drivers. In the United States and Canada, several expressways have a special lane (called an "HOV Lane" – High Occupancy Vehicle Lane) that can only be used by cars carrying two (some locations-three) or more people. Also, many major cities have instituted strict parking prohibitions during rush hour on major arterial streets leading to and from the central business district. During designated weekday hours, vehicles parked on these primary routes are subject to prompt ticketing and towing at owner expense. The purpose of these restrictions is to make available an additional traffic lane in order to maximize available traffic capacity. Additionally, several cities offer a public telephone service where citizens can arrange rides with others depending on where they live and work. The purpose of these policies is to reduce the number of vehicles on the roads and thus reduce rush-hour traffic intensity. 
Metered freeways are also a solution for controlling rush hour traffic. In Phoenix, Arizona and Seattle, Washington, among other places, metered on-ramps have been implemented. During rush hour, ramp meter signals allow one car per green indication to proceed onto the freeway. Rush hour congestion arises because large numbers of vehicles travel to the same places at the same times; since working hours, school schedules, and errands are concentrated in the same parts of the day, the problem is difficult to eliminate and occurs in major metropolitan areas throughout the world. Pre-emption In some areas, emergency responders are provided with specialized equipment, such as a Mobile Infrared Transmitter, which allows emergency response vehicles, particularly fire-fighting apparatus, to have high-priority travel by having the lights along their route change to green. The technology behind these methods has evolved, from panels at the fire department (which could trigger and control green lights for certain major corridors) to optical systems (which the individual fire apparatus can be equipped with to communicate directly with receivers on the signal head). In certain jurisdictions, public transport buses and government-operated winter service vehicles are permitted to use this equipment to extend the length of a green light. During emergencies where evacuation of a heavily populated area is required, local authorities may institute contraflow lane reversal, in which all lanes of a road lead away from a danger zone regardless of their original flow. Aside from emergencies, contraflow may also be used to ease traffic congestion during rush hour or at the end of a sports event (where a large number of cars are leaving the venue at the same time). For example, the six lanes of the Lincoln Tunnel can be changed from three inbound and three outbound to a two/four configuration depending on traffic volume. The Brazilian highways Rodovia dos Imigrantes and Rodovia Anchieta connect São Paulo to the Atlantic coast. Almost all lanes of both highways are usually reversed during weekends to allow for heavy seaside traffic. The reversibility of the highways requires many additional highway ramps and complicated interchanges. Intelligent transportation systems An intelligent transportation system (ITS) is a system of hardware, software, and operators-in-the-loop that allows better monitoring and control of traffic in order to optimize traffic flow. Because the number of vehicle lane miles traveled per year continues to increase dramatically while the number of vehicle lane miles constructed per year has not kept pace, traffic congestion continues to worsen. As a cost-effective solution toward optimizing traffic, ITS presents a number of technologies to reduce congestion by monitoring traffic flows through the use of sensors and live cameras, or by analysing cellular phone data from travelling cars (floating car data), and in turn rerouting traffic as needed through the use of variable message signs (VMS), highway advisory radio, and on-board or off-board navigation devices, integrating traffic data with navigation systems. 
Additionally, the roadway network has been increasingly fitted with communications and control infrastructure that allows traffic operations personnel to monitor weather conditions, to dispatch maintenance crews for snow or ice removal, and to operate intelligent systems such as automated bridge de-icing, which help to prevent accidents. Aviation In aviation, right-of-way rules are based on the principle that the least maneuverable aircraft takes priority. In the United States, the Code of Federal Regulations ranks air traffic in the following order of priority: any aircraft in distress; balloons; gliders; airships; aircraft towing or refueling other aircraft (which have the right-of-way over all other engine-driven aircraft); and powered parachutes, weight-shift-control aircraft, airplanes, and rotorcraft. In addition, head-on approaching aircraft shall alter course to the right. An aircraft being overtaken has the right-of-way. A landing aircraft has the right-of-way over other surface-operating aircraft.
Technology
Basics_7
null
43085
https://en.wikipedia.org/wiki/Rutile
Rutile
Rutile is an oxide mineral composed of titanium dioxide (TiO2), the most common natural form of TiO2. Rarer polymorphs of TiO2 are known, including anatase, akaogiite, and brookite. Rutile has one of the highest refractive indices at visible wavelengths of any known crystal and also exhibits a particularly large birefringence and high dispersion. Owing to these properties, it is useful for the manufacture of certain optical elements, especially polarization optics, for longer visible and infrared wavelengths up to about 4.5 micrometres. Natural rutile may contain up to 10% iron and significant amounts of niobium and tantalum. Rutile derives its name from the Latin rutilus ('red'), in reference to the deep red color observed in some specimens when viewed by transmitted light. Rutile was first described in 1803 by Abraham Gottlob Werner using specimens obtained in Horcajuelo de la Sierra, Madrid (Spain), which is consequently the type locality. Occurrence Rutile is a common accessory mineral in high-temperature and high-pressure metamorphic rocks and in igneous rocks. Thermodynamically, rutile is the most stable polymorph of TiO2 at all temperatures, exhibiting lower total free energy than metastable phases of anatase or brookite. Consequently, the transformation of the metastable TiO2 polymorphs to rutile is irreversible. As it has the lowest molecular volume of the three main polymorphs, it is generally the primary titanium-bearing phase in most high-pressure metamorphic rocks, chiefly eclogites. Within the igneous environment, rutile is a common accessory mineral in plutonic igneous rocks, though it is also found occasionally in extrusive igneous rocks, particularly those such as kimberlites and lamproites that have deep mantle sources. Anatase and brookite are found in the igneous environment, particularly as products of autogenic alteration during the cooling of plutonic rocks; anatase is also found in placer deposits sourced from primary rutile. The occurrence of large specimen crystals is most common in pegmatites, skarns, and granite greisens. Rutile is found as an accessory mineral in some altered igneous rocks, and in certain gneisses and schists. In groups of acicular crystals it is frequently seen penetrating quartz as in the flèches d'amour from Graubünden, Switzerland. In 2005 the Republic of Sierra Leone in West Africa had a production capacity of 23% of the world's annual rutile supply, which rose to approximately 30% in 2008. Crystal structure Rutile has a tetragonal unit cell, with unit cell parameters a = b = 4.584 Å, and c = 2.953 Å. The titanium cations have a coordination number of 6, meaning they are surrounded by an octahedron of 6 oxygen atoms. The oxygen anions have a coordination number of 3, resulting in a trigonal planar coordination. Rutile also shows a screw axis when its octahedra are viewed sequentially. When formed under reducing conditions, oxygen vacancies can occur, coupled to Ti3+ centers. Hydrogen can enter these gaps, existing as an individual vacancy occupant (pairing as a hydrogen ion) or creating a hydroxide group with an adjacent oxygen. Rutile crystals are most commonly observed to exhibit a prismatic or acicular growth habit with preferential orientation along their c axis, the [001] direction. This growth habit is favored as the {110} facets of rutile exhibit the lowest surface free energy and are therefore thermodynamically most stable. The c-axis oriented growth of rutile appears clearly in nanorods, nanowires and abnormal grain growth phenomena of this phase. 
Application In large enough quantities in beach sands, rutile forms an important constituent of heavy minerals and ore deposits. Miners extract and separate the valuable minerals – e.g., rutile, zircon, and ilmenite. The main uses for rutile are the manufacture of refractory ceramic, as a pigment, and for the production of titanium metal. Finely powdered rutile is a brilliant white pigment and is used in paints, plastics, paper, foods, and other applications that call for a bright white color. Titanium dioxide pigment is the single greatest use of titanium worldwide. Nanoscale particles of rutile are transparent to visible light but are highly effective in the absorption of ultraviolet radiation (sunscreen). The UV absorption of nano-sized rutile particles is blue-shifted compared to bulk rutile so that higher-energy UV light is absorbed by the nanoparticles. Hence, they are used in sunscreens to protect against UV-induced skin damage. Small rutile needles present in gems are responsible for an optical phenomenon known as asterism. Asteriated gems are known as "star" gems. Star sapphires, star rubies, and other star gems are highly sought after and are generally more valuable than their normal counterparts. Rutile is widely used as a welding electrode covering. It is also used as a part of the ZTR index, which classifies highly weathered sediments. Semiconductor Rutile, as a large band-gap semiconductor, has in recent decades been the subject of significant research towards applications as a functional oxide for applications in photocatalysis and dilute magnetism. Research efforts typically utilize small quantities of synthetic rutile rather than mineral-deposit derived materials. Synthetic rutile Synthetic rutile was first produced in 1948 and is sold under a variety of names. It can be produced from the titanium ore ilmenite through the Becher process. Very pure synthetic rutile is transparent and almost colorless, being slightly yellow, in large pieces. Synthetic rutile can be made in a variety of colors by doping. The high refractive index gives an adamantine luster and strong refraction that leads to a diamond-like appearance. The near-colorless diamond substitute is sold as "Titania", which is the old-fashioned chemical name for this oxide. However, rutile is seldom used in jewellery because it is not very hard (scratch-resistant), measuring only about 6 on the Mohs hardness scale. As the result of growing research interest in the photocatalytic activity of titanium dioxide, in both anatase and rutile phases (as well as biphasic mixtures of the two phases), rutile TiO2 in powder and thin film form is frequently fabricated in laboratory conditions through solution based routes using inorganic precursors (typically TiCl4) or organometallic precursors (typically alkoxides such as titanium isopropoxide, also known as TTIP). Depending on synthesis conditions, the first phase to crystallize may be the metastable anatase phase, which can then be converted to the equilibrium rutile phase through thermal treatment. The physical properties of rutile are often modified using dopants to impart improved photocatalytic activity through improved photo-generated charge carrier separation, altered electronic band structures and improved surface reactivity.
Physical sciences
Minerals
Earth science
43093
https://en.wikipedia.org/wiki/Flagellum
Flagellum
A flagellum (plural: flagella) (Latin for 'whip' or 'scourge') is a hair-like appendage that protrudes from certain plant and animal sperm cells, from fungal spores (zoospores), and from a wide range of microorganisms to provide motility. Many protists with flagella are known as flagellates. A microorganism may have from one to many flagella. The gram-negative bacterium Helicobacter pylori, for example, uses its flagella to propel itself through the stomach to reach the mucous lining where it may colonise the epithelium and potentially cause gastritis and ulcers – a risk factor for stomach cancer. In some swarming bacteria, the flagellum can also function as a sensory organelle, being sensitive to wetness outside the cell. Across the three domains of Bacteria, Archaea, and Eukaryota, the flagellum has a different structure, protein composition, and mechanism of propulsion but shares the same function of providing motility. The Latin word for 'whip' describes its lash-like swimming motion. The flagellum in archaea is called the archaellum to note its difference from the bacterial flagellum. Eukaryotic flagella and cilia are identical in structure but have different lengths and functions. Prokaryotic fimbriae and pili are smaller and thinner appendages, with different functions. Cilia are attached to the cell surface and are used to swim or to move fluid from one region to another. Types The three types of flagella are bacterial, archaeal, and eukaryotic. The flagella in eukaryotes have dynein and microtubules that move with a bending mechanism. Bacteria and archaea do not have dynein or microtubules in their flagella, and they move using a rotary mechanism. Other differences among these three types are: Bacterial flagella are helical filaments, each with a rotary motor at its base which can turn clockwise or counterclockwise. They provide two of several kinds of bacterial motility. Archaeal flagella (archaella) are superficially similar to bacterial flagella in that they also have a rotary motor, but are different in many details and considered non-homologous. Eukaryotic flagella—those of animal, plant, and protist cells—are complex cellular projections that lash back and forth. Eukaryotic flagella and motile cilia are identical in structure, but have different lengths, waveforms, and functions. Primary cilia are immotile, and have a structurally different 9+0 axoneme rather than the 9+2 axoneme found in both flagella and motile cilia. Bacterial flagella Structure and composition The bacterial flagellum is made up of protein subunits of flagellin. Its shape is a 20-nanometer-thick hollow tube. It is helical and has a sharp bend just outside the outer membrane; this "hook" allows the axis of the helix to point directly away from the cell. A shaft runs between the hook and the basal body, passing through protein rings in the cell's membrane that act as bearings. Gram-positive organisms have two of these basal body rings, one in the peptidoglycan layer and one in the plasma membrane. Gram-negative organisms have four such rings: the L ring associates with the lipopolysaccharides, the P ring associates with the peptidoglycan layer, the M ring is embedded in the plasma membrane, and the S ring is directly attached to the cytoplasm. The filament ends with a capping protein. The flagellar filament is the long, helical screw that propels the bacterium when rotated by the motor, through the hook.
In most bacteria that have been studied, including the gram-negative Escherichia coli, Salmonella typhimurium, Caulobacter crescentus, and Vibrio alginolyticus, the filament is made up of 11 protofilaments approximately parallel to the filament axis. Each protofilament is a series of tandem protein chains. However, Campylobacter jejuni has seven protofilaments. The basal body has several traits in common with some types of secretory pores, such as the hollow, rod-like "plug" in their centers extending out through the plasma membrane. The similarities between bacterial flagella and bacterial secretory system structures and proteins provide scientific evidence supporting the theory that bacterial flagella evolved from the type-three secretion system (TTSS). The atomic structure of both bacterial flagella as well as the TTSS injectisome have been elucidated in great detail, especially with the development of cryo-electron microscopy. The best understood parts are the parts between the inner and outer membrane, that is, the scaffolding rings of the inner membrane (IM), the scaffolding pairs of the outer membrane (OM), and the rod/needle (injectisome) or rod/hook (flagellum) sections. Motor The bacterial flagellum is driven by a rotary engine (Mot complex) made up of protein, located at the flagellum's anchor point on the inner cell membrane. The engine is powered by proton-motive force, i.e., by the flow of protons (hydrogen ions) across the bacterial cell membrane due to a concentration gradient set up by the cell's metabolism (Vibrio species have two kinds of flagella, lateral and polar, and some are driven by a sodium ion pump rather than a proton pump). The rotor transports protons across the membrane, and is turned in the process. The rotor alone can operate at 6,000 to 100,000 rpm, but with the flagellar filament attached usually only reaches 200 to 1000 rpm. The direction of rotation can be changed by the flagellar motor switch almost instantaneously, caused by a slight change in the position of a protein, FliG, in the rotor. The torque is transferred from the MotAB to the torque helix on FliG's D5 domain and with the increase in the requirement of the torque or speed more MotAB are employed. Because the flagellar motor has no on-off switch, the protein epsE is used as a mechanical clutch to disengage the motor from the rotor, thus stopping the flagellum and allowing the bacterium to remain in one place. The production and rotation of a flagellum can take up to 10% of an Escherichia coli cell's energy budget and has been described as an "energy-guzzling machine". Its operation generates reactive oxygen species that elevate mutation rates. The cylindrical shape of flagella is suited to locomotion of microscopic organisms; these organisms operate at a low Reynolds number, where the viscosity of the surrounding water is much more important than its mass or inertia. The rotational speed of flagella varies in response to the intensity of the proton-motive force, thereby permitting certain forms of speed control, and also permitting some types of bacteria to attain remarkable speeds in proportion to their size; some achieve roughly 60 cell lengths per second. At such a speed, a bacterium would take about 245 days to cover 1 km; although that may seem slow, the perspective changes when the concept of scale is introduced. In comparison to macroscopic life forms, it is very fast indeed when expressed in terms of number of body lengths per second. 
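To make the "body lengths per second" comparison and the low-Reynolds-number claim above concrete, the sketch below converts a swimming speed quoted in cell lengths per second into an absolute speed and estimates the corresponding Reynolds number. The cell length (about 2 micrometres) and the properties of water are illustrative assumptions, not figures from this article.

    # Rough estimate of absolute swimming speed and Reynolds number for a bacterium.
    # Assumptions (not from the article): cell length ~2 micrometres, water at ~20 °C.
    cell_length_m = 2e-6           # assumed cell length in metres
    body_lengths_per_s = 60        # "roughly 60 cell lengths per second" (from the text)

    speed_m_per_s = body_lengths_per_s * cell_length_m   # ~1.2e-4 m/s (~120 micrometres/s)

    rho_water = 1000.0             # kg/m^3, density of water
    mu_water = 1.0e-3              # Pa*s, dynamic viscosity of water

    reynolds = rho_water * speed_m_per_s * cell_length_m / mu_water
    print(f"speed ~ {speed_m_per_s * 1e6:.0f} um/s, Reynolds number ~ {reynolds:.1e}")
    # Re is on the order of 1e-4, far below 1: viscous forces dominate over inertia,
    # which is why flagellar propulsion works so differently from macroscopic swimming.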
By comparison, a cheetah achieves only about 25 body lengths per second. Through use of their flagella, bacteria are able to move rapidly towards attractants and away from repellents, by means of a biased random walk, with runs and tumbles brought about by rotating their flagella counterclockwise and clockwise, respectively. The two directions of rotation are not identical (with respect to flagellum movement) and are selected by a molecular switch. Clockwise rotation is called the traction mode with the body following the flagella. Counterclockwise rotation is called the thruster mode with the flagella lagging behind the body. Assembly During flagellar assembly, components of the flagellum pass through the hollow cores of the basal body and the nascent filament. During assembly, protein components are added at the flagellar tip rather than at the base. In vitro, flagellar filaments assemble spontaneously in a solution containing purified flagellin as the sole protein. Evolution At least 10 protein components of the bacterial flagellum share homologous proteins with the type three secretion system (T3SS) found in many gram-negative bacteria, hence one likely evolved from the other. Because the T3SS has a similar number of components to a flagellar apparatus (about 25 proteins), which one evolved first is difficult to determine. However, the flagellar system appears to involve more proteins overall, including various regulators and chaperones, hence it has been argued that flagella evolved from a T3SS. However, it has also been suggested that the flagellum may have evolved first or the two structures evolved in parallel. The need of early single-celled organisms for motility supports the idea that the more mobile flagella would have been selected by evolution first, but the T3SS evolving from the flagellum can be seen as 'reductive evolution', and receives no topological support from the phylogenetic trees. The hypothesis that the two structures evolved separately from a common ancestor accounts for the protein similarities between the two structures, as well as their functional diversity. Flagella and the intelligent design debate Some authors have argued that flagella cannot have evolved, assuming that they can only function properly when all proteins are in place. In other words, the flagellar apparatus is "irreducibly complex". However, many proteins can be deleted or mutated and the flagellum still works, though sometimes at reduced efficiency. Moreover, many flagellar proteins are found in only a subset of species, and the diversity of bacterial flagellar composition has proved higher than expected. Hence, the flagellar apparatus is clearly very flexible in evolutionary terms and perfectly able to lose or gain protein components. For instance, a number of mutations have been found that increase the motility of E. coli. Additional evidence for the evolution of bacterial flagella includes the existence of vestigial flagella, intermediate forms of flagella and patterns of similarities among flagellar protein sequences, including the observation that almost all of the core flagellar proteins have known homologies with non-flagellar proteins. Furthermore, several processes have been identified as playing important roles in flagellar evolution, including self-assembly of simple repeating subunits, gene duplication with subsequent divergence, recruitment of elements from other systems ('molecular bricolage') and recombination.
Flagellar arrangements Different species of bacteria have different numbers and arrangements of flagella, named using the term tricho, from the Greek trichos meaning hair. Monotrichous bacteria such as Vibrio cholerae have a single polar flagellum. Amphitrichous bacteria have a single flagellum on each of two opposite ends (e.g., Campylobacter jejuni or Alcaligenes faecalis)—both flagella rotate but coordinate to produce coherent thrust. Lophotrichous bacteria (lopho- being a Greek combining form meaning crest or tuft) have multiple flagella located at the same spot on the bacterial surface, such as in Helicobacter pylori, which act in concert to drive the bacteria in a single direction. In many cases, the bases of multiple flagella are surrounded by a specialized region of the cell membrane, called the polar organelle. Peritrichous bacteria have flagella projecting in all directions (e.g., E. coli). Counterclockwise rotation of a monotrichous polar flagellum pushes the cell forward with the flagellum trailing behind, much like a corkscrew moving inside cork. At the microscopic scale, water behaves as a highly viscous medium, quite unlike water at everyday scales. Spirochetes, in contrast, have flagella called endoflagella arising from opposite poles of the cell, and are located within the periplasmic space as shown by disruption of the outer membrane and by electron cryotomography. The rotation of the filaments relative to the cell body causes the entire bacterium to move forward in a corkscrew-like motion, even through material viscous enough to prevent the passage of normally flagellated bacteria. In certain large forms of Selenomonas, more than 30 individual flagella are organized outside the cell body, helically twining about each other to form a thick structure (easily visible with the light microscope) called a "fascicle". In some Vibrio spp. (particularly Vibrio parahaemolyticus) and related bacteria such as Aeromonas, two flagellar systems co-exist, using different sets of genes and different ion gradients for energy. The polar flagella are constitutively expressed and provide motility in bulk fluid, while the lateral flagella are expressed when the polar flagella meet too much resistance to turn. These provide swarming motility on surfaces or in viscous fluids. Bundling In cells with multiple flagella, the flagella can come together in a bundle and rotate in a coordinated manner. Flagella are left-handed helices, and when rotated counter-clockwise by their rotors, they can bundle and rotate together. When the rotors reverse direction, thus rotating clockwise, the flagellum unwinds from the bundle. This may cause the cell to stop its forward motion and instead start twitching in place, referred to as tumbling. Tumbling results in a stochastic reorientation of the cell, causing it to change the direction of its forward swimming. It is not known which stimuli drive the switch between bundling and tumbling, but the motor is highly adaptive to different signals. In the model describing chemotaxis ("movement on purpose") the clockwise rotation of a flagellum is suppressed by chemical compounds favorable to the cell (e.g. food).
When moving in a favorable direction, the concentration of such chemical attractants increases and therefore tumbles are continually suppressed, allowing forward motion; likewise, when the cell's direction of motion is unfavorable (e.g., away from a chemical attractant), tumbles are no longer suppressed and occur much more often, with the chance that the cell will be thus reoriented in the correct direction. Even when all flagella rotate clockwise, however, they often cannot form a bundle, for geometrical and hydrodynamic reasons. Eukaryotic flagella Terminology Aiming to emphasize the distinction between the bacterial flagella and the eukaryotic cilia and flagella, some authors attempted to replace the name of these two eukaryotic structures with "undulipodia" (e.g., all papers by Margulis since the 1970s) or "cilia" for both (e.g., Hülsmann, 1992; Adl et al., 2012; most papers of Cavalier-Smith), preserving "flagella" for the bacterial structure. However, the discriminative usage of the terms "cilia" and "flagella" for eukaryotes adopted in this article (see below) is still common (e.g., Andersen et al., 1991; Leadbeater et al., 2000). Internal structure The core of a eukaryotic flagellum, known as the axoneme, is a bundle of nine fused pairs of microtubules, known as doublets, surrounding two central single microtubules (singlets). This 9+2 axoneme is characteristic of the eukaryotic flagellum. At the base of a eukaryotic flagellum is a basal body, "blepharoplast" or kinetosome, which is the microtubule organizing center for flagellar microtubules and is about 500 nanometers long. Basal bodies are structurally identical to centrioles. The flagellum is encased within the cell's plasma membrane, so that the interior of the flagellum is accessible to the cell's cytoplasm. Besides the axoneme and basal body, which are relatively constant in morphology, other internal structures of the flagellar apparatus are the transition zone (where the axoneme and basal body meet) and the root system (microtubular or fibrilar structures that extend from the basal bodies into the cytoplasm); these are more variable and useful as indicators of phylogenetic relationships of eukaryotes. Other, less common structures are the paraflagellar (or paraxial, paraxonemal) rod, the R fiber, and the S fiber. For surface structures, see below. Mechanism Each of the outer 9 doublet microtubules extends a pair of dynein arms (an "inner" and an "outer" arm) to the adjacent microtubule; these produce force through ATP hydrolysis. The flagellar axoneme also contains radial spokes, polypeptide complexes extending from each of the outer nine microtubule doublets towards the central pair, with the "head" of the spoke facing inwards. The radial spoke is thought to be involved in the regulation of flagellar motion, although its exact function and method of action are not yet understood. Flagella versus cilia The regular beat patterns of eukaryotic cilia and flagella generate motion on a cellular level. Examples range from the propulsion of single cells such as the swimming of spermatozoa to the transport of fluid along a stationary layer of cells such as in the respiratory tract. Although eukaryotic cilia and flagella are ultimately the same, they are sometimes classed by their pattern of movement, a tradition dating from before their structures were known. In the case of flagella, the motion is often planar and wave-like, whereas the motile cilia often perform a more complicated three-dimensional motion with a power and recovery stroke.
Yet another traditional form of distinction is by the number of 9+2 organelles on the cell. Intraflagellar transport Intraflagellar transport, the process by which axonemal subunits, transmembrane receptors, and other proteins are moved up and down the length of the flagellum, is essential for proper functioning of the flagellum, in both motility and signal transduction. Evolution and occurrence Eukaryotic flagella or cilia, probably an ancestral characteristic, are widespread in almost all groups of eukaryotes, as a relatively perennial condition, or as a flagellated life cycle stage (e.g., zoids, gametes, zoospores, which may be produced continually or not). The first situation is found either in specialized cells of multicellular organisms (e.g., the choanocytes of sponges, or the ciliated epithelia of metazoans), or in ciliates and many eukaryotes with a "flagellate condition" (or "monadoid level of organization", see Flagellata, an artificial group). Flagellated lifecycle stages are found in many groups, e.g., many green algae (zoospores and male gametes), bryophytes (male gametes), pteridophytes (male gametes), some gymnosperms (cycads and Ginkgo, as male gametes), centric diatoms (male gametes), brown algae (zoospores and gametes), oomycetes (asexual zoospores and gametes), hyphochytrids (zoospores), labyrinthulomycetes (zoospores), some apicomplexans (gametes), some radiolarians (probably gametes), foraminiferans (gametes), plasmodiophoromycetes (zoospores and gametes), myxogastrids (zoospores), metazoans (male gametes), and chytrid fungi (zoospores and gametes). Flagella or cilia are completely absent in some groups, probably due to a loss rather than being a primitive condition. The loss of cilia occurred in red algae, some green algae (Zygnematophyceae), the gymnosperms except cycads and Ginkgo, angiosperms, pennate diatoms, some apicomplexans, some amoebozoans, in the sperm of some metazoans, and in fungi (except chytrids). Typology A number of terms related to flagella or cilia are used to characterize eukaryotes. According to surface structures present, flagella may be: whiplash flagella (= smooth, acronematic flagella): without hairs, e.g., in Opisthokonta hairy flagella (= tinsel, flimmer, pleuronematic flagella): with hairs (= mastigonemes sensu lato), divided into: with fine hairs (= non-tubular, or simple hairs): occurs in Euglenophyceae, Dinoflagellata, some Haptophyceae (Pavlovales) with stiff hairs (= tubular hairs, retronemes, mastigonemes sensu stricto), divided into: bipartite hairs: with two regions. Occurs in Cryptophyceae, Prasinophyceae, and some Heterokonta tripartite (= straminipilous) hairs: with three regions (a base, a tubular shaft, and one or more terminal hairs).
Occurs in most Heterokonta stichonematic flagella: with a single row of hairs pantonematic flagella: with two rows of hairs acronematic: flagella with a single, terminal mastigoneme or flagellar hair (e.g., bodonids); some authors use the term as synonym of whiplash with scales: e.g., Prasinophyceae with spines: e.g., some brown algae with undulating membrane: e.g., some kinetoplastids, some parabasalids with proboscis (trunk-like protrusion of the cell): e.g., apusomonads, some bodonids According to the number of flagella, cells may be: (remembering that some authors use "ciliated" instead of "flagellated") uniflagellated: e.g., most Opisthokonta biflagellated: e.g., all Dinoflagellata, the gametes of Charophyceae, of most bryophytes and of some metazoans triflagellated: e.g., the gametes of some Foraminifera quadriflagellated: e.g., some Prasinophyceae, Collodictyonidae octoflagellated: e.g., some Diplomonada, some Prasinophyceae multiflagellated: e.g., Opalinata, Ciliophora, Stephanopogon, Parabasalida, Hemimastigophora, Caryoblastea, Multicilia, the gametes (or zoids) of Oedogoniales (Chlorophyta), some pteridophytes and some gymnosperms According to the place of insertion of the flagella: opisthokont: cells with flagella inserted posteriorly, e.g., in Opisthokonta (Vischer, 1945). In Haptophyceae, flagella are laterally to terminally inserted, but are directed posteriorly during rapid swimming. akrokont: cells with flagella inserted apically subakrokont: cells with flagella inserted subapically pleurokont: cells with flagella inserted laterally According to the beating pattern: gliding: a flagellum that trails on the substrate heterodynamic: flagella with different beating patterns (usually with one flagellum functioning in food capture and the other functioning in gliding, anchorage, propulsion or "steering") isodynamic: flagella beating with the same patterns Other terms related to the flagellar type: isokont: cells with flagella of equal length. It was also formerly used to refer to the Chlorophyta anisokont: cells with flagella of unequal length, e.g., some Euglenophyceae and Prasinophyceae heterokont: term introduced by Luther (1899) to refer to the Xanthophyceae, due to the pair of flagella of unequal length. It has taken on a specific meaning in referring to cells with an anterior straminipilous flagellum (with tripartite mastigonemes, in one or two rows) and a posterior usually smooth flagellum. It is also used to refer to the taxon Heterokonta stephanokont: cells with a crown of flagella near its anterior end, e.g., the gametes and spores of Oedogoniales, the spores of some Bryopsidales. Term introduced by Blackman & Tansley (1902) to refer to the Oedogoniales akont: cells without flagella. It was also used to refer to taxonomic groups, as Aconta or Akonta: the Zygnematophyceae and Bacillariophyceae (Oltmanns, 1904), or the Rhodophyceae (Christensen, 1962) Archaeal flagella The archaellum possessed by some species of Archaea is superficially similar to the bacterial flagellum; in the 1980s, they were thought to be homologous on the basis of gross morphology and behavior. Both flagella and archaella consist of filaments extending outside the cell, and rotate to propel the cell. Archaeal flagella have a unique structure which lacks a central channel. Similar to bacterial type IV pilins, the archaeal proteins (archaellins) are made with class 3 signal peptides and they are processed by a type IV prepilin peptidase-like enzyme. 
The archaellins are typically modified by the addition of N-linked glycans which are necessary for proper assembly or function. Discoveries in the 1990s revealed numerous detailed differences between the archaeal and bacterial flagella. These include: Bacterial flagella rotation is powered by the proton motive force – a flow of H+ ions or occasionally by the sodium-motive force – a flow of Na+ ions; archaeal flagella rotation is powered by ATP. While bacterial cells often have many flagellar filaments, each of which rotates independently, the archaeal flagellum is composed of a bundle of many filaments that rotates as a single assembly. Bacterial flagella grow by the addition of flagellin subunits at the tip; archaeal flagella grow by the addition of subunits to the base. Bacterial flagella are thicker than archaella, and the bacterial filament has a large enough hollow "tube" inside that the flagellin subunits can flow up the inside of the filament and get added at the tip; the archaellum is too thin (12-15 nm) to allow this. Many components of bacterial flagella share sequence similarity to components of the type III secretion systems, but the components of bacterial flagella and archaella share no sequence similarity. Instead, some components of archaella share sequence and morphological similarity with components of type IV pili, which are assembled through the action of type II secretion systems (the nomenclature of pili and protein secretion systems is not consistent). These differences support the theory that the bacterial flagella and archaella are a classic case of biological analogy, or convergent evolution, rather than homology. Research into the structure of archaella made significant progress beginning in the early 2010s, with the first atomic resolution structure of an archaella protein, the discovery of additional functions of archaella, and the first reports of archaella in Nanoarchaeota and Thaumarchaeota. Fungal The only fungi to have a single flagellum on their spores are the chytrids. In Batrachochytrium dendrobatidis the flagellum is 19–20 μm long. A nonfunctioning centriole lies adjacent to the kinetosome. Nine interconnected props attach the kinetosome to the plasmalemma, and a terminal plate is present in the transitional zone. An inner ring-like structure attached to the tubules of the flagellar doublets within the transitional zone has been observed in transverse section. Additional images
Biology and health sciences
Organelles and other cell parts
null
43118
https://en.wikipedia.org/wiki/Cilium
Cilium
The cilium (: cilia; ; in Medieval Latin and in anatomy, cilium) is a short hair-like membrane protrusion from many types of eukaryotic cell. (Cilia are absent in bacteria and archaea.) The cilium has the shape of a slender threadlike projection that extends from the surface of the much larger cell body. Eukaryotic flagella found on sperm cells and many protozoans have a similar structure to motile cilia that enables swimming through liquids; they are longer than cilia and have a different undulating motion. There are two major classes of cilia: motile and non-motile cilia, each with two subtypes, giving four types in all. A cell will typically have one primary cilium or many motile cilia. The structure of the cilium core, called the axoneme, determines the cilium class. Most motile cilia have a central pair of single microtubules surrounded by nine pairs of double microtubules called a 9+2 axoneme. Most non-motile cilia have a 9+0 axoneme that lacks the central pair of microtubules. Also lacking are the associated components that enable motility including the outer and inner dynein arms, and radial spokes. Some motile cilia lack the central pair, and some non-motile cilia have the central pair, hence the four types. Most non-motile cilia, termed primary cilia or sensory cilia, serve solely as sensory organelles. Most vertebrate cell types possess a single non-motile primary cilium, which functions as a cellular antenna. Olfactory neurons possess a great many non-motile cilia. Non-motile cilia that have a central pair of microtubules are the kinocilia present on hair cells. Motile cilia are found in large numbers on respiratory epithelial cells – around 200 cilia per cell, where they function in mucociliary clearance, and also have mechanosensory and chemosensory functions. Motile cilia on ependymal cells move the cerebrospinal fluid through the ventricular system of the brain. Motile cilia are also present in the oviducts (fallopian tubes) of female (therian) mammals, where they function in moving egg cells from the ovary to the uterus. Motile cilia that lack the central pair of microtubules are found in the cells of the embryonic primitive node; termed nodal cells, these nodal cilia are responsible for the left-right asymmetry of bilaterians. Structure A cilium is assembled and built from a basal body on the cell surface. From the basal body, the ciliary rootlet forms ahead of the transition plate and transition zone where the earlier microtubule triplets change to the microtubule doublets of the axoneme. Basal body The foundation of the cilium is the basal body, a term applied to the mother centriole when it is associated with a cilium. Mammalian basal bodies consist of a barrel of nine triplet microtubules, subdistal appendages and nine strut-like structures, known as distal appendages, which attach the basal body to the membrane at the base of the cilium. Two of each of the basal body's triplet microtubules extend during growth of the axoneme to become the doublet microtubules. Ciliary rootlet The ciliary rootlet is a cytoskeleton-like structure that originates from the basal body at the proximal end of a cilium. Rootlets are typically 80-100 nm in diameter and contain cross striae distributed at regular intervals of approximately 55-70 nm. A prominent component of the rootlet is rootletin a coiled coil rootlet protein coded for by the CROCC gene. 
Transition zone To achieve its distinct composition, the proximal-most region of the cilium consists of a transition zone, also known as the ciliary gate, that controls the entry and exit of proteins to and from the cilium. At the transition zone, Y-shaped structures connect the ciliary membrane to the underlying axoneme. Control of selective entry into cilia may involve a sieve-like function of transition zone. Inherited defects in components of the transition zone cause ciliopathies, such as Joubert syndrome. Transition zone structure and function is conserved across diverse organisms, including vertebrates, Caenorhabditis elegans, Drosophila melanogaster and Chlamydomonas reinhardtii. In mammals, disruption of the transition zone reduces the ciliary abundance of membrane-associated ciliary proteins, such as those involved in Hedgehog signal transduction, compromising Hedgehog-dependent embryonic development of digit number and central nervous system patterning. Axoneme Inside a cilium is a microtubule-based cytoskeletal core called the axoneme. The axoneme of a primary cilium typically has a ring of nine outer microtubule doublets (called a 9+0 axoneme), and the axoneme of a motile cilium has, in addition to the nine outer doublets, two central microtubule singlets (called a 9+2 axoneme). This is the same axoneme type of the flagellum. The axoneme in a motile cilium acts as a scaffold for the inner and outer dynein arms that move the cilium, and provides tracks for the microtubule motor proteins of kinesin and dynein. The transport of ciliary components is carried out by intraflagellar transport (IFT) which is similar to the axonal transport in a nerve fibre. Transport is bidirectional and cytoskeletal motor proteins kinesin and dynein transport ciliary components along the microtubule tracks; kinesin in an anterograde movement towards the ciliary tip and dynein in a retrograde movement towards the cell body. The cilium has its own ciliary membrane enclosed within the surrounding cell membrane. Types Non-motile cilia In animals, non-motile primary cilia are found on nearly every type of cell, blood cells being a prominent exception. Most cells only possess one, in contrast to cells with motile cilia, an exception being olfactory sensory neurons, where the odorant receptors are located, which each possess about ten cilia. Some cell types, such as retinal photoreceptor cells, possess highly specialized primary cilia. Although the primary cilium was discovered in 1898, it was largely ignored for a century and considered a vestigial organelle without important function. Recent findings regarding its physiological roles in chemosensation, signal transduction, and cell growth control, have revealed its importance in cell function. Its importance to human biology has been underscored by the discovery of its role in a diverse group of diseases caused by the dysgenesis or dysfunction of cilia, such as polycystic kidney disease, congenital heart disease, mitral valve prolapse, and retinal degeneration, called ciliopathies. The primary cilium is now known to play an important role in the function of many human organs. Primary cilia on pancreatic beta cells regulate their function and energy metabolism. Cilia deletion can lead to islet dysfunction and type 2 diabetes. Cilia are assembled during the G1 phase and are disassembled before mitosis occurs. Disassembly of cilia requires the action of aurora kinase A. 
The current scientific understanding of primary cilia views them as "sensory cellular antennae that coordinate many cellular signaling pathways, sometimes coupling the signaling to ciliary motility or alternatively to cell division and differentiation." The cilium is composed of subdomains and enclosed by a plasma membrane continuous with the plasma membrane of the cell. For many cilia, the basal body, where the cilium originates, is located within a membrane invagination called the ciliary pocket. The cilium membrane and the basal body microtubules are connected by distal appendages (also called transition fibers). Vesicles carrying molecules for the cilia dock at the distal appendages. Distal to the transition fibers form a transition zone where entry and exit of molecules is regulated to and from the cilia. Some of the signaling with these cilia occur through ligand binding such as Hedgehog signaling. Other forms of signaling include G protein-coupled receptors including the somatostatin receptor 3 in neurons. Modified non-motile cilia Kinocilia that are found on hair cells in the inner ear are termed as specialized primary cilia, or modified non-motile cilia. They possess the 9+2 axoneme of the motile cilia but lack the inner dynein arms that give movement. They do move passively following the detection of sound, allowed by the outer dynein arms. Motile cilia Mammals also have motile cilia or secondary cilia that are usually present on a cell's surface in large numbers (multiciliate), and beat in coordinated metachronal waves. Multiciliated cells are found lining the respiratory tract where they function in mucociliary clearance sweeping mucus containing debris away from the lungs. Each cell in the respiratory epithelium has around 200 motile cilia. In the reproductive tract, smooth muscle contractions help the beating of the cilia in moving the egg cell from the ovary to the uterus. In the ventricles of the brain ciliated ependymal cells circulate the cerebrospinal fluid. The functioning of motile cilia is strongly dependent on the maintenance of optimal levels of periciliary fluid bathing the cilia. Epithelial sodium channels (ENaCs) are specifically expressed along the entire length of cilia in the respiratory tract, and fallopian tube or oviduct that apparently serve as sensors to regulate the periciliary fluid. Modified motile cilia Motile cilia without the central pair of singlets (9+0) are found in early embryonic development. They are present as nodal cilia on the nodal cells of the primitive node. Nodal cells are responsible for the left-right asymmetry in bilateral animals. While lacking the central apparatus there are dynein arms present that allow the nodal cilia to move in a spinning fashion. The movement creates a current flow of the extraembryonic fluid across the nodal surface in a leftward direction that initiates the left-right asymmetry in the developing embryo. Motile, multiple, 9+0 cilia are found on the epithelial cells of the choroid plexus. Cilia also can change structure when introduced to hot temperatures and become sharp. They are present in large numbers on each cell and move relatively slowly, making them intermediate between motile and primary cilia. In addition to 9+0 cilia that are mobile, there are also solitary 9+2 cilia that stay immobile found in hair cells. Nodal cilia Nodal cells have a single cilium called a monocilium. They are present in the very early development of the embryo on the primitive node. 
There are two areas of the node with different types of nodal cilia. On the central node are motile cilia, and on the peripheral area of the node the nodal cilia are modified motile. The motile cilia on the central cells rotate to generate the leftward flow of extracellular fluid needed to initiate the left-right asymmetry. Cilia versus flagella The motile cilia on sperm cells and many protozoans enables swimming through liquids and are traditionally referred to as "flagella". As these protrusions are structurally identical to motile cilia, attempts at preserving this terminology include making a distinction by morphology ("flagella" are typically longer than ordinary cilia and have a different undulating motion) and by number. Microorganisms Ciliates are eukaryotic microorganisms that possess motile cilia exclusively and use them for either locomotion or to simply move liquid over their surface. A Paramecium for example is covered in thousands of cilia that enable its swimming. These motile cilia have been shown to be also sensory. Ciliogenesis Cilia are formed through the process of ciliogenesis. An early step is docking of the basal body to the growing ciliary membrane, after which the transition zone forms. The building blocks of the ciliary axoneme, such as tubulins, are added at the ciliary tips through a process that depends partly on intraflagellar transport (IFT). Exceptions include Drosophila sperm and Plasmodium falciparum flagella formation, in which cilia assemble in the cytoplasm. At the base of the cilium where it attaches to the cell body is the microtubule organizing center, the basal body. Some basal body proteins as CEP164, ODF2 and CEP170, are required for the formation and the stability of the cilium. In effect, the cilium is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines. Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics. Function The dynein in the axoneme – axonemal dynein forms bridges between neighbouring microtubule doublets. When ATP activates the motor domain of dynein, it attempts to walk along the adjoining microtubule doublet. This would force the adjacent doublets to slide over one another if not for the presence of nexin between the microtubule doublets. And thus the force generated by dynein is instead converted into a bending motion. Sensing the extracellular environment Some primary cilia on epithelial cells in eukaryotes act as cellular antennae, providing chemosensation, thermosensation and mechanosensation of the extracellular environment. These cilia then play a role in mediating specific signalling cues, including soluble factors in the external cell environment, a secretory role in which a soluble protein is released to have an effect downstream of the fluid flow, and mediation of fluid flow if the cilia are motile. Some epithelial cells are ciliated, and they commonly exist as a sheet of polarized cells forming a tube or tubule with cilia projecting into the lumen. This sensory and signalling role puts cilia in a central role for maintaining the local cellular environment and may be why ciliary defects cause such a wide range of human diseases. In the embryo, nodal cilia are used to direct the flow of extracellular fluid. This leftward movement is to generate left-right asymmetry across the midline of the embryo. 
Central cilia coordinate their rotational beating while the immotile cilia on the sides sense the direction of the flow. Studies in mice suggest a biophysical mechanism by which the direction of flow is sensed. Axo-ciliary synapse At axo-ciliary synapses, serotonergic axons communicate with the primary cilia of CA1 pyramidal neurons and alter the neuron's epigenetic state in the nucleus – "a way to change what is being transcribed or made in the nucleus". This signalling is distinct from signalling at the plasma membrane and acts over a longer term. Clinical significance Ciliary defects can lead to a number of human diseases. Defects in cilia adversely affect many critical signaling pathways essential to embryonic development and to adult physiology, and thus offer a plausible hypothesis for the often multi-symptom nature of diverse ciliopathies. Known ciliopathies include primary ciliary dyskinesia, Bardet–Biedl syndrome, polycystic kidney and liver disease, nephronophthisis, Alström syndrome, Meckel–Gruber syndrome, Sensenbrenner syndrome and some forms of retinal degeneration. Genetic mutations compromising the proper functioning of cilia, ciliopathies, can cause chronic disorders such as primary ciliary dyskinesia (PCD), nephronophthisis, and Senior–Løken syndrome. In addition, a defect of the primary cilium in the renal tubule cells can lead to polycystic kidney disease (PKD). In another genetic disorder called Bardet–Biedl syndrome (BBS), the mutant gene products are the components in the basal body and cilia. Defects in ciliated cells are linked to obesity and are often pronounced in type 2 diabetes. Several studies have shown impaired glucose tolerance and reduced insulin secretion in ciliopathy models. Moreover, the number and length of cilia were decreased in type 2 diabetes models. Epithelial sodium channels (ENaCs) that are expressed along the length of cilia regulate periciliary fluid level. Mutations that decrease the activity of ENaCs result in multisystem pseudohypoaldosteronism, which is associated with fertility problems. In cystic fibrosis, which results from mutations in the chloride channel CFTR, ENaC activity is enhanced, leading to a severe reduction of the fluid level that causes complications and infections in the respiratory airways. Since the flagellum of human sperm has the same internal structure as a cilium, ciliary dysfunction can also be responsible for male infertility. There is an association of primary ciliary dyskinesia with left-right anatomic abnormalities such as situs inversus (the combination of findings is known as Kartagener syndrome), and situs ambiguus (also known as Heterotaxy syndrome). These left-right anatomic abnormalities can also result in congenital heart disease. It has been shown that proper ciliary function is responsible for the normal left-right asymmetry in mammals. The diverse outcomes caused by ciliary dysfunction may result from alleles of different strengths that compromise ciliary functions in different ways or to different extents. Many ciliopathies are inherited in a Mendelian manner, but specific genetic interactions between distinct functional ciliary complexes, such as transition zone and BBS complexes, can alter the phenotypic manifestations of recessive ciliopathies. Some mutations in transition zone proteins can cause specific serious ciliopathies. Extracellular changes Reduction of cilia function can also result from infection. Research into biofilms has shown that bacteria can alter cilia.
A biofilm is a community of bacteria of either a single species or multiple species. The cluster of cells secretes different factors which form an extracellular matrix. Cilia in the respiratory system are known to move mucus and pathogens out of the airways. It has been found that patients with biofilm-positive infections have impaired cilia function. The impairment may present as decreased motion or reduction in the number of cilia. Though these changes result from an external source, they still affect the pathogenicity of the bacteria, the progression of infection, and how it is treated. The transport of the immature egg cell and of the embryo to the uterus for implantation depends on the combination of regulated smooth muscle contractions and ciliary beating. Dysfunction in this transport can result in an ectopic pregnancy, where the embryo implants (usually in the fallopian tube) before reaching its proper destination, the uterus. Many factors can affect this stage, including infection and menstrual cycle hormones. Smoking (which causes inflammation) and infection can reduce the number of cilia, and the ciliary beat can be affected by hormonal changes. Primary cilia in pancreatic cells The pancreas is a mixture of highly differentiated exocrine and endocrine cells. Primary cilia are present in exocrine cells, which are centroacinar and duct cells. Endocrine tissue is composed of different hormone-secreting cells. Insulin-secreting beta cells and glucagon-secreting alpha cells are highly ciliated.
Biology and health sciences
Cell parts
Biology
43126
https://en.wikipedia.org/wiki/Callisto%20%28moon%29
Callisto (moon)
Callisto, or Jupiter IV, is the second-largest moon of Jupiter, after Ganymede. In the Solar System it is the third-largest moon after Ganymede and Saturn's largest moon Titan, and nearly as large as the smallest planet Mercury. Callisto is, with a diameter of about 4,821 km, roughly a third larger than Earth's Moon and orbits Jupiter on average at a distance of about 1,880,000 km, which is about five times further out than the Moon orbiting Earth. It is the outermost of the four large Galilean moons of Jupiter, which were discovered in 1610 with one of the first telescopes, and is today visible from Earth with common binoculars. The surface of Callisto is the oldest and most heavily cratered in the Solar System. Its surface is completely covered with impact craters. It does not show any signatures of subsurface processes such as plate tectonics or volcanism, with no signs that geological activity in general has ever occurred, and is thought to have evolved predominantly under the influence of impacts. Prominent surface features include multi-ring structures, variously shaped impact craters, and chains of craters (catenae) and associated scarps, ridges and deposits. At a small scale, the surface is varied and made up of small, sparkly frost deposits at the tips of high spots, surrounded by a low-lying, smooth blanket of dark material. This is thought to result from the sublimation-driven degradation of small landforms, which is supported by the general deficit of small impact craters and the presence of numerous small knobs, considered to be their remnants. The absolute ages of the landforms are not known. Callisto is composed of approximately equal amounts of rock and ice, with a density of about 1.83 g/cm3, the lowest density and surface gravity of Jupiter's major moons. Compounds detected spectroscopically on the surface include water ice, carbon dioxide, silicates and organic compounds. Investigation by the Galileo spacecraft revealed that Callisto may have a small silicate core and possibly a subsurface ocean of liquid water at depths greater than 100 km. It is not in an orbital resonance like the three other Galilean satellites—Io, Europa and Ganymede—and is thus not appreciably tidally heated. Callisto's rotation is tidally locked to its orbit around Jupiter, so that the same hemisphere always faces the planet, making Jupiter appear to hang nearly motionless in the sky over its near side. It is less affected by Jupiter's magnetosphere than the other inner satellites because of its more remote orbit, located just outside Jupiter's main radiation belt. Callisto is surrounded by an extremely thin atmosphere composed of carbon dioxide and probably molecular oxygen, as well as by a rather intense ionosphere. Callisto is thought to have formed by slow accretion from the disk of the gas and dust that surrounded Jupiter after its formation. Callisto's gradual accretion and the lack of tidal heating meant that not enough heat was available for rapid differentiation. The slow convection in the interior of Callisto, which commenced soon after formation, led to partial differentiation and possibly to the formation of a subsurface ocean at a depth of 100–150 km and a small, rocky core. The likely presence of an ocean within Callisto leaves open the possibility that it could harbor life. However, conditions are thought to be less favorable than on nearby Europa. Various space probes from Pioneers 10 and 11 to Galileo and Cassini have studied Callisto.
Because of its low radiation levels, Callisto has long been considered the most suitable place on which to base possible future crewed missions to study the Jovian system. History Discovery Callisto was discovered independently by Simon Marius and Galileo Galilei in 1610, along with the three other large Jovian moons—Ganymede, Io and Europa. Name Callisto, like all of Jupiter's moons, is named after one of Zeus's many lovers or other sexual partners in Greek mythology. Callisto was a nymph (or, according to some sources, the daughter of Lycaon) who was associated with the goddess of the hunt, Artemis. The name was suggested by Simon Marius soon after Callisto's discovery. Marius attributed the suggestion to Johannes Kepler. However, the names of the Galilean satellites fell into disfavor for a considerable time, and were not revived in common use until the mid-20th century. In much of the earlier astronomical literature, Callisto is referred to by its Roman numeral designation, a system introduced by Galileo, as Jupiter IV or as "the fourth satellite of Jupiter". There is no established English adjectival form of the name. The adjectival form of Greek Καλλιστῴ Kallistōi is Καλλιστῴος Kallistōi-os, from which one might expect Latin Callistōius and English *Callistóian (with 5 syllables), parallel to Sapphóian (4 syllables) for Sapphōi and Letóian for Lētōi. However, the iota subscript is often omitted from such Greek names (cf. Inóan from Īnōi and Argóan from Argōi), and indeed the analogous form Callistoan is found. In Virgil, a second oblique stem appears in Latin: Callistōn-, but the corresponding Callistonian has rarely appeared in English. One also sees ad hoc forms, such as Callistan, Callistian and Callistean. Orbit and rotation Callisto is the outermost of the four Galilean moons of Jupiter. It orbits at a distance of approximately 1,880,000 km (26.3 times the 71,492 km radius of Jupiter itself). This is significantly larger than the orbital radius—1,070,000 km—of the next-closest Galilean satellite, Ganymede. As a result of this relatively distant orbit, Callisto does not participate in mean-motion resonance—in which the three inner Galilean satellites are locked—and probably never has. Callisto is expected to be captured into the resonance in about 1.5 billion years, completing the 1:2:4:8 chain. Like most other regular planetary moons, Callisto's rotation is locked to be synchronous with its orbit. The length of Callisto's day, simultaneously its orbital period, is about 16.7 Earth days. Its orbit is very slightly eccentric and inclined to the Jovian equator, with the eccentricity and inclination changing quasi-periodically due to solar and planetary gravitational perturbations on a timescale of centuries. The ranges of change are 0.0072–0.0076 and 0.20–0.60°, respectively. These orbital variations cause the axial tilt (the angle between the rotational and orbital axes) to vary between 0.4 and 1.6°. The dynamical isolation of Callisto means that it has never been appreciably tidally heated, which has important consequences for its internal structure and evolution. Its distance from Jupiter also means that the charged-particle flux from Jupiter's magnetosphere at its surface is relatively low—about 300 times lower than, for example, that at Europa. Hence, unlike the other Galilean moons, charged-particle irradiation has had a relatively minor effect on Callisto's surface.
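As a rough, illustrative cross-check of the orbital figures quoted above (a semi-major axis of about 1,880,000 km and a period of about 16.7 Earth days), Kepler's third law can be applied using Jupiter's standard gravitational parameter; the GM value in the sketch below is a standard reference figure, not taken from this article.

    # Orbital period of Callisto from Kepler's third law: T = 2*pi*sqrt(a^3 / GM).
    import math

    GM_JUPITER = 1.26687e17   # m^3/s^2, standard gravitational parameter of Jupiter (reference value)
    a = 1.88e9                # m, semi-major axis quoted in the text (~1,880,000 km)

    period_s = 2 * math.pi * math.sqrt(a**3 / GM_JUPITER)
    period_days = period_s / 86400
    print(f"orbital period ~ {period_days:.1f} days")   # ~16.7 days, matching the quoted value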
The radiation level at Callisto's surface is equivalent to a dose of about 0.01 rem (0.1 mSv) per day, which is just over ten times higher than Earth's average background radiation, but less than in low Earth orbit or on Mars. Physical characteristics Composition The average density of Callisto, 1.83 g/cm3, suggests a composition of approximately equal parts of rocky material and water ice, with some additional volatile ices such as ammonia. The mass fraction of ices is 49–55%. The exact composition of Callisto's rock component is not known, but is probably close to the composition of L/LL type ordinary chondrites, which are characterized by less total iron, less metallic iron and more iron oxide than H chondrites. The weight ratio of iron to silicon is 0.9–1.3 in Callisto, whereas the solar ratio is around 1.8. Callisto's surface has an albedo of about 20%. Its surface composition is thought to be broadly similar to its composition as a whole. Near-infrared spectroscopy has revealed the presence of water ice absorption bands at wavelengths of 1.04, 1.25, 1.5, 2.0 and 3.0 micrometers. Water ice seems to be ubiquitous on the surface of Callisto, with a mass fraction of 25–50%. The analysis of high-resolution, near-infrared and UV spectra obtained by the Galileo spacecraft and from the ground has revealed various non-ice materials: magnesium- and iron-bearing hydrated silicates, carbon dioxide, sulfur dioxide, and possibly ammonia and various organic compounds. Spectral data indicate that Callisto's surface is extremely heterogeneous at the small scale. Small, bright patches of pure water ice are intermixed with patches of a rock–ice mixture and extended dark areas made of a non-ice material. The Callistoan surface is asymmetric: the leading hemisphere is darker than the trailing one. This is different from other Galilean satellites, where the reverse is true. The trailing hemisphere of Callisto appears to be enriched in carbon dioxide, whereas the leading hemisphere has more sulfur dioxide. Many fresh impact craters like Lofn also show enrichment in carbon dioxide. Overall, the chemical composition of the surface, especially in the dark areas, may be close to that seen on D-type asteroids, whose surfaces are made of carbonaceous material. Internal structure Callisto's battered surface lies on top of a cold, stiff and icy lithosphere that is between 80 and 150 km thick. A salty ocean 150–200 km deep may lie beneath the crust, as indicated by studies of the magnetic fields around Jupiter and its moons. It was found that Callisto responds to Jupiter's varying background magnetic field like a perfectly conducting sphere; that is, the field cannot penetrate inside Callisto, suggesting a layer of highly conductive fluid within it with a thickness of at least 10 km. The existence of an ocean is more likely if water contains a small amount of ammonia or other antifreeze, up to 5% by weight. In this case the water+ice layer can be as thick as 250–300 km. Failing an ocean, the icy lithosphere may be somewhat thicker, up to about 300 km. Beneath the lithosphere and putative ocean, Callisto's interior appears to be neither entirely uniform nor particularly variable. Galileo orbiter data (especially the dimensionless moment of inertia—0.3549 ± 0.0042—determined during close flybys) suggest that, if Callisto is in hydrostatic equilibrium, its interior is composed of compressed rocks and ices, with the amount of rock increasing with depth due to partial settling of its constituents.
In other words, Callisto may be only partially differentiated. The density and moment of inertia for an equilibrium Callisto are compatible with the existence of a small silicate core in the center of Callisto. The radius of any such core cannot exceed 600 km, and the density may lie between 3.1 and 3.6 g/cm3. In this case, Callisto's interior would be in stark contrast to that of Ganymede, which appears to be fully differentiated. However, a 2011 reanalysis of Galileo data suggests that Callisto is not in hydrostatic equilibrium. In that case, the gravity data may be more consistent with a more thoroughly differentiated Callisto with a hydrated silicate core. Surface features The ancient surface of Callisto is one of the most heavily cratered in the Solar System. In fact, the crater density is close to saturation: any new crater will tend to erase an older one. The large-scale geology is relatively simple; on Callisto there are no large mountains, volcanoes or other endogenic tectonic features. The impact craters and multi-ring structures—together with associated fractures, scarps and deposits—are the only large features to be found on the surface. Callisto's surface can be divided into several geologically different parts: cratered plains, light plains, bright and dark smooth plains, and various units associated with particular multi-ring structures and impact craters. The cratered plains make up most of the surface area and represent the ancient lithosphere, a mixture of ice and rocky material. The light plains include bright impact craters like Burr and Lofn, as well as the effaced remnants of old large craters called palimpsests, the central parts of multi-ring structures, and isolated patches in the cratered plains. These light plains are thought to be icy impact deposits. The bright, smooth plains make up a small fraction of Callisto's surface and are found in the ridge and trough zones of the Valhalla and Asgard formations and as isolated spots in the cratered plains. They were thought to be connected with endogenic activity, but the high-resolution Galileo images showed that the bright, smooth plains correlate with heavily fractured and knobby terrain and do not show any signs of resurfacing. The Galileo images also revealed small, dark, smooth areas with overall coverage less than 10,000 km2, which appear to embay the surrounding terrain. They are possible cryovolcanic deposits. Both the light and the various smooth plains are somewhat younger and less cratered than the background cratered plains. Impact crater diameters seen range from 0.1 km—a limit defined by the imaging resolution—to over 100 km, not counting the multi-ring structures. Small craters, with diameters less than 5 km, have simple bowl or flat-floored shapes. Those 5–40 km across usually have a central peak. Larger impact features, with diameters in the range 25–100 km, have central pits instead of peaks, such as Tindr crater. The largest craters with diameters over 60 km can have central domes, which are thought to result from central tectonic uplift after an impact; examples include Doh and Hár craters. A small number of very large—more than 100 km in diameter—and bright impact craters show anomalous dome geometry. These are unusually shallow and may be a transitional landform to the multi-ring structures, as with the Lofn impact feature. Callisto's craters are generally shallower than those on the Moon. The largest impact features on Callisto's surface are multi-ring basins. Two are enormous. 
Valhalla is the largest, with a bright central region 600 km in diameter, and rings extending as far as 1,800 km from the center. The second largest is Asgard, measuring about 1,600 km in diameter. Multi-ring structures probably originated as a result of a post-impact concentric fracturing of the lithosphere lying on a layer of soft or liquid material, possibly an ocean. The catenae—for example Gomul Catena—are long chains of impact craters that lie in straight lines across the surface. They were probably created by objects that were tidally disrupted as they passed close to Jupiter prior to the impact on Callisto, or by very oblique impacts. A historical example of a disruption was Comet Shoemaker–Levy 9. As mentioned above, small patches of pure water ice with an albedo as high as 80% are found on the surface of Callisto, surrounded by much darker material. High-resolution Galileo images showed the bright patches to be predominantly located on elevated surface features: crater rims, scarps, ridges and knobs. They are likely to be thin water frost deposits. Dark material usually lies in the lowlands surrounding and mantling bright features and appears to be smooth. It often forms patches up to 5 km across within the crater floors and in the intercrater depressions. On a sub-kilometer scale the surface of Callisto is more degraded than the surfaces of other icy Galilean moons. Typically there is a deficit of small impact craters with diameters less than 1 km as compared with, for instance, the dark plains on Ganymede. Instead of small craters, the almost ubiquitous surface features are small knobs and pits. The knobs are thought to represent remnants of crater rims degraded by an as-yet uncertain process. The most likely candidate process is the slow sublimation of ice, which is enabled by a temperature of up to 165 K, reached at a subsolar point. Such sublimation of water or other volatiles from the dirty ice that is the bedrock causes its decomposition. The non-ice remnants form debris avalanches descending from the slopes of the crater walls. Such avalanches are often observed near and inside impact craters and termed "debris aprons". Sometimes crater walls are cut by sinuous valley-like incisions called "gullies", which resemble certain Martian surface features. In the ice sublimation hypothesis, the low-lying dark material is interpreted as a blanket of primarily non-ice debris, which originated from the degraded rims of craters and has covered a predominantly icy bedrock. The relative ages of the different surface units on Callisto can be determined from the density of impact craters on them. The older the surface, the denser the crater population. Absolute dating has not been carried out, but based on theoretical considerations, the cratered plains are thought to be ~4.5 billion years old, dating back almost to the formation of the Solar System. The ages of multi-ring structures and impact craters depend on chosen background cratering rates and are estimated by different authors to vary between 1 and 4 billion years. Atmosphere and ionosphere Callisto has a very tenuous atmosphere composed of carbon dioxide and probably oxygen. The carbon dioxide was detected by the Galileo Near Infrared Mapping Spectrometer (NIMS) from its absorption feature near a wavelength of 4.2 micrometers. The surface pressure is estimated to be 7.5 picobar (0.75 μPa) and the particle density about 4 × 10^8 cm−3. 
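Assuming the gas behaves ideally, the quoted pressure and particle density can be cross-checked with n = P / (k_B T). The sketch below is not a published calculation, and the surface temperatures tried are assumptions (Callisto's dayside is very roughly 130–165 K).

```python
# Consistency check (not from the article): relate the quoted surface pressure
# to a particle number density via the ideal gas law, n = P / (k_B * T).

K_B = 1.380649e-23      # J/K, Boltzmann constant
P_SURFACE = 0.75e-6     # Pa (7.5 picobar, as quoted above)

for temp in (130.0, 150.0, 165.0):           # K, assumed surface temperatures
    n_m3 = P_SURFACE / (K_B * temp)           # number density in m^-3
    n_cm3 = n_m3 * 1e-6                       # convert to cm^-3
    print(f"T = {temp:5.0f} K  ->  n ~ {n_cm3:.1e} cm^-3")
# Every assumed temperature gives a few times 10^8 cm^-3,
# consistent with the ~4 x 10^8 cm^-3 figure above.
```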
Because such a thin atmosphere would be lost in only about four years (see atmospheric escape), it must be constantly replenished, possibly by slow sublimation of carbon dioxide ice from Callisto's icy crust, which would be compatible with the sublimation–degradation hypothesis for the formation of the surface knobs. Callisto's ionosphere was first detected during Galileo flybys; its high electron density of 7–17 × 10^4 cm−3 cannot be explained by the photoionization of the atmospheric carbon dioxide alone. Hence, it is suspected that the atmosphere of Callisto is actually dominated by molecular oxygen (in amounts 10–100 times greater than CO2). However, oxygen has not yet been directly detected in the atmosphere of Callisto. Observations with the Hubble Space Telescope (HST) placed an upper limit on its possible concentration in the atmosphere, based on lack of detection, which is still compatible with the ionospheric measurements. At the same time, HST was able to detect condensed oxygen trapped on the surface of Callisto. Atomic hydrogen has also been detected in Callisto's atmosphere via recent analysis of 2001 Hubble Space Telescope data. Spectral images taken on 15 and 24 December 2001 were re-examined, revealing a faint signal of scattered light that indicates a hydrogen corona. The observed brightness from the scattered sunlight in Callisto's hydrogen corona is approximately two times larger when the leading hemisphere is observed. This asymmetry may originate from a different hydrogen abundance between the leading and trailing hemispheres. However, this hemispheric difference in Callisto's hydrogen corona brightness is likely to originate from the extinction of the signal in Earth's geocorona, which is greater when the trailing hemisphere is observed. Origin and evolution The partial differentiation of Callisto (inferred e.g. from moment of inertia measurements) means that it has never been heated enough to melt its ice component. Therefore, the most favorable model of its formation is a slow accretion in the low-density Jovian subnebula—a disk of the gas and dust that existed around Jupiter after its formation. Such a prolonged accretion stage would allow cooling to largely keep up with the heat accumulation caused by impacts, radioactive decay and contraction, thereby preventing melting and fast differentiation. The allowable timescale for the formation of Callisto then lies in the range 0.1 million–10 million years. The further evolution of Callisto after accretion was determined by the balance of the radioactive heating, cooling through thermal conduction near the surface, and solid state or subsolidus convection in the interior. The details of subsolidus convection in the ice are the main source of uncertainty in the models of all icy moons. It is known to develop when the temperature is sufficiently close to the melting point, due to the temperature dependence of ice viscosity. Subsolidus convection in icy bodies is a slow process with ice motions of the order of 1 centimeter per year, but is, in fact, a very effective cooling mechanism on long timescales. It is thought to proceed in the so-called stagnant lid regime, where a stiff, cold outer layer of Callisto conducts heat without convection, whereas the ice beneath it convects in the subsolidus regime. For Callisto, the outer conductive layer corresponds to the cold and rigid lithosphere with a thickness of about 100 km. Its presence would explain the lack of any signs of endogenic activity on the Callistoan surface. 
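Pressure at depth controls which ice phases occur and where melting is easiest. As a rough, hedged estimate (not from the article's sources), the depth at which hydrostatic pressure reaches the roughly 2,070 bar minimum of the ice I melting curve quoted below can be found from P = rho * g * h, assuming a constant surface gravity of about 1.24 m/s2 and using the bulk density given earlier as a stand-in for the outer layers.

```python
# Back-of-envelope estimate (not from the article): depth inside Callisto at
# which the pressure reaches ~2,070 bar, where the ice I melting point drops
# to its 251 K minimum (figure discussed in the following paragraph).
# Assumptions: constant gravity with depth and a uniform near-surface density.

G_SURFACE = 1.24     # m/s^2, approximate surface gravity of Callisto (assumed constant)
RHO = 1830.0         # kg/m^3, bulk density quoted earlier, used for the outer layers
P_TARGET = 2.07e8    # Pa, about 2,070 bar

depth_m = P_TARGET / (RHO * G_SURFACE)
print(f"depth where P reaches ~2,070 bar: {depth_m / 1000:.0f} km")
# Roughly 90 km under these assumptions, comparable to the 100-200 km layer
# where models place temperatures near the anomalous melting point.
```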
The convection in the interior parts of Callisto may be layered, because under the high pressures found there, water ice exists in different crystalline phases beginning from the ice I on the surface to ice VII in the center. The early onset of subsolidus convection in the Callistoan interior could have prevented large-scale ice melting and any resulting differentiation that would have otherwise formed a large rocky core and icy mantle. Due to the convection process, however, very slow and partial separation and differentiation of rocks and ices inside Callisto has been proceeding on timescales of billions of years and may be continuing to this day. The current understanding of the evolution of Callisto allows for the existence of a layer or "ocean" of liquid water in its interior. This is connected with the anomalous behavior of ice I phase's melting temperature, which decreases with pressure, achieving temperatures as low as 251 K at 2,070 bar (207 MPa). In all realistic models of Callisto the temperature in the layer between 100 and 200 km in depth is very close to, or exceeds slightly, this anomalous melting temperature. The presence of even small amounts of ammonia—about 1–2% by weight—almost guarantees the liquid's existence because ammonia would lower the melting temperature even further. Although Callisto is very similar in bulk properties to Ganymede, it apparently had a much simpler geological history. The surface appears to have been shaped mainly by impacts and other exogenic forces. Unlike neighboring Ganymede with its grooved terrain, there is little evidence of tectonic activity. Explanations that have been proposed for the contrasts in internal heating and consequent differentiation and geologic activity between Callisto and Ganymede include differences in formation conditions, the greater tidal heating experienced by Ganymede, and the more numerous and energetic impacts that would have been suffered by Ganymede during the Late Heavy Bombardment. The relatively simple geological history of Callisto provides planetary scientists with a reference point for comparison with other more active and complex worlds. Habitability It is speculated that there could be life in Callisto's subsurface ocean. Like Europa and Ganymede, as well as Saturn's moons Enceladus, Dione and Titan and Neptune's moon Triton, a possible subsurface ocean might be composed of salt water. It is possible that halophiles could thrive in the ocean. As with Europa and Ganymede, the idea has been raised that habitable conditions and even extraterrestrial microbial life may exist in the salty ocean under the Callistoan surface. However, the environmental conditions necessary for life appear to be less favorable on Callisto than on Europa. The principal reasons are the lack of contact with rocky material and the lower heat flux from the interior of Callisto. Callisto's ocean is heated only by radioactive decay, while Europa's is also heated by tidal energy, as it is much closer to Jupiter. It is thought that of all of Jupiter's moons, Europa has the greatest chance of supporting microbial life. Exploration Past The Pioneer 10 and Pioneer 11 Jupiter encounters in the early 1970s contributed little new information about Callisto in comparison with what was already known from Earth-based observations. The real breakthrough happened later with the Voyager 1 and Voyager 2 flybys in 1979. 
They imaged more than half of the Callistoan surface with a resolution of 1–2 km, and precisely measured its temperature, mass and shape. A second round of exploration lasted from 1994 to 2003, when the Galileo spacecraft had eight close encounters with Callisto, the last flyby during the C30 orbit in 2001 came as close as 138 km to the surface. The Galileo orbiter completed the global imaging of the surface and delivered a number of pictures with a resolution as high as 15 meters of selected areas of Callisto. In 2000, the Cassini spacecraft en route to Saturn acquired high-quality infrared spectra of the Galilean satellites including Callisto. In February–March 2007, the New Horizons probe on its way to Pluto obtained new images and spectra of Callisto. Future exploration Callisto will be visited by three spacecraft in the near future. The European Space Agency's Jupiter Icy Moons Explorer (JUICE), which launched on 14 April 2023, will perform 21 close flybys of Callisto between 2031 and 2034. NASA's Europa Clipper, which launched on 14 October 2024, will conduct nine close flybys of Callisto beginning in 2030. China's CNSA Tianwen-4 is planned to launch to Jupiter around 2030 before entering orbit around Callisto. Old proposals Formerly proposed for a launch in 2020, the Europa Jupiter System Mission (EJSM) was a joint NASA/ESA proposal for exploration of Jupiter's moons. In February 2009 it was announced that ESA/NASA had given this mission priority ahead of the Titan Saturn System Mission. At the time ESA's contribution still faced funding competition from other ESA projects. EJSM consisted of the NASA-led Jupiter Europa Orbiter, the ESA-led Jupiter Ganymede Orbiter and possibly a JAXA-led Jupiter Magnetospheric Orbiter. Potential crewed exploration and habitation In 2003 NASA conducted a conceptual study called Human Outer Planets Exploration (HOPE) regarding the future human exploration of the outer Solar System. The target chosen to consider in detail was Callisto. The study proposed a possible surface base on Callisto that would produce rocket propellant for further exploration of the Solar System. Advantages of a base on Callisto include low radiation (due to its distance from Jupiter) and geological stability. Such a base could facilitate remote exploration of Europa, or be an ideal location for a Jovian system waystation servicing spacecraft heading farther into the outer Solar System, using a gravity assist from a close flyby of Jupiter after departing Callisto. In December 2003, NASA reported that a crewed mission to Callisto might be possible in the 2040s.
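The "low radiation" advantage cited by the HOPE study can be put in rough numbers. The comparison below is a sketch: the Callisto figure is the ~0.1 mSv per day dose quoted earlier in this article, while the Earth-background and ISS values are approximate, commonly cited reference numbers rather than figures from the article.

```python
# Rough comparison (illustrative only) of the surface dose rate quoted for
# Callisto with approximate reference dose rates on Earth and in low Earth orbit.

CALLISTO_MSV_PER_DAY = 0.1            # from the figure quoted earlier in the article
EARTH_BACKGROUND_MSV_PER_YEAR = 2.4   # approximate global-average natural background
ISS_MSV_PER_YEAR = 150.0              # very rough order-of-magnitude figure for ISS crews

callisto_per_year = CALLISTO_MSV_PER_DAY * 365.25
print(f"Callisto surface : ~{callisto_per_year:.0f} mSv/yr")
print(f"Earth background : ~{EARTH_BACKGROUND_MSV_PER_YEAR} mSv/yr "
      f"(about {callisto_per_year / EARTH_BACKGROUND_MSV_PER_YEAR:.0f}x lower than Callisto)")
print(f"ISS crew dose    : ~{ISS_MSV_PER_YEAR:.0f} mSv/yr "
      f"(about {ISS_MSV_PER_YEAR / callisto_per_year:.1f}x higher than Callisto)")
```

Under these assumptions a year on Callisto's surface corresponds to a few tens of millisievert, far below typical doses accumulated by crews in low Earth orbit.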
Europa (moon)
Europa, or Jupiter II, is the smallest of the four Galilean moons orbiting Jupiter, and the sixth-closest to the planet of all the 95 known moons of Jupiter. It is also the sixth-largest moon in the Solar System. Europa was discovered independently by Simon Marius and Galileo Galilei and was named (by Marius) after Europa, the Phoenician mother of King Minos of Crete and lover of Zeus (the Greek equivalent of the Roman god Jupiter). Slightly smaller than Earth's Moon, Europa is made of silicate rock and has a water-ice crust and probably an iron–nickel core. It has a very thin atmosphere, composed primarily of oxygen. Its geologically young white-beige surface is striated by light tan cracks and streaks, with very few impact craters. In addition to Earth-bound telescope observations, Europa has been examined by a succession of space-probe flybys, the first occurring in the early 1970s. In September 2022, the Juno spacecraft flew within about 320 km (200 miles) of Europa for a more recent close-up view. Europa has the smoothest surface of any known solid object in the Solar System. The apparent youth and smoothness of the surface are due to a water ocean beneath the surface, which could conceivably harbor extraterrestrial life, although such life would most likely be that of single-celled organisms and bacteria-like creatures. The predominant model suggests that heat from tidal flexing causes the ocean to remain liquid and drives ice movement similar to plate tectonics, absorbing chemicals from the surface into the ocean below. Sea salt from a subsurface ocean may be coating some geological features on Europa, suggesting that the ocean is interacting with the sea floor. This may be important in determining whether Europa could be habitable. In addition, the Hubble Space Telescope detected water vapor plumes similar to those observed on Saturn's moon Enceladus, which are thought to be caused by erupting cryogeysers. In May 2018, astronomers provided supporting evidence of water plume activity on Europa, based on an updated analysis of data obtained from the Galileo space probe, which orbited Jupiter from 1995 to 2003. Such plume activity could help researchers in a search for life from the subsurface Europan ocean without having to land on the moon. In March 2024, astronomers reported that the surface of Europa may have much less oxygen than previously inferred. The Galileo mission, launched in 1989, provides the bulk of current data on Europa. No spacecraft has yet landed on Europa, although there have been several proposed exploration missions. The European Space Agency's Jupiter Icy Moons Explorer (JUICE) is a mission to Ganymede, launched on 14 April 2023, that will include two flybys of Europa. NASA's Europa Clipper was launched on 14 October 2024. Discovery and naming Europa, along with Jupiter's three other large moons, Io, Ganymede, and Callisto, was discovered by Galileo Galilei on 8 January 1610, and possibly independently by Simon Marius. On 7 January, Galileo had observed Io and Europa together using a 20×-magnification refracting telescope at the University of Padua, but the low resolution could not separate the two objects. The following night, he saw Io and Europa for the first time as separate bodies. The moon is the namesake of Europa, in Greek mythology the daughter of the Phoenician king of Tyre. Like all the Galilean satellites, Europa is named after a lover of Zeus, the Greek counterpart of Jupiter. Europa was courted by Zeus and became the queen of Crete. 
The naming scheme was suggested by Simon Marius, who attributed the proposal to Johannes Kepler. The names fell out of favor for a considerable time and were not revived in general use until the mid-20th century. In much of the earlier astronomical literature, Europa is simply referred to by its Roman numeral designation as Jupiter II (a system also introduced by Galileo) or as the "second satellite of Jupiter". In 1892, the discovery of Amalthea, whose orbit lay closer to Jupiter than those of the Galilean moons, pushed Europa to the third position. The Voyager probes discovered three more inner satellites in 1979, so Europa is now counted as Jupiter's sixth satellite, though it is still referred to as Jupiter II. The adjectival form has stabilized as Europan. Orbit and rotation Europa orbits Jupiter in just over three and a half days, with an orbital radius of about 670,900 km. With an orbital eccentricity of only 0.009, the orbit itself is nearly circular, and the orbital inclination relative to Jupiter's equatorial plane is small, at 0.470°. Like its fellow Galilean satellites, Europa is tidally locked to Jupiter, with one hemisphere of Europa constantly facing Jupiter. Because of this, there is a sub-Jovian point on Europa's surface, from which Jupiter would appear to hang directly overhead. Europa's prime meridian is a line passing through this point. Research suggests that tidal locking may not be full, as a non-synchronous rotation has been proposed: Europa spins faster than it orbits, or at least did so in the past. This suggests an asymmetry in internal mass distribution and that a layer of subsurface liquid separates the icy crust from the rocky interior. The slight eccentricity of Europa's orbit, maintained by gravitational disturbances from the other Galileans, causes Europa's sub-Jovian point to oscillate around a mean position. As Europa comes slightly nearer to Jupiter, Jupiter's gravitational attraction increases, causing Europa to elongate towards and away from it. As Europa moves slightly away from Jupiter, Jupiter's gravitational force decreases, causing Europa to relax back into a more spherical shape, and creating tides in its ocean. The orbital eccentricity of Europa is continuously pumped by its mean-motion resonance with Io. Thus, the tidal flexing kneads Europa's interior and gives it a source of heat, possibly allowing its ocean to stay liquid while driving subsurface geological processes. The ultimate source of this energy is Jupiter's rotation, which is tapped by Io through the tides it raises on Jupiter and is transferred to Europa and Ganymede by the orbital resonance. Analysis of the unique cracks lining Europa yielded evidence that it likely spun around a tilted axis at some point in time. If correct, this would explain many of Europa's features. Europa's immense network of crisscrossing cracks serves as a record of the stresses caused by massive tides in its global ocean. Europa's tilt could influence calculations of how much of its history is recorded in its frozen shell, how much heat is generated by tides in its ocean, and even how long the ocean has been liquid. Its ice layer must stretch to accommodate these changes. When there is too much stress, it cracks. A tilt in Europa's axis could suggest that its cracks may be much more recent than previously thought. The reason for this is that the direction of the spin pole may change by as much as a few degrees per day, completing one precession period over several months. 
A tilt could also affect estimates of the age of Europa's ocean. Tidal forces are thought to generate the heat that keeps Europa's ocean liquid, and a tilt in the spin axis would cause more heat to be generated by tidal forces. Such additional heat would have allowed the ocean to remain liquid for a longer time. However, it has not yet been determined when this hypothesized shift in the spin axis might have occurred. Physical characteristics Europa is slightly smaller than Earth's Moon. At just over in diameter, it is the sixth-largest moon and fifteenth-largest object in the Solar System. Though by a wide margin the least massive of the Galilean satellites, it is nonetheless more massive than all known moons in the Solar System smaller than itself combined. Its bulk density suggests that it is similar in composition to terrestrial planets, being primarily composed of silicate rock. Internal structure It is estimated that Europa has an outer layer of water around thick – a part frozen as its crust and a part as a liquid ocean underneath the ice. Recent magnetic-field data from the Galileo orbiter showed that Europa has an induced magnetic field through interaction with Jupiter's, which suggests the presence of a subsurface conductive layer. This layer is likely to be a salty liquid-water ocean. Portions of the crust are estimated to have undergone a rotation of nearly 80°, nearly flipping over (see true polar wander), which would be unlikely if the ice were solidly attached to the mantle. Europa probably contains a metallic iron core. Surface features Europa is the smoothest known object in the Solar System, lacking large-scale features such as mountains and craters. The prominent markings crisscrossing Europa appear to be mainly albedo features that emphasize low topography. There are few craters on Europa, because its surface is tectonically too active and therefore young. Its icy crust has an albedo (light reflectivity) of 0.64, one of the highest of any moon. This indicates a young and active surface: based on estimates of the frequency of cometary bombardment that Europa experiences, the surface is about 20 to 180 million years old. There is no scientific consensus about the explanation for Europa's surface features. It has been postulated that Europa's equator may be covered in icy spikes called penitentes, which may be up to 15 meters high. Their formation is due to direct overhead sunlight near the equator causing the ice to sublime, forming vertical cracks. Although the imaging available from the Galileo orbiter does not have the resolution for confirmation, radar and thermal data are consistent with this speculation. The ionizing radiation level at Europa's surface is equivalent to a daily dose of about 5.4 Sv (540 rem), an amount that would cause severe illness or death in human beings exposed for a single Earth day (24 hours). A Europan day is about 3.5 times as long as an Earth day. Lineae Europa's most striking surface features are a series of dark streaks crisscrossing the entire globe, called lineae. Close examination shows that the edges of Europa's crust on either side of the cracks have moved relative to each other. The larger bands are more than across, often with dark, diffuse outer edges, regular striations, and a central band of lighter material. The most likely hypothesis is that the lineae on Europa were produced by a series of eruptions of warm ice as Europa's crust slowly spreads open to expose warmer layers beneath. 
The effect would have been similar to that seen on Earth's oceanic ridges. These various fractures are thought to have been caused in large part by the tidal flexing exerted by Jupiter. Because Europa is tidally locked to Jupiter, and therefore always maintains approximately the same orientation towards Jupiter, the stress patterns should form a distinctive and predictable pattern. However, only the youngest of Europa's fractures conform to the predicted pattern; other fractures appear to occur at increasingly different orientations the older they are. This could be explained if Europa's surface rotates slightly faster than its interior, an effect that is possible due to the subsurface ocean mechanically decoupling Europa's surface from its rocky mantle and the effects of Jupiter's gravity tugging on Europa's outer ice crust. Comparisons of Voyager and Galileo spacecraft photos serve to put an upper limit on this hypothetical slippage. A full revolution of the outer rigid shell relative to the interior of Europa takes at least 12,000 years. Studies of Voyager and Galileo images have revealed evidence of subduction on Europa's surface, suggesting that, just as the cracks are analogous to ocean ridges, so plates of icy crust analogous to tectonic plates on Earth are recycled into the molten interior. This evidence of both crustal spreading at bands and convergence at other sites suggests that Europa may have active plate tectonics, similar to Earth. However, the physics driving these plate tectonics are not likely to resemble those driving terrestrial plate tectonics, as the forces resisting potential Earth-like plate motions in Europa's crust are significantly stronger than the forces that could drive them. Chaos and lenticulae Other features present on Europa are circular and elliptical lenticulae (Latin for "freckles"). Many are domes, some are pits and some are smooth, dark spots. Others have a jumbled or rough texture. The dome tops look like pieces of the older plains around them, suggesting that the domes formed when the plains were pushed up from below. One hypothesis states that these lenticulae were formed by diapirs of warm ice rising up through the colder ice of the outer crust, much like magma chambers in Earth's crust. The smooth, dark spots could be formed by meltwater released when the warm ice breaks through the surface. The rough, jumbled lenticulae (called regions of "chaos"; for example, Conamara Chaos) would then be formed from many small fragments of crust, embedded in hummocky, dark material, appearing like icebergs in a frozen sea. An alternative hypothesis suggests that lenticulae are actually small areas of chaos and that the claimed pits, spots and domes are artefacts resulting from the over-interpretation of early, low-resolution Galileo images. The implication is that the ice is too thin to support the convective diapir model of feature formation. In November 2011, a team of researchers, including researchers at the University of Texas at Austin, presented evidence suggesting that many "chaos terrain" features on Europa sit atop vast lakes of liquid water. These lakes would be entirely encased in Europa's icy outer shell and distinct from a liquid ocean thought to exist farther down beneath the ice shell. Full confirmation of the lakes' existence will require a space mission designed to probe the ice shell either physically or indirectly, e.g. using radar. 
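As a rough illustration of what such a radar sounding involves (an ice-penetrating radar is also part of the Europa Clipper payload described later in this article), the two-way travel time of a pulse through an ice shell is t = 2d/v, where v = c / sqrt(eps_r). The permittivity of cold water ice and the trial shell thicknesses below are assumptions, not values from the article.

```python
# Illustrative sketch (not from the article): two-way radar travel time through
# an ice shell, assuming a relative permittivity typical of cold water ice.

C = 299_792_458.0            # m/s, speed of light in vacuum
EPS_ICE = 3.15               # assumed relative permittivity of cold water ice
v_ice = C / EPS_ICE ** 0.5   # wave speed in the ice, about 1.7e8 m/s

for thickness_km in (5, 15, 30):                 # assumed ice-shell thicknesses
    t_us = 2 * thickness_km * 1e3 / v_ice * 1e6  # two-way travel time in microseconds
    print(f"{thickness_km:2d} km of ice -> ~{t_us:.0f} microseconds two-way")
```

Echo delays of tens to a few hundred microseconds are therefore the kind of signal a sounding radar would need to resolve, depending on how thick the shell turns out to be.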
Chaos features may also be a result of increased melting of the ice shell and deposition of marine ice at low latitudes caused by heterogeneous heating. Work published by researchers from Williams College suggests that chaos terrain may represent sites where impacting comets penetrated through the ice crust and into an underlying ocean. Subsurface ocean The scientific consensus is that a layer of liquid water exists beneath Europa's surface, and that heat from tidal flexing allows the subsurface ocean to remain liquid. Europa's surface temperature averages about at the equator and only at the poles, keeping Europa's icy crust as hard as granite. The first hints of a subsurface ocean came from theoretical considerations of tidal heating (a consequence of Europa's slightly eccentric orbit and orbital resonance with the other Galilean moons). Galileo imaging team members argue for the existence of a subsurface ocean from analysis of Voyager and Galileo images. The most dramatic example is "chaos terrain", a common feature on Europa's surface that some interpret as a region where the subsurface ocean has melted through the icy crust. This interpretation is controversial. Most geologists who have studied Europa favor what is commonly called the "thick ice" model, in which the ocean has rarely, if ever, directly interacted with the present surface. The best evidence for the thick-ice model is a study of Europa's large craters. The largest impact structures are surrounded by concentric rings and appear to be filled with relatively flat, fresh ice; based on this and on the calculated amount of heat generated by Europan tides, it is estimated that the outer crust of solid ice is approximately thick, including a ductile "warm ice" layer, which could mean that the liquid ocean underneath may be about deep. This leads to a volume of Europa's oceans of 3 × 10^18 m3, about two to three times the volume of Earth's oceans. The thin-ice model suggests that Europa's ice shell may be only a few kilometers thick. However, most planetary scientists conclude that this model considers only those topmost layers of Europa's crust that behave elastically when affected by Jupiter's tides. One example is flexure analysis, in which Europa's crust is modeled as a plane or sphere weighted and flexed by a heavy load. Models such as this suggest the outer elastic portion of the ice crust could be as thin as . If the ice shell of Europa is really only a few kilometers thick, this "thin ice" model would mean that regular contact of the liquid interior with the surface could occur through open ridges, causing the formation of areas of chaotic terrain. Large impacts going fully through the ice crust would also be a way that the subsurface ocean could be exposed. Composition The Galileo orbiter found that Europa has a weak magnetic moment, which is induced by the varying part of the Jovian magnetic field. The field strength at the magnetic equator (about 120 nT) created by this magnetic moment is about one-sixth the strength of Ganymede's field and six times the value of Callisto's. The existence of the induced moment requires a layer of a highly electrically conductive material in Europa's interior. The most plausible candidate for this role is a large subsurface ocean of liquid saltwater. Since the Voyager spacecraft flew past Europa in 1979, scientists have worked to understand the composition of the reddish-brown material that coats fractures and other geologically youthful features on Europa's surface. 
Spectrographic evidence suggests that the darker, reddish streaks and features on Europa's surface may be rich in salts such as magnesium sulfate, deposited by evaporating water that emerged from within. Sulfuric acid hydrate is another possible explanation for the contaminant observed spectroscopically. In either case, because these materials are colorless or white when pure, some other material must also be present to account for the reddish color, and sulfur compounds are suspected. Another hypothesis for the colored regions is that they are composed of abiotic organic compounds collectively called tholins. The morphology of Europa's impact craters and ridges is suggestive of fluidized material welling up from the fractures where pyrolysis and radiolysis take place. In order to generate colored tholins on Europa, there must be a source of materials (carbon, nitrogen, and water) and a source of energy to make the reactions occur. Impurities in the water ice crust of Europa are presumed both to emerge from the interior as cryovolcanic events that resurface the body, and to accumulate from space as interplanetary dust. Tholins bring important astrobiological implications, as they may play a role in prebiotic chemistry and abiogenesis. The presence of sodium chloride in the internal ocean has been suggested by a 450 nm absorption feature, characteristic of irradiated NaCl crystals, that has been spotted in HST observations of the chaos regions, presumed to be areas of recent subsurface upwelling. The subterranean ocean of Europa contains carbon, which was observed on the surface ice as a concentration of carbon dioxide within Tara Regio, a geologically recently resurfaced terrain. Sources of heat Europa receives thermal energy from tidal heating, which occurs through the tidal friction and tidal flexing processes caused by tidal acceleration: orbital and rotational energy are dissipated as heat in the core of the moon, the internal ocean, and the ice crust. Tidal friction Ocean tides are converted to heat by frictional losses in the oceans and their interaction with the solid bottom and with the top ice crust. In late 2008, it was suggested that Jupiter may keep Europa's oceans warm by generating large planetary tidal waves on Europa because of its small but non-zero obliquity. This generates so-called Rossby waves that travel quite slowly, at just a few kilometers per day, but can generate significant kinetic energy. For the current axial tilt estimate of 0.1 degree, the resonance from Rossby waves would contain 7.3 × 10^18 J of kinetic energy, which is two thousand times larger than that of the flow excited by the dominant tidal forces. Dissipation of this energy could be the principal heat source of Europa's ocean. Tidal flexing Tidal flexing kneads Europa's interior and ice shell, which becomes a source of heat. Depending on the amount of tilt, the heat generated by the ocean flow could be 100 to thousands of times greater than the heat generated by the flexing of Europa's rocky core in response to the gravitational pull from Jupiter and the other moons circling that planet. Europa's seafloor could be heated by the moon's constant flexing, driving hydrothermal activity similar to undersea volcanoes in Earth's oceans. Experiments and ice modeling published in 2016 indicate that tidal flexing dissipation can generate one order of magnitude more heat in Europa's ice than scientists had previously assumed. 
Their results indicate that most of the heat generated by the ice actually comes from the ice's crystalline structure (lattice) as a result of deformation, and not friction between the ice grains. The greater the deformation of the ice sheet, the more heat is generated. Radioactive decay In addition to tidal heating, the interior of Europa could also be heated by the decay of radioactive material (radiogenic heating) within the rocky mantle. But the models and values observed are one hundred times higher than those that could be produced by radiogenic heating alone, thus implying that tidal heating has a leading role in Europa. Plumes The Hubble Space Telescope acquired an image of Europa in 2012 that was interpreted to be a plume of water vapour erupting from near its south pole. The image suggests the plume may be high, or more than 20 times the height of Mt. Everest., though recent observations and modeling suggest that typical Europan plumes may be much smaller. It has been suggested that if plumes exist, they are episodic and likely to appear when Europa is at its farthest point from Jupiter, in agreement with tidal force modeling predictions. Additional imaging evidence from the Hubble Space Telescope was presented in September 2016. In May 2018, astronomers provided supporting evidence of water plume activity on Europa, based on an updated critical analysis of data obtained from the Galileo space probe, which orbited Jupiter between 1995 and 2003. Galileo flew by Europa in 1997 within of the moon's surface and the researchers suggest it may have flown through a water plume. Such plume activity could help researchers in a search for life from the subsurface Europan ocean without having to land on the moon. The tidal forces are about 1,000 times stronger than the Moon's effect on Earth. The only other moon in the Solar System exhibiting water vapor plumes is Enceladus. The estimated eruption rate at Europa is about 7000 kg/s compared to about 200 kg/s for the plumes of Enceladus. If confirmed, it would open the possibility of a flyby through the plume and obtain a sample to analyze in situ without having to use a lander and drill through kilometres of ice. In November 2020, a study was published in the peer-reviewed scientific journal Geophysical Research Letters suggesting that the plumes may originate from water within the crust of Europa as opposed to its subsurface ocean. The study's model, using images from the Galileo space probe, proposed that a combination of freezing and pressurization may result in at least some of the cryovolcanic activity. The pressure generated by migrating briny water pockets would thus, eventually, burst through the crust, thereby creating these plumes. The hypothesis that cryovolcanism on Europa could be triggered by freezing and pressurization of liquid pockets in the icy crust was first proposed by Sarah Fagents at the University of Hawai'i at Mānoa, who in 2003, was the first to model and publish work on this process. A press release from NASA's Jet Propulsion Laboratory referencing the November 2020 study suggested that plumes sourced from migrating liquid pockets could potentially be less hospitable to life. This is due to a lack of substantial energy for organisms to thrive off, unlike proposed hydrothermal vents on the subsurface ocean floor. Atmosphere The atmosphere of Europa can be categorized as thin and tenuous (often called an exosphere), primarily composed of oxygen and trace amounts of water vapor. 
However, this quantity of oxygen is produced in a non-biological manner. Europa's surface is icy and consequently very cold; as solar ultraviolet radiation and charged particles (ions and electrons) from the Jovian magnetospheric environment collide with Europa's surface, water vapor is created and instantaneously separated into oxygen and hydrogen constituents. The hydrogen is light enough to escape Europa's gravity, leaving behind only the oxygen. The surface-bounded atmosphere forms through radiolysis, the dissociation of molecules through radiation. This accumulated oxygen atmosphere can get to a height of above the surface of Europa. Molecular oxygen is the densest component of the atmosphere because it has a long lifetime; after returning to the surface, it does not stick (freeze) like a water or hydrogen peroxide molecule but rather desorbs from the surface and starts another ballistic arc. Molecular hydrogen never reaches the surface, as it is light enough to escape Europa's surface gravity. Europa is one of the few moons in the Solar System with a quantifiable atmosphere, along with Titan, Io, Triton, Ganymede and Callisto. Europa is also one of several moons in the Solar System with very large quantities of ice (volatiles), otherwise known as "icy moons". Europa is also considered to be geologically active due to the constant release of hydrogen-oxygen mixtures into space. As a result of the moon's particle venting, the atmosphere requires continuous replenishment. Europa also contains a small magnetosphere (approximately 25% of Ganymede's). However, this magnetosphere varies in size as Europa orbits through Jupiter's magnetic field. This confirms that a conductive element, such as a large ocean, likely lies below its icy surface. Multiple studies of Europa's atmosphere have concluded that not all oxygen molecules are released into the atmosphere. This unknown percentage of oxygen may be absorbed into the surface and sink into the subsurface. Because the surface may interact with the subsurface ocean (considering the geological discussion above), this molecular oxygen may make its way to the ocean, where it could aid in biological processes. One estimate suggests that, given the turnover rate inferred from the apparent ~0.5 Gyr maximum age of Europa's surface ice, subduction of radiolytically generated oxidizing species might well lead to oceanic free oxygen concentrations that are comparable to those in terrestrial deep oceans. Through the slow release of oxygen and hydrogen, a neutral torus around Europa's orbital plane is formed. This "neutral cloud" has been detected by both the Cassini and Galileo spacecraft, and has a greater content (number of atoms and molecules) than the neutral cloud surrounding Jupiter's inner moon Io. This torus was officially confirmed using Energetic Neutral Atom (ENA) imaging. Europa's torus ionizes through the process of neutral particles exchanging electrons with its charged particles. Since Jupiter's magnetic field rotates faster than Europa orbits, these ions are left in the path of the field, forming a plasma. It has been hypothesized that these ions are responsible for the plasma within Jupiter's magnetosphere. On 4 March 2024, astronomers reported that the surface of Europa may have much less oxygen than previously inferred. Discovery of atmosphere The atmosphere of Europa was first discovered in 1995 by astronomers D. T. 
Hall and collaborators using the Goddard High Resolution Spectrograph instrument of the Hubble Space Telescope. This observation was further supported in 1997 by the Galileo orbiter during its mission within the Jovian system. The Galileo orbiter performed three radio occultation events of Europa, where the probe's radio contact with Earth was temporarily blocked by passing behind Europa. By analyzing the effects Europa's sparse atmosphere had on the radio signal just before and after the occultation, for a total of six events, a team of astronomers led by A. J. Kliore established the presence of an ionized layer in Europa's atmosphere. Climate and weather Despite the presence of a gas torus, Europa has no weather producing clouds. As a whole, Europa has no wind, precipitation, or presence of sky color as its gravity is too low to hold an atmosphere substantial enough for those features. Europa's gravity is approximately 13% of Earth's. The temperature on Europa varies from −160 °C at the equator, to −220 °C at either of its poles. Europa's subsurface ocean is thought to be significantly warmer however. It is hypothesized that because of radioactive and tidal heating (as mentioned in the sections above), there are points in the depths of Europa's ocean that may be only slightly cooler than Earth's oceans. Studies have also concluded that Europa's ocean would have been rather acidic at first, with large concentrations of sulfate, calcium, and carbon dioxide. But over the course of 4.5 billion years, it became full of chloride, thus resembling our 1.94% chloride oceans on Earth. Exploration Exploration of Europa began with the Jupiter flybys of Pioneer 10 and 11 in 1973 and 1974, respectively. The first closeup photos were of low resolution compared to later missions. The two Voyager probes traveled through the Jovian system in 1979, providing more-detailed images of Europa's icy surface. The images caused many scientists to speculate about the possibility of a liquid ocean underneath. Starting in 1995, the Galileo space probe orbited Jupiter for eight years, until 2003, and provided the most detailed examination of the Galilean moons to date. It included the "Galileo Europa Mission" and "Galileo Millennium Mission", with numerous close flybys of Europa. In 2007, New Horizons imaged Europa, as it flew by the Jovian system while on its way to Pluto. In 2022, the Juno orbiter flew by Europa at a distance of 352 km (219 mi). In 2012, Jupiter Icy Moons Explorer (JUICE) was selected by the European Space Agency (ESA) as a planned mission. That mission includes two flybys of Europa, but is more focused on Ganymede. It was launched in 2023, and is expected to reach Jupiter in July 2031 after four gravity assists and eight years of travel. In 2011, a Europa mission was recommended by the U.S. Planetary Science Decadal Survey. In response, NASA commissioned concept studies of a Europa lander in 2011, along with concepts for a Europa flyby (Europa Clipper), and a Europa orbiter. The orbiter element option concentrates on the "ocean" science, while the multiple-flyby element (Clipper) concentrates on the chemistry and energy science. On 13 January 2014, the House Appropriations Committee announced a new bipartisan bill that includes $80 million in funding to continue the Europa mission concept studies. In July 2013 an updated concept for a flyby Europa mission called Europa Clipper was presented by the Jet Propulsion Laboratory (JPL) and the Applied Physics Laboratory (APL). 
In May 2015, NASA announced that it had accepted development of the Europa Clipper mission, and revealed the instruments it would use. The aim of Europa Clipper is to explore Europa in order to investigate its habitability, and to aid in selecting sites for a future lander. The Europa Clipper would not orbit Europa, but instead orbit Jupiter and conduct 45 low-altitude flybys of Europa during its envisioned mission. The probe would carry an ice-penetrating radar, short-wave infrared spectrometer, topographical imager, and an ion- and neutral-mass spectrometer. The mission was launched on 14 October 2024 aboard a Falcon Heavy. Future missions Conjectures regarding extraterrestrial life have ensured a high profile for Europa and have led to steady lobbying for future missions. The aims of these missions have ranged from examining Europa's chemical composition to searching for extraterrestrial life in its hypothesized subsurface oceans. Robotic missions to Europa need to endure the high-radiation environment around Jupiter. Because it is deeply embedded within Jupiter's magnetosphere, Europa receives about 5.40 Sv of radiation per day. Europa Lander is a recent NASA concept mission under study. 2018 research suggests Europa may be covered in tall, jagged ice spikes, presenting a problem for any potential landing on its surface. Old proposals In the early 2000s, Jupiter Europa Orbiter led by NASA and the Jupiter Ganymede Orbiter led by the ESA were proposed together as an Outer Planet Flagship Mission to Jupiter's icy moons called Europa Jupiter System Mission, with a planned launch in 2020. In 2009 it was given priority over Titan Saturn System Mission. At that time, there was competition from other proposals. Japan proposed Jupiter Magnetospheric Orbiter. Jovian Europa Orbiter was an ESA Cosmic Vision concept study from 2007. Another concept was Ice Clipper, which would have used an impactor similar to the Deep Impact mission—it would make a controlled crash into the surface of Europa, generating a plume of debris that would then be collected by a small spacecraft flying through the plume. Jupiter Icy Moons Orbiter (JIMO) was a partially developed fission-powered spacecraft with ion thrusters that was cancelled in 2006. It was part of Project Prometheus. The Europa Lander Mission proposed a small nuclear-powered Europa lander for JIMO. It would travel with the orbiter, which would also function as a communication relay to Earth. Europa Orbiter – Its objective would be to characterize the extent of the ocean and its relation to the deeper interior. Instrument payload could include a radio subsystem, laser altimeter, magnetometer, Langmuir probe, and a mapping camera. The Europa Orbiter received the go-ahead in 1999 but was canceled in 2002. This orbiter featured a special ice-penetrating radar that would allow it to scan below the surface. More ambitious ideas have been put forward including an impactor in combination with a thermal drill to search for biosignatures that might be frozen in the shallow subsurface. Another proposal put forward in 2001 calls for a large nuclear-powered "melt probe" (cryobot) that would melt through the ice until it reached an ocean below. Once it reached the water, it would deploy an autonomous underwater vehicle (hydrobot) that would gather information and send it back to Earth. 
Both the cryobot and the hydrobot would have to undergo some form of extreme sterilization to prevent detection of Earth organisms instead of native life and to prevent contamination of the subsurface ocean. This suggested approach has not yet reached a formal conceptual planning stage. Habitability So far, there is no evidence that life exists on Europa, but the moon has emerged as one of the most likely locations in the Solar System for potential habitability. Life could exist in its under-ice ocean, perhaps in an environment similar to Earth's deep-ocean hydrothermal vents. Even if Europa lacks volcanic hydrothermal activity, a 2016 NASA study found that Earth-like levels of hydrogen and oxygen could be produced through processes related to serpentinization and ice-derived oxidants, which do not directly involve volcanism. In 2015, scientists announced that salt from a subsurface ocean may likely be coating some geological features on Europa, suggesting that the ocean is interacting with the seafloor. This may be important in determining if Europa could be habitable. The likely presence of liquid water in contact with Europa's rocky mantle has spurred calls to send a probe there. The energy provided by tidal forces drives active geological processes within Europa's interior, just as they do to a far more obvious degree on its sister moon Io. Although Europa, like the Earth, may possess an internal energy source from radioactive decay, the energy generated by tidal flexing would be several orders of magnitude greater than any radiological source. Life on Europa could exist clustered around hydrothermal vents on the ocean floor, or below the ocean floor, where endoliths are known to inhabit on Earth. Alternatively, it could exist clinging to the lower surface of Europa's ice layer, much like algae and bacteria in Earth's polar regions, or float freely in Europa's ocean. Should Europa's oceans be too cold, biological processes similar to those known on Earth could not occur; too salty, only extreme halophiles could survive in that environment. In 2010, a model proposed by Richard Greenberg of the University of Arizona proposed that irradiation of ice on Europa's surface could saturate its crust with oxygen and peroxide, which could then be transported by tectonic processes into the interior ocean. Such a process could render Europa's ocean as oxygenated as our own within just 12 million years, allowing the existence of complex, multicellular lifeforms. Evidence suggests the existence of lakes of liquid water entirely encased in Europa's icy outer shell and distinct from a liquid ocean thought to exist farther down beneath the ice shell, as well as pockets of water that form M-shaped ice ridges when the water freezes on the surface – as in Greenland. If confirmed, the lakes and pockets of water could be yet another potential habitat for life. Evidence suggests that hydrogen peroxide is abundant across much of the surface of Europa. Because hydrogen peroxide decays into oxygen and water when combined with liquid water, the authors argue that it could be an important energy supply for simple life forms. Nonetheless, on 4 March 2024, astronomers reported that the surface of Europa may have much less oxygen than previously inferred. Clay-like minerals (specifically, phyllosilicates), often associated with organic matter on Earth, have been detected on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet. 
Some scientists have speculated that life on Earth could have been blasted into space by asteroid collisions and arrived on the moons of Jupiter in a process called lithopanspermia.
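The ocean-volume figure quoted in the subsurface-ocean section (about 3 × 10^18 m3) can be roughly reproduced from simple shell geometry. The sketch below is illustrative only; Europa's radius, the ice-shell thickness and the ocean depth are assumed round values, and the Earth-ocean volume is an approximate reference number rather than a figure from this article.

```python
# Rough reconstruction (illustrative, not from the article's sources) of the
# volume of a liquid-water shell between an assumed ice crust and the sea floor.
import math

R_EUROPA_KM = 1561.0   # approximate mean radius of Europa
ICE_SHELL_KM = 25.0    # assumed thickness of the solid ice shell
OCEAN_KM = 100.0       # assumed depth of the liquid ocean beneath the ice

def sphere_volume_m3(radius_km: float) -> float:
    """Volume of a sphere of the given radius, in cubic meters."""
    r = radius_km * 1e3
    return 4.0 / 3.0 * math.pi * r ** 3

r_top = R_EUROPA_KM - ICE_SHELL_KM        # top of the liquid layer
r_bottom = r_top - OCEAN_KM               # bottom of the liquid layer
ocean_volume = sphere_volume_m3(r_top) - sphere_volume_m3(r_bottom)

EARTH_OCEANS_M3 = 1.335e18                # approximate volume of Earth's oceans
print(f"estimated ocean volume : {ocean_volume:.1e} m^3")
print(f"relative to Earth's    : {ocean_volume / EARTH_OCEANS_M3:.1f}x")
```

With these assumptions the shell holds roughly 3 × 10^18 m3 of water, about twice the volume of Earth's oceans, in line with the figure quoted earlier.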
Sipuncula
The Sipuncula or Sipunculida (common names sipunculid worms or peanut worms) is a class containing about 162 species of unsegmented marine annelid worms. Sipuncula was once considered a phylum, but was demoted to a class of Annelida, based on recent molecular work. Sipunculans vary in size but most species are under in length. The body is divided into an unsegmented, bulbous trunk and a narrower, anterior section, called the "introvert", which can be retracted into the trunk. The mouth is at the tip of the introvert and is surrounded in most groups by a ring of short tentacles. With no hard parts, the body is flexible and mobile. Although found in a range of habitats throughout the world's oceans, the majority of species live in shallow water habitats, burrowing under the surface of sandy and muddy substrates. Others live under stones, in rock crevices or in other concealed locations. Most sipunculans are deposit feeders, extending the introvert to gather food particles and draw them into the mouth, and retracting the introvert when feeding conditions are unsuitable or danger threatens. With a few exceptions, reproduction is sexual and involves a planktonic larval stage. Sipunculid worms are used as food in some countries in south-east Asia. Taxonomy is a feminine variant of the now-obsolete genus name , itself a variant of the Latin ("little tube"), a diminutive of from Greek (síphōn, "tube, pipe"). The Swedish naturalist Carl Linnaeus first described the worm in his in 1767. In 1814, the French zoologist Constantine Samuel Rafinesque used the word "Sipuncula" to describe the family (now Sipunculidae), and in time, the term came to be used for the whole class. This is a relatively understudied group, and it is estimated there may be around 162 species worldwide. The phylogenetic placement of this group in the past has proved troublesome. Originally classified as annelids, despite the complete lack of segmentation, bristles and other annelid characters, the phylum Sipuncula was later allied with the Mollusca, mostly on the basis of developmental and larval characters. These phyla have been included in a larger group, the Lophotrochozoa, that also includes the annelids, the ribbon worms and several other phyla. Phylogenetic analyses based on 79 ribosomal proteins indicated a position of Sipuncula within Annelida. Subsequent analysis of the mitochondrion's DNA has confirmed their close relationship to the Annelida (including echiurans and pogonophorans). It has also been shown that a rudimentary neural segmentation similar to that of annelids occurs in the early larval stage, even if these traits are absent in the adults. Anatomy Sipunculans are worms ranging from in length, with most species being under . The sipunculan body is divided into an unsegmented, bulbous trunk and a narrower, anterior section, called the "introvert". Sipunculans have a body wall somewhat similar to that of most other annelids (though unsegmented) in that it consists of an epidermis without cilia overlain by a cuticle, an outer layer of circular and an inner layer of longitudinal musculature. The body wall surrounds the coelom (body cavity) that is filled with fluid on which the body wall musculature acts as a hydrostatic skeleton to extend or contract the animal. When threatened, Sipunculid worms can contract their body into a shape resembling a peanut kernel—a practice that has given rise to the name "peanut worm". 
The introvert is pulled inside the trunk by two pairs of retractor muscles that extend as narrow ribbons from the trunk wall to attachment points in the introvert. It can be protruded from the trunk by contracting the muscles of the trunk wall, thus forcing the fluid in the body cavity forwards. The introvert can vary in size from half the length of the trunk to several times its length, but whatever their comparative sizes, it is fully retractable. The mouth is located at the anterior end of the animal; in the subclass Sipunculidea, the mouth is surrounded by a mass of 18 to 24 ciliated tentacles, while in the subclass Phascolosomatidea, the tentacles are arranged in an arc above the mouth, surrounding the nuchal organ, also located at the tip of the introvert. The tentacles each have a deep groove along which food is moved to the mouth by cilia. They are used to gather organic detritus from the water or substrate, and probably also function as gills. In the family Themistidae the tentacles form an elaborate crown-like structure, the members of this group being specialized filter feeders, unlike the other groups of sipunculans which are deposit feeders. The tentacles are hollow and are extended via hydrostatic pressure in a similar manner as the introvert, but have a different mechanism from that of the rest of the introvert, being connected, via a system of ducts, to one or two contractile sacs next to the oesophagus. Hooks are often present near the mouth on the introvert. These are proteinaceous, non-chitinous specializations of the epidermis, either arranged in rings or scattered. They may be involved in scraping algae off rock, or alternatively provide anchorage. Three genera (Aspidosiphon, Lithacrosiphon and Cloeosiphon) in the Aspidosiphonidae family possess epidermal structures, known as anal and caudal shields. These are patches of thickened, hard plates, and are used for boring into rock; the anal shield is near the anteriorly-located anus on the trunk, just below the introvert of the animal, while the caudal shield is at the posterior of the body. In Aspidosiphon and Lithacrosiphon the anal shield is restricted to the dorsal side, causing the introvert to emerge at an angle, whereas it surrounds the anterior trunk in Cloeosiphon with the introvert emerging from its center. In Aspidosiphon the shield is a hardened, horny structure; in Lithacrosiphon it is a calcareous cone; in Cloeosiphon it is composed of separate plates. When the introvert is retracted in these animals, the anal shield blocks the entrance to its burrow. At the posterior end of the trunk, a hardened caudal shield is sometimes present in Aspidosiphon; this may help with anchoring the animal in its burrow or may be used in the boring process. Digestive system The digestive tract of sipunculans starts with the esophagus, located between the introvert retractor muscles. In the trunk the intestine runs posteriorly, forms a loop and turns anteriorly again. The downward and upward sections of the gut are coiled around each other, forming a double helix. At the termination of the gut coil, the rectum emerges and ends in the anus, located in the anterior third of the trunk. Digestion is extra-cellular, taking place in the lumen of the intestine. A rectal caecum, present in most species, is a blind ending sac at the transition between intestine and rectum with unknown function. The anus is often not visible when the introvert is retracted into the trunk. Circulation Sipunculans do not have a vascular blood system. 
Fluid transport and gas exchange are instead accomplished by the coelom, which contains the respiratory pigment haemerythrin, and the separate tentacular system, the two being separated by an elaborate septum. The coelomic fluid contains five types of coelomic cells: haemocytes, granulocytes, large multinuclear cells, ciliated urn-shaped cells and immature cells. The ciliated urn cells may also be attached to the peritoneum and assist in filtering waste from the coelomic fluid. Nitrogenous waste is excreted through a pair of metanephridia opening close to the anus, except in Phascolion and Onchnesoma, which have only a single nephridium. A ciliated funnel, or nephrostome, opens into the coelomic cavity at the anterior end, close to the nephridiopore. The metanephridia have an osmoregulatory function, but it is unclear whether the mechanism is via filtration or secretion. They also serve as gamete storage and maintenance organs. The tentacular coelom connects the tentacles at the tip of the introvert to a ring canal at their base, from which a contractile vessel runs along beside the esophagus and ends blindly posteriorly. Some evidence points towards the involvement of these structures in ultrafiltration. In crevice-dwelling sipunculans, respiration is mainly through the tentacular system, with oxygen diffusing into the trunk coelom from the tentacular coelom. However, in other species the skin is thin and respiration is mainly through the cuticle of the trunk, where oxygen uptake is assisted by the presence of dermal coelomic canals just beneath the epidermis. Nervous system The nervous system consists of a dorsal cerebral ganglion (the brain) above the oesophagus and a nerve ring around the oesophagus, which links the brain with the single ventral nerve cord that runs the length of the body. Lateral nerves lead off this to innervate the muscles of the body wall. In some species, there are simple light-sensitive ocelli associated with the brain. Two organs, likely functioning as a unit for chemoreception, are located near its anterior margin: the non-ciliated cerebral organ, which possesses bipolar sensory cells, and the nuchal organ, located posterior to the brain. Similar light-sensing tubes have been reported in the fauveliopsid annelids. In addition, all sipunculans have numerous sensory nerve endings on the body wall, especially at the forward end of the introvert, which is used for exploring the surrounding environment. Distribution and habitat All sipunculid worms are marine and benthic; they are found throughout the world's oceans including polar waters, equatorial waters and the abyssal zone, but the majority of species occur in shallow water, where they are relatively common. They inhabit a range of habitats, including burrowing in sand, mud, clay and gravel, hiding under stones, in rock crevices, in hollow coral heads, in wood, in empty seashells and inside the bones of dead whales. Some hide in kelp holdfasts, under tangles of eelgrass, inside sponges and in the empty tubes of other organisms, and some live among fouling organisms on man-made structures. Some bore into solid rocks to make a shelter for themselves. They are common below the surface of the sediment on tidal flats. These worms may stay submerged in the sea bed for between 10 and 18 hours a day. They are sensitive to low salinities, and thus not commonly found near estuaries. They can also be abundant in coralline rock, and in Hawaii, up to seven hundred individuals have been found per square metre in burrows in the rock.
Reproduction Both asexual and sexual reproduction can be found in sipunculans, although asexual reproduction is uncommon and has only been observed in Aspidosiphon elegans and Sipunculus robustus. These reproduce asexually through transverse fission, followed by regeneration of vital body components, with S. robustus also reproducing by budding. One species of sipunculan, Themiste lageniformis, has been recorded as reproducing parthenogenetically; eggs produced in the absence of sperm developed through the normal stages. Most sipunculan species are dioecious. Their gametes are produced in the coelomic lining, from which they are released into the coelom to mature. These gametes are then picked up by the metanephridial system and released into the aquatic environment, where fertilisation takes place. In at least one species, Themiste pyroides, swarming behaviour occurs, with adults creating compact masses among rocks immediately before spawning. Although some species hatch directly into the adult form, many have a trochophore larva, which metamorphoses into the adult after anything from a day to a month, depending on species. In a few species, the trochophore does not develop directly into the adult, but into an intermediate pelagosphaera stage, which possesses a greatly enlarged metatroch (ciliated band). Metamorphosis occurs only in the presence of suitable habitat conditions, and is triggered by the presence of adults. Behaviour Most sipunculans are deposit feeders, employing a number of different methods to obtain their food. Those living in burrows extend their tentacles over the surface of the sediment. Food particles get trapped in mucous secretions, and the beating of cilia transports the particles to the mouth. Among those that burrow through the sand, the tentacles are replaced by fluted folds which scoop up sediment and food particles. Most of this material is swallowed but larger particles are discarded. Species dwelling in crevices are able to withdraw their introverts, blocking the crevice entrance with their thickened trunks and presumably ingesting any food they have snared at the same time. One species, Thysanocardia procera, is thought to be carnivorous, gaining entrance in some way to the interior of the sea mouse Aphrodita aculeata and sucking out its liquefied contents. Fossil record Because of their soft-bodied structure, fossils of sipunculans are extremely rare, and are only known from a few genera. Archaeogolfingia and Cambrosipunculus appear in the Cambrian Chengjiang biota in China. These fossils appear to belong to the crown group, and demonstrate that sipunculans have changed little (morphologically) since the early Cambrian, about 520 million years ago. An unnamed sipunculid worm from the Cambrian period has been discovered in the Burgess Shale in British Columbia, Canada, and Lecthaylus has been identified from the Granton Shrimp Bed, near Edinburgh, Scotland, dating to the Silurian period. Trace fossils of burrows that may have been formed by sipunculans have been found from the Paleozoic. Some scientists once hypothesized a close relationship between sipunculans and the extinct hyoliths, operculate shells from the Palaeozoic with which they share a helical gut, but this hypothesis has since been discounted. As food Sipunculid worm jelly (土笋凍) is a delicacy in southeast China, originally from Anhai, near Quanzhou. A sipunculid worm dish is also considered a delicacy in the islands of the Visayas region, Philippines.
The muscle is first prepared by soaking it in spiced vinegar and then served with other ingredients as a dish similar to ceviche. It is a basic food for local fishermen and is sometimes seen in city restaurants as an appetizer. This style of food preparation is locally called kilawin or kinilaw, and is also used for fish, conch and vegetables. The worms, especially in dried form, are considered a delicacy in Vietnam as well, where they are caught on the coasts of Minh Chao island, in Van Don District. The relatively high market price of the worms has made them a significant source of income for the local population of fishing families.
Biology and health sciences
Lophotrochozoa
Animals
43139
https://en.wikipedia.org/wiki/Placozoa
Placozoa
Placozoa is a phylum of free-living (non-parasitic) marine invertebrates. They are blob-like animals composed of aggregations of cells. Moving in water by ciliary motion, eating food by engulfment, and reproducing by fission or budding, placozoans are described as "the simplest animals on Earth." Structural and molecular analyses have supported them as among the most basal animals, thus constituting a primitive metazoan phylum. The first known placozoan, Trichoplax adhaerens, was discovered in 1883 by the German zoologist Franz Eilhard Schulze (1840–1921). Recognising its uniqueness, another German, Karl Gottlieb Grell (1912–1994), erected a new phylum, Placozoa, for it in 1971. The phylum remained monotypic for over a century, but new species began to be added from 2018. So far, three other extant species have been described, in two distinct classes: Uniplacotomia (Hoilungia hongkongensis in 2018 and Cladtertia collaboinventa in 2022) and Polyplacotomia (Polyplacotoma mediterranea, the most basal, in 2019). A single putative fossil species is known, the Middle Triassic Maculicorpus microbialis. History Trichoplax was discovered in 1883 by the German zoologist Franz Eilhard Schulze, in a seawater aquarium at the Zoological Institute in Graz, Austria. The generic name is derived from the classical Greek thrix, meaning "hair", and plax, "plate". The specific epithet adhaerens is Latin meaning "adherent", reflecting its propensity to stick to the glass slides and pipettes used in its examination. Schulze realized that the animal could not be a member of any existing phylum, and based on its simple structure and behaviour, concluded in 1891 that it must be an early metazoan. He also observed its reproduction by fission, its cell layers and its locomotion. In 1893, the Italian zoologist Francesco Saverio Monticelli described another animal, which he named Treptoplax, the specimens of which he collected from Naples. He gave the species name T. reptans in 1896. Monticelli did not preserve the specimens and no others were ever found, as a result of which the identification is considered doubtful and the species has been rejected. Schulze's description was opposed by other zoologists. For instance, in 1890, F.C. Noll argued that the animal was a flatworm (Turbellaria). In 1907, Thilo Krumbach published a hypothesis that Trichoplax is not a distinct animal but a form of the planula larva of the anemone-like hydrozoan Eleutheria krohni. Although this was refuted in print by Schulze and others, Krumbach's analysis became the standard textbook explanation, and nothing was printed in zoological journals about Trichoplax until the 1960s. The development of electron microscopy in the mid-20th century allowed in-depth observation of the cellular components of organisms, following which there was renewed interest in Trichoplax starting in 1966. The most important descriptions were made by Karl Gottlieb Grell at the University of Tübingen from 1971 onwards. That year, Grell revived Schulze's interpretation that the animals are unique and created the new phylum Placozoa for them. Grell derived the name from the placula hypothesis, Otto Bütschli's notion of the origin of metazoans. Biology Placozoans do not have well-defined body plans, much like amoebas, which are unicellular eukaryotes. As Andrew Masterson reported: "they are as close as it is possible to get to being simply a little living blob." An individual body measures about 0.55 mm in diameter.
There are no body parts; as one of the researchers, Michael Eitel, described: "There's no mouth, there's no back, no nerve cells, nothing." Animals studied in laboratories have bodies consisting of everything from hundreds to millions of cells. Placozoans have only three anatomical parts, arranged as tissue layers within the body: the upper, intermediate (middle) and lower epithelia. There are at least six different cell types. The upper epithelium is the thinnest portion and essentially comprises flat cells whose cell bodies hang beneath the surface, each cell bearing a cilium. Crystal cells are sparsely distributed near the marginal edge. A few cells have an unusually large number of mitochondria. The middle layer is the thickest, made up of numerous fiber cells, which contain mitochondrial complexes, vacuoles and endosymbiotic bacteria in the endoplasmic reticulum. The lower epithelium consists of numerous monociliated cylinder cells along with a few endocrine-like gland cells and lipophil cells. Each lipophil cell contains numerous middle-sized granules, one of which is a secretory granule. The body axes of Hoilungia and Trichoplax are overtly similar to the oral–aboral axis of cnidarians, animals from another phylum to which they are most closely related. Structurally, they cannot be distinguished from other placozoans, so identification rests purely on genetic (mitochondrial DNA) differences. Genome sequencing has shown that each species has a set of unique genes and several uniquely missing genes. Trichoplax is a small, flattened animal. An amorphous multi-celled body, analogous to a single-celled amoeba, it has no regular outline, although the lower surface is somewhat concave, and the upper surface is always flattened. The body consists of an outer layer of simple epithelium enclosing a loose sheet of stellate cells resembling the mesenchyme of some more complex animals. The epithelial cells bear cilia, which the animal uses to help it creep along the seafloor. The lower surface engulfs small particles of organic detritus, on which the animal feeds. All placozoans can reproduce asexually, budding off smaller individuals, and the lower surface may also bud off eggs into the mesenchyme. Sexual reproduction has been reported in one clade of placozoans, whose strain H8 was later found to belong to the genus Cladtertia; intergenic recombination was observed in this clade, as well as other hallmarks of sexual reproduction. Some Trichoplax species contain Rickettsiales bacteria as endosymbionts. One of the at least 20 described species turned out to have two bacterial endosymbionts: Grellia, which lives in the animal's endoplasmic reticulum and is assumed to play a role in protein and membrane production, and the first described member of the Margulisbacteria, which lives inside the cells used for algal digestion. The latter appears to eat the fats and other lipids of the algae and provide its host with vitamins and amino acids in return. Studies suggest that aragonite crystals in the crystal cells have the same function as statoliths, allowing the animal to use gravity for spatial orientation. Located in the dorsal epithelium are lipid granules called shiny spheres, which release a cocktail of venoms and toxins as an anti-predator defense and can induce paralysis or death in some predators. Genes have been found in Trichoplax with a strong resemblance to the venom genes of some venomous snakes, such as the American copperhead and the West African carpet viper.
The Placozoa show substantial evolutionary radiation in regard to sodium channels, of which they have 5–7 different types, more than any other invertebrate species studied to date. Three modes of population dynamics depended upon feeding sources, including induction of social behaviors, morphogenesis, and reproductive strategies. In addition to fission, representatives of all species produced “swarmers” (a separate vegetative reproduction stage), which could also be formed from the lower epithelium with greater cell-type diversity. Evolutionary relationships There is no convincing fossil record of the Placozoa, although the Ediacaran biota (Precambrian, ) organism Dickinsonia appears somewhat similar to placozoans. Knaust (2021) reported preservation of placozoan fossils in a microbialite bed from the Middle Triassic Muschelkalk (Germany). Traditionally, classification was based on their level of organization, i.e., they possess no tissues or organs. However this may be as a result of secondary loss and thus is inadequate to exclude them from relationships with more complex animals. More recent work has attempted to classify them based on the DNA sequences in their genome; this has placed the phylum between the sponges and the Eumetazoa. In such a feature-poor phylum, molecular data are considered to provide the most reliable approximation of the placozoans' phylogeny. Their exact position on the phylogenetic tree would give important information about the origin of neurons and muscles. If the absence of these features is an original trait of the Placozoa, it would mean that a nervous system and muscles evolved three times should placozoans and cnidarians be a sister group; once in the Ctenophora, once in the Cnidaria and once in the Bilateria. If they branched off before the Cnidaria and Bilateria split, the neurons and muscles would have the same origin in the two latter groups. Functional-morphology hypothesis On the basis of their simple structure, the Placozoa were frequently viewed as a model organism for the transition from unicellular organisms to the multicellular animals (Metazoa) and are thus considered a sister taxon to all other metazoans: According to a functional-morphology model, all or most animals are descended from a gallertoid, a free-living (pelagic) sphere in seawater, consisting of a single ciliated layer of cells supported by a thin, noncellular separating layer, the basal lamina. The interior of the sphere is filled with contractile fibrous cells and a gelatinous extracellular matrix. Both the modern Placozoa and all other animals then descended from this multicellular beginning stage via two different processes: Infolding of the epithelium led to the formation of an internal system of ducts and thus to the development of a modified gallertoid from which the sponges (Porifera), Cnidaria and Ctenophora subsequently developed. Other gallertoids, according to this model, made the transition over time to a benthic mode of life; that is, their habitat has shifted from the open ocean to the floor (benthic zone). This results naturally in a selective advantage for flattening of the body, as of course can be seen in many benthic species. 
While the probability of encountering food, potential sexual partners, or predators is the same in all directions for animals floating freely in the water, on the seafloor there is a clear difference between the functions useful on the body sides facing toward and away from the substrate, leading their sensory, defensive, and food-gathering cells to differentiate and orient according to the vertical – the direction perpendicular to the substrate. In the proposed functional-morphology model, the Placozoa, and possibly several similar organisms known only from fossils, are descended from such a life form, which is now termed a placuloid. Three different life strategies have accordingly led to three different possible lines of development: Animals that live interstitially in the sand of the ocean floor were responsible for the fossil crawling traces that are considered the earliest evidence of animals, and are detectable even prior to the dawn of the Ediacaran Period. These are usually attributed to bilaterally symmetrical worms, but the hypothesis presented here views animals derived from placuloids, and thus close relatives of Trichoplax adhaerens, as the producers of the traces. Animals that incorporated algae as photosynthetically active endosymbionts, i.e. primarily obtaining their nutrients from their partners in symbiosis, were accordingly responsible for the mysterious creatures of the Ediacara fauna that are not assigned to any modern animal taxon and lived during the Ediacaran Period, before the start of the Paleozoic. However, recent work has shown that some of the Ediacaran assemblages (e.g. Mistaken Point) were in deep water, below the photic zone, and hence those individuals could not have depended on endosymbiotic photosynthesisers. Animals that grazed on algal mats would ultimately have been the direct ancestors of the Placozoa. The advantages of an amoeboid multiplicity of shapes thus allowed a previously present basal lamina and a gelatinous extracellular matrix to be lost secondarily. Pronounced differentiation between the surface facing the substrate (ventral) and the surface facing away from it (dorsal) accordingly led to the physiologically distinct cell layers of Trichoplax adhaerens that can still be seen today. Consequently, these are analogous, but not homologous, to ectoderm and endoderm – the "external" and "internal" cell layers in eumetazoans – i.e. the structures corresponding functionally to one another have, according to the proposed hypothesis, no common evolutionary origin. Should any of the analyses presented above turn out to be correct, Trichoplax adhaerens would be the oldest branch of the multicellular animals, and a relic of the Ediacara fauna, or even the pre-Ediacara fauna. Although very successful in their ecological niche, due to the absence of an extracellular matrix and basal lamina the developmental potential of these animals was of course limited, which would explain the low rate of evolution of their phenotype (their outward form as adults) – referred to as bradytely. This hypothesis was supported by a recent analysis of the Trichoplax adhaerens mitochondrial genome in comparison to those of other animals. The hypothesis was, however, rejected in a statistical analysis of the Trichoplax adhaerens whole genome sequence in comparison to the whole genome sequences of six other animals and two related non-animal species, though only at a marginal level of statistical significance.
Epitheliozoa hypothesis A concept based on purely morphological characteristics pictures the Placozoa as the nearest relative of the animals with true tissues (Eumetazoa). The taxon they share, called the Epitheliozoa, is itself construed to be a sister group to the sponges (Porifera): The above view could be correct, although there is some evidence that the ctenophores, traditionally seen as Eumetazoa, may be the sister to all other animals. This is now a disputed classification. Placozoans are estimated to have emerged 750–800 million years ago, and the first modern neuron to have originated in the common ancestor of cnidarians and bilaterians about 650 million years ago (many of the genes expressed in modern neurons are absent in ctenophores, although some of these missing genes are present in placozoans). The principal support for such a relationship comes from special cell-to-cell junctions – belt desmosomes – that occur not just in the Placozoa but in all animals except the sponges: they enable the cells to join together in an unbroken layer like the epitheloid of the Placozoa. Trichoplax adhaerens also shares the ventral gland cells with most eumetazoans. Both characteristics can be considered evolutionarily derived features (apomorphies), and thus form the basis of a common taxon for all animals that possess them. One possible scenario inspired by the proposed hypothesis starts with the idea that the monociliated cells of the epitheloid in Trichoplax adhaerens evolved by reduction of the collars in the collar cells (choanocytes) of sponges as the hypothesized ancestors of the Placozoa abandoned a filtering mode of life. The epitheloid would then have served as the precursor to the true epithelial tissue of the eumetazoans. In contrast to the model based on functional morphology described earlier, in the Epitheliozoa hypothesis the ventral and dorsal cell layers of the Placozoa are homologs of endoderm and ectoderm, the two basic embryonic cell layers of the eumetazoans. The digestive gastrodermis in the Cnidaria or the gut epithelium in the bilaterally symmetrical animals (Bilateria) may have developed from endoderm, whereas ectoderm is the precursor to the external skin layer (epidermis), among other things. The interior space pervaded by a fiber syncytium in the Placozoa would then correspond to connective tissue in the other animals. It is unclear whether the calcium ions stored in the syncytium would be related to the lime skeletons of many cnidarians. As noted above, this hypothesis was supported in a statistical analysis of the Trichoplax adhaerens whole genome sequence, as compared to the whole-genome sequences of six other animals and two related non-animal species. Eumetazoa hypothesis A third hypothesis, based primarily on molecular genetics, views the Placozoa as highly simplified eumetazoans. According to this, Trichoplax adhaerens is descended from considerably more complex animals that already had muscles and nerve tissues. Both tissue types, as well as the basal lamina of the epithelium, were accordingly lost more recently by radical secondary simplification. Various studies in this regard so far yield differing results for identifying the exact sister group: in one case the Placozoa would qualify as the nearest relatives of the Cnidaria, while in another they would be a sister group to the Ctenophora, and occasionally they are placed directly next to the Bilateria.
Currently, they are typically placed in a cladogram in which the Epitheliozoa and Eumetazoa are synonyms of each other and of the Diploblasts, and the Ctenophora are basal to them. An argument raised against the proposed scenario is that it leaves morphological features of the animals completely out of consideration. The extreme degree of simplification that would have to be postulated for the Placozoa in this model, moreover, is only known for parasitic organisms, but would be difficult to explain functionally in a free-living species like Trichoplax adhaerens. This version is supported by statistical analysis of the Trichoplax adhaerens whole genome sequence in comparison to the whole genome sequences of six other animals and two related non-animal species. However, Ctenophora was not included in the analyses, which placed the placozoans outside of the sampled eumetazoans. Cnidaria-sister hypothesis DNA comparisons suggest that placozoans are related to the Cnidaria and derived from a planula larva (as seen in some Cnidaria). The Bilateria are also thought to be derived from planuloids. The Cnidaria and Placozoa body axes are overtly similar, and placozoan and cnidarian cells are responsive to the same neuropeptide antibodies, despite extant placozoans not developing any neurons.
Biology and health sciences
Other
Animals
43140
https://en.wikipedia.org/wiki/Symbion
Symbion
Symbion is a genus of commensal aquatic animals, less than 0.5 mm wide, found living attached to the mouthparts of cold-water lobsters. They have sac-like bodies, and three distinctly different forms in different parts of their two-stage life-cycle. They appear so different from other animals that they were assigned their own, new phylum Cycliophora shortly after they were discovered in 1995. This was the first new phylum of multicelled organisms to be discovered since the Loricifera in 1983. Taxonomy Symbion was discovered in 1995 by Reinhardt Kristensen and Peter Funch on the mouthparts of the Norway lobster (Nephrops norvegicus). Other, related, species have since been discovered on the American lobster (Homarus americanus, host to Symbion americanus) and the European lobster (Homarus gammarus, host to an as yet unnamed species of Symbion). The genus is so named because of its commensal relationship with the lobster (a form of symbiosis) – it feeds on the leftovers from the lobster's own meals. They are peculiar microscopic animals, with no obvious close relatives, which were therefore given their own phylum, called Cycliophora. The phylogenetic position of Symbion is still not finally settled. Currently it is placed in the clade Polyzoa along with the phyla Ectoprocta and Entoprocta, based on genetic analysis. Description Symbion pandora has a bilateral, sac-like body with no coelom. There are three basic life stages: Asexual Feeding Stage – At this stage, S. pandora is neither male nor female. It has a length of 347 μm and a width of 113 μm. On the posterior end of the sac-like body is a stalk with an adhesive disc, which attaches itself to the host. On the anterior end is a ciliated funnel (mouth) and an anus. Sexual Stage Male – S. pandora has a length of 84 μm and a width of 42 μm during this stage. It has no mouth or anus, which signifies the absence of a digestive system. It also has two reproductive organs. Female – S. pandora is the same size as the male in this stage. It does, however, have a digestive system, which collapses and reconstitutes itself as a larva. Reproduction Symbion reproduces both asexually and sexually, and has a complex reproductive cycle, a strategy evolved to produce as many offspring as possible that can survive and find a new host when the lobster they live on sheds its shell. The asexual individuals are the largest ones. The sexual individuals do not eat. During the autumn the asexual individuals make copies of themselves, with a new individual growing inside the parent body, one offspring at a time. The new offspring attach themselves to an available spot on the lobster, begin to feed and eventually start making new copies of themselves. In early winter, the asexual animals start producing males. When a male is born, it crawls away from its parent and glues itself to another asexual individual. Once attached, the male produces two dwarf males inside its body, which turns into a hollow pouch. Each of the two dwarf males is about one hundred times smaller than the asexual individual to which it is attached. Their bodies start out with about 200 cells, but this number has been reduced to just 47 by the time they reach maturity. Thirty-four of the cells form the nervous system, and three more become sensory cells used to help them feel their surroundings. Eight cells become mucous glands, which produce mucus that helps them move across the surface. The final two cells form the testes, which make the sperm that fertilize the female's egg.
Most of the cells of the dwarf males also lose their nuclei and shrink to almost half their size, an adaptation that allows two mature individuals to fit inside the body of the parent male. Having two males increases the chances that one will fertilize a female. By late winter, when the large feeding individuals in the colony have males attached to their bodies, they start making females. Each female has a single egg inside her. When she is about to be born, one of the two dwarf males fertilizes her as she comes out. The fertilized female finds herself a place on the host's whiskers, where she attaches herself. Inside her, the developing embryo extracts from its mother all the nutrients it needs to grow, and by the time it is ready to be born, all that remains of the mother is an empty husk. Unlike all the other forms in the colony, this new offspring is a strong swimmer; those that succeed in finding a new host attach themselves to its mouthparts, where they grow a stomach and mouthparts of their own, morphing into the large, feeding, asexual type and starting the cycle all over again. The larval stage is sometimes, unscientifically, referred to as a "sea worm".
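As a quick arithmetic check of the dwarf-male cell counts given above, the stated cell types can be summed; the short Python sketch below simply restates the figures from the text (the variable names are illustrative, not part of any published dataset).

# Illustrative tally of the cell types reported for a mature Symbion dwarf male.
# The numbers are those stated in the text above; this is only a consistency check.
cell_types = {
    "nervous system": 34,
    "sensory cells": 3,
    "mucous glands": 8,
    "testis-forming cells": 2,
}
mature_total = sum(cell_types.values())
print(mature_total)        # 47, matching the stated count at maturity
print(200 - mature_total)  # roughly 153 of the ~200 initial cells are no longer present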
Biology and health sciences
Spiralia
Animals
43143
https://en.wikipedia.org/wiki/Echinoderm
Echinoderm
An echinoderm is any animal of the phylum Echinodermata, which includes starfish, brittle stars, sea urchins, sand dollars and sea cucumbers, as well as the sessile sea lilies or "stone lilies". While bilaterally symmetrical as larvae, as adults echinoderms are recognisable by their usually five-pointed radial symmetry (pentamerous symmetry), and are found on the sea bed at every ocean depth from the intertidal zone to the abyssal zone. The phylum contains about 7,600 living species, making it the second-largest group of deuterostomes after the chordates, as well as the largest marine-only phylum. The first definitive echinoderms appeared near the start of the Cambrian. Echinoderms are important both ecologically and geologically. Ecologically, there are few other groupings so abundant in the deep sea, as well as in shallower oceans. Most echinoderms are able to reproduce asexually and regenerate tissue, organs and limbs; in some cases, they can undergo complete regeneration from a single limb. Geologically, the value of echinoderms is in their ossified dermal endoskeletons, which are major contributors to many limestone formations and can provide valuable clues as to the geological environment. They were the animals most used in regenerative research in the 19th and 20th centuries. Further, some scientists hold that the radiation of echinoderms was responsible for the Mesozoic Marine Revolution. Etymology The name echinoderm means "spiny-skinned", from the Greek ekhinos ("hedgehog" or "sea urchin") and derma ("skin"). The name Echinodermata was coined by Jacob Theodor Klein in 1734, but only in reference to echinoids. It was expanded to the phylum level by Jean Guillaume Bruguière, first informally in 1789 and then in formal Latin in 1791. In 1955, Libbie Hyman attributed the name to "Bruguière, 1791 [ex Klein, 1734]." This attribution has become common and is listed by the Integrated Taxonomic Information System (ITIS), although some workers believe that the ITIS rules should result in attributing "Klein, 1778" due to a 2nd edition of his work published by Leske in that year. While Echinodermata has been in common use since the mid-1800s, several other names have been proposed. Notably, F. A. Bather called the phylum "Echinoderma" (apparently after Latreille, 1825) in his 1900 treatise on the phylum, but this name is now used for a genus of fungi. Diversity There are about 7,600 extant species of echinoderm as well as about 13,000 extinct species. All echinoderms are marine, but they are found in habitats ranging from shallow intertidal areas to abyssal depths. Five extant classes of echinoderms are generally recognized: the Asteroidea (starfish, with over 1,900 species), Ophiuroidea (brittle stars, with around 2,300 species), Echinoidea (sea urchins and sand dollars, with some 900 species), Holothuroidea (sea cucumbers, with about 1,430 species), and Crinoidea (feather stars and sea lilies, with around 580 species). Anatomy and physiology Echinoderms evolved from animals with bilateral symmetry. Although adult echinoderms possess pentaradial symmetry, their larvae are ciliated, free-swimming organisms with bilateral symmetry. Later, during metamorphosis, the left side of the body grows at the expense of the right side, which is eventually absorbed. The left side then grows in a pentaradially symmetric fashion, in which the body is arranged in five parts around a central axis. Within the Asterozoa, there can be a few exceptions to the rule. Most starfish in the genus Leptasterias have six arms, although five-armed individuals can occur.
The Brisingida also contain some six-armed species. Amongst the brittle stars, six-armed species such as Ophiothela danae, Ophiactis savignyi, and Ophionotus hexactis exist, and Ophiacantha vivipara often has more than six. Echinoderms have secondary radial symmetry in portions of their body at some stage of life, most likely an adaptation to a sessile or slow-moving existence. Many crinoids and some seastars are symmetrical in multiples of the basic five; starfish such as Labidiaster annulatus possess up to fifty arms, while the sea-lily Comaster schlegelii has two hundred. Genetic studies have shown that genes directing anterior-most development are expressed along ambulacra in the center of starfish rays, with the next-most-anterior genes expressed in the surrounding fringe of tube feet. Genes related to the beginning of the trunk are expressed at the ray margins, but trunk genes are only expressed in interior tissue rather than on the body surface. This means that a starfish body can more-or-less be considered to consist only of a head. Skin and skeleton Echinoderms have a mesodermal skeleton in the dermis, composed of calcite-based plates known as ossicles. If solid, these would form a heavy skeleton, so they have a sponge-like porous structure known as stereom. Ossicles may be fused together, as in the test of sea urchins, or may articulate to form flexible joints as in the arms of sea stars, brittle stars and crinoids. The ossicles may bear external projections in the form of spines, granules or warts and they are supported by a tough epidermis. Skeletal elements are sometimes deployed in specialized ways, such as the chewing organ called "Aristotle's lantern" in sea urchins, the supportive stalks of crinoids, and the structural "lime ring" of sea cucumbers. Although individual ossicles are robust and fossilize readily, complete skeletons of starfish, brittle stars and crinoids are rare in the fossil record. On the other hand, sea urchins are often well preserved in chalk beds or limestone. During fossilization, the cavities in the stereom are filled in with calcite that is continuous with the surrounding rock. On fracturing such rock, paleontologists can observe distinctive cleavage patterns and sometimes even the intricate internal and external structure of the test. The epidermis contains pigment cells that provide the often vivid colours of echinoderms, which include deep red, stripes of black and white, and intense purple. These cells may be light-sensitive, causing many echinoderms to change appearance completely as night falls. The reaction can happen quickly: the sea urchin Centrostephanus longispinus changes colour in just fifty minutes when exposed to light. One characteristic of most echinoderms is a special kind of tissue known as catch connective tissue. This collagen-based material can change its mechanical properties under nervous control rather than by muscular means. This tissue enables a starfish to go from moving flexibly around the seabed to becoming rigid while prying open a bivalve mollusc or preventing itself from being extracted from a crevice. Similarly, sea urchins can lock their normally mobile spines upright as a defensive mechanism when attacked. The water vascular system Echinoderms possess a unique water vascular system, a network of fluid-filled canals modified from the coelom (body cavity) that function in gas exchange, feeding, sensory reception and locomotion. 
This system varies between different classes of echinoderm but typically opens to the exterior through a sieve-like madreporite on the aboral (upper) surface of the animal. The madreporite is linked to a slender duct, the stone canal, which extends to a ring canal that encircles the mouth or oesophagus. The ring canal branches into a set of radial canals, which in asteroids extend along the arms, and in echinoids adjoin the test in the ambulacral areas. Short lateral canals branch off the radial canals, each one ending in an ampulla. Part of the ampulla can protrude through a pore (or a pair of pores in sea urchins) to the exterior, forming a podium or tube foot. The water vascular system assists with the distribution of nutrients throughout the animal's body; it is most visible in the tube feet which can be extended or contracted by the redistribution of fluid between the foot and the internal ampulla. The organisation of the water vascular system is somewhat different in ophiuroids, where the madreporite may be on the oral surface and the podia lack suckers. In holothuroids, the system is reduced, often with few tube feet other than the specialised feeding tentacles, and the madreporite opens on to the coelom. Some holothuroids like the Apodida lack tube feet and canals along the body; others have longitudinal canals. The arrangement in crinoids is similar to that in asteroids, but the tube feet lack suckers and are used in a back-and-forth wafting motion to pass food particles captured by the arms towards the central mouth. In the asteroids, the same motion is employed to move the animal across the ground. Other organs Echinoderms possess a simple digestive system which varies according to the animal's diet. Starfish are mostly carnivorous and have a mouth, oesophagus, two-part stomach, intestine and rectum, with the anus located in the centre of the aboral body surface. With a few exceptions, the members of the order Paxillosida do not possess an anus. In many species of starfish, the large cardiac stomach can be everted to digest food outside the body. Some other species are able to ingest whole food items such as molluscs. Brittle stars, which have varying diets, have a blind gut with no intestine or anus; they expel food waste through their mouth. Sea urchins are herbivores and use their specialised mouthparts to graze, tear and chew their food, mainly algae. They have an oesophagus, a large stomach and a rectum with the anus at the apex of the test. Sea cucumbers are mostly detritivores, sorting through the sediment with modified tube feet around their mouth, the buccal tentacles. Sand and mud accompanies their food through their simple gut, which has a long coiled intestine and a large cloaca. Crinoids are suspension feeders, passively catching plankton which drift into their outstretched arms. Boluses of mucus-trapped food are passed to the mouth, which is linked to the anus by a loop consisting of a short oesophagus and longer intestine. The coelomic cavities of echinoderms are complex. Aside from the water vascular system, echinoderms have a haemal coelom, a perivisceral coelom, a gonadal coelom and often also a perihaemal coelom. During development, echinoderm coelom is divided into the metacoel, mesocoel and protocoel (also called somatocoel, hydrocoel and axocoel, respectively). The water vascular system, haemal system and perihaemal system form the tubular coelomic system. 
Echinoderms are unusual in having both a coelomic circulatory system (the water vascular system) and a haemal circulatory system, as most groups of animals have just one of the two. Haemal and perihaemal systems are derived from the original coelom, forming an open and reduced circulatory system. This usually consists of a central ring and five radial vessels. There is no true heart, and the blood often lacks any respiratory pigment. Gaseous exchange occurs via dermal branchiae or papulae in starfish, genital bursae in brittle stars, peristominal gills in sea urchins and cloacal trees in sea cucumbers. Exchange of gases also takes place through the tube feet. Echinoderms lack specialized excretory (waste disposal) organs and so nitrogenous waste, chiefly in the form of ammonia, diffuses out through the respiratory surfaces. The coelomic fluid contains the coelomocytes, or immune cells. There are several types of immune cells, which vary among classes and species. All classes possess a type of phagocytic amebocyte, which engulf invading particles and infected cells, aggregate or clot, and may be involved in cytotoxicity. These cells are usually large and granular, and are believed to be a main line of defence against potential pathogens. Depending on the class, echinoderms may have spherule cells (for cytotoxicity, inflammation, and anti-bacterial activity), vibratile cells (for coelomic fluid movement and clotting), and crystal cells (which may serve for osmoregulation in sea cucumbers). The coelomocytes secrete antimicrobial peptides against bacteria, and have a set of lectins and complement proteins as part of an innate immune system that is still being characterised. Echinoderms have a simple radial nervous system that consists of a modified nerve net of interconnected neurons with no central brain, although some do possess ganglia. Nerves radiate from central rings around the mouth into each arm or along the body wall; the branches of these nerves coordinate the movements of the organism and the synchronisation of the tube feet. Starfish have sensory cells in the epithelium and have simple eyespots and touch-sensitive tentacle-like tube feet at the tips of their arms. Sea urchins have no particular sense organs but do have statocysts that assist in gravitational orientation, and they too have sensory cells in their epidermis, particularly in the tube feet, spines and pedicellariae. Brittle stars, crinoids and sea cucumbers in general do not have sensory organs, but some burrowing sea cucumbers of the order Apodida have a single statocyst adjoining each radial nerve, and some have an eyespot at the base of each tentacle. The gonads at least periodically occupy much of the body cavities of sea urchins and sea cucumbers, while the less voluminous crinoids, brittle stars and starfish have two gonads in each arm. While the ancestors of modern echinoderms are believed to have had one genital aperture, many organisms have multiple gonopores through which eggs or sperm may be released. Regeneration Many echinoderms have great powers of regeneration. Many species routinely autotomize and regenerate arms and viscera. Sea cucumbers often discharge parts of their internal organs if they perceive themselves to be threatened, regenerating them over the course of several months. Sea urchins constantly replace spines lost through damage, while sea stars and sea lilies readily lose and regenerate their arms. 
In most cases, a single severed arm cannot grow into a new starfish in the absence of at least part of the disc. However, in a few species a single arm can survive and develop into a complete individual, and arms are sometimes intentionally detached for the purpose of asexual reproduction. During periods when they have lost their digestive tracts, sea cucumbers live off stored nutrients and absorb dissolved organic matter directly from the water. The regeneration of lost parts involves both epimorphosis and morphallaxis. In epimorphosis stem cells, either from a reserve pool or those produced by dedifferentiation, form a blastema and generate new tissues. Morphallactic regeneration involves the movement and remodelling of existing tissues to replace lost parts. Direct transdifferentiation of one type of tissue to another during tissue replacement is also observed. Reproduction Sexual reproduction Echinoderms become sexually mature after approximately two to three years, depending on the species and the environmental conditions. Almost all species have separate male and female sexes, though some are hermaphroditic. The eggs and sperm cells are typically released into open water, where fertilisation takes place. The release of sperm and eggs is synchronised in some species, usually with regard to the lunar cycle. In other species, individuals may aggregate during the reproductive season, increasing the likelihood of successful fertilisation. Internal fertilisation has been observed in three species of sea star, three brittle stars and a deep-water sea cucumber. Even at abyssal depths, where no light penetrates, echinoderms often synchronise their reproductive activity. Some echinoderms brood their eggs. This is especially common in cold water species where planktonic larvae might not be able to find sufficient food. These retained eggs are usually few in number and are supplied with large yolks to nourish the developing embryos. In starfish, the female may carry the eggs in special pouches, under her arms, under her arched body, or even in her cardiac stomach. Many brittle stars are hermaphrodites; they often brood their eggs, usually in special chambers on their oral surfaces, but sometimes in the ovary or coelom. In these starfish and brittle stars, development is usually direct to the adult form, without passing through a bilateral larval stage. A few sea urchins and one species of sand dollar carry their eggs in cavities, or near their anus, holding them in place with their spines. Some sea cucumbers use their buccal tentacles to transfer their eggs to their underside or back, where they are retained. In a very small number of species, the eggs are retained in the coelom where they develop viviparously, later emerging through ruptures in the body wall. In some crinoids, the embryos develop in special breeding bags, where the eggs are held until sperm released by a male happens to find them. Asexual reproduction One species of seastar, Ophidiaster granifer, reproduces asexually by parthenogenesis. In certain other asterozoans, adults reproduce asexually until they mature, then reproduce sexually. In most of these species, asexual reproduction is by transverse fission with the disc splitting in two. Both the lost disc area and the missing arms regrow, so an individual may have arms of varying lengths. During the period of regrowth, they have a few tiny arms and one large arm, and are thus often known as "comets". Adult sea cucumbers reproduce asexually by transverse fission. 
Holothuria parvula uses this method frequently, splitting into two a little in front of the midpoint. The two halves each regenerate their missing organs over a period of several months, but the missing genital organs are often very slow to develop. The larvae of some echinoderms are capable of asexual reproduction. This has long been known to occur among starfish and brittle stars, but has more recently been observed in a sea cucumber, a sand dollar and a sea urchin. This may be by autotomising parts that develop into secondary larvae, by budding, or by splitting transversely. Autotomised parts or buds may develop directly into fully formed larvae, or may pass through a gastrula or even a blastula stage. New larvae can develop from the preoral hood (a mound like structure above the mouth), the side body wall, the postero-lateral arms, or their rear ends. Cloning is costly to the larva both in resources and in development time. Larvae undergo this process when food is plentiful or temperature conditions are optimal. Cloning may occur to make use of the tissues that are normally lost during metamorphosis. The larvae of some sand dollars clone themselves when they detect dissolved fish mucus, indicating the presence of predators. Asexual reproduction produces many smaller larvae that escape better from planktivorous fish, implying that the mechanism may be an anti-predator adaptation. Larval development Development begins with a bilaterally symmetrical embryo, with a coeloblastula developing first. Gastrulation marks the opening of the "second mouth" that places echinoderms within the deuterostomes, and the mesoderm, which will host the skeleton, migrates inwards. The secondary body cavity, the coelom, forms by the partitioning of three body cavities. The larvae are often planktonic, but in some species the eggs are retained inside the female, while in some the female broods the larvae. The larvae pass through several stages, which have specific names derived from the taxonomic names of the adults or from their appearance. For example, a sea urchin has an 'echinopluteus' larva while a brittle star has an 'ophiopluteus' larva. A starfish has a 'bipinnaria' larva, which develops into a multi-armed 'brachiolaria' larva. A sea cucumber's larva is an 'auricularia' while a crinoid's is a 'vitellaria'. All these larvae are bilaterally symmetrical and have bands of cilia with which they swim; some, usually known as 'pluteus' larvae, have arms. When fully developed, they settle on the seabed to undergo metamorphosis, and the larval arms and gut degenerate. The left-hand side of the larva develops into the oral surface of the juvenile, while the right side becomes the aboral surface. At this stage, the pentaradial symmetry develops. A plankton-eating larva, living and feeding in the water column, is considered to be the ancestral larval type for echinoderms, but in extant echinoderms, some 68% of species develop using a yolk-feeding larva. The provision of a yolk-sac means that smaller numbers of eggs are produced, the larvae have a shorter development period and a smaller dispersal potential, but a greater chance of survival. Distribution and habitat Echinoderms are globally distributed in almost all depths, latitudes and environments in the ocean. Living echinoderms are known from between 0 to over 10,000 meters. Adults are mainly benthic, living on the seabed, whereas larvae are often pelagic, living as plankton in the open ocean. Some holothuroid adults such as Pelagothuria are pelagic. 
In the fossil record, some crinoids were pseudo-planktonic, attaching themselves to floating logs and debris. Some Paleozoic taxa displayed this life mode, before competition from organisms such as barnacles restricted the extent of the behaviour. Mode of life Locomotion Echinoderms primarily use their tube feet to move about, though some sea urchins also use their spines. The tube feet typically have a tip shaped like a suction pad in which a vacuum can be created by contraction of muscles. This combines with some stickiness from the secretion of mucus to provide adhesion. The tube feet contract and relax in waves which move along the adherent surface, and the animal moves slowly along. Brittle stars are the most agile of the echinoderms. Any one of the arms can form the axis of symmetry, pointing either forwards or back. The animal then moves in a co-ordinated way, propelled by the other four arms. During locomotion, the propelling arms can make either snake-like or rowing movements. Starfish move using their tube feet, keeping their arms almost still, including in genera like Pycnopodia where the arms are flexible. The oral surface is covered with thousands of tube feet which move out of time with each other, but not in a metachronal rhythm; in some way, however, the tube feet are coordinated, as the animal glides steadily along. Some burrowing starfish have points rather than suckers on their tube feet and they are able to "glide" across the seabed at a faster rate. Sea urchins use their tube feet to move around in a similar way to starfish. Some also use their articulated spines to push or lever themselves along or lift their oral surfaces off the substrate. If a sea urchin is overturned, it can extend its tube feet in one ambulacral area far enough to bring them within reach of the substrate and then successively attach feet from the adjoining area until it is righted. Some species bore into rock, usually by grinding away at the surface with their mouthparts. Most sea cucumber species move on the surface of the seabed or burrow through sand or mud using peristaltic movements; some have short tube feet on their under surface with which they can creep along in the manner of a starfish. Some species drag themselves along using their buccal tentacles, while others manage to swim with peristaltic movements or rhythmic flexing. Many live in cracks, hollows and burrows and hardly move at all. Some deep-water species are pelagic and can float in the water with webbed papillae forming sails or fins. The majority of feather stars (also called Comatulida or "unstalked crinoids") and some stalked forms are motile. Several stalked crinoid species are sessile, attached permanently to the substratum. Movement in most sea lilies is limited to bending (their stems can bend) and rolling and unrolling their arms; a few species can relocate themselves on the seabed by crawling. Feather stars are unattached and usually live in crevices, under corals or inside sponges with their arms the only visible part. Some feather stars emerge at night and perch themselves on nearby eminences to better exploit food-bearing currents. Many species can "walk" across the seabed, raising their body with the help of their arms, or swim using their arms. Most species of feather stars, however, are largely sedentary, seldom moving far from their chosen place of concealment. Feeding The modes of feeding vary greatly between the different echinoderm taxa.
Crinoids and some brittle stars tend to be passive filter-feeders, enmeshing suspended particles from passing water. Most sea urchins are grazers; sea cucumbers are deposit feeders; and the majority of starfish are active hunters. Crinoids catch food particles using the tube feet on their outspread pinnules, move them into the ambulacral grooves, wrap them in mucus, and convey them to the mouth using the cilia lining the grooves. The exact dietary requirements of crinoids have been little researched, but in the laboratory, they can be fed with diatoms. Basket stars are suspension feeders, raising their branched arms to collect zooplankton, while other brittle stars use several methods of feeding. Some are suspension feeders, securing food particles with mucus strands, spines or tube feet on their raised arms. Others are scavengers and detritus feeders. Others again are voracious carnivores, able to lasso their waterborne prey with a sudden encirclement by their flexible arms. The limbs then bend under the disc to transfer the food to the jaws and mouth. Many sea urchins feed on algae, often scraping off the thin layer of algae covering the surfaces of rocks with their specialised mouthparts known as Aristotle's lantern. Other species devour smaller organisms, which they may catch with their tube feet. They may also feed on dead fish and other animal matter. Sand dollars may perform suspension feeding and feed on phytoplankton, detritus, algal pieces and the bacterial layer surrounding grains of sand. Sea cucumbers are often mobile deposit or suspension feeders, using their buccal podia to actively capture food and then stuffing the particles individually into their buccal cavities. Others ingest large quantities of sediment, absorb the organic matter and pass the indigestible mineral particles through their guts. In this way they disturb and process large volumes of substrate, often leaving characteristic ridges of sediment on the seabed. Some sea cucumbers live infaunally in burrows, anterior-end down and anus on the surface, swallowing sediment and passing it through their gut. Other burrowers live anterior-end up and wait for detritus to fall into the entrances of the burrows or rake in debris from the surface nearby with their buccal podia. Nearly all starfish are detritus feeders or carnivores, though a few are suspension feeders. Small fish landing on the upper surface may be captured by pedicellariae, and dead animal matter may be scavenged, but the main prey items are living invertebrates, mostly bivalve molluscs. To feed on one of these, the starfish moves over it, attaches its tube feet and exerts pressure on the valves by arching its back. When a small gap between the valves is formed, the starfish inserts part of its stomach into the prey, secretes digestive enzymes and slowly liquefies the soft body parts. As the adductor muscle of the bivalve relaxes, more stomach is inserted and when digestion is complete, the stomach is returned to its usual position in the starfish with its now liquefied bivalve meal inside it. Other starfish evert the stomach to feed on sponges, sea anemones, corals, detritus and algal films. Antipredator defence Despite their low nutritional value and the abundance of indigestible calcite, echinoderms are preyed upon by many organisms, including bony fish, sharks, eider ducks, gulls, crabs, gastropod molluscs, other echinoderms, sea otters, Arctic foxes and humans.
Larger starfish prey on smaller ones; the great quantity of eggs and larvae that they produce forms part of the zooplankton, consumed by many marine creatures. Crinoids, on the other hand, are relatively free from predation. Antipredator defences include the presence of spines, toxins (inherent or delivered through the tube feet), and the discharge of sticky entangling threads by sea cucumbers. Although most echinoderm spines are blunt, those of the crown-of-thorns starfish are long and sharp and can cause a painful puncture wound, as the epithelium covering them contains a toxin. Because of their catch connective tissue, which can change rapidly from a flaccid to a rigid state, echinoderms are very difficult to dislodge from crevices. Some sea cucumbers have a cluster of cuvierian tubules which can be ejected as long sticky threads from their anus to entangle and permanently disable an attacker. Sea cucumbers occasionally defend themselves by rupturing their body wall and discharging the gut and internal organs. Starfish and brittle stars may undergo autotomy when attacked, detaching an arm; this may distract the predator for long enough for the animal to escape. Some starfish species can swim away from danger. Ecology Echinoderms are numerous invertebrates whose adults play an important role in benthic ecosystems, while the larvae are a major component of the plankton. Among the ecological roles of adults are the grazing of sea urchins, the sediment processing of heart urchins, and the suspension and deposit feeding of crinoids and sea cucumbers. Some sea urchins can bore into solid rock, destabilising rock faces and releasing nutrients into the ocean. Coral reefs are also bored into in this way, but the rate of accretion of carbonate material is often greater than the erosion produced by the sea urchin. Echinoderms sequester about 0.1 gigatonnes of carbon dioxide per year as calcium carbonate, making them important contributors to the global carbon cycle. Echinoderms sometimes have large population swings which can transform ecosystems. In 1983, for example, the mass mortality of the tropical sea urchin Diadema antillarum in the Caribbean caused a change from a coral-dominated reef system to an alga-dominated one. Sea urchins are among the main herbivores on reefs and there is usually a fine balance between the urchins and the kelp and other algae on which they graze. A diminution of the numbers of predators (otters, lobsters and fish) can result in an increase in urchin numbers, causing overgrazing of kelp forests, resulting in an alga-denuded "urchin barren". On the Great Barrier Reef, an unexplained increase in the numbers of crown-of-thorns starfish (Acanthaster planci), which graze on living coral tissue, has greatly increased coral mortality and reduced coral reef biodiversity. Taxonomy and evolution The characteristics of adult echinoderms are the possession of a water vascular system with external tube feet and a stereom endoskeleton. Stereom is a calcareous material consisting of ossicles connected by a mesh of collagen fibres, which is unique to this phylum. Phylogeny Echinoderm phylogeny has long been a contentious subject. While the relationships among extant taxa are well understood, there is no broadly accepted consensus regarding the phylum's origins or the relationships among its extinct groups. Echinoderm evolution shows a high degree of homoplasy, meaning that many features have evolved multiple times independently.
This means that many features initially assumed to indicate a genetic connection do not, in fact, do so, which has obscured the true relationships of various groups. External phylogeny Echinoderms are bilaterians, meaning that their ancestors were mirror-symmetric. Among the bilaterians, they belong to the deuterostome division, meaning that the blastopore, the first opening to form during embryo development, becomes the anus instead of the mouth. Echinoderms are the sister group of the Hemichordata, with which they form the crown group Ambulacraria. Two taxa of uncertain placement, Vetulocystida and Yanjiahella, have each been proposed as either stem-group echinoderms or stem-group ambulacrarians. Vetulocystids have also been proposed as stem-group chordates, while Yanjiahella has also been proposed to be a stem-group hemichordate. The ambulacrarian context of the echinoderms is shown below, simplified from Li et al. 2023, with the possible ambulacrarian placements of the uncertain taxa shown with dashed lines and question marks: Internal phylogeny: extant classes The extant echinoderms consist of the Crinoidea and the Eleutherozoa, the latter of which is divided into the Asterozoa and the Echinozoa. Internal phylogeny: total group The lack of a consensus cladistic phylogeny incorporating extinct echinoderm groups has resulted in the continued use of terms from Linnaean taxonomies, even when the named taxa are known to be paraphyletic and/or polyphyletic. Linnaean taxonomies Three taxonomies introduced nearly all of the traditional subphyla and class divisions that continue to be referenced in cladistic work: F. A. Bather produced the earliest widely referenced classification of both fossil and extant echinoderms in 1900, using a two-subphylum system. In 1966, the Treatise on Invertebrate Paleontology rejected Bather's classification, replacing it with a new four-subphylum scheme that had previously been proposed by H. B. Fell. James Sprinkle added a fifth subphylum to the Treatise taxonomy in 1973; his later class-level taxonomy of the five subphyla was the most recent approach cited in an early cladistic re-assessment of the phylum. Other proposed classes not included at that rank in any of the above taxonomies include:
Cryptosyringida
Somasteroidea
Stenuroidea
Coronoidea
Concentricycloidea
There are also several common alternative names involving homalozoans:
Carpoidea for Homalozoa, giving rise to the term "carpoids"
Cincta as either the senior synonym of or sole order within Homostelea
Soluta as either the senior synonym of or sole order within Homoiostelea
Calcichordata, a subphylum effectively identical to Stylophora that was central to the now-disproven calcichordate hypothesis
Cladograms According to a 2024 review, there are two main schools of thought regarding echinoderm phylogeny: one that sees pentaradiality as a plesiomorphic trait of the phylum, and another that considers it a derived trait (apomorphy). Note that neither cladogram shown below includes all of the traditional classes, or even all of the classes mentioned in the accompanying text. Pentaradiality as a plesiomorphy Supporters of pentaradiality as an initial condition of the phylum note that radial forms are the first uncontested echinoderms to appear in the fossil record. They also define homologies of echinoderm anatomy based on a division of the skeleton into two parts: those that are and those that are not associated with the water vascular system.
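For readers who want to handle these relationships programmatically, the extant-class topology stated above (Crinoidea as the sister group of the Eleutherozoa, which in turn splits into the Asterozoa and the Echinozoa) can be written compactly in Newick notation. The following Python sketch is purely illustrative and encodes only that statement; it does not reproduce the fossil-inclusive cladograms discussed below, and the small helper function is a hypothetical name introduced here for demonstration.

# Illustrative only: encodes the extant-class topology stated above
# (Crinoidea as sister to Eleutherozoa, which splits into Asterozoa
# and Echinozoa). It does not reproduce the published cladograms
# discussed elsewhere in this section.
extant_echinoderm_tree = ("Crinoidea", ("Asterozoa", "Echinozoa"))

# The same topology as a Newick string, a common plain-text format
# for phylogenetic trees.
newick = "(Crinoidea,(Asterozoa,Echinozoa)Eleutherozoa)Echinodermata;"

def leaves(node):
    # Return the terminal taxa of a nested-tuple tree.
    if isinstance(node, str):
        return [node]
    result = []
    for child in node:
        result.extend(leaves(child))
    return result

print(leaves(extant_echinoderm_tree))  # ['Crinoidea', 'Asterozoa', 'Echinozoa']
print(newick)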
The following cladogram is based on David & Mooi (1999) and David, Lefebvre, Mooi, and Parsley (2000): In this theory, the controversial Ediacaran fossil Arkarua is tentatively placed as the sister to all other echinoderms. Helicoplacoidea and Edrioasteroidea join it in the stem group. Pelmatozoa, Eocrinoidea, and Cystoidea are shown to be paraphyletic while Homalozoa is polyphyletic. Pentaradiality as an apomorphy Those who find pentaradiality to be derived incorporate the recently discovered fossils Ctenoimbricata (seen as a possible sister to all other echinoderms) and Helicocystis (seen as bridging the triradial helicoplacoids and the pentaradial crown group). They cite research indicating that the early appearance of pentaradial forms is likely due to an incomplete fossil record, as well as multiple studies showing non-radial forms as an early stem group, to argue that this phylogeny represents an emerging consensus. They reject Arkarua as an echinoderm due to its lack of stereom and its possession of true pentaradiality instead of the 2-1-2 pseudo-pentaradiality seen in all early forms. The following cladogram is based on Rahman & Zamora (2024), incorporating class and subphylum names from the text: Here, Homalozoa (with uncertain placement of Stylophora) is shown to be a paraphyletic assemblage along the stem group, followed by Helicoplacoidea and then Helicocystis as the sister of the crown group. The details of Blastozoa vs Crinozoa are not addressed, as they are represented only by the classes Eocrinoidea and Crinoidea, respectively, and the overall nature of Pelmatozoa remains unresolved. The four-way polytomy including the Eleutherozoa and Crinoidea shows that either Camptostroma or Gogia, or both, could prove to be outside of the crown group. Fossil history Echinoderms have a rich fossil record due to their mineralized endoskeletons. Possible early echinoderms The three oldest known candidate echinoderms all lack stereom and other echinoderm apomorphies, making their inclusion in the phylum controversial. The oldest potential echinoderm fossil is Arkarua from the late Ediacaran of Australia, circa 555 Ma. These fossils are disc-like, with radial ridges on the rim and a five-pointed central depression marked with radial lines. However, the fossils have no stereom or internal structure indicating a water vascular system, so they cannot be conclusively identified. Additionally, all known early pentaradial echinoderms are pseudo-pentaradial in a 2-1-2 pattern, and true pentaradiality of the kind seen in Arkarua is not seen until the emergence of the Eleutherozoa. The next possible echinoderms are the vetulocystids, which date to the early to mid Cambrian, 541–501 Ma. While the youngest vetulocystid, Thylacocercus, displays some characteristics that could be intermediate between older vetulocystids and Yanjiahella, its discoverers consider vetulocystids more likely to be stem ambulacrarians than stem echinoderms. Yanjiahella, from the Fortunian (circa 539–529 Ma), is unlike the older fossils in that it has a plated theca, albeit one without evidence of stereom. To some, this is a reason to place it as a stem ambulacrarian or stem hemichordate. Others argue that absence of evidence for stereom is not evidence of absence, and consider a stem echinoderm position more likely. Echinoderms in the Cambrian and Ordovician The first universally accepted echinoderms appear in the Lower Cambrian period; asterozoans appeared in the Ordovician, while the crinoids were a dominant group in the Paleozoic.
It is hypothesised that the ancestor of all echinoderms was a simple, motile, bilaterally symmetrical animal with a mouth, gut and anus. This ancestral organism adopted an attached mode of life with suspension feeding, and developed radial symmetry. Even so, the larvae of all echinoderms are bilaterally symmetrical, and all develop radial symmetry at metamorphosis. Like their ancestor, the starfish and crinoids still attach themselves to the seabed while changing to their adult form. The first known echinoderms were non-motile, but evolved into animals able to move freely. These soon developed endoskeletal plates with stereom structure, and external ciliary grooves for feeding. The Paleozoic echinoderms were globular, attached to the substrate and orientated with their oral surfaces facing upwards. These early echinoderms had ambulacral grooves extending down the side of the body, fringed on either side by brachioles, like the pinnules of a modern crinoid. Eventually, the mobile eleutherozoans reversed their orientation to become mouth-downward. Before this happened, the podia probably had a feeding function, as they do in the crinoids today. The locomotor function of the podia came later, when the re-orientation of the mouth brought the podia into contact with the substrate for the first time. Use by humans As food and medicine In 2019, 129,052 tonnes of echinoderms were harvested. The majority of these were sea cucumbers (59,262 tonnes) and sea urchins (66,341 tonnes). These are used mainly for food, but also in traditional Chinese medicine. Sea cucumbers are considered a delicacy in some countries of southeast Asia; as such, they are in imminent danger of being over-harvested. Popular species include the pineapple roller Thelenota ananas (susuhan) and the red sea cucumber Holothuria edulis. These and other species are colloquially known as bêche de mer or trepang in China and Indonesia. The sea cucumbers are boiled for twenty minutes and then dried, first naturally and later over a fire, which gives them a smoky tang. In China, they are used as a basis for gelatinous soups and stews. Both male and female gonads of sea urchins are consumed, particularly in Japan and France. The taste is described as soft and melting, like a mixture of seafood and fruit. Sea urchin breeding trials have been undertaken to try to compensate for overexploitation. In research Because of their robust larval growth, sea urchins are widely used in research, particularly as model organisms in developmental biology and ecotoxicology. Strongylocentrotus purpuratus and Arbacia punctulata are used for this purpose in embryological studies. The large size and the transparency of the eggs enable the observation of sperm cells in the process of fertilising ova. The arm regeneration potential of brittle stars is being studied in connection with understanding and treating neurodegenerative diseases in humans. Genomic data relevant to echinoderm model organisms are collected in Echinobase. Currently, four species of echinoderms are fully supported (gene pages, BLAST, JBrowse tracks, genome downloads): Strongylocentrotus purpuratus (purple sea urchin), Lytechinus variegatus (green sea urchin), Patiria miniata (bat star) and Acanthaster planci (crown-of-thorns sea star). Partially supported species (no gene pages) include Lytechinus pictus (painted sea urchin), Asterias rubens (sugar star) and Anneissia japonica (feather star crinoid).
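The Echinobase support tiers listed above lend themselves to a simple data-structure summary. The sketch below is a minimal illustration in Python: the species names and support levels are taken from the text, but the dictionary layout and field names are assumptions made here for demonstration and do not reflect Echinobase's actual schema or API.

# Illustrative summary of the Echinobase support levels described above.
# The dictionary layout is an assumption for demonstration only; it is
# not Echinobase's actual data model or API.
echinobase_species = {
    "fully_supported": [  # gene pages, BLAST, JBrowse tracks, genome downloads
        "Strongylocentrotus purpuratus",  # purple sea urchin
        "Lytechinus variegatus",          # green sea urchin
        "Patiria miniata",                # bat star
        "Acanthaster planci",             # crown-of-thorns sea star
    ],
    "partially_supported": [  # no gene pages
        "Lytechinus pictus",    # painted sea urchin
        "Asterias rubens",      # sugar star
        "Anneissia japonica",   # feather star crinoid
    ],
}

for tier, species_list in echinobase_species.items():
    print(f"{tier}: {len(species_list)} species")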
Other uses The calcareous tests or shells of echinoderms are used as a source of lime by farmers in areas where limestone is unavailable, and some are used in the manufacture of fish meal. About 4,000 tons of the animals are used annually for these purposes. This trade is often carried out in conjunction with shellfish farmers, for whom the starfish pose a major threat by eating their cultured stock. Other uses for the starfish they recover include the manufacture of animal feed, composting and the preparation of dried specimens for the arts and crafts trade.
Gastrotrich
The gastrotrichs (phylum Gastrotricha), commonly referred to as hairybellies or hairybacks, are a group of microscopic (0.06–3.0 mm), cylindrical, acoelomate animals, and are widely distributed and abundant in freshwater and marine environments. They are mostly benthic and live within the periphyton, the layer of tiny organisms and detritus that is found on the seabed and the beds of other water bodies. The majority live on and between particles of sediment or on other submerged surfaces, but a few species are terrestrial and live on land in the film of water surrounding grains of soil. Gastrotrichs are divided into two orders, the Macrodasyida which are marine (except for two species), and the Chaetonotida, some of which are marine and some freshwater. Nearly 800 species of gastrotrich have been described. Gastrotrichs have a simple body plan with a head region, with a brain and sensory organs, and a trunk with a simple gut and the reproductive organs. They have adhesive glands with which they can anchor themselves to the substrate and cilia with which they move around. They feed on detritus, sucking up organic particles with their muscular pharynx. They are hermaphrodites, the marine species producing eggs which develop directly into miniature adults. The freshwater species are parthenogenetic, producing unfertilised eggs, and at least one species is viviparous. Gastrotrichs mature with great rapidity and have lifespans of only a few days. Etymology and taxonomy The name gastrotrich comes from Greek γαστήρ, gaster 'stomach' and θρίξ, thrix 'hair'. The name was coined by the Russian zoologist Élie Metchnikoff in 1865. The common name hairyback apparently arises from a mistranslation of gastrotrich. The relationship of gastrotrichs to other phyla is unclear. Morphology suggests that they are close to the Gnathostomulida, the Rotifera, or the Nematoda. On the other hand, genetic studies place them as close relatives of the Platyhelminthes, the Ecdysozoa or the Lophotrochozoa. As of 2011, around 790 species have been described. The phylum contains a single class, divided into two orders: the Macrodasyida and the Chaetonotida. Edward Ruppert et al. report that the Macrodasyida are wholly marine, but two rare and poorly known species, Marinellina flagellata and Redudasys fornerise, are known from fresh water. The Chaetonotida comprises both marine and freshwater species. Anatomy Gastrotrichs vary in size from about in body length. They are bilaterally symmetrical, with a transparent strap-shaped or bowling pin-shaped body, arched dorsally and flattened ventrally. The anterior end is not clearly defined as a head but contains the sense organs, brain and pharynx. Cilia are found around the mouth and on the ventral surface of the head and body. The trunk contains the gut and the reproductive organs. At the posterior end of the body are two projections with cement glands that serve in adhesion. This is a double-gland system where one gland secretes the glue and another secretes a de-adhesive agent to sever the connection. In the Macrodasyida, there are additional adhesive glands at the anterior end and on the sides of the body. The body wall consists of a cuticle, an epidermis and longitudinal and circular bands of muscle fibres. In some primitive species, each epidermal cell has a single cilium, a feature shared only by the gnathostomulans. The whole ventral surface of the animal may be ciliated or the cilia may be arranged in rows, patches or transverse bands. 
The cuticle is locally thickened in some gastrotrichs and forms scales, hooks and spines. There is no coelom (body cavity) and the interior of the animal is filled with poorly differentiated connective tissue. In the macrodasyidans, Y-shaped cells, each containing a vacuole, surround the gut and may function as a hydrostatic skeleton. The mouth is at the anterior end and opens into an elongated muscular pharynx with a triangular or Y-shaped lumen, lined by myoepithelial cells. The pharynx opens into a cylindrical intestine, which is lined with glandular and digestive cells. The anus is located on the ventral surface close to the posterior of the body. In some species, there are pores in the pharynx opening to the ventral surface; these contain valves and may allow egestion of any excess water swallowed while feeding. In the chaetonotidans, the excretory system consists of a single pair of protonephridia, which open through separate pores on the lateral underside of the animal, usually in the midsection of the body. In the macrodasyidans, there are several pairs of these opening along the side of the body. Nitrogenous waste is probably excreted through the body wall, as part of respiration, and the protonephridia are believed to function mainly in osmoregulation. Unusually, the protonephridia do not take the form of flame cells, but, instead, the excretory cells consist of a skirt surrounding a series of cytoplasmic rods that in turn enclose a central flagellum. These cells, termed cyrtocytes, connect to a single outlet cell which passes the excreted material into the protonephridial duct. As is typical for such small animals, there are no respiratory or circulatory organs. The nervous system is relatively simple. The brain consists of two ganglia, one on either side of the pharynx, connected by a commissure. From these lead a pair of nerve cords which run along either side of the body beside the longitudinal muscle bands. The primary sensory organs are the bristles and ciliated tufts of the body surface which function as mechanoreceptors. There are also ciliated pits on the head, simple ciliary photoreceptors and fleshy appendages which act as chemoreceptors. Distribution and habitat Gastrotrichs are cosmopolitan in distribution. They inhabit the interstitial spaces between particles in marine and freshwater environments, the surfaces of aquatic plants and other submerged objects and the surface film of water surrounding soil particles on land. They are also found in stagnant pools and anaerobic mud, where they thrive even in the presence of hydrogen sulfide. When pools dry up they can survive periods of desiccation as eggs, and some species are capable of forming cysts in harsh conditions. In marine sediments they have been known to reach 364 individuals per making them the third most common invertebrate in the sediment after nematodes and harpacticoid copepods. In freshwater they may reach a density of 158 individuals per and are the fifth most abundant group of invertebrates in the sediment. Behaviour and ecology In marine and freshwater environments, gastrotrichs form part of the benthic community. They are detritivores and are microphagous: they feed by sucking small dead or living organic materials, diatoms, bacteria and small protozoa into their mouths by the muscular action of the pharynx. They are themselves eaten by turbellarians and other small macrofauna. 
As in many microscopic animals, gastrotrich locomotion is primarily powered by hydrostatics, but movement occurs through different methods in different members of the group. Chaetonotids only have adhesive glands at the back and, in them, locomotion typically proceeds in a smooth gliding manner; the whole body is propelled forward by the rhythmic action of the cilia on the ventral surface. In the pelagic chaetonotid genus Stylochaeta, however, movement proceeds in jerks as the long, muscle-activated spines are forced rhythmically towards the side of the body. By contrast with chaetonotids, macrodasyidans typically have multiple adhesive glands and move forward with a creeping action similar to that of a "looper" caterpillar. In response to a threat, the head and trunk can be rapidly pulled backwards, or the creeping movement can be reversed. Muscular action is important when the animal turns sideways and during copulation, when two individuals twine around each other. Reproduction and lifespan Gastrotrich reproduction and reproductive behaviour have been little studied. That of macrodasyids probably most closely resembles that of the ancestral lineage, and these more primitive gastrotrichs are simultaneous hermaphrodites, possessing both male and female sex organs. There is generally a single pair of gonads, with the anterior portion containing sperm-producing cells and the posterior portion producing ova. The sperm is sometimes packaged in spermatophores and is released through male gonopores that open, often temporarily, on the underside of the animal, roughly two-thirds of the way along the body. A copulatory organ on the tail collects the sperm and transfers it to the partner's seminal receptacle through the female gonopore. Details of the process and the behaviour involved vary with the species, and there is a range of different accessory reproductive organs. During copulation, the "male" individual uses his copulatory organ to transfer sperm to his partner's gonopore, and fertilisation is internal. The fertilised eggs are released by rupture of the body wall, which afterwards repairs itself. As is the case in most protostomes, development of the embryo is determinate, with each cell destined to become a specific part of the animal's body. At least one species of gastrotrich, Urodasys viviparus, is viviparous. Many species of chaetonotid gastrotrichs reproduce entirely by parthenogenesis. In these species, the male portions of the reproductive system are degenerate and non-functional, or, in many cases, entirely absent. Though the eggs have a diameter of less than 50 μm, they are still very large in comparison with the animals' size. Some species are capable of laying eggs that remain dormant during times of desiccation or low temperatures; these species, however, are also able to produce regular eggs, which hatch in one to four days, when environmental conditions are more favourable. The eggs of all gastrotrichs undergo direct development and hatch into miniature versions of the adult. The young typically reach sexual maturity in about three days. In the laboratory, Lepidodermella squamatum has lived for up to forty days, producing four or five eggs during the first ten days of life. Gastrotrichs demonstrate eutely, each species having an invariant, genetically fixed number of cells as adults. Cell division ceases at the end of embryonic development, and further growth is solely due to cell enlargement.
Classification Gastrotricha is divided into two orders and a number of families:
Order Macrodasyida Remane, 1925 [Rao and Clausen, 1970]
Family Cephalodasyidae Hummon & Todaro, 2010
Genus Cephalodasys Remane, 1926
Genus Dolichodasys Gagne, 1977
Genus Megadasys Schmidt, 1974
Genus Mesodasys Remane, 1951
Genus Paradasys Remane, 1934
Genus Pleurodasys Remane, 1927
Family Dactylopodolidae Strand, 1929
Genus Dactylopodola Strand, 1929
Genus Dendrodasys Wilke, 1954
Genus Dendropodola Hummon, Todaro & Tongiorgi, 1992
Family Lepidodasyidae Remane, 1927
Genus Lepidodasys Remane, 1926
Family Macrodasyidae Remane, 1926
Genus Macrodasys Remane, 1924
Genus Urodasys Remane, 1926
Family Planodasyidae Rao & Clausen, 1970
Genus Crasiella Clausen, 1968
Genus Planodasys Rao & Clausen, 1970
Family Redudasyidae Todaro, Dal Zotto, Jondelius, Hochberg et al., 2012
Genus Anandrodasys Todaro, Dal Zotto, Jondelius, Hochberg et al., 2012
Genus Redudasys Kisielewski, 1987
Family Thaumastodermatidae Remane, 1927
Subfamily Diplodasyinae Ruppert, 1978
Genus Acanthodasys Remane, 1927
Genus Diplodasys Remane, 1927
Subfamily Thaumastodermatinae Remane, 1927
Genus Hemidasys Claparède, 1867
Genus Oregodasys Hummon, 2008 =(Platydasys Remane, 1927)
Genus Pseudostomella Swedmark, 1956
Genus Ptychostomella Remane, 1926
Genus Tetranchyroderma Remane, 1926
Genus Thaumastoderma Remane, 1926
Family Turbanellidae Remane, 1927
Genus Desmodasys Clausen, 1965
Genus Dinodasys Remane, 1927
Genus Paraturbanella Remane, 1927
Genus Prostobuccantia Evans & Hummon, 1991
Genus Pseudoturbanella d'Hondt, 1968
Genus Turbanella Schultze, 1853
Family Xenodasyidae Todaro, Guidi, Leasi & Tongiorgi, 2006
Genus Chordodasiopsis Todaro, Guidi, Leasi & Tongiorgi, 2006
Genus Xenodasys Swedmark, 1967
Incertae sedis
Genus Marinellina Ruttner-Kolisko, 1955
Order Chaetonotida Remane, 1925 [Rao and Clausen, 1970]
Suborder Multitubulatina d'Hondt, 1971
Family Neodasyidae Remane, 1929
Genus Neodasys Remane, 1927
Suborder Paucitubulatina d'Hondt, 1971
Family Chaetonotidae Gosse, 1864
Subfamily Chaetonotinae Kisielewski, 1991
Genus Arenotus Kisielewski, 1987
Genus Aspidiophorus Voigt, 1903
Genus Caudichthydium Schwank, 1990
Genus Chaetonotus Ehrenberg, 1830
Genus Fluxiderma d'Hondt, 1974
Genus Ichthydium Ehrenberg, 1830
Genus Halichaetonotus Remane, 1936
Genus Heterolepidoderma Remane, 1927
Genus Lepidochaetus Kisielewski 1991
Genus Lepidodermella Blake, 1933
Genus Polymerurus Remane, 1927
Genus Rhomballichthys Schwank, 1990
Subfamily Undulinae Kisielewski 1991
Genus Undula Kisielewski 1991
Family Dasydytidae Daday, 1905
Genus Anacanthoderma Marcolongo, 1910
Genus Chitonodytes Remane, 1936
Genus Dasydytes Gosse, 1851
Genus Haltidytes Remane 1936
Genus Ornamentula Kisielewski 1991
Genus Setopus Grünspan, 1908
Genus Stylochaeta Hlava, 1905
Family Dichaeturidae Remane, 1927
Genus Dichaetura Lauterborn, 1913
Family Muselliferidae Leasi & Todaro, 2008
Genus Diuronotus Todaro, Kristensen & Balsamo, 2005
Genus Musellifer Hummon, 1969
Family Neogosseidae Remane, 1927
Genus Neogossea Remane, 1927
Genus Kijanebalola Beauchamp, 1932
Family Proichthydiidae Remane, 1927
Genus Proichthydium Cordero, 1918
Genus Proichthydioides Sudzuki, 1971
Family Xenotrichulidae Remane, 1927
Subfamily Draculiciterinae Ruppert, 1979
Genus Draculiciteria Hummon, 1974
Subfamily Xenotrichulinae Remane, 1927
Genus Heteroxenotrichula Wilke, 1954
Genus Xenotrichula Remane, 1927
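As a brief illustration of how a ranked classification such as the one above can be handled programmatically, the sketch below represents a small excerpt of it as a nested Python dictionary. Only a few representative families are included, and the structure chosen here is an assumption made for demonstration, not an authoritative encoding of gastrotrich taxonomy.

# Small excerpt of the classification above as a nested dictionary.
# Representative families only; the layout is illustrative, not an
# authoritative or complete encoding.
gastrotricha = {
    "Macrodasyida": {
        "families": ["Cephalodasyidae", "Macrodasyidae", "Turbanellidae"],
    },
    "Chaetonotida": {
        "Multitubulatina": {"families": ["Neodasyidae"]},
        "Paucitubulatina": {"families": ["Chaetonotidae", "Dasydytidae"]},
    },
}

# Example query: list the suborders of Chaetonotida given in the excerpt.
print(list(gastrotricha["Chaetonotida"]))  # ['Multitubulatina', 'Paucitubulatina']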
Hemichordate
Hemichordata is a phylum which consists of triploblastic, eucoelomate, and bilaterally symmetrical marine deuterostome animals, generally considered the sister group of the echinoderms. They appear in the Lower or Middle Cambrian and include two main classes: Enteropneusta (acorn worms) and Pterobranchia. A third class, Planctosphaeroidea, is known only from the larva of a single species, Planctosphaera pelagica. The class Graptolithina, formerly considered extinct, is now placed within the pterobranchs, represented by a single living genus, Rhabdopleura. Acorn worms are solitary worm-shaped organisms. They generally live in burrows (the earliest secreted tubes) and are deposit feeders, but some species are pharyngeal filter feeders, while the family are free-living detritivores. Many are well known for their production and accumulation of various halogenated phenols and pyrroles. Pterobranchs are filter-feeders, mostly colonial, living in a collagenous tubular structure called a coenecium. The discovery of the stem group hemichordate Gyaltsenglossus shows that early hemichordates combined aspects of the two morphologically disparate classes. Anatomy The body plan of hemichordates is characterized by a muscular organization. The anteroposterior axis is divided into three parts: the anterior prosome, the intermediate mesosome, and the posterior metasome. The body of acorn worms is worm-shaped and divided into an anterior proboscis, an intermediate collar, and a posterior trunk. The proboscis is a muscular and ciliated organ used in locomotion and in the collection and transport of food particles. The mouth is located between the proboscis and the collar. The trunk is the longest part of the animal. It contains the pharynx, which is perforated with gill slits (or pharyngeal slits), the oesophagus, a long intestine, and a terminal anus. It also contains the gonads. A post-anal tail is present in juvenile members of the acorn worm family Harrimaniidae. The prosome of pterobranchs is specialized into a muscular and ciliated cephalic shield used in locomotion and in secreting the coenecium. The mesosome extends into one pair (in the genus Rhabdopleura) or several pairs (in the genus Cephalodiscus) of tentaculated arms used in filter feeding. The metasome, or trunk, contains a looped digestive tract and gonads, and extends into a contractile stalk that connects individuals to the other members of the colony, produced by asexual budding. In the genus Cephalodiscus, asexually produced individuals stay attached to the contractile stalk of the parent individual until completing their development. In the genus Rhabdopleura, zooids are permanently connected to the rest of the colony via a common stolon system. They have a diverticulum of the foregut called a stomochord, previously thought to be related to the chordate notochord, but this is most likely the result of convergent evolution rather than a homology. A hollow neural tube exists among some species (at least in early life), probably a primitive trait that they share with the common ancestor of chordates and the rest of the deuterostomes. Hemichordates have a nerve net and longitudinal nerves, but no brain. Some species biomineralize in calcium carbonate. Circulatory system Hemichordates have an open circulatory system. The heart vesicle is located dorsally within the proboscis complex, and does not contain any blood. Instead it moves the blood indirectly by pulsating against the dorsal blood vessel.
Development Together with the echinoderms, the hemichordates form the Ambulacraria, which are the closest extant phylogenetic relatives of chordates. Thus these marine worms are of great interest for the study of the origins of chordate development. There are several species of hemichordates, with a moderate diversity of embryological development among these species. Hemichordates are classically known to develop in two ways, both directly and indirectly. Hemichordates are a phylum composed of two classes, the enteropneusts and the pterobranchs, both being forms of marine worm. The enteropneusts have two developmental strategies: direct and indirect development. The indirect developmental strategy includes an extended pelagic planktotrophic tornaria larval stage, which means that this hemichordate exists in a larval stage that feeds on plankton before turning into an adult worm. The pterobranch genus most extensively studied is Rhabdopleura, from Plymouth, England and from Bermuda. The following details the development of two widely studied species of the phylum, Saccoglossus kowalevskii and Ptychodera flava. Saccoglossus kowalevskii is a direct developer and Ptychodera flava is an indirect developer. Most of what is known about hemichordate development has come from hemichordates that develop directly. Ptychodera flava P. flava’s early cleavage pattern is similar to that of S. kowalevskii. The first and second cleavages from the single-cell zygote of P. flava are equal cleavages, are orthogonal to each other, and both include the animal and vegetal poles of the embryo. The third cleavage is equal and equatorial, so that the embryo has four blastomeres in both the vegetal and the animal pole. The fourth division occurs mainly in blastomeres in the animal pole, which divide transversally as well as equally to make eight blastomeres. The four vegetal blastomeres divide equatorially but unequally, and they give rise to four big macromeres and four smaller micromeres. Once this fourth division has occurred, the embryo has reached the 16-cell stage. P. flava has a 16-cell embryo with four vegetal micromeres, eight animal mesomeres and four larger macromeres. Further divisions occur until P. flava finishes the blastula stage and goes on to gastrulation. The animal mesomeres of P. flava go on to give rise to the larva’s ectoderm; animal blastomeres also appear to give rise to these structures, though the exact contribution varies from embryo to embryo. The macromeres give rise to the posterior larval ectoderm and the vegetal micromeres give rise to the internal endomesodermal tissues. Studies done on the potential of the embryo at different stages have shown that at both the two- and four-cell stages of development P. flava blastomeres can go on to give rise to a tornaria larva, so the fates of these embryonic cells do not seem to be established until after this stage. Saccoglossus kowalevskii Eggs of S. kowalevskii are oval in shape and become spherical after fertilization. The first cleavage occurs from the animal to the vegetal pole and is usually equal, though quite often it can also be unequal. The second cleavage, which brings the embryo to the four-cell stage, also occurs from the animal to the vegetal pole in an approximately equal fashion, though, as with the first cleavage, an unequal division is possible. The cleavage that produces the eight-cell stage is latitudinal, so that each cell from the four-cell stage goes on to make two cells.
The fourth division occurs first in the cells of the animal pole, which end up making eight blastomeres (mesomeres) that are not radially symmetric; the four vegetal pole blastomeres then divide to make a tier of four large blastomeres (macromeres) and four very small blastomeres (micromeres). The fifth cleavage occurs first in the animal cells and then in the vegetal cells to give an embryo of 32 blastomeres. The sixth cleavage occurs in a similar order and completes the 64-cell stage; finally, the seventh cleavage marks the end of the cleavage period, producing a blastula of 128 blastomeres. This structure then undergoes the gastrulation movements that determine the body plan of the resulting gill-slit larva, which ultimately gives rise to the adult acorn worm. Genetic control of dorsal-ventral hemichordate patterning Much of the genetic work done on hemichordates has been done to make comparisons with chordates, so many of the genetic markers identified in this group are also found in chordates or are homologous to chordates in some way. Studies of this nature have been done particularly on S. kowalevskii, and like chordates, S. kowalevskii has dorsalizing bmp-like factors such as bmp 2/4, which is homologous to Drosophila’s decapentaplegic (dpp). The expression of bmp2/4 begins at the onset of gastrulation on the ectodermal side of the embryo, and as gastrulation progresses its expression is narrowed down to the dorsal midline, but it is not expressed in the post-anal tail. The bmp antagonist chordin is also expressed in the endoderm of gastrulating S. kowalevskii. Besides these well-known dorsalizing factors, further molecules known to be involved in dorsal-ventral patterning are also present in S. kowalevskii, such as a netrin that groups with netrin gene classes 1 and 2. Netrin is important in the patterning of the nervous system in chordates, as is the molecule Shh, but S. kowalevskii was found to have only one hh gene, and it appears to be expressed along the ventral midline, in a region unlike that in which it is usually expressed in developing chordates. Classification Hemichordata are divided into two classes: the Enteropneusta, commonly called acorn worms, and the Pterobranchia, which includes the graptolites. A third class, Planctosphaeroidea, is proposed based on a single species known only from larvae. The phylum contains about 120 living species. Hemichordata appears to be sister to the Echinodermata as Ambulacraria; Xenoturbellida may be basal to that grouping. Pterobranchia may be derived from within Enteropneusta, making Enteropneusta paraphyletic. It is possible that the extinct organism Etacystis is a member of the Hemichordata, either within or with close affinity to the Pterobranchia. There are 130 described species of Hemichordata and many new species are being discovered, especially in the deep sea. Phylogeny A phylogenetic tree showing the position of the hemichordates is: The internal relationships within the hemichordates are shown below. The tree is based on 16S + 18S rRNA sequence data and phylogenomic studies from multiple sources.
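As a trivial numerical check of the cleavage series described above for S. kowalevskii, in which each successive cleavage roughly doubles the number of blastomeres until the 128-cell blastula, the sketch below simply tabulates idealized cell counts by cleavage round. It is illustrative arithmetic only and does not model the unequal, staggered divisions of the real embryo.

# Tabulates idealized blastomere counts for the cleavage series
# described above: seven cleavages, each doubling the cell number,
# ending with a 128-cell blastula. Real divisions are slightly
# unequal and staggered between animal and vegetal cells.
cells = 1
for cleavage in range(1, 8):
    cells *= 2
    print(f"cleavage {cleavage}: {cells} cells")
# cleavage 7: 128 cells -> the blastula that goes on to gastrulate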
Acanthocephala
Acanthocephala (Greek ἄκανθα, 'thorn' + κεφαλή, 'head') is a group of parasitic worms known as acanthocephalans, thorny-headed worms, or spiny-headed worms, characterized by the presence of an eversible proboscis, armed with spines, which the worm uses to pierce and hold the gut wall of its host. Acanthocephalans have complex life cycles, involving at least two hosts, which may include invertebrates, fish, amphibians, birds, and mammals. About 1,420 species have been described. The Acanthocephala were long thought to be a discrete phylum. Recent genome analysis has shown that they are descended from, and should be considered as, highly modified rotifers. This unified taxon is sometimes known as Syndermata, or simply as Rotifera, with the acanthocephalans described as a subclass of the rotifer class Hemirotatoria. History The earliest recognisable description of Acanthocephala – a worm with a proboscis armed with hooks – was made by the Italian author Francesco Redi (1684). In 1771, Joseph Koelreuter proposed the name Acanthocephala. Philipp Ludwig Statius Müller independently called them Echinorhynchus in 1776. Karl Rudolphi in 1809 formally named them Acanthocephala. Evolutionary history The oldest known remains of acanthocephalans are eggs found in a coprolite from the Late Cretaceous Bauru Group of Brazil, around 70–80 million years old, likely from a crocodyliform. The group may have originated substantially earlier. Phylogeny Acanthocephalans are highly adapted to a parasitic mode of life, and have lost many organs and structures through evolutionary processes. This makes determining relationships with other higher taxa through morphological comparison problematic. Phylogenetic analysis of the 18S ribosomal gene has revealed that the Acanthocephala are most closely related to the rotifers. They are possibly closer to the two rotifer classes Bdelloidea and Monogononta than to the other class, Seisonidea, producing the names and relationships shown in the cladogram below. The three rotifer classes and the Acanthocephala make up a clade called Syndermata. This clade is placed in the Gnathifera. A study of the gene order in the mitochondria suggests that Seisonidea and Acanthocephala are sister clades and that the Bdelloidea are the sister clade to this group. Currently the phylum is divided into four classes – Palaeacanthocephala, Archiacanthocephala, Polyacanthocephala and Eoacanthocephala. The monophyletic Archiacanthocephala are the sister taxon of a clade comprising Eoacanthocephala and the monophyletic Palaeacanthocephala. Morphology Several morphological characteristics distinguish acanthocephalans from other phyla of parasitic worms. Digestion Acanthocephalans lack a mouth or alimentary canal. This is a feature they share with the Cestoda (tapeworms), although the two groups are not closely related. Adult stages live in the intestines of their host and take up nutrients that have already been digested by the host directly through their body surface. The acanthocephalans lack an excretory system, although some species have been shown to possess flame cells (protonephridia). Proboscis The most notable feature of the Acanthocephala is the presence of an anterior, protrusible proboscis that is usually covered with spiny hooks (hence the common name: thorny-headed or spiny-headed worm). The proboscis bears rings of recurved hooks arranged in horizontal rows, and it is by means of these hooks that the animal attaches itself to the tissues of its host.
The hooks are usually of two or three shapes: longer, more slender hooks arranged along the length of the proboscis, and several rows of sturdier, shorter nasal hooks around the base of the proboscis. The proboscis is used to pierce the gut wall of the final host, and hold the parasite fast while it completes its life cycle. Like the body, the proboscis is hollow, and its cavity is separated from the body cavity by a septum or proboscis sheath. Traversing the cavity of the proboscis are muscle strands inserted into the tip of the proboscis at one end and into the septum at the other. Their contraction causes the proboscis to be invaginated into its cavity. The whole proboscis apparatus can also be, at least partially, withdrawn into the body cavity, and this is effected by two retractor muscles which run from the posterior aspect of the septum to the body wall. Some of the acanthocephalans (perforating acanthocephalans) can insert their proboscis into the intestine of the host and open the way to the abdominal cavity. Size The size of these animals varies greatly, ranging from a few millimetres in length to Macracanthorhynchus hirudinaceus, which measures from . A curious feature shared by both larva and adult is the large size of many of the cells, e.g. the nerve cells and cells forming the uterine bell. Polyploidy is common, with up to 343n having been recorded in some species. Skin The body surface of the Acanthocephala is peculiar. Externally, the skin has a thin tegument covering the epidermis, which consists of a syncytium with no cell walls. The syncytium is traversed by a series of branching tubules containing fluid and is controlled by a few wandering, amoeboid nuclei. Inside the syncytium is an irregular layer of circular muscle fibres, and within this again some rather scattered longitudinal fibres; there is no endothelium. In their microstructure the muscular fibres resemble those of nematodes. Except for the absence of the longitudinal fibres, the skin of the proboscis resembles that of the body, but the fluid-containing tubules of the proboscis are shut off from those of the body. The canals of the proboscis open into a circular vessel which runs round its base. From the circular canal, two sac-like projections called the lemnisci run into the cavity of the body, alongside the proboscis cavity. Each consists of a prolongation of the syncytial material of the proboscis skin, penetrated by canals and sheathed with a muscular coat. They seem to act as reservoirs into which the fluid that is used to keep the proboscis "erect" can withdraw when it is retracted, and from which the fluid can be driven out when the proboscis is to be expanded. Nervous system The central ganglion of the nervous system lies behind the proboscis sheath or septum. It innervates the proboscis and projects two stout trunks posteriorly which supply the body. Each of these trunks is surrounded by muscles, and this nerve-muscle complex is called a retinaculum. In the male at least there is also a genital ganglion. Some scattered papillae may possibly be sense organs. Life cycles Acanthocephalans have complex life cycles, involving a number of hosts, for both developmental and resting stages. Complete life cycles have been worked out for only 25 species. Reproduction The Acanthocephala are dioecious (an individual organism is either male or female). There is a structure called the genital ligament which runs from the posterior end of the proboscis sheath to the posterior end of the body.
In the male, two testes lie on either side of this. Each opens into a vas deferens which bears three diverticula or vesiculae seminales. The male also possesses three pairs of cement glands, found behind the testes, which pour their secretions through a duct into the vasa deferentia. These unite and end in a penis which opens posteriorly. In the female, the ovaries are found, like the testes, as rounded bodies along the ligament. From the ovaries, masses of ova dehisce into the body cavity, floating in its fluids for fertilization by the male's sperm. After fertilization, each egg contains a developing embryo. (These embryos hatch into first-stage larvae.) The fertilized eggs are brought into the uterus by the action of the uterine bell, a funnel-like opening continuous with the uterus. At the junction of the bell and the uterus there is a second, smaller opening situated dorsally. The bell "swallows" the matured eggs and passes them on into the uterus. (Immature embryos are passed back into the body cavity through the dorsal opening.) From the uterus, mature eggs leave the female's body via her oviduct, pass into the host's alimentary canal and are expelled from the host's body within feces. Release Having been expelled by the female, the acanthocephalan egg is released along with the feces of the host. For development to occur, the egg, containing the acanthor, needs to be ingested by an arthropod, usually a crustacean (there is one known life cycle which uses a mollusc as a first intermediate host). Inside the intermediate host, the acanthor is released from the egg and develops into an acanthella. It then penetrates the gut wall, moves into the body cavity, encysts, and begins transformation into the infective cystacanth stage. This form has all the organs of the adult save the reproductive ones. The parasite is released when the first intermediate host is ingested. This can be by a suitable final host, in which case the cystacanth develops into a mature adult, or by a paratenic host, in which the parasite again forms a cyst. When consumed by a suitable final host, the cystacanth excysts, everts its proboscis and pierces the gut wall. It then feeds, grows and develops its sexual organs. Adult worms then mate. The male uses the secretions of its cement glands to plug the vagina of the female, preventing subsequent matings from occurring. Embryos develop inside the female, and the life cycle repeats. Host control Thorny-headed worms begin their life cycle inside invertebrates that reside in marine or freshwater systems. One example is Polymorphus paradoxus. Gammarus lacustris, a small crustacean that inhabits ponds and rivers, is one invertebrate that P. paradoxus may occupy; ducks are one of the definitive hosts. This crustacean is preyed on by ducks and hides by avoiding light and staying away from the surface. However, infection by P. paradoxus changes its behavior and appearance in a number of ways that increase its chance of being eaten. First, infection significantly reduces G. lacustris's photophobia; as a result, it becomes attracted toward light and swims to the surface. Second, an infected organism will even go so far as to find a rock or a plant on the surface, clamp its mouth down, and latch on, making it easy prey for the duck. Finally, infection reduces the pigment distribution and amount in G. lacustris, causing the host to turn blue; in contrast to its normal brown colour, this makes the crustacean stand out and increases the chance that the duck will see it.
Experiments have shown that altered serotonin levels are likely responsible for at least some of these changes in behaviour. One experiment found that serotonin induces clinging behavior in G. lacustris similar to that seen in infected organisms. Another showed that infected G. lacustris had approximately three times as many serotonin-producing sites in its ventral nerve cord. Furthermore, experiments in closely related species of Polymorphus and Pomphorhynchus infecting other Gammarus species confirmed this relation: infected organisms were considerably more attracted to light and had higher serotonin levels, while the attraction to light could be duplicated by injections of serotonin. Effects on hosts Polymorphus spp. are parasites of seabirds, particularly the eider duck (Somateria mollissima). Heavy infections of up to 750 parasites per bird are common, causing ulceration to the gut, disease and seasonal mortality. Recent research has suggested that there is no evidence of pathogenicity of Polymorphus spp. to intermediate crab hosts. The cystacanth stage is long-lived and probably remains infectious throughout the life of the crab. Economic impact Acanthocephalosis, a disease caused by Acanthocephalus infection, is prevalent in aquaculture, occurring in Atlantic salmon, rainbow and brown trout, tilapia, and tambaqui. An increasing occurrence in Brazilian tambaqui farming has been reported, and in 2003 Acanthocephalus was first reported in cultured red snapper in Taiwan. The life cycle of Polymorphus spp. normally occurs between sea ducks (e.g. eiders and scoters) and small crabs. Infections found in commercial-sized lobsters in Canada were probably acquired from crabs that form an important dietary item of lobsters. Cystacanths occurring in lobsters can cause economic loss to fishermen. There are no known methods of prevention or control. Human infections In humans, acanthocephalan infection causes the disease acanthocephaliasis. The earliest known infection was found in a prehistoric man in Utah. This infection was dated to 1869 ± 160 BC. The species involved was thought to be Moniliformis clarki, which is still common in the area. The first report of an isolate in historic times was by Lambl in 1859, when he isolated Macracanthorhynchus hirudinaceus from a child in Prague. Lindemann in 1865 reported that this organism was commonly isolated in Russia. The reason for this was discovered by Schneider in 1871, when he found that an intermediate host, the scarabaeid beetle grub, was commonly eaten raw. The first report of clinical symptoms was by Calandruccio, who in 1888, while in Italy, infected himself by ingesting larvae. He reported gastrointestinal disturbances and shed eggs within two weeks. Subsequent natural infections have since been reported. Eight species have been isolated from humans to date. Moniliformis moniliformis is the most common isolate. Other isolates include Acanthocephalus bufonis and Corynosoma strumosum.
Loricifera
Loricifera (from Latin lorica, corselet (armour) + ferre, to bear) is a phylum of very small to microscopic marine cycloneuralian sediment-dwelling animals with 43 described species and approximately 100 more that have been collected but not yet described. Their sizes range from 100 μm to . They are characterised by a protective outer case called a lorica, and their habitat is in the spaces between marine gravel, to which they attach themselves. The phylum was discovered in 1983 by R.M. Kristensen near Roscoff, France. They are among the most recently discovered groups of animals. They attach themselves quite firmly to the substrate, which is why they remained undiscovered for so long. The first specimen was collected in the 1970s, and described in 1983. They are found at all depths, in different sediment types, and in all latitudes. Morphology The animals have a head, mouth, and digestive system, as well as the lorica. The body comprises the head (which contains the mouth and the brain), a trunk region surrounded by six plates that make up the 'lorica' or corselet, and – in between these two – the neck region. Loricifera have a well-developed brain and each scalid is individually connected to the brain by nerves. The armor-like lorica consists of a protective external shell or case of encircling plicae. There is no circulatory system and no endocrine system. Many of the larvae are acoelomate, with some adults being pseudocoelomate, and some remaining acoelomate. Development is generally direct, though there are so-called Higgins larvae, which differ from adults in several respects. As adults, the animals are gonochoric. The very complex and plastic life cycles of pliciloricids also include paedogenetic stages with different forms of parthenogenetic reproduction. Most Loricifera are dioecious, meaning there are males and females. However, a few species are known to be hermaphroditic, which means they contain both male and female reproductive organs. Fossils have been dated to the late Cambrian. Taxonomic affinity Morphological studies have traditionally placed the phylum in the Vinctiplicata with the Priapulida; this plus the Kinorhyncha constitutes the taxon Scalidophora. The three phyla share four characters – a chitinous cuticle, rings of scalids on the introvert, flosculi, and two rings of introvert retractors. However, despite a 2015 study showing the phylum's closest relatives to be the Panarthropoda, a 2022 study again showed that it belonged to the Scalidophora and indicated that further, more comprehensive genetic tests will be required to find its actual position in Ecdysozoa. Evolutionary history The loriciferans are believed to be miniaturized descendants of a larger organism, perhaps resembling the Cambrian fossil Sirilorica. However, the fossil record of this microscopic, non-mineralized group is (perhaps unsurprisingly) scarce, so it is difficult to trace the evolutionary history of the phylum in any detail. The 2017 discovery of the Cambrian Eolorica deadwoodensis may shed some light on the group's history. In anoxic environments Three species of Loricifera have been found in the oxygen-free sediments at the bottom of the L'Atalante basin in the Mediterranean Sea, more than 3,000 meters down, the first multicellular organisms known to spend their entire lives in an anoxic environment. Initially, it was thought that they were able to do this because their mitochondria act like hydrogenosomes, allowing them to respire anaerobically.
However, by 2021, questions had arisen as to whether or not they have mitochondria at all. The newly reported animals complete their life cycle in the total absence of light and oxygen, and they are less than a millimetre in size. They were collected from a deep basin at the bottom of the Mediterranean Sea, where they inhabit a nearly salt-saturated brine that, because of its density (> 1.2 g/cm3, compared with roughly 1.03 g/cm3 for ordinary seawater), does not mix with the waters above. As a consequence, this environment is completely anoxic and, owing to the activity of sulfate reducers, contains sulphide at a concentration of 2.9 mM. Despite such harsh conditions, this anoxic and sulphidic environment is teeming with microbial life, including both chemosynthetic prokaryotes that act as primary producers and a broad diversity of eukaryotic heterotrophs at the next trophic level. Taxa
Biology and health sciences
Ecdysozoa
Animals
43171
https://en.wikipedia.org/wiki/Chaetognatha
Chaetognatha
The Chaetognatha or chaetognaths (meaning bristle-jaws) are a phylum of predatory marine worms that are a major component of plankton worldwide. Commonly known as arrow worms, they are mostly nektonic; however, about 20% of the known species are benthic and can attach to algae and rocks. They are found in all marine waters, from surface tropical waters and shallow tide pools to the deep sea and polar regions. Most chaetognaths are transparent and are torpedo shaped, but some deep-sea species are orange. They range in size from . Chaetognaths were first recorded by the Dutch naturalist Martinus Slabber in 1775. As of 2021, biologists recognize 133 modern species assigned to over 26 genera and eight families. Despite the limited diversity of species, the number of individuals is large. Arrow worms are closely related to, and possibly belong within, Gnathifera, a clade of protostomes that belongs to neither Ecdysozoa nor Lophotrochozoa. Anatomy Chaetognaths are transparent or translucent dart-shaped animals covered by a cuticle. They range in length from 1.5 mm up to 105 mm in the Antarctic species Pseudosagitta gazellae. Body size, either between individuals in the same species or between different species, seems to increase with decreasing temperature. The body is divided into a distinct head, trunk, and tail. About 80% of the body is occupied by primary longitudinal muscles. Head and digestive system There are between four and fourteen hooked, grasping spines on each side of their head, flanking a hollow vestibule containing the mouth. The spines are used in hunting and are covered with a flexible hood arising from the neck region when the animal is swimming. Spines and teeth are made of α-chitin, and the head is protected by a chitinous armature. The mouth opens into a muscular pharynx, which contains glands to lubricate the passage of food. From here, a straight intestine runs the length of the trunk to an anus just forward of the tail. The intestine is the primary site of digestion and includes a pair of diverticula near the anterior end. Materials are moved about the body cavity by cilia. Waste materials are simply excreted through the skin and anus. Eukrohniid species possess an oil vacuole closely associated with the gut. This organ contains wax esters which may assist reproduction and growth outside of the production season for Eukrohnia hamata in Arctic seas. Owing to the position of the oil vacuole in the center of the digestive tract, the organ may also have implications for buoyancy, trim and locomotion. Chaetognaths are usually not pigmented; however, the intestines of some deep-sea species contain orange-red carotenoid pigments. Nervous and sensory systems The nervous system is reasonably simple and shows a typical protostome anatomy, consisting of a ganglionated nerve ring surrounding the pharynx. The brain is composed of two distinct functional domains: the anterior neuropil domain and the posterior neuropil domain. The former probably controls the head muscles moving the spines, as well as the digestive system. The latter is linked to the eyes and the corona ciliata. A putative sensory structure of unknown function, the retrocerebral organ, is also hosted by the posterior neuropil domain. The dorsal ganglion is the largest, but nerves extend from all the ganglia along the length of the body. Chaetognaths have two compound eyes, each consisting of a number of pigment-cup ocelli fused together; some deep-sea and troglobitic species have unpigmented or absent eyes. 
In addition, there are a number of sensory bristles arranged in rows along the side of the body, where they probably perform a function similar to that of the lateral line in fish. An additional, curved, band of sensory bristles lies over the head and neck. Almost all chaetognaths have "indirect" or "inverted" eyes, according to the orientation of photoreceptor cells; only some Eukrohniidae species have "direct" or "everted" eyes. A unique feature of the chaetognath eye is the lamellar structure of photoreceptor membranes, containing a grid of 35–55 nm wide circular pores. A significant mechanosensory system, composed of ciliary receptor organs, detects vibrations, allowing chaetognaths to detect the swimming motion of potential prey. Another organ on the dorsal part of the neck, the corona ciliata, is probably involved in chemoreception. Internal organs The body cavity is lined by peritoneum and therefore represents a true coelom; it is divided into one compartment on each side of the trunk, with additional compartments inside the head and tail, all separated completely by septa. Although they have a mouth with one or two rows of tiny teeth, compound eyes, and a nervous system, they have no excretory or respiratory systems. While often said to lack a circulatory system, chaetognaths do have a rudimentary hemal system resembling those of annelids. The arrow worm rhabdomeres are derived from microtubules 20 nm long and 50 nm wide, which in turn form conical bodies that contain granules and thread structures. The cone body is derived from a cilium. Locomotion The trunk bears one or two pairs of lateral fins incorporating structures superficially similar to the fin rays of fish, with which they are not homologous. Unlike those of vertebrates, these lateral fins are composed of a thickened basement membrane extending from the epidermis. An additional caudal fin covers the post-anal tail. Two chaetognath species, Caecosagitta macrocephala and Eukrohnia fowleri, have bioluminescent organs on their fins. Chaetognaths swim in short bursts using a dorso-ventral undulating body motion, in which the tail fin assists with propulsion and the body fins with stabilization and steering. Muscle movements have been described as among the fastest in metazoans. Muscles are directly excitable by electrical currents or strong K+ solutions; the main neuromuscular transmitter is acetylcholine. Reproduction and life cycle All species are hermaphroditic, carrying both eggs and sperm. Each animal possesses a pair of testes within the tail, and a pair of ovaries in the posterior region of the main body cavity. Immature sperm are released from the testes to mature inside the cavity of the tail, and then swim through a short duct to a seminal vesicle, where they are packaged into a spermatophore. During mating, each individual places a spermatophore onto the neck of its partner after rupture of the seminal vesicle. The sperm rapidly escape from the spermatophore and swim along the midline of the animal until they reach a pair of small pores just in front of the tail. These pores connect to the oviducts, into which the developed eggs have already passed from the ovaries, and it is here that fertilisation takes place. The seminal receptacles and oviducts accumulate and store spermatozoa, allowing multiple fertilisation cycles. Some benthic members of Spadellidae are known to have elaborate courtship rituals before copulation, for example Paraspadella gotoi. 
The eggs are mostly planktonic, except in a few species, such as Ferosagitta hispida, that attach their eggs to the substrate. In Eukrohnia, eggs develop in marsupial sacs or attached to algae. Eggs usually hatch after 1–3 days. Chaetognaths do not undergo metamorphosis, nor do they possess a well-defined larval stage, an unusual trait among marine invertebrates; however, there are significant morphological differences between the newborn and the adult, with respect to proportions, chitinous structures and fin development. The life spans of chaetognaths are variable but short; the longest recorded was 15 months in Sagitta friderici. Behaviour Little is known of arrow worms' behaviour and physiology, owing to the difficulty of culturing them and reconstructing their natural habitat. It is known that they feed more frequently at higher temperatures. Planktonic chaetognaths often must swim continuously, with a "hop and sink" behaviour, to keep themselves in the desired location in the water layer, and swim actively to catch prey. They all tend to keep the body slightly slanted with the head pointing downwards. They often show a "gliding" behaviour, slowly sinking for a while and then catching up with a quick movement of their fins. Benthic species usually stay attached to substrates such as rocks, algae or sea grasses, more rarely on top of or between sand grains, and act more strictly as ambush predators, staying still until prey passes by. Prey is detected by means of the ciliary fence and tuft organs, which sense vibrations; individuals of Spadella cephaloptera, for example, will attack a glass or metal probe vibrating at a suitable frequency. To catch prey, arrow worms jump forward with a strong stroke of the tail fin. Once in contact with prey, they withdraw the hood covering the grasping spines, which then form a cage around the prey and bring it into contact with the mouth. They swallow their prey whole. Ecology Chaetognaths are found in all the world's oceans, from the poles to the tropics, and also in brackish and estuarine waters. They inhabit very diverse environments, from hydrothermal vents to the deep ocean seafloor, to seagrass beds and marine caves. The majority are planktonic, and they are often the second most common component of zooplankton, with a biomass ranging between 10 and 30% of that of copepods. In the Canada Basin, chaetognaths alone represent ~13% of the zooplankton biomass. As such, they are ecologically relevant and a key food source for fishes and other predators, including commercially relevant fishes such as mackerel or sardines. 58% of known species are pelagic, while about a third of species are epibenthic or meiobenthic, or inhabit the immediate vicinity of the substrate. Chaetognaths have been recorded at depths of up to 5,000 and possibly even 6,000 meters. The highest density of chaetognaths is observed in the photic zone of shallow waters. Larger chaetognath species tend to live deeper in the water, but spend their juvenile stages higher in the water column. Arrow worms, however, engage in diel vertical migration, spending the day at greater depths to avoid predators and coming close to the surface at night. Their position in the water column can depend on light, temperature, salinity, age and food supply. They cannot swim against oceanic currents, and they are used as hydrological indicators of currents and water masses. All chaetognaths are ambush predators, preying on other planktonic animals, mostly copepods and cladocerans but also amphipods, krill and fish larvae. 
Adults can feed on younger individuals of the same species. Some species are also reported to be omnivores, feeding on algae and detritus. Chaetognaths are known to subdue prey with the neurotoxin tetrodotoxin, which is possibly synthesized by Vibrio bacterial species. Genetics Mitochondrial genome The mtDNA of the arrow worm Spadella cephaloptera was sequenced in 2004, and at the time it was the smallest metazoan mitochondrial genome known, being 11,905 base pairs long (it has since been surpassed by the mitochondrial genome of the ctenophore Mnemiopsis leidyi, which is 10,326 bp long). All mitochondrial tRNA genes are absent. The MT-ATP8 and MT-ATP6 genes are also missing. The mtDNA of Paraspadella gotoi, also sequenced in 2004, is even smaller (11,403 bp) and shows a similar pattern, lacking 21 of the 22 usually present tRNA genes and featuring only 14 of the 37 genes normally present. Chaetognaths show a unique mitochondrial genomic diversity within individuals of the same species. Phylogeny External The evolutionary relationships of chaetognaths have long been enigmatic. Charles Darwin remarked that arrow worms were "remarkable for the obscurity of their affinities". In the past, chaetognaths were traditionally, but erroneously, classed as deuterostomes by embryologists because of deuterostome-like features in the embryo. Lynn Margulis and K. V. Schwartz placed chaetognaths in the deuterostomes in their Five Kingdom classification. However, several developmental features are at odds with deuterostomes and are either akin to Spiralia or unique to Chaetognatha. Molecular phylogeny shows that Chaetognatha are, in fact, protostomes. Thomas Cavalier-Smith places them in the protostomes in his Six Kingdom classification. Similarities between chaetognaths and nematodes may support the protostome thesis; in fact, chaetognaths are sometimes regarded as a basal ecdysozoan or lophotrochozoan. Chaetognatha appears close to the base of the protostome tree in most studies of their molecular phylogeny. This may explain their deuterostome embryonic characters. If chaetognaths branched off from the protostomes before they evolved their distinctive protostome embryonic characters, they might have retained deuterostome characters inherited from early bilaterian ancestors. Thus chaetognaths may be a useful model for the ancestral bilaterian. Studies of arrow worms' nervous systems suggest they should be placed within the protostomes. According to 2017 and 2019 papers, chaetognaths either belong to or are the sister group of Gnathifera. Internal Below is a consensus evolutionary tree of Chaetognatha, based on both morphological and molecular data, as of 2021. Fossil record Due to their soft bodies, chaetognaths fossilize poorly. Even so, several fossil chaetognath species have been described. Chaetognaths first appear during the Cambrian Period. Complete body fossils have been formally described from the Lower Cambrian Maotianshan shales of Yunnan, China (Eognathacantha ercainella Chen & Huang and Protosagitta spinosa Hu) and the Middle Cambrian Burgess Shale of British Columbia (Capinatator praetermissus). A Cambrian stem-group chaetognath, Timorebestia, first described in 2024, was much larger than modern species, showing that chaetognaths occupied different roles in marine ecosystems compared to today. A more recent chaetognath, Paucijaculum samamithion Schram, has been described from the Mazon Creek biota of the Pennsylvanian of Illinois. 
Chaetognaths were once thought to be possibly related to some of the animals grouped with the conodonts. The conodonts themselves, however, have been shown to be dental elements of vertebrates. It is now thought that protoconodont elements (e.g., Protohertzina anabarica Missarzhevsky, 1973) are probably grasping spines of chaetognaths rather than teeth of conodonts. Previously, the presence of chaetognaths in the Early Cambrian was only suspected from these protoconodont elements, but more recent discoveries of body fossils have confirmed it. There is evidence that chaetognaths were important components of the oceanic food web as early as the Early Cambrian. History The first known description of a chaetognath was published by the Dutch naturalist Martinus Slabber in the 1770s; he also coined the name "arrow worm". The zoologist Henri Marie Ducrotay de Blainville also briefly mentioned probable chaetognaths, but he understood them as pelagic mollusks. The first description of a currently accepted species of chaetognath, Sagitta bipunctata, is from 1827. Among the early zoologists describing arrow worms was Charles Darwin, who took notes about them during the voyage of the Beagle and in 1844 dedicated a paper to them. In the following year, August David Krohn published an early anatomical description of Sagitta bipunctata. The term "chaetognath" was coined in 1856 by Rudolf Leuckart. He was also the first to propose that the genus Sagitta belonged to a separate group: "At the moment, it seems most natural to regard the Sagittas as representatives of a small group of their own that makes the transition from the real annelids (first of all the lumbricines) to the nematodes, and may not be unsuitably named Chaetognathi." The modern systematics of Chaetognatha begins in 1911 with Ritter-Záhony and was later consolidated by Takasi Tokioka in 1965 and Robert Bieri in 1991. Tokioka introduced the orders Phragmophora and Aphragmophora, and classified four families and six genera, for a total of 58 species, plus the extinct Amiskwia, classified as a true primitive chaetognath in a separate class, Archisagittoidea. Chaetognaths were for a while considered to belong, or to be closely allied, to the deuterostomes, but suspicions of their affinities with Spiralia or other protostomes were already being raised as early as 1986. Their affinities with protostomes were clarified in 2004 by sequencing and analysis of mtDNA. Infection by giant viruses In 2018, reanalysis of electron microscopy photographs from the 1980s allowed scientists to identify a giant virus (Meelsvirus) infecting Adhesisagitta hispida; its site of multiplication is nuclear and the virions (length: 1.25 μm) are enveloped. In 2019, reanalysis of other earlier studies showed that structures that had been taken in 1967 for bristles on the surface of the species Spadella cephaloptera, and in 2003 for bacteria infecting Paraspadella gotoi, were in fact enveloped, spindle-shaped giant viruses with a cytoplasmic site of multiplication. The viral species infecting P. gotoi, whose maximum length is 3.1 μm, has been named Klothovirus casanovai (Klotho being the Greek name of one of the three Fates, whose attribute was a spindle, and casanovai being a tribute to Prof. J.-P. Casanova, who devoted a large part of his scientific life to the study of chaetognaths). The other species has been named Megaklothovirus horridgei (in tribute to Adrian Horridge, the first author of the 1967 article). On a photograph, one of the M. 
horridgei viruses, although truncated, is 3.9 μm long, corresponding to about twice the length of the bacterium Escherichia coli. Many ribosomes are present in the virions, but their origin remains unknown (cellular, viral, or only partly viral). To date, giant viruses known to infect metazoans are exceptionally rare.
Biology and health sciences
Spiralia
Animals
43175
https://en.wikipedia.org/wiki/Conodont
Conodont
Conodonts (Greek kōnos, "cone", + odont, "tooth") are an extinct group of jawless vertebrates, classified in the class Conodonta. They are primarily known from their hard, mineralised tooth-like structures called "conodont elements" that in life were present in the oral cavity and used to process food. Rare soft tissue remains suggest that they had elongate eel-like bodies with large eyes. Conodonts were a long-lasting group with over 300 million years of existence from the Cambrian (over 500 million years ago) to the beginning of the Jurassic (around 200 million years ago). Conodont elements are highly distinctive to particular species and are widely used in biostratigraphy as indicative of particular periods of geological time. Discovery and understanding of conodonts The teeth-like fossils of the conodont were first discovered by Heinz Christian Pander and the results published in Saint Petersburg, Russia, in 1856. It was only in the early 1980s that the first fossil evidence of the rest of the animal was found (see below). In the 1990s exquisite fossils were found in South Africa in which the soft tissue had been converted to clay, preserving even muscle fibres. The presence of muscles for rotating the eyes showed definitively that the animals were primitive vertebrates. Nomenclature and taxonomic rank Through their history of study, "conodont" is a term which has been applied to both the individual fossils and to the animals to which they belonged. The original German term used by Pander was "conodonten", which was subsequently anglicized as "conodonts", though no formal latinized name was provided for several decades. MacFarlane (1923) described them as an order, Conodontes (a Greek translation), which Huddle (1934) altered to the Latin spelling Conodonta. A few years earlier, Eichenberg (1930) established another name for the animals responsible for conodont fossils: Conodontophorida ("conodont bearers"). A few other scientific names were rarely and inconsistently applied to conodonts and their proposed close relatives during 20th century, such as Conodontophoridia, Conodontophora, Conodontochordata, Conodontiformes, and Conodontomorpha. Conodonta and Conodontophorida are by far the most common scientific names used to refer to conodonts, though inconsistencies regarding their taxonomic rank still persist. Bengtson (1976)'s research on conodont evolution identified three morphological tiers of early conodont-like fossils: protoconodonts, paraconodonts, and "true conodonts" (euconodonts). Further investigations revealed that protoconodonts were probably more closely related to chaetognaths (arrow worms) rather than true conodonts. On the other hand, paraconodonts are still considered a likely ancestral stock or sister group to euconodonts. The 1981 Treatise on Invertebrate Paleontology volume on the conodonts (Part W revised, supplement 2) lists Conodonta as the name of both a phylum and a class, with Conodontophorida as a subordinate order for "true conodonts". All three ranks were attributed to Eichenberg, and Paraconodontida was also included as an order under Conodonta. This approach was criticized by Fåhraeus (1983), who argued that it overlooked Pander's historical relevance as a founder and primary figure in conodontology. Fåhraeus proposed to retain Conodonta as a phylum (attributed to Pander), with the single class Conodontata (Pander) and the single order Conodontophorida (Eichenberg). 
Subsequent authors continued to regard Conodonta as a phylum with an ever-increasing number of subgroups. With increasingly strong evidence that conodonts lie within the phylum Chordata, more recent studies generally refer to "true conodonts" as the class Conodonta, containing multiple smaller orders. Paraconodonts are typically excluded from the group, though still regarded as close relatives. In practice, Conodonta, Conodontophorida, and Euconodonta are equivalent terms and are used interchangeably. Conodont elements For a long time, the function and arrangement of conodont elements were enigmatic, since the whole animal was soft-bodied, with the sole exception of the mineralized elements. Upon the conodont animal's demise, the soft tissues would decompose and the individual conodont elements would separate. However, in instances of exceptional preservation the conodont elements may be recovered in articulation. By closely observing these rare specimens, Briggs et al. (1983) were able, for the first time, to study the anatomy of the complexes formed by the conodont elements arranged as they were in life. Other researchers have continued to revise and reinterpret this initial description. Lone elements Conodont elements consist of mineralised tooth-like structures of varying morphology and complexity. The evolution of mineralized tissues has been puzzling for more than a century. It has been hypothesized that the first mechanism of chordate tissue mineralization began either in the oral skeleton of conodonts or the dermal skeleton of early agnathans. The element array constituted a feeding apparatus that is radically different from the jaws of modern animals. They are now termed "conodont elements" to avoid confusion. The three forms of teeth, i.e., coniform cones, ramiform bars, and pectiniform platforms, probably performed different functions. For many years, conodonts were known only from enigmatic tooth-like microfossils (200 micrometers to 5 millimeters in length), which occur commonly, but not always, in isolation and were not associated with any other fossil. Until the early 1980s, conodont teeth had not been found in association with fossils of the host organism in a konservat lagerstätte. This is because the conodont animal was soft-bodied, and thus everything but the teeth was unsuited for preservation under normal circumstances. These microfossils are made of hydroxylapatite (a phosphatic mineral). The conodont elements can be extracted from rock using appropriate solvents. They are widely used in biostratigraphy. Conodont elements are also used as paleothermometers, a proxy for thermal alteration in the host rock, because under higher temperatures the phosphate undergoes predictable and permanent color changes, measured with the conodont alteration index. This has made them useful for petroleum exploration in the rocks in which they are known, dating from the Cambrian to the Late Triassic. Full apparatus The conodont apparatus may comprise a number of discrete elements, including the spathognathiform, ozarkodiniform, trichonodelliform, neoprioniodiform, and other forms. In the 1930s, the concept of conodont assemblages was described by Hermann Schmidt and by Harold W. Scott in 1934. Element types The arrangement of elements in ozarkodinids and other complex conodonts was first reconstructed from extremely well-preserved taxa by Briggs et al. (1983), although loosely articulated conodont elements are reported as early as 1971. 
Conodont elements are organized into three different groups based upon shape. These groups of shapes are termed S, M, and P elements. The S and M elements are ramiform, elongate, and comb-like structures. An individual element has a single row of many cusps running down the midline along its top side. These conodont elements are arranged towards the animal's anterior oral surface, forming an interlocking basket of cusps within the mouth. Cusps may point out towards the head of the animal, or back towards the tail. The number of S and M elements present, as well as the direction they point, may vary by taxonomic group. M (makellate) elements have a higher position in the mouth and commonly form a symmetrical shape akin to a horseshoe or pick. S elements are further divided into three subtypes:
S element - an unpaired symmetrical ramiform structure at the front of the mouth, sometimes known as an S0 element
S element - paired asymmetrical structures
S element - paired, highly asymmetrical, bipennate structures
In P elements, a pectiniform (comb-shaped) row of cusps transitions into a broad flat or ridged platform moving towards the base of the element. Platforms and cusps are only found along one side of the structure. Individual elements are oriented vertically and arranged in pairs, with platforms and cusps pointing towards the animal's midline. They occur deeper in the throat than the S and M elements. P elements are further divided into two subtypes:
Pa element - blade-like structures
Pb element - arched structures
The conodont animal Although conodont elements are abundant in the fossil record, fossils preserving soft tissues of conodont animals are known from only a few deposits in the world. One of the first possible body fossils of a conodont was that of Typhloesus, an enigmatic animal known from the Bear Gulch limestone in Montana. This possible identification was based on the presence of conodont elements with the fossils of Typhloesus. This claim was disproved, however, as the conodont elements were actually in the creature's digestive area. That animal is now regarded as a possible mollusk related to gastropods. As of 2023, there are only three described species of conodonts that have preserved trunk fossils: Clydagnathus windsorensis from the Carboniferous-aged Granton Shrimp Bed in Scotland, Promissum pulchrum from the Ordovician-aged Soom Shale in South Africa, and Panderodus unicostatus from the Silurian-aged Waukesha Biota in Wisconsin. There are other examples of conodont animals that preserve only the head region, including the eyes, known from the Silurian-aged Eramosa site in Ontario and the Triassic-aged Akkamori section in Japan. According to these fossils, conodonts had large eyes, fins with fin rays, chevron-shaped muscles and an axial line, which has been interpreted as a notochord or the dorsal nerve cord. While Clydagnathus and Panderodus had lengths only reaching , Promissum is estimated to reach in length, if it had the same proportions as Clydagnathus. Ecology Diet Because they are associated with the oral region of the conodont animal, it is accepted that conodont elements were used in the acquisition of food. Two primary hypotheses have arisen as to how this was accomplished. One hypothesis proposed that the elements acted as support structures for filamentous soft tissues. These small filaments (cilia) would be used to filter small planktonic organisms out of the water column, analogous to the cnidoblast cells of a coral or the lophophore of a brachiopod. 
Another hypothesis contends that the conodont elements were used to actively catch and process prey. S and M elements could have been independently movable, allowing prey to be captured in the oral region of the animal. Modern hagfish and lampreys scrape at flesh using keratinous blades supported by a simple but effective pulley-like system, involving a string of muscles around a cartilaginous core. An equivalent system might have been present in conodonts. S and M elements would be able to open and close at will to firmly grasp or pinch at prey, before rotating back to consume the prey. The blade-like P elements deeper in the throat would process the food by slicing against their counterparts like a pair of scissors, or grinding against each other like molar teeth. Current consensus supports the latter hypothesis, in which the elements are used for predation rather than suspension feeding. One line of evidence for this is the isometric growth pattern exhibited by S, M, and P elements. If the conodont animal had relied upon a filter-feeding strategy, this growth pattern would not have provided the surface area needed to support ciliated tissue as the animal grew. There is some evidence for cartilaginous structures similar to those present in modern jawless fish, which are both predators and scavengers. Wear on some conodont elements suggests that they functioned like teeth, with wear marks likely created both by food and by occlusion with other elements. It is possible that multiple feeding strategies may have arisen in different groups of conodonts, as they are a diverse clade. A 2009 paper suggested that the genus Panderodus may have utilized venom in the acquisition of prey. Evidence of longitudinal grooves is present on some conodont elements associated with the feeding apparatus of this particular animal. These sorts of grooves are analogous to those present in some extant groups of venomous vertebrates. Lifestyle Studies have concluded that conodont taxa occupied both pelagic (open ocean) and nektobenthic (swimming above the sediment surface) niches. The preserved musculature suggests that some conodonts (Promissum at least) were efficient cruisers, but incapable of bursts of speed. Based on isotopic evidence, some Devonian conodonts have been proposed to have been low-level consumers that fed on zooplankton. A study on the population dynamics of Alternognathus has been published. Among other things, it demonstrates that at least this taxon had a short lifespan lasting around a month. A study of Sr/Ca and Ba/Ca ratios in a population of conodonts from a Silurian carbonate platform in Sweden found that the different conodont species and genera likely occupied different trophic niches. Classification and phylogeny Affinities Scientists classify the conodonts in the phylum Chordata on the basis of their fins with fin rays, chevron-shaped muscles and notochord. Milsom and Rigby envision them as vertebrates similar in appearance to modern hagfish and lampreys, and phylogenetic analysis suggests they are more derived than either of these groups. However, this analysis comes with one caveat: the earliest conodont-like fossils, the protoconodonts, appear to form a distinct clade from the later paraconodonts and euconodonts. Protoconodonts are probably not relatives of true conodonts, but likely represent a stem group to Chaetognatha, an unrelated phylum that includes arrow worms. 
Moreover, some analyses do not regard conodonts as either vertebrates or craniates, because they lack the main characteristics of these groups. More recently it has been proposed that conodonts may be stem-cyclostomes, more closely related to hagfish and lampreys than to jawed vertebrates. Ingroup relations Individual conodont elements are difficult to classify in a consistent manner, but an increasing number of conodont species are now known from multi-element assemblages, which offer more data to infer how different conodont lineages are related to each other. The following is a simplified cladogram based on Sweet and Donoghue (2001), which summarized previous work by Sweet (1988) and Donoghue et al. (2000): Only a few studies approach the question of conodont ingroup relationships from a cladistic perspective, as informed by phylogenetic analyses. One of the broadest studies of this nature was the analysis of Donoghue et al. (2008), which focused on "complex" conodonts (Prioniodontida and other descendant groups): Evolutionary history The earliest fossils of conodonts are known from the Cambrian period. Conodonts extensively diversified during the early Ordovician, reaching their apex of diversity during the middle part of the period, and experienced a sharp decline during the late Ordovician and Silurian, before reaching another peak of diversity during the mid-late Devonian. Conodont diversity declined during the Carboniferous, with an extinction event at the end of the middle Tournaisian and a prolonged period of significant loss of diversity during the Pennsylvanian. Only a handful of conodont genera were present during the Permian, though diversity increased after the P-T extinction during the Early Triassic. Diversity continued to decline during the Middle and Late Triassic, culminating in their extinction soon after the Triassic-Jurassic boundary. Much of their diversity during the Paleozoic was likely controlled by sea levels and temperature, with the major declines during the Late Ordovician and Late Carboniferous due to cooler temperatures, especially glacial events and associated marine regressions which reduced continental shelf area. However, their final demise is more likely related to biotic interactions, perhaps competition with new Mesozoic taxa. Taxonomy Conodonta taxonomy based on Sweet (1988), Sweet & Donoghue (2001), and Mikko's Phylogeny Archive. Class Conodonta Pander, 1856 [Conodontophorida Eichenberg, 1930; "euconodonts" Bengtson, 1976] Cavidonti Sweet, 1988 Order Belodellida? Sweet, 1988 Ansellidae? Fåhraeus & Hunter, 1985 Belodellidae Khodalevich & Tschernich, 1973 Dapsilodontidae? Sweet, 1988 Order Proconodontida Sweet, 1988 Cordylodontidae Lindström, 1970 Fryxellodontidae Miller, 1981 Pseudooneotodidae? Wang & Aldridge, 2010 Proconodontidae Lindström, 1970 Pygodontidae? Bergstrom, 1981 Conodonti Pander, 1856 non Branson, 1938 Order Protopanderodontida Sweet, 1988 Acanthodontidae Lindström, 1970 Clavohamulidae Lindström, 1970 Drepanoistodontidae? Fåhraeus, 1978 [Distacodontidae Bassler, 1925] Protopanderodontidae Lindström, 1970 [Scolopodontidae Bergström, 1981; Oneotodontidae Miller, 1981; Teridontidae Miller, 1981] Serratognathidae? Zhen et al., 2009 Strachanognathidae? Bergström, 1981 [Cornuodontidae Stouge, 1984] Order Panderodontida Sweet, 1988 Panderodontidae Lindström, 1970 Order Prioniodontida Dzik, 1976 (paraphyletic) Acodontidae? Dzik, 1993 [Tripodontinae Sweet, 1988] Cahabagnathidae? Stouge & Bagnoli 1999 Distacodontidae? Bassler, 1925 emend. 
Ulrich & Bassler, 1926 [Drepanodontinae Fåhraeus & Nowlan, 1978; Lonchodininae Hass, 1959] Gamachignathidae? Wang & Aldridge, 2010 Jablonnodontidae? Dzik, 2006 Nurrellidae? Pomešano-Cherchi, 1967 Paracordylodontidae? Bergström, 1981 Playfordiidae? Dzik, 2002 Ulrichodinidae? Bergström, 1981 Rossodus Repetski & Ethington, 1983 Multioistodontidae Harris, 1964 [Dischidognathidae] Oistodontidae Lindström, 1970 [Juanognathidae Bergström, 1981] Periodontidae Lindström, 1970 Rhipidognathidae Lindström, 1970 sensu Sweet, 1988 Prioniodontidae Bassler, 1925 Phragmodontidae Bergström, 1981 [Cyrtoniodontinae Hass, 1959] Plectodinidae Sweet, 1988 Pygodontidae? Bergstrom, 1981 Icriodontacea Balognathidae (Hass, 1959) Polyplacognathidae Bergström, 1981 Distomodontidae Klapper, 1981 Icriodellidae Sweet, 1988 Icriodontidae Müller & Müller, 1957 Order Prioniodinida Sweet, 1988 Oepikodontidae? Bergström, 1981 Xaniognathidae? Sweet, 1981 Chirognathidae Branson & Mehl, 1944 Prioniodinidae Bassler, 1925 [Hibbardellidae Mueller, 1956] Bactrognathidae Lindström, 1970 Ellisoniidae Clark, 1972 Gondolellidae Lindström, 1970 Order Ozarkodinida Dzik, 1976 [Polygnathida] Anchignathodontidae? Clark, 1972 Archeognathidae? Miller, 1969 Belodontidae? Huddle, 1934 Coleodontidae? Branson & Mehl, 1944 [Hibbardellidae Müller, 1956; Loxodontidae] Eognathodontidae? Bardashev, Weddige & Ziegler, 2002 Francodinidae? Dzik, 2006 Gladigondolellidae? (Hirsch, 1994) [Sephardiellinae Plasencia, Hirsch & Márquez-Aliaga, 2007; Neogondolellinae Hirsch, 1994; Cornudininae Orchard, 2005; Epigondolellinae Orchard, 2005; Marquezellinae Plasencia et al., 2018; Paragondolellinae Orchard, 2005; Pseudofurnishiidae Ramovs, 1977] Iowagnathidae? Liu et al., 2017 Novispathodontidae? (Orchard, 2005) Trucherognathidae? Branson & Mehl, 1944 Vjalovognathidae? Shen, Yuan & Henderson, 2015 Wapitiodontidae? Orchard, 2005 Cryptotaxidae Klapper & Philip, 1971 Spathognathodontidae Hass, 1959 [Ozarkodinidae Dzik, 1976] Pterospathodontidae Cooper, 1977 [Carniodontidae] Kockelellidae Klapper, 1981 [Caenodontontidae] Polygnathidae Bassler, 1925 [?Eopolygnathidae Bardashev, Weddige & Ziegler, 2002] Palmatolepidae Sweet, 1988 Hindeodontidae (Hass, 1959) Elictognathidae Austin & Rhodes, 1981 Gnathodontidae Sweet, 1988 Idiognathodontidae Harris & Hollingsworth, 1933 Mestognathidae Austin & Rhodes, 1981 Cavusgnathidae Austin & Rhodes, 1981 Sweetognathidae Ritter, 1986
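The S, M, and P element categories described under "Element types" above lend themselves to a compact structured restatement. The following minimal Python sketch simply encodes, as a reading aid, the shape classes and positional notes given in that section; the class and field names are illustrative choices by the editor and do not correspond to any established data model or software used in conodont research.

```python
from dataclasses import dataclass

@dataclass
class ElementType:
    """One morphological class of conodont element, as summarised under 'Element types'."""
    code: str      # e.g. "M", "S", "Pa", "Pb"
    form: str      # "ramiform" (comb-like) or "pectiniform" (platform-bearing)
    paired: bool   # whether elements of this type occur in left/right pairs
    notes: str     # position or shape notes taken from the text above

# Purely illustrative restatement of the scheme described in the article;
# the unpaired symmetrical S element is the one "sometimes known as an S0 element".
ELEMENT_TYPES = [
    ElementType("M",  "ramiform",    True,  "higher in the mouth; makellate, horseshoe- or pick-shaped"),
    ElementType("S",  "ramiform",    False, "unpaired symmetrical element at the front of the mouth"),
    ElementType("S",  "ramiform",    True,  "paired asymmetrical elements"),
    ElementType("S",  "ramiform",    True,  "paired, highly asymmetrical, bipennate elements"),
    ElementType("Pa", "pectiniform", True,  "blade-like; deeper in the throat, platforms facing the midline"),
    ElementType("Pb", "pectiniform", True,  "arched; deeper in the throat, platforms facing the midline"),
]

if __name__ == "__main__":
    for e in ELEMENT_TYPES:
        pairing = "paired" if e.paired else "unpaired"
        print(f"{e.code:>2} ({e.form}, {pairing}): {e.notes}")
```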
Biology and health sciences
Prehistoric agnathae and early chordates
Animals
43177
https://en.wikipedia.org/wiki/Gnathostomulid
Gnathostomulid
Gnathostomulids, or jaw worms, are a small phylum of nearly microscopic marine animals. They inhabit sand and mud beneath shallow coastal waters and can survive in relatively anoxic environments. They were first recognised and described in 1956. Anatomy Most gnathostomulids measure in length. They are often slender to thread-like worms, with a generally transparent body. In many Bursovaginoidea, one of the major groups of gnathostomulids, the neck region is slightly narrower than the rest of the body, giving them a distinct head. Like flatworms they have a ciliated epidermis, but in contrast to flatworms, they have one cilium per cell. The cilia allow the worms to glide along in the water between sand grains, although they also use muscles, allowing the body to twist or contract, for movement. They have no body cavity, and no circulatory or respiratory system. The nervous system is simple and restricted to the outer layers of the body wall. The only sense organs are modified cilia, which are especially common in the head region. The mouth is located just behind the head, after a rostrum, on the underside of the body. It has a pair of cuticular jaws, supplied with strong muscles and often bearing minute teeth. A "basal plate" bearing a comb-like structure is also present on the lower surface. The basal plate is used to scrape smaller organisms off the grains of sand that make up their anoxic seabed mud habitat. This bilaterally symmetrical pharynx, with its complex cuticular mouth parts, makes them appear closely related to rotifers and their allies, together making up the Gnathifera. The ultrastructure of the jaws, which appear in transmission electron microscopy sections as rods with an electron-dense core, also supports a close relationship with Rotifera and Micrognathozoa. The mouth opens into a blind-ending tube in which digestion takes place; there is no true anus. However, there is tissue connecting the intestine to the epidermis which may serve as an anal pore. Reproduction Gnathostomulids are simultaneous hermaphrodites. Each individual possesses a single ovary and one or two testes. After fertilization, the single egg ruptures through the body wall and adheres to nearby sand particles; the parent is able to rapidly heal the resulting wound. The egg hatches into a miniature version of the adult, without a larval stage. Taxonomy There are approximately 100 described species and certainly many more as yet undescribed. The known species are grouped into two orders. The filospermoids are very long and are characterized by an elongate rostrum. The bursovaginoids have paired sensory organs and are characterized by the presence of a penis and a sperm-storage organ called a bursa. Gnathostomulids have no known fossil record, though there are (debatable) similarities between the jaws of modern gnathostomulids and certain conodont elements (Ochietti & Cailleux, 1969; Durden et al., 1969). They appear to be a sister clade to the Syndermata.
Biology and health sciences
Spiralia
Animals
43184
https://en.wikipedia.org/wiki/Lobopodia
Lobopodia
Lobopodians are members of the informal group Lobopodia (from the Greek, meaning "blunt feet"), or the formally erected phylum Lobopoda Cavalier-Smith (1998). They are panarthropods with stubby legs called lobopods, a term which may also be used as a common name for the group itself. While the definition of lobopodians differs between authors, it usually refers to a group of soft-bodied, marine, worm-like fossil panarthropods such as Aysheaia and Hallucigenia. However, other genera like Kerygmachela and Pambdelurion (which have features similar to other groups) are often referred to as "gilled lobopodians". The oldest near-complete fossil lobopodians date to the Lower Cambrian; some are also known from Ordovician, Silurian and Carboniferous Lagerstätten. Some bear toughened claws, plates or spines, which are commonly preserved as carbonaceous or mineralized microfossils in Cambrian strata. The grouping is considered to be paraphyletic, as the three living panarthropod groups (Arthropoda, Tardigrada and Onychophora) are thought to have evolved from lobopodian ancestors. Definitions The lobopodian concept varies from author to author. Its most general sense refers to a suite of mainly Cambrian worm-like panarthropod taxa possessing lobopods – for example, Aysheaia, Hallucigenia, and Xenusion – which were traditionally united as "xenusians" or "xenusiids" (class Xenusia). Certain dinocaridid genera, such as Opabinia, Pambdelurion, and Kerygmachela, may also be regarded as lobopodians, sometimes referred to more specifically as "gilled lobopodians" or "gilled lobopods". This traditional, informal usage of "Lobopodia" treats it as an evolutionary grade, including only extinct panarthropods near the base of crown Panarthropoda. Crown Panarthropoda comprises the three extant panarthropod phyla – Onychophora (velvet worms), Tardigrada (waterbears), and Arthropoda (arthropods) – as well as their most recent common ancestor and all of its descendants. Thus, in this usage, Lobopodia consists of various basal panarthropods. This corresponds to "A" in the image to the left. An alternative, broader definition of Lobopodia would also incorporate Onychophora and Tardigrada, the two living panarthropod phyla which still bear lobopodous limbs. This definition, corresponding to "C", is a morphological one, depending on the superficial similarity of appendages (the "lobopods"). Thus, it is paraphyletic, excluding the euarthropods, which are descendants of certain lobopodians, on the basis of their highly divergent limb morphology. "Lobopodia" has also been used to refer to a proposed sister clade to Arthropoda, consisting of the extant Onychophora and Tardigrada, as well as their most recent common ancestor and all of its descendants. This definition renders Lobopodia a monophyletic taxon, if indeed it is valid (that is, if tardigrades and onychophorans are closer to one another than either is to arthropods), but it would exclude all the euarthropod-line taxa traditionally considered lobopodians. Its validity is uncertain, however, as there are a number of hypotheses regarding the internal phylogeny of Panarthropoda. The broadest definition treats Lobopodia as a monophyletic superphylum equivalent in circumscription to Panarthropoda. By this definition, represented by "D" in the image, Lobopodia is no longer treated as an evolutionary grade but as a clade, containing not only the early, superficially "lobopodian" forms but also all of their descendants, including the extant panarthropods. 
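Because the competing circumscriptions just described differ only in which panarthropod groups they include, they can be summarised as nested sets of taxa. The sketch below (in Python) is purely an illustrative reading aid, not part of any formal classification; the labels "A", "C", and "D" refer to the image mentioned above, which is not reproduced here, and the group contents simply restate the preceding paragraphs.

```python
# Illustrative only: each definition of "Lobopodia" expressed as the groups it contains.
# Membership follows the preceding paragraphs; this is a reading aid, not a formal taxonomy.

STEM_GROUP_FORMS = {"basal stem-group panarthropods (e.g. Aysheaia, Hallucigenia, gilled lobopodians)"}
ONYCHOPHORA = {"Onychophora (velvet worms)"}
TARDIGRADA = {"Tardigrada (waterbears)"}
ARTHROPODA = {"Arthropoda"}

definitions = {
    # "A": traditional, informal usage, an evolutionary grade of extinct forms only
    "A (grade of basal forms)": STEM_GROUP_FORMS,
    # "C": broader, morphological usage, adding the living phyla that retain lobopods,
    # but still excluding arthropods, so it remains paraphyletic
    "C (all lobopod-bearers)": STEM_GROUP_FORMS | ONYCHOPHORA | TARDIGRADA,
    # proposed clade of Onychophora + Tardigrada, excluding the fossil grade and arthropods
    "Onychophora + Tardigrada clade": ONYCHOPHORA | TARDIGRADA,
    # "D": broadest usage, a clade equivalent to Panarthropoda as a whole
    "D (= Panarthropoda)": STEM_GROUP_FORMS | ONYCHOPHORA | TARDIGRADA | ARTHROPODA,
}

if __name__ == "__main__":
    for name, members in definitions.items():
        print(f"{name}: {sorted(members)}")
```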
Lobopodia has, historically, sometimes included Pentastomida, a group of parasitic panarthropods that were traditionally thought to be a distinct phylum but were revealed by subsequent phylogenomic and anatomical studies to be a highly specialized taxon of crustaceans. Representative taxa The better-known genera include Aysheaia, which was discovered in the Canadian Burgess Shale, and Hallucigenia, known from both the Chengjiang Maotianshan Shale and the Burgess Shale. Aysheaia pedunculata has a morphology apparently basic for lobopodians: for example, a significantly annulated cuticle, a terminal mouth opening, specialized frontalmost appendages, and stubby lobopods with terminal claws. Hallucigenia sparsa is famous for having a complex history of interpretation: it was originally reconstructed with long, stilt-like legs and mysterious fleshy dorsal protuberances, and was long considered a prime example of the way in which nature experimented with the most diverse and bizarre body designs during the Cambrian. However, further discoveries showed that this reconstruction had placed the animal upside-down: interpreting the "stilts" as dorsal spines made it clear that the fleshy "dorsal" protuberances were actually elongated lobopods. A more recent reconstruction even exchanged the front and rear ends of the animal: it was revealed that the bulbous imprint previously thought to be a head was actually gut contents being expelled from the anus. Microdictyon is another charismatic and speciose genus of lobopodians resembling Hallucigenia; instead of spines, it bore pairs of net-like plates, which are often found disarticulated and are known as an example of small shelly fossils (SSF). Xenusion has the oldest fossil record among the described lobopodians, which may trace back to Cambrian Stage 2. Luolishania is an iconic example of lobopodians with multiple pairs of specialized appendages. The gilled lobopodians Kerygmachela and Pambdelurion shed light on the relationship between lobopodians and arthropods, as they have both lobopodian affinities and characteristics linked to the arthropod stem-group. Morphology Most lobopodians were only a few centimeters in length, while some genera grew to over 20 centimeters. Their bodies are annulated, although the presence of annulation may differ between body regions or taxa, and the annulations are sometimes difficult to discern due to their close spacing and low relief in the fossil material. The body and appendages are circular in cross-section. Head Due to the usually poor preservation, detailed reconstructions of the head region are available for only a handful of lobopodian species. The head of a lobopodian is more or less bulbous, and sometimes possesses a pair of pre-ocular, presumably protocerebral appendages – for example, primary antennae or well-developed frontal appendages – which are differentiated from the trunk lobopods (with the exception of Antennacanthopodia, which has two pairs of head appendages instead of one). Mouthparts may consist of rows of teeth or a conical proboscis. The eyes may be represented by a single ocellus or by numerous pairs of simple ocelli, as has been shown in Luolishania (=Miraluolishania), Ovatiovermis, Onychodictyon, Hallucigenia, Facivermis, and less certainly Aysheaia as well. However, in gilled lobopodians like Kerygmachela, the eyes are relatively complex reflective patches that may have been compound in nature. 
Trunk and lobopods The trunk is elongated and composed of numerous body segments (somites), each bearing a pair of legs called lobopods or lobopodous limbs. The segmental boundaries are not as externally significant as those of arthropods, although they are indicated by heteronomous annulations (i.e., the alternation of annulation density corresponding to the position of segmental boundaries) in some species. The trunk segments may bear other external, segment-corresponding structures such as nodes (e.g. Hadranax, Kerygmachela), papillae (e.g. Onychodictyon), spine- or plate-like sclerites (e.g. armoured lobopodians) or lateral flaps (e.g. gilled lobopodians). The trunk may terminate with a pair of lobopods (e.g. Aysheaia, Hallucigenia sparsa) or a tail-like extension (e.g. Paucipodia, Siberion, Jianshanopodia). The lobopods are flexible and loosely conical in shape, tapering from the body to tips that may or may not bear claws. The claws, if present, are hardened structures with a shape resembling a hook or gently-curved spine. Claw-bearing lobopods usually have two claws, but single claws are known (e.g. posterior lobopods of luolishaniids), as are more than two (e.g. three in Tritonychus, seven in Aysheaia), depending on segmental or taxonomic association. In some genera, the lobopods bear additional structures such as spines (e.g. Diania), fleshy outgrowths (e.g. Onychodictyon), or tubercles (e.g. Jianshanopodia). There is no sign of arthropodization (development of a hardened exoskeleton and segmental division on panarthropod appendages) in known members of the lobopodians, even for those belonging to the arthropod stem-group (e.g. gilled lobopodians and siberiids), and the suspected case of arthropodization on the limbs of Diania is considered to be a misinterpretation. Differentiation (tagmosis) between trunk somites barely occurs, except in hallucigenids and luolishaniids, where numerous pairs of the anterior lobopods are significantly more slender (hallucigenids) or more setose (luolishaniids) than their posterior counterparts. Internal structures The gut of lobopodians is often straight, undifferentiated, and sometimes preserved in the fossil record in three dimensions. In some specimens the gut is found to be filled with sediment. The gut consists of a central tube occupying the full length of the lobopodian's trunk, which does not change much in width - at least not systematically. However, in some groups, specifically the gilled lobopodians and siberiids, the gut is surrounded by pairs of serially repeated, kidney-shaped gut diverticula (digestive glands). In some specimens, parts of the lobopodian gut can be preserved in three dimensions. This cannot result from phosphatisation, which is usually responsible for 3-D gut preservation, because the phosphate content of the guts is under 1%; the contents comprise quartz and muscovite. The gut of the representative Paucipodia is variable in width, being widest at the centre of the body. Its position in the body cavity is only loosely fixed, so flexibility is possible. Not much is known about the neural anatomy of lobopodians due to the sparse and mostly ambiguous fossil evidence. Possible traces of a nervous system were found in Paucipodia, Megadictyon and Antennacanthopodia. The first and so far only confirmed evidence of lobopodian neural structures comes from the gilled lobopodian Kerygmachela in Park et al. 
2018, which described a brain composed of only a protocerebrum (the frontal-most cerebral ganglion of panarthropods) that is directly connected to the nerves of the eyes and frontal appendages, suggesting the protocerebral ancestry of the head of lobopodians as well as of the whole Panarthropoda. In some extant ecdysozoans, such as priapulids and onychophorans, there is an outermost layer of circular muscles and an innermost layer of longitudinal muscles. The onychophorans also have a third, intermediate layer of interwoven oblique muscles. The musculature of the gilled lobopodian Pambdelurion shows a similar anatomy, but that of the lobopodian Tritonychus shows the opposite pattern: the outermost muscles are longitudinal and the innermost layer consists of circular muscles. Categories Based on external morphology, lobopodians may fall under different categories: for example, the general worm-like taxa as "xenusiids" or "xenusians"; xenusiids with sclerites as "armoured lobopodians"; and taxa with both robust frontal appendages and lateral flaps as "gilled lobopodians". Some of these were originally defined in a taxonomic sense (e.g. class Xenusia), but none of them is generally accepted as monophyletic in subsequent studies. Armoured lobopodians "Armoured lobopodians" refers to xenusiid lobopodians which bore repeated sclerites such as spines or plates on their trunk (e.g. Hallucigenia, Microdictyon, Luolishania) or lobopods (e.g. Diania). In contrast, lobopodians without sclerites may be referred to as "unarmoured lobopodians". The function of the sclerites has been interpreted as protective armor and/or muscle attachment points. In some cases, only the disarticulated sclerites of the animal are preserved, and these are represented as components of small shelly fossils (SSF). Armoured lobopodians were suggested to be onychophoran-related, and may even represent a clade in some previous studies, but their phylogenetic positions in later studies are controversial (see text). Gilled lobopodians Dinocaridids with lobopodian affinities (due to shared features like annulation and lobopods) are referred to as "gilled lobopodians" or "gilled lobopods". These forms sport a pair of flaps on each trunk segment but otherwise show no signs of arthropodization, in contrast to more derived dinocaridids like the Radiodonta, which have robust and sclerotized frontal appendages. Gilled lobopodians include at least four genera: Pambdelurion, Kerygmachela, Utahnax and Mobulavermis. Opabinia may also fall under this category in a broader sense, although the presence of lobopods in this genus is not definitively proven. Omnidens, a genus known only from Pambdelurion-like mouthparts and distal parts of the frontal appendages, may also be a gilled lobopodian. The body flaps may have functioned as both swimming appendages and gills, and are possibly homologous to the dorsal flaps of radiodonts and the exites of Euarthropoda. Whether these genera were true lobopodians is still contested by some. However, they are widely accepted as stem-group arthropods just basal to radiodonts. Siberion and similar taxa Siberion, Megadictyon and Jianshanopodia may be grouped as siberiids (order Siberiida), jianshanopodians or "giant lobopodians" in some of the literature. They are generally large xenusiid lobopodians, with body lengths ranging between 7 and 22 centimeters (2¼ to 8⅔ inches), a widened trunk, stout trunk lobopods without evidence of claws, and, most notably, a pair of robust frontal appendages. 
With the possible exception of Siberion, they also have digestive glands like those of gilled lobopodians and basal euarthropods. Their anatomy represents a transitional form between typical xenusiids and gilled lobopodians, ultimately placing them in the basalmost position of the arthropod stem-group. Paleoecology Lobopodians possibly occupied a wide range of ecological niches. Although most of them had undifferentiated appendages and a straight gut, which would suggest a simple sediment-feeding lifestyle, the sophisticated digestive glands and large size of gilled lobopodians and siberiids would have allowed them to consume larger food items, and their robust frontal appendages may even suggest a predatory lifestyle. On the other hand, luolishaniids such as Luolishania and Ovatiovermis have elaborate feather-like lobopods that presumably formed 'baskets' for suspension or filter-feeding. Lobopods with curved terminal claws may have given some lobopodians the ability to climb on substrates. Not much is known about the physiology of lobopodians. There is evidence to suggest that lobopodians moulted just like other ecdysozoan taxa, but the outline and ornamentation of the hardened sclerites did not vary during ontogeny. The gill-like structures on the body flaps of gilled lobopodians and the ramified extensions on the lobopods of Jianshanopodia may have provided a respiratory function (gills). Pambdelurion may have controlled the movement of its lobopods in a way similar to onychophorans. Distribution During the Cambrian, lobopodians displayed a substantial degree of biodiversity. One species is known from each of the Ordovician and Silurian periods, with a few more known from the Carboniferous (Mazon Creek); this reflects the paucity of exceptional lagerstätten in post-Cambrian deposits. Phylogeny The overall phylogenetic interpretation of lobopodians has changed dramatically since their discovery and first description. The reassignments are based not only on new fossil evidence, but also on new embryological, neuroanatomical, and genomic (e.g. gene expression, phylogenomic) information from extant panarthropod taxa. Based on their apparently onychophoran-like morphology (e.g. annulated cuticle, lobopodous appendages with claws), lobopodians were originally thought to represent a group of Paleozoic onychophorans. This interpretation was challenged after the discovery of lobopodians with arthropod-like and tardigrade-like characteristics, suggesting that the similarity between lobopodians and onychophorans reflects deeper panarthropod ancestral traits (plesiomorphies) rather than onychophoran-exclusive characteristics (synapomorphies). For example, the British palaeontologist Graham Budd sees the Lobopodia as representing a basal grade from which the phyla Onychophora and Arthropoda arose, with Aysheaia comparable to the ancestral plan, and with forms like Kerygmachela and Pambdelurion representing a transition that, via the dinocaridids, would lead to an arthropod body plan. Aysheaia's surface ornamentation, if homologous with palaeoscolecid sclerites, may represent a deeper link connecting it with cycloneuralian outgroups. Lobopodians are paraphyletic, and include the last common ancestor of arthropods, onychophorans and tardigrades. 
Stem-group arthropods Compared to other panarthropod stem-groups, suggestions on the lobopodian members of the arthropod stem-group are relatively consistent: siberiids like Megadictyon and Jianshanopodia occupy the basalmost positions, the gilled lobopodians Pambdelurion and Kerygmachela branch next, finally leading to a clade composed of Opabinia, Radiodonta and Euarthropoda (crown-group arthropods). Their positions within the arthropod stem-group are indicated by numerous arthropod groundplans and intermediate forms (e.g. arthropod-like digestive glands, radiodont-like frontal appendages, and dorso-ventral appendicular structures linked to arthropod biramous appendages). The lobopodian ancestry of arthropods is also reinforced by genomic studies on extant taxa: gene expression supports the homology between arthropod appendages and onychophoran lobopods, suggesting that modern arthropodized appendages, with fewer segments, evolved from annulated lobopodous limbs. On the other hand, the primary antennae and frontal appendages of lobopodians and dinocaridids may be homologous to the labrum/hypostome complex of euarthropods, an idea supported by their protocerebral origin and the developmental pattern of the labrum of extant arthropods. Diania, a genus of armoured lobopodian with stout and spiny legs, was originally thought to fall within the arthropod stem-group based on its apparently arthropod-like (arthropodized) trunk appendages. However, this interpretation is questionable, as the data provided by the original description are not consistent with the suspected phylogenetic relationships. Further re-examination even revealed that the suspected arthropodization of the legs of Diania was a misinterpretation: although the spines may have been hardened, the remaining cuticle of Diania's legs was soft (neither hardened nor sclerotized), lacking any evidence of pivot joints or arthrodial membranes, suggesting the legs are lobopods with only widely spaced annulations. Thus, the re-examination eventually rejected the evidence of arthropodization (sclerotization, segmentation and articulation) of the appendages, as well as the suggested close relationship between Diania and arthropods. Stem-group onychophorans While Antennacanthopodia is widely accepted as a stem-group onychophoran, the position of other xenusiid genera that were previously thought to be onychophoran-related is controversial: in later studies, most of them were suggested to be either stem-group onychophorans or basal panarthropods, with a few species (Aysheaia and Onychodictyon ferox) occasionally suggested to be stem-group tardigrades. A study in 2014 suggested that Hallucigenia is a stem-group onychophoran based on its claws, which have overlapping internal structures resembling those of extant onychophorans. This interpretation was questioned by later studies, as the structures may be a panarthropod plesiomorphy. Stem-group tardigrades The lobopodian membership of the tardigrade stem-group is unclear. Aysheaia and Onychodictyon ferox have been suggested as possible members, based on the high claw number (in Aysheaia) and/or terminal lobopods with anterior-facing claws (in both taxa). Although not widely accepted, there are even suggestions that Tardigrada itself represents the basalmost panarthropod lineage, or branches within the arthropod stem-group. However, a paper in 2023 found luolishaniids to be the closest relatives of tardigrades based on various morphological characteristics. 
Stem-group panarthropods It is unclear which lobopodians represent members of the panarthropod stem-group, that is, lineages which branched off just before the last common ancestor of the extant panarthropod phyla. Aysheaia may have occupied this position based on its apparently basal morphology, while other studies instead suggest luolishaniids and hallucigeniids, two lobopodian taxa which have also been resolved as members of the onychophoran stem-group. Described genera As of 2018, over 20 lobopodian genera have been described. The fossil materials described as the lobopodians Mureropodia apae and Aysheaia prolata are considered to be disarticulated frontal appendages of the radiodonts Caryosyntrips and Stanleycaris, respectively. Miraluolishania was suggested to be a synonym of Luolishania by some studies. The enigmatic Facivermis was later revealed to be a highly specialized genus of luolishaniid lobopodians. Acinocricus Antennacanthopodia Aysheaia Carbotubulus Cardiodictyon Collinsium Collinsovermis Diania Entothyreos Facivermis Fusuconcharium Hadranax Hallucigenia Jianshanopodia Kerygmachela? Lenisambulatrix Luolishania (=Miraluolishania) Megadictyon Microdictyon Mobulavermis? Omnidens? Onychodictyon Orstenotubulus Ovatiovermis Pambdelurion? Parvibellus? Paucipodia Quadratapora Siberion Thanahita Tritonychus Utahnax? Xenusion Youti?
Biology and health sciences
Ecdysozoa
Animals
43196
https://en.wikipedia.org/wiki/Nematomorpha
Nematomorpha
Nematomorpha (sometimes called Gordiacea, and commonly known as horsehair worms, hairsnakes, or Gordian worms) are a phylum of parasitoid animals superficially similar to nematode worms in morphology, hence the name. Most species range in size from , reaching in extreme cases, and in diameter. Horsehair worms can be found in damp areas, such as watering troughs, swimming pools, streams, puddles, and cisterns. The adult worms are free-living, but the larvae are parasitic on arthropods, such as beetles, cockroaches, mantises, orthopterans, and crustaceans. About 351 freshwater species are known, and a conservative estimate suggests that there may be about 2000 freshwater species worldwide. The name "Gordian" stems from the legendary Gordian knot, and relates to the fact that nematomorphs often coil themselves in tight balls that resemble knots. Description and biology Nematomorphs possess an external cuticle without cilia. Internally, they have only longitudinal muscle and a non-functional gut, with no excretory, respiratory or circulatory systems. The nervous system consists of a nerve ring near the anterior end of the animal, and a ventral nerve cord running along the body. Reproductively, they have two distinct sexes, with internal fertilization of eggs that are then laid in gelatinous strings. Adults have cylindrical gonads, opening into the cloaca. The larvae have rings of cuticular hooks and terminal stylets that are believed to be used to enter the hosts. Once inside the host, the larvae live inside the haemocoel and absorb nutrients directly through their skin. Development into the adult form takes weeks or months, and the larva moults several times as it grows in size. The adults are mostly free-living in freshwater or marine environments, and males and females aggregate into tight balls (Gordian knots) during mating. In Spinochordodes tellinii and Paragordius tricuspidatus, which have grasshoppers and crickets as their hosts, the infection acts on the infected host's brain. This causes the host insect to seek water and drown itself, thus returning the nematomorph to water. P. tricuspidatus is also remarkably able to survive the predation of its host, being able to wiggle out of the predator that has eaten the host. The nematomorph parasite affects the light-interpreting organs of the host Hierodula patellifera so that the host is attracted to horizontally polarized light; the host thus enters water and the parasite's life cycle is completed. Many of the genes the parasites use for manipulating their host have been acquired through horizontal gene transfer from the host genome. There are a few cases of accidental parasitism in vertebrate hosts, including dogs, cats, and humans. Several cases involving Parachordodes, Paragordius, or Gordius have been recorded in human hosts in Japan and China. Community ecology Owing to their use of orthopterans as hosts, nematomorphs can be significant factors in shaping community ecology. One study conducted in a Japanese riparian ecosystem showed that nematomorphs can cause infected orthopterans to become 20 times more likely to enter water than non-infected orthopterans; these orthopterans constituted up to 60% of the annual energy intake of the Kirikuchi char. The absence of nematomorphs from riparian communities can thus lead to char preying more heavily on other aquatic invertebrates, potentially causing more widespread physiological effects. Taxonomy Nematomorphs can be confused with nematodes, particularly mermithid worms. 
Unlike nematomorphs, mermithids do not have a terminal cloaca. Male mermithids have one or two spicules just before the end, as well as a thinner, smoother cuticle without areoles and a paler brown colour. The phylum is placed in the Ecdysozoa, the clade of moulting organisms that also includes the Arthropoda. Their closest relatives are the nematodes; the two phyla make up the group Nematoida in the clade Cycloneuralia. During the larval stage, the animals show a resemblance to adult Kinorhyncha and some species of Loricifera and Priapulida, all members of the group Scalidophora. The earliest nematomorph may be Maotianshania, from the Lower Cambrian; this organism is, however, very different from extant species, and fossilized worms resembling the modern forms have been reported from mid-Cretaceous Burmese amber dated to 100 million years ago. Relationships within the phylum are still somewhat unclear, but two classes are recognised. The five marine species of nematomorph are contained in the Nectonematoida. This class is monotypic, containing only the genus Nectonema Verrill, 1879: adults are planktonic and the larvae parasitise decapod crustaceans, especially crabs. They are characterized by a double row of natatory setae along each side of the body, dorsal and ventral longitudinal epidermal cords, a spacious and fluid-filled blastocoelom, and unpaired gonads. The approximately 320 remaining species are distributed between two families within the monotypic class Gordioida. Gordioidean adults are free-living in freshwater or semiterrestrial habitats and the larvae parasitise insects, primarily orthopterans. Unlike nectonematoideans, gordioideans lack lateral rows of setae, have a single, ventral epidermal cord, and their blastocoels are filled with mesenchyme in young animals but become spacious in older individuals.
Biology and health sciences
Ecdysozoa
Animals
43198
https://en.wikipedia.org/wiki/Onychophora
Onychophora
Onychophora (from the Greek for "claws" and "to carry"), commonly known as velvet worms (for their velvety texture and somewhat wormlike appearance) or more ambiguously as peripatus (after the first described genus, Peripatus), is a phylum of elongate, soft-bodied, many-legged animals. In appearance they have variously been compared to worms with legs, caterpillars, and slugs. They prey upon other invertebrates, which they catch by ejecting an adhesive slime. Approximately 200 species of velvet worms have been described, although the true number of species is likely to be much greater. The two extant families of velvet worms are Peripatidae and Peripatopsidae. They show a peculiar distribution, with the peripatids being predominantly equatorial and tropical, while the peripatopsids are all found south of the equator. It is the only phylum within Animalia that is wholly endemic to terrestrial environments, at least among extant members. Velvet worms are generally considered close relatives of the Arthropoda and Tardigrada, with which they form the proposed taxon Panarthropoda. This makes them of palaeontological interest, as they can help reconstruct the ancestral arthropod. Only two fossil species are confidently assigned as onychophorans: Antennipatus from the Late Carboniferous, and Cretoperipatus from the Late Cretaceous, the latter belonging to the Peripatidae. In modern zoology, they are particularly renowned for their curious mating behaviours and for the bearing of live young in some species. Anatomy and physiology Velvet worms are segmented animals with a flattened cylindrical body cross-section and rows of unstructured body appendages known as oncopods or lobopods (informally: stub feet). They reach lengths between depending on the species, with the smallest known being Ooperipatellus nanus and the largest known being Mongeperipatus solorzanoi. The number of leg pairs ranges from as few as 13 (in Ooperipatellus nanus) to as many as 43 (in Plicatoperipatus jamaicensis). Their skin consists of numerous, fine transverse rings and is often inconspicuously coloured orange, red or brown, but sometimes also bright green, blue, gold or white, and occasionally patterned with other colours. Segmentation is outwardly inconspicuous, and is identifiable by the regular spacing of the pairs of legs and by the regular arrangement of skin pores, excretion organs and concentrations of nerve cells. The individual body sections are largely unspecialised; even the head develops only a little differently from the abdominal segments. Segmentation is apparently specified by the same gene as in other groups of animals, and is activated in each case, during embryonic development, at the rear border of each segment and in the growth zone of the stub feet. Although onychophorans fall within the protostome group, their early development has a deuterostome trajectory (with the mouth and anus forming separately); this trajectory is concealed by the rather sophisticated processes which occur in early development. Appendages The stub feet that characterise the velvet worms are conical, baggy appendages of the body, which are internally hollow and have no joints. Although the number of feet can vary considerably between species, their structure is basically very similar. Rigidity is provided by the hydrostatic pressure of their fluid contents, and movement is usually obtained passively by stretching and contraction of the animal's entire body. However, each leg can also be shortened and bent by internal muscles. 
Due to the lack of joints, this bending can take place at any point along the sides of the leg. In some species, two different organs are found within the feet: Crural glands are situated at the shoulder of the legs, extending into the body cavity. They open outwards at the crural papillae—small wart-like bumps on the belly side of the leg—and secrete chemical messenger materials called pheromones. Their name comes from the Latin cruralis, meaning "of the legs". Coxal vesicles are pouches located on the belly side of the leg, which can be everted and probably serve in water absorption. They are found in the family Peripatidae and are named from coxa, the Latin word for "hip". On each foot is a pair of retractable, hardened (sclerotised) chitin claws, which give the taxon its scientific name: Onychophora is derived from the Greek words for "claws" and "to carry". At the base of the claws are three to six spiny "cushions" on which the leg sits in its resting position and on which the animal walks over smooth substrates. The claws are used mainly to gain a firm foothold on uneven terrain. Each claw is composed of three stacked elements, like Russian nesting dolls. The outermost is shed during ecdysis, which exposes the next element, which is fully formed and so does not need time to harden before it is used. This distinctive construction identifies many early Cambrian fossils as early offshoots of the onychophoran lineage. Apart from the pairs of legs, there are three further body appendages, which are at the head and comprise three segments: On the first head segment is a pair of slender antennae, which serve in sensory perception. They probably do not correspond directly to the antennae of the Arthropoda, but perhaps rather to their "lips" or labrum. At their base is a pair of simple eyes, except in a few blind species. In front of these, in many Australian species, are various dimples, the function of which is not yet clear. It appears that in at least some species, these serve in the transfer of sperm-cell packages (spermatophores). On the belly side of the second head segment is the labrum, a mouth opening surrounded by sensitive "lips". In the velvet worms, this structure is a muscular outgrowth of the throat, so, despite its name, it is probably not homologous to the labrum of the Arthropoda; it is used for feeding. Deep within the oral cavity lie the sharp, crescent-shaped "jaws", or mandibles, which are strongly hardened and resemble the claws of the feet, with which they are serially homologous; early in development, the jaw appendages have a position and shape similar to those of the subsequent legs. The jaws are divided into internal and external mandibles, and their concave surface bears fine denticles. They move backward and forward in a longitudinal direction, tearing apart the prey, apparently moved in one direction by musculature and in the other by hydrostatic pressure. The claws are made of sclerotised α-chitin, reinforced with phenols and quinones, and have a uniform composition, except that there is a higher concentration of calcium towards the tip, presumably affording greater strength. The surface of the mandibles is smooth, with no ornamentation. The cuticle in the mandibles (and claws) is distinct from that of the rest of the body. It has an inner and an outer component; the outer component has just two layers (whereas body cuticle has four), and these outer layers (in particular the inner epicuticle) are dehydrated and strongly tanned, affording toughness. 
Slime glands On the third head segment, to the left and right of the mouth, are two openings called "oral papillae", with each containing a large, heavily branched slime gland. These slime glands lie roughly in the center of a velvet worm's body and secrete a sort of milky-white slime. The slime is used to both ensnare prey and act as a distraction for defensive purposes. In certain species, an organ connected to the slime gland known as the "slime conductor" is broadened into a reservoir, allowing it to hold pre-produced slime. Velvet worm slime glands and oral papilla are likely modified and repurposed limbs. The glands themselves are probably modified crural glands. All three structures correspond to an evolutionary origin in the leg pairs of the other segments. Skin and muscle Unlike the arthropods, velvet worms do not possess a rigid exoskeleton. Instead, their fluid-filled body cavity acts as a hydrostatic skeleton, similarly to many distantly related soft-bodied animals that are cylindrically shaped, for example sea anemones and various worms. Pressure of their incompressible internal bodily fluid on the body wall provides rigidity, and muscles are able to act against it. The body wall consists of a non-cellular outer skin, the cuticula; a single layer of epidermis cells forming an internal skin; and beneath this, usually three layers of muscle, which are embedded in connective tissues. The cuticula is about a micrometer thick and covered with fine villi. In composition and structure, it resembles the cuticula of the arthropods, consisting of α-chitin and various proteins, although not containing collagen. It can be divided into an external epicuticula and an internal procuticula, which themselves consist of exo- and endo-cuticula. This multi-level structure is responsible for the high flexibility of the outer skin, which enables the velvet worm to squeeze itself into the narrowest crevices. Although outwardly water-repellent, the cuticula is not able to prevent water loss by respiration, and, as a result, velvet worms can live only in microclimates with high humidity to avoid desiccation. The surface of the cuticula is scattered with numerous fine papillae, the larger of which carry visible villi-like sensitive bristles. The papillae themselves are covered with tiny scales, lending the skin a velvety appearance (from which the common name is likely derived). It also feels like dry velvet to the touch, for which its water-repellent nature is responsible. Moulting of the skin (ecdysis) takes place regularly, around every 14 days, induced by the hormone ecdysone. The inner surface of the skin bears a hexagonal pattern. At each moult, the shed skin is replaced by the epidermis, which lies immediately beneath it; unlike the cuticula, this consists of living cells. Beneath this lies a thick layer of connective tissue, which is composed primarily of collagen fibres aligned either parallel or perpendicular to the body's longitudinal axis. The colouration of Onychophora is generated by a range of pigments. The solubility of these pigments is a useful diagnostic character: in all arthropods and tardigrades, the body pigment is soluble in ethanol. This is also true for the Peripatidae, but in the case of the Peripatopsidae, the body pigment is insoluble in ethanol. Within the connective tissue lie three continuous layers of unspecialised smooth muscular tissue. The relatively thick outer layer is composed of annular muscles, and the similarly voluminous inner layer of longitudinal muscles. 
Between them lie thin diagonal muscles that wind backward and forward along the body axis in a spiral. Between the annular and diagonal muscles exist fine blood vessels, which lie below the superficially recognisable transverse rings of the skin and are responsible for the pseudo-segmented markings. Beneath the internal muscle layer lies the body cavity. In cross-section, this is divided into three regions by so-called dorso-ventral muscles, which run from the middle of the underbelly through to the edges of the upper side: a central midsection and on the left and right, two side regions that also include the legs. Circulation The body cavity is known as a "pseudocoel", or haemocoel. Unlike a true coelom, a pseudocoel is not fully enclosed by a cell layer derived from the embryonic mesoderm. A coelom is, however, formed around the gonads and the waste-eliminating nephridia. As the name haemocoel suggests, the body cavity is filled with a blood-like liquid in which all the organs are embedded; in this way, they can be easily supplied with nutrients circulating in the blood. This liquid is colourless as it does not contain pigments; for this reason, it serves only a limited role in oxygen transport. Two different types of blood cells (or haemocytes) circulate in the fluid: Amoebocytes and nephrocytes. The amoebocytes probably function in protection from bacteria and other foreign bodies; in some species, they also play a role in reproduction. Nephrocytes absorb toxins or convert them into a form suitable for elimination by the nephridia. The haemocoel is divided by a horizontal partition, the diaphragm, into two parts: The pericardial sinus along the back and the perivisceral sinus along the belly. The former encloses the tube-like heart, and the latter, the other organs. The diaphragm is perforated in many places, enabling the exchange of fluids between the two cavities. The heart itself is a tube of annular muscles consisting of epithelial tissues, with two lateral openings (ostia) per segment. While it is not known whether the rear end is open or closed, from the front, it opens directly into the body cavity. Since there are no blood vessels, apart from the fine vessels running between the muscle layers of the body wall and a pair of arteries that supply the antennae, this is referred to as an open circulation. The timing of the pumping procedure can be divided into two parts: Diastole and systole. During diastole, blood flows through the ostia from the pericardial sinus (the cavity containing the heart) into the heart. When the systole begins, the ostia close and the heart muscles contract inwards, reducing the volume of the heart. This pumps the blood from the front end of the heart into the perivisceral sinus containing the organs. In this way, the various organs are supplied with nutrients before the blood finally returns to the pericardial sinus via the perforations in the diaphragm. In addition to the pumping action of the heart, body movements also influence circulation. Respiration Oxygen uptake occurs to an extent via simple diffusion through the entire body surface, with the coxal vesicles on the legs possibly being involved in some species. However, of most importance is gas exchange via fine unbranched tubes, the tracheae, which draw oxygen from the surface deep into the various organs, particularly the heart. 
The walls of these structures, which are less than three micrometers thick in their entirety, consist only of an extremely thin membrane through which oxygen can easily diffuse. The tracheae originate at tiny openings, the spiracles, which themselves are clustered together in dent-like recesses of the outer skin, the atria. The number of "tracheae bundles" thus formed is on average around 75 bundles per body segment; they accumulate most densely on the back of the organism. Unlike the arthropods, the velvet worms are unable to control the openings of their tracheae; the tracheae are always open, entailing considerable water loss in arid conditions. Water is lost twice as fast as in earthworms and forty times faster than in caterpillars. For this reason, velvet worms are dependent upon habitats with high air humidity. Oxygen transport is helped by the oxygen carrier hemocyanin. Digestion The digestive tract begins slightly behind the head, the mouth lying on the underside a little way from the frontmost point of the body. Here, prey can be mechanically dismembered by the mandibles with their covering of fine toothlets. Two salivary glands discharge via a common conductor into the subsequent "throat", which makes up the first part of the front intestine. The saliva that they produce contains mucus and hydrolytic enzymes, which initiate digestion in and outside the mouth. The throat itself is very muscular, serving to absorb the partially liquified food and to pump it, via the oesophagus, which forms the rear part of the front intestine, into the central intestine. Unlike the front intestine, this is not lined with a cuticula but instead consists only of a single layer of epithelial tissue, which does not exhibit conspicuous indentation as is found in other animals. On entering the central intestine, food particles are coated with a mucus-based peritrophic membrane, which serves to protect the lining of the intestine from damage by sharp-edged particles. The intestinal epithelium secretes further digestive enzymes and absorbs the released nutrients, although the majority of digestion has already taken place externally or in the mouth. Indigestible remnants arrive in the rear intestine, or rectum, which is once again lined with a cuticula and which opens at the anus, located on the underside near to the rear end. In almost every segment is a pair of excretory organs called nephridia, which are derived from coelom tissue. Each consists of a small pouch that is connected, via a flagellated conductor called a nephridioduct, to an opening at the base of the nearest leg known as a nephridiopore. The pouch is occupied by special cells called podocytes, which facilitate ultrafiltration of the blood through the partition between haemocoelom and nephridium. The composition of the urinary solution is modified in the nephridioduct by selective recovery of nutrients and water and by isolation of poison and waste materials, before it is excreted to the outside world via the nephridiopore. The most important nitrogenous excretion product is the water-insoluble uric acid; this can be excreted in solid state, with very little water. This so-called uricotelic excretory mode represents an adjustment to life on land and the associated necessity of dealing economically with water. A pair of former nephridia in the head were converted secondarily into the salivary glands, while another pair in the final segment of male specimens now serve as glands that apparently play a role in reproduction. 
Sensation The entire body, including the stub feet, is littered with numerous papillae: warty protrusions responsive to touch that carry a mechanoreceptive bristle at the tip, each of which is also connected to further sensory nerve cells lying beneath. The mouth papillae, the exits of the slime glands, probably also have some function in sensory perception. Sensory cells known as "sensills" on the "lips" or labrum respond to chemical stimuli and are known as chemoreceptors. These are also found on the two antennae, which seem to be the velvet worm's most important sensory organs. Except in a few (typically subterranean) species, one simply constructed eye (ocellus) lies behind each antenna, laterally, just underneath the head. This consists of a chitinous ball lens, a cornea and a retina, and is connected to the centre of the brain via an optic nerve. The retina comprises numerous pigment cells and photoreceptors; the latter are easily modified flagellated cells, whose flagellum membranes carry a photosensitive pigment on their surface. The rhabdomeric eyes of the Onychophora are thought to be homologous with the median ocelli of arthropods; this would suggest that the last common ancestor of arthropods may have only had median ocelli. However, the innervation shows that the homology is limited: The eyes of Onychophora form behind the antenna, whereas the opposite is true in arthropods. Reproduction Both sexes possess pairs of gonads, opening via a channel called a gonoduct into a common genital opening, the gonopore, which is located on the rear ventral side. Both the gonads and the gonoduct are derived from true coelom tissue. In females, the two ovaries are joined in the middle and to the horizontal diaphragm. The gonoduct appears differently depending on whether the species is live-bearing or egg-laying. In live-bearing species, each exit channel divides into a slender oviduct and a roomy "womb", the uterus, in which the embryos develop. The single vagina, to which both uteri are connected, runs outward to the gonopore. In egg-laying species, whose gonoduct is uniformly constructed, the genital opening lies at the tip of a long egg-laying apparatus, the ovipositor. The females of many species also possess a sperm repository called the receptaculum seminis, in which sperm cells from males can be stored temporarily or for longer periods. Males possess two separate testes, along with the corresponding sperm vesicle (the vesicula seminalis) and exit channel (the vasa efferentia). The two vasa efferentia unite into a common sperm duct, the vas deferens, which in turn widens through the ejaculatory channel to open at the gonopore. Directly beside or behind this lie two pairs of special glands, which probably serve some auxiliary reproductive function; the rearmost glands are also known as anal glands. A penis-like structure has so far been found only in males of the genus Paraperipatus but has not yet been observed in action. There are different mating procedures: In some species, males deposit their spermatophore directly into the female's genital opening, while others deposit it on the female's body, where the cuticle will collapse, allowing the sperm cells to migrate into the female. There are also Australian species in which the male places the spermatophore on top of his head, which is then pressed against the female's genital opening. 
In these species the head has elaborate structures such as spikes, spines, hollow stylets, pits, and depressions, whose purpose is to hold the sperm and/or assist in its transfer to the female. The males of most species also secrete a pheromone from glands on the underside of the legs to attract females. Distribution and habitat Distribution Velvet worms live in all tropical habitats and in the temperate zone of the Southern Hemisphere, showing a circumtropical and circumaustral distribution. Individual species are found in Central and South America; the Caribbean islands; equatorial West Africa and Southern Africa; northeastern India; Thailand; Indonesia and parts of Malaysia; New Guinea; Australia; and New Zealand. Fossils have been found in Baltic amber, indicating that they were formerly more widespread in the Northern Hemisphere when conditions were more suitable. Habitat Velvet worms always sparsely occupy the habitats where they are found: they are rare among the fauna of which they are a part. All extant velvet worms are terrestrial (land-living) and prefer dark environments with high air humidity. They are found particularly in the rainforests of the tropics and temperate zones, where they live among moss cushions and leaf litter, under tree trunks and stones, in rotting wood or in termite tunnels. They also occur in unforested grassland, if sufficient crevices exist in the soil into which they can withdraw during the day, and in caves. Two species live in caves, a habitat to which their ability to squeeze themselves into the smallest cracks makes them exceptionally well-adapted and in which constant living conditions are guaranteed. Since the essential requirements for cave life were probably already present prior to the settlement of these habitats, this may be described as exaptation. Some species of velvet worms are able to occupy human-modified habitats, such as cocoa and banana plantations in South America and the Caribbean, but for others, conversion of rainforests is likely one of the most important threats to their survival (see Conservation). Velvet worms are photophobic: They are repelled by bright light sources. Because the danger of desiccation is greatest during the day and in dry weather, it is not surprising that velvet worms are usually most active at night and during rainy weather. Under cold or dry conditions, they actively seek out crevices in which they shift their body into a resting state. Slime The Onychophora forcefully squirt glue-like slime from their oral papillae; they do so either in defence against predators or to capture prey. The openings of the glands that produce the slime are in the papillae, a pair of highly modified limbs on the sides of the head below the antennae. Inside, they have a syringe-like system that, by a geometric amplifier, allows for a fast squirt using slow muscular contraction. High-speed films show the animal expelling two streams of adhesive liquid through a small opening (50–200 microns) at a speed of . The interplay between the elasticity of the oral papillae and the fast unsteady flow produces a passive oscillatory motion (30–60 Hz) of the oral papillae. The oscillation causes the streams to cross in mid-air, weaving a disordered net; the velvet worms can control only the general direction in which the net is thrown. The slime glands themselves are deep inside the body cavity, each at the end of a tube more than half the length of the body. The tube both conducts the fluid and stores it until it is required. 
The distance that the animal can propel the slime varies; usually it squirts it about a centimetre, but the maximal range has variously been reported to be ten centimetres, or even nearly a foot, although accuracy drops with range. It is not clear to what extent the range varies with the species and other factors. One squirt usually suffices to snare a prey item, although larger prey may be further immobilised by smaller squirts targeted at the limbs; additionally, the fangs of spiders are sometimes targeted. Upon ejection, it forms a net of threads about twenty microns in diameter, with evenly spaced droplets of viscous adhesive fluid along their length. It subsequently dries, shrinking, losing its stickiness, and becoming brittle. Onychophora eat their dried slime when they can, which seems provident, since an onychophoran requires about 24 days to replenish an exhausted slime repository. The slime can account for up to 11% of the organism's dry weight and is 90% water; its dry residue consists mainly of proteins—primarily a collagen-type protein. 1.3% of the slime's dry weight consists of sugars, mainly galactosamine. The slime also contains lipids and the surfactant nonylphenol. Onychophora are the only organisms known to produce this latter substance. It tastes "slightly bitter and at the same time somewhat astringent". The proteinaceous composition accounts for the slime's high tensile strength and stretchiness. The lipid and nonylphenol constituents may serve one of two purposes: They may line the ejection channel, stopping the slime from sticking to the organism when it is secreted; or they may slow the drying process long enough for the slime to reach its target. Behaviour Locomotion Velvet worms/Onychophora move in a slow and gradual motion that makes them difficult for prey to notice. Their trunk is raised relatively high above the ground, and they walk with non-overlapping steps. To move from place to place, the velvet worm crawls forward using its legs; unlike in arthropods, both legs of a pair are moved simultaneously. The claws of the feet are used only on hard, rough terrain where a firm grip is needed; on soft substrates, such as moss, the velvet worm walks on the foot cushions at the base of the claws. Actual locomotion is achieved less by the exertion of the leg muscles than by local changes of body length. This can be controlled using the annular and longitudinal muscles. If the annular muscles are contracted, the body cross-section is reduced, and the corresponding segment lengthens; this is the usual mode of operation of the hydrostatic skeleton as also employed by other worms. Due to the stretching, the legs of the segment concerned are lifted and swung forward. Local contraction of the longitudinal muscles then shortens the appropriate segment, and the legs, which are now in contact with the ground, are moved to the rear. This part of the locomotive cycle is the actual leg stroke that is responsible for forward movement. The individual stretches and contractions of the segments are coordinated by the nervous system such that contraction waves run the length of the body, each pair of legs swinging forward and then down and rearward in succession. Macroperipatus can reach speeds of up to four centimetres per second, although speeds of around 6 body-lengths per minute are more typical. The body gets longer and narrower as the animal picks up speed; the length of each leg also varies during each stride. 
Sociality The brains of Onychophora, though small, are very complex; consequently, the organisms are capable of rather sophisticated social interactions. Behaviour may vary from genus to genus, so this article reflects the most-studied genus, Euperipatoides. The Euperipatoides form social groups of up to fifteen individuals, usually closely related, which will typically live and hunt together. Groups usually live together; in drier regions an example of a shared home would be the moist interior of a rotting log. Group members are extremely aggressive towards individuals from other logs. Dominance is achieved through aggression and maintained through submissive behaviour. After a kill, the dominant female always feeds first, followed in turn by the other females, then males, then the young. When assessing other individuals, individuals often measure one another up by running their antennae down the length of the other individual. Once hierarchy has been established, pairs of individuals will often cluster together to form an "aggregate"; this is fastest in male-female pairings, followed by pairs of females, then pairs of males. Social hierarchy is established by a number of interactions: Higher-ranking individuals will chase and bite their subordinates while the latter are trying to crawl on top of them. Juveniles never engage in aggressive behaviour, but climb on top of adults, which tolerate their presence on their backs. Hierarchy is quickly established among individuals from a single group, but not among organisms from different groups; these are substantially more aggressive and very rarely climb one another or form aggregates. Individuals within an individual log are usually closely related; especially so with males. This may be related to the intense aggression between unrelated females. Feeding Velvet worms are ambush predators, hunting only by night, and are able to capture animals at least their own size, although capturing a large prey item may take almost all of their mucus-secreting capacity. They feed on almost any small invertebrates, including woodlice (Isopoda), termites (Isoptera), crickets (Gryllidae), book/bark lice (Psocoptera), cockroaches (Blattidae), millipedes and centipedes (Myriapoda), spiders (Araneae), various worms, and even large snails (Gastropoda). Depending on their size, they eat on average every one to four weeks. They are considered to be ecologically equivalent to centipedes (Chilopoda). The most energetically favourable prey are two-fifths the size of the hunting onychophoran. Ninety percent of the time involved in eating prey is spent ingesting it; re-ingestion of the slime used to trap the insect is performed while the onychophoran locates a suitable place to puncture the prey, and this phase accounts for around 8% of the feeding time, with the remaining time evenly split between examining, squirting, and injecting the prey. In some cases, chunks of the prey item are bitten off and swallowed; undigestable components take around 18 hours to pass through the digestive tract. Onychophora probably do not primarily use vision to detect their prey; although their tiny eyes do have a good image-forming capacity, their forward vision is obscured by their antennae; their nocturnal habit also limits the utility of eyesight. Air currents, formed by prey motion, are thought to be the primary mode of locating prey; the role of scent, if any, is unclear. 
Because it takes so long to ingest a prey item, hunting mainly happens around dusk; the onychophorans will abandon their prey at sunrise. This predatory way of life is probably a consequence of the velvet worm's need to remain moist. Due to the continual risk of desiccation, often only a few hours per day are available for finding food. This leads to a strong selection for a low cost-benefit ratio, which cannot be achieved with a herbivorous diet. Velvet worms literally creep up on their prey, with their smooth, gradual and fluid movement escaping detection. Once they reach their prey, they touch it very softly with their antennae to assess its size and nutritional value. After each poke, the antenna is hastily retracted to avoid alerting the prey. This investigation may last anywhere upwards of ten seconds, until the velvet worm makes a decision as to whether to attack it, or until it disturbs the prey and the prey flees. Hungry Onychophora spend less time investigating their prey and are quicker to apply their slime. Once slime has been squirted, Onychophora are determined to pursue and devour their prey, in order to recoup the energy investment. They have been observed to spend up to ten minutes searching for removed prey, after which they return to their slime to eat it. In the case of smaller prey, they may opt not to use slime at all. Subsequently, a soft part of the prey item (usually a joint membrane in arthropod prey) is identified, punctured with a bite from the jaws, and injected with saliva. This kills the prey very quickly and begins a slower process of digestion. While the onychophoran waits for the prey to digest, it salivates on its slime and begins to eat it (and anything attached to it). It subsequently tugs and slices at the earlier perforation to allow access to the now-liquefied interior of its prey. The jaws operate by moving backwards and forwards along the axis of the body (not in a side-to-side clipping motion as in arthropods), conceivably using a pairing of musculature and hydrostatic pressure. The pharynx is specially adapted for sucking, to extract the liquefied tissue; the arrangement of the jaws about the tongue and lip papillae ensures a tight seal and the establishment of suction. In social groups, the dominant female is the first to feed, not permitting competitors access to the prey item for the first hour of feeding. Subsequently, subordinate individuals begin to feed. The number of males reaches a peak after females start to leave the prey item. After feeding, individuals clean their antennae and mouth parts before re-joining the rest of their group. Reproduction and life-cycle Almost all species of velvet worm reproduce sexually. The sole exception is Epiperipatus imthurni, of which no males have been observed; reproduction instead occurs by parthenogenesis. All species are in principle sexually distinct and bear, in many cases, a marked sexual dimorphism: the females are usually larger than the males and have, in species where the number of legs is variable, more legs. The females of many species are fertilized only once during their lives, which leads to copulation sometimes taking place before the reproductive organs of the females are fully developed. In such cases, for example at the age of three months in Macroperipatus torquatus, the transferred sperm cells are kept in a special reservoir, where they can remain viable for longer periods. Fertilization takes place internally, although the mode of sperm transmission varies widely. 
In most species, for example in the genus Peripatus, a package of sperm cells called the spermatophore is placed into the genital opening of the female. The detailed process by which this is achieved is in most cases still unknown, a true penis having been observed only in species of the genus Paraperipatus. In many Australian species, there exist dimples or special dagger- or axe-shaped structures on the head; the male of Florelliceps stutchburyae presses a long spine against the female's genital opening and probably positions its spermatophore there in this way. During the process, the female supports the male by keeping him clasped with the claws of her last pair of legs. The mating behavior of two species of the genus Peripatopsis is particularly curious. Here, the male places two-millimetre spermatophores on the back or sides of the female. Amoebocytes from the female's blood collect on the inside of the deposition site, and both the spermatophore's casing and the body wall on which it rests are decomposed via the secretion of enzymes. This releases the sperm cells, which then move freely through the haemocoel, penetrate the external wall of the ovaries and finally fertilize the ova. Why this self-inflicted skin injury does not lead to bacterial infections is not yet understood (though likely related to the enzymes used to deteriorate the skin or facilitate the transfer of viable genetic material from male to female). Velvet worms are found in egg-laying (oviparous), egg-live-bearing (ovoviviparous) and live-bearing (viviparous) forms. In a recent peer-reviewed paper published in the "Journal of Zoology," researchers discovered that certain species of Peripatus exhibit a unique form of parental care. Unlike most invertebrates, where parental involvement is minimal, female Peripatus were observed actively guarding their eggs and even providing protection to their offspring after hatching. This finding challenges the conventional understanding of reproductive behavior in invertebrates and highlights the diversity of parenting strategies in the animal kingdom. Ovipary occurs solely in the Peripatopsidae, often in regions with erratic food supply or unsettled climate. In these cases, the yolk-rich eggs measure 1.3 to 2.0 mm and are coated in a protective chitinous shell. Maternal care is unknown. The majority of species are ovoviviparous: the medium-sized eggs, encased only by a double membrane, remain in the uterus. The embryos do not receive food directly from the mother, but are supplied instead by the moderate quantity of yolk contained in the eggs—they are therefore described as lecithotrophic. The young emerge from the eggs only a short time before birth. This probably represents the velvet worm's original mode of reproduction, i.e., both oviparous and viviparous species developed from ovoviviparous species. True live-bearing species are found in both families, particularly in tropical regions with a stable climate and regular food supply throughout the year. The embryos develop from eggs only micrometres in size and are nourished in the uterus by their mother, hence the description "matrotrophic". The supply of food takes place either via a secretion from the mother directly into the uterus or via a genuine tissue connection between the epithelium of the uterus and the developing embryo, known as a placenta. The former is found only outside the American continents, while the latter occurs primarily in America and the Caribbean and more rarely in the Old World. 
The gestation period can amount to up to 15 months, at the end of which the offspring emerge in an advanced stage of development. The embryos found in the uterus of a single female do not necessarily have to be of the same age; it is quite possible for there to be offspring at different stages of development and descended from different males. In some species, young tend to be released only at certain points in the year. A female can have between 1 and 23 offspring per year; development from fertilized ovum to adult takes between 6 and 17 months and does not have a larval stage. This is probably also the original mode of development. Velvet worms have been known to live for up to six years. Ecology The velvet worm's important predators are primarily various spiders and centipedes, along with rodents and birds, such as, in Central America, the clay-coloured thrush (Turdus grayi). In South America, Hemprichi's coral snake (Micrurus hemprichii) feeds almost exclusively on velvet worms. For defence, some species roll themselves reflexively into a spiral, while they can also fight off smaller opponents by ejecting slime. Various mites (Acari) are known to be ectoparasites infesting the skin of the velvet worm. Skin injuries are usually accompanied by bacterial infections, which are almost always fatal. The South African species Peripatopsis capensis has been inadvertently introduced to Santa Cruz Island in the Galapagos Islands, where it co-occurs with native velvet worms. Conservation The global conservation status of velvet worm species is difficult to estimate; many species are only known to exist at their type locality (the location at which they were first observed and described). The collection of reliable data is also hindered by low population densities, their typically nocturnal behaviour and possibly also as-yet undocumented seasonal influences and sexual dimorphism. To date, the only onychophorans evaluated by the IUCN are: Mesoperipatus tholloni (Data Deficient) Plicatoperipatus jamaicensis (Near Threatened) Peripatoides indigo (Vulnerable) Peripatoides suteri (Vulnerable) Peripatopsis alba (Vulnerable) Peripatopsis clavigera (Vulnerable) Macroperipatus insularis (Endangered) Leucopatus anophthalmus (Endangered) Opisthopatus roseus (Critically Endangered) Peripatopsis leonina (Critically Endangered) Speleoperipatus spelaeus (Critically Endangered) The primary threat comes from destruction and fragmentation of velvet worm habitat due to industrialisation, draining of wetlands, and slash-and-burn agriculture. Many species also have naturally low population densities and closely restricted geographic ranges; as a result, relatively small localised disturbances of important ecosystems can lead to the extinction of entire populations or species. Collection of specimens for universities or research institutes also plays a role on a local scale. There is a very pronounced difference in the protection afforded to velvet worms between regions: in some countries, such as South Africa, there are restrictions on both collecting and exporting, while in others, such as Australia, only export restrictions exist. Many countries offer no specific safeguards at all. Tasmania has a protection programme that is unique worldwide: one region of forest has its own velvet worm conservation plan, which is tailored to a particular velvet worm species. 
Phylogeny In their present forms, the velvet worms are probably very closely related to the arthropods, a very extensive taxon that incorporates, for instance, the crustaceans, insects, and arachnids. They share, among other things, an exoskeleton consisting of α-chitin and non-collagenous proteins; gonads and waste-elimination organs enclosed in true coelom tissue; an open blood system with a tubular heart situated at the rear; an abdominal cavity divided into pericardial and perivisceral cavities; respiration via tracheae; and similar embryonic development. Segmentation, with two body appendages per segment, is also a shared feature. However, the antennae, mandibles, and oral papillae of velvet worms are probably not homologous to the corresponding features in arthropods; i.e., they probably developed independently. Another closely related group are the comparatively obscure water bears (Tardigrada); however, due to their very small size, water bears have no need for—and hence lack—blood circulation and separate respiratory structures: shared characteristics that support common ancestry of velvet worms and arthropods. Together, the velvet worms, arthropods, and water bears form a monophyletic taxon, the Panarthropoda, i.e., the three groups collectively cover all descendants of their last common ancestor. Due to certain similarities of form, the velvet worms were usually grouped with the water bears to form the taxon Protoarthropoda. This designation would imply that both velvet worms and water bears are not yet as highly developed as the arthropods. Modern systematic theories reject such conceptions of "primitive" and "highly developed" organisms and instead consider exclusively the historical relationships among the taxa. These relationships are not as yet fully understood, but it is considered probable that the velvet worms' sister groups form a taxon designated Tactopoda, thus: For a long time, velvet worms were also considered related to the annelids. They share, among other things, a worm-like body; a thin and flexible outer skin; a layered musculature; paired waste-elimination organs; as well as a simply constructed brain and simple eyes. Decisive, however, was the existence of segmentation in both groups, with the segments showing only minor specialisation. The parapodia appendages found in annelids therefore correspond to the stump feet of the velvet worms. Within the Articulata hypothesis developed by Georges Cuvier, the velvet worms therefore formed an evolutionary link between the annelids and the arthropods: worm-like precursors first developed parapodia, which then developed further into stub feet as an intermediate link in the ultimate development of the arthropods' appendages. Due to their structural conservatism, the velvet worms were thus considered "living fossils". This perspective was expressed paradigmatically in the statement by the French zoologist A. Vandel: Onychophorans can be considered highly evolved annelids, adapted to terrestrial life, which announced prophetically the Arthropoda. They are a lateral branch which has endured from ancient times until today, without important modifications. Modern taxonomy does not study criteria such as "higher" and "lower" states of development or distinctions between "main" and "side" branches—only family relationships indicated by cladistic methods are considered relevant. 
From this point of view, several common characteristics still support the Articulata hypothesis: a segmented body; paired appendages on each segment; pairwise arrangement of waste-elimination organs in each segment; and above all, a rope-ladder-like nervous system based on a double nerve strand lying along the belly. An alternative concept, most widely accepted today, is the so-called Ecdysozoa hypothesis. This places the annelids and Panarthropoda in two very different groups: the former in the Lophotrochozoa and the latter in the Ecdysozoa. Mitochondrial gene sequences also provide support for this hypothesis. Proponents of this hypothesis assume that the aforementioned similarities between annelids and velvet worms either developed convergently or were primitive characteristics passed unchanged from a common ancestor to both the Lophotrochozoa and Ecdysozoa. For example, in the first case, the rope-ladder nervous system would have developed in the two groups independently, while in the second case, it is a very old characteristic, which does not imply a particularly close relationship between the annelids and Panarthropoda. The Ecdysozoa concept divides the taxon into two groups: the Panarthropoda, into which the velvet worms are placed, and the sister group Cycloneuralia, containing the threadworms (Nematoda), horsehair worms (Nematomorpha) and three rather obscure groups: the mud dragons (Kinorhyncha); penis worms (Priapulida); and brush-heads (Loricifera). Particularly characteristic of the Cycloneuralia is a ring of "circumoral" nerves around the mouth opening, which the proponents of the Ecdysozoa hypothesis also recognise in modified form in the details of the nerve patterns of the Panarthropoda. Both groups also share a common skin-shedding mechanism (ecdysis) and molecular biological similarities. One problem for the Ecdysozoa hypothesis is the subterminal position of the velvet worms' mouths: Unlike in the Cycloneuralia, the mouth is not at the front end of the body, but lies further back, under the belly. However, investigations into their developmental biology, particularly regarding the development of the head nerves, suggest that this was not always the case, and that the mouth was originally terminal (situated at the tip of the body). This is supported by the fossil record. The "stem-group arthropod" hypothesis is very widely accepted, but some trees suggest that the onychophorans may occupy a different position; their brain anatomy is more similar to that of the chelicerates than to that of any other arthropod. The modern velvet worms form a monophyletic group, incorporating all the descendants of their common ancestor. Important common derived characteristics (synapomorphies) include, for example, the mandibles of the second body segment and the oral papillae and associated slime glands of the third; nerve strands extending along the underside with numerous cross-linkages per segment; and the special form of the tracheae. By 2011, some 180 modern species, comprising 49 genera, had been described; the actual number of species is probably about twice this. According to a more recent study, 82 species of Peripatidae and 115 species of Peripatopsidae have been described thus far. However, among the 197 species, 20 are nomina dubia, due to major taxonomic inconsistencies. The best-known is the type genus Peripatus, which was described as early as 1825 and which, in English-speaking countries, stands as representative of all velvet worms. 
All genera are assigned to one of two families, the distribution ranges of which do not overlap but are separated by arid areas or oceans: The Peripatopsidae exhibit relatively many characteristics that are perceived as original or "primitive". The number of leg pairs in this family range from 13 (in Ooperipatellus nanus) to 29 (in Paraperipatus papuensis). Behind or between the last leg pair is the genital opening (gonopore). Both oviparous and ovoviviparous, as well as genuinely viviparous, species exist, although the peripatopsids essentially lack a placenta. Their distribution is circumaustral, encompassing Australasia, South Africa, and Chile. The Peripatidae exhibit a range of derivative features. They are longer, on average, than the Peripatopsidae and also have more legs. The number of leg pairs in this family range from 19 (in Typhloperipatus williamsoni) to 43 (in Plicatoperipatus jamaicensis). The gonopore is always between the penultimate leg pair. None of the peripatid species are oviparous, and the overwhelming majority are viviparous. The females of many viviparous species develop a placenta with which to provide the growing embryo with nutrients. Distribution of the peripatids is restricted to the tropical and subtropical zones; in particular, they inhabit Central America, northern South America, Gabon, Northeast India, and Southeast Asia. Evolution Onychophoran paleontology is plagued by the vagaries of the preservation process that makes fossils difficult to interpret. Experiments on the decay and compaction of onychophora demonstrate difficulties in interpreting fossils; certain parts of living onychophora are visible only in certain conditions: The mouth may or may not be preserved; The claws may be re-oriented or lost; The leg width may increase or decrease; and The mud may be mistaken for organs. More significantly, features seen in fossils may be artifacts of the preservation process. For instance, "shoulder pads" may simply be the second row of legs coaxially compressed onto the body; branching "antennae" may in fact have been created during decay. Certain fossils from the early Cambrian bear a striking resemblance to the velvet worms. These fossils, known collectively as the lobopodians, were marine and represent a grade from which arthropods, tardigrades, and Onychophora arose. Possible fossils of onychophorans are found in the Cambrian, Ordovician (possibly), Silurian and Pennsylvanian periods. Historically, all fossil Onychophora and lobopodians were lumped into the taxon Xenusia, further subdivided by some authors to the Paleozoic Udeonychophora and the Mesozoic/Tertiary Ontonychophora; living Onychophora were termed Euonychophora. Importantly, few of the Cambrian fossils bear features that distinctively unite them with the Onychophora; none can be confidently assigned to the onychophoran crown or even stem group. Possible exceptions are Hallucigenia and related taxa such as Collinsium ciliosum, which bear distinctly onychophoran-like claws. It is not clear when the transition to a terrestrial existence was made, but it is considered plausible that it took place between the Ordovician and late Silurian – approximately – via the intertidal zone. The low preservation potential of the non-mineralised onychophorans means that they have a sparse fossil record. The lobopodian Helenodora from the Carboniferous of North America has been suggested to be a member of Onychophora, but other studies recover it as more closely related to other lobopodians. 
A Late Carboniferous fossil from Montceau-les-Mines, France, Antennipatus, has been suggested to have clear onychophoran affinities and is likely the earliest known terrestrial onychophoran, but its poor preservation prohibits differentiating between its placement on the stem or crown of the two extant families, or on the onychophoran stem-group. In 2018, Giribet and colleagues argued for the identification of Antennipatus as the oldest onychophoran, suggested that the minimum age of Antennipatus would fall within the Gzhelian age around , and incorporated the taxon conservatively into their phylogenetic analysis of onychophorans because of the uncertainty of its placement within the order. In 2021, Baker and colleagues conducted divergence analyses using molecular dating, treating Antennipatus conservatively as a stem-group onychophoran with a minimum age of , resulting in a divergence date of for the crown group onychophorans. Crown group representatives are known only from amber, the oldest being Cretoperipatus from Burmese amber of the Cenomanian-Turonian stages of the Late Cretaceous, around 100-90 million years old, assigned to the family Peripatidae. The affinity of amber records from the Cenozoic, like Tertiapatus and Succinipatopsis, which form the suggested superfamily Tertiapatoidea, has been considered doubtful by other authors.
Nemertea
Nemertea is a phylum of animals also known as ribbon worms or proboscis worms, consisting of about 1300 known species. Most ribbon worms are very slim, usually only a few millimeters wide, although a few have relatively short but wide bodies. Many have patterns of yellow, orange, red and green coloration. The foregut, stomach and intestine run a little below the midline of the body, the anus is at the tip of the tail, and the mouth is under the front. A little above the gut is the rhynchocoel, a cavity which mostly runs above the midline and ends a little short of the rear of the body. All species have a proboscis which lies in the rhynchocoel when inactive but everts to emerge just above the mouth to capture the animal's prey with venom. A highly extensible muscle in the back of the rhynchocoel pulls the proboscis in when an attack ends. A few species with stubby bodies filter feed and have suckers at the front and back ends, with which they attach to a host. The brain is a ring of four ganglia, positioned around the rhynchocoel near the animal's front end. At least a pair of ventral nerve cords connect to the brain and run along the length of the body. Most nemerteans have various chemoreceptors, and on their heads some species have a number of pigment-cup ocelli, which can detect light but cannot form an image. Nemerteans respire through the skin. They have at least two lateral vessels which are joined at the ends to form a loop, and these and the rhynchocoel are filled with fluid. There is no heart, and the flow of fluid depends on contraction of muscles in the vessels and the body wall. To filter out soluble waste products, flame cells are embedded in the front part of the two lateral fluid vessels, and remove the wastes through a network of pipes to the outside. All nemerteans move slowly, using their external cilia to glide on surfaces on a trail of slime, while larger species use muscular waves to crawl, and some swim by dorso-ventral undulations. A few live in the open ocean while the rest find or make hiding places on the bottom. About a dozen species inhabit freshwater, mainly in the tropics and subtropics, and another dozen species live on land in cool, damp places. Most nemerteans are carnivores, feeding on annelids, clams and crustaceans. Some species of nemerteans are scavengers, and a few live commensally inside the mantle cavity of molluscs. In most species the sexes are separate, but all the freshwater species are hermaphroditic. Nemerteans often have numerous temporary gonads (ovaries or testes), and build temporary gonoducts (ducts from which the ova or sperm are emitted), one per gonad, opening to a gonopore when the ova and sperm are ready. The eggs are generally fertilised externally. Some species shed them into the water, and others protect their eggs in various ways. The fertilized egg divides by spiral cleavage and grows by determinate development, in which the fate of a cell can usually be predicted from its predecessors in the process of division. The embryos of most taxa develop either directly to form juveniles (like the adult but smaller) or larvae that resemble the planulas of cnidarians. However, some form a pilidium larva, in which the developing juvenile has a gut which lies across the larva's body, and usually eats the remains of the larva when it emerges. The bodies of some species fragment readily, and even parts cut off near the tail can grow into full bodies.
Traditional taxonomy divides the phylum in two classes, Anopla ("unarmed" – their proboscises do not have a little dagger) with two orders, and Enopla ("armed" with a dagger) also with two orders. However, it is now accepted that Anopla are paraphyletic, as one order of Anopla is more closely related to Enopla than to the other order of Anopla. The phylum Nemertea itself is monophyletic, its main synapomorphies being the rhynchocoel and eversible proboscis. Traditional taxonomy says that nemerteans are closely related to flatworms, but both phyla are regarded as members of the Lophotrochozoa, a very large clade, sometimes viewed as a superphylum that also includes molluscs, annelids, brachiopods, bryozoa and many other protostomes. History In 1555 Olaus Magnus wrote of a marine worm which was apparently long ("40 cubits"), about the width of a child's arm, and whose touch made a hand swell. William Borlase wrote in 1758 of a "sea long worm", and in 1770 Gunnerus wrote a formal description of this animal, which he called Ascaris longissima. Its current name, Lineus longissimus, was first used in 1806 by Sowerby. In 1995, a total of 1,149 species had been described and grouped into 250 genera. Nemertea are named after the Greek sea-nymph Nemertes, one of the daughters of Nereus and Doris. Alternative names for the phylum have included Nemertini, Nemertinea, and Rhynchocoela. The Nemertodermatida are a separate phylum, whose closest relatives appear to be the Acoela. Description Body structure and major cavities The typical nemertean body is very thin in proportion to its length. The smallest are a few millimeters long, most are less than , and several exceed . The longest animal ever found, at long, may be a specimen of Lineus longissimus, Ruppert, Fox and Barnes refer to a Lineus longissimus long, washed ashore after a storm off St Andrews in Scotland. Other estimates are about . Zoologists find it extremely difficult to measure this species. For comparison: The longest recorded blue whale was . The dinosaurs Argentinosaurus and Patagotitan are estimated at approximately and respectively. A specimen of the Arctic giant jellyfish Cyanea capillata arctica was long. L. longissimus, however, is usually only a few millimeters wide. The bodies of most nemerteans can stretch a lot, up to 10 times their resting length in some species, but reduce their length to 50% and increase their width to 300% when disturbed. A few have relatively short but wide bodies, for example Malacobdella grossa is up to long and wide, and some of these are much less stretchy. Smaller nemerteans are approximately cylindrical, but larger species are flattened dorso-ventrally. Many have visible patterns in various combinations of yellow, orange, red and green. The outermost layer of the body has no cuticle, but consists of a ciliated and glandular epithelium containing rhabdites, which form the mucus in which the cilia glide. Each ciliated cell has many cilia and microvilli. The outermost layer rests on a thickened basement membrane, the dermis. Next to the dermis are at least three layers of muscles, some circular and some longitudinal. The combinations of muscle types vary between the different classes, but these are not associated with differences in movement. Nemerteans also have dorso-ventral muscles, which flatten the animals, especially in the larger species. Inside the concentric tubes of these layers is mesenchyme, a kind of connective tissue. In pelagic species this tissue is gelatinous and buoyant. 
They are unsegmented, but at least one species, Annulonemertes minusculus, is segmented. But this is assumed to be a derived trait. The segmentation does not include the coelom and body wall, and is therefore referred to as pseudosegmentation. The mouth is ventral and a little behind the front of the body. The foregut, stomach and intestine run a little below the midline of the body and the anus is at the tip of the tail. Above the gut and separated from the gut by mesenchyme is the rhynchocoel, a cavity which mostly runs above the midline and ends a little short of the rear of the body. The rhynchocoel of class Anopla has an orifice a little to the front of the mouth, but still under the front of the body. In the other class, Enopla, the mouth and the front of the rhynchocoel share an orifice. The rhynchocoel is a coelom, as it is lined by epithelium. Proboscis and feeding The proboscis is an infolding of the body wall, and sits in the rhynchocoel when inactive. When muscles in the wall of the rhynchocoel compress the fluid inside, the pressure makes the proboscis jump inside-out along a canal called the rhynchodeum and through an orifice, the proboscis pore. The proboscis has a muscle which attaches to the back of the rhynchocoel, can stretch up to 30 times its inactive length and acts to retract the proboscis. The proboscis of the class Anopla exits from an orifice which is separate from the mouth, coils around the prey and immobilizes it by sticky, toxic secretions. The Anopla can attack as soon as the prey moves into the range of the proboscis. Some Anopla have branched proboscises which can be described as "a mass of sticky spaghetti". The animal then draws its prey into its mouth. In most of the class Enopla, the proboscis exits from a common orifice of the rhynchocoel and mouth. A typical member of this class has a stylet, a calcareous barb, with which the animal stabs the prey many times to inject toxins and digestive secretions. The prey is then swallowed whole or, after partial digestion, its tissues are sucked into the mouth. The stylet is attached about one-third of the distance from the end of the everted proboscis, which extends only enough to expose the stylet. On either side of the active stylet are sacs containing back-up stylets to replace the active one as the animal grows or an active one is lost. Instead of one stylet, the Polystilifera have a pad that bears many tiny stylets, and these animals have separate orifices for the proboscis and mouth, unlike other Enopla. The Enopla can only attack after contacting the prey. Some nemerteans, such as L. longissimus, absorb organic food in solution through their skins, which may make the long, slim bodies an advantage. Suspension feeding is found only among the specialized symbiotic bdellonemerteans, which have a proboscis but no stylet, and use suckers to attach themselves to bivalves. Respiration and circulatory system Nemerteans lack specialized gills, and respiration occurs over the surface of the body, which is long and sometimes flattened. Like other animals with thick body walls, they use fluid circulation rather than diffusion to move substances through their bodies. The circulatory system consists of the rhynchocoel and peripheral vessels, while their blood is contained in the main body cavity. The fluid in the rhynchocoel moves substances to and from the proboscis, and functions as a fluid skeleton in everting the proboscis and in burrowing. 
The vessels circulate fluid round the whole body and the rhynchocoel provides its own local circulation. The circulatory vessels are a system of coeloms. In the simplest type of circulatory system, two lateral vessels are joined at the ends to form a loop. However, many species have additional long-wise and cross-wise vessels. There is no heart and there are no pumping vessels, and the flow of fluid depends on contraction of both the vessels and the body wall's muscles. In some species, circulation is intermittent, and fluid ebbs and flows in the long-wise vessels. The fluid in the vessels is usually colorless, but in some species it contains cells that are yellow, orange, green or red. The red type contains hemoglobin and carries oxygen, but the function of the other pigments is unknown. Excretion Nemertea use organs called protonephridia to excrete soluble waste products, especially nitrogenous by-products of cellular metabolism. In nemertean protonephridia, flame cells which filter out the wastes are embedded in the front part of the two lateral fluid vessels. The flame cells remove the wastes into two collecting ducts, one on either side, and each duct has one or more nephridiopores through which the wastes exit. Semiterrestrial and freshwater nemerteans have many more flame cells than marine species, sometimes thousands. The reason may be that osmoregulation is more difficult in non-marine environments. Nervous system and senses The central nervous system consists of a brain and paired ventral nerve cords that connect to the brain and run along the length of the body. The brain is a ring of four ganglia, masses of nerve cells, positioned round the rhynchocoel near its front end – while the brains of most protostome invertebrates encircle the foregut. Most nemertean species have just one pair of nerve cords, many species have additional paired cords, and some species also have a dorsal cord. In some species the cords lie within the skin, but in most they are deeper, inside the muscle layers. The central nervous system is often red or pink because it contains hemoglobin. This stores oxygen for peak activity or when the animal experiences anoxia, for example while burrowing in oxygen-free sediments. Some species have paired cerebral organs, sacs whose only openings are to the outside. Other species have unpaired evertible organs on the front of their heads. Some have slits along the side of the head or grooves obliquely across the head, and these may be associated with paired cerebral organs. All of these are thought to be chemoreceptors, and the cerebral organs may also aid in osmoregulation. Small pits in the epidermis appear to be sensors. On their heads, some species have a number of pigment-cup ocelli, which can detect light but not form an image. Most nemerteans have two to six ocelli, although some have hundreds. A few tiny species that live between grains of sand have statocysts, which sense balance. Paranemertes peregrina, which feeds on polychaetes, can follow the prey's trails of mucus, and find its burrow by backtracking along its own trail of mucus. Movement Nemerteans generally move slowly, though they have occasionally been documented to successfully prey on spiders or insects. Most nemerteans use their external cilia to glide on surfaces on a trail of slime, some of which is produced by glands in the head. Larger species use muscular waves to crawl, and some aquatic species swim by dorso-ventral undulations. Some species burrow by means of muscular peristalsis, and have powerful muscles.
Some species of the suborder Monostilifera, whose proboscis have one active stylet, move by extending the proboscis, sticking it to an object and pulling the animal toward the object. Reproduction and life-cycle Larger species often break up when stimulated, and the fragments often grow into full individuals. Some species fragment routinely and even parts near the tail can grow full bodies. But this kind of extreme regeneration is restricted to only a few types of nemerteans, and is assumed to be a derived feature. All reproduce sexually, and most species are gonochoric (the sexes are separate), but all the freshwater forms are hermaphroditic. Nemerteans often have numerous temporary gonads (ovaries or testes), forming a row down each side of the body in the mesenchyme. Temporary gonoducts (ducts from which the ova or sperm are emitted), one per gonad, are built when the ova and sperm are ready. The eggs are generally fertilised externally. Some species shed them into the water, some lay them in a burrow or tube, and some protect them by cocoons or gelatinous strings. Some bathypelagic (deep sea) species have internal fertilization, and some of these are viviparous, growing their embryos in the female's body. The zygote (fertilised egg) divides by spiral cleavage and grows by determinate development, in which the fate of a cell can usually be predicted from its predecessors in the process of division. The embryos of most taxa develop either directly to form juveniles (like the adult but smaller) or to form planuliform larvae. The planuliform larva stage may be short-lived and lecithotrophic ("yolky") before becoming a juvenile, or may be planktotrophic, swimming for some time and eating prey larger than microscopic particles. However, many members of the order Heteronemertea and the palaeonemertean family Hubrechtiidae form a pilidium larva, which can capture unicellular algae and which Maslakova describes as like a deerstalker cap with the ear flaps pulled down. It has a gut which lies across the body, a mouth between the "ear flaps", but no anus. A small number of imaginal discs form, encircling the archenteron (developing gut) and coalesce to form the juvenile. When it is fully formed, the juvenile bursts out of the larva body and usually eats it during this catastrophic metamorphosis. This larval stage is unique in that there are no Hox genes involved during development, which are only found in the juveniles developing inside the larvae. The species Paranemertes peregrina has been reported as having a life span of around 18 months. Ecological significance Most nemerteans are marine animals that burrow in sediments, lurk in crevices between shells, stones or the holdfasts of algae or sessile animals. Some live deep in the open oceans, and have gelatinous bodies. Others build semi-permanent burrows lined with mucus or produce cellophane-like tubes. Mainly in the tropics and subtropics, about 12 species appear in freshwater, and about a dozen species live on land in cool, damp places, for example under rotting logs. The terrestrial Argonemertes dendyi is a native of Australia but has been found in the British Isles, in Sao Miguel in the Azores, in Gran Canaria, and in a lava tube at Kaumana on the Island of Hawaii. It can build a cocoon, which allows it to avoid desiccation while being transported, and it may be able to build populations quickly in new areas as it is a protandrous hermaphrodite. 
Another terrestrial genus, Geonemertes, is mostly found in Australasia but has species in the Seychelles, widely across the Indo-Pacific, in Tristan da Cunha in the South Atlantic, in Frankfurt, in the Canary Islands, in Madeira and in the Azores. Geonemertes pelaensis has been implicated in the decline of native arthropod species on the Ogasawara Islands, where it was introduced in the 1980s. Most are carnivores, feeding on annelids, clams and crustaceans, and may kill annelids of about their own size. They sometimes take fish, both living and dead. Insects and myriapods are the only known prey of the two terrestrial species of Argonemertes. A few nemerteans are scavengers, and these generally have good distance chemoreception ("smell") and are not selective about their prey. A few species live commensally inside the mantle cavity of molluscs and feed on micro-organisms filtered out by the host. Near San Francisco the nemertean Carcinonemertes errans has consumed about 55% of the total egg production of its host, the Dungeness crab Metacarcinus magister. C. errans is considered a significant factor in the collapse of the Dungeness crab fishery. Other coastal nemerteans have devastated clam beds. The few predators on nemerteans include bottom-feeding fish, some sea birds, a few invertebrates including horseshoe crabs, and other nemerteans. Nemerteans' skins secrete toxins that deter many predators, but some crabs may clean nemerteans with one claw before eating them. The American Cerebratulus lacteus and the South African Polybrachiorhynchus dayi, both called "tapeworms" in their respective localities, are sold as fish bait. Taxonomy Traditional taxonomic classification has divided the group into two classes and four orders: Class Anopla ("unarmed"). Includes animals with a proboscis without a stylet, and a mouth underneath and behind the brain. Order Palaeonemertea. Comprises 100 marine species. Their body wall has outer circular and inner length-wise muscles. In addition, Carinoma tremaphoros has circular and inner length-wise muscles in the epidermis; the extra muscle layers seem to be needed for burrowing by peristalsis. Order Heteronemertea. Comprises about 400 species. The majority are marine, but three are freshwater. Their body-wall muscles are disposed in four layers, alternately circular and length-wise starting from the outermost layer. The order includes the strongest swimmers. Two genera have branched proboscises. Class Enopla ("armed"). All have stylets except order Bdellonemertea. Their mouth is located underneath and ahead of the brain. Their main nerve cords run inside body-wall muscles. Order Bdellonemertea. Includes seven species, of which six live as commensals in the mantle of large clams and one in that of a freshwater snail. The hosts filter feed, and the nemerteans steal food from them. These nemerteans have short, wide bodies and have no stylets but have a sucking pharynx and a posterior sucker, with which they move like inchworms. Order Hoplonemertea. Comprises 650 species. They live in benthic and pelagic sea water, in freshwater and on land. They include free-living, commensal and parasitic forms, and are armed with stylet(s). Suborder Monostilifera. Includes 500 species with a single central stylet. Some use the stylet for locomotion as well as for capturing prey. Suborder Polystilifera. Includes about 100 pelagic and 50 benthic species. Their pads bear many tiny stylets.
Recent molecular phylogenetic studies divided the group into two superclasses, three classes, and eight orders: Superclass Pronemertea Class Palaeonemertea Order Carinomiformes Order Tubulaniformes Order Archinemertea Superclass Neonemertea Class Pilidiophora Order Hubrechtiiformes Order Heteronemertea Class Hoplonemertea (= Enopla) Order Polystilifera Order Monostilifera (includes Bdellonemertea) incertae sedis Order Arhynchonemertea (provisionally separated into its own class, Arhynchocoela, in 1995) Evolutionary history Fossil record As nemerteans are mostly soft-bodied, one would expect fossils of them to be extremely rare. One might expect the stylet of a nemertean to be preserved, since it is made of calcium phosphate, but no fossil stylets have yet been found. Nemertean fossils and traces have been reported from the Middle Triassic of Germany. The Middle Cambrian fossil Amiskwia from the Burgess Shale has been classed as a nemertean, based on a resemblance to some unusual deep-sea swimming nemerteans, but few paleontologists accept this classification, as the Burgess Shale fossils show no evidence of a rhynchocoel or intestinal caeca. Fossils of vermiform organisms with a wide range of morphologies, occurring on bedding planes in the Late Ordovician (Katian) Vauréal Formation (Canada), have also been reported. In the specimens preserving the anterior end of the body, this end is pointed or rounded, bearing a rhynchocoel with the proboscis, which is characteristic of nemerteans. The authors of that report attributed these fossils to nemerteans and interpreted them as the oldest record of the group reported so far. However, Knaust & Desrochers cautioned that partly preserved putative nemertean fossils might ultimately turn out to be fossils of turbellarians or annelids. It has been suggested that Archisymplectes, one of the Pennsylvanian-age animals from Mazon Creek in northern and central Illinois, may be a nemertean. This fossil, however, only preserves the outline of the "worm", and there is no evidence of a proboscis, so there is no certainty that it represents a nemertean. Within Nemertea There is no doubt that the phylum Nemertea is monophyletic (meaning that the phylum includes all and only descendants of one ancestor that was also a member of the phylum). The synapomorphies (traits shared by an ancestor and all its descendants, but not by other groups) include the eversible proboscis located in the rhynchocoel. While some analyses treat the Palaeonemertea as monophyletic, others regard them as paraphyletic and basal (containing the ancestors of the more recent clades). The Anopla ("unarmed") represent an evolutionary grade of nemerteans without stylets (comprising the Heteronemertea and the Palaeonemerteans), while the Enopla ("armed") are monophyletic; some analyses find that the Palaeonemertea is doubly paraphyletic, having given rise to both the Heteronemertea and the Enopla. Some researchers treat the Bdellonemertea as a clade separate from the Hoplonemertea, while others believe the Bdellonemertea are part of the Monostilifera (with one active stylet), which are within the Hoplonemertea – which implies that "Enopla" and "Hoplonemertea" are synonyms for the same branch of the tree. The Polystilifera (with many tiny stylets) are monophyletic. Relationships with other phyla English-language writings have conventionally treated nemerteans as acoelomate bilaterians that are most closely related to flatworms (Platyhelminthes).
These pre-cladistic analyses emphasised as shared features: multiciliated (with multiple cilia per cell), glandular epidermis; rod-shaped secretory bodies or rhabdites; frontal glands or organs; protonephridia; and acoelomate body organization. However, multiciliated epidermal cells and epidermal gland cells are also found in Ctenophora, Annelida, Mollusca and other taxa. The rhabdites of nemerteans have a different structure from those of flatworms at the microscopic scale. The frontal glands or organs of flatworms vary a lot in structure, and similar structures appear in small marine annelids and entoproct larvae. The protonephridia of nemerteans and flatworms are different in structure, and in position – the flame cells of nemerteans are usually in the walls of the fluid vessels and are served by "drains" from which the wastes exit by a small number of tubes through the skin, while the flame cells of flatworms are scattered throughout the body. Rigorous comparisons show no synapomorphies of nemertean and platyhelminth nephridia. According to more recent analyses, in the development of nemertean embryos, ectomesoderm (outer part of the mesoderm, which is the layer in which most of the internal organs are built) is derived from cells labelled 3a and 3b, and endomesoderm (inner part of the mesoderm) is derived from the 4d cell. Some of the ectomesoderm in annelids, echiurans and molluscs is derived from cells 3a and 3b, while the ectomesoderm of polyclad flatworms is derived from the 2b cell and acoel flatworms produce no ectomesoderm. In nemerteans the space between the epidermis and the gut is mainly filled by well-developed muscles embedded in noncellular connective tissue. This structure is similar to that found in larger flatworms such as polyclads and triclads, but a similar structure of body-wall muscles embedded in noncellular connective tissue is widespread among the Spiralia (animals in which the early cell divisions make a spiral pattern) such as sipunculans, echiurans and many annelids. Nemerteans' affinities with Annelida (including Echiura, Pogonophora, Vestimentifera and perhaps Sipuncula) and Mollusca make the ribbon worms members of Lophotrochozoa, which include about half of the extant animal phyla. The Lophotrochozoa group together: those animals that feed using a lophophore (Brachiopoda, Bryozoa, Phoronida, Entoprocta); phyla in which most members' embryos develop into trochophore larvae (for example Annelida and Mollusca); and some other phyla (such as Platyhelminthes, Sipuncula, Gastrotricha, Gnathostomulida, Micrognathozoa, Nemertea, and Rotifera). These groupings are based on molecular phylogeny, which compares sections of organisms' DNA and RNA. While analyses by molecular phylogeny are confident that members of Lophotrochozoa are more closely related to each other than to non-members, the relationships between members are mostly unclear. Most protostome phyla outside the Lophotrochozoa are members of Ecdysozoa ("animals that molt"), which include Arthropoda, Nematoda and Priapulida. Most other bilaterian phyla are in the Deuterostomia, which include Echinodermata and Chordata. The Acoelomorpha, which are neither protostomes nor deuterostomes, are regarded as basal bilaterians.
Polychaete
Polychaeta is a paraphyletic class of generally marine annelid worms, commonly called bristle worms or polychaetes. Each body segment has a pair of fleshy protrusions called parapodia that bear many bristles, called chaetae, which are made of chitin. More than 10,000 species are described in this class. Common representatives include the lugworm (Arenicola marina) and the sandworm or clam worm Alitta. Polychaetes as a class are robust and widespread, with species ranging from forms that live at the coldest ocean temperatures of the abyssal plain to forms that tolerate the extremely high temperatures near hydrothermal vents. Polychaetes occur throughout the Earth's oceans at all depths, from forms that live as plankton near the surface, to a 2- to 3-cm specimen (still unclassified) observed by the robot ocean probe Nereus at the bottom of the Challenger Deep, the deepest known spot in the Earth's oceans. Only 168 species (less than 2% of all polychaetes) are known from fresh waters. Description Polychaetes are segmented worms, generally less than in length, although ranging at the extremes from to in Eunice aphroditois. They can sometimes be brightly coloured, and may be iridescent or even luminescent. Each segment bears a pair of paddle-like and highly vascularized parapodia, which are used for movement and, in many species, act as the worm's primary respiratory surfaces. Bundles of bristles, called chaetae, project from the parapodia. However, polychaetes vary widely from this generalized pattern, and can display a range of different body forms. The most generalised polychaetes are those that crawl along the bottom, but others have adapted to many different ecological niches, including burrowing, swimming, pelagic life, tube-dwelling or boring, commensalism, and parasitism, requiring various modifications to their body structures. The head, or prostomium, is relatively well developed, compared with other annelids. It projects forward over the mouth, which therefore lies on the animal's underside. The head normally includes two to four pairs of eyes, although some species are blind. These are typically fairly simple structures, capable of distinguishing only light and dark, although some species have large eyes with lenses that may be capable of more sophisticated vision, including the alciopids' complex eyes, which rival cephalopod and vertebrate eyes. Many species show bioluminescence; eight families have luminous species. The head also includes a pair of antennae, tentacle-like palps, and a pair of pits lined with cilia, known as "nuchal organs". The latter appear to be chemoreceptors, and help the worm to seek out food. Internal anatomy and physiology The outer surface of the body wall consists of a simple columnar epithelium covered by a thin cuticle. Underneath this, in order, are a thin layer of connective tissue, a layer of circular muscle, a layer of longitudinal muscle, and a peritoneum surrounding the body cavity. Additional oblique muscles move the parapodia. In most species the body cavity is divided into separate compartments by sheets of peritoneum between each segment, but in some species it is more continuous. The mouth of polychaetes is located on the peristomium, the segment behind the prostomium, and varies in form depending on their diets, since the group includes predators, herbivores, filter feeders, scavengers, and parasites.
In general, however, they possess a pair of jaws and a pharynx that can be rapidly everted, allowing the worms to grab food and pull it into their mouths. In some species, the pharynx is modified into a lengthy proboscis. The digestive tract is a simple tube, usually with a stomach part way along. The smallest species, and those adapted to burrowing, lack gills, breathing only through their body surfaces. Most other species have external gills, often associated with the parapodia. A simple but well-developed circulatory system is usually present. The two main blood vessels furnish smaller vessels to supply the parapodia and the gut. Blood flows forward in the dorsal vessel, above the gut, and returns down the body in the ventral vessel, beneath the gut. The blood vessels themselves are contractile, helping to push the blood along, so most species have no need of a heart. In a few cases, however, muscular pumps analogous to a heart are found in various parts of the system. Conversely, some species have little or no circulatory system at all, transporting oxygen in the coelomic fluid that fills their body cavities. The blood may be colourless, or have any of three different respiratory pigments. The most common of these is haemoglobin, but some groups have haemerythrin or the green-coloured chlorocruorin, instead. The nervous system consists of a single or double ventral nerve cord running the length of the body, with ganglia and a series of small nerves in each segment. The brain is relatively large, compared with that of other annelids, and lies in the upper part of the head. An endocrine gland is attached to the ventral posterior surface of the brain, and appears to be involved in reproductive activity. In addition to the sensory organs on the head, photosensitive eye spots, statocysts, and numerous additional sensory nerve endings, most likely involved with the sense of touch, also occur on the body. Polychaetes have a varying number of protonephridia or metanephridia for excreting waste, which in some cases can be relatively complex in structure. The body also contains greenish "chloragogen" tissue, similar to that found in oligochaetes, which appears to function in metabolism, in a similar fashion to that of the vertebrate liver. The cuticle is constructed from cross-linked fibres of collagen and may be 200 nm to 13 mm thick. Their jaws are formed from sclerotised collagen, and their setae from sclerotised chitin. Ecology Polychaetes are predominantly marine, but many species also live in freshwater, and a few in terrestrial environments. They are extremely variable in both form and lifestyle, and include a few taxa that swim among the plankton or above the abyssal plain. Most burrow or build tubes in the sediment, and some live as commensals. A few species, roughly 80 (less than 0.5% of species), are parasitic. These include both ectoparasites and endoparasites. Ectoparasitic polychaetes feed on skin, blood, and other secretions, and some are adapted to bore through hard, usually calcerous surfaces, such as the shells of mollusks. These "boring" polychaetes may be parasitic, but may be opportunistic or even obligate symbionts (commensals). The mobile forms (Errantia) tend to have well-developed sense organs and jaws, while the stationary forms (Sedentaria) lack them, but may have specialized gills or tentacles used for respiration and deposit or filter feeding, e.g., fanworms. Underwater polychaetes have eversible mouthparts used to capture prey. 
A few groups have evolved to live in terrestrial environments, like Namanereidinae with many terrestrial species, but are restricted to humid areas. Some have even evolved cutaneous invaginations for aerial gas exchange. Notable polychaetes One notable polychaete, the Pompeii worm (Alvinella pompejana), is endemic to the hydrothermal vents of the Pacific Ocean. Pompeii worms are among the most heat-tolerant complex animals known. A recently discovered genus, Osedax, includes a species nicknamed the "bone-eating snot flower". Another remarkable polychaete is Hesiocaeca methanicola, which lives on methane clathrate deposits. Lamellibrachia luymesi is a cold seep tube worm that reaches lengths of over 3 m and may be the most long-lived annelid, being over 250 years old. A still unclassified multilegged predatory polychaete worm was identified only by observation from the underwater vehicle Nereus at the bottom of the Challenger Deep, the greatest depth in the oceans, near in depth. It was about an inch long visually, but the probe failed to capture it, so it could not be studied in detail. The Bobbit worm (Eunice aphroditois) is a predatory species that can achieve a length of ), with an average diameter of . Dimorphilus gyrociliatus has the smallest known genome of any annelid. The species shows extreme sexual dimorphism. Females measure ~1 mm long and have simplified bodies containing six segments, a reduced coelom, and no appendages, parapodia, or chaetae. The males are only 50 μm long and consist of just a few hundred cells. They lack a digestive system and have just 68 neurons, and only live for roughly a week. Reproduction Most polychaetes have separate sexes, rather than being hermaphroditic. The most primitive species have a pair of gonads in every segment, but most species exhibit some degree of specialisation. The gonads shed immature gametes directly into the body cavity, where they complete their development. Once mature, the gametes are shed into the surrounding water through ducts or openings that vary between species, or in some cases by the complete rupture of the body wall (and subsequent death of the adult). A few species copulate, but most fertilize their eggs externally. The fertilized eggs typically hatch into trochophore larvae, which float among the plankton, and eventually metamorphose into the adult form by adding segments. A few species have no larval form, with the egg hatching into a form resembling the adult, and in many that do have larvae, the trochophore never feeds, surviving off the yolk that remains from the egg. However, some polychaetes exhibit remarkable reproductive strategies. Some species reproduce by epitoky. For much of the year, these worms look like any other burrow-dwelling polychaete, but as the breeding season approaches, the worm undergoes a remarkable transformation as new, specialized segments begin to grow from its rear end until the worm can be clearly divided into two halves. The front half, the atoke, is asexual. The new rear half, responsible for breeding, is known as the epitoke. Each of the epitoke segments is packed with eggs and sperm and features a single eyespot on its surface. The beginning of the last lunar quarter is the cue for these animals to breed, and the epitokes break free from the atokes and float to the surface. The eye spots sense when the epitoke reaches the surface and the segments from millions of worms burst, releasing their eggs and sperm into the water. 
A similar strategy is employed by the deep sea worm Syllis ramosa, which lives inside a sponge. The rear ends of the worm develop into "stolons" containing the eggs or sperm; these stolons then become detached from the parent worm and rise to the sea surface, where fertilisation takes place. Fossil record Stem-group polychaete fossils are known from the Sirius Passet Lagerstätte, a rich, sedimentary deposit in Greenland tentatively dated to the late Atdabanian (early Cambrian). The oldest found is Phragmochaeta canicularis. Many of the more famous Burgess Shale organisms, such as Canadia, may also have polychaete affinities. Wiwaxia, long interpreted as an annelid, is now considered to represent a mollusc. An even older fossil, Cloudina, dates to the terminal Ediacaran period; this has been interpreted as an early polychaete, although consensus is absent. Being soft-bodied organisms, the fossil record of polychaetes is dominated by their fossilized jaws, known as scolecodonts, and the mineralized tubes that some of them secrete. Most important biomineralising polychaetes are serpulids, sabellids, and cirratulids. Polychaete cuticle does have some preservation potential; it tends to survive for at least 30 days after a polychaete's death. Although biomineralisation is usually necessary to preserve soft tissue after this time, the presence of polychaete muscle in the nonmineralised Burgess shale shows this need not always be the case. Their preservation potential is similar to that of jellyfish. Taxonomy and systematics Taxonomically, polychaetes are thought to be paraphyletic, meaning the group excludes some descendants of its most recent common ancestor. Groups that may be descended from the polychaetes include the clitellates (earthworms and leeches), sipunculans, and echiurans. The Pogonophora and Vestimentifera were once considered separate phyla, but are now classified in the polychaete family Siboglinidae. Much of the classification below matches Rouse & Fauchald, 1998, although that paper does not apply ranks above family. Older classifications recognize many more (sub)orders than the layout presented here. As comparatively few polychaete taxa have been subject to cladistic analysis, some groups which are usually considered invalid today may eventually be reinstated. These divisions were shown to be mostly paraphyletic in recent years. 
Basal or incertae sedis Family Diurodrilidae Family Histriobdellidae Family Nerillidae Family Parergodrilidae Family Potamodrilidae Family Psammodrilidae Family Spintheridae Family Protodriloididae Family Saccocirridae Order Haplodrili Order Myzostomida Family Endomyzostomatidae Family Asteromyzostomatidae Family Myzostomatidae Subclass Palpata Family Protodrilidae Family Polygordiidae Subclass Aciculata Family Levidoridae Order Amphinomida Family Amphinomidae Family Euphrosinidae Order Eunicida Family Dorvilleidae Family Eunicidae Family Hartmaniellidae Family Ichthyotomidae Family Lumbrineridae Family Oenonidae Family Onuphidae Order Phyllodocida Suborder Aphroditiformia Family Acoetidae Family Aphroditidae Family Eulepethidae Family Iphionidae Family Pholoidae Family Polynoidae Family Sigalionidae Suborder Glyceriformia Family Glyceridae Family Goniadidae Family Lacydoniidae Family Paralacydoniidae Suborder Nereidiformia Family Antonbruunidae Family Chrysopetalidae Family Hesionidae Family Nereididae Family Pilargidae Family Syllidae Suborder Phyllodocida incertae sedis Family Iospilidae Family Nautiliniellidae Family Nephtyidae Family Typhloscolecidae Family Tomopteridae Suborder Phyllodociformia Family Alciopidae Family Lopadorrhynchidae Family Phyllodocidae Family Pontodoridae Subclass Sedentaria Family Chaetopteridae Infraclass Canalipalpata Order Sabellida Family Caobangidae Family Fabriciidae Family Oweniidae Family Sabellariidae Family Sabellidae Family Serpulidae Family Siboglinidae (formerly the phyla Pogonophora & Vestimentifera) Order Spionida Suborder Spioniformia Family Apistobranchidae Family Longosomatidae Family Magelonidae Family Poecilochaetidae Family Spionidae Family Trochochaetidae Family Uncispionidae Order Terebellida Suborder Cirratuliformia Family Acrocirridae (sometimes placed in Spionida) Family Cirratulidae (sometimes placed in Spionida) Family Ctenodrilidae (sometimes own suborder Ctenodrilida) Family Fauveliopsidae (sometimes own suborder Fauveliopsida) Family Flabelligeridae (sometimes suborder Flabelligerida) Family Flotidae (sometimes included in Flabelligeridae) Family Poeobiidae (sometimes own suborder Poeobiida or included in Flabelligerida) Family Sternaspidae (sometimes own suborder Sternaspida) Suborder Terebellomorpha Family Alvinellidae Family Ampharetidae Family Pectinariidae Family Terebellidae Family Trichobranchidae Infraclass Scolecida Family Arenicolidae Family Capitellidae Family Cossuridae Family Maldanidae Family Opheliidae Family Orbiniidae Family Paraonidae Family Scalibregmatidae Order Capitellida (nomen dubium) Order Cossurida (nomen dubium) Order Opheliida (nomen dubium) Order Orbiniida (nomen dubium) Order Questida (nomen dubium) Order Scolecidaformia (nomen dubium) Subclass Echiura Order Bonelliida Family Bonelliidae Family Ikedidae Order Echiurida Family Echiuridae Family Thalassematidae Family Urechidae
Priapulida
Priapulida (priapulid worms, from Gr. πριάπος, priāpos 'Priapus' + Lat. -ul-, diminutive), sometimes referred to as penis worms, is a phylum of unsegmented marine worms. The name of the phylum relates to the Greek god of fertility, because their general shape and their extensible spiny introvert (eversible proboscis) may resemble the shape of a human penis. They live in the mud and in comparatively shallow waters up to deep. Some species show a remarkable tolerance for hydrogen sulfide, anoxia and low salinity. Halicryptus spinulosus appears to prefer brackish shallow waters. They can be quite abundant in some areas. In an Alaskan bay, as many as 85 adult individuals of Priapulus caudatus per square meter have been recorded, while the density of its larvae can be as high as 58,000 per square meter (5,390 per square foot). Together with Echiura and Sipuncula, they were once placed in the taxon Gephyrea, but consistent morphological and molecular evidence supports their belonging to Ecdysozoa, which also includes arthropods and nematodes. Fossil findings show that the mouth design of the stem-arthropod Pambdelurion is identical to that of priapulids, indicating that their mouth is an original trait inherited from the last common ancestor of both priapulids and arthropods, even if modern arthropods no longer possess it. Among Ecdysozoa, their nearest relatives are Kinorhyncha and Loricifera, with which they constitute the Scalidophora clade, named after the spines covering the introvert (scalids). They feed on slow-moving invertebrates, such as polychaete worms. Some analyses suggest that Priapulida may represent a basal lineage within Ecdysozoa, leading to their classification as "living fossils". Priapulid-like fossils are known at least as far back as the Middle Cambrian. They were likely major predators of the Cambrian period. However, crown-group priapulids cannot be recognized until the Carboniferous. Twenty-two extant species of priapulid worms are known, half of them being of meiobenthic size. Anatomy Priapulids are cylindrical worm-like animals, ranging from 0.2–0.3 to 39 centimetres (0.08–0.12 to 15.35 in) long, with a median anterior mouth quite devoid of any armature or tentacles. The body is divided into a main trunk or abdomen and a somewhat swollen proboscis region ornamented with longitudinal ridges. The body is ringed and often has circles of spines, which are continued into the slightly protrusible pharynx. The family Priapulidae includes species with a tail or a pair of caudal appendages. A slender tail or tail filament is also found in the family Tubiluchidae. Appendages are absent in the remaining families. The body has a chitinous cuticle that is moulted as the animal grows. Members of the family Chaetostephanidae also secrete a gelatinous tube, open at both ends, in which they live. There is a wide body-cavity, which has no connection with the renal or reproductive organs, so it is not a coelom; it is probably a blood-space or hemocoel. There are no vascular or respiratory systems, but the body cavity does contain phagocytic amoebocytes and cells containing the respiratory pigment haemerythrin. The alimentary canal is straight, consisting of an eversible pharynx, an intestine, and a short rectum. The pharynx is muscular and lined with teeth. Three of the five extant families have gone through a significant miniaturization and become detritivores (Tubiluchidae and Meiopriapulidae) and filter feeders (Chaetostephanidae).
The two remaining families, Priapulidae and Halicryptidae, are larger carnivores that feed on other animals, although some species also consume detritus as larvae. The shape of the teeth reflects these different lifestyles, and seems to be adapted mainly towards grasping prey or raking detritus from the sediment into the mouth. The anus is terminal, although in Priapulus one or two hollow ventral diverticula of the body-wall stretch out behind it. The nervous system consists of a nerve ring around the pharynx and a prominent cord running the length of the body, with ganglia and longitudinal and transversal neurites consistent with an orthogonal organisation. The nervous system retains a basiepidermal configuration with a connection with the ectoderm, forming part of the body wall. There are no specialized sense organs, but there are sensory nerve endings in the body, especially on the proboscis. The priapulids are gonochoristic, having two separate sexes (i.e. male and female). Their male and female organs are closely associated with the excretory protonephridia. They comprise a pair of branching tufts, each of which opens to the exterior on one side of the anus. The tips of these tufts enclose a flame cell like those found in flatworms and other animals, and these probably function as excretory organs. As the animals mature, diverticula arise on the tubes of these organs, which develop either spermatozoa or ova. These sex cells pass out through the ducts. The perigenital area of the genus Tubiluchus exhibits sexual dimorphism. Reproduction and development For the species Priapulus caudatus, the 80 μm egg undergoes a total and radial cleavage following a symmetrical and subequal pattern. Development is remarkably slow, with the first cleavage taking place 15 hours after fertilization, gastrulation after several days and hatching of the first 'lorica' larvae after 15 to 20 days. The species Meiopriapulus fijiensis has direct development. In current systematics, they are described as protostomes, despite having a deuterostomic development. Because the group is so ancient, it is assumed that the deuterostome condition, which appears to be ancestral for bilaterians, has been maintained. Fossil record Stem-group priapulids are known from the Middle Cambrian Burgess Shale, where their soft-part anatomy is preserved, often in conjunction with their gut contents – allowing a reconstruction of their diets. In addition, isolated microfossils (corresponding to the various teeth and spines that line the pharynx and introvert) are widespread in Cambrian deposits, allowing the distribution of priapulids – and even individual species – to be tracked widely through Cambrian oceans. Trace fossils that are morphologically almost identical to modern priapulid burrows (Treptichnus pedum) officially mark the start of the Cambrian period, suggesting that priapulids, or at least close anatomical relatives, evolved around this time. Crown-group priapulid body fossils are first known from the Carboniferous. Phylogeny External phylogeny Internal phylogeny Classification There are 22 known extant species: Phylum Priapulida Théel 1906 Order Halicryptomorpha Salvini-Plawen 1974 [Adrianov & Malakhov 1995; Salvini-Plawen 1974; Eupriapulida Lemburg, 1999] Family Halicryptidae Salvini-Plawen 1974 Genus Halicryptus Species H. higginsi (Shirley & Storch, 1999) Species H. spinulosus (von Siebold, 1849) Order Meiopriapulomorpha Family Meiopriapulidae Genus Meiopriapulus
Species M. fijiensis (Morse, 1981) Order Priapulomorpha Adrianov & Malakhov 1995 (assigned to its own order) Family Priapulidae Gosse 1855 [Xiaoheiqingidae (sic) Hu 2002] Genus Acanthopriapulus Species A. horridus (Théel, 1911) Genus Priapulopsis Species P. australis (de Guerne, 1886) Species P. bicaudatus (Danielssen, 1869) Species P. cnidephorus (Salvini-Plawen, 1973) Genus Priapulus Species P. abyssorum (Menzies, 1959) Species P. caudatus (Lamarck, 1816) Species P. tuberculatospinosus (Baird, 1868) Family Tubiluchidae van der Land 1970 [Meiopriapulidae Adrianov & Malakhov 1995] Genus Tubiluchus Species T. arcticus (Adrianov, Malakhov, Tchesunov & Tzetlin, 1989) Species T. australensis (van der Land, 1985) Species T. corallicola (van der Land, 1968) Species T. lemburgi (Schmidt-Rhaesa, Rothe & Martínez, 2013) Species T. pardosi (Schmidt-Rhaesa, Panpeng & Yamasaki, 2017) Species T. philippinensis (van der Land, 1985) Species T. remanei (van der Land, 1982) Species T. soyoae (Schmidt-Rhaesa, Panpeng & Yamasaki, 2017) Species T. troglodytes (Todaro & Shirley, 2003) Species T. vanuatensis (Adrianov & Malakhov, 1991) Order Seticoronaria Family Chaetostephanidae Por & Bromley 1974 [Chaetostephanidae Salvini-Plawen 1974] Genus Maccabeus Species M. cirratus (Malakhov, 1979) Species M. tentaculatus (Por, 1973) Extinct groups Stem-group †Scalidophora Order †Ancalagonida Adrianov & Malakhov 1995 [Fieldiida Adrianov & Malakhov 1995] Family †Ancalagonidae Conway Morris 1977 Genus †Ancalagon Conway Morris 1977 Family †Fieldiidae Conway Morris 1977 Genus †Fieldia Walcott 1912 Stem-group †Palaeoscolecida Family †Selkirkiidae Conway Morris 1977 Genus †Selkirkia Walcott 1911 non Hemsley 1884 Order †Ottoiomorpha Adrianov & Malakhov 1995 Genus †Scolecofurca Conway Morris 1977 Family †Ottoiidae Walcott 1911 Genus †Ottoia Walcott 1911 Family †Corynetidae Huang, Vannier & Chen 2004 Genus †Corynetis Luo & Hu 1999 [Anningvermis Huang, Vannier & Chen 2004] Family †Miskoiidae Walcott 1911 Genus †Miskoia Walcott 1911 Genus †Louisella Conway Morris 1977
Biology and health sciences
Ecdysozoa
Animals
43217
https://en.wikipedia.org/wiki/Vetulicolia
Vetulicolia
Vetulicolia is a group of bilaterian marine animals encompassing several extinct species from the Cambrian, and possibly Ediacaran, periods. As of 2023, the majority of workers favor placing vetulicolians in the stem group of the Chordata, but some continue to favor a more crownward placement as a sister group to the Tunicata. It was initially erected as a monophyletic clade with the rank of phylum in 2001, with subsequent work supporting its monophyly. However, more recent research suggests that vetulicolians may be paraphyletic and form a basal evolutionary grade of stem chordates. Etymology The taxon name, Vetulicolia, is derived from the type genus, Vetulicola, which is a compound Latin word composed of vetuli "old" and cola "inhabitant". It was named after Vetulicola cuneata, the first species of the group described in 1987. Description The vetulicolian body plan comprises two parts: a voluminous rostral (anterior) forebody, tipped with an anteriorly positioned mouth and lined with a lateral row of five round to oval-shaped openings on each side, which have been interpreted as gills (or at least orifices in the vicinity of the pharynx); and a caudal (posterior) section that primitively comprises seven body segments and functions as a tail. All vetulicolians lack preserved appendages of any kind, having no legs, feelers or even eye spots. The area where the anterior and posterior parts join is constricted in most genera. Notochord-like structures have been found in some vetulicolian fossils. Ecology and lifestyle From their superficially tadpole-like forms, leaf or paddle-shaped tails, and various degrees of streamlining, it is assumed that all vetulicolians discovered to date were swimming animals that spent much, if not all, of their time in the water column. Some groups, like the genus Vetulicola, were more streamlined (complete with ventral keels) than other groups, such as the tadpole-like Didazoonidae. Because all vetulicolians had mouths which had no features for chewing or grasping, it is assumed that they were not predators. Since vetulicolians possessed gill slits, many researchers regard these organisms as planktivores. The sediment infills in the guts of their fossils have caused some to suggest that they were deposit feeders. This idea has been contested, as deposit feeders tend to have straight guts, whereas the hindguts of vetulicolians were spiral-shaped. Some researchers propose that the vetulicolians were "selective deposit-feeders" which actively swam from one region of the seafloor to another, while supplementing their nutrition with filter-feeding. The earliest vetulicolians appear to have been living in shallow water, with the first deeper-water specimens appearing in the Balang Biota and some possibly in the Qingjiang Biota. Taxonomy and evolution The phylum Vetulicolia was erected in 2001 to group the genera Vetulicola, Didazoon, and Xidazoon (later deemed a junior synonym of Pomatrum). Prior to this the class Vetulicolida had been defined in 1997 to group Vetulicola with the previously enigmatic genus Banffia due to its similar two-part construction, as well as apparent gill slits in a newly discovered specimen. Further work split Banffia into a separate class called Banffozoa, which was soon expanded to encompass similar species such as Heteromorphus.
While subsequent studies supported the monophyly of Vetulicolia, it has also been noted that this would preclude vetulicolians representing a stepwise development of deuterostome characteristics, as the genus with the most such characteristics, Vetulicola, is one of the most derived in the group. A 2024 phylogenetic analysis by Mussini and colleagues found vetulicolians to be a paraphyletic group of stem-chordates, lying outside a clade formed by Yunnanozoon, Cathaymyrus, Pikaia and crown-chordates. This is in part due to the Cambroernida, which are basal stem-ambulacrarians, being discovered to share characteristics such as a terminal anus with vetulicolians, despite such characteristics previously being believed to be present in the last common ancestor of deuterostomes. However, ascidian larvae have been noted to have endoderm extending to the terminal end, which could suggest that the ancestral tunicate also had a terminal anus. Other possible placements are suggested by the Centroneuralia hypothesis, which features a paraphyletic Deuterostomia with chordates as the sister-group to protostomes. If proven true, pharyngeal slits would no longer require a deuterostome placement and vetulicolians could prove to be stem protostomes that lost the post-anal tail. In such a scenario, Banffozoa could be a more derived stem protostome group than Vetulicolida. Cladograms The following cladograms show two possible placements of the Vetulicolia. First, on the left, a monophyletic Vetulicolia is shown as the sister group to Tunicata, but with all internal relationships unresolved. Next, on the right, the two proposed classes are shown as the earlier (Banffozoa) and later (Vetulicolida) parts of the vetulicolian grade. Within the Vetulicolida, the family Vetulicolidae as defined by Li et al. (2018) is recovered as monophyletic, while the three widely accepted members of the Didazoonidae are in a polytomy with the clade of crownward chordates. Classification The following classification is taken from Li et al. (2018) except where noted. Phylum Vetulicolia ? Genus Alienum A. velamenus Genus Shenzianyuloma S. yunnanense Class Heteromorphida (= Banffozoa ) "Form A" Order Banffiata Family Banffiidae Genus Banffia B. constricta B. episoma Genus Heteromorphus H. confusus (= Banffia confusa) ; = H. longicaudatus ) Genus Skeemella S. clavula Class Vetulicolida Genus Nesonektris N. aldridgei Order Vetulicolata Family Vetulicolidae Genus Vetulicola V. cuneata V. rectangulata V. gantoucunensis V. monile V. longbaoshanensis Genus Ooedigera O. peeli Genus Beidazoon (= Bullivetula ) B. venustum (= B. variola ) Family Didazoonidae Genus Didazoon D. haoae Genus Pomatrum (= Xidazoon ) P. ventralis (= X. stephanus ) Genus Yuyuanozoon Y. magnificissimi History of identification The current consensus view is that vetulicolians are stem group chordates, although some researchers continue to raise other possibilities. The possible identification of an endostyle bolstered theories of a tunicate affinity, but was later retracted, while the tentative identification of a notochord in Nesonektris and Vetulicola has further supported overall chordate affinities. Other characters that have been used to support a tunicate affinity include the limiting of the notochord to the tail and the presence of a stiff cuticle (tunic). Recent research has strengthened the arguments for placing vetulicolians in the chordate stem lineage rather than near the tunicates. 
Like vetulicolians, members of the basal ambulacrarian clade Cambroernida have a terminal anus rather than a post-anal tail. Since Ambulacraria is the sister-group of the chordates within the deuterostomes, this suggests that the last common ancestor of both groups lacked a post-anal tail. However, ascidian larvae have been noted to have endoderm extending to the terminal end, which could suggest that tunicates also lacked post-anal tails ancestrally. Some workers have questioned the inclusion of Banffozoa within this group due to their lack of gill slits and apparent gut diverticula, and have theorized that they may fit within Protostomia instead. Skeemella, in particular, has been noted as having striking arthropod-like characteristics. However, Herpetogaster, the most basal cambroernid, is thought to have non-serialized pores for pharyngeal openings. If banffozoans are the most basal vetulicolians, this could explain why they also lack serialized pharyngeal structures. Additionally, a comprehensive review of the Vetulicolia in 2007 did not find evidence of gut diverticula in their material while acknowledging the previous report regarding Banffia. Shenzianyuloma has been interpreted as a vetulicolian with both a notochord (a definitively deuterostome trait) and gut diverticula. However, this fossil is of unusual provenance (a "crystal and fossil vendor"), and has not yet been examined by other researchers. Vetulicolians were thought to be stem arthropods when Vetulicola was first discovered, but around 2001 the focus of most theories shifted towards stem deuterostomes due to the discovery of pharyngeal gill slits (a deuterostome characteristic), as well as the mounting evidence that vetulicolians have no appendages of any kind. A theory grouping both vetulicolians and vetulocystids with Saccorhytus was disproven when the alleged pharyngeal openings of Saccorhytus were shown to be remnants of spines that had broken off; the saccorhytids are now considered to be ecdysozoans.
Biology and health sciences
Prehistoric agnathae and early chordates
Animals
43325
https://en.wikipedia.org/wiki/Probability%20space
Probability space
In probability theory, a probability space or a probability triple is a mathematical construct that provides a formal model of a random process or "experiment". For example, one can define a probability space which models the throwing of a die. A probability space consists of three elements: A sample space, Ω, which is the set of all possible outcomes. An event space, which is a set of events, F, an event being a set of outcomes in the sample space. A probability function, P, which assigns, to each event in the event space, a probability, which is a number between 0 and 1 (inclusive). In order to provide a model of probability, these elements must satisfy probability axioms. In the example of the throw of a standard die, the sample space is typically the set {1, 2, 3, 4, 5, 6}, where each element in the set is a label which represents the outcome of the die landing on that label. For example, 1 represents the outcome that the die lands on 1. The event space could be the set of all subsets of the sample space, which would then contain simple events such as {5} ("the die lands on 5"), as well as complex events such as {2, 4, 6} ("the die lands on an even number"). The probability function would then map each event to the number of outcomes in that event divided by 6 – so for example, {5} would be mapped to 1/6, and {2, 4, 6} would be mapped to 3/6 = 1/2. When an experiment is conducted, it results in exactly one outcome from the sample space Ω. All the events in the event space that contain the selected outcome are said to "have occurred". The probability function must be so defined that if the experiment were repeated arbitrarily many times, the number of occurrences of each event as a fraction of the total number of experiments will most likely tend towards the probability assigned to that event. The Soviet mathematician Andrey Kolmogorov introduced the notion of a probability space and the axioms of probability in the 1930s. In modern probability theory, there are alternative approaches for axiomatization, such as the algebra of random variables. Introduction A probability space is a mathematical triplet (Ω, F, P) that presents a model for a particular class of real-world situations. As with other models, its author ultimately defines which elements Ω, F, and P will contain. The sample space Ω is the set of all possible outcomes. An outcome is the result of a single execution of the model. Outcomes may be states of nature, possibilities, experimental results and the like. Every instance of the real-world situation (or run of the experiment) must produce exactly one outcome. If outcomes of different runs of an experiment differ in any way that matters, they are distinct outcomes. Which differences matter depends on the kind of analysis we want to do. This leads to different choices of sample space. The σ-algebra F is a collection of all the events we would like to consider. This collection may or may not include each of the elementary events. Here, an "event" is a set of zero or more outcomes; that is, a subset of the sample space. An event is considered to have "happened" during an experiment when the outcome of the latter is an element of the event. Since the same outcome may be a member of many events, it is possible for many events to have happened given a single outcome. For example, when the trial consists of throwing two dice, the set of all outcomes with a sum of 7 pips may constitute an event, whereas outcomes with an odd number of pips may constitute another event.
If the outcome is the element of the elementary event {(2, 5)}, of two pips on the first die and five on the second, then both of the events, "7 pips" and "odd number of pips", are said to have happened. The probability measure is a set function returning an event's probability. A probability is a real number between zero (impossible events have probability zero, though probability-zero events are not necessarily impossible) and one (the event happens almost surely, with almost total certainty). Thus P is a function P : F → [0, 1]. The probability measure function must satisfy two simple requirements: First, the probability of a countable union of mutually exclusive events must be equal to the countable sum of the probabilities of each of these events. For example, the probability of the union of the mutually exclusive events {heads} and {tails} in the random experiment of one coin toss, P({heads} ∪ {tails}), is the sum of the probability for {heads} and the probability for {tails}, P({heads}) + P({tails}). Second, the probability of the sample space must be equal to 1 (which accounts for the fact that, given an execution of the model, some outcome must occur). In the previous example the probability of the set of outcomes P({heads, tails}) must be equal to one, because it is entirely certain that the outcome will be either heads or tails (the model neglects any other possibility) in a single coin toss. Not every subset of the sample space must necessarily be considered an event: some of the subsets are simply not of interest, others cannot be "measured". This is not so obvious in a case like a coin toss. In a different example, one could consider javelin throw lengths, where the events typically are intervals like "between 60 and 65 meters" and unions of such intervals, but not sets like the "irrational numbers between 60 and 65 meters". Definition In short, a probability space is a measure space such that the measure of the whole space is equal to one. The expanded definition is the following: a probability space is a triple (Ω, F, P) consisting of: the sample space Ω – an arbitrary non-empty set, the σ-algebra F (also called σ-field) – a set of subsets of Ω, called events, such that: F contains the sample space: Ω ∈ F, F is closed under complements: if A ∈ F, then also (Ω \ A) ∈ F, F is closed under countable unions: if A_i ∈ F for i = 1, 2, ..., then also (A_1 ∪ A_2 ∪ ...) ∈ F. The corollary from the previous two properties and De Morgan's law is that F is also closed under countable intersections: if A_i ∈ F for i = 1, 2, ..., then also (A_1 ∩ A_2 ∩ ...) ∈ F. The probability measure P – a function P : F → [0, 1] such that: P is countably additive (also called σ-additive): if A_1, A_2, ... is a countable collection of pairwise disjoint sets in F, then P(A_1 ∪ A_2 ∪ ...) = P(A_1) + P(A_2) + ...; the measure of the entire sample space is equal to one: P(Ω) = 1. Discrete case Discrete probability theory needs only at most countable sample spaces Ω. Probabilities can be ascribed to points of Ω by the probability mass function p : Ω → [0, 1] such that the values p(ω) sum to 1 over all ω in Ω. All subsets of Ω can be treated as events (thus, F = 2^Ω is the power set). The probability measure takes the simple form P(A) = Σ_{ω ∈ A} p(ω) for every event A. The greatest σ-algebra F = 2^Ω describes the complete information. In general, a σ-algebra F ⊆ 2^Ω corresponds to a finite or countable partition Ω = B_1 ∪ B_2 ∪ ..., the general form of an event A ∈ F being A = B_{k_1} ∪ B_{k_2} ∪ ....
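As an illustrative sketch (added here for context; it is not part of the original article, and all names below are ours), the following Python snippet builds the standard-die space described above as a discrete probability space: the sample space {1, ..., 6}, the power set as the σ-algebra, and the counting measure P(A) = |A|/6. It then checks the two requirements stated in the definition on a pair of disjoint events.

from fractions import Fraction
from itertools import chain, combinations

# Sample space for a standard die: each outcome is a label 1..6.
omega = frozenset({1, 2, 3, 4, 5, 6})

def power_set(s):
    """All subsets of s, used here as the event space F (the greatest sigma-algebra)."""
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))]

F = power_set(omega)  # 2**6 = 64 events

def P(event):
    """Counting measure: the number of outcomes in the event divided by 6."""
    return Fraction(len(event), len(omega))

assert P(omega) == 1                       # the whole sample space has probability one
even, odd = frozenset({2, 4, 6}), frozenset({1, 3, 5})
assert P(even | odd) == P(even) + P(odd)   # additivity for disjoint events
print(len(F), P(frozenset({5})), P(even))  # 64 1/6 1/2

The same pattern extends to any probability mass function p: replace the counting measure with P(A) = Σ_{ω ∈ A} p(ω), and the normalisation and additivity checks go through unchanged.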
Mathematics
Probability
null
43377
https://en.wikipedia.org/wiki/Keratin
Keratin
Keratin is one of a family of structural fibrous proteins also known as scleroproteins. Alpha-keratin (α-keratin) is a type of keratin found in vertebrates. It is the key structural material making up scales, hair, nails, feathers, horns, claws, hooves, and the outer layer of skin among vertebrates. Keratin also protects epithelial cells from damage or stress. Keratin is extremely insoluble in water and organic solvents. Keratin monomers assemble into bundles to form intermediate filaments, which are tough and form strong unmineralized epidermal appendages found in reptiles, birds, amphibians, and mammals. Excessive keratinization participates in the fortification of certain tissues such as the horns of cattle and rhinos, and armadillos' osteoderms. The only other biological matter known to approximate the toughness of keratinized tissue is chitin. Keratin comes in two types, the primitive, softer forms found in all vertebrates and harder, derived forms found only among sauropsids (reptiles and birds). Spider silk is classified as keratin, although production of the protein may have evolved independently of the process in vertebrates. Examples of occurrence Alpha-keratins (α-keratins) are found in all vertebrates. They form the hair (including wool), the outer layer of skin, horns, nails, claws and hooves of mammals, and the slime threads of hagfish. The baleen plates of filter-feeding whales are also made of keratin. Keratin filaments are abundant in keratinocytes in the hornified layer of the epidermis; these are proteins which have undergone keratinization. They are also present in epithelial cells in general. For example, mouse thymic epithelial cells react with antibodies for keratin 5, keratin 8, and keratin 14. These antibodies are used as fluorescent markers to distinguish subsets of mouse thymic epithelial cells in genetic studies of the thymus. The harder beta-keratins (β-keratins) are found only in the sauropsids, that is all living reptiles and birds. They are found in the nails, scales, and claws of reptiles, in some reptile shells (Testudines, such as tortoise, turtle, terrapin), and in the feathers, beaks, and claws of birds. These keratins are formed primarily in beta sheets. However, beta sheets are also found in α-keratins. Recent scholarship has shown that sauropsid β-keratins are fundamentally different from α-keratins at a genetic and structural level. The new term corneous beta protein (CBP) has been proposed to avoid confusion with α-keratins. Keratins (also described as cytokeratins) are polymers of type I and type II intermediate filaments that have been found only in chordates (vertebrates, amphioxi, urochordates). Nematodes and many other non-chordate animals seem to have only type VI intermediate filaments, fibers that structure the nucleus. Genes The human genome encodes 54 functional keratin genes, located in two clusters on chromosomes 12 and 17. This suggests that they originated from a series of gene duplications on these chromosomes. The keratins include the following proteins, of which KRT23, KRT24, KRT25, KRT26, KRT27, KRT28, KRT31, KRT32, KRT33A, KRT33B, KRT34, KRT35, KRT36, KRT37, KRT38, KRT39, KRT40, KRT71, KRT72, KRT73, KRT74, KRT75, KRT76, KRT77, KRT78, KRT79, KRT8, KRT80, KRT81, KRT82, KRT83, KRT84, KRT85 and KRT86 have been used to describe keratins past 20. Protein structure The first sequences of keratins were determined by Israel Hanukoglu and Elaine Fuchs (1982, 1983).
These sequences revealed that there are two distinct but homologous keratin families, which were named type I and type II keratins. By analysis of the primary structures of these keratins and other intermediate filament proteins, Hanukoglu and Fuchs suggested a model in which keratins and intermediate filament proteins contain a central ~310 residue domain with four segments in α-helical conformation that are separated by three short linker segments predicted to be in beta-turn conformation. This model has been confirmed by the determination of the crystal structure of a helical domain of keratins. Type 1 and 2 Keratins The human genome has 54 functional annotated keratin genes, of which 28 are in the keratin type 1 family and 26 are in the keratin type 2 family. Fibrous keratin molecules supercoil to form a very stable, left-handed superhelical motif to multimerise, forming filaments consisting of multiple copies of the keratin monomer. The major force that keeps the coiled-coil structure is hydrophobic interactions between apolar residues along the keratins' helical segments. Limited interior space is the reason why the triple helix of the (unrelated) structural protein collagen, found in skin, cartilage and bone, likewise has a high percentage of glycine. The connective tissue protein elastin also has a high percentage of both glycine and alanine. Silk fibroin, considered a β-keratin, can have these two as 75–80% of the total, with 10–15% serine, with the rest having bulky side groups. The chains are antiparallel, with an alternating C → N orientation. A preponderance of amino acids with small, nonreactive side groups is characteristic of structural proteins, for which H-bonded close packing is more important than chemical specificity. Disulfide bridges In addition to intra- and intermolecular hydrogen bonds, the distinguishing feature of keratins is the presence of large amounts of the sulfur-containing amino acid cysteine, required for the disulfide bridges that confer additional strength and rigidity by permanent, thermally stable crosslinking—in much the same way that non-protein sulfur bridges stabilize vulcanized rubber. Human hair is approximately 14% cysteine. The pungent smells of burning hair and skin are due to the volatile sulfur compounds formed. Extensive disulfide bonding contributes to the insolubility of keratins, except in a small number of solvents such as dissociating or reducing agents. The more flexible and elastic keratins of hair have fewer interchain disulfide bridges than the keratins in mammalian fingernails, hooves and claws (homologous structures), which are harder and more like their analogs in other vertebrate classes. Hair and other α-keratins consist of α-helically coiled single protein strands (with regular intra-chain H-bonding), which are then further twisted into superhelical ropes that may be further coiled. The β-keratins of reptiles and birds have β-pleated sheets twisted together, then stabilized and hardened by disulfide bridges. Thiolated polymers (=thiomers) can form disulfide bridges with cysteine substructures of keratins, becoming covalently attached to these proteins. Thiomers therefore exhibit high binding properties to keratins found in hair, on skin and on the surface of many cell types. Filament formation It has been proposed that keratins can be divided into 'hard' and 'soft' forms, or 'cytokeratins' and 'other keratins'. That model is now understood to be correct. A new nomenclature introduced in 2006 to describe keratins takes this into account.
Keratin filaments are intermediate filaments. Like all intermediate filaments, keratin proteins form filamentous polymers in a series of assembly steps beginning with dimerization; dimers assemble into tetramers and octamers and eventually, if the current hypothesis holds, into unit-length-filaments (ULF) capable of annealing end-to-end into long filaments. Pairing Cornification Cornification is the process of forming an epidermal barrier in stratified squamous epithelial tissue. At the cellular level, cornification is characterised by: production of keratin; production of small proline-rich (SPRR) proteins and transglutaminase, which eventually form a cornified cell envelope beneath the plasma membrane; terminal differentiation; and loss of nuclei and organelles in the final stages of cornification. Metabolism ceases, and the cells are almost completely filled by keratin. During the process of epithelial differentiation, cells become cornified as keratin protein is incorporated into longer keratin intermediate filaments. Eventually the nucleus and cytoplasmic organelles disappear, metabolism ceases and cells undergo a programmed death as they become fully keratinized. In many other cell types, such as cells of the dermis, keratin filaments and other intermediate filaments function as part of the cytoskeleton to mechanically stabilize the cell against physical stress. They do this through connections to desmosomes, cell–cell junctional plaques, and hemidesmosomes, cell-basement membrane adhesive structures. Cells in the epidermis contain a structural matrix of keratin, which makes this outermost layer of the skin almost waterproof, and along with collagen and elastin gives skin its strength. Rubbing and pressure cause thickening of the outer, cornified layer of the epidermis and form protective calluses, which are useful for athletes and on the fingertips of musicians who play stringed instruments. Keratinized epidermal cells are constantly shed and replaced. These hard, integumentary structures are formed by intercellular cementing of fibers formed from the dead, cornified cells generated by specialized beds deep within the skin. Hair grows continuously and feathers molt and regenerate. The constituent proteins may be phylogenetically homologous but differ somewhat in chemical structure and supermolecular organization. The evolutionary relationships are complex and only partially known. Multiple genes have been identified for the β-keratins in feathers, and this is probably characteristic of all keratins. Silk The silk fibroins produced by insects and spiders are often classified as keratins, though it is unclear whether they are phylogenetically related to vertebrate keratins. Silk found in insect pupae, and in spider webs and egg casings, also has twisted β-pleated sheets incorporated into fibers wound into larger supermolecular aggregates. The structure of the spinnerets on spiders' tails, and the contributions of their interior glands, provide remarkable control of fast extrusion. Spider silk is typically about 1 to 2 micrometers (μm) thick, compared with about 60 μm for human hair, and more for some mammals. The biologically and commercially useful properties of silk fibers depend on the organization of multiple adjacent protein chains into hard, crystalline regions of varying size, alternating with flexible, amorphous regions where the chains are randomly coiled. A somewhat analogous situation occurs with synthetic polymers such as nylon, developed as a silk substitute.
Silk from the hornet cocoon contains doublets about 10 μm across, with cores and coating, and may be arranged in up to 10 layers, also in plaques of variable shape. Adult hornets also use silk as a glue, as do spiders. Clinical significance Abnormal growth of keratin can occur in a variety of conditions including keratosis, hyperkeratosis and keratoderma. Mutations in keratin gene expression can lead to, among others: Alopecia areata Epidermolysis bullosa simplex Ichthyosis bullosa of Siemens Epidermolytic hyperkeratosis Steatocystoma multiplex Keratosis pharyngis Rhabdoid cell formation in large cell lung carcinoma with rhabdoid phenotype Several diseases, such as athlete's foot and ringworm, are caused by infectious fungi that feed on keratin. Keratin is highly resistant to digestive acids if ingested. Cats regularly ingest hair as part of their grooming behavior, leading to the gradual formation of hairballs that may be expelled orally or excreted. In humans, trichophagia may lead to Rapunzel syndrome, an extremely rare but potentially fatal intestinal condition. Diagnostic use Keratin expression is helpful in determining epithelial origin in anaplastic cancers. Tumors that express keratin include carcinomas, thymomas, sarcomas and trophoblastic neoplasms. Furthermore, the precise expression-pattern of keratin subtypes allows prediction of the origin of the primary tumor when assessing metastases. For example, hepatocellular carcinomas typically express CK8 and CK18, and cholangiocarcinomas express CK7, CK8 and CK18, while metastases of colorectal carcinomas express CK20, but not CK7.
Biology and health sciences
Proteins
Biology
43379
https://en.wikipedia.org/wiki/Ballista
Ballista
The ballista (Latin, from Greek βαλλίστρα ballistra and that from βάλλω ballō, "throw"), plural ballistae or ballistas, sometimes called bolt thrower, was an ancient missile weapon that launched either bolts or stones at a distant target. Developed from earlier Greek weapons, it relied upon different mechanics, using two levers with torsion springs instead of a tension prod (the bow part of a modern crossbow). The springs consisted of several loops of twisted skeins. Early versions projected heavy darts or spherical stone projectiles of various sizes for siege warfare. It developed into a smaller precision weapon, the scorpio, and possibly the polybolos. Greek weapon The early ballistae in Ancient Greece were developed from two weapons called oxybeles and gastraphetes. The gastraphetes ('belly-bow') was a handheld crossbow. It had a composite prod and was spanned by bracing the front end of the weapon against the ground while placing the end of a slider mechanism against the stomach. The operator would then walk forward to arm the weapon while a ratchet prevented it from shooting during loading. This produced a weapon that, it was claimed, could be operated by a person of average strength but which had a power that allowed it to be successfully used against armored troops. The oxybeles were a bigger and heavier construction employing a winch and were mounted on a tripod. It had a lower rate of fire and was used as a siege engine. With the invention of the torsion spring bundle, the first ballistae could now be built. The advantage of this new technology was the fast relaxation time of this system. Thus it was possible to shoot lighter projectiles with higher velocities over a longer distance. By contrast, the comparatively slow relaxation time of the bow or prod of a conventional crossbow such as the oxybeles meant that much less energy could be transferred to light projectiles, limiting the effective range of the weapon. The earliest form of the ballista is thought to have been developed for Dionysius of Syracuse, 400 BC. The Greek ballista was a siege weapon. All components that were not made of wood were transported in the baggage train. It would be assembled with local wood, if necessary. Some were positioned inside large, armored, mobile siege towers or even on the edge of a battlefield. For all of the tactical advantages offered, it was only under Philip II of Macedon, and even more so under his son Alexander, that the ballista began to develop and gain recognition as both a siege engine and field artillery. Historical accounts, for instance, cited that Philip II employed a group of engineers within his army to design and build catapults for his military campaigns. There is even a claim that it was Philip II with his team of engineers who invented the ballista after improving Dionysius's device, which was merely an oversized slingshot. It was further perfected by Alexander, whose own team of engineers introduced innovations such as the idea of using springs made from tightly strung coils of rope instead of a bow to achieve more energy and power when throwing projectiles. Polybius reported about the usage of smaller, more portable ballistae, called scorpions, during the Second Punic War. Ballistae could be easily modified to shoot both spherical and shaft projectiles, allowing their crews to adapt quickly to prevailing battlefield situations in real time. 
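To illustrate the point above about torsion springs and projectile mass (an illustrative aside, not from the original article; the figures and names below are hypothetical), the relation between delivered energy, projectile mass and launch speed is simply v = sqrt(2·E/m), provided the mechanism can actually release its energy fast enough for a light projectile to absorb it, which is exactly where a slow-relaxing bow falls short.

import math

def launch_speed(stored_energy_j, projectile_mass_kg, efficiency=0.5):
    """Launch speed when a fraction of the stored energy reaches the projectile: v = sqrt(2*E_eff/m)."""
    return math.sqrt(2 * efficiency * stored_energy_j / projectile_mass_kg)

# Hypothetical figures: 1000 J of stored energy delivered to bolts of different masses.
print(round(launch_speed(1000, 0.4), 1))  # 50.0 m/s for a 0.4 kg bolt
print(round(launch_speed(1000, 1.6), 1))  # 25.0 m/s for a 1.6 kg bolt

Other things being equal, quartering the projectile mass doubles the launch speed, which is why a fast-relaxing torsion system paired with light bolts could reach longer ranges.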
As the role of battlefield artillery became more sophisticated, a universal joint (which was invented just for this function) was integrated into the ballista's stand, allowing the operators to alter the trajectory and firing direction of the ballista as required without a lengthy disassembly of the machine. Roman weaponry After the absorption of the Ancient Greek city-states into the Roman Republic in 146 BC, the highly advanced Greek technology began to spread across many areas of Roman influence. This included the great military machine advances the Greeks had made (most notably by Dionysius of Syracuse), as well as all the scientific, mathematical, political and artistic developments. The Romans adopted the torsion-powered ballista, which had by now spread to several cities around the Mediterranean, all of which became Roman spoils of war, including one from Pergamon, which was depicted among a pile of trophy weapons in relief on a balustrade. The torsion ballista, developed by Alexander, was a far more complicated weapon than its predecessor and the Romans developed it even further, especially into much smaller versions, that could be easily carried. Early Roman ballistae The early Roman ballistae were made of wood, and held together with iron plates around the frames and iron nails in the stand. The main stand had a slider on the top, into which were loaded the bolts or stone shot. Attached to this, at the back, was a pair of 'winches' and a 'claw', used to ratchet the bowstring back to the armed firing position. The slider passed through the field frames of the weapon, in which were located the torsion springs (rope made of animal sinew), which were twisted around the bow arms, which in turn, were attached to the bowstring. Drawing the bowstring back with the winches twisted the already taut springs, storing the energy to fire the projectiles. The bronze or iron caps, which secured the torsion bundles, were adjustable by means of pins and peripheral holes, which allowed the weapon to be tuned for symmetrical power and for changing weather conditions. The ballista was a highly accurate weapon (there are many accounts of single soldiers being picked off by ballistarii), but some design aspects meant it could compromise its accuracy for range. The maximum range was over , but the effective combat range for many targets was far shorter. The Romans continued the development of the ballista, and it became a highly prized and valued weapon in the army of the Roman Empire. It was used, just before the start of the Empire, by Julius Caesar during his conquest of Gaul and on both of his campaigns in subduing Britain. First invasion of Britain The first of Caesar's invasions of Britain took place in 55 BC, after a rapid and successful initial conquest of Gaul, in part as an expedition, and more practically to try to put an end to the reinforcements sent by the native Britons to fight the Romans in Gaul. A total of eighty transport ships, carrying two legions, attempted to land on the British shore, only to be driven back by the many British warriors assembled along the shoreline. The ships had to unload their troops on the beach, as it was the only one suitable for many miles, yet the massed ranks of British charioteers and javeliners were making it difficult.
Seeing this, Caesar ordered the warships – which were swifter and easier to handle than the transports, and likely to impress the natives more by their unfamiliar appearance – to be removed a short distance from the others, and then be rowed hard and run ashore on the enemy’s right flank, from which position men on deck could use the slings, bows, and artillery to drive them back. This maneuver was highly successful. Scared by the strange shape of the warships, the motion of the oars, and the unfamiliar machines, the natives halted and retreated. (Caesar, The Conquest of Gaul, p.99) Siege of Alesia In Gaul, the stronghold of Alesia was under a Roman siege in 52 BC, and was completely surrounded by a Roman fortification including a wooden palisade and towers. As was standard siege technique at the time, small ballistae were placed in the towers with other troops armed with bows or slings. The use of the ballista in the Roman siege strategy was also demonstrated in the case of the Siege of Masada. Ballistae in the Roman Empire During the conquest of the Empire, the ballista proved its worth many times in sieges and battles, both at sea and on land. It is from the time of the Roman Empire that many of the archaeological finds of ballistae date. Accounts by the finders, including technical manuals and journals, are used today by archaeologists to reconstruct these weapons. After Julius Caesar, the ballista was a permanent fixture in the Roman army and, over time, modifications and improvements were made by successive engineers. This included replacing the remaining wooden parts of the machine with metal, creating a much smaller, lighter and more powerful machine than the wooden version, which required less maintenance (though the vital torsion springs were still vulnerable to the strain). The largest ballistae of the 4th century could throw a dart further than 1200 yards (1,100 m). The weapon was named ballista fulminalis in De rebus bellicis: "From this ballista, darts were projected not only in great number but also at a large size over a considerable distance, such as across the width of the Danube River." Ballistae were not only used in laying siege: after AD 350, at least 22 semi-circular towers were erected around the walls of Londinium (London) to provide platforms for permanently mounted defensive devices. Eastern Roman Empire During the 6th century, Procopius described the effects of this weapon: But Belisarius placed upon the towers engines which they call "ballistae". Now these engines have the form of a bow, but on the under side of them a grooved wooden shaft projects; this shaft is so fitted to the bow that it is free to move, and rests upon a straight iron bed. So when men wish to shoot at the enemy with this, they make the parts of the bow which form the ends bend toward one another by means of a short rope fastened to them, and they place in the grooved shaft the arrow, which is about one half the length of the ordinary missiles which they shoot from bows, but about four times as wide...but the missile is discharged from the shaft, and with such force that it attains the distance of not less than two bow-shots, and that, when it hits a tree or a rock, it pierces it easily. Such is the engine which bears this name, being so called because it shoots with very great force... 
The missiles were able to penetrate body-armour: And at the Salarian Gate a Goth of goodly stature and a capable warrior, wearing a corselet and having a helmet on his head, a man who was of no mean station in the Gothic nation, refused to remain in the ranks with his comrades, but stood by a tree and kept shooting many missiles at the parapet. But this man by some chance was hit by a missile from an engine which was on a tower at his left. And passing through the corselet and the body of the man, the missile sank more than half its length into the tree, and pinning him to the spot where it entered the tree, it suspended him there a corpse. Carroballista The carroballista was a cart-mounted version of the weapon. There were probably different models of ballista under the cheiroballistra class, at least two different two-wheeled models and one model with four wheels. Their probable size was roughly width, i.e., 5 Roman feet. The cart system and structure gave it a great deal of flexibility and capability as a battlefield weapon, since the increased maneuverability allowed it to be moved with the flow of the battle. This weapon features several times on Trajan's Column. Polybolos It has been speculated that the Roman military may have also fielded a 'repeating' ballista, also known as a polybolos. Reconstruction and trials of such a weapon carried out in a BBC documentary, What the Romans Did For Us, showed that they "were able to shoot eleven bolts a minute, which is almost four times the rate at which an ordinary ballista can be operated". However, no example of such a weapon has been found by archaeologists. Cheiroballistra and manuballista The cheiroballistra and the manuballista are held by many archaeologists to be the same weapon. The difference in name may be attributable to the different languages spoken in the Empire. Latin remained the official language in the Western Empire, but the Eastern Empire predominantly used Greek, which added an extra 'r' to the word ballista. The manuballista was a handheld version of the traditional ballista. This new version was made entirely of iron, which conferred greater power to the weapon, since it was smaller, and less iron (an expensive material before the 19th century), was used in its production. It was not the ancient gastraphetes, but the Roman weapon. However, the same physical limitations applied as with the gastraphetes. Archaeology and the Roman ballista Archaeology, and in particular experimental archaeology has been influential on this subject. Although several ancient authors (such as Vegetius) wrote very detailed technical treatises, providing us with all the information necessary to reconstruct the weapons, all their measurements were in their native language and therefore highly difficult to translate. Attempts to reconstruct these ancient weapons began at the end of the 19th century, based on rough translations of ancient authors. It was only during the 20th century, however, that many of the reconstructions began to make any sense as a weapon. By bringing in modern engineers, progress was made with the ancient systems of measurement. By redesigning the reconstructions using the new information, archaeologists in that specialty were able to recognise certain finds from Roman military sites, and identify them as ballistae. The information gained from the excavations was fed into the next generation of reconstructions and so on. 
Sites across the empire have yielded information on ballistae, from Spain (the Ampurias Catapult), to Italy (the Cremona Battleshield, which proved that the weapons had decorative metal plates to shield the operators), to Iraq (the Hatra Machine) and even Scotland (Burnswark siege tactics training camp), and many other sites in between. The most influential archaeologists in this area have been Peter Connolly and Eric Marsden, who have not only written extensively on the subject but have also made many reconstructions themselves and have refined the designs over many years of work. Middle Ages With the decline of the Roman Empire, resources to build and maintain these complex machines became very scarce, so the ballista was likely supplanted initially by the simpler and cheaper onager and the more efficient springald. However, while it became less and less popular as more efficient siege engines such as the trebuchet and the mangonel became widespread, the ballista still retained some use in medieval siege warfare, especially by city and castle garrisons, until it was finally rendered obsolete by the more convenient medieval cannons, already omnipresent in all major European Catholic cities by the first half of the 14th century. The Littere Wallie records the existence of 4 "balistas ad turrimi" at "Duluithelan" [Dolwyddelan] Castle in 1280, one "balistam de tur" at "Rothelano" [Rhuddlan] castle and one "magnam ballistam" at "Bere Blada" Castle [Castell y Bere?] in 1286. These were all held under the authority of the English Crown. In remote and seemingly "savage" places like Ireland, however, where cannons were rare and personal firearms were almost non-existent, ballistae had recorded use well into the late 15th century. While not a direct descendant mechanically, the concept and naming continue on in arbalest crossbows (arcus 'bow' + ballista).
Technology
Artillery and siege
null
43380
https://en.wikipedia.org/wiki/Trebuchet
Trebuchet
A trebuchet is a type of catapult that uses a rotating arm with a sling attached to the tip to launch a projectile. It was a common powerful siege engine until the advent of gunpowder. The design of a trebuchet allows it to launch projectiles of greater weight over greater distances than a traditional catapult. There are two main types of trebuchet. The first is the traction trebuchet, or mangonel, which uses manpower to swing the arm. It first appeared in China by the 4th century BC. It spread westward, possibly via the Avars, and was adopted by the Byzantines, Persians, Arabs, and other neighboring peoples by the sixth to seventh centuries AD. The later, and often larger and more powerful, counterweight trebuchet, also known as the counterpoise trebuchet, uses a counterweight to swing the arm. It appeared in both Christian and Muslim lands around the Mediterranean in the 12th century, and was carried back to China by the Mongols in the 13th century. Etymology and terminology The numerous forms of the word that appeared during the 13th century, including trabocco, tribok, tribuclietta, and trubechetum, have obscured the origin of the term. In Arabic the counterweight trebuchet was called manjaniq maghribi or manjaniq ifranji. In China it was called the húihúi pào (Muslim trebuchet). The English word trebuchet is first mentioned in the 14th century (13th century in Anglo-Latin) as "medieval stone-throwing engine of war". It is borrowed from (Old) French trebuchet (now trébuchet). The French word is from the verbal root of trebucher (now trébucher) : trebuch- + diminutive noun suffix -et, trebucher (10th century) meant "to overthrow, to bring down", then and now "to stumble", maybe earlier "to rock" or "to tilt". It is a compound of (Old) French tre(s)-, variant form tra- (now tré- / tra-) from Latin trans expressing "displacement" in that case + Old French buc "trunk of the body, bulk", itself from Old Low Franconian *būk- "belly" similar to Old High German buh, German Bauch "belly". The earliest appearance of the term "trebuchet" in French dates to the late 12th century and the first attestations of trebuchet as a siege weapon are from around the year 1200. The 1174-77 edition of Roman de Renart, an epic about Renard the Fox, describes it as a "trap whose trigger mechanism consists of an assembly of balanced logs" (understood as animal trap by 1375) while the ca. 1200 edition describes it as a "war engine that throws stones to break down walls". The word trabuchellus appeared alongside manganum and prederia in a document in Vicenza on . Trabucha is found a decade later with predariae at the siege of Castelnuovo Bocca d'Adda in an account by Iohannes Codagnellus. It is unclear, however, whether these referred to counterweight trebuchets. Codagnellus did not specify a particular type of engine with the term and even implied that they were "fairly light in subsequent references". Only in the late 1210s do variations of "trebuchet" in sources, described as increasingly powerful machines or utilizing different components, identify more closely with the counterweight trebuchet. Other terms, such as machina maior/magna, might have also referred to counterweight trebuchets. Traction trebuchet and counterweight trebuchet are modern terms (retronyms), not used by contemporary users of the weapons.
The term traction trebuchet was created mainly to distinguish this type of weapon from the onager, a torsion powered catapult that is often conflated in contemporary sources with the mangonel, which was used as a generic term for any medieval stone throwing artillery. Both the traction and counterweight trebuchets have been called mangonel at one point or another. Confusion between the onager, mangonel, trebuchet, and other catapult types in contemporary terminology has led some historians today to use the more precise traction trebuchet instead, with counterweight trebuchet used to distinguish what was before called simply a trebuchet. Some modern historians use mangonel to mean exclusively traction trebuchets, while others call traction trebuchets traction mangonels and counterweight trebuchets counterweight mangonels. Basic design The trebuchet is a compound machine that makes use of the mechanical advantage of a lever to throw a projectile. They are typically large constructions, with the length of the beam as much as , with some purported to be even larger. A trebuchet consists primarily of a long beam attached by an axle suspended high above the ground by a stout frame and base, such that the beam can rotate vertically through a wide arc (typically over 180°). A sling is attached to one end of the beam to hold the projectile. The projectile is thrown when the beam is quickly rotated by applying force to the opposite end of the beam. The mechanical advantage is primarily obtained by having the projectile section of the beam much longer than the opposite section where the force is applied – usually four to six times longer. The difference between counterweight and traction trebuchets is what force they use. Counterweight trebuchets use gravity; potential energy is stored by slowly raising an extremely heavy box (typically filled with stones, sand, or lead) attached to the shorter end of the beam (typically on a hinged connection), and releasing it on command. Traction trebuchets use human power; on command, men pull ropes attached to the shorter end of the trebuchet beam. The difficulties of coordinating the pull of many men together repeatedly and predictably makes counterweight trebuchets preferable for the larger machines, though they are more complicated to engineer. The trebuchet had further modifications to allow an increase to its range, by creating a slot for the sling and projectile to sit underneath the trebuchet, enabling the sling to be lengthened and thus extending the range, an alteration in the trajectory, or the release point to be changed. Further increasing their complexity is that either winches or treadwheels, aided by block and tackle, are typically required to raise the more massive counterweights. So while counterweight trebuchets require significantly fewer men to operate than traction trebuchets, they require significantly more time to reload. In a long siege, reload time may not be a critical concern. When the trebuchet is operated, the force causes rotational acceleration of the beam around the axle (the fulcrum of the lever). These factors multiply the acceleration transmitted to the throwing portion of the beam and its attached sling. The sling starts rotating with the beam, but rotates farther (typically about 360°) and therefore faster, transmitting this increased speed to the projectile. 
The length of the sling increases the mechanical advantage, and also changes the trajectory so that, at the time of release from the sling, the projectile is traveling in the desired speed and angle to give it the range to hit the target. Adjusting the sling's release point is the primary means of fine-tuning the range, as the rest of the trebuchet's actions are difficult to adjust after construction. The rotation speed of the throwing beam increases smoothly, starting slow but building up quickly. After the projectile is released, the arm continues to rotate, allowed to smoothly slow down on its own accord and come to rest at the end of the rotation. This is unlike the violent sudden stop inherent in the action of other catapult designs such as the onager, which must absorb most of the launching energy into their own frame, and must be heavily built and reinforced as a result. This key difference makes the trebuchet much more durable, allowing for larger and more powerful machines. A trebuchet projectile can be almost anything, even debris, rotting carcasses, or incendiaries, but is typically a large stone. Dense stone, or even metal, specially worked to be round and smooth, gives the best range and predictability. When attempting to breach enemy walls, it is important to use materials that will not shatter on impact; projectiles were sometimes brought from distant quarries to get the desired properties. History Traction trebuchet The traction trebuchet, also referred to as a mangonel in some sources, originated in ancient China. The first recorded use of traction trebuchets was in ancient China. They were probably used by the Mohists as early as 4th century BC; descriptions can be found in the Mozi (compiled in the 4th century BC). According to the Mozi, the traction trebuchet was high with buried below ground, the fulcrum attached was constructed from the wheels of a cart, the throwing arm was long with three quarters above the pivot and a quarter below to which the ropes are attached, and the sling long. The range given for projectiles are , , and . They were used as defensive weapons stationed on walls and sometimes hurled hollowed-out logs filled with burning charcoal to destroy enemy siege works. By the 1st century AD, commentators were interpreting other passages in texts such as the Zuo zhuan and Classic of Poetry as references to the traction trebuchet: "the guai is 'a great arm of wood on which a stone is laid, and this by means of a device [ji] is shot off and so strikes down the enemy. The Records of the Grand Historian say that "The flying stones weigh 12 catties and by devices [ji] are shot off 300 paces." Traction trebuchets went into decline during the Han dynasty due to long periods of peace but became a common siege weapon again during the Three Kingdoms period. They were commonly called stone-throwing machines, thunder carriages, and stone carriages in the following centuries. They were used as ship mounted weapons by 573 for attacking enemy fortifications. It seems that during the early 7th century, improvements were made on traction trebuchets, although it is not explicitly stated what. According to a stele in Barkul celebrating Tang Taizong's conquest of what is now Ejin Banner, the engineer Jiang Xingben made great advancements on trebuchets that were unknown in ancient times. Jiang Xingben participated in the construction of siege engines for Taizong's campaigns against the Western Regions. 
In 617 Li Mi (Sui dynasty) constructed 300 trebuchets for his assault on Luoyang, in 621 Li Shimin did the same at Luoyang, and onward into the Song dynasty when in 1161, trebuchets operated by Song dynasty soldiers fired bombs of lime and sulphur against the ships of the Jin dynasty navy during the Battle of Caishi. The traction trebuchet was adopted by various peoples west of China such as the Byzantines, Persians, Arabs, and Avars by the sixth to seventh centuries AD. Some scholars suggest that the Avars carried the traction trebuchet westward while others claim that the Byzantines already possessed knowledge of the traction trebuchet beforehand. Regardless of the vector of transmission, it appeared in the eastern Mediterranean by the late 6th century AD, where it replaced torsion powered siege engines such as the ballista and onager. The rapid displacement of torsion siege engines was probably due to a combination of reasons. The traction trebuchet is simpler in design, has a faster rate of fire, increased accuracy, and comparable range and power. It was probably also safer than the twisted cords of torsion weapons, "whose bundles of taut sinews stored up huge amounts of energy even in resting state and were prone to catastrophic failure when in use." At the same time, the late Roman Empire seems to have fielded "considerably less artillery than its forebears, organised now in separate units, so the weaponry that came into the hands of successor states might have been limited in quantity." Evidence from Gaul and Germania suggests there was substantial loss of skills and techniques in artillery further west. According to the Miracles of Saint Demetrius, probably written around 620 by John, Archbishop of Thessaloniki, the Avaro-Slavs attacked Thessaloniki in 586 with traction trebuchets. The bombardment lasted for hours, but the operators were inaccurate and most of the shots missed their target. When one stone did reach their target, it "demolished the top of the rampart down to the walkway." The Byzantines adopted the traction trebuchet possibly as early as 587, the Persians in the early 7th century, and the Arabs in the second half of the 7th century. In 652, the Arabs used trebuchets at the siege of Dongola in the Sudan. Like the Chinese, by 653, the Arabs also had ship mounted traction trebuchets. The Franks and Saxons adopted the weapon in the 8th century. The Life of Louis the Pious contains the earliest western European reference to mangonels (traction trebuchets) in its account of the siege of Tortosa (808–809). In 1173, the Republic of Pisa tried to capture an island castle with traction trebuchet on galleys. Traction trebuchets were also used in India. The traction trebuchet was most efficient as an anti-personnel weapon, used in a supportive position alongside archers and slingers. Most accounts of traction trebuchets describe them as light artillery weapons while actual penetration of defenses was the result of mining or siege towers. At the Siege of Kamacha in 766, Byzantine defenders used wooden cover to protect themselves from the enemy artillery while inflicting casualties with their own stone throwers. Michael the Syrian noted that at the siege of Balis in 823 it was the defenders that suffered from bombardment rather than the fortifications. At the siege of Kaysum, Abdallah ibn Tahir al-Khurasani used artillery to damage houses in the town. The Sack of Amorium in 838 saw the use of traction trebuchets to drive away defenders and destroy wooden defenses. 
At the siege of Marand in 848, traction trebuchets were used, "reportedly killing 100 and wounding 400 on each side during the eight-month siege." During the siege of Baghdad in 865, defensive artillery were responsible for repelling an attack on the city gate while traction trebuchets on boats claimed a hundred of the defenders' lives. Some exceptionally large and powerful traction trebuchets have been described during the 11th century or later. At the Siege of Manzikert (1054), the Seljuks' initial siege artillery was countered by the defenders' own, which shot stones at the besieging machine. In response, the Seljuks constructed another one requiring 400 men to pull and threw stones weighing . A breach was created on the first shot but the machine was burnt down by the defenders. According to Matthew of Edessa, this machine weighed and caused a number of casualties to the city's defenders. Ibn al-Adim describes a traction trebuchet capable of throwing a man in 1089. At the siege of Haizhou in 1161, a traction trebuchet was reported to have had a range of 200 paces (over ). West of China, the traction trebuchet remained the primary siege engine until the 12th century when it was replaced by the counterweight trebuchet. In China the traction trebuchet was the primary siege engine until the counterweight trebuchet was introduced during the Mongol conquest of the Song dynasty in the 13th century. Counterweight trebuchet Origins There is little to no consensus as to where and when the counterweight trebuchet, which has been described as the "most powerful weapon of the Middle Ages", was first developed. The earliest known description and illustration of a counterweight trebuchet comes from a commentary on the conquests of Saladin by Mardi ibn Ali al-Tarsusi in 1187. However cases for the existence of both European and Muslim counterweight trebuchets prior to 1187 have been made. In 1090, Khalaf ibn Mula'ib threw out a man from the citadel in Salamiya with a machine and in the early 12th century, Muslim siege engines were able to breach crusader fortifications. David Nicolle argues that these events could have only been possible with the use of counterweight trebuchets. Although al-Tarsusi provided the first description and illustration of a counterweight trebuchet, the text implies that the engine was not new and had previously been built. Al-Tarsusi referred to the counterweight trebuchet as the "Persian" trebuchet whereas the "Frankish" trebuchet was a light traction engine. Later during the 13th century, Muslims used manjaniq maghribi (Western trebuchet) and manjaniq ifranji (Frankish trebuchet) to refer to counterweight trebuchets. Paul E. Chevedden suggests that manjaniq maghribi was used to describe hinged counterweight engines in contrast to previous fixed or hanging counterweight trebuchets. Sometimes counterweight trebuchets are separated into two or three different categories based on how their counterweights are attached. These being fixed, hanging, and hinged counterweights. A fixed counterweight is an intrinsic part of the swinging arm and its trajectory is circular. Hanging counterweights hang below the arm and drop vertically. Hinged counterweights are attached to the arm by a swinging joint. Some fixed counterweights also had a hinged component. The type described by al-Tarsusi was a hanging counterweight. Writing in 1280, Giles of Rome claimed that hinged counterweight trebuchets had a greater range than fixed counterweight types. 
Chevedden argues that counterweight trebuchets appeared prior to 1187 in Europe based on what might have been counterweight trebuchets in earlier sources. The 12th-century Byzantine historian Niketas Choniates may have been referring to a counterweight trebuchet when he described one equipped with a windlass, which is only useful to counterweight machines, at the siege of Zevgminon in 1165. However the source for this was written in the 1180s to 1190s and Niketas may have been placing the engine of his own time anachronistically into the past. At the siege of Nicaea in 1097 the Byzantine emperor Alexios I Komnenos reportedly invented new pieces of heavy artillery which deviated from the conventional design and made a deep impression on everyone. Illustrations produced later in 1270 depicted fixed counterweight trebuchets used at the siege. Possible references to counterweight trebuchets also appear for the second siege of Tyre in 1124, where the crusaders reportedly made use of "great trebuchets". However the sources for this siege, Fulcher of Chartres and William of Tyre, only mention machinae and machinae iaculatoriae that were later translated as perrieres and mangoniaux in the Estoire d'Eracles. Chevedden argues that given the references to new and better trebuchets that by the 1120–30s, the counterweight trebuchet was being used in a variety of places by different peoples such as the crusader states, the Normans of Sicily and the Seljuks. The earliest solid reference to a "trebuchet" in European sources dates to the siege of Castelnuovo Bocca d'Adda in 1199. However it is unclear if this referred to counterweight trebuchets since the author did not specify what engine was used and described the machine as fairly light. They may have been used in Germany from around 1205. Only in the late 1210s do references to "trebuchet", describing more powerful engines and different components, more closely align with the features of a counterweight trebuchet. Some of these more powerful engines may have just been traction trebuchets, as one was described being pulled by ten thousand. At the Siege of Toulouse (1217–1218), trabuquets were mentioned to have been deployed, but the siege engine depicted at the tomb of Simon de Montfort, who was killed by artillery at the siege, is a traction trebuchet. Though soon after, clear evidence of counterweight machines appeared. According to the Song of the Albigensian Crusade, the defenders "ran to the ropes and wound the trebuchets", and to shoot the machine, they "then released their ropes." They were used in England at least by 1217 and in Iberia shortly after 1218. By the 1230s the counterweight trebuchet was a common item in siege warfare. Despite the lack of clearly definable terms in the late 12th and early 13th centuries, it is likely that both Muslims and Europeans already had working knowledge of the counterweight trebuchet beforehand. From the First Crusade (1096–1099) onward, there does not appear to be any discernible difference in the technology of siege engines employed by Muslim and Frankish forces, and by the Third Crusade (1189–1192), both sides seemed well acquainted with the enemy's siege weapons, which "appear to have been remarkably similar." China Counterweight trebuchets do not appear with certainty in Chinese historical records until about 1268. Prior to 1268, the counterweight trebuchet may have been used in 1232 by the Jurchen Jin commander Qiang Shen. 
Qiang invented a device called the "Arresting Trebuchet" which only needed a few men to work it, and could hurl great stones more than a hundred paces, further than even the strongest traction trebuchet. However no other details on the machine are given. Qiang died the following year and no further references to the Arresting Trebuchet appear. The earliest definite mention of the counterweight trebuchet in China was in 1268, when the Mongols laid siege to Fancheng and Xiangyang. After failing to take the twin cities of Fancheng and Xiangyang for several years, collectively known as the siege of Fancheng and Xiangyang, the Mongol army brought in two Persian engineers to build hinged counterweight trebuchets. Known as the Huihui trebuchet (回回砲, where "huihui" is a loose slang referring to any Muslims), or Xiangyang trebuchet (襄陽砲) because they were first encountered in that battle. Ismail and Al-aud-Din travelled to South China from Iraq and built trebuchets for the siege. Chinese and Muslim engineers operated artillery and siege engines for the Mongol armies. By 1283, counterweight trebuchets were also used in Southeast Asia by the Chams against the Yuan dynasty. Function While some historians have described the counterweight trebuchet as a type of medieval super weapon, other historians have urged caution in overemphasizing its destructive capability. On the side of the counterweight engine as a medieval military revolution, historians such as Sydney Toy, Paul Chevedden, and Hugh Kennedy consider its power to have caused significant changes in medieval warfare. This line of thought suggests that rams were abandoned due to the effectiveness of the counterweight trebuchet, which was capable of reducing "any fortress to rubble". Accordingly, traditional fortifications became obsolete and had to be improved with new architectural structures to support defensive counterweight trebuchets. In southern France during the Albigensian Crusade, sieges were a last resort and negotiations for surrender were common. In these instances, trebuchets were used to threaten or bombard enemy fortifications and ensure victory. On the side of caution, historians such as John France, Christopher Marshall, and Michael Fulton emphasize the still considerable difficulty of reducing fortifications with siege artillery. Examples of the failure of siege artillery include the lack of evidence that artillery ever threatened the defenses of Kerak Castle between 1170 and 1188. Marshall maintains that "the methods of attack and defence remained largely the same through the thirteenth century as they had been during the twelfth." Reservations on the counterweight trebuchet's destructive capability were expressed by Viollet-le-Duc, who "asserted that even counterweight-powered artillery could do little more than destroy crenellations, clear defenders from parapets and target the machines of the besieged." In spite of the evidence regarding increasingly powerful counterweight trebuchets during the 13th century, "it remains an important consideration that not one of these appears to have effected a breach that directly led to the fall of a stronghold." In 1220, Al-Mu'azzam Isa laid siege to Atlit with a trabuculus, three petrariae, and four mangonelli but could not penetrate past the outer wall, which was soft but thick. 
As late as the Siege of Acre (1291), where the Mamluk Sultanate fielded 72 or 92 trebuchets, including 14 or 15 counterweight trebuchets and the remaining traction types, they were never able to fulfill a breaching role. The Mamluks entered the city by sapping the northeast corner of the outer wall. Though stone projectiles of substantial size (~) have been found at Acre, located near the site of the siege and likely used by the Mamluks, surviving walls of a 13th-century Montmusard tower are no more than one meter thick. There is no indication that the thickness of fortress walls increased exponentially rather than a modest increase of between the 12th and 13th century. The Templar of Tyre described the faster firing traction trebuchets as more dangerous to the defenders than the counterweight ones. The Song dynasty described countermeasures against counterweight trebuchets that prevented them from damaging towers and houses: "an extraordinary method was invented of neutralising the effects of the enemy's trebuchets. Ropes of rice straw four inches thick and thirty-four feet long were joined together twenty at a time, draped on to the buildings from top to bottom, and covered with [wet] clay. Then neither the incendiary arrows, nor bombs [huo pao] from trebuchets, nor even stones of a hundred jun caused any damage to the towers and houses." The counterweight trebuchet did not completely replace the traction trebuchet. Despite its greater range, counterweight trebuchets had to be constructed close to the site of the siege unlike traction trebuchets, which were smaller, lighter, cheaper, and easier to take apart and put back together again where necessary. The superiority of the counterweight trebuchet was not clear cut. Of this, the Hongwu Emperor stated in 1388: "The old type of trebuchet was really more convenient. If you have a hundred of those machines, then when you are ready to march, each wooden pole can be carried by only four men. Then when you reach your destination, you encircle the city, set them up, and start shooting!" The traction trebuchet continued to serve as an anti-personnel weapon. The Norwegian text of 1240, Speculum regale, explicitly states this division of functions. Traction trebuchets were to be used for hitting people in undefended areas. At the Siege of Acre (1291), both traction and counterweight trebuchets were used. The traction trebuchets provided cover fire while the counterweight trebuchets destroyed the city's fortifications. The counterweight-trebuchet could also be used for cover fire and as an anti-personnel weapon. King James I of Aragon employed this as a defensive tactic in many fortified structures and towns which proved effective. Trebuchets could cause mass casualties due to the destruction of structures. During an assault on Muntcada by King James I, a trebuchet was used to target a tower, destroying the structure and causing the consequential deaths of civilians and livestock. But typically the counterweight trebuchet was used against battlements such as parapets, other defensive structures, and the lower section of walls due to its greater accuracy and longer range, which was how it was employed by the Kingdom of Aragon. There is some evidence that the counterweight trebuchet could be transported. Armies employed a magister tormentorum ('master of trebuchets') for the reconstruction of trebuchets after they were deconstructed for transportation to their destination, whether on carts or by ship. 
They could also be equipped with their own wheels, as shown in two 17th- and 18th-century Chinese illustrations, which are also the only Chinese depictions of counterweight trebuchets on land. According to Liang Jieming, the "illustration shows ... its throwing arm disassembled, its counterweight locked with supporting braces, and prepped for transport and not in battle deployment." However, according to Joseph Needham, the large tank in the middle was the counterweight, while the bulb at the end of the arm was for adjusting between fixed and swinging counterweights. Both Liang and Needham note that the illustrations are poorly drawn and confusing, leading to mislabeling. The counterweight and traction trebuchets were phased out around the mid-15th century in favor of gunpowder weapons. Decline of military use With the introduction of gunpowder, the trebuchet began to lose its place as the siege engine of choice to the cannon. Trebuchets were still used both at the siege of Burgos (1475–1476) and siege of Rhodes (1480). One of the last recorded military uses was by Hernán Cortés, at the 1521 siege of the Aztec capital Tenochtitlán. Accounts of the attack note that its use was motivated by the limited supply of gunpowder. The attempt was reportedly unsuccessful: the first projectile landed on the trebuchet itself, destroying it. In China, the last time trebuchets were seriously considered for military purposes was in 1480. Not much is heard of them afterwards. In 2024, the Israeli military made at least partial use of trebuchets against Hezbollah objectives in southern Lebanon. Other trebuchets Hand-trebuchet The hand-trebuchet () was a staff sling mounted on a pole using a lever mechanism to propel projectiles. Basically a one-man traction trebuchet, it was used by troops of emperor Nikephoros II Phokas around 965 to disrupt enemy formations in the open field. It was also mentioned in the Taktika of general Nikephoros Ouranos (c. 1000), and listed in De obsidione toleranda (author anonymous) as a form of artillery. In China, the hand-trebuchet (shoupao) was invented by Liu Yongxi and presented to the emperor in 1002. It was a pole with a pin at its upper end that acted as a fulcrum for the arm. The pole was used as a shot for fixing in the ground and the user could then throw missiles at the enemy from a static position. Hybrid trebuchet According to Paul E. Chevedden, a hybrid trebuchet existed that used both counterweight and human propulsion. However no illustrations or descriptions of the device exist from the time when they were supposed to have been used. The entire argument for the existence of hybrid trebuchets rests on accounts of increasingly more effective siege weapons. Peter Purton suggests that this was simply because the machines became larger. The earliest depiction of a hybrid trebuchet is dated to 1462, when trebuchets had already become obsolete due to cannons. Couillard The couillard is a smaller version of a counterweight trebuchet with a single frame instead of the usual double "A" frames. The counterweight is split into two halves to avoid hitting the center frame. Comparison of different artillery weapons Roman torsion engines Chinese trebuchets Counterweight trebuchets (estimates) Siege crossbows Reconstructed traction trebuchets Reconstructed counterweight trebuchets Modern use Recreation and education Most trebuchet use in recent centuries has been for recreational or educational, rather than military purposes. 
New machines have been constructed and old ones restored by living-history enthusiasts for historical re-enactments and other historical celebrations. As their construction is substantially simpler than that of modern weapons, trebuchets also serve as the object of engineering challenges. The methods of trebuchet construction were lost at the beginning of the 16th century. In 1984, the French engineer Renaud Beffeyte made the first modern reconstruction of a trebuchet, based on documents from 1324. The largest currently functioning trebuchet in the world is the machine at Warwick Castle, England, constructed in 2005. Based on historical designs, it stands tall and throws missiles typically 36 kg (80 lbs) up to . The trebuchet gained significant interest from numerous news sources when in 2015 a burning missile fired from the siege engine struck and damaged a Victorian-era boathouse situated nearby on the River Avon, inadvertently demonstrating the weapon's power. It is built on the design of a similar trebuchet at Middelaldercentret in Denmark. In 1989, Middelaldercentret became the first place in the modern era to have a working trebuchet. Trebuchets compete in one of the classifications of machines used to hurl pumpkins at the annual pumpkin chucking contest held in Sussex County, Delaware, U.S. The record-holder in that contest for trebuchets is the Yankee Siege II from New Hampshire, which at the 2013 WCPC Championship tossed a pumpkin 2835.8 ft (864.35 metres). The , trebuchet flings the standard pumpkins specified for all entries in the WCPC competition. A large trebuchet was tested in late 2017 in Belfast as part of the set for the television series Game of Thrones. A large trebuchet based on Edward I's "Warwolf" was constructed for a scene in David Mackenzie's movie Outlaw King (2018) about Robert the Bruce, King of Scots. During the film, it hurls an incendiary projectile at Stirling Castle. This recreates the historical account that the engine took some three months to build and that Edward refused to accept his enemy's surrender until he had used it. In recent years several trebuchets have been built that are capable of throwing cars. In the episode "Carnage A Trois" in series 4 of The Grand Tour, the presenters use a trebuchet to allegedly sling a Citroën C3 Pluriel from the White Cliffs of Dover across the English Channel. The Stamford-based YouTube personality and inventor Colin Furze created a high trebuchet capable of throwing a washing machine in December 2020. Developments Although rarely used as a weapon today, trebuchets maintain the interest of professional and hobbyist engineers. One modern technological development, especially for the competitive pumpkin-hurling events, is the "floating arm" design. Instead of using the traditional axle fixed to a frame, these devices are mounted on wheels that roll on a track parallel to the ground, with a counterweight that falls directly downward upon release, allowing for greater efficiency by increasing the proportion of energy transferred to the projectile. A more radical design, Jonathan, Orion, and Emmerson Stapleton's "walking arm", described as "a stick falling over with a huge counterweight on top of the stick", debuted in 2016 and in 2018 won both the Grand Champion Best Design and Middleweight Open Division of the 10th annual Vermont Pumpkin Chuckin Festival. Another recent development is the "flywheel trebuchet," in which a flywheel is spun into rapid rotation to build up momentum before release.
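The efficiency argument behind the floating-arm layout can be expressed as a simple energy ratio. The sketch below uses purely hypothetical numbers (the function name and values are not taken from any published design) to compare how much of the counterweight's potential energy ends up as projectile kinetic energy for two imagined machines.

```python
G = 9.81  # m/s^2

def energy_transfer_efficiency(counterweight_kg, drop_height_m,
                               projectile_kg, release_speed_m_s):
    """Fraction of the counterweight's potential energy that ends up in the projectile."""
    energy_in = counterweight_kg * G * drop_height_m           # joules released by the falling mass
    energy_out = 0.5 * projectile_kg * release_speed_m_s ** 2  # projectile kinetic energy at release
    return energy_out / energy_in

# Hypothetical numbers: two machines throwing the same 4 kg pumpkin
# from the same 500 kg counterweight falling through 2 m.
for name, speed in (("fixed-axle machine", 50.0), ("floating-arm machine", 57.0)):
    eff = energy_transfer_efficiency(500, 2.0, 4.0, speed)
    print(f"{name:>20}: {eff:.0%} of the counterweight energy reaches the projectile")
```

By this measure, a design that wastes less energy in the frame and in residual motion of the counterweight reaches a higher release speed from the same drop, which is the rationale given for the floating-arm layout.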
Uses in activism and insurgency In 2013, during the Syrian civil war, rebels were filmed using a trebuchet in the Battle of Aleppo. The trebuchet was used to project explosives at government troops. In 2014, during the Hrushevskoho street riots in Ukraine, rioters used an improvised trebuchet to throw bricks and Molotov cocktails at the Berkut. Uses in regular armies In 2024, the IDF used a trebuchet to hurl flaming projectiles into Lebanon. The goal was to burn down the thicket that grew alongside the border wall between Israel and Lebanon, so that it could not be used as cover by Hezbollah troops. The IDF later issued a statement suggesting that the trebuchet's use was a "local initiative" rather than a widely used tool in the Israeli military.
Simulation
A simulation is an imitative representation of a process or system that could exist in the real world. In this broad sense, simulation can often be used interchangeably with model. Sometimes a clear distinction between the two terms is made, in which simulations require the use of models; the model represents the key characteristics or behaviors of the selected system or process, whereas the simulation represents the evolution of the model over time. Another way to distinguish between the terms is to define simulation as experimentation with the help of a model. This definition includes time-independent simulations. Often, computers are used to execute the simulation. Simulation is used in many contexts, such as simulation of technology for performance tuning or optimizing, safety engineering, testing, training, education, and video games. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning, as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist. Key issues in modeling and simulation include the acquisition of valid sources of information about the relevant selection of key characteristics and behaviors used to build the model, the use of simplifying approximations and assumptions within the model, and fidelity and validity of the simulation outcomes. Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement, research and development in simulations technology or practice, particularly in the work of computer simulation. Classification and terminology Historically, simulations used in different fields developed largely independently, but 20th-century studies of systems theory and cybernetics combined with spreading use of computers across all those fields have led to some unification and a more systematic view of the concept. Physical simulation refers to simulation in which physical objects are substituted for the real thing (some circles use the term for computer simulations modelling selected laws of physics, but this article does not). These physical objects are often chosen because they are smaller or cheaper than the actual object or system. Interactive simulation is a special kind of physical simulation, often referred to as a human-in-the-loop simulation, in which physical simulations include human operators, such as in a flight simulator, sailing simulator, or driving simulator. Continuous simulation is a simulation based on continuous-time rather than discrete-time steps, using numerical integration of differential equations. Discrete-event simulation studies systems whose states change their values only at discrete times. For example, a simulation of an epidemic could change the number of infected people at time instants when susceptible individuals get infected or when infected individuals recover. Stochastic simulation is a simulation where some variable or process is subject to random variations and is projected using Monte Carlo techniques using pseudo-random numbers. Thus replicated runs with the same boundary conditions will each produce different results within a specific confidence band. 
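As a small illustration of the stochastic and discrete-event categories just described, the sketch below (Python, with invented parameter values) simulates a toy epidemic of the kind mentioned above: the number of infected people changes only at discrete event instants, and each event is drawn from pseudo-random numbers.

```python
import random

def run_epidemic(population=1000, initially_infected=5,
                 infection_rate=0.3, recovery_rate=0.1,
                 max_time=100.0, seed=None):
    """Toy stochastic, discrete-event epidemic simulation (Gillespie-style).

    The state changes only at event instants: either one susceptible person
    becomes infected or one infected person recovers.  Returns the peak
    number of people infected at the same time."""
    rng = random.Random(seed)
    susceptible = population - initially_infected
    infected = initially_infected
    t, peak = 0.0, infected
    while infected > 0 and t < max_time:
        infect = infection_rate * susceptible * infected / population
        recover = recovery_rate * infected
        total = infect + recover
        t += rng.expovariate(total)         # waiting time to the next event
        if rng.random() < infect / total:   # decide which event happens
            susceptible -= 1
            infected += 1
        else:
            infected -= 1
        peak = max(peak, infected)
    return peak

# Identical starting conditions, different pseudo-random draws, different outcomes.
print([run_epidemic() for _ in range(5)])
```

Because the draws differ between runs, repeated runs from the same starting conditions return different peaks, which is why stochastic results are usually reported as distributions or confidence bands; a deterministic simulation, defined next, removes the random draws and always reproduces the same curve.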
Deterministic simulation is a simulation which is not stochastic: the variables are regulated by deterministic algorithms, so replicated runs from the same boundary conditions always produce identical results. Hybrid simulation (or combined simulation) corresponds to a mix between continuous and discrete event simulation and results in integrating numerically the differential equations between two sequential events to reduce the number of discontinuities. A stand-alone simulation is a simulation running on a single workstation by itself. A distributed simulation is one which uses more than one computer simultaneously, to guarantee access from/to different resources (e.g. multiple users operating different systems, or distributed data sets); a classical example is Distributed Interactive Simulation (DIS). Parallel simulation speeds up a simulation's execution by concurrently distributing its workload over multiple processors, as in high-performance computing. Interoperable simulation is where multiple models or simulators (often defined as federates) interoperate, locally or distributed over a network; a classical example is High-Level Architecture. Modeling and simulation as a service is where simulation is accessed as a service over the web. Modeling, interoperable simulation and serious games is where serious game approaches (e.g. game engines and engagement methods) are integrated with interoperable simulation. Simulation fidelity is used to describe the accuracy of a simulation and how closely it imitates the real-life counterpart. Fidelity is broadly classified as one of three categories: low, medium, and high. Specific descriptions of fidelity levels are subject to interpretation, but the following generalizations can be made: Low – the minimum simulation required for a system to accept inputs and provide outputs Medium – responds automatically to stimuli, with limited accuracy High – nearly indistinguishable from, or as close as possible to, the real system A synthetic environment is a computer simulation that can be included in human-in-the-loop simulations. Simulation in failure analysis refers to simulation in which the environment and conditions are recreated in order to identify the cause of equipment failure. This can be the best and fastest method of identifying the failure cause. Computer simulation A computer simulation (or "sim") is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system. It is a tool to virtually investigate the behaviour of the system under study. Computer simulation has become a useful part of modeling many natural systems in physics, chemistry and biology, and human systems in economics and social science (e.g., computational sociology), as well as in engineering, to gain insight into the operation of those systems. A good example of the usefulness of using computers to simulate can be found in the field of network traffic simulation. In such simulations, the model's behaviour will change in each simulation according to the set of initial parameters assumed for the environment. Traditionally, the formal modeling of systems has been via a mathematical model, which attempts to find analytical solutions enabling the prediction of the behaviour of the system from a set of parameters and initial conditions.
Computer simulation is often used as an adjunct to, or substitution for, modeling systems for which simple closed form analytic solutions are not possible. There are many different types of computer simulation, the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states would be prohibitive or impossible. Several software packages exist for running computer-based simulation modeling (e.g. Monte Carlo simulation, stochastic modeling, multimethod modeling) that makes all the modeling almost effortless. Modern usage of the term "computer simulation" may encompass virtually any computer-based representation. Computer science In computer science, simulation has some specialized meanings: Alan Turing used the term simulation to refer to what happens when a universal machine executes a state transition table (in modern terminology, a computer runs a program) that describes the state transitions, inputs and outputs of a subject discrete-state machine. The computer simulates the subject machine. Accordingly, in theoretical computer science the term simulation is a relation between state transition systems, useful in the study of operational semantics. Less theoretically, an interesting application of computer simulation is to simulate computers using computers. In computer architecture, a type of simulator, typically called an emulator, is often used to execute a program that has to run on some inconvenient type of computer (for example, a newly designed computer that has not yet been built or an obsolete computer that is no longer available), or in a tightly controlled testing environment (see Computer architecture simulator and Platform virtualization). For example, simulators have been used to debug a microprogram or sometimes commercial application programs, before the program is downloaded to the target machine. Since the operation of the computer is simulated, all of the information about the computer's operation is directly available to the programmer, and the speed and execution of the simulation can be varied at will. Simulators may also be used to interpret fault trees, or test VLSI logic designs before they are constructed. Symbolic simulation uses variables to stand for unknown values. In the field of optimization, simulations of physical processes are often used in conjunction with evolutionary computation to optimize control strategies. Simulation in education and training Simulation is extensively used for educational purposes. It is used for cases where it is prohibitively expensive or simply too dangerous to allow trainees to use the real equipment in the real world. In such situations they will spend time learning valuable lessons in a "safe" virtual environment yet living a lifelike experience (or at least it is the goal). Often the convenience is to permit mistakes during training for a safety-critical system. Simulations in education are somewhat like training simulations. They focus on specific tasks. The term 'microworld' is used to refer to educational simulations which model some abstract concept rather than simulating a realistic object or environment, or in some cases model a real-world environment in a simplistic way so as to help a learner develop an understanding of the key concepts. Normally, a user can create some sort of construction within the microworld that will behave in a way consistent with the concepts being modeled. 
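As a concrete, if minimal, illustration of such a microworld, the sketch below (Python; the class and its behaviour are invented purely for illustration) implements a tiny Logo-style turtle world in which a learner's commands move a turtle whose position always obeys the geometry being taught.

```python
import math

class TurtleMicroworld:
    """A minimal Logo-style microworld: commands move a turtle whose position
    always follows the underlying geometry being taught."""

    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0  # heading in degrees

    def forward(self, distance):
        self.x += distance * math.cos(math.radians(self.heading))
        self.y += distance * math.sin(math.radians(self.heading))

    def turn(self, degrees):
        self.heading = (self.heading + degrees) % 360

# A learner's "construction": walking a square returns the turtle home,
# demonstrating the concept being modeled (turning through 360 degrees closes the path).
turtle = TurtleMicroworld()
for _ in range(4):
    turtle.forward(10)
    turtle.turn(90)
print(turtle.x, turtle.y)  # both values are zero up to floating-point rounding
```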
Seymour Papert was one of the first to advocate the value of microworlds, and the Logo programming environment developed by Papert is one of the most well-known microworlds. Project management simulation is increasingly used to train students and professionals in the art and science of project management. Using simulation for project management training improves learning retention and enhances the learning process. Social simulations may be used in social science classrooms to illustrate social and political processes in anthropology, economics, history, political science, or sociology courses, typically at the high school or university level. These may, for example, take the form of civics simulations, in which participants assume roles in a simulated society, or international relations simulations in which participants engage in negotiations, alliance formation, trade, diplomacy, and the use of force. Such simulations might be based on fictitious political systems, or be based on current or historical events. An example of the latter would be Barnard College's Reacting to the Past series of historical educational games. The National Science Foundation has also supported the creation of reacting games that address science and math education. In social media simulations, participants train communication with critics and other stakeholders in a private environment. In recent years, there has been increasing use of social simulations for staff training in aid and development agencies. The Carana simulation, for example, was first developed by the United Nations Development Programme, and is now used in a very revised form by the World Bank for training staff to deal with fragile and conflict-affected countries. Military uses for simulation often involve aircraft or armoured fighting vehicles, but can also target small arms and other weapon systems training. Specifically, virtual firearms ranges have become the norm in most military training processes and there is a significant amount of data to suggest this is a useful tool for armed professionals. Virtual simulation A virtual simulation is a category of simulation that uses simulation equipment to create a simulated world for the user. Virtual simulations allow users to interact with a virtual world. Virtual worlds operate on platforms of integrated software and hardware components. In this manner, the system can accept input from the user (e.g., body tracking, voice/sound recognition, physical controllers) and produce output to the user (e.g., visual display, aural display, haptic display) . Virtual simulations use the aforementioned modes of interaction to produce a sense of immersion for the user. Virtual simulation input hardware There is a wide variety of input hardware available to accept user input for virtual simulations. The following list briefly describes several of them: Body tracking: The motion capture method is often used to record the user's movements and translate the captured data into inputs for the virtual simulation. For example, if a user physically turns their head, the motion would be captured by the simulation hardware in some way and translated to a corresponding shift in view within the simulation. Capture suits and/or gloves may be used to capture movements of users body parts. The systems may have sensors incorporated inside them to sense movements of different body parts (e.g., fingers). 
Alternatively, these systems may have exterior tracking devices or marks that can be detected by external ultrasound, optical receivers or electromagnetic sensors. Internal inertial sensors are also available on some systems. The units may transmit data either wirelessly or through cables. Eye trackers can also be used to detect eye movements so that the system can determine precisely where a user is looking at any given instant. Physical controllers: Physical controllers provide input to the simulation only through direct manipulation by the user. In virtual simulations, tactile feedback from physical controllers is highly desirable in a number of simulation environments. Omnidirectional treadmills can be used to capture the user's locomotion as they walk or run. High-fidelity instrumentation such as instrument panels in virtual aircraft cockpits provides users with actual controls to raise the level of immersion. For example, pilots can use the actual global positioning system controls from the real device in a simulated cockpit to help them practice procedures with the actual device in the context of the integrated cockpit system. Voice/sound recognition: This form of interaction may be used either to interact with agents within the simulation (e.g., virtual people) or to manipulate objects in the simulation (e.g., information). Voice interaction presumably increases the level of immersion for the user. Users may wear headsets with boom microphones or lapel microphones, or the room may be equipped with strategically located microphones. Current research into user input systems Research in future input systems holds a great deal of promise for virtual simulations. Systems such as brain–computer interfaces (BCIs) offer the ability to further increase the level of immersion for virtual simulation users. Lee, Keinrath, Scherer, Bischof, and Pfurtscheller showed that naïve subjects could be trained to use a BCI to navigate a virtual apartment with relative ease. Using the BCI, the authors found that subjects were able to freely navigate the virtual environment with relatively minimal effort. It is possible that these types of systems will become standard input modalities in future virtual simulation systems. Virtual simulation output hardware There is a wide variety of output hardware available to deliver a stimulus to users in virtual simulations. The following list briefly describes several of them: Visual display: Visual displays provide the visual stimulus to the user. Stationary displays can vary from a conventional desktop display to 360-degree wrap-around screens to stereo three-dimensional screens. Conventional desktop displays can vary in size from . Wrap-around screens are typically used in what is known as a cave automatic virtual environment (CAVE). Stereo three-dimensional screens produce three-dimensional images either with or without special glasses, depending on the design. Head-mounted displays (HMDs) have small displays that are mounted on headgear worn by the user. These systems are connected directly into the virtual simulation to provide the user with a more immersive experience. Weight, update rates and field of view are some of the key variables that differentiate HMDs. Naturally, heavier HMDs are undesirable as they cause fatigue over time. If the update rate is too slow, the system is unable to update the displays fast enough to correspond with a quick head turn by the user. Slower update rates tend to cause simulation sickness and disrupt the sense of immersion.
Field of view, or the angular extent of the world that is seen at a given moment, can vary from system to system and has been found to affect the user's sense of immersion. Aural display: Several different types of audio systems exist to help the user hear and localize sounds spatially. Special software can be used to produce 3D audio effects to create the illusion that sound sources are placed within a defined three-dimensional space around the user. Stationary conventional speaker systems may be used to provide dual or multi-channel surround sound. However, external speakers are not as effective as headphones in producing 3D audio effects. Conventional headphones offer a portable alternative to stationary speakers. They also have the added advantages of masking real-world noise and facilitating more effective 3D audio effects. Haptic display: These displays provide a sense of touch to the user (haptic technology). This type of output is sometimes referred to as force feedback. Tactile displays use different types of actuators such as inflatable bladders, vibrators, low-frequency sub-woofers, pin actuators and/or thermo-actuators to produce sensations for the user. End effector displays can respond to users' inputs with resistance and force. These systems are often used in medical applications for remote surgeries that employ robotic instruments. Vestibular display: These displays provide a sense of motion to the user (motion simulator). They often manifest as motion bases for virtual vehicle simulation such as driving simulators or flight simulators. Motion bases are fixed in place but use actuators to move the simulator in ways that can produce the sensations of pitching, yawing or rolling. The simulators can also move in such a way as to produce a sense of acceleration on all axes (e.g., the motion base can produce the sensation of falling). Clinical healthcare simulators Clinical healthcare simulators are increasingly being developed and deployed to teach therapeutic and diagnostic procedures as well as medical concepts and decision making to personnel in the health professions. Simulators have been developed for training procedures ranging from basics such as blood draws to laparoscopic surgery and trauma care. They are also important for helping to prototype new devices for biomedical engineering problems. Currently, simulators are applied to research and develop tools for new therapies, treatments and early diagnosis in medicine. Many medical simulators involve a computer connected to a plastic simulation of the relevant anatomy. Sophisticated simulators of this type employ a life-size mannequin that responds to injected drugs and can be programmed to create simulations of life-threatening emergencies. In other simulations, visual components of the procedure are reproduced by computer graphics techniques, while touch-based components are reproduced by haptic feedback devices combined with physical simulation routines computed in response to the user's actions. Medical simulations of this sort will often use 3D CT or MRI scans of patient data to enhance realism. Some medical simulations are developed to be widely distributed (such as web-enabled simulations and procedural simulations that can be viewed via standard web browsers) and can be interacted with using standard computer interfaces, such as the keyboard and mouse.
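The physical simulation routines computed in response to the user's actions, mentioned above, are often, at their simplest, penalty-based force models. The sketch below (Python, with placeholder stiffness and damping values not taken from any real simulator) shows such a routine: when the tracked tool penetrates a virtual tissue surface, a spring-damper force is sent back to the haptic device.

```python
def haptic_force(tool_depth_m, tool_velocity_m_s,
                 stiffness_n_per_m=800.0, damping_n_s_per_m=2.0):
    """Penalty-based haptic rendering: if the virtual tool has penetrated the
    tissue surface, push back with a spring-damper force; otherwise no force.
    Intended to be called once per haptic frame (commonly around 1 kHz)."""
    if tool_depth_m <= 0.0:            # tool is still above the surface
        return 0.0
    spring = stiffness_n_per_m * tool_depth_m
    damper = damping_n_s_per_m * tool_velocity_m_s
    return spring + damper             # force in newtons sent to the device

# Example frames: the user presses up to 3 mm into the virtual tissue.
for depth, velocity in ((-0.001, 0.05), (0.001, 0.05), (0.003, 0.0)):
    print(f"depth {depth * 1000:+.1f} mm -> force {haptic_force(depth, velocity):.2f} N")
```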
Placebo An important medical application of a simulator (although, perhaps, denoting a slightly different meaning of simulator) is the use of a placebo drug, a formulation that simulates the active drug in trials of drug efficacy. Improving patient safety Patient safety is a concern in the medical industry. Patients have been known to suffer injuries and even death due to management errors and failure to use the best standards of care and training. According to Building a National Agenda for Simulation-Based Medical Education (Eder-Van Hook, Jackie, 2004), "a health care provider's ability to react prudently in an unexpected situation is one of the most critical factors in creating a positive outcome in medical emergency, regardless of whether it occurs on the battlefield, freeway, or hospital emergency room." Eder-Van Hook (2004) also noted that medical errors kill up to 98,000 people per year, with an estimated cost of $37 to $50 billion per year for adverse events and $17 to $29 billion for preventable adverse events. Simulation is being used to study patient safety, as well as to train medical professionals. Studying patient safety and safety interventions in healthcare is challenging, because there is a lack of experimental control (i.e., patient complexity, system/process variances) to see if an intervention made a meaningful difference (Groves & Manges, 2017). An example of innovative simulation to study patient safety is from nursing research. Groves et al. (2016) used a high-fidelity simulation to examine nursing safety-oriented behaviors during times such as change-of-shift report. However, the value of simulation interventions in translating to clinical practice is still debatable. As Nishisaki states, "there is good evidence that simulation training improves provider and team self-efficacy and competence on manikins. There is also good evidence that procedural simulation improves actual operational performance in clinical settings." However, improved evidence is still needed to show the benefit of crew resource management training through simulation. One of the largest challenges is showing that team simulation improves team operational performance at the bedside. Although evidence that simulation-based training actually improves patient outcome has been slow to accrue, today the ability of simulation to provide hands-on experience that translates to the operating room is no longer in doubt. One of the largest factors that might affect the ability of training to change the work of practitioners at the bedside is the ability to empower frontline staff (Stewart, Manges, Ward, 2015). Another example of an attempt to improve patient safety through the use of simulation training is the delivery of patient care training just in time and/or just in place. This training consists of 20 minutes of simulated training just before workers report to shift. One study found that just-in-time training improved the transition to the bedside. The conclusion, as reported in Nishisaki's (2008) work, was that the simulation training improved resident participation in real cases but did not sacrifice the quality of service. It could therefore be hypothesized that, by increasing the number of highly trained residents through the use of simulation training, simulation training does, in fact, increase patient safety.
Since antiquity, these representations in clay and stone were used to demonstrate clinical features of disease states and their effects on humans. Models have been found in many cultures and continents. These models have been used in some cultures (e.g., Chinese culture) as a "diagnostic" instrument, allowing women to consult male physicians while maintaining social laws of modesty. Models are used today to help students learn the anatomy of the musculoskeletal system and organ systems. In 2002, the Society for Simulation in Healthcare (SSH) was formed to become a leader in international interprofessional advances the application of medical simulation in healthcare The need for a "uniform mechanism to educate, evaluate, and certify simulation instructors for the health care profession" was recognized by McGaghie et al. in their critical review of simulation-based medical education research. In 2012 the SSH piloted two new certifications to provide recognition to educators in an effort to meet this need. Type of models Active models Active models that attempt to reproduce living anatomy or physiology are recent developments. The famous "Harvey" mannequin was developed at the University of Miami and is able to recreate many of the physical findings of the cardiology examination, including palpation, auscultation, and electrocardiography. Interactive models More recently, interactive models have been developed that respond to actions taken by a student or physician. Until recently, these simulations were two dimensional computer programs that acted more like a textbook than a patient. Computer simulations have the advantage of allowing a student to make judgments, and also to make errors. The process of iterative learning through assessment, evaluation, decision making, and error correction creates a much stronger learning environment than passive instruction. Computer simulators Simulators have been proposed as an ideal tool for assessment of students for clinical skills. For patients, "cybertherapy" can be used for sessions simulating traumatic experiences, from fear of heights to social anxiety. Programmed patients and simulated clinical situations, including mock disaster drills, have been used extensively for education and evaluation. These "lifelike" simulations are expensive, and lack reproducibility. A fully functional "3Di" simulator would be the most specific tool available for teaching and measurement of clinical skills. Gaming platforms have been applied to create these virtual medical environments to create an interactive method for learning and application of information in a clinical context. Immersive disease state simulations allow a doctor or HCP to experience what a disease actually feels like. Using sensors and transducers symptomatic effects can be delivered to a participant allowing them to experience the patients disease state. Such a simulator meets the goals of an objective and standardized examination for clinical competence. This system is superior to examinations that use "standard patients" because it permits the quantitative measurement of competence, as well as reproducing the same objective findings. Simulation in entertainment Simulation in entertainment encompasses many large and popular industries such as film, television, video games (including serious games) and rides in theme parks. 
Although modern simulation is thought to have its roots in training and the military, in the 20th century it also became a conduit for enterprises which were more hedonistic in nature. History of visual simulation in film and games Early history (1940s and 1950s) The first simulation game may have been created as early as 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann. This was a straightforward game that simulated a missile being fired at a target. The curve of the missile and its speed could be adjusted using several knobs. In 1958, William Higinbotham created a computer game called Tennis for Two, which simulated a tennis match between two players who could play at the same time using hand controls and which was displayed on an oscilloscope. This was one of the first electronic video games to use a graphical display. 1970s and early 1980s Computer-generated imagery was used in film to simulate objects as early as 1972 in A Computer Animated Hand, parts of which were shown on the big screen in the 1976 film Futureworld. This was followed by the "targeting computer" that young Skywalker turns off in the 1977 film Star Wars. The film Tron (1982) was the first film to use computer-generated imagery for more than a couple of minutes. Advances in technology in the 1980s caused 3D simulation to become more widely used and it began to appear in movies and in computer-based games such as Atari's Battlezone (1980) and Acornsoft's Elite (1984), one of the first wire-frame 3D graphics games for home computers. Pre-virtual cinematography era (early 1980s to 1990s) Advances in technology in the 1980s made computers more affordable and more capable than they had been in previous decades, which facilitated the rise of gaming on computers and, later, on consoles such as the Xbox. The first video game consoles released in the 1970s and early 1980s fell prey to the industry crash in 1983, but in 1985, Nintendo released the Nintendo Entertainment System (NES), which became one of the best-selling consoles in video game history. In the 1990s, computer games became widely popular with the release of such games as The Sims and Command & Conquer and the still increasing power of desktop computers. Today, computer simulation games such as World of Warcraft are played by millions of people around the world. In 1993, the film Jurassic Park became the first popular film to use computer-generated graphics extensively, integrating the simulated dinosaurs almost seamlessly into live action scenes. This event transformed the film industry; in 1995, the film Toy Story was the first film to use only computer-generated images, and by the new millennium computer-generated graphics were the leading choice for special effects in films. Virtual cinematography (early 2000s–present) The advent of virtual cinematography in the early 2000s has led to an explosion of movies that would have been impossible to shoot without it. Classic examples are the digital look-alikes of Neo, Smith and other characters in the Matrix sequels and the extensive use of physically impossible camera runs in The Lord of the Rings trilogy. The terminal seen in the TV series Pan Am no longer existed during the filming of the 2011–2012 series, which was no problem, as it was recreated with virtual cinematography using automated viewpoint finding and matching in conjunction with compositing of real and simulated footage, which has been the bread and butter of movie artists in and around film studios since the early 2000s.
Computer-generated imagery is "the application of the field of 3D computer graphics to special effects". This technology is used for visual effects because the results are high in quality, controllable, and able to create effects that would not be feasible using any other technology, either because of cost, resources or safety. Computer-generated graphics can be seen in many live-action movies today, especially those of the action genre. Further, computer-generated imagery has almost completely supplanted hand-drawn animation in children's movies, which are increasingly computer-generated only. Examples of movies that use computer-generated imagery include Finding Nemo, 300 and Iron Man. Examples of non-film entertainment simulation Simulation games Simulation games, as opposed to other genres of video and computer games, represent or simulate an environment accurately. Moreover, they represent the interactions between the playable characters and the environment realistically. These kinds of games are usually more complex in terms of gameplay. Simulation games have become incredibly popular among people of all ages. Popular simulation games include SimCity and Tiger Woods PGA Tour. There are also flight simulator and driving simulator games. Theme park rides Simulators have been used for entertainment since the Link Trainer in the 1930s. The first modern simulator ride to open at a theme park was Disney's Star Tours in 1987, soon followed by Universal's The Funtastic World of Hanna-Barbera in 1990, which was the first ride to be done entirely with computer graphics. Simulator rides are the progeny of military training simulators and commercial simulators, but they are different in a fundamental way. While military training simulators react realistically to the input of the trainee in real time, ride simulators only feel like they move realistically and move according to prerecorded motion scripts. One of the first simulator rides, Star Tours, which cost $32 million, used a hydraulic motion-based cabin. The movement was programmed by a joystick. Today's simulator rides, such as The Amazing Adventures of Spider-Man, include elements to increase the amount of immersion experienced by the riders, such as 3D imagery, physical effects (spraying water or producing scents), and movement through an environment. Simulation and manufacturing Manufacturing simulation represents one of the most important applications of simulation. This technique represents a valuable tool used by engineers when evaluating the effect of capital investment in equipment and physical facilities like factory plants, warehouses, and distribution centers. Simulation can be used to predict the performance of an existing or planned system and to compare alternative solutions for a particular design problem. Another important goal of simulation in manufacturing systems is to quantify system performance. Common measures of system performance include the following: Throughput under average and peak loads System cycle time (how long it takes to produce one part) Use of resources, labor, and machines Bottlenecks and choke points Queuing at work locations Queuing and delays caused by material-handling devices and systems WIP (work-in-process) storage needs Staffing requirements Effectiveness of scheduling systems Effectiveness of control systems More examples of simulation Automobiles An automobile simulator provides an opportunity to reproduce the characteristics of real vehicles in a virtual environment.
It replicates the external factors and conditions with which a vehicle interacts, enabling a driver to feel as if they are sitting in the cab of their own vehicle. Scenarios and events are replicated with sufficient reality to ensure that drivers become fully immersed in the experience rather than simply viewing it as an educational exercise. The simulator provides a constructive experience for the novice driver and enables more complex exercises to be undertaken by the more mature driver. For novice drivers, truck simulators provide an opportunity to begin their career by applying best practice. For mature drivers, simulation provides the ability to enhance good driving or to detect poor practice and to suggest the necessary steps for remedial action. For companies, it provides an opportunity to educate staff in driving skills that achieve reduced maintenance costs and improved productivity and, most importantly, to ensure the safety of their actions in all possible situations. Biomechanics A biomechanics simulator is a simulation platform for creating dynamic mechanical models built from combinations of rigid and deformable bodies, joints, constraints, and various force actuators. It is specialized for creating biomechanical models of human anatomical structures, with the intention of studying their function and eventually assisting in the design and planning of medical treatment. A biomechanics simulator is used to analyze walking dynamics, study sports performance, simulate surgical procedures, analyze joint loads, design medical devices, and animate human and animal movement. A neuromechanical simulator combines biomechanical and biologically realistic neural network simulation. It allows the user to test hypotheses on the neural basis of behavior in a physically accurate 3-D virtual environment. City and urban A city simulator can be a city-building game but can also be a tool used by urban planners to understand how cities are likely to evolve in response to various policy decisions. AnyLogic is an example of a modern, large-scale urban simulator designed for use by urban planners. City simulators are generally agent-based simulations with explicit representations for land use and transportation. UrbanSim and LEAM are examples of large-scale urban simulation models that are used by metropolitan planning agencies and military bases for land use and transportation planning. Christmas Several Christmas-themed simulations exist, many of which are centred around Santa Claus. An example of these simulations is websites that claim to allow the user to track Santa Claus. Because Santa is a legendary character and not a real, living person, it is impossible to provide actual information on his location, and services such as NORAD Tracks Santa and the Google Santa Tracker (the former of which claims to use radar and other technologies to track Santa) display fake, predetermined location information to users. Another example is websites that claim to allow the user to email or send messages to Santa Claus. Websites such as emailSanta.com or Santa's former page on the now-defunct Windows Live Spaces by Microsoft use automated programs or scripts to generate personalized replies, claimed to be from Santa himself, based on user input. Classroom of the future The classroom of the future will probably contain several kinds of simulators, in addition to textual and visual learning tools.
This will allow students to enter the clinical years better prepared, and with a higher skill level. The advanced student or postgraduate will have a more concise and comprehensive method of retraining—or of incorporating new clinical procedures into their skill set—and regulatory bodies and medical institutions will find it easier to assess the proficiency and competency of individuals. The classroom of the future will also form the basis of a clinical skills unit for continuing education of medical personnel; and in the same way that the use of periodic flight training assists airline pilots, this technology will assist practitioners throughout their career. The simulator will be more than a "living" textbook, it will become an integral a part of the practice of medicine. The simulator environment will also provide a standard platform for curriculum development in institutions of medical education. Communication satellites Modern satellite communications systems (SATCOM) are often large and complex with many interacting parts and elements. In addition, the need for broadband connectivity on a moving vehicle has increased dramatically in the past few years for both commercial and military applications. To accurately predict and deliver high quality of service, SATCOM system designers have to factor in terrain as well as atmospheric and meteorological conditions in their planning. To deal with such complexity, system designers and operators increasingly turn towards computer models of their systems to simulate real-world operating conditions and gain insights into usability and requirements prior to final product sign-off. Modeling improves the understanding of the system by enabling the SATCOM system designer or planner to simulate real-world performance by injecting the models with multiple hypothetical atmospheric and environmental conditions. Simulation is often used in the training of civilian and military personnel. This usually occurs when it is prohibitively expensive or simply too dangerous to allow trainees to use the real equipment in the real world. In such situations, they will spend time learning valuable lessons in a "safe" virtual environment yet living a lifelike experience (or at least it is the goal). Often the convenience is to permit mistakes during training for a safety-critical system. Digital lifecycle Simulation solutions are being increasingly integrated with computer-aided solutions and processes (computer-aided design or CAD, computer-aided manufacturing or CAM, computer-aided engineering or CAE, etc.). The use of simulation throughout the product lifecycle, especially at the earlier concept and design stages, has the potential of providing substantial benefits. These benefits range from direct cost issues such as reduced prototyping and shorter time-to-market to better performing products and higher margins. However, for some companies, simulation has not provided the expected benefits. The successful use of simulation, early in the lifecycle, has been largely driven by increased integration of simulation tools with the entire set of CAD, CAM and product-lifecycle management solutions. Simulation solutions can now function across the extended enterprise in a multi-CAD environment, and include solutions for managing simulation data and processes and ensuring that simulation results are made part of the product lifecycle history. Disaster preparedness Simulation training has become a method for preparing people for disasters. 
Simulations can replicate emergency situations and track how learners respond thanks to a lifelike experience. Disaster preparedness simulations can involve training on how to handle terrorism attacks, natural disasters, pandemic outbreaks, or other life-threatening emergencies. One organization that has used simulation training for disaster preparedness is CADE (Center for Advancement of Distance Education). CADE has used a video game to prepare emergency workers for multiple types of attacks. As reported by News-Medical.Net, "The video game is the first in a series of simulations to address bioterrorism, pandemic flu, smallpox, and other disasters that emergency personnel must prepare for." Developed by a team from the University of Illinois at Chicago (UIC), the game allows learners to practice their emergency skills in a safe, controlled environment. The Emergency Simulation Program (ESP) at the British Columbia Institute of Technology (BCIT), Vancouver, British Columbia, Canada is another example of an organization that uses simulation to train for emergency situations. ESP uses simulation to train on the following situations: forest fire fighting, oil or chemical spill response, earthquake response, law enforcement, municipal firefighting, hazardous material handling, military training, and response to terrorist attack One feature of the simulation system is the implementation of "Dynamic Run-Time Clock," which allows simulations to run a 'simulated' time frame, "'speeding up' or 'slowing down' time as desired" Additionally, the system allows session recordings, picture-icon based navigation, file storage of individual simulations, multimedia components, and launch external applications. At the University of Québec in Chicoutimi, a research team at the outdoor research and expertise laboratory (Laboratoire d'Expertise et de Recherche en Plein Air – LERPA) specializes in using wilderness backcountry accident simulations to verify emergency response coordination. Instructionally, the benefits of emergency training through simulations are that learner performance can be tracked through the system. This allows the developer to make adjustments as necessary or alert the educator on topics that may require additional attention. Other advantages are that the learner can be guided or trained on how to respond appropriately before continuing to the next emergency segment—this is an aspect that may not be available in the live environment. Some emergency training simulators also allow for immediate feedback, while other simulations may provide a summary and instruct the learner to engage in the learning topic again. In a live-emergency situation, emergency responders do not have time to waste. Simulation-training in this environment provides an opportunity for learners to gather as much information as they can and practice their knowledge in a safe environment. They can make mistakes without risk of endangering lives and be given the opportunity to correct their errors to prepare for the real-life emergency. Economics Simulations in economics and especially in macroeconomics, judge the desirability of the effects of proposed policy actions, such as fiscal policy changes or monetary policy changes. A mathematical model of the economy, having been fitted to historical economic data, is used as a proxy for the actual economy; proposed values of government spending, taxation, open market operations, etc. 
are used as inputs to the simulation of the model, and various variables of interest such as the inflation rate, the unemployment rate, the balance of trade deficit, the government budget deficit, etc. are the outputs of the simulation. The simulated values of these variables of interest are compared for different proposed policy inputs to determine which set of outcomes is most desirable. Engineering, technology, and processes Simulation is an important feature in engineering systems or any system that involves many processes. For example, in electrical engineering, delay lines may be used to simulate propagation delay and phase shift caused by an actual transmission line. Similarly, dummy loads may be used to simulate impedance without simulating propagation and is used in situations where propagation is unwanted. A simulator may imitate only a few of the operations and functions of the unit it simulates. Contrast with: emulate. Most engineering simulations entail mathematical modeling and computer-assisted investigation. There are many cases, however, where mathematical modeling is not reliable. Simulation of fluid dynamics problems often require both mathematical and physical simulations. In these cases the physical models require dynamic similitude. Physical and chemical simulations have also direct realistic uses, rather than research uses; in chemical engineering, for example, process simulations are used to give the process parameters immediately used for operating chemical plants, such as oil refineries. Simulators are also used for plant operator training. It is called Operator Training Simulator (OTS) and has been widely adopted by many industries from chemical to oil&gas and to the power industry. This created a safe and realistic virtual environment to train board operators and engineers. Mimic is capable of providing high fidelity dynamic models of nearly all chemical plants for operator training and control system testing. Ergonomics Ergonomic simulation involves the analysis of virtual products or manual tasks within a virtual environment. In the engineering process, the aim of ergonomics is to develop and to improve the design of products and work environments. Ergonomic simulation utilizes an anthropometric virtual representation of the human, commonly referenced as a mannequin or Digital Human Models (DHMs), to mimic the postures, mechanical loads, and performance of a human operator in a simulated environment such as an airplane, automobile, or manufacturing facility. DHMs are recognized as evolving and valuable tool for performing proactive ergonomics analysis and design. The simulations employ 3D-graphics and physics-based models to animate the virtual humans. Ergonomics software uses inverse kinematics (IK) capability for posing the DHMs. Software tools typically calculate biomechanical properties including individual muscle forces, joint forces and moments. Most of these tools employ standard ergonomic evaluation methods such as the NIOSH lifting equation and Rapid Upper Limb Assessment (RULA). Some simulations also analyze physiological measures including metabolism, energy expenditure, and fatigue limits Cycle time studies, design and process validation, user comfort, reachability, and line of sight are other human-factors that may be examined in ergonomic simulation packages. Modeling and simulation of a task can be performed by manually manipulating the virtual human in the simulated environment. 
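As an illustration of the kind of calculation such tools perform, the following minimal Python sketch implements the revised NIOSH lifting equation mentioned above. It is illustrative only: the task dimensions are made-up numbers, and the frequency and coupling multipliers, which ordinarily come from look-up tables, are supplied directly as assumed values.

# Minimal sketch of the revised NIOSH lifting equation (metric form) of the
# kind ergonomics tools use to flag risky manual lifts.  The frequency and
# coupling multipliers (fm, cm) normally come from look-up tables; here they
# are passed in as assumed values.
def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Recommended Weight Limit (kg) for a single lift."""
    lc = 23.0                                    # load constant (kg)
    hm = min(1.0, 25.0 / max(h_cm, 25.0))        # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)          # vertical multiplier
    dm = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))  # travel-distance multiplier
    am = 1.0 - 0.0032 * a_deg                    # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

def lifting_index(load_kg, **task):
    """A lifting index above 1 indicates an elevated risk of injury."""
    return load_kg / niosh_rwl(**task)

# Hypothetical task: load held 40 cm out and 30 cm off the floor, lifted
# 100 cm, with a 30-degree trunk twist.
print(lifting_index(15.0, h_cm=40, v_cm=30, d_cm=100, a_deg=30, fm=0.94, cm=0.95))

In a digital-human-model package the same inputs would be derived automatically from the posture of the mannequin rather than typed in by hand.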
Some ergonomics simulation software permits interactive, real-time simulation and evaluation through actual human input via motion capture technologies. However, motion capture for ergonomics requires expensive equipment and the creation of props to represent the environment or product. Some applications of ergonomic simulation include analysis of solid waste collection, disaster management tasks, interactive gaming, automotive assembly lines, virtual prototyping of rehabilitation aids, and aerospace product design. Ford engineers use ergonomics simulation software to perform virtual product design reviews. Using engineering data, the simulations assist evaluation of assembly ergonomics. The company uses Siemens' Jack and Jill ergonomics simulation software to improve worker safety and efficiency, without the need to build expensive prototypes. Finance In finance, computer simulations are often used for scenario planning. Risk-adjusted net present value, for example, is computed from well-defined but not always known (or fixed) inputs. By imitating the performance of the project under evaluation, simulation can provide a distribution of NPV over a range of discount rates and other variables. Simulations are also often used to test a financial theory or the ability of a financial model. Simulations are frequently used in financial training to engage participants in experiencing various historical as well as fictional situations. There are stock market simulations, portfolio simulations, risk management simulations or models and forex simulations. Such simulations are typically based on stochastic asset models. Using these simulations in a training program allows for the application of theory to something akin to real life. As with other industries, the use of simulations can be technology- or case-study-driven. Flight Flight simulation is mainly used to train pilots outside of the aircraft. In comparison to training in flight, simulation-based training allows for practicing maneuvers or situations that may be impractical (or even dangerous) to perform in the aircraft while keeping the pilot and instructor in a relatively low-risk environment on the ground. For example, electrical system failures, instrument failures, hydraulic system failures, and even flight control failures can be simulated without risk to the crew or equipment. Instructors can also provide students with a higher concentration of training tasks in a given period of time than is usually possible in the aircraft. For example, conducting multiple instrument approaches in the actual aircraft may require significant time spent repositioning the aircraft, while in a simulation, as soon as one approach has been completed, the instructor can immediately reposition the simulated aircraft to a location from which the next approach can begin. Flight simulation also provides an economic advantage over training in an actual aircraft. Once fuel, maintenance, and insurance costs are taken into account, the operating costs of an FSTD are usually substantially lower than the operating costs of the simulated aircraft. For some large transport category airplanes, the operating costs may be several times lower for the FSTD than for the actual aircraft. Another advantage is reduced environmental impact, as simulators do not contribute directly to carbon or noise emissions. There also exist "engineering flight simulators", which are a key element of the aircraft design process. 
Many benefits that come from flying fewer test flights, such as cost and safety improvements, are described above, but engineering simulators offer some unique advantages of their own. Having a simulator available allows for a faster design iteration cycle and for using more test equipment than could fit into a real aircraft. Marine Bearing resemblance to flight simulators, a marine simulator is meant for the training of ship personnel. The most common marine simulators include ship's bridge simulators, engine room simulators, cargo handling simulators, communication/GMDSS simulators, and ROV simulators. Simulators like these are mostly used within maritime colleges, training institutions, and navies. They often consist of a replica of a ship's bridge, with the operating console(s), and a number of screens on which the virtual surroundings are projected. Military Military simulations, also known informally as war games, are models in which theories of warfare can be tested and refined without the need for actual hostilities. They exist in many different forms, with varying degrees of realism. In recent times, their scope has widened to include not only military but also political and social factors (for example, the Nationlab series of strategic exercises in Latin America). While many governments make use of simulation, both individually and collaboratively, little is known about the models' specifics outside professional circles. Network and distributed systems Network and distributed systems have been extensively simulated in order to understand the impact of new protocols and algorithms before their deployment in the actual systems. The simulation can focus on different levels (physical layer, network layer, application layer), and evaluate different metrics (network bandwidth, resource consumption, service time, dropped packets, system availability). Examples of simulated network and distributed systems include content delivery networks, smart cities, and the Internet of things. Payment and securities settlement system Simulation techniques have also been applied to payment and securities settlement systems. Among the main users are central banks, which are generally responsible for the oversight of market infrastructure and entitled to contribute to the smooth functioning of the payment systems. Central banks have been using payment system simulations to evaluate matters such as the adequacy of the liquidity available (in the form of account balances and intraday credit limits) to participants (mainly banks) to allow efficient settlement of payments. The need for liquidity also depends on the availability and the type of netting procedures in the systems, so some of the studies focus on system comparisons. Another application is to evaluate risks related to events such as communication network breakdowns or the inability of participants to send payments (e.g. in case of a possible bank failure). This kind of analysis falls under the concepts of stress testing or scenario analysis. A common way to conduct these simulations is to replicate the settlement logic of the real payment or securities settlement systems under analysis and then use real observed payment data. In the case of system comparison or system development, the alternative settlement logic naturally needs to be implemented as well. To perform stress testing and scenario analysis, the observed data needs to be altered, e.g. some payments delayed or removed. To analyze the levels of liquidity, initial liquidity levels are varied. 
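The stress-testing approach just described can be illustrated with a toy settlement loop. The sketch below is a deliberately simplified, assumed model rather than any central bank's actual simulator: payments settle immediately when the sending bank has cover, are otherwise queued and retried in arrival order, and the run is repeated with progressively lower opening balances to show how delays and gridlock emerge as liquidity is withdrawn.

# Toy gross-settlement loop: settle each payment if the sender has cover,
# otherwise queue it and retry later.  Lower opening balances produce more
# retries and, eventually, unsettled payments (gridlock).
from collections import deque

def settle(payments, opening_balance):
    """payments: list of (sender, receiver, amount) in arrival order."""
    balances = {}
    for s, r, _ in payments:
        balances.setdefault(s, opening_balance)
        balances.setdefault(r, opening_balance)
    queue, retries = deque(payments), 0
    for _ in range(len(payments) ** 2):          # crude bound on retry rounds
        if not queue:
            break
        s, r, amount = queue.popleft()
        if balances[s] >= amount:
            balances[s] -= amount
            balances[r] += amount
        else:
            retries += 1
            queue.append((s, r, amount))         # retry once liquidity arrives
    return retries, len(queue)                   # delays seen, still unsettled

flows = [("A", "B", 80), ("B", "C", 60), ("C", "A", 90), ("A", "C", 30)]
for liquidity in (100, 50, 20):                  # vary initial liquidity
    print(liquidity, settle(flows, liquidity))

Real studies replace the invented flows above with observed payment data and far more detailed settlement rules, but the experiment has the same shape: alter the inputs, vary the liquidity, and compare the resulting delay indicators.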
System comparisons (benchmarking) or evaluations of new netting algorithms or rules are performed by running simulations with a fixed set of data and varying only the system setups. Inferences are usually drawn by comparing the benchmark simulation results with the results of altered simulation setups, using indicators such as unsettled transactions or settlement delays. Power systems Project management Project management simulation is simulation used for project management training and analysis. It is often used as a training simulation for project managers. In other cases, it is used for what-if analysis and for supporting decision-making in real projects. Frequently the simulation is conducted using software tools. Robotics A robotics simulator is used to create embedded applications for a specific robot (or for robots in general) without being dependent on the 'real' robot. In some cases, these applications can be transferred to the real robot (or rebuilt) without modifications. Robotics simulators allow reproducing situations that cannot be 'created' in the real world because of cost, time, or the 'uniqueness' of a resource. A simulator also allows fast robot prototyping. Many robot simulators feature physics engines to simulate a robot's dynamics. Production Simulation of production systems is used mainly to examine the effect of improvements or investments in a production system. Most often this is done using a static spreadsheet with process times and transportation times. For more sophisticated analyses, discrete-event simulation (DES) is used, with the advantage of capturing the dynamics of the production system. A production system is highly dynamic, depending on variations in manufacturing processes, assembly times, machine set-ups, breaks, breakdowns and small stoppages. Many software packages are commonly used for discrete-event simulation. They differ in usability and target markets but often share the same foundation. Sales process Simulations are useful in modeling the flow of transactions through business processes, such as in the field of sales process engineering, to study and improve the flow of customer orders through various stages of completion (say, from an initial proposal for providing goods/services through order acceptance and installation). Such simulations can help predict how improvements in methods might impact variability, cost, labor time, and the number of transactions at various stages in the process. A full-featured computerized process simulator can be used to depict such models, as can simpler educational demonstrations using spreadsheet software, pennies being transferred between cups based on the roll of a die, or dipping into a tub of colored beads with a scoop. Sports In sports, computer simulations are often done to predict the outcome of events and the performance of individual sportspeople. They attempt to recreate the event through models built from statistics. Advances in technology have allowed anyone with programming knowledge to run simulations of their own models. The simulations are built from a series of mathematical algorithms, or models, and can vary in accuracy. Accuscore, which is licensed by companies such as ESPN, is a well-known simulation program for all major sports. It offers a detailed analysis of games through simulated betting lines, projected point totals and overall probabilities. With the increased interest in fantasy sports, simulation models that predict individual player performance have gained popularity. 
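A minimal sketch of this outcome-prediction approach is given below: each team's score is drawn from a Poisson model built from assumed historical scoring averages, and the win probability is estimated as the fraction of simulated games won. The averages are illustrative inputs, not figures from any real data set.

# Monte Carlo match prediction: draw each team's score from a Poisson model
# fitted to (assumed) historical averages and count the simulated outcomes.
import numpy as np

def simulate_match(avg_home, avg_away, n=100_000, seed=1):
    rng = np.random.default_rng(seed)
    home = rng.poisson(avg_home, n)              # simulated home-team scores
    away = rng.poisson(avg_away, n)              # simulated away-team scores
    return (home > away).mean(), (home == away).mean(), (home < away).mean()

p_home, p_draw, p_away = simulate_match(avg_home=1.8, avg_away=1.1)
print(f"home {p_home:.1%}, draw {p_draw:.1%}, away {p_away:.1%}")

Commercial systems differ mainly in how much detail goes into the model of each play or possession, not in this basic simulate-and-count structure.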
Companies like What If Sports and StatFox specialize in not only using their simulations for predicting game results but how well individual players will do as well. Many people use models to determine whom to start in their fantasy leagues. Another way simulations are helping the sports field is in the use of biomechanics. Models are derived and simulations are run from data received from sensors attached to athletes and video equipment. Sports biomechanics aided by simulation models answer questions regarding training techniques such as the effect of fatigue on throwing performance (height of throw) and biomechanical factors of the upper limbs (reactive strength index; hand contact time). Computer simulations allow their users to take models which before were too complex to run, and give them answers. Simulations have proven to be some of the best insights into both play performance and team predictability. Space shuttle countdown Simulation was used at Kennedy Space Center (KSC) to train and certify Space Shuttle engineers during simulated launch countdown operations. The Space Shuttle engineering community would participate in a launch countdown integrated simulation before each Shuttle flight. This simulation is a virtual simulation where real people interact with simulated Space Shuttle vehicle and Ground Support Equipment (GSE) hardware. The Shuttle Final Countdown Phase Simulation, also known as S0044, involved countdown processes that would integrate many of the Space Shuttle vehicle and GSE systems. Some of the Shuttle systems integrated in the simulation are the main propulsion system, RS-25, solid rocket boosters, ground liquid hydrogen and liquid oxygen, external tank, flight controls, navigation, and avionics. The high-level objectives of the Shuttle Final Countdown Phase Simulation are: To demonstrate firing room final countdown phase operations. To provide training for system engineers in recognizing, reporting and evaluating system problems in a time critical environment. To exercise the launch team's ability to evaluate, prioritize and respond to problems in an integrated manner within a time critical environment. To provide procedures to be used in performing failure/recovery testing of the operations performed in the final countdown phase. The Shuttle Final Countdown Phase Simulation took place at the Kennedy Space Center Launch Control Center firing rooms. The firing room used during the simulation is the same control room where real launch countdown operations are executed. As a result, equipment used for real launch countdown operations is engaged. Command and control computers, application software, engineering plotting and trending tools, launch countdown procedure documents, launch commit criteria documents, hardware requirement documents, and any other items used by the engineering launch countdown teams during real launch countdown operations are used during the simulation. The Space Shuttle vehicle hardware and related GSE hardware is simulated by mathematical models (written in Shuttle Ground Operations Simulator (SGOS) modeling language) that behave and react like real hardware. During the Shuttle Final Countdown Phase Simulation, engineers command and control hardware via real application software executing in the control consoles – just as if they were commanding real vehicle hardware. However, these real software applications do not interface with real Shuttle hardware during simulations. 
Instead, the applications interface with mathematical model representations of the vehicle and GSE hardware. Consequently, the simulations bypass sensitive and even dangerous mechanisms while providing engineering measurements detailing how the hardware would have reacted. Since these math models interact with the command and control application software, models and simulations are also used to debug and verify the functionality of application software. Satellite navigation The only true way to test GNSS receivers (commonly known as sat-navs in the commercial world) is by using an RF constellation simulator. A receiver that may, for example, be used on an aircraft can be tested under dynamic conditions without the need to take it on a real flight. The test conditions can be repeated exactly, and there is full control over all the test parameters; this is not possible in the 'real world' using the actual signals. For testing receivers that will use the new Galileo satellite navigation system there is no alternative, as the real signals do not yet exist. Trains Weather Predicting weather conditions by extrapolating and interpolating previous data is one of the real-world uses of simulation. Most of the weather forecasts published by weather bureaus use this kind of information. Such simulations help in predicting and giving early warning of extreme weather conditions, such as the path of an active hurricane or cyclone. Numerical weather prediction involves complex computer models that take many parameters into account in order to predict the weather accurately. Simulation games Strategy games—both traditional and modern—may be viewed as simulations of abstracted decision-making for the purpose of training military and political leaders (see History of Go for an example of such a tradition, or Kriegsspiel for a more recent example). Many other video games are simulators of some kind. Such games can simulate various aspects of reality, from business, to government, to construction, to piloting vehicles (see above). Historical usage Historically, the word had negative connotations, being associated with dissembling. However, the connection between simulation and dissembling later faded out and is now only of linguistic interest.
Technology
General
null
43476
https://en.wikipedia.org/wiki/Operations%20research
Operations research
Operations research (U.S. Air Force Specialty Code: Operations Analysis), often shortened to the initialism OR, is a branch of applied mathematics that deals with the development and application of analytical methods to improve decision-making. Although the term management science is sometimes used similarly, the two fields differ in their scope and emphasis. Employing techniques from other mathematical sciences, such as modeling, statistics, and optimization, operations research arrives at optimal or near-optimal solutions to decision-making problems. Because of its emphasis on practical applications, operations research has overlapped with many other disciplines, notably industrial engineering. Operations research is often concerned with determining the extreme values of some real-world objective: the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost). Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. Overview Operations research (OR) encompasses the development and the use of a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, ordinal priority approach, neural networks, expert systems, decision analysis, and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power, or develop a new technique specific to the problem at hand (and, afterwards, to that type of problem). The major sub-disciplines in modern operational research, as identified by the journal Operations Research and The Journal of the Operational Research Society, include: computing and information technologies; financial engineering; manufacturing, service sciences, and supply chain management; policy modeling and public sector work; revenue management; simulation; stochastic models; transportation theory; game theory (for strategy analysis); linear programming; nonlinear programming; integer programming (notably 0-1 integer linear programming, an NP-complete class of problems); dynamic programming (used in aerospace engineering and economics); information theory (used in cryptography and quantum computing); and quadratic programming (the optimization of quadratic functions). History In the decades after the two world wars, the tools of operations research were more widely applied to problems in business, industry, and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize sometimes complex systems, and has become an area of active academic and industrial research. 
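As a small illustration of one technique from the list above, the sketch below analyses a single-server queue (the M/M/1 model of queueing theory) in two ways: with the closed-form formula for the mean waiting time and with a simple stochastic simulation based on Lindley's recursion. The arrival and service rates are illustrative values.

# M/M/1 queue: compare the analytical mean waiting time in queue,
# Wq = rho / (mu - lambda), with a direct stochastic simulation.
import random

lam, mu, n = 0.8, 1.0, 200_000        # arrival rate, service rate, customers
rho = lam / mu
wq_theory = rho / (mu - lam)          # mean waiting time in queue (theory)

random.seed(42)
wait, total = 0.0, 0.0
for _ in range(n):
    total += wait
    service = random.expovariate(mu)
    gap = random.expovariate(lam)                 # time until next arrival
    wait = max(0.0, wait + service - gap)         # Lindley's recursion

print(f"theory Wq = {wq_theory:.3f}, simulated Wq = {total / n:.3f}")

Both numbers come out close to 4 time units here, which is the point of such models: a few lines of analysis or simulation quantify how congestion grows as utilisation approaches 100 percent.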
Historical origins In the 17th century, mathematicians Blaise Pascal and Christiaan Huygens solved problems involving sometimes complex decisions (problem of points) by using game-theoretic ideas and expected values; others, such as Pierre de Fermat and Jacob Bernoulli, solved these types of problems using combinatorial reasoning instead. Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge. Beginning in the 20th century, study of inventory management could be considered the origin of modern operations research with economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I (convoy theory and Lanchester's laws). Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these to the social sciences. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe and Robert Watson-Watt. Rowe conceived the idea as a means to analyse and improve the working of the UK's early-warning radar system, code-named "Chain Home" (CH). Initially, Rowe analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken. Scientists in the United Kingdom (including Patrick Blackett (later Lord Blackett OM PRS), Cecil Gordon, Solly Zuckerman, (later Baron Zuckerman OM, KCB, FRS), C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson), and in the United States (George Dantzig) looked for ways to make better decisions in such areas as logistics and training schedules. Second World War The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included operational analysis (UK Ministry of Defence from 1962) and quantitative management. During the Second World War close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. Early in the war while working for the Royal Aircraft Establishment (RAE) he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an average of over 20,000 at the start of the Battle of Britain to 4,000 in 1941. In 1941, Blackett moved from the RAE to the Navy, after first working with RAF Coastal Command, in 1941 and then early in 1942 to the Admiralty. Blackett's team at Coastal Command's Operational Research Section (CC-ORS) included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of crucial analyses that aided the war effort. 
Britain introduced the convoy system to reduce shipping losses, but while the principle of using warships to accompany merchant ships was generally accepted, it was unclear whether it was better for convoys to be small or large. Convoys travel at the speed of the slowest member, so small convoys can travel faster. It was also argued that small convoys would be harder for German U-boats to detect. On the other hand, large convoys could deploy more warships against an attacker. Blackett's staff showed that the losses suffered by convoys depended largely on the number of escort vessels present, rather than the size of the convoy. Their conclusion was that a few large convoys are more defensible than many small ones. While performing an analysis of the methods used by RAF Coastal Command to hunt and destroy submarines, one of the analysts asked what colour the aircraft were. As most of them were from Bomber Command they were painted black for night-time operations. At the suggestion of CC-ORS a test was run to see if that was the best colour to camouflage the aircraft for daytime operations in the grey North Atlantic skies. Tests showed that aircraft painted white were on average not spotted until they were 20% closer than those painted black. This change indicated that 30% more submarines would be attacked and sunk for the same number of sightings. As a result of these findings Coastal Command changed their aircraft to using white undersurfaces. Other work by the CC-ORS indicated that on average if the trigger depth of aerial-delivered depth charges were changed from 100 to 25 feet, the kill ratios would go up. The reason was that if a U-boat saw an aircraft only shortly before it arrived over the target then at 100 feet the charges would do no damage (because the U-boat wouldn't have had time to descend as far as 100 feet), and if it saw the aircraft a long way from the target it had time to alter course under water so the chances of it being within the 20-foot kill zone of the charges was small. It was more efficient to attack those submarines close to the surface when the targets' locations were better known than to attempt their destruction at greater depths when their positions could only be guessed. Before the change of settings from 100 to 25 feet, 1% of submerged U-boats were sunk and 14% damaged. After the change, 7% were sunk and 11% damaged; if submarines were caught on the surface but had time to submerge just before being attacked, the numbers rose to 11% sunk and 15% damaged. Blackett observed "there can be few cases where such a great operational gain had been obtained by such a small and simple change of tactics". Bomber Command's Operational Research Section (BC-ORS), analyzed a report of a survey carried out by RAF Bomber Command. For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defenses was noted and the recommendation was given that armor be added in the most heavily damaged areas. This recommendation was not adopted because the fact that the aircraft were able to return with these areas damaged indicated the areas were not vital, and adding armor to non-vital areas where damage is acceptable reduces aircraft performance. Their suggestion to remove some of the crew so that an aircraft loss would result in fewer personnel losses, was also rejected by RAF command. 
Blackett's team made the logical recommendation that the armor be placed in the areas which were completely untouched by damage in the bombers who returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain. The areas untouched in returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft. This story has been disputed, with a similar damage assessment study completed in the US by the Statistical Research Group at Columbia University, the result of work done by Abraham Wald. When Germany organized its air defences into the Kammhuber Line, it was realized by the British that if the RAF bombers were to fly in a bomber stream they could overwhelm the night fighters who flew in individual cells directed to their targets by ground controllers. It was then a matter of calculating the statistical loss from collisions against the statistical loss from night fighters to calculate how close the bombers should fly to minimize RAF losses. The "exchange rate" ratio of output to input was a characteristic feature of operational research. By comparing the number of flying hours put in by Allied aircraft to the number of U-boat sightings in a given area, it was possible to redistribute aircraft to more productive patrol areas. Comparison of exchange rates established "effectiveness ratios" useful in planning. The ratio of 60 mines laid per ship sunk was common to several campaigns: German mines in British ports, British mines on German routes, and United States mines in Japanese routes. Operational research doubled the on-target bomb rate of B-29s bombing Japan from the Marianas Islands by increasing the training ratio from 4 to 10 percent of flying hours; revealed that wolf-packs of three United States submarines were the most effective number to enable all members of the pack to engage targets discovered on their individual patrol stations; revealed that glossy enamel paint was more effective camouflage for night fighters than conventional dull camouflage paint finish, and a smooth paint finish increased airspeed by reducing skin friction. On land, the operational research sections of the Army Operational Research Group (AORG) of the Ministry of Supply (MoS) were landed in Normandy in 1944, and they followed British forces in the advance across Europe. They analyzed, among other topics, the effectiveness of artillery, aerial bombing and anti-tank shooting. After World War II In 1947, under the auspices of the British Association, a symposium was organized in Dundee. In his opening address, Watson-Watt offered a definition of the aims of OR: "To examine quantitatively whether the user organization is getting from the operation of its equipment the best attainable contribution to its overall objective." With expanded techniques and growing awareness of the field at the close of the war, operational research was no longer limited to only operational, but was extended to encompass equipment procurement, training, logistics and infrastructure. Operations research also grew in many areas other than the military once scientists learned to apply its principles to the civilian sector. The development of the simplex algorithm for linear programming was in 1947. 
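To show the kind of problem the simplex method was devised for, the sketch below solves a small, made-up product-mix linear program with SciPy's linprog routine (whose default HiGHS backend includes a dual simplex solver). All coefficients are illustrative.

# A tiny product-mix linear program: choose production quantities x1, x2 to
# maximise profit subject to machine-hour and labour-hour limits.
from scipy.optimize import linprog

c = [-30, -20]            # maximise 30*x1 + 20*x2 by minimising the negative
A_ub = [[2, 1],           # machine hours: 2*x1 + 1*x2 <= 100
        [1, 3]]           # labour hours:  1*x1 + 3*x2 <= 90
b_ub = [100, 90]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal plan (42, 16) with profit 1580

Problems of exactly this form, scaled up to thousands of variables, were among the first civilian applications of the wartime techniques described above.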
In the 1950s, the term Operations Research was used to describe heterogeneous mathematical methods such as game theory, dynamic programming, linear programming, warehousing, spare parts theory, queueing theory, simulation and production control, which were used primarily in civilian industry. Scientific societies and journals on the subject of operations research were founded in the 1950s, such as the Operations Research Society of America (ORSA) in 1952 and The Institute of Management Sciences (TIMS) in 1953. Philip Morse, the head of the Weapons Systems Evaluation Group of the Pentagon, became the first president of ORSA and attracted the companies of the military-industrial complex to ORSA, which soon had more than 500 members. In the 1960s, ORSA reached 8,000 members. Consulting companies also founded OR groups. In 1953, Abraham Charnes and William Cooper published the first textbook on linear programming. In the 1950s and 1960s, chairs of operations research were established in the U.S. and the United Kingdom (from 1964 in Lancaster) in the management faculties of universities. Further influences from the U.S. on the development of operations research in Western Europe can be traced here. The authoritative OR textbooks from the U.S. were published in Germany in German and in France in French (but not in Italian), such as the book by George Dantzig, "Linear Programming" (1963), and the book by C. West Churchman et al., "Introduction to Operations Research" (1957). The latter was also published in Spanish in 1973, opening Operations Research to Latin American readers at the same time. NATO gave important impetus to the spread of Operations Research in Western Europe; NATO headquarters (SHAPE) organised four conferences on OR in the 1950s – the one in 1956 with 120 participants – bringing OR to mainland Europe. Within NATO, OR was also known as "Scientific Advisory" (SA) and was grouped together in the Advisory Group for Aeronautical Research and Development (AGARD). SHAPE and AGARD organized an OR conference in April 1957 in Paris. When France withdrew from the NATO military command structure, the transfer of NATO headquarters from France to Belgium led to the institutionalization of OR in Belgium, where Jacques Drèze founded CORE, the Center for Operations Research and Econometrics, at the Catholic University of Leuven in 1966. With the development of computers over the following three decades, operations research can now solve problems with hundreds of thousands of variables and constraints. Moreover, the large volumes of data required for such problems can be stored and manipulated very efficiently. Much of operations research (known in modern times as 'analytics') relies upon stochastic variables and therefore requires access to truly random numbers. Fortunately, the cybernetics field also required the same level of randomness. The development of increasingly better random number generators has been a boon to both disciplines. Modern applications of operations research include city planning, football strategy, emergency planning, the optimization of all facets of industry and the economy, and, in all likelihood, terrorist attack planning as well as counterterrorist attack planning. More recently, the research approach of operations research, which dates back to the 1950s, has been criticized for amounting to collections of mathematical models that lack an empirical basis of data collection for applications. How to collect data is not presented in the textbooks. 
Because of the lack of data, there are also no computer applications in the textbooks. Problems addressed Critical path analysis or project planning: identifying those processes in a multiple-dependency project which affect the overall duration of the project Floorplanning: designing the layout of equipment in a factory or components on a computer chip to reduce manufacturing time (therefore reducing cost) Network optimization: for instance, setup of telecommunications or power system networks to maintain quality of service during outages Resource allocation problems Facility location Assignment Problems: Assignment problem Generalized assignment problem Quadratic assignment problem Weapon target assignment problem Bayesian search theory: looking for a target Optimal search Routing, such as determining the routes of buses so that as few buses are needed as possible Supply chain management: managing the flow of raw materials and products based on uncertain demand for the finished products Project production activities: managing the flow of work activities in a capital project in response to system variability through operations research tools for variability reduction and buffer allocation using a combination of allocation of capacity, inventory and time Efficient messaging and customer response tactics Automation: automating or integrating robotic systems in human-driven operations processes Globalization: globalizing operations processes in order to take advantage of cheaper materials, labor, land or other productivity inputs Transportation: managing freight transportation and delivery systems (Examples: LTL shipping, intermodal freight transport, travelling salesman problem, driver scheduling problem) Scheduling: Personnel staffing Manufacturing steps Project tasks Network data traffic: these are known as queueing models or queueing systems. Sports events and their television coverage Blending of raw materials in oil refineries Determining optimal prices, in many retail and B2B settings, within the disciplines of pricing science Cutting stock problem: Cutting small items out of bigger ones. Finding the optimal parameter (weights) setting of an algorithm that generates the realisation of a figured bass in Baroque compositions (classical music) by using weighted local cost and transition cost rules Operational research is also used extensively in government where evidence-based policy is used. Management science The field of management science (MS) is known as using operations research models in business. Stafford Beer characterized this in 1967. Like operational research itself, management science is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links with economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to sometimes complex decision problems. Management scientists help businesses to achieve their goals using the scientific methods of operational research. The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. 
Of course, the techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups. Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as designing and developing new and better models of organizational excellence. Related fields Some of the fields that have considerable overlap with Operations Research and Management Science include: Artificial Intelligence Business analytics Computer science Data mining/Data science/Big data Decision analysis Decision intelligence Engineering Financial engineering Forecasting Game theory Geography/Geographic information science Graph theory Industrial engineering Inventory control Logistics Mathematical modeling Mathematical optimization Probability and statistics Project management Policy analysis Queueing theory Simulation Social network/Transportation forecasting models Stochastic processes Supply chain management Systems engineering Applications Applications are abundant such as in airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which it has contributed insights and solutions is vast. It includes: Scheduling (of airlines, trains, buses etc.) Assignment (assigning crew to flights, trains or buses; employees to projects; commitment and dispatch of power generation facilities) Facility location (deciding most appropriate location for new facilities such as warehouses; factories or fire station) Hydraulics & Piping Engineering (managing flow of water from reservoirs) Health Services (information and supply chain management) Game Theory (identifying, understanding; developing strategies adopted by companies) Urban Design Computer Network Engineering (packet routing; timing; analysis) Telecom & Data Communication Engineering (packet routing; timing; analysis) Management is also concerned with so-called soft-operational analysis which concerns methods for strategic planning, strategic decision support, problem structuring methods. In dealing with these sorts of challenges, mathematical modeling and simulation may not be appropriate or may not suffice. Therefore, during the past 30 years, a number of non-quantified modeling methods have been developed. These include: stakeholder based approaches including metagame analysis and drama theory morphological analysis and various forms of influence diagrams cognitive mapping strategic choice robustness analysis Societies and journals Societies The International Federation of Operational Research Societies (IFORS) is an umbrella organization for operational research societies worldwide, representing approximately 50 national societies including those in the US, UK, France, Germany, Italy, Canada, Australia, New Zealand, Philippines, India, Japan and South Africa. For the institutionalization of Operations Research, the foundation of IFORS in 1960 was of decisive importance, which stimulated the foundation of national OR societies in Austria, Switzerland and Germany. IFORS held important international conferences every three years since 1957. The constituent members of IFORS form regional groups, such as that in Europe, the Association of European Operational Research Societies (EURO). 
Other important operational research organizations are the Simulation Interoperability Standards Organization (SISO) and the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC). In 2004, the US-based organization INFORMS began an initiative to market the OR profession better, including a website entitled The Science of Better, which provides an introduction to OR and examples of successful applications of OR to industrial problems. This initiative has been adopted by the Operational Research Society in the UK, including a website entitled Learn About OR. Journals of INFORMS The Institute for Operations Research and the Management Sciences (INFORMS) publishes thirteen scholarly journals about operations research, including the top two journals in their class, according to 2005 Journal Citation Reports. They are: Decision Analysis; Information Systems Research; INFORMS Journal on Computing; INFORMS Transactions on Education (an open access journal); Interfaces; Management Science; Manufacturing & Service Operations Management; Marketing Science; Mathematics of Operations Research; Operations Research; Organization Science; Service Science; and Transportation Science. Other journals These are listed in alphabetical order of their titles. 4OR - A Quarterly Journal of Operations Research: jointly published by the Belgian, French and Italian Operations Research Societies (Springer); Decision Sciences: published by Wiley-Blackwell on behalf of the Decision Sciences Institute; European Journal of Operational Research (EJOR): founded in 1975 and presently by far the largest operational research journal in the world, with around 9,000 pages of published papers per year; in 2004, its total number of citations was the second largest amongst Operational Research and Management Science journals; INFOR Journal: published and sponsored by the Canadian Operational Research Society; Journal of Defense Modeling and Simulation (JDMS): Applications, Methodology, Technology: a quarterly journal devoted to advancing the science of modeling and simulation as it relates to the military and defense; Journal of the Operational Research Society (JORS): an official journal of The OR Society; this is the oldest continuously published journal of OR in the world, published by Taylor & Francis; Military Operations Research (MOR): published by the Military Operations Research Society; Omega - The International Journal of Management Science; Operations Research Letters; Opsearch: official journal of the Operational Research Society of India; OR Insight: a quarterly journal of The OR Society published by Palgrave; Pesquisa Operacional: the official journal of the Brazilian Operations Research Society; Production and Operations Management: the official journal of the Production and Operations Management Society; TOP: the official journal of the Spanish Statistics and Operations Research Society.
Mathematics
Other
null
43487
https://en.wikipedia.org/wiki/Probability%20density%20function
Probability density function
In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. Probability density is the probability per unit length, in other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample. More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and the area under the entire curve is equal to 1. The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables. Example Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on. In this example, the ratio (probability of living during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour−1). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour−1. This quantity 2 hour−1 is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour−1) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. 
For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour−1)×(1 nanosecond) ≈ (using the unit conversion nanoseconds = 1 hour). There is a probability density function f with f(5 hours) = 2 hour−1. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window. Absolutely continuous univariate distributions A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable has density , where is a non-negative Lebesgue-integrable function, if: Hence, if is the cumulative distribution function of , then: and (if is continuous at ) Intuitively, one can think of as being the probability of falling within the infinitesimal interval . Formal definition (This definition may be extended to any probability distribution using the measure-theoretic definition of probability.) A random variable with values in a measurable space (usually with the Borel sets as measurable subsets) has as probability distribution the pushforward measure X∗P on : the density of with respect to a reference measure on is the Radon–Nikodym derivative: That is, f is any measurable function with the property that: for any measurable set Discussion In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof). It is not possible to define a density with reference to an arbitrary measure (e.g. one can not choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere. Further details Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval has probability density for and elsewhere. The standard normal distribution has probability density If a random variable is given and its distribution admits a probability density function , then the expected value of (if the expected value exists) can be calculated as Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point. A distribution has a density function if its cumulative distribution function is absolutely continuous. In this case: is almost everywhere differentiable, and its derivative can be used as probability density: If a probability distribution admits a density, then the probability of every one-point set is zero; the same holds for finite and countable sets. Two probability densities and represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero. In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. 
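Several displayed formulas in this passage did not survive extraction. For reference, the standard textbook statements that the prose describes are reproduced below in conventional notation (this is a reconstruction of the standard results, not of the article's exact symbols):

```latex
% Conventional statements of the relations described in the prose:
%   probability over an interval, cumulative distribution function,
%   density as derivative of the CDF, and the measure-theoretic definition.
\[
  \Pr[a \le X \le b] = \int_a^b f_X(x)\,dx, \qquad
  F_X(x) = \int_{-\infty}^{x} f_X(u)\,du, \qquad
  f_X(x) = \frac{d}{dx} F_X(x).
\]
\[
  f = \frac{dX_{*}P}{d\mu}, \qquad
  \Pr[X \in A] = \int_A f \, d\mu \quad \text{for every measurable set } A.
\]
% Examples mentioned in the text: the uniform density on an interval [a,b],
% the standard normal density, and the expectation computed from a density.
\[
  f(x) = \frac{1}{b-a} \ \text{for } a \le x \le b \ (0 \text{ elsewhere}), \qquad
  \varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}, \qquad
  \operatorname{E}[X] = \int_{-\infty}^{\infty} x\, f(x)\,dx.
\]
```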
This alternate definition is the following: If is an infinitely small number, the probability that is included within the interval is equal to , or: Link between discrete and continuous distributions It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above, it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability each. The density of probability associated with this variable is: More generally, if a discrete variable can take different values among real numbers, then the associated probability density function is: where are the discrete values accessible to the variable and are the probabilities associated with these values. This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability. Families of densities It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by and respectively, giving the family of densities Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring— equals 1). This normalization factor is outside the kernel of the distribution. Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones. Densities associated with multiple variables For continuous random variables , it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the variables, such that, for any domain in the -dimensional space of the values of the variables , the probability that a realisation of the set variables falls inside the domain is If is the cumulative distribution function of the vector , then the joint probability density function can be computed as a partial derivative Marginal densities For , let be the probability density function associated with variable alone. 
This is called the marginal density function, and can be deduced from the probability density associated with the random variables by integrating over all values of the other variables: Independence Continuous random variables admitting a joint density are all independent from each other if Corollary If the joint probability density function of a vector of random variables can be factored into a product of functions of one variable (where each is not necessarily a density) then the variables in the set are all independent from each other, and the marginal probability density function of each of them is given by Example This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call a 2-dimensional random vector of coordinates : the probability to obtain in the quarter plane of positive and is Function of random variables and change of variables in the probability density function If the probability density function of a random variable (or vector) is given as , it is possible (but often not necessary; see below) to calculate the probability density function of some variable . This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape using a known (for instance, uniform) random number generator. It is tempting to think that in order to find the expected value , one must first find the probability density of the new random variable . However, rather than computing one may find instead The values of the two integrals are the same in all cases in which both and actually have probability density functions. It is not necessary that be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician. Scalar to scalar Let be a monotonic function, then the resulting density function is Here denotes the inverse function. This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is, or For functions that are not monotonic, the probability density function for is where is the number of solutions in for the equation , and are these solutions. Vector to vector Suppose is an -dimensional random variable with joint density . If , where is a bijective, differentiable function, then has density : with the differential regarded as the Jacobian of the inverse of , evaluated at . For example, in the 2-dimensional case , suppose the transform is given as , with inverses , . The joint distribution for y = (y1, y2) has density Vector to scalar Let be a differentiable function and be a random vector taking values in , be the probability density function of and be the Dirac delta function. It is possible to use the formulas above to determine , the probability density function of , which will be given by This result leads to the law of the unconscious statistician: Proof: Let be a collapsed random variable with probability density function (i.e., a constant equal to zero). Let the random vector and the transform be defined as It is clear that is a bijective mapping, and the Jacobian of is given by: which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that which if marginalized over leads to the desired probability density function. 
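The scalar change-of-variables rule can be verified numerically. The sketch below uses distributions chosen purely for illustration (an exponential variable and the monotonic map Y = sqrt(X), neither taken from the article) and compares the transformed density against a Monte Carlo estimate:

```python
import math
import random

random.seed(0)

# Scalar, monotonic change of variables: if Y = g(X) with g monotonic, then
#   f_Y(y) = f_X(g_inv(y)) * |d g_inv(y) / dy|.
# Illustration with distributions chosen here as assumptions (not from the article):
#   X ~ Exponential(1):  f_X(x) = exp(-x) for x >= 0
#   Y = g(X) = sqrt(X), so g_inv(y) = y**2 and |d g_inv/dy| = 2*y.

def f_X(x):
    return math.exp(-x) if x >= 0 else 0.0

def f_Y(y):
    return f_X(y * y) * 2.0 * y if y >= 0 else 0.0   # change-of-variables formula

# Probability P(Y <= 1) from the transformed density, by midpoint integration.
steps = 10_000
dy = 1.0 / steps
prob_formula = sum(f_Y((i + 0.5) * dy) * dy for i in range(steps))

# The same probability estimated by simulating X and transforming the samples.
n = 200_000
prob_mc = sum(1 for _ in range(n) if math.sqrt(random.expovariate(1.0)) <= 1.0) / n

print(f"formula      P(Y <= 1) ~ {prob_formula:.4f}")   # about 1 - e**-1 ~ 0.6321
print(f"Monte Carlo  P(Y <= 1) ~ {prob_mc:.4f}")
```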
Sums of independent random variables The probability density function of the sum of two independent random variables and , each of which has a probability density function, is the convolution of their separate density functions: It is possible to generalize the previous relation to a sum of N independent random variables, with densities : This can be derived from a two-way change of variables involving and , similarly to the example below for the quotient of independent random variables. Products and quotients of independent random variables Given two independent random variables and , each of which has a probability density function, the density of the product and quotient can be computed by a change of variables. Example: Quotient distribution To compute the quotient of two independent random variables and , define the following transformation: Then, the joint density can be computed by a change of variables from U,V to Y,Z, and can be derived by marginalizing out from the joint density. The inverse transformation is The absolute value of the Jacobian matrix determinant of this transformation is: Thus: And the distribution of can be computed by marginalizing out : This method crucially requires that the transformation from U,V to Y,Z be bijective. The above transformation meets this because can be mapped directly back to , and for a given the quotient is monotonic. This is similarly the case for the sum , difference and product . Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables. Example: Quotient of two standard normals Given two standard normal variables and , the quotient can be computed as follows. First, the variables have the following density functions: We transform as described above: This leads to: This is the density of a standard Cauchy distribution.
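The closing claim, that the quotient of two independent standard normal variables follows a standard Cauchy distribution, is easy to check by simulation. The following is an illustrative sketch, not part of the original article:

```python
import math
import random

random.seed(1)

# The passage states that Z = X/Y, the quotient of two independent standard
# normal variables, has a standard Cauchy distribution. Illustrative check
# by simulation:
n = 200_000
ratios = [random.gauss(0.0, 1.0) / random.gauss(0.0, 1.0) for _ in range(n)]

def cauchy_cdf(z):
    # Standard Cauchy cumulative distribution function.
    return 0.5 + math.atan(z) / math.pi

for z in (-2.0, -0.5, 0.0, 1.0, 3.0):
    empirical = sum(1 for r in ratios if r <= z) / n
    print(f"z = {z:+.1f}  empirical P(Z <= z) = {empirical:.4f}  Cauchy CDF = {cauchy_cdf(z):.4f}")
```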
Mathematics
Statistics and probability
null
43490
https://en.wikipedia.org/wiki/Windmill
Windmill
A windmill is a structure that converts wind power into rotational energy using vanes called sails or blades, by tradition specifically to mill grain (gristmills), but in some parts of the English-speaking world, the term has also been extended to encompass windpumps, wind turbines, and other applications. The term wind engine is also sometimes used to describe such devices. Windmills were used throughout the high medieval and early modern periods; the horizontal or panemone windmill first appeared in Persia during the 9th century, and the vertical windmill first appeared in northwestern Europe in the 12th century. Regarded as an icon of Dutch culture, there are approximately 1,000 windmills in the Netherlands today. Forerunners Wind-powered machines may have been known earlier, but there is no clear evidence of windmills before the 9th century. Hero of Alexandria (Heron) in first-century Roman Egypt described what appears to be a wind-driven wheel to power a machine. His description of a wind-powered organ is not a practical windmill but was either an early wind-powered toy or a design concept for a wind-powered machine that may or may not have been a working device, as there is ambiguity in the text and issues with the design. Another early example of a wind-driven wheel was the prayer wheel, which is believed to have been first used in Tibet and China, though there is uncertainty over the date of its first appearance, which could have been either , the 7th century, or after the 9th century. One of the earliest recorded working windmill designs found was invented sometime around 700–900 AD in Persia. This design was the panemone, with vertical lightweight wooden sails attached by horizontal struts to a central vertical shaft. It was first built to pump water and subsequently modified to grind grain as well. Horizontal windmills The first practical windmills were panemone windmills, using sails that rotated in a horizontal plane, around a vertical axis. Made of six to 12 sails covered in reed matting or cloth material, these windmills were used to grind grain or draw up water. A medieval account reports that windmill technology was used in Persia and the Middle East during the reign of Rashidun caliph Umar ibn al-Khattab (), based on the caliph's conversation with a Persian builder slave. The authenticity of part of the anecdote involving the caliph Umar is questioned because it was recorded only in the 10th century. The Persian geographer Estakhri reported windmills being operated in Khorasan (Eastern Iran and Western Afghanistan) already in the 9th century. Such windmills were in widespread use across the Middle East and Central Asia and later spread to Europe, China, and India from there. By the 11th century, the vertical-axle windmill had reached parts of Southern Europe, including the Iberian Peninsula (via Al-Andalus) and the Aegean Sea (in the Balkans). A similar type of horizontal windmill with rectangular blades, used for irrigation, can also be found in thirteenth-century China (during the Jurchen Jin dynasty in the north), introduced by the travels of Yelü Chucai to Turkestan in 1219. Vertical-axle windmills were built, in small numbers, in Europe during the 18th and nineteenth centuries, for example Fowler's Mill at Battersea in London, and Hooper's Mill at Margate in Kent. These early modern examples seem not to have been directly influenced by the vertical-axle windmills of the medieval period, but to have been independent inventions by 18th-century engineers. 
Vertical windmills The horizontal-axis or vertical windmill (so called due to the plane of the movement of its sails) is a development of the 12th century, first used in northwestern Europe, in the triangle of northern France, eastern England and Flanders. It is unclear whether the vertical windmill was influenced by the introduction of the horizontal windmill from Persia-Middle East to Southern Europe in the preceding century. The earliest certain reference to a windmill in Northern Europe (assumed to have been of the vertical type) dates from 1185, in the former village of Weedley in Yorkshire which was located at the southern tip of the Wold overlooking the Humber Estuary. Several earlier, but less certainly dated, 12th-century European sources referring to windmills have also been found. These earliest mills were used to grind cereals. Post mill The evidence at present is that the earliest type of European windmill was the post mill, so named because of the large upright post on which the mill's main structure (the "body" or "buck") is balanced. By mounting the body this way, the mill can rotate to face the wind direction; an essential requirement for windmills to operate economically in north-western Europe, where wind directions are variable. The body contains all the milling machinery. The first post mills were of the sunken type, where the post was buried in an earth mound to support it. Later, a wooden support was developed called the trestle. This was often covered over or surrounded by a roundhouse to protect the trestle from the weather and to provide storage space. This type of windmill was the most common in Europe until the 19th century when more powerful tower and smock mills replaced them. Hollow-post mill In a hollow-post mill, the post on which the body is mounted is hollowed out, to accommodate the drive shaft. This makes it possible to drive machinery below or outside the body while still being able to rotate the body into the wind. Hollow-post mills driving scoop wheels were used in the Netherlands to drain wetlands since the early 15th century onwards. Tower mill By the end of the 13th century, the masonry tower mill, on which only the cap is rotated rather than the whole body of the mill, had been introduced. The spread of tower mills came with a growing economy that called for larger and more stable sources of power, though they were more expensive to build. In contrast to the post mill, only the cap of the tower mill needs to be turned into the wind, so the main structure can be made much taller, allowing the sails to be made longer, which enables them to provide useful work even in low winds. The cap can be turned into the wind either by winches or gearing inside the cap or from a winch on the tail pole outside the mill. A method of keeping the cap and sails into the wind automatically is by using a fantail, a small windmill mounted at right angles to the sails, at the rear of the windmill. These are also fitted to tail poles of post mills and are common in Great Britain and English-speaking countries of the former British Empire, Denmark, and Germany but rare in other places. Around some parts of the Mediterranean Sea, tower mills with fixed caps were built because the wind's direction varied little most of the time. Smock mill The smock mill is a later development of the tower mill, where the masonry tower is replaced by a wooden framework, called the "smock", which is thatched, boarded, or covered by other materials, such as slate, sheet metal, or tar paper. 
The smock is commonly of octagonal plan, though there are examples with different numbers of sides. Smock windmills were introduced by the Dutch in the 17th century to overcome the limitations of tower windmills, which were expensive to build and could not be erected on wet surfaces. The lower half of the smock windmill was made of brick, while the upper half was made of wood, with a sloping tower shape that added structural strength to the design. This made them lightweight and able to be erected on unstable ground. The smock windmill design included a small turbine in the back that helped the main mill to face the direction of the wind. Mechanics Sails Common sails consist of a lattice framework on which the sailcloth is spread. The miller can adjust the amount of cloth spread according to the wind and the power needed. In medieval mills, the sailcloth was wound in and out of a ladder-type arrangement of sails. Later mill sails had a lattice framework over which the sailcloth was spread, while in colder climates, the cloth was replaced by wooden slats, which were easier to handle in freezing conditions. The jib sail is commonly found in Mediterranean countries and consists of a simple triangle of cloth wound round a spar. In all cases, the mill needs to be stopped to adjust the sails. Inventions in Great Britain in the late eighteenth and nineteenth centuries led to sails that automatically adjust to the wind speed without the need for the miller to intervene, culminating in patent sails invented by William Cubitt in 1807. In these sails, the cloth is replaced by a mechanism of connected shutters. In France, Pierre-Théophile Berton invented a system consisting of longitudinal wooden slats connected by a mechanism that lets the miller open them while the mill is turning. In the twentieth century, increased knowledge of aerodynamics from the development of the airplane led to further improvements in efficiency by German engineer Bilau and several Dutch millwrights. The majority of windmills have four sails. Multiple-sailed mills, with five, six, or eight sails, were built in Great Britain (especially in and around the counties of Lincolnshire and Yorkshire), Germany, and less commonly elsewhere. Earlier multiple-sailed mills are found in Spain, Portugal, Greece, parts of Romania, Bulgaria, and Russia. A mill with an even number of sails has the advantage of being able to run with a damaged sail by removing both the damaged sail and the one opposite, which does not unbalance the mill. In the Netherlands, the stationary position of the sails, i.e. when the mill is not working, has long been used to give signals. If the blades are stopped in a "+" sign (3-6-9-12 o'clock), the windmill is open for business. When the blades are stopped in an "X" configuration, the windmill is closed or not functional. A slight tilt of the sails (top blade at 1 o'clock) signals joy, such as the birth of a healthy baby. A tilt of the blades to 11-2-5-8 o'clock signals mourning, or warning. It was used to signal the local region during Nazi operations in World War II, such as searches for Jews. Across the Netherlands, windmills were placed in mourning positions in honor of the Dutch victims of the 2014 Malaysian Airlines Flight 17 shootdown. Machinery Gears inside a windmill convey power from the rotary motion of the sails to a mechanical device. The sails are carried on the horizontal windshaft. 
Windshafts can be wholly made of wood, wood with a cast iron pole end (where the sails are mounted), or entirely of cast iron. The brake wheel is fitted onto the windshaft between the front and rear bearings. It has the brake around the outside of the rim and teeth in the side of the rim which drives the horizontal gearwheel called wallower on the top end of the vertical upright shaft. In grist mills, the great spur wheel, lower down the upright shaft, drives one or more stone nuts on the shafts driving each millstone. Post mills sometimes have a head and/or tail wheel driving the stone nuts directly, instead of the spur gear arrangement. Additional gear wheels drive a sack hoist or other machinery. The machinery differs if the windmill is used for other applications than milling grain. A drainage mill uses another set of gear wheels on the bottom end of the upright shaft to drive a scoop wheel or Archimedes' screw. Sawmills uses a crankshaft to provide a reciprocating motion to the saws. Windmills have been used to power many other industrial processes, including papermills, threshing mills, and to process oil seeds, wool, paints, and stone products. Spread and decline In the 14th century, windmills became popular in Europe; the total number of wind-powered mills is estimated to have been around 200,000 at the peak in 1850, which is close to half of the some 500,000 water wheels. Windmills were applied in regions where there was too little water, where rivers freeze in winter and in flat lands where the flow of the river was too slow to provide the required power. With the coming of the Industrial Revolution, the importance of wind and water as primary industrial energy sources declined, and they were eventually replaced by steam (in steam mills) and internal combustion engines, although windmills continued to be built in large numbers until late in the nineteenth century. More recently, windmills have been preserved for their historic value, in some cases as static exhibits when the antique machinery is too fragile to be put in motion, and other cases as fully working mills. Of the 10,000 windmills in use in the Netherlands around 1850, about 1,000 are still standing. Most of these are being run by volunteers, though some grist mills are still operating commercially. Many of the drainage mills have been appointed as a backup to the modern pumping stations. The Zaan district has been said to have been the first industrialized region of the world with around 600 operating wind-powered industries by the end of the eighteenth century. Economic fluctuations and the industrial revolution had a much greater impact on these industries than on grain and drainage mills, so only very few are left. Construction of mills spread to the Cape Colony in the seventeenth century. The early tower mills did not survive the gales of the Cape Peninsula, so in 1717 the Heeren XVII sent carpenters, masons, and materials to construct a durable mill. The mill, completed in 1718, became known as the Oude Molen and was located between Pinelands Station and the Black River. Long since demolished, its name lives on as that of a Technical school in Pinelands. By 1863, Cape Town had 11 mills stretching from Paarden Eiland to Mowbray. Specialized windmills Wind turbines A wind turbine is a windmill-like structure specifically developed to generate electricity. They can be seen as the next step in the development of the windmill. 
The first wind turbines were built by the end of the nineteenth century by James Blyth in Scotland (1887), Charles F. Brush in Cleveland, Ohio (1887–1888) and Poul la Cour in Denmark (1890s). La Cour's mill from 1896 later became the local power of the village of Askov. By 1908, there were 72 wind-driven electric generators in Denmark, ranging from 5 to 25 kW. By the 1930s, windmills were widely used to generate electricity on farms in the United States where distribution systems had not yet been installed, built by companies such as Jacobs Wind, Wincharger, Miller Airlite, Universal Aeroelectric, Paris-Dunn, Airline, and Winpower. The Dunlite Corporation produced turbines for similar locations in Australia. Forerunners of modern horizontal-axis utility-scale wind generators were the WIME-3D in service in Balaklava, USSR, from 1931 until 1942, a 100 kW generator on a tower, the Smith–Putnam wind turbine built in 1941 on the mountain known as Grandpa's Knob in Castleton, Vermont, United States, of 1.25 MW, and the NASA wind turbines developed from 1974 through the mid-1980s. The development of these 13 experimental wind turbines pioneered many of the wind turbine design technologies in use today, including steel tube towers, variable-speed generators, composite blade materials, and partial-span pitch control, as well as aerodynamic, structural, and acoustic engineering design capabilities. The modern wind power industry began in 1979 with the serial production of wind turbines by Danish manufacturers Kuriant, Vestas, Nordtank, and Bonus. These early turbines were small by today's standards, with capacities of 20–30 kW each. Since then, commercial turbines have increased greatly in size, with the Enercon E-126 capable of delivering up to 7 MW, while wind turbine production has expanded to many countries. As the 21st century began, rising concerns over energy security, global warming, and eventual fossil fuel depletion led to an expansion of interest in all available forms of renewable energy. Worldwide, many thousands of wind turbines are now operating, with a total nameplate capacity of 591 GW as of 2018. Materials In an attempt to make wind turbines more efficient and increase their energy output, they are being built bigger, with taller towers and longer blades, and being increasingly deployed in offshore locations. While such changes increase their power output, they subject the components of the windmills to stronger forces and consequently put them at a greater risk of failure. Taller towers and longer blades suffer from higher fatigue, and offshore windfarms are subject to greater forces due to higher wind speeds and accelerated corrosion due to the proximity to seawater. To ensure a long enough lifetime to make the return on the investment viable, the materials for the components must be chosen appropriately. The blade of a wind turbine consists of 4 main elements: the root, spar, aerodynamic fairing, and surfacing. The fairing is composed of two shells (one on the pressure side, and one on the suction side), connected by one or more webs linking the upper and lower shells. The webs connect to the spar laminates, which are enclosed within the skins (surfacing) of the blade, and together, the system of the webs and spars resist the flapwise loading. Flapwise loading, one of the two different types of loading that blades are subject to, is caused by the wind pressure, and edgewise loading (the second type of loading) is caused by the gravitational force and torque load. 
The former loading subjects the spar laminate on the pressure (upwind) side of the blade to cyclic tension-tension loading, while the suction (downwind) side of the blade is subject to cyclic compression-compression loading. Edgewise bending subjects the leading edge to a tensile load and the trailing edge to a compressive load. The remainder of the shell, not supported by the spars or laminated at the leading and trailing edges, is designed as a sandwich structure consisting of multiple layers to prevent elastic buckling. In addition to meeting the stiffness, strength, and toughness requirements determined by the loading, the blade needs to be lightweight, and the weight of the blade scales with the cube of its radius. To determine which materials fit these criteria, a parameter known as the beam merit index is defined as Mb = √E / ρ, where E is the Young's modulus and ρ is the density (a numerical sketch is given below). By this measure the best blade materials are carbon fiber and glass fiber reinforced polymers (CFRP and GFRP). Currently, GFRP materials are chosen for their lower cost, despite the much greater figure of merit of CFRP. Recycling and waste problems with polymer blades When the Vindeby Offshore Wind Farm in Denmark was decommissioned in 2017, 99% of the non-degradable fiberglass from its 33 wind turbine blades was cut up and sent to the Rærup controlled landfill near Aalborg, with considerably larger quantities of fiberglass following in 2020, even though landfilling is the least environmentally friendly way of handling such waste. Scrapped wind turbine blades are set to become a huge waste problem both in Denmark and in the countries to which Denmark, to a greater and greater extent, exports the turbines it produces. "The reason why many blades end up in landfill is that the materials are incredibly difficult to separate from each other, which you have to do if you hope to be able to recycle the fiberglass," says Lykke Margot Ricard, Associate Professor in Innovation and Technological Foresight and education leader for civil engineering in Product Development and Innovation at the University of Southern Denmark (SDU). According to Dakofa, the Danish Competence Center for Waste and Resources, there is nothing specific in the Danish waste order about how to handle discarded fiberglass. Several scrap dealers have told Ingeniøren that they have handled wind turbine blades that were pulverized after being taken to a recycling station. One of them is the recycling company H.J. Hansen, whose product manager stated that the company has transported approximately half of the blades it has received since 2012 to Reno Nord's landfill in Aalborg. He estimates that a total of around 1,000 blades have ended up there, and today up to 99 percent of the blades the company receives end up in a landfill. According to a 2020 estimate by Lykke Margot Ricard (SDU), at least 8,810 tonnes of blade scrap have been disposed of in Denmark since 1996, and the waste problem will grow significantly in the coming years as more and more wind turbines reach the end of their service lives. By her calculations, the waste sector in Denmark will have to receive 46,400 tonnes of fiberglass from wind turbine blades over the next 20–25 years. Likewise, in 2020, 250 tonnes of fiberglass from wind turbine waste was deposited at a landfill at Gerringe, in the middle of the Danish island of Lolland. 
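Referring back to the beam merit index Mb = √E/ρ defined earlier in this section, the short sketch below evaluates it for a few candidate blade materials. The property values are rough, assumed figures used only for illustration; they are not taken from this article:

```python
import math

# Beam merit index M_b = sqrt(E) / rho, as defined in the materials discussion
# above. The property values below are rough, assumed figures chosen only to
# illustrate the comparison; they are not taken from this article.
materials = {
    "GFRP (glass-fiber composite)":  {"E_GPa": 40.0,  "rho_g_cm3": 1.9},
    "CFRP (carbon-fiber composite)": {"E_GPa": 140.0, "rho_g_cm3": 1.6},
    "Structural steel":              {"E_GPa": 200.0, "rho_g_cm3": 7.8},
}

for name, props in materials.items():
    merit = math.sqrt(props["E_GPa"]) / props["rho_g_cm3"]
    print(f"{name:<32} M_b = {merit:.2f}")

# With these figures CFRP scores highest and steel lowest, consistent with the
# text's point that CFRP has the better figure of merit while GFRP is usually
# chosen on cost.
```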
In the United States, worn-out wind turbine blades made of fiberglass go to the handful of landfills that accept them (e.g., in Lake Mills, Iowa; Sioux Falls, South Dakota; Casper). Windpumps Windpumps were used to pump water since at least the 9th century in what is now Afghanistan, Iran, and Pakistan. The use of windpumps became widespread across the Muslim world and later spread to East Asia (China) and South Asia (India). Windmills were later used extensively in Europe, particularly in the Netherlands and the East Anglia area of Great Britain, from the late Middle Ages onwards, to drain land for agricultural or building purposes. The "American windmill", or "wind engine", was invented by Daniel Halladay in 1854 and was used mostly for lifting water from wells. Larger versions were also used for tasks such as sawing wood, chopping hay, and shelling and grinding grain. In early California and some other states, the windmill was part of a self-contained domestic water system which included a hand-dug well and a wooden water tower supporting a redwood tank enclosed by wooden siding known as a tankhouse. During the late 19th century, steel blades and towers replaced wooden construction. At their peak in 1930, an estimated 600,000 units were in use. Firms such as U.S. Wind Engine and Pump Company, Challenge Wind Mill and Feed Mill Company, Appleton Manufacturing Company, Star, Eclipse, Fairbanks-Morse, Dempster Mill Manufacturing Company, and Aermotor became the main suppliers in North and South America. These windpumps are used extensively on farms and ranches in the United States, Canada, Southern Africa, and Australia. They feature a large number of blades, so they turn slowly with considerable torque in low winds and are self-regulating in high winds. A tower-top gearbox and crankshaft convert the rotary motion into reciprocating strokes carried downward through a rod to the pump cylinder below. Such mills pumped water and powered feed mills, sawmills, and agricultural machinery. In Australia, the Griffiths Brothers at Toowoomba manufactured windmills of the American pattern from 1876, with the trade name Southern Cross Windmills in use from 1903. These became an icon of the Australian rural sector by utilizing the water of the Great Artesian Basin. Another well-known maker was Metters Ltd. of Adelaide, Perth and Sydney.
Technology
Energy and fuel
null
43530
https://en.wikipedia.org/wiki/Schist
Schist
Schist ( ) is a medium-grained metamorphic rock showing pronounced schistosity (named for the rock). This means that the rock is composed of mineral grains easily seen with a low-power hand lens, oriented in such a way that the rock is easily split into thin flakes or plates. This texture reflects a high content of platy minerals, such as mica, talc, chlorite, or graphite. These are often interleaved with more granular minerals, such as feldspar or quartz. Schist typically forms during regional metamorphism accompanying the process of mountain building (orogeny) and usually reflects a medium grade of metamorphism. Schist can form from many different kinds of rocks, including sedimentary rocks such as mudstones and igneous rocks such as tuffs. Schist metamorphosed from mudstone is particularly common and is often very rich in mica (a mica schist). Where the type of the original rock (the protolith) is discernible, the schist is usually given a name reflecting its protolith, such as schistose metasandstone. Otherwise, the names of the constituent minerals will be included in the rock name, such as quartz-felspar-biotite schist. Schist bedrock can pose a challenge for civil engineering because of its pronounced planes of weakness. Etymology The word schist is derived ultimately from the Greek word σχίζειν (schízein), meaning "to split", which refers to the ease with which schists can be split along the plane in which the platy minerals lie. Definition Before the mid-19th century, the terms slate, shale and schist were not sharply differentiated by those involved with mining. Geologists define schist as medium-grained metamorphic rock that shows well-developed schistosity. Schistosity is a thin layering of the rock produced by metamorphism (a foliation) that permits the rock to easily be split into flakes or slabs less than thick. The mineral grains in a schist are typically from in size and so are easily seen with a 10× hand lens. Typically, over half the mineral grains in a schist show a preferred orientation. Schists make up one of the three divisions of metamorphic rock by texture, with the other two divisions being gneiss, which has poorly developed schistosity and thicker layering, and granofels, which has no discernible schistosity. Schists are defined by their texture without reference to their composition, and while most are a result of medium-grade metamorphism, they can vary greatly in mineral makeup. However, schistosity normally develops only when the rock contains abundant platy minerals, such as mica or chlorite. Grains of these minerals are strongly oriented in a preferred direction in schist, often also forming very thin parallel layers. The ease with which the rock splits along the aligned grains accounts for the schistosity. Though not a defining characteristic, schists very often contain porphyroblasts (individual crystals of unusual size) of distinctive minerals, such as garnet, staurolite, kyanite, sillimanite, or cordierite. Because schists are a very large class of metamorphic rock, geologists will formally describe a rock as a schist only when the original type of the rock prior to metamorphism (the protolith) is unknown and its mineral content is not yet determined. Otherwise, the modifier schistose will be applied to a more precise type name, such as schistose semipelite (when the rock is known to contain moderate amounts of mica) or a schistose metasandstone (if the protolith is known to have been a sandstone). 
If all that is known is that the protolith was a sedimentary rock, the schist will be described as a paraschist, while if the protolith was an igneous rock, the schist will be described as an orthoschist. Mineral qualifiers are important when naming a schist. For example, a quartz-feldspar-biotite schist is a schist of uncertain protolith that contains biotite mica, feldspar, and quartz in order of apparent decreasing abundance. Lineated schist has a strong linear fabric in a rock which otherwise has well-developed schistosity. Formation Schistosity is developed at elevated temperature when the rock is more strongly compressed in one direction than in other directions (nonhydrostatic stress). Nonhydrostatic stress is characteristic of regional metamorphism where mountain building is taking place (an orogenic belt). The schistosity develops perpendicular to the direction of greatest compression, also called the shortening direction, as platy minerals are rotated or recrystallized into parallel layers. While platy or elongated minerals are most obviously reoriented, even quartz or calcite may take up preferred orientations. At the microscopic level, schistosity is divided into internal schistosity, in which inclusions within porphyroblasts take a preferred orientation, and external schistosity, which is the orientation of grains in the surrounding medium-grained rock. The composition of the rock must permit formation of abundant platy minerals. For example, the clay minerals in mudstone are metamorphosed to mica, producing a mica schist. Early stages of metamorphism convert mudstone to a very fine-grained metamorphic rock called slate, which with further metamorphism becomes fine-grained phyllite. Further recrystallization produces medium-grained mica schist. If the metamorphism proceeds further, the mica schist experiences dehydration reactions that convert platy minerals to granular minerals such as feldspars, decreasing schistosity and turning the rock into a gneiss. Other platy minerals found in schists include chlorite, talc, and graphite. Chlorite schist is typically formed by metamorphism of ultramafic igneous rocks, as is talc schist. Talc schist also forms from metamorphosis of talc-bearing carbonate rocks formed by hydrothermal alteration. Graphite schist is uncommon but can form from metamorphosis of sedimentary beds containing abundant organic carbon. This may be of algal origin. Graphite schist is known to have experienced greenschist facies metamorphism, for example in the northern Andes. Metamorphosis of felsic volcanic rock, such as tuff, can produce quartz-muscovite schist. Engineering considerations In geotechnical engineering a schistosity plane often forms a discontinuity that may have a large influence on the mechanical behavior (strength, deformation, etc.) of rock masses in, for example, tunnel, foundation, or slope construction. A hazard may exist even in undisturbed terrain. On August 17, 1959, a magnitude 7.2 earthquake destabilized a mountain slope near Hebgen Lake, Montana, composed of schist. This caused a massive landslide that killed 26 people camping in the area.
Physical sciences
Petrology
null
43532
https://en.wikipedia.org/wiki/Uraninite
Uraninite
Uraninite, also known as pitchblende, is a radioactive, uranium-rich mineral and ore with a chemical composition that is largely UO2 but because of oxidation typically contains variable proportions of U3O8. Radioactive decay of the uranium causes the mineral to contain oxides of lead and trace amounts of helium. It may also contain thorium and rare-earth elements. Overview Uraninite used to be known as pitchblende (from pitch, because of its black color, and blende, from blenden meaning "to deceive", a term used by German miners to denote minerals whose density suggested metal content, but whose exploitation, at the time they were named, was either unknown or not economically feasible). The mineral has been known since at least the 15th century, from silver mines in the Ore Mountains, on the German/Czech border. The type locality is the historic mining and spa town known as Joachimsthal, the modern-day Jáchymov, on the Czech side of the mountains, where F. E. Brückmann described the mineral in 1772. Pitchblende from the Johanngeorgenstadt deposit in Germany was used by M. Klaproth in 1789 to discover the element uranium. All uraninite minerals contain a small amount of radium as a radioactive decay product of uranium. Marie Curie used pitchblende, processing tons of it herself, as the source material for her isolation of radium in 1910. Uraninite also always contains small amounts of the lead isotopes 206Pb and 207Pb, the end products of the decay series of the uranium isotopes 238U and 235U respectively. Small amounts of helium are also present in uraninite as a result of alpha decay. Helium was first found on Earth in cleveite, an impure radioactive variety of uraninite, after having been discovered spectroscopically in the Sun's atmosphere. The extremely rare elements technetium and promethium can be found in uraninite in very small quantities (about 200 pg/kg and 4 fg/kg respectively), produced by the spontaneous fission of uranium-238. Francium can also be found in uraninite at 1 francium atom for every 1 × 1018 uranium atoms in the ore as a result from the decay of actinium. Occurrence Uraninite is a major ore of uranium. Some of the highest-grade uranium ores in the world were found in the Shinkolobwe mine in the Democratic Republic of the Congo (the initial source for the Manhattan Project) and in the Athabasca Basin in northern Saskatchewan, Canada. Another important source of pitchblende is at Great Bear Lake in the Northwest Territories of Canada, where it is found in large quantities associated with silver. It also occurs in Australia, the Czech Republic, Germany, England, Rwanda, Namibia and South Africa. In the United States, it can be found in the states of Arizona, Colorado, Connecticut, Maine, New Hampshire, New Mexico, North Carolina and Wyoming. The geologist Charles Steen made a fortune on the production of uraninite in his Mi Vida mine in Moab, Utah. Uranium ores from the Ore Mountains (today the border between the Czech Republic and Germany) were an important supply of both the wartime German nuclear program (which failed to produce a bomb) and the Soviet nuclear program. Mining for uranium in the Ore Mountains (under the auspices of SDAG Wismut after the war) ceased after the collapse of the German Democratic Republic. Uranium ore is generally processed close to the mine into yellowcake, which is an intermediate step in the processing of uranium.
Physical sciences
Minerals
Earth science
43533
https://en.wikipedia.org/wiki/Hornblende
Hornblende
Hornblende is a complex inosilicate series of minerals. It is not a recognized mineral in its own right, but the name is used as a general or field term, to refer to a dark amphibole. Hornblende minerals are common in igneous and metamorphic rocks. The general formula is . Physical properties Hornblende has a hardness of 5–6, a specific gravity of 3.0 to 3.6, and is typically an opaque green, dark green, brown, or black color. It tends to form slender prismatic to bladed crystals, diamond-shaped in cross section, or is present as irregular grains or fibrous masses. Its planes of cleavage intersect at 56° and 124° angles. Hornblende is most often confused with the pyroxene series and biotite mica, which are also dark minerals found in granite and charnockite. Pyroxenes differ in their cleavage planes, which intersect at 87° and 93°. Hornblende is an inosilicate (chain silicate) mineral, built around double chains of silica tetrahedra. These chains extend the length of the crystal and are bonded to their neighbors by additional metal ions to form the complete crystal structure. Compositional variances Hornblende is part of the calcium-amphibole group of amphibole minerals. It is highly variable in composition, and includes at least five solid solution series: Magnesiohornblende–ferrohornblende, Tschermakite–ferrotschermakite, Edenite–ferroedenite, Pargasite–ferropargasite, Magnesiohastingstite–hastingsite, In addition, titanium, manganese, or chromium can substitute for some of the cations and oxygen, fluorine, or chlorine for some of the hydroxide (OH). The different chemical types are almost impossible to distinguish even by optical or X-ray methods, and detailed chemical analysis using an electron microprobe is required. There is a solid solution series between hornblende and the closely related amphibole minerals, tremolite–actinolite, at elevated temperature. A miscibility gap exists at lower temperatures, and, as a result, hornblende often contains exsolution lamellae of grunerite. Occurrence Hornblende is a common constituent of many igneous and metamorphic rocks such as granite, syenite, diorite, gabbro, basalt, andesite, gneiss, and schist. It crystallizes in preference to pyroxene minerals from cooler magma that is richer in silica and water. It is the principal mineral of amphibolites, which form during medium- to high-grade metamorphism of mafic to intermediate igneous rock (igneous rocks with relative low silica content) in the presence of pore water. Much of the pore water comes from the breakdown of micas or other hydrous minerals. However, hornblende itself breaks down at very high temperatures. Hornblende alters easily to chlorite, biotite, or other mafic minerals. A rare variety of hornblende contains less than 5% of iron oxide, is gray to white in color, and is named edenite from its locality in Edenville, Orange County, New York. Oxyhornblende is a variety in which most of the iron has been oxidized to the ferric state, . Charge balance is preserved by the substitution of oxygen ions for hydroxide. Oxyhornblende is also typically enriched in titanium. It is found almost exclusively in volcanic rock and is sometimes called basaltic hornblende. Etymology The word hornblende is derived from German ('horn') and ('deceive'), in allusion to its similar appearance to metal-bearing ore minerals.
Physical sciences
Silicate minerals
Earth science
43534
https://en.wikipedia.org/wiki/Basalt
Basalt
Basalt is an aphanitic (fine-grained) extrusive igneous rock formed from the rapid cooling of low-viscosity lava rich in magnesium and iron (mafic lava) exposed at or very near the surface of a rocky planet or moon. More than 90% of all volcanic rock on Earth is basalt. Rapid-cooling, fine-grained basalt is chemically equivalent to slow-cooling, coarse-grained gabbro. The eruption of basalt lava is observed by geologists at about 20 volcanoes per year. Basalt is also an important rock type on other planetary bodies in the Solar System. For example, the bulk of the plains of Venus, which cover ~80% of the surface, are basaltic; the lunar maria are plains of flood-basaltic lava flows; and basalt is a common rock on the surface of Mars. Molten basalt lava has a low viscosity due to its relatively low silica content (between 45% and 52%), resulting in rapidly moving lava flows that can spread over great areas before cooling and solidifying. Flood basalts are thick sequences of many such flows that can cover hundreds of thousands of square kilometres and constitute the most voluminous of all volcanic formations. Basaltic magmas within Earth are thought to originate from the upper mantle. The chemistry of basalts thus provides clues to processes deep in Earth's interior. Definition and characteristics Basalt is composed mostly of oxides of silicon, iron, magnesium, potassium, aluminum, titanium, and calcium. Geologists classify igneous rock by its mineral content whenever possible; the relative volume percentages of quartz (crystalline silica (SiO2)), alkali feldspar, plagioclase, and feldspathoid (QAPF) are particularly important. An aphanitic (fine-grained) igneous rock is classified as basalt when its QAPF fraction is composed of less than 10% feldspathoid and less than 20% quartz, and plagioclase makes up at least 65% of its feldspar content. This places basalt in the basalt/andesite field of the QAPF diagram. Basalt is further distinguished from andesite by its silica content of under 52%. It is often not practical to determine the mineral composition of volcanic rocks, due to their very small grain size, in which case geologists instead classify the rocks chemically, with particular emphasis on the total content of alkali metal oxides and silica (TAS); in that context, basalt is defined as volcanic rock with a content of between 45% and 52% silica and no more than 5% alkali metal oxides. This places basalt in the B field of the TAS diagram. Such a composition is described as mafic. Basalt is usually dark grey to black in colour, due to a high content of augite or other dark-coloured pyroxene minerals, but can exhibit a wide range of shading. Some basalts are quite light-coloured due to a high content of plagioclase; these are sometimes described as leucobasalts. It can be difficult to distinguish between lighter-colored basalt and andesite, so field researchers commonly use a rule of thumb for this purpose, classifying it as basalt if it has a color index of 35 or greater. The physical properties of basalt result from its relatively low silica content and typically high iron and magnesium content. The average density of basalt is 2.9 g/cm3, compared, for example, to granite's typical density of 2.7 g/cm3. The viscosity of basaltic magma is relatively low, around 10⁴ to 10⁵ cP, similar to the viscosity of ketchup, but that is still several orders of magnitude higher than the viscosity of water, which is about 1 cP. 
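The two classification criteria quoted above (the QAPF modal thresholds and the TAS chemical bounds) can be expressed as a simple check. The sketch below is a deliberate simplification, since real QAPF/TAS classification uses the full diagrams rather than these bounds alone, and the example analyses are hypothetical:

```python
# Minimal sketch of the two basalt criteria quoted above. Real QAPF/TAS
# classification uses the full diagrams; these functions encode only the
# bounds stated in the text, and the example inputs are hypothetical.

def is_basalt_tas(sio2_wt_pct: float, alkali_wt_pct: float) -> bool:
    """TAS criterion from the text: 45-52 wt% SiO2 and <= 5 wt% total alkalis."""
    return 45.0 <= sio2_wt_pct <= 52.0 and alkali_wt_pct <= 5.0

def is_basalt_qapf(feldspathoid_pct: float, quartz_pct: float,
                   plagioclase_pct_of_feldspar: float) -> bool:
    """QAPF criterion from the text: <10% feldspathoid, <20% quartz, and
    plagioclase making up at least 65% of the feldspar."""
    return (feldspathoid_pct < 10.0
            and quartz_pct < 20.0
            and plagioclase_pct_of_feldspar >= 65.0)

print(is_basalt_tas(49.0, 3.0))        # True: basaltic silica and alkali contents
print(is_basalt_tas(57.0, 4.0))        # False: silica in the andesite range
print(is_basalt_qapf(2.0, 5.0, 90.0))  # True: modal criteria satisfied
```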
Basalt is often porphyritic, containing larger crystals (phenocrysts) that formed before the extrusion event that brought the magma to the surface, embedded in a finer-grained matrix. These phenocrysts are usually made of augite, olivine, or a calcium-rich plagioclase, which have the highest melting temperatures of any of the minerals that can typically crystallize from the melt, and which are therefore the first to form solid crystals. Basalt often contains vesicles; they are formed when dissolved gases bubble out of the magma as it decompresses during its approach to the surface; the erupted lava then solidifies before the gases can escape. When vesicles make up a substantial fraction of the volume of the rock, the rock is described as scoria. The term basalt is at times applied to shallow intrusive rocks with a composition typical of basalt, but rocks of this composition with a phaneritic (coarser) groundmass are more properly referred to either as diabase (also called dolerite) or—when they are more coarse-grained (having crystals over 2 mm across)—as gabbro. Diabase and gabbro are thus the hypabyssal and plutonic equivalents of basalt. During the Hadean, Archean, and early Proterozoic eons of Earth's history, the chemistry of erupted magmas was significantly different from what it is today, due to immature crustal and asthenosphere differentiation. The resulting ultramafic volcanic rocks, with silica (SiO2) contents below 45% and high magnesium oxide (MgO) content, are usually classified as komatiites. Etymology The word "basalt" is ultimately derived from Late Latin , a misspelling of Latin "very hard stone", which was imported from Ancient Greek (), from (, "touchstone"). The modern petrological term basalt, describing a particular composition of lava-derived rock, became standard because of its use by Georgius Agricola in 1546, in his work De Natura Fossilium. Agricola applied the term "basalt" to the volcanic black rock beneath the Bishop of Meissen's Stolpen castle, believing it to be the same as the "basaniten" described by Pliny the Elder in AD 77 in . Types On Earth, most basalt is formed by decompression melting of the mantle. The high pressure in the upper mantle (due to the weight of the overlying rock) raises the melting point of mantle rock, so that almost all of the upper mantle is solid. However, mantle rock is ductile (the solid rock slowly deforms under high stress). When tectonic forces cause hot mantle rock to creep upwards, pressure on the ascending rock decreases, and this can lower its melting point enough for the rock to partially melt, producing basaltic magma. Decompression melting can occur in a variety of tectonic settings, including in continental rift zones, at mid-ocean ridges, above geological hotspots, and in back-arc basins. Basalt also forms in subduction zones, where mantle rock rises into a mantle wedge above the descending slab. The slab releases water vapor and other volatiles as it descends, which further lowers the melting point, further increasing the amount of decompression melting. Each tectonic setting produces basalt with its own distinctive characteristics. Tholeiitic basalt, which is relatively rich in iron and poor in alkali metals and aluminium, include most basalts of the ocean floor, most large oceanic islands, and continental flood basalts such as the Columbia River Plateau. High- and low-titanium basalt rocks, which are sometimes classified based on their titanium (Ti) content in High-Ti and Low-Ti varieties. 
High-Ti and Low-Ti basalt have been distinguished from each other in the Paraná and Etendeka traps and the Emeishan Traps. Mid-ocean ridge basalt (MORB) is a tholeiitic basalt that has almost exclusively erupted at ocean ridges; it is characteristically low in incompatible elements. Although all MORBs are chemically similar, geologists recognize that they vary significantly in how depleted they are in incompatible elements. When they are present in close proximity along mid-ocean ridges, that is seen as evidence for mantle inhomogeneity. Enriched MORB (E-MORB) is defined as MORB that is relatively undepleted in incompatible elements. It was once thought to be mostly located in hot spots along mid-ocean ridges, such as Iceland, but it is now known to be located in many other places along those ridges. Normal MORB (N-MORB) is defined as MORB that has an average amount of incompatible elements. D-MORB, depleted MORB, is defined as MORB that is highly depleted in incompatible elements. Alkali basalt is relatively rich in alkali metals. It is silica-undersaturated and may contain feldspathoids, alkali feldspar, phlogopite, and kaersutite. Augite in alkali basalts is titanium-enriched augite; low-calcium pyroxenes are never present. They are characteristic of continental rifting and hotspot volcanism. High-alumina basalt has greater than 17% alumina (Al2O3) and is intermediate in composition between tholeiitic basalt and alkali basalt. Its relatively alumina-rich composition is based on rocks without phenocrysts of plagioclase. These represent the low-silica end of the calc-alkaline magma series and are characteristic of volcanic arcs above subduction zones. Boninite is a high-magnesium form of basalt that is erupted generally in back-arc basins; it is distinguished by its low titanium content and trace-element composition. Ocean island basalts include both tholeiites and alkali basalts; the tholeiites predominate early in the eruptive history of the island. These basalts are characterized by elevated concentrations of incompatible elements, which suggests that their source mantle rock has produced little magma in the past (it is undepleted). Petrology The mineralogy of basalt is characterized by a preponderance of calcic plagioclase feldspar and pyroxene. Olivine can also be a significant constituent. Accessory minerals present in relatively minor amounts include iron oxides and iron-titanium oxides, such as magnetite, ulvöspinel, and ilmenite. Because of the presence of such oxide minerals, basalt can acquire strong magnetic signatures as it cools, and paleomagnetic studies have made extensive use of basalt. In tholeiitic basalt, pyroxene (augite and orthopyroxene or pigeonite) and calcium-rich plagioclase are common phenocryst minerals. Olivine may also be a phenocryst, and when present, may have rims of pigeonite. The groundmass contains interstitial quartz or tridymite or cristobalite. Olivine tholeiitic basalt has augite and orthopyroxene or pigeonite with abundant olivine, but olivine may have rims of pyroxene and is unlikely to be present in the groundmass. Alkali basalts typically have mineral assemblages that lack orthopyroxene but contain olivine. Feldspar phenocrysts typically are labradorite to andesine in composition. Augite is rich in titanium compared to augite in tholeiitic basalt. Minerals such as alkali feldspar, leucite, nepheline, sodalite, phlogopite mica, and apatite may be present in the groundmass. 
Basalt has high liquidus and solidus temperatures—values at the Earth's surface are near or above 1200 °C (liquidus) and near or below 1000 °C (solidus); these values are higher than those of other common igneous rocks. The majority of tholeiitic basalts are formed at approximately 50–100 km depth within the mantle. Many alkali basalts may be formed at greater depths, perhaps as deep as 150–200 km. The origin of high-alumina basalt continues to be controversial, with disagreement over whether it is a primary melt or derived from other basalt types by fractionation. Geochemistry Relative to most common igneous rocks, basalt compositions are rich in MgO and CaO and low in SiO2 and the alkali oxides, i.e., Na2O + K2O, consistent with their TAS classification. Basalt contains more silica than picrobasalt and most basanites and tephrites but less than basaltic andesite. Basalt has a lower total content of alkali oxides than trachybasalt and most basanites and tephrites. Basalt generally has a composition of 45–52 wt% SiO2, 2–5 wt% total alkalis, 0.5–2.0 wt% TiO2, 5–14 wt% FeO and 14 wt% or more Al2O3. Contents of CaO are commonly near 10 wt%, those of MgO commonly in the range 5 to 12 wt%. High-alumina basalts have aluminium contents of 17–19 wt% Al2O3; boninites have magnesium (MgO) contents of up to 15 percent. Rare feldspathoid-rich mafic rocks, akin to alkali basalts, may have Na2O + K2O contents of 12% or more. The abundances of the lanthanide or rare-earth elements (REE) can be a useful diagnostic tool to help explain the history of mineral crystallisation as the melt cooled. In particular, the relative abundance of europium compared to the other REE is often markedly higher or lower, and called the europium anomaly. It arises because Eu2+ can substitute for Ca2+ in plagioclase feldspar, unlike any of the other lanthanides, which tend to only form 3+ cations. Mid-ocean ridge basalts (MORB) and their intrusive equivalents, gabbros, are the characteristic igneous rocks formed at mid-ocean ridges. They are tholeiitic basalts particularly low in total alkalis and in incompatible trace elements, and they have relatively flat REE patterns normalized to mantle or chondrite values. In contrast, alkali basalts have normalized patterns highly enriched in the light REE, and with greater abundances of the REE and of other incompatible elements. Because MORB basalt is considered a key to understanding plate tectonics, its compositions have been much studied. Although MORB compositions are distinctive relative to average compositions of basalts erupted in other environments, they are not uniform. For instance, compositions change with position along the Mid-Atlantic Ridge, and the compositions also define different ranges in different ocean basins. Mid-ocean ridge basalts have been subdivided into varieties such as normal (NMORB) and those slightly more enriched in incompatible elements (EMORB). Isotope ratios of elements such as strontium, neodymium, lead, hafnium, and osmium in basalts have been much studied to learn about the evolution of the Earth's mantle. Isotopic ratios of noble gases, such as 3He/4He, are also of great value: for instance, ratios for basalts range from 6 to 10 for mid-ocean ridge tholeiitic basalt (normalized to atmospheric values), but to 15–24 and more for ocean-island basalts thought to be derived from mantle plumes. Source rocks for the partial melts that produce basaltic magma probably include both peridotite and pyroxenite. 
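The silica and alkali ranges quoted above lend themselves to a quick screening check. The sketch below is a minimal illustration, not a full TAS classification: the 45–52 wt% SiO2 window and the roughly 5 wt% total-alkali ceiling are taken from the figures in this section, the sloping field boundaries of the real TAS diagram are ignored, and the function name and hard cutoffs are illustrative assumptions.

```python
# Minimal screening sketch based on the composition ranges quoted above.
# Real TAS classification uses sloping field boundaries; the hard cutoffs
# here (45 and 52 wt% SiO2, 5 wt% total alkalis) are simplifications.

def rough_basalt_check(sio2_wt: float, na2o_wt: float, k2o_wt: float) -> str:
    """Return a coarse classification hint from whole-rock oxide data in wt%."""
    total_alkalis = na2o_wt + k2o_wt
    if sio2_wt < 45.0:
        return "below the basalt silica range (picrobasalt/basanite/tephrite territory)"
    if sio2_wt > 52.0:
        return "above the basalt silica range (basaltic andesite or more evolved)"
    if total_alkalis > 5.0:
        return "alkali-rich for basalt (trachybasalt territory)"
    return "within the basalt window (45-52 wt% SiO2, 2-5 wt% total alkalis)"

# Example: a typical tholeiite-like analysis
print(rough_basalt_check(sio2_wt=49.5, na2o_wt=2.6, k2o_wt=0.2))
```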
Morphology and textures The shape, structure and texture of a basalt is diagnostic of how and where it erupted—for example, whether into the sea, in an explosive cinder eruption or as creeping pāhoehoe lava flows, the classic image of Hawaiian basalt eruptions. Subaerial eruptions Basalt that erupts under open air (that is, subaerially) forms three distinct types of lava or volcanic deposits: scoria; ash or cinder (breccia); and lava flows. Basalt in the tops of subaerial lava flows and cinder cones will often be highly vesiculated, imparting a lightweight "frothy" texture to the rock. Basaltic cinders are often red, coloured by oxidized iron from weathered iron-rich minerals such as pyroxene. Aā types of blocky cinder and breccia flows of thick, viscous basaltic lava are common in Hawaii. Pāhoehoe is a highly fluid, hot form of basalt which tends to form thin aprons of molten lava which fill up hollows and sometimes forms lava lakes. Lava tubes are common features of pāhoehoe eruptions. Basaltic tuff or pyroclastic rocks are less common than basaltic lava flows. Usually basalt is too hot and fluid to build up sufficient pressure to form explosive lava eruptions but occasionally this will happen by trapping of the lava within the volcanic throat and buildup of volcanic gases. Hawaii's Mauna Loa volcano erupted in this way in the 19th century, as did Mount Tarawera, New Zealand in its violent 1886 eruption. Maar volcanoes are typical of small basalt tuffs, formed by explosive eruption of basalt through the crust, forming an apron of mixed basalt and wall rock breccia and a fan of basalt tuff further out from the volcano. Amygdaloidal structure is common in relict vesicles and beautifully crystallized species of zeolites, quartz or calcite are frequently found. Columnar basalt During the cooling of a thick lava flow, contractional joints or fractures form. If a flow cools relatively rapidly, significant contraction forces build up. While a flow can shrink in the vertical dimension without fracturing, it cannot easily accommodate shrinking in the horizontal direction unless cracks form; the extensive fracture network that develops results in the formation of columns. These structures, or basalt prisms, are predominantly hexagonal in cross-section, but polygons with three to twelve or more sides can be observed. The size of the columns depends loosely on the rate of cooling; very rapid cooling may result in very small (<1 cm diameter) columns, while slow cooling is more likely to produce large columns. Submarine eruptions The character of submarine basalt eruptions is largely determined by depth of water, since increased pressure restricts the release of volatile gases and results in effusive eruptions. It has been estimated that at depths greater than , explosive activity associated with basaltic magma is suppressed. Above this depth, submarine eruptions are often explosive, tending to produce pyroclastic rock rather than basalt flows. These eruptions, described as Surtseyan, are characterised by large quantities of steam and gas and the creation of large amounts of pumice. Pillow basalts When basalt erupts underwater or flows into the sea, contact with the water quenches the surface and the lava forms a distinctive pillow shape, through which the hot lava breaks to form another pillow. This "pillow" texture is very common in underwater basaltic flows and is diagnostic of an underwater eruption environment when found in ancient rocks. 
Pillows typically consist of a fine-grained core with a glassy crust and have radial jointing. The size of individual pillows varies from 10 cm up to several metres. When pāhoehoe lava enters the sea it usually forms pillow basalts. However, when aā enters the ocean it forms a littoral cone, a small cone-shaped accumulation of tuffaceous debris formed when the blocky aā lava enters the water and explodes from built-up steam. The island of Surtsey in the Atlantic Ocean is a basalt volcano which breached the ocean surface in 1963. The initial phase of Surtsey's eruption was highly explosive, as the magma was quite fluid, causing the rock to be blown apart by the boiling steam to form a tuff and cinder cone. This has subsequently moved to a typical pāhoehoe-type behaviour. Volcanic glass may be present, particularly as rinds on rapidly chilled surfaces of lava flows, and is commonly (but not exclusively) associated with underwater eruptions. Pillow basalt is also produced by some subglacial volcanic eruptions. Distribution Earth Basalt is the most common volcanic rock type on Earth, making up over 90% of all volcanic rock on the planet. The crustal portions of oceanic tectonic plates are composed predominantly of basalt, produced from upwelling mantle below the ocean ridges. Basalt is also the principal volcanic rock in many oceanic islands, including the islands of Hawaii, the Faroe Islands, and Réunion. The eruption of basalt lava is observed by geologists at about 20 volcanoes per year. Basalt is the rock most typical of large igneous provinces. These include continental flood basalts, the most voluminous basalts found on land. Examples of continental flood basalts included the Deccan Traps in India, the Chilcotin Group in British Columbia, Canada, the Paraná Traps in Brazil, the Siberian Traps in Russia, the Karoo flood basalt province in South Africa, and the Columbia River Plateau of Washington and Oregon. Basalt is also prevalent across extensive regions of the Eastern Galilee, Golan, and Bashan in Israel and Syria. Basalt also is common around volcanic arcs, specially those on thin crust. Ancient Precambrian basalts are usually only found in fold and thrust belts, and are often heavily metamorphosed. These are known as greenstone belts, because low-grade metamorphism of basalt produces chlorite, actinolite, epidote and other green minerals. Other bodies in the Solar System As well as forming large parts of the Earth's crust, basalt also occurs in other parts of the Solar System. Basalt commonly erupts on Io (the third largest moon of Jupiter), and has also formed on the Moon, Mars, Venus, and the asteroid Vesta. The Moon The dark areas visible on Earth's moon, the lunar maria, are plains of flood basaltic lava flows. These rocks were sampled both by the crewed American Apollo program and the robotic Russian Luna program, and are represented among the lunar meteorites. Lunar basalts differ from their Earth counterparts principally in their high iron contents, which typically range from about 17 to 22 wt% FeO. They also possess a wide range of titanium concentrations (present in the mineral ilmenite), ranging from less than 1 wt% TiO2, to about 13 wt.%. Traditionally, lunar basalts have been classified according to their titanium content, with classes being named high-Ti, low-Ti, and very-low-Ti. 
Nevertheless, global geochemical maps of titanium obtained from the Clementine mission demonstrate that the lunar maria possess a continuum of titanium concentrations, and that the highest concentrations are the least abundant. Lunar basalts show exotic textures and mineralogy, particularly shock metamorphism, lack of the oxidation typical of terrestrial basalts, and a complete lack of hydration. Most of the Moon's basalts erupted between about 3 and 3.5 billion years ago, but the oldest samples are 4.2 billion years old, and the youngest flows, based on the age dating method of crater counting, are estimated to have erupted only 1.2 billion years ago. Venus From 1972 to 1985, five Venera and two VEGA landers successfully reached the surface of Venus and carried out geochemical measurements using X-ray fluorescence and gamma-ray analysis. These returned results consistent with the rock at the landing sites being basalts, including both tholeiitic and highly alkaline basalts. The landers are thought to have landed on plains whose radar signature is that of basaltic lava flows. These constitute about 80% of the surface of Venus. Some locations show high reflectivity consistent with unweathered basalt, indicating basaltic volcanism within the last 2.5 million years. Mars Basalt is also a common rock on the surface of Mars, as determined by data sent back from the planet's surface, and by Martian meteorites. Vesta Analysis of Hubble Space Telescope images of Vesta suggests this asteroid has a basaltic crust covered with a brecciated regolith derived from the crust. Evidence from Earth-based telescopes and the Dawn mission suggest that Vesta is the source of the HED meteorites, which have basaltic characteristics. Vesta is the main contributor to the inventory of basaltic asteroids of the main Asteroid Belt. Io Lava flows represent a major volcanic terrain on Io. Analysis of the Voyager images led scientists to believe that these flows were composed mostly of various compounds of molten sulfur. However, subsequent Earth-based infrared studies and measurements from the Galileo spacecraft indicate that these flows are composed of basaltic lava with mafic to ultramafic compositions. This conclusion is based on temperature measurements of Io's "hotspots", or thermal-emission locations, which suggest temperatures of at least 1,300 K and some as high as 1,600 K. Initial estimates suggesting eruption temperatures approaching 2,000 K have since proven to be overestimates because the wrong thermal models were used to model the temperatures. Alteration of basalt Weathering Compared to granitic rocks exposed at the Earth's surface, basalt outcrops weather relatively rapidly. This reflects their content of minerals that crystallized at higher temperatures and in an environment poorer in water vapor than granite. These minerals are less stable in the colder, wetter environment at the Earth's surface. The finer grain size of basalt and the volcanic glass sometimes found between the grains also hasten weathering. The high iron content of basalt causes weathered surfaces in humid climates to accumulate a thick crust of hematite or other iron oxides and hydroxides, staining the rock a brown to rust-red colour. Because of the low potassium content of most basalts, weathering converts the basalt to calcium-rich clay (montmorillonite) rather than potassium-rich clay (illite). Further weathering, particularly in tropical climates, converts the montmorillonite to kaolinite or gibbsite. 
This produces the distinctive tropical soil known as laterite. The ultimate weathering product is bauxite, the principal ore of aluminium. Chemical weathering also releases readily water-soluble cations such as calcium, sodium and magnesium, which give basaltic areas a strong buffer capacity against acidification. Calcium released by basalts binds CO2 from the atmosphere forming CaCO3 acting thus as a CO2 trap. Metamorphism Intense heat or great pressure transforms basalt into its metamorphic rock equivalents. Depending on the temperature and pressure of metamorphism, these may include greenschist, amphibolite, or eclogite. Basalts are important rocks within metamorphic regions because they can provide vital information on the conditions of metamorphism that have affected the region. Metamorphosed basalts are important hosts for a variety of hydrothermal ores, including deposits of gold, copper and volcanogenic massive sulfides. Life on basaltic rocks The common corrosion features of underwater volcanic basalt suggest that microbial activity may play a significant role in the chemical exchange between basaltic rocks and seawater. The significant amounts of reduced iron, Fe(II), and manganese, Mn(II), present in basaltic rocks provide potential energy sources for bacteria. Some Fe(II)-oxidizing bacteria cultured from iron-sulfide surfaces are also able to grow with basaltic rock as a source of Fe(II). Fe- and Mn- oxidizing bacteria have been cultured from weathered submarine basalts of Kamaʻehuakanaloa Seamount (formerly Loihi). The impact of bacteria on altering the chemical composition of basaltic glass (and thus, the oceanic crust) and seawater suggest that these interactions may lead to an application of hydrothermal vents to the origin of life. Uses Basalt is used in construction (e.g. as building blocks or in the groundwork), making cobblestones (from columnar basalt) and in making statues. Heating and extruding basalt yields stone wool, which has potential to be an excellent thermal insulator. Carbon sequestration in basalt has been studied as a means of removing carbon dioxide, produced by human industrialization, from the atmosphere. Underwater basalt deposits, scattered in seas around the globe, have the added benefit of the water serving as a barrier to the re-release of CO2 into the atmosphere.
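The CO2-trapping behaviour described above is usually summarized with an idealized carbonation reaction. The version below uses wollastonite (CaSiO3) as a simplified stand-in for the calcium-bearing silicates of basalt (which in reality are mostly plagioclase and pyroxene), so it should be read as an illustrative net reaction rather than an exact description of basalt weathering:

$$\mathrm{CaSiO_3 + CO_2 \longrightarrow CaCO_3 + SiO_2}$$

Each mole of calcium liberated by weathering can in principle fix one mole of atmospheric CO2 as solid carbonate, which is the basis of the basalt carbon-sequestration schemes mentioned under Uses.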
Ruby
Ruby is a pinkish red to blood-red colored gemstone, a variety of the mineral corundum (aluminium oxide). Ruby is one of the most popular traditional jewelry gems and is very durable. Other varieties of gem-quality corundum are called sapphires. Ruby is one of the traditional cardinal gems, alongside amethyst, sapphire, emerald, and diamond. The word ruby comes from ruber, Latin for red. The color of a ruby is due to the element chromium. Some gemstones that are popularly or historically called rubies, such as the Black Prince's Ruby in the British Imperial State Crown, are actually spinels. These were once known as "Balas rubies". The quality of a ruby is determined by its color, cut, and clarity, which, along with carat weight, affect its value. The brightest and most valuable shade of red, called blood-red or pigeon blood, commands a large premium over other rubies of similar quality. After color follows clarity: similar to diamonds, a clear stone will command a premium, but a ruby without any needle-like rutile inclusions may indicate that the stone has been treated. Ruby is the traditional birthstone for July and is usually red/pinker than garnet, although some rhodolite garnets have a similar pinkish hue to most rubies. The world's most valuable ruby to be sold at auction is the Estrela de Fura, which sold for US$34.8 million. Physical properties Rubies have a hardness of 9.0 on the Mohs scale of mineral hardness. Among the natural gems, only moissanite and diamond are harder, with diamond having a Mohs hardness of 10.0 and moissanite falling somewhere in between corundum (ruby) and diamond in hardness. Sapphire, ruby, and pure corundum are α-alumina, the most stable form of AlO, in which 3 electrons leave each aluminium ion to join the regular octahedral group of six nearby O ions; in pure corundum this leaves all of the aluminium ions with a very stable configuration of no unpaired electrons or unfilled energy levels, and the crystal is perfectly colorless, and transparent except for flaws. When a chromium atom replaces an occasional aluminium atom, it too loses 3 electrons to become a chromium ion to maintain the charge balance of the AlO crystal. However, the Cr ions are larger and have electron orbitals in different directions than aluminium. The octahedral arrangement of the O ions is distorted, and the energy levels of the different orbitals of those Cr ions are slightly altered because of the directions to the O ions. Those energy differences correspond to absorption in the ultraviolet, violet, and yellow-green regions of the spectrum. If one percent of the aluminium ions are replaced by chromium in ruby, the yellow-green absorption results in a red color for the gem. Additionally, absorption at any of the above wavelengths stimulates fluorescent emission of 694-nanometer-wavelength red light, which adds to its red color and perceived luster. The chromium concentration in artificial rubies can be adjusted (in the crystal growth process) to be ten to twenty times less than in the natural gemstones. Theodore Maiman says that "because of the low chromium level in these crystals they display a lighter red color than gemstone ruby and are referred to as pink ruby." After absorbing short-wavelength light, there is a short interval of time when the crystal lattice of ruby is in an excited state before fluorescence occurs. 
If 694-nanometer photons pass through the crystal during that time, they can stimulate more fluorescent photons to be emitted in-phase with them, thus strengthening the intensity of that red light. By arranging mirrors or other means to pass emitted light repeatedly through the crystal, a ruby laser in this way produces a very high intensity of coherent red light. All natural rubies have imperfections in them, including color impurities and inclusions of rutile needles known as "silk". Gemologists use these needle inclusions found in natural rubies to distinguish them from synthetics, simulants, or substitutes. Usually, the rough stone is heated before cutting. These days, almost all rubies are treated in some form, with heat treatment being the most common practice. Untreated rubies of high quality command a large premium. Some rubies show a three-point or six-point asterism or "star". These rubies are cut into cabochons to display the effect properly. Asterisms are best visible with a single-light source and move across the stone as the light moves or the stone is rotated. Such effects occur when light is reflected off the "silk" (the structurally oriented rutile needle inclusions) in a certain way. This is one example where inclusions increase the value of a gemstone. Furthermore, rubies can show color changes—though this occurs very rarely—as well as chatoyancy or the "cat's eye" effect. Versus pink sapphire Generally, gemstone-quality corundum in all shades of red, including pink, are called rubies. However, in the United States, a minimum color saturation must be met to be called a ruby; otherwise, the stone will be called a pink sapphire. Drawing a distinction between rubies and pink sapphires is relatively new, having arisen sometime in the 20th century. Often, the distinction between ruby and pink sapphire is not clear and can be debated. As a result of the difficulty and subjectiveness of such distinctions, trade organizations such as the International Colored Gemstone Association (ICGA) have adopted the broader definition for ruby which encompasses its lighter shades, including pink. Occurrence and mining Historically, rubies have been mined in Thailand, in the Pailin and Samlout District of Cambodia, as well as in Afghanistan, Australia, Brazil, Colombia, India, Namibia, Japan, and Scotland. After the Second World War, ruby deposits were found in Madagascar, Mozambique, Nepal, Pakistan, Tajikistan, Tanzania, and Vietnam. The Republic of North Macedonia is the only country in mainland Europe to have naturally occurring rubies. They can mainly be found around the city of Prilep. Macedonian rubies have a unique raspberry color. A few rubies have been found in the U.S. states of Montana, North Carolina, South Carolina and Wyoming. Spinel, another red gemstone, is sometimes found along with rubies in the same gem gravel or marble. Red spinels may be mistaken for rubies by those lacking experience with gems. However, the finest red spinels, now heavily sought, can have values approaching all but the finest examples of ruby. The Mogok Valley in Upper Myanmar (Burma) was for centuries the world's main source for rubies. That region has produced some exceptional rubies; however, in recent years few good rubies have been found. In central Myanmar, the area of Mong Hsu began producing rubies during the 1990s and rapidly became the world's main ruby mining area. The most recently found ruby deposit in Myanmar is in Namya (Namyazeik) located in the northern state of Kachin. 
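As a point of reference for the 694 nm emission discussed above, the energy of a single photon at that wavelength can be worked out directly; this worked figure is added here for illustration and is not taken from the source:

$$E=\frac{hc}{\lambda}=\frac{(6.626\times10^{-34}\,\mathrm{J\,s})(2.998\times10^{8}\,\mathrm{m\,s^{-1}})}{694.3\times10^{-9}\,\mathrm{m}}\approx 2.86\times10^{-19}\,\mathrm{J}\approx 1.79\,\mathrm{eV}$$

This is lower than the photon energies of the violet and yellow-green light that ruby absorbs, consistent with absorption at shorter wavelengths pumping the red fluorescence.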
In Pakistani Kashmir there are vast proven reserves of millions of rubies, worth up to half a billion dollars. However, as of 2017 there was only one mine (at Chitta Katha) due to lack of investment. In Afghanistan, rubies are mined at Jegdalek. In 2017 the Aappaluttoq mine in Greenland began running. The rubies in Greenland are said to be among the oldest in the world at approximately 3 billion years old. The Aappaluttoq mine in Greenland is located 160 kilometers south of Nuuk, the capital of Greenland. The rubies are traceable from mine to market. The Montepuez ruby mine in northeastern Mozambique is situated on one of the most significant ruby deposits in the world, although, rubies were only discovered here for the first time in 2009. In less than a decade, Mozambique has become the world's most productive source for gem-quality ruby. Factors affecting value Rubies, as with other gemstones, are graded using criteria known as the four Cs, namely color, cut, clarity and carat weight. Rubies are also evaluated on the basis of their geographic origin. Color In the evaluation of colored gemstones, color is the most important factor. Color divides into three components: hue, saturation and tone. Hue refers to color as we normally use the term. Transparent gemstones occur in the pure spectral hues of red, orange, yellow, green, blue, violet. In nature, there are rarely pure hues, so when speaking of the hue of a gemstone, we speak of primary and secondary and sometimes tertiary hues. Ruby is defined to be red. All other hues of the gem species corundum are called sapphire. Ruby may exhibit a range of secondary hues, including orange, purple, violet, and pink. Clarity Because rubies host many inclusions, their clarity is evaluated by the inclusions’ size, number, location, and visibility. Rubies with the highest clarity grades are known as “eye-clean,” because their inclusions are the least visible to the naked human eye. Rubies may also have thin, intersecting inclusions called silk. Silk can scatter light, brightening the gem's appearance, and the presence of silk can also show whether a ruby has been previously heat treated, since intense heat will degrade a ruby's silk. Treatments and enhancements Improving the quality of gemstones by treating them is common practice. Some treatments are used in almost all cases and are therefore considered acceptable. During the late 1990s, a large supply of low-cost materials caused a sudden surge in supply of heat-treated rubies, leading to a downward pressure on ruby prices. Improvements used include color alteration, improving transparency by dissolving rutile inclusions, healing of fractures (cracks) or even completely filling them. The most common treatment is the application of heat. Most rubies at the lower end of the market are heat treated to improve color, remove purple tinge, blue patches, and silk. These heat treatments typically occur around temperatures of 1800 °C (3300 °F). Some rubies undergo a process of low tube heat, when the stone is heated over charcoal of a temperature of about 1300 °C (2400 °F) for 20 to 30 minutes. The silk is partially broken, and the color is improved. Another treatment, which has become more frequent in recent years, is lead glass filling. Filling the fractures inside the ruby with lead glass (or a similar material) dramatically improves the transparency of the stone, making previously unsuitable rubies fit for applications in jewelry. 
The process is done in four steps: The rough stones are pre-polished to eradicate all surface impurities that may affect the process The rough is cleaned with hydrogen fluoride The first heating process during which no fillers are added. The heating process eradicates impurities inside the fractures. Although this can be done at temperatures up to 1400 °C (2500 °F) it most likely occurs at a temperature of around 900 °C (1600 °F) since the rutile silk is still intact. The second heating process in an electrical oven with different chemical additives. Different solutions and mixes have shown to be successful; however, mostly lead-containing glass-powder is used at present. The ruby is dipped into oils, then covered with powder, embedded on a tile and placed in the oven where it is heated at around 900 °C (1600 °F) for one hour in an oxidizing atmosphere. The orange colored powder transforms upon heating into a transparent to yellow-colored paste, which fills all fractures. After cooling the color of the paste is fully transparent and dramatically improves the overall transparency of the ruby. If a color needs to be added, the glass powder can be "enhanced" with copper or other metal oxides as well as elements such as sodium, calcium, potassium etc. The second heating process can be repeated three to four times, even applying different mixtures. When jewelry containing rubies is heated (for repairs) it should not be coated with boracic acid or any other substance, as this can etch the surface; it does not have to be "protected" like a diamond. The treatment can be identified by noting bubbles in cavities and fractures using a 10× loupe. Synthesis and imitation In 1837, Gaudin made the first synthetic rubies by fusing potash alum at a high temperature with a little chromium as a pigment. In 1847, Ebelmen made white sapphire by fusing alumina in boric acid. In 1877, Edmond Frémy and industrial glass-maker Charles Feil made crystal corundum from which small stones could be cut. In 1887, Fremy and Auguste Verneuil manufactured artificial ruby by fusing BaF and AlO with a little chromium at red heat. In 1903, Verneuil announced he could produce synthetic rubies on a commercial scale using this flame fusion process, later also known as the Verneuil process. By 1910, Verneuil's laboratory had expanded into a 30 furnace production facility, with annual gemstone production having reached in 1907. Other processes in which synthetic rubies can be produced are through Czochralski's pulling process, flux process, and the hydrothermal process. Most synthetic rubies originate from flame fusion, due to the low costs involved. Synthetic rubies may have no imperfections visible to the naked eye but magnification may reveal curved striae and gas bubbles. The fewer the number and the less obvious the imperfections, the more valuable the ruby is; unless there are no imperfections (i.e., a perfect ruby), in which case it will be suspected of being artificial. Dopants are added to some manufactured rubies so they can be identified as synthetic, but most need gemological testing to determine their origin. Synthetic rubies have technological uses as well as gemological ones. Rods of synthetic ruby are used to make ruby lasers and masers. The first working laser was made by Theodore H. Maiman in 1960. Maiman used a solid-state light-pumped synthetic ruby to produce red laser light at a wavelength of 694 nanometers (nm). Ruby lasers are still in use. 
Rubies are also used in applications where high hardness is required such as at wear-exposed locations in mechanical clockworks, or as scanning probe tips in a coordinate measuring machine. Imitation rubies are also marketed. Red spinels, red garnets, and colored glass have been falsely claimed to be rubies. Imitations go back to Roman times and already in the 17th century techniques were developed to color foil red—by burning scarlet wool in the bottom part of the furnace—which was then placed under the imitation stone. Trade terms such as balas ruby for red spinel and rubellite for red tourmaline can mislead unsuspecting buyers. Such terms are therefore discouraged from use by many gemological associations such as the Laboratory Manual Harmonisation Committee (LMHC). Records and famous examples The Smithsonian's National Museum of Natural History in Washington, D.C. has some of the world's largest and finest ruby gemstones. The Burmese ruby, set in a platinum ring with diamonds, was donated by businessman and philanthropist Peter Buck in memory of his late wife Carmen Lúcia. This gemstone displays a richly saturated red color combined with an exceptional transparency. The finely proportioned cut provides vivid red reflections. The stone was mined from the Mogok region of Burma (now Myanmar) in the 1930s. In 2007, the London jeweler Garrard & Co featured a heart-shaped 40.63-carat ruby on their website. On 13/14 December 2011, Elizabeth Taylor's complete jewelry collection was auctioned by Christie's. Several ruby-set pieces were included in the sale, notably a ring set with an 8.24 ct gem that broke the 'price-per-carat' record for rubies (US$512,925 per carat – i.e., over US$4.2 million in total), and a necklace that sold for over US$3.7 million. The Liberty Bell Ruby is the largest mined ruby in the world. It was stolen in a heist in 2011. The Sunrise Ruby was the world's most expensive ruby, most expensive colored gemstone, and most expensive gemstone other than a diamond when it sold at auction in Switzerland to an anonymous buyer for US$30 million In May 2015. A synthetic ruby crystal became the gain medium in the world's first optical laser, conceived, designed and constructed by Theodore H. "Ted" Maiman, on 16 May 1960 at Hughes Research Laboratories. The concept of electromagnetic radiation amplification through the mechanism of stimulated emission had already been successfully demonstrated in the laboratory by way of the maser, using other materials such as ammonia and, later, ruby, but the ruby laser was the first device to work at optical (694.3 nm) wavelengths. Maiman's prototype laser is still in working order. Historical and cultural references The Old Testament of the Bible mentions ruby many times in the Book of Exodus, and many times in the Book of Proverbs, as well as various other times. It is not certain that the Biblical words mean 'ruby' as distinct from other jewels. An early recorded transport and trading of rubies arises in the literature on the North Silk Road of China, wherein about 200 BC rubies were carried along this ancient trackway moving westward from China. Rubies have always been held in high esteem in Asian countries. They were used to ornament armor, scabbards, and harnesses of noblemen in India and China. Rubies were laid beneath the foundation of buildings to secure good fortune to the structure. 
A traditional Hindu astrological belief holds rubies as the "gemstone of the Sun and also the heavenly deity Surya, the leader of the nine heavenly bodies (Navagraha)." The belief is that worshiping and wearing rubies causes the Sun to be favorable to the wearer. In the Marvel comic books, the Godstone is a ruby that the son of J. Jonah Jameson, John Jameson found on the Moon that becomes activated by moonlight, grafts itself to his chest which turns him into the Man-Wolf.
Fluorite
Fluorite (also called fluorspar) is the mineral form of calcium fluoride, CaF2. It belongs to the halide minerals. It crystallizes in isometric cubic habit, although octahedral and more complex isometric forms are not uncommon. The Mohs scale of mineral hardness, based on scratch hardness comparison, defines value 4 as fluorite. Pure fluorite is colourless and transparent, both in visible and ultraviolet light, but impurities usually make it a colorful mineral and the stone has ornamental and lapidary uses. Industrially, fluorite is used as a flux for smelting, and in the production of certain glasses and enamels. The purest grades of fluorite are a source of fluoride for hydrofluoric acid manufacture, which is the intermediate source of most fluorine-containing fine chemicals. Optically clear transparent fluorite has anomalous partial dispersion, that is, its refractive index varies with the wavelength of light in a manner that differs from that of commonly used glasses, so fluorite is useful in making apochromatic lenses, and particularly valuable in photographic optics. Fluorite optics are also usable in the far-ultraviolet and mid-infrared ranges, where conventional glasses are too opaque for use. Fluorite also has low dispersion, and a high refractive index for its density. History and etymology The word fluorite is derived from the Latin verb fluere, meaning to flow. The mineral is used as a flux in iron smelting to decrease the viscosity of slag. The term flux comes from the Latin adjective fluxus, meaning flowing, loose, slack. The mineral fluorite was originally termed fluorspar and was first discussed in print in a 1530 work Bermannvs sive de re metallica dialogus [Bermannus; or dialogue about the nature of metals], by Georgius Agricola, as a mineral noted for its usefulness as a flux. Agricola, a German scientist with expertise in philology, mining, and metallurgy, named fluorspar as a Neo-Latinization of the German Flussspat from Fluss (stream, river) and Spat (meaning a nonmetallic mineral akin to gypsum, spærstān, spear stone, referring to its crystalline projections). In 1852, fluorite gave its name to the phenomenon of fluorescence, which is prominent in fluorites from certain locations, due to certain impurities in the crystal. Fluorite also gave the name to its constitutive element fluorine. Currently, the word "fluorspar" is most commonly used for fluorite as an industrial and chemical commodity, while "fluorite" is used mineralogically and in most other senses. In archeology, gemmology, classical studies, and Egyptology, the Latin terms murrina and myrrhina refer to fluorite. In book 37 of his Naturalis Historia, Pliny the Elder describes it as a precious stone with purple and white mottling, and noted that the Romans prized objects carved from it. Structure Fluorite crystallizes in a cubic motif. Crystal twinning is common and adds complexity to the observed crystal habits. Fluorite has four perfect cleavage planes that help produce octahedral fragments. The structural motif adopted by fluorite is so common that the motif is called the fluorite structure. Element substitution for the calcium cation often includes strontium and certain rare-earth elements (REE), such as yttrium and cerium. Occurrence and mining Fluorite forms as a late-crystallizing mineral in felsic igneous rocks typically through hydrothermal activity. It is particularly common in granitic pegmatites. It may occur as a vein deposit formed through hydrothermal activity particularly in limestones. 
In such vein deposits it can be associated with galena, sphalerite, barite, quartz, and calcite. Fluorite can also be found as a constituent of sedimentary rocks either as grains or as the cementing material in sandstone. It is a common mineral mainly distributed in South Africa, China, Mexico, Mongolia, the United Kingdom, the United States, Canada, Tanzania, Rwanda and Argentina. The world reserves of fluorite are estimated at 230 million tonnes (Mt) with the largest deposits being in South Africa (about 41 Mt), Mexico (32 Mt) and China (24 Mt). China is leading the world production with about 3 Mt annually (in 2010), followed by Mexico (1.0 Mt), Mongolia (0.45 Mt), Russia (0.22 Mt), South Africa (0.13 Mt), Spain (0.12 Mt) and Namibia (0.11 Mt). One of the largest deposits of fluorspar in North America is located on the Burin Peninsula, Newfoundland, Canada. The first official recognition of fluorspar in the area was recorded by geologist J.B. Jukes in 1843. He noted an occurrence of "galena" or lead ore and fluoride of lime on the west side of St. Lawrence harbour. It is recorded that interest in the commercial mining of fluorspar began in 1928 with the first ore being extracted in 1933. Eventually, at Iron Springs Mine, the shafts reached depths of . In the St. Lawrence area, the veins are persistent for great lengths and several of them have wide lenses. The area with veins of known workable size comprises about . In 2018, Canada Fluorspar Inc. commenced mine production again in St. Lawrence; in spring 2019, the company was planned to develop a new shipping port on the west side of Burin Peninsula as a more affordable means of moving their product to markets, and they successfully sent the first shipload of ore from the new port on July 31, 2021. This marks the first time in 30 years that ore has been shipped directly out of St. Lawrence. Cubic crystals up to 20 cm across have been found at Dalnegorsk, Russia. The largest documented single crystal of fluorite was a cube 2.12 meters in size and weighing approximately 16 tonnes. In Asturias (Spain) there are several fluorite deposits known internationally for the quality of the specimens they have yielded. In the area of Berbes, Ribadesella, fluorite appears as cubic crystals, sometimes with dodecahedron modifications, which can reach a size of up to 10 cm of edge, with internal colour zoning, almost always violet in colour. It is associated with quartz and leafy aggregates of baryte. In the Emilio mine, in Loroñe, Colunga, the fluorite crystals, cubes with small modifications of other figures, are colourless and transparent. They can reach 10 cm of edge. In the Moscona mine, in Villabona, the fluorite crystals, cubic without modifications of other shapes, are yellow, up to 3 cm of edge. They are associated with large crystals of calcite and barite. "Blue John" One of the most famous of the older-known localities of fluorite is Castleton in Derbyshire, England, where, under the name of "Derbyshire Blue John", purple-blue fluorite was extracted from several mines or caves. During the 19th century, this attractive fluorite was mined for its ornamental value. The mineral Blue John is now scarce, and only a few hundred kilograms are mined each year for ornamental and lapidary use. Mining still takes place in Blue John Cavern and Treak Cliff Cavern. Recently discovered deposits in China have produced fluorite with coloring and banding similar to the classic Blue John stone. 
Fluorescence George Gabriel Stokes named the phenomenon of fluorescence from fluorite, in 1852. Many samples of fluorite exhibit fluorescence under ultraviolet light, a property that takes its name from fluorite. Many minerals, as well as other substances, fluoresce. Fluorescence involves the elevation of electron energy levels by quanta of ultraviolet light, followed by the progressive falling back of the electrons into their previous energy state, releasing quanta of visible light in the process. In fluorite, the visible light emitted is most commonly blue, but red, purple, yellow, green, and white also occur. The fluorescence of fluorite may be due to mineral impurities, such as yttrium and ytterbium, or organic matter, such as volatile hydrocarbons in the crystal lattice. In particular, the blue fluorescence seen in fluorites from certain parts of Great Britain responsible for the naming of the phenomenon of fluorescence itself, has been attributed to the presence of inclusions of divalent europium in the crystal. Natural samples containing rare earth impurities such as erbium have also been observed to display upconversion fluorescence, in which infrared light stimulates emission of visible light, a phenomenon usually only reported in synthetic materials. One fluorescent variety of fluorite is chlorophane, which is reddish or purple in color and fluoresces brightly in emerald green when heated (thermoluminescence), or when illuminated with ultraviolet light. The color of visible light emitted when a sample of fluorite is fluorescing depends on where the original specimen was collected; different impurities having been included in the crystal lattice in different places. Neither does all fluorite fluoresce equally brightly, even from the same locality. Therefore, ultraviolet light is not a reliable tool for the identification of specimens, nor for quantifying the mineral in mixtures. For example, among British fluorites, those from Northumberland, County Durham, and eastern Cumbria are the most consistently fluorescent, whereas fluorite from Yorkshire, Derbyshire, and Cornwall, if they fluoresce at all, are generally only feebly fluorescent. Fluorite also exhibits the property of thermoluminescence. Color Fluorite is allochromatic, meaning that it can be tinted with elemental impurities. Fluorite comes in a wide range of colors and has consequently been dubbed "the most colorful mineral in the world". Every color of the rainbow in various shades is represented by fluorite samples, along with white, black, and clear crystals. The most common colors are purple, blue, green, yellow, or colorless. Less common are pink, red, white, brown, and black. Color zoning or banding is commonly present. The color of the fluorite is determined by factors including impurities, exposure to radiation, and the absence of voids of the color centers. Uses Source of fluorine and fluoride Fluorite is a major source of hydrogen fluoride, a commodity chemical used to produce a wide range of materials. Hydrogen fluoride is liberated from the mineral by the action of concentrated sulfuric acid: CaF2(s) + H2SO4 → CaSO4(s) + 2 HF(g) The resulting HF is converted into fluorine, fluorocarbons, and diverse fluoride materials. As of the late 1990s, five billion kilograms were mined annually. There are three principal types of industrial use for natural fluorite, commonly referred to as "fluorspar" in these industries, corresponding to different grades of purity. 
Metallurgical grade fluorite (60–85% CaF2), the lowest of the three grades, has traditionally been used as a flux to lower the melting point of raw materials in steel production to aid the removal of impurities, and later in the production of aluminium. Ceramic grade fluorite (85–95% CaF2) is used in the manufacture of opalescent glass, enamels, and cooking utensils. The highest grade, "acid grade fluorite" (97% or more CaF2), accounts for about 95% of fluorite consumption in the US where it is used to make hydrogen fluoride and hydrofluoric acid by reacting the fluorite with sulfuric acid. Internationally, acid-grade fluorite is also used in the production of AlF3 and cryolite (Na3AlF6), which are the main fluorine compounds used in aluminium smelting. Alumina is dissolved in a bath that consists primarily of molten Na3AlF6, AlF3, and fluorite (CaF2) to allow electrolytic recovery of aluminium. Fluorine losses are replaced entirely by the addition of AlF3, the majority of which react with excess sodium from the alumina to form Na3AlF6. Niche uses Lapidary uses Natural fluorite mineral has ornamental and lapidary uses. Fluorite may be drilled into beads and used in jewelry, although due to its relative softness it is not widely used as a semiprecious stone. It is also used for ornamental carvings, with expert carvings taking advantage of the stone's zonation. Optics In the laboratory, calcium fluoride is commonly used as a window material for both infrared and ultraviolet wavelengths, since it is transparent in these regions (about 0.15 μm to 9 μm) and exhibits an extremely low change in refractive index with wavelength. Furthermore, the material is attacked by few reagents. At wavelengths as short as 157 nm, a common wavelength used for semiconductor stepper manufacture for integrated circuit lithography, the refractive index of calcium fluoride shows some non-linearity at high power densities, which has inhibited its use for this purpose. In the early years of the 21st century, the stepper market for calcium fluoride collapsed, and many large manufacturing facilities have been closed. Canon and other manufacturers have used synthetically grown crystals of calcium fluoride components in lenses to aid apochromatic design, and to reduce light dispersion. This use has largely been superseded by newer glasses and computer-aided design. As an infrared optical material, calcium fluoride is widely available and was sometimes known by the Eastman Kodak trademarked name "Irtran-3", although this designation is obsolete. Fluorite should not be confused with fluoro-crown (or fluorine crown) glass, a type of low-dispersion glass that has special optical properties approaching fluorite. True fluorite is not a glass but a crystalline material. Lenses or optical groups made using this low dispersion glass as one or more elements exhibit less chromatic aberration than those utilizing conventional, less expensive crown glass and flint glass elements to make an achromatic lens. Optical groups employ a combination of different types of glass; each type of glass refracts light in a different way. By using combinations of different types of glass, lens manufacturers are able to cancel out or significantly reduce unwanted characteristics; chromatic aberration being the most important. The best of such lens designs are often called apochromatic (see above). 
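Returning to the acid-grade use described earlier in this section, the reaction CaF2 + H2SO4 → CaSO4 + 2 HF fixes the theoretical hydrogen fluoride yield per tonne of fluorspar. The sketch below is a back-of-the-envelope illustration only: it assumes complete conversion and ignores process losses, and the function and parameter names are made up for the example.

```python
# Idealized HF yield from acid-grade fluorspar, per the reaction quoted
# earlier in this article: CaF2 + H2SO4 -> CaSO4 + 2 HF.
# Assumes complete conversion of the CaF2 fraction; real yields are lower.

M_CAF2 = 78.07   # g/mol, molar mass of CaF2
M_HF = 20.01     # g/mol, molar mass of HF

def ideal_hf_yield_kg(fluorspar_kg: float, caf2_fraction: float = 0.97) -> float:
    """Theoretical mass of HF (kg) from fluorspar of a given CaF2 purity."""
    caf2_kg = fluorspar_kg * caf2_fraction
    mol_caf2 = caf2_kg * 1000.0 / M_CAF2
    mol_hf = 2.0 * mol_caf2              # 2 mol HF per mol CaF2
    return mol_hf * M_HF / 1000.0

# Example: one tonne of 97% ("acid grade") fluorspar
print(f"{ideal_hf_yield_kg(1000.0):.0f} kg HF")   # roughly 500 kg
```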
Fluoro-crown glass (such as Schott FK51) usually in combination with an appropriate "flint" glass (such as Schott KzFSN 2) can give very high performance in telescope objective lenses, as well as microscope objectives, and camera telephoto lenses. Fluorite elements are similarly paired with complementary "flint" elements (such as Schott LaK 10). The refractive qualities of fluorite and of certain flint elements provide a lower and more uniform dispersion across the spectrum of visible light, thereby keeping colors focused more closely together. Lenses made with fluorite are superior to fluoro-crown based lenses, at least for doublet telescope objectives; but are more difficult to produce and more costly. The use of fluorite for prisms and lenses was studied and promoted by Victor Schumann near the end of the 19th century. Naturally occurring fluorite crystals without optical defects were only large enough to produce microscope objectives. With the advent of synthetically grown fluorite crystals in the 1950s - 60s, it could be used instead of glass in some high-performance optical telescope and camera lens elements. In telescopes, fluorite elements allow high-resolution images of astronomical objects at high magnifications. Canon Inc. produces synthetic fluorite crystals that are used in their better telephoto lenses. The use of fluorite for telescope lenses has declined since the 1990s, as newer designs using fluoro-crown glass, including triplets, have offered comparable performance at lower prices. Fluorite and various combinations of fluoride compounds can be made into synthetic crystals which have applications in lasers and special optics for UV and infrared. Exposure tools for the semiconductor industry make use of fluorite optical elements for ultraviolet light at wavelengths of about 157 nanometers. Fluorite has a uniquely high transparency at this wavelength. Fluorite objective lenses are manufactured by the larger microscope firms (Nikon, Olympus, Carl Zeiss and Leica). Their transparence to ultraviolet light enables them to be used for fluorescence microscopy. The fluorite also serves to correct optical aberrations in these lenses. Nikon has previously manufactured at least one fluorite and synthetic quartz element camera lens (105 mm f/4.5 UV) for the production of ultraviolet images. Konica produced a fluorite lens for their SLR cameras – the Hexanon 300 mm f/6.3. Source of fluorine gas in nature In 2012, the first source of naturally occurring fluorine gas was found in fluorite mines in Bavaria, Germany. It was previously thought that fluorine gas did not occur naturally because it is so reactive, and would rapidly react with other chemicals. Fluorite is normally colorless, but some varied forms found nearby look black, and are known as 'fetid fluorite' or antozonite. The minerals, containing small amounts of uranium and its daughter products, release radiation sufficiently energetic to induce oxidation of fluoride anions within the structure, to fluorine that becomes trapped inside the mineral. The color of fetid fluorite is predominantly due to the calcium atoms remaining. Solid-state fluorine-19 NMR carried out on the gas contained in the antozonite, revealed a peak at 425 ppm, which is consistent with F2. Gallery
Flux
Flux describes any effect that appears to pass or travel (whether it actually moves or not) through a surface or substance. Flux is a concept in applied mathematics and vector calculus which has many applications in physics. For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In vector calculus flux is a scalar quantity, defined as the surface integral of the perpendicular component of a vector field over a surface. Terminology The word flux comes from Latin: fluxus means "flow", and fluere is "to flow". As fluxion, this term was introduced into differential calculus by Isaac Newton. The concept of heat flux was a key contribution of Joseph Fourier, in the analysis of heat transfer phenomena. His seminal treatise Théorie analytique de la chaleur (The Analytical Theory of Heat), defines fluxion as a central quantity and proceeds to derive the now well-known expressions of flux in terms of temperature differences across a slab, and then more generally in terms of temperature gradients or differentials of temperature, across other geometries. One could argue, based on the work of James Clerk Maxwell, that the transport definition precedes the definition of flux used in electromagnetism. The specific quote from Maxwell is: According to the transport definition, flux may be a single vector, or it may be a vector field / function of position. In the latter case flux can readily be integrated over a surface. By contrast, according to the electromagnetism definition, flux is the integral over a surface; it makes no sense to integrate a second-definition flux for one would be integrating over a surface twice. Thus, Maxwell's quote only makes sense if "flux" is being used according to the transport definition (and furthermore is a vector field rather than single vector). This is ironic because Maxwell was one of the major developers of what we now call "electric flux" and "magnetic flux" according to the electromagnetism definition. Their names in accordance with the quote (and transport definition) would be "surface integral of electric flux" and "surface integral of magnetic flux", in which case "electric flux" would instead be defined as "electric field" and "magnetic flux" defined as "magnetic field". This implies that Maxwell conceived of these fields as flows/fluxes of some sort. Given a flux according to the electromagnetism definition, the corresponding flux density, if that term is used, refers to its derivative along the surface that was integrated. By the Fundamental theorem of calculus, the corresponding flux density is a flux according to the transport definition. Given a current such as electric current—charge per time, current density would also be a flux according to the transport definition—charge per time per area. Due to the conflicting definitions of flux, and the interchangeability of flux, flow, and current in nontechnical English, all of the terms used in this paragraph are sometimes used interchangeably and ambiguously. Concrete fluxes in the rest of this article will be used in accordance to their broad acceptance in the literature, regardless of which definition of flux the term corresponds to. Flux as flow rate per unit area In transport phenomena (heat transfer, mass transfer and fluid dynamics), flux is defined as the rate of flow of a property per unit area, which has the dimensions [quantity]·[time]−1·[area]−1. The area is of the surface the property is flowing "through" or "across". 
For example, the amount of water that flows through a cross section of a river each second divided by the area of that cross section, or the amount of sunlight energy that lands on a patch of ground each second divided by the area of the patch, are kinds of flux. General mathematical definition (transport) Here are 3 definitions in increasing order of complexity. Each is a special case of the following. In all cases the frequent symbol j, (or J) is used for flux, q for the physical quantity that flows, t for time, and A for area. These identifiers will be written in bold when and only when they are vectors. First, flux as a (single) scalar: where In this case the surface in which flux is being measured is fixed and has area A. The surface is assumed to be flat, and the flow is assumed to be everywhere constant with respect to position and perpendicular to the surface. Second, flux as a scalar field defined along a surface, i.e. a function of points on the surface: As before, the surface is assumed to be flat, and the flow is assumed to be everywhere perpendicular to it. However the flow need not be constant. q is now a function of p, a point on the surface, and A, an area. Rather than measure the total flow through the surface, q measures the flow through the disk with area A centered at p along the surface. Finally, flux as a vector field: In this case, there is no fixed surface we are measuring over. q is a function of a point, an area, and a direction (given by a unit vector ), and measures the flow through the disk of area A perpendicular to that unit vector. I is defined picking the unit vector that maximizes the flow around the point, because the true flow is maximized across the disk that is perpendicular to it. The unit vector thus uniquely maximizes the function when it points in the "true direction" of the flow. (Strictly speaking, this is an abuse of notation because the "argmax" cannot directly compare vectors; we take the vector with the biggest norm instead.) Properties These direct definitions, especially the last, are rather unwieldy. For example, the argmax construction is artificial from the perspective of empirical measurements, when with a weathervane or similar one can easily deduce the direction of flux at a point. Rather than defining the vector flux directly, it is often more intuitive to state some properties about it. Furthermore, from these properties the flux can uniquely be determined anyway. If the flux j passes through the area at an angle θ to the area normal , then the dot product That is, the component of flux passing through the surface (i.e. normal to it) is jcosθ, while the component of flux passing tangential to the area is jsinθ, but there is no flux actually passing through the area in the tangential direction. The only component of flux passing normal to the area is the cosine component. For vector flux, the surface integral of j over a surface S, gives the proper flowing per unit of time through the surface: where A (and its infinitesimal) is the vector area combination of the magnitude of the area A through which the property passes and a unit vector normal to the area. Unlike in the second set of equations, the surface here need not be flat. 
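The formulas for the three definitions above, and for the properties that follow, have not survived in this text. The expressions below are standard reconstructions consistent with the surrounding prose and notation (j for flux, q for the transported quantity, t for time, A for area, n̂ for a unit normal); they are offered as a plausible restatement rather than a verbatim restoration.

Flux as a single scalar, for a flat surface of area A with uniform perpendicular flow:

$$j=\frac{1}{A}\frac{\mathrm{d}q}{\mathrm{d}t}$$

Flux as a scalar field on the surface, measuring the flow through a small disk of area A centred at the point p:

$$j(p)=\lim_{A\to 0}\frac{1}{A}\frac{\mathrm{d}q(p,A)}{\mathrm{d}t}$$

Flux as a vector field, taking the direction n̂ that maximizes the flow through a disk perpendicular to it:

$$\mathbf{j}(p)=j(p,\hat{\mathbf{n}}_{\max})\,\hat{\mathbf{n}}_{\max},\qquad \hat{\mathbf{n}}_{\max}=\operatorname*{arg\,max}_{\hat{\mathbf{n}}}\;j(p,\hat{\mathbf{n}})$$

For the properties: if the flux makes an angle θ with the surface normal n̂, the component actually passing through the surface is

$$\mathbf{j}\cdot\hat{\mathbf{n}}=j\cos\theta,$$

and the flow per unit time through a (not necessarily flat) surface S is the surface integral

$$\frac{\mathrm{d}q}{\mathrm{d}t}=\iint_{S}\mathbf{j}\cdot\mathrm{d}\mathbf{A}.$$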
Finally, we can integrate again over the time duration t1 to t2, getting the total amount of the property flowing through the surface in that time (t2 − t1): Transport fluxes Eight of the most common forms of flux from the transport phenomena literature are defined as follows: Momentum flux, the rate of transfer of momentum across a unit area (N·s·m−2·s−1). (Newton's law of viscosity) Heat flux, the rate of heat flow across a unit area (J·m−2·s−1). (Fourier's law of conduction) (This definition of heat flux fits Maxwell's original definition.) Diffusion flux, the rate of movement of molecules across a unit area (mol·m−2·s−1). (Fick's law of diffusion) Volumetric flux, the rate of volume flow across a unit area (m3·m−2·s−1). (Darcy's law of groundwater flow) Mass flux, the rate of mass flow across a unit area (kg·m−2·s−1). (Either an alternate form of Fick's law that includes the molecular mass, or an alternate form of Darcy's law that includes the density.) Radiative flux, the amount of energy transferred in the form of photons at a certain distance from the source per unit area per second (J·m−2·s−1). Used in astronomy to determine the magnitude and spectral class of a star. Also acts as a generalization of heat flux, which is equal to the radiative flux when restricted to the electromagnetic spectrum. Energy flux, the rate of transfer of energy through a unit area (J·m−2·s−1). The radiative flux and heat flux are specific cases of energy flux. Particle flux, the rate of transfer of particles through a unit area ([number of particles] m−2·s−1) These fluxes are vectors at each point in space, and have a definite magnitude and direction. Also, one can take the divergence of any of these fluxes to determine the accumulation rate of the quantity in a control volume around a given point in space. For incompressible flow, the divergence of the volume flux is zero. Chemical diffusion As mentioned above, chemical molar flux of a component A in an isothermal, isobaric system is defined in Fick's law of diffusion as: where the nabla symbol ∇ denotes the gradient operator, DAB is the diffusion coefficient (m2·s−1) of component A diffusing through component B, cA is the concentration (mol/m3) of component A. This flux has units of mol·m−2·s−1, and fits Maxwell's original definition of flux. For dilute gases, kinetic molecular theory relates the diffusion coefficient D to the particle density n = N/V, the molecular mass m, the collision cross section , and the absolute temperature T by where the second factor is the mean free path and the square root (with the Boltzmann constant k) is the mean velocity of the particles. In turbulent flows, the transport by eddy motion can be expressed as a grossly increased diffusion coefficient. Quantum mechanics In quantum mechanics, particles of mass m in the quantum state ψ(r, t) have a probability density defined as So the probability of finding a particle in a differential volume element d3r is Then the number of particles passing perpendicularly through unit area of a cross-section per unit time is the probability flux; This is sometimes referred to as the probability current or current density, or probability flux density. Flux as a surface integral General mathematical definition (surface integral) As a mathematical concept, flux is represented by the surface integral of a vector field, where F is a vector field, and dA is the vector area of the surface A, directed as the surface normal. 
In the second, equivalent form of this integral, with integrand F · n dA, n is the outward-pointing unit normal vector to the surface. The surface has to be orientable, i.e. its two sides can be distinguished: the surface must not fold back onto itself. The surface also has to be actually oriented, i.e. a convention is chosen as to which direction of flow is counted as positive; flow in the opposite direction is then counted as negative. The surface normal is usually directed by the right-hand rule. Conversely, one can consider the flux to be the more fundamental quantity and call the vector field the flux density. Often a vector field is drawn by curves (field lines) following the "flow"; the magnitude of the vector field is then the line density, and the flux through a surface is the number of lines. Lines originate from areas of positive divergence (sources) and end at areas of negative divergence (sinks).
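As a worked check of the surface-integral definition, the Python sketch below (an illustration with a field chosen for convenience, not an example from the article) numerically approximates the outward flux of the radial field F(x, y, z) = (x, y, z) through the unit sphere. Since this field has divergence 3 everywhere, the divergence theorem gives the exact value 3 · (4/3)π = 4π ≈ 12.566, which the crude spherical-grid quadrature reproduces.

import numpy as np

def F(x, y, z):
    # Radial vector field F = (x, y, z); any smooth field could be substituted here.
    return np.array([x, y, z])

# Midpoint quadrature over the unit sphere in spherical coordinates.
n_theta, n_phi = 200, 400
thetas = (np.arange(n_theta) + 0.5) * np.pi / n_theta        # polar angle
phis = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi        # azimuth
d_theta, d_phi = np.pi / n_theta, 2.0 * np.pi / n_phi

flux = 0.0
for theta in thetas:
    for phi in phis:
        # On the unit sphere the outward unit normal equals the position vector.
        n_hat = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
        dA = np.sin(theta) * d_theta * d_phi                  # scalar area element
        flux += np.dot(F(*n_hat), n_hat) * dA                 # F · n dA, n oriented outward

print(flux)   # ≈ 12.566 (4π), in agreement with the divergence theorem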
https://en.wikipedia.org/wiki/Fjord
Fjord
In physical geography, a fjord (also spelled fiord in New Zealand English; ) is a long, narrow sea inlet with steep sides or cliffs, created by a glacier. Fjords exist on the coasts of Antarctica, the Arctic, and surrounding landmasses of the northern and southern hemispheres. Norway's coastline is estimated to be long with its nearly 1,200 fjords, but only long excluding the fjords. Formation A true fjord is formed when a glacier cuts a U-shaped valley by ice segregation and abrasion of the surrounding bedrock. According to the standard model, glaciers formed in pre-glacial valleys with a gently sloping valley floor. The work of the glacier then left an overdeepened U-shaped valley that ends abruptly at a valley or trough end. Such valleys are fjords when flooded by the ocean. Thresholds above sea level create freshwater lakes. Glacial melting is accompanied by the rebounding of Earth's crust as the ice load and eroded sediment is removed (also called isostasy or glacial rebound). In some cases, this rebound is faster than sea level rise. Most fjords are deeper than the adjacent sea; Sognefjord, Norway, reaches as much as below sea level. Fjords generally have a sill or shoal (bedrock) at their mouth caused by the previous glacier's reduced erosion rate and terminal moraine. In many cases this sill causes extreme currents and large saltwater rapids (see skookumchuck). Saltstraumen in Norway is often described as the world's strongest tidal current. These characteristics distinguish fjords from rias (such as the Bay of Kotor), which are drowned valleys flooded by the rising sea. Drammensfjorden is cut almost in two by the Svelvik "ridge", a sandy moraine that was below sea level when it was covered by ice, but after the post-glacial rebound reaches above the fjord. In the 19th century, Jens Esmark introduced the theory that fjords are or have been created by glaciers and that large parts of Northern Europe had been covered by thick ice in prehistory. Thresholds at the mouths and overdeepening of fjords compared to the ocean are the strongest evidence of glacial origin, and these thresholds are mostly rocky. Thresholds are related to sounds and low land where the ice could spread out and therefore have less erosive force. John Walter Gregory argued that fjords are of tectonic origin and that glaciers had a negligible role in their formation. Gregory's views were rejected by subsequent research and publications. In the case of Hardangerfjord the fractures of the Caledonian fold has guided the erosion by glaciers, while there is no clear relation between the direction of Sognefjord and the fold pattern. This relationship between fractures and direction of fjords is also observed in Lyngen. Preglacial, tertiary rivers presumably eroded the surface and created valleys that later guided the glacial flow and erosion of the bedrock. This may in particular have been the case in Western Norway where the tertiary uplift of the landmass amplified eroding forces of rivers. Confluence of tributary fjords led to excavation of the deepest fjord basins. Near the very coast, the typical West Norwegian glacier spread out (presumably through sounds and low valleys) and lost their concentration and reduced the glaciers' power to erode leaving bedrock thresholds. Bolstadfjorden is deep with a threshold of only , while the deep Sognefjorden has a threshold around deep. 
Hardangerfjord is made up of several basins separated by thresholds: The deepest basin Samlafjorden between Jonaneset (Jondal) and Ålvik with a distinct threshold at Vikingneset in Kvam Municipality. Hanging valleys are common along glaciated fjords and U-shaped valleys. A hanging valley is a tributary valley that is higher than the main valley and was created by tributary glacier flows into a glacier of larger volume. The shallower valley appears to be 'hanging' above the main valley or a fjord. Often, waterfalls form at or near the outlet of the upper valley. Small waterfalls within these fjords are also used as freshwater resources. Hanging valleys also occur underwater in fjord systems. The branches of Sognefjord are for instance much shallower than the main fjord. The mouth of Fjærlandsfjord is about deep while the main fjord is nearby. The mouth of Ikjefjord is only deep while the main fjord is around at the same point. Features and variations Hydrology During the winter season, there is usually little inflow of freshwater. Surface water and deeper water (down to or more) are mixed during winter because of the steady cooling of the surface and wind. In the deep fjords, there is still fresh water from the summer with less density than the saltier water along the coast. Offshore wind, common in the fjord areas during winter, sets up a current on the surface from the inner to the outer parts. This current on the surface in turn pulls dense salt water from the coast across the fjord threshold and into the deepest parts of the fjord. Bolstadfjorden has a threshold of only and strong inflow of freshwater from Vosso river creates a brackish surface that blocks circulation of the deep fjord. The deeper, salt layers of Bolstadfjorden are deprived of oxygen and the seabed is covered with organic material. The shallow threshold also creates a strong tidal current. During the summer season, there is usually a large inflow of river water in the inner areas. This freshwater gets mixed with saltwater creating a layer of brackish water with a slightly higher surface than the ocean which in turn sets up a current from the river mouths towards the ocean. This current is gradually more salty towards the coast and right under the surface current there is a reverse current of saltier water from the coast. In the deeper parts of the fjord the cold water remaining from winter is still and separated from the atmosphere by the brackish top layer. This deep water is ventilated by mixing with the upper layer causing it to warm and freshen over the summer. In fjords with a shallow threshold or low levels of mixing this deep water is not replaced every year and low oxygen concentration makes the deep water unsuitable for fish and animals. In the most extreme cases, there is a constant barrier of freshwater on the surface and the fjord freezes over such that there is no oxygen below the surface. Drammensfjorden is one example. The mixing in fjords predominantly results from the propagation of an internal tide from the entrance sill or internal seiching. The Gaupnefjorden branch of Sognefjorden is strongly affected by freshwater as a glacial river flows in. Velfjorden has little inflow of freshwater. Coral reefs In 2000, some coral reefs were discovered along the bottoms of the Norwegian fjords. These reefs were found in fjords from the north of Norway to the south. The marine life on the reefs is believed to be one of the most important reasons why the Norwegian coastline is such a generous fishing ground. 
Since this discovery is fairly new, little research has been done. The reefs are host to thousands of lifeforms such as plankton, coral, anemones, fish, several species of shark, and many more. Most are specially adapted to life under the greater pressure of the water column above it, and the total darkness of the deep sea. New Zealand's fjords are also host to deep-water corals, but a surface layer of dark fresh water allows these corals to grow in much shallower water than usual. An underwater observatory in Milford Sound allows tourists to view them without diving. Skerries In some places near the seaward margins of areas with fjords, the ice-scoured channels are so numerous and varied in direction that the rocky coast is divided into thousands of island blocks, some large and mountainous while others are merely rocky points or rock reefs, menacing navigation. These are called skerries. The term skerry is derived from the Old Norse , which means a rock in the sea. Skerries most commonly formed at the outlet of fjords where submerged glacially formed valleys perpendicular to the coast join with other cross valleys in a complex array. The island fringe of Norway is such a group of skerries (called a ); many of the cross fjords are so arranged that they parallel the coast and provide a protected channel behind an almost unbroken succession of mountainous islands and skerries. By this channel, one can travel through a protected passage almost the entire route from Stavanger to North Cape, Norway. The Blindleia is a skerry-protected waterway that starts near Kristiansand in southern Norway and continues past Lillesand. The Swedish coast along Bohuslän is likewise skerry guarded. The Inside Passage provides a similar route from Seattle, Washington, and Vancouver, British Columbia, to Skagway, Alaska. Yet another such skerry-protected passage extends from the Straits of Magellan north for . Phytoplankton Fjords provide unique environmental conditions for phytoplankton communities. In polar fjords, glacier and ice sheet outflow add cold, fresh meltwater along with transported sediment into the body of water. Nutrients provided by this outflow can significantly enhance phytoplankton growth. For example, in some fjords of the West Antarctic Peninsula (WAP), nutrient enrichment from meltwater drives diatom blooms, a highly productive group of phytoplankton that enable such fjords to be valuable feeding grounds for other species. It is possible that as climate change reduces long-term meltwater output, nutrient dynamics within such fjords will shift to favor less productive species, destabilizing the food web ecology of fjord systems. In addition to nutrient flux, sediment carried by flowing glaciers can become suspended in the water column, increasing turbidity and reducing light penetration into greater depths of the fjord. This effect can limit the available light for photosynthesis in deeper areas of the water mass, reducing phytoplankton abundance beneath the surface. Overall, phytoplankton abundance and species composition within fjords is highly seasonal, varying as a result of seasonal light availability and water properties that depend on glacial melt and the formation of sea ice. The study of phytoplankton communities within fjords is an active area of research, supported by groups such as FjordPhyto, a citizen science initiative to study phytoplankton samples collected by local residents, tourists, and boaters of all backgrounds. 
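The attenuation effect described above can be sketched with the standard exponential decay of downwelling light with depth, I(z) = I(0)·exp(−Kd·z), where Kd is a diffuse attenuation coefficient. The Python example below uses assumed round-number coefficients (0.1 per metre for relatively clear fjord water, 0.5 per metre for a sediment-laden meltwater plume); these are illustrative values only, not measurements from any particular fjord.

import numpy as np

def light_fraction(depth_m, kd_per_m):
    # Exponential (Beer–Lambert style) decay of downwelling light with depth.
    return np.exp(-kd_per_m * depth_m)

depths = np.array([1.0, 5.0, 10.0, 20.0])   # metres below the surface
kd_clear = 0.1    # assumed attenuation coefficient for clearer fjord water (1/m)
kd_turbid = 0.5   # assumed coefficient for a turbid, sediment-laden surface layer (1/m)

for z in depths:
    print(f"{z:5.1f} m: clear {light_fraction(z, kd_clear):6.1%}, "
          f"turbid {light_fraction(z, kd_turbid):8.3%}")
# At 10 m the turbid water transmits well under 1% of surface light, while the
# clearer water still transmits roughly a third, consistent with reduced
# phytoplankton growth at depth in sediment-rich fjords.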
Epishelf lakes An epishelf lake forms when meltwater is trapped behind a floating ice shelf and the freshwater floats on the denser saltwater below. Its surface may freeze forming an isolated ecosystem. Etymology The word fjord is borrowed from Norwegian, where it is pronounced , , or in various dialects and has a more general meaning, referring in many cases to any long, narrow body of water, inlet or channel (for example, see Oslofjord). The Norwegian word is inherited from Old Norse , a noun which refers to a 'lake-like' body of water used for passage and ferrying and is closely related to the noun "travelling, ferrying, journey". Both words go back to Indo-European "crossing", from the root "cross". The words and ferry are of the same origin. The Scandinavian fjord, Proto-Scandinavian *, is the origin for similar Germanic words: Icelandic , Faroese , Swedish (for Baltic waterbodies), Scots (for marine waterbodies, mainly in Scotland and northern England). The Norse noun was adopted in German as , used for the narrow long bays of Schleswig-Holstein, and in English as firth "fjord, river mouth". The English word ford (compare German , Low German or , in Dutch names such as Vilvoorde, Ancient Greek , , and Latin ) is assumed to originate from Germanic and Indo-European root * meaning "crossing point". Fjord/firth/Förde as well as ford/Furt/Vörde/voorde refer to a Germanic noun for a travel: North Germanic or and of the verb to travel, Dutch , German ; English to fare. As a loanword from Norwegian, it is one of the few words in the English language to start with the sequence fj. The word was for a long time normally spelled fiord, a spelling preserved in place names such as Grise Fiord. The fiord spelling mostly remains only in New Zealand English, as in the place name Fiordland. Scandinavian usage The use of the word fjord in Norwegian, Danish and Swedish is more general than in English and in international scientific terminology. In Scandinavia, fjord is used for a narrow inlet of the sea in Norway, Denmark and western Sweden, but this is not its only application. In Norway and Iceland, the usage is closest to the Old Norse, with fjord used for both a firth and for a long, narrow inlet. In eastern Norway, the term is also applied to long narrow freshwater lakes (Randsfjorden and Tyrifjorden) and sometimes even to rivers (for instance in Flå Municipality in Hallingdal, the Hallingdal river is referred to as ). In southeast Sweden, the name fjard is a subdivision of the term 'fjord' used for bays, bights and narrow inlets on the Swedish Baltic Sea coast, and in most Swedish lakes. This latter term is also used for bodies of water off the coast of Finland where Finland Swedish is spoken. In Danish, the word may even apply to shallow lagoons. In modern Icelandic, is still used with the broader meaning of firth or inlet. In Faroese is used both about inlets and about broader sounds, whereas a narrower sound is called . In the Finnish language, a word is used although there is only one fjord in Finland. In old Norse genitive was fjarðar whereas dative was firði. The dative form has become common place names like Førde (for instance Førde), Fyrde or Førre (for instance Førre). The German use of the word for long narrow bays on their Baltic Sea coastline, indicates a common Germanic origin of the word. The landscape consists mainly of moraine heaps. The and some "fjords" on the east side of Jutland, Denmark are also of glacial origin. 
But while the glaciers digging "real" fjords moved from the mountains to the sea, in Denmark and Germany they were tongues of a huge glacier covering the basin of which is now the Baltic Sea. See Förden and East Jutland Fjorde. Whereas fjord names mostly describe bays (though not always geological fjords), straits in the same regions typically are named Sund, in Scandinavian languages as well as in German. The word is related to "to sunder" in the meaning of "to separate". So the use of Sound to name fjords in North America and New Zealand differs from the European meaning of that word. The name of Wexford in Ireland is originally derived from ("inlet of the mud flats") in Old Norse, as used by the Viking settlers—though the inlet at that place in modern terms is an estuary, not a fjord. Similarly the name of Milford (now Milford Haven) in Wales is derived from ("sandbank fjord/inlet"), though the inlet on which it is located is actually a ria. Before or in the early phase of Old Norse was another common noun for fjords and other inlets of the ocean. This word has survived only as a suffix in names of some Scandinavian fjords and has in same cases also been transferred to adjacent settlements or surrounding areas for instance Hardanger, Stavanger, and Geiranger. Differences in definitions The differences in usage between the English and the Scandinavian languages have contributed to confusion in the use of the term fjord. Bodies of water that are clearly fjords in Scandinavian languages are not considered fjords in English; similarly bodies of water that would clearly not be fjords in the Scandinavian sense have been named or suggested to be fjords. Examples of this confused usage follow. In the Danish language some inlets are called a fjord, but are, according to the English language definition, technically not a fjord, such as Roskilde Fjord. Limfjord in English terminology is a sound, since it separates the North Jutlandic Island (Vendsyssel-Thy) from the rest of Jutland. However, the Limfjord once was a fjord until the sea broke through from the west. Ringkøbing Fjord on the western coast of Jutland is a lagoon. The long narrow fjords of Denmark's Baltic Sea coast like the German were dug by ice moving from the sea upon land, while fjords in the geological sense were dug by ice moving from the mountains down to the sea. However, some definitions of a fjord is: "A long narrow inlet consisting of only one inlet created by glacial activity". Examples of Danish fjords are: Kolding Fjord, Vejle Fjord and Mariager Fjord. The fjords in Finnmark in Norway, which are fjords in the Scandinavian sense of the term, are not universally considered to be fjords by the scientific community, because although glacially formed, most Finnmark fjords lack the steep-sided valleys of the more southerly Norwegian fjords. The glacial pack was deep enough to cover even the high grounds when they were formed. The Oslofjord, on the other hand, is a rift valley, and not glacially formed. The indigenous Māori people of New Zealand see a fjord as a kind of sea () that runs by a bluff (, altogether "bluff sea"). "Fjords" not created by glaciers The term "fjord" is sometimes applied to steep-sided inlets which were not created by glaciers. Most such inlets are drowned river canyons or rias. Examples include: In Acapulco, Mexico, the calanques (narrow, rocky inlets) on the western side of the city, where the famous cliff-divers perform daily, are described in the city's tourist literature as being fjords. 
The calanques of Parc national des Calanques, Provence, France, are also referred to as fjords. Camel Estuary at Padstow, Cornwall, England, is sometimes referred to as a fjord. despite being classified as a ria. The Fiordo di Furore in Italy is actually a ria. Golfo Dulce in Puntarenas, Costa Rica. Like the Saco de Mamangua below, it is sometimes described as a "tropical fjord". The Khor ash Sham in the Musandam Peninsula in Oman, and other "khors" or inlets in the deeply indented coast of Musandam, are often described as "fjords". They were formed by the subduction of the Arabian tectonic plate beneath the Eurasian plate. Bay of Kotor in Montenegro the Lim bay in Istria, Croatia, is sometimes called "Lim fjord" although it is a ria dug by the river Pazinčica. The Croats call it , which does not translate precisely to the English equivalent either. Milford Haven Waterway in Pembrokeshire, Wales. This inlet is a ria. The place-name is derived from Old Norse Melrfjordr meaning "sandbank fjord". Port Davey in Tasmania, Australia is popularly believed to be a "fjord", but is now thought to be part of a drowned river valley system. in Paraty, Rio de Janeiro, Brazil. Colloquially, it's been labeled the world's "only tropical fjord". Freshwater fjords Some Norwegian freshwater lakes that have formed in long glacially carved valleys with sill thresholds, ice front deltas or terminal moraines blocking the outlet follow the Norwegian naming convention; they are frequently named fjords. Ice front deltas developed when the ice front was relatively stable for long time during the melting of the ice shield. The resulting landform is an isthmus between the lake and the saltwater fjord, in Norwegian called "eid" as in placename Eidfjord or Nordfjordeid. The post-glacial rebound changed these deltas into terraces up to the level of the original sea level. In Eidfjord, Eio has dug through the original delta and left a terrace while lake is only above sea level. Such deposits are valuable sources of high-quality building materials (sand and gravel) for houses and infrastructure. Eidfjord village sits on the eid or isthmus between Eidfjordvatnet lake and Eidfjorden branch of Hardangerfjord. Nordfjordeid is the isthmus with a village between Hornindalsvatnet lake and Nordfjord. Such lakes are also denoted fjord valley lakes by geologists. One of Norway's largest is Tyrifjorden at above sea level and an average depth at most of the lake is under sea level. Norway's largest lake, Mjøsa, is also referred to as "the fjord" by locals. Another example is the freshwater fjord Movatnet (Mo lake) that until 1743 was separated from Romarheimsfjorden by an isthmus and connected by a short river. During a flood in November 1743, the river bed eroded and sea water could flow into the lake at high tide. Eventually, Movatnet became a saltwater fjord and renamed Mofjorden (). Like fjords, freshwater lakes are often deep. For instance Hornindalsvatnet is at least deep and water takes an average of 16 years to flow through the lake. Such lakes created by glacial action are also called fjord lakes or moraine-dammed lakes. Some of these lakes were salt after the ice age but later cut off from the ocean during the post-glacial rebound. At the end of the ice age Eastern Norway was about lower (the marine limit). When the ice cap receded and allowed the ocean to fill valleys and lowlands, and lakes like Mjøsa and Tyrifjorden were part of the ocean while Drammen valley was a narrow fjord. 
At the time of the Vikings Drammensfjord was still higher than today and reached the town of Hokksund, while parts of what is now the city of Drammen was under water. After the ice age the ocean was about at Notodden. The ocean stretched like a fjord through Heddalsvatnet all the way to Hjartdal. Post-glacial rebound eventually separated Heddalsvatnet from the ocean and turned it into a freshwater lake. In neolithic times Heddalsvatnet was still a saltwater fjord connected to the ocean, and was cut off from the ocean around 1500 BC. Some freshwater fjords such as Slidrefjord are above the marine limit. Like freshwater fjords, the continuation of fjords on land are in the same way denoted as fjord-valleys. For instance Flåmsdal (Flåm valley) and Måbødalen. Outside of Norway, the three western arms of New Zealand's Lake Te Anau are named North Fiord, Middle Fiord and South Fiord. Another freshwater "fjord" in a larger lake is Western Brook Pond, in Newfoundland's Gros Morne National Park; it is also often described as a fjord, but is actually a freshwater lake cut off from the sea, so is not a fjord in the English sense of the term. Locally they refer to it as a "landlocked fjord". Such lakes are sometimes called "fjord lakes". Okanagan Lake was the first North American lake to be so described, in 1962. The bedrock there has been eroded up to below sea level, which is below the surrounding regional topography. Fjord lakes are common on the inland lea of the Coast Mountains and Cascade Range; notable ones include Lake Chelan, Seton Lake, Chilko Lake, and Atlin Lake. Kootenay Lake, Slocan Lake and others in the basin of the Columbia River are also fjord-like in nature, and created by glaciation in the same way. Along the British Columbia Coast, a notable fjord-lake is Owikeno Lake, which is a freshwater extension of Rivers Inlet. Quesnel Lake, located in central British Columbia, is claimed to be the deepest fjord formed lake on Earth. Great Lakes A family of freshwater fjords are the embayments of the North American Great Lakes. Baie Fine is located on the northwestern coast of Georgian Bay of Lake Huron in Ontario, and Huron Bay is located on the southern shore of Lake Superior in Michigan. Locations The principal mountainous regions where fjords have formed are in the higher middle latitudes and the high latitudes reaching to 80°N (Svalbard, Greenland), where, during the glacial period, many valley glaciers descended to the then-lower sea level. The fjords develop best in mountain ranges against which the prevailing westerly marine winds are orographically lifted over the mountainous regions, resulting in abundant snowfall to feed the glaciers. Hence coasts having the most pronounced fjords include the west coast of Norway, the west coast of North America from Puget Sound to Alaska, the southwest coast of New Zealand, and the west and to south-western coasts of South America, chiefly in Chile. Principal fjord regions West coast of Europe Faroe Islands Westfjords of Iceland Eastern Region of Iceland West Highlands of Scotland Norway, the whole coast including Svalbard Kola Peninsula in Russia West coast of New Zealand Fiordland, in the southwest of the South Island Northwest coast of North America Coast of Alaska, United States: Lynn Canal, Glacier Bay, etc. British Columbia Coast, Canada: from the Alaskan Border along the Portland Canal to Indian Arm; Kingcome Inlet is a typical West Coast fjord. 
Hood Canal in Washington, United States and various of the arms of Puget Sound Northeast coast of North America Labrador: Saglek Fjord, Nachvak Fjord, Hebron Fjord The east coast of Ungava Bay. Baffin Island Ellesmere Island Greenland: Kangerlussuaq, Ilulissat Icefjord, Scoresby Sund, Disko Island Southwest coast of South America Fjords and channels of Chile Isla de los Estados, Argentina Other glaciated or formerly glaciated regions Other regions have fjords, but many of these are less pronounced due to more limited exposure to westerly winds and less pronounced relief. Areas include: Europe Ireland Lough Swilly Carlingford Lough Killary Harbour Russia (see also List of fjords of Russia) Chukchi Peninsula Kola Peninsula Scotland (where they are called firths, the Scots language cognate of fjord; lochs or sea lochs). Notable examples are: Loch Long Loch Fyne, Scotland's longest fjord at 65 km Loch Etive Sweden Gullmarsfjorden, in Bohuslän, Sweden Wales Mawddach Estuary, a fjord in-filled by glacial deposits. North America Canada: the west and south coasts of Newfoundland, particularly: Facheux Bay Bonne Bay in Gros Morne National Park Aviron Bay La Hune Bay Bay de Vieux White Bear Bay Baie d'Espoir La Poile Bay Bay Le Moine the Canadian Arctic Archipelago Quebec, Saguenay Fjord United States: Somes Sound, Acadia National Park, Maine Hudson River most clearly seen at The Palisades Puget Sound South America Argentina: Isla de los Estados Arctic Arctic islands Novaya Zemlya Severnaya Zemlya Antarctica South Georgia (UK) Kerguelen Islands (France) particularly the Antarctic Peninsula Sub-Antarctic islands Extreme fjords The longest fjords in the world are: Nansen Sound/Greely Fiord/Tanquary Fiord in Canada— Chatham Strait/Lynn Canal in United States— Scoresby Sund in Greenland— Concepción Channel-Puerto Simpson in Chile— Sognefjord in Norway— Independence Fjord in Greenland— Matochkin Shar, Novaya Zemlya, Russia— (a strait with a fjord structure) Deep fjords include: Skelton Inlet in Antarctica— Sognefjord in Norway— (the mountains then rise to up to and more, Hurrungane reaches ) Messier Channel in Tortel, Chile— Baker Channel in Tortel, Chile— Heritage fjords Norway has several heritage fjords, including UNESCO World Heritage Sites and other notable fjords, these will require visiting ships to be low-emission by 2026 and zero-emission by 2032 Geirangerfjord Nærøyfjord Hardangerfjord Trollfjord Urnes Stave Church Hjørundfjord
https://en.wikipedia.org/wiki/Crayfish
Crayfish
Crayfish are freshwater crustaceans belonging to the infraorder Astacidea, which also contains lobsters. Taxonomically, they are members of the superfamilies Astacoidea and Parastacoidea. They breathe through feather-like gills. Some species are found in brooks and streams, where fresh water is running, while others thrive in swamps, ditches, and paddy fields. Most crayfish cannot tolerate polluted water, although some species, such as Procambarus clarkii, are hardier. Crayfish feed on animals and plants, either living or decomposing, and detritus. The term "crayfish" is applied to saltwater species in some countries. Terminology The name "crayfish" comes from the Old French word (Modern French ). The word has been modified to "crayfish" by association with "fish" (folk etymology). The largely American variant "crawfish" is similarly derived. Some kinds of crayfish are known locally as lobsters, crawdads, mudbugs, and yabbies. In the Eastern United States, "crayfish" is more common in the north, while "crawdad" is heard more in central and southwestern regions, and "crawfish" farther south, although considerable overlaps exist. The study of crayfish is called astacology. Anatomy The body of a decapod crustacean, such as a crab, lobster, or prawn (shrimp), is made up of twenty body segments grouped into two main body parts, the cephalothorax and the abdomen. Each segment may possess one pair of appendages, although in various groups, these may be reduced or missing. On average, crayfish grow to in length. Walking legs have a small claw at the end. Diet Crayfish are opportunistic omnivorous scavengers, with the ability to filter and process mud. In aquaculture ponds using isotope analysis they were shown to build body tissue selectively from the animal protein portion of pelleted food and not the other components of the pellet. They have the potential to eat most foods, even nutrient poor material such as grass, leaves, and paper, but can be highly selective and need variety to balance their diet. The personalities of the individual crayfish can be a key determinant in the food preference behaviour in aquaria. Crayfish all over the world can be seen in an ecological role of benthic dwellers, so this is where most of their food is obtained - at the sediment/water interface in ponds, lakes, swamps, or burrows. When the gut contents are analysed, most of the contents is mud: fine particulate organic matter (FPOM) and mixed particles of lignin and cellulose (roots, leaves, bark, wood). Some animal material can also be identified, but this only contributes a small portion of the diet by volume. They feed on submerged vegetable material at times, but their ability to catch large living animal material is restricted. They can feed on interstitial organisms if they can be grasped in the small feeding claws. They can be lured into traps with an array of baits from dog biscuits, fish heads, meat, etc., all of which reinforces the fact that they are generalist feeders. On a day-to-day basis, they consume what they can acquire in their immediate environment in limited space and time available - detritus. At a microbial level, the FPOM has a high surface area of organic particles and consists of a plethora of substrate and bacteria, fungi, micro-algae, meiofauna, partially decomposed organic material and mucus. This mucus or "slime" is a biofilm and can be felt on the surface of leaves and sticks. 
Also crayfish have been shown to be coprophagic - eating their own faeces, they also eat their own exuviae (moulted carapace) and each other. They have even been observed leaving the water to graze. Detritus or mud is a mixture of dead plankton (plant and animal), organic wastes from the water column, and debris derived from the aquatic and terrestrial environments. Mostly detritus is in the end phase of decomposition and is recognised as black organic mud. The crayfish usually ingest the material in only a few minutes, as distinct from grazing for many hours. The material is mixed with digestive fluids and sorted by size. The finer particles follow a slower and more exacting route through to the hindgut, compared to the coarser material. The coarser material is eliminated first and often reappears in approximately 10 to 12 hours, whereas the finer material is usually eliminated from 16 to 26 hours after ingestion. All waste products coming out through the hindgut are wrapped in a peritrophic membrane, so they look like a tube. Such an investment in the wrapping of the microbial free faeces in a protein rich membrane is most likely the reason they are coprophagic. Such feeding behaviour based on selection, ingestion, and extreme processing ensures periodic feeding, as distinct from continuous grazing. They tend to eat to satiation and then take many hours to process the material, leaving minimal chance of having more room to ingest other items. Crayfish usually have limited home range and so they rest, digest, and eliminate their waste, most commonly in the same location each day. Feeding exposes the crayfish to risk of predation, and so feeding behaviour is often rapid and synchronised with feeding processes that reduce such risks — eat, hide, process and eliminate. Knowledge of the diet of these creatures was considered too complex since the first book ever written in the field of zoology, The Crayfish by T.H. Huxley (1879), where they were described as "detritivores". This is why most researchers have not attempted to understand the diet of freshwater crayfish. The most complex study which matched the structure and function of the whole digestive tract with ingested material was performed in the 1990s by Brett O'Brien on marron, the least aggressive of the larger freshwater crayfish with aquaculture potential, similar to redclaw and yabbies. Classification and geographical distribution Crayfish are closely related to lobsters, and together they belong to the infraorder Astacidea. Their phylogeny can be shown in the simplified cladogram below: Four extant (living) families of crayfish are described, three in the Northern Hemisphere and one in the Southern Hemisphere. The Southern Hemisphere (Gondwana-distributed) family Parastacidae, with 14 extant genera and two extinct genera, live(d) in South America, Madagascar, and Australasia. They are distinguished by the absence of the first pair of pleopods. Of the other three Northern Hemisphere families (grouped in the superfamily Astacoidea), the four genera of the family Astacidae live in western Eurasia and western North America, the 15 genera of the family Cambaridae live in eastern North America, and the single genus of Cambaroididae live in eastern Asia. North America The greatest diversity of crayfish species is found in southeastern North America, with over 330 species in 15 genera, all in the family Cambaridae. 
A further genus of astacid crayfish is found in the Pacific Northwest and the headwaters of some rivers east of the Continental Divide. Many crayfish are also found in lowland areas where the water is abundant in calcium, and oxygen rises from underground springs. Crayfish are also found in some non-coastal wetlands; eight species of crayfish live in Iowa, for example. In 1983, Louisiana designated the crayfish, or crawfish as they are commonly called, as its official state crustacean. Louisiana produces of crawfish per year with the red swamp and white river crawfish being the main species harvested. Crawfish are a part of Cajun culture dating back hundreds of years. A variety of cottage industries have developed as a result of commercialized crawfish iconography. Their products include crawfish attached to wooden plaques, T-shirts with crawfish logos, and crawfish pendants, earrings, and necklaces made of gold or silver. Australia Australia has over 100 species in a dozen genera. It is home to the world's three largest freshwater crayfish: the Tasmanian giant freshwater crayfish Astacopsis gouldi, which can achieve a mass over and is found in rivers of northern Tasmania the Murray crayfish Euastacus armatus, which can reach , although reports of animals up to have been made. It is found in much of the southern Murray-Darling basin. the marron from Western Australia (now believed to be two species, Cherax tenuimanus and C. cainii) which may reach Many of the better-known Australian crayfish are of the genus Cherax, and include the common yabby (C. destructor), western yabby (C. preissii), and red-claw crayfish (C. quadricarinatus). The marron species C. tenuimanus is critically endangered, while other large Australasian crayfish are threatened or endangered. New Zealand In New Zealand, two species of Paranephrops are endemic, and are known by the Māori name . Other animals In Australia, New Zealand, and South Africa, the term "crayfish" or "cray" generally refers to a saltwater spiny lobster, of the genus Jasus that is indigenous to much of southern Oceania, while the freshwater species are usually called yabbies or , from the indigenous Australian and Māori names for the animal, respectively, or by other names specific to each species. Exceptions include western rock lobster (of the Palinuridae family) found on the west coast of Australia (it is a spiny lobster, but not of Jasus); the Tasmanian giant freshwater crayfish (from the Parastacidae family and therefore a true crayfish) found only in Tasmania; and the Murray crayfish found along Australia's Murray River. In Singapore, the term crayfish typically refers to Thenus orientalis, a seawater crustacean from the slipper lobster family. True crayfish are not native to Singapore, but are commonly found as pets, or as an invasive species (Cherax quadricarinatus) in the many water catchment areas, and are alternatively known as freshwater lobsters. In the United Kingdom and Ireland, the terms crayfish or crawfish commonly refer to the European spiny lobster, a saltwater species found in much of the East Atlantic and Mediterranean. The only true crayfish species native to the British Isles is the endangered white clawed crayfish. Fossil record Fossil burrows very similar in construction to those of modern crayfish and likely produced by early crayfish are known from the Early Permian (~300-270 million years ago) of equatorial Pangea, in what is now North America (Washington Formation), and Europe (Sardinia). 
The oldest body fossils assigned to crayfish are known from the Late Triassic (~230-200 million years ago) Chinle Formation of North America, assigned to the species "Enoploclytia" porteri and Camborygma eumekenomos, which are not assigned to any modern families. An indeterminate member of the modern family Cambaridae is known from the Late Jurassic Morrison Formation of North America. The earliest records of other modern families date to the Early Cretaceous, including the parastacid Palaeoechinastacus from Australia which is 115 million years old, the cambaroidid Palaeocambarus from the Yixian Formation of China which is likely around 120 million years old (Barremian-Aptian), and the astacid "Austropotamobius" llopisi from the Las Hoyas site in Spain (Barremian). Threats to crayfish Crayfish are susceptible to infections such as crayfish plague and to environmental stressors including acidification. In Europe, they are particularly threatened by crayfish plague, which is caused by the North American water mold Aphanomyces astaci. This water mold was transmitted to Europe when North American species of crayfish were introduced. Species of the genus Astacus are particularly susceptible to infection, allowing the plague-coevolved signal crayfish (native to western North America) to invade parts of Europe. Acid rain can cause problems for crayfish across the world. In whole-ecosystem experiments simulating acid rain at the Experimental Lakes Area in Ontario, Canada, crayfish populations crashed – probably because their exoskeletons are weaker in acidified environments. Invasive pest In several countries, particularly in Europe, native species of crayfish are under threat by imported species, particularly the signal crayfish (Pacifastacus leniusculus). Crayfish are also considered an invasive predatory species, endangering native European species such as the Italian agile frog and the painted frog in Malta. Uses Culinary use Crayfish are eaten worldwide. Like other edible crustaceans, only a small portion of the body of a crayfish is eaten. In most prepared dishes, such as soups, bisques and étouffées, only the tail portion is served. At crawfish boils or other meals where the entire body of the crayfish is presented, other portions, such as the claw meat, may be eaten. Research shows that crayfish do not die immediately when boiled alive, and respond to pain in a similar way to mammals. Then the stress hormone cortisol is released and this leads to the formation of lactic acid in the muscles, which makes the meat taste sour. Crayfish can be cooked more humanely by first freezing them unconscious for a few hours, then destroying the central nervous system along their abdomen by cutting the crayfish lengthwise with a long knife down the center of the crayfish before cooking it. Global crayfish production is centered in Asia, primarily China. In 2018, Asian production accounted for 95% of the world's crawfish supply. Crayfish is part of Swedish cuisine and is usually eaten in August at special crayfish parties (). Documentation of the consumption of crayfish dates to at least the 16th century. On the Swedish west coast, Nephrops norvegicus (, ) is more commonly eaten while various freshwater crayfish are consumed in the rest of the country. Prior to the 1960s, crayfish was largely inaccessible to the urban population in Sweden and consumption was largely limited to the upper classes or farmers holding fishing rights in fresh water lakes. 
With the introduction of import of frozen crayfish the crayfish party is now widely practiced across all spheres in Sweden and among the Swedish-speaking population of Finland. In the United States, crayfish production is strongly centered in Louisiana, with 93% of crayfish farms located in the state as of 2018. In 1987, Louisiana produced 90% of the crayfish harvested in the world, 70% of which were consumed locally. In 2007, the Louisiana crayfish harvest was about 54,800 tons, almost all of it from aquaculture. About 70–80% of crayfish produced in Louisiana are Procambarus clarkii (red swamp crawfish), with the remaining 20–30% being Procambarus zonangulus (white river crawfish). Optimum dietary nutritional requirement of freshwater crayfish, or crayfish nutrient specifications are now available for aquaculture feed producers Like all crustaceans, crayfish are not kosher because they are aquatic animals that do not have both fins and scales. They are therefore not eaten by observant Jews. Bait Crayfish are preyed upon by a variety of ray-finned fishes, and are commonly used as bait, either live or with only the tail meat. They are a popular bait for catching catfish, largemouth bass, smallmouth bass, striped bass, perch, pike and muskie. When using live crayfish as bait, anglers prefer to hook them between the eyes, piercing through their hard, pointed beak which causes them no harm; therefore, they remain more active. When using crayfish as bait, it is important to fish in the same environment where they were caught. An Illinois State University report that focused on studies conducted on the Fox River and Des Plaines River watershed stated that rusty crayfish, initially caught as bait in a different environment, were dumped into the water and "outcompeted the native clearwater crayfish". Other studies confirmed that transporting crayfish to different environments has led to various ecological problems, including the elimination of native species. Transporting crayfish as live bait has also contributed to the spread of zebra mussels in various waterways throughout Europe and North America, as they are known to attach themselves to exoskeleton of crayfishes. Pets Crayfish are kept as pets in freshwater aquariums. They prefer foods like shrimp pellets or various vegetables, but will also eat tropical fish food, regular fish food, algae wafers, and small fish that can be captured with their claws. A report by the National Park Service as well as video and anecdotal reports by aquarium owners indicate that crayfish will eat their moulted exoskeleton "to recover the calcium and phosphates contained in it." As omnivores, crayfish will eat almost anything; therefore, they may explore the edibility of aquarium plants in a fish tank. However, most species of dwarf crayfish, such as Cambarellus patzcuarensis, will not destructively dig or eat live aquarium plants. In some nations, such as the United Kingdom, United States, Australia, and New Zealand, imported alien crayfish are a danger to local rivers. The three most widespread American species invasive in Europe are Faxonius limosus, Pacifastacus leniusculus and Procambarus clarkii. Crayfish may spread into different bodies of water because specimens captured for pets in one river are often released into a different catchment. 
There is a potential for ecological damage when crayfish are introduced into non-native bodies of water: e.g., crayfish plague in Europe, or the introduction of the common yabby (Cherax destructor) into drainages east of the Great Dividing Range in Australia. Education Some public schools in the United States keep live crayfish in the classroom and have the students take care of them in order to give the students a greater understanding of the creatures. Sentinel species The Protivin brewery in the Czech Republic uses crayfish outfitted with sensors to detect any changes in their bodies or pulse activity in order to monitor the purity of the water used in their product. The creatures are kept in a fish tank that is fed with the same local natural source water used in their brewing. If three or more of the crayfish have changes to their pulses, employees know there is a change in the water and examine the parameters. Scientists also monitor crayfish in the wild in natural bodies of water to study the levels of pollutants there.
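The brewery's decision rule, as described, amounts to a simple threshold alarm over the sentinel animals' pulse readings. The Python sketch below is a hypothetical illustration of that kind of rule; the variable names, baseline values, and the 20% deviation threshold are invented for the example, and the source does not describe how the actual monitoring system is implemented.

# Hypothetical pulse readings (beats per minute) for six sentinel crayfish.
baseline_bpm = [42, 45, 40, 44, 43, 41]   # assumed normal resting pulse per animal
current_bpm = [43, 58, 39, 60, 57, 42]    # latest readings from the body sensors

DEVIATION_THRESHOLD = 0.20   # flag an animal whose pulse shifts by more than 20%
ALARM_COUNT = 3              # per the article, three or more changed animals trigger a check

changed = [
    abs(now - base) / base > DEVIATION_THRESHOLD
    for base, now in zip(baseline_bpm, current_bpm)
]

if sum(changed) >= ALARM_COUNT:
    print("Water quality alarm: examine the parameters of the source water.")
else:
    print("Sentinel crayfish pulses within the normal range.")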
https://en.wikipedia.org/wiki/Shark
Shark
Sharks are a group of elasmobranch fish characterized by a cartilaginous skeleton, five to seven gill slits on the sides of the head, and pectoral fins that are not fused to the head. Modern sharks are classified within the clade Selachimorpha (or Selachii) and are the sister group to the Batoidea (rays and kin). Some sources extend the term "shark" as an informal category including extinct members of Chondrichthyes (cartilaginous fish) with a shark-like morphology, such as hybodonts. Shark-like chondrichthyans such as Cladoselache and Doliodus first appeared in the Devonian Period (419–359 million years), though some fossilized chondrichthyan-like scales are as old as the Late Ordovician (458–444 million years ago). The earliest confirmed modern sharks (selachimorphs) are known from the Early Jurassic around , with the oldest known member being Agaleus, though records of true sharks may extend back as far as the Permian. Sharks range in size from the small dwarf lanternshark (Etmopterus perryi), a deep sea species that is only in length, to the whale shark (Rhincodon typus), the largest fish in the world, which reaches approximately in length. They are found in all seas and are common to depths up to . They generally do not live in freshwater, although there are a few known exceptions, such as the bull shark and the river sharks, which can be found in both seawater and freshwater, and the Ganges shark, which lives only in freshwater. Sharks have a covering of dermal denticles that protects their skin from damage and parasites in addition to improving their fluid dynamics. They have numerous sets of replaceable teeth. Several species are apex predators, which are organisms that are at the top of their food chain. Select examples include the bull shark, tiger shark, great white shark, mako sharks, thresher sharks, and hammerhead sharks. Sharks are caught by humans for shark meat or shark fin soup. Many shark populations are threatened by human activities. Since 1970, shark populations have been reduced by 71%, mostly from overfishing. Etymology Until the 16th century, sharks were known to mariners as "sea dogs". This is still evidential in several species termed "dogfish", or the porbeagle. The etymology of the word shark is uncertain. The most likely etymology states that the original sense of the word was that of "predator, one who preys on others" from the Dutch , meaning 'villain, scoundrel' (cf. card shark, loan shark, etc.), which was later applied to the fish due to its predatory behaviour. A now disproven theory is that it derives from the Yucatec Maya word (), meaning 'shark'. Evidence for this etymology came from the Oxford English Dictionary, which notes that shark first came into use after Sir John Hawkins' sailors exhibited one in London in 1569 and posted "sharke" to refer to the large sharks of the Caribbean Sea. However, the Middle English Dictionary records an isolated occurrence of the word shark (referring to a sea fish) in a letter written by Thomas Beckington in 1442, which rules out a New World etymology. Evolutionary history Fossil record The oldest total-group chondrichthyans, known as acanthodians or "spiny sharks", appeared during the Early Silurian, around 439 million years ago. The oldest confirmed members of Elasmobranchii sensu lato (the group containing all cartilaginous fish more closely related to modern sharks and rays than to chimaeras) appeared during the Devonian. 
Anachronistidae, the oldest probable representatives of Neoselachii, the group containing modern sharks (Selachimorpha) and rays (Batoidea) to the exclusion of most extinct elasmobranch groups, date to the Carboniferous. Selachiimorpha and Batoidea are suggested by some to have diverged during the Triassic. Fossils of the earliest true sharks may have appeared during the Permian, based on remains of "synechodontiforms" found in the Early Permian of Russia, but if remains of "synechodontiformes" from the Permian and Triassic are true sharks, they only had low diversity. Modern shark orders first appeared during the Early Jurassic, and during the Jurassic true sharks underwent great diversification. Selachimorphs largely replaced the hybodonts, which had previously been a dominant group of shark-like fish during the Triassic and Early Jurassic. Taxonomy Sharks belong to the clade Selachimorpha in the subclass Elasmobranchii in the class Chondrichthyes. The Elasmobranchii also include rays and skates; the Chondrichthyes also include Chimaeras. It was thought that the sharks form a polyphyletic group: some sharks are more closely related to rays than they are to some other sharks, but current molecular studies support monophyly of both groups of sharks and batoids. The clade Selachimorpha is divided into the superorders Galea (or Galeomorphii), and Squalea (or Squalomorphii). The Galeans are the Heterodontiformes, Orectolobiformes, Lamniformes, and Carcharhiniformes. Lamnoids and Carcharhinoids are usually placed in one clade, but recent studies show that Lamnoids and Orectoloboids are a clade. Some scientists now think that Heterodontoids may be Squalean. The Squaleans are divided into Hexanchiformes and Squalomorpha. The former includes cow shark and frilled shark, though some authors propose that both families be moved to separate orders. The Squalomorpha contains the Squaliformes and the Hypnosqualea. The Hypnosqualea may be invalid. It includes the Squatiniformes, and the Pristorajea, which may also be invalid, but includes the Pristiophoriformes and the Batoidea. There are more than 500 species of sharks split across thirteen orders, including several orders of sharks that have gone extinct: Carcharhiniformes: Commonly known as ground sharks, the order includes the blue, tiger, bull, grey reef, blacktip reef, Caribbean reef, blacktail reef, whitetip reef, and oceanic whitetip sharks (collectively called the requiem sharks) along with the houndsharks, catsharks, and hammerhead sharks. They are distinguished by an elongated snout and a nictitating membrane which protects the eyes during an attack. Heterodontiformes: They are generally referred to as the bullhead or horn sharks. Hexanchiformes: Examples from this group include the cow sharks and frilled sharks, which somewhat resembles a marine snake. Lamniformes: They are commonly known as the mackerel sharks. They include the goblin shark, basking shark, megamouth shark, the thresher sharks, shortfin and longfin mako sharks, and great white shark. They are distinguished by their large jaws and ovoviviparous reproduction. The Lamniformes also include the extinct megalodon, Otodus megalodon. Orectolobiformes: They are commonly referred to as the carpet sharks, including zebra sharks, nurse sharks, wobbegongs, and the whale shark. Pristiophoriformes: These are the sawsharks, with an elongated, toothed snout that they use for slashing their prey. Squaliformes: This group includes the dogfish sharks and roughsharks. 
Squatiniformes: Also known as angel sharks, they are flattened sharks with a strong resemblance to stingrays and skates. Echinorhiniformes: This group includes the prickly shark and bramble shark. Phylogenetic placement of this group has been ambiguous in scientific studies. They are sometimes given their own order, Echinorhiniformes. Anatomy Teeth Shark teeth are embedded in the gums rather than directly affixed to the jaw, and are constantly replaced throughout life. Multiple rows of replacement teeth grow in a groove on the inside of the jaw and steadily move forward in comparison to a conveyor belt; some sharks lose 30,000 or more teeth in their lifetime. The rate of tooth replacement varies from once every 8 to 10 days to several months. In most species, teeth are replaced one at a time as opposed to the simultaneous replacement of an entire row, which is observed in the cookiecutter shark. Tooth shape depends on the shark's diet: those that feed on mollusks and crustaceans have dense and flattened teeth used for crushing, those that feed on fish have needle-like teeth for gripping, and those that feed on larger prey such as mammals have pointed lower teeth for gripping and triangular upper teeth with serrated edges for cutting. The teeth of plankton-feeders such as the basking shark are small and non-functional. Skeleton Shark skeletons are very different from those of bony fish and terrestrial vertebrates. Sharks and other cartilaginous fish (skates and rays) have skeletons made of cartilage and connective tissue. Cartilage is flexible and durable, yet is about half the normal density of bone. This reduces the skeleton's weight, saving energy. Because sharks do not have rib cages, they can easily be crushed under their own weight on land. Jaw The jaws of sharks, like those of rays and skates, are not attached to the cranium. The jaw's surface (in comparison to the shark's vertebrae and gill arches) needs extra support due to its heavy exposure to physical stress and its need for strength. It has a layer of tiny hexagonal plates called "tesserae", which are crystal blocks of calcium salts arranged as a mosaic. This gives these areas much of the same strength found in the bony tissue found in other animals. Generally sharks have only one layer of tesserae, but the jaws of large specimens, such as the bull shark, tiger shark, and the great white shark, have two to three layers or more, depending on body size. The jaws of a large great white shark may have up to five layers. In the rostrum (snout), the cartilage can be spongy and flexible to absorb the power of impacts. Fins Fin skeletons are elongated and supported with soft and unsegmented rays named ceratotrichia, filaments of elastic protein resembling the horny keratin in hair and feathers. Most sharks have eight fins. Sharks can only drift away from objects directly in front of them because their fins do not allow them to move in the tail-first direction. Dermal denticles Unlike bony fish, sharks have a complex dermal corset made of flexible collagenous fibers and arranged as a helical network surrounding their body. This works as an outer skeleton, providing attachment for their swimming muscles and thus saving energy. Their dermal teeth give them hydrodynamic advantages as they reduce turbulence when swimming. Some species of shark have pigmented denticles that form complex patterns like spots (e.g. Zebra shark) and stripes (e.g. Tiger shark). 
These markings are important for camouflage and help sharks blend in with their environment, as well as making them difficult for prey to detect. For some species, dermal patterning returns to healed denticles even after they have been removed by injury. Tails Tails provide thrust, making speed and acceleration dependent on tail shape. Caudal fin shapes vary considerably between shark species, due to their evolution in separate environments. Sharks possess a heterocercal caudal fin in which the dorsal portion is usually noticeably larger than the ventral portion. This is because the shark's vertebral column extends into that dorsal portion, providing a greater surface area for muscle attachment. This allows more efficient locomotion among these negatively buoyant cartilaginous fish. By contrast, most bony fish possess a homocercal caudal fin. Tiger sharks have a large upper lobe, which allows for slow cruising and sudden bursts of speed. The tiger shark must be able to twist and turn in the water easily when hunting to support its varied diet, whereas the porbeagle shark, which hunts schooling fish such as mackerel and herring, has a large lower lobe to help it keep pace with its fast-swimming prey. Other tail adaptations help sharks catch prey more directly, such as the thresher shark's usage of its powerful, elongated upper lobe to stun fish and squid. Physiology Buoyancy Unlike bony fish, sharks do not have gas-filled swim bladders for buoyancy. Instead, sharks rely on a large liver filled with oil that contains squalene, and their cartilage, which is about half the normal density of bone. Their liver constitutes up to 30% of their total body mass. The liver's effectiveness is limited, so sharks employ dynamic lift to maintain depth while swimming. Sand tiger sharks store air in their stomachs, using it as a form of swim bladder. Bottom-dwelling sharks, like the nurse shark, have negative buoyancy, allowing them to rest on the ocean floor. Some sharks, if inverted or stroked on the nose, enter a natural state of tonic immobility. Researchers use this condition to handle sharks safely. Respiration Like other fish, sharks extract oxygen from seawater as it passes over their gills. Unlike other fish, shark gill slits are not covered, but lie in a row behind the head. A modified slit called a spiracle lies just behind the eye, which assists the shark with taking in water during respiration and plays a major role in bottom–dwelling sharks. Spiracles are reduced or missing in active pelagic sharks. While the shark is moving, water passes through the mouth and over the gills in a process known as "ram ventilation". While at rest, most sharks pump water over their gills to ensure a constant supply of oxygenated water. A small number of species have lost the ability to pump water through their gills and must swim without rest. These species are obligate ram ventilators and would presumably asphyxiate if unable to move. Obligate ram ventilation is also true of some pelagic bony fish species. The respiratory and circulatory process begins when deoxygenated venous blood travels to the shark's two-chambered heart. Here, the shark pumps blood to its gills via the ventral aorta where it branches into afferent branchial arteries. Gas exchange takes place in the gills and the reoxygenated blood flows into the efferent branchial arteries, which come together to form the dorsal aorta. The blood flows from the dorsal aorta throughout the body. 
The deoxygenated blood from the body then flows through the posterior cardinal veins and enters the posterior cardinal sinuses. From there venous blood re-enters the heart ventricle and the cycle repeats. Thermoregulation Most sharks are "cold-blooded" or, more precisely, poikilothermic, meaning that their internal body temperature matches that of their ambient environment. Members of the family Lamnidae (such as the shortfin mako shark and the great white shark) are homeothermic and maintain a higher body temperature than the surrounding water. In these sharks, a strip of aerobic red muscle located near the center of the body generates the heat, which the body retains via a countercurrent exchange mechanism by a system of blood vessels called the rete mirabile ("miraculous net"). The common thresher and bigeye thresher sharks have a similar mechanism for maintaining an elevated body temperature. Larger species, like the whale shark, are able to conserve their body heat through sheer size when they dive to colder depths, and the scalloped hammerhead closes its mouth and gills when it dives to depths of around 800 metres, holding its breath until it reaches warmer waters again. Osmoregulation In contrast to bony fish, with the exception of the coelacanth, the blood and other tissue of sharks and Chondrichthyes are generally isotonic to their marine environments because of the high concentration of urea (up to 2.5%) and trimethylamine N-oxide (TMAO), allowing them to be in osmotic balance with the seawater. This adaptation prevents most sharks from surviving in freshwater, and they are therefore confined to marine environments. A few exceptions exist, such as the bull shark, which has developed a way to change its kidney function to excrete large amounts of urea. When a shark dies, the urea is broken down to ammonia by bacteria, causing the dead body to gradually smell strongly of ammonia. Research in 1930 by Homer W. Smith showed that sharks' urine does not contain sufficient sodium to avoid hypernatremia, and it was postulated that there must be an additional mechanism for salt secretion. In 1960 it was discovered at the Mount Desert Island Biological Laboratory in Salisbury Cove, Maine, that sharks have a type of salt gland located at the end of the intestine, known as the "rectal gland", whose function is the secretion of chlorides. Digestion Digestion can take a long time. The food moves from the mouth to a J-shaped stomach, where it is stored and initial digestion occurs. Unwanted items may never get past the stomach, and instead the shark either vomits or turns its stomach inside out and ejects unwanted items from its mouth. One of the biggest differences between the digestive systems of sharks and mammals is that sharks have much shorter intestines. This short length is achieved by the spiral valve with multiple turns within a single short section instead of a long tube-like intestine. The valve provides a long surface area, requiring food to circulate inside the short gut until fully digested, when remaining waste products pass into the cloaca. Fluorescence A few sharks appear fluorescent under blue light, such as the swell shark and the chain catshark, where the fluorophore derives from a metabolite of kynurenic acid. Senses Smell Sharks have keen olfactory senses, located in the short duct (which is not fused, unlike bony fish) between the anterior and posterior nasal openings, with some species able to detect as little as one part per million of blood in seawater.
The size of the olfactory bulb varies across different shark species, with size dependent on how much a given species relies on smell or vision to find their prey. In environments with low visibility, shark species generally have larger olfactory bulbs. In reefs, where visibility is high, species of sharks from the family Carcharhinidae have smaller olfactory bulbs. Sharks found in deeper waters also have larger olfactory bulbs. Sharks have the ability to determine the direction of a given scent based on the timing of scent detection in each nostril. This is similar to the method mammals use to determine direction of sound. They are more attracted to the chemicals found in the intestines of many species, and as a result often linger near or in sewage outfalls. Some species, such as nurse sharks, have external barbels that greatly increase their ability to sense prey. Sight Shark eyes are similar to the eyes of other vertebrates, including similar lenses, corneas and retinas, though their eyesight is well adapted to the marine environment with the help of a tissue called tapetum lucidum. This tissue is behind the retina and reflects light back to it, thereby increasing visibility in the dark waters. The effectiveness of the tissue varies, with some sharks having stronger nocturnal adaptations. Many sharks can contract and dilate their pupils, like humans, something no teleost fish can do. Sharks have eyelids, but they do not blink because the surrounding water cleans their eyes. To protect their eyes some species have nictitating membranes. This membrane covers the eyes while hunting and when the shark is being attacked. However, some species, including the great white shark (Carcharodon carcharias), do not have this membrane, but instead roll their eyes backwards to protect them when striking prey. The importance of sight in shark hunting behavior is debated. Some believe that electro- and chemoreception are more significant, while others point to the nictating membrane as evidence that sight is important, since presumably the shark would not protect its eyes were they unimportant. The use of sight probably varies with species and water conditions. The shark's field of vision can swap between monocular and stereoscopic at any time. A micro-spectrophotometry study of 17 species of sharks found 10 had only rod photoreceptors and no cone cells in their retinas giving them good night vision while making them colorblind. The remaining seven species had in addition to rods a single type of cone photoreceptor sensitive to green and, seeing only in shades of grey and green, are believed to be effectively colorblind. The study indicates that an object's contrast against the background, rather than colour, may be more important for object detection. Hearing Although it is hard to test the hearing of sharks, they may have a sharp sense of hearing and can possibly hear prey from many miles away. The hearing sensitivity for most shark species lies between 20 and 1000 Hz. A small opening on each side of their heads (not the spiracle) leads directly into the inner ear through a thin channel. The lateral line shows a similar arrangement, and is open to the environment via a series of openings called lateral line pores. This is a reminder of the common origin of these two vibration- and sound-detecting organs that are grouped together as the acoustico-lateralis system. In bony fish and tetrapods the external opening into the inner ear has been lost. 
Electroreception The ampullae of Lorenzini are the electroreceptor organs. They number in the hundreds to thousands. Sharks use the ampullae of Lorenzini to detect the electromagnetic fields that all living things produce. This helps sharks (particularly the hammerhead shark) find prey. The shark has the greatest electrical sensitivity of any animal. Sharks find prey hidden in sand by detecting the electric fields they produce. Ocean currents moving in the magnetic field of the Earth also generate electric fields that sharks can use for orientation and possibly navigation. Lateral line This system is found in most fish, including sharks. It is a tactile sensory system which allows the organism to detect water speed and pressure changes nearby. The main component of the system is the neuromast, a cell similar to the hair cells present in the vertebrate ear, which interacts with the surrounding aquatic environment. This helps sharks distinguish between the currents around them, obstacles on their periphery, and struggling prey out of visual view. The shark can sense frequencies in the range of 25 to 50 Hz. Life history Shark lifespans vary by species. Most live 20 to 30 years. The spiny dogfish has one of the longest lifespans at more than 100 years. Whale sharks (Rhincodon typus) may also live over 100 years. Earlier estimates suggested the Greenland shark (Somniosus microcephalus) could reach about 200 years, but a recent study found that a specimen was 392 ± 120 years old (i.e., at least 272 years old), making it the longest-lived vertebrate known. Reproduction Unlike most bony fish, sharks are K-selected reproducers, meaning that they produce a small number of well-developed young as opposed to a large number of poorly developed young. Fecundity in sharks ranges from 2 to over 100 young per reproductive cycle. Sharks mature slowly relative to many other fish. For example, lemon sharks reach sexual maturity at around age 13–15. Sexual Sharks practice internal fertilization. The posterior part of a male shark's pelvic fins is modified into a pair of intromittent organs called claspers, analogous to a mammalian penis, of which one is used to deliver sperm into the female. Mating has rarely been observed in sharks. The smaller catsharks often mate with the male curling around the female. In less flexible species the two sharks swim parallel to each other while the male inserts a clasper into the female's oviduct. Females in many of the larger species have bite marks that appear to be a result of a male grasping them to maintain position during mating. The bite marks may also come from courtship behavior: the male may bite the female to show his interest. In some species, females have evolved thicker skin to withstand these bites. Asexual There have been a number of documented cases in which a female shark that has not been in contact with a male has conceived a pup on her own through parthenogenesis. The details of this process are not well understood, but genetic fingerprinting showed that the pups had no paternal genetic contribution, ruling out sperm storage. The extent of this behavior in the wild is unknown. Mammals are now the only major vertebrate group in which asexual reproduction has not been observed. Scientists say that asexual reproduction in the wild is rare, and probably a last-ditch effort to reproduce when a mate is not present. Asexual reproduction diminishes genetic diversity, which normally helps a species build defenses against threats.
Species that rely solely on it risk extinction. Asexual reproduction may have contributed to the blue shark's decline off the Irish coast. Brooding Sharks display three ways to bear their young, varying by species: oviparity, viviparity, and ovoviviparity. Ovoviviparity Most sharks are ovoviviparous, meaning that the eggs hatch in the oviduct within the mother's body and that the egg's yolk and fluids secreted by glands in the walls of the oviduct nourish the embryos. The young continue to be nourished by the remnants of the yolk and the oviduct's fluids. As in viviparity, the young are born alive and fully functional. Lamniform sharks practice oophagy, in which the first embryos to hatch eat the remaining eggs. Taking this a step further, sand tiger shark pups cannibalistically consume neighboring embryos. The survival strategy for ovoviviparous species is to brood the young to a comparatively large size before birth. The whale shark is now classified as ovoviviparous rather than oviparous, because extrauterine eggs are now thought to have been aborted. Most ovoviviparous sharks give birth in sheltered areas, including bays, river mouths and shallow reefs. They choose such areas for protection from predators (mainly other sharks) and the abundance of food. Dogfish have the longest known gestation period of any shark, at 18 to 24 months. Basking sharks and frilled sharks appear to have even longer gestation periods, but accurate data are lacking. Oviparity Some species are oviparous, laying their fertilized eggs in the water. In most oviparous shark species, an egg case with the consistency of leather protects the developing embryo(s). These cases may be corkscrewed into crevices for protection. The egg case is commonly called a mermaid's purse. Oviparous sharks include the horn shark, catshark, Port Jackson shark, and swellshark. Viviparity Viviparity is the gestation of young without the use of a traditional egg, and results in live birth. Viviparity in sharks can be placental or aplacental. Young are born fully formed and self-sufficient. Hammerheads, the requiem sharks (such as the bull and blue sharks), and smoothhounds are viviparous. Behavior The classic view describes a solitary hunter, ranging the oceans in search of food. However, this applies to only a few species. Most live far more social, sedentary, benthic lives, and appear likely to have their own distinct personalities. Even solitary sharks meet for breeding or at rich hunting grounds, which may lead them to cover thousands of miles in a year. Shark migration patterns may be even more complex than in birds, with many sharks covering entire ocean basins. Sharks can be highly social, remaining in large schools. Sometimes more than 100 scalloped hammerheads congregate around seamounts and islands, e.g., in the Gulf of California. Cross-species social hierarchies exist. For example, oceanic whitetip sharks dominate silky sharks of comparable size during feeding. When approached too closely some sharks perform a threat display. This usually consists of exaggerated swimming movements, and can vary in intensity according to the threat level. Speed In general, sharks swim ("cruise") at an average speed of , but when feeding or attacking, the average shark can reach speeds upwards of . The shortfin mako shark, the fastest shark and one of the fastest fish, can burst at speeds up to . The great white shark is also capable of speed bursts.
These exceptions may be due to the warm-blooded, or homeothermic, nature of these sharks' physiology. Sharks can travel 70 to 80 km in a day. Intelligence Sharks possess brain-to-body mass ratios that are similar to mammals and birds, and have exhibited apparent curiosity and behavior resembling play in the wild. There is evidence that juvenile lemon sharks can use observational learning in their investigation of novel objects in their environment. Sleep All sharks need to keep water flowing over their gills in order for them to breathe; however, not all species need to be moving to do this. Those that are able to breathe while not swimming do so by using their spiracles to force water over their gills, thereby allowing them to extract oxygen from the water. It has been recorded that their eyes remain open while in this state and actively follow the movements of divers swimming around them and as such they are not truly asleep. Species that do need to swim continuously to breathe go through a process known as sleep swimming, in which the shark is essentially unconscious. It is known from experiments conducted on the spiny dogfish that its spinal cord, rather than its brain, coordinates swimming, so spiny dogfish can continue to swim while sleeping, and this also may be the case in larger shark species. In 2016 a great white shark was captured on video for the first time in a state researchers believed was sleep swimming. Ecology Feeding Most sharks are carnivorous. Basking sharks, whale sharks, and megamouth sharks have independently evolved different strategies for filter feeding plankton: basking sharks practice ram feeding, whale sharks use suction to take in plankton and small fishes, and megamouth sharks make suction feeding more efficient by using the luminescent tissue inside of their mouths to attract prey in the deep ocean. This type of feeding requires gill rakers—long, slender filaments that form a very efficient sieve—analogous to the baleen plates of the great whales. The shark traps the plankton in these filaments and swallows from time to time in huge mouthfuls. Teeth in these species are comparatively small because they are not needed for feeding. Other highly specialized feeders include cookiecutter sharks, which feed on flesh sliced out of other larger fish and marine mammals. Cookiecutter teeth are enormous compared to the animal's size. The lower teeth are particularly sharp. Although they have never been observed feeding, they are believed to latch onto their prey and use their thick lips to make a seal, twisting their bodies to rip off flesh. Some seabed–dwelling species are highly effective ambush predators. Angel sharks and wobbegongs use camouflage to lie in wait and suck prey into their mouths. Many benthic sharks feed solely on crustaceans which they crush with their flat molariform teeth. Other sharks feed on squid or fish, which they swallow whole. The viper dogfish has teeth it can point outwards to strike and capture prey that it then swallows intact. The great white and other large predators either swallow small prey whole or take huge bites out of large animals. Thresher sharks use their long tails to stun shoaling fishes, and sawsharks either stir prey from the seabed or slash at swimming prey with their tooth-studded rostra. The bonnethead shark is the only known omnivorous species. Its main prey is crustaceans and mollusks, but it also eats a large amount of seagrass, and is able to digest and extract nutrients from about 50% of the seagrass it consume. 
Many sharks, including the whitetip reef shark, are cooperative feeders and hunt in packs to herd and capture elusive prey. These social sharks are often migratory, traveling huge distances around ocean basins in large schools. These migrations may be partly necessary to find new food sources. Range and habitat Sharks are found in all seas. They generally do not live in fresh water, with a few exceptions such as the bull shark and the river shark, which can swim in both seawater and freshwater. Sharks are common down to depths of , and some live even deeper, but they are almost entirely absent below . The deepest confirmed report of a shark is a Portuguese dogfish at . Relationship with humans Attacks In 2006 the International Shark Attack File (ISAF) undertook an investigation into 96 alleged shark attacks, confirming 62 of them as unprovoked attacks and 16 as provoked attacks. The average number of fatalities worldwide per year between 2001 and 2006 from unprovoked shark attacks is 4.3. Contrary to popular belief, only a few sharks are dangerous to humans. Out of more than 470 species, only four have been involved in a significant number of fatal, unprovoked attacks on humans: the great white, oceanic whitetip, tiger, and bull sharks. These sharks are large, powerful predators, and may sometimes attack and kill people. Despite being responsible for attacks on humans, they have all been filmed without the use of a protective cage. The perception of sharks as dangerous animals has been popularized by publicity given to a few isolated unprovoked attacks, such as the Jersey Shore shark attacks of 1916, and through popular fictional works about shark attacks, such as the Jaws film series. Jaws author Peter Benchley, as well as Jaws director Steven Spielberg, later attempted to dispel the image of sharks as man-eating monsters. To help avoid an unprovoked attack, humans should not wear shiny jewelry or metal and should refrain from splashing around too much. In general, sharks show little tendency to attack humans specifically; part of the reason may be that sharks prefer the blood of fish and other common prey. Research indicates that when humans do become the object of a shark attack, it is possible that the shark has mistaken the human for species that are its normal prey, such as seals. This was further supported by a recent study conducted by researchers at California State University's Shark Lab. According to footage captured by the lab's drones, juvenile sharks swam right up to humans in the water without any biting incidents. The lab stated that the results showed that humans and sharks can coexist in the water. In captivity Until recently, only a few benthic species of shark, such as hornsharks, leopard sharks and catsharks, had survived in aquarium conditions for a year or more. This gave rise to the belief that sharks, as well as being difficult to capture and transport, were difficult to care for. More knowledge has led to more species (including the large pelagic sharks) living far longer in captivity, along with safer transportation techniques that have enabled long-distance transport. The great white shark had never been successfully held in captivity for long periods until September 2004, when the Monterey Bay Aquarium kept a young female for 198 days before releasing her. Most species are not suitable for home aquaria, and not every species sold by pet stores is appropriate. Some species can flourish in home saltwater aquaria.
Uninformed or unscrupulous dealers sometimes sell juvenile sharks like the nurse shark, which upon reaching adulthood is far too large for typical home aquaria. Public aquaria generally do not accept donated specimens that have outgrown their housing. Some owners have been tempted to release them. Species appropriate to home aquaria represent considerable spatial and financial investments as they generally approach adult lengths of and can live up to 25 years. In culture In Hawaii Sharks figure prominently in Hawaiian mythology. Stories tell of men with shark jaws on their back who could change between shark and human form. A common theme was that a shark-man would warn beach-goers of sharks in the waters. The beach-goers would laugh and ignore the warnings and get eaten by the shark-man who warned them. Hawaiian mythology also includes many shark gods. Among a fishing people, the most popular of all aumakua, or deified ancestor guardians, are shark aumakua. Kamaku describes in detail how to offer a corpse to become a shark. The body transforms gradually until the kahuna can point the awe-struck family to the markings on the shark's body that correspond to the clothing in which the beloved's body had been wrapped. Such a shark aumakua becomes the family pet, receiving food, and driving fish into the family net and warding off danger. Like all aumakua it had evil uses such as helping kill enemies. The ruling chiefs typically forbade such sorcery. Many Native Hawaiian families claim such an aumakua, who is known by name to the whole community. Kamohoali'i is the best known and revered of the shark gods, he was the older and favored brother of Pele, and helped and journeyed with her to Hawaii. He was able to assume all human and fish forms. A summit cliff on the crater of Kilauea is one of his most sacred spots. At one point he had a heiau (temple or shrine) dedicated to him on every piece of land that jutted into the ocean on the island of Molokai. Kamohoali'i was an ancestral god, not a human who became a shark and banned the eating of humans after eating one herself. In Fijian mythology, Dakuwaqa was a shark god who was the eater of lost souls. In American Samoa On the island of Tutuila in American Samoa (a U.S. territory), there is a location called Turtle and Shark (Laumei ma Malie) which is important in Samoan culture—the location is the site of a legend called O Le Tala I Le Laumei Ma Le Malie, in which two humans are said to have transformed into a turtle and a shark. According to the U.S. National Park Service, "Villagers from nearby Vaitogi continue to reenact an important aspect of the legend at Turtle and Shark by performing a ritual song intended to summon the legendary animals to the ocean surface, and visitors are frequently amazed to see one or both of these creatures emerge from the sea in apparent response to this call." In popular culture In contrast to the complex portrayals by Hawaiians and other Pacific Islanders, the European and Western view of sharks has historically been mostly of fear and malevolence. Sharks are used in popular culture commonly as eating machines, notably in the Jaws novel and the film of the same name, along with its sequels. Sharks are threats in other films such as Deep Blue Sea, The Reef, and others, although they are sometimes used for comedic effect such as in Finding Nemo and the Austin Powers series. Sharks tend to be seen quite often in cartoons whenever a scene involves the ocean. 
Such examples include the Tom and Jerry cartoons, Jabberjaw, and other shows produced by Hanna-Barbera. They also are used commonly as a clichéd means of killing off a character that is held up by a rope or some similar object as the sharks swim right below them, or the character may be standing on a plank above shark infested waters. Popular misconceptions A popular myth is that sharks are immune to disease and cancer, but this is not scientifically supported. Sharks have been known to get cancer. Both diseases and parasites affect sharks. The evidence that sharks are at least resistant to cancer and disease is mostly anecdotal and there have been few, if any, scientific or statistical studies that show sharks to have heightened immunity to disease. Other apparently false claims are that fins prevent cancer and treat osteoarthritis. No scientific proof supports these claims; at least one study has shown shark cartilage of no value in cancer treatment. Threats to sharks Fishery In 2008, it was estimated that nearly 100 million sharks were being killed by people every year, due to commercial and recreational fishing. In 2021, it was estimated that the population of oceanic sharks and rays had dropped by 71% over the previous half-century. Shark finning yields are estimated at for 2000, and for 2010. Based on an analysis of average shark weights, this translates into a total annual mortality estimate of about 100 million sharks in 2000, and about 97 million sharks in 2010, with a total range of possible values between 63 and 273 million sharks per year. Sharks are a common seafood in many places, including Japan and Australia. In southern Australia, shark is commonly used in fish and chips, in which fillets are battered and deep-fried or crumbed and grilled. In fish and chip shops, shark is called flake. In India, small sharks or baby sharks (called sora in Tamil language, Telugu language) are sold in local markets. Since the flesh is not developed, cooking the flesh breaks it into powder, which is then fried in oil and spices (called sora puttu/sora poratu). The soft bones can be easily chewed, they are considered a delicacy in coastal Tamil Nadu. Icelanders ferment Greenland sharks to produce a delicacy called hákarl. During a four-year period from 1996 to 2000, an estimated 26 to 73 million sharks were killed and traded annually in commercial markets. Sharks are often killed for shark fin soup. Fishermen capture live sharks, fin them, and dump the finless animal back into the water. Shark finning involves removing the fin with a hot metal blade. The resulting immobile shark soon dies from suffocation or predators. Shark fin has become a major trade within black markets all over the world. Fins sell for about $300/lb in 2009. Poachers illegally fin millions each year. Few governments enforce laws that protect them. In 2010 Hawaii became the first U.S. state to prohibit the possession, sale, trade or distribution of shark fins. From 1996 to 2000, an estimated 38 million sharks had been killed per year for harvesting shark fins. It is estimated by TRAFFIC that over 14,000 tonnes of shark fins were exported into Singapore between 2005–2007 and 2012–2014. Shark fin soup is a status symbol in Asian countries and is erroneously considered healthy and full of nutrients. Scientific research has revealed, however, that high concentrations of BMAA are present in shark fins. Because BMAA is a neurotoxin, consumption of shark fin soup and cartilage pills, therefore, may pose a health risk. 
BMAA is under study for its pathological role in neurodegenerative diseases such as ALS, Alzheimer's disease, and Parkinson's disease. Sharks are also killed for meat. European diners consume dogfishes, smoothhounds, catsharks, makos, porbeagle and also skates and rays. However, the U.S. FDA lists sharks as one of four fish (with swordfish, king mackerel, and tilefish) whose high mercury content is hazardous to children and pregnant women. Sharks generally reach sexual maturity only after many years and produce few offspring in comparison to other harvested fish. Harvesting sharks before they reproduce severely impacts future populations. Capture induced premature birth and abortion (collectively called capture-induced parturition) occurs frequently in sharks/rays when fished. Capture-induced parturition is rarely considered in fisheries management despite being shown to occur in at least 12% of live bearing sharks and rays (88 species to date). The majority of shark fisheries have little monitoring or management. The rise in demand for shark products increases pressure on fisheries. Major declines in shark stocks have been recorded—some species have been depleted by over 90% over the past 20–30 years with population declines of 70% not unusual. A study by the International Union for Conservation of Nature suggests that one quarter of all known species of sharks and rays are threatened by extinction and 25 species were classified as critically endangered. Shark culling In 2014, a shark cull in Western Australia killed dozens of sharks (mostly tiger sharks) using drum lines, until it was cancelled after public protests and a decision by the Western Australia EPA; from 2014 to 2017, there was an "imminent threat" policy in Western Australia in which sharks that "threatened" humans in the ocean were shot and killed. This "imminent threat" policy was criticized by senator Rachel Siewart for killing endangered sharks. The "imminent threat" policy was cancelled in March 2017. In August 2018, the Western Australia government announced a plan to re-introduce drum lines (though, this time the drum lines are "SMART" drum lines). From 1962 to the present, the government of Queensland has targeted and killed sharks in large numbers by using drum lines, under a "shark control" program—this program has also inadvertently killed large numbers of other animals such as dolphins; it has also killed endangered hammerhead sharks. Queensland's drum line program has been called "outdated, cruel and ineffective". From 2001 to 2018, a total of 10,480 sharks were killed on lethal drum lines in Queensland, including in the Great Barrier Reef. From 1962 to 2018, roughly 50,000 sharks were killed by Queensland authorities. The government of New South Wales has a program that deliberately kills sharks using nets. The current net program in New South Wales has been described as being "extremely destructive" to marine life, including sharks. Between 1950 and 2008, 352 tiger sharks and 577 great white sharks were killed in the nets in New South Wales—also during this period, a total of 15,135 marine animals were killed in the nets, including dolphins, whales, turtles, dugongs, and critically endangered grey nurse sharks. There has been a very large decrease in the number of sharks in eastern Australia, and the shark-killing programs in Queensland and New South Wales are partly responsible for this decrease. 
Kwazulu-Natal, an area of South Africa, has a shark-killing program using nets and drum lines—these nets and drum lines have killed turtles and dolphins, and have been criticized for killing wildlife. During a 30-year period, more than 33,000 sharks have been killed in KwaZulu-Natal's shark-killing program—during the same 30-year period, 2,211 turtles, 8,448 rays, and 2,310 dolphins were killed in KwaZulu-Natal. Authorities on the French island of Réunion kill about 100 sharks per year. Killing sharks negatively affects the marine ecosystem. Jessica Morris of Humane Society International calls shark culling a "knee-jerk reaction" and says, "sharks are top order predators that play an important role in the functioning of marine ecosystems. We need them for healthy oceans." George H. Burgess, the former director of the International Shark Attack File, "describes [shark] culling as a form of revenge, satisfying a public demand for blood and little else"; he also said shark culling is a "retro-type move reminiscent of what people would have done in the 1940s and 50s, back when we didn't have an ecological conscience and before we knew the consequences of our actions." Jane Williamson, an associate professor in marine ecology at Macquarie University, says "There is no scientific support for the concept that culling sharks in a particular area will lead to a decrease in shark attacks and increase ocean safety." Other threats Other threats include habitat alteration, damage and loss from coastal development, pollution and the impact of fisheries on the seabed and prey species. The 2007 documentary Sharkwater exposed how sharks are being hunted to extinction. Conservation In 1991, South Africa was the first country in the world to declare Great White sharks a legally protected species (however, the KwaZulu-Natal Sharks Board is allowed to kill great white sharks in its "shark control" program in eastern South Africa). Intending to ban the practice of shark finning while at sea, the United States Congress passed the Shark Finning Prohibition Act in 2000. Two years later the Act saw its first legal challenge in United States v. Approximately 64,695 Pounds of Shark Fins. In 2008 a Federal Appeals Court ruled that a loophole in the law allowed non-fishing vessels to purchase shark fins from fishing vessels while on the high seas. Seeking to close the loophole, the Shark Conservation Act was passed by Congress in December 2010, and it was signed into law in January 2011. In 2003, the European Union introduced a general shark finning ban for all vessels of all nationalities in Union waters and for all vessels flying a flag of one of its member states. This prohibition was amended in June 2013 to close remaining loopholes. In 2009, the International Union for Conservation of Nature's IUCN Red List of Endangered Species named 64 species, one-third of all oceanic shark species, as being at risk of extinction due to fishing and shark finning. In 2010, the Convention on International Trade in Endangered Species (CITES) rejected proposals from the United States and Palau that would have required countries to strictly regulate trade in several species of scalloped hammerhead, oceanic whitetip and spiny dogfish sharks. The majority, but not the required two-thirds of voting delegates, approved the proposal. China, by far the world's largest shark market, and Japan, which battles all attempts to extend the convention to marine species, led the opposition. 
In March 2013, three endangered, commercially valuable sharks (the hammerheads, the oceanic whitetip, and the porbeagle) were added to Appendix II of CITES, bringing shark fishing and commerce in these species under licensing and regulation. In 2010, Greenpeace International added the school shark, shortfin mako shark, mackerel shark, tiger shark and spiny dogfish to its seafood red list, a list of common supermarket fish that are often sourced from unsustainable fisheries. Advocacy group Shark Trust campaigns to limit shark fishing. Advocacy group Seafood Watch directs American consumers to not eat sharks. Under the auspices of the Convention on the Conservation of Migratory Species of Wild Animals (CMS), also known as the Bonn Convention, the Memorandum of Understanding on the Conservation of Migratory Sharks was concluded and came into effect in March 2010. It was the first global instrument concluded under CMS and aims at facilitating international coordination for the protection, conservation and management of migratory sharks, through multilateral, intergovernmental discussion and scientific research. In July 2013, New York state, a major market and entry point for shark fins, banned the shark fin trade, joining seven other U.S. states and the three Pacific U.S. territories in providing legal protection to sharks. In the United States, as of January 16, 2019, 12 states (Massachusetts, Maryland, Delaware, California, Illinois, Hawaii, Oregon, Nevada, Rhode Island, Washington, New York, and Texas) along with three U.S. territories (American Samoa, Guam, and the Northern Mariana Islands) have passed laws against the sale or possession of shark fins. Several regions now have shark sanctuaries or have banned shark fishing; these regions include American Samoa, the Bahamas, the Cook Islands, French Polynesia, Guam, the Maldives, the Marshall Islands, Micronesia, the Northern Mariana Islands, and Palau. In April 2020, researchers reported that they had used DNA analysis to trace shark fins of endangered hammerhead sharks from a retail market in Hong Kong back to their source populations, and therefore to the approximate locations where the sharks were first caught. In July 2020, scientists reported the results of a survey of 371 reefs in 58 nations estimating the conservation status of reef sharks globally. No sharks were observed on almost 20% of the surveyed reefs, and shark depletion was strongly associated with both socio-economic conditions and conservation measures. Sharks are considered to be a vital part of the ocean ecosystem. According to a 2021 study in Nature, overfishing has resulted in a 71% global decline in the number of oceanic sharks and rays over the preceding 50 years. The oceanic whitetip, the scalloped hammerhead, and the great hammerhead are now classified as critically endangered. Sharks in tropical waters have declined more rapidly than those in temperate zones during the period studied. A 2021 study published in Current Biology found that overfishing is currently driving over one-third of sharks and rays to extinction.
Great white shark
The great white shark (Carcharodon carcharias), also known as the white shark, white pointer, or simply great white, is a species of large mackerel shark which can be found in the coastal surface waters of all the major oceans. It is the only known surviving species of its genus Carcharodon. The great white shark is notable for its size, with the largest preserved female specimen measuring in length and around in weight at maturity. However, most are smaller; males measure , and females measure on average. According to a 2014 study, the lifespan of great white sharks is estimated to be as long as 70 years or more, well above previous estimates, making it one of the longest lived cartilaginous fishes currently known. According to the same study, male great white sharks take 26 years to reach sexual maturity, while the females take 33 years to be ready to produce offspring. Great white sharks can swim at speeds of 25 km/h (16 mph) for short bursts and to depths of . The great white shark is arguably the world's largest-known extant macropredatory fish, and is one of the primary predators of marine mammals, such as pinnipeds and dolphins. The great white shark is also known to prey upon a variety of other animals, including fish, other sharks, and seabirds. It has only one recorded natural predator, the orca. The species faces numerous ecological challenges which has resulted in international protection. The International Union for Conservation of Nature lists the great white shark as a vulnerable species, and it is included in Appendix II of CITES. It is also protected by several national governments, such as Australia (as of 2018). Due to their need to travel long distances for seasonal migration and extremely demanding diet, it is not logistically feasible to keep great white sharks in captivity; because of this, while attempts have been made to do so in the past, there are no known aquariums in the world believed to house a live specimen. The great white shark is depicted in popular culture as a ferocious man-eater, largely as a result of the novel Jaws by Peter Benchley and its subsequent film adaptation by Steven Spielberg. Humans are not a preferred prey, but nevertheless it is responsible for the largest number of reported and identified fatal unprovoked shark attacks on humans. However, attacks are rare, typically occurring fewer than 10 times per year globally. Taxonomy The great white is the sole recognized extant species in the genus Carcharodon, and is one of five extant species belonging to the family Lamnidae. Other members of this family include the mako sharks, porbeagle, and salmon shark. The family belongs to the Lamniformes, the order of mackerel sharks. Etymology and naming history The English name 'white shark' and its Australian variant 'white pointer' is thought to have come from the shark's stark white underside, a characteristic feature most noticeable in beached sharks lying upside down with their bellies exposed. Colloquial use favours the name 'great white shark', with 'great' perhaps stressing the size and prowess of the species, and "white shark" having historically been used to describe the much smaller oceanic white-tipped shark, later referred to for a time as the "lesser white shark". Most scientists prefer 'white shark', as the name "lesser white shark" is no longer used, while some use 'white shark' to refer to all members of the Lamnidae. 
The scientific genus name Carcharodon literally means "jagged tooth", a reference to the large serrations that appear in the shark's teeth. It is a compound of two Ancient Greek words: the prefix carchar- is derived from κάρχαρος (kárkharos), which means "jagged" or "sharp". The suffix -odon is a romanization of ὀδών (odṓn), which translates to "tooth". The specific name carcharias is a Latinization of καρχαρίας (karkharías), the Ancient Greek word for shark. The great white shark was one of the species originally described by Carl Linnaeus in his 1758 10th edition of Systema Naturae, in which it was identified as an amphibian and assigned the scientific name Squalus carcharias, Squalus being the genus in which he placed all sharks. By the 1810s, it was recognized that the shark should be placed in a new genus, but it was not until 1838 that Sir Andrew Smith coined the name Carcharodon as the new genus. There had been a few attempts to describe and classify the great white before Linnaeus. One of its earliest mentions in literature as a distinct type of animal appears in Pierre Belon's 1553 book De aquatilibus duo, cum eiconibus ad vivam ipsorum effigiem quoad ejus fieri potuit, ad amplissimum cardinalem Castilioneum. In it, he illustrated and described the shark under the name Canis carcharias based on the jagged nature of its teeth and its alleged similarities with dogs. Another name used for the great white around this time was Lamia, first coined by Guillaume Rondelet in his 1554 book Libri de Piscibus Marinis, who also identified it as the fish that swallowed the prophet Jonah in biblical texts. Linnaeus recognized both names as previous classifications. Fossil ancestry Molecular clock studies published between 1988 and 2002 determined the closest living relative of the great white to be the mako sharks of the genus Isurus, which diverged some time between 60 and 43 million years ago. Tracing this evolutionary relationship through fossil evidence, however, remains subject to further paleontological study. The original hypothesis of the great white shark's origin held that it is a descendant of a lineage of mega-toothed sharks, and is closely related to the prehistoric megalodon. These sharks were considerably larger in size, with megalodon attaining an estimated length of up to . Similarities between the teeth of great white and mega-toothed sharks, such as large triangular shapes, serrated blades, and the presence of dental bands, formed the primary evidence for a close evolutionary relationship. As a result, scientists classified the ancient forms under the genus Carcharodon. Although weaknesses in the hypothesis existed, such as uncertainty over exactly which species evolved into the modern great white and multiple gaps in the fossil record, palaeontologists were able to chart the hypothetical lineage back to a 60-million-year-old shark known as Cretalamna as the common ancestor of all sharks within the Lamnidae. However, it is now understood that the great white shark holds closer ties to the mako sharks and is descended from a separate lineage as a chronospecies unrelated to the mega-toothed sharks. This was supported by the discovery of a transitional species that connected the great white to an unserrated shark known as Carcharodon hastalis. This transitional species, which was named Carcharodon hubbelli in 2012, demonstrated a mosaic of evolutionary transitions between the great white and C. hastalis,
namely the gradual appearance of serrations, in a span of between 8 and 5 million years ago. The progression of C. hubbelli reflected shifting diets and niches; by 6.5 million years ago, the serrations were developed enough for C. hubbelli to handle marine mammals. Although both the great white and C. hastalis were known worldwide, C. hubbelli is primarily found in California, Peru, Chile, and surrounding coastal deposits, indicating that the great white had Pacific origins. C. hastalis continued to thrive alongside the great white until its last appearance around one million years ago and is believed to have possibly sired a number of additional species, including Carcharodon subserratus and Carcharodon plicatilis. However, Yun argued that the fossil tooth remains of C. hastalis and the great white shark "have been documented from the same deposits, hence the former cannot be a chronospecific ancestor of the latter." He also criticized the fact that the C. hastalis "morphotype has never been tested through phylogenetic analyses," and noted that, as of 2021, the argument that the modern Carcharodon lineage with narrow, serrated teeth evolved from C. hastalis with broad, unserrated teeth remains uncertain. Tracing beyond C. hastalis, another prevailing hypothesis proposes that the great white and mako lineages shared a common ancestor in a primitive mako-like species. The identity of this ancestor is still debated, but a potential candidate is Isurolamna inflata, which lived between 65 and 55 million years ago. It is hypothesized that the great white and mako lineages split with the rise of two separate descendants, the one representing the great white shark lineage being Macrorhizodus praecursor. Distribution and habitat Great white sharks live in almost all coastal and offshore waters which have water temperature between , with greater concentrations in the United States (Northeast and California), South Africa, Japan, Oceania, Chile, and the Mediterranean, including the Sea of Marmara and Bosphorus. One of the densest-known populations is found around Dyer Island, South Africa. Juvenile great white sharks inhabit a narrower band of temperatures, between , in shallow coastal nurseries. Increased observations of young sharks in areas where they were not previously common, such as Monterey Bay on the central California coast, suggest that climate change may be reducing the range of juvenile great white sharks and shifting it toward the poles. The great white is an epipelagic fish, observed mostly in the presence of rich game, such as fur seals (Arctocephalus spp.), sea lions, cetaceans, other sharks, and large bony fish species. In the open ocean, it has been recorded at depths as great as . These findings challenge the traditional notion that the great white is a coastal species. According to a recent study, California great whites have migrated to an area between the Baja California Peninsula and Hawaii known as the White Shark Café to spend at least 100 days before migrating back to Baja. On the journey out, they swim slowly and dive down to around . After they arrive, they change behaviour and do short dives to about for up to ten minutes. Another white shark that was tagged off the South African coast swam to the southern coast of Australia and back within the year. A similar study tracked a different great white shark from South Africa swimming to Australia's northwestern coast and back, a journey of in under nine months.
These observations argue against traditional theories that white sharks are coastal territorial predators, and open up the possibility of interaction between shark populations that were previously thought to have been discrete. The reasons for their migration and what they do at their destination is still unknown. Possibilities include seasonal feeding or mating. In the Northwest Atlantic, the white shark populations off the New England coast were nearly eradicated due to over-fishing. In recent years, the populations have grown greatly, largely due to the increase in seal populations on Cape Cod, Massachusetts since the enactment of the Marine Mammal Protection Act in 1972. Currently very little is known about the hunting and movement patterns of great whites off Cape Cod, but ongoing studies hope to offer insight into this growing shark population. The Massachusetts Division of Marine Fisheries (part of the Department of Fish and Game) began a population study in 2014; since 2019, this research has focused on how humans can avoid conflict with sharks. Scientists believe all North Atlantic great white sharks spend their first year of life near New York City, off the coast of Long Island. A 2018 study indicated that white sharks prefer to congregate deep in anticyclonic eddies in the North Atlantic Ocean. The sharks studied tended to favour the warm-water eddies, spending the daytime hours at depths of and coming to the surface at night. Anatomy and appearance The great white shark has a robust, large, conical snout. The upper and lower lobes on the tail fin are approximately the same size which is similar to some mackerel sharks. A great white displays countershading, by having a white underside and a grey dorsal area (sometimes in a brown or blue shade) that gives an overall mottled appearance. The coloration makes it difficult for prey to spot the shark because it breaks up the shark's outline when seen from the side. From above, the darker shade blends with the sea and from below it exposes a minimal silhouette against the sunlight. Leucism is extremely rare in this species, but has been documented at least three times; in a pup that washed ashore in Australia and died, in another pup in South Africa, and a third six-metre adult male in Indonesia. Great white sharks, like many other sharks, have rows of serrated teeth behind the main ones, ready to replace any that break off. When the shark bites, it shakes its head side-to-side, helping the teeth saw off large chunks of flesh. Great white sharks, like other mackerel sharks, have larger eyes than other shark species in proportion to their body size. The iris of the eye is a deep blue instead of black. Size In great white sharks, sexual dimorphism is present, and females are generally larger than males. Male great whites on average measure in length, while females measure . Adults of this species weigh on average; however, mature females can have an average mass of . The largest females have been verified up to in length and an estimated in weight, perhaps up to . The maximum size is subject to debate because some reports are rough estimations or speculations performed under questionable circumstances. Among living cartilaginous fish, only the whale shark (Rhincodon typus), the basking shark (Cetorhinus maximus) and the giant manta ray (Manta birostris), in that order, are on average larger and heavier. These three species are generally quite docile in disposition and given to passively filter-feeding on very small organisms. 
This makes the great white shark the largest extant macropredatory fish. Great white sharks measure approximately when born, and grow about every year. A complete female great white shark specimen in the Museum of Zoology in Lausanne, and claimed by De Maddalena et al. (2003) as the largest preserved specimen, measured in total body length with the caudal fin in its depressed position, and is estimated to have weighed . According to J. E. Randall, the largest white shark reliably measured was a specimen reported from Ledge Point, Western Australia in 1987, but it is unclear whether that length was measured with the caudal fin in its depressed or natural position. Another great white specimen of similar size was a female caught in August 1988 in the Gulf of St. Lawrence, off Prince Edward Island, by David McKendrick of Alberton, Prince Edward Island. This female great white was long, as verified by the Canadian Shark Research Center. A report of a specimen reportedly measuring in length and with a body mass estimated at caught in 1945 off the coast of Cuba was at the time considered reliable by some experts. However, later studies revealed this particular specimen to be around in length, i.e. a specimen within the typical maximum size range. The largest great white recognized by the International Game Fish Association (IGFA) is one caught by Alf Dean in southern Australian waters in 1959, weighing . Examples of large unconfirmed great whites A number of very large unconfirmed great white shark specimens have been recorded. For decades, many ichthyological works, as well as the Guinness Book of World Records, listed two great white sharks as the largest individuals: In the 1870s, a great white captured in southern Australian waters, near Port Fairy, and an shark trapped in a herring weir in New Brunswick, Canada, in the 1930s. However, these measurements were not obtained in a rigorous, scientifically valid manner, and researchers have questioned the reliability of these measurements for a long time, noting they were much larger than any other accurately reported sighting. Later studies proved these doubts to be well-founded. This New Brunswick shark may have been a misidentified basking shark, as the two have similar body shapes. The question of the Port Fairy shark was settled in the 1970s when J. E. Randall examined the shark's jaws and "found that the Port Fairy shark was of the order of in length and suggested that a mistake had been made in the original record, in 1870, of the shark's length". While these measurements have not been confirmed, some great white sharks caught in modern times have been estimated to be more than long, but these claims have received some criticism. However, J. E. Randall believed that great white shark may have exceeded in length. A great white shark was captured near Kangaroo Island in Australia on 1 April 1987. This shark was estimated to be more than long by Peter Resiley, and has been designated as KANGA. Another great white shark was caught in Malta by Alfredo Cutajar on 16 April 1987. This shark was also estimated to be around long by John Abela and has been designated as MALTA. However, Cappo drew criticism because he used shark size estimation methods proposed by J. E. Randall to suggest that the KANGA specimen was long. In a similar fashion, I. K. Fergusson also used shark size estimation methods proposed by J. E. Randall to suggest that the MALTA specimen was long. 
However, photographic evidence suggested that these specimens were larger than the size estimations yielded through Randall's methods. Thus, a team of scientists (H. F. Mollet, G. M. Cailliet, A. P. Klimley, D. A. Ebert, A. D. Testi, and L. J. V. Compagno) reviewed the cases of the KANGA and MALTA specimens in 1996 to resolve the dispute, conducting a comprehensive morphometric analysis of the remains of these sharks and re-examining the photographic evidence in an attempt to validate the original size estimations; their findings were consistent with them. The findings indicated that the estimations by P. Resiley and J. Abela are reasonable and could not be ruled out. A particularly large female great white nicknamed "Deep Blue", estimated to measure , was filmed off Guadalupe during shooting for the 2014 episode of Shark Week "Jaws Strikes Back". Deep Blue would later gain significant attention when she was filmed interacting with researcher Mauricio Hoyos Padilla in a viral video that Hoyos posted on Facebook on 11 June 2015. Deep Blue was later seen off Oahu in January 2019 while scavenging a sperm whale carcass, whereupon she was filmed swimming beside divers, including dive tourism operator and model Ocean Ramsey, in open water. A particularly infamous great white shark, supposedly of record proportions, once patrolled the area that comprises False Bay, South Africa, and was said to be well over during the early 1980s. This shark, known locally as the "Submarine", had a legendary reputation that was supposedly well-founded. Though rumours have stated this shark was exaggerated in size or non-existent altogether, witness accounts by the then young Craig Anthony Ferreira, a notable shark expert in South Africa, and his father indicate an unusually large animal of considerable size and power (though it remains uncertain just how massive the shark was, as it escaped capture each time it was hooked). Ferreira describes in great detail the four encounters with the giant shark in which he participated in his book Great White Sharks On Their Best Behavior. One contender in maximum size among the predatory sharks is the tiger shark (Galeocerdo cuvier). While tiger sharks, which are typically both a few feet smaller and have a leaner, less heavy body structure than white sharks, have been confirmed to reach at least in length, an unverified specimen was reported to have measured in length and weighed , more than two times heavier than the largest confirmed specimen at . Some other macropredatory sharks such as the Greenland shark (Somniosus microcephalus) and the Pacific sleeper shark (S. pacificus) are also reported to rival these sharks in length (but probably weigh a bit less, since they are more slender in build than a great white) in exceptional cases. Reported sizes Adaptations Great white sharks, like all other sharks, have an extra sense given by the ampullae of Lorenzini, which enables them to detect the electromagnetic field emitted by the movement of living animals. Great whites are so sensitive they can detect variations of half a billionth of a volt. At close range, this allows the shark to locate even immobile animals by detecting their heartbeat. Most fish have a less-developed but similar sense using their body's lateral line. To more successfully hunt fast and agile prey such as sea lions, the great white has adapted to maintain a body temperature warmer than the surrounding water. One of these adaptations is a "rete mirabile" (Latin for "wonderful net"). 
This close web-like structure of veins and arteries, located along each lateral side of the shark, conserves heat by warming the cooler arterial blood with the venous blood that has been warmed by the working muscles. This keeps certain parts of the body (particularly the stomach) at temperatures up to above that of the surrounding water, while the heart and gills remain at sea temperature. When conserving energy, the core body temperature can drop to match the surroundings. A great white shark's success in raising its core temperature is an example of gigantothermy. Therefore, the great white shark can be considered an endothermic poikilotherm or mesotherm because its body temperature is not constant but is internally regulated. Great whites also rely on the fat and oils stored within their livers for long-distance migrations across nutrient-poor areas of the oceans. Studies by Stanford University and the Monterey Bay Aquarium published on 17 July 2013 revealed that, in addition to controlling the sharks' buoyancy, the liver of great whites is essential in migration patterns. Sharks that sink faster during drift dives were revealed to use up their internal stores of energy more quickly than those that sink at more leisurely rates. Toxicity from heavy metals seems to have little negative effect on great white sharks. A 2012 study led by biologists from the University of Miami, based on blood samples taken from forty-three individuals of varying size, age and sex off the South African coast, indicates that despite high levels of mercury, lead, and arsenic, there was no sign of raised white blood cell counts or granulocyte-to-lymphocyte ratios, indicating the sharks had healthy immune systems. This discovery suggests a previously unknown physiological defence against heavy metal poisoning. Great whites are known to have a propensity for "self-healing and avoiding age-related ailments". Bite force A 2007 study from the University of New South Wales in Sydney, Australia, used CT scans of a shark's skull and computer models to measure the shark's maximum bite force. The study reveals the forces and behaviours its skull is adapted to handle and resolves competing theories about its feeding behaviour. In 2008, a team of scientists led by Stephen Wroe conducted an experiment to determine the great white shark's jaw power, and the findings indicated that a specimen massing could exert a bite force of . Ecology and behaviour This shark's behaviour and social structure are complex. In South Africa, white sharks have a dominance hierarchy in which an individual's rank is established primarily by size and, to a lesser extent, by sex and "squatter's rights"; larger sharks dominate smaller sharks, females dominate males, and established residents dominate newcomers. When hunting, great whites tend to separate and resolve conflicts with rituals and displays. White sharks rarely resort to combat, although some individuals have been found with bite marks that match those of other white sharks. This suggests that when a great white approaches another too closely, it reacts with a warning bite. Another possibility is that white sharks bite to show their dominance. Data acquired from animal-borne telemetry receivers and published in 2022 by Royal Society Publishing suggests that individual great whites may associate so that they can inadvertently share information on the whereabouts of prey or the location of the remains of animals that can be scavenged. 
As biologging can help to reveal social habits, it allows a better understanding to be made in future studies regarding the full extent of social interactions in large marine animals, including the great white shark. The great white shark is one of only a few sharks known to regularly lift its head above the sea surface to gaze at other objects such as prey. This is known as spy-hopping. This behaviour has also been seen in at least one group of blacktip reef sharks, but this might be learned from interaction with humans (it is theorized that the shark may also be able to smell better this way because smell travels through air faster than through water). White sharks are generally very curious animals, display intelligence and may also turn to socializing if the situation demands it. At Seal Island, white sharks have been observed arriving and departing in stable "clans" of two to six individuals on a yearly basis. Whether clan members are related is unknown, but they get along peacefully enough. In fact, the social structure of a clan is probably most aptly compared to that of a wolf pack, in that each member has a clearly established rank and each clan has an alpha leader. When members of different clans meet, they establish social rank nonviolently through any of a variety of interactions. In 2022, research in South Africa suggested that the great white shark has the ability to change colours to camouflage itself depending on the hormones it gives off. Different hormones would change the colour of the skin from white to grey. Skin dosed with adrenaline would turn lighter, with melanocyte-stimulating hormone causing melanocyte cells to dissipate thus making the shark's skin a darker colour, although hormone mediated color change is not fully validated due to the limited number of test subjects (i.e. great whites). The camo shark hypothesis is supported by the fact that zebra sharks can change their colour as they age, and rainbow sharks can lose colour due to stress and ageing. Diet Great white sharks are generalist carnivores, preying upon fish (e.g. tuna, rays, other sharks), cetaceans (i.e., dolphins, porpoises, whales), pinnipeds (e.g. seals, fur seals, and sea lions), squid, sea turtles, sea otters (Enhydra lutris) and seabirds. Great whites have also been known to eat objects that they are unable to digest. Juvenile white sharks predominantly prey on fish, including other elasmobranchs, as their jaws are not strong enough to withstand the forces required to attack larger prey such as pinnipeds and cetaceans until they reach a length of or more, at which point their jaw cartilage mineralizes enough to withstand the impact of biting into larger prey species. Upon approaching a length of nearly , great white sharks begin to target predominantly marine mammals for food, though individual sharks seem to specialize in different types of prey depending on their preferences. They seem to be highly opportunistic. These sharks prefer prey with a high content of energy-rich fat. Shark expert Peter Klimley used a rod-and-reel rig and trolled carcasses of a seal, a pig, and a sheep from his boat in the South Farallons. The sharks attacked all three baits but rejected the sheep carcass. Off Seal Island, False Bay in South Africa, the sharks ambush brown fur seals (Arctocephalus pusillus) from below at high speeds, hitting the seal mid-body. They achieve high speeds that allow them to completely breach the surface of the water. The peak burst speed is estimated to be above . 
They have also been observed chasing prey after a missed attack. Prey is usually attacked at the surface. Shark attacks occur most often in the morning, within two hours of sunrise, when visibility is poor. Their success rate is 55% in the first two hours, falling to 40% in late morning after which hunting stops. Off California, sharks use different predation techniques depending on the prey species. They immobilize northern elephant seals (Mirounga angustirostris) with a large bite to the hindquarters (which is the main source of the seal's mobility) and wait for the seal to bleed to death. This technique is especially used on adult male elephant seals, which are typically larger than the shark, ranging between , and are potentially dangerous adversaries. However, juvenile elephant seals are the most frequently eaten at elephant seal colonies. Prey is normally attacked sub-surface. Harbor seals (Phoca vitulina) are taken from the surface and dragged down until they stop struggling. They are then eaten near the bottom. California sea lions (Zalophus californianus) are ambushed from below and struck mid-body before being dragged and eaten. In the Northwest Atlantic mature great whites are known to feed on both harbor and grey seals. Unlike adults, juvenile white sharks in the area feed on smaller fish species until they are large enough to prey on marine mammals such as seals. White sharks also attack dolphins and porpoises from above, behind or below to avoid being detected by their echolocation. Targeted species include dusky dolphins (Sagmatias obscurus), Risso's dolphins (Grampus griseus), bottlenose dolphins (Tursiops ssp.), humpback dolphins (Sousa ssp.), harbour porpoises (Phocoena phocoena), and Dall's porpoises (Phocoenoides dalli). Groups of dolphins have occasionally been observed defending themselves from sharks with mobbing behaviour. White shark predation on other species of small cetacean has also been observed. In August 1989, a juvenile male pygmy sperm whale (Kogia breviceps) was found stranded in central California with a bite mark on its caudal peduncle from a great white shark. In addition, white sharks attack and prey upon beaked whales. Cases where an adult Stejneger's beaked whale (Mesoplodon stejnegeri), with a mean mass of around , and a juvenile Cuvier's beaked whale (Ziphius cavirostris), an individual estimated at , were hunted and killed by great white sharks have also been observed. When hunting sea turtles, they appear to simply bite through the carapace around a flipper, immobilizing the turtle. The heaviest species of bony fish, the oceanic sunfish (Mola mola), has been found in great white shark stomachs. Whale carcasses comprise an important part of the diet of white sharks. However, this has rarely been observed due to whales dying in remote areas. It has been estimated that of whale blubber could feed a white shark for 1.5 months. Detailed observations were made of four whale carcasses in False Bay between 2000 and 2010. Sharks were drawn to the carcass by chemical and odour detection, spread by strong winds. After initially feeding on the whale caudal peduncle and fluke, the sharks would investigate the carcass by slowly swimming around it and mouthing several parts before selecting a blubber-rich area. During feeding bouts of 15–20 seconds the sharks removed flesh with lateral headshakes, without the protective ocular rotation they employ when attacking live prey. 
The sharks were frequently observed regurgitating chunks of blubber and immediately returning to feed, possibly in order to replace low energy yield pieces with high energy yield pieces, using their teeth as mechanoreceptors to distinguish them. After feeding for several hours, the sharks appeared to become lethargic, no longer swimming to the surface; they were observed mouthing the carcass but were apparently unable to bite hard enough to remove flesh, and would instead bounce off and slowly sink. Up to eight sharks were observed feeding simultaneously, bumping into each other without showing any signs of aggression; on one occasion a shark accidentally bit the head of a neighbouring shark, leaving two teeth embedded, but both continued to feed unperturbed. Smaller individuals hovered around the carcass eating chunks that drifted away. Unusually for the area, large numbers of sharks over five metres long were observed, suggesting that the largest sharks change their behaviour to search for whales as they lose the manoeuvrability required to hunt seals. The investigating team concluded that the importance of whale carcasses, particularly for the largest white sharks, has been underestimated. In another documented incident, white sharks were observed scavenging on a whale carcass alongside tiger sharks. In 2020, marine biologists Sasha Dines and Enrico Gennari published a documented incident in the journal Marine and Freshwater Research of two great white sharks, within an hour of each other, successfully attacking and killing a live juvenile 7 m (23 ft) humpback whale. When attacking the whale, the sharks used the classic strategy they employ on pinnipeds, including the bite-and-spit tactic used on smaller prey items. The whale was an entangled individual, heavily emaciated and thus more vulnerable to the sharks' attacks. The incident is the first known documentation of great whites actively killing a large baleen whale. A second incident of great white sharks killing a humpback whale, this time involving a single large female great white nicknamed Helen, was documented off the coast of South Africa. Working alone, the shark attacked an emaciated and entangled humpback whale, crippling it with bites to the tail before drowning the whale by biting onto its head and pulling it underwater. The attack was witnessed via aerial drone by marine biologist Ryan Johnson, who said the attack went on for roughly 50 minutes before the shark successfully killed the whale. Johnson suggested that the shark may have strategized its attack in order to kill such a large animal. Stomach contents of great whites also indicate that whale sharks, both juvenile and adult, may be included on the animal's menu, though whether this is active hunting or scavenging is not known at present. Reproduction Great white sharks were previously thought to reach sexual maturity at around 15 years of age, but are now believed to take far longer; male great white sharks reach sexual maturity at age 26, while females take 33 years to reach sexual maturity. Maximum life span was originally believed to be more than 30 years, but a study by the Woods Hole Oceanographic Institution placed it at upwards of 70 years. Examinations of vertebral growth ring counts gave a maximum male age of 73 years and a maximum female age of 40 years for the specimens studied. 
The shark's late sexual maturity, low reproductive rate, long gestation period of 11 months and slow growth make it vulnerable to pressures such as overfishing and environmental change. Little is known about the great white shark's mating habits; mating behaviour was not observed in this species until 1997, and was not properly documented until 2020. It was previously assumed that whale carcasses may be an important location for sexually mature sharks to meet for mating. According to the testimony of fisherman Dick Ledgerwood, who observed two great white sharks mating in the area near Port Chalmers and Otago Harbor, in New Zealand, it is theorized that great white sharks mate in shallow water away from feeding areas and continually roll belly to belly during copulation. Birth has never been observed, but pregnant females have been examined. Great white sharks are ovoviviparous, which means eggs develop and hatch in the uterus and continue to develop until birth. The great white has an 11-month gestation period. The shark pup's powerful jaws begin to develop in the first month. The unborn sharks participate in oophagy, in which they feed on ova produced by the mother. Delivery is in spring and summer. The largest number of pups recorded for this species is 14 pups from a single mother measuring that was killed incidentally off Taiwan in 2019. On 9 July 2023, the first footage of what was likely a newborn great white shark was filmed via aerial drone off Southern California, near Carpinteria, after a large adult shark was seen diving to the bottom roughly from the shoreline, after which the smaller shark rose to the surface. The young shark, estimated at up to long, was pale in colour; what may have been an embryonic covering, possibly intrauterine milk, was seen sloughing off its skin. Adult sharks filmed in the area days prior suggest the area may be a birthing ground for pregnant females. This footage was published in the journal Environmental Biology of Fishes on 29 January 2024. A follow-up study published in October 2024 lends further support to the theory that the Carpinteria shark was a newborn: the description and examination of neonate porbeagles with a similar body covering to the young great white suggests that the covering is not intrauterine milk (production of which ceases mid-gestation), but embryonic epithelium that covers the shark's denticles and rubs off shortly after birth. Breaching behaviour A breach is the result of a high-speed approach to the surface, with the resulting momentum taking the shark partially or completely clear of the water. This is a hunting technique employed by great white sharks whilst hunting seals. This technique is often used on cape fur seals at Seal Island in False Bay, South Africa. Because the behaviour is unpredictable, it is very hard to document. It was first photographed by Chris Fallows and Rob Lawrence, who developed the technique of towing a slow-moving seal decoy to trick the sharks into breaching. Between April and September, scientists may observe around 600 breaches. The seals swim on the surface and the great white sharks launch their predatory attack from the deeper water below. They can reach speeds of up to and can at times launch themselves more than into the air. Just under half of observed breach attacks are successful. In 2011, a 3-m-long shark jumped onto a seven-person research vessel off Seal Island in Mossel Bay. 
The crew were undertaking a population study using sardines as bait, and the incident was judged not to be an attack on the boat but an accident. Natural threats Interspecific competition and predation by orcas Interspecific competition between the great white shark and the orca is probable in regions where the dietary preferences of both species may overlap. An incident was documented on 4 October 1997 in the Farallon Islands off California in the United States. An estimated female orca immobilized an estimated great white shark. The orca held the shark upside down to induce tonic immobility and kept the shark still for fifteen minutes, causing it to suffocate. The orca then proceeded to eat the dead shark's liver. It is believed that the scent of the slain shark's carcass caused all the great whites in the region to flee, forfeiting an opportunity for a great seasonal feed. Another similar attack apparently occurred there in 2000, but its outcome is not clear. After both attacks, the local population of about 100 great whites vanished. Following the 2000 incident, a great white with a satellite tag was found to have immediately submerged to a depth of and swum to Hawaii. In 2015, a pod of orcas was recorded to have killed a great white shark off South Australia. In 2017, three great whites were found washed ashore near Gansbaai, South Africa, with their body cavities torn open and the livers removed by what is likely to have been orcas. Orcas also generally impact great white distribution. Studies published in 2019 of orca and great white shark distribution and interactions around the Farallon Islands indicate that the cetaceans impact the sharks negatively, with brief appearances by orcas causing the sharks to seek out new feeding areas until the next season. It is unclear whether this is an example of competitive exclusion or ecology of fear. Occasionally, however, some great whites have been seen to swim near orcas without fear. Parasites The great white shark is the definitive host of two species of tapeworms from the genus Clistobothrium, these being Clistobothrium carcharodoni and Clistobothrium tumidum, both of which infect the shark's spiral intestine. The former is believed to be transmitted to great whites through the consumption of infected cetacean prey, namely the spinner dolphin (Stenella longirostris), Risso's dolphin (Grampus griseus), and the common bottlenose dolphin (Tursiops truncatus), all of which serve as intermediary or paratenic hosts of the tapeworm. The transmission vector of the latter species is currently unknown, but it is unlikely to share the same intermediary hosts as Clistobothrium carcharodoni. The intensity of Clistobothrium carcharodoni infestations in affected great whites is extremely high; in one case, up to 2,533 specimens were recovered from the spiral valve of a single great white. There are two recorded instances of the ectoparasitic cookiecutter shark (Isistius brasiliensis) targeting subadult great whites off the coast of Guadalupe Island. However, the relative dearth of predation records indicates that great whites are not a common food source for cookiecutter sharks, and that cetaceans and pinnipeds, especially the Guadalupe fur seal (Arctocephalus townsendi), are preferred over great whites, in part due to the higher caloric content of their blubber and in part due to the higher risk of retaliation from victimized great whites. 
Relationship with humans Shark bite incidents Of all shark species, the great white shark is responsible for by far the largest number of recorded shark bite incidents on humans, with 351 documented unprovoked bite incidents on humans as of 2024. More than any documented bite incident, Peter Benchley's best-selling novel Jaws and the subsequent 1975 film adaptation directed by Steven Spielberg provided the great white shark with the image of being a "man-eater" in the public mind. While great white sharks have killed humans in at least 74 documented unprovoked bite incidents, they typically do not target them: for example, in the Mediterranean Sea there have been 31 confirmed bite incidents against humans in the last two centuries, most of which were non-fatal. Many of the incidents seemed to be "test-bites". Great white sharks also test-bite buoys, flotsam, and other unfamiliar objects, and they might grab a human or a surfboard to identify what it is. Many bite incidents occur in waters with low visibility or in other situations which impair the shark's senses. The species appears not to like the taste of humans, or at least finds the taste unfamiliar. Further research shows that they can tell in one bite whether or not the object is worth preying upon. Humans, for the most part, are too bony for their liking. They much prefer seals, which are fat and rich in protein. Studies published in 2021 by Ryan et al. in the Journal of the Royal Society Interface suggest that mistaken identity is in fact a cause of many shark bite incidents perpetrated by great white sharks. Using footage of seals in aquariums as models and mounted cameras moving at the same speed and angle as a cruising great white shark looking up at the surface from below, the experiment suggests that the sharks are likely colour-blind and cannot see in fine enough detail to determine whether the silhouette above them is a pinniped or a swimming human, potentially vindicating the hypothesis. Humans are not appropriate prey because the shark's digestion is too slow to cope with a human's high ratio of bone to muscle and fat. Accordingly, in most recorded shark bite incidents, great whites broke off contact after the first bite. Fatalities are usually caused by blood loss from the initial bite rather than from critical organ loss or from whole consumption. Of the 351 recorded unprovoked attacks, 59 were fatal. However, some researchers have hypothesized that the reason the proportion of fatalities is low is not that sharks do not like human flesh, but that humans are often able to escape after the first bite. In the 1980s, John McCosker, chair of aquatic biology at the California Academy of Sciences, noted that divers who dived solo and were bitten by great whites were generally at least partially consumed, while divers who followed the buddy system were generally rescued by their companion. McCosker and Timothy C. Tricas, an author and professor at the University of Hawaii, suggest that a standard pattern for great whites is to make an initial devastating attack and then wait for the prey to weaken before consuming the wounded animal. Humans' ability to move out of reach with the help of others, thus foiling the attack, is unusual for a great white's prey. Shark culling Shark culling is the deliberate killing of sharks by a government in an attempt to reduce shark attacks; shark culling is often called "shark control". 
These programs have been criticized by environmentalists and scientists—they say these programs harm the marine ecosystem; they also say such programs are "outdated, cruel, and ineffective". Many different species (dolphins, turtles, etc.) are also killed in these programs (because of their use of shark nets and drum lines)—15,135 marine animals were killed in New South Wales' nets between 1950 and 2008, and 84,000 marine animals were killed by Queensland authorities from 1962 to 2015. Great white sharks are currently killed in both Queensland and New South Wales in "shark control" (shark culling) programs. Queensland uses shark nets and drum lines with baited hooks, while New South Wales only uses nets. From 1962 to 2018, Queensland authorities killed about 50,000 sharks, many of which were great whites. From 2013 to 2014 alone, 667 sharks were killed by Queensland authorities, including great white sharks. In Queensland, great white sharks found alive on the drum lines are shot. In New South Wales, between 1950 and 2008, a total of 577 great white sharks were killed in nets. Between September 2017 and April 2018, fourteen great white sharks were killed in New South Wales. KwaZulu-Natal (an area of South Africa) also has a "shark control" program that kills great white sharks and other marine life. In a 30-year period, more than 33,000 sharks were killed in KwaZulu-Natal's shark-killing program, including great whites. In 2014 the state government of Western Australia led by Premier Colin Barnett implemented a policy of killing large sharks. The policy, colloquially referred to as the Western Australian shark cull, was intended to protect users of the marine environment from shark bite incidents, following the deaths of seven people on the Western Australian coastline in the years 2010–2013. Baited drum lines were deployed near popular beaches using hooks designed to catch great white sharks, as well as bull and tiger sharks. Large sharks found hooked but still alive were shot and their bodies discarded at sea. The government claimed they were not culling the sharks, but were using a "targeted, localised, hazard mitigation strategy". Barnett described opposition as "ludicrous" and "extreme", and said that nothing could change his mind. This policy was met with widespread condemnation from the scientific community, which showed that species responsible for bite incidents were notoriously hard to identify, that the drum lines failed to capture white sharks, as intended, and that the government also failed to show any correlation between their drum line policy and a decrease in shark bite incidents in the region. Attacks on boats Great white sharks infrequently bite and sometimes even sink boats. Only five of the 108 authenticated unprovoked shark bite incidents reported from the Pacific Coast during the 20th century involved kayakers. In a few cases they have bitten boats up to in length. They have bumped or knocked people overboard, usually biting the boat from the stern. In one case in 1936, a large shark leapt completely into the South African fishing boat Lucky Jim, knocking a crewman into the sea. Tricas and McCosker's underwater observations suggest that sharks are attracted to boats by the electrical fields they generate, which are picked up by the ampullae of Lorenzini and confuse the shark about whether or not wounded prey might be nearby. In captivity Prior to August 1981, no great white shark in captivity lived longer than 11 days. 
In August 1981, a great white survived for 16 days at SeaWorld San Diego before being released. The idea of containing a live great white at SeaWorld Orlando was used in the 1983 film Jaws 3-D. Monterey Bay Aquarium first attempted to display a great white in 1984, but the shark died after 11 days because it did not eat. In July 2003, Monterey researchers captured a small female and kept it in a large netted pen near Malibu for five days. They had the rare success of getting the shark to feed in captivity before its release. Not until September 2004 was the aquarium able to place a great white on long-term exhibit. A young female, which was caught off the coast of Ventura, was kept in the aquarium's Outer Bay exhibit for 198 days before she was released in March 2005. She was tracked for 30 days after release. On the evening of 31 August 2006, the aquarium introduced a juvenile male caught outside Santa Monica Bay. His first meal as a captive was a large salmon steak on 8 September 2006, and as of that date, he was estimated to be in length and to weigh approximately . He was released on 16 January 2007, after 137 days in captivity. Monterey Bay Aquarium housed a third great white, a juvenile male, for 162 days between 27 August 2007, and 5 February 2008. On arrival, he was long and weighed . He grew to and before release. A juvenile female came to the Outer Bay Exhibit on 27 August 2008. While she did swim well, the shark fed only once during her stay and was tagged and released on 7 September 2008. Another juvenile female was captured near Malibu on 12 August 2009, introduced to the Outer Bay exhibit on 26 August 2009, and was successfully released into the wild on 4 November 2009. The Monterey Bay Aquarium introduced a 1.4-m-long male into their redesigned "Open Sea" exhibit on 31 August 2011. He was exhibited for 55 days, and was released into the wild on 25 October the same year. However, the shark was determined to have died shortly after release via an attached electronic tag. The cause of death is not known. The Monterey Bay Aquarium does not plan to exhibit any more great whites, as the main purpose of containing them was scientific. As data from captive great whites were no longer needed, the institute has instead shifted its focus to study wild sharks. One of the largest adult great whites ever exhibited was at Japan's Okinawa Churaumi Aquarium in 2016, where a male was exhibited for three days before dying. Perhaps the most famous captive was a female named Sandy, which in August 1980 became the only great white to be housed at the California Academy of Sciences' Steinhart Aquarium in San Francisco, California. She was released because she would not eat and constantly bumped against the walls. Due to the vast amounts of resources required and the subsequent cost to keep a great white shark alive in captivity, their dietary preferences, size, migratory nature, and the stress of capture and containment, permanent exhibition of a great white shark is likely to be unfeasible. Shark tourism Cage diving is most common at sites where great whites are frequent including the coast of South Africa, the Neptune Islands in South Australia, and Guadalupe Island in Baja California. The popularity of cage diving and swimming with sharks is at the focus of a booming tourist industry. A common practice is to chum the water with pieces of fish to attract the sharks. 
These practices may make sharks more accustomed to people in their environment and lead them to associate human activity with food, a potentially dangerous situation. By drawing bait on a wire towards the cage, tour operators lure the shark to the cage, possibly striking it, exacerbating this problem. Other operators draw the bait away from the cage, causing the shark to swim past the divers. At present, hang baits are illegal off Isla Guadalupe and reputable dive operators do not use them. Operators in South Africa and Australia continue to use hang baits and pinniped decoys. In South Australia, playing rock music recordings underwater, including the AC/DC album Back in Black, has also been used experimentally to attract sharks. Companies object to being blamed for shark bite incidents, pointing out that lightning tends to strike humans more often than sharks bite humans. Their position is that further research needs to be done before banning practices such as chumming, which may alter natural behaviour. One compromise is to use chum only in areas where whites actively patrol anyway, well away from human leisure areas. Also, responsible dive operators do not feed sharks. Only sharks that are willing to scavenge follow the chum trail, and if they find no food at the end, they soon swim off and do not associate chum with a meal. It has been suggested that government licensing strategies may help enforce these responsible tourism practices. Conservation status It is unclear how much a concurrent increase in fishing for great white sharks has contributed to the decline of great white shark populations from the 1970s to the present. No accurate global population numbers are available, but the great white shark is now considered vulnerable worldwide, and critically endangered in Europe and the Mediterranean. Sharks taken during the long interval between birth and sexual maturity never reproduce, making population recovery and growth difficult. The International Union for Conservation of Nature notes that very little is known about the actual status of the great white shark, but as it appears uncommon compared to other widely distributed species, it is considered vulnerable. It is included in Appendix II of CITES, meaning that international trade in the species (including parts and derivatives) requires a permit. As of March 2010, it has also been included in Annex I of the CMS Migratory Sharks MoU, which strives for increased international understanding and coordination for the protection of certain migratory sharks. A February 2010 study by Barbara Block of Stanford University estimated the world population of great white sharks to be lower than 3,500 individuals, making the species more vulnerable to extinction than the tiger, whose population is in the same range. According to another study from 2014 by George H. Burgess, Florida Museum of Natural History, University of Florida, there are about 2,000 great white sharks near the California coast, which is 10 times higher than the previous estimate of 219 by Barbara Block. Fishermen target many sharks for their jaws, teeth, and fins, and as game fish in general. The great white shark, however, is rarely an object of commercial fishing, although its flesh is considered valuable. If incidentally captured (as happens, for example, in some tonnare in the Mediterranean), it is misleadingly sold as smooth-hound shark. 
In Australia The great white shark was declared vulnerable by the Australian Government in 1999 because of significant population decline and is currently protected under the Environmental Protection and Biodiversity Conservation (EPBC) Act. The causes of decline prior to protection included mortality from sport fishing harvests as well as being caught in beach protection netting. The national conservation status of the great white shark is reflected by all Australian states under their respective laws, granting the species full protection throughout Australia regardless of jurisdiction. Many states had prohibited the killing or possession of great white sharks prior to national legislation coming into effect. The great white shark is further listed as threatened in Victoria under the Flora and Fauna Guarantee Act, and as rare or likely to become extinct under Schedule 5 of the Wildlife Conservation Act in Western Australia. In 2002, the Australian government created the White Shark Recovery Plan, implementing government-mandated conservation research and monitoring for conservation in addition to federal protection and stronger regulation of shark-related trade and tourism activities. An updated recovery plan was published in 2013 to review progress, research findings, and to implement further conservation actions. A study in 2012 revealed that Australia's white shark population was separated by Bass Strait into genetically distinct eastern and western populations, indicating a need for the development of regional conservation strategies. Presently, human-caused shark mortality is continuing, primarily from accidental and illegal catching in commercial and recreational fishing as well as from being caught in beach protection netting, and the populations of great white shark in Australia are yet to recover. In spite of official protections in Australia, great white sharks continue to be killed in state "shark control" programs within Australia. For example, the government of Queensland has a "shark control" program (shark culling) which kills great white sharks (as well as other marine life) using shark nets and drum lines with baited hooks. In Queensland, great white sharks that are found alive on the baited hooks are shot. The government of New South Wales also kills great white sharks in its "shark control" program. Partly because of these programs, shark numbers in eastern Australia have decreased. The Australasian population of great white sharks is believed to be in excess of 8,000–10,000 individuals according to genetic research studies done by CSIRO, with an adult population estimated to be around 2,210 individuals in both Eastern and Western Australia. The annual survival rate for juveniles in these two separate populations was estimated in the same study to be close to 73 per cent, while adult sharks had a 93 per cent annual survival rate. Whether or not mortality rates in great white sharks have declined, or the population has increased as a result of the protection of this species in Australian waters is as yet unknown due to the slow growth rates of this species. In New Zealand The great white shark is one of the most commonly found in the waters of New Zealand. As of April 2007, great white sharks were fully protected within of New Zealand and additionally from fishing by New Zealand-flagged boats outside this range. The maximum penalty is a $250,000 fine and up to six months in prison. 
In June 2018, the New Zealand Department of Conservation classified the great white shark under the New Zealand Threat Classification System as "Nationally Endangered". The species meets the criteria for this classification as there exists a moderate, stable population of between 1000 and 5000 mature individuals. This classification has the qualifiers "Data Poor" and "Threatened Overseas". In the United States California In addition to existing federal regulations, great white sharks have been protected under California state law since 1 January 1994. Under this law, catching, hunting, pursuing, capturing, or killing great whites in California waters is strictly prohibited up to offshore, though exceptions exist for great whites caught for scientific research or unintentionally caught as bycatch. In both cases, a special permit is required in order to legally take them. In 2013, great white sharks were added to California's Endangered Species Act. From data collected, the population of great whites in the North Pacific was estimated to be fewer than 340 individuals. Research also reveals these sharks are genetically distinct from other members of their species elsewhere in Africa, Australia, and the east coast of North America, having been isolated from other populations. A 2014 study estimated the population of great white sharks along the California coastline to be approximately 2,400. In September 2019, California governor Gavin Newsom signed Assembly Bill 2109 into law, banning the use of shark bait, shark lures, and chumming to attract great whites in California waters, and prohibiting their usage within one nautical mile of any shoreline, pier, or jetty when a great white is visible or known to be present in the area. Massachusetts In June 2015, Massachusetts banned catching, cage diving, feeding, towing decoys, or baiting and chumming for its significant and highly predictable migratory great white population without an appropriate research permit. However, these restrictions apply only to activities within state waters, which extend three miles from shore. As a result, there are over a dozen tour operators offering cage diving, and some do bait and/or chum.
Biology and health sciences
Sharks
null
43701
https://en.wikipedia.org/wiki/Flint
Flint
Flint, occasionally flintstone, is a sedimentary cryptocrystalline form of the mineral quartz, categorized as the variety of chert that occurs in chalk or marly limestone. Historically, flint was widely used to make stone tools and start fires. Flint occurs chiefly as nodules and masses in sedimentary rocks, such as chalks and limestones. Inside the nodule, flint is usually dark grey or black, green, white, or brown in colour, and has a glassy or waxy appearance. A thin, oxidised layer on the outside of the nodules is usually different in colour, typically white and rough in texture. The nodules can often be found along streams and beaches. Flint breaks and chips into sharp-edged pieces, making it useful in constructing a variety of cutting tools, such as knife blades and scrapers. The use of flint to make stone tools dates back more than three million years; flint's extreme durability has made it possible to accurately date its use over this time. Flint is one of the primary materials used to define the Stone Age. During the Stone Age, access to flint was so important for survival that people would travel or trade long distances to obtain the stone. Grime's Graves was an important source of flint traded across Europe. Flint Ridge in Ohio was another important source of flint, and Native Americans extracted the flint from hundreds of quarries along the ridge. This "Ohio Flint" was traded across the eastern United States, and has been found as far west as the Rocky Mountains and south around the Gulf of Mexico. When struck against steel, flint will produce enough sparks to ignite a fire with the correct tinder, or gunpowder used in weapons, namely the flintlock firing mechanism. Although it has been superseded in these uses by different processes (the percussion cap), or materials (ferrocerium), "flint" has lent its name as generic term for a fire starter. Origin The exact mode of formation of flint is not yet clear, but it is thought that it occurs as a result of chemical changes in compressed sedimentary rock formations during the process of diagenesis. One hypothesis is that a gelatinous material fills cavities in the sediment, such as holes bored by crustaceans or molluscs and that this becomes silicified. This hypothesis would certainly explain the complex shapes of flint nodules that are found. The source of dissolved silica in the porous media could be the spicules of silicious sponges (demosponges). Certain types of flint, such as that from the south coast of England and its counterpart on the French side of the Channel, contain trapped fossilised marine flora. Pieces of coral and vegetation have been found preserved inside the flint similar to insects and plant parts within amber. Thin slices of the stone often reveal this effect. Flint sometimes occurs in large flint fields in Jurassic or Cretaceous beds, for example, in Europe. Puzzling giant flint formations known as paramoudra and flint circles are found around Europe but especially in Norfolk, England, on the beaches at Beeston Bump and West Runton. The "Ohio flint" is the official gemstone of Ohio state. It is formed from limey debris that was deposited at the bottom of inland Paleozoic seas hundreds of millions of years ago that hardened into limestone and later became infused with silica. The flint from Flint Ridge is found in many hues like red, green, pink, blue, white, and grey, with the colour variations caused by minute impurities of iron compounds. 
Flint can be coloured: sandy brown, medium to dark grey, black, reddish brown or an off-white grey. Uses Tools or cutting edges Flint was used in the manufacture of tools during the Stone Age as it splits into thin, sharp splinters called flakes or blades (depending on the shape) when struck by another hard object (such as a hammerstone made of another material). This process is referred to as knapping. Flint mining is attested from the Paleolithic, but became more common from the Neolithic onwards (Michelsberg culture, Funnelbeaker culture). In Europe, some of the best toolmaking flint has come from Belgium (Obourg, flint mines of Spiennes), the coastal chalks of the English Channel, the Paris Basin, Thy in Jutland (flint mine at Hov), the Senonian deposits of Rügen, Grimes Graves in England, the Upper Cretaceous chalk formation of Dobruja and the lower Danube (Balkan flint), the Cenomanian chalky marl formation of the Moldavian Plateau (Miorcani flint) and the Jurassic deposits of the Kraków area and Krzemionki in Poland, as well as of the Lägern (silex) in the Jura Mountains of Switzerland. In 1938, a project of the Ohio Historical Society, under the leadership of H. Holmes Ellis, began to study the knapping methods and techniques of Native Americans. Like past studies, this work involved experimenting with actual knapping by creating stone tools using techniques such as direct freehand percussion, freehand pressure and pressure using a rest. Other scholars who have conducted similar experiments and studies include William Henry Holmes, Alonzo W. Pond, Francis H. S. Knowles and Don Crabtree. To reduce susceptibility to fragmentation, flint/chert may be heat-treated, being slowly brought up to a temperature of for 24 hours, then slowly cooled to room temperature. This makes the material more homogeneous and thus more knappable, and produces tools with a cleaner, sharper cutting edge. Heat treating was known to Stone Age artisans. To ignite fire or gunpowder When struck against steel, a flint edge produces sparks. The hard flint edge shaves off a particle of the steel that exposes iron, which reacts with oxygen from the atmosphere and can ignite the proper tinder. Prior to the wide availability of steel, rocks of pyrite (FeS2) would be used along with the flint, in a similar (but more time-consuming) way. These methods remain popular in woodcraft, bushcraft, and amongst people practising traditional fire-starting skills. Flintlocks A later, major use of flint and steel was in the flintlock mechanism, used primarily in flintlock firearms, but also used on dedicated fire-starting tools. A piece of flint held in the jaws of a spring-loaded hammer, when released by a trigger, strikes a hinged piece of steel ("frizzen") at an angle, creating a shower of sparks and exposing a charge of priming powder. The sparks ignite the priming powder and that flame, in turn, ignites the main charge, propelling the ball, bullet, or shot through the barrel. While the military use of the flintlock declined after the adoption of the percussion cap from the 1840s onward, flintlock rifles and shotguns remain in use amongst recreational shooters. Comparison with ferrocerium Flint and steel used to strike sparks were superseded in the 20th century by ferrocerium (sometimes referred to as "flint", although not true flint, "mischmetal", "hot spark", "metal match", or "fire steel"). 
This human-made material, when scraped with any hard, sharp edge, produces sparks that are much hotter than those obtained with natural flint and steel, allowing use of a wider range of tinders. Because it can produce sparks when wet and can start fires when used correctly, ferrocerium is commonly included in survival kits. Ferrocerium is used in many cigarette lighters, where it is referred to as "a flint". Fragmentation Flint's utility as a fire starter is hampered by its tendency to expand unevenly when heated, which can cause it to fracture, sometimes violently. This tendency is enhanced by the impurities found in most samples of flint, which may expand to a greater or lesser degree than the surrounding stone; it is similar to the tendency of glass to shatter when exposed to heat, and can be a drawback when flint is used as a building material. As a building material Flint, knapped or unknapped, has been used from antiquity (for example at the Late Roman fort of Burgh Castle in Norfolk) up to the present day as a material for building stone walls, using lime mortar, and often combined with other available stone or brick rubble. It was most common in those parts of southern England where no good building stone was available locally, and where brick-making was not widespread until the later Middle Ages. It is especially associated with East Anglia, but is also used in chalky areas stretching through Hampshire, Sussex, Surrey and Kent to Somerset. Flint was used in the construction of many churches, houses, and other buildings, for example, the large stronghold of Framlingham Castle. Many different decorative effects have been achieved by using different types of knapping or arrangement and combinations with stone (flushwork), especially in the 15th and early 16th centuries. Because knapping flints to a relatively flush surface and size is a highly skilled process with a high level of wastage, flint finishes typically indicate high-status buildings. During World War I, in the chalky-soil country of France, the British filled sandbags with flint and used these sandbags as breastworks. Ceramics Flint pebbles are used as the media in ball mills to grind glazes and other raw materials for the ceramics industry. The pebbles are hand-selected based on colour; those having a tint of red, indicating high iron content, are discarded. The remaining blue-grey stones have a low content of chromophoric oxides and so are less deleterious to the colour of the ceramic composition after firing. Until recently, calcined flint was also an important raw material in clay-based ceramic bodies produced in the UK. In clay bodies, calcined flint attenuates the shrinkage whilst drying, and modifies the fired thermal expansion. Flint can also be used in glazes as a network former. In preparation for use, flint pebbles, frequently sourced from the coasts of South-East England or Western France, were calcined to around . This heating process both removed organic impurities and induced certain physical reactions, including converting some of the quartz to cristobalite. After calcination, the flint pebbles were crushed and milled to a fine particle size. However, the use of flint has now been superseded by quartz. Because of the historical use of flint, the word "flint" is used by some potters (especially in the U.S.) to refer generically to siliceous raw materials used in ceramics that are not flint. Jewelry Flint bracelets were known in Ancient Egypt, and several examples have been found.
Physical sciences
Sedimentary rocks
Earth science
43709
https://en.wikipedia.org/wiki/Radiation%20pressure
Radiation pressure
Radiation pressure (also known as light pressure) is mechanical pressure exerted upon a surface due to the exchange of momentum between the object and the electromagnetic field. This includes the momentum of light or electromagnetic radiation of any wavelength that is absorbed, reflected, or otherwise emitted (e.g. black-body radiation) by matter on any scale (from macroscopic objects to dust particles to gas molecules). The associated force is called the radiation pressure force, or sometimes just the force of light. The forces generated by radiation pressure are generally too small to be noticed under everyday circumstances; however, they are important in some physical processes and technologies. This particularly includes objects in outer space, where it is usually the main force acting on objects besides gravity, and where the net effect of a tiny force may have a large cumulative effect over long periods of time. For example, had the effects of the Sun's radiation pressure on the spacecraft of the Viking program been ignored, the spacecraft would have missed Mars orbit by about . Radiation pressure from starlight is crucial in a number of astrophysical processes as well. The significance of radiation pressure increases rapidly at extremely high temperatures and can sometimes dwarf the usual gas pressure, for instance, in stellar interiors and thermonuclear weapons. Furthermore, large lasers operating in space have been suggested as a means of propelling sail craft in beam-powered propulsion. Radiation pressure forces are the bedrock of laser technology and the branches of science that rely heavily on lasers and other optical technologies. That includes, but is not limited to, biomicroscopy (where light is used to irradiate and observe microbes, cells, and molecules), quantum optics, and optomechanics (where light is used to probe and control objects like atoms, qubits and macroscopic quantum objects). Direct applications of the radiation pressure force in these fields are, for example, laser cooling (the subject of the 1997 Nobel Prize in Physics), quantum control of macroscopic objects and atoms (2012 Nobel Prize in Physics), interferometry (2017 Nobel Prize in Physics) and optical tweezers (2018 Nobel Prize in Physics). Radiation pressure can equally well be accounted for by considering the momentum of a classical electromagnetic field or in terms of the momenta of photons, particles of light. The interaction of electromagnetic waves or photons with matter may involve an exchange of momentum. Due to the law of conservation of momentum, any change in the total momentum of the waves or photons must involve an equal and opposite change in the momentum of the matter it interacted with (Newton's third law of motion), as is illustrated in the accompanying figure for the case of light being perfectly reflected by a surface. This transfer of momentum is the general explanation for what we term radiation pressure. Discovery Johannes Kepler put forward the concept of radiation pressure in 1619 to explain the observation that a tail of a comet always points away from the Sun. The assertion that light, as electromagnetic radiation, has the property of momentum and thus exerts a pressure upon any surface that is exposed to it was published by James Clerk Maxwell in 1862, and proven experimentally by Russian physicist Pyotr Lebedev in 1900 and by Ernest Fox Nichols and Gordon Ferrie Hull in 1901. 
The pressure is very small, but can be detected by allowing the radiation to fall upon a delicately poised vane of reflective metal in a Nichols radiometer (this should not be confused with the Crookes radiometer, whose characteristic motion is not caused by radiation pressure but by air flow caused by temperature differentials). Theory Radiation pressure can be viewed as a consequence of the conservation of momentum given the momentum attributed to electromagnetic radiation. That momentum can be equally well calculated on the basis of electromagnetic theory or from the combined momenta of a stream of photons, giving identical results as is shown below. Radiation pressure from momentum of an electromagnetic wave According to Maxwell's theory of electromagnetism, an electromagnetic wave carries momentum. Momentum will be transferred to any surface it strikes that absorbs or reflects the radiation. Consider the momentum transferred to a perfectly absorbing (black) surface. The energy flux (irradiance) of a plane wave is calculated using the Poynting vector S = E × H, which is the cross product of the electric field vector E and the magnetic field's auxiliary field vector (or magnetizing field) H. The magnitude, denoted by S, divided by the speed of light is the density of the linear momentum per unit area (pressure) of the electromagnetic field. So, dimensionally, the Poynting vector is the speed of light times a pressure (W/m2 = m/s × Pa). That pressure is experienced as radiation pressure on the surface: P = Ef/c, where P is pressure (usually in pascals), Ef is the incident irradiance (usually in W/m2) and c is the speed of light in vacuum. Here, 1/c ≈ 3.34 N/GW. If the surface is planar at an angle α to the incident wave, the intensity across the surface will be geometrically reduced by the cosine of that angle and the component of the radiation force against the surface will also be reduced by the cosine of α, resulting in a pressure: P = (Ef/c) cos²α. The momentum from the incident wave is in the same direction as that wave. But only the component of that momentum normal to the surface contributes to the pressure on the surface, as given above. The component of that force tangent to the surface is not called pressure. Radiation pressure from reflection The above treatment for an incident wave accounts for the radiation pressure experienced by a black (totally absorbing) body. If the wave is specularly reflected, then the recoil due to the reflected wave will further contribute to the radiation pressure. In the case of a perfect reflector, this pressure will be identical to the pressure caused by the incident wave, thus doubling the net radiation pressure on the surface: P = 2 (Ef/c) cos²α. For a partially reflective surface, the second term must be multiplied by the reflectivity (also known as reflection coefficient of intensity), so that the increase is less than double. For a diffusely reflective surface, the details of the reflection and geometry must be taken into account, again resulting in an increased net radiation pressure of less than double. Radiation pressure by emission Just as a wave reflected from a body contributes to the net radiation pressure experienced, a body that emits radiation of its own (rather than reflected) obtains a radiation pressure again given by the irradiance of that emission in the direction normal to the surface, Ie: Pe = Ie/c. The emission can be from black-body radiation or any other radiative mechanism.
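The relations above for absorbing and reflecting surfaces reduce to a few lines of arithmetic. The following Python sketch is not part of the original article; the function name and the 1 kW/m2 sample irradiance are illustrative. It evaluates P = Ef/c, the doubling for a perfect mirror, and the cos²α reduction for a tilted surface.

import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def radiation_pressure(irradiance_w_m2, reflectivity=0.0, angle_rad=0.0):
    """Pressure (Pa) on a flat surface from a plane wave of irradiance Ef.
    reflectivity = 0 gives a black absorber (P = Ef/c); reflectivity = 1 a
    perfect specular reflector (P = 2 Ef/c); tilting by angle_rad reduces
    the pressure by cos^2(angle)."""
    normal = (1.0 + reflectivity) * irradiance_w_m2 / C
    return normal * math.cos(angle_rad) ** 2

print(radiation_pressure(1.0e3))            # ~3.3e-6 Pa for 1 kW/m^2 on a black surface
print(radiation_pressure(1.0e3, 1.0))       # ~6.7e-6 Pa on a perfect mirror
print(radiation_pressure(1.0e3, 1.0, 0.5))  # tilted mirror, reduced by cos^2(0.5 rad)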
Since all materials emit black-body radiation (unless they are totally reflective or at absolute zero), this source for radiation pressure is ubiquitous but usually tiny. However, because black-body radiation increases rapidly with temperature (as the fourth power of temperature, given by the Stefan–Boltzmann law), radiation pressure due to the temperature of a very hot object (or due to incoming black-body radiation from similarly hot surroundings) can become significant. This is important in stellar interiors. Radiation pressure in terms of photons Electromagnetic radiation can be viewed in terms of particles rather than waves; these particles are known as photons. Photons do not have a rest-mass; however, photons are never at rest (they move at the speed of light) and acquire a momentum nonetheless which is given by: where is momentum, is the Planck constant, is wavelength, and is speed of light in vacuum. And is the energy of a single photon given by: The radiation pressure again can be seen as the transfer of each photon's momentum to the opaque surface, plus the momentum due to a (possible) recoil photon for a (partially) reflecting surface. Since an incident wave of irradiance over an area has a power of , this implies a flux of photons per second per unit area striking the surface. Combining this with the above expression for the momentum of a single photon, results in the same relationships between irradiance and radiation pressure described above using classical electromagnetics. And again, reflected or otherwise emitted photons will contribute to the net radiation pressure identically. Compression in a uniform radiation field In general, the pressure of electromagnetic waves can be obtained from the vanishing of the trace of the electromagnetic stress tensor: since this trace equals 3P − u, we get where is the radiation energy per unit volume. This can also be shown in the specific case of the pressure exerted on surfaces of a body in thermal equilibrium with its surroundings, at a temperature : the body will be surrounded by a uniform radiation field described by the Planck black-body radiation law and will experience a compressive pressure due to that impinging radiation, its reflection, and its own black-body emission. From that it can be shown that the resulting pressure is equal to one third of the total radiant energy per unit volume in the surrounding space. By using Stefan–Boltzmann law, this can be expressed as where is the Stefan–Boltzmann constant. Solar radiation pressure Solar radiation pressure is due to the Sun's radiation at closer distances, thus especially within the Solar System. While it acts on all objects, its net effect is generally greater on smaller bodies, since they have a larger ratio of surface area to mass. All spacecraft experience such a pressure, except when they are behind the shadow of a larger orbiting body. Solar radiation pressure on objects near the Earth may be calculated using the Sun's irradiance at 1 AU, known as the solar constant, or GSC, whose value is set at 1361 W/m2 as of 2011. All stars have a spectral energy distribution that depends on their surface temperature. The distribution is approximately that of black-body radiation. This distribution must be taken into account when calculating the radiation pressure or identifying reflector materials for optimizing a solar sail, for instance. 
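As a check on the equivalence of the wave and photon pictures described above, and on the isotropic-field result P = u/3, the short Python sketch below recomputes the beam pressure photon by photon and evaluates the black-body pressure at a sample temperature. This is a minimal sketch: the constants are standard values and the sample wavelength and temperature are merely illustrative.

H = 6.626_070_15e-34      # Planck constant, J s
C = 299_792_458.0         # speed of light, m/s
SIGMA = 5.670_374_419e-8  # Stefan–Boltzmann constant, W m^-2 K^-4

def pressure_from_photons(irradiance_w_m2, wavelength_m):
    """Absorbed-beam pressure computed photon by photon: (photon flux) x (h/lambda)."""
    photon_energy = H * C / wavelength_m           # E = h c / lambda
    photon_flux = irradiance_w_m2 / photon_energy  # photons per m^2 per second
    return photon_flux * (H / wavelength_m)        # each photon carries p = h/lambda

def isotropic_blackbody_pressure(temperature_k):
    """P = u/3 with u = 4*sigma*T^4/c for a uniform thermal radiation field."""
    u = 4.0 * SIGMA * temperature_k ** 4 / C
    return u / 3.0

print(pressure_from_photons(1361.0, 500e-9))  # ~4.5e-6 Pa, same as Ef/c
print(1361.0 / C)                             # ~4.5e-6 Pa, wave-picture result
print(isotropic_blackbody_pressure(5772.0))   # ~0.3 Pa for a ~5772 K thermal field (illustrative)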
Momentary or hours-long solar pressures can indeed escalate due to the release of solar flares and coronal mass ejections, but the effects remain essentially immeasurable in relation to Earth's orbit. However, these pressures persist over eons, and cumulatively they have produced a measurable movement of the Earth–Moon system's orbit. Pressures of absorption and reflection Solar radiation pressure at the Earth's distance from the Sun may be calculated by dividing the solar constant GSC (above) by the speed of light c. For an absorbing sheet facing the Sun, this is simply: P = GSC/c ≈ 4.5 μPa. This result is in pascals, equivalent to N/m2 (newtons per square meter). For a sheet at an angle α to the Sun, the effective area A of a sheet is reduced by a geometrical factor, resulting in a force in the direction of the sunlight of: F = (GSC/c) A cos α. To find the component of this force normal to the surface, another cosine factor must be applied, resulting in a pressure P on the surface of: P = (GSC/c) cos²α. Note, however, that in order to account for the net effect of solar radiation on a spacecraft for instance, one would need to consider the total force (in the direction away from the Sun) given by the preceding equation, rather than just the component normal to the surface that we identify as "pressure". The solar constant is defined for the Sun's radiation at the distance to the Earth, also known as one astronomical unit (au). Consequently, at a distance of R astronomical units (R thus being dimensionless), applying the inverse-square law, we would find: P = (GSC/c) cos²α / R². Finally, considering not an absorbing but a perfectly reflecting surface, the pressure is doubled due to the reflected wave, resulting in: P = 2 (GSC/c) cos²α / R². Note that unlike the case of an absorbing material, the resulting force on a reflecting body is given exactly by this pressure acting normal to the surface, with the tangential forces from the incident and reflecting waves canceling each other. In practice, materials are neither totally reflecting nor totally absorbing, so the resulting force will be a weighted average of the forces calculated using these formulas. Radiation pressure perturbations Solar radiation pressure is a source of orbital perturbations. It significantly affects the orbits and trajectories of small bodies including all spacecraft. Solar radiation pressure affects bodies throughout much of the Solar System. Small bodies are more affected than large ones because of their lower mass relative to their surface area. Spacecraft are affected along with natural bodies (comets, asteroids, dust grains, gas molecules). The radiation pressure results in forces and torques on the bodies that can change their translational and rotational motions. Translational changes affect the orbits of the bodies. Rotational rates may increase or decrease. Loosely aggregated bodies may break apart under high rotation rates. Dust grains can either leave the Solar System or spiral into the Sun. A whole body is typically composed of numerous surfaces that have different orientations on the body. The facets may be flat or curved. They will have different areas. They may have optical properties differing from those of other facets. At any particular time, some facets are exposed to the Sun, and some are in shadow. Each surface exposed to the Sun is reflecting, absorbing, and emitting radiation. Facets in shadow are emitting radiation. The summation of pressures across all of the facets defines the net force and torque on the body. These can be calculated using the equations in the preceding sections.
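A minimal numerical sketch of the absorption and reflection pressures just derived follows, taking GSC = 1361 W/m2 as quoted above; the sail area and distances in the example calls are invented for illustration.

import math

C = 299_792_458.0   # m/s
GSC = 1361.0        # solar constant, W/m^2 at 1 au (value quoted in the text)

def solar_pressure(r_au=1.0, angle_rad=0.0, reflecting=False):
    """Radiation pressure (Pa) on a flat sheet r_au from the Sun, tilted by angle_rad."""
    p = (GSC / C) * math.cos(angle_rad) ** 2 / r_au ** 2
    return 2.0 * p if reflecting else p

print(solar_pressure())                       # ~4.5e-6 Pa, absorbing sheet at 1 au
print(solar_pressure(reflecting=True))        # ~9.1e-6 Pa, perfect reflector
print(solar_pressure(r_au=1.5))               # weaker by 1/R^2 at 1.5 au
print(solar_pressure(reflecting=True) * 100)  # ~9e-4 N total force on a 100 m^2 sail (assumed area)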
The Yarkovsky effect affects the translation of a small body. It results from a face leaving solar exposure being at a higher temperature than a face approaching solar exposure. The radiation emitted from the warmer face is more intense than that of the opposite face, resulting in a net force on the body that affects its motion. The YORP effect is a collection of effects expanding upon the earlier concept of the Yarkovsky effect, but of a similar nature. It affects the spin properties of bodies. The Poynting–Robertson effect applies to grain-size particles. From the perspective of a grain of dust circling the Sun, the Sun's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. (The angle of aberration is tiny, since the radiation is moving at the speed of light, while the dust grain is moving many orders of magnitude slower than that.) The result is a gradual spiral of dust grains into the Sun. Over long periods of time, this effect cleans out much of the dust in the Solar System. While rather small in comparison to other forces, the radiation pressure force is inexorable. Over long periods of time, the net effect of the force is substantial. Such feeble pressures can produce marked effects upon minute particles like gas ions and electrons, and are essential in the theory of electron emission from the Sun, of cometary material, and so on. Because the ratio of surface area to volume (and thus mass) increases with decreasing particle size, dusty (micrometre-size) particles are susceptible to radiation pressure even in the outer Solar System. For example, the evolution of the outer rings of Saturn is significantly influenced by radiation pressure. As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction", which would oppose the movement of matter. He wrote: "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief." Solar sails Solar sailing, an experimental method of spacecraft propulsion, uses radiation pressure from the Sun as a motive force. The idea of interplanetary travel by light was mentioned by Jules Verne in his 1865 novel From the Earth to the Moon. A sail reflects about 90% of the incident radiation. The 10% that is absorbed is radiated away from both surfaces, with the proportion emitted from the unlit surface depending on the thermal conductivity of the sail. A sail has curvature, surface irregularities, and other minor factors that affect its performance. The Japan Aerospace Exploration Agency (JAXA) has successfully unfurled a solar sail in space, which has already succeeded in propelling its payload with the IKAROS project. 
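To give a feel for the magnitudes involved in solar sailing, the sketch below estimates the acceleration of a flat, mostly reflective sail from the pressures derived earlier. The sail area, mass, and 90% reflectivity are assumptions chosen to be roughly IKAROS-like; they are not figures taken from the article.

C = 299_792_458.0
GSC = 1361.0  # W/m^2 at 1 au

def sail_acceleration(area_m2, mass_kg, reflectivity=0.9, r_au=1.0):
    """Acceleration (m/s^2) of a flat sail facing the Sun at r_au."""
    pressure = (1.0 + reflectivity) * (GSC / C) / r_au ** 2
    return pressure * area_m2 / mass_kg

a = sail_acceleration(area_m2=196.0, mass_kg=315.0)  # roughly IKAROS-like numbers (assumed)
print(a)               # ~5e-6 m/s^2
print(a * 86400 * 30)  # ~14 m/s of accumulated velocity change per month, order of magnitude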
Cosmic effects of radiation pressure Radiation pressure has had a major effect on the development of the cosmos, from the birth of the universe to ongoing formation of stars and shaping of clouds of dust and gasses on a wide range of scales. Early universe The photon epoch is a phase when the energy of the universe was dominated by photons, between 10 seconds and 380,000 years after the Big Bang. Galaxy formation and evolution The process of galaxy formation and evolution began early in the history of the cosmos. Observations of the early universe strongly suggest that objects grew from bottom-up (i.e., smaller objects merging to form larger ones). As stars are thereby formed and become sources of electromagnetic radiation, radiation pressure from the stars becomes a factor in the dynamics of remaining circumstellar material. Clouds of dust and gases The gravitational compression of clouds of dust and gases is strongly influenced by radiation pressure, especially when the condensations lead to star births. The larger young stars forming within the compressed clouds emit intense levels of radiation that shift the clouds, causing either dispersion or condensations in nearby regions, which influences birth rates in those nearby regions. Clusters of stars Stars predominantly form in regions of large clouds of dust and gases, giving rise to star clusters. Radiation pressure from the member stars eventually disperses the clouds, which can have a profound effect on the evolution of the cluster. Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal. Star formation Star formation is the process by which dense regions within molecular clouds in interstellar space collapse to form stars. As a branch of astronomy, star formation includes the study of the interstellar medium and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function. Stellar planetary systems Planetary systems are generally believed to form as part of the same process that results in star formation. A protoplanetary disk forms by gravitational collapse of a molecular cloud, called a solar nebula, and then evolves into a planetary system by collisions and gravitational capture. Radiation pressure can clear a region in the immediate vicinity of the star. As the formation process continues, radiation pressure continues to play a role in affecting the distribution of matter. In particular, dust and grains can spiral into the star or escape the stellar system under the action of radiation pressure. Stellar interiors In stellar interiors the temperatures are very high. Stellar models predict a temperature of 15 MK in the center of the Sun, and at the cores of supergiant stars the temperature may exceed 1 GK. As the radiation pressure scales as the fourth power of the temperature, it becomes important at these high temperatures. In the Sun, radiation pressure is still quite small when compared to the gas pressure. 
In the heaviest non-degenerate stars, radiation pressure is the dominant pressure component. Comets Solar radiation pressure strongly affects comet tails. Solar heating causes gases to be released from the comet nucleus, which also carry away dust grains. Radiation pressure and solar wind then drive the dust and gases away from the Sun's direction. The gases form a generally straight tail, while slower moving dust particles create a broader, curving tail. Laser applications of radiation pressure Optical tweezers Lasers can be used as a source of monochromatic light with wavelength λ. With a set of lenses, one can focus the laser beam to a spot on the order of a micrometre in diameter. The radiation pressure of a P = 30 mW laser with λ = 1064 nm can therefore be computed from the spot area and the force P/c; a numerical sketch is given below. This is used to trap or levitate particles in optical tweezers. Light–matter interactions The reflection of a laser pulse from the surface of an elastic solid can give rise to various types of elastic waves that propagate inside the solid or liquid. In other words, the light can excite and/or amplify motion of, and in, materials. This is the subject of study in the field of optomechanics. The weakest waves are generally those that are generated by the radiation pressure acting during the reflection of the light. Such light-pressure-induced elastic waves have, for example, been observed inside an ultrahigh-reflectivity dielectric mirror. These waves are the most basic fingerprint of a light–solid matter interaction on the macroscopic scale. In the field of cavity optomechanics, light is trapped and resonantly enhanced in optical cavities, for example between mirrors. This serves the purpose of greatly enhancing the power of the light, and the radiation pressure it can exert on objects and materials. Optical control (that is, manipulation of the motion) of a plethora of objects has been realized: from kilometers-long beams (such as in the LIGO interferometer) to clouds of atoms, and from micro-engineered trampolines to superfluids. Opposite to exciting or amplifying motion, light can also damp the motion of objects. Laser cooling is a method of cooling materials very close to absolute zero by converting some of the material's motional energy into light. Kinetic energy and thermal energy of the material are synonyms here, because they represent the energy associated with Brownian motion of the material. Atoms traveling towards a laser light source perceive the light Doppler-shifted towards the absorption frequency of the target element. The radiation pressure on the atom slows its movement in that direction until the Doppler shift moves the light out of the element's absorption range, causing an overall cooling effect. Another active research area of laser–matter interaction is the radiation pressure acceleration of ions or protons from thin-foil targets. High-energy ion beams can be generated for medical applications (for example in ion beam therapy) by the radiation pressure of short laser pulses on ultra-thin foils.
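The optical-tweezers estimate mentioned above can be carried out explicitly. In the sketch below the 1 μm focal-spot diameter is an assumed, illustrative value, and the force is taken as the momentum flux P/c of a fully absorbed beam.

import math

C = 299_792_458.0  # m/s

laser_power = 30e-3    # W
wavelength = 1064e-9   # m (shown for context; the pressure depends only on power and spot size)
spot_diameter = 1e-6   # m, assumed focal-spot size

area = math.pi * (spot_diameter / 2) ** 2  # ~7.9e-13 m^2
force = laser_power / C                    # ~1.0e-10 N for a fully absorbed beam
pressure = force / area                    # ~1.3e2 Pa

print(f"area = {area:.2e} m^2, force = {force:.2e} N, pressure = {pressure:.2e} Pa")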
Physical sciences
Classical mechanics
Physics
43710
https://en.wikipedia.org/wiki/Silicon%20dioxide
Silicon dioxide
Silicon dioxide, also known as silica, is an oxide of silicon with the chemical formula , commonly found in nature as quartz. In many parts of the world, silica is the major constituent of sand. Silica is one of the most complex and abundant families of materials, existing as a compound of several minerals and as a synthetic product. Examples include fused quartz, fumed silica, opal, and aerogels. It is used in structural materials, microelectronics, and as components in the food and pharmaceutical industries. All forms are white or colorless, although impure samples can be colored. Silicon dioxide is a common fundamental constituent of glass. Structure In the majority of silicon dioxides, the silicon atom shows tetrahedral coordination, with four oxygen atoms surrounding a central Si atom (see 3-D Unit Cell). Thus, SiO2 forms 3-dimensional network solids in which each silicon atom is covalently bonded in a tetrahedral manner to 4 oxygen atoms. In contrast, CO2 is a linear molecule. The starkly different structures of the dioxides of carbon and silicon are a manifestation of the double bond rule. Based on the crystal structural differences, silicon dioxide can be divided into two categories: crystalline and non-crystalline (amorphous). In crystalline form, this substance can be found naturally occurring as quartz, tridymite (high-temperature form), cristobalite (high-temperature form), stishovite (high-pressure form), and coesite (high-pressure form). On the other hand, amorphous silica can be found in nature as opal and diatomaceous earth. Quartz glass is a form of intermediate state between these structures. All of these distinct crystalline forms always have the same local structure around Si and O. In α-quartz the Si–O bond length is 161 pm, whereas in α-tridymite it is in the range 154–171 pm. The Si–O–Si angle also varies between a low value of 140° in α-tridymite, up to 180° in β-tridymite. In α-quartz, the Si–O–Si angle is 144°. Polymorphism Alpha quartz is the most stable form of solid SiO2 at room temperature. The high-temperature minerals, cristobalite and tridymite, have both lower densities and indices of refraction than quartz. The transformation from α-quartz to beta-quartz takes place abruptly at 573 °C. Since the transformation is accompanied by a significant change in volume, it can easily induce fracturing of ceramics or rocks passing through this temperature limit. The high-pressure minerals, seifertite, stishovite, and coesite, though, have higher densities and indices of refraction than quartz. Stishovite has a rutile-like structure where silicon is 6-coordinate. The density of stishovite is 4.287 g/cm3, which compares to α-quartz, the densest of the low-pressure forms, which has a density of 2.648 g/cm3. The difference in density can be ascribed to the increase in coordination as the six shortest Si–O bond lengths in stishovite (four Si–O bond lengths of 176 pm and two others of 181 pm) are greater than the Si–O bond length (161 pm) in α-quartz. The change in the coordination increases the ionicity of the Si–O bond. Faujasite silica, another polymorph, is obtained by the dealumination of a low-sodium, ultra-stable Y zeolite with combined acid and thermal treatment. The resulting product contains over 99% silica, and has high crystallinity and specific surface area (over 800 m2/g). Faujasite-silica has very high thermal and acid stability. 
For example, it maintains a high degree of long-range molecular order or crystallinity even after boiling in concentrated hydrochloric acid. Molten SiO2 Molten silica exhibits several peculiar physical characteristics that are similar to those observed in liquid water: negative thermal expansion, a density maximum at temperatures ~5000 °C, and a heat capacity minimum. Its density decreases from 2.08 g/cm3 at 1950 °C to 2.03 g/cm3 at 2200 °C. Molecular SiO2 The molecular SiO2 has a linear structure, like CO2. It has been produced by combining silicon monoxide (SiO) with oxygen in an argon matrix. The dimeric silicon dioxide, (SiO2)2, has been obtained by reacting O2 with matrix-isolated dimeric silicon monoxide, (Si2O2). In dimeric silicon dioxide there are two oxygen atoms bridging between the silicon atoms, with an Si–O–Si angle of 94° and a bond length of 164.6 pm, and the terminal Si–O bond length is 150.2 pm. In the monomeric form, the Si–O bond length is 148.3 pm, which compares with the length of 161 pm in α-quartz. The bond energy is estimated at 621.7 kJ/mol. Natural occurrence Geology SiO2 is most commonly encountered in nature as quartz, which comprises more than 10% by mass of the Earth's crust. Quartz is the only polymorph of silica stable at the Earth's surface. Metastable occurrences of the high-pressure forms coesite and stishovite have been found around impact structures and associated with eclogites formed during ultra-high-pressure metamorphism. The high-temperature forms of tridymite and cristobalite are known from silica-rich volcanic rocks. In many parts of the world, silica is the major constituent of sand. Biology Even though it is poorly soluble, silica occurs in many plants such as rice. Plant materials with high silica phytolith content appear to be of importance to grazing animals, from chewing insects to ungulates. Silica accelerates tooth wear, and high levels of silica in plants frequently eaten by herbivores may have developed as a defense mechanism against predation. Silica is also the primary component of rice husk ash, which is used, for example, in filtration and as supplementary cementitious material (SCM) in cement and concrete manufacturing. Silicification in and by cells has been common in the biological world and it occurs in bacteria, protists, plants, and animals (invertebrates and vertebrates). Prominent examples include: Tests or frustules (i.e. shells) of diatoms, Radiolaria, and testate amoebae. Silica phytoliths in the cells of many plants including Equisetaceae, many grasses, and a wide range of dicotyledons. The spicules forming the skeleton of many sponges. Uses Structural use About 95% of the commercial use of silicon dioxide (sand) is in the construction industry, e.g. in the production of concrete (Portland cement concrete). Certain deposits of silica sand, with desirable particle size and shape and desirable clay and other mineral content, were important for sand casting of metallic products. The high melting point of silica enables it to be used in applications such as iron casting; modern sand casting sometimes uses other minerals for other reasons. Crystalline silica is used in hydraulic fracturing of formations which contain tight oil and shale gas. Precursor to glass and silicon Silica is the primary ingredient in the production of most glass. As other minerals are melted with silica, the principle of freezing-point depression lowers the melting point of the mixture and increases fluidity. The glass transition temperature of pure SiO2 is about 1475 K.
When molten silicon dioxide SiO2 is rapidly cooled, it does not crystallize, but solidifies as a glass. Because of this, most ceramic glazes have silica as the main ingredient. The structural geometry of silicon and oxygen in glass is similar to that in quartz and most other crystalline forms of silicon and oxygen, with silicon surrounded by regular tetrahedra of oxygen centres. The difference between the glass and crystalline forms arises from the connectivity of the tetrahedral units: Although there is no long-range periodicity in the glassy network, ordering remains at length scales well beyond the SiO bond length. One example of this ordering is the preference to form rings of 6-tetrahedra. The majority of optical fibers for telecommunications are also made from silica. It is a primary raw material for many ceramics such as earthenware, stoneware, and porcelain. Silicon dioxide is used to produce elemental silicon. The process involves carbothermic reduction in an electric arc furnace: SiO2 + 2 C -> Si + 2 CO Fumed silica Fumed silica, also known as pyrogenic silica, is prepared by burning SiCl4 in an oxygen-rich hydrogen flame to produce a "smoke" of SiO2. SiCl4 + 2 H2 + O2 -> SiO2 + 4 HCl It can also be produced by vaporizing quartz sand in a 3000 °C electric arc. Both processes result in microscopic droplets of amorphous silica fused into branched, chainlike, three-dimensional secondary particles which then agglomerate into tertiary particles, a white powder with extremely low bulk density (0.03-0.15 g/cm3) and thus high surface area. The particles act as a thixotropic thickening agent, or as an anti-caking agent, and can be treated to make them hydrophilic or hydrophobic for either water or organic liquid applications. Silica fume is an ultrafine powder collected as a by-product of the silicon and ferrosilicon alloy production. It consists of amorphous (non-crystalline) spherical particles with an average particle diameter of 150 nm, without the branching of the pyrogenic product. The main use is as pozzolanic material for high performance concrete. Fumed silica nanoparticles can be successfully used as an anti-aging agent in asphalt binders. Food, cosmetic, and pharmaceutical applications Silica, either colloidal, precipitated, or pyrogenic fumed, is a common additive in food production. It is used primarily as a flow or anti-caking agent in powdered foods such as spices and non-dairy coffee creamer, or powders to be formed into pharmaceutical tablets. It can adsorb water in hygroscopic applications. Colloidal silica is used as a fining agent for wine, beer, and juice, with the E number reference E551. In cosmetics, silica is useful for its light-diffusing properties and natural absorbency. Diatomaceous earth, a mined product, has been used in food and cosmetics for centuries. It consists of the silica shells of microscopic diatoms; in a less processed form it was sold as "tooth powder". Manufactured or mined hydrated silica is used as the hard abrasive in toothpaste. Semiconductors Silicon dioxide is widely used in the semiconductor technology: for the primary passivation (directly on the semiconductor surface), as an original gate dielectric in MOS technology. 
Today when scaling (dimension of the gate length of the MOS transistor) has progressed below 10 nm, silicon dioxide has been replaced by other dielectric materials like hafnium oxide or similar with higher dielectric constant compared to silicon dioxide, as a dielectric layer between metal (wiring) layers (sometimes up to 8–10) connecting elements and as a second passivation layer (for protecting semiconductor elements and the metallization layers) typically today layered with some other dielectrics like silicon nitride. Because silicon dioxide is a native oxide of silicon it is more widely used compared to other semiconductors like gallium arsenide or indium phosphide. Silicon dioxide could be grown on a silicon semiconductor surface. Silicon oxide layers could protect silicon surfaces during diffusion processes, and could be used for diffusion masking. Surface passivation is the process by which a semiconductor surface is rendered inert, and does not change semiconductor properties as a result of interaction with air or other materials in contact with the surface or edge of the crystal. The formation of a thermally grown silicon dioxide layer greatly reduces the concentration of electronic states at the silicon surface. SiO2 films preserve the electrical characteristics of p–n junctions and prevent these electrical characteristics from deteriorating by the gaseous ambient environment. Silicon oxide layers could be used to electrically stabilize silicon surfaces. The surface passivation process is an important method of semiconductor device fabrication that involves coating a silicon wafer with an insulating layer of silicon oxide so that electricity could reliably penetrate to the conducting silicon below. Growing a layer of silicon dioxide on top of a silicon wafer enables it to overcome the surface states that otherwise prevent electricity from reaching the semiconducting layer. The process of silicon surface passivation by thermal oxidation (silicon dioxide) is critical to the semiconductor industry. It is commonly used to manufacture metal–oxide–semiconductor field-effect transistors (MOSFETs) and silicon integrated circuit chips (with the planar process). Other Hydrophobic silica is used as a defoamer component. In its capacity as a refractory, it is useful in fiber form as a high-temperature thermal protection fabric. Silica is used in the extraction of DNA and RNA due to its ability to bind to the nucleic acids under the presence of chaotropes. Silica aerogel was used in the Stardust spacecraft to collect extraterrestrial particles. Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fibre for fibreglass. Production Silicon dioxide is mostly obtained by mining, including sand mining and purification of quartz. Quartz is suitable for many purposes, while chemical processing is required to make a purer or otherwise more suitable (e.g. more reactive or fine-grained) product. Precipitated silica Precipitated silica or amorphous silica is produced by the acidification of solutions of sodium silicate. The gelatinous precipitate or silica gel, is first washed and then dehydrated to produce colorless microporous silica. The idealized equation involving a trisilicate and sulfuric acid is: Na2Si3O7 + H2SO4 -> 3 SiO2 + Na2SO4 + H2O Approximately one billion kilograms/year (1999) of silica were produced in this manner, mainly for use for polymer composites – tires and shoe soles. 
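The idealized precipitation equation above fixes the mass yield of silica per unit of sodium trisilicate. A short arithmetic sketch follows; the molar masses are standard values and the 1 kg feed amount is an arbitrary example.

# Stoichiometry sketch for the idealized precipitated-silica reaction quoted above:
#   Na2Si3O7 + H2SO4 -> 3 SiO2 + Na2SO4 + H2O

M = {"Na": 22.990, "Si": 28.086, "O": 15.999, "H": 1.008, "S": 32.06}  # g/mol

m_trisilicate = 2 * M["Na"] + 3 * M["Si"] + 7 * M["O"]  # Na2Si3O7, ~242.2 g/mol
m_silica = M["Si"] + 2 * M["O"]                          # SiO2, ~60.1 g/mol

feed_kg = 1.0                            # kg of sodium trisilicate (example)
moles = feed_kg * 1000 / m_trisilicate   # mol of Na2Si3O7
silica_kg = 3 * moles * m_silica / 1000  # 3 mol SiO2 produced per mol trisilicate

print(f"{silica_kg:.3f} kg of SiO2 per kg of Na2Si3O7 (idealized)")  # ~0.744 kg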
On microchips Thin films of silica grow spontaneously on silicon wafers via thermal oxidation, producing a very shallow layer of about 1 nm or 10 Å of so-called native oxide. Higher temperatures and alternative environments are used to grow well-controlled layers of silicon dioxide on silicon, for example at temperatures between 600 and 1200 °C, using so-called dry oxidation with O2 Si + O2 -> SiO2 or wet oxidation with H2O. Si + 2 H2O -> SiO2 + 2 H2 The native oxide layer is beneficial in microelectronics, where it acts as electric insulator with high chemical stability. It can protect the silicon, store charge, block current, and even act as a controlled pathway to limit current flow. Laboratory or special methods From organosilicon compounds Many routes to silicon dioxide start with an organosilicon compound, e.g., HMDSO, TEOS. Synthesis of silica is illustrated below using tetraethyl orthosilicate (TEOS). Simply heating TEOS at 680–730 °C results in the oxide: Si(OC2H5)4 -> SiO2 + 2 O(C2H5)2 Similarly TEOS combusts around 400 °C: Si(OC2H5)4 + 12 O2 -> SiO2 + 10 H2O + 8 CO2 TEOS undergoes hydrolysis via the so-called sol-gel process. The course of the reaction and nature of the product are affected by catalysts, but the idealized equation is: Si(OC2H5)4 + 2 H2O -> SiO2 + 4 HOCH2CH3 Other methods Being highly stable, silicon dioxide arises from many methods. Conceptually simple, but of little practical value, combustion of silane gives silicon dioxide. This reaction is analogous to the combustion of methane: SiH4 + 2 O2 -> SiO2 + 2 H2O However the chemical vapor deposition of silicon dioxide onto crystal surface from silane had been used using nitrogen as a carrier gas at 200–500 °C. Chemical reactions Silicon dioxide is a relatively inert material (hence its widespread occurrence as a mineral). Silica is often used as inert containers for chemical reactions. At high temperatures, it is converted to silicon by reduction with carbon. Fluorine reacts with silicon dioxide to form SiF4 and O2 whereas the other halogen gases (Cl2, Br2, I2) are unreactive. Most forms of silicon dioxide are attacked ("etched") by hydrofluoric acid (HF) to produce hexafluorosilicic acid: Stishovite does not react to HF to any significant degree. HF is used to remove or pattern silicon dioxide in the semiconductor industry. Silicon dioxide acts as a Lux–Flood acid, being able to react with bases under certain conditions. As it does not contain any hydrogen, non-hydrated silica cannot directly act as a Brønsted–Lowry acid. While silicon dioxide is only poorly soluble in water at low or neutral pH (typically, 2 × 10−4 M for quartz up to 10−3 M for cryptocrystalline chalcedony), strong bases react with glass and easily dissolve it. Therefore, strong bases have to be stored in plastic bottles to avoid jamming the bottle cap, to preserve the integrity of the recipient, and to avoid undesirable contamination by silicate anions. Silicon dioxide dissolves in hot concentrated alkali or fused hydroxide, as described in this idealized equation: SiO2 + 2 NaOH -> Na2SiO3 + H2O Silicon dioxide will neutralise basic metal oxides (e.g. sodium oxide, potassium oxide, lead(II) oxide, zinc oxide, or mixtures of oxides, forming silicates and glasses as the Si-O-Si bonds in silica are broken successively). As an example the reaction of sodium oxide and SiO2 can produce sodium orthosilicate, sodium silicate, and glasses, dependent on the proportions of reactants: 2 Na2O + SiO2 -> Na4SiO4; Na2O + SiO2 -> Na2SiO3; Na2O + SiO2 -> glass. 
Examples of such glasses have commercial significance, e.g. soda–lime glass, borosilicate glass, lead glass. In these glasses, silica is termed the network former or lattice former. The reaction is also used in blast furnaces to remove sand impurities in the ore by neutralisation with calcium oxide, forming calcium silicate slag. Silicon dioxide reacts in heated reflux under dinitrogen with ethylene glycol and an alkali metal base to produce highly reactive, pentacoordinate silicates which provide access to a wide variety of new silicon compounds. The silicates are essentially insoluble in all polar solvent except methanol. Silicon dioxide reacts with elemental silicon at high temperatures to produce SiO: SiO2 + Si -> 2 SiO Water solubility The solubility of silicon dioxide in water strongly depends on its crystalline form and is three to four times higher for amorphous silica than quartz; as a function of temperature, it peaks around . This property is used to grow single crystals of quartz in a hydrothermal process where natural quartz is dissolved in superheated water in a pressure vessel that is cooler at the top. Crystals of 0.5–1  kg can be grown for 1–2 months. These crystals are a source of very pure quartz for use in electronic applications. Above the critical temperature of water and a pressure of or higher, water is a supercritical fluid and solubility is once again higher than at lower temperatures. Health effects Silica ingested orally is essentially nontoxic, with an of 5000 mg/kg (5 g/kg). A 2008 study following subjects for 15 years found that higher levels of silica in water appeared to decrease the risk of dementia. An increase of 10 mg/day of silica in drinking water was associated with a reduced risk of dementia of 11%. Inhaling finely divided crystalline silica dust can lead to silicosis, bronchitis, or lung cancer, as the dust becomes lodged in the lungs and continuously irritates the tissue, reducing lung capacities. When fine silica particles are inhaled in large enough quantities (such as through occupational exposure), it increases the risk of systemic autoimmune diseases such as lupus and rheumatoid arthritis compared to expected rates in the general population. Occupational hazard Silica is an occupational hazard for people who do sandblasting or work with powdered crystalline silica products. Amorphous silica, such as fumed silica, may cause irreversible lung damage in some cases but is not associated with the development of silicosis. Children, asthmatics of any age, those with allergies, and the elderly (all of whom have reduced lung capacity) can be affected in less time. Crystalline silica is an occupational hazard for those working with stone countertops because the process of cutting and installing the countertops creates large amounts of airborne silica. Crystalline silica used in hydraulic fracturing presents a health hazard to workers. Pathophysiology In the body, crystalline silica particles do not dissolve over clinically relevant periods. Silica crystals inside the lungs can activate the NLRP3 inflammasome inside macrophages and dendritic cells and thereby result in production of interleukin, a highly pro-inflammatory cytokine in the immune system. Regulation Regulations restricting silica exposure 'with respect to the silicosis hazard' specify that they are concerned only with silica, which is both crystalline and dust-forming. In 2013, the U.S. Occupational Safety and Health Administration reduced the exposure limit to 50 μg/m3 of air. 
Prior to 2013, it had allowed 100 μg/m3 and, for construction workers, even 250 μg/m3. In 2013, OSHA also required the "green completion" of fracked wells to reduce exposure to crystalline silica, in addition to restricting the exposure limit. Crystalline forms SiO2, more so than almost any material, exists in many crystalline forms. These forms are called polymorphs. Safety Inhaling finely divided crystalline silica can lead to severe inflammation of the lung tissue, silicosis, bronchitis, lung cancer, and systemic autoimmune diseases, such as lupus and rheumatoid arthritis. Inhalation of amorphous silicon dioxide in high doses leads to non-permanent, short-term inflammation from which all effects heal. Other names This extended list enumerates synonyms for silicon dioxide; all of these values are from a single source; values in the source were presented capitalized.
Physical sciences
Ceramic compounds
Chemistry
43717
https://en.wikipedia.org/wiki/Prisoner%27s%20dilemma
Prisoner's dilemma
The prisoner's dilemma is a game theory thought experiment involving two rational agents, each of whom can either cooperate for mutual benefit or betray their partner ("defect") for individual gain. The dilemma arises from the fact that while defecting is rational for each agent, cooperation yields a higher payoff for each. The puzzle was designed by Merrill Flood and Melvin Dresher in 1950 during their work at the RAND Corporation. They invited economist Armen Alchian and mathematician John Williams to play a hundred rounds of the game, observing that Alchian and Williams often chose to cooperate. When asked about the results, John Nash remarked that rational behavior in the iterated version of the game can differ from that in a single-round version. This insight anticipated a key result in game theory: cooperation can emerge in repeated interactions, even in situations where it is not rational in a one-off interaction. Albert W. Tucker later named the game the "prisoner's dilemma" by framing the rewards in terms of prison sentences. The prisoner's dilemma models many real-world situations involving strategic behavior. In casual usage, the label "prisoner's dilemma" is applied to any situation in which two entities can gain important benefits by cooperating or suffer by failing to do so, but find it difficult or expensive to coordinate their choices. Premise William Poundstone described this "typical contemporary version" of the game in his 1993 book Prisoner's Dilemma: Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail. The prisoners are given a little time to think this over, but in no case may either learn what the other has decided until he has irrevocably made his decision. Each is informed that the other prisoner is being offered the very same deal. Each prisoner is concerned only with his own welfare—with minimizing his own prison sentence. This leads to three different possible outcomes for prisoners A and B: If A and B both remain silent, they will each serve one year in prison. If one testifies against the other but the other doesn’t, the one testifying will be set free while the other serves three years in prison. If A and B testify against each other, they will each serve two years. Strategy for the prisoner's dilemma Two prisoners are separated into individual rooms and cannot communicate with each other. It is assumed that both prisoners understand the nature of the game, have no loyalty to each other, and will have no opportunity for retribution or reward outside of the game. The normal game is shown below: Regardless of what the other decides, each prisoner gets a higher reward by betraying the other ("defecting"). The reasoning involves analyzing both players' best responses: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. 
So, either way, A should defect since defecting is A's best response regardless of B's strategy. Parallel reasoning will show that B should defect. Defection always results in a better payoff than cooperation, so it is a strictly dominant strategy for both players. Mutual defection is the only strong Nash equilibrium in the game. Since the collectively ideal result of mutual cooperation is irrational from a self-interested standpoint, this Nash equilibrium is not Pareto efficient. Generalized form The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue and that each player chooses to either "cooperate" or "defect". If both players cooperate, they both receive the reward R for cooperating. If both players defect, they both receive the punishment payoff P. If Blue defects while Red cooperates, then Blue receives the temptation payoff T, while Red receives the "sucker's" payoff, S. Similarly, if Blue cooperates while Red defects, then Blue receives the sucker's payoff S, while Red receives the temptation payoff T. This can be expressed in normal form (listing Blue's payoff first): mutual cooperation gives (R, R); Blue cooperating against a defecting Red gives (S, T); Blue defecting against a cooperating Red gives (T, S); mutual defection gives (P, P). To be a prisoner's dilemma game in the strong sense, the following condition must hold for the payoffs: T > R > P > S. The payoff relationship R > P implies that mutual cooperation is superior to mutual defection, while the payoff relationships T > R and P > S imply that defection is the dominant strategy for both agents. The iterated prisoner's dilemma If two players play the prisoner's dilemma more than once in succession, remember their opponent's previous actions, and are allowed to change their strategy accordingly, the game is called the iterated prisoner's dilemma. In addition to the general form above, the iterative version also requires that 2R > T + S, to prevent alternating cooperation and defection giving a greater reward than mutual cooperation. The iterated prisoner's dilemma is fundamental to some theories of human cooperation and trust. Assuming that the game effectively models transactions between two people that require trust, cooperative behavior in populations can be modeled by a multi-player iterated version of the game. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma is also called the "peace-war game". General strategy If the iterated prisoner's dilemma is played a finite number of times and both players know this, then the dominant strategy and Nash equilibrium is to defect in all rounds. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to later retaliate. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper limit. For cooperation to emerge between rational players, the number of rounds must be unknown or infinite. In that case, "always defect" may no longer be a dominant strategy. As shown by Robert Aumann in a 1959 paper, rational players repeatedly interacting for indefinitely long games can sustain cooperation. Specifically, a player may be less willing to cooperate if their counterpart did not cooperate many times, which causes disappointment. Conversely, as time elapses, the likelihood of cooperation tends to rise, owing to the establishment of a "tacit agreement" among participating players.
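The payoff structure and the dominance argument described above can be verified mechanically. The following Python sketch uses illustrative payoff numbers (not from the article) to check the canonical ordering T > R > P > S, the iterated-game condition 2R > T + S, and that defection is the best response to either move.

T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff (illustrative)

# Row player's payoff indexed by (row action, column action); "C" or "D".
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def best_response(opponent_action):
    """Return the action maximizing the row player's payoff against a fixed opponent move."""
    return max(["C", "D"], key=lambda a: payoff[(a, opponent_action)])

assert T > R > P > S   # prisoner's dilemma in the strong sense
assert 2 * R > T + S   # extra condition for the iterated game
assert best_response("C") == "D" and best_response("D") == "D"  # defection dominates
print("Defect is dominant; mutual cooperation (R, R) still beats mutual defection (P, P).")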
In experimental situations, cooperation can occur even when both participants know how many iterations will be played. According to a 2019 experimental study in the American Economic Review that tested what strategies real-life subjects used in iterated prisoner's dilemma situations with perfect monitoring, the majority of chosen strategies were always to defect, tit-for-tat, and grim trigger. Which strategy the subjects chose depended on the parameters of the game. Axelrod's tournament and successful strategy conditions Interest in the iterated prisoner's dilemma was kindled by Robert Axelrod in his 1984 book The Evolution of Cooperation, in which he reports on a tournament that he organized of the N-step prisoner's dilemma (with N fixed) in which participants have to choose their strategy repeatedly and remember their previous encounters. Axelrod invited academic colleagues from around the world to devise computer strategies to compete in an iterated prisoner's dilemma tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth. Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more altruistic strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behavior from mechanisms that are initially purely selfish, by natural selection. The winning deterministic strategy was tit for tat, developed and entered into the tournament by Anatol Rapoport. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness": when the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around 1–5%, depending on the lineup of opponents). This allows for occasional recovery from getting trapped in a cycle of defections. After analyzing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to succeed: Nice: The strategy will not be the first to defect (this is sometimes referred to as an "optimistic" algorithm), i.e., it will not "cheat" on its opponent for purely self-interested reasons first. Almost all the top-scoring strategies were nice. Retaliating: The strategy must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate, a very bad choice that will frequently be exploited by "nasty" strategies. Forgiving: Successful strategies must be forgiving. Though players will retaliate, they will cooperate again if the opponent does not continue to defect. This can stop long runs of revenge and counter-revenge, maximizing points. Non-envious: The strategy must not strive to score more than the opponent. In contrast to the one-time prisoner's dilemma game, the optimal strategy in the iterated prisoner's dilemma depends upon the strategies of likely opponents, and how they will react to defections and cooperation. For example, if a population consists entirely of players who always defect, except for one who follows the tit-for-tat strategy, that person is at a slight disadvantage because of the loss on the first turn. 
In such a population, the optimal strategy is to defect every time. More generally, given a population with a certain percentage of always-defectors with the rest being tit-for-tat players, the optimal strategy depends on the percentage and number of iterations played. Other strategies Deriving the optimal strategy is generally done in two ways: Bayesian Nash equilibrium: If the statistical distribution of opposing strategies can be determined an optimal counter-strategy can be derived analytically. Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy). The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce tit-for-tat players, but no analytic proof exists that this will always occur. In the strategy called win-stay, lose-switch, faced with a failure to cooperate, the player switches strategy the next turn. In certain circumstances, Pavlov beats all other strategies by giving preferential treatment to co-players using a similar strategy. Although tit-for-tat is considered the most robust basic strategy, a team from Southampton University in England introduced a more successful strategy at the 20th-anniversary iterated prisoner's dilemma competition. It relied on collusion between programs to achieve the highest number of points for a single program. The university submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the competing program's score. As a result, the 2004 Prisoners' Dilemma Tournament results show University of Southampton's strategies in the first three places (and a number of positions towards the bottom), despite having fewer wins and many more losses than the GRIM strategy. The Southampton strategy takes advantage of the fact that multiple entries were allowed in this particular competition and that a team's performance was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of minmaxing). Because of this new rule, this competition also has little theoretical significance when analyzing single-agent strategies as compared to Axelrod's seminal tournament. But it provided a basis for analyzing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise. Long before this new-rules tournament was played, Dawkins, in his book The Selfish Gene, pointed out the possibility of such strategies winning if multiple entries were allowed, but remarked that Axelrod would most likely not have allowed them if they had been submitted. It also relies on circumventing the rule that no communication is allowed between players, which the Southampton programs arguably did with their preprogrammed "ten-move dance" to recognize one another, reinforcing how valuable communication can be in shifting the balance of the game. 
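The flavour of such tournaments is easy to reproduce. The sketch below runs a toy round-robin between a few classic strategies; the payoff values, round count, and strategy set are illustrative choices and are not a reconstruction of Axelrod's or the Southampton tournaments.

T, R, P, S = 5, 3, 1, 0
ROUNDS = 200

def tit_for_tat(own_history, opp_history):
    return "C" if not opp_history else opp_history[-1]

def always_defect(own_history, opp_history):
    return "D"

def always_cooperate(own_history, opp_history):
    return "C"

def grim_trigger(own_history, opp_history):
    return "D" if "D" in opp_history else "C"

def play(strategy_a, strategy_b, rounds=ROUNDS):
    """Return total payoffs (a, b) for one iterated match."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    table = {("C", "C"): (R, R), ("C", "D"): (S, T),
             ("D", "C"): (T, S), ("D", "D"): (P, P)}
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = table[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = [tit_for_tat, always_defect, always_cooperate, grim_trigger]
totals = {s.__name__: 0 for s in strategies}
for a in strategies:
    for b in strategies:
        if a is not b:
            sa, sb = play(a, b)
            totals[a.__name__] += sa
            totals[b.__name__] += sb

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)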
Even without implicit collusion between software strategies, tit-for-tat is not always the absolute winner of any given tournament; more precisely, its long-run results over a series of tournaments outperform its rivals, but this does not mean it is the most successful in the short term. The same applies to tit-for-tat with forgiveness and other optimal strategies. This can also be illustrated using the Darwinian ESS simulation. In such a simulation, tit-for-tat will almost always come to dominate, though nasty strategies will drift in and out of the population because a tit-for-tat population is penetrable by non-retaliating nice strategies, which in turn are easy prey for the nasty strategies. Dawkins showed that here, no static mix of strategies forms a stable equilibrium, and the system will always oscillate between bounds. Stochastic iterated prisoner's dilemma In a stochastic iterated prisoner's dilemma game, strategies are specified in terms of "cooperation probabilities". In an encounter between player X and player Y, X's strategy is specified by a set of probabilities P of cooperating with Y. P is a function of the outcomes of their previous encounters or some subset thereof. If P is a function of only their most recent n encounters, it is called a "memory-n" strategy. A memory-1 strategy is then specified by four cooperation probabilities: P = {Pcc, Pcd, Pdc, Pdd}, where Pcd is the probability that X will cooperate in the present encounter given that the previous encounter was characterized by X cooperating and Y defecting. If each of the probabilities is either 1 or 0, the strategy is called deterministic. An example of a deterministic strategy is the tit-for-tat strategy, written as P = {1, 0, 1, 0}, in which X responds as Y did in the previous encounter. Another is the win-stay, lose-switch strategy, written as P = {1, 0, 0, 1}. It has been shown that for any memory-n strategy there is a corresponding memory-1 strategy that gives the same statistical results, so that only memory-1 strategies need be considered. If P is defined as the above 4-element strategy vector of X and Q as the 4-element strategy vector of Y (where the indices are from Y's point of view), a transition matrix M may be defined for X whose ij-th entry is the probability that the outcome of a particular encounter between X and Y will be j given that the previous encounter was i, where i and j are one of the four outcome indices: cc, cd, dc, or dd. For example, from X's point of view, the probability that the outcome of the present encounter is cd given that the previous encounter was cd is equal to Pcd(1 − Qdc). Under these definitions, the iterated prisoner's dilemma qualifies as a stochastic process and M is a stochastic matrix, allowing all of the theory of stochastic processes to be applied. One result of stochastic theory is that there exists a stationary vector v for the matrix M such that v = vM. Without loss of generality, it may be specified that v is normalized so that the sum of its four components is unity. The ij-th entry in Mn will give the probability that the outcome of an encounter between X and Y will be j given that the encounter n steps previous was i. In the limit as n approaches infinity, Mn will converge to a matrix with fixed values, giving the long-term probabilities of an encounter producing j independent of i. In other words, the rows of the limiting matrix will be identical, giving the long-term equilibrium result probabilities of the iterated prisoner's dilemma without the need to explicitly evaluate a large number of interactions.
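The memory-1 formalism above translates directly into a few lines of code. The sketch below builds the transition matrix M from strategy vectors P and Q, finds the stationary vector v by repeated multiplication, and reads off a long-run payoff. The small amount of added noise (so that the chain has a unique stationary distribution) and the payoff numbers are assumptions for illustration, not part of the article.

STATES = ["cc", "cd", "dc", "dd"]                          # previous outcome, from X's point of view
SWAP = {"cc": "cc", "cd": "dc", "dc": "cd", "dd": "dd"}    # same outcome seen from Y's point of view

def transition_matrix(P, Q):
    """M[i][j] = probability the next outcome is j given the previous outcome was i."""
    M = []
    for s in STATES:
        px = P[s]        # probability X cooperates after outcome s
        qy = Q[SWAP[s]]  # probability Y cooperates after the same outcome, from Y's side
        M.append([px * qy, px * (1 - qy), (1 - px) * qy, (1 - px) * (1 - qy)])
    return M

def stationary(M, steps=10_000):
    """Stationary distribution v with v = vM, found by repeated multiplication."""
    v = [0.25, 0.25, 0.25, 0.25]
    for _ in range(steps):
        v = [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]
    return v

# Roughly tit-for-tat for X and win-stay, lose-switch for Y, both slightly noisy.
eps = 0.01
P = {"cc": 1 - eps, "cd": eps, "dc": 1 - eps, "dd": eps}  # ~TFT, {1, 0, 1, 0}
Q = {"cc": 1 - eps, "cd": eps, "dc": eps, "dd": 1 - eps}  # ~WSLS, {1, 0, 0, 1}

v = stationary(transition_matrix(P, Q))
payoff_x = {"cc": 3, "cd": 0, "dc": 5, "dd": 1}           # R, S, T, P from X's point of view
print(dict(zip(STATES, (round(x, 3) for x in v))))
print("long-run payoff for X:", round(sum(v[i] * payoff_x[s] for i, s in enumerate(STATES)), 3))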
It can be seen that v is a stationary vector for M^n and particularly M^∞, so that each row of M^∞ will be equal to v. Thus, the stationary vector specifies the equilibrium outcome probabilities for X. Defining Sx = {R, S, T, P} and Sy = {R, T, S, P} as the short-term payoff vectors for the {cc, cd, dc, dd} outcomes (from X's point of view), the equilibrium payoffs for X and Y can now be specified as sx = v · Sx and sy = v · Sy, allowing the two strategies P and Q to be compared for their long-term payoffs. Zero-determinant strategies In 2012, William H. Press and Freeman Dyson published a new class of strategies for the stochastic iterated prisoner's dilemma called "zero-determinant" (ZD) strategies. The long-term payoffs for encounters between X and Y can be expressed as the determinant of a matrix which is a function of the two strategies and the short-term payoff vectors: sx = D(P, Q, Sx) and sy = D(P, Q, Sy), which do not involve the stationary vector v. Since the determinant function D(P, Q, f) is linear in the vector f, it follows that α·sx + β·sy + γ = D(P, Q, α·Sx + β·Sy + γ·U), where U = {1, 1, 1, 1}. Any strategies for which D(P, Q, α·Sx + β·Sy + γ·U) = 0 are by definition ZD strategies, and the long-term payoffs obey the relation α·sx + β·sy + γ = 0. Tit-for-tat is a ZD strategy which is "fair", in the sense of not gaining advantage over the other player. But the ZD space also contains strategies that, in the case of two players, can allow one player to unilaterally set the other player's score or alternatively force an evolutionary player to achieve a payoff some percentage lower than his own. The extorted player could defect, but would thereby hurt himself by getting a lower payoff. Thus, extortion solutions turn the iterated prisoner's dilemma into a sort of ultimatum game. Specifically, X is able to choose a strategy for which D(P, Q, β·Sy + γ·U) = 0, unilaterally setting sy to a specific value within a particular range of values, independent of Y's strategy, offering an opportunity for X to "extort" player Y (and vice versa). But if X tries to set sx to a particular value, the range of possibilities is much smaller, consisting only of complete cooperation or complete defection. An extension of the iterated prisoner's dilemma is an evolutionary stochastic iterated prisoner's dilemma, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly because they reduce each other's surplus). Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is larger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents. While extortionary ZD strategies are not stable in large populations, another ZD class called "generous" strategies is both stable and robust.
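The claim that tit-for-tat is a "fair" ZD strategy can be checked numerically with the stationary-vector machinery above: whatever memory-1 strategy the opponent uses, the two long-run payoffs come out equal. The sketch below is self-contained and assumes the conventional payoff values R = 3, S = 0, T = 5, P = 1; the random opponents are generated only for illustration.

```python
import numpy as np

def long_run_payoffs(P, Q, S_x, S_y):
    """Long-run per-round payoffs of memory-1 strategies P (player X) and Q (player Y)."""
    swap = [0, 2, 1, 3]                          # Y labels the cd/dc outcomes the other way round
    M = np.array([[P[i] * Q[swap[i]],
                   P[i] * (1 - Q[swap[i]]),
                   (1 - P[i]) * Q[swap[i]],
                   (1 - P[i]) * (1 - Q[swap[i]])] for i in range(4)])
    eigvals, eigvecs = np.linalg.eig(M.T)
    v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    v = v / v.sum()                              # stationary distribution over (cc, cd, dc, dd)
    return float(v @ S_x), float(v @ S_y)

S_x = np.array([3, 0, 5, 1])                     # assumed conventional payoffs R, S, T, P for X
S_y = np.array([3, 5, 0, 1])                     # Y sees the asymmetric outcomes reversed
tit_for_tat = np.array([1.0, 0.0, 1.0, 0.0])

rng = np.random.default_rng(0)
for _ in range(3):
    Q = rng.uniform(0.05, 0.95, size=4)          # an arbitrary stochastic opponent
    s_x, s_y = long_run_payoffs(tit_for_tat, Q, S_x, S_y)
    print(f"opponent {np.round(Q, 2)}:  s_x = {s_x:.3f}  s_y = {s_y:.3f}")
# The two payoffs always coincide: tit-for-tat enforces s_x - s_y = 0, the "fair"
# zero-determinant relation described above.
```

Extortionate ZD strategies, by contrast, enforce an unequal linear relation between the two scores, which is part of why, in evolving populations, it is the generous ZD strategies just mentioned that persist.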
When the population is not too small, these strategies can supplant any other ZD strategy and even perform well against a broad array of generic strategies for iterated prisoner's dilemma, including win–stay, lose–switch. This was proven specifically for the donation game by Alexander Stewart and Joshua Plotkin in 2013. Generous strategies will cooperate with other cooperative players, and in the face of defection, the generous player loses more utility than its rival. Generous strategies are the intersection of ZD strategies and so-called "good" strategies, which were defined by Ethan Akin to be those for which the player responds to past mutual cooperation with future cooperation and splits expected payoffs equally if he receives at least the cooperative expected payoff. Among good strategies, the generous (ZD) subset performs well when the population is not too small. If the population is very small, defection strategies tend to dominate. Continuous iterated prisoner's dilemma Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd found that in such situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. In a continuous prisoner's dilemma, if a population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from assorting with one another. By contrast, in a discrete prisoner's dilemma, tit-for-tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since nature arguably offers more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of tit-for-tat-like cooperation are extremely rare even though tit-for-tat seems robust in theoretical models. Real-life examples Many instances of human interaction and natural processes have payoff matrices like the prisoner's dilemma's. It is therefore of interest to the social sciences, such as economics, politics, and sociology, as well as to the biological sciences, such as ethology and evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of prisoner's dilemma. Environmental studies In environmental studies, the dilemma is evident in crises such as global climate change. It is argued all countries will benefit from a stable climate, but any single country is often hesitant to curb emissions. The immediate benefit to any one country from maintaining current behavior is perceived to be greater than the purported eventual benefit to that country if all countries' behavior was changed, therefore explaining the impasse concerning climate-change in 2007. An important difference between climate-change politics and the prisoner's dilemma is uncertainty; the extent and pace at which pollution can change climate is not known. The dilemma faced by governments is therefore different from the prisoner's dilemma in that the payoffs of cooperation are unknown. 
This difference suggests that states will cooperate much less than in a real iterated prisoner's dilemma, so that the probability of avoiding a possible climate catastrophe is much smaller than that suggested by a game-theoretical analysis of the situation using a real iterated prisoner's dilemma. Thomas Osang and Arundhati Nandy provide a theoretical explanation with proofs for a regulation-driven win-win situation along the lines of Michael Porter's hypothesis, in which government regulation of competing firms is substantial. Animals Cooperative behavior of many animals can be understood as an example of the iterated prisoner's dilemma. Often animals engage in long-term partnerships; for example, guppies inspect predators cooperatively in groups, and they are thought to punish non-cooperative inspectors. Vampire bats are social animals that engage in reciprocal food exchange. Applying the payoffs from the prisoner's dilemma can help explain this behavior. Psychology In addiction research and behavioral economics, George Ainslie points out that addiction can be cast as an intertemporal prisoner's dilemma problem between the present and future selves of the addict. In this case, "defecting" means relapsing, where not relapsing both today and in the future is by far the best outcome. The case where one abstains today but relapses in the future is the worst outcome: in some sense, the discipline and self-sacrifice involved in abstaining today have been "wasted" because the future relapse means that the addict is right back where they started and will have to start over. Relapsing today and tomorrow is a slightly "better" outcome, because while the addict is still addicted, they haven't put the effort in to trying to stop. The final case, where one engages in the addictive behavior today while abstaining tomorrow, has the problem that (as in other prisoner's dilemmas) there is an obvious benefit to defecting "today", but tomorrow one will face the same prisoner's dilemma, and the same obvious benefit will be present then, ultimately leading to an endless string of defections. In The Science of Trust, John Gottman defines good relationships as those where partners know not to enter into mutual defection behavior, or at least not to get dynamically stuck there in a loop. In cognitive neuroscience, fast brain signaling associated with processing different rounds may indicate choices at the next round. Mutual cooperation outcomes entail brain activity changes predictive of how quickly a person will cooperate in kind at the next opportunity; this activity may be linked to basic homeostatic and motivational processes, possibly increasing the likelihood of short-cutting into mutual cooperation. Economics The prisoner's dilemma has been called the E. coli of social psychology, and it has been used widely to research various topics such as oligopolistic competition and collective action to produce a collective good. Advertising is sometimes cited as a real example of the prisoner's dilemma. When cigarette advertising was legal in the United States, competing cigarette manufacturers had to decide how much money to spend on advertising. The effectiveness of Firm A's advertising was partially determined by the advertising conducted by Firm B. Likewise, the profit derived from advertising for Firm B is affected by the advertising conducted by Firm A. 
If both Firm A and Firm B chose to advertise during a given period, then the advertisement from each firm negates the other's, receipts remain constant, and expenses increase due to the cost of advertising. Both firms would benefit from a reduction in advertising. However, should Firm B choose not to advertise, Firm A could benefit greatly by advertising. Nevertheless, the optimal amount of advertising by one firm depends on how much advertising the other undertakes. As the best strategy is dependent on what the other firm chooses there is no dominant strategy, which makes it slightly different from a prisoner's dilemma. The outcome is similar, though, in that both firms would be better off were they to advertise less than in the equilibrium. Sometimes cooperative behaviors do emerge in business situations. For instance, cigarette manufacturers endorsed the making of laws banning cigarette advertising, understanding that this would reduce costs and increase profits across the industry. Without enforceable agreements, members of a cartel are also involved in a (multi-player) prisoner's dilemma. "Cooperating" typically means agreeing to a price floor, while "defecting" means selling under this minimum level, instantly taking business from other cartel members. Anti-trust authorities want potential cartel members to mutually defect, ensuring the lowest possible prices for consumers. Sport Doping in sport has been cited as an example of a prisoner's dilemma. Two competing athletes have the option to use an illegal and/or dangerous drug to boost their performance. If neither athlete takes the drug, then neither gains an advantage. If only one does, then that athlete gains a significant advantage over the competitor, reduced by the legal and/or medical dangers of having taken the drug. But if both athletes take the drug, the benefits cancel out and only the dangers remain, putting them both in a worse position than if neither had doped. International politics In international relations theory, the prisoner's dilemma is often used to demonstrate why cooperation fails in situations when cooperation between states is collectively optimal but individually suboptimal. A classic example is the security dilemma, whereby an increase in one state's security (such as increasing its military strength) leads other states to fear for their own security out of fear of offensive action. Consequently, security-increasing measures can lead to tensions, escalation or conflict with one or more other parties, producing an outcome which no party truly desires. The security dilemma is particularly intense in situations when it is hard to distinguish offensive weapons from defensive weapons, and offense has the advantage in any conflict over defense. The prisoner's dilemma has frequently been used by realist international relations theorists to demonstrate the why all states (regardless of their internal policies or professed ideology) under international anarchy will struggle to cooperate with one another even when all benefit from such cooperation. Critics of realism argue that iteration and extending the shadow of the future are solutions to the prisoner's dilemma. When actors play the prisoner's dilemma once, they have incentives to defect, but when they expect to play it repeatedly, they have greater incentives to cooperate. Multiplayer dilemmas Many real-life dilemmas involve multiple players. 
Although metaphorical, Garrett Hardin's tragedy of the commons may be viewed as an example of a multi-player generalization of the prisoner's dilemma: each villager makes a choice for personal gain or restraint. The collective reward for unanimous or frequent defection is very low payoffs and the destruction of the commons. The commons are not always exploited: William Poundstone, in a book about the prisoner's dilemma, describes a situation in New Zealand where newspaper boxes are left unlocked. It is possible for people to take a paper without paying (defecting), but very few do, feeling that if they do not pay then neither will others, destroying the system. Subsequent research by Elinor Ostrom, winner of the 2009 Nobel Memorial Prize in Economic Sciences, hypothesized that the tragedy of the commons is oversimplified, with the negative outcome influenced by outside influences. Without complicating pressures, groups communicate and manage the commons among themselves for their mutual benefit, enforcing social norms to preserve the resource and achieve the maximum good for the group, an example of effecting the best-case outcome for prisoner's dilemma. Academic settings The prisoner's dilemma has been used in various academic settings to illustrate the complexities of cooperation and competition. One notable example is the classroom experiment conducted by sociology professor Dan Chambliss at Hamilton College in the 1980s. Starting in 1981, Chambliss proposed that if no student took the final exam, everyone would receive an A, but if even one student took it, those who didn't would receive a zero. In 1988, John Werner, a first-year student, successfully organized his classmates to boycott the exam, demonstrating a practical application of game theory and the prisoner's dilemma concept. Nearly 25 years later, a similar incident occurred at Johns Hopkins University in 2013. Professor Peter Fröhlich's grading policy scaled final exams according to the highest score, meaning that if everyone received the same score, they would all get an A. Students in Fröhlich's classes organized a boycott of the final exam, ensuring that no one took it. As a result, every student received an A, successfully solving the prisoner's dilemma in a mutually optimal way without iteration. These examples highlight how the prisoner's dilemma can be used to explore cooperative behavior and strategic decision-making in educational contexts. Related games Closed-bag exchange Douglas Hofstadter suggested that people often find problems such as the prisoner's dilemma problem easier to understand when it is illustrated in the form of a simple game, or trade-off. One of several examples he used was "closed bag exchange": Friend or Foe? Friend or Foe? is a game show that aired from 2002 to 2003 on the Game Show Network in the US. On the game show, three pairs of people compete. When a pair is eliminated, they play a game similar to the prisoner's dilemma to determine how the winnings are split. If they both cooperate (Friend), they share the winnings 50–50. If one cooperates and the other defects (Foe), the defector gets all the winnings, and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the reward matrix is slightly different from the standard one given above, as the rewards for the "both defect" and the "cooperate while the opponent defects" cases are identical. 
This makes the "both defect" case a weak equilibrium, compared with being a strict equilibrium in the standard prisoner's dilemma. If a contestant knows that their opponent is going to vote "Foe", then their own choice does not affect their own winnings. In a specific sense, Friend or Foe has a rewards model between prisoner's dilemma and the game of Chicken. This is the rewards matrix: This payoff matrix has also been used on the British television programs Trust Me, Shafted, The Bank Job and Golden Balls, and on the American game show Take It All, as well as for the winning couple on the reality shows Bachelor Pad and Love Island. Game data from the Golden Balls series has been analyzed by a team of economists, who found that cooperation was "surprisingly high" for amounts of money that would seem consequential in the real world but were comparatively low in the context of the game. Iterated snowdrift Researchers from the University of Lausanne and the University of Edinburgh have suggested that the "Iterated Snowdrift Game" may more closely reflect real-world social situations, although this model is actually a chicken game. In this model, the risk of being exploited through defection is lower, and individuals always gain from taking the cooperative choice. The snowdrift game imagines two drivers who are stuck on opposite sides of a snowdrift, each of whom is given the option of shoveling snow to clear a path or remaining in their car. A player's highest payoff comes from leaving the opponent to clear all the snow by themselves, but the opponent is still nominally rewarded for their work. This may better reflect real-world scenarios, the researchers giving the example of two scientists collaborating on a report, both of whom would benefit if the other worked harder. "But when your collaborator doesn't do any work, it's probably better for you to do all the work yourself. You'll still end up with a completed project." Coordination games In coordination games, players must coordinate their strategies for a good outcome. An example is two cars that abruptly meet in a blizzard; each must choose whether to swerve left or right. If both swerve left, or both right, the cars do not collide. The local left- and right-hand traffic convention helps to co-ordinate their actions. Symmetrical co-ordination games include Stag hunt and Bach or Stravinsky. Asymmetric prisoner's dilemmas A more general set of games is asymmetric. As in the prisoner's dilemma, the best outcome is cooperation, and there are motives for defection. Unlike the symmetric prisoner's dilemma, though, one player has more to lose and/or more to gain than the other. Some such games have been described as a prisoner's dilemma in which one prisoner has an alibi, hence the term "alibi game". In experiments, players getting unequal payoffs in repeated games may seek to maximize profits, but only under the condition that both players receive equal payoffs; this may lead to a stable equilibrium strategy in which the disadvantaged player defects every X game, while the other always co-operates. Such behavior may depend on the experiment's social norms around fairness. 
Software Several software packages have been created to run simulations and tournaments of the prisoner's dilemma, some of which have their source code available: The source code for the second tournament run by Robert Axelrod (written by Axelrod and many contributors in Fortran) Prison, a library written in Java, last updated in 1998 Axelrod-Python, written in Python Evoplex, a fast agent-based modeling program released in 2018 by Marcos Cardinot In fiction Hannu Rajaniemi set the opening scene of his The Quantum Thief trilogy in a "dilemma prison". The main theme of the series has been described as the "inadequacy of a binary universe" and the ultimate antagonist is a character called the All-Defector. The first book in the series was published in 2010, with the two sequels, The Fractal Prince and The Causal Angel, published in 2012 and 2014, respectively. A game modeled after the iterated prisoner's dilemma is a central focus of the 2012 video game Zero Escape: Virtue's Last Reward and a minor part in its 2016 sequel Zero Escape: Zero Time Dilemma. In The Mysterious Benedict Society and the Prisoner's Dilemma by Trenton Lee Stewart, the main characters start by playing a version of the game and escaping from the "prison" altogether. Later, they become actual prisoners and escape once again. In The Adventure Zone: Balance during The Suffering Game subarc, the player characters are twice presented with the prisoner's dilemma during their time in two liches' domain, once cooperating and once defecting. In the eighth novel from the author James S. A. Corey, Tiamat's Wrath, Winston Duarte explains the prisoner's dilemma to his 14-year-old daughter, Teresa, to train her in strategic thinking. The 2008 film The Dark Knight includes a scene loosely based on the problem in which the Joker rigs two ferries, one containing prisoners and the other containing civilians, arming both groups with the means to detonate the bomb on each other's ferries, threatening to detonate them both if they hesitate. In moral philosophy The prisoner's dilemma is commonly used as a thinking tool in moral philosophy as an illustration of the potential tension between the benefit of the individual and the benefit of the community. Both the one-shot and the iterated prisoner's dilemma have applications in moral philosophy. Indeed, many of the moral situations, such as genocide, are not easily repeated more than once. Moreover, in many situations, the previous rounds' outcomes are unknown to the players, since they are not necessarily the same (e.g. interaction with a panhandler on the street). The philosopher David Gauthier uses the prisoner's dilemma to show how morality and rationality can conflict. Some game theorists have criticized the use of the prisoner's dilemma as a thinking tool in moral philosophy. Kenneth Binmore argued that the prisoner's dilemma does not accurately describe the game played by humanity, which he argues is closer to a coordination game. Brian Skyrms shares this perspective. Steven Kuhn suggests that these views may be reconciled by considering that moral behavior can modify the payoff matrix of a game, transforming it from a prisoner's dilemma into other games. Pure and impure prisoner's dilemma A prisoner's dilemma is considered "impure" if a mixed strategy may give better expected payoffs than a pure strategy. 
This creates the interesting possibility that the moral action from a utilitarian perspective (i.e., aiming at maximizing the good of an action) may require randomization of one's strategy, such as cooperating with 80% chance and defecting with 20% chance.
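A minimal way to see how randomization can matter is to compute the expected total payoff when both players cooperate with the same probability. This is only a sketch under two explicit assumptions: "better expected payoffs" is read as the expected combined (utilitarian) payoff, and the two players mix with a common probability. The payoff numbers are invented, with the "impure" variant dropping the condition 2R > T + S that is often imposed on the standard dilemma.

```python
def expected_total(R, S, T, P, p):
    """Expected sum of both players' payoffs when each cooperates with probability p."""
    return (2 * R * p * p                      # both cooperate
            + (S + T) * 2 * p * (1 - p)        # one cooperates, the other defects
            + 2 * P * (1 - p) * (1 - p))       # both defect

games = {"standard PD (2R > T + S)": (3, 0, 5, 1),     # conventional example values
         "impure PD  (2R < T + S)": (3, 0, 10, 1)}     # invented for illustration

for name, (R, S, T, P) in games.items():
    best_p = max((i / 100 for i in range(101)),
                 key=lambda p: expected_total(R, S, T, P, p))
    print(f"{name}: best shared cooperation probability {best_p:.2f}, "
          f"expected total {expected_total(R, S, T, P, best_p):.2f}, "
          f"versus {2 * R} under certain mutual cooperation")
```

In the first game no amount of randomization beats certain mutual cooperation, while in the second the utilitarian optimum requires a mixed strategy, which is the kind of possibility the paragraph above describes.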
https://en.wikipedia.org/wiki/Linear%20programming
Linear programming
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polytope. A linear programming algorithm finds a point in the polytope where this function has the largest (or smallest) value if such a point exists. Linear programs are problems that can be expressed in standard form as: maximize cTx subject to Ax ≤ b and x ≥ 0. Here the components of x are the variables to be determined, c and b are given vectors, and A is a given matrix. The function whose value is to be maximized (cTx in this case) is called the objective function. The constraints Ax ≤ b and x ≥ 0 specify a convex polytope over which the objective function is to be optimized. Linear programming can be applied to various fields of study. It is widely used in mathematics and, to a lesser extent, in business, economics, and some engineering problems. There is a close connection between linear programs, eigenequations, John von Neumann's general equilibrium model, and structural equilibrium models (see dual linear program for details). Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proven useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design. History The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in 1827 published a method for solving them, and after whom the method of Fourier–Motzkin elimination is named. In the late 1930s, Soviet mathematician Leonid Kantorovich and American economist Wassily Leontief independently delved into the practical applications of linear programming. Kantorovich focused on manufacturing schedules, while Leontief explored economic applications. Their groundbreaking work was largely overlooked for decades. The turning point came during World War II when linear programming emerged as a vital tool. It found extensive use in addressing complex wartime challenges, including transportation logistics, scheduling, and resource allocation. Linear programming proved invaluable in optimizing these processes while considering critical constraints such as costs and resource availability. Despite its initial obscurity, the wartime successes propelled linear programming into the spotlight. Post-WWII, the method gained widespread recognition and became a cornerstone in various fields, from operations research to economics. The overlooked contributions of Kantorovich and Leontief in the late 1930s eventually became foundational to the broader acceptance and utilization of linear programming in optimizing decision-making processes. Kantorovich's work was initially neglected in the USSR. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs. Kantorovich and Koopmans later shared the 1975 Nobel Memorial Prize in Economic Sciences.
In 1941, Frank Lauren Hitchcock also formulated transportation problems as linear programs and gave a solution very similar to the later simplex method. Hitchcock had died in 1957, and the Nobel Memorial Prize is not awarded posthumously. From 1946 to 1947 George B. Dantzig independently developed general linear programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig also invented the simplex method that, for the first time efficiently, tackled the linear programming problem in most cases. When Dantzig arranged a meeting with John von Neumann to discuss his simplex method, von Neumann immediately conjectured the theory of duality by realizing that the problem he had been working in game theory was equivalent. Dantzig provided formal proof in an unpublished report "A Theorem on Linear Inequalities" on January 5, 1948. Dantzig's work was made available to public in 1951. In the post-war years, many industries applied it in their daily planning. Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm. The theory behind linear programming drastically reduces the number of possible solutions that must be checked. The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems. Uses Linear programming is a widely used field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have much research on specialized algorithms. A number of algorithms for other types of optimization problems work by solving linear programming problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Likewise, linear programming was heavily used in the early formation of microeconomics, and it is currently utilized in company management, such as planning, production, transportation, and technology. Although the modern management issues are ever-changing, most companies would like to maximize profits and minimize costs with limited resources. Google also uses linear programming to stabilize YouTube videos. Standard form Standard form is the usual and most intuitive form of describing a linear programming problem. It consists of the following three parts: A linear (or affine) function to be maximized e.g. Problem constraints of the following form e.g. Non-negative variables e.g. The problem is usually expressed in matrix form, and then becomes: Other forms, such as minimization problems, problems with constraints on alternative forms, and problems involving negative variables can always be rewritten into an equivalent problem in standard form. 
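As a concrete illustration of the standard form, a small problem can be handed to an off-the-shelf solver. The sketch below uses SciPy's linprog, which minimizes by convention, so the objective vector is negated to perform a maximization; the coefficients are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# maximize 3*x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x1, x2 >= 0
c = np.array([3.0, 2.0])          # objective coefficients (invented example values)
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])        # constraint matrix
b = np.array([4.0, 6.0])          # right-hand sides

# linprog minimizes, so pass -c to maximize c @ x; variables are >= 0 by default.
result = linprog(-c, A_ub=A, b_ub=b, method="highs")
print("optimal x:", result.x)                 # attained at a vertex of the feasible polytope
print("optimal objective value:", -result.fun)
```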
Example Suppose that a farmer has a piece of farm land, say L hectares, to be planted with either wheat or barley or some combination of the two. The farmer has F kilograms of fertilizer and P kilograms of pesticide. Every hectare of wheat requires F1 kilograms of fertilizer and P1 kilograms of pesticide, while every hectare of barley requires F2 kilograms of fertilizer and P2 kilograms of pesticide. Let S1 be the selling price of wheat and S2 be the selling price of barley, per hectare. If we denote the area of land planted with wheat and barley by x1 and x2 respectively, then profit can be maximized by choosing optimal values for x1 and x2. This problem can be expressed with the following linear programming problem in the standard form:
Maximize: S1·x1 + S2·x2 (maximize the revenue from the harvest)
Subject to: x1 + x2 ≤ L (limit on total area)
F1·x1 + F2·x2 ≤ F (limit on fertilizer)
P1·x1 + P2·x2 ≤ P (limit on pesticide)
x1 ≥ 0, x2 ≥ 0 (cannot plant a negative area)
In matrix form this becomes: maximize cTx with c = (S1, S2), subject to Ax ≤ b and x ≥ 0, where A is the 3×2 matrix with rows (1, 1), (F1, F2), (P1, P2) and b = (L, F, P). Augmented form (slack form) Linear programming problems can be converted into an augmented form in order to apply the common form of the simplex algorithm. This form introduces non-negative slack variables to replace inequalities with equalities in the constraints. The problems can then be written in the following block matrix form: maximize z subject to z − cTx = 0 and Ax + s = b, with x ≥ 0 and s ≥ 0, where s are the newly introduced slack variables, x are the decision variables, and z is the variable to be maximized. Example The example above is converted into the following augmented form:
Maximize: S1·x1 + S2·x2 (objective function)
subject to: x1 + x2 + x3 = L (augmented constraint)
F1·x1 + F2·x2 + x4 = F (augmented constraint)
P1·x1 + P2·x2 + x5 = P (augmented constraint)
x1, x2, x3, x4, x5 ≥ 0
where x3, x4, and x5 are (non-negative) slack variables, representing in this example the unused area, the amount of unused fertilizer, and the amount of unused pesticide. In matrix form this becomes: maximize z subject to [[1, −S1, −S2, 0, 0, 0], [0, 1, 1, 1, 0, 0], [0, F1, F2, 0, 1, 0], [0, P1, P2, 0, 0, 1]] · (z, x1, x2, x3, x4, x5)T = (0, L, F, P)T, with x1, …, x5 ≥ 0. Duality Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the primal problem as: Maximize cTx subject to Ax ≤ b, x ≥ 0; with the corresponding symmetric dual problem, Minimize bTy subject to ATy ≥ c, y ≥ 0. An alternative primal formulation is: Maximize cTx subject to Ax ≤ b; with the corresponding asymmetric dual problem, Minimize bTy subject to ATy = c, y ≥ 0. There are two ideas fundamental to duality theory. One is the fact that (for the symmetric dual) the dual of a dual linear program is the original primal linear program. Additionally, every feasible solution for a linear program gives a bound on the optimal value of the objective function of its dual. The weak duality theorem states that the objective function value of the dual at any feasible solution is always greater than or equal to the objective function value of the primal at any feasible solution. The strong duality theorem states that if the primal has an optimal solution, x*, then the dual also has an optimal solution, y*, and cTx*=bTy*. A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual and the primal to be infeasible. See dual linear program for details and several more examples. Variations Covering/packing dualities A covering LP is a linear program of the form: Minimize: bTy, subject to: ATy ≥ c, y ≥ 0, such that the matrix A and the vectors b and c are non-negative.
The dual of a covering LP is a packing LP, a linear program of the form: Maximize: cTx, subject to: Ax ≤ b, x ≥ 0, such that the matrix A and the vectors b and c are non-negative. Examples Covering and packing LPs commonly arise as a linear programming relaxation of a combinatorial problem and are important in the study of approximation algorithms. For example, the LP relaxations of the set packing problem, the independent set problem, and the matching problem are packing LPs. The LP relaxations of the set cover problem, the vertex cover problem, and the dominating set problem are also covering LPs. Finding a fractional coloring of a graph is another example of a covering LP. In this case, there is one constraint for each vertex of the graph and one variable for each independent set of the graph. Complementary slackness It is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known using the complementary slackness theorem. The theorem states: Suppose that x = (x1, x2, ... , xn) is primal feasible and that y = (y1, y2, ... , ym) is dual feasible. Let (w1, w2, ..., wm) denote the corresponding primal slack variables, and let (z1, z2, ... , zn) denote the corresponding dual slack variables. Then x and y are optimal for their respective problems if and only if xj zj = 0, for j = 1, 2, ... , n, and wi yi = 0, for i = 1, 2, ... , m. So if the i-th slack variable of the primal is not zero, then the i-th variable of the dual is equal to zero. Likewise, if the j-th slack variable of the dual is not zero, then the j-th variable of the primal is equal to zero. This necessary condition for optimality conveys a fairly simple economic principle. In standard form (when maximizing), if there is slack in a constrained primal resource (i.e., there are "leftovers"), then additional quantities of that resource must have no value. Likewise, if there is slack in the dual (shadow) price non-negativity constraint requirement, i.e., the price is not zero, then there must be scarce supplies (no "leftovers"). Theory Existence of optimal solutions Geometrically, the linear constraints define the feasible region, which is a convex polytope. A linear function is a convex function, which implies that every local minimum is a global minimum; similarly, a linear function is a concave function, which implies that every local maximum is a global maximum. An optimal solution need not exist, for two reasons. First, if the constraints are inconsistent, then no feasible solution exists: For instance, the constraints x ≥ 2 and x ≤ 1 cannot be satisfied jointly; in this case, we say that the LP is infeasible. Second, when the polytope is unbounded in the direction of the gradient of the objective function (where the gradient of the objective function is the vector of the coefficients of the objective function), then no optimal value is attained because it is always possible to do better than any finite value of the objective function. Optimal vertices (and rays) of polyhedra Otherwise, if a feasible solution exists and if the constraint set is bounded, then the optimum value is always attained on the boundary of the constraint set, by the maximum principle for convex functions (alternatively, by the minimum principle for concave functions) since linear functions are both convex and concave. 
However, some problems have distinct optimal solutions; for example, the problem of finding a feasible solution to a system of linear inequalities is a linear programming problem in which the objective function is the zero function (i.e., the constant function taking the value zero everywhere). For this feasibility problem with the zero-function for its objective-function, if there are two distinct solutions, then every convex combination of the solutions is a solution. The vertices of the polytope are also called basic feasible solutions. The reason for this choice of name is as follows. Let d denote the number of variables. Then the fundamental theorem of linear inequalities implies (for feasible problems) that for every vertex x* of the LP feasible region, there exists a set of d (or fewer) inequality constraints from the LP such that, when we treat those d constraints as equalities, the unique solution is x*. Thereby we can study these vertices by means of looking at certain subsets of the set of all constraints (a discrete set), rather than the continuum of LP solutions. This principle underlies the simplex algorithm for solving linear programs. Algorithms Basis exchange algorithms Simplex algorithm of Dantzig The simplex algorithm, developed by George Dantzig in 1947, solves LP problems by constructing a feasible solution at a vertex of the polytope and then walking along a path on the edges of the polytope to vertices with non-decreasing values of the objective function until an optimum is reached for sure. In many practical problems, "stalling" occurs: many pivots are made with no increase in the objective function. In rare practical problems, the usual versions of the simplex algorithm may actually "cycle". To avoid cycles, researchers developed new pivoting rules. In practice, the simplex algorithm is quite efficient and can be guaranteed to find the global optimum if certain precautions against cycling are taken. The simplex algorithm has been proved to solve "random" problems efficiently, i.e. in a cubic number of steps, which is similar to its behavior on practical problems. However, the simplex algorithm has poor worst-case behavior: Klee and Minty constructed a family of linear programming problems for which the simplex method takes a number of steps exponential in the problem size. In fact, for some time it was not known whether the linear programming problem was solvable in polynomial time, i.e. of complexity class P. Criss-cross algorithm Like the simplex algorithm of Dantzig, the criss-cross algorithm is a basis-exchange algorithm that pivots between bases. However, the criss-cross algorithm need not maintain feasibility, but can pivot rather from a feasible basis to an infeasible basis. The criss-cross algorithm does not have polynomial time-complexity for linear programming. Both algorithms visit all 2D corners of a (perturbed) cube in dimension D, the Klee–Minty cube, in the worst case. Interior point In contrast to the simplex algorithm, which finds an optimal solution by traversing the edges between vertices on a polyhedral set, interior-point methods move through the interior of the feasible region. Ellipsoid algorithm, following Khachiyan This is the first worst-case polynomial-time algorithm ever found for linear programming. To solve a problem which has n variables and can be encoded in L input bits, this algorithm runs in time. Leonid Khachiyan solved this long-standing complexity issue in 1979 with the introduction of the ellipsoid method. 
The convergence analysis has (real-number) predecessors, notably the iterative methods developed by Naum Z. Shor and the approximation algorithms by Arkadi Nemirovski and D. Yudin. Projective algorithm of Karmarkar Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm was not a computational break-through, as the simplex method is more efficient for all but specially constructed families of linear programs. However, Khachiyan's algorithm inspired new lines of research in linear programming. In 1984, N. Karmarkar proposed a projective method for linear programming. Karmarkar's algorithm improved on Khachiyan's worst-case polynomial bound (giving ). Karmarkar claimed that his algorithm was much faster in practical LP than the simplex method, a claim that created great interest in interior-point methods. Since Karmarkar's discovery, many interior-point methods have been proposed and analyzed. Vaidya's 87 algorithm In 1987, Vaidya proposed an algorithm that runs in time. Vaidya's 89 algorithm In 1989, Vaidya developed an algorithm that runs in time. Formally speaking, the algorithm takes arithmetic operations in the worst case, where is the number of constraints, is the number of variables, and is the number of bits. Input sparsity time algorithms In 2015, Lee and Sidford showed that linear programming can be solved in time, where denotes the soft O notation, and represents the number of non-zero elements, and it remains taking in the worst case. Current matrix multiplication time algorithm In 2019, Cohen, Lee and Song improved the running time to time, is the exponent of matrix multiplication and is the dual exponent of matrix multiplication. is (roughly) defined to be the largest number such that one can multiply an matrix by a matrix in time. In a followup work by Lee, Song and Zhang, they reproduce the same result via a different method. These two algorithms remain when and . The result due to Jiang, Song, Weinstein and Zhang improved to . Comparison of interior-point methods and simplex algorithms The current opinion is that the efficiencies of good implementations of simplex-based methods and interior point methods are similar for routine applications of linear programming. However, for specific types of LP problems, it may be that one type of solver is better than another (sometimes much better), and that the structure of the solutions generated by interior point methods versus simplex-based methods are significantly different with the support set of active variables being typically smaller for the latter one. Open problems and recent work There are several open problems in the theory of linear programming, the solution of which would represent fundamental breakthroughs in mathematics and potentially major advances in our ability to solve large-scale linear programs. Does LP admit a strongly polynomial-time algorithm? Does LP admit a strongly polynomial-time algorithm to find a strictly complementary solution? Does LP admit a polynomial-time algorithm in the real number (unit cost) model of computation? This closely related set of problems has been cited by Stephen Smale as among the 18 greatest unsolved problems of the 21st century. In Smale's words, the third version of the problem "is the main unsolved problem of linear programming theory." 
While algorithms exist to solve linear programming in weakly polynomial time, such as the ellipsoid methods and interior-point techniques, no algorithms have yet been found that allow strongly polynomial-time performance in the number of constraints and the number of variables. The development of such algorithms would be of great theoretical interest, and perhaps allow practical gains in solving large LPs as well. Although the Hirsch conjecture was recently disproved for higher dimensions, it still leaves the following questions open. Are there pivot rules which lead to polynomial-time simplex variants? Do all polytopal graphs have polynomially bounded diameter? These questions relate to the performance analysis and development of simplex-like methods. The immense efficiency of the simplex algorithm in practice despite its exponential-time theoretical performance hints that there may be variations of simplex that run in polynomial or even strongly polynomial time. It would be of great practical and theoretical significance to know whether any such variants exist, particularly as an approach to deciding if LP can be solved in strongly polynomial time. The simplex algorithm and its variants fall in the family of edge-following algorithms, so named because they solve linear programming problems by moving from vertex to vertex along edges of a polytope. This means that their theoretical performance is limited by the maximum number of edges between any two vertices on the LP polytope. As a result, we are interested in knowing the maximum graph-theoretical diameter of polytopal graphs. It has been proved that all polytopes have subexponential diameter. The recent disproof of the Hirsch conjecture is the first step to prove whether any polytope has superpolynomial diameter. If any such polytopes exist, then no edge-following variant can run in polynomial time. Questions about polytope diameter are of independent mathematical interest. Simplex pivot methods preserve primal (or dual) feasibility. On the other hand, criss-cross pivot methods do not preserve (primal or dual) feasibilitythey may visit primal feasible, dual feasible or primal-and-dual infeasible bases in any order. Pivot methods of this type have been studied since the 1970s. Essentially, these methods attempt to find the shortest pivot path on the arrangement polytope under the linear programming problem. In contrast to polytopal graphs, graphs of arrangement polytopes are known to have small diameter, allowing the possibility of strongly polynomial-time criss-cross pivot algorithm without resolving questions about the diameter of general polytopes. Integer unknowns If all of the unknown variables are required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are in many practical situations (those with bounded variables) NP-hard. 0–1 integer programming or binary integer programming (BIP) is the special case of integer programming where variables are required to be 0 or 1 (rather than arbitrary integers). This problem is also classified as NP-hard, and in fact the decision version was one of Karp's 21 NP-complete problems. If only some of the unknown variables are required to be integers, then the problem is called a mixed integer (linear) programming (MIP or MILP) problem. 
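The integer-constrained variants just described can be tried on a small example with a mixed-integer solver. The sketch below assumes a recent SciPy (1.9 or later), which provides scipy.optimize.milp; the coefficients are invented, and the example also illustrates why simply rounding the optimum of the LP relaxation is not sufficient.

```python
import numpy as np
from scipy.optimize import milp, linprog, LinearConstraint, Bounds

# maximize 5*x1 + 4*x2  subject to  6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x1, x2 >= 0
c = np.array([5.0, 4.0])                       # invented example coefficients
A = np.array([[6.0, 4.0], [1.0, 2.0]])
b = np.array([24.0, 6.0])

# LP relaxation (no integrality); linprog minimizes, so negate c.
relaxed = linprog(-c, A_ub=A, b_ub=b, method="highs")
print("LP relaxation optimum:", relaxed.x, "value:", -relaxed.fun)

# Integer program: integrality=1 marks every variable as integer-valued.
res = milp(c=-c,
           constraints=LinearConstraint(A, -np.inf, b),
           integrality=np.ones_like(c),
           bounds=Bounds(0, np.inf))
print("integer optimum:", res.x, "value:", -res.fun)
# The relaxation's optimum (3, 1.5) does not simply round to the integer
# optimum (4, 0), which is why dedicated integer-programming methods are needed.
```

In practice such problems are handed to dedicated branch-and-bound or branch-and-cut solvers rather than solved by rounding a relaxation.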
These are generally also NP-hard because they are even more general than ILP programs. There are however some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integers or – more general – where the system has the total dual integrality (TDI) property. Advanced algorithms for solving integer linear programs include: cutting-plane method Branch and bound Branch and cut Branch and price if the problem has some extra structure, it may be possible to apply delayed column generation. Such integer-programming algorithms are discussed by Padberg and in Beasley. Integral linear programs A linear program in real variables is said to be integral if it has at least one optimal solution which is integral, i.e., made of only integer values. Likewise, a polyhedron is said to be integral if for all bounded feasible objective functions c, the linear program has an optimum with integer coordinates. As observed by Edmonds and Giles in 1977, one can equivalently say that the polyhedron is integral if for every bounded feasible integral objective function c, the optimal value of the linear program is an integer. Integral linear programs are of central importance in the polyhedral aspect of combinatorial optimization since they provide an alternate characterization of a problem. Specifically, for any problem, the convex hull of the solutions is an integral polyhedron; if this polyhedron has a nice/compact description, then we can efficiently find the optimal feasible solution under any linear objective. Conversely, if we can prove that a linear programming relaxation is integral, then it is the desired description of the convex hull of feasible (integral) solutions. Terminology is not consistent throughout the literature, so one should be careful to distinguish the following two concepts, in an integer linear program, described in the previous section, variables are forcibly constrained to be integers, and this problem is NP-hard in general, in an integral linear program, described in this section, variables are not constrained to be integers but rather one has proven somehow that the continuous problem always has an integral optimal value (assuming c is integral), and this optimal value may be found efficiently since all polynomial-size linear programs can be solved in polynomial time. One common way of proving that a polyhedron is integral is to show that it is totally unimodular. There are other general methods including the integer decomposition property and total dual integrality. Other specific well-known integral LPs include the matching polytope, lattice polyhedra, submodular flow polyhedra, and the intersection of two generalized polymatroids/g-polymatroids – e.g. see Schrijver 2003. Solvers and scripting (programming) languages Permissive licenses: Copyleft (reciprocal) licenses: MINTO (Mixed Integer Optimizer, an integer programming solver which uses branch and bound algorithm) has publicly available source code but is not open source. Proprietary licenses:
https://en.wikipedia.org/wiki/Network%20packet
Network packet
In telecommunications and computer networking, a network packet is a formatted unit of data carried by a packet-switched network. A packet consists of control information and user data; the latter is also known as the payload. Control information provides data for delivering the payload (e.g., source and destination network addresses, error detection codes, or sequencing information). Typically, control information is found in packet headers and trailers. In packet switching, the bandwidth of the transmission medium is shared between multiple communication sessions, in contrast to circuit switching, in which circuits are preallocated for the duration of one session and data is typically transmitted as a continuous bit stream. Terminology In the seven-layer OSI model of computer networking, packet strictly refers to a protocol data unit at layer 3, the network layer. A data unit at layer 2, the data link layer, is a frame. In layer 4, the transport layer, the data units are segments and datagrams. Thus, in the example of TCP/IP communication over Ethernet, a TCP segment is carried in one or more IP packets, which are each carried in one or more Ethernet frames. Architecture The basis of the packet concept is the postal letter: the header is like the envelope, the payload is the entire content inside the envelope, and the footer would be your signature at the bottom. Network design can achieve two major results by using packets: error detection and multiple host addressing. Framing Communications protocols use various conventions for distinguishing the elements of a packet and for formatting the user data. For example, in Point-to-Point Protocol, the packet is formatted in 8-bit bytes, and special characters are used to delimit elements. Other protocols, like Ethernet, establish the start of the header and data elements by their location relative to the start of the packet. Some protocols format the information at a bit level instead of a byte level. Contents A packet may contain any of the following components: Addresses The routing of network packets requires two network addresses, the source address of the sending host, and the destination address of the receiving host. Error detection and correction Error detection and correction is performed at various layers in the protocol stack. Network packets may contain a checksum, parity bits or cyclic redundancy checks to detect errors that occur during transmission. At the transmitter, the calculation is performed before the packet is sent. When received at the destination, the checksum is recalculated, and compared with the one in the packet. If discrepancies are found, the packet may be corrected or discarded. Any packet loss due to these discards is dealt with by the network protocol. In some cases, modifications of the network packet may be necessary while routing, in which cases checksums are recalculated. Hop limit Under fault conditions, packets can end up traversing a closed circuit. If nothing was done, eventually the number of packets circulating would build up until the network was congested to the point of failure. Time to live is a field that is decreased by one each time a packet goes through a network hop. If the field reaches zero, routing has failed, and the packet is discarded. Ethernet packets have no time-to-live field and so are subject to broadcast storms in the presence of a switching loop. Length There may be a field to identify the overall packet length. 
However, in some types of networks, the length is implied by the duration of the transmission. Protocol identifier It is often desirable to carry multiple communication protocols on a network. A protocol identifier field specifies a packet's protocol and allows the protocol stack to process many types of packets. Priority Some networks implement quality of service which can prioritize some types of packets above others. This field indicates which packet queue should be used; a high-priority queue is emptied more quickly than lower-priority queues at points in the network where congestion is occurring. Payload In general, the payload is the data that is carried on behalf of an application. It is usually of variable length, up to a maximum that is set by the network protocol and sometimes the equipment on the route. When necessary, some networks can break a larger packet into smaller packets. Examples Internet protocol IP packets are composed of a header and payload. The header consists of fixed and optional fields. The payload appears immediately after the header. An IP packet has no trailer. However, an IP packet is often carried as the payload inside an Ethernet frame, which has its own header and trailer. Per the end-to-end principle, IP networks do not provide guarantees of delivery, non-duplication, or in-order delivery of packets. However, it is common practice to layer a reliable transport protocol such as Transmission Control Protocol on top of the packet service to provide such protection. NASA Deep Space Network The Consultative Committee for Space Data Systems (CCSDS) packet telemetry standard defines the protocol used for the transmission of spacecraft instrument data over the deep-space channel. Under this standard, an image or other data sent from a spacecraft instrument is transmitted using one or more packets. MPEG packetized stream Packetized elementary stream (PES) is a specification associated with the MPEG-2 standard that allows an elementary stream to be divided into packets. The elementary stream is packetized by encapsulating sequential data bytes from the elementary stream between PES packet headers. A typical method of transmitting elementary stream data from a video or audio encoder is to first create PES packets from the elementary stream data and then to encapsulate these PES packets inside an MPEG transport stream (TS) packets or an MPEG program stream (PS). The TS packets can then be transmitted using broadcasting techniques, such as those used in an ATSC and DVB. NICAM In order to provide mono compatibility, the NICAM signal is transmitted on a subcarrier alongside the sound carrier. This means that the FM or AM regular mono sound carrier is left alone for reception by monaural receivers. The NICAM packet (except for the header) is scrambled with a nine-bit pseudo-random bit-generator before transmission. Making the NICAM bitstream look more like white noise is important because this reduces signal patterning on adjacent TV channels.
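Several of the header fields described above (addresses, a hop limit, a protocol identifier, a length and a checksum) can be illustrated in a few lines of Python. The layout below is a simplified, made-up header format rather than the real IPv4 header, and the ones'-complement checksum shown is the general style used by the Internet protocols, not an exact reimplementation of any one of them.

```python
import struct

def ones_complement_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum, the checksum style used by IPv4, TCP and UDP."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_packet(src: bytes, dst: bytes, protocol: int, payload: bytes) -> bytes:
    """Assemble header + payload for a toy, simplified packet format (not IPv4)."""
    ttl = 64                                 # hop limit, decremented at each hop
    length = 16 + len(payload)               # fixed 16-byte header in this toy format
    header = struct.pack("!BBBBHH4s4s", 4, ttl, protocol, 0, length, 0, src, dst)
    checksum = ones_complement_checksum(header)
    header = struct.pack("!BBBBHH4s4s", 4, ttl, protocol, 0, length, checksum, src, dst)
    return header + payload

pkt = build_packet(bytes([192, 168, 0, 1]), bytes([10, 0, 0, 7]),
                   protocol=6, payload=b"hello")
print(len(pkt), "bytes; checksum verifies:",
      ones_complement_checksum(pkt[:16]) == 0)
```

Recomputing the checksum over a header that already contains its checksum yields zero, which is how a receiver detects corruption introduced in transit.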
Technology
Networks
null
43922
https://en.wikipedia.org/wiki/Clostridium%20botulinum
Clostridium botulinum
Clostridium botulinum is a Gram-positive, rod-shaped, anaerobic, spore-forming, motile bacterium with the ability to produce botulinum toxin, which is a neurotoxin. C. botulinum is a diverse group of pathogenic bacteria. Initially, they were grouped together by their ability to produce botulinum toxin and are now known as four distinct groups, C. botulinum groups I–IV. Along with some strains of Clostridium butyricum and Clostridium baratii, these bacteria all produce the toxin. Botulinum toxin can cause botulism, a severe flaccid paralytic disease in humans and other animals, and is the most potent toxin known to science, natural or synthetic, with a lethal dose of 1.3–2.1 ng/kg in humans. C. botulinum is commonly associated with bulging canned food; bulging, misshapen cans can be due to an internal increase in pressure caused by gas produced by the bacteria. C. botulinum is responsible for foodborne botulism (ingestion of preformed toxin), infant botulism (intestinal infection with toxin-forming C. botulinum), and wound botulism (infection of a wound with C. botulinum). C. botulinum produces heat-resistant endospores that are commonly found in soil and are able to survive under adverse conditions. Microbiology C. botulinum is a Gram-positive, rod-shaped, spore-forming bacterium. It is an obligate anaerobe, meaning that the organism requires an environment lacking oxygen. However, C. botulinum tolerates traces of oxygen due to the enzyme superoxide dismutase, which is an important antioxidant defense in nearly all cells exposed to oxygen. C. botulinum is able to produce the neurotoxin only during sporulation, which can happen only in an anaerobic environment. C. botulinum is divided into four distinct phenotypic groups (I–IV) and is also classified into seven serotypes (A–G) based on the antigenicity of the botulinum toxin produced. At the level of DNA sequences, the phenotypic grouping matches the results of whole-genome and rRNA analyses, and the serotype grouping approximates the result of analyses focused specifically on the toxin sequence. The two phylogenetic trees do not match because of the ability of the toxin gene cluster to be horizontally transferred. Serotypes Botulinum neurotoxin (BoNT) production is the unifying feature of the species. Seven serotypes of toxins have been identified that are allocated a letter (A–G), several of which can cause disease in humans. They are resistant to degradation by enzymes found in the gastrointestinal tract. This allows ingested toxins to be absorbed from the intestines into the bloodstream. Toxins can be further differentiated into subtypes on the basis of smaller variations. However, all types of botulinum toxin are rapidly destroyed by heating to 100 °C for 15 minutes (900 seconds); heating to 80 °C for 30 minutes also destroys BoNT. Most strains produce one type of BoNT, but strains producing multiple toxins have been described. Strains of C. botulinum producing B and F toxin types have been isolated from human botulism cases in New Mexico and California. The toxin type has been designated Bf, as the type B toxin was found in excess of the type F. Similarly, strains producing Ab and Af toxins have been reported. Evidence indicates the neurotoxin genes have been the subject of horizontal gene transfer, possibly from a viral (bacteriophage) source. This theory is supported by the presence of integration sites flanking the toxin in some strains of C. botulinum. However, these integration sites are degraded (except for the C and D types), indicating that C. 
botulinum acquired the toxin genes quite far in the evolutionary past. Nevertheless, further transfers still happen via the plasmids and other mobile elements the genes are located on. Toxin types in disease Only botulinum toxin types A, B, E, F and H (FA) cause disease in humans. Types A, B, and E are associated with food-borne illness, while type E is specifically associated with fish products. Type C produces limber-neck in birds and type D causes botulism in other mammals. No disease is associated with type G. The "gold standard" for determining toxin type is a mouse bioassay, but the genes for types A, B, E, and F can now be readily differentiated using quantitative PCR. Type "H" is in fact a recombinant toxin from types A and F. It can be neutralized by type A antitoxin and is no longer considered a distinct type. A few strains from organisms genetically identified as other Clostridium species have caused human botulism: C. butyricum has produced type E toxin and C. baratii has produced type F toxin. The ability of C. botulinum to naturally transfer neurotoxin genes to other clostridia is concerning, especially in the food industry, where preservation systems are designed to destroy or inhibit only C. botulinum but not other Clostridium species. Metabolism Many C. botulinum genes play a role in the breakdown of essential carbohydrates and the metabolism of sugars. Chitin is the preferred source of carbon and nitrogen for C. botulinum. The Hall A strain of C. botulinum has an active chitinolytic system to aid in the breakdown of chitin. Production of BoNT by C. botulinum types A and B is affected by nitrogen and carbon nutrition. There is evidence that these processes are also under catabolite repression. Groups Physiological differences and genome sequencing at the 16S rRNA level support the subdivision of the C. botulinum species into groups I–IV. Some authors have briefly used groups V and VI, corresponding to toxin-producing C. baratii and C. butyricum. What used to be group IV is now C. argentinense. Although group II cannot degrade native proteins such as casein, coagulated egg white, and cooked meat particles, it is able to degrade gelatin. Human botulism is predominantly caused by group I or II C. botulinum. Group III organisms mainly cause diseases in non-human animals. Laboratory isolation In the laboratory, C. botulinum is usually isolated in tryptose sulfite cycloserine (TSC) growth medium in an anaerobic environment with less than 2% oxygen. This can be achieved with several commercial kits that use a chemical reaction to replace O2 with CO2. C. botulinum (groups I through III) is a lipase-positive microorganism that grows at pH between 4.8 and 7.0 and cannot use lactose as a primary carbon source, characteristics important for biochemical identification. Transmission and sporulation The exact mechanism behind sporulation of C. botulinum is not known. Different strains of C. botulinum can be divided into three groups, groups I, II, and III, based on environmental conditions such as heat resistance, temperature, and biome. Within each group, different strains will use different strategies to adapt to their environment and survive. Unlike other clostridial species, C. botulinum cells sporulate as they enter the stationary phase. C. botulinum relies on quorum sensing to initiate the sporulation process. C. botulinum spores are not found in human feces unless the individual has contracted botulism, and C. botulinum cannot spread from person to person. 
Motility structures The most common motility structure for C. botulinum is a flagellum. Though this structure is not found in all strains of C. botulinum, most strains produce peritrichous flagella. Strains also differ in the length of their flagella and in how many are present on the cell. Growth conditions and prevention C. botulinum is a soil bacterium. The spores can survive in most environments and are very hard to kill. They can survive the temperature of boiling water at sea level, thus many foods are canned with a pressurized boil that achieves even higher temperatures, sufficient to kill the spores. This bacterium is widely distributed in nature and can be assumed to be present on all food surfaces. Its optimum growth temperature is within the mesophilic range. In spore form, it is a heat-resistant pathogen that can survive in low-acid foods and grow to produce toxins. The toxin attacks the nervous system and will kill an adult at a dose of around 75 ng. Botulinum toxin can be destroyed by holding food at 100 °C for 10 minutes; however, because of its potency, this is not recommended by the USA's FDA as a means of control. Botulism poisoning can occur due to preserved or home-canned, low-acid food that was not processed using correct preservation times and/or pressure. Growth of the bacterium can be prevented by high acidity, a high ratio of dissolved sugar, high levels of oxygen, very low levels of moisture, or storage at temperatures below 3 °C (38 °F) for type A. For example, a low-acid canned vegetable such as green beans that has not been heated enough to kill the spores (which requires a pressurized environment) may provide an oxygen-free medium for the spores to grow and produce the toxin. However, pickles are sufficiently acidic to prevent growth; even if the spores are present, they pose no danger to the consumer. Honey, corn syrup, and other sweeteners may contain spores, but the spores cannot grow in a highly concentrated sugar solution; however, when a sweetener is diluted in the low-oxygen, low-acid digestive system of an infant, the spores can grow and produce toxin. As soon as infants begin eating solid food, the digestive juices become too acidic for the bacterium to grow. The control of food-borne botulism caused by C. botulinum is based almost entirely on thermal destruction (heating) of the spores or on inhibiting spore germination and the subsequent growth and toxin production of cells in foods. Conditions conducive to growth depend on various environmental factors. Growth of C. botulinum is a risk in low-acid foods, defined as having a pH above 4.6, although growth is significantly retarded at pH below 4.9. Taxonomic history C. botulinum was first recognized and isolated in 1895 by Emile van Ermengem from home-cured ham implicated in a botulism outbreak. The isolate was originally named Bacillus botulinus, after the Latin word for sausage, botulus. ("Sausage poisoning" was a common problem in 18th- and 19th-century Germany, and was most likely caused by botulism.) However, isolates from subsequent outbreaks were always found to be anaerobic spore formers, so Ida A. Bengtson proposed that these organisms be placed into the genus Clostridium, as the genus Bacillus was restricted to aerobic spore-forming rods. Since 1959, all species producing the botulinum neurotoxins (types A–G) have been designated C. botulinum. 
Substantial phenotypic and genotypic evidence exists to demonstrate heterogeneity within the species, with at least four clearly defined "groups" straddling other species, implying that each deserves to be a genospecies. The situation as of 2018 is as follows: C. botulinum type G (= group IV) strains have since 1988 been their own species, C. argentinense. Group I C. botulinum strains that do not produce a botulinum toxin are referred to as C. sporogenes. Both names have been conserved names since 1999. Group I also contains C. combesii. All other botulinum toxin-producing bacteria, not otherwise classified as C. baratii or C. butyricum, are called C. botulinum. This group still contains three genogroups. Smith et al. (2018) argue that group I should be called C. parabotulinum and group III be called C. novyi sensu lato, leaving only group II in C. botulinum. This argument is not accepted by the LPSN and would cause an unjustified change of the type strain under the Prokaryotic Code. (The current type strain ATCC 25763 falls into group I.) Dobritsa et al. (2018) argue, without formal descriptions, that group II can potentially be made into two new species. The complete genome of C. botulinum ATCC 3502 was sequenced at the Wellcome Trust Sanger Institute in 2007. This strain encodes a type "A" toxin. Diagnosis Physicians may consider the diagnosis of botulism based on a patient's clinical presentation, which classically includes an acute onset of bilateral cranial neuropathies and symmetric descending weakness. Other key features of botulism include an absence of fever, symmetric neurologic deficits, normal or slow heart rate and normal blood pressure, and no sensory deficits except for blurred vision. A careful history and physical examination are paramount to diagnose the type of botulism, as well as to rule out other conditions with similar findings, such as Guillain–Barré syndrome, stroke, and myasthenia gravis. Depending on the type of botulism considered, different tests for diagnosis may be indicated. Foodborne botulism: serum analysis for toxins by bioassay in mice should be done, as the demonstration of the toxins is diagnostic. Wound botulism: isolation of C. botulinum from the wound site should be attempted, as growth of the bacterium is diagnostic. Adult enteric and infant botulism: isolation and growth of C. botulinum from stool samples is diagnostic. Infant botulism is a diagnosis which is often missed in the emergency room. Other tests may be helpful in ruling out other conditions: electromyography (EMG) or antibody studies may help with the exclusion of myasthenia gravis and Lambert–Eaton myasthenic syndrome (LEMS); collection of cerebrospinal fluid (CSF) protein and blood assists with the exclusion of Guillain–Barré syndrome and stroke; and detailed physical examination of the patient for any rash or tick presence helps with the exclusion of tick paralysis. Pathology Foodborne botulism Signs and symptoms of foodborne botulism typically begin between 18 and 36 hours after the toxin enters the body, but can range from a few hours to several days, depending on the amount of toxin ingested. 
Symptoms include double vision, blurred vision, ptosis, nausea, vomiting and abdominal cramps, slurred speech, trouble breathing, difficulty in swallowing, dry mouth, muscle weakness, constipation, and reduced or absent deep tendon reflexes, such as at the knee. Wound botulism Most people who develop wound botulism inject drugs several times a day, so determining a timeline of when onset symptoms first occurred and when the toxin entered the body can be difficult. It is more common in people who inject black tar heroin. Wound botulism signs and symptoms include difficulty swallowing or speaking, facial weakness on both sides of the face, blurred or double vision, ptosis, trouble breathing, and paralysis. Infant botulism If infant botulism is related to food, such as honey, problems generally begin within 18 to 36 hours after the toxin enters the baby's body. Signs and symptoms include constipation (often the first sign), floppy movements due to muscle weakness and trouble controlling the head, a weak cry, irritability, drooling, ptosis, tiredness, difficulty sucking or feeding, and paralysis. Beneficial effects of botulinum toxin Purified botulinum toxin is diluted by a physician for the treatment of congenital pelvic tilt, spasmodic dysphonia (involuntary spasms of the muscles of the larynx), achalasia (a disorder in which the lower esophageal sphincter fails to relax), strabismus (crossed eyes), paralysis of the facial muscles, failure of the cervix, and frequent blinking, as well as for anti-cancer drug delivery. Adult intestinal toxemia A very rare form of botulism that occurs by the same route as infant botulism but affects adults; it occurs only sporadically. Signs and symptoms include abdominal pain, blurred vision, diarrhea, dysarthria, imbalance, and weakness in the arms and hands. Treatment In the case of a diagnosis or suspicion of botulism, patients should be hospitalized immediately, even if the diagnosis and/or tests are pending. Additionally, if botulism is suspected, patients should be treated immediately with antitoxin therapy in order to reduce mortality. Immediate intubation is also highly recommended, as respiratory failure is the primary cause of death from botulism. In North America, an equine-derived heptavalent botulinum antitoxin is used to treat all serotypes of non-infant naturally occurring botulism. For infants less than one year of age, botulism immune globulin is used to treat type A or type B botulism. Recovery may take between one and three months, but with prompt intervention, mortality from botulism ranges from less than 5 percent to 8 percent. Vaccination There used to be a formalin-treated toxoid vaccine against botulism (serotypes A-E), but it was discontinued in 2011 due to declining potency in the toxoid stock. It was originally intended for people at risk of exposure. A few new vaccines are under development. Use and detection C. botulinum is used to prepare the medicaments Botox, Dysport, Xeomin, and Neurobloc, which are used to selectively paralyze muscles and temporarily suppress their function. It has other "off-label" medical purposes, such as treating severe facial pain, for example that caused by trigeminal neuralgia. Botulinum toxin produced by C. botulinum is often believed to be a potential bioweapon as it is so potent that it takes about 75 nanograms to kill a person (a lethal dose of about 1 ng/kg, assuming an average person weighs ~75 kg); 1 kilogram of it would be enough to kill the entire human population (see the rough calculation at the end of this article). A "mouse protection" or "mouse bioassay" test determines the type of C. botulinum toxin present using monoclonal antibodies. 
An enzyme-linked immunosorbent assay (ELISA) with digoxigenin-labeled antibodies can also be used to detect the toxin, and quantitative PCR can detect the toxin genes in the organism. C. botulinum in different geographical locations A number of quantitative surveys for C. botulinum spores in the environment have suggested a prevalence of specific toxin types in given geographic areas, which remain unexplained.
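As a rough consistency check on the figures quoted in the Use and detection section (a lethal dose of roughly 1 ng/kg and an assumed 75 kg adult), the arithmetic can be written out directly; the values are order-of-magnitude estimates only, not measured data.

lethal_dose_ng_per_kg = 1.0                 # approximate lower-bound estimate cited for humans
body_mass_kg = 75                           # assumed average adult mass
dose_per_person_ng = lethal_dose_ng_per_kg * body_mass_kg   # about 75 ng per person

ng_per_kg_of_toxin = 1e12                   # 1 kg = 10**12 ng
people_per_kg = ng_per_kg_of_toxin / dose_per_person_ng
print(f"{people_per_kg:.1e} lethal doses per kilogram")     # ~1.3e10, more than the world population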
Biology and health sciences
Gram-positive bacteria
Plants
43937
https://en.wikipedia.org/wiki/Parasitism
Parasitism
Parasitism is a close relationship between species, where one organism, the parasite, lives on or inside another organism, the host, causing it some harm, and is adapted structurally to this way of life. The entomologist E. O. Wilson characterised parasites as "predators that eat prey in units of less than one". Parasites include single-celled protozoans such as the agents of malaria, sleeping sickness, and amoebic dysentery; animals such as hookworms, lice, mosquitoes, and vampire bats; fungi such as honey fungus and the agents of ringworm; and plants such as mistletoe, dodder, and the broomrapes. There are six major parasitic strategies of exploitation of animal hosts, namely parasitic castration, directly transmitted parasitism (by contact), trophically-transmitted parasitism (by being eaten), vector-transmitted parasitism, parasitoidism, and micropredation. One major axis of classification concerns invasiveness: an endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface. Like predation, parasitism is a type of consumer–resource interaction, but unlike predators, parasites, with the exception of parasitoids, are much smaller than their hosts, do not kill them, and often live in or on their hosts for an extended period. Parasites of animals are highly specialised, each parasite species typically living on a single host species, and they reproduce at a faster rate than their hosts. Classic examples include the interactions between vertebrate hosts and tapeworms, flukes, the malaria-causing Plasmodium species, and fleas. Parasites reduce host fitness by general or specialised pathology that ranges from parasitic castration to modification of host behaviour. Parasites increase their own fitness by exploiting hosts for resources necessary for their survival, in particular by feeding on them and by using intermediate (secondary) hosts to assist in their transmission from one definitive (primary) host to another. Although parasitism is often unambiguous, it is part of a spectrum of interactions between species, grading via parasitoidism into predation, through evolution into mutualism, and in some fungi, shading into being saprophytic. Human knowledge of parasites such as roundworms and tapeworms dates back to ancient Egypt, Greece, and Rome. In early modern times, Antonie van Leeuwenhoek observed Giardia lamblia with his microscope in 1681, while Francesco Redi described internal and external parasites including the sheep liver fluke and ticks. Modern parasitology developed in the 19th century. In human culture, parasitism has negative connotations. These were exploited to satirical effect in Jonathan Swift's 1733 poem "On Poetry: A Rhapsody", comparing poets to hyperparasitical "vermin". In fiction, Bram Stoker's 1897 Gothic horror novel Dracula and its many later adaptations featured a blood-drinking parasite. Ridley Scott's 1979 film Alien was one of many works of science fiction to feature a parasitic alien species. Etymology First used in English in 1539, the word parasite comes from Medieval French, via a Latinised form of the original Greek. The related term parasitism appears in English from 1611. Evolutionary strategies Basic concepts Parasitism is a kind of symbiosis, a close and persistent long-term biological interaction between a parasite and its host. Unlike saprotrophs, parasites feed on living hosts, though some parasitic fungi, for instance, may continue to feed on hosts they have killed. 
Unlike commensalism and mutualism, the parasitic relationship harms the host, either feeding on it or, as in the case of intestinal parasites, consuming some of its food. Because parasites interact with other species, they can readily act as vectors of pathogens, causing disease. Predation is by definition not a symbiosis, as the interaction is brief, but the entomologist E. O. Wilson has characterised parasites as "predators that eat prey in units of less than one". Within that scope are many possible strategies. Taxonomists classify parasites in a variety of overlapping schemes, based on their interactions with their hosts and on their life cycles, which can be complex. An obligate parasite depends completely on the host to complete its life cycle, while a facultative parasite does not. Parasite life cycles involving only one host are called "direct"; those with a definitive host (where the parasite reproduces sexually) and at least one intermediate host are called "indirect". An endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface. Mesoparasites—like some copepods, for example—enter an opening in the host's body and remain partly embedded there. Some parasites can be generalists, feeding on a wide range of hosts, but many parasites, and the majority of protozoans and helminths that parasitise animals, are specialists and extremely host-specific. An early basic, functional division of parasites distinguished microparasites and macroparasites. These each had a mathematical model assigned in order to analyse the population movements of the host–parasite groupings. The microorganisms and viruses that can reproduce and complete their life cycle within the host are known as microparasites. Macroparasites are the multicellular organisms that reproduce and complete their life cycle outside of the host or on the host's body. Much of the thinking on types of parasitism has focused on terrestrial animal parasites of animals, such as helminths. Those in other environments and with other hosts often have analogous strategies. For example, the snubnosed eel is probably a facultative endoparasite (i.e., it is semiparasitic) that opportunistically burrows into and eats sick and dying fish. Plant-eating insects such as scale insects, aphids, and caterpillars closely resemble ectoparasites, attacking much larger plants; they serve as vectors of bacteria, fungi and viruses which cause plant diseases. As female scale insects cannot move, they are obligate parasites, permanently attached to their hosts. The sensory inputs that a parasite employs to identify and approach a potential host are known as "host cues". Such cues can include, for example, vibration, exhaled carbon dioxide, skin odours, visual and heat signatures, and moisture. Parasitic plants can use, for example, light, host physiochemistry, and volatiles to recognize potential hosts. Major strategies There are six major parasitic strategies, namely parasitic castration; directly transmitted parasitism; trophically-transmitted parasitism; vector-transmitted parasitism; parasitoidism; and micropredation. These apply to parasites whose hosts are plants as well as animals. These strategies represent adaptive peaks; intermediate strategies are possible, but organisms in many different groups have consistently converged on these six, which are evolutionarily stable. 
A perspective on the evolutionary options can be gained by considering four key questions: the effect on the fitness of a parasite's hosts; the number of hosts they have per life stage; whether the host is prevented from reproducing; and whether the effect depends on intensity (number of parasites per host). From this analysis, the major evolutionary strategies of parasitism emerge, alongside predation. Parasitic castrators Parasitic castrators partly or completely destroy their host's ability to reproduce, diverting the energy that would have gone into reproduction into host and parasite growth, sometimes causing gigantism in the host. The host's other systems remain intact, allowing it to survive and to sustain the parasite. Parasitic crustaceans such as those in the specialised barnacle genus Sacculina specifically cause damage to the gonads of their many species of host crabs. In the case of Sacculina, the testes of over two-thirds of their crab hosts degenerate sufficiently for these male crabs to develop female secondary sex characteristics such as broader abdomens, smaller claws and egg-grasping appendages. Various species of helminth castrate their hosts (such as insects and snails). This may happen directly, whether mechanically by feeding on their gonads, or by secreting a chemical that destroys reproductive cells; or indirectly, whether by secreting a hormone or by diverting nutrients. For example, the trematode Zoogonus lasius, whose sporocysts lack mouths, castrates the intertidal marine snail Tritia obsoleta chemically, developing in its gonad and killing its reproductive cells. Directly transmitted Directly transmitted parasites, not requiring a vector to reach their hosts, include such parasites of terrestrial vertebrates as lice and mites; marine parasites such as copepods and cyamid amphipods; monogeneans; and many species of nematodes, fungi, protozoans, bacteria, and viruses. Whether endoparasites or ectoparasites, each has a single host-species. Within that species, most individuals are free or almost free of parasites, while a minority carry a large number of parasites; this is known as an aggregated distribution. Trophically transmitted Trophically-transmitted parasites are transmitted by being eaten by a host. They include trematodes (all except schistosomes), cestodes, acanthocephalans, pentastomids, many roundworms, and many protozoa such as Toxoplasma. They have complex life cycles involving hosts of two or more species. In their juvenile stages they infect and often encyst in the intermediate host. When the intermediate-host animal is eaten by a predator, the definitive host, the parasite survives the digestion process and matures into an adult; some live as intestinal parasites. Many trophically transmitted parasites modify the behaviour of their intermediate hosts, increasing their chances of being eaten by a predator. As with directly transmitted parasites, the distribution of trophically transmitted parasites among host individuals is aggregated. Coinfection by multiple parasites is common. Autoinfection, where (by exception) the whole of the parasite's life cycle takes place in a single primary host, can sometimes occur in helminths such as Strongyloides stercoralis. Vector-transmitted Vector-transmitted parasites rely on a third party, an intermediate host, where the parasite does not reproduce sexually, to carry them from one definitive host to another. 
These parasites are microorganisms, namely protozoa, bacteria, or viruses, often intracellular pathogens (disease-causers). Their vectors are mostly hematophagic arthropods such as fleas, lice, ticks, and mosquitoes. For example, the deer tick Ixodes scapularis acts as a vector for diseases including Lyme disease, babesiosis, and anaplasmosis. Protozoan endoparasites, such as the malarial parasites in the genus Plasmodium and sleeping-sickness parasites in the genus Trypanosoma, have infective stages in the host's blood which are transported to new hosts by biting insects. Parasitoids Parasitoids are insects which sooner or later kill their hosts, placing their relationship close to predation. Most parasitoids are parasitoid wasps or other hymenopterans; others include dipterans such as phorid flies. They can be divided into two groups, idiobionts and koinobionts, differing in their treatment of their hosts. Idiobiont parasitoids sting their often-large prey on capture, either killing them outright or paralysing them immediately. The immobilised prey is then carried to a nest, sometimes alongside other prey if it is not large enough to support a parasitoid throughout its development. An egg is laid on top of the prey and the nest is then sealed. The parasitoid develops rapidly through its larval and pupal stages, feeding on the provisions left for it. Koinobiont parasitoids, which include flies as well as wasps, lay their eggs inside young hosts, usually larvae. These are allowed to go on growing, so the host and parasitoid develop together for an extended period, ending when the parasitoids emerge as adults, leaving the prey dead, eaten from inside. Some koinobionts regulate their host's development, for example preventing it from pupating or making it moult whenever the parasitoid is ready to moult. They may do this by producing hormones that mimic the host's moulting hormones (ecdysteroids), or by regulating the host's endocrine system. Micropredators A micropredator attacks more than one host, reducing each host's fitness by at least a small amount, and is only in contact with any one host intermittently. This behavior makes micropredators suitable as vectors, as they can pass smaller parasites from one host to another. Most micropredators are hematophagic, feeding on blood. They include annelids such as leeches, crustaceans such as branchiurans and gnathiid isopods, various dipterans such as mosquitoes and tsetse flies, other arthropods such as fleas and ticks, vertebrates such as lampreys, and mammals such as vampire bats. Transmission strategies Parasites use a variety of methods to infect animal hosts, including physical contact, the fecal–oral route, free-living infectious stages, and vectors, suiting their differing hosts, life cycles, and ecological contexts. Examples to illustrate some of the many possible combinations are given in the table. Variations Among the many variations on parasitic strategies are hyperparasitism, social parasitism, brood parasitism, kleptoparasitism, sexual parasitism, and adelphoparasitism. Hyperparasitism Hyperparasites feed on another parasite, as exemplified by protozoa living in helminth parasites, or facultative or obligate parasitoids whose hosts are either conventional parasites or parasitoids. Levels of parasitism beyond secondary also occur, especially among facultative parasitoids. In oak gall systems, there can be up to four levels of parasitism. 
Hyperparasites can control their hosts' populations, and are used for this purpose in agriculture and to some extent in medicine. The controlling effects can be seen in the way that the CHV1 virus helps to control the damage that chestnut blight, Cryphonectria parasitica, does to American chestnut trees, and in the way that bacteriophages can limit bacterial infections. It is likely, though little researched, that most pathogenic microparasites have hyperparasites which may prove widely useful in both agriculture and medicine. Social parasitism Social parasites take advantage of interspecific interactions between members of eusocial animals such as ants, termites, and bumblebees. Examples include the large blue butterfly, Phengaris arion, its larvae employing ant mimicry to parasitise certain ants, Bombus bohemicus, a bumblebee which invades the hives of other bees and takes over reproduction while their young are raised by host workers, and Melipona scutellaris, a eusocial bee whose virgin queens escape killer workers and invade another colony without a queen. An extreme example of interspecific social parasitism is found in the ant Tetramorium inquilinum, an obligate parasite which lives exclusively on the backs of other Tetramorium ants. A mechanism for the evolution of social parasitism was first proposed by Carlo Emery in 1909. Now known as "Emery's rule", it states that social parasites tend to be closely related to their hosts, often being in the same genus. Intraspecific social parasitism occurs in parasitic nursing, where some individual young take milk from unrelated females. In wedge-capped capuchins, higher ranking females sometimes take milk from low ranking females without any reciprocation. Brood parasitism In brood parasitism, the hosts suffer increased parental investment and energy expenditure to feed parasitic young, which are commonly larger than host young. The growth rate of host nestlings is slowed, reducing the host's fitness. Brood parasites include birds in different families such as cowbirds, whydahs, cuckoos, and black-headed ducks. These do not build nests of their own, but leave their eggs in nests of other species. In the family Cuculidae, over 40% of cuckoo species are obligate brood parasites, while others are either facultative brood parasites or provide parental care. The eggs of some brood parasites mimic those of their hosts, while some cowbird eggs have tough shells, making them hard for the hosts to kill by piercing, both mechanisms implying selection by the hosts against parasitic eggs. The adult female European cuckoo further mimics a predator, the European sparrowhawk, giving her time to lay her eggs in the host's nest unobserved. Host species often combat parasitic egg mimicry through egg polymorphism, having two or more egg phenotypes within a single population of a species. Multiple phenotypes in host eggs decrease the probability of a parasitic species accurately "matching" their eggs to host eggs. Kleptoparasitism In kleptoparasitism (from Greek κλέπτης (kleptēs), "thief"), parasites steal food gathered by the host. The parasitism is often on close relatives, whether within the same species or between species in the same genus or family. For instance, the many lineages of cuckoo bees lay their eggs in the nest cells of other bees in the same family. 
Kleptoparasitism is uncommon generally but conspicuous in birds; some such as skuas are specialised in pirating food from other seabirds, relentlessly chasing them down until they disgorge their catch. Sexual parasitism A unique approach is seen in some species of anglerfish, such as Ceratias holboelli, where the males are reduced to tiny sexual parasites, wholly dependent on females of their own species for survival, permanently attached below the female's body, and unable to fend for themselves. The female nourishes the male and protects him from predators, while the male gives nothing back except the sperm that the female needs to produce the next generation. Adelphoparasitism Adelphoparasitism, (from Greek ἀδελφός (adelphós), brother), also known as sibling-parasitism, occurs where the host species is closely related to the parasite, often in the same family or genus. In the citrus blackfly parasitoid, Encarsia perplexa, unmated females may lay haploid eggs in the fully developed larvae of their own species, producing male offspring, while the marine worm Bonellia viridis has a similar reproductive strategy, although the larvae are planktonic. Illustrations Examples of the major variant strategies are illustrated. Taxonomic range Parasitism has an extremely wide taxonomic range, including animals, plants, fungi, protozoans, bacteria, and viruses. Animals Parasitism is widespread in the animal kingdom, and has evolved independently from free-living forms hundreds of times. Many types of helminth including flukes and cestodes have complete life cycles involving two or more hosts. By far the largest group is the parasitoid wasps in the Hymenoptera. The phyla and classes with the largest numbers of parasitic species are listed in the table. Numbers are conservative minimum estimates. The columns for Endo- and Ecto-parasitism refer to the definitive host, as documented in the Vertebrate and Invertebrate columns. Plants A hemiparasite or partial parasite such as mistletoe derives some of its nutrients from another living plant, whereas a holoparasite such as Cuscuta derives all of its nutrients from another plant. Parasitic plants make up about one per cent of angiosperms and are in almost every biome in the world. All these plants have modified roots, haustoria, which penetrate the host plants, connecting them to the conductive system—either the xylem, the phloem, or both. This provides them with the ability to extract water and nutrients from the host. A parasitic plant is classified depending on where it latches onto the host, either the stem or the root, and the amount of nutrients it requires. Since holoparasites have no chlorophyll and therefore cannot make food for themselves by photosynthesis, they are always obligate parasites, deriving all their food from their hosts. Some parasitic plants can locate their host plants by detecting chemicals in the air or soil given off by host shoots or roots, respectively. About 4,500 species of parasitic plant in approximately 20 families of flowering plants are known. Species within the Orobanchaceae (broomrapes) are among the most economically destructive of all plants. Species of Striga (witchweeds) are estimated to cost billions of dollars a year in crop yield loss, infesting over 50 million hectares of cultivated land within Sub-Saharan Africa alone. Striga infects both grasses and grains, including corn, rice, and sorghum, which are among the world's most important food crops. 
Orobanche also threatens a wide range of other important crops, including peas, chickpeas, tomatoes, carrots, and varieties of cabbage. Yield loss from Orobanche can be total; despite extensive research, no method of control has been entirely successful. Many plants and fungi exchange carbon and nutrients in mutualistic mycorrhizal relationships. However, some 400 species of myco-heterotrophic plants, mostly in the tropics, effectively cheat by taking carbon from a fungus rather than exchanging it for minerals. They have much reduced roots, as they do not need to absorb water from the soil; their stems are slender with few vascular bundles, and their leaves are reduced to small scales, as they do not photosynthesize. Their seeds are small and numerous, so they appear to rely on being infected by a suitable fungus soon after germinating. Fungi Parasitic fungi derive some or all of their nutritional requirements from plants, other fungi, or animals. Plant pathogenic fungi are classified into three categories depending on their mode of nutrition: biotrophs, hemibiotrophs and necrotrophs. Biotrophic fungi derive nutrients from living plant cells, and during the course of infection they colonise their plant host in such a way as to keep it alive for as long as possible. One well-known example of a biotrophic pathogen is Ustilago maydis, the causative agent of corn smut disease. Necrotrophic pathogens, on the other hand, kill host cells and feed saprophytically, an example being the root-colonising honey fungi in the genus Armillaria. Hemibiotrophic pathogens begin colonising their hosts as biotrophs, then subsequently kill off host cells and feed as necrotrophs, a phenomenon termed the biotrophy-necrotrophy switch. Pathogenic fungi are well-known causative agents of diseases in animals as well as humans. Fungal infections (mycoses) are estimated to kill 1.6 million people each year. One example of potent fungal animal pathogens is the Microsporidia, obligate intracellular parasitic fungi that largely affect insects but may also affect vertebrates, including humans, causing the intestinal infection microsporidiosis. Protozoa Protozoa such as Plasmodium, Trypanosoma, and Entamoeba are endoparasitic. They cause serious diseases in vertebrates including humans—in these examples, malaria, sleeping sickness, and amoebic dysentery—and have complex life cycles. Bacteria Many bacteria are parasitic, though they are more generally thought of as pathogens causing disease. Parasitic bacteria are extremely diverse, and infect their hosts by a variety of routes. To give a few examples, Bacillus anthracis, the cause of anthrax, is spread by contact with infected domestic animals; its spores, which can survive for years outside the body, can enter a host through an abrasion or may be inhaled. Borrelia, the cause of Lyme disease and relapsing fever, is transmitted by vectors, ticks of the genus Ixodes, from the diseases' reservoirs in animals such as deer. Campylobacter jejuni, a cause of gastroenteritis, is spread by the fecal–oral route from animals, or by eating insufficiently cooked poultry, or by contaminated water. Haemophilus influenzae, an agent of bacterial meningitis and respiratory tract infections such as bronchitis and pneumonia, is transmitted by droplet contact. Treponema pallidum, the cause of syphilis, is spread by sexual activity. 
Viruses Viruses are obligate intracellular parasites, characterised by extremely limited biological function, to the point where, while they are evidently able to infect all other organisms from bacteria and archaea to animals, plants and fungi, it is unclear whether they can themselves be described as living. They can be either RNA or DNA viruses consisting of a single or double strand of genetic material (RNA or DNA, respectively), covered in a protein coat and sometimes a lipid envelope. They thus lack all the usual machinery of the cell such as enzymes, relying entirely on the host cell's ability to replicate DNA and synthesise proteins. Most viruses are bacteriophages, infecting bacteria. Evolutionary ecology Parasitism is a major aspect of evolutionary ecology; for example, almost all free-living animals are host to at least one species of parasite. Vertebrates, the best-studied group, are hosts to between 75,000 and 300,000 species of helminths and an uncounted number of parasitic microorganisms. On average, a mammal species hosts four species of nematode, two of trematodes, and two of cestodes. Humans have 342 species of helminth parasites, and 70 species of protozoan parasites. Some three-quarters of the links in food webs include a parasite, important in regulating host numbers. Perhaps 40 per cent of described species are parasitic. Fossil record Parasitism is hard to demonstrate from the fossil record, but holes in the mandibles of several specimens of Tyrannosaurus may have been caused by Trichomonas-like parasites. Saurophthirus, the Early Cretaceous flea, parasitized pterosaurs. Eggs that belonged to nematode worms and probably protozoan cysts were found in the Late Triassic coprolite of phytosaur. This rare find in Thailand reveals more about the ecology of prehistoric parasites. Coevolution As hosts and parasites evolve together, their relationships often change. When a parasite is in a sole relationship with a host, selection drives the relationship to become more benign, even mutualistic, as the parasite can reproduce for longer if its host lives longer. But where parasites are competing, selection favours the parasite that reproduces fastest, leading to increased virulence. There are thus varied possibilities in host–parasite coevolution. Evolutionary epidemiology analyses how parasites spread and evolve, whereas Darwinian medicine applies similar evolutionary thinking to non-parasitic diseases like cancer and autoimmune conditions. Long-term partnerships favouring mutualism Long-term partnerships can lead to a relatively stable relationship tending to commensalism or mutualism, as, all else being equal, it is in the evolutionary interest of the parasite that its host thrives. A parasite may evolve to become less harmful for its host or a host may evolve to cope with the unavoidable presence of a parasite—to the point that the parasite's absence causes the host harm. For example, although animals parasitised by worms are often clearly harmed, such infections may also reduce the prevalence and effects of autoimmune disorders in animal hosts, including humans. In a more extreme example, some nematode worms cannot reproduce, or even survive, without infection by Wolbachia bacteria. Lynn Margulis and others have argued, following Peter Kropotkin's 1902 Mutual Aid: A Factor of Evolution, that natural selection drives relationships from parasitism to mutualism when resources are limited. 
This process may have been involved in the symbiogenesis which formed the eukaryotes from an intracellular relationship between archaea and bacteria, though the sequence of events remains largely undefined. Competition favouring virulence Competition between parasites can be expected to favour faster reproducing and therefore more virulent parasites, by natural selection. Among competing parasitic insect-killing bacteria of the genera Photorhabdus and Xenorhabdus, virulence depended on the relative potency of the antimicrobial toxins (bacteriocins) produced by the two strains involved. When only one bacterium could kill the other, the other strain was excluded by the competition. But when caterpillars were infected with bacteria both of which had toxins able to kill the other strain, neither strain was excluded, and their virulence was less than when the insect was infected by a single strain. Cospeciation A parasite sometimes undergoes cospeciation with its host, resulting in the pattern described in Fahrenholz's rule, that the phylogenies of the host and parasite come to mirror each other. An example is between the simian foamy virus (SFV) and its primate hosts. The phylogenies of SFV polymerase and the mitochondrial cytochrome c oxidase subunit II from African and Asian primates were found to be closely congruent in branching order and divergence times, implying that the simian foamy viruses cospeciated with Old World primates for at least 30 million years. The presumption of a shared evolutionary history between parasites and hosts can help elucidate how host taxa are related. For instance, there has been a dispute about whether flamingos are more closely related to storks or ducks. The fact that flamingos share parasites with ducks and geese was initially taken as evidence that these groups were more closely related to each other than either is to storks. However, evolutionary events such as the duplication, or the extinction of parasite species (without similar events on the host phylogeny) often erode similarities between host and parasite phylogenies. In the case of flamingos, they have similar lice to those of grebes. Flamingos and grebes do have a common ancestor, implying cospeciation of birds and lice in these groups. Flamingo lice then switched hosts to ducks, creating the situation which had confused biologists. Parasites infect sympatric hosts (those within their same geographical area) more effectively, as has been shown with digenetic trematodes infecting lake snails. This is in line with the Red Queen hypothesis, which states that interactions between species lead to constant natural selection for coadaptation. Parasites track the locally common hosts' phenotypes, so the parasites are less infective to allopatric hosts, those from different geographical regions. Modifying host behaviour Some parasites modify host behaviour in order to increase their transmission between hosts, often in relation to predator and prey (parasite increased trophic transmission). For example, in the California coastal salt marsh, the fluke Euhaplorchis californiensis reduces the ability of its killifish host to avoid predators. This parasite matures in egrets, which are more likely to feed on infected killifish than on uninfected fish. Another example is the protozoan Toxoplasma gondii, a parasite that matures in cats but can be carried by many other mammals. Uninfected rats avoid cat odors, but rats infected with T. 
gondii are drawn to this scent, which may increase transmission to feline hosts. The malaria parasite modifies the skin odour of its human hosts, increasing their attractiveness to mosquitoes and hence improving the chance for the parasite to be transmitted. Spiders of the species Cyclosa argenteoalba often have parasitoid wasp larvae attached to them, which alter their web-building behaviour. Instead of producing their normal sticky, spiral-shaped webs, parasitised spiders make simplified webs, and this manipulated behaviour becomes more pronounced and longer-lasting the longer the parasites remain attached. Trait loss Parasites can exploit their hosts to carry out a number of functions that they would otherwise have to carry out for themselves. Parasites which lose those functions then have a selective advantage, as they can divert resources to reproduction. Many insect ectoparasites, including bedbugs, batbugs, lice and fleas, have lost their ability to fly, relying instead on their hosts for transport. Trait loss more generally is widespread among parasites. An extreme example is the myxosporean Henneguya zschokkei, a parasite of salmonid fish and the only animal known to have lost the ability to respire aerobically: its cells lack mitochondria. Host defences Hosts have evolved a variety of defensive measures against their parasites, including physical barriers like the skin of vertebrates, the immune system of mammals, insects actively removing parasites, and defensive chemicals in plants. The evolutionary biologist W. D. Hamilton suggested that sexual reproduction could have evolved to help to defeat multiple parasites by enabling genetic recombination, the shuffling of genes to create varied combinations. Hamilton showed by mathematical modelling that sexual reproduction would be evolutionarily stable in different situations, and that the theory's predictions matched the actual ecology of sexual reproduction. However, there may be a trade-off between a host's immunocompetence and the secondary sex characteristics of breeding male vertebrates, such as the plumage of peacocks and the manes of lions. This is because the male hormone testosterone encourages the growth of secondary sex characteristics, favouring such males in sexual selection, at the price of reducing their immune defences. Vertebrates The physical barrier of the tough and often dry and waterproof skin of reptiles, birds and mammals keeps invading microorganisms from entering the body. Human skin also secretes sebum, which is toxic to most microorganisms. On the other hand, larger parasites such as trematodes detect chemicals produced by the skin to locate their hosts when they enter the water. Vertebrate saliva and tears contain lysozyme, an enzyme that breaks down the cell walls of invading bacteria. Should the organism pass the mouth, the stomach with its hydrochloric acid, toxic to most microorganisms, is the next line of defence. Some intestinal parasites have a thick, tough outer coating which is digested slowly or not at all, allowing the parasite to pass through the stomach alive, at which point it enters the intestine and begins the next stage of its life. Once inside the body, parasites must overcome the immune system's serum proteins and pattern recognition receptors, intracellular and cellular, that trigger the adaptive immune system's lymphocytes such as T cells and antibody-producing B cells. These have receptors that recognise parasites. Insects Insects often adapt their nests to reduce parasitism. 
For example, one of the key reasons why the wasp Polistes canadensis nests across multiple combs, rather than building a single comb like much of the rest of its genus, is to avoid infestation by tineid moths. The tineid moth lays its eggs within the wasps' nests and then these eggs hatch into larvae that can burrow from cell to cell and prey on wasp pupae. Adult wasps attempt to remove and kill moth eggs and larvae by chewing down the edges of cells, coating the cells with an oral secretion that gives the nest a dark brownish appearance. Plants Plants respond to parasite attack with a series of chemical defences, such as polyphenol oxidase, under the control of the jasmonic acid-insensitive (JA) and salicylic acid (SA) signalling pathways. The different biochemical pathways are activated by different attacks, and the two pathways can interact positively or negatively. In general, plants can either initiate a specific or a non-specific response. Specific responses involve recognition of a parasite by the plant's cellular receptors, leading to a strong but localised response: defensive chemicals are produced around the area where the parasite was detected, blocking its spread, and avoiding wasting defensive production where it is not needed. Non-specific defensive responses are systemic, meaning that the responses are not confined to an area of the plant, but spread throughout the plant, making them costly in energy. These are effective against a wide range of parasites. When damaged, such as by lepidopteran caterpillars, leaves of plants including maize and cotton release increased amounts of volatile chemicals such as terpenes that signal they are being attacked; one effect of this is to attract parasitoid wasps, which in turn attack the caterpillars. Biology and conservation Ecology and parasitology Parasitism and parasite evolution were until the twenty-first century studied by parasitologists, in a science dominated by medicine, rather than by ecologists or evolutionary biologists. Even though parasite-host interactions were plainly ecological and important in evolution, the history of parasitology caused what the evolutionary ecologist Robert Poulin called a "takeover of parasitism by parasitologists", leading ecologists to ignore the area. This was in his opinion "unfortunate", as parasites are "omnipresent agents of natural selection" and significant forces in evolution and ecology. In his view, the long-standing split between the sciences limited the exchange of ideas, with separate conferences and separate journals. The technical languages of ecology and parasitology sometimes involved different meanings for the same words. There were philosophical differences, too: Poulin notes that, influenced by medicine, "many parasitologists accepted that evolution led to a decrease in parasite virulence, whereas modern evolutionary theory would have predicted a greater range of outcomes". Their complex relationships make parasites difficult to place in food webs: a trematode with multiple hosts for its various life cycle stages would occupy many positions in a food web simultaneously, and would set up loops of energy flow, confusing the analysis. Further, since nearly every animal has (multiple) parasites, parasites would occupy the top levels of every food web. Parasites can play a role in the proliferation of non-native species. For example, invasive green crabs are minimally affected by native trematodes on the Eastern Atlantic coast. 
This helps them outcompete native crabs such as the Atlantic rock crab and the Jonah crab. Ecological parasitology can be important to attempts at control, as during the campaign to eradicate the Guinea worm. Even though the parasite was eradicated in all but four countries, the worm began using frogs as an intermediate host before infecting dogs, making control more difficult than it would have been if the relationships had been better understood. Rationale for conservation Although parasites are widely considered to be harmful, the eradication of all parasites would not be beneficial. Parasites account for at least half of life's diversity; they perform important ecological roles; and without parasites, organisms might tend to asexual reproduction, diminishing the diversity of traits brought about by sexual reproduction. Parasites provide an opportunity for the transfer of genetic material between species, facilitating evolutionary change. Many parasites require multiple hosts of different species to complete their life cycles and rely on predator-prey or other stable ecological interactions to get from one host to another. The presence of parasites thus indicates that an ecosystem is healthy. An ectoparasite, the California condor louse, Colpocephalum californici, became a well-known conservation issue. A large and costly captive breeding program was run in the United States to rescue the California condor. It was host to a louse, which lived only on it. Any lice found were "deliberately killed" during the program, to keep the condors in the best possible health. The result was that one species, the condor, was saved and returned to the wild, while another species, the parasite, became extinct. Although parasites are often omitted in depictions of food webs, they usually occupy the top position. Parasites can function like keystone species, reducing the dominance of superior competitors and allowing competing species to co-exist. Quantitative ecology A single parasite species usually has an aggregated distribution across host animals, which means that most hosts carry few parasites, while a few hosts carry the vast majority of parasite individuals (a minimal simulation illustrating such a distribution is sketched at the end of this article). This poses considerable problems for students of parasite ecology, as it renders the parametric statistics commonly used by biologists invalid. Log-transformation of data before the application of parametric tests, or the use of non-parametric statistics, is recommended by several authors, but this can give rise to further problems, so quantitative parasitology is based on more advanced biostatistical methods. History Ancient Human parasites, including roundworms, the Guinea worm, threadworms and tapeworms, are mentioned in Egyptian papyrus records from 3000 BC onwards; the Ebers Papyrus describes hookworm. In ancient Greece, parasites including the bladder worm are described in the Hippocratic Corpus, while the comic playwright Aristophanes called tapeworms "hailstones". The Roman physicians Celsus and Galen documented the roundworms Ascaris lumbricoides and Enterobius vermicularis. Medieval In his Canon of Medicine, completed in 1025, the Persian physician Avicenna recorded human and animal parasites including roundworms, threadworms, the Guinea worm and tapeworms. The 1397 book Traité de l'état, science et pratique de l'art de la Bergerie (Account of the state, science and practice of the art of shepherding) contains the first description of a trematode endoparasite, the sheep liver fluke Fasciola hepatica. 
Early modern In the early modern period, Francesco Redi's 1668 book Esperienze Intorno alla Generazione degl'Insetti (Experiences of the Generation of Insects), explicitly described ecto- and endoparasites, illustrating ticks, the larvae of nasal flies of deer, and sheep liver fluke. Redi noted that parasites develop from eggs, contradicting the theory of spontaneous generation. In his 1684 book Osservazioni intorno agli animali viventi che si trovano negli animali viventi (Observations on Living Animals found in Living Animals), Redi described and illustrated over 100 parasites including the large roundworm in humans that causes ascariasis. Redi was the first to name the cysts of Echinococcus granulosus seen in dogs and sheep as parasitic; a century later, in 1760, Peter Simon Pallas correctly suggested that these were the larvae of tapeworms. In 1681, Antonie van Leeuwenhoek observed and illustrated the protozoan parasite Giardia lamblia, and linked it to "his own loose stools". This was the first protozoan parasite of humans to be seen under a microscope. A few years later, in 1687, the Italian biologists Giovanni Cosimo Bonomo and Diacinto Cestoni described scabies as caused by the parasitic mite Sarcoptes scabiei, marking it as the first disease of humans with a known microscopic causative agent. Parasitology Modern parasitology developed in the 19th century with accurate observations and experiments by many researchers and clinicians; the term was first used in 1870. In 1828, James Annersley described amoebiasis, protozoal infections of the intestines and the liver, though the pathogen, Entamoeba histolytica, was not discovered until 1873 by Friedrich Lösch. James Paget discovered the intestinal nematode Trichinella spiralis in humans in 1835. James McConnell described the human liver fluke, Clonorchis sinensis, in 1875. Algernon Thomas and Rudolf Leuckart independently made the first discovery of the life cycle of a trematode, the sheep liver fluke, by experiment in 1881–1883. In 1877 Patrick Manson discovered the life cycle of the filarial worms that cause elephantiasis transmitted by mosquitoes. Manson further predicted that the malaria parasite, Plasmodium, had a mosquito vector, and persuaded Ronald Ross to investigate. Ross confirmed that the prediction was correct in 1897–1898. At the same time, Giovanni Battista Grassi and others described the malaria parasite's life cycle stages in Anopheles mosquitoes. Ross was controversially awarded the 1902 Nobel prize for his work, while Grassi was not. In 1903, David Bruce identified the protozoan parasite and the tsetse fly vector of African trypanosomiasis. Vaccine Given the importance of malaria, with some 220 million people infected annually, many attempts have been made to interrupt its transmission. Various methods of malaria prophylaxis have been tried including the use of antimalarial drugs to kill off the parasites in the blood, the eradication of its mosquito vectors with organochlorine and other insecticides, and the development of a malaria vaccine. All of these have proven problematic, with drug resistance, insecticide resistance among mosquitoes, and repeated failure of vaccines as the parasite mutates. The first and as of 2015 the only licensed vaccine for any parasitic disease of humans is RTS,S for Plasmodium falciparum malaria. Biological control Several groups of parasites, including microbial pathogens and parasitoidal wasps have been used as biological control agents in agriculture and horticulture. 
Resistance Poulin observes that the widespread prophylactic use of anthelmintic drugs in domestic sheep and cattle constitutes a worldwide uncontrolled experiment in the life-history evolution of their parasites. The outcomes depend on whether the drugs decrease the chance of a helminth larva reaching adulthood. If so, natural selection can be expected to favour the production of eggs at an earlier age. If, on the other hand, the drugs mainly affect adult parasitic worms, selection could cause delayed maturity and increased virulence. Such changes appear to be underway: the nematode Teladorsagia circumcincta is changing its adult size and reproductive rate in response to drugs. Cultural significance Classical times In the classical era, the concept of the parasite was not strictly pejorative: the parasitus was an accepted role in Roman society, in which a person could live off the hospitality of others, in return for "flattery, simple services, and a willingness to endure humiliation". Society Parasitism has a derogatory sense in popular usage, a contrast with the word's neutral biological meaning that the immunologist John Playfair has remarked upon. The satirical cleric Jonathan Swift alludes to hyperparasitism in his 1733 poem "On Poetry: A Rhapsody", comparing poets to "vermin" who "teaze and pinch their foes". A 2022 study examined the naming of some 3,000 parasite species discovered in the previous two decades. Of those named after scientists, over 80% were named for men, whereas about a third of authors of papers on parasites were women. The study found that the percentage of parasite species named for relatives or friends of the author has risen sharply in the same period. Fiction In Bram Stoker's 1897 Gothic horror novel Dracula, and its many film adaptations, the eponymous Count Dracula is a blood-drinking parasite (a vampire). The critic Laura Otis argues that as a "thief, seducer, creator, and mimic, Dracula is the ultimate parasite. The whole point of vampirism is sucking other people's blood—living at other people's expense." Disgusting and terrifying parasitic alien species are widespread in science fiction, as for instance in Ridley Scott's 1979 film Alien. In one scene, a Xenomorph bursts out of the chest of a dead man, with blood squirting out under high pressure assisted by explosive squibs. Animal organs were used to reinforce the shock effect. The scene was filmed in a single take, and the startled reaction of the actors was genuine. The entomopathogenic fungus Cordyceps is represented culturally as a deadly threat to the human race. The video game series The Last of Us (2013–present) and its television adaptation present Cordyceps as a parasite of humans, causing a zombie apocalypse. Its human hosts initially become violent "infected" beings, before turning into blind zombie "clickers", complete with fruiting bodies growing out from their faces.
Biology and health sciences
Ecology
null
43945
https://en.wikipedia.org/wiki/Massage
Massage
Massage is the rubbing or kneading of the body's soft tissues. Massage techniques are commonly applied with hands, fingers, elbows, knees, forearms, feet, or a device. Massage is generally used to treat body stress or pain. In English-speaking European countries, a person professionally trained to give massages is traditionally known as a masseur (male) or masseuse (female). In the United States, these individuals are often referred to as "massage therapists". In some provinces of Canada, they are called "registered massage therapists." In professional settings, clients are treated while lying on a massage table, sitting in a massage chair, or lying on a mat on the floor. There are many different modalities in the massage industry, including (but not limited to): deep tissue, manual lymphatic drainage, medical, sports, structural integration, Swedish, Thai and trigger point. Etymology The word comes from the French massage 'friction of kneading', which, in turn, comes either from the Arabic word massa meaning 'to touch, feel', the Portuguese amassar 'knead', the Latin massa meaning 'mass, dough', or the Greek verb μάσσω (massō) 'to handle, touch, to work with the hands, to knead dough'. The ancient Greek word for massage was anatripsis, and the Latin was frictio. History Ancient times Archaeological evidence of massage has been found in many ancient civilizations including China, India, Japan, Egypt, Rome, Greece, and Mesopotamia. 2330 BC: The Tomb of Akmanthor (also known as "The Tomb of the Physician") in Saqqara, Egypt, depicts two men having work done on their feet and hands, possibly depicting a massage. 2000 BC: The word muššu'u ("massage") is written for the first time, and its use is described, in some Sumerian and Akkadian texts found at the beginning of the 21st century in ancient Mesopotamia. 722–481 BC: Huangdi Neijing is composed during the Chinese Spring and Autumn period. The Nei-jing is a compilation of medical knowledge known up to that date, and is the foundation of traditional Chinese medicine. Massage is referred to in 30 different chapters of the Nei Jing. It specifies the use of different massage techniques and how they should be used in the treatment of specific ailments and injuries. Also known as "The Yellow Emperor's Inner Canon," the text refers to previous medical knowledge from the time of the Yellow Emperor, misleading some into believing the text itself was written during the time of the Yellow Emperor (which would predate written history). 762 BC: In the Iliad and the Odyssey, massage with oils and aromatic substances is mentioned as a means to relax the tired limbs of warriors and as a way to help the treatment of wounds. 700 BC: Bian Que, the earliest known Chinese physician, uses massage in medical practice. 500 BC: Jīvaka Komarabhācca was an Indian physician who, according to the Pāli Buddhist Canon, was Shakyamuni Buddha's physician. Jivaka is sometimes credited with founding and developing a style of massage that led to the type of massage practiced in current-day Thailand, though this claim is disputed. 493 BC: A possible biblical reference documents daily "treatments" with oil of myrrh as a part of the beauty regimen of the wives of Xerxes (Esther, 2:12). 460 BC: Hippocrates wrote "The physician must be experienced in many things, but assuredly in rubbing." 300 BC: Charaka Samhita, sometimes dated to 800 BCE, is one of the oldest of the three ancient treatises of Ayurvedic medicine, including massage.
Sanskrit records indicate that massage had been practiced in India long before the beginning of recorded history. AD 1st or 2nd: Galen mentioned Diogas (Διόγας) who was an iatralipta (ἰατραλείπτης) (rubber and anointer/physiotherapist). AD 581: China establishes a department of massage therapy within the Office of Imperial Physicians. Middle Ages Many of Galen's manuscripts, for instance, were collected and translated by Hunayn ibn Ishaq in the 9th century. Later in the 11th-century copies were translated into Latin and again in the 15th and 16th centuries, when they helped enlighten European scholars as to the achievements of the Ancient Greeks. This renewal of the Galenic tradition during the Renaissance played a very important part in the rise of modern science. One of the greatest Persian medics was Avicenna, also known as Ibn Sina, who lived from 980 AD to 1037 AD. His works included a comprehensive collection and systematization of the fragmentary and unorganized Greco-Roman medical literature that had been translated Arabic by that time, augmented by notes from his own experiences. One of his books, Al-Qānūn fī aṭ-Ṭibb (The Canon of Medicine) has been called the most famous single book in the history of medicine in both East and West. Avicenna excelled in the logical assessment of conditions and comparison of symptoms and took special note of analgesics and their proper use as well as other methods of relieving pain, including massage. AD 1150: Evidence of massage abortion, involving the application of pressure to the pregnant abdomen, can be found in one of the bas reliefs decorating the temple of Angkor Wat in Cambodia. It depicts a demon performing such an abortion upon a woman who has been sent to the underworld. This is the oldest known visual representation of abortion. In Southeast Asia, massage traditions and techniques have already been entrenched in the people's diverse cultures for centuries before trade contact with Europe in the 16th century. In the Philippines, a distinct massage and healing tradition called hilot developed, while in Thailand, the tradition of massage that developed was called nuad thai. Nuad thai was declared in 2019 as a UNESCO intangible cultural heritage. 18th and 19th centuries AD 1776: Jean Joseph Marie Amiot and Pierre-Martial Cibot, French missionaries in China translate summaries of Huangdi Neijing, including a list of medical plants, exercises, and elaborate massage techniques, into the French language, thereby introducing Europe to the highly developed Chinese system of medicine, medical-gymnastics, and medical-massage. AD 1776: Pehr Henrik Ling, a Swedish physical therapist and teacher of medical-gymnastics, is born. Ling has often been erroneously credited for having invented "Classic Massage", also known as "Swedish Massage", and has been called the "Father of Massage". AD 1779: Frenchman Pierre-Martial Cibot publishes "Notice du Cong-fou des Bonzes Tao-see", also known as "The Cong-Fou of the Tao-Tse", a French language summary of medical techniques used by Taoist priests. According to English historian of China Joseph Needham, Cibot's work "was intended to present the physicists and physicians of Europe with a sketch of a system of medical gymnastics which they might like to adopt—or if they found it at fault they might be stimulated to invent something better. 
This work has long been regarded as of cardinal importance in the history of physiotherapy because it almost certainly influenced the Swedish founder of the modern phase of the art, Pehr Hendrik Ling. Cibot had studied at least one Chinese book but also got much from a Christian neophyte who had become expert in the subject before his conversion." AD 1813: The Royal Gymnastic Central Institute for the training of gymnastic instructors was opened in Stockholm, Sweden, with Pehr Henrik Ling appointed as principal. Ling developed what he called the "Swedish Movement Cure". Ling died in 1839, having previously named his pupils as the repositories of his teaching. Ling and his assistants left little in the way of proper written accounts of their methods. AD 1868: The Dutch massage practitioner Johann Georg Mezger applies French terms to name five basic massage techniques, and coins the phrase "Swedish massage system". These techniques are still known by their French names (effleurage (long, gliding strokes), petrissage (lifting and kneading the muscles), friction (firm, deep, circular rubbing movements), tapotement (brisk tapping or percussive movements) and vibration (rapidly shaking or vibrating specific muscles)). Modern times China As of 2005, in the city of Shanghai alone there were an estimated 1,300–2,000 foot massage centers, with more than 3,000 in Shenzhen. It was also estimated that there were nearly 30,000 massage workers in Shanghai and over 40,000 in Shenzhen. The average rate of pay for a worker in the massage industry in China is over 10,000 yuan per month, making it a well-paying job in China's service sector. United States Massage started to become popular in the United States in the middle part of the 19th century and was introduced by two New York physicians, George and Charles Taylor, based on Pehr Henrik Ling's techniques developed in Sweden. During the 1930s and 1940s, massage's influence decreased as a result of medical advancements of the time, while in the 1970s massage's influence grew once again with a notable rise among athletes. Until the 1970s, nurses used massage to reduce pain and aid sleep. Popular books and videos, such as Massage for Relaxation, helped introduce massage to popular culture outside of a health setting. The massage therapy industry continues to grow. In 2009, U.S. consumers spent between $4 and $6 billion on visits to massage therapists. In 2015, research estimated that massage therapy was a $12.1 billion industry. All but five states require massage therapists to be licensed, and licensure requires the applicant to receive training at an accredited school and to pass a comprehensive exam. Those states that require licensure also typically require continuing education in massage techniques and in ethics. United Kingdom The service of massage or "physiological shampooing" was advertised in The Times from as early as 1880. Adverts claimed it as a cure for obesity amongst other chronic ailments. Sports, business and organizations Massage developed alongside athletics in both Ancient China and Ancient Greece. Taoist priests developed massage in concert with their Kung Fu gymnastic movements, while Ancient Greek Olympians used a specific type of trainer ("aleiptes") who would rub their muscles with oil. Pehr Ling's introduction to massage also came about directly as a result of his study of gymnastic movements. The 1984 Summer Olympics in Los Angeles was the first time that massage therapy was televised as it was being performed on the athletes.
And then, during the 1996 Summer Olympics in Atlanta massage therapy was finally offered as a core medical service to the US Olympic Team. Massage has been employed by businesses and organizations such as the U.S. Department of Justice, Boeing and Reebok. Athletes such as Michael Jordan and LeBron James have personal massage therapists that at times even travel with them. Types and methods Acupressure Acupressure [from Latin acus "needle" (see acuity) + pressure (n.)] is a technique similar in principle to acupuncture. It is based on the concept of life energy which flows through "meridians" in the body. In treatment, physical pressure is applied to acupuncture points with the aim of clearing blockages in those meridians. Pressure may be applied by fingers, palm, elbow, toes or with various devices. Some medical studies have suggested that acupressure may be effective at helping manage nausea and vomiting, for helping lower back pain, tension headaches, stomach ache, among other things, although such studies have been found to have a high likelihood of bias. Ashiatsu In ashiatsu, the practitioner uses their feet to deliver treatment. The name comes from the Japanese, ashi for foot and atsu for pressure. This technique typically uses the heel, sesamoid, arch, and/or whole plantar surface of foot, and offers large compression, tension and shear forces with less pressure than an elbow and is ideal for large muscles, such as in thigh, or for long-duration upper trapezius compressions. Other manual therapy techniques using the feet to provide treatment include Keralite, Barefoot Lomilomi, and Chavutti Thirumal. Ayurvedic massage Ayurvedic massage is known as Abhyangam in Sanskrit. According to the Ayurvedic Classics Abhyangam is an important dincharya (Daily Regimen) that is needed for maintaining a healthy lifestyle. The massage technique used during Ayurvedic Massage aims to stimulate the lymphatic system. Practitioners claim that the benefits of regular Ayurvedic massage include pain relief, reduction of fatigue, improved immune system and improved longevity. Burmese massage "Known in Myanmar as Yoe Yar Nhake Nal Chin, meaning 'traditional massage', Burmese massage has its ancient origins from Thai, Chinese and Indian medicine. Currently, Burmese massage also includes the use of local natural ingredients such as Thanaka which helps to promote smooth skin and prevents sunburn." Burmese massage is a full body massage technique that starts from head to toes, drawing on acupuncture, reflexology and kneading. Signature massage strokes include acupressure using the elbows, quick gentle knocking of acupressure points, and slow kneading of tight muscles. The massage aims to improve blood circulation and quality of sleep, while at the same time help to promote better skin quality. Biomechanical stimulation (BMS) massage Biomechanical stimulation (BMS) is a term generally used for localised biomechanical oscillation methods, whereby local muscle groups are stimulated directly or via the associated tendons by means of special hand held mechanical vibration devices. Biomechanical oscillation therapy and training is offered in a variety of areas such as competitive sports, fitness, rehabilitation, medicine, prevention, beauty and used to improve performance of the muscles and to improve coordination and balance. It is often used in myofascial trigger point therapy to invoke reciprocal inhibition within the musculoskeletal system. Beneficial effects from this type of stimulation have been found to exist. 
Biodynamic massage Biodynamic massage was created by Gerda Boyesen as part of Biodynamic Psychotherapy. It uses a combination of hands-on work and "energy work" and also uses a stethoscope to hear the peristalsis. Craniosacral therapy Craniosacral therapy (CST) is a pseudoscience that aims to improve fluid movement and cranial bone motion by applying light touch to the skull, face, spine, and pelvis. Erotic massage Erotic massage is the use of massage techniques on another person's erogenous zones to achieve or enhance sexual excitation or arousal, or to achieve orgasm. It was also once used for medical purposes, including the treatment of "female hysteria" and "womb disease". Nuru massage is a Japanese form of erotic massage. Hammam ("Turkish bath") massage In the traditional Hammam, massage involves not just vigorous muscle kneading, but also joint cracking, "not so much a tender working of the flesh as a pummeling, a cracking of joints, a twisting of limbs..." Lomilomi and indigenous massage of Oceania Lomilomi is the traditional massage of Hawaii. As an indigenous practice, it varies by island and by family. The word lomilomi also is used for massage in Samoa and East Futuna. In Samoa, it is also known as lolomi and milimili. In East Futuna, it is also called milimili, fakasolosolo, amoamo, lusilusi, kinikini, fai’ua. The Māori call it romiromi and mirimiri. In Tonga massage is fotofota, tolotolo, and amoamo. In Tahiti it is rumirumi. On Nanumea in Tuvalu, massage is known as popo, pressure application is kukumi, and heat application is tutu. Massage has also been documented in Tikopia in the Solomon Islands, in Rarotonga, in Pukapuka and in Western Samoa. Lymphatic drainage Manual lymphatic drainage is a technique used to gently work and stimulate the lymphatic system, to assist in reduction of localized swelling. The lymphatic system is a network of slow moving vessels in the body that carries cellular waste toward the heart, to be filtered and removed. Lymph also carries lymphocytes and other immune system agents. Manual lymphatic drainage claims to improve waste removal and immune function. Medical massage Medical massage is a controversial term in the massage profession. Many use it to describe a specific technique. Others use it to describe a general category of massage in which many methods, such as deep tissue massage, myofascial release and trigger-point therapy, as well as osteopathic techniques, cranial-sacral techniques and many more, can be used to work with various medical conditions. Massage used in the medical field includes decongestive therapy for lymphedema, which can be used in conjunction with the treatment of breast cancer. Light massage is also used in pain management and palliative care. Carotid sinus massage is used to diagnose carotid sinus syncope and is sometimes useful for differentiating supraventricular tachycardia (SVT) from ventricular tachycardia. It, like the Valsalva maneuver, is a therapy for SVT, though it is less effective than management of SVT with medications. A 2004 systematic review found single applications of massage therapy "reduced state anxiety, blood pressure, and heart rate but not negative mood, immediate assessment of pain, and cortisol level," while "multiple applications reduced delayed assessment of pain," and found improvements in anxiety and depression similar to effects of psychotherapy.
A subsequent systematic review published in 2008 found that there is little evidence supporting the use of massage therapy for depression in high quality studies from randomized controlled trials. Myofascial release Myofascial release refers to the manual massage technique that claims to release adhered fascia and muscles with the goal of eliminating pain, increasing range of motion and equilibrioception. Myofascial release usually involves applying shear compression or tension in various directions, cross fiber friction or by skin rolling. Reflexology Reflexology, also known as "zone therapy", is an alternative medicine involving application of pressure to the feet and hands with specific thumb, finger, and hand techniques without the use of oil or lotion. It is based on a pseudoscientific belief in a system of zones and reflex areas that purportedly reflect an image of the body on the feet and hands, with the premise that such work effects a physical change to the body. Shiatsu Shiatsu (指圧) (shi meaning finger and atsu meaning pressure) is a form of Japanese bodywork based on concepts in traditional Chinese medicine such as qi meridians. It consists of finger, palm pressure, stretches, and other massage techniques. There is no convincing data available to suggest that shiatsu is an effective treatment for any medical condition. Sports massage Sports massage is the use of specific massage therapy techniques in an athletic context to improve recovery time, enhance performance and reduce the risk of injury. This is accomplished using techniques that stimulate the flow of blood and lymph to and from muscles. Sports massage is often delivered before or after physical activity depending on the subject's needs, preferences and goals. Structural Integration Structural Integration's aim is to unwind the strain patterns in the body's myofascial system, restoring it to its natural balance, alignment, length and ease. This is accomplished by hands-on manipulation, coupled with movement re-education. There are about 15 schools of Structural Integration as recognized by the International Association of Structural Integration, including the Dr. Ida Rolf Institute (with the brand Rolfing), Hellerwork, Guild for Structural Integration, Aston Patterning, Soma, and Kinesis Myofascial Integration. Swedish massage The most widely recognized and commonly used category of massage is Swedish massage. The Swedish massage techniques vary from light to vigorous. Swedish massage uses five styles of strokes. The five basic strokes are effleurage (sliding or gliding), petrissage (kneading), tapotement (rhythmic tapping), friction (cross fiber or with the fibers) and vibration/shaking. The development of Swedish massage is often inaccurately credited to Per Henrik Ling, though the Dutch practitioner Johann Georg Mezger applied the French terms to name the basic strokes. The term "Swedish massage" is actually only recognized in English- and Dutch-speaking countries, and in Hungary and Israel. Elsewhere the style is referred to as "classic massage". Clinical studies have found that Swedish massage can reduce chronic pain, fatigue, joint stiffness and improve function in patients with osteoarthritis of the knee. Thai massage Known in Thailand as Nuat phaen boran, meaning "ancient/traditional massage", traditional Thai massage is generally based on a combination of Indian and Chinese traditions of medicine. Thai massage combines both physical and energetic aspects. 
It is a deep, full-body massage progressing from the feet up, and focusing on sen or energy lines throughout the body, with the aim of clearing blockages in these lines, and thus stimulating the flow of blood and lymph throughout the body. It draws on yoga, acupressure and reflexology. Thai massage is a popular massage therapy that is used for the management of conditions such as musculoskeletal pain and fatigue. Thai massage involves a number of stretching movements that improve body flexibility, joint movement and also improve blood circulation throughout the body. In one study scientists found that Thai massage showed comparable efficacy as the painkiller ibuprofen in the reduction of joint pain caused by osteoarthritis (OA) of the knee. Traditional Chinese massage Massage of Chinese Medicine is known as An Mo (按摩) (pressing and rubbing) or Qigong Massage and is the foundation of Japan's Anma. Categories include Pu Tong An Mo (普通按摩) (general massage), Tui Na An Mo (推拿按摩) (pushing and grasping massage), Dian Xue An Mo (cavity pressing massage), and Qi An Mo (氣按摩 ) (energy massage). Tui na (推拿) focuses on pushing, stretching, and kneading muscles, and Zhi Ya(指壓) focuses on pinching and pressing at acupressure points. Technique such as friction and vibration are used as well. Trigger point therapy Sometimes confused with pressure point massage, this involves deactivating trigger points that may cause local pain or refer pain and other sensations, such as headaches, in other parts of the body. Manual pressure, vibration, injection, or other treatment is applied to these points to relieve myofascial pain. Trigger points were first discovered and mapped by Janet G. Travell (President Kennedy's physician) and David Simons. Trigger points have been photomicrographed and measured electrically and in 2007 a paper was presented showing images of Trigger Points using MRI. These points relate to dysfunction in the myoneural junction, also called neuromuscular junction (NMJ), in muscle, and therefore this technique is different from reflexology acupressure and pressure point massage. Tui na Tui na is a Chinese manual therapy technique that includes many different types of strokes, aimed to improve the flow of chi through the meridians. Watsu Watsu, developed by Harold Dull at Harbin Hot Springs, California, is a type of aquatic bodywork performed in near-body-temperature water, and characterized by continuous support by the practitioner and gentle movement, including rocking, stretching of limbs, and massage. The technique combines hydrotherapy floating and immersion with shiatsu and other massage techniques. Watsu is used as a form of aquatic therapy for deep relaxation and other therapeutic intent. Related forms include Waterdance, Healing Dance, and Jahara technique. Facilities, equipment, and supplies Massage tables and chairs Specialized massage tables and chairs are used to position recipients during massages. A typical commercial massage table has an easily cleaned, heavily padded surface, and horseshoe-shaped head support that allows the client to breathe easily while lying face down and can be stationary or portable, while home versions are often lighter weight or designed to fold away easily. An orthopedic pillow or bolster can be used to correct body positioning. Ergonomic chairs serve a similar function as a massage table. Chairs may be either stationary or portable models. 
Massage chairs are easier to transport than massage tables, and recipients do not need to disrobe to receive a chair massage. Due to these two factors, chair massage is often performed in settings such as corporate offices, outdoor festivals, shopping malls, and other public locations. Warm-water therapy pools Temperature-controlled warm-water therapy pools are used to perform aquatic bodywork. For example, Watsu requires a warm-water therapy pool that is approximately chest-deep (depending on the height of the therapist) and temperature-controlled to about 35 °C (95 °F). Dry-water massage tables A dry-water massage table uses jets of water to perform the massage of the patient's muscles. These tables differ from a Vichy shower in that the client usually stays dry. Two common types are one in which the client lies on a waterbed-like mattress which contains warm water and jets of water and air bubbles and one in which the client lies on a foam pad and is covered by a plastic sheet and is then sprayed by jets of warm water, similar to a Vichy shower. The first type is sometimes seen available for use in shopping centers for a small fee. Vichy showers A Vichy shower is a form of hydrotherapy that uses a series of shower nozzles that spray large quantities of water over the client while they lie in a shallow wet bed, similar to a massage table, but with drainage for the water. The nozzles may usually be adjusted for height, direction, and temperature to suit the patient's needs. Cremes, lotions, gels, and oils Many different types of massage cremes, lotions, gels, and oils are used to lubricate and moisturize the skin and reduce the friction between skin (hands of technician and client). Massage tools These instruments or devices are sometimes used during massages. Some tools are for use by individuals, others by the therapist. Tools used by massage therapists Instrument-assisted soft-tissue massage can deploy stainless-steel devices to manipulate tissue in a way that augments hands-on work. A body rock is a serpentine-shaped tool, usually carved out of stone. It is used to amplify the therapist' strength and focus pressure on certain areas. It can be used directly on the skin with a lubricant such as oil or corn starch or directly over clothing. Bamboo and rosewood tools are also commonly used. They originate from practices in southeast Asia, Thailand, Cambodia, and Burma. Some of them may be heated, oiled, or wrapped in cloth. Cupping massage is often carried out using plastic cups and a manual hand-pump to create the vacuum. The vacuum draws the soft tissue perpendicular to the skin, providing a tensile force, which can be left in one site or moved along the tissue during the massage. Tools used by both individuals and massagers Hand-held battery-operated massaging and vibrating instruments are available, including devices for massaging the scalp following a haircut. Vibrating massage pads come in a range of sizes, some with the option of heating. Vibrating massage chairs can provide an alternative for therapy at home. There is a widespread market in erotic massage instruments, including electric dildos and vibrators such as the massage wand. Medical and therapeutic use The main professionals that provide therapeutic massage are massage therapists, athletic trainers, physical therapists, and practitioners of many traditional Chinese and other eastern medicines. Massage practitioners work in a variety of medical settings and may travel to private residences or businesses. 
Contraindications to massage include deep vein thrombosis, bleeding disorders, taking blood thinners such as warfarin, damaged blood vessels, or weakened bones from cancer, osteoporosis, fractures, and fever. Beneficial effects Peer-reviewed medical research has shown that the benefits of massage include pain relief, reduced trait anxiety and depression, temporarily reduced blood pressure, heart rate, and state of anxiety. Additional testing has shown an immediate increase in, and expedited recovery periods for, muscle performance. Theories behind what massage might do include: enhanced skeletal muscle regrowth and remodeling, blocking nociception (gate control theory), activating the parasympathetic nervous system (which may stimulate the release of endorphins and serotonin, preventing fibrosis or scar tissue), increasing the flow of lymph, and improving sleep. Infant massage has been found to hold therapeutic benefits for premature infants and their parents. Premature infants are susceptible to low birth weight and decreased immune function; massage has been found to counter these effects, causing weight increase, reduced pain, and increased immune function. Administering infant massage also reduces stress and increased oxytocin in parental figures regardless of gender, and overall improves emotional attachment with their child. Massage research is hindered from reaching the gold standard of scientific inquiry, which includes placebo-controlled and double blind clinical trials. Developing a "sham" manual therapy for massage would be difficult since even light touch massage could have effects on a subject. It would also be difficult to find a subject that would not notice that they were getting less of a massage, and it would be impossible to blind the therapist. Massage research can employ randomized controlled trials, which are published in peer reviewed medical journals. This type of study could increase the credibility of the profession because it displays that purported therapeutic effects are reproducible. Single-dose effects Pain relief: Relief from pain due to musculoskeletal injuries and other causes is cited as a major benefit of massage. A 2015 Cochrane Review concluded that there is very little evidence that massage is an effective treatment for lower back pain. A meta-analysis conducted by scientists at the University of Illinois Urbana-Champaign failed to find a statistically significant reduction in pain immediately following treatment. Weak evidence suggests that massage may improve pain in the short term for people with acute, sub-acute, and chronic lower back pain. State anxiety: Massage has been shown to reduce state anxiety, a transient measure of anxiety in a given situation. Blood pressure and heart rate: Massage has been shown to temporarily reduce blood pressure and heart rate. Multiple-dose effects Pain relief: Massage may reduce pain experienced in the days or weeks after treatment. Trait anxiety: Massage has been shown to reduce trait anxiety; a person's general susceptibility to anxiety. Depression: Massage has been shown to reduce sub-clinical depression. Neuromuscular effects Massage has been shown to reduce neuromuscular excitability by measuring changes in the Hoffman's reflex (H-reflex) amplitude. A decrease in peak-to-peak H-reflex amplitude suggests a decrease in motoneuron excitability. Others explain, "H-reflex is considered to be the electrical analogue of the stretch reflex... and the reduction" is due to a decrease in spinal reflex excitability. 
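As a rough illustration of what the peak-to-peak measure above refers to, the short Python sketch below computes the peak-to-peak amplitude of two fabricated H-reflex EMG sweeps; the numbers are invented for illustration and are not taken from any of the studies cited.

```python
# Minimal sketch with invented numbers: reading a peak-to-peak H-reflex
# amplitude off an EMG sweep, the quantity compared before and during massage.
import numpy as np

def peak_to_peak(emg_mv):
    """Return the peak-to-peak amplitude (max minus min) of an EMG sweep in mV."""
    emg_mv = np.asarray(emg_mv, dtype=float)
    return emg_mv.max() - emg_mv.min()

# Hypothetical samples from the H-reflex window of an EMG recording (mV):
before_massage = [0.1, 0.8, 2.4, -1.9, -0.4, 0.0]
during_massage = [0.1, 0.5, 1.3, -1.0, -0.2, 0.0]

p2p_before = peak_to_peak(before_massage)  # 2.4 - (-1.9) = 4.3 mV
p2p_during = peak_to_peak(during_massage)  # 1.3 - (-1.0) = 2.3 mV

# A smaller peak-to-peak value during massage is read as reduced motoneuron
# (spinal reflex) excitability.
print(f"reduction in H-reflex amplitude: {100 * (1 - p2p_during / p2p_before):.0f}%")
```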
Field (2007) confirms that the inhibitory effects are due to deep tissue receptors and not superficial cutaneous receptors, as there was no decrease in H-reflex when looking at light fingertip pressure massage. It has been noted that "the receptors activated during massage are specific to the muscle being massaged," as other muscles did not produce a decrease in H-reflex amplitude. Global regulation and practice Because the art and science of massage is a globally diverse phenomenon, different legal jurisdictions sometimes recognize and license individuals with titles, while other areas do not. Examples are: Registered Massage Therapist (RMT) in Canada and New Zealand Certified Massage Therapist (CMT) in New Zealand Licensed Massage Practitioner (LMP) Licensed Massage Therapist (LMT) Licensed Massage and Bodywork Therapist (LMBT) in North Carolina Therapeutic Massage Therapist (TMT) in South Africa In some jurisdictions, practicing without a license is a crime. One such jurisdiction is Washington state, where any health professionals practicing without a license can be issued a fine and charged with a misdemeanor offense. Canada In regulated provinces massage therapists are known as Registered Massage Therapists, in Canada six provinces regulate massage therapy: British Columbia, Ontario, Newfoundland and Labrador, Prince Edward Island, Saskatchewan, and New Brunswick. Registered Massage Therapy in British Columbia is regulated by the College of Massage Therapists of British Columbia (CMTBC). Regulated provinces have, since 2012, established inter-jurisdiction competency standards. Quebec is not provincially regulated. Massage therapists may obtain a certification with one of the various associations operating. There is the Professional Association of Specialized Massage Therapists of Quebec, also named Mon Réseau Plus, which represents 6,300 massage therapists (including ortho therapist, natural therapists, and others), the Quebec Federation of massage therapists (FMQ), and the Association québécoise des thérapeutes naturals; however, none of these are regulated by provincial law. Canadian educational institutions undergo a formal accreditation process through the Canadian Massage Therapy Council for Accreditation (CMTCA). China Most types of massage, with the exception of some traditional Chinese medicine, are not regulated in China. Although illegal in China, some of the smaller massage parlors are sometimes linked to the sex industry and the government has taken a number of measures in recent times to curb this. In a nationwide crackdown known as the yellow sweep ("Yellow" in Mandarin Chinese refers to sexual activities or pornographic content), limitations on the design and operation of massage parlors have been placed, going so far as requiring identification from customers who visit massage establishments late at night and logging their visits with the local police. France France requires three years of study and two final exams in order to apply for a license. Germany In Germany, massage is regulated by the government on a federal and national level. Only someone who has completed 3,200 hours of training (theoretical and practical) can use the professional title "Masseur und Medizinischer Bademeister" 'Masseur and Medical Spa Therapist'. This person can prolong his training depending on the length of professional experience to a Physiotherapist (1 year to 18 months additional training). The Masseur is trained in Classical Massage, Myofascial Massage, Exercise, and Movement Therapy. 
During the training, they will study anatomy, physiology, pathology, gynecology, podiatry, psychiatry, psychology, surgery, dermiatry, and orthopedics. They are trained in Electrotherapy and Hydrotherapy. Hydrotherapy includes Kneipp, Wraps, underwater massage, therapeutic washing, Sauna, and Steambath. A small part of their training will include special forms of massage which are decided by the local college, for example, foot reflex zone massage, Thai Massage, etc. Finally, a graduate is allowed to treat patients under the direction of a doctor. Graduates are regulated by the professional body which regulates Physiotherapists. This includes restrictions on advertising and the oath of confidentiality to clients. India In India, massage therapy is licensed by The Department of Ayurveda, Yoga & Naturopathy, Unani, Siddha, and Homoeopathy (AYUSH) under the Ministry of Health and Family Welfare (India) in March 1995. Massage therapy is based on Ayurveda, the ancient medicinal system that evolved around 600 BC. In ayurveda, massage is part of a set of holistic medicinal practices, contrary to the independent massage system popular in some other systems. In Siddha, Tamil traditional medicine from south India, massage is termed as "Thokkanam" and is classified into nine types, each for a specific variety of diseases. Japan In Japan, shiatsu is regulated but oil massage and Thai massage are not. Prostitution in Japan is not heavily policed, and prostitutes posing as massage therapists in "fashion health" shops and "pink salons" are fairly common in the larger cities. Myanmar In Myanmar, massage is unregulated. However, it is necessary to apply for a spa license with the government to operate a massage parlor in major cities such as Yangon. Blind and visually impaired people can become masseurs, but they are not issued licenses. There are a few professional spa training schools in Myanmar but these training centers are not accredited by the government. Mexico In Mexico massage therapists, called sobadores, combine massage using oil or lotion with a form of acupuncture and faith. Sobadores are used to relieve digestive system problems as well as knee and back pain. Many of these therapists work out of the back of a truck, with just a curtain for privacy. By learning additional holistic healer's skills in addition to massage, the practitioner may become a curandero. In some jurisdictions, prostitution in Mexico is legal, and prostitutes are allowed to sell sexual massages. These businesses are often confined to a specific area of the city, such as the Zona Norte in Tijuana. New Zealand In New Zealand, massage is unregulated. There are two levels of registration with Massage New Zealand, the professional body for massage therapists within New Zealand, although neither of these levels are government recognized. Registration at the certified massage therapist level denotes competency in the practice of relaxation massage. Registration at the remedial massage therapist denotes competency in the practice of remedial or orthopedic massage. Both levels of registration are defined by agreed minimum competencies and minimum hours. South Africa In South Africa, massage is regulated, but enforcement is poor. The minimum legal requirement to be able to practice as a professional massage therapist is a two-year diploma in therapeutic massage and registration with the Allied Health Professions Council of SA (AHPCSA). 
The qualification includes 240 credits, about 80 case studies, and about 100 hours of community service. South Korea In South Korea, only blind and visually impaired people can become licensed masseurs. Thailand In Thailand, Thai massage is officially listed as one of the branches of traditional Thai medicine, recognized and regulated by the government. It is considered to be a medical discipline in its own right and is used for the treatment of a wide variety of ailments and conditions. Massage schools, centers, therapists, and practitioners are increasingly regulated by the Ministries of Education and Public Health in Thailand. United Kingdom To practice commercial massage or massage therapy in the UK, an ITEC or VTCT certificate must be obtained through training which includes Beauty and Spa Therapy, Hairdressing, Complementary Therapies, Sports & Fitness Training and Customer Service. Therapists with appropriate paperwork and insurance may join the Complementary and Natural Healthcare Council (CNHC), a voluntary, government-regulated professional register. Its key aim is to protect the public. In addition, there are many professional bodies that have a required minimum standard of education and hold relevant insurance policies, including the Federation of Holistic Therapists (FHT), the Complementary Therapists Association (CThA), and the Complementary Health Professionals (CHP). In contrast to the CNHC, these bodies exist to support therapists rather than clients. United States According to research done by the American Massage Therapy Association, as of 2012 there were between 280,000 and 320,000 massage therapists and massage school students in the United States. As of 2022, there are an estimated 872 state-approved massage training programs operating in the U.S. Most states have licensing requirements that must be met before a practitioner can use the title "massage therapist", and some states and municipalities require a license to practice any form of massage. If a state does not have any massage laws then a practitioner need not apply for a license with the state. Training programs in the US are typically 500 to 1,000 hours in total training time and can award a certificate, diploma, or degree depending on the particular school. Study will often include anatomy and physiology, kinesiology, massage techniques, first aid and CPR, business, ethical and legal issues, and hands-on practice, along with continuing education requirements if regulated. The Commission on Massage Therapy Accreditation (COMTA) is one of the organizations that works with massage schools in the U.S., and almost 300 schools are accredited through this agency. Forty-seven states, Puerto Rico, and the District of Columbia currently offer some type of credential to professionals in the massage and bodywork field—usually licensure, certification or registration. Forty-five states require some type of licensing for massage therapists. There are two nationally recognized tests to gain a massage therapy license, as well as state-specific exams. In the US, 38 states accept the now defunct National Certification Board for Therapeutic Massage and Bodywork's (NCBTMB) certification program as a basis for granting licenses either by rule or statute. The NCBTMB formerly offered the designation Nationally Certified in Therapeutic Massage and Bodywork (NCTMB) but now only offers its certificate program, Board Certification in Therapeutic Massage and Bodywork (BCTMB), which does not qualify for licensure.
Forty-three states, as well as Puerto Rico and the District of Columbia, accept the Massage & Bodywork Licensing Examination (MBLEx), administered by the Federation of State Massage Therapy Boards (FSMTB). Between 10% and 20% of towns or counties independently regulate the profession. These local regulations can range from prohibition on opposite sex massage, fingerprinting and venereal checks from a doctor, to prohibition on house calls because of concern regarding sale of sexual services. In the US, licensure is the highest level of regulation and this restricts anyone without a license from practicing massage therapy or calling themselves by that protected title. Certification allows only those who meet certain educational criteria to use the protected title and registration only requires a listing of therapists who apply and meet an educational requirement. In the US, most certifications are locally based. A massage therapist may be certified, but not licensed. Licensing requirements vary per state, and often require additional criteria be met in addition to attending an accredited massage therapy school and passing a required state-specified exam. Only Kansas, Minnesota, and Wyoming, California and Vermont do not require a license or a certification at the state level. Some states allow license reciprocity, where licensed massage therapists who relocate can relatively easily obtain a license in their new state. In New York State in 2024, a man was arrested and charged with three counts of third-degree Sexual Abuse and three counts of Forcible Touching, as well as New York State Education Department Law violations, for providing massage therapy services without a New York State license to do so. In 1997 there were an estimated 114 million visits to massage therapists in the US. Massage therapy is the most used type of alternative medicine in hospitals in the United States. Between July 2010 and July 2011 roughly 38 million adult Americans (18 percent) had a massage at least once. People state that they use massage because they believe that it relieves pain from musculoskeletal injuries and other causes of pain, reduces stress and enhances relaxation, rehabilitates sports injuries, decreases feelings of anxiety and depression, and increases general well-being. In a poll of 25–35-year-olds, 79% said they would like their health insurance plan to cover massage. In 2006 Duke University Health System opened up a center to integrate medical disciplines with CAM disciplines such as massage therapy and acupuncture. There were 15,500 spas in the United States in 2007, with about two-thirds of the visitors being women. The number of visits rose from 91 million in 1999 to 136 million in 2003, generating a revenue that equals $11 billion. Job outlook for massage therapists was also projected to grow at 20% between 2010 and 2020 by the Bureau of Labor Statistics, faster than the average.
Biology and health sciences
Treatments
Health
43946
https://en.wikipedia.org/wiki/Biofilm
Biofilm
A biofilm is a syntrophic community of microorganisms in which cells stick to each other and often also to a surface. These adherent cells become embedded within a slimy extracellular matrix that is composed of extracellular polymeric substances (EPSs). The cells within the biofilm produce the EPS components, which are typically a polymeric combination of extracellular polysaccharides, proteins, lipids and DNA. Because they have a three-dimensional structure and represent a community lifestyle for microorganisms, they have been metaphorically described as "cities for microbes". Biofilms may form on living (biotic) or non-living (abiotic) surfaces and can be common in natural, industrial, and hospital settings. They may constitute a microbiome or be a portion of it. The microbial cells growing in a biofilm are physiologically distinct from planktonic cells of the same organism, which, by contrast, are single cells that may float or swim in a liquid medium. Biofilms can form on the teeth of most animals as dental plaque, where they may cause tooth decay and gum disease. Microbes form a biofilm in response to a number of different factors, which may include cellular recognition of specific or non-specific attachment sites on a surface, nutritional cues, or in some cases, by exposure of planktonic cells to sub-inhibitory concentrations of antibiotics. A cell that switches to the biofilm mode of growth undergoes a phenotypic shift in behavior in which large suites of genes are differentially regulated. A biofilm may also be considered a hydrogel, which is a complex polymer that contains many times its dry weight in water. Biofilms are not just bacterial slime layers but biological systems; the bacteria organize themselves into a coordinated functional community. Biofilms can attach to a surface such as a tooth or rock, and may include a single species or a diverse group of microorganisms. Subpopulations of cells within the biofilm differentiate to perform various activities for motility, matrix production, and sporulation, supporting the overall success of the biofilm. The biofilm bacteria can share nutrients and are sheltered from harmful factors in the environment, such as desiccation, antibiotics, and a host body's immune system. A biofilm usually begins to form when a free-swimming, planktonic bacterium attaches to a surface. Origin and formation Origin of biofilms Biofilms are thought to have arisen during primitive Earth as a defense mechanism for prokaryotes, as the conditions at that time were too harsh for their survival. They can be found very early in Earth's fossil records (about 3.25 billion years ago) as both Archaea and Bacteria, and commonly protect prokaryotic cells by providing them with homeostasis, encouraging the development of complex interactions between the cells in the biofilm. Formation of biofilms The formation of a biofilm begins with the attachment of free-floating microorganisms to a surface. The first colonist bacteria of a biofilm may adhere to the surface initially by the weak van der Waals forces and hydrophobic effects. If the colonists are not immediately separated from the surface, they can anchor themselves more permanently using cell adhesion structures such as pili. A unique group of Archaea that inhabit anoxic groundwater have similar structures called hami. Each hamus is a long tube with three hook attachments that are used to attach to each other or to a surface, enabling a community to develop. 
The hyperthermophilic archaeon Pyrobaculum calidifontis produces bundling pili that are homologous to the bacterial TasA filaments, a major component of the extracellular matrix in bacterial biofilms, which contribute to biofilm stability. TasA homologs are encoded by many other archaea, suggesting mechanistic similarities and an evolutionary connection between bacterial and archaeal biofilms. Hydrophobicity can also affect the ability of bacteria to form biofilms. Bacteria with increased hydrophobicity have reduced repulsion between the substratum and the bacterium. Some bacterial species are not able to attach to a surface on their own successfully due to their limited motility, but are instead able to anchor themselves to the matrix or directly to other, earlier bacterial colonists. Non-motile bacteria cannot recognize surfaces or aggregate together as easily as motile bacteria. During surface colonization bacterial cells are able to communicate using quorum sensing (QS) products such as N-acyl homoserine lactone (AHL). Once colonization has begun, the biofilm grows by a combination of cell division and recruitment. Polysaccharide matrices typically enclose bacterial biofilms. The matrix exopolysaccharides can trap QS autoinducers within the biofilm to prevent predator detection and ensure bacterial survival. In addition to the polysaccharides, these matrices may also contain material from the surrounding environment, including but not limited to minerals, soil particles, and blood components, such as erythrocytes and fibrin. The final stage of biofilm formation is known as development, and is the stage in which the biofilm is established and may only change in shape and size. The development of a biofilm may allow for an aggregate cell colony to be increasingly tolerant or resistant to antibiotics. Cell-cell communication or quorum sensing has been shown to be involved in the formation of biofilm in several bacterial species. Development Biofilms are the product of a microbial developmental process. The process is commonly summarized in five major stages of biofilm development: initial (reversible) attachment, irreversible attachment, maturation I, maturation II, and dispersion. Dispersal Dispersal of cells from the biofilm colony is an essential stage of the biofilm life cycle. Dispersal enables biofilms to spread and colonize new surfaces. Enzymes that degrade the biofilm extracellular matrix, such as dispersin B and deoxyribonuclease, may contribute to biofilm dispersal. Enzymes that degrade the biofilm matrix may be useful as anti-biofilm agents. Evidence has shown that a fatty acid messenger, cis-2-decenoic acid, is capable of inducing dispersion and inhibiting growth of biofilm colonies. Secreted by Pseudomonas aeruginosa, this compound induces cyclo heteromorphic cells in several species of bacteria and the yeast Candida albicans. Nitric oxide has also been shown to trigger the dispersal of biofilms of several bacterial species at sub-toxic concentrations. Nitric oxide has potential as a treatment for patients who have chronic infections caused by biofilms. It was generally assumed that cells dispersed from biofilms immediately go into the planktonic growth phase. However, studies have shown that the physiology of dispersed cells from Pseudomonas aeruginosa biofilms is highly different from that of planktonic and biofilm cells. Hence, the dispersal process is a unique stage during the transition from biofilm to planktonic lifestyle in bacteria.
Dispersed cells are found to be highly virulent against macrophages and Caenorhabditis elegans, but highly sensitive to iron stress, as compared with planktonic cells. Furthermore, Pseudomonas aeruginosa biofilms undergo distinct spatiotemporal dynamics during biofilm dispersal or disassembly, with contrasting consequences for recolonization and disease dissemination. Biofilm dispersal induced bacteria to activate dispersal genes and actively depart from biofilms as single cells at consistent velocities, but these cells could not recolonize fresh surfaces. In contrast, biofilm disassembly by degradation of a biofilm exopolysaccharide released immotile aggregates at high initial velocities, enabling the bacteria to recolonize fresh surfaces and cause infections in hosts efficiently. Hence, biofilm dispersal is more complex than previously thought, and bacterial populations adopting distinct behaviors after biofilm departure may be key to the survival of bacterial species and the dissemination of diseases. Properties Biofilms are usually found on solid substrates submerged in or exposed to an aqueous solution, although they can form as floating mats on liquid surfaces and also on the surface of leaves, particularly in high-humidity climates. Given sufficient resources for growth, a biofilm will quickly grow to be macroscopic (visible to the naked eye). Biofilms can contain many different types of microorganism, e.g. bacteria, archaea, protozoa, fungi and algae; each group performs specialized metabolic functions. However, some organisms will form single-species films under certain conditions. The social structure (cooperation/competition) within a biofilm depends highly on the different species present. Extracellular matrix The EPS matrix consists of exopolysaccharides, proteins and nucleic acids. A large proportion of the EPS is more or less strongly hydrated; however, hydrophobic EPS also occurs, one example being cellulose, which is produced by a range of microorganisms. This matrix encases the cells within it and facilitates communication among them through biochemical signals as well as gene exchange. The EPS matrix also traps extracellular enzymes and keeps them in close proximity to the cells. Thus, the matrix represents an external digestion system and allows for stable synergistic microconsortia of different species. Some biofilms have been found to contain water channels that help distribute nutrients and signalling molecules. This matrix is strong enough that, under certain conditions, biofilms can become fossilized (stromatolites). Bacteria living in a biofilm usually have significantly different properties from free-floating bacteria of the same species, as the dense and protected environment of the film allows them to cooperate and interact in various ways. One benefit of this environment is increased resistance to detergents and antibiotics, as the dense extracellular matrix and the outer layer of cells protect the interior of the community. In some cases antibiotic resistance can be increased up to 5,000 times. Lateral gene transfer is often facilitated within bacterial and archaeal biofilms and can lead to a more stable biofilm structure. Extracellular DNA is a major structural component of many different microbial biofilms. Enzymatic degradation of extracellular DNA can weaken the biofilm structure and release microbial cells from the surface. However, biofilms are not always less susceptible to antibiotics.
For instance, the biofilm form of Pseudomonas aeruginosa has no greater resistance to antimicrobials than do stationary-phase planktonic cells, although when the biofilm is compared to logarithmic-phase planktonic cells, the biofilm does have greater resistance to antimicrobials. This resistance to antibiotics in both stationary-phase cells and biofilms may be due to the presence of persister cells. Habitats Biofilms are ubiquitous in organic life. Nearly every species of microorganism has mechanisms by which it can adhere to surfaces and to other cells. Biofilms will form on virtually every non-shedding surface in non-sterile aqueous or humid environments. Biofilms can grow in the most extreme environments, from the extremely hot, briny waters of hot springs, which range from very acidic to very alkaline, to frozen glaciers. Biofilms can be found on rocks and pebbles at the bottoms of most streams or rivers and often form on the surfaces of stagnant pools of water. Biofilms are important components of food chains in rivers and streams and are grazed by the aquatic invertebrates upon which many fish feed. Biofilms are found on the surface of and inside plants. They can either contribute to crop disease or, as in the case of nitrogen-fixing rhizobia on root nodules, exist symbiotically with the plant. Examples of crop diseases related to biofilms include citrus canker, Pierce's disease of grapes, and bacterial spot of plants such as peppers and tomatoes. Percolating filters Percolating filters in sewage treatment works are highly effective removers of pollutants from settled sewage liquor. They work by trickling the liquid over a bed of hard material which is designed to have a very large surface area. A complex biofilm develops on the surface of the medium, which absorbs, adsorbs and metabolises the pollutants. The biofilm grows rapidly, and when it becomes too thick to retain its grip on the medium it washes off and is replaced by newly grown film. The washed-off ("sloughed") film is settled out of the liquid stream to leave a highly purified effluent. Slow sand filter Slow sand filters are used in water purification for treating raw water to produce a potable product. They work through the formation of a biofilm called the hypogeal layer or Schmutzdecke in the top few millimetres of the fine sand layer. The Schmutzdecke is formed in the first 10–20 days of operation and consists of bacteria, fungi, protozoa, rotifers and a range of aquatic insect larvae. As an epigeal biofilm ages, more algae tend to develop and larger aquatic organisms may be present, including some bryozoa, snails and annelid worms. The surface biofilm is the layer that provides the effective purification in potable water treatment, the underlying sand providing the support medium for this biological treatment layer. As water passes through the hypogeal layer, particles of foreign matter are trapped in the mucilaginous matrix and soluble organic material is adsorbed. The contaminants are metabolised by the bacteria, fungi and protozoa. The water produced from an exemplary slow sand filter is of excellent quality, with a 90–99% reduction in bacterial cell count. Rhizosphere Plant-beneficial microbes can be categorized as plant growth-promoting rhizobacteria (PGPR). These plant growth-promoters colonize the roots of plants and provide a wide range of beneficial functions for their host, including nitrogen fixation, pathogen suppression, anti-fungal properties, and the breakdown of organic materials.
One of these functions is defense against pathogenic, soil-borne bacteria and fungi by way of induced systemic resistance (ISR) or induced systemic responses triggered by pathogenic microbes (pathogen-induced systemic acquired resistance). Plant exudates act as chemical signals for host-specific bacteria to colonize. Rhizobacterial colonization steps include attraction, recognition, adherence, colonization, and growth. Bacteria that have been shown to be beneficial and to form biofilms include Bacillus, Pseudomonas, and Azospirillum. Biofilms in the rhizosphere often result in pathogen- or plant-induced systemic resistance. Molecular properties on the surface of the bacterium cause an immune response in the plant host. These microbe-associated molecules interact with receptors on the surface of plant cells and activate a biochemical response that is thought to include several different genes at a number of loci. Several other signaling molecules have been linked to both induced systemic responses and pathogen-induced systemic responses, such as jasmonic acid and ethylene. Cell envelope components such as bacterial flagella and lipopolysaccharides are recognized by plant cells as components of pathogens. Certain iron metabolites produced by Pseudomonas have also been shown to create an induced systemic response. This function of the biofilm helps plants build stronger resistance to pathogens. Plants that have been colonized by biofilm-forming PGPR have gained systemic resistance and are primed for defense against pathogens. This means that the genes necessary for the production of proteins that work towards defending the plant against pathogens have been expressed, and the plant has a "stockpile" of compounds to release to fight off pathogens. A primed defense system is much faster in responding to pathogen-induced infection and may be able to deflect pathogens before they are able to establish themselves. Plants increase the production of lignin, reinforcing cell walls and making it difficult for pathogens to penetrate into the cell, while also cutting off nutrients to already infected cells, effectively halting the invasion. They produce antimicrobial compounds such as phytoalexins, chitinases, and proteinase inhibitors, which prevent the growth of pathogens. These functions of disease suppression and pathogen resistance ultimately lead to an increase in agricultural production and a decrease in the use of chemical pesticides, herbicides, and fungicides, because there is a reduced amount of crop loss due to disease. Induced systemic resistance and pathogen-induced systemic acquired resistance are both potential functions of biofilms in the rhizosphere, and should be taken into consideration when applied to modern agricultural practices because of their effect on disease suppression without the use of dangerous chemicals. Mammalian gut Studies in 2003 discovered that the immune system supports biofilm development in the large intestine. This was supported mainly by the fact that the two molecules most abundantly produced by the immune system also support biofilm production and are associated with the biofilms developed in the gut. This is especially important because the appendix holds a large amount of these bacterial biofilms. This discovery helps to clarify the possible function of the appendix and supports the idea that the appendix can help reinoculate the gut with beneficial gut flora.
However, modified or disrupted states of biofilms in the gut have been connected to diseases such as inflammatory bowel disease and colorectal cancer. Human environment In the human environment, biofilms can grow in showers very easily since they provide a moist and warm environment for them to thrive. Mold biofilms on ceilings may form due to roof leaks. They can form inside water and sewage pipes and cause clogging and corrosion. On floors and counters, they can make sanitation difficult in food preparation areas. In soil, they can cause bioclogging. In cooling- or heating-water systems, they are known to reduce heat transfer. Biofilms in marine engineering systems, such as pipelines of the offshore oil and gas industry, can lead to substantial corrosion problems. Corrosion is mainly due to abiotic factors; however, at least 20% of corrosion is caused by microorganisms that are attached to the metal subsurface (i.e., microbially influenced corrosion). Ship fouling Bacterial adhesion to boat hulls serves as the foundation for biofouling of seagoing vessels. Once a film of bacteria forms, it is easier for other marine organisms such as barnacles to attach. Such fouling can reduce maximum vessel speed by up to 20%, prolonging voyages and consuming fuel. Time in dry dock for refitting and repainting reduces the productivity of shipping assets, and the useful life of ships is also reduced due to corrosion and mechanical removal (scraping) of marine organisms from ships' hulls. Stromatolites Stromatolites are layered accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains by microbial biofilms, especially of cyanobacteria. Stromatolites include some of the most ancient records of life on Earth, and are still forming today. Dental plaque Within the human body, biofilms are present on the teeth as dental plaque, where they may cause tooth decay and gum disease. These biofilms can either be in an uncalcified state that can be removed by dental instruments, or a calcified state which is more difficult to remove. Removal techniques can also include antimicrobials. Dental plaque is an oral biofilm that adheres to the teeth and consists of many species of both bacteria and fungi (such as Streptococcus mutans and Candida albicans), embedded in salivary polymers and microbial extracellular products. The accumulation of microorganisms subjects the teeth and gingival tissues to high concentrations of bacterial metabolites which results in dental disease. Biofilm on the surface of teeth is frequently subject to oxidative stress and acid stress. Dietary carbohydrates can cause a dramatic decrease in pH in oral biofilms to values of 4 and below (acid stress). A pH of 4 at body temperature of 37 °C causes depurination of DNA, leaving apurinic (AP) sites in DNA, especially loss of guanine. Dental plaque biofilm can result in dental caries if it is allowed to develop over time. An ecologic shift away from balanced populations within the dental biofilm is driven by certain (cariogenic) microbiological populations beginning to dominate when the environment favors them. The shift to an acidogenic, aciduric, and cariogenic microbiological population develops and is maintained by frequent consumption of fermentable dietary carbohydrate. 
The resulting activity shift in the biofilm (and the resulting acid production within the biofilm, at the tooth surface) is associated with an imbalance of demineralization over remineralization, leading to net mineral loss within dental hard tissues (enamel and then dentin), the symptom being a carious lesion, or cavity. By preventing the dental plaque biofilm from maturing or by returning it to a non-cariogenic state, dental caries can be prevented and arrested. This can be achieved through the behavioral step of reducing the supply of fermentable carbohydrates (i.e. sugar intake) and frequent removal of the biofilm (i.e., toothbrushing). Intercellular communication A peptide pheromone quorum sensing signaling system in S. mutans includes the competence stimulating peptide (CSP) that controls genetic competence. Genetic competence is the ability of a cell to take up DNA released by another cell. Competence can lead to genetic transformation, a form of sexual interaction, favored under conditions of high cell density and/or stress where there is maximal opportunity for interaction between the competent cell and the DNA released from nearby donor cells. This system is optimally expressed when S. mutans cells reside in an actively growing biofilm. Biofilm-grown S. mutans cells are genetically transformed at a rate 10- to 600-fold higher than S. mutans growing as free-floating planktonic cells suspended in liquid. When the biofilm, containing S. mutans and related oral streptococci, is subjected to acid stress, the competence regulon is induced, leading to resistance to being killed by acid. As pointed out by Michod et al., transformation in bacterial pathogens likely provides for effective and efficient recombinational repair of DNA damage. It appears that S. mutans can survive the frequent acid stress in oral biofilms, in part, through the recombinational repair provided by competence and transformation. Predator-prey interactions Predator-prey interactions between biofilms and bacterivores, such as the soil-dwelling nematode Caenorhabditis elegans, have been extensively studied. Via the production of a sticky matrix and the formation of aggregates, Yersinia pestis biofilms can prevent feeding by obstructing the mouth of C. elegans. Moreover, Pseudomonas aeruginosa biofilms can impede the slithering motility of C. elegans, termed the 'quagmire phenotype', trapping C. elegans within the biofilms and preventing the nematodes from exploring and feeding on susceptible biofilms. This significantly reduces the ability of the predator to feed and reproduce, thereby promoting the survival of the biofilms. Pseudomonas aeruginosa biofilms can also mask their chemical signatures, reducing the diffusion of quorum-sensing molecules into the environment and preventing their detection by C. elegans. Taxonomic diversity Many different bacteria form biofilms, including gram-positive species (e.g. Bacillus spp., Listeria monocytogenes, Staphylococcus spp., and lactic acid bacteria, including Lactobacillus plantarum and Lactococcus lactis) and gram-negative species (e.g. Escherichia coli or Pseudomonas aeruginosa). Cyanobacteria also form biofilms in aquatic environments. Biofilms are formed by bacteria that colonize plants, e.g. Pseudomonas putida, Pseudomonas fluorescens, and related pseudomonads, which are common plant-associated bacteria found on leaves, roots, and in the soil; the majority of their natural isolates form biofilms.
Several nitrogen-fixing symbionts of legumes, such as Rhizobium leguminosarum and Sinorhizobium meliloti, form biofilms on legume roots and other inert surfaces. Along with bacteria, biofilms are also generated by archaea and by a range of eukaryotic organisms, including fungi (e.g. Cryptococcus laurentii) and microalgae. Among microalgae, the diatoms are one of the main progenitors of biofilms; they colonise both fresh and marine environments worldwide. For other species in disease-associated biofilms and biofilms arising from eukaryotes, see below. Infectious diseases Biofilms have been found to be involved in a wide variety of microbial infections in the body, by one estimate 80% of all infections. Infectious processes in which biofilms have been implicated include common problems such as bacterial vaginosis, urinary tract infections, catheter infections, middle-ear infections, formation of dental plaque, gingivitis, and coating of contact lenses, and less common but more lethal processes such as endocarditis, infections in cystic fibrosis, and infections of permanent indwelling devices such as joint prostheses, heart valves, and intervertebral discs. The first visual evidence of a biofilm was recorded after spine surgery. It was found that, in the absence of clinical presentation of infection, impregnated bacteria could form a biofilm around an implant, and this biofilm can remain undetected via contemporary diagnostic methods, including swabbing. Implant biofilm is frequently present in "aseptic" pseudarthrosis cases. Furthermore, it has been noted that bacterial biofilms may impair cutaneous wound healing and reduce topical antibacterial efficiency in healing or treating infected skin wounds. The diversity of P. aeruginosa cells within a biofilm is thought to make it harder to treat the infected lungs of people with cystic fibrosis. Early detection of biofilms in wounds is crucial to successful chronic wound management. Although many techniques have been developed to identify planktonic bacteria in viable wounds, few have been able to quickly and accurately identify bacterial biofilms. Future studies are needed to find means of identifying and monitoring biofilm colonization at the bedside to permit timely initiation of treatment. It has been shown that biofilms are present on the removed tissue of 80% of patients undergoing surgery for chronic sinusitis. The patients with biofilms were shown to have been denuded of cilia and goblet cells, unlike the controls without biofilms, who had normal cilia and goblet cell morphology. Biofilms were also found in samples from two of the 10 healthy controls. The species of bacteria from intraoperative cultures did not correspond to the bacterial species in the biofilm on the respective patient's tissue. In other words, the cultures were negative even though the bacteria were present. New staining techniques are being developed to differentiate bacterial cells growing in living animals, e.g. from tissues with allergic inflammation. Research has shown that sub-therapeutic levels of β-lactam antibiotics induce biofilm formation in Staphylococcus aureus. This sub-therapeutic level of antibiotic may result from the use of antibiotics as growth promoters in agriculture, or during the normal course of antibiotic therapy. The biofilm formation induced by low-level methicillin was inhibited by DNase, suggesting that sub-therapeutic levels of antibiotic also induce extracellular DNA release.
Moreover, from an evolutionary point of view, creating a tragedy of the commons in pathogenic microbes may provide advanced therapeutic approaches for chronic biofilm infections, using genetically engineered invasive 'cheaters' that can invade wild-type 'cooperator' populations of pathogenic bacteria until the cooperator population, or the overall population of cooperators and cheaters, goes extinct. Pseudomonas aeruginosa P. aeruginosa represents a commonly used biofilm model organism since it is involved in different types of biofilm-associated chronic infections. Examples of such infections include chronic wounds, chronic otitis media, chronic prostatitis and chronic lung infections in cystic fibrosis (CF) patients. About 80% of CF patients have chronic lung infection, caused mainly by P. aeruginosa growing in non-surface-attached biofilms surrounded by polymorphonuclear leukocytes (PMNs). The infection remains present despite aggressive antibiotic therapy and is a common cause of death in CF patients, due to constant inflammatory damage to the lungs. In patients with CF, one therapy for treating early biofilm development is to employ DNase to structurally weaken the biofilm. Biofilm formation by P. aeruginosa, along with other bacteria, is found in 90% of chronic wound infections, which leads to poor healing and a high cost of treatment, estimated at more than US$25 billion every year in the United States. In order to minimize P. aeruginosa infection, host epithelial cells secrete antimicrobial peptides, such as lactoferrin, to prevent the formation of biofilms. Streptococcus pneumoniae Streptococcus pneumoniae is the main cause of community-acquired pneumonia and meningitis in children and the elderly, and of sepsis in HIV-infected persons. When S. pneumoniae grows in biofilms, genes are specifically expressed that respond to oxidative stress and induce competence. Formation of a biofilm depends on competence stimulating peptide (CSP). CSP also functions as a quorum-sensing peptide. It not only induces biofilm formation, but also increases virulence in pneumonia and meningitis. It has been proposed that competence development and biofilm formation are an adaptation of S. pneumoniae to survive the defenses of the host. In particular, the host's polymorphonuclear leukocytes produce an oxidative burst to defend against the invading bacteria, and this response can kill bacteria by damaging their DNA. Competent S. pneumoniae in a biofilm have the survival advantage that they can more easily take up transforming DNA from nearby cells in the biofilm to use for recombinational repair of oxidative damage in their DNA. Competent S. pneumoniae can also secrete an enzyme (murein hydrolase) that destroys non-competent cells (fratricide), causing DNA to be released into the surrounding medium for potential use by the competent cells. The insect antimicrobial peptide cecropin A can destroy planktonic and sessile biofilm-forming uropathogenic E. coli cells, either alone or when combined with the antibiotic nalidixic acid, synergistically clearing infection in vivo (in the insect host Galleria mellonella) without off-target cytotoxicity. The multi-target mechanism of action involves outer membrane permeabilization followed by biofilm disruption triggered by the inhibition of efflux pump activity and interactions with extracellular and intracellular nucleic acids. Escherichia coli Escherichia coli biofilms are responsible for many intestinal infectious diseases. The extraintestinal pathogenic group of E.
coli (ExPEC) is the dominant bacterial group attacking the urinary system, causing urinary tract infections. Biofilms formed by these pathogenic E. coli are hard to eradicate due to the complexity of their aggregation structure, and they contribute significantly to aggressive medical complications, increased hospitalization rates, and higher treatment costs. The development of E. coli biofilms is a leading cause of urinary tract infections (UTIs) in hospitals through its contribution to medical device-associated infections. Catheter-associated urinary tract infections (CAUTIs) represent the most common hospital-acquired infection, due to the formation of pathogenic E. coli biofilms inside catheters. Staphylococcus aureus The pathogen Staphylococcus aureus can attack the skin and lungs, leading to skin infections and pneumonia. Moreover, the biofilm infection network of S. aureus plays a critical role in preventing immune cells, such as macrophages, from eliminating and destroying bacterial cells. Furthermore, biofilm formation by bacteria such as S. aureus confers not only resistance against antibiotic medication but also internal resistance toward antimicrobial peptides (AMPs), preventing inhibition of the pathogen and maintaining its survival. Serratia marcescens Serratia marcescens is a fairly common opportunistic pathogen that can form biofilms on various surfaces, including medical devices such as catheters and implants, as well as in natural environments like soil and water. The formation of biofilms by S. marcescens is a serious concern because of its ability to adhere to and colonize surfaces, protecting itself from host immune responses and antimicrobial agents. This makes infections caused by S. marcescens challenging to treat, particularly in hospitals, where the bacterium can cause severe infections. Research suggests that biofilm formation by S. marcescens is a process controlled by both nutrient cues and the quorum-sensing system. Quorum sensing influences the bacterium's ability to adhere to surfaces and establish mature biofilms, whereas the availability of specific nutrients can enhance or inhibit biofilm development. S. marcescens creates biofilms that ultimately develop into a highly porous, thread-like structure composed of chains of cells, filaments, and cell clusters. Research has shown that S. marcescens biofilms exhibit complex structural organization, including the formation of microcolonies and channels that facilitate nutrient and waste exchange. The production of extracellular polymeric substances (EPS) is a key factor in biofilm development, contributing to the bacterium's adhesion and resistance to antimicrobial agents. In addition to its role in healthcare-associated infections, S. marcescens biofilms have been implicated in the deterioration of industrial equipment and processes. For example, biofilm growth in cooling towers can lead to biofouling and reduced efficiency. Efforts to control and prevent biofilm formation by S. marcescens involve the use of antimicrobial coatings on medical devices, the development of targeted biofilm disruptors, and improved sterilization protocols. Further research into the molecular mechanisms governing S. marcescens biofilm formation and persistence is crucial for developing effective strategies to combat its associated risks. Indole compounds have also been studied as potential protection against biofilm formation.
Uses and impact In medicine It is suggested that around two-thirds of bacterial infections in humans involve biofilms. Infections associated with the biofilm growth usually are challenging to eradicate. This is mostly due to the fact that mature biofilms display antimicrobial tolerance, and immune response evasions. Biofilms often form on the inert surfaces of implanted devices such as catheters, prosthetic cardiac valves and intrauterine devices. Some of the most difficult infections to treat are those associated with the use of medical devices. The rapidly expanding worldwide industry for biomedical devices and tissue engineering related products is already at $180 billion per year, yet this industry continues to suffer from microbial colonization. No matter the sophistication, microbial infections can develop on all medical devices and tissue engineering constructs. 60-70% of hospital-acquired infections are associated with the implantation of a biomedical device. This leads to 2 million cases annually in the U.S., costing the healthcare system over $5 billion in additional healthcare expenses. The level of antibiotic resistance in a biofilm is much greater than that of non-biofilm bacteria, and can be as much as 5,000 times greater. The extracellular matrix of biofilm is considered one of the leading factors that can reduce the penetration of antibiotics into a biofilm structure and contributes to antibiotic resistance. Further, it has been demonstrated that the evolution of resistance to antibiotics may be affected by the biofilm lifestyle. Bacteriophage therapy can disperse the biofilm generated by antibiotic-resistant bacteria. It has been shown that the introduction of a small current of electricity to the liquid surrounding a biofilm, together with small amounts of antibiotic can reduce the level of antibiotic resistance to levels of non-biofilm bacteria. This is termed the bioelectric effect. The application of a small DC current on its own can cause a biofilm to detach from its surface. A study showed that the type of current used made no difference to the bioelectric effect. In industry Biofilms can also be harnessed for constructive purposes. For example, many sewage treatment plants include a secondary treatment stage in which waste water passes over biofilms grown on filters, which extract and digest organic compounds. In such biofilms, bacteria are mainly responsible for removal of organic matter (BOD), while protozoa and rotifers are mainly responsible for removal of suspended solids (SS), including pathogens and other microorganisms. Slow sand filters rely on biofilm development in the same way to filter surface water from lake, spring or river sources for drinking purposes. What is regarded as clean water is effectively a waste material to these microcellular organisms. Biofilms can help eliminate petroleum oil from contaminated oceans or marine systems. The oil is eliminated by the hydrocarbon-degrading activities of communities of hydrocarbonoclastic bacteria (HCB). Biofilms are used in microbial fuel cells (MFCs) to generate electricity from a variety of starting materials, including complex organic waste and renewable biomass. Biofilms are also relevant for the improvement of metal dissolution in bioleaching industry, and aggregation of microplastics pollutants for convenient removal from the environment. Food industry Biofilms have become problematic in several food industries due to the ability to form on plants and during industrial processes. 
Bacteria can survive long periods of time in water, animal manure, and soil, causing biofilm formation on plants or in processing equipment. The buildup of biofilms can affect the heat flow across a surface and increase surface corrosion and the frictional resistance of fluids. These effects can lead to a loss of energy in a system and an overall loss of products. Along with these economic problems, biofilm formation on food poses a health risk to consumers because it can make the food more resistant to disinfectants. As a result, from 1996 to 2010 the Centers for Disease Control and Prevention estimated 48 million foodborne illnesses per year. Biofilms have been connected to about 80% of bacterial infections in the United States. In produce, microorganisms attach to the surfaces and biofilms develop internally. During the washing process, biofilms resist sanitization and allow bacteria to spread across the produce, especially via kitchen utensils. This problem is also found in ready-to-eat foods, because the foods go through only limited cleaning procedures before consumption. Due to the perishability of dairy products and limitations in cleaning procedures, which result in the buildup of bacteria, dairy is susceptible to biofilm formation and contamination. The bacteria can spoil the products more readily, and contaminated products pose a health risk to consumers. One species of bacteria that can be found in various industries and is a major cause of foodborne disease is Salmonella. Large amounts of Salmonella contamination can be found in the poultry processing industry, as about 50% of Salmonella strains can produce biofilms on poultry farms. Salmonella increases the risk of foodborne illnesses when poultry products are not cleaned and cooked correctly. Salmonella is also found in the seafood industry, where biofilms form from seafood-borne pathogens on the seafood itself as well as in water. Shrimp products are commonly affected by Salmonella because of unhygienic processing and handling techniques. The preparation practices of shrimp and other seafood products can allow for bacterial buildup on the products. New forms of cleaning procedures are being tested to reduce biofilm formation in these processes, which will lead to safer and more productive food processing industries. These new forms of cleaning procedures also have a profound effect on the environment, often releasing toxic gases into groundwater reservoirs. As a response to the aggressive methods employed in controlling biofilm formation, there are a number of novel technologies and chemicals under investigation that can prevent either the proliferation or the adhesion of biofilm-secreting microbes. Recently proposed biomolecules with marked anti-biofilm activity include a range of metabolites such as bacterial rhamnolipids and even plant- and animal-derived alkaloids. In aquaculture In shellfish and algal aquaculture, biofouling microbial species tend to block nets and cages and ultimately outcompete the farmed species for space and food. Bacterial biofilms start the colonization process by creating microenvironments that are more favorable for biofouling species. In the marine environment, biofilms can reduce the hydrodynamic efficiency of ships and propellers, lead to pipeline blockage and sensor malfunction, and increase the weight of appliances deployed in seawater. Numerous studies have shown that biofilms can be a reservoir for potentially pathogenic bacteria in freshwater aquaculture.
Moreover, biofilms are important in establishing infections on fish. As mentioned previously, biofilms can be difficult to eliminate even when antibiotics or chemicals are used in high doses. The role that biofilms play as reservoirs of bacterial fish pathogens has not been explored in detail, but it certainly deserves study. Eukaryotic Along with bacteria, biofilms are often initiated and produced by eukaryotic microbes. The biofilms produced by eukaryotes are usually occupied by bacteria and other eukaryotes alike; however, the surface is cultivated and the EPS is secreted initially by the eukaryote. Both fungi and microalgae are known to form biofilms in this way. Biofilms of fungal origin are important aspects of human infection and fungal pathogenicity, as such fungal infections are more resistant to antifungals. In the environment, fungal biofilms are an area of ongoing research. One key area of research is fungal biofilms on plants. For example, in the soil, plant-associated fungi, including mycorrhizae, have been shown to decompose organic matter and protect plants from bacterial pathogens. Biofilms in aquatic environments are often founded by diatoms. The exact purpose of these biofilms is unknown; however, there is evidence that the EPS produced by diatoms facilitates tolerance of both cold and salinity stress. These eukaryotes interact with a diverse range of other organisms within a region known as the phycosphere; of particular importance are the bacteria associated with diatoms, as it has been shown that although diatoms excrete EPS, they only do so when interacting with certain bacterial species. Horizontal gene transfer Horizontal gene transfer is the lateral transfer of genetic material between cellular organisms. It happens frequently in prokaryotes, and less frequently in eukaryotes. In bacteria, horizontal gene transfer can occur through transformation (uptake of free-floating DNA from the environment), transduction (virus-mediated DNA uptake), or conjugation (transfer of DNA between the pili structures of two adjacent bacteria). Recent studies have also uncovered other mechanisms, such as membrane vesicle transmission or gene transfer agents. Biofilms promote horizontal gene transfer in a variety of ways. Bacterial conjugation has been shown to accelerate biofilm formation in difficult environments due to the robust connections established by the conjugative pili. These connections can often foster cross-species transfer events due to the diverse heterogeneity of many biofilms. Additionally, biofilms are structurally confined by a polysaccharide matrix, providing the close spatial requirements for conjugation. Transformation is also frequently observed in biofilms. Bacterial autolysis is a key mechanism in biofilm structural regulation, providing an abundant source of competent DNA primed for transformative uptake. In some instances, inter-biofilm quorum sensing can enhance the competence of free-floating eDNA, further promoting transformation. Stx gene transfer through bacteriophage carriers has been witnessed within biofilms, which suggests that biofilms are also a suitable environment for transduction. Membrane vesicles HGT occurs when released membrane vesicles (containing genetic information) fuse with a recipient bacterium and release genetic material into its cytoplasm. Recent research has revealed that membrane vesicle HGT can promote single-strain biofilm formation, yet the role membrane vesicle HGT plays in the formation of multistrain biofilms is still unknown.
GTAs, or gene transfer agents, are phage-like particles produced by the host bacterium that contain random DNA fragments from the host genome. HGT within biofilms can confer antibiotic resistance or increased pathogenicity across the biofilm's population, promoting biofilm homeostasis. Examples Conjugative plasmids may encode biofilm-associated proteins, such as PtgA, PrgB, or PrgC, which promote cell adhesion (required for early biofilm formation). Genes encoding type III fimbriae are found on pOLA52 (a Klebsiella pneumoniae plasmid) and promote conjugative-pilus-dependent biofilm formation. Transformation commonly occurs within biofilms. A phenomenon called fratricide can be seen among streptococcal species, in which cell-wall-degrading enzymes are released, lysing neighboring bacteria and releasing their DNA. This DNA can then be taken up by the surviving bacteria (transformation). Competence stimulating peptides may play an important role in biofilm formation among S. pneumoniae and S. mutans as well. Among V. cholerae, the competence pilus itself promotes cell aggregation through pilus-pilus interactions at the beginning of biofilm formation. Phage invasion may play a role in biofilm life cycles, lysing bacteria and releasing their eDNA, which strengthens biofilm structures and can be taken up by neighboring bacteria in transformation. Biofilm destruction caused by the E. coli phage Rac and the P. aeruginosa prophage Pf4 causes detachment of cells from the biofilm. Detachment is a biofilm phenomenon which requires more study, but it is hypothesized to help proliferate the bacterial species that comprise the biofilm. Membrane vesicle HGT has been witnessed occurring in marine environments, among Neisseria gonorrhoeae, Pseudomonas aeruginosa, Helicobacter pylori, and many other bacterial species. Even though membrane vesicle HGT has been shown to be a contributing factor in biofilm formation, research is still required to prove that membrane-vesicle-mediated HGT occurs within biofilms. Membrane vesicle HGT has also been shown to modulate phage-bacteria interactions in Bacillus subtilis SPP1 phage-resistant cells (which lack the SPP1 receptor protein). Upon exposure to vesicles containing receptors, transduction of pBT163 (a cat-encoding plasmid) occurs, resulting in the expression of the SPP1 receptor protein and opening the receptive bacteria to future phage infection. Recent research has shown that the archaeal species H. volcanii has some biofilm phenotypes similar to those of bacterial biofilms, such as differentiation and HGT, which required cell-cell contact and involved the formation of cytosolic bridges and cellular fusion events. Cultivation devices There is a wide variety of biofilm cultivation devices to mimic natural or industrial environments; it is important to consider, however, that the particular experimental platform for biofilm research determines what kind of biofilm is cultivated and the data that can be extracted.
These devices can be grouped into the following:
microtiter plate (MTP) systems and the MBEC Assay® [formerly the Calgary Biofilm Device (CBD)]
BioFilm Ring Test (BRT) or clinical Biofilm Ring Test (cBRT)
Robbins Device or modified Robbins Device (such as the MPMR-10PMMA or the Bio-inLine Biofilm Reactor)
Drip Flow Biofilm Reactor®
rotary devices (such as the CDC Biofilm Reactor®, the Rotating Disk Reactor, the Biofilm Annular Reactor, the Industrial Surfaces Biofilm Reactor, or the Constant Depth Film Fermenter)
flow chambers or flow cells (such as the Coupon Evaluation Flow Cell, Transmission Flow Cell, and Capillary Flow Cell from BioSurface Technologies)
microfluidic approaches, such as the 3D-bacterial "biofilm-dispersal-then-recolonization" (BDR) microfluidic model
Biology and health sciences
Basic anatomy
Biology
43948
https://en.wikipedia.org/wiki/Star%20formation
Star formation
Star formation is the process by which dense regions within molecular clouds in interstellar space, sometimes referred to as "stellar nurseries" or "star-forming regions", collapse and form stars. As a branch of astronomy, star formation includes the study of the interstellar medium (ISM) and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. It is closely related to planet formation, another branch of astronomy. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function. Most stars do not form in isolation but as part of a group of stars referred to as star clusters or stellar associations. History The first stars are believed to have formed approximately 12–13 billion years ago, following the Big Bang. Over time, successive generations of stars have fused hydrogen and helium into a series of heavier chemical elements. Stellar nurseries Interstellar clouds Spiral galaxies like the Milky Way contain stars, stellar remnants, and a diffuse interstellar medium (ISM) of gas and dust. The interstellar medium consists of 10^-4 to 10^6 particles per cm^3, and is typically composed of roughly 70% hydrogen, 28% helium, and 1.5% heavier elements by mass. The trace amounts of heavier elements were and are produced within stars via stellar nucleosynthesis and ejected as the stars pass beyond the end of their main sequence lifetime. Higher density regions of the interstellar medium form clouds, or diffuse nebulae, where star formation takes place. In contrast to spiral galaxies, elliptical galaxies lose the cold component of their interstellar medium within roughly a billion years, which hinders them from forming diffuse nebulae except through mergers with other galaxies. In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H2) form, so these nebulae are called molecular clouds. The Herschel Space Observatory has revealed that filaments, or elongated dense gas structures, are truly ubiquitous in molecular clouds and central to the star formation process. They fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed manner in which the filaments are fragmented. Observations of supercritical filaments have revealed quasi-periodic chains of dense cores with spacing comparable to the filament inner width, and embedded protostars with outflows. Observations indicate that the coldest clouds tend to form low-mass stars, which are first observed via the infrared light they emit inside the clouds, and then as visible light when the clouds dissipate. Giant molecular clouds, which are generally warmer, produce stars of all masses. These giant molecular clouds have typical densities of 100 particles per cm^3, diameters of , masses of up to 6 million solar masses, or six million times the mass of the Sun. The average interior temperature is . About half the total mass of the Milky Way's galactic ISM is found in molecular clouds, and the galaxy includes an estimated 6,000 molecular clouds, each with more than . The nebula nearest to the Sun where massive stars are being formed is the Orion Nebula, away. However, lower mass star formation is occurring about 400–450 light-years distant in the ρ Ophiuchi cloud complex.
A more compact site of star formation is the opaque clouds of dense gas and dust known as Bok globules, so named after the astronomer Bart Bok. These can form in association with collapsing molecular clouds or possibly independently. The Bok globules are typically up to a light-year across and contain a few solar masses. They can be observed as dark clouds silhouetted against bright emission nebulae or background stars. Over half the known Bok globules have been found to contain newly forming stars. Cloud collapse An interstellar cloud of gas will remain in hydrostatic equilibrium as long as the kinetic energy of the gas pressure is in balance with the potential energy of the internal gravitational force. Mathematically this is expressed using the virial theorem, which states that, to maintain equilibrium, the gravitational potential energy must equal twice the internal thermal energy. If a cloud is massive enough that the gas pressure is insufficient to support it, the cloud will undergo gravitational collapse. The mass above which a cloud will undergo such collapse is called the Jeans mass. The Jeans mass depends on the temperature and density of the cloud, but is typically thousands to tens of thousands of solar masses. During cloud collapse dozens to tens of thousands of stars form more or less simultaneously which is observable in so-called embedded clusters. The end product of a core collapse is an open cluster of stars. In triggered star formation, one of several events might occur to compress a molecular cloud and initiate its gravitational collapse. Molecular clouds may collide with each other, or a nearby supernova explosion can be a trigger, sending shocked matter into the cloud at very high speeds. (The resulting new stars may themselves soon produce supernovae, producing self-propagating star formation.) Alternatively, galactic collisions can trigger massive starbursts of star formation as the gas clouds in each galaxy are compressed and agitated by tidal forces. The latter mechanism may be responsible for the formation of globular clusters. A supermassive black hole at the core of a galaxy may serve to regulate the rate of star formation in a galactic nucleus. A black hole that is accreting infalling matter can become active, emitting a strong wind through a collimated relativistic jet. This can limit further star formation. Massive black holes ejecting radio-frequency-emitting particles at near-light speed can also block the formation of new stars in aging galaxies. However, the radio emissions around the jets may also trigger star formation. Likewise, a weaker jet may trigger star formation when it collides with a cloud. As it collapses, a molecular cloud breaks into smaller and smaller pieces in a hierarchical manner, until the fragments reach stellar mass. In each of these fragments, the collapsing gas radiates away the energy gained by the release of gravitational potential energy. As the density increases, the fragments become opaque and are thus less efficient at radiating away their energy. This raises the temperature of the cloud and inhibits further fragmentation. The fragments now condense into rotating spheres of gas that serve as stellar embryos. Complicating this picture of a collapsing cloud are the effects of turbulence, macroscopic flows, rotation, magnetic fields and the cloud geometry. Both rotation and magnetic fields can hinder the collapse of a cloud. 
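As a point of reference for the Jeans mass discussed above, one common idealized expression (a sketch assuming a uniform, non-rotating, non-magnetized cloud of temperature T, mass density ρ, and mean molecular mass μm_H, with k_B the Boltzmann constant and G the gravitational constant) is

\[
M_\mathrm{J} = \left( \frac{5 k_\mathrm{B} T}{G \mu m_\mathrm{H}} \right)^{3/2} \left( \frac{3}{4 \pi \rho} \right)^{1/2} .
\]

The Jeans mass therefore scales as T^(3/2) ρ^(-1/2): warm, diffuse gas must assemble a very large mass before gravity overwhelms pressure support, whereas cold, dense molecular gas can collapse at far lower masses.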
Turbulence is instrumental in causing fragmentation of the cloud, and on the smallest scales it promotes collapse. Protostar A protostellar cloud will continue to collapse as long as the gravitational binding energy can be eliminated. This excess energy is primarily lost through radiation. However, the collapsing cloud will eventually become opaque to its own radiation, and the energy must be removed through some other means. The dust within the cloud becomes heated to temperatures of , and these particles radiate at wavelengths in the far infrared where the cloud is transparent. Thus the dust mediates the further collapse of the cloud. During the collapse, the density of the cloud increases towards the center and thus the middle region becomes optically opaque first. This occurs when the density is about . A core region, called the first hydrostatic core, forms where the collapse is essentially halted. It continues to increase in temperature as determined by the virial theorem. The gas falling toward this opaque region collides with it and creates shock waves that further heat the core. When the core temperature reaches about , the thermal energy dissociates the H2 molecules. This is followed by the ionization of the hydrogen and helium atoms. These processes absorb the energy of the contraction, allowing it to continue on timescales comparable to the period of collapse at free-fall velocities. After the density of infalling material has reached about 10^-8 g/cm^3, that material is sufficiently transparent to allow energy radiated by the protostar to escape. The combination of convection within the protostar and radiation from its exterior allows the star to contract further. This continues until the gas is hot enough for the internal pressure to support the protostar against further gravitational collapse—a state called hydrostatic equilibrium. When this accretion phase is nearly complete, the resulting object is known as a protostar. Accretion of material onto the protostar continues partially from the newly formed circumstellar disc. When the density and temperature are high enough, deuterium fusion begins, and the outward pressure of the resultant radiation slows (but does not stop) the collapse. Material comprising the cloud continues to "rain" onto the protostar. In this stage, bipolar jets called Herbig–Haro objects are produced. This is probably the means by which excess angular momentum of the infalling material is expelled, allowing the star to continue to form. When the surrounding gas and dust envelope disperses and the accretion process stops, the star is considered a pre-main-sequence star (PMS star). The energy source of these objects is the Kelvin–Helmholtz mechanism (gravitational contraction), as opposed to hydrogen burning in main-sequence stars. The PMS star follows a Hayashi track on the Hertzsprung–Russell (H–R) diagram. The contraction will proceed until the Hayashi limit is reached, and thereafter contraction will continue on a Kelvin–Helmholtz timescale with the temperature remaining stable. Stars with less than thereafter join the main sequence. For more massive PMS stars, at the end of the Hayashi track they will slowly collapse in near hydrostatic equilibrium, following the Henyey track. Finally, hydrogen begins to fuse in the core of the star, and the rest of the enveloping material is cleared away. This ends the protostellar phase and begins the star's main sequence phase on the H–R diagram. The stages of the process are well defined in stars with masses around or less.
In high-mass stars, the length of the star formation process is comparable to the other timescales of their evolution, which are much shorter, and the process is not so well defined. The later evolution of stars is studied in stellar evolution. Observations Key elements of star formation are only available by observing in wavelengths other than the optical. The protostellar stage of stellar existence is almost invariably hidden away deep inside dense clouds of gas and dust left over from the GMC. Often, these star-forming cocoons, known as Bok globules, can be seen in silhouette against bright emission from surrounding gas. Early stages of a star's life can be seen in infrared light, which penetrates the dust more easily than visible light. Observations from the Wide-field Infrared Survey Explorer (WISE) have thus been especially important for unveiling numerous galactic protostars and their parent star clusters. Examples of such embedded star clusters are FSR 1184, FSR 1190, Camargo 14, Camargo 74, Majaess 64, and Majaess 98. The structure of the molecular cloud and the effects of the protostar can be observed in near-IR extinction maps (where the number of stars is counted per unit area and compared to a nearby zero-extinction area of sky), continuum dust emission, and rotational transitions of CO and other molecules; these last two are observed in the millimeter and submillimeter range. The radiation from the protostar and early star has to be observed in infrared astronomy wavelengths, as the extinction caused by the rest of the cloud in which the star is forming is usually too great to allow observation in the visual part of the spectrum. This presents considerable difficulties, as the Earth's atmosphere is almost entirely opaque from 20μm to 850μm, with narrow windows at 200μm and 450μm. Even outside this range, atmospheric subtraction techniques must be used. X-ray observations have proven useful for studying young stars, since X-ray emission from these objects is about 100–100,000 times stronger than X-ray emission from main-sequence stars. The earliest detections of X-rays from T Tauri stars were made by the Einstein X-ray Observatory. For low-mass stars, X-rays are generated by the heating of the stellar corona through magnetic reconnection, while for high-mass O and early B-type stars X-rays are generated through supersonic shocks in the stellar winds. Photons in the soft X-ray energy range covered by the Chandra X-ray Observatory and XMM-Newton may penetrate the interstellar medium with only moderate absorption due to gas, making the X-ray band a useful wavelength range for seeing the stellar populations within molecular clouds. X-ray emission as evidence of stellar youth makes this band particularly useful for performing censuses of stars in star-forming regions, given that not all young stars have infrared excesses. X-ray observations have provided near-complete censuses of all stellar-mass objects in the Orion Nebula Cluster and Taurus Molecular Cloud. The formation of individual stars can only be directly observed in the Milky Way Galaxy, but in distant galaxies star formation has been detected through its unique spectral signature. Initial research indicates that star-forming clumps start as giant, dense areas in turbulent gas-rich matter in young galaxies, live about 500 million years, and may migrate to the center of a galaxy, creating its central bulge.
On February 21, 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. In February 2018, astronomers reported, for the first time, a signal of the reionization epoch, an indirect detection of light from the earliest stars formed, about 180 million years after the Big Bang. An article published on October 22, 2019, reported the detection of 3MM-1, a massive star-forming galaxy about 12.5 billion light-years away that is obscured by clouds of dust. At a mass of about 10^10.8 solar masses, it showed a star formation rate about 100 times as high as in the Milky Way. Notable pathfinder objects MWC 349 was first discovered in 1978, and is estimated to be only 1,000 years old. VLA 1623 – The first exemplar Class 0 protostar, a type of embedded protostar that has yet to accrete the majority of its mass. Found in 1993, it is possibly younger than 10,000 years. L1014 – An extremely faint embedded object representative of a new class of sources that are only now being detected with the newest telescopes. Their status is still undetermined; they could be the youngest low-mass Class 0 protostars yet seen or even very low-mass evolved objects (like brown dwarfs or even rogue planets). GCIRS 8* – The youngest known main sequence star in the Galactic Center region, discovered in August 2006. It is estimated to be 3.5 million years old. Low mass and high mass star formation Stars of different masses are thought to form by slightly different mechanisms. The theory of low-mass star formation, which is well supported by observation, suggests that low-mass stars form by the gravitational collapse of rotating density enhancements within molecular clouds. As described above, the collapse of a rotating cloud of gas and dust leads to the formation of an accretion disk through which matter is channeled onto a central protostar. For stars with masses higher than about , however, the mechanism of star formation is not well understood. Massive stars emit copious quantities of radiation which pushes against infalling material. In the past, it was thought that this radiation pressure might be substantial enough to halt accretion onto the massive protostar and prevent the formation of stars with masses more than a few tens of solar masses. Recent theoretical work has shown that the production of a jet and outflow clears a cavity through which much of the radiation from a massive protostar can escape without hindering accretion through the disk and onto the protostar. Present thinking is that massive stars may therefore be able to form by a mechanism similar to that by which low-mass stars form. There is mounting evidence that at least some massive protostars are indeed surrounded by accretion disks. Disk accretion in high-mass protostars, similar to their low-mass counterparts, is expected to exhibit bursts of episodic accretion as a result of gravitational instability leading to clumpy and discontinuous accretion rates. Recent evidence of accretion bursts in high-mass protostars has indeed been confirmed observationally. Several other theories of massive star formation remain to be tested observationally.
Of these, perhaps the most prominent is the theory of competitive accretion, which suggests that massive protostars are "seeded" by low-mass protostars which compete with other protostars to draw in matter from the entire parent molecular cloud, instead of simply from a small local region. Another theory of massive star formation suggests that massive stars may form by the coalescence of two or more stars of lower mass. Filamentary nature of star formation Recent studies have emphasized the role of filamentary structures in molecular clouds as the initial conditions for star formation. Findings from the Herschel Space Observatory highlight the ubiquitous nature of these filaments in the cold interstellar medium (ISM). The spatial relationship between cores and filaments indicates that the majority of prestellar cores are located within 0.1 pc of supercritical filaments. This supports the hypothesis that filamentary structures act as pathways for the accumulation of gas and dust, leading to core formation. Both the core mass function (CMF) and filament line mass function (FLMF) observed in the California GMC follow power-law distributions at the high-mass end, consistent with the Salpeter initial mass function (IMF). Current results strongly support the existence of a connection between the FLMF and the CMF/IMF, demonstrating that this connection holds at the level of an individual cloud, specifically the California GMC. The FLMF presented is a distribution of local line masses for a complete, homogeneous sample of filaments within the same cloud. It is the local line mass of a filament that defines its ability to fragment at a particular location along its spine, not the average line mass of the filament. This connection is more direct and provides tighter constraints on the origin of the CMF/IMF.
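The Salpeter initial mass function referred to above is the classic power law dN/dM ∝ M^(-2.35). As a rough, self-contained illustration of what such a high-mass-end power law implies, the sketch below draws stellar masses from a truncated Salpeter-like distribution by inverse-transform sampling; the mass limits and sample size are arbitrary choices for the example, not values taken from the studies discussed here.

```python
import numpy as np

def sample_salpeter_masses(n, m_min=0.5, m_max=100.0, alpha=2.35, rng=None):
    """Draw n stellar masses (in solar masses) from a truncated power-law IMF,
    dN/dM proportional to M**(-alpha), via inverse-transform sampling.
    alpha = 2.35 is the Salpeter exponent; the mass limits are illustrative."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.random(n)
    a = 1.0 - alpha  # exponent of the analytically integrable form
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

masses = sample_salpeter_masses(100_000)
# Low-mass stars dominate by number; the median sits close to m_min,
# while stars above 8 solar masses are a small fraction of the sample.
print(f"median = {np.median(masses):.2f} Msun, "
      f"fraction above 8 Msun = {np.mean(masses > 8):.4f}")
```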
Physical sciences
Stellar astronomy
null
43950
https://en.wikipedia.org/wiki/Interstate%20Highway%20System
Interstate Highway System
The Dwight D. Eisenhower National System of Interstate and Defense Highways, commonly known as the Interstate Highway System, or the Eisenhower Interstate System, is a network of controlled-access highways that forms part of the National Highway System in the United States. The system extends throughout the contiguous United States and has routes in Hawaii, Alaska, and Puerto Rico. In the 20th century, the United States Congress began funding roadways through the Federal Aid Road Act of 1916, and started an effort to construct a national road grid with the passage of the Federal Aid Highway Act of 1921. In 1926, the United States Numbered Highway System was established, creating the first national road numbering system for cross-country travel. The roads were funded and maintained by U.S. states, and there were few national standards for road design. United States Numbered Highways ranged from two-lane country roads to multi-lane freeways. After Dwight D. Eisenhower became president in 1953, his administration developed a proposal for an interstate highway system, eventually resulting in the enactment of the Federal-Aid Highway Act of 1956. Unlike the earlier United States Numbered Highway System, the interstates were designed to be all freeways, with nationally unified standards for construction and signage. While some older freeways were adopted into the system, most of the routes were completely new. In dense urban areas, the choice of routing destroyed many well-established neighborhoods, often intentionally as part of a program of "urban renewal". In the two decades following the 1956 Highway Act, the construction of the freeways displaced one million people, and as a result of the many freeway revolts during this era, several planned Interstates were abandoned or re-routed to avoid urban cores. Construction of the original Interstate Highway System was proclaimed complete in 1992, despite deviations from the original 1956 plan and several stretches that did not fully conform with federal standards. The construction of the Interstate Highway System cost approximately $114 billion (equivalent to $ in ). The system has continued to expand and grow as additional federal funding has provided for new routes to be added, and many future Interstate Highways are currently either being planned or under construction. Though heavily funded by the federal government, Interstate Highways are owned by the state in which they were built. With few exceptions, all Interstates must meet specific standards, such as having controlled access, physical barriers or median strips between lanes of oncoming traffic, breakdown lanes, avoiding at-grade intersections, no traffic lights, and complying with federal traffic sign specifications. Interstate Highways use a numbering scheme in which primary Interstates are assigned one- or two-digit numbers, and shorter routes which branch off from longer ones are assigned three-digit numbers where the last two digits match the parent route. The Interstate Highway System is partially financed through the Highway Trust Fund, which itself is funded by a combination of a federal fuel tax and transfers from the Treasury's general fund. Though federal legislation initially banned the collection of tolls, some Interstate routes are toll roads, either because they were grandfathered into the system or because subsequent legislation has allowed for tolling of Interstates in some cases. 
, about one quarter of all vehicle miles driven in the country used the Interstate Highway System, which has a total length of . In 2022 and 2023, the number of fatalities on the Interstate Highway System amounted to more than 5,000 people annually, with nearly 5,600 fatalities in 2022. History Planning The United States government's efforts to construct a national network of highways began on an ad hoc basis with the passage of the Federal Aid Road Act of 1916, which provided $75 million over a five-year period for matching funds to the states for the construction and improvement of highways. The nation's revenue needs associated with World War I prevented any significant implementation of this policy, which expired in 1921. In December 1918, E. J. Mehren, a civil engineer and the editor of Engineering News-Record, presented his "A Suggested National Highway Policy and Plan" during a gathering of the State Highway Officials and Highway Industries Association at the Congress Hotel in Chicago. In the plan, Mehren proposed a system, consisting of five east–west routes and 10 north–south routes. The system would include two percent of all roads and would pass through every state at a cost of , providing commercial as well as military transport benefits. In 1919, the US Army sent an expedition across the US to determine the difficulties that military vehicles would have on a cross-country trip. Leaving from the Ellipse near the White House on July 7, the Motor Transport Corps convoy needed 62 days to drive on the Lincoln Highway to the Presidio of San Francisco along the Golden Gate. The convoy suffered many setbacks and problems on the route, such as poor-quality bridges, broken crankshafts, and engines clogged with desert sand. Dwight Eisenhower, then a 28-year-old brevet lieutenant colonel, accompanied the trip "through darkest America with truck and tank," as he later described it. Some roads in the West were a "succession of dust, ruts, pits, and holes." As the landmark 1916 law expired, new legislation was passed—the Federal Aid Highway Act of 1921 (Phipps Act). This new road construction initiative once again provided for federal matching funds for road construction and improvement, $75 million allocated annually. Moreover, this new legislation for the first time sought to target these funds to the construction of a national road grid of interconnected "primary highways", setting up cooperation among the various state highway planning boards. The Bureau of Public Roads asked the Army to provide a list of roads that it considered necessary for national defense. In 1922, General John J. Pershing, former head of the American Expeditionary Force in Europe during the war, complied by submitting a detailed network of of interconnected primary highways—the so-called Pershing Map. A boom in road construction followed throughout the decade of the 1920s, with such projects as the New York parkway system constructed as part of a new national highway system. As automobile traffic increased, planners saw a need for such an interconnected national system to supplement the existing, largely non-freeway, United States Numbered Highways system. By the late 1930s, planning had expanded to a system of new superhighways. In 1938, President Franklin D. Roosevelt gave Thomas MacDonald, chief at the Bureau of Public Roads, a hand-drawn map of the United States marked with eight superhighway corridors for study. In 1939, Bureau of Public Roads Division of Information chief Herbert S. 
Fairbank wrote a report called Toll Roads and Free Roads, "the first formal description of what became the Interstate Highway System" and, in 1944, the similarly themed Interregional Highways. Federal Aid Highway Act of 1956 The Interstate Highway System gained a champion in President Dwight D. Eisenhower, who was influenced by his experiences as a young Army officer crossing the country in the 1919 Motor Transport Corps convoy that drove in part on the Lincoln Highway, the first road across America. He recalled, "The old convoy had started me thinking about good two-lane highways... the wisdom of broader ribbons across our land." Eisenhower also gained an appreciation of the Reichsautobahn system, the first "national" implementation of modern Germany's Autobahn network, as a necessary component of a national defense system while he was serving as Supreme Commander of Allied Forces in Europe during World War II. In 1954, Eisenhower appointed General Lucius D. Clay to head a committee charged with proposing an interstate highway system plan. Clay's committee proposed a 10-year, $100 billion program, which would build of divided highways linking all American cities with a population of greater than 50,000. Eisenhower initially preferred a system consisting of toll roads, but Clay convinced Eisenhower that toll roads were not feasible outside of the highly populated coastal regions. In February 1955, Eisenhower forwarded Clay's proposal to Congress. The bill quickly won approval in the Senate, but House Democrats objected to the use of public bonds as the means to finance construction. Eisenhower and the House Democrats agreed to instead finance the system through the Highway Trust Fund, which itself would be funded by a gasoline tax. In June 1956, Eisenhower signed the Federal Aid Highway Act of 1956 into law. Under the act, the federal government would pay for 90 percent of the cost of construction of Interstate Highways. Each Interstate Highway was required to be a freeway with at least four lanes and no at-grade crossings. The publication in 1955 of the General Location of National System of Interstate Highways, informally known as the Yellow Book, mapped out what became the Interstate Highway System. Assisting in the planning was Charles Erwin Wilson, who was still head of General Motors when President Eisenhower selected him as Secretary of Defense in January 1953. Construction Some sections of highways that became part of the Interstate Highway System actually began construction earlier. Three states have claimed the title of first Interstate Highway. Missouri claims that the first three contracts under the new program were signed in Missouri on August 2, 1956. The first contract signed was for upgrading a section of US Route 66 to what is now designated Interstate 44. On August 13, 1956, work began on US 40 (now I-70) in St. Charles County. Kansas claims that it was the first to start paving after the act was signed. Preliminary construction had taken place before the act was signed, and paving started September 26, 1956. The state marked its portion of I-70 as the first project in the United States completed under the provisions of the new Federal-Aid Highway Act of 1956. The Pennsylvania Turnpike could also be considered one of the first Interstate Highways, and is nicknamed "Grandfather of the Interstate System". 
On October 1, 1940, of the highway now designated I‑70 and I‑76 opened between Irwin and Carlisle. The Commonwealth of Pennsylvania refers to the turnpike as the Granddaddy of the Pikes, a reference to turnpikes. Milestones in the construction of the Interstate Highway System include: October 17, 1974: Nebraska becomes the first state to complete all of its mainline Interstate Highways with the dedication of its final piece of I-80. October 12, 1979: The final section of the Canada to Mexico freeway Interstate 5 is dedicated near Stockton, California. Representatives of the two neighboring nations attended the dedication to commemorate the first contiguous freeway connecting the North American countries. August 22, 1986: The final section of the coast-to-coast I-80 (San Francisco, California, to Teaneck, New Jersey) is dedicated on the western edge of Salt Lake City, Utah, making I-80 the world's first contiguous freeway to span from the Atlantic to Pacific Ocean and, at the time, the longest contiguous freeway in the world. The section spanned from Redwood Road to just west of the Salt Lake City International Airport. At the dedication it was noted that coincidentally this was only from Promontory Summit, where a similar feat was accomplished nearly 120 years prior, the driving of the golden spike of the United States' First transcontinental railroad. August 10, 1990: The final section of coast-to-coast I-10 (Santa Monica, California, to Jacksonville, Florida) is dedicated, the Papago Freeway Tunnel under downtown Phoenix, Arizona. Completion of this section was delayed due to a freeway revolt that forced the cancellation of an originally planned elevated routing. September 12, 1991: I-90 becomes the final coast-to-coast Interstate Highway (Seattle, Washington to Boston, Massachusetts) to be completed with the dedication of an elevated viaduct bypassing Wallace, Idaho, which opened a week earlier. This section was delayed after residents forced the cancellation of the originally planned at-grade alignment that would have demolished much of downtown Wallace. The residents accomplished this feat by arranging for most of the downtown area to be declared a historic district and listed on the National Register of Historic Places; this succeeded in blocking the path of the original alignment. Two days after the dedication residents held a mock funeral celebrating the removal of the last stoplight on a transcontinental Interstate Highway. October 14, 1992: The original Interstate Highway System is proclaimed to be complete with the opening of I-70 through Glenwood Canyon in Colorado. This section is considered an engineering marvel with a span featuring 40 bridges and numerous tunnels and is one of the most expensive rural highways per mile built in the United States. The initial cost estimate for the system was $25 billion over 12 years; it ended up costing $114 billion (equivalent to $425 billion in 2006 or $ in ) and took 35 years. 1992–present Discontinuities The system was proclaimed complete in 1992, but two of the original Interstates—I-95 and I-70—were not continuous: both of these discontinuities were due to local opposition, which blocked efforts to build the necessary connections to fully complete the system. I-95 was made a continuous freeway in 2018, and thus I-70 remains the only original Interstate with a discontinuity. I-95 was discontinuous in New Jersey because of the cancellation of the Somerset Freeway. 
This situation was remedied when the construction of the Pennsylvania Turnpike/Interstate 95 Interchange Project started in 2010 and partially opened on September 22, 2018, which was enough to close the gap. However, I-70 remains discontinuous in Pennsylvania because of the lack of a direct interchange with the Pennsylvania Turnpike at the eastern end of the concurrency near Breezewood. Traveling in either direction, I-70 traffic must exit the freeway and use a short stretch of US 30 (which includes a number of roadside services) to rejoin I-70. The interchange was not originally built because of a legacy federal funding rule, since relaxed, which restricted the use of federal funds to improve roads financed with tolls. Solutions have been proposed to eliminate the discontinuity, but they have been blocked by local opposition fearing a loss of business. Expansions and removals The Interstate Highway System has been expanded numerous times. The expansions have both created new designations and extended existing designations. For example, I-49, added to the system in the 1980s as a freeway in Louisiana, was designated as an expansion corridor, and FHWA approved the expanded route north from Lafayette, Louisiana, to Kansas City, Missouri. The freeway exists today as separate completed segments, with segments under construction or in the planning phase between them. In 1966, the FHWA designated the entire Interstate Highway System as part of the larger Pan-American Highway System, and at least two proposed Interstate expansions were initiated to help trade with Canada and Mexico spurred by the North American Free Trade Agreement (NAFTA). Long-term plans for I-69, which currently exists in several separate completed segments (the largest of which are in Indiana and Texas), are to have the highway extend from Tamaulipas, Mexico, to Ontario, Canada. The planned I-11 will then bridge the Interstate gap between Phoenix, Arizona, and Las Vegas, Nevada, and thus form part of the CANAMEX Corridor (along with I-19, and portions of I-10 and I-15) between Sonora, Mexico, and Alberta, Canada. Opposition, cancellations, and removals Political opposition from residents canceled many freeway projects around the United States, including: I-40 in Memphis, Tennessee, was rerouted, and part of the original I-40 is still in use as the eastern half of Sam Cooper Boulevard. I-66 in the District of Columbia was abandoned in 1977. I-69 was to continue past its terminus at Interstate 465 to intersect with Interstate 70 and Interstate 65 at the north split, northeast of downtown Indianapolis. Though local opposition led to the cancellation of this project in 1981, bridges and ramps for the connection into the "north split" remained until it was rebuilt in 2023. I-70 in Baltimore was supposed to run from the Baltimore Beltway (Interstate 695), which surrounds the city, to terminate at I-95, the East Coast thoroughfare that runs through Maryland and Baltimore on a diagonal course, northeast to southwest; the connection was cancelled in the mid-1970s due to its routing through Gwynns Falls-Leakin Park, an urban wilderness park reserve following the Gwynns Falls stream through West Baltimore. This included the cancellation of I-170, partially built and in use as US 40, and nicknamed the Highway to Nowhere. The freeway stub of I-70 inside the Beltway was renumbered MD 570 in 2014, but continues to bear I-70 signs. I-78 in New York City was canceled along with portions of I-278, I-478, and I-878. 
I-878 was supposed to be part of I-78, and I-478 and I-278 were to be spur routes. I-80 in San Francisco was originally planned to travel past the city's Civic Center along the Panhandle Freeway into Golden Gate Park and terminate at the original alignment of I-280/SR 1. The city canceled this and several other freeways in 1958. Similarly, more than 20 years later, Sacramento canceled plans to upgrade I-80 to Interstate Standards and rerouted the freeway on what was then I-880 that traveled north of Downtown Sacramento. I-83, southern extension of the Jones Falls Expressway (southern I-83) in Baltimore was supposed to run along the waterfront of the Patapsco River / Baltimore Harbor to connect to I-95, bisecting historic neighborhoods of Fells Point and Canton, but the connection was never built. I-84 in Connecticut was once planned to fork east of Hartford, into an I-86 to Sturbridge, Massachusetts, and I-84 to Providence, R.I. The plan was cancelled, primarily because of anticipated impact on a major Rhode Island reservoir. The I-84 designation was restored to the highway to Sturbridge, and other numbering was used for completed Eastern sections of what had been planned as part of I-84. I-95 through the District of Columbia into Maryland was abandoned in 1977. Instead it was rerouted to I-495 (Capital Beltway). The completed section is now I-395. I-95 was originally planned to run up the Southwest Expressway and meet I-93, where the two highways would travel along the Central Artery through downtown Boston, but was rerouted onto the Route 128 beltway due to widespread opposition. This revolt also included the cancellation of the Inner Belt, connecting I-93 to I-90 and a cancelled section of the Northwest Expressway which would have carried US 3 inside the Route 128 beltway, meeting with Route 2 in Cambridge. In addition to cancellations, removals of freeways are planned: I-81 in Syracuse, New York, which bisects the city's 15th Ward neighborhood, is planned to be torn down and replaced with a boulevard that accommodates pedestrians. Freeway traffic would be rerouted along I-481. Standards The American Association of State Highway and Transportation Officials (AASHTO) has defined a set of standards that all new Interstates must meet unless a waiver from the Federal Highway Administration (FHWA) is obtained. One almost absolute standard is the controlled access nature of the roads. With few exceptions, traffic lights (and cross traffic in general) are limited to toll booths and ramp meters (metered flow control for lane merging during rush hour). Speed limits Being freeways, Interstate Highways usually have the highest speed limits in a given area. Speed limits are determined by individual states. From 1975 to 1986, the maximum speed limit on any highway in the United States was , in accordance with federal law. Typically, lower limits are established in Northeastern and coastal states, while higher speed limits are established in inland states west of the Mississippi River. For example, the maximum speed limit is in northern Maine, varies between from southern Maine to New Jersey, and is in New York City and the District of Columbia. Currently, rural speed limits elsewhere generally range from . Several portions of various highways such as I-10 and I-20 in rural western Texas, I-80 in Nevada between Fernley and Winnemucca (except around Lovelock) and portions of I-15, I-70, I-80, and I-84 in Utah have a speed limit of . 
Other Interstates in Idaho, Montana, Oklahoma, South Dakota, and Wyoming also have the same high speed limits. Speed limits on Interstates can be significantly lower where the route traverses particularly hazardous areas. The maximum speed limit on I-90 is in downtown Cleveland because of two sharp curves with a suggested limit of in a heavily congested area; I-70 through Wheeling, West Virginia, has a maximum speed limit of through the Wheeling Tunnel and most of downtown Wheeling; and I-68 has a maximum speed limit of through Cumberland, Maryland, because of multiple hazards including sharp curves and narrow lanes through the city. In some locations, low speed limits are the result of lawsuits and resident demands; after holding up the completion of I-35E in St. Paul, Minnesota, for nearly 30 years in the courts, residents along the stretch of the freeway from the southern city limit to downtown successfully lobbied for a speed limit in addition to a prohibition on any vehicle weighing more than gross vehicle weight. I-93 in Franconia Notch State Park in northern New Hampshire has a speed limit of because it is a parkway that consists of only one lane per side of the highway. On the other hand, Interstates 15, 80, 84, and 215 in Utah have speed limits as high as within the Wasatch Front, Cedar City, and St. George areas, and I-25 in New Mexico within the Santa Fe and Las Vegas areas, along with I-20 in Texas around Odessa and Midland and I-29 in North Dakota in the Grand Forks area, have higher speed limits of . Other uses As one of the components of the National Highway System, Interstate Highways improve the mobility of military troops to and from airports, seaports, rail terminals, and other military bases. Interstate Highways also connect to other roads that are a part of the Strategic Highway Network, a system of roads identified as critical to the US Department of Defense. The system has also been used to facilitate evacuations in the face of hurricanes and other natural disasters. An option for maximizing traffic throughput on a highway is to reverse the flow of traffic on one side of a divider so that all lanes become outbound lanes. This procedure, known as contraflow lane reversal, has been employed several times for hurricane evacuations. After public outcry regarding the inefficiency of evacuating from southern Louisiana prior to Hurricane Georges' landfall in September 1998, government officials looked towards contraflow to improve evacuation times. In Savannah, Georgia, and Charleston, South Carolina, in 1999, lanes of I-16 and I-26 were used in a contraflow configuration in anticipation of Hurricane Floyd, with mixed results. In 2004, contraflow was employed ahead of Hurricane Charley in the Tampa, Florida, area and on the Gulf Coast before the landfall of Hurricane Ivan; however, evacuation times there were no better than previous evacuation operations. Engineers began to apply lessons learned from the analysis of prior contraflow operations, including limiting exits, removing troopers (to keep traffic flowing instead of having drivers stop for directions), and improving the dissemination of public information. As a result, the 2005 evacuation of New Orleans, Louisiana, prior to Hurricane Katrina ran much more smoothly. According to urban legend, early regulations required that one out of every five miles of the Interstate Highway System must be built straight and flat, so as to be usable by aircraft during times of war. 
There is no evidence of this rule being included in any Interstate legislation. It is also commonly believed the Interstate Highway System was built for the sole purpose of evacuating cities in the event of nuclear warfare. While military motivations were present, the primary motivations were civilian. Numbering system Primary (one- and two-digit) Interstates The numbering scheme for the Interstate Highway System was developed in 1957 by the American Association of State Highway and Transportation Officials (AASHTO). The association's present numbering policy dates back to August 10, 1973. Within the contiguous United States, primary Interstates—also called main line Interstates or two-digit Interstates—are assigned numbers less than 100. While numerous exceptions do exist, there is a general scheme for numbering Interstates. Primary Interstates are assigned one- or two-digit numbers, while shorter routes (such as spurs, loops, and short connecting roads) are assigned three-digit numbers where the last two digits match the parent route (thus, I-294 is a loop that connects at both ends to I-94, while I-787 is a short spur route attached to I-87). In the numbering scheme for the primary routes, east–west highways are assigned even numbers and north–south highways are assigned odd numbers. Odd route numbers increase from west to east, and even-numbered routes increase from south to north (to avoid confusion with the US Highways, which increase from east to west and north to south). This numbering system usually holds true even if the local direction of the route does not match the compass directions. Numbers divisible by five are intended to be major arteries among the primary routes, carrying traffic long distances. Primary north–south Interstates increase in number from I-5 between Canada and Mexico along the West Coast to I‑95 between Canada and Miami, Florida along the East Coast. Major west–east arterial Interstates increase in number from I-10 between Santa Monica, California, and Jacksonville, Florida, to I-90 between Seattle, Washington, and Boston, Massachusetts, with two exceptions. There are no I-50 and I-60, as routes with those numbers would likely pass through states that currently have US Highways with the same numbers, which is generally disallowed under highway administration guidelines. Several two-digit numbers are shared between unconnected road segments at opposite ends of the country for various reasons. Some such highways are incomplete Interstates (such as I-69 and I-74) and some just happen to share route designations (such as I-76, I-84, I‑86, I-87, and I-88). Some of these were due to a change in the numbering system as a result of a new policy adopted in 1973. Previously, letter-suffixed numbers were used for long spurs off primary routes; for example, western I‑84 was I‑80N, as it went north from I‑80. The new policy stated, "No new divided numbers (such as I-35W and I-35E, etc.) shall be adopted." The new policy also recommended that existing divided numbers be eliminated as quickly as possible; however, an I-35W and I-35E still exist in the Dallas–Fort Worth metroplex in Texas, and an I-35W and I-35E that run through Minneapolis and Saint Paul, Minnesota, still exist. Additionally, due to Congressional requirements, three sections of I-69 in southern Texas will be divided into I-69W, I-69E, and I-69C (for Central). AASHTO policy allows dual numbering to provide continuity between major control points. This is referred to as a concurrency or overlap. 
For example, I‑75 and I‑85 share the same roadway in Atlanta; this section, called the Downtown Connector, is labeled both I‑75 and I‑85. Concurrencies between Interstate and US Highway numbers are also allowed in accordance with AASHTO policy, as long as the length of the concurrency is reasonable. In rare instances, two highway designations sharing the same roadway are signed as traveling in opposite directions; one such wrong-way concurrency is found between Wytheville and Fort Chiswell, Virginia, where I‑81 north and I‑77 south are equivalent (with that section of road traveling almost due east), as are I‑81 south and I‑77 north. Auxiliary (three-digit) Interstates Auxiliary Interstate Highways are circumferential, radial, or spur highways that principally serve urban areas. These types of Interstate Highways are given three-digit route numbers, which consist of a single digit prefixed to the two-digit number of its parent Interstate Highway. Spur routes deviate from their parent and do not return; these are given an odd first digit. Circumferential and radial loop routes return to the parent, and are given an even first digit. Unlike primary Interstates, three-digit Interstates are signed as either east–west or north–south, depending on the general orientation of the route, without regard to the route number. For instance, I-190 in Massachusetts is labeled north–south, while I-195 in New Jersey is labeled east–west. Some looped Interstate routes use inner–outer directions instead of compass directions, when the use of compass directions would create ambiguity. Due to the large number of these routes, auxiliary route numbers may be repeated in different states along the mainline. Some auxiliary highways do not follow these guidelines, however. Alaska, Hawaii, and Puerto Rico The Interstate Highway System also extends to Alaska, Hawaii, and Puerto Rico, even though they have no direct land connections to any other states or territories. However, their residents still pay federal fuel and tire taxes. The Interstates in Hawaii, all located on the most populous island of Oahu, carry the prefix H. There are three one-digit routes in the state (H-1, H-2, and H-3) and one auxiliary route (H-201). These Interstates connect several military and naval bases together, as well as the important communities spread across Oahu, and especially within the urban core of Honolulu. Both Alaska and Puerto Rico also have public highways that receive 90 percent of their funding from the Interstate Highway program. The Interstates of Alaska and Puerto Rico are numbered sequentially in order of funding without regard to the rules on odd and even numbers. They also carry the prefixes A and PR, respectively. However, these highways are signed according to their local designations, not their Interstate Highway numbers. Furthermore, these routes were neither planned according to nor constructed to the official Interstate Highway standards. Mile markers and exit numbers On one- or two-digit Interstates, the mile marker numbering almost always begins at the southern or western state line. If an Interstate originates within a state, the numbering begins from the location where the road begins in the south or west. As with all guidelines for Interstate routes, however, numerous exceptions exist. 
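The general numbering conventions described above (odd versus even primary numbers, numbers divisible by five for major arteries, and three-digit auxiliary routes whose last two digits name the parent and whose first digit distinguishes spurs from loops) amount to a small rule set. The Python sketch below encodes those rules for illustration only; the hypothetical classify_interstate helper ignores the many documented exceptions, such as shared numbers, suffixed routes like I-35W and I-35E, and auxiliary routes that do not follow the guidelines.

```python
def classify_interstate(number: int) -> dict:
    """Classify an Interstate route number using the general AASHTO
    conventions described in the text. Purely illustrative; real routes
    include many exceptions to these rules."""
    if not 1 <= number <= 999:
        raise ValueError("Interstate numbers are one to three digits")
    info = {"number": number}
    if number < 100:
        # Primary (one- or two-digit) route: even = east-west, odd = north-south;
        # numbers divisible by 5 are intended as major long-distance arteries.
        info["kind"] = "primary"
        info["orientation"] = "east-west" if number % 2 == 0 else "north-south"
        info["major_artery"] = number % 5 == 0
    else:
        # Auxiliary (three-digit) route: last two digits name the parent;
        # an even first digit indicates a loop, an odd first digit a spur.
        info["kind"] = "auxiliary"
        info["parent"] = number % 100
        info["type"] = "loop" if (number // 100) % 2 == 0 else "spur"
    return info

print(classify_interstate(95))   # primary, north-south, major artery
print(classify_interstate(294))  # auxiliary loop of I-94
print(classify_interstate(787))  # auxiliary spur of I-87
```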
Three-digit Interstates with an even first number that form a complete circumferential (circle) bypass around a city feature mile markers that are numbered in a clockwise direction, beginning just west of an Interstate that bisects the circumferential route near a south polar location. In other words, mile marker 1 on I-465, a route around Indianapolis, is just west of its junction with I-65 on the south side of Indianapolis (on the south leg of I-465), and mile marker 53 is just east of this same junction. An exception is I-495 in the Washington metropolitan area, with mileposts increasing counterclockwise because part of that road is also part of I-95. Most Interstate Highways use distance-based exit numbers so that the exit number is the same as the nearest mile marker. If multiple exits occur within the same mile, letter suffixes may be appended to the numbers in alphabetical order starting with A. A small number of Interstate Highways (mostly in the Northeastern United States) use sequential-based exit numbering schemes (where each exit is numbered in order starting with 1, without regard for the mile markers on the road). One Interstate Highway, I-19 in Arizona, is signed with kilometer-based exit numbers. In the state of New York, most Interstate Highways use sequential exit numbering, with some exceptions. Business routes AASHTO defines a category of special routes separate from primary and auxiliary Interstate designations. These routes do not have to comply to Interstate construction or limited-access standards but are routes that may be identified and approved by the association. The same route marking policy applies to both US Numbered Highways and Interstate Highways; however, business route designations are sometimes used for Interstate Highways. Known as Business Loops and Business Spurs, these routes principally travel through the corporate limits of a city, passing through the central business district when the regular route is directed around the city. They also use a green shield instead of the red and blue shield. An example would be Business Loop Interstate 75 at Pontiac, Michigan, which follows surface roads into and through downtown. Sections of BL I-75's routing had been part of US 10 and M-24, predecessors of I-75 in the area. Financing Interstate Highways and their rights-of-way are owned by the state in which they were built. The last federally owned portion of the Interstate System was the Woodrow Wilson Bridge on the Washington Capital Beltway. The new bridge was completed in 2009 and is collectively owned by Virginia and Maryland. Maintenance is generally the responsibility of the state department of transportation. However, there are some segments of Interstate owned and maintained by local authorities. Taxes and user fees About 70 percent of the construction and maintenance costs of Interstate Highways in the United States have been paid through user fees, primarily the fuel taxes collected by the federal, state, and local governments. To a much lesser extent they have been paid for by tolls collected on toll highways and bridges. The federal gasoline tax was first imposed in 1932 at one cent per gallon; during the Eisenhower administration, the Highway Trust Fund, established by the Highway Revenue Act in 1956, prescribed a three-cent-per-gallon fuel tax, soon increased to 4.5 cents per gallon. Since 1993 the tax has remained at 18.4 cents per gallon. Other excise taxes related to highway travel also accumulated in the Highway Trust Fund. 
Initially, that fund was sufficient for the federal portion of building the Interstate system, built in the early years with "10 cent dollars", from the perspective of the states, as the federal government paid 90% of the costs while the state paid 10%. The system grew more rapidly than the rate of the taxes on fuel and other aspects of driving (e. g., excise tax on tires). The rest of the costs of these highways are borne by general fund receipts, bond issues, designated property taxes, and other taxes. The federal contribution is funded primarily through fuel taxes and through transfers from the Treasury's general fund. Local government contributions are overwhelmingly from sources besides user fees. As decades passed in the 20th century and into the 21st century, the portion of the user fees spent on highways themselves covers about 57 percent of their costs, with about one-sixth of the user fees being sent to other programs, including the mass transit systems in large cities. Some large sections of Interstate Highways that were planned or constructed before 1956 are still operated as toll roads, for example the Massachusetts Turnpike (I-90), the New York State Thruway (I-87 and I-90), and Kansas Turnpike (I-35, I-335, I-470, I-70). Others have had their construction bonds paid off and they have become toll-free, such as the Connecticut Turnpike (I‑95, I-395), the Richmond-Petersburg Turnpike in Virginia (also I‑95), and the Kentucky Turnpike (I‑65). As American suburbs have expanded, the costs incurred in maintaining freeway infrastructure have also grown, leaving little in the way of funds for new Interstate construction. This has led to the proliferation of toll roads (turnpikes) as the new method of building limited-access highways in suburban areas. Some Interstates are privately maintained (for example, the VMS company maintains I‑35 in Texas) to meet rising costs of maintenance and allow state departments of transportation to focus on serving the fastest-growing regions in their states. Parts of the Interstate System might have to be tolled in the future to meet maintenance and expansion demands, as has been done with adding toll HOV/HOT lanes in cities such as Atlanta, Dallas, and Los Angeles. Although part of the tolling is an effect of the SAFETEA‑LU act, which has put an emphasis on toll roads as a means to reduce congestion, present federal law does not allow for a state to change a freeway section to a tolled section for all traffic. Tolls About of toll roads are included in the Interstate Highway System. While federal legislation initially banned the collection of tolls on Interstates, many of the toll roads on the system were either completed or under construction when the Interstate Highway System was established. Since these highways provided logical connections to other parts of the system, they were designated as Interstate highways. Congress also decided that it was too costly to either build toll-free Interstates parallel to these toll roads, or directly repay all the bondholders who financed these facilities and remove the tolls. Thus, these toll roads were grandfathered into the Interstate Highway System. Toll roads designated as Interstates (such as the Massachusetts Turnpike) were typically allowed to continue collecting tolls, but are generally ineligible to receive federal funds for maintenance and improvements. 
Some toll roads that did receive federal funds to finance emergency repairs (notably the Connecticut Turnpike (I-95) following the Mianus River Bridge collapse) were required to remove tolls as soon as the highway's construction bonds were paid off. In addition, these toll facilities were grandfathered from Interstate Highway standards. A notable example is the western approach to the Benjamin Franklin Bridge in Philadelphia, where I-676 has a surface street section through a historic area. Policies on toll facilities and Interstate Highways have since changed. The Federal Highway Administration has allowed some states to collect tolls on existing Interstate Highways, while a recent extension of I-376 included a section of Pennsylvania Route 60 that was tolled by the Pennsylvania Turnpike Commission before receiving Interstate designation. Also, newer toll facilities (like the tolled section of I-376, which was built in the early 1990s) must conform to Interstate standards. A new edition of the Manual on Uniform Traffic Control Devices in 2009 requires a black-on-yellow "Toll" sign to be placed above the Interstate trailblazer on Interstate Highways that collect tolls. Legislation passed in 2005, known as SAFETEA-LU, encouraged states to construct new Interstate Highways through "innovative financing" methods. SAFETEA-LU made it easier for states to pursue innovative financing by easing the restrictions on building Interstates as toll roads, either through state agencies or through public–private partnerships. However, SAFETEA-LU left in place a prohibition on installing tolls on existing toll-free Interstates, and states wishing to toll such routes to finance upgrades and repairs must first seek approval from Congress. Many states have started using high-occupancy toll (HOT) lanes and other partial tolling methods, whereby certain lanes of highly congested freeways are tolled while others are left free, allowing people to pay a fee to travel in less congested lanes. Examples of recent projects to add HOT lanes to existing freeways include the Virginia HOT lanes on the Virginia portions of the Capital Beltway and other related Interstate highways (I-95, I-495, I-395) and the addition of express toll lanes to Interstate 77 in North Carolina in the Charlotte metropolitan area. Chargeable and non-chargeable Interstate routes Interstate Highways financed with federal funds are known as "chargeable" Interstate routes, and are considered part of the network of highways. Federal laws also allow "non-chargeable" Interstate routes, highways funded similarly to state and US Highways, to be signed as Interstates if they both meet the Interstate Highway standards and are logical additions or connections to the system. These additions fall under two categories: routes that already meet Interstate standards, and routes not yet upgraded to Interstate standards. Only routes that meet Interstate standards may be signed as Interstates once their proposed number is approved. Signage Interstate shield Interstate Highways are signed by a number placed on a red, white, and blue sign. The shield design itself is a registered trademark of the American Association of State Highway and Transportation Officials. The colors red, white, and blue were chosen because they are the colors of the American flag. In the original design, the name of the state was displayed above the highway number, but in many states, this area is now left blank, allowing for the printing of larger and more-legible digits. 
Signs with the shield alone are placed periodically throughout each Interstate as reassurance markers. These signs usually measure high, and are wide for two-digit Interstates or for three-digit Interstates. Interstate business loops and spurs use a special shield in which the red and blue are replaced with green, the word "BUSINESS" appears instead of "INTERSTATE", and the word "SPUR" or "LOOP" usually appears above the number. The green shield is employed to mark the main route through a city's central business district, which intersects the associated Interstate at one (spur) or both (loop) ends of the business route. The route usually traverses the main thoroughfare(s) of the city's downtown area or other major business district. A city may have more than one Interstate-derived business route, depending on the number of Interstates passing through a city and the number of significant business districts therein. Over time, the design of the Interstate shield has changed. In 1957 the Interstate shield designed by Texas Highway Department employee Richard Oliver was introduced, the winner of a contest that included 100 entries; at the time, the shield color was a dark navy blue and only wide. The Manual on Uniform Traffic Control Devices (MUTCD) standards revised the shield in the 1961, 1971, and 1978 editions. Exit numbering The majority of Interstates have exit numbers. Like other highways, Interstates feature guide signs that list control cities to help direct drivers through interchanges and exits toward their desired destination. All traffic signs and lane markings on the Interstates are supposed to be designed in compliance with the Manual on Uniform Traffic Control Devices (MUTCD). There are, however, many local and regional variations in signage. For many years, California was the only state that did not use an exit numbering system. It was granted an exemption in the 1950s due to having an already largely completed and signed highway system; placing exit number signage across the state was deemed too expensive. To control costs, California began to incorporate exit numbers on its freeways in 2002—Interstate, US, and state routes alike. Caltrans commonly installs exit number signage only when a freeway or interchange is built, reconstructed, retrofitted, or repaired, and it is usually tacked onto the top-right corner of an already existing sign. Newer signs along the freeways follow this practice as well. Most exits along California's Interstates now have exit number signage, particularly in rural areas. California, however, still does not use mileposts, although a few exist for experiments or for special purposes. In 2010–2011, the Illinois State Toll Highway Authority posted all new mile markers to be uniform with the rest of the state on I‑90 (Jane Addams Memorial/Northwest Tollway) and the I‑94 section of the Tri‑State Tollway, which previously had matched the I‑294 section starting in the south at I‑80/I‑94/IL Route 394. This also applied to the tolled portion of the Ronald Reagan Tollway (I-88). The tollway also added exit number tabs to the exits. Exit numbers correspond to Interstate mileage markers in most states. 
On I‑19 in Arizona, however, length is measured in kilometers instead of miles because, at the time of construction, a push for the United States to change to a metric system of measurement had gained enough traction that it was mistakenly assumed that all highway measurements would eventually be changed to metric (and some distance signs retain metric distances); proximity to metric-using Mexico may also have been a factor, as I‑19 indirectly connects I‑10 to the Mexican Federal Highway system via surface streets in Nogales. Mileage count increases from west to east on most even-numbered Interstates; on odd-numbered Interstates mileage count increases from south to north. Some highways, including the New York State Thruway, use sequential exit-numbering schemes. Exits on the New York State Thruway count up from Yonkers traveling north, and then west from Albany. I‑87 in New York State is numbered in three sections. The first section makes up the Major Deegan Expressway in the Bronx, with interchanges numbered sequentially from 1 to 14. The second section of I‑87 is a part of the New York State Thruway that starts in Yonkers (exit 1) and continues north to Albany (exit 24); at Albany, the Thruway turns west and becomes I‑90 for exits 25 to 61. From Albany north to the Canadian border, the exits on I‑87 are numbered sequentially from 1 to 44 along the Adirondack Northway. This often leads to confusion as there is more than one exit on I‑87 with the same number. For example, exit 4 on Thruway section of I‑87 connects with the Cross County Parkway in Yonkers, but exit 4 on the Northway is the exit for the Albany airport. These two exits share a number but are located apart. Many northeastern states label exit numbers sequentially, regardless of how many miles have passed between exits. States in which Interstate exits are still numbered sequentially are Connecticut, Delaware, New Hampshire, New York, and Vermont; as such, three of the main Interstate Highways that remain completely within these states (87, 88, 89) have interchanges numbered sequentially along their entire routes. Maine, Massachusetts, Pennsylvania, Virginia, Georgia, and Florida followed this system for a number of years, but have since converted to mileage-based exit numbers. Georgia renumbered in 2000, while Maine did so in 2004. Massachusetts converted its exit numbers in 2021, and most recently Rhode Island in 2022. The Pennsylvania Turnpike uses both mile marker numbers and sequential numbers. Mile marker numbers are used for signage, while sequential numbers are used for numbering interchanges internally. The New Jersey Turnpike, including the portions that are signed as I‑95 and I‑78, also has sequential numbering, but other Interstates within New Jersey use mile markers. Sign locations There are four common signage methods on Interstates: Locating a sign on the ground to the side of the highway, mostly the right, and is used to denote exits, as well as rest areas, motorist services such as gas and lodging, recreational sites, and freeway names Attaching the sign to an overpass Mounting on full gantries that bridge the entire width of the highway and often show two or more signs Mounting on half-gantries that are located on one side of the highway, like a ground-mounted sign Statistics Volume Heaviest traveled: 379,000 vehicles per day: I-405 in Los Angeles, California (2011 estimate). Elevation Highest: : I-70 in the Eisenhower Tunnel at the Continental Divide in the Colorado Rocky Mountains. 
Lowest (land): : I-8 at the New River near Seeley, California. Lowest (underwater): : I-95 in the Fort McHenry Tunnel under the Baltimore Inner Harbor. Length Longest (east–west): : I-90 from Boston, Massachusetts, to Seattle, Washington. Longest (north–south): : I-95 from the Canadian border near Houlton, Maine, to Miami, Florida. Shortest (two-digit): : I-69W in Laredo, Texas. Shortest (auxiliary): : I-878 in Queens, New York, New York. Longest segment between state lines: : I-10 in Texas from the New Mexico state line near El Paso to the Louisiana state line near Orange, Texas. Shortest segment between state lines: : I-95/I-495 (Capital Beltway) on the Woodrow Wilson Bridge across the Potomac River where they briefly cross the southernmost tip of the District of Columbia between its borders with Maryland and Virginia. Longest concurrency: : I-80 and I-90; Gary, Indiana, to Elyria, Ohio. States Most states served by an Interstate: 15 states plus the District of Columbia: I-95 through Florida, Georgia, South Carolina, North Carolina, Virginia, DC, Maryland, Delaware, Pennsylvania, New Jersey, New York, Connecticut, Rhode Island, Massachusetts, New Hampshire, and Maine. Most Interstates in a state: 32 routes: New York, totaling Most primary Interstates in a state: 13 routes: Illinois Most Interstate mileage in a state: : Texas, in 17 different routes. Fewest Interstates in a state: 3 routes: Delaware, New Mexico, North Dakota, and Rhode Island. Puerto Rico also has 3 routes. Fewest primary Interstates in a state: 1 route: Delaware, Maine, and Rhode Island (I-95 in each case). Least Interstate mileage in a state: : Delaware, in 3 different routes. Impact and reception Following the passage of the Federal Aid Highway Act of 1956, passenger rail declined sharply as did freight rail for a short time, but the trucking industry expanded dramatically and the cost of shipping and travel fell sharply. Suburbanization became possible, with the rapid growth of larger, sprawling, and more car-dependent housing than was available in central cities, enabling racial segregation by white flight. A sense of isolationism developed in suburbs, with suburbanites wanting to keep urban areas disconnected from the suburbs. Tourism dramatically expanded, creating a demand for more service stations, motels, restaurants and visitor attractions. The Interstate System was the basis for urban expansion in the Sun Belt, and many urban areas in the region are thus very car-dependent. The highways may have contributed to increased economic productivity in, and thereby increased migration to, the Sun Belt. In rural areas, towns and small cities off the grid lost out as shoppers followed the interstate and new factories were located near them. The system had a profound effect on interstate shipping. The Interstate Highway System was being constructed at the same time as the intermodal shipping container made its debut. These containers could be placed on trailers behind trucks and shipped across the country with ease. A new road network and shipping containers that could be easily moved from ship to train to truck, meant that overseas manufacturers and domestic startups could get their products to market quicker than ever, allowing for accelerated economic growth. Forty years after its construction, the Interstate Highway system returned on investment, making $6 for every $1 spent on the project. 
According to research by the FHWA, "from 1950 to 1989, approximately one-quarter of the nation's productivity increase is attributable to increased investment in the highway system." The system had a particularly strong effect in Southern states, where major highways were inadequate. The new system facilitated the relocation of heavy manufacturing to the South and spurred the development of Southern-based corporations like Walmart (in Arkansas) and FedEx (in Tennessee). The Interstate Highway System also dramatically affected American culture, contributing to cars becoming more central to the American identity. Before, driving was considered an excursion that required some amount of skill and could have some chance of unpredictability. With the standardization of signs, road widths and rules, certain unpredictabilities lessened. Justin Fox wrote, "By making road more reliable and by making Americans more reliant on them, they took away most of the adventure and romance associated with driving." The Interstate Highway System has been criticized for contributing to the decline of some cities that were divided by Interstates, and for displacing minority neighborhoods in urban centers. Between 1957 and 1977, the Interstate System alone displaced over 475,000 households and one million people across the country. Highways have also been criticized for increasing racial segregation by creating physical barriers between neighborhoods, and for overall reductions in available housing and population in neighborhoods affected by highway construction. Other critics have blamed the Interstate Highway System for the decline of public transportation in the United States since the 1950s, which minorities and low-income residents are three to six times more likely to use. Previous highways, such as US 66, were also bypassed by the new Interstate system, turning countless rural communities along the way into ghost towns. The Interstate System has also contributed to continued resistance against new public transportation. The Interstate Highway System had a negative impact on minority groups, especially in urban areas. Even though the government used eminent domain to obtain land for the Interstates, it was still economical to build where land was cheapest. This cheap land was often located in predominately minority areas. Not only were minority neighborhoods destroyed, but in some cities the Interstates were used to divide white and minority neighborhoods. These practices were common in cities both in the North and South, including Nashville, Miami, Chicago, Detroit, and many other cities. The division and destruction of neighborhoods led to the limitation of employment and other opportunities, which deteriorated the economic fabric of neighborhoods. Neighborhoods bordering Interstates have a much higher level of particulate air pollution and are more likely to be chosen for polluting industrial facilities.
Technology
Ground transportation networks
null
43958
https://en.wikipedia.org/wiki/Laboratory%20glassware
Laboratory glassware
Laboratory glassware is a variety of equipment used in scientific work, traditionally made of glass. Glass may be blown, bent, cut, molded, or formed into many sizes and shapes. It is commonly used in chemistry, biology, and analytical laboratories. Many laboratories have training programs to demonstrate how glassware is used and to alert first-time users to the safety hazards involved with using glassware. History Ancient era The history of glassware dates back to the Phoenicians, who fused obsidian together in campfires, making the first glassware. Glassware evolved as other ancient civilizations, including the Syrians, Egyptians, and Romans, refined the art of glassmaking. Mary the Jewess, an alchemist in Alexandria during the 1st century AD, is credited with creating some of the first glassware for chemical use, such as the kerotakis, which was used for the collection of fumes from a heated material. Despite these creations, glassware for chemical uses was still limited during this time because glass lacked the thermal stability necessary for experimentation, so chemical apparatus was primarily made using copper or ceramic materials. Early modern era Glassware improved once again during the 14th–16th centuries, with the skill and knowledge of glass makers in Venice. During this time, the Venetians gathered knowledge about glassmaking from the East, with information coming from Syria and the Byzantine Empire. Along with knowledge about glassmaking, glassmakers in Venice also received higher quality raw materials from the East, such as imported plant ash, which contained a higher soda content compared to plant ash from other areas. This combination of better raw materials and information from the East led to the production of clearer glass with higher thermal and chemical durability, leading towards the shift to the use of glassware in laboratories. Modern era Many glasses that were produced in bulk in the 1830s would quickly become unclear and dirty because of the low quality of the glass being used. During the 19th century, more chemists began to recognize the importance of glassware due to its transparency and the ability to control the conditions of experiments. Jöns Jacob Berzelius, who invented the test tube, and Michael Faraday both contributed to the rise of chemical glassblowing. Faraday published Chemical Manipulation in 1827, which detailed the process for creating many types of small tube glassware and some experimental techniques for tube chemistry. Berzelius wrote a similar textbook titled Chemical Operations and Apparatus, which provided a variety of chemical glassblowing techniques. The rise of this chemical glassblowing widened the availability of chemical experimentation and led to a shift towards the dominant use of glassware in laboratories. With the emergence of glassware in laboratories, the need for organization and standards arose. The Prussian Society for the Advancement of Industry was one of the earliest organizations to support the collaborative improvement of the quality of glass used. Following the development of borosilicate glass by Otto Schott in the late 19th century, most laboratory glassware was manufactured in Germany up until the start of World War I. Before World War I, glass producers in the United States had difficulty competing with German laboratory glassware manufacturers because laboratory glassware was classified as educational material and was not subject to an import tax. During World War I, the supply of laboratory glassware to the United States was cut off. 
In 1915 Corning Glassworks developed their own borosilicate glass, introduced under the name Pyrex. This was a boon to the war effort in the United States. Though many laboratories turned back to imports after the war ended, research into better glassware flourished. Glassware became more resistant to thermal shock while maintaining chemical inertness. During the 1920s efforts to standardise the dimensions of laboratory glassware began, particularly for ground glass joints, with some manufacturer specific standardisation beginning to occur around this time. Commercial standards began development around 1930, allowing the compatibility of joints between different manufacturers for the first time, along with other features. This quickly led to the high degree of standardisation and modularity seen in modern glassware. Laboratory glassware selection Laboratory glassware is typically selected by a person in charge of a particular laboratory analysis to match the needs of a given task. The task may require a piece of glassware made with a specific type of glass. The task may be readily performed using low cost, mass-produced glassware, or it may require a specialized piece created by a glass blower. The task may require controlling the flow of fluid. The task may have distinctive quality assurance requirements. Type of glass Laboratory glassware may be made from several types of glass, each with different capabilities and used for different purposes. Borosilicate glass is a type of transparent glass that is composed of boron oxide and silica, its main feature is a low coefficient of thermal expansion making it more resistant to thermal shock than most other glasses. Quartz glass can withstand very high temperatures and is transparent in certain parts of the electromagnetic spectrum. Darkened brown or amber (actinic) glass can block ultraviolet and infrared radiation. Heavy-wall glass can withstand pressurized applications. Fritted glass is finely porous glass through which gas or liquid may pass. Coated glassware is specially treated to reduce the occurrence of breakage or failure. Silanized (siliconized) glassware is specially treated to prevent organic samples from sticking to the glass. Scientific glass blowing Scientific glass blowing, which is practiced in some larger laboratories, is a specialized field of glassblowing. Scientific glassblowing involves precisely controlling the shape and dimension of glass, repairing expensive or difficult-to-replace glassware, and fusing together various glass parts. Many parts are available fused to a length of glass tubing to create highly specialized piece of laboratory glassware. Controlling fluid flow When using glassware it is often necessary to control the flow of fluid. It is commonly stopped with a stopper. Fluid may be transported between connected pieces of glassware. Types of interconnecting components include glass tubing, T-connectors, Y-connectors, and glass adapters. For a leak-tight connection a ground glass joint is used (possibly reinforced using a clamping method such as a Keck clips). Another way to connect glassware is with a hose barb and flexible tubing. Fluid flow can be switched selectively using a valve, of which a stopcock is a common type fused to the glassware. Valves made entirely of glass may be used to restrict fluid flows. Fluid, or any material which flows, can be directed into a narrow opening using a funnel. Quality assurance Metrology Laboratory glassware can be used for high precision volumetric measurements. 
With high precision measurements, such as those made in a testing laboratory, the metrological grade of the glassware becomes important. The metrological grade can then be determined by both the confidence interval around the nominal value of measurement marks and the traceability of the calibration to a NIST standard. Periodically it may be necessary to check the calibration of the laboratory glassware. Dissolved silica Laboratory glassware is composed of silica, which is considered insoluble in most substances, with a few exceptions such as hydrofluoric acid or strong alkali hydroxides. Though insoluble, a minute quantity of silica will dissolve in neutral water, which may affect high precision, low threshold measurements of silica in water. Cleaning Cleaning laboratory glassware is a frequent necessity and may be done using multiple methods depending on the nature of the contamination and the purity requirements of its use. Glassware can be soaked in a detergent solution to remove grease and loosen most contamination; it is then scrubbed with a brush or scouring pad to remove particles which cannot simply be rinsed off. Sturdy glassware may be able to withstand sonication as an alternative to scrubbing. Solvents are used to remove organic residues that soap cannot remove, and inorganic residues that do not dissolve in water can often be dissolved with a dilute acid. When cleaning is finished, it is common practice to rinse glassware multiple times, often finally with deionised water, before suspending it upside down on drying racks. Specialised dishwashers can be used to automate these cleaning methods. Resistant residues may require more powerful cleaning methods. Base baths are commonly used for organic residues, although the strong alkaline conditions do slowly dissolve the glass itself, and concentrated hydrochloric acid is common for removing inorganic residues. Even more severe methods exist, such as acidic peroxide (piranha solution), aqua regia, and chromic acid, but these are considered somewhat of a last resort due to the hazards of using them, and their use by students is restricted in many institutions. For certain sensitive experiments, glassware may require specialised procedures and ultra-pure water or solvents to dissolve trace quantities of specific contaminants known to interfere with an experiment. Examples There are many different kinds of laboratory glassware items: Examples of glassware containers include: Beakers are simple cylindrical containers used to hold reagents or samples. Flasks are narrow-necked glass containers, typically conical or spherical, used in a laboratory to hold reagents or samples. Examples of flasks include the Erlenmeyer flask, Florence flask, and Schlenk flask. Reagent bottles are containers with narrow openings generally used to store reagents or samples. Small bottles are called vials. Jars are cylindrical containers with wide openings that may be sealed. Bell jars are used to contain vacuums. Test tubes are used by chemists to hold, mix, or heat small quantities of solid or liquid chemicals, especially for qualitative experiments and assays. Desiccators of glass construction are used to dry materials or keep material dry. Glass evaporating dishes, such as watch glasses, are primarily used as an evaporating surface (though they may also be used to cover a beaker). The Petri dish is a flat dish filled with a nutrient gelatin that allows microorganisms to grow quickly; it is named after its inventor, Julius Petri, who introduced it in the 1880s. 
Microscope slides are thin glass strips used to hold items under a microscope. Examples of glassware used for measurements include: Graduated cylinders are thin and tall cylindrical containers used for volumetric measurements. Volumetric flasks are for measuring a specific volume of fluid. Burettes are similar to graduated cylinders but have a valve at the end used to dispense precise amounts of liquid reagents, often for titrations. Glass pipettes are used to transfer precise quantities of fluids. Glass ebulliometers are used to accurately measure the boiling point of liquids. Other examples of glassware include: Stirring rods are glass rods used to mix chemicals. Condensers are used to condense vapors by cooling them down and turning them into liquids. Glass retorts are used for distillation by heating; they have a bulb with a long, curved spout. Drying pistols are used to free samples from traces of water or other volatile impurities.
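To make the metrology discussion above concrete, here is a minimal sketch, assuming a gravimetric check of a volumetric flask: the mass of water delivered is converted to a volume using an assumed water density and compared against a stated tolerance around the nominal mark. The tolerance, density value, and function names are illustrative assumptions, not figures from any particular standard or manufacturer.

```python
# Minimal sketch of a gravimetric calibration check for a volumetric flask.
# All numbers here are illustrative assumptions, not values from a standard.

def delivered_volume_ml(mass_g: float, water_density_g_per_ml: float = 0.99820) -> float:
    """Convert the mass of delivered water to a volume (density near 20 C assumed)."""
    return mass_g / water_density_g_per_ml

def within_tolerance(measured_ml: float, nominal_ml: float, tolerance_ml: float) -> bool:
    """True if the measured volume lies inside the +/- tolerance band around the nominal value."""
    return abs(measured_ml - nominal_ml) <= tolerance_ml

if __name__ == "__main__":
    nominal_ml = 100.0       # marked (nominal) capacity of the flask
    tolerance_ml = 0.08      # hypothetical tolerance for this grade of flask
    mass_of_water_g = 99.85  # mass of water delivered by the flask, from a balance

    volume = delivered_volume_ml(mass_of_water_g)
    print(f"delivered volume: {volume:.3f} mL")
    print("within tolerance" if within_tolerance(volume, nominal_ml, tolerance_ml)
          else "recalibration needed")
```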
Physical sciences
Research methods
Basics and measurement
43970
https://en.wikipedia.org/wiki/Calorimeter
Calorimeter
A calorimeter is a device used for calorimetry, or the process of measuring the heat of chemical reactions or physical changes as well as heat capacity. Differential scanning calorimeters, isothermal microcalorimeters, titration calorimeters and accelerated rate calorimeters are among the most common types. A simple calorimeter just consists of a thermometer attached to a metal container full of water suspended above a combustion chamber. It is one of the measurement devices used in the study of thermodynamics, chemistry, and biochemistry. To find the enthalpy change per mole of a substance A in a reaction between two substances A and B, the substances are separately added to a calorimeter and the initial and final temperatures (before the reaction has started and after it has finished) are noted. Multiplying the temperature change by the mass and specific heat capacities of the substances gives a value for the energy given off or absorbed during the reaction. Dividing the energy change by how many moles of A were present gives its enthalpy change of reaction. In terms of the calorimeter itself, the heat is q = C ΔT, where q is the amount of heat determined from the change in temperature, measured in joules, and C is the heat capacity of the calorimeter, a value associated with each individual apparatus, in units of energy per temperature (joules/kelvin). History In 1761, Joseph Black introduced the idea of latent heat, which led to the creation of the first ice calorimeters. In 1780, Antoine Lavoisier used the heat released by the respiration of a guinea pig to melt snow surrounding his apparatus, showing that respiratory gas exchange is a form of combustion, similar to the burning of a candle. Lavoisier named this apparatus 'calorimeter', based on both Greek and Latin roots. One of the first ice calorimeters was used in the winter of 1782–83 by Lavoisier and Pierre-Simon Laplace. It relied on the heat required for the melting of ice to measure the heat released in various chemical reactions. Adiabatic calorimeters An adiabatic calorimeter is a calorimeter used to examine a runaway reaction. Since the calorimeter runs in an adiabatic environment, any heat generated by the material sample under test causes the sample to increase in temperature, thus fueling the reaction. No adiabatic calorimeter is fully adiabatic; some heat will be lost by the sample to the sample holder. A mathematical correction factor, known as the phi-factor, can be used to adjust the calorimetric result to account for these heat losses. The phi-factor is the ratio of the thermal mass of the sample and sample holder to the thermal mass of the sample alone. Reaction calorimeters A reaction calorimeter is a calorimeter in which a chemical reaction is initiated within a closed insulated container. Reaction heats are measured and the total heat is obtained by integrating heat flow versus time. This is the standard used in industry to measure heats since industrial processes are engineered to run at constant temperatures. Reaction calorimetry can also be used to determine the maximum heat release rate for chemical process engineering and for tracking the global kinetics of reactions. There are four main methods for measuring the heat in a reaction calorimeter: Heat flow calorimeter The cooling/heating jacket controls either the temperature of the process or the temperature of the jacket. Heat is measured by monitoring the temperature difference between the heat transfer fluid and the process fluid. In addition, fill volumes (i.e. 
wetted area), specific heat, and heat transfer coefficient have to be determined to arrive at a correct value. It is possible with this type of calorimeter to do reactions at reflux, although it is much less accurate. Heat balance calorimeter The cooling/heating jacket controls the temperature of the process. Heat is measured by monitoring the heat gained or lost by the heat transfer fluid. Power compensation Power compensation uses a heater placed within the vessel to maintain a constant temperature. The energy supplied to this heater can be varied as reactions require and the calorimetry signal is purely derived from this electrical power. Constant flux Constant flux calorimetry (or COFLUX as it is often termed) is derived from heat balance calorimetry and uses specialized control mechanisms to maintain a constant heat flow (or flux) across the vessel wall. Bomb calorimeters A bomb calorimeter is a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Bomb calorimeters have to withstand the large pressure within the calorimeter as the reaction is being measured. Electrical energy is used to ignite the fuel; as the fuel is burning, it will heat up the surrounding air, which expands and escapes through a tube that leads the air out of the calorimeter. As the air escapes through the copper tube, it also heats up the water outside the tube. The change in temperature of the water allows for calculating the calorie content of the fuel. In more recent calorimeter designs, the whole bomb, pressurized with excess pure oxygen (typically at ) and containing a weighed mass of a sample (typically 1–1.5 g) and a small fixed amount of water (to saturate the internal atmosphere, thus ensuring that all water produced is liquid, and removing the need to include enthalpy of vaporization in calculations), is submerged under a known volume of water (ca. 2000 ml) before the charge is electrically ignited. The bomb, with the known mass of the sample and oxygen, forms a closed system: no gases escape during the reaction. The weighed reactant put inside the steel container is then ignited. Energy is released by the combustion and the heat flow from this crosses the stainless steel wall, thus raising the temperature of the steel bomb, its contents, and the surrounding water jacket. The temperature change in the water is then accurately measured with a thermometer. This reading, along with a bomb factor (which is dependent on the heat capacity of the metal bomb parts), is used to calculate the energy given out by the sample burn. A small correction is made to account for the electrical energy input, the burning fuse, and acid production (by titration of the residual liquid). After the temperature rise has been measured, the excess pressure in the bomb is released. At its core, a bomb calorimeter consists of a small cup to contain the sample, oxygen, a stainless steel bomb, water, a stirrer, a thermometer, the dewar or insulating container (to prevent heat flow from the calorimeter to its surroundings) and an ignition circuit connected to the bomb. By using stainless steel for the bomb, the reaction will occur with no volume change observed. 
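As a rough illustration of the bomb-calorimeter calculation described above, the sketch below converts a measured temperature rise into a heat of combustion using a bomb factor (the effective heat capacity of the bomb, its contents, and the water jacket) and subtracts a fuse-wire correction. All numbers and names are hypothetical placeholders; a real analysis would also apply the acid and electrical-energy corrections mentioned above and would use a bomb factor obtained by calibration with benzoic acid.

```python
# Illustrative bomb-calorimeter calculation (hypothetical numbers throughout).

def gross_heat_of_combustion(delta_t_k: float,
                             bomb_factor_j_per_k: float,
                             fuse_mass_g: float,
                             fuse_heat_j_per_g: float,
                             sample_mass_g: float) -> float:
    """Return the heat of combustion of the sample in J/g.

    Total heat released = bomb factor * temperature rise.
    The heat contributed by the burned fuse wire is subtracted before
    normalising by the sample mass."""
    total_heat_j = bomb_factor_j_per_k * delta_t_k
    fuse_heat_j = fuse_mass_g * fuse_heat_j_per_g
    return (total_heat_j - fuse_heat_j) / sample_mass_g

if __name__ == "__main__":
    # Hypothetical run: 1.10 g sample, 2.75 K rise, bomb factor 10.5 kJ/K,
    # 0.012 g of fuse wire burned with an assumed heat of combustion of 5.9 kJ/g.
    q = gross_heat_of_combustion(delta_t_k=2.75,
                                 bomb_factor_j_per_k=10_500.0,
                                 fuse_mass_g=0.012,
                                 fuse_heat_j_per_g=5_900.0,
                                 sample_mass_g=1.10)
    print(f"heat of combustion ≈ {q / 1000:.1f} kJ/g")
```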
Since there is no heat exchange between the calorimeter and its surroundings (Q = 0; adiabatic) and no work is performed (W = 0), the total internal energy change is ΔU(total) = Q + W = 0. Also, because the volume is constant, the internal energy change of the bomb and its contents is ΔU = Cv ΔT, where Cv is the heat capacity of the bomb. Before the bomb can be used to determine the heat of combustion of any compound, it must be calibrated: the value of Cv can be estimated from the measured temperature change once the heat released inside the bomb is known. In the laboratory, Cv is determined by running a compound with a known heat of combustion value; common compounds are benzoic acid or p-methyl benzoic acid. Temperature is recorded every minute, and the temperature change ΔT is taken from the initial and final readings. A small factor contributing to the correction of the total heat of combustion is the fuse wire. Nickel fuse wire is often used and has a heat of combustion of 981.2 cal/g. In order to calibrate the bomb, a small amount (~1 g) of benzoic acid, or p-methyl benzoic acid, is weighed. A length of nickel fuse wire (~10 cm) is weighed both before and after the combustion process; the mass of fuse wire burned is the difference between the two. The combustion of the sample (benzoic acid) inside the bomb then gives Cv = (CBA mBA + CW mW) / ΔT, where CBA is the known heat of combustion per gram of benzoic acid, mBA the sample mass, CW the heat of combustion of the wire, and mW the mass of wire burned. Once the Cv value of the bomb is determined, the bomb is ready to be used to calculate the heat of combustion of any compound from its measured ΔT, sample mass, and burned wire mass. Combustion of non-flammables The higher pressure and concentration of oxygen in the bomb system can render combustible some compounds that are not normally flammable. Some substances do not combust completely, making the calculations harder as the remaining mass has to be taken into consideration, making the possible error considerably larger and compromising the data. When working with compounds that are not as flammable (that might not combust completely), one solution is to mix the compound with a flammable compound with a known heat of combustion and make a pellet with the mixture. Once the Cv of the bomb is known, along with the heat of combustion of the flammable compound (CFC), that of the wire (CW), the corresponding masses (mFC and mW), and the temperature change (ΔT), the heat of combustion of the less flammable compound (CLFC) can be calculated from its mass mLFC as: CLFC = (Cv ΔT − CFC mFC − CW mW) / mLFC. Calvet-type calorimeters The detection is based on a three-dimensional fluxmeter sensor. The fluxmeter element consists of a ring of several thermocouples in series. The corresponding thermopile of high thermal conductivity surrounds the experimental space within the calorimetric block. The radial arrangement of the thermopiles guarantees an almost complete integration of the heat. This is verified by the calculation of the efficiency ratio, which indicates that an average value of 94% ± 1% of the heat is transmitted through the sensor over the full temperature range of the Calvet-type calorimeter. In this setup, the sensitivity of the calorimeter is not affected by the crucible, the type of purge gas, or the flow rate. The main advantage of the setup is the increase of the experimental vessel's size and consequently the size of the sample, without affecting the accuracy of the calorimetric measurement. The calibration of the calorimetric detectors is a key parameter and has to be performed very carefully. For Calvet-type calorimeters, a specific calibration, the so-called Joule effect or electrical calibration, has been developed to overcome all the problems encountered by a calibration done with standard materials. The main advantages of this type of calibration are as follows: It is an absolute calibration. The use of standard materials for calibration is not necessary. The calibration can be performed at a constant temperature, in the heating mode and in the cooling mode. 
It can be applied to any experimental vessel volume. It is a very accurate calibration. An example of a Calvet-type calorimeter is the C80 Calorimeter (reaction, isothermal and scanning calorimeter). Adiabatic and Isoperibol calorimeters Sometimes referred to as constant-pressure calorimeters, adiabatic calorimeters measure the change in enthalpy of a reaction occurring in solution, during which no heat exchange with the surroundings is allowed (adiabatic) and the atmospheric pressure remains constant. An example is a coffee-cup calorimeter, which is constructed from two nested Styrofoam cups, providing insulation from the surroundings, and a lid with two holes, allowing insertion of a thermometer and a stirring rod. The inner cup holds a known amount of a solvent, usually water, that absorbs the heat from the reaction. When the reaction occurs, the outer cup provides insulation. The heat absorbed by the solvent is then q = cp m ΔT, where cp is the specific heat at constant pressure, ΔT is the change in temperature, and m is the mass of solvent; from this heat, the enthalpy of solution ΔH is obtained (M denotes the molecular mass of the solvent). The measurement of heat using a simple calorimeter, like the coffee cup calorimeter, is an example of constant-pressure calorimetry, since the pressure (atmospheric pressure) remains constant during the process. Constant-pressure calorimetry is used in determining the changes in enthalpy occurring in solution. Under these conditions the change in enthalpy equals the heat. Commercial calorimeters operate in a similar way. The semi-adiabatic (isoperibol) calorimeters measure temperature changes up to 10 °C and account for heat loss through the walls of the reaction vessel to the environment, hence, semi-adiabatic. The reaction vessel is a dewar flask which is immersed in a constant temperature bath. This provides a constant heat leak rate that can be corrected through the software. The heat capacity of the reactants (and the vessel) is measured by introducing a known amount of heat using a heater element (voltage and current) and measuring the temperature change. Adiabatic calorimeters are most commonly used in materials science research to study reactions that occur at a constant pressure and volume. They are particularly useful for determining the heat capacity of substances, measuring the enthalpy changes of chemical reactions, and studying the thermodynamic properties of materials. Differential scanning calorimeter In a differential scanning calorimeter (DSC), heat flow into a sample, usually contained in a small aluminium capsule or 'pan', is measured differentially, i.e., by comparing it to the flow into an empty reference pan. In a heat flux DSC, both pans sit on a small slab of material with a known (calibrated) heat resistance K. The temperature of the calorimeter is raised linearly with time (scanned), i.e., the heating rate dT/dt = β is kept constant. This time linearity requires good design and good (computerized) temperature control. Of course, controlled cooling and isothermal experiments are also possible. Heat flows into the two pans by conduction. The flow of heat into the sample is larger because of its heat capacity Cp. The difference in flow dq/dt induces a small temperature difference ΔT across the slab. This temperature difference is measured using a thermocouple. The heat capacity can in principle be determined from this signal: dq/dt = ΔT / K, so that Cp = (dq/dt) / β = ΔT / (K β). Note that this formula (equivalent to Newton's law of heat flow) is analogous to, and much older than, Ohm's law of electric flow: I = V / R. 
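To make the heat-flux DSC relation above concrete, the following minimal sketch converts the differential signal ΔT into an apparent sample heat capacity via Cp ≈ ΔT / (K β). The numerical values are invented for illustration, and baseline subtraction, calibration drift, and sign conventions used by real instruments are ignored.

```python
# Hypothetical heat-flux DSC example: heat capacity from the differential signal.

def heat_capacity_from_dsc(delta_t_k: float,
                           thermal_resistance_k_per_w: float,
                           scan_rate_k_per_s: float) -> float:
    """Cp ~ (dq/dt) / beta, with dq/dt = delta_T / K (Newton's law of heat flow)."""
    heat_flow_w = delta_t_k / thermal_resistance_k_per_w
    return heat_flow_w / scan_rate_k_per_s  # J/K

if __name__ == "__main__":
    # Assumed values: 0.02 K differential signal, K = 6 K/W, scan rate 10 K/min.
    beta_k_per_s = 10.0 / 60.0
    cp = heat_capacity_from_dsc(0.02, 6.0, beta_k_per_s)
    print(f"apparent sample heat capacity ≈ {cp:.3f} J/K")
```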
When heat is suddenly absorbed by the sample (e.g., when the sample melts), the signal will respond and exhibit a peak. From the integral of this peak the enthalpy of melting can be determined, and from its onset the melting temperature. Differential scanning calorimetry is a workhorse technique in many fields, particularly in polymer characterization. A modulated temperature differential scanning calorimeter (MTDSC) is a type of DSC in which a small oscillation is imposed upon the otherwise linear heating rate. This has a number of advantages. It facilitates the direct measurement of the heat capacity in one measurement, even in (quasi-)isothermal conditions. It permits the simultaneous measurement of heat effects that respond to a changing heating rate (reversing) and that don't respond to the changing heating rate (non-reversing). It allows for the optimization of both sensitivity and resolution in a single test by allowing for a slow average heating rate (optimizing resolution) and a fast changing heating rate (optimizing sensitivity). A DSC may also be used as an initial safety screening tool. In this mode the sample is housed in a non-reactive crucible (often gold, or gold-plated steel) which is able to withstand pressure (typically up to 100 bar). The presence of an exothermic event can then be used to assess the stability of a substance to heat. However, due to a combination of relatively poor sensitivity, slower than normal scan rates (typically 2–3 °C per min) due to the much heavier crucible, and unknown activation energy, it is necessary to deduct about 75–100 °C from the initial start of the observed exotherm to suggest a maximum temperature for the material. A much more accurate data set can be obtained from an adiabatic calorimeter, but such a test may take 2–3 days from ambient at a rate of a 3 °C increment per half hour. Isothermal titration calorimeter In an isothermal titration calorimeter, the heat of reaction is used to follow a titration experiment. This permits determination of the midpoint (stoichiometry) (N) of a reaction as well as its enthalpy (delta H), entropy (delta S) and, of primary concern, the binding affinity (Ka). The technique is gaining in importance, particularly in the field of biochemistry, because it facilitates determination of substrate binding to enzymes. The technique is commonly used in the pharmaceutical industry to characterize potential drug candidates. Continuous reaction calorimeter The continuous reaction calorimeter is especially suitable for obtaining thermodynamic information for the scale-up of continuous processes in tubular reactors. This is useful because the released heat can strongly depend on the reaction control, especially for non-selective reactions. With the continuous reaction calorimeter, an axial temperature profile along the tube reactor can be recorded and the specific heat of reaction can be determined by means of heat balances and segmental dynamic parameters. The system consists of a tubular reactor, dosing systems, preheaters, temperature sensors and flow meters. In traditional heat flow calorimeters, one reactant is added continuously in small amounts, similar to a semi-batch process, in order to obtain a complete conversion of the reaction. In contrast to the tubular reactor, this leads to longer residence times, different substance concentrations and flatter temperature profiles. Thus, the selectivity of reactions that are not well defined can be affected. 
This can lead to the formation of by-products or consecutive products which alter the measured heat of reaction, since other bonds are formed. The amount of by-product or secondary product can be found by calculating the yield of the desired product. If the heats of reaction measured in the HFC (heat flow calorimeter) and the PFR calorimeter differ, it is most likely that some side reactions have occurred. They could, for example, be caused by different temperatures and residence times. The total measured Qr is composed of partially overlapping reaction enthalpies (ΔHr) of main and side reactions, depending on their degrees of conversion (U). Calorimetry in geothermal reactors Calorimeters can be used to measure the efficiency of geothermal energy conversion processes. Through measuring the heat input and output of the process, engineers can determine how effective the plant is at converting geothermal energy into usable electricity or other forms of energy. Calorimeters can also monitor the quality of the steam extracted from the geothermal resource. By analyzing the heat content of the steam, engineers can ensure that the resource meets the required specifications for efficient energy production.
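As a rough sketch of the heat-balance idea used in continuous reaction calorimetry above, the code below sums a simple per-segment energy balance along a tubular reactor: in each segment, the heat released by reaction is estimated as the sensible heating of the process stream plus the heat passed to the cooling jacket. The flow rate, heat-transfer term, and temperature profile are invented placeholders, and the dynamic (time-dependent) terms used in real evaluations are omitted.

```python
# Hypothetical segmental heat balance along a tubular reactor calorimeter.

def reaction_heat_from_profile(temps_c, t_jacket_c, m_dot_kg_s, cp_j_kg_k, ua_w_per_k_segment):
    """Estimate the total reaction heat (W) from an axial temperature profile.

    For each segment: Q_rxn,i ~ m_dot*cp*(T_out - T_in) + UA_segment*(T_mean - T_jacket),
    i.e. sensible heating of the stream plus heat exchanged with the jacket."""
    q_total = 0.0
    for t_in, t_out in zip(temps_c, temps_c[1:]):
        sensible = m_dot_kg_s * cp_j_kg_k * (t_out - t_in)
        to_jacket = ua_w_per_k_segment * ((t_in + t_out) / 2.0 - t_jacket_c)
        q_total += sensible + to_jacket
    return q_total

if __name__ == "__main__":
    profile_c = [25.0, 32.0, 38.0, 41.0, 42.0]  # assumed axial temperature readings
    q = reaction_heat_from_profile(profile_c, t_jacket_c=25.0,
                                   m_dot_kg_s=0.02, cp_j_kg_k=4180.0,
                                   ua_w_per_k_segment=5.0)
    print(f"estimated heat of reaction ≈ {q:.0f} W")
```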
Technology
Measuring instruments
null
43972
https://en.wikipedia.org/wiki/Partial%20pressure
Partial pressure
In a mixture of gases, each constituent gas has a partial pressure which is the notional pressure of that constituent gas as if it alone occupied the entire volume of the original mixture at the same temperature. The total pressure of an ideal gas mixture is the sum of the partial pressures of the gases in the mixture (Dalton's Law). The partial pressure of a gas is a measure of the thermodynamic activity of the gas's molecules. Gases dissolve, diffuse, and react according to their partial pressures but not according to their concentrations in gas mixtures or liquids. This general property of gases is also true in chemical reactions of gases in biology. For example, the necessary amount of oxygen for human respiration, and the amount that is toxic, is set by the partial pressure of oxygen alone. This is true across a very wide range of different concentrations of oxygen present in various inhaled breathing gases or dissolved in blood; consequently, mixture ratios, like that of breathable 20% oxygen and 80% nitrogen, are determined by volume instead of by weight or mass. Furthermore, the partial pressures of oxygen and carbon dioxide are important parameters in tests of arterial blood gases. That said, these pressures can also be measured in, for example, cerebrospinal fluid. Symbol The symbol for pressure is usually P or p, which may use a subscript to identify the pressure, and gas species are also referred to by subscript. When combined, these subscripts are applied recursively. Examples: P1 or p1 = pressure at time 1; PH2 or pH2 = partial pressure of hydrogen; PaO2 or paO2 = arterial partial pressure of oxygen; PvO2 or pvO2 = venous partial pressure of oxygen. Dalton's law of partial pressures Dalton's law expresses the fact that the total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the individual gases in the mixture. This equality arises from the fact that in an ideal gas, the molecules are so far apart that they do not interact with each other. Most actual real-world gases come very close to this ideal. For example, given an ideal gas mixture of nitrogen (N2), hydrogen (H2) and ammonia (NH3), the total pressure is p = pN2 + pH2 + pNH3, where: p = total pressure of the gas mixture, pN2 = partial pressure of nitrogen (N2), pH2 = partial pressure of hydrogen (H2), pNH3 = partial pressure of ammonia (NH3). Ideal gas mixtures Ideally the ratio of partial pressures equals the ratio of the number of molecules. That is, the mole fraction xi of an individual gas component in an ideal gas mixture can be expressed in terms of the component's partial pressure or the moles of the component: xi = pi / p = ni / n, and the partial pressure of an individual gas component in an ideal gas can be obtained using this expression: pi = xi p. The mole fraction of a gas component in a gas mixture is equal to the volumetric fraction of that component in a gas mixture. The ratio of partial pressures relies on the following isotherm relation: VX / Vtot = pX / ptot = nX / ntot, where: VX is the partial volume of any individual gas component (X), Vtot is the total volume of the gas mixture, pX is the partial pressure of gas X, ptot is the total pressure of the gas mixture, nX is the amount of substance of gas (X), ntot is the total amount of substance in the gas mixture. Partial volume (Amagat's law of additive volume) The partial volume of a particular gas in a mixture is the volume of one component of the gas mixture. It is useful in gas mixtures, e.g. air, to focus on one particular gas component, e.g. oxygen. 
It can be approximated both from partial pressure and molar fraction: VX = Vtot × (pX / ptot) = Vtot × (nX / ntot), where: VX is the partial volume of an individual gas component X in the mixture, Vtot is the total volume of the gas mixture, pX is the partial pressure of gas X, ptot is the total pressure of the gas mixture, nX is the amount of substance of gas X, ntot is the total amount of substance in the gas mixture. Vapor pressure Vapor pressure is the pressure of a vapor in equilibrium with its non-vapor phases (i.e., liquid or solid). Most often the term is used to describe a liquid's tendency to evaporate. It is a measure of the tendency of molecules and atoms to escape from a liquid or a solid. A liquid's atmospheric pressure boiling point corresponds to the temperature at which its vapor pressure is equal to the surrounding atmospheric pressure and it is often called the normal boiling point. The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point of the liquid. The vapor pressure chart displayed has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. At higher altitudes, the atmospheric pressure is less than that at sea level, so boiling points of liquids are reduced. At the top of Mount Everest, the atmospheric pressure is approximately 0.333 atm, so by using the graph, the boiling point of diethyl ether would be approximately 7.5 °C versus 34.6 °C at sea level (1 atm). Equilibrium constants of reactions involving gas mixtures It is possible to work out the equilibrium constant for a chemical reaction involving a mixture of gases given the partial pressure of each gas and the overall reaction formula. For a reversible reaction involving gas reactants and gas products, such as: aA + bB ⇌ cC + dD, the equilibrium constant of the reaction would be: Kp = (pC^c × pD^d) / (pA^a × pB^b), where pA, pB, pC and pD are the partial pressures of the gases A, B, C and D, and a, b, c and d are their stoichiometric coefficients. For reversible reactions, changes in the total pressure, temperature or reactant concentrations will shift the equilibrium so as to favor either the right or left side of the reaction in accordance with Le Chatelier's Principle. However, the reaction kinetics may either oppose or enhance the equilibrium shift. In some cases, the reaction kinetics may be the overriding factor to consider. Henry's law and the solubility of gases Gases will dissolve in liquids to an extent that is determined by the equilibrium between the undissolved gas and the gas that has dissolved in the liquid (called the solvent). The equilibrium constant for that equilibrium is: k = px / Cx (1), where: k = the equilibrium constant for the solvation process, px = partial pressure of gas x in equilibrium with a solution containing some of the gas, Cx = the concentration of gas x in the liquid solution. The form of the equilibrium constant shows that the concentration of a solute gas in a solution is directly proportional to the partial pressure of that gas above the solution. This statement is known as Henry's law and the equilibrium constant k is quite often referred to as the Henry's law constant. Henry's law is sometimes written as: k′ = Cx / px (2), where k′ is also referred to as the Henry's law constant. 
As can be seen by comparing equations (1) and (2) above, k′ is the reciprocal of k. Since both may be referred to as the Henry's law constant, readers of the technical literature must be quite careful to note which version of the Henry's law equation is being used. Henry's law is an approximation that only applies for dilute, ideal solutions and for solutions where the liquid solvent does not react chemically with the gas being dissolved. In diving breathing gases In underwater diving, the physiological effects of individual component gases of breathing gases are a function of partial pressure. Using diving terms, partial pressure is calculated as: partial pressure = (total absolute pressure) × (volume fraction of gas component). For the component gas "i": pi = P × Fi. For example, at a depth where the water pressure is 5 bar, the total absolute pressure is 6 bar (i.e., 1 bar of atmospheric pressure + 5 bar of water pressure) and the partial pressures of the main components of air, oxygen 21% by volume and nitrogen approximately 79% by volume, are: pN2 = 6 bar × 0.79 = 4.7 bar absolute; pO2 = 6 bar × 0.21 = 1.3 bar absolute. The minimum safe lower limit for the partial pressure of oxygen in a breathing gas mixture for diving is 0.16 bar absolute. Hypoxia and sudden unconsciousness can become a problem with an oxygen partial pressure of less than 0.16 bar absolute. Oxygen toxicity, involving convulsions, becomes a problem when the oxygen partial pressure is too high. The NOAA Diving Manual recommends a maximum single exposure of 45 minutes at 1.6 bar absolute, of 120 minutes at 1.5 bar absolute, of 150 minutes at 1.4 bar absolute, of 180 minutes at 1.3 bar absolute and of 210 minutes at 1.2 bar absolute. Oxygen toxicity becomes a risk when these oxygen partial pressures and exposures are exceeded. The partial pressure of oxygen also determines the maximum operating depth of a gas mixture. Narcosis is a problem when breathing gases at high pressure. Typically, the maximum total partial pressure of narcotic gases used when planning for technical diving may be around 4.5 bar absolute, based on an equivalent narcotic depth of . The effect of a toxic contaminant such as carbon monoxide in breathing gas is also related to the partial pressure when breathed. A mixture which may be relatively safe at the surface could be dangerously toxic at the maximum depth of a dive, or a tolerable level of carbon dioxide in the breathing loop of a diving rebreather may become intolerable within seconds during descent when the partial pressure rapidly increases, and could lead to panic or incapacitation of the diver. In medicine The partial pressures of particularly oxygen (pO2) and carbon dioxide (pCO2) are important parameters in tests of arterial blood gases, but can also be measured in, for example, cerebrospinal fluid.
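The diving relation above (pi = P × Fi) lends itself to a short worked example. The sketch below computes the partial pressures of a breathing mix at a given ambient pressure and a rough maximum operating depth for a chosen oxygen limit; the 1.4 bar limit and the 10 m-per-bar seawater approximation are illustrative assumptions, and none of this is dive-planning advice.

```python
# Illustrative partial-pressure calculations for a breathing gas (not dive-planning advice).

def partial_pressures(ambient_bar: float, fractions: dict) -> dict:
    """p_i = P * F_i for each component of the mix."""
    return {gas: ambient_bar * frac for gas, frac in fractions.items()}

def max_operating_depth_m(fraction_o2: float, ppo2_limit_bar: float = 1.4,
                          metres_per_bar: float = 10.0) -> float:
    """Depth at which the oxygen partial pressure reaches the chosen limit,
    using the rough approximation of 1 bar per 10 m of seawater."""
    ambient_limit_bar = ppo2_limit_bar / fraction_o2
    return (ambient_limit_bar - 1.0) * metres_per_bar

if __name__ == "__main__":
    air = {"O2": 0.21, "N2": 0.79}
    print(partial_pressures(6.0, air))                          # ~1.26 bar O2, ~4.74 bar N2 at 6 bar ambient
    print(f"MOD on air ≈ {max_operating_depth_m(0.21):.0f} m")  # ~57 m for a 1.4 bar oxygen limit
```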
Physical sciences
Thermodynamics
Physics
43982
https://en.wikipedia.org/wiki/Plumbing
Plumbing
Plumbing is any system that conveys fluids for a wide range of applications. Plumbing uses pipes, valves, plumbing fixtures, tanks, and other apparatuses to convey fluids. Heating and cooling (HVAC), waste removal, and potable water delivery are among the most common uses for plumbing, but it is not limited to these applications. The word derives from the Latin for lead, plumbum, as the first effective pipes used in the Roman era were lead pipes. In the developed world, plumbing infrastructure is critical to public health and sanitation. Boilermakers and pipefitters are not plumbers although they work with piping as part of their trade and their work can include some plumbing. History Plumbing originated during ancient civilizations, as they developed public baths and needed to provide potable water and wastewater removal for larger numbers of people. The Mesopotamians introduced the world to clay sewer pipes around 4000 BCE, with the earliest examples found in the Temple of Bel at Nippur and at Eshnunna, used to remove wastewater from sites, and capture rainwater, in wells. The city of Uruk contains the oldest known examples of brick constructed Latrines, constructed atop interconnecting fired clay sewer pipes, . Clay pipes were later used in the Hittite city of Hattusa. They had easily detachable and replaceable segments, and allowed for cleaning. Standardized earthen plumbing pipes with broad flanges making use of asphalt for preventing leakages appeared in the urban settlements of the Indus Valley civilization by 2700 BC. Copper piping appeared in Egypt by 2400 BCE, with the Pyramid of Sahure and adjoining temple complex at Abusir, found to be connected by a copper waste pipe. The word "plumber" dates from the Roman Empire. The Latin for lead is . Roman roofs used lead in conduits and drain pipes and some were also covered with lead. Lead was also used for piping and for making baths. Plumbing reached its early apex in ancient Rome, which saw the introduction of expansive systems of aqueducts, tile wastewater removal, and widespread use of lead pipes. The Romans used lead pipe inscriptions to prevent water theft. With the Fall of Rome both water supply and sanitation stagnated—or regressed—for well over 1,000 years. Improvement was very slow, with little effective progress made until the growth of modern densely populated cities in the 1800s. During this period, public health authorities began pressing for better waste disposal systems to be installed, to prevent or control epidemics of disease. Earlier, the waste disposal system had consisted of collecting waste and dumping it on the ground or into a river. Eventually the development of separate, underground water and sewage systems eliminated open sewage ditches and cesspools. In post-classical Kilwa the wealthy enjoyed indoor plumbing in their stone homes. Most large cities today pipe solid wastes to sewage treatment plants in order to separate and partially purify the water, before emptying into streams or other bodies of water. For potable water use, galvanized iron piping was commonplace in the United States from the late 1800s until around 1960. After that period, copper piping took over, first soft copper with flared fittings, then with rigid copper tubing using soldered fittings. The use of lead for potable water declined sharply after World War II because of increased awareness of the dangers of lead poisoning. At this time, copper piping was introduced as a better and safer alternative to lead pipes. 
Systems The major categories of plumbing systems or subsystems are: potable cold and hot tap water supply plumbing drainage venting sewage systems and septic systems with or without hot water heat recycling and graywater recovery and treatment systems Rainwater, surface, and subsurface water drainage fuel gas piping hydronics, i.e. heating and cooling systems using water to transport thermal energy, as in district heating systems, like for example the New York City steam system. Water pipes A water pipe is a pipe or tube, frequently made of plastic or metal, that carries pressurized and treated fresh water to a building (as part of a municipal water system), as well as inside the building. History Lead was the favoured material for water pipes for many centuries because its malleability made it practical to work into the desired shape. Such use was so common that the word "plumbing" derives from plumbum, the Latin word for lead. This was a source of lead-related health problems in the years before the health hazards of ingesting lead were fully understood; among these were stillbirths and high rates of infant mortality. Lead water pipes were still widely used in the early 20th century and remain in many households. Lead-tin alloy solder was commonly used to join copper pipes, but modern practice uses tin-antimony alloy solder instead in order to eliminate lead hazards. Despite the Romans' common use of lead pipes, their aqueducts rarely poisoned people. Unlike other parts of the world where lead pipes cause poisoning, the Roman water had so much calcium in it that a layer of plaque prevented the water contacting the lead itself. What often causes confusion is the large amount of evidence of widespread lead poisoning, particularly amongst those who would have had easy access to piped water, an unfortunate result of lead being used in cookware and as an additive to processed food and drink (for example as a preservative in wine). Roman lead pipe inscriptions provided information on the owner to prevent water theft. Wooden pipes were used in London and elsewhere during the 16th and 17th centuries. The pipes were hollowed-out logs which were tapered at the end with a small hole in which the water would pass through. The multiple pipes were then sealed together with hot animal fat. Wooden pipes were used in Philadelphia, Boston, and Montreal in the 1800s. Built-up wooden tubes were widely used in the US during the 20th century. These pipes (used in place of corrugated iron or reinforced concrete pipes) were made of sections cut from short lengths of wood. Locking of adjacent rings with hardwood dowel pins produced a flexible structure. About 100,000 feet of these wooden pipes were installed during WW2 in drainage culverts, storm sewers and conduits, under highways and at army camps, naval stations, airfields and ordnance plants. Cast iron and ductile iron pipe was long a lower-cost alternative to copper before the advent of durable plastic materials but special non-conductive fittings must be used where transitions are to be made to other metallic pipes (except for terminal fittings) in order to avoid corrosion owing to electrochemical reactions between dissimilar metals (see galvanic cell). Bronze fittings and short pipe segments are commonly used in combination with various materials. Difference between pipes and tubes The difference between pipes and tubes is a matter of sizing. For instance, PVC pipe for plumbing applications and galvanized steel pipe are measured in iron pipe size (IPS). 
Copper tube, CPVC, PeX and other tubing is measured nominally, basically an average diameter. These sizing schemes allow for universal adaptation of transitional fittings. For instance, 1/2" PeX tubing is the same size as 1/2" copper tubing. 1/2" PVC on the other hand is not the same size as 1/2" tubing, and therefore requires either a threaded male or female adapter to connect them. When used in agricultural irrigation, the singular form "pipe" is often used as a plural. Pipe is available in rigid joints, which come in various lengths depending on the material. Tubing, in particular copper, comes in rigid hard tempered joints or soft tempered (annealed) rolls. PeX and CPVC tubing also comes in rigid joints or flexible rolls. The temper of the copper, whether it is a rigid joint or flexible roll, does not affect the sizing. The thicknesses of the water pipe and tube walls can vary. Because piping and tubing are commodities, having a greater wall thickness implies higher initial cost. Thicker walled pipe generally implies greater durability and higher pressure tolerances. Pipe wall thickness is denoted by various schedules or for large bore polyethylene pipe in the UK by the Standard Dimension Ratio (SDR), defined as the ratio of the pipe diameter to its wall thickness. Pipe wall thickness increases with schedule, and is available in schedules 20, 40, 80, and higher in special cases. The schedule is largely determined by the operating pressure of the system, with higher pressures commanding greater thickness. Copper tubing is available in four wall thicknesses: type DWV (thinnest wall; only allowed as drain pipe per UPC), type 'M' (thin; typically only allowed as drain pipe by IPC code), type 'L' (thicker, standard duty for water lines and water service), and type 'K' (thickest, typically used underground between the main and the meter). Wall thickness does not affect pipe or tubing size. 1/2" L copper has the same outer diameter as 1/2" K or M copper. The same applies to pipe schedules. As a result, a slight increase in pressure losses is realized due to a decrease in flowpath as wall thickness is increased. In other words, 1 foot of 1/2" L copper has slightly less volume than 1 foot of 1/2 M copper. Materials Water systems of ancient times relied on gravity for the supply of water, using pipes or channels usually made of clay, lead, bamboo, wood, or stone. Hollowed wooden logs wrapped in steel banding were used for plumbing pipes, particularly water mains. Logs were used for water distribution in England close to 500 years ago. US cities began using hollowed logs in the late 1700s through the 1800s. Today, most plumbing supply pipe is made out of steel, copper, and plastic; most waste (also known as "soil") out of steel, copper, plastic, and cast iron. The straight sections of plumbing systems are called "pipes" or "tubes". A pipe is typically formed via casting or welding, whereas a tube is made through extrusion. Pipe normally has thicker walls and may be threaded or welded, while tubing is thinner-walled and requires special joining techniques such as brazing, compression fitting, crimping, or for plastics, solvent welding. These joining techniques are discussed in more detail in the piping and plumbing fittings article. Steel Galvanized steel potable water supply and distribution pipes are commonly found with nominal pipe sizes from to . It is rarely used today for new construction residential plumbing. 
Steel pipe has National Pipe Thread (NPT) standard tapered male threads, which connect with female tapered threads on elbows, tees, couplers, valves, and other fittings. Galvanized steel (often known simply as "galv" or "iron" in the plumbing trade) is relatively expensive, and difficult to work with due to weight and requirement of a pipe threader. It remains in common use for repair of existing "galv" systems and to satisfy building code non-combustibility requirements typically found in hotels, apartment buildings and other commercial applications. It is also extremely durable and resistant to mechanical abuse. Black lacquered steel pipe is the most widely used pipe material for fire sprinklers and natural gas. Most typical single family home systems will not require supply piping larger than due to expense as well as steel piping's tendency to become obstructed from internal rusting and mineral deposits forming on the inside of the pipe over time once the internal galvanizing zinc coating has degraded. In potable water distribution service, galvanized steel pipe has a service life of about 30 to 50 years, although it is not uncommon for it to be less in geographic areas with corrosive water contaminants. Copper Copper pipe and tubing was widely used for domestic water systems in the latter half of the twentieth century. Demand for copper products has fallen due to the dramatic increase in the price of copper, resulting in increased demand for alternative products including PEX and stainless steel. Plastic Plastic pipe is in wide use for domestic water supply and drain-waste-vent (DWV) pipe. Principal types include: Polyvinyl chloride (PVC) was produced experimentally in the 19th century but did not become practical to manufacture until 1926, when Waldo Semon of BF Goodrich Co. developed a method to plasticize PVC, making it easier to process. PVC pipe began to be manufactured in the 1940s and was in wide use for Drain-Waste-Vent piping during the reconstruction of Germany and Japan following WWII. In the 1950s, plastics manufacturers in Western Europe and Japan began producing acrylonitrile butadiene styrene (ABS) pipe. The method for producing cross-linked polyethylene (PEX) was also developed in the 1950s. Plastic supply pipes have become increasingly common, with a variety of materials and fittings employed. PVC/CPVC – rigid plastic pipes similar to PVC drain pipes but with thicker walls to deal with municipal water pressure, introduced around 1970. PVC stands for polyvinyl chloride, and it has become a common replacement for metal piping. PVC should be used only for cold water, or for venting. CPVC can be used for hot and cold potable water supply. Connections are made with primers and solvent cements as required by code. PP – The material is used primarily in housewares, food packaging, and clinical equipment, but since the early 1970s has seen increasing use worldwide for both domestic hot and cold water. PP pipes are heat fused, being unsuitable for the use of glues, solvents, or mechanical fittings. PP pipe is often used in green building projects. PBT – flexible (usually gray or black) plastic pipe which is attached to barbed fittings and secured in place with a copper crimp ring. The primary manufacturer of PBT tubing and fittings was driven into bankruptcy by a class-action lawsuit over failures of this system. However, PB and PBT tubing has since returned to the market and codes, typically first for "exposed locations" such as risers. 
PEX – cross-linked polyethylene system with mechanically joined fittings employing barbs, and crimped steel or copper rings. Polytanks – plastic polyethylene cisterns, underground water tanks, and above-ground water tanks, usually made of linear polyethylene suitable for potable water storage, provided in white, black or green. Aqua – known as PEX-Al-PEX, for its PEX/aluminum sandwich, consisting of aluminum pipe sandwiched between layers of PEX, and connected with modified brass compression fittings. In 2005, many of these fittings were recalled. Present-day water-supply systems use a network of high-pressure pumps, and pipes in buildings are now made of copper, brass, plastic (particularly cross-linked polyethylene called PEX, which is estimated to be used in 60% of single-family homes), or other nontoxic material. Due to its toxicity, most cities in the United States moved away from lead water-supply piping by the 1920s, although lead pipes were approved by national plumbing codes into the 1980s, and lead was used in plumbing solder for drinking water until it was banned in 1986. Drain and vent lines are made of plastic, steel, cast iron, or lead. Components In addition to lengths of pipe or tubing, pipe fittings such as valves, elbows, tees, and unions are used in plumbing systems. Pipe and fittings are held in place with pipe hangers and strapping. Plumbing fixtures are exchangeable devices that use water and can be connected to a building's plumbing system. They are considered to be "fixtures", in that they are semi-permanent parts of buildings, not usually owned or maintained separately. Plumbing fixtures are seen by and designed for the end users. Some examples of fixtures include water closets (also known as toilets), urinals, bidets, showers, bathtubs, utility and kitchen sinks, drinking fountains, ice makers, humidifiers, air washers, fountains, and eye wash stations. Sealants Threaded pipe joints are sealed with thread seal tape or pipe dope. Many plumbing fixtures are sealed to their mounting surfaces with plumber's putty. Equipment and tools Plumbing equipment includes devices often behind walls or in utility spaces which are not seen by the general public. It includes water meters, pumps, expansion tanks, backflow preventers, water filters, UV sterilization lights, water softeners, water heaters, heat exchangers, gauges, and control systems. There are many tools a plumber needs to do a good plumbing job. While many simple plumbing tasks can be completed with a few common handheld tools, other more complex jobs require specialised tools, designed specifically to make the job easier. Specialized plumbing tools include pipe wrenches, flaring pliers, pipe vises, pipe bending machines, pipe cutters, dies, and joining tools such as soldering torches and crimp tools. New tools have been developed to help plumbers fix problems more efficiently. For example, plumbers use video cameras for inspections of hidden leaks or other problems; they also use hydro jets, and high-pressure hydraulic pumps connected to steel cables for trenchless sewer line replacement. Flooding from excessive rain or clogged sewers may require specialized equipment, such as a heavy-duty pumper truck designed to vacuum raw sewage. Problems Bacteria have been shown to live in "premises plumbing systems". The latter refers to the "pipes and fixtures within a building that transport water to taps after it is delivered by the utility". 
Community water systems have been known for centuries to spread waterborne diseases like typhoid and cholera. However, "opportunistic premises plumbing pathogens" have been recognized only more recently: Legionella pneumophila, discovered in 1976, Mycobacterium avium, and Pseudomonas aeruginosa are the most commonly tracked bacteria, which people with depressed immunity can inhale or ingest and may become infected with. Some of the locations where these opportunistic pathogens can grow include faucets, shower heads, water heaters and along pipe walls. Reasons that favor their growth are "high surface-to-volume ratio, intermittent stagnation, low disinfectant residual, and warming cycles". A high surface-to-volume ratio, i.e. a relatively large surface area, allows the bacteria to form a biofilm, which protects them from disinfection. Regulation Much of the plumbing work in populated areas is regulated by government or quasi-government agencies due to the direct impact on the public's health, safety, and welfare. Plumbing installation and repair work on residences and other buildings generally must be done according to plumbing and building codes to protect the inhabitants of the buildings and to ensure safe, quality construction for future buyers. If permits are required for work, plumbing contractors typically secure them from the authorities on behalf of home or building owners. Australia In Australia, the national governing body for plumbing regulation is the Australian Building Codes Board. It is responsible for the creation of the National Construction Code (NCC), Volume 3 of which, the Plumbing Regulations 2008 and the Plumbing Code of Australia, pertains to plumbing. Each Government at the state level has its own Authority and regulations in place for licensing plumbers. They are also responsible for the interpretation, administration and enforcement of the regulations outlined in the NCC. These Authorities are usually established for the sole purpose of regulating plumbing activities in their respective states/territories. However, several state-level regulation acts are quite outdated, with some still operating on local policies introduced more than a decade ago. This has led to an increase in plumbing regulatory issues not covered under current policy, and as such, many policies are currently being updated to cover these more modern issues. The updates include changes to the minimum experience and training requirements for licensing, additional work standards for new and more specific kinds of plumbing, as well as adopting the Plumbing Code of Australia into state regulations in an effort to standardise plumbing regulations across the country. Norway In Norway, new domestic plumbing installed since 1997 has had to satisfy the requirement that it should be easily accessible for replacement after installation. This has led to the development of the pipe-in-pipe system as a de facto requirement for domestic plumbing. 
United Kingdom In the United Kingdom the professional body is the Chartered Institute of Plumbing and Heating Engineering (educational charity status) and it is true that the trade still remains virtually ungoverned; there are no systems in place to monitor or control the activities of unqualified plumbers or those home owners who choose to undertake installation and maintenance works themselves, despite the health and safety issues which arise from such works when they are undertaken incorrectly; see Health Aspects of Plumbing (HAP) published jointly by the World Health Organization (WHO) and the World Plumbing Council (WPC). WPC has subsequently appointed a representative to the World Health Organization to take forward various projects related to Health Aspects of Plumbing. United States In the United States, plumbing codes and licensing are generally controlled by state and local governments. At the national level, the Environmental Protection Agency has set guidelines about what constitutes lead-free plumbing fittings and pipes, in order to comply with the Safe Drinking Water Act. Some widely used Standards in the United States are: ASME A112.6.3 – Floor and Trench Drains ASME A112.6.4 – Roof, Deck, and Balcony Drains ASME A112.18.1/CSA B125.1 – Plumbing Supply Fittings ASME A112.19.1/CSA B45.2 – Enameled Cast Iron and Enameled Steel Plumbing Fixtures ASME A112.19.2/CSA B45.1 – Ceramic Plumbing Fixtures Canada In Canada, plumbing is a regulated trade requiring specific technical training and certification. Standards and regulations for plumbing are overseen at the provincial and territorial level, each having its distinct governing body: Governing Bodies: Each province or territory possesses its regulatory authority overseeing the licensing and regulation of plumbers. For instance, in Ontario, the Ontario College of Trades handles the certification and regulation of tradespeople, whereas in British Columbia, the Industry Training Authority (ITA) undertakes this function. Certification: To achieve certified plumber status in Canada, individuals typically complete an apprenticeship program encompassing both classroom instruction and hands-on experience. Upon completion, candidates undergo an examination for their certification. Building Codes: Plumbing installations and repairs must adhere to building codes specified by individual provinces or territories. The National Building Code of Canada acts as a model code, with provinces and territories having the discretion to adopt or modify to their specific needs. Safety and Health: Given its direct correlation with health and sanitation, plumbing work is of paramount importance in Canada. Regulations ensure uncontaminated drinking water and proper wastewater treatment, underscoring the vital role of certified plumbers for public health. Environmental Considerations: Reflecting Canada's commitment to environmental conservation, there is an increasing emphasis on sustainable plumbing practices. Regulations advocate water conservation and the deployment of eco-friendly materials. Standards: The Canadian Standards Association (CSA) determines standards for diverse plumbing products, ensuring their safety, quality, and efficiency. Items such as faucets and toilets frequently come with a CSA certification, indicating adherence to required standards.
Technology
Food, water and health
null
44017
https://en.wikipedia.org/wiki/Candle
Candle
A candle is an ignitable wick embedded in wax, or another flammable solid substance such as tallow, that provides light, and in some cases, a fragrance. A candle can also provide heat or a method of keeping time. Candles have been used for over two millennia around the world, and were a significant form of indoor lighting until the invention of other types of light sources. Although electric light has largely made candle use nonessential for illumination, candles are still commonly used for functional, symbolic and aesthetic purposes and in specific cultural and religious settings. Early candles may have been made of beeswax, but such candles were expensive and their use was limited to the elite and the churches. Tallow was a cheaper but less aesthetically pleasing alternative. A variety of different materials have been developed in the modern era for making candles, including paraffin wax, which, together with efficient production techniques, made candles affordable for the masses. Various devices can be used to hold candles, such as candlesticks, candelabras, chandeliers, lanterns and sconces. A person who makes candles is traditionally known as a chandler. The combustion of a candle proceeds in a self-sustaining manner. As the wick of a candle is lit, the heat melts and ignites a small amount of solid fuel (the wax), which vaporizes and combines with oxygen in the air to form a flame. The flame then melts the top of the mass of solid fuel, which moves upward through the wick via capillary action to be continually burnt, thereby maintaining a constant flame. The candle shortens as the solid fuel is consumed, as does the wick. Wicks of pre-19th century candles required regular trimming with scissors or "snuffers" to promote steady burning and prevent smoking. In modern candles, the wick is constructed so that it curves over as it burns, and the end of the wick trims itself by being incinerated in the flame. Etymology The word candle comes from Middle English , from Old English and from Anglo-Norman , both from Latin , from 'to shine'. History Prior to the invention of candles, ancient people used open fire, torches, splinters of resinous wood, and lamps to provide artificial illumination at night. Primitive oil lamps in which a lit wick rested in a pool of oil or fat were used from the Paleolithic period, and pottery and stone lamps from the Neolithic period have been found. Because candle making requires a reliable supply of animal or vegetable fats, it is certain that candles could not have developed before the early Bronze Age; however, it is unclear when and where candles were first used. Objects that could be candlesticks have been found in Babylonian and middle Minoan cultures, as well as in the tomb of Tutankhamun. The "candles" used in these early periods would not have resembled the current forms; more likely they were made of plant materials dipped in animal fat. Early evidence of candle use may be found in Italy, where a depiction of a candlestick exists in an Etruscan tomb at Orvieto, and the earliest excavated Etruscan candlestick dates from the 7th century BC. Candles may have evolved from tapers with wicks of oakum or other plant fibre soaked in fat, pitch or oil and burned in lamps or pots. Candles of antiquity were made from various forms of natural fat, tallow, and wax, and Romans made true dipped candles from tallow and beeswax. 
Beeswax candles were expensive and their use was limited to the wealthy; oil lamps were instead the more commonly used lighting devices in Roman times. Ancient Greece used torches and oil lamps, and likely adopted candle use from Rome in a later period. Early records in China suggest that candles were used in the Qin dynasty before 200 BC. These early Chinese candles may have been made from whale fat. In Christianity, candles gained significance in their decorative, symbolic and ceremonial uses in churches. Wax candles, or candela cerea, recorded at the end of the 3rd century, were documented as Easter candles in Spain and Italy in the fourth century; the Christian festival Candlemas was named after them, and Pope Sergius I instituted the procession of lighted candles. Papal bulls decreed that tallow be excluded from use in altar candles, and that a high beeswax content is necessary for candles of the high altar. In medieval Europe, candles were initially used primarily in Christian churches. Their use later spread to the households of the wealthy as a luxury item. In northern Europe, rushlights made of greased rushes were commonly used, especially in England, but tallow candles were also used during the Middle Ages, with a mention of tallow candles in English appearing in 1154. Beeswax was widely used in church ceremonies; compared to animal-based tallow, it burns cleanly without a smoky flame and does not release an unpleasant smell. Beeswax candles were expensive, and relatively few people could afford to burn them in their homes in medieval Europe. The candles were produced using a number of methods: dipping the wick in molten fat or wax, rolling the candle by hand around a wick, or pouring fat or wax onto a wick to build up the candle. In the 14th century Sieur de Brez introduced the technique of using a mould, but real improvement in the efficient production of moulded candles was only achieved in the 19th century. Wax and tallow candles were made in monasteries in the medieval period, and in rural households tallow candles might be made at home. By the 13th century, candle making had become a guild craft in England and France, with a French guild documented as early as 1061. The candle makers (chandlers) went from house to house making candles from the kitchen fats saved for that purpose, or made and sold their own candles from small candle shops. By the 16th century, beeswax candles were appearing as luxury household items among the wealthy. Candles were widely used in the 17th and 18th centuries, and a party in Dresden was said to have been lit by 14,000 candles in 1779. In the Middle East, during the Abbasid and Fatimid Caliphates, beeswax was the dominant material used for candle making. Beeswax was often imported from long distances; for example, candle makers from Egypt used beeswax from Tunis. As in Europe, these candles were expensive and limited to the elite, and most commoners used oil lamps instead. According to legend, the practice of using lamps and candles in mosques started with Tamim al-Dari, who lit a lamp he brought from Syria in the Prophet's Mosque in Medina. The Umayyad caliph Al-Walid II was known to have used candles in the court in Damascus, while the Abbasid caliph al-Mutawakkil was said to have spent 1.2 million silver dirhams annually on candles for his royal palaces. In early modern Syria, candles were in high demand by all socioeconomic classes because they were customarily lit during marriage ceremonies. 
There were candle makers' guilds in the Safavid capital of Isfahan during the 1500s and 1600s. However, candle makers had a relatively low social position in Safavid Iran, comparable to barbers, bathhouse workers, fortune tellers, bricklayers, and porters. In the 18th and 19th centuries, spermaceti, a waxy substance produced by the sperm whale, was used to produce a superior candle that burned longer, brighter and gave off no offensive smell. Later in the 18th century, colza oil and rapeseed oil came into use as much cheaper substitutes. Modern era A number of improvements were made to candles in the 19th century. In older candles, the wick of a burning candle was not in direct contact with air, so it charred instead of being burnt. The charred wick inhibited further burning and produced black smoke, so the wick needed to be constantly trimmed or "snuffed". In 1825, the Frenchman M. Cambacérès introduced the plaited wick soaked with mineral salts, which, when burnt, curled towards the outer edge of the flame and became incinerated by it, thereby trimming itself. These are referred to as "self-trimming" or "self-consuming" wicks. In 1823, Michel Eugène Chevreul and Joseph Louis Gay-Lussac separated out stearin from animal fats, and obtained a patent in 1825 to produce candles that were harder and burned brighter. The manufacture of candles became an industrialized mass market in the mid-19th century. In 1834, Joseph Morgan, a pewterer from Manchester, England, patented a machine that revolutionised candle making. It allowed for continuous production of molded candles by using a cylinder with a moveable piston to eject candles as they solidified. This more efficient mechanized production produced about 1,500 candles per hour, making candles an affordable commodity for the masses. In the mid-1850s, James Young succeeded in distilling paraffin wax from coal and oil shales at Bathgate in West Lothian and developed a commercially viable method of production. Paraffin could be used to make inexpensive candles of high quality. It was a bluish-white wax, which burned cleanly and left no unpleasant odor, unlike tallow candles. By the end of the 19th century, candles were made from paraffin wax and stearic acid. By the late 19th century, Price's Candles, based in London, was the largest candle manufacturer in the world. Founded by William Wilson in 1830, the company pioneered the implementation of the technique of steam distillation, and was thus able to manufacture candles from a wide range of raw materials, including skin fat, bone fat, fish oil and industrial greases. Despite advances in candle making, the candle industry declined rapidly upon the introduction of superior methods of lighting, including kerosene lamps and the 1879 invention of the incandescent light bulb. From this point on, candles came to be marketed as more of a decorative item. Use Before the invention of electric lighting, candles and oil lamps were commonly used for illumination. In areas without electricity, they are still used routinely. Until the 20th century, candles were more common in northern Europe. In southern Europe and the Mediterranean, oil lamps predominated. In the developed world today, candles are used mainly for their aesthetic value and scent, particularly to set a soft, warm, or romantic ambiance, and for emergency lighting during electrical power failures. Candles, however, are still commonly used in religious and ceremonial contexts. 
Examples include votive candles, Paschal candles and yahrzeit candles. In the days leading up to Christmas, some people burn a candle by a set amount to represent each day, as marked on the candle. The type of candle used in this way is called the Advent candle, although this term is also used to refer to a candle used in an Advent wreath. Symbolic use of candles has extended from the religious to the secular; for example, a candlelight vigil may be held in remembrance of a person, for a cause or an event, or as a form of political action or protest. In a social setting, candles are commonly used on birthday cakes. In the 21st century, there has been a huge spike in sales of scented candles. The COVID-19 pandemic and the ensuing lockdowns led to a dramatic increase in the sales of scented candles, diffusers and room sprays. Other uses With the fairly consistent and measurable burning of a candle, a common use of candles was to tell the time. The candle designed for this purpose might have time measurements, usually in hours, marked along the wax. The Song dynasty in China (960–1279) used candle clocks. By the 18th century, candle clocks were being made with weights set into the sides of the candle. As the candle melted, the weights fell off and made a noise as they fell into a bowl. Components Wax For most of recorded history candles were made from tallow (rendered from beef or mutton-fat) or beeswax. From the mid-1800s, they were also made from spermaceti, a waxy substance derived from the sperm whale, which in turn spurred demand for the substance. Candles were also made from stearin (initially manufactured from animal fats but now produced almost exclusively from palm waxes). Today, most candles are made from paraffin wax, a byproduct of petroleum refining. Candles can also be made from microcrystalline wax, beeswax (a byproduct of honey collection), gel (a mixture of polymer and mineral oil), or some plant waxes (generally palm, carnauba, bayberry, or soybean wax). The size of the flame and corresponding rate of burning is controlled largely by the candle wick. The kind of wax also affects the burn rate, with beeswax and coconut wax burning longer than paraffin or soy wax. Some production methods utilize extrusion moulding. More traditional production methods entail melting the solid fuel by the controlled application of heat. The liquid is then poured into a mould, or a wick is repeatedly immersed in the liquid to create a dipped tapered candle. Often fragrance oils, essential oils or aniline-based dyes are added. Wick A candle wick works by capillary action, drawing ("wicking") the melted wax or fuel up to the flame. When the liquid fuel reaches the flame, it vaporizes and combusts. The candle wick influences how the candle burns. Important characteristics of the wick include diameter, stiffness, fire resistance, and tethering. A candle wick is a piece of string or cord that holds the flame of a candle. Commercial wicks are made from braided cotton. The wick's capillarity determines the rate at which the melted hydrocarbon is conveyed to the flame. If the capillarity is too great, the molten wax streams down the side of the candle. Wicks are often infused with a variety of chemicals to modify their burning characteristics. For example, it is usually desirable that the wick not glow after the flame is extinguished. Typical agents are ammonium nitrate and ammonium sulfate. 
Characteristics Light Based on measurements of a taper-type, paraffin wax candle, a modern candle typically burns at a steady rate of about 0.1 g/min, releasing heat at roughly 80 W. The light produced is about 13 lumens, for a luminous efficacy of about 0.16 lumens per watt (luminous efficacy of a source) – almost a hundred times lower than an incandescent light bulb. If a 1 candela source emitted uniformly in all directions, the total radiant flux would be only about 18.40 mW. The luminous intensity of a typical candle is approximately one candela. The SI unit, candela, was in fact based on an older unit called the candlepower, which represented the luminous intensity emitted by a candle made to particular specifications (a "standard candle"). The modern unit is defined in a more precise and repeatable way, but was chosen such that a candle's luminous intensity is still about one candela. Temperature The hottest part of a candle flame is just above the very dull blue part to one side of the flame, at the base. At this point, the flame is about . However, this part of the flame is very small and releases little heat energy. The blue color is due to chemiluminescence, while the visible yellow color is due to radiative emission from hot soot particles. The soot is formed through a series of complex chemical reactions, leading from the fuel molecule through molecular growth, until multi-carbon ring compounds are formed. The thermal structure of a flame is complex, with changes of hundreds of degrees over very short distances leading to extremely steep temperature gradients. On average, the flame temperature is about . The color temperature is approximately 1,000 K. Combustion For a candle to burn, a heat source (commonly a naked flame from a match or lighter) is used to light the candle's wick, which melts and vaporizes a small amount of fuel (the wax). Once vaporized, the fuel combines with oxygen in the atmosphere to ignite and form a constant flame. This flame provides sufficient heat to keep the candle burning via a self-sustaining chain of events: the heat of the flame melts the top of the mass of solid fuel; the liquefied fuel then moves upward through the wick via capillary action; the liquefied fuel finally vaporizes to burn within the candle's flame. As the fuel (wax) is melted and burned, the candle becomes shorter. The end of the plaited wick bends and gets consumed in the flame. The incineration of the wick limits the length of the exposed portion of the wick, thus maintaining a constant burning temperature and rate of fuel consumption. Pre-19th century wicks required regular trimming with scissors (or a specialized wick trimmer), usually to about one-quarter inch (~0.7 cm), to promote steady burning and to prevent the release of black smoke. Special candle scissors called "snuffers" were produced for this purpose in the 20th century and were often combined with an extinguisher. In modern candles, the wick is made in such a way that it curves over as it burns, which ensures that the end of the wick gets incinerated by fire, thereby trimming itself. Candle flame A candle flame is formed because wax vaporizes on burning. A candle flame is widely recognized as having between three and five regions or "zones": Zone I – this is the non-luminous, lowest, and coolest part of the candle flame. It is located around the base of the wick where there is insufficient oxygen for fuel to burn. Temperatures are around . Zone II – this is the blue zone, which surrounds the base of the flame. 
Here the supply of oxygen is plentiful, and the fuel burns clean and blue. It is heat from this zone which causes the wax to melt. Temperatures are around . Zone III – the dark zone is a region directly above the wick containing unburnt wax. Pyrolysis takes place here. Temperature is around . Zone IV – the middle or luminous zone is yellow/white and is located above the dark zone. It is the brightest zone, but not the hottest. It is an oxygen-depleted zone with insufficient oxygen to burn all of the wax vapor rising from below it, resulting in only partial combustion. The zone also contains unburnt carbon particles. Temperature is around . Zone V – The non-luminous outer zone or veil surrounds Zone IV. Here, the flame is at its hottest, at around , and complete combustion occurs. It is light blue in color, though most of it is invisible. The main determinant of the height of a candle flame is the diameter of the wick. This is evidenced in tealights where the wick is very thin and the flame is very small. Candles whose main purpose is illumination use a much thicker wick. History of study One of Michael Faraday's significant works was The Chemical History of a Candle, where he gives an in-depth analysis of the evolutionary development, workings and science of candles. Hazards According to the National Fire Protection Association, candles are a leading source of residential fires in the United States with almost 10% of civilian injuries and 6% of fatalities from fire attributed to candles. A candle flame that is longer than its laminar smoke point will emit soot. Proper wick trimming will reduce soot emissions from most candles. The liquid wax is hot and can cause skin burns, but the amount and temperature are generally rather limited and the burns are seldom serious. The best way to avoid getting burned from splashed wax is to use a candle snuffer instead of blowing on the flame. A candle snuffer is usually a small metal cup on the end of a long handle. Placing the snuffer over the flame cuts off the oxygen supply. Snuffers were common in the home when candles were the main source of lighting before electric lights were available. Ornate snuffers, often combined with a taper for lighting, are still found in those churches which regularly use large candles. Glass candle-holders are sometimes cracked by thermal shock from the candle flame, particularly when the candle burns down to the end. When burning candles in glass holders or jars, users should avoid lighting candles with chipped or cracked containers, and stop use once a half-inch or less of wax remains. A former worry regarding the safety of candles was that a lead core was used in the wicks to keep them upright in container candles. Without a stiff core, the wicks of a container candle could sag and drown in the deep wax pool. Concerns rose that the lead in these wicks would vaporize during the burning process, releasing lead vapors – a known health and developmental hazard. Lead core wicks have not been common since the 1970s. Today, most metal-cored wicks use zinc or a zinc alloy, which has become the industry standard. Wicks made from specially treated paper and cotton are also available. Candles emit volatile organic compounds into the environment, which releases carbon into the air. The combustion process of lighting a candle includes the release of light, heat, carbon dioxide and water vapor, to fuel the flame. 
Candle use can be unsafe if fragrances are inhaled at high doses. Non-toxic candles have been created as an alternative to prevent these volatile organic compounds from being released into the environment. Candle companies such as "The Plant Project" have created candles that are more environmentally sustainable and better for lung health. These alternatives include non-toxic wax blends, safe fragrances and eco-friendly packaging. Safer candles include candles made from coconut, soy, vegetable, and beeswax. Users who seek the aesthetics of a candle sometimes install an electric flameless candle to avoid the hazards. Regulation International markets have developed a range of standards and regulations to ensure compliance, while maintaining and improving safety, including: Europe: GPSD, EN 15493, EN 15494, EN 15426, EN 14059, REACH, RAL-GZ 041 Candles (Germany), French Decree 91-1175 United States: ASTM F2058, ASTM F2179, ASTM F2417, ASTM F2601, ASTM F2326 (all are federal and apply in all 50 states), California Proposition 65 (California only), CONEG (New England and New York states only) China: QB/T 2119 Basic Candle, QB/T 2902 Art Candle, QB/T 2903 Jar Candle, GB/T 22256 Jelly Candle Accessories Candle holders Decorative candleholders, especially those shaped as a pedestal, are called candlesticks; if multiple candle tapers are held, the term candelabra is also used. The root form of chandelier is from the word for candle, but now often refers to an electric fixture. The word chandelier is used to describe a hanging fixture designed to hold multiple lights. Other forms of candle holders include wall-mounted sconces, lanterns, and girandoles. Many candle holders use a friction-tight socket to keep the candle upright. In this case, a candle that is slightly too wide will not fit in the holder, and a candle that is slightly too narrow will wobble. Candles that are too big can be trimmed to fit with a knife; candles that are too small can be fitted with aluminium foil. Traditionally, the candle and candle holders were made in the same place, so they were appropriately sized, but international trade has combined the modern candle with existing holders, which makes the ill-fitting candle more common. This friction-tight socket is only needed for the federals and the tapers. For tea light candles, there is a variety of candle holders, including small glass holders and elaborate multi-candle stands. The same is true for votives. Wall sconces are available for tea light and votive candles. For pillar-type candles, the assortment of candle holders is broad. A fireproof plate, such as a glass plate or small mirror, can be a candle holder for a pillar-style candle. A pedestal of any kind, with the appropriate-sized fireproof top, is another option. A large glass bowl with a large flat bottom and tall mostly vertical curved sides is called a hurricane. The pillar-style candle is placed at the bottom center of the hurricane. A hurricane on a pedestal is sometimes sold as a unit. A bobèche is a drip-catching ring, which may also be affixed to a candle holder, or used independently of one. Bobèches can range from ornate metal or glass to simple plastic, cardboard, or wax paper. Use of paper or plastic bobèches is common at events where candles are distributed to a crowd or audience, such as Christmas carolers or people at other concerts or festivals. 
Candle snuffers Candle snuffers are instruments used to extinguish burning candles by smothering the flame with a small metal cup that is suspended from a long handle, and thus depriving it of oxygen. An older meaning refers to a scissor-like tool used to trim the wick of a candle. With skill, this could be done without extinguishing the flame. The instrument now known as a candle snuffer was formerly called an "extinguisher" or "douter". Candle followers These are glass or metal tubes with an internal stricture partway along, which sit around the top of a lit candle. As the candle burns, the wax melts and the follower holds the melted wax in, whilst the stricture rests on the topmost solid portion of wax. Candle followers are often deliberately heavy or weighted to ensure they move down as the candle burns lower, maintaining a seal and preventing wax escape. The purpose of a candle follower is threefold: To contain the melted wax, making the candle more efficient, avoiding mess, and producing a more even burn. As a decoration, either due to the ornate nature of the device, or (in the case of a glass follower) through light dispersion or colouration. If necessary, to shield the flame from wind. Candle followers are often found in churches on altar candles. Gallery
Technology
Energy and fuel
null
44027
https://en.wikipedia.org/wiki/Permutation
Permutation
In mathematics, a permutation of a set can mean one of two different things: an arrangement of its members in a sequence or linear order, or the act or process of changing the linear order of an ordered set. An example of the first meaning is the six permutations (orderings) of the set {1, 2, 3}: written as tuples, they are (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), and (3, 2, 1). Anagrams of a word whose letters are all different are also permutations: the letters are already ordered in the original word, and the anagram reorders them. The study of permutations of finite sets is an important topic in combinatorics and group theory. Permutations are used in almost every branch of mathematics and in many other fields of science. In computer science, they are used for analyzing sorting algorithms; in quantum physics, for describing states of particles; and in biology, for describing RNA sequences. The number of permutations of distinct objects is  factorial, usually written as , which means the product of all positive integers less than or equal to . According to the second meaning, a permutation of a set is defined as a bijection from to itself. That is, it is a function from to for which every element occurs exactly once as an image value. Such a function is equivalent to the rearrangement of the elements of in which each element i is replaced by the corresponding . For example, the permutation (3, 1, 2) is described by the function defined as . The collection of all permutations of a set form a group called the symmetric group of the set. The group operation is the composition of functions (performing one rearrangement after the other), which results in another function (rearrangement). The properties of permutations do not depend on the nature of the elements being permuted, only on their number, so one often considers the standard set . In elementary combinatorics, the -permutations, or partial permutations, are the ordered arrangements of distinct elements selected from a set. When is equal to the size of the set, these are the permutations in the previous sense. History Permutation-like objects called hexagrams were used in China in the I Ching (Pinyin: Yi Jing) as early as 1000 BC. In Greece, Plutarch wrote that Xenocrates of Chalcedon (396–314 BC) discovered the number of different syllables possible in the Greek language. This would have been the first attempt on record to solve a difficult problem in permutations and combinations. Al-Khalil (717–786), an Arab mathematician and cryptographer, wrote the Book of Cryptographic Messages. It contains the first use of permutations and combinations, to list all possible Arabic words with and without vowels. The rule to determine the number of permutations of n objects was known in Indian culture around 1150 AD. The Lilavati by the Indian mathematician Bhāskara II contains a passage that translates as follows: The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures. In 1677, Fabian Stedman described factorials when explaining the number of permutations of bells in change ringing. Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 1 2 and 2 1. He then explains that with three bells there are "three times two figures to be produced out of three" which again is illustrated. 
His explanation involves "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". He then moves on to four bells and repeats the casting away argument showing that there will be four different sets of three. Effectively, this is a recursive process. He continues with five bells using the "casting away" method and tabulates the resulting 120 combinations. At this point he gives up and remarks: Now the nature of these methods is such, that the changes on one number comprehends the changes on all lesser numbers, ... insomuch that a compleat Peal of changes on one number seemeth to be formed by uniting of the compleat Peals on all lesser numbers into one entire body; Stedman widens the consideration of permutations; he goes on to consider the number of permutations of the letters of the alphabet and of horses from a stable of 20. A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibility of solving it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics, there are many similar situations in which understanding a problem requires studying certain permutations related to it. The study of permutations as substitutions on n elements led to the notion of a group as an algebraic structure, through the works of Cauchy (1815 memoir). Permutations played an important role in the cryptanalysis of the Enigma machine, a cipher device used by Nazi Germany during World War II. In particular, one important property of permutations, namely, that two permutations are conjugate exactly when they have the same cycle type, was used by cryptologist Marian Rejewski to break the German Enigma cipher in 1932–1933. Definition In mathematics texts it is customary to denote permutations using lowercase Greek letters. Commonly, either or are used. A permutation can be defined as a bijection (an invertible mapping, a one-to-one and onto function) from a set to itself: The identity permutation is defined by for all elements , and can be denoted by the number , by , or by a single 1-cycle (x). The set of all permutations of a set with n elements forms the symmetric group , where the group operation is composition of functions. Thus for two permutations and in the group , their product is defined by: Composition is usually written without a dot or other sign. In general, composition of two permutations is not commutative: As a bijection from a set to itself, a permutation is a function that performs a rearrangement of a set, termed an active permutation or substitution. An older viewpoint sees a permutation as an ordered arrangement or list of all the elements of S, called a passive permutation. According to this definition, all permutations in are passive. This meaning is subtly distinct from how passive (i.e. alias) is used in Active and passive transformation and elsewhere, which would consider all permutations open to passive interpretation (regardless of whether they are in one-line notation, two-line notation, etc.). 
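To make the definitions above concrete, the following minimal Python sketch (the helper names compose and inverse are ours, not standard terminology) represents a permutation of {1, 2, 3} as a dict and composes permutations with the rightmost factor acting first, the convention this article adopts below:

from itertools import permutations
from math import factorial

def compose(sigma, tau):
    # The product "sigma tau": apply tau first, then sigma (rightmost factor acts first).
    return {x: sigma[tau[x]] for x in tau}

def inverse(sigma):
    return {v: k for k, v in sigma.items()}

# The 3! = 6 permutations of {1, 2, 3}, each stored as a bijection from the set to itself.
S3 = [dict(zip((1, 2, 3), image)) for image in permutations((1, 2, 3))]
assert len(S3) == factorial(3)

sigma = {1: 2, 2: 3, 3: 1}   # one-line notation 2 3 1
tau = {1: 1, 2: 3, 3: 2}     # the transposition exchanging 2 and 3
print(compose(sigma, tau))   # {1: 2, 2: 1, 3: 3}
print(compose(tau, sigma))   # {1: 3, 2: 2, 3: 1} -- composition is not commutative
identity = {x: x for x in (1, 2, 3)}
assert compose(sigma, inverse(sigma)) == identity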
A permutation can be decomposed into one or more disjoint cycles which are the orbits of the cyclic group acting on the set S. A cycle is found by repeatedly applying the permutation to an element: , where we assume . A cycle consisting of k elements is called a k-cycle. (See below.) A fixed point of a permutation is an element x which is taken to itself, that is , forming a 1-cycle . A permutation with no fixed points is called a derangement. A permutation exchanging two elements (a single 2-cycle) and leaving the others fixed is called a transposition. Notations Several notations are widely used to represent permutations conveniently. Cycle notation is a popular choice, as it is compact and shows the permutation's structure clearly. This article will use cycle notation unless otherwise specified. Two-line notation Cauchy's two-line notation lists the elements of S in the first row, and the image of each element below it in the second row. For example, the permutation of S = {1, 2, 3, 4, 5, 6} given by the function can be written as The elements of S may appear in any order in the first row, so this permutation could also be written: One-line notation If there is a "natural" order for the elements of S, say , then one uses this for the first row of the two-line notation: Under this assumption, one may omit the first row and write the permutation in one-line notation as , that is, as an ordered arrangement of the elements of S. Care must be taken to distinguish one-line notation from the cycle notation described below: a common usage is to omit parentheses or other enclosing marks for one-line notation, while using parentheses for cycle notation. The one-line notation is also called the word representation. The example above would then be: (It is typical to use commas to separate these entries only if some have two or more digits.) This compact form is common in elementary combinatorics and computer science. It is especially useful in applications where the permutations are to be compared as larger or smaller using lexicographic order. Cycle notation Cycle notation describes the effect of repeatedly applying the permutation on the elements of the set S, with an orbit being called a cycle. The permutation is written as a list of cycles; since distinct cycles involve disjoint sets of elements, this is referred to as "decomposition into disjoint cycles". To write down the permutation in cycle notation, one proceeds as follows: Write an opening bracket followed by an arbitrary element x of : Trace the orbit of x, writing down the values under successive applications of : Repeat until the value returns to x, and close the parenthesis without repeating x: Continue with an element y of S which was not yet written, and repeat the above process: Repeat until all elements of S are written in cycles. Also, it is common to omit 1-cycles, since these can be inferred: for any element x in S not appearing in any cycle, one implicitly assumes . Following the convention of omitting 1-cycles, one may interpret an individual cycle as a permutation which fixes all the elements not in the cycle (a cyclic permutation having only one cycle of length greater than 1). Then the list of disjoint cycles can be seen as the composition of these cyclic permutations. 
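A short Python sketch of the procedure just described (the function name cycles and the sample permutation 2 5 3 1 4 are ours); it traces the orbit of each unvisited element and, following the convention above, omits 1-cycles by default:

def cycles(sigma, keep_fixed_points=False):
    # Decompose a permutation given as a dict into disjoint cycles (each returned as a tuple).
    seen, result = set(), []
    for start in sigma:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:   # trace the orbit of start under repeated application of sigma
            seen.add(x)
            cycle.append(x)
            x = sigma[x]
        if len(cycle) > 1 or keep_fixed_points:
            result.append(tuple(cycle))
    return result

sigma = {1: 2, 2: 5, 3: 3, 4: 1, 5: 4}        # one-line notation 2 5 3 1 4
print(cycles(sigma))                          # [(1, 2, 5, 4)] -- the fixed point 3 is omitted
print(cycles(sigma, keep_fixed_points=True))  # [(1, 2, 5, 4), (3,)]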
For example, the one-line permutation can be written in cycle notation as: This may be seen as the composition of cyclic permutations: While permutations in general do not commute, disjoint cycles do; for example: Also, each cycle can be rewritten from a different starting point; for example. Thus one may write the disjoint cycles of a given permutation in many different ways. A convenient feature of cycle notation is that inverting the permutation is given by reversing the order of the elements in each cycle. For example, Canonical cycle notation In some combinatorial contexts it is useful to fix a certain order for the elements in the cycles and of the (disjoint) cycles themselves. Miklós Bóna calls the following ordering choices the canonical cycle notation: in each cycle the largest element is listed first; the cycles are sorted in increasing order of their first element, not omitting 1-cycles. For example, is a permutation of in canonical cycle notation. Richard Stanley calls this the "standard representation" of a permutation, and Martin Aigner uses "standard form". Sergey Kitaev also uses the "standard form" terminology, but reverses both choices; that is, each cycle lists its minimal element first, and the cycles are sorted in decreasing order of their minimal elements. Composition of permutations There are two ways to denote the composition of two permutations. In the most common notation, is the function that maps any element x to . The rightmost permutation is applied to the argument first, because the argument is written to the right of the function. A different rule for multiplying permutations comes from writing the argument to the left of the function, so that the leftmost permutation acts first. In this notation, the permutation is often written as an exponent, so σ acting on x is written xσ; then the product is defined by . This article uses the first definition, where the rightmost permutation is applied first. The function composition operation satisfies the axioms of a group. It is associative, meaning , and products of more than two permutations are usually written without parentheses. The composition operation also has an identity element (the identity permutation ), and each permutation has an inverse (its inverse function) with . Other uses of the term permutation The concept of a permutation as an ordered arrangement admits several generalizations that have been called permutations, especially in older literature. k-permutations of n In older literature and elementary textbooks, a k-permutation of n (sometimes called a partial permutation, sequence without repetition, variation, or arrangement) means an ordered arrangement (list) of a k-element subset of an n-set. The number of such k-permutations (k-arrangements) of is denoted variously by such symbols as , , , , , or , computed by the formula: , which is 0 when , and otherwise is equal to The product is well defined without the assumption that is a non-negative integer, and is of importance outside combinatorics as well; it is known as the Pochhammer symbol or as the -th falling factorial power: This usage of the term permutation is closely associated with the term combination to mean a subset. A k-combination of a set S is a k-element subset of S: the elements of a combination are not ordered. Ordering the k-combinations of S in all possible ways produces the k-permutations of S. 
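As a quick check of that last statement (a sketch assuming Python 3.8+ for math.perm and math.comb), ordering every 2-element combination of a 4-element set in all possible ways yields exactly the 2-permutations of that set:

from itertools import combinations, permutations
from math import comb, factorial, perm

S = [1, 2, 3, 4]
k = 2
# Order each k-combination in every possible way ...
from_combinations = {p for c in combinations(S, k) for p in permutations(c)}
# ... and compare with the k-permutations generated directly.
assert from_combinations == set(permutations(S, k))
print(len(from_combinations), perm(len(S), k), factorial(k) * comb(len(S), k))   # 12 12 12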
The number of k-combinations of an n-set, C(n,k), is therefore related to the number of k-permutations of n by: These numbers are also known as binomial coefficients, usually denoted : Permutations with repetition Ordered arrangements of k elements of a set S, where repetition is allowed, are called k-tuples. They have sometimes been referred to as permutations with repetition, although they are not permutations in the usual sense. They are also called words or strings over the alphabet S. If the set S has n elements, the number of k-tuples over S is Permutations of multisets If M is a finite multiset, then a multiset permutation is an ordered arrangement of elements of M in which each element appears a number of times equal exactly to its multiplicity in M. An anagram of a word having some repeated letters is an example of a multiset permutation. If the multiplicities of the elements of M (taken in some order) are , , ..., and their sum (that is, the size of M) is n, then the number of multiset permutations of M is given by the multinomial coefficient, For example, the number of distinct anagrams of the word MISSISSIPPI is: . A k-permutation of a multiset M is a sequence of k elements of M in which each element appears a number of times less than or equal to its multiplicity in M (an element's repetition number). Circular permutations Permutations, when considered as arrangements, are sometimes referred to as linearly ordered arrangements. If, however, the objects are arranged in a circular manner this distinguished ordering is weakened: there is no "first element" in the arrangement, as any element can be considered as the start. An arrangement of distinct objects in a circular manner is called a circular permutation. These can be formally defined as equivalence classes of ordinary permutations of these objects, for the equivalence relation generated by moving the final element of the linear arrangement to its front. Two circular permutations are equivalent if one can be rotated into the other. The following four circular permutations on four letters are considered to be the same. 1 4 2 3 4 3 2 1 3 4 1 2 2 3 1 4 The circular arrangements are to be read counter-clockwise, so the following two are not equivalent since no rotation can bring one to the other. 1 1 4 3 3 4 2 2 There are (n – 1)! circular permutations of a set with n elements. Properties The number of permutations of distinct objects is !. The number of -permutations with disjoint cycles is the signless Stirling number of the first kind, denoted or . Cycle type The cycles (including the fixed points) of a permutation of a set with elements partition that set; so the lengths of these cycles form an integer partition of , which is called the cycle type (or sometimes cycle structure or cycle shape) of . There is a "1" in the cycle type for every fixed point of , a "2" for every transposition, and so on. The cycle type of is This may also be written in a more compact form as . More precisely, the general form is , where are the numbers of cycles of respective length. The number of permutations of a given cycle type is . The number of cycle types of a set with elements equals the value of the partition function . Polya's cycle index polynomial is a generating function which counts permutations by their cycle type. Conjugating permutations In general, composing permutations written in cycle notation follows no easily described pattern – the cycles of the composition can be different from those being composed. 
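The following Python sketch (the helper name cycle_type and the small examples are ours) tallies the 4! permutations of {1, 2, 3, 4} by cycle type, checks each count against the standard formula n!/∏(k^a_k · a_k!), where a_k is the number of cycles of length k, and then shows a product of two transpositions whose cycle type differs from that of either factor:

from collections import Counter
from itertools import permutations
from math import factorial, prod

def cycle_type(p):
    # p is a permutation of {1, ..., n} in one-line notation; return its sorted cycle lengths.
    sigma = {i + 1: v for i, v in enumerate(p)}
    seen, lengths = set(), []
    for start in sigma:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = sigma[x]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

n = 4
by_type = Counter(cycle_type(p) for p in permutations(range(1, n + 1)))
print(dict(by_type))   # 5 cycle types (the partition number of 4), with counts 1, 6, 3, 8, 6

for typ, count in by_type.items():
    a = Counter(typ)   # a[k] = number of cycles of length k
    assert count == factorial(n) // prod(k ** a_k * factorial(a_k) for k, a_k in a.items())

# Composition need not preserve the cycle type: two transpositions compose to a 3-cycle here.
s, t = (2, 1, 3, 4), (1, 3, 2, 4)                  # the transpositions (1 2) and (2 3)
s_after_t = tuple(s[t[i] - 1] for i in range(n))   # apply t first, then s
print(cycle_type(s), cycle_type(t), cycle_type(s_after_t))   # (2, 1, 1) (2, 1, 1) (3, 1)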
However, the cycle type is preserved in the special case of conjugating a permutation by another permutation , which means forming the product . Here, is the conjugate of by and its cycle notation can be obtained by taking the cycle notation for and applying to all the entries in it. It follows that two permutations are conjugate exactly when they have the same cycle type. Order of a permutation The order of a permutation is the smallest positive integer m so that . It is the least common multiple of the lengths of its cycles. For example, the order of is . Parity of a permutation Every permutation of a finite set can be expressed as the product of transpositions. Although many such expressions for a given permutation may exist, either they all contain an even number of transpositions or they all contain an odd number of transpositions. Thus all permutations can be classified as even or odd depending on this number. This result can be extended so as to assign a sign, written , to each permutation. if is even and if is odd. Then for two permutations and It follows that The sign of a permutation is equal to the determinant of its permutation matrix (below). Matrix representation A permutation matrix is an n × n matrix that has exactly one entry 1 in each column and in each row, and all other entries are 0. There are several ways to assign a permutation matrix to a permutation of {1, 2, ..., n}. One natural approach is to define to be the linear transformation of which permutes the standard basis by , and define to be its matrix. That is, has its jth column equal to the n × 1 column vector : its (i, j) entry is 1 if i = σ(j), and 0 otherwise. Since composition of linear mappings is described by matrix multiplication, it follows that this construction is compatible with composition of permutations:. For example, the one-line permutations have product , and the corresponding matrices are: It is also common in the literature to find the inverse convention, where a permutation σ is associated to the matrix whose (i, j) entry is 1 if j = σ(i) and is 0 otherwise. In this convention, permutation matrices multiply in the opposite order from permutations, that is, . In this correspondence, permutation matrices act on the right side of the standard row vectors : . The Cayley table on the right shows these matrices for permutations of 3 elements. Permutations of totally ordered sets In some applications, the elements of the set being permuted will be compared with each other. This requires that the set S has a total order so that any two elements can be compared. The set {1, 2, ..., n} with the usual ≤ relation is the most frequently used set in these applications. A number of properties of a permutation are directly related to the total ordering of S, considering the permutation written in one-line notation as a sequence . Ascents, descents, runs, exceedances, records An ascent of a permutation σ of n is any position i < n where the following value is bigger than the current one. That is, i is an ascent if . For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, and 6. Similarly, a descent is a position i < n with , so every i with is either an ascent or a descent. An ascending run of a permutation is a nonempty increasing contiguous subsequence that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). 
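A small Python sketch (function names are ours) computing the ascents, descents and ascending runs of the example permutation 3452167 mentioned above:

def ascents(p):
    # 1-based positions i with p(i) < p(i+1)
    return [i for i in range(1, len(p)) if p[i - 1] < p[i]]

def descents(p):
    return [i for i in range(1, len(p)) if p[i - 1] > p[i]]

def ascending_runs(p):
    runs, run = [], [p[0]]
    for prev, cur in zip(p, p[1:]):
        if cur > prev:
            run.append(cur)   # extend the current maximal increasing contiguous block
        else:
            runs.append(run)
            run = [cur]
    runs.append(run)
    return runs

p = (3, 4, 5, 2, 1, 6, 7)
print(ascents(p))          # [1, 2, 5, 6]
print(descents(p))         # [3, 4]
print(ascending_runs(p))   # [[3, 4, 5], [2], [1, 6, 7]] -- 2 descents, hence 3 ascending runs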
By contrast an increasing subsequence of a permutation is not necessarily contiguous: it is an increasing sequence obtained by omitting some of the values of the one-line notation. For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367. If a permutation has k − 1 descents, then it must be the union of k ascending runs. The number of permutations of n with k ascents is (by definition) the Eulerian number ; this is also the number of permutations of n with k descents. Some authors however define the Eulerian number as the number of permutations with k ascending runs, which corresponds to descents. An exceedance of a permutation σ1σ2...σn is an index j such that . If the inequality is not strict (that is, ), then j is called a weak exceedance. The number of n-permutations with k exceedances coincides with the number of n-permutations with k descents. A record or left-to-right maximum of a permutation σ is an element i such that σ(j) < σ(i) for all j < i. Foata's transition lemma Foata's fundamental bijection transforms a permutation with a given canonical cycle form into the permutation whose one-line notation has the same sequence of elements with parentheses removed. For example:Here the first element in each canonical cycle of becomes a record (left-to-right maximum) of . Given , one may find its records and insert parentheses to construct the inverse transformation . Underlining the records in the above example: , which allows the reconstruction of the cycles of . The following table shows and for the six permutations of S = {1, 2, 3}, with the bold text on each side showing the notation used in the bijection: one-line notation for and canonical cycle notation for . As a first corollary, the number of n-permutations with exactly k records is equal to the number of n-permutations with exactly k cycles: this last number is the signless Stirling number of the first kind, . Furthermore, Foata's mapping takes an n-permutation with k weak exceedances to an n-permutation with ascents. For example, (2)(31) = 321 has k = 2 weak exceedances (at index 1 and 2), whereas has ascent (at index 1; that is, from 2 to 3). Inversions An inversion of a permutation σ is a pair of positions where the entries of a permutation are in the opposite order: and . Thus a descent is an inversion at two adjacent positions. For example, has (i, j) = (1, 3), (2, 3), and (4, 5), where (σ(i), σ(j)) = (2, 1), (3, 1), and (5, 4). Sometimes an inversion is defined as the pair of values (σ(i), σ(j)); this makes no difference for the number of inversions, and the reverse pair (σ(j), σ(i)) is an inversion in the above sense for the inverse permutation σ−1. The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same for σ and for σ−1. To bring a permutation with k inversions into order (that is, transform it into the identity permutation), by successively applying (right-multiplication by) adjacent transpositions, is always possible and requires a sequence of k such operations. Moreover, any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition of i and where i is a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). 
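A Python sketch of these two points (function names are ours; the sample permutation 2 3 1 5 4 is chosen to be consistent with the inversion pairs listed above): it lists the inversions, then sorts the permutation by repeatedly swapping at a descent, using exactly as many adjacent transpositions as there are inversions.

def inversions(p):
    # 1-based position pairs (i, j) with i < j and p(i) > p(j)
    n = len(p)
    return [(i + 1, j + 1) for i in range(n) for j in range(i + 1, n) if p[i] > p[j]]

def sort_by_adjacent_transpositions(p):
    # Swap at some descent until the identity is reached; return the number of swaps used.
    p, swaps = list(p), 0
    while True:
        d = next((i for i in range(len(p) - 1) if p[i] > p[i + 1]), None)   # any descent will do
        if d is None:
            return swaps
        p[d], p[d + 1] = p[d + 1], p[d]
        swaps += 1

p = (2, 3, 1, 5, 4)
print(inversions(p))                        # [(1, 3), (2, 3), (4, 5)]
print(sort_by_adjacent_transpositions(p))   # 3 -- one adjacent transposition per inversion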
This is so because applying such a transposition reduces the number of inversions by 1; as long as this number is not zero, the permutation is not the identity, so it has at least one descent. Bubble sort and insertion sort can be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutation σ can be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transforms σ into the identity. In fact, by enumerating all sequences of adjacent transpositions that would transform σ into the identity, one obtains (after reversal) a complete list of all expressions of minimal length writing σ as a product of adjacent transpositions. The number of permutations of n with k inversions is expressed by a Mahonian number. This is the coefficient of in the expansion of the product The notation denotes the q-factorial. This expansion commonly appears in the study of necklaces. Let such that and . In this case, say the weight of the inversion is . Kobayashi (2011) proved the enumeration formula where denotes Bruhat order in the symmetric groups. This graded partial order often appears in the context of Coxeter groups. Permutations in computing Numbering permutations One way to represent permutations of n things is by an integer N with 0 ≤ N < n!, provided convenient methods are given to convert between the number and the representation of a permutation as an ordered arrangement (sequence). This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive when n is small enough that N can be held in a machine word; for 32-bit words this means n ≤ 12, and for 64-bit words this means n ≤ 20. The conversion can be done via the intermediate form of a sequence of numbers dn, dn−1, ..., d2, d1, where di is a non-negative integer less than i (one may omit d1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is to simply express N in the factorial number system, which is just a particular mixed radix representation, where, for numbers less than n!, the bases (place values or multiplication factors) for successive digits are , , ..., 2!, 1!. The second step interprets this sequence as a Lehmer code or (almost equivalently) as an inversion table. In the Lehmer code for a permutation σ, the number dn represents the choice made for the first term σ1, the number dn−1 represents the choice made for the second term σ2 among the remaining elements of the set, and so forth. More precisely, each dn+1−i gives the number of remaining elements strictly less than the term σi. Since those remaining elements are bound to turn up as some later term σj, the digit dn+1−i counts the inversions (i,j) involving i as smaller index (the number of values j for which i < j and σi > σj). The inversion table for σ is quite similar, but here dn+1−k counts the number of inversions (i,j) where k = σj occurs as the smaller of the two values appearing in inverted order. Both encodings can be visualized by an n by n Rothe diagram (named after Heinrich August Rothe) in which dots at (i,σi) mark the entries of the permutation, and a cross at (i,σj) marks the inversion (i,j); by the definition of inversions a cross appears in any square that comes both before the dot (j,σj) in its column, and before the dot (i,σi) in its row. 
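A Python sketch (function names are ours, and the entries are indexed by position and by value respectively, rather than in the dn, ..., d1 order used above) computing the Lehmer code and the inversion table of a small permutation; both sum to its number of inversions:

def lehmer_code(p):
    # L[i] = number of entries to the right of position i that are smaller than p[i]
    return [sum(1 for later in p[i + 1:] if later < p[i]) for i in range(len(p))]

def inversion_table(p):
    # T[k-1] = number of entries to the left of the value k that are larger than k
    position = {v: i for i, v in enumerate(p)}
    return [sum(1 for v in p[:position[k]] if v > k) for k in range(1, len(p) + 1)]

p = (2, 3, 1, 5, 4)
print(lehmer_code(p))       # [1, 1, 0, 1, 0]
print(inversion_table(p))   # [2, 0, 0, 1, 0]
assert sum(lehmer_code(p)) == sum(inversion_table(p)) == 3   # the number of inversions of p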
The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa. To effectively convert a Lehmer code dn, dn−1, ..., d2, d1 into a permutation of an ordered set S, one can start with a list of the elements of S in increasing order, and for i increasing from 1 to n set σi to the element in the list that is preceded by dn+1−i other ones, and remove that element from the list. To convert an inversion table dn, dn−1, ..., d2, d1 into the corresponding permutation, one can traverse the numbers from d1 to dn while inserting the elements of S from largest to smallest into an initially empty sequence; at the step using the number d from the inversion table, the element from S inserted into the sequence at the point where it is preceded by d elements already present. Alternatively one could process the numbers from the inversion table and the elements of S both in the opposite order, starting with a row of n empty slots, and at each step place the element from S into the empty slot that is preceded by d other empty slots. Converting successive natural numbers to the factorial number system produces those sequences in lexicographic order (as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by the place of their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives the signature of the permutation. Moreover, the positions of the zeroes in the inversion table give the values of left-to-right maxima of the permutation (in the example 6, 8, 9) while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example positions the 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer code dn, dn−1, ..., d2, d1 has an ascent if and only if . Algorithms to generate permutations In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence. An obvious way to generate permutations of n is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to n!), and convert those into the corresponding permutations. However, the latter step, while straightforward, is hard to implement efficiently, because it requires n operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about n2/4 operations to perform the conversion. 
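As a concrete illustration of the conversions described above, the following short Python sketch (an illustrative implementation, not taken from any particular library) converts an integer N with 0 ≤ N < n! into its factorial-base digits, interprets them as a Lehmer code, and expands that code into a permutation; the digit sum reproduces the inversion count.

def factorial_digits(N, n):
    # digits d_n, ..., d_2, d_1 of N in the factorial number system (d_1 is always 0)
    digits = []
    for radix in range(1, n + 1):        # place values 1!, 2!, ..., built from the least significant end
        digits.append(N % radix)
        N //= radix
    return digits[::-1]                   # most significant digit (d_n) first

def lehmer_to_permutation(code, items):
    # code = [d_n, ..., d_1]; items = the n elements listed in increasing order
    pool = list(items)
    return [pool.pop(d) for d in code]    # d_{n+1-i} counts the remaining elements smaller than the i-th term

n = 4
for N in range(3):                        # the first few permutations, in lexicographic order
    code = factorial_digits(N, n)
    perm = lehmer_to_permutation(code, [1, 2, 3, 4])
    print(N, code, perm, "inversions:", sum(code))

Running this prints [1, 2, 3, 4], [1, 2, 4, 3] and [1, 3, 2, 4] for N = 0, 1, 2, matching the lexicographic ordering property described above.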
With n likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in O(n log n) time. Random generation of permutations For generating random permutations of a given sequence of n values, it makes no difference whether one applies a randomly selected permutation of n to the sequence, or chooses a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations of n that result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes unfeasible for large n due to the growth of the number n!, there is no reason to assume that n will be small for random generation. The basic idea to generate a random permutation is to generate at random one of the n! sequences of integers d1,d2,...,dn satisfying (since d1 is always zero it may be omitted) and to convert it to a permutation through a bijective correspondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 by Ronald Fisher and Frank Yates. While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after using di to select an element among i remaining elements of the sequence (for decreasing values of i), rather than removing the element and compacting the sequence by shifting down further elements one place, one swaps the element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequence of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediate induction. When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated. The resulting algorithm for generating a random permutation of a[0], a[1], ..., a[n − 1] can be described as follows in pseudocode: for i from n downto 2 do di ← random element of { 0, ..., i − 1 } swap a[di] and a[i − 1] This can be combined with the initialization of the array a[i] = i as follows for i from 0 to n−1 do di+1 ← random element of { 0, ..., i } a[i] ← a[di+1] a[di+1] ← i If di+1 = i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct value i. However, Fisher-Yates is not the fastest algorithm for generating a permutation, because Fisher-Yates is essentially a sequential algorithm and "divide and conquer" procedures can achieve the same result in parallel. 
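A minimal, runnable Python version of the Fisher–Yates pseudocode above (an illustrative sketch; the function name is arbitrary) is:

import random

def fisher_yates_shuffle(a):
    # In-place random permutation: for i = n down to 2, swap a[i-1] with a uniformly chosen slot among the first i.
    n = len(a)
    for i in range(n, 1, -1):
        d = random.randrange(i)           # random element of {0, ..., i-1}; i-1 itself must be a candidate
        a[d], a[i - 1] = a[i - 1], a[d]
    return a

print(fisher_yates_shuffle(list(range(10))))

Because each d is chosen independently and uniformly, every one of the n! permutations is produced with the same probability, as the argument above explains.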
Generation in lexicographic order There are many ways to systematically generate all permutations of a given sequence. One classic, simple, and flexible algorithm is based upon finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, in which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. It begins by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th century India, and has been rediscovered frequently. The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.
1. Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
2. Find the largest index l greater than k such that a[k] < a[l].
3. Swap the value of a[k] with that of a[l].
4. Reverse the sequence from a[k + 1] up to and including the final element a[n].
For example, given the sequence [1, 2, 3, 4] (which is in increasing order), and given that the index is zero-based, the steps are as follows: Index k = 2, because 3 sits at the largest index satisfying a[k] < a[k + 1] (here a[k + 1] is 4). Index l = 3, because 4 is the only value in the sequence greater than 3, satisfying the condition a[k] < a[l]. The values of a[2] and a[3] are swapped to form the new sequence [1, 2, 4, 3]. The sequence after index k, from a[k + 1] to the final element, is reversed; because only one value lies after this index (the 3), the sequence remains unchanged in this instance. Thus the lexicographic successor of the initial state is [1, 2, 4, 3]. Following this algorithm, the next lexicographic permutation will be [1, 3, 2, 4], and the 24th permutation will be [4, 3, 2, 1], at which point no index k with a[k] < a[k + 1] exists, indicating that this is the last permutation. This method uses about 3 comparisons and 1.5 swaps per permutation, amortized over the whole sequence, not counting the initial sort. Generation with minimal changes An alternative to the above algorithm, the Steinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation. An alternative to Steinhaus–Johnson–Trotter is Heap's algorithm, said by Robert Sedgewick in 1977 to be the fastest algorithm of generating permutations in applications. The following figure shows the output of all three aforementioned algorithms for generating all permutations of a given length, and of six additional algorithms described in the literature.
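The four-step successor computation described above can be sketched in Python as follows (an illustrative implementation using zero-based indices, as in the worked example; not a reference implementation):

def next_permutation(a):
    # Advance the list a to its lexicographic successor in place; return False if a was already the last permutation.
    k = len(a) - 2
    while k >= 0 and a[k] >= a[k + 1]:    # step 1: largest k with a[k] < a[k+1]
        k -= 1
    if k < 0:
        return False                      # weakly decreasing sequence: nothing follows
    l = len(a) - 1
    while a[k] >= a[l]:                   # step 2: largest l > k with a[k] < a[l]
        l -= 1
    a[k], a[l] = a[l], a[k]               # step 3: swap
    a[k + 1:] = reversed(a[k + 1:])       # step 4: reverse the suffix
    return True

seq = [1, 2, 3, 4]
print(seq)
while next_permutation(seq):              # prints all 24 permutations in lexicographic order
    print(seq)

Starting from the sorted sequence, repeated calls enumerate every permutation exactly once; because the comparisons are non-strict, the same sketch also yields each distinct multiset permutation once when the input contains repeated values.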
Lexicographic ordering; Steinhaus–Johnson–Trotter algorithm; Heap's algorithm; Ehrlich's star-transposition algorithm: in each step, the first entry of the permutation is exchanged with a later entry; Zaks' prefix reversal algorithm: in each step, a prefix of the current permutation is reversed to obtain the next permutation; Sawada-Williams' algorithm: each permutation differs from the previous one either by a cyclic left-shift by one position, or an exchange of the first two entries; Corbett's algorithm: each permutation differs from the previous one by a cyclic left-shift of some prefix by one position; Single-track ordering: each column is a cyclic shift of the other columns; Single-track Gray code: each column is a cyclic shift of the other columns, plus any two consecutive permutations differ only in one or two transpositions. Nested swaps generating algorithm in steps connected to the nested subgroups . Each permutation is obtained from the previous by a transposition multiplication to the left. Algorithm is connected to the Factorial_number_system of the index. Generation of permutations in nested swap steps Explicit sequence of swaps (transpositions, 2-cycles ), is described here, each swap applied (on the left) to the previous chain providing a new permutation, such that all the permutations can be retrieved, each only once. This counting/generating procedure has an additional structure (call it nested), as it is given in steps: after completely retrieving , continue retrieving by cosets of in , by appropriately choosing the coset representatives to be described below. Since each is sequentially generated, there is a last element . So, after generating by swaps, the next permutation in has to be for some . Then all swaps that generated are repeated, generating the whole coset , reaching the last permutation in that coset ; the next swap has to move the permutation to representative of another coset . Continuing the same way, one gets coset representatives for the cosets of in ; the ordered set () is called the set of coset beginnings. Two of these representatives are in the same coset if and only if , that is, . Concluding, permutations are all representatives of distinct cosets if and only if for any , (no repeat condition). In particular, for all generated permutations to be distinct it is not necessary for the values to be distinct. In the process, one gets that and this provides the recursion procedure. EXAMPLES: obviously, for one has ; to build there are only two possibilities for the coset beginnings satisfying the no repeat condition; the choice leads to . To continue generating one needs appropriate coset beginnings (satisfying the no repeat condition): there is a convenient choice: , leading to . Then, to build a convenient choice for the coset beginnings (satisfying the no repeat condition) is , leading to . From examples above one can inductively go to higher in a similar way, choosing coset beginnings of in , as follows: for even choosing all coset beginnings equal to 1 and for odd choosing coset beginnings equal to . With such choices the "last" permutation is for odd and for even (). Using these explicit formulae one can easily compute the permutation of certain index in the counting/generation steps with minimum computation. For this, writing the index in factorial base is useful. For example, the permutation for index is: , yelding finally, . 
Because multiplying by swap permutation takes short computing time and every new generated permutation requires only one such swap multiplication, this generation procedure is quite efficient. Moreover as there is a simple formula, having the last permutation in each can save even more time to go directly to a permutation with certain index in fewer steps than expected as it can be done in blocks of subgroups rather than swap by swap. Applications Permutations are used in the interleaver component of the error detection and correction algorithms, such as turbo codes, for example 3GPP Long Term Evolution mobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212). Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based on the permutation polynomials. Also as a base for optimal hashing in Unique Permutation Hashing.
Mathematics
Discrete mathematics
null
44041
https://en.wikipedia.org/wiki/Solvation
Solvation
Solvations describes the interaction of a solvent with dissolved molecules. Both ionized and uncharged molecules interact strongly with a solvent, and the strength and nature of this interaction influence many properties of the solute, including solubility, reactivity, and color, as well as influencing the properties of the solvent such as its viscosity and density. If the attractive forces between the solvent and solute particles are greater than the attractive forces holding the solute particles together, the solvent particles pull the solute particles apart and surround them. The surrounded solute particles then move away from the solid solute and out into the solution. Ions are surrounded by a concentric shell of solvent. Solvation is the process of reorganizing solvent and solute molecules into solvation complexes and involves bond formation, hydrogen bonding, and van der Waals forces. Solvation of a solute by water is called hydration. Solubility of solid compounds depends on a competition between lattice energy and solvation, including entropy effects related to changes in the solvent structure. Distinction from solubility By an IUPAC definition, solvation is an interaction of a solute with the solvent, which leads to stabilization of the solute species in the solution. In the solvated state, an ion or molecule in a solution is surrounded or complexed by solvent molecules. Solvated species can often be described by coordination number, and the complex stability constants. The concept of the solvation interaction can also be applied to an insoluble material, for example, solvation of functional groups on a surface of ion-exchange resin. Solvation is, in concept, distinct from solubility. Solvation or dissolution is a kinetic process and is quantified by its rate. Solubility quantifies the dynamic equilibrium state achieved when the rate of dissolution equals the rate of precipitation. The consideration of the units makes the distinction clearer. The typical unit for dissolution rate is mol/s. The units for solubility express a concentration: mass per volume (mg/mL), molarity (mol/L), etc. Solvents and intermolecular interactions Solvation involves different types of intermolecular interactions: Hydrogen bonding Ion–dipole interactions The van der Waals forces, which consist of dipole–dipole, dipole–induced dipole, and induced dipole–induced dipole interactions. Which of these forces are at play depends on the molecular structure and properties of the solvent and solute. The similarity or complementary character of these properties between solvent and solute determines how well a solute can be solvated by a particular solvent. Solvent polarity is the most important factor in determining how well it solvates a particular solute. Polar solvents have molecular dipoles, meaning that part of the solvent molecule has more electron density than another part of the molecule. The part with more electron density will experience a partial negative charge while the part with less electron density will experience a partial positive charge. Polar solvent molecules can solvate polar solutes and ions because they can orient the appropriate partially charged portion of the molecule towards the solute through electrostatic attraction. This stabilizes the system and creates a solvation shell (or hydration shell in the case of water) around each particle of solute. 
The solvent molecules in the immediate vicinity of a solute particle often have a much different ordering than the rest of the solvent, and this area of differently ordered solvent molecules is called the cybotactic region. Water is the most common and well-studied polar solvent, but others exist, such as ethanol, methanol, acetone, acetonitrile, and dimethyl sulfoxide. Polar solvents are often found to have a high dielectric constant, although other solvent scales are also used to classify solvent polarity. Polar solvents can be used to dissolve inorganic or ionic compounds such as salts. The conductivity of a solution depends on the solvation of its ions. Nonpolar solvents cannot solvate ions, and ions will be found as ion pairs. Hydrogen bonding among solvent and solute molecules depends on the ability of each to accept H-bonds, donate H-bonds, or both. Solvents that can donate H-bonds are referred to as protic, while solvents that do not contain a polarized bond to a hydrogen atom and cannot donate a hydrogen bond are called aprotic. H-bond donor ability is classified on a scale (α). Protic solvents can solvate solutes that can accept hydrogen bonds. Similarly, solvents that can accept a hydrogen bond can solvate H-bond-donating solutes. The hydrogen bond acceptor ability of a solvent is classified on a scale (β). Solvents such as water can both donate and accept hydrogen bonds, making them excellent at solvating solutes that can donate or accept (or both) H-bonds. Some chemical compounds experience solvatochromism, which is a change in color due to solvent polarity. This phenomenon illustrates how different solvents interact differently with the same solute. Other solvent effects include conformational or isomeric preferences and changes in the acidity of a solute. Solvation energy and thermodynamic considerations The solvation process will be thermodynamically favored only if the overall Gibbs energy of the solution is decreased, compared to the Gibbs energy of the separated solvent and solid (or gas or liquid). This means that the change in enthalpy minus the change in entropy (multiplied by the absolute temperature) is a negative value, or that the Gibbs energy of the system decreases. A negative Gibbs energy indicates a spontaneous process but does not provide information about the rate of dissolution. Solvation involves multiple steps with different energy consequences. First, a cavity must form in the solvent to make space for a solute. This is both entropically and enthalpically unfavorable, as solvent ordering increases and solvent-solvent interactions decrease. Stronger interactions among solvent molecules leads to a greater enthalpic penalty for cavity formation. Next, a particle of solute must separate from the bulk. This is enthalpically unfavorable since solute-solute interactions decrease, but when the solute particle enters the cavity, the resulting solvent-solute interactions are enthalpically favorable. Finally, as solute mixes into solvent, there is an entropy gain. The enthalpy of solution is the solution enthalpy minus the enthalpy of the separate systems, whereas the entropy of solution is the corresponding difference in entropy. The solvation energy (change in Gibbs free energy) is the change in enthalpy minus the product of temperature (in Kelvin) times the change in entropy. Gases have a negative entropy of solution, due to the decrease in gaseous volume as gas dissolves. 
Since their enthalpy of solution does not decrease too much with temperature, and their entropy of solution is negative and does not vary appreciably with temperature, most gases are less soluble at higher temperatures. Enthalpy of solvation can help explain why solvation occurs with some ionic lattices but not with others. The difference in energy between that which is necessary to release an ion from its lattice and the energy given off when it combines with a solvent molecule is called the enthalpy change of solution. A negative value for the enthalpy change of solution corresponds to an ion that is likely to dissolve, whereas a high positive value means that solvation will not occur. It is possible that an ion will dissolve even if it has a positive enthalpy value. The extra energy required comes from the increase in entropy that results when the ion dissolves. The introduction of entropy makes it harder to determine by calculation alone whether a substance will dissolve or not. A quantitative measure for solvation power of solvents is given by donor numbers. Although early thinking was that a higher ratio of a cation's ion charge to ionic radius, or the charge density, resulted in more solvation, this does not stand up to scrutiny for ions like iron(III) or lanthanides and actinides, which are readily hydrolyzed to form insoluble (hydrous) oxides. As these are solids, it is apparent that they are not solvated. Strong solvent–solute interactions make the process of solvation more favorable. One way to compare how favorable the dissolution of a solute is in different solvents is to consider the free energy of transfer. The free energy of transfer quantifies the free energy difference between dilute solutions of a solute in two different solvents. This value essentially allows for comparison of solvation energies without including solute-solute interactions. In general, thermodynamic analysis of solutions is done by modeling them as reactions. For example, if you add sodium chloride to water, the salt will dissociate into the ions sodium(+aq) and chloride(-aq). The equilibrium constant for this dissociation can be predicted by the change in Gibbs energy of this reaction. The Born equation is used to estimate Gibbs free energy of solvation of a gaseous ion. Recent simulation studies have shown that the variation in solvation energy between the ions and the surrounding water molecules underlies the mechanism of the Hofmeister series. Macromolecules and assemblies Solvation (specifically, hydration) is important for many biological structures and processes. For instance, solvation of ions and/or of charged macromolecules, like DNA and proteins, in aqueous solutions influences the formation of heterogeneous assemblies, which may be responsible for biological function. As another example, protein folding occurs spontaneously, in part because of a favorable change in the interactions between the protein and the surrounding water molecules. Folded proteins are stabilized by 5-10 kcal/mol relative to the unfolded state due to a combination of solvation and the stronger intramolecular interactions in the folded protein structure, including hydrogen bonding. Minimizing the number of hydrophobic side chains exposed to water by burying them in the center of a folded protein is a driving force related to solvation. Solvation also affects host–guest complexation. Many host molecules have a hydrophobic pore that readily encapsulates a hydrophobic guest. 
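As a rough illustration of the Born estimate mentioned above, the following Python sketch evaluates the Born expression ΔG ≈ −(N_A z²e²)/(8πε₀r)·(1 − 1/ε_r) for a single ion. The ionic radius and relative permittivity used are illustrative assumptions chosen only to show the order of magnitude, not recommended values.

import math

def born_solvation_energy(z, radius_m, eps_r):
    # Born estimate of the molar Gibbs free energy of solvation of a gaseous ion, in J/mol.
    N_A = 6.02214076e23          # Avogadro constant, 1/mol
    e = 1.602176634e-19          # elementary charge, C
    eps_0 = 8.8541878128e-12     # vacuum permittivity, F/m
    return -(N_A * (z * e) ** 2) / (8 * math.pi * eps_0 * radius_m) * (1 - 1 / eps_r)

# Illustrative inputs only: a singly charged ion of radius about 1.0 angstrom in water (eps_r about 78).
dG = born_solvation_energy(z=1, radius_m=1.0e-10, eps_r=78.0)
print(f"{dG / 1000:.0f} kJ/mol")   # roughly -690 kJ/mol for these illustrative inputs

The strongly negative result reflects the large electrostatic stabilization an ion gains in a high-permittivity solvent, which is why the equation is useful as a first estimate even though it ignores specific interactions such as hydrogen bonding.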
These interactions can be used in applications such as drug delivery, such that a hydrophobic drug molecule can be delivered in a biological system without needing to covalently modify the drug in order to solubilize it. Binding constants for host–guest complexes depend on the polarity of the solvent. Hydration affects electronic and vibrational properties of biomolecules. Importance of solvation in computer simulations Due to the importance of the effects of solvation on the structure of macromolecules, early computer simulations which attempted to model their behaviors without including the effects of solvent (in vacuo) could yield poor results when compared with experimental data obtained in solution. Small molecules may also adopt more compact conformations when simulated in vacuo; this is due to favorable van der Waals interactions and intramolecular electrostatic interactions which would be dampened in the presence of a solvent. As computer power increased, it became possible to try and incorporate the effects of solvation within a simulation and the simplest way to do this is to surround the molecule being simulated with a "skin" of solvent molecules, akin to simulating the molecule within a drop of solvent if the skin is sufficiently deep.
Physical sciences
Mixture
Chemistry
44044
https://en.wikipedia.org/wiki/Oceanography
Oceanography
Oceanography (), also known as oceanology, sea science, ocean science, and marine science, is the scientific study of the ocean, including its physics, chemistry, biology, and geology. It is an Earth science, which covers a wide range of topics, including ocean currents, waves, and geophysical fluid dynamics; fluxes of various chemical substances and physical properties within the ocean and across its boundaries; ecosystem dynamics; and plate tectonics and seabed geology. Oceanographers draw upon a wide range of disciplines to deepen their understanding of the world’s oceans, incorporating insights from astronomy, biology, chemistry, geography, geology, hydrology, meteorology and physics. History Early history Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle and Strabo in 384–322 BC. Early exploration of the oceans was primarily for cartography and mainly limited to its surfaces and of the animals that fishermen brought up in nets, though depth soundings by lead line were taken. The Portuguese campaign of Atlantic navigation is the earliest example of a systematic scientific large project, sustained over many decades, studying the currents and winds of the Atlantic. The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the shortest course between two points on the surface of a sphere represented onto a two-dimensional map. When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour: "nam se fezeram indo a acertar: mas partiam os nossos mareantes muy ensinados e prouidos de estromentos e regras de astrologia e geometria que sam as cousas que os cosmographos ham dadar apercebidas (...) e leuaua cartas muy particularmente rumadas e na ja as de que os antigos vsauam" (were not done by chance: but our seafarers departed well taught and provided with instruments and rules of astrology (astronomy) and geometry which were matters the cosmographers would provide (...) and they took charts with exact routes and no longer those used by the ancient). His credibility rests on being personally involved in the instruction of pilots and senior seafarers from 1527 onwards by Royal appointment, along with his recognized competence as mathematician and astronomer. The main problem in navigating back from the south of the Canary Islands (or south of Boujdour) by sail alone, is due to the change in the regime of winds and currents: the North Atlantic gyre and the Equatorial counter current will push south along the northwest bulge of Africa, while the uncertain winds where the Northeast trades meet the Southeast trades (the doldrums) leave a sailing ship to the mercy of the currents. Together, prevalent current and wind make northwards progress very difficult or impossible. It was to overcome this problem and clear the passage to India around Africa as a viable maritime trade route, that a systematic plan of exploration was devised by the Portuguese. The return route from regions south of the Canaries became the 'volta do largo' or 'volta do mar'. 
The 'rediscovery' of the Azores islands in 1427 is merely a reflection of the heightened strategic importance of the islands, now sitting on the return route from the western coast of Africa (sequentially called 'volta de Guiné' and 'volta da Mina'); and the references to the Sargasso Sea (also called at the time 'Mar da Baga'), to the west of the Azores, in 1436, reveals the western extent of the return route. This is necessary, under sail, to make use of the southeasterly and northeasterly winds away from the western coast of Africa, up to the northern latitudes where the westerly winds will bring the seafarers towards the western coasts of Europe. The secrecy involving the Portuguese navigations, with the death penalty for the leaking of maps and routes, concentrated all sensitive records in the Royal Archives, completely destroyed by the Lisbon earthquake of 1775. However, the systematic nature of the Portuguese campaign, mapping the currents and winds of the Atlantic, is demonstrated by the understanding of the seasonal variations, with expeditions setting sail at different times of the year taking different routes to take account of seasonal predominate winds. This happens from as early as late 15th century and early 16th: Bartolomeu Dias followed the African coast on his way south in August 1487, while Vasco da Gama would take an open sea route from the latitude of Sierra Leone, spending three months in the open sea of the South Atlantic to profit from the southwards deflection of the southwesterly on the Brazilian side (and the Brazilian current going southward - Gama departed in July 1497); and Pedro Álvares Cabral (departing March 1500) took an even larger arch to the west, from the latitude of Cape Verde, thus avoiding the summer monsoon (which would have blocked the route taken by Gama at the time he set sail). Furthermore, there were systematic expeditions pushing into the western Northern Atlantic (Teive, 1454; Vogado, 1462; Teles, 1474; Ulmo, 1486). The documents relating to the supplying of ships, and the ordering of sun declination tables for the southern Atlantic for as early as 1493–1496, all suggest a well-planned and systematic activity happening during the decade long period between Bartolomeu Dias finding the southern tip of Africa, and Gama's departure; additionally, there are indications of further travels by Bartolomeu Dias in the area. The most significant consequence of this systematic knowledge was the negotiation of the Treaty of Tordesillas in 1494, moving the line of demarcation 270 leagues to the west (from 100 to 370 leagues west of the Azores), bringing what is now Brazil into the Portuguese area of domination. The knowledge gathered from open sea exploration allowed for the well-documented extended periods of sail without sight of land, not by accident but as pre-determined planned route; for example, 30 days for Bartolomeu Dias culminating on Mossel Bay, the three months Gama spent in the South Atlantic to use the Brazil current (southward), or the 29 days Cabral took from Cape Verde up to landing in Monte Pascoal, Brazil. The Danish expedition to Arabia 1761–67 can be said to be the world's first oceanographic expedition, as the ship Grønland had on board a group of scientists, including naturalist Peter Forsskål, who was assigned an explicit task by the king, Frederik V, to study and describe the marine life in the open sea, including finding the cause of mareel, or milky seas. 
For this purpose, the expedition was equipped with nets and scrapers, specifically designed to collect samples from the open waters and the bottom at great depth. Although Juan Ponce de León in 1513 first identified the Gulf Stream, and the current was well known to mariners, Benjamin Franklin made the first scientific study of it and gave it its name. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770. Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville. James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic and Indian oceans. During a voyage around the Cape of Good Hope in 1777, he mapped "the banks and currents at the Lagullas". He was also the first to understand the nature of the intermittent current near the Isles of Scilly, (now known as Rennell's Current). The tides and currents of the ocean are distinct. Tides are the rise and fall of sea levels created by the combination of the gravitational forces of the Moon along with the Sun (the Sun just in a much lesser extent) and are also caused by the Earth and Moon orbiting each other. An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect, breaking waves, cabbeling, and temperature and salinity differences. Sir James Clark Ross took the first modern sounding in deep sea in 1840, and Charles Darwin published a paper on reefs and the formation of atolls as a result of the second voyage of HMS Beagle in 1831–1836. Robert FitzRoy published a four-volume report of Beagles three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology. The first superintendent of the United States Naval Observatory (1842–1861), Matthew Fontaine Maury devoted his time to the study of marine meteorology, navigation, and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide. Modern oceanography Knowledge of the oceans remained confined to the topmost few fathoms of the water and a small amount of the bottom, mainly in shallow areas. Almost nothing was known of the ocean depths. The British Royal Navy's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep, although little more was known. As exploration ignited both popular and scientific interest in the polar regions and Africa, so too did the mysteries of the unexplored oceans. The seminal event in the founding of the modern science of oceanography was the 1872–1876 Challenger expedition. As the first true oceanographic cruise, this expedition laid the groundwork for an entire academic and research discipline. In response to a recommendation from the Royal Society, the British Government announced in 1871 an expedition to explore world's oceans and conduct appropriate scientific investigation. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition. 
HMS Challenger, leased from the Royal Navy, was modified for scientific work and equipped with separate laboratories for natural history and chemistry. Under the scientific supervision of Thomson, Challenger travelled nearly 70,000 nautical miles surveying and exploring. On her journey circumnavigating the globe, 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations were taken. Around 4,700 new species of marine life were discovered. The result was the Report of the Scientific Results of the Exploring Voyage of H.M.S. Challenger during the years 1873–76. Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries". He went on to found the academic discipline of oceanography at the University of Edinburgh, which remained the centre for oceanographic research well into the 20th century. Murray was the first to study marine trenches and in particular the Mid-Atlantic Ridge, and to map the sedimentary deposits in the oceans. He tried to map out the world's ocean currents based on salinity and temperature observations, and was the first to correctly understand the nature of coral reef development. In the late 19th century, other Western nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, Albatross, was built in 1882. In 1893, Fridtjof Nansen allowed his ship, Fram, to be frozen in the Arctic ice. This enabled him to obtain oceanographic, meteorological and astronomical data at a stationary spot over an extended period. In 1881 the geographer John Francon Williams published a seminal book, Geography of the Oceans. Between 1907 and 1911 Otto Krümmel published the Handbuch der Ozeanographie, which became influential in awakening public interest in oceanography. The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious oceanographic and marine zoological research project mounted until then, and led to the classic 1912 book The Depths of the Ocean. The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge. In 1934, Easter Ellen Cupp, the first woman in the United States to have earned a PhD in oceanography (at Scripps), completed a major work on diatoms that remained the standard taxonomy in the field until well after her death in 1999. In 1940, Cupp was let go from her position at Scripps. Sverdrup specifically commended Cupp as a conscientious and industrious worker and commented that his decision was no reflection on her ability as a scientist. Sverdrup used the instructor billet vacated by Cupp to employ Marston Sargent, a biologist studying marine algae, which was not a new research program at Scripps. Financial pressures did not prevent Sverdrup from retaining the services of two other young post-doctoral students, Walter Munk and Roger Revelle. Cupp's partner, Dorothy Rosenbury, found her a position teaching high school, where she remained for the rest of her career. (Russell, 2000) Sverdrup, Johnson and Fleming published The Oceans in 1942, which was a major landmark. The Sea (in three volumes, covering physical oceanography, seawater and geology) edited by M.N. Hill was published in 1962, while Rhodes Fairbridge's Encyclopedia of Oceanography was published in 1966.
The Great Global Rift, running along the Mid Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953 and mapped by Heezen and Marie Tharp using bathymetric data; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess. The Ocean Drilling Program started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible . In the 1950s, Auguste Piccard invented the bathyscaphe and used the bathyscaphe to investigate the ocean's depths. The United States nuclear submarine made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a spar buoy, was first deployed. In 1968, Tanya Atwater led the first all-woman oceanographic expedition. Until that time, gender policies restricted women oceanographers from participating in voyages to a significant extent. From the 1970s, there has been much emphasis on the application of large scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction. Early techniques included analog computers (such as the Ishiguro Storm Surge Computer) generally now replaced by numerical methods (e.g. SLOSH.) An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events. 1990 saw the start of the World Ocean Circulation Experiment (WOCE) which continued until 2002. Geosat seafloor mapping data became available in 1995. Study of the oceans is critical to understanding shifts in Earth's energy balance along with related global and regional changes in climate, the biosphere and biogeochemistry. The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation). Recent studies have advanced knowledge on ocean acidification, ocean heat content, ocean currents, sea level rise, the oceanic carbon cycle, the water cycle, Arctic sea ice decline, coral bleaching, marine heatwaves, extreme weather, coastal erosion and many other phenomena in regards to ongoing climate change and climate feedbacks. In general, understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of Earth's resources. The Intergovernmental Oceanographic Commission reports that 1.7% of the total national research expenditure of its members is focused on ocean science. Branches The study of oceanography is divided into these five branches: Biological oceanography Biological oceanography investigates the ecology and biology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment. Chemical oceanography Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles. The following is a central topic investigated by chemical oceanography. Ocean acidification Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide () emissions into the atmosphere. Seawater is slightly alkaline and had a preindustrial pH of about 8.2. 
More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1) through ocean acidification. The pH is expected to reach 7.7 by the year 2100. An important element for the skeletons of marine animals is calcium, but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth. Calcium carbonate becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods, coccolithophorids and foraminifera, all important in the food chain. In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, in turn adversely impacting other reef dwellers. The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher ocean temperatures and lower oxygen levels will impact the seas. Geological oceanography Geological oceanography is the study of the geology of the ocean floor including plate tectonics and paleoceanography. Physical oceanography Physical oceanography studies the ocean's physical attributes including temperature-salinity structure, mixing, surface waves, internal waves, surface tides, internal tides, and currents. The following are central topics investigated by physical oceanography. Seismic Oceanography Ocean currents Since the early ocean expeditions in oceanography, a major interest was the study of ocean currents and temperature measurements. The tides, the Coriolis effect, changes in direction and strength of wind, salinity, and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) (thermo- referring to temperature and -haline referring to salt content) connects the ocean basins and is primarily dependent on the density of sea water. It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity. Examples of sustained currents are the Gulf Stream and the Kuroshio Current which are wind-driven western boundary currents. Ocean heat content Oceanic heat content (OHC) refers to the extra heat stored in the ocean from changes in Earth's energy balance. The increase in the ocean heat play an important role in sea level rise, because of thermal expansion. Ocean warming accounts for 90% of the energy accumulation associated with global warming since 1971. Paleoceanography Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environment models and different proxies enable the scientific community to assess the role of the oceanic processes in the global climate by the reconstruction of past climate at various intervals. 
Paleoceanographic research is also intimately tied to palaeoclimatology. Oceanographic institutions The earliest international organizations of oceanography were founded at the turn of the 20th century, starting with the International Council for the Exploration of the Sea created in 1902, followed in 1919 by the Mediterranean Science Commission. Marine research institutes were already in existence, starting with the Stazione Zoologica Anton Dohrn in Naples, Italy (1872), the Biological Station of Roscoff, France (1876), the Arago Laboratory in Banyuls-sur-mer, France (1882), the Laboratory of the Marine Biological Association in Plymouth, UK (1884), the Norwegian Institute for Marine Research in Bergen, Norway (1900), the Laboratory für internationale Meeresforschung, Kiel, Germany (1902). On the other side of the Atlantic, the Scripps Institution of Oceanography was founded in 1903, followed by the Woods Hole Oceanographic Institution in 1930, the Virginia Institute of Marine Science in 1938, the Lamont–Doherty Earth Observatory at Columbia University in 1949, and later the School of Oceanography at University of Washington. In Australia, the Australian Institute of Marine Science (AIMS), established in 1972 soon became a key player in marine tropical research. In 1921 the International Hydrographic Bureau, called since 1970 the International Hydrographic Organization, was established to develop hydrographic and nautical charting standards. Related disciplines
Physical sciences
Oceanography
null
44057
https://en.wikipedia.org/wiki/Galactic%20astronomy
Galactic astronomy
Galactic astronomy is the study of the Milky Way galaxy and all its contents. This is in contrast to extragalactic astronomy, which is the study of everything outside our galaxy, including all other galaxies. Galactic astronomy should not be confused with galaxy formation and evolution, which is the general study of galaxies, their formation, structure, components, dynamics, interactions, and the range of forms they take. The Milky Way galaxy, where the Solar System is located, is in many ways the best-studied galaxy, although important parts of it are obscured from view in visible wavelengths by regions of cosmic dust. The development of radio astronomy, infrared astronomy and submillimetre astronomy in the 20th century allowed the gas and dust of the Milky Way to be mapped for the first time. Subcategories A standard set of subcategories is used by astronomical journals to split up the subject of Galactic Astronomy: abundances – the study of the location of elements heavier than helium bulge – the study of the bulge around the center of the Milky Way center – the study of the central region of the Milky Way disk – the study of the Milky Way disk (the plane upon which most galactic objects are aligned) evolution – the evolution of the Milky Way formation – the formation of the Milky Way fundamental parameters – the fundamental parameters of the Milky Way (mass, size etc.) globular cluster – globular clusters within the Milky Way halo – the large halo around the Milky Way kinematics, and dynamics – the motions of stars and clusters nucleus – the region around the black hole at the center of the Milky Way (Sagittarius A*) open clusters and associations – open clusters and associations of stars Solar neighborhood – nearby stars stellar content – numbers and types of stars in the Milky Way structure – the structure (spiral arms etc.) Stellar populations Star clusters Globular clusters Open clusters Interstellar medium Interplanetary space - Interplanetary medium - interplanetary dust Interstellar space - Interstellar medium - interstellar dust Intergalactic space - Intergalactic medium - Intergalactic dust
Physical sciences
Basics_2
Astronomy
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Big Bang nucleosynthesis
In physical cosmology, Big Bang nucleosynthesis (also known as primordial nucleosynthesis, and abbreviated as BBN) is the production of nuclei other than those of the lightest isotope of hydrogen (hydrogen-1, 1H, having a single proton as a nucleus) during the early phases of the universe. This type of nucleosynthesis is thought by most cosmologists to have occurred from 10 seconds to 20 minutes after the Big Bang. It is thought to be responsible for the formation of most of the universe's helium (as isotope helium-4 (4He)), along with small fractions of the hydrogen isotope deuterium (2H or D), the helium isotope helium-3 (3He), and a very small fraction of the lithium isotope lithium-7 (7Li). In addition to these stable nuclei, two unstable or radioactive isotopes were produced: the heavy hydrogen isotope tritium (3H or T) and the beryllium isotope beryllium-7 (7Be). These unstable isotopes later decayed into 3He and 7Li, respectively, as above. Elements heavier than lithium are thought to have been created later in the life of the Universe by stellar nucleosynthesis, through the formation, evolution and death of stars. Characteristics There are several important characteristics of Big Bang nucleosynthesis (BBN): The initial conditions (neutron–proton ratio) were set in the first second after the Big Bang. The universe was very close to homogeneous at this time, and strongly radiation-dominated. The fusion of nuclei occurred between roughly 10 seconds to 20 minutes after the Big Bang; this corresponds to the temperature range when the universe was cool enough for deuterium to survive, but hot and dense enough for fusion reactions to occur at a significant rate. It was widespread, encompassing the entire observable universe. The key parameter which allows one to calculate the effects of Big Bang nucleosynthesis is the baryon/photon number ratio, which is a small number of order 6 × 10−10. This parameter corresponds to the baryon density and controls the rate at which nucleons collide and react; from this it is possible to calculate element abundances after nucleosynthesis ends. Although the baryon per photon ratio is important in determining element abundances, the precise value makes little difference to the overall picture. Without major changes to the Big Bang theory itself, BBN will result in mass abundances of about 75% of hydrogen-1, about 25% helium-4, about 0.01% of deuterium and helium-3, trace amounts (on the order of 10−10) of lithium, and negligible heavier elements. That the observed abundances in the universe are generally consistent with these abundance numbers is considered strong evidence for the Big Bang theory. In this field, for historical reasons it is customary to quote the helium-4 fraction by mass, symbol Y, so that 25% helium-4 means that helium-4 atoms account for 25% of the mass, but less than 8% of the nuclei would be helium-4 nuclei. Other (trace) nuclei are usually expressed as number ratios to hydrogen. The first detailed calculations of the primordial isotopic abundances came in 1966 and have been refined over the years using updated estimates of the input nuclear reaction rates. The first systematic Monte Carlo study of how nuclear reaction rate uncertainties impact isotope predictions, over the relevant temperature range, was carried out in 1993. 
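To make the mass-versus-number distinction above concrete, here is a small Python check; it uses only the rounded figures quoted in this section (25% helium-4 and 75% hydrogen by mass) and is an illustrative calculation, not a result from the BBN literature.

Y = 0.25                      # helium-4 mass fraction quoted above
X = 0.75                      # hydrogen-1 mass fraction, ignoring trace nuclei
n_He = Y / 4                  # helium-4 nuclei per unit mass (mass number 4)
n_H = X / 1                   # hydrogen nuclei per unit mass (mass number 1)
frac_He_by_number = n_He / (n_He + n_H)
print(f"{frac_He_by_number:.3f}")   # about 0.077, i.e. under 8% of nuclei are helium-4

This reproduces the statement that a 25% mass fraction corresponds to fewer than 8% of nuclei being helium-4.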
Important parameters The creation of light elements during BBN was dependent on a number of parameters; among these were the neutron–proton ratio (calculable from Standard Model physics) and the baryon–photon ratio. Neutron–proton ratio The neutron–proton ratio was set by Standard Model physics before the nucleosynthesis era, essentially within the first second after the Big Bang. Neutrons can react with positrons or electron neutrinos to create protons and other products in one of the following reactions:
n + e+ ↔ ν̄e + p
n + νe ↔ p + e−
(where νe and ν̄e denote the electron neutrino and antineutrino). At times much earlier than 1 sec, these reactions were fast and maintained the n/p ratio close to 1:1. As the temperature dropped, the equilibrium shifted in favour of protons due to their slightly lower mass, and the n/p ratio smoothly decreased. These reactions continued until the decreasing temperature and density caused the reactions to become too slow, which occurred at about T = 0.7 MeV (time around 1 second) and is called the freeze-out temperature. At freeze-out, the neutron–proton ratio was about 1/6. However, free neutrons are unstable with a mean life of 880 sec; some neutrons decayed in the next few minutes before fusing into any nucleus, so the ratio of total neutrons to protons after nucleosynthesis ends is about 1/7. Almost all neutrons that fused instead of decaying ended up combined into helium-4, due to the fact that helium-4 has the highest binding energy per nucleon among light elements. This predicts that about 8% of all atoms should be helium-4, leading to a mass fraction of helium-4 of about 25%, which is in line with observations. Small traces of deuterium and helium-3 remained, as there was insufficient time and density for them to react and form helium-4. Baryon–photon ratio The baryon–photon ratio, η, is the key parameter determining the abundances of light elements after nucleosynthesis ends. Baryons and light elements can fuse in a small network of main reactions (deuterium formation from a proton and a neutron, followed by fusion steps that build tritium, helium-3 and helium-4), along with some other low-probability reactions leading to 7Li or 7Be. (An important feature is that there are no stable nuclei with mass 5 or 8, which implies that reactions adding one baryon to 4He, or fusing two 4He, do not occur.) Most fusion chains during BBN ultimately terminate in 4He (helium-4), while "incomplete" reaction chains lead to small amounts of left-over 2H or 3He; the amount of these decreases with increasing baryon–photon ratio. That is, the larger the baryon–photon ratio, the more reactions there will be and the more efficiently deuterium will eventually be transformed into helium-4. This result makes deuterium a very useful tool in measuring the baryon-to-photon ratio. Sequence Big Bang nucleosynthesis began roughly 20 seconds after the Big Bang, when the universe had cooled sufficiently to allow deuterium nuclei to survive disruption by high-energy photons. (Note that the neutron–proton freeze-out time was earlier.) This time is essentially independent of dark matter content, since the universe was highly radiation dominated until much later, and this dominant component controls the temperature/time relation. At this time there were about six protons for every neutron, but a small fraction of the neutrons decayed before fusing in the next few hundred seconds, so at the end of nucleosynthesis there are about seven protons for every neutron, and almost all the neutrons are in helium-4 nuclei.
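The numbers quoted in this section can be reproduced with a back-of-the-envelope Python calculation. The sketch below uses the equilibrium Boltzmann factor for the neutron–proton mass difference at the quoted freeze-out temperature, and the rounded 1/7 ratio after neutron decays; this is a simplification for illustration, not a substitute for a full BBN network calculation.

import math

delta_m = 1.293                      # neutron-proton mass difference, MeV
T_freeze = 0.7                       # approximate freeze-out temperature, MeV
np_freeze = math.exp(-delta_m / T_freeze)
print(f"n/p at freeze-out ~ 1/{1 / np_freeze:.1f}")   # roughly 1/6, as stated above

np_after_decay = 1 / 7               # ratio after some free-neutron decay (figure quoted above)
# If essentially every surviving neutron is locked into helium-4, the predicted mass fraction is
Y = 2 * np_after_decay / (1 + np_after_decay)
print(f"helium-4 mass fraction Y ~ {Y:.2f}")          # about 0.25

The mass-fraction formula follows because each helium-4 nucleus binds two neutrons with two protons, so for a neutron-to-proton ratio r the helium-4 mass per unit baryon mass is 2r/(1 + r).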
One feature of BBN is that the physical laws and constants that govern the behavior of matter at these energies are very well understood, and hence BBN lacks some of the speculative uncertainties that characterize earlier periods in the life of the universe. Another feature is that the process of nucleosynthesis is determined by conditions at the start of this phase of the life of the universe, and proceeds independently of what happened before. As the universe expands, it cools. Free neutrons are less stable than helium nuclei, and the protons and neutrons have a strong tendency to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Before nucleosynthesis began, the temperature was high enough for many photons to have energy greater than the binding energy of deuterium; therefore any deuterium that was formed was immediately destroyed (a situation known as the "deuterium bottleneck"). Hence, the formation of helium-4 was delayed until the universe became cool enough for deuterium to survive (at about T = 0.1 MeV); after which there was a sudden burst of element formation. However, very shortly thereafter, around twenty minutes after the Big Bang, the temperature and density became too low for any significant fusion to occur. At this point, the elemental abundances were nearly fixed, and the only changes were the result of the radioactive decay of the two major unstable products of BBN, tritium and beryllium-7. History of theory The history of Big Bang nucleosynthesis began with the calculations of Ralph Alpher in the 1940s. Alpher published the Alpher–Bethe–Gamow paper that outlined the theory of light-element production in the early universe. Heavy elements Big Bang nucleosynthesis produced very few nuclei of elements heavier than lithium due to a bottleneck: the absence of a stable nucleus with 8 or 5 nucleons. This deficit of larger atoms also limited the amounts of lithium-7 produced during BBN. In stars, the bottleneck is passed by triple collisions of helium-4 nuclei, producing carbon (the triple-alpha process). However, this process is very slow and requires much higher densities, taking tens of thousands of years to convert a significant amount of helium to carbon in stars, and therefore it made a negligible contribution in the minutes following the Big Bang. The predicted abundance of CNO isotopes produced in Big Bang nucleosynthesis is expected to be on the order of 10−15 that of H, making them essentially undetectable and negligible. Indeed, none of these primordial isotopes of the elements from beryllium to oxygen have yet been detected, although those of beryllium and boron may be able to be detected in the future. So far, the only stable nuclides known experimentally to have been made during Big Bang nucleosynthesis are protium, deuterium, helium-3, helium-4, and lithium-7. Helium-4 Big Bang nucleosynthesis predicts a primordial abundance of about 25% helium-4 by mass, irrespective of the initial conditions of the universe. As long as the universe was hot enough for protons and neutrons to transform into each other easily, their ratio, determined solely by their relative masses, was about 1 neutron to 7 protons (allowing for some decay of neutrons into protons). Once it was cool enough, the neutrons quickly bound with an equal number of protons to form first deuterium, then helium-4. 
Helium-4 is very stable and is nearly the end of this chain if it runs for only a short time, since helium neither decays nor combines easily to form heavier nuclei (because there are no stable nuclei with mass numbers of 5 or 8, helium does not combine easily with either protons or with itself). Once temperatures had lowered, out of every 16 nucleons (2 neutrons and 14 protons), 4 of these (25% of the total particles and total mass) combined quickly into one helium-4 nucleus. This produces one helium for every 12 hydrogens, resulting in a universe that is a little under 8% helium by number of atoms, and 25% helium by mass. One analogy is to think of helium-4 as ash: the amount of ash formed when a piece of wood is completely burned is insensitive to how it is burned. BBN theory is needed to account for the helium-4 abundance, as there is far more helium-4 in the universe than can be explained by stellar nucleosynthesis. In addition, it provides an important test for the Big Bang theory. If the observed helium abundance were significantly different from 25%, this would pose a serious challenge to the theory. This would particularly be the case if the early helium-4 abundance was much smaller than 25%, because it is hard to destroy helium-4. For a few years during the mid-1990s, observations suggested that this might be the case, causing astrophysicists to talk about a Big Bang nucleosynthetic crisis, but further observations were consistent with the Big Bang theory. Deuterium Deuterium is in some ways the opposite of helium-4, in that while helium-4 is very stable and difficult to destroy, deuterium is only marginally stable and easy to destroy. The temperatures, time, and densities were sufficient to combine a substantial fraction of the deuterium nuclei to form helium-4 but insufficient to carry the process further using helium-4 in the next fusion step. BBN did not convert all of the deuterium in the universe to helium-4 because the expansion cooled the universe and reduced the density, cutting that conversion short before it could proceed any further. One consequence of this is that, unlike helium-4, the amount of deuterium is very sensitive to initial conditions. The denser the initial universe was, the more deuterium would be converted to helium-4 before time ran out, and the less deuterium would remain. There are no known post-Big Bang processes which can produce significant amounts of deuterium. Hence observations of the deuterium abundance suggest that the universe is not infinitely old, which is in accordance with the Big Bang theory. During the 1970s, there were major efforts to find processes that could produce deuterium, but those efforts revealed ways of producing isotopes other than deuterium. The problem was that while the concentration of deuterium in the universe is consistent with the Big Bang model as a whole, it is too high to be consistent with a model that presumes that most of the universe is composed of protons and neutrons. If one assumes that all of the universe consists of protons and neutrons, the density of the universe is such that much of the currently observed deuterium would have been burned into helium-4. The standard explanation now used for the abundance of deuterium is that the universe does not consist mostly of baryons, but that non-baryonic matter (also known as dark matter) makes up most of the mass of the universe.
This explanation is also consistent with calculations showing that a universe made mostly of protons and neutrons would be far more clumpy than is observed. It is very hard to come up with another process that would produce deuterium other than by nuclear fusion. Such a process would require that the temperature be hot enough to produce deuterium, but not hot enough to produce helium-4, and that this process should immediately cool to non-nuclear temperatures after no more than a few minutes. It would also be necessary for the deuterium to be swept away before it reoccurs. Producing deuterium by fission is also difficult. The problem here again is that deuterium is a very unlikely product of nuclear processes, and that collisions between atomic nuclei are likely to result either in the fusion of the nuclei, or in the release of free neutrons or alpha particles. During the 1970s, cosmic ray spallation was proposed as a source of deuterium. That theory failed to account for the abundance of deuterium, but led to explanations of the source of other light elements. Lithium The amounts of lithium-7 and lithium-6 produced in the Big Bang are on the order of 10−9 and 10−13 of all primordial nuclides, respectively. Measurements and status of theory The theory of BBN gives a detailed mathematical description of the production of the light "elements" deuterium, helium-3, helium-4, and lithium-7. Specifically, the theory yields precise quantitative predictions for the mixture of these elements, that is, the primordial abundances at the end of the Big Bang. In order to test these predictions, it is necessary to reconstruct the primordial abundances as faithfully as possible, for instance by observing astronomical objects in which very little stellar nucleosynthesis has taken place (such as certain dwarf galaxies) or by observing objects that are very far away, and thus can be seen in a very early stage of their evolution (such as distant quasars). As noted above, in the standard picture of BBN, all of the light element abundances depend on the amount of ordinary matter (baryons) relative to radiation (photons). Since the universe is presumed to be homogeneous, it has one unique value of the baryon-to-photon ratio. For a long time, this meant that to test BBN theory against observations one had to ask: can all of the light element observations be explained with a single value of the baryon-to-photon ratio? Or more precisely, allowing for the finite precision of both the predictions and the observations, one asks: is there some range of baryon-to-photon values which can account for all of the observations? More recently, the question has changed: precision observations of the cosmic microwave background radiation with the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck give an independent value for the baryon-to-photon ratio. Using this value, are the BBN predictions for the abundances of light elements in agreement with the observations? The present measurement of helium-4 indicates good agreement, and yet better agreement for helium-3. But for lithium-7, there is a significant discrepancy between the abundance predicted by BBN using the WMAP/Planck baryon density and the abundance derived from Population II stars; the measured abundance is a factor of 2.4–4.3 below the theoretically predicted value.
This discrepancy, called the "cosmological lithium problem", is considered a problem for the original models; it has resulted in revised calculations of standard BBN based on new nuclear data, and in various reevaluation proposals for primordial proton–proton nuclear reactions, especially the abundances of , versus . Non-standard scenarios In addition to the standard BBN scenario, there are numerous non-standard BBN scenarios. These should not be confused with non-standard cosmology: a non-standard BBN scenario assumes that the Big Bang occurred, but inserts additional physics in order to see how this affects elemental abundances. These pieces of additional physics include relaxing or removing the assumption of homogeneity, or inserting new particles such as massive neutrinos. There have been, and continue to be, various reasons for researching non-standard BBN. The first, which is largely of historical interest, is to resolve inconsistencies between BBN predictions and observations. This has proved to be of limited usefulness in that the inconsistencies were resolved by better observations, and in most cases trying to change BBN resulted in abundances that were more inconsistent with observations rather than less. The second reason for researching non-standard BBN, and largely the focus of non-standard BBN in the early 21st century, is to use BBN to place limits on unknown or speculative physics. For example, standard BBN assumes that no exotic hypothetical particles were involved in BBN. One can insert a hypothetical particle (such as a massive neutrino) and see what has to happen before BBN predicts abundances that are very different from observations. This has been done to put limits on the mass of a stable tau neutrino.
Physical sciences
Physical cosmology
Astronomy
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
X-ray astronomy
X-ray astronomy is an observational branch of astronomy which deals with the observation and detection of X-rays from astronomical objects. X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray astronomy uses a type of space telescope that can detect X-ray radiation which standard optical telescopes, such as the Mauna Kea Observatories, cannot. X-ray emission is expected from astronomical objects that contain extremely hot gases at temperatures from about a million kelvin (K) to hundreds of millions of kelvin (MK). Moreover, the maintenance of the E-layer of ionized gas high in the Earth's thermosphere also suggested a strong extraterrestrial source of X-rays. Although theory predicted that the Sun and the stars would be prominent X-ray sources, there was no way to verify this because Earth's atmosphere blocks most extraterrestrial X-rays. It was not until ways of sending instrument packages to high altitudes were developed that these X-ray sources could be studied. The existence of solar X-rays was confirmed in the mid-twentieth century by V-2 rockets converted into sounding rockets, and the detection of extraterrestrial X-rays has been the primary or secondary mission of multiple satellites since 1958. The first cosmic (beyond the Solar System) X-ray source was discovered by a sounding rocket in 1962. Called Scorpius X-1 (Sco X-1), as the first X-ray source found in the constellation Scorpius, it has an X-ray emission 10,000 times greater than its visual emission, whereas that of the Sun is about a million times less. In addition, its energy output in X-rays is 100,000 times greater than the total emission of the Sun in all wavelengths. Many thousands of X-ray sources have since been discovered. In addition, the intergalactic space in galaxy clusters is filled with a hot, but very dilute gas at a temperature between 100 and 1000 megakelvins (MK). The total amount of hot gas is five to ten times the total mass in the visible galaxies. History of X-ray astronomy In 1927, E.O. Hulburt of the US Naval Research Laboratory and associates Gregory Breit and Merle A. Tuve of the Carnegie Institution of Washington explored the possibility of equipping Robert H. Goddard's rockets to explore the upper atmosphere. "Two years later, he proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere, including detection of ultraviolet radiation and X-rays at high altitudes". In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species; the Sun was thus known to be surrounded by a hot, tenuous corona. In the mid-1940s, radio observations revealed a radio corona around the Sun. The search for X-ray sources from above the Earth's atmosphere began on August 5, 1948, at 12:07 GMT, when a US Army (formerly German) V-2 rocket was launched from White Sands Proving Grounds as part of Project Hermes; the first solar X-rays were recorded by T. Burnight. Over the first 60 years of X-ray astronomy, through the 1960s, 70s, 80s, and 90s, the sensitivity of detectors increased greatly. In addition, the ability to focus X-rays has developed enormously, allowing the production of high-quality images of many fascinating celestial objects.
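As a rough indication of why gas at the temperatures quoted in the introduction shines in X-rays, the Python sketch below converts a plasma temperature into a characteristic thermal photon energy of order kB T. The value of the Boltzmann constant is an assumed input, and the mapping is only approximate, since real thermal spectra are broad.

```python
# Rough mapping from gas temperature to characteristic thermal photon energy (~ kB * T).
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant [eV per K], assumed input

def characteristic_energy_kev(temperature_k: float) -> float:
    """Characteristic thermal photon energy in keV for gas at the given temperature."""
    return K_B_EV_PER_K * temperature_k / 1e3

for t in (1e6, 1e8):  # about a million K and a hundred million K, as in the text
    print(f"T = {t:.0e} K  ->  kT ~ {characteristic_energy_kev(t):.2f} keV")
```

Gas near a million kelvin therefore radiates mainly very soft X-rays of roughly a tenth of a keV, while gas at hundreds of millions of kelvin radiates hard X-rays of several keV.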
Sounding rocket flights The first sounding rocket flights for X-ray research were accomplished at the White Sands Missile Range in New Mexico with a V-2 rocket on January 28, 1949. A detector was placed in the nose cone section and the rocket was launched in a suborbital flight to an altitude just above the atmosphere. X-rays from the Sun were detected by the U.S. Naval Research Laboratory Blossom experiment on board. An Aerobee 150 rocket launched on June 19, 1962 (UTC) detected the first X-rays emitted from a source outside our solar system (Scorpius X-1). It is now known that such X-ray sources as Sco X-1 are compact stars, such as neutron stars or black holes. Material falling into a black hole may emit X-rays, but the black hole itself does not. The energy source for the X-ray emission is gravity. Infalling gas and dust is heated by the strong gravitational fields of these and other celestial objects. Based on discoveries in this new field of X-ray astronomy, starting with Scorpius X-1, Riccardo Giacconi received the Nobel Prize in Physics in 2002. The largest drawback to rocket flights is their very short duration (just a few minutes above the atmosphere before the rocket falls back to Earth) and their limited field of view. A rocket launched from the United States will not be able to see sources in the southern sky; a rocket launched from Australia will not be able to see sources in the northern sky. X-ray Quantum Calorimeter (XQC) project In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field. Of interest is the hot ionized medium (HIM) consisting of a coronal cloud ejection from star surfaces at 106-107 K which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble. To measure the spectrum of the diffuse X-ray emission from the interstellar medium over the energy range 0.07 to 1 keV, NASA launched a Black Brant 9 from White Sands Missile Range, New Mexico on May 1, 2008. The Principal Investigator for the mission is Dr. Dan McCammon of the University of Wisconsin–Madison. Balloons Balloon flights can carry instruments to altitudes of up to 40 km above sea level, where they are above as much as 99.997% of the Earth's atmosphere. Unlike a rocket where data are collected during a brief few minutes, balloons are able to stay aloft for much longer. However, even at such altitudes, much of the X-ray spectrum is still absorbed. 
X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons. On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15–60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas, United States. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source. High-energy focusing telescope The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. Its maiden flight took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is c. 1.5'. Rather than using a grazing-angle X-ray telescope, HEFT makes use of a novel tungsten-silicon multilayer coatings to extend the reflectivity of nested grazing-incidence mirrors beyond 10 keV. HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. HEFT was launched for a 25-hour balloon flight in May 2005. The instrument performed within specification and observed Tau X-1, the Crab Nebula. High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) A balloon-borne experiment called the High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) observed X-ray and gamma-rays emissions from the Sun and other astronomical objects. It was launched from McMurdo Station, Antarctica in December 1991 and 1992. Steady winds carried the balloon on a circumpolar flight lasting about two weeks each time. Rockoons The rockoon, a blend of rocket and balloon, was a solid fuel rocket that, rather than being immediately lit while on the ground, was first carried into the upper atmosphere by a gas-filled balloon. Then, once separated from the balloon at its maximum height, the rocket was automatically ignited. This achieved a higher altitude, since the rocket did not have to move through the lower thicker air layers that would have required much more chemical fuel. The original concept of "rockoons" was developed by Cmdr. Lee Lewis, Cmdr. G. Halvorson, S. F. Singer, and James A. Van Allen during the Aerobee rocket firing cruise of the on March 1, 1949. From July 17 to July 27, 1956, the Naval Research Laboratory (NRL) shipboard launched eight Deacon rockoons for solar ultraviolet and X-ray observations at ~30° N ~121.6° W, southwest of San Clemente Island, apogee: 120 km. X-ray telescopes and mirrors Satellites are needed because X-rays are absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray telescopes (XRTs) have varying directionality or imaging ability based on glancing angle reflection rather than refraction or large deviation reflection. This limits them to much narrower fields of view than visible or UV telescopes. The mirrors can be made of ceramic or metal foil. The first X-ray telescope in astronomy was used to observe the Sun. The first X-ray picture (taken with a grazing incidence telescope) of the Sun was taken in 1963, by a rocket-borne telescope. On April 19, 1960, the very first X-ray image of the sun was taken using a pinhole camera on an Aerobee-Hi rocket. The utilization of X-ray mirrors for extrasolar X-ray astronomy simultaneously requires: the ability to determine the location at the arrival of an X-ray photon in two dimensions and a reasonable detection efficiency. 
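X-ray instruments are variously described in this article by photon energy, by energy in joules (as in the 35 keV, or 5,600 aJ, balloon cutoff above), and by wavelength, so a small conversion helper is sketched below. The electron-volt-to-joule factor and the hc product are standard physical constants supplied here as assumed inputs.

```python
# Helper for converting an X-ray photon energy between the units used in this article.
EV_TO_J = 1.602e-19    # joules per electron volt (standard constant, assumed here)
HC_EV_NM = 1239.84     # h*c in eV*nm (standard constant, assumed here)

def describe_photon(energy_kev: float) -> str:
    """Return a photon's energy in attojoules and its wavelength in nanometres."""
    energy_ev = energy_kev * 1e3
    attojoules = energy_ev * EV_TO_J * 1e18
    wavelength_nm = HC_EV_NM / energy_ev
    return f"{energy_kev} keV ~ {attojoules:.0f} aJ, wavelength ~ {wavelength_nm:.4f} nm"

print(describe_photon(35.0))    # ~5607 aJ and ~0.035 nm, the balloon cutoff quoted above
print(describe_photon(0.12))    # soft end of the detector range discussed below
print(describe_photon(120.0))   # hard end of the detector range discussed below
```

The same helper reproduces wavelengths of roughly 0.01 to 10 nm for the 0.12 to 120 keV detector range described in the next section.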
X-ray astronomy detectors X-ray astronomy detectors have been designed and configured primarily for energy and occasionally for wavelength detection using a variety of techniques usually limited to the technology of the time. X-ray detectors collect individual X-rays (photons of X-ray electromagnetic radiation) and count the number of photons collected (intensity), the energy (0.12 to 120 keV) of the photons collected, wavelength (c. 0.008–8 nm), or how fast the photons are detected (counts per hour), to tell us about the object that is emitting them. Astrophysical sources of X-rays Several types of astrophysical objects emit, fluoresce, or reflect X-rays, from galaxy clusters, through black holes in active galactic nuclei (AGN) to galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), neutron star or black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, black-body radiation, synchrotron radiation, or what is called inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions. An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate mass star. Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Herculis) probably due to Roche lobe overflow. X-1 is the prototype for the massive X-ray binaries although it falls on the borderline, , between high- and low-mass X-ray binaries. In July 2020, astronomers reported the observation of a "hard tidal disruption event candidate" associated with ASASSN-20hx, located near the nucleus of galaxy NGC 6297, and noted that the observation represented one of the "very few tidal disruption events with hard powerlaw X-ray spectra". Celestial X-ray sources The celestial sphere has been divided into 88 constellations. The International Astronomical Union (IAU) constellations are areas of the sky. Each of these contains remarkable X-ray sources. Some of them have been identified from astrophysical modeling to be galaxies or black holes at the centers of galaxies. Some are pulsars. As with sources already successfully modeled by X-ray astrophysics, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth. Constellations are an astronomical device for handling observation and precision independent of current physical theory or interpretation. Astronomy has been around for a long time. Physical theory changes with time. With respect to celestial X-ray sources, X-ray astrophysics tends to focus on the physical reason for X-ray brightness, whereas X-ray astronomy tends to focus on their classification, order of discovery, variability, resolvability, and their relationship with nearby sources in other constellations. 
Within the constellations Orion and Eridanus and stretching across them is a soft X-ray "hot spot" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα emitting filaments. Soft X-rays are emitted by hot gas (T ~ 2–3 MK) in the interior of the superbubble. This bright object forms the background for the "shadow" of a filament of gas and dust. The filament is shown by the overlaid contours, which represent 100 micrometre emission from dust at a temperature of about 30 K as measured by IRAS. Here the filament absorbs soft X-rays between 100 and 300 eV, indicating that the hot gas is located behind the filament. This filament may be part of a shell of neutral gas that surrounds the hot bubble. Its interior is energized by ultraviolet (UV) light and stellar winds from hot stars in the Orion OB1 association. These stars energize a superbubble about 1200 lys across which is observed in the visual (Hα) and X-ray portions of the spectrum. Explorational X-ray astronomy Usually observational astronomy is considered to occur on Earth's surface (or beneath it in neutrino astronomy). The idea of limiting observation to Earth includes orbiting the Earth. As soon as the observer leaves the cozy confines of Earth, the observer becomes a deep space explorer. Except for Explorer 1 and Explorer 3 and the earlier satellites in the series, usually if a probe is going to be a deep space explorer it leaves the Earth or an orbit around the Earth. For a satellite or space probe to qualify as a deep space X-ray astronomer/explorer or "astronobot"/explorer, all it needs to carry aboard is an XRT or X-ray detector and leave Earth's orbit. Ulysses was launched October 6, 1990, and reached Jupiter for its "gravitational slingshot" in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. The solar X-ray and cosmic gamma-ray burst experiment (GRB) had 3 main objectives: study and monitor solar flares, detect and localize cosmic gamma-ray bursts, and in-situ detection of Jovian aurorae. Ulysses was the first satellite carrying a gamma burst detector which went outside the orbit of Mars. The hard X-ray detectors operated in the range 15–150 keV. The detectors consisted of 23-mm thick × 51-mm diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector changed its operating mode depending on (1) measured count rate, (2) ground command, or (3) change in spacecraft telemetry mode. The trigger level was generally set for 8-sigma above background and the sensitivity is 10−6 erg/cm2 (1 nJ/m2). When a burst trigger is recorded, the instrument switches to record high resolution data, recording it to a 32-kbit memory for a slow telemetry read out. Burst data consist of either 16 s of 8-ms resolution count rates or 64 s of 32-ms count rates from the sum of the 2 detectors. There were also 16 channel energy spectra from the sum of the 2 detectors (taken either in 1, 2, 4, 16, or 32 second integrations). During 'wait' mode, the data were taken either in 0.25 or 0.5 s integrations and 4 energy channels (with shortest integration time being 8 s). Again, the outputs of the 2 detectors were summed. The Ulysses soft X-ray detectors consisted of 2.5-mm thick × 0.5 cm2 area Si surface barrier detectors. A 100 mg/cm2 beryllium foil front window rejected the low energy X-rays and defined a conical FOV of 75° (half-angle). 
These detectors were passively cooled and operate in the temperature range −35 to −55 °C. This detector had 6 energy channels, covering the range 5–20 keV. Theoretical X-ray astronomy Theoretical X-ray astronomy is a branch of theoretical astronomy that deals with the theoretical astrophysics and theoretical astrochemistry of X-ray generation, emission, and detection as applied to astronomical objects. Like theoretical astrophysics, theoretical X-ray astronomy uses a wide variety of tools which include analytical models to approximate the behavior of a possible X-ray source and computational numerical simulations to approximate the observational data. Once potential observational consequences are available they can be compared with experimental observations. Observers can look for data that refutes a model or helps in choosing between several alternate or conflicting models. Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model. Most of the topics in astrophysics, astrochemistry, astrometry, and other fields that are branches of astronomy studied by theoreticians involve X-rays and X-ray sources. Many of the beginnings for a theory can be found in an Earth-based laboratory where an X-ray source is built and studied. Dynamos Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field. This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. If some of the stellar magnetic fields are really induced by dynamos, then field strength might be associated with rotation rate. Astronomical models From the observed X-ray spectrum, combined with spectral emission results for other wavelength ranges, an astronomical model addressing the likely source of X-ray emission can be constructed. For example, with Scorpius X-1 the X-ray spectrum steeply drops off as X-ray energy increases up to 20 keV, which is likely for a thermal-plasma mechanism. In addition, there is no radio emission, and the visible continuum is roughly what would be expected from a hot plasma fitting the observed X-ray flux. The plasma could be a coronal cloud of a central object or a transient plasma, where the energy source is unknown, but could be related to the idea of a close binary. In the Crab Nebula X-ray spectrum there are three features that differ greatly from Scorpius X-1: its spectrum is much harder, its source diameter is in light-years (ly)s, not astronomical units (AU), and its radio and optical synchrotron emission are strong. Its overall X-ray luminosity rivals the optical emission and could be that of a nonthermal plasma. However, the Crab Nebula appears as an X-ray source that is a central freely expanding ball of dilute plasma, where the energy content is 100 times the total energy content of the large visible and radio portion, obtained from the unknown source. The "Dividing Line" as giant stars evolve to become red giants also coincides with the Wind and Coronal Dividing Lines. 
To explain the drop in X-ray emission across these dividing lines, a number of models have been proposed: low transition-region densities, leading to low emission in coronae; high-density wind extinction of coronal emission; stabilization of only the cool coronal loops; changes in the magnetic field structure to an open topology, leading to a decrease of magnetically confined plasma; or changes in the character of the magnetic dynamo, leading to the disappearance of stellar fields and leaving only small-scale, turbulence-generated fields among red giants. Analytical X-ray astronomy High-mass X-ray binaries (HMXBs) are composed of OB supergiant companion stars and compact objects, usually neutron stars (NS) or black holes (BH). Supergiant X-ray binaries (SGXBs) are HMXBs in which the compact objects orbit massive companions with orbital periods of a few days (3–15 d), and in circular (or slightly eccentric) orbits. SGXBs typically show the hard X-ray spectra of accreting pulsars, and most show strong absorption as obscured HMXBs. X-ray luminosity (Lx) increases up to 1036 erg·s−1 (1029 watts). The mechanism triggering the different temporal behavior observed between the classical SGXBs and the recently discovered supergiant fast X-ray transients (SFXTs) is still debated. Stellar X-ray astronomy The first detection of stellar X-rays occurred on April 5, 1974, with the observation of X-rays from Capella. A rocket flight on that date briefly calibrated its attitude control system when a star sensor pointed the payload axis at Capella (α Aur). During this period, X-rays in the range 0.2–1.6 keV were detected by an X-ray reflector system co-aligned with the star sensor. The X-ray luminosity of Lx = 1031 erg·s−1 (1024 W) is four orders of magnitude above the Sun's X-ray luminosity. Stellar coronae Coronal stars, or stars within a coronal cloud, are ubiquitous among the stars in the cool half of the Hertzsprung-Russell diagram. Experiments with instruments aboard Skylab and Copernicus were used to search for soft X-ray emission in the energy range ~0.14–0.284 keV from stellar coronae. The experiments aboard ANS succeeded in finding X-ray signals from Capella and Sirius (α CMa), and X-ray emission from an enhanced solar-like corona was proposed for the first time. The high temperature of Capella's corona, as obtained from the first coronal X-ray spectrum of Capella using HEAO 1, required magnetic confinement unless it was a free-flowing coronal wind. In 1977, Proxima Centauri was discovered to be emitting high-energy radiation in the XUV. In 1978, α Cen was identified as a low-activity coronal source. With the operation of the Einstein observatory, X-ray emission was recognized as a characteristic feature common to a wide range of stars covering essentially the whole Hertzsprung-Russell diagram. The initial Einstein survey led to significant insights: X-ray sources abound among all types of stars, across the Hertzsprung-Russell diagram and across most stages of evolution; the X-ray luminosities and their distribution along the main sequence were not in agreement with the long-favored acoustic heating theories, but were instead interpreted as the effect of magnetic coronal heating; and stars that are otherwise similar reveal large differences in their X-ray output if their rotation periods are different. To fit the medium-resolution spectrum of UX Arietis, subsolar abundances were required.
Stellar X-ray astronomy is contributing toward a deeper understanding of magnetic fields in magnetohydrodynamic dynamos, the release of energy in tenuous astrophysical plasmas through various plasma-physical processes, and the interactions of high-energy radiation with the stellar environment. Current wisdom has it that the massive coronal main sequence stars are late-A or early F stars, a conjecture that is supported both by observation and by theory. Young, low-mass stars Newly formed stars are known as pre-main-sequence stars during the stage of stellar evolution before they reach the main-sequence. Stars in this stage (ages <10 million years) produce X-rays in their stellar coronae. However, their X-ray emission is 103 to 105 times stronger than for main-sequence stars of similar masses. X-ray emission for pre–main-sequence stars was discovered by the Einstein Observatory. This X-ray emission is primarily produced by magnetic reconnection flares in the stellar coronae, with many small flares contributing to the "quiescent" X-ray emission from these stars. Pre–main sequence stars have large convection zones, which in turn drive strong dynamos, producing strong surface magnetic fields. This leads to the high X-ray emission from these stars, which lie in the saturated X-ray regime, unlike main-sequence stars that show rotational modulation of X-ray emission. Other sources of X-ray emission include accretion hotspots and collimated outflows. X-ray emission as an indicator of stellar youth is important for studies of star-forming regions. Most star-forming regions in the Milky Way Galaxy are projected on Galactic-Plane fields with numerous unrelated field stars. It is often impossible to distinguish members of a young stellar cluster from field-star contaminants using optical and infrared images alone. X-ray emission can easily penetrate moderate absorption from molecular clouds, and can be used to identify candidate cluster members. Unstable winds Given the lack of a significant outer convection zone, theory predicts the absence of a magnetic dynamo in earlier A stars. In early stars of spectral type O and B, shocks developing in unstable winds are the likely source of X-rays. Coolest M dwarfs Beyond spectral type M5, the classical αω dynamo can no longer operate as the internal structure of dwarf stars changes significantly: they become fully convective. As a distributed (or α2) dynamo may become relevant, both the magnetic flux on the surface and the topology of the magnetic fields in the corona should systematically change across this transition, perhaps resulting in some discontinuities in the X-ray characteristics around spectral class dM5. However, observations do not seem to support this picture: long-time lowest-mass X-ray detection, VB 8 (M7e V), has shown steady emission at levels of X-ray luminosity (LX) ≈ 1026 erg·s−1 (1019 W) and flares up to an order of magnitude higher. Comparison with other late M dwarfs shows a rather continuous trend. Strong X-ray emission from Herbig Ae/Be stars Herbig Ae/Be stars are pre-main sequence stars. As to their X-ray emission properties, some are reminiscent of hot stars, others point to coronal activity as in cool stars, in particular the presence of flares and very high temperatures. 
The nature of these strong emissions has remained controversial with models including unstable stellar winds, colliding winds, magnetic coronae, disk coronae, wind-fed magnetospheres, accretion shocks, the operation of a shear dynamo, the presence of unknown late-type companions. K giants The FK Com stars are giants of spectral type K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous (LX ≥ 1032 erg·s−1 or 1025 W) and the hottest known with dominant temperatures up to 40 MK. However, the current popular hypothesis involves a merger of a close binary system in which the orbital angular momentum of the companion is transferred to the primary. Pollux is the brightest star in the constellation Gemini, despite its Beta designation, and the 17th brightest in the sky. Pollux is a giant orange K star that makes an interesting color contrast with its white "twin", Castor. Evidence has been found for a hot, outer, magnetically supported corona around Pollux, and the star is known to be an X-ray emitter. Eta Carinae New X-ray observations by the Chandra X-ray Observatory show three distinct structures: an outer, horseshoe-shaped ring about 2 light years in diameter, a hot inner core about 3 light-months in diameter, and a hot central source less than 1 light-month in diameter which may contain the superstar that drives the whole show. The outer ring provides evidence of another large explosion that occurred over 1,000 years ago. These three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. "The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays," says Prof. Kris Davidson of the University of Minnesota. Davidson is principal investigator for the Eta Carina observations by the Hubble Space Telescope. "In the most popular theory, X-rays are made by colliding gas streams from two stars so close together that they'd look like a point source to us. But what happens to gas streams that escape to farther distances? The extended hot stuff in the middle of the new image gives demanding new conditions for any theory to meet." Amateur X-ray astronomy Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes with equipment that they build themselves. The United States Air Force Academy (USAFA) is the home of the US's only undergraduate satellite program, and has and continues to develop the FalconLaunch sounding rockets. In addition to any direct amateur efforts to put X-ray astronomy payloads into space, there are opportunities that allow student-developed experimental payloads to be put on board commercial sounding rockets as a free-of-charge ride. There are major limitations to amateurs observing and reporting experiments in X-ray astronomy: the cost of building an amateur rocket or balloon to place a detector high enough and the cost of appropriate parts to build a suitable X-ray detector. Major questions in X-ray astronomy As X-ray astronomy uses a major spectral probe to peer into the source, it is a valuable tool in efforts to understand many puzzles. 
Stellar magnetic fields Magnetic fields are ubiquitous among stars, yet we do not understand precisely why, nor have we fully understood the bewildering variety of plasma physical mechanisms that act in stellar environments. Some stars, for example, seem to have magnetic fields, fossil stellar magnetic fields left over from their period of formation, while others seem to generate the field anew frequently. Extrasolar X-ray source astrometry With the initial detection of an extrasolar X-ray source, the first question usually asked is "What is the source?" An extensive search is often made in other wavelengths such as visible or radio for possible coincident objects. Many of the verified X-ray locations still do not have readily discernible sources. X-ray astrometry becomes a serious concern that results in ever greater demands for finer angular resolution and spectral radiance. There are inherent difficulties in making X-ray/optical, X-ray/radio, and X-ray/X-ray identifications based solely on positional coincidents, especially with handicaps in making identifications, such as the large uncertainties in positional determinants made from balloons and rockets, poor source separation in the crowded region toward the galactic center, source variability, and the multiplicity of source nomenclature. X‐ray source counterparts to stars can be identified by calculating the angular separation between source centroids and the position of the star. The maximum allowable separation is a compromise between a larger value to identify as many real matches as possible and a smaller value to minimize the probability of spurious matches. "An adopted matching criterion of 40" finds nearly all possible X‐ray source matches while keeping the probability of any spurious matches in the sample to 3%." Solar X-ray astronomy All of the detected X-ray sources at, around, or near the Sun appear to be associated with processes in the corona, which is its outer atmosphere. Coronal heating problem In the area of solar X-ray astronomy, there is the coronal heating problem. The photosphere of the Sun has an effective temperature of 5,570 K yet its corona has an average temperature of 1–2 × 106 K. However, the hottest regions are 8–20 × 106 K. The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere. It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events—nanoflares. Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms. 
Coronal mass ejection A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entraining coronal closed magnetic field regions. Evolution of these closed magnetic structures in response to various photospheric motions over different time scales (convection, differential rotation, meridional circulation) somehow leads to the CME. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs. The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production. "Relating the sigmoids at X-ray (and other) wavelengths to magnetic structures and current systems in the solar atmosphere is the key to understanding their relationship to CMEs." The first detection of a Coronal mass ejection (CME) as such was made on December 1, 1971, by R. Tousey of the US Naval Research Laboratory using OSO 7. Earlier observations of coronal transients or even phenomena observed visually during solar eclipses are now understood as essentially the same thing. The largest geomagnetic perturbation, resulting presumably from a "prehistoric" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens. The same instrument recorded a crotchet, an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays (by Roentgen) and the recognition of the ionosphere (by Kennelly and Heaviside). Exotic X-ray sources A microquasar is a smaller cousin of a quasar that is a radio emitting X-ray binary, with an often resolvable pair of radio jets. LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source, CG135+01. Observations are revealing a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays. X-ray dark stars During the solar cycle, as shown in the sequence of images at right, at times the Sun is almost X-ray dark, almost an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. Hardly any X-rays are emitted by red giants. There is a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F. Altair is spectral type A7V and Vega is A0V. Altair's total X-ray luminosity is at least an order of magnitude larger than the X-ray luminosity for Vega. 
The outer convection zone of early F stars is expected to be very shallow and absent in A-type dwarfs, yet the acoustic flux from the interior reaches a maximum for late-A and early-F stars, provoking investigations of magnetic activity in A-type stars along three principal lines. Chemically peculiar stars of spectral type Bp or Ap are appreciable magnetic radio sources, but most Bp/Ap stars remain undetected, and of those reported early on as producing X-rays, only a few can be identified as probably single stars. X-ray dark planets and comets X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area." As X-ray detectors have become more sensitive, they have observed that some planets and other normally X-ray non-luminescent celestial objects under certain conditions emit, fluoresce, or reflect X-rays. Comet Lulin NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to within 63 Gm of Earth. For the first time, astronomers could see simultaneous UV and X-ray images of a comet. "The solar wind—a fast-moving stream of particles from the sun—interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin was so active, its atomic cloud was especially dense. As a result, the X-ray-emitting region extended far sunward of the comet.
Physical sciences
High-energy astronomy
Astronomy
44063
https://en.wikipedia.org/wiki/Extragalactic%20astronomy
Extragalactic astronomy
Extragalactic astronomy is the branch of astronomy concerned with objects outside the Milky Way galaxy. In other words, it is the study of all astronomical objects which are not covered by galactic astronomy. The closest objects in extragalactic astronomy include the galaxies of the Local Group, which are close enough to allow very detailed analyses of their contents (e.g. supernova remnants, stellar associations). As instrumentation has improved, distant objects can now be examined in more detail, and so extragalactic astronomy includes objects at nearly the edge of the observable universe. Research into distant galaxies (outside of our Local Group) is valuable for studying aspects of the universe such as galaxy evolution and Active Galactic Nuclei (AGN), which give insight into physical phenomena (e.g. supermassive black hole accretion and the presence of dark matter). It is through extragalactic astronomy that astronomers and physicists are able to study the effects of general relativity, such as gravitational lensing and gravitational waves, which are otherwise impossible (or nearly impossible) to study on a galactic scale. A key interest in extragalactic astronomy is the study of how galaxies behave and interact throughout the universe. Astronomers' methodologies range from theoretical to observation-based methods. Galaxies form in various ways. In most cosmological N-body simulations, the earliest galaxies in the cosmos formed in the first hundreds of millions of years. These primordial galaxies formed as the enormous reservoirs of gas and dust in the early universe collapsed in on themselves, giving birth to the first stars, now known as Population III stars. These stars were of enormous masses, in the range of 300 to perhaps 3 million solar masses. Due to their large mass, these stars had extremely short lifespans. Famous examples Hubble Deep Field LIGO's detection of gravitational waves Chandra Deep Field South Topics Active Galactic Nuclei (AGN), Quasars Dark Matter Galaxy clusters, Superclusters Intergalactic stars Intergalactic dust the observable universe Radio galaxies Supernovae Extragalactic planet
Physical sciences
Basics_2
Astronomy
44125
https://en.wikipedia.org/wiki/Gyroscope
Gyroscope
A gyroscope (from Ancient Greek γῦρος gŷros, "round" and σκοπέω skopéō, "to look") is a device used for measuring or maintaining orientation and angular velocity. It is a spinning wheel or disc in which the axis of rotation (spin axis) is free to assume any orientation by itself. When rotating, the orientation of this axis is unaffected by tilting or rotation of the mounting, according to the conservation of angular momentum. Gyroscopes based on other operating principles also exist, such as the microchip-packaged MEMS gyroscopes found in electronic devices (sometimes called gyrometers), solid-state ring lasers, fibre optic gyroscopes, and the extremely sensitive quantum gyroscope. Applications of gyroscopes include inertial navigation systems, such as in the Hubble Space Telescope, or inside the steel hull of a submerged submarine. Due to their precision, gyroscopes are also used in gyrotheodolites to maintain direction in tunnel mining. Gyroscopes can be used to construct gyrocompasses, which complement or replace magnetic compasses (in ships, aircraft and spacecraft, vehicles in general), to assist in stability (bicycles, motorcycles, and ships) or be used as part of an inertial guidance system. MEMS gyroscopes are popular in some consumer electronics, such as smartphones. Description and diagram A gyroscope is an instrument, consisting of a wheel mounted into two or three gimbals providing pivoted supports, for allowing the wheel to rotate about a single axis. A set of three gimbals, one mounted on the other with orthogonal pivot axes, may be used to allow a wheel mounted on the innermost gimbal to have an orientation remaining independent of the orientation, in space, of its support. In the case of a gyroscope with two gimbals, the outer gimbal, which is the gyroscope frame, is mounted so as to pivot about an axis in its own plane determined by the support. This outer gimbal possesses one degree of rotational freedom and its axis possesses none. The second gimbal, inner gimbal, is mounted in the gyroscope frame (outer gimbal) so as to pivot about an axis in its own plane that is always perpendicular to the pivotal axis of the gyroscope frame (outer gimbal). This inner gimbal has two degrees of rotational freedom. The axle of the spinning wheel (the rotor) defines the spin axis. The rotor is constrained to spin about an axis, which is always perpendicular to the axis of the inner gimbal. So the rotor possesses three degrees of rotational freedom and its axis possesses two. The rotor responds to a force applied to the input axis by a reaction force to the output axis. A gyroscope flywheel will roll or resist about the output axis depending upon whether the output gimbals are of a free or fixed configuration. An example of some free-output-gimbal devices is the attitude control gyroscopes used to sense or measure the pitch, roll and yaw attitude angles in a spacecraft or aircraft. The centre of gravity of the rotor can be in a fixed position. The rotor simultaneously spins about one axis and is capable of oscillating about the two other axes, and it is free to turn in any direction about the fixed point (except for its inherent resistance caused by rotor spin). Some gyroscopes have mechanical equivalents substituted for one or more of the elements. For example, the spinning rotor may be suspended in a fluid, instead of being mounted in gimbals. 
A control moment gyroscope (CMG) is an example of a fixed-output-gimbal device that is used on spacecraft to hold or maintain a desired attitude angle or pointing direction using the gyroscopic resistance force. In some special cases, the outer gimbal (or its equivalent) may be omitted so that the rotor has only two degrees of freedom. In other cases, the centre of gravity of the rotor may be offset from the axis of oscillation, and thus the centre of gravity of the rotor and the centre of suspension of the rotor may not coincide. History Early similar devices Essentially, a gyroscope is a top combined with a pair of gimbals. Tops were invented in many different civilizations, including classical Greece, Rome, and China. Most of these were not utilized as instruments. The first known apparatus similar to a gyroscope (the "Whirling Speculum" or "Serson's Speculum") was invented by John Serson in 1743. It was used as a level, to locate the horizon in foggy or misty conditions. The first instrument used more like an actual gyroscope was made by Johann Bohnenberger of Germany, who first wrote about it in 1817. At first he called it the "Machine". Bohnenberger's machine was based on a rotating massive sphere. In 1832, American Walter R. Johnson developed a similar device that was based on a rotating disc. The French mathematician Pierre-Simon Laplace, working at the École Polytechnique in Paris, recommended the machine for use as a teaching aid, and thus it came to the attention of Léon Foucault. Foucault's gyroscope In 1852, Foucault used it in an experiment demonstrating the rotation of the Earth. It was Foucault who gave the device its modern name, in an experiment to see (Greek skopeein, to see) the Earth's rotation (Greek gyros, circle or rotation), which was visible in the 8 to 10 minutes before friction slowed the spinning rotor. Commercialization In the 1860s, the advent of electric motors made it possible for a gyroscope to spin indefinitely; this led to the first prototype heading indicators, and a rather more complicated device, the gyrocompass. The first functional gyrocompass was patented in 1904 by German inventor Hermann Anschütz-Kaempfe. American Elmer Sperry followed with his own design later that year, and other nations soon realized the military importance of the invention—in an age in which naval prowess was the most significant measure of military power—and created their own gyroscope industries. The Sperry Gyroscope Company quickly expanded to provide aircraft and naval stabilizers as well, and other gyroscope developers followed suit. Circa 1911 the L. T. Hurst Mfg Co of Indianapolis started producing the "Hurst gyroscope" a toy gyroscope with a pull string and pedestal. Manufacture was at some point switched to Chandler Mfg Co (still branded Hurst). The product was later renamed to a “Chandler gyroscope”, presumably because Chandler Mfg Co. took over rights to the gyroscope. Chandler continued to produce the toy until the company was purchased by TEDCO Inc. in 1982. The gyroscope is still produced by TEDCO today. In the first several decades of the 20th century, other inventors attempted (unsuccessfully) to use gyroscopes as the basis for early black box navigational systems by creating a stable platform from which accurate acceleration measurements could be performed (in order to bypass the need for star sightings to calculate position). Similar principles were later employed in the development of inertial navigation systems for ballistic missiles. 
During World War II, the gyroscope became the prime component for aircraft and anti-aircraft gun sights. After the war, the race to miniaturize gyroscopes for guided missiles and weapons navigation systems resulted in the development and manufacturing of so-called midget gyroscopes that weighed less than and had a diameter of approximately . Some of these miniaturized gyroscopes could reach a speed of 24,000 revolutions per minute in less than 10 seconds. Gyroscopes continue to be an engineering challenge. For example, the axle bearings have to be extremely accurate. A small amount of friction is deliberately introduced to the bearings, since otherwise an accuracy of better than one ten-millionth of an inch (2.5 nm) would be required. Three-axis MEMS-based gyroscopes are also used in portable electronic devices such as tablets, smartphones, and smartwatches. This adds to the 3-axis acceleration sensing ability available on previous generations of devices. Together these sensors provide 6-component motion sensing: accelerometers for X, Y, and Z movement, and gyroscopes for measuring the extent and rate of rotation in space (roll, pitch and yaw). Some devices additionally incorporate a magnetometer to provide absolute angular measurements relative to the Earth's magnetic field. Newer MEMS-based inertial measurement units incorporate up to all nine axes of sensing in a single integrated circuit package, providing inexpensive and widely available motion sensing. Gyroscopic principles All spinning objects have gyroscopic properties. The main properties that an object can experience in any gyroscopic motion are rigidity in space and precession. Rigidity in space Rigidity in space describes the principle that a gyroscope remains in the fixed position on the plane in which it is spinning, unaffected by the Earth's rotation; a spinning bicycle wheel is a familiar example. Early forms of gyroscope (not then known by the name) were used to demonstrate the principle. Precession A simple case of precession, also known as steady precession, can be described by the following relation to the moment: ΣM_x = −I ψ̇² sin θ cos θ + I_z ψ̇ sin θ (ψ̇ cos θ + φ̇), where ψ̇ represents the precession rate, φ̇ the spin rate, θ the nutation angle, and I and I_z the moments of inertia about the respective axes. This relation is only valid when the moments along the y and z axes are equal to 0. The equation can be further reduced by noting that the angular velocity along the z-axis is equal to the sum of the precession and the spin: ω_z = ψ̇ cos θ + φ̇, where ω_z represents the angular velocity along the z axis, giving ΣM_x = ψ̇ sin θ (I_z ω_z − I ψ̇ cos θ). Gyroscopic precession is torque induced. It is the rate of change of the angular momentum that is produced by the applied torque. Precession produces counterintuitive dynamic results such as a spinning top not falling over. Precession is used in aerospace applications for sensing changes of attitude and direction. Contemporary uses Steadicam A Steadicam rig was employed during the filming of the 1983 film Return of the Jedi, in conjunction with two gyroscopes for extra stabilization, to film the background plates for the speeder bike chase. Steadicam inventor Garrett Brown operated the shot, walking through a redwood forest, running the camera at one frame per second. When projected at 24 frames per second, it gave the impression of flying through the air at perilous speeds. Heading indicator The heading indicator or directional gyro has an axis of rotation that is set horizontally, pointing north. Unlike a magnetic compass, it does not seek north.
When being used in an airplane, for example, it will slowly drift away from north and will need to be reoriented periodically, using a magnetic compass as a reference. Gyrocompass Unlike a directional gyro or heading indicator, a gyrocompass seeks north. It detects the rotation of the Earth about its axis and seeks the true north, rather than the magnetic north. Gyrocompasses usually have built-in damping to prevent overshoot when re-calibrating from sudden movement. Accelerometer By determining an object's acceleration and integrating over time, the velocity of the object can be calculated. Integrating again, position can be determined. The simplest accelerometer is a weight that is free to move horizontally, which is attached to a spring and a device to measure the tension in the spring. This can be improved by introducing a counteracting force to push the weight back and to measure the force needed to prevent the weight from moving. A more complicated design consists of a gyroscope with a weight on one of the axes. The device will react to the force generated by the weight when it is accelerated, by integrating that force to produce a velocity. Variations Gyrostat A gyrostat consists of a massive flywheel concealed in a solid casing. Its behaviour on a table, or with various modes of suspension or support, serves to illustrate the curious reversal of the ordinary laws of static equilibrium due to the gyrostatic behaviour of the interior invisible flywheel when rotated rapidly. The first gyrostat was designed by Lord Kelvin to illustrate the more complicated state of motion of a spinning body when free to wander about on a horizontal plane, like a top spun on the pavement, or a bicycle on the road. Kelvin also made use of gyrostats to develop mechanical theories of the elasticity of matter and of the ether. In modern continuum mechanics there is a variety of these models, based on ideas of Lord Kelvin. They represent a specific type of Cosserat theories (suggested for the first time by Eugène Cosserat and François Cosserat), which can be used for description of artificially made smart materials as well as of other complex media. One of them, so-called Kelvin's medium, has the same equations as magnetic insulators near the state of magnetic saturation in the approximation of quasimagnetostatics. In modern times, the gyrostat concept is used in the design of attitude control systems for orbiting spacecraft and satellites. For instance, the Mir space station had three pairs of internally mounted flywheels known as gyrodynes or control moment gyroscopes. In physics, there are several systems whose dynamical equations resemble the equations of motion of a gyrostat. Examples include a solid body with a cavity filled with an inviscid, incompressible, homogeneous liquid, the static equilibrium configuration of a stressed elastic rod in elastica theory, the polarization dynamics of a light pulse propagating through a nonlinear medium, the Lorenz system in chaos theory, and the motion of an ion in a Penning trap mass spectrometer. MEMS gyroscope A microelectromechanical systems (MEMS) gyroscope is a miniaturized gyroscope found in electronic devices. It takes the idea of the Foucault pendulum and uses a vibrating element. This kind of gyroscope was first used in military applications but has since been adopted for increasing commercial use. 
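The accelerometer description above amounts to dead reckoning by double integration. The following is a minimal sketch of that idea, not taken from the source; the sampling rate and the constant acceleration profile are assumed example values.

    # Minimal sketch (assumption, not from the source): recovering velocity and
    # position from sampled acceleration by cumulative trapezoidal integration,
    # as an idealised accelerometer-based navigation system would.
    def integrate(samples, dt, initial=0.0):
        """Cumulative trapezoidal integral of evenly spaced samples."""
        result = [initial]
        for a0, a1 in zip(samples, samples[1:]):
            result.append(result[-1] + 0.5 * (a0 + a1) * dt)
        return result

    dt = 0.01                           # 100 Hz sampling (assumed)
    accel = [1.0] * 101                 # constant 1 m/s^2 for one second (assumed)
    velocity = integrate(accel, dt)     # ends near 1 m/s
    position = integrate(velocity, dt)  # ends near 0.5 m
    print(velocity[-1], position[-1])

In a real instrument, sensor bias makes the doubly integrated position drift quadratically with time, which is why external references such as star sightings were historically needed.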
HRG The hemispherical resonator gyroscope (HRG), also called a wine-glass gyroscope or mushroom gyro, makes use of a thin solid-state hemispherical shell, anchored by a thick stem. This shell is driven to a flexural resonance by electrostatic forces generated by electrodes which are deposited directly onto separate fused-quartz structures that surround the shell. Gyroscopic effect is obtained from the inertial property of the flexural standing waves. VSG or CVG A vibrating structure gyroscope (VSG), also called a Coriolis vibratory gyroscope (CVG), uses a resonator made of different metallic alloys. It takes a position between the low-accuracy, low-cost MEMS gyroscope and the higher-accuracy and higher-cost fiber optic gyroscope. Accuracy parameters are increased by using low-intrinsic damping materials, resonator vacuumization, and digital electronics to reduce temperature dependent drift and instability of control signals. High quality wine-glass resonators are used for precise sensors like HRG. DTG A dynamically tuned gyroscope (DTG) is a rotor suspended by a universal joint with flexure pivots. The flexure spring stiffness is independent of spin rate. However, the dynamic inertia (from the gyroscopic reaction effect) from the gimbal provides negative spring stiffness proportional to the square of the spin speed (Howe and Savet, 1964; Lawrence, 1998). Therefore, at a particular speed, called the tuning speed, the two moments cancel each other, freeing the rotor from torque, a necessary condition for an ideal gyroscope. Ring laser gyroscope A ring laser gyroscope relies on the Sagnac effect to measure rotation by measuring the shifting interference pattern of a beam split into two separate beams which travel around the ring in opposite directions. When the Boeing 757-200 entered service in 1983, it was equipped with the first suitable ring laser gyroscope. This gyroscope took many years to develop, and the experimental models went through many changes before it was deemed ready for production by the engineers and managers of Honeywell and Boeing. It was an outcome of the competition with mechanical gyroscopes, which kept improving. The reason Honeywell, of all companies, chose to develop the laser gyro was that they were the only one that didn't have a successful line of mechanical gyroscopes, so they wouldn't be competing against themselves. The first problem they had to solve was that with laser gyros rotations below a certain minimum could not be detected at all, due to a problem called "lock-in", whereby the two beams act like coupled oscillators and pull each other's frequencies toward convergence and therefore zero output. The solution was to shake the gyro rapidly so that it never settled into lock-in. Paradoxically, too regular of a dithering motion produced an accumulation of short periods of lock-in when the device was at rest at the extremities of its shaking motion. This was cured by applying a random white noise to the vibration. The material of the block was also changed from quartz to a new glass ceramic Cer-Vit, made by Owens Corning, because of helium leaks. Fiber optic gyroscope A fiber optic gyroscope also uses the interference of light to detect mechanical rotation. The two-halves of the split beam travel in opposite directions in a coil of fiber optic cable as long as 5 km. Like the ring laser gyroscope, it makes use of the Sagnac effect. 
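Both the ring laser and fibre optic gyroscopes above rely on the Sagnac effect. As a rough, hedged illustration (not from the source), the sketch below evaluates the commonly quoted Sagnac phase shift Δφ = 8πNAΩ/(λc) for an assumed fibre coil; the geometry and the rotation rate are example values.

    import math

    # Illustrative sketch (assumed values, not from the source): Sagnac phase
    # shift for a fibre-optic gyroscope sensing the Earth's rotation rate.
    c = 299_792_458.0            # speed of light, m/s
    wavelength = 1.55e-6         # typical telecom wavelength, m (assumed)
    radius = 0.04                # coil radius, m (assumed)
    fibre_length = 1000.0        # total fibre length, m (assumed)

    turns = fibre_length / (2 * math.pi * radius)
    area = math.pi * radius ** 2            # area enclosed by one turn
    omega = math.radians(15.0) / 3600.0     # Earth's rotation, ~15 deg/h in rad/s

    delta_phi = 8 * math.pi * turns * area * omega / (wavelength * c)
    print(f"{turns:.0f} turns -> Sagnac phase shift of {delta_phi:.2e} rad")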
London moment A London moment gyroscope relies on the quantum-mechanical phenomenon, whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis of the gyroscopic rotor. A magnetometer determines the orientation of the generated field, which is interpolated to determine the axis of rotation. Gyroscopes of this type can be extremely accurate and stable. For example, those used in the Gravity Probe B experiment measured changes in gyroscope spin axis orientation to better than 0.5 milliarcseconds (1.4 × 10⁻⁷ degrees, or about 2.4 × 10⁻⁹ radians) over a one-year period. This is equivalent to an angular separation the width of a human hair viewed from away. The GP-B gyro consists of a nearly-perfect spherical rotating mass made of fused quartz, which provides a dielectric support for a thin layer of niobium superconducting material. To eliminate friction found in conventional bearings, the rotor assembly is centered by the electric field from six electrodes. After the initial spin-up by a jet of helium which brings the rotor to 4,000 RPM, the polished gyroscope housing is evacuated to an ultra-high vacuum to further reduce drag on the rotor. Provided the suspension electronics remain powered, the extreme rotational symmetry, lack of friction, and low drag will allow the angular momentum of the rotor to keep it spinning for about 15,000 years. A sensitive DC SQUID that can discriminate changes as small as one quantum, or about 2 × 10⁻¹⁵ Wb, is used to monitor the gyroscope. A precession, or tilt, in the orientation of the rotor causes the London moment magnetic field to shift relative to the housing. The moving field passes through a superconducting pickup loop fixed to the housing, inducing a small electric current. The current produces a voltage across a shunt resistance, which is resolved to spherical coordinates by a microprocessor. The system is designed to minimize Lorentz torque on the rotor. Other examples Helicopters The main rotor of a helicopter acts like a gyroscope. Its motion is influenced by the principle of gyroscopic precession, which is the concept that a force applied to a spinning object will have a maximum reaction approximately 90 degrees later. The reaction may differ from 90 degrees when other stronger forces are in play. To change direction, helicopters must adjust the pitch angle and the angle of attack. Gyro X The Gyro X was a prototype vehicle created by Alex Tremulis and Thomas Summers in 1967. The car utilized gyroscopic precession to drive on two wheels. An assembly consisting of a flywheel mounted in a gimbal housing under the hood of the vehicle acted as a large gyroscope. The flywheel was rotated by hydraulic pumps creating a gyroscopic effect on the vehicle. A precessional ram was responsible for rotating the gyroscope to change the direction of the precessional force to counteract any forces causing vehicle imbalance. The one-of-a-kind prototype is now at the Lane Motor Museum in Nashville, Tennessee. Consumer electronics In addition to being used in compasses, aircraft, computer pointing devices, etc., gyroscopes have been introduced into consumer electronics. Since the gyroscope allows the calculation of orientation and rotation, designers have incorporated them into modern technology. The integration of the gyroscope has allowed for more accurate recognition of movement within a 3D space than a lone accelerometer could provide in a number of earlier smartphones.
Gyroscopes in consumer electronics are frequently combined with accelerometers for more robust direction- and motion-sensing. Examples of such applications include smartphones such as the Samsung Galaxy Note 4, HTC Titan, Nexus 5, iPhone 5s, Nokia 808 PureView and Sony Xperia, game console peripherals such as the PlayStation 3 controller and the Wii Remote, and virtual reality headsets such as the Oculus Rift. Some features of Android phones, such as PhotoSphere, 360 Camera, and VR applications, do not work without a gyroscope sensor in the phone. Nintendo has integrated a gyroscope into the Wii console's Wii Remote controller via an additional piece of hardware called "Wii MotionPlus". It is also included in the 3DS, Wii U GamePad, and Nintendo Switch Joy-Con and Pro controllers, which detect movement when turning and shaking. Cruise ships use gyroscopes to level motion-sensitive devices such as self-leveling pool tables. An electrically powered flywheel gyroscope inserted in a bicycle wheel is sold as an alternative to training wheels.
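The pairing of gyroscopes and accelerometers described above is usually done in software with some form of sensor fusion. The sketch below is a minimal complementary filter, an assumption for illustration rather than any particular device's implementation; the sample rate, gyro bias and blending factor are made-up values.

    import math

    # Minimal complementary-filter sketch (assumption, not from the source):
    # the gyroscope tracks fast rotation, while the accelerometer's gravity
    # vector slowly corrects the drift that integrating a biased gyro causes.
    def fuse_pitch(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
        gyro_pitch = pitch_prev + gyro_rate * dt       # integrate angular rate
        accel_pitch = math.atan2(accel_x, accel_z)     # gravity-referenced angle
        return alpha * gyro_pitch + (1 - alpha) * accel_pitch

    pitch, dt = 0.0, 0.01
    for _ in range(500):   # 5 s of synthetic data: device at rest, biased gyro
        pitch = fuse_pitch(pitch, gyro_rate=0.01, accel_x=0.0, accel_z=9.81, dt=dt)
    print(f"estimated pitch after 5 s: {pitch:.4f} rad (unfiltered drift would be 0.05 rad)")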
Technology
Navigation
null
44142
https://en.wikipedia.org/wiki/Metric%20system
Metric system
The metric system is a system of measurement that standardizes a set of base units and a nomenclature for describing relatively large and small quantities via decimal-based multiplicative unit prefixes. Though the rules governing the metric system have changed over time, the modern definition, the International System of Units (SI), defines the metric prefixes and seven base units: metre (m), kilogram (kg), second (s), ampere (A), kelvin (K), mole (mol), and candela (cd). An SI derived unit is a named combination of base units such as hertz (cycles per second), newton (kg⋅m/s²), and tesla (kg⋅s⁻²⋅A⁻¹), and, in the case of the degree Celsius, a scale shifted from the kelvin. Certain units have been officially accepted for use with the SI. Some of these are decimalised, like the litre and electronvolt, and are considered "metric". Others, like the astronomical unit, are not. Ancient non-metric but SI-accepted multiples of time, the minute and hour, are base 60 (sexagesimal). Similarly, the angular measure degree and its submultiples, the arcminute and arcsecond, are also sexagesimal and SI-accepted. The SI system derives from the older metre, kilogram, second (MKS) system of units, though the definition of the base units has evolved over time. Today, all base units are defined in terms of physical constants rather than by physical artefacts, as they were in the past. Other metric system variants include the centimetre–gram–second system of units, the metre–tonne–second system of units, and the gravitational metric system. Each has unaffiliated metric units. Some of these systems are still used in limited contexts. Adoption The SI system has been adopted as the official system of weights and measures by most countries in the world. A notable outlier is the United States (US). Although used in some contexts, the US has resisted full adoption, continuing to use "a conglomeration of basically incoherent measurement systems". Adopting the metric system is known as metrication. Multiplicative prefixes In the SI system and generally in older metric systems, multiples and fractions of a unit can be described via a prefix on a unit name that implies a decimal (base-10) multiplicative factor. The only exceptions are for the SI-accepted units of time (minute and hour) and angle (degree, arcminute, arcsecond) which, based on ancient convention, use base-60 multipliers. The prefix kilo, for example, implies a factor of 1000 (10³), and the prefix milli implies a factor of 1/1000 (10⁻³). Thus, a kilometre is a thousand metres, and a milligram is one thousandth of a gram. These relations can be written symbolically as 1 km = 10³ m and 1 mg = 10⁻³ g. Base units The decimalised system is based on the metre, which had been introduced in France in the 1790s. The historical development of these systems culminated in the definition of the International System of Units (SI) in the mid-20th century, under the oversight of an international standards body. The historical evolution of metric systems has resulted in the recognition of several principles. A set of independent dimensions of nature is selected, in terms of which all natural quantities can be expressed, called base quantities. For each of these dimensions, a representative quantity is defined as a base unit of measure. The definition of base units has increasingly been realised in terms of fundamental natural phenomena, in preference to copies of physical artefacts.
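As a small, hedged illustration of the decimal prefixes described above (not part of the source, and covering only a handful of prefixes), a prefix can be treated as a pure power-of-ten factor:

    # Illustrative sketch (not from the source): a few SI prefixes as powers of
    # ten, and a helper that rewrites a prefixed quantity in the base unit.
    PREFIXES = {"giga": 9, "mega": 6, "kilo": 3, "": 0,
                "centi": -2, "milli": -3, "micro": -6, "nano": -9}

    def to_base_units(value, prefix):
        return value * 10 ** PREFIXES[prefix]

    print(to_base_units(1, "kilo"))    # 1 kilometre -> 1000 metres
    print(to_base_units(1, "milli"))   # 1 milligram -> 0.001 grams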
A unit derived from the base units is used for expressing quantities of dimensions that can be derived from the base dimensions of the system—e.g., the square metre is the derived unit for area, which is derived from length. These derived units are coherent, which means that they involve only products of powers of the base units, without any further factors. For any given quantity whose unit has a name and symbol, an extended set of smaller and larger units is defined that are related by factors of powers of ten. The unit of time should be the second; the unit of length should be either the metre or a decimal multiple of it; and the unit of mass should be the gram or a decimal multiple of it. Metric systems have evolved since the 1790s, as science and technology have evolved, to provide a single universal measuring system. Before and in addition to the SI, other metric systems include: the MKS system of units and the MKSA systems, which are the direct forerunners of the SI; the centimetre–gram–second (CGS) system and its subtypes, the CGS electrostatic (cgs-esu) system, the CGS electromagnetic (cgs-emu) system, and their still-popular blend, the Gaussian system; the metre–tonne–second (MTS) system; and the gravitational metric systems, which can be based on either the metre or the centimetre, and either the gram, gram-force, kilogram or kilogram-force. Attributes Ease of learning and use The metric system is intended to be easy to use and widely applicable, including units based on the natural world, decimal ratios, prefixes for multiples and sub-multiples, and a structure of base and derived units. It is a coherent system with derived units built from base units using logical rather than empirical relationships and with multiples and submultiples of both units based on decimal factors and identified by a common set of prefixes. Extensibility The metric system is extensible, since the governing body reviews, modifies and extends it as needs arise. For example, the katal, a derived unit for catalytic activity equivalent to one mole per second (1 mol/s), was added in 1999. Realisation The base units used in a measurement system must be realisable. To that end, the definition of each SI base unit is accompanied by a mise en pratique (practical realisation) that describes at least one way that the unit can be measured. Where possible, definitions of the base units were developed so that any laboratory equipped with proper instruments would be able to realise a standard without reliance on an artefact held by another country. In practice, such realisation is done under the auspices of a mutual acceptance arrangement. In 1791, the commission originally defined the metre based on the size of the Earth, equal to one ten-millionth of the distance from the equator to the North Pole. In the SI, the standard metre is now defined as exactly 1/299,792,458 of the distance that light travels in a second. The metre can be realised by measuring the length that a light wave travels in a given time, or equivalently by measuring the wavelength of light of a known frequency. The kilogram was originally defined as the mass of one cubic decimetre of water at 4 °C, standardised as the mass of a man-made artefact of platinum–iridium held in a laboratory in France, which was used until a new definition was introduced in May 2019. Replicas made in 1879 at the time of the artefact's fabrication and distributed to signatories of the Metre Convention serve as de facto standards of mass in those countries.
Additional replicas have been fabricated since as additional countries have joined the convention. The replicas were subject to periodic validation by comparison to the original, called the IPK. It became apparent that either the IPK or the replicas or both were deteriorating, and are no longer comparable: they had diverged by 50 μg since fabrication, so figuratively, the accuracy of the kilogram was no better than 5 parts in a hundred million, a relative accuracy of about 5 × 10⁻⁸. The revision of the SI replaced the IPK with an exact definition of the Planck constant as expressed in SI units, which defines the kilogram in terms of fundamental constants. Base and derived unit structure A base quantity is one of a conventionally chosen subset of physical quantities, where no quantity in the subset can be expressed in terms of the others. A base unit is a unit adopted for expressing a base quantity. A derived unit is used for expressing any other quantity, and is a product of powers of base units. For example, in the modern metric system, length has the unit metre and time has the unit second, and speed has the derived unit metre per second. Density, or mass per unit volume, has the unit kilogram per cubic metre. Decimal ratios A significant characteristic of the metric system is its use of decimal multiples, i.e. powers of 10. For example, a length that is significantly longer or shorter than 1 metre can be represented in a unit that differs from the metre by a power of 10, such as the kilometre (1000 metres). This differs from many older systems in which the ratio of different units varied. For example, 12 inches is one foot, but the larger unit in the same system, the mile, is not a power of 12 feet; it is 5,280 feet, which is hard for many to remember. In the early days, multipliers that were positive powers of ten were given Greek-derived prefixes such as kilo- and mega-, and those that were negative powers of ten were given Latin-derived prefixes such as centi- and milli-. However, the 1935 extensions to the prefix system did not follow this convention: the prefixes nano- and micro-, for example, have Greek roots. During the 19th century the prefix myria-, derived from the Greek word μύριοι (mýrioi), was used as a multiplier for 10,000 (10⁴). When applying prefixes to derived units of area and volume that are expressed in terms of units of length squared or cubed, the square and cube operators are applied to the unit of length including the prefix, as illustrated below. For the most part, the metric prefixes are used uniformly for SI base, derived and accepted units. A notable exception is that for a large measure of seconds, the non-SI units of minute, hour and day are customary instead. Units of duration longer than a day are problematic since both month and year have a varying number of days. Sub-second measures are often indicated via submultiple prefixes, for example the millisecond. Coherence Each variant of the metric system has a degree of coherence—the derived units are directly related to the base units without the need for intermediate conversion factors. For example, in a coherent system the units of force, energy, and power are chosen so that the equations force = mass × acceleration, energy = force × distance, and energy = power × time hold without the introduction of unit conversion factors. Once a set of coherent units has been defined, other relationships in physics that use this set of units will automatically be true. Therefore, Einstein's mass–energy equation, E = mc², does not require extraneous constants when expressed in coherent units.
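For the point above about prefixes on squared and cubed length units ("as illustrated below"), a minimal numeric sketch, not taken from the source, is:

    # The prefix binds to the unit of length before squaring or cubing, so
    # 1 km^2 = (10**3 m)**2 = 10**6 m^2, not 10**3 m^2.
    kilometre = 10 ** 3                   # metres
    square_kilometre = kilometre ** 2     # 1_000_000 square metres
    cubic_centimetre = (10 ** -2) ** 3    # 1e-06 cubic metres
    print(square_kilometre, cubic_centimetre)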
The CGS system had two units of energy, the erg that was related to mechanics and the calorie that was related to thermal energy; so only one of them (the erg) could bear a coherent relationship to the base units. Coherence was a design aim of SI, which resulted in only one unit of energy being defined – the joule. Rationalisation Maxwell's equations of electromagnetism contained a factor of 4π relating to steradians, representative of the fact that electric charges and magnetic fields may be considered to emanate from a point and propagate equally in all directions, i.e. spherically. This factor made equations more awkward than necessary, and so Oliver Heaviside suggested adjusting the system of units to remove it. Everyday notions The basic units of the metric system have always represented commonplace quantities or relationships in nature, even with modern refinements of definition and methodology. In cases where laboratory precision may not be required or available, or where approximations are good enough, the commonplace notions may suffice. Time The second is readily determined from the Earth's rotation period. Unlike other units, time multiples are not decimal. A second is 1/60 of a minute, which is 1/60 of an hour, which is 1/24 of a day, so a second is 1/86,400 of a day. Length The length of the equator is close to 40,000 km (more precisely 40,075 km). In fact, the dimensions of our planet were used by the French Academy in the original definition of the metre. A dining tabletop is typically about 0.75 metres high. A very tall human is about 2 metres tall. Mass A 1-euro coin weighs 7.5 g; a Sacagawea US 1-dollar coin weighs 8.1 g; a UK 50-pence coin weighs 8.0 g. Temperature In everyday use, Celsius is more commonly used than kelvin; however, a temperature difference of one kelvin is the same as one degree Celsius, which is defined as 1/100 of the temperature difference between the freezing and boiling points of water at sea level. A temperature in kelvin is the temperature in Celsius plus about 273. Human body temperature is about 37 °C or 310 K. Length, mass, volume relationship The mass of a litre of cold water is 1 kilogram. 1 millilitre of water occupies 1 cubic centimetre and weighs 1 gram. Candela and Watt relationship Candela is about the luminous intensity of a moderately bright candle, or 1 candle power. A 60-watt tungsten-filament incandescent light bulb emits a luminous flux of about 800 lumens, which is radiated roughly equally in all directions (i.e. over 4π steradians) and thus corresponds to a luminous intensity of about 64 candela. Watt, Volt and Ampere relationship A 60 W incandescent light bulb consumes 0.5 A at 120 V (US mains voltage). A 60 W bulb rated at 230 V (European mains voltage) consumes 0.26 A at this voltage. This is evident from the formula P = I × V. Mole and mass relationship A mole of a substance has a mass that is its molecular mass expressed in units of grams. The mass of a mole of carbon is 12.0 g, and the mass of a mole of table salt is 58.4 g. Since all gases have the same volume per mole at a given temperature and pressure far from their points of liquefaction and solidification (see Perfect gas), and air is about 1/5 oxygen (molecular mass 32) and 4/5 nitrogen (molecular mass 28), the density of any near-perfect gas relative to air can be obtained to a good approximation by dividing its molecular mass by 29 (because 0.2 × 32 + 0.8 × 28 ≈ 29). For example, carbon monoxide (molecular mass 28) has almost the same density as air. History The French Revolution (1789–99) enabled France to reform its many outdated systems of various local weights and measures.
In 1790, Charles Maurice de Talleyrand-Périgord proposed a new system based on natural units to the French National Assembly, aiming for global adoption. With the United Kingdom not responding to a request to collaborate in the development of the system, the French Academy of Sciences established a commission to implement this new standard alone, and in 1799, the new system was launched in France. A number of different metric systems have been developed, all using the Mètre des Archives and Kilogramme des Archives (or their descendants) as their base units, but differing in the definitions of the various derived units. 19th century In 1832, Gauss used the astronomical second as a base unit in defining the gravitation of the Earth, and together with the milligram and millimetre, this became the first system of mechanical units. He showed that the strength of a magnet could also be quantified in terms of these units, by measuring the oscillations of a magnetised needle and finding the quantity of "magnetic fluid" that produces an acceleration of one unit when applied to a unit mass. The centimetre–gram–second system of units (CGS) was the first coherent metric system, having been developed in the 1860s and promoted by Maxwell and Thomson. In 1874, this system was formally promoted by the British Association for the Advancement of Science (BAAS). The system's characteristics are that density is expressed in g/cm³, force in dynes and mechanical energy in ergs. Thermal energy was defined in calories, one calorie being the energy required to raise the temperature of one gram of water from 15.5 °C to 16.5 °C. The meeting also recognised two sets of units for electrical and magnetic properties – the electrostatic set of units and the electromagnetic set of units. The CGS units of electricity were cumbersome to work with. This was remedied at the 1893 International Electrical Congress held in Chicago by defining the "international" ampere and ohm using definitions based on the metre, kilogram and second, in the International System of Electrical and Magnetic Units. During the same period in which the CGS system was being extended to include electromagnetism, other systems, distinguished by their choice of coherent base units, were developed, including the Practical System of Electric Units, or QES (quad–eleventhgram–second) system. Here, the base units are the quad, equal to 10⁷ m (approximately a quadrant of the Earth's circumference), the eleventhgram, equal to 10⁻¹¹ g, and the second. These were chosen so that the corresponding electrical units of potential difference, current and resistance had a convenient magnitude. 20th century In 1901, Giovanni Giorgi showed that by adding an electrical unit as a fourth base unit, the various anomalies in electromagnetic systems could be resolved. The metre–kilogram–second–coulomb (MKSC) and metre–kilogram–second–ampere (MKSA) systems are examples of such systems. The metre–tonne–second system of units (MTS) was based on the metre, tonne and second – the unit of force was the sthène and the unit of pressure was the pièze. It was invented in France for industrial use and from 1933 to 1955 was used both in France and in the Soviet Union. Gravitational metric systems use the kilogram-force (kilopond) as a base unit of force, with mass measured in a unit known as the hyl, Technische Masseneinheit (TME), mug or metric slug.
Although the CGPM passed a resolution in 1901 defining the standard value of acceleration due to gravity to be 980.665 cm/s², gravitational units are not part of the International System of Units (SI). Current The International System of Units is the modern metric system. It is based on the metre–kilogram–second–ampere (MKSA) system of units from early in the 20th century. It also includes numerous coherent derived units for common quantities like power (watt) and luminous flux (lumen). Electrical units were taken from the International system then in use. Other units, like those for energy (joule), were modelled on those from the older CGS system, but scaled to be coherent with MKSA units. Two additional base units – the kelvin, which is equivalent to the degree Celsius for a change in thermodynamic temperature but set so that 0 K is absolute zero, and the candela, which is roughly equivalent to the international candle unit of illumination – were introduced. Later, another base unit, the mole, a unit of amount of substance equivalent to the Avogadro number of specified molecules, was added along with several other derived units. The system was promulgated by the General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM) in 1960. At that time, the metre was redefined in terms of the wavelength of a spectral line of the krypton-86 atom (krypton-86 being a stable isotope of an inert gas that occurs naturally in trace amounts), and the standard metre artefact from 1889 was retired. Today, the International System of Units consists of 7 base units and innumerable coherent derived units, including 22 with special names. The last new derived unit, the katal for catalytic activity, was added in 1999. All the base units except the second are now defined in terms of exact and invariant constants of physics or mathematics, barring those parts of their definitions which are dependent on the second itself. As a consequence, the speed of light has now become an exactly defined constant, and defines the metre as 1/299,792,458 of the distance light travels in a second. The kilogram was defined by a cylinder of platinum-iridium alloy until a new definition in terms of natural physical constants was adopted in 2019. As of 2022, the range of decimal prefixes has been extended to those for 10³⁰ (quetta–) and 10⁻³⁰ (quecto–).
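As a hedged aside (not from the source), coherence can be made concrete by treating each derived unit as a set of exponents over the base units, so that combining units never introduces numerical factors:

    from collections import Counter

    # Illustrative sketch (not from the source): coherent derived units as plain
    # products of powers of base units.
    def combine(*units):
        total = Counter()
        for unit in units:
            total.update(unit)
        return {k: v for k, v in total.items() if v != 0}

    metre, kilogram, per_second_sq = {"m": 1}, {"kg": 1}, {"s": -2}
    newton = combine(kilogram, metre, per_second_sq)   # kg*m*s^-2
    joule = combine(newton, metre)                     # N*m = kg*m^2*s^-2
    watt = combine(joule, {"s": -1})                   # J/s = kg*m^2*s^-3
    print(newton, joule, watt)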
Physical sciences
Measurement systems
null
44158
https://en.wikipedia.org/wiki/Conservative%20force
Conservative force
In physics, a conservative force is a force with the property that the total work done by the force in moving a particle between two points is independent of the path taken. Equivalently, if a particle travels in a closed loop, the total work done (the sum of the force acting along the path multiplied by the displacement) by a conservative force is zero. A conservative force depends only on the position of the object. If a force is conservative, it is possible to assign a numerical value for the potential at any point and conversely, when an object moves from one location to another, the force changes the potential energy of the object by an amount that does not depend on the path taken, contributing to the mechanical energy and the overall conservation of energy. If the force is not conservative, then defining a scalar potential is not possible, because taking different paths would lead to conflicting potential differences between the start and end points. Gravitational force is an example of a conservative force, while frictional force is an example of a non-conservative force. Other examples of conservative forces are the force of an elastic spring, the electrostatic force between two electric charges, and the magnetic force between two magnetic poles. The last two forces are called central forces, as they act along the line joining the centres of two charged/magnetized bodies. A central force is conservative if and only if it is spherically symmetric. For conservative forces, F = −∇U, where F is the conservative force, U is the potential energy, and the gradient is taken with respect to the position r. Informal definition Informally, a conservative force can be thought of as a force that conserves mechanical energy. Suppose a particle starts at point A, and there is a force F acting on it. Then the particle is moved around by other forces, and eventually ends up at A again. Though the particle may still be moving, at that instant when it passes point A again, it has traveled a closed path. If the net work done by F at this point is 0, then F passes the closed path test. Any force that passes the closed path test for all possible closed paths is classified as a conservative force. The gravitational force, spring force, magnetic force (according to some definitions, see below) and electric force (at least in a time-independent magnetic field, see Faraday's law of induction for details) are examples of conservative forces, while friction and air drag are classical examples of non-conservative forces. For non-conservative forces, the mechanical energy that is lost (not conserved) has to go somewhere else, by conservation of energy. Usually the energy is turned into heat, for example the heat generated by friction. In addition to heat, friction also often produces some sound energy. The water drag on a moving boat converts the boat's mechanical energy into not only heat and sound energy, but also wave energy at the edges of its wake. These and other energy losses are irreversible because of the second law of thermodynamics. Path independence A direct consequence of the closed path test is that the work done by a conservative force on a particle moving between any two points does not depend on the path taken by the particle. For example, the work done by the gravitational force on an object depends only on its change in height, because the gravitational force is conservative. The work done by a conservative force is equal to the negative of the change in potential energy during that process.
For a proof, imagine two paths 1 and 2, both going from point A to point B. The variation of energy for the particle, taking path 1 from A to B and then path 2 backwards from B to A, is 0; thus, the work is the same in path 1 and 2, i.e., the work is independent of the path followed, as long as it goes from A to B. For example, if a child slides down a frictionless slide, the work done by the gravitational force on the child from the start of the slide to the end is independent of the shape of the slide; it only depends on the vertical displacement of the child. Mathematical description A force field F, defined everywhere in space (or within a simply-connected volume of space), is called a conservative force or conservative vector field if it meets any of these three equivalent conditions: 1. The curl of F is the zero vector: ∇ × F = 0, which in two dimensions reduces to ∂F_y/∂x − ∂F_x/∂y = 0. 2. There is zero net work (W) done by the force when moving a particle through a trajectory that starts and ends in the same place: W = ∮ F · dr = 0. 3. The force can be written as the negative gradient of a potential Φ: F = −∇Φ. The term conservative force comes from the fact that when a conservative force exists, it conserves mechanical energy. The most familiar conservative forces are gravity, the electric force (in a time-independent magnetic field, see Faraday's law), and spring force. Many forces (particularly those that depend on velocity) are not force fields. In these cases, the above three conditions are not mathematically equivalent. For example, the magnetic force satisfies condition 2 (since the work done by a magnetic field on a charged particle is always zero), but does not satisfy condition 3, and condition 1 is not even defined (the force is not a vector field, so one cannot evaluate its curl). Accordingly, some authors classify the magnetic force as conservative, while others do not. The magnetic force is an unusual case; most velocity-dependent forces, such as friction, do not satisfy any of the three conditions, and therefore are unambiguously nonconservative.
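A hedged numerical sketch of the path-independence test (not from the source; the two paths, the friction magnitude and the step count are arbitrary choices) is:

    import math

    # Illustrative sketch (not from the source): the work done by uniform gravity
    # between A = (0, 0) and B = (1, 1) is the same along two different paths,
    # while a friction-like force that always opposes motion gives different work.
    def work(force, path, steps=100_000):
        total = 0.0
        x0, y0 = path(0.0)
        for i in range(1, steps + 1):
            x1, y1 = path(i / steps)
            dx, dy = x1 - x0, y1 - y0
            fx, fy = force(x0, y0, dx, dy)
            total += fx * dx + fy * dy
            x0, y0 = x1, y1
        return total

    def gravity(x, y, dx, dy):
        return (0.0, -9.81)                           # depends only on position

    def friction_like(x, y, dx, dy):
        norm = math.hypot(dx, dy) or 1.0
        return (-0.5 * dx / norm, -0.5 * dy / norm)   # always opposes motion

    straight = lambda t: (t, t)        # straight line from A to B
    parabola = lambda t: (t, t * t)    # curved path from A to B

    print(work(gravity, straight), work(gravity, parabola))              # both about -9.81
    print(work(friction_like, straight), work(friction_like, parabola))  # differ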
Physical sciences
Classical mechanics
Physics
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Ozone depletion
Ozone depletion consists of two related events observed since the late 1970s: a steady lowering of about four percent in the total amount of ozone in Earth's atmosphere, and a much larger springtime decrease in stratospheric ozone (the ozone layer) around Earth's polar regions. The latter phenomenon is referred to as the ozone hole. There are also springtime polar tropospheric ozone depletion events in addition to these stratospheric events. The main causes of ozone depletion and the ozone hole are manufactured chemicals, especially manufactured halocarbon refrigerants, solvents, propellants, and foam-blowing agents (chlorofluorocarbons (CFCs), HCFCs, halons), referred to as ozone-depleting substances (ODS). These compounds are transported into the stratosphere by turbulent mixing after being emitted from the surface, mixing much faster than the molecules can settle. Once in the stratosphere, they release atoms from the halogen group through photodissociation, which catalyze the breakdown of ozone (O3) into oxygen (O2). Both types of ozone depletion were observed to increase as emissions of halocarbons increased. Ozone depletion and the ozone hole have generated worldwide concern over increased cancer risks and other negative effects. The ozone layer prevents harmful wavelengths of ultraviolet (UVB) light from passing through the Earth's atmosphere. These wavelengths cause skin cancer, sunburn, permanent blindness, and cataracts, which were projected to increase dramatically as a result of thinning ozone, as well as harming plants and animals. These concerns led to the adoption of the Montreal Protocol in 1987, which bans the production of CFCs, halons, and other ozone-depleting chemicals. Over time, scientists have developed new refrigerants with lower global warming potential (GWP) to replace older ones. For example, in new automobiles, R-1234yf systems are now common, being chosen over refrigerants with much higher GWP such as R-134a and R-12. The ban came into effect in 1989. Ozone levels stabilized by the mid-1990s and began to recover in the 2000s, as the shifting of the jet stream in the southern hemisphere towards the south pole has stopped and might even be reversing. Recovery was projected to continue over the next century, with the ozone hole expected to reach pre-1980 levels by around 2075. In 2019, NASA reported that the ozone hole was the smallest ever since it was first discovered in 1982. The UN now projects that under the current regulations the ozone layer will completely regenerate by 2045. The Montreal Protocol is considered the most successful international environmental agreement to date. Ozone cycle overview Three forms (or allotropes) of oxygen are involved in the ozone-oxygen cycle: oxygen atoms (O or atomic oxygen), oxygen gas (O2 or diatomic oxygen), and ozone gas (O3 or triatomic oxygen). Ozone is formed in the stratosphere when oxygen gas molecules photodissociate after absorbing UVC photons. This converts a single O2 into two atomic oxygen radicals. The atomic oxygen radicals then combine with separate O2 molecules to create two O3 molecules. These ozone molecules absorb UVB light, following which ozone splits into a molecule of O2 and an oxygen atom. The oxygen atom then joins up with an oxygen molecule to regenerate ozone. This is a continuing process that terminates when an oxygen atom recombines with an ozone molecule to make two O2 molecules. It is worth noting that ozone is the only atmospheric gas that absorbs UVB light.
O + O3 → 2 O2 The total amount of ozone in the stratosphere is determined by a balance between photochemical production and recombination. Ozone can be destroyed by a number of free radical catalysts; the most important are the hydroxyl radical (OH·), nitric oxide radical (NO·), chlorine radical (Cl·) and bromine radical (Br·). The dot is a notation to indicate that each species has an unpaired electron and is thus extremely reactive. The effectiveness of different halogens and pseudohalogens as catalysts for ozone destruction varies, in part due to differing routes to regenerate the original radical after reacting with ozone or dioxygen. While all of the relevant radicals have both natural and man-made sources, human activity has impacted some more than others. As of 2020, most of the OH· and NO· in the stratosphere is naturally occurring, but human activity has drastically increased the levels of chlorine and bromine. These elements are found in stable organic compounds, especially chlorofluorocarbons, which can travel to the stratosphere without being destroyed in the troposphere due to their low reactivity. Once in the stratosphere, the Cl and Br atoms are released from the parent compounds by the action of ultraviolet light, e.g. CFCl3 + electromagnetic radiation → Cl· + ·CFCl2. Ozone is a highly reactive molecule that easily reduces to the more stable oxygen form with the assistance of a catalyst. Cl and Br atoms destroy ozone molecules through a variety of catalytic cycles. In the simplest example of such a cycle, a chlorine atom reacts with an ozone molecule (O3), taking an oxygen atom to form chlorine monoxide (ClO) and leaving an oxygen molecule (O2). The ClO can react with a second molecule of ozone, releasing the chlorine atom and yielding two molecules of oxygen. The chemical shorthand for these gas-phase reactions is: Cl· + O3 → ClO + O2 (a chlorine atom removes an oxygen atom from an ozone molecule to make a ClO molecule); ClO + O3 → Cl· + 2 O2 (this ClO can also remove an oxygen atom from another ozone molecule; the chlorine is then free to repeat this two-step cycle). The overall effect is a decrease in the amount of ozone, though the rate of these processes can be decreased by the effects of null cycles. More complicated mechanisms have also been discovered that lead to ozone destruction in the lower stratosphere. A single chlorine atom would continuously destroy ozone (thus a catalyst) for up to two years (the time scale for transport back down to the troposphere) except for reactions that remove it from this cycle by forming reservoir species such as hydrogen chloride (HCl) and chlorine nitrate (ClONO2). Bromine is even more efficient than chlorine at destroying ozone on a per-atom basis, but there is much less bromine in the atmosphere at present. Both chlorine and bromine contribute significantly to overall ozone depletion. Laboratory studies have also shown that fluorine and iodine atoms participate in analogous catalytic cycles. However, fluorine atoms react rapidly with water vapour, methane and hydrogen to form strongly bound hydrogen fluoride (HF) in the Earth's stratosphere, while organic molecules containing iodine react so rapidly in the lower atmosphere that they do not reach the stratosphere in significant quantities. A single chlorine atom is able to react with an average of 100,000 ozone molecules before it is removed from the catalytic cycle.
This fact plus the amount of chlorine released into the atmosphere yearly by chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) demonstrates the danger of CFCs and HCFCs to the environment. Observations on ozone layer depletion The ozone hole is usually measured by reduction in the total column ozone above a point on the Earth's surface. This is normally expressed in Dobson units, abbreviated as "DU". The most prominent decrease in ozone has been in the lower stratosphere. Marked decreases in column ozone in the Antarctic spring and early summer compared to the early 1970s and before have been observed using instruments such as the Total Ozone Mapping Spectrometer (TOMS). Reductions of up to 70 percent in the ozone column observed in the austral (southern hemispheric) spring over Antarctica and first reported in 1985 (Farman et al.) are continuing. Antarctic total column ozone in September and October has continued to be 40–50 percent lower than pre-ozone-hole values since the 1990s. A gradual trend toward "healing" was reported in 2016. In 2017, NASA announced that the ozone hole was the weakest since 1988 because of warm stratospheric conditions. It is expected to recover around 2070. The amount lost is more variable year-to-year in the Arctic than in the Antarctic. The greatest Arctic declines are in the winter and spring, reaching up to 30 percent when the stratosphere is coldest. Reactions that take place on polar stratospheric clouds (PSCs) play an important role in enhancing ozone depletion. PSCs form more readily in the extreme cold of the Arctic and Antarctic stratosphere. This is why ozone holes first formed, and are deeper, over Antarctica. Early models failed to take PSCs into account and predicted a gradual global depletion, which is why the sudden Antarctic ozone hole was such a surprise to many scientists. It is more accurate to speak of ozone depletion in middle latitudes rather than holes. Total column ozone declined below pre-1980 values between 1980 and 1996 for mid-latitudes. In the northern mid-latitudes, it then increased from the minimum value by about two percent from 1996 to 2009 as regulations took effect and the amount of chlorine in the stratosphere decreased. In the Southern Hemisphere's mid-latitudes, total ozone remained constant over that time period. There are no significant trends in the tropics, largely because halogen-containing compounds have not had time to break down and release chlorine and bromine atoms at tropical latitudes. Large volcanic eruptions have been shown to have substantial albeit uneven ozone-depleting effects, as observed with the 1991 eruption of Mt. Pinatubo in the Philippines. Ozone depletion also explains much of the observed reduction in stratospheric and upper tropospheric temperatures. The source of the warmth of the stratosphere is the absorption of UV radiation by ozone, hence reduced ozone leads to cooling. Some stratospheric cooling is also predicted from increases in greenhouse gases such as CO2 and CFCs themselves; however, the ozone-induced cooling appears to be dominant. Predictions of ozone levels remain difficult, but the precision of models' predictions of observed values and the agreement among different modeling techniques have increased steadily. The World Meteorological Organization Global Ozone Research and Monitoring Project—Report No. 44 is strongly in favor of the Montreal Protocol, but notes that a UNEP 1994 Assessment overestimated ozone loss for the 1994–1997 period.
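As a hedged aside on the Dobson unit mentioned above (the conversion factor is a standard one, but the baseline and observed values below are arbitrary examples, not measurements from the source):

    # Illustrative sketch (not from the source): 1 DU corresponds to a 0.01 mm
    # layer of pure ozone at standard temperature and pressure, about 2.687e16
    # molecules per square centimetre of column.
    MOLECULES_PER_CM2_PER_DU = 2.687e16

    def column_density(dobson_units):
        return dobson_units * MOLECULES_PER_CM2_PER_DU     # molecules/cm^2

    def percent_below(baseline_du, observed_du):
        return 100.0 * (baseline_du - observed_du) / baseline_du

    print(f"{column_density(300):.2e} molecules/cm^2 in a 300 DU column")
    print(f"{percent_below(300, 150):.0f}% below the baseline")   # example values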
Compounds in the atmosphere CFCs and related compounds Chlorofluorocarbons (CFCs) and other halogenated ozone-depleting substances (ODS) are mainly responsible for man-made chemical ozone depletion. The total amount of effective halogens (chlorine and bromine) in the stratosphere can be calculated and are known as the equivalent effective stratospheric chlorine (EESC). CFCs as refrigerants were invented by Thomas Midgley Jr. in the 1930s. They were used in air conditioning and cooling units, as aerosol spray propellants prior to the 1970s, and in the cleaning processes of delicate electronic equipment. They also occur as by-products of some chemical processes. No significant natural sources have ever been identified for these compounds—their presence in the atmosphere is due almost entirely to human manufacture. As mentioned above, when such ozone-depleting chemicals reach the stratosphere, they are dissociated by ultraviolet light to release chlorine atoms. The chlorine atoms act as a catalyst, and each can break down tens of thousands of ozone molecules before being removed from the stratosphere. Given the longevity of CFC molecules, recovery times are measured in decades. It is calculated that a CFC molecule takes an average of about five to seven years to go from the ground level up to the upper atmosphere, and it can stay there for about a century, destroying up to one hundred thousand ozone molecules during that time. 1,1,1-Trichloro-2,2,2-trifluoroethane, also known as CFC-113a, is one of four man-made chemicals newly discovered in the atmosphere by a team at the University of East Anglia. CFC-113a is the only known CFC whose abundance in the atmosphere is still growing. Its source remains a mystery, but illegal manufacturing is suspected by some. CFC-113a seems to have been accumulating unabated since 1960. Between 2012 and 2017, concentrations of the gas jumped by 40 percent. A study by an international team of researchers published in Nature found that since 2013 emissions that are predominately from north-eastern China have released large quantities of the banned chemical Chlorofluorocarbon-11 (CFC-11) into the atmosphere. Scientists estimate that without action, these CFC-11 emissions will delay the recovery of the planet's ozone hole by a decade. Aluminum oxide Satellites burning up upon re-entry into Earth's atmosphere produce aluminum oxide (Al2O3) nanoparticles that endure in the atmosphere for decades. Estimates for 2022 alone were ~17 metric tons (~30kg of nanoparticles per ~250kg satellite). Increasing populations of satellite constellations can eventually lead to significant ozone depletion. Computer modeling Scientists have attributed ozone depletion to the increase of man-made (anthropogenic) halogen compounds from CFCs by combining observational data with computer models. These complex chemistry transport models (e.g. SLIMCAT, CLaMS—Chemical Lagrangian Model of the Stratosphere) work by combining measurements of chemicals and meteorological fields with chemical reaction rate constants. They identify key chemical reactions and transport processes that bring CFC photolysis products into contact with ozone. Ozone hole and its causes The Antarctic ozone hole is an area of the Antarctic stratosphere in which the recent ozone levels have dropped to as low as 33 percent of their pre-1975 values. 
The ozone hole occurs during the Antarctic spring, from September to early December, as strong westerly winds start to circulate around the continent and create an atmospheric container. Within this polar vortex, over 50 percent of the lower stratospheric ozone is destroyed during the Antarctic spring. As explained above, the primary cause of ozone depletion is the presence of chlorine-containing source gases (primarily CFCs and related halocarbons). In the presence of UV light, these gases dissociate, releasing chlorine atoms, which then go on to catalyze ozone destruction. The Cl-catalyzed ozone depletion can take place in the gas phase, but it is substantially enhanced in the presence of polar stratospheric clouds (PSCs). These polar stratospheric clouds form during winter, in the extreme cold. Polar winters are dark, consisting of three months without solar radiation (sunlight). The lack of sunlight contributes to a decrease in temperature and the polar vortex traps and chills the air. Temperatures are around or below −80 °C. These low temperatures form cloud particles. Three types of PSC clouds—nitric acid trihydrate clouds, slowly cooling water-ice clouds, and rapidly cooling water-ice (nacreous) clouds—provide surfaces for chemical reactions whose products will, in the spring, lead to ozone destruction. The photochemical processes involved are complex but well understood. The key observation is that, ordinarily, most of the chlorine in the stratosphere resides in "reservoir" compounds, primarily chlorine nitrate (ClONO2) as well as stable end products such as HCl. The formation of end products essentially removes Cl from the ozone depletion process. Reservoir compounds sequester Cl, which can later be made available via absorption of light at wavelengths shorter than 400 nm. During the Antarctic winter and spring, reactions on the surface of the polar stratospheric cloud particles convert these "reservoir" compounds into reactive free radicals (Cl and ClO). Denitrification is the process by which the clouds remove NO2 from the stratosphere by converting it to nitric acid in PSC particles, which then are lost by sedimentation. This prevents newly formed ClO from being converted back into ClONO2. The role of sunlight in ozone depletion is the reason why the Antarctic ozone depletion is greatest during spring. During winter, even though PSCs are at their most abundant, there is no light over the pole to drive chemical reactions. During the spring, however, sunlight returns and provides energy to drive photochemical reactions and melt the polar stratospheric clouds, releasing considerable ClO, which drives the hole mechanism. Further warming temperatures near the end of spring break up the vortex around mid-December. As warm, ozone- and NO2-rich air flows in from lower latitudes, the PSCs are destroyed, the enhanced ozone depletion process shuts down, and the ozone hole closes. Most of the ozone that is destroyed is in the lower stratosphere, in contrast to the much smaller ozone depletion through homogeneous gas-phase reactions, which occurs primarily in the upper stratosphere. Effects Since the ozone layer absorbs UVB ultraviolet light from the sun, ozone layer depletion increases surface UVB levels (all else equal), which could lead to damage, including an increase in skin cancer. This was the reason for the Montreal Protocol. 
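The polar springtime loss described above is driven largely by catalytic cycles in which chlorine is regenerated. One standard way of writing the ClO-dimer cycle that dominates in the sunlit Antarctic spring (textbook stratospheric chemistry summarizing the mechanism above, not taken from any single study cited here) is:

```latex
% ClO-dimer (ClOOCl) catalytic cycle important in the Antarctic spring
\begin{align*}
2\times\big(\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2}\big)\\
\mathrm{ClO} + \mathrm{ClO} + \mathrm{M} &\rightarrow \mathrm{ClOOCl} + \mathrm{M}\\
\mathrm{ClOOCl} + h\nu &\rightarrow \mathrm{Cl} + \mathrm{ClOO}\\
\mathrm{ClOO} + \mathrm{M} &\rightarrow \mathrm{Cl} + \mathrm{O_2} + \mathrm{M}\\[2pt]
\text{net:}\quad 2\,\mathrm{O_3} &\rightarrow 3\,\mathrm{O_2}
\end{align*}
```

Because the chlorine atoms emerge unchanged at the end of each pass, a single Cl atom can cycle through these reactions many times, which is why it can destroy large numbers of ozone molecules before being locked back into a reservoir compound.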
Although decreases in stratospheric ozone are well-tied to CFCs and increases in surface UVB, there is no direct observational evidence linking ozone depletion to higher incidence of skin cancer and eye damage in human beings. This is partly because UVA, which has also been implicated in some forms of skin cancer, is not absorbed by ozone, and because it is nearly impossible to control statistics for lifestyle changes over time. Ozone depletion may also influence wind patterns. Increased UV Ozone, while a minority constituent in Earth's atmosphere, is responsible for most of the absorption of UVB radiation. The amount of UVB radiation that penetrates through the ozone layer decreases exponentially with the slant-path thickness and density of the layer. When stratospheric ozone levels decrease, higher levels of UVB reach the Earth's surface. UV-driven phenolic formation in tree rings has dated the start of ozone depletion in northern latitudes to the late 1700s. In October 2008, the Ecuadorian Space Agency published a report called HIPERION. The study used ground instruments in Ecuador and the last 28 years' data from 12 satellites of several countries, and found that the UV radiation reaching equatorial latitudes was far greater than expected, with the UV Index climbing as high as 24 in Quito; the WHO considers 11 as an extreme index and a great risk to health. The report concluded that depleted ozone levels around the mid-latitudes of the planet are already endangering large populations in these areas. Later, the CONIDA, the Peruvian Space Agency, published its own study, which yielded almost the same findings as the Ecuadorian study. Biological effects The main public concern regarding the ozone hole has been the effects of increased surface UV radiation on human health. So far, ozone depletion in most locations has been typically a few percent and, as noted above, no direct evidence of health damage is available in most latitudes. If the high levels of depletion seen in the ozone hole were to be common across the globe, the effects could be substantially more dramatic. As the ozone hole over Antarctica has in some instances grown so large as to affect parts of Australia, New Zealand, Chile, Argentina, and South Africa, environmentalists have been concerned that the increase in surface UV could be significant. Excessive ultraviolet radiation (UVR) has reducing effects on the rates of photosynthesis and growth of benthic diatom communities (microalgae species that increase water quality and are pollution resistant) that are present in shallow freshwater. Ozone depletion not only affects human health but also has a profound impact on biodiversity. It damages plants and trees at the cellular level, affecting their growth, vitality, photosynthesis, water balance, and defense mechanisms against pests and diseases. This sets off a cascade of ecological impacts, harming soil microbes, insects, wildlife, and entire ecosystems. Ozone depletion would magnify all of the effects of UV on human health, both positive (including production of vitamin D) and negative (including sunburn, skin cancer, and cataracts). In addition, increased surface UV leads to increased tropospheric ozone, which is a health risk to humans. Basal and squamous cell carcinomas The most common forms of skin cancer in humans, basal and squamous cell carcinomas, have been strongly linked to UV-B exposure. 
The mechanism by which UVB induces these cancers is well understood—absorption of UV-B radiation causes the pyrimidine bases in the DNA molecule to form dimers, resulting in transcription errors when the DNA replicates. These cancers are relatively mild and rarely fatal, although the treatment of squamous cell carcinoma sometimes requires extensive reconstructive surgery. By combining epidemiological data with results of animal studies, scientists have estimated that every one percent decrease in long-term stratospheric ozone would increase the incidence of these cancers by 2%. Melanoma Another form of skin cancer, melanoma, is much less common but far more dangerous, being lethal in about 15–20 percent of the cases diagnosed. The relationship between melanoma and ultraviolet exposure is not yet fully understood, but it appears that both UV-B and UV-A are involved. Because of this uncertainty, it is difficult to estimate the effect of ozone depletion on melanoma incidence. One study showed that a 10 percent increase in UV-B radiation was associated with a 19 percent increase in melanomas for men and 16 percent for women. A study of people in Punta Arenas, at the southern tip of Chile, showed a 56 percent increase in melanoma and a 46 percent increase in non-melanoma skin cancer over a period of seven years, along with decreased ozone and increased UVB levels. Cortical cataracts Epidemiological studies suggest an association between ocular cortical cataracts and UV-B exposure, using crude approximations of exposure and various cataract assessment techniques. A detailed assessment of ocular exposure to UV-B was carried out in a study on Chesapeake Bay Watermen, where increases in average annual ocular exposure were associated with increasing risk of cortical opacity. In this highly exposed group of predominantly white males, the evidence linking cortical opacities to sunlight exposure was the strongest to date. Based on these results, ozone depletion is predicted to cause hundreds of thousands of additional cataracts by 2050. Increased tropospheric ozone Increased surface UV leads to increased tropospheric ozone. Ground-level ozone is generally recognized to be a health risk, as ozone is toxic due to its strong oxidant properties. The risks are particularly high for young children, the elderly, and those with asthma or other respiratory difficulties. At this time, ozone at ground level is produced mainly by the action of UV radiation on combustion gases from vehicle exhausts. Increased production of vitamin D Vitamin D is produced in the skin by ultraviolet light. Thus, higher UVB exposure raises human vitamin D in those deficient in it. Recent research (primarily since the Montreal Protocol) shows that many humans have less than optimal vitamin D levels. In particular, in the U.S. population, vitamin D levels in the lowest quartile (<17.8 ng/ml) were found, using information from the National Health and Nutrition Examination Survey, to be associated with an increase in all-cause mortality in the general population. While blood levels of vitamin D in excess of 100 ng/ml appear to raise blood calcium excessively and to be associated with higher mortality, the body has mechanisms that prevent sunlight from producing vitamin D in excess of the body's requirements. 
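To make the scaling relationships quoted above concrete, the sketch below is a back-of-the-envelope illustration, not an epidemiological model. It uses a simple Beer–Lambert expression for how surface UVB at a single wavelength varies with the ozone column and solar zenith angle, and then applies the roughly 2-percent-per-1-percent carcinoma scaling quoted earlier; the absorption cross-section is a rough, wavelength-dependent assumed value.

```python
# Back-of-the-envelope sketch, not an epidemiological model.
# Surface UVB is approximated with Beer-Lambert attenuation through the ozone
# column along a slant path; the carcinoma scaling factor is the ~2% per 1%
# figure quoted in the text above.
import math

DU_TO_MOLECULES_PER_M2 = 2.687e20
SIGMA_O3_305NM = 4.0e-23  # rough ozone absorption cross-section near 305 nm, m^2/molecule (assumed)

def uvb_transmission(column_du: float, zenith_deg: float) -> float:
    """Fraction of incoming UVB (single wavelength) reaching the surface."""
    slant_column = column_du * DU_TO_MOLECULES_PER_M2 / math.cos(math.radians(zenith_deg))
    return math.exp(-SIGMA_O3_305NM * slant_column)

def carcinoma_increase_percent(ozone_decrease_percent: float) -> float:
    """Apply the ~2% incidence increase per 1% long-term ozone decrease quoted above."""
    return 2.0 * ozone_decrease_percent

before, after = 300.0, 270.0  # example columns in DU (a 10% reduction)
t0, t1 = uvb_transmission(before, 30.0), uvb_transmission(after, 30.0)
print(f"Surface UVB at this wavelength rises by about {100 * (t1 / t0 - 1):.0f}%")
print(f"Estimated carcinoma incidence increase: {carcinoma_increase_percent(10.0):.0f}%")
```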
Effects on animals A November 2011 report by scientists at the Institute of Zoology in London, England, found that whales off the coast of California have shown a sharp rise in sun damage, and these scientists "fear that the thinning ozone layer is to blame". The study photographed and took skin biopsies from over 150 whales in the Gulf of California and found "widespread evidence of epidermal damage commonly associated with acute and severe sunburn", including cells that form when DNA is damaged by UV radiation. The findings suggest "rising UV levels as a result of ozone depletion are to blame for the observed skin damage, in the same way that human skin cancer rates have been on the increase in recent decades." Apart from whales, many other animals, such as dogs, cats, and sheep, as well as terrestrial ecosystems, also suffer the negative effects of increased UV-B radiation. Effects on crops An increase of UV radiation would be expected to affect crops. A number of economically important species of plants, such as rice, depend on cyanobacteria residing on their roots for the retention of nitrogen. Cyanobacteria are sensitive to UV radiation and would be affected by its increase. "Despite mechanisms to reduce or repair the effects of increased ultraviolet radiation, plants have a limited ability to adapt to increased levels of UVB, therefore plant growth can be directly affected by UVB radiation." Effects on plant life Over the years, the Arctic ozone layer has depleted severely. As a consequence, species that live above the snow cover, or in areas where warm temperatures have melted much of the snow, are negatively affected by the UV radiation that reaches the ground. Depletion of the ozone layer, by allowing excess UVB radiation to reach the ground, would initially be assumed to increase damage to plant DNA. Reports have found that when plants are exposed to UVB radiation comparable to that expected under stratospheric ozone depletion, plant height and leaf mass show no significant change, but shoot biomass and leaf area show a small decrease. However, UVB radiation has been shown to decrease quantum yield of photosystem II. UVB damage only occurs under extreme exposure, and most plants also have UVB-absorbing flavonoids which allow them to acclimatize to the radiation present. Plants experience different levels of UV radiation throughout the day. It is known that they are able to shift the levels and types of UV sunscreens (i.e. flavonoids) that they contain throughout the day. This allows them to increase their protection against UV radiation. Plants that have been affected by radiation throughout development are limited more by a reduced ability to intercept light, owing to smaller leaf area, than by compromised photosynthetic systems. Damage from UVB radiation is more likely to be significant for species interactions than for the plants themselves. Another significant impact of ozone depletion on plant life is the stress experienced by plants when exposed to UV radiation. This can cause a decrease in plant growth and an increase in oxidative stress, due to the production of nitric oxide and hydrogen peroxide. In areas where substantial ozone depletion has occurred, increased UV-B radiation reduces terrestrial plant productivity (and likewise carbon sequestration) by about 6%. Moreover, if plants are exposed to high levels of UV radiation, it can elicit the production of harmful volatile organic compounds, like isoprenes. 
The emission of isoprenes into the air, by plants, can severely impact the environment by adding to air pollution and increasing the amount of carbon in the atmosphere, ultimately contributing to climate change. Public policy The full extent of the damage that CFCs have caused to the ozone layer is not known and will not be known for decades; however, marked decreases in column ozone have already been observed. The Montreal and Vienna conventions were adopted long before a scientific consensus was established or important uncertainties in the underlying science were resolved. The ozone case was comparatively well understood by lay persons, as terms such as ozone shield and ozone hole served as useful "easy-to-understand bridging metaphors". Americans voluntarily switched away from aerosol sprays, resulting in a 50 percent sales loss even before legislation was enforced. After a 1976 report by the United States National Academy of Sciences concluded that credible scientific evidence supported the ozone depletion hypothesis, a few countries, including the United States, Canada, Sweden, Denmark, and Norway, moved to eliminate the use of CFCs in aerosol spray cans. At the time this was widely regarded as a first step towards a more comprehensive regulation policy, but progress in this direction slowed in subsequent years, due to a combination of political factors (continued resistance from the halocarbon industry and a general change in attitude towards environmental regulation during the first two years of the Reagan administration) and scientific developments (subsequent National Academy assessments that indicated that the first estimates of the magnitude of ozone depletion had been overly large). A critical DuPont manufacturing patent for Freon was set to expire in 1979. The United States banned the use of CFCs in aerosol cans in 1978. The European Community rejected proposals to ban CFCs in aerosol sprays, and in the U.S., CFCs continued to be used as refrigerants and for cleaning circuit boards. Worldwide CFC production fell sharply after the U.S. aerosol ban, but by 1986 had returned nearly to its 1976 level. In 1993, DuPont Canada closed its CFC facility. The U.S. government's attitude began to change again in 1983, when William Ruckelshaus replaced Anne M. Burford as Administrator of the United States Environmental Protection Agency (EPA). Under Ruckelshaus and his successor, Lee Thomas, the EPA pushed for an international approach to halocarbon regulations. In 1985 twenty nations, including most of the major CFC producers, signed the Vienna Convention for the Protection of the Ozone Layer, which established a framework for negotiating international regulations on ozone-depleting substances. That same year, the discovery of the Antarctic ozone hole was announced, causing a revival in public attention to the issue. In 1987, representatives from 43 nations signed the Montreal Protocol. Meanwhile, the halocarbon industry shifted its position and started supporting a protocol to limit CFC production. However, this shift was uneven, with DuPont acting more quickly than its European counterparts. DuPont may have feared court action related to increased skin cancer, especially as the EPA had published a study in 1986 claiming that an additional 40 million cases and 800,000 cancer deaths were to be expected in the U.S. in the next 88 years. The EU shifted its position as well after Germany gave up its defence of the CFC industry and started supporting moves towards regulation. 
Government and industry in France and the UK tried to defend their CFC producing industries even after the Montreal Protocol had been signed. At Montreal, the participants agreed to freeze production of CFCs at 1986 levels and to reduce production by 50 percent by 1999. After a series of scientific expeditions to the Antarctic produced convincing evidence that the ozone hole was indeed caused by chlorine and bromine from manmade organohalogens, the Montreal Protocol was strengthened at a 1990 meeting in London. The participants agreed to phase out CFCs and halons entirely (aside from a very small amount marked for certain "essential" uses, such as asthma inhalers) by 2000 in non-Article 5 countries and by 2010 in Article 5 (less developed) signatories. At a 1992 meeting in Copenhagen, Denmark, the phase-out date was moved up to 1996. At the same meeting, methyl bromide (MeBr), a fumigant used primarily in agricultural production, was added to the list of controlled substances. For all substances controlled under the protocol, phaseout schedules were delayed for less developed ('Article 5(1)') countries, and phaseout in these countries was supported by transfers of expertise, technology, and money from non-Article 5(1) Parties to the Protocol. Additionally, exemptions from the agreed schedules could be applied for under the Essential Use Exemption (EUE) process for substances other than methyl bromide and under the Critical Use Exemption (CUE) process for methyl bromide. Civil society, including especially non-governmental organizations (NGOs), played critical roles at all stages of policy development leading to the Vienna Conference, the Montreal Protocol, and in assessing compliance afterwards. The major companies claimed that no alternatives to HFC existed. An ozone-safe hydrocarbon refrigerant was developed at a technological institute in Hamburg, Germany, consisting of a mixture of the hydrocarbon gases propane and butane, and in 1992 came to the attention of the NGO Greenpeace. Greenpeace called it "Greenfreeze". The NGO then worked successfully first with a small and struggling company to market an appliance beginning in Europe, then Asia and later Latin America, receiving a 1997 UNEP award. By 1995, Germany had made CFC refrigerators illegal. Since 2004, corporations like Coca-Cola, Carlsberg, and IKEA formed a coalition to promote the ozone-safe Greenfreeze units. Production spread to companies like Electrolux, Bosch, and LG, with sales reaching some 300 million refrigerators by 2008. In Latin America, a domestic Argentinian company began Greenfreeze production in 2003, while the giant Bosch in Brazil began a year later. By 2013 it was being used by some 700 million refrigerators, making up about 40 percent of the market. In the U.S., however, change has been much slower. To some extent, CFCs were being replaced by the less damaging hydrochlorofluorocarbons (HCFCs), although concerns remain regarding HCFCs also. In some applications, hydrofluorocarbons (HFCs) were being used to replace CFCs. HFCs, which contain no chlorine or bromine, do not contribute to ozone depletion although they are potent greenhouse gases. The best known of these compounds is probably HFC-134a (R-134a), which in the United States has largely replaced CFC-12 (R-12) in automobile air conditioners. In laboratory analytics (a former "essential" use) the ozone depleting substances can be replaced with other solvents. 
Chemical companies like Du Pont, whose representatives disparaged Greenfreeze as "that German technology," maneuvered the EPA to block the technology in the U.S. until 2011. Ben & Jerry's of Unilever and General Electric, spurred by Greenpeace, had expressed formal interest in 2008 which figured in the EPA's final approval. The EU recast its Ozone Regulation in 2009. The law bans ozone-depleting substances with the goal of protecting the ozone layer. The list of ODS that are subject to the regulation is the same as those under the Montreal Protocol, with some additions. More recently, policy experts have advocated for efforts to link ozone protection efforts to climate protection efforts. Many ODS are also greenhouse gases, some thousands of times more powerful agents of radiative forcing than carbon dioxide over the short and medium term. Thus policies protecting the ozone layer have had benefits in mitigating climate change. The reduction of the radiative forcing due to ODS probably masked the true level of climate change effects of other greenhouse gases, and was responsible for the "slow down" of global warming from the mid-90s. Policy decisions in one arena affect the costs and effectiveness of environmental improvements in the other. ODS requirements in the marine industry The IMO has amended MARPOL Annex VI Regulation 12 regarding ozone depleting substances. As from July 1, 2010, all vessels where MARPOL Annex VI is applicable should have a list of equipment using ozone depleting substances. The list should include the name of ODS, type and location of equipment, quantity in kg and date. All changes since that date should be recorded in an ODS Record book on board recording all intended or unintended releases to the atmosphere. Furthermore, new ODS supply or landing to shore facilities should be recorded as well. Prospects of ozone depletion Since the adoption and strengthening of the Montreal Protocol has led to reductions in the emissions of CFCs, atmospheric concentrations of the most-significant compounds have been declining. These substances are being gradually removed from the atmosphere; since peaking in 1994, the Effective Equivalent Chlorine (EECl) level in the atmosphere had dropped about 10 percent by 2008. The decrease in ozone-depleting chemicals has also been significantly affected by a decrease in bromine-containing chemicals. The data suggest that substantial natural sources exist for atmospheric methyl bromide (). The phase-out of CFCs means that nitrous oxide (), which is not covered by the Montreal Protocol, has become the most highly emitted ozone-depleting substance and is expected to remain so throughout the 21st century. According to the IPCC Sixth Assessment Report, global stratospheric ozone levels experienced rapid decline in the 1970s and 1980s and have since been increasing, but have not reached preindustrial levels. Although considerable variability is expected from year to year, including in polar regions where depletion is largest, the ozone layer is expected to continue recovering in coming decades due to declining ozone-depleting substance concentrations, assuming full compliance with the Montreal Protocol. The Antarctic ozone hole is expected to continue for decades. Ozone concentrations in the lower stratosphere over Antarctica increased by 5–10 percent by 2020 and will return to pre-1980 levels by about 2060–2075. 
This is 10–25 years later than predicted in earlier assessments, because of revised estimates of atmospheric concentrations of ozone-depleting substances, including a larger predicted future usage in developing countries. Another factor that may prolong ozone depletion is the drawdown of nitrogen oxides from above the stratosphere due to changing wind patterns. A gradual trend toward "healing" was reported in 2016. In 2019, the ozone hole was at its smallest in the previous thirty years, due to the warmer polar stratosphere weakening the polar vortex. In September 2023, the Antarctic ozone hole was one of the largest on record, at 26 million square kilometers. The anomalously large ozone loss may have been a result of the 2022 Tonga volcanic eruption. Research history The basic physical and chemical processes that lead to the formation of an ozone layer in the Earth's stratosphere were discovered by Sydney Chapman in 1930. Short-wavelength UV radiation splits an oxygen (O2) molecule into two oxygen (O) atoms, which then combine with other oxygen molecules to form ozone. Ozone is removed when an oxygen atom and an ozone molecule "recombine" to form two oxygen molecules, i.e. O + O3 → 2 O2. In the 1950s, David Bates and Marcel Nicolet presented evidence that various free radicals, in particular hydroxyl (OH) and nitric oxide (NO), could catalyze this recombination reaction, reducing the overall amount of ozone. These free radicals were known to be present in the stratosphere, and so were regarded as part of the natural balance—it was estimated that in their absence, the ozone layer would be about twice as thick as it currently is. In 1970 Paul Crutzen pointed out that emissions of nitrous oxide (N2O), a stable, long-lived gas produced by soil bacteria, from the Earth's surface could affect the amount of nitric oxide (NO) in the stratosphere. Crutzen showed that nitrous oxide lives long enough to reach the stratosphere, where it is converted into NO. Crutzen then noted that increasing use of fertilizers might have led to an increase in nitrous oxide emissions over the natural background, which would in turn result in an increase in the amount of NO in the stratosphere. Thus human activity could affect the stratospheric ozone layer. In the following year, Crutzen and (independently) Harold Johnston suggested that NO emissions from supersonic passenger aircraft, which would fly in the lower stratosphere, could also deplete the ozone layer. However, more recent analysis in 1995 by David W. Fahey, an atmospheric scientist at the National Oceanic and Atmospheric Administration, found that the drop in ozone would be 1–2 percent if a fleet of 500 supersonic passenger aircraft were operated. This, Fahey expressed, would not be a showstopper for advanced supersonic passenger aircraft development. Rowland–Molina hypothesis In 1974 Frank Sherwood Rowland, Chemistry Professor at the University of California at Irvine, and his postdoctoral associate Mario J. Molina suggested that long-lived organic halogen compounds, such as CFCs, might behave in a similar fashion as Crutzen had proposed for nitrous oxide. James Lovelock had recently discovered, during a cruise in the South Atlantic in 1971, that almost all of the CFC compounds manufactured since their invention in 1930 were still present in the atmosphere. Molina and Rowland concluded that, like N2O, the CFCs would reach the stratosphere where they would be dissociated by UV light, releasing chlorine atoms. 
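For reference, the Chapman reactions just described, together with the generic catalytic cycle by which a radical X (such as NO, OH, Cl, or Br) destroys ozone, can be written as follows. This is standard textbook chemistry summarizing the mechanisms described above rather than material from any single cited study:

```latex
% Chapman cycle (ozone formation and loss in pure oxygen chemistry)
\begin{align*}
\mathrm{O_2} + h\nu &\rightarrow 2\,\mathrm{O} \\
\mathrm{O} + \mathrm{O_2} + \mathrm{M} &\rightarrow \mathrm{O_3} + \mathrm{M} \\
\mathrm{O_3} + h\nu &\rightarrow \mathrm{O_2} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_3} &\rightarrow 2\,\mathrm{O_2}
\end{align*}

% Generic catalytic ozone-loss cycle (X = NO, OH, Cl, Br, ...)
\begin{align*}
\mathrm{X} + \mathrm{O_3} &\rightarrow \mathrm{XO} + \mathrm{O_2} \\
\mathrm{XO} + \mathrm{O} &\rightarrow \mathrm{X} + \mathrm{O_2} \\[2pt]
\text{net:}\quad \mathrm{O} + \mathrm{O_3} &\rightarrow 2\,\mathrm{O_2}
\end{align*}
```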
A year earlier, Richard Stolarski and Ralph Cicerone at the University of Michigan had shown that Cl is even more efficient than NO at catalyzing the destruction of ozone. Similar conclusions were reached by Michael McElroy and Steven Wofsy at Harvard University. Neither group, however, had realized that CFCs were a potentially large source of stratospheric chlorine—instead, they had been investigating the possible effects of HCl emissions from the Space Shuttle, which are very much smaller. The Rowland–Molina hypothesis was strongly disputed by representatives of the aerosol and halocarbon industries. The Chair of the Board of DuPont was quoted as saying that ozone depletion theory is "a science fiction tale ... a load of rubbish ... utter nonsense". Robert Abplanalp, the President of Precision Valve Corporation (and inventor of the first practical aerosol spray can valve), wrote to the Chancellor of UC Irvine to complain about Rowland's public statements. Nevertheless, within three years most of the basic assumptions made by Rowland and Molina were confirmed by laboratory measurements and by direct observation in the stratosphere. The concentrations of the source gases (CFCs and related compounds) and the chlorine reservoir species (HCl and ClONO2) were measured throughout the stratosphere, and demonstrated that CFCs were indeed the major source of stratospheric chlorine, and that nearly all of the CFCs emitted would eventually reach the stratosphere. Even more convincing was the measurement, by James G. Anderson and collaborators, of chlorine monoxide (ClO) in the stratosphere. ClO is produced by the reaction of Cl with ozone—its observation thus demonstrated that Cl radicals not only were present in the stratosphere but also were actually involved in destroying ozone. McElroy and Wofsy extended the work of Rowland and Molina by showing that bromine atoms were even more effective catalysts for ozone loss than chlorine atoms and argued that the brominated organic compounds known as halons, widely used in fire extinguishers, were a potentially large source of stratospheric bromine. In 1976 the United States National Academy of Sciences released a report concluding that the ozone depletion hypothesis was strongly supported by the scientific evidence. In response, the United States, Canada and Norway banned the use of CFCs in aerosol spray cans in 1978. Early estimates were that, if CFC production continued at 1977 levels, the total atmospheric ozone would after a century or so reach a steady state, 15 to 18 percent below normal levels. By 1984, when better evidence on the speed of critical reactions was available, this estimate was changed to 5 to 9 percent steady-state depletion. Crutzen, Molina, and Rowland were awarded the 1995 Nobel Prize in Chemistry for their work on stratospheric ozone. Antarctic ozone hole The discovery of the Antarctic "ozone hole" by British Antarctic Survey scientists Farman, Gardiner and Shanklin (first reported in a paper in Nature in May 1985) came as a shock to the scientific community, because the observed decline in polar ozone was far larger than had been anticipated. Satellite measurements (TOMS onboard Nimbus 7) showing massive depletion of ozone around the south pole were becoming available at the same time. 
However, these were initially rejected as unreasonable by data quality control algorithms (they were filtered out as errors since the values were unexpectedly low); the ozone hole was detected only in satellite data when the raw data was reprocessed following evidence of ozone depletion in in situ observations. When the software was rerun without the flags, the ozone hole was seen as far back as 1976. Susan Solomon, an atmospheric chemist at the National Oceanic and Atmospheric Administration (NOAA), proposed that chemical reactions on polar stratospheric clouds (PSCs) in the cold Antarctic stratosphere caused a massive, though localized and seasonal, increase in the amount of chlorine present in active, ozone-destroying forms. The polar stratospheric clouds in Antarctica are only formed at very low temperatures, as low as −80 °C, and early spring conditions. In such conditions the ice crystals of the cloud provide a suitable surface for conversion of unreactive chlorine compounds into reactive chlorine compounds, which can easily deplete ozone. Moreover, the polar vortex formed over Antarctica is very tight and the reaction occurring on the surface of the cloud crystals is far different from when it occurs in atmosphere. These conditions have led to ozone hole formation in Antarctica. This hypothesis was decisively confirmed, first by laboratory measurements and subsequently by direct measurements, from the ground and from high-altitude airplanes, of very high concentrations of chlorine monoxide (ClO) in the Antarctic stratosphere. Alternative hypotheses, which had attributed the ozone hole to variations in solar UV radiation or to changes in atmospheric circulation patterns, were also tested and shown to be untenable. Meanwhile, analysis of ozone measurements from the worldwide network of ground-based Dobson spectrophotometers led an international panel to conclude that the ozone layer was in fact being depleted, at all latitudes outside of the tropics. These trends were confirmed by satellite measurements. As a consequence, the major halocarbon-producing nations agreed to phase out production of CFCs, halons, and related compounds, a process that was completed in 1996. Since 1981 the United Nations Environment Programme, under the auspices of the World Meteorological Organization, has sponsored a series of technical reports on the Scientific Assessment of Ozone Depletion, based on satellite measurements. The 2007 report showed that the hole in the ozone layer was recovering and the smallest it had been for about a decade. A 2010 report found, "Over the past decade, global ozone and ozone in the Arctic and Antarctic regions is no longer decreasing but is not yet increasing. The ozone layer outside the Polar regions is projected to recover to its pre-1980 levels some time before the middle of this century. In contrast, the springtime ozone hole over the Antarctic is expected to recover much later." In 2012, NOAA and NASA reported "Warmer air temperatures high above the Antarctic led to the second smallest season ozone hole in 20 years averaging 17.9 million square kilometres. The hole reached its maximum size for the season on Sept 22, stretching to 21.2 million square kilometres." A gradual trend toward "healing" was reported in 2016 and then in 2017. It is reported that the recovery signal is evident even in the ozone loss saturation altitudes. 
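The way the early satellite quality-control step masked the hole, as described above, can be illustrated with a toy example: a filter that flags "implausibly" low readings as instrument error silently discards exactly the measurements that carry the signal. The threshold and readings below are hypothetical values chosen purely for illustration, not TOMS data.

```python
# Toy illustration of how a naive quality-control filter can hide a real signal.
# The threshold and the readings below are hypothetical values, not TOMS data.

QC_MINIMUM_DU = 180.0  # hypothetical "implausibly low" cutoff used by the filter

readings_du = [310.0, 295.0, 150.0, 140.0, 305.0, 125.0]  # hypothetical daily columns

def naive_qc(readings):
    """Discard readings below the cutoff, assuming they are instrument errors."""
    return [r for r in readings if r >= QC_MINIMUM_DU]

kept = naive_qc(readings_du)
print("kept:", kept)                                   # the low (hole) values are silently dropped
print("mean of kept readings:", sum(kept) / len(kept))
print("true mean:", sum(readings_du) / len(readings_du))
```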
The hole in the Earth's ozone layer over the South Pole has affected atmospheric circulation in the Southern Hemisphere all the way to the equator, increasing rainfall at low, subtropical latitudes in the Southern Hemisphere. Arctic ozone "mini-hole" On March 3, 2005, the journal Nature published an article linking 2004's unusually large Arctic ozone hole to solar wind activity. On March 15, 2011, a record ozone layer loss was observed, with about half of the ozone present over the Arctic having been destroyed. The change was attributed to increasingly cold winters in the Arctic stratosphere at an altitude of approximately , a change associated with global warming in a relationship that is still under investigation. By March 25, the ozone loss had become the largest observed in any previous winter, raising the possibility that it would become an ozone hole. This would require ozone levels to fall below 200 Dobson units, from the 250 recorded over central Siberia. It is predicted that the thinning layer would affect parts of Scandinavia and Eastern Europe on March 30–31. On October 2, 2011, a study was published in the journal Nature, which said that between December 2010 and March 2011 up to 80 percent of the ozone in the atmosphere at about above the surface was destroyed. The level of ozone depletion was severe enough that scientists said it could be compared to the ozone hole that forms over Antarctica every winter. According to the study, "for the first time, sufficient loss occurred to reasonably be described as an Arctic ozone hole." The study analyzed data from the Aura and CALIPSO satellites, and determined that the larger-than-normal ozone loss was due to an unusually long period of cold weather in the Arctic, some 30 days more than typical, which allowed for more ozone-destroying chlorine compounds to be created. According to Lamont Poole, a co-author of the study, cloud and aerosol particles on which the chlorine compounds are found "were abundant in the Arctic until mid March 2011—much later than usual—with average amounts at some altitudes similar to those observed in the Antarctic, and dramatically larger than the near-zero values seen in March in most Arctic winters". In 2013, researchers analyzed the data and found the 2010–2011 Arctic event did not reach levels of ozone depletion sufficient to classify it as a true hole. A hole in the ozone layer is generally classified as 220 Dobson units or lower; the Arctic hole did not approach that low level. It has since been classified as a "mini-hole." Following the ozone depletion events of 1997 and 2011, weather balloons over the Arctic measured a 90% drop in ozone in March 2020: where they normally recorded about 3.5 parts per million of ozone, they found only around 0.3 parts per million. The drop was attributed to the coldest stratospheric temperatures recorded since 1979 and a strong polar vortex, which allowed chemicals, including chlorine and bromine, to destroy ozone. A rare hole, the result of unusually low temperatures in the atmosphere above the North Pole, was studied in 2020. Tibet ozone hole Because colder winters are more affected, an ozone hole at times appears over Tibet. In 2006, a 2.5 million square kilometer ozone hole was detected over Tibet. 
Again in 2011, an ozone hole appeared over mountainous regions of Tibet, Xinjiang, Qinghai and the Hindu Kush, along with an unprecedented hole over the Arctic, though the Tibet one was far less intense than the ones over the Arctic or Antarctic. Potential depletion by storm clouds Research in 2012 showed that the same process that produces the ozone hole over Antarctica occurs over summer storm clouds in the United States, and thus may be destroying ozone there as well. Ozone hole over tropics Physicist Qing-Bin Lu, of the University of Waterloo, claimed to have discovered a large, all-season ozone hole in the lower stratosphere over the tropics in July 2022. However, other researchers in the field refuted this claim, stating that the research was riddled with "serious errors and unsubstantiated assertions." According to Dr Paul Young, a lead author of the 2022 WMO/UNEP Scientific Assessment of Ozone Depletion, "The author's identification of a 'tropical ozone hole' is down to him looking at percentage changes in ozone, rather than absolute changes, with the latter being much more relevant for damaging UV reaching the surface." Specifically, Lu's work defines "ozone hole" as "an area with O3 loss in percent larger than 25%, with respect to the undisturbed O3 value when there were no significant CFCs in the stratosphere (~ in the 1960s)" instead of the general definition of 220 Dobson units or lower. Dr Marta Abalos Alvarez has added "Ozone depletion in the tropics is nothing new and is mainly due to the acceleration of the Brewer-Dobson circulation." Depletion caused by wildfire smoke Analyzing the atmospheric impacts of the 2019–2020 Australian bushfire season, scientists led by MIT researcher Susan Solomon found the smoke destroyed 3–5% of ozone in affected areas of the Southern Hemisphere. Smoke particles absorb hydrogen chloride and act as a catalyst to create chlorine radicals that destroy ozone. Ozone depletion and global warming Among others, Robert Watson had a role in the science assessment and in the regulation efforts of ozone depletion and global warming. Prior to the 1980s, the EU, NASA, NAS, UNEP, WMO and the British government had dissenting scientific reports and Watson played a role in the process of unified assessments. Based on the experience with the ozone case, the IPCC started to work on a unified reporting and science assessment to reach a consensus to provide the IPCC Summary for Policymakers. There are various areas of linkage between ozone depletion and global warming science: The same radiative forcing that produces global warming is expected to cool the stratosphere. This cooling, in turn, is expected to produce a relative increase in ozone (O3) depletion in polar areas and the frequency of ozone holes. Conversely, ozone depletion represents a radiative forcing of the climate system. There are two opposing effects: Reduced ozone causes the stratosphere to absorb less solar radiation, thus cooling the stratosphere while warming the troposphere; the resulting colder stratosphere emits less long-wave radiation downward, thus cooling the troposphere. Overall, the cooling dominates; the IPCC concludes "observed stratospheric losses over the past two decades have caused a negative forcing of the surface-troposphere system" of about −0.15 ± 0.10 watts per square meter (W/m2). One of the strongest predictions of the greenhouse effect is that the stratosphere will cool. 
Although this cooling has been observed, it is not trivial to separate the effects of changes in the concentration of greenhouse gases and ozone depletion since both will lead to cooling. However, this can be done by numerical stratospheric modeling. Results from the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory show that above , the greenhouse gases dominate the cooling. Ozone depleting chemicals are also often greenhouse gases. The increases in concentrations of these chemicals have produced 0.34 ± 0.03 W/m2 of radiative forcing, corresponding to about 14 percent of the total radiative forcing from increases in the concentrations of well-mixed greenhouse gases. The long term modeling of the process, its measurement, study, design of theories and testing take decades to document, gain wide acceptance, and ultimately become the dominant paradigm. Several theories about the destruction of ozone were hypothesized in the 1980s, published in the late 1990s, and are now being investigated. Dr Drew Schindell, and Dr Paul Newman, NASA Goddard, proposed a theory in the late 1990s, using computational modeling methods to model ozone destruction, that accounted for 78 percent of the ozone destroyed. Further refinement of that model accounted for 89 percent of the ozone destroyed, but pushed back the estimated recovery of the ozone hole from 75 years to 150 years. (An important part of that model is the lack of stratospheric flight due to depletion of fossil fuels.) In 2019, NASA reported that there was no significant relation between size of the ozone hole and climate change. Misconceptions CFC weight Since CFC molecules are heavier than air (nitrogen or oxygen), it is commonly believed that the CFC molecules cannot reach the stratosphere in significant amounts. However, atmospheric gases are not sorted by weight at these altitudes; the forces of wind can fully mix the gases in the atmosphere. Some of the heavier CFCs are not evenly distributed. Percentage of man-made chlorine Another misconception is that "it is generally accepted that natural sources of tropospheric chlorine are four to five times larger than man-made ones." While this statement is strictly true, tropospheric chlorine is irrelevant; it is stratospheric chlorine that affects ozone depletion. Chlorine from ocean spray is soluble and thus is washed by rainfall before it reaches the stratosphere. CFCs, in contrast, are insoluble and long-lived, allowing them to reach the stratosphere. In the lower atmosphere, there is much more chlorine from CFCs and related haloalkanes than there is in HCl from salt spray, and in the stratosphere halocarbons are dominant. Only methyl chloride, which is one of these halocarbons, has a mainly natural source, and it is responsible for about 20 percent of the chlorine in the stratosphere; the remaining 80 percent comes from manmade sources. Very violent volcanic eruptions can inject HCl into the stratosphere, but researchers have shown that the contribution is not significant compared to that from CFCs. A similar erroneous assertion is that soluble halogen compounds from the volcanic plume of Mount Erebus on Ross Island, Antarctica are a major contributor to the Antarctic ozone hole. Nevertheless, a 2015 study showed that the role of Mount Erebus volcano in the Antarctic ozone depletion was probably underestimated. 
Based on the NCEP/NCAR reanalysis data over the last 35 years and by using the NOAA HYSPLIT trajectory model, researchers showed that Erebus volcano gas emissions (including hydrogen chloride (HCl)) can reach the Antarctic stratosphere via high-latitude cyclones and then the polar vortex. Depending on Erebus volcano activity, the additional annual HCl mass entering the stratosphere from Erebus varies from 1.0 to 14.3 kt. First observation G.M.B. Dobson mentioned that when springtime ozone levels in the Antarctic over Halley Bay were first measured in 1956, he was surprised to find that they were ~320 DU, or about 150 DU below spring Arctic levels of ~450 DU. These were at that time the only known Antarctic ozone values available. What Dobson describes is essentially the baseline from which the ozone hole is measured: actual ozone hole values are in the 150–100 DU range. The discrepancy between the Arctic and Antarctic noted by Dobson was primarily a matter of timing: during the Arctic spring, ozone levels rose smoothly, peaking in April, whereas in the Antarctic they stayed approximately constant during early spring, rising abruptly in November when the polar vortex broke down. The behavior seen in the Antarctic ozone hole is different. Instead of staying constant, early springtime ozone levels drop from their already low winter values, by as much as 50 percent, and normal values are not reached again until December. Location of hole Some people thought that the ozone hole should be above the sources of CFCs. However, CFCs are well mixed globally in the troposphere and stratosphere. The reason for occurrence of the ozone hole above Antarctica is not because there are more CFCs concentrated but because the low temperatures help form polar stratospheric clouds. In fact, there are findings of significant and localized "ozone holes" above other parts of the Earth, such as above Central Asia. Awareness campaigns Public misconceptions and misunderstandings of complex issues like ozone depletion are common. The limited scientific knowledge of the public led to confusion about global warming or the perception of global warming as a subset of the "ozone hole". In the beginning, classical green NGOs refrained from using CFC depletion for campaigning, as they assumed the topic was too complicated. They became active much later, e.g. in Greenpeace's support for a CFC-free refrigerator produced by the former East German company VEB dkk Scharfenstein. The metaphors used in the CFC discussion (ozone shield, ozone hole) are not "exact" in the scientific sense. The "ozone hole" is more of a depression, less "a hole in the windshield". The ozone does not disappear through the layer, nor is there a uniform "thinning" of the ozone layer. However, they resonated better with non-scientists and their concerns. The ozone hole was seen as a "hot issue" and imminent risk as laypeople feared severe personal consequences such as skin cancer, cataracts, damage to plants, and reduction of plankton populations in the ocean's photic zone. Not only on the policy level, ozone regulation compared to climate change fared much better in public opinion. Americans voluntarily switched away from aerosol sprays before legislation was enforced, while climate change failed to achieve comparable concern and public action. The sudden identification in 1985 that there was a substantial "hole" was widely reported in the press. The especially rapid ozone depletion in Antarctica had previously been dismissed as a measurement error. 
Scientific consensus was established after regulation. While the Antarctic ozone hole has a relatively small effect on global ozone, the hole has generated a great deal of public interest because: Many have worried that ozone holes might start appearing over other areas of the globe, though to date the only other large-scale depletion is a smaller ozone "dimple" observed during the Arctic spring around the North Pole. Ozone at middle latitudes has declined, but by a much smaller extent (a decrease of about 4–5 percent). If stratospheric conditions become more severe (cooler temperatures, more clouds, more active chlorine), global ozone may decrease at a greater pace. Standard global warming theory predicts that the stratosphere will cool. When the Antarctic ozone hole breaks up each year, the ozone-depleted air drifts into nearby regions. Decreases in the ozone level of up to 10 percent have been reported in New Zealand in the month following the breakup of the Antarctic ozone hole, with ultraviolet-B radiation intensities increasing by more than 15 percent since the 1970s. World Ozone Day In 1994, the United Nations General Assembly voted to designate September 16 as the International Day for the Preservation of the Ozone Layer, or "World Ozone Day". The designation commemorates the signing of the Montreal Protocol on that date in 1987.
Shale
Shale is a fine-grained, clastic sedimentary rock formed from mud that is a mix of flakes of clay minerals (hydrous aluminium phyllosilicates, e.g., kaolin, Al2Si2O5(OH)4) and tiny fragments (silt-sized particles) of other minerals, especially quartz and calcite. Shale is characterized by its tendency to split into thin layers (laminae) less than one centimeter in thickness. This property is called fissility. Shale is the most common sedimentary rock. The term shale is sometimes applied more broadly, as essentially a synonym for mudrock, rather than in the narrower sense of clay-rich fissile mudrock. Texture Shale typically exhibits varying degrees of fissility. Because of the parallel orientation of clay mineral flakes in shale, it breaks into thin layers, often splintery and usually parallel to the otherwise indistinguishable bedding planes. Non-fissile rocks of similar composition and particle size (less than 0.0625 mm) are described as mudstones (1/3 to 2/3 silt particles) or claystones (less than 1/3 silt). Rocks with similar particle sizes but with less clay (greater than 2/3 silt) and therefore grittier are siltstones. Composition and color Shales are typically gray in color and are composed of clay minerals and quartz grains. The addition of variable amounts of minor constituents alters the color of the rock. Red, brown and green colors are indicative of ferric oxide (hematite – reds), iron hydroxide (goethite – browns and limonite – yellow), or micaceous minerals (chlorite, biotite and illite – greens). The color shifts from reddish to greenish as iron in the oxidized (ferric) state is converted to iron in the reduced (ferrous) state. Black shale results from the presence of greater than one percent carbonaceous material and indicates a reducing environment. Pale blue to blue-green shales typically are rich in carbonate minerals. Clays are the major constituent of shales and other mudrocks. The clay minerals represented are largely kaolinite, montmorillonite and illite. Clay minerals of Late Tertiary mudstones are expandable smectites, whereas in older rocks (especially in mid-to early Paleozoic shales) illites predominate. The transformation of smectite to illite produces silica, sodium, calcium, magnesium, iron and water. These released elements form authigenic quartz, chert, calcite, dolomite, ankerite, hematite and albite, all trace to minor (except quartz) minerals found in shales and other mudrocks. A typical shale is composed of about 58% clay minerals, 28% quartz, 6% feldspar, 5% carbonate minerals, and 2% iron oxides. Most of the quartz is detrital (part of the original sediments that formed the shale) rather than authigenic (crystallized within the shale after deposition). Shales and other mudrocks contain roughly 95 percent of the organic matter in all sedimentary rocks. However, this amounts to less than one percent by mass in an average shale. Black shales, which form in anoxic conditions, contain reduced free carbon along with ferrous iron (Fe2+) and sulfur (S2−). Amorphous iron sulfide, along with carbon, produce the black coloration. Because amorphous iron sulfide gradually converts to pyrite, which is not an important pigment, young shales may be quite dark from their iron sulfide content, in spite of a modest carbon content (less than 1%), while a black color in an ancient shale indicates a high carbon content. Most shales are marine in origin, and the groundwater in shale formations is often highly saline. 
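The grain-size and clay-content boundaries given above (shale versus mudstone, claystone, and siltstone) can be summarized in a small decision function. The sketch below simply encodes the thresholds quoted in this article; fissility is treated as a given input rather than something inferred from composition, and real classification also weighs mineralogy.

```python
# Encode the mudrock nomenclature thresholds quoted above:
# particles < 0.0625 mm; siltstone if more than 2/3 silt, claystone if less
# than 1/3 silt, mudstone in between, and "shale" when the rock is fissile.

def classify_mudrock(silt_fraction: float, fissile: bool) -> str:
    """Classify a fine-grained (< 0.0625 mm) rock from its silt fraction and fissility."""
    if not 0.0 <= silt_fraction <= 1.0:
        raise ValueError("silt_fraction must be between 0 and 1")
    if silt_fraction > 2 / 3:
        return "siltstone"
    if fissile:
        return "shale"
    return "mudstone" if silt_fraction >= 1 / 3 else "claystone"

print(classify_mudrock(0.5, fissile=True))    # shale
print(classify_mudrock(0.5, fissile=False))   # mudstone
print(classify_mudrock(0.2, fissile=False))   # claystone
print(classify_mudrock(0.8, fissile=False))   # siltstone
```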
There is evidence that shale acts as a semipermeable medium, allowing water to pass through while retaining dissolved salts. Formation The fine particles that compose shale can remain suspended in water long after the larger particles of sand have been deposited. As a result, shales are typically deposited in very slow moving water and are often found in lakes and lagoonal deposits, in river deltas, on floodplains and offshore below the wave base. Thick deposits of shale are found near ancient continental margins and foreland basins. Some of the most widespread shale formations were deposited by epicontinental seas. Black shales are common in Cretaceous strata on the margins of the Atlantic Ocean, where they were deposited in fault-bounded silled basins associated with the opening of the Atlantic during the breakup of Pangaea. These basins were anoxic, in part because of restricted circulation in the narrow Atlantic, and in part because the very warm Cretaceous seas lacked the circulation of cold bottom water that oxygenates the deep oceans today. Most clay must be deposited as aggregates and floccules, since the settling rate of individual clay particles is extremely slow. Flocculation is very rapid once the clay encounters highly saline sea water. Whereas individual clay particles are less than 4 microns in size, the clumps of clay particles produced by flocculation vary in size from a few tens of microns to over 700 microns in diameter. The floccules start out water-rich, but much of the water is expelled from the floccules as the clay minerals bind more tightly together over time (a process called syneresis). Clay pelletization by organisms that filter feed is important where flocculation is inhibited. Filter feeders produce an estimated 12 metric tons of clay pellets per square kilometer per year along the U.S. Gulf Coast. As sediments continue to accumulate, the older, more deeply buried sediments begin to undergo diagenesis. This mostly consists of compaction and lithification of the clay and silt particles. Early stages of diagenesis, described as eogenesis, take place at shallow depths (a few tens of meters) and are characterized by bioturbation and mineralogical changes in the sediments, with only slight compaction. Pyrite may be formed in anoxic mud at this stage of diagenesis. Deeper burial is accompanied by mesogenesis, during which most of the compaction and lithification takes place. As the sediments come under increasing pressure from overlying sediments, sediment grains move into more compact arrangements, ductile grains (such as clay mineral grains) are deformed, and pore space is reduced. In addition to this physical compaction, chemical compaction may take place via pressure solution. Points of contact between grains are under the greatest strain, and the strained mineral is more soluble than the rest of the grain. As a result, the contact points are dissolved away, allowing the grains to come into closer contact. It is during compaction that shale develops its fissility, likely through mechanical compaction of the original open framework of clay particles. The particles become strongly oriented into parallel layers that give the shale its distinctive fabric. Fissility likely develops early in the compaction process, at relatively shallow depth, since fissility does not seem to vary with depth in thick formations. 
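The statement earlier in this section that individual clay particles settle extremely slowly compared with sand can be illustrated with Stokes' law for small spheres in still water. The densities and viscosity below are typical assumed values, and real clay flakes are not spheres, so this is only an order-of-magnitude sketch.

```python
# Order-of-magnitude sketch of the Stokes settling velocity
# v = 2 (rho_p - rho_f) g r^2 / (9 mu) for small spheres in still water;
# clay flakes are not spheres, so treat the numbers as illustrative only.

G = 9.81           # gravitational acceleration, m/s^2
RHO_FLUID = 1000   # water density, kg/m^3
RHO_GRAIN = 2650   # typical quartz/clay mineral density, kg/m^3 (assumed)
MU = 1.0e-3        # dynamic viscosity of water, Pa*s (assumed, ~20 degrees C)

def stokes_velocity(diameter_m: float) -> float:
    """Terminal settling velocity (m/s) of a small sphere in still water."""
    r = diameter_m / 2
    return 2 * (RHO_GRAIN - RHO_FLUID) * G * r ** 2 / (9 * MU)

for name, d in [("clay particle (2 um)", 2e-6),
                ("silt grain (30 um)", 30e-6),
                ("fine sand (200 um)", 200e-6)]:
    v = stokes_velocity(d)
    print(f"{name}: {v:.2e} m/s  (~{v * 86400:.2f} m per day)")
```

Even with these rough numbers, the clay-sized particle settles on the order of tens of centimeters per day, while the sand grain settles thousands of times faster, which is why mud accumulates only in quiet water.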
Kaolinite flakes have less tendency to align in parallel layers than other clays, so kaolinite-rich clay is more likely to form nonfissile mudstone than shale. On the other hand, black shales often have very pronounced fissility (paper shales) due to binding of hydrocarbon molecules to the faces of the clay particles, which weakens the binding between particles. Lithification follows closely on compaction, as increased temperatures at depth hasten deposition of cement that binds the grains together. Pressure solution contributes to cementing, as the mineral dissolved from strained contact points is redeposited in the unstrained pore spaces. The clay minerals may be altered as well. For example, smectite is altered to illite at temperatures of about , releasing water in the process. Other alteration reactions include the alteration of smectite to chlorite and of kaolinite to illite at temperatures between . Because of these reactions, illite composes 80% of Precambrian shales, versus about 25% of young shales. Unroofing of buried shale is accompanied by telogenesis, the third and final stage of diagenesis. As erosion reduces the depth of burial, renewed exposure to meteoric water produces additional changes to the shale, such as dissolution of some of the cement to produce secondary porosity. Pyrite may be oxidized to produce gypsum. Black shales are dark, as a result of being especially rich in unoxidized carbon. Common in some Paleozoic and Mesozoic strata, black shales were deposited in anoxic, reducing environments, such as in stagnant water columns. Some black shales contain abundant heavy metals such as molybdenum, uranium, vanadium, and zinc. The enriched values are of controversial origin, having been alternatively attributed to input from hydrothermal fluids during or after sedimentation or to slow accumulation from sea water over long periods of sedimentation. Fossils, animal tracks or burrows and even raindrop impressions are sometimes preserved on shale bedding surfaces. Shales may also contain concretions consisting of pyrite, apatite, or various carbonate minerals. Shales that are subject to heat and pressure of metamorphism alter into a hard, fissile, metamorphic rock known as slate. With continued increase in metamorphic grade the sequence is phyllite, then schist and finally gneiss. As hydrocarbon source rock Shale is the most common source rock for hydrocarbons (natural gas and petroleum). The lack of coarse sediments in most shale beds reflects the absence of strong currents in the waters of the depositional basin. These might have oxygenated the waters and destroyed organic matter before it could accumulate. The absence of carbonate rock in shale beds reflects the absence of organisms that might have secreted carbonate skeletons, also likely due to an anoxic environment. As a result, about 95% of organic matter in sedimentary rocks is found in shales and other mudrocks. Individual shale beds typically have an organic matter content of about 1%, but the richest source rocks may contain as much as 40% organic matter. The organic matter in shale is converted over time from the original proteins, polysaccharides, lipids, and other organic molecules to kerogen, which at the higher temperatures found at greater depths of burial is further converted to graphite and petroleum. Historical mining terminology Before the mid-19th century, the terms slate, shale and schist were not sharply distinguished. 
In the context of underground coal mining, shale was frequently referred to as slate well into the 20th century. Black shale associated with coal seams is called black metal.
Physical sciences
Sedimentary rocks
Earth science
44214
https://en.wikipedia.org/wiki/Slate
Slate
Slate is a fine-grained, foliated, homogeneous, metamorphic rock derived from an original shale-type sedimentary rock composed of clay or volcanic ash through low-grade, regional metamorphism. It is the finest-grained foliated metamorphic rock. Foliation may not correspond to the original sedimentary layering, but instead is in planes perpendicular to the direction of metamorphic compression. The foliation in slate, called "slaty cleavage", is caused by strong compression in which fine-grained clay forms flakes to regrow in planes perpendicular to the compression. When expertly "cut" by striking parallel to the foliation with a specialized tool in the quarry, many slates display a property called fissility, forming smooth, flat sheets of stone which have long been used for roofing, floor tiles, and other purposes. Slate is frequently grey in color, especially when seen en masse covering roofs. However, slate occurs in a variety of colors even from a single locality; for example, slate from North Wales can be found in many shades of grey, from pale to dark, and may also be purple, green, or cyan. Slate is not to be confused with shale, from which it may be formed, or schist. The word "slate" is also used for certain types of object made from slate rock. It may mean a single roofing tile made of slate, or a writing slate, which was traditionally a small, smooth piece of the rock, often framed in wood, used with chalk as a notepad or notice board, and especially for recording charges in pubs and inns. The phrases "clean slate" and "blank slate" come from this usage. Description Slate is a fine-grained, metamorphic rock that shows no obvious compositional layering but can easily be split into thin slabs and plates. It is usually formed by low-grade regional metamorphism of mudrock. This mild degree of metamorphism produces a rock in which the individual mineral crystals remain microscopic in size, producing a characteristic slaty cleavage in which fresh cleavage surfaces appear dull. This is in contrast to the silky cleaved surfaces of phyllite, which is the next-higher grade of metamorphic rock derived from mudstone. The direction of cleavage is independent of any sedimentary structures in the original mudrock, reflecting instead the direction of regional compression. Slaty cleavage is continuous, meaning that the individual cleavage planes are too closely spaced to be discernible in hand samples. The texture of the slate is totally dominated by these pervasive cleavage planes. Under a microscope, the slate is found to consist of very thin lenses of quartz and feldspar (QF-domains) separated by layers of mica (M-domains). These are typically less than 100 μm (micron) thick. Because slate was formed in low heat and pressure, compared to most other metamorphic rocks, some fossils can be found in slate; sometimes even microscopic remains of delicate organisms can be found in slate. The process of conversion of mudrock to slate involves a loss of up to 50% of the volume of the mudrock as it is compacted. Grains of platy minerals, such as clay minerals, are rotated to form parallel layers perpendicular to the direction of compaction, which begin to impart cleavage to the rock. Slaty cleavage is fully developed as the clay minerals begin to be converted to chlorite and mica. Organic carbon in the rock is converted to graphite. Slate is mainly composed of the minerals quartz, illite, and chlorite, which account for up to 95% of its composition. 
The most important accessory minerals are iron oxides (such as hematite and magnetite), iron sulfides (such as pyrite), and carbonate minerals. Feldspar may be present as albite or, less commonly, orthoclase. Occasionally, as in the purple slates of North Wales, ferrous (iron(II)) reduction spheres form around iron nuclei, leaving a light-green, spotted texture. These spheres are sometimes deformed by a subsequent applied stress field into ovoids, which appear as ellipses when viewed on a cleavage plane of the specimen. However, some evidence shows that reduced spots may also form after deformation and acquire an elliptical shape from preferential infiltration along the cleavage direction, so caution is required in using reduction ellipsoids to estimate deformation. Terminology Before the mid-19th century, the terms "slate", "shale", and "schist" were not sharply distinguished. In the context of underground coal mining in the United States, the term slate was commonly used to refer to shale well into the 20th century. For example, roof slate referred to shale above a coal seam, and draw slate referred to shale that fell from the mine roof as the coal was removed. The British Geological Survey recommends that the term "slate" be used in scientific writings only when very little else is known about the rock that would allow a more definite classification. For example, if the characteristics of the rock show definitely that it was formed by metamorphosis of shale, it should be described in scientific writings as a metashale. If its origin is uncertain, but the rock is known to be rich in mica, it should be described as a pelite. Uses Construction Slate can be made into roofing slate, a type of roof tile which are installed by a slater. Slate has two lines of breakability—cleavage and grain—which make it possible to split the stone into thin sheets. When broken, slate retains a natural appearance while remaining relatively flat and easy to stack. A series of "slate booms" occurred in Europe from the 1870s until the First World War following improvements in railway, road and waterway transportation systems. Slate is particularly suitable as a roofing material as it has an extremely low water absorption index of less than 0.4%, making the material resistant to frost damage. Natural slate, which requires only minimal processing, has an embodied energy that compares favorably with other roofing materials. Natural slate is used by building professionals as a result of its beauty and durability. Slate is incredibly durable and can last several hundred years, often with little or no maintenance. Natural slate is also fire resistant and energy efficient. Slate roof tiles are usually fixed (fastened) either with nails or with hooks (as is common with Spanish slate). In the UK, fixing is typically with double nails onto timber battens (England and Wales) or nailed directly onto timber sarking boards (Scotland and Northern Ireland). Nails were traditionally of copper, although there are modern alloy and stainless steel alternatives. Both these methods, if used properly, provide a long-lasting weathertight roof with a lifespan of around 60–125 years. 
Some mainland European slate suppliers suggest that using hook fixing means that: Areas of weakness on the tile are fewer since no holes have to be drilled Roofing features such as valleys and domes are easier to create since narrow tiles can be used Hook fixing is particularly suitable in regions subject to severe weather conditions, since there is greater resistance to wind uplift, as the lower edge of the slate is secured. The metal hooks are, however, visible and may be unsuitable for historic properties. Slate tiles are often used for interior and exterior flooring, stairs, walkways and wall cladding. Tiles are installed and set on mortar and grouted along the edges. Chemical sealants are often used on tiles to improve durability and appearance, increase stain resistance, reduce efflorescence, and increase or reduce surface smoothness. Tiles are often sold gauged, meaning that the back surface is ground for ease of installation. Slate flooring can be slippery when used in external locations subject to rain. Slate tiles were used in 19th century UK building construction (apart from roofs) and in slate quarrying areas such as Blaenau Ffestiniog and Bethesda, Wales there are still many buildings wholly constructed of slate. Slates can also be set into walls to provide a rudimentary damp-proof membrane. Small offcuts are used as shims to level floor joists. In areas where slate is plentiful it is also used in pieces of various sizes for building walls and hedges, sometimes combined with other kinds of stone. Other uses Because it is a good electrical insulator and fireproof, it was used to construct early-20th-century electric switchboards and relay controls for large electric motors. Because of its thermal stability and chemical inertness, slate has been used for laboratory bench tops and for billiard table tops. Slate was used by earlier cultures as whetstone to hone knives, but whetstones are nowadays more typically made of quartz. In 18th- and 19th-century schools, slate was extensively used for blackboards and individual writing slates, for which slate or chalk pencils were used. In modern homes slate is often used as table coasters. In areas where it is available, high-quality slate is used for tombstones and commemorative tablets. In some cases slate was used by the ancient Maya civilization to fashion stelae. Slate was the traditional material of choice for black Go stones in Japan, alongside clamshell for white stones. It is now considered to be a luxury. Pennsylvania slate is widely used in the manufacture of turkey calls used for hunting turkeys. The tones produced from the slate, when scratched with various species of wood striker, imitates almost exactly the calls of all four species of wild turkey in North America: eastern, Rio Grande, Osceola and Merriam's. Extraction Slate is found in the Arctic and was used by Inuit to make the blades for ulus. China has vast slate deposits; in recent years its export of finished and unfinished slate has increased. Deposits of slate exist throughout Australia, with large reserves quarried in the Adelaide Hills in Willunga, Kanmantoo, and the Mid North at Mintaro and Spalding. Slate is abundant in Brazil, the world's second-largest producer of slate, around Papagaios in Minas Gerais, which extracts 95 percent of Brazil's slate. However, not all "slate" products from Brazil are entitled to bear the CE mark. 
Most slate in Europe today comes from Spain, the world's largest producer and exporter of natural slate, and 90 percent of Europe's natural slate used for roofing originates from the slate industry there. Lesser slate-producing regions in present-day Europe include Wales (with UNESCO landscape status and a museum at Llanberis), Cornwall (famously the village of Delabole), Cumbria (see Burlington Slate Quarries, Honister Slate Mine and Skiddaw Slate) and, formerly in the West Highlands of Scotland, around Ballachulish and the Slate Islands in the United Kingdom. Parts of France (Anjou, Loire Valley, Ardennes, Brittany, Savoie) and Belgium (Ardennes), Liguria in northern Italy, especially between the town of Lavagna (whose name is inherited as the term for chalkboard in Italian) and Fontanabuona valley; Portugal especially around Valongo in the north of the country. Germany's Moselle River region, Hunsrück (with a former mine open as a museum at Fell), Eifel, Westerwald, Thuringia and north Bavaria; and Alta, Norway (actually schist, not a true slate). Some of the slate from Wales and Cumbria is colored slate (non-blue): purple and formerly green in Wales and green in Cumbria. In North America, slate is produced in Newfoundland, eastern Pennsylvania, Buckingham County, Virginia, and the Slate Valley region in Vermont and New York, where colored slate is mined in the Granville, New York, area. A major slating operation existed in Monson, Maine, during the late 19th and early 20th centuries, where the slate is usually dark purple to blackish, and many local structures are roofed with slate tiles. The roof of St. Patrick's Cathedral in New York City and the headstone of John F. Kennedy's gravesite in Arlington National Cemetery are both made of Monson slate.
Physical sciences
Petrology
null
44217
https://en.wikipedia.org/wiki/Menhaden
Menhaden
Menhaden, also known as mossbunker, bunker, and "the most important fish in the sea", are forage fish of the genera Brevoortia and Ethmidium, two genera of marine fish in the order Clupeiformes. Menhaden is a blend of poghaden (pogy for short) and an Algonquian word akin to Narragansett munnawhatteaûg, derived from munnohquohteau ("he fertilizes"), referring to their use of the fish as fertilizer. It is generally thought that Pilgrims were advised by Tisquantum (also known as Squanto) to plant menhaden with their crops. Description Menhaden are flat and have soft flesh and a deeply forked tail. They rarely exceed in length, and have a varied weight range. Gulf menhaden and Atlantic menhaden are small oily-fleshed fish, bright silver, and characterized by a series of smaller spots behind the main humeral spot. They tend to have larger scales than yellowfin menhaden and finescale menhaden. In addition, yellowfin menhaden tail rays are a bright yellow in contrast to those of the Atlantic menhaden. Taxonomy Recent taxonomic work using DNA comparisons have organized the North American menhadens into large-scaled (Gulf and Atlantic menhaden) and small-scaled (Finescale and Yellowfin menhaden) designations. The menhaden consist of two genera and seven species: Genus Brevoortia T. N. Gill, 1861 Brevoortia aurea (Spix & Agassiz, 1829) (Brazilian menhaden) Brevoortia gunteri Hildebrand, 1948 (Finescale menhaden) Brevoortia patronus Goode, 1878 (Gulf menhaden) Brevoortia pectinata (Jenyns, 1842) (Argentine menhaden) Brevoortia smithi Hildebrand, 1941 (Yellowfin menhaden) Brevoortia tyrannus (Latrobe, 1802) (Atlantic menhaden) Genus Ethmidium W. F. Thompson, 1916 Ethmidium maculatum (Valenciennes, 1847) (Pacific menhaden) Distribution Finescale menhaden range from the Yucatán to Louisiana. Yellowfin menhaden range from Louisiana to Virginia. Gulf menhaden range from the Yucatán Peninsula, Mexico, to Tampa Bay, Florida. Atlantic menhaden range from Jupiter Inlet, Florida, to Nova Scotia; Atlantic menhaden seasonally migrate along the coast; in June, mature adults typically are in the northern portion of the coastline with sub-adults and juveniles located in the southern portion. The various species of menhaden occur anywhere from estuarine waters outward to the continental shelf; menhaden grow in less saline waters of estuaries and may be found in bays and lagoons, as well as at river mouths; adults appear to prefer water temperatures near 18 °C. Ecology Menhaden are filter feeders that travel in large, slow-moving, and tightly packed schools with open mouths. Filter feeders typically take into their open mouths "materials in the same proportions as they occur in ambient waters". Menhaden have two main sources of food: phytoplankton and zooplankton. A menhaden's diet varies considerably over the course of its lifetime, and is directly related to its size. The smallest menhaden, typically those under one year old, eat primarily phytoplankton. After that age, adult menhaden gradually shift to a diet comprised almost exclusively of zooplankton. Menhaden are omnivorous filter feeders, feeding by straining plankton and algae from water. Along with oysters, which filter water on the seabed, menhaden play a key role in the food chain in estuaries and bays. Atlantic menhaden are an important link between plankton and upper level predators. 
Because of their filter feeding abilities, "menhaden consume and redistribute a significant amount of energy within and between Chesapeake Bay and other estuaries, and the coastal ocean." Because they play this role, and their abundance, menhaden are an invaluable prey species for many predatory fish, such as striped bass, bluefish, mackerel, flounder, tuna, drums, and sharks. They are also a very important food source for many birds, including egrets, ospreys, seagulls, northern gannets, pelicans, and herons. In 2012, the Atlantic States Marine Fisheries Commission declared that the Atlantic menhaden was depleted due to overfishing. The decision was driven by issues with water quality in the Chesapeake Bay and failing efforts to re-introduce predator species, due to lack of menhaden on which they could feed. Menhaden are crucial not only because of their keystone species-status in the food web, but also because of their ecological services. The way menhaden filter feed on phytoplankton helps to mitigate toxic algal blooms. These algal blooms, which are often detrimental to a number of fish, bird, and marine mammal species, create hypoxic conditions. The phytoplankton being preyed upon are photosynthetic organisms, converting sunlight into energy which is then transferred to menhaden and then to bigger species of fish or other larger marine organisms such as birds or mammals. The consequence of this behavior is that if menhaden are eliminated or significantly decreased, there are limited means of energy transfer among trophic levels – making menhaden a true keystone species with ecological services that are invaluable to humans. Habitat Menhaden are a pelagic schooling fish that migrate inshore during the summer and off-shore in the winter months. The juvenile and larval menhaden migrate to shore and inland waterways through currents during summer months to grow while feeding on the phytoplankton and eventually zooplankton once they have matured. Commercially caught menhaden have been recorded in waters of around 5 to 24 ‰, as well as in hypersaline waters around 60 ‰. Reproduction Menhaden reproduce in open oceans externally, however, the female does not carry eggs with them during the process as they are released into the water column at the planktonic level as gametes and sperm. Currently, functional hermaphroditism is unknown to the species and identification of sex of the individual organism cannot be determined externally due to the lack of accessory reproductive organs. These fish breed during the winter months through December to March and the eggs and juveniles navigate towards estuaries and inland waterways through tides and currents. Human use Menhaden are not used directly for food. They are processed into fish oil and fish meal that are used as food ingredients, animal feed, and dietary supplements. The flesh has a high omega-3 fat content. Fish oil made from menhaden also is used as a raw material for products such as lipstick. Fisheries According to the Virginia Institute of Marine Science (VIMS), there are two established commercial fisheries for menhaden. The first is known as a reduction fishery. The second is known as a bait fishery, which harvests menhaden for the use of both commercial and recreational fishermen. Commercial fishermen, especially crabbers in the Chesapeake Bay area, use menhaden to bait their traps or hooks. The recreational fisherman use ground menhaden chum as a fish attractant, and whole fish as bait. 
The total harvest is approximately 500 million fish per year. Atlantic menhaden are harvested using purse seines. Omega Protein – a reduction fishery company with operations in the northwest Atlantic and the Gulf of Mexico – takes 90% of the total menhaden harvest in the United States. In October 2005, the Atlantic States Marine Fisheries Commission (ASMFC) approved an addendum to Amendment 1 of the Interstate Fishery Management Plan for Atlantic Menhaden, which "established a five-year annual cap on reduction fishery landings in the Chesapeake Bay", imposing a limit on reduction fishery operations for 2006–2010. In November 2006, that cap was established at 109,020 metric tons; this cap remained in place until 2013. In December 2012, in the face of the depletion of Atlantic menhaden, the ASMFC implemented another cap, effective in 2013 and 2014, for the Chesapeake Bay, this time at 87,216 metric tons, as well as a total allowable catch (TAC) of the species of 170,800 metric tons, a 20% reduction from the 2009–2011 average. The TAC was subsequently raised for 2015 and 2016 to 187,880 metric tons. The cap in the Chesapeake Bay was further lowered in November 2017 to 51,000 metric tons, but this came alongside a higher TAC of 216,000 metric tons. Omega Protein has been openly critical of these caps. Uses for menhaden oil Despite not being a popular fish for consumption, menhaden oil has many uses not only for humans but also for other animals. Menhaden oil is high in omega-3 fatty acids, which help lower blood pressure, correct abnormal heartbeats, reduce the risk of heart attack or stroke, and provide other health benefits. For this reason, menhaden oil is used in dietary supplements intended to address these conditions. One way that menhaden oil benefits animals is seen in chickens. When menhaden oil was given to chickens in their feed, they had a lower chance of fatty liver disease. This was because of menhaden oil's high omega-3 fatty acid content, which took the place of omega-6 fatty acids, which are not as beneficial to animals. Another animal that benefits from the omega-3 in menhaden oil is the guinea pig. When given menhaden oil in their feed, guinea pigs were shown to have a longer life span. Risks of overfishing According to the Chesapeake Bay Foundation, menhaden are the most important fish in the Bay. This is because they are a food source for many commercially important species such as striped bass. They also help control algal blooms in the Bay because they eat phytoplankton. Decreases in menhaden populations could also leave striped bass vulnerable to disease. In the past 20 years, the number of juvenile menhaden produced in the Chesapeake Bay has been decreasing, as reflected in bay-wide mean catch per haul. This is believed to be due to the overfishing of menhaden for their fish oil and could seriously disrupt the food chain. In response, the Atlantic States Marine Fisheries Commission (ASMFC) put a cap on the Atlantic menhaden harvest in October 2020. This 10% cut to the harvest was the first coast-wide reduction ever imposed on menhaden. It was also the first vote to consider benchmarks known as "ecological reference points", which allow managers to account for a species' role in the food chain when setting catch limits. This differs from the "single-species stock assessments" previously used, which accounted only for demand from the fishing industry rather than demand from the food web.
This cut to the harvest established a quota of 194,400 metric tons of menhaden for the 2021–2022 fishing season. It is the hope that this cut will allow menhaden to fulfill their role in the ecosystem while keeping the commercial fishery alive. Cultural significance After menhaden had been identified as a valuable alternative to whale oil in the 1870s, the menhaden fishery on the Chesapeake was worked by predominantly African-American crews on open boats hauling purse seines. The men employed sea chanties to help synchronize the hauling of the nets. These chanties pulled from West African, blues, and gospel sources and created a uniquely African American culture of chanty singing. By the late 1950s, hydraulic winches replaced the large crews of manual haulers, and the menhaden chanty tradition declined.
Biology and health sciences
Clupeiformes
Animals
44284
https://en.wikipedia.org/wiki/Non-coding%20DNA
Non-coding DNA
Non-coding DNA (ncDNA) sequences are components of an organism's DNA that do not encode protein sequences. Some non-coding DNA is transcribed into functional non-coding RNA molecules (e.g. transfer RNA, microRNA, piRNA, ribosomal RNA, and regulatory RNAs). Other functional regions of the non-coding DNA fraction include regulatory sequences that control gene expression; scaffold attachment regions; origins of DNA replication; centromeres; and telomeres. Some non-coding regions appear to be mostly nonfunctional, such as introns, pseudogenes, intergenic DNA, and fragments of transposons and viruses. Regions that are completely nonfunctional are called junk DNA. Fraction of non-coding genomic DNA In bacteria, the coding regions typically take up 88% of the genome. The remaining 12% does not encode proteins, but much of it still has biological function through genes where the RNA transcript is functional (non-coding genes) and regulatory sequences, which means that almost all of the bacterial genome has a function. The amount of coding DNA in eukaryotes is usually a much smaller fraction of the genome because eukaryotic genomes contain large amounts of repetitive DNA not found in prokaryotes. The human genome contains somewhere between 1–2% coding DNA. The exact number is not known because there are disputes over the number of functional coding exons and over the total size of the human genome. This means that 98–99% of the human genome consists of non-coding DNA and this includes many functional elements such as non-coding genes and regulatory sequences. Genome size in eukaryotes can vary over a wide range, even between closely related species. This puzzling observation was originally known as the C-value Paradox where "C" refers to the haploid genome size. The paradox was resolved with the discovery that most of the differences were due to the expansion and contraction of repetitive DNA and not the number of genes. Some researchers speculated that this repetitive DNA was mostly junk DNA. The reasons for the changes in genome size are still being worked out and this problem is called the C-value Enigma. This led to the observation that the number of genes does not seem to correlate with perceived notions of complexity because the number of genes seems to be relatively constant, an issue termed the G-value Paradox. For example, the genome of the unicellular Polychaos dubium (formerly known as Amoeba dubia) has been reported to contain more than 200 times the amount of DNA in humans (i.e. more than 600 billion pairs of bases vs a bit more than 3 billion in humans). The pufferfish Takifugu rubripes genome is only about one eighth the size of the human genome, yet seems to have a comparable number of genes. Genes take up about 30% of the pufferfish genome and the coding DNA is about 10%. (Non-coding DNA = 90%.) The reduced size of the pufferfish genome is due to a reduction in the length of introns and less repetitive DNA. Utricularia gibba, a bladderwort plant, has a very small nuclear genome (100.7 Mb) compared to most plants. It likely evolved from an ancestral genome that was 1,500 Mb in size. The bladderwort genome has roughly the same number of genes as other plants but the total amount of coding DNA comes to about 30% of the genome. The remainder of the genome (70% non-coding DNA) consists of promoters and regulatory sequences that are shorter than those in other plant species. 
The genes contain introns but there are fewer of them and they are smaller than the introns in other plant genomes. There are noncoding genes, including many copies of ribosomal RNA genes. The genome also contains telomere sequences and centromeres as expected. Much of the repetitive DNA seen in other eukaryotes has been deleted from the bladderwort genome since that lineage split from those of other plants. About 59% of the bladderwort genome consists of transposon-related sequences but since the genome is so much smaller than other genomes, this represents a considerable reduction in the amount of this DNA. The authors of the original 2013 article note that claims of additional functional elements in the non-coding DNA of animals do not seem to apply to plant genomes. According to a New York Times article, during the evolution of this species, "... genetic junk that didn't serve a purpose was expunged, and the necessary stuff was kept." According to Victor Albert of the University of Buffalo, the plant is able to expunge its so-called junk DNA and "have a perfectly good multicellular plant with lots of different cells, organs, tissue types and flowers, and you can do it without the junk. Junk is not needed." Types of non-coding DNA sequences Noncoding genes There are two types of genes: protein coding genes and noncoding genes. Noncoding genes are an important part of non-coding DNA and they include genes for transfer RNA and ribosomal RNA. These genes were discovered in the 1960s. Prokaryotic genomes contain genes for a number of other noncoding RNAs but noncoding RNA genes are much more common in eukaryotes. Typical classes of noncoding genes in eukaryotes include genes for small nuclear RNAs (snRNAs), small nucleolar RNAs (sno RNAs), microRNAs (miRNAs), short interfering RNAs (siRNAs), PIWI-interacting RNAs (piRNAs), and long noncoding RNAs (lncRNAs). In addition, there are a number of unique RNA genes that produce catalytic RNAs. Noncoding genes account for only a few percent of prokaryotic genomes but they can represent a vastly higher fraction in eukaryotic genomes. In humans, the noncoding genes take up at least 6% of the genome, largely because there are hundreds of copies of ribosomal RNA genes. Protein-coding genes occupy about 38% of the genome; a fraction that is much higher than the coding region because genes contain large introns. The total number of noncoding genes in the human genome is controversial. Some scientists think that there are only about 5,000 noncoding genes while others believe that there may be more than 100,000 (see the article on Non-coding RNA). The difference is largely due to debate over the number of lncRNA genes. Promoters and regulatory elements Promoters are DNA segments near the 5' end of the gene where transcription begins. They are the sites where RNA polymerase binds to initiate RNA synthesis. Every gene has a noncoding promoter. Regulatory elements are sites that control the transcription of a nearby gene. They are almost always sequences where transcription factors bind to DNA and these transcription factors can either activate transcription (activators) or repress transcription (repressors). Regulatory elements were discovered in the 1960s and their general characteristics were worked out in the 1970s by studying specific transcription factors in bacteria and bacteriophage. 
Promoters and regulatory sequences represent an abundant class of noncoding DNA but they mostly consist of a collection of relatively short sequences so they do not take up a very large fraction of the genome. The exact amount of regulatory DNA in mammalian genome is unclear because it is difficult to distinguish between spurious transcription factor binding sites and those that are functional. The binding characteristics of typical DNA-binding proteins were characterized in the 1970s and the biochemical properties of transcription factors predict that in cells with large genomes, the majority of binding sites will not be biologically functional. Many regulatory sequences occur near promoters, usually upstream of the transcription start site of the gene. Some occur within a gene and a few are located downstream of the transcription termination site. In eukaryotes, there are some regulatory sequences that are located at a considerable distance from the promoter region. These distant regulatory sequences are often called enhancers but there is no rigorous definition of enhancer that distinguishes it from other transcription factor binding sites. Introns Introns are the parts of a gene that are transcribed into the precursor RNA sequence, but ultimately removed by RNA splicing during the processing to mature RNA. Introns are found in both types of genes: protein-coding genes and noncoding genes. They are present in prokaryotes but they are much more common in eukaryotic genomes. Group I and group II introns take up only a small percentage of the genome when they are present. Spliceosomal introns (see Figure) are only found in eukaryotes and they can represent a substantial proportion of the genome. In humans, for example, introns in protein-coding genes cover 37% of the genome. Combining that with about 1% coding sequences means that protein-coding genes occupy about 38% of the human genome. The calculations for noncoding genes are more complicated because there is considerable dispute over the total number of noncoding genes but taking only the well-defined examples means that noncoding genes occupy at least 6% of the genome. Untranslated regions The standard biochemistry and molecular biology textbooks describe non-coding nucleotides in mRNA located between the 5' end of the gene and the translation initiation codon. These regions are called 5'-untranslated regions or 5'-UTRs. Similar regions called 3'-untranslated regions (3'-UTRs) are found at the end of the gene. The 5'-UTRs and 3'UTRs are very short in bacteria but they can be several hundred nucleotides in length in eukaryotes. They contain short elements that control the initiation of translation (5'-UTRs) and transcription termination (3'-UTRs) as well as regulatory elements that may control mRNA stability, processing, and targeting to different regions of the cell. Origins of replication DNA synthesis begins at specific sites called origins of replication. These are regions of the genome where the DNA replication machinery is assembled and the DNA is unwound to begin DNA synthesis. In most cases, replication proceeds in both directions from the replication origin. The main features of replication origins are sequences where specific initiation proteins are bound. A typical replication origin covers about 100-200 base pairs of DNA. Prokaryotes have one origin of replication per chromosome or plasmid but there are usually multiple origins in eukaryotic chromosomes. 
The human genome contains about 100,000 origins of replication representing about 0.3% of the genome. Centromeres Centromeres are the sites where spindle fibers attach to newly replicated chromosomes in order to segregate them into daughter cells when the cell divides. Each eukaryotic chromosome has a single functional centromere that is seen as a constricted region in a condensed metaphase chromosome. Centromeric DNA consists of a number of repetitive DNA sequences that often take up a significant fraction of the genome because each centromere can be millions of base pairs in length. In humans, for example, the sequences of all 24 centromeres have been determined and they account for about 6% of the genome. However, it is unlikely that all of this noncoding DNA is essential since there is considerable variation in the total amount of centromeric DNA in different individuals. Centromeres are another example of functional noncoding DNA sequences that have been known for almost half a century and it is likely that they are more abundant than coding DNA. Telomeres Telomeres are regions of repetitive DNA at the end of a chromosome, which provide protection from chromosomal deterioration during DNA replication. Recent studies have shown that telomeres function to aid in their own stability. Telomeric repeat-containing RNAs (TERRA) are transcripts derived from telomeres. TERRA has been shown to maintain telomerase activity and lengthen the ends of chromosomes. Scaffold attachment regions Both prokaryotic and eukaryotic genomes are organized into large loops of protein-bound DNA. In eukaryotes, the bases of the loops are called scaffold attachment regions (SARs) and they consist of stretches of DNA that bind an RNA/protein complex to stabilize the loop. There are about 100,000 loops in the human genome and each SAR consists of about 100 bp of DNA, so the total amount of DNA devoted to SARs accounts for about 0.3% of the human genome. Pseudogenes Pseudogenes are mostly former genes that have become non-functional due to mutation, but the term also refers to inactive DNA sequences that are derived from RNAs produced by functional genes (processed pseudogenes). Pseudogenes are only a small fraction of noncoding DNA in prokaryotic genomes because they are eliminated by negative selection. In some eukaryotes, however, pseudogenes can accumulate because selection is not powerful enough to eliminate them (see Nearly neutral theory of molecular evolution). The human genome contains about 15,000 pseudogenes derived from protein-coding genes and an unknown number derived from noncoding genes. They may cover a substantial fraction of the genome (~5%) since many of them contain former intron sequences. Pseudogenes are junk DNA by definition and they evolve at the neutral rate as expected for junk DNA. Some former pseudogenes have secondarily acquired a function and this leads some scientists to speculate that most pseudogenes are not junk because they have a yet-to-be-discovered function. Repeat sequences, transposons and viral elements Transposons and retrotransposons are mobile genetic elements. Retrotransposon repeated sequences, which include long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs), account for a large proportion of the genomic sequences in many species. Alu sequences, classified as a short interspersed nuclear element, are the most abundant mobile elements in the human genome.
Some examples have been found of SINEs exerting transcriptional control of some protein-encoding genes. Endogenous retrovirus sequences are the product of reverse transcription of retrovirus genomes into the genomes of germ cells. Mutation within these retro-transcribed sequences can inactivate the viral genome. Over 8% of the human genome is made up of (mostly decayed) endogenous retrovirus sequences, as part of the over 42% fraction that is recognizably derived of retrotransposons, while another 3% can be identified to be the remains of DNA transposons. Much of the remaining half of the genome that is currently without an explained origin is expected to have found its origin in transposable elements that were active so long ago (> 200 million years) that random mutations have rendered them unrecognizable. Genome size variation in at least two kinds of plants is mostly the result of retrotransposon sequences. Highly repetitive DNA Highly repetitive DNA consists of short stretches of DNA that are repeated many times in tandem (one after the other). The repeat segments are usually between 2 bp and 10 bp but longer ones are known. Highly repetitive DNA is rare in prokaryotes but common in eukaryotes, especially those with large genomes. It is sometimes called satellite DNA. Most of the highly repetitive DNA is found in centromeres and telomeres (see above) and most of it is functional although some might be redundant. The other significant fraction resides in short tandem repeats (STRs; also called microsatellites) consisting of short stretches of a simple repeat such as ATC. There are about 350,000 STRs in the human genome and they are scattered throughout the genome with an average length of about 25 repeats. Variations in the number of STR repeats can cause genetic diseases when they lie within a gene but most of these regions appear to be non-functional junk DNA where the number of repeats can vary considerably from individual to individual. This is why these length differences are used extensively in DNA fingerprinting. Junk DNA Junk DNA is DNA that has no biologically relevant function such as pseudogenes and fragments of once active transposons. Bacteria and viral genomes have very little junk DNA but some eukaryotic genomes may have a substantial amount of junk DNA. The exact amount of nonfunctional DNA in humans and other species with large genomes has not been determined and there is considerable controversy in the scientific literature. The nonfunctional DNA in bacterial genomes is mostly located in the intergenic fraction of non-coding DNA but in eukaryotic genomes it may also be found within introns. There are many examples of functional DNA elements in non-coding DNA, and it is erroneous to equate non-coding DNA with junk DNA. Genome-wide association studies (GWAS) and non-coding DNA Genome-wide association studies (GWAS) identify linkages between alleles and observable traits such as phenotypes and diseases. Most of the associations are between single-nucleotide polymorphisms (SNPs) and the trait being examined and most of these SNPs are located in non-functional DNA. The association establishes a linkage that helps map the DNA region responsible for the trait but it does not necessarily identify the mutations causing the disease or phenotypic difference. SNPs that are tightly linked to traits are the ones most likely to identify a causal mutation. (The association is referred to as tight linkage disequilibrium.) 
About 12% of these polymorphisms are found in coding regions; about 40% are located in introns; and most of the rest are found in intergenic regions, including regulatory sequences.
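Several of the genome fractions quoted above follow from simple bookkeeping against a haploid human genome of roughly 3.1 billion base pairs. The short sketch below reproduces that arithmetic using the round numbers given in the text; the 100 bp sizes assumed for replication origins and SARs are approximations taken from the figures quoted earlier.

```python
GENOME_BP = 3.1e9  # approximate haploid human genome size (assumption)

def percent_of_genome(bp):
    return 100.0 * bp / GENOME_BP

# Protein-coding genes: ~1% coding sequence plus ~37% intron sequence.
print(f"protein-coding genes: ~{1 + 37}% of the genome")

# ~100,000 replication origins at roughly 100 bp each.
print(f"replication origins: ~{percent_of_genome(100_000 * 100):.1f}% of the genome")  # ~0.3%

# ~100,000 chromatin loops, each anchored by a ~100 bp scaffold attachment region (SAR).
print(f"SARs: ~{percent_of_genome(100_000 * 100):.1f}% of the genome")                 # ~0.3%
```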
Biology and health sciences
Molecular biology
Biology
44290
https://en.wikipedia.org/wiki/DNA%20profiling
DNA profiling
DNA profiling (also called DNA fingerprinting and genetic fingerprinting) is the process of determining an individual's deoxyribonucleic acid (DNA) characteristics. DNA analysis intended to identify a species, rather than an individual, is called DNA barcoding. DNA profiling is a forensic technique in criminal investigations, comparing criminal suspects' profiles to DNA evidence so as to assess the likelihood of their involvement in the crime. It is also used in paternity testing, to establish immigration eligibility, and in genealogical and medical research. DNA profiling has also been used in the study of animal and plant populations in the fields of zoology, botany, and agriculture. Background Starting in the mid 1970s, scientific advances allowed the use of DNA as a material for the identification of an individual. The first patent covering the direct use of DNA variation for forensics (US5593832A) was filed by Jeffrey Glassberg in 1983, based upon work he had done while at Rockefeller University in the United States in 1981. British geneticist Sir Alec Jeffreys independently developed a process for DNA profiling in 1984 while working in the Department of Genetics at the University of Leicester. Jeffreys discovered that a DNA examiner could establish patterns in unknown DNA. These patterns were a part of inherited traits that could be used to advance the field of relationship analysis. These discoveries led to the first use of DNA profiling in a criminal case. The process, developed by Jeffreys in conjunction with Peter Gill and Dave Werrett of the Forensic Science Service (FSS), was first used forensically in the solving of the murder of two teenagers who had been raped and murdered in Narborough, Leicestershire in 1983 and 1986. In the murder inquiry, led by Detective David Baker, the DNA contained within blood samples obtained voluntarily from around 5,000 local men who willingly assisted Leicestershire Constabulary with the investigation, resulted in the exoneration of Richard Buckland, an initial suspect who had confessed to one of the crimes, and the subsequent conviction of Colin Pitchfork on January 2, 1988. Pitchfork, a local bakery employee, had coerced his coworker Ian Kelly to stand in for him when providing a blood sample—Kelly then used a forged passport to impersonate Pitchfork. Another coworker reported the deception to the police. Pitchfork was arrested, and his blood was sent to Jeffreys' lab for processing and profile development. Pitchfork's profile matched that of DNA left by the murderer which confirmed Pitchfork's presence at both crime scenes; he pleaded guilty to both murders. After some years, a chemical company named Imperial Chemical Industries (ICI) introduced the first ever commercially available kit to the world. Despite being a relatively recent field, it had a significant global influence on both criminal justice system and society. Although 99.9% of human DNA sequences are the same in every person, enough of the DNA is different that it is possible to distinguish one individual from another, unless they are monozygotic (identical) twins. DNA profiling uses repetitive sequences that are highly variable, called variable number tandem repeats (VNTRs), in particular short tandem repeats (STRs), also known as microsatellites, and minisatellites. VNTR loci are similar between closely related individuals, but are so variable that unrelated individuals are unlikely to have the same VNTRs. 
Before VNTRs and STRs, people like Jeffreys used a process called restriction fragment length polymorphism (RFLP). This process regularly used large portions of DNA to analyze the differences between two DNA samples. RFLP was among the first technologies used in DNA profiling and analysis. However, as technology has evolved, new technologies, like STR, emerged and took the place of older technology like RFLP. The admissibility of DNA evidence in courts was disputed in the United States in the 1980s and 1990s, but has since become more universally accepted due to improved techniques. Profiling processes DNA extraction When a sample such as blood or saliva is obtained, the DNA is only a small part of what is present in the sample. Before the DNA can be analyzed, it must be extracted from the cells and purified. There are many ways this can be accomplished, but all methods follow the same basic procedure. The cell and nuclear membranes need to be broken up to allow the DNA to be free in solution. Once the DNA is free, it can be separated from all other cellular components. After the DNA has been separated in solution, the remaining cellular debris can then be removed from the solution and discarded, leaving only DNA. The most common methods of DNA extraction include organic extraction (also called phenol–chloroform extraction), Chelex extraction, and solid-phase extraction. Differential extraction is a modified version of extraction in which DNA from two different types of cells can be separated from each other before being purified from the solution. Each method of extraction works well in the laboratory, but analysts typically select their preferred method based on factors such as the cost, the time involved, the quantity of DNA yielded, and the quality of DNA yielded. RFLP analysis RFLP stands for restriction fragment length polymorphism and, in terms of DNA analysis, describes a DNA testing method which utilizes restriction enzymes to "cut" the DNA at short and specific sequences throughout the sample. To start off processing in the laboratory, the sample has to first go through an extraction protocol, which may vary depending on the sample type or laboratory SOPs (Standard Operating Procedures). Once the DNA has been "extracted" from the cells within the sample and separated away from extraneous cellular materials and any nucleases that would degrade the DNA, the sample can then be introduced to the desired restriction enzymes to be cut up into discernable fragments. Following the enzyme digestion, a Southern Blot is performed. Southern Blots are a size-based separation method that are performed on a gel with either radioactive or chemiluminescent probes. RFLP could be conducted with single-locus or multi-locus probes (probes which target either one location on the DNA or multiple locations on the DNA). Incorporating the multi-locus probes allowed for higher discrimination power for the analysis, however completion of this process could take several days to a week for one sample due to the extreme amount of time required by each step required for visualization of the probes. Polymerase chain reaction (PCR) analysis This technique was developed in 1983 by Kary Mullis. PCR is now a common and important technique used in medical and biological research labs for a variety of applications. PCR, or Polymerase Chain Reaction, is a widely used molecular biology technique to amplify a specific DNA sequence. 
Amplification is achieved by a series of three steps: (1) Denaturation: the DNA is heated to 95 °C to dissociate the hydrogen bonds between the complementary base pairs of the double-stranded DNA. (2) Annealing: the reaction is cooled to 50–65 °C, which enables the primers to attach to specific locations on the single-stranded template DNA by way of hydrogen bonding. (3) Extension: a thermostable DNA polymerase, commonly Taq polymerase, synthesizes the complementary strand of the DNA template at 72 °C, adding nucleotides in the 5'-3' direction. STR analysis The system of DNA profiling used today is based on polymerase chain reaction (PCR) and uses simple sequences. From country to country, different STR-based DNA-profiling systems are in use. In North America, systems that amplify the CODIS 20 core loci are almost universal, whereas in the United Kingdom the DNA-17 loci system is in use, and Australia uses 18 core markers. The true power of STR analysis is in its statistical power of discrimination. Because the 20 loci that are currently used for discrimination in CODIS are independently assorted (having a certain number of repeats at one locus does not change the likelihood of having any number of repeats at any other locus), the product rule for probabilities can be applied. This means that, if someone has the DNA type of ABC, where the three loci are independent, then the probability of that individual having that DNA type is the probability of having type A times the probability of having type B times the probability of having type C. This has resulted in the ability to generate match probabilities of 1 in a quintillion (1×10^18) or more. However, DNA database searches have shown false DNA profile matches to be much more frequent than expected. Y-chromosome analysis Because of their paternal inheritance, Y-haplotypes provide information about the genetic ancestry of the male population. To investigate this population history, and to provide estimates for haplotype frequencies in criminal casework, the Y haplotype reference database (YHRD) was created in 2000 as an online resource. It currently comprises more than 300,000 minimal (8-locus) haplotypes from worldwide populations. Mitochondrial analysis mtDNA can be obtained from material such as hair shafts and old bones or teeth. Issues with forensic DNA samples When people think of DNA analysis, they often think about television shows like NCIS or CSI, which portray DNA samples coming into a lab and being instantly analyzed, followed by the pulling up of a picture of the suspect within minutes. However, the reality is quite different, and perfect DNA samples are often not collected from the scene of a crime. Homicide victims are frequently left exposed to harsh conditions before they are found, and objects that are used to commit crimes have often been handled by more than one person. The two most prevalent issues that forensic scientists encounter when analyzing DNA samples are degraded samples and DNA mixtures. Degraded DNA Before modern PCR methods existed, it was almost impossible to analyze degraded DNA samples. Methods like restriction fragment length polymorphism (RFLP), which was the first technique used for DNA analysis in forensic science, required high molecular weight DNA in the sample in order to get reliable data.
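The product rule described above can be illustrated with a short calculation. This is only a toy sketch: the loci and genotype frequencies below are invented for the example and are not taken from CODIS or any population database.

```python
from functools import reduce

# Hypothetical genotype frequencies for one profile at three independent loci
# (invented numbers; real frequencies come from population databases).
genotype_frequencies = {
    "locus_1": 0.08,
    "locus_2": 0.11,
    "locus_3": 0.05,
}

def random_match_probability(per_locus_frequencies):
    """Product rule: multiply per-locus genotype frequencies,
    assuming the loci assort independently."""
    return reduce(lambda acc, f: acc * f, per_locus_frequencies, 1.0)

p = random_match_probability(genotype_frequencies.values())
print(f"profile frequency: {p:.2e}")           # 4.40e-04 for these toy numbers
print(f"about 1 in {1 / p:,.0f} individuals")  # ~1 in 2,273

# With ~20 independent loci each contributing a frequency of roughly 0.1 or less,
# the product reaches the 1-in-10^18 scale mentioned above (0.1 ** 20 == 1e-20).
```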
Degraded samples lack such high molecular weight DNA, as the DNA is too fragmented to carry out RFLP accurately. It was only when polymerase chain reaction techniques were invented that analysis of degraded DNA samples could be carried out. Multiplex PCR in particular made it possible to isolate and amplify the small fragments of DNA that are still left in degraded samples. When multiplex PCR methods are compared to older methods like RFLP, a vast difference can be seen: multiplex PCR can theoretically amplify less than 1 ng of DNA, while RFLP required at least 100 ng of DNA to carry out an analysis. Low-template DNA Low-template conditions arise when there is less than about 0.1 ng of DNA in a sample. This can lead to more stochastic effects (random events), such as allelic dropout or allelic drop-in, which can alter the interpretation of a DNA profile. These stochastic effects can lead to unequal amplification of the two alleles from a heterozygous individual. It is especially important to take low-template DNA into account when dealing with a DNA mixture, because one or more of the contributors is likely to have contributed less than the optimal amount of DNA for the PCR reaction to work properly. Therefore, stochastic thresholds are developed for DNA profile interpretation. The stochastic threshold is the minimum peak height (RFU value) in an electropherogram above which allelic dropout is not expected. If the peak height is above this threshold, it is reasonable to assume that allelic dropout has not occurred. For example, if only one peak is seen for a particular locus in the electropherogram but its peak height is above the stochastic threshold, then it is reasonable to conclude that this individual is homozygous at that locus rather than missing a heterozygous partner allele that dropped out because of low-template DNA. Allelic dropout can occur with low-template DNA because there is so little DNA to begin with that a contributor who is a true heterozygote at a locus may have only one of the two alleles amplified, and the other allele is lost. Allelic drop-in can also occur with low-template DNA because stutter peaks can sometimes be amplified. Stutter is an artifact of PCR: DNA polymerase extends from the primer in a dynamic process of repeated binding, dissociation, and rebinding, and it sometimes rebinds one short tandem repeat ahead of its previous position, producing a product that is one repeat shorter than the template. If this stutter product is itself amplified into many copies during PCR, it appears as an extra peak in the electropherogram, leading to allelic drop-in. MiniSTR analysis When DNA samples are highly degraded, such as after intense fires or when all that remains are bone fragments, standard STR testing can be inadequate. When standard STR testing is done on highly degraded samples, the larger STR loci often drop out, and only partial DNA profiles are obtained. Partial DNA profiles can be a powerful tool, but the probability of a random match is larger than if a full profile were obtained. One method that has been developed to analyse degraded DNA samples is miniSTR technology.
In this approach, primers are specially designed to bind closer to the STR region. In normal STR testing, the primers bind to longer sequences that contain the STR region within the segment. MiniSTR analysis, however, targets only the STR location, which results in a much smaller DNA product. By placing the primers closer to the actual STR regions, there is a higher chance of successful amplification, and more complete DNA profiles can be obtained. The observation that smaller PCR products give a higher success rate with highly degraded samples was first reported in 1995, when miniSTR technology was used to identify victims of the Waco fire. DNA mixtures Mixtures are another common issue faced by forensic scientists when they are analyzing unknown or questionable DNA samples. A mixture is defined as a DNA sample that contains two or more individual contributors. That can often occur when a DNA sample is swabbed from an item that is handled by more than one person or when a sample contains both the victim's and the assailant's DNA. The presence of more than one individual in a DNA sample can make it challenging to detect individual profiles, and interpretation of mixtures should be performed only by highly trained individuals. Mixtures that contain two or three individuals can be interpreted, though with difficulty. Mixtures that contain four or more individuals are much too convoluted to yield individual profiles. One common scenario in which a mixture is often obtained is in the case of sexual assault. A sample may be collected that contains material from the victim, the victim's consensual sexual partners, and the perpetrator(s). Mixtures can generally be sorted into three categories: Type A, Type B, and Type C. Type A mixtures have alleles with similar peak heights all around, so the contributors cannot be distinguished from each other. Type B mixtures can be deconvoluted by comparing peak-height ratios to determine which alleles were donated together. Type C mixtures cannot be safely interpreted with current technology because the samples were affected by DNA degradation or contain too small a quantity of DNA. When looking at an electropherogram, it is possible to determine the number of contributors in less complex mixtures based on the number of peaks located at each locus. In comparison to a single-source profile, which will only have one or two peaks at each locus, a mixture is indicated when there are three or more peaks at two or more loci. If there are three peaks at only a single locus, then it is possible to have a single contributor who is tri-allelic at that locus. Two-person mixtures will have between two and four peaks at each locus, and three-person mixtures will have between three and six peaks at each locus. Mixtures become increasingly difficult to deconvolute as the number of contributors increases. As detection methods in DNA profiling advance, forensic scientists are seeing more DNA samples that contain mixtures, as even the smallest contributor can now be detected by modern tests. The ease with which forensic scientists can interpret DNA mixtures largely depends on the ratio of DNA present from each individual, the genotype combinations, and the total amount of DNA amplified. The DNA ratio is often the most important aspect in determining whether a mixture can be interpreted.
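The peak-counting reasoning above, in which each contributor can add at most two alleles per locus, gives a simple lower bound on the number of contributors. The sketch below is illustrative only: the locus names and peak counts are hypothetical, and, as noted above, a rare tri-allelic contributor can make a single-source sample look like a two-person mixture.

```python
import math

def minimum_contributors(peaks_per_locus):
    """Lower bound on the number of contributors to a mixture:
    each contributor shows at most two peaks (alleles) per locus,
    so the locus with the most peaks sets the minimum."""
    return math.ceil(max(peaks_per_locus.values()) / 2)

# Hypothetical peak counts observed at three STR loci in an electropherogram.
observed_peaks = {"locus_A": 4, "locus_B": 3, "locus_C": 2}

print(minimum_contributors(observed_peaks))  # 2 -> at least a two-person mixture
```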
For example, if a DNA sample has two contributors, it is relatively easy to interpret the individual profiles if the ratio of DNA contributed by one person is much higher than that of the second person. When a sample has three or more contributors, it becomes extremely difficult to determine individual profiles. Fortunately, advances in probabilistic genotyping may make that sort of determination possible in more cases in the future. Probabilistic genotyping uses computer software to run through thousands of mathematical computations and produce statistical likelihoods of the individual genotypes found in a mixture. DNA profiling in plants Plant DNA profiling (fingerprinting) is a method for identifying cultivars that uses molecular marker techniques. The method is gaining attention because of Trade-Related Intellectual Property Rights (TRIPS) and the Convention on Biological Diversity (CBD). Advantages of plant DNA profiling: identification, authentication, specific distinction, detecting adulteration and identifying phytoconstituents are all possible with DNA fingerprinting in medicinal plants. DNA-based markers are critical for these applications and are shaping the future of scientific study in pharmacognosy. DNA fingerprinting also helps in determining whether traits such as seed size and leaf color are likely to improve the offspring. DNA databases An early application of a DNA database was the compilation of a Mitochondrial DNA Concordance, prepared by Kevin W. P. Miller and John L. Dawson at the University of Cambridge from 1996 to 1999 from data collected as part of Miller's PhD thesis. There are now several DNA databases in existence around the world. Some are private, but most of the largest databases are government-controlled. The United States maintains the largest DNA database, with the Combined DNA Index System (CODIS) holding over 13 million records as of May 2018. The United Kingdom maintains the National DNA Database (NDNAD), which is of similar size despite the UK's smaller population. The size of this database, and its rate of growth, are giving concern to civil liberties groups in the UK, where police have wide-ranging powers to take samples and retain them even in the event of acquittal. The Conservative–Liberal Democrat coalition partially addressed these concerns with part 1 of the Protection of Freedoms Act 2012, under which DNA samples must be deleted if suspects are acquitted or not charged, except in relation to certain (mostly serious or sexual) offenses. Public discourse around the introduction of advanced forensic techniques (such as genetic genealogy using public genealogy databases and DNA phenotyping approaches) has been limited, disjointed and unfocused, and raises issues of privacy and consent that may warrant additional legal protections. The USA PATRIOT Act provides a means for the U.S. government to obtain DNA samples from suspected terrorists. DNA information from crimes is collected and deposited into the CODIS database, which is maintained by the FBI. CODIS enables law enforcement officials to test DNA samples from crimes for matches within the database, providing a means of finding specific biological profiles associated with collected DNA evidence. When a match made through a national DNA database links a crime scene to an offender who has provided a DNA sample to the database, that link is often referred to as a cold hit.
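Conceptually, a cold-hit search compares the evidence profile with every stored profile, locus by locus. The Python sketch below illustrates the idea with invented profiles and a simplified matching rule; it does not reflect CODIS's actual data structures, loci or search algorithms.

```python
# Toy illustration of a "cold hit" style search: the evidence profile is
# compared against each stored profile, and a hit requires the same genotype
# at every locus typed in the evidence sample. Purely illustrative; real
# database searches use many more loci and carefully defined match criteria.

def same_genotype(a, b):
    """Genotypes are unordered allele pairs, so compare them as sets."""
    return set(a) == set(b)

evidence = {"D8S1179": (12, 14), "D21S11": (29, 30), "TH01": (6, 9.3)}

database = {
    "offender_001": {"D8S1179": (12, 14), "D21S11": (29, 30), "TH01": (6, 9.3)},
    "offender_002": {"D8S1179": (10, 13), "D21S11": (28, 31), "TH01": (7, 8)},
}

for person, profile in database.items():
    if all(locus in profile and same_genotype(genotype, profile[locus])
           for locus, genotype in evidence.items()):
        print(f"Cold hit candidate: {person}")
```

In practice, the stringency of the search (exact, moderate or low) and the number of loci required for a reportable hit are set by database policy rather than by a simple rule like the one above.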
A cold hit is of value in pointing the police agency towards a specific suspect, but it is of less evidential value than a DNA match made independently of the DNA database. FBI agents cannot legally store the DNA of a person who has not been convicted of a crime; DNA collected from a suspect who is not later convicted must be disposed of and not entered into the database. In 1998, a man residing in the UK was arrested on an accusation of burglary. His DNA was taken and tested, and he was later released. Nine months later, this man's DNA was accidentally and illegally entered into the DNA database. New DNA is automatically compared to DNA found at cold-case scenes and, in this case, the man was found to match DNA found at a rape and assault case a year earlier. The government then prosecuted him for these crimes. During the trial, the defence requested that the DNA match be removed from the evidence because it had been illegally entered into the database, and the request was granted. The DNA of a perpetrator, collected from victims of rape, can be stored for years until a match is found. In 2014, to address this problem, Congress extended a bill that helps states deal with a "backlog" of evidence. DNA profiling databases in plants PIDS (Plant International DNA-fingerprinting System) is an open-source, web-based plant DNA fingerprinting system built on free software. It manages large amounts of microsatellite DNA fingerprint data, supports genetic studies, and automates collection, storage and maintenance while decreasing human error and increasing efficiency. The system can be tailored to specific laboratory needs, making it a useful tool for plant breeders, forensic science, and human fingerprint recognition. It keeps track of experiments, standardizes data and promotes inter-database communication. It also supports the regulation of variety quality, the protection of variety rights and the use of molecular markers in breeding by providing locus statistics, merging, comparison and genetic analysis functions. Considerations in evaluating DNA evidence When using RFLP, the theoretical risk of a coincidental match is 1 in 100 billion (100,000,000,000), although the practical risk is closer to 1 in 1,000 because monozygotic twins make up about 0.2% of the human population. Moreover, the rate of laboratory error is almost certainly higher than that, and actual laboratory procedures often do not reflect the theory under which the coincidence probabilities were computed. For example, coincidence probabilities may be calculated on the basis that markers in two samples have bands in precisely the same location, but a laboratory worker may conclude that similar but not precisely identical band patterns result from identical genetic samples with some imperfection in the agarose gel. In that case, the laboratory worker increases the coincidence risk by expanding the criteria for declaring a match. Studies conducted in the 2000s quoted relatively high error rates, which may be cause for concern. In the early days of genetic fingerprinting, the population data necessary to compute a match probability accurately were sometimes unavailable. Between 1992 and 1996, arbitrarily low ceilings were controversially placed on match probabilities used in RFLP analysis, rather than the higher theoretically computed ones.
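As an illustration of how such a theoretical coincidence probability is computed for STR profiles, the sketch below applies the standard product rule with Hardy-Weinberg genotype frequencies (2pq for heterozygotes, p² for homozygotes). The allele frequencies are invented for the example; casework uses population-specific frequency databases and statistical corrections such as subpopulation adjustments.

```python
# Sketch of a random match probability (RMP) calculation using the product
# rule. Allele frequencies below are illustrative, not real population data.

profile = {
    # locus: (genotype, {allele: frequency in the reference population})
    "D8S1179": ((12, 14), {12: 0.14, 14: 0.21}),
    "D21S11":  ((29, 29), {29: 0.20}),
    "TH01":    ((6, 9.3), {6: 0.23, 9.3: 0.30}),
}

rmp = 1.0
for locus, (genotype, freqs) in profile.items():
    a, b = genotype
    if a == b:
        locus_prob = freqs[a] ** 2            # homozygote: p^2
    else:
        locus_prob = 2 * freqs[a] * freqs[b]  # heterozygote: 2pq
    rmp *= locus_prob
    print(f"{locus}: genotype frequency {locus_prob:.4f}")

print(f"Combined random match probability: 1 in {1 / rmp:,.0f}")
```

With the 13 to 20 core loci used in modern STR kits and realistic allele frequencies, the same arithmetic yields the extremely small match probabilities reported in casework.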
Evidence of genetic relationship It is possible to use DNA profiling as evidence of genetic relationship, although such evidence varies in strength from weak to positive. Testing that shows no relationship is absolutely certain. Further, while almost all individuals have a single and distinct set of genes, ultra-rare individuals, known as "chimeras", have at least two different sets of genes. There have been two cases in which DNA profiling falsely suggested that a mother was unrelated to her children. Fake DNA evidence In a study conducted by the life science company Nucleix and published in the journal Forensic Science International, scientists found that an in vitro synthesized sample of DNA matching any desired genetic profile can be constructed using standard molecular biology techniques, without obtaining any actual tissue from that person. DNA evidence in criminal trials Familial DNA searching Familial DNA searching (sometimes referred to as "familial DNA" or "familial DNA database searching") is the practice of creating new investigative leads in cases where DNA evidence found at the scene of a crime (the forensic profile) strongly resembles an existing DNA profile (an offender profile) in a state DNA database but is not an exact match. After all other leads have been exhausted, investigators may use specially developed software to compare the forensic profile to all profiles in a state's DNA database and generate a list of those offenders already in the database who are most likely to be a very close relative of the individual whose DNA is in the forensic profile. Familial DNA database searching was first used in an investigation leading to the conviction of Jeffrey Gafoor for the murder of Lynette White in the United Kingdom on 4 July 2003. DNA evidence was matched to Gafoor's nephew, who, at 14 years old, had not been born at the time of the murder in 1988. It was used again in 2004 to find a man who threw a brick from a motorway bridge and hit a lorry driver, killing him. DNA found on the brick matched that found at the scene of a car theft earlier in the day, but there were no good matches on the national DNA database. A wider search found a partial match to an individual; on being questioned, this man revealed he had a brother, Craig Harman, who lived very close to the original crime scene. Harman voluntarily submitted a DNA sample, and confessed when it matched the sample from the brick. As of 2011, familial DNA database searching is not conducted on a national level in the United States; states determine how and when to conduct familial searches. The first familial DNA search with a subsequent conviction in the United States was conducted in Denver, Colorado, in 2008, using software developed under the leadership of Denver District Attorney Mitch Morrissey and Denver Police Department Crime Lab Director Gregg LaBerge. California was the first state to implement a policy for familial searching under then-Attorney General Jerry Brown, who later became Governor. In his role as consultant to the Familial Search Working Group of the California Department of Justice, former Alameda County Prosecutor Rock Harmon is widely considered to have been the catalyst in the adoption of familial search technology in California. The technique was used to catch the Los Angeles serial killer known as the "Grim Sleeper" in 2010.
It was not a witness or informant that tipped off law enforcement to the identity of the "Grim Sleeper" serial killer, who had eluded police for more than two decades, but DNA from the suspect's own son. The suspect's son had been arrested and convicted of a felony weapons charge and swabbed for DNA the year before. When his DNA was entered into the database of convicted felons, detectives were alerted to a partial match to evidence found at the "Grim Sleeper" crime scenes. Lonnie David Franklin Jr., also known as the Grim Sleeper, was charged with ten counts of murder and one count of attempted murder. More recently, familial DNA led to the arrest of 21-year-old Elvis Garcia on charges of sexual assault and false imprisonment of a woman in Santa Cruz in 2008. In March 2011, Virginia Governor Bob McDonnell announced that Virginia would begin using familial DNA searches. At a press conference in Virginia on 7 March 2011, regarding the East Coast Rapist, Prince William County prosecutor Paul Ebert and Fairfax County Police Detective John Kelly said the case would have been solved years earlier if Virginia had used familial DNA searching. Aaron Thomas, the suspected East Coast Rapist, was arrested in connection with the rape of 17 women from Virginia to Rhode Island, but familial DNA was not used in the case. Critics of familial DNA database searches argue that the technique is an invasion of an individual's Fourth Amendment rights. Privacy advocates are petitioning for DNA database restrictions, arguing that the only fair way to search for possible DNA matches to relatives of offenders or arrestees would be to have a population-wide DNA database. Some scholars have pointed out that the privacy concerns surrounding familial searching are similar in some respects to those raised by other police search techniques, and most have concluded that the practice is constitutional. The Ninth Circuit Court of Appeals in United States v. Pool (vacated as moot) suggested that the practice is somewhat analogous to a witness looking at a photograph of one person and stating that it looked like the perpetrator, which leads law enforcement to show the witness photos of similar-looking individuals, one of whom is identified as the perpetrator. Critics also state that racial profiling could occur on account of familial DNA testing. In the United States, conviction rates of racial minorities are much higher than those of the overall population; it is unclear whether this is due to discrimination by police officers and the courts or to a higher rate of offence among minorities. Arrest-based databases, which are found in the majority of US states, lead to an even greater level of racial discrimination, since an arrest, as opposed to a conviction, relies much more heavily on police discretion. In one case, investigators with the Denver District Attorney's Office successfully identified a suspect in a property theft case using a familial DNA search: the suspect's blood, left at the scene of the crime, strongly resembled that of a current Colorado Department of Corrections prisoner. Partial matches Partial DNA matches are the result of moderate-stringency CODIS searches that produce a potential match sharing at least one allele at every locus. Partial matching does not involve the use of familial search software, such as that used in the United Kingdom and the United States, or additional Y-STR analysis, and therefore often misses sibling relationships.
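A minimal sketch of that moderate-stringency criterion, requiring at least one shared allele at every compared locus, is given below in Python; the profiles are invented, and real searches involve many more loci and additional rules.

```python
# Sketch: check whether two STR profiles are a "partial match" in the sense
# of sharing at least one allele at every compared locus. Profiles are
# illustrative; real searches involve more loci and additional criteria.

def shares_allele(genotype_a, genotype_b):
    return bool(set(genotype_a) & set(genotype_b))

forensic_profile = {"D8S1179": (12, 14), "D21S11": (29, 30), "FGA": (22, 24)}
offender_profile = {"D8S1179": (12, 15), "D21S11": (30, 31), "FGA": (21, 22)}

shared_loci = [locus for locus in forensic_profile
               if shares_allele(forensic_profile[locus], offender_profile[locus])]

if len(shared_loci) == len(forensic_profile):
    print("Partial match: at least one allele shared at every locus")
else:
    print(f"No partial match: alleles shared at {len(shared_loci)} of "
          f"{len(forensic_profile)} loci")
```

Because relatives share more alleles on average than unrelated individuals, such a pattern can suggest, but never prove, a family relationship, which is consistent with the point above that partial matching without Y-STR analysis or familial-search software often misses sibling relationships.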
Partial matching has been used to identify suspects in several cases in both countries and has also been used as a tool to exonerate the falsely accused. Darryl Hunt was wrongly convicted in connection with the rape and murder of a young woman in 1984 in North Carolina. Surreptitious DNA collecting Police forces may collect DNA samples without a suspect's knowledge and use them as evidence. The legality of the practice has been questioned in Australia. In the United States, where it has been accepted, courts often rule that there is no expectation of privacy, citing California v. Greenwood (1988), in which the Supreme Court held that the Fourth Amendment does not prohibit the warrantless search and seizure of garbage left for collection outside the curtilage of a home. Critics of this practice underline that the analogy ignores that "most people have no idea that they risk surrendering their genetic identity to the police by, for instance, failing to destroy a used coffee cup. Moreover, even if they do realize it, there is no way to avoid abandoning one's DNA in public." The United States Supreme Court ruled in Maryland v. King (2013) that DNA sampling of people arrested for serious crimes is constitutional. In the United Kingdom, the Human Tissue Act 2004 prohibits private individuals from covertly collecting biological samples (hair, fingernails, etc.) for DNA analysis, but exempts medical and criminal investigations from the prohibition. England and Wales Evidence from an expert who has compared DNA samples must be accompanied by evidence as to the sources of the samples and the procedures for obtaining the DNA profiles. The judge must ensure that the jury understands the significance of DNA matches and mismatches in the profiles. The judge must also ensure that the jury does not confuse the match probability (the probability that a person chosen at random has a DNA profile matching the sample from the scene) with the probability that a person with matching DNA committed the crime. In R v Doheny (1996), the Court of Appeal held that juries should weigh up conflicting and corroborative evidence using their own common sense, and not by using mathematical formulae such as Bayes' theorem, so as to avoid "confusion, misunderstanding and misjudgment". The presentation and evaluation of evidence from partial or incomplete DNA profiles was considered by Moore-Bick LJ in R v Bates. DNA testing in the United States There are state laws on DNA profiling in all 50 states of the United States. Detailed information on database laws in each state can be found at the National Conference of State Legislatures website. Development of artificial DNA In August 2009, scientists in Israel raised serious doubts concerning the use of DNA by law enforcement as the ultimate method of identification. In a paper published in the journal Forensic Science International: Genetics, the Israeli researchers demonstrated that it is possible to manufacture DNA in a laboratory, thus falsifying DNA evidence. The scientists fabricated saliva and blood samples, which originally contained DNA from a person other than the supposed donor of the blood and saliva. The researchers also showed that, using a DNA database, it is possible to take information from a profile and manufacture DNA to match it, and that this can be done without access to any actual DNA from the person whose DNA is being duplicated. The synthetic DNA oligonucleotides required for the procedure are common in molecular biology laboratories.
The New York Times quoted the lead author, Daniel Frumkin, saying, "You can just engineer a crime scene ... any biology undergraduate could perform this". Frumkin perfected a test that can differentiate real DNA samples from fake ones. The test detects epigenetic modifications, in particular DNA methylation. Seventy percent of the DNA in any human genome is methylated, meaning it contains methyl group modifications within a CpG dinucleotide context. Methylation at the promoter region is associated with gene silencing. Synthetic DNA lacks this epigenetic modification, which allows the test to distinguish manufactured DNA from genuine DNA. It is unknown how many police departments, if any, currently use the test; no police lab has publicly announced that it is using the new test to verify DNA results. Researchers at the University of Tokyo have integrated, for the first time, an artificial DNA replication scheme with a reconstituted gene expression system and micro-compartmentalization using cell-free materials alone. Multiple cycles of serial dilution were performed on a system contained in microscale water-in-oil droplets. The study's artificial genomic DNA, which kept copying itself using self-encoded proteins and improved its own sequence as it did so, is a starting point for building more complex artificial cells. By adding the genes needed for transcription and translation to artificial genomic DNA, it may become possible to make artificial cells that can grow on their own when supplied with small molecules such as amino acids and nucleotides. Compared with using living organisms to make useful products such as drugs and food, such artificial cells would be more stable and easier to control. On July 7, 2008, the American Chemical Society reported that Japanese chemists had created the world's first DNA molecule made almost entirely of synthetic components. A nanoparticle-based artificial transcription factor for gene regulation NanoScript is a nanoparticle-based artificial transcription factor designed to mimic the structure and function of transcription factors (TFs). It was created by attaching functional peptides and small molecules that imitate the various TF domains, collectively referred to as synthetic transcription factors, to gold nanoparticles. NanoScript has been shown to localize to the nucleus and to activate transcription of a reporter plasmid more than 15-fold, and it can also activate transcription of targeted endogenous genes in a nonviral manner. In separate work on DNA nanostructures, three different fluorophores (red, green and blue) were fixed at defined positions on a DNA rod surface to provide spatial information and create a nanoscale barcode. Epifluorescence and total internal reflection fluorescence microscopy reliably resolved the spatial arrangement of the fluorophores, and by moving the three fluorophores along the DNA rod, the nanoscale barcode could produce 216 fluorescence patterns. Cases In 1986, Richard Buckland was exonerated, despite having admitted to the rape and murder of a teenager near Leicester, the city where DNA profiling was first developed. This was the first use of DNA fingerprinting in a criminal investigation, and the first to prove a suspect's innocence. The following year Colin Pitchfork was identified as the perpetrator of the same murder, in addition to another, using the same techniques that had cleared Buckland.
In 1987, genetic fingerprinting was used in a US criminal court for the first time in the trial of a man accused of unlawful intercourse with a mentally disabled 14-year-old female who gave birth to a baby. In 1987, Florida rapist Tommie Lee Andrews was the first person in the United States to be convicted as a result of DNA evidence, for raping a woman during a burglary; he was convicted on 6 November 1987, and sentenced to 22 years in prison. In 1990, a violent murder of a young student in Brno was the first criminal case in Czechoslovakia solved by DNA evidence, with the murderer sentenced to 23 years in prison. In 1992, DNA from a palo verde tree was used to convict Mark Alan Bogan of murder. DNA from seed pods of a tree at the crime scene was found to match that of seed pods found in Bogan's truck. This is the first instance of plant DNA admitted in a criminal case. In 1994, the claim that Anna Anderson was Grand Duchess Anastasia Nikolaevna of Russia was tested after her death using samples of her tissue that had been stored at a Charlottesville hospital following a medical procedure. The tissue was tested using DNA fingerprinting, and showed that she bore no relation to the Romanovs. In 1994, Earl Washington, Jr., of Virginia had his death sentence commuted to life imprisonment a week before his scheduled execution date based on DNA evidence. He received a full pardon in 2000 based on more advanced testing. In 1999, Raymond Easton, a disabled man from Swindon, England, was arrested and detained for seven hours in connection with a burglary. He was released due to an inaccurate DNA match. His DNA had been retained on file after an unrelated domestic incident some time previously. In 2000 Frank Lee Smith was proved innocent by DNA profiling of the murder of an eight-year-old girl after spending 14 years on death row in Florida, USA. However he had died of cancer just before his innocence was proven. In view of this the Florida state governor ordered that in future any death row inmate claiming innocence should have DNA testing. In May 2000 Gordon Graham murdered Paul Gault at his home in Lisburn, Northern Ireland. Graham was convicted of the murder when his DNA was found on a sports bag left in the house as part of an elaborate ploy to suggest the murder occurred after a burglary had gone wrong. Graham was having an affair with the victim's wife at the time of the murder. It was the first time Low Copy Number DNA was used in Northern Ireland. In 2001, Wayne Butler was convicted for the murder of Celia Douty. It was the first murder in Australia to be solved using DNA profiling. In 2002, the body of James Hanratty, hanged in 1962 for the "A6 murder", was exhumed and DNA samples from the body and members of his family were analysed. The results convinced Court of Appeal judges that Hanratty's guilt, which had been strenuously disputed by campaigners, was proved "beyond doubt". Paul Foot and some other campaigners continued to believe in Hanratty's innocence and argued that the DNA evidence could have been contaminated, noting that the small DNA samples from items of clothing, kept in a police laboratory for over 40 years "in conditions that do not satisfy modern evidential standards", had had to be subjected to very new amplification techniques in order to yield any genetic profile. However, no DNA other than Hanratty's was found on the evidence tested, contrary to what would have been expected had the evidence indeed been contaminated. 
In August 2002, Annalisa Vicentini was shot dead in Tuscany. Bartender Peter Hamkin, 23, was arrested, in Merseyside in March 2003 on an extradition warrant heard at Bow Street Magistrates' Court in London to establish whether he should be taken to Italy to face a murder charge. DNA "proved" he shot her, but he was cleared on other evidence. In 2003, Welshman Jeffrey Gafoor was convicted of the 1988 murder of Lynette White, when crime scene evidence collected 12 years earlier was re-examined using STR techniques, resulting in a match with his nephew. In June 2003, because of new DNA evidence, Dennis Halstead, John Kogut and John Restivo won a re-trial on their murder conviction, their convictions were struck down and they were released. In 2004, DNA testing shed new light into the mysterious 1912 disappearance of Bobby Dunbar, a four-year-old boy who vanished during a fishing trip. He was allegedly found alive eight months later in the custody of William Cantwell Walters, but another woman claimed that the boy was her son, Bruce Anderson, whom she had entrusted in Walters' custody. The courts disbelieved her claim and convicted Walters for the kidnapping. The boy was raised and known as Bobby Dunbar throughout the rest of his life. However, DNA tests on Dunbar's son and nephew revealed the two were not related, thus establishing that the boy found in 1912 was not Bobby Dunbar, whose real fate remains unknown. In 2005, Gary Leiterman was convicted of the 1969 murder of Jane Mixer, a law student at the University of Michigan, after DNA found on Mixer's pantyhose was matched to Leiterman. DNA in a drop of blood on Mixer's hand was matched to John Ruelas, who was only four years old in 1969 and was never successfully connected to the case in any other way. Leiterman's defense unsuccessfully argued that the unexplained match of the blood spot to Ruelas pointed to cross-contamination and raised doubts about the reliability of the lab's identification of Leiterman. In November 2008, Anthony Curcio was arrested for masterminding one of the most elaborately planned armored car heists in history. DNA evidence linked Curcio to the crime. In March 2009, Sean Hodgson—convicted of 1979 killing of Teresa De Simone, 22, in her car in Southampton—was released after tests proved DNA from the scene was not his. It was later matched to DNA retrieved from the exhumed body of David Lace. Lace had previously confessed to the crime but was not believed by the detectives. He served time in prison for other crimes committed at the same time as the murder and then committed suicide in 1988. In 2012, a case of babies being switched, many decades earlier, was discovered by accident. After undertaking DNA testing for other purposes, Alice Collins Plebuch was advised that her ancestry appeared to include a significant Ashkenazi Jewish component, despite a belief in her family that they were of predominantly Irish descent. Profiling of Plebuch's genome suggested that it included distinct and unexpected components associated with Ashkenazi, Middle Eastern, and Eastern European populations. This led Plebuch to conduct an extensive investigation, after which she concluded that her father had been switched (possibly accidentally) with another baby soon after birth. Plebuch was also able to identify the biological ancestors of her father. In 2016 Anthea Ring, abandoned as a baby, was able to use a DNA sample and DNA matching database to discover her deceased mother's identity and roots in County Mayo, Ireland. 
A recently developed forensic test was subsequently used to capture DNA from saliva left on old stamps and envelopes by her suspected father, uncovered through painstaking genealogy research. The DNA in the first three samples was too degraded to use, but on the fourth more than enough DNA was found. The test, which has a degree of accuracy acceptable in UK courts, proved that a man named Patrick Coyne was her biological father. In 2018, the "Buckskin girl" (a body found in 1981 in Ohio) was identified as Marcia King from Arkansas using DNA genealogical techniques. In 2018, Joseph James DeAngelo was arrested as the main suspect in the Golden State Killer case using DNA and genealogy techniques. In 2018, William Earl Talbott II was arrested as a suspect in the 1987 murders of Jay Cook and Tanya Van Cuylenborg with the assistance of genealogical DNA testing; the same genetic genealogist who helped in this case also helped police with 18 other arrests in 2018. In 2018, with the use of the Next Generation Identification System's enhanced biometric capabilities, the FBI matched the fingerprint of a suspect named Timothy David Nelson and arrested him 20 years after the alleged sexual assault. DNA evidence as evidence to prove rights of succession to British titles DNA testing has been used to establish the right of succession to British titles. Cases include Baron Moynihan and the Pringle baronets.
Technology
Diagnostic technologies
null
44299
https://en.wikipedia.org/wiki/Moraine
Moraine
A moraine is any accumulation of unconsolidated debris (regolith and rock), sometimes referred to as glacial till, that occurs in both currently and formerly glaciated regions, and that has been previously carried along by a glacier or ice sheet. It may consist of partly rounded particles ranging in size from boulders (in which case it is often referred to as boulder clay) down to gravel and sand, in a groundmass of finely-divided clayey material sometimes called glacial flour. Lateral moraines are those formed at the side of the ice flow, and terminal moraines are those formed at the foot, marking the maximum advance of the glacier. Other types of moraine include ground moraines (till-covered areas forming sheets on flat or irregular topography) and medial moraines (moraines formed where two glaciers meet). Etymology The word moraine is borrowed from French , which in turn is derived from the Savoyard Italian ('mound of earth'). in this case was derived from Provençal ('snout'), itself from Vulgar Latin ('rounded object'). The term was introduced into geology by Horace Bénédict de Saussure in 1779. Characteristics Moraines are landforms composed of glacial till deposited primarily by glacial ice. Glacial till, in turn, is unstratified and unsorted debris ranging in size from silt-sized glacial flour to large boulders. The individual rock fragments are typically sub-angular to rounded in shape. Moraines may be found on the glacier's surface or deposited as piles or sheets of debris where the glacier has melted. Formation Moraines may form through a number of processes, depending on the characteristics of sediment, the dynamics on the ice, and the location on the glacier in which the moraine is formed. Moraine forming processes may be loosely divided into passive and active. Passive processes involve the placing of chaotic supraglacial sediments onto the landscape with limited reworking, typically forming hummocky moraines. These moraines are composed of supraglacial sediments from the ice surface. Active processes form or rework moraine sediment directly by the movement of ice, known as glaciotectonism. These form push moraines and thrust-block moraines, which are often composed of till and reworked proglacial sediment. Moraine may also form by the accumulation of sand and gravel deposits from glacial streams emanating from the ice margin. These fan deposits may coalesce to form a long moraine bank marking the ice margin. Several processes may combine to form and rework a single moraine, and most moraines record a continuum of processes. Reworking of moraines may lead to the formation of placer deposits of gold as is the case of southernmost Chile. Types of moraines Moraines can be classified either by origin, location with respect to a glacier or former glacier, or by shape. Lateral moraines Lateral moraines are parallel ridges of debris deposited along the sides of a glacier. The unconsolidated debris can be deposited on top of the glacier by frost shattering of the valley walls or from tributary streams flowing into the valley, or may be subglacial debris carried to the surface of the glacier, melted out, and transported to the glacier margin. Lateral moraines can rise up to over the valley floor, can be up to long, and are steeper close to the glacier margin (up to 80 degrees) than further away (where slopes are typically 29 to 36 degrees). 
Ground moraines Ground moraines are till-covered areas with irregular topography and no ridges, often forming gently rolling hills or plains, with relief of less than . Ground moraine is accumulated at the base of the ice as lodgment till with a thin and discontinuous upper layer of supraglacial till deposited as the glacier retreats. It typically is found in the areas between end moraines. Rogen moraines Rogen moraines or ribbed moraines are a type of basal moraines that form a series of ribs perpendicular to the ice flow in an ice sheet. The depressions between the ribs are sometimes filled with water, making the Rogen moraines look like tigerstripes on aerial photographs. Rogen moraines are named after Lake Rogen in Härjedalen, Sweden, the landform's type locality. de Geer moraines Closely related to Rogen moraines, de Geer moraines are till ridges up to 5m high and 10–50m wide running perpendicular to the ice flow. They occur in large groups in low-lying areas. Named for Gerard De Geer, who first described them in 1889, these moraines may have developed from crevasses underneath the ice sheet. The Kvarken has a very high density of de Geer moraines. End or terminal moraines End moraines, or terminal moraines, are ridges of unconsolidated debris deposited at the snout or end of the glacier. They usually reflect the shape of the glacier's terminus. Glaciers act much like a conveyor belt, carrying debris from the top of the glacier to the bottom where it deposits it in end moraines. End moraine size and shape are determined by whether the glacier is advancing, receding or at equilibrium. The longer the terminus of the glacier stays in one place, the more debris accumulate in the moraine. There are two types of end moraines: terminal and recessional. Terminal moraines mark the maximum advance of the glacier. Recessional moraines are small ridges left as a glacier pauses during its retreat. After a glacier retreats, the end moraine may be destroyed by postglacial erosion. Recessional moraine Recessional moraines are often observed as a series of transverse ridges running across a valley behind a terminal moraine. They form perpendicular to the lateral moraines that they reside between and are composed of unconsolidated debris deposited by the glacier. They are created during temporary halts in a glacier's retreat. Arctic push moraines In permafrost areas an advancing glacier may push up thick layers of frozen sediments at its front. An arctic push moraine will then be formed. Medial moraine A medial moraine is a ridge of moraine that runs down the center of a valley floor. It forms when two glaciers meet and the debris on the edges of the adjacent valley sides join and are carried on top of the enlarged glacier. As the glacier melts or retreats, the debris is deposited and a ridge down the middle of the valley floor is created. The Kaskawulsh Glacier in the Kluane National Park, Yukon, has a ridge of medial moraine 1 km wide. Supraglacial moraines Supraglacial moraines are created by debris accumulated on top of glacial ice. This debris can accumulate due to ice flow toward the surface in the ablation zone, melting of surface ice or from debris that falls onto the glacier from valley sidewalls. Washboard moraines Washboard moraines, also known as minor or corrugated moraines, are low-amplitude geomorphic features caused by glaciers. They consist of low-relief ridges, in height and around apart, accumulated at the base of the ice as lodgment till. 
The name "washboard moraine" refers to the fact that, from the air, it resembles a washboard.'' Veiki moraine A Veiki moraine is a kind of hummocky moraine that forms irregular landscapes of ponds and plateaus surrounded by banks. It forms from the irregular melting of ice covered with a thick layer of debris. Veiki moraine is common in northern Sweden and parts of Canada.
Physical sciences
Glacial landforms
null
44303
https://en.wikipedia.org/wiki/Leopard
Leopard
The leopard (Panthera pardus) is one of the five extant cat species in the genus Panthera. It has a pale yellowish to dark golden fur with dark spots grouped in rosettes. Its body is slender and muscular reaching a length of with a long tail and a shoulder height of . Males typically weigh , and females . The leopard was first described in 1758, and several subspecies were proposed in the 19th and 20th centuries. Today, eight subspecies are recognised in its wide range in Africa and Asia. It initially evolved in Africa during the Early Pleistocene, before migrating into Eurasia around the Early–Middle Pleistocene transition. Leopards were formerly present across Europe, but became extinct in the region at around the end of the Late Pleistocene-early Holocene. The leopard is adapted to a variety of habitats ranging from rainforest to steppe, including arid and montane areas. It is an opportunistic predator, hunting mostly ungulates and primates. It relies on its spotted pattern for camouflage as it stalks and ambushes its prey, which it sometimes drags up a tree. It is a solitary animal outside the mating season and when raising cubs. Females usually give birth to a litter of 2–4 cubs once in 15–24 months. Both male and female leopards typically reach sexual maturity at the age 2–2.5 years. Listed as Vulnerable on the IUCN Red List, leopard populations are currently threatened by habitat loss and fragmentation, and are declining in large parts of the global range. Leopards have had cultural roles in Ancient Greece, West Africa and modern Western culture. Leopard skins are popular in fashion. Etymology The English name "leopard" comes from Old French or Middle French , that derives from Latin and ancient Greek (). could be a compound of (), meaning , and (), meaning . The word originally referred to a cheetah (Acinonyx jubatus). "Panther" is another common name, derived from Latin and ancient Greek (); The generic name Panthera originates in Latin , a hunting net for catching wild beasts to be used by the Romans in combats. is the masculine singular form. Taxonomy Felis pardus was the scientific name proposed by Carl Linnaeus in 1758. The generic name Panthera was first used by Lorenz Oken in 1816, who included all the known spotted cats into this group. Oken's classification was not widely accepted, and Felis or Leopardus was used as the generic name until the early 20th century. The leopard was designated as the type species of Panthera by Joel Asaph Allen in 1902. In 1917, Reginald Innes Pocock also subordinated the tiger (P. tigris), lion (P. leo), and jaguar (P. onca) to Panthera. Living subspecies Following Linnaeus' first description, 27 leopard subspecies were proposed by naturalists between 1794 and 1956. Since 1996, only eight subspecies have been considered valid on the basis of mitochondrial analysis. Later analysis revealed a ninth valid subspecies, the Arabian leopard. In 2017, the Cat Classification Task Force of the Cat Specialist Group recognized the following eight subspecies as valid taxa: Results of an analysis of molecular variance and pairwise fixation index of 182 African leopard museum specimens showed that some African leopards exhibit higher genetic differences than Asian leopard subspecies. Evolution Results of phylogenetic studies based on nuclear DNA and mitochondrial DNA analysis showed that the last common ancestor of the Panthera and Neofelis genera is thought to have lived about . Neofelis diverged about from the Panthera lineage. 
The tiger diverged about , followed by the snow leopard about and the leopard about . The leopard is a sister taxon to a clade within Panthera, consisting of the lion and the jaguar. Results of a phylogenetic analysis of chemical secretions amongst cats indicated that the leopard is closely related to the lion. The geographic origin of the Panthera is most likely northern Central Asia. The leopard-lion clade was distributed in the Asian and African Palearctic since at least the early Pliocene. The leopard-lion clade diverged 3.1–1.95 million years ago. Additionally, a 2016 study revealed that the mitochondrial genomes of the leopard, lion and snow leopard are more similar to each other than their nuclear genomes, indicating that their ancestors hybridized with the snow leopard at some point in their evolution. The oldest unambiguous fossils of the leopard are from Eastern Africa, dating to around 2 million years ago. Leopard-like fossil bones and teeth possibly dating to the Pliocene were excavated in Perrier in France, northeast of London, and in Valdarno, Italy. Until 1940, similar fossils dating back to the Pleistocene were excavated mostly in loess and caves at 40 sites in Europe, including Furninha Cave near Lisbon, Genista Caves in Gibraltar, and Santander Province in northern Spain to several sites across France, Switzerland, Italy, Austria, Germany, in the north up to Derby in England, in the east to Přerov in the Czech Republic and the Baranya in southern Hungary. Leopards arrived in Eurasia during the late Early to Middle Pleistocene around 1.2 to 0.6 million years ago. Four European Pleistocene leopard subspecies were proposed. P. p. begoueni from the beginning of the Early Pleistocene was replaced about by P. p. sickenbergi, which in turn was replaced by P. p. antiqua around 0.3 million years ago. P. p. spelaea is the most recent subspecies that appeared at the beginning of the Late Pleistocene and survived until about 11,000 years ago and possibly into the early Holocene in the Iberian Peninsula. Leopards depicted in cave paintings in Chauvet Cave provide indirect evidence of leopard presence in Europe. Leopard fossils dating to the Late Pleistocene were found in Biśnik Cave in south-central Poland. Fossil remains were also excavated in the Iberian and Italian Peninsula, and in the Balkans. Leopard fossils dating to the Pleistocene were also excavated in the Japanese archipelago. Leopard fossils were also found in Taiwan. Hybrids In 1953, a male leopard and a female lion were crossbred in Hanshin Park in Nishinomiya, Japan. Their offspring known as a leopon was born in 1959 and 1961, all cubs were spotted and bigger than a juvenile leopard. Attempts to mate a leopon with a tigress proved unsuccessful. Characteristics The leopard's fur is generally soft and thick, notably softer on the belly than on the back. Its skin colour varies between individuals from pale yellowish to dark golden with dark spots grouped in rosettes. Its underbelly is white and its ringed tail is shorter than its body. Its pupils are round. Leopards living in arid regions are pale cream, yellowish to ochraceous and rufous in colour; those living in forests and mountains are much darker and deep golden. Spots fade toward the white underbelly and the insides and lower parts of the legs. Rosettes are circular in East African leopard populations, and tend to be squarish in Southern African and larger in Asian leopard populations. 
The fur tends to be grayish in colder climates, and dark golden in rainforest habitats. Rosette patterns are unique in each individual. This pattern is thought to be an adaptation to dense vegetation with patchy shadows, where it serves as camouflage. Its white-tipped tail is about long, white underneath and with spots that form incomplete bands toward the end of the tail. The guard hairs protecting the basal hairs are short, in face and head, and increase in length toward the flanks and the belly to about . Juveniles have woolly fur that appear to be dark-coloured due to the densely arranged spots. Its fur tends to grow longer in colder climates. The leopard's rosettes differ from those of the jaguar, which are darker and with smaller spots inside. The leopard has a diploid chromosome number of 38. Melanistic leopards are also known as black panthers. Melanism in leopards is caused by a recessive allele and is inherited as a recessive trait. In India, nine pale and white leopards were reported between 1905 and 1967. Leopards exhibiting erythrism were recorded between 1990 and 2015 in South Africa's Madikwe Game Reserve and in Mpumalanga. The cause of this morph known as a "strawberry leopard" or "pink panther" is not well understood. Size The leopard is a slender and muscular cat, with relatively short limbs and a broad head. It is sexually dimorphic with males larger and heavier than females. Males stand at the shoulder, while females are tall. The head-and-body length ranges between with a long tail. Sizes vary geographically. Males typically weigh , and females . Occasionally, large males can grow up to . Leopards from the Cape Province in South Africa are generally smaller, reaching only in males. The heaviest wild leopard in Southern Africa weighed around , and it measured . In 2016, an Indian leopard killed in Himachal Pradesh measured with an estimated weight of ; it was perhaps the largest known wild leopard in India. The largest recorded skull of a leopard was found in India in 1920 and measured in basal length, in breadth, and weighed . The skull of an African leopard measured in basal length, and in breadth, and weighed . Distribution and habitat The leopard has the largest distribution of all wild cats, occurring widely in Africa and Asia, although populations are fragmented and declining. It inhabits foremost savanna and rainforest, and areas where grasslands, woodlands and riparian forests remain largely undisturbed. It also persists in urban environments, if it is not persecuted, has sufficient prey and patches of vegetation for shelter during the day. The leopard's range in West Africa is estimated to have drastically declined by 95%, and in the Sahara desert by 97%. In sub-Saharan Africa, it is still numerous and surviving in marginal habitats where other large cats have disappeared. In southeastern Egypt, an individual found killed in 2017 was the first sighting of the leopard in this area in 65 years. In West Asia, the leopard inhabits remain in the areas of southern and southeastern Anatolia. Leopard populations in the Arabian Peninsula are small and fragmented. In the Indian subcontinent, the leopard is still relatively abundant, with greater numbers than those of other Panthera species. Some leopard populations in India live quite close to human settlements and even in semi-developed areas. 
Although adaptable to human disturbances, leopards require healthy prey populations and appropriate vegetative cover for hunting for prolonged survival and thus rarely linger in heavily developed areas. Due to the leopard's stealth, people often remain unaware that it lives in nearby areas. As of 2020, the leopard population within forested habitats in India's tiger range landscapes was estimated at 12,172 to 13,535 individuals. Surveyed landscapes included elevations below in the Shivalik Hills and Gangetic plains, Central India and Eastern Ghats, Western Ghats, the Brahmaputra River basin and hills in Northeast India. In Nepal's Kanchenjunga Conservation Area, a melanistic leopard was photographed at an elevation of by a camera trap in May 2012. In Sri Lanka, leopards were recorded in Yala National Park and in unprotected forest patches, tea estates, grasslands, home gardens, pine and eucalyptus plantations. In Myanmar, leopards were recorded for the first time by camera traps in the hill forests of Myanmar's Karen State. The Northern Tenasserim Forest Complex in southern Myanmar is considered a leopard stronghold. In Thailand, leopards are present in the Western Forest Complex, Kaeng Krachan-Kui Buri, Khlong Saeng-Khao Sok protected area complexes and in Hala Bala Wildlife Sanctuary bordering Malaysia. In Peninsular Malaysia, leopards are present in Belum-Temengor, Taman Negara and Endau-Rompin National Parks. In Laos, leopards were recorded in Nam Et-Phou Louey National Biodiversity Conservation Area and Nam Kan National Protected Area. In Cambodia, leopards inhabit deciduous dipterocarp forest in Phnom Prich Wildlife Sanctuary and Mondulkiri Protected Forest. In southern China, leopards were recorded only in the Qinling Mountains during surveys in 11 nature reserves between 2002 and 2009. In Java, leopards inhabit dense tropical rainforests and dry deciduous forests at elevations from sea level to . Outside protected areas, leopards were recorded in mixed agricultural land, secondary forest and production forest between 2008 and 2014. In the Russian Far East, it inhabits temperate coniferous forests where winter temperatures reach a low of . Behaviour and ecology The leopard is a solitary and territorial animal. It is typically shy and alert when crossing roadways and encountering oncoming vehicles, but may be emboldened to attack people or other animals when threatened. Adults associate only in the mating season. Females continue to interact with their offspring even after weaning and have been observed sharing kills with their offspring when they can not obtain any prey. They produce a number of vocalizations, including growls, snarls, meows, and purrs. The roaring sequence in leopards consists mainly of grunts, also called "sawing", as it resembles the sound of sawing wood. Cubs call their mother with an urr-urr sound. The whitish spots on the back of its ears are thought to play a role in communication. It has been hypothesized that the white tips of their tails may function as a 'follow-me' signal in intraspecific communication. However, no significant association were found between a conspicuous colour of tail patches and behavioural variables in carnivores. Leopards are mainly active from dusk till dawn and will rest for most of the day and some hours at night in thickets, among rocks or over tree branches. Leopards have been observed walking across their range at night; wandering up to if disturbed. In some regions, they are nocturnal. 
In western African forests, they have been observed to be largely diurnal and hunting during twilight, when their prey animals are active; activity patterns vary between seasons. Leopards can climb trees quite skillfully, often resting on tree branches and descending headfirst. They can run at over , leap over horizontally, and jump up to vertically. Social spacing In Kruger National Park, most leopards tend to keep apart. Males occasionally interact with their partners and cubs, and exceptionally this can extend beyond to two generations. Aggressive encounters are rare, typically limited to defending territories from intruders. In a South African reserve, a male was wounded in a male–male territorial battle over a carcass. Males occupy home ranges that often overlap with a few smaller female home ranges, probably as a strategy to enhance access to females. In the Ivory Coast, the home range of a female was completely enclosed within a male's. Females live with their cubs in home ranges that overlap extensively, probably due to the association between mothers and their offspring. There may be a few other fluctuating home ranges belonging to young individuals. It is not clear if male home ranges overlap as much as those of females do. Individuals try to drive away intruders of the same sex. A study of leopards in the Namibian farmlands showed that the size of home ranges was not significantly affected by sex, rainfall patterns or season; the higher the prey availability in an area, the greater the leopard population density and the smaller the size of home ranges, but they tend to expand if there is human interference. Sizes of home ranges vary geographically and depending on habitat and availability of prey. In the Serengeti, males have home ranges of and females of ; but males in northeastern Namibia of and females of . They are even larger in arid and montane areas. In Nepal's Bardia National Park, male home ranges of and female ones of are smaller than those generally observed in Africa. Hunting and diet The leopard is a carnivore that prefers medium-sized prey with a body mass ranging from . Prey species in this weight range tend to occur in dense habitat and to form small herds. Species that prefer open areas and have well-developed anti-predator strategies are less preferred. More than 100 prey species have been recorded. The most preferred species are ungulates, such as impala, bushbuck, common duiker and chital. Primates preyed upon include white-eyelid mangabeys, guenons and gray langurs. Leopards also kill smaller carnivores like black-backed jackal, bat-eared fox, genet and cheetah. In urban environments, domestic dogs provide an important food source. The largest prey killed by a leopard was reportedly a male eland weighing . A study in Wolong National Nature Reserve in southern China demonstrated variation in the leopard's diet over time; over the course of seven years, the vegetative cover receded, and leopards opportunistically shifted from primarily consuming tufted deer to pursuing bamboo rats and other smaller prey. The leopard depends mainly on its acute senses of hearing and vision for hunting. It primarily hunts at night in most areas. In western African forests and Tsavo National Park, they have also been observed hunting by day. They usually hunt on the ground. In the Serengeti, they have been seen to ambush prey by descending on it from trees. 
It stalks its prey and tries to approach as closely as possible, typically within of the target, and, finally, pounces on it and kills it by suffocation. It kills small prey with a bite to the back of the neck, but holds larger animals by the throat and strangles them. It caches kills up to apart. It is able to take large prey due to its powerful jaw muscles, and is therefore strong enough to drag carcasses heavier than itself up into trees; an individual was seen to haul a young giraffe weighing nearly up into a tree. It eats small prey immediately, but drags larger carcasses over several hundred metres and caches it safely in trees, bushes or even caves; this behaviour allows the leopard to store its prey away from rivals, and offers it an advantage over them. The way it stores the kill depends on local topography and individual preferences, varying from trees in Kruger National Park to bushes in the plain terrain of the Kalahari. Average daily consumption rates of were estimated for males and of for females. In the southern Kalahari Desert, leopards meet their water requirements by the bodily fluids of prey and succulent plants; they drink water every two to three days and feed infrequently on moisture-rich plants such as gemsbok cucumbers, watermelon and Kalahari sour grass. Enemies and competitors Across its range, the leopard coexists with a number of other large predators. In Africa, it is part of a large predator guild with lions, cheetahs, spotted and brown hyenas, and African wild dogs. The leopard is dominant only over the cheetah while the others have the advantage of size, pack numbers or both. Lions pose a great mortal threat and can be responsible for 22% of leopard deaths in Sabi Sand Game Reserve. Spotted hyenas are less threatening but are more likely to steal kills, being the culprits of up to 50% of stolen leopard kills in the same area. To counter this, leopards store their kills in the trees and out of reach. Lions have a high success rate in fetching leopard kills from trees. Leopards do not seem to actively avoid their competitors but rather difference in prey and habitat preferences appear to limit their spatial overlap. In particular, leopards use heavy vegetation regardless of whether lions are present in an area and both cats are active at the same time of day. In Asia, the leopard's main competitors are tigers and dholes. Both the larger tiger and pack-living dhole dominate leopards during encounters. Interactions between the three predators involve chasing, stealing kills and direct killing. Tigers appear to inhabit the deep parts of the forest while leopards and dholes are pushed closer to the fringes. The three predators coexist by hunting different sized prey. In Nagarhole National Park, the average size for a leopard kill was compared to for tigers and for dholes. At Kui Buri National Park, following a reduction in prey numbers, tigers continued to feed on favoured prey while leopards and dholes had to increase their consumption of small prey. Leopards can live successfully in tiger habitat when there is abundant food and vegetation cover. Otherwise, they appear to be less common where tigers are numerous. The recovery of the tiger population in Rajaji National Park during the 2000s led to a reduction in leopard population densities. Reproduction and life cycle In some areas, leopards mate all year round. In Manchuria and Siberia, they mate during January and February. 
On average, females begin to breed between the ages of 2½ and three, and males between the ages of two and three. The female's estrous cycle lasts about 46 days, and she is usually in heat for 6–7 days. Gestation lasts for 90 to 105 days. A litter usually consists of 2–4 cubs. The mortality rate of cubs is estimated at 41–50% during the first year, with predation the biggest cause of cub deaths in that period. Male leopards are known to commit infanticide in order to bring the female back into heat. Intervals between births average 15 to 24 months, but can be shorter, depending on the survival of the cubs. Females give birth in a cave, a crevice among boulders, a hollow tree or a thicket. Newborn cubs weigh , and are born with closed eyes, which open four to nine days after birth. The fur of the young tends to be longer and thicker than that of adults, and their pelage is more gray in colour with less defined spots. They begin to eat meat at around nine weeks. Around three months of age, the young begin to follow the mother on hunts. At one year of age, cubs can probably fend for themselves, but will remain with the mother for 18–24 months. After separating from their mother, sibling cubs may travel together for months. Both male and female leopards typically reach sexual maturity at 2–2⅓ years. The generation length of the leopard is 9.3 years. The average life span of a leopard is 12–17 years. The oldest known leopard was a captive female that died at the age of 24 years, 2 months and 13 days.

Conservation
The leopard is listed on CITES Appendix I, and hunting is banned in Botswana and Afghanistan; in 11 sub-Saharan countries, trade is restricted to skins and body parts of 2,560 individuals. In 2007, a leopard reintroduction programme was initiated in the Russian Caucasus, where captive-bred individuals are reared and trained in large enclosures in Sochi National Park; six individuals released into Caucasus Nature Reserve and Alaniya National Park in 2018 had survived as of February 2022.

Threats
The leopard is primarily threatened by habitat fragmentation and the conversion of forest to agricultural land, which lead to a declining natural prey base, human–wildlife conflict with livestock herders and high leopard mortality rates. It is also threatened by trophy hunting and poaching. Contemporary records suggest that the leopard occurs in only 25% of its historical range. Between 2002 and 2012, at least four leopards were estimated to have been poached per week in India for the illegal wildlife trade in skins and bones. In spring 2013, 37 leopard skins were found during a seven-week market survey in major Moroccan cities, and in 2014, 43 leopard skins were detected during two surveys in Morocco; vendors admitted to having imported skins from sub-Saharan Africa. Surveys in the Central African Republic's Chinko area revealed that the leopard population decreased from 97 individuals in 2012 to 50 individuals in 2017. In this period, transhumant pastoralists from the border area with Sudan moved into the area with their livestock. Rangers confiscated large amounts of poison in the camps of livestock herders, who were accompanied by armed merchants; these groups engaged in poaching large herbivores, selling bushmeat and trading leopard skins in Am Dafok. In Java, the leopard is threatened by illegal hunting and trade. Between 2011 and 2019, body parts of 51 Javan leopards were seized, including six live individuals, 12 skins, 13 skulls, 20 canines and 22 claws.
Human relations

Cultural significance
Leopards have been featured in the art, mythology and folklore of many countries. In Greek mythology, the leopard was a symbol of the god Dionysus, who was depicted wearing leopard skin and using leopards as a means of transportation. In one myth, the god was captured by pirates but two leopards rescued him. Numerous Roman mosaics from North African sites depict fauna now found only in tropical Africa. During the Benin Empire, the leopard was commonly represented on engravings and sculptures and was used to symbolise the power of the king or oba, since the leopard was considered the king of the forest. The Ashanti people also used the leopard as a symbol of leadership, and only the king was permitted to have a ceremonial leopard stool. Some African cultures considered the leopard to be a smarter, better hunter than the lion and harder to kill. In Rudyard Kipling's "How the Leopard Got His Spots", one of his Just So Stories, a leopard with no spots in the Highveld lives with his hunting partner, the Ethiopian. When they set off for the forest, the Ethiopian changes his brown skin and paints spots on the leopard's coat. A leopard played an important role in the 1938 Hollywood film Bringing Up Baby. African chiefs, European queens, Hollywood actors and burlesque dancers have worn coats made of leopard skins. The leopard is a frequently used motif in heraldry, most commonly as passant. The heraldic leopard lacks spots and sports a mane, making it visually almost identical to the heraldic lion, and the two are often used interchangeably. Naturalistic leopard-like depictions appear on the coats of arms of Benin, Malawi, Somalia, the Democratic Republic of the Congo and Gabon, the last of which uses a black panther.

Attacks on people
The Leopard of Rudraprayag killed more than 125 people, and the Panar Leopard was thought to have killed over 400; both were shot by the British hunter Jim Corbett. The spotted devil of Gummalapur killed about 42 people in Karnataka, India.

In captivity
The ancient Romans kept leopards in captivity to be slaughtered in hunts as well as to execute criminals. In Benin, leopards were kept and paraded as mascots, totems and sacrifices to deities. Several leopards were kept in a menagerie originally established by King John of England at the Tower of London in the 13th century; around 1235, three of these animals were given to Henry III by the Holy Roman Emperor Frederick II. In modern times, leopards have been trained and tamed in circuses.
Biology and health sciences
Carnivora
null
44305
https://en.wikipedia.org/wiki/Pap%20test
Pap test
The Papanicolaou test (abbreviated as Pap test, also known as Pap smear (AE), cervical smear (BE), cervical screening (BE), or smear test (BE)) is a method of cervical screening used to detect potentially precancerous and cancerous processes in the cervix (opening of the uterus or womb) or, more rarely, the anus (in both men and women). Abnormal findings are often followed up by more sensitive diagnostic procedures and, if warranted, interventions that aim to prevent progression to cervical cancer. The test was independently invented in the 1920s by the Greek physician Georgios Papanikolaou and named after him. A simplified version of the test was introduced by the Canadian obstetrician Anna Marion Hilliard in 1957. A Pap smear is performed by opening the vagina with a speculum and collecting cells at the outer opening of the cervix at the transformation zone (where the outer squamous cervical cells meet the inner glandular endocervical cells), using an Ayre spatula or a cytobrush. The collected cells are examined under a microscope to look for abnormalities. The test aims to detect potentially precancerous changes (called cervical intraepithelial neoplasia (CIN) or cervical dysplasia; the squamous intraepithelial lesion (SIL) system is also used to describe abnormalities) caused by human papillomavirus, a sexually transmitted DNA virus. The test remains an effective, widely used method for early detection of precancer and cervical cancer. While the test may also detect infections and abnormalities in the endocervix and endometrium, it is not designed to do so. Guidelines on when to begin Pap smear screening vary, but screening usually begins in adulthood. Recommended frequency varies from every three to every five years. If results are abnormal, and depending on the nature of the abnormality, the test may need to be repeated in six to twelve months. If the abnormality requires closer scrutiny, the patient may be referred for detailed inspection of the cervix by colposcopy, which magnifies the view of the cervix, vagina and vulva surfaces. The person may also be referred for HPV DNA testing, which can serve as an adjunct to Pap testing. In some countries, viral DNA is checked for first, before checking for abnormal cells. Additional biomarkers that may be applied as ancillary tests with the Pap test are evolving.

Medical uses
Screening guidelines vary from country to country. In general, screening starts at about the age of 20 or 25 and continues until about the age of 50 or 60. Screening is typically recommended every three to five years, as long as results are normal. The American Congress of Obstetricians and Gynecologists (ACOG) and others recommend starting screening at age 21. Many other countries wait until age 25 or later to start screening; for instance, some parts of Great Britain start screening at age 25. ACOG's general recommendation is that people with female reproductive organs aged 30–65 have an annual well-woman examination, that they not get annual Pap tests, and that they do get Pap tests at three- to five-year intervals. HPV is passed through skin-to-skin contact; sex does not have to occur, although it is a common way for the virus to spread. It takes an average of a year, but can take up to four years, for a person's immune system to clear the initial infection. Screening during this period may show this immune reaction and repair as mild abnormalities, which are usually not associated with cervical cancer but could cause the patient stress and result in further tests and possible treatment.
Cervical cancer usually takes time to develop, so delaying the start of screening by a few years poses little risk of missing a potentially precancerous lesion; for instance, screening people under age 25 does not decrease cancer rates under age 30. HPV can be transmitted in sex between females, so those who have only had sex with other females should be screened, although they are at somewhat lower risk for cervical cancer. Guidelines on frequency of screening vary, typically every three to five years for those who have not had previous abnormal smears. Some older recommendations suggested screening as frequently as every one to two years; however, there is little evidence to support such frequent screening. Annual screening has little benefit but leads to greatly increased cost and many unnecessary procedures and treatments, and it has been acknowledged since before 1980 that most people can be screened less often. In some guidelines, frequency depends on age; for instance, in Great Britain, screening is recommended every three years for women under 50 and every five years for those over 50. Screening should stop at about age 65 unless there is a history of abnormal test results or disease. There is probably no benefit in screening people aged 60 or over whose previous tests have been negative. If a woman's last three Pap results were normal, she can discontinue testing at age 65, according to the USPSTF, ACOG, ACS and ASCP; England's NHS says 64. There is no need to continue screening after a complete hysterectomy for benign disease. Pap smear screening is still recommended for those who have been vaccinated against HPV, since the vaccines do not cover all HPV types that can cause cervical cancer; also, the vaccine does not protect against HPV exposure before vaccination. Those with a history of endometrial cancer should discontinue routine Pap tests after hysterectomy, as further tests are unlikely to detect recurrence of cancer but do bring the risk of false positive results, which would lead to unnecessary further testing. More frequent Pap smears may be needed to follow up after an abnormal Pap smear, after treatment for abnormal Pap or biopsy results, or after treatment of cancer (cervical, anal, etc.).

Effectiveness
The Pap test, when combined with a regular program of screening and appropriate follow-up, can reduce cervical cancer deaths by up to 80%. Failure to prevent cancer with the Pap test can occur for many reasons, including not getting regular screening, lack of appropriate follow-up of abnormal results, and sampling and interpretation errors. In the US, over half of all invasive cervical cancers occur in females who have never had a Pap smear; an additional 10 to 20% of cancers occur in those who have not had a Pap smear in the preceding five years. About one-quarter of US cervical cancers were in people who had an abnormal Pap smear but did not get appropriate follow-up (the patient did not return for care, or the clinician did not perform recommended tests or treatment). Adenocarcinoma of the cervix has not been shown to be prevented by Pap smears; in the UK, which has a Pap smear screening program, adenocarcinoma accounts for about 15% of all cervical cancers. Estimates of the effectiveness of the United Kingdom's call and recall system vary widely, but it may prevent about 700 deaths per year in the UK. Multiple studies have assessed the sensitivity and specificity of Pap smears.
Sensitivity captures the ability of Pap smears to correctly identify women with cervical cancer; various studies have found the sensitivity of Pap smears to be between 47.19% and 55.5%. Specificity captures the ability of Pap smears to correctly identify women without cervical cancer; various studies have found the specificity of Pap smears to be between 64.79% and 96.8%. While Pap smears are not entirely accurate, they remain one of the most effective cervical cancer prevention tools. Pap smears may be supplemented with HPV DNA testing.

Results
In screening a general or low-risk population, most Pap results are normal. In the United States, about 2–3 million abnormal Pap smear results are found each year. Most abnormal results are mildly abnormal (ASC-US, typically 2–5% of Pap results, or low-grade squamous intraepithelial lesion (LSIL), about 2% of results), indicating HPV infection. Although most low-grade cervical dysplasias spontaneously regress without ever leading to cervical cancer, dysplasia can serve as an indication that increased vigilance is needed. In a typical scenario, about 0.5% of Pap results are high-grade SIL (HSIL), and less than 0.5% of results indicate cancer; 0.2 to 0.8% of results indicate atypical glandular cells of undetermined significance (AGC-NOS). As liquid-based preparations (LBPs) have become a common medium for testing, atypical result rates have increased: the median rate for low-grade squamous intraepithelial lesions in preparations using LBPs was 2.9% in 2006, compared with a median rate of 2.1% in 2003. Rates for high-grade squamous intraepithelial lesions (median, 0.5%) and atypical squamous cells have changed little. Abnormal results are reported according to the Bethesda system. They include:
Atypical squamous cells (ASC)
Atypical squamous cells of undetermined significance (ASC-US)
Atypical squamous cells – cannot exclude HSIL (ASC-H)
Squamous intraepithelial lesion (SIL)
Low-grade squamous intraepithelial lesion (LGSIL or LSIL)
High-grade squamous intraepithelial lesion (HGSIL or HSIL)
Squamous cell carcinoma
Glandular epithelial cell abnormalities
Atypical glandular cells not otherwise specified (AGC or AGC-NOS)
Endocervical and endometrial abnormalities can also be detected, as can a number of infectious processes, including yeast, herpes simplex virus and trichomoniasis. However, the test is not very sensitive at detecting these infections, so absence of detection on a Pap smear does not mean absence of the infection.

Pregnancy
Pap tests can usually be performed during pregnancy up to at least 24 weeks of gestational age. Pap tests during pregnancy have not been associated with increased risk of miscarriage. An inflammatory component is commonly seen on Pap smears from pregnant women and does not appear to be a risk for subsequent preterm birth. After childbirth, it is recommended to wait 12 weeks before taking a Pap test, because inflammation of the cervix caused by the birth interferes with test interpretation.

In transgender individuals
Transgender men are also typically at risk for HPV, since the majority of individuals in this group retain the uterine cervix. As such, professional guidelines recommend that transgender men be screened routinely for cervical cancer using methods such as the Pap smear, identical to the recommendations for cisgender women. However, transgender men have lower rates of cervical cancer screening than cisgender women.
Many transgender men report barriers to receiving gender-affirming healthcare, including lack of insurance coverage and stigma or discrimination during clinical encounters, and may encounter provider misconceptions regarding the risk of cervical cancer in this population. Pap smears may be presented to patients as non-gendered cancer screening procedures rather than as examinations specific to the female reproductive organs. Pap smears may trigger gender dysphoria in patients, and gender-neutral language can be used when explaining the pathogenesis of cancer due to infection, emphasizing the pervasiveness of HPV infection regardless of gender. Transgender women who have not had vaginoplasties are not at risk of developing cervical cancer because they do not have cervices. Transgender women who have had vaginoplasties and have a neo-cervix or neo-vagina have a small chance of developing cancer, according to the Canadian Cancer Society. Surgeons typically use penile skin to create the new vagina and cervix; this tissue can contract HPV and lead to penile cancer, although such cancer is considerably rarer than cervical cancer. Because the risk of this kind of cancer is so low, cervical cancer screening is not routinely offered to those with a neo-cervix.

Procedure
According to the CDC, intercourse, douching, and the use of vaginal medicines or spermicidal foam should be avoided for 2 days before the test. A number of studies have shown that using a small amount of water-based gel lubricant does not interfere with, obscure, or distort the Pap smear; further, cytology is not affected, nor are some STD tests. The CDC states that Pap smears can be performed during menstruation, whereas the NHS recommends against cervical screening during, or in the 2 days before and after, menstruation. Pap smears can be performed during menstruation, especially if the physician is using a liquid-based test; however, if bleeding is extremely heavy, endometrial cells can obscure cervical cells, and if this occurs the test may need to be repeated in 6 months. Pap smears begin with the insertion of a speculum into the vagina, which spreads the vagina open and allows access to the cervix. The health care provider then collects a sample of cells from the outer opening, or external os, of the cervix by scraping it with either a spatula or a brush. Obtaining a Pap smear should not cause much pain, but may be uncomfortable; conditions such as vaginismus, vulvodynia, or cervical stenosis can make insertion of the speculum painful. In a conventional Pap smear, the cells are placed on a glass slide and taken to the laboratory to be checked for abnormalities. A plastic-fronded broom is sometimes used in place of the spatula or brush; the broom is not as good a collection device, since it is much less effective at collecting endocervical material than the spatula and brush. The broom has been used more frequently since the advent of liquid-based cytology, although either type of collection device may be used with either type of cytology. The sample is stained using the Papanicolaou technique, in which tinctorial dyes and acids are selectively retained by cells; unstained cells cannot be seen adequately with a light microscope. Papanicolaou chose stains that highlighted cytoplasmic keratinization, which actually has almost nothing to do with the nuclear features used to make diagnoses now. A single smear has an area of 25 × 50 mm and contains a few hundred thousand cells on average.
Screening with light microscopy is first done on low (10x) power and then switched to higher (40x) power when suspicious findings are seen. Cells are analyzed under high power for morphologic changes indicative of malignancy, including an enlarged and irregularly shaped nucleus, an increased nucleus-to-cytoplasm ratio, and coarser, more irregular chromatin. Approximately 1,000 fields of view at 10x power are required to screen a single sample, which takes on average 5 to 10 minutes. In some cases, a computer system may prescreen the slides, indicating those that do not need examination by a person or highlighting areas for special attention. The sample is then usually screened by a specially trained and qualified cytotechnologist using a light microscope. The terminology for who screens the sample varies according to the country; in the UK, the personnel are known as cytoscreeners, biomedical scientists (BMS), advanced practitioners and pathologists. The latter two take responsibility for reporting an abnormal sample, which may require further investigation.

Automated analysis
In the last decade, there have been successful attempts to develop automated, computer image analysis systems for screening. Although, on the available evidence, automated cervical screening could not be recommended for implementation into a national screening program, a recent NHS health technology appraisal concluded that the 'general case for automated image analysis ha(d) probably been made'. Automation may improve sensitivity and reduce unsatisfactory specimens. Two systems have been approved by the FDA and function in high-volume reference laboratories, with human oversight.

Types of screening
Conventional Pap—In a conventional Pap smear, samples are smeared directly onto a microscope slide after collection.
Liquid-based cytology—The sample of (epithelial) cells is taken from the transitional zone, the squamocolumnar junction of the cervix between the ectocervix and the endocervix. The cells taken are suspended in a bottle of preservative for transport to the laboratory, where they are analyzed using Pap stains.
Pap tests commonly examine epithelial abnormalities such as metaplasia, dysplasia or borderline changes, all of which may be indicative of CIN. Nuclei stain dark blue, squamous cells stain green and keratinised cells stain pink/orange. Koilocytes may be observed where there is some dyskaryosis of the epithelium. The nucleus in koilocytes is typically irregular, indicating possible cause for concern and requiring further confirmatory screening and tests. In addition, a human papillomavirus (HPV) test may be performed, either as indicated for abnormal Pap results or, in some cases, as dual testing, in which both a Pap smear and an HPV test are done at the same time (also called Pap co-testing).

Practical aspects
The endocervix may be partially sampled with the device used to obtain the ectocervical sample, but due to the anatomy of this area, consistent and reliable sampling cannot be guaranteed. Since abnormal endocervical cells may be sampled, those examining them are taught to recognize them. The endometrium is not directly sampled with the device used to sample the ectocervix. Cells may exfoliate onto the cervix and be collected from there, so, as with endocervical cells, abnormal cells can be recognised if present, but the Pap test should not be used as a screening tool for endometrial malignancy.
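The nucleus-to-cytoplasm ratio mentioned above is one of the simpler quantitative cues an automated image-analysis system can compute once nuclei and cytoplasm have been segmented in the stained image. The sketch below is a minimal, hypothetical illustration of that calculation only; the input format, the 0.5 cut-off and the flagging rule are assumptions made for the example and do not represent the algorithm of any approved screening system.

```python
# Minimal sketch: flag cells whose nucleus-to-cytoplasm (N:C) area ratio is
# unusually high, one of the morphologic cues described above. The cut-off and
# the input format are illustrative assumptions, not clinical criteria.
from dataclasses import dataclass

@dataclass
class SegmentedCell:
    cell_id: int
    nucleus_area: float    # segmented nucleus area, e.g. in square micrometres
    cytoplasm_area: float  # segmented cytoplasm area (excluding the nucleus)

def nc_ratio(cell: SegmentedCell) -> float:
    """Return the nucleus-to-cytoplasm area ratio for one segmented cell."""
    if cell.cytoplasm_area <= 0:
        raise ValueError("cytoplasm area must be positive")
    return cell.nucleus_area / cell.cytoplasm_area

def flag_for_review(cells, threshold=0.5):
    """Return cells whose N:C ratio exceeds the (hypothetical) threshold,
    i.e. candidates a human screener would re-examine at higher power."""
    return [c for c in cells if nc_ratio(c) > threshold]

if __name__ == "__main__":
    sample = [
        SegmentedCell(1, nucleus_area=40.0, cytoplasm_area=400.0),   # ratio 0.10
        SegmentedCell(2, nucleus_area=180.0, cytoplasm_area=250.0),  # ratio 0.72
    ]
    for cell in flag_for_review(sample):
        print(f"cell {cell.cell_id}: N:C ratio {nc_ratio(cell):.2f} flagged for review")
```

In practice, the difficult part of automated analysis is the image segmentation that produces these areas in the first place; the ratio itself only illustrates why an enlarged nucleus relative to its cytoplasm is a useful, quantifiable warning sign.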
In the United States, a Pap test itself costs $20 to $30, but a Pap test visit can cost over $1,000, largely because additional tests are added that may or may not be necessary.

History
The test was invented by and named after the Greek doctor Georgios Papanikolaou, who started his research in 1923. Aurel Babeș independently made similar discoveries in 1927; however, Babeș' method was radically different from Papanikolaou's. The Pap test was finally recognized only after a leading article in the American Journal of Obstetrics and Gynecology in 1941 by Papanikolaou and Herbert F. Traut, an American gynecologist. A monograph they published, titled Diagnosis of Uterine Cancer by the Vaginal Smear, contained drawings of the various cells seen in patients with no disease, inflammatory conditions, and preclinical and clinical carcinoma. The monograph was illustrated by Hashime Murayama, who later became a staff illustrator with the National Geographic Society. Both Papanikolaou and his wife, Andromachi Papanikolaou, dedicated the rest of their lives to teaching the technique to other physicians and laboratory personnel.

Experimental techniques
In the developed world, cervical biopsy guided by colposcopy is considered the "gold standard" for diagnosing cervical abnormalities after an abnormal Pap smear. Other techniques, such as the triple smear, are also done after an abnormal Pap smear. The procedure requires a trained colposcopist and can be expensive to perform. However, Pap smears are very sensitive, and some negative biopsy results may represent undersampling of the lesion in the biopsy, so a negative biopsy with positive cytology requires careful follow-up. Experimental visualization techniques use broad-band light (e.g., direct visualization, speculoscopy, cervicography, visual inspection with acetic acid or with Lugol's, and colposcopy) and electronic detection methods (e.g., Polarprobe and in vivo spectroscopy). These techniques are less expensive and can be performed with significantly less training, but they do not perform as well as Pap smear screening and colposcopy. At this point, these techniques have not been validated by large-scale trials and are not in general use.

Implementation by country

Australia
Australia has used the Pap test as part of its cervical screening program since its implementation in 1991, which required women over the age of 18 to be tested every two years. In December 2017, Australia discontinued its use of the Pap test and replaced it with an HPV test that is required only once every five years from the age of 25. Medicare covers the costs of testing; however, if a patient's doctor does not allow bulk billing, they may have to pay for the appointment and then claim the Medicare rebate.

Taiwan
Free Pap tests were offered from 1974 to 1984, before being replaced in 1995 by a system in which all women over the age of 30 could have the cost of their Pap test reimbursed by the National Health Insurance. This policy was still in place in 2018 and encouraged women to be screened at least every three years. Despite this, the number of people receiving Pap tests remains lower than in countries such as Australia. Some believe this is due to a lack of awareness regarding the test and its availability. It has also been found that women who have chronic diseases or other reproductive diseases are less likely to receive the test.
England
The NHS maintains a cervical screening program in which women between the ages of 25 and 49 are invited for a smear test every three years, and women over 50 every five years. Much like Australia, England uses an HPV test first; cells from samples that test positive are then examined using the Pap test. The test is free as part of the national cervical screening program.

Coccoid bacteria
The finding of coccoid bacteria on a Pap test is of no consequence when the test findings are otherwise normal and there are no infectious symptoms. However, if there is enough inflammation to obscure the detection of precancerous and cancerous processes, it may indicate treatment with a broad-spectrum antibiotic for streptococci and anaerobic bacteria (such as metronidazole and amoxicillin) before repeating the smear. Alternatively, the test may be repeated earlier than it otherwise would be. If there are symptoms of vaginal discharge, bad odor or irritation, the presence of coccoid bacteria may also indicate treatment with antibiotics as above.
Biology and health sciences
Medical procedures
null