Dataset schema:
id: string (2–8 chars)
url: string (31–117 chars)
title: string (1–71 chars)
text: string (153–118k chars)
topic: string (4 distinct values)
section: string (4–49 chars)
sublist: string (9 distinct values)
4404484
https://en.wikipedia.org/wiki/Horned%20gopher
Horned gopher
Horned gophers are extinct rodents from the genus Ceratogaulus, a member of the extinct fossorial rodent clade Mylagaulidae. Ceratogaulus is the only known rodent genus with horns, and is the smallest known horned mammal. Ceratogaulus lived from the late Miocene to the early Pliocene epochs. Description The horned gopher had two horns; these were large (in comparison to body size), paired, and originated from the nasal bones. Horned gophers are the smallest known horned mammals and the only known rodents ever to have had horns. They are also one of only two known horned fossorial mammals, the other being Peltephilus, an extinct genus of armadillo. They were native to what is now the Great Plains of North America, mostly Nebraska. The role of the horns of Ceratogaulus is subject to much speculation. Several functions have been hypothesized (see below for a more detailed analysis) including digging, mating displays or combat, and defense from predators. The horns are not sexually dimorphic and multiple analyses support a role in defense. In other respects, the animals most resembled modern marmots. They were approximately long, and had paddle-like forepaws with powerful claws adapted for digging. They also had small eyes, and probably had poor eyesight, similar to that of a mole. These features, and some formal analyses of their morphology, suggest that they were likely burrowing animals. Possible roles of the horns Digging The nasal horns of Ceratogaulus are inconsistent with use as a digging tool. In recent mammals that use their heads for excavating, the tips of their snouts are used like a spade to scrape at the substrate. Therefore, the only modification of the nasal bones is a slight thickening of the anterior tips. Although it is theoretically possible that some mammal might develop horns as a digging tool, digging horns would differ from the Ceratogaulus horns in position and shape. Ceratogaulus horns are positioned on the posterior ends of the nasal bones and extend dorsally, perpendicular to the plane of the palate. As a result of their posterior position, using the horns to dig would bring the anterior tip of the nasals against the substrate after a very short sweep of the horns, making digging with the horns extremely inefficient. This motion would be even more inefficient than suggested because the anterior surface of a burrow is concave, making it essentially impossible to use the horns without the anterior end of the snout interfering. The expectation is that an animal using its horns anteriorly (rather than dorsally) would have the occipital plate positioned vertically or tilted posteriorly. In this configuration, the effective input lever is maximized when the head is lowered, as in the rhinoceros skull. The shape of the horn itself is also very poor for a digging tool. The horns are thick and broad with large, flat anterior and posterior surfaces. Dragging such a broad tool through the soil would create immense resistance, proportional to the large surface area presented to the substrate. Finally, the Ceratogaulus horn becomes more posteriorly positioned through time, so that the evolutionary trend is towards a horn which becomes more poorly suited to digging through time, rather than better suited. Thus, the argument that the horns functioned in digging is not supported by the morphology or the evolutionary progression. Mating displays or combat Many of the objections that apply to the horns as a digging implement also apply to the use of the horns in sexual combat. 
Their orientation and position and the morphology of the rest of the skull make it exceedingly difficult to bring them to bear on an opponent of similar size. The cervical vertebrae are shortened anteroposteriorly in all mylagaulids (a feature inherited by Ceratogaulus from ancestral, head-digging mylagaulids), decreasing the flexibility and range of motion of the neck and making it even more difficult for Ceratogaulus species to wrestle with their horns. Many ungulates with horns ill-suited to sexual combat still use them for combat or for sexual display. However, a sexually selected use of the horns is unlikely in Ceratogaulus, as the optic foramen is very small, roughly one-half to two-thirds the size of that of the mountain beaver, Aplodontia rufa, which itself has very poor vision. The small size of the optic foramen indicates extremely poor visual acuity, meaning the females would be unlikely to be able to visually recognize a winner in any sexual displays or sexual combat by the males. Defense Horns are used in defense against predators by almost all horned mammals. Animals will use any weapons at their disposal to fight off predators, and the horns of Ceratogaulus are well suited to defense. The horns are broad and robust, and their dorsal orientation and relatively posterior position make them well suited to protecting the vulnerable eyes and neck. When the animal elevated its head, the horns would snap backward, protecting the areas most commonly attacked by predators. A similar use of posterodorsal horns has been shown to decrease predation in horned lizards. As the horns grow taller through evolutionary time, they also become more posteriorly positioned and the height of the occipital plate increases, increasing the leverage for lifting them. By positioning the horns more posteriorly, the output lever is shortened and, because the muscles used to rotate the skull dorsally attach at the top of the occipital plate, the input lever is lengthened. Thus, the dorsal strike with the horns would be more powerful, as the ratio of input lever to output lever would be increased. Predation is the dominant cause of mortality in most small mammals, so the benefits provided by a mechanism to reduce predation could offset the substantial evolutionary cost of horns in a fossorial mammal.
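The lever argument above can be made concrete with a simple torque balance; this is a generic sketch, not a calculation from the analyses the article describes. Treating the skull as a lever pivoting at the neck joint, with the dorsal neck muscles pulling on the occipital plate at distance L_in from the pivot and the horn tips at distance L_out:

F_{\text{horn}} \, L_{\text{out}} = F_{\text{muscle}} \, L_{\text{in}} \quad \Longrightarrow \quad F_{\text{horn}} = F_{\text{muscle}} \, \frac{L_{\text{in}}}{L_{\text{out}}}

A taller occipital plate lengthens L_in and a more posterior horn position shortens L_out, so both evolutionary trends raise the force the horns deliver in a dorsal strike.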
Biology and health sciences
Rodents
Animals
9884592
https://en.wikipedia.org/wiki/Suspended%20load
Suspended load
The suspended load of a flow of fluid, such as a river, is the portion of its sediment uplifted by the fluid's flow in the process of sediment transportation. It is kept suspended by the fluid's turbulence. The suspended load generally consists of smaller particles, like clay, silt, and fine sands. Sediment transportation The suspended load is one of the three layers of the fluvial sediment transportation system. The bed load consists of the larger sediment, which is transported by saltation, rolling, and dragging on the riverbed. The suspended load is the middle layer, consisting of the smaller sediment that is held in suspension. The wash load is the uppermost layer, consisting of the smallest sediment that can be seen with the naked eye; the wash load mixes easily with the suspended load during transport because the processes are very similar. The wash load never touches the bed, even outside of a current. Composition The boundary between bed load and suspended load is not straightforward, because whether a particle is in suspension or not depends on the flow velocity – it is easy to imagine a particle moving between bed load, part-suspension and full suspension in a fluid with variable flow. Suspended load generally consists of fine sand, silt and clay size particles, although larger particles (coarser sands) may be carried in the lower water column in more intense flows. Suspended load vs suspended sediment Suspended load and suspended sediment are similar, but not the same. Suspended sediment is sediment uplifted in fluvial zones, but unlike suspended load, no turbulence is required to keep it uplifted. Suspended load requires flow velocity to keep the sediment in transport above the bed; at low velocity the sediment will deposit. Velocity The suspended load is carried within the lower to middle part of the water column and moves at a large fraction of the mean flow velocity of the stream, with a Rouse number between 0.8 and 1.2. The Rouse number indicates how sediment will be transported at a given flow velocity: it is the ratio of the fall velocity of a grain to the upward turbulent velocity acting on it. Diagrams Suspended load is often visualised using two diagrams. The Hjulström curve uses velocity and sediment size to compare the rates of erosion, transportation, and deposition. One flaw of the Hjulström diagram is that it does not account for water depth, so it gives only an estimated rate. The second diagram used is the Shields diagram (based on the Shields formula), which uses the critical shear stress and the Reynolds number to estimate the transportation rate. The Shields diagram is considered the more precise chart for estimating suspended load. Measuring suspended load Shear stress To find the stream power available for sediment transportation, shear stress helps determine the force required to move sediment. Critical shear stress The threshold shear stress at which sediment begins to be transported within a stream. Suspended load transport rate
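A minimal sketch of the Rouse-number classification described above, assuming the common definition P = w_s / (κ u*), where w_s is the grain's settling (fall) velocity, u* the shear velocity of the flow, and κ ≈ 0.40 the von Kármán constant; the example grain and flow values are invented for illustration.

```python
# Minimal sketch: classify sediment transport mode from the Rouse number.
KAPPA = 0.40  # von Karman constant

def rouse_number(settling_velocity_ms: float, shear_velocity_ms: float) -> float:
    """Ratio of downward fall velocity to turbulent uplift (shear) velocity."""
    return settling_velocity_ms / (KAPPA * shear_velocity_ms)

def transport_mode(p: float) -> str:
    """Commonly cited Rouse-number bands for transport mode."""
    if p > 2.5:
        return "bed load"
    if p > 1.2:
        return "suspended load (partly suspended)"
    if p > 0.8:
        return "suspended load (fully suspended)"
    return "wash load"

# Hypothetical example: fine sand with w_s = 0.01 m/s in a flow with u* = 0.025 m/s.
p = rouse_number(0.01, 0.025)
print(f"Rouse number P = {p:.2f} -> {transport_mode(p)}")  # P = 1.00 -> fully suspended
```

Note how the bands match the text: a Rouse number between 0.8 and 1.2 corresponds to fully suspended load, while values below 0.8 grade into wash load.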
Physical sciences
Sedimentology
Earth science
9888598
https://en.wikipedia.org/wiki/Nothofagus%20menziesii
Nothofagus menziesii
Nothofagus menziesii, commonly known as silver beech, is a tree of the southern beech family endemic to New Zealand. Its common name probably comes from the fact that its bark is whitish in colour, particularly in younger specimens. It is found from Thames southwards in the North Island (except Mount Taranaki/Egmont), and throughout the South Island. Silver beech is a forest tree up to 30 m tall. The trunk, which is often buttressed, may be up to 2 m in diameter. The leaves are small, thick and almost round in shape, 6 to 15 mm long and 5 to 15 mm wide, with rounded teeth which usually occur in pairs; one or two hair-fringed domatia are found on the underside of each leaf. Its Māori name is tawhai. It grows from low altitudes to the mountains. In 2013 it was proposed that the species be renamed Lophozonia menziesii. Distribution Alongside mountain beech (Nothofagus solandri var. cliffortioides), silver beech is the most widely distributed beech taxon in New Zealand. It is predominantly found in cold, wet forests from the Bay of Plenty to the bottom of the South Island; no beeches are present on Stewart Island. In the South Island its geographical range extends from sea level to the treeline, while in the North Island it is restricted mainly to montane and subalpine forests on ranges and central volcanoes. Silver beech forests generally dominate the wetter regions of the South Island, within Fiordland and Southland. Silver beech typically dominates other species of beech in increasingly wet and cold environments, owing to its competitive advantages of greater tolerance of low soil nutrients, greater shade tolerance and a lower thermal optimum for photosynthesis. Extensive pure silver beech forests are commonly found in high-altitude environments between 500 m and the timberline, but also exist on valley floors, especially in inland valleys where atmospheric and soil moisture is high. Throughout the New Zealand landscape there are areas with suitable soils and conditions that nevertheless lack southern beech trees; these are commonly termed 'beech gaps'. There are several hypotheses as to why these gaps occur, including glaciation, volcanic activity and drought. It has also been theorised that some gaps resulted from forest fires lit by Māori before European settlement. The most significant example of a beech gap is located in central Westland in the South Island, where two areas of high endemicity (Otago-Southland, and northwest Nelson) are separated by an area of low diversity. Cultivation and uses The wood is hard and is used for furniture. It is not durable outdoors. The bark contains a black dye and tannin, which is used for tanning leather.
Biology and health sciences
Fagales
Plants
17910805
https://en.wikipedia.org/wiki/Counting%20board
Counting board
The counting board is the precursor of the abacus, and the earliest known form of a counting device (excluding fingers and other very simple methods). Counting boards were made of stone or wood, and the counting was done on the board with beads, pebbles, etc. Few boards survive, because of the perishable materials used in their construction or because such objects cannot always be identified as counting boards. The counting board was invented to facilitate and streamline numerical calculations in ancient civilizations. Its inception addressed the need for a practical tool to perform arithmetic operations efficiently. By using counters or tokens on a board with designated sections, people could easily keep track of quantities, trade, and financial transactions. This invention not only enhanced accuracy but also fueled the development of more sophisticated mathematical concepts and systems throughout history. The counting board does not include a zero as we have come to understand it today; it was primarily used with Roman numerals. Calculation was based on a base-ten or base-twenty system, where the lines represented the powers of ten or twenty and the spaces represented fives. The oldest known counting board, the Salamis Tablet, was discovered on the Greek island of Salamis in 1899. It is thought to have been used as more of a gaming board than a calculating device. It is marble, about 150 x 75 x 4.5 cm, and is in the Epigraphical Museum in Athens. It has carved Greek letters and parallel grooves. The German mathematician Adam Ries described the use of counting boards in the 16th century.
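The line-and-space scheme can be made concrete with a small sketch. The representation below (lines worth powers of ten, the space above each line worth five of that line's unit) is an illustrative assumption in the spirit of the description above, not a reconstruction of any particular historical board.

```python
# Sketch: decompose a number into counters on a base-ten counting board.
# Lines carry counters worth 1, 10, 100, ...; the space above each line
# carries at most one counter worth five of that line's unit.

def to_counting_board(n: int) -> list:
    """Return, per decimal place, the counters on the line and in the space."""
    layout = []
    unit = 1
    while n > 0:
        n, digit = divmod(n, 10)
        layout.append({
            "line_value": unit,
            "line_counters": digit % 5,   # up to four counters on the line
            "space_counter": digit >= 5,  # one counter in the space = five units
        })
        unit *= 10
    return layout

# 1749 -> 9 = 5+4 on the units, 4 on the tens, 7 = 5+2 on the hundreds, 1 on the thousands
for row in reversed(to_counting_board(1749)):
    print(row)
```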
Technology
Basics_4
null
3240142
https://en.wikipedia.org/wiki/Agricultural%20cycle
Agricultural cycle
The agricultural cycle is the annual cycle of activities related to the growth and harvest of a crop (plant). These activities include loosening the soil, seeding, special watering, moving plants when they grow bigger, and harvesting, among others. Without these activities, a crop cannot be grown. The main steps in agricultural practice are preparation of soil, sowing, adding manure and fertilizers, irrigation, harvesting and storage. Seeding Success in seeding depends on the properties of both the seed and the soil it is planted in. The step prior to seeding is crop selection; propagation then mainly follows one of two techniques: sexual or asexual. The asexual technique includes all forms of the vegetative process, such as budding, grafting and layering; the sexual technique involves growing the plant from a seed. Grafting is an artificial method of propagation in which parts of plants are joined together so that they bind and continue growing as one plant. Grafting is mainly applied to two groups of plants, the dicots and the gymnosperms, owing to the presence of vascular cambium between the xylem and phloem tissues. A grafted plant consists of two parts: first, the rootstock, the lower part of the plant comprising the roots and the lowest part of the shoot; second, the branches and primary stem, the upper and main part of the shoot, which gradually develops into a fully nourished plant. Budding is another form of asexual reproduction in which the new plant develops from a bud of the parent plant. It is a method in which a bud from one plant is joined onto the stem of another plant; the implanted bud eventually develops into a replica of the parent plant. The new growth may separate to form an independent plant, but in numerous cases it remains attached and forms various accumulations. Seedling Germination is the process by which a seed develops into a seedling. The vital conditions necessary for this process are water, air, suitable temperature, energy, viability and enzymes; if any of these conditions is absent, the process cannot proceed successfully. Germination is also known as sprouting, and is considered the first sign of life shown by a seed. Pollination The process of pollination refers to the transfer of pollen to the female organs of the plant. Optimum conditions for pollination are a relative humidity of 50–70% and a temperature of about 24.4 °C. If the humidity is higher than 90%, the pollen will not shed; increasing air circulation is a favourable method of keeping humidity levels under control. Irrigation Irrigation is the process of artificially applying water to soil to allow plant growth. The term is used particularly when large amounts of water are applied to dry, arid regions in order to facilitate plant growth. Irrigation not only increases the growth rate of the plant but also increases the yield. In temperate and tropical areas rainfall and snowfall are the main suppliers of irrigation water, but in dry places with unfavourable weather conditions, groundwater serves as an essential source. Groundwater collects in basins made up of gravel and in aquifers, which are water-holding rocks. Dams also act as an essential distributive source of irrigation water.
Underground wells also play an important role in storing water for irrigation, particularly in the United States, and in Arizona especially. Water and debris from streams fed by storm runoff also collect in underground basins. There are two main irrigation techniques: spray irrigation and drip irrigation. Drip irrigation is regarded as more efficient because less water evaporates than in spray irrigation.
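The pollination thresholds quoted above lend themselves to a tiny rule-of-thumb check. This sketch simply encodes the figures from the text (50–70% relative humidity at about 24.4 °C, no pollen shed above 90% humidity); the ±2 °C temperature tolerance is an invented illustration, not a value from the article.

```python
# Rule-of-thumb check of the pollination conditions quoted in the text.
OPTIMUM_TEMP_C = 24.4

def pollination_outlook(relative_humidity_pct: float, temperature_c: float) -> str:
    if relative_humidity_pct > 90:
        return "pollen will not shed; increase air circulation"
    humidity_ok = 50 <= relative_humidity_pct <= 70
    temp_ok = abs(temperature_c - OPTIMUM_TEMP_C) <= 2.0  # assumed tolerance
    if humidity_ok and temp_ok:
        return "near-optimal conditions for pollination"
    return "suboptimal conditions"

print(pollination_outlook(60, 24.0))  # near-optimal conditions for pollination
print(pollination_outlook(95, 24.4))  # pollen will not shed; increase air circulation
```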
Technology
Basics_2
null
3241032
https://en.wikipedia.org/wiki/Treehopper
Treehopper
Treehoppers (more precisely typical treehoppers, to distinguish them from the Aetalionidae) and thorn bugs are members of the family Membracidae, a group of insects related to the cicadas and the leafhoppers. About 3,200 species of treehoppers in over 400 genera are known. They are found on all continents except Antarctica; only five species are known from Europe. Individual treehoppers usually live for only a few months. Morphology Treehoppers, due to their unusual appearance, have long interested naturalists. They are best known for their enlarged and ornate pronotum, expanded into often fantastic shapes that enhance their camouflage or mimicry, often resembling plant thorns (thus the commonly used name of "thorn bugs" for a number of treehopper species). Treehoppers have specialized muscles in the hind femora that unfurl to generate sufficient force to jump. It had been suggested that the pronotal "helmet" could be a serial homologue of insect wings, but this interpretation has been refuted by several later studies. Treehopper nymphs can be recognised by the tube-like ninth abdominal segment, through which the tenth and eleventh segments can be everted in defence or to provide honeydew to other animals (explained further in the next section). The tube is longer (relative to the rest of the body) in early instars than in late ones. Ecology Treehoppers have pointed, tube-shaped mouthparts that they use to pierce plant stems and feed upon sap. The young can frequently be found on herbaceous shrubs and grasses, while the adults more often frequent hardwood tree species. Excess sap becomes concentrated as honeydew, which often attracts ants. Some species have a well-developed ant mutualism, and these species are normally gregarious as well, which attracts more ants. The ants provide protection from predators. Treehoppers mimic thorns to prevent predators from spotting them. Others have formed mutualisms with wasps, such as Parachartergus apicalis. Even geckos form mutualistic relations with treehoppers, with whom they communicate by small vibrations of the abdomen. Mutualisms do not serve only as protection against predators. Nymphs of the treehopper Publilia concava have higher survivorship in the presence of ants even when predators are absent. This is suspected to be because uncollected honeydew leads to the growth of sooty mould, which may hinder excretion by treehoppers and photosynthesis by their host plants. Ant collection of honeydew thus allows treehoppers to feed more (the feeding facilitation hypothesis). Eggs are laid by the female with her saw-like ovipositor in slits cut into the cambium or live tissue of stems, though some species lay eggs on top of leaves or stems. The eggs may be parasitised by wasps, such as the tiny fairyflies (Mymaridae) and Trichogrammatidae. The females of some membracid species sit over their eggs to protect them from predators and parasites, and may buzz their wings at intruders. The females of some gregarious species work together to protect each other's eggs. In at least one species, Publilia modesta, mothers serve to attract ants when nymphs are too small to produce much honeydew. Some other species make feeding slits for the nymphs. Most species are innocuous to humans, although a few are considered minor pests, such as Umbonia crassicornis (a thorn bug), the three-cornered alfalfa hopper (Spissistilus festinus), and the buffalo treehopper (Stictocephala bisonia), which has been introduced to Europe. 
The cowbug Oxyrachis tarandus has been recorded as a pest of Withania somnifera in India. Systematics The diversity of treehoppers has been little researched, and their systematic arrangement is tentative. It seems three main lineages can be distinguished; the Endoiastinae are the most ancient treehoppers, still somewhat resembling cicadas. Centrotinae form the second group; they are somewhat more advanced but the pronotum still does not cover the scutellum in almost all of these. The Darninae, Heteronotinae, Membracinae and Smiliinae contain the most apomorphic treehoppers. Several proposed subfamilies seem to be paraphyletic. Centronodinae and Nicomiinae might need to be merged into the Centrotinae to result in a monophyletic group.
Biology and health sciences
Hemiptera (true bugs)
Animals
198766
https://en.wikipedia.org/wiki/Greylag%20goose
Greylag goose
The greylag goose or graylag goose (Anser anser) is a species of large goose in the waterfowl family Anatidae and the type species of the genus Anser. It has mottled and barred grey and white plumage and an orange beak and pink legs. A large bird, it measures between in length, with an average weight of . Its distribution is widespread, with birds from the north of its range in Europe and Asia often migrating southwards to spend the winter in warmer places, although many populations are resident, even in the north. It is the ancestor of most breeds of domestic goose, having been domesticated at least as early as 1360 BCE. The genus name and specific epithet are from anser, the Latin for "goose". Greylag geese travel to their northerly breeding grounds in spring, nesting on moorlands, in marshes, around lakes and on coastal islands. They normally mate for life and nest on the ground among vegetation. A clutch of three to five eggs is laid; the female incubates the eggs and both parents defend and rear the young. The birds stay together as a family group, migrating southwards in autumn as part of a flock, and separating the following year. During the winter they occupy semi-aquatic habitats, estuaries, marshes and flooded fields, feeding on grass and often consuming agricultural crops. Some populations, such as those in southern England and in urban areas across the species' range, are primarily resident and occupy the same area year-round. Taxonomy The greylag goose was formally described in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. He placed it with the ducks in the genus Anas and coined the binomial name Anas anser. The specific epithet is Latin meaning "goose". The greylag goose is now one of 11 geese placed in the genus Anser, which was erected in 1760 by the French naturalist Mathurin Jacques Brisson. It is the type species of the genus. Two subspecies are recognised: A. a. anser, the western greylag goose, which breeds in Iceland and northern and central Europe, and A. a. rubrirostris, the eastern greylag goose, which breeds in Romania, Turkey, and Russia eastwards to northeastern China. The two subspecies intergrade where their ranges meet. The greylag goose sometimes hybridises with other species of goose, including the barnacle goose (Branta leucopsis) and the Canada goose (Branta canadensis), and occasionally with the mute swan (Cygnus olor). The greylag goose was one of the first animals to be domesticated; this happened at least 3,000 years ago in Ancient Egypt, the domestic subspecies being known as A. a. domesticus. As the domestic goose is a subspecies of the greylag goose they are able to interbreed, with the offspring sharing characteristics of both wild and domestic birds. Description The greylag is the largest and bulkiest of the grey geese of the genus Anser, but is more lightly built and agile than its domestic relative. It has a rotund, bulky body, a thick and long neck, and a large head and bill. It has pink legs and feet, and an orange or pink bill with a white or brown nail (hard horny material at tip of upper mandible). It is long with a wing length of . It has a tail , a bill of long, and a tarsus of . It weighs , with a mean weight of around . The wingspan is . Males are generally larger than females, with the sexual dimorphism more pronounced in the eastern subspecies rubrirostris, which is larger than the nominate subspecies on average. 
The plumage of the greylag goose is greyish brown, with a darker head and paler breast and belly with a variable amount of black spotting. It has a pale grey forewing and rump which are noticeable when the bird is in flight or stretches its wings on the ground. It has a white line bordering its upper flanks, and its wing coverts are light coloured, contrasting with its darker flight feathers. Its plumage is patterned by the pale fringes of the feathers. Juveniles differ mostly in their lack of black speckling on the breast and belly and by their greyish legs. Adults have a distinctive 'concertina' pattern of folds in the feathers on their necks. The greylag goose has a loud cackling call similar to that of the domestic goose, "aahng-ung-ung", uttered on the ground or in flight. There are various subtle variations used under different circumstances, and individual geese seem to be able to identify other known geese by their voices. The sound made by a flock of geese resembles the baying of hounds. Goslings chirp or whistle lightly, and adults hiss if threatened or angered. Distribution and habitat This species has a Palearctic distribution. The nominate subspecies breeds in Iceland, Norway, Sweden, Denmark, Finland, the Baltic States, northern Russia, Poland, eastern Hungary, Romania, Germany and the Netherlands. It also breeds locally in the United Kingdom, Belgium, Austria, the Czech Republic, Slovakia, North Macedonia and some other European countries. The eastern race extends eastwards across a broad swathe of Asia to China. Historically, European birds generally migrated southwards to spend winter in southern Europe and North Africa, but in recent decades many instead overwinter in or near their breeding range, even in Scandinavia. Asian birds migrate to Azerbaijan, Iran, Pakistan, northern India, Bangladesh and eastward to China. Greylags also occur as very rare winter migrants to South Korea and Japan. In North America, there are both feral domestic geese, which are similar to greylags, and occasional vagrant greylags. Greylag geese seen in the wild in New Zealand probably originated from the escape of farmyard geese, and a similar situation has occurred in Australia, where feral birds are now established in the east and southeast of the country. In their breeding quarters, they are found on moors with scattered lochs, in marshes, fens and peat-bogs, besides lakes and on little islands some way out to sea. They like dense ground cover of reeds, rushes, heather, bushes and willow thickets. In their winter quarters, they frequent salt marshes, estuaries, freshwater marshes, steppes, flooded fields, bogs and pasture near lakes, rivers and streams. They also visit agricultural land where they feed on winter cereals, rice, beans or other crops, moving at night to shoals and sand-banks on the coast, mud-banks in estuaries or secluded lakes. Large numbers of immature birds congregate each year to moult on the Rone Islands near Gotland in the Baltic Sea. Since the 1950s, increases in winter temperatures have resulted in greylag geese breeding in northern and central Europe, reducing their winter migration distances or even becoming resident. Wintering grounds closer to home can therefore be exploited, meaning that the geese can return to set up breeding territories earlier the following spring. In Great Britain, their numbers had declined as a breeding bird, retreating north to breed wild only in the Outer Hebrides and the northern mainland of Scotland. 
However, during the 20th century, feral populations have been established elsewhere, and they have now re-colonised much of England. These populations are increasingly coming into contact and merging. The greylag goose has become a pest species in several areas where its population has increased sharply. In Norway, the number of greylag geese is estimated to have increased three- to fivefold between 1995 and 2015. As a consequence, farmers' problems caused by goose grazing on farmland have increased considerably. This problem is also evident for the pink-footed goose. In the Orkney islands the population has increased dramatically: there were 300 breeding pairs, increasing to 10,000 in 2009, and 64,000 in 2019. Due to extensive damage caused to crops, the hunting season for the greylag goose in the Orkney islands is now most of the year. Behaviour Greylag geese are largely herbivorous and feed chiefly on grasses. Short, actively growing grass is more nutritious, and greylag geese are often found grazing in pastures with sheep or cows. Because of grass's low nutrient status, they need to feed for much of their time; the herbage passes rapidly through the gut and is voided frequently. The tubers of sea clubrush (Bolboschoenus maritimus) are also taken, as well as berries and water plants such as duckweed (Lemna) and floating sweetgrass (Glyceria fluitans). In wintertime they eat grass and leaves but also glean grain on cereal stubbles and sometimes feed on growing crops, especially during the night. They have been known to feed on oats, wheat, barley, buckwheat, lentils, peas and root crops. Acorns are sometimes consumed, and on the coast, seagrass (Zostera sp.) may be eaten. In the 1920s in Britain, the pink-footed goose "discovered" that potatoes were edible and started feeding on waste potatoes. The greylag followed suit in the 1940s and now regularly searches for tubers on ploughed fields. They also consume small fish, amphibians, crustaceans, molluscs and insects. Greylag geese tend to pair bond in long-term monogamous relationships. Most such pairs are probably life-long partnerships, though 5 to 8% of the pairs separate and re-mate with other geese. Birds in heterosexual pairs may engage in promiscuous behavior, despite the opposition of their mates. Homosexual pairs are common (14 to 20% of pairs may consist of two ganders, depending on the flock), and share the characteristics of heterosexual pairs with the exceptions that the bonds appear to be closer, based on the intensity of their displays. Same-sex pairs also engage in courtship and sexual relations, and often assume high-ranking positions in the flock as a result of their superior strength and courage, leading some to speculate that they may serve as guardians of the flock. The sexual preference of the birds is generally flexible, as more than half of widowers re-pair with a bird of the opposite sex. The nest is on the ground among heather, rushes, dwarf shrubs or reeds, or on a raft of floating vegetation. It is built from pieces of reed, sprigs of heather, grasses and moss, mixed with small feathers and down. A typical clutch is four to six eggs, but fewer eggs or larger numbers are not unusual. The eggs are creamy-white at first but soon become stained, and average . They are mostly laid on successive days and incubation starts after the last one is laid. The female does the incubation, which lasts about twenty-eight days, while the male remains on guard somewhere nearby. The chicks are precocial and able to leave the nest soon after hatching. 
Both parents are involved in their care and they soon learn to peck at food and become fully-fledged at eight or nine weeks, about the same time as their parents regain their ability to fly after moulting their main wing and tail feathers a month earlier. Immature birds undergo a similar moult, and move to traditional, safe locations before doing so because of their vulnerability while flightless. Greylag geese are gregarious birds and form flocks. This has the advantage for the birds that the vigilance of some individuals in the group allows the rest to feed without having to constantly be alert to the approach of predators. After the eggs hatch, some grouping of families occurs, enabling the geese to defend their young by their joint actions, such as mobbing or attacking predators. After driving off a predator, a gander will return to its mate and give a "triumph call", a resonant honk followed by a low-pitched cackle, uttered with neck extended forward parallel with the ground. The mate and even unfledged young reciprocate in kind. Young greylags stay with their parents as a family group, migrating with them in a larger flock, and only dispersing when the adults drive them away from their newly established breeding territory the following year. At least in Europe, patterns of migration are well understood and follow traditional routes with known staging sites and wintering sites. The young learn these locations from their parents, which normally stay together for life. Greylags leave their northern breeding areas relatively late in the autumn, for example completing their departure from Iceland by November, and start their return migration as early as January. Birds that breed in Iceland overwinter in the British Isles; those from Central Europe overwinter as far south as Spain and North Africa; others migrate down to the Balkans, Turkey and Iraq for the winter. In human culture Geese are important to multiple culinary traditions. The meat, liver and other organs, fat, skin and blood are used culinarily in various cuisines. The greylag was once revered across Eurasia. It was linked with the goddess of healing, Gula, a forerunner of the Sumerian fertility goddess Ishtar, in the cities of the Tigris-Euphrates delta over 5,000 years ago. In Ancient Egypt, geese symbolised the sun god Ra. In Ancient Greece and Rome, they were associated with the goddess of love, Aphrodite, and goose fat was used as an aphrodisiac. Since they were sacred birds, they were kept on Rome's Capitoline Hill, from where they raised the alarm when the Gauls attacked in 390 BCE. The goose's role in fertility survives in modern British tradition in the nursery rhyme Goosey Goosey Gander, which preserves its sexual overtones ("And in my lady's chamber"), while "to goose" still has a sexual meaning. The tradition of pulling a wishbone derives from the tradition of eating a roast goose at Michaelmas, where the goose bone was once believed to have the powers of an oracle. For that festival, in Thomas Bewick's time, geese were driven in thousand-strong flocks on foot from farms all over the East of England to London's Cheapside market, covering some per day. Some farmers painted the geese's feet with tar and sand to protect them from road wear as they walked. Greylag geese were domesticated by at least 1360 BCE, when images of domesticated birds resembling the eastern race, Anser anser rubrirostris (which, like modern farmyard geese but unlike western greylags, have a pink beak), were painted in Ancient Egypt. 
Goose feathers were used as quill pens, the best being the primary feathers of the left wing, whose "curvature bent away from the eyes of right-handed writers". The feathers also served to fletch arrows. In ethology, the greylag goose was the subject of Konrad Lorenz's pioneering studies of imprinting behaviour.
Biology and health sciences
Anseriformes
Animals
198824
https://en.wikipedia.org/wiki/Freezing
Freezing
Freezing is a phase transition in which a liquid turns into a solid when its temperature is lowered below its freezing point. For most substances, the melting and freezing points are the same temperature; however, certain substances possess differing solid-liquid transition temperatures. For example, agar displays a hysteresis in its melting point and freezing point: it melts at 85 °C and solidifies from 32 to 40 °C. Crystallization Most liquids freeze by crystallization, the formation of a crystalline solid from the uniform liquid. This is a first-order thermodynamic phase transition, which means that as long as solid and liquid coexist, the temperature of the whole system remains very nearly equal to the melting point, owing to the slow removal of heat when in contact with air, which is a poor heat conductor. Because of the latent heat of fusion, freezing is greatly slowed: the temperature stops dropping once freezing starts, and resumes dropping only after it finishes. Crystallization consists of two major events, nucleation and crystal growth. "Nucleation" is the step wherein the molecules start to gather into clusters, on the nanometer scale, arranging in the defined and periodic manner that defines the crystal structure. "Crystal growth" is the subsequent growth of the nuclei that succeed in achieving the critical cluster size. Supercooling In spite of the second law of thermodynamics, crystallization of pure liquids usually begins at a lower temperature than the melting point, due to the high activation energy of homogeneous nucleation. The creation of a nucleus implies the formation of an interface at the boundaries of the new phase. Some energy is expended to form this interface, based on the surface energy of each phase. If a hypothetical nucleus is too small, the energy that would be released by forming its volume is not enough to create its surface, and nucleation does not proceed. Freezing does not start until the temperature is low enough to provide enough energy to form stable nuclei. In the presence of irregularities on the surface of the containing vessel, solid or gaseous impurities, pre-formed solid crystals, or other nucleators, heterogeneous nucleation may occur, where some energy is released by the partial destruction of the previous interface, raising the supercooling point to be near or equal to the melting point. The melting point of water at 1 atmosphere of pressure is very close to 0 °C, and in the presence of nucleating substances the freezing point of water is close to the melting point, but in the absence of nucleators water can supercool to well below the melting point before freezing. Under high pressure (2,000 atmospheres) water will supercool even further before freezing. Exothermicity Freezing is almost always an exothermic process, meaning that as liquid changes into solid, heat is released. This is often seen as counter-intuitive, since the temperature of the material does not rise during freezing, except if the liquid is supercooled. But this can be understood, since heat must be continually removed from the freezing liquid or the freezing process will stop. The energy released upon freezing is a latent heat known as the enthalpy of fusion, and is exactly the same as the energy required to melt the same amount of the solid. Low-temperature helium is the only known exception to the general rule. Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below 0.8 K. 
This means that, at appropriate constant pressures, heat must be added to these substances in order to freeze them. Vitrification Certain materials, such as glass and glycerol, may harden without crystallizing; these are called amorphous solids. Amorphous materials, as well as some polymers, do not have a freezing point, as there is no abrupt phase change at any specific temperature. Instead, there is a gradual change in their viscoelastic properties over a range of temperatures. Such materials are characterized by a glass transition that occurs at a glass transition temperature, which may be roughly defined as the "knee" point of the material's density vs. temperature graph. Because vitrification is a non-equilibrium process, it does not qualify as freezing, which requires an equilibrium between the crystalline and liquid state. Expansion Substances increase in size, or expand, on being heated; this increase in the size of a body due to heating is called thermal expansion. Thermal expansion takes place in all objects and in all states of matter. However, different substances have different rates of expansion for the same rise in temperature. Freezing of living organisms Many living organisms are able to tolerate prolonged periods of time at temperatures below the freezing point of water. Most living organisms accumulate cryoprotectants such as anti-nucleating proteins, polyols, and glucose to protect themselves against frost damage by sharp ice crystals. Most plants, in particular, can safely reach temperatures of −4 °C to −12 °C. Certain bacteria, notably Pseudomonas syringae, produce specialized proteins that serve as potent ice nucleators, which they use to force ice formation on the surface of various fruits and plants at about −2 °C. The freezing causes injuries in the epithelia and makes the nutrients in the underlying plant tissues available to the bacteria. Bacteria Three species of bacteria, Carnobacterium pleistocenium, as well as Chryseobacterium greenlandensis and Herminiimonas glaciei, have reportedly been revived after surviving for thousands of years frozen in ice. Plants Many plants undergo a process called hardening, which allows them to survive temperatures below 0 °C for weeks to months. Animals The nematode Haemonchus contortus can survive 44 weeks frozen at liquid nitrogen temperatures. Other nematodes that survive at temperatures below 0 °C include Trichostrongylus colubriformis and Panagrolaimus davidi. Many species of reptiles and amphibians survive freezing. Human gametes and 2-, 4- and 8-cell embryos can survive freezing and are viable for up to 10 years, a process known as cryopreservation. Experimental attempts to freeze human beings for later revival are known as cryonics. Food preservation Freezing is a common method of food preservation that slows both food decay and the growth of micro-organisms. Besides the effect of lower temperatures on reaction rates, freezing makes water less available for bacterial growth. Freezing is a widely used method of food preservation; it generally preserves flavours, smell and nutritional content. Freezing became commercially viable
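The volume-versus-surface argument in the Supercooling section above is classical nucleation theory; the following is a standard textbook sketch, not material from the article itself. For a spherical nucleus of radius r, with ΔG_v < 0 the free energy released per unit volume of solid formed and σ > 0 the solid-liquid interfacial energy:

\Delta G(r) = \tfrac{4}{3}\pi r^{3}\,\Delta G_v + 4\pi r^{2}\sigma

The surface term dominates at small r, so ΔG(r) rises until the critical radius found from dΔG/dr = 0:

r^{*} = -\frac{2\sigma}{\Delta G_v}

Nuclei smaller than r* shrink and larger ones grow. Since |ΔG_v| increases with undercooling, both r* and the barrier ΔG(r*) fall as the liquid is cooled further below the melting point, which is why crystallization of a pure liquid typically begins only well below the melting point, and why nucleators (which lower the effective barrier) bring the freezing point back up toward the melting point.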
Physical sciences
Phase transitions
null
198842
https://en.wikipedia.org/wiki/Grassland
Grassland
A grassland is an area where the vegetation is dominated by grasses (Poaceae). However, sedges (Cyperaceae) and rushes (Juncaceae) can also be found, along with variable proportions of legumes, such as clover, and other herbs. Grasslands occur naturally on all continents except Antarctica and are found in most ecoregions of the Earth. Furthermore, grasslands are one of the largest biomes on Earth and dominate the landscape worldwide. There are different types of grasslands: natural grasslands, semi-natural grasslands, and agricultural grasslands. They cover 31–69% of the Earth's land area. Definitions Included among the variety of definitions for grasslands are: "...any plant community, including harvested forages, in which grasses and/or legumes make up the dominant vegetation." "...terrestrial ecosystems dominated by herbaceous and shrub vegetation, and maintained by fire, grazing, drought and/or freezing temperatures." (Pilot Assessment of Global Ecosystems, 2000) "A region with sufficient average annual precipitation (25-75 cm) to support grass..." (Stiling, 1999) Semi-natural grasslands are a very common subcategory of the grasslands biome. These can be defined as: grassland existing as a result of human activity (mowing or livestock grazing), where environmental conditions and the species pool are maintained by natural processes. They have also been described as follows: "Semi-natural grasslands are one of the world's most biodiverse habitats on small spatial scales." "Semi-natural grasslands belong to the most species rich ecosystems in the world." "...have been formed over the course of centuries through extensive grazing and mowing." "...without the use of pesticides or fertilisers in modern times." There are many different types of semi-natural grasslands, e.g. hay meadows. Evolutionary history The graminoids are among the most versatile life forms. They became widespread toward the end of the Cretaceous period, and coprolites (fossilized dinosaur feces) have been found containing phytoliths of a variety of grasses, including grasses related to modern rice and bamboo. The appearance of mountains in the western United States during the Miocene and Pliocene epochs, a period of some 25 million years, created a continental climate favourable to the evolution of grasslands. Around 5 million years ago, during the Late Miocene in the New World and the Pliocene in the Old World, the first true grasslands appeared. Existing forest biomes declined, and grasslands became much more widespread. Grasslands are known to have existed in Europe throughout the Pleistocene (the last 1.8 million years). Following the Pleistocene ice ages (with their glacials and interglacials), grasslands expanded in the hotter, drier climates and began to become the dominant land feature worldwide. Because grasslands have existed for over 1.8 million years, they are highly variable. For example, steppe-tundra dominated in Northern and Central Europe, whereas a higher proportion of xerothermic grasslands occurred in the Mediterranean area. Within temperate Europe the range of types is quite wide, and the types also became unique through the exchange of species and genetic material between different biomes. Semi-natural grasslands first appeared when humans started farming: forests in Europe were cleared for agriculture, and ancient meadows and pastures formed on the areas that were suitable for cultivation. 
However, there is also evidence for the local persistence of natural grasslands in Europe, originally maintained by wild herbivores, throughout the pre-Neolithic Holocene. The removal of plants by grazing animals, and later by mowing farmers, allowed other plant species to co-exist, and plant biodiversity subsequently increased; species that already lived there adapted to the new conditions. Most of these grassland areas were eventually turned into arable fields and disappeared again, permanently becoming arable cropping fields as their organic matter steadily decreased. Nowadays, semi-natural grasslands tend to be located in areas that are unsuitable for agricultural farming. Ecology Biodiversity Grasslands dominated by unsown wild-plant communities ("unimproved grasslands") can be called either natural or "semi-natural" habitat. Although their plant communities are natural, their maintenance depends upon anthropogenic activities such as grazing and cutting regimes. The semi-natural grasslands contain many species of wild plants, including grasses, sedges, rushes, and herbs; 25 plant species per 100 square centimetres can be found. A European record, from a meadow in Estonia, is 76 species of plants in one square metre. Chalk downlands in England can support over 40 species per square metre. In many parts of the world, few examples have escaped agricultural improvement (fertilizing, weed killing, plowing, or re-seeding). For example, original North American prairie grasslands or lowland wildflower meadows in the UK are now rare and their associated wild flora equally threatened. Associated with the wild-plant diversity of the "unimproved" grasslands is usually a rich invertebrate fauna; there are also many species of birds that are grassland "specialists", such as the snipe and the little bustard. Because semi-natural grasslands are among the most species-rich ecosystems in the world and essential habitat for many specialists, including pollinators, many conservation activities have recently been directed at them. Agriculturally improved grasslands, which dominate modern intensive agricultural landscapes, are usually poor in wild plant species because the original diversity of plants has been destroyed by cultivation and by the use of fertilizers. Almost 90% of European semi-natural grasslands were lost during the 20th century for political and economic reasons; those in Western and Central Europe have almost disappeared completely, and only a few are left in Northern Europe. A large proportion of red-listed species are specialists of semi-natural grasslands and are affected by the agricultural landscape change of the last century. The original wild-plant communities have been replaced by sown monocultures of cultivated varieties of grasses and clovers, such as perennial ryegrass and white clover. In many parts of the world, "unimproved" grasslands are one of the most threatened types of habitat, and a target for acquisition by wildlife conservation groups or for special grants to landowners who are encouraged to manage them appropriately. Vegetation Grassland vegetation can vary considerably depending on the grassland type and on how strongly it is affected by human impact. Dominant trees for the semi-natural grassland are Quercus robur, Betula pendula, Corylus avellana and Crataegus, along with many kinds of herbs. 
In chalk grassland, the plants can vary from very tall to very short. Quite tall grasses can be found in North American tallgrass prairie, South American grasslands, and African savanna. Woody plants, shrubs or trees may occur on some grasslands—forming savannas, scrubby grassland or semi-wooded grassland, such as the African savannas or the Iberian dehesa. Grasses grow in great concentrations in climates where annual rainfall ranges between . The root systems of perennial grasses and forbs form complex mats that hold the soil in place. Fauna Grasslands support the greatest aggregations of large animals on Earth, including jaguars, African wild dogs, pronghorn, black-footed ferret, plains bison, mountain plover, African elephant, Sunda tiger, black rhino, white rhino, savanna elephant, greater one-horned rhino, Indian elephant and swift fox. Grazing animals, herd animals, and predators such as lions and cheetahs live in the grasslands of the African savanna. Mites, insect larvae, nematodes, and earthworms inhabit deep soil, which can reach underground in undisturbed grasslands on the richest soils of the world. These invertebrates, along with symbiotic fungi, extend the root systems, break apart hard soil, enrich it with urea and other natural fertilizers, trap minerals and water and promote growth. Some types of fungi make the plants more resistant to insect and microbial attacks. Grassland in all its forms supports a vast variety of mammals, reptiles, birds, and insects. Typical large mammals include the blue wildebeest, American bison, giant anteater, and Przewalski's horse. The plants and animals that live in grasslands are connected through a vast web of interactions. But the removal of key species—such as buffalo and prairie dogs within the American West—and the introduction of invasive species, like cane toads in northern Australia, have disrupted the balance in these ecosystems and damaged a number of other species. Grasslands are home to some of the most magnificent animals on the planet—elephants, bison, lions—and hunters have found them to be enticing prey. But when hunting is not controlled or is conducted illegally, species can become extinct. Ecosystem services Grasslands provide a range of marketed and non-marketed ecosystem services that are fundamental to the livelihoods of an estimated one billion people globally. Carbon sequestration Grasslands hold about twenty percent of global soil carbon stocks. Herbaceous (non-wooded) vegetation dominates grasslands, and carbon is stored in the roots and soil underground. Above-ground biomass carbon is relatively short-lived due to grazing, fire, and senescence. Grassland species have an extensive fibrous root system, with grasses often accounting for 60-80% of the biomass carbon in this ecosystem. This underground biomass can extend several meters below the surface and store abundant carbon in the soil, resulting in deep, fertile soils with high organic matter content. For this reason, soil carbon accounts for about 81% of the total ecosystem carbon in grasslands. The close link between soil carbon and underground biomass leads to similar responses of these carbon pools to fluctuations in annual precipitation and temperature on a broad spatial scale. Because plant productivity is limited by grassland precipitation, carbon stocks are highest in regions where precipitation is heaviest, such as the tallgrass prairie in the humid temperate region of the United States. 
Similarly, as annual temperatures rise, grassland carbon stocks decrease due to increased evapotranspiration. Grasslands have suffered large losses of organic carbon due to soil disturbances, vegetation degradation, fires, erosion, nutrient deficiencies, and water shortages. The type, frequency and intensity of the disturbance can play a key role in the soil organic carbon (SOC) balance of grasslands. Bedrock, irrigation practices, soil acidification, liming, and pasture management can all have potential impacts on grassland organic carbon stocks. Good grassland management can reverse historical soil carbon losses. The relationship of improved biodiversity to carbon storage is a subject of research. There is a lack of agreement on the amount of carbon that can be stored in grassland ecosystems. This is partly caused by the different methodologies applied to measure soil organic carbon and by limited datasets. Further, carbon accumulation in soils changes significantly over time, and point-in-time measurements produce an insufficient evidence base. Other ecosystem services include the promotion of genetic diversity, weather amelioration, and the provision of wildlife habitat. Degradation Grasslands are among the most threatened ecosystems. Global losses from grassland degradation are estimated to be over $7 billion per year. According to the International Union for Conservation of Nature (IUCN), the most significant threat to grasslands is human land use, especially agriculture and mining. The vulnerability of grasslands stems from a range of factors, such as misclassification, poor protection and cultivation. Causes Land use intensification Grasslands have an extensive history of human activity and disturbance. To feed a growing human population, most of the world's grasslands have been converted from natural landscapes to fields of corn, wheat or other crops. Grasslands that have remained largely intact thus far, such as the East African savannas, are in danger of being lost to agriculture. Grasslands are very sensitive to disturbances, such as people hunting and killing key species, or plowing the land to make more space for farms. Grassland vegetation is often a plagioclimax; it remains dominant in a particular area usually due to grazing, cutting, or natural or man-made fires, all discouraging colonization by and survival of tree and shrub seedlings. Some of the world's largest expanses of grassland are found in the African savanna, and these are maintained by wild herbivores as well as by nomadic pastoralists and their cattle, sheep or goats. Grasslands have an impact on climate change through slower decomposition rates of litter compared to forest environments. Grasslands may occur naturally or as a result of human activity. Hunting cultures around the world often set regular fires to maintain and extend grasslands and prevent fire-intolerant trees and shrubs from taking hold. The tallgrass prairies in the U.S. Midwest may have been extended eastward into Illinois, Indiana, and Ohio by human agency. Much grassland in northwest Europe developed after the Neolithic Period, when people gradually cleared the forest to create areas for raising their livestock. Climate change Grasslands often occur in areas where annual precipitation is between and , and average annual temperatures range from −5 to 20 °C. However, some grasslands occur in colder (−20 °C) and hotter (30 °C) climatic conditions. 
Grassland can exist in habitats that are frequently disturbed by grazing or fire, as such disturbance prevents the encroachment of woody species. Species richness is particularly high in grasslands of low soil fertility, such as serpentine barrens and calcareous grasslands, where woody encroachment is prevented because low nutrient levels in the soil inhibit the growth of forest and shrub species. Grasslands are also prone to recurrent burning, fueled by the accumulation of dead plant material and made worse by lack of rain. When not limited by other factors, increasing CO2 concentration in the air increases plant growth as well as water-use efficiency, which is very important in drier regions. However, the advantages of elevated CO2 are limited by factors including water availability and available nutrients, particularly nitrogen. Thus the effects of elevated CO2 on plant growth vary with local climate patterns, species adaptations to water limitation, and nitrogen availability. Studies indicate that nutrient depletion may happen faster in drier regions and that it interacts with factors such as plant community composition and grazing. Nitrogen deposition from air pollutants and increased mineralization from higher temperatures can increase plant productivity, but such increases often come with a reduction in biodiversity, as faster-growing plants outcompete others. A study of a California grassland found that global change may accelerate reductions in diversity, with forb species the most prone to this process. Afforestation or introduction of invasive species Misguided afforestation efforts, for example as part of the global effort to increase carbon sequestration, can harm grasslands and their core ecosystem services. Forest-centric restoration efforts risk misreading and misclassifying landscapes. A map created by the World Resources Institute in collaboration with the IUCN identifies 2 billion hectares for potential forest restoration; it has been criticised for including 900 million hectares of grasslands. Non-native grasses are expected to continue to outperform native species under the warmer and drier conditions that climate change brings to many grasslands. Management The type of land management used in grasslands can also lead to grassland loss or degradation. Many grasslands and other open ecosystems depend on disturbances such as wildfires, controlled burns or grazing to persist, although this subject is still controversial. A study in Brazilian Subtropical Highland Grasslands found that grasslands without traditional land management—which uses fire every two years and extensive cattle grazing—can disappear within 30 years. The study showed that inside protected areas, where fire is not allowed and cattle grazing is banned, grasslands were quickly replaced by shrubs (shrub encroachment). Types of degradation Land cover change Land cover has changed continually over the years; the following relates to changes between 1960 and 2015. There has been a decrease in semi-natural grasslands and an increase in areas with arable land, forest and land used for infrastructure and buildings. 
In 1960 the largest share of the land, 49.7%, was covered with forest, and there was more semi-natural grassland (18.8%) than arable land (15.8%). By 2015 this had changed markedly: forest cover had increased (50.8%) and arable land had also increased (20.4%), but semi-natural grassland cover had decreased, although it still covered a large share of the land (10.6%). A quarter of semi-natural grassland was lost through intensification, i.e. it was converted into arable or pasture land and forests. Intensification is more likely to occur in flat semi-natural grasslands, especially if the soil is fertile. Conversely, grasslands on drought-prone or less productive land are more likely to persist as semi-natural grasslands than those on fertile soil and flat terrain. The accessibility of the land is also important: land located near a road, for instance, is easier to fertilize. With the development of technology, it is becoming increasingly easy to cultivate land with a steeper gradient, to the detriment of grasslands. The management of grasslands is also changing continually: mineral fertilizers are used more intensively, borders and field edges are removed to enlarge fields, and terrain is leveled to facilitate the use of agricultural machinery. The professional study of dry grasslands falls under the category of rangeland management, which focuses on ecosystem services associated with the grass-dominated arid and semi-arid rangelands of the world. Rangelands account for an estimated 70% of the earth's landmass; thus, many cultures, including that of the United States, depend on the economic benefits the world's grasslands offer, from grazing animals and tourism to ecosystem services such as clean water and air, and energy extraction. Vast areas of grassland are affected by woody encroachment, the expansion of woody plants at the expense of the herbaceous layer. Woody encroachment is caused by a combination of human impacts (e.g. fire exclusion, overstocking and the resulting overgrazing) and environmental factors (e.g. increased CO2 levels in the atmosphere). It can have severe negative consequences for key ecosystem services such as land productivity and groundwater recharge. Conservation and restoration Despite growing recognition of the importance of grasslands, understanding of restoration options remains limited. The cost of grassland restoration is highly variable, and the corresponding data are scarce. Successful grassland restoration has several dimensions, including recognition in policy, standardisation of indicators of degradation, scientific innovation, knowledge transfer and data sharing. Restoration methods and measures include prescribed fires; appropriate management of livestock and wild herbivores (in light of the land use intensification driven by global food demand, grassland land use practices may need to be adjusted to better support key ecosystem services); tree cutting; shrub removal; invasive species control; and the reintroduction of native grasses and forbs via seeding or transplanting, for which a main challenge is overcoming seed limitation. For the period 2021–2030 the United Nations General Assembly has proclaimed the UN Decade on Ecosystem Restoration, involving a joint resolution by over 70 countries. It is led by the United Nations Environment Programme and the Food and Agriculture Organization. 
Types of grasslands Classifications of grassland Grassland types by Schimper (1898, 1903): Meadow (hygrophilous or tropophilous grassland) Steppe (xerophilous grassland) Savannah (xerophilous grassland containing isolated trees) Grassland types by Ellenberg and Mueller-Dombois (1967): Formation-class V. Terrestrial herbaceous communities Savannas and related grasslands (tropical or subtropical grasslands and parklands) Steppes and related grasslands (e.g. North American "prairies" etc.) Meadows, pastures or related grasslands Sedge swamps and flushes Herbaceous and half-woody salt swamps Forb vegetation Grassland types by Laycock (1979): Tallgrass (true) prairie Shortgrass prairie Mixed-grass prairie Shrub steppe Annual grassland Desert (arid) grassland High mountain grassland General grassland types Tropical and subtropical These grasslands are classified as the tropical and subtropical grasslands, savannas and shrublands biome. Annual rainfall for this grassland type is between 90 and 150 centimeters. Grasses and scattered trees are common in this biome, as are large mammals such as wildebeest (Connochaetes taurinus) and zebra (Equus zebra). Notable tropical and subtropical grasslands include the Llanos grasslands of South America. Temperate Mid-latitude grasslands include the prairie and Pacific grasslands of North America, the Pampas of Argentina, Brazil and Uruguay, calcareous downland, and the steppes of Europe. They are classified with temperate savannas and shrublands as the temperate grasslands, savannas, and shrublands biome. Temperate grasslands are home to many large herbivores, such as bison, gazelles, zebras, rhinoceroses, and wild horses. Carnivores like lions, wolves, cheetahs and leopards are also found in temperate grasslands. Other animals of this region include deer, prairie dogs, mice, jack rabbits, skunks, coyotes, snakes, foxes, owls, badgers, blackbirds, grasshoppers, meadowlarks, sparrows, quails, hawks and hyenas. Flooded Grasslands that are flooded seasonally or year-round, like the Everglades of Florida, the Pantanal of Brazil, Bolivia and Paraguay or the Esteros del Ibera in Argentina, are classified with flooded savannas as the flooded grasslands and savannas biome and occur mostly in the tropics and subtropics. The species that live in these grasslands are well adapted to the hydrologic regimes and soil conditions. The Everglades—the world's largest rain-fed flooded grassland—is rich in 11,000 species of seed-bearing plants, 25 species of orchids, 300 bird species, and 150 fish species. Water-meadows are grasslands that are deliberately flooded for short periods. Montane High-altitude grasslands located on high mountain ranges around the world, like the Páramo of the Andes Mountains. They are part of the montane grasslands and shrublands biome and can be tropical, subtropical, or temperate. The plants and animals found in tropical montane grasslands are adapted to cool, wet conditions and intense sunlight. Tundra grasslands Similar to montane grasslands, polar Arctic tundra can have grasses, but high soil moisture means that few tundras are grass-dominated today. However, during the Pleistocene glacial periods (commonly referred to as ice ages), a grassland known as steppe-tundra or mammoth steppe occupied large areas of the Northern Hemisphere. 
These areas were very cold and arid and featured sub-surface permafrost (hence tundra), but were nevertheless productive grassland ecosystems supporting a wide variety of fauna. As the temperature increased and the climate became wetter at the beginning of the Holocene, much of the mammoth steppe transitioned to forest, while the drier parts in central Eurasia remained grassland, becoming the modern Eurasian steppe. Desert and xeric Also called desert grasslands, these are sparse grassland ecoregions located in the deserts and xeric shrublands biome. Temperature extremes and low rainfall characterise these grasslands, and their plants and animals are well adapted to minimize water loss. Grassland ecoregions are grouped into the temperate grasslands, savannas, and shrublands biome and the tropical and subtropical grasslands, savannas, and shrublands biome.
Physical sciences
Terrestrial features
null
198843
https://en.wikipedia.org/wiki/Savanna
Savanna
A savanna or savannah is a mixed woodland-grassland (i.e. grassy woodland) biome and ecosystem characterised by the trees being sufficiently widely spaced that the canopy does not close. The open canopy allows sufficient light to reach the ground to support an unbroken herbaceous layer consisting primarily of grasses. Four savanna forms exist: savanna woodland, where trees and shrubs form a light canopy; tree savanna, with scattered trees and shrubs; shrub savanna, with distributed shrubs; and grass savanna, where trees and shrubs are mostly absent. Savannas maintain an open canopy despite a high tree density. It is often believed that savannas feature widely spaced, scattered trees; however, in many savannas, tree densities are higher and trees are more regularly spaced than in forests. The South American savanna types cerrado sensu stricto and cerrado dense typically have tree densities similar to or higher than those found in South American tropical forests, with savanna ranging from 800 to 3300 trees per hectare (trees/ha) and adjacent forests with 800–2000 trees/ha. Similarly, Guinean savanna has 129 trees/ha, compared to 103 for riparian forest, while Eastern Australian sclerophyll forests have average tree densities of approximately 100 per hectare, comparable to savannas in the same region. Savannas are also characterised by seasonal water availability, with the majority of rainfall confined to one season. They are associated with several types of biomes and frequently lie in a transitional zone between forest and desert or grassland, most often between desert and forest. Savanna covers approximately 20% of the Earth's land area. Unlike the prairies in North America and steppes in Eurasia, which feature cold winters, savannas are mostly located in areas having warm to hot climates, such as in Africa, Australia, South America, and India. Etymology The word derives from the Spanish sabana, itself a loanword from Taíno, in which it means "treeless grassland" in the West Indies. The letter b in Spanish, when positioned in the middle of a word, is pronounced almost like an English v; hence the change of grapheme when transcribed into English. The word originally entered English as "the Zauana" in a description of the "ilands of the kinges of Spayne" from 1555, equivalent in the orthography of the times to zavana (see history of V). Peter Martyr reported it as the local name for the plain around Comagre, the court of the cacique Carlos in present-day Panama. The accounts are inexact, but this is usually placed in present-day Madugandí (Bancroft, History of Central America, 1501–1530, A.L. Bancroft & Co., San Francisco, 1882) or at points on the nearby Guna Yala coast opposite Ustupo or on Point Mosquitos. These areas are now either given over to modern cropland or jungle. Distribution Many grassy landscapes and mixed communities of trees, shrubs, and grasses were described as savanna before the middle of the 19th century, when the concept of a tropical savanna climate became established. The Köppen climate classification system was strongly influenced by the effects of temperature and precipitation upon tree growth, and its oversimplified assumptions resulted in a tropical savanna classification concept that considered the savanna a "climatic climax" formation. 
The common usage to describe vegetation now conflicts with this simplified yet widespread climatic concept. The divergence has sometimes caused areas such as the extensive savannas north and south of the Congo and Amazon Rivers to be excluded from mapped savanna categories. In different parts of North America, the word "savanna" has been used interchangeably with "barrens", "prairie", "glade", "grassland" and "oak opening". Different authors have defined the lower limit of savanna tree coverage as 5–10% and the upper limit as 25–80% of an area. Two factors common to all savanna environments are rainfall variation from year to year and dry-season wildfires. In the Americas, e.g. in Belize, Central America, savanna vegetation is similar from Mexico to South America and the Caribbean. The distinction between woodland and savanna is vague, and the two can therefore be combined into a single biome, as both woodlands and savannas feature open-canopied trees whose crowns do not usually interlink (mostly forming 25–60% cover). Over many large tropical areas, the dominant biome (forest, savanna or grassland) cannot be predicted from climate alone, as historical events, for example fire activity, also play a key role. Indeed, in some areas multiple stable biomes are possible. The annual rainfall ranges from to per year, with the precipitation concentrated in six to eight months of the year, followed by a period of drought. Savannas may at times be classified as forests. In climatic geomorphology it has been noted that many savannas occur in areas of pediplains and inselbergs. It has been posited that river incision is not prominent, and that rivers in savanna landscapes erode more by lateral migration. Flooding and associated sheet wash have been proposed as the dominant erosion processes in savanna plains. Ecology The savannas of tropical America comprise broadleaved trees such as Curatella, Byrsonima, and Bowdichia, with grasses such as Leersia and Paspalum. The bean relative Prosopis is common in the Argentinian savannas. In the East African savannas, Acacia, Combretum, baobabs, Borassus, and Euphorbia are common woody plants. Drier savannas there feature spiny shrubs and grasses such as Andropogon, Hyparrhenia, and Themeda. Wetter savannas include Brachystegia trees and elephant grass (Pennisetum purpureum). West African savanna trees include Anogeissus, Combretum, and Strychnos. Indian savannas are mostly cleared, but the remaining reserves feature Acacia, Mimosa, and Zizyphus over a grass cover comprising Sehima and Dichanthium. The Australian savanna abounds in sclerophyllous evergreen vegetation, including eucalypts as well as Acacia, Bauhinia, and Pandanus, with grasses such as Heteropogon and kangaroo grass (Themeda). Animals in the African savanna generally include the giraffe, elephant, buffalo, zebra, gnu, hippopotamus, rhinoceros, and antelope, which rely on grass and/or tree foliage to survive. In the Australian savanna, mammals in the family Macropodidae predominate, such as kangaroos and wallabies, though cattle, horses, camels, donkeys and the Asian water buffalo, among others, have been introduced by humans. Threats It is estimated that less than three percent of savanna ecosystems can be classified as highly intact. Reasons for savanna degradation are manifold, as outlined below. Changes in fire management Savannas are subject to regular wildfires, and the ecosystem appears to be the result of human use of fire. 
For example, Native Americans created the pre-Columbian woodlands of North America by periodically burning where fire-resistant plants were the dominant species. Fire-stick farming appears to have been responsible for the widespread occurrence of savanna in tropical Australia and New Guinea, and savannas in India are a result of human fire use. The maquis shrub savannas of the Mediterranean region were likewise created and maintained by anthropogenic fire. Intentional controlled burns typically create fires confined to the herbaceous layer that do little long-term damage to mature trees, preventing the more catastrophic wildfires that could do much more damage. However, these fires either kill or suppress tree seedlings, thus preventing the establishment of a continuous tree canopy which would prevent further grass growth. Prior to European settlement, aboriginal land use practices, including fire, influenced vegetation and may have maintained and modified savanna flora. It has been suggested by many authors that aboriginal burning created a structurally more open savanna landscape. Aboriginal burning certainly created a habitat mosaic that probably increased biodiversity and changed the structure of woodlands and the geographic range of numerous woodland species. It has also been suggested (Archer, S., 1994, "Woody plant encroachment into southwestern grasslands and savannas: Rates, patterns and proximate causes", pp. 13–68 in Vavra, Laycock and Pieper (eds.), Ecological Implications of Livestock Herbivory in the West, Society for Range Management, Denver) that with the removal or alteration of traditional burning regimes many savannas are being replaced by forest and shrub thickets with little herbaceous layer. The consumption of herbage by introduced grazers in savanna woodlands has led to a reduction in the amount of fuel available for burning and resulted in fewer and cooler fires. The introduction of exotic pasture legumes has also reduced the need to burn to produce a flush of green growth, because legumes retain high nutrient levels throughout the year, and because fires can have a negative impact on legume populations, which causes a reluctance to burn. Grazing and browsing animals Closed forest types such as broadleaf forests and rainforests are usually not grazed, owing to their closed structure precluding grass growth and hence offering little opportunity for grazing. In contrast, the open structure of savannas allows the growth of a herbaceous layer and is commonly used for grazing domestic livestock. As a result, many of the world's savannas have undergone change as a result of grazing by sheep, goats and cattle, ranging from changes in pasture composition to woody plant encroachment. The removal of grass by grazing affects the woody plant component of woodland systems in two major ways. Grasses compete with woody plants for water in the topsoil, and removal by grazing reduces this competitive effect, potentially boosting tree growth. In addition, the removal of fuel reduces both the intensity and the frequency of fires which may otherwise control woody plant species. Grazing animals can have a more direct effect on woody plants by browsing palatable woody species. There is evidence that unpalatable woody plants have increased under grazing in savannas. Grazing also promotes the spread of weeds in savannas by the removal or reduction of the plants which would normally compete with potential weeds and hinder their establishment. 
In addition to this, cattle and horses are implicated in the spread of the seeds of weed species such as prickly acacia (Acacia nilotica) and stylo (Stylosanthes species). Alterations in savanna species composition brought about by grazing can alter ecosystem function and are exacerbated by overgrazing and poor land management practices. Introduced grazing animals can also affect soil condition through physical compaction and break-up of the soil caused by their hooves, and through the erosion effects caused by the removal of protective plant cover. Such effects are most likely to occur on land subjected to repeated and heavy grazing. The effects of overstocking are often worst on soils of low fertility and in low-rainfall areas below 500 mm, as most soil nutrients in these areas tend to be concentrated in the surface, so any movement of soil can lead to severe degradation. Alteration in soil structure and nutrient levels affects the establishment, growth and survival of plant species and in turn can lead to a change in woodland structure and composition. That said, the impact of grazing animals can be reduced: studies of elephant impacts on savannas, for example, have found that the overall impact is reduced in the presence of rainfall and fences. Tree clearing Large areas of Australian and South American savannas have been cleared of trees, and this clearing continues today. For example, land clearing and fracking threaten the savannas of the Northern Territory, Australia, and 480,000 ha of savanna were being cleared annually in Queensland in the 2000s, primarily to improve pasture production. Substantial savanna areas have been cleared of woody vegetation, and much of the area that remains today is vegetation that has been disturbed by either clearing or thinning at some point in the past. Clearing is carried out by the grazing industry in an attempt to increase the quality and quantity of feed available for stock and to improve the management of livestock. The removal of trees from savanna land removes the competition for water from the grasses present and can lead to a two- to fourfold increase in pasture production, as well as improving the quality of the feed available. Since stock carrying capacity is strongly correlated with herbage yield, there can be major financial benefits from the removal of trees. Clearing can also assist with grazing management: regions of dense tree and shrub cover harbor predators, leading to increased stock losses, while woody plant cover hinders mustering in both sheep and cattle areas. A number of techniques have been employed to clear or kill woody plants in savannas. Early pastoralists used felling and girdling, the removal of a ring of bark and sapwood, as a means of clearing land. In the 1950s arboricides suitable for stem injection were developed, and war-surplus heavy machinery was made available and used either for pushing timber or for pulling it with a chain and ball strung between two machines. These two new methods of timber control, along with the introduction and widespread adoption of several new pasture grasses and legumes, promoted a resurgence in tree clearing. The 1980s also saw the release of soil-applied arboricides, notably tebuthiuron, which could be used without cutting and injecting each individual tree. In many ways "artificial" clearing, particularly pulling, mimics the effects of fire and, in savannas adapted to regeneration after fire as most Queensland savannas are, there is a response similar to that after fire. 
Tree clearing in many savanna communities, although causing a dramatic reduction in basal area and canopy cover, often leaves a high percentage of woody plants alive, either as seedlings too small to be affected or as plants capable of re-sprouting from lignotubers and broken stumps. A population of woody plants equal to half or more of the original number often remains following pulling of eucalypt communities, even if all the trees over 5 metres are uprooted completely. Exotic plant species A number of exotic plant species have been introduced to savannas around the world. Amongst the woody plant species are serious environmental weeds such as prickly acacia (Acacia nilotica), rubbervine (Cryptostegia grandiflora), mesquite (Prosopis spp.), lantana (Lantana camara and L. montevidensis) and prickly pear (Opuntia spp.). A range of herbaceous species have also been introduced to these woodlands, either deliberately or accidentally, including Rhodes grass and other Chloris species, buffel grass (Cenchrus ciliaris), giant rat's tail grass (Sporobolus pyramidalis), parthenium (Parthenium hysterophorus), and stylos (Stylosanthes spp.) and other legumes. These introductions have the potential to significantly alter the structure and composition of savannas worldwide, and have already done so in many areas, through a number of processes including altering the fire regime, increasing grazing pressure, competing with native vegetation and occupying previously vacant ecological niches. Other introduced plant species include white sage, spotted cactus, cottonseed and rosemary. Climate change Human-induced climate change resulting from the greenhouse effect may alter the structure and function of savannas. Some authors have suggested that savannas and grasslands may become even more susceptible to woody plant encroachment as a result of greenhouse-induced climate change. However, a recent case described a savanna increasing its range at the expense of forest in response to climate variation, and potential exists for similar rapid, dramatic shifts in vegetation distribution as a result of global climate change, particularly at the ecotones that savannas so often represent. Savanna ecoregions Savannas can be broadly divided into open savanna, where grass prevails and trees are rare, and wooded savanna, where trees are denser, bordering open woodland or forest. Specific savanna ecoregions of several different types include: Tropical savannas are classified with tropical and subtropical grasslands and shrublands as the tropical and subtropical grasslands, savannas, and shrublands biome. The savannas of Africa, including the Serengeti, famous for its wildlife, are typical of this type. The Brazilian savanna (Cerrado), known for its exotic and varied flora, is also included in this category. Other examples include the Kimberley tropical savanna, Central Zambezian miombo woodlands, Guinean forest–savanna mosaic, Cape York Peninsula tropical savanna, Somali Acacia–Commiphora bushlands and thickets, Terai–Duar savanna and grasslands and the Victoria Basin forest–savanna mosaic. Subtropical and temperate savannas are mid-latitude savannas with wetter summers and drier winters. They are classified with temperate savannas and shrublands as the temperate grasslands, savannas, and shrublands biome, which for example covers much of the plains of southeastern Australia, northern India, Southern Africa, southeastern Argentina and Uruguay. 
Examples of subtropical and temperate savannas include the Southeast Australia temperate savanna, Argentine Espinal, Pampas, Cumberland Plain Woodland, Southern Cone Mesopotamian savanna, New England Peppermint Grassy Woodland and the Uruguayan savanna. Mediterranean savannas are mid-latitude savannas in Mediterranean climate regions, with mild, rainy winters and hot, dry summers, part of the Mediterranean forests, woodlands, and scrub biome. The oak tree savannas of California, part of the California chaparral and woodlands ecoregion, fall into this category, as does the Temperate Grassland of South Australia, which features eucalypts. Parts of the Middle East steppe and the Eastern Mediterranean conifer–sclerophyllous–broadleaf forests may also feature savanna-like landscapes. Flooded savannas are savannas that are flooded seasonally or year-round. They are classified with flooded grasslands as the flooded grasslands and savannas biome, which occurs mostly in the tropics and subtropics. Examples include the Everglades, Mesopotamian Marshes, Pantanal, Nile Delta flooded savanna, Lake Chad flooded savanna, Zambezian flooded grasslands, and the Sudd. Montane savannas are mid- to high-altitude savannas, located in a few spots around the world's high mountain regions, part of the montane grasslands and shrublands biome. The Bogotá savanna, located at an average altitude of on the Altiplano Cundiboyacense, Eastern Ranges of the Andes, is an example of a montane savanna. The savannas of the Angolan Scarp savanna and woodlands ecoregion are a lower-altitude example, up to . Other examples include the Al Hajar montane woodlands and the southern part of the Eastern Anatolian montane steppe.
Physical sciences
Grasslands
null
198984
https://en.wikipedia.org/wiki/Electrical%20conductor
Electrical conductor
In physics and electrical engineering, a conductor is an object or type of material that allows the flow of charge (electric current) in one or more directions. Materials made of metal are common electrical conductors. Electric current is generated by the flow of negatively charged electrons, positively charged holes, and, in some cases, positive or negative ions. In order for current to flow within a closed electrical circuit, one charged particle does not need to travel from the component producing the current (the current source) to those consuming it (the loads). Instead, the charged particle simply needs to nudge its neighbor a finite amount, which will nudge its neighbor, and so on until a particle is nudged into the consumer, thus powering it. Essentially what is occurring is a long chain of momentum transfer between mobile charge carriers; the Drude model of conduction describes this process more rigorously. This momentum-transfer model makes metal an ideal choice for a conductor; metals, characteristically, possess a delocalized sea of electrons which gives the electrons enough mobility to collide and thus effect a momentum transfer. As discussed above, electrons are the primary movers in metals; however, other devices, such as the cationic electrolyte(s) of a battery or the mobile protons of the proton conductor of a fuel cell, rely on positive charge carriers. Insulators are non-conducting materials with few mobile charges that support only insignificant electric currents. Resistance and conductance The resistance of a given conductor depends on the material it is made of and on its dimensions. For a given material, the resistance is inversely proportional to the cross-sectional area. For example, a thick copper wire has lower resistance than an otherwise-identical thin copper wire. Also, for a given material, the resistance is proportional to the length; for example, a long copper wire has higher resistance than an otherwise-identical short copper wire. The resistance R and conductance G of a conductor of uniform cross section can therefore be computed as R = ρℓ/A and G = σA/ℓ, where ℓ is the length of the conductor, measured in metres [m], A is the cross-sectional area of the conductor measured in square metres [m2], σ (sigma) is the electrical conductivity measured in siemens per metre (S·m−1), and ρ (rho) is the electrical resistivity (also called specific electrical resistance) of the material, measured in ohm-metres (Ω·m). The resistivity and conductivity are proportionality constants, and therefore depend only on the material the wire is made of, not the geometry of the wire. Resistivity and conductivity are reciprocals: ρ = 1/σ. Resistivity is a measure of the material's ability to oppose electric current. This formula is not exact: it assumes the current density is totally uniform in the conductor, which is not always true in practical situations. However, this formula still provides a good approximation for long thin conductors such as wires. Another situation for which this formula is not exact is with alternating current (AC), because the skin effect inhibits current flow near the center of the conductor. Then, the geometrical cross-section is different from the effective cross-section in which current actually flows, so the resistance is higher than expected. Similarly, if two conductors carrying AC current are near each other, their resistances increase due to the proximity effect. 
At commercial power frequency, these effects are significant for large conductors carrying large currents, such as busbars in an electrical substation, or large power cables carrying more than a few hundred amperes. Aside from the geometry of the wire, temperature also has a significant effect on the efficacy of conductors. Temperature affects conductors in two main ways. The first is that materials may expand under the application of heat. The amount by which the material expands is governed by the thermal expansion coefficient specific to the material. Such an expansion (or contraction) will change the geometry of the conductor and therefore its characteristic resistance. However, this effect is generally small, on the order of 10−6. An increase in temperature will also increase the number of phonons generated within the material. A phonon is essentially a lattice vibration, or rather a small, harmonic kinetic movement of the atoms of the material. Much like the shaking of a pinball machine, phonons serve to disrupt the path of electrons, causing them to scatter. This electron scattering impedes the drift of the electrons and therefore decreases the total amount of current transferred. Conductor materials Conduction materials include metals, electrolytes, superconductors, semiconductors, plasmas and some nonmetallic conductors such as graphite and conductive polymers. Copper has a high conductivity. Annealed copper is the international standard to which all other electrical conductors are compared; the International Annealed Copper Standard conductivity is , although ultra-pure copper can slightly exceed 101% IACS. The main grade of copper used for electrical applications, such as building wire, motor windings, cables and busbars, is electrolytic-tough-pitch (ETP) copper (CW004A or ASTM designation C100140). If high-conductivity copper must be welded or brazed or used in a reducing atmosphere, then oxygen-free high-conductivity copper (CW008A or ASTM designation C10100) may be used. Because of its ease of connection by soldering or clamping, copper is still the most common choice for most light-gauge wires. Silver is 6% more conductive than copper, but due to cost it is not practical in most cases. However, it is used in specialized equipment, such as satellites, and as a thin plating to mitigate skin-effect losses at high frequencies. Famously, of silver on loan from the United States Treasury were used in the making of the calutron magnets during World War II due to wartime shortages of copper. Aluminum wire is the most common metal in electric power transmission and distribution. Although it has only 61% of the conductivity of copper by cross-sectional area, its lower density makes it twice as conductive by mass. As aluminum is roughly one-third the cost of copper by weight, the economic advantages are considerable when large conductors are required. The disadvantages of aluminum wiring lie in its mechanical and chemical properties. It readily forms an insulating oxide, making connections heat up. Its larger coefficient of thermal expansion than the brass materials used for connectors causes connections to loosen. Aluminum can also "creep", slowly deforming under load, which also loosens connections. These effects can be mitigated with suitably designed connectors and extra care in installation, but they have made aluminum building wiring unpopular past the service drop. 
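To make the copper–aluminum comparison concrete, here is a minimal Python sketch of the resistance relation R = ρℓ/A given above; the resistivity values are typical room-temperature handbook figures and are assumptions of this example, not values taken from this article.

```python
# DC resistance of a uniform conductor: R = rho * length / area.
# Typical room-temperature resistivities in ohm-metres (handbook values).
RHO_COPPER = 1.68e-8
RHO_ALUMINUM = 2.65e-8

def resistance(rho_ohm_m, length_m, area_m2):
    """Return the DC resistance in ohms of a conductor of uniform cross-section."""
    return rho_ohm_m * length_m / area_m2

# 100 m of 2.5 mm^2 wire (2.5e-6 m^2), a common building-wire size:
area = 2.5e-6
print(resistance(RHO_COPPER, 100.0, area))    # ~0.67 ohm
print(resistance(RHO_ALUMINUM, 100.0, area))  # ~1.06 ohm

# Aluminum needs roughly 1.6x the cross-section to match copper's resistance,
# but its much lower density still makes it the lighter conductor per ohm.
print(resistance(RHO_ALUMINUM, 100.0, 1.6 * area))  # ~0.66 ohm
```

The last line illustrates why aluminum dominates overhead transmission despite its lower conductivity: matching copper's resistance simply requires a somewhat thicker, but still lighter and cheaper, conductor.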
Organic compounds such as octane, which has 8 carbon atoms and 18 hydrogen atoms, cannot conduct electricity. Oils are hydrocarbons; carbon, having the property of tetravalency, forms covalent bonds with other elements such as hydrogen and does not lose or gain electrons, and thus does not form ions. Covalent bonds are simply the sharing of electrons, so there is no separation of ions when electricity is passed through such a compound. Liquids made of compounds with only covalent bonds cannot conduct electricity. Certain organic ionic liquids, by contrast, can conduct an electric current. While pure water is not an electrical conductor, even a small amount of ionic impurities, such as salt, can rapidly transform it into a conductor. Wire size Wires are measured by their cross-sectional area. In many countries, the size is expressed in square millimetres. In North America, conductors are measured by American wire gauge for smaller ones and by circular mils for larger ones. Conductor ampacity The ampacity of a conductor, that is, the amount of current it can carry, is related to its electrical resistance: a lower-resistance conductor can carry a larger current. The resistance, in turn, is determined by the material the conductor is made from (as described above) and the conductor's size. For a given material, conductors with a larger cross-sectional area have less resistance than conductors with a smaller cross-sectional area. For bare conductors, the ultimate limit is the point at which the power lost to resistance causes the conductor to melt. Aside from fuses, however, most conductors in the real world are operated far below this limit. For example, household wiring is usually insulated with PVC insulation rated to operate only to about 60 °C, so the current in such wires must be limited to keep the copper conductor from being heated above 60 °C, which would create a risk of fire. Other, more expensive insulation such as Teflon or fiberglass may allow operation at much higher temperatures. Isotropy If an electric field is applied to a material and the resulting induced electric current is in the same direction, the material is said to be an isotropic electrical conductor. If the resulting electric current is in a different direction from the applied electric field, the material is said to be an anisotropic electrical conductor.
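As a worked example of the wire-size conventions mentioned above, the American wire gauge follows a fixed geometric progression of diameters. The short Python sketch below implements the standard AWG diameter relation; the function names are illustrative rather than drawn from any library.

```python
import math

def awg_diameter_mm(gauge):
    """Diameter in mm for an AWG gauge number n, per the standard relation
    d(n) = 0.127 mm * 92**((36 - n) / 39)."""
    return 0.127 * 92 ** ((36 - gauge) / 39)

def awg_area_mm2(gauge):
    """Cross-sectional area in mm^2 for an AWG gauge number."""
    diameter = awg_diameter_mm(gauge)
    return math.pi * diameter ** 2 / 4

for gauge in (14, 12, 10):
    print(gauge, round(awg_diameter_mm(gauge), 3), round(awg_area_mm2(gauge), 2))
# 14 -> ~1.628 mm, ~2.08 mm^2
# 12 -> ~2.053 mm, ~3.31 mm^2
# 10 -> ~2.588 mm, ~5.26 mm^2
```

Note how each step of three gauge numbers roughly doubles the cross-sectional area, which (by R = ρℓ/A) roughly halves the resistance per unit length.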
Physical sciences
Electrical circuits
Physics
199013
https://en.wikipedia.org/wiki/Snow%20goose
Snow goose
The snow goose (Anser caerulescens) is a species of goose native to North America. Both white and dark morphs exist, the latter often known as blue goose. Its name derives from the typically white plumage. The species was previously placed in the genus Chen, but is now typically included in the "gray goose" genus Anser. Snow geese breed north of the timberline in Greenland, Canada, Alaska, and the northeastern tip of Siberia, and spend winters in warm parts of North America from southwestern British Columbia through parts of the United States to Mexico. Taxonomy In 1750 the English naturalist George Edwards included an illustration and a description of the snow goose in the third volume of his A Natural History of Uncommon Birds. He used the English name "The blue-winged goose". Edwards based his hand-coloured etching on a preserved specimen that had been brought to London from the Hudson Bay area of Canada by James Isham. When in 1758 the Swedish naturalist Carl Linnaeus updated his Systema Naturae for the tenth edition, he placed the snow goose with the ducks and geese in the genus Anas. Linnaeus included a brief description, coined the binomial name Anas caerulescens and cited Edwards' work. The snow goose is now placed in the genus Anser that was introduced by the French zoologist Mathurin Jacques Brisson in 1760. The scientific name is from the Latin anser, "goose", and caerulescens, "bluish", derived from caeruleus, "dark blue". The snow goose is the sister species to Ross's goose (Anser rossii). Two subspecies are recognised: A. c. caerulescens (Linnaeus, 1758) – lesser snow goose – breeds in northeast Siberia, north Alaska and northwest Canada, winters in south USA, north Mexico and Japan A. c. atlanticus (Kennard, 1927) – greater snow goose – breeds in northeast Canada and northwest Greenland, winters in northeast USA The greater snow goose is distinguished from the nominate form by being slightly larger. It nests farther north and east. The lesser snow goose can be found in two color phases, the normal white-colored animals and a dark gray-colored "blue" phase. The greater snow goose is rarely seen in a blue phase. Description The snow goose has two color plumage morphs, white (snow) or gray/blue (blue), thus the common description as "snows" and "blues". White-morph birds are white except for black wing tips, but blue-morph geese have bluish-gray plumage replacing the white except on the head, neck and tail tip. The immature blue phase is drab or slate-gray with little to no white on the head, neck, or belly. Both snow and blue phases have rose-red feet and legs, and pink bills with black tomia ("cutting edges"), giving them a black "grin patch". The colors are not as bright on the feet, legs, and bill of immature birds. The head can be stained rusty-brown from minerals in the soil where they feed. They are very vocal and can often be heard from more than a mile away. White- and blue-morph birds interbreed and the offspring may be of either morph. These two colors of geese were once thought to be separate species; since they interbreed and are found together throughout their ranges, they are now considered two color phases of the same species. The color phases are genetically controlled. The dark phase results from a single dominant gene and the white phase is homozygous recessive. When choosing a mate, young birds will most often select a mate that resembles their parents' coloring. If the birds were hatched into a mixed pair, they will mate with either color phase. 
The species is divided into two subspecies on the basis of size and geography, though overlap in size has led some to question the division. The smaller subspecies, the lesser snow goose (A. c. caerulescens), lives from central northern Canada to the Bering Strait area. The lesser snow goose stands tall and weighs . The larger subspecies, the greater snow goose (A. c. atlanticus), nests in northeastern Canada. It averages about and , but can weigh up to . The wingspan for both subspecies ranges from . Breeding Long-term pair bonds are usually formed in the second year, although breeding does not usually start until the third year. Females are strongly philopatric, meaning they will return to the place they hatched to breed. Snow geese often nest in colonies. Nesting usually begins at the end of May or during the first few days of June, depending on snow conditions. The female selects a nest site and builds the nest on an area of high ground. The nest is a shallow depression lined with plant material and may be reused from year to year. After the female lays the first of three to five eggs, she lines the nest with down. The female incubates for 22 to 25 days, and the young leave the nest within a few hours of hatching. The young feed themselves, but are protected by both parents. After 42 to 50 days they can fly, but they remain with their family until they are two to three years old. Where snow geese and Ross's geese breed together, as at La Pérouse, they hybridize at times, and the hybrids are fertile. Rare hybrids with the greater white-fronted goose, Canada goose, and cackling goose have been observed. Migration Snow geese breed from late May to mid-August, but they leave their nesting areas and spend more than half the year on their migration to and from warmer wintering areas. During spring migration, large flocks of snow geese fly very high and migrate in large numbers along narrow corridors, more than from traditional wintering areas to the tundra. The lesser snow goose travels through the Central Flyway, Mississippi Flyway, and Pacific Flyway across prairie and rich farmland to wintering grounds on grassland and agricultural fields across the United States and Mexico, especially the Gulf coastal plain. The larger and less numerous greater snow goose travels through the Atlantic Flyway and winters in a relatively more restricted range on the Atlantic coastal plain. Traditionally, lesser snow geese wintered in coastal marsh areas, where they used their short but strong bills to dig up the roots of marsh grasses for food. However, they have since shifted inland towards agricultural areas, a move that likely contributed to increased survival rates and drove the unsustainable population increase of the 20th century, which in turn leads to overgrazing on the tundra breeding grounds. In March 2015, 2,000 snow geese died in northern Idaho from an avian cholera epidemic during their spring migration to northern Canada. Vagrancy The snow goose is a rare vagrant to Europe, but escapes from collections have occurred, and it is an occasional feral breeder. Snow geese are visitors to the British Isles, where they are seen regularly among flocks of brant, barnacle geese, and greater white-fronted geese. There is also a feral population in Scotland from which many vagrant birds in Britain seem to derive. Around 2015, a small group of 3–5 snow geese landed on the north shore of O'ahu. 
They were seen and photographed several times over the course of 3–4 months. In Central America, vagrants are frequently encountered during winter. Ecology Outside of the nesting season, snow geese usually feed in flocks. In winter, they feed on left-over grain in fields. They migrate in large flocks, often visiting traditional stopover habitats in spectacular numbers. Snow geese frequently travel and feed alongside greater white-fronted geese; in contrast, the two tend to avoid travelling and feeding alongside Canada geese, which are often heavier birds. The population of greater snow geese was in decline at the beginning of the 20th century, but has now recovered to sustainable levels. Snow geese in North America have increased to the point where the tundra breeding areas in the Arctic and the saltmarsh wintering grounds are both becoming severely degraded, and this affects other species using the same habitat. Major nest predators include Arctic foxes and skuas. The biggest threat occurs during the first couple of weeks after the eggs are laid and then after hatching. The eggs and young chicks are vulnerable to these predators, but adults are generally safe. Snow geese have been seen nesting near snowy owl nests, likely as a defense against predation: their nesting success was much lower when snowy owls were absent, leading scientists to believe that the owls, being predators themselves, were capable of keeping competing predators away from the nests. A similar association has been noted between the geese and rough-legged hawks. Additional predators at the nest have reportedly included wolves, coyotes and all three North American bear species. Few predators regularly prey on snow geese outside of the nesting season, but bald eagles (as well as possibly golden eagles) will readily attack wintering geese. Population The breeding population of the lesser snow goose exceeds 5 million birds, an increase of more than 300% since the mid-1970s. The population is increasing at a rate of more than five percent per year. Non-breeding geese (juveniles or adults that fail to nest successfully) are not included in this estimate, so the total number of geese is likely higher. Lesser snow goose population indices are the highest they have been since records have been kept, and evidence suggests that large breeding populations are spreading to previously untouched sections of the Hudson Bay coastline. The cause of this overpopulation may be the heavy conversion of land from forest and prairie to agricultural use in the 20th century. Since the late 1990s, efforts have been underway in the U.S. and Canada to reduce the North American population of lesser snow and Ross's geese to sustainable levels, owing to the documented destruction of tundra habitat in Hudson Bay and other nesting areas. The Light Goose Conservation Order was established in 1997 and federally mandated in 1999. Increasing hunter bag limits, extending the length of hunting seasons, and adding new hunting methods have all been successfully implemented, but have not reduced the overall population of snow geese in North America. Conservation order for light geese In the late 1990s, the mid-continent population of snow geese was recognized as causing significant damage to the arctic and sub-arctic breeding grounds, which in turn was critically damaging other waterfowl species and other wildlife that use those grounds as home habitat. 
This substantial population increase raised concern for then Ducks Unlimited (DU) chief biologist Dr. Bruce Batt, who was part of a committee that compiled data and submitted it to the U.S. Fish and Wildlife Service and the Canadian Wildlife Service with recommendations on ways to combat the growing population and the damage the snow geese were creating on the arctic breeding grounds. The committee recommended relaxing hunting restrictions and giving hunters a better opportunity to harvest more snow geese on their way back to the breeding grounds in the spring. The suggested relaxations were to allow the use of electronic callers, unplugged shotguns, and extended shooting hours, and to remove bag limits. Two years after it was introduced, the Light Goose Conservation Order was federally mandated in 1999.
Biology and health sciences
Anseriformes
Animals
199040
https://en.wikipedia.org/wiki/Chemical%20equation
Chemical equation
A chemical equation is the symbolic representation of a chemical reaction in the form of symbols and chemical formulas. The reactant entities are given on the left-hand side and the product entities on the right-hand side, with a plus sign between the entities on each side and an arrow pointing towards the products to show the direction of the reaction. The chemical formulas may be symbolic, structural (pictorial diagrams), or intermixed. The coefficients next to the symbols and formulas of entities are the absolute values of the stoichiometric numbers. The first chemical equation was diagrammed by Jean Beguin in 1615. Structure A chemical equation (see an example below) consists of a list of reactants (the starting substances) on the left-hand side, an arrow symbol, and a list of products (substances formed in the chemical reaction) on the right-hand side. Each substance is specified by its chemical formula, optionally preceded by a number called the stoichiometric coefficient. The coefficient specifies how many entities (e.g. molecules) of that substance are involved in the reaction on a molecular basis. If not written explicitly, the coefficient is equal to 1. Multiple substances on any side of the equation are separated from each other by a plus sign. As an example, the equation for the reaction of hydrochloric acid with sodium can be denoted: 2 HCl + 2 Na -> 2 NaCl + H2 Given that the formulas are fairly simple, this equation could be read as "two H-C-L plus two N-A yields two N-A-C-L and H two." Alternately, and in general for equations involving complex chemicals, the chemical formulas are read using IUPAC nomenclature, which could verbalise this equation as "two hydrochloric acid molecules and two sodium atoms react to form two formula units of sodium chloride and a hydrogen gas molecule." Reaction types Different variants of the arrow symbol are used to denote the type of a reaction: a plain right arrow (→) denotes a net forward reaction; a pair of opposing arrows (⇄) denotes a reaction in both directions; opposing harpoons (⇌) denote an equilibrium; an equals sign (=) denotes a stoichiometric relation; and a double-headed arrow (↔) denotes resonance (not a reaction). State of matter To indicate the physical state of a chemical, a symbol in parentheses may be appended to its formula: (s) for a solid, (l) for a liquid, (g) for a gas, and (aq) for an aqueous solution. This is especially done when one wishes to emphasize the states or changes thereof. For example, the reaction of aqueous hydrochloric acid with solid (metallic) sodium to form aqueous sodium chloride and hydrogen gas would be written like this: 2 HCl(aq) + 2 Na(s) -> 2 NaCl(aq) + H2(g) That reaction would have different thermodynamic and kinetic properties if gaseous hydrogen chloride were to replace the hydrochloric acid as a reactant: 2 HCl(g) + 2 Na(s) -> 2 NaCl(s) + H2(g) Alternately, an arrow without parentheses is used in some cases to indicate formation of a gas (↑) or a precipitate (↓). This is especially useful if only one such species is formed. Here is an example indicating that hydrogen gas is formed: 2 HCl + 2 Na -> 2 NaCl + H2↑ Catalysis and other conditions If the reaction requires energy, it is indicated above the arrow. A capital Greek letter delta (Δ) or a triangle (△) is put on the reaction arrow to show that energy in the form of heat is added to the reaction. The expression hν is used as a symbol for the addition of energy in the form of light. 
Other symbols are used for other specific types of energy or radiation. Similarly, if a reaction requires a certain medium with specific characteristics, the name of the acid or base used as a medium may be placed on top of the arrow. If no specific acid or base is required, another way of denoting the use of an acidic or basic medium is to write H+ or OH− (or even "acid" or "base") on top of the arrow. Specific conditions of temperature and pressure, as well as the presence of catalysts, may be indicated in the same way. Notation variants The standard notation for chemical equations only permits all reactants on one side, all products on the other, and all stoichiometric coefficients positive. For example, the usual form of the equation for the dehydration of methanol to dimethyl ether is: 2 CH3OH -> CH3OCH3 + H2O Sometimes an extension is used, where some substances with their stoichiometric coefficients are moved above or below the arrow, preceded by a plus sign or nothing for a reactant, and by a minus sign for a product. Then the same equation can look like this: 2 CH3OH ->[-H2O] CH3OCH3 Such notation serves to hide less important substances from the sides of the equation, to make the type of reaction at hand more obvious, and to facilitate the chaining of chemical equations. This is very useful in illustrating multi-step reaction mechanisms. Note that the substances above or below the arrows are not catalysts in this case, because they are consumed or produced in the reaction like ordinary reactants or products. Another extension used in reaction mechanisms moves some substances to branches of the arrow. Both extensions are used in the example illustration of a mechanism. Use of negative stoichiometric coefficients on either side of the equation (as in the example below) is not widely adopted and is often discouraged: 2 CH3OH − H2O -> CH3OCH3 Balancing chemical equations Because no nuclear reactions take place in a chemical reaction, the chemical elements pass through the reaction unchanged. Thus, each side of the chemical equation must represent the same number of atoms of any particular element (or nuclide, if different isotopes are taken into account). The same holds for the total electric charge, as stated by the charge conservation law. An equation adhering to these requirements is said to be balanced. A chemical equation is balanced by assigning suitable values to the stoichiometric coefficients. Simple equations can be balanced by inspection, that is, by trial and error. Another technique involves solving a system of linear equations. Balanced equations are usually written with the smallest natural-number coefficients. Yet sometimes it may be advantageous to accept a fractional coefficient if it simplifies the other coefficients. The introductory example can thus be rewritten as HCl + Na -> NaCl + 1/2 H2 In some circumstances fractional coefficients are even inevitable. For example, the reaction corresponding to the standard enthalpy of formation must be written such that one molecule of a single product is formed. This will often require that some reactant coefficients be fractional, as is the case with the formation of lithium fluoride: Li(s) + 1/2 F2(g) -> LiF(s) Inspection method The method of inspection can be outlined as setting the most complex substance's stoichiometric coefficient to 1 and assigning values to the other coefficients step by step, such that both sides of the equation end up with the same number of atoms of each element; a minimal sketch of this bookkeeping in code is given below. 
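The following Python sketch shows the atom-counting check that underlies the inspection method; it assumes simple formulas without parentheses or hydrates, and the helper names are illustrative rather than taken from any library.

```python
import re
from collections import Counter

def parse_formula(formula):
    """Count atoms in a simple formula such as 'CH4' or 'CO2'.
    Handles element symbols with optional trailing counts; no parentheses."""
    counts = Counter()
    for symbol, number in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[symbol] += int(number) if number else 1
    return counts

def side_totals(side):
    """Total atom counts for one side, given (coefficient, formula) pairs."""
    totals = Counter()
    for coefficient, formula in side:
        for element, n in parse_formula(formula).items():
            totals[element] += coefficient * n
    return totals

def is_balanced(reactants, products):
    """True when every element appears equally often on both sides."""
    return side_totals(reactants) == side_totals(products)

# CH4 + 2 O2 -> CO2 + 2 H2O is balanced; with only one O2 it is not.
print(is_balanced([(1, 'CH4'), (2, 'O2')], [(1, 'CO2'), (2, 'H2O')]))  # True
print(is_balanced([(1, 'CH4'), (1, 'O2')], [(1, 'CO2'), (2, 'H2O')]))  # False
```

Inspection amounts to adjusting the coefficients until a check like this passes for every element.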
If any fractional coefficients arise during the inspection process, the fractions may be eliminated (at any time) by multiplying all coefficients by their lowest common denominator.

Example
Balancing of the chemical equation for the complete combustion of methane

? CH4 + ? O2 -> ? CO2 + ? H2O

is achieved as follows. A coefficient of 1 is placed in front of the most complex formula (CH4):

1 CH4 + ? O2 -> ? CO2 + ? H2O

The left-hand side has 1 carbon atom, so 1 molecule of CO2 will balance it. The left-hand side also has 4 hydrogen atoms, which will be balanced by 2 molecules of H2O:

1 CH4 + ? O2 -> 1 CO2 + 2 H2O

Balancing the 4 oxygen atoms of the right-hand side by 2 molecules of O2 yields the equation

1 CH4 + 2 O2 -> 1 CO2 + 2 H2O

The coefficients equal to 1 are omitted, as they do not need to be specified explicitly:

CH4 + 2 O2 -> CO2 + 2 H2O

It is wise to check that the final equation is balanced, i.e. that for each element there is the same number of atoms on the left- and right-hand sides: 1 carbon, 4 hydrogen, and 4 oxygen.

System of linear equations
For each chemical element (or nuclide, unchanged moiety, or charge) i, its conservation requirement can be expressed by the mathematical equation

\sum_{j \in \text{reactants}} a_{ij} s_j = \sum_{j \in \text{products}} a_{ij} s_j

where a_{ij} is the number of atoms of element i in a molecule of substance j (per formula in the chemical equation), and s_j is the stoichiometric coefficient for the substance j. This results in a homogeneous system of linear equations, which is readily solved using mathematical methods. Such a system always has the all-zeros trivial solution, which is not of interest, but if there are any additional solutions, there will be an infinite number of them. Any non-trivial solution will balance the chemical equation. A "preferred" solution is one with whole-number, mostly positive stoichiometric coefficients with greatest common divisor equal to one.

Example
Let us assign variables to the stoichiometric coefficients of the chemical equation from the previous section, s1 CH4 + s2 O2 -> s3 CO2 + s4 H2O, and write the corresponding linear equations:

C: s_1 = s_3
H: 4 s_1 = 2 s_4
O: 2 s_2 = 2 s_3 + s_4

All solutions to this system of linear equations are of the following form, where r is any real number:

(s_1, s_2, s_3, s_4) = r \, (1, 2, 1, 2)

The choice of r = 1 yields the preferred solution, which corresponds to the balanced chemical equation: CH4 + 2 O2 -> CO2 + 2 H2O

Matrix method
The system of linear equations introduced in the previous section can also be written using an efficient matrix formalism. First, to unify the reactant and product stoichiometric coefficients s_j, let us introduce the quantity called the stoichiometric number \nu_j, equal to -s_j for reactants and +s_j for products, which simplifies the linear equations to

\sum_{j=1}^{J} a_{ij} \nu_j = 0

where J is the total number of reactant and product substances (formulas) in the chemical equation. Placement of the values a_{ij} at row i and column j of the composition matrix A and arrangement of the stoichiometric numbers into the stoichiometric vector \nu allows the system of equations to be expressed as the single matrix equation

A \nu = 0

As before, any nonzero stoichiometric vector \nu that solves the matrix equation will balance the chemical equation. The set of solutions to the matrix equation is a linear space called the kernel of the matrix A. For this space to contain nonzero vectors \nu, i.e. to have a positive dimension N, the columns of the composition matrix A must not be linearly independent.
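To make the matrix method concrete, here is a minimal, self-contained sketch in standard-library Python. It is an illustration rather than a general solver: the function name is invented for the example, exact fractions stand in for rational arithmetic, and a one-dimensional kernel (the usual case for a single reaction) is assumed.

```python
from fractions import Fraction
from math import gcd

# Composition matrix for  s1 CH4 + s2 O2 -> s3 CO2 + s4 H2O.
# One row per element (C, H, O); product columns carry negated atom
# counts so that each conservation law reads  sum_j a_ij * s_j = 0.
A = [
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
]

def balance(rows):
    """Gauss-Jordan elimination over exact fractions; assumes the
    kernel is one-dimensional, i.e. exactly one free variable."""
    m = [[Fraction(x) for x in row] for row in rows]
    n, pivots, r = len(m[0]), [], 0
    for c in range(n):
        pivot = next((i for i in range(r, len(m)) if m[i][c]), None)
        if pivot is None:
            continue                          # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]    # normalize pivot row
        for i in range(len(m)):
            if i != r and m[i][c]:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    assert len(free) == 1, "kernel dimension is not 1"
    s = [Fraction(0)] * n
    s[free[0]] = Fraction(1)                  # free coefficient set to 1
    for row, c in zip(m, pivots):
        s[c] = -row[free[0]]
    scale = 1
    for x in s:                               # clear denominators
        scale = scale * x.denominator // gcd(scale, x.denominator)
    return [int(x * scale) for x in s]

print(balance(A))  # [1, 2, 1, 2]  ->  CH4 + 2 O2 -> CO2 + 2 H2O
```

Using Fraction avoids the floating-point round-off that would otherwise complicate recognizing the integer ratios of the preferred solution.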
The problem of balancing a chemical equation then becomes the problem of determining the N-dimensional kernel of the composition matrix. It is important to note that only for N = 1 will there be a unique preferred solution to the balancing problem. For N > 1 there will be an infinite number of preferred solutions, with N of them linearly independent. If N = 0, there is only the unusable trivial solution, the zero vector. Techniques have been developed to quickly calculate a set of N independent solutions to the balancing problem; these are superior to inspection in that they are determinative and yield all solutions to the balancing problem.

Example
Using the same chemical equation again, s1 CH4 + s2 O2 -> s3 CO2 + s4 H2O, write the corresponding matrix equation:

A \nu = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 4 & 0 & 0 & 2 \\ 0 & 2 & 2 & 1 \end{pmatrix} \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \\ \nu_4 \end{pmatrix} = 0

with rows corresponding to carbon, hydrogen, and oxygen. Its solutions are of the following form, where r is any real number:

\nu = r \, (-1, -2, 1, 2)^{\mathsf T}

The choice of r = 1 and a sign-flip of the first two rows (the reactants) yields the preferred solution to the balancing problem: CH4 + 2 O2 -> CO2 + 2 H2O

Ionic equations
An ionic equation is a chemical equation in which electrolytes are written as dissociated ions. Ionic equations are used for single and double displacement reactions that occur in aqueous solutions. For example, in the following precipitation reaction: CaCl2 + 2 AgNO3 -> Ca(NO3)2 + 2 AgCl(v) the full ionic equation is: Ca^2+ + 2 Cl^- + 2 Ag+ + 2 NO3^- -> Ca^2+ + 2 NO3^- + 2 AgCl(v) or, with all physical states included: Ca^2+(aq) + 2 Cl^-(aq) + 2 Ag+(aq) + 2 NO3^-(aq) -> Ca^2+(aq) + 2 NO3^-(aq) + 2 AgCl(v) In this reaction, the Ca2+ and the NO3− ions remain in solution and are not part of the reaction. That is, these ions are identical on both the reactant and product sides of the chemical equation. Because such ions do not participate in the reaction, they are called spectator ions. A net ionic equation is the full ionic equation from which the spectator ions have been removed. The net ionic equation of the preceding reaction is: 2 Cl^- + 2 Ag+ -> 2 AgCl(v) or, in reduced balanced form, Ag+ + Cl^- -> AgCl(v) In a neutralization or acid/base reaction, the net ionic equation will usually be: H+(aq) + OH^-(aq) -> H2O(l) There are a few acid/base reactions that produce a precipitate in addition to the water molecule shown above. An example is the reaction of barium hydroxide with phosphoric acid, which produces not only water but also the insoluble salt barium phosphate. In this reaction there are no spectator ions, so the net ionic equation is the same as the full ionic equation: 3 Ba(OH)2 + 2 H3PO4 -> 6 H2O + Ba3(PO4)2(v) Double displacement reactions that feature a carbonate reacting with an acid have the net ionic equation: 2 H+ + CO3^2- -> H2O + CO2 ↑ If every ion is a "spectator ion" then there was no reaction, and the net ionic equation is null. Generally, if z_j is the multiple of elementary charge on the j-th molecule, charge neutrality may be written as

\sum_{j=1}^{J} z_j \nu_j = 0

where the \nu_j are the stoichiometric numbers described above. The z_j may be incorporated as an additional row in the a_{ij} matrix described above, and a properly balanced ionic equation will then also obey this relation.
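The spectator-ion bookkeeping above is also easy to mechanize. Here is a minimal sketch (Python; the function name and species strings are illustrative assumptions, and no balancing or charge checking is attempted) that strips spectators from coefficient maps of the two sides:

```python
from collections import Counter

def net_ionic(reactants, products):
    """Remove spectator species (identical on both sides) from a full
    ionic equation. Species are plain strings mapped to their
    already-balanced coefficients."""
    spectators = reactants & products        # multiset intersection
    return reactants - spectators, products - spectators

# Full ionic equation for CaCl2 + 2 AgNO3 -> Ca(NO3)2 + 2 AgCl(v):
lhs = Counter({"Ca^2+": 1, "Cl^-": 2, "Ag^+": 2, "NO3^-": 2})
rhs = Counter({"Ca^2+": 1, "NO3^-": 2, "AgCl(s)": 2})

net_lhs, net_rhs = net_ionic(lhs, rhs)
print(dict(net_lhs), "->", dict(net_rhs))
# {'Cl^-': 2, 'Ag^+': 2} -> {'AgCl(s)': 2}
```

Applied to the precipitation example above, this recovers exactly the net ionic equation 2 Cl^- + 2 Ag+ -> 2 AgCl.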
Antisocial personality disorder
Antisocial personality disorder (ASPD) is a personality disorder defined by a chronic pattern of behavior that disregards the rights and well-being of others. People with ASPD often exhibit behavior that conflicts with social norms, leading to issues with interpersonal relationships, employment, and legal matters. The condition generally manifests in childhood or early adolescence, with a high rate of associated conduct problems and a tendency for symptoms to peak in late adolescence and early adulthood.

The prognosis for ASPD is complex, with high variability in outcomes. Individuals with severe ASPD symptoms may have difficulty forming stable relationships, maintaining employment, and avoiding criminal behavior, resulting in higher rates of divorce, unemployment, homelessness, and incarceration. In extreme cases, ASPD may lead to violent or criminal behaviors, often escalating in early adulthood. Research indicates that individuals with ASPD have an elevated risk of suicide, particularly those who also engage in substance misuse or have a history of incarceration. Additionally, children raised by parents with ASPD may be at greater risk of delinquency and mental health issues themselves. Although ASPD is a persistent and often lifelong condition, symptoms may diminish over time, particularly after age 40, though only a small percentage of individuals experience significant improvement.

Many individuals with ASPD have co-occurring issues such as substance use disorders, mood disorders, or other personality disorders. Research on pharmacological treatment for ASPD is limited, with no medications approved specifically for the disorder. However, certain psychiatric medications, including antipsychotics, antidepressants, and mood stabilizers, may help manage symptoms like aggression and impulsivity in some cases, or treat co-occurring disorders.

The diagnostic criteria and understanding of ASPD have evolved significantly over time. Early diagnostic manuals, such as the DSM-I in 1952, described "sociopathic personality disturbance" as involving a range of antisocial behaviors linked to societal and environmental factors. Subsequent editions of the DSM have refined the diagnosis, eventually distinguishing ASPD in the DSM-III (1980) with a more structured checklist of observable behaviors. Current definitions in the DSM-5 align with the clinical description of ASPD as a pattern of disregard for the rights of others, with potential overlap in traits associated with psychopathy and sociopathy.

Symptoms and behaviors
Due to tendencies toward recklessness and impulsivity, patients with ASPD are at a higher risk of drug and alcohol abuse. ASPD is the personality disorder most likely to be associated with addiction. Individuals with ASPD are at a higher risk of illegal drug use, blood-borne diseases, HIV, shorter periods of abstinence, misuse of orally administered medications, and compulsive gambling as a consequence of their tendency towards addiction. In addition, sufferers are more likely to abuse substances or develop an addiction at a young age. Because ASPD is associated with higher levels of impulsivity, suicidality, and irresponsible behavior, the condition is correlated with heightened levels of aggressive behavior, domestic violence, illegal drug use, pervasive anger, and violent crime. This behavior typically has negative effects on education, relationships, and employment.
Alongside this, risky sexual behaviors, such as having multiple sexual partners in a short period of time, seeing prostitutes, inconsistent condom use, trading sex for drugs, and frequent unprotected sex, are also common. Patients with ASPD have been documented to describe emotions with ambivalence and to experience heightened states of emotional coldness and detachment. Individuals with ASPD, or who display antisocial behavior, may often experience chronic boredom. They may experience emotions such as happiness and fear less clearly than others. It is also possible that they may experience emotions such as anger and frustration more frequently and clearly than other emotions. People with ASPD may have a limited capacity for empathy and can be more interested in benefiting themselves than in avoiding harm to others. They may have no regard for morals, social norms, or the rights of others. People with ASPD can have difficulty beginning or sustaining relationships. It is common for the interpersonal relationships of someone with ASPD to revolve around the exploitation and abuse of others. People with ASPD may display arrogance, think lowly and negatively of others, have limited remorse for their harmful actions, and have a callous attitude toward those they have harmed. People with ASPD can have difficulty mentalizing, or interpreting the mental states of others. Alternately, they may display a perfectly intact theory of mind, the ability to represent mental states, but have an impaired ability to understand how another individual may be affected by an aggressive action. These factors might contribute to aggressive and criminal behavior as well as to empathy deficits. Despite this, they may be adept at social cognition, or the ability to process and store information about other people, which can contribute to an increased ability to manipulate others.

ASPD is highly prevalent among prisoners. People with ASPD tend to be convicted more often, receive longer sentences, and are more likely to be charged with almost any crime, with assault and other violent crimes being the most common charges. Those who have committed violent crimes tend to have higher levels of testosterone than the average person, which may also contribute to the higher likelihood of men being diagnosed with ASPD. The effect of testosterone is counteracted by cortisol, which facilitates the cognitive control of impulsive tendencies. Arson and the destruction of others' property are also behaviors commonly associated with ASPD. Alongside other conduct problems, many people with ASPD had conduct disorder in their youth, characterized by a pervasive pattern of violent, criminal, defiant, and antisocial behavior. Although behaviors vary by degree, individuals with this personality disorder have been known to exploit others in harmful ways for their own gain or pleasure, and frequently manipulate and deceive other people. While some do so with a façade of superficial charm, others do so through intimidation and violence. Individuals with antisocial personality disorder may deliberately show irresponsibility, have difficulty acknowledging their faults, and/or attempt to redirect attention away from harmful behaviors.

Comorbidity
ASPD presents high comorbidity rates with various psychiatric conditions, particularly substance use and mood disorders.
Individuals diagnosed with ASPD are significantly more prone to developing substance use disorders (SUDs), with studies showing that they are approximately 13 times more likely to be diagnosed with a SUD than those without ASPD. This population also faces increased risks for mood disorders, including a fourfold likelihood of experiencing major depressive disorder, as well as heightened risks for suicidal ideation and behaviors. Anxiety disorders, particularly post-traumatic stress disorder (PTSD) and social anxiety disorder, are also common comorbidities, affecting up to 50% of individuals with ASPD. These comorbidities often exacerbate the problems of those with ASPD, leading to more severe symptoms, complex treatment needs, and poorer clinical outcomes. When ASPD is combined with alcoholism, people may show deficits in frontal brain function on neuropsychological tests greater than those associated with either condition alone. Alcohol use disorder in this population likely stems from the lack of impulse and behavioral control exhibited by patients with antisocial personality disorder.

Causes
Personality disorders are generally believed to be caused by a combination and interaction of genetic and environmental influences. People with an antisocial or alcoholic parent are considered to be at higher risk of developing ASPD. Fire-setting and cruelty to animals during childhood are also linked to the development of antisocial personality disorder, along with being more common in males and among incarcerated populations. Although the causes listed correlate with the risk of developing ASPD, one factor alone is unlikely to be the sole cause, and relating to a listed cause does not necessarily mean that a person should identify or be identified as having ASPD. According to professor Emily Simonoff of the Institute of Psychiatry, Psychology and Neuroscience, many variables are consistently connected to ASPD, such as childhood hyperactivity and conduct disorder, criminality in adulthood, lower IQ scores, and reading problems. Additionally, children who grow up with a predisposition to ASPD and interact with other delinquent children are likely to later be diagnosed with ASPD.

Genetic
Research into genetic associations in antisocial personality disorder suggests that ASPD has some, or even a strong, genetic basis. The prevalence of ASPD is higher in people related to someone with the disorder. Twin studies, which are designed to discern between genetic and environmental effects, have reported significant genetic influences on antisocial behavior and conduct disorder. Among the specific genes that may be involved, one that has shown particular promise in its correlation with ASPD is the gene that encodes monoamine oxidase A (MAO-A), an enzyme that breaks down monoamine neurotransmitters such as serotonin and norepinephrine. Various studies examining the gene's relationship to behavior have suggested that variants resulting in less MAO-A being produced (such as the 2R and 3R alleles of the promoter region) are associated with aggressive behavior in men. The association is also influenced by negative experiences early in life, with children possessing a low-activity variant (MAOA-L) who have experienced such circumstances being more likely to develop antisocial behavior than those with the high-activity variant (MAOA-H).
Even when environmental interactions (e.g., emotional abuse) are taken out of the equation, a small association between MAOA-L and aggressive and antisocial behavior remains. The gene that encodes the serotonin transporter (SLC6A4), heavily researched for its associations with other mental disorders, is another gene of interest in antisocial behavior and personality traits. Genetic association studies have suggested that the short "S" allele is associated with impulsive antisocial behavior and ASPD in the inmate population. However, research into psychopathy finds that the long "L" allele is associated with the Factor 1 traits of psychopathy, which describe its core affective (e.g. lack of empathy, fearlessness) and interpersonal (e.g. grandiosity, manipulativeness) personality disturbances. This is suggestive of two different forms of the disorder, one associated more with impulsive behavior and emotional dysregulation, and the other with predatory aggression and affective disturbance. Various other gene candidates for ASPD have been identified by a genome-wide association study published in 2016. Several of these gene candidates are shared with attention-deficit hyperactivity disorder, with which ASPD is often comorbid. The study found that those who carry four mutations on chromosome 6 are 50% more likely to develop antisocial personality disorder than those who do not.

Physiological
Hormones and neurotransmitters
Traumatic events can lead to a disruption of the standard development of the central nervous system, which can generate a release of hormones that can change normal patterns of development. One of the neurotransmitters that has been discussed in individuals with ASPD is serotonin, also known as 5-HT. A meta-analysis of 20 studies found significantly lower 5-HIAA levels (indicating lower serotonin levels), especially in those who are younger than 30 years of age. While lower levels of serotonin may be associated with ASPD, there is also evidence that decreased serotonin function is highly correlated with impulsiveness and aggression across a number of different experimental paradigms. Impulsivity is not only linked with irregularities in 5-HT metabolism but may be the most essential psychopathological aspect linked with such dysfunction. Correspondingly, the DSM classifies "impulsivity or failure to plan ahead" and "irritability and aggressiveness" as two of seven sub-criteria in category A of the diagnostic criteria of ASPD. Some studies have found a relationship between monoamine oxidase A and antisocial behavior, including conduct disorder and symptoms of adult ASPD, in maltreated children.

Neurological
Antisocial behavior may be related to a number of neurological insults, such as head trauma. Antisocial behavior is associated with decreased grey matter in the right lentiform nucleus, left insula, and frontopolar cortex. Increased volumes of grey matter have been observed in the right fusiform gyrus, inferior parietal cortex, right cingulate gyrus, and post-central cortex. Intellectual and cognitive ability is often found to be impaired or reduced in the ASPD population. Contrary to stereotypes in popular culture of the "psychopathic genius", antisocial personality disorder is associated with reduced overall intelligence and specific reductions in individual aspects of cognitive ability.
These deficits also occur in general-population samples of people with antisocial traits and in children with the precursors to antisocial personality disorder. People who exhibit antisocial behavior tend to demonstrate decreased activity in the prefrontal cortex, an effect that is more apparent in functional than in structural neuroimaging. Some investigators have questioned whether the reduced volume in prefrontal regions is associated with antisocial personality disorder itself, or whether it results from co-morbid disorders, such as substance use disorder or childhood maltreatment. It is still considered an open question whether the anatomical abnormality causes the psychological and behavioral abnormality, or vice versa.

Antisocial behavior is also associated with structural brain differences. Some of the major areas involved are regions of the prefrontal cortex, such as the right frontal and temporal cortices, the ventromedial prefrontal cortex, and the middle and orbitofrontal cortices. In these areas, a reduction in gray matter is seen in individuals with antisocial personality disorder, suggesting these structural differences may play a role in their behavior. Reduced gray matter volumes in these areas are associated with a lack of emotional regulation, a lack of behavioral and response inhibition, and poor decision making, among other effects. Additionally, those with ASPD have shown decreased gray matter volumes in other brain areas such as the amygdala and insula, suggesting possible issues with emotional reactions to certain stimuli. Cavum septi pellucidi (CSP) is a marker for limbic neural maldevelopment, and its presence has been loosely associated with certain mental disorders, such as schizophrenia and post-traumatic stress disorder. One study found that those with CSP had significantly higher levels of antisocial personality, psychopathy, arrests, and convictions compared with controls.

Environmental
Family environment
Many studies suggest that the social and home environment contribute to the development of ASPD. Parents of children with ASPD may display antisocial behavior themselves, which their children then adopt. A lack of parental stimulation and affection during early development can lead to high levels of cortisol in the absence of balancing hormones such as oxytocin. This disrupts and overloads the child's stress response systems, which is thought to lead to underdevelopment of the part of the child's brain that deals with emotion, empathy, and the ability to connect to other humans on an emotional level. According to Dr. Bruce Perry in his book The Boy Who Was Raised as a Dog, "the infant's developing brain needs patterned, repetitive stimuli to develop properly. Spastic, unpredictable relief from fear, loneliness, discomfort, and hunger keeps a baby's stress system on high alert. An environment of intermittent care punctuated by total abandonment may be the worst of all worlds for a child."

Parenting styles
Some hypothesise that parenting styles can affect how children experience and develop in their youth, and can have an impact on a child's diagnosis of ASPD.

Childhood trauma
ASPD is highly comorbid with emotional and physical abuse in childhood. Physical neglect also has a significant correlation with ASPD. The way a child bonds with its parents early in life is important.
Poor parental bonding due to abuse or neglect puts children at greater risk of developing antisocial personality disorder. There is also a significant correlation between parental overprotection and people who develop ASPD. Those with ASPD may have experienced any of the following forms of childhood trauma or abuse: physical or sexual abuse, neglect, coercion, abandonment or separation from caregivers, violence in a community, acts of terror, bullying, or life-threatening incidents. Some symptoms can mimic other forms of mental illness, such as:
post-traumatic stress disorder (upsetting or terrifying memories of traumatic events)
reactive attachment disorder (little to no response to emotional triggers)
disinhibited social engagement disorder (roaming off with unfamiliar people without caregivers being informed)
dissociative identity disorder (disconnection from self or environment)
The comorbidity rate of the previously listed disorders with ASPD tends to be much higher.

Cultural influences
The sociocultural perspective of clinical psychology views disorders as influenced by cultural aspects; since cultural norms differ significantly, mental disorders such as ASPD are viewed differently across cultures. Robert D. Hare suggested that the rise in ASPD that has been reported in the United States may be linked to changes in cultural norms, which serve to validate the behavioral tendencies of many individuals with ASPD. While the reported rise may be in part a byproduct of the widening use (and abuse) of diagnostic techniques, given Eric Berne's division between individuals with active and latent ASPD – the latter keeping themselves in check by attachment to an external source of control like the law, traditional standards, or religion – it has been suggested that the erosion of collective standards may serve to release the individual with latent ASPD from their previously prosocial behavior. There is also a continuous debate as to the extent to which the legal system should be involved in the identification and admittance of patients with preliminary symptoms of ASPD. The controversial clinical psychiatrist Pierre-Édouard Carbonneau suggested that the problem with legally forced admittance is the rate of failure in diagnosing ASPD. He contends that the possibility of diagnosing ASPD in, and coercing medication onto, a patient who does not actually have the disorder could be potentially disastrous, while the possibility of not diagnosing ASPD and seeing a patient go untreated because of a lack of sufficient evidence of cultural or environmental influences is something a psychiatrist must accept in order to, in his words, "play it safe".

Conduct disorder
While antisocial personality disorder is a mental disorder diagnosed in adulthood, it has its precedent in childhood. The DSM-5's criteria for ASPD require that the individual have conduct problems evident by the age of 15. Persistent antisocial behavior, as well as a lack of regard for others in childhood and adolescence, is known as conduct disorder and is the precursor of ASPD. About 25–40% of youths with conduct disorder will be diagnosed with ASPD in adulthood. Conduct disorder (CD) is a disorder diagnosed in childhood that parallels the characteristics found in ASPD. It is characterized by a repetitive and persistent pattern of behavior in which the basic rights of others or major age-appropriate norms are violated by the child.
Children with the disorder often display impulsive and aggressive behavior, may be callous and deceitful, may repeatedly engage in petty crime (such as stealing or vandalism), or may get into fights with other children and adults. This behavior is typically persistent and may be difficult to deter with either threat or punishment. Attention deficit hyperactivity disorder (ADHD) is common in this population, and children with the disorder may also engage in substance use. CD is distinct from oppositional defiant disorder (ODD) in that children with ODD do not commit aggressive or antisocial acts against other people, animals, or property, though many children diagnosed with ODD are subsequently re-diagnosed with CD.

Two developmental courses for CD have been identified based on the age at which the symptoms become present. The first, known as the "childhood-onset type", occurs when conduct disorder symptoms are present before the age of 10. This course is often linked to a more persistent life course and more pervasive behaviors; children in this group express greater levels of ADHD symptoms, neuropsychological deficits, more academic problems, increased family dysfunction, and a higher likelihood of aggression and violence. The second, known as the "adolescent-onset type", occurs when conduct disorder develops after the age of 10. Compared to the childhood-onset type, less impairment in various cognitive and emotional functions is present, and the adolescent-onset variety may remit by adulthood. In addition to this differentiation, the DSM-5 provides a specifier for a callous and unemotional interpersonal style, which reflects characteristics seen in psychopathy and is believed to be a childhood precursor to this disorder. Compared to the adolescent-onset subtype, the childhood-onset subtype tends to have a worse treatment outcome, especially if callous and unemotional traits are present.

Diagnosis
DSM-5 Section II
The main text of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) defines antisocial personality disorder as being characterized by at least three of the following traits:
Failure to conform to social norms and laws, indicated by repeatedly engaging in illegal activities.
Deceitfulness, indicated by continuously lying, using aliases, or conning others for personal gain and pleasure.
Exhibiting impulsivity or failing to plan ahead.
Irritability and aggressiveness, indicated by repeatedly getting into fights or physically assaulting others.
Reckless behaviors that disregard the safety of others.
Irresponsibility, indicated by repeatedly failing to sustain consistent work or honor financial obligations.
Lack of remorse after hurting or mistreating another person.
In order to be diagnosed with antisocial personality disorder under the DSM-5, one must be at least 18 years old, show evidence of the onset of conduct disorder before age 15, and the antisocial behavior cannot be explained by schizophrenia or bipolar disorder.
Section III (Alternative Model of Personality Disorders)
In response to criticisms of the extant (Section II/DSM-IV) criteria for personality disorders, including their discordance with current models in the scientific literature, high comorbidity rates, overuse of some categories, underuse of others, and the overwhelming use of the personality disorder-not otherwise specified (PD-NOS) diagnosis, the DSM-5 Workgroup on personality disorders devised a dimensional model, wherein categorical personality diagnoses reflect extreme variations of normal personality traits. In response to criticisms of the extant Section II/DSM-IV criteria for ASPD, namely their failure to capture the interpersonal and affective features of psychopathy, new criteria were proposed. In addition to the new criteria, the individual must be at least 18 years old, the traits must cause dysfunction or distress, and they should not be better explained by another mental disorder, the pathophysiological effects of a substance, or a person's cultural or social background. Also included is a "with psychopathic traits" specifier, modelled after the Fearless Dominance scale of the Psychopathic Personality Inventory and defined by low Anxiousness and Withdrawal and high Attention-Seeking. Researchers have also proposed the inclusion of Grandiosity and Restricted Affectivity to better capture psychopathy.

Psychopathy
Psychopathy is commonly defined as a personality construct characterized partly by antisocial behavior, a diminished capacity for empathy and remorse, and poor behavioral controls. Psychopathic traits are assessed using various measurement tools, including Canadian researcher Robert D. Hare's Psychopathy Checklist-Revised (PCL-R). "Psychopathy" is not the official title of any diagnosis in the DSM or ICD; nor is it an official title used by any other major psychiatric organization. The DSM and ICD, however, state that their antisocial diagnoses are at times referred to (or include what is referred to) as psychopathy or sociopathy. American psychiatrist Hervey Cleckley's work on psychopathy formed the basis of the diagnostic criteria for ASPD, and the DSM states that ASPD is often referred to as psychopathy. However, critics argue that ASPD is not synonymous with psychopathy, as the diagnostic criteria are not the same: criteria relating to personality traits are emphasized relatively less in the former. These differences exist in part because it was believed such traits were difficult to measure reliably and it was "easier to agree on the behaviors that typify a disorder than on the reasons why they occur". Although the diagnosis of ASPD covers two to three times as many prisoners as the diagnosis of psychopathy, Robert Hare believes the PCL-R is better able to predict future criminality, violence, and recidivism than a diagnosis of ASPD. He suggests there are differences between PCL-R-diagnosed psychopaths and non-psychopaths in the "processing and use of linguistic and emotional information", while such differences are potentially smaller between those diagnosed with ASPD and those without. Additionally, Hare argued that confusion regarding how to diagnose ASPD, confusion regarding the difference between ASPD and psychopathy, and the differing future prognoses regarding recidivism and treatability may have serious consequences in settings such as court cases, where psychopathy is often seen as aggravating the crime. Nonetheless, psychopathy has been proposed as a specifier under an alternative model for ASPD.
In the DSM-5, under "Alternative DSM-5 Model for Personality Disorders", ASPD with psychopathic features is described as characterized by "a lack of anxiety or fear and by a bold interpersonal style that may mask maladaptive behaviors (e.g., fraudulence)". Low levels of withdrawal and high levels of attention-seeking combined with low anxiety are associated with "social potency" and "stress immunity" in psychopathy. Under the specifier, affective and interpersonal characteristics are comparatively emphasized over behavioral components. Research suggests that, even without the "with psychopathic traits" specifier, these Section III criteria accurately capture the affective-interpersonal features of psychopathy, though the specifier increases coverage of the Interpersonal and Lifestyle facets of the PCL-R.

Millon's subtypes
Theodore Millon suggested five subtypes of ASPD; however, these constructs are not recognized in the DSM or ICD. Elsewhere, Millon differentiates ten subtypes (partially overlapping with the above) – covetous, risk-taking, malevolent, tyrannical, malignant, disingenuous, explosive, and abrasive – but specifically stresses that "the number 10 is by no means special ... Taxonomies may be put forward at levels that are more coarse or more fine-grained."

Treatment
ASPD is considered to be among the most difficult personality disorders to treat. Rendering effective treatment for ASPD is further complicated by the inability to compare studies of psychopathy and ASPD, owing to differing diagnostic criteria, differences in defining and measuring outcomes, and a focus on treating incarcerated patients rather than those in the community. Because of their very low or absent capacity for remorse, individuals with ASPD often lack sufficient motivation and fail to see the costs associated with antisocial acts. They may only simulate remorse rather than truly commit to change: they can be charming and dishonest, and may manipulate staff and fellow patients during treatment. Studies have shown that outpatient therapy is not likely to be successful, but the extent to which persons with ASPD are entirely unresponsive to treatment may have been exaggerated. Most treatment is delivered to those in the criminal justice system, for whom treatment regimes are given as part of their imprisonment. Those with ASPD may stay in treatment only as required by an external source, such as parole conditions. Residential programs that provide a carefully controlled environment of structure and supervision, along with peer confrontation, have been recommended. There has been some research on the treatment of ASPD indicating positive results for therapeutic interventions. Psychotherapy, also known as "talk" therapy, has been found to help treat patients with ASPD. Schema therapy is also being investigated as a treatment for ASPD. A review by Charles M. Borduin highlights the strong influence of multisystemic therapy (MST), which could potentially improve outcomes; however, this treatment requires the complete cooperation and participation of all family members. Some studies have found that the presence of ASPD does not significantly interfere with treatment for other disorders, such as substance use, although others have reported contradictory findings. Therapists working with individuals with ASPD may have considerable negative feelings toward patients with extensive histories of aggressive, exploitative, and abusive behaviors.
Rather than attempting to develop a sense of conscience in these individuals, which is extremely difficult given the nature of the disorder, therapeutic techniques focus on rational and utilitarian arguments against repeating past mistakes. These approaches emphasize the tangible, material value of prosocial behavior and of abstaining from antisocial behavior. However, the impulsive and aggressive nature of those with this disorder may limit the effectiveness of this form of therapy.

The use of medications in treating antisocial personality disorder is still poorly explored, and no medications have been approved by the FDA to specifically treat ASPD. A 2020 Cochrane review of studies that explored the use of pharmaceuticals in ASPD patients, of which eight studies met the selection criteria for review, concluded that the current body of evidence was inconclusive regarding recommendations for the use of pharmaceuticals in treating the various issues of ASPD. Nonetheless, psychiatric medications such as antipsychotics, antidepressants, and mood stabilizers can be used to control symptoms such as aggression and impulsivity, as well as to treat disorders that may co-occur with ASPD and for which medications are indicated.

Prognosis
Boys are almost twice as likely to meet all of the diagnostic criteria for ASPD as girls, and they often start showing symptoms of the disorder much earlier in life. Children who do not show symptoms of the disorder through age 15 will almost never develop ASPD later in life. If adults exhibit milder symptoms of ASPD, it is likely that they never met the criteria for the disorder in their childhood and were consequently never diagnosed. Overall, symptoms of ASPD tend to peak in the late teens and early twenties, but can often reduce or improve by age 40. ASPD is ultimately a lifelong disorder with chronic consequences, though some of these can be moderated over time. There may be high variability in the long-term outlook of antisocial personality disorder. Treatment of this disorder can be successful, but it entails unique difficulties. Rapid change is unlikely, especially when the condition is severe. In fact, past studies revealed that remission rates were small, with 27–31% of patients with ASPD seeing an improvement, "with the most violent and dangerous features remitting". As a result of the characteristics of ASPD (e.g., displaying charm for personal gain, manipulation), patients seeking treatment (mandated or otherwise) may appear to be "cured" in order to get out of treatment. According to definitions found in the DSM-5, people with ASPD can be deceitful and intimidating in their relationships. When they are caught doing something wrong, they often appear to be unaffected and unemotional about the consequences. Over time, continual behavior that lacks empathy and concern may lead to someone with ASPD taking advantage of the kindness of others, including their therapist. Without proper treatment, individuals with ASPD could lead a life that brings about harm to themselves or others. This can be detrimental to their families and careers. Those with ASPD lack interpersonal skills (e.g., they lack remorse, empathy, and emotional-processing skills). As a result of the inability to create and maintain healthy relationships due to this lack of interpersonal skills, individuals with ASPD may find themselves in predicaments such as divorce, unemployment, homelessness, and even premature death by suicide.
They also commit crimes at higher rates, peaking in the late teens, with more severe offenses committed by those diagnosed at younger ages. Comorbidity with other mental illnesses such as depression or substance use disorder is prevalent among patients with ASPD. People with ASPD are also more likely to commit homicides and other crimes. Those who are imprisoned longer often see higher rates of improvement in symptoms of ASPD than those imprisoned for a shorter time. According to one study, aggressive tendencies show in about 72% of all male patients diagnosed with ASPD, and about 29% of the men studied also showed a prevalence of premeditated aggression. Based on the evidence in the study, the researchers concluded that aggression in patients with ASPD is mostly impulsive, though there is some long-term evidence of premeditated aggression. Those with more pronounced psychopathic traits more often direct premeditated aggression at those around them. Over the course of life with ASPD, a patient can exhibit this aggressive behavior and harm those close to them. Additionally, many people (especially adults) who have been diagnosed with ASPD become burdens to their close relatives, peers, and caretakers. Harvard Medical School recommends that time and resources be spent treating victims who have been affected by someone with ASPD, because the patient with ASPD may not respond to the administered therapies. In fact, a patient with ASPD may only accept treatment when ordered by a court, which makes the course of treatment difficult. Because of the challenges in treatment, the patient's family and close friends must take an active role in decisions about the therapies offered to the patient. Ultimately, there must be a group effort to manage the long-term effects of the disorder.

Epidemiology
The estimated lifetime prevalence of ASPD in the general population falls between 1% and 4%, with rates of roughly 6% in men and 2% in women. The prevalence of ASPD is even higher in selected populations, such as prisons, where there is a preponderance of violent offenders. The prevalence of ASPD among prisoners has been found to be just under 50%; according to one study (n = 23,000), the prevalence of ASPD in prisoners is 47% in men and 21% in women. Thus, with only 27–31% of patients with ASPD seeing an improvement in symptoms over time, statistically around one third of male prisoners will see no improvement in their symptoms and thus face a poor prognosis. The corresponding figure for female prisoners is around 15%, or roughly one in six. Similarly, the prevalence of ASPD is higher among patients in alcohol or other drug (AOD) use treatment programs than in the general population, suggesting a link between ASPD and AOD use and dependence. As part of the Epidemiological Catchment Area (ECA) study, men with ASPD were found to be three to five times more likely to excessively use alcohol and illicit substances than men without ASPD. Increased severity of this substance use was found in women with ASPD; indeed, in a study conducted with both men and women with ASPD, women were more likely to misuse substances than their male counterparts. Homelessness is also common among people with ASPD.
A study of 31 homeless youths in San Francisco and 56 in Chicago found that 84% and 48%, respectively, met the diagnostic criteria for ASPD. Another study on the homeless found that 25% of participants had ASPD. Individuals with ASPD are at an elevated risk for suicide. Some studies suggest this increase in suicidality is in part due to the association between suicide and symptoms or trends within ASPD, such as criminality and substance use. Children of people with ASPD are also at risk. Some research suggests that negative or traumatic experiences in childhood, perhaps as a result of the choices a parent with ASPD might make, can be a predictor of delinquency later in the child's life. Additionally, with variability between situations, children of a parent with ASPD may face consequences of delinquency if they are raised in an environment in which crime and violence are common. Suicide is a leading cause of death among youth who display antisocial behavior, especially when mixed with delinquency. Incarceration, which can come as a consequence of the actions of a person with ASPD, is a predictor of suicidal ideation in youth.

History
The first version of the DSM, in 1952, listed sociopathic personality disturbance. This category was for individuals who were considered "...ill primarily in terms of society and of conformity with the prevailing milieu, and not only in terms of personal discomfort and relations with other individuals." There were four subtypes, referred to as "reactions": antisocial, dyssocial, sexual, and addiction. The antisocial reaction was said to include people who were "always in trouble" and not learning from it, maintaining "no loyalties", frequently callous and lacking responsibility, with an ability to "rationalize" their behavior. The category was described as more specific and limited than the existing concepts of "constitutional psychopathic state" or "psychopathic personality", which had a very broad meaning; the narrower definition was in line with criteria advanced by Hervey M. Cleckley from 1941, while the term sociopathic had been advanced by George Partridge in 1928 when studying early environmental influences on psychopaths. Partridge discovered the correlation between antisocial psychopathic disorder and parental rejection experienced in early childhood.

The DSM-II in 1968 rearranged the categories, and "antisocial personality" was now listed as one of ten personality disorders, still described similarly and to be applied to individuals who are "basically unsocialized", in repeated conflicts with society, incapable of significant loyalty, selfish, irresponsible, unable to feel guilt or learn from prior experiences, and who tend to blame others and rationalize. The manual's preface contains "special instructions", including "Antisocial personality should always be specified as mild, moderate, or severe." The DSM-II warned that a history of legal or social offenses was not by itself enough to justify the diagnosis, and that a "group delinquent reaction" of childhood or adolescence or "social maladjustment without manifest psychiatric disorder" should be ruled out first. The dyssocial personality type was relegated in the DSM-II to "dyssocial behavior" for individuals who are predatory and follow more or less criminal pursuits, such as racketeers, dishonest gamblers, prostitutes, and dope peddlers (DSM-I had classified this condition as sociopathic personality disorder, dyssocial type).
It would later resurface as the name of a diagnosis in the ICD manual produced by the WHO, later spelled dissocial personality disorder and considered approximately equivalent to the ASPD diagnosis.

The DSM-III in 1980 included the full term antisocial personality disorder and, as with other disorders, there was now a full checklist of symptoms focused on observable behaviors to enhance consistency in diagnosis between different psychiatrists ('inter-rater reliability'). The ASPD symptom list was based on the Research Diagnostic Criteria developed from the so-called Feighner Criteria of 1972, in turn largely credited to influential research by sociologist Lee Robins published in 1966 as "Deviant Children Grown Up". However, Robins had previously clarified that while the new criterion of prior childhood conduct problems came from her work, she and co-researcher psychiatrist Patricia O'Neal got the diagnostic criteria they used from Lee's husband, the psychiatrist Eli Robins, one of the authors of the Feighner criteria, who had been using them as part of diagnostic interviews.

The DSM-IV maintained the trend for behavioral antisocial symptoms while noting, "This pattern has also been referred to as psychopathy, sociopathy, or dyssocial personality disorder", and re-including in the 'Associated Features' text summary some of the underlying personality traits from the older diagnoses. The DSM-5 has the same diagnosis of antisocial personality disorder. The Pocket Guide to the DSM-5 Diagnostic Exam suggests that a person with ASPD may present "with psychopathic features" if he or she exhibits "a lack of anxiety or fear and a bold, efficacious interpersonal style".
Period 2 element
A period 2 element is one of the chemical elements in the second row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behavior of the elements as their atomic number increases; a new row is started when chemical behavior begins to repeat, creating columns of elements with similar properties. The second period contains the elements lithium, beryllium, boron, carbon, nitrogen, oxygen, fluorine, and neon. In a quantum mechanical description of atomic structure, this period corresponds to the filling of the second (n = 2) shell, more specifically its 2s and 2p subshells. The period 2 elements carbon, nitrogen, oxygen, fluorine, and neon obey the octet rule in that they need eight electrons to complete their valence shell (lithium and beryllium obey the duet rule, while boron is electron-deficient), where at most eight electrons can be accommodated: two in the 2s orbital and six in the 2p subshell.

Periodic trends
Period 2 is the first period in the periodic table from which periodic trends can be drawn. Period 1, which only contains two elements (hydrogen and helium), is too small to draw any conclusive trends from, especially because the two elements behave nothing like other s-block elements. Period 2 has much more conclusive trends. For all elements in period 2, as the atomic number increases, the atomic radius of the elements decreases, the electronegativity increases, and the ionization energy increases. Period 2 has only two metals (lithium and beryllium) among its eight elements, fewer than any subsequent period both by number and by proportion. It also has the most nonmetals, namely five, of any period. The elements in period 2 often have the most extreme properties in their respective groups; for example, fluorine is the most reactive halogen, neon is the most inert noble gas, and lithium is the least reactive alkali metal. All period 2 elements completely obey the Madelung rule; in period 2, lithium and beryllium fill the 2s subshell, and boron, carbon, nitrogen, oxygen, fluorine, and neon fill the 2p subshell (a short sketch generating these configurations follows below). The period shares this trait with periods 1 and 3, none of which contain transition elements or inner transition elements, which often vary from the rule.

Chemical element, block, and electron configuration:
3, Li, lithium: s-block, [He] 2s1
4, Be, beryllium: s-block, [He] 2s2
5, B, boron: p-block, [He] 2s2 2p1
6, C, carbon: p-block, [He] 2s2 2p2
7, N, nitrogen: p-block, [He] 2s2 2p3
8, O, oxygen: p-block, [He] 2s2 2p4
9, F, fluorine: p-block, [He] 2s2 2p5
10, Ne, neon: p-block, [He] 2s2 2p6

Lithium
Lithium (Li) is an alkali metal with atomic number 3, occurring naturally in two isotopes: 6Li and 7Li. The two make up all naturally occurring lithium on Earth, although further isotopes have been synthesized. In ionic compounds, lithium loses an electron to become positively charged, forming the cation Li+. Lithium is the first alkali metal in the periodic table, and the first metal of any kind in the periodic table. At standard temperature and pressure, lithium is a soft, silver-white, highly reactive metal. With a density of 0.534 g⋅cm−3, lithium is the lightest metal and the least dense solid element. Lithium is one of the few elements synthesized in the Big Bang.
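As referenced above, the following minimal sketch (Python; the function name is illustrative, and only the bare Madelung ordering is implemented, ignoring the d-block exceptions that do not arise in the early periods) reproduces the configurations tabulated above:

```python
def electron_configuration(z: int) -> str:
    """Fill subshells in Madelung order: increasing n + l, ties broken
    by lower n. A sketch safe for the first few periods only."""
    subshells = sorted(
        ((n, l) for n in range(1, 6) for l in range(n)),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    parts, remaining = [], z
    for n, l in subshells:
        if remaining == 0:
            break
        take = min(2 * (2 * l + 1), remaining)  # subshell capacity 2(2l+1)
        parts.append(f"{n}{'spdf'[l]}{take}")
        remaining -= take
    return " ".join(parts)

for z, symbol in enumerate("Li Be B C N O F Ne".split(), start=3):
    print(symbol, electron_configuration(z))  # e.g. Ne -> 1s2 2s2 2p6
```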
Lithium is the 31st most abundant element on Earth, occurring in concentrations of between 20 and 70 ppm by weight, but due to its high reactivity it is only found naturally in compounds. Lithium salts are used in the pharmacology industry as mood-stabilising drugs. They are used in the treatment of bipolar disorder, where they have a role in treating depression and mania and may reduce the chance of suicide. The most common compounds used are lithium carbonate, Li2CO3, lithium citrate, Li3C6H5O7, lithium sulphate, Li2SO4, and lithium orotate, LiC5H3N2O4·H2O. Lithium is also used in batteries as an anode, and its alloys with aluminium, cadmium, copper, and manganese are used to make high-performance parts for aircraft, most notably the external tank of the Space Shuttle.

Beryllium
Beryllium (Be) is the chemical element with atomic number 4, occurring in the form of 9Be. At standard temperature and pressure, beryllium is a strong, steel-grey, light-weight, brittle, bivalent alkaline earth metal, with a density of 1.85 g⋅cm−3. It also has one of the highest melting points of all the light metals. Beryllium's most common isotope is 9Be, which contains 4 protons and 5 neutrons. It makes up almost 100% of all naturally occurring beryllium and is its only stable isotope; however, other isotopes have been synthesised. In ionic compounds, beryllium loses its two valence electrons to form the cation Be2+. Small amounts of beryllium were synthesised during the Big Bang, although most of it decayed or reacted further to create larger nuclei, like carbon, nitrogen, or oxygen. Beryllium is a component of 100 out of 4000 known minerals, such as bertrandite, Be4Si2O7(OH)2, beryl, Al2Be3Si6O18, chrysoberyl, Al2BeO4, and phenakite, Be2SiO4. Precious forms of beryl are aquamarine, red beryl, and emerald. The most common sources of beryllium used commercially are beryl and bertrandite, and its production involves the reduction of beryllium fluoride with magnesium metal, or the electrolysis of molten beryllium chloride to which some sodium chloride is added, as beryllium chloride is a poor conductor of electricity. Due to its stiffness, light weight, and dimensional stability over a wide temperature range, beryllium metal is used as a structural material in aircraft, missiles, and communication satellites. It is used as an alloying agent in beryllium copper, which is used to make electrical components due to its high electrical and heat conductivity. Sheets of beryllium are used in X-ray detectors to filter out visible light and let only X-rays through. It is used as a neutron moderator in nuclear reactors because light nuclei are more effective at slowing down neutrons than heavy nuclei. Beryllium's low weight and high rigidity also make it useful in the construction of tweeters in loudspeakers.

Beryllium and beryllium compounds are classified by the International Agency for Research on Cancer as Group 1 carcinogens; they are carcinogenic to both animals and humans. Chronic berylliosis is a pulmonary and systemic granulomatous disease caused by exposure to beryllium. Between 1% and 15% of people are sensitive to beryllium and may develop an inflammatory reaction in their respiratory system and skin, called chronic beryllium disease or berylliosis. The body's immune system recognises the beryllium as foreign particles and mounts an attack against them, usually in the lungs, where they are breathed in. This can cause fever, fatigue, weakness, night sweats, and difficulty in breathing.
Boron
Boron (B) is the chemical element with atomic number 5, occurring as 10B and 11B. At standard temperature and pressure, boron is a trivalent metalloid that has several different allotropes. Amorphous boron is a brown powder formed as a product of many chemical reactions. Crystalline boron is a very hard, black material with a high melting point that exists in many polymorphs: two rhombohedral forms, α-boron and β-boron (containing 12 and 106.7 atoms in the rhombohedral unit cell, respectively), and 50-atom tetragonal boron are the most common. Boron has a density of 2.34 g⋅cm−3. Boron's most common isotope is 11B, at 80.22%, which contains 5 protons and 6 neutrons. The other common isotope is 10B, at 19.78%, which contains 5 protons and 5 neutrons. These are the only stable isotopes of boron; however, other isotopes have been synthesised. Boron forms covalent bonds with other nonmetals and has oxidation states of 1, 2, 3, and 4.

Boron does not occur naturally as a free element, but in compounds such as borates. The most common sources of boron are tourmaline, borax, Na2B4O5(OH)4·8H2O, and kernite, Na2B4O5(OH)4·2H2O. It is difficult to obtain pure boron. It can be made through the magnesium reduction of boron trioxide, B2O3. This oxide is made by melting boric acid, B(OH)3, which in turn is obtained from borax. Small amounts of pure boron can be made by the thermal decomposition of boron bromide, BBr3, in hydrogen gas over hot tantalum wire, which acts as a catalyst. The most commercially important sources of boron are: sodium tetraborate pentahydrate, Na2B4O7·5H2O, which is used in large amounts in making insulating fiberglass and sodium perborate bleach; boron carbide, a ceramic material used to make armour materials, especially in bulletproof vests for soldiers and police officers; orthoboric acid, H3BO3, or boric acid, used in the production of textile fiberglass and flat-panel displays; sodium tetraborate decahydrate, Na2B4O7·10H2O, or borax, used in the production of adhesives; and the isotope boron-10, used as a control for nuclear reactors, as a shield for nuclear radiation, and in instruments used for detecting neutrons.

Boron is an essential plant micronutrient, required for cell wall strength and development, cell division, seed and fruit development, sugar transport, and hormone development. However, high soil concentrations of over 1.0 ppm can cause necrosis in leaves and poor growth. Levels as low as 0.8 ppm can cause these symptoms to appear in plants that are particularly boron-sensitive. Most plants, even those tolerant of boron in the soil, will show symptoms of boron toxicity when boron levels are higher than 1.8 ppm. In animals, boron is an ultratrace element; in human diets, daily intake ranges from 2.1 to 4.3 mg boron/kg body weight (bw)/day. It is also used as a supplement for the prevention and treatment of osteoporosis and arthritis.

Carbon
Carbon is the chemical element with atomic number 6, occurring as 12C, 13C, and 14C. At standard temperature and pressure, carbon is a solid, occurring in many different allotropes, the most common of which are graphite, diamond, the fullerenes, and amorphous carbon. Graphite is a soft, hexagonal crystalline, opaque black semimetal with very good conductive and thermodynamically stable properties. Diamond, however, is a highly transparent, colourless, cubic crystal with poor conductive properties; it is the hardest known naturally occurring mineral and has the highest refractive index of all gemstones.
In contrast to the crystal lattice structure of diamond and graphite, the fullerenes are molecules, named after Richard Buckminster Fuller, whose architecture the molecules resemble. There are several different fullerenes, the most widely known being the "buckyball" C60. Little is known about the fullerenes and they are a current subject of research. There is also amorphous carbon, which is carbon without any crystalline structure. In mineralogy, the term is used to refer to soot and coal, although these are not truly amorphous as they contain small amounts of graphite or diamond. Carbon's most common isotope at 98.9% is 12C, with six protons and six neutrons. 13C is also stable, with six protons and seven neutrons, at 1.1%. Trace amounts of 14C also occur naturally, but this isotope is radioactive and decays with a half-life of 5730 years; it is used for radiocarbon dating. Other isotopes of carbon have also been synthesised. Carbon forms covalent bonds with other non-metals with an oxidation state of −4, −2, +2 or +4. Carbon is the fourth most abundant element in the universe by mass after hydrogen, helium and oxygen, and is the second most abundant element in the human body by mass after oxygen (the third most abundant by number of atoms). There is an almost infinite number of compounds that contain carbon, due to carbon's ability to form long stable chains of C–C bonds. The simplest carbon-containing molecules are the hydrocarbons, which contain carbon and hydrogen, although they sometimes contain other elements in functional groups. Hydrocarbons are used as fossil fuels and to manufacture plastics and petrochemicals. All organic compounds, those essential for life, contain at least one atom of carbon. When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur it also forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Nitrogen Nitrogen is the chemical element with atomic number 7, the symbol N and atomic mass 14.00674 u. Elemental nitrogen is a colorless, odorless, tasteless and mostly inert diatomic gas at standard conditions, constituting 78.08% by volume of Earth's atmosphere. The element nitrogen was discovered as a separable component of air by Scottish physician Daniel Rutherford in 1772. It occurs naturally in the form of two isotopes: nitrogen-14 and nitrogen-15. Many industrially important compounds, such as ammonia, nitric acid, organic nitrates (propellants and explosives), and cyanides, contain nitrogen. The extremely strong bond in elemental nitrogen dominates nitrogen chemistry, causing difficulty for both organisms and industry in breaking the bond to convert the molecule into useful compounds, but at the same time causing the release of large amounts of often useful energy when the compounds burn, explode, or decay back into nitrogen gas. Nitrogen occurs in all living organisms, and the nitrogen cycle describes the movement of the element from air into the biosphere and organic compounds, then back into the atmosphere.
Synthetically produced nitrates are key ingredients of industrial fertilizers, and also key pollutants in causing the eutrophication of water systems. Nitrogen is a constituent element of amino acids and thus of proteins, and of nucleic acids (DNA and RNA). It resides in the chemical structure of almost all neurotransmitters, and is a defining component of alkaloids, biological molecules produced by many organisms. Oxygen Oxygen is the chemical element with atomic number 8, occurring mostly as 16O, but also 17O and 18O. Oxygen is the third-most common element by mass in the universe, after hydrogen and helium. It is a highly electronegative, non-metallic, usually diatomic gas down to very low temperatures. Only fluorine is more reactive among non-metallic elements. It is two electrons short of a full octet and readily takes electrons from other elements. It reacts violently with alkali metals and white phosphorus at room temperature and less violently with alkaline earth metals heavier than magnesium. At higher temperatures it burns most other metals and many non-metals (including hydrogen, carbon, and sulfur). Many oxides are extremely stable substances difficult to decompose—like water, carbon dioxide, alumina, silica, and iron oxides (the latter often appearing as rust). Oxygen is also part of substances best described as salts of metals and oxygen-containing acids (thus nitrates, sulfates, phosphates, silicates, and carbonates). Oxygen is essential to all life. Plants and phytoplankton photosynthesize carbon dioxide and water, both oxides, in the presence of sunlight to form sugars with the release of oxygen. The sugars are then turned into such substances as cellulose and (with nitrogen and often sulfur) proteins and other essential substances of life. Animals especially, but also fungi and bacteria, ultimately depend upon photosynthesizing plants and phytoplankton for food and oxygen. Fire uses oxygen to oxidize compounds typically of carbon and hydrogen to water and carbon dioxide (although other elements may be involved), whether in uncontrolled conflagrations that destroy buildings and forests, or in the controlled fire within engines that supplies electrical energy from turbines, heat for keeping buildings warm, or the motive force that drives vehicles. Oxygen forms roughly 21% of the Earth's atmosphere; all of this oxygen is the result of photosynthesis. Pure oxygen has use in the medical treatment of people who have respiratory difficulties. Excess oxygen is toxic. Oxygen was originally associated with the formation of acids—until some acids were shown to not have oxygen in them. Oxygen is named for its formation of acids, especially with non-metals. Some oxides of some non-metals are extremely acidic, like sulfur trioxide, which forms sulfuric acid on contact with water. Most oxides of metals are alkaline, some extremely so, like potassium oxide. Some metallic oxides are amphoteric, like aluminum oxide, which means that they can react with both acids and bases. Although oxygen is normally a diatomic gas, it can form an allotrope known as ozone, a triatomic gas even more reactive than oxygen. Unlike regular diatomic oxygen, ozone is a toxic material generally considered a pollutant. In the upper atmosphere, some oxygen forms ozone, which has the property of absorbing dangerous ultraviolet rays within the ozone layer. Land life was impossible before the formation of an ozone layer.
Fluorine Fluorine is the chemical element with atomic number 9. It occurs naturally in its only stable form, 19F. Fluorine is a pale-yellow, diatomic gas under normal conditions and down to very low temperatures. Short one electron of the highly stable octet in each atom, fluorine molecules are unstable enough that they easily snap, with loose fluorine atoms tending to grab single electrons from just about any other element. Fluorine is the most reactive of all elements, and it even attacks many oxides to replace oxygen with fluorine. Fluorine even attacks silica, one of the favored materials for transporting strong acids, and burns asbestos. It attacks common salt, one of the most stable compounds, with the release of chlorine. It never appears uncombined in nature and almost never stays uncombined for long. It burns hydrogen spontaneously if either is liquid or gaseous—even at temperatures close to absolute zero. It is extremely difficult to isolate from any compounds, let alone keep uncombined. Fluorine gas is extremely dangerous because it attacks almost all organic material, including live flesh. Many of the binary compounds that it forms (called fluorides) are themselves highly toxic, including soluble fluorides and especially hydrogen fluoride. Fluorine forms very strong bonds with many elements. With sulfur it can form the extremely stable and chemically inert sulfur hexafluoride; with carbon it can form the remarkable material Teflon, a stable and non-combustible solid with a high melting point and a very low coefficient of friction that makes it an excellent liner for cooking pans and raincoats. Fluorine–carbon compounds include some unique plastics. Fluorides are also used in the making of toothpaste. Neon Neon is the chemical element with atomic number 10, occurring as 20Ne, 21Ne and 22Ne. Neon is a monatomic gas. With a complete octet of outer electrons, it is highly resistant to removal of any electron, and it cannot accept an electron from anything. Neon has no tendency to form any normal compounds under normal temperatures and pressures; it is effectively inert. It is one of the so-called "noble gases". Neon is a trace component of the atmosphere without any biological role.
Physical sciences
Periods
Chemistry
199079
https://en.wikipedia.org/wiki/Period%201%20element
Period 1 element
A period 1 element is one of the chemical elements in the first row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate periodic (recurring) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that analogous elements fall into the same vertical columns. The first period contains fewer elements than any other row in the table, with only two: hydrogen and helium. This situation can be explained by modern theories of atomic structure. In a quantum mechanical description of atomic structure, this period corresponds to the filling of the 1s orbital. Period 1 elements obey the duet rule in that they need two electrons to complete their valence shell. Hydrogen and helium are the oldest and the most abundant elements in the universe. Periodic trends All other periods in the periodic table contain at least eight elements, and it is often helpful to consider periodic trends across the period. However, period 1 contains only two elements, so this concept does not apply here. In terms of vertical trends down groups, helium can be seen as a typical noble gas at the head of the IUPAC group 18, but as discussed below, hydrogen's chemistry is unique and it is not easily assigned to any group. Position of period 1 elements in the periodic table The first electron shell, n = 1, consists of only one orbital, and the maximum number of valence electrons that a period 1 element can accommodate is two, both in the 1s orbital. The valence shell lacks "p" or any other kind of orbitals due to the general constraint ℓ < n on the quantum numbers. Therefore, period 1 has exactly two elements. Although both hydrogen and helium are in the s-block, neither of them behaves similarly to other s-block elements. Their behaviour is so different from the other s-block elements that there is considerable disagreement over where these two elements should be placed in the periodic table. Simply following electron configurations, hydrogen (electronic configuration 1s1) and helium (1s2) should be placed in groups 1 and 2, above lithium (1s22s1) and beryllium (1s22s2). While such a placement is common for hydrogen, it is rarely used for helium outside of the context of illustrating the electron configurations. Usually, hydrogen is placed in group 1, and helium in group 18: this is the placement found on the IUPAC periodic table. Some variation can be found on both these matters. Like the group 1 metals, hydrogen has one electron in its outermost shell and typically loses its only electron in chemical reactions. It has some metal-like chemical properties, being able to displace some metals from their salts. But hydrogen forms a diatomic nonmetallic gas at standard conditions, unlike the alkali metals which are reactive solid metals. This and hydrogen's formation of hydrides, in which it gains an electron, bring it close to the properties of the halogens, which do the same (though it is rarer for hydrogen to form H− than H+). Moreover, the lightest two halogens (fluorine and chlorine) are gaseous like hydrogen at standard conditions. Some properties of hydrogen are not a good fit for either group: hydrogen is neither highly oxidising nor highly reducing and is not reactive with water. Hydrogen thus has properties corresponding to both those of the alkali metals and the halogens, but matches neither group perfectly, and is thus difficult to place by its chemistry.
Therefore, while the electronic placement of hydrogen in group 1 predominates, some rarer arrangements show hydrogen in group 17, duplicate it in both groups 1 and 17, or float it separately from all groups. The possibility of "floating" hydrogen has nonetheless been criticised by Eric Scerri, who points out that removing it from all groups suggests that it is being excluded from the periodic law, when all elements should be subject to that law. A few authors have advocated more unusual placements for hydrogen, such as group 13 or group 14, on the grounds of trends in ionisation energy, electron affinity, and electronegativity. Helium is an unreactive noble gas at standard conditions, and has a full outer shell: these properties are like the noble gases in group 18, but not at all like the reactive alkaline earth metals of group 2. Therefore, helium is nearly universally placed in group 18, which its properties best match. However, helium has only two electrons in its outer shell, whereas the other noble gases have eight; and it is an s-block element, whereas all other noble gases are p-block elements. Also, solid helium crystallises in a hexagonal close-packed structure, which matches beryllium and magnesium in group 2, but not the other noble gases in group 18. In these ways helium better matches the alkaline earth metals. Therefore, tables with both hydrogen and helium floating outside all groups may rarely be encountered. A few chemists, such as Henry Bent, have advocated that the electronic placement in group 2 be adopted for helium. This assignment is also found in Charles Janet's left-step table. Arguments for this often rest on the first-row anomaly trend (s >> p > d > f), which states that the first element of each group often behaves quite differently from the succeeding ones: the difference is largest in the s-block (H and He), is moderate for the p-block (B to Ne), and is less pronounced for the d- and f-blocks. Thus helium as the first s2 element before the alkaline earth metals stands out as anomalous in a way that helium as the first noble gas does not. The normalized ionization potentials and electron affinities show better trends with helium in group 2 than in group 18; helium is expected to be slightly more reactive than neon (which breaks the general trend of reactivity in the noble gases, where the heavier ones are more reactive); and predicted helium compounds often lack neon analogues even theoretically, but sometimes have beryllium analogues. Elements Hydrogen Hydrogen (H) is the chemical element with atomic number 1. At standard temperature and pressure, hydrogen is a colorless, odorless, nonmetallic, tasteless, highly flammable diatomic gas with the molecular formula H2. With an atomic mass of 1.00794 amu, hydrogen is the lightest element. Hydrogen is the most abundant of the chemical elements, constituting roughly 75% of the universe's elemental mass. Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth, and is industrially produced from hydrocarbons such as methane, after which most elemental hydrogen is used "captively" (meaning locally at the production site), with the largest markets almost equally divided between fossil fuel upgrading, such as hydrocracking, and ammonia production, mostly for the fertilizer market.
Hydrogen may be produced from water using the process of electrolysis, but this process is significantly more expensive commercially than hydrogen production from natural gas. The most common naturally occurring isotope of hydrogen, known as protium, has a single proton and no neutrons. In ionic compounds, it can take on either a positive charge, becoming a cation composed of a bare proton, or a negative charge, becoming an anion known as a hydride. Hydrogen can form compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, in which many reactions involve the exchange of protons between soluble molecules. Because hydrogen is the only neutral atom for which the Schrödinger equation can be solved analytically, the study of the energetics and spectrum of the hydrogen atom has played a key role in the development of quantum mechanics. The interactions of hydrogen with various metals are very important in metallurgy, as many metals can suffer hydrogen embrittlement, and in developing safe ways to store it for use as a fuel. Hydrogen is highly soluble in many compounds composed of rare earth metals and transition metals and can be dissolved in both crystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the metal crystal lattice. Helium Helium (He) is a colorless, odorless, tasteless, non-toxic, inert monatomic chemical element that heads the noble gas series in the periodic table and whose atomic number is 2. Its boiling and melting points are the lowest among the elements and it exists only as a gas except in extreme conditions. Helium was discovered in 1868 by French astronomer Pierre Janssen, who first detected the substance as an unknown yellow spectral line signature in light from a solar eclipse. In 1903, large reserves of helium were found in the natural gas fields of the United States, which is by far the largest supplier of the gas. The substance is used in cryogenics, in deep-sea breathing systems, to cool superconducting magnets, in helium dating, for inflating balloons, for providing lift in airships, and as a protective gas for industrial uses such as arc welding and growing silicon wafers. Inhaling a small volume of the gas temporarily changes the timbre and quality of the human voice. The behavior of liquid helium-4's two fluid phases, helium I and helium II, is important to researchers studying quantum mechanics and the phenomenon of superfluidity in particular, and to those looking at the effects that temperatures near absolute zero have on matter, such as with superconductivity. Helium is the second lightest element and is the second most abundant in the observable universe. Most helium was formed during the Big Bang, but new helium is being created as a result of the nuclear fusion of hydrogen in stars. On Earth, helium is relatively rare and is created by the natural decay of some radioactive elements because the alpha particles that are emitted consist of helium nuclei. This radiogenic helium is trapped with natural gas in concentrations of up to seven percent by volume, from which it is extracted commercially by a low-temperature separation process called fractional distillation.
Physical sciences
Periods
Chemistry
199081
https://en.wikipedia.org/wiki/Period%207%20element
Period 7 element
A period 7 element is one of the chemical elements in the seventh row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behavior of the elements as their atomic number increases: a new row is begun when chemical behavior begins to repeat, meaning that elements with similar behavior fall into the same vertical columns. The seventh period contains 32 elements, tied for the most with period 6, beginning with francium and ending with oganesson, the heaviest element currently discovered. As a rule, period 7 elements fill their 7s shells first, then their 5f, 6d, and 7p shells, in that order, but there are exceptions, such as uranium. Properties All period 7 elements are radioactive. This period contains the actinides, which include plutonium, the last naturally occurring element; subsequent elements must be created artificially. While the first five of these synthetic elements (americium through einsteinium) are now available in macroscopic quantities, most are extremely rare, having only been prepared in microgram amounts or less. The later transactinide elements have only been identified in laboratories in batches of a few atoms at a time. Because many of these elements are so rare, experimental results are scarce, and their periodic and group trends are less well defined than those of other periods. Whilst francium and radium do show typical properties of their respective groups, the actinides display a much greater variety of behavior and oxidation states than the lanthanides. These peculiarities are due to a variety of factors, including a large degree of spin–orbit coupling and relativistic effects, ultimately caused by the very high electric charge of their massive nuclei. Periodicity mostly holds throughout the 6d series and is predicted also for moscovium and livermorium, but the other four 7p elements, nihonium, flerovium, tennessine, and oganesson, are predicted to have very different properties from those expected for their groups. Elements
{| class="wikitable sortable"
! colspan="3" | Chemical element
! Block
! Electron configuration
! Occurrence
|-
| 87 || Fr || Francium || s-block || [Rn] 7s1 || From decay
|-
| 88 || Ra || Radium || s-block || [Rn] 7s2 || From decay
|-
| 89 || Ac || Actinium || f-block || [Rn] 6d1 7s2 (*) || From decay
|-
| 90 || Th || Thorium || f-block || [Rn] 6d2 7s2 (*) || Primordial
|-
| 91 || Pa || Protactinium || f-block || [Rn] 5f2 6d1 7s2 (*) || From decay
|-
| 92 || U || Uranium || f-block || [Rn] 5f3 6d1 7s2 (*) || Primordial
|-
| 93 || Np || Neptunium || f-block || [Rn] 5f4 6d1 7s2 (*) || From decay
|-
| 94 || Pu || Plutonium || f-block || [Rn] 5f6 7s2 || From decay
|-
| 95 || Am || Americium || f-block || [Rn] 5f7 7s2 || Synthetic
|-
| 96 || Cm || Curium || f-block || [Rn] 5f7 6d1 7s2 (*) || Synthetic
|-
| 97 || Bk || Berkelium || f-block || [Rn] 5f9 7s2 || Synthetic
|-
| 98 || Cf || Californium || f-block || [Rn] 5f10 7s2 || Synthetic
|-
| 99 || Es || Einsteinium || f-block || [Rn] 5f11 7s2 || Synthetic
|-
| 100 || Fm || Fermium || f-block || [Rn] 5f12 7s2 || Synthetic
|-
| 101 || Md || Mendelevium || f-block || [Rn] 5f13 7s2 || Synthetic
|-
| 102 || No || Nobelium || f-block || [Rn] 5f14 7s2 || Synthetic
|-
| 103 || Lr || Lawrencium || d-block || [Rn] 5f14 7s2 7p1 (*) || Synthetic
|-
| 104 || Rf || Rutherfordium || d-block || [Rn] 5f14 6d2 7s2 || Synthetic
|-
| 105 || Db || Dubnium || d-block || [Rn] 5f14 6d3 7s2 || Synthetic
|-
| 106 || Sg || Seaborgium || d-block || [Rn] 5f14 6d4 7s2 || Synthetic
|-
| 107 || Bh || Bohrium || d-block || [Rn] 5f14 6d5 7s2 || Synthetic
|-
| 108 || Hs || Hassium || d-block || [Rn] 5f14 6d6 7s2 || Synthetic
|-
| 109 || Mt || Meitnerium || d-block || [Rn] 5f14 6d7 7s2 (?) || Synthetic
|-
| 110 || Ds || Darmstadtium || d-block || [Rn] 5f14 6d8 7s2 (?) || Synthetic
|-
| 111 || Rg || Roentgenium || d-block || [Rn] 5f14 6d9 7s2 (?) || Synthetic
|-
| 112 || Cn || Copernicium || d-block || [Rn] 5f14 6d10 7s2 (?) || Synthetic
|-
| 113 || Nh || Nihonium || p-block || [Rn] 5f14 6d10 7s2 7p1 (?) || Synthetic
|-
| 114 || Fl || Flerovium || p-block || [Rn] 5f14 6d10 7s2 7p2 (?) || Synthetic
|-
| 115 || Mc || Moscovium || p-block || [Rn] 5f14 6d10 7s2 7p3 (?) || Synthetic
|-
| 116 || Lv || Livermorium || p-block || [Rn] 5f14 6d10 7s2 7p4 (?) || Synthetic
|-
| 117 || Ts || Tennessine || p-block || [Rn] 5f14 6d10 7s2 7p5 (?) || Synthetic
|-
| 118 || Og || Oganesson || p-block || [Rn] 5f14 6d10 7s2 7p6 (?) || Synthetic
|}
(?) Prediction
(*) Exception to the Madelung rule.
In many periodic tables, the f-block is erroneously shifted one element to the right, so that lanthanum and actinium become d-block elements, and Ce–Lu and Th–Lr form the f-block, tearing the d-block into two very uneven portions. This is a holdover from early erroneous measurements of electron configurations. Lev Landau and Evgeny Lifshitz pointed out in 1948 that lutetium is not an f-block element, and since then physical, chemical, and electronic evidence has overwhelmingly supported that the f-block contains the elements La–Yb and Ac–No, as shown here and as supported by International Union of Pure and Applied Chemistry reports dating from 1988 and 2021. S-block Francium and radium make up the s-block elements of the 7th period.
Francium (Fr, atomic number 87) is a highly radioactive metal that decays into astatine, radium, or radon. It is one of the two least electronegative elements; the other is caesium. As an alkali metal, it has one valence electron. Francium was discovered by Marguerite Perey in France (from which the element takes its name) in 1939. It was the last element discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium and thorium ores, where the isotope francium-223 continually forms and decays. As little as 20–30 g (one ounce) exists at any given time throughout Earth's crust; the other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms. Radium (Ra, atomic number 88) is an almost pure-white alkaline earth metal, but it readily oxidizes, reacting with nitrogen (rather than oxygen) on exposure to air, becoming black in color. All isotopes of radium are radioactive; the most stable is radium-226, which has a half-life of 1601 years and decays into radon. Due to such instability, radium luminesces, glowing a faint blue. Radium, in the form of radium chloride, was discovered by Marie and Pierre Curie in 1898. They extracted the radium compound from uraninite and published the discovery at the French Academy of Sciences five days later. Radium was isolated in its metallic state by Marie Curie and André-Louis Debierne through electrolysis of radium chloride in 1910. Since its discovery, it has given names such as radium A and radium C to several isotopes of other elements that are decay products of radium-226. In nature, radium is found in uranium ores in trace amounts as small as a seventh of a gram per ton of uraninite. Radium is not necessary for living things, and adverse health effects are likely when it is incorporated into biochemical processes due to its radioactivity and chemical reactivity. Actinides The actinide or actinoid (IUPAC nomenclature) series encompasses the 15 metallic chemical elements with atomic numbers from 89 to 103, actinium through lawrencium. The actinide series is named after its first element actinium. All but one of the actinides are f-block elements, corresponding to the filling of the 5f electron shell; lawrencium, a d-block element, is also generally considered an actinide. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. Of the actinides, thorium and uranium occur naturally in substantial, primordial, quantities. Radioactive decay of uranium produces transient amounts of actinium, protactinium and plutonium, and atoms of neptunium and plutonium are occasionally produced from transmutation in uranium ores. The other actinides are purely synthetic elements, though the first six actinides after plutonium would have been produced at Oklo (and long since decayed away), and curium almost certainly previously existed in nature as an extinct radionuclide. Nuclear tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium. All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These are used in nuclear reactors and nuclear weapons. 
Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors. In presentations of the periodic table, the lanthanides and the actinides are customarily shown as two additional rows below the main body of the table, with placeholders or else a selected single element of each series (either lanthanum or lutetium, and either actinium or lawrencium, respectively) shown in a single cell of the main table, between barium and hafnium, and radium and rutherfordium, respectively. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table (32 columns) shows the lanthanide and actinide series in their proper columns, as parts of the table's sixth and seventh rows (periods). Transactinides Transactinide elements (also, transactinides, or super-heavy elements, or superheavies) are the chemical elements with atomic numbers greater than those of the actinides, the heaviest of which is lawrencium (103). All transactinides of period 7 have been discovered, up to oganesson (element 118). Superheavies are also transuranic elements, that is, they have atomic numbers greater than that of uranium (92). The further distinction of having an atomic number greater than those of the actinides is significant in several ways:
The transactinide elements all have electrons in the 6d subshell in their ground state (and thus are placed in the d-block).
Even the longest-lived known isotopes of many transactinides have extremely short half-lives, measured in seconds or smaller units.
The element naming controversy involved the first five or six transactinides. These elements thus used three-letter systematic names for many years after their discovery was confirmed. (Usually, the three-letter symbols are replaced with two-letter symbols relatively soon after a discovery has been confirmed.)
Transactinides are radioactive and have only been obtained synthetically in laboratories. None of these elements has ever been collected in a macroscopic sample. Transactinides are all named after scientists or after important locations involved in the synthesis of the elements. Chemistry Nobel Prize winner Glenn T. Seaborg, who first proposed the actinide concept which led to the acceptance of the actinide series, also proposed the existence of a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153. The transactinide seaborgium is named in his honor. IUPAC defines an element to exist if its lifetime is longer than 10⁻¹⁴ seconds, the time needed to form an electron cloud.
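The idealized filling order invoked above (7s before 5f, 6d, and 7p) follows from the Madelung rule: subshells fill in order of increasing n + ℓ, with ties broken by smaller n. The following Python sketch, written for illustration rather than taken from any source, generates these idealized configurations; the rows marked (*) in the table are exactly the atoms whose measured ground states deviate from its output.

```python
# Illustrative sketch: idealized ground-state electron configurations from
# the Madelung (n + l, then n) filling rule. Real atoms marked (*) in the
# table above deviate from this rule; the sketch only shows why 7s fills
# before 5f, 6d and 7p in period 7.

SUBSHELL_LETTERS = "spdfghi"

def madelung_order(max_n=7):
    """All (n, l) subshells through n = max_n, sorted by n + l, ties by n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    """Idealized configuration for atomic number z, in filling order."""
    parts = []
    for n, l in madelung_order():
        if z == 0:
            break
        electrons = min(z, 2 * (2 * l + 1))  # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{SUBSHELL_LETTERS[l]}{electrons}")
        z -= electrons
    return " ".join(parts)

print(configuration(88))   # radium: ends ... 6p6 7s2, matching the table
print(configuration(104))  # rutherfordium: ends ... 7s2 5f14 6d2
```

Printing the result in filling order (rather than by shell) makes the rule itself visible: after 7s2 at radium, the n + ℓ = 8 group 5f, 6d, 7p follows, as the period 7 summary above describes.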
Physical sciences
Periods
Chemistry
199121
https://en.wikipedia.org/wiki/Rydberg%20constant
Rydberg constant
In spectroscopy, the Rydberg constant, symbol $R_\infty$ for heavy atoms or $R_\text{H}$ for hydrogen, named after the Swedish physicist Johannes Rydberg, is a physical constant relating to the electromagnetic spectra of an atom. The constant first arose as an empirical fitting parameter in the Rydberg formula for the hydrogen spectral series, but Niels Bohr later showed that its value could be calculated from more fundamental constants according to his model of the atom. Before the 2019 revision of the SI, $R_\infty$ and the electron spin g-factor were the most accurately measured physical constants. The constant is expressed for hydrogen as $R_\text{H}$, or at the limit of infinite nuclear mass as $R_\infty$. In either case, the constant is used to express the limiting value of the highest wavenumber (inverse wavelength) of any photon that can be emitted from a hydrogen atom, or, alternatively, the wavenumber of the lowest-energy photon capable of ionizing a hydrogen atom from its ground state. The hydrogen spectral series can be expressed simply in terms of the Rydberg constant for hydrogen and the Rydberg formula. In atomic physics, the Rydberg unit of energy, symbol Ry, corresponds to the energy of the photon whose wavenumber is the Rydberg constant, i.e. the ionization energy of the hydrogen atom in a simplified Bohr model. Value Rydberg constant The CODATA value is $R_\infty = \frac{m_\text{e} e^4}{8 \varepsilon_0^2 h^3 c} \approx 1.097\,373\,157 \times 10^7\ \text{m}^{-1}$, where $m_\text{e}$ is the rest mass of the electron (i.e. the electron mass), $e$ is the elementary charge, $\varepsilon_0$ is the permittivity of free space, $h$ is the Planck constant, and $c$ is the speed of light in vacuum. The symbol $\infty$ means that the nucleus is assumed to be infinitely heavy; an improvement of the value can be made using the reduced mass of the atom, $\mu = \frac{m_\text{e} M}{m_\text{e} + M}$, with $M$ the mass of the nucleus. The corrected Rydberg constant is $R_M = \frac{R_\infty}{1 + m_\text{e}/M}$, which for hydrogen, where $M$ is the mass $m_\text{p}$ of the proton, becomes $R_\text{H} = \frac{R_\infty}{1 + m_\text{e}/m_\text{p}} \approx 1.096\,776 \times 10^7\ \text{m}^{-1}$. Since the Rydberg constant is related to the spectrum lines of the atom, this correction leads to an isotopic shift between different isotopes. For example, deuterium, an isotope of hydrogen with a nucleus formed by a proton and a neutron, was discovered thanks to its slightly shifted spectrum. Rydberg unit of energy The Rydberg unit of energy is $1\ \text{Ry} = h c R_\infty \approx 2.179\,872 \times 10^{-18}\ \text{J} \approx 13.606\ \text{eV}$. The Rydberg frequency is $c R_\infty \approx 3.289\,841\,960 \times 10^{15}\ \text{Hz}$, and the Rydberg wavelength is $1/R_\infty \approx 91.127\ \text{nm}$. The corresponding angular wavelength is $1/(2\pi R_\infty) \approx 14.503\ \text{nm}$. Bohr model The Bohr model explains the atomic spectrum of hydrogen (see Hydrogen spectral series) as well as various other atoms and ions. It is not perfectly accurate, but is a remarkably good approximation in many cases, and historically played an important role in the development of quantum mechanics. The Bohr model posits that electrons revolve around the atomic nucleus in a manner analogous to planets revolving around the Sun. In the simplest version of the Bohr model, the mass of the atomic nucleus is considered to be infinite compared to the mass of the electron, so that the center of mass of the system, the barycenter, lies at the center of the nucleus. This infinite mass approximation is what is alluded to with the $\infty$ subscript. The Bohr model then predicts that the wavelengths of hydrogen atomic transitions are (see Rydberg formula): $\frac{1}{\lambda} = R_\text{H} \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right)$, where $n_1$ and $n_2$ are any two different positive integers (1, 2, 3, ...), and $\lambda$ is the wavelength (in vacuum) of the emitted or absorbed light, with $R_\text{H} = \frac{R_\infty}{1 + m_\text{e}/M}$, where $M$ is the total mass of the nucleus. This formula comes from substituting the reduced mass of the electron.
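As a rough numerical illustration (a sketch written for this text, not part of the original article), the defining formula and the reduced-mass correction above can be evaluated directly; the constants are hard-coded 2018 CODATA values and the variable names are ours.

```python
# Sketch: compute R_infinity from fundamental constants, apply the
# reduced-mass correction for hydrogen, and evaluate the Rydberg formula
# for the Lyman-alpha transition (n = 2 -> n = 1).

m_e  = 9.1093837015e-31    # electron mass, kg
m_p  = 1.67262192369e-27   # proton mass, kg
e    = 1.602176634e-19     # elementary charge, C (exact)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
h    = 6.62607015e-34      # Planck constant, J s (exact)
c    = 299792458.0         # speed of light, m/s (exact)

R_inf = m_e * e**4 / (8 * eps0**2 * h**3 * c)   # ~1.0973731568e7 m^-1
R_H   = R_inf / (1 + m_e / m_p)                 # reduced-mass correction

# Rydberg formula: 1/lambda = R_H * (1/n1^2 - 1/n2^2)
n1, n2 = 1, 2
wavelength = 1 / (R_H * (1 / n1**2 - 1 / n2**2))

print(f"R_inf       = {R_inf:.6e} m^-1")            # ~1.097373e7
print(f"R_H         = {R_H:.6e} m^-1")              # ~1.096776e7
print(f"Lyman-alpha = {wavelength * 1e9:.2f} nm")   # ~121.57 nm
```

The printed wavelength, about 121.57 nm, is the well-known Lyman-alpha line, a quick sanity check that the reduced-mass correction has been applied the right way around.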
Precision measurement The Rydberg constant was one of the most precisely determined physical constants, with a relative standard uncertainty of about $2 \times 10^{-12}$. This precision constrains the values of the other physical constants that define it. Since the Bohr model is not perfectly accurate, due to fine structure, hyperfine splitting, and other such effects, the Rydberg constant cannot be directly measured at very high accuracy from the atomic transition frequencies of hydrogen alone. Instead, the Rydberg constant is inferred from measurements of atomic transition frequencies in three different atoms (hydrogen, deuterium, and antiprotonic helium). Detailed theoretical calculations in the framework of quantum electrodynamics are used to account for the effects of finite nuclear mass, fine structure, hyperfine splitting, and so on. Finally, the value of $R_\infty$ is determined from the best fit of the measurements to the theory. Alternative expressions The Rydberg constant can also be expressed as in the following equations: $R_\infty = \frac{\alpha^2 m_\text{e} c}{2h} = \frac{\alpha^2}{2 \lambda_\text{e}} = \frac{\alpha^2 f_\text{C}}{2c} = \frac{\alpha^2 \omega_\text{C}}{4\pi c} = \frac{\alpha^3}{4\pi r_\text{e}} = \frac{\alpha}{4\pi a_0}$, and in energy units, $\text{Ry} = h c R_\infty = \tfrac{1}{2} m_\text{e} c^2 \alpha^2$, where $m_\text{e}$ is the electron rest mass, $e$ is the electric charge of the electron, $h$ is the Planck constant, $\hbar$ is the reduced Planck constant, $c$ is the speed of light in vacuum, $\varepsilon_0$ is the electric constant (vacuum permittivity), $\alpha$ is the fine-structure constant, $\lambda_\text{e}$ is the Compton wavelength of the electron, $f_\text{C}$ is the Compton frequency of the electron, $\omega_\text{C}$ is the Compton angular frequency of the electron, $a_0$ is the Bohr radius, and $r_\text{e}$ is the classical electron radius. The last expression in the first equation shows that the wavelength of light needed to ionize a hydrogen atom is $4\pi/\alpha$ times the Bohr radius of the atom. The second equation is relevant because its value is the coefficient for the energy of the atomic orbitals of a hydrogen atom: $E_n = -\frac{h c R_\infty}{n^2}$.
Physical sciences
Atomic physics
Physics
199189
https://en.wikipedia.org/wiki/Bernoulli%20distribution
Bernoulli distribution
In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 - p$. Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are Boolean-valued: a single bit whose value is success/yes/true/one with probability $p$ and failure/no/false/zero with probability $q$. It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and $p$ would be the probability of the coin landing on heads (or vice versa where 1 would represent tails and $p$ would be the probability of tails). In particular, unfair coins would have $p \neq 1/2$. The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so $n$ would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1. Properties If $X$ is a random variable with a Bernoulli distribution, then $\Pr(X = 1) = p$ and $\Pr(X = 0) = q = 1 - p$. The probability mass function of this distribution, over possible outcomes $k$, is $f(k; p) = p$ if $k = 1$ and $f(k; p) = q = 1 - p$ if $k = 0$. This can also be expressed as $f(k; p) = p^k (1 - p)^{1-k}$ for $k \in \{0, 1\}$, or as $f(k; p) = pk + (1 - p)(1 - k)$ for $k \in \{0, 1\}$. The Bernoulli distribution is a special case of the binomial distribution with $n = 1$. The kurtosis goes to infinity for high and low values of $p$, but for $p = 1/2$ the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis, namely −2, than any other probability distribution. The Bernoulli distributions for $0 \le p \le 1$ form an exponential family. The maximum likelihood estimator of $p$ based on a random sample is the sample mean. Mean The expected value of a Bernoulli random variable $X$ is $\operatorname{E}[X] = p$. This is due to the fact that for a Bernoulli distributed random variable $X$ with $\Pr(X = 1) = p$ and $\Pr(X = 0) = q$ we find $\operatorname{E}[X] = 1 \cdot p + 0 \cdot q = p$. Variance The variance of a Bernoulli distributed $X$ is $\operatorname{Var}[X] = pq = p(1 - p)$. We first find $\operatorname{E}[X^2] = 1^2 \cdot p + 0^2 \cdot q = p$. From this follows $\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = p - p^2 = p(1 - p) = pq$. With this result it is easy to prove that, for any Bernoulli distribution, its variance will have a value inside $[0, 1/4]$. Skewness The skewness is $\frac{q - p}{\sqrt{pq}} = \frac{1 - 2p}{\sqrt{pq}}$. When we take the standardized Bernoulli distributed random variable $\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}$ we find that this random variable attains $\frac{q}{\sqrt{pq}}$ with probability $p$ and attains $-\frac{p}{\sqrt{pq}}$ with probability $q$. Thus we get $\gamma_1 = p \left( \frac{q}{\sqrt{pq}} \right)^3 + q \left( -\frac{p}{\sqrt{pq}} \right)^3 = \frac{q - p}{\sqrt{pq}}$. Higher moments and cumulants The raw moments are all equal, $\operatorname{E}[X^k] = p$, due to the fact that $1^k = 1$ and $0^k = 0$. The central moment of order $k$ is given by $\mu_k = q(-p)^k + p q^k$. The first six central moments are $\mu_1 = 0$, $\mu_2 = pq$, $\mu_3 = pq(q - p)$, $\mu_4 = pq(1 - 3pq)$, $\mu_5 = pq(q - p)(1 - 2pq)$, and $\mu_6 = pq(1 - 5pq(1 - pq))$. The higher central moments can be expressed more compactly in terms of $\mu_2$ and $\mu_3$: $\mu_4 = \mu_2(1 - 3\mu_2)$, $\mu_5 = \mu_3(1 - 2\mu_2)$, $\mu_6 = \mu_2(1 - 5\mu_2(1 - \mu_2))$. The first six cumulants are $\kappa_1 = p$, $\kappa_2 = \mu_2$, $\kappa_3 = \mu_3$, $\kappa_4 = \mu_2(1 - 6\mu_2)$, $\kappa_5 = \mu_3(1 - 12\mu_2)$, and $\kappa_6 = \mu_2(1 - 30\mu_2(1 - 4\mu_2))$. Entropy and Fisher's Information Entropy Entropy is a measure of uncertainty or randomness in a probability distribution. For a Bernoulli random variable $X$ with success probability $p$ and failure probability $q = 1 - p$, the entropy is defined as $\operatorname{H}(X) = -p \ln p - q \ln q$. The entropy is maximized when $p = 1/2$, indicating the highest level of uncertainty when both outcomes are equally likely. The entropy is zero when $p = 0$ or $p = 1$, where one outcome is certain. Fisher's Information Fisher information measures the amount of information that an observable random variable $X$ carries about an unknown parameter $p$ upon which the probability of $X$ depends. For the Bernoulli distribution, the Fisher information with respect to the parameter $p$ is given by $I(p) = \frac{1}{pq}$. Proof: The likelihood function for a Bernoulli random variable $X$ is $L(p; X) = p^X (1 - p)^{1 - X}$. This represents the probability of observing $X$ given the parameter $p$. The log-likelihood function is $\ell(p; X) = X \ln p + (1 - X) \ln(1 - p)$. The score function (the first derivative of the log-likelihood with respect to $p$)
is $\frac{\partial \ell}{\partial p} = \frac{X}{p} - \frac{1 - X}{1 - p}$. The second derivative of the log-likelihood function is $\frac{\partial^2 \ell}{\partial p^2} = -\frac{X}{p^2} - \frac{1 - X}{(1 - p)^2}$. Fisher information is calculated as the negative expected value of the second derivative of the log-likelihood: $I(p) = -\operatorname{E}\left[ \frac{\partial^2 \ell}{\partial p^2} \right] = \frac{p}{p^2} + \frac{1 - p}{(1 - p)^2} = \frac{1}{p(1 - p)} = \frac{1}{pq}$. It is smallest when $p = 1/2$, where the outcome is most uncertain; as $p$ approaches 0 or 1 the information grows without bound, since observations then pin the parameter down ever more sharply. Related distributions If $X_1, \dots, X_n$ are independent, identically distributed (i.i.d.) random variables, all Bernoulli trials with success probability $p$, then their sum is distributed according to a binomial distribution with parameters $n$ and $p$: $\sum_{k=1}^{n} X_k \sim \operatorname{B}(n, p)$ (binomial distribution). The Bernoulli distribution is simply $\operatorname{B}(1, p)$, also written as $\operatorname{Bernoulli}(p)$. The categorical distribution is the generalization of the Bernoulli distribution for variables with any constant number of discrete values. The Beta distribution is the conjugate prior of the Bernoulli distribution. The geometric distribution models the number of independent and identical Bernoulli trials needed to get one success. If $X \sim \operatorname{Bernoulli}(1/2)$, then $2X - 1$ has a Rademacher distribution.
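A short illustrative Python sketch (written for this text, not taken from the article; all function names are ours) collects the closed-form quantities derived above (pmf, mean, variance, skewness, entropy, and Fisher information) and checks the mean and variance against a simple simulation.

```python
import math
import random

def pmf(k, p):
    """f(k; p) = p^k (1 - p)^(1 - k) for k in {0, 1}."""
    return p**k * (1 - p)**(1 - k)

def mean(p):      return p
def variance(p):  return p * (1 - p)                           # at most 1/4, at p = 1/2
def skewness(p):  return (1 - 2 * p) / math.sqrt(p * (1 - p))
def entropy(p):   return -p * math.log(p) - (1 - p) * math.log(1 - p)  # in nats
def fisher(p):    return 1 / (p * (1 - p))                     # smallest at p = 1/2

p = 0.3
samples = [1 if random.random() < p else 0 for _ in range(100_000)]
m = sum(samples) / len(samples)                # sample mean = MLE of p
v = sum((x - m)**2 for x in samples) / len(samples)

print(f"mean:     exact {mean(p):.4f}, simulated {m:.4f}")
print(f"variance: exact {variance(p):.4f}, simulated {v:.4f}")
print(f"entropy(0.5) = {entropy(0.5):.4f} nats (the maximum, ln 2)")
print(f"Fisher info: I(0.5) = {fisher(0.5):.1f}, I(0.9) = {fisher(0.9):.2f}")
```

The last line makes the Fisher-information behaviour concrete: I(0.5) = 4 is the minimum, while I(0.9) is already above 11 and diverges as p approaches 0 or 1.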
Mathematics
Probability
null
199252
https://en.wikipedia.org/wiki/Deserts%20and%20xeric%20shrublands
Deserts and xeric shrublands
Deserts and xeric shrublands are a biome defined by the World Wide Fund for Nature. Deserts and xeric (from Ancient Greek xēros, 'dry') shrublands form the largest terrestrial biome, covering 19% of Earth's land surface area. Ecoregions in this habitat type vary greatly in the amount of annual rainfall they receive, usually less than 250 mm (10 in) annually except in the margins. Generally evaporation exceeds rainfall in these ecoregions. Temperature variability is also diverse in these lands. Many deserts, such as the Sahara, are hot year-round, but others, such as East Asia's Gobi Desert, become quite cold during the winter. Temperature extremes are a characteristic of most deserts. High daytime temperatures give way to cold nights because there is no insulation provided by humidity and cloud cover. The diversity of climatic conditions, though quite harsh, supports a rich array of habitats. Many of these habitats are ephemeral in nature, reflecting the paucity and seasonality of available water. Woody-stemmed shrubs and plants characterize vegetation in these regions. Above all, these plants have evolved to minimize water loss. Animal biodiversity is equally well adapted and quite diverse. Degradation Desertification The conversion of productive drylands to desert conditions, known as desertification, can occur from a variety of causes. One is human intervention, including intensive agricultural tillage or overgrazing in areas that cannot support such exploitation. Climatic shifts such as global warming or the Milankovitch cycle (which drives glacials and interglacials) also affect the pattern of deserts on Earth. Woody plant encroachment Xeric shrublands can experience woody plant encroachment, which is the thickening of bushes and shrubs at the expense of grasses. This process is often caused by unsustainable land management practices, such as overgrazing and fire suppression, but can also be a consequence of climate change. As a result, the shrublands' core ecosystem services are affected, including their biodiversity, productivity, and groundwater recharge. Woody plant encroachment can be an expression of land degradation. Ecoregions The World Wide Fund for Nature highlights a number of desert ecoregions that have a high degree of biodiversity and endemism:
The Nama Karoo of Namibia has the world's richest desert fauna.
The Chihuahuan desert and Central Mexican matorral are the richest deserts in the Neotropics.
The Carnarvon xeric shrublands of Australia are a regional center for endemism.
The Sonoran and Baja deserts of Mexico are unusual desert communities dominated by giant columnar cacti.
Madagascar spiny forests
Atacama Desert
Physical sciences
Biomes: General
Earth science
199304
https://en.wikipedia.org/wiki/Amp%C3%A8re%27s%20circuital%20law
Ampère's circuital law
In classical electromagnetism, Ampère's circuital law (not to be confused with Ampère's force law) relates the circulation of a magnetic field around a closed loop to the electric current passing through the loop. James Clerk Maxwell derived it using hydrodynamics in his 1861 paper "On Physical Lines of Force". In 1865 he generalized the equation to apply to time-varying currents by adding the displacement current term, resulting in the modern form of the law, sometimes called the Ampère–Maxwell law, which is one of Maxwell's equations that form the basis of classical electromagnetism. Ampère's original circuital law In 1820 Danish physicist Hans Christian Ørsted discovered that an electric current creates a magnetic field around it, when he noticed that the needle of a compass next to a wire carrying current turned so that the needle was perpendicular to the wire. He investigated and discovered the rules which govern the field around a straight current-carrying wire:
The magnetic field lines encircle the current-carrying wire.
The magnetic field lines lie in a plane perpendicular to the wire.
If the direction of the current is reversed, the direction of the magnetic field reverses.
The strength of the field is directly proportional to the magnitude of the current.
The strength of the field at any point is inversely proportional to the distance of the point from the wire.
This sparked a great deal of research into the relation between electricity and magnetism. André-Marie Ampère investigated the magnetic force between two current-carrying wires, discovering Ampère's force law. In the 1850s Scottish mathematical physicist James Clerk Maxwell generalized these results and others into a single mathematical law. The original form of Maxwell's circuital law, which he derived as early as 1855 in his paper "On Faraday's Lines of Force" based on an analogy to hydrodynamics, relates magnetic fields to the electric currents that produce them. It determines the magnetic field associated with a given current, or the current associated with a given magnetic field. The original circuital law only applies to a magnetostatic situation, to continuous steady currents flowing in a closed circuit. For systems with electric fields that change over time, the original law (as given in this section) must be modified to include a term known as Maxwell's correction (see below). Equivalent forms The original circuital law can be written in several different forms, which are all ultimately equivalent: An "integral form" and a "differential form". The forms are exactly equivalent, and related by the Kelvin–Stokes theorem (see the "proof" section below). Forms using SI units, and those using cgs units. Other units are possible, but rare. This section will use SI units, with cgs units discussed later. Forms using either the $\mathbf{B}$ or $\mathbf{H}$ magnetic fields. These two forms use the total current density and free current density, respectively. The $\mathbf{B}$ and $\mathbf{H}$ fields are related by the constitutive equation $\mathbf{B} = \mu_0 \mathbf{H}$ in non-magnetic materials, where $\mu_0$ is the magnetic constant. Explanation The integral form of the original circuital law is a line integral of the magnetic field around some closed curve $C$ (arbitrary but must be closed). The curve $C$ in turn bounds both a surface $S$ which the electric current passes through (again arbitrary but not closed, since no three-dimensional volume is enclosed by $S$), and encloses the current.
The mathematical statement of the law is a relation between the circulation of the magnetic field around some path (line integral) and the current which passes through that enclosed path (surface integral). In terms of total current (which is the sum of both free current and bound current), the line integral of the magnetic $\mathbf{B}$-field (in teslas, T) around closed curve $C$ is proportional to the total current $I_\text{enc}$ passing through a surface $S$ (enclosed by $C$): $\oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \mu_0 \iint_S \mathbf{J} \cdot \mathrm{d}\mathbf{S} = \mu_0 I_\text{enc}$. In terms of free current, the line integral of the magnetic $\mathbf{H}$-field (in amperes per metre, A·m−1) around closed curve $C$ equals the free current $I_\text{f,enc}$ through a surface $S$: $\oint_C \mathbf{H} \cdot \mathrm{d}\boldsymbol{l} = \iint_S \mathbf{J}_\text{f} \cdot \mathrm{d}\mathbf{S} = I_\text{f,enc}$. Here $\mathbf{J}$ is the total current density (in amperes per square metre, A·m−2), $\mathbf{J}_\text{f}$ is the free current density only, $\oint_C$ is the closed line integral around the closed curve $C$, $\iint_S$ denotes a surface integral over the surface $S$ bounded by the curve $C$, $\cdot$ is the vector dot product, $\mathrm{d}\boldsymbol{l}$ is an infinitesimal element (a differential) of the curve $C$ (i.e. a vector with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve $C$), and $\mathrm{d}\mathbf{S}$ is the vector area of an infinitesimal element of surface $S$ (that is, a vector with magnitude equal to the area of the infinitesimal surface element, and direction normal to surface $S$; the direction of the normal must correspond with the orientation of $C$ by the right hand rule); see below for further explanation of the curve $C$ and surface $S$. The equivalent differential forms are $\nabla \times \mathbf{B} = \mu_0 \mathbf{J}$ and $\nabla \times \mathbf{H} = \mathbf{J}_\text{f}$, where $\nabla \times$ is the curl operator. Ambiguities and sign conventions There are a number of ambiguities in the above definitions that require clarification and a choice of convention. First, three of these terms are associated with sign ambiguities: the line integral could go around the loop in either direction (clockwise or counterclockwise); the vector area $\mathrm{d}\mathbf{S}$ could point in either of the two directions normal to the surface; and $I_\text{enc}$ is the net current passing through the surface $S$, meaning the current passing through in one direction, minus the current in the other direction—but either direction could be chosen as positive. These ambiguities are resolved by the right-hand rule: With the palm of the right hand toward the area of integration, and the index finger pointing along the direction of line integration, the outstretched thumb points in the direction that must be chosen for the vector area $\mathrm{d}\mathbf{S}$. Also the current passing in the same direction as $\mathrm{d}\mathbf{S}$ must be counted as positive. The right hand grip rule can also be used to determine the signs. Second, there are infinitely many possible surfaces that have the curve $C$ as their border. (Imagine a soap film on a wire loop, which can be deformed by blowing on the film). Which of those surfaces is to be chosen? If the loop does not lie in a single plane, for example, there is no one obvious choice. The answer is that it does not matter: in the magnetostatic case, the current density is solenoidal (see next section), so the divergence theorem and continuity equation imply that the flux through any surface with boundary $C$, with the same sign convention, is the same. In practice, one usually chooses the most convenient surface (with the given boundary) to integrate over. Free current versus bound current The electric current that arises in the simplest textbook situations would be classified as "free current"—for example, the current that passes through a wire or battery. In contrast, "bound current" arises in the context of bulk materials that can be magnetized and/or polarized. (All materials can to some extent.)
When a material is magnetized (for example, by placing it in an external magnetic field), the electrons remain bound to their respective atoms, but behave as if they were orbiting the nucleus in a particular direction, creating a microscopic current. When the currents from all these atoms are put together, they create the same effect as a macroscopic current, circulating perpetually around the magnetized object. This magnetization current $\mathbf{J}_\text{M}$ is one contribution to "bound current". The other source of bound current is bound charge. When an electric field is applied, the positive and negative bound charges can separate over atomic distances in polarizable materials, and when the bound charges move, the polarization changes, creating another contribution to the "bound current", the polarization current $\mathbf{J}_\text{P}$. The total current density due to free and bound charges is then $\mathbf{J} = \mathbf{J}_\text{f} + \mathbf{J}_\text{M} + \mathbf{J}_\text{P}$, with $\mathbf{J}_\text{f}$ the "free" or "conduction" current density. All current is fundamentally the same, microscopically. Nevertheless, there are often practical reasons for wanting to treat bound current differently from free current. For example, the bound current usually originates over atomic dimensions, and one may wish to take advantage of a simpler theory intended for larger dimensions. The result is that the more microscopic Ampère's circuital law, expressed in terms of $\mathbf{B}$ and the microscopic current (which includes free, magnetization and polarization currents), is sometimes put into the equivalent form below in terms of $\mathbf{H}$ and the free current only. For a detailed definition of free current and bound current, and the proof that the two formulations are equivalent, see the "proof" section below. Shortcomings of the original formulation of the circuital law There are two important issues regarding the circuital law that require closer scrutiny. First, there is an issue regarding the continuity equation for electrical charge. In vector calculus, the identity for the divergence of a curl states that the divergence of the curl of a vector field must always be zero. Hence $\nabla \cdot (\nabla \times \mathbf{B}) = 0$, and so the original Ampère's circuital law implies that $\nabla \cdot \mathbf{J} = 0$, i.e. that the current density is solenoidal. But in general, reality follows the continuity equation for electric charge: $\nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t}$, which is nonzero for a time-varying charge density. An example occurs in a capacitor circuit where time-varying charge densities exist on the plates. Second, there is an issue regarding the propagation of electromagnetic waves. For example, in free space, where $\mathbf{J} = \mathbf{0}$, the circuital law implies that $\nabla \times \mathbf{B} = \mathbf{0}$, i.e. that the magnetic field is irrotational, but to maintain consistency with the continuity equation for electric charge, we must instead have $\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$. To treat these situations, the contribution of displacement current must be added to the current term in the circuital law. James Clerk Maxwell conceived of displacement current as a polarization current in the dielectric vortex sea, which he used to model the magnetic field hydrodynamically and mechanically. He added this displacement current to Ampère's circuital law at equation 112 in his 1861 paper "On Physical Lines of Force". Displacement current In free space, the displacement current is related to the time rate of change of electric field. In a dielectric the above contribution to displacement current is present too, but a major contribution to the displacement current is related to the polarization of the individual molecules of the dielectric material.
Even though charges cannot flow freely in a dielectric, the charges in molecules can move a little under the influence of an electric field. The positive and negative charges in molecules separate under the applied field, causing an increase in the state of polarization, expressed as the polarization density $\mathbf{P}$. A changing state of polarization is equivalent to a current. Both contributions to the displacement current are combined by defining the displacement current as $\mathbf{J}_\text{D} = \frac{\partial \mathbf{D}}{\partial t}$, where the electric displacement field is defined as $\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} = \varepsilon_0 \varepsilon_\text{r} \mathbf{E}$, where $\varepsilon_0$ is the electric constant, $\varepsilon_\text{r}$ the relative static permittivity, and $\mathbf{P}$ is the polarization density. Substituting this form for $\mathbf{D}$ in the expression for displacement current, it has two components: $\mathbf{J}_\text{D} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \frac{\partial \mathbf{P}}{\partial t}$. The first term on the right hand side is present everywhere, even in a vacuum. It doesn't involve any actual movement of charge, but it nevertheless has an associated magnetic field, as if it were an actual current. Some authors apply the name displacement current to only this contribution. The second term on the right hand side is the displacement current as originally conceived by Maxwell, associated with the polarization of the individual molecules of the dielectric material. Maxwell's original explanation for displacement current focused upon the situation that occurs in dielectric media. In the modern post-aether era, the concept has been extended to apply to situations with no material media present, for example, to the vacuum between the plates of a charging vacuum capacitor. The displacement current is justified today because it serves several requirements of an electromagnetic theory: correct prediction of magnetic fields in regions where no free current flows; prediction of wave propagation of electromagnetic fields; and conservation of electric charge in cases where charge density is time-varying. For greater discussion see Displacement current. Extending the original law: the Ampère–Maxwell equation Next, the circuital equation is extended by including the polarization current, thereby remedying the limited applicability of the original circuital law. Treating free charges separately from bound charges, the equation including Maxwell's correction in terms of the $\mathbf{H}$-field is (the $\mathbf{H}$-field is used because it includes the magnetization currents, so $\mathbf{J}_\text{M}$ does not appear explicitly; see $\mathbf{H}$-field and also the note below): $\oint_C \mathbf{H} \cdot \mathrm{d}\boldsymbol{l} = \iint_S \left( \mathbf{J}_\text{f} + \frac{\partial \mathbf{D}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S}$ (integral form), where $\mathbf{H}$ is the magnetic field (also called "auxiliary magnetic field", "magnetic field intensity", or just "magnetic field"), $\mathbf{D}$ is the electric displacement field, and $\mathbf{J}_\text{f}$ is the enclosed conduction current or free current density. In differential form, $\nabla \times \mathbf{H} = \mathbf{J}_\text{f} + \frac{\partial \mathbf{D}}{\partial t}$. On the other hand, treating all charges on the same footing (disregarding whether they are bound or free charges), the generalized Ampère's equation, also called the Maxwell–Ampère equation, is in integral form (see the "proof" section below): $\oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \mu_0 \iint_S \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S}$. In differential form, $\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)$. In both forms $\mathbf{J}$ includes magnetization current density as well as conduction and polarization current densities. That is, the current density on the right side of the Ampère–Maxwell equation is $\mathbf{J}_\text{f} + \mathbf{J}_\text{M} + \mathbf{J}_\text{D}$, where the current density $\mathbf{J}_\text{D} = \frac{\partial \mathbf{D}}{\partial t}$ is the displacement current, and $\mathbf{J}_\text{f} + \mathbf{J}_\text{M}$ is the current density contribution actually due to movement of charges, both free and bound. Because $\nabla \cdot \mathbf{D} = \rho_\text{f}$ (Gauss's law), taking the divergence of this equation now reproduces the continuity equation, so the charge continuity issue with Ampère's original formulation is no longer a problem. Because of the term $\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$, wave propagation in free space now is possible. With the addition of the displacement current, Maxwell was able to hypothesize (correctly) that light was a form of electromagnetic wave.
Proof of equivalence Proof that the formulations of the circuital law in terms of free current are equivalent to the formulations involving total current In this proof, we will show that the equation \( \nabla \times \mathbf{H} = \mathbf{J}_f + \frac{\partial \mathbf{D}}{\partial t} \) is equivalent to the equation \( \frac{1}{\mu_0} \nabla \times \mathbf{B} = \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} . \) Note that we are only dealing with the differential forms, not the integral forms, but that is sufficient since the differential and integral forms are equivalent in each case, by the Kelvin–Stokes theorem. We introduce the polarization density \(\mathbf{P}\), which has the following relation to \(\mathbf{E}\) and \(\mathbf{D}\): \( \mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} . \) Next, we introduce the magnetization density \(\mathbf{M}\), which has the following relation to \(\mathbf{B}\) and \(\mathbf{H}\): \( \mathbf{B} = \mu_0 ( \mathbf{H} + \mathbf{M} ) , \) and the following relation to the bound current: \( \mathbf{J}_b = \nabla \times \mathbf{M} + \frac{\partial \mathbf{P}}{\partial t} = \mathbf{J}_M + \mathbf{J}_P , \) where \( \mathbf{J}_M = \nabla \times \mathbf{M} \) is called the magnetization current density, and \( \mathbf{J}_P = \frac{\partial \mathbf{P}}{\partial t} \) is the polarization current density. Taking the equation for \(\mathbf{B}\): \( \frac{1}{\mu_0} \nabla \times \mathbf{B} = \nabla \times \mathbf{H} + \nabla \times \mathbf{M} = \mathbf{J}_f + \frac{\partial \mathbf{D}}{\partial t} + \mathbf{J}_M = \mathbf{J}_f + \mathbf{J}_M + \frac{\partial \mathbf{P}}{\partial t} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} . \) Consequently, referring to the definition of the bound current: \( \frac{1}{\mu_0} \nabla \times \mathbf{B} = \mathbf{J}_f + \mathbf{J}_b + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} = \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} , \) as was to be shown. Ampère's circuital law in cgs units In cgs units, the integral form of the equation, including Maxwell's correction, reads \( \oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \frac{1}{c} \iint_S \left( 4\pi \mathbf{J} + \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S} , \) where \(c\) is the speed of light. The differential form of the equation (again, including Maxwell's correction) is \( \nabla \times \mathbf{B} = \frac{1}{c} \left( 4\pi \mathbf{J} + \frac{\partial \mathbf{E}}{\partial t} \right) . \)
Physical sciences
Electrodynamics
Physics
199516
https://en.wikipedia.org/wiki/Beta%20vulgaris
Beta vulgaris
Beta vulgaris (beet) is a species of flowering plant in the subfamily Betoideae of the family Amaranthaceae. Economically, it is the most important crop of the large order Caryophyllales. It has several cultivar groups: the sugar beet, of greatest importance for the production of table sugar; the root vegetable known as the beetroot or garden beet; the leaf vegetable known as chard, spinach beet or silverbeet; and mangelwurzel, which is a fodder crop. Three subspecies are typically recognised. All cultivars, despite their quite different morphologies, fall into the subspecies Beta vulgaris subsp. vulgaris. The wild ancestor of the cultivated beets is the sea beet (Beta vulgaris subsp. maritima). Description Beta vulgaris is a herbaceous biennial or, rarely, perennial plant that can reach up to 200 cm in height, though it rarely grows so tall; cultivated forms are mostly biennial. The roots of cultivated forms are dark red, white, or yellow and moderately to strongly swollen and fleshy (subsp. vulgaris); they are brown, fibrous, sometimes swollen, and woody in the wild subspecies. The stems grow erect or, in the wild forms, often procumbent; they are simple or branched in the upper part, and their surface is ribbed and striate. The basal leaves have a long petiole (which may be thickened and red, white, or yellow in some cultivars). The simple leaf blade is oblanceolate to heart-shaped, dark green to dark red, slightly fleshy, usually with a prominent midrib, with an entire or undulate margin, 5–20 cm long on wild plants (often much larger in cultivated plants). The upper leaves are smaller, and their blades are rhombic to narrowly lanceolate. The flowers are produced in dense spike-like, basally interrupted inflorescences. Very small flowers sit in one- to three- (rarely eight-) flowered glomerules in the axils of short bracts, or without bracts in the upper half of the inflorescence. The hermaphrodite flowers are urn-shaped, green or tinged reddish, and consist of five basally connate perianth segments (tepals), 3–5 × 2–3 mm, five stamens, and a semi-inferior ovary with 2–3 stigmas. The perianths of neighbouring flowers are often fused. Flowers are wind-pollinated or insect-pollinated, the former method being more important. In fruit, the glomerules of flowers form connate hard clusters. The fruit (utricle) is enclosed by the leathery and incurved perianth, and is immersed in the swollen, hardened perianth base. The horizontal seed is lenticular, 2–3 mm, with a red-brown, shiny seed coat. The seed contains an annular embryo and copious perisperm (feeding tissue). There are 18 chromosomes found in 2 sets, which makes beets diploid; using chromosome number notation, 2n = 18. Taxonomy The species description of Beta vulgaris was made in 1753 by Carl Linnaeus in Species Plantarum, at the same time creating the genus Beta. Linnaeus regarded sea beet, chard and red beet as varieties (at that time, sugar beet and mangelwurzel had not yet been selected). In the second edition of Species Plantarum (1762), Linnaeus separated the sea beet as its own species, Beta maritima, and left only the cultivated beets in Beta vulgaris. Today sea beet and cultivated beets are considered to belong to the same species, because they may hybridize and form fertile offspring. The taxonomy of the various cultivated races has a long and complicated history; they have been treated at the rank of subspecies, convarieties, or varieties. Now rankless cultivar groups are used, according to the International Code of Nomenclature for Cultivated Plants.
Beta vulgaris belongs to the subfamily Betoideae in the family Amaranthaceae (s.l., including the Chenopodiaceae). Beta vulgaris is classified into three subspecies: Beta vulgaris subsp. adanensis (Pamukç. ex Aellen) Ford-Lloyd & J.T.Williams (Syn.: Beta adanensis Pamukç. ex Aellen): occurring in disturbed habitats and steppes of Southeast Europe (Greece) and Western Asia (Cyprus, Israel, western Syria and Turkey). Beta vulgaris subsp. maritima, sea beet, the wild ancestor of all cultivated beets. Its distribution area reaches from the coasts of Western Europe and the Mediterranean Sea to the Near and Middle East. Beta vulgaris subsp. vulgaris (Syn.: Beta vulgaris subsp. cicla (L.) Arcang., Beta vulgaris subsp. rapacea (Koch) Döll): all cultivated beets belong to this subspecies. There are five cultivar groups: Altissima Group, sugar beet (Syn. B. v. subsp. v. convar. vulgaris var. altissima) - The sugar beet is a major commercial crop due to its high concentration of sucrose, which is extracted to produce table sugar. It was developed from garden beets in Germany in the late 18th century, after the roots of beets were found in 1747 to contain sugar. Cicla Group, spinach beet or chard (Syn. B. v. subsp. vulgaris convar. cicla var. cicla) - The leaf beet group has a long history dating to the second millennium BC. The first cultivated forms are believed to have been domesticated in the Mediterranean, and were introduced to the Middle East, India, and finally China by 850 AD. They were used as medicinal plants in Ancient Greece and Medieval Europe. Their popularity declined in Europe following the introduction of spinach. This variety is widely cultivated for its leaves, which are usually cooked like spinach, and it can be found in many grocery stores around the world. Flavescens Group, Swiss chard (Syn. B. v. subsp. v. convar. cicla var. flavescens) - Chard leaves have thick and fleshy midribs. Both the midribs and the leaf blades are used as vegetables, often in separate dishes. Some cultivars are also grown ornamentally for their coloured midribs. The thickened midribs are thought to have arisen from the spinach beet by mutation. Conditiva Group, beetroot or garden beet (Syn. B. v. subsp. v. convar. vulgaris var. vulgaris) - This is the red root vegetable most typically associated with the word 'beet'. It is especially popular in Eastern Europe, where it is the main ingredient of borscht. Crassa Group, mangelwurzel (Syn. B. v. subsp. v. convar. vulgaris var. crassa) - This variety was developed in the 18th century from the garden beet as a fodder crop grown for its thick roots. Distribution and habitat The wild forms of Beta vulgaris are distributed in southwestern, northern and Southeast Europe along the Atlantic coasts and the Mediterranean Sea, in North Africa and Macaronesia, and into Western Asia. They occur naturalized on other continents. The plants grow on coastal cliffs, on stony and sandy beaches, in salt marshes or coastal grasslands, and in ruderal or disturbed places. Cultivated beets are grown worldwide in regions without severe frosts. They prefer relatively cool temperatures between 15 and 19 °C. Leaf beets can thrive in warmer temperatures than beetroot. As descendants of coastal plants, they tolerate salty soils and drought. They grow best on pH-neutral to slightly alkaline soils containing plant nutrients and additionally sodium and boron. Ecology Beets are a food plant for the larvae of a number of Lepidoptera species. Cultivation Beets are cultivated for fodder (e.g.
mangelwurzel), for sugar (the sugar beet), as a leaf vegetable (chard or "Bull's Blood"), or as a root vegetable ("beetroot", "table beet", or "garden beet"). "Blood Turnip" was once a common name for beetroot cultivars for the garden. Examples include Bastian's Blood Turnip, Dewing's Early Blood Turnip, Edmand Blood Turnip, and Will's Improved Blood Turnip. The "earthy" taste of some beetroot cultivars comes from the presence of geosmin. Researchers have not yet determined whether beets produce geosmin themselves or whether it is produced by symbiotic soil microbes living in the plant. Breeding programs can produce cultivars with low geosmin levels, yielding flavours more acceptable to consumers. Beets are one of the most boron-intensive of modern crops, a dependency possibly introduced as an evolutionary response to its pre-industrial ancestor's constant exposure to sea spray; on commercial farms, a 60 tonne per hectare (26.8 ton/acre) harvest requires 600 grams of elemental boron per hectare (8.6 ounces/acre) for growth (these unit conversions are cross-checked in the sketch below). A lack of boron causes the meristem and the shoot to languish, eventually leading to heart rot. Red or purple coloring The color of red/purple beetroot is due to a variety of betalain pigments, unlike most other red plants, such as red cabbage, which contain anthocyanin pigments. The composition of different betalain pigments can vary, resulting in strains of beetroot which are yellow or other colors in addition to the familiar deep red. Some of the betalains in beets are betanin, isobetanin, probetanin, and neobetanin (the red to violet ones are known collectively as betacyanins). Other pigments contained in beet are indicaxanthin and vulgaxanthins (yellow to orange pigments known as betaxanthins). Indicaxanthin has been shown to be a powerful protective antioxidant in thalassemia, and it prevents the breakdown of alpha-tocopherol (vitamin E). Betacyanin in beetroot may cause red urine in people who are unable to break it down; this is called beeturia. The pigments are contained in cell vacuoles. Beetroot cells are quite unstable and will 'leak' when cut, heated, or when in contact with air or sunlight. This is why red beetroots leave a purple stain. Leaving the skin on when cooking, however, will maintain the integrity of the cells and therefore minimize leakage. Uses Nutrition In a 100 gram amount, beets supply 43 calories, contain 88% water, 10% carbohydrates, about 2% protein, and have a minute amount of fat. The only micronutrients of significant content are folate (27% of the Daily Value, DV) and manganese (16% DV). Culinary Spinach beet leaves are eaten as a pot herb. Young leaves of the garden beet are sometimes used similarly. The midribs of Swiss chard are eaten boiled, while the whole leaf blades are eaten as spinach beet. In some parts of Africa, the whole leaf blades are usually prepared with the midribs as one dish. The leaves and stems of young plants are steamed briefly and eaten as a vegetable; older leaves and stems are stir-fried and have a flavour resembling taro leaves. The usually deep-red roots of garden beet can be baked, boiled, or steamed, and are often served hot as a cooked vegetable or cold as a salad vegetable. They are also pickled. Raw beets are added to salads. A large proportion of the commercial production is processed into boiled and sterilised beets or into pickles. In Eastern Europe beet soup, such as cold borsch, is a popular dish. Yellow-coloured garden beets are grown on a very small scale for home consumption.
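As referenced above, the harvest and boron figures quoted under Cultivation can be cross-checked with a short unit conversion. A minimal sketch in Python (the conversion factors are standard constants; the agronomic values are those quoted in the text):

# Cross-check: 60 tonne/ha is about 26.8 short ton/acre, and 600 g of boron
# per hectare is about 8.6 oz/acre, matching the figures quoted in the text.
HA_PER_ACRE = 0.40468564224    # hectares per acre
G_PER_OZ = 28.349523125        # grams per avoirdupois ounce
KG_PER_SHORT_TON = 907.18474   # kilograms per US short ton

harvest_kg_per_ha = 60.0 * 1000.0   # 60 tonnes per hectare
boron_g_per_ha = 600.0              # grams of elemental boron per hectare

harvest_ton_per_acre = harvest_kg_per_ha * HA_PER_ACRE / KG_PER_SHORT_TON
boron_oz_per_acre = boron_g_per_ha * HA_PER_ACRE / G_PER_OZ

print(f"harvest: {harvest_ton_per_acre:.1f} ton/acre")  # ~26.8
print(f"boron:   {boron_oz_per_acre:.1f} oz/acre")      # ~8.6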
The consumption of beets causes pink urine in some people. Jews traditionally eat beet on Rosh Hashana (New Year). Its Aramaic name סלקא sounds like the word for "remove" or "depart"; it is eaten with a prayer "that our enemies be removed". Traditional medicine The roots and leaves of the beet have been used in traditional medicine to treat a wide variety of ailments. Ancient Romans used beetroot as a treatment for fevers and constipation, amongst other ailments. Apicius in De re coquinaria gives five recipes for soups to be given as a laxative, three of which feature the root of beet. Platina recommended taking beetroot with garlic to nullify the effects of 'garlic-breath'. Beet greens and Swiss chard are both considered high-oxalate foods, which are implicated in the formation of kidney stones. Phytochemicals and research Betaine and betalain, two phytochemical compounds prevalent in Beta vulgaris, are under basic research for their potential biological properties. Other uses Cultivars with large, brightly coloured leaves are grown for decorative purposes. History The sea beet, the ancestor of modern cultivated beets, prospered along the coast of the Mediterranean Sea. Beetroot remains have been excavated in the Third dynasty Saqqara pyramid at Thebes, Egypt, and four charred beetroots were found in the Neolithic site of Aartswoud in the Netherlands, though it has not been determined whether these were domesticated or wild forms of B. vulgaris. Zohary and Hopf note that beetroot is "linguistically well identified". They state the earliest written mention of the beet comes from 8th century BC Mesopotamia. The Greek Peripatetic Theophrastus later describes the beet as similar to the radish, while Aristotle also mentions the plant. Available evidence, such as that provided by Aristotle and Theophrastus, suggests that for most of its history the beet was grown primarily as a leafy vegetable, though these leafy varieties lost much of their popularity following the introduction of spinach. The ancient Romans considered beets an important health food and an aphrodisiac. Roman and Jewish literary sources suggest that in the 1st century BC the domestic beet was represented in the Mediterranean basin primarily by leafy forms like chard and spinach beet. Zohary and Hopf also argue that it is very probable that beetroot cultivars were also grown at the time, and some Roman recipes support this. Later English and German sources show that beetroots were commonly cultivated in Medieval Europe. The sugar beet Modern sugar beets date back to mid-18th century Silesia, where the king of Prussia subsidised experiments aimed at processes for sugar extraction. In 1747 Andreas Marggraf isolated sugar from beetroots, finding it at concentrations of 1.3–1.6%, and demonstrated that the sugar extracted from beets was the same as that produced from sugarcane. His student, Franz Karl Achard, evaluated 23 varieties of mangelwurzel for sugar content and selected a local race from Halberstadt in modern-day Saxony-Anhalt, Germany. Moritz Baron von Koppy and his son further selected from this race for white, conical roots. The selection was named 'Weiße Schlesische Zuckerrübe', meaning white Silesian sugar beet, and boasted a sugar content of about 6%. This selection is the progenitor of all modern sugar beets. A royal decree led to the first factory devoted to sugar extraction from beetroots being opened in Kunern, Silesia (now Konary, Poland) in 1801.
The Silesian sugar beet was soon introduced to France, where Napoleon opened schools specifically for studying the plant. He also ordered that land be devoted to growing the new sugar beet. This was in response to British blockades of cane sugar during the Napoleonic Wars, which ultimately stimulated the rapid growth of a European sugar beet industry. By 1840 about 5% of the world's sugar was derived from sugar beets, and by 1880 this number had risen more than tenfold to over 50%. The sugar beet was introduced to North America after 1830, with the first commercial production starting in 1879 at a farm in Alvarado, California. The sugar beet was also introduced to Chile by German settlers around 1850. It remains a widely cultivated commercial crop for producing table sugar, in part due to subsidies scaled to keep it competitive with tropical sugar cane.
Biology and health sciences
Caryophyllales
Plants
199534
https://en.wikipedia.org/wiki/Ecdysozoa
Ecdysozoa
Ecdysozoa () is a group of protostome animals, including Arthropoda (insects, chelicerates (including arachnids), crustaceans, and myriapods), Nematoda, and several smaller phyla. The grouping of these animal phyla into a single clade was first proposed by Eernisse et al. (1992) based on a phylogenetic analysis of 141 morphological characters of ultrastructural and embryological phenotypes. This clade, that is, a group consisting of a common ancestor and all its descendants, was formally named by Aguinaldo et al. in 1997, based mainly on phylogenetic trees constructed using 18S ribosomal RNA genes. A large study in 2008 by Dunn et al. strongly supported the monophyly of Ecdysozoa. The group Ecdysozoa is supported by many morphological characters, including growth by ecdysis, with moulting of the cuticle – without mitosis in the epidermis – under control of the prohormone ecdysone, and internal fertilization. The group was initially contested by a significant minority of biologists. Some argued for groupings based on more traditional taxonomic techniques, while others contested the interpretation of the molecular data. Etymology The name Ecdysozoa is scientific Greek, derived from ἔκδυσις (ekdysis) "shedding" + ζῷον (zōon) "animal". Characteristics The most notable characteristic shared by ecdysozoans is a three-layered cuticle (four in Tardigrada) composed of organic material, which is periodically molted as the animal grows. This process of molting is called ecdysis, and gives the group its name. The ecdysozoans lack locomotory cilia and produce mostly amoeboid sperm, and their embryos do not undergo spiral cleavage as in most other protostomes. Ancestrally, the group exhibited sclerotized teeth within the foregut, and a ring of spines around the mouth opening, though these features have been secondarily lost in certain groups. An unpaired ventral nerve cord, present in Priapulida and Nematoida, appears to be the ancestral condition, making the paired ventral nerve cord found in Panarthropoda, Kinorhyncha and Loricifera a derived trait. A respiratory and circulatory system is present only in onychophorans and arthropods (and is often absent in smaller arthropods like mites); in the rest of the groups, both systems are missing. Phylogeny The Ecdysozoa include the following phyla: Arthropoda, Onychophora, Tardigrada, Kinorhyncha, Priapulida, Loricifera, Nematoda, and Nematomorpha. A few other groups, such as the gastrotrichs, have been considered possible members but lack the main characters of the group, and are now placed elsewhere. The Arthropoda, Onychophora, and Tardigrada have been grouped together as the Panarthropoda because they are distinguished by segmented body plans. Dunn et al. in 2008 suggested that the Tardigrada could be grouped along with the nematodes, leaving Onychophora as the sister group to the arthropods. The non-panarthropod members of Ecdysozoa have been grouped as Cycloneuralia, but this group is more usually considered paraphyletic, representing the primitive condition from which the Panarthropoda evolved. A modern consensus phylogenetic tree for the protostomes is shown below; approximate times when clades radiated into newer clades are indicated in millions of years ago (Mya), and dashed lines show especially uncertain placements. The phylogenetic tree is based on Nielsen et al. and Howard et al. Older alternative groupings Articulata hypothesis The grouping proposed by Aguinaldo et al.
is almost universally accepted, replacing an older hypothesis that Panarthropoda should be classified with Annelida in a group called the Articulata, under which Ecdysozoa would be polyphyletic. Nielsen has suggested that a possible solution is to regard Ecdysozoa as a sister-group of Annelida, though he later considered them unrelated. Inclusion of the roundworms within the Ecdysozoa was initially contested, but since 2003 a broad consensus has formed supporting the Ecdysozoa, and in 2011 the Darwin–Wallace Medal was awarded to James Lake for the discovery of the New Animal Phylogeny consisting of the Ecdysozoa, the Lophotrochozoa, and the Deuterostomia. Coelomata hypothesis Before Aguinaldo's Ecdysozoa proposal, one of the prevailing theories for the evolution of the bilateral animals was based on the morphology of their body cavities. There were three types, or grades, of organization: the Acoelomata (no coelom), the Pseudocoelomata (partial coelom), and the Eucoelomata (true coelom). Adoutte and coworkers were among the first to strongly support the Ecdysozoa. With the introduction of molecular phylogenetics, the coelomate hypothesis was abandoned, although some molecular phylogenetic support for the Coelomata continued until as late as 2005.
Biology and health sciences
Ecdysozoa
Animals
199556
https://en.wikipedia.org/wiki/Taxon
Taxon
In biology, a taxon (back-formation from taxonomy; plural: taxa) is a group of one or more populations of an organism or organisms seen by taxonomists to form a unit. Although neither is required, a taxon is usually known by a particular name and given a particular ranking, especially if and when it is accepted or becomes established. It is very common, however, for taxonomists to remain at odds over what belongs to a taxon and the criteria used for inclusion, especially in the context of rank-based ("Linnaean") nomenclature (much less so under phylogenetic nomenclature). If a taxon is given a formal scientific name, its use is then governed by one of the nomenclature codes specifying which scientific name is correct for a particular grouping. Initial attempts at classifying and ordering organisms (plants and animals) were presumably set forth in prehistoric times by hunter-gatherers, as suggested by the fairly sophisticated folk taxonomies. Much later, Aristotle, and later still European scientists such as Magnol, Tournefort, and Carl Linnaeus (with his system in Systema Naturae, 10th edition (1758)), as well as an unpublished work by Bernard and Antoine Laurent de Jussieu, contributed to this field. The idea of a unit-based system of biological classification was first made widely available in 1805 in the introduction of Jean-Baptiste Lamarck's Flore françoise and Augustin Pyramus de Candolle's Principes élémentaires de botanique. Lamarck set out a system for the "natural classification" of plants. Since then, systematists have continued to construct accurate classifications encompassing the diversity of life; today, a "good" or "useful" taxon is commonly taken to be one that reflects evolutionary relationships. Many modern systematists, such as advocates of phylogenetic nomenclature, use cladistic methods that require taxa to be monophyletic (all descendants of some ancestor). Therefore, their basic unit, the clade, is equivalent to the taxon, assuming that taxa should reflect evolutionary relationships. Similarly, among those contemporary taxonomists working with the traditional Linnean (binomial) nomenclature, few propose taxa they know to be paraphyletic. An example of a long-established taxon that is not also a clade is the class Reptilia, the reptiles; birds and mammals are the descendants of animals traditionally classed as reptiles, but neither is included in the Reptilia (birds are traditionally placed in the class Aves, and mammals in the class Mammalia). History The term taxon was first used in 1926 by Adolf Meyer-Abich for animal groups, as a back-formation from the word taxonomy; the word taxonomy had been coined a century before from the Greek components τάξις (taxis), meaning "arrangement", and νόμος (nomos), meaning "method". For plants, it was proposed by Herman Johannes Lam in 1948, and it was adopted at the VII International Botanical Congress, held in 1950. Definition The glossary of the International Code of Zoological Nomenclature (1999) defines a "taxon, (pl. taxa), n. A taxonomic unit, whether named or not: i.e. a population, or group of populations of organisms which are usually inferred to be phylogenetically related and which have characters in common which differentiate (q.v.) the unit (e.g. a geographic population, a genus, a family, an order) from other such units. A taxon encompasses all included taxa of lower rank (q.v.) and individual organisms. [...]" Ranks A taxon can be assigned a taxonomic rank, usually (but not necessarily) when it is given a formal name.
"Phylum" applies formally to any biological domain, but traditionally it was always used for animals, whereas "division" was traditionally often used for plants, fungi, etc. A prefix is used to indicate a ranking of lesser importance. The prefix super- indicates a rank above, the prefix sub- indicates a rank below. In zoology, the prefix infra- indicates a rank below sub-. For instance, among the additional ranks of class are superclass, subclass and infraclass. Rank is relative, and restricted to a particular systematic schema. For example, liverworts have been grouped, in various systems of classification, as a family, order, class, or division (phylum). The use of a narrow set of ranks is challenged by users of cladistics; for example, the mere 10 ranks traditionally used between animal families (governed by the International Code of Zoological Nomenclature (ICZN)) and animal phyla (usually the highest relevant rank in taxonomic work) often cannot adequately represent the evolutionary history as more about a lineage's phylogeny becomes known. In addition, the class rank is quite often not an evolutionary but a phenetic or paraphyletic group and as opposed to those ranks governed by the ICZN (family-level, genus-level and species-level taxa), can usually not be made monophyletic by exchanging the taxa contained therein. This has given rise to phylogenetic taxonomy and the ongoing development of the PhyloCode, which has been proposed as a new alternative to replace Linnean classification and govern the application of names to clades. Many cladists do not see any need to depart from traditional nomenclature as governed by the ICZN, International Code of Nomenclature for algae, fungi, and plants, etc.
Biology and health sciences
Phylogenetics and taxonomy
Biology
199661
https://en.wikipedia.org/wiki/Rock%20%28geology%29
Rock (geology)
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology, which studies the rocks of other celestial objects. Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting. Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new human-made rocks and rock-like substances, such as concrete. Study Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the development of engineering and technology in human society. While the history of geology includes many theories of rocks and their origins that have persisted throughout human history, the study of rocks was developed as a formal science during the 19th century. Plutonism was developed as a theory during this time, and the discovery of radioactive decay in 1896 allowed for the radiometric dating of rocks. Understanding of plate tectonics developed in the second half of the 20th century. Classification Rocks are composed primarily of grains of minerals, which are crystalline solids formed from atoms chemically bonded into an orderly structure. Some rocks also contain mineraloids, which are rigid, mineral-like substances, such as volcanic glass, that lack crystalline structure. The types and abundance of minerals in a rock are determined by the manner in which it was formed. Most rocks contain silicate minerals, compounds that include silica tetrahedra in their crystal lattice; silicates account for about one-third of all known mineral species and about 95% of the earth's crust. The proportion of silica in rocks and minerals is a major factor in determining their names and properties (a classification sketch follows below). Rocks are classified according to characteristics such as mineral and chemical composition, permeability, texture of the constituent particles, and particle size. These physical properties are the result of the processes that formed the rocks.
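To make the role of silica content concrete, the conventional composition bands can be expressed as a small classifier. A minimal sketch in Python (the felsic/intermediate/mafic/ultramafic thresholds are common textbook conventions, not values given in this article, and real schemes such as TAS also weigh alkali content and texture):

def silica_class(sio2_wt_percent: float) -> str:
    """Classify an igneous rock by SiO2 weight percent alone."""
    if sio2_wt_percent > 63.0:
        return "felsic (e.g. granite, rhyolite)"
    if sio2_wt_percent > 52.0:
        return "intermediate (e.g. diorite, andesite)"
    if sio2_wt_percent > 45.0:
        return "mafic (e.g. gabbro, basalt)"
    return "ultramafic (e.g. peridotite)"

for sample in (72.0, 57.5, 49.0, 41.0):   # illustrative analyses, assumed values
    print(f"{sample:.1f}% SiO2 -> {silica_class(sample)}")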
Over the course of time, rocks can be transformed from one type into another, as described by a geological model called the rock cycle. This transformation produces three general classes of rock: igneous, sedimentary and metamorphic. Those three classes are subdivided into many groups. There are, however, no hard-and-fast boundaries between allied rocks. By increase or decrease in the proportions of their minerals, they pass through gradations from one to the other; the distinctive structures of one kind of rock may thus be traced, gradually merging into those of another. Hence the definitions adopted in rock names simply correspond to selected points in a continuously graduated series. Igneous rock Igneous rock (derived from the Latin word igneus, meaning of fire, from ignis meaning fire) is formed through the cooling and solidification of magma or lava. This magma may be derived from partial melts of pre-existing rocks in either a planet's mantle or crust. Typically, the melting of rocks is caused by one or more of three processes: an increase in temperature, a decrease in pressure, or a change in composition. Igneous rocks are divided into two main categories: Plutonic or intrusive rocks result when magma cools and crystallizes slowly within the Earth's crust. A common example of this type is granite. Volcanic or extrusive rocks result from magma reaching the surface either as lava or fragmental ejecta, forming rocks such as pumice or basalt. Magmas tend to become richer in silica as they rise towards the Earth's surface, a process called magma differentiation. This occurs both because minerals low in silica crystallize out of the magma as it begins to cool (Bowen's reaction series) and because the magma assimilates some of the crustal rock through which it ascends (country rock), and crustal rock tends to be high in silica. Silica content is thus the most important chemical criterion for classifying igneous rock. The content of alkali metal oxides is next in importance. About 65% of the Earth's crust by volume consists of igneous rocks. Of these, 66% are basalt and gabbro, 16% are granite, and 17% granodiorite and diorite. Only 0.6% are syenite and 0.3% are ultramafic. The oceanic crust is 99% basalt, which is an igneous rock of mafic composition. Granite and similar rocks, known as granitoids, dominate the continental crust. Sedimentary rock Sedimentary rocks are formed at the earth's surface by the accumulation and cementation of fragments of earlier rocks, minerals, and organisms, or as chemical precipitates and organic growths in water (sedimentation). This process causes clastic sediments (pieces of rock) or organic particles (detritus) to settle and accumulate, or minerals to chemically precipitate (evaporite) from a solution. The particulate matter then undergoes compaction and cementation at moderate temperatures and pressures (diagenesis). Before being deposited, sediments are formed by weathering of earlier rocks by erosion in a source area and then transported to the place of deposition by water, wind, ice, mass movement or glaciers (agents of denudation). About 7.9% of the crust by volume is composed of sedimentary rocks, with 82% of those being shales, while the remainder consists of 6% limestone and 12% sandstone and arkoses. Sedimentary rocks often contain fossils. Sedimentary rocks form under the influence of gravity and typically are deposited in horizontal or near horizontal layers or strata, and may be referred to as stratified rocks.
Sediment and the particles of clastic sedimentary rocks can be further classified by grain size. The smallest sediments are clay, followed by silt, sand, and gravel. Some systems include cobbles and boulders as measurements. Metamorphic rock Metamorphic rocks are formed by subjecting any rock type—sedimentary rock, igneous rock or another older metamorphic rock—to different temperature and pressure conditions than those in which the original rock was formed. This process is called metamorphism, meaning to "change in form". The result is a profound change in the physical properties and chemistry of the stone. The original rock, known as the protolith, transforms into other mineral types or other forms of the same minerals, by recrystallization. The temperatures and pressures required for this process are always higher than those found at the Earth's surface: temperatures greater than 150 to 200 °C and pressures greater than 1500 bars. This occurs, for example, when continental plates collide. Metamorphic rocks compose 27.4% of the crust by volume. The three major classes of metamorphic rock are based upon the formation mechanism. An intrusion of magma that heats the surrounding rock causes contact metamorphism—a temperature-dominated transformation. Pressure metamorphism occurs when sediments are buried deep under the ground; pressure is dominant, and temperature plays a smaller role. This is termed burial metamorphism, and it can result in rocks such as jade. Where both heat and pressure play a role, the mechanism is termed regional metamorphism. This is typically found in mountain-building regions. Depending on their structure, metamorphic rocks are divided into two general categories: those that possess a banded or layered texture (foliation) are referred to as foliated, and the remainder are termed non-foliated. The name of the rock is then determined based on the types of minerals present. Schists are foliated rocks that are primarily composed of lamellar minerals such as micas. A gneiss has visible bands of differing lightness, with a common example being the granite gneiss. Other varieties of foliated rock include slates, phyllites, and mylonite. Familiar examples of non-foliated metamorphic rocks include marble, soapstone, and serpentine. This branch contains quartzite—a metamorphosed form of sandstone—and hornfels. Extraterrestrial rocks Though most understanding of rocks comes from those of Earth, rocks make up many of the universe's celestial bodies. In the Solar System, Mars, Venus, and Mercury are composed of rock, as are many natural satellites, asteroids, and meteoroids. Meteorites that fall to Earth provide evidence of extraterrestrial rocks and their composition. They are typically denser than rocks on Earth. Asteroid rocks can also be brought to Earth through space missions, such as the Hayabusa mission. Lunar rocks and Martian rocks have also been studied. Human use The use of rock has had a huge impact on the cultural and technological development of the human race. Rock has been used by humans and other hominids for at least 2.5 million years. Lithic technology marks some of the oldest and most continuously used technologies. The mining of rock for its metal content has been one of the most important factors of human advancement, and has progressed at different rates in different places, in part because of the kind of metals available from the rock of a region. Anthropic rock Anthropic rock is synthetic or restructured rock formed by human activity.
Concrete is recognized as a human-made rock constituted of natural and processed rock, and it has been in development since Ancient Rome. Rock can also be modified with other substances to develop new forms, such as epoxy granite. Artificial stone has also been developed, such as Coade stone. Geologist James R. Underwood has proposed anthropic rock as a fourth class of rocks alongside igneous, sedimentary, and metamorphic. Building Rock varies greatly in strength, from quartzites having a tensile strength in excess of 300 MPa to sedimentary rock so soft it can be crumbled with bare fingers (that is, it is friable). (For comparison, structural steel has a tensile strength of around 350 MPa.) Relatively soft, easily worked sedimentary rock was quarried for construction as early as 4000 BCE in Egypt, and stone was used to build fortifications in Inner Mongolia as early as 2800 BCE. The soft rock tuff is common in Italy, and the Romans used it for many buildings and bridges. Limestone was widely used in construction in the Middle Ages in Europe and remained popular into the 20th century. Mining Mining is the extraction of valuable minerals or other geological materials from the earth, from an ore body, vein or seam. The term also includes the removal of soil. Materials recovered by mining include base metals, precious metals, iron, uranium, coal, diamonds, limestone, oil shale, rock salt, potash, construction aggregate and dimension stone. Mining is required to obtain any material that cannot be grown through agricultural processes or created artificially in a laboratory or factory. Mining in a wider sense comprises extraction of any resource (e.g. petroleum, natural gas, salt or even water) from the earth. Mining of rock and metals has been done since prehistoric times. Modern mining processes involve prospecting for mineral deposits, analysis of the profit potential of a proposed mine, extraction of the desired materials, and finally reclamation of the land to prepare it for other uses once mining ceases. Mining processes may create negative impacts on the environment both during the mining operations and for years after mining has ceased. These potential impacts have led most of the world's nations to adopt regulations to manage the negative effects of mining operations. Tools Stone tools have been used for millions of years by humans and earlier hominids. The Stone Age was a period of widespread stone tool usage. Early Stone Age tools were simple implements, such as hammerstones and sharp flakes. Middle Stone Age tools featured sharpened points to be used as projectile points, awls, or scrapers. Late Stone Age tools were developed with craftsmanship and distinct cultural identities. Stone tools were largely superseded by copper and bronze tools following the development of metallurgy.
Physical sciences
Earth science
null
199665
https://en.wikipedia.org/wiki/Helicobacter%20pylori
Helicobacter pylori
Helicobacter pylori, previously known as Campylobacter pylori, is a gram-negative, flagellated, helical bacterium. Mutants can have a rod or curved rod shape that exhibits less virulence. Its helical body (from which the genus name Helicobacter derives) is thought to have evolved to penetrate the mucous lining of the stomach, helped by its flagella, and thereby establish infection. The bacterium was first identified as the causal agent of gastric ulcers in 1983 by Australian physician-scientists Barry Marshall and Robin Warren. In 2005, they were awarded the Nobel Prize in Physiology or Medicine for their discovery. Infection of the stomach with H. pylori is not itself the cause of illness: over half of the global population is infected, but most individuals are asymptomatic. Persistent colonization with more virulent strains can induce a number of gastric and non-gastric disorders. Gastric disorders due to infection begin with gastritis, or inflammation of the stomach lining. When infection is persistent, the prolonged inflammation will become chronic gastritis. Initially, this will be non-atrophic gastritis, but the damage caused to the stomach lining can bring about the development of atrophic gastritis and ulcers within the stomach itself or the duodenum (the nearest part of the intestine). At this stage, the risk of developing gastric cancer is high. However, the development of a duodenal ulcer confers a comparatively lower risk of cancer. Helicobacter pylori is a class 1 carcinogenic bacterium, and potential cancers include gastric MALT lymphoma and gastric cancer. Infection with H. pylori is responsible for an estimated 89% of all gastric cancers and is linked to the development of 5.5% of all cancer cases worldwide. H. pylori is the only bacterium known to cause cancer. Extragastric complications that have been linked to H. pylori include anemia due either to iron deficiency or vitamin B12 deficiency, diabetes mellitus, cardiovascular illness, and certain neurological disorders. An inverse association has also been claimed, with H. pylori appearing to have a protective effect against asthma, esophageal cancer, gastroesophageal reflux disease, inflammatory bowel disease (including Crohn's disease), and others. Some studies suggest that H. pylori plays an important role in the natural stomach ecology by influencing the type of bacteria that colonize the gastrointestinal tract. Other studies suggest that non-pathogenic strains of H. pylori may beneficially normalize stomach acid secretion, and regulate appetite. In 2023, it was estimated that about two-thirds of the world's population was infected with H. pylori, the infection being more common in developing countries. The prevalence has declined in many countries due to eradication treatments with antibiotics and proton-pump inhibitors, and with increased standards of living. Microbiology Helicobacter pylori is a species of gram-negative bacteria in the genus Helicobacter. About half the world's population is infected with H. pylori, but only a few strains are pathogenic. H. pylori has a predominantly helical shape, also often described as a spiral or S shape. Its helical shape is better suited for progressing through the viscous mucous lining of the stomach, and is maintained by a number of enzymes in the cell wall's peptidoglycan. The bacteria reach the less acidic mucosa by use of their flagella. Three strains studied showed a variation in length from 2.8–3.3 μm but a fairly constant diameter of 0.55–0.58 μm.
H. pylori can convert from a helical to an inactive coccoid form that can evade the immune system, a state known as viable but nonculturable (VBNC). Helicobacter pylori is microaerophilic – that is, it requires oxygen, but at lower concentration than in the atmosphere. It contains a hydrogenase that can produce energy by oxidizing molecular hydrogen (H2) made by intestinal bacteria. H. pylori can be demonstrated in tissue by Gram stain, Giemsa stain, H&E stain, Warthin-Starry silver stain, acridine orange stain, and phase-contrast microscopy. It is capable of forming biofilms. Biofilms help to hinder the action of antibiotics and can contribute to treatment failure. To successfully colonize its host, H. pylori uses many different virulence factors including oxidase, catalase, and urease. Urease is the most abundant protein, its expression representing about 10% of the total protein weight. H. pylori possesses five major outer membrane protein families. The largest family includes known and putative adhesins. The other four families are porins, iron transporters, flagellum-associated proteins, and proteins of unknown function. Like other typical gram-negative bacteria, the outer membrane of H. pylori consists of phospholipids and lipopolysaccharide (LPS). The O-antigen of LPS may be fucosylated and mimic Lewis blood group antigens found on the gastric epithelium. Genome Helicobacter pylori consists of a large diversity of strains, and hundreds of genomes have been completely sequenced. The genome of the strain 26695 consists of about 1.7 million base pairs, with some 1,576 genes. The pan-genome, that is, the combined gene set of 30 sequenced strains, encodes 2,239 protein families (orthologous groups, OGs). Among them, 1,248 OGs are conserved in all 30 strains and represent the universal core. The remaining 991 OGs correspond to the accessory genome, in which 277 OGs are unique to one strain. There are eleven restriction modification systems in the genome of H. pylori, an unusually high number, providing a defence against bacteriophages. Transcriptome A transcriptome analysis using RNA-Seq gave the complete transcriptome of H. pylori, published in 2010. This analysis of its transcription confirmed the known acid induction of major virulence loci, including the urease (ure) operon and the Cag pathogenicity island (PAI). A total of 1,907 transcription start sites, 337 primary operons, 126 additional suboperons, and 66 monocistrons were identified. Until 2010, only about 55 transcription start sites (TSSs) were known in this species. 27% of the primary TSSs are also antisense TSSs, indicating that – similar to E. coli – antisense transcription occurs across the entire H. pylori genome. At least one antisense TSS is associated with about 46% of all open reading frames, including many housekeeping genes. About 50% of the 5′ UTRs (leader sequences) are 20–40 nucleotides (nt) in length and support the AAGGag motif located about 6 nt (median distance) upstream of start codons as the consensus Shine–Dalgarno sequence in H. pylori. Proteome The proteome of H. pylori has been systematically analyzed, and more than 70% of its proteins have been detected by mass spectrometry and other methods. About 50% of the proteome has been quantified, giving the number of protein copies in a typical cell. Studies of the interactome have identified more than 3000 protein-protein interactions.
This has provided information on how proteins interact with each other, whether in stable protein complexes or in more dynamic, transient interactions, which can help to identify the functions of a protein. This in turn helps researchers to work out the function of uncharacterized proteins: for example, an uncharacterized protein that interacts with several proteins of the ribosome is likely also involved in ribosome function. About a third of all ~1,500 proteins in H. pylori remain uncharacterized and their function is largely unknown. Infection An infection with Helicobacter pylori can either have no symptoms even when lasting a lifetime, or can harm the stomach and duodenal linings through inflammatory responses induced by several mechanisms associated with a number of virulence factors. Colonization can initially cause H. pylori-induced gastritis, an inflammation of the stomach lining that became a listed disease in ICD-11. This will progress to chronic gastritis if left untreated. Chronic gastritis may lead to atrophy of the stomach lining, and the development of peptic ulcers (gastric or duodenal). These changes may be seen as stages in the development of gastric cancer, known as Correa's cascade. Extragastric complications that have been linked to H. pylori include anemia due either to iron deficiency or vitamin B12 deficiency, diabetes mellitus, cardiovascular disease, and certain neurological disorders. Peptic ulcers are a consequence of inflammation that allows stomach acid and the digestive enzyme pepsin to overwhelm the protective mechanisms of the mucous membranes. The location of colonization of H. pylori, which affects the location of the ulcer, depends on the acidity of the stomach. In people producing large amounts of acid, H. pylori colonizes near the pyloric antrum (exit to the duodenum) to avoid the acid-secreting parietal cells at the fundus (near the entrance to the stomach). G cells express relatively high levels of PD-L1, which protects these cells from H. pylori-induced immune destruction. In people producing normal or reduced amounts of acid, H. pylori can also colonize the rest of the stomach. The inflammatory response caused by bacteria colonizing near the pyloric antrum induces G cells in the antrum to secrete the hormone gastrin, which travels through the bloodstream to parietal cells in the fundus. Gastrin stimulates the parietal cells to secrete more acid into the stomach lumen, and over time increases the number of parietal cells as well. The increased acid load damages the duodenum, which may eventually lead to the formation of ulcers. Helicobacter pylori is a class I carcinogen, and potential cancers include gastric mucosa-associated lymphoid tissue (MALT) lymphomas and gastric cancer. Less commonly, diffuse large B-cell lymphoma of the stomach is a risk. Infection with H. pylori is responsible for around 89 per cent of all gastric cancers, and is linked to the development of 5.5 per cent of all cases of cancer worldwide. Although the data vary between different countries, overall about 1% to 3% of people infected with Helicobacter pylori develop gastric cancer in their lifetime, compared to 0.13% of individuals who have had no H. pylori infection. H. pylori-induced gastric cancer is the third highest cause of worldwide cancer mortality as of 2018. Because of the usual lack of symptoms, when gastric cancer is finally diagnosed it is often fairly advanced.
More than half of gastric cancer patients have lymph node metastasis when they are initially diagnosed. The chronic inflammation that is a feature of cancer development is characterized by infiltration of neutrophils and macrophages into the gastric epithelium, which favors the accumulation of pro-inflammatory cytokines, reactive oxygen species (ROS) and reactive nitrogen species (RNS) that cause DNA damage. The oxidative DNA damage and levels of oxidative stress can be indicated by a biomarker, 8-oxo-dG. Other damage to DNA includes double-strand breaks. Small gastric and colorectal polyps are adenomas that are more commonly found in association with the mucosal damage induced by H. pylori gastritis. Larger polyps can in time become cancerous. A modest association of H. pylori has been made with the development of colorectal cancers, but as of 2020 causality had yet to be proved. Signs and symptoms Most people infected with H. pylori never experience any symptoms or complications, but they have a 10% to 20% risk of developing peptic ulcers and a 0.5% to 2% risk of stomach cancer. H. pylori-induced gastritis may present as acute gastritis with stomach ache, nausea, and ongoing dyspepsia (indigestion) that is sometimes accompanied by depression and anxiety. Where the gastritis develops into chronic gastritis, or an ulcer, the symptoms are the same and can include indigestion, stomach or abdominal pains, nausea, bloating, belching, feeling hunger in the morning, feeling full too soon, and sometimes vomiting, heartburn, bad breath, and weight loss. Complications of an ulcer can cause severe signs and symptoms such as black or tarry stool, indicative of bleeding into the stomach or duodenum; blood, either red or coffee-ground colored, in vomit; persistent sharp or severe abdominal pain; dizziness; and a fast heartbeat. Bleeding is the most common complication. In bleeding cases caused by H. pylori there is a greater need for hemostasis, often requiring gastric resection. Prolonged bleeding may cause anemia, leading to weakness and fatigue. Inflammation of the pyloric antrum, which connects the stomach to the duodenum, is more likely to lead to duodenal ulcers, while inflammation of the corpus may lead to a gastric ulcer. Stomach cancer can cause nausea, vomiting, diarrhoea, constipation, and unexplained weight loss. Gastric polyps are adenomas that are usually asymptomatic and benign, but may be the cause of dyspepsia, heartburn, bleeding from the stomach, and, rarely, gastric outlet obstruction. Larger polyps may have become cancerous. Colorectal polyps may be the cause of rectal bleeding, anemia, constipation, diarrhea, weight loss, and abdominal pain. Pathophysiology Virulence factors help a pathogen to evade the immune response of the host and to successfully colonize. The many virulence factors of H. pylori include its flagella, the production of urease, adhesins, the serine protease HtrA (high temperature requirement A), and the major exotoxins CagA and VacA. The presence of VacA and CagA is associated with more advanced outcomes. CagA is an oncoprotein associated with the development of gastric cancer. H. pylori infection is associated with epigenetically reduced efficiency of the DNA repair machinery, which favors the accumulation of mutations and genomic instability as well as gastric carcinogenesis. It has been shown that expression of two DNA repair proteins, ERCC1 and PMS2, is severely reduced once H. pylori infection has progressed to cause dyspepsia.
Dyspepsia occurs in about 20% of infected individuals. Epigenetically reduced protein expression of the DNA repair proteins MLH1, MGMT and MRE11 is also evident. Reduced DNA repair in the presence of increased DNA damage increases carcinogenic mutations and is likely a significant cause of gastric carcinogenesis. These epigenetic alterations are due to H. pylori-induced methylation of CpG sites in promoters of genes and H. pylori-induced altered expression of multiple microRNAs. Two related mechanisms by which H. pylori could promote cancer have been proposed. One mechanism involves the enhanced production of free radicals near H. pylori and an increased rate of host cell mutation. The other proposed mechanism has been called a "perigenetic pathway", and involves enhancement of the transformed host cell phenotype by means of alterations in cell proteins, such as adhesion proteins. H. pylori has been proposed to induce inflammation and locally high levels of tumor necrosis factor (TNF, also known as tumor necrosis factor alpha, TNFα) and/or interleukin 6 (IL-6). According to the proposed perigenetic mechanism, inflammation-associated signaling molecules, such as TNF, can alter gastric epithelial cell adhesion and lead to the dispersion and migration of mutated epithelial cells without the need for additional mutations in tumor suppressor genes, such as genes that code for cell adhesion proteins. Flagellum The first virulence factor of Helicobacter pylori that enables colonization is its flagellum. H. pylori has from two to seven flagella at the same polar location, which gives it high motility. The flagellar filaments are about 3 μm long and composed of two copolymerized flagellins, FlaA and FlaB, coded by the genes flaA and flaB. The minor flagellin FlaB is located in the proximal region, and the major flagellin FlaA makes up the rest of the flagellum. The flagella are sheathed in a continuation of the bacterial outer membrane, which gives protection against the gastric acidity. The sheath is also the location of the origin of the outer membrane vesicles that give protection to the bacterium from bacteriophages. Flagellar motility is driven by the proton motive force, with urease-driven hydrolysis allowing chemotactic movement towards the less acidic pH gradient in the mucosa. The mucus layer is about 300 μm thick, and the helical shape of H. pylori, aided by its flagella, helps it to burrow through this layer, where it colonises a narrow region of about 25 μm closest to the epithelial cell layer, where the pH is near to neutral. The bacteria further colonise the gastric pits and live in the gastric glands. Occasionally the bacteria are found inside the epithelial cells themselves. The use of quorum sensing by the bacteria enables the formation of a biofilm, which furthers persistent colonisation. In the layers of the biofilm, H. pylori can escape from the actions of antibiotics, and also be protected from host immune responses. In the biofilm, H. pylori can change the flagella to become adhesive structures. Urease In addition to using chemotaxis to avoid areas of high acidity (low pH), H. pylori also produces large amounts of urease, an enzyme which breaks down the urea present in the stomach to produce ammonia and bicarbonate, which are released into the bacterial cytosol and the surrounding environment, creating a neutral area.
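The urease chemistry just described can be summarized by the standard net reaction (a textbook summary, not an equation from the article): the hydrolysis of urea yields ammonia and carbonic acid (bicarbonate at stomach pH), \( \mathrm{CO(NH_2)_2} + 2\,\mathrm{H_2O} \xrightarrow{\text{urease}} 2\,\mathrm{NH_3} + \mathrm{H_2CO_3} , \) after which the ammonia buffers the local acidity via \( \mathrm{NH_3} + \mathrm{H_2O} \rightleftharpoons \mathrm{NH_4^+} + \mathrm{OH^-} . \)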
The decreased acidity (higher pH) changes the mucus layer from a gel-like state to a more viscous state that makes it easier for the flagella to move the bacteria through the mucosa and attach to the gastric epithelial cells. Helicobacter pylori is one of the few known types of bacteria that have a urea cycle, one which is uniquely configured in this organism. About 10% of the cell's mass is nitrogen, a balance that needs to be maintained; any excess is stored as urea and excreted via the urea cycle. A final-stage enzyme in the urea cycle is arginase, an enzyme that is crucial to the pathogenesis of H. pylori. Arginase produces ornithine and urea, which the enzyme urease breaks down into carbonic acid and ammonia. Urease is the bacterium's most abundant protein, accounting for 10–15% of the bacterium's total protein content. Its expression is not only required for establishing initial colonization through the breakdown of urea to carbonic acid and ammonia, but is also essential for maintaining chronic infection. Ammonia reduces stomach acidity, allowing the bacteria to become locally established. Arginase promotes the persistence of infection by consuming arginine; arginine is used by macrophages to produce nitric oxide, which has a strong antimicrobial effect. The ammonia produced to regulate pH is toxic to epithelial cells. Adhesins H. pylori must attach to the epithelial cells to prevent being swept away by the constant movement and renewal of the mucus. To achieve this adhesion, bacterial outer membrane proteins known as adhesins are produced as virulence factors. BabA (blood group antigen binding adhesin) is most important during initial colonization, and SabA (sialic acid binding adhesin) is important in persistence. BabA attaches to glycans and mucins in the epithelium. BabA (coded for by the babA2 gene) also binds to the Lewis b antigen displayed on the surface of the epithelial cells. Adherence via BabA is acid sensitive and can be fully reversed by a decreased pH. It has been proposed that BabA's acid responsiveness enables adherence while also allowing an effective escape from an unfavorable environment such as a low pH that is harmful to the organism. SabA (coded for by the sabA gene) binds to increased levels of sialyl-Lewis X antigen expressed on the gastric mucosa. Cholesterol glucoside The outer membrane contains cholesterol glucoside, a sterol glucoside that H. pylori produces by glycosylating cholesterol taken from the gastric gland cells and inserts into its outer membrane. This cholesterol glucoside is important for membrane stability, morphology, and immune evasion, and is rarely found in other bacteria. The enzyme responsible is cholesteryl α-glucosyltransferase (αCgT or Cgt), encoded by the HP0421 gene. A major effect of the depletion of host cholesterol by Cgt is to disrupt cholesterol-rich lipid rafts in the epithelial cells. Lipid rafts are involved in cell signalling, and their disruption causes a reduction in the immune inflammatory response, particularly by reducing interferon gamma. Cgt is also secreted via the type IV secretion system, in a selective way that creates gastric niches where the pathogen can thrive. Its absence has been shown to make the bacterium vulnerable to environmental stress, and also to disrupt CagA-mediated interactions. Catalase Colonization induces an intense inflammatory response as a first-line immune system defence. Phagocytic leukocytes and monocytes infiltrate the site of infection, and antibodies are produced. H.
pylori is able to adhere to the surface of the phagocytes and impede their action. The phagocytes respond by generating and releasing oxygen metabolites into the surrounding space. H. pylori can survive this response through the activity of catalase at its point of attachment to the phagocytic cell surface. Catalase decomposes hydrogen peroxide into water and oxygen, protecting the bacteria from toxicity. Catalase has been shown to almost completely inhibit the phagocytic oxidative response. It is coded for by the gene katA. Tipα TNF-inducing protein alpha (Tipα) is a carcinogenic protein, encoded by the HP0596 gene and unique to H. pylori, that induces the expression of tumor necrosis factor. Tipα enters gastric cancer cells where it binds to cell surface nucleolin, and induces the expression of vimentin. Vimentin is important in the epithelial–mesenchymal transition associated with the progression of tumors. CagA CagA (cytotoxin-associated antigen A) is a major virulence factor for H. pylori, an oncoprotein that is encoded by the cagA gene. Bacterial strains with the cagA gene are associated with the ability to cause ulcers, MALT lymphomas, and gastric cancer. The cagA gene codes for a relatively long (1186-amino acid) protein. The cag pathogenicity island (PAI) has about 30 genes, part of which code for a complex type IV secretion system (T4SS or TFSS). The low GC-content of the cag PAI relative to the rest of the Helicobacter genome suggests the island was acquired by horizontal transfer from another bacterial species. The serine protease HtrA also plays a major role in the pathogenesis of H. pylori. The HtrA protein enables the bacterium to transmigrate across the host cells' epithelium, and is also needed for the translocation of CagA. The virulence of H. pylori may be increased by genes of the cag pathogenicity island; about 50–70% of H. pylori strains in Western countries carry it. Western people infected with strains carrying the cag PAI have a stronger inflammatory response in the stomach and are at a greater risk of developing peptic ulcers or stomach cancer than those infected with strains lacking the island. Following attachment of H. pylori to stomach epithelial cells, the type IV secretion system expressed by the cag PAI "injects" an inflammation-inducing agent, peptidoglycan, from the bacterium's own cell wall into the epithelial cells. The injected peptidoglycan is recognized by the cytoplasmic pattern recognition receptor (immune sensor) Nod1, which then stimulates expression of cytokines that promote inflammation. The type IV secretion apparatus also injects the cag PAI-encoded protein CagA into the stomach's epithelial cells, where it disrupts the cytoskeleton, adherence to adjacent cells, intracellular signaling, cell polarity, and other cellular activities. Once inside the cell, the CagA protein is phosphorylated on tyrosine residues by a host cell membrane-associated tyrosine kinase (TK). CagA then allosterically activates the protein tyrosine phosphatase/protooncogene Shp2. These proteins are directly toxic to cells lining the stomach and signal strongly to the immune system that an invasion is under way. As a result of the bacterial presence, neutrophils and macrophages set up residence in the tissue to fight the bacterial assault. Pathogenic strains of H. pylori have been shown to activate the epidermal growth factor receptor (EGFR), a membrane protein with a TK domain. Activation of the EGFR by H.
pylori is associated with altered signal transduction and gene expression in host epithelial cells that may contribute to pathogenesis. A C-terminal region of the CagA protein (amino acids 873–1002) has also been suggested to be able to regulate host cell gene transcription independently of protein tyrosine phosphorylation. A great deal of diversity exists between strains of H. pylori, and the strain that infects a person can predict the outcome. VacA VacA (vacuolating cytotoxin autotransporter) is another major virulence factor, encoded by the vacA gene. All strains of H. pylori carry this gene, but there is much diversity, and only about 50% produce the encoded cytotoxin. The four main subtypes of vacA are s1/m1, s1/m2, s2/m1, and s2/m2; s1/m1 and s1/m2 are known to cause an increased risk of gastric cancer. VacA is an oligomeric protein complex that causes progressive vacuolation in the epithelial cells, leading to their death. The vacuolation has also been associated with promoting intracellular reservoirs of H. pylori by disrupting the cell membrane calcium channel TRPML1. VacA has been shown to increase the levels of COX2, an up-regulation that increases prostaglandin production, indicating a strong host cell inflammatory response. Outer membrane proteins and vesicles About 4% of the genome encodes outer membrane proteins, which can be grouped into five families. The largest family includes the bacterial adhesins. The other four families are porins, iron transporters, flagellum-associated proteins, and proteins of unknown function. Like other typical gram-negative bacteria, the outer membrane of H. pylori consists of phospholipids and lipopolysaccharide (LPS). The O-antigen of LPS may be fucosylated and mimic Lewis blood group antigens found on the gastric epithelium. H. pylori forms blebs from the outer membrane that pinch off as outer membrane vesicles, providing an alternative delivery system for virulence factors including CagA. The Helicobacter cysteine-rich protein HcpA is known to trigger an immune response, causing inflammation. The H. pylori virulence factor DupA is associated with the development of duodenal ulcers. Mechanisms of tolerance The need for survival has led to the development of different mechanisms of tolerance that enable the persistence of H. pylori. These mechanisms can also help to overcome the effects of antibiotics. H. pylori has to survive not only the harsh gastric acidity but also the sweeping of mucus by continuous peristalsis, and phagocytic attack accompanied by the release of reactive oxygen species. All organisms encode genetic programs for responding to stressful conditions, including those that cause DNA damage. Stress conditions activate bacterial response mechanisms that are regulated by proteins expressed by regulator genes. The oxidative stress can induce potentially lethal mutagenic DNA adducts in the genome. Surviving this DNA damage is supported by transformation-mediated recombinational repair, which contributes to successful colonization. H. pylori is naturally competent for transformation. While many organisms are competent only under certain environmental conditions, such as starvation, H. pylori is competent throughout logarithmic growth. Transformation (the transfer of DNA from one bacterial cell to another through the intervening medium) appears to be part of an adaptation for DNA repair. Homologous recombination is required for repairing double-strand breaks (DSBs).
The AddAB helicase-nuclease complex resects DSBs and loads RecA onto single-stranded DNA (ssDNA), which then mediates strand exchange, leading to homologous recombination and repair. The requirement of RecA plus AddAB for efficient gastric colonization suggests that H. pylori is either exposed to double-strand DNA damage that must be repaired or requires some other recombination-mediated event. In particular, natural transformation is increased by DNA damage in H. pylori, and a connection exists between the DNA damage response and DNA uptake in H. pylori. This natural competence contributes to the persistence of H. pylori. H. pylori has much greater rates of recombination and mutation than other bacteria. Genetically different strains can be found in the same host, and also in different regions of the stomach. An overall response to multiple stressors can result from an interaction of the mechanisms. RuvABC proteins are essential to the process of recombinational repair, since they resolve intermediates in this process termed Holliday junctions. H. pylori mutants that are defective in RuvC have increased sensitivity to DNA-damaging agents and to oxidative stress, exhibit reduced survival within macrophages, and are unable to establish successful infection in a mouse model. Similarly, the RecN protein plays an important role in DSB repair. An H. pylori recN mutant displays an attenuated ability to colonize mouse stomachs, highlighting the importance of recombinational DNA repair in the survival of H. pylori within its host. Biofilm An effective sustained colonization response is the formation of a biofilm. Having first adhered to cellular surfaces, the bacteria produce and secrete extracellular polymeric substance (EPS). EPS consists largely of biopolymers and provides the framework for the biofilm structure. H. pylori helps the biofilm formation by altering its flagella into adhesive structures that provide adhesion between the cells. Layers of aggregated bacteria accumulate as microcolonies to thicken the biofilm. The matrix of EPS prevents the entry of antibiotics and immune cells, and provides protection from heat and from competition from other microorganisms. Channels form between the cells in the biofilm matrix, allowing the transport of nutrients, enzymes, metabolites, and waste. Cells in the deep layers may be nutritionally deprived and enter a dormant-like coccoid state. By changing to the coccoid form, the bacterium limits the exposure of its LPS (targeted by antibiotics) and so evades detection by the immune system. It has also been shown that the cag pathogenicity island remains intact in the coccoid form. Some of these antibiotic-resistant cells may remain in the host as persister cells. Following eradication, the persister cells can cause a recurrence of the infection. Bacteria can detach from the biofilm to relocate and colonize elsewhere in the stomach, forming other biofilms. Diagnosis Colonization with H. pylori does not always lead to disease, but is associated with a number of stomach diseases. Testing is recommended in cases of peptic ulcer disease or low-grade gastric MALT lymphoma; after endoscopic resection of early gastric cancer; for first-degree relatives of those with gastric cancer; and in certain cases of indigestion. Other indications that prompt testing for H. pylori include long-term use of aspirin or other non-steroidal anti-inflammatory drugs, unexplained iron deficiency anemia, and immune thrombocytopenic purpura.
Several methods of testing exist, both invasive and non-invasive. Non-invasive tests for H. pylori infection include serological tests for antibodies, stool tests, and urea breath tests. Carbon urea breath tests use either carbon-13 or radioactive carbon-14 to produce labelled carbon dioxide that can be detected in the breath. Carbon urea breath tests have a high sensitivity and specificity for the diagnosis of H. pylori. Proton-pump inhibitors and antibiotics should be discontinued for at least 30 days prior to testing for H. pylori infection or eradication, as both agents inhibit H. pylori growth and may lead to false-negative results. Testing to confirm eradication is recommended 30 days or more after completion of treatment for H. pylori infection. Breath testing and stool antigen testing are both reasonable tests to confirm eradication. H. pylori serologic testing, including for IgG antibodies, is not recommended as a test of eradication, as antibody levels may remain elevated for years after successful treatment of the infection. An endoscopic biopsy is an invasive means to test for H. pylori infection. Low-level infections can be missed by biopsy, so multiple samples are recommended. The most accurate method for detecting H. pylori infection is a histological examination from two sites after endoscopic biopsy, combined with either a rapid urease test or microbial culture. Generally, repeating endoscopy is not recommended to confirm H. pylori eradication, unless there are specific indications to repeat the procedure. Transmission Helicobacter pylori is contagious, and is transmitted through direct contact with either saliva (oral–oral route) or feces (fecal–oral route), but mainly through the oral–oral route. Consistent with these transmission routes, the bacteria have been isolated from feces, saliva, and dental plaque. H. pylori may also be transmitted by consuming contaminated food or water. Transmission occurs mainly within families in developed nations, but also from the broader community in developing countries. Prevention To prevent the development of H. pylori-related diseases when infection is suspected, antibiotic-based therapy regimens are recommended to eradicate the bacteria. When successful, the disease progression is halted. First-line eradication therapy is recommended if low-grade gastric MALT lymphoma is diagnosed, regardless of evidence of H. pylori. However, once a severe state of atrophic gastritis with gastric lesions has been reached, antibiotic-based treatment regimens are not advised, since such lesions are often not reversible and will progress to gastric cancer. If the cancer can be successfully treated, it is advised that an eradication program be followed to prevent a recurrence of the infection, and to reduce the risk of a recurrent cancer, known as a metachronous cancer. Due to H. pylori's role as a major cause of certain diseases (particularly cancers) and its consistently increasing resistance to antibiotic therapy, there is an obvious need for alternative treatments. A vaccine targeted against the development of gastric cancer, including MALT lymphoma, would also prevent the development of gastric ulcers. A vaccine that would be prophylactic for use in children, and one that would be therapeutic for use later in life, are the main goals. Challenges to this are the extreme genomic diversity shown by H. pylori and the complexity of host immune responses. Previous studies in the Netherlands and in the US have shown that such a prophylactic vaccine programme would ultimately be cost-effective.
However, as of late 2019 there were no advanced vaccine candidates and only one vaccine in a Phase I clinical trial. Furthermore, development of a vaccine against H. pylori has not been a priority of major pharmaceutical companies. A key target for potential therapy is the proton-gated urea channel, since the secretion of urease enables the survival of the bacterium. Treatment The 2022 Maastricht Consensus Report recognised H. pylori gastritis as a distinct disease, Helicobacter pylori-induced gastritis, which has been included in ICD-11. Initially the infection tends to be superficial, localised to the upper mucosal layers of the stomach. The intensity of the chronic inflammation is related to the cytotoxicity of the H. pylori strain. A greater cytotoxicity will result in the change from a non-atrophic gastritis to an atrophic gastritis, with the loss of mucous glands. This condition is a precursor to the development of peptic ulcers and gastric adenocarcinoma. Eradication of H. pylori is recommended to treat the infection, including when it has advanced to peptic ulcer disease. The recommended first-line treatment is a quadruple therapy consisting of a proton-pump inhibitor, amoxicillin, clarithromycin, and metronidazole. Prior to treatment, testing is recommended to identify any pre-existing antibiotic resistances. A high rate of resistance to metronidazole has been observed. In areas of known clarithromycin resistance, the first-line therapy is changed to a bismuth-based regimen including tetracycline and metronidazole for 14 days. If one of these courses of treatment fails, it is suggested to use the alternative. Treatment failure may typically be attributed to antibiotic resistance or to inadequate acid suppression by proton-pump inhibitors. Following clinical trials, the use of the potassium-competitive acid blocker vonoprazan, which has a greater acid-suppressive action, was approved in the US in 2022. Its recommended use is in combination with amoxicillin, with or without clarithromycin. It has been shown to have a faster action and can be taken with or without food. Successful eradication regimens have revolutionised the treatment of peptic ulcers. Eradication of H. pylori is also associated with a decreased risk of subsequent duodenal or gastric ulcer recurrence. Plant extracts and probiotic foods are being increasingly used as add-ons to usual treatments. Probiotic yogurts containing the lactic acid bacteria Bifidobacterium and Lactobacillus exert a suppressive effect on H. pylori infection, and their use has been shown to improve the rates of eradication. Some commensal intestinal bacteria, as part of the gut microbiota, produce butyrate that acts as a prebiotic and enhances the mucosal immune barrier. Their use as probiotics may help balance the gut dysbiosis that accompanies antibiotic use. Some probiotic strains have also been shown to have bactericidal and bacteriostatic activity against H. pylori. Antibiotics have a negative impact on the gastrointestinal microbiota and can cause nausea, diarrhea, and sickness, which probiotics can alleviate. Antibiotic resistance Increasing antibiotic resistance is the main cause of initial treatment failure. Factors linked to resistance include mutations, efflux pumps, and the formation of biofilms. One of the main antibiotics used in eradication therapies is clarithromycin, but clarithromycin-resistant strains have become well established, and the use of alternative antibiotics needs to be considered.
Fortunately, non-invasive stool tests for clarithromycin resistance have become available that allow the selection of patients who are likely to respond to the therapy. Multidrug resistance has also increased. Additional rounds of antibiotics or other therapies may be used. Next-generation sequencing is being looked to for identifying specific antibiotic resistances at the outset, which will help in targeting more effective treatment. In 2018, the WHO listed H. pylori as a high-priority pathogen for the research and discovery of new drugs and treatments. The increasing antibiotic resistance encountered has spurred interest in developing alternative therapies using a number of plant compounds. Plant compounds have fewer side effects than synthetic drugs. Most plant extracts contain a complex mix of components that may not act on their own as antimicrobials but can work together with antibiotics to enhance treatment and help overcome resistance. Plant compounds have a different mechanism of action, which has proved useful in fighting antimicrobial resistance. For example, various compounds can act by inhibiting enzymes such as urease, and by weakening adhesion to the mucous membrane. Sulfur-containing plant compounds with high concentrations of polysulfides, as well as coumarins and terpenes, have all been shown to be effective against H. pylori. H. pylori is found in saliva and dental plaque. Its transmission is known to include the oral–oral route, suggesting that the dental plaque biofilm may act as a reservoir for the bacteria. Periodontal therapy, or scaling and root planing, has therefore been suggested as an additional treatment to enhance eradication rates, but more research is needed. Cancers Stomach cancer Helicobacter pylori is a risk factor for gastric adenocarcinomas. Treatment is highly aggressive, with even localized disease being treated sequentially with chemotherapy and radiotherapy before surgical resection. Since this cancer, once developed, is independent of H. pylori infection, eradication regimens are not used. Gastric MALT lymphoma and DLBCL MALT lymphomas are malignancies of mucosa-associated lymphoid tissue. Early gastric MALTomas due to H. pylori may be successfully treated (in 70–95% of cases) with one or more eradication programs. Some 50–80% of patients who experience eradication of the pathogen develop a remission and long-term clinical control of their lymphoma within 3–28 months. Radiation therapy to the stomach and surrounding (i.e. peri-gastric) lymph nodes has also been used to successfully treat these localized cases. Patients with non-localized (i.e. systemic Ann Arbor stage III and IV) disease who are free of symptoms have been treated with watchful waiting or, if symptomatic, with the immunotherapy drug rituximab (given for 4 weeks) combined with the chemotherapy drug chlorambucil for 6–12 months; these patients attain a 58% progression-free survival rate at 5 years. Frail stage III/IV patients have been successfully treated with rituximab or the chemotherapy drug cyclophosphamide alone. Antibiotic–proton pump inhibitor eradication therapy and localized radiation therapy have been used successfully to treat H. pylori-positive MALT lymphomas of the rectum; however, radiation therapy has given slightly better results and has therefore been suggested as the disease's preferred treatment. However, the generally recognized treatment of choice for patients with systemic involvement uses various chemotherapy drugs, often combined with rituximab.
A MALT lymphoma may rarely transform into a more aggressive diffuse large B-cell lymphoma (DLBCL). Where this is associated with H. pylori infection, the DLBCL is less aggressive and more amenable to treatment. When limited to the stomach, such lymphomas have sometimes been successfully treated with H. pylori eradication programs. If unresponsive, or showing deterioration, more conventional chemotherapy (CHOP), immunotherapy, or local radiotherapy can be considered, and any of these, or a combination, have successfully treated these more advanced types. Prognosis Helicobacter pylori colonizes the stomach for decades in most people, and induces chronic gastritis, a long-lasting inflammation of the stomach. In most cases symptoms are never experienced, but about 10–20% of those infected will ultimately develop gastric or duodenal ulcers, and have a possible 1–2% lifetime risk of gastric cancer. H. pylori thrives on a high-salt diet, which is seen as an environmental risk factor for its association with gastric cancer. A diet high in salt enhances colonization, increases inflammation, increases the expression of H. pylori virulence factors, and intensifies chronic gastritis. Paradoxically, extracts of kimchi, a salted probiotic food, have been found to have a preventive effect on H. pylori–associated gastric carcinogenesis. In the absence of treatment, H. pylori infection usually persists for life. Infection may disappear in the elderly as the stomach's mucosa becomes increasingly atrophic and inhospitable to colonization. Some studies in young children up to two years of age have shown that infection can be transient in this age group. It is possible for H. pylori to re-establish in a person after eradication. This recurrence can be caused by the original strain (recrudescence), or by a different strain (reinfection). A 2017 meta-analysis showed that the global per-person annual rates of recurrence, reinfection, and recrudescence are 4.3%, 3.1%, and 2.2%, respectively. It is unclear what the main risk factors are. Mounting evidence suggests H. pylori has an important role in protection from some diseases. The incidence of acid reflux disease, Barrett's esophagus, and esophageal cancer has been rising dramatically at the same time as H. pylori's presence decreases. In 1996, Martin J. Blaser advanced the hypothesis that H. pylori has a beneficial effect by regulating the acidity of the stomach contents. The hypothesis is not universally accepted, as several randomized controlled trials failed to demonstrate worsening of acid reflux disease symptoms following eradication of H. pylori. Nevertheless, Blaser has reasserted his view that H. pylori is a member of the normal gastric microbiota. He postulates that the changes in gastric physiology caused by the loss of H. pylori account for the recent increase in incidence of several diseases, including type 2 diabetes, obesity, and asthma. His group has recently shown that H. pylori colonization is associated with a lower incidence of childhood asthma. Epidemiology In 2023, it was estimated that about two-thirds of the world's population were infected with H. pylori, the infection being more common in developing countries. H. pylori infection is more prevalent in South America, Sub-Saharan Africa, and the Middle East. The global prevalence declined markedly in the decade following 2010, with a particular reduction in Africa. The age at which someone acquires the bacterium seems to influence the pathologic outcome of the infection.
People infected at an early age are likely to develop more intense inflammation that may be followed by atrophic gastritis with a higher subsequent risk of gastric ulcer, gastric cancer, or both. Acquisition at an older age brings different gastric changes that are more likely to lead to a duodenal ulcer. Infections are usually acquired in early childhood in all countries. However, the infection rate of children in developing nations is higher than in industrialized nations, probably due to poor sanitary conditions, perhaps combined with lower antibiotic usage for unrelated pathologies. In developed nations, it is currently uncommon to find infected children, but the percentage of infected people increases with age. The higher prevalence among the elderly reflects higher infection rates incurred in childhood. In the United States, prevalence appears higher in African-American and Hispanic populations, most likely due to socioeconomic factors. The lower rate of infection in the West is largely attributed to higher hygiene standards and widespread use of antibiotics. Despite high rates of infection in certain areas of the world, the overall frequency of H. pylori infection is declining. However, antibiotic resistance is appearing in H. pylori; many metronidazole- and clarithromycin-resistant strains are found in most parts of the world. History Helicobacter pylori migrated out of Africa along with its human host around 60,000 years ago. Research has shown that genetic diversity in H. pylori, like that of its host, decreases with geographic distance from East Africa. Using the genetic diversity data, researchers have created simulations that indicate the bacteria seem to have spread from East Africa around 58,000 years ago. Their results indicate modern humans were already infected by H. pylori before their migrations out of Africa, and it has remained associated with human hosts since that time. H. pylori was first discovered in the stomachs of patients with gastritis and ulcers in 1982 by Barry Marshall and Robin Warren of Perth, Western Australia. At the time, the conventional thinking was that no bacterium could live in the acid environment of the human stomach. In recognition of their discovery, Marshall and Warren were awarded the 2005 Nobel Prize in Physiology or Medicine. Before the research of Marshall and Warren, German scientists found spiral-shaped bacteria in the lining of the human stomach in 1875, but they were unable to culture them, and the results were eventually forgotten. The Italian researcher Giulio Bizzozero described similarly shaped bacteria living in the acidic environment of the stomach of dogs in 1893. Professor Walery Jaworski of the Jagiellonian University in Kraków investigated sediments of gastric washings obtained by lavage from humans in 1899. Among some rod-like bacteria, he also found bacteria with a characteristic spiral shape, which he called Vibrio rugula. He was the first to suggest a possible role of this organism in the pathogenesis of gastric diseases. His work was included in the Handbook of Gastric Diseases, but it had little impact, as it was published only in Polish. Several small studies conducted in the early 20th century demonstrated the presence of curved rods in the stomachs of many people with peptic ulcers and stomach cancers. Interest in the bacteria waned, however, when an American study published in 1954 failed to observe the bacteria in 1180 stomach biopsies.
Interest in understanding the role of bacteria in stomach diseases was rekindled in the 1970s, with the visualization of bacteria in the stomachs of people with gastric ulcers. The bacteria had also been observed in 1979, by Robin Warren, who researched them further with Barry Marshall from 1981. After unsuccessful attempts at culturing the bacteria from the stomach, they finally succeeded in visualizing colonies in 1982, when they unintentionally left their Petri dishes incubating for five days over the Easter weekend. In their original paper, Warren and Marshall contended that most stomach ulcers and gastritis were caused by bacterial infection and not by stress or spicy food, as had been assumed before. Some skepticism was expressed initially, but within a few years multiple research groups had verified the association of H. pylori with gastritis and, to a lesser extent, ulcers. To demonstrate that H. pylori caused gastritis and was not merely a bystander, Marshall drank a beaker of H. pylori culture. He became ill with nausea and vomiting several days later. An endoscopy 10 days after inoculation revealed signs of gastritis and the presence of H. pylori. These results suggested H. pylori was the causative agent. Marshall and Warren went on to demonstrate that antibiotics are effective in the treatment of many cases of gastritis. In 1994, the National Institutes of Health stated that most recurrent duodenal and gastric ulcers were caused by H. pylori, and recommended that antibiotics be included in the treatment regimen. The bacterium was initially named Campylobacter pyloridis, then renamed C. pylori in 1987 (pylori being the genitive of pylorus, the circular opening leading from the stomach into the duodenum, from the Ancient Greek word πυλωρός, which means gatekeeper). When 16S ribosomal RNA gene sequencing and other research showed in 1989 that the bacterium did not belong in the genus Campylobacter, it was placed in its own genus, Helicobacter, from the Ancient Greek ἕλιξ (hélix), "spiral" or "coil". In October 1987, a group of experts met in Copenhagen to found the European Helicobacter Study Group (EHSG), an international multidisciplinary research group and the only institution focused on H. pylori. The Group is involved with the Annual International Workshop on Helicobacter and Related Bacteria (renamed the European Helicobacter and Microbiota Study Group), the Maastricht Consensus Reports (European Consensus on the management of H. pylori), and other educational and research projects, including two international long-term projects: European Registry on H. pylori Management (Hp-EuReg) – a database systematically registering the routine clinical practice of European gastroenterologists. Optimal H. pylori management in primary care (OptiCare) – a long-term educational project aiming to disseminate the evidence-based recommendations of the Maastricht IV Consensus to primary care physicians in Europe, funded by an educational grant from United European Gastroenterology. Research Results from in vitro studies suggest that fatty acids, mainly polyunsaturated fatty acids, have a bactericidal effect against H. pylori, but their in vivo effects have not been proven. The antibiotic resistance provided by biofilms has generated much research into targeting the mechanisms of quorum sensing used in the formation of biofilms. A suitable vaccine for H. pylori, either prophylactic or therapeutic, is an ongoing research aim.
The Murdoch Children's Research Institute is working on developing a vaccine that, instead of specifically targeting the bacteria, aims to inhibit the inflammation that leads to the associated diseases. Gastric organoids can be used as models for the study of H. pylori pathogenesis.
Biology and health sciences
Gram-negative bacteria
Plants
199829
https://en.wikipedia.org/wiki/Angular%20frequency
Angular frequency
In physics, angular frequency (symbol ω), also called angular speed and angular rate, is a scalar measure of the angle rate (the angle per unit time) or the temporal rate of change of the phase argument of a sinusoidal waveform or sine function (for example, in oscillations and waves). Angular frequency (or angular speed) is the magnitude of the pseudovector quantity angular velocity. Angular frequency can be obtained by multiplying rotational frequency, ν (or ordinary frequency, f), by a full turn (2π radians): ω = 2πν = 2πf. It can also be formulated as ω = dθ/dt, the instantaneous rate of change of the angular displacement, θ, with respect to time, t. Unit In SI units, angular frequency is normally presented in the unit radian per second. The unit hertz (Hz) is dimensionally equivalent, but by convention it is only used for frequency f, never for angular frequency ω. This convention is used to help avoid the confusion that arises when dealing with quantities such as frequency and angular quantities, because the units of measure (such as cycle or radian) are considered to be one and hence may be omitted when expressing quantities in terms of SI units. In digital signal processing, the frequency may be normalized by the sampling rate, yielding the normalized frequency. Examples Circular motion In a rotating or orbiting object, there is a relation between the distance from the axis, r, the tangential speed, v, and the angular frequency of the rotation. During one period, T, a body in circular motion travels a distance vT. This distance is also equal to the circumference of the path traced out by the body, 2πr. Setting these two quantities equal (vT = 2πr), and recalling the link between period and angular frequency (ω = 2π/T), we obtain: v = ωr. Circular motion on the unit circle is given by ω = 2π/T = 2πf, where: ω is the angular frequency (SI unit: radians per second), T is the period (SI unit: seconds), f is the ordinary frequency (SI unit: hertz). Oscillations of a spring An object attached to a spring can oscillate. If the spring is assumed to be ideal and massless with no damping, then the motion is simple and harmonic with an angular frequency given by ω = √(k/m), where k is the spring constant and m is the mass of the object. ω is referred to as the natural angular frequency (sometimes denoted ω₀). As the object oscillates, its acceleration can be calculated by a = −ω²x, where x is displacement from an equilibrium position. Using standard frequency f, this equation would be a = −(2πf)²x. LC circuits The resonant angular frequency in a series LC circuit equals the square root of the reciprocal of the product of the capacitance (C, with SI unit farad) and the inductance of the circuit (L, with SI unit henry): ω = 1/√(LC). Adding series resistance (for example, due to the resistance of the wire in a coil) does not change the resonant frequency of the series LC circuit. For a parallel tuned circuit, the above equation is often a useful approximation, but the resonant frequency does depend on the losses of parallel elements. Terminology Although angular frequency is often loosely referred to as frequency, it differs from frequency by a factor of 2π, which potentially leads to confusion when the distinction is not made clear.
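As a quick numerical check of the formulas above, the following is a minimal Python sketch; the input values (f, r, k, m, L, C) are arbitrary illustrative assumptions, not values taken from the text.

import math

# Angular frequency from ordinary frequency: omega = 2*pi*f
f = 50.0                          # ordinary frequency in hertz
omega = 2 * math.pi * f           # ~314.16 rad/s

# Circular motion: tangential speed v = omega * r
r = 0.5                           # distance from the axis in metres
v = omega * r                     # ~157.08 m/s

# Mass on an ideal spring: omega = sqrt(k/m)
k = 200.0                         # spring constant in newtons per metre
m = 2.0                           # mass in kilograms
omega_spring = math.sqrt(k / m)   # 10.0 rad/s

# Series LC circuit: omega = 1/sqrt(L*C)
L = 1e-3                          # inductance in henries
C = 10e-9                         # capacitance in farads
omega_lc = 1 / math.sqrt(L * C)   # ~3.16e5 rad/s

print(omega, v, omega_spring, omega_lc)

Dividing any of these angular frequencies by 2π recovers the corresponding ordinary frequency in hertz.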
Physical sciences
Classical mechanics
Physics
199865
https://en.wikipedia.org/wiki/Snipe
Snipe
A snipe is any of about 26 wading bird species in three genera in the family Scolopacidae. They are characterized by a very long, slender bill, eyes placed high on the head, and cryptic, camouflaging plumage. The Gallinago snipes have a nearly worldwide distribution, the Lymnocryptes snipe is restricted to Asia and Europe, and the Coenocorypha snipes are found only on outlying islands of New Zealand. The four species of painted snipe are not closely related to the typical snipes, and are placed in their own family, the Rostratulidae. Behaviour Snipes search for invertebrates in the mud with a "sewing-machine" action of their long bills. The sensitivity of the bill is due to filaments belonging to the fifth pair of nerves, which run almost to the tip and open immediately under the soft cuticle in a series of cells; a similar adaptation is found in sandpipers. This adaptation gives this portion of the surface of the premaxillaries a honeycomb-like appearance, and with these filaments the bird can sense its food in the mud without seeing it. Diet Snipes feed mainly on insect larvae. Other invertebrate prey include snails, crustaceans, and worms. The snipe's flexible bill allows the very tip to open while the rest of the bill remains closed, so the snipe can slurp up invertebrates without withdrawing its bill from the mud. Habitat Snipes can be found in various types of wet marshy settings including bogs, swamps, wet meadows, and along rivers, coastlines, and ponds. Snipes avoid settling in areas with dense vegetation, instead seeking marshy areas with patchy cover to hide from predators. Hunting Camouflage may enable snipes to remain undetected by hunters in marshland. The bird is also highly alert and startles easily, rarely staying long in the open. If the snipe flies, hunters have difficulty wing-shooting due to the bird's erratic flight pattern. The difficulties involved in hunting snipes gave rise to the military term sniper, which originally meant an expert hunter highly skilled in marksmanship and camouflage, but later evolved to mean a sharpshooter or a shooter who makes distant shots from concealment.
Biology and health sciences
Charadriiformes
Animals
199882
https://en.wikipedia.org/wiki/Wedge-tailed%20eagle
Wedge-tailed eagle
The wedge-tailed eagle (Aquila audax), also known as the eaglehawk, is the largest bird of prey on the continent of Australia. It is also found in southern New Guinea to the north and is distributed as far south as the state of Tasmania. Adults of the species have long, broad wings, fully feathered legs, an unmistakable wedge-shaped tail, an elongated upper mandible, a strong beak, and powerful feet. The wedge-tailed eagle is one of 12 species of large, predominantly dark-coloured booted eagles in the genus Aquila found worldwide. Genetic research has clearly indicated that the wedge-tailed eagle is fairly closely related to other, generally large members of the Aquila genus. A large brown-to-black bird of prey, it has a maximum reported wingspan of and a length of up to . The wedge-tailed eagle is one of its native continent's most generalised birds of prey. They reside in most habitats present in Australia, ranging from desert and semi-desert to plains, mountainous areas, and forest, even sometimes tropical rainforests. Preferred habitats, however, tend towards those that have a fairly varied topography including rocky areas, some open terrain, and native woodlots such as Eucalyptus stands. The wedge-tailed eagle is one of the world's most powerful avian predators. Although a true generalist, which hunts a wide range of prey, including birds, reptiles and, rarely, other taxa, the species is, by and large, a mammal predator. The introduction of the European rabbit (Oryctolagus cuniculus) has been a boon to the wedge-tailed eagle, and they hunt these and other invasive species in large volume, although the species otherwise generally lives off marsupials, including many surprisingly large macropods. Additionally, wedge-tailed eagles often eat carrion, especially while young. The species tends to pair for several years, possibly mating for life. Wedge-tailed eagles usually construct a large stick nest in an ample tree, normally the largest in a stand, and typically lay two eggs, although sometimes one to four. Usually, breeding efforts manage to produce one or two fledglings which, after a few months more, tend to disperse widely. Nesting failures are usually attributable to human interference, such as logging activity and other alterations, which both degrade habitats and cause disturbances. The species is known to be highly sensitive to human disturbance at the nest, which may lead to abandonment of the young. Although historically heavily persecuted by humans through poisoning and shooting, mostly for alleged predation on sheep, wedge-tailed eagles have proved to be exceptionally resilient, and their numbers have quickly rebounded to levels similar to, or even higher than, those before European colonisation, thanks in part to humans inadvertently providing several food sources, such as rabbits and a large volume of roadkill. Taxonomy The species was first described in 1801 by the English ornithologist John Latham, under the binomial name Vultur audax. At one time, the wedge-tailed eagle was classified in its own monotypic genus, Uroaetus, perhaps due to its unique form. Today, the genus Vultur is used only for a completely unrelated bird of the New World vulture family, the Andean condor (Vultur gryphus). The specific scientific name for the species, audax, is derived from the Latin audax, meaning "bold", indicative of their perceived disposition, perhaps when hunting, although the species is, in general, highly wary, and even timid, around humans.
However, the species is quite similar, in many aspects of its morphology, appearance, behaviour, and life history, to other species in the Aquila genus. The eagles of the Aquila genus are part of the subfamily Aquilinae, within the larger Accipitridae family. The subfamily is commonly referred to as booted eagles or sometimes as true eagles. Those species may be distinguished from most other accipitrids by the feathering covering their legs, regardless of distribution. With some 39 or so species, the Aquilinae is present on every continent except Antarctica. By a variety of phylogenetic testing, largely via mitochondrial and nuclear DNA genes, it has been determined that the wedge-tailed eagle clusters with certain other Aquila eagles. The species found to share the most genetic similarities is the Verreaux's eagle (Aquila verreauxii) of Africa. However, the Gurney's eagle (Aquila gurneyi), a mostly allopatric but outwardly fairly similar eagle, is clearly a very close relation of the wedge-tailed eagle, and the two are likely sister species, most probably originating from the same radiation across the Indo-Pacific region. The wedge-tailed, Gurney's, and Verreaux's eagles form a clade or a species complex with the well-known golden eagle (Aquila chrysaetos), the most widely distributed species in the entire accipitrid family, as well as outwardly dissimilar (smaller and paler-bellied yet also powerful) eagles like the Bonelli's eagle (Aquila fasciata), the African hawk-eagle (Aquila spilogaster), and the Cassin's hawk-eagle (Aquila africanus), the latter three having once been considered members of a different genus. Beyond the aforementioned species, based on genetic testing, the four other Aquila species, although outwardly similar to golden and wedge-tailed eagles, being large, dark and brownish, with long wings, are thought to form a separate clade, paraphyletic with respect to the members of what can be called the golden eagle clade. Other related outliers from outside the Aquila genus are the small-to-mid-sized Clanga or spotted eagle species, and the widely found and quite small Hieraaetus eagles. The latter genus contains the only other widely found aquiline eagle in Australia, the little eagle (Hieraaetus morphnoides). Subspecies Two subspecies of wedge-tailed eagle are recognised. However, the separation of the two subspecies has been called into question, largely because the reported differences in both size and coloration can be attributed to clinal variation, and some of the insular populations may still be at an intermediate stage of subspecific formation. A. a. audax (Latham, 1801) – This subspecies resides across the entire continent of Australia as well as in southern New Guinea. It is the typical wedge-tailed eagle as subsequently described. A. a. fleayi (Condon & Amadon, 1954) – This race is endemic to Tasmania. The subspecies is named in honour of David Fleay, an Australian naturalist who was the first to propose the difference of the insular race. A. a. fleayi differs from mainland wedge-tailed eagles mainly in size and colouring. It is larger than the mainland eagle and is said to have particularly outsized talon dimensions compared to mainland eagles. Furthermore, it has a deep chocolate brown overall colour rather than blackish, with a whitish buff colouring to the nape rather than tawny-rufous feathers there. The juvenile is altogether paler and sandier than an equivalent-aged wedge-tailed eagle on mainland Australia.
Although the validity of the subspecies has been questioned, genetic studies have determined that there is no gene flow or introgression between Tasmanian and other wedge-tailed eagles. Furthermore, the insular race was likely formed by marine dispersals, a process wedge-tailed eagles may continue to engage in despite usually avoiding large bodies of water, albeit usually in narrower straits. Description Wedge-tailed eagles are very large and quite lanky birds. They are characteristically black but can appear tar-black to charcoal-brown, depending on lighting and individual variation. They have a massive bill but possess a relatively small and rather flat head, with a long, almost vulturine neck. Furthermore, they are distinctive for their prominent carpals and baggy feathered trousers. The species tends to perch conspicuously on dead trees, telegraph poles, rocks or, at times, the open ground. Between the bill size, elongated shape, and prominent shoulders, the species is highly distinctive. While perched, their long wings extend down to a long and markedly wedge-tipped tail. They have a large proportion of bare facial skin, which is thought to be an adaptation to the warm climate rather than to carrion eating, because the non-carrion-eating Verreaux's eagle has similar facial feathering and the golden eagle eats carrion too. Against the blackish plumage, the tawny-rufous hackles on the neck, forming a lanceolated shape, as well as the pale brown to rufous crissum, and the narrow mottled grey-brown band across the greater wing coverts, all stand out well. The sexes are indistinguishable by plumage. The juvenile is mainly darkish brown, with extensive rufous feather edging, and a paler, fairly streaky head. Furthermore, the juvenile has a lighter-brown crissum, and a light reddish-brown to golden nape, with similar colouring sometimes extending to the back and wing band. The wing band is considerably more prominent than that of adults, extending to the median and sometimes the lesser coverts. Rarely, a juvenile may be all dull black, lacking rufous edges or a wing band. Young eagles remain much the same from the second through the fourth year, though they are almost invariably visibly in moult, with a narrowing wing band. They become darker around the fifth year, with a red-brown nape and a still-narrowing wing band. Full mature plumage is not attained until the seventh or eighth year, although sexual maturity may be reached as early as five years. Adults have dark brown eyes, while juveniles usually have similar but slightly darker eyes. Wedge-tailed eagles are typically creamy white on the cere and feet, although those can be dull yellow, more so in juveniles than adults. The wedge-tailed eagle has a unique moult process in that it moults almost continuously and very slowly; it might take three or more years for an eagle of the species to complete a moult. Moults are arrested only at times of famine, and happen gradually, so that they do not impede the bird's flight or hunting capacities. In flight, the wedge-tailed eagle appears as a very large, dark raptor, with a protruding head, long and relatively narrow-looking wings, more or less parallel-edged when soaring and, most distinctly, a long diamond-shaped tail. The shape is dissimilar to that of any other raptor in the world. Juveniles tend to be broader-winged by comparison. The wingspan is around 2.2 times greater than the bird's total length. They tend to fly with rather loose but deep and powerful beats.
Wedge-tailed eagles spend much time sailing along, looking quite stable and controlled even in strong winds. The species glides and soars on upswept wings with long splayed primaries. The ample tail may be upcurved, or "dished", at the edges. The eagle often spreads its deep wing emarginations to reduce drag in high winds. Contrary to their superlative and controlled appearance once on the wing, flight for wedge-tailed eagles can be a struggle even in normal circumstances, unless it is from a pinnacle or it is somewhat windy and, within the forest, they may clamber about, with a "lack of grace", to reach the canopy. Gorged birds on the ground can be vulnerable, being practically grounded, which was an advantage historically to Aboriginal hunters. Human gliders have encountered wedge-tailed eagles at more than . The adult is all blackish on the wing but for the tawny-rufous nape and greyish wing band (running less than a quarter of the way down the wing's width). Little relieves the dark coloration below but the pale brown to rufous crissum and the pale greyish bases to their flight feathers. Juvenile wedge-tailed eagles appear much browner, although in general they are not dissimilar to adults in the pattern of the body and wings below. However, juveniles may show some paler mottling, of an off-rufous colour. Meanwhile, the juvenile's tail and most flight feathers are barred greyish, which in turn contrast with the pale-based, black-tipped primaries. Above, the juvenile bears a much paler, more sandy-rufous colour from the head to at least the upper mantle and along a broad wing band (covering more than half the wing's width). The lighter dorsal colour sometimes extends to much of the back and scapulars. Rare individual juvenile eagles are dull black, without a wing band or paler edges. With much variation between individuals, generally as the young eagles age, the signature wing band shrinks incrementally and, after the fifth year, the plumage darkens. Size The female wedge-tailed eagle is one of the world's largest eagles. Its nearest rival in Australia for size is some 15 per cent smaller linearly and 25 per cent lighter in weight. As is typical in birds of prey, the female is larger than the male. Although a few individual females are larger by only a small amount, females average up to 33 per cent larger than males. A full-grown female weighs between , while the smaller males weigh . Total length varies between , and the wingspan typically is between . In 1930, the average weight and wingspans of 43 birds were and . The same average figures for a survey of 126 eagles in 1932 were and , respectively. According to one guide, the mean body mass of male wedge-tailed eagles is while that of females is listed as , which, if accurate, is one of the most extreme examples of sexual size dimorphism known in any bird of prey. However, another sample showed far less stark size differences, with 29 males weighing an average of and 29 females an average of . In the same sample, from the Nullarbor Plain, males averaged a wingspan of (sample of 26) and a body length of (sample of 5), while females had an average wingspan of (sample of 23) and a body length of . However, the Nullarbor Plain eagles appear slightly smaller than wedge-tailed eagles from other surveys, based on body mass and wing chord sizes. An average length for males of and was described for wedge-tailed eagles in Queensland. Another source claimed an average male weight of and an average female body mass of .
Yet another book lists males as averaging and females as averaging . A sample of 10 males averaged while 19 females weighed . The mean body mass of males in Tasmania was while that for females was . The largest wingspan ever verified for an eagle was for this species. A female killed in Tasmania in 1931 had a wingspan of , and another female measured barely smaller at . Similar claims, however, have been made for the Steller's sea eagle (Haliaeetus pelagicus), which has been said to reach or exceed in wingspan. Reported claims of wedge-tailed eagles spanning and were unverified and deemed to be unreliable per Guinness World Records. This eagle's great length and wingspan place it among the largest eagles in the world, but its wings, at more than , and tail, at up to , are unusually elongated for its body weight, and nine or ten other eagle species regularly outweigh it. It is around the third heaviest Aquila species, outsized only somewhat by the golden eagle and slightly by the Verreaux's eagle, although it only slightly exceeds the weight of the Spanish imperial eagle (Aquila adalberti). Among the entire booted eagle subfamily, in addition to the two heavier Aquila, it is outsized in bulk by the martial eagle (Polemaetus bellicosus), while the also long-tailed crowned eagle (Stephanoaetus coronatus) can average a roughly similar body mass to the wedge-tailed eagle, although the latter is marginally the heavier bird. The wedge-tailed is exceeded in body mass by only a few eagles, especially the Steller's sea eagle and harpy eagle (Harpia harpyja), and somewhat by the Philippine eagle (Pithecophaga jefferyi), the white-tailed eagle (Haliaeetus albicilla), and the bald eagle (Haliaeetus leucocephalus). However, it rivals the Steller's and harpy eagles, and is known to be exceeded only by the Philippine eagle, in total length. The wedge-tailed eagle's wingspan is the largest of any Aquila, and is exceeded amongst all eagles probably only by the white-tailed and Steller's sea eagles in average spread, though its average (not maximum) wingspan is rivaled by that of the martial eagle. Among standard measurements, within the nominate subspecies, the wing chord of males may range from while that of the female is from . In Tasmania, the wing chord measured from in males and in females. On the Nullarbor Plain, males averaged in wing chord while females averaged . Other Australian wedge-tailed eagles averaged in wing chord among males and among females. In Tasmania, the wing chord averaged in males and in females. The extreme tail length, slightly to greatly exceeding that of other Aquila, is in males from , averaging in the Nullarbor eagles and in Tasmania, and in females from , averaging in Nullarbor and in Tasmania. Although they only slightly exceed the two heavier Aquila and the crowned eagle in tail length, and can rival the tail lengths of the Philippine and the Harpiinae eagles, Tasmanian wedge-tailed eagles are quite likely to be the longest-tailed of all modern eagles. The length of the tarsus may be from . The tarsus of 7 males averaged while that of 7 females averaged . In terms of bill measurements, the exposed culmen may range from in males and in females, while total bill length (from the gape) is from and , in the sexes respectively. It is likely to be the largest-billed Aquila, a bit ahead of the imperial eagles and the Verreaux's eagle, behind only the larger Haliaeetus and Philippine eagles amongst all eagles.
In Tasmania, culmen lengths averaged in males and in females, while the total length of the bill averaged and . The hallux claw, the enlarged talon on the hind toe, is slightly smaller than that of a golden or Verreaux's eagle, even proportionately, but is extremely sharp. According to one study, the hallux claw averaged , ranging from , in males and, in a sample of 10, averaged , ranging from , in females. Another source listed the hallux claw of mainland Australian eagles as averaging in males and in females. Meanwhile, in Tasmanian eagles, the hallux claw averaged , ranging from , in males, while in females it averaged , ranging from . In terms of osteological structure and size, the wedge-tailed eagle is said to be proportional to other eagles, being notably smaller and less robust than the heaviest eagles, such as Steller's and harpies, but fairly similar in osteology, in both structure and proportions, to the golden eagle.

Identification

Their unique combination of large size, lanky build, long, diamond-shaped tail (though it can be round-ended when both central feathers are moulted together), mainly black or rather dark plumage, and long wings seen when soaring or gliding makes all ages of the wedge-tailed eagle fairly unmistakable in the majority of their range. The main confusion species is often the black-breasted kite (Hamirostra melanosternon), which is surprisingly similar in colouring but is much smaller, with a relatively short, squared tail and extensive clear white windows covering a good part of the wings. Juveniles of the white-bellied sea eagle (Haliaeetus leucogaster), at times mentioned as potentially confusable with a young wedge-tailed eagle, are much paler below, with a rather different flight profile: a short pale tail, bare legs, and shorter, broader wings held in a stiff dihedral. In New Guinea, the Gurney's eagle is more similar than those species in form and build, but the Gurney's is somewhat smaller and more compact than the wedge-tailed eagle, with rich yellow feet, a rather shorter, rounded or faintly wedge-tipped tail, and shorter, relatively broader wings (an adaptation to more forest-dwelling habits). Furthermore, the Gurney's eagle has a much paler immature plumage. Although usually considered an island endemic, the Gurney's eagle is possibly capable of marine dispersals, as is the wedge-tailed eagle, which may lead it to turn up in the forests of northern Australia, and historical reports show that a rare vagrant of the species may indeed appear there. The Papuan eagle (Harpyopsis novaeguineae), the only other island raptor in New Guinea that approaches the wedge-tailed in size, is a highly distinct and forest-restricted species, being much paler, particularly below, with long, bare legs and different proportions, more like a giant Accipiter with short, rounded wings, a long, somewhat round-tipped tail, and a large, rounded head.

Vocalizations

Wedge-tailed eagles are not well known for their vocalizations, nor are they often heard. They may be silent for long stretches of time, possibly months, at least outside of the breeding season. When vocalizations have been documented, it is usually only near the nest and in aerial display, and they can be hard to hear unless at close range. The commonest calls for wedge-tailed eagles are high, rather thin whistles, sometimes transcribed as I-see, I-see followed by a short descending see-tya. Also documented during the breeding season are various other whistles, yelps and squeals, and an often rolling series.
Characteristically, all their calls are surprisingly weak, though the main call is sometimes considered to have a "melancholy" quality. Opinion of their call is not dissimilar to that of the golden eagle, whose voice is similarly considered unimpressive. Female calls in wedge-tailed eagles are similar to those of males but are generally lower and harsher.

Range and habitat

Wedge-tailed eagles are found throughout Australia (including Tasmania), as well as southern New Guinea, in almost all habitats, though they tend to be more common in favourable habitat in southern and eastern Australia. In Australia, they may be found almost all the way from the Cape York Peninsula in the north down to Wilsons Promontory National Park and Great Otway National Park at the southern tips of the continent, and from Shark Bay on the western side of the continent to Great Sandy National Park and Byron Bay in the east. They are widespread throughout the desert interior of Australia, but occur only at low densities in the most arid parts of the continent, such as the Lake Eyre Basin. Offshore, the wedge-tailed eagle occurs on several of the larger Australian islands and some of the smaller ones. Those include a majority of the Torres Strait Islands, Albany Island, Pipon Island, the isles of Bathurst Bay, many small isles in Queensland from Night Island down to the South Cumberland Islands, Fraser Island, Moreton Island, North Stradbroke Island, Montague Island, Kangaroo Island, the Nuyts Archipelago, Groote Eylandt and the Tiwi Islands. In Tasmania, they may be found essentially throughout, as well as on some isles of the Kent Group, Bass Strait, Flinders Island and Cape Barren Island. In New Guinea, the wedge-tailed eagle is highly range-restricted and can be found predominantly in the Trans-Fly savanna and grasslands and the general area around the Western Province, as well as in Indonesia's Merauke Regency, with some isolated reports from Western New Guinea, the Bensbach River and the Oriomo River.

Habitat

The wedge-tailed eagle lives in an extremely wide range of habitats. Although its range is restricted relative to the golden eagle, it likely occupies a wider range of habitat types than any other Aquila eagle, and may outrival any booted eagle species in its use of diverse habitats, being somewhat more akin to habitat-generalist raptors such as Buteo buzzards. Assorted habitats known to host wedge-tailed eagles include open woodland, savanna, heathland, grasslands, desert edge and semi-desert, subalpine forests, montane grasslands and mountain peaks, not-too-dense tropical rainforests, monsoon forests, dwarf conifer forests and some wetlands, as well as regular forays into coastal areas, though normally along the coasts they occur around plains somewhat away from the water. Favoured habitat tends to be remote or rough country, at least partially wooded and not uncommonly varied with some rocky spots, as well as shrubland. Wedge-tailed eagles seem to prefer some dead trees to be present. They may occur quite regularly around Eucalyptus woodland, as well as Acacia woodland and mixed woodlands of Casuarina cristata-Flindersia maculosa-Callitris cypresses, and also stands of Casuarina cunninghamiana. A strong preference for C. cunninghamiana, alternating with several Eucalyptus species, was detected in the Australian Capital Territory, with sloping ground allowing good access, and access to tall, mature trees, being paramount to the eagles in the study.
Quite often they will be seen soaring over hills, mountains or escarpments as well as over flat plains, especially spinifex grassland. Dense forest is typically avoided, with glades and edges often sought out in forested areas. While they do occur in rich riparian woodlands, it is with relative scarcity, despite this being where many other raptors of the nation concentrate. In the deserts of the Lake Eyre basin, they are often seen on gibber plains along treed watercourses and drainage basins, here often concentrated around Eucalyptus in stony creek beds. In the sandy desert areas of Western Australia, wedge-tailed eagles were once reasonably common but have largely vacated the region after the macropod prey they lived off there was all but hunted to extinction. Wedge-tailed eagles commonly occur from sea level up to about , with seemingly no altitudinal preference. A fairly pronounced liking for mountainous localities such as plateaus has been detected in a few studies of the wedge-tailed eagle. One of the few habitat types considered strongly avoided by wedge-tailed eagles is intensively settled or cultivated land. This tendency to avoid human areas appears to be slightly fading, perhaps as persecution rates have gone far down, and the wedge-tailed eagle may be seen near towns and villages in exurban and even suburban areas largely within bushland. However, the species is seldom seen, other than as a flyover, in more developed towns and cities. Additionally, it is not uncommon to see these eagles in man-made settings such as pasture areas, forestry clearings and rolling farmland.

Behaviour

This impressive bird of prey spends much of the day perching in trees, on rocks and on similar exposed lookout sites such as cliffs, from which it has a good view of its surroundings. Alternatively, it may sit on the ground for long periods of time or watch from a lower point, such as a termite mound or anthill. Now and then, it takes off from its perch to fly low over its territory. Especially while not breeding, wedge-tailed eagles spend a considerable amount of the day on the wing. Wedge-tailed eagles are highly aerial, soaring for hours on end without a wingbeat and seemingly without effort, regularly reaching and sometimes flying considerably higher. The purpose of soaring has received little specific study in wedge-tailed eagles, but it likely serves, as in other accipitrids, in large part to survey the territory and advertise their presence to other eagles. During the intense heat of the middle part of the day, the eagle often soars high in the air, circling up on the thermal currents that rise from the ground below. Often when on the wing, it is scarcely visible to the naked human eye. Their keen eyesight extends into the ultraviolet bands. With visual perception some three times more acute than that of humans, one of the largest pecten oculi of any bird and an eye roughly as big as a small human's, they may be among the most sharp-eyed birds in the world. The wedge-tailed eagle is largely sedentary, as expected of a raptor dwelling in the subtropics, although it also dwells in the tropics (far northern Australia and New Guinea) as well as in the temperate zone (Tasmania). However, juveniles of the species can be quite dispersive. In some cases, they have moved a recorded distance of some . These extreme movements have been completed within 7 to 8 months of dispersal. More typically, they move no farther than or so.
The adult eagles can also be nomadic, though only in circumstances such as drought conditions. This in turn explains the species' presence, even of adults, in places where it does not breed. In addition to moving in response to drought in the arid zone, the species also moves in the highest parts of New South Wales, e.g. the Snowy Mountains, where it often apparently vacates the snow-covered alpine zone in winter. The small New Guinea population is apparently indistinguishable from the mainland race and so is possibly the result of recent colonization, although no records exist of wedge-tailed eagles migrating across the islands of the Torres Strait. However, its presence on various offshore islands suggests a capacity for crossing straits of up to apart. One post-dispersal young eagle was observed to move from Kangaroo Island to the mainland, possibly a regular occurrence. Due to their tendency to wander, some authors class the wedge-tailed eagle as a "partial or irruptive migrant". However, while arguably irruptive, it does not fit the mould of a true migrant well, since under normal circumstances adults are rather sedentary unless environmental changes force them to move. The wedge-tailed eagle is the only bird with a reputation for not infrequently attacking hang gliders and paragliders, although other eagles, including the golden eagle, have also been recorded behaving thus. Based on the response the eagles show to the gliders, they presumably are defending their territory and treating the perceived intruder like another eagle. Cases have been recorded of the birds damaging the fabric of these gliders, as well as other parts of the gliding apparatus, with their talons, but attacks on the humans themselves have not been reported. They have also been reported to attack and destroy unmanned aerial vehicles used for mining survey operations in Australia. The presence of a wedge-tailed eagle often causes panic among smaller birds and, as a result, aggressive species such as magpies (one of the most vulnerable types of passerine to eagle attacks), butcherbirds, wagtails, monarch flycatchers, lapwings and miners, as well as smaller birds of prey, including both accipitrids and falcons, may aggressively mob eagles. Multiple species may join the kerfuffle and mob them, especially while the eagles are perched, often engaging in noisy calling, presumably meant to disorient the predator, and occasionally in physical attacks against the eagle, typically aimed where the big, relatively lumbering eagles cannot grasp the attacking birds. The wedge-tailed eagle usually does not engage its tormentors but sometimes rolls, whether in the air or perched, to present its talons. Sometimes wedge-tailed eagles appear to fight, but this and other behaviours, especially between young eagles, may be interpreted as playful. Such behaviours have included fetching sticks tossed by others, athletic flipping between juvenile eagles and even playing games with dogs, floating above them until the dogs bark or leap, then floating up until the dog settles, and then repeating the "game". Flocking behaviour, similar to that of vultures (Cathartidae and Accipitridae) in other countries, has been noted when carrion is available.

Dietary biology

The wedge-tailed eagle is one of the world's most powerful avian predators. Due to its formidable and dominating nature, it is sometimes nicknamed the "King of Birds", along with the golden eagle.
Prey is usually grabbed via a pounce or snatch during a gliding flight, or in a tail-chase from low quartering or transect flights. Prey is not infrequently spotted from a soaring flight, after which the eagle may undertake a long, slanting stoop towards it. Given their keen vision, they may be able to spot prey from farther than a kilometre. Their typical hunting style is not altogether dissimilar from that of golden or Verreaux's eagles. Occasionally, a wedge-tailed eagle still-hunts from a perch. Unsuccessful hunts typically outnumber successful ones. Hunting habitat can be highly variable, and the eagles can capture prey in both open country and quite thick woodland or forest, though they typically require an open understory in the latter. Almost all prey is taken on the ground, but to a lesser extent it may be taken from the tree canopy. They have been known to take birds such as currawongs and cockatoos by coming upon them by surprise around a tree, or by darting out in flight at close range for a brief tail-chase. Sometimes, an eagle may pull brushtail possums and other mammals from tree cavities, as well as young birds from a nest. They have been known to follow wildfires to search for fleeing animals, or alternatively tractors and other farm equipment for the same purpose. Wedge-tailed eagles occasionally pirate food from other predators. An eagle of the species can carry prey of at least . Large animals may be attacked by pairs or, occasionally, by groups acting cooperatively. One record shows 15 wedge-tailed eagles hunting kangaroos, two actively chasing at a time, then repeatedly being replaced by two more from the circling group overhead. Regardless of prey size and season, tandem hunts, mainly by breeding adult pairs or sometimes loosely associated young eagles, are not uncommon. Of 89 observed hunts in Central Australia, around one-third were cooperative. As in other tandem-hunting raptors, one eagle typically lies in wait, generally unseen, while the other distracts and drives the prey towards it. When hunting domesticated prey, they have been seen to land near livestock mothers to intimidate them and separate their young, so they can attack the latter. Sometimes, wedge-tailed eagles may use fences to limit a prey's escape routes. In some cases, these eagles will attempt to force large prey such as kangaroos and dingoes to fall off steep hillsides and injure themselves. At times, wedge-tailed eagles appear to hunt at earliest light or late twilight in order to come upon nocturnal prey such as hare-wallabies and bettongs. These eagles have also been seen removing rabbits from traps and eating carrion in bright moonlight. At times, remarkably, wedge-tailed eagles have been observed covering large prey with vegetation, apparently to cache food too heavy to carry. Carrion is a major diet item as well; wedge-tails can spot the activity of ravens around a carcass from a great distance and glide down to appropriate it. Carrion consumption is recorded in all seasons and contexts, although non-breeding birds are generally more likely to scavenge, and young wedge-tailed eagles, even more so shortly post-dispersal, are thought to be far more likely to scavenge on carrion than adults. Wedge-tailed eagles are often seen by the roadside in rural Australia, feeding on animals that have been killed in collisions with vehicles.
The importance of carrion relative to live prey has not been greatly studied, but away from human development, especially roads, carrion is less likely to be encountered, and eagles of all ages must presumably hunt to survive. In general, Australian accipitrids of many species not infrequently come to carrion, and together with large passerines like Corvus species and currawongs, they probably fulfill, to some extent, the niche that vultures do on other continents, albeit with considerably less specialization. Aggregations of wedge-tailed eagles may occur not infrequently at large carcasses, with up to 5–12 eagles, or sometimes 20, gathering. A wedge-tailed eagle can gorge up to at a sitting and, once sated, can last for an unusually long time, up to weeks or even a month, before needing to hunt again, apparently due to the warmth of the environment. After feeding, they may disgorge a relatively small pellet, long by wide and weighing some . The diet is usually determined from a combination of reviewing these pellets along with loose prey remains.

Prey spectrum

The wedge-tailed eagle is a dietary generalist, opportunistically capturing a wide range of prey species. Its prey spectrum is quite broad, with well over 200 prey species documented, and even this total includes only a few prey species from secondary accounts from Tasmania and New Guinea. The wedge-tailed eagle tends to prefer smallish to fairly large mammals as prey. However, it not infrequently takes ample numbers of both birds and reptiles, while other prey taxa are scarcely taken. Out of 21 accrued dietary studies, 61.3% of prey items by number taken during nesting efforts were mammals, 21.6% were birds, 13.2% were reptiles, 2.1% were invertebrates (principally insects), 1.5% were fish, and almost none were amphibians. Meanwhile, 13 of the 21 studies calculated estimated biomass, and found that just shy of 90% of prey biomass was made up of mammals, 6.2% of birds and 3.4% of reptiles. Within the genus Aquila, it is one of a few generalist species; however, the wedge-tailed eagle is the Aquila most likely to attack the largest prey. Generally, this species prefers to attack birds and reptiles weighing over and mammals weighing over , although prey taken at times has varied from a few grams to more than sixteen times the weight of an individual eagle. A comparative estimate posited that around 2% of wedge-tailed eagle prey weighs less than , 4% weighs , 7% weighs , 10% weighs , 20% weighs , 25% weighs , 18% weighs and 14% weighs over . Projected from this comparison, the mean prey size for wedge-tailed eagles is estimated at , similar to but just slightly ahead of that of the Verreaux's eagle and some 14% ahead of the golden eagle's global mean prey size. Further studies estimated mean prey weight: the mean prey weight in the Canberra-Australian Capital Territory region in three different studies was estimated to be , and , likely varying with the shifting importance of leporids and larger macropods. In a small study from Armidale, New South Wales, mean prey weight was estimated at . The wedge-tailed eagle ranks behind only the crowned eagle and harpy eagle, and rivals the martial eagle, as the eagle likely to attack the largest prey on average.
Mammals

Introduced mammals

While the introduction of invasive species to Australia has generally had a negative to devastating effect on native animals and ecosystems, the wedge-tailed eagle is one of the few native species to have largely benefited from these introductions. This is especially due to the introduction of the European rabbit, which was deliberately introduced repeatedly (abortively in 1788 and then via a concerted effort from 1859), largely so the wealthy could hunt them. The wedge-tailed eagles quickly took to the rabbits as prey, along with another introduced leporid, the European hare (Lepus europaeus). In almost every part of Australia, these eagles take rabbits in some numbers, and rabbits usually constitute the bulk of the prey in most, if not all, Australian food studies. In some dietary studies, rabbits have accounted for up to 89.2% of the diet by number and 86% by biomass, as in Bacchus Marsh; however, they more typically range from 16% to 49% of the diet by number in various studies. One Canberra study found that 98.5% of the rabbits taken were adults. In the largest study near Canberra, over 5.5 years, rabbits made up 19.3% of the diet of wedge-tailed eagles (12.7% of prey biomass) among 1421 prey items, so the eagles took a total of some 275 rabbits in the 11 to 17 studied territories of the area. A study estimated that the mean weight of wild rabbits in Australia was , lower than estimated in the past. However, other studies estimated the mean weights of rabbits taken by wedge-tailed eagles variously from or "usually over ", infrequently reported to , the size of the rabbits perhaps being limited by the poorly suited soil and environs of the Australian wilderness. Meanwhile, the European hare is neither as widely established nor as prolifically taken as the rabbit, but hares are by no means neglected and constitute a substantial meal. With a mean body mass of , hares have made up as much as nearly 10% of the local diet and up to 14% of prey biomass in studies. Myxoma virus was deliberately introduced to control the rabbit population subsequent to 1950, followed by the even more effective rabbit haemorrhagic disease from 1995, to limit the damage rabbits have inflicted on native vegetation, through which they have competitively excluded native mammals like wallabies from parts of their range. Ultimately, the rabbit population may have more than halved, and locally has been reduced by some 90%. As a matter of consensus, wedge-tailed eagles do not appear to be adversely affected in major ways by the biological control of rabbits, since they can revert to primarily taking native prey species quite readily. In the region of Broken Hill, White Cliffs and Cunnamulla, rabbits have gone down from accounting for 56–69% of the diet to 16–31% of it. Furthermore, wedge-tailed eagles have been known to successfully maintain populations in the absence of any rabbits in a few areas. Much more controversial at one time than the hunting of introduced rabbits and hares was the wedge-tailed eagle's occasional tendency to feed on and sometimes kill domesticated livestock. Predation by wedge-tailed eagles on young farm animals has been the primary historical driver of the species' persecution. However, in no known study have domestic livestock been found to be the primary prey.
The closest association with livestock was in northwestern Queensland, where lambs (Ovis aries) made up 32.7% of prey in pellets and 17.1% in remains, accounting for 15–21% of the prey biomass, while juvenile pigs (Sus scrofa domesticus) made up 7.3% of pellet remains and 22% of the biomass. Although it can be highly difficult, attempts have been made at parsing out whether the eagles had indeed killed the lambs rather than merely lifting or dismantling them after finding them dead, as this eagle quite readily comes to carrion. The findings were that, of 29 diagnosable lamb deaths in northwest Queensland, only 34.5% were due to eagle attacks. The wedge-tailed eagle is at times capable of taking very substantial livestock animals: lambs taken have been estimated to weigh a mean of or up to , while fully grown sheep weighing some are infrequently vulnerable, presumably in large part to hunting pairs of eagles. In the largest study of the Canberra area, 82.5% of diagnosable sheep specimens were adults but were probably by and large scavenged. Meanwhile, young pigs included in the diet were estimated to weigh around , and sometimes feral piglets are taken. When attacking lambs, wedge-tailed eagles are apparently capable of driving their talons into the skull of the victim, although more typically they land along the back and grip the lamb along the spine, flapping the wings for balance, until it weakens and collapses. This species will also land between a ewe or a sow and her lambs or piglets in order to separate the latter for attack. Wedge-tailed eagles are also known at times to prey on another animal introduced for human hunting purposes, the red fox (Vulpes vulpes), which can form up to about 4% of an eagle's breeding diet and 5% of the biomass, weighing up to . In Canberra, about 59% of the foxes found in the diet were adults. Additionally, feral cats, mainly juveniles, can be part of their prey.

Native mammals

Presumably, the primary native prey of wedge-tailed eagles is marsupials, particularly macropods, which is also in accord with studies from places where rabbits have declined or never occurred. Many wallabies, kangaroos and associated animals are included in the diet, with over 50 marsupials known to be in the species' prey spectrum. When selecting marsupials, wedge-tailed eagles tend to ignore smaller species and focus on larger-sized ones. However, among the large macropod marsupials, they generally most often take the young, small and sickly alive. Findings were that juvenile macropods were taken out of proportion to their numbers in the environment, unlike rabbits, which were taken roughly in proportion to their abundance. In recent times, they have been known to eat marsupials, such as kangaroos, killed by cars. There is little evidence that macropods delivered to nests are usually roadkills or from carrion, but the source of prey is difficult to determine because, to minimize disturbance, examinations are usually done after breeding is complete. As well, attendance at carrion by wedge-tailed eagles is disproportionately by juvenile eagles. In one study of roadkills in Australia, the species ranked around fourth among scavengers in frequency and capacity for carcass breakdown, behind feral pigs, red foxes and ravens. A video surveillance study at nests near Broken Hill determined that the macropods delivered there were seemingly freshly killed, albeit usually quite young.
As much as 20% to 30% of the diet can be made up of macropods. Large and prominent prey species include the grey kangaroos and the red kangaroo (Osphranter rufus). Generally, juveniles of these large species are targeted, with eastern grey kangaroos (Macropus giganteus) estimated to weigh when taken by wedge-tailed eagles in the Australian Capital Territory and New South Wales, while the weight of young western grey kangaroos (Macropus fuliginosus) was said to be in one study in Western Australia. The estimated weight of juvenile red kangaroos taken was in northwestern Queensland, where they were the primary prey species ahead of lambs. However, wedge-tailed eagles do not shy away from attacking large, adult macropods. Similarly large adult macropods killed by these eagles can include common wallaroos (Osphranter robustus) (mean adult weight around ), antilopine kangaroos (Osphranter antilopinus) (mean adult weight around ), agile wallabies (Notamacropus agilis) (median adult weight around ), black-striped wallabies (Notamacropus dorsalis) (median adult weight around ), red-necked wallabies (Notamacropus rufogriseus), estimated to weigh around when taken, swamp wallabies (Wallabia bicolor) (mean adult weight around ), and even adult red kangaroos. They have been recorded attacking eastern grey kangaroos weighing over . In extreme cases, wedge-tailed eagles have killed kangaroos weighing approximately . In one case, a huge male eastern grey kangaroo, estimated to stand , was successfully dispatched by a pair of wedge-tailed eagles. Furthermore, an adult female western grey kangaroo was witnessed being killed "in a few minutes" by a hunting pair of wedge-tailed eagles, and the eagles are considered a serious predator of the western grey. In some unusual cases, wedge-tailed eagle hunting parties can form whilst hunting red kangaroos, sometimes including up to 15 eagles (loose, opportunistic aggregations rather than well-organized groups), but usually a pair alone is sufficient to kill such prey. Normally, the eagles repeatedly attack the kangaroo, one sinking its talons into the back or nape and then flying up, whereupon the second eagle does the same. In some cases, as many as 123 attacks have been carried out against large kangaroos before they succumb. When attacking joeys, eagles may in some cases intentionally cause a mother kangaroo to dislodge a joey from the pouch, so that they can capture and fly off with it. In addition, several smaller and more elusive macropods are taken, including tree-kangaroos, hare-wallabies, nail-tail wallabies, rock-wallabies, dorcopsises and pademelons. Other marsupials are by no means neglected. In Shark Bay, hare-wallabies and bettongs seem to form the central part of the diet. Another dietary favourite is the common brushtail possum (Trichosurus vulpecula), weighing some , which was important supplemental prey in the Perth area and was the primary prey species on Kangaroo Island, at 33% of the diet there. Around Perth, other small, nocturnal marsupials were taken in some numbers, including woylies (Bettongia penicillata) and southern brown bandicoots (Isoodon obesulus). The common ringtail possum (Pseudocheirus peregrinus) was the second most prominent prey species near Melbourne, comprising 20.1% of the diet, with some numbers of common brushtails also taken there. Long-nosed bandicoots (Perameles nasuta) were regular supplemental prey in northeastern New South Wales.
Other notable marsupials known to fall prey to wedge-tailed eagles include adults of the following: koalas (Phascolarctos cinereus), quokkas (Setonix brachyurus), eastern (Dasyurus viverrinus), western (Dasyurus geoffroii) and tiger quolls (Dasyurus maculatus), Tasmanian devils (Sarcophilus harrisii), bilbies, numbats (Myrmecobius fasciatus), common wombats (Vombatus ursinus), southern greater gliders (Petauroides volans) and potoroos. With relative infrequency, other classes of mammals beyond leporids and marsupials may be taken opportunistically by wedge-tailed eagles. At least two species each of flying foxes and wattled bats are included in the prey spectrum. Occasionally, an eagle may take a monotreme, including both the platypus (Ornithorhynchus anatinus) and the short-beaked echidna (Tachyglossus aculeatus). Several species of rat are readily taken, and even the house mouse (Mus musculus), likely the smallest mammalian prey known for wedge-tailed eagles at around in weight. Although rarely, a dingo (Canis familiaris) may be taken by a wedge-tailed eagle, mostly as pups or carrion, but sometimes a pair of eagles can kill adults too. Beyond sheep, pigs and infrequently young goats (Capra hircus), other ungulate prey, entirely introduced by man into the Australasian region, is eaten exclusively as carrion so far as is known, including cattle (Bos taurus; despite claims that eagles have killed young calves, which is possible, they have only ever been witnessed feeding on afterbirths and not harming calves), Javan rusa (Rusa timorensis) in New Guinea, sambar deer (Rusa unicolor) in northern Victoria and water buffalo (Bubalus bubalis) in the Northern Territory. In one instance, a young girl was apparently subject to a brief attack by a wedge-tailed eagle near her rural home, in what was likely an attempted act of predation, but the attack was abandoned by the eagle. It has been noted that several species of large eagles are thought to occasionally attack children as prey, though, among extant species, only the crowned eagle and martial eagle, both in Africa, are thought to have successfully carried out rare acts of predation on human children.

Birds

Birds take a clearly secondary position to mammals where importance and especially prey weight are concerned; however, the wedge-tailed eagle shows some fondness for avian prey. With more than 100 prey species included in the prey spectrum, birds are the most diverse class of prey taken by these eagles. Generally, the predation of birds seems to be highly opportunistic, and no one type of bird reliably dominates the eagle's diet. However, certain species, probably due to their commonness in eagle territories and perhaps their vulnerability through their own behaviour, seem to be taken most often. These consist of Corvus species, especially little (Corvus mellori) and Australian ravens (Corvus coronoides), with a mean weight between the species of around when taken, Australian magpies (Gymnorhina tibicen), Australian wood ducks (Chenonetta jubata), galahs (Eolophus roseicapilla), larger cockatoos and smaller parakeets and parrots. On Kangaroo Island, Australian and little ravens together constituted 19% of the diet. In Canberra, fairly prominent numbers of magpies, wood ducks, galahs and eastern (Platycercus eximius) and crimson rosellas (Platycercus elegans) are known to be taken, these collectively forming up to about 25% of the diet by number.
In the Perth region, birds were taken amply, especially the Australian raven, at 12.6% of prey remains and 4.7% of the biomass, with birds constituting just shy of 25% of the diet. Elsewhere in Western Australia, a similar percentage of the diet is made up of birds, mostly the same species, with some numbers of Australian ringnecks (Barnardius zonarius) and Baudin's black cockatoos (Zanda baudinii) as well. Peculiarly, one study in the Northern Territory found that, among a large sample of 1826 prey items, the most often identified prey species was the tiny budgerigar (Melopsittacus undulatus), one of the smallest avian prey species for this eagle. In a single study from the Fleurieu Peninsula, birds were the majority of prey for wedge-tailed eagles, at 62.5%, mostly Corvus followed by wood ducks, galahs and magpies. Other assorted avian prey includes several species of waterfowl, including several ducks as well as swans and geese, and a fairly strong frequency of attacks on large rails, such as swamphens, moorhens, native-hens and coots. Additionally, wedge-tailed eagles may take Australian brush turkeys (Alectura lathami) and malleefowl (Leipoa ocellata), quail, pigeons and doves, frogmouths and owlet-nightjars, cuckoos, buttonquails, stilts, lapwings, plains-wanderers (Pedionomus torquatus), thick-knees, gulls, petrels, cormorants, herons, ibises and spoonbills, cranes, other birds of prey, kingfishers, honeyeaters, quail-thrushes, whistlers, monarch flycatchers, mudnesters, artamids, true thrushes, grass warblers, starlings and pipits. The smallest avian prey attributed to wedge-tailed eagles is the zebra finch (Taeniopygia guttata). Particularly large birds of a few species are sometimes taken. When it comes to the emu (Dromaius novaehollandiae), Australia's tallest and second heaviest bird, wedge-tailed eagles normally attack the small young, but they are capable of attacking adult emus more than 10 times their own weight. Two studies estimated the typical body mass of emus attacked at merely , respectively, against an average of for adult emus. As much as 4% of the diet of wedge-tailed eagles can consist of emu chicks. Some of Australia's largest flying birds are also included in the wedge-tailed eagle's prey spectrum. These include the black swan (Cygnus atratus), estimated to weigh when taken, the black-necked stork (Ephippiorhynchus asiaticus), which weighs at least , and the brolga (Antigone rubicunda), arguably Australia's largest resident flying bird species at a mean of . An unusually close feeding association with a very large bird is that with the Australian bustard (Ardeotis australis) in northwestern Queensland, where bustards were found to account for 13.4% of pellet contents and 23% of prey biomass. That study calculated the mean weight of bustards taken as , indicating that the eagles were selectively preying on the much larger male bustards.

Reptiles and other prey

When selecting reptiles as prey, wedge-tailed eagles are by far most likely to pursue lizards. The range of lizards they may prey upon is highly diverse in size and nature, with somewhere between 20 and 30 species known in the prey spectrum. By far the most preferred reptilian prey are bearded dragons. Despite the small size of this prey relative to most mammalian prey, bearded dragons can be key to survivorship in more arid vicinities, such as central and western Australia, where there is a less diverse prey base to pick from.
In video-monitored prey deliveries at Fowlers Gap Arid Zone Research Station, central bearded dragons (Pogona vitticeps) dominated the prey composition, making up 68.2% of 110 prey deliveries, the only known instance of reptiles forming the bulk of a wedge-tailed eagle's diet. A different study, based on prey remains and pellets, found the central bearded dragon to comprise 28.6% of the diet among 192 prey items. In south-central Queensland, the bearded dragon was the leading prey species by number, making up 26.9% of 729 prey items. In northeastern New South Wales, the eastern bearded dragon (Pogona barbata) was the second most numerous prey species behind the rabbit, at 16.6% of the diet. Bearded dragons taken by wedge-tailed eagles have had an estimated body mass ranging from . They also prey on jacky dragons. Larger lizards are readily taken as well, given the opportunity. Skinks are occasional supplemental prey: the common blue-tongued skink (Tiliqua scincoides), at around , can make up around 5% of the diet (in northeast New South Wales), while the Centralian blue-tongued skink (Tiliqua multifasciata) was quite prominent in the diet in the Northern Territory. In Western Australia, the shingleback skink (Tiliqua rugosa) and the somewhat smaller western blue-tongued skink (Tiliqua occipitalis) collectively comprised about 7.5% of the diet. Much bigger lizards are sometimes taken, namely monitor lizards. Around 20% of the 231 prey items in one Western Australian study were found to be monitor lizards, mostly yellow-spotted monitors (Varanus panoptes) with some sand goannas (Varanus gouldii). Adult Rosenberg's monitors (Varanus rosenbergi), weighing around , can also be taken. Even the lace monitor (Varanus varius), which weighs on average as an adult, can be prey for this powerful eagle. Conversely, lizards down to the size of a pygmy spiny-tailed skink (Egernia depressa) or a thorny devil (Moloch horridus) may be taken. Beyond lizards, wedge-tailed eagles seldom seem to hunt other types of reptiles. They hunt a few species of snakes, mostly venomous ones, as these are prevalent in Australia. Snakes known to be included in the diet include tiger snakes (Notechis scutatus), the eastern brown snake (Pseudonaja textilis), the ringed brown snake (Pseudonaja modesta), the bandy-bandy (Vermicella annulata), the yellow-faced whipsnake (Demansia psammophis), the red-bellied black snake (Pseudechis porphyriacus) and the brown tree snake (Boiga irregularis). The eastern long-necked turtle (Chelodina longicollis) has been claimed as prey in one report, although no other confirmed cases of predation on turtles by this species are known. Notably, there are no reports of wedge-tailed eagles attacking pythons, despite several species being present in Australia, nor crocodiles; perhaps these are the only reptiles too formidable to be attacked, as both can attain extremely large sizes. Predation on frogs or other amphibians is almost unheard of for wedge-tailed eagles; however, based on toxicity reports in eagles, they may consume invasive cane toads (Rhinella marina) from time to time. Similarly rare in the species' diet are fish, although the common carp (Cyprinus carpio) and the western blue groper (Achoerodus gouldii) have been documented as prey. Occasionally, wedge-tailed eagles may even attack insects such as Psaltoda moerens cicadas and Heteronychus arator beetles.
A truly exceptional case comes from the Northern Territory, where a large percentage of 1826 prey items was made up of insects, including unidentified Orthoptera at about 10.8% of the diet and unidentified beetles at about 8.4%, as well as some numbers of ants. Why and how the eagles capture a profusion of insects locally is not clear; the insects may often come from the stomachs of other prey, be a byproduct of other captures, or be taken from the bodies of carcasses.

Interspecific predatory relationships

The wedge-tailed eagle occupies a fairly unique niche relative to other Aquila. While primarily continental in distribution, it occurs well apart from most related species: most Aquila are distributed in Eurasia or Africa and face considerable competition over resources, driving most species towards certain specializations in habitat or microhabitat, morphology, behaviour and often life history, including nesting grounds and often foods. The wedge-tailed eagle has the ability to exploit a more catholic range of both prey and habitat, since it coexists with relatively few competing species. The most considerable potential competition comes from the two other eagles regularly occurring in Australia, the little eagle and the white-bellied sea eagle. The little eagle has a few ecological similarities to the wedge-tailed eagle. It is also something of a habitat generalist, although it is scarcer than the wedge-tailed eagle in more arid vicinities, high-elevation areas and varied semi-open forest. Like the wedge-tailed eagle, the little eagle has in recent decades become a somewhat specialized predator of European rabbits. However, the size difference between the wedge-tailed and little eagles is extreme, with the former over four times heavier than the latter, and the little eagle, as expected, exploits a lower trophic level than its more powerful competitor. As in other areas where booted eagles and sea eagles have abutting ranges, wedge-tailed eagles sometimes compete with white-bellied sea eagles. One key difference from elsewhere where such competition occurs, as between the golden eagle and the white-tailed eagle in Eurasia or the bald eagle in North America, both of which are slightly heavier than the golden, is that the white-bellied sea eagle is slightly smaller than the wedge-tailed eagle, potentially giving the latter a more pronounced competitive edge. However, the white-bellied sea eagle clearly does not shy away from contentious border disputes with wedge-tailed eagles, and the two species can often be seen readily attacking each other, occasionally talon-grappling and sometimes cartwheeling in attacks on one another. However, the ecological effect of interspecific competition between the two species is not clear. Although the wedge-tailed eagle is considered the dominant species of the two, they clearly do not take the presence of white-bellied sea eagles lightly, and some authors feel they may avoid nesting near them. Clearly, there is ample partitioning between the wedge-tailed and white-bellied sea eagles: the latter is adapted to mostly open wetlands and coasts and, while also a dietary generalist, tends to derive most of its diet from fish, water birds and other wetland-dwelling prey, and seldom competes directly with wedge-tailed eagles for prey such as mammals.
Most other diurnal raptors residing in Australia are considerably smaller and can seldom be said to present great competition to wedge-tailed eagles, although some, such as swamp harriers (Circus approximans), black-breasted kites and grey goshawks (Accipiter novaehollandiae), are relatively large for their taxa and powerful predators in their own right. In one instance, a square-tailed kite (Lophoictinia isura) was observed engaging in an apparent territorial fight with a wedge-tailed eagle, including talon-grappling. When it comes to carrion, wedge-tailed eagles tend to dominate other predators, especially most birds, with most kites, other assorted raptors and some large passerine birds, mainly Corvus species and butcherbirds, coming to dead animals including roadkills. However, heavier terrestrial meat-eaters, namely red foxes, dingoes, monitor lizards and Tasmanian devils, can at times hold their own against wedge-tailed eagles, despite all of these species sometimes turning up as prey of the eagles as well. Sometimes the wedge-tailed eagle will readily rob various other raptors of their prey, including little eagles, white-bellied sea eagles and brown falcons (Falco berigora). Wedge-tailed eagles will opportunistically prey on other birds of prey, an aptitude they share with other large eagles in different parts of the world, such as golden eagles. Although such acts are relatively infrequent, it is clear that the wedge-tailed eagle is considered a primary threat by many raptors, based on witnessed attacks by eagles on them and on the mobbing behaviour of other raptors. Among the other birds of prey known to occasionally fall prey to these eagles are little eagles, collared sparrowhawks (Accipiter cirrocephalus), grey goshawks, brown goshawks (Accipiter fasciatus), Pacific bazas (Aviceda subcristata), black-breasted kites, peregrine falcons (Falco peregrinus), Australian hobbies (Falco longipennis), black falcons (Falco subniger), brown falcons and Nankeen kestrels (Falco cenchroides). Occasionally, owls are also included in the prey spectrum when an opportunity arises, including barn owls (Tyto javanica), southern boobooks (Ninox boobook) and even powerful owls (Ninox strenua). Wedge-tailed eagles are apex predators and have no well-documented predators of their own, although presumably they have some nest predators, likely including ravens and currawongs, especially when the eagles are displaced from their nests by human disturbance. Occasionally, these eagles may possibly risk injury or death in conflicts with other powerful predators and scavengers, such as dingoes, quolls, Tasmanian devils, goannas and snakes, but no such verified instances seem to be known in the literature, and man is considered the wedge-tailed eagle's only true threat. Occasionally they may be injured or even killed via intra- and interspecific territorial conflicts and mobbing by other birds of prey, especially stooping peregrine falcons, which have successfully knocked wedge-tailed eagles out of the sky, striking with a force known to kill both golden and bald eagles in other parts of the world. Due to the formidable aerial attack of the peregrine, it may be the only raptor besides the white-bellied sea eagle that wedge-tailed eagles may avoid nesting near. Most of the large falcons, including peregrine, brown and black falcons, and at times large owls, nest in unused or abandoned wedge-tailed eagle nests.

Breeding

The breeding season is from July to December through much of the range; in New Guinea it apparently runs from May on.
They have a distinct tendency to lay earlier in the more northerly parts of the range. For instance, in northeastern Australia laying has been recorded in January and February, and in Tasmania in September. In Western Australia, breeding depends on food, and during drought periods there may be no nesting for up to 4 years. Adult wedge-tailed eagles are usually solitary or occur in pairs, but immatures are more gregarious. 10–15 young wedge-tailed eagles may rest or soar together or even hunt together, and up to 40 have been recorded at once at a carcass. Mated adults perform mutual soaring, undulating dives, and tandem flights with rolling and foot-touching. The female may appear to ignore a displaying male or, more often, turn over and present her claws. Possibly as part of courtship, feedings between pair members have taken place away from the nest, and sharing of a cache of food may occur. Allopreening occurs occasionally between pairs but is seldom observed, although it has at times been considered a "regular" part of the courtship process. Contrary to historical accounts, wedge-tailed eagles seldom engage in an elaborate courtship display, generally trying instead to conserve energy, devoting it to the trying breeding season ahead, along with territorial exclusion of conspecifics and obtaining food. Mating tends to occur on a bare branch or dead tree in the nest area, and may continue into the nestling period. Contrary to old accounts, the species does not mate in flight. In the pre-laying phase, mating was recorded to be preceded or accompanied by loud, slow yelping, but in the nestling period, the pair alighted together, the male mounted without preliminaries, and a silent copulation lasted for one minute.

Territories and home ranges

Territories are established with aerial displays, which can include high circling by one or both of the pair, sometimes interspersed with flight rolls and talon-presenting. Most of the time, wedge-tailed eagles respect pair boundaries and can limit territorial behaviour to mild aerial flights, with intruders usually giving ground to the incumbents. Violence is usually avoided, but the most heated territorial disputes can sometimes escalate to deaths. Sometimes the displaying eagle may engage in a steep dive on part-closed wings followed by an upward swoop, which may later escalate into a spectacular sky-dance with undulations; they may also loop-the-loop. Cartwheeling is typically rare, but in one case three immatures mock-dived at each other, and two birds interlocked and cartwheeled several times before breaking away. No cartwheeling or talon-grappling has been reported between members of a mated pair, but it is occasionally reported as used against intruding eagles. Aerial displays may go on for a while, normally early in the breeding season, between 3 months and 3 weeks prior to egg laying. Territorial attacks by male wedge-tailed eagles may be directed against any encountered intruding eagles, whether male or female, while female eagles engage in fewer territorial attacks, and when they do, it is exclusively against other females. Territorial aggression can extend towards hang gliders and aircraft: the eagle advances noisily, bill open and talons extended, until flying just above and behind or slightly ahead of the pilot, then swoops repeatedly, at times making contact with the hang-glider. A core area of some radius around the nest is most fervently defended. Foraging ranges from the nest may extend up to about .
Foraging ranges within breeding home ranges may be around for males and for females in arid central Western Australia. Range sizes of pair members vary greatly based on topography, habitat and prey access. Several studies reported densities of 3–6 pairs per , others of 7–12 pairs per . When rabbits were present in plague-type numbers, pairs nested as close as apart, with 4 other pairs no more than from those two pairs. In semi-arid areas of New South Wales near Menindee, densities were found to be about a pair per , with 10–12 pairs in good years and 3 in drought years. Not far from there, in Mutawintji National Park, density was around a pair per . Much higher densities were noted in this semi-arid zone of western New South Wales, with a pair per , against around a pair per in other arid zones. In Western Australia, arid areas had a nearest-neighbour distance of while those nesting in mesic areas had a distance of . At Fowlers Gap, there were 9–10 pairs per . Near Canberra, around 37 pairs were reported in an area of , including some unusually as close as to paved roads and as close as to suburban spots. This contrasts strongly with 36 years prior, when few nests were near human-altered areas and the number of pairs in the same area was about 32. In the Fleurieu Peninsula in South Australia during the early to mid-2000s, there was a pair per , with active nest sites apart, while the average home range around the nest was roughly . Resurvey efforts a dozen years later in the Fleurieu Peninsula found a denser population, resulting in a home range estimated at , with some active nests as close as apart. In the Perth area, the mean home range was projected to be about . Meanwhile, in southern Victoria, the nearest-neighbour distance of breeding pairs was while mean territory size was calculated at .

Nests

Both sexes may participate in building the nest, but the female takes the greater share, often standing in the middle and building outwards. Often wedge-tailed eagles build alternative nests, up to 2 to 3 per territory, though when undisturbed they use the same general site repeatedly. In Tasmania, territories held a mean of 1.4 nests. The nest is usually either substantial or massive: a structure of sticks ranging across and deep when first built, but with repeated additions reaching up to across and nearly deep. The interior nest cup is commonly around across and about deep. Four studies found the diameter of nests to average from as little as to as much as , and the depth from as little as to as much as . Generally speaking, nests in woodland or forest-edge areas tend to be larger, while those in sparser, more arid areas are characteristically smaller, as the eagles have less access to nest-building materials. Good-sized nests can weigh well over . Nests are usually lined with green leaves and twigs, a common practice in accipitrids. Infrequently, they may use an old nest built by another accipitrid, namely whistling kites (Haliastur sphenurus) and white-bellied sea eagles, the former's nests apparently being added to in order to enlarge them. Ideally the nest is located at above the ground on a lateral branch or main fork of a lone or forest tree; in taller trees, nests can be as much as high, while at the opposite extreme they may be placed lower down, or even on rocks or the ground where trees are scarce. In a few studies from different areas of New South Wales, the mean nest height ranged from , and nests were often relatively close to human development. Two studies in southern Victoria found mean nest heights to be .
In the often particularly arid Western Australia, mean nest heights were reportedly lower, averaging . A detailed study in Western Australia found nest heights were higher in Mediterranean scrubland, at against in the arid zone, but nest height seemed to have no bearing on occupancy or success; territoriality kept the population regulated within the habitats. Occasionally they may nest in dwarf trees as low as . Favoured nesting trees include many Eucalyptus and Casuarina species, as well as Corymbia, Callitris and Syncarpia glomulifera, while in inland areas Acacia and Flindersia, as well as Hakea leucoptera and Grevillea striata, are used more often. The range of Eucalyptus species used by wedge-tailed eagles is extremely diverse, and ultimately the species seems to have no strong overall preferences regarding tree species, more importantly seeking a tree of ample height and considerable breadth. Furthermore, nest trees are often on slightly elevated ground above the mean ground level, presumably offering a more commanding view of the surrounding environment. Additionally, trees with fewer lower branches may be preferred. Nests are seldom in dead trees; usually this occurs only where leafy ones are absent. While Australian nests can be in quite varied surroundings, Tasmanian nests are almost exclusively within well-forested areas. Forest nest sites tend to have a sparse, open understory, and woodlands or nearby glades are often considered perhaps more attractive to the species. In desert-type areas, they may nest on a hill or a rise, sometimes also on cliff ledges or among rocks, and even on the ground on islands and in desert-like areas where trees are scarce, preferably in spots difficult for or inaccessible to humans. Additionally, they have been known to nest on power pylons and telegraph poles. Other, smaller animals, such as finches, pardalotes and even possums (which, more so than the small birds, are presumably vulnerable to the eagles if caught in the open), may nest among the sticks at the base of active wedge-tailed eagle nests, perhaps gaining some protection from the presence of the eagles. It is a not unknown phenomenon in many bird assemblages for small birds to gain incidental protection from strong raptors. Other species, such as Pacific black ducks (Anas superciliosa), falcons and owls, may also benefit by utilizing unused nests for their own breeding purposes, although only the falcons use them with relative regularity.

Development of young and parental behaviour

Clutch size is usually just one or two but is sometimes up to 4. About 80% of nests where eagles have managed to lay eggs contain two eggs. Mean clutch size is apparently somewhat higher in the western part of the range. In multi-egg clutches, the female lays the eggs some 3 days or so apart. The eggs are buff or white in colour, often appearing heavily blotched all over with purple-brown, red-brown or lavender, or more sparsely spotted with reddish brown. The amount of spotting is quite variable even among eggs within a single clutch, some being heavily marked, others hardly at all, and the markings are at times concentrated on the pointier end of the egg. When freshly laid, the eggs are glossy, but they become more matte and brittle with age. Eggs may range in length from , averaging in a sample of 54, by in width, averaging .
Each egg normally weighs about , the equivalent of about three chicken eggs or about 3% of the female eagle's body weight (10% when the clutch numbers 3), which is typical for an Aquila eagle but a small percentage relative to smaller raptors. The larger eagles of Tasmania reportedly lay larger eggs on average. Wedge-tailed eagles sometimes lay runt eggs in otherwise normal nests, a condition apparently unique among Australian raptors, and these reportedly never hatch. If a clutch is lost or stolen early in incubation, some pairs have been documented to replace it, being able to do so about a month later. The incubation stage lasts for 42–48 days. The female of the pair either primarily or entirely incubates on her own and, like many eagles, she is a tight sitter. However, the male will incubate at times as well, at least up to an hour at a time. In New South Wales, the male was found to incubate for 16–20% of daylight, during which the nest was unguarded for 3–13% of the day. In some cases, male incubation may vary from 1–6% of daylight to as much as 38% of daylight, with shifts in the extreme lasting up to 6 hours. The male primarily delivers prey to the nest during incubation (not prior), up to about the stage where the eaglet(s) can be left unattended. The chicks are covered in white down at first and are, expectedly, semi-altricial. At about 12 days or so, a slightly greyer down develops and this ultimately becomes the woolly undercoat for the contour feathers. Within a couple of days thereafter, the black quills of the primaries often start to emerge, and the chicks can start to stand and move around the nest. At 28 days, the eaglets increasingly show their upper wing coverts through the down. At 35 days of age, some darker feathers appear on areas such as the breast, belly, mantle, back and head; mostly these are evident as a few dark rufous feathers poking through the head down, while at this age they show a short buff-tipped tail. They are partially feathered up to 37 days and nearly completely feathered by 49 days. At around 37 days, they may attempt, without much success, to tear food from carcasses in the nest. From 50 days onwards, the eaglet(s) play a good deal, pouncing on sticks and other objects around the nest. Around this age, they are almost fully feathered except for the wings and tail, neither of which has reached its full length, and they may have a few wisps of down about the crown or neck. Weight increases from about at 15 days, with a notable increase in robustness, to at 29 days and to at 49 days; thereafter growth goes primarily into more rapid feathering while body-size growth slows considerably. Sibling aggression tends to begin at a very early stage of life and decrease after the first week. Unlike in related eagles, there is some evidence that higher parental attendance limits instances of aggression, whereas in other eagles aggression often occurs in the parent's presence. As in all eagles, the parents do not attempt to intercede when runting or aggression between siblings occurs. Siblicide occurs occasionally in this species and it is considered a “facultative cainist” rather than an obligate one, meaning siblicide occurs occasionally and as conditions dictate, as opposed to some eagles in which it occurs almost invariably. Favourable environmental conditions can largely reduce sibling aggression. In New South Wales, three of four successful pairs raised two fledglings, and no sign of rivalry or pecking behaviour was observed despite the size difference between siblings.
The female broods attentively at first, but brooding decreases after the second or third week and ceases almost entirely by 30 days, even at night. For 40 or more days, the female continues to assist the young with feeding, typically from the male's prey deliveries, though the female may resume hunting after nest attendance drops. Potential predators such as goannas are struck when found approaching the nest, although the eagles usually abandon the nest when a human approaches. Repeated intrusions and noisy disturbances may have a net negative effect, as on Tasmanian wedge-tailed eagles, where these factors often lead to nest failure. In one case in South Australia, the removal of a dead tree in the vicinity of a wedge-tailed eagle nest resulted in full abandonment of the nest by the parents. The female may also continue to bring green leaves to the nest at late stages, doing so more often in spells of wet weather. During times of plenty, caches can sometimes form around the nest, with much prey left partially or entirely uneaten. Upon leaving the nest at 11 to 12 weeks of age, the young eagles are not strong fliers for another 20 days or so, but can fly competently by about 90 days of age, though full feather development is not complete until 120 days. Fledging occurs at 67 to 95 days, typically at less than 90 days and averaging roughly 79 days. Dependence lasts up to 4 to 6 months after fledging, and juveniles that overstay are known to pose a rare but occasionally fatal danger to the subsequent chick hatched to their parents. During the later period of dependence, interactions are restricted to brief prey deliveries, and the parent eagles eventually stop feeding the young eagle(s), forcing them to forage elsewhere for food. A study of post-fledging dispersal found in one case that a young eagle covered only a range, with a maximum covered in a week of . Most recoveries in one banding study were distributed under from their original banding site, mostly as fledgling-age juveniles, but some meandered up to away. After dispersal, young eagles are floaters until their 4th or 5th year, typically avoiding the territories of adults and searching out feeding opportunities. Up to two-thirds of young wedge-tailed eagles may die some time between fledging and 3–5 years of age, but adults often have quite low mortality rates and can live the better part of a half century. First breeding is typically at 6 or 7 years old. Lifespans of wedge-tailed eagles in the wild are poorly known, with the maximum recorded in one banding study being merely 9 years, quite a paltry age compared to other large eagles, and it is quite conceivable that eagles who survive to maturity not infrequently live around twice that long or more. In captivity, the species has been known to live to around 40 years of age. Breeding success Only one young is typically produced from a clutch of two, but occasionally two fledglings may be reared. The breeding success rates of the species are variable. Across studies, at least 52 to 90% of breeding pairs managed to produce a fledgling, with further projected numbers of 0.2–0.5 fledglings per pair, 0.7–1.2 fledglings per clutch and 1.1–1.3 fledglings per brood. In southwestern Australia, from 0.7 to 1.2 young are fledged per clutch laid, or 0.19–0.46 young per pair per year. In south-central Queensland, fledgling productivity was 1.1 young per pair that laid eggs.
Northern New South Wales eagles produced 0.8 young per pair from 2005 to 2006, while 0.89 and 0.64 fledged young per pair per year were the rates in central and western New South Wales, respectively. In a further study in New South Wales, at Burrendong Dam from 1993 to 2003, 15 pairs produced an average of one fledgling per territory, but in 1998, due to drought conditions, the rate was only 0.4 chicks per territory. Within Kinchega National Park, however, the rate of 0.99 young per pair was fairly consistent regardless of climatic conditions. In the Australian Capital Territory, pairs were said to produce 1.1 fledglings per pair. In southeastern Australia, from 0.9 to 1.5 young per clutch are laid, with 0.6–1.0 young per pair per year. In Tasmania, from 0.64 to 0.8 young are fledged per clutch laid, or 1.07 per successful nest. In southern South Australia, 0.91 young were produced per pair, or 1.1 fledglings per successful nesting attempt. Subsequent research in South Australia found 38 successfully fledged young, with 10 pairs (26%) producing two fledglings; production was 1.1 per occupied territory and 1.3 per successful pair. In south-west Western Australia, 0.73 fledglings were produced per pair per year. Western Australian eagles produced 0.92 fledged young per clutch laid and 1.1 young per successful nest. During periods of drought in Western Australia, some wedge-tailed eagles may forgo breeding for up to four years. Higher annual rainfall in Western Australia, greater in mesic than in arid areas, made a big difference in pair productivity: only 12% of arid-zone pairs produced young, or 0.13 fledglings per pair, a very low productivity, while in the mesic zone 69% of pairs produced fledglings, or 0.77 fledglings per pair. Generally, wedge-tailed eagles can nest in a variety of habitats and climatic conditions but tend to be slightly less productive in more arid environments. Significant broad-scale control is thought unlikely to be harming the numbers of young produced, with pairs that have a macropod-based diet perhaps having a richer diet. Like most eagles, wedge-tailed eagles fit the mould of a K-selected breeder rather well, i.e. being large, producing fewer young and tending to live relatively long. Conservation status In the 1990s, it was estimated broadly that the global population was somewhere between 10,001 and 1,000,000 individuals. As of 2009, Birdlife International listed the total population as only 100,000 mature individuals, a possibly conservative figure drawn from admittedly poor supporting data. As of that analysis, Birdlife considers the overall population of wedge-tailed eagles to be “possibly increasing”. Generally, the wedge-tailed eagle appears to be quite stable in population. Although wedge-tailed eagles are often scarcer than their large distribution suggests, their total distribution covers more than 10.5 million square kilometres and the population is quite likely in the hundreds of thousands. Thinning of forest cover, mostly inadvertent provisioning of carrion food sources and, particularly, rabbit introductions may have aided the species, and it may actually be commoner now than before European colonization. Though protected, wedge-tailed eagles are sometimes shot, trapped or killed by poisoned carcasses set out by farmers, many of whom consider the species a serious sheep killer. Historically, the wedge-tailed eagle was subject to persecution levels to rival any other eagle in the world.
The heavy persecution began in the closing decades of the 19th century, due largely to the establishment of large-scale sheep farming in Australia. One Queensland station claimed to have poisoned 1060 eagles over 8 months in 1903. Laws passed from 1909 to 1925 made it mandatory for landowners and farmers to kill eagles as vermin, with enforcement determined by a given region's minister or vermin board, resulting in even more sweeping efforts to destroy the species. Steel-jawed rabbit traps were set around carcasses, and Heligoland traps could sometimes take several eagles at once, beyond sustained shooting and poisoning efforts. Between the years 1958 and 1967, 120,000 bounties were paid on killed wedge-tailed eagles in the states of Queensland and Western Australia alone, meaning an average of some 13,000 were killed each year. Even from 1967 to 1976, likely intentional human killings accounted for 54% of wedge-tailed eagle mortalities in Western Australia, with an estimated 30,000 killed throughout Australia in the year 1969. Strong legal protection began in Western Australia in the 1950s and spread elsewhere by the 1970s or later; the species is now protected throughout its range and subject to only limited persecution. Despite reduced persecution, 54% of eagles recovered in the 1980s had been killed by human persecution. Despite such stunningly high rates of persecution, the wedge-tailed eagle was remarkably resilient to the haphazard persecution inflicted by humans in a way that much other Australian wildlife, especially the regionally endemic mammals, and even other eagles elsewhere, often is not. Less intentionally, the species is also harmed via human disturbance through land development, particularly intensifying agriculture and modern settlement, which can in turn lead to clearing of mature trees, disturbances at the nest and declines of native prey species, all of which have a net negative effect on the wedge-tailed eagles. Eggshell thickness was not significantly decreased by the use of DDT, likely due to the largely mammal-based diet of the species, whereas raptors which consume birds or fish are disproportionately affected by DDT. On occasion, the species is still subject to illegal shootings and poisonings; however, persecution of the species has been significantly less prevalent in recent decades. Occasionally, though not commonly, they are killed by sodium fluoroacetate poisons long used to “control” Australian wildlife, but now generally directed at invasive species such as rabbits, feral pigs and foxes. A list of the main persistent threats to wedge-tailed eagles in the 21st century consists of: destruction of habitat, including logging and developments such as urbanization; wind farm collisions and the disturbance and destruction associated with their construction; increasing density of rural human populations; illegal persecution in sheep farming areas; drowning in open tanks in dry pastoral zones; roadkills (especially while foraging for roadkill carrion); collisions with fences, powerlines and airplanes; regular electrocutions; poisoning from rabbit baits and other baits; and exposure to lead and other bullet fragments, which may be responsible for some eagle debilitations and deaths. Within the Fleurieu Peninsula, some 1.74 eagles on average are claimed by wind farm turbine collisions. Conservation needs may differ in different habitats, i.e.
in more coastal temperate areas, the eagle is reported to have difficulty nesting when hillsides have been cleared of trees, whereas inland they have less need of trees in elevated locations because they are more often assisted by thermals. However, they cannot generally persist where leafy trees are clear-cut. Surprising resilience even to drought was found recently in the wedge-tailed eagles of the Australian Capital Territory, where pair occupancy remained consistent through drought for wedge-tailed eagles but not for little eagles; this may have more to do with the wedge-tail's more successful uncoupling from a dependence on declining rabbits as prey than anything inherent to the little eagle. Of 84 eagle deaths or debilitating injuries, 52% were attributable to collisions or electrocutions, 15.5% to persecution, 11% to natural causes and 15% to unknown causes. Status in Tasmania The Tasmanian race of the wedge-tailed eagle, A. a. fleayi, is quite restricted in range and habitat, with estimated numbers having gone from 140 pairs in the 1980s down to only 60–80 by the mid-1990s. With the island's population numbering quite low and likely continuing to decline, as evidenced by the slow replacement of lost pair members, the subspecies is listed as state-endangered. Furthermore, surveys contrasting 1977–1981 with 1998–2001 data found a decline of around 28% in the island's reported number of eagles. Generally, Tasmanian wedge-tailed eagles are even less tolerant of human alterations and disturbances near the nest site than mainland wedge-tailed eagles and have more specific habitat requirements. Historically, the same hunting organization in Tasmania that played a large role in the extinction of the thylacine (Thylacinus cynocephalus) also intentionally tried to hunt the Tasmanian wedge-tailed eagle into extinction, having publicly and erroneously claimed that eagles were non-native to Tasmania; however, hunting is unlikely to continue on a large scale in the state. Where habitat clearance and degradation are extensive in Tasmania, the native prey populations are insufficient to support eagles. Furthermore, the clearing or logging of trees is especially critical in Tasmania, where the eagle is by and large a forest-dependent breeder. Studies indicate that Tasmanian eagles mostly nest in emergent trees in old-growth native forest, exposed to early morning sun and sheltered from prevailing strong winds and cold spring winds, given the more temperate climate there relative to most points in mainland Australia. The subspecies requires forest areas greater than in which to breed and is very prone to deserting its nest when disturbed. Modelling of the carrying capacity of Tasmanian forest under current operations predicts a change likely to drive the population down. Wind farms in Tasmania are also an occasional threat; although not thought to be a significant source of mortality, wedge-tailed eagles, especially young ones, are less successful at avoiding invariably fatal collisions with turbines than Tasmanian white-bellied sea eagles. Furthermore, of 109 eagle carcasses recovered in Tasmania, all had trace levels of lead in their livers or femurs, with at least part of the exposure likely from lead ammunition. In addition, like all eagles, Tasmanian wedge-tailed eagles are vulnerable to electrocutions; to collisions with vehicles, overhead wires and fences; and to poisonings, largely via the illegal poisoning by poachers of Tasmanian devils and forest ravens (Corvus tasmanicus).
Efforts are underway to ameliorate the harm being done to Tasmanian wedge-tailed eagles, especially via forestry operations. In protected areas, protocols are in place to protect Tasmanian eagle nests by creating an obligatory nest reserve of at least 10 ha, and forestry operations are restricted during the breeding season to outside a buffer zone of , extending further to if the proposed work is in the line-of-sight of the nesting eagles. About 20% of known pairs are outside protected areas and on private land, so they are largely outside the strict legal protection the subspecies has on governmental forest land. Furthermore, researchers are instituting rules to minimize disturbance, limiting breeding surveys to distant observations of whitewash and flattened treetops as proof of nesting, with all detailed observations to be obtained after the cessation of breeding activities. Iconography The bird is an emblem of the Northern Territory. The Parks and Wildlife Service of the Northern Territory uses the wedge-tailed eagle, superimposed over a map of the Northern Territory, as its emblem. The New South Wales Police Force emblem contains a wedge-tailed eagle in flight, as does that of the Northern Territory Correctional Services. La Trobe University in Melbourne also uses the wedge-tailed eagle in its corporate logo and coat of arms. The wedge-tailed eagle is also a symbol of the Australian Defence Force, featuring prominently on the ADF Flag, and the Royal Australian Air Force and Australian Air Force Cadets both use a wedge-tailed eagle on their badges. The Royal Australian Air Force has named its airborne early warning and control aircraft after the bird, the Boeing E-7 Wedgetail. Early in 1967, the Australian Army's 2nd Cavalry Regiment received its new badge: a swooping wedge-tailed eagle carrying in its talons a lance bearing the motto "Courage". The regiment's mascot is a wedge-tailed eagle named "Courage"; since the regiment's formation, there have been two, Courage I and Courage II. In 1997, while on flight training with his handlers, Corporal Courage II refused to cooperate and flew away, and was not found for two days despite an extensive search. He was charged with being AWOL and reduced to the rank of trooper. He was promoted back to corporal in 1998. The West Coast Eagles, an AFL football club from Western Australia, use a stylised wedge-tailed eagle as their club emblem. In recent years, they have had a real-life wedge-tailed eagle named "Auzzie" perform tricks before matches.
Biology and health sciences
Accipitrimorphae
Animals
200006
https://en.wikipedia.org/wiki/Epoch%20%28astronomy%29
Epoch (astronomy)
In astronomy, an epoch or reference epoch is a moment in time used as a reference point for some time-varying astronomical quantity. It is useful for the celestial coordinates or orbital elements of a celestial body, as they are subject to perturbations and vary with time. These time-varying astronomical quantities might include, for example, the mean longitude or mean anomaly of a body, the node of its orbit relative to a reference plane, the direction of the apogee or aphelion of its orbit, or the size of the major axis of its orbit. The main use of astronomical quantities specified in this way is to calculate other relevant parameters of motion, in order to predict future positions and velocities. The applied tools of the disciplines of celestial mechanics or its subfield orbital mechanics (for predicting orbital paths and positions for bodies in motion under the gravitational effects of other bodies) can be used to generate an ephemeris, a table of values giving the positions and velocities of astronomical objects in the sky at a given time or times. Astronomical quantities can be specified in any of several ways, for example, as a polynomial function of the time interval, with an epoch as a temporal point of origin (this is a common current way of using an epoch). Alternatively, the time-varying astronomical quantity can be expressed as a constant, equal to the measure that it had at the epoch, leaving its variation over time to be specified in some other way—for example, by a table, as was common during the 17th and 18th centuries. The word epoch was often used in a different way in older astronomical literature, e.g. during the 18th century, in connection with astronomical tables. At that time, it was customary to denote as "epochs", not the standard date and time of origin for time-varying astronomical quantities, but rather the values at that date and time of those time-varying quantities themselves. In accordance with that alternative historical usage, an expression such as "correcting the epochs" would refer to the adjustment, usually by a small amount, of the values of the tabulated astronomical quantities applicable to a fixed standard date and time of reference (and not, as might be expected from current usage, to a change from one date and time of reference to a different date and time). Epoch versus equinox Astronomical data are often specified not only in their relation to an epoch or date of reference but also in their relations to other conditions of reference, such as coordinate systems specified by "equinox", or "equinox and equator", or "equinox and ecliptic" – when these are needed for fully specifying astronomical data of the considered type. Date-references for coordinate systems When the data are dependent for their values on a particular coordinate system, the date of that coordinate system needs to be specified directly or indirectly. Celestial coordinate systems most commonly used in astronomy are equatorial coordinates and ecliptic coordinates. These are defined relative to the (moving) vernal equinox position, which itself is determined by the orientations of the Earth's rotation axis and orbit around the Sun. Their orientations vary (though slowly, e.g. due to precession), and there is an infinity of such coordinate systems possible. Thus the coordinate systems most used in astronomy need their own date-reference because the coordinate systems of that type are themselves in motion, e.g. 
by the precession of the equinoxes, nowadays often resolved into precessional components, separate precessions of the equator and of the ecliptic. The epoch of the coordinate system need not be the same, and often in practice is not the same, as the epoch for the data themselves. The difference between reference to an epoch alone, and a reference to a certain equinox with equator or ecliptic, is therefore that the reference to the epoch contributes to specifying the date of the values of astronomical variables themselves; while the reference to an equinox along with equator/ecliptic, of a certain date, addresses the identification of, or changes in, the coordinate system in terms of which those astronomical variables are expressed. (Sometimes the word 'equinox' may be used alone, e.g. where it is obvious from the context to users of the data in which form the considered astronomical variables are expressed, in equatorial form or ecliptic form.) The equinox with equator/ecliptic of a given date defines which coordinate system is used. Most standard coordinates in use today refer to 2000 TT (i.e. to 12h (noon) on the Terrestrial Time scale on January 1, 2000, see below), which occurred about 64 seconds sooner than noon UT1 on the same date (see ΔT). Before about 1984, coordinate systems dated to 1950 or 1900 were commonly used. There is a special meaning of the expression "equinox (and ecliptic/equator) of date". When coordinates are expressed as polynomials in time relative to a reference frame defined in this way, that means the values obtained for the coordinates in respect of any interval t after the stated epoch, are in terms of the coordinate system of the same date as the obtained values themselves, i.e. the date of the coordinate system is equal to (epoch + t). It can be seen that the date of the coordinate system need not be the same as the epoch of the astronomical quantities themselves. But in that case (apart from the "equinox of date" case described above), two dates will be associated with the data: one date is the epoch for the time-dependent expressions giving the values, and the other date is that of the coordinate system in which the values are expressed. For example, orbital elements, especially osculating elements for minor planets, are routinely given with reference to two dates: first, relative to a recent epoch for all of the elements; but some of the data are dependent on a chosen coordinate system, and then it is usual to specify the coordinate system of a standard epoch which often is not the same as the epoch of the data. An example is as follows: For minor planet (5145) Pholus, orbital elements have been given including the following data:

  Epoch 2010 Jan. 4.0 TT = JDT 2455200.5
  M  72.00071              (2000.0)
  n   0.01076162    Peri.  354.75938
  a  20.3181594     Node   119.42656
  e   0.5715321     Incl.   24.66109

where the epoch is expressed in terms of Terrestrial Time, with an equivalent Julian date. Four of the elements are independent of any particular coordinate system: M is mean anomaly (deg), n: mean daily motion (deg/d), a: size of semi-major axis (AU), e: eccentricity (dimensionless). But the argument of perihelion, longitude of the ascending node and the inclination are all coordinate-dependent, and are specified relative to the reference frame of the equinox and ecliptic of another date "2000.0", otherwise known as J2000, i.e. January 1.5, 2000 (12h on January 1) or JD 2451545.0.
Epochs and periods of validity In the particular set of coordinates given as an example above, some of the elements have been omitted as unknown or undetermined; for example, the element n allows an approximate time-dependence of the element M to be calculated, but the other elements and n itself are treated as constant, which represents a temporary approximation (see Osculating elements). Thus a particular coordinate system (equinox and equator/ecliptic of a particular date, such as J2000.0) could be used forever, but a set of osculating elements for a particular epoch may only be (approximately) valid for a rather limited time, because osculating elements such as those exampled above do not show the effect of future perturbations which will change the values of the elements. Nevertheless, the period of validity is a different matter in principle and not the result of the use of an epoch to express the data. In other cases, e.g. the case of a complete analytical theory of the motion of some astronomical body, all of the elements will usually be given in the form of polynomials in the interval of time from the epoch, and they will also be accompanied by trigonometrical terms of periodical perturbations specified appropriately. In that case, their period of validity may stretch over several centuries or even millennia on either side of the stated epoch. Some data and some epochs have a long period of use for other reasons. For example, the boundaries of the IAU constellations are specified relative to an equinox from near the beginning of the year 1875. This is a matter of convention, but the convention is defined in terms of the equator and ecliptic as they were in 1875. To find out in which constellation a particular comet stands today, the current position of that comet must be expressed in the coordinate system of 1875 (equinox/equator of 1875). Thus that coordinate system can still be used today, even though most comet predictions made originally for 1875 (epoch = 1875) would no longer be useful today, because of the lack of information about their time-dependence and perturbations. Changing the standard equinox and epoch To calculate the visibility of a celestial object for an observer at a specific time and place on the Earth, the coordinates of the object are needed relative to a coordinate system of the current date. If coordinates relative to some other date are used, then that will cause errors in the results. The magnitude of those errors increases with the time difference between the date and time of observation and the date of the coordinate system used, because of the precession of the equinoxes. If the time difference is small, then fairly easy and small corrections for the precession may well suffice. If the time difference gets large, then fuller and more accurate corrections must be applied. For this reason, a star position read from a star atlas or catalog based on a sufficiently old equinox and equator cannot be used without corrections if reasonable accuracy is required. Additionally, stars move relative to each other through space. Apparent motion across the sky relative to other stars is called proper motion. Most stars have very small proper motions, but a few have proper motions that accumulate to noticeable distances after a few tens of years. So, some stellar positions read from a star atlas or catalog for a sufficiently old epoch require proper motion corrections as well, for reasonable accuracy.
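As an illustration of the proper-motion correction just described, the following Python sketch shifts a catalogued position by its proper motion over an interval of Julian years. It is a minimal sketch only: the function name and the sample star values are hypothetical, precession (the change of equinox) is not handled, and the simple treatment of the cos(dec) factor breaks down near the celestial poles.

    import math

    def apply_proper_motion(ra_deg, dec_deg, pm_ra_star_mas, pm_dec_mas, years):
        # Shift a position (degrees) by proper motion (mas/yr) over `years`.
        # pm_ra_star_mas is assumed to include the cos(dec) factor, as in
        # most modern catalogs; divide it back out to get the shift in RA.
        mas_to_deg = 1.0 / 3.6e6
        ra = ra_deg + pm_ra_star_mas * years * mas_to_deg / math.cos(math.radians(dec_deg))
        dec = dec_deg + pm_dec_mas * years * mas_to_deg
        return ra, dec

    # A hypothetical star catalogued at epoch J2000.0, observed 25 Julian years later:
    ra, dec = apply_proper_motion(101.287, -16.716, -546.0, -1223.1, 25.0)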
Due to precession and proper motion, star data become less useful as the age of the observations and their epoch, and the equinox and equator to which they are referred, get older. After a while, it is easier or better to switch to newer data, generally referred to a newer epoch and equinox/equator, than to keep applying corrections to the older data. Specifying an epoch or equinox Epochs and equinoxes are moments in time, so they can be specified in the same way as moments that indicate things other than epochs and equinoxes. The following standard ways of specifying epochs and equinoxes seem the most popular: Julian days, e.g., JD 2433282.4235 for January 0.9235, 1950 TT Besselian years (see below), e.g., 1950.0 or B1950.0 for January 0.9235, 1950 TT Julian years, e.g., J2000.0 for January 1.5, 2000 TT All three of these are expressed in TT = Terrestrial Time. Besselian years, used mostly for star positions, can be encountered in older catalogs but are now becoming obsolete. The Hipparcos catalog summary, for example, defines the "catalog epoch" as "J1991.25" (8.75 Julian years before January 1.5, 2000 TT, i.e., April 2.5625, 1991 TT). Besselian years A Besselian year is named after the German mathematician and astronomer Friedrich Bessel (1784–1846). Meeus defines the beginning of a Besselian year to be the moment at which the mean longitude of the Sun, including the effect of aberration and measured from the mean equinox of the date, is exactly 280 degrees. This moment falls near the beginning of the corresponding Gregorian year. The definition depended on a particular theory of the orbit of the Earth around the Sun, that of Newcomb (1895), which is now obsolete; for that reason among others, the use of Besselian years has also become or is becoming obsolete. Lieske says that a "Besselian epoch" can be calculated from the Julian date according to B = 1900.0 + (Julian date − 2415020.31352) / 365.242198781 Lieske's definition is not exactly consistent with the earlier definition in terms of the mean longitude of the Sun. When using Besselian years, specify which definition is being used. To distinguish between calendar years and Besselian years, it became customary to add ".0" to the Besselian years. Since the switch to Julian years in the mid-1980s, it has become customary to prefix "B" to Besselian years. So, "1950" is the calendar year 1950, and "1950.0" = "B1950.0" is the beginning of Besselian year 1950. The IAU constellation boundaries are defined in the equatorial coordinate system relative to the equinox of B1875.0. The Henry Draper Catalog uses the equinox B1900.0. The classical star atlas Tabulae Caelestes used B1925.0 as its equinox. According to Meeus, and also according to the formula given above, B1900.0 = JDE 2415020.3135 = 1900 January 0.8135 TT B1950.0 = JDE 2433282.4235 = 1950 January 0.9235 TT Julian years and J2000 A Julian year is an interval with the length of a mean year in the Julian calendar, i.e. 365.25 days. This interval measure does not itself define any epoch: the Gregorian calendar is in general use for dating. But standard conventional epochs which are not Besselian epochs have often been designated nowadays with a prefix "J", and the calendar date to which they refer is widely known, although not always the same date in the year: thus "J2000" refers to the instant of 12 noon (midday) on January 1, 2000, and J1900 refers to the instant of 12 noon on January 0, 1900, equal to December 31, 1899.
It is also usual now to specify on what time scale the time of day is expressed in that epoch-designation, e.g. often Terrestrial Time. In addition, an epoch optionally prefixed by "J" and designated as a year with decimals (2000 + x), where x is either positive or negative and is quoted to 1 or 2 decimal places, has come to mean a date that is an interval of Julian years of 365.25 days away from the epoch J2000 = JD 2451545.0 (TT), still corresponding (in spite of the use of the prefix "J" or word "Julian") to the Gregorian calendar date of January 1, 2000, at 12h TT (about 64 seconds before noon UTC on the same calendar day).
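The relations above lend themselves to direct computation. The following Python sketch implements Lieske's quoted formula for Besselian epochs together with the corresponding Julian-epoch relation J = 2000 + (JD − 2451545.0)/365.25, which restates the definition of J-epochs just given; the function names are arbitrary, and the printed checks use dates quoted in this article.

    JD_B1900 = 2415020.31352       # Julian date of B1900.0 in Lieske's formula
    TROPICAL_YEAR = 365.242198781  # days, as in Lieske's formula
    JD_J2000 = 2451545.0           # Julian date of the epoch J2000.0 (TT)
    JULIAN_YEAR = 365.25           # days

    def besselian_epoch(jd):
        # Lieske's formula, quoted above.
        return 1900.0 + (jd - JD_B1900) / TROPICAL_YEAR

    def julian_epoch(jd):
        # Julian epochs count Julian years of 365.25 days from J2000.0.
        return 2000.0 + (jd - JD_J2000) / JULIAN_YEAR

    print(besselian_epoch(2433282.4235))  # ~1950.0, i.e. B1950.0
    print(julian_epoch(2451545.0))        # 2000.0, i.e. J2000.0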
Physical sciences
Celestial sphere: General
Astronomy
200011
https://en.wikipedia.org/wiki/Red%20supergiant
Red supergiant
Red supergiants (RSGs) are stars with a supergiant luminosity class (Yerkes class I) and a stellar classification K or M. They are the largest stars in the universe in terms of volume, although they are not the most massive or luminous. Betelgeuse and Antares A are the brightest and best known red supergiants (RSGs), indeed the only first magnitude red supergiant stars. Classification Stars are classified as supergiants on the basis of their spectral luminosity class. This system uses certain diagnostic spectral lines to estimate the surface gravity of a star, hence determining its size relative to its mass. Larger stars are more luminous at a given temperature and can now be grouped into bands of differing luminosity. The luminosity differences between stars are most apparent at low temperatures, where giant stars are much brighter than main-sequence stars. Supergiants have the lowest surface gravities and hence are the largest and brightest at a particular temperature. The Yerkes or Morgan-Keenan (MK) classification system is almost universal. It groups stars into five main luminosity groups designated by roman numerals: I supergiant; II bright giant; III giant; IV subgiant; V dwarf (main sequence). Specific to supergiants, the luminosity class is further divided into normal supergiants of class Ib and brightest supergiants of class Ia. The intermediate class Iab is also used. Exceptionally bright, low surface gravity, stars with strong indications of mass loss may be designated by luminosity class 0 (zero) although this is rarely seen. More often the designation Ia-0 will be used, and more commonly still Ia+. These hypergiant spectral classifications are very rarely applied to red supergiants, although the term red hypergiant is sometimes used for the most extended and unstable red supergiants like VY Canis Majoris and NML Cygni. The "red" part of "red supergiant" refers to the cool temperature. Red supergiants are the coolest supergiants, M-type, and at least some K-type stars although there is no precise cutoff. K-type supergiants are uncommon compared to M-type because they are a short-lived transition stage and somewhat unstable. The K-type stars, especially early or hotter K types, are sometimes described as orange supergiants (e.g. Zeta Cephei), or even as yellow (e.g. yellow hypergiant HR 5171 Aa). Properties Red supergiants are cool and large. They have spectral types of K and M, hence surface temperatures below 4,100 K. They are typically several hundred to over a thousand times the radius of the Sun, although size is not the primary factor in a star being designated as a supergiant. A bright cool giant star can easily be larger than a hotter supergiant. For example, Alpha Herculis is classified as a giant star with a radius of while Epsilon Pegasi is a K2 supergiant of only . Although red supergiants are much cooler than the Sun, they are so much larger that they are highly luminous, typically . There is a theoretical upper limit to the radius of a red supergiant, at around , at the Hayashi limit; stars above this radius would be too unstable and simply do not form. Red supergiants have masses between about and 30 or . Main-sequence stars more massive than about do not expand and cool to become red supergiants. Red supergiants at the upper end of the possible mass and luminosity range are the largest known. Their low surface gravities and high luminosities cause extreme mass loss, millions of times higher than the Sun's, producing observable nebulae surrounding the star.
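The combination of low temperature and high luminosity follows directly from the Stefan–Boltzmann law, L ∝ R²T⁴. A minimal Python sketch, using illustrative round numbers rather than the measured parameters of any particular star:

    T_SUN = 5772.0  # K, nominal solar effective temperature

    def luminosity_solar_units(radius_rsun, temp_k):
        # Stefan-Boltzmann law in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4
        return radius_rsun ** 2 * (temp_k / T_SUN) ** 4

    # A hypothetical red supergiant of 900 solar radii at 3,600 K:
    print(luminosity_solar_units(900.0, 3600.0))  # ~1.2e5 Lsun, despite T < Tsun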
By the end of their lives red supergiants may have lost a substantial fraction of their initial mass. The more massive supergiants lose mass much more rapidly and all red supergiants appear to reach a similar mass of the order of by the time their cores collapse. The exact value depends on the initial chemical makeup of the star and its rotation rate. Most red supergiants show some degree of visual variability, but only rarely with a well-defined period or amplitude. Therefore, they are usually classified as irregular or semiregular variables. They even have their own sub-classes, SRC and LC for slow semi-regular and slow irregular supergiant variables respectively. Variations are typically slow and of small amplitude, but amplitudes up to four magnitudes are known. Statistical analysis of many known variable red supergiants shows a number of likely causes for variation: just a few stars show large amplitudes and strong noise indicating variability at many frequencies, thought to indicate powerful stellar winds that occur towards the end of the life of a red supergiant; more common are simultaneous radial mode variations over a few hundred days and probably non-radial mode variations over a few thousand days; only a few stars appear to be truly irregular, with small amplitudes, likely due to photospheric granulation. Red supergiant photospheres contain a relatively small number of very large convection cells compared to stars like the Sun. This causes variations in surface brightness that can lead to visible brightness variations as the star rotates. The spectra of red supergiants are similar to other cool stars, dominated by a forest of absorption lines of metals and molecular bands. Some of these features are used to determine the luminosity class, for example certain near-infrared cyanogen band strengths and the Ca II triplet. Maser emission is common from the circumstellar material around red supergiants. Most commonly this arises from H2O and SiO, but hydroxyl (OH) emission also occurs from narrow regions. In addition to high resolution mapping of the circumstellar material around red supergiants, VLBI or VLBA observations of masers can be used to derive accurate parallaxes and distances to their sources. Currently this has been applied mainly to individual objects, but it may become useful for analysis of galactic structure and discovery of otherwise obscured red supergiant stars. Surface abundances of red supergiants are dominated by hydrogen even though hydrogen at the core has been completely consumed. In the latest stages of mass loss, before a star explodes, surface helium may become enriched to levels comparable with hydrogen. In theoretical extreme mass loss models, sufficient hydrogen may be lost that helium becomes the most abundant element at the surface. When pre-red supergiant stars leave the main sequence, oxygen is more abundant than carbon at the surface, and nitrogen is less abundant than either, reflecting abundances from the formation of the star. Carbon and oxygen are quickly depleted and nitrogen enhanced as a result of the dredge-up of CNO-processed material from the fusion layers. Red supergiants are observed to rotate slowly or very slowly. Models indicate that even rapidly rotating main-sequence stars should be braked by their mass loss so that red supergiants hardly rotate at all. Those red supergiants such as Betelgeuse that do have modest rates of rotation may have acquired it after reaching the red supergiant stage, perhaps through binary interaction. 
The cores of red supergiants are still rotating and the differential rotation rate can be very large. Definition Supergiant luminosity classes are easy to determine and apply to large numbers of stars, but they group several very different types of stars into a single category. An evolutionary definition restricts the term supergiant to those massive stars which start core helium fusion without developing a degenerate helium core and without undergoing a helium flash. They will universally go on to burn heavier elements and undergo core-collapse resulting in a supernova. Less massive stars may develop a supergiant spectral luminosity class at relatively low luminosity, around when they are on the asymptotic giant branch (AGB) undergoing helium shell burning. Researchers now prefer to categorize these as AGB stars distinct from supergiants because they are less massive, have different chemical compositions at the surface, undergo different types of pulsation and variability, and will evolve differently, usually producing a planetary nebula and white dwarf. Most AGB stars will not become supernovae although there is interest in a class of super-AGB stars, those almost massive enough to undergo full carbon fusion, which may produce peculiar supernovae although without ever developing an iron core. One notable group of low-mass, high-luminosity stars are the RV Tauri variables, AGB or post-AGB stars lying on the instability strip and showing distinctive semi-regular variations. Evolution Red supergiants develop from main-sequence stars with masses between about and 30 or . Higher-mass stars never cool sufficiently to become red supergiants. Lower-mass stars develop a degenerate helium core during a red giant phase, undergo a helium flash before fusing helium on the horizontal branch, evolve along the AGB while burning helium in a shell around a degenerate carbon-oxygen core, then rapidly lose their outer layers to become a white dwarf with a planetary nebula. AGB stars may develop spectra with a supergiant luminosity class as they expand to extreme dimensions relative to their small mass, and they may reach luminosities tens of thousands of times the Sun's. Intermediate "super-AGB" stars, around , can undergo carbon fusion and may produce an electron capture supernova through the collapse of an oxygen-neon core. Main-sequence stars, burning hydrogen in their cores, with masses between will have temperatures between about 25,000 K and 32,000 K and spectral types of early B, possibly very late O. They are already very luminous stars of due to rapid CNO cycle fusion of hydrogen and they have fully convective cores. In contrast to the Sun, the outer layers of these hot main-sequence stars are not convective. These pre-red supergiant main-sequence stars exhaust the hydrogen in their cores after 5–20 million years. They then start to burn a shell of hydrogen around the now-predominantly helium core, and this causes them to expand and cool into supergiants. Their luminosity increases by a factor of about three. The surface abundance of helium is now up to 40% but there is little enrichment of heavier elements. The supergiants continue to cool and most will rapidly pass through the Cepheid instability strip, although the most massive will spend a brief period as yellow hypergiants. They will reach late K or M class and become a red supergiant.
Helium fusion in the core begins smoothly either while the star is expanding or once it is already a red supergiant, but this produces little immediate change at the surface. Red supergiants develop deep convection zones reaching from the surface over halfway to the core and these cause strong enrichment of nitrogen at the surface, with some enrichment of heavier elements. Some red supergiants undergo blue loops where they temporarily increase in temperature before returning to the red supergiant state. This depends on the mass, rate of rotation, and chemical makeup of the star. While many red supergiants will not experience a blue loop, some can have several. Temperatures can reach 10,000 K at the peak of the blue loop. The exact reasons for blue loops vary in different stars, but they are always related to the helium core increasing as a proportion of the mass of the star and forcing higher mass-loss rates from the outer layers. All red supergiants will exhaust the helium in their cores within one or two million years and then start to burn carbon. This continues with fusion of heavier elements until an iron core builds up, which then inevitably collapses to produce a supernova. The time from the onset of carbon fusion until the core collapse is no more than a few thousand years. In most cases, core-collapse occurs while the star is still a red supergiant, the large remaining hydrogen-rich atmosphere is ejected, and this produces a type II supernova spectrum. The opacity of this ejected hydrogen decreases as it cools, and this causes an extended delay in the drop in brightness after the initial supernova peak, the characteristic behaviour of a type II-P supernova. The most luminous red supergiants, at near solar metallicity, are expected to lose most of their outer layers before their cores collapse, hence they evolve back to yellow hypergiants and luminous blue variables. Such stars can explode as type II-L supernovae, still with hydrogen in their spectra but without sufficient hydrogen to cause an extended brightness plateau in their light curves. Stars with even less hydrogen may produce the uncommon type IIb supernova, where so little hydrogen remains that the hydrogen lines in the initial type II spectrum fade to the appearance of a type Ib supernova. The observed progenitors of type II-P supernovae all have temperatures between 3,500 K and 4,400 K and luminosities between and . This matches the expected parameters of lower mass red supergiants. A small number of progenitors of type II-L and type IIb supernovae have been observed, all having luminosities around and somewhat higher temperatures up to 6,000 K. These are a good match for slightly higher mass red supergiants with high mass-loss rates. There are no known supernova progenitors corresponding to the most luminous red supergiants, and it is expected that these evolve to Wolf–Rayet stars before exploding. Clusters Red supergiants are necessarily no more than about 25 million years old and such massive stars are expected to form only in relatively large clusters of stars, so they are expected to be found mostly near prominent clusters. However they are fairly short-lived compared to other phases in the life of a star and only form from relatively uncommon massive stars, so there will generally only be small numbers of red supergiants in each cluster at any one time. The massive Hodge 301 cluster in the Tarantula Nebula contains three.
Until the 21st century the largest number of red supergiants known in a single cluster was five, in NGC 7419. Most red supergiants are found singly, for example Betelgeuse in the Orion OB1 association and Antares in the Scorpius–Centaurus association. Since 2006, a series of massive clusters have been identified near the base of the Crux-Scutum Arm of the galaxy, each containing multiple red supergiants. RSGC1 contains at least 12 red supergiants, RSGC2 (also known as Stephenson 2) contains at least 26, RSGC3 contains at least 8, and RSGC4 (also known as Alicante 8) also contains at least 8. A total of 80 confirmed red supergiants have been identified within a small area of the sky in the direction of these clusters. These four clusters appear to be part of a massive burst of star formation 10–20 million years ago at the near end of the bar at the centre of the galaxy. Similar massive clusters have been found near the far end of the galactic bar, but not such large numbers of red supergiants. Examples Red supergiants are rare stars, but they are visible at great distance and are often variable, so there are a number of well-known naked-eye examples: Antares A, Betelgeuse, Epsilon Pegasi, Zeta Cephei, Lambda Velorum, Eta Persei, 31 and 32 Cygni, Psi1 Aurigae and 119 Tauri. Mira was historically thought to be a red supergiant star, but is now widely accepted to be an asymptotic giant branch star. Some red supergiants are larger and more luminous, with radii exceeding a thousand times that of the Sun. These are hence also referred to as red hypergiants: Mu Cephei, VV Cephei A, NML Cygni, S Persei, UY Scuti, VY Canis Majoris, Westerlund 1 W26, WOH G64 (which transitioned into a yellow hypergiant in 2014), KY Cygni and BI Cygni. A survey expected to capture virtually all Magellanic Cloud red supergiants detected around a dozen M-class stars of Mv −7 and brighter, around a quarter of a million times more luminous than the Sun, and from about 1,000 times the radius of the Sun upwards.
Physical sciences
Stellar astronomy
Astronomy
200091
https://en.wikipedia.org/wiki/Quadratic%20residue
Quadratic residue
In number theory, an integer q is a quadratic residue modulo n if it is congruent to a perfect square modulo n; that is, if there exists an integer x such that x² ≡ q (mod n). Otherwise, q is a quadratic nonresidue modulo n. Quadratic residues are used in applications ranging from acoustical engineering to cryptography and the factoring of large numbers. History, conventions, and elementary facts Fermat, Euler, Lagrange, Legendre, and other number theorists of the 17th and 18th centuries established theorems and formed conjectures about quadratic residues, but the first systematic treatment is § IV of Gauss's Disquisitiones Arithmeticae (1801). Article 95 introduces the terminology "quadratic residue" and "quadratic nonresidue", and says that if the context makes it clear, the adjective "quadratic" may be dropped. For a given n, a list of the quadratic residues modulo n may be obtained by simply squaring all the numbers 0, 1, ..., . Since a ≡ b (mod n) implies a² ≡ b² (mod n), any other quadratic residue is congruent (mod n) to some member of the obtained list. But the obtained list is not composed of mutually incongruent quadratic residues (mod n) only. Since a² ≡ (n − a)² (mod n), the list obtained by squaring all numbers in the list 1, 2, ..., (or in the list 0, 1, ..., n) is symmetric (mod n) around its midpoint, hence it is actually only needed to square all the numbers in the list 0, 1, ..., n/2. The list so obtained may still contain mutually congruent numbers (mod n). Thus, the number of mutually noncongruent quadratic residues modulo n cannot exceed n/2 + 1 (n even) or (n + 1)/2 (n odd). The product of two residues is always a residue. Prime modulus Modulo 2, every integer is a quadratic residue. Modulo an odd prime number p there are (p + 1)/2 residues (including 0) and (p − 1)/2 nonresidues, by Euler's criterion. In this case, it is customary to consider 0 as a special case and work within the multiplicative group of nonzero elements of the field . In other words, every congruence class except zero modulo p has a multiplicative inverse. This is not true for composite moduli. Following this convention, the multiplicative inverse of a residue is a residue, and the inverse of a nonresidue is a nonresidue. Following this convention, modulo an odd prime number there is an equal number of residues and nonresidues. Modulo a prime, the product of two nonresidues is a residue and the product of a nonresidue and a (nonzero) residue is a nonresidue. The first supplement to the law of quadratic reciprocity is that if p ≡ 1 (mod 4) then −1 is a quadratic residue modulo p, and if p ≡ 3 (mod 4) then −1 is a nonresidue modulo p. This implies the following: If p ≡ 1 (mod 4) the negative of a residue modulo p is a residue and the negative of a nonresidue is a nonresidue. If p ≡ 3 (mod 4) the negative of a residue modulo p is a nonresidue and the negative of a nonresidue is a residue. Prime power modulus All odd squares are ≡ 1 (mod 8) and thus also ≡ 1 (mod 4). If a is an odd number and m = 8, 16, or some higher power of 2, then a is a residue modulo m if and only if a ≡ 1 (mod 8). For example, mod (32) the odd squares are 1² ≡ 15² ≡ 1, 3² ≡ 13² ≡ 9, 5² ≡ 11² ≡ 25, 7² ≡ 9² ≡ 49 ≡ 17, and the even ones are 0² ≡ 8² ≡ 16² ≡ 0, 2² ≡ 6² ≡ 10² ≡ 14² ≡ 4, 4² ≡ 12² ≡ 16. So a nonzero number is a residue mod 8, 16, etc., if and only if it is of the form 4^k(8n + 1). A number a relatively prime to an odd prime p is a residue modulo any power of p if and only if it is a residue modulo p.
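The listing procedure just described translates directly into code. A minimal Python sketch (the function name is arbitrary); by the symmetry a² ≡ (n − a)² it suffices to square only 0, 1, ..., n/2:

    def quadratic_residues(n):
        # Squares of 0, 1, ..., n//2, reduced mod n, give every residue.
        return {x * x % n for x in range(n // 2 + 1)}

    print(sorted(quadratic_residues(32)))  # [0, 1, 4, 9, 16, 17, 25], as above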
If the modulus is p^n, then p^k·a (with a not divisible by p) is a residue modulo p^n if k ≥ n; is a nonresidue modulo p^n if k < n is odd; is a residue modulo p^n if k < n is even and a is a residue; and is a nonresidue modulo p^n if k < n is even and a is a nonresidue. Notice that the rules are different for powers of two and powers of odd primes. Modulo an odd prime power n = p^k, the products of residues and nonresidues relatively prime to p obey the same rules as they do mod p; p is a nonresidue, and in general all the residues and nonresidues obey the same rules, except that the products will be zero if the power of p in the product is ≥ n. Modulo 8, the product of the nonresidues 3 and 5 is the nonresidue 7, and likewise for permutations of 3, 5 and 7. In fact, the multiplicative group of the non-residues and 1 forms the Klein four-group. Composite modulus not a prime power The basic facts in this case are: if a is a residue modulo n, then a is a residue modulo p^k for every prime power dividing n; if a is a nonresidue modulo n, then a is a nonresidue modulo p^k for at least one prime power dividing n. Modulo a composite number, the product of two residues is a residue. The product of a residue and a nonresidue may be a residue, a nonresidue, or zero. For example, from the table for modulus 6: 1, 2, 3, 4, 5 (residues: 1, 3, 4). The product of the residue 3 and the nonresidue 5 is the residue 3, whereas the product of the residue 4 and the nonresidue 2 is the nonresidue 2. Also, the product of two nonresidues may be either a residue, a nonresidue, or zero. For example, from the table for modulus 15: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 (residues: 1, 4, 6, 9, 10). The product of the nonresidues 2 and 8 is the residue 1, whereas the product of the nonresidues 2 and 7 is the nonresidue 14. This phenomenon can best be described using the vocabulary of abstract algebra. The congruence classes relatively prime to the modulus are a group under multiplication, called the group of units of the ring Z/nZ, and the squares are a subgroup of it. Different nonresidues may belong to different cosets, and there is no simple rule that predicts which one their product will be in. Modulo a prime, there is only the subgroup of squares and a single coset. The fact that, e.g., modulo 15 the product of the nonresidues 3 and 5, or of the nonresidue 5 and the residue 9, or the two residues 9 and 10 are all zero comes from working in the full ring Z/nZ, which has zero divisors for composite n. For this reason some authors add to the definition that a quadratic residue a must not only be a square but must also be relatively prime to the modulus n. (a is coprime to n if and only if a² is coprime to n.) Although it makes things tidier, this article does not insist that residues must be coprime to the modulus. Notations Gauss used the notations R and N to denote residuosity and non-residuosity, respectively. Although this notation is compact and convenient for some purposes, a more useful notation is the Legendre symbol, also called the quadratic character, which is defined for all integers a and positive odd prime numbers p as (a|p) = 0 if p divides a, +1 if a is a quadratic residue modulo p, and −1 if a is a nonresidue. There are two reasons why numbers ≡ 0 (mod p) are treated specially. As we have seen, it makes many formulas and theorems easier to state. The other (related) reason is that the quadratic character is a homomorphism from the multiplicative group of nonzero congruence classes modulo p to the complex numbers under multiplication. Setting (a|p) = 0 when p divides a allows its domain to be extended to the multiplicative semigroup of all the integers.
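In computational practice the Legendre symbol for an odd prime p is usually evaluated with Euler's criterion, mentioned earlier in the article: a^((p−1)/2) ≡ (a|p) (mod p). A minimal Python sketch (the function name is arbitrary):

    def legendre(a, p):
        # Euler's criterion: a^((p-1)/2) mod p is 1, p-1, or 0 for odd prime p.
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

    print(legendre(2, 7))  # 1: 2 is a residue mod 7, since 3^2 = 9 ≡ 2
    print(legendre(5, 7))  # -1: 5 is a nonresidue mod 7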
One advantage of this notation over Gauss's is that the Legendre symbol is a function that can be used in formulas. It can also easily be generalized to cubic, quartic and higher power residues. There is a generalization of the Legendre symbol for composite values of the modulus, the Jacobi symbol, but its properties are not as simple: if n is composite and the Jacobi symbol (a|n) = −1, then a N n, and if a R n then (a|n) = 1; but if (a|n) = 1 we do not know whether a R n or a N n. For example: (2|15) = 1 and (4|15) = 1, but 2 N 15 and 4 R 15. If n is prime, the Jacobi and Legendre symbols agree.

Distribution of quadratic residues

Although quadratic residues appear to occur in a rather random pattern modulo n, and this has been exploited in such applications as acoustics and cryptography, their distribution also exhibits some striking regularities. Using Dirichlet's theorem on primes in arithmetic progressions, the law of quadratic reciprocity, and the Chinese remainder theorem (CRT) it is easy to see that for any M > 0 there are primes p such that the numbers 1, 2, ..., M are all residues modulo p. For example, if p ≡ 1 (mod 8), p ≡ 1 (mod 12), p ≡ 1 (mod 5) and p ≡ 1 (mod 28), then by the law of quadratic reciprocity 2, 3, 5, and 7 will all be residues modulo p, and thus all numbers 1–10 will be. The CRT says that this is the same as p ≡ 1 (mod 840), and Dirichlet's theorem says there are an infinite number of primes of this form. 2521 is the smallest, and indeed 1² ≡ 1, 1046² ≡ 2, 123² ≡ 3, 2² ≡ 4, 643² ≡ 5, 87² ≡ 6, 668² ≡ 7, 429² ≡ 8, 3² ≡ 9, and 529² ≡ 10 (mod 2521).

Dirichlet's formulas

The first of these regularities stems from Peter Gustav Lejeune Dirichlet's work (in the 1830s) on the analytic formula for the class number of binary quadratic forms. Let q be a prime number, s a complex variable, and define a Dirichlet L-function as
L(s) = Σ_{n=1}^{∞} (n|q) n^(−s).
Dirichlet showed that if q ≡ 3 (mod 4), then
L(1) = −(π/q^(3/2)) Σ_{n=1}^{q−1} n (n|q) > 0.
Therefore, in this case (prime q ≡ 3 (mod 4)), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ..., q − 1 is a negative number. For example, modulo 11 the residues among 1, 2, ..., 10 are 1, 3, 4, 5, and 9:
1 + 4 + 9 + 5 + 3 = 22, 2 + 6 + 7 + 8 + 10 = 33,
and the difference is −11. In fact the difference will always be an odd multiple of q if q > 3. In contrast, for prime q ≡ 1 (mod 4), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ..., q − 1 is zero, implying that both sums equal q(q − 1)/4.

Dirichlet also proved that for prime q ≡ 3 (mod 4),
L(1) = (π / ((2 − (2|q)) √q)) Σ_{n=1}^{(q−1)/2} (n|q) > 0.
This implies that there are more quadratic residues than nonresidues among the numbers 1, 2, ..., (q − 1)/2. For example, modulo 11 there are four residues less than 6 (namely 1, 3, 4, and 5), but only one nonresidue (2). An intriguing fact about these two theorems is that all known proofs rely on analysis; no-one has ever published a simple or direct proof of either statement.

Law of quadratic reciprocity

If p and q are odd primes, then: ((p is a quadratic residue mod q) if and only if (q is a quadratic residue mod p)) if and only if (at least one of p and q is congruent to 1 mod 4). That is:
(p|q)(q|p) = (−1)^(((p−1)/2)((q−1)/2)),
where (a|p) is the Legendre symbol. Thus, for numbers a and odd primes p that don't divide a, whether a is a residue mod p depends only on the class of p modulo 4a; for example, −1 is a residue if and only if p ≡ 1 (mod 4), 2 is a residue if and only if p ≡ ±1 (mod 8), and 3 is a residue if and only if p ≡ ±1 (mod 12).

Pairs of residues and nonresidues

Modulo a prime p, the number of pairs n, n + 1 where n R p and n + 1 R p, or n N p and n + 1 R p, etc., are almost equal. More precisely, let p be an odd prime.
For i, j = 0, 1 define the sets
A_ij = { k : 1 ≤ k ≤ p − 2, k has residuosity i and k + 1 has residuosity j },
where residuosity 0 means "residue" and 1 means "nonresidue", and let α_ij be the number of elements of A_ij. That is, α00 is the number of residues that are followed by a residue, α01 is the number of residues that are followed by a nonresidue, α10 is the number of nonresidues that are followed by a residue, and α11 is the number of nonresidues that are followed by a nonresidue. Then if p ≡ 1 (mod 4)
α00 = (p − 5)/4, α01 = α10 = α11 = (p − 1)/4,
and if p ≡ 3 (mod 4)
α01 = (p + 1)/4, α00 = α10 = α11 = (p − 3)/4.
For example, modulo 17 (the residues among 1, 2, ..., 16 are 1, 2, 4, 8, 9, 13, 15, and 16):
A00 = {1,8,15}, A01 = {2,4,9,13}, A10 = {3,7,12,14}, A11 = {5,6,10,11}.
Modulo 19 (the residues among 1, 2, ..., 18 are 1, 4, 5, 6, 7, 9, 11, 16, and 17):
A00 = {4,5,6,16}, A01 = {1,7,9,11,17}, A10 = {3,8,10,15}, A11 = {2,12,13,14}.
Gauss (1828) introduced this sort of counting when he proved that if p ≡ 1 (mod 4) then x⁴ ≡ 2 (mod p) can be solved if and only if p = a² + 64b².

The Pólya–Vinogradov inequality

The values of (a|p) for consecutive values of a mimic a random variable like a coin flip. Specifically, Pólya and Vinogradov proved (independently) in 1918 that for any nonprincipal Dirichlet character χ(n) modulo q and any integers M and N,
|Σ_{n=M+1}^{M+N} χ(n)| = O(√q log q),
in big O notation. Setting χ(n) = (n|q), this shows that the number of quadratic residues modulo q in any interval of length N is
N/2 + O(√q log q).
It is easy to prove that |Σ_{n=M+1}^{M+N} χ(n)| < √q log q. In fact, Montgomery and Vaughan improved this in 1977, showing that, if the generalized Riemann hypothesis is true then
|Σ_{n=M+1}^{M+N} χ(n)| = O(√q log log q).
This result cannot be substantially improved, for Schur had proved in 1918 that
max_N |Σ_{n=1}^{N} χ(n)| > (1/(2π)) √q,
and Paley had proved in 1932 that
max_N |Σ_{n=1}^{N} (d|n)| > (1/7) √d log log d
for infinitely many d > 0.

Least quadratic non-residue

The least quadratic residue mod p is clearly 1. The question of the magnitude of the least quadratic non-residue n(p) is more subtle, but it is always prime, with 7 appearing for the first time at 71. The Pólya–Vinogradov inequality above gives O(√p log p). The best unconditional estimate is n(p) ≪ p^θ for any θ > 1/(4√e), obtained by estimates of Burgess on character sums. Assuming the generalised Riemann hypothesis, Ankeny obtained n(p) ≪ (log p)². Linnik showed that the number of p less than X such that n(p) > X^ε is bounded by a constant depending on ε. The least quadratic non-residues mod p for odd primes p are: 2, 2, 3, 2, 2, 3, 2, 5, 2, 3, 2, ...

Quadratic excess

Let p be an odd prime. The quadratic excess E(p) is the number of quadratic residues in the range (0, p/2) minus the number in the range (p/2, p). For p congruent to 1 mod 4, the excess is zero, since −1 is a quadratic residue and the residues are symmetric under r ↔ p − r. For p congruent to 3 mod 4, the excess E is always positive.

Complexity of finding square roots

That is, given a number a and a modulus n, how hard is it to tell whether an x solving x² ≡ a (mod n) exists, and, assuming one does exist, to calculate it? An important difference between prime and composite moduli shows up here. Modulo a prime p, a quadratic residue a has 1 + (a|p) roots (i.e. zero if a N p, one if a ≡ 0 (mod p), or two if a R p and gcd(a, p) = 1). In general if a composite modulus n is written as a product of powers of distinct primes, and there are n₁ roots modulo the first one, n₂ mod the second, ..., there will be n₁n₂... roots modulo n. The theoretical way solutions modulo the prime powers are combined to make solutions modulo n is called the Chinese remainder theorem; it can be implemented with an efficient algorithm. For example: Solve x² ≡ 6 (mod 15). x² ≡ 6 (mod 3) has one solution, 0; x² ≡ 6 (mod 5) has two, 1 and 4; and there are two solutions modulo 15, namely 6 and 9. Solve x² ≡ 4 (mod 15).
x² ≡ 4 (mod 3) has two solutions, 1 and 2; x² ≡ 4 (mod 5) has two, 2 and 3; and there are four solutions modulo 15, namely 2, 7, 8, and 13. Solve x² ≡ 7 (mod 15). x² ≡ 7 (mod 3) has two solutions, 1 and 2; x² ≡ 7 (mod 5) has no solutions; and there are no solutions modulo 15.

Prime or prime power modulus

First off, if the modulus n is prime the Legendre symbol can be quickly computed using a variation of Euclid's algorithm or Euler's criterion. If it is −1 there is no solution. Secondly, assuming that (a|n) = 1, if n ≡ 3 (mod 4), Lagrange found that the solutions are given by
x ≡ ±a^((n+1)/4) (mod n),
and Legendre found a similar solution if n ≡ 5 (mod 8):
x ≡ ±a^((n+3)/8) (mod n) if a^((n−1)/4) ≡ 1 (mod n), and
x ≡ ±2a·(4a)^((n−5)/8) (mod n) if a^((n−1)/4) ≡ −1 (mod n).
For prime n ≡ 1 (mod 8), however, there is no known formula. Tonelli (in 1891) and Cipolla found efficient algorithms that work for all prime moduli. Both algorithms require finding a quadratic nonresidue modulo n, and there is no efficient deterministic algorithm known for doing that. But since half the numbers between 1 and n are nonresidues, picking numbers x at random and calculating the Legendre symbol until a nonresidue is found will quickly produce one. A slight variant of this algorithm is the Tonelli–Shanks algorithm. If the modulus n is a prime power n = p^e, a solution may be found modulo p and "lifted" to a solution modulo n using Hensel's lemma or an algorithm of Gauss.

Composite modulus

If the modulus n has been factored into prime powers the solution was discussed above. If n is not congruent to 2 modulo 4 and the Kronecker symbol (a|n) = −1, then there is no solution; if n is congruent to 2 modulo 4 and (a|(n/2)) = −1, then there is also no solution. If n is not congruent to 2 modulo 4 and (a|n) = 1, or n is congruent to 2 modulo 4 and (a|(n/2)) = 1, there may or may not be one. If the complete factorization of n is not known, and (a|n) = 1 and n is not congruent to 2 modulo 4, or n is congruent to 2 modulo 4 and (a|(n/2)) = 1, the problem is known to be equivalent to integer factorization of n (i.e. an efficient solution to either problem could be used to solve the other efficiently).

The above discussion indicates how knowing the factors of n allows us to find the roots efficiently. Say there were an efficient algorithm for finding square roots modulo a composite number. The article congruence of squares discusses how finding two numbers x and y where x² ≡ y² (mod n) and x ≢ ±y (mod n) suffices to factorize n efficiently. Generate a random number, square it modulo n, and have the efficient square root algorithm find a root. Repeat until it returns a number not equal to the one we originally squared (or its negative modulo n), then follow the algorithm described in congruence of squares. The efficiency of the factoring algorithm depends on the exact characteristics of the root-finder (e.g. does it return all roots? just the smallest one? a random one?), but it will be efficient.

Determining whether a is a quadratic residue or nonresidue modulo n (denoted a R n or a N n) can be done efficiently for prime n by computing the Legendre symbol. However, for composite n, this forms the quadratic residuosity problem, which is not known to be as hard as factorization, but is assumed to be quite hard. On the other hand, if we want to know if there is a solution for x less than some given limit c, this problem is NP-complete; however, this is a fixed-parameter tractable problem, where c is the parameter.

In general, to determine if a is a quadratic residue modulo composite n, one can use the following theorem: Let n > 2 and gcd(a, n) = 1. Then x² ≡ a (mod n) is solvable if and only if:
the Legendre symbol (a|p) = 1 for all odd prime divisors p of n;
a ≡ 1 (mod 4) if n is divisible by 4 but not 8; and
a ≡ 1 (mod 8) if n is divisible by 8.
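A small Python sketch of this criterion (assuming the distinct odd prime factors of n are available; it reuses the legendre_symbol helper sketched earlier, and the function name is ours):

import math

def is_qr_composite(a, odd_prime_factors, n):
    """Decide solvability of x^2 ≡ a (mod n) for gcd(a, n) = 1,
    given the distinct odd prime factors of n."""
    assert math.gcd(a, n) == 1
    if any(legendre_symbol(a, p) != 1 for p in odd_prime_factors):
        return False
    if n % 8 == 0:        # n divisible by 8: need a ≡ 1 (mod 8)
        return a % 8 == 1
    if n % 4 == 0:        # n divisible by 4 but not 8: need a ≡ 1 (mod 4)
        return a % 4 == 1
    return True

assert is_qr_composite(4, [3, 5], 15)      # solutions x = 2, 7, 8, 13
assert not is_qr_composite(7, [3, 5], 15)  # no solution, as shown above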
Note: This theorem essentially requires that the factorization of n is known. Also notice that if gcd(a, n) = m, then the congruence can be reduced to a/m ≡ x²/m (mod n/m), but then this takes the problem away from quadratic residues (unless m is a square).

The number of quadratic residues

The list of the number of quadratic residues modulo n, for n = 1, 2, 3 ..., looks like: 1, 2, 2, 2, 3, 4, 4, 3, 4, 6, 6, 4, 7, 8, 6, ... A formula to count the number of squares modulo n is given by Stangl.

Applications of quadratic residues

Acoustics

Sound diffusers have been based on number-theoretic concepts such as primitive roots and quadratic residues.

Graph theory

Paley graphs are dense undirected graphs, one for each prime p ≡ 1 (mod 4), that form an infinite family of conference graphs, which yield an infinite family of symmetric conference matrices. Paley digraphs are directed analogs of Paley graphs, one for each p ≡ 3 (mod 4), that yield antisymmetric conference matrices. The construction of these graphs uses quadratic residues.

Cryptography

The fact that finding a square root of a number modulo a large composite n is equivalent to factoring (which is widely believed to be a hard problem) has been used for constructing cryptographic schemes such as the Rabin cryptosystem and oblivious transfer. The quadratic residuosity problem is the basis for the Goldwasser–Micali cryptosystem. The discrete logarithm is a similar problem that is also used in cryptography.

Primality testing

Euler's criterion is a formula for the Legendre symbol (a|p) where p is prime. If p is composite the formula may or may not compute (a|p) correctly. The Solovay–Strassen primality test for whether a given number n is prime or composite picks a random a and computes (a|n) using a modification of Euclid's algorithm, and also using Euler's criterion. If the results disagree, n is composite; if they agree, n may be composite or prime. For a composite n at least 1/2 the values of a in the range 2, 3, ..., n − 1 will return "n is composite"; for prime n none will. If, after using many different values of a, n has not been proved composite it is called a "probable prime". The Miller–Rabin primality test is based on the same principles. There is a deterministic version of it, but the proof that it works depends on the generalized Riemann hypothesis; the output from this test is "n is definitely composite" or "either n is prime or the GRH is false". If the second output ever occurs for a composite n, then the GRH would be false, which would have implications through many branches of mathematics.

Integer factorization

In § VI of the Disquisitiones Arithmeticae Gauss discusses two factoring algorithms that use quadratic residues and the law of quadratic reciprocity. Several modern factorization algorithms (including Dixon's algorithm, the continued fraction method, the quadratic sieve, and the number field sieve) generate small quadratic residues (modulo the number being factorized) in an attempt to find a congruence of squares which will yield a factorization. The number field sieve is the fastest general-purpose factorization algorithm known.

Table of quadratic residues

The following table lists the quadratic residues mod 1 to 75.
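To make the primality-testing idea concrete, here is a minimal Python sketch of the Jacobi symbol (computed with the usual Euclid-style reduction, no factoring needed) and one Solovay–Strassen round; the function names are ours:

def jacobi_symbol(a, n):
    """Jacobi symbol (a|n) for odd n > 0."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):      # (2|n) = -1 when n ≡ 3, 5 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen_round(n, a):
    """One round: True means no witness found, False means n is composite."""
    j = jacobi_symbol(a, n)
    return j != 0 and pow(a, (n - 1) // 2, n) == j % n

# 561 is a Carmichael number, yet small bases quickly expose it as composite:
print(any(not solovay_strassen_round(561, a) for a in range(2, 20)))  # True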
Mathematics
Modular arithmetic
null
200115
https://en.wikipedia.org/wiki/Trajectory
Trajectory
A trajectory or flight path is the path that an object with mass in motion follows through space as a function of time. In classical mechanics, a trajectory is defined by Hamiltonian mechanics via canonical coordinates; hence, a complete trajectory is defined by position and momentum, simultaneously. The mass might be a projectile or a satellite. For example, it can be an orbit: the path of a planet, asteroid, or comet as it travels around a central mass. In control theory, a trajectory is a time-ordered set of states of a dynamical system (see e.g. Poincaré map). In discrete mathematics, a trajectory is a sequence of values calculated by the iterated application of a mapping to an element of its source.

Physics of trajectories

A familiar example of a trajectory is the path of a projectile, such as a thrown ball or rock. In a significantly simplified model, the object moves only under the influence of a uniform gravitational force field. This can be a good approximation for a rock that is thrown for short distances, for example at the surface of the Moon. In this simple approximation, the trajectory takes the shape of a parabola. Generally when determining trajectories, it may be necessary to account for nonuniform gravitational forces and air resistance (drag and aerodynamics). This is the focus of the discipline of ballistics. One of the remarkable achievements of Newtonian mechanics was the derivation of Kepler's laws of planetary motion. In the gravitational field of a point mass or a spherically-symmetrical extended mass (such as the Sun), the trajectory of a moving object is a conic section, usually an ellipse or a hyperbola. This agrees with the observed orbits of planets, comets, and artificial spacecraft to a reasonably good approximation, although if a comet passes close to the Sun, then it is also influenced by other forces such as the solar wind and radiation pressure, which modify the orbit and cause the comet to eject material into space. Newton's theory later developed into the branch of theoretical physics known as classical mechanics. It employs the mathematics of differential calculus (which was also initiated by Newton in his youth). Over the centuries, countless scientists have contributed to the development of these two disciplines. Classical mechanics became a most prominent demonstration of the power of rational thought, i.e. reason, in science as well as technology. It helps to understand and predict an enormous range of phenomena; trajectories are but one example.

Consider a particle of mass m, moving in a potential field V. Physically speaking, mass represents inertia, and the field V represents external forces of a particular kind known as "conservative". Given V at every relevant position, there is a way to infer the associated force that would act at that position, say from gravity. Not all forces can be expressed in this way, however. The motion of the particle is described by the second-order differential equation
m d²x/dt² = −∇V(x),
where x is the position vector. On the right-hand side, the force is given in terms of ∇V, the gradient of the potential, taken at positions along the trajectory. This is the mathematical form of Newton's second law of motion: force equals mass times acceleration, for such situations.

Examples

Uniform gravity, neither drag nor wind

The ideal case of motion of a projectile in a uniform gravitational field in the absence of other forces (such as air drag) was first investigated by Galileo Galilei.
To neglect the action of the atmosphere in shaping a trajectory would have been considered a futile hypothesis by practical-minded investigators all through the Middle Ages in Europe. Nevertheless, by anticipating the existence of the vacuum, later to be demonstrated on Earth by his collaborator Evangelista Torricelli, Galileo was able to initiate the future science of mechanics. In a near vacuum, as it turns out for instance on the Moon, his simplified parabolic trajectory proves essentially correct.

In the analysis that follows, we derive the equation of motion of a projectile as measured from an inertial frame at rest with respect to the ground. Associated with the frame is a right-hand coordinate system with its origin at the point of launch of the projectile. The x-axis is tangent to the ground, and the y-axis is perpendicular to it (parallel to the gravitational field lines). Let g be the acceleration of gravity. Relative to the flat terrain, let the initial horizontal speed be v_h = v cos θ and the initial vertical speed be v_v = v sin θ. It will also be shown that the range is 2·v_h·v_v/g, and the maximum altitude is v_v²/(2g). The maximum range for a given initial speed v is obtained when v_h = v_v, i.e. the initial angle is 45°. This range is v²/g, and the maximum altitude at the maximum range is v²/(4g).

Derivation of the equation of motion

Assume the motion of the projectile is being measured from a free-fall frame which happens to be at (x, y) = (0, 0) at t = 0. The equation of motion of the projectile in this frame (by the equivalence principle) would be the straight line y = x tan θ. The co-ordinates of this free-fall frame, with respect to our inertial frame, would be y = −gt²/2. That is, y = −g(x/v_h)²/2, using x = v_h t. Now translating back to the inertial frame, the co-ordinates of the projectile become y = x tan θ − g(x/v_h)²/2. That is:
y = x tan θ − g x² / (2 (v₀ cos θ)²),
(where v₀ is the initial speed, θ is the angle of elevation, and g is the acceleration due to gravity).

Range and height

The range, R, is the greatest distance the object travels along the x-axis in sector I. The initial velocity, v_i, is the speed at which said object is launched from the point of origin. The initial angle, θ_i, is the angle at which said object is released. The g is the respective gravitational pull on the object within a null-medium. The height, h, is the greatest parabolic height said object reaches within its trajectory.

Angle of elevation

In terms of angle of elevation θ and initial speed v:
v_h = v cos θ, v_v = v sin θ,
giving the range as
R = 2 v² cos θ sin θ / g = v² sin 2θ / g.
This equation can be rearranged to find the angle for a required range:
θ = ½ arcsin(gR/v²) (Equation II: angle of projectile launch).
Note that the sine function is such that there are two solutions for θ for a given range R. The angle giving the maximum range can be found by considering the derivative of R with respect to θ and setting it to zero:
dR/dθ = (2v²/g) cos 2θ = 0,
which has a nontrivial solution at 2θ = π/2, or θ = 45°. The maximum range is then R_max = v²/g. At this angle sin(π/2) = 1, so the maximum height obtained is v²/(4g). To find the angle giving the maximum height for a given speed, calculate the derivative of the maximum height H = v² sin²θ/(2g) with respect to θ, that is
dH/dθ = (v²/g) sin θ cos θ = (v²/(2g)) sin 2θ,
which is zero when θ = π/2 = 90°. So the maximum height H_max = v²/(2g) is obtained when the projectile is fired straight up.

Orbiting objects

If instead of a uniform downwards gravitational force we consider two bodies orbiting with the mutual gravitation between them, we obtain Kepler's laws of planetary motion. The derivation of these was one of the major works of Isaac Newton and provided much of the motivation for the development of differential calculus.
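A small Python sketch of these formulas (function names are ours), computing the range, the apex height, and the two launch angles that attain a required range under the same no-drag assumptions:

import math

g = 9.81  # standard gravity, m/s^2

def projectile_range(v, theta):
    """Range on flat ground for launch speed v and elevation theta (radians)."""
    return v**2 * math.sin(2 * theta) / g

def max_height(v, theta):
    """Apex height for the same launch."""
    return (v * math.sin(theta))**2 / (2 * g)

def angles_for_range(v, R):
    """The low and high launch angles reaching range R, or None if unattainable."""
    s = g * R / v**2
    if s > 1:
        return None             # R exceeds the maximum range v^2 / g
    t = 0.5 * math.asin(s)
    return t, math.pi / 2 - t   # the two solutions noted above

v = 30.0
print(projectile_range(v, math.radians(45)))  # ≈ 91.7 m, the maximum v^2/g
print(max_height(v, math.radians(90)))        # ≈ 45.9 m, firing straight up
print(angles_for_range(v, 60.0))              # ≈ 0.357 rad and 1.214 rad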
Catching balls

If a projectile, such as a baseball or cricket ball, travels in a parabolic path, with negligible air resistance, and if a player is positioned so as to catch it as it descends, he sees its angle of elevation increasing continuously throughout its flight. The tangent of the angle of elevation is proportional to the time since the ball was sent into the air, usually by being struck with a bat. Even when the ball is really descending, near the end of its flight, its angle of elevation seen by the player continues to increase. The player therefore sees it as if it were ascending vertically at constant speed. Finding the place from which the ball appears to rise steadily helps the player to position himself correctly to make the catch. If he is too close to the batsman who has hit the ball, it will appear to rise at an accelerating rate. If he is too far from the batsman, it will appear to slow rapidly, and then to descend.
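The constant-rate claim can be checked directly from the parabolic model above; here is a short sketch in our own notation, with the fielder standing at the landing point x = R and ε(t) the elevation angle of the ball as seen from there. Since y(t) = v_v t − ½gt² = (g/2) t (T − t), where T = 2v_v/g is the total flight time, and the horizontal separation is R − x(t) = v_h (T − t),

\[
\tan\varepsilon(t) \;=\; \frac{y(t)}{R - x(t)}
\;=\; \frac{\tfrac{g}{2}\,t\,(T - t)}{v_h\,(T - t)}
\;=\; \frac{g}{2 v_h}\,t ,
\]

so the tangent of the elevation angle indeed grows linearly with time, at a rate set by the ball's horizontal speed v_h.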
Physical sciences
Classical mechanics
Physics
200129
https://en.wikipedia.org/wiki/Hair%20loss
Hair loss
Hair loss, also known as alopecia or baldness, refers to a loss of hair from part of the head or body. Typically at least the head is involved. The severity of hair loss can vary from a small area to the entire body. Inflammation or scarring is not usually present. Hair loss in some people causes psychological distress. Common types include male- or female-pattern hair loss, alopecia areata, and a thinning of hair known as telogen effluvium. The cause of male-pattern hair loss is a combination of genetics and male hormones; the cause of female-pattern hair loss is unclear; the cause of alopecia areata is autoimmune; and the cause of telogen effluvium is typically a physically or psychologically stressful event. Telogen effluvium is very common following pregnancy. Less common causes of hair loss without inflammation or scarring include the pulling out of hair, certain medications including chemotherapy, HIV/AIDS, hypothyroidism, and malnutrition including iron deficiency. Causes of hair loss that occurs with scarring or inflammation include fungal infection, lupus erythematosus, radiation therapy, and sarcoidosis. Diagnosis of hair loss is partly based on the areas affected. Treatment of pattern hair loss may simply involve accepting the condition, which can also include shaving one's head. Interventions that can be tried include the medications minoxidil (or finasteride) and hair transplant surgery. Alopecia areata may be treated by steroid injections in the affected area, but these need to be frequently repeated to be effective. Hair loss is a common problem. Pattern hair loss by age 50 affects about half of men and a quarter of women. About 2% of people develop alopecia areata at some point in time.

Terminology

Baldness is the partial or complete lack of hair growth, and part of the wider topic of "hair thinning". The degree and pattern of baldness varies, but its most common cause is androgenic hair loss, alopecia androgenetica, or alopecia seborrheica, with the last term primarily used in Europe.

Hypotrichosis

Hypotrichosis is a condition of abnormal hair patterns, predominantly loss or reduction. It occurs most frequently as the growth of vellus hair in areas of the body that normally produce terminal hair. Typically, the individual's hair growth is normal after birth, but shortly thereafter the hair is shed and replaced with sparse, abnormal hair growth. The new hair is typically fine, short and brittle, and may lack pigmentation. Baldness may be present by the time the subject is 25 years old.

Signs and symptoms

Symptoms of hair loss include hair loss in patches, usually in circular patterns, dandruff, skin lesions, and scarring. Alopecia areata (mild to medium level) usually shows in unusual hair loss areas, e.g., the eyebrows, the back of the head, or above the ears, areas that male-pattern baldness usually does not affect. In male-pattern hair loss, loss and thinning begin at the temples and the crown, and hair either thins out or falls out. Female-pattern hair loss occurs at the frontal and parietal regions. People have between 100,000 and 150,000 hairs on their head. The number of strands normally lost in a day varies but on average is 100. In order to maintain a normal volume, hair must be replaced at the same rate at which it is lost. The first signs of hair thinning that people will often notice are more hairs than usual left in the hairbrush after brushing or in the basin after shampooing. Styling can also reveal areas of thinning, such as a wider parting or a thinning crown.
Skin conditions

A substantially blemished face, back and limbs could point to cystic acne, the most severe form of the condition, which arises from the same hormonal imbalances that cause hair loss and is associated with dihydrotestosterone production.

Psychological

The psychology of hair thinning is a complex issue. Hair is considered an essential part of overall identity, especially for women, for whom it often represents femininity and attractiveness. Men typically associate a full head of hair with youth and vigor. People experiencing hair thinning often find themselves in a situation where their physical appearance is at odds with their own self-image and commonly worry that they appear older than they are or less attractive to others. Psychological problems due to baldness, if present, are typically most severe at the onset of symptoms. Hair loss induced by cancer chemotherapy has been reported to cause changes in self-concept and body image. Body image does not return to the previous state after regrowth of hair for a majority of patients. In such cases, patients have difficulties expressing their feelings (alexithymia) and may be more prone to avoiding family conflicts. Family therapy can help families to cope with these psychological problems if they arise.

Causes

Although not completely understood, hair loss can have many causes:

Pattern hair loss

Male-pattern hair loss is believed to be due to a combination of genetics and the male hormone dihydrotestosterone. The cause in female-pattern hair loss remains unclear.

Infection

Dissecting cellulitis of the scalp
Fungal infections (such as tinea capitis)
Folliculitis from various causes
Demodex folliculitis, caused by Demodex folliculorum, a microscopic mite that feeds on the sebum produced by the sebaceous glands, denies hair essential nutrients and can cause thinning. Demodex folliculorum is not present on every scalp and is more likely to live in an excessively oily scalp environment.
Secondary syphilis

Drugs

Temporary or permanent hair loss can be caused by several medications, including those for blood pressure problems, diabetes, heart disease and cholesterol. Any that affect the body's hormone balance can have a pronounced effect: these include the contraceptive pill, hormone replacement therapy, steroids and acne medications. Some treatments used to cure mycotic infections can cause massive hair loss. Medications (side effects from drugs, including chemotherapy, anabolic steroids, and birth control pills)

Trauma

Traction alopecia is most commonly found in people with ponytails or cornrows who pull on their hair with excessive force. In addition, rigorous brushing, heat styling, and rough scalp massage can damage the cuticle, the hard outer casing of the hair. This causes individual strands to become weak and break off, reducing overall hair volume. Frictional alopecia is hair loss caused by rubbing of the hair or follicles, most infamously around the ankles of men from socks, where even if socks are no longer worn, the hair often will not grow back. Trichotillomania is the loss of hair caused by compulsive pulling and bending of the hairs. This disorder tends to begin around the onset of puberty and usually continues through adulthood. Due to the constant extraction of the hair roots, permanent hair loss can occur.
Traumas such as childbirth, major surgery, poisoning, and severe stress may cause a hair loss condition known as telogen effluvium, in which a large number of hairs enter the resting phase at the same time, causing shedding and subsequent thinning. The condition also presents as a side effect of chemotherapy: while targeting dividing cancer cells, this treatment also affects hair's growth phase, with the result that almost 90% of hairs fall out soon after chemotherapy starts. Radiation to the scalp, as when radiotherapy is applied to the head for the treatment of certain cancers there, can cause baldness of the irradiated areas.

Pregnancy

Hair loss often follows childbirth in the postpartum period without causing baldness. During pregnancy, the hair is thicker owing to increased circulating estrogens. Approximately three months after giving birth (typically between 2 and 5 months), estrogen levels drop and hair loss occurs, often particularly noticeably around the hairline and temple area. Hair typically grows back normally and treatment is not indicated. A similar situation occurs in women taking the fertility-stimulating drug clomiphene.

Other causes

Autoimmune disease. Alopecia areata is an autoimmune disorder also known as "spot baldness" that can result in hair loss ranging from just one location (alopecia areata monolocularis) to every hair on the entire body (alopecia areata universalis). Although thought to be caused by hair follicles becoming dormant, what triggers alopecia areata is not known. In most cases the condition corrects itself, but it can also spread to the entire scalp (alopecia totalis) or to the entire body (alopecia universalis).
Skin diseases and cancer. Localized or diffuse hair loss may also occur in cicatricial alopecia (lupus erythematosus, lichen planopilaris, folliculitis decalvans, central centrifugal cicatricial alopecia, postmenopausal frontal fibrosing alopecia, etc.). Tumours and skin outgrowths also induce localized baldness (sebaceous nevus, basal cell carcinoma, squamous cell carcinoma). Tumor alopecia is the hair loss in the immediate vicinity of either benign or malignant tumors of the scalp.
Hypothyroidism (an under-active thyroid) and the side effects of its related medications can cause hair loss, typically frontal, which is particularly associated with thinning of the outer third of the eyebrows (also seen with syphilis). Hyperthyroidism (an over-active thyroid) can also cause hair loss, which is parietal rather than frontal.
Sebaceous cysts. Temporary loss of hair can occur in areas where sebaceous cysts are present for considerable duration (normally one to several weeks).
Congenital triangular alopecia. This is a triangular, or in some cases oval, patch of hair loss in the temple area of the scalp that occurs mostly in young children. The affected area mainly contains vellus hair follicles or no hair follicles at all, but it does not expand. Its causes are unknown, and although it is a permanent condition, it does not have any other effect on the affected individuals.
Hair growth conditions. Gradual thinning of hair with age is a natural condition known as involutional alopecia. This is caused by an increasing number of hair follicles switching from the growth, or anagen, phase into a resting phase, or telogen phase, so that remaining hairs become shorter and fewer in number. An unhealthy scalp environment can play a significant role in hair thinning by contributing to miniaturization or causing damage.
Obesity.
Obesity-induced stress, such as that induced by a high-fat diet (HFD), targets hair follicle stem cells (HFSCs) to accelerate hair thinning in mice. It is likely that similar molecular mechanisms play a role in human hair loss.

Other causes of hair loss include:
Alopecia mucinosa
Biotinidase deficiency
Chronic inflammation
Diabetes
Pseudopelade of Brocq
Telogen effluvium
Tufted folliculitis

Genetics

Genetic forms of localized autosomal recessive hypotrichosis, a group of rare single-gene disorders, also exist.

Pathophysiology

Hair follicle growth occurs in cycles. Each cycle consists of a long growing phase (anagen), a short transitional phase (catagen) and a short resting phase (telogen). At the end of the resting phase, the hair falls out (exogen) and a new hair starts growing in the follicle, beginning the cycle again. Normally, about 40 (0–78 in men) hairs reach the end of their resting phase each day and fall out. When more than 100 hairs fall out per day, clinical hair loss (telogen effluvium) may occur. A disruption of the growing phase causes abnormal loss of anagen hairs (anagen effluvium).

Diagnosis

Because they are not usually associated with an increased loss rate, male-pattern and female-pattern hair loss do not generally require testing. If hair loss occurs in a young man with no family history, drug use could be the cause. The pull test helps to evaluate diffuse scalp hair loss. Gentle traction is exerted on a group of hairs (about 40–60) on three different areas of the scalp. The number of extracted hairs is counted and examined under a microscope. Normally, fewer than three hairs per area should come out with each pull. If more than ten hairs are obtained, the pull test is considered positive. The pluck test is conducted by pulling hair out "by the roots". The root of the plucked hair is examined under a microscope to determine the phase of growth, and is used to diagnose a defect of telogen, anagen, or systemic disease. Telogen hairs have tiny bulbs without sheaths at their roots. Telogen effluvium shows an increased percentage of telogen-phase hairs upon examination. Anagen hairs have sheaths attached to their roots. Anagen effluvium shows a decrease in telogen-phase hairs and an increased number of broken hairs. Scalp biopsy is used when the diagnosis is unsure; a biopsy allows for distinguishing between scarring and nonscarring forms. Hair samples are taken from areas of inflammation, usually around the border of the bald patch. Daily hair counts are normally done when the pull test is negative. It is done by counting the number of hairs lost. The hair from the first morning combing or during washing should be counted. The hair is collected in a clear plastic bag for 14 days, and the number of strands is recorded. If the hair count is more than 100 per day, it is considered abnormal, except after shampooing, when counts of up to 250 can be normal. Trichoscopy is a noninvasive method of examining hair and scalp. The test may be performed with the use of a handheld dermoscope or a video dermoscope. It allows differential diagnosis of hair loss in most cases. There are two types of identification tests for female-pattern baldness: the Ludwig Scale and the Savin Scale. Both track the progress of diffused thinning, which typically begins on the crown of the head behind the hairline, and becomes gradually more pronounced. For male-pattern baldness, the Hamilton–Norwood scale tracks the progress of a receding hairline and/or a thinning crown, through to a horseshoe-shaped ring of hair around the head and on to total baldness.
In almost all cases of thinning, and especially in cases of severe hair loss, it is recommended to seek advice from a doctor or dermatologist. Many types of thinning have an underlying genetic or health-related cause, which a qualified professional will be able to diagnose.

Management

Hiding hair loss

Head

One method of hiding hair loss is the comb over, which involves restyling the remaining hair to cover the balding area. It is usually a temporary solution, useful only while the area of hair loss is small. As the hair loss increases, a comb over becomes less effective. Another method is to wear a hat or a hairpiece such as a wig or toupee. The wig is a layer of artificial or natural hair made to resemble a typical hair style. In most cases the hair is artificial. Wigs vary widely in quality and cost. In the United States, the best wigs (those that look like real hair) cost up to tens of thousands of dollars. Organizations also collect individuals' donations of their own natural hair to be made into wigs for young cancer patients who have lost their hair due to chemotherapy or other cancer treatment, as well as for other types of hair loss.

Eyebrows

Though not as common as the loss of hair on the head, chemotherapy, hormone imbalance, other forms of hair loss, and additional factors can also cause loss of hair in the eyebrows. Loss of growth in the outer one third of the eyebrow is often associated with hypothyroidism. Artificial eyebrows are available to replace missing eyebrows or to cover patchy eyebrows. Eyebrow embroidery is another option, which involves the use of a blade to add pigment to the eyebrows. This gives a natural 3D look for those who are worried about an artificial look, and it lasts for two years. Micropigmentation (permanent makeup tattooing) is also available for those who want the look to be permanent.

Medications

Treatments for the various forms of hair loss have limited success. Three medications have evidence to support their use in male-pattern hair loss: minoxidil, finasteride, and dutasteride. They typically work better to prevent further hair loss than to regrow lost hair. On June 13, 2022, the U.S. Food and Drug Administration (FDA) approved Olumiant (baricitinib) for adults with severe alopecia areata. It is the first FDA-approved drug for systemic treatment, or treatment for any area of the body. Minoxidil (Rogaine) is a nonprescription medication approved for male-pattern baldness and alopecia areata. In a liquid or foam, it is rubbed into the scalp twice a day. Some people have an allergic reaction to the propylene glycol in the minoxidil solution, and a minoxidil foam was developed without propylene glycol. Not all users will regrow hair. Minoxidil may also be taken orally, although this route of administration is not approved by the FDA. The longer the hair has stopped growing, the less likely minoxidil will regrow hair. Minoxidil is not effective for other causes of hair loss. Hair regrowth can take 1 to 6 months to begin. Treatment must be continued indefinitely. If the treatment is stopped, hair loss resumes. Any hair regrown while minoxidil was used, and any hair that was susceptible to being lost during that time, will be lost. Most frequent side effects are mild scalp irritation, allergic contact dermatitis, and unwanted hair in other parts of the body. Finasteride (Propecia) is used in male-pattern hair loss in a pill form, taken 1 milligram per day. It is not indicated for women and is not recommended in pregnant women (as it is known to cause birth defects in fetuses).
Treatment is effective starting within 6 weeks. Finasteride causes an increase in hair retention, the weight of hair, and some increase in regrowth. Side effects in about 2% of males include decreased sex drive, erectile dysfunction, and ejaculatory dysfunction. Treatment should be continued as long as positive results occur. Once treatment is stopped, hair loss resumes. Corticosteroid injections into the scalp can be used to treat alopecia areata. This type of treatment is repeated on a monthly basis. Oral pills may be used for extensive hair loss from alopecia areata. Results may take up to a month to be seen. Immunosuppressants applied to the scalp have been shown to temporarily reverse alopecia areata, though the side effects of some of these drugs make such therapy questionable. There is some tentative evidence that anthralin may be useful for treating alopecia areata. Hormonal modulators (oral contraceptives or antiandrogens such as spironolactone and flutamide) can be used for female-pattern hair loss associated with hyperandrogenemia.

Surgery

Hair transplantation is usually carried out under local anesthetic. A surgeon will move healthy hair from the back and sides of the head to areas of thinning. The procedure can take between four and eight hours, and additional sessions can be carried out to make hair even thicker. Transplanted hair falls out within a few weeks, but regrows permanently within months. Surgical options, such as follicle transplants, scalp flaps, and scalp reduction, are available. These procedures are generally chosen by those who are self-conscious about their hair loss, but they are expensive and painful, with a risk of infection and scarring. Once surgery has occurred, six to eight months are needed before the quality of new hair can be assessed. Scalp reduction is the process of decreasing the area of bald skin on the head. In time, the skin on the head becomes flexible and stretched enough that some of it can be surgically removed. After the hairless scalp is removed, the space is closed with hair-covered scalp. Scalp reduction is generally done in combination with hair transplantation to provide a natural-looking hairline, especially for those with extensive hair loss. Hairline lowering can sometimes be used to lower a high hairline secondary to hair loss, although there may be a visible scar after further hair loss. Wigs are an alternative to medical and surgical treatment; some patients wear a wig or hairpiece. They can be used permanently or temporarily to cover the hair loss. High-quality, natural-looking wigs and hairpieces are available.

Chemotherapy

Hypothermia caps may be used to prevent hair loss during some kinds of chemotherapy, specifically, when taxanes or anthracyclines are administered. They are not recommended when cancer is present in the skin of the scalp or for lymphoma or leukemia. There are generally only minor side effects from scalp cooling given during chemotherapy.

Embracing baldness

Instead of attempting to conceal their hair loss, some people embrace it by either doing nothing about it or sporting a shaved head. The general public became more accepting of men with shaved heads in the early 1950s, when Russian-American actor Yul Brynner began sporting the look; the resulting phenomenon inspired many of his male fans to shave their heads.
Male celebrities then continued to bring mainstream popularity to shaved heads, including athletes such as Michael Jordan and Zinedine Zidane and actors such as Dwayne Johnson, Ben Kingsley, and Jason Statham. Female baldness is still viewed as less normal in various parts of the world.

Alternative medicine

Dietary supplements are not typically recommended. There is only one small trial of saw palmetto, which shows tentative benefit in those with mild to moderate androgenetic alopecia. There is no evidence for biotin. Evidence for most other alternative medicine remedies is also insufficient. There was no good evidence for ginkgo, aloe vera, ginseng, bergamot, hibiscus, or sophora as of 2011. Many people use unproven treatments to treat hair loss. Egg oil, in Indian, Japanese, Unani (Roghan Baiza Murgh) and Chinese traditional medicine, was traditionally used as a treatment for hair loss.

Research

Research is looking into connections between hair loss and other health issues. While there has been speculation about a connection between early-onset male-pattern hair loss and heart disease, a review of articles from 1954 to 1999 found no conclusive connection between baldness and coronary artery disease. The dermatologists who conducted the review suggested further study was needed. Environmental factors are under review. A 2007 study indicated that smoking may be a factor associated with age-related hair loss among Asian men. The study controlled for age and family history, and found statistically significant positive associations between moderate or severe male-pattern hair loss and smoking status. Vertex baldness is associated with an increased risk of coronary heart disease (CHD), and the relationship depends upon the severity of baldness, while frontal baldness is not. Thus, vertex baldness might be a marker of CHD and is more closely associated with atherosclerosis than frontal baldness.

Hair follicle aging

A key aspect of hair loss with age is the aging of the hair follicle. Ordinarily, hair follicle renewal is maintained by the stem cells associated with each follicle. Aging of the hair follicle appears to be primed by a sustained cellular response to the DNA damage that accumulates in renewing stem cells during aging. This damage response involves the proteolysis of type XVII collagen by neutrophil elastase in response to DNA damage in hair follicle stem cells. Proteolysis of collagen leads to elimination of the damaged cells and, consequently, to terminal hair follicle miniaturization.

Hedgehog signaling

In June 2022 the University of California, Irvine announced that researchers had discovered that hedgehog signaling in murine fibroblasts induces new hair growth and hair multiplication, while hedgehog activation increases fibroblast heterogeneity and drives new cell states. A new signaling molecule called SCUBE3 potently stimulates hair growth and may offer a therapeutic treatment for androgenetic alopecia.

Etymology

The term alopecia is from the Classical Greek ἀλώπηξ, alōpēx, meaning "fox". The origin of this usage is because this animal sheds its coat twice a year, or because in ancient Greece foxes often lost hair because of mange.
Biology and health sciences
Health and fitness: General
Health
200136
https://en.wikipedia.org/wiki/Static%20electricity
Static electricity
Static electricity is an imbalance of electric charges within or on the surface of a material. The charge remains until it can move away by an electric current or electrical discharge. The word "static" is used to differentiate it from current electricity, where an electric charge flows through an electrical conductor. A static electric charge can be created whenever two surfaces contact and/or slide against each other and then separate. The effects of static electricity are familiar to most people because they can feel, hear, and even see sparks if the excess charge is neutralized when brought close to an electrical conductor (for example, a path to ground), or a region with an excess charge of the opposite polarity (positive or negative). The familiar phenomenon of a static shock (more specifically, an electrostatic discharge) is caused by the neutralization of a charge.

Causes

Materials are made of atoms that are normally electrically neutral because they contain equal numbers of positive charges (protons in their nuclei) and negative charges (electrons in "shells" surrounding the nucleus). The phenomenon of static electricity requires a separation of positive and negative charges. When two materials are in contact, electrons may move from one material to the other, which leaves an excess of positive charge on one material, and an equal negative charge on the other. When the materials are separated, they retain this charge imbalance. It is also possible for ions to be transferred.

Contact-induced charge separation

Electrons or ions can be exchanged between materials on contact or when they slide against each other, which is known as the triboelectric effect and results in one material becoming positively charged and the other negatively charged. The triboelectric effect is the main cause of static electricity as observed in everyday life, and in common high-school science demonstrations involving rubbing different materials together (e.g., fur against an acrylic rod). Contact-induced charge separation causes one's hair to stand up and causes "static cling" (for example, a balloon rubbed against the hair becomes negatively charged; when near a wall, the charged balloon is attracted to positively charged particles in the wall, and can "cling" to it, suspended against gravity).

Pressure-induced charge separation

Applied mechanical stress generates an electric polarization, and in turn this can lead to separation of charge in many types of materials. The free carriers at the surface of a material compensate for the polarization induced by the strains.

Heat-induced charge separation

Heating can generate electric polarization, which in turn can lead to a separation of charge in certain materials. All pyroelectric materials are also piezoelectric and do not have inversion symmetry.

Charge-induced charge separation

A charged object brought close to an electrically neutral conductive object causes a separation of charge within the neutral object. This is called electrostatic induction. Charges of the same polarity are repelled and move to the side of the object away from the external charge, and charges of the opposite polarity are attracted and move to the side facing the charge. As the force due to the interaction of electric charges falls off rapidly with increasing distance, the effect of the closer (opposite polarity) charges is greater and the two objects feel a force of attraction.
Careful grounding of part of an object can permanently add or remove electrons, leaving the object with a global, permanent charge.

Removal and prevention

Removing or preventing a buildup of static charge can be as simple as opening a window or using a humidifier to increase the moisture content of the air, making the atmosphere more conductive. Air ionizers can perform the same task. Items that are particularly sensitive to static discharge may be treated with the application of an antistatic agent, which adds a conducting surface layer that ensures any excess charge is evenly distributed. Fabric softeners and dryer sheets used in washing machines and clothes dryers are an example of an antistatic agent used to prevent and remove static cling. Many semiconductor devices used in electronics are particularly sensitive to static discharge. Conductive antistatic bags are commonly used to protect such components. People who work on circuits that contain these devices often ground themselves with a conductive antistatic strap. In industrial settings such as paint or flour plants, as well as in hospitals, antistatic safety boots are sometimes used to prevent a buildup of static charge due to contact with the floor. These shoes have soles with good conductivity. Anti-static shoes should not be confused with insulating shoes, which provide exactly the opposite benefit: some protection against serious electric shocks from the mains voltage. Within medical cable assemblies and lead wires, random triboelectric noise is generated when the various conductors, insulation, and fillers rub against each other as the cable is flexed during movement. Noise generated within a cable is often called handling noise or cable noise, but this type of unwanted signal is more accurately described as triboelectric noise. When measuring low-level signals, noise in cable or wire may present a problem. For example, the noise in an ECG or another medical signal may make accurate diagnosis difficult or even impossible. Keeping triboelectric noise at acceptable levels requires careful material selection, design, and processing as cable material is manufactured.

Static discharge

The spark associated with static electricity is caused by electrostatic discharge, or simply static discharge, as excess charge is neutralized by a flow of charges from or to the surroundings. The feeling of an electric shock is caused by the stimulation of nerves as the current flows through the human body. The energy stored as static electricity on an object varies depending on the size of the object and its capacitance, the voltage to which it is charged, and the dielectric constant of the surrounding medium. For modelling the effect of static discharge on sensitive electronic devices, a human being is represented as a capacitor of 100 picofarads, charged to a voltage of 4,000 to 35,000 volts. When touching an object, this energy is discharged in less than a microsecond. While the total energy is small, on the order of millijoules, it can still damage sensitive electronic devices. Larger objects will store more energy, which may be directly hazardous to human contact or which may give a spark that can ignite flammable gas or dust.

Lightning

Lightning is a dramatic natural example of static discharge. While the details are unclear and remain a subject of debate, the initial charge separation is thought to be associated with contact between ice particles within storm clouds.
In general, significant charge accumulations can only persist in regions of low electrical conductivity (very few charges free to move in the surroundings), hence the flow of neutralizing charges often results from neutral atoms and molecules in the air being torn apart to form separate positive and negative charges, which travel in opposite directions as an electric current, neutralizing the original accumulation of charge. The static charge in air typically breaks down in this way at around 10,000 volts per centimeter (10 kV/cm), depending on humidity. The discharge superheats the surrounding air, causing the bright flash, and produces a shock wave, causing the booming sound. A lightning bolt is simply a scaled-up version of the sparks seen in more domestic occurrences of static discharge. The flash occurs because the air in the discharge channel is heated to such a high temperature that it emits light by incandescence. The clap of thunder is the result of the shock wave created as the superheated air expands.

Electronic components

Many semiconductor devices used in electronics are very sensitive to the presence of static electricity and can be damaged by a static discharge. The use of an antistatic strap is mandatory for researchers manipulating nanodevices. Further precautions can be taken by taking off shoes with thick rubber soles and staying permanently in contact with a metallic ground.

Static build-up in flowing flammable and ignitable materials

Discharge of static electricity can create severe hazards in those industries dealing with flammable substances, where a small electrical spark might ignite explosive mixtures. The flowing movement of finely powdered substances or low-conductivity fluids in pipes or through mechanical agitation can build up static electricity. The flow of granules of material such as sand down a plastic chute can transfer charge, which can be measured using a multimeter connected to metal foil lining the chute at intervals, and the charge can be roughly proportional to particulate flow. Dust clouds of finely powdered substances can become combustible or explosive. When there is a static discharge in a dust or vapor cloud, explosions have occurred. Among the major industrial incidents that have occurred due to static discharge are the explosion of a grain silo in southwest France, a paint plant in Thailand, a factory making fiberglass moldings in Canada, a storage tank explosion in Glenpool, Oklahoma in 2003, and a portable tank filling operation and a tank farm in Des Moines, Iowa and Valley Center, Kansas in 2007. The ability of a fluid to retain an electrostatic charge depends on its electrical conductivity. When low-conductivity fluids flow through pipelines or are mechanically agitated, contact-induced charge separation called flow electrification occurs. Fluids that have low electrical conductivity (below 50 picosiemens per meter) are called accumulators. Fluids having conductivity above 50 pS/m are called non-accumulators. In non-accumulators, charges recombine as fast as they are separated and hence electrostatic charge accumulation is not significant. In the petrochemical industry, 50 pS/m is the recommended minimum value of electrical conductivity for adequate removal of charge from a fluid. Kerosines may have conductivity ranging from less than 1 picosiemens per meter to 20 pS/m. For comparison, deionized water has a conductivity of about 10,000,000 pS/m or 10 μS/m.
Transformer oil is part of the electrical insulation system of large power transformers and other electrical apparatus. Re-filling of large apparatus requires precautions against electrostatic charging of the fluid, which may damage sensitive transformer insulation. An important concept for insulating fluids is the static relaxation time. This is similar to the time constant τ (tau) of an RC circuit. For insulating materials, it is the ratio of the static dielectric constant to the electrical conductivity of the material. For hydrocarbon fluids, this is sometimes approximated by dividing the number 18 by the electrical conductivity of the fluid in pS/m. Thus a fluid that has an electrical conductivity of 1 pS/m has an estimated relaxation time of about 18 seconds. The excess charge in a fluid dissipates almost completely after four to five times the relaxation time, or 90 seconds for the fluid in the above example. Charge generation increases at higher fluid velocities and larger pipe diameters, becoming quite significant in pipes 8 inches (200 mm) or larger. Static charge generation in these systems is best controlled by limiting fluid velocity. The British standard BS PD CLC/TR 50404:2003 (formerly BS-5958-Part 2) Code of Practice for Control of Undesirable Static Electricity prescribes pipe flow velocity limits. Because water content has a large impact on the fluid's dielectric constant, the recommended velocity for hydrocarbon fluids containing water should be limited to 1 meter per second. Bonding and earthing are the usual ways charge buildup can be prevented. For fluids with electrical conductivity below 10 pS/m, bonding and earthing are not adequate for charge dissipation, and anti-static additives may be required.

Fueling operations

The flowing movement of flammable liquids like gasoline inside a pipe can build up static electricity. Non-polar liquids such as gasoline, toluene, xylene, diesel, kerosene and light crude oils exhibit significant ability for charge accumulation and charge retention during high-velocity flow. When the electrostatic discharge energy is high enough, it can ignite a mixture of fuel vapor and air. Different fuels have different flammable limits and require different levels of electrostatic discharge energy to ignite. Electrostatic discharge while fueling with gasoline is a present danger at gas stations. Fires have also been started at airports while refueling aircraft with kerosene. New grounding technologies, the use of conducting materials, and the addition of anti-static additives help to prevent or safely dissipate the buildup of static electricity. Customers who need to fill containers at gas stations are advised to set them on the ground first so that any static buildup will dissipate without risk of fire or explosion. The flowing movement of gases in pipes alone creates little, if any, static electricity. Charge generation in gas streams is thought to occur only when solid particles or liquid droplets are carried along.

In space exploration

Due to the extremely low humidity in extraterrestrial environments, very large static charges can accumulate, causing a major hazard for the complex electronics used in space exploration vehicles. Static electricity is thought to be a particular hazard for astronauts on planned missions to the Moon and Mars.
Fueling operations The flowing movement of flammable liquids like gasoline inside a pipe can build up static electricity. Non-polar liquids such as gasoline, toluene, xylene, diesel, kerosene and light crude oils exhibit significant ability for charge accumulation and charge retention during high-velocity flow. When the electrostatic discharge energy is high enough, it can ignite a fuel vapor and air mixture; different fuels have different flammable limits and require different levels of electrostatic discharge energy to ignite. Electrostatic discharge while fueling with gasoline is a present danger at gas stations. Fires have also been started at airports while refueling aircraft with kerosene. New grounding technologies, the use of conducting materials, and the addition of anti-static additives help to prevent or safely dissipate the buildup of static electricity. Customers who need to fill containers at gas stations are advised to set them on the ground first so that any static buildup will dissipate without risk of fire or explosion. The flowing movement of gases in pipes alone creates little, if any, static electricity; charge generation is thought to occur only when solid particles or liquid droplets are carried in the gas stream. In space exploration Due to the extremely low humidity in extraterrestrial environments, very large static charges can accumulate, posing a major hazard for the complex electronics used in space exploration vehicles. Static electricity is thought to be a particular hazard for astronauts on planned missions to the Moon and Mars. Walking over the extremely dry terrain could cause them to accumulate a significant amount of charge; reaching out to open the airlock on their return could cause a large static discharge, potentially damaging sensitive electronics. Ozone cracking A static discharge in the presence of air or oxygen can create ozone, and ozone can degrade rubber parts. Many elastomers are sensitive to ozone cracking. Exposure to ozone creates deep penetrative cracks in critical components like gaskets and O-rings. Fuel lines are also susceptible to the problem unless preventive action is taken. Preventive measures include adding anti-ozonants to the rubber mix, or using an ozone-resistant elastomer. Fires from cracked fuel lines have been a problem on vehicles, especially in engine compartments, where ozone can be produced by electrical equipment. Energies involved The energy released in a static electricity discharge may vary over a wide range. The energy in joules can be calculated from the capacitance (C) of the object and the static potential V in volts by the formula E = ½CV². One experimenter estimates the capacitance of the human body at up to 400 picofarads; a charge of 50,000 volts, discharged for example when touching a charged car, would create a spark with an energy of 500 millijoules. Another estimate is 100–300 pF and 20,000 volts, producing a maximum energy of 60 mJ.
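These figures follow directly from the capacitor formula. A short check, as a sketch using only the estimates quoted above:

```python
# Spark energy of a static discharge: E = 1/2 * C * V^2.
# The capacitance/voltage pairs are the human-body estimates quoted above.

def discharge_energy_mj(capacitance_pf: float, voltage_v: float) -> float:
    """Discharge energy in millijoules from capacitance (pF) and voltage (V)."""
    c_farads = capacitance_pf * 1e-12
    return 0.5 * c_farads * voltage_v ** 2 * 1e3  # joules -> millijoules

print(discharge_energy_mj(400, 50_000))  # 500.0 mJ (charged-car example)
print(discharge_energy_mj(300, 20_000))  # 60.0 mJ (upper end of the second estimate)
```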
IEC 479-2:1987 states that a discharge with energy greater than 5000 mJ is a direct serious risk to human health. IEC 60065 states that consumer products cannot discharge more than 350 mJ into a person. The maximal potential is limited to about 35–40 kV, due to corona discharge dissipating the charge at higher potentials. Potentials below 3000 volts are not typically detectable by humans. Maximal potentials commonly achieved on the human body range between 1 and 10 kV, though in optimal conditions potentials as high as 20–25 kV can be reached. Low relative humidity increases the charge buildup; walking 20 feet (6 m) on a vinyl floor at 15% relative humidity causes buildup of voltage up to 12 kV, while at 80% humidity the voltage is only 1.5 kV. As little as 0.2 millijoules may present an ignition hazard; such low spark energy is often below the threshold of human visual and auditory perception. Typical ignition energies are: 0.017 mJ for hydrogen, 0.2–2 mJ for hydrocarbon vapors, 1–50 mJ for fine flammable dust, and 40–1000 mJ for coarse flammable dust. The energy needed to damage most electronic devices is between 2 and 1000 nanojoules. A relatively small energy, often as little as 0.2–2 millijoules, is needed to ignite a flammable mixture of a fuel and air. For the common industrial hydrocarbon gases and solvents, the minimum ignition energy required for ignition of a vapor–air mixture is lowest for vapor concentrations roughly in the middle between the lower explosive limit and the upper explosive limit, and rapidly increases as the concentration deviates from this optimum to either side. Aerosols of flammable liquids may be ignited well below their flash point. Generally, liquid aerosols with particle sizes below 10 micrometers behave like vapors, while particle sizes above 40 micrometers behave more like flammable dusts. Typical minimal flammable concentrations of aerosols lie between 15 and 50 g/m³. Similarly, the presence of foam on the surface of a flammable liquid significantly increases ignitability. An aerosol of flammable dust can be ignited as well, resulting in a dust explosion; the lower explosive limit usually lies between 50 and 1000 g/m³, and finer dusts tend to be more explosive and require less spark energy to set off. Simultaneous presence of flammable vapors and flammable dust can significantly decrease the ignition energy; a mere 1 vol.% of propane in air can reduce the required ignition energy of dust by a factor of 100. Higher than normal oxygen content in the atmosphere also significantly lowers the ignition energy. There are five types of electrical discharges: Spark discharge is responsible for the majority of industrial fires and explosions where static electricity is involved. Sparks occur between objects at different electric potentials. Good grounding of all parts of the equipment and precautions against charge buildups on equipment and personnel are used as prevention measures. Brush discharge occurs from a nonconductive charged surface or highly charged nonconductive liquids. The energy is limited to roughly 4 millijoules. To be hazardous, the voltage involved must be above about 20 kilovolts, the surface polarity must be negative, a flammable atmosphere must be present at the point of discharge, and the discharge energy must be sufficient for ignition. Further, because surfaces have a maximal charge density, an area of at least 100 cm² has to be involved. This is not considered to be a hazard for dust clouds. Propagating brush discharge is high in energy and dangerous. It occurs when an insulating surface up to 8 mm thick (e.g., a Teflon or glass lining of a grounded metal pipe or a reactor) is subjected to a large charge buildup between the opposite surfaces, acting as a large-area capacitor. Cone discharge, also called bulking brush discharge, occurs over surfaces of charged powders with resistance above 10¹⁰ ohms, or deep within the powder mass. Cone discharges are not usually observed in dust volumes below 1 m³. The energy involved depends on the grain size of the powder and the charge magnitude, and can reach up to 20 mJ; larger dust volumes produce higher energies. Corona discharge is generally considered non-hazardous.
Physical sciences
Electrostatics
Physics
200167
https://en.wikipedia.org/wiki/Water%20cycle
Water cycle
The water cycle (or hydrologic cycle or hydrological cycle) is a biogeochemical cycle that involves the continuous movement of water on, above and below the surface of the Earth. The mass of water on Earth remains fairly constant over time. However, the partitioning of the water into the major reservoirs of ice, fresh water, salt water and atmospheric water is variable and depends on climatic variables. The water moves from one reservoir to another, such as from river to ocean, or from the ocean to the atmosphere. The processes that drive these movements are evaporation, transpiration, condensation, precipitation, sublimation, infiltration, surface runoff, and subsurface flow. In doing so, the water goes through different forms: liquid, solid (ice) and vapor. The ocean plays a key role in the water cycle as it is the source of 86% of global evaporation. The water cycle involves the exchange of energy, which leads to temperature changes. When water evaporates, it takes up energy from its surroundings and cools the environment. When it condenses, it releases energy and warms the environment. These heat exchanges influence the climate system. The evaporative phase of the cycle purifies water because it causes salts and other solids picked up during the cycle to be left behind. The condensation phase in the atmosphere replenishes the land with freshwater. The flow of liquid water and ice transports minerals across the globe. It also reshapes the geological features of the Earth, through processes including erosion and sedimentation. The water cycle is also essential for the maintenance of most life and ecosystems on the planet. Human actions are greatly affecting the water cycle. Activities such as deforestation, urbanization, and the extraction of groundwater alter natural landscapes (land use changes) and all have an effect on the water cycle. On top of this, climate change is leading to an intensification of the water cycle. Research has shown that global warming is causing shifts in precipitation patterns, increased frequency of extreme weather events, and changes in the timing and intensity of rainfall. These water cycle changes affect ecosystems, water availability, agriculture, and human societies. Description Overall process The water cycle is powered by the energy emitted by the sun. This energy heats water in the ocean and seas. Water evaporates as water vapor into the air. Some ice and snow sublimates directly into water vapor. Evapotranspiration is water transpired from plants and evaporated from the soil. The water molecule has a smaller molecular mass than the major components of the atmosphere, nitrogen (N₂) and oxygen (O₂), and hence water vapor is less dense than dry air. Due to the significant difference in density, buoyancy drives humid air higher (a short check of this arithmetic follows below). As altitude increases, air pressure decreases and the temperature drops (see Gas laws). The lower temperature causes water vapor to condense into tiny liquid water droplets which are heavier than the air, and which fall unless supported by an updraft. A huge concentration of these droplets over a large area in the atmosphere becomes visible as a cloud, while condensation near ground level is referred to as fog. Atmospheric circulation moves water vapor around the globe; cloud particles collide, grow, and fall out of the upper atmospheric layers as precipitation. Some precipitation falls as snow, hail, or sleet, and can accumulate in ice caps and glaciers, which can store frozen water for thousands of years.
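The buoyancy claim above can be checked with molar masses alone. The sketch below is illustrative only, using standard textbook values; it shows that adding water vapor lowers the mean molar mass, and hence (by Avogadro's law) the density, of air at a fixed temperature and pressure:

```python
# Why humid air rises: water vapor (~18 g/mol) is lighter than the nitrogen
# (~28 g/mol) and oxygen (~32 g/mol) it displaces, so moist air is less
# dense than dry air at the same temperature and pressure.

M_DRY_AIR = 28.96  # g/mol, standard mean molar mass of dry air
M_WATER = 18.02    # g/mol

def moist_air_molar_mass(vapor_mole_fraction: float) -> float:
    """Mean molar mass of air with the given mole fraction of water vapor."""
    return (1 - vapor_mole_fraction) * M_DRY_AIR + vapor_mole_fraction * M_WATER

for x in (0.00, 0.01, 0.03):  # 0-3% water vapor by mole
    print(f"vapor fraction {x:.2f}: mean molar mass {moist_air_molar_mass(x):.2f} g/mol")
```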
Most water falls as rain back into the ocean or onto land, where the water flows over the ground as surface runoff. A portion of this runoff enters rivers, with streamflow moving water towards the oceans. Runoff and water emerging from the ground (groundwater) may be stored as freshwater in lakes. Not all runoff flows into rivers; much of it soaks into the ground as infiltration. Some water infiltrates deep into the ground and replenishes aquifers, which can store freshwater for long periods of time. Some infiltration stays close to the land surface and can seep back into surface-water bodies (and the ocean) as groundwater discharge or be taken up by plants and transferred back to the atmosphere as water vapor by transpiration. Some groundwater finds openings in the land surface and emerges as freshwater springs. In river valleys and floodplains, there is often continuous water exchange between surface water and ground water in the hyporheic zone. Over time, the water returns to the ocean, to continue the water cycle. The ocean plays a key role in the water cycle. The ocean holds "97% of the total water on the planet; 78% of global precipitation occurs over the ocean, and it is the source of 86% of global evaporation". Important physical processes within the water cycle include (in alphabetical order): Advection: The movement of water through the atmosphere. Without advection, water that evaporated over the oceans could not precipitate over land. Atmospheric rivers that move large volumes of water vapor over long distances are an example of advection. Condensation: The transformation of water vapor to liquid water droplets in the air, creating clouds and fog. Evaporation: The transformation of water from liquid to gas phases as it moves from the ground or bodies of water into the overlying atmosphere. The source of energy for evaporation is primarily solar radiation. Evaporation often implicitly includes transpiration from plants, though together they are specifically referred to as evapotranspiration. Total annual evapotranspiration amounts to approximately of water, of which evaporates from the oceans. 86% of global evaporation occurs over the ocean. Infiltration: The flow of water from the ground surface into the ground. Once infiltrated, the water becomes soil moisture or groundwater. A recent global study using water stable isotopes, however, shows that not all soil moisture is equally available for groundwater recharge or for plant transpiration. Percolation: Water flows vertically through the soil and rocks under the influence of gravity. Precipitation: Condensed water vapor that falls to the Earth's surface. Most precipitation occurs as rain, but also includes snow, hail, fog drip, graupel, and sleet. Approximately of water falls as precipitation each year, of it over the oceans. The rain on land contains of water per year and snow only . 78% of global precipitation occurs over the ocean. Runoff: The variety of ways by which water moves across the land. This includes both surface runoff and channel runoff. As it flows, the water may seep into the ground, evaporate into the air, become stored in lakes or reservoirs, or be extracted for agricultural or other human uses. Subsurface flow: The flow of water underground, in the vadose zone and aquifers. Subsurface water may return to the surface (e.g. as a spring or by being pumped) or eventually seep into the oceans.
Water returns to the land surface at a lower elevation than where it infiltrated, under the force of gravity or gravity-induced pressures. Groundwater tends to move slowly and is replenished slowly, so it can remain in aquifers for thousands of years. Transpiration: The release of water vapor from plants and soil into the air. Residence times The residence time of a reservoir within the hydrologic cycle is the average time a water molecule will spend in that reservoir. It is a measure of the average age of the water in that reservoir. Groundwater can spend over 10,000 years beneath Earth's surface before leaving. Particularly old groundwater is called fossil water. Water stored in the soil remains there very briefly, because it is spread thinly across the Earth, and is readily lost by evaporation, transpiration, stream flow, or groundwater recharge. After evaporating, the residence time in the atmosphere is about 9 days before condensing and falling to the Earth as precipitation. The major ice sheets – Antarctica and Greenland – store ice for very long periods. Ice from Antarctica has been reliably dated to 800,000 years before present, though the average residence time is shorter. In hydrology, residence times can be estimated in two ways. The more common method relies on the principle of conservation of mass (water balance) and assumes the amount of water in a given reservoir is roughly constant. With this method, residence times are estimated by dividing the volume of the reservoir by the rate at which water either enters or exits the reservoir. Conceptually, this is equivalent to timing how long it would take the reservoir to become filled from empty if no water were to leave (or how long it would take the reservoir to empty from full if no water were to enter). An alternative method to estimate residence times, which is gaining in popularity for dating groundwater, is the use of isotopic techniques. This is done in the subfield of isotope hydrology.
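The mass-balance method reduces to a single division. As a rough illustrative sketch: the ocean volume used here is the figure quoted in the "Water in storage" passage that follows, while the annual evaporation flux is an assumed, order-of-magnitude value, not a figure from this article:

```python
# Mass-balance estimate of residence time: reservoir volume divided by the
# rate at which water enters (or exits) it, assuming the volume stays constant.

OCEAN_VOLUME_KM3 = 1_338_000_000          # ocean storage, figure cited below
OCEAN_EVAPORATION_KM3_PER_YEAR = 434_000  # assumed annual flux, for illustration

residence_time_years = OCEAN_VOLUME_KM3 / OCEAN_EVAPORATION_KM3_PER_YEAR
print(f"Ocean residence time ~ {residence_time_years:,.0f} years")  # ~3,000 years
```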
Water in storage The water cycle describes the processes that drive the movement of water throughout the hydrosphere. However, much more water is "in storage" (or in "pools") for long periods of time than is actually moving through the cycle. The storehouses for the vast majority of all water on Earth are the oceans. It is estimated that of the 1,386,000,000 km³ of the world's water supply, about 1,338,000,000 km³ is stored in oceans, or about 97%. It is also estimated that the oceans supply about 90% of the evaporated water that goes into the water cycle. The Earth's ice caps, glaciers, and permanent snowpack store another 24,064,000 km³, accounting for only 1.7% of the planet's total water volume. However, this quantity of water is 68.7% of all freshwater on the planet. Changes caused by humans Local or regional impacts Human activities can alter the water cycle at the local or regional level. This happens due to changes in land use and land cover. Such changes affect "precipitation, evaporation, flooding, groundwater, and the availability of freshwater for a variety of uses". Examples of such land use changes are converting fields to urban areas or clearing forests. Such changes can affect the ability of soils to soak up surface water. Deforestation has local as well as regional effects. For example, it reduces soil moisture, evaporation and rainfall at the local level. Furthermore, deforestation causes regional temperature changes that can affect rainfall patterns. Aquifer drawdown or overdrafting and the pumping of fossil water increase the total amount of water that takes part in the active water cycle. This is because water that was originally locked in the ground has become available for evaporation through contact with the atmosphere. Water cycle intensification due to climate change Since the middle of the 20th century, human-caused climate change has resulted in observable changes in the global water cycle. The IPCC Sixth Assessment Report in 2021 predicted that these changes will continue to grow significantly at the global and regional level. These findings are a continuation of the scientific consensus expressed in the IPCC Fifth Assessment Report of 2014 and other special reports by the Intergovernmental Panel on Climate Change, which had already stated that the water cycle would continue to intensify throughout the 21st century. Related processes Biogeochemical cycling While the water cycle is itself a biogeochemical cycle, the flow of water over and beneath the Earth is a key component of the cycling of other biogeochemicals. Runoff is responsible for almost all of the transport of eroded sediment and phosphorus from land to waterbodies. The salinity of the oceans is derived from erosion and transport of dissolved salts from the land. Cultural eutrophication of lakes is primarily due to phosphorus, applied in excess to agricultural fields in fertilizers, and then transported overland and down rivers. Both runoff and groundwater flow play significant roles in transporting nitrogen from the land to waterbodies. The dead zone at the outlet of the Mississippi River is a consequence of nitrates from fertilizer being carried off agricultural fields and funnelled down the river system to the Gulf of Mexico. Runoff also plays a part in the carbon cycle, again through the transport of eroded rock and soil. Slow loss over geologic time The hydrodynamic wind within the upper portion of a planet's atmosphere allows light chemical elements such as hydrogen to move up to the exobase, the lower limit of the exosphere, where the gases can then reach escape velocity, entering outer space without impacting other particles of gas. This type of gas loss from a planet into space is known as planetary wind. Planets with hot lower atmospheres can develop humid upper atmospheres that accelerate the loss of hydrogen. Historical interpretations In ancient times, it was widely thought that the land mass floated on a body of water, and that most of the water in rivers had its origin under the earth. Examples of this belief can be found in the works of Homer. In Works and Days (ca. 700 BC), the Greek poet Hesiod outlines the idea of the water cycle: "[Vapour] is drawn from the ever-flowing rivers and is raised high above the earth by windstorm, and sometimes it turns to rain towards evening, and sometimes to wind when Thracian Boreas huddles the thick clouds." In the ancient Near East, Hebrew scholars observed that even though the rivers ran into the sea, the sea never became full. Some scholars conclude that the water cycle was described completely during this time in this passage: "The wind goeth toward the south, and turneth about unto the north; it whirleth about continually, and the wind returneth again according to its circuits. All the rivers run into the sea, yet the sea is not full; unto the place from whence the rivers come, thither they return again" (Ecclesiastes 1:6-7).
Furthermore, it was also observed that when the clouds were full, they emptied rain on the earth (Ecclesiastes 11:3). In the Adityahridayam (a devotional hymn to the Sun God) of the Ramayana, a Hindu epic dated to the 4th century BCE, the 22nd verse mentions that the Sun heats up water and sends it down as rain. By roughly 500 BCE, Greek scholars were speculating that much of the water in rivers could be attributed to rain. The origin of rain was also known by then. These scholars maintained the belief, however, that water rising up through the earth contributed a great deal to rivers. Examples of this thinking included Anaximander (570 BCE) (who also speculated about the evolution of land animals from fish) and Xenophanes of Colophon (530 BCE). Similar thoughts appear in Chinese works of the Warring States period, such as those of Chi Ni Tzu (320 BCE) and the Lu Shih Ch'un Ch'iu (239 BCE). The idea that the water cycle is a closed cycle can be found in the works of Anaxagoras of Clazomenae (460 BCE) and Diogenes of Apollonia (460 BCE). Both Plato (390 BCE) and Aristotle (350 BCE) speculated about percolation as part of the water cycle. Aristotle correctly hypothesized that the sun played a role in the Earth's hydrologic cycle in his book Meteorology, writing "By it [the sun's] agency the finest and sweetest water is everyday carried up and is dissolved into vapor and rises to the upper regions, where it is condensed again by the cold and so returns to the earth.", and believed that clouds were composed of cooled and condensed water vapor. Much like the earlier Aristotle, the Eastern Han Chinese scientist Wang Chong (27–100 AD) accurately described the water cycle of Earth in his Lunheng but was dismissed by his contemporaries. Up to the time of the Renaissance, it was wrongly assumed that precipitation alone was insufficient to feed rivers and that, for a complete water cycle, underground water pushing upwards from the oceans was the main contributor to river water. Bartholomew of England held this view (1240 CE), as did Leonardo da Vinci (1500 CE) and Athanasius Kircher (1644 CE). Discovery of the correct theory The first published thinker to assert that rainfall alone was sufficient for the maintenance of rivers was Bernard Palissy (1580 CE), who is often credited as the discoverer of the modern theory of the water cycle. Palissy's theories were not tested scientifically until 1674, in a study commonly attributed to Pierre Perrault. Even then, these beliefs were not accepted in mainstream science until the early nineteenth century.
Physical sciences
Hydrology
null
200199
https://en.wikipedia.org/wiki/Remora
Remora
The remora, sometimes called suckerfish or sharksucker, is any of a family (Echeneidae) of ray-finned fish in the order Carangiformes. Depending on species, they grow to long. Their distinctive first dorsal fins take the form of a modified oval, sucker-like organ with slat-like structures that open and close to create suction and take a firm hold against the skin of larger marine animals. The disk is made up of stout, flexible membranes that can be raised and lowered to generate suction. By sliding backward, the remora can increase the suction, or it can release itself by swimming forward. Remoras sometimes attach to small boats, and have been observed attaching to divers as well. They swim well on their own, with a sinuous, or curved, motion. Characteristics Remora front dorsal fins have evolved to enable them to adhere by suction to smooth surfaces, and they spend most of their lives clinging to a host animal such as a whale, turtle, shark or ray. It is probably a mutualistic arrangement, as the remora can move around on the host, removing ectoparasites and loose flakes of skin, while benefiting from the protection provided by the host and the constant flow of water across its gills. Although many believe that remoras feed off particulate matter from the host's meals, some posit an alternative theory: that their diets are composed primarily of host feces. Further research is needed to validate the extent of this alternative feeding mechanism. Habitat Remoras are tropical open-ocean dwellers, but are occasionally found in temperate or coastal waters if they have attached to large fish that have wandered into these areas. In the mid-Atlantic Ocean, spawning usually takes place in June and July; in the Mediterranean Sea, it occurs in August and September. The sucking disc begins to show when the young fish are about long. When the remora reaches about , the disc is fully formed and the remora can then attach to other animals. The remora's lower jaw projects beyond the upper, and the animal lacks a swim bladder. Some remoras associate with specific host species. They are commonly found attached to sharks, manta rays, whales, turtles, and dugongs, hence the common names "sharksucker" and "whalesucker". Smaller remoras also fasten onto fish such as tuna and swordfish, and some of the smallest remoras travel in the mouths or gills of large manta rays, ocean sunfish, swordfish and sailfish. The relationship between a remora and its host is most often taken to be one of commensalism, specifically phoresy. While some of the relationships are mutualistic, it is believed that dolphins with remoras attached do not benefit from the relationship. The attachment of the remora increases the dolphin's drag, which increases the energy needed for swimming. The remora is also thought to irritate the skin of the dolphin. Physiology Research into the physiology of the remora has been of significant benefit to the understanding of ventilation costs in fish. Remoras, like many other fishes, have two different modes of ventilation. Ram ventilation is the process in which, at higher speeds, the remora uses the force of the water moving past it to create movement of fluid in the gills. At lower speeds, the remora will use a form of active ventilation, in which the fish actively moves fluid through its gills. In order to use active ventilation, a fish must actively use energy to move the fluid; however, determining this energy cost is normally complicated due to the fish's movement when using either method.
As a result, the remora has proved invaluable in finding this cost difference (since they will stick to a shark or tube, and hence remain stationary despite the movement, or lack thereof, of water). Experimental data from studies on remoras found that the associated cost of active ventilation was a 3.7–5.1% increase in energy consumption in order to maintain the same quantity of fluid flow the fish obtained by using ram ventilation. Other research into the remora's physiology came about as a result of studies across multiple taxa, or using the remora as an out-group for certain evolutionary studies. Concerning the latter case, remoras were used as an outgroup when investigating tetrodotoxin resistance in remoras, pufferfish, and related species, finding remoras (specifically Echeneis naucrates) had a resistance of 6.1–5.5 M. Use for fishing Some cultures use remoras to catch turtles. A cord or rope is fastened to the remora's tail, and when a turtle is sighted, the fish is released from the boat; it usually heads directly for the turtle and fastens itself to the turtle's shell, and then both remora and turtle are hauled in. Smaller turtles can be pulled completely into the boat by this method, while larger ones are hauled within harpooning range. This practice has been reported throughout the Indian Ocean, especially from eastern Africa near Zanzibar and Mozambique, and from northern Australia near Cape York and Torres Strait. Similar reports come from Japan and from the Americas. Some of the first records of the "fishing fish" in Western literature come from the accounts of the second voyage of Christopher Columbus. However, Leo Wiener considers the Columbus accounts to be apocryphal: what was taken for accounts of the Americas may have been, in fact, notes Columbus derived from accounts of the East Indies, his desired destination. Mythology In ancient times, the remora was believed to stop a ship from sailing. In Latin, remora means "delay", while the genus name Echeneis comes from Greek ἔχειν, echein ("to hold") and ναῦς, naus ("a ship"). In a notable account by Pliny the Elder, the remora is blamed for the defeat of Mark Antony at the Battle of Actium and, indirectly, for the death of Caligula. A modern version of the story is given by Jorge Luis Borges in Book of Imaginary Beings (1957).
Biology and health sciences
Acanthomorpha
null
200237
https://en.wikipedia.org/wiki/Eurasian%20eagle-owl
Eurasian eagle-owl
The Eurasian eagle-owl (Bubo bubo) is a species of eagle-owl, a type of bird that resides in much of Eurasia. It is often just called the eagle-owl in Europe and Asia. It is one of the largest species of owl. Females can grow to a total length of , with a wingspan of . Males are slightly smaller. This bird has distinctive ear tufts, with upper parts that are mottled with darker blackish colouring and tawny. The wings and tail are barred. The underparts are a variably hued buff, streaked with darker colouring. The facial disc is not very defined. The orange eyes are distinctive. At least 12 subspecies of the Eurasian eagle-owl are described. Eurasian eagle-owls are found in many habitats, mostly mountainous and rocky areas, often near varied woodland edge and near shrubby areas with openings or wetlands. They also inhabit coniferous forests, steppes, and remote areas. Occasionally, they are found in farmland and in park-like settings in European and Asian cities and, very rarely, in busier urban areas. The eagle-owl is mostly a nocturnal predator. Predominantly, they hunt small mammals, such as rodents and rabbits, but also birds and larger mammals. Secondary prey include reptiles, amphibians, fish, large insects, and invertebrates. The species typically breeds on cliff ledges, in gullies, among rocks, and in other concealed locations. The nest is a scrape, typically containing a clutch of 2–4 eggs, which are laid at intervals and hatch at different times. The female incubates the eggs and broods the young. The male brings food for her and for the nestlings. Continuing parental care for the young is provided by both adults for about five months. In addition to being one of the largest living species of owl, the Eurasian eagle-owl is also one of the most widely distributed. With a total range in Europe and Asia of about and a total population estimated to be between 100,000 and 500,000 individuals, the IUCN lists the bird's conservation status as being of least concern, although the trend is listed as decreasing. The vast majority of eagle-owls live in continental Europe, Scandinavia, Russia (which is almost certainly where the peak numbers and diversity of races occur), and Central Asia. Additional minor populations exist in Anatolia, the northern Middle East, the montane upper part of South Asia, China, Korea and Japan; in addition, an estimated 12 to 40 pairs are thought to reside in the United Kingdom as of 2016 (where they are arguably non-native), a number which may be on the rise; the species has bred successfully in the UK since at least 1996. Because of their size, tame eagle-owls have occasionally been used in pest control to deter large birds such as gulls from nesting. Description The Eurasian eagle-owl is among the larger birds of prey, smaller than the golden eagle (Aquila chrysaetos), but larger than the snowy owl (Bubo scandiacus), despite some overlap in size with both of those species. It is sometimes referred to as the world's largest owl, although Blakiston's fish owl (B. blakistoni) is slightly heavier on average and the much lighter-weight great grey owl (Strix nebulosa) is slightly longer on average. Heimo Mikkola reported the largest specimens of eagle-owl as having the same upper body mass, , as the largest Blakiston's fish owl, and as attaining a length around longer. In terms of average weight and wing size, Blakiston's seems to be the slightly larger species, even averaging a bit larger in these respects than the biggest eagle-owl races from Russia.
Also, although shorter than the largest of the latter species, the Eurasian eagle-owl can weigh well more than twice as much as the largest great grey owl. The Eurasian eagle-owl typically has a wingspan of , with the largest specimens possibly attaining . The total length of the species can vary from . Females can weigh from , and males can weigh from . In comparison, the barn owl (Tyto alba), the world's most widely distributed owl species, weighs about and the great horned owl (B. virginianus), which fills the eagle-owl's ecological niche in North America, weighs around . Besides the female being larger, little external sexual dimorphism is seen in the Eurasian eagle-owl, although the ear tufts of males reportedly tend to be more upright than those of females. When an eagle-owl is seen on its own in the field, distinguishing the individual's sex is generally not possible. Sex determination by size is possible by in-hand measurements. In some populations, the female typically may be slightly darker than the male. The plumage coloration across the accepted subspecies can be highly variable. The upper parts may be brown-black to tawny-buff to pale creamy gray, typically showing dense freckling on the forehead and crown, stripes on the nape, sides, and back of the neck, and dark splotches on the pale ground colour of the back, mantle, and scapulars. A narrow buff band, freckled with brown or buff, often runs up from the base of the bill, above the inner part of the eye, and along the inner edge of the black-brown ear tufts. The rump and upper tail-coverts are delicately patterned with dark vermiculations and fine, wavy barring, the extent of which varies with subspecies. The underwing coverts and undertail coverts are similar, but tend to be more strongly barred in brownish-black. The primaries and secondaries are brown with broad, dark brown bars and dark brown tips, and grey or buff irregular lines. A complete moult takes place each year between July and December. The facial disc is tawny-buff, speckled with black-brown, so densely on the outer edge of the disc as to form a "frame" around the face. The chin and throat are white with a brownish central streak. The feathers of the upper breast generally have brownish-black centres and reddish-brown edges, except for the central ones, which have white edges. The chin and throat may appear white, continuing down the center of the upper breast. The lower breast and belly feathers are creamy-brown to tawny-buff to off-white, with a variable amount of fine dark wavy barring on a tawny-buff ground colour. The legs and feet (which are feathered almost to the talons) are likewise marked on a buff ground colour, but more faintly. The tail is tawny-buff, mottled dark grey-brown, with about six black-brown bars. The bill and feet are black. The iris is most often orange but is fairly variable. In some European birds, the iris is a bright reddish, blood-orange colour, but in subspecies found in arid, desert-like habitats, the iris can range into an orange-yellow colour (most closely related species generally have yellowish irises, excluding the Indian eagle-owl). Standard measurements and physiology Among standard measurements for the Eurasian eagle-owl, the wing chord measures , the tail measures long, the tarsus measures , and the total length of the bill is . The wings are reportedly the smallest in proportion to body weight of any European owl; the wing loading, measured as weight per unit of wing area, was found to be 0.72 g/cm².
Thus, they have quite high wing loading. The great horned owl has even smaller wings (0.8 g/cm²) relative to its body size. The golden eagle has slightly lower wing loading proportionately (0.65 g/cm²), so the aerial abilities of the two species (beyond the eagle's spectacular ability to stoop) may not be as disparate as expected. Some other owls, such as barn owls, short-eared owls (Asio flammeus), and even the related snowy owls have lower wing loading relative to their size, so are presumably able to fly faster, with more agility, and for more extended periods than the Eurasian eagle-owl. In the relatively small race B. b. hispanus, the middle claw, the largest talon (as opposed to the rear hallux-claw, which is the largest in accipitrids), was found to measure from in length. A female examined in Britain (origins unspecified) had a middle claw measuring , on par in length with a large female golden eagle hallux-claw. Generally, owls do not have talons as proportionately large as those of accipitrids, but have stronger, more robust feet relative to their size. Accipitrids use their talons to inflict organ damage and blood loss, whereas typical owls use their feet to constrict their prey to death, the talons serving only to hold the prey in place or provide incidental damage. The talons of the Eurasian eagle-owl are very large and not often exceeded in size by diurnal raptors. Unlike that of the great horned owl, the overall foot size and strength of the Eurasian eagle-owl is not known to have been tested, but the considerably smaller horned owl has one of the strongest grips ever measured in a bird. The feathers of the ear tufts in Spanish birds (when not damaged) were found to measure from . The ear openings (covered in feathers as in all birds) are relatively uncomplicated for an owl, but are also large, being larger on the right than on the left as in most owls, and proportionately larger than those of the great horned owl. In the female, the ear opening averages on the right and on the left, and in males, averages on the right and on the left. The depth of the facial disc and the size and complexity of the ear opening are directly correlated to the importance of sound in an owl's hunting behaviour. Examples of owls with more complicated ear structures and deeper facial discs are barn owls, long-eared owls (Asio otus), and boreal owls (Aegolius funereus). Given the uncomplicated structure of their ear openings and relatively shallow, undefined facial discs, hunting by ear is secondary to hunting by sight in eagle-owls; this seems to be true for Bubo in general. More sound-based hunters such as the aforementioned species likely focus their hunting activity in more complete darkness. Also, owls with white throat patches such as the Eurasian eagle-owl are more likely to be active in low-light conditions in the hours before and after sunrise and sunset, rather than the darkest times in the middle of the night. The boreal and barn owls, to extend these examples, lack obvious visual cues such as white throat patches (puffed up in displaying eagle-owls), again indicative of primary activity being in darker periods. Distinguishing from other species The great size, bulky, barrel-shaped build, erect ear tufts, and orange eyes render this a distinctive species.
Other than general morphology, the above features differ markedly from those of two of the next largest subarctic owl species in Europe and western Asia, the great grey owl and the greyish to chocolate-brown Ural owl (Strix uralensis), both of which have no ear tufts and have a distinctly rounded head, rather than the blocky shape of the eagle-owl's head. The snowy owl is obviously distinct from most eagle-owls, but during winter the palest Eurasian eagle-owl race (B. b. sibiricus) can appear off-white. Nevertheless, the latter is still distinctively an ear-tufted Eurasian eagle-owl and lacks the pure white background colour and variable blackish spotting of the slightly smaller species (which has relatively tiny, vestigial ear tufts that have been observed flared only on rare occasions). The long-eared owl has a somewhat similar plumage to the eagle-owl, but is considerably smaller (an average female eagle-owl may be twice as long and 10 times heavier than an average long-eared owl). Long-eared owls in Eurasia have vertical striping like that of the Eurasian eagle-owl, while long-eared owls in North America show a more horizontal striping like that of great horned owls. Whether these are examples of mimicry either way is unclear, but it is known that both Bubo owls are serious predators of long-eared owls. The same discrepancy in underside streaking has also been noted in the Eurasian and American representations of the great grey owl. A few other related species overlap minimally in range in Asia, mainly in East Asia and the southern reaches of the Eurasian eagle-owl's range. Three fish owls appear to overlap in range: the brown (Ketupa zeylonensis) in at least northern Pakistan, probably Kashmir, and discontinuously in southern Turkey; the tawny (K. flavipes) through much of eastern China; and Blakiston's fish owl in the Russian Far East, northeastern China, and Hokkaido. Fish owls are distinctively different looking, possessing more scraggy ear tufts that hang to the side rather than sit erect on top of the head, and generally have more uniform, brownish plumages without the contrasting darker streaking of an eagle-owl. The brown fish owl has no feathering on the tarsus or feet, and the tawny has feathering only on the upper portion of the tarsi, but Blakiston's is nearly as extensively feathered on the tarsi and feet as the eagle-owl. Tawny and brown fish owls are both slightly smaller than co-occurring Eurasian eagle-owls, and Blakiston's fish owls are similar in size to or slightly larger than co-occurring large northern eagle-owls. Fish owls, being tied to the edges of fresh water, where they hunt mainly fish and crabs, also have slightly differing, and narrower, habitat preferences. In the lower Himalayas of northern Pakistan and Jammu and Kashmir, along with the brown fish owl, the Eurasian eagle-owl at the limit of its distribution may co-exist with at least two to three other eagle-owls. One of these, the dusky eagle-owl (B. coromandus), is smaller, with more uniform tan-brownish plumage, untidy uniform light streaking below rather than the Eurasian's dark streaking, and an even less well-defined facial disc. The dusky is usually found in slightly more enclosed woodland areas than Eurasian eagle-owls. Another is possibly the spot-bellied eagle-owl (B.
nipalensis), which is strikingly different looking, with stark brown plumage, rather than the warm hues typical of the Eurasian, bold spotting on a whitish background on the belly, and somewhat askew ear tufts that are bold white with light brown crossbars on the front. Both species may occur in some parts of the Himalayan foothills, but they are not currently verified to occur in the same area, in part because of the spot-bellied's preference for dense, primary forest. Most similar, with basically the same habitat preferences, and the only one verified to co-occur with Eurasian eagle-owls (of the race B. b. turcomanus, in Kashmir), is the Indian eagle-owl (B. bengalensis). The Indian species is smaller, with a bolder, blackish facial disc border, more rounded and relatively smaller wings, and partially unfeathered toes. Far to the west, the pharaoh eagle-owl (B. ascalaphus) also seemingly overlaps in range with the Eurasian, at least in Jordan. Although also relatively similar to the Eurasian eagle-owl, the pharaoh eagle-owl is distinguished by its smaller size, paler, more washed-out plumage, and the diminished size of its ear tufts. Moulting The Eurasian eagle-owl's feathers are lightweight and robust, but nevertheless need to be replaced periodically as they become worn. In the Eurasian eagle-owl, this happens in stages, and the first moult starts the year after hatching, with some body feathers and wing coverts being replaced. The next year, the three central secondaries on each wing and three middle tail feathers are shed and regrow, and the following year, two or three primaries and their coverts are lost. In the final year of this postjuvenile moult, the remaining primaries are moulted and all the juvenile feathers will have been replaced. Another moult takes place during years 6–12 of the bird's life. This happens between June and October, after the conclusion of the breeding season, and again it is a staged process, with six to nine main flight feathers being replaced each year. Such a moulting pattern, lasting several years, is repeated throughout the bird's life. Taxonomy The Eurasian eagle-owl was formally described by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae under the binomial name Strix bubo. Although Linnaeus specified the "habitat" as "Europa", the type locality is restricted to Sweden. The Eurasian eagle-owl is now placed in the genus Bubo, which was introduced by André Duméril in 1805. The genus Bubo, with 20 extant species, includes most of the larger owl species in the world today. Based on an extensive fossil record and a central distribution of extant species on that continent, Bubo appears to have originated in Africa, although early radiations seem to branch from southern Asia as well. Two members of the scops owl complex, the giant scops owl (Otus gurneyi) found in Asia and the white-faced scops owls (Ptilopsis) found in Africa, although firmly ensconced in the scops owl group, appear to share some characteristics with the eagle-owls. The Strix genus is also related to Bubo, and is considered a "sister complex", with Pulsatrix possibly being intermediate between the two. The Eurasian eagle-owl appears to represent an expansion of the genus Bubo into the Eurasian continent. A few of the other species of Bubo seem to have been derived from the Eurasian eagle-owl, making it a "paraspecies", or they at least share a relatively recent common ancestor.
The Pharaoh eagle-owl, distributed in the Arabian Peninsula and sections of the Sahara Desert through North Africa where rocky outcrops are found, was until recently considered a subspecies of the Eurasian eagle-owl. The Pharaoh eagle-owl apparently differs by about 3.8% in mitochondrial DNA from the Eurasian eagle-owl, well past the 1.5% minimum genetic difference used to differentiate species. Smaller and paler than Eurasian eagle-owls, the Pharaoh eagle-owl can also be considered a distinct species largely due to its higher-pitched and more descending call, and the observation that Eurasian eagle-owls formerly found in Morocco (B. b. hispanus) apparently did not breed with the co-existing Pharaoh eagle-owls. On the contrary, the race still found together with the Pharaoh eagle-owl in the wild (B. b. interpositus) in the central Middle East has been found to interbreed in the wild with the Pharaoh eagle-owl, although genetic material has indicated that B. b. interpositus may itself be a distinct species from the Eurasian eagle-owl, as it differs from the nominate subspecies of the Eurasian eagle-owl by 2.8% in mitochondrial DNA. Three Asian Eurasian eagle-owl subspecies (B. b. ussuriensis, B. b. kiautschensis and B. b. hemachalana) were found to meet the criterion for subspecies well, with high haplotype diversity despite a relatively recent common ancestor and low genetic diversity. The Indian eagle-owl (B. bengalensis) was also considered a subspecies of the Eurasian eagle-owl until recently, but its smaller size, distinct voice (more clipped and high-pitched than the Eurasian's), and the fact that it is largely allopatric in distribution (filling out the Indian subcontinent) with other Eurasian eagle-owl races have led to it being considered a distinct species. The mitochondrial DNA of the Indian species also appears considerably distinct from the Eurasian species. The Cape eagle-owl (B. capensis) appears to represent a return of this genetic line back into the African continent, where it leads a lifestyle similar to Eurasian eagle-owls, albeit far to the south. Another offshoot of the northern Bubo group is the snowy owl. It appears to have separated from other Bubo species at least 4 million years ago. The fourth and most famous derivation of the evolutionary line that includes the Eurasian eagle-owl is the great horned owl, which appears to have been the result of primitive eagle-owls spreading into North America. According to some authorities, the great horned owls and Eurasian eagle-owls are barely distinct as species, with a similar level of divergence in their plumages as the Eurasian and North American representations of the great grey owl or the long-eared owl. More outward physical differences exist between the great horned owl and the Eurasian eagle-owl than in those two examples, including a great size difference favoring the Eurasian species, the great horned owl's horizontal rather than vertical underside barring, yellow rather than orange eyes, and a much stronger black bracket to the facial disc, not to mention a number of differences in their reproductive behaviour and distinctive voices. Furthermore, genetic research has revealed that the snowy owl is more closely related to the great horned owl than are Eurasian eagle-owls. The most closely related species to the Eurasian eagle-owl, beyond the Pharaoh, Indian, and Cape eagle-owls, is the smaller, less powerful African spotted eagle-owl (B.
africanus), which likely diverged from the line before it radiated away from Africa. Curiously, genetic material indicates that the spotted eagle-owl shares a more recent ancestor with the Indian eagle-owl than with the Eurasian eagle-owl or even the sympatric Cape eagle-owl. Eurasian eagle-owls in captivity have produced apparently healthy hybrids with both the Indian eagle-owl and the great horned owl. The pharaoh, Indian, and Cape eagle-owls and the great horned owl are all broadly similar in size to each other, but all are considerably smaller than the Eurasian eagle-owl, which averages at least 15–30% larger in linear dimensions and 30–50% larger in body mass than these other related species, possibly as the smaller eagle-owls adapted to warmer climates and smaller prey. Fossils from southern France have indicated that during the Middle Pleistocene, Eurasian eagle-owls (this paleosubspecies is given the name B. b. davidi) were larger than they are today; even larger were those found in Azerbaijan and in the Caucasus (either B. b. bignadensis or B. bignadensis), which were deemed to date to the Late Pleistocene. About 12 subspecies are recognized today. Subspecies B. b. hispanus (Rothschild and Hartert, 1910) – Also known as the Spanish eagle-owl or the Iberian eagle-owl. This subspecies mainly occurs on the Iberian Peninsula, where it occupies a majority of Spain and scattered spots in Portugal. B. b. hispanus also inhabited, at least historically, wooded areas of the Atlas Mountains in Northern Africa, making it the only subspecies of Eurasian eagle-owl known to breed in Africa, but this population is thought to be extinct. In terms of its life history, this may be the most extensively studied subspecies of eagle-owl. Among the subspecies, the Spanish eagle-owl is the most similar in plumage to the nominate, but tends to be a somewhat lighter, more greyish colour, with generally lighter streaking and a paler belly. In males, wing chord length can range from and in females from . Wingspans in this subspecies can vary from , averaging about . Among standard measurements of B. b. hispanus, the tail is , the total bill length is and the tarsus is . Adult male B. b. hispanus from Spain weigh , averaging , while females weigh from , averaging . B. b. bubo (Linnaeus, 1758) – Also known as the European eagle-owl, the nominate subspecies inhabits continental Europe from near the Arctic Circle in Norway, Sweden, Finland, the southern Kola Peninsula, and Arkhangelsk, where it ranges north to about latitude 64° 30' N., southward to the Baltic Sea, central Germany, to southeastern Belgium, eastern, central, and southern France to Northern Spain and parts of Italy including Sicily, and through Central and Southeastern Europe to Greece. It intergrades with B. b. ruthenus in northern Russia around the basin of the upper Mezen River and in the eastern vicinity of Gorki Leninskiye, Tambov and Voronezh, and intergrades with B. b. interpositus in northern Ukraine. This is a medium-sized race, measuring in wing chord length in males and . In captive owls of this subspecies, the mean wingspans were for males and for females. The total bill length is . Adult male European eagle-owls from Norway weigh , averaging , while females there weigh from , averaging . Unsurprisingly, adult owls from western Finland were about the same size, averaging . Another set of Finnish eagle-owls averaged somewhat larger still, with males averaging and females averaging .
The subspecies seems to follow Bergmann's rule, with body size decreasing closer to the Equator, as specimens from central Europe average in body mass and those from Italy average about . The weight range for eagle-owls in Italy is . The nominate subspecies is perhaps the darkest of eagle-owl subspecies. Many nominate birds are heavily overlaid with broad black streaking over the upper parts, head and chest. While generally a brownish base colour, many nominate owls can appear rich rufous, especially about the head, upper back and wing primaries. The lower belly is usually a buff brown, as opposed to whitish or yellowish in several other subspecies. Birds from Italy may show a tendency to be smaller than more northern birds and are reportedly duller, possessing paler ground coloration and narrower streaks. In Scandinavia, some birds are so darkly plumaged as to give a blackish-brown impression with almost no paler colour showing. B. b. ruthenus (Buturlin and Zhitkov, 1906) – May also be known as the eastern eagle-owl. This subspecies replaces the nominate in eastern Russia from about latitude 66° N in the Timan-Pechora Basin south to the western Ural Mountains and the upper Don and lower Volga Rivers. This is a fairly large subspecies judging by wing chord length, which is in males and in females. The subspecies is intermediate in coloration between the nominate subspecies and B. b. sibiricus. B. b. ruthenus may be confused with B. b. interpositus, even by authoritative ornithologists. B. b. interpositus is darker than B. b. ruthenus, distinctly more yellowish, less gray, and its brown pattern is darker, heavier, and more regular. The entire colour pattern of B. b. interpositus is brighter, richer, and more contrasting than that of B. b. ruthenus, but B. b. interpositus, though very well characterized, is an intermediate subspecies. B. b. interpositus (Rothschild and Hartert, 1910) – May also be known as Aharoni's eagle-owl or the Byzantine eagle-owl. B. b. interpositus ranges from southern Russia, south of the nominate, with which it intergrades in northern Ukraine, from Bessarabia and the steppes of Ukraine north to Kyiv and Kharkiv, then eastward to the Crimea, the Caucasus and Transcaucasia to northwestern and northern Iran (Elburz, the region of Tehran, and probably the southern Caspian districts), and through Asia Minor south to Syria and Iraq, but not to the Syrian desert, where it is replaced by the pharaoh eagle-owl. The latter and B. b. interpositus reportedly hybridize from western Syria south to southern Palestine. B. b. interpositus may be a distinct species from the Eurasian eagle-owl based on genetic studies. This medium-sized subspecies is about the same size as the nominate subspecies B. b. bubo, with male wing chord lengths and female lengths of . It differs from the nominate subspecies by being paler and more yellow, less ferruginous, and by having a sharper brown pattern; from B. b. turcomanus by being very much darker and less yellow, and also by being much more sharply and heavily patterned with brown. Aharoni's eagle-owl is darker and more rusty than B. b. ruthenus. B. b. sibiricus (Gloger, 1833) – Also known as the western Siberian eagle-owl. This subspecies is distributed from the Ural Mountains of western Siberia and Bashkiria to the mid Ob River and the western Altai Mountains, north to the limits of the taiga, the most northerly distribution known in the species overall. B. b.
sibiricus is a large subspecies, wherein the males measure in wing chord length, while the females are . Captive males were found to measure in wingspan and weigh , whereas the females measure in wingspan and weigh . Males were cited with a mean body mass of approximately . This subspecies is physically the most distinctive of all the Eurasian eagle-owls, and is sometimes considered the most "beautiful and striking". It is the palest of the eagle-owl subspecies; the general coloration is a buffy off-white overlaid with dark markings. The crown, hindneck and underparts are streaked blackish but somewhat sparingly, with the lower breast and belly indistinctly barred, and the primary coverts dark, contrasting with the rest of the wing. The head, back and shoulders are only somewhat dark, unlike in most other subspecies. In the eastern limits of its range, B. b. sibiricus may intergrade with B. b. yenisseensis. B. b. yenisseensis (Buturlin, 1911) – Also known as the eastern Siberian eagle-owl. This subspecies is found in central Siberia from about the Ob eastward to Lake Baikal, north to about latitudes 58° to 59° N on the Yenisei River, south to the Altai, Tarbagatai and Saur mountain ranges and in Tannu Tuva and the Khangai Mountains in northwestern Mongolia, grading into B. b. sibiricus near Tomsk in the west and into B. b. ussuriensis in the east of northern Mongolia. The zone of intergradation with the latter in Mongolia seems to be quite extensive, with intermediate eagle-owls being especially prevalent around the Tuul River Valley, resulting in owls intermediate in coloration between B. b. yenisseensis and B. b. ussuriensis. B. b. yenisseensis is a large subspecies, with wing chord lengths of in males and in females. B. b. yenisseensis is typically much darker, with a more yellowish ground colour, than B. b. sibiricus. It does have a similar amount of dazzling white on its underwing as sibiricus. It is buffy-greyish overall with well-expressed dark patterning on the upper parts and around the head. The underside is overall pale greyish with black streaking. B. b. jakutensis (Buturlin, 1908) – May also be known as the Yakutian eagle-owl. This subspecies inhabits northeastern Siberia, from southern Yakutia north to about latitude 64° N, west in the basin of the Vilyuy River to the upper Nizhnyaya Tunguska River, and east to the coast of the Sea of Okhotsk from Magadan south to the Khabarovsk Krai. It has been reported farther north, from the regions of the upper Kolyma River and the upper Anadyr. Eurasian eagle-owls are absent in Kamchatka and north of the Verkhoyansk Range. This is a large subspecies, rivaling the preceding two subspecies as the largest of all eagle-owls by wing chord length; which subspecies is largest is unclear, considering the extensive overlap in wing size. The wing chord is in males and in females. B. b. jakutensis is much darker and browner above than both B. b. sibiricus and B. b. yenisseensis, though its coloration is more diffused, less sharp, than the latter's. It is more distinctly streaked and barred below than B. b. sibiricus, while being whiter and more heavily vermiculated below than B. b. yenisseensis. This subspecies evidences an almost disheveled, wild appearance, suggesting the fish owl group more than other races do. B. b. jakutensis has more muted brown and conspicuously elongated feathers, somewhat looser-hanging ear tufts, and a bulky, large-headed and almost neckless look, even for an eagle-owl. B. b.
ussuriensis (Poljakov, 1915) – Would presumably also be known as the Ussuri eagle-owl. This subspecies ranges from southeastern Siberia, to the south of the range of B. b. jakutensis, southward through eastern Transbaikal, Amurland, Sakhalin, Ussuriland and the Manchurian portion of the Chinese provinces of Shaanxi, Shanxi and Hebei. This subspecies is also reportedly found in the southern Kuril Islands, ranging down as far as northern Hokkaido, the only Japanese representation of the Eurasian eagle-owl species, although this is apparently not a stable, viable population. Going on wing chord length, B. b. ussuriensis is slightly smaller than the various subspecies from further north in Siberia. Males have a wing chord length of and females are . This subspecies differs from B. b. jakutensis by being much darker throughout. It is also darker than B. b. yenisseensis. The brown markings on the upper parts of B. b. ussuriensis are much more extensive and diffused than in B. b. jakutensis or B. b. yenisseensis, with the result that the white markings are much less conspicuous in B. b. ussuriensis than in the other two subspecies. The under parts are also more buffy, much less white, and more heavily streaked and vermiculated in B. b. ussuriensis than in the two more northerly, larger subspecies. It overlaps considerably with jakutensis, and some birds are of an intermediate appearance. B. b. turcomanus (Eversmann, 1835) – Also known as the steppe eagle-owl. It is distributed in Kazakhstan between the Volga and upper Ural Rivers, the Caspian Sea coast and the former Aral Sea, but is replaced in that country by B. b. omissus in the mountainous south and, in the coastal region of the Mangyshlak Peninsula, by B. b. gladkovi. Out of Kazakhstan, the range of B. b. turcomanus continues through the Transbaikal and the Tarim Basin to western Mongolia. This subspecies appears to be variable in size, but is generally medium-sized. Males can range in wing chord length from and females from . In standard measurements, the tail is , the tarsus is and the bill is . This subspecies can reportedly weigh from . The plumage background colour is pale, yellowish-buff. The dark patterns on the upper- and underparts are paler, less well-defined and more shattered than in B. b. interpositus. Dark longitudinal patterning on the under-parts discontinues above the belly. B. b. turcomanus is greyer than B. b. hemachalana but is otherwise somewhat similar-looking. This subspecies is unique in that it seems to shun mountainous and obviously rocky habitats in favor of low hills, plateaus, lowlands, steppes, and semideserts at or near sea level. B. b. omissus (Dementiev, 1932) – May also be known as the Turkoman eagle-owl or the Turkmenian eagle-owl. B. b. omissus is native to Turkmenistan and adjacent regions of northeastern Iran and western Xinjiang. This is a small subspecies (only nikolskii averages smaller among currently accepted races), with males possessing a wing chord length of and females of . B. b. omissus may be considered a typical sub-desert form. The general coloration is an ochre to buffy off-yellow, with the dark pattern on the upper- and under-parts being relatively undefined. The dark shaft-streaks on the nape are very narrow, while the dark longitudinal patterning on the underparts does not cover the belly. A dark cross-pattern on the belly and flanks is thinner and paler than in B. b. turcomanus, and some individuals may appear almost all pale below. Compared to B. b. 
nikolskii, which may occupy the more southern reaches of the same upland ranges, it is somewhat larger as well as darker, less distinctly yellowish and more heavily streaked. B. b. nikolskii (Zarudny, 1905) – May also be known in English as either the Afghan eagle-owl or the Iranian eagle-owl. The range of B. b. nikolskii appears to extend from the Balkan Mountains and Kopet Dagh in southern Transcaspia eastward to southeastern Uzbekistan or perhaps to southwestern Tadzhikistan, then southward to about 29° N. It may range through Iran, Afghanistan and Baluchistan south to the region of Kalat, or to about the latitude of the Hindu Kush. In Iran, B. b. nikolskii is replaced by B. b. interpositus in the north, and probably also in the northwest, and probably by B. b. hemachalana in Badakhshan, part of northeastern Afghanistan. The birds of southern Tadzhikistan found west of the Pamirs are more or less intermediate between B. b. omissus and B. b. hemachalana. This is the smallest known subspecies of eagle-owl, though the only known measurements have been of wing chord length. Males can measure and females can measure in wing chord. Other than its smaller size, B. b. nikolskii is distinguished from the somewhat similar B. b. omissus by its rusty wash and by being less dark above. B. b. hemachalana (Hume, 1873) – Also known as the Himalayan eagle-owl. The range of B. b. hemachalana extends from the Himalayas, from Pakistan through Jammu and Kashmir and Ladakh to at least Bhutan, also living in Tibet. Its range continues also westward to the Tian Shan system in Russian Turkestan, west to the Karatau, north to the Dzungarian Alatau, east to at least the Tekkes Valley in Xinjiang, and south to the regions of Kashgar, Yarkant and probably the western Kunlun Mountains. This bird is partly migratory, descending to the plains of Turkmenistan with colder winter weather, and apparently reaches northern Balochistan. This is a medium-sized subspecies, though it is larger than other potentially abutting arid Asian eagle-owl subspecies, which share a somewhat similar yellowish ground colour. The male attains a wing chord length of , while the female's wing chord is . The bill measures in length. Eleven adult eagle-owls of the subspecies from the Tibetan Plateau averaged in tail length and in tarsus length, and scaled an average of in mass. This subspecies is physically similar to B. b. turcomanus, but the background colour is more light yellowish-brown and less buff. The dark patterns on the upperparts and underparts are more expressed and less regular than in B. b. turcomanus and B. b. omissus, and the general colour from the mantle to the ear tufts is a more consistent brownish than in most other abutting races. B. b. hemachalana differs from B. b. yenisseensis by being much more yellow on the rump, under tail coverts, and outer tail feathers, rather than grayish or whitish, and the ground coloration of its body is more yellowish above and less whitish below. The dark longitudinal pattern on the under-parts covers the forebelly. B. b. kiautschensis (Reichenow, 1903) – This subspecies could also be known as the North Chinese eagle-owl. It ranges from South Korea and China, south of the range of B. b. ussuriensis, southward to Guangdong and Yunnan, and inland to Sichuan and southern Gansu. This is a smallish subspecies, with the male's wing chord measuring and the female's being . In Korea, this subspecies was found to average in mass, with a range of . B. b. 
kiautschensis is much darker, more tawny and rufous, and slightly smaller than B. b. ussuriensis. According to museum accounts, it resembles the nominate subspecies from Europe (though obviously considerably disparate in distribution) rather closely in coloration, but differs from it by being paler, more mottled, and less heavily marked with brown on the upper parts, by having narrower dark shaft streaks on the under parts, which also average duller and more ochre, and by averaging smaller. Images from South Korea of captive and wild owls show, on the contrary, that this race may be easily as darkly marked as most nominate eagle-owls, with a more rufous base colour, altogether suggesting a richer and more dusky-coloured eagle-owl than almost any other population. B. b. swinhoei (Hartert, 1913) – This subspecies could also be known as the South Chinese eagle-owl. It is endemic to southeastern China. A quite rufescent form, it is somewhat similar to B. b. kiautschensis. In this smallish subspecies, the wing chord measures in both sexes. This is a rather poorly known and described subspecies and is considered invalid by some authorities. Habitat Eagle-owls are distributed somewhat sparsely, but can potentially inhabit a wide range of habitats, with a partiality for irregular topography. They have been found in habitats as diverse as northern coniferous forests and the edges of vast deserts. Essentially, Eurasian eagle-owls have been found living in almost every climatic and environmental condition on the Eurasian continent, excluding the greatest extremes: they are absent from humid rainforest in Southeast Asia and from the high Arctic tundra, in both of which they are more or less replaced by other species of Bubo owls. They are often found in the largest numbers in areas where cliffs and ravines are surrounded by a scattering of trees and bushes. Grassland areas such as alpine meadows or desert-like steppe can also host them, so long as they have the cover and protection of rocky areas. The preference of eagle-owls for places with irregular topography has been reported in most known studies. The obvious benefit of such nesting locations is that both nests and daytime roosts located in rocky areas and/or on steep slopes would be less accessible to predators, including man. Also, they may be attracted to the vicinity of riparian or wetland areas, because the soft soil of wet areas is conducive to burrowing by the small, terrestrial mammals normally preferred in the diet, such as voles and rabbits. Due to their preference for rocky areas, the species is often found in mountainous areas, and can be found up to elevations of in the Alps, in the Himalayas, and in the adjacent Tibetan Plateau. They can also be found living at sea level and may nest amongst rocky sea cliffs. Despite their success in areas such as subarctic zones and mountains that are frigid for much of the year, warmer conditions seem to result in more successful breeding attempts, per studies in the Eifel region of Germany. In a study from Spain, areas primarily consisting of woodland (52% of the study area being forested) were preferred, with pine trees predominating over oaks in the habitats used, as opposed to truly mixed pine-oak woodland. Pine and other coniferous stands are often preferred by great horned owls as well, due to their constant density, which makes overlooking the large birds more likely. 
In mountainous forest, they are not generally found in enclosed wooded areas, as is the tawny owl (Strix aluco), but instead usually near the forest edge. Only 2.7% of the habitat included in the territorial ranges of eagle-owls in the Spanish habitat study consisted of cultivated or agricultural land. Compared to golden eagles, though, they can visit cultivated land more regularly in hunting forays due to their nocturnal habits, which allow them to largely evade human activity. Other accounts make clear that farmland is only frequented where it is less intensively farmed, holds more extensive treed and bushy areas, and often has limited to no irrigation; farmland areas with fallow or abandoned fields are more likely to hold more prey and are subject to less frequent human disturbance. In the Italian Alps, almost no pristine habitat remained, and eagle-owls nested locally in the vicinity of towns, villages, and ski resorts. Although found in the largest numbers in areas sparsely populated by humans, farmland is sometimes inhabited, and they have even been observed living in park-like or other quiet settings within European cities. Since 2005, at least five pairs have nested in Helsinki. This is due in part to feral European rabbits (Oryctolagus cuniculus) having recently populated the Helsinki area, originally from pet rabbits released to the wild. The number is expected to increase due to the growth of the European rabbit population in Helsinki. European hares (Lepus europaeus), often the preferred prey species by biomass for eagle-owls in their natural habitat, live only in rural areas of Finland, not in the city centre. In June 2007, an eagle-owl nicknamed 'Bubi' landed in the crowded Helsinki Olympic Stadium during the European Football Championship qualification match between Finland and Belgium. The match was interrupted for six minutes. After tiring of the match, following Jonathan Johansson's opening goal for Finland, the bird left the scene. Finland's national football team have had the nickname Huuhkajat (Finnish for "Eurasian eagle-owls") ever since. The owl was named "Helsinki Citizen of the Year" in December 2007. In 2020, a brood of three eagle-owl chicks was raised by their mother on a large, well-foliaged planter on an apartment window in the city centre of Geel, Belgium. Distribution The Eurasian eagle-owl is one of the most widely distributed of all owl species, although it is far less wide-ranging than the barn owl, the short-eared owl (Asio flammeus) and the long-eared owl, and lacks the circumpolar range of boreal species such as the great grey owl, boreal owl and northern hawk owl (Surnia ulula). This eagle-owl reaches its westernmost range in the Iberian Peninsula, almost throughout Spain and more spottily in Portugal. From there, the Eurasian eagle-owl ranges widely in the south of France from Toulouse to Monaco and as far north into the central part of the country as Allier. Farther north, they are found sporadically and discontinuously in Luxembourg, southern and western Belgium and scarcely into the Netherlands. It is infrequently found in the southern and central United Kingdom. In Germany, the eagle-owl can be found in large but highly discontinuous areas, mostly in the south and centre, but is almost entirely absent from areas such as Brandenburg. Across from its south German range, this species' range is nearly continuous into the Czech Republic, Slovakia, northern and eastern Hungary and, very spottily, into Poland. 
In the fairly montane countries of Switzerland and Austria, the eagle-owl can be found fairly broadly. In Italy, the Eurasian eagle-owl is found where the habitat is favorable in much of the northern, western and central portions, down to as far south as Melito di Porto Salvo. From Italy, this species sweeps quite broadly along the Mediterranean coast of Southeastern Europe, from Slovenia mostly continuously to most of Greece and Bulgaria. In eastern Europe, the Eurasian eagle-owl is found essentially throughout, from central Romania to Estonia. The species also occupies a majority of Finland and Scandinavia, where it is most broadly found in Norway, somewhat more spottily in Sweden, and in Denmark it is found widely in Jutland (though absent from the islands). The Eurasian eagle-owl's range in Russia is truly massive, with the species apparently nearly unbound by habitat, their distribution only excluding the true Arctic zone, i.e. their range stops around the tree line. If not the most densely populated owl species there, they almost certainly stand as Russia's most widely distributed. From Russia, they are found throughout Central Asia, residing continuously in each nation from Kazakhstan down to Afghanistan. In Asia Minor and the Caucasus, they are found broadly in Georgia and Azerbaijan and somewhat so in western and southern Turkey, but are quite sporadic in distribution overall in Turkey. A spotty range also exists in the Middle East in Syria, Iraq, Lebanon, Israel, Palestine, Jordan and western Iran, the species being found broadly only in northern and western Iran. In South Asia, the Eurasian eagle-owl is found most often in northern Pakistan, northern Nepal and Bhutan, and more marginally into far northern India. This species resides throughout Mongolia and almost the entirety of China (mainly absent only from southern Yunnan and southern Guangxi). From China and eastern Russia, the Eurasian eagle-owl is found throughout Korea, Sakhalin and the Kuril Islands, and rarely into Japan in northern Hokkaido. Besides the Kurils, the farthest eastern part of the range for this species is in Magadan in the Russian Far East. Behaviour The Eurasian eagle-owl is largely nocturnal in activity, as are most owl species, with its activity focused in the first few hours after sunset and the last few hours before sunrise. In the northern stretches of its range, partial diurnal behaviour has been recorded, including active hunting in broad daylight during the late afternoon. In such areas, full nightfall is essentially non-existent at the peak of summer, so eagle-owls must presumably hunt and actively brood at the nest during daylight. The Eurasian eagle-owl has a number of vocalizations that are used at different times. It will usually select obvious topographic features such as rocky pinnacles, stark ridges and mountain peaks to use as regular song posts. These are dotted along the outer edges of the eagle-owl's territory and are visited often, but only for a few minutes at a time. Vocal activity is almost entirely confined to the colder months from late fall through winter, with calling from October through December serving mainly territorial purposes and that from January to February being primarily oriented towards courtship and mating. In a Spanish study, vocalizations began no sooner than 29 minutes after sunset and ended no later than 55 minutes before sunrise. 
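Taking the Spanish timing figures just quoted, the available calling window on a given night is easy to bound. The sketch below is purely illustrative; the sunset and sunrise times are invented example values for a midwinter night, not data from the cited study:

    from datetime import datetime, timedelta

    # Example night (assumed times, for illustration only)
    sunset = datetime(2024, 1, 15, 18, 10)
    sunrise = datetime(2024, 1, 16, 8, 20)

    earliest_call = sunset + timedelta(minutes=29)   # no sooner than 29 min after sunset
    latest_call = sunrise - timedelta(minutes=55)    # no later than 55 min before sunrise

    window = latest_call - earliest_call
    print(f"calling window: {earliest_call:%H:%M} to {latest_call:%H:%M} "
          f"({window.total_seconds() / 3600:.1f} h)")

For these example times the window runs from 18:39 to 07:25, about 12.8 hours, shrinking as nights shorten toward spring.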
The territorial song, which can be heard at great distance, is a deep resonant ooh-hu with emphasis on the first syllable for the male, and a more high-pitched and slightly more drawn-out uh-hu for the female. It is not uncommon for a pair to perform an antiphonal duet. The widely used name for this species in Germany, as well as in some other parts of Europe, is uhu, after its song. At 250–350 Hz, the Eurasian eagle-owl's territorial song or call is deeper and farther-carrying, and is often considered "more impressive", than the territorial songs of the great horned owl or even the slightly larger Blakiston's fish owl, although the horned owl's call averages slightly longer in duration and the Blakiston's call is typically deeper. Other calls include a rather faint, laughter-like OO-OO-oo and a harsh kveck-kveck. Intruding eagle-owls and other potential dangers may be met with a "terrifying", extremely loud hooo. Raucous barks not unlike those of Ural owls or long-eared owls have been recorded, but are deeper and more powerful than those species' barks. Annoyance at close quarters is expressed by bill-clicking and cat-like spitting, and a defensive posture involves lowering the head, ruffling the back feathers, fanning the tail and spreading the wings. The Eurasian eagle-owl rarely assumes the so-called "tall-thin position", in which an owl adopts an upright stance with plumage closely compressed and may stand tightly beside a tree trunk. The long-eared owl is among those most often reported to sit in this pose. The great horned owl has been more regularly recorded using the tall-thin position, if not as consistently as some Strix and Asio owls, and it is commonly thought to aid camouflage when encountering a threatening or novel animal or sound. The Eurasian eagle-owl is a broad-winged species and engages in strong, direct flight, usually consisting of shallow wing beats and long, surprisingly fast glides. It has, unusually for an owl, also been known to soar on updrafts on rare occasions. The latter method of flight has led them to be mistaken for Buteos, which are smaller and quite differently proportioned. Usually when seen flying during the day, it is because the owl has been disturbed or displaced from its roost by humans or mobbing animals, such as crows. Eurasian eagle-owls are highly sedentary, normally maintaining a single territory throughout their adult lives. The Eurasian eagle-owl is considered a completely non-migratory bird, as are all members of the genus Bubo excluding the snowy owl. Even near the northern limits of the range, where winters are harsh and likely to bear little food, eagle-owls do not leave their native range. In 2020, a study presented evidence of short-distance dispersal by adult eagle-owls in the fall subsequent to breeding, with 5 adults found to move over away from their nests. There are additionally claimed cases from Russia of Eurasian eagle-owls moving south for the winter, as the icebound, infamously harsh climate there may be too severe even for these hardy birds and their prey. Similarly, Eurasian eagle-owls living in the Tibetan highlands and Himalayas may in some anecdotal cases vacate their normal territories when winter hits and move south. In both of those examples, these are old, unverified reports; there is no evidence whatsoever of consistent, annual migration by Eurasian eagle-owls, and the birds may eke out a living on their normal territories even in the sparsest times. 
Dietary biology Breeding Territoriality Eurasian eagle-owls are strictly territorial and will defend their territories from interloping eagle-owls year-round, but territorial calling appears to peak around October to February. Territory size is similar to, or occasionally slightly greater than, that of the great horned owl, averaging . Territories are established by the male eagle-owl, who selects the highest points in the territory from which to sing. The high prominence of singing perches allows their song to be heard at greater distances and lessens the need for potentially dangerous physical confrontations in the areas where territories may meet. Nearly as important in territorial behaviour as vocalization is the white throat patch. When taxidermied specimens with flared white throats were placed around the perimeter of eagle-owl territories, male eagle-owls reacted quite strongly and often attacked the stuffed owl, reacting more mildly to a stuffed eagle-owl with a non-flared white throat. Females were less likely to be aggressive towards mounted specimens and did not seem to vary in their response whether exposed to specimens with or without the puffed-up white patch. In January and February, the primary function of vocalization shifts to courtship. More often than not, eagle-owls will pair for life, but they usually engage in courtship rituals annually, most likely to re-affirm pair bonds. When calling for the purposes of courtship, males tend to bow and hoot loudly, but do so in a less contorted manner than the male great horned owl. Courtship in the Eurasian eagle-owl may involve bouts of "duetting", with the male sitting upright and the female bowing as she calls. There may be mutual bowing, billing and fondling before the female flies to a perch where coitus occurs, usually taking place several times over the course of a few minutes. Nests The male selects breeding sites and advertises their potential to the female by flying to them, kneading out a small depression (if soil is present) and making staccato notes and clucking noises. Several potential sites may be presented, with the female selecting one. In Baden-Württemberg, Germany, the time males spent visiting nest sites was found to increase over the pre-laying breeding season from a mean of 29 minutes to 3 hours, with frequent incubation-like sitting by the male. Like all owls, Eurasian eagle-owls do not build nests or add material, but nest on the surface or on material already present. Eurasian eagle-owls normally nest on rocks or boulders, most often utilizing cliff ledges and steep slopes, as well as crevices, gullies, holes or caves. Rocky areas that also provide concealing woodlots and, for hunting purposes, border river valleys and grassy scrubland may be especially attractive. If only low rubble is present, they will nest on the ground between rocks. Often, in more densely forested areas, they have been recorded nesting on the ground, often among the roots of trees, under large bushes and under fallen tree trunks. Steep slopes with dense vegetation are preferred for ground nesting, although some ground nests are surprisingly exposed or in flat spots, such as open areas of the taiga or steppe, ledges of river banks, and between wide tree trunks. All Eurasian eagle-owl nests in the largely forested Altai Krai region of Russia were found to be on the ground, usually at the base of pines. 
This species does not often use other birds' nests, unlike the great horned owl, which often prefers nests built by other animals over any other nesting site. The Eurasian eagle-owl has been recorded in singular cases using nests built by common buzzards (Buteo buteo), golden eagles, greater spotted eagles (Clanga clanga), white-tailed eagles (Haliaeetus albicilla), common ravens (Corvus corax) and black storks (Ciconia nigra). The eagle-owls of the fairly heavily wooded wildlands of Belarus utilize nests built by other birds, i.e. stork or accipitrid nests, more commonly than most eagle-owls, but a majority of their nests are still located on the ground. This is contrary to the indication that ground nests are selected only if rocky areas or other birds' nests are unavailable, as many will utilize ground nests even where large bird nests seem to be accessible. Tree holes being used as nesting sites are even more rarely recorded than nests constructed by other birds. While it may be assumed that the eagle-owl is too large to utilize tree hollows, and other large species like the great grey owl have never been recorded nesting in one, the even more robust Blakiston's fish owl nests exclusively in cavernous hollows. The Eurasian eagle-owl often uses the same nest site year after year. Parental behaviour In Engadin, Switzerland, the male eagle-owl alone hunts until the young are 4 to 5 weeks old, while the female spends all her time brooding at the nest. After this point, the female gradually resumes hunting for both herself and the young, and thus provides a greater range of food for the young. While it may seem contrary to the species' highly territorial nature, there is one verified case of polygamy in Germany, with a male apparently mating with two females, and one of cooperative brooding in Spain, with a third adult of undetermined sex helping a breeding pair care for the chicks. The response of Eurasian eagle-owls to humans approaching the nest is quite variable. The species is often rather less aggressive than some other owls, including related species like the spot-bellied eagle-, great horned and snowy owls, many of the northern Strix species, and even some rather smaller owl species, which often fearlessly attack any person found to be nearing their nests. Occasionally, if a person climbs to an active nest, the adult female eagle-owl will perform a distraction display, in which she feigns an injury. This is an uncommon behavior in most owls and is most often associated with small birds trying to falsely draw the attention of potential predators away from their offspring. More commonly, the adults withdraw to a safe distance, as their nests are usually well-camouflaged. Occasionally, if cornered, both adults and nestlings will perform an elaborate threat display, also rare in owls in general, in which the eagle-owls raise their wings into a semi-circle and puff up their feathers, followed by a snapping of their bills. Apparently, eagle-owls of uncertain and probably exotic origin in Britain are likely to react aggressively to humans approaching the nest. Also, aggressive encounters involving eagle-owls around their nests, despite being historically uncommon, apparently have increased in recent decades in Scandinavia. The discrepancy in aggressiveness at the nest between the Eurasian eagle-owl and its Nearctic counterpart may be correlated with variation in the extent of nest predation that each species endured during its evolutionary history. 
Eggs and offspring development The eggs are normally laid at intervals of three days and are incubated only by the female. Laying generally begins in late winter but may be later in the year in colder habitats. During the incubation period, the female is brought food at the nest by her mate. A single clutch of white eggs is laid; each egg can measure from long by in width, and will usually weigh about . In Central Europe, eggs average , and in Siberia, eggs average . Their eggs are only slightly larger than those of snowy owls and the nominate subspecies of the great horned owl, and similar in size to those of spot-bellied eagle-owls and Blakiston's fish owls. The Eurasian eagle-owl's eggs are noticeably larger than those of the Indian eagle-owl and pharaoh eagle-owl. Usually the clutch size is one or two, rarely three or four, and exceptionally up to six. The average number of eggs laid varies with latitude in Europe. Clutch size ranges from 2.02 to 2.14 in Spain and the massifs of France, and from 1.82 to 1.89 in central Europe and the eastern Alps; in Sweden and Finland, the mean clutch sizes are 1.56 and 1.87, respectively. While variation based on climate is not unusual for wide-ranging Palearctic species, the higher clutch size of western Mediterranean eagle-owls is also probably driven by the presence of lagomorphs in the diet, which provide higher nutritional value than most other regular prey. The average clutch size, attributed as 2.7, was the lowest of any European owl per one study. One species was attributed with an even lower clutch size in North America, the great grey owl with a mean of 2.6, but the mean clutch size was much higher for the same species in Europe, at 4.05. In Spain, incubation is from mid-January to mid-March, the hatching and early nestling period is from late March to early April, fledging and post-fledging dependence can range from mid-April to August, and territorial/courtship behaviour occurs anytime thereafter, i.e. the period between the beginning of juvenile dispersal and egg laying, from September to early January. The same general date parameters were followed in southern France. In the Italian Alps, the mean egg-laying date was similarly February 27, but the young were more likely to be dependent later, as all fledglings were still being cared for by the end of August, and some even lingered under parental care until October. In northern climes, the breeding season shifts later by as much as a month, so that egg laying may be as late as late March or early April. Nonetheless, the Eurasian eagle-owl is one of the earliest-nesting bird species in Europe and northern, temperate Asia. The first egg hatches after 31 to 36 days of incubation. The eggs hatch successively; although the average interval between egg-laying is 3 days, the young tend to hatch no more than a day or two apart. Like all owls that nest in the open, the downy young are often a mottled grey with some white and buff, which provides camouflage. They open their eyes at 4 days of age. The chicks grow rapidly, being able to consume small prey whole after roughly 3 weeks. In Andalusia, the most noticeable development of the young before they leave the nest was the increase in body size, which represented the highest growth rate of any studied owl, faster than either snowy or great horned owls. Body mass increased fourteen times over from 5 days old to 60 days old in this study. 
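The fourteen-fold mass increase quoted above implies a striking average daily growth rate. A back-of-envelope sketch, under the simplifying assumption of exponential growth between day 5 and day 60 (real growth curves are sigmoid), gives roughly 5% per day:

    import math

    fold_increase = 14.0   # mass ratio between day 60 and day 5, from the text
    days = 60 - 5

    daily_factor = fold_increase ** (1 / days)
    print(f"average daily growth factor: {daily_factor:.3f} "
          f"(~{(daily_factor - 1) * 100:.1f}% per day)")

    # Equivalent continuous rate: ln(14) / 55 per day
    print(f"continuous growth rate: {math.log(fold_increase) / days:.3f} per day")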
The male continues to bring prey, leaving it on or around the nest, and the female feeds the nestlings, tearing up the food into suitably sized pieces. The female resumes hunting after about 3 weeks, which increases the food supply to the chicks. Many nesting attempts produce two fledglings, indicating that siblicide is not as common as in other birds of prey, especially some species of eagles. In Spain, the first egg laid is thought to typically produce a male, which reduces the likelihood of fatal sibling aggression arising from size differences: the younger female hatchling, being of the larger sex, is similar in size to its older sibling and so is less likely to be killed. Apparently, the point at which the chicks venture out of the nest is driven by the location of the nest. In elevated nest sites, chicks usually wander out of the nest at 5 to as late as 7 weeks of age, but if the nest is on the ground they have been recorded leaving it as early as 22 to 25 days old. The chicks can walk well at 5 weeks of age, and by 7 weeks they are taking short flights. Hunting and flying skills are not tested before the young eagle-owls leave the nest. Young Eurasian eagle-owls leave the nest by 5–6 weeks of age and typically can fly weakly (a few metres) by about 7–8 weeks of age. Normally, they are cared for at least another month, by the end of which the young eagle-owls are quite assured fliers. A few cases have been confirmed of adult eagle-owls in Spain feeding and caring for post-fledging juvenile eagle-owls that were not their own. A study from southern France found the mean number of fledglings per nest was 1.67. In central Europe, the mean number of fledglings per nest was between 1.8 and 1.9. The mean fledgling rate in the Italian Alps was similar, at 1.89. In the Italian Alps, heavier rainfall during breeding decreased fledging success, because it inhibited the ability of the parents to hunt and potentially exposed nestlings to hypothermia. In the reintroduced population of eagle-owls in the Eifel, Germany, occupied territories produced an average of 1.17 fledglings, but not all occupying pairs attempted to breed, and about 23% of those attempting to breed were unsuccessful. In slightly earlier studies, possibly due to higher persecution rates, the mean number of young leaving the nest was often lower: 1.77 in Bavaria, Germany, 1.1 in lower Austria, and 0.6 in southern Sweden. An experimental supplemental feeding program for young eagle-owls on two small Norwegian islands was found to increase the mean number of fledglings from about 1.2 to 1.7, despite evidence that increased human activity near the nest decreased owlet survivability. While sibling owls stay close together in the stage between leaving the nest and becoming fully fledged, about 20 days after leaving the nest the family unit seems to dissolve, and the young disperse quickly and directly. All told, the dependence of young eagle-owls on their parents lasts for 20 to 24 weeks. Independence in central Europe is attained from September to November. The young normally leave their parents' care on their own, but are sometimes chased away by their parents. Young Eurasian eagle-owls reach sexual maturity by the following year, but do not normally breed until they can establish a territory, at around 2–3 years old. Until they are able to establish their own territories, young eagle-owls spend their lives as nomadic "floaters"; while they also call, they select inconspicuous perch sites, unlike breeding birds. 
Male floaters are especially wary about intruding into an established territory, to avoid potential conspecific aggression. Status The Eurasian eagle-owl has a very wide range across much of Europe and Asia, estimated to be about . In Europe, the population is estimated at 19,000 to 38,000 breeding pairs, and in the whole world at around 250,000 to 2,500,000 individual birds. The population trend is thought to be decreasing because of human activities, but with such a large range and large total population, the International Union for Conservation of Nature has rated the bird as being of least concern. Although roughly equal in adaptability and breadth of distribution, the great horned owl, with a total estimated population of up to 5.3 million individuals, apparently has a total population roughly twice that of the Eurasian eagle-owl. Numerous factors, including a shorter history of systematic persecution, lesser sensitivity to human disturbance while nesting, a somewhat greater ability to adapt to marginal habitats and widespread urbanization, and slightly smaller territories, may play into the horned owl's greater numbers in modern times. Eurasian eagle-owls are listed in Appendix II of the Convention on International Trade in Endangered Species (CITES), meaning that international trade (including in parts and derivatives) is regulated. Longevity The Eurasian eagle-owl is one of the longest-living owls. It can live for up to 20 years in the wild. A 19-year-old was once considered the oldest ringed eagle-owl. Some studies posited that, in protected areas, lifespans of 15–20 years may not be uncommon. A record-breaking specimen was banded in the wild and later encountered at the age of 27 years and 9 months. Like many other bird species, they can live much longer in captivity, where they do not endure difficult natural conditions; they may have survived up to 68 years in zoo collections. Healthy adults normally have no natural predators and are thus considered apex predators. The leading causes of death are man-made: electrocution, traffic accidents, and shooting frequently claim the lives of eagle-owls. Anthropogenic mortality Electrocution was the greatest cause of mortality in 68% of 25 published studies, and accounted, on average, for 38.2% of the reported eagle-owl deaths. This was particularly true in the Italian Alps, where the number of dangerous, uninsulated pylons near nests was extremely high, but it is highly problematic almost throughout the species' European distribution. In one telemetry study, 55% of 27 dispersing young were electrocuted within 1 year of their release from captivity, while electrocution rates of wild-born young are even higher. Mortality in the Swiss Rhine Valley was variable: among radio-tagged, released individuals, most deaths resulted from starvation (48%) rather than human-based causes, but 93% of the wild, untagged individuals found dead had died due to human activities, 46% from electrocution and 43% from collision with vehicles or trains. Insulation of pylons is thought to result in a stabilisation of the local population, due to floaters taking up residence in unoccupied territories that formerly held deceased eagle-owls. Eurasian eagle-owls from Finland were found to die mainly due to electrocution (39%) and collisions with vehicles (22%). Wind turbine collisions can also be a serious cause of mortality locally. 
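The two electrocution statistics above measure different things: the share of studies in which electrocution ranked first, and its average share of deaths across studies. A minimal sketch with invented per-study breakdowns (placeholders, not the 25 real studies) shows how the two aggregates are computed:

    # Hypothetical cause-of-death fractions for four imaginary studies
    studies = [
        {"electrocution": 0.45, "vehicles": 0.30, "shooting": 0.25},
        {"electrocution": 0.50, "vehicles": 0.20, "shooting": 0.30},
        {"electrocution": 0.20, "vehicles": 0.55, "shooting": 0.25},
        {"electrocution": 0.40, "vehicles": 0.35, "shooting": 0.25},
    ]

    # Fraction of studies in which electrocution was the single largest cause
    ranked_first = sum(max(s, key=s.get) == "electrocution" for s in studies)
    print(f"electrocution ranked first in {ranked_first / len(studies):.0%} of studies")

    # Average share of deaths attributed to electrocution across studies
    mean_share = sum(s["electrocution"] for s in studies) / len(studies)
    print(f"mean electrocution share of deaths: {mean_share:.1%}")

The two numbers can diverge widely, which is why both are reported in the literature summarized above.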
Eagle-owls have historically been singled out as a threat to game species, and thus to the economic well-being of landowners, game-keepers, and even governmental agencies, and as such have been subjected to widespread persecution. Local extinctions of Eurasian eagle-owls have been primarily due to persecution. Examples include northern Germany in 1830, the Netherlands sometime in the late 19th century, Luxembourg in 1903, Belgium in 1943, and central and western Germany in the 1960s. In a study trying to determine causes of death for 1,476 eagle-owls from Spain, most deaths were of unknown cause or due to undetermined types of trauma. The largest group whose cause could be determined, 411 birds, was due to collisions, more than half of which were from electrocution, while 313 were due to persecution, and only 85 were directly attributable to natural causes. Clearly, while pylon safety is perhaps the most serious factor to be addressed in Spain, persecution continues to be a massive problem for Spanish eagle-owls. Of seven European nations where modern Eurasian eagle-owl mortality is well studied, continued persecution is by far the largest problem in Spain, although it also continues to be serious (often comprising at least half of studied mortality) in France. In France and Spain, nearly equal numbers of eagle-owls are poisoned (raptors possibly not being the main target of the poisoning) and shot intentionally. Conservation and reintroductions While the eagle-owl remains reasonably numerous in some parts of its range where nature is still relatively little disturbed by human activity, such as the sparsely populated regions of Russia and Scandinavia, concern has been expressed about the future of the Eurasian eagle-owl in Western and Central Europe. There, very few areas are not heavily modified by human civilisation, exposing the birds to the risk of collisions with deadly man-made objects (e.g. pylons) and to a depletion of native prey numbers due to ongoing habitat degradation and urbanisation. In Spain, long-term governmental protection of the Eurasian eagle-owl seems to have had no positive effect in reducing the persecution of eagle-owls. Spanish conservationists have therefore recommended boosting education and stewardship programs to protect eagle-owls from direct killing by local residents. Biologists studying eagle-owl mortality and conservation factors have unanimously recommended proceeding with the proper insulation of electric wires and pylons in areas where the species is present. As this measure is labour-intensive and therefore rather expensive, few efforts have actually been made to insulate pylons in areas with few fiscal resources devoted to conservation, such as rural Spain. In Sweden, a mitigation project was launched to insulate transformers that are frequently damaged by eagle-owl electrocutions. Large reintroduction programs were instituted in Germany after the eagle-owl was deemed extinct there as a breeding species by the 1960s, as a result of a long period of heavy persecution. The largest reintroduction occurred from the 1970s to the 1990s in the Eifel region, near the border with Belgium and Luxembourg. The success of this measure, consisting of more than a thousand eagle-owls reintroduced at an average cost of US$1,500 per bird, is a subject of controversy. Those eagle-owls reintroduced in the Eifel region appear to be able to breed successfully, and enjoy nesting success comparable with that of wild eagle-owls from elsewhere in Europe. 
Mortality levels in the Eifel region, though, appear to remain quite high due to anthropogenic factors. Also, concerns exist about a lack of genetic diversity in the species in this part of Germany. Apparently, the German reintroductions have allowed eagle-owls to repopulate neighbouring parts of Europe, as the breeding populations now occurring in the Low Countries (the Netherlands, Belgium, and Luxembourg) are believed to be the result of influx from regions further east. Smaller reintroductions have been done elsewhere, and the current breeding population in Sweden is believed to be primarily the result of a series of reintroductions. In contrast to the numerous threats and declines incurred by Eurasian eagle-owls, areas where human-dependent, non-native prey species such as brown rats (Rattus norvegicus) and rock doves (Columba livia) have flourished have given the eagle-owls a primary food source and allowed them to occupy regions where they were once marginalized or absent. Occurrence in Great Britain The Eurasian eagle-owl at one time occurred naturally in Great Britain. Some, including the RSPB, have claimed that it disappeared about 10,000–9,000 years ago, after the last ice age, but fossil remains found at Meare Lake Village indicate the eagle-owl occurring as recently as roughly 2,000 years ago. The absence of the Eurasian eagle-owl from British folklore or writings in recent millennia may indicate that the species has not occurred there. The flooding of the land bridge between Britain and continental Europe may have been responsible for their extirpation, as they only disperse over limited distances, although early human persecution presumably played a role as well. Some reports of eagle-owls in Britain have been revealed to actually be great horned owls or Indian eagle-owls, the latter a particularly popular owl in falconry circles. Some breeding pairs do still occur in Britain, though the exact number of pairs and individuals is not definitely known. The World Owl Trust has stated that it believes some eagle-owls occurring in northern England and Scotland are naturally occurring, making the flight of roughly from the west coast of Norway to Shetland and the east coast of Scotland, as well as possibly from the coasts of the Netherlands and Belgium to the south. Although not migratory, young eagle-owls seeking a territory can disperse over notable distances. Prior studies of eagle-owl distribution, however, have indicated a strong reluctance in the species to cross large bodies of water. Many authorities state that the Eurasian eagle-owls occurring in Britain are individuals that have escaped from captivity. While wealthy collectors may have released unwanted eagle-owls up until the 19th century, despite press reports to the contrary, no evidence has been found of any organization or individual recently releasing eagle-owls with the intent of establishing a breeding population. Many feel that the eagle-owl would be classified as an "alien" species. Due to its predatory abilities, many, especially in the press, have expressed alarm about its effect on "native" species. From 1994 to 2007, 73 escaped eagle-owls were not registered as returned, while 50 escapees were recaptured. Several recorded breeding attempts have been studied, and most were unsuccessful, due in large part to incidental disturbance by humans and in some cases to direct persecution, with eggs having been smashed. 
Effect on conservation-dependent species As highly opportunistic predators, Eurasian eagle-owls hunt almost any appropriately sized prey they encounter. Most often, they take whatever prey is locally common, and they can take large numbers of species considered harmful to human financial interests, such as rats, mice, and pigeons. Eurasian eagle-owls take rare or endangered species as well. Among the species considered at least vulnerable to extinction (up to critically endangered, as in the mink and eel, both heavily overexploited by humans) known to be hunted by Eurasian eagle-owls are the Russian desman (Desmana moschata), Pyrenean desman (Galemys pyrenaicus), barbastelle (Barbastella barbastellus), European ground squirrel (Spermophilus citellus), southwestern water vole (Arvicola sapidus), European mink (Mustela lutreola), marbled polecat (Vormela peregusna), lesser white-fronted goose (Anser erythropus), Egyptian vulture (Neophron percnopterus), greater spotted eagle (Clanga clanga), eastern imperial eagle (Aquila heliaca), saker falcon (Falco cherrug), houbara bustard (Chlamydotis undulata), great bustard (Otis tarda), spur-thighed tortoise (Testudo graeca), Atlantic cod (Gadus morhua), European eel (Anguilla anguilla) and lumpfish (Cyclopterus lumpus).
Sodium nitrate
Sodium nitrate is the chemical compound with the formula NaNO3. This alkali metal nitrate salt is also known as Chile saltpeter (large deposits of which were historically mined in Chile) to distinguish it from ordinary saltpeter, potassium nitrate. The mineral form is also known as nitratine, nitratite or soda niter. Sodium nitrate is a white deliquescent solid very soluble in water. It is a readily available source of the nitrate anion (NO3−), which is useful in several reactions carried out on industrial scales for the production of fertilizers, pyrotechnics, smoke bombs and other explosives, glass and pottery enamels, food preservatives (especially for meats), and solid rocket propellant. It has been mined extensively for these purposes. History The first shipment of saltpeter to Europe arrived in England from Peru in 1820 or 1825, right after that country's independence from Spain, but did not find any buyers and was dumped at sea to avoid the customs toll. With time, however, the mining of South American saltpeter became a profitable business (in 1859, England alone consumed 47,000 metric tons). Chile fought the War of the Pacific (1879–1884) against the allied Peru and Bolivia and took over their richest deposits of saltpeter. In 1919, Ralph Walter Graystone Wyckoff determined its crystal structure using X-ray crystallography. Occurrence The largest accumulations of naturally occurring sodium nitrate are found in Chile and Peru, where nitrate salts are bound within mineral deposits called caliche ore. Nitrates accumulate on land through marine-fog precipitation and sea-spray oxidation/desiccation, followed by gravitational settling of airborne NaNO3, KNO3, NaCl, Na2SO4, and I in the hot, dry desert atmosphere. El Niño/La Niña cycles of extreme aridity and torrential rain favor nitrate accumulation through both aridity and water solution, remobilization and transport onto slopes and into basins; capillary solution movement forms layers of nitrates, and pure nitrate forms rare veins. For more than a century, the world supply of the compound was mined almost exclusively from the Atacama desert in northern Chile, until, at the turn of the 20th century, the German chemists Fritz Haber and Carl Bosch developed a process for producing ammonia from the atmosphere on an industrial scale (see Haber process). With the onset of World War I, Germany began converting ammonia from this process into a synthetic Chilean saltpeter, which was as practical as the natural compound in the production of gunpowder and other munitions. By the 1940s, this conversion process had resulted in a dramatic decline in demand for sodium nitrate procured from natural sources. Chile still has the largest reserves of caliche, with active mines in such locations as Pedro de Valdivia, María Elena and Pampa Blanca, and there it used to be called white gold. Sodium nitrate, potassium nitrate, sodium sulfate and iodine are all obtained by the processing of caliche. The former Chilean saltpeter mining communities of Humberstone and Santa Laura were declared UNESCO World Heritage Sites in 2005. 
Synthesis Sodium nitrate is also synthesized industrially by neutralizing nitric acid with sodium carbonate or sodium bicarbonate:

2 HNO3 + Na2CO3 → 2 NaNO3 + H2O + CO2
HNO3 + NaHCO3 → NaNO3 + H2O + CO2

or by neutralizing it with sodium hydroxide (however, this reaction is very exothermic):

HNO3 + NaOH → NaNO3 + H2O

or by mixing stoichiometric amounts of ammonium nitrate with sodium hydroxide, sodium bicarbonate or sodium carbonate:

NH4NO3 + NaOH → NaNO3 + NH4OH
NH4NO3 + NaHCO3 → NaNO3 + NH4HCO3
2 NH4NO3 + Na2CO3 → 2 NaNO3 + (NH4)2CO3

Uses Most sodium nitrate is used in fertilizers, where it supplies a water-soluble form of nitrogen. Its use, which is mainly outside of high-income countries, is attractive since it does not alter the pH of the soil. Another major use is as a complement to ammonium nitrate in explosives. Molten sodium nitrate and its solutions with potassium nitrate have good thermal stability (up to 600 °C) and high heat capacities. These properties make them suitable for thermally annealing metals and for storing thermal energy in solar applications. Food Sodium nitrate is also a food additive used as a preservative and color fixative in cured meats and poultry; it is listed under its INS number 251 or E number E251. It is approved for use in the EU, US, and Australia and New Zealand. Sodium nitrate should not be confused with sodium nitrite, which is also a common food additive and preservative used, for example, in deli meats. Thermal storage Sodium nitrate has also been investigated as a phase-change material for thermal energy recovery, owing to its relatively high melting enthalpy of 178 J/g (see the worked example at the end of this article). Examples of the applications of sodium nitrate for thermal energy storage include solar thermal power technologies and direct steam generating parabolic troughs. Steel coating Sodium nitrate is used in a steel coating process in which it forms a surface layer of magnetite. Health concerns Studies have shown a link between increased levels of nitrates and increased deaths from certain diseases, including Alzheimer's disease, diabetes mellitus, stomach cancer, and Parkinson's disease, possibly through the damaging effect of nitrosamines on DNA; however, little has been done to control for other possible causes in the epidemiological results. Nitrosamines, formed in cured meats containing sodium nitrate and nitrite, have been linked to gastric cancer and esophageal cancer. Sodium nitrate and nitrite are associated with a higher risk of colorectal cancer. Substantial evidence in recent decades, facilitated by an increased understanding of pathological processes, supports the theory that processed meat increases the risk of colon cancer, and that this is due to the nitrate content. A small amount of the nitrate added to meat as a preservative breaks down into nitrite, in addition to any nitrite that may also be added. The nitrite then reacts with protein-rich foods (such as meat) to produce carcinogenic NOCs (N-nitroso compounds). NOCs can be formed either when meat is cured or in the body as meat is digested. However, several things complicate the otherwise straightforward understanding that "nitrates in food raise the risk of cancer": processed meats have no fiber, vitamins, or phytochemical antioxidants, are high in sodium, may contain high fat, and are often fried or cooked at a temperature sufficient to degrade protein into nitrosamines. 
Nitrates are key intermediates and effectors in the primary vasculature signaling which is necessary for all mammals to survive.
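As a rough worked example of the phase-change storage property mentioned under Thermal storage above: with a melting enthalpy of 178 J/g, the latent heat banked by a given mass of molten sodium nitrate follows directly from Q = m × Δh. The one-tonne quantity below is an assumed illustration, not a figure from the article:

    MELTING_ENTHALPY_J_PER_G = 178.0   # value quoted in the article
    mass_g = 1_000_000                 # 1 tonne, chosen purely for illustration

    energy_j = MELTING_ENTHALPY_J_PER_G * mass_g
    print(f"latent heat stored: {energy_j / 1e6:.0f} MJ "
          f"({energy_j / 3.6e6:.1f} kWh)")   # ~178 MJ, ~49 kWh per tonne

This energy is absorbed and released entirely across the melting point, which is what makes molten-salt phase-change storage attractive for solar thermal plants.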
Megapode
The megapodes, also known as incubator birds or mound-builders, are stocky, medium-large, chicken-like birds with small heads and large feet in the family Megapodiidae. Their name literally means "large foot" and is a reference to the heavy legs and feet typical of these terrestrial birds. All are browsers, and all except the malleefowl occupy wooded habitats. Most are brown or black in color. Megapodes are superprecocial, hatching from their eggs in the most mature condition of any bird. They hatch with open eyes, bodily coordination and strength, full wing feathers, and downy body feathers, and are able to run, pursue prey and, in some species, fly on the day they hatch. Etymology From the Greek megas (= great) and pous, podos (= foot). Description Megapodes are medium-sized to large terrestrial birds with large legs and feet with sharp claws. Megapodes are of three kinds: scrubfowl, brushturkeys, and the malleefowl or lowan. The largest members of the clade are the species of Alectura and Talegalla. The smallest are the Micronesian scrubfowl (Megapodius laperouse) and the Moluccan scrubfowl (Eulipoa wallacei). They have small heads, short beaks, and rounded and large wings. Their flying abilities vary within the clade. They present the hallux at the same level as the other toes, just like the species of the clade Cracidae; the other Galliformes have their halluces raised above the level of the front toes. Distribution and habitat Megapodes are found in the broader Australasian region, including islands in the western Pacific, Australia, New Guinea, and the islands of Indonesia east of the Wallace Line, but also the Andaman and Nicobar Islands in the Bay of Bengal. The distribution of the family has contracted in the Pacific with the arrival of humans, and a number of island groups such as Fiji, Tonga, and New Caledonia have lost many or all of their species. Raoul Island, a New Zealand territory and the main island of the Kermadec Islands, may also have once had a species of megapode, based on settler accounts. Behaviour and ecology Megapodes are mainly solitary birds that do not incubate their eggs with their body heat as other birds do, but bury them. Their eggs are unusual in having a large yolk, making up 50–70% of the egg weight. The birds are best known for building massive nest mounds of decaying vegetation, which the male attends, adding or removing litter to regulate the internal heat while the eggs develop (see the illustrative sketch below). However, some bury their eggs in other ways; there are burrow-nesters which use geothermal heat, and others which simply rely on the heat of the sun warming the sand. Some species vary their incubation strategy depending on the local environment. Although the Australian brushturkey was thought to exhibit temperature-dependent sex determination, this was later proven false; temperature does, however, affect embryo mortality and the resulting offspring sex ratios. The nonsocial nature of their incubation raises the question of how the hatchlings come to recognise other members of their species, which is accomplished through imprinting in other members of the order Galliformes. Research suggests that individual species of megapode make an instinctive visual recognition of specific movement patterns. Megapode chicks do not have an egg tooth; they use their powerful claws to break out of the egg and then tunnel their way up to the surface of the mound, lying on their backs and scratching at the sand and vegetable matter. 
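The male's mound tending described above behaves much like a simple bang-bang thermostat: litter is piled on when the mound cools and raked open when decomposition overheats it. The sketch below is purely an analogy with invented numbers; the target temperature is an assumed, plausible incubation value, not a figure from this article:

    import random

    TARGET_C = 33.0    # assumed incubation temperature
    TOLERANCE_C = 1.0  # assumed acceptable band around the target

    temp_c = 30.0
    for day in range(10):
        temp_c += random.uniform(-1.5, 1.5)  # weather and decomposition drift
        if temp_c < TARGET_C - TOLERANCE_C:
            temp_c += 1.0                    # male piles on insulating litter
            action = "adds litter"
        elif temp_c > TARGET_C + TOLERANCE_C:
            temp_c -= 1.0                    # male rakes the mound open to vent
            action = "opens mound"
        else:
            action = "leaves mound alone"
        print(f"day {day}: {temp_c:4.1f} C, male {action}")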
Similar to other superprecocial birds, they hatch fully feathered and active, already able to fly and live independently from their parents. In megapodes, superprecociality apparently evolved secondarily from brooding and at least loose parental care, as is more typical in Galliformes. Eggs previously assigned to Genyornis have been reassigned to giant megapode species. Some dietary and chronological data previously assigned to dromornithids may instead be assigned to the giant megapodes. Megapodes share some similarities with the extinct Enantiornithes in terms of their superprecocial life cycle, though also several differences. Species The more than 20 living species are placed in seven genera. Although the evolutionary relationships within the Megapodiidae are especially uncertain, the morphological groups are clear:
Phylogeny
Taxonomy
Genus †Mwalau Worthy et al. 2015
 †Mwalau walterlinii Worthy et al. 2015 (Vanuatu)
Genus †Ngawupodius Boles & Ivison 1999
 †Ngawupodius minya Boles & Ivison 1999
Scrubfowl group
Genus: Macrocephalon
 Maleo, Macrocephalon maleo
Genus: Eulipoa (sometimes included in Megapodius)
 Moluccan megapode, Eulipoa wallacei
Genus: Megapodius
 Tongan megapode, Megapodius pritchardii
 Micronesian megapode, Megapodius laperouse
  Marianas Island megapode, Megapodius laperouse laperouse
  Palau Island megapode, Megapodius laperouse senex
 Nicobar megapode, Megapodius nicobariensis
 Philippine megapode, Megapodius cumingii
 Sula megapode, Megapodius bernsteinii
 Tanimbar megapode, Megapodius tenimberensis
 Dusky megapode, Megapodius freycinet
 Forsten's megapode, Megapodius (freycinet) forstenii
 Biak scrubfowl, Megapodius geelvinkianus
 Melanesian megapode, Megapodius eremita
 Vanuatu megapode, Megapodius layardi
 New Guinea scrubfowl, Megapodius decollatus
 Orange-footed scrubfowl, Megapodius reinwardt
 †Pile-builder scrubfowl, Megapodius molistructor Balouet & Olson 1989
 †Viti Levu scrubfowl, Megapodius amissus Worthy 2000
 †Consumed scrubfowl, Megapodius alimentum Steadman 1989a
 †M. andamanensis Walter 1980 nomen dubium [oospecies]
 †M. burnabyi Gray 1861 nomen dubium [oospecies]
 †Raoul Island scrubfowl, M. sp.
 †'Eua scrubfowl (small-footed megapode), M. sp.
 †Lifuka scrubfowl, M. sp.
 †Stout Tongan megapode, M. sp.
 †Large Vanuatu megapode, M. sp.
 †Large Solomon Islands megapode, M. sp.
 †New Caledonia megapode, M. sp.
 †Loyalty megapode, M. sp.
 †New Ireland scrubfowl (large Bismarck's megapode), M. sp.
Malleefowl group
Genus: Leipoa
 Malleefowl, Leipoa ocellata
Brushturkey group
Genus: Alectura
 Australian brushturkey, Alectura lathami
Genus: Aepypodius
 Wattled brushturkey, Aepypodius arfakianus
 Waigeo brushturkey, Aepypodius bruijnii
Genus: Talegalla
 Red-billed brushturkey, Talegalla cuvieri
 Black-billed brushturkey, Talegalla fuscirostris
 Collared brushturkey, Talegalla jobiensis
Genus: †Progura
 Progura gallinacea – Queensland, Pleistocene
 Progura campestris – South Australia, Pleistocene
Genus: †Latagallina
 Latagallina naracoortensis (formerly Progura naracoortensis) – New South Wales, South Australia, Pleistocene
 Latagallina olsoni – South Australia, Pleistocene
Incertae sedis
Genus: †Garrdimalga
 Garrdimalga mcnamarai – South Australia, Pleistocene
Human uses In their native Oceania, indigenous peoples protect their nesting sites, as their eggs are considered to be delicacies. Their eggs are about twice the size of chicken eggs and the yolks are roughly four times as massive.
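The two egg statements in this article, yolks making up 50–70% of egg weight and eggs twice the size of chicken eggs with yolks four times as massive, can be cross-checked with a short sketch. The chicken figures are rough, assumed textbook values (about a 60 g egg with a roughly 30% yolk), not data from the article:

    chicken_egg_g = 60.0    # assumed typical chicken egg mass
    chicken_yolk_g = 18.0   # assumed yolk mass, ~30% of the egg

    megapode_egg_g = 2 * chicken_egg_g    # "twice the size of chicken eggs"
    megapode_yolk_g = 4 * chicken_yolk_g  # "yolks roughly four times as massive"

    yolk_fraction = megapode_yolk_g / megapode_egg_g
    print(f"implied megapode yolk fraction: {yolk_fraction:.0%}")  # 60%, inside 50-70%

Under these assumptions the implied yolk fraction is about 60%, squarely within the 50–70% range given earlier, so the two statements are mutually consistent.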
Biology and health sciences
Galliformes
Animals
200421
https://en.wikipedia.org/wiki/Little%20owl
Little owl
The little owl (Athene noctua), also known as the owl of Athena or owl of Minerva, is a bird that inhabits much of the temperate and warmer parts of Europe, the Palearctic east to Korea, and North Africa. It was introduced into Britain at the end of the 19th century and into the South Island of New Zealand in the early 20th century. This owl is a member of the typical or true owl family Strigidae, which contains most species of owl, the other grouping being the barn owls, Tytonidae. It is a small (approximately 22 cm long), cryptically coloured, mainly nocturnal species found in a range of habitats including farmland, woodland fringes, steppes and semi-deserts. It feeds on insects, earthworms, other invertebrates and small vertebrates. Males hold territories which they defend against intruders. This owl is a cavity nester and a clutch of about four eggs is laid in spring. The female does the incubation and the male brings food to the nest, first for the female and later for the newly hatched young. As the chicks grow, both parents hunt and bring them food, and the chicks leave the nest at about seven weeks of age. Being a common species with a wide range and large total population, the International Union for Conservation of Nature has assessed its conservation status as "least concern".

Taxonomy
The little owl was formally described in 1769 by the Italian naturalist Giovanni Antonio Scopoli under the binomial name Strix noctua. The little owl is now placed in the genus Athene, which was introduced by the German zoologist Friedrich Boie in 1822. The owl was designated as the type species of the genus by George Robert Gray in 1841. The genus name, Athene, commemorates the Greek goddess Athena (whose name is also at times spelled Athene), as the owl was a symbol of wisdom. The species name noctua has, in effect, the same meaning, being the Latin name of an owl sacred to Minerva, Athena's Roman counterpart. The little owl is probably most closely related to the spotted owlet (Athene brama). A number of variations occur over the bird's wide range and there is some dispute over their taxonomy. The most distinct is the pale grey-brown Middle-Eastern type known as the Syrian little owl (A. n. lilith). A 2009 paper in the ornithological journal Dutch Birding (vol. 31: 35–37) advocated splitting the southeastern races as a separate species, Lilith's owl (Athene glaux), with subspecies A. g. glaux, A. g. indigena, and A. g. lilith. DNA evidence and vocal patterns support this proposal. Other forms include another pale race, the north African A. n. desertae, and three intermediate subspecies, A. n. indigena of southeast Europe and Asia Minor, A. n. glaux in north Africa and southwest Asia, and A. n. bactriana of central Asia. Differences in size of bird and length of toes, reasons put forward for splitting off A. n. spilogastra, seem inconclusive; A. n. plumipes has been claimed to differ genetically from other members of the species, and further investigation is required. In general, the different varieties both overlap with the ranges of neighbouring groups and intergrade (hybridise) with them across their boundaries. Thirteen subspecies are recognised:
A. n. noctua (Scopoli, 1769) – central, south, southeast Europe to northwest Russia
A. n. bactriana Blyth, 1847 – Iraq and Azerbaijan to Pakistan and northwest India
A. n. glaux (Savigny, 1809) – coastal north Africa to southwest Israel
A. n. impasta Bangs & Peters, JL, 1928 – west-central China
A. n. indigena Brehm, CL, 1855 – Romania to Greece through Ukraine and Turkey east to south Russia
A. n. lilith Hartert, E, 1913 – Cyprus, south Turkey to Iraq and the Sinai (Egypt)
A. n. ludlowi Baker, ECS, 1926 – Himalayas
A. n. orientalis Severtsov, 1873 – northeast Kazakhstan and northwest China
A. n. plumipes Swinhoe, 1870 – Mongolia, south-central Siberia and northeast China
A. n. saharae (Kleinschmidt, 1909) – Morocco to west Egypt and central Arabia
A. n. somaliensis Reichenow, 1905 – east Ethiopia and Somalia
A. n. spilogastra Heuglin, 1863 – east Sudan, Eritrea and northeast Ethiopia
A. n. vidalii Brehm, AE, 1857 – west Europe

Description
The little owl is a small owl with a flat-topped head, a plump, compact body and a short tail. The facial disc is flattened above the eyes, giving the bird a frowning expression. The plumage is greyish-brown, spotted, streaked and barred with white. The underparts are pale and streaked with darker colour. Both sexes are about 22 cm in length. The adult little owl of the most widespread form, the nominate A. n. noctua, is white-speckled brown above, and brown-streaked white below. It has a large head, long legs, and yellow eyes, and its white "eyebrows" give it a stern expression. Juveniles are duller, and lack the adult's white crown spots. This species has a bounding flight like a woodpecker. Moult begins in July and continues to November, with the male starting before the female. The call is a querulous kiew, kiew. Less frequently, various whistling or trilling calls are uttered. In the breeding season, other more modulated calls are made, and a pair may call in duet. Various yelping, chattering or barking sounds are made in the vicinity of the nest.

Distribution and habitat
The little owl is widespread across Europe, Asia and North Africa. Its range in Eurasia extends from the Iberian Peninsula and Denmark eastwards to China and southwards to the Himalayas. In Africa it is present from Mauritania to Egypt, the Red Sea and Arabia. It was introduced to the United Kingdom in the 19th century, and has spread across much of England and the whole of Wales. It was introduced to Otago in New Zealand by the local acclimatisation society in 1906, and to Canterbury a little later, and is now widespread in the eastern and northern South Island; it is partially protected under Schedule 2 of New Zealand's Wildlife Act 1953, whereas most introduced birds explicitly have no protection or are game birds. This is a sedentary species that is found in open countryside in a great range of habitats. These include agricultural land with hedgerows and trees, orchards, woodland verges, parks and gardens, as well as steppes and stony semi-deserts. It is also present in treeless areas such as dunes, and in the vicinity of ruins, quarries and rocky outcrops. It sometimes ventures into villages and suburbs. In the United Kingdom it is chiefly a bird of the lowlands and usually occurs at low elevations. In continental Europe and Asia it may be found at much higher elevations; one individual was recorded at a very high elevation in Tibet.

Behaviour and ecology
This owl usually perches in an elevated position ready to swoop down on any small creature it notices. It feeds on prey such as insects and earthworms, as well as small vertebrates including amphibians, reptiles, birds and mammals. It may pursue prey on the ground and it caches surplus food in holes or other hiding places.
A study of the pellets of indigestible material that the birds regurgitate found that mammals formed 20 to 50% of the diet and insects 24 to 49%. Mammals taken included mice, rats, voles, shrews, moles and rabbits. Birds were mostly taken during the breeding season and were often fledglings, including the chicks of game birds. The insects included Diptera, Dermaptera, Coleoptera, Lepidoptera and Hymenoptera. Some vegetable matter (up to 5%) was included in the diet and may have been ingested incidentally.

The little owl is territorial, the male normally remaining in one territory for life. However, the boundaries may expand and contract, being largest in the courtship season in spring. The home range, in which the bird actually hunts for food, varies with the type of habitat and time of year. Home ranges that incorporate a high diversity of habitats are much smaller (under 2 ha) than those of birds breeding in monotonous farmland (over 12 ha). Larger home ranges result in increased flight activity, longer foraging trips and fewer nest visits. If a male intrudes into the territory of another, the occupier approaches and emits its territorial calls. If the intruder persists, the occupier flies at him aggressively. If this is unsuccessful, the occupier repeats the attack, this time trying to make contact with his claws. In retreat, an owl often drops to the ground and makes a low-level escape. The territory is more actively defended against a strange male than against a known male from a neighbouring territory; it has been shown that the little owl can recognise familiar birds by voice. The little owl is partly diurnal and often perches boldly and prominently during the day. If living in an area with a large amount of human activity, little owls may grow used to humans and will remain on their perch, often in full view, while people are around. The little owl has a life expectancy of about 16 years. However, many birds do not reach maturity; severe winters can take their toll and some birds are killed by road vehicles at night, so the average lifespan may be on the order of three years.

Breeding
This owl becomes more vocal at night as the breeding season approaches in late spring. The nesting location varies with habitat, nests being found in holes in trees, in cliffs, quarries, walls, old buildings, river banks and rabbit burrows. A clutch of three to five eggs is laid (occasionally two to eight). The eggs are broadly elliptical, white and without gloss. They are incubated by the female, who sometimes starts sitting after the first egg is laid. While she is incubating the eggs, the male brings food for her. The eggs hatch after 28 or 29 days. At first the chicks are brooded by the female and the male brings in food which she distributes to them. Later, both parents are involved in hunting and feeding them. The young leave the nest at about seven weeks, and can fly a week or two later. Usually there is a single brood, but when food is abundant there may be two. The energy reserves that little owl chicks are able to build up in the nest influence their post-fledging survival: birds in good physical condition have a much higher chance of survival than those in poor condition. When the young disperse, they seldom travel far. Pairs of birds often remain together all year round, and the bond may last until one partner dies.

Status
A. noctua has an extremely large range.
It has been estimated that there are between 560,000 and 1.3 million breeding pairs in Europe, and as Europe comprises 25 to 49% of the global range, the world population may be between 5 million and 15 million birds. The population is believed to be stable, and for these reasons the International Union for Conservation of Nature has assessed the bird's conservation status as being of "least concern".

In human culture
Owls have often been depicted from the Upper Palaeolithic onwards, in forms from statuettes and drawings to pottery and wooden posts, but in the main they are generic rather than identifiable to species. The little owl is, however, closely associated with the Greek goddess Athena and the Roman goddess Minerva, and hence represents wisdom and knowledge. A little owl with an olive branch appears on a Greek tetradrachm coin from 500 BC (a copy of which appears on the modern Greek one-euro coin) and in a 5th-century BC bronze statue of Athena holding the bird in her hand. The call of a little owl was thought to have heralded the murder of Julius Caesar.

In 1843 several little owls that had been brought from Italy were released by the English naturalist Charles Waterton on his estate at Walton Hall in Yorkshire, but these failed to establish themselves. Later successful introductions were made by Lord Lilford on his Lilford Hall estate near Oundle in Northamptonshire and by Edmund Meade-Waldo at Stonewall Park near Edenbridge, Kent. From these areas the birds spread and had become abundant by 1900. The owls acquired a bad reputation and were believed to prey on game bird chicks. They therefore became a concern to game breeders, who tried to eliminate them. In 1935 the British Trust for Ornithology initiated a study into the little owl's diet led by the naturalist Alice Hibbert-Ware. The report showed that the owls feed almost entirely on insects, other invertebrates and small mammals, and thus posed little threat to game birds.

There is evidence that from the 19th century little owls were occasionally kept as ornamental birds. In Italy, tamed and docked little owls were kept to hunt rodents and insects in the house and garden. More common was keeping little owls for so-called cottage hunting, which took advantage of the fact that many bird species react to owls with aggressive behaviour when they discover them during the day (mobbing). Such hunts, particularly with tawny owls, were practiced in Italy from 350 BC until the 20th century and in Germany from the 17th to the 20th century. In Italy, mainly skylarks were caught in this way. The main place of trade was Crespina, a small town near Pisa. Here, little owls were traditionally sold on 29 September, after being taken from their nests and raised in human care. Only since the 1990s has this trade been officially banned; however, because of the long cultural tradition of hunting with little owls, exemptions are still granted. Thus, there is still a breeding center for little owls near Crespina, which is maintained by hunters. In 1992, the little owl appeared as a watermark on Jaap Drupsteen's 100 guilder banknote for the Netherlands.
Biology and health sciences
Strigiformes
Animals
200458
https://en.wikipedia.org/wiki/Reflexive%20relation
Reflexive relation
In mathematics, a binary relation R on a set X is reflexive if it relates every element of X to itself. An example of a reflexive relation is the relation "is equal to" on the set of real numbers, since every real number is equal to itself. A reflexive relation is said to have the reflexive property or is said to possess reflexivity. Along with symmetry and transitivity, reflexivity is one of three properties defining equivalence relations.

Etymology
The word reflexive is originally derived from the Medieval Latin reflexivus ('recoiling' [cf. reflex], or 'directed upon itself') (c. 1250 AD), from the classical Latin reflexus ('turn away', 'reflection') + -īvus (suffix). The word entered Early Modern English in the 1580s. The sense of the word meaning 'directed upon itself', as now used in mathematics, survived mostly through its use in philosophy and grammar (cf. reflexive verb and reflexive pronoun). The first explicit use of "reflexivity", that is, describing a relation as having the property that every element is related to itself, is generally attributed to Giuseppe Peano in his Arithmetices principia (1889), wherein he defines one of the fundamental properties of equality to be a = a. The first use of the word reflexive in the sense of mathematics and logic was by Bertrand Russell in his Principles of Mathematics (1903).

Definitions
A relation R on the set X is said to be reflexive if for every x in X, (x, x) ∈ R. Equivalently, letting I_X = {(x, x) : x ∈ X} denote the identity relation on X, the relation R is reflexive if I_X ⊆ R.

The reflexive closure of R is the union R ∪ I_X, which can equivalently be defined as the smallest (with respect to ⊆) reflexive relation on X that is a superset of R. A relation R is reflexive if and only if it is equal to its reflexive closure.

The reflexive reduction or irreflexive kernel of R is the smallest (with respect to ⊆) relation on X that has the same reflexive closure as R. It is equal to R \ I_X = {(x, y) ∈ R : x ≠ y}. The reflexive reduction of R can, in a sense, be seen as a construction that is the "opposite" of the reflexive closure of R. For example, the reflexive closure of the canonical strict inequality < on the reals is the usual non-strict inequality ≤, whereas the reflexive reduction of ≤ is <.

Related definitions
There are several definitions related to the reflexive property. The relation R is called:
irreflexive, anti-reflexive, or aliorelative if it does not relate any element to itself; that is, if x R x holds for no x ∈ X. A relation is irreflexive if and only if its complement in X × X is reflexive. An asymmetric relation is necessarily irreflexive. A transitive and irreflexive relation is necessarily asymmetric.
left quasi-reflexive if whenever x, y ∈ X are such that x R y, then necessarily x R x.
right quasi-reflexive if whenever x, y ∈ X are such that x R y, then necessarily y R y.
quasi-reflexive if every element that is part of some relation is related to itself. Explicitly, this means that whenever x, y ∈ X are such that x R y, then necessarily x R x and y R y. Equivalently, a binary relation is quasi-reflexive if and only if it is both left quasi-reflexive and right quasi-reflexive. A relation is quasi-reflexive if and only if its symmetric closure is left (or right) quasi-reflexive.
antisymmetric if whenever x, y ∈ X are such that x R y and y R x, then necessarily x = y.
coreflexive if whenever x, y ∈ X are such that x R y, then necessarily x = y. A relation is coreflexive if and only if its symmetric closure is antisymmetric.
A reflexive relation on a nonempty set X can neither be irreflexive, nor asymmetric (R is called asymmetric if x R y implies not y R x), nor antitransitive (R is antitransitive if x R y and y R z imply not x R z).
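These definitions translate directly into code. The following Python sketch is an illustrative addition, not part of the original article; the encoding of a relation as a set of ordered pairs and the function names are conventions chosen here:

    # A relation R on a set X is encoded as a set of ordered pairs (x, y).
    def is_reflexive(R, X):
        """True if (x, x) is in R for every x in X."""
        return all((x, x) in R for x in X)

    def is_irreflexive(R, X):
        """True if (x, x) is in R for no x in X."""
        return all((x, x) not in R for x in X)

    def reflexive_closure(R, X):
        """Smallest reflexive relation on X containing R: R united with the identity."""
        return R | {(x, x) for x in X}

    def reflexive_reduction(R):
        """Smallest relation with the same reflexive closure as R: R minus the identity."""
        return {(x, y) for (x, y) in R if x != y}

    # The closure of < on {1, 2, 3} is <=, and the reduction of <= is <:
    X = {1, 2, 3}
    lt = {(x, y) for x in X for y in X if x < y}
    le = {(x, y) for x in X for y in X if x <= y}
    assert is_irreflexive(lt, X) and is_reflexive(le, X)
    assert reflexive_closure(lt, X) == le and reflexive_reduction(le) == lt

The last two assertions mirror the closure/reduction example given above for the strict and non-strict inequalities.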
Examples
Examples of reflexive relations include:
"is equal to" (equality)
"is a subset of" (set inclusion)
"divides" (divisibility)
"is greater than or equal to"
"is less than or equal to"
Examples of irreflexive relations include:
"is not equal to"
"is coprime to" on the integers larger than 1
"is a proper subset of"
"is greater than"
"is less than"
An example of an irreflexive relation, which means that it does not relate any element to itself, is the "greater than" relation (x > y) on the real numbers. Not every relation which is not reflexive is irreflexive; it is possible to define relations where some elements are related to themselves but others are not (that is, neither all nor none are). For example, the binary relation "the product of x and y is even" is reflexive on the set of even numbers, irreflexive on the set of odd numbers, and neither reflexive nor irreflexive on the set of natural numbers. An example of a quasi-reflexive relation is "has the same limit as" on the set of sequences of real numbers: not every sequence has a limit, and thus the relation is not reflexive, but if a sequence has the same limit as some sequence, then it has the same limit as itself. An example of a left quasi-reflexive relation is a left Euclidean relation, which is always left quasi-reflexive but not necessarily right quasi-reflexive, and thus not necessarily quasi-reflexive. An example of a coreflexive relation is the relation on integers in which each odd number is related to itself and there are no other relations. The equality relation is the only example of a relation that is both reflexive and coreflexive, and any coreflexive relation is a subset of the identity relation. The union of a coreflexive relation and a transitive relation on the same set is always transitive.

Number of reflexive relations
The number of reflexive relations on an n-element set is 2^(n(n−1)): the n diagonal pairs (x, x) are forced to be present, and each of the remaining n² − n pairs may independently be included or not.

Philosophical logic
Authors in philosophical logic often use different terminology. Reflexive relations in the mathematical sense are called totally reflexive in philosophical logic, and quasi-reflexive relations are called reflexive.
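As a sanity check, the count 2^(n(n−1)) given above can be verified by brute-force enumeration for small n. This is a minimal sketch added for illustration, not code from the article:

    from itertools import product

    def count_reflexive(n):
        """Count reflexive relations on {0, ..., n-1} by enumerating every relation."""
        X = range(n)
        pairs = [(x, y) for x in X for y in X]
        total = 0
        for bits in product([False, True], repeat=len(pairs)):
            R = {p for p, keep in zip(pairs, bits) if keep}
            if all((x, x) in R for x in X):
                total += 1
        return total

    # Diagonal pairs are forced; the other n*(n-1) pairs are free choices.
    for n in range(4):
        assert count_reflexive(n) == 2 ** (n * (n - 1))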
Mathematics
Set theory
null
200459
https://en.wikipedia.org/wiki/Symmetric%20relation
Symmetric relation
A symmetric relation is a type of binary relation. Formally, a binary relation R over a set X is symmetric if for all a, b ∈ X, a R b implies b R a, where the notation a R b means that (a, b) ∈ R. An example is the relation "is equal to", because if a = b is true then b = a is also true. If R^T represents the converse of R, then R is symmetric if and only if R = R^T. Symmetry, along with reflexivity and transitivity, are the three defining properties of an equivalence relation.

Examples
In mathematics
"is equal to" (equality) (whereas "is less than" is not symmetric)
"is comparable to", for elements of a partially ordered set
"... and ... are odd"
Outside mathematics
"is married to" (in most legal systems)
"is a fully biological sibling of"
"is a homophone of"
"is a co-worker of"
"is a teammate of"

Relationship to asymmetric and antisymmetric relations
By definition, a nonempty relation cannot be both symmetric and asymmetric (where if a is related to b, then b cannot be related to a in the same way). However, a relation can be neither symmetric nor asymmetric, which is the case for "is less than or equal to" and "preys on". Symmetric and antisymmetric (where the only way a can be related to b and b be related to a is if a = b) are actually independent of each other, as these examples show.

Properties
A symmetric and transitive relation is always quasireflexive. One way to count the symmetric relations on n elements is to observe that in the binary matrix representation of such a relation, the entries on and above the diagonal determine the relation fully and can be chosen arbitrarily; thus there are as many symmetric relations as there are binary upper triangular matrices: 2^(n(n+1)/2).
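The counting argument can be checked directly. The following Python sketch is an illustrative addition under the same relation-as-set-of-pairs convention used here, not code from the article:

    from itertools import product

    def is_symmetric(R):
        """True if (b, a) is in R whenever (a, b) is."""
        return all((b, a) in R for (a, b) in R)

    def count_symmetric(n):
        """Count symmetric relations on {0, ..., n-1} by enumerating every relation."""
        X = range(n)
        pairs = [(x, y) for x in X for y in X]
        return sum(
            is_symmetric({p for p, keep in zip(pairs, bits) if keep})
            for bits in product([False, True], repeat=len(pairs))
        )

    # Only the n*(n+1)/2 matrix entries on and above the diagonal are free choices.
    for n in range(4):
        assert count_symmetric(n) == 2 ** (n * (n + 1) // 2)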
Mathematics
Set theory
null
200463
https://en.wikipedia.org/wiki/Transitive%20relation
Transitive relation
In mathematics, a binary relation R on a set X is transitive if, for all elements a, b, c in X, whenever R relates a to b and b to c, then R also relates a to c. Every partial order and every equivalence relation is transitive. For example, less than and equality among real numbers are both transitive: if a < b and b < c then a < c; and if a = b and b = c then a = c.

Definition
A homogeneous relation R on the set X is a transitive relation if, for all a, b, c ∈ X, if a R b and b R c, then a R c. Or in terms of first-order logic: ∀a, b, c ∈ X: (a R b ∧ b R c) ⇒ a R c, where a R b is the infix notation for (a, b) ∈ R.

Examples
As a non-mathematical example, the relation "is an ancestor of" is transitive. For example, if Amy is an ancestor of Becky, and Becky is an ancestor of Carrie, then Amy is also an ancestor of Carrie. On the other hand, "is the birth mother of" is not a transitive relation, because if Alice is the birth mother of Brenda, and Brenda is the birth mother of Claire, then it does not follow that Alice is the birth mother of Claire. In fact, this relation is antitransitive: Alice can never be the birth mother of Claire. Non-transitive, non-antitransitive relations include sports fixtures (playoff schedules), "knows" and "talks to". The examples "is greater than", "is at least as great as", and "is equal to" (equality) are transitive relations on various sets, such as the set of real numbers or the set of natural numbers:
whenever x > y and y > z, then also x > z
whenever x ≥ y and y ≥ z, then also x ≥ z
whenever x = y and y = z, then also x = z.
More examples of transitive relations:
"is a subset of" (set inclusion, a relation on sets)
"divides" (divisibility, a relation on natural numbers)
"implies" (implication, symbolized by "⇒", a relation on propositions)
Examples of non-transitive relations:
"is the successor of" (a relation on natural numbers)
"is a member of the set" (symbolized as "∈")
"is perpendicular to" (a relation on lines in Euclidean geometry)
The empty relation on any set X is transitive because there are no elements a, b, c ∈ X such that a R b and b R c, and hence the transitivity condition is vacuously true. A relation R containing only one ordered pair is also transitive: if the ordered pair is of the form (x, x) for some x ∈ X, then the only such elements are a = b = c = x, and indeed in this case x R x; while if the ordered pair is not of the form (x, x), then there are no such elements and hence R is vacuously transitive.

Properties
Closure properties
The converse (inverse) of a transitive relation is always transitive. For instance, knowing that "is a subset of" is transitive and "is a superset of" is its converse, one can conclude that the latter is transitive as well. The intersection of two transitive relations is always transitive. For instance, knowing that "was born before" and "has the same first name as" are transitive, one can conclude that "was born before and also has the same first name as" is also transitive. The union of two transitive relations need not be transitive. For instance, "was born before or has the same first name as" is not a transitive relation, since e.g. Herbert Hoover is related to Franklin D. Roosevelt, who is in turn related to Franklin Pierce, while Hoover is not related to Franklin Pierce. The complement of a transitive relation need not be transitive. For instance, while "equal to" is transitive, "not equal to" is only transitive on sets with at most one element.

Other properties
A transitive relation is asymmetric if and only if it is irreflexive. A transitive relation need not be reflexive. When it is, it is called a preorder.
For example, on the set X = {1, 2, 3}:
R = {(1,1), (2,2), (3,3), (1,3), (3,2)} is reflexive, but not transitive, as the pair (1,2) is absent;
R = {(1,1), (2,2), (3,3), (1,3)} is reflexive as well as transitive, so it is a preorder;
R = {(1,1), (2,2), (3,3)} is reflexive as well as transitive, another preorder.
As a counterexample, the relation < on the real numbers is transitive, but not reflexive.

Transitive extensions and transitive closure
Let R be a binary relation on a set X. The transitive extension of R, denoted R1, is the smallest binary relation on X such that R1 contains R, and if a R b and b R c then a R1 c. For example, suppose X is a set of towns, some of which are connected by roads. Let R be the relation on towns where x R y if there is a road directly linking town x and town y. This relation need not be transitive. The transitive extension of this relation can be defined by x R1 y if you can travel between towns x and y by using at most two roads. If a relation is transitive then its transitive extension is itself; that is, if R is a transitive relation then R1 = R. The transitive extension of R1 would be denoted by R2, and continuing in this way, in general, the transitive extension of Ri would be Ri+1. The transitive closure of R, denoted by R* or R∞, is the set union of R, R1, R2, ... . The transitive closure of a relation is a transitive relation. The relation "is the birth parent of" on a set of people is not a transitive relation. However, in biology the need often arises to consider birth parenthood over an arbitrary number of generations: the relation "is a birth ancestor of" is a transitive relation, and it is the transitive closure of the relation "is the birth parent of". For the example of towns and roads above, x R* y provided you can travel between towns x and y using any number of roads.

Relation types that require transitivity
Preorder – a reflexive and transitive relation
Partial order – an antisymmetric preorder
Total preorder – a connected (formerly called total) preorder
Equivalence relation – a symmetric preorder
Strict weak ordering – a strict partial order in which incomparability is an equivalence relation
Total ordering – a connected (total), antisymmetric, and transitive relation

Counting transitive relations
No general formula that counts the number of transitive relations on a finite set is known. However, there is a formula for finding the number of relations that are simultaneously reflexive, symmetric, and transitive – in other words, equivalence relations – as well as for those that are symmetric and transitive, those that are symmetric, transitive, and antisymmetric, and those that are total, transitive, and antisymmetric. Pfeiffer has made some progress in this direction, expressing relations with combinations of these properties in terms of each other, but still calculating any one is difficult.
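The transitive-extension construction described above lends itself to a short computational sketch. The following Python code is an illustrative addition (the set-of-pairs encoding and function names are conventions chosen here): it iterates extensions until a fixed point is reached, which yields the transitive closure.

    def transitive_extension(R):
        """Add (a, c) whenever (a, b) and (b, c) are both in R."""
        return R | {(a, c) for (a, b) in R for (b2, c) in R if b == b2}

    def is_transitive(R):
        """A relation is transitive exactly when its transitive extension is itself."""
        return transitive_extension(R) == R

    def transitive_closure(R):
        """Union of R, R1, R2, ...: iterate extensions until nothing new appears."""
        while True:
            extended = transitive_extension(R)
            if extended == R:
                return R
            R = extended

    # Towns joined by direct roads; the closure gives reachability by any route.
    roads = {("a", "b"), ("b", "c"), ("c", "d")}
    assert not is_transitive(roads)
    assert ("a", "d") in transitive_closure(roads)
    assert is_transitive(transitive_closure(roads))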
Mathematics
Set theory
null
200485
https://en.wikipedia.org/wiki/Muslin
Muslin
Muslin is a cotton fabric of plain weave. It is made in a wide range of weights, from delicate sheers to coarse sheeting. It is commonly believed that it gets its name from the city of Mosul, Iraq. Muslin was produced in different regions of the Indian subcontinent; the Bengal region was the main manufacturing area, and the main centers were Sonargaon (near Dhaka), Shantipur and Murshidabad. Muslin was also produced in Malda and Hooghly. The muslin produced at Sonargaon and its surrounding areas was of excellent quality and is popularly known as Dhaka muslin. The muslin produced in Shantipur came to be known as Shantipuri muslin, which was recognized by the East India Company. Muslin was made in Dhaka (Sonargaon) from very fine yarn spun from a cotton called phuti karpas, while in Malda, Radhanagar and Burdwan, muslin was made from fine yarn spun from nurma or kaur cotton. A minimum of 300-count yarn was used for the muslin, making the fabric as transparent as glass. There were about 28 varieties of muslin, of which jamdani is still widely used.

During the 17th and 18th centuries, Mughal Bengal emerged as the foremost muslin exporter in the world, with Dhaka as capital of the worldwide muslin trade. In the latter half of the 18th century, muslin weaving ceased in Bengal due to cheap fabrics from England and oppression by the colonialists. In India in the latter half of the 20th century, and in Bangladesh in the second decade of the 21st century, initiatives were taken to revive muslin weaving, and the industry was revived. Dhakai muslin was recognized as a Geographical Indication (GI) product of Bangladesh in 2020, and Banglar Muslin (Bengal Muslin) was recognized as a Geographical Indication (GI) product of the Indian state of West Bengal in 2024. In 2013, the jamdani (a type of muslin) weaving art of Bangladesh was included in the list of Masterpieces of the Oral and Intangible Heritage of Humanity by UNESCO under the title "Traditional art of Jamdani weaving".

Etymology
The dictionary Hobson-Jobson, published by the two Englishmen Henry Yule and A.C. Burnell, mentions that the word muslin comes from "Mosul", a famous trading center and city in Iraq. Mosul produced a very fine cloth, which became known as muslin in Europe.

History
Early period
The earliest specimen of Indian fine cotton cloth (like muslin) was found in Egypt as a mummy shroud dating to around 2000 BC. The first commercial mention of Indian cotton is found in The Periplus of the Erythraean Sea (63 AD). The book mentions the export of fine cotton textiles from different parts of India to Europe. The eastern (Bengal) and north-western regions of India produced large quantities of fine cotton cloth, but Bengal cotton cloth was superior in quality. According to the text, European merchants procured fine cotton fabrics from the Gange port of Bengal. In this text, broad and smooth cotton cloth is referred to as Monachi and the finest cotton cloth is called Gangetic. A kingdom called "Ruhma", where fine cotton fabrics were produced, is mentioned in the account of the 9th-century Arab merchant Sulaiman al-Tajir. Its cotton fabrics were so fine and delicate that a single piece of cloth could easily be passed through a finger-ring. Very fine cotton cloth was made in Mosul in the 12th century and later. Arab traders carried it to Europe as a commodity, and enchanted Europeans called it muslin; since then the very fine and beautiful cotton cloth has been known as muslin.
In 1298 AD, Marco Polo described in his book The Travels that muslin was made in Mosul, Iraq. Ibn Battuta, a Moroccan traveler who came to Bengal in the middle of the 14th century, praised the cotton cloth made in Sonargaon in his book The Rihla. Chinese writers who came to Bengal in the fifteenth century also praised its cotton cloth.

Mughal period
The muslin industry flourished in Bengal between the sixteenth and eighteenth centuries. The main muslin production centers in Bengal during this period were Dhaka and its surrounding areas, Shantipur, Malda and Hooghly. The 16th-century English traveller Ralph Fitch lauded the muslin he saw in Sonargaon; he visited India in 1583 and described Sonargaon as a town "where there is the best and finest cloth made in all India". During the reign of Emperor Jahangir, Islam Khan Chishti shifted the capital from Rajmahal to Dhaka in 1610 AD, and Dhaka gained prominence as the center of trade and commerce of Bengal. During this period the muslin produced in Dhaka achieved excellence, and the muslin produced here became world famous as Dhakai muslin. The Mughal Emperor Akbar's courtier Abul Fazl praised the fine cotton fabric produced in Sonargaon (near Dhaka), writing that "the Sarkar of Sonargaon produces a species of muslin very fine and in great quantity". European traders began arriving in the Bengali capital of Dhaka in the early seventeenth century, and these traders procured cotton cloth and muslin from Bengal for export to Europe. After the establishment of Murshidabad as the capital of Bengal, Cossimbazar, a small town on the banks of the Bhagirathi south of Murshidabad city (now included in the Baharampur municipality), became the center of a silk and cotton textile trade. The branch of the Bhagirathi that joined the Jalangi was called the Cossimbazar river, and the triangular land surrounded by the Padma, Bhagirathi and Jalangi was called Cossimbazar Island. It was a major trading center for muslin and silk and held trading posts (kuthis) of various European merchants. In 1670 AD, Streynsham Master mentioned that muslin was produced at Malda, Shantipur, Hooghly and elsewhere. Advaitacharya Goswami's Shantipur Parichaẏa, Volume II, mentions that the East India Company purchased £150,000 worth of muslin annually in the early 19th century. During the 17th and 18th centuries, Mughal Bengal emerged as the foremost muslin exporter in the world, with Mughal Dhaka as capital of the worldwide muslin trade. Muslin became highly popular in 18th-century France and eventually spread across much of the Western world. Dhaka muslin was first showcased in the UK at The Great Exhibition of the Works of Industry of All Nations in 1851.

Decline under Company rule
During the period of Company rule, the East India Company imported British-produced cloth into the Indian subcontinent but was unable to compete with the local muslin industry. The Company administration initiated several policies in an attempt to suppress the muslin industry, and muslin production subsequently experienced a period of decline. It has been alleged that in some instances Indian weavers were rounded up and their thumbs chopped off, although this has been refuted by historians as a misreading of a report by William Bolts from 1772. Many of the threatened weavers fled East Bengal (present-day Bangladesh) and settled in the eastern districts of West Bengal, districts that were famous for the cotton products of Bengal.
The quality, fineness and production volume of Bengali muslin declined as a result of these policies, and the decline continued as India transitioned from Company rule to British Crown control.

Revival: 1950s to present
India
To revive Bengal muslin, two muslin production centers were set up by the Khadi and Rural Industries Commission, one at Basowa in the Birbhum district of West Bengal and the other at Panduru in the Srikakulam district of Andhra Pradesh. Under the patronage of Prime Minister Jawaharlal Nehru, Kalicharan Sharma took the lead in reviving the lost fame of muslin at Basowa, with the help of some spinners. He soon found the dry climate of Birbhum quite unsuitable for spinning muslin yarn. He later shifted his work center to the neighboring district of Murshidabad, choosing Chowk Islampur as the site of this weaving industry. Chowk Islampur, situated on the banks of the Bairab River, a tributary of the Padma, is an ancient village famous for spinning and weaving since the days of the East India Company. After India's independence, the village had already gained a reputation for high-quality silk weaving. A muslin training center was started at Chowk Islampur in 1955 under the supervision of Kalicharan Sharma. At first, experiments were made in spinning yarn on the traditional Kishan charkha, but it was not possible to spin more than 250-count yarn on this traditional charkha. Kalicharan Sharma carried out further experiments and research and developed a highly sensitive six-spindle Ambar charkha (spinning wheel) capable of spinning 500-count yarn. This new charkha reduced the cost of production and increased the wages of spinners. The use of this Ambar charkha proved effective and promising for the regeneration of muslin. To concentrate on muslin spinning, the Khadi Society constructed a separate, spacious two-storied building at Berhampore in 1966.

The Government of West Bengal launched "Project Muslin" in 2013 together with Khadi. The aim of this initiative was to revive the muslin fabric and support the weavers. Through this project, weavers from the Murshidabad, Nadia, Maldah, Burdwan, Birbhum, Hooghly and Jhargram districts who are capable of weaving muslin cloth were identified. All these weavers are provided training and technical assistance to produce high-quality muslin. Weavers are capable of producing 500-count muslin, and some have been able to weave 700-count muslin. Project Muslin was able to expand the production of muslin in different parts of West Bengal. Muslin products produced in West Bengal include handkerchiefs, dhotis, bed sheets and men's and women's clothing. According to 2015 data, the products were priced between ₹400 and ₹25,000, while some premium sarees in this category were priced between ₹70,000 and ₹150,000.

Bangladesh
In the second decade of the 21st century, a scheme called Bangladesh Golden Heritage Muslin Yarn Manufacturing Technology and Muslin Cloth Restoration was undertaken to restore and develop the muslin production system in Bangladesh. Under this project, samples of muslin from different countries, including India and Britain, were inspected and data collected. Old maps of the Meghna River were examined and combined with modern satellite imagery to identify possible locations where phuti karpas plants could still be found. The genetic sequences of the recovered cotton plants were then determined and compared with the original ones.
After testing, a cotton plant was identified that was 70 percent genetically identical to phuti karpas. An island in the Meghna, 30 km north of Dhaka, was selected for growing this cotton; some seeds were sown experimentally in 2015, and the first cotton was harvested that year. At that time, however, there were no spinners in Bangladesh skilled enough to produce fine yarn, whereas Indian spinners were able to produce fine yarn of 200 to 500 count from the cotton. As a result, in a joint venture with Indian spinners, hybrid yarns of 200 and 300 count were produced by combining ordinary cotton with phuti karpas cotton. At least 50 tools were needed to make cloth from the yarn, and these had to be reinvented, as they had disappeared along with muslin. Ultimately a weaver was able to weave a saree with a thread count of 300, nowhere near the quality of real Dhaka muslin, but of much better quality than what the weavers of the past several generations had woven. The Bangladesh Handloom Board (BHB) implemented the first phase of the project, titled Bangladesh's Golden Heritage Muslin Yarn Manufacturing Technology and Muslin Cloth Reviving, and the revival work was completed in 2020. Dhakai muslin was recognized as a GI (Geographical Indication) product on 28 December 2020. The Government of Bangladesh declared the official revival of fine Dhaka muslin in April 2022. In 2022, the Dhakai Muslin House was built on the banks of the Shitalakshya river at Rupganj, under Tarab municipality of Narayanganj district. The second phase of the project, named "Dhaka Muslin Commercialization", began in 2023.

Manufacturing process
Since all the processes were manual, manufacturing involved many artisans for yarn spinning and weaving, but the leading roles lay with the raw material and the weaving.
Ginning: a boalee (the upper jaw of a catfish) was used to remove trash, clean and comb the fibers, and make them parallel, ready for spinning.
Spinning and weaving: weaving was done during the rainy season, when the extra humidity gave the yarns elasticity and helped avoid breakages. The process was so sluggish that it could take over five months to weave one piece of muslin.

Characteristics
Thin
Muslins were originally made of cotton only. These were very thin, transparent, delicate and feather-light breathable fabrics. There could be 1,000–1,800 yarns in the warp. Some varieties of muslin were so thin that they could even pass through the aperture of a lady's finger-ring.

Transparency
Gaius Petronius Arbiter (a 1st-century AD Roman courtier and author of the Satyricon) described the transparent nature of the muslin cloth.

Poetic names
Certain delicate muslins were given poetic names such as Baft Hawa ("woven air"), Shabnam ("evening dew"), and āb-i-ravān ("flowing water"). The latter name refers to a fine and transparent variety of fine muslin from Dacca. The fabric's characteristics are summed up in its name.

Types
Muslin has several kinds of variations. Many of the below are mentioned in the Ain-i-Akbari (a detailed 16th-century document):
Khasa
Tansukh
Nainsook
Chautar
Alliballi: the name embraces words meaning "superior" and "good".
Adatais: a fine and clear fabric.
Seerhand: a variety in between nainsook and mull (another muslin type, very thin and soft). The fabric was resistant to washing, retaining its clearness.
Varieties of mulmul (Mulboos khas, Jhuna, Sarkar ali, Sarbati, Tarindam) were among the most delicate cotton muslins produced in the Indian subcontinent.

More variations
Mull is another kind of muslin.
It is a soft, thin, and semitransparent material. The name is derived from the Hindi malmal, which means "soft". Swiss mull is a type of mull which is finished with stiffening agents.

Uses
Dressmaking and sewing
Because muslin is an inexpensive, unbleached cotton fabric available in different weights, it is often used as a backing or lining for quilts, and therefore can often be found in wide widths in the quilting sections of fabric stores. When sewing clothing, a dressmaker may test the fit of a garment by using muslin fabric to make a test model before cutting pieces from more expensive fabric for the final product, thereby avoiding potentially costly mistakes. In the United States, these test models are themselves sometimes referred to as "muslins", the process is called "making a muslin", and "muslin" has become the generic term for any test or fitting garment, regardless of the fabric it is made from. In Britain and Australia, the term for a test or fitting garment used to be "toile". The word "toile", from an Old French word for "cloth", entered the English language around the 12th century. (Today, toile simply refers to any sheer fabric, which may be made, for example, from linen or cotton.) The modern German term for a test or fitting garment is Nesselmodell.

Use in food production
Muslin can be used as a filter:
In a funnel when decanting fine wine or port, to prevent sediment from entering the decanter
To separate liquid from mush (for example, to make apple juice: wash, chop, boil and mash the apples, then filter by pouring the mush into a muslin bag suspended over a jug)
To retain a soft solid (for example, in home cheese-making, when the milk has curdled to a gel, pour it into a muslin bag and squash it between two saucers, upside down under a brick, to squeeze the liquid whey out of the cheese curd)
Muslin is used as a filter in traditional Fijian kava production. Muslin is the material of the traditional cloth wrapped around a Christmas pudding. It is the fabric wrapped around the items in barmbrack, a fruitcake traditionally eaten at Halloween in Ireland. Beekeepers use muslin to filter melted beeswax to clean it of particles and debris.

Set design and photography
Muslin is often the cloth of choice for theatre sets. It is used to mask the background of sets and to establish the mood or feel of different scenes. It receives paint well and, if treated properly, can be made translucent. It also holds dyes well. It is often used to create nighttime scenes because, when dyed, it often takes on a wavy look with slightly varying colour, such that it resembles a night sky. Muslin shrinks after it is painted or sprayed with water, which is desirable in some common techniques such as soft-covered flats. In video production, muslin is used as a cheap greenscreen or bluescreen, either pre-colored or painted with latex paint (diluted with water). Muslin is the most common backdrop material used by photographers for formal portrait backgrounds. These backdrops are usually painted, most often with an abstract mottled pattern. In the early days of silent film-making, and until the late 1910s, movie studios did not have the elaborate lights needed to illuminate indoor sets, so most interior scenes were sets built outdoors with large pieces of muslin hanging overhead to diffuse sunlight. The Wizard of Oz features a sequence with a 35-foot-high tornado constructed out of muslin.

Medicine
Surgeons use muslin gauze in cerebrovascular neurosurgery to wrap around aneurysms or intracranial vessels at risk of bleeding.
The idea is that the gauze reinforces the artery and helps prevent rupture. It is often used for aneurysms that, because of their size or shape, cannot be microsurgically clipped or coiled.

Recognition
Many travelers and merchants of the 13th and 14th centuries praised Bengal muslin and regarded it as the best muslin. From the Mughal rulers to the European colonial rulers, Bengal's muslins were recognized for their superiority, with the muslins produced at Sonargaon being the best. In 2013, the traditional art of Jamdani weaving in Bangladesh was included in the list of Masterpieces of the Oral and Intangible Heritage of Humanity by UNESCO. In 2020, Dhakai muslin was given Geographical Indication status as a product of Bangladesh. In 2024, Banglar Muslin (or Bengal Muslin) was granted Geographical Indication status as a product of West Bengal.
Technology
Fabrics and fibers
null
200646
https://en.wikipedia.org/wiki/Selective%20breeding
Selective breeding
Selective breeding (also called artificial selection) is the process by which humans use animal breeding and plant breeding to selectively develop particular phenotypic traits (characteristics) by choosing which animal or plant males and females will sexually reproduce and have offspring together. Domesticated animals are known as breeds, normally bred by a professional breeder, while domesticated plants are known as varieties, cultigens, cultivars, or breeds. Two purebred animals of different breeds produce a crossbreed, and crossbred plants are called hybrids. Flowers, vegetables and fruit trees may be bred by amateurs and commercial or non-commercial professionals; major crops are usually the province of the professionals. In animal breeding, artificial selection is often combined with techniques such as inbreeding, linebreeding, and outcrossing. In plant breeding, similar methods are used. Charles Darwin discussed how selective breeding had been successful in producing change over time in his 1859 book On the Origin of Species. Its first chapter discusses the selective breeding and domestication of such animals as pigeons, cats, cattle, and dogs. Darwin used artificial selection as an analogy to propose and explain the theory of natural selection, but distinguished the latter from the former as a separate process that is non-directed. The deliberate exploitation of selective breeding to produce desired results has become very common in agriculture and experimental biology. Selective breeding can be unintentional, for example, resulting from the process of human cultivation; and it may also produce unintended results, whether desirable or undesirable. For example, in some grains, an increase in seed size may have resulted from certain ploughing practices rather than from the intentional selection of larger seeds. Most likely, there has been an interdependence between natural and artificial factors that have resulted in plant domestication.

History
Selective breeding of both plants and animals has been practiced since prehistory; key species such as wheat, rice, and dogs have been significantly different from their wild ancestors for millennia, and maize, which required especially large changes from teosinte, its wild form, was selectively bred in Mesoamerica. Selective breeding was practiced by the Romans. Treatises as much as 2,000 years old give advice on selecting animals for different purposes, and these ancient works cite still older authorities, such as Mago the Carthaginian. The notion of selective breeding was later expressed by the Persian Muslim polymath Abu Rayhan Biruni in the 11th century. He noted the idea in his book titled India, which included various examples.

Selective breeding was established as a scientific practice by Robert Bakewell during the British Agricultural Revolution in the 18th century. Arguably his most important breeding program was with sheep. Using native stock, he was able to quickly select for large yet fine-boned sheep with long, lustrous wool. The Lincoln Longwool was improved by Bakewell, and in turn the Lincoln was used to develop the subsequent breed, named the New (or Dishley) Leicester. It was hornless and had a square, meaty body with straight top lines. These sheep were exported widely, including to Australia and North America, and have contributed to numerous modern breeds, despite the fact that they quickly fell out of favor as market preferences in meat and textiles changed.
Bloodlines of these original New Leicesters survive today as the English Leicester (or Leicester Longwool), which is primarily kept for wool production. Bakewell was also the first to breed cattle to be used primarily for beef. Previously, cattle were first and foremost kept for pulling ploughs as oxen, but he crossed long-horned heifers and a Westmoreland bull to eventually create the Dishley Longhorn. As more and more farmers followed his lead, farm animals increased dramatically in size and quality. In 1700, the average weight of a bull sold for slaughter was 370 pounds (168 kg); by 1786, that weight had more than doubled to 840 pounds (381 kg). After his death, however, the Dishley Longhorn was replaced with short-horn versions. He also bred the Improved Black Cart horse, which later became the Shire horse.

Charles Darwin coined the term "selective breeding"; he was interested in the process as an illustration of his proposed wider process of natural selection. Darwin noted that many domesticated animals and plants had special properties that were developed by intentional animal and plant breeding from individuals that showed desirable characteristics, and by discouraging the breeding of individuals with less desirable characteristics. Darwin used the term "artificial selection" twice in the 1859 first edition of his work On the Origin of Species, in Chapter IV: Natural Selection, and in Chapter VI: Difficulties on Theory.

Animal breeding
Animals with homogeneous appearance, behavior, and other characteristics are known as particular breeds or pure breeds, and they are bred through culling animals with particular traits and selecting for further breeding those with other traits. Purebred animals have a single, recognizable breed, and purebreds with recorded lineage are called pedigreed. Crossbreeds are a mix of two purebreds, whereas mixed breeds are a mix of several breeds, often unknown. Animal breeding begins with breeding stock, a group of animals used for the purpose of planned breeding. When individuals are looking to breed animals, they look for certain valuable traits in purebred stock for a certain purpose, or may intend to use some type of crossbreeding to produce a new type of stock with different, and, it is presumed, superior abilities in a given area of endeavor. For example, to breed chickens, a breeder typically intends to receive eggs, meat, and new, young birds for further reproduction. Thus, the breeder has to study different breeds and types of chickens and analyze what can be expected from a certain set of characteristics before he or she starts breeding them. Therefore, when purchasing initial breeding stock, the breeder seeks a group of birds that will most closely fit the purpose intended.

Purebred breeding aims to establish and maintain stable traits that animals will pass to the next generation. By "breeding the best to the best", employing a certain degree of inbreeding, considerable culling, and selection for "superior" qualities, one could develop a bloodline superior in certain respects to the original base stock. Such animals can be recorded with a breed registry, the organization that maintains pedigrees and/or stud books. However, single-trait breeding, breeding for only one trait over all others, can be problematic.
In one case mentioned by the animal behaviorist Temple Grandin, roosters bred for fast growth or heavy muscles did not know how to perform typical rooster courtship dances, which alienated the roosters from hens and led the roosters to kill the hens after mating with them. A Soviet attempt to breed lab rats with higher intelligence led to cases of neurosis severe enough to make the animals incapable of any problem solving unless drugs like phenazepam were used. The observable phenomenon of hybrid vigor stands in contrast to the notion of breed purity. On the other hand, indiscriminate breeding of crossbred or hybrid animals may also result in degradation of quality. Studies in evolutionary physiology, behavioral genetics, and other areas of organismal biology have also made use of deliberate selective breeding, though longer generation times and greater difficulty in breeding can make these projects challenging in vertebrates such as house mice.

Plant breeding
The process of plant breeding has been used for thousands of years, and began with the domestication of wild plants into uniform and predictable agricultural cultigens. These high-yielding varieties have been particularly important in agriculture. As crops improved, humans were able to move from a hunter-gatherer lifestyle to a mix of hunter-gatherer and agricultural practices. Although these higher-yielding plants were derived from an extremely primitive form of plant breeding, growing them was an investment that allowed the people who planted them to have a more varied diet. This meant that people did not completely stop their hunting and gathering immediately, but instead transitioned over time and ultimately came to favor agriculture. Originally this was because humans did not want to risk spending all their time and resources on crops that might fail; this early experimentation with agriculture has been called "play farming". In addition, the ability of humans to stay in one place for food and create permanent settlements made the process move along faster. During this transitional period, crops began to acclimate and evolve alongside humans, encouraging humans to invest further in crops. Over time this reliance on plant breeding has created problems, as highlighted in the book The Botany of Desire, where Michael Pollan shows the connection between basic human desires and four different plants: apples for sweetness, tulips for beauty, cannabis for intoxication, and potatoes for control. In a form of reciprocal evolution known as coevolution, humans have influenced these plants as much as the plants have influenced the people who consume them. Selective breeding is also used in research to produce transgenic animals that breed "true" (i.e., are homozygous) for artificially inserted or deleted genes.

Selective breeding in aquaculture
Selective breeding in aquaculture holds high potential for the genetic improvement of fish and shellfish for production. Unlike in terrestrial livestock, the potential benefits of selective breeding in aquaculture were not realized until recently. This is because high mortality led to the selection of only a few broodstock, causing inbreeding depression, which then forced the use of wild broodstock. This was evident in selective breeding programs for growth rate, which resulted in slow growth and high mortality. Lack of control of the reproduction cycle was one of the main reasons, as such control is a prerequisite for selective breeding programs.
Artificial reproduction was not achieved because of the difficulties in hatching or feeding some farmed species, such as in eel and yellowtail farming. A suspected reason for the late realization of success in selective breeding programs in aquaculture was the education of the people concerned: researchers, advisory personnel and fish farmers. The education of fish biologists paid less attention to quantitative genetics and breeding plans. Another was the failure to document the genetic gains in successive generations. This in turn led to failure to quantify the economic benefits that successful selective breeding programs produce. Documentation of the genetic changes was considered important, as it helps in fine-tuning further selection schemes.

Quality traits in aquaculture
Aquaculture species are reared for particular traits such as growth rate, survival rate, meat quality, resistance to diseases, age at sexual maturation, fecundity, and shell traits like shell size and shell color.
Growth rate – growth rate is normally measured as either body weight or body length. This trait is of great economic importance for all aquaculture species, as a faster growth rate speeds up the turnover of production. Improved growth rates show that farmed animals utilize their feed more efficiently through a positive correlated response.
Survival rate – survival rate may take into account the degree of resistance to diseases. It may also take the stress response into account, as fish under stress are highly vulnerable to disease. The stress fish experience can be of biological, chemical or environmental origin.
Meat quality – the quality of fish is of great economic importance in the market. Fish quality usually takes into account size, meatiness, percentage of fat, color of flesh, taste, shape of the body, and ideal oil and omega-3 content.
Age at sexual maturation – the age of maturity in aquaculture species is another very important attribute for farmers, as during early maturation the species divert all their energy to gonad production, affecting growth and meat production and making them more susceptible to health problems (Gjerde 1986).
Fecundity – as the fecundity of fish and shellfish is usually high, it is not considered a major trait for improvement. However, selective breeding practices may consider the size of the egg and correlate it with survival and early growth rate.

Finfish response to selection
Salmonids
Gjedrem (1979) showed that selection of Atlantic salmon (Salmo salar) led to an increase in body weight of 30% per generation. A comparative study on the performance of selected Atlantic salmon versus wild fish was conducted by the AKVAFORSK Genetics Centre in Norway. The traits for which selection was done included growth rate, feed consumption, protein retention, energy retention, and feed conversion efficiency. The selected fish had twice the growth rate, a 40% higher feed intake, and increased protein and energy retention. This led to an overall 20% better feed conversion efficiency compared to the wild stock. Atlantic salmon have also been selected for resistance to bacterial and viral diseases. Selection was done to check resistance to infectious pancreatic necrosis virus (IPNV). The results showed 66.6% mortality for low-resistance strains, whereas high-resistance strains showed 29.3% mortality, compared to wild strains. Rainbow trout (S. gairdneri) were reported to show large improvements in growth rate after 7–10 generations of selection. Kincaid et al.
(1977) showed that growth gains of 30% could be achieved by selectively breeding rainbow trout for three generations. A 7% increase in growth was recorded per generation for rainbow trout by Kause et al. (2005). In Japan, high resistance to IPNV in rainbow trout has been achieved by selectively breeding the stock. Resistant strains were found to have an average mortality of 4.3%, whereas 96.1% mortality was observed in a highly sensitive strain. The increase in weight of coho salmon (Oncorhynchus kisutch) was found to be more than 60% after four generations of selective breeding. In Chile, Neira et al. (2006) conducted experiments on early spawning dates in coho salmon. After selectively breeding the fish for four generations, spawning dates were 13–15 days earlier. Cyprinids Selective breeding programs for the common carp (Cyprinus carpio) include improvement in growth, shape and resistance to disease. Experiments carried out in the USSR used crossings of broodstocks to increase genetic diversity and then selected the fish for traits like growth rate, exterior traits and viability, and/or adaptation to environmental conditions like variations in temperature. Kirpichnikov et al. (1974) and Babouchkine (1987) selected carp for fast growth and tolerance to cold, producing the Ropsha carp. The results showed a 30–40% to 77.4% improvement in cold tolerance but did not provide any data for growth rate. An increase in growth rate was observed in the second generation in Vietnam. Moav and Wohlfarth (1976) showed positive results when selecting for slower growth for three generations, compared to selecting for faster growth. Schaperclaus (1962) showed resistance to the dropsy disease, wherein selected lines suffered low mortality (11.5%) compared to unselected lines (57%). Channel catfish Growth was seen to increase by 12–20% in selectively bred Ictalurus punctatus. More recently, the response of the channel catfish to selection for improved growth rate was found to be approximately 80%, that is, an average of 13% per generation. Shellfish response to selection Oysters Selection for live weight of Pacific oysters showed improvements ranging from 0.4% to 25.6% compared to the wild stock. Sydney rock oysters (Saccostrea commercialis) showed a 4% increase after one generation and a 15% increase after two generations. Chilean oysters (Ostrea chilensis) selected for improvement in live weight and shell length showed a 10–13% gain in one generation. Bonamia ostreae is a protistan parasite that causes catastrophic losses (nearly 98%) in the European flat oyster Ostrea edulis L. This protistan parasite is endemic to three oyster regions in Europe. Selective breeding programs show that O. edulis susceptibility to the infection differs across oyster strains in Europe. A study carried out by Culloty et al. showed that 'Rossmore' oysters in Cork harbour, Ireland had better resistance compared to other Irish strains. A selective breeding program at Cork harbour uses broodstock from 3- to 4-year-old survivors and is further controlled until a viable percentage reaches market size. Over the years, 'Rossmore' oysters have been shown to develop a lower prevalence of B. ostreae infection and lower percentage mortality. Ragone Calvo et al. (2003) selectively bred the eastern oyster, Crassostrea virginica, for resistance against the co-occurring parasites Haplosporidium nelsoni (MSX) and Perkinsus marinus (Dermo). They achieved dual resistance to the diseases in four generations of selective breeding.
The oysters showed higher growth and survival rates and lower susceptibility to the infections. At the end of the experiment, artificially selected C. virginica showed a 34–48% higher survival rate. Penaeid shrimps Selection for growth in penaeid shrimps yielded successful results. A selective breeding program for Litopenaeus stylirostris saw an 18% increase in growth after the fourth generation and 21% growth after the fifth generation. Marsupenaeus japonicus showed a 10.7% increase in growth after the first generation. Argue et al. (2002) conducted a selective breeding program on the Pacific white shrimp, Litopenaeus vannamei, at The Oceanic Institute, Waimanalo, USA from 1995 to 1998. They reported significant responses to selection compared to the unselected control shrimps. After one generation, a 21% increase was observed in growth and an 18.4% increase in survival to TSV. The Taura syndrome virus (TSV) causes mortalities of 70% or more in shrimps. C.I. Oceanos S.A. in Colombia selected the survivors of the disease from infected ponds and used them as parents for the next generation. They achieved satisfying results in two or three generations, wherein survival rates approached levels seen before the outbreak of the disease. Heavy losses (up to 90%) caused by infectious hypodermal and haematopoietic necrosis virus (IHHNV) led a number of shrimp farming industries to selectively breed shrimps resistant to this disease. Successful outcomes led to the development of Super Shrimp, a selected line of L. stylirostris that is resistant to IHHNV infection. Tang et al. (2000) confirmed this by showing no mortalities in IHHNV-challenged Super Shrimp post-larvae and juveniles. Aquatic species versus terrestrial livestock Selective breeding programs for aquatic species provide better outcomes compared to terrestrial livestock. This higher response to selection of aquatic farmed species can be attributed to the following: High fecundity in both sexes of fish and shellfish, enabling higher selection intensity. Large phenotypic and genetic variation in the selected traits. Selective breeding in aquaculture provides remarkable economic benefits to the industry, the primary one being that it reduces production costs due to faster turnover rates. This is because of faster growth rates, decreased maintenance rates, increased energy and protein retention, and better feed efficiency. However, when selective breeding is carried out, some characteristics may be lost in exchange for others that suit a specific environment or situation. Applying genetic improvement programs to aquaculture species will increase their productivity, allowing them to meet the increasing demands of growing populations. Conversely, selective breeding within aquaculture can create problems within the biodiversity of both stock and wild fish, which can hurt the industry down the road. Although there is great potential to improve aquaculture given the current lack of domestication, it is essential that the genetic diversity of the fish is preserved through proper genetic management as we domesticate these species. It is not uncommon for fish to escape the nets or pens in which they are kept, sometimes en masse. If these fish are farmed in areas to which they are not native, they may establish themselves, outcompete native populations of fish, and cause ecological harm as an invasive species. Furthermore, even in areas where the farmed fish are native, their genetics reflect selective breeding rather than wild processes.
These farmed fish could breed with the natives, which could be problematic in the sense that the farmed fish have been bred for consumption rather than shaped by natural selection, resulting in an overall decrease in genetic diversity and rendering local fish populations less fit for survival. If proper management is not in place, then both the economic benefits and the diversity of the fish species will falter. Advantages and disadvantages Selective breeding is a direct way to determine if a specific trait can evolve in response to selection. A single-generation method of breeding is not as accurate or direct. The process is also more practical and easier to understand than sibling analysis. Selective breeding is better for traits such as physiology and behavior that are hard to measure, because it requires fewer individuals to test than single-generation testing. However, there are disadvantages to this process. A single selective breeding experiment cannot be used to assess an entire group of genetic variances; individual experiments must be done for every individual trait. Also, because selective breeding experiments require maintaining the organisms tested in a lab or greenhouse, it is impractical to use this breeding method on many organisms. Controlled mating is difficult to carry out in such cases, and it is a necessary component of selective breeding. Additionally, selective breeding can lead to a variety of issues, including reduction of genetic diversity and physical problems. The process of selective breeding can create physical issues for plants or animals; for example, dogs selectively bred for extremely small sizes dislocate their kneecaps at a much more frequent rate than other dogs. An example in the plant world is the Lenape potato, which was selectively bred for disease and pest resistance; that resistance was attributed to high levels of the toxic glycoalkaloid solanine, which is usually present only in small amounts in potatoes fit for human consumption. When genetic diversity is lost, populations can also lack the genetic alternatives needed to adapt to new events. This becomes an issue of biodiversity, because when a single attribute is so widespread it can result in mass epidemics, as seen in the southern corn leaf blight epidemic of 1970, which wiped out 15% of the United States corn crop. The epidemic was due to the wide use of a Texan corn strain that had been artificially selected for having sterile pollen, which made farming easier but at the same time left it more vulnerable to southern corn leaf blight.
Technology
Animal husbandry
null
200672
https://en.wikipedia.org/wiki/Lycopodiaceae
Lycopodiaceae
The Lycopodiaceae (class Lycopodiopsida, order Lycopodiales) are an old family of vascular plants, including all of the core clubmosses and firmosses, comprising 16 accepted genera and about 400 known species. This family originated about 380 million years ago in the early Devonian, though much of the diversity within the family arose much more recently. "Wolf foot" is another common name for this family, due to the resemblance of either the roots or branch tips to a wolf's paw. Description Members of Lycopodiaceae are not spermatophytes and so do not produce seeds. Instead they produce spores, which are oily and flammable and are the most economically important aspect of these plants. The spores are of one size (i.e. the plants are isosporous) and are borne on a specialized structure at the apex of a shoot called a strobilus (plural: strobili), which resembles a tiny battle club, from which the common name derives. Members of the family share the common feature of having a microphyll, which is a "small leaf with a single vein, and not associated with a leaf gap in the central vascular system." In Lycopodiaceae, the microphylls often densely cover the stem in a linear, scale-like, or appressed fashion, and the leaves are either opposite or spirally arranged. The club mosses commonly grow to be 5–20 cm tall. The gametophytes in most species are non-photosynthetic and myco-heterotrophic, but the subfamily Lycopodielloideae and a few species in the subfamily Huperzioideae have gametophytes with an upper green and photosynthetic part and a colorless lower part in contact with fungal hyphae. In Lycopodioideae monoplastidic meiosis is common, whereas polyplastidic meiosis is found in Lycopodielloideae and Huperzioideae. Taxonomy The family Lycopodiaceae is considered to be basal within the Lycopodiopsida (lycophytes). One hypothesis for the evolutionary relationships involved is shown in the cladogram below. Within the family, there is support for three subgroups. In 2016, Field et al. proposed that the primary division is between Lycopodielloideae plus Lycopodioideae on the one hand and the Huperzioideae on the other (names sensu PPG I). There are about 400 known species in the family Lycopodiaceae. Sources differ in how they group these into genera. Field et al. (2016) say "Most Lycopodiaceae species have been re-classified into different genera several times, leading to uncertainty about their most appropriate generic identification." In the PPG I system, the family has 16 accepted genera, grouped into three subfamilies, Lycopodielloideae, Lycopodioideae and Huperzioideae, based in part on molecular phylogenetic studies. The Huperzioideae differ in producing spores in small lateral structures in the leaf axils, and it has been suggested that they be recognized as a separate family. Other sources use fewer genera; for example, the three genera placed in the subfamily Huperzioideae in PPG I, Huperzia, Phlegmariurus and Phylloglossum, have also all been treated within a broadly defined Huperzia. The species within this family generally have chromosome counts of n=34. A notable exception is the genus Diphasiastrum, whose species have counts of n=23. Genera The Checklist of Ferns and Lycophytes of the World recognizes the following genera as members of Lycopodiaceae. All of these are recognized by the Pteridophyte Phylogeny Group classification of 2016 (PPG I), except for the genus Brownseya, described in 2021.
Other classifications circumscribe the genera in the family more broadly, recognizing the subfamilies Lycopodielloideae, Lycopodioideae, and Huperzioideae as the genera Lycopodiella, Lycopodium, and Huperzia. Phylogeny of Lycopodiaceae Distribution and habitat The members of Lycopodiaceae are terrestrial or epiphytic in habit and are most prevalent in tropical mountain and alpine environments. Though Lycopodiaceae are most abundant in these regions, they are cosmopolitan, excluding arid environments. Evolution Lycopodiaceae (homosporous lycophytes) split off from the branch leading to Selaginella and Isoetes (heterosporous lycophytes) about 400 million years ago, during the early Devonian. The two subfamilies Lycopodioideae and Huperzioideae diverged ~350 million years ago, but have evolved so slowly that about 30% of their genes are still in syntenic blocks (remaining in the same arrangement). They have also gone through independent whole-genome duplications. In most plants the majority of duplicate genes are lost relatively quickly through diploidization, but in this group both sets of genes tend to be retained with relatively few alterations, even hundreds of millions of years after the duplication event. Spores indicate that the crown group of Lycopodiaceae had emerged by the Triassic–Jurassic boundary, around 200 million years ago, with a member of the crown group of Lycopodioideae known from the Early Cretaceous of China. Uses The running clubmosses (Diphasiastrum) have long been used as greenery for Christmas decoration. The spores have long been used as a flash powder. See Lycopodium powder. The spores have been used by violin makers for centuries as a pore filler. In Cornwall, club mosses gathered during certain lunar phases were historically used as a remedy for eye disease.
Biology and health sciences
Lycophytes
Plants
200716
https://en.wikipedia.org/wiki/Barycenter%20%28astronomy%29
Barycenter (astronomy)
In astronomy, the barycenter (or barycentre) is the center of mass of two or more bodies that orbit one another and is the point about which the bodies orbit. A barycenter is a dynamical point, not a physical object. It is an important concept in fields such as astronomy and astrophysics. The distance from a body's center of mass to the barycenter can be calculated as a two-body problem. If one of the two orbiting bodies is much more massive than the other and the bodies are relatively close to one another, the barycenter will typically be located within the more massive object. In this case, rather than the two bodies appearing to orbit a point between them, the less massive body will appear to orbit about the more massive body, while the more massive body might be observed to wobble slightly. This is the case for the Earth–Moon system, whose barycenter is located, on average, at about 74% of Earth's radius from Earth's center. When the two bodies are of similar masses, the barycenter will generally be located between them and both bodies will orbit around it. This is the case for Pluto and Charon, one of Pluto's natural satellites, as well as for many binary asteroids and binary stars. When the less massive object is far away, the barycenter can be located outside the more massive object. This is the case for Jupiter and the Sun; despite the Sun being roughly a thousandfold more massive than Jupiter, their barycenter is slightly outside the Sun due to the relatively large distance between them. In astronomy, barycentric coordinates are non-rotating coordinates with the origin at the barycenter of two or more bodies. The International Celestial Reference System (ICRS) is a barycentric coordinate system centered on the Solar System's barycenter. Two-body problem The barycenter is one of the foci of the elliptical orbit of each body. This is an important concept in the fields of astronomy and astrophysics. In a simple two-body case, the distance from the center of the primary to the barycenter, r1, is given by: r1 = a · m2 / (m1 + m2), where: r1 is the distance from body 1's center to the barycenter, a is the distance between the centers of the two bodies, and m1 and m2 are the masses of the two bodies. The semi-major axis of the secondary's orbit, r2, is given by r2 = a − r1. When the barycenter is located within the more massive body, that body will appear to "wobble" rather than to follow a discernible orbit. Primary–secondary examples The following table sets out some examples from the Solar System. Figures are given rounded to three significant figures. The terms "primary" and "secondary" are used to distinguish between the involved participants, with the larger being the primary and the smaller being the secondary. Example with the Sun If m1 ≫ m2 (which is true for the Sun and any planet), then the ratio r1/R1 (where R1 is the radius of the primary) approximates to: (a / R1) × (m2 / m1). Hence, the barycenter of the Sun–planet system will lie outside the Sun only if (a / RSun) × (mplanet / mSun) > 1, that is, where the planet is massive and far from the Sun. If Jupiter had Mercury's orbit, the Sun–Jupiter barycenter would be approximately 55,000 km from the center of the Sun. But even if the Earth had Eris's orbit, the Sun–Earth barycenter would still be within the Sun (just over 30,000 km from the center). To calculate the actual motion of the Sun, only the motions of the four giant planets (Jupiter, Saturn, Uranus, Neptune) need to be considered. The contributions of all other planets, dwarf planets, etc. are negligible.
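A minimal Python sketch (not part of the original article) of the two-body formula above; the mass and distance figures are approximate reference values assumed here for illustration, not values from the text:

def barycenter_offset(a_km, m1, m2):
    # Distance from the primary's center to the barycenter, in km:
    # r1 = a * m2 / (m1 + m2)
    return a_km * m2 / (m1 + m2)

# Earth-Moon: the barycenter lies well inside Earth (radius ~6,371 km).
print(barycenter_offset(384_400, 5.972e24, 7.342e22))   # ~4,670 km

# Sun-Jupiter: the barycenter lies slightly outside the Sun (radius ~696,000 km).
print(barycenter_offset(778.5e6, 1.989e30, 1.898e27))   # ~742,000 km

Both results agree with the claims in the text: the Earth–Moon offset is about 74% of Earth's radius, and the Sun–Jupiter offset slightly exceeds the solar radius.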
If the four giant planets were on a straight line on the same side of the Sun, the combined center of mass would lie at about 1.17 solar radii, or just over 810,000 km, above the Sun's surface. The calculations above are based on the mean distance between the bodies and yield the mean value r1. But all celestial orbits are elliptical, and the distance between the bodies varies between the apses, depending on the eccentricity, e. Hence, the position of the barycenter varies too, and it is possible in some systems for the barycenter to be sometimes inside and sometimes outside the more massive body. This occurs where: a(1 − e) · m2 / (m1 + m2) < R1 < a(1 + e) · m2 / (m1 + m2), where R1 is the radius of the more massive body. The Sun–Jupiter system, with eJupiter = 0.0484, just fails to qualify: even at perihelion, the barycenter lies slightly outside the Sun. Relativistic corrections In classical mechanics (Newtonian gravitation), this definition simplifies calculations and introduces no known problems. In general relativity (Einsteinian gravitation), complications arise because, while it is possible, within reasonable approximations, to define the barycenter, we find that the associated coordinate system does not fully reflect the inequality of clock rates at different locations. Brumberg explains how to set up barycentric coordinates in general relativity. The coordinate systems involve a world-time, i.e. a global time coordinate that could be set up by telemetry. Individual clocks of similar construction will not agree with this standard, because they are subject to differing gravitational potentials or move at various velocities, so the world-time must be synchronized with some ideal clock that is assumed to be very far from the whole self-gravitating system. This time standard is called Barycentric Coordinate Time (TCB). Selected barycentric orbital elements Barycentric osculating orbital elements for some objects in the Solar System are as follows: For objects at such high eccentricity, barycentric coordinates are more stable than heliocentric coordinates for a given epoch because the barycentric osculating orbit is not as greatly affected by where Jupiter is on its 11.8-year orbit.
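The inside-or-outside condition can be checked the same way; this follow-on sketch (again illustrative, reusing the same approximate assumed values) evaluates the barycenter offset at periapsis and apoapsis of the Sun–Jupiter system:

def barycenter_extremes(a_km, e, m1, m2):
    # The offset scales with the instantaneous separation, so it ranges from
    # a*(1-e)*m2/(m1+m2) at periapsis to a*(1+e)*m2/(m1+m2) at apoapsis.
    f = m2 / (m1 + m2)
    return a_km * (1 - e) * f, a_km * (1 + e) * f

R_SUN_KM = 696_000  # approximate solar radius
lo, hi = barycenter_extremes(778.5e6, 0.0484, 1.989e30, 1.898e27)
print(lo, hi)              # ~707,000 km to ~779,000 km
print(lo < R_SUN_KM < hi)  # False: the offset exceeds the solar radius everywhere

The False result matches the text: the Sun–Jupiter system just fails to qualify, because even the periapsis offset is slightly larger than the solar radius.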
Physical sciences
Celestial mechanics
null
201008
https://en.wikipedia.org/wiki/Confederation%20Bridge
Confederation Bridge
The Confederation Bridge is a box girder bridge carrying the Trans-Canada Highway across the Abegweit Passage of the Northumberland Strait, linking the province of Prince Edward Island with the mainland province of New Brunswick. Opened to traffic on May 31, 1997, the bridge is Canada's longest bridge and the world's longest bridge over ice-covered water. Construction took place from November 1, 1993, until May 1997 and cost C$1.3 billion. Before its official naming, Prince Edward Islanders often referred to the bridge as the "Fixed Link". Structure The bridge is a two-lane toll bridge that carries the Trans-Canada Highway between Borden-Carleton, Prince Edward Island (at Route 1) and Cape Jourimain, New Brunswick (at Route 16). It is a multi-span balanced cantilever bridge with a post-tensioned concrete box girder structure. Most of the curved bridge is elevated above the water, with a higher navigation span for ship traffic. The bridge rests on 62 piers, of which 44 are main piers. The speed limit on the bridge can vary with wind and weather conditions; when travelling at the speed limit, it takes about 12 minutes to cross the bridge. Tolls Tolls apply only when leaving Prince Edward Island (when travelling westbound). The toll rates are $50.25 for a two-axle automobile and $8.50 for each additional axle. Motorcycles are charged $20.00. While pedestrians and cyclists are not permitted to cross the bridge, a shuttle service is available. The shuttle was free before 2006; since January 1, 2022, the service has charged $4.75 per pedestrian or $9.50 per cyclist when leaving Prince Edward Island. Baggage is charged at a rate of $4.25 per bag after the first bag. The other major Northumberland Strait crossing, the Wood Islands Ferry from Wood Islands, Prince Edward Island to Caribou, Nova Scotia, likewise charges only when leaving Prince Edward Island. Other ferry fares include $20.00 per adult pedestrian, $40.00 per motorcycle, and $20.00 per bicycle. Travellers, whether entering the island by bridge and leaving by ferry or vice versa, pay only when leaving the island. History Various proposals for a fixed link across the Northumberland Strait can be traced as far back as the 1870s, when the provinces' railway systems were developed. Subsequent proposals arose during federal elections in the late 1950s and early 1960s. The ebb and flow of public support for a fixed link was indirectly tied to the varying levels of federal investment in ferry and steamship connections to the province over the years, finally culminating in a proposal in the mid-1980s which resulted in the construction of the current bridge. Water transportation links As a part of Prince Edward Island's admission into the Dominion of Canada in 1873, the Canadian government was obligated to provide efficient steam service for the conveyance of mails and passengers between the Island and the mainland. Following Confederation, early steamship services across the Northumberland Strait connected the Island ports of Charlottetown and Georgetown with railway facilities at Pictou, Nova Scotia. Similar services operating from Summerside connected with railway facilities at Shediac, New Brunswick. The most direct route across the Northumberland Strait, however, was the Abegweit Passage. Infrequent winter service provided by underpowered steamships incapable of breaking sea ice ensured the survival of a passenger and mail service across Abegweit Passage using iceboats until a permanent ferry service was established in the 1910s.
The unsatisfactory winter steamship service and reliance upon primitive iceboats provoked complaints from the Island government until the federal government decided to implement a railcar ferry service across Abegweit Passage between new ports at Port Borden and Cape Tormentine. In 1912, the federal government promised to open a car ferry between the "Capes" (Cape Traverse, PEI, and Cape Tormentine, NB). The privately owned New Brunswick and Prince Edward Island Railway from Sackville, New Brunswick to Cape Tormentine was purchased by the federal government, and an order was placed with a shipyard in England for an icebreaking railcar ferry, to be called the Prince Edward Island. Ports were developed at Carleton Point, several kilometres west of Cape Traverse, and at the existing harbour at Cape Tormentine; the new port at Carleton Point would be named Borden in honour of Prime Minister Sir Robert Borden. The new ferry entered service in 1915 and operated on the former steamship routes until port facilities were opened in October 1917. Automobile service was added in 1938, and other vessels followed as the ferry service expanded in the post-war years. This ferry service was initially the responsibility of Canadian Government Railways (1917–1918) and later Canadian National Railway (1918–1983), then the CNR subsidiary CN Marine (1977–1986). In 1986, CN Marine was renamed when all federal government ferry services in Atlantic Canada were transferred to the new Crown corporation Marine Atlantic. Ferry service years Scotia I (various years 1917–1957) Scotia II (various years 1937–1968) Charlottetown (1931–1941) Abegweit (1947–1982), renamed Abby (1983–1986) Confederation (1962–1976) John Hamilton Gray (1968–1997) Lucy Maud Montgomery (1969–1973) Holiday Island (1971–1997) Vacationland (1971–1997) Abegweit (1982–1997) Early proposals Discussion of a fixed link can be traced to George Howlan, who called for construction of a railway tunnel beneath Abegweit Passage at the same time as the Prince Edward Island Railway was being built across the province in the 1870s. Howlan also raised the issue as a member of the provincial Legislative Assembly and, in 1891, as a Senator and member of a delegation to meetings on the subject held at the British Parliament. The idea lost favour following his death in 1901. Talk of a fixed link was revived in the 1950s and 1960s, coinciding with federal election campaigns. The topic was raised in 1957, only two years after the opening of the Canso Causeway and while another mega-project, the St. Lawrence Seaway, was being constructed. A rockfill causeway was proposed to cross Abegweit Passage, with a bridge/tunnel to accommodate shipping. This plan was rejected for navigational reasons but was raised again in 1962, and in 1965 the federal government, ignoring concerns of the shipping industry, called for tenders for a $148 million fixed link featuring a tunnel/causeway/bridge. Approach roads and railway lines were constructed at Borden and Jourimain Island, but the project was formally abandoned in 1969, upon scientific recommendation, in favour of improved ferry services. Due to the extremely complex tidal regime in the Northumberland Strait, consisting of diurnal and semi-diurnal cycles, any attempt to close Abegweit Passage would be next to impossible, since the tidal cycles on each side of a causeway would be at opposite phases to each other.
Tidal experts at the Canadian Hydrographic Service estimated that tidal currents through a gap in such a causeway would be powerful enough to counter most commercial ships and to sweep away boulders the size of houses. 1988 plebiscite Consideration of a fixed link was renewed in the 1980s by an unsolicited proposal from a Nova Scotia businessman. The federal government favoured the construction of a fixed link chiefly because of the rising costs of providing ferry service (a constitutional requirement dating from PEI's accession to Confederation) and the increasing deficits being incurred by the railway system on PEI (run as part of Canadian National, then a Crown corporation). The federal government proposed to provide a fixed subsidy for the construction and operation of a fixed link, in return for the province agreeing to the abandonment of the ferry service and the railway system. Following the election of the Progressive Conservative government of Brian Mulroney, with its agenda for regional development through so-called "mega-projects", Public Works Canada called for formal proposals in 1987 and received three offers. These proposals included a tunnel, a bridge, and a combined tunnel-causeway-bridge. These developments sparked an extremely divisive debate on the Island, and Premier Joe Ghiz promised a plebiscite to gauge public support, which was held on January 18, 1988. During the plebiscite debate, the anti-link group Friends of the Island cited potential ecological damage from the construction, as well as concerns about the impact on Prince Edward Island's lifestyle in general, and noted that the "mega-project" model had had limited success in other areas of the world and rarely enriched the local population. The Friends of the Island believed that a fixed link was being pushed by a federal government unwilling to shoulder the cost of its constitutional obligation to fund an efficient ferry service, and that a link would be built largely for the benefit of mainland tourists and businesses waiting to exploit the Island. The pro-link group Islanders for a Better Tomorrow noted that transportation reliability would result in improvements for exporters and the tourism industry. The result was 59.4% in favour of the fixed link. Bridge development The debate did not end with the 1988 plebiscite, and the federal government faced numerous legal challenges and a lengthy environmental impact assessment for the project. The developer of the single bridge proposal, Strait Crossing Development Inc., was selected, and an announcement that the Northumberland Strait Crossing Project would be built was finally made on December 2, 1992; the developer was required to privately finance all construction through bond markets. Shareholders of Strait Crossing Development Inc. include: OMERS, an Ontario public servant pension fund (under the OMERS subsidiary BPC Maritime Corporation) VINCI Concessions Canada Inc., Montreal, Quebec Strait Crossing Inc., Calgary, Alberta (at one time part of the W. A. Stephenson / Stephenson Construction International (SCI) Engineers & Constructors Group of Companies) Constitutional amendment As mentioned, the Schedule to the Prince Edward Island Terms of Union in the Constitution of Canada required steamship service to connect the Island's railway system with that of mainland North America. A dedicated ferry service replaced the steamships in 1917, but no changes were made to the constitution.
The fixed crossing, however, required a constitutional amendment (see Amendments to the Constitution of Canada). The Constitution Amendment Proclamation, 1993 (Prince Edward Island) dealt with this issue, as well as the issue of tolls on the crossing. It made clear that the government (or a private body) could charge a toll (an essential part of the government's financing plans) for the crossing without violating the terms of union: "That a fixed crossing joining the Island to the mainland may be substituted for the steam service referred to in this Schedule... That, for greater certainty, nothing in this Schedule prevents the imposition of tolls for the use of such a fixed crossing between the Island and the mainland, or the private operation of such a crossing;" Construction The construction, which was carried out by a construction joint venture of Ballast Nedam, GTMI (Canada), Northern Construction and Strait Crossing Inc., started in the fall of 1993, beginning with preparation of staging facilities. Bridge components were built year-round from 1994 to the summer of 1996, and placement of components took place from the fall of 1994 until the fall of 1996. Approach roads, toll plazas, and final work on the structure continued until the spring of 1997, at an estimated total cost of $1 billion. All bridge components were constructed on land, in purpose-built staging yards located on the shoreline at Amherst Head, fronting on Borden Harbour just east of the town and ferry docks, and at an inland facility at Bayfield, New Brunswick, west of Cape Tormentine. The Amherst Head staging facility was where all large components were built, including the pier bases, ice shields, main spans, and drop-in spans. The Bayfield facility was used to construct components for the near-shore bridges, which were linked using launching trusses extending over shallow waters from both the New Brunswick and Prince Edward Island shores. Extremely durable high-grade concrete and reinforcing steel were used throughout construction of the pre-cast components, giving the bridge an estimated lifespan in excess of 100 years. The reinforced concrete structure was also designed to withstand ice impacts: a deflection cone encircles each pier at the point where it meets the water surface, so that impacting ice bounces off. The sheer size and weight of the components required strengthening of the soil base during the design and preparation work for the Amherst Head staging facility, as well as the use of a crawler transport system to move pieces from fabrication to storage and onto a nearby pier. These crawler transports, which ran on specially designed teflon-coated concrete rails, earned the nickname "lobsters" from workers. All major components were lifted from the Amherst Head staging facility, transported, and placed in Abegweit Passage using the HLV Svanen, a Dutch-built heavy lift catamaran, which during the construction of the fixed link was reportedly the tallest man-made structure in the province. HLV Svanen was custom-built for use on the Great Belt Bridge, Denmark's largest construction project, in the early 1990s, and was modified at a French shipyard before working on the Northumberland Strait Crossing Project. Following the placement of the final major component and completion of the bridge structure in Abegweit Passage on November 19, 1996, HLV Svanen returned to Denmark for use in construction of the Øresund Bridge.
Construction of the fixed link required over 5,000 workers, ranging from labourers and specialty trades to engineers, surveyors, and managers. The economic impact of construction on Prince Edward Island was substantial, with the provincial GDP rising over 5% during the construction, providing a short-term economic boom for the Island. The bridge neared completion in April 1997. Naming Throughout construction, the federal government received suggestions for names. A committee was formed on May 1, 1996, chaired by former PEI Premier Alex Campbell, to choose the new name from the submissions. The committee chose the name "Abegweit Crossing", which would pay homage to the Abegweit Passage which the bridge crosses, to the vessel M/V Abegweit which the bridge would replace, and to the Mi'kmaq traditional name for the island. However, the Canadian government overruled the committee, and on September 27, 1996, the Minister of Public Works and Government Services, Diane Marleau, announced that the bridge's name would be "Confederation Bridge". This name is not without controversy, as many Islanders feel the word "Confederation" is overused throughout the province, finding use in the name of a Northumberland Ferries Limited vessel (M/V Confederation), a performing arts centre and art gallery (Confederation Centre of the Arts), a shopping centre (Confederation Court Mall), and the province-wide rails-to-trails system (Confederation Trail), as well as in tourism promotions (e.g., "Birthplace of Confederation"). The President of Ireland, Mary McAleese, during a state visit to Canada in 1998, referred to the bridge as the "Span of Green Gables". In April 2022, the PEI legislature voted unanimously in favour of renaming the bridge "Epekwitk Crossing"; Epekwitk is the traditional Mi'kmaq name for Prince Edward Island. The name change would need to be approved by the Canadian federal government in order to take effect. Finishing After completing the structure on November 19, 1996, SCI worked throughout the winter, paving the bridge deck, placing concrete barrier guardrails which also act as wind barriers, installing bridge deck and navigational lighting, constructing the Borden-Carleton toll plaza, and finishing the New Brunswick and Prince Edward Island approach roads. In separate construction, the federal and provincial governments built a new commercial and tourist development on the abandoned CN rail yards in Borden-Carleton, with phase I of this facility opening in spring 1997 as "Gateway Village". New Brunswick has never received similar federal support to improve the economy of Cape Tormentine, which has become a shadow of its former role in PEI transportation history, although in recent years a new eco-tourist and visitor centre was opened on Jourimain Island near the western end of the bridge. Official opening The official opening of the bridge took place on May 31, 1997, with the first vehicle traffic crossing at approximately 5:00 p.m. ADT following a nationally televised ceremony which aired on CBC and included a sailpast of the schooner Bluenose II and several Canadian Coast Guard ships, a flyover by the Snowbirds, and an emotional farewell to the beloved ferries, which made their final crossings that evening. It is estimated that almost 75,000 people participated in a "Bridge Walk" and "Bridge Run" during the hours immediately prior to the opening for traffic.
BridgeFest '97 began on Friday, May 30, starting with the run in the morning and with the walk continuing for the rest of the day; thousands of people walked across the bridge that day. In the days following the opening of the bridge, ferry operator Marine Atlantic disposed of its four vessels. The ferry terminals and docks in both ports were removed over the summer of 1997. Operation The bridge is operated by Strait Crossing Bridge Limited (SCBL), a subsidiary of the Strait Crossing Development Inc. consortium which built the structure. SCBL will privately manage, maintain, and operate the bridge until 2032, when these operations will transfer to the Government of Canada. The Government of Canada agreed to pay about $44 million a year for 33 years to Strait Crossing Development Inc., this being the subsidy formerly paid to Marine Atlantic to cover operating losses of the ferry system. These payments are in effect a mortgage and are being used by the developer to pay off construction costs. In 2032, the bridge's ownership will revert to the federal government. All tolls charged by SCBL are revenue for the consortium. Toll increases are indexed to inflation and regulated by the federal government. The consortium has rarely commented upon the profitability of the bridge, but during the structure's 10th anniversary it was revealed that there had been a 30% cost overrun in construction ($330 million). The consortium is forced to cover this out of toll revenue, since the federal government ferry subsidy is used to pay for the original tendered price ($1 billion). Operating costs for the bridge have also proven expensive, with warranty repairs for asphalt adherence and the complete replacement of all bridge deck lighting cutting into profits. Toll revenues have fallen over 30% since the bridge opened, largely because of declining tourism traffic and domestic travel, and currently range from $25 to $30 million annually. After expenses in 2003, the consortium received a year-end dividend of $2.6 million. Effect The number of tourists visiting Prince Edward Island increased from 740,000 in 1996 (the year before the bridge opened) to 1,200,000 in 1997, but this has since dropped back to about 900,000 visitors annually. As a way of further promoting the island's new accessibility, the province issued vehicle licence plates from 1999 to 2006 that featured a likeness of the Confederation Bridge amid the serial number. These plates, along with four other designs, started being replaced by a single design in 2013.
Technology
Bridges
null
201017
https://en.wikipedia.org/wiki/Budgerigar
Budgerigar
The budgerigar (Melopsittacus undulatus), also known as the common parakeet, shell parakeet or budgie, is a small, long-tailed, seed-eating parrot native to Australia. Naturally the species is green and yellow with black, scalloped markings on the nape, back, and wings. Budgies are bred in captivity with colouring of blues, whites, yellows, greys, and even with small crests. Juveniles and chicks are monomorphic (the sexes are visually indistinguishable), while adults are told apart by their cere colouring and their behaviour. The species is monotypic, meaning it is the only member of the genus Melopsittacus, which is the only genus in the tribe Melopsittacini. The budgerigar is closely related to the lories and the fig parrots. The origin of the budgerigar's name is unclear. First recorded in 1805, budgerigars are popular pets around the world due to their small size, low cost, and ability to mimic human speech. They are likely the third most popular pet in the world, after the domesticated dog and cat. Budgies are nomadic flock parakeets that have been bred in captivity since the 19th century. In both captivity and the wild, budgerigars breed opportunistically and in pairs. They are found wild throughout the drier parts of Australia, where they have survived harsh inland conditions for over five million years. Their success can be attributed to a nomadic lifestyle and their ability to breed while on the move. Etymology Several possible origins for the name budgerigar have been proposed. One is that budgerigar may be a mispronunciation or alteration of the Gamilaraay word gidjirrigaa or the Yuwaalaraay gijirragaa. Another possibility is that budgerigar is a modified form of budgery or boojery (Australian English slang for "good") and gar ("cockatoo"). While many references mention "good" as part of the meaning, and a few specify "good bird", it is quite possible that reports by those local to the region are more accurate in specifying the direct translation as "good food". Alternative spellings include budgerygah and betcherrygah, the latter used by Indigenous people of the Liverpool Plains in New South Wales. Alternative names for the budgerigar include the shell parrot or shell parakeet, the warbling grass parakeet, the canary parrot, the zebra parrot, the flight bird, and the scallop parrot. Although more often used as a common name for small parrots in the genus Agapornis, the name "lovebird" has been used for budgerigars because of their habit of close perching, mutual preening, and their long-term pair bonds. Taxonomy The budgerigar was first described by George Shaw in 1805, and given its current binomial name by John Gould in 1840. The genus name Melopsittacus, from Ancient Greek, means "melodious parrot". The species name undulatus is Latin for "undulated" or "wave-patterned". The budgerigar was once proposed to be a link between the genera Neophema and Pezoporus, based on the barred plumage. However, recent phylogenetic studies using DNA sequences place the budgerigar very close to the lories (tribe Loriini) and the fig parrots (tribe Cyclopsittini). Description Wild budgerigars are small and lightly built, displaying a light green body colour (abdomen and rumps), while their mantles (back and wing coverts) display pitch-black markings (blackish in fledglings and immatures) edged in clear yellow undulations. The forehead and face are yellow in adults.
Prior to acquiring their adult plumage, young individuals have blackish stripes running down to the cere (nose) until around 3–4 months of age. They display small, iridescent blue-violet cheek patches and a series of three black spots across each side of their throats (called throat patches). The two outermost throat spots are situated at the base of each cheek patch. The tail is cobalt (dark blue), and the outer tail feathers display central yellow flashes. Their wings have greenish-black flight feathers and black coverts with yellow fringes, along with central yellow flashes which only become visible in flight or when the wings are outstretched. Bills are olive grey and legs bluish-grey, with zygodactyl toes. In their natural Australian habitat, budgerigars are noticeably smaller than those in captivity. This particular parrot species has been bred in many other colours and shades in captivity (e.g. blue, grey, grey-green, pieds, violet, white, yellow-blue). Pet store individuals will commonly be blue, green, or yellow. Like most parrot species, budgerigar plumage fluoresces under ultraviolet light – a phenomenon possibly related to courtship and mate selection. The colour of the cere (the area containing the nostrils) differs between the sexes, being lavender/baby blue in males, pale brownish-white (non-breeding) to brown (breeding) in females, and pink in immature birds of both sexes (usually of a more even purplish-pink colour in young males). Some female budgerigars develop a brown cere only during breeding time, which later returns to the normal colour. Young females can often be identified by a subtle, chalky whiteness that starts around the nostrils. Males that are either albino, lutino, dark-eyed clear or recessive pied (Danish pied or harlequin) retain the immature purplish-pink cere colour for their entire lives. Mature males usually have a cere of light to dark blue, but in some particular colour mutations it can be periwinkle, lavender, purplish or pink – including dark-eyed clears, Danish pieds (recessive pieds) and inos, which usually display much rounder heads. Female budgerigars display more dominant behaviour than males of the species and may act aggressively towards them. Budgerigars have tetrachromatic colour vision, although all four classes of cone cells do not operate simultaneously unless under sunlight or a UV lamp. The ultraviolet spectrum brightens their feathers to attract mates. The throat spots in budgerigars reflect UV light and can be used to distinguish individual birds. While ultraviolet light is essential to the good health of caged and pet birds, inadequate darkness or rest results in overstimulation. Colour mutations All captive budgerigars are divided into two basic series of colours: white-based (blue, grey and white) and yellow-based (green, grey-green and yellow). Presently, at least 32 primary mutations (including violet) occur, enabling hundreds of possible secondary mutations (stable combined primary mutations) and colour varieties (unstable combined mutations). Ecology Budgerigars are nomadic, and flocks move on from sites as environmental conditions change. Budgerigars are found in open habitats, primarily in scrublands, open woodlands, and grasslands of Australia. The birds are normally found in small flocks, but can form very large flocks under favourable conditions. The nomadic movement of the flocks is tied to the availability of food and water.
Budgerigars have two distinct flight speeds which they are capable of switching between depending on the circumstances. Budgerigars sometimes swarm together in groups containing thousands of individuals. Drought can drive flocks into more wooded habitat or coastal areas. They feed on the seeds of spinifex and grasses, and sometimes ripening wheat. Budgerigars feed primarily on grass seeds. The species also opportunistically depredates growing cereal crops and lawn grass seeds. Due to the low water content of the seeds, they rely on the availability of freshwater. Outside of Australia, the only long-term establishment of naturalised feral budgerigars is a large population near St. Petersburg, Florida. Increased competition for nesting sites from European starlings and house sparrows is thought to be a primary cause of the Florida population's decline from the 1980s. The more consistent, year-round conditions in Florida significantly reduced their nomadic behaviour. The species has been introduced to various locations in Puerto Rico and the United States. Behaviour Breeding Breeding in the wild generally takes place between June and September in northern Australia and between August and January in the south, although budgerigars are opportunistic breeders and respond to rains when grass seeds become most abundant. Budgerigars are monogamous and breed in large colonies throughout their range. They show signs of affection to their flockmates by preening or feeding one another. Budgerigars feed one another by eating the seeds themselves and then regurgitating them into their flockmate's mouth. Populations in some areas have increased as a result of increased water availability at farms. Nests are made in holes in trees, fence posts or logs lying on the ground; the four to six eggs are incubated for 18–21 days, with the young fledging about 30 days after hatching. In the wild, virtually all parrot species require a hollow tree or a hollow log as a nest site. Budgerigars will typically breed in captivity when provided with a nest box. The eggs are typically one to two centimetres long and are pearl white without any colouration if fertile. Female budgerigars can lay eggs without a male partner, but these unfertilised eggs will not hatch. Females normally have a whitish-tan cere; however, when the female is laying eggs, her cere turns a crusty brown colour. Certain female budgies may always keep a whitish-tan cere or always keep a crusty brown cere regardless of breeding condition. A female budgerigar lays her eggs on alternating days. After the first one, there is usually a two-day gap until the next. She will usually lay between four and eight eggs, which she will incubate (usually starting after laying her second or third) for about 21 days each. Once they have begun incubating, females only leave their nests for very quick defecations, stretches and quick meals, and are by then almost exclusively fed by their mate (usually at the nest's entrance). Females will not allow a male to enter the nest unless he forces his way inside. Clutch size ranges from six to eight chicks. There is evidence of same-sex sexual behaviour amongst male budgerigars. It was originally hypothesised that they did this as a form of "courtship practice" so they would be better breeding partners for females; however, an inverse relationship exists between participation in same-sex behaviour and pairing success. Chick health Breeding difficulties arise for various reasons. Some chicks may die from diseases and attacks from adults.
Other budgerigars (virtually always females) may fight over the nest box, attacking each other or a brood. Another problem may be the birds' beaks being under-lapped, where the lower mandible is above the upper mandible. Most health issues and physical abnormalities in budgerigars are genetic. Care should be taken that birds used for breeding are active, healthy and unrelated. Budgerigars that are related or have fatty tumours or other potential genetic health problems should not be allowed to breed. Parasites (lice, mites, worms) and pathogens (bacteria, fungi and viruses) are contagious and thus transmitted between individuals through either direct or indirect contact. In some cases, chicks will experience splay leg. This medical condition may be congenital or acquired through malnutrition. Chicks can be treated with splints, although this method is not always successful in curing the affected bird. Preventative measures include using proper nesting box materials such as pine shavings and cleaning the nest box between uses. Development Eggs take about 18–20 days before they start hatching. The hatchlings are altricial – blind, naked, unable to lift their head and totally helpless – and their mother feeds them and keeps them warm constantly. Around 10 days of age, the chicks' eyes will open, and they will start to develop feather down. The appearance of down occurs at the age for closed banding of the chicks. They develop feathers around three weeks of age. (One can often easily note the colour mutation of the individual birds at this point.) At this stage of the chicks' development, the male usually has begun to enter the nest to help his female in caring for and feeding the chicks. Some budgerigar females, however, totally forbid the male from entering the nest and thus take full responsibility for rearing the chicks until they fledge. Depending on the size of the clutch, and most particularly in the case of single mothers, it may then be wise to transfer a portion of the hatchlings (or the best of the fertile eggs) to another pair. The foster pair must already be in breeding mode and thus either at the laying or incubating stages, or already rearing hatchlings. As the chicks develop and grow feathers, they are able to be left on their own for longer periods of time. By the fifth week, the chicks are strong enough that both parents will be comfortable staying out of the nest more. The youngsters will stretch their wings to gain strength before they attempt to fly. They will also help defend the box from enemies, mostly with their loud screeching. Young budgerigars typically fledge (leave the nest) around their fifth week of age and are usually completely weaned between six and eight weeks old. However, the age for fledging, as well as weaning, can vary slightly depending on the age and the number of surviving chicks. Generally speaking, the oldest chick is the first to be weaned. Although it is logically the last one to be weaned, the youngest chick is often weaned at a younger age than its older sibling(s). This can be a result of mimicking the actions of older siblings. Lone surviving chicks are often weaned at the youngest possible age as a result of having their parents' full attention and care. Hand-reared budgies may take slightly longer to wean than parent-raised chicks. Hand feeding is not routinely done with budgerigars, due to their small size and because young parent-raised birds can be readily tamed.
Relationship with humans Aviculture The budgerigar has been bred in captivity since the 1850s. Breeders have worked to produce a variety of colour, pattern and feather mutations, including albino, blue, cinnamon-ino (lacewing), clearwing, crested, dark, greywing, opaline, pieds, spangled, dilute (suffused) and violet. "English budgerigars", more correctly called "show" or "exhibition budgerigars", are about twice as large as their wild counterparts and have puffier head feathers, giving them a boldly exaggerated look. The eyes and beak can be almost totally obscured by these fluffy head feathers. English budgerigars are typically more expensive than wild-type birds, and have a shorter life span of about seven to nine years. Breeders of English budgerigars show their birds at animal shows. Most captive budgerigars in the pet trade are more similar in size and body conformation to wild budgerigars. Budgerigars are social animals and require stimulation in the shape of toys and interaction with humans or with other budgerigars. Budgerigars, and especially females, will chew material such as wood. When a budgerigar feels threatened, it will try to perch as high as possible and to bring its feathers close against its body in order to appear thinner. Tame budgerigars can be taught to speak, whistle and play with humans. Both males and females sing and can learn to mimic sounds and words and do simple tricks, but singing and mimicry are more pronounced and better perfected in males. Females rarely learn to mimic more than a dozen words. Males can easily acquire vocabularies ranging from a few dozen to a hundred words. Pet males, especially those kept alone, are generally the best speakers. Budgerigars will chew on anything they can find to keep their beaks trimmed. Mineral blocks (ideally enriched with iodine), cuttlebone and soft wooden pieces are suitable for this activity. Cuttlebones also supply calcium, essential for the proper forming of eggs and bone solidity. In captivity, budgerigars live an average of five to eight years, but life spans of 15–20 years have been reported. The life span depends on breed, lineage, and health, being highly influenced by exercise and diet. Budgerigars have been known to cause "bird fancier's lung" in sensitive people, a type of hypersensitivity pneumonitis. Apart from a handful of illnesses, diseases of the species are not transmittable to humans. Mimicry Budgerigars, like many other species of parrot, are able to mimic human speech. Puck, a male budgerigar owned by American Camille Jordan, holds the world record for the largest vocabulary of any bird, at 1,728 words. Puck died in 1994, with the record first appearing in the 1995 edition of Guinness World Records. The budgerigar "Disco" became Internet famous in 2013. Some of Disco's most repeated phrases included, "I am not a crook" and "Nobody puts baby bird in a corner!". In popular culture Small bathing suits for men, commonly referred to as togs or "Speedos", are informally called "budgie smugglers" in Australia. The phrase is humorously based on the appearance of the tight-fitting cloth around the male's genitals looking like a small budgie. The phrase was added to the Oxford English Dictionary in 2016. Gallery
Biology and health sciences
Psittaciformes
null
201022
https://en.wikipedia.org/wiki/Bell%20number
Bell number
In combinatorial mathematics, the Bell numbers count the possible partitions of a set. These numbers have been studied by mathematicians since the 19th century, and their roots go back to medieval Japan. In an example of Stigler's law of eponymy, they are named after Eric Temple Bell, who wrote about them in the 1930s. The Bell numbers are denoted Bn, where n is an integer greater than or equal to zero. Starting with B0 = B1 = 1, the first few Bell numbers are 1, 1, 2, 5, 15, 52, 203, 877, 4140, ... . The Bell number Bn counts the different ways to partition a set that has exactly n elements, or equivalently, the number of equivalence relations on it. Bn also counts the different rhyme schemes for n-line poems. As well as appearing in counting problems, these numbers have a different interpretation, as moments of probability distributions. In particular, Bn is the n-th moment of a Poisson distribution with mean 1. Counting Set partitions In general, Bn is the number of partitions of a set of size n. A partition of a set S is defined as a family of nonempty, pairwise disjoint subsets of S whose union is S. For example, B3 = 5 because the 3-element set {a, b, c} can be partitioned in 5 distinct ways: {{a}, {b}, {c}}, {{a}, {b, c}}, {{b}, {a, c}}, {{c}, {a, b}}, and {{a, b, c}}. As suggested by the set notation above, the ordering of subsets within the family is not considered; ordered partitions are counted by a different sequence of numbers, the ordered Bell numbers. B0 is 1 because there is exactly one partition of the empty set. This partition is itself the empty set; it can be interpreted as a family of subsets of the empty set, consisting of zero subsets. It is vacuously true that all of the subsets in this family are non-empty subsets of the empty set and that they are pairwise disjoint subsets of the empty set, because there are no subsets to have these unlikely properties. The partitions of a set correspond one-to-one with its equivalence relations. These are binary relations that are reflexive, symmetric, and transitive. The equivalence relation corresponding to a partition defines two elements as being equivalent when they belong to the same partition subset as each other. Conversely, every equivalence relation corresponds to a partition into equivalence classes. Therefore, the Bell numbers also count the equivalence relations. Factorizations If a number N is a squarefree positive integer, meaning that it is the product of some number n of distinct prime numbers, then Bn gives the number of different multiplicative partitions of N. These are factorizations of N into numbers greater than one, treating two factorizations as the same if they have the same factors in a different order. For instance, 30 is the product of the three primes 2, 3, and 5, and has B3 = 5 factorizations: 30 = 2 × 15 = 3 × 10 = 5 × 6 = 2 × 3 × 5. Rhyme schemes The Bell numbers also count the rhyme schemes of an n-line poem or stanza. A rhyme scheme describes which lines rhyme with each other, and so may be interpreted as a partition of the set of lines into rhyming subsets. Rhyme schemes are usually written as a sequence of Roman letters, one per line, with rhyming lines given the same letter as each other, and with the first lines in each rhyming set labeled in alphabetical order. Thus, the 15 possible four-line rhyme schemes are AAAA, AAAB, AABA, AABB, AABC, ABAA, ABAB, ABAC, ABBA, ABBB, ABBC, ABCA, ABCB, ABCC, and ABCD. Permutations The Bell numbers come up in a card shuffling problem mentioned in the addendum to a paper on the subject. 
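As a concrete check of the partition counts above, the following minimal Python sketch (the function name set_partitions is illustrative, not from any cited source) enumerates the partitions of a small set directly:

```python
def set_partitions(s):
    """Recursively generate all partitions of the list s.

    The first element is placed into every block of every partition
    of the remaining elements, or into a new block of its own.
    """
    if not s:
        yield []  # the unique partition of the empty set
        return
    first, rest = s[0], s[1:]
    for partition in set_partitions(rest):
        # put `first` into each existing block in turn
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # or give `first` a block of its own
        yield [[first]] + partition

print(sum(1 for _ in set_partitions(['a', 'b', 'c'])))  # 5, i.e. B3
print([sum(1 for _ in set_partitions(list(range(n)))) for n in range(7)])
# [1, 1, 2, 5, 15, 52, 203] -- the opening Bell numbers
```

Running it reproduces B3 = 5 and the first few values of the sequence quoted above.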
If a deck of n cards is shuffled by repeatedly removing the top card and reinserting it anywhere in the deck (including its original position at the top of the deck), with exactly n repetitions of this operation, then there are n^n different shuffles that can be performed. Of these, the number that return the deck to its original sorted order is exactly Bn. Thus, the probability that the deck is in its original order after shuffling it in this way is Bn/n^n, which is significantly larger than the 1/n! probability that would describe a uniformly random permutation of the deck. Related to card shuffling are several other problems of counting special kinds of permutations that are also answered by the Bell numbers. For instance, the nth Bell number equals the number of permutations on n items in which no three values that are in sorted order have the last two of these three consecutive. In a notation for generalized permutation patterns where values that must be consecutive are written adjacent to each other, and values that can appear non-consecutively are separated by a dash, these permutations can be described as the permutations that avoid the pattern 1-23. The permutations that avoid the generalized patterns 12-3, 32-1, 3-21, 1-32, 3-12, 21-3, and 23-1 are also counted by the Bell numbers. The permutations in which every 321 pattern (without restriction on consecutive values) can be extended to a 3241 pattern are also counted by the Bell numbers. However, the Bell numbers grow too quickly to count the permutations that avoid a pattern that has not been generalized in this way: by the (now proven) Stanley–Wilf conjecture, the number of such permutations is singly exponential, and the Bell numbers have a higher asymptotic growth rate than that. Triangle scheme for calculations The Bell numbers can easily be calculated by creating the so-called Bell triangle, also called Aitken's array or the Peirce triangle after Alexander Aitken and Charles Sanders Peirce.
Start with the number one. Put this on a row by itself.
Start a new row with the rightmost element from the previous row as its leftmost number.
Determine the remaining numbers of the row by taking the sum of the number to the left and the number above the number to the left, that is, the number diagonally up and left of the number being calculated.
Repeat the previous step until the new row has one more number than the previous row.
The number on the left-hand side of a given row is the Bell number for that row.
Here are the first five rows of the triangle constructed by these rules:
1
1 2
2 3 5
5 7 10 15
15 20 27 37 52
The Bell numbers appear on both the left and right sides of the triangle. Properties Summation formulas The Bell numbers satisfy a recurrence relation involving binomial coefficients: B_{n+1} = Σ_{k=0}^{n} C(n, k) B_k. It can be explained by observing that, from an arbitrary partition of n + 1 items, removing the set containing the first item leaves a partition of a smaller set of k items for some number k that may range from 0 to n. There are C(n, k) choices for the k items that remain after one set is removed, and B_k choices of how to partition them. A different summation formula represents each Bell number as a sum of Stirling numbers of the second kind: B_n = Σ_{k=0}^{n} S(n, k). The Stirling number S(n, k) is the number of ways to partition a set of cardinality n into exactly k nonempty subsets. 
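The triangle construction above translates directly into a few lines of code; this is a minimal Python sketch under the rules just listed (the name bell_triangle is illustrative):

```python
def bell_triangle(rows):
    """Build the Bell triangle (Aitken's array) row by row.

    Each new row starts with the last entry of the previous row;
    every later entry is the sum of its left neighbour and the
    entry directly above that left neighbour.
    """
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        row = [prev[-1]]          # rightmost element of previous row
        for x in prev:
            row.append(row[-1] + x)
        triangle.append(row)
    return triangle

for row in bell_triangle(5):
    print(row)
# [1]
# [1, 2]
# [2, 3, 5]
# [5, 7, 10, 15]
# [15, 20, 27, 37, 52]
# The Bell numbers 1, 1, 2, 5, 15, 52, ... appear in the first column.
```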
Thus, in the equation relating the Bell numbers to the Stirling numbers, each partition counted on the left hand side of the equation is counted in exactly one of the terms of the sum on the right hand side, the one for which k is the number of sets in the partition. Spivey has given a formula that combines both of these summations: B_{n+m} = Σ_{k=0}^{n} Σ_{j=0}^{m} j^{n−k} C(n, k) S(m, j) B_k. Applying Pascal's inversion formula to the recurrence relation yields a further alternating-sum identity, which can be generalized; other finite sum formulas using Stirling numbers of the first kind are also known, one of which can be seen as the inversion formula for Stirling numbers applied to Spivey's formula. Generating function The exponential generating function of the Bell numbers is B(x) = Σ_{n=0}^{∞} (B_n/n!) x^n = e^{e^x − 1}. In this formula, the summation in the middle is the general form used to define the exponential generating function for any sequence of numbers, and the formula on the right is the result of performing the summation in the specific case of the Bell numbers. One way to derive this result uses analytic combinatorics, a style of mathematical reasoning in which sets of mathematical objects are described by formulas explaining their construction from simpler objects, and then those formulas are manipulated to derive the combinatorial properties of the objects. In the language of analytic combinatorics, a set partition may be described as a set of nonempty urns into which elements labelled from 1 to n have been distributed, and the combinatorial class of all partitions (for all n) may be expressed by the notation SET(SET_{≥1}(Z)). Here, Z is a combinatorial class with only a single member of size one, an element that can be placed into an urn. The inner SET_{≥1} operator describes a set or urn that contains one or more labelled elements, and the outer SET describes the overall partition as a set of these urns. The exponential generating function may then be read off from this notation by translating the SET operator into the exponential function and the nonemptiness constraint ≥1 into subtraction by one. An alternative method for deriving the same generating function uses the recurrence relation for the Bell numbers in terms of binomial coefficients to show that the exponential generating function satisfies the differential equation B′(x) = e^x B(x). The function itself can be found by solving this equation. Moments of probability distributions The Bell numbers satisfy Dobinski's formula B_n = (1/e) Σ_{k=0}^{∞} k^n/k!. This formula can be derived by expanding the exponential generating function using the Taylor series for the exponential function, and then collecting terms with the same exponent. It allows Bn to be interpreted as the nth moment of a Poisson distribution with expected value 1. The nth Bell number is also the sum of the coefficients in the nth complete Bell polynomial, which expresses the nth moment of any probability distribution as a function of the first n cumulants. Modular arithmetic The Bell numbers obey Touchard's congruence: if p is any prime number then B_{p+n} ≡ B_n + B_{n+1} (mod p), or, generalizing, B_{p^m+n} ≡ m B_n + B_{n+1} (mod p). Because of Touchard's congruence, the Bell numbers are periodic modulo p, for every prime number p; for instance, for p = 2, the Bell numbers repeat the pattern odd-odd-even with period three. The period of this repetition, for an arbitrary prime number p, must be a divisor of (p^p − 1)/(p − 1), and for every prime for which the period has been computed it is exactly this number. The periods of the Bell numbers modulo n, for n = 1, 2, 3, ..., are 1, 3, 13, 12, 781, 39, 137257, 24, 39, 2343, 28531167061, 156, ... 
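The binomial recurrence, Dobinski's formula and Touchard's congruence can all be checked numerically. The sketch below is plain Python with illustrative function names; it truncates Dobinski's rapidly converging series at an arbitrary 50 terms:

```python
from math import comb, exp, factorial

def bell(n):
    """Bell numbers via the binomial recurrence
    B(n+1) = sum over k of C(n, k) * B(k)."""
    b = [1]
    for m in range(n):
        b.append(sum(comb(m, k) * b[k] for k in range(m + 1)))
    return b[n]

def dobinski(n, terms=50):
    """Dobinski's formula: B(n) = (1/e) * sum_{k>=0} k^n / k!,
    truncated after `terms` terms of the fast-converging series."""
    return sum(k**n / factorial(k) for k in range(terms)) / exp(1)

print(bell(5), round(dobinski(5)))   # 52 52

# Touchard's congruence: B(p + n) == B(n) + B(n + 1)  (mod p)
p, n = 7, 4
print((bell(p + n) - bell(n) - bell(n + 1)) % p)  # 0
```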
Integral representation An application of Cauchy's integral formula to the exponential generating function yields the complex integral representation B_n = (n!/(2πi)) ∮ e^{e^z − 1}/z^{n+1} dz, taken over a small closed contour encircling the origin. Some asymptotic representations can then be derived by a standard application of the method of steepest descent. Log-concavity The Bell numbers form a logarithmically convex sequence. Dividing them by the factorials, Bn/n!, gives a logarithmically concave sequence. Growth rate Several asymptotic formulas for the Bell numbers are known. The bound B_n < (0.792 n / ln(n+1))^n has been established for all positive integers n; moreover, for every ε > 0 there is a threshold n0(ε) such that B_n < (e^{−0.6+ε} n / ln(n+1))^n for all n > n0(ε). The Bell numbers can also be approximated using the Lambert W function, a function with the same growth rate as the logarithm: an asymptotic expansion of the Bell numbers in terms of W(n), valid uniformly as n → ∞, has been established, as has a simpler asymptotic expression for (ln B_n)/n. Bell primes The question has been raised of whether infinitely many Bell numbers are also prime numbers. These are called Bell primes. The first few Bell primes are: 2, 5, 877, 27644437, 35742549198872617291353508656626642567, 359334085968622831041960188598043661065388726959079837, corresponding to the indices 2, 3, 7, 13, 42 and 55. The next Bell prime is B2841, which is approximately 9.30740105 × 10^6538. History The Bell numbers are named after Eric Temple Bell, who wrote about them in 1938, following up a 1934 paper in which he studied the Bell polynomials. Bell did not claim to have discovered these numbers; in his 1938 paper, he wrote that the Bell numbers "have been frequently investigated" and "have been rediscovered many times". Bell cites several earlier publications on these numbers, beginning with the paper that gives Dobiński's formula for the Bell numbers. Bell called these numbers "exponential numbers"; the name "Bell numbers" and the notation Bn for these numbers were given to them by later authors. The first exhaustive enumeration of set partitions appears to have occurred in medieval Japan, where (inspired by the popularity of the book The Tale of Genji) a parlor game called genji-ko sprang up, in which guests were given five packets of incense to smell and were asked to guess which ones were the same as each other and which were different. The 52 possible solutions, counted by the Bell number B5, were recorded by 52 different diagrams, which were printed above the chapter headings in some editions of The Tale of Genji. In Srinivasa Ramanujan's second notebook, he investigated both Bell polynomials and Bell numbers. Early references for the Bell triangle, which has the Bell numbers on both of its sides, include papers by Charles Sanders Peirce and Alexander Aitken.
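Returning to the Bell primes discussed above, the indices of the first few can be reproduced with a short and admittedly naive Python check (trial division is adequate at these sizes; function names are illustrative):

```python
from math import comb

def bell_numbers(limit):
    """Yield B(0), B(1), ..., B(limit - 1) via the binomial recurrence."""
    b = [1]
    for m in range(limit):
        yield b[m]
        b.append(sum(comb(m, k) * b[k] for k in range(m + 1)))

def is_prime(x):
    """Simple trial division, fine for Bell numbers of this size."""
    if x < 2:
        return False
    d = 2
    while d * d <= x:
        if x % d == 0:
            return False
        d += 1
    return True

print([n for n, b in enumerate(bell_numbers(15)) if is_prime(b)])
# [2, 3, 7, 13] -- indices of the first four Bell primes
```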
Mathematics
Sequences
null
201154
https://en.wikipedia.org/wiki/Divide-and-conquer%20algorithm
Divide-and-conquer algorithm
In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as sorting (e.g., quicksort, merge sort), multiplying large numbers (e.g., the Karatsuba algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFT). Designing efficient divide-and-conquer algorithms can be difficult. As in mathematical induction, it is often necessary to generalize the problem to make it amenable to a recursive solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving recurrence relations. Divide and conquer The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its basic idea is to decompose a given problem into two or more similar, but simpler, subproblems, to solve them in turn, and to compose their solutions to solve the given problem. Problems of sufficient simplicity are solved directly. For example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the given list. This approach is known as the merge sort algorithm. The name "divide and conquer" is sometimes applied to algorithms that reduce each problem to only one sub-problem, such as the binary search algorithm for finding a record in a sorted list (or its analogue in numerical computing, the bisection algorithm for root finding). These algorithms can be implemented more efficiently than general divide-and-conquer algorithms; in particular, if they use tail recursion, they can be converted into simple loops. Under this broad definition, however, every algorithm that uses recursion or loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors consider that the name "divide and conquer" should be used only when each problem may generate two or more subproblems. The name decrease and conquer has been proposed instead for the single-subproblem class. An important application of divide and conquer is in optimization, where if the search space is reduced ("pruned") by a constant factor at each step, the overall algorithm has the same asymptotic complexity as the pruning step, with the constant depending on the pruning factor (by summing the geometric series); this is known as prune and search. Early historical examples Early examples of these algorithms are primarily decrease and conquer – the original problem is successively broken down into single subproblems, and indeed can be solved iteratively. Binary search, a decrease-and-conquer algorithm where the subproblems are of roughly half the original size, has a long history. While a clear description of the algorithm on computers appeared in 1946 in an article by John Mauchly, the idea of using a sorted list of items to facilitate searching dates back at least as far as Babylonia in 200 BC. 
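The merge sort scheme sketched above can be written out as follows; this is a minimal Python illustration rather than any canonical implementation:

```python
def merge_sort(items):
    """Sort a list by the strategy described above: split into halves,
    sort each half recursively, then merge the two sorted halves."""
    if len(items) <= 1:   # base case: a list of 0 or 1 items is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]   # append the leftover tail

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Both recursive calls operate on problems of half the size, and the merge step is linear, which is what gives the O(n log n) cost discussed later in this article.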
Another ancient decrease-and-conquer algorithm is the Euclidean algorithm to compute the greatest common divisor of two numbers by reducing the numbers to smaller and smaller equivalent subproblems, which dates to several centuries BC. An early example of a divide-and-conquer algorithm with multiple subproblems is Gauss's 1805 description of what is now called the Cooley–Tukey fast Fourier transform (FFT) algorithm, although he did not analyze its operation count quantitatively, and FFTs did not become widespread until they were rediscovered over a century later. An early two-subproblem D&C algorithm that was specifically developed for computers and properly analyzed is the merge sort algorithm, invented by John von Neumann in 1945. Another notable example is the algorithm invented by Anatolii A. Karatsuba in 1960 that could multiply two n-digit numbers in O(n^{log_2 3}) operations (in Big O notation). This algorithm disproved Andrey Kolmogorov's 1956 conjecture that n^2 operations would be required for that task. As another example of a divide-and-conquer algorithm that did not originally involve computers, Donald Knuth gives the method a post office typically uses to route mail: letters are sorted into separate bags for different geographical areas, each of these bags is itself sorted into batches for smaller sub-regions, and so on until they are delivered. This is related to a radix sort, described for punch-card sorting machines as early as 1929. Advantages Solving difficult problems Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking the problem into sub-problems, of solving the trivial cases, and of combining the sub-problem solutions into a solution to the original problem. Similarly, decrease and conquer only requires reducing the problem to a single smaller problem, such as the classic Tower of Hanoi puzzle, which reduces moving a tower of height n to moving a tower of height n − 1. Algorithm efficiency The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It was the key, for example, to Karatsuba's fast multiplication method, the quicksort and mergesort algorithms, the Strassen algorithm for matrix multiplication, and fast Fourier transforms. In all these examples, the D&C approach led to an improvement in the asymptotic cost of the solution. For example, if (a) the base cases have constant-bounded size, the work of splitting the problem and combining the partial solutions is proportional to the problem's size n, and (b) there is a bounded number p of sub-problems of size roughly n/p at each stage, then the cost of the divide-and-conquer algorithm will be O(n log n). Parallelism Divide-and-conquer algorithms are naturally adapted for execution in multi-processor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance, because distinct sub-problems can be executed on different processors. Memory access Divide-and-conquer algorithms naturally tend to make efficient use of memory caches. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory. An algorithm designed to exploit the cache in this way is called cache-oblivious, because it does not contain the cache size as an explicit parameter. 
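As an illustration of the multi-subproblem algorithms discussed in this section, here is a compact Python sketch of Karatsuba's method; splitting by decimal digits is a simplification chosen for readability (real implementations split by machine words):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers using three recursive
    multiplications of half-size numbers instead of four, giving
    O(n^log2(3)) digit operations rather than O(n^2)."""
    if x < 10 or y < 10:                    # base case: single digits
        return x * y
    m = max(len(str(x)), len(str(y))) // 2  # split position, in digits
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # the middle term reuses z0 and z2, saving one multiplication
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678), 1234 * 5678)  # 7006652 7006652
```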
Moreover, D&C algorithms can be designed for important algorithms (e.g., sorting, FFTs, and matrix multiplication) to be optimal cache-oblivious algorithms: they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size. In contrast, the traditional approach to exploiting the cache is blocking, as in loop nest optimization, where the problem is explicitly divided into chunks of the appropriate size—this can also use the cache optimally, but only when the algorithm is tuned for the specific cache sizes of a particular machine. The same advantage exists with regard to other hierarchical storage systems, such as NUMA or virtual memory, as well as for multiple levels of cache: once a sub-problem is small enough, it can be solved within a given level of the hierarchy, without accessing the higher (slower) levels. Roundoff control In computations with rounded arithmetic, e.g. with floating-point numbers, a divide-and-conquer algorithm may yield more accurate results than a superficially equivalent iterative method. For example, one can add N numbers either by a simple loop that adds each datum to a single variable, or by a D&C algorithm called pairwise summation that breaks the data set into two halves, recursively computes the sum of each half, and then adds the two sums. While the second method performs the same number of additions as the first and pays the overhead of the recursive calls, it is usually more accurate. Implementation issues Recursion Divide-and-conquer algorithms are naturally implemented as recursive procedures: a recursive function is a function that calls itself within its definition. In that case, the partial sub-problems leading to the one currently being solved are automatically stored in the procedure call stack. Explicit stack Divide-and-conquer algorithms can also be implemented by a non-recursive program that stores the partial sub-problems in some explicit data structure, such as a stack, queue, or priority queue. This approach allows more freedom in the choice of the sub-problem that is to be solved next, a feature that is important in some applications — e.g. in breadth-first recursion and the branch-and-bound method for function optimization. This approach is also the standard solution in programming languages that do not provide support for recursive procedures. Stack size In recursive implementations of D&C algorithms, one must make sure that there is sufficient memory allocated for the recursion stack, otherwise, the execution may fail because of stack overflow. D&C algorithms that are time-efficient often have relatively small recursion depth. For example, the quicksort algorithm can be implemented so that it never requires more than log2 n nested recursive calls to sort n items. Stack overflow may be difficult to avoid when using recursive procedures, since many compilers assume that the recursion stack is a contiguous area of memory, and some allocate a fixed amount of space for it. Compilers may also save more information in the recursion stack than is strictly necessary, such as return address, unchanging parameters, and the internal variables of the procedure. Thus, the risk of stack overflow can be reduced by minimizing the parameters and internal variables of the recursive procedure or by using an explicit stack structure. Choosing the base cases In any recursive algorithm, there is considerable freedom in the choice of the base cases, the small subproblems that are solved directly in order to terminate the recursion. 
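The pairwise summation scheme described under "Roundoff control" above can be sketched as follows. This is a minimal Python version; production implementations typically sum a much larger base-case block directly for speed:

```python
def pairwise_sum(data, lo=0, hi=None):
    """Sum floats by recursively splitting the index range in half.
    Worst-case rounding error grows like O(log n) instead of the O(n)
    of a simple left-to-right accumulation loop."""
    if hi is None:
        hi = len(data)
    if hi - lo <= 2:                 # small base case: sum directly
        return sum(data[lo:hi])
    mid = (lo + hi) // 2
    return pairwise_sum(data, lo, mid) + pairwise_sum(data, mid, hi)

values = [0.1] * 10**6
print(pairwise_sum(values))   # very close to 100000.0
print(sum(values))            # a plain loop accumulates more error
```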
Choosing the smallest or simplest possible base cases is more elegant and usually leads to simpler programs, because there are fewer cases to consider and they are easier to solve. For example, a fast Fourier transform algorithm could stop the recursion when the input is a single sample, and the quicksort list-sorting algorithm could stop when the input is the empty list; in both examples, there is only one base case to consider, and it requires no processing. On the other hand, efficiency often improves if the recursion is stopped at relatively large base cases, and these are solved non-recursively, resulting in a hybrid algorithm. This strategy avoids the overhead of recursive calls that do little or no work and may also allow the use of specialized non-recursive algorithms that, for those base cases, are more efficient than explicit recursion. A general procedure for a simple hybrid recursive algorithm is short-circuiting the base case, also known as arm's-length recursion. In this case, whether the next step will result in the base case is checked before the function call, avoiding an unnecessary function call. For example, in a tree, rather than recursing to a child node and then checking whether it is null, checking for null before recursing avoids half the function calls in some algorithms on binary trees. Since a D&C algorithm eventually reduces each problem or sub-problem instance to a large number of base instances, these often dominate the overall cost of the algorithm, especially when the splitting/joining overhead is low. Note that these considerations do not depend on whether recursion is implemented by the compiler or by an explicit stack. Thus, for example, many library implementations of quicksort will switch to a simple loop-based insertion sort (or similar) algorithm once the number of items to be sorted is sufficiently small, as illustrated in the sketch below. Note that, if the empty list were the only base case, sorting a list with n entries would entail maximally n quicksort calls that would do nothing but return immediately. Increasing the base cases to lists of size 2 or less will eliminate most of those do-nothing calls, and more generally a base case larger than 2 is typically used to reduce the fraction of time spent in function-call overhead or stack manipulation. Alternatively, one can employ large base cases that still use a divide-and-conquer algorithm, but implement the algorithm for a predetermined set of fixed sizes where the algorithm can be completely unrolled into code that has no recursion, loops, or conditionals (related to the technique of partial evaluation). For example, this approach is used in some efficient FFT implementations, where the base cases are unrolled implementations of divide-and-conquer FFT algorithms for a set of fixed sizes. Source-code generation methods may be used to produce the large number of separate base cases desirable to implement this strategy efficiently. The generalized version of this idea is known as recursion "unrolling" or "coarsening", and various techniques have been proposed for automating the procedure of enlarging the base case. Dynamic programming for overlapping subproblems For some problems, the branched recursion may end up evaluating the same sub-problem many times over. In such cases it may be worth identifying and saving the solutions to these overlapping subproblems, a technique which is commonly known as memoization. Followed to the limit, it leads to bottom-up divide-and-conquer algorithms such as dynamic programming.
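The base-case strategy referenced above (switching to a loop-based sort for small inputs) can be illustrated with a hybrid quicksort; the cutoff value of 16 below is purely illustrative, and real libraries tune it empirically:

```python
import random

CUTOFF = 16  # illustrative threshold for switching to insertion sort

def insertion_sort(a, lo, hi):
    """Simple loop-based sort, efficient for short ranges a[lo..hi]."""
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    """Quicksort that stops recursing on small base cases, avoiding
    the overhead of recursive calls that do little or no work."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo < CUTOFF:
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:                      # Hoare-style partition
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    hybrid_quicksort(a, lo, j)
    hybrid_quicksort(a, i, hi)

data = random.sample(range(1000), 200)
hybrid_quicksort(data)
print(data == sorted(data))  # True
```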
Mathematics
Algorithms
null
201246
https://en.wikipedia.org/wiki/True%20parrot
True parrot
The true parrots are about 350 species of hook-billed, mostly herbivorous birds forming the superfamily Psittacoidea, one of the three superfamilies in the biological order Psittaciformes (parrots). True parrots are widespread, with species in Mexico, Central and South America, sub-Saharan Africa, India, Southeast Asia, Australia, and eastwards across the Pacific Ocean as far as Polynesia. The true parrots include many of the familiar parrots including macaws, conures, lorikeets, eclectus, Amazon parrots, grey parrot, and budgerigar. Most true parrots are colourful and flighted, with a few notable exceptions. Overview True parrots have a beak with a characteristic curved shape, an upper jaw with slight mobility where it connects with the skull, and a generally upright stance. They also have a large cranial capacity and are one of the most intelligent bird groups. They are good fliers and skillful climbers on branches of trees. Some species can imitate the human voice and other sounds, although they do not have vocal cords; instead they possess a vocal organ at the base of the trachea known as the syrinx. Like most parrots, the Psittacidae are primarily seed eaters. Some variation is seen in the diet of individual species, with fruits, nuts, leaves, and even insects and other animal prey being taken on occasion by some species. The lorikeets are predominantly nectar feeders; many other parrots drink nectar, as well. Most Psittacidae are cavity-nesting birds which form monogamous pair bonds. Distribution and habitat The true parrots are distributed throughout the tropical and subtropical regions of the world, mostly in the Southern Hemisphere, covering many different habitats, from the humid tropical forests to deserts in Australia, India, Southeast Asia, sub-Saharan Africa, and Central and South America; two species, one of them extinct (the Carolina parakeet), formerly occurred in the United States. However, the largest populations are native to Australasia, South America, and Central America. Conservation status Many species are classified as threatened by the International Union for Conservation of Nature (see IUCN Red List of birds), as well as by national and nongovernmental organizations. Trade in birds and other wild animals is governed by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Nearly all parrots are listed on the CITES appendices, with trade in them limited or prohibited. Trapping wild parrots for the pet trade, hunting, habitat loss, and competition from invasive species have diminished wild populations, with parrots being subjected to more exploitation than any other group of birds. Of the animals removed from the wild to be sold, very few survive capture and transport, and those that do often die from poor conditions of captivity, poor diet, and stress. Measures taken to conserve the habitats of some high-profile charismatic species have also protected many of the less charismatic species living in the same ecosystems. About 18 species of parrots have gone extinct since 1500 (see List of extinct birds#Psittaciformes), nearly all in the superfamily Psittacoidea. Taxonomy The parrot family Psittacidae (along with the family Cacatuidae comprising the order Psittaciformes) was traditionally considered to contain two subfamilies, the Psittacinae (typical parrots and allies) and the Loriinae (lories and lorikeets). 
However, the tree of the parrot family has now been reorganized under the superfamily Psittacoidea: the family Psittacidae has been split into three families, the tribes Strigopini and Nestorini have been split out and placed under the superfamily Strigopoidea, and a new monotypic superfamily, Cacatuoidea, has been created to contain the family Cacatuidae. The following classification is based on the most recent proposal, which in turn is based on all the relevant recent findings.
Family Psittacidae, New World and African parrots
 Subfamily Psittacinae: two African genera, Psittacus and Poicephalus
 Subfamily Arinae
  Tribe Arini: 17 genera, and one extinct genus
  Tribe Androglossini: seven genera
  Clade (proposed tribe Amoropsittacini): four genera
  Clade (proposed tribe Forpini): one genus
  (Other tribes): five genera
Family Psittrichasiidae, Indian Ocean island parrots
 Subfamily Psittrichasinae: one species, Pesquet's parrot
 Subfamily Coracopsinae: one genus with several species
Family Psittaculidae, Asian and Australasian parrots, and lovebirds
 Subfamily Platycercinae
  Tribe Pezoporini: ground parrots and allies
  Tribe Platycercini: broad-tailed parrots
 Subfamily Psittacellinae: one genus (Psittacella) with several species
 Subfamily Loriinae
  Tribe Loriini: lories and lorikeets
  Tribe Melopsittacini: one species, the budgerigar
  Tribe Cyclopsittini: fig parrots
 Subfamily Agapornithinae: three genera
 Subfamily Psittaculinae
  Tribe Polytelini: three genera
  Tribe Psittaculini: Asian psittacines
  Tribe Micropsittini: pygmy parrots
Species lists
 Species list sortable alphabetically by common or scientific name
 Species list in taxonomic order
Biology and health sciences
Psittaciformes
Animals
201258
https://en.wikipedia.org/wiki/Cockatoo
Cockatoo
A cockatoo is any of the 21 species of parrots belonging to the family Cacatuidae, the only family in the superfamily Cacatuoidea. Along with the Psittacoidea (true parrots) and the Strigopoidea (large New Zealand parrots), they make up the order Psittaciformes. The family has a mainly Australasian distribution, ranging from the Philippines and the eastern Indonesian islands of Wallacea to New Guinea, the Solomon Islands and Australia. Cockatoos are recognisable by their prominent crests and curved bills. Their plumage is generally less colourful than that of other parrots, being mainly white, grey, or black and often with coloured features in the crest, cheeks, or tail. On average, they are larger than other parrots; however, the cockatiel, the smallest cockatoo species, is medium-sized. The phylogenetic position of the cockatiel remains unresolved, except that it is one of the earliest offshoots of the cockatoo lineage. The remaining species are in two main clades. The five large black-coloured cockatoos of the genus Calyptorhynchus form one branch. The second and larger branch is formed by the genus Cacatua, comprising 12 species of white-plumaged cockatoos and three monotypic genera that branched off earlier, namely the pink and grey galah, the mainly grey gang-gang cockatoo and the large black-plumaged palm cockatoo. Cockatoos prefer to eat seeds, tubers, corms, fruit, flowers, and insects. They often feed in large flocks, particularly when ground-feeding. Cockatoos are monogamous and nest in tree hollows. Some cockatoo species have been adversely affected by habitat loss, particularly from a shortage of suitable nesting hollows after large, mature trees are cleared; conversely, some species have adapted well to human changes and are considered agricultural pests. Cockatoos are popular birds in aviculture, but their needs are difficult to meet. The cockatiel is the easiest cockatoo species to maintain and is by far the most frequently kept in captivity. White cockatoos are more commonly found in captivity than black cockatoos. Illegal trade in wild-caught birds contributes to the decline of some cockatoo species in the wild. Etymology The word cockatoo dates from the 17th century and is derived from Dutch kaketoe, which in turn is from the Indonesian/Malay kakatua. Seventeenth-century variants include cacato, cockatoon and crockadore, while cokato and cocatore were used in the 18th century. The derivation has also been used for the family and generic names Cacatuidae and Cacatua, respectively. In Australian slang or vernacular speech, a person who is assigned to keep watch while others undertake clandestine or illegal activities, particularly gambling, may be referred to as a "cockatoo". Proprietors of small agricultural undertakings are often jocularly or slightly disparagingly referred to as "cocky farmers". Taxonomy The cockatoos were first defined as a subfamily Cacatuinae within the parrot family Psittacidae by English naturalist George Robert Gray in 1840, with Cacatua the first listed and type genus. This group has alternately been treated as either a full family or a subfamily by different authorities. American ornithologist James Lee Peters in his 1937 Check-list of Birds of the World and Sibley and Monroe in 1990 maintained it as a subfamily, while parrot expert Joseph Forshaw classified it as a family in 1973. 
Subsequent molecular studies indicate that the earliest offshoot from the original parrot ancestors were the New Zealand parrots of the family Strigopidae, and following this the cockatoos, now a well-defined group or clade, split off from the remaining parrots, which then radiated across the Southern Hemisphere and diversified into the many species of parrots, parakeets, macaws, lories, lorikeets, lovebirds and other true parrots of the superfamily Psittacoidea. The relationships among various cockatoo genera are largely resolved, although the placement of the cockatiel (Nymphicus hollandicus) at the base of the cockatoos remains uncertain. The cockatiel is alternatively placed basal to all other cockatoo species, as the sister taxon to the black cockatoo species of the genus Calyptorhynchus or as the sister taxon to a clade consisting of the white and pink cockatoo genera as well as the palm cockatoo. The remaining species are within two main clades, one consisting of the black species of the genus Calyptorhynchus while the other contains the remaining species. According to most authorities, the second clade includes the black palm cockatoo (Probosciger), the grey and reddish galah (Eolophus), and the gang-gang cockatoo (Callocephalon), although Probosciger is sometimes placed basal to all other species. The remaining species are mainly white or slightly pinkish and all belong to the genus Cacatua. The genera Eolophus and Cacatua are hypomelanistic. The genus Cacatua is further subdivided into the subgenera Licmetis, commonly known as corellas, and Cacatua, referred to as white cockatoos. Confusingly, the term "white cockatoo" has also been applied to the whole genus. The five cockatoo species of the genus Calyptorhynchus are commonly known as black cockatoos, and are divided into two subgenera—Calyptorhynchus and Zanda. The former group are sexually dichromatic, with the females having prominently barred plumage. The two are also distinguished by differences in the food-begging calls of juveniles. The fossil record of cockatoos is even more limited than that of parrots in general, with only one truly ancient cockatoo fossil known: a species of Cacatua, most probably subgenus Licmetis, found in Early Miocene (16–23 million years ago) deposits of Riversleigh, Australia. Although fragmentary, the remains are similar to the western corella and the galah. In Melanesia, subfossil bones of Cacatua species which apparently did not survive early human settlement have been found on New Caledonia and New Ireland. The bearing of these fossils on cockatoo evolution and phylogeny is fairly limited, although the Riversleigh fossil does allow tentative dating of the divergence of subfamilies. Genera and species There are about 44 different birds in the cockatoo family Cacatuidae including recognized subspecies. 
The current subdivision of this family is as follows:
Subfamily Nymphicinae
 Genus Nymphicus
  Cockatiel, Nymphicus hollandicus (Kerr, 1792)
Subfamily Calyptorhynchinae: black cockatoos
 Genus Calyptorhynchus – black-and-red cockatoos
  Red-tailed black cockatoo, Calyptorhynchus banksii (Latham, 1790) (5 subspecies)
  Glossy black cockatoo, Calyptorhynchus lathami (Temminck, 1807) (3 subspecies)
 Genus Zanda – black-and-yellow/white cockatoos
  Yellow-tailed black cockatoo, Zanda funerea (Shaw, 1794) (2–3 subspecies)
  Carnaby's black cockatoo, Zanda latirostris (Carnaby, 1948)
  Baudin's black cockatoo, Zanda baudinii (Lear, 1832)
Subfamily Cacatuinae
 Tribe Microglossini: one genus with one species, the black palm cockatoo
  Genus Probosciger
   Palm cockatoo, Probosciger aterrimus (Gmelin, 1788) (4 subspecies)
 Tribe Cacatuini: four genera of white, pink and grey species
  Genus Callocephalon
   Gang-gang cockatoo, Callocephalon fimbriatum (Grant, 1803)
  Genus Eolophus
   Galah, Eolophus roseicapilla (Vieillot, 1817) (3 subspecies)
  Genus Cacatua (13 species)
   Subgenus Cacatua – true white cockatoos
    Yellow-crested cockatoo or lesser sulphur-crested cockatoo, Cacatua sulphurea (Gmelin, 1788) (5 subspecies)
    Citron-crested cockatoo, Cacatua citrinocristata (Fraser, 1844)
    Sulphur-crested cockatoo, Cacatua galerita (Latham, 1790) (4 subspecies)
    Blue-eyed cockatoo, Cacatua ophthalmica Sclater, 1864
    White cockatoo or umbrella cockatoo, Cacatua alba (Müller, 1776)
    Salmon-crested cockatoo or Moluccan cockatoo, Cacatua moluccensis (Gmelin, 1788)
   Subgenus Licmetis – corellas
    Long-billed corella, Cacatua tenuirostris (Kuhl, 1820)
    Western corella, Cacatua pastinator (Gould, 1841) (2 subspecies)
    Little corella (also bare-eyed cockatoo), Cacatua sanguinea Gould, 1843 (4 subspecies)
    Tanimbar corella or Goffin's cockatoo, Cacatua goffiniana Roselaar and Michels, 2004
    Solomons cockatoo or Ducorps's cockatoo, Cacatua ducorpsii Pucheran, 1853
    Red-vented cockatoo or Philippine cockatoo, Cacatua haematuropygia (Müller, 1776)
   Subgenus Lophochroa – pink cockatoos
    Pink cockatoo or Major Mitchell's/Leadbeater's cockatoo, Cacatua leadbeateri (Vigors, 1831) (2 subspecies)
Morphology The cockatoos are generally medium to large parrots of stocky build, which vary considerably in length and weight across species; however, one species, the cockatiel, is considerably smaller and slimmer than the others, with much of its modest length consisting of its long pointed tail feathers. The movable headcrest, which is present in all cockatoos, is spectacular in many species; it is raised when the bird lands from flying or when it is aroused. Cockatoos share many features with other parrots, including the characteristic curved beak shape and a zygodactyl foot, with the two middle toes forward and the two outer toes backward. They differ in the presence of an erectile crest and their lack of the Dyck texture feather composition which causes the bright blues and greens seen in true parrots. Like other parrots, cockatoos have short legs, strong claws, a waddling gait and often use their strong bill as a third limb when climbing through branches. They generally have long broad wings used in rapid flight, with high flight speeds recorded for galahs. The members of the genus Calyptorhynchus and larger white cockatoos, such as the sulphur-crested cockatoo and the pink cockatoo, have shorter, rounder wings and a more leisurely flight. Cockatoos have a large bill, which is kept sharp by rasping the two jaws together when resting. 
The bill is complemented by a large muscular tongue which helps manipulate seeds inside the bill so that they can be de-husked before eating. During the de-husking, the lower jaw applies the pressure, the tongue holds the seed in place and the upper jaw acts as an anvil. The eye region of the skull is reinforced to support muscles which move the jaws sideways. The bills of male cockatoos are generally slightly larger than those of their female counterparts, but this size difference is quite marked in the palm cockatoo. The plumage of the cockatoos is less brightly coloured than that of the other parrots, with species generally being either black, grey or white. Many species have smaller areas of colour on their plumage, often yellow, pink and red, usually on the crest or tail. The galah and Major Mitchell's cockatoo are more broadly coloured in pink tones. Several species have a brightly coloured bare area around the eye and face known as a periophthalmic ring; the large red patch of bare skin of the palm cockatoo is the most extensive and covers some of the face, while it is more restricted in some other species of white cockatoo, notably the corellas and blue-eyed cockatoo. The plumage of males and females is similar in most species. The plumage of the female cockatiel is duller than the male, but the most marked sexual dimorphism occurs in the gang-gang cockatoo and the two species of black cockatoos in the subgenus Calyptorhynchus, namely the red-tailed and glossy black cockatoos. The iris colour differs in a few species, being pink or red in the female galah and the pink cockatoo and red-brown in some other female white cockatoo species. The males all have dark brown irises. Cockatoos maintain their plumage with frequent preening throughout the day. They remove dirt and oil and realign feather barbs by nibbling their feathers. They also preen other birds' feathers that are otherwise hard to get at. Cockatoos produce preen-oil from a gland on their lower back and apply it by wiping their plumage with their heads or already oiled feathers. Powder-down is produced by specialised feathers in the lumbar region and distributed by the preening cockatoo all over the plumage. Moulting is very slow and complex. Black cockatoos appear to replace their flight feathers one at a time, their moult taking two years to complete. This process is much shorter in other species, such as the galah and long-billed corella, which each take around six months to replace all their flight feathers. Voice The vocalisations of cockatoos are loud and harsh. They serve a number of functions, including allowing individuals to recognize one another, alerting others of predators, indicating individual moods, maintaining the cohesion of a flock and as warnings when defending nests. The use of calls and number of specific calls varies by species; the Carnaby's black cockatoo has as many as 15 types of call, whereas others, such as the pink cockatoo, have fewer. Some, like the gang-gang cockatoo, are comparatively quiet but do have softer growling calls when feeding. In addition to vocalisations, palm cockatoos communicate over large distances by drumming on a dead branch with a stick. Cockatoo species also make a characteristic hissing sound when threatened. Distribution and habitat Cockatoos have a much more restricted range than the true parrots, occurring naturally only in Australia, Indonesia, the Philippines, and some Pacific regions. 
Eleven of the 21 species exist in the wild only in Australia, while seven species occur only in the islands of the Philippines, Indonesia, Papua New Guinea and the Solomon Islands. No cockatoo species are found in Borneo, despite their presence on nearby Palawan and Sulawesi or many Pacific islands, although fossil remains have been recorded from New Caledonia. Three species occur in both New Guinea and Australia. Some species have widespread distributions, with the galah, for example, occurring over most of Australia, whereas other species have tiny distributions, confined to a small part of the continent, such as the Baudin's black cockatoo of Western Australia or to a small island group, such as the Tanimbar corella, which is restricted to the Tanimbar Islands of Indonesia. Some cockatoos have been introduced accidentally to areas outside their natural range such as New Zealand, Singapore, and Palau, while two Australian corella species have been introduced to parts of the continent where they are not native. Cockatoos occupy a wide range of habitats from forests in subalpine regions to mangroves. However, no species is found in all types of habitat. The most widespread species, such as the galah and cockatiel, are open-country specialists that feed on grass seeds. They are often highly mobile fast flyers and are nomadic. Flocks of birds move across large areas of the inland, locating and feeding on seed and other food sources. Drought may force flocks from more arid areas to move further into farming areas. Other cockatoo species, such as the glossy black cockatoo, inhabit woodlands, rainforests, shrublands and even alpine forests. The red-vented cockatoo inhabits mangroves and its absence from northern Luzon may be related to the lack of mangrove forests there. Forest-dwelling cockatoos are generally sedentary, as the food supply is more stable and predictable. Several species have adapted well to human modified habitats and are found in agricultural areas and even busy cities. Behaviour Cockatoos are diurnal and require daylight to find their food. They are not early risers, instead waiting until the sun has warmed their roosting sites before feeding. All species are generally highly social and roost, forage and travel in colourful and noisy flocks. These vary in size depending on availability of food; in times of plenty, flocks are small and number a hundred birds or less, while in droughts or other times of adversity, they may swell up to contain thousands or even tens of thousands of birds; one record from the Kimberley noted a flock of 32,000 little corellas. Species that inhabit open country form larger flocks than those of forested areas. Some species require roosting sites that are located near drinking sites; other species travel great distances between the roosting and feeding sites. Cockatoos have several characteristic methods of bathing; they may hang upside down or fly about in the rain or flutter in wet leaves in the canopy. Cockatoos have a preferred "footedness" analogous to human handedness. Most species are left-footed with 87–100% of individuals using their left feet to eat, but a few species favor their right foot. Breeding Cockatoos are monogamous breeders, with pair bonds that can last many years. Many birds pair up in flocks before they reach sexual maturity and delay breeding for a year at least. Females breed for the first time anywhere from three to seven years of age and males are often older. 
Sexual maturity is delayed so birds can develop the skills for raising and parenting young, which is prolonged compared with other birds; the young of some species remain with their parents for up to a year. Cockatoos may also display site fidelity, returning to the same nesting sites in consecutive years. Courtship is generally simple, particularly for established pairs, with the black cockatoos alone engaging in courtship feeding. Established pairs do engage in preening each other, but all forms of courtship drop off after incubation begins, possibly due to the strength of the pair-bond. Like most parrots, the cockatoos are cavity nesters, nesting in holes in trees, which they are unable to excavate themselves. These hollows are formed from decay or destruction of wood by branches breaking off, fungi or insects such as termites or even woodpeckers where their ranges overlap. In many places these holes are scarce and the source of competition, both with other members of the same species and with other species and types of animal. In general, cockatoos choose hollows only a little larger than themselves, hence different-sized species nest in holes of corresponding (and different) sizes. If given the opportunity, cockatoos prefer nesting well above the ground and close to water and food. The nesting hollows are lined with sticks, wood chips and branches with leaves. The eggs of cockatoos are oval and initially white, as their location makes camouflage unnecessary. However, they do become discoloured over the course of incubation. They range in size from the largest, in the palm and red-tailed black cockatoos, to the smallest, in the cockatiel. Clutch size varies within the family, with the palm cockatoo and some other larger cockatoos laying only a single egg and the smaller species laying anywhere between two and eight eggs. Food supply also plays a role in clutch size. Some species can lay a second clutch if the first fails. Around 20% of eggs laid are infertile. The cockatoos' incubation and brooding responsibilities may either be undertaken by the female alone in the case of the black cockatoos or shared amongst the sexes as happens in the other species. In the case of the black cockatoos, the female is provisioned by the male several times a day. The young of all species are born covered in yellowish down, bar the palm cockatoo, whose young are born naked. Cockatoo incubation times are dependent on species size, with the smaller cockatiels having a period of around 20 days and the larger Carnaby's black cockatoo incubating its eggs for up to 29 days. The nestling period also varies by species size, with larger species having longer nestling periods. It is also affected by season and environmental factors and by competition with siblings in species with clutch sizes greater than one. Much of what is known about the nestling period of some species is dependent on aviary studies – aviary cockatiels can fledge after 5 weeks and the large palm cockatoos after 11 weeks. During this period, the young become covered in juvenile plumage while remaining in the hollow. Wings and tail feathers are slow to grow initially but more rapid as the primary feathers appear. Nestlings quickly reach about 80–90% of adult weight about two-thirds of the way through this period, plateauing before they leave the hollow; they fledge at this weight with wing and tail feathers still to grow a little before reaching adult dimensions. 
Growth rate of the young, as well as numbers fledged, are adversely impacted by reduced food supply and poor weather conditions. Diet and feeding Cockatoos are versatile feeders and consume a range of mainly vegetable food items. Seeds form a large part of the diet of all species; these are opened with their large and powerful bills. The galahs, corellas and some of the black cockatoos feed primarily on the ground; others feed mostly in trees. The ground-feeding species tend to forage in flocks, which form tight, squabbling groups where seeds are concentrated and dispersed lines where food is more sparsely distributed; they also prefer open areas where visibility is good. The western and long-billed corellas have elongated bills to excavate tubers and roots and the pink cockatoo walks in a circle around the doublegee (Emex australis) to twist out and remove the underground parts. Many species forage for food in the canopy of trees, taking advantage of serotiny (the storage of a large supply of seed in cones or gumnuts by plant genera such as Eucalyptus, Banksia and Hakea), a natural feature of the Australian landscape in dryer regions. These woody fruiting bodies are inaccessible to many species and harvested in the main by parrots, cockatoos and rodents in more tropical regions. The larger cones can be opened by the large bills of cockatoos but are too strong for smaller animals. Many nuts and fruits lie on the end of small branches which are unable to support the weight of the foraging cockatoo, which instead bends the branch towards itself and holds it with its foot. While some cockatoos are generalists taking a wide range of foods, others are specialists. The glossy black cockatoo specialises in the cones of trees of the genus Allocasuarina, preferring a single species, A. verticillata. It holds the cones in its foot and shreds them with its powerful bill before removing the seeds with its tongue. Some species take large numbers of insects, particularly when breeding; in fact the bulk of the yellow-tailed black cockatoo's diet is made up of insects. The large bill is used in order to extract grubs and larvae from rotting wood. The amount of time cockatoos have to spend foraging varies with the season. During times of plenty they may need to feed for only a few hours in the day, in the morning and evening, then spend the rest of the day roosting or preening in trees, but during the winter most of the day may be spent foraging. The birds have increased nutritional requirements during the breeding season, so they spend more time foraging for food during this time. Cockatoos have large crops, which allow them to store and digest food for some time after retiring to a tree. Predators and threats The peregrine falcon and little eagle have been reported taking galahs and the wedge-tailed eagle has been observed killing a sulphur-crested cockatoo. Eggs and nestlings are vulnerable to many hazards. Various species of monitor lizard (Varanus) are able to climb trees and enter hollows. Other predators recorded include the spotted wood owl on Rasa Island in the Philippines; the amethystine python, black butcherbird and rodents including the giant white-tailed rat in Cape York; and brushtail possum on Kangaroo Island. Furthermore, galahs and little corellas competing for nesting space with the glossy black cockatoo on Kangaroo Island have been recorded killing nestlings of the latter species there. 
Severe storms may also flood hollows, drowning the young, and termite or borer activity may lead to the internal collapse of nests. Like other parrots, cockatoos can be afflicted by psittacine beak and feather disease (PBFD). The viral infection causes feather loss and beak malformation and reduces the bird's overall immunity. Particularly prevalent in sulphur-crested cockatoos, little corellas and galahs, it has been recorded in 14 species of cockatoo to date. Although unlikely to significantly impact large, healthy populations of birds in the wild, PBFD may pose a high risk to smaller stressed populations. A white cockatoo and a sulphur-crested cockatoo were found to be infected with the protozoon Haemoproteus and another sulphur-crested cockatoo had the malaria parasite Plasmodium on analysis of faecal samples at Almuñécar ornithological garden in Granada in Spain. Like amazon parrots and macaws, cockatoos frequently develop cloacal papillomas. The relationship with malignancy is unknown, as is the cause, although a parrot papilloma virus has been isolated from a grey parrot with the condition. Social learning Cockatoos have been shown to learn new skills through social interaction. In New South Wales, researchers and citizen scientists were able to track the spread of lid-flipping skills as cockatoos learned from each other to open garbage bins. Bin-opening spread more quickly to neighbouring suburbs than to suburbs further away. In addition, birds in different areas developed their own variants for accomplishing the complex task. Relationship with humans Human activities have had positive effects on some species of cockatoo and negative effects on others. Many species of open country have benefited greatly from anthropogenic changes to the landscape, with the great increase in reliable seed food sources and available water contributing to their survival, as well as their adaptation to a diet including foreign foodstuffs. This benefit appears to be restricted to Australian species, as cockatoos favouring open country outside Australia have not become more abundant. Predominantly forest-dwelling species have suffered greatly from habitat destruction; in the main, they appear to have a more specialised diet and have not been able to incorporate exotic food into their diet. A notable exception is the yellow-tailed black cockatoo in eastern Australia. Pests Several species of cockatoo can be serious agricultural pests. They are sometimes controlled by shooting, poisoning or capture followed by gassing. Non-lethal damage mitigation methods used include scaring, habitat manipulation and the provision of decoy food dumps or sacrifice crops to distract them from the main crop. They can be a nuisance in urban areas due to destruction of property. They maintain their bills in the wild by chewing on wood, but in suburbia, they may chew outdoor furniture, door and window frames; soft decorative timbers such as western redcedar are readily demolished. Birds may also target external wiring and fixtures such as solar water heaters, television antennae and satellite dishes. A business in central Melbourne suffered as sulphur-crested cockatoos repeatedly stripped the silicone sealant from the plate glass windows. Galahs and red-tailed black cockatoos have stripped electrical cabling in rural areas and tarpaulin is targeted elsewhere. Outside Australia, the Tanimbar corella is a pest on Yamdena Island where it raids maize crops. 
In 1995 the Government of the state of Victoria published a report on problems caused by long-billed corellas, sulphur-crested cockatoos and galahs, three species which, along with the little corella, have large and growing populations, having benefited from anthropogenic changes to the landscape. Following the report's findings and publication, these three species were declared unprotected by a Governor in Council Order under certain conditions, and they are allowed to be killed where they are causing serious damage to trees, vineyards, orchards, recreational reserves and commercial crops. Damage covered by the report included not only that to cereal crops, fruit and nut orchards and some kinds of vegetable crops, but also to houses and communications equipment. The little corella is a declared pest of agriculture in Western Australia, where it is an aviculturally introduced species. The birds damage sorghum, maize, sunflower, chickpeas and other crops. They also defoliate amenity trees in parks and gardens, dig for edible roots and corms on sports grounds and race tracks, and chew wiring and household fittings. In South Australia, where flocks can number several thousand birds and the species is listed as unprotected, they are accused of defoliating red gums and other native or ornamental trees used for roosting, damaging tarpaulins on grain bunkers, wiring and flashing on buildings, taking grain from newly seeded paddocks and creating a noise nuisance. Several rare species and subspecies have also been recorded as causing problems. The Carnaby's black cockatoo, a threatened Western Australian endemic, has been considered a pest in pine plantations, where the birds chew off the leading shoots of growing pine trees, resulting in bent trunks and reduced timber value. They are also known to damage nut and fruit crops, and have learnt to exploit canola crops. The Baudin's black cockatoo, also endemic to the south-west of Western Australia, can be a pest in apple and pear orchards, where it destroys the fruit to extract the seeds. Muir's corella, the nominate subspecies of the western corella, is also a declared pest of agriculture in Western Australia, as well as being nationally vulnerable and listed under state legislation as "rare or likely to become extinct". Status and conservation According to the IUCN and BirdLife International, seven species of cockatoo are considered to be vulnerable or worse, and one is considered to be near threatened. Of these, two species—the red-vented cockatoo and the yellow-crested cockatoo—are considered to be critically endangered. The principal threats to cockatoos are habitat loss and the wildlife trade. All cockatoos are dependent on trees for nesting and are vulnerable to their loss; in addition, many species have specialised habitat requirements or live on small islands and have naturally small ranges, making them vulnerable to the loss of these habitats. Cockatoos are popular as pets, and capture and trade have threatened some species; between 1983 and 1990, 66,654 recorded salmon-crested cockatoos were exported from Indonesia, a figure that does not include the number of birds caught for the domestic trade or exported illegally. The capture of many species has subsequently been banned, but the trade continues illegally. Birds are put in crates or bamboo tubing and conveyed on boats out of Indonesia and the Philippines.
Rare species are smuggled out of Indonesia, and common and rare cockatoos alike are smuggled out of Australia; birds are sedated, covered in nylon stockings and packed into PVC tubing, which is then placed in unaccompanied luggage on international flights. Mortality is significant (30%), and eggs, more easily hidden on the bodies of smugglers on flights, are increasingly smuggled instead. Trafficking is thought to be run by organised gangs, who also trade Australian species for overseas species such as macaws coming the other way. All species of cockatoo except the cockatiel are protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which restricts import and export of wild-caught parrots to special licensed purposes. Five cockatoo species (including all subspecies)—the Tanimbar corella (Cacatua goffiniana), red-vented cockatoo (Cacatua haematuropygia), Moluccan cockatoo (Cacatua moluccensis), yellow-crested cockatoo (Cacatua sulphurea) and palm cockatoo (Probosciger aterrimus)—are protected on the CITES Appendix I list. With the exception of the cockatiel, all remaining cockatoo species are protected on the CITES Appendix II list. Aviculture Kept for their appearance, intelligence, and engaging personalities, cockatoos can nonetheless be problematic pets or companion parrots. Generally, they are not good at mimicking human speech, although the little corella is a renowned talker. Wild cockatoos, being social animals, have been known to learn human speech from ex-captive birds that have integrated into a flock. Their care is best provided by those experienced in keeping parrots. Their social needs are difficult to cater for, and they can suffer if kept in a cage on their own for long periods of time. The cockatiel is by far the cockatoo species most frequently kept in captivity. Among U.S. bird keepers who participated in a survey by APPMA in 2003/04, 39% had cockatiels, as opposed to only 3% who had (other) cockatoo species. The white cockatoos are more often encountered in aviculture than the black cockatoos. Black cockatoos are rarely seen in European zoos due to export restrictions on Australian wildlife, but birds seized by governments have been loaned. Cockatoos are often very affectionate with their owner, and at times other people, but can demand a great deal of attention. It has been suggested that cockatoos' need for physical attention from humans may stem from suboptimal rearing techniques: young birds removed from parental care for hand-rearing too early, in the belief that this will produce a more suitable pet, may come to seek out physical contact from humans as a parent substitute. Furthermore, their intense curiosity means they must be given a steady supply of objects to tinker with, chew, dismantle and destroy. Parrots in captivity may suffer from boredom, which can lead to stereotypic behaviour patterns, such as feather-plucking, which is likely to stem from psychological rather than physical causes. Other major drawbacks include their painful bites and their piercing screeches; the salmon-crested and white cockatoos are particular offenders. All cockatoos have a fine powder on their feathers, which may induce allergies in certain people. In general, smaller species such as Goffin's cockatoo and the quieter galah are much easier to keep as pets.
The cockatiel is one of the most popular and easiest parrots to keep as a pet, and many colour mutations are available in aviculture. Larger cockatoos can live 30 to 70 years depending on the species, or occasionally longer, and cockatiels can live for about 20 years. As pets they require a long-term commitment from their owners. Their longevity can be considered a positive trait, as it makes the loss of a pet a rarer event. The oldest cockatoo in captivity was a pink cockatoo named Cookie, residing at Brookfield Zoo in Chicago, which lived to be 83 years old (1933–2016). A salmon-crested cockatoo named King Tut, who resided at the San Diego Zoo, was nearly 69 when he died in 1990, and a palm cockatoo reached 56 in London Zoo in 2000. However, anecdotal reports describe birds of much greater ages. Cocky Bennett of Tom Ugly's Point in Sydney was a celebrated sulphur-crested cockatoo reported to have reached an age of 100 years or more; he had lost his feathers and was naked for much of his life. A palm cockatoo was reported to have reached 80 or 90 years of age in an Australian zoo, and a little corella that was removed from a nest in central Australia in 1904 was reported still alive in the late 1970s. In February 2010, a white cockatoo named Arthur was claimed to be 90 years old; he had lived with a family for generations in Dalaguete, Cebu, before being taken to Cebu City Zoo. Trained cockatoos are sometimes seen in bird shows in zoos. They are generally less motivated by food than other birds; some may respond more to petting or praise than to food. Cockatoos can often be taught to wear a parrot harness, enabling their owners to take them outdoors. Cockatoos have been used in animal-assisted therapy, generally in nursing homes. Cockatoos often have pronounced responses to musical sounds, and numerous videos exist showing the birds dancing to popular music. Research conducted in 2008 with an Eleonora cockatoo named Snowball indicated that this particular individual is indeed capable of beat induction—perceiving human-created music and synchronizing his body movements to the beat. Culture The earliest European depiction of a cockatoo is in the falconry book De arte venandi cum avibus, written by Frederick II, Holy Roman Emperor. The next European depiction of a cockatoo, previously thought to be the earliest, appears in the 1496 painting by Andrea Mantegna titled Madonna della Vittoria. Later examples were painted by the Hungarian artist Jakob Bogdani (1660–1724), who resided in Amsterdam from 1683 and then in England, and cockatoos appeared with numerous other birds in the bird pieces of the Dutch painter Melchior d'Hondecoeter (1636–1695). A cockatoo is the unlucky subject in An Experiment on a Bird in the Air Pump by English artist Joseph Wright of Derby, its fate unclear in the painting. Cockatoos were among the many Australian plants and animals that featured in decorative motifs in Federation architecture of the early 20th century. A visit to a Camden Town pet shop in 1958 inspired English painter William Roberts to paint The Cockatoos, now in the collection of the Tate Gallery. American artist and sculptor Joseph Cornell was known for placing cutout paper cockatoos in his works. The government of the Australian Capital Territory adopted the gang-gang cockatoo as its official faunal emblem on 27 February 1997. The short-lived budget airline Impulse Airlines featured a sulphur-crested cockatoo on its corporate livery (and aeroplanes).
The palm cockatoo, which has a unique beak and face colouration, is used as a symbol by the World Parrot Trust. Two 1970s police dramas featured protagonists with pet cockatoos: in the 1973 film Serpico, Al Pacino's character had a pet white cockatoo, and the television show Baretta featured Robert Blake's character with Fred the Triton cockatoo. The popularity of the latter show saw a corresponding rise in the popularity of cockatoos as pets in the late 1970s. Cockatoos have been used frequently in advertising; a cockatoo appeared in a 'cheeky' (and later toned-down) 2008 advertising campaign for Cockatoo Ridge Wineries. Intelligence A team of scientists from Oxford University, the University of Vienna and the Max Planck Institute conducted tests on ten untrained Tanimbar corellas (Cacatua goffiniana) and found that they were able to solve complex mechanical puzzles.
Biology and health sciences
Psittaciformes
null
201268
https://en.wikipedia.org/wiki/RNA%20polymerase
RNA polymerase
In molecular biology, RNA polymerase (abbreviated RNAP or RNApol), or more specifically DNA-directed/dependent RNA polymerase (DdRP), is an enzyme that catalyzes the chemical reactions that synthesize RNA from a DNA template. Using its intrinsic helicase activity, RNAP locally opens the double-stranded DNA so that one strand of the exposed nucleotides can be used as a template for the synthesis of RNA, a process called transcription. A transcription factor and its associated transcription mediator complex must be attached to a DNA binding site called a promoter region before RNAP can initiate the DNA unwinding at that position. RNAP not only initiates RNA transcription, it also guides the nucleotides into position, facilitates attachment and elongation, has intrinsic proofreading and replacement capabilities, and recognizes termination signals. In eukaryotes, RNAP can build chains as long as 2.4 million nucleotides. RNAP produces RNA that, functionally, is either for protein coding, i.e. messenger RNA (mRNA), or non-coding (so-called "RNA genes"). Examples of four functional types of RNA genes are: transfer RNA (tRNA), which transfers specific amino acids to growing polypeptide chains at the ribosomal site of protein synthesis during translation; ribosomal RNA (rRNA), which is incorporated into ribosomes; micro RNA (miRNA), which regulates gene activity through RNA silencing; and catalytic RNA (ribozymes), which function as enzymatically active RNA molecules. RNA polymerase is essential to life, and is found in all living organisms and many viruses. Depending on the organism, an RNA polymerase can be a protein complex (multi-subunit RNAP) or consist of only one subunit (single-subunit RNAP, ssRNAP), each representing an independent lineage. The former is found in bacteria, archaea, and eukaryotes alike, sharing a similar core structure and mechanism. The latter is found in phages as well as eukaryotic chloroplasts and mitochondria, and is related to modern DNA polymerases. Eukaryotic and archaeal RNAPs have more subunits than bacterial ones do, and are controlled differently. Bacteria and archaea have only one RNA polymerase, while eukaryotes have multiple types of nuclear RNAP, each responsible for synthesis of a distinct subset of RNA (see below). Structure The 2006 Nobel Prize in Chemistry was awarded to Roger D. Kornberg for creating detailed molecular images of RNA polymerase during various stages of the transcription process. In most prokaryotes, a single RNA polymerase species transcribes all types of RNA. The RNA polymerase "core" from E. coli consists of five subunits: two alpha (α) subunits of 36 kDa, a beta (β) subunit of 150 kDa, a beta prime (β′) subunit of 155 kDa, and a small omega (ω) subunit. A sigma (σ) factor binds to the core, forming the holoenzyme. After transcription starts, the factor can unbind and let the core enzyme proceed with its work. The core RNA polymerase complex forms a "crab claw" or "clamp-jaw" structure with an internal channel running along the full length. Eukaryotic and archaeal RNA polymerases have a similar core structure and work in a similar manner, although they have many extra subunits. All RNAPs contain metal cofactors, in particular zinc and magnesium cations, which aid in the transcription process. Function Control of the process of gene transcription affects patterns of gene expression and, thereby, allows a cell to adapt to a changing environment, perform specialized roles within an organism, and maintain basic metabolic processes necessary for survival.
Therefore, it is hardly surprising that the activity of RNAP is complex and highly regulated. In Escherichia coli bacteria, more than 100 transcription factors have been identified which modify the activity of RNAP. RNAP can initiate transcription at specific DNA sequences known as promoters. It then produces an RNA chain that is complementary to the template DNA strand. The process of adding nucleotides to the RNA strand is known as elongation; in eukaryotes, RNAP can build chains as long as 2.4 million nucleotides (the full length of the dystrophin gene). RNAP will preferentially release its RNA transcript at specific DNA sequences encoded at the end of genes, which are known as terminators. Products of RNAP include: messenger RNA (mRNA)—template for the synthesis of proteins by ribosomes; and non-coding RNA or "RNA genes"—a broad class of genes that encode RNA that is not translated into protein. The most prominent examples of RNA genes are transfer RNA (tRNA) and ribosomal RNA (rRNA), both of which are involved in the process of translation. However, since the late 1990s, many new RNA genes have been found, and thus RNA genes may play a much more significant role than previously thought. RNA genes include transfer RNA (tRNA), which transfers specific amino acids to growing polypeptide chains at the ribosomal site of protein synthesis during translation; ribosomal RNA (rRNA), a component of ribosomes; micro RNA, which regulates gene activity; and catalytic RNA (ribozymes), enzymatically active RNA molecules. RNAP accomplishes de novo synthesis. It is able to do this because specific interactions with the initiating nucleotide hold RNAP rigidly in place, facilitating chemical attack on the incoming nucleotide. Such specific interactions explain why RNAP prefers to start transcripts with ATP (followed by GTP, UTP, and then CTP). In contrast to DNA polymerase, RNAP includes helicase activity, so no separate enzyme is needed to unwind DNA. Action Initiation RNA polymerase binding in bacteria involves the sigma factor recognizing the core promoter region containing the −35 and −10 elements (located before the beginning of the sequence to be transcribed) and also, at some promoters, the α subunit C-terminal domain recognizing promoter upstream elements. There are multiple interchangeable sigma factors, each of which recognizes a distinct set of promoters. For example, in E. coli, σ70 is expressed under normal conditions and recognizes promoters for genes required under normal conditions ("housekeeping genes"), while σ32 recognizes promoters for genes required at high temperatures ("heat-shock genes"). In archaea and eukaryotes, the functions of the bacterial general transcription factor sigma are performed by multiple general transcription factors that work together. The RNA polymerase-promoter closed complex is usually referred to as the "transcription preinitiation complex". After binding to the DNA, the RNA polymerase switches from a closed complex to an open complex. This change involves the separation of the DNA strands to form an unwound section of DNA of approximately 13 bp, referred to as the "transcription bubble". Supercoiling plays an important part in polymerase activity because of the unwinding and rewinding of DNA. Because regions of DNA in front of RNAP are unwound, there are compensatory positive supercoils. Regions behind RNAP are rewound and negative supercoils are present.
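The complementarity rule just described (each RNA base pairing with its template DNA base, with uracil in place of thymine) can be illustrated with a short sketch. The following Python fragment is a toy illustration of base-pairing only, not a model of the enzyme or of the transcription bubble; the function name and example sequence are invented for the example.

```python
# Toy sketch of template-directed RNA synthesis (base-pairing only).
# Real RNAP acts on double-stranded DNA inside a transcription bubble;
# here we simply map each template base to its RNA complement.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Return the RNA transcript (5'->3') for a DNA template strand read 3'->5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5.upper())

# The transcript matches the coding (non-template) strand, with U in place of T:
print(transcribe("TACGGT"))  # -> AUGCCA
```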
Promoter escape RNA polymerase then starts to synthesize the initial DNA-RNA heteroduplex, with ribonucleotides base-paired to the template DNA strand according to Watson-Crick base-pairing interactions. As noted above, RNA polymerase makes contacts with the promoter region. However, these stabilizing contacts inhibit the enzyme's ability to access DNA further downstream and thus the synthesis of the full-length product. In order to continue RNA synthesis, RNA polymerase must escape the promoter. It must maintain promoter contacts while unwinding more downstream DNA for synthesis, "scrunching" more downstream DNA into the initiation complex. During the promoter escape transition, RNA polymerase is considered a "stressed intermediate". Thermodynamically, the stress accumulates from the DNA-unwinding and DNA-compaction activities. Once the DNA-RNA heteroduplex is long enough (~10 bp), RNA polymerase releases its upstream contacts and effectively achieves the promoter escape transition into the elongation phase. The heteroduplex at the active center stabilizes the elongation complex. However, promoter escape is not the only outcome. RNA polymerase can also relieve the stress by releasing its downstream contacts, arresting transcription. The paused transcribing complex has two options: (1) release the nascent transcript and begin anew at the promoter, or (2) re-establish a new 3′-OH on the nascent transcript at the active site via RNA polymerase's catalytic activity and recommence DNA scrunching to achieve promoter escape. Abortive initiation, the unproductive cycling of RNA polymerase before the promoter escape transition, results in short RNA fragments of around 9 bp in a process known as abortive transcription. The extent of abortive initiation depends on the presence of transcription factors and the strength of the promoter contacts. Elongation The 17-bp transcriptional complex has an 8-bp DNA-RNA hybrid; that is, 8 base pairs of the RNA transcript are bound to the DNA template strand. As transcription progresses, ribonucleotides are added to the 3′ end of the RNA transcript and the RNAP complex moves along the DNA. The characteristic elongation rates in prokaryotes and eukaryotes are about 10–100 nucleotides per second. Aspartyl (Asp) residues in the RNAP hold on to Mg2+ ions, which, in turn, coordinate the phosphates of the ribonucleotides. The first Mg2+ holds the α-phosphate of the NTP to be added. This allows the nucleophilic attack of the 3′-OH of the RNA transcript, adding another NTP to the chain. The second Mg2+ holds the pyrophosphate of the NTP. The overall reaction equation is: (NMP)n + NTP → (NMP)n+1 + PPi. Fidelity Unlike the proofreading mechanisms of DNA polymerase, those of RNAP have only recently been investigated. Proofreading begins with separation of the mis-incorporated nucleotide from the DNA template. This pauses transcription. The polymerase then backtracks by one position and cleaves the dinucleotide that contains the mismatched nucleotide. In RNA polymerase this occurs at the same active site used for polymerization and is therefore markedly different from DNA polymerase, where proofreading occurs at a distinct nuclease active site. The overall error rate is around 10−4 to 10−6. Termination In bacteria, termination of RNA transcription can be rho-dependent or rho-independent. The former relies on the rho factor, which destabilizes the DNA-RNA heteroduplex and causes RNA release.
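The fidelity figures above lend themselves to a quick back-of-envelope check. The numbers below simply combine the quoted error rate with the 2.4-million-nucleotide dystrophin transcript mentioned earlier; they are an arithmetic illustration, not measured data.

```python
# Expected mis-incorporations per transcript at the error rates quoted above.
transcript_length = 2_400_000  # nucleotides; the dystrophin pre-mRNA length cited earlier

for error_rate in (1e-4, 1e-5, 1e-6):
    expected = transcript_length * error_rate
    print(f"error rate {error_rate:.0e}: ~{expected:.0f} expected errors")
# error rate 1e-04: ~240 expected errors
# error rate 1e-05: ~24 expected errors
# error rate 1e-06: ~2 expected errors
```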
The latter, also known as intrinsic termination, relies on a palindromic region of DNA. Transcribing the region causes the formation of a "hairpin" structure from the RNA transcript looping and binding upon itself. This hairpin structure is often rich in G-C base pairs, making it more stable than the DNA-RNA hybrid itself. As a result, the 8 bp DNA-RNA hybrid in the transcription complex shifts to a 4 bp hybrid. These last 4 base pairs are weak A-U base pairs, and the entire RNA transcript falls off the DNA. Transcription termination in eukaryotes is less well understood than in bacteria, but involves cleavage of the new transcript followed by template-independent addition of adenines at its new 3′ end, in a process called polyadenylation. Other organisms Given that DNA and RNA polymerases both carry out template-dependent nucleotide polymerization, it might be expected that the two types of enzymes would be structurally related. However, X-ray crystallographic studies of both types of enzymes reveal that, other than containing a critical Mg2+ ion at the catalytic site, they are virtually unrelated to each other; indeed, template-dependent nucleotide polymerizing enzymes seem to have arisen independently twice during the early evolution of cells. One lineage led to the modern DNA polymerases and reverse transcriptases, as well as to a few single-subunit RNA polymerases (ssRNAP) from phages and organelles. The other, multi-subunit RNAP lineage formed all of the modern cellular RNA polymerases. Bacteria In bacteria, the same enzyme catalyzes the synthesis of mRNA and non-coding RNA (ncRNA). RNAP is a large molecule. The core enzyme has five subunits (~400 kDa): β′ The β′ subunit is the largest subunit, and is encoded by the rpoC gene. The β′ subunit contains part of the active center responsible for RNA synthesis and contains some of the determinants for non-sequence-specific interactions with DNA and nascent RNA. It is split into two subunits in Cyanobacteria and chloroplasts. β The β subunit is the second-largest subunit, and is encoded by the rpoB gene. The β subunit contains the rest of the active center responsible for RNA synthesis and the rest of the determinants for non-sequence-specific interactions with DNA and nascent RNA. α (αI and αII) Two copies of the α subunit, the third-largest subunit, are present in a molecule of RNAP: αI and αII. Each α subunit contains two domains: αNTD (N-terminal domain) and αCTD (C-terminal domain). αNTD contains determinants for assembly of RNAP. αCTD contains determinants for interaction with promoter DNA, making non-sequence-specific interactions at most promoters and sequence-specific interactions at upstream-element-containing promoters, and contains determinants for interactions with regulatory factors. ω The ω subunit is the smallest subunit. The ω subunit facilitates assembly of RNAP and stabilizes assembled RNAP. In order to bind promoters, the RNAP core associates with the transcription initiation factor sigma (σ) to form the RNA polymerase holoenzyme. Sigma reduces the affinity of RNAP for nonspecific DNA while increasing specificity for promoters, allowing transcription to initiate at correct sites. The complete holoenzyme therefore has 6 subunits: β′, β, αI, αII, ω and σ (~450 kDa). Eukaryotes Eukaryotes have multiple types of nuclear RNAP, each responsible for synthesis of a distinct subset of RNA.
All are structurally and mechanistically related to each other and to bacterial RNAP: RNA polymerase I synthesizes most ribosomal RNA; RNA polymerase II synthesizes messenger RNA precursors and many non-coding RNAs; RNA polymerase III synthesizes transfer RNA, 5S ribosomal RNA and other small RNAs; and plants have two additional nuclear polymerases, IV and V, which produce small RNAs involved in gene silencing. Eukaryotic chloroplasts contain a multi-subunit RNAP ("PEP, plastid-encoded polymerase"). Due to its bacterial origin, the organization of PEP resembles that of current bacterial RNA polymerases: it is encoded by the RPOA, RPOB, RPOC1 and RPOC2 genes on the plastome, which as proteins form the core subunits of PEP, respectively named α, β, β′ and β″. Similar to the RNA polymerase in E. coli, PEP requires the presence of sigma (σ) factors for the recognition of its promoters, which contain the -10 and -35 motifs. Despite the many structural commonalities between plant organellar and bacterial RNA polymerases, PEP additionally requires the association of a number of nucleus-encoded proteins, termed PAPs (PEP-associated proteins), which form essential components closely associated with the PEP complex in plants. Initially, a group of 10 PAPs was identified through biochemical methods, which was later extended to 12 PAPs. Chloroplasts also contain a second, structurally and mechanistically unrelated, single-subunit RNAP ("nucleus-encoded polymerase, NEP"). Eukaryotic mitochondria use POLRMT (in humans), a nucleus-encoded single-subunit RNAP. Such phage-like polymerases are referred to as RpoT in plants. Archaea Archaea have a single type of RNAP, responsible for the synthesis of all RNA. Archaeal RNAP is structurally and mechanistically similar to bacterial RNAP and eukaryotic nuclear RNAP I-V, and is especially closely related, structurally and mechanistically, to eukaryotic nuclear RNAP II. The history of the discovery of the archaeal RNA polymerase is quite recent. The first analysis of the RNAP of an archaeon was performed in 1971, when the RNAP from the extreme halophile Halobacterium cutirubrum was isolated and purified. Crystal structures of RNAPs from Sulfolobus solfataricus and Sulfolobus shibatae set the total number of identified archaeal subunits at thirteen. In archaea, the subunit corresponding to eukaryotic Rpb1 is split into two. There is no homolog to eukaryotic Rpb9 (POLR2I) in the S. shibatae complex, although TFS (a TFIIS homolog) has been proposed as one based on similarity. There is an additional subunit dubbed Rpo13; together with Rpo5 it occupies a space filled by an insertion found in bacterial β′ subunits (residues 1,377–1,420 in Taq). An earlier, lower-resolution study of the S. solfataricus structure did not find Rpo13 and assigned the space to Rpo5/Rpb5. Rpo3 is notable in that it is an iron–sulfur protein. The RNAP I/III subunit AC40 found in some eukaryotes shares similar sequences but does not bind iron. This domain, in either case, serves a structural function. Archaeal RNAP subunits previously used an "RpoX" nomenclature, in which each subunit is assigned a letter in a way unrelated to any other system. In 2009, a new nomenclature based on eukaryotic Pol II subunit "Rpb" numbering was proposed. Viruses Orthopoxviruses and some other nucleocytoplasmic large DNA viruses synthesize RNA using a virally encoded multi-subunit RNAP. They are most similar to eukaryotic RNAPs, with some subunits minified or removed. Exactly which RNAP they are most similar to is a topic of debate. Most other viruses that synthesize RNA use unrelated mechanisms.
Many viruses use a single-subunit DNA-dependent RNAP (ssRNAP) that is structurally and mechanistically related to the single-subunit RNAP of eukaryotic chloroplasts (RpoT) and mitochondria (POLRMT) and, more distantly, to DNA polymerases and reverse transcriptases. Perhaps the most widely studied such single-subunit RNAP is bacteriophage T7 RNA polymerase. ssRNAPs cannot proofread. The B. subtilis prophage SPβ uses YonO, a homolog of the β and β′ subunits of msRNAPs, to form a monomeric (both barrels on the same chain) RNAP distinct from the usual "right hand" ssRNAP. It probably diverged very long ago from the canonical five-subunit msRNAP, before the time of the last universal common ancestor. Other viruses use an RNA-dependent RNAP (an RNAP that employs RNA as a template instead of DNA). This occurs in negative-strand RNA viruses and dsRNA viruses, both of which exist for a portion of their life cycle as double-stranded RNA. However, some positive-strand RNA viruses, such as poliovirus, also contain RNA-dependent RNAP. History RNAP was discovered independently by Charles Loe, Audrey Stevens, and Jerard Hurwitz in 1960. By this time, one half of the 1959 Nobel Prize in Medicine had been awarded to Severo Ochoa for the discovery of what was believed to be RNAP, but which instead turned out to be polynucleotide phosphorylase. Purification RNA polymerase can be isolated by a phosphocellulose column, by glycerol gradient centrifugation, by a DNA column, by an ion chromatography column, or by combinations of these techniques.
Biology and health sciences
Molecular biology
Biology
201359
https://en.wikipedia.org/wiki/Squaring%20the%20circle
Squaring the circle
Squaring the circle is a problem in geometry first proposed in Greek mathematics. It is the challenge of constructing a square with the area of a given circle by using only a finite number of steps with a compass and straightedge. The difficulty of the problem raised the question of whether specified axioms of Euclidean geometry concerning the existence of lines and circles implied the existence of such a square. In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi (π) is a transcendental number. That is, π is not the root of any polynomial with rational coefficients. It had been known for decades that the construction would be impossible if π were transcendental, but that fact was not proven until 1882. Approximate constructions with any given non-perfect accuracy exist, and many such constructions have been found. Despite the proof that it is impossible, attempts to square the circle have been common in pseudomathematics (i.e. the work of mathematical cranks). The expression "squaring the circle" is sometimes used as a metaphor for trying to do the impossible. The term quadrature of the circle is sometimes used as a synonym for squaring the circle. It may also refer to approximate or numerical methods for finding the area of a circle. In general, quadrature or squaring may also be applied to other plane figures. History Methods to calculate the approximate area of a given circle, which can be thought of as a precursor problem to squaring the circle, were known already in many ancient cultures. These methods can be summarized by stating the approximation to π that they produce. In around 2000 BCE, the Babylonian mathematicians used the approximation π ≈ 25/8 = 3.125, and at approximately the same time the ancient Egyptian mathematicians used π ≈ (16/9)² ≈ 3.16. Over 1000 years later, the Old Testament Books of Kings used the simpler approximation π ≈ 3. Ancient Indian mathematics, as recorded in the Shatapatha Brahmana and Shulba Sutras, used several different approximations. Archimedes proved a formula for the area of a circle (A = πr²) and showed that 223/71 < π < 22/7. In Chinese mathematics, in the third century CE, Liu Hui found even more accurate approximations using a method similar to that of Archimedes, and in the fifth century Zu Chongzhi found π ≈ 355/113, an approximation known as Milü. The problem of constructing a square whose area is exactly that of a circle, rather than an approximation to it, comes from Greek mathematics. Greek mathematicians found compass and straightedge constructions to convert any polygon into a square of equivalent area. They used this construction to compare areas of polygons geometrically, rather than by the numerical computation of area that would be more typical in modern mathematics. As Proclus wrote many centuries later, this motivated the search for methods that would allow comparisons with non-polygonal shapes. The first known Greek to study the problem was Anaxagoras, who worked on it while in prison. Hippocrates of Chios attacked the problem by finding a shape bounded by circular arcs, the lune of Hippocrates, that could be squared. Antiphon the Sophist believed that inscribing regular polygons within a circle and doubling the number of sides would eventually fill up the area of the circle (this is the method of exhaustion). Since any polygon can be squared, he argued, the circle can be squared. In contrast, Eudemus argued that magnitudes can be divided up without limit, so the area of the circle would never be used up.
Contemporaneously with Antiphon, Bryson of Heraclea argued that, since larger and smaller circles both exist, there must be a circle of equal area; this principle can be seen as a form of the modern intermediate value theorem. The more general goal of carrying out all geometric constructions using only a compass and straightedge has often been attributed to Oenopides, but the evidence for this is circumstantial. The problem of finding the area under an arbitrary curve, now known as integration in calculus, or quadrature in numerical analysis, was known as squaring before the invention of calculus. Since the techniques of calculus were unknown, it was generally presumed that a squaring should be done via geometric constructions, that is, by compass and straightedge. For example, Newton wrote to Oldenburg in 1676: "I believe M. Leibnitz will not dislike the theorem towards the beginning of my letter pag. 4 for squaring curve lines geometrically". In modern mathematics the terms have diverged in meaning, with quadrature generally used when methods from calculus are allowed, while squaring the curve retains the idea of using only restricted geometric methods. A 1647 attempt at squaring the circle, Opus geometricum quadraturae circuli et sectionum coni decem libris comprehensum by Grégoire de Saint-Vincent, was heavily criticized by Vincent Léotaud. Nevertheless, de Saint-Vincent succeeded in his quadrature of the hyperbola, and in doing so was one of the earliest to develop the natural logarithm. James Gregory, following de Saint-Vincent, attempted another proof of the impossibility of squaring the circle in Vera Circuli et Hyperbolae Quadratura (The True Squaring of the Circle and of the Hyperbola) in 1667. Although his proof was faulty, it was the first paper to attempt to solve the problem using algebraic properties of π. Johann Heinrich Lambert proved in 1761 that π is an irrational number. It was not until 1882 that Ferdinand von Lindemann succeeded in proving more strongly that π is a transcendental number, and by doing so also proved the impossibility of squaring the circle with compass and straightedge. After Lindemann's impossibility proof, the problem was considered to be settled by professional mathematicians, and its subsequent mathematical history is dominated by pseudomathematical attempts at circle-squaring constructions, largely by amateurs, and by the debunking of these efforts. As well, several later mathematicians, including Srinivasa Ramanujan, developed compass and straightedge constructions that approximate the solution accurately in a few steps. Two other classical problems of antiquity, famed for their impossibility, were doubling the cube and trisecting the angle. Like squaring the circle, these cannot be solved by compass and straightedge. However, they have a different character than squaring the circle, in that their solution involves the roots of cubic equations, rather than being transcendental. Therefore, more powerful methods than compass and straightedge constructions, such as neusis construction or mathematical paper folding, can be used to construct solutions to these problems. Impossibility The solution of the problem of squaring the circle by compass and straightedge requires the construction of the number √π, the length of the side of a square whose area equals that of a unit circle. If √π were a constructible number, it would follow from standard compass and straightedge constructions that π would also be constructible.
In 1837, Pierre Wantzel showed that lengths that could be constructed with compass and straightedge had to be solutions of certain polynomial equations with rational coefficients. Thus, constructible lengths must be algebraic numbers. If the circle could be squared using only compass and straightedge, then π would have to be an algebraic number. It was not until 1882 that Ferdinand von Lindemann proved the transcendence of π and so showed the impossibility of this construction. Lindemann's idea was to combine the proof of the transcendence of Euler's number e, shown by Charles Hermite in 1873, with Euler's identity e^(iπ) = −1. This identity immediately shows that π is an irrational number, because a rational power of a transcendental number remains transcendental. Lindemann was able to extend this argument, through the Lindemann–Weierstrass theorem on linear independence of algebraic powers of e, to show that π is transcendental and therefore that squaring the circle is impossible. Bending the rules, by introducing a supplemental tool, allowing an infinite number of compass-and-straightedge operations, or by performing the operations in certain non-Euclidean geometries, makes squaring the circle possible in some sense. For example, Dinostratus' theorem uses the quadratrix of Hippias to square the circle, meaning that if this curve is somehow already given, then a square and circle of equal areas can be constructed from it. The Archimedean spiral can be used for another similar construction. Although the circle cannot be squared in Euclidean space, it sometimes can be in hyperbolic geometry under suitable interpretations of the terms. The hyperbolic plane does not contain squares (quadrilaterals with four right angles and four equal sides), but instead it contains regular quadrilaterals, shapes with four equal sides and four equal angles sharper than right angles. There exist in the hyperbolic plane (countably) infinitely many pairs of constructible circles and constructible regular quadrilaterals of equal area, which, however, are constructed simultaneously. There is no method for starting with an arbitrary regular quadrilateral and constructing the circle of equal area. Symmetrically, there is no method for starting with an arbitrary circle and constructing a regular quadrilateral of equal area, and for sufficiently large circles no such quadrilateral exists. Approximate constructions Although squaring the circle exactly with compass and straightedge is impossible, approximations to squaring the circle can be given by constructing lengths close to π. It takes only elementary geometry to convert any given rational approximation of π into a corresponding compass and straightedge construction, but such constructions tend to be very long-winded in comparison to the accuracy they achieve. After the exact problem was proven unsolvable, some mathematicians applied their ingenuity to finding approximations to squaring the circle that are particularly simple among other imaginable constructions that give similar precision. Construction by Kochański One of many early historical approximate compass-and-straightedge constructions is from a 1685 paper by Polish Jesuit Adam Adamandy Kochański, producing an approximation diverging from π in the 5th decimal place. Although much more precise numerical approximations to π were already known, Kochański's construction has the advantage of being quite simple.
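The chain of reasoning above can be written out compactly. The following display is a sketch of the standard modern argument, stated in today's notation rather than Lindemann's own.

```latex
% Sketch: why the transcendence of pi rules out the construction.
% Hermite-Lindemann: e^a is transcendental for every nonzero algebraic a.
\[
  e^{i\pi} = -1 .
\]
% If \pi were algebraic, then i\pi would be a nonzero algebraic number,
% so e^{i\pi} would be transcendental; but e^{i\pi} = -1 is algebraic,
% a contradiction. Hence \pi is transcendental. Since every constructible
% length is algebraic (Wantzel), neither \pi nor \sqrt{\pi} can be
% constructed, and the circle cannot be squared.
```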
Kochański's construction produces the length √(40/3 − 2√3) ≈ 3.141533 as an approximation to π. In the same work, Kochański also derived a sequence of increasingly accurate rational approximations for π. Constructions using 355/113 Jacob de Gelder published in 1849 a construction based on the approximation π ≈ 355/113. This value is accurate to six decimal places and has been known in China since the 5th century as Milü, and in Europe since the 17th century. Gelder did not construct the side of the square; it was enough for him to find the value 355/113 = 3 + 4²/(7² + 8²), which his construction realizes geometrically. In 1914, Indian mathematician Srinivasa Ramanujan gave another geometric construction for the same approximation. Constructions using the golden ratio An approximate construction by E. W. Hobson in 1913 is accurate to three decimal places. Hobson's construction corresponds to an approximate value of (6/5)(1 + φ) ≈ 3.141640, where φ is the golden ratio, φ = (1 + √5)/2. The same approximate value appears in a 1991 construction by Robert Dixon. In 2022 Frédéric Beatrix presented a geometrographic construction in 13 steps. Second construction by Ramanujan In 1914, Ramanujan gave a construction which was equivalent to taking the approximate value for π to be (9² + 19²/22)^(1/4) = (2143/22)^(1/4), giving eight decimal places of π. He describes the construction of the line segment OS as follows. Incorrect constructions In his old age, the English philosopher Thomas Hobbes convinced himself that he had succeeded in squaring the circle, a claim refuted by John Wallis as part of the Hobbes–Wallis controversy. During the 18th and 19th centuries, the false notions that the problem of squaring the circle was somehow related to the longitude problem, and that a large reward would be given for a solution, became prevalent among would-be circle squarers. In 1851, John Parker published a book, Quadrature of the Circle, in which he claimed to have squared the circle. His method actually produced an approximation of π accurate to six digits. The Victorian-age mathematician, logician, and writer Charles Lutwidge Dodgson, better known by his pseudonym Lewis Carroll, also expressed interest in debunking illogical circle-squaring theories. In one of his diary entries for 1855, Dodgson listed books he hoped to write, including one called "Plain Facts for Circle-Squarers". In the introduction to "A New Theory of Parallels", Dodgson recounted an attempt to demonstrate logical errors to a couple of circle-squarers. Ridicule of circle squaring appears in Augustus De Morgan's book A Budget of Paradoxes, published posthumously by his widow in 1872. Having originally published the work as a series of articles in The Athenæum, he was revising it for publication at the time of his death. Circle squaring declined in popularity after the nineteenth century, and it is believed that De Morgan's work helped bring this about. Even after it had been proved impossible, in 1894, amateur mathematician Edwin J. Goodwin claimed that he had developed a method to square the circle. The technique he developed did not accurately square the circle, and provided an incorrect area of the circle which essentially redefined π as equal to 3.2. Goodwin then proposed the Indiana pi bill in the Indiana state legislature, allowing the state to use his method in education without paying royalties to him. The bill passed with no objections in the state house, but it was tabled and never voted on in the Senate, amid increasing ridicule from the press. The mathematical crank Carl Theodore Heisel also claimed to have squared the circle in his 1934 book, "Behold!
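The accuracy claims for these constructions are easy to verify numerically. The closed forms below follow the usual renderings of each construction (treat them as assumed rather than quoted); the script checks how far each value falls from π.

```python
# Numerical check of the approximate constructions discussed above.
from math import pi, sqrt

phi = (1 + sqrt(5)) / 2  # golden ratio
approximations = {
    "de Gelder / Milu, 355/113":         355 / 113,
    "Kochanski, sqrt(40/3 - 2*sqrt(3))": sqrt(40 / 3 - 2 * sqrt(3)),
    "Hobson, (6/5)*(1 + phi)":           (6 / 5) * (1 + phi),
    "Ramanujan, (2143/22)**(1/4)":       (2143 / 22) ** 0.25,
}
for name, value in approximations.items():
    print(f"{name}: {value:.10f}  |error| = {abs(value - pi):.1e}")
# 355/113 agrees with pi to six decimal places, Kochanski's value diverges
# in the 5th, Hobson's is good to three, and Ramanujan's to eight, as stated.
```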
: the grand problem no longer unsolved: the circle squared beyond refutation." Paul Halmos referred to the book as a "classic crank book." In literature The problem of squaring the circle has been mentioned over a wide range of literary eras, with a variety of metaphorical meanings. Its literary use dates back at least to 414 BC, when the play The Birds by Aristophanes was first performed. In it, the character Meton of Athens mentions squaring the circle, possibly to indicate the paradoxical nature of his utopian city. Dante's Paradise, canto XXXIII, lines 133–135, contains the verse: As the geometer his mind applies / To square the circle, nor for all his wit / Finds the right formula, howe'er he tries (Qual è ’l geométra che tutto s’affige / per misurar lo cerchio, e non ritrova, / pensando, quel principio ond’elli indige). For Dante, squaring the circle represents a task beyond human comprehension, which he compares to his own inability to comprehend Paradise. Dante's image also calls to mind a passage from Vitruvius, famously illustrated later in Leonardo da Vinci's Vitruvian Man, of a man simultaneously inscribed in a circle and a square. Dante uses the circle as a symbol for God, and may have mentioned this combination of shapes in reference to the simultaneous divine and human nature of Jesus. Earlier, in canto XIII, Dante calls out the Greek circle-squarer Bryson as having sought knowledge instead of wisdom. Several works of the 17th-century poet Margaret Cavendish elaborate on the circle-squaring problem and its metaphorical meanings, including a contrast between unity of truth and factionalism, and the impossibility of rationalizing "fancy and female nature". By 1742, when Alexander Pope published the fourth book of his Dunciad, attempts at circle-squaring had come to be seen as "wild and fruitless": Mad Mathesis alone was unconfined, / Too mad for mere material chains to bind, / Now to pure space lifts her ecstatic stare, / Now, running round the circle, finds it square. Similarly, the Gilbert and Sullivan comic opera Princess Ida features a song which satirically lists the impossible goals of the women's university run by the title character, such as finding perpetual motion. One of these goals is "And the circle – they will square it / Some fine day." The sestina, a poetic form first used in the 12th century by Arnaut Daniel, has been said to metaphorically square the circle in its use of a square number of lines (six stanzas of six lines each) with a circular scheme of six repeated words; this form has been described as invoking a symbolic meaning in which the circle stands for heaven and the square stands for the earth. A similar metaphor was used in "Squaring the Circle", a 1908 short story by O. Henry, about a long-running family feud. In the title of this story, the circle represents the natural world, while the square represents the city, the world of man. In later works, circle-squarers such as Leopold Bloom in James Joyce's novel Ulysses and Lawyer Paravant in Thomas Mann's The Magic Mountain are seen as sadly deluded or as unworldly dreamers, unaware of the problem's mathematical impossibility and making grandiose plans for a result they will never attain.
Mathematics
Euclidean geometry
null
201360
https://en.wikipedia.org/wiki/Eomaia
Eomaia
Eomaia ("dawn mother") is a genus of extinct fossil mammals containing the single species Eomaia scansoria, discovered in rocks that were found in the Yixian Formation, Liaoning Province, China, and dated to the Barremian Age of the Lower Cretaceous about . The single fossil specimen of this species is in length and virtually complete. An estimate of the body weight is . It is exceptionally well-preserved for a 125-million-year-old specimen. Although the fossil's skull is squashed flat, its teeth, tiny foot bones, cartilages and even its fur are visible. Description The Eomaia fossil shows clear traces of hair. However, this is not the earliest clear evidence of hair in the mammalian lineage, as fossils of Volaticotherium, and the docodont Castorocauda, discovered in rocks dated to about , also have traces of fur. Eomaia scansoria possessed several features in common with placental mammals that distinguish them from metatherians, the group that includes modern marsupials. These include an enlarged malleolus ("little hammer") at the bottom of the tibia (the larger of the two shin bones), a joint between the first metatarsal bone and the medial cuneiform bone in the foot which is offset further back than the joint between the second metatarsal and intermediate cuneiform bones (in metatherians these joints are level with each other), as well as various features of jaws and teeth. However, E. scansoria is not a true placental mammal as it lacks some features that are specific to placentals. These include the presence of a malleolus at the bottom of the fibula, the smaller of the two shin bones, a complete mortise and tenon upper ankle joint, where the rearmost bones of the foot fit into a socket formed by the ends of the tibia and fibula, and an atypical ancestral eutherian dental formula of . Eomaia had five upper and four lower incisors (much more typical for metatherians) and five premolars to three molars. Placental mammals have only up to three incisors on each top and bottom and four premolars to three molars, but the premolar/molar proportion is similar to placentals. Eomaia, like other early mammals and living marsupials, had a narrow pelvic outlet suggesting small undeveloped neonates requiring extensive nurturing. Epipubic bones extend forwards from the pelvis; these are not found in any placental, but are found in all other mammals, including non-placental eutherians, marsupials, monotremes and other Mesozoic mammals as well as in the cynodont therapsids that are closest to mammals. Their function is to stiffen the body during locomotion. This stiffening would be harmful in pregnant placentals, whose abdomens need to expand. Classification The discoverers of Eomaia claimed that, on the basis of 268 characters sampled from all major Mesozoic mammal clades and principal eutherian families of the Cretaceous, Eomaia could be placed at the root of the eutherian "family tree" along with Murtoilestes and Prokennalestes. This initial classification scheme is summarized below. In 2013, a much larger study of mammal relationships (including fossil species) was published by O'Leary et al. The study, which examined 4541 anatomical characters of 86 mammal species (including Eomaia scansoria), found "100% jackknife support that Eomaia falls outside of Eutheria as a stem taxon to Theria", and so could not be considered a placental or a eutherian as previously hypothesized. The results of this study are summarized in the cladogram below. The 2013 study by O'Leary et al. 
is part of a debate about the age of origin of placental mammals, and in all trees published in that paper Eomaia fell outside Theria (i.e., debates about the findings of O'Leary et al. have not centered on the position of Eomaia). Meng (2014), who was a co-author on the O'Leary et al. (2013) paper, subsequently referred to Eomaia as a eutherian but provided no analysis to support this claim. Gheerbrant et al. (2014) mentioned Eomaia in a list of Cretaceous taxa that represented "the primitive eutherian condition" but provided no analytical evidence for this claim; a similar claim was repeated by Sole et al. (2014), again without analytical support. A 2023 cladistic study again recovered Eomaia as a basal eutherian.
Biology and health sciences
Stem-mammals
Animals
201400
https://en.wikipedia.org/wiki/Hindu%20calendar
Hindu calendar
The Hindu calendar, also called Panchanga, is one of various lunisolar calendars traditionally used in the Indian subcontinent and Southeast Asia, with further regional variations for social and Hindu religious purposes. These calendars adopt a similar underlying concept for timekeeping, based on the sidereal year for the solar cycle and adjustment of lunar cycles every three years, but differ in their relative emphasis on the Moon cycle or the Sun cycle, in the names of the months, and in when they consider the New Year to start. Of the various regional calendars, the most studied and best known Hindu calendars are the Shalivahana Shaka (based on King Shalivahana, and also the basis of the Indian national calendar), found in the Deccan region of Southern India, and the Vikram Samvat (Bikrami), found in Nepal and the North and Central regions of India – both of which emphasize the lunar cycle. Their new year starts in spring. In regions such as Tamil Nadu and Kerala, the solar cycle is emphasized; these systems are called the Tamil calendar (though the Tamil calendar uses month names like those of the Hindu calendar) and the Malayalam calendar, and they have origins in the second half of the 1st millennium CE. A Hindu calendar is sometimes referred to as Panchangam (पञ्चाङ्गम्), which is also known as Panjika in Eastern India. The conceptual design of the ancient Hindu calendar is also found in the Babylonian calendar, the Chinese calendar, and the Hebrew calendar, but differs from the Gregorian calendar. Unlike the Gregorian calendar, which adds additional days to the month to adjust for the mismatch between twelve lunar cycles (354 lunar days) and approximately 365 solar days, the Hindu calendar maintains the integrity of the lunar month, but inserts an extra full month, once every 32–33 months, to ensure that the festivals and crop-related rituals fall in the appropriate season. The Hindu calendars have been in use in the Indian subcontinent since Vedic times, and remain in use by Hindus all over the world, particularly to set Hindu festival dates. Early Buddhist communities of India adopted the ancient Vedic calendar, later the Vikrami calendar, and then local Buddhist calendars. Buddhist festivals continue to be scheduled according to a lunar system. The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar. Similarly, the ancient Jain traditions have followed the same lunisolar system as the Hindu calendar for festivals, texts and inscriptions. However, the Buddhist and Jain timekeeping systems have attempted to use the lifetimes of the Buddha and the Mahavira as their reference points. The Hindu calendar is also important to the practice of Hindu astrology and the zodiac system. It is also employed for observing the auspicious days of deities and occasions of fasting, such as Ekadashi. Origins The Vedic culture developed a sophisticated timekeeping methodology and calendars for Vedic rituals, and timekeeping as well as the nature of solar and Moon movements are mentioned in Vedic texts. For example, Kaushitaki Brahmana chapter 19.3 mentions the shift in the relative location of the Sun towards the north for 6 months, and towards the south for 6 months. Timekeeping was important to Vedic rituals, and Jyotisha was the Vedic-era field of tracking and predicting the movements of astronomical bodies in order to keep time, so as to fix the day and time of these rituals.
This study is one of the six ancient Vedangas, or ancillary sciences connected with the Vedas – the scriptures of Vedic Sanatan Sanskriti. Yukio Ohashi states that this Vedanga field developed from actual astronomical studies in the ancient Vedic period. The texts of Vedic Jyotisha sciences were translated into the Chinese language in the 2nd and 3rd centuries CE, and the Rigvedic passages on astronomy are found in the works of Zhu Jiangyan and Zhi Qian. According to Subhash Kak, the beginning of the Hindu calendar was much earlier. He cites Greek historians describing Maurya kings referring to a calendar which originated in 6676 BCE, known as the Saptarsi calendar. The Vikrami calendar is named after king Vikramaditya and starts in 57 BCE. Texts Hindu scholars kept precise time by observing and calculating the cycles of Surya (the Sun), the Moon and the planets. These calculations about the Sun appear in various astronomical texts in Sanskrit, such as the 5th-century Aryabhatiya by Aryabhata, the 6th-century Romaka by Latadeva and Panca Siddhantika by Varahamihira, the 7th-century Khandakhadyaka by Brahmagupta and the 8th-century Sisyadhivrddida by Lalla. These texts present Surya and various planets and estimate the characteristics of the respective planetary motion. Other texts, such as the Surya Siddhanta, dated to have been completed sometime between the 5th century and 10th century, present their chapters on various deified planets with stories behind them. The manuscripts of these texts exist in slightly different versions. They present Surya, planet-based calculations and Surya's relative motion to Earth. These vary in their data, suggesting that the texts were open and revised over their lives. For example, 1st-millennium CE Hindu scholars calculated the sidereal length of the year from their astronomical studies, with slightly different results. The Hindu texts used the lunar cycle for setting months and days, but the solar cycle to set the complete year. This system is similar to the Jewish and Babylonian ancient calendars, creating the same challenge of accounting for the mismatch between the nearly 354 lunar days in twelve months and the over 365 solar days in a year. They tracked the solar year by observing the entrance and departure of Surya (the sun, at sunrise and sunset) in the constellations formed by stars in the sky, which they divided into 12 intervals of 30 degrees each. Like other ancient human cultures, Hindus innovated a number of systems, of which intercalary months became the most used, that is, adding another month every 32.5 months on average. As their calendar keeping and astronomical observations became more sophisticated, the Hindu calendar became more sophisticated, with complex rules and greater accuracy. According to Scott Montgomery, the Siddhanta tradition at the foundation of Hindu calendars predates the Christian era; it once had 18 texts, of which only 5 have survived into the modern era. These texts provide specific information and formulae on the motions of the Sun, Moon and planets, to predict their future relative positions, equinoxes, rise and set, with corrections for prograde and retrograde motions, as well as parallax. These ancient scholars attempted to calculate their time to the accuracy of a truti (29.63 microseconds). In their pursuit of accurate tracking of relative movements of celestial bodies for their calendar, they had computed the mean diameter of the Earth, which was very close to the actual 12,742 km (7,918 mi).
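The arithmetic behind the intercalary month mentioned above is simple to reproduce. The mean month and year lengths below are modern standard values, assumed for illustration rather than taken from any Hindu text.

```python
# Why an extra month is needed roughly every 32.5 lunar months.
synodic_month = 29.530589   # mean days, new moon to new moon (modern value)
sidereal_year = 365.256363  # mean days, the year length the calendars track

lunar_year = 12 * synodic_month                  # ~354.37 days
shortfall_per_year = sidereal_year - lunar_year  # ~10.89 days per year

# Years until the accumulated shortfall amounts to one whole lunar month:
years_per_extra_month = synodic_month / shortfall_per_year  # ~2.71 years
print(round(12 * years_per_extra_month, 1))  # -> 32.5 lunar months between intercalations
```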
Hindu calendars were refined during the Gupta era by the astronomers Āryabhaṭa and Varāhamihira in the 5th to 6th century. These, in turn, were based on the astronomical tradition of Vedāṅga Jyotiṣa, which in the preceding centuries had been standardised in a number of (non-extant) works known as Sūrya Siddhānta. Regional diversification took place in the medieval period. The astronomical foundations were further developed in the medieval period, notably by Bhāskara II (12th century). Balinese Hindu calendar Hinduism and Buddhism were the prominent religions of southeast Asia in the 1st millennium CE, prior to the Islamic conquest that started in the 14th century. The Hindus prevailed in Bali, Indonesia, and they have two types of Hindu calendar. One is the 210-day based Pawukon calendar, which is likely a pre-Hindu system; the other is similar to the lunisolar calendar system found in South India and is called the Balinese saka calendar, which uses Hindu methodology. The names of the months and festivals of Balinese Hindus are, for the most part, different, though the significance and legends have some overlap. Astronomical basis The Hindu calendar is based on a geocentric model of the Solar System. A large part of this calendar is defined based on the movement of the Sun and the Moon around the Earth (saura māna and cāndra māna respectively). Furthermore, it includes synodic, sidereal, and tropical elements. Many variants of the Hindu calendar have been created by including and excluding these elements (solar, lunar, lunisolar etc.) and are in use in different parts of India. Year: Samvat Samvat refers to the era of the several Hindu calendar systems in Nepal and India, in a manner similar to the Christian era. Several samvat are found in historic Buddhist, Hindu and Jain texts and epigraphy, of which three are most significant: the Vikrama era, the Old Shaka era and the Shaka era of 78 CE. Vikram Samvat (Bikram Sambat): A northern Indian almanac which started in 57 BCE, also called the Vikrama Era. It is related to the Bikrami calendar, and is apocryphally linked to Vikramaditya. The year starts from the month of Baishakh / Vaishakha. This system is common in epigraphic evidence from the northern, western, central and eastern Indian subcontinent, particularly after the early centuries of the 1st millennium CE. Shaka Samvat: There are two Shaka era systems in scholarly use. One is called the Old Shaka Era; its epoch is uncertain and a subject of dispute among scholars, probably falling somewhere in the 1st millennium BCE, because ancient Buddhist, Jain and Hindu inscriptions and texts use it. The second is called the Saka Era of 78 CE, or simply the Saka Era, a system that is common in epigraphic evidence from southern India. Saka era of Southeast Asia: The Hindu calendar system in Indonesia is attributed to the legend of Hindus arriving with the sage Aji Saka in 1st-century Java, in March 78 CE. Numerous ancient and medieval era texts and inscriptions found in the Indonesian islands use this reference year. In mainland southeast Asia, the earliest verifiable use of Hindu Saka methodology in inscriptions is marked Saka 533 in Angkor Borei, which corresponds to 611 CE, while the Kedukan Bukit inscription in Sumatra, containing three dates in Saka 604 (682 CE), is the earliest known use of the Shaka era in the Indonesian islands. 
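The era arithmetic implied by these epochs is a simple offset, though any real conversion must respect that each era's year begins at its own lunisolar new year rather than on 1 January. A minimal sketch in Python (the function names are illustrative, not from any standard library):

```python
# Rough epoch arithmetic for the eras discussed above. This ignores the
# months-long offset caused by each era's year starting at its own new
# year, so results can be off by one for part of the Gregorian year.
def to_vikram_samvat(ce_year: int) -> int:
    return ce_year + 57   # Vikrama era epoch: 57 BCE

def to_shaka_era(ce_year: int) -> int:
    return ce_year - 78   # Shaka era epoch: 78 CE

print(to_shaka_era(611))       # 533, matching the Angkor Borei inscription
print(to_vikram_samvat(2024))  # 2081
```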
However, these inscriptions only set the floruit for the use of the Shaka era in these places; the Hindu calendar likely existed in southeast Asia before these dates, given its use in important monuments. Further, the Hindu calendar system remained popular among the Hindus through to the 15th century, and thereafter in Bali. Indian national calendar (modern): combines many Hindu calendars into one official standardized calendar, though the old ones remain in use. Months Solar month and seasons The Hindu calendar divides the zodiac into twelve divisions called rāśi ("group"). The Sun appears to move around the Earth through the different divisions/constellations in the sky throughout the year, which is in reality caused by the Earth revolving around the Sun. The rāśis span 30° each and are named for constellations found in the zodiac. The time taken by the Sun to transit through a rāśi is a solar month, whose name is identical to the name of the rāśi. In practice, solar months are mostly referred to as rāśi (not months). The solar months are named differently in different regional calendars. While the Malayalam calendar broadly retains the phonetic Sanskrit names, the Bengali and Tamil calendars repurpose the Sanskrit lunar month names (Chaitra, Vaishaka etc.) as follows: the Tamil calendar replaces Mesha, Vrisha etc. with Chithirai, Vaigasi etc.; the Bengali calendar is similar to the Tamil calendar except that it starts the year with Boiśākh (instead of Choitrô), followed by Jyoisthô etc. The Assamese and Odia calendars are structured the same way. Each rāśi thus has equivalent names in the Bengali, Malayalam and Tamil calendars, and corresponds approximately to particular Hindu seasons and Gregorian months. The names of the solar months are also used in the Darian calendar for the planet Mars. Lunar months Lunar months are defined based on lunar cycles, i.e. the regular occurrence of the new moon and full moon and the intervening waxing and waning phases of the moon. Paksha A lunar month contains two fortnights called pakṣa (पक्ष, literally "side"). One fortnight is the bright, waxing half, in which the moon grows and which ends in the full moon. This is called "Gaura Paksha" or Shukla Paksha. The other half is the darkening, waning fortnight, which ends in the new moon. This is called "Vadhya Paksha" or Krishna Paksha. The Hindu festivals typically fall either on or the day after the full moon night or the darkest night (amavasya, अमावास्या), except for some associated with Krishna, Durga or Rama. The lunar months of the hot summer and the busy major cropping-related part of the monsoon season typically do not schedule major festivals. Amānta and Purnimānta systems Two traditions have been followed in the Indian subcontinent with respect to lunar months: the amānta tradition, which ends the lunar month on the new moon day (similar to the Islamic calendar), and the purnimānta tradition, which ends it on the full moon day. As a consequence, in the amānta tradition, Shukla paksha precedes Krishna paksha in every lunar month, whereas in the purnimānta tradition, Krishna paksha precedes Shukla paksha in every lunar month. As a result, a Shukla paksha will always belong to the same month in both traditions, whereas a Krishna paksha will always be associated with different (but succeeding) months in each tradition. 
The amānta (also known as Amāvasyānta or Mukhyamana) tradition is followed by most Indian states that have a peninsular coastline (except Assam, West Bengal, Odisha, Tamil Nadu and Kerala, which use their own solar calendars). These states are Gujarat, Maharashtra, Goa, Karnataka, Andhra Pradesh and Telangana. Nepal and most Indian states north of the Vindhya mountains follow the purnimānta (or Gaunamana) tradition. The purnimānta tradition was followed in the Vedic era. It was replaced with the amānta tradition as the Hindu calendar system prior to the 1st century BCE, but the purnimānta tradition was restored in 57 BCE by king Vikramaditya, who wanted to return to the Vedic roots. The presence of this system is one of the factors considered in dating ancient Indian manuscripts and epigraphical evidence that have survived into the modern era. The two traditions of the amānta and purnimānta systems have led to alternate ways of dating any festival or event that occurs in a Krishna paksha in the historic Hindu, Buddhist or Jain literature, and in contemporary regional literature or festival calendars. For example, the Hindu festival of Maha Shivaratri falls on the fourteenth lunar day of Magha's Krishna paksha in the amānta system, while the exact same day is expressed as the fourteenth lunar day of Phalguna's Krishna paksha in the purnimānta system. Both lunisolar calendar systems are equivalent ways of referring to the same date, and they continue to be in use in different regions, though the purnimānta system is now typically assumed as implied in modern Indology literature if not otherwise specified. List The names of the Hindu months vary by region. Those Hindu calendars based on the lunar cycle are generally phonetic variants of each other, and those based on the solar cycle are generally variants of each other too, suggesting that the timekeeping knowledge travelled widely across the Indian subcontinent in ancient times. During each lunar month, the Sun transits into a sign of the zodiac (sankranti). The lunar month in which the Sun transits into Mesha is named Chaitra and designated as the first month of the lunar year. A few major regional calendars are summarized in the section on regional variants below. Corrections between lunar and solar months Twelve Hindu mas (māsa, lunar month) are equal to approximately 354 days, while the length of a sidereal (solar) year is about 365 days. This creates a difference of about eleven days, which is offset every (29.53/10.89) ≈ 2.71 years, or approximately every 32.5 months (a short check of this arithmetic follows below). Purushottam Maas or Adhik Maas is an extra month that is inserted to keep the lunar and solar calendars aligned. The twelve months are subdivided into six lunar seasons timed with the agriculture cycles, the blooming of natural flowers, the fall of leaves, and the weather. To account for the mismatch between the lunar and solar calendars, Hindu scholars adopted intercalary months, in which a particular month was simply repeated. The choice of this month was not random; it was timed to sync the two calendars back to the cycle of agriculture and nature. The repetition of a month created the problem of scheduling festivals, weddings and other social events without repetition and confusion. This was resolved by declaring one month as Shudha (pure, clean, regular, proper, also called the Deva month) and the other as Mala or Adhika (extra, unclean and inauspicious, also called the Asura month). 
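The roughly 32.5-month interval quoted above can be checked from the mean lengths of the sidereal year and the synodic month. A back-of-the-envelope sketch in Python; the constants are standard mean astronomical values, assumed here rather than taken from the text:

```python
# How long does the ~11-day annual lunar shortfall take to accumulate
# to one full lunar month (the trigger for an Adhika Maas)?
SIDEREAL_YEAR = 365.2564   # days, mean
SYNODIC_MONTH = 29.5306    # days, mean

lunar_year = 12 * SYNODIC_MONTH            # ~354.37 days
annual_gap = SIDEREAL_YEAR - lunar_year    # ~10.89 days per year

years = SYNODIC_MONTH / annual_gap         # gap reaches one lunar month
print(f"every {years:.2f} years, i.e. about {12 * years:.1f} months")
# -> every 2.71 years, i.e. about 32.5 months
```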
The Hindu mathematicians who calculated the best way to adjust the two years over the long periods of a yuga (era, with tables calculating thousands of years) determined that the best means to intercalate the months is to time the intercalary months on a 19-year cycle, similar to the Metonic cycle used in the Hebrew calendar. This intercalation is generally adopted in the 3rd, 5th, 8th, 11th, 14th, 16th and 19th years of this cycle. Further, the complex rules rule out the repetition of the Mārgaśīrṣa (also called Agrahayana), Pausha and Magha lunar months. The historic Hindu texts are not consistent on these rules, with competing ideas flourishing in the Hindu culture. Rare corrections The Hindu calendar makes further rare adjustments, over a cycle of centuries, where a certain month is considered a kshaya month (dropped). This occurs because of the complexity of the relative lunar, solar and earth movements. Underhill (1991) describes this part of Hindu calendar theory: "when the sun is in perigee, and a lunar month being at its longest, if the new moon immediately precedes a sankranti, then the first of the two lunar months is deleted (called nija or kshaya)." This, for example, happened in the year 1 BCE, when there was no new moon between Makara sankranti and Kumbha sankranti, and the month of Pausha was dropped. Day Just like the months, the Hindu calendar has two measures of a day, one based on the lunar movement and the other on the solar. The solar (saura) day or civil day, called divasa, has been what most Hindus traditionally use; it is easy and empirical to observe, with or without a clock, and is defined as the period from one sunrise to another. The lunar day is called tithi, and this is based on complicated measures of lunar movement. A lunar day or tithi may, for example, begin in the middle of an afternoon and end the next afternoon. Neither of these days corresponds directly to a mathematical measure for a day, such as equal 24 hours of a solar year, a fact that the Hindu calendar scholars knew, but the system of divasa was convenient for the general population. The tithi have been the basis for timing rituals and festivals, while divasa is used for everyday purposes. The Hindu calendars adjust the mismatch between divasa and tithi using a methodology similar to that for the solar and lunar months. A tithi is technically defined in Vedic texts, states John E. Cort, as "the time required by the combined motions of the Sun and Moon to increase (in a bright fortnight) or decrease (in a dark fortnight) their relative distance by twelve degrees of the zodiac" (the sketch at the end of this article illustrates this rule). These motions are measured using a fixed map of the celestial zodiac as reference, and given the elliptical orbits, the duration of a tithi varies between 21.5 and 26 hours, states Cort. However, in the Indian tradition, the general population's practice has been to treat a tithi as a solar day between one sunrise and the next. A lunar month has 30 tithi. The technical standard makes each tithi contain a different number of hours, but helps the overall integrity of the calendar. Given the variation in the length of a solar day with the seasons, and the Moon's relative movements, the start and end times for a tithi vary over the seasons and the years, and the tithi are periodically adjusted to sync with divasa through intercalation. Weekday/Vāsara Vāsara refers to the weekdays in Sanskrit; it is also rendered as Vara and used as a suffix. The correspondence between the names of the week in the Hindu and other Indo-European calendars is exact. 
This alignment of names probably took place sometime during the 3rd century CE. The weekday of a Hindu calendar has been symmetrically divided into 60 ghatika; each ghatika (24 minutes) is divided into 60 pala, each pala (24 seconds) is subdivided into 60 vipala, and so on. The term -vāsara is often realised as vāra or vaar in Sanskrit-derived and influenced languages. There are many variations of the names in the regional languages, mostly using alternate names of the celestial bodies involved. Five limbs of time The complete Vedic calendars contain five angas or parts of information: the lunar day (tithi), the solar day (divasa), the asterism (naksatra), the planetary joining (yoga) and the astronomical period (karanam). This structure gives the calendar the name Panchangam. The first two are discussed above. Yoga The Sanskrit word yoga means "union, joining, attachment", but in the astronomical context it refers to information derived from the longitudes of the Sun and the Moon. The longitude of the Sun and the longitude of the Moon are added, and normalised to a value between 0° and 360° (if greater than 360, one subtracts 360). This sum is divided into 27 parts. Each part equals 800' (where ' is the symbol of the arcminute, 1/60 of a degree). These parts are called the yogas. They are labelled: Viṣkambha, Prīti, Āyuśmān, Saubhāgya, Śobhana, Atigaṇḍa, Sukarma, Dhrti, Śūla, Gaṇḍa, Vṛddhi, Dhruva, Vyāghatā, Harṣaṇa, Vajra, Siddhi, Vyatipāta, Variyas, Parigha, Śiva, Siddha, Sādhya, Śubha, Śukla, Brahma, Māhendra and Vaidhṛti. Again, minor variations may exist. The yoga that is active during sunrise of a day is the prevailing yoga for the day. Karaṇa A karaṇa is half of a tithi. To be precise, a karaṇa is the time required for the angular distance between the Sun and the Moon to increase in steps of 6°, starting from 0°. (Compare with the definition of a tithi; see also the sketch at the end of this article.) Since the tithis are 30 in number, and since 1 tithi = 2 karaṇas, one would logically expect there to be 60 karaṇas. But there are only 11 such karaṇas, which fill up those 60 slots to accommodate the 30 tithis. There are actually 4 "fixed" (sthira) karaṇas and 7 "repeating" (cara) karaṇas. The 4 "fixed" karaṇas are: Śakuni (शकुनि), Catuṣpāda (चतुष्पाद), Nāga (नाग) and Kiṃstughna (किंस्तुघ्न). The 7 "repeating" karaṇas are: Vava or Bava (बव), Valava or Bālava (बालव), Kaulava (कौलव), Taitila or Taitula (तैतिल), Gara or Garaja (गरज), Vaṇija (वणिज) and Viṣṭi (Bhadra) (भद्रा). The first half of the 1st tithi (of Śukla Pakṣa) is always the Kiṃstughna karaṇa; hence this karaṇa is "fixed". Next, the 7 repeating karaṇas repeat eight times to cover the next 56 half-tithis; these are the "repeating" (cara) karaṇas. The 3 remaining half-tithis take the remaining "fixed" karaṇas in order, and these too are "fixed" (sthira). Thus one gets 60 karaṇas from those 11 preset karaṇas. The Vedic day begins at sunrise. The karaṇa at sunrise of a particular day is the prevailing karaṇa for the whole day. Nakshatra Nakshatras are divisions of the ecliptic, each 13° 20', starting from 0° Aries. Festival calendar: Solar and Lunar dates Many holidays in the Hindu, Buddhist and Jaina traditions are based on the lunar cycles in the lunisolar timekeeping with foundations in the Hindu calendar system. A few holidays, however, are based on the solar cycle, such as Vaisakhi, Pongal and those associated with Sankranti. The dates of the lunar-cycle-based festivals vary significantly against the Gregorian calendar, at times by several weeks. 
The ancient Hindu festivals based on the solar cycle almost always fall on the same Gregorian date every year; if they vary in an exceptional year, it is by one day. Regional variants The Hindu Calendar Reform Committee, appointed in 1952, identified more than thirty well-developed calendars in use across different parts of India. Variants include the lunar-emphasizing Vikrama and Shalivahana calendars, as well as the solar-emphasizing Tamil calendar and Malayalam calendar. The two calendars most widely used today are the Vikrama calendar, which is followed in western and northern India and Nepal, and the Shalivahana Shaka calendar, which is followed in the Deccan region of India (comprising the present-day Indian states of Telangana, Andhra Pradesh, Karnataka, Maharashtra, and Goa).
Lunar calendars, based on the lunar cycle (lunar months in a solar year, lunar phase for religious dates and new year):
Vikram Samvat (Vikrami era) – North and Central India
Gujarati samvat – Gujarat, Rajasthan
Sindhi samvat – Sindhis
Shalivahana calendar (Shaka era) – used in the Deccan region states of Maharashtra, Goa, Karnataka, Andhra Pradesh, Telangana
Saptarishi era calendar – Kashmiri Pandits
Kannada calendar – Karnataka
Kashmiri calendar – Kashmir
Maithili calendar
Meitei calendar – Manipur
Nepali calendar – Nepal, Sikkim
Punjabi calendar – Punjab
Sindhi calendar – Sindh
Telugu calendar – Andhra Pradesh, Telangana
Tibetan calendar – Tibet
Solar calendars, based on the solar cycle (solar months in a solar year, lunar phase for religious dates, but a new year falling on a solar date – the South and Southeast Asian solar New Year):
Assamese calendar – Assam
Bengali calendar – West Bengal, Bangladesh, Tripura, the Barak Valley region of Assam and parts of Jharkhand
Odia calendar – Odisha
Tirhuta Panchang – Maithilis
Tripuri calendar – Tripura
Malayalam calendar – Kerala
Tamil calendar – Tamil Nadu
Tulu calendar – Tulus
Vikram Samvat calendar (solar variant) – Nepal, North and Central India
Bikram Sambat – Nepal, Sikkim
Other related calendars across India and Asia:
Indian national calendar – used by the Indian Government (civil calendar based on solar months)
Vira Nirvana Samvat (lunar) – Jains
Buddhist calendar (lunar) – Buddhists
Tibetan calendar (lunar) – Tibet, Ladakh, Sikkim, Arunachal Pradesh
Pawukon calendar – Bali
Balinese saka calendar (lunar) – Bali
Cham calendar (lunar) – Chams
Chula Sakarat (lunar) – Myanmar
Thai solar calendar – Thailand
Thai lunar calendar – Thailand
Khmer calendar (lunar and solar) – Cambodia
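All five limbs described in this article reduce to simple quotient rules on the ecliptic longitudes of the Sun and the Moon. The following Python sketch collects them in one place; the longitudes are assumed to be sidereal values supplied by an ephemeris (omitted here), a real panchanga evaluates them at local sunrise, and the function names are illustrative rather than standard:

```python
# Sketch of the angular rules for the five limbs (panchanga). Inputs are
# sidereal ecliptic longitudes in degrees, assumed to come from an
# ephemeris; a real panchanga evaluates them at local sunrise.

RASIS = ["Mesha", "Vrishabha", "Mithuna", "Karka", "Simha", "Kanya",
         "Tula", "Vrischika", "Dhanu", "Makara", "Kumbha", "Meena"]

def rasi(sun: float) -> str:
    """Solar month: the 30-degree zodiac division the Sun occupies."""
    return RASIS[int(sun % 360.0) // 30]

def tithi(moon: float, sun: float) -> int:
    """Lunar day 1..30: each 12-degree step of Moon-Sun elongation."""
    return int(((moon - sun) % 360.0) // 12.0) + 1

def yoga(moon: float, sun: float) -> int:
    """Yoga 1..27: the *sum* of the longitudes in 800-arcminute parts."""
    return int(((moon + sun) % 360.0) // (360.0 / 27)) + 1

def nakshatra(moon: float) -> int:
    """Nakshatra 1..27: 13 deg 20 min divisions from 0 deg Aries."""
    return int((moon % 360.0) // (360.0 / 27)) + 1

def karana_slot(moon: float, sun: float) -> int:
    """Half-tithi slot 0..59: each 6-degree step of elongation."""
    return int(((moon - sun) % 360.0) // 6.0)

# The 60 slots map onto 11 karana names exactly as the text describes:
# Kimstughna first, 7 names repeating eight times, 3 fixed names at the end.
REPEATING = ["Bava", "Balava", "Kaulava", "Taitila", "Garaja",
             "Vanija", "Vishti"]
FIXED_TAIL = ["Shakuni", "Chatushpada", "Naga"]

def karana_name(slot: int) -> str:
    if slot == 0:
        return "Kimstughna"
    if slot >= 57:
        return FIXED_TAIL[slot - 57]
    return REPEATING[(slot - 1) % 7]

# Example with made-up longitudes (Sun 23 deg, Moon 115 deg):
print(rasi(23.0), tithi(115.0, 23.0), yoga(115.0, 23.0),
      nakshatra(115.0), karana_name(karana_slot(115.0, 23.0)))
# -> Mesha 8 11 9 Bava
```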
Technology
Calendars
null
201415
https://en.wikipedia.org/wiki/Quercus%20robur
Quercus robur
Quercus robur, the pedunculate oak or English oak, is a species of flowering plant in the beech and oak family, Fagaceae. It is a large tree, native to most of Europe and western Asia, and is widely cultivated in other temperate regions. It grows on soils of near neutral acidity in the lowlands and is notable for its value to natural ecosystems, supporting a very wide diversity of herbivorous insects and other pests, predators and pathogens. Description Quercus robur is a deciduous tree up to tall, with a single stout trunk that can be as much as in girth (circumference at breast height) or even in pollarded specimens. Older trees tend to be pollarded, with boles (the main trunk) long. These live longer and become more stout than unpollarded trees. The crown is spreading and unevenly domed, and trees often have massive lower branches. The bark is greyish-brown and closely grooved, with vertical plates. There are often large burrs on the trunk, which typically produce many small shoots. Oaks do not produce suckers but do recover well from pruning or lightning damage. The twigs are hairless and the buds are rounded (ovoid), brownish and pointed. The leaves are arranged alternately along the twigs and are broadly oblong or ovate, long by wide, with a short (typically ) petiole. They have a cordate (auricled) base and 3–6 rounded lobes, divided no further than halfway to the midrib. The leaves are usually glabrous or have just a few simple hairs on the lower surface. They are dark green above, paler below, and are often covered in small disks of spangle gall by autumn. Flowering takes place in spring (early May in England); the tree is wind-pollinated. The male flowers occur in narrow catkins some long and arranged in small bunches; the female flowers are small, brown with dark red stigmas, about 2 mm in diameter, and are found at the tips of new shoots on peduncles 2–5 cm long. The fruits (acorns) are borne in clusters of 2–3 on a peduncle (stalk) 4–8 cm long. Each acorn is 1.5–4 cm long, ovoid with a pointed tip, starting whitish-green and becoming brown, then black. As with all oaks, the acorns are carried in a shallow cup, which can be distinctive in identifying the species. It is an "alternate bearing" species, with its large crops produced every other year. Chemistry Grandinin/roburin E, castalagin/vescalagin, gallic acid, monogalloyl glucose (glucogallin), valoneic acid dilactone, digalloyl glucose, trigalloyl glucose, rhamnose, quercitrin and ellagic acid are phenolic compounds found in Q. robur. The heartwood contains triterpene saponins. Similar species Q. robur is most likely to be confused with the sessile oak, which shares much of its range. Distinguishing features of Q. robur include the auricles at the leaf base, the very short petiole, its clusters of acorns borne on a long peduncle, and the lack of stellate hairs on the underside of the leaf. The two often hybridise in the wild, forming Quercus × rosacea. The Turkey oak is also sometimes confused with it, but that species has "whiskers" on the winter buds and deeper lobes on the leaves (often more than halfway to the midrib). The acorn cups are also very different. Taxonomy Quercus robur (from the Latin quercus, "oak", plus robur, derived from a word meaning robust or strong) was named by Carl Linnaeus in Species Plantarum (1753). It is the type species of the genus and is classified in the white oak section (Quercus section Quercus). 
It has numerous common names, including "common oak", "European oak" and "English oak". In French it is called "chêne pédonculé". The genome of Q. robur has been completely sequenced (GenOak project); the first version was published in 2016. It comprises 12 chromosome pairs (2n = 24), about genes and 750 million bp. There are many synonyms, and numerous varieties and subspecies have been named. The populations in Iberia, Italy, southeast Europe, and Asia Minor and the Caucasus are sometimes treated as separate species: Q. orocantabrica, Q. brutia Tenore, Q. pedunculiflora K. Koch and Q. haas Kotschy respectively. Quercus × rosacea Bechst. (Q. petraea × Q. robur) is the only naturally occurring hybrid, but the following crosses with other white oak species have been produced in cultivation: Q. × bimundorum (Q. alba × Q. robur) (two worlds oak), Q. × macdanielli (Q. macrocarpa × Q. robur) (heritage oak), Q. × turneri Willd. (Q. ilex × Q. robur) (Turner's oak), and Q. × warei (Q. robur 'Fastigiata' × Q. bicolor). There are numerous cultivars available, among which the following are commonly grown: 'Fastigiata', the cypress oak, is a large imposing tree with a narrow columnar habit. 'Concordia', the golden oak, is a small, very slow-growing tree, eventually reaching , with bright golden-yellow leaves throughout spring and summer; it was originally raised in Van Geert's nursery at Ghent in 1843. 'Pendula', the weeping oak, is a small to medium-sized tree with pendulous branches, reaching up to . 'Purpurea' is another small form, growing to , with purple leaves. 'Pectinata' (syn. 'Filicifolia'), the cut-leaved oak, is a cultivar in which the leaf is pinnately divided into fine, forward-pointing segments. Distribution and habitat The species is native to most of Europe and western Asia, and is widely cultivated in other temperate regions. It is a long-lived tree of high-canopy woodland, coppice and wood pasture, and it is commonly planted in hedges. When compared to the sessile oak, it is more abundant in the lowlands of the south and east of Britain, and it occurs on more neutral (less acid) soils. It is rare on thin, well-drained calcareous (chalk and limestone) soil. Sometimes it is found on the margins of swamps, rivers and ponds, showing that it is fairly tolerant of intermittent flooding. Its Ellenberg values in Britain are L = 7, F = 5, R = 5, N = 4, and S = 0. Ecology Within its native range, Q. robur is valued for its importance to insects and other wildlife, supporting the highest biodiversity of insect herbivores of any British plant (at least 400 species). The best known of these are the ones that form galls, which number about 35. In Britain, the knopper gall is very common, and Andricus grossulariae produces somewhat similar spiky galls on the acorn cups. Also common in Britain are two types of spherical galls on the twigs: the oak marble gall and the cola nut gall. The latter are smaller and rougher than the former. A single, large exit hole indicates that the wasp inside has escaped, whereas several smaller holes show that it was parasitised by another insect, whose offspring emerged instead. The undersides of oak leaves are often covered in spangle galls, which persist after the leaves fall. One of the most distinctive galls is the oak apple, a 4.5 cm diameter spongy ball created from the buds by the wasp Biorhiza pallida. The pineapple gall, while less common, is also easily recognised. 
The number of caterpillar species on an oak tree increases with the age of the tree, and blue tits and great tits time their egg hatching to the opening of the leaves. The most common caterpillar species include the winter moth, the green tortrix and the mottled umber, all of which can become extremely abundant on the first flush of leaves in May, but the oak trees recover their foliage later in the year. The acorns are typically produced in large quantities every other year (unlike Q. petraea, which produces large crops only every 4–10 years) and form a valuable food resource for several small mammals and some birds, notably Eurasian jays Garrulus glandarius. Jays were overwhelmingly the primary propagators of oaks before humans began planting them commercially (and remain the principal propagators for wild oaks), because of their habit of taking acorns from the umbra of the parent tree and burying them undamaged elsewhere. Diseases Acute oak decline Powdery mildew caused by Erysiphe alphitoides Sudden oak death Uses Quercus robur is planted for forestry, and produces a long-lasting and durable heartwood, much in demand for interior and furniture work. The wood is identified by a close examination of a cross-section perpendicular to the fibres. The wood is characterised by its distinct (often wide) dark and light brown growth rings. The earlywood displays a vast number of large vessels (around in diameter). There are rays of thin (about ) yellow or light brown lines running across the growth rings. The timber is around per cubic metre in density. Additionally, although bitter due to their high tannin content, the acorns can be roasted and ground into a coffee substitute. In culture In the Scandinavian countries, oaks were considered the "thunderstorm trees", representing Thor, the god of thunder. A Finnish myth holds that the World tree, a great oak which grew to block the movement of the sky, sunlight and moonlight, had to be felled, releasing its magic and thus creating the Milky Way. The oak tree also had a symbolic value in France. Some oaks were considered sacred by the Gauls; druids would cut down the mistletoe growing on them. Even after Christianization, oak trees were considered protective, as lightning would strike them rather than nearby habitation. Such struck trees would often be turned into places of worship, like the Chêne chapelle. In 1746, all oak trees in Finland were legally classified as royal property, and oaks had already enjoyed legal protection from the 17th century. The oak is also the regional tree of the Southwest Finland region. During the French Revolution, oaks were often planted as trees of freedom. One such tree, planted during the 1848 Revolution, survived the destruction of Oradour-sur-Glane by the Nazis. After the announcement of General Charles de Gaulle's death, the caricaturist Jacques Faizant represented him as a fallen oak. In Germany, the oak tree can be found in several paintings of Caspar David Friedrich and in "Of the Life of a Good-for-Nothing", written by Joseph Freiherr von Eichendorff, as a symbol of the state protecting every citizen. In Serbia the oak is a national symbol, having been part of the historical coat of arms of the Socialist Republic of Serbia, the historical coat of arms and flags of the Principality of Serbia, as well as the current traditional coat of arms and flag of Vojvodina. In England, the oak has assumed the status of a national emblem. 
This has its origins in the oak tree at Boscobel House, where the future King Charles II hid from his Parliamentarian pursuers in 1650 during the English Civil War; the tree has since been known as the Royal Oak. The event was celebrated nationally on 29 May as Oak Apple Day, which continues to this day in some communities. Many place names in England include a reference to this tree, including Oakley, Occold and Eyke. Copdock, in Suffolk, probably derives from a pollarded oak ("copped oak"). 'The Royal Oak' is the third most popular pub name in Britain (with 541 counted in 2007), and HMS Royal Oak has been the name of eight major Royal Navy warships. The naval associations are strengthened by the fact that oak was the main construction material for sailing warships. The Royal Navy was often described as "The Wooden Walls of Old England" (a paraphrase of the Delphic Oracle), and the Navy's official quick march is "Heart of Oak". In folklore, the Major Oak is where Robin Hood is purported to have taken shelter. Oak leaves (not necessarily of this species) have been depicted on the Croatian 5 lipa coin; on old German Deutsche Mark currency (1 through 10 Pfennigs; the 50 Pfennig coin showed a woman planting an oak seedling), and now on German-issued euro coins (1 through 5 cents); and on British pound coins (1987 and 1992 issues). Notable trees It is often claimed that England has more ancient oaks than the rest of Europe combined. This is based on research by Aljos Farjon at the Royal Botanic Gardens, Kew, who found that there were 115 oaks (of both species) in England with a circumference of 9 m or more, compared with just 96 in the rest of Europe. This is attributed to the persistence of mediaeval deer parks in the landscape. The Majesty Oak, with a circumference of , is the thickest such tree in Great Britain. The Brureika (Bridal Oak) in Norway, with a circumference of (in 2018), and the Kaive Oak in Latvia, with a circumference of , are among the thickest trees in Northern Europe. The largest historical oak was known as the Imperial Oak from Bosnia and Herzegovina. This specimen was recorded at 17.5 m in circumference at breast height and estimated at over 150 m³ in total volume. It collapsed in 1998. Two individuals of notable longevity are the Stelmužė Oak in Lithuania and the Granit Oak in Bulgaria, which are believed to be more than 1,500 years old, possibly making them the oldest oaks in Europe; another specimen, called the 'Kongeegen' ('Kings Oak'), estimated to be about 1,200 years old, grows in Jaegerspris, Denmark. Yet another, in Kvilleken, Sweden, is over 1,000 years old. Of maiden (not pollarded) specimens, one of the oldest is the great oak of Ivenack, Germany; tree-ring research of this tree and other oaks nearby gives an estimated age of 700 to 800 years. The Bowthorpe Oak in Lincolnshire, England, is estimated to be 1,000 years old, making it the oldest in the UK, although the Knightwood Oak in the New Forest is also said to be as old. The highest density of Q. robur with a circumference of and more is in Latvia. In Ireland, at Birr Castle, a specimen over 400 years old, known as the Carroll Oak, has a girth of . In the Basque Country (Spain and France), the 'tree of Gernika' is an ancient oak located in Gernika, under which the Lehendakari (Basque prime minister) swears his oath of office. The largest example in Australia is in Donnybrook, Western Australia.
Biology and health sciences
Fagales
Plants
201474
https://en.wikipedia.org/wiki/Hatchback
Hatchback
A hatchback is a car body configuration with a rear door that swings upward to provide access to the main interior of the car as a cargo area, rather than just to a separate trunk. Hatchbacks may feature fold-down second-row seating, so that the interior can be reconfigured to prioritize passenger or cargo volume. While early examples of the body configuration can be traced to the 1930s, the Merriam-Webster dictionary dates the term itself to 1970. The hatchback body style has been marketed worldwide on cars ranging in size from superminis to small family cars, as well as executive cars and some sports cars. The rear hatch is also a primary component of sport utility vehicles. Characteristics The distinguishing feature of a hatchback is a rear door that opens upwards and is hinged at roof level (as opposed to the boot/trunk lid of a saloon/sedan, which is hinged below the rear window). Most hatchbacks use a two-box body design, where the cargo area (trunk/boot) and passenger areas form a single volume. The rear seats can often be folded down to increase the available cargo area. Hatchbacks may have a removable rigid parcel shelf, or a flexible roll-up tonneau cover, to cover the cargo space behind the rear seats. 3-door and 5-door terminology When describing the body style, the hatch is often counted as a door; therefore a hatchback with two passenger doors is called a three-door and a hatchback with four passenger doors is called a five-door. Estates vs. liftbacks vs. notchbacks Estates/station wagons and liftbacks have in common a two-box design configuration, a shared interior volume for passengers and cargo, and a rear door (often called a tailgate in the case of an estate/wagon) that is hinged at roof level, similar to hatchbacks. Liftback cars are similar to hatchbacks from a functional perspective in having a tailgate hinged from the roof, but differ from hatchbacks from a styling perspective in having a more sloped roofline. The term "fastback" may sometimes also be used by manufacturers to market liftback cars. A fastback is a broad automotive term used to describe the styling of the rear of a car as having a single slope from the roof to the rear bumper. Some hatchbacks are notchback three-box designs, bearing a resemblance to sedans/saloons from a styling perspective but being closer to hatchbacks in functionality by having a tailgate hinged from the roof. This is featured on cars such as the 1951 Kaiser-Frazer Vagabond, Simca 1100, Mazda 6 GG1, and Opel Vectra C. As such, notchbacks are not fastbacks, as the slope of the roofline on a notchback is interrupted by its three-box design. An estate/wagon typically differs from a liftback or hatchback by being longer (and therefore more likely to have a D-pillar). Other potential differences of a station wagon include: a steeper rake at the rear (i.e. the rear door is more vertical); a third row of seats; rear suspension designed for increased load capacity or to minimize intrusion into the cargo area; and a tailgate that is more likely to be a multi-part design or to extend down to the bumper. Liftback "Liftback" is a term for hatchback models in which the rear cargo door or hatch is more horizontally angled than on an average hatchback, so that the hatch is lifted more upwards than backwards to open. The term was first used by Toyota in 1973, to describe the Toyota Celica Liftback GT. 
Later, Toyota needed to distinguish between two 5-door versions of the Toyota Corolla: one was a conventional 5-door hatchback with a nearly vertical rear hatch, while the other had a more horizontally angled hatch, for which the term Liftback was used. Saab called the similar body style of its cars the combi coupé, starting from 1974. History The first production hatchback was made by Citroën in 1938: the (11CV) "Commerciale" version of their 1934–1957 Citroën Traction Avant series. The initial target market was tradesmen – butchers, bakers, vintners, and grocers – who needed to carry bulky objects. Before World War II, the tailgate had two pieces, a top section hinged from roof level and a bottom section hinged from below. When production of the Commerciale resumed after the war, the tailgate became a one-piece design hinged from roof level, as per the design used on most hatchbacks since. In 1949, Kaiser-Frazer introduced the Vagabond and Traveler hatchbacks. These models were styled much like a typical 1940s sedan, fully retaining their three-box profile; however, they included a two-piece tailgate as per the first Citroën 11CV Commerciale. The Vagabond and Traveler models also had folding rear seats and a shared volume for the passengers and cargo. The design was neither fully a sedan nor a station wagon, but the folding rear seat provided a large, long interior cargo area. These Kaiser-Frazer models have been described as "America's First Hatchback". The British Motor Corporation (BMC) launched a 'Countryman' version of the Austin A40 Farina two-box economy car in 1959. Just like its A30 and A35 Countryman predecessors, it was a very small estate car – but instead of regular, sideways-opening rear doors, it had a horizontally split tailgate, with a top-hinged upper door and a bottom-hinged lower door. The 1959 A40 Countryman differed from the 1958 A40 Farina saloon in that the rear window was marginally smaller, to allow for a frame that could be lifted on roof-mounted hinges with side support struts, so that the car incorporated a horizontally split two-piece tailgate. The lower panel was now flush with the floor and its bottom-mounted hinges were strengthened. Sports cars In 1953, Aston Martin marketed the DB2 with a top-hinged rear tailgate, manufacturing 700 examples. Its successor, the 1958 DB Mark III, also offered a folding rear seat. The 1954 AC Aceca and the later Aceca-Bristol from AC Cars had a similar hatch tailgate, though only 320 were built. In 1965, MG had Pininfarina modify the MGB roadster into a hatchback design called the MGB GT, which became the first volume-production sports car with this type of body. Many coupés have three doors, including the Jaguar E-Type and Datsun 240Z. Mass market acceptance In 1961, Renault introduced the Renault 4 as a moderately upscale alternative to the Citroën 2CV. The Renault 4 was the first million-selling, mass-produced, compact two-box car with a steeply raked rear, opened by a large, one-piece lift-gate hatch. During its production life cycle, Renault marketed the R4 as a small station wagon – just as Austin had marketed its series of small Countryman estate models from 1954 until 1968 – even after the term "hatchback" appeared around 1970. The company offered only one two-box body style. The Renault 4 continued in production through 1992, selling over 8 million cars. In 1965, the R4 economy car was complemented by the D-segment Renault 16, the first volume-production two-box hatchback family car. 
Its rear seats were adjustable, would fold down, or could be completely removed. The Renault 16 was successful in a market segment previously populated exclusively by notchback sedans, and despite making only one body style for 15 years, consumers purchased over 1 million R16s. Modern hatchbacks Unlike the Renault 4, which had a semi-integrated body mounted on a platform chassis and a longitudinally placed, front mid-mounted engine behind the front axle, the 1967 Simca 1100 followed in the footsteps of the 1959 BMC Mini, with front-wheel drive, a more space-efficient transverse engine layout, unitary bodywork, and independent suspension – features which became key design concepts used by almost every mass-market family car since – and it was the first hatchback with these features. The Simca 1100 also came in both three- and five-door variants, and the hatchback models took the central position traditionally taken up by saloons in a full model line-up, completed by a station wagon as well as panel van versions. Also in 1967, Citroën released the Dyane, a redesigned 2CV with a large rear hatch, to compete with the Renault 4. The Simca was closely followed by the Mini's larger stablemate, the Austin Maxi; counting the rear hatch, it was a five-door saloon. It featured a transverse-mounted SOHC engine, a five-speed transmission, and a flexible seating arrangement which gave the option of forming a double bed. The Maxi was created by the same designer as BMC's Mini, Sir Alec Issigonis. Accountants had determined that the car had to use the same set of doors as the Austin/Morris 1800, but it would be marketed below that car in the model range and so needed a shorter rear body; a curtailed rear end with a big hatch resulted. The Austin Maxi operated in the same market segment as the Renault 16, and the two competitors were closely matched in specifications and exterior dimensions, although the Maxi had significantly more interior space due to its transverse engine. In 1974, the Volkswagen Golf was introduced, intended to replace the ubiquitous Beetle. In 1976, British Leyland introduced the Rover 3500, a rear-wheel-drive five-door executive hatchback. Europe Increasing demand for compact hatchbacks in Europe during the 1970s led to the release of models such as the Austin Ambassador, Austin Maestro, Fiat 127 and Renault 5. By the late 1970s and early 1980s, the majority of superminis and compact cars had been updated or replaced with hatchback models. Hatchbacks were the mainstay of manufacturers' D-segment offerings in Europe in the 1990s (they were already popular in the 1980s) and until the late 2000s. It was common for manufacturers to offer the same D-segment model in three different body styles: a 4-door sedan, a 5-door hatchback, and a 5-door station wagon. Such models included the Ford Mondeo, the Mazda 626/Mazda6, the Nissan Primera, the Opel Vectra/Insignia, and the Toyota Carina/Avensis. There were also models in this market segment available only as a 5-door hatchback or a 4-door sedan, and models available only as a 5-door hatchback or a 5-door station wagon. Often the hatchback and the sedan shared the same wheelbase and the same overall length, and the full rear overhang length of a conventional sedan trunk was retained on the five-door hatchback version of the car. The 1989–2000 Citroën XM and the second-generation Skoda Superb (2008–2015) are cars that blur the line between hatchbacks and sedans. They feature an innovative "Twindoor" trunk lid. 
It can be opened like a sedan's trunk lid, using the hinges located below the rear glass, or together with the rear glass, like a hatchback, using the hinges at the roof. Audi and BMW introduced hatchbacks in 2009, but marketed them as "Sportback" (Audi) or "Gran Turismo"/"Gran Coupe" (BMW). In the 2010s, hatchback versions became available on luxury cars such as the BMW 5 Series Gran Turismo, Porsche Panamera, and Audi A7, while the Skoda Octavia was always available as a hatchback. Meanwhile, three-door hatchbacks have seen a fall in popularity compared with 5-door models. This has led to many models no longer being offered in 3-door body styles, for example the Audi A3 and Renault Clio. North America In 1970, American Motors Corporation (AMC) released the first North American subcompact car since the 1953–1961 Nash Metropolitan, the AMC Gremlin. Although the Gremlin has the appearance of a hatchback, it is frequently called a Kammback coupe instead, with only its rear window being an upwards-opening hatch that gives access to the rear cargo space. The Gremlin was based on the AMC Hornet, but its abrupt hatchback rear end cut the car's overall length from . AMC added a hatchback version to its larger compact-sized Hornet line for the 1973 model year. The design and fold-down rear seat more than doubled the cargo space, and the Hornet was claimed to be the "first compact hatchback" manufactured by a U.S. automaker. The 1975 Pacer likewise featured a rear hatch. A longer model with a wagon-type configuration was added in 1977, with its large rear "hatch" as one of the car's three doors, all of different sizes. The 1979 AMC Spirit was available in two designs: a "sedan" with a rear lift-up window and a semi-fastback "liftback" version. General Motors' first hatchback model was the Chevrolet Vega, introduced in September 1970. Over a million Vega hatchbacks were produced for the 1971–1977 model years, accounting for about half of the Vega's total production. The Vega hatchback was also rebadged and sold as the 1973–1977 Pontiac Astre, 1978 Chevrolet Monza S, 1975–1980 Buick Skyhawk, 1975–1980 Oldsmobile Starfire and 1977–1980 Pontiac Sunbird. In 1974, the larger Chevrolet Nova became available in a hatchback body style. The Nova hatchback was also rebadged as the Chevrolet Concours, Pontiac Ventura, Pontiac Phoenix, Oldsmobile Omega, Buick Apollo, and Buick Skylark. In 1980, General Motors released its first front-wheel-drive hatchback models, the Chevrolet Citation and Pontiac Phoenix. Both AMC and GM offered a dealer accessory that turned their compact hatchback models into low-cost recreational vehicles. An example is the Mini-Camper Kit for the AMC Hornet, a low-priced canvas tent that converted an open hatchback into a camping compartment with room for sleeping. The "Mini-Camper" was a weatherproof covering that fitted over the roof section from the B-pillar back to the rear bumper and was easy to set up. Ford Motor Company's first hatchback was the Ford Pinto Runabout, introduced in 1971. The Pinto-based 1974–1978 Ford Mustang II was offered as a hatchback. The body style was continued for the redesigned Fox-platform-based 1979 third-generation Mustang and the Mercury Capri derivative. For 1981, Ford offered hatchback versions of its sub-compact Escort and the badge-engineered Mercury Lynx, which were now front-wheel drive. Two-seat hatchback derivatives were introduced for 1982, the Ford EXP and the Mercury LN-7. 
Chrysler Corporation's first hatchbacks (and first front-wheel-drive cars) were the 1978 Dodge Omni / Plymouth Horizon models, which were based on the French Simca-Talbot Horizon. These were followed by the 3-door hatchback Dodge Omni 024 / Plymouth Horizon TC3, which were later renamed the Dodge Charger and Plymouth Turismo. Japan The first Japanese hatchbacks were the 1972 Honda Civic, Nissan Sunny, and Nissan Cherry. The Civic and Cherry had front-wheel-drive powertrains, which later became the common configuration for a hatchback. Along with the Honda Civic, other Japanese hatchback models included the Nissan Pulsar, Toyota Corolla, and Suzuki Swift. Almost all Japanese kei cars ("city cars") use a hatchback body style, to maximize cargo capacity given that the overall vehicle size is limited by the regulations applicable to these vehicles. Kei cars include the Mitsubishi Minica, Honda Life, Suzuki Fronte, Subaru Vivio, and Daihatsu Mira. USSR The first Soviet hatchback was the rear-wheel-drive IZh 2125 Kombi, which entered production in 1973. This was followed only in the 1980s by the front-wheel-drive Lada Samara in 1984, the Moskvitch 2141/Aleko in 1986, and the ZAZ Tavria in 1987. Brazil In 2014, four of the top five selling models in Brazil were hatchbacks. However, in the 1980s and 1990s, hatchbacks were less popular than sedans, leading manufacturers to develop compact sedan models for the Brazilian market, for example the Fiat Premio and sedan versions of the Opel Corsa and Ford Fiesta. India One of the most popular hatchbacks in India is the Suzuki Swift, sold there by Maruti Suzuki. The Swift is classified as a B-segment car in the European single market, a segment referred to as a supermini in the British Isles. Before the Swift became its own model in 2004, the "Swift" nameplate had been applied to the rebadged Suzuki Cultus in numerous export markets since 1984 and to the Japanese-market Suzuki Ignis since 2000. Currently, the Swift is positioned between the Ignis and the Baleno in Suzuki's global line-up. Australia Holden produced the Torana Hatchback from 1976 to 1980 across the LX and UC generations. Until recent years, buyers in Australia preferred the station wagon body style, with the big three Australian manufacturers – Holden, Ford Australia, and Chrysler Australia – all producing station wagon versions of their sedan models. Australia started moving partially to hatchbacks in the mid-1990s, with relatively cheap offerings from Hyundai and Honda. Australia now sells mostly hatchbacks, after the last domestic-built wagon, the Holden Commodore Sportwagon, ceased production in October 2017. The Ford Laser hatchback was produced in Australia. Nissan produced the Pulsar and Pintara hatchbacks and Mitsubishi built the Colt hatch. Toyota produced the Corolla hatchback, and more recently Holden produced the Cruze hatchback.
Technology
Motorized road transport
null
201479
https://en.wikipedia.org/wiki/Syngas
Syngas
Syngas, or synthesis gas, is a mixture of hydrogen and carbon monoxide in various ratios. The gas often contains some carbon dioxide and methane. It is principally used for producing ammonia or methanol. Syngas is combustible and can be used as a fuel. Historically, it has been used as a replacement for gasoline when gasoline supply has been limited; for example, wood gas was used to power cars in Europe during WWII (in Germany alone, half a million cars were built or rebuilt to run on wood gas). Production Syngas is produced by steam reforming or partial oxidation of natural gas or liquid hydrocarbons, or by coal gasification. Steam reforming of methane is an endothermic reaction requiring 206 kJ/mol of methane: CH4 + H2O → CO + 3 H2. In principle, but rarely in practice, biomass and related hydrocarbon feedstocks could be used to generate biogas and biochar in waste-to-energy gasification facilities. The gas generated (mostly methane and carbon dioxide) is sometimes described as syngas, but its composition differs from syngas. Generation of conventional syngas (mostly H2 and CO) from waste biomass has been explored. Composition, pathway for formation, and thermochemistry The chemical composition of syngas varies based on the raw materials and the processes. Syngas produced by coal gasification generally is a mixture of 30 to 60% carbon monoxide, 25 to 30% hydrogen, 5 to 15% carbon dioxide, and 0 to 5% methane. It also contains lesser amounts of other gases. Syngas has less than half the energy density of natural gas. The first reaction, between incandescent coke and steam (C + H2O → CO + H2), is strongly endothermic, producing carbon monoxide (CO) and hydrogen (water gas in older terminology). When the coke bed has cooled to a temperature at which the endothermic reaction can no longer proceed, the steam is replaced by a blast of air. The second and third reactions then take place: first the exothermic reaction C + O2 → CO2, forming carbon dioxide and raising the temperature of the coke bed, followed by the second endothermic reaction, CO2 + C → 2 CO, in which that carbon dioxide is converted to carbon monoxide. The overall reaction is exothermic, forming "producer gas" (older terminology). Steam can then be re-injected, then air, and so on, to give an endless series of cycles until the coke is finally consumed. Producer gas has a much lower energy value than water gas, due primarily to dilution with atmospheric nitrogen. Pure oxygen can be substituted for air to avoid the dilution effect, producing gas of much higher calorific value. To produce more hydrogen from this mixture, more steam is added and the water-gas shift reaction is carried out: CO + H2O → CO2 + H2. The hydrogen can be separated from the CO2 by pressure swing adsorption (PSA), amine scrubbing, or membrane reactors. A variety of alternative technologies have been investigated, but none are of commercial value. Some variations focus on new stoichiometries, such as carbon dioxide plus methane, or partial hydrogenation of carbon dioxide. Other research focuses on novel energy sources to drive the processes, including electrolysis, solar energy, microwaves, and electric arcs. Electricity generated from renewable sources is also used to process carbon dioxide and water into syngas through high-temperature electrolysis, in an attempt to maintain carbon neutrality in the generation process. Audi, in partnership with a company named Sunfire, opened a pilot plant in November 2014 to generate e-diesel using this process. Syngas that is not methanized typically has a lower heating value of 120 BTU/scf. 
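Taken together, the steam reforming and water-gas shift reactions above fix the hydrogen yield and the net heat demand per mole of methane. A small bookkeeping sketch in Python, using standard textbook enthalpies (assumed values, not taken from this text):

```python
# Stoichiometric bookkeeping for steam reforming plus a full water-gas
# shift. Enthalpies are standard textbook values, kJ per mol of CH4.
DH_REFORM = +206.0   # CH4 + H2O -> CO + 3 H2   (endothermic)
DH_SHIFT  = -41.0    # CO + H2O  -> CO2 + H2    (mildly exothermic)

h2_yield = 3 + 1                 # the shift turns the CO into a 4th H2
dh_net = DH_REFORM + DH_SHIFT    # overall: CH4 + 2 H2O -> CO2 + 4 H2

print(f"H2 yield: {h2_yield} mol per mol CH4")
print(f"net enthalpy: {dh_net:+.0f} kJ/mol CH4")  # +165: still endothermic
```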
Untreated syngas can be run in hybrid turbines, which, because of their lower operating temperatures, allow for greater efficiency and extended part lifetimes. Uses Syngas is used as a source of hydrogen as well as a fuel. It is also used to directly reduce iron ore to sponge iron. Chemical uses include the production of methanol, which is a precursor to acetic acid and many acetates; liquid fuels and lubricants via the Fischer–Tropsch process and, previously, the Mobil methanol-to-gasoline process; ammonia via the Haber process, which converts atmospheric nitrogen (N2) into ammonia for use as a fertilizer; and oxo alcohols via an intermediate aldehyde.
Physical sciences
Chemical mixtures: General
null
201485
https://en.wikipedia.org/wiki/Lithium-ion%20battery
Lithium-ion battery
A lithium-ion or Li-ion battery is a type of rechargeable battery that uses the reversible intercalation of Li+ ions into electronically conducting solids to store energy. In comparison with other commercial rechargeable batteries, Li-ion batteries are characterized by higher specific energy, higher energy density, higher energy efficiency, a longer cycle life, and a longer calendar life. Also noteworthy is a dramatic improvement in lithium-ion battery properties after their market introduction in 1991: over the following 30 years, their volumetric energy density increased threefold while their cost dropped tenfold. In late 2024, global demand passed 1 terawatt-hour per year, while production capacity was more than twice that. The invention and commercialization of Li-ion batteries may have had one of the greatest impacts of all technologies in human history, as recognized by the 2019 Nobel Prize in Chemistry. More specifically, Li-ion batteries enabled portable consumer electronics, laptop computers, cellular phones, and electric cars. Li-ion batteries also see significant use for grid-scale energy storage as well as military and aerospace applications. Lithium-ion cells can be manufactured to optimize energy or power density. Handheld electronics mostly use lithium polymer batteries (with a polymer gel as an electrolyte), a lithium cobalt oxide (LiCoO2) cathode material, and a graphite anode, which together offer high energy density. Lithium iron phosphate (LiFePO4), lithium manganese oxide (LiMn2O4 spinel, or Li2MnO3-based lithium-rich layered materials, LMR-NMC), and lithium nickel manganese cobalt oxide (NMC) may offer longer life and a higher discharge rate. NMC and its derivatives are widely used in the electrification of transport, one of the main technologies (combined with renewable energy) for reducing greenhouse gas emissions from vehicles. M. Stanley Whittingham conceived intercalation electrodes in the 1970s and created the first rechargeable lithium-ion battery, based on a titanium disulfide cathode and a lithium-aluminium anode, although it suffered from safety problems and was never commercialized. John Goodenough expanded on this work in 1980 by using lithium cobalt oxide as a cathode. The first prototype of the modern Li-ion battery, which uses a carbonaceous anode rather than lithium metal, was developed by Akira Yoshino in 1985 and commercialized by a Sony and Asahi Kasei team led by Yoshio Nishi in 1991. Whittingham, Goodenough, and Yoshino were awarded the 2019 Nobel Prize in Chemistry for their contributions to the development of lithium-ion batteries. Lithium-ion batteries can be a safety hazard if not properly engineered and manufactured, because they have flammable electrolytes that, if damaged or incorrectly charged, can lead to explosions and fires. Much progress has been made in the development and manufacturing of safe lithium-ion batteries. Lithium-ion solid-state batteries are being developed to eliminate the flammable electrolyte. Improperly recycled batteries can create toxic waste, especially from toxic metals, and are at risk of fire. Moreover, both lithium and other key strategic minerals used in batteries have significant extraction issues, with lithium being water-intensive in often arid regions and other minerals used in some Li-ion chemistries potentially being conflict minerals such as cobalt. 
Both environmental issues have encouraged some researchers to improve mineral efficiency and find alternatives such as lithium iron phosphate lithium-ion chemistries or non-lithium-based battery chemistries like iron-air batteries. There are at least 12 different chemistries of Li-ion batteries; see "List of battery types."

History

Research on rechargeable Li-ion batteries dates to the 1960s; one of the earliest examples is a CuF2/Li battery developed by NASA in 1965. The breakthrough that produced the earliest form of the modern Li-ion battery was made by British chemist M. Stanley Whittingham in 1974, who first used titanium disulfide (TiS2) as a cathode material, which has a layered structure that can take in lithium ions without significant changes to its crystal structure. Exxon tried to commercialize this battery in the late 1970s, but found the synthesis expensive and complex, as TiS2 is sensitive to moisture and releases toxic hydrogen sulfide (H2S) gas on contact with water. More prohibitively, the batteries were also prone to spontaneously catching fire due to the presence of metallic lithium in the cells. For this and other reasons, Exxon discontinued the development of Whittingham's lithium-titanium disulfide battery.

In 1980, working in separate groups, Ned A. Godshall et al. and, shortly thereafter, Koichi Mizushima and John B. Goodenough, after testing a range of alternative materials, replaced TiS2 with lithium cobalt oxide (LiCoO2, or LCO), which has a similar layered structure but offers a higher voltage and is much more stable in air. This material would later be used in the first commercial Li-ion battery, although it did not, on its own, resolve the persistent issue of flammability.

These early attempts to develop rechargeable Li-ion batteries used lithium metal anodes, which were ultimately abandoned due to safety concerns, as lithium metal is unstable and prone to dendrite formation, which can cause short-circuiting. The eventual solution was to use an intercalation anode, similar to that used for the cathode, which prevents the formation of lithium metal during battery charging. The first to demonstrate reversible intercalation of lithium ions into graphite anodes was Jürgen Otto Besenhard in 1974. Besenhard used organic solvents such as carbonates; however, these solvents decomposed rapidly, resulting in short battery cycle life. Later, in 1980, Rachid Yazami used a solid organic electrolyte, polyethylene oxide, which was more stable.

In 1985, Akira Yoshino at Asahi Kasei Corporation discovered that petroleum coke, a less graphitized form of carbon, can reversibly intercalate Li-ions at a low potential of ~0.5 V relative to Li+/Li without structural degradation. Its structural stability originates from its amorphous carbon regions, which serve as covalent joints to pin the layers together. Although it has a lower capacity compared to graphite (~Li0.5C6, 186 mAh/g), it became the first commercial intercalation anode for Li-ion batteries owing to its cycling stability. In 1987, Yoshino patented what would become the first commercial lithium-ion battery using this anode. He used Goodenough's previously reported LiCoO2 as the cathode and a carbonate ester-based electrolyte. The battery was assembled in the discharged state, which made it safer and cheaper to manufacture. In 1991, using Yoshino's design, Sony began producing and selling the world's first rechargeable lithium-ion batteries. The following year, a joint venture between Toshiba and Asahi Kasei Co. also released a lithium-ion battery.
Significant improvements in energy density were achieved in the 1990s by replacing Yoshino's soft carbon anode first with hard carbon and later with graphite. In 1990, Jeff Dahn and two colleagues at Dalhousie University (Canada) reported reversible intercalation of lithium ions into graphite in the presence of ethylene carbonate solvent (which is solid at room temperature and is mixed with other solvents to make a liquid). This represented the final innovation of the era that created the basic design of the modern lithium-ion battery.

In 2010, global lithium-ion battery production capacity was 20 gigawatt-hours. By 2016, it was 28 GWh, with 16.4 GWh in China. Global production capacity was 767 GWh in 2020, with China accounting for 75%. Production in 2021 was estimated by various sources to be between 200 and 600 GWh, and predictions for 2023 ranged from 400 to 1,100 GWh.

In 2012, John B. Goodenough, Rachid Yazami and Akira Yoshino received the IEEE Medal for Environmental and Safety Technologies for developing the lithium-ion battery; Goodenough, Whittingham, and Yoshino were awarded the 2019 Nobel Prize in Chemistry "for the development of lithium-ion batteries". Jeff Dahn received the ECS Battery Division Technology Award (2011) and the Yeager Award from the International Battery Materials Association (2016).

In April 2023, CATL announced that it would begin scaled-up production of its semi-solid condensed-matter battery, which produces a then-record 500 Wh/kg. These cells use electrodes made from a gelled material, requiring fewer binding agents, which in turn shortens the manufacturing cycle. One potential application is in battery-powered airplanes. Another new development of lithium-ion batteries is flow batteries with redox-targeted solids, which use no binders or electron-conducting additives and allow for completely independent scaling of energy and power.

Design

Generally, the negative electrode of a conventional lithium-ion cell is graphite, a form of carbon. The positive electrode is typically a metal oxide or phosphate. The electrolyte is a lithium salt in an organic solvent. The negative electrode (which is the anode when the cell is discharging) and the positive electrode (which is the cathode when discharging) are prevented from shorting by a separator. The electrodes are connected to the powered circuit through two pieces of metal called current collectors.

The negative and positive electrodes swap their electrochemical roles (anode and cathode) when the cell is charged. Despite this, in discussions of battery design the negative electrode of a rechargeable cell is often just called "the anode" and the positive electrode "the cathode". In its fully lithiated state of LiC6, graphite corresponds to a theoretical capacity of 1339 coulombs per gram (372 mAh/g). The positive electrode is generally one of three materials: a layered oxide (such as lithium cobalt oxide), a polyanion (such as lithium iron phosphate) or a spinel (such as lithium manganese oxide). More experimental materials include graphene-containing electrodes, although these remain far from commercially viable due to their high cost.

Lithium reacts vigorously with water to form lithium hydroxide (LiOH) and hydrogen gas. Thus, a non-aqueous electrolyte is typically used, and a sealed container rigidly excludes moisture from the battery pack. The non-aqueous electrolyte is typically a mixture of organic carbonates such as ethylene carbonate and propylene carbonate containing complexes of lithium ions.
Ethylene carbonate is essential for forming the solid electrolyte interphase on the carbon anode, but since it is solid at room temperature, a liquid solvent (such as propylene carbonate or diethyl carbonate) is added. The electrolyte salt is almost always lithium hexafluorophosphate (LiPF6), which combines good ionic conductivity with chemical and electrochemical stability. The hexafluorophosphate anion is essential for passivating the aluminium current collector used for the positive electrode. A titanium tab is ultrasonically welded to the aluminium current collector. Other salts like lithium perchlorate (LiClO4), lithium tetrafluoroborate (LiBF4), and lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) are frequently used in research in tab-less coin cells, but are not usable in larger-format cells, often because they are not compatible with the aluminium current collector. Copper (with a spot-welded nickel tab) is used as the current collector at the negative electrode.

Current collector design and surface treatments may take various forms: foil, mesh, foam (dealloyed), etched (wholly or selectively), and coated (with various materials) to improve electrical characteristics. Depending on materials choices, the voltage, energy density, life, and safety of a lithium-ion cell can change dramatically. Current efforts have explored the use of novel architectures using nanotechnology to improve performance. Areas of interest include nano-scale electrode materials and alternative electrode structures.

Electrochemistry

The reactants in the electrochemical reactions in a lithium-ion cell are the materials of the electrodes, both of which are compounds containing lithium atoms. Although many thousands of different materials have been investigated for use in lithium-ion batteries, only a very small number are commercially usable. All commercial Li-ion cells use intercalation compounds as active materials. The negative electrode is usually graphite, although silicon is often mixed in to increase the capacity. The electrolyte is usually lithium hexafluorophosphate, dissolved in a mixture of organic carbonates. A number of different materials are used for the positive electrode, such as LiCoO2, LiFePO4, and lithium nickel manganese cobalt oxides.

During cell discharge the negative electrode is the anode and the positive electrode the cathode: electrons flow from the anode to the cathode through the external circuit. An oxidation half-reaction at the anode produces positively charged lithium ions and negatively charged electrons. The oxidation half-reaction may also produce uncharged material that remains at the anode. Lithium ions move through the electrolyte; electrons move through the external circuit toward the cathode, where they recombine with the cathode material in a reduction half-reaction. The electrolyte provides a conductive medium for lithium ions but does not partake in the electrochemical reaction. The reactions during discharge lower the chemical potential of the cell, so discharging transfers energy from the cell to wherever the electric current dissipates its energy, mostly in the external circuit. During charging these reactions and transports go in the opposite direction: electrons move from the positive electrode to the negative electrode through the external circuit. To charge the cell the external circuit has to provide electrical energy. This energy is then stored as chemical energy in the cell (with some loss, e.g., due to coulombic efficiency lower than 1).
Both electrodes allow lithium ions to move in and out of their structures by a process called insertion (intercalation) or extraction (deintercalation), respectively. As the lithium ions "rock" back and forth between the two electrodes, these batteries are also known as "rocking-chair batteries" or "swing batteries" (a term given by some European industries). The following equations exemplify the chemistry (left to right: discharging, right to left: charging).

The negative electrode half-reaction for the graphite is

LiC6 ⇌ C6 + Li+ + e−

The positive electrode half-reaction in the lithium-doped cobalt oxide substrate is

CoO2 + Li+ + e− ⇌ LiCoO2

The full reaction being

LiC6 + CoO2 ⇌ C6 + LiCoO2

The overall reaction has its limits. Overdischarging supersaturates lithium cobalt oxide, leading to the production of lithium oxide, possibly by the following irreversible reaction:

Li+ + e− + LiCoO2 → Li2O + CoO

Overcharging up to 5.2 volts leads to the synthesis of cobalt(IV) oxide, as evidenced by x-ray diffraction:

LiCoO2 → Li+ + CoO2 + e−

The transition metal in the positive electrode, cobalt (Co), is reduced from Co4+ to Co3+ during discharge, and oxidized from Co3+ to Co4+ during charge.

The cell's energy is equal to the voltage times the charge. Each gram of lithium represents Faraday's constant divided by the molar mass of lithium (96,485 C/mol divided by 6.941 g/mol), or 13,901 coulombs. At 3 V, this gives 41.7 kJ per gram of lithium, or 11.6 kWh per kilogram of lithium. This is slightly more than the heat of combustion of gasoline; however, lithium-ion batteries as a whole are still significantly heavier per unit of energy due to the additional materials used in production. Note that the cell voltages involved in these reactions are larger than the potential at which an aqueous solution would electrolyze.
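As a quick sanity check on the arithmetic above, the sketch below recomputes lithium's specific charge and energy from Faraday's constant; the 3 V cell voltage is the same illustrative value used in the text.

```python
# Recompute the specific charge and energy of lithium quoted above.
FARADAY = 96485.0       # C/mol, charge of one mole of electrons
M_LITHIUM = 6.941       # g/mol, molar mass of lithium

coulombs_per_gram = FARADAY / M_LITHIUM          # ~13,901 C/g
cell_voltage = 3.0                               # V, illustrative value from the text

energy_per_gram = coulombs_per_gram * cell_voltage        # J/g
energy_per_kg_kwh = energy_per_gram * 1000 / 3.6e6        # kWh/kg

print(f"{coulombs_per_gram:.0f} C/g")      # = 13901 C/g
print(f"{energy_per_gram/1000:.1f} kJ/g")  # = 41.7 kJ/g
print(f"{energy_per_kg_kwh:.1f} kWh/kg")   # = 11.6 kWh/kg
```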
Discharging and charging

During discharge, lithium ions (Li+) carry the current within the battery cell from the negative to the positive electrode, through the non-aqueous electrolyte and separator diaphragm. During charging, an external electrical power source applies an over-voltage (a voltage greater than the cell's own voltage) to the cell, forcing electrons to flow from the positive to the negative electrode. The lithium ions also migrate (through the electrolyte) from the positive to the negative electrode, where they become embedded in the porous electrode material in a process known as intercalation. Energy losses arising from electrical contact resistance at interfaces between electrode layers and at contacts with current collectors can be as high as 20% of the entire energy flow of batteries under typical operating conditions.

The charging procedures for single Li-ion cells and complete Li-ion batteries are slightly different.

A single Li-ion cell is charged in two stages:
Constant current (CC)
Constant voltage (CV)

A Li-ion battery (a set of Li-ion cells in series) is charged in three stages:
Constant current
Balance (only required when cell groups become unbalanced during use)
Constant voltage

During the constant current phase, the charger applies a constant current to the battery at a steadily increasing voltage, until the top-of-charge voltage limit per cell is reached. During the balance phase, the charger/battery reduces the charging current (or cycles the charging on and off to reduce the average current) while the state of charge of individual cells is brought to the same level by a balancing circuit, until the battery is balanced. Balancing typically occurs whenever one or more cells reach their top-of-charge voltage before the other(s), as it is generally inaccurate to do so at other stages of the charge cycle. It is most commonly done by passive balancing, which dissipates excess charge as heat via resistors connected momentarily across the cells to be balanced. Active balancing is less common and more expensive, but more efficient, returning the excess energy to other cells (or the entire pack) via a DC-DC converter or other circuitry. Balancing most often occurs during the constant voltage stage of charging, switching between charge modes until complete. The pack is usually fully charged only when balancing is complete, as even a single cell group lower in charge than the rest will limit the entire battery's usable capacity to that of its own. Balancing can last hours or even days, depending on the magnitude of the imbalance in the battery.

During the constant voltage phase, the charger applies a voltage equal to the maximum cell voltage times the number of cells in series to the battery, while the current gradually declines towards 0, until it falls below a set threshold of about 3% of the initial constant charge current. A periodic topping charge is applied about once per 500 hours; it is recommended to initiate top charging when the cell voltage drops below a specified threshold. Failure to follow current and voltage limitations can result in an explosion.

Charging temperature limits for Li-ion are stricter than the operating limits. Lithium-ion chemistry performs well at elevated temperatures, but prolonged exposure to heat reduces battery life. Li-ion batteries offer good charging performance at cooler temperatures and may even allow "fast-charging" within a temperature range of about 5 to 45 °C. Charging should be performed within this temperature range. At temperatures from 0 to 5 °C, charging is possible, but the charge current should be reduced. During a low-temperature (under 0 °C) charge, the slight temperature rise above ambient due to the internal cell resistance is beneficial. High temperatures during charging may lead to battery degradation, and charging at temperatures above 45 °C will degrade battery performance, whereas at lower temperatures the internal resistance of the battery may increase, resulting in slower charging and thus longer charging times.

Batteries gradually self-discharge even if not connected and delivering current. Li-ion rechargeable batteries have a self-discharge rate typically stated by manufacturers to be 1.5–2% per month. The rate increases with temperature and state of charge. A 2004 study found that for most cycling conditions self-discharge was primarily time-dependent; however, after several months of stand on open circuit or float charge, state-of-charge-dependent losses became significant. The self-discharge rate did not increase monotonically with state of charge, but dropped somewhat at intermediate states of charge. Self-discharge rates may increase as batteries age. In 1999, self-discharge per month was measured at 8% at 21 °C, 15% at 40 °C, and 31% at 60 °C. By 2007, the monthly self-discharge rate was estimated at 2% to 3%, and at 2–3% by 2016. By comparison, the self-discharge rate for NiMH batteries dropped, as of 2017, from up to 30% per month for previously common cells to about 0.08–0.33% per month for low self-discharge NiMH batteries, and is about 10% per month in NiCd batteries.
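To illustrate how self-discharge compounds over time, here is a minimal sketch that applies a fixed monthly rate; the rates are figures quoted above, while the compound (exponential) form is an assumption of the sketch rather than a measured law.

```python
# Compound self-discharge: remaining charge after t months at a fixed
# monthly rate r is (1 - r)^t. The exponential form is this sketch's
# assumption; the rates come from figures quoted in the text.
def remaining_fraction(monthly_rate, months):
    return (1.0 - monthly_rate) ** months

for rate, label in [(0.02, "2%/month (Li-ion, typical)"),
                    (0.08, "8%/month (1999 measurement at 21 C)"),
                    (0.10, "10%/month (NiCd, for comparison)")]:
    print(f"{label}: {remaining_fraction(rate, 12):.1%} left after a year")
```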
Cathode

There are three classes of commercial cathode materials in lithium-ion batteries: (1) layered oxides, (2) spinel oxides and (3) oxoanion complexes. All of them were discovered by John Goodenough and his collaborators.

Layered oxides

LiCoO2 was used in the first commercial lithium-ion battery, made by Sony in 1991. The layered oxides have a pseudo-tetrahedral structure comprising layers made of MO6 octahedra separated by interlayer spaces that allow for two-dimensional lithium-ion diffusion. The band structure of LixCoO2 allows for true electronic (rather than polaronic) conductivity. However, due to an overlap between the Co4+ t2g d-band and the O2− 2p-band, x must be >0.5, otherwise O2 evolution occurs. This limits the charge capacity of this material to ~140 mAh/g (see the worked arithmetic at the end of this subsection).

Several other first-row (3d) transition metals also form layered LiMO2 salts. Some can be directly prepared from lithium oxide and M2O3 (e.g. for M = Ti, V, Cr, Co, Ni), while others (M = Mn or Fe) can be prepared by ion exchange from NaMO2. LiVO2, LiMnO2 and LiFeO2 suffer from structural instabilities (including mixing between M and Li sites) due to a low energy difference between octahedral and tetrahedral environments for the metal ion M. For this reason, they are not used in lithium-ion batteries. However, Na+ and Fe3+ have sufficiently different sizes that NaFeO2 can be used in sodium-ion batteries. Similarly, LiCrO2 shows reversible lithium (de)intercalation around 3.2 V with 170–270 mAh/g. However, its cycle life is short, because of disproportionation of Cr4+ followed by translocation of Cr6+ into tetrahedral sites. On the other hand, NaCrO2 shows much better cycling stability. LiTiO2 shows Li+ (de)intercalation at a voltage of ~1.5 V, which is too low for a cathode material. These problems leave LiCoO2 and LiNiO2 as the only practical layered oxide materials for lithium-ion battery cathodes.

The cobalt-based cathodes show high theoretical specific (per-mass) charge capacity, high volumetric capacity, low self-discharge, high discharge voltage, and good cycling performance. Unfortunately, they suffer from a high cost of the material. For this reason, the current trend among lithium-ion battery manufacturers is to switch to cathodes with higher Ni content and lower Co content. In addition to a lower (than cobalt) cost, nickel-oxide-based materials benefit from the two-electron redox chemistry of Ni: in layered oxides comprising nickel (such as nickel-cobalt-manganese NCM and nickel-cobalt-aluminium oxides NCA), Ni cycles between the oxidation states +2 and +4 (in one step between +3.5 and +4.3 V), cobalt between +2 and +3, while Mn (usually >20%) and Al (typically, only 5% is needed) remain in +4 and +3, respectively. Thus increasing the Ni content increases the cyclable charge. For example, NCM111 shows 160 mAh/g, while NCM811 and NCA deliver a higher capacity of ~200 mAh/g.

It is worth mentioning the so-called "lithium-rich" cathodes, which can be produced from traditional NCM (LiMO2, where M = Ni, Co, Mn) layered cathode materials by cycling them to voltages/charges corresponding to Li:M < 0.5. Under such conditions a new semi-reversible redox transition at a higher voltage, with ca. 0.4–0.8 electrons per metal site of charge, appears. This transition involves non-bonding electron orbitals centered mostly on O atoms. Despite significant initial interest, this phenomenon did not result in marketable products because of the fast structural degradation (O2 evolution and lattice rearrangements) of such "lithium-rich" phases.
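The ~140 mAh/g figure for LiCoO2 follows directly from Faraday's law once the x > 0.5 constraint is applied; the sketch below shows the arithmetic, using standard molar masses.

```python
# Theoretical gravimetric capacity from Faraday's law:
# capacity (mAh/g) = x * F / (3.6 * M), where x is the number of
# electrons (Li+ ions) cycled per formula unit and M the molar mass.
FARADAY = 96485.0                     # C/mol

def capacity_mah_per_g(x, molar_mass):
    return x * FARADAY / (3.6 * molar_mass)

M_LICOO2 = 6.94 + 58.93 + 2 * 16.00   # g/mol, approx. 97.87

full = capacity_mah_per_g(1.0, M_LICOO2)   # if all Li could be cycled
usable = capacity_mah_per_g(0.5, M_LICOO2) # x limited to 0.5 by O2 evolution

print(f"theoretical: {full:.0f} mAh/g, usable: {usable:.0f} mAh/g")  # 274 / 137
```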
Cubic oxides (spinels)

LiMn2O4 adopts a cubic lattice, which allows for three-dimensional lithium-ion diffusion. Manganese cathodes are attractive because manganese is less expensive than cobalt or nickel. The operating voltage of a Li-LiMn2O4 battery is 4 V, and ca. one lithium per two Mn ions can be reversibly extracted from the tetrahedral sites, resulting in a practical capacity of <130 mAh/g. However, Mn3+ is not a stable oxidation state, as it tends to disproportionate into insoluble Mn4+ and soluble Mn2+. LiMn2O4 can also intercalate more than 0.5 Li per Mn at a lower voltage, around +3.0 V. However, this results in an irreversible phase transition due to Jahn-Teller distortion in Mn3+:t2g3eg1, as well as disproportionation and dissolution of Mn3+.

An important improvement on the Mn spinel is the related family of cubic structures of the LiMn1.5Ni0.5O4 type, where Mn exists as Mn4+ and Ni cycles reversibly between the oxidation states +2 and +4. These materials show a reversible Li-ion capacity of ca. 135 mAh/g around 4.7 V. Although such a high voltage is beneficial for increasing the specific energy of batteries, the adoption of such materials is currently hindered by the lack of suitable high-voltage electrolytes. In general, materials with a high nickel content were favored in 2023, because of the possibility of two-electron cycling of Ni between the oxidation states +2 and +4.

LiV2O4 (lithium vanadium oxide) operates at a lower (ca. +3.0 V) voltage than LiMn2O4, suffers from similar durability issues, is more expensive, and thus is not considered of practical interest.

Oxoanions/olivines

Around 1980, Manthiram discovered that oxoanions (molybdates and tungstates in that particular case) cause a substantial positive shift in the redox potential of the metal ion compared to oxides. In addition, these oxoanionic cathode materials offer better stability/safety than the corresponding oxides. However, they also suffer from poor electronic conductivity due to the long distance between redox-active metal centers, which slows down electron transport. This necessitates the use of small (less than 200 nm) cathode particles and the coating of each particle with a layer of electronically conducting carbon, which reduces the packing density of these materials. Although numerous combinations of oxoanions (sulfate, phosphate, silicate) with various metals (mostly Mn, Fe, Co, Ni) have been studied, LiFePO4 is the only one that has been commercialized. Although it was originally used primarily for stationary energy storage due to its lower energy density compared to layered oxides, it has begun to be widely used in electric vehicles since the 2020s.

Anode

Negative electrode materials are traditionally constructed from graphite and other carbon materials, although newer silicon-based materials are being increasingly used (see Nanowire battery). In 2016, 89% of lithium-ion batteries contained graphite (43% artificial and 46% natural), 7% contained amorphous carbon (either soft carbon or hard carbon), 2% contained lithium titanate (LTO) and 2% contained silicon- or tin-based materials. These materials are used because they are abundant and electrically conducting, and can intercalate lithium ions to store electrical charge with modest (~10%) volume expansion. Graphite is the dominant material because of its low intercalation voltage and excellent performance. Various alternative materials with higher capacities have been proposed, but they usually have higher voltages, which reduces energy density.
Low voltage is the key requirement for anodes; otherwise, the excess capacity is useless in terms of energy density. As graphite is limited to a maximum capacity of 372 mAh/g, much research has been dedicated to developing materials that exhibit higher theoretical capacities, and to overcoming the technical challenges that presently encumber their implementation. The extensive 2007 review article by Kasavajjula et al. summarizes early research on silicon-based anodes for lithium-ion secondary cells. In particular, Hong Li et al. showed in 2000 that the electrochemical insertion of lithium ions in silicon nanoparticles and silicon nanowires leads to the formation of an amorphous Li-Si alloy. The same year, Bo Gao and his doctoral advisor, Professor Otto Zhou, described the cycling of electrochemical cells with anodes comprising silicon nanowires, with a reversible capacity ranging from approximately 900 to 1500 mAh/g.

Diamond-like carbon coatings can increase retention capacity by 40% and cycle life by 400% for lithium-based batteries. To improve the stability of the lithium anode, several approaches to installing a protective layer have been suggested. Silicon is beginning to be looked at as an anode material because it can accommodate significantly more lithium ions, storing up to 10 times the electric charge; however, this alloying between lithium and silicon results in significant volume expansion (ca. 400%), which causes catastrophic failure of the cell. Silicon has been used as an anode material, but the insertion and extraction of Li+ can create cracks in the material. These cracks expose the Si surface to the electrolyte, causing decomposition and the formation of a solid electrolyte interphase (SEI) on the newly exposed Si surface. This SEI will continue to grow thicker, deplete the available Li+, and degrade the capacity and cycling stability of the anode.

In addition to carbon- and silicon-based anode materials for lithium-ion batteries, high-entropy metal oxide materials are being developed. These conversion (rather than intercalation) materials comprise an alloy (or subnanometer mixed phases) of several metal oxides performing different functions. For example, Zn and Co can act as electroactive charge-storing species, Cu can provide an electronically conducting support phase, and MgO can prevent pulverization.
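The 372 mAh/g graphite ceiling and silicon's roughly tenfold advantage both come from the same Faraday's-law arithmetic used for cathodes, applied to the fully lithiated phases LiC6 and (for silicon at room temperature) Li3.75Si; a short check with standard molar masses:

```python
# Theoretical anode capacities per gram of host material (Faraday's law).
FARADAY = 96485.0   # C/mol

def capacity_mah_per_g(electrons, host_molar_mass):
    return electrons * FARADAY / (3.6 * host_molar_mass)

graphite = capacity_mah_per_g(1.0, 6 * 12.011)     # LiC6: 1 Li per 6 carbons
silicon = capacity_mah_per_g(3.75, 28.085)         # Li3.75Si at room temperature

print(f"graphite: {graphite:.0f} mAh/g")           # = 372 mAh/g
print(f"silicon:  {silicon:.0f} mAh/g")            # = 3579 mAh/g, ~10x graphite
```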
Electrolyte

Liquid electrolytes in lithium-ion batteries consist of lithium salts, such as LiPF6, LiBF4 or LiClO4, in an organic solvent, such as ethylene carbonate, dimethyl carbonate, and diethyl carbonate. A liquid electrolyte acts as a conductive pathway for the movement of cations passing from the negative to the positive electrodes during discharge. Typical conductivities of liquid electrolytes at room temperature (20 °C) are in the range of 10 mS/cm, increasing by approximately 30–40% at 40 °C and decreasing slightly at 0 °C. The combination of linear and cyclic carbonates (e.g., ethylene carbonate (EC) and dimethyl carbonate (DMC)) offers high conductivity and solid electrolyte interphase (SEI)-forming ability. Organic solvents easily decompose on the negative electrodes during charge. When appropriate organic solvents are used as the electrolyte, the solvent decomposes on initial charging and forms a solid layer called the solid electrolyte interphase, which is electrically insulating yet provides significant ionic conductivity. The interphase prevents further decomposition of the electrolyte after the second charge. For example, ethylene carbonate is decomposed at a relatively high voltage, 0.7 V vs. lithium, and forms a dense and stable interface. Composite electrolytes based on POE (poly(oxyethylene)) also provide a relatively stable interface. POE can be either solid (high molecular weight), as applied in dry Li-polymer cells, or liquid (low molecular weight), as applied in regular Li-ion cells. Room-temperature ionic liquids (RTILs) are another approach to limiting the flammability and volatility of organic electrolytes.

Recent advances in battery technology involve using a solid as the electrolyte material. The most promising of these are ceramics. Solid ceramic electrolytes are mostly lithium metal oxides, which allow lithium-ion transport through the solid more readily due to the intrinsic lithium. The main benefit of solid electrolytes is that there is no risk of leaks, which is a serious safety issue for batteries with liquid electrolytes. Solid ceramic electrolytes can be further broken down into two main categories: ceramic and glassy. Ceramic solid electrolytes are highly ordered compounds with crystal structures that usually have ion transport channels. Common ceramic electrolytes are lithium super ion conductors (LISICON) and perovskites. Glassy solid electrolytes are amorphous atomic structures made up of similar elements to ceramic solid electrolytes, but have higher conductivities overall due to higher conductivity at grain boundaries. Both glassy and ceramic electrolytes can be made more ionically conductive by substituting sulfur for oxygen. The larger radius of sulfur and its higher polarizability allow higher conductivity of lithium. As a result, the conductivities of solid electrolytes are nearing parity with their liquid counterparts, with most on the order of 0.1 mS/cm and the best at 10 mS/cm.

An efficient and economic way to tune the properties of an electrolyte is to add a third component in small concentrations, known as an additive. Added in small amounts, the additive leaves the bulk properties of the electrolyte system unaffected while the targeted property can be significantly improved. The numerous additives that have been tested can be divided into three distinct categories: (1) those used for SEI chemistry modifications; (2) those used for enhancing ion conduction properties; (3) those used for improving the safety of the cell (e.g. preventing overcharging).

Electrolyte alternatives have also played a significant role, for example the lithium polymer battery. Polymer electrolytes are promising for minimizing the dendrite formation of lithium. Polymers are supposed to prevent short circuits and maintain conductivity.

The ions in the electrolyte diffuse because there are small changes in the electrolyte concentration. Only linear (one-dimensional) diffusion is considered here. The change in concentration c, as a function of time t and distance x, is given by Fick's second law:

∂c/∂t = D ∂²c/∂x²

In this equation, D is the diffusion coefficient for the lithium ion in the electrolyte. The value for ε, the porosity of the electrolyte, is 0.724.
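A minimal numerical sketch of the one-dimensional diffusion equation above, using an explicit finite-difference scheme; the domain length, diffusion coefficient, and initial concentration step are illustrative assumptions, not measured electrolyte parameters.

```python
# Explicit finite-difference solution of Fick's second law,
# dc/dt = D * d2c/dx2, on a 1-D domain with zero-flux boundaries.
# All physical values below are illustrative assumptions.
D = 1e-10            # m^2/s, assumed Li+ diffusion coefficient
L = 100e-6           # m, assumed length scale (100 micrometres)
N = 50               # grid points
dx = L / (N - 1)
dt = 0.4 * dx * dx / D          # satisfies the stability limit dt <= dx^2/(2D)

# Initial condition: concentration step (e.g., depletion near one electrode).
c = [1.0 if i < N // 2 else 0.0 for i in range(N)]

for _ in range(2000):           # advance in time
    new_c = c[:]
    for i in range(1, N - 1):
        new_c[i] = c[i] + D * dt / (dx * dx) * (c[i+1] - 2*c[i] + c[i-1])
    new_c[0], new_c[-1] = new_c[1], new_c[-2]   # zero-flux boundaries
    c = new_c

print(f"after {2000*dt:.0f} s the step has relaxed; midpoint c = {c[N//2]:.3f}")
```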
Battery designs and formats

Lithium-ion batteries may have multiple levels of structure. Small batteries consist of a single battery cell. Larger batteries connect cells in parallel into a module and connect modules in series and parallel into a pack. Multiple packs may be connected in series to increase the voltage.

Electrode layers and electrolyte

On the macrostructural level (length scale 0.1–5 mm), almost all commercial lithium-ion batteries comprise foil current collectors (aluminium for the cathode and copper for the anode). Copper is selected for the anode because lithium does not alloy with it. Aluminium is used for the cathode because it passivates in LiPF6 electrolytes.

Cells

Li-ion cells are available in various form factors, which can generally be divided into four types:
Coin cells, with a rugged design and a metal (usually stainless steel) casing. Because of their poor specific energy (in Wh/kg) and small energy (Wh) per cell, their use is limited to wristwatches, portable calculators and research. Notably, coin-format cells are more commonly used for primary lithium-metal batteries.
Small cylindrical cells (solid body without terminals, such as those used in most e-bikes, most electric vehicle batteries and older laptop batteries); they typically come in standard sizes.
Large cylindrical cells (solid body with large threaded terminals).
Flat or pouch cells (soft, flat body, such as those used in cell phones and newer laptops); these are lithium-ion polymer batteries.
Rigid plastic cases with large threaded terminals (such as electric vehicle traction packs).

Cells with a cylindrical shape are made in a characteristic "swiss roll" manner (known as a "jelly roll" in the US), meaning a single long "sandwich" of positive electrode, separator, negative electrode, and separator is rolled into a single spool. The result is encased in a container. One advantage of cylindrical cells is faster production speed. One disadvantage can be a large radial temperature gradient at high discharge rates.

The absence of a case gives pouch cells the highest gravimetric energy density; however, many applications require containment to prevent expansion when their state of charge (SOC) level is high, and for general structural stability. Both rigid plastic and pouch-style cells are sometimes referred to as prismatic cells due to their rectangular shapes. Three basic battery types are used in 2020s-era electric vehicles: cylindrical cells (e.g., Tesla), prismatic pouch (e.g., from LG), and prismatic can cells (e.g., from LG, Samsung, Panasonic, and others).

Lithium-ion flow batteries have been demonstrated that suspend the cathode or anode material in an aqueous or organic solution. As of 2014, the smallest Li-ion cell was pin-shaped with a diameter of 3.5 mm and a weight of 0.6 g, made by Panasonic. A coin cell form factor is available for LiCoO2 cells, usually designated with a "LiR" prefix.

Batteries may be equipped with temperature sensors, heating/cooling systems, voltage regulator circuits, voltage taps, and charge-state monitors. These components address safety risks like overheating and short-circuiting.

Cell voltage

The average voltage of LCO (lithium cobalt oxide) chemistry is 3.6 V if made with a hard carbon anode and 3.7 V if made with a graphite anode. Comparatively, the latter has a flatter discharge voltage curve.

Uses

Lithium-ion batteries are used in a multitude of applications, from consumer electronics, toys and power tools to electric vehicles. More niche uses include backup power in telecommunications applications. Lithium-ion batteries are also frequently discussed as a potential option for grid energy storage, although as of 2020, they were not yet cost-competitive at scale.
Performance

Because lithium-ion batteries can have a variety of positive and negative electrode materials, the energy density and voltage vary accordingly. The open-circuit voltage is higher than in aqueous batteries (such as lead-acid, nickel-metal hydride and nickel-cadmium). Internal resistance increases with both cycling and age, although this depends strongly on the voltage and temperature at which the batteries are stored. Rising internal resistance causes the voltage at the terminals to drop under load, which reduces the maximum current draw. Eventually, increasing resistance will leave the battery in a state such that it can no longer support the normal discharge currents requested of it without unacceptable voltage drop or overheating.

Batteries with a lithium iron phosphate positive electrode and graphite negative electrode have a nominal open-circuit voltage of 3.2 V and a typical charging voltage of 3.6 V. Lithium nickel manganese cobalt (NMC) oxide positives with graphite negatives have a 3.7 V nominal voltage with a 4.2 V maximum while charging. The charging procedure is performed at constant voltage with current-limiting circuitry (i.e., charging with constant current until a voltage of 4.2 V is reached in the cell and continuing with a constant voltage applied until the current drops close to zero). Typically, the charge is terminated at 3% of the initial charge current. In the past, lithium-ion batteries could not be fast-charged and needed at least two hours to fully charge. Current-generation cells can be fully charged in 45 minutes or less. In 2015, researchers demonstrated a small 600 mAh capacity battery charged to 68 percent capacity in two minutes and a 3,000 mAh battery charged to 48 percent capacity in five minutes. The latter battery has an energy density of 620 W·h/L. The device employed heteroatoms bonded to graphite molecules in the anode.

Performance of manufactured batteries has improved over time. For example, from 1991 to 2005 the energy capacity per price of lithium-ion batteries improved more than ten-fold, from 0.3 W·h per dollar to over 3 W·h per dollar. In the period from 2011 to 2017, progress averaged 7.5% annually. Overall, between 1991 and 2018, prices for all types of lithium-ion cells (in dollars per kWh) fell approximately 97%. Over the same time period, energy density more than tripled. Efforts to increase energy density contributed significantly to cost reduction. Energy density can also be increased by improvements in the chemistry of the cell, for instance by full or partial replacement of graphite with silicon. Silicon anodes enhanced with graphene nanotubes, intended to mitigate the premature degradation of silicon, could enable battery energy densities of up to 350 Wh/kg and help make EV prices competitive with those of internal-combustion vehicles. Differently sized cells with similar chemistry can also have different energy densities. The 21700 cell has 50% more energy than the 18650 cell, and its bigger size reduces heat transfer to its surroundings.

Round-trip efficiency

A 2021 experimental evaluation of a "high-energy" type 3.0 Ah 18650 NMC cell measured its round-trip efficiency by comparing the energy going into the cell with the energy extracted from the cell, from 100% SoC (4.2 V) to 0% SoC (2.0 V cut-off). Round-trip efficiency is the percentage of energy that can be used relative to the energy that went into charging the battery.
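Round-trip efficiency is straightforward to compute from logged charge and discharge data; the sketch below integrates power over time for one cycle, with the (time, voltage, current) traces standing in as hypothetical measurements.

```python
# Round-trip efficiency = energy out during discharge / energy in during charge.
# The (t_hours, volts, amps) samples below are hypothetical measurements.

def energy_wh(samples):
    """Trapezoidal integration of power over (t_hours, volts, amps) samples."""
    total = 0.0
    for (t0, v0, i0), (t1, v1, i1) in zip(samples, samples[1:]):
        total += 0.5 * (v0 * i0 + v1 * i1) * (t1 - t0)
    return total

charge = [(0.0, 3.6, 1.5), (1.0, 4.0, 1.5), (2.0, 4.2, 0.5)]     # charging log
discharge = [(0.0, 3.8, 1.5), (1.0, 3.4, 1.5), (2.0, 2.9, 1.0)]  # discharging log

eta = energy_wh(discharge) / energy_wh(charge)
print(f"round-trip efficiency: {eta:.1%}")   # ~96% for these toy traces
```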
Characterization of a cell in a different experiment in 2017 reported a round-trip efficiency of 85.5% at 2C and 97.6% at 0.1C.

Lifespan

The lifespan of a lithium-ion battery is typically defined as the number of full charge-discharge cycles to reach a failure threshold in terms of capacity loss or impedance rise. Manufacturers' datasheets typically use the term "cycle life" to specify lifespan as the number of cycles to reach 80% of the rated battery capacity. Simply storing lithium-ion batteries in the charged state also reduces their capacity (the amount of cyclable Li+) and increases the cell resistance (primarily due to the continuous growth of the solid electrolyte interphase on the anode). Calendar life is used to represent the whole life cycle of a battery, involving both cycling and inactive storage.

Battery cycle life is affected by many different stress factors, including temperature, discharge current, charge current, and state-of-charge range (depth of discharge). Batteries are not fully charged and discharged in real applications such as smartphones, laptops and electric cars, and hence defining battery life via full discharge cycles can be misleading. To avoid this confusion, researchers sometimes use cumulative discharge, defined as the total amount of charge (Ah) delivered by the battery during its entire life, or equivalent full cycles, which represent the summation of partial cycles as fractions of a full charge-discharge cycle. Battery degradation during storage is affected by temperature and battery state of charge (SOC); a combination of full charge (100% SOC) and high temperature (usually >50 °C) can result in a sharp capacity drop and gas generation. Multiplying the battery's cumulative discharge by the rated nominal voltage gives the total energy delivered over the life of the battery. From this one can calculate the cost per kWh of the energy (including the cost of charging).
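Equivalent full cycles, mentioned above, simply accumulate partial cycles as fractions of a full one; a minimal sketch, assuming a hypothetical log of per-cycle discharged amp-hours:

```python
# Equivalent full cycles (EFC): cumulative discharged charge divided by
# the rated capacity, so partial cycles count as fractions of a full cycle.
RATED_CAPACITY_AH = 3.0                     # hypothetical 18650-class cell

# Hypothetical log of Ah discharged in each partial cycle.
discharged_ah = [1.2, 0.8, 2.9, 1.5, 0.6, 3.0, 2.2]

cumulative_ah = sum(discharged_ah)
efc = cumulative_ah / RATED_CAPACITY_AH

NOMINAL_V = 3.7                             # nominal cell voltage
energy_delivered_wh = cumulative_ah * NOMINAL_V   # lifetime energy so far

print(f"{efc:.2f} equivalent full cycles, {energy_delivered_wh:.0f} Wh delivered")
```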
Over their lifespan, batteries degrade gradually, leading to reduced cyclable charge (a.k.a. Ah capacity) and increased resistance (the latter translates into a lower operating cell voltage). Several degradation processes occur in lithium-ion batteries, some during cycling, some during storage, and some all the time. Degradation is strongly temperature-dependent: degradation at room temperature is minimal but increases for batteries stored or used in high-temperature (usually >35 °C) or low-temperature (usually <5 °C) environments. High charge levels also hasten capacity loss. Frequent over-charging (>90%) and over-discharging (<10%) may also hasten capacity loss; keeping the state of charge between about 60% and 80% can reduce capacity loss.

In a study, scientists provided 3D imaging and model analysis to reveal the main causes, mechanics, and potential mitigations of the problematic degradation of batteries over charge cycles. They found that "[p]article cracking increases and contact loss between particles and carbon-binder domain are observed to correlate with the cell degradation", and indicated that "the reaction heterogeneity within the thick cathode caused by the unbalanced electron conduction is the main cause of the battery degradation over cycling".

The most common degradation mechanisms in lithium-ion batteries include:

Reduction of the organic carbonate electrolyte at the anode, which results in the growth of the solid electrolyte interphase (SEI), where Li+ ions get irreversibly trapped, i.e. loss of lithium inventory. This shows up as increased ohmic impedance of the negative electrode and a drop in the cyclable Ah charge. At constant temperature, the SEI film thickness (and therefore the SEI resistance and the loss in cyclable Li+) increases as the square root of the time spent in the charged state. The number of cycles is not a useful metric in characterizing this degradation pathway. Under high temperatures, or in the presence of mechanical damage, the electrolyte reduction can proceed explosively.

Lithium metal plating, which also results in the loss of lithium inventory (cyclable Ah charge), as well as internal short-circuiting and ignition of a battery. Once Li plating commences during cycling, it results in larger slopes of capacity loss per cycle and resistance increase per cycle. This degradation mechanism becomes more prominent during fast charging and at low temperatures.

Loss of the (negative or positive) electroactive materials due to dissolution (e.g. of Mn species), cracking, exfoliation, detachment or even simple regular volume change during cycling. This shows up as both charge fade and power fade (increased resistance). Both positive and negative electrode materials are subject to fracturing due to the volumetric strain of repeated (de)lithiation cycles.

Structural degradation of cathode materials, such as cation mixing in nickel-rich materials. This manifests as "electrode saturation", loss of cyclable Ah charge and "voltage fade".

Other material degradations. The negative copper current collector is particularly prone to corrosion/dissolution at low cell voltages. The PVDF binder also degrades, causing detachment of the electroactive materials and loss of cyclable Ah charge.

A change from one main degradation mechanism to another appears as a knee (slope change) in the capacity vs. cycle number plot. Most studies of lithium-ion battery aging have been done at elevated (50–60 °C) temperatures in order to complete the experiments sooner. Under these storage conditions, fully charged nickel-cobalt-aluminium and lithium iron phosphate cells lose ca. 20% of their cyclable charge in 1–2 years. It is believed that the aforementioned anode aging is the most important degradation pathway in these cases. On the other hand, manganese-based cathodes show a (ca. 20–50%) faster degradation under these conditions, probably due to the additional mechanism of Mn ion dissolution. At 25 °C the degradation of lithium-ion batteries seems to follow the same pathway(s) as the degradation at 50 °C, but at half the speed. In other words, based on the limited extrapolated experimental data, lithium-ion batteries are expected to lose irreversibly ca. 20% of their cyclable charge in 3–5 years, or 1000–2000 cycles, at 25 °C. Lithium-ion batteries with titanate anodes do not suffer from SEI growth and last longer (>5000 cycles) than those with graphite anodes. However, in complete cells other degradation mechanisms (i.e. dissolution and place exchange, decomposition of the PVDF binder and particle detachment) show up after 1000–2000 days, and the use of a titanate anode does not improve full-cell durability in practice.
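The square-root-of-time SEI growth noted above is easy to express as a simple calendar-fade model; this is a sketch under the stated assumption that SEI-driven loss dominates, with the rate constant chosen purely for illustration so that the curve matches the rough 20% loss in about four years quoted for 25 °C.

```python
import math

# Calendar fade model: SEI-driven capacity loss grows as sqrt(time in the
# charged state). The rate constant K_FADE is an illustrative assumption,
# calibrated so that ~20% of capacity is lost after 4 years of charged
# storage, matching the rough 3-5 year / 20% figure quoted in the text.
K_FADE = 0.20 / math.sqrt(4 * 365)      # fractional loss per sqrt(day)

def remaining_capacity(days_stored_charged):
    """Fraction of initial capacity left after storage in the charged state."""
    return 1.0 - K_FADE * math.sqrt(days_stored_charged)

for years in (1, 2, 4):
    print(f"{years} y: {remaining_capacity(years * 365):.1%} capacity remaining")
```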
Recommendations

The IEEE standard 1188-1996 recommends replacing lithium-ion batteries in an electric vehicle when their charge capacity drops to 80% of the nominal value. In what follows, the 20% capacity loss is used as a comparison point between different studies. Note, nevertheless, that the linear model of degradation (a constant percentage of charge loss per cycle or per unit of calendar time) is not always applicable, and that a "knee point", observed as a change of slope and related to a change of the main degradation mechanism, is often observed.

Safety

The problem of lithium-ion battery safety was recognized even before these batteries were first commercially released in 1991. The two main causes of lithium-ion battery fires and explosions are related to processes on the negative electrode (anode). During a normal battery charge, lithium ions intercalate into graphite. However, if the charge is forced to go too fast (or at too low a temperature), lithium metal starts plating on the anode, and the resulting dendrites can penetrate the battery separator and internally short-circuit the cell, resulting in high electric current, heating and ignition. In the other mechanism, an explosive reaction between the charged anode material (LiC6) and the solvent (liquid organic carbonate) occurs even at open circuit, provided that the anode temperature exceeds a certain threshold above 70 °C.

Nowadays, all reputable manufacturers employ at least two safety devices in all their lithium-ion batteries of 18650 format or larger: a current interrupt device (CID) and a positive temperature coefficient (PTC) device. The CID comprises two metal disks that make electric contact with each other. When pressure inside the cell increases, the distance between the two disks increases too, and they lose electric contact, thus terminating the flow of electric current through the battery. The PTC device is made of an electrically conducting polymer. When the current going through the PTC device increases, the polymer gets hot and its electric resistance rises sharply, thus reducing the current through the battery.

Fire hazard

Lithium-ion batteries can be a safety hazard, since they contain a flammable electrolyte and may become pressurized if damaged. A battery cell charged too quickly could cause a short circuit, leading to overheating, explosions, and fires. A Li-ion battery fire can be started by (1) thermal abuse, e.g. poor cooling or external fire; (2) electrical abuse, e.g. overcharge or external short circuit; (3) mechanical abuse, e.g. penetration or crash; or (4) internal short circuit, e.g. due to manufacturing flaws or aging. Because of these risks, testing standards are more stringent than those for acid-electrolyte batteries, requiring both a broader range of test conditions and additional battery-specific tests, and there are shipping limitations imposed by safety regulators. There have been battery-related recalls by some companies, including the 2016 Samsung Galaxy Note 7 recall for battery fires.

Lithium-ion batteries have a flammable liquid electrolyte. A faulty battery can cause a serious fire. Faulty chargers can affect the safety of the battery because they can destroy the battery's protection circuit. While charging at temperatures below 0 °C, the negative electrode of the cells gets plated with pure lithium, which can compromise the safety of the whole pack. Short-circuiting a battery will cause the cell to overheat and possibly catch fire. Smoke from thermal runaway in a Li-ion battery is both flammable and toxic. The fire energy content (electrical + chemical) of cobalt-oxide cells is about 100 to 150 kJ/(A·h), most of it chemical.
Around 2010, large lithium-ion batteries were introduced in place of other chemistries to power systems on some aircraft; there have been at least four serious lithium-ion battery fires, or incidents of smoke, on the Boeing 787 passenger aircraft, introduced in 2011, which did not cause crashes but had the potential to do so. UPS Airlines Flight 6 crashed in Dubai after its payload of batteries spontaneously ignited. To reduce fire hazards, research projects are intended to develop non-flammable electrolytes.

Damaging and overloading

If a lithium-ion battery is damaged, crushed, or subjected to a higher electrical load without having overcharge protection, problems may arise. An external short circuit can trigger a battery explosion. Such incidents can occur when lithium-ion batteries are not disposed of through the appropriate channels but are thrown away with other waste. The way they are treated by recycling companies can damage them and cause fires, which in turn can lead to large-scale conflagrations. Twelve such fires were recorded in Swiss recycling facilities in 2023.

If overheated or overcharged, Li-ion batteries may suffer thermal runaway and cell rupture. During thermal runaway, internal degradation and oxidization processes can keep cell temperatures above 500 °C, with the possibility of igniting secondary combustibles, as well as leading to leakage, explosion or fire in extreme cases. To reduce these risks, many lithium-ion cells (and battery packs) contain fail-safe circuitry that disconnects the battery when its voltage is outside the safe range of 3–4.2 V per cell, or when overcharged or discharged. Lithium battery packs, whether constructed by a vendor or the end user, without effective battery management circuits are susceptible to these issues. Poorly designed or implemented battery management circuits also may cause problems; it is difficult to be certain that any particular battery management circuitry is properly implemented.

Voltage limits

Lithium-ion cells are susceptible to stress from voltages outside the safe range, between 2.5 and 3.65/4.1/4.2 or 4.35 V (depending on the components of the cell). Exceeding this voltage range results in premature aging and in safety risks due to the reactive components in the cells. When stored for long periods, the small current draw of the protection circuitry may drain the battery below its shutoff voltage; normal chargers may then be useless, since the battery management system (BMS) may retain a record of this battery (or charger) "failure". Many types of lithium-ion cells cannot be charged safely below 0 °C, as this can result in plating of lithium on the anode of the cell, which may cause complications such as internal short-circuit paths.
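The protective behavior described above, disconnecting the cell when voltage or temperature leaves its safe window, reduces in software terms to a simple guard that a battery management system evaluates continuously; a minimal sketch, with the 2.5–4.2 V and 0–45 °C limits taken from the ranges discussed in this article and the structure itself assumed for illustration:

```python
# Minimal BMS-style safety guard: permit charging only inside the safe
# voltage and temperature windows described above. The thresholds follow
# the article's quoted ranges; the code structure is an illustrative sketch.
from dataclasses import dataclass

@dataclass
class CellState:
    voltage: float      # V, per cell
    temperature: float  # degrees C

V_MIN, V_MAX = 2.5, 4.2                  # safe voltage window (chemistry-dependent)
T_CHARGE_MIN, T_CHARGE_MAX = 0.0, 45.0   # charging temperature window

def may_charge(cell: CellState) -> bool:
    """True only if charging is safe; below 0 C risks lithium plating."""
    return (V_MIN <= cell.voltage < V_MAX
            and T_CHARGE_MIN <= cell.temperature <= T_CHARGE_MAX)

print(may_charge(CellState(3.7, 25.0)))   # True: nominal conditions
print(may_charge(CellState(3.7, -5.0)))   # False: plating risk below 0 C
print(may_charge(CellState(4.25, 25.0)))  # False: over-voltage
```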
Other safety features are required in each cell:
Shut-down separator (for overheating)
Tear-away tab (for internal pressure relief)
Vent (pressure relief in case of severe outgassing)
Thermal interrupt (overcurrent/overcharging/environmental exposure)

These features are required because the negative electrode produces heat during use, while the positive electrode may produce oxygen. However, these additional devices occupy space inside the cells, add points of failure, and may irreversibly disable the cell when activated. Further, these features increase costs compared to nickel-metal hydride batteries, which require only a hydrogen/oxygen recombination device and a back-up pressure valve. Contaminants inside the cells can defeat these safety devices. Also, these features cannot be applied to all kinds of cells; e.g., prismatic high-current cells cannot be equipped with a vent or thermal interrupt. High-current cells must not produce excessive heat or oxygen, lest there be a failure, possibly violent. Instead, they must be equipped with internal thermal fuses which act before the anode and cathode reach their thermal limits.

Replacing the lithium cobalt oxide positive electrode material in lithium-ion batteries with a lithium metal phosphate such as lithium iron phosphate (LFP) improves cycle counts, shelf life and safety, but lowers capacity. As of 2006, these safer lithium-ion batteries were mainly used in electric cars and other large-capacity battery applications, where safety is critical. In 2016, an LFP-based energy storage system was chosen to be installed in Paiyun Lodge on Mt. Jade (Yushan), the highest lodge in Taiwan. As of June 2024, the system was still operating safely.

Recalls

In 2006, approximately 10 million Sony batteries used in laptops were recalled, including those in laptops from Dell, Sony, Apple, Lenovo, Panasonic, Toshiba, Hitachi, Fujitsu and Sharp. The batteries were found to be susceptible to internal contamination by metal particles during manufacture. Under some circumstances, these particles could pierce the separator, causing a dangerous short circuit.

IATA estimates that over a billion lithium-metal and lithium-ion cells are flown each year. Some kinds of lithium batteries may be prohibited aboard aircraft because of the fire hazard. Some postal administrations restrict air shipping (including EMS) of lithium and lithium-ion batteries, either separately or installed in equipment.

Non-flammable electrolyte

As of 2023, most commercial Li-ion batteries employed alkyl carbonate solvents to ensure the formation of the solid electrolyte interphase on the negative electrode. Since such solvents are readily flammable, there has been active research to replace them with non-flammable solvents or to add fire suppressants. Another source of hazard is the hexafluorophosphate anion, which is needed to passivate the positive current collector made of aluminium. Hexafluorophosphate reacts with water and releases volatile and toxic hydrogen fluoride. Efforts to replace hexafluorophosphate have been less successful.

Supply chain

Li-ion battery production is heavily concentrated, with 60% coming from China in 2024. In the 1990s, the United States was the world's largest miner of lithium minerals, contributing one-third of total production. By 2010, Chile had replaced the USA as the leading miner, thanks to the development of lithium brines in the Salar de Atacama. By 2024, Australia and China had joined Chile as the top three miners.

Environmental impact

Extraction of lithium, nickel, and cobalt, the manufacture of solvents, and mining byproducts present significant environmental and health hazards. Lithium extraction can be fatal to aquatic life due to water pollution. It is known to cause surface water contamination, drinking water contamination, respiratory problems, ecosystem degradation and landscape damage. It also leads to unsustainable water consumption in arid regions (1.9 million liters per ton of lithium). Massive byproduct generation from lithium extraction also presents unsolved problems, such as large amounts of magnesium and lime waste. Lithium mining takes place in North and South America, Asia, South Africa, Australia, and China.
Cobalt for Li-ion batteries is largely mined in the Congo (see also Mining industry of the Democratic Republic of the Congo). Open-pit cobalt mining has led to deforestation and habitat destruction in the Democratic Republic of the Congo. Open-pit nickel mining has led to environmental degradation and pollution in developing countries such as the Philippines and Indonesia. In 2024, nickel mining and processing was one of the main causes of deforestation in Indonesia. Manufacturing a kilogram of Li-ion battery takes about 67 megajoules (MJ) of energy. The global warming potential of lithium-ion battery manufacturing strongly depends on the energy source used in mining and manufacturing operations and is difficult to estimate, but one 2019 study estimated 73 kg CO2e/kWh. Effective recycling can reduce the carbon footprint of production significantly.

Solid waste and recycling

Li-ion battery elements including iron, copper, nickel and cobalt are considered safe for incinerators and landfills. These metals can be recycled, usually by burning away the other materials, but mining generally remains cheaper than recycling; recycling may cost $3/kg, and in 2019 less than 5% of lithium-ion batteries were being recycled. Since 2018, the recycling yield has increased significantly, and recovering lithium, manganese, aluminium, the organic solvents of the electrolyte, and graphite is possible at industrial scales. The most expensive metal involved in the construction of the cell is cobalt. Lithium is less expensive than the other metals used and is rarely recycled, but recycling could prevent a future shortage.

Accumulation of battery waste presents technical challenges and health hazards. Since the environmental impact of electric cars is heavily affected by the production of lithium-ion batteries, the development of efficient ways to repurpose waste is crucial. Recycling is a multi-step process, starting with the storage of batteries before disposal, followed by manual testing, disassembly, and finally the chemical separation of battery components. Re-use of the battery is preferred over complete recycling, as there is less embodied energy in the process. As these batteries are far more reactive than classical vehicle waste like tire rubber, there are significant risks to stockpiling used batteries.

Pyrometallurgical recovery

The pyrometallurgical method uses a high-temperature furnace to reduce the components of the metal oxides in the battery to an alloy of Co, Cu, Fe, and Ni. This is the most common and commercially established method of recycling, and batteries can be combined with other similar batteries to increase smelting efficiency and improve thermodynamics. The metal current collectors aid the smelting process, allowing whole cells or modules to be melted at once. The product of this method is a collection of metallic alloy, slag, and gas. At high temperatures, the polymers used to hold the battery cells together burn off and the metal alloy can be separated through a hydrometallurgical process into its separate components. The slag can be further refined or used in the cement industry. The process is relatively risk-free, and the exothermic reaction from polymer combustion reduces the required input energy. However, in the process, the plastics, electrolytes, and lithium salts are lost.

Hydrometallurgical metals reclamation

This method involves the use of aqueous solutions to remove the desired metals from the cathode. The most common reagent is sulfuric acid.
Factors that affect the leaching rate include the concentration of the acid, time, temperature, the solid-to-liquid ratio, and the reducing agent. It has been shown experimentally that H2O2 acts as a reducing agent that speeds up the rate of leaching through the reaction:
2 LiCoO2(s) + 3 H2SO4(aq) + H2O2(aq) → 2 CoSO4(aq) + Li2SO4(aq) + 4 H2O(l) + O2(g)
Once leached, the metals can be extracted through precipitation reactions controlled by changing the pH level of the solution. Cobalt, the most expensive metal, can then be recovered in the form of sulfate, oxalate, hydroxide, or carbonate. More recent recycling methods experiment with the direct reproduction of the cathode from the leached metals. In these procedures, concentrations of the various leached metals are premeasured to match the target cathode, and then the cathodes are directly synthesized. The main issues with this method, however, are the large volume of solvent required and the high cost of neutralization. Although the battery is easy to shred, mixing the cathode and anode at the beginning complicates the process, so they also need to be separated. Unfortunately, the current design of batteries makes the process extremely complex, and it is difficult to separate the metals in a closed-loop battery system. Shredding and dissolving may occur at different locations.
Direct recycling
Direct recycling is the removal of the cathode or anode material from the electrode, which is then reconditioned and reused in a new battery. Mixed metal oxides can be added to the new electrode with very little change to the crystal morphology. The process generally involves the addition of new lithium to replenish the lithium lost from the cathode through degradation from cycling. Cathode strips are obtained from the dismantled batteries, soaked in NMP, and subjected to sonication to remove excess deposits. The material is then treated hydrothermally with a solution containing LiOH/Li2SO4 before annealing. This method is extremely cost-effective for non-cobalt-based batteries, as the raw materials do not make up the bulk of the cost. Direct recycling avoids the time-consuming and expensive purification steps, which is advantageous for low-cost cathodes such as LiMn2O4 and LiFePO4. For these cheaper cathodes, most of the cost, embedded energy, and carbon footprint is associated with the manufacturing rather than the raw material. Experiments have shown that direct recycling can reproduce properties similar to those of pristine graphite.
The drawback of the method lies in the condition of the retired battery. In the case where the battery is relatively healthy, direct recycling can cheaply restore its properties. However, for batteries where the state of charge is low, direct recycling may not be worth the investment. The process must also be tailored to the specific cathode composition, and therefore the process must be configured to one type of battery at a time. Lastly, in a time of rapidly developing battery technology, the design of a battery today may no longer be desirable a decade from now, rendering direct recycling ineffective.
Physical materials separation
Physical materials separation recovers materials by mechanical crushing and by exploiting the physical properties of the different components, such as particle size, density, ferromagnetism and hydrophobicity. Copper, aluminum and steel casings can be recovered by sorting. The remaining material, called "black mass", which is composed of nickel, cobalt, lithium and manganese, requires a secondary treatment to recover.
Biological metals reclamation
Biological metals reclamation, or bioleaching, uses microorganisms to digest metal oxides selectively. Recyclers can then reduce these oxides to produce metal nanoparticles. Although bioleaching has been used successfully in the mining industry, this process is still nascent in the recycling industry, and plenty of opportunities exist for further investigation.
Electrolyte recycling
Electrolyte recycling consists of two phases. The collection phase extracts the electrolyte from the spent Li-ion battery. This can be achieved through mechanical processes, distillation, freezing, solvent extraction, and supercritical fluid extraction. Due to the volatility, flammability, and sensitivity of the electrolyte, its collection poses a greater difficulty than the collection of other components of a Li-ion battery. The next phase consists of separation or regeneration of the electrolyte. Separation consists of isolating the individual components of the electrolyte; this approach is often used for the direct recovery of the Li salts from the organic solvents. In contrast, regeneration of the electrolyte aims to preserve the electrolyte composition by removing impurities, which can be achieved through filtration methods.
Recycling the electrolyte, which constitutes 10–15 wt.% of a Li-ion battery, provides both economic and environmental benefits. These benefits include the recovery of valuable Li-based salts and the prevention of hazardous compounds, such as volatile organic compounds (VOCs) and carcinogens, from being released into the environment. Compared to electrode recycling, less focus is placed on recycling the electrolyte of Li-ion batteries, which can be attributed to lower economic benefits and greater process challenges. Such challenges include the difficulty of recycling different electrolyte compositions, removing side products accumulated from electrolyte decomposition during the battery's service life, and removing electrolyte adsorbed onto the electrodes. Due to these challenges, current pyrometallurgical methods of Li-ion battery recycling forgo electrolyte recovery, releasing hazardous gases upon heating. However, due to high energy consumption and environmental impact, future recycling methods are being directed away from this approach.
Human rights impact
Extraction of raw materials for lithium-ion batteries may present dangers to local people, especially land-based indigenous populations. Cobalt sourced from the Democratic Republic of the Congo is often mined by workers using hand tools with few safety precautions, resulting in frequent injuries and deaths. Pollution from these mines has exposed people to toxic chemicals that health officials believe cause birth defects and breathing difficulties. Human rights activists have alleged, and investigative journalism has reported confirmation, that child labor is used in these mines. A study of relationships between lithium extraction companies and indigenous peoples in Argentina indicated that the state may not have protected indigenous peoples' right to free, prior and informed consent, and that extraction companies generally controlled community access to information and set the terms for discussion of the projects and benefit sharing.
Development of the Thacker Pass lithium mine in Nevada, USA, has met with protests and lawsuits from several indigenous tribes who have said they were not provided free, prior and informed consent and that the project threatens cultural and sacred sites. Links between resource extraction and missing and murdered indigenous women have also prompted local communities to express concerns that the project will create risks to indigenous women. Protestors have been occupying the site of the proposed mine since January 2021.
Research
Researchers are actively working to improve the power density, safety, cycle durability (battery life), recharge time, cost, flexibility, and other characteristics of these batteries, as well as research methods and uses. Solid-state batteries are being researched as a potential breakthrough past current technological barriers; they are widely expected to be the most promising next-generation battery, and various companies are working to popularize them. Research areas for lithium-ion batteries include extending lifetime, increasing energy density, improving safety, reducing cost, and increasing charging speed, among others. Research has been under way in the area of non-flammable electrolytes as a pathway to increased safety, based on the flammability and volatility of the organic solvents used in the typical electrolyte. Strategies include aqueous lithium-ion batteries, ceramic solid electrolytes, polymer electrolytes, ionic liquids, and heavily fluorinated systems.
Rechargeable battery
A rechargeable battery, storage battery, or secondary cell (formally a type of energy accumulator) is a type of electrical battery which can be charged, discharged into a load, and recharged many times, as opposed to a disposable or primary battery, which is supplied fully charged and discarded after use. It is composed of one or more electrochemical cells. The term "accumulator" is used because the device accumulates and stores energy through a reversible electrochemical reaction. Rechargeable batteries are produced in many different shapes and sizes, ranging from button cells to megawatt systems connected to stabilize an electrical distribution network. Several different combinations of electrode materials and electrolytes are used, including lead–acid, zinc–air, nickel–cadmium (NiCd), nickel–metal hydride (NiMH), lithium-ion (Li-ion), lithium iron phosphate (LiFePO4), and lithium-ion polymer (Li-ion polymer). Rechargeable batteries typically initially cost more than disposable batteries but have a much lower total cost of ownership and environmental impact, as they can be recharged inexpensively many times before they need replacing. Some rechargeable battery types are available in the same sizes and voltages as disposable types, and can be used interchangeably with them. Billions of dollars are being invested around the world in research to improve batteries, as industry focuses on building better batteries.
Applications
Devices which use rechargeable batteries include automobile starters, portable consumer devices, light vehicles (such as motorized wheelchairs, golf carts, electric bicycles, and electric forklifts), road vehicles (cars, vans, trucks, motorbikes), trains, small airplanes, tools, uninterruptible power supplies, and battery storage power stations. Emerging applications in hybrid internal combustion–battery and electric vehicles drive the technology to reduce cost, weight, and size, and increase lifetime. Older rechargeable batteries self-discharge relatively rapidly and require charging before first use; some newer low self-discharge NiMH batteries hold their charge for many months, and are typically sold factory-charged to about 70% of their rated capacity. Battery storage power stations use rechargeable batteries for load-leveling (storing electric energy at times of low demand for use during peak periods) and for renewable energy uses (such as storing power generated from photovoltaic arrays during the day to be used at night). Load-leveling reduces the maximum power which a plant must be able to generate, reducing capital cost and the need for peaking power plants. According to a report from Research and Markets, analysts forecast the global rechargeable battery market to grow at a CAGR of 8.32% during the period 2018–2022. Small rechargeable batteries can power portable electronic devices, power tools, appliances, and so on. Heavy-duty batteries power electric vehicles, ranging from scooters to locomotives and ships. They are used in distributed electricity generation and in stand-alone power systems.
Charging and discharging
During charging, the positive active material is oxidized, releasing electrons, and the negative material is reduced, absorbing electrons. These electrons constitute the current flow in the external circuit. The electrolyte may serve as a simple buffer for internal ion flow between the electrodes, as in lithium-ion and nickel–cadmium cells, or it may be an active participant in the electrochemical reaction, as in lead–acid cells.
The energy used to charge rechargeable batteries usually comes from a battery charger using AC mains electricity, although some are equipped to use a vehicle's 12-volt DC power outlet. The voltage of the source must be higher than that of the battery to force current to flow into it, but not too much higher or the battery may be damaged. Chargers take from a few minutes to several hours to charge a battery. Slow "dumb" chargers without voltage- or temperature-sensing capabilities charge at a low rate, typically taking 14 hours or more to reach a full charge. Rapid chargers can typically charge cells in two to five hours, depending on the model, with the fastest taking as little as fifteen minutes. Fast chargers must have multiple ways of detecting when a cell reaches full charge (change in terminal voltage, temperature, etc.) to stop charging before harmful overcharging or overheating occurs. The fastest chargers often incorporate cooling fans to keep the cells from overheating. Battery packs intended for rapid charging may include a temperature sensor that the charger uses to protect the pack; the sensor will have one or more additional electrical contacts. Different battery chemistries require different charging schemes. For example, some battery types can be safely recharged from a constant voltage source. Other types need to be charged with a regulated current source that tapers as the battery reaches fully charged voltage. Charging a battery incorrectly can damage it; in extreme cases, batteries can overheat, catch fire, or explosively vent their contents.
Rate of discharge
Battery charging and discharging rates are often discussed by referencing a "C" rate of current. The C rate is that which would theoretically fully charge or discharge the battery in one hour. For example, trickle charging might be performed at C/20 (or a "20-hour" rate), while typical charging and discharging may occur at C/2 (two hours for full capacity). The available capacity of electrochemical cells varies depending on the discharge rate. Some energy is lost in the internal resistance of cell components (plates, electrolyte, interconnections), and the rate of discharge is limited by the speed at which chemicals in the cell can move about. For lead–acid cells, the relationship between time and discharge rate is described by Peukert's law; a lead–acid cell that can no longer sustain a usable terminal voltage at a high current may still have usable capacity, if discharged at a much lower rate. Data sheets for rechargeable cells often list the discharge capacity at an 8-hour or 20-hour or other stated rate; cells for uninterruptible power supply systems may be rated at a 15-minute discharge. The terminal voltage of the battery is not constant during charging and discharging. Some types have relatively constant voltage during discharge over much of their capacity. Non-rechargeable alkaline and zinc–carbon cells output 1.5 V when new, but this voltage drops with use. Most NiMH AA and AAA cells are rated at 1.2 V, but have a flatter discharge curve than alkalines and can usually be used in equipment designed to use alkaline batteries. Battery manufacturers' technical notes often refer to voltage per cell (VPC) for the individual cells that make up the battery. For example, to charge a 12 V lead–acid battery (containing 6 cells of 2 V each) at 2.3 VPC requires a voltage of 13.8 V across the battery's terminals.
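The C-rate, Peukert, and voltage-per-cell relationships above are simple arithmetic; the following Python sketch illustrates them (the cell capacity, the 100 Ah rating, and the Peukert exponent are illustrative values, not figures from this article):

```python
# C rate: a current of C/n fully charges or discharges an ideal cell in n hours.
def hours_at_rate(capacity_ah: float, current_a: float) -> float:
    return capacity_ah / current_a

print(hours_at_rate(2.0, 2.0 / 20))   # C/20 trickle rate -> 20.0 h
print(hours_at_rate(2.0, 2.0 / 2))    # C/2 -> 2.0 h

# Peukert's law for lead-acid cells: t = H * (C / (I * H)) ** k, where C is
# the capacity at the H-hour rating and k is the Peukert exponent (assumed
# ~1.2 here for illustration).
def peukert_runtime_h(c_rated_ah: float, h_rated: float,
                      current_a: float, k: float = 1.2) -> float:
    return h_rated * (c_rated_ah / (current_a * h_rated)) ** k

for amps in (5, 10, 20):
    t = peukert_runtime_h(100, 20, amps)
    print(f"{amps:2d} A -> {t:5.1f} h, {amps * t:5.1f} Ah delivered")
# Delivered capacity shrinks as the discharge current rises, as described above.

# Voltage per cell (VPC), from the example in the text: a 12 V lead-acid
# battery has 6 series cells, so charging at 2.3 VPC needs 6 * 2.3 = 13.8 V.
print(f"charger voltage: {6 * 2.3:.1f} V")
```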
Damage from cell reversal
Subjecting a discharged cell to a current in the direction which tends to discharge it further, to the point that the positive and negative terminals switch polarity, causes a condition called cell reversal. Generally, pushing current through a discharged cell in this way causes undesirable and irreversible chemical reactions to occur, resulting in permanent damage to the cell. Cell reversal can occur under a number of circumstances, the two most common being: when a battery or cell is connected to a charging circuit the wrong way around, and when a battery made of several cells connected in series is deeply discharged. In the latter case, the problem occurs because the different cells in a battery have slightly different capacities. When one cell reaches discharge level ahead of the rest, the remaining cells will force the current through the discharged cell. Many battery-operated devices have a low-voltage cutoff that prevents deep discharges from occurring that might cause cell reversal. A smart battery has voltage monitoring circuitry built inside. Cell reversal can occur to a weakly charged cell even before it is fully discharged. If the battery drain current is high enough, the cell's internal resistance can create a resistive voltage drop that is greater than the cell's forward emf. This results in the reversal of the cell's polarity while the current is flowing. The higher the required discharge rate of a battery, the better matched the cells should be, both in the type of cell and state of charge, in order to reduce the chances of cell reversal. In some situations, such as when correcting NiCd batteries that have been previously overcharged, it may be desirable to fully discharge a battery. To avoid damage from the cell reversal effect, it is necessary to access each cell separately: each cell is individually discharged by connecting a load clip across its terminals, thereby avoiding cell reversal.
Damage during storage in fully discharged state
If a multi-cell battery is fully discharged, it will often be damaged due to the cell reversal effect mentioned above. It is possible, however, to fully discharge a battery without causing cell reversal—either by discharging each cell separately, or by allowing each cell's internal leakage to dissipate its charge over time. Even if a cell is brought to a fully discharged state without reversal, damage may occur over time simply due to remaining in the discharged state. An example of this is the sulfation that occurs in lead–acid batteries that are left sitting on a shelf for long periods. For this reason it is often recommended to charge a battery that is intended to remain in storage, and to maintain its charge level by periodically recharging it. Since damage may also occur if the battery is overcharged, the optimal level of charge during storage is typically around 30% to 70%.
Depth of discharge
Depth of discharge (DOD) is normally stated as a percentage of the nominal ampere-hour capacity; 0% DOD means no discharge. As the usable capacity of a battery system depends on the rate of discharge and the allowable voltage at the end of discharge, the depth of discharge must be qualified to show the way it is to be measured. Due to variations during manufacture and aging, the DOD for complete discharge can change over time or with the number of charge cycles. Generally a rechargeable battery system will tolerate more charge/discharge cycles if the DOD is lower on each cycle.
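Since usable capacity depends on the allowed depth of discharge, sizing calculations often de-rate the nominal capacity by a DOD limit. A minimal Python sketch (the 10 kWh nominal capacity and the DOD limits below are assumed example values, not figures from this article):

```python
# Usable energy after de-rating nominal capacity by an allowable
# depth-of-discharge (DOD) limit. All numbers here are illustrative.
def usable_kwh(nominal_kwh: float, max_dod: float) -> float:
    """Energy available if discharge is stopped at the given DOD."""
    return nominal_kwh * max_dod

for regime, dod_limit in [("shallow-cycled", 0.30),
                          ("moderate", 0.50),
                          ("deep-cycle", 0.80)]:
    print(f"10 kWh nominal, {regime} ({dod_limit:.0%} DOD): "
          f"{usable_kwh(10, dod_limit):.1f} kWh usable")
```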
Lithium batteries can discharge to about 80 to 90% of their nominal capacity, lead–acid batteries to about 50–60%, and flow batteries to 100%.
Lifespan and cycle stability
If batteries are used repeatedly even without mistreatment, they lose capacity as the number of charge cycles increases, until they are eventually considered to have reached the end of their useful life. Different battery systems have differing mechanisms for wearing out. For example, in lead–acid batteries, not all the active material is restored to the plates on each charge/discharge cycle; eventually enough material is lost that the battery capacity is reduced. In lithium-ion types, especially on deep discharge, some reactive lithium metal can be formed on charging, which is no longer available to participate in the next discharge cycle. Sealed batteries may lose moisture from their liquid electrolyte, especially if overcharged or operated at high temperature. This reduces the cycling life.
Recharging time
Recharging time is an important parameter to the user of a product powered by rechargeable batteries. Even if the charging power supply provides enough power to operate the device as well as recharge the battery, the device is attached to an external power supply during the charging time. For electric vehicles used industrially, charging during off-shifts may be acceptable. For highway electric vehicles, rapid charging is necessary for charging in a reasonable time. A rechargeable battery cannot be recharged at an arbitrarily high rate. The internal resistance of the battery will produce heat, and excessive temperature rise will damage or destroy the battery. For some types, the maximum charging rate is limited by the speed at which active material can diffuse through a liquid electrolyte. High charging rates may produce excess gas in a battery, or may result in damaging side reactions that permanently lower the battery capacity. Very roughly, and with many exceptions and caveats, restoring a battery's full capacity in one hour or less is considered fast charging. A battery charger system designed for fast charging includes more complex control circuitry and charging strategies than a charger designed for slower recharging.
Active components
The active components in a secondary cell are the chemicals that make up the positive and negative active materials, and the electrolyte. The positive and negative electrodes are made up of different materials, with the positive exhibiting a reduction potential and the negative having an oxidation potential. The sum of the potentials from these half-reactions is the standard cell potential or voltage. In primary cells the positive and negative electrodes are known as the cathode and anode, respectively. Although this convention is sometimes carried through to rechargeable systems—especially with lithium-ion cells, because of their origins in primary lithium cells—this practice can lead to confusion. In rechargeable cells the positive electrode is the cathode on discharge and the anode on charge, and vice versa for the negative electrode.
Types
Commercial types
The lead–acid battery, invented in 1859 by French physicist Gaston Planté, is the oldest type of rechargeable battery. Despite having a very low energy-to-weight ratio and a low energy-to-volume ratio, its ability to supply high surge currents means that the cells have a relatively large power-to-weight ratio.
These features, along with the low cost, make it attractive for use in motor vehicles to provide the high current required by automobile starter motors. The nickel–cadmium battery (NiCd) was invented by Waldemar Jungner of Sweden in 1899. It uses nickel oxide hydroxide and metallic cadmium as electrodes. Cadmium is a toxic element, and was banned for most uses by the European Union in 2004. Nickel–cadmium batteries have been almost completely superseded by nickel–metal hydride (NiMH) batteries. The nickel–iron battery (NiFe) was also developed by Waldemar Jungner in 1899, and commercialized by Thomas Edison in 1901 in the United States for electric vehicles and railway signalling. It is composed of only non-toxic elements, unlike many kinds of batteries that contain toxic mercury, cadmium, or lead. The nickel–metal hydride battery (NiMH) became available in 1989. These are now a common consumer and industrial type. The battery has a hydrogen-absorbing alloy for the negative electrode instead of cadmium. The lithium-ion battery, introduced in the market in 1991, is the choice in most consumer electronics, having the best energy density and a very slow loss of charge when not in use. It does have drawbacks too, particularly the risk of unexpected ignition from the heat generated by the battery. Such incidents are rare, and according to experts they can be minimized "via appropriate design, installation, procedures and layers of safeguards", so the risk is acceptable. Lithium-ion polymer batteries (LiPo) are light in weight, offer slightly higher energy density than Li-ion at slightly higher cost, and can be made in any shape. They are available but have not displaced Li-ion in the market. A primary use for LiPo batteries is in powering remote-controlled cars, boats and airplanes. LiPo packs are readily available on the consumer market, in various configurations, up to 44.4 V, for powering certain R/C vehicles and helicopters or drones. Some test reports warn of the risk of fire when the batteries are not used in accordance with the instructions. Independent reviews of the technology discuss the risk of fire and explosion from lithium-ion batteries under certain conditions because they use liquid electrolytes.
Other experimental types
Lead–acid battery
The lead–acid battery is a type of rechargeable battery first invented in 1859 by French physicist Gaston Planté. It is the first type of rechargeable battery ever created. Compared to modern rechargeable batteries, lead–acid batteries have relatively low energy density. Despite this, they are able to supply high surge currents. These features, along with their low cost, make them attractive for use in motor vehicles to provide the high current required by starter motors. Lead–acid batteries suffer from relatively short cycle lifespan (usually less than 500 deep cycles) and overall lifespan (due to the double sulfation in the discharged state), as well as long charging times. As they are inexpensive compared to newer technologies, lead–acid batteries are widely used even when surge current is not important and other designs could provide higher energy densities. In 1999, lead–acid battery sales accounted for 40–50% of the value from batteries sold worldwide (excluding China and Russia), equivalent to a manufacturing market value of about US$15 billion. Large-format lead–acid designs are widely used for storage in backup power supplies in telecommunications networks such as for cell sites, high-availability emergency power systems as used in hospitals, and stand-alone power systems. For these roles, modified versions of the standard cell may be used to improve storage times and reduce maintenance requirements. Gel cells and absorbed glass-mat batteries are common in these roles, collectively known as valve-regulated lead–acid (VRLA) batteries. When charged, the battery's chemical energy is stored in the potential difference between the metallic lead at the negative side and the PbO2 on the positive side.
History
The French scientist Nicolas Gautherot observed in 1801 that wires that had been used for electrolysis experiments would themselves provide a small amount of secondary current after the main battery had been disconnected. In 1859, Gaston Planté's lead–acid battery was the first battery that could be recharged by passing a reverse current through it. Planté's first model consisted of two lead sheets separated by rubber strips and rolled into a spiral. His batteries were first used to power the lights in train carriages while stopped at a station. In 1881, Camille Alphonse Faure invented an improved version that consisted of a lead grid lattice, into which a lead oxide paste was pressed, forming a plate. This design was easier to mass-produce. An early manufacturer (from 1886) of lead–acid batteries was Henri Tudor. Using a gel electrolyte instead of a liquid allows the battery to be used in different positions without leaking. Gel electrolyte batteries for any position were first used in the late 1920s, and in the 1930s portable suitcase radio sets allowed the cell to be mounted vertically or horizontally (but not inverted) due to valve design. In the 1970s, the valve-regulated lead–acid (VRLA), or sealed, battery was developed, including modern absorbed glass mat (AGM) types, allowing operation in any position. It was discovered early in 2011 that lead–acid batteries do in fact use some aspects of relativity to function, and to a lesser degree liquid metal and molten-salt batteries such as the Ca–Sb and Sn–Bi also use this effect.
Electrochemistry
Discharge
In the discharged state, both the positive and negative plates become lead(II) sulfate (PbSO4), and the electrolyte loses much of its dissolved sulfuric acid and becomes primarily water.
Negative plate reaction:
Pb(s) + HSO4−(aq) → PbSO4(s) + H+(aq) + 2e−
The release of two conduction electrons gives the lead electrode a negative charge. As electrons accumulate, they create an electric field which attracts hydrogen ions and repels sulfate ions, leading to a double layer near the surface. The hydrogen ions screen the charged electrode from the solution, which limits further reaction unless charge is allowed to flow out of the electrode.
Positive plate reaction:
PbO2(s) + HSO4−(aq) + 3 H+(aq) + 2e− → PbSO4(s) + 2 H2O(l)
taking advantage of the metallic conductivity of PbO2.
The total reaction can be written as:
Pb(s) + PbO2(s) + 2 H2SO4(aq) → 2 PbSO4(s) + 2 H2O(l)
The net energy released per mole (207 g) of Pb(s) converted to PbSO4(s) is approximately 400 kJ, corresponding to the formation of 36 g of water. The sum of the molecular masses of the reactants is 642.6 g/mole, so theoretically a cell can produce two faradays of charge (192,971 coulombs) from 642.6 g of reactants, or 83.4 ampere-hours per kilogram for a 2-volt cell (or 13.9 ampere-hours per kilogram for a 12-volt battery). This comes to 167 watt-hours per kilogram of reactants, but in practice, a lead–acid cell gives only 30–40 watt-hours per kilogram of battery, due to the mass of the water and other constituent parts.
Charging
In the fully charged state, the negative plate consists of lead, and the positive plate is lead dioxide. The electrolyte solution has a higher concentration of aqueous sulfuric acid, which stores most of the chemical energy. Overcharging with high charging voltages generates oxygen and hydrogen gas by electrolysis of water, which bubbles out and is lost. The design of some types of lead–acid battery (e.g. "flooded", but not VRLA (AGM or gel)) allows the electrolyte level to be inspected and topped up with pure water to replace any that has been lost this way.
Effect of charge level on freezing point
Because of freezing-point depression, the electrolyte is more likely to freeze in a cold environment when the battery has a low charge and a correspondingly low sulfuric acid concentration.
Ion motion
During discharge, the H+ produced at the negative plates moves into the electrolyte solution and is then consumed at the positive plates, while HSO4− is consumed at both plates. The reverse occurs during charge. This motion can be electrically driven proton flow (the Grotthuss mechanism), or by diffusion through the medium, or by the flow of a liquid electrolyte medium. Since the electrolyte density is greater when the sulfuric acid concentration is higher, the liquid will tend to circulate by convection. Therefore, a liquid-medium cell tends to rapidly discharge and rapidly charge more efficiently than an otherwise-similar gel cell.
Measuring the charge level
Because the electrolyte takes part in the charge–discharge reaction, this battery has one major advantage over other chemistries: it is relatively simple to determine the state of charge by merely measuring the specific gravity of the electrolyte; the specific gravity falls as the battery discharges. Some battery designs include a simple hydrometer using colored floating balls of differing density. When used in diesel–electric submarines, the specific gravity was regularly measured and written on a blackboard in the control room to indicate how much longer the boat could remain submerged. The battery's open-circuit voltage can also be used to gauge the state of charge.
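A crude open-circuit-voltage estimate can be sketched in Python by interpolating linearly between the per-cell endpoints given in the next paragraph (about 1.8 V at full discharge and 2.10 V open-circuit at full charge); real state-of-charge estimation depends on temperature, battery design, and rest time, so this is only illustrative:

```python
# Crude lead-acid state-of-charge guess from open-circuit voltage,
# interpolating linearly between per-cell endpoints (illustrative only;
# the endpoints are the figures quoted in the surrounding text).
V_EMPTY, V_FULL = 1.8, 2.10   # volts per cell

def soc_from_ocv(battery_volts: float, cells: int = 6) -> float:
    vpc = battery_volts / cells
    frac = (vpc - V_EMPTY) / (V_FULL - V_EMPTY)
    return min(1.0, max(0.0, frac))   # clamp to [0, 1]

print(f"{soc_from_ocv(12.4):.0%}")    # 12 V battery resting at 12.4 V -> ~89%
print(f"{soc_from_ocv(11.9):.0%}")    # ~61%
```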
If the connections to the individual cells are accessible, then the state of charge of each cell can be determined, which can provide a guide to the state of health of the battery as a whole; otherwise, the overall battery voltage may be assessed.
Voltages for common usage
IUoU battery charging is a three-stage charging procedure for lead–acid batteries. A lead–acid battery's nominal voltage is 2.2 V for each cell. For a single cell, the voltage can range from 1.8 V loaded at full discharge, to 2.10 V in an open circuit at full charge. Float voltage varies depending on battery type (flooded cells, gelled electrolyte, absorbed glass mat), and ranges from 1.8 V to 2.27 V. Equalization voltage, and charging voltage for sulfated cells, can range from 2.67 V to almost 3 V (only until a charge current is flowing). Specific values for a given battery depend on the design and manufacturer recommendations, and are usually given at a baseline temperature of 25 °C, requiring adjustment for ambient conditions. IEEE Standard 485-2020 (first published in 1997) is the industry's recommended practice for sizing lead–acid batteries in stationary applications.
Construction
Plates
The lead–acid cell can be demonstrated using sheet lead plates for the two electrodes. However, such a construction produces only around one ampere for roughly postcard-sized plates, and for only a few minutes. Gaston Planté found a way to provide a much larger effective surface area. In Planté's design, the positive and negative plates were formed of two spirals of lead foil, separated with a sheet of cloth and coiled up. The cells initially had low capacity, so a slow process of forming was required to corrode the lead foils, creating lead dioxide on the plates and roughening them to increase surface area. Initially, this process used electricity from primary batteries; when generators became available after 1870, the cost of producing batteries greatly declined. Planté plates are still used in some stationary applications, where the plates are mechanically grooved to increase their surface area. In 1880, Camille Alphonse Faure patented a method of coating a lead grid (which serves as the current conductor) with a paste of lead oxides, sulfuric acid, and water, followed by a curing phase in which the plates were exposed to gentle heat in a high-humidity environment. The curing process changed the paste into a mixture of lead sulfates which adhered to the lead plate. Then, during the battery's initial charge (called formation), the cured paste on the plates was converted into electrochemically active material (the active mass). Faure's process significantly reduced the time and cost to manufacture lead–acid batteries, and gave a substantial increase in capacity compared with Planté's battery. Faure's method is still in use today, with only incremental improvements to paste composition, curing (which is still done with steam, but is now a very tightly controlled process), and the structure and composition of the grid to which the paste is applied. The grid developed by Faure was of pure lead with connecting rods of lead at right angles. In contrast, present-day grids are structured for improved mechanical strength and improved current flow. In addition to different grid patterns (ideally, all points on the plate are equidistant from the power conductor), modern-day processes also apply one or two thin fiberglass mats over the grid to distribute the weight more evenly.
And while Faure had used pure lead for his grids, within a year (1881) these had been superseded by lead–antimony (8–12%) alloys to give the structures additional rigidity. However, high-antimony grids have higher hydrogen evolution (which also accelerates as the battery ages), and thus greater outgassing and higher maintenance costs. These issues were identified by U. B. Thomas and W. E. Haring at Bell Labs in the 1930s and eventually led to the development of lead–calcium grid alloys in 1935 for standby power batteries on the U.S. telephone network. Related research led to the development of lead–selenium grid alloys in Europe a few years later. Both lead–calcium and lead–selenium grid alloys still add antimony, albeit in much smaller quantities than the older high-antimony grids: lead–calcium grids have 4–6% antimony, while lead–selenium grids have 1–2%. These metallurgical improvements give the grid more strength, which allows it to carry more weight, and therefore more active material, so the plates can be thicker, which in turn contributes to battery lifespan since there is more material available to shed before the battery becomes unusable. High-antimony alloy grids are still used in batteries intended for frequent cycling, e.g. in motor-starting applications where the frequent expansion and contraction of the plates needs to be compensated for, but where outgassing is not significant since charge currents remain low. Since the 1950s, batteries designed for infrequent cycling applications (e.g., standby power batteries) have increasingly had lead–calcium or lead–selenium alloy grids, since these have less hydrogen evolution and thus lower maintenance overhead. Lead–calcium alloy grids are cheaper to manufacture (the cells thus have lower up-front costs), have a lower self-discharge rate, and have lower watering requirements, but have slightly poorer conductivity, are mechanically weaker (and thus require more antimony to compensate), and are more strongly subject to corrosion (and thus a shorter lifespan) than cells with lead–selenium alloy grids. The open-circuit effect is a dramatic loss of battery cycle life which was observed when calcium was substituted for antimony; it is also known as the antimony-free effect.
Modern approach
Modern-day paste contains carbon black, blanc fixe (barium sulfate), and lignosulfonate. The blanc fixe acts as a seed crystal for the lead-to-lead-sulfate reaction. The blanc fixe must be fully dispersed in the paste in order for it to be effective. The lignosulfonate prevents the negative plate from forming a solid mass during the discharge cycle, instead enabling the formation of long needle-like dendrites. The long crystals have more surface area and are easily converted back to the original state on charging. Carbon black counteracts the effect of inhibiting formation caused by the lignosulfonates. Sulfonated naphthalene condensate dispersant is a more effective expander than lignosulfonate and speeds up formation. This dispersant improves the dispersion of barium sulfate in the paste, reduces hydroset time, produces a more breakage-resistant plate, reduces fine lead particles, and thereby improves handling and pasting characteristics. It extends battery life by increasing end-of-charge voltage. Sulfonated naphthalene requires about one-third to one-half the amount of lignosulfonate and is stable to higher temperatures. Once dry, the plates are stacked with suitable separators and inserted in a cell container.
The alternating plates then constitute alternating positive and negative electrodes, and within the cell are later connected to one another (negative to negative, positive to positive) in parallel. The separators inhibit the plates from touching each other, which would otherwise constitute a short circuit. In flooded and gel cells, the separators are insulating rails or studs, formerly of glass or ceramic, and now of plastic. In AGM cells, the separator is the glass mat itself, and the rack of plates with separators is squeezed together before insertion into the cell; once in the cell, the glass mats expand slightly, effectively locking the plates in place. In multi-cell batteries, the cells are then connected to one another in series, either through connectors through the cell walls, or by a bridge over the cell walls. All intra-cell and inter-cell connections are of the same lead alloy as that used in the grids. This is necessary to prevent galvanic corrosion. Deep-cycle batteries have a different geometry for their positive electrodes. The positive electrode is not a flat plate but a row of lead-oxide cylinders or tubes strung side by side, so their geometry is called tubular or cylindrical. The advantage of this is an increased surface area in contact with the electrolyte, with higher discharge and charge currents than a flat-plate cell of the same volume and depth-of-charge. Tubular-electrode cells have a higher power density than flat-plate cells. This makes cylindrical-geometry plates especially suitable for high-current applications with weight or space limitations, such as for forklifts or for starting marine diesel engines. However, because cylinders have less active material in the same volume, they also have lower energy densities than otherwise comparable flat-plate cells, and less active material at the electrode also means they have less material available to shed before the cells become unusable. Cylindrical electrodes are also more complicated to manufacture uniformly, which tends to make them more expensive than flat-plate cells. These trade-offs limit the range of applications in which cylindrical batteries are meaningful to situations where there is insufficient space to install higher-capacity (and thus larger) flat-plate units. About 60% of the weight of an automotive-type lead–acid battery rated around 60 A·h is lead or internal parts made of lead; the balance is electrolyte, separators, and the case. For example, there are approximately of lead in a typical battery.
Separators
Separators between the positive and negative plates prevent short circuits through physical contact, mostly through dendrites (treeing), but also through shedding of the active material. Separators allow the flow of ions between the plates of an electrochemical cell to form a closed circuit. Wood, rubber, glass fiber mat, cellulose, and PVC or polyethylene plastic have been used to make separators. Wood was the original choice, but it deteriorates in the acid electrolyte. An effective separator must possess a number of mechanical properties, including permeability, porosity, pore size distribution, specific surface area, mechanical design and strength, electrical resistance, ionic conductivity, and chemical compatibility with the electrolyte. In service, the separator must have good resistance to acid and oxidation. The area of the separator must be a little larger than the area of the plates to prevent material shorting between the plates.
The separators must remain stable over the battery's operating temperature range.
Absorbent glass mat
In the absorbent glass mat (AGM) design, the separators between the plates are replaced by a glass fibre mat soaked in electrolyte. There is only enough electrolyte in the mat to keep it wet, and if the battery is punctured, the electrolyte will not flow out of the mats. The principal purpose of replacing liquid electrolyte in a flooded battery with a semi-saturated fiberglass mat is to substantially increase the gas transport through the separator; hydrogen or oxygen gas produced during overcharge or charge (if the charge current is excessive) is able to pass freely through the glass mat and reduce or oxidize the opposing plate, respectively. In a flooded cell, the bubbles of gas float to the top of the battery and are lost to the atmosphere. This mechanism for recombining the gas produced, together with the additional benefit that a semi-saturated cell shows no substantial leakage of electrolyte upon physical puncture of the battery case, allows the battery to be completely sealed, which makes AGM batteries useful in portable devices and similar roles. Additionally, the battery can be installed in any orientation, though if it is installed upside down, then acid may be blown out through the overpressure vent. To reduce the water loss rate, calcium is alloyed with the plates; however, gas build-up remains a problem when the battery is deeply or rapidly charged or discharged. To prevent over-pressurization of the battery casing, AGM batteries include a one-way blow-off valve, and are often known as valve-regulated lead–acid (VRLA) designs. Another advantage of the AGM design is that the electrolyte becomes the separator material and is mechanically strong. This allows the plate stack to be compressed together in the battery shell, slightly increasing energy density compared to liquid or gel versions. AGM batteries often show a characteristic bulging in their shells when built in common rectangular shapes, due to the expansion of the positive plates. The mat also prevents the vertical motion of the electrolyte within the battery. When a normal wet cell is stored in a discharged state, the heavier acid molecules tend to settle to the bottom of the battery, causing the electrolyte to stratify. When the battery is then used, the majority of the current flows only in this area, and the bottom of the plates tends to wear out rapidly. This is one of the reasons a conventional car battery can be ruined by leaving it stored for a long period and then used and recharged. The mat significantly prevents this stratification, eliminating the need to periodically shake the batteries, boil them, or run an equalization charge through them to mix the electrolyte. Stratification also causes the upper layers of the battery to become almost completely water, which can freeze in cold weather; AGMs are significantly less susceptible to damage due to low-temperature use. While AGM cells do not permit watering (typically it is impossible to add water without drilling a hole in the battery), their recombination process is fundamentally limited by the usual chemical processes. Hydrogen gas will even diffuse right through the plastic case itself. Some have found that it is profitable to add water to an AGM battery, but this must be done slowly to allow the water to mix throughout the battery via diffusion.
When a lead–acid battery loses water, its acid concentration increases, increasing the corrosion rate of the plates significantly. AGM cells already have a high acid content in an attempt to lower the water loss rate and increase standby voltage, and this brings about a shorter life compared to a lead–antimony flooded battery. If the open-circuit voltage of AGM cells is significantly higher than 2.093 volts, or 12.56 V for a 12 V battery, then it has a higher acid content than a flooded cell; while this is normal for an AGM battery, it is not desirable for long life. AGM cells that are intentionally or accidentally overcharged will show a higher open-circuit voltage in proportion to the water lost (and the acid concentration increased). One amp-hour of overcharge will electrolyse 0.335 grams of water per cell; some of this liberated hydrogen and oxygen will recombine, but not all of it.
Gelled electrolytes
During the 1970s, researchers developed the sealed version, or gel battery, which mixes a silica gelling agent into the electrolyte (silica-gel-based lead–acid batteries used in portable radios from the early 1930s were not fully sealed). This converts the formerly liquid interior of the cells into a semi-stiff paste, providing many of the same advantages as the AGM. Such designs are even less susceptible to evaporation and are often used in situations where little or no periodic maintenance is possible. Gel cells also have lower freezing and higher boiling points than the liquid electrolytes used in conventional wet cells and AGMs, which makes them suitable for use in extreme conditions. The only downside to the gel design is that the gel prevents rapid motion of the ions in the electrolyte, which reduces carrier mobility and thus surge current capability. For this reason, gel cells are most commonly found in energy storage applications like off-grid systems.
Maintenance-free, sealed, and valve-regulated lead–acid (VRLA)
Both gel and AGM designs are sealed, do not require watering, can be used in any orientation, and use a valve for gas blowoff. For this reason, both designs can be called maintenance-free, sealed, and VRLA. However, it is quite common to find resources stating that these terms refer to one or another of these designs, specifically. In a valve-regulated lead–acid (VRLA) battery, the hydrogen and oxygen produced in the cells largely recombine into water. Leakage is minimal, although some electrolyte still escapes if the recombination cannot keep up with gas evolution. Since VRLA batteries do not require (and make impossible) regular checking of the electrolyte level, they have been called maintenance-free batteries. However, this is somewhat of a misnomer: VRLA cells do require maintenance. As electrolyte is lost, VRLA cells dry out and lose capacity. This can be detected by taking regular internal resistance, conductance, or impedance measurements. Regular testing reveals whether more involved testing and maintenance is required. Maintenance procedures have recently been developed allowing rehydration, often restoring significant amounts of lost capacity. VRLA types became popular on motorcycles around 1983, because the separator improves resistance to vibration and prevents the acid electrolyte from spilling. They are also popular in stationary applications such as telecommunications sites, due to their small footprint and installation flexibility.
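The 0.335 gram-per-amp-hour electrolysis figure quoted in the AGM discussion above follows directly from Faraday's law; a short Python check using only standard physical constants:

```python
# Faraday's-law check: water lost per amp-hour of overcharge.
# Electrolysis of water transfers 2 electrons per H2O molecule.
F = 96485.0      # Faraday constant, C/mol
M_H2O = 18.015   # molar mass of water, g/mol

coulombs_per_ah = 3600.0
mol_electrons = coulombs_per_ah / F
grams_water = mol_electrons / 2 * M_H2O
print(f"{grams_water:.3f} g of water electrolysed per Ah")  # ~0.336 g
```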
Applications
Most of the world's lead–acid batteries are automobile starting, lighting, and ignition (SLI) batteries, with an estimated 320 million units shipped in 1999. In 1992 about 3 million tons of lead were used in the manufacture of batteries. Wet cell stand-by (stationary) batteries designed for deep discharge are commonly used in large backup power supplies for telephone and computer centres, grid energy storage, and off-grid household electric power systems. Lead–acid batteries are used in emergency lighting and to power sump pumps in case of power failure. Traction (propulsion) batteries are used in golf carts and other battery electric vehicles. Large lead–acid batteries are also used to power the electric motors in diesel–electric (conventional) submarines when submerged, and are used as emergency power on nuclear submarines as well. Valve-regulated lead–acid batteries cannot spill their electrolyte. They are used in back-up power supplies for alarm and smaller computer systems (particularly in uninterruptible power supplies) and for electric scooters, electric wheelchairs, electrified bicycles, marine applications, battery electric vehicles or micro hybrid vehicles, and motorcycles. Many electric forklifts use lead–acid batteries, where the weight is used as part of a counterweight. Lead–acid batteries were used to supply the filament (heater) voltage, with 2 V common in early vacuum tube (valve) radio receivers. Portable batteries for miners' cap headlamps typically have two or three cells.
Cycles
Starting batteries
Lead–acid batteries designed for starting automotive engines are not designed for deep discharge. They have a large number of thin plates designed for maximum surface area, and therefore maximum current output, which can easily be damaged by deep discharge. Repeated deep discharges will result in capacity loss and ultimately in premature failure, as the electrodes disintegrate due to mechanical stresses that arise from cycling. Starting batteries kept on a continuous float charge will suffer corrosion of the electrodes, which will also result in premature failure. Starting batteries should therefore be kept open circuit but charged regularly (at least once every two weeks) to prevent sulfation. Starting batteries are lighter than deep-cycle batteries of the same size, because the thinner and lighter cell plates do not extend all the way to the bottom of the battery case. This allows loose, disintegrated material to fall off the plates and collect at the bottom of the cell, prolonging the service life of the battery. If this loose debris rises enough, then it may touch the bottom of the plates and cause failure of a cell, resulting in loss of battery voltage and capacity.
Deep-cycle batteries
Specially-designed deep-cycle cells are much less susceptible to degradation due to cycling, and are required for applications where the batteries are regularly discharged, such as photovoltaic systems, electric vehicles (forklifts, golf carts, electric cars, and others), and uninterruptible power supplies. These batteries have thicker plates that cannot deliver as much peak current but can withstand frequent discharging. Some batteries are designed as a compromise between starter (high-current) and deep-cycle types. They are able to be discharged to a greater degree than automotive batteries, but less so than deep-cycle batteries. They may be referred to as marine, motorhome, or leisure batteries.
Fast and slow charge and discharge
The capacity of a lead–acid battery is not a fixed quantity but varies according to how quickly it is discharged. The empirical relationship between discharge rate and capacity is known as Peukert's law. When a battery is charged or discharged, only the reacting chemicals, which are at the interface between the electrodes and the electrolyte, are initially affected. With time, the charge stored in the chemicals at the interface, often called interface charge or surface charge, spreads by diffusion of these chemicals throughout the volume of the active material. Consider a battery that has been completely discharged (such as occurs when leaving the car lights on overnight, a current draw of about 6 amps). If it then is given a fast charge for only a few minutes, the battery plates charge only near the interface between the plates and the electrolyte. In this case the battery voltage might rise to a value near that of the charger voltage; this causes the charging current to decrease significantly. After a few hours this interface charge will spread to the volume of the electrode and electrolyte, leading to an interface charge so low that it may be insufficient to start the car. As long as the charging voltage stays below the gassing voltage (about 14.4 volts in a normal lead–acid battery), battery damage is unlikely, and in time the battery should return to a nominally charged state.
Sulfation and desulfation
Lead–acid batteries lose the ability to accept a charge when discharged for too long due to sulfation, the crystallization of lead sulfate. They generate electricity through a double sulfate chemical reaction. Lead and lead dioxide, the active materials on the battery's plates, react with sulfuric acid in the electrolyte to form lead sulfate. The lead sulfate first forms in a finely divided, amorphous state and easily reverts to lead, lead dioxide, and sulfuric acid when the battery recharges. As batteries cycle through numerous discharges and charges, some lead sulfate does not recombine into the electrolyte and slowly converts into a stable crystalline form that no longer dissolves on recharging. Thus, not all the lead is returned to the battery plates, and the amount of usable active material necessary for electricity generation declines over time. Sulfation occurs in lead–acid batteries when they are subjected to insufficient charging during normal operation; it also occurs when lead–acid batteries are left unused with an incomplete charge for an extended time. It impedes recharging; sulfate deposits ultimately expand, cracking the plates and destroying the battery. Eventually, so much of the battery plate area is unable to supply current that the battery capacity is greatly reduced. In addition, the sulfate portion (of the lead sulfate) is not returned to the electrolyte as sulfuric acid. It is believed that large crystals physically block the electrolyte from entering the pores of the plates. A white coating on the plates may be visible in batteries with clear cases or after dismantling the battery. Batteries that are sulfated show a high internal resistance and can deliver only a small fraction of the normal discharge current. Sulfation also affects the charging cycle, resulting in longer charging times, less-efficient and incomplete charging, and higher battery temperatures. SLI batteries (starting, lighting, ignition; e.g., car batteries) suffer the most deterioration because vehicles normally stand unused for relatively long periods of time.
Deep-cycle and motive power batteries are subjected to regular controlled overcharging, eventually failing due to corrosion of the positive plate grids rather than sulfation. Sulfation can be avoided if the battery is fully recharged immediately after a discharge cycle. There are no known independently verified ways to reverse sulfation. There are commercial products claiming to achieve desulfation through various techniques such as pulse charging, but there are no peer-reviewed publications verifying their claims. Sulfation prevention remains the best course of action: periodically fully charging the lead–acid batteries.
Stratification
A typical lead–acid battery contains a mixture with varying concentrations of water and acid. Sulfuric acid has a higher density than water, which causes the acid formed at the plates during charging to flow downward and collect at the bottom of the battery. Eventually the mixture will again reach uniform composition by diffusion, but this is a very slow process. Repeated cycles of partial charging and discharging will increase stratification of the electrolyte, reducing the capacity and performance of the battery because the lack of acid on top limits plate activation. The stratification also promotes corrosion on the upper half of the plates and sulfation at the bottom. Periodic overcharging creates gaseous reaction products at the plate, causing convection currents which mix the electrolyte and resolve the stratification. Mechanical stirring of the electrolyte would have the same effect. Batteries in moving vehicles are also subject to sloshing and splashing in the cells, as the vehicle accelerates, brakes, and turns.
Safety
Excessive charging causes electrolysis, emitting hydrogen and oxygen in a process known as gassing. Wet cells have open vents to release any gas produced, and VRLA batteries rely on valves fitted to each cell. Catalytic caps are available for flooded cells to recombine hydrogen and oxygen. A VRLA cell normally recombines any hydrogen and oxygen produced inside the cell, but malfunction or overheating may cause gas to build up. If this happens (for example, on overcharging), the valve vents the gas and normalizes the pressure, producing a characteristic acidic smell. However, valves can fail, such as when dirt and debris accumulate, allowing pressure to build up. Accumulated hydrogen and oxygen sometimes ignite in an internal explosion. The force of the explosion can cause the battery's casing to burst, or cause its top to fly off, spraying acid and casing fragments. An explosion in one cell may ignite any combustible gas mixture in the remaining cells. Similarly, in a poorly ventilated area, connecting or disconnecting a closed circuit (such as a load or a charger) to the battery terminals can also cause sparks and an explosion, if any gas was vented from the cells. Individual cells within a battery can also short circuit, causing an explosion. The cells of VRLA batteries typically swell when the internal pressure rises, thereby giving a warning to users and mechanics. The deformation varies from cell to cell, and is greatest at the ends where the walls are unsupported by other cells. Such over-pressurized batteries should be carefully isolated and discarded. Personnel working near batteries at risk of explosion should protect their eyes and exposed skin from burns due to spraying acid and fire by wearing a face shield, overalls, and gloves.
Using goggles instead of a face shield leaves the face exposed to possible flying acid, case or battery fragments, and heat from a potential explosion. Environment Environmental concerns According to a 2003 report entitled "Getting the Lead Out", by Environmental Defense and the Ecology Center of Ann Arbor, Michigan, the batteries of vehicles on the road contained an estimated of lead. Some lead compounds are extremely toxic. Long-term exposure to even tiny amounts of these compounds can cause brain and kidney damage, hearing impairment, and learning problems in children. The auto industry uses over of lead every year, with 90% going to conventional lead–acid vehicle batteries. While lead recycling is a well-established industry, more than ends up in landfills every year. According to the federal Toxic Release Inventory, another are released in the lead mining and manufacturing process. Attempts are being made to develop alternatives (particularly for automotive use) because of concerns about the environmental consequences of improper disposal and of lead smelting operations, among other reasons. Alternatives are unlikely to displace lead–acid batteries for applications such as engine starting or backup power systems, since the batteries, although heavy, are inexpensive in up-front cost. Recycling According to the Battery Council, an industry group, lead–acid battery recycling is one of the most successful recycling programs in the world. In the United States 99% of all battery lead was recycled between 2014 and 2018. However, documents of the U.S. Environmental Protection Agency have, since 1982, indicated rates varying between 60% and 95%. Lead is highly toxic to humans, and recycling it can result in pollution and the contamination of people, causing numerous and lasting health problems. One ranking identifies lead–acid battery recycling as the world's most deadly industrial process in terms of disability-adjusted life years lost, with an estimated 2,000,000 to 4,800,000 years of individual human life lost globally. Lead–acid battery-recycling sites have themselves become a source of lead pollution, and by 1992, the EPA had selected 29 such sites for its Superfund clean-up, with 22 on its National Priority List. An effective pollution control system is a necessity to prevent lead emission. Continuous improvement in battery recycling plants and furnace designs is required to keep pace with emission standards for lead smelters. Additives Chemical additives have been used ever since the lead–acid battery became a commercial item to reduce lead sulfate buildup on plates and improve battery condition when added to the electrolyte of a vented lead–acid battery. Such treatments are rarely, if ever, effective. Two compounds used for such purposes are Epsom salts and EDTA. Epsom salts reduce the internal resistance of a weak or damaged battery and may allow a small amount of extended life. EDTA can be used to dissolve the sulfate deposits of heavily discharged plates. However, the dissolved material is then no longer available to participate in the normal charge-discharge cycle, so a battery temporarily revived with EDTA will have a reduced life expectancy. Residual EDTA in the lead–acid cell forms organic acids which will accelerate corrosion of the lead plates and internal connectors. The active materials change physical form during charge/discharge, resulting in growth and distortion of the electrodes, and shedding of electrode material into the electrolyte.
Once the active material has fallen out of the plates, it cannot be restored into position by any chemical treatment. Similarly, internal physical problems such as cracked plates, corroded connectors, or damaged separators cannot be restored chemically. Corrosion problems Corrosion of the external metal parts of the lead–acid battery results from a chemical reaction involving the battery terminals, plugs, and connectors. Corrosion on the positive terminal is caused by electrolysis, due to a mismatch of metal alloys used in the manufacture of the battery terminal and cable connector. White corrosion is usually lead or zinc sulfate crystals. Aluminum connectors corrode to aluminum sulfate. Copper connectors produce blue and white corrosion crystals. Corrosion of a battery's terminals can be reduced by coating the terminals with petroleum jelly or a commercially available product made for the purpose. If the battery is overfilled with water and electrolyte, thermal expansion can force some of the liquid out of the battery vents onto the top of the battery. This solution can then react with the lead and other metals in the battery connector and cause corrosion. The electrolyte can also seep from the plastic-to-lead seal where the battery terminals penetrate the plastic case. Finally, acid fumes that vaporize through the vent caps, often as a result of overcharging, combined with insufficient battery box ventilation, can allow the sulfuric acid fumes to build up and react with the exposed metals.
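The failure modes described in this article lend themselves to simple threshold checks. The sketch below is a toy monitoring rule, not an implementation of any real charger: the 14.4 V gassing figure is the one quoted earlier for a normal 12 V lead–acid battery, while the 30-day idle limit is an invented placeholder, since the text says only that extended storage at partial charge promotes sulfation.

```python
GASSING_VOLTS = 14.4  # typical gassing voltage of a 12 V lead-acid battery (quoted above)

def battery_check(charging_volts, days_since_full_charge, max_idle_days=30):
    """Toy rule combining the two hazards discussed above: electrolysis
    (gassing) during overcharge, and sulfation during prolonged partial
    charge. `max_idle_days` is an illustrative assumption."""
    warnings = []
    if charging_volts >= GASSING_VOLTS:
        warnings.append("charging voltage at/above gassing threshold: risk of electrolysis")
    if days_since_full_charge > max_idle_days:
        warnings.append("long idle at partial charge: risk of sulfation, recharge fully")
    return warnings or ["normal operation"]

print(battery_check(14.5, 3))
print(battery_check(13.2, 45))
```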
Technology
Energy storage
null
18962267
https://en.wikipedia.org/wiki/Axe
Axe
An axe (sometimes spelled ax in American English; see spelling differences) is an implement that has been used for millennia to shape, split, and cut wood, to harvest timber, as a weapon, and as a ceremonial or heraldic symbol. The axe has many forms and specialised uses but generally consists of an axe head with a handle, also called a haft or a helve. Before the modern axe, the stone-age hand axe without a handle was used from 1.5 million years BP. Hafted axes (those with a handle) date only from 6,000 BC. The earliest examples of handled axes have heads of stone with some form of wooden handle attached (hafted) in a method to suit the available materials and use. Axes made of copper, bronze, iron and steel appeared as these technologies developed. The axe is an example of a simple machine, as it is a type of wedge, or dual inclined plane. This reduces the effort needed by the wood chopper (a short worked example is given below). It splits the wood into two parts by the pressure concentration at the blade. The handle of the axe also acts as a lever allowing the user to increase the force at the cutting edge—not using the full length of the handle is known as choking the axe. For fine chopping using a side axe this sometimes is a positive effect, but for felling with a double bitted axe it reduces efficiency. Generally, cutting axes have a shallow wedge angle, whereas splitting axes have a deeper angle. Most axes are double bevelled (i.e. symmetrical about the axis of the blade), but some specialist broadaxes have a single bevel blade, and usually an offset handle that allows them to be used for finishing work without putting the user's knuckles at risk of injury. Less common today, they were once an integral part of a joiner's and carpenter's tool kit, not just a tool for use in forestry. A tool of similar origin is the billhook. Most modern axes have steel heads and wooden handles, typically hickory in the US and ash in Europe and Asia, although plastic or fibreglass handles are also common. Modern axes are specialised by use, size and form. Hafted axes with short handles designed for use with one hand are often called "hand axes", but the term "hand axe" also refers to axes without handles. Hatchets tend to be small hafted axes, often with a hammer on the back side (the poll). As an easy-to-make tool, the axe has frequently been used in combat, and is one of humanity's oldest weapons. History Hand axes, made of stone and used without handles (hafts), were the first axes. They had knapped (chipped) cutting edges of flint or other stone. Early examples of hand axes date back to 1.6 mya in the later Oldowan, in Southern Ethiopia around 1.4 mya, and in 1.2 mya deposits in Olduvai Gorge. Stone axes made with ground cutting edges were first developed sometime in the late Pleistocene in Australia, where grind-edge axe fragments from sites in Arnhem Land date back at least 44,000 years; grind-edge axes were later present in Japan some time around 38,000 BP, and are known from several Upper Palaeolithic sites on the islands of Honshu and Kyushu. Hafted axes are first known from the Mesolithic period. Few wooden hafts have been found from this period, but it seems that the axe was normally hafted by wedging. Birch-tar and rawhide lashings were used to fix the blade. The distribution of stone axes is an important indication of prehistoric trade. Thin sectioning is used to determine the provenance of the stone blades.
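The simple-machine point made above (the axe as a wedge, with the handle as a lever) can be made concrete with a rough calculation. The head geometries below are invented for illustration; only the qualitative contrast, shallow cutting wedge versus steep splitting wedge, comes from the text.

```python
def wedge_mechanical_advantage(face_length_mm, back_thickness_mm):
    """Ideal mechanical advantage of a wedge: the ratio of the length of
    the sloped face to the thickness it forces the material apart."""
    return face_length_mm / back_thickness_mm

# Assumed head profiles: a thin felling bit and a fatter splitting bit.
print(f"felling bit:   MA ~ {wedge_mechanical_advantage(80, 10):.0f}")
print(f"splitting bit: MA ~ {wedge_mechanical_advantage(80, 30):.1f}")
```

A shallower angle multiplies the applied force more, suiting cutting across the grain; a steeper angle trades that advantage for the prying action that rends fibres apart, matching the cutting/splitting distinction drawn above.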
In Europe, Neolithic "axe factories", where thousands of ground stone axes were roughed out, are known from many places, such as: Great Langdale, England (tuff); Rathlin Island, Ireland (porcellanite); Krzemionki, Poland (flint); the Neolithic flint mines of Spiennes, Belgium (flint); Plancher-les-Mines, France (pelite); and the Aosta Valley, Italy (omphacite). Stone axes are still produced and in use today in parts of Papua, Indonesia. The Mount Hagen area of Papua New Guinea was an important production centre. From the late Neolithic/Chalcolithic onwards, axes were made of copper or copper mixed with arsenic. These axes were flat and hafted much like their stone predecessors. Axes continued to be made in this manner with the introduction of bronze metallurgy. Eventually the hafting method changed and the flat axe developed into the "flanged axe", then palstaves, and later winged and socketed axes. Symbolism, ritual, and folklore At least since the late Neolithic, elaborate axes (battle-axes, T-axes, etc.) had a religious significance and probably indicated the exalted status of their owner. Certain types almost never show traces of wear; deposits of unshafted axe blades from the middle Neolithic (such as at the Somerset Levels in Britain) may have been gifts to the deities. In Minoan Crete, the double axe (labrys) had a special significance, used by priestesses in religious ceremonies. In 1998 a labrys, complete with an elaborately embellished haft, was found at Cham-Eslen, Canton of Zug, Switzerland. The haft was long and wrapped in ornamented birch-bark. The axe blade is long and made of antigorite, mined in the Gotthard area. The haft goes through a biconical drilled hole and is fastened by wedges of antler and by birch-tar. It belongs to the early Cortaillod culture. The coat of arms of Norway features a lion rampant carrying an axe which represents King Olaf II of Norway, who was honoured as the Eternal King of Norway. In folklore, stone axes were sometimes believed to be thunderbolts and were used to guard buildings against lightning, as it was believed (mythically) that lightning never struck the same place twice. This has caused some skewing of axe distributions. Steel axes were important in superstition as well. A thrown axe could keep off a hailstorm; sometimes an axe was placed in the crops, with the cutting edge to the skies, to protect the harvest against bad weather. An upright axe buried under the sill of a house would keep off witches, while an axe under the bed would assure male offspring. Basques, Australians and New Zealanders have developed variants of rural sports that perpetuate the traditions of log cutting with the axe. The Basque variants, splitting horizontally or vertically disposed logs, are generically called aizkolaritza (from aizkora: axe). In Yorùbá mythology, the oshe (double-headed axe) symbolises Shango, Orisha (god) of thunder and lightning. It is said to represent swift and balanced justice. Shango altars often contain a carved figure of a woman holding a gift to the god with a double-bladed axe sticking up from her head. The Hurrian and Hittite weather god Teshub is depicted on a bas-relief at Ivriz wielding a thunderbolt and an axe. The Arkalochori Axe is a bronze Minoan axe from the second millennium BC thought to be used for religious purposes. Inscriptions on this axe have been compared with other ancient writing systems. Components The axe has two primary components: the axe head, and the haft.
Axe head The axe head is typically bounded by the bit (or blade) at one end, and the poll (or butt) at the other, though some designs feature two bits opposite each other. The top corner of the bit where the cutting edge begins is called the toe, and the bottom corner is known as the heel. Either side of the head is called the cheek, which is sometimes supplemented by lugs where the head meets the haft, and the hole where the haft is mounted is called the eye. The part of the bit that descends below the rest of the axe-head is called the beard, and a bearded axe is an antiquated axe head with an exaggerated beard that can sometimes extend the cutting edge to twice the height of the rest of the head. Axe haft The axe haft, sometimes called the handle or the helve, is traditionally made of a resilient hardwood like hickory or ash, but modern axes often have hafts made of durable synthetic materials. Antique axes and their modern reproductions, like the tomahawk, often had a simple, straight haft with a circular cross-section that wedged onto the axe-head without the aid of wedges or pins. Modern hafts are curved for better grip and to aid in the swinging motion, and are mounted securely to the head by wedging. The shoulder is where the head mounts onto the haft; this is either a long oval or rectangular cross-section of the haft that is secured to the axe head with small metal or wooden wedges. The belly of the haft is the longest part, where it bows in gently, and the throat is where it curves sharply down to the short grip, just before the end of the haft, which is known as the knob. Types of axes Axes designed to cut or shape wood Felling axe: Cuts across the grain of wood, as in the felling of trees; made in single- or double-bit forms (the bit is the cutting edge of the head) and in many different weights, shapes, handle types and cutting geometries to match the characteristics of the material being cut. More so than with, for instance, a splitting axe, the bit of a felling axe needs to be very sharp in order to cut the fibres efficiently. Splitting axe: Used in wood splitting to split with the grain of the wood. Splitting axe bits are more wedge-shaped. This shape causes the axe to rend the fibres of the wood apart, without having to cut through them. Broad axe: Used with the grain of the wood in precision splitting or "hewing" (i.e. the squaring-off of round timbers, usually for use in construction). Broad axe bits are most commonly chisel-shaped (i.e. one flat and one beveled edge), facilitating more controlled work as the flat cheek passes along the squared timber. Adze: A variation featuring a head perpendicular to that of an axe. Rather than splitting wood side-by-side, it is used to rip a level surface into a horizontal piece of wood. It can also be used as a pickaxe for breaking up rocks and clay. Hatchet: A small, light axe designed for use in one hand, specifically while camping or travelling. Carpenter's axe: A small axe, usually slightly larger than a hatchet, used in traditional woodwork, joinery and log-building. It has a pronounced beard and finger notch to allow a "choked" grip for precise control. The poll is designed for use as a hammer. Hand axe: A small axe used for intermediate chopping, similar to hatchets. Mortising axe: Used for creating mortises, a process which begins by drilling two holes at the ends of the intended mortise. Then the wood between the holes is removed with the mortising axe.
Some forms of the tool have one blade, which may be pushed, swung or struck with a mallet. Others, such as the twybil, bisaigüe and piochon, have two, one of which is used for separating the fibres, and the other for levering out the waste. Axes as weapons Archer's axe: A one-handed axe with a bearded head carried by medieval archers. It served both as weapon and tool. Defensively deployed archers in line used the poll of this axe to hammer wooden stakes into the ground and then sharpened the still-exposed upper ends of these stakes by chopping them to points with the blade. Lines of such stakes were primarily intended to serve the archers as protective obstacles against cavalry attack. Battle axe: In its most common form, an arm-length weapon borne in one or both hands. Compared to a sword swing, it delivers more cleaving power against a smaller target area, making it more effective against armour, due to concentrating more of its weight in the axehead. Dagger-axe (Ji or Ge): A Chinese polearm-like weapon with a divided two-part head, composed of the usual straight blade and a scythe-like blade. The straight blade is used to stab or feint, then the foe's body or head may be cut by pulling the scythe-like horizontal blade backwards. The ge has the horizontal blade but sometimes lacks the straight spear point. Dane axe: A long-handled weapon with a large flat blade, often attributed to the Norsemen. Francisca or Frankish axe: A short throwing weapon of the European Migration Period, the name of which may have become attached to the Germanic tribe associated with it: the Franks (see France). Halberd: A spearlike weapon with a hooked poll, effective against mounted cavalry. Head axe: A type of thin-bladed axe with a distinctive shape, specialized for headhunting, from the Cordilleran peoples of the Philippines. Hurlbat: An entirely metal throwing axe sharpened on every auxiliary end to a point or blade, practically guaranteeing some form of damage against its target. Ono: A Japanese weapon wielded by sōhei warrior monks. Panabas: A chopping bladed tool or weapon from the Philippines, often described as a cross between a sword and a battle axe. Parashu: An Indian battle-axe, generally wielded with two hands but which could also be used with only one. It is depicted as the primary weapon of Parashurama, the 6th Avatar of Lord Vishnu in Hinduism. Poleaxe: Designed to defeat plate armour. Its axe (or hammer) head is much narrower than other axes, which accounts for its penetrating power. Sagaris: An ancient weapon used by Scythians. Shepherd's axe: Used by shepherds in the Carpathian Mountains; it could double as a walking stick. Spontoon tomahawk: A French trapper and Iroquois collaboration, this was an axe with a knife-like stabbing blade instead of the familiar wedged shape. Throwing axe: A weapon that was thrown and designed to strike with a similar splitting action as its handheld counterparts. These are often small in profile and usable with one hand. Tomahawk: Used almost exclusively by Native Americans, its blade was originally crafted of stone. Along with the familiar war version, which could be fashioned as a throwing weapon, the pipe tomahawk was a ceremonial and diplomatic tool. Yue: A Chinese weapon with a very large axe blade, which also served as a ceremonial weapon. Axes as tools Double bit axe: A common axe in the ancient world; introduced to America in the 1800s. The heavy head makes it ideal for felling trees.
Often one bit is designated for tasks that would more quickly dull the edge, such as cutting roots through dirt. Firefighter's axe, fire axe, or pick head axe: It has a pick-shaped pointed poll (the area of the head opposite the cutting edge). It is often decorated in vivid colours (usually, the axe head is painted red and the blade remains unpainted) to make it easily visible during an emergency. Its primary use is for breaking down doors and windows. Crash axe: A short, lightweight, handheld emergency chopping tool with a sharp or serrated blade spanning a quarter circle from the axis of the handle, sometimes with a notch in the blade to catch on sheet metal, and often a short pick opposite the blade. This tool or a prybar is required to be carried in the cockpits of most large aircraft (those with 20 or more seats), to quickly chop and pry walls and cabinets to gain access when extinguishing a fire in flight, or to escape when exits are unavailable. A crash axe is sometimes also used by crash rescue firefighter crews to chop through an airplane's sheet-metal skin for a rescue opening; modern crash axes are often made with an electrically insulated handle. British gliders were fitted with escape axes in World War II. Ice axe or climbing axe: A number of different styles of ice axes are designed for ice climbing and enlarging steps used by climbers. Lathe hammer (also known as a lath hammer, lathing hammer, or lathing hatchet): A tool used for cutting and nailing wood lath, which has a small hatchet blade on one side (featuring a small lateral nick used for pulling out nails) and a hammer head on the other. Mattock: A dual-purpose axe, combining an adze and axe blade, or sometimes a pick and adze blade. Pickaxe: An axe with a large pointed end, rather than a flat blade. Sometimes exists as a double-bladed tool with a pick on one side and an axe or adze head on the other. Often used to break up hard material, such as rocks or concrete. Pulaski: An axe with a mattock blade built into the rear of the main axe blade, used for digging ('grubbing out') through and around roots as well as chopping. The pulaski is an indispensable tool used in fighting forest fires, as well as trail-building, brush clearance and similar functions. Slater's axe: An axe for cutting roofing slate, with a long point on the poll for punching nail holes, and with the blade offset laterally from the handle to protect the worker's hand from flying slate chips. Splitting maul: A splitting implement that has evolved from the simple "wedge" design to more complex designs. Some mauls have a conical "axehead"; compound mauls have swivelling "sub-wedges", among other types; others have a heavy wedge-shaped head with a sledgehammer face opposite. Hammer axe Hammer axes (or axe-hammers) typically feature an extended poll, opposite the blade, shaped and sometimes hardened for use as a hammer. The name axe-hammer is often applied to a characteristic shape of perforated stone axe used in the Neolithic and Bronze Ages. Iron axe-hammers are found in Roman military contexts, e.g. Cramond, Edinburgh, and South Shields, Tyne and Wear.
Technology
Hand tools
null
18963754
https://en.wikipedia.org/wiki/Viscosity
Viscosity
Viscosity is a measure of a fluid's rate-dependent resistance to a change in shape or to movement of its neighboring portions relative to one another. For liquids, it corresponds to the informal concept of thickness; for example, syrup has a higher viscosity than water. Viscosity is defined scientifically as a force multiplied by a time divided by an area. Thus its SI units are newton-seconds per square meter, or pascal-seconds. Viscosity quantifies the internal frictional force between adjacent layers of fluid that are in relative motion. For instance, when a viscous fluid is forced through a tube, it flows more quickly near the tube's center line than near its walls. Experiments show that some stress (such as a pressure difference between the two ends of the tube) is needed to sustain the flow. This is because a force is required to overcome the friction between the layers of the fluid which are in relative motion. For a tube with a constant rate of flow, the strength of the compensating force is proportional to the fluid's viscosity. In general, viscosity depends on a fluid's state, such as its temperature, pressure, and rate of deformation. However, the dependence on some of these properties is negligible in certain cases. For example, the viscosity of a Newtonian fluid does not vary significantly with the rate of deformation. Zero viscosity (no resistance to shear stress) is observed only at very low temperatures in superfluids; otherwise, the second law of thermodynamics requires all fluids to have positive viscosity. A fluid that has zero viscosity (non-viscous) is called ideal or inviscid. Among non-Newtonian fluids, there are pseudoplastic, plastic, and dilatant flows that are time-independent, and thixotropic and rheopectic flows that are time-dependent. Etymology The word "viscosity" is derived from the Latin viscum ("mistletoe"); viscum also referred to a viscous glue derived from mistletoe berries. Definitions Dynamic viscosity In materials science and engineering, there is often interest in understanding the forces or stresses involved in the deformation of a material. For instance, if the material were a simple spring, the answer would be given by Hooke's law, which says that the force experienced by a spring is proportional to the distance displaced from equilibrium. Stresses which can be attributed to the deformation of a material from some rest state are called elastic stresses. In other materials, stresses are present which can be attributed to the deformation rate over time. These are called viscous stresses. For instance, in a fluid such as water the stresses which arise from shearing the fluid do not depend on the distance the fluid has been sheared; rather, they depend on how quickly the shearing occurs. Viscosity is the material property which relates the viscous stresses in a material to the rate of change of a deformation (the strain rate). Although it applies to general flows, it is easy to visualize and define in a simple shearing flow, such as a planar Couette flow. In the Couette flow, a fluid is trapped between two infinitely large plates, one fixed and one in parallel motion at constant speed. If the speed of the top plate is low enough (to avoid turbulence), then in steady state the fluid particles move parallel to it, and their speed varies from zero at the bottom plate to the plate speed at the top. Each layer of fluid moves faster than the one just below it, and friction between them gives rise to a force resisting their relative motion.
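As a preview of the force law developed just below (F = μAu/y), the following sketch computes the force needed to drag the top plate in this Couette setup. The water viscosity used (about 1 mPa·s at 20 °C) is the reference value quoted later in this article; the plate geometry is assumed.

```python
def couette_force(mu_pa_s, area_m2, speed_m_s, gap_m):
    """Force required to keep the top plate moving at constant speed in
    planar Couette flow: F = mu * A * u / y."""
    return mu_pa_s * area_m2 * speed_m_s / gap_m

# Water (~1.0e-3 Pa.s at 20 C), a 1 m^2 plate moving at 1 m/s over a 1 mm gap:
print(f"{couette_force(1.0e-3, 1.0, 1.0, 1.0e-3):.2f} N")  # 1.00 N
```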
In particular, the fluid applies on the top plate a force in the direction opposite to its motion, and an equal but opposite force on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. In many fluids, the flow velocity is observed to vary linearly from zero at the bottom to the plate speed u at the top. Moreover, the magnitude of the force F acting on the top plate is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y: F = μAu/y. The proportionality factor μ is the dynamic viscosity of the fluid, often simply referred to as the viscosity. It is denoted by the Greek letter mu (μ). The dynamic viscosity has the dimensions mass/(length·time), therefore resulting in the SI unit Pa·s, which can be read equivalently as pressure multiplied by time, or energy per unit volume multiplied by time. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction parallel to the normal vector of the plates. If the velocity does not vary linearly with y, then the appropriate generalization is τ = μ ∂u/∂y, where τ = F/A and ∂u/∂y is the local shear velocity. This expression is referred to as Newton's law of viscosity. In shearing flows with planar symmetry, it is what defines μ. It is a special case of the general definition of viscosity (see below), which can be expressed in coordinate-free form. Use of the Greek letter mu (μ) for the dynamic viscosity (sometimes also called the absolute viscosity) is common among mechanical and chemical engineers, as well as mathematicians and physicists. However, the Greek letter eta (η) is also used by chemists, physicists, and the IUPAC. The viscosity is sometimes also called the shear viscosity. However, at least one author discourages the use of this terminology, noting that μ can appear in non-shearing flows in addition to shearing flows. Kinematic viscosity In fluid dynamics, it is sometimes more appropriate to work in terms of kinematic viscosity (sometimes also called the momentum diffusivity), defined as the ratio of the dynamic viscosity (μ) over the density of the fluid (ρ). It is usually denoted by the Greek letter nu (ν): ν = μ/ρ. It has the dimensions length²/time, therefore resulting in the SI unit m²/s, which can be read equivalently as specific energy multiplied by time, or energy per unit mass multiplied by time. General definition In very general terms, the viscous stresses in a fluid are defined as those resulting from the relative velocity of different fluid particles. As such, the viscous stresses must depend on spatial gradients of the flow velocity. If the velocity gradients are small, then to a first approximation the viscous stresses depend only on the first derivatives of the velocity. (For Newtonian fluids, this is also a linear dependence.) In Cartesian coordinates, the general relationship can then be written as τ_ij = μ_ijkl ∂v_k/∂x_l, where μ_ijkl is a viscosity tensor that maps the velocity gradient tensor ∂v_k/∂x_l onto the viscous stress tensor τ_ij. Since the indices in this expression can vary from 1 to 3, there are 81 "viscosity coefficients" in total. However, assuming that the (rank-4) viscosity tensor is isotropic reduces these 81 coefficients to three independent parameters, and the further assumption that no viscous forces may arise when the fluid is undergoing simple rigid-body rotation leaves only two independent parameters. The most usual decomposition is in terms of the standard (scalar) viscosity μ and the bulk viscosity κ.
In vector notation this appears as: τ = μ[∇v + (∇v)ᵀ − (2/3)(∇·v)δ] + κ(∇·v)δ, where δ is the unit tensor. This equation can be thought of as a generalized form of Newton's law of viscosity. The bulk viscosity κ (also called volume viscosity) expresses a type of internal friction that resists the shearless compression or expansion of a fluid. Knowledge of κ is frequently not necessary in fluid dynamics problems. For example, an incompressible fluid satisfies ∇·v = 0, and so the term containing κ drops out. Moreover, κ is often assumed to be negligible for gases, since it is exactly zero in a monatomic ideal gas. One situation in which κ can be important is the calculation of energy loss in sound and shock waves, described by Stokes' law of sound attenuation, since these phenomena involve rapid expansions and compressions. The defining equations for viscosity are not fundamental laws of nature, so their usefulness, as well as methods for measuring or calculating the viscosity, must be established using separate means. A potential issue is that viscosity depends, in principle, on the full microscopic state of the fluid, which encompasses the positions and momenta of every particle in the system. Such highly detailed information is typically not available in realistic systems. However, under certain conditions most of this information can be shown to be negligible. In particular, for Newtonian fluids near equilibrium and far from boundaries (bulk state), the viscosity depends only on the space- and time-dependent macroscopic fields (such as temperature and density) defining local equilibrium. Nevertheless, viscosity may still carry a non-negligible dependence on several system properties, such as temperature, pressure, and the amplitude and frequency of any external forcing. Therefore, precision measurements of viscosity are only defined with respect to a specific fluid state. To standardize comparisons among experiments and theoretical models, viscosity data is sometimes extrapolated to ideal limiting cases, such as the zero shear limit, or (for gases) the zero density limit. Momentum transport Transport theory provides an alternative interpretation of viscosity in terms of momentum transport: viscosity is the material property which characterizes momentum transport within a fluid, just as thermal conductivity characterizes heat transport, and (mass) diffusivity characterizes mass transport. This perspective is implicit in Newton's law of viscosity, τ = μ ∂u/∂y, because the shear stress τ has units equivalent to a momentum flux, i.e., momentum per unit time per unit area. Thus, τ can be interpreted as specifying the flow of momentum in the direction transverse to the flow, from one fluid layer to the next. Per Newton's law of viscosity, this momentum flow occurs across a velocity gradient, and the magnitude of the corresponding momentum flux is determined by the viscosity. The analogy with heat and mass transfer can be made explicit. Just as heat flows from high temperature to low temperature and mass flows from high density to low density, momentum flows from high velocity to low velocity. These behaviors are all described by compact expressions, called constitutive relations, whose one-dimensional forms are given here: J = −D ∂ρ/∂x (Fick's law of diffusion), q = −k_t ∂T/∂x (Fourier's law of heat conduction), and τ = μ ∂u/∂y (Newton's law of viscosity), where ρ is the density, J and q are the mass and heat fluxes, and D and k_t are the mass diffusivity and thermal conductivity. The fact that mass, momentum, and energy (heat) transport are among the most relevant processes in continuum mechanics is not a coincidence: these are among the few physical quantities that are conserved at the microscopic level in interparticle collisions.
Thus, rather than being dictated by the fast and complex microscopic interaction timescale, their dynamics occurs on macroscopic timescales, as described by the various equations of transport theory and hydrodynamics. Newtonian and non-Newtonian fluids Newton's law of viscosity is not a fundamental law of nature, but rather a constitutive equation (like Hooke's law, Fick's law, and Ohm's law) which serves to define the viscosity μ. Its form is motivated by experiments which show that, for a wide range of fluids, μ is independent of strain rate. Such fluids are called Newtonian. Gases, water, and many common liquids can be considered Newtonian in ordinary conditions and contexts. However, there are many non-Newtonian fluids that significantly deviate from this behavior. For example: Shear-thickening (dilatant) liquids, whose viscosity increases with the rate of shear strain. Shear-thinning liquids, whose viscosity decreases with the rate of shear strain. Thixotropic liquids, that become less viscous over time when shaken, agitated, or otherwise stressed. Rheopectic liquids, that become more viscous over time when shaken, agitated, or otherwise stressed. Bingham plastics, that behave as a solid at low stresses but flow as a viscous fluid at high stresses. Trouton's ratio is the ratio of extensional viscosity to shear viscosity; for a Newtonian fluid, the Trouton ratio is 3. Shear-thinning liquids are very commonly, but misleadingly, described as thixotropic. Viscosity may also depend on the fluid's physical state (temperature and pressure) and other, external, factors. For gases and other compressible fluids, it depends on temperature and varies very slowly with pressure. The viscosity of some fluids may depend on other factors. A magnetorheological fluid, for example, becomes thicker when subjected to a magnetic field, possibly to the point of behaving like a solid. In solids The viscous forces that arise during fluid flow are distinct from the elastic forces that occur in a solid in response to shear, compression, or extension stresses. While in the latter the stress is proportional to the amount of shear deformation, in a fluid it is proportional to the rate of deformation over time. For this reason, James Clerk Maxwell used the term fugitive elasticity for fluid viscosity. However, many liquids (including water) will briefly react like elastic solids when subjected to sudden stress. Conversely, many "solids" (even granite) will flow like liquids, albeit very slowly, even under arbitrarily small stress. Such materials are best described as viscoelastic—that is, possessing both elasticity (reaction to deformation) and viscosity (reaction to rate of deformation). Viscoelastic solids may exhibit both shear viscosity and bulk viscosity. The extensional viscosity is a linear combination of the shear and bulk viscosities that describes the reaction of a solid elastic material to elongation. It is widely used for characterizing polymers. In geology, earth materials that exhibit viscous deformation at least three orders of magnitude greater than their elastic deformation are sometimes called rheids. Measurement Viscosity is measured with various types of viscometers and rheometers. Close temperature control of the fluid is essential to obtain accurate measurements, particularly in materials like lubricants, whose viscosity can double with a change of only 5 °C.
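One common way to quantify the shear-thinning and shear-thickening behaviors listed above is the power-law (Ostwald-de Waele) model τ = K·γ̇ⁿ, with n < 1 for shear-thinning and n > 1 for shear-thickening fluids. The article itself does not prescribe this model; the sketch below, using synthetic data, shows how the exponent n could be fit from stress/shear-rate measurements.

```python
import math

def fit_power_law(shear_rates, stresses):
    """Least-squares fit of tau = K * gamma**n in log-log space."""
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(t) for t in stresses]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return math.exp(my - n * mx), n  # (K, n)

# Synthetic shear-thinning data generated from tau = 2.0 * gamma**0.5:
K, n = fit_power_law([1.0, 10.0, 100.0], [2.0, 6.32, 20.0])
kind = "shear-thinning" if n < 1 else "shear-thickening" if n > 1 else "Newtonian"
print(f"K = {K:.2f}, n = {n:.2f} ({kind})")
```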
A rheometer is used for fluids that cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured than is the case for a viscometer. For some fluids, the viscosity is constant over a wide range of shear rates (Newtonian fluids). The fluids without a constant viscosity (non-Newtonian fluids) cannot be described by a single number. Non-Newtonian fluids exhibit a variety of different correlations between shear stress and shear rate. One of the most common instruments for measuring kinematic viscosity is the glass capillary viscometer. In coating industries, viscosity may be measured with a cup in which the efflux time is measured. There are several sorts of cup—such as the Zahn cup and the Ford viscosity cup—with the usage of each type varying mainly according to the industry. Also used in coatings, a Stormer viscometer employs load-based rotation to determine viscosity. The viscosity is reported in Krebs units (KU), which are unique to Stormer viscometers. Vibrating viscometers can also be used to measure viscosity. Resonant, or vibrational, viscometers work by creating shear waves within the liquid. In this method, the sensor is submerged in the fluid and is made to resonate at a specific frequency. As the surface of the sensor shears through the liquid, energy is lost due to its viscosity. This dissipated energy is then measured and converted into a viscosity reading. A higher viscosity causes a greater loss of energy. Extensional viscosity can be measured with various rheometers that apply extensional stress. Volume viscosity can be measured with an acoustic rheometer. Apparent viscosity is a calculation derived from tests performed on drilling fluid used in oil or gas well development. These calculations and tests help engineers develop and maintain the properties of the drilling fluid to the specifications required. Nanoviscosity (viscosity sensed by nanoprobes) can be measured by fluorescence correlation spectroscopy. Units The SI unit of dynamic viscosity is the newton-second per square meter (N·s/m²), also frequently expressed in the equivalent forms pascal-second (Pa·s), kilogram per meter per second (kg·m⁻¹·s⁻¹) and poiseuille (Pl). The CGS unit is the poise (P, or g·cm⁻¹·s⁻¹ = 0.1 Pa·s), named after Jean Léonard Marie Poiseuille. It is commonly expressed, particularly in ASTM standards, as centipoise (cP). The centipoise is convenient because the viscosity of water at 20 °C is about 1 cP, and one centipoise is equal to the SI millipascal-second (mPa·s). The SI unit of kinematic viscosity is the square meter per second (m²/s), whereas the CGS unit for kinematic viscosity is the stokes (St, or cm²·s⁻¹ = 0.0001 m²·s⁻¹), named after Sir George Gabriel Stokes. In U.S. usage, stoke is sometimes used as the singular form. The submultiple centistokes (cSt) is often used instead: 1 cSt = 1 mm²·s⁻¹ = 10⁻⁶ m²·s⁻¹. 1 cSt is 1 cP divided by 1000 kg/m³, which is close to the density of water; accordingly, the kinematic viscosity of water at 20 °C is about 1 cSt. The most frequently used systems of US customary, or Imperial, units are the British Gravitational (BG) and English Engineering (EE). In the BG system, dynamic viscosity has units of pound-seconds per square foot (lb·s/ft²), and in the EE system it has units of pound-force-seconds per square foot (lbf·s/ft²). The pound and pound-force are equivalent; the two systems differ only in how force and mass are defined.
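The metric conversions above reduce to a handful of constant factors, collected in this short sketch; it also shows the μ-to-ν conversion (kinematic = dynamic/density) that makes 1 cP of water correspond to roughly 1 cSt.

```python
PA_S_PER_P = 0.1      # 1 poise = 0.1 Pa.s
PA_S_PER_CP = 1e-3    # 1 centipoise = 1 mPa.s
M2_S_PER_ST = 1e-4    # 1 stokes = 1e-4 m^2/s
M2_S_PER_CST = 1e-6   # 1 centistokes = 1 mm^2/s

def kinematic_viscosity(mu_pa_s, rho_kg_m3):
    """nu = mu / rho, the definition given earlier in this article."""
    return mu_pa_s / rho_kg_m3

mu_water = 1.0 * PA_S_PER_CP               # ~1 cP at 20 C
nu = kinematic_viscosity(mu_water, 998.0)  # water density ~998 kg/m^3 at 20 C
print(f"{nu / M2_S_PER_CST:.2f} cSt")      # ~1.00 cSt
```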
In the BG system the pound is a basic unit from which the unit of mass (the slug) is defined by Newton's Second Law, whereas in the EE system the units of force and mass (the pound-force and pound-mass respectively) are defined independently through the Second Law using the proportionality constant g_c. Kinematic viscosity has units of square feet per second (ft²/s) in both the BG and EE systems. Nonstandard units include the reyn (lbf·s/in²), a British unit of dynamic viscosity. In the automotive industry the viscosity index is used to describe the change of viscosity with temperature. The reciprocal of viscosity is fluidity, usually symbolized by φ or F, depending on the convention used, measured in reciprocal poise (P⁻¹, or cm·s·g⁻¹), sometimes called the rhe. Fluidity is seldom used in engineering practice. At one time the petroleum industry relied on measuring kinematic viscosity by means of the Saybolt viscometer, and expressing kinematic viscosity in units of Saybolt universal seconds (SUS). Other abbreviations such as SSU (Saybolt seconds universal) or SUV (Saybolt universal viscosity) are sometimes used. Kinematic viscosity in centistokes can be converted from SUS according to the arithmetic and the reference table provided in ASTM D 2161. Molecular origins Momentum transport in gases is mediated by discrete molecular collisions, and in liquids by attractive forces that bind molecules close together. Because of this, the dynamic viscosities of liquids are typically much larger than those of gases. In addition, viscosity tends to increase with temperature in gases and decrease with temperature in liquids. Above the liquid-gas critical point, the liquid and gas phases are replaced by a single supercritical phase. In this regime, the mechanisms of momentum transport interpolate between liquid-like and gas-like behavior. For example, along a supercritical isobar (constant-pressure surface), the kinematic viscosity decreases at low temperature and increases at high temperature, with a minimum in between. A rough estimate for the value at the minimum is ν_min = (1/4π)·ħ/√(m_e·m), where ħ is the reduced Planck constant, m_e is the electron mass, and m is the molecular mass. In general, however, the viscosity of a system depends in detail on how the molecules constituting the system interact, and there are no simple but correct formulas for it. The simplest exact expressions are the Green–Kubo relations for the linear shear viscosity or the transient time correlation function expressions derived by Evans and Morriss in 1988. Although these expressions are each exact, calculating the viscosity of a dense fluid using these relations currently requires the use of molecular dynamics computer simulations. Somewhat more progress can be made for a dilute gas, as elementary assumptions about how gas molecules move and interact lead to a basic understanding of the molecular origins of viscosity. More sophisticated treatments can be constructed by systematically coarse-graining the equations of motion of the gas molecules. An example of such a treatment is Chapman–Enskog theory, which derives expressions for the viscosity of a dilute gas from the Boltzmann equation. Pure gases Elementary calculation of viscosity for a dilute gas: Consider a dilute gas moving parallel to the x-axis with a velocity u(y) that depends only on the y coordinate. To simplify the discussion, the gas is assumed to have uniform temperature and density.
Under these assumptions, the velocity of a molecule passing through a given plane y = y₀ is equal to whatever velocity that molecule had when its mean free path λ began. Because λ is typically small compared with macroscopic scales, the average velocity of such a molecule has the form u(y₀ ∓ αλ), where α is a numerical constant on the order of 1. (Estimates of α differ among authors; a more careful calculation for rigid elastic spheres refines its value.) Next, because half the molecules on either side are moving towards y₀, and doing so on average with half the average molecular speed v̄, the momentum flux from either side is (1/4)ρv̄·u(y₀ ∓ αλ). The net momentum flux at y₀ is the difference of the two, (1/2)αρv̄λ·(du/dy). According to the definition of viscosity, this momentum flux should be equal to μ(du/dy), which leads to μ = (1/2)αρv̄λ. Viscosity in gases arises principally from the molecular diffusion that transports momentum between layers of flow. An elementary calculation for a dilute gas at temperature T and density ρ gives μ = αρλ√(2k_BT/(πm)), where k_B is the Boltzmann constant, m the molecular mass, and α a numerical constant on the order of 1. The quantity λ, the mean free path, measures the average distance a molecule travels between collisions. Even without a priori knowledge of α, this expression has nontrivial implications. In particular, since λ is typically inversely proportional to density while the mean molecular speed increases with temperature, μ itself should increase with temperature and be independent of density at fixed temperature. In fact, both of these predictions persist in more sophisticated treatments, and accurately describe experimental observations. By contrast, liquid viscosity typically decreases with temperature. For rigid elastic spheres of diameter d, λ can be computed, giving μ = (α/π^(3/2))·√(m·k_B·T)/d². In this case λ is independent of temperature, so μ ∝ T^(1/2). For more complicated molecular models, however, λ depends on temperature in a non-trivial way, and simple kinetic arguments as used here are inadequate. More fundamentally, the notion of a mean free path becomes imprecise for particles that interact over a finite range, which limits the usefulness of the concept for describing real-world gases. Chapman–Enskog theory A technique developed by Sydney Chapman and David Enskog in the early 1900s allows a more refined calculation of μ. It is based on the Boltzmann equation, which provides a statistical description of a dilute gas in terms of intermolecular interactions. The technique allows accurate calculation of μ for molecular models that are more realistic than rigid elastic spheres, such as those incorporating intermolecular attractions. Doing so is necessary to reproduce the correct temperature dependence of μ, which experiments show increases more rapidly than the T^(1/2) trend predicted for rigid elastic spheres. Indeed, the Chapman–Enskog analysis shows that the predicted temperature dependence can be tuned by varying the parameters in various molecular models. A simple example is the Sutherland model, which describes rigid elastic spheres with weak mutual attraction. In such a case, the attractive force can be treated perturbatively, which leads to a simple expression for μ in which the Sutherland constant S is independent of temperature, being determined only by the parameters of the intermolecular attraction. To connect with experiment, it is convenient to rewrite this as μ = μ₀·(T/T₀)^(3/2)·(T₀ + S)/(T + S), where μ₀ is the viscosity at temperature T₀. This expression is usually named Sutherland's formula. If μ is known from experiments at T₀ and at least one other temperature, then S can be calculated. Expressions for μ obtained in this way are qualitatively accurate for a number of simple gases.
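Sutherland's formula, as rewritten above, is straightforward to evaluate. The sketch below uses commonly quoted reference values for air (μ₀ ≈ 1.716 × 10⁻⁵ Pa·s at T₀ = 273.15 K, S ≈ 110.4 K); these constants are standard textbook figures assumed for illustration, not values given in this article.

```python
def sutherland_viscosity(T, mu0=1.716e-5, T0=273.15, S=110.4):
    """Sutherland's formula: mu = mu0 * (T/T0)**1.5 * (T0 + S) / (T + S).
    Default constants are commonly quoted values for air (assumed)."""
    return mu0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

for T in (250.0, 300.0, 400.0):
    print(f"T = {T:5.1f} K -> mu ~ {sutherland_viscosity(T) * 1e6:.1f} uPa.s")
# At 300 K this gives ~18.5 uPa.s, matching the air value quoted later in the article.
```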
Slightly more sophisticated models, such as the Lennard-Jones potential, or the more flexible Mie potential, may provide better agreement with experiments, but only at the cost of a more opaque dependence on temperature. A further advantage of these more complex interaction potentials is that they can be used to develop accurate models for a wide variety of properties using the same potential parameters. In situations where little experimental data is available, this makes it possible to obtain model parameters from fitting to properties such as pure-fluid vapour-liquid equilibria, before using the parameters thus obtained to predict the viscosities of interest with reasonable accuracy. In some systems, the assumption of spherical symmetry must be abandoned, as is the case for vapors with highly polar molecules like H₂O. In these cases, the Chapman–Enskog analysis is significantly more complicated. Bulk viscosity In the kinetic-molecular picture, a non-zero bulk viscosity arises in gases whenever there are non-negligible relaxational timescales governing the exchange of energy between the translational energy of molecules and their internal energy, e.g. rotational and vibrational. As such, the bulk viscosity is zero for a monatomic ideal gas, in which the internal energy of molecules is negligible, but is nonzero for a gas like carbon dioxide, whose molecules possess both rotational and vibrational energy. Pure liquids In contrast with gases, there is no simple yet accurate picture for the molecular origins of viscosity in liquids. At the simplest level of description, the relative motion of adjacent layers in a liquid is opposed primarily by attractive molecular forces acting across the layer boundary. In this picture, one (correctly) expects viscosity to decrease with increasing temperature. This is because increasing temperature increases the random thermal motion of the molecules, which makes it easier for them to overcome their attractive interactions. Building on this visualization, a simple theory can be constructed in analogy with the discrete structure of a solid: groups of molecules in a liquid are visualized as forming "cages" which surround and enclose single molecules. These cages can be occupied or unoccupied, and stronger molecular attraction corresponds to stronger cages. Due to random thermal motion, a molecule "hops" between cages at a rate which varies inversely with the strength of molecular attractions. In equilibrium these "hops" are not biased in any direction. On the other hand, in order for two adjacent layers to move relative to each other, the "hops" must be biased in the direction of the relative motion. The force required to sustain this directed motion can be estimated for a given shear rate, leading to μ = (N_A·h/V)·e^(3.8·T_b/T) (1), where N_A is the Avogadro constant, h is the Planck constant, V is the volume of a mole of liquid, and T_b is the normal boiling point. This result has the same form as the well-known empirical relation μ = A·e^(B/T) (2), where A and B are constants fit from data. On the other hand, several authors express caution with respect to this model. Errors as large as 30% can be encountered using equation (1), compared with fitting equation (2) to experimental data. More fundamentally, the physical assumptions underlying equation (1) have been criticized. It has also been argued that the exponential dependence in equation (1) does not necessarily describe experimental observations more accurately than simpler, non-exponential expressions.
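Equation (2) above has only two constants, so it can be pinned down from two measurements and then used to extrapolate, which also makes its limitations easy to see. The sketch below fits A and B for water from two handbook-style viscosity values (assumed inputs) and compares an extrapolation against the measured value.

```python
import math

def fit_arrhenius_like(T1, mu1, T2, mu2):
    """Fit mu = A * exp(B / T) (equation (2) above) through two data points."""
    B = math.log(mu1 / mu2) / (1.0 / T1 - 1.0 / T2)
    A = mu1 * math.exp(-B / T1)
    return A, B

# Assumed inputs: water at ~1.79 mPa.s (0 C) and ~0.89 mPa.s (25 C).
A, B = fit_arrhenius_like(273.15, 1.79e-3, 298.15, 0.89e-3)
mu_50C = A * math.exp(B / 323.15)
print(f"predicted at 50 C: {mu_50C * 1e3:.2f} mPa.s")  # ~0.49, vs ~0.55 measured
# The ~10% miss at 50 C illustrates the caution expressed above about
# exponential fits for liquid viscosity.
```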
In light of these shortcomings, the development of a less ad hoc model is a matter of practical interest. Foregoing simplicity in favor of precision, it is possible to write rigorous expressions for viscosity starting from the fundamental equations of motion for molecules. A classic example of this approach is Irving–Kirkwood theory. On the other hand, such expressions are given as averages over multiparticle correlation functions and are therefore difficult to apply in practice. In general, empirically derived expressions (based on existing viscosity measurements) appear to be the only consistently reliable means of calculating viscosity in liquids. Changes in the local atomic structure observed in undercooled liquids on cooling below the equilibrium melting temperature, in terms of either the radial distribution function g(r) or the structure factor S(Q), are found to be directly responsible for liquid fragility: the deviation of the temperature dependence of the viscosity of the undercooled liquid from the Arrhenius equation (2), through modification of the activation energy for viscous flow. At the same time, equilibrium liquids follow the Arrhenius equation. Mixtures and blends Gaseous mixtures The same molecular-kinetic picture of a single component gas can also be applied to a gaseous mixture. For instance, in the Chapman–Enskog approach the viscosity of a binary mixture of gases can be written in terms of the individual component viscosities, their respective volume fractions, and the intermolecular interactions. As for the single-component gas, the dependence of the mixture viscosity on the parameters of the intermolecular interactions enters through various collisional integrals which may not be expressible in closed form. To obtain usable expressions for the mixture viscosity which reasonably match experimental data, the collisional integrals may be computed numerically or from correlations. In some cases, the collision integrals are regarded as fitting parameters, and are fitted directly to experimental data. This is a common approach in the development of reference equations for gas-phase viscosities. An example of such a procedure is the Sutherland approach for the single-component gas, discussed above. For gas mixtures consisting of simple molecules, Revised Enskog Theory has been shown to accurately represent both the density and temperature dependence of the viscosity over a wide range of conditions. Blends of liquids As for pure liquids, the viscosity of a blend of liquids is difficult to predict from molecular principles. One method is to extend the molecular "cage" theory presented above for a pure liquid. This can be done with varying levels of sophistication. One expression resulting from such an analysis is the Lederer–Roegiers equation for a binary mixture: ln η_blend = (x₁ ln η₁ + α x₂ ln η₂)/(x₁ + α x₂), where α is an empirical parameter, and x₁, x₂ and η₁, η₂ are the respective mole fractions and viscosities of the component liquids. Since blending is an important process in the lubricating and oil industries, a variety of empirical and proprietary equations exist for predicting the viscosity of a blend. Solutions and suspensions Aqueous solutions Depending on the solute and range of concentration, an aqueous electrolyte solution can have either a larger or smaller viscosity compared with pure water at the same temperature and pressure. For instance, a 20% saline (sodium chloride) solution has viscosity over 1.5 times that of pure water, whereas a 20% potassium iodide solution has viscosity about 0.91 times that of pure water.
An idealized model of dilute electrolytic solutions leads to the following prediction for the viscosity η_s of a solution: η_s = η₀(1 + A√c), where η₀ is the viscosity of the solvent, c is the concentration, and A is a positive constant which depends on both solvent and solute properties. However, this expression is only valid for very dilute solutions, having less than 0.1 mol/L. For higher concentrations, additional terms are necessary which account for higher-order molecular correlations: η_s = η₀(1 + A√c + Bc + Cc²), where B and C are fit from data. In particular, a negative value of B is able to account for the decrease in viscosity observed in some solutions. Estimated values of these constants are shown below for sodium chloride and potassium iodide at temperature 25 °C (mol = mole, L = liter). Suspensions In a suspension of solid particles (e.g. micron-size spheres suspended in oil), an effective viscosity η_eff can be defined in terms of stress and strain components which are averaged over a volume large compared with the distance between the suspended particles, but small with respect to macroscopic dimensions. Such suspensions generally exhibit non-Newtonian behavior. However, for dilute systems in steady flows, the behavior is Newtonian and expressions for η_eff can be derived directly from the particle dynamics. In a very dilute system, with small volume fraction φ, interactions between the suspended particles can be ignored. In such a case one can explicitly calculate the flow field around each particle independently, and combine the results to obtain η_eff. For spheres, this results in Einstein's effective viscosity formula: η_eff = η₀(1 + (5/2)φ), where η₀ is the viscosity of the suspending liquid. The linear dependence on φ is a consequence of neglecting interparticle interactions. For dilute systems in general, one expects η_eff to take the form η_eff = η₀(1 + Bφ), where the coefficient B may depend on the particle shape (e.g. spheres, rods, disks). Experimental determination of the precise value of B is difficult, however: even the prediction B = 5/2 for spheres has not been conclusively validated, with various experiments finding a spread of values. This deficiency has been attributed to difficulty in controlling experimental conditions. In denser suspensions, η_eff acquires a nonlinear dependence on φ, which indicates the importance of interparticle interactions. Various analytical and semi-empirical schemes exist for capturing this regime. At the most basic level, a term quadratic in φ is added: η_eff = η₀(1 + Bφ + B₁φ²), and the coefficient B₁ is fit from experimental data or approximated from the microscopic theory. However, some authors advise caution in applying such simple formulas, since non-Newtonian behavior appears in dense suspensions (above moderate volume fractions for spheres), or in suspensions of elongated or flexible particles. There is a distinction between a suspension of solid particles, described above, and an emulsion. The latter is a suspension of tiny droplets, which themselves may exhibit internal circulation. The presence of internal circulation can decrease the observed effective viscosity, and different theoretical or semi-empirical models must be used. Amorphous materials In the high and low temperature limits, viscous flow in amorphous materials (e.g. in glasses and melts) has the Arrhenius form: η = A·e^(Q/(RT)), where Q is a relevant activation energy, given in terms of molecular parameters; T is temperature; R is the molar gas constant; and A is approximately a constant.
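A minimal sketch of the suspension formulas above: Einstein's dilute-limit coefficient 5/2 comes from the text, while the quadratic coefficient and the solvent viscosity used here are invented placeholders of the kind that would be fit from data.

```python
def suspension_viscosity(mu0, phi, b1=0.0):
    """Effective viscosity of a suspension of spheres:
    mu0 * (1 + 2.5*phi) in the dilute (Einstein) limit, with an optional
    quadratic correction b1*phi**2 for denser systems (b1 is fit from
    data; the value used below is purely illustrative)."""
    return mu0 * (1.0 + 2.5 * phi + b1 * phi ** 2)

mu_oil = 0.1  # Pa.s, assumed suspending-liquid viscosity
for phi in (0.01, 0.05, 0.10):
    print(f"phi = {phi:.2f}: mu_eff ~ {suspension_viscosity(mu_oil, phi, b1=6.0):.4f} Pa.s")
```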
The activation energy Q takes a different value depending on whether the high or low temperature limit is being considered: it changes from a high value at low temperatures (in the glassy state) to a low value at high temperatures (in the liquid state). For intermediate temperatures, Q varies nontrivially with temperature and the simple Arrhenius form fails. On the other hand, the two-exponential equation

\mu = A_1 T \left[1 + A_2 \exp\frac{B}{RT}\right] \left[1 + C \exp\frac{D}{RT}\right],

where A_1, A_2, B, C, D are all constants, provides a good fit to experimental data over the entire range of temperatures, while at the same time reducing to the correct Arrhenius form in the low and high temperature limits. This expression, also known as the Douglas–Doremus–Ojovan model, can be motivated from various theoretical models of amorphous materials at the atomic level.

A two-exponential equation for the viscosity can be derived within the Dyre shoving model of supercooled liquids, where the Arrhenius energy barrier is identified with the high-frequency shear modulus times a characteristic shoving volume. Upon specifying the temperature dependence of the shear modulus via thermal expansion and via the repulsive part of the intermolecular potential, another two-exponential equation is retrieved:

\mu = \mu_0 \exp\left\{\frac{V_c G_g}{k_B T} \exp\left[(2 + \lambda)\,\alpha_T (T_g - T)\right]\right\},

where G_g denotes the high-frequency shear modulus of the material evaluated at a temperature equal to the glass transition temperature T_g, and V_c is the so-called shoving volume, i.e. the characteristic volume of the group of atoms involved in the shoving event by which an atom/molecule escapes from the cage of nearest neighbours, typically on the order of the volume occupied by a few atoms. Furthermore, \alpha_T is the thermal expansion coefficient of the material, and \lambda is a parameter which measures the steepness of the power-law rise of the ascending flank of the first peak of the radial distribution function, and is quantitatively related to the repulsive part of the interatomic potential. Finally, k_B denotes the Boltzmann constant.

Eddy viscosity

In the study of turbulence in fluids, a common practical strategy is to ignore the small-scale vortices (or eddies) in the motion and to calculate a large-scale motion with an effective viscosity, called the "eddy viscosity", which characterizes the transport and dissipation of energy in the smaller-scale flow (see large eddy simulation). In contrast to the viscosity of the fluid itself, which must be positive by the second law of thermodynamics, the eddy viscosity can be negative.

Prediction

Because viscosity depends continuously on temperature and pressure, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available at the temperatures and pressures of interest. This capability is important for thermophysical simulations, in which the temperature and pressure of a fluid can vary continuously with space and time. A similar situation is encountered for mixtures of pure fluids, where the viscosity depends continuously on the concentration ratios of the constituent fluids. For the simplest fluids, such as dilute monatomic gases and their mixtures, ab initio quantum mechanical computations can accurately predict viscosity in terms of fundamental atomic constants, i.e., without reference to existing viscosity measurements. For the special case of dilute helium, uncertainties in the ab initio calculated viscosity are two orders of magnitude smaller than the uncertainties in experimental values. For slightly more complex fluids and mixtures at moderate densities (i.e.
sub-critical densities), Revised Enskog Theory can be used to predict viscosities with some accuracy. Revised Enskog Theory is predictive in the sense that predictions for viscosity can be obtained using parameters fitted to other, pure-fluid thermodynamic or transport properties, thus requiring no a priori experimental viscosity measurements.

For most fluids, high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing viscosity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures and pressures, then it is called a "reference correlation" for that fluid. Reference correlations have been published for many pure fluids; a few examples are water, carbon dioxide, ammonia, benzene, and xenon. Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases. Thermophysical modeling software often relies on reference correlations for predicting viscosity at user-specified temperature and pressure. These correlations may be proprietary. Examples are REFPROP (proprietary) and CoolProp (open-source).

Viscosity can also be computed using formulas that express it in terms of the statistics of individual particle trajectories. These formulas include the Green–Kubo relations for the linear shear viscosity and the transient time correlation function expressions derived by Evans and Morriss in 1988. The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics. An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules.

Selected substances

Observed values of viscosity vary over several orders of magnitude, even for common substances (see the order of magnitude table below). For instance, a 70% sucrose (sugar) solution has a viscosity over 400 times that of water, and 26,000 times that of air. More dramatically, pitch has been estimated to have a viscosity 230 billion times that of water.

Water

The dynamic viscosity \mu of water is about 0.89 mPa·s at room temperature (25 °C). As a function of temperature T in kelvins, the viscosity can be estimated using the semi-empirical Vogel–Fulcher–Tammann equation:

\mu = A \exp\left(\frac{B}{T - C}\right),

where A = 0.02939 mPa·s, B = 507.88 K, and C = 149.3 K. Experimentally determined values of the viscosity are also given in the table below. The values at 20 °C are a useful reference: there, the dynamic viscosity is about 1 cP and the kinematic viscosity is about 1 cSt.

Air

Under standard atmospheric conditions (25 °C and a pressure of 1 bar), the dynamic viscosity of air is 18.5 μPa·s, roughly 50 times smaller than the viscosity of water at the same temperature. Except at very high pressure, the viscosity of air depends mostly on the temperature. Among the many possible approximate formulas for the temperature dependence (see Temperature dependence of viscosity), one is:

\mu_{\text{air}} = 2.791 \times 10^{-7} \, T^{0.7355},

which is accurate in the range −20 °C to 400 °C. For this formula to be valid, the temperature must be given in kelvins; \mu_{\text{air}} then corresponds to the viscosity in Pa·s.

Other common substances

Order of magnitude estimates

The following table illustrates the range of viscosity values observed in common substances. Unless otherwise noted, a temperature of 25 °C and a pressure of 1 atmosphere are assumed.
The values listed are representative estimates only, as they do not account for measurement uncertainties, variability in material definitions, or non-Newtonian behavior.
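The water and air correlations quoted above are simple enough to check directly. The following C# sketch (a minimal illustration, using the constants stated in the text) reproduces the rough figures given earlier: about 0.89 mPa·s for water and about 18.5 μPa·s for air at 25 °C.

using System;

class ViscosityEstimates
{
    // Vogel–Fulcher–Tammann fit for water, using the constants quoted above
    // (A in mPa·s; B and C in kelvins); returns dynamic viscosity in mPa·s.
    static double Water(double tKelvin) =>
        0.02939 * Math.Exp(507.88 / (tKelvin - 149.3));

    // Power-law estimate for air (temperature in kelvins, result in Pa·s),
    // quoted as accurate roughly from −20 °C to 400 °C.
    static double Air(double tKelvin) =>
        2.791e-7 * Math.Pow(tKelvin, 0.7355);

    static void Main()
    {
        Console.WriteLine($"Water at 25 °C: {Water(298.15):F3} mPa·s");   // ≈ 0.891
        Console.WriteLine($"Air at 25 °C: {Air(298.15) * 1e6:F1} µPa·s"); // ≈ 18.5
    }
}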
Physical sciences
Fluid mechanics
null
18963787
https://en.wikipedia.org/wiki/Ion
Ion
An ion is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention, and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons. A cation is a positively charged ion with fewer electrons than protons (e.g. K+, the potassium ion), while an anion is a negatively charged ion with more electrons than protons (e.g. Cl− (chloride ion) and OH− (hydroxide ion)). Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds. If only a + or − is present, it indicates a charge of +1 or −1. To indicate a charge of greater magnitude, the number of additional or missing electrons is supplied, as seen in O₂²⁻ (peroxide, with a 2− charge) and He²⁺ (the alpha particle, with a 2+ charge). Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization.

History of discovery

The word ion was coined from the neuter present participle of Greek ἰέναι (ienai), meaning "to go". A cation is something that moves down (Greek κάτω, kátō, meaning "down") and an anion is something that moves up (Greek ἄνω, ánō, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by the English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that, since metals dissolved into solution at one electrode and new metal came forth from solution at the other electrode, some kind of substance must move through the solution in a current, conveying matter from one place to the other. In correspondence with Faraday, Whewell also coined the words anode and cathode, as well as anion and cation as ions that are attracted to the respective electrodes. Svante Arrhenius put forth, in his 1884 dissertation, the explanation of the fact that solid crystalline salts dissociate into paired charged particles when dissolved, for which he would win the 1903 Nobel Prize in Chemistry. Arrhenius' explanation was that, in forming a solution, the salt dissociates into Faraday's ions; he proposed that ions formed even in the absence of an electric current.

Characteristics

Ions in their gas-like state are highly reactive and will rapidly interact with ions of opposite charge to give neutral molecules or ionic salts. Ions are also produced in the liquid or solid state when salts interact with solvents (for example, water) to produce solvated ions, which are more stable, for reasons involving a combination of energy and entropy changes as the ions move away from each other to interact with the liquid. These stabilized species are more commonly found in the environment at low temperatures.
A common example is the ions present in seawater, which are derived from dissolved salts. As charged objects, ions are attracted to opposite electric charges (positive to negative, and vice versa) and repelled by like charges. When they move, their trajectories can be deflected by a magnetic field. Electrons, due to their smaller mass and thus larger space-filling properties as matter waves, determine the size of atoms and molecules that possess any electrons at all. Thus, anions (negatively charged ions) are larger than the parent molecule or atom, as the excess electron(s) repel each other and add to the physical size of the ion, whose size is determined by its electron cloud. Cations are smaller than the corresponding parent atom or molecule due to the smaller size of the electron cloud. One particular cation (that of hydrogen) contains no electrons, and thus consists of a single proton – much smaller than the parent hydrogen atom.

Anions and cations

Anion (−) and cation (+) indicate the net electric charge on an ion. An ion that has more electrons than protons, giving it a net negative charge, is named an anion, and the minus indication "anion (−)" reflects that negative charge. With a cation it is just the opposite: it has fewer electrons than protons, giving it a net positive charge, hence the indication "cation (+)". Since the electric charge on a proton is equal in magnitude to the charge on an electron, the net electric charge on an ion is equal to the number of protons in the ion minus the number of electrons. An anion (−) (from the Greek word ἄνω (ánō), meaning "up") is an ion with more electrons than protons, giving it a net negative charge (since electrons are negatively charged and protons are positively charged). A cation (+) (from the Greek word κάτω (kátō), meaning "down") is an ion with fewer electrons than protons, giving it a positive charge. There are additional names used for ions with multiple charges. For example, an ion with a −2 charge is known as a dianion and an ion with a +2 charge is known as a dication. A zwitterion is a neutral molecule with positive and negative charges at different locations within that molecule. Cations and anions are measured by their ionic radius, and they differ in relative size: "Cations are small, most of them less than 10⁻¹⁰ m (10⁻⁸ cm) in radius. But most anions are large, as is the most common Earth anion, oxygen. From this fact it is apparent that most of the space of a crystal is occupied by the anion and that the cations fit into the spaces between them." The terms anion and cation (for ions that respectively travel to the anode and cathode during electrolysis) were introduced by Michael Faraday in 1834 following his consultation with William Whewell.

Natural occurrences

Ions are ubiquitous in nature and are responsible for diverse phenomena from the luminescence of the Sun to the existence of the Earth's ionosphere. Atoms in their ionic state may have a different color from neutral atoms, and thus light absorption by metal ions gives the color of gemstones. In both inorganic and organic chemistry (including biochemistry), the interaction of water and ions is often relevant for understanding the properties of systems; an example of their importance is in the breakdown of adenosine triphosphate (ATP), which provides the energy for many reactions in biological systems.

Related technology

Ions can be non-chemically prepared using various ion sources, usually involving high voltage or temperature.
These are used in a multitude of devices such as mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters, and ion engines. As reactive charged particles, they are also used in air purification by disrupting microbes, and in household items such as smoke detectors. As signalling and metabolism in organisms are controlled by a precise ionic gradient across membranes, the disruption of this gradient contributes to cell death. This is a common mechanism exploited by natural and artificial biocides, including the ion channels gramicidin and amphotericin (a fungicide). Inorganic dissolved ions are a component of total dissolved solids, a widely known indicator of water quality.

Detection of ionizing radiation

The ionizing effect of radiation on a gas is extensively used for the detection of radiation such as alpha, beta, gamma, and X-rays. The original ionization event in these instruments results in the formation of an "ion pair" – a positive ion and a free electron – created by the impact of the radiation on the gas molecules. The ionization chamber is the simplest of these detectors, and collects all the charges created by direct ionization within the gas through the application of an electric field. The Geiger–Müller tube and the proportional counter both use a phenomenon known as a Townsend avalanche to multiply the effect of the original ionizing event by means of a cascade effect whereby the free electrons are given sufficient energy by the electric field to release further electrons by ion impact.

Chemistry

Denoting the charged state

When writing the chemical formula for an ion, its net charge is written in superscript immediately after the chemical structure for the molecule/atom. The net charge is written with the magnitude before the sign; that is, a doubly charged cation is indicated as 2+ instead of +2. However, the magnitude of the charge is omitted for singly charged molecules/atoms; for example, the sodium cation is indicated as Na+ and not Na1+. An alternative (and acceptable) way of showing a molecule/atom with multiple charges is by drawing out the signs multiple times; this is often seen with transition metals. Chemists sometimes circle the sign; this is merely ornamental and does not alter the chemical meaning. All three representations (the superscripted numeral, the repeated signs, and the circled signs) shown in the figure are thus equivalent. Monatomic ions are sometimes also denoted with Roman numerals, particularly in spectroscopy; for example, the positively doubly charged example seen above is referred to as Fe III (Fe I for a neutral Fe atom, Fe II for a singly ionized Fe ion). The Roman numeral designates the formal oxidation state of an element, whereas the superscripted Indo-Arabic numerals denote the net charge. The two notations are, therefore, exchangeable for monatomic ions, but the Roman numerals cannot be applied to polyatomic ions. However, it is possible to mix the notations for the individual metal centre with a polyatomic complex, as shown by the uranyl ion example.

Sub-classes

If an ion contains unpaired electrons, it is called a radical ion. Just like uncharged radicals, radical ions are very reactive. Polyatomic ions containing oxygen, such as carbonate and sulfate, are called oxyanions. Molecular ions that contain at least one carbon-to-hydrogen bond are called organic ions. If the charge in an organic ion is formally centred on a carbon, it is termed a carbocation (if positively charged) or carbanion (if negatively charged).
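The notation rules just described are mechanical enough to capture in a few lines of code. The following illustrative C# snippet (names and structure invented for this example) computes the net charge as protons minus electrons and formats it with the magnitude before the sign, omitting the magnitude for singly charged species.

using System;

class IonNotation
{
    // Formats an ion symbol with its net charge: magnitude before the sign,
    // with the magnitude omitted when it equals 1.
    static string Format(string symbol, int protons, int electrons)
    {
        int charge = protons - electrons;   // net charge = protons − electrons
        if (charge == 0) return symbol;     // neutral atom or molecule
        string sign = charge > 0 ? "+" : "-";
        int magnitude = Math.Abs(charge);
        return symbol + (magnitude == 1 ? sign : magnitude + sign);
    }

    static void Main()
    {
        Console.WriteLine(Format("Na", 11, 10)); // Na+  (cation)
        Console.WriteLine(Format("Cl", 17, 18)); // Cl-  (anion)
        Console.WriteLine(Format("Fe", 26, 24)); // Fe2+ (dication)
        Console.WriteLine(Format("O", 8, 8));    // O    (neutral)
    }
}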
Formation

Formation of monatomic ions

Monatomic ions are formed by the gain or loss of electrons to the valence shell (the outermost electron shell) of an atom. The inner shells of an atom are filled with electrons that are tightly bound to the positively charged atomic nucleus, and so do not participate in this kind of chemical interaction. The process of gaining or losing electrons from a neutral atom or molecule is called ionization. Atoms can be ionized by bombardment with radiation, but the more usual process of ionization encountered in chemistry is the transfer of electrons between atoms or molecules. This transfer is usually driven by the attaining of stable ("closed shell") electronic configurations. Atoms will gain or lose electrons depending on which action takes the least energy. For example, a sodium atom, Na, has a single electron in its valence shell, surrounding two stable, filled inner shells of 2 and 8 electrons. Since these filled shells are very stable, a sodium atom tends to lose its extra electron and attain this stable configuration, becoming a sodium cation in the process:

Na → Na+ + e−

On the other hand, a chlorine atom, Cl, has 7 electrons in its valence shell, which is one short of the stable, filled shell with 8 electrons. Thus, a chlorine atom tends to gain an extra electron and attain a stable 8-electron configuration, becoming a chloride anion in the process:

Cl + e− → Cl−

This driving force is what causes sodium and chlorine to undergo a chemical reaction, wherein the "extra" electron is transferred from sodium to chlorine, forming sodium cations and chloride anions. Being oppositely charged, these cations and anions form ionic bonds and combine to form sodium chloride, NaCl, more commonly known as table salt.

Na+ + Cl− → NaCl

Formation of polyatomic and molecular ions

Polyatomic and molecular ions are often formed by the gain or loss of elemental ions such as a proton, H+, in neutral molecules. For example, when ammonia, NH3, accepts a proton, H+ – a process called protonation – it forms the ammonium ion, NH4+. Ammonia and ammonium have the same number of electrons in essentially the same electronic configuration, but ammonium has an extra proton that gives it a net positive charge. Ammonia can also lose an electron to gain a positive charge, forming the radical ion NH3•+. However, this ion is unstable, because it has an incomplete valence shell around the nitrogen atom, making it a very reactive radical ion. Due to the instability of radical ions, polyatomic and molecular ions are usually formed by gaining or losing elemental ions such as H+, rather than gaining or losing electrons. This allows the molecule to preserve its stable electronic configuration while acquiring an electrical charge.

Ionization potential

The energy required to detach an electron in its lowest energy state from an atom or molecule of a gas with less net electric charge is called the ionization potential, or ionization energy. The nth ionization energy of an atom is the energy required to detach its nth electron after the first n − 1 electrons have already been detached. Each successive ionization energy is markedly greater than the last. Particularly great increases occur after any given block of atomic orbitals is exhausted of electrons. For this reason, ions tend to form in ways that leave them with full orbital blocks. For example, sodium has one valence electron in its outermost shell, so in ionized form it is commonly found with one lost electron, as Na+.
On the other side of the periodic table, chlorine has seven valence electrons, so in ionized form it is commonly found with one gained electron, as Cl−. Caesium has the lowest measured ionization energy of all the elements and helium has the greatest. In general, the ionization energy of metals is much lower than the ionization energy of nonmetals, which is why, in general, metals will lose electrons to form positively charged ions and nonmetals will gain electrons to form negatively charged ions.

Ionic bonding

Ionic bonding is a kind of chemical bonding that arises from the mutual attraction of oppositely charged ions. Ions of like charge repel each other, and ions of opposite charge attract each other. Therefore, ions do not usually exist on their own, but will bind with ions of opposite charge to form a crystal lattice. The resulting compound is called an ionic compound, and is said to be held together by ionic bonding. In ionic compounds there arise characteristic distances between ion neighbours, from which the spatial extension and the ionic radius of individual ions may be derived. The most common type of ionic bonding is seen in compounds of metals and nonmetals (except noble gases, which rarely form chemical compounds). Metals are characterized by having a small number of electrons in excess of a stable, closed-shell electronic configuration. As such, they have the tendency to lose these extra electrons in order to attain a stable configuration. This property is known as electropositivity. Non-metals, on the other hand, are characterized by having an electron configuration just a few electrons short of a stable configuration. As such, they have the tendency to gain more electrons in order to achieve a stable configuration. This tendency is known as electronegativity. When a highly electropositive metal is combined with a highly electronegative nonmetal, the extra electrons from the metal atoms are transferred to the electron-deficient nonmetal atoms. This reaction produces metal cations and nonmetal anions, which are attracted to each other to form a salt.

Common ions
Physical sciences
Atomic physics
null
2355918
https://en.wikipedia.org/wiki/Ethnobiology
Ethnobiology
Ethnobiology is the multidisciplinary field of study of relationships among peoples, biota, and environments, integrating many perspectives from the social, biological, and medical sciences, along with application to conservation and sustainable development. The diversity of perspectives in ethnobiology allows for examining complex, dynamic interactions between human and natural systems.

History

Beginnings (15th century–19th century)

Biologists have been interested in local biological knowledge since the time Europeans started colonising the world, from the 15th century onwards. Paul Sillitoe wrote that:

Local biological knowledge, collected and sampled over these early centuries, significantly informed the early development of modern biology: during the 17th century, Georg Eberhard Rumphius benefited from local biological knowledge in producing his catalogue, "Herbarium Amboinense", covering more than 1,200 species of the plants of Indonesia; during the 18th century, Carl Linnaeus relied upon Rumphius's work, and also corresponded with other people all around the world when developing the biological classification scheme that now underlies the arrangement of much of the accumulated knowledge of the biological sciences; and during the 19th century, Charles Darwin, the 'father' of evolutionary theory, took an interest, on his Voyage of the Beagle, in the local biological knowledge of the peoples he encountered.

Phase I (1900s–1940s)

Ethnobiology itself, as a distinctive practice, only emerged during the 20th century as part of the records then being made about other peoples and other cultures. As a practice, it was nearly always ancillary to other pursuits when documenting others' languages, folklore, and natural resource use. Roy Ellen commented that:

This 'first phase' in the development of ethnobiology as a practice has been described as still having an essentially utilitarian purpose, often focusing on identifying those 'native' plants, animals, and technologies of some potential use and value within increasingly dominant western economic systems.

Phase II (1950s–1970s)

Arising out of practices in Phase I (above) came a 'second phase' in the development of ethnobiology, with researchers now striving to better document and better understand how other peoples themselves "conceptualize and categorize" the natural world around them. In Sillitoe's words:

This 'second' phase is marked: in North America (mid 1950s), by Harold Conklin completing his doctorate entitled "The relation of Hanunóo culture to the plant world"; in Britain (mid 1960s), by the publication of Claude Lévi-Strauss' book The Savage Mind, legitimating "folk biological classification" as a worthy cross-cultural research endeavour; and in France (mid 1970s), by André-Georges Haudricourt's linguistic studies of botanical nomenclature and R. Porteres' and others' work in economic biology.

Present (1980s–2000s)

By the turn of the 21st century, ethnobiological practices, research, and findings have had a significant impact and influence across a number of fields of biological inquiry, including ecology, conservation biology, development studies, and political ecology. The Society of Ethnobiology advises on its web page: Ethnobiology is a rapidly growing field of research, gaining professional, student, and public interest ...
internationally. Ethnobiology has come out from its place as an ancillary practice in the shadows of other core pursuits to arise as a whole field of inquiry and research in its own right: taught within many tertiary institutions and educational programs around the world, with its own methods manuals, its own readers, and its own textbooks.

Subjects of inquiry

Usage

All societies make use of the biological world in which they are situated, but there are wide differences in use, informed by perceived need, available technology, and the culture's sense of morality and sustainability. Ethnobiologists investigate what lifeforms are used for what purposes, the particular techniques of use, the reasons for these choices, and the symbolic and spiritual implications of them.

Taxonomy

Different societies divide the living world up in different ways. Ethnobiologists attempt to record the words used in particular cultures for living things, from the most specific terms (analogous to species names in Linnaean biology) to more general terms (such as 'tree' and, even more generally, 'plant'). They also try to understand the overall structure or hierarchy of the classification system (if there is one; there is ongoing debate as to whether there must always be an implied hierarchy).

Cosmological, moral and spiritual significance

Societies invest themselves and their world with meaning partly through their answers to questions like "how did the world happen?", "how and why did people come to be?", "what are proper practices, and why?", and "what realities exist beyond or behind our physical experience?" Understanding these elements of a society's perspective is important to cultural research in general, and ethnobiologists investigate how a society's view of the natural world informs and is informed by them.

Traditional ecological knowledge

In order to live effectively in a given place, a people needs to understand the particulars of their environment, and many traditional societies have complex and subtle understandings of the places in which they live. Ethnobiologists seek to share in these understandings, subject to ethical concerns regarding intellectual property and cultural appropriation.

Cross-cultural ethnobiology

In cross-cultural ethnobiology research, two or more communities participate simultaneously. This enables the researcher to compare how a bio-resource is used by different communities.

Subdisciplines

Ethnobotany

Ethnobotany investigates the relationship between human societies and plants: how humans use plants – as food, technology, medicine, and in ritual contexts; how they view and understand them; and their symbolic and spiritual role in a culture.

Ethnozoology

The subfield ethnozoology focuses on the relationship between humans and other animals throughout human history. It studies human practices such as hunting, fishing and animal husbandry in space and time, and human perspectives about animals, such as their place in the moral and spiritual realms.

Ethnoecology

Ethnoecology refers to an increasingly dominant 'ethnobiological' research paradigm focused, primarily, on documenting, describing, and understanding how other peoples perceive, manage, and use whole ecosystems.

Other disciplines

Studies and writings within ethnobiology draw upon research from fields including archaeology, geography, linguistics, systematics, population biology, ecology, cultural anthropology, ethnography, pharmacology, nutrition, conservation, and sustainable development.
Ethics Through much of the history of ethnobiology, its practitioners were primarily from dominant cultures, and the benefit of their work often accrued to the dominant culture, with little control or benefit invested in the indigenous peoples whose practice and knowledge they recorded. Just as many of those indigenous societies work to assert legitimate control over physical resources such as traditional lands or artistic and ritual objects, many work to assert legitimate control over their intellectual property. In an age when the potential exists for large profits from the discovery of, for example, new food crops or medicinal plants, modern ethnobiologists must consider intellectual property rights, the need for informed consent, the potential for harm to informants, and their "debt to the societies in which they work". Furthermore, these questions must be considered not only in light of western industrialized nations' common understanding of ethics and law, but also in light of the ethical and legal standards of the societies from which the ethnobiologist draws information.
Biology and health sciences
Biology basics
Biology
2356196
https://en.wikipedia.org/wiki/C%20Sharp%20%28programming%20language%29
C Sharp (programming language)
C# is a general-purpose, high-level programming language supporting multiple paradigms: it encompasses static typing, strong typing, lexical scoping, and imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. The principal inventors of the C# programming language were Anders Hejlsberg, Scott Wiltamuth, and Peter Golde of Microsoft. It was first widely distributed in July 2000 and was later approved as an international standard by Ecma (ECMA-334) in 2002 and by ISO/IEC (ISO/IEC 23270 and 20619) in 2003. Microsoft introduced C# along with the .NET Framework and Visual Studio, both of which were closed-source. At the time, Microsoft had no open-source products. Four years later, in 2004, a free and open-source project called Mono began, providing a cross-platform compiler and runtime environment for the C# programming language. A decade later, Microsoft released Visual Studio Code (code editor), Roslyn (compiler), and the unified .NET platform (software framework), all of which support C# and are free, open-source, and cross-platform. Mono also joined Microsoft but was not merged into .NET. As of 2024, the most recent stable version of the language is C# 13.0, which was released in 2024 with .NET 9.0.

Design goals

The Ecma standard lists these design goals for C#:

The language is intended to be a simple, modern, general-purpose, object-oriented programming language.
The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
The language is intended for use in developing software components suitable for deployment in distributed environments.
Portability is very important for source code and programmers, especially those already familiar with C and C++.
Support for internationalization is very important.
C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

History

During the development of the .NET Framework, the class libraries were originally written using a managed code compiler system named Simple Managed C (SMC). In January 1999, Anders Hejlsberg formed a team to build a new language, at the time called Cool, which stood for "C-like Object Oriented Language". Microsoft had considered keeping the name "Cool" as the final name of the language, but chose not to do so for trademark reasons. By the time the .NET project was publicly announced at the July 2000 Professional Developers Conference, the language had been renamed C#, and the class libraries and ASP.NET runtime had been ported to C#. Hejlsberg is C#'s principal designer and lead architect at Microsoft, and was previously involved with the design of Turbo Pascal, Embarcadero Delphi (formerly CodeGear Delphi, Inprise Delphi and Borland Delphi), and Visual J++. In interviews and technical papers, he has stated that flaws in most major programming languages (e.g.
C++, Java, Delphi, and Smalltalk) drove the fundamentals of the Common Language Runtime (CLR), which, in turn, drove the design of the C# language. James Gosling, who created the Java programming language in 1994, and Bill Joy, a co-founder of Sun Microsystems, the originator of Java, called C# an "imitation" of Java; Gosling further said that "[C# is] sort of Java with reliability, productivity and security deleted." In July 2000, Hejlsberg said that C# is "not a Java clone" and is "much closer to C++" in its design. Since the release of C# 2.0 in November 2005, the C# and Java languages have evolved on increasingly divergent trajectories, becoming two quite different languages. One of the first major departures came with the addition of generics to both languages, with vastly different implementations. C# makes use of reification to provide "first-class" generic objects that can be used like any other class, with code generation performed at class-load time. Furthermore, C# has added several major features to accommodate functional-style programming, culminating in the LINQ extensions released with C# 3.0 and its supporting framework of lambda expressions, extension methods, and anonymous types. These features enable C# programmers to use functional programming techniques, such as closures, when it is advantageous to their application. The LINQ extensions and the functional imports help developers reduce the amount of boilerplate code that is included in common tasks like querying a database, parsing an XML file, or searching through a data structure, shifting the emphasis onto the actual program logic to help improve readability and maintainability.

C# used to have a mascot called Andy (named after Anders Hejlsberg). It was retired on January 29, 2004.

C# was originally submitted to the ISO/IEC JTC 1 subcommittee SC 22 for review under ISO/IEC 23270:2003. That standard was subsequently withdrawn and replaced by the approved ISO/IEC 23270:2006, which was in turn withdrawn and replaced by the approved ISO/IEC 23270:2018.

Name

Microsoft first used the name C# in 1988 for a variant of the C language designed for incremental compilation. That project was not completed, and the name was later reused. The name "C sharp" was inspired by the musical notation whereby a sharp symbol indicates that the written note should be made a semitone higher in pitch. This is similar to the language name of C++, where "++" indicates that a variable should be incremented by 1 after being evaluated. The sharp symbol also resembles a ligature of four "+" symbols (in a two-by-two grid), further implying that the language is an increment of C++. Due to technical limits of display (standard fonts, browsers, etc.), and the fact that most keyboard layouts lack a sharp symbol (♯), the number sign (#) was chosen to approximate the sharp symbol in the written name of the programming language. This convention is reflected in the ECMA-334 C# Language Specification. The "sharp" suffix has been used by a number of other .NET languages that are variants of existing languages, including J# (a .NET language also designed by Microsoft that is derived from Java 1.1), A# (from Ada), and the functional programming language F#. The original implementation of Eiffel for .NET was called Eiffel#, a name retired since the full Eiffel language is now supported. The suffix has also been used for libraries, such as Gtk# (a .NET wrapper for GTK and other GNOME libraries) and Cocoa# (a wrapper for Cocoa).
Versions

Syntax

The core syntax of the C# language is similar to that of other C-style languages such as C, C++ and Java, particularly:

Semicolons are used to denote the end of a statement.
Curly brackets are used to group statements. Statements are commonly grouped into methods (functions), methods into classes, and classes into namespaces.
Variables are assigned using an equals sign, but compared using two consecutive equals signs.
Square brackets are used with arrays, both to declare them and to get a value at a given index in one of them.

Distinguishing features

Some notable features of C# that distinguish it from C, C++, and Java (where noted) are:

Portability

By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value-types implemented by the CLI framework. However, the language specification does not state the code generation requirements of the compiler: that is, it does not state that a C# compiler must target a Common Language Runtime, generate Common Intermediate Language (CIL), or generate any other specific format. Some C# compilers can also generate machine code like traditional compilers of C++ or Fortran.

Typing

C# supports strongly typed, implicitly typed variable declarations with the keyword var, and implicitly typed arrays with the keyword new[] followed by a collection initializer. Its type system is split into two families: value types, like the built-in numeric types and user-defined structs, which are automatically handed over as copies when used as parameters, and reference types, including arrays, instances of classes, and strings, which only hand over a pointer to the respective object. Due to their special handling of the equality operator and their immutability, strings nevertheless behave as if they were values, for all practical purposes; you can even use them as case labels. Where necessary, value types are boxed automatically. C# supports a strict Boolean data type, bool. Statements that take conditions, such as while and if, require an expression of a type that implements the true operator, such as the Boolean type. While C++ also has a Boolean type, it can be freely converted to and from integers, and expressions such as if (a) require only that a is convertible to bool, allowing a to be an int or a pointer. C# disallows this "integer meaning true or false" approach, on the grounds that forcing programmers to use expressions that return exactly bool can prevent certain types of programming mistakes such as if (a = b) (use of the assignment = instead of the equality ==). C# is more type safe than C++. The only implicit conversions by default are those that are considered safe, such as widening of integers. This is enforced at compile-time, during JIT, and, in some cases, at runtime. No implicit conversions occur between Booleans and integers, nor between enumeration members and integers (except for literal 0, which can be implicitly converted to any enumerated type). Any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy constructors and conversion operators, which are both implicit by default. C# has explicit support for covariance and contravariance in generic types, unlike C++, which has some degree of support for contravariance simply through the semantics of return types on virtual methods. Enumeration members are placed in their own scope. The C# language does not allow for global variables or functions.
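A short, hedged sketch of the typing rules just described — var inference, copy semantics for value types, aliasing for reference types, strings as case labels, and the strict bool requirement (the types and values below are invented for illustration):

using System;

struct Point { public int X, Y; }   // value type: copied on assignment

class TypingDemo
{
    static void Main()
    {
        var n = 42;                         // implicitly typed, but statically an int
        var words = new[] { "on", "off" };  // implicitly typed string array

        Point a = new Point { X = 1, Y = 2 };
        Point b = a;                 // b is an independent copy
        b.X = 99;
        Console.WriteLine(a.X);      // prints 1: value types are copied

        int[] p = { 1, 2 };
        int[] q = p;                 // q references the same array object
        q[0] = 99;
        Console.WriteLine(p[0]);     // prints 99: reference types alias

        // Strings may be used as case labels despite being reference types:
        switch (words[0])
        {
            case "on": Console.WriteLine("enabled"); break;
            case "off": Console.WriteLine("disabled"); break;
        }

        bool flag = n > 0;           // conditions must be of type bool;
        if (flag) { }                // `if (n)` would be a compile-time error
    }
}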
All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions. Local variables cannot shadow variables of the enclosing block, unlike C and C++.

Metaprogramming

Metaprogramming can be achieved in several ways:

Reflection is supported through .NET APIs, which enable scenarios such as type metadata inspection and dynamic method invocation.
Expression trees represent code as an abstract syntax tree, where each node is an expression that can be inspected or executed. This enables dynamic modification of executable code at runtime. Expression trees introduce some homoiconicity to the language.
Attributes, in C# parlance, are metadata that can be attached to types, members, or entire assemblies, equivalent to annotations in Java. Attributes are accessible both to the compiler and to code through reflection, allowing programs to adjust their behaviour accordingly. Many of the native attributes duplicate the functionality of GCC's and Visual C++'s platform-dependent preprocessor directives.
The System.Reflection.Emit namespace contains classes that emit metadata and CIL (types, assemblies, etc.) at runtime.
The .NET Compiler Platform (Roslyn) provides API access to language compilation services, allowing for the compilation of C# code from within .NET applications. It exposes APIs for syntactic (lexical) analysis of code, semantic analysis, dynamic compilation to CIL, and code emission.
Source generators, a feature of the Roslyn C# compiler, enable compile-time metaprogramming. During the compilation process, developers can inspect the code being compiled with the compiler's API and pass additional generated C# source code to be compiled.

Methods and functions

A method in C# is a member of a class that can be invoked as a function (a sequence of instructions), rather than the mere value-holding capability of a field (i.e. a class or instance variable). As in other syntactically similar languages, such as C++ and ANSI C, the signature of a method is a declaration comprising, in order: any optional accessibility keywords (such as private), the explicit specification of its return type (such as int, or the keyword void if no value is returned), the name of the method, and finally a parenthesized sequence of comma-separated parameter specifications, each consisting of a parameter's type, its formal name, and optionally a default value to be used whenever none is provided. Unlike in most other languages, call-by-reference parameters have to be marked both at the function definition and at the calling site, and you can choose between ref and out, the latter allowing an uninitialized variable to be handed over, which will have a definite value on return. Additionally, you can specify a variable-sized argument list by applying the params keyword to the last parameter. Certain specific kinds of methods, such as those that simply get or set a field's value by returning or assigning it, do not require an explicitly stated full signature, but in the general case the definition of a class includes the full signature declaration of its methods. Like C++, and unlike Java, C# programmers must use the scope modifier keyword virtual to allow methods to be overridden by subclasses. Unlike C++, you have to explicitly specify the keyword override when doing so. This is intended to avoid confusion between overriding a method and newly overloading it (i.e. hiding the former implementation). To do the latter, you have to specify the new keyword, as the sketch below illustrates.
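Here is a hedged sketch of these method rules (all names invented): ref and out marked at both ends, params for variable-length arguments, and virtual/override for dynamic dispatch.

using System;

class Base
{
    public virtual void Speak() => Console.WriteLine("Base");
}

class Derived : Base
{
    public override void Speak() => Console.WriteLine("Derived (override)");
}

class MethodsDemo
{
    // ref: the caller must initialize the variable; out: the method must assign it.
    static void Swap(ref int a, ref int b) => (a, b) = (b, a);

    static bool TryHalve(int n, out int half)
    {
        half = n / 2;          // the out parameter gets a definite value before returning
        return n % 2 == 0;
    }

    // params: variable-sized argument list on the last parameter.
    static int Sum(params int[] xs)
    {
        int total = 0;
        foreach (int x in xs) total += x;
        return total;
    }

    static void Main()
    {
        int x = 1, y = 2;
        Swap(ref x, ref y);           // ref must be repeated at the calling site
        TryHalve(10, out int h);      // the out variable may be declared inline
        Console.WriteLine($"{x} {y} {h} {Sum(1, 2, 3)}");  // 2 1 5 6

        Base obj = new Derived();
        obj.Speak();                  // dynamic dispatch prints "Derived (override)"
    }
}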
You can use the keyword sealed to disallow further overrides for individual methods or whole classes. Extension methods in C# allow programmers to use static methods as if they were methods from a class's method table, allowing programmers to virtually add instance methods to a class that they feel should exist on that kind of object (and on instances of the respective derived classes). The type dynamic allows for run-time method binding, allowing for JavaScript-like method calls and run-time object composition. C# has support for strongly typed function pointers via the keyword delegate. Like the Qt framework's pseudo-C++ signals and slots, C# has semantics specifically surrounding publish-subscribe style events, though C# uses delegates to do so. C# offers Java-like synchronized method calls, via the attribute [MethodImpl(MethodImplOptions.Synchronized)], and has support for mutually exclusive locks via the keyword lock.

Property

C# supports classes with properties. Properties can be simple accessor functions with a backing field, or implement arbitrary getter and setter functions. A property is read-only if there is no setter. As with fields, there can be class (static) and instance properties. The underlying methods can be virtual or abstract, like any other method. Since C# 3.0 the syntactic sugar of auto-implemented properties is available, where the accessor (getter) and mutator (setter) encapsulate operations on a single field of a class.

Namespace

A C# namespace provides the same level of code isolation as a Java package or a C++ namespace, with very similar rules and features to a package. Namespaces can be imported with the "using" syntax.

Memory access

In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value; it is impossible to obtain a reference to a "dead" object (one that has been garbage collected), or to a random block of memory. An unsafe pointer can point to an instance of an unmanaged value type that does not contain any references to objects subject to garbage collection, such as class instances, arrays or strings. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but it cannot dereference them. Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses the problem of memory leaks by freeing the programmer of responsibility for releasing memory that is no longer needed in most cases. Code that retains references to objects longer than is required can still experience higher memory usage than necessary; however, once the final reference to an object is released, the memory is available for garbage collection.

Exceptions

A range of standard exceptions is available to programmers. Methods in standard libraries regularly throw system exceptions in some circumstances, and the range of exceptions thrown is normally documented. Custom exception classes can be defined for classes, allowing handling to be put in place for particular circumstances as needed. The syntax for handling exceptions is the following:

try
{
    // something
}
catch (Exception ex)
{
    // if an error occurs, do this
}
finally
{
    // always executes, regardless of error occurrence
}

Depending on your plans, the "finally" part can be left out.
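As a concrete, hedged illustration of the extension-method, property, and exception syntax above (every name here is invented for the example):

using System;

static class StringExtensions
{
    // Extension method: callable as if it were an instance method of string.
    public static string Shout(this string s) => s.ToUpperInvariant() + "!";
}

class Thermometer
{
    // Auto-implemented property: the compiler generates the backing field.
    public double Celsius { get; set; }

    // Read-only property (no setter), computed from another property.
    public double Fahrenheit => Celsius * 9.0 / 5.0 + 32.0;
}

class PropertyDemo
{
    static void Main()
    {
        Console.WriteLine("hello".Shout());            // HELLO!

        var t = new Thermometer { Celsius = 100.0 };
        Console.WriteLine(t.Fahrenheit);               // 212

        try
        {
            int denominator = 0;
            Console.WriteLine(10 / denominator);       // throws at run time
        }
        catch (DivideByZeroException ex)
        {
            Console.WriteLine($"caught: {ex.Message}");
        }
        finally
        {
            Console.WriteLine("finally always runs");
        }
    }
}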
If error handling is not required, the (Exception ex) parameter can be omitted as well. Also, there can be several "catch" parts handling different kinds of exceptions. Checked exceptions are not present in C# (in contrast to Java). This has been a conscious decision based on the issues of scalability and versionability.

Polymorphism

Unlike C++, C# does not support multiple inheritance, although a class can implement any number of "interfaces" (fully abstract classes). This was a design decision by the language's lead architect to avoid complications and to simplify architectural requirements throughout CLI. When implementing multiple interfaces that contain a method with the same name and taking parameters of the same type in the same order (i.e. the same signature), similar to Java, C# allows both a single method to cover all interfaces and, if necessary, specific methods for each interface. C# also offers function overloading (a.k.a. ad hoc polymorphism), i.e. methods with the same name but distinguishable signatures. Unlike Java, C# additionally supports operator overloading. Since version 2.0, C# offers parametric polymorphism, i.e. classes with arbitrary or constrained type parameters, e.g. List<T>, a variable-sized array which can only contain elements of type T. There are certain kinds of constraints you can specify for the type parameters: has to be type X (or one derived from it), has to implement a certain interface, has to be a reference type, has to be a value type, or has to implement a public parameterless constructor. Most of them can be combined, and you can specify any number of interfaces.

Language Integrated Query (LINQ)

C# has the ability to utilize LINQ through the .NET Framework. A developer can query a variety of data sources, provided the IEnumerable<T> interface is implemented on the object. This includes XML documents, an ADO.NET dataset, and SQL databases. Using LINQ in C# brings advantages like IntelliSense support, strong filtering capabilities, type safety with compile-time error checking, and consistency for querying data over a variety of sources. There are several different language structures that can be utilized with C# and LINQ: query expressions, lambda expressions, anonymous types, implicitly typed variables, extension methods, and object initializers. LINQ has two syntaxes: query syntax and method syntax. However, the compiler always converts the query syntax to method syntax at compile time.
using System.Linq;

var numbers = new int[] { 5, 10, 8, 3, 6, 12 };

// Query syntax (SELECT num FROM numbers WHERE num % 2 = 0 ORDER BY num)
var numQuery1 =
    from num in numbers
    where num % 2 == 0
    orderby num
    select num;

// Method syntax
var numQuery2 = numbers
    .Where(num => num % 2 == 0)
    .OrderBy(n => n);

Functional programming

Though primarily an imperative language, C# has added functional features over time, for example:

Functions as first-class citizens – C# 1.0: delegates
Higher-order functions – C# 1.0: together with delegates
Anonymous functions – C# 2.0: anonymous delegates; C# 3.0: lambda expressions
Closures – C# 2.0: together with anonymous delegates; C# 3.0: together with lambda expressions
Type inference – C# 3.0: implicitly typed local variables; C# 9.0: target-typed new expressions
List comprehension – C# 3.0: LINQ
Tuples – .NET Framework 4.0; they became prominent when C# 7.0 introduced a new tuple type with language support
Nested functions – C# 7.0
Pattern matching – C# 7.0
Immutability – C# 7.2: readonly struct; C# 9.0: record types and init-only setters
Type classes – C# 12: roles/extensions (in development)

Common type system

C# has a unified type system, called the Common Type System (CTS). A unified type system implies that all types, including primitives such as integers, are subclasses of the System.Object class. For example, every type inherits a ToString() method.

Categories of data types

CTS separates data types into two categories: reference types and value types. Instances of value types neither have referential identity nor referential comparison semantics. Equality and inequality comparisons for value types compare the actual data values within the instances, unless the corresponding operators are overloaded. Value types are derived from System.ValueType, always have a default value, and can always be created and copied. Some other limitations on value types are that they cannot derive from each other (but can implement interfaces) and cannot have an explicit default (parameterless) constructor, because they already have an implicit one which initializes all contained data to the type-dependent default value (0, null, or alike). Examples of value types are all primitive types, such as int (a signed 32-bit integer), float (a 32-bit IEEE floating-point number), char (a 16-bit Unicode code unit), decimal (fixed-point numbers useful for handling currency amounts), and System.DateTime (identifies a specific point in time with nanosecond precision). Other examples are enum (enumerations) and struct (user-defined structures). In contrast, reference types have the notion of referential identity, meaning that each instance of a reference type is inherently distinct from every other instance, even if the data within both instances is the same. This is reflected in default equality and inequality comparisons for reference types, which test for referential rather than structural equality, unless the corresponding operators are overloaded (as is the case for System.String). Some operations are not always possible, such as creating an instance of a reference type, copying an existing instance, or performing a value comparison on two existing instances. Nevertheless, specific reference types can provide such services by exposing a public constructor or implementing a corresponding interface (such as ICloneable or IComparable). Examples of reference types are object (the ultimate base class for all other C# classes), System.String (a string of Unicode characters), and System.Array (a base class for all C# arrays). Both type categories are extensible with user-defined types.
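A hedged sketch of the equality semantics just described (types invented for illustration): value types compare by contents, reference types by identity, and string — though a reference type — overloads == to compare contents.

using System;

struct Money { public int Cents; }   // value type
class Box { public int Value; }      // reference type

class EqualityDemo
{
    static void Main()
    {
        var m1 = new Money { Cents = 100 };
        var m2 = new Money { Cents = 100 };
        // Value types compare the contained data (here via Equals).
        Console.WriteLine(m1.Equals(m2));           // True

        var b1 = new Box { Value = 100 };
        var b2 = new Box { Value = 100 };
        // Reference types default to referential identity.
        Console.WriteLine(b1.Equals(b2));           // False: distinct instances
        Console.WriteLine(ReferenceEquals(b1, b2)); // False

        // string overloads == to perform structural comparison.
        string s1 = "ab" + 'c';
        string s2 = "abc";
        Console.WriteLine(s1 == s2);                // True
    }
}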
Boxing and unboxing

Boxing is the operation of converting a value-type object into a value of a corresponding reference type. Boxing in C# is implicit. Unboxing is the operation of converting a value of a reference type (previously boxed) into a value of a value type. Unboxing in C# requires an explicit type cast. A boxed object of type T can only be unboxed to a T (or a nullable T). Example:

int foo = 42;         // Value type.
object bar = foo;     // foo is boxed to bar.
int foo2 = (int)bar;  // Unboxed back to value type.

Libraries

The C# specification details a minimum set of types and class libraries that the compiler expects to have available. In practice, C# is most often used with some implementation of the Common Language Infrastructure (CLI), which is standardized as ECMA-335 Common Language Infrastructure (CLI). In addition to the standard CLI specifications, there are many commercial and community class libraries that build on top of the .NET framework libraries to provide additional functionality. C# can make calls to any library included in the List of .NET libraries and frameworks.

Examples

Hello World

The following is a very simple C# program, a version of the classic "Hello world" example using the top-level statements feature introduced in C# 9:

using System;

Console.WriteLine("Hello, world!");

For code written as C# 8 or lower, the entry point logic of a program must be written in a Main method inside a type:

using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("Hello, world!");
    }
}

This code will display this text in the console window:

Hello, world!

Each line has a purpose:

using System;

The above line imports all types in the System namespace. For example, the Console class used later in the source code is defined in the System namespace, meaning it can be used without supplying the full name of the type (which includes the namespace).

// A version of the classic "Hello World" program

This line is a comment; it describes and documents the code for the programmer(s).

class Program

Above is a class definition for the Program class. Everything that follows between the pair of braces describes that class.

{ ... }

The curly brackets demarcate the boundaries of a code block. In this first instance, they are marking the start and end of the Program class.

static void Main()

This declares the class member method where the program begins execution. The .NET runtime calls the Main method. Unlike in Java, the Main method does not need the public keyword, which tells the compiler that the method can be called from anywhere by any class. Writing static void Main() is equivalent to writing private static void Main(), since private is the default accessibility. The static keyword makes the method accessible without an instance of Program. Each console application's entry point must be declared static; otherwise the program would require an instance of Program, but any instance would require a program. To avoid that irresolvable circular dependency, C# compilers processing console applications (like that above) report an error if there is no static Main method. The void keyword declares that Main has no return value. (Note, however, that short programs can be written using the top-level statements introduced in C# 9, as mentioned earlier.)

Console.WriteLine("Hello, world!");

This line writes the output. Console is a static class in the System namespace. It provides an interface to the standard input, output, and error streams for console applications. The program calls the Console method WriteLine, which displays on the console a line with the argument, the string "Hello, world!".

Generics

With .NET 2.0 and C# 2.0, the community got more flexible collections than those in .NET 1.x.
As the sketch above suggests, in the absence of generics developers had to use collections such as ArrayList to store elements as objects of unspecified type, which incurred performance overhead from boxing, unboxing, and type-checking the contained items. Generics were a major addition to .NET that allowed developers to create type-safe data structures. This is particularly important when modernizing legacy systems, where updating to generics can significantly improve performance and maintainability by replacing outdated data structures with more efficient, type-safe alternatives.

Example

public class DataStore<T>
{
    private T[] items = new T[10];
    private int count = 0;

    public void Add(T item)
    {
        items[count++] = item;
    }

    public T Get(int index)
    {
        return items[index];
    }
}

Standardization and licensing

In August 2001, Microsoft, Hewlett-Packard and Intel co-sponsored the submission of specifications for C# as well as the Common Language Infrastructure (CLI) to the standards organization Ecma International. In December 2001, ECMA released ECMA-334 C# Language Specification. C# became an ISO/IEC standard in 2003 (ISO/IEC 23270:2003 - Information technology — Programming languages — C#). ECMA had previously adopted equivalent specifications as the 2nd edition of C#, in December 2002. In June 2005, ECMA approved edition 3 of the C# specification and updated ECMA-334. Additions included partial classes, anonymous methods, nullable types, and generics (somewhat similar to C++ templates). In July 2005, ECMA submitted to ISO/IEC JTC 1/SC 22, via the latter's Fast-Track process, the standards and related TRs. This process usually takes 6–9 months.

The C# language definition and the CLI are standardized under ISO/IEC and Ecma standards that provide reasonable and non-discriminatory licensing protection from patent claims. Microsoft initially agreed not to sue open-source developers for violating patents in non-profit projects for the part of the framework that is covered by the Open Specification Promise. Microsoft has also agreed not to enforce patents relating to Novell products against Novell's paying customers, with the exception of a list of products that do not explicitly mention C#, .NET or Novell's implementation of .NET (the Mono Project). However, Novell maintained that Mono does not infringe any Microsoft patents. Microsoft also made a specific agreement not to enforce patent rights related to the Moonlight browser plugin, which depends on Mono, provided it is obtained through Novell.

A decade later, Microsoft began developing free, open-source, and cross-platform tooling for C#, namely Visual Studio Code, .NET Core, and Roslyn. Mono joined Microsoft as a project of Xamarin, a Microsoft subsidiary.

Implementations

Microsoft has developed open-source reference C# compilers and tools. The first compiler, Roslyn, compiles into intermediate language (IL), and the second one, RyuJIT, is a JIT (just-in-time) compiler, which is dynamic and performs on-the-fly optimization, compiling the IL into native code for the target CPU. RyuJIT is open source and written in C++. Roslyn is entirely written in managed code (C#); it has been opened up, and its functionality surfaced as APIs, enabling developers to create refactoring and diagnostics tools. Two branches of the official implementation are .NET Framework (closed-source, Windows-only) and .NET Core (open-source, cross-platform); they eventually converged into one open-source implementation, .NET 5.0.
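As a rough sketch of what surfacing the compiler as an API enables (this assumes the Microsoft.CodeAnalysis.CSharp NuGet package; RoslynDemo is an illustrative name, not part of Roslyn itself), a tool can parse source text and query its syntax tree:

using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynDemo
{
    static void Main()
    {
        // Parse a snippet of C# source into a syntax tree.
        var tree = CSharpSyntaxTree.ParseText(@"
            class Sample
            {
                static void Greet() { System.Console.WriteLine(""Hi""); }
            }");

        // Walk the tree and report every method declaration,
        // the kind of query a diagnostics or refactoring tool might run.
        var methods = tree.GetRoot().DescendantNodes()
                          .OfType<MethodDeclarationSyntax>();

        foreach (var m in methods)
            Console.WriteLine(m.Identifier.Text); // Prints: Greet
    }
}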
Starting with .NET Framework 4.6, a new JIT compiler (RyuJIT) replaced the former one.

Other C# compilers (some of which include an implementation of the Common Language Infrastructure and the .NET class libraries):

Mono, a Microsoft-sponsored project, provides an open-source C# compiler, a complete open-source implementation of the CLI (including the required framework libraries as they appear in the ECMA specification), and a nearly complete implementation of the .NET class libraries up to .NET Framework 3.5.

The Elements tool chain from RemObjects includes RemObjects C#, which compiles C# code to .NET's Common Intermediate Language, Java bytecode, Cocoa, Android bytecode, WebAssembly, and native machine code for Windows, macOS, and Linux.

The DotGNU project (now discontinued) also provided an open-source C# compiler, a nearly complete implementation of the Common Language Infrastructure (including the required framework libraries as they appear in the ECMA specification), and a subset of some of the remaining Microsoft proprietary .NET class libraries up to .NET 2.0 (those not documented or included in the ECMA specification, but included in Microsoft's standard .NET Framework distribution).

The Unity game engine uses C# as its primary scripting language. The Godot game engine has implemented an optional C# module thanks to a donation of $24,000 from Microsoft.
Tapinoma sessile
Tapinoma sessile is a species of small ant that goes by the common names odorous house ant, sugar ant, stink ant, and coconut ant. Their colonies are polydomous (consisting of multiple nests) and polygynous (containing multiple reproducing queens). Like many social insects, T. sessile employs complex foraging strategies, allocates food depending on environmental conditions, and engages in competition with other insects. T. sessile can be found in a huge diversity of habitats, including within houses. They forage mainly for honeydew, which is produced by aphids and scale insects that are guarded and tended by the ants, as well as floral nectar and other sugary foods. They are common household pests and are attracted to sources of water and sweets.

Tapinoma sessile has long been suspected of exhibiting cloning behaviors similar to those observed in black crazy ants, a hypothesis recently supported by experimental evidence. In an experiment conducted by Marcello Ponzo, a colony consisting of seven queens and approximately 3,000 to 4,000 workers was kept in a controlled outworld and nest environment after being captured from the wild. Within a period of almost two months, the colony increased its number of queens from seven to ten under optimal conditions and a nutritious diet. This observation has been presented as evidence supporting the theory of cloning in Tapinoma sessile.

Like most other ants, T. sessile is eusocial, which is characterized by reproductive division of labor, cooperative care of the young, and overlapping generations.

Etymology

The binomial name Tapinoma sessile was assigned by Thomas Say in 1836. Sessile translates to "sitting", which probably refers to the gaster sitting directly on top of the petiole in the abdomen of the species. The common names "odorous house ant" and "coconut ant" come from the odor the ants produce when crushed, which is very similar to the pungent odor of a rotting coconut, blue cheese, or turpentine.

Description

T. sessile is a small ant that ranges in color from brown to black and varies in length from 1.5 to 3.2 mm (about 1/16 to 1/8 in). When crushed, these ants leave a smell, which leads to their nickname "stink ant". The gaster portion of the abdomen sits directly on top of the petiole in the abdomen of this species, which helps distinguish them from other small, dark, invasive ants. A comparison of the side view of T. sessile and a diagram of a typical ant body shows how T. sessile's gaster sits atop its petiole. This leads to a very small petiole and to the gaster being pointed downward. The anal pore then opens ventrally (toward the abdomen) instead of distally. Their antennae have 12 segments.

The queens lay the eggs, which incubate for between 11 and 26 days. After hatching, the larval stage lasts between 13 and 29 days, and the pre-pupal and pupal stages last between 10 and 24 days. Little is known about the lifespan of the ant, though it has been shown that queens live at least 8 months (and probably much longer), workers at least a few months (and show every indication of living as long as queens), while males appear to live only approximately a week.

Distribution

T. sessile is native to North America and ranges from southern Canada to northern Mexico, but is less common in the desert southwest.

Behavior

Colonies vary in size from a few hundred to tens of thousands of individuals. Large colonies usually have multiple queens.
The odorous house ant is tough: injured workers have been observed to continue living and working with little hindrance, some queens with crushed abdomens still lay eggs, and there are documented instances of T. sessile queens surviving without food or water for over two months. They also appear highly tolerant of heat and cold. These ants are difficult to remove from a home once their colony has become well-established.

When offered a choice of food sources, the ants preferred sugar and protein over lipids, and this preference persisted in all seasons. When specific sugar sources were studied, the ants preferred sucrose over other sugars, such as fructose or glucose.

Food allocation

Foragers collect food that is around the nest area and bring it back to the colony to share with the other ants. T. sessile has polydomous colonies, meaning that one colony has multiple nests. Because of this, T. sessile is very good at foraging for food when there is great variance in the distribution of resources. Instead of going back to a faraway nest to deliver food, they move workers, queens, and the brood closer to the food, so as to reduce the effort of food transport. This is called "dispersed central-place foraging". It was also found that the half-life of the stay at any one nest was about 12.9 days.

Buczkowski and Bennett also studied the pattern of food movement within a nest. They labeled sucrose with immunoglobulin G (IgG) proteins and then identified them using an enzyme-linked immunosorbent assay (ELISA) to track the movement of food. They found that food was spread through trophallaxis (one animal regurgitating food for another). Despite this trophallactic spread of food, the workers kept most of the sucrose. They also found that some queens received more food than others, suggesting a dominance hierarchy even between queens. They also found that the nests were located in a system of trails, and that their distribution depended on where food was found and the distance between these patches of food.

It was also found that the rate of trophallactic feeding depends on the number of ants per nest and the quality of food available. When the number of donors is kept constant but the total number of individuals is increased, more individuals test positive for the food marker. This indicates that more individuals are eating, but that the amount each eats is less. When the number of donors was doubled and the size of the overall population increased, the number of individuals receiving food more than doubled, again indicating that the number of individuals fed increased but that the per capita amount of food consumed decreased.

When searching for food, primary orientation is when ants are exploring new terrain without the guidance of odor trails; secondary orientation is when terrain has already been explored and there are pre-existing odor trails which ants use to orient themselves. When T. sessile ants are orienting themselves for the first time, they often rely on topography. The major types of elements they rely on are bilaterally elevated, bilaterally depressed, unilaterally elevated, and unilaterally depressed surfaces. They use these types of surfaces to orient along and to lay the first odor trails, which other ants can then follow to the food source.

Seasonal behaviors

It was also found that this ant species practices seasonal polydomy (having multiple colony sites) to have access to multiple food sources.
The colony will overwinter in a single nest, and then during spring and summer, when resources are more abundant, it will form multiple nests. This allows the ants to better use food sources that might be spread out. During the winter they return again to the same nest location. Seasonal polydomy is rather rare, found in only 10% of all polydomous species. Although seasonal polydomy is uncommon, many ant species, including T. sessile, move within a season: migration to better forage sites is common.

Seasonal activity patterns of the ants were also studied, and corresponding to the seasonal polydomy, it was observed that the ants displayed the most activity between March and September and almost no activity from October to December. Daily activity patterns were also studied: in March T. sessile foraged during the day, but in April that pattern changed and the ants began to forage during both day and night. During most of the summer, T. sessile shows low levels of activity throughout the day and night.

Competition with other ants

Competition between species is often classified as exploitation or interference. Exploitation involves finding and using limited resources before they can be used by other species, while interference is the act of preventing others from getting resources by more direct force or aggression. When it comes to these behaviors, a species is considered dominant if it initiates an attack and subordinate if it avoids other species. In comparison with eight other ant species, T. sessile fell toward the subordinate end of the dominant-to-subordinate scale. The ant does not show a large propensity for attack, preferring to use chemical secretions instead of biting.

When T. sessile, a subordinate species, was in the presence of dominant ant species such as C. ferrugineus, P. imparis, Lasius alienus, and F. subsericea, it reduced the amount of time spent foraging. This was tested with the use of bait: when a subordinate species such as T. sessile encountered a dominant species, it would leave the bait. It would then make sense for the subordinate species to forage at a different time than dominant species, so as to avoid confrontation, but there is sizable overlap in foraging period on both a daily and a seasonal basis. Because T. sessile forages at the same time as dominant species, but avoids other foraging ants, it must have excellent exploitative abilities to survive.

One of the invasive species that T. sessile has had to contend with is the Argentine ant (Linepithema humile). Studies of its interactions with L. humile have helped researchers better understand the aggression of T. sessile. T. sessile ants rarely fight alongside their nest-mates: they were observed to fight collectively in only six of forty interactions. This often caused T. sessile to lose altercations with other ant species such as L. humile, even when more T. sessile individuals were present; whereas ant species like L. humile fight together, T. sessile do not. T. sessile is, however, more likely to win in one-on-one interactions because it has effective chemical defenses.

Other habits

This species is a scavenger/predator ant that will eat most household foods, especially those that contain sugar, as well as other insects. Indoors they will colonize near heat sources or in insulation. In hot and dry situations, nests have been found in house plants and even in the lids of toilets.
Outdoors they tend to colonize under rocks and in exposed soil. They appear, however, to form colonies virtually anywhere, in a variety of conditions. In experiments where T. sessile workers were confined in an area without a queen, egg-laying by the workers was observed, though the workers destroyed any prepupae that developed from the eggs. Odorous house ants have been observed collecting honeydew to feed on from aphids, scale insects, and membracids. They appear to be more likely to invade homes after rain (which washes away the honeydew they collect). Odorous house ants appear to be highly tolerant of other ants: compound nests consisting of multiple ant species, including T. sessile, have been observed.

Predators and parasites

Some birds and toads will eat odorous house ants on occasion. Wheeler (1916) mentions Bothriomyrmex dimmocki as a possible parasite of odorous house ant colonies, suggesting that B. dimmocki queens invade and replace T. sessile queens. Isobrachium myrmecophilum (a small wasp) appears to parasitize odorous house ants.

Pest control

T. sessile are not hard to control; they are vulnerable to most ant-killers, which are especially effective when applied as soon as their presence is noticed. If dealt with early, their numbers can be brought under control in just a few days. However, the longer a colony is ignored, the larger the population becomes and the longer it will take to clear the infestation – possibly a few weeks. These ants most commonly invade buildings in late winter and early spring (particularly after rain), at which times one should be on the lookout for newly arrived ants foraging indoors. To discourage immigration, standing water should be eliminated in the house, as T. sessile are attracted to moisture. Plants should be trimmed away from buildings so that they do not make convenient routes for above-ground entry. Cracks, holes and joints should be sealed with polyurethane foam or caulk, especially those that are near the ground. Firewood, rocks, and other materials should not be stored next to a home, because they provide sites for nest building near the home, and T. sessile naturally relocate their colonies to be near successful forage sites.
Dunkleosteus
Dunkleosteus is an extinct genus of large arthrodire ("jointed-neck") fish that existed during the Late Devonian period, about 382–358 million years ago. It was a pelagic fish inhabiting open waters, and one of the first vertebrate apex predators of any ecosystem. Dunkleosteus comprises ten species, some of which are among the largest placoderms ("plate-skinned" fish) ever to have lived: D. terrelli, D. belgicus, D. denisoni, D. marsaisi, D. magnificus, D. missouriensis, D. newberryi, D. amblyodoratus, D. raveri, and D. tuderensis. However, the validity of several of these species is unclear (see below). The largest and best-known species is D. terrelli. Since the body shape is not known, various methods of estimation have produced widely differing figures for the living total length and weight of the largest known specimen of D. terrelli. The greatest of these lengths are poorly supported, with the most recent and extensive studies on the body shape and size of D. terrelli producing considerably shorter estimates both for typical adults and for exceptionally large individuals of this species. Dunkleosteus could quickly open and close its jaw, creating suction like modern-day suction feeders, and had a bite force that is considered the highest of any living or fossil fish, and among the highest of any animal. Fossils of Dunkleosteus have been found in the United States, Canada, Poland, Belgium, and Morocco.

Discovery

Dunkleosteus fossils were first discovered in 1867 by Jay Terrell, a hotel owner and amateur paleontologist who collected fossils in the cliffs along Lake Erie near his home of Sheffield Lake, Ohio (due west of Cleveland), United States. Terrell donated his fossils to John Strong Newberry and the Ohio Geological Survey, who in 1873 described all the material as belonging to a single new genus and species: Dinichthys herzeri. However, with later fossil discoveries, by 1875 it became apparent that multiple large fish species were present in the Ohio Shale. Dinichthys herzeri came from the lowermost layer, the Huron Shale, whereas most of the fossils were coming from the younger Cleveland Shale and represented a distinct species. Newberry named this more common species "Dinichthys" terrelli, after Terrell. Most of Terrell's original collection does not survive, having been destroyed by a fire in Elyria, Ohio, in 1873.

The largest collection of Dunkleosteus fossils in the world is housed at the Cleveland Museum of Natural History, with smaller collections (in descending order of size) held at the American Museum of Natural History, the Smithsonian National Museum of Natural History, the Yale Peabody Museum, the Natural History Museum in London, and the Cincinnati Museum Center. Specimens of Dunkleosteus are on display in many museums throughout the world, most of which are casts of the same specimen: CMNH 5768, the largest well-preserved individual of D. terrelli. The original CMNH 5768 is on display in the Cleveland Museum of Natural History.

Taxonomy

Dunkleosteus was named by Jean-Pierre Lehman in 1956 to honour David Dunkle (1911–1984), former curator of vertebrate paleontology at the Cleveland Museum of Natural History. The genus name Dunkleosteus combines David Dunkle's surname with the Ancient Greek word ostéon ('bone'), literally meaning "Dunkle's bone". Originally assigned to the genus Dinichthys, Dunkleosteus was recognized as a genus of its own in 1956. It was thought to be closely related to Dinichthys, and the two were grouped together in the family Dinichthyidae.
However, in the phylogenetic analysis of Carr and Hlavin (2010), Dunkleosteus and Dinichthys were found to belong to separate clades of arthrodires: Dunkleosteus belonged to a group called the Dunkleosteoidea, while Dinichthys belonged to the distantly related Aspinothoracidi. Carr and Hlavin resurrected the family Dunkleosteidae and placed Dunkleosteus, Eastmanosteus, and a few other genera from Dinichthyidae within it. Dinichthyidae, in turn, is left a monospecific family, though closely related to arthrodires like Gorgonichthys and Heintzichthys. The phylogenetic study of Zhu & Zhu (2013) likewise placed Dunkleosteus within Dunkleosteidae and Dinichthys within the separate clade Aspinothoracidi. Alternatively, the subsequent 2016 study by Zhu et al., using a larger morphological dataset, recovered Panxiosteidae well outside of Dunkleosteoidea, leaving the status of Dunkleosteidae as a clade grouping separate from Dunkleosteoidea in doubt.

Species

At least ten different species of Dunkleosteus have been described so far. However, many of them are poorly characterized and may be synonyms of previously named species, or may not pertain to Dunkleosteus at all. Dunkleosteus as currently defined is a wastebasket taxon for large dunkleosteoid arthrodires that are more evolutionarily derived than Eastmanosteus.

The type species, D. terrelli, is the largest, best-known species of the genus. Published size estimates for this species vary widely, though estimates greater than 4.5 m are poorly supported. Its skull is the largest in the genus. D. terrelli fossil remains are found in Upper Frasnian to Upper Famennian Late Devonian strata of the United States (the Huron, Chagrin, and Cleveland Shales of Ohio, the Conneaut and Chadakoin Formations of Pennsylvania, the Chattanooga Shale of Tennessee, the Lost Burro Formation of California, and possibly the Ives breccia of Texas) and Europe.

D. belgicus (?) is known from fragments described from the Famennian of Belgium. The median dorsal plate is characteristic of the genus, but a plate that was described as a suborbital is actually an anterolateral. Lelièvre (1982) considers this taxon a nomen dubium ("doubtful name") and suggests the material may actually pertain to Ardennosteus.

D. denisoni is known from a small median dorsal plate, typical in appearance for Dunkleosteus but much smaller than normal. It is comparable in skull structure to D. marsaisi.

D. marsaisi refers to the Dunkleosteus fossils from the Lower Famennian Late Devonian strata of the Atlas Mountains in Morocco. It differs from D. terrelli in size, with smaller known skulls, and in form: in D. marsaisi, the snout is narrower, and a postpineal fenestra may be present. Many researchers and authorities consider it a synonym of D. terrelli. H. Schultze regards D. marsaisi as a member of Eastmanosteus.

D. magnificus is a large placoderm from the Frasnian Rhinestreet Shale of New York. It was originally described as Dinichthys magnificus by Hussakof and Bryant in 1919, then as "Dinichthys mirabilis" by Heintz in 1932. Dunkle and Lane (1971) moved it to Dunkleosteus, whereas Dennis-Bryan (1987) considered it to belong to the genus Eastmanosteus. A large total length has been estimated for this species from the length of its skull.

D. missouriensis is known from fragments from the Frasnian of Missouri. Dunkle and Lane regard them as being very similar to D. terrelli.
In his revision of Dunkleosteus taxonomy, Hlavin (1976) considered this species to be tentatively synonymous with D. terrelli (Dunkleosteus cf. D. terrelli).

D. newberryi is known primarily from a long infragnathal with a prominent anterior cusp, found in the Frasnian portion of the Genesee Group of New York, and originally described as Dinichthys newberryi. Lebedev et al. (2023) noted that D. newberryi has an unusually long marginal tooth row compared to other species of Dunkleosteus and lacks the accessory odontoids typical of the genus, suggesting it might not belong to Dunkleosteus or even Dunkleosteoidea.

D. amblyodoratus is known from some fragmentary remains from Late Devonian strata of the Kettle Point Formation, Ontario. The species name means 'blunt spear' and refers to the way the nuchal and paranuchal plates in the back of the head form the shape of a blunted spearhead.

D. raveri is a small species, possibly 1 meter long, known from an uncrushed skull roof found in a carbonate concretion from near the bottom of the Huron Shale, of the Famennian Ohio Shale strata. Besides its small size, it had comparatively large eyes. Because D. raveri was found in the strata directly below those in which the remains of D. terrelli are found, D. raveri may have given rise to D. terrelli. The species name commemorates Clarence Raver of Wakeman, Ohio, who discovered the concretion containing the holotype.

D. tuderensis is known from an infragnathal found in the lower-middle Famennian-aged Bilovo Formation of the Tver Region in northwest Russia. The specific name refers to the Maliy Tuder River, as the holotype was found on its bank.

In total, of the ten or so species listed above, only four are agreed upon as valid species of Dunkleosteus by all researchers: D. terrelli (which may or may not include the Dunkleosteus material from Morocco), D. raveri, D. tuderensis, and possibly D. amblyodoratus (which is known from limited material that appears distinct but is difficult to compare with other dunkleosteids). The taxonomy of early Late Devonian (Frasnian) species is poorly established, whereas latest Devonian (Famennian) species are easily referable to this genus. This is not counting additional material assigned to Dunkleosteus sp. from the Famennian of California, Texas, Tennessee, and Poland.

Description

Size and anatomy

Dunkleosteus was covered in dermal bone forming armor plates across its skull and the front half of its trunk. This armor is often described as being extremely thick, but such thickness occurs only across the thickened nuchal plate at the back of the skull; thickening of the nuchal plate is a common feature of eubrachythoracid arthrodires, and across the rest of the body the armor is generally much thinner. The plates of Dunkleosteus had both a hard cortical layer and a marrow-filled cancellous layer, unlike the bones of most teleost fishes and more similar to tetrapod bones.

Mainly the armored frontal sections of specimens have been fossilized, and consequently the appearance of the other portions of the fish is mostly unknown. In fact, only about 5% of Dunkleosteus specimens have more than a quarter of their skeleton preserved. Because of this, many reconstructions of the hindquarters are based on fossils of smaller arthrodires, such as Coccosteus, which have preserved hind sections, leading to widely varying size estimates. Dunkleosteus terrelli is one of the largest known placoderms, and its maximum size has been estimated very differently by different researchers.
However, most cited length estimates are speculative, lack quantitative or statistical backing, and the greatest lengths are poorly supported. Most studies that estimate the length of Dunkleosteus terrelli do not provide information as to how these estimates were calculated, the measurements used to scale them, or which specimens were examined. Estimates in these studies are implied to be based on either CMNH 5768 (the largest complete armor of D. terrelli) or CMNH 5936 (the largest known jaw fragment). Additionally, these reconstructions often require Dunkleosteus to lack many features consistent across the body plans of other arthrodires like Coccosteus and Amazichthys.

The most extensive analyses of body size and shape in Dunkleosteus terrelli produce considerably shorter length estimates for typical adults of this species, with very rare and exceptional individuals potentially reaching greater lengths. These estimates were calculated using several different size proxies (head length, orbit-opercular length [head length minus snout length], ventral shield length, entering angle, and the locations of the pectoral and pelvic girdles relative to total length), which produce largely similar results. Statistical margins of error in these methods mean that somewhat greater lengths remain possible, but such lengths result in proportions largely outside what is seen in other arthrodires and jawed fishes more generally, especially in terms of the size of the head and trunk armor relative to the total length of the animal and the relative location of the pectoral and pelvic fins. Indeed, the proportions implied by the upper ranges of the margins of error suggest even those lengths may be overly generous. Lengths at the lower end of the margins of error are unlikely given the preserved lengths of the head and trunk armor.

Most studies with well-defined methods produce comparatively short lengths for Dunkleosteus terrelli, with the exception of Ferrón et al. (2017), which produced larger estimates based on the upper jaw perimeters of modern sharks. However, arthrodires have proportionally larger mouths than modern sharks, making the lengths estimated by Ferrón et al. (2017) unreliable. Upper jaw perimeter overestimates the size of complete arthrodires like Coccosteus, and the estimates of Ferrón et al. (2017) result in Dunkleosteus having an extremely small head and hyper-elongate trunk relative to the known dimensions of the fossils. The reconstruction presented in Ferrón et al. (2017) is also incorrectly scaled to the known dimensions of the fossil material; if scaled to the size of CMNH 5768, it produces a length agreeing with the shorter estimates in later studies.

Carr (2010) estimated the weight of a large adult individual of Dunkleosteus terrelli by assuming a shark-like body plan and a similar length–weight relationship. Engelman (2023), using an ellipsoid volumetric method, estimated weights for typical adult Dunkleosteus and for the largest individual in that study; the higher weights found by Engelman (2023) are mostly a result of the fact that arthrodires tend to have relatively deeper and wider bodies than sharks.

An exceptionally preserved specimen of D. terrelli preserves a pectoral fin outline with ceratotrichia, implying that the fin morphology of placoderms was much more variable than previously thought, and was heavily influenced by locomotory requirements.
This knowledge, coupled with the knowledge that fish morphology is more heavily influenced by feeding niche than by phylogeny, allowed a 2017 study to infer the caudal fin shape of D. terrelli, reconstructing this fin with a strong ventral lobe, a high aspect ratio, and a narrow caudal peduncle, in contrast to previous reconstructions based on the anguilliform caudal fin of coccosteomorph placoderms.

The only vertebral remains known for Dunkleosteus are a small series of 16 vertebrae within the trunk armor of the specimen CMNH 50322. Most of these vertebrae are highly fused and have very prominent, laterally projecting articular facets compared to other arthrodires. Although many arthrodires show the incorporation of anterior vertebrae into a synarcual, in these species the fused region is small, whereas the fused region of Dunkleosteus extends almost to the end of the trunk armor, which would make its spine very stiff. This, along with a ridge on the inside of the trunk armor suggesting an unusually well-developed attachment for the horizontal septum, suggests Dunkleosteus may have had an anteriorly stiffened spine and specialized connective tissues to transmit force generated by the anterior trunk muscles to the tail fin, similar to thunniform vertebrates like lamnids and tunas.

The pelvic girdle of Dunkleosteus is small relative to the overall size of the armor. Several specimens preserve associated pelvic girdles, but their original position was not recorded during excavation. However, because these specimens were excavated from cliff faces, they were probably found close to the armor, suggesting these fins were associated with the end of the ventral shield as in other arthrodires. One specimen may preserve pelvic fin basals near the end of the trunk armor.

Paleobiology

Diet

Dunkleosteus terrelli possessed a four-bar linkage mechanism for jaw opening that incorporated connections between the skull, the thoracic shield, the lower jaw, and the jaw muscles, joined by movable joints. This mechanism allowed D. terrelli both to achieve a high speed of jaw opening, opening its jaws in 20 milliseconds and completing the whole process in 50–60 milliseconds (comparable to modern fishes that use suction feeding to assist in prey capture), and to produce high bite forces when closing the jaw, estimated to be greater at the blade edge than at the tip of the jaw, with some estimates higher still. The bite force is considered the highest of any living or fossil fish, and among the highest of any animal. The pressures generated in those regions were high enough to puncture or cut through cuticle or dermal armor, suggesting that D. terrelli was adapted to prey on free-swimming, armored prey such as ammonites and other placoderms. In addition, teeth of a chondrichthyan thought to belong to Orodus (Orodus spp.) were found in association with Dunkleosteus remains, suggesting that these were probably stomach contents regurgitated by the animal. Orodus is thought to have been tachypelagic, that is, a fast-swimming pelagic fish. Thus, Dunkleosteus might have been fast enough to catch such organisms, and not a slow swimmer as originally thought. Fossils of Dunkleosteus are frequently found with boluses of fish bones, the semidigested and partially eaten remains of other fish. As a result, the fossil record indicates it may have routinely regurgitated prey bones rather than digesting them.
Mature individuals probably inhabited deep-sea locations, like other placoderms, living in shallow waters during adolescence. Specimens of Dunkleosteus (CMNH 5302) and Titanichthys (CMNH 9889) show damage said to be puncture wounds from the bony fangs of other Dunkleosteus.

Reproduction

Dunkleosteus, together with most other placoderms, may have been among the first vertebrates to internalize egg fertilization, as seen in some modern sharks. Some other placoderms have been found with evidence that they may have been viviparous, including what appears to have been an umbilical cord.

Growth

Morphological studies on the lower jaws of juvenile D. terrelli reveal that they were proportionally as robust as those of adults, indicating that juveniles could already produce high bite forces and were likely able to shear into resistant prey tissue much as adults did, albeit on a smaller scale. This pattern is in direct contrast to the condition common in tetrapods, in which the jaws of juveniles are more gracile than those of adults.
Dryopithecus
Dryopithecus is a genus of extinct great apes from the middle–late Miocene boundary of Europe, 12.5 to 11.1 million years ago (mya). Since its discovery in 1856, the genus has been subject to taxonomic turmoil, with numerous new species described from single remains on the basis of minute differences between them, and the fragmentary nature of the holotype specimen makes differentiating remains difficult. There is currently only one uncontested species, the type species D. fontani, though there may be more. The genus is placed in the tribe Dryopithecini, which is variously considered an offshoot of the orangutan line, of the African apes, or its own separate branch. The living weight of a male specimen has been estimated from its femur. Dryopithecus likely predominantly ate ripe fruit from trees, suggesting a degree of suspensory behaviour to reach it, though the anatomy of a humerus and femur suggest a greater reliance on walking on all fours (quadrupedalism). The face was similar to that of gorillas, and males had longer canines than females, which is typically correlated with high levels of aggression. They lived in a seasonal, paratropical climate and may have built up fat reserves for winter. European great apes likely went extinct during a drying and cooling trend in the Late Miocene which caused the retreat of warm-climate forests.

Etymology

The genus name Dryopithecus comes from Ancient Greek drus 'oak tree' and pithekos 'ape', because the describing authority believed it inhabited an oak or pine forest in an environment similar to that of modern-day Europe. The species D. fontani was named in honour of its discoverer, the local collector Monsieur Alfred Fontan.

Taxonomy

The first Dryopithecus fossils were described from the French Pyrenees by French paleontologist Édouard Lartet in 1856, three years before Charles Darwin published his On the Origin of Species. Subsequent authors noted similarities to the modern African great apes. In his The Descent of Man, Darwin briefly noted that Dryopithecus cast doubt on the African origin of apes.

Dryopithecus taxonomy has been the subject of much turmoil, with new specimens made the basis of new species or genera on account of minute differences, resulting in several now-defunct species. By the 1960s, all non-human apes were classified into the now-obsolete family Pongidae, and extinct apes into Dryopithecidae. In 1965, English palaeoanthropologist David Pilbeam and American palaeontologist Elwyn L. Simons separated the genus – which at the time included specimens from across the Old World – into three subgenera: Dryopithecus in Europe, Sivapithecus in Asia, and Proconsul in Africa. Afterwards, there was discussion over whether each of these subgenera should be elevated to genus. In 1979, Sivapithecus was elevated to genus, and Dryopithecus was subdivided again into the subgenera Dryopithecus in Europe, and Proconsul, Limnopithecus, and Rangwapithecus in Africa. Since that time, several more species were assigned and moved, and by the 21st century the genus included D. fontani, D. brancoi, D. laietanus, and D. crusafonti. However, the 2009 discovery of a partial skull of D. fontani caused many of these to be split off into different genera, such as the newly erected Hispanopithecus, because part of the confusion had been caused by the fragmentary nature of the Dryopithecus holotype, with its vague and incomplete diagnostic characteristics. Currently, there is only one uncontested species, D. fontani. Specimens are:

The holotype, MNHNP AC 36, three pieces of a male mandible with teeth from Saint-Gaudens in the French Pyrenees.
Based on dental development in chimpanzees, it was 6 to 8 years old, and several diagnostic characteristics drawn from the holotype would be lost in mature D. fontani. A partial left humerus, an additional mandible (MNHNP 1872-2), a left lower jaw, and five isolated teeth are also known from the site. An upper incisor, NMB G.a.9, and a female upper molar, FSL 213981, come from Saint-Alban-de-Roche, France. A male partial face, IPS35026, and a femur, IPS41724, come from Vallès Penedès in Catalonia, Spain. A female mandible with teeth, LMK-Pal 5508, comes from St. Stefan, Carinthia, Austria, dated to 12.5 mya, and could possibly be considered a separate species, "D. carinthiacus".

Dryopithecus is classified into the namesake great ape tribe Dryopithecini, along with Hispanopithecus, Rudapithecus, Ouranopithecus, Anoiapithecus, and Pierolapithecus, though the latter two may belong to Dryopithecus, the former two may be synonymous, and the former three can also be placed into their own tribes. Dryopithecini is regarded either as an offshoot of the orangutans (Ponginae), as ancestral to the African apes and humans (Homininae), or as its own separate branch (Dryopithecinae). Dryopithecus was part of an adaptive radiation of great apes in the expanding forests of Europe during the warm climates of the Miocene Climatic Optimum, possibly descending from early or middle Miocene African apes which diversified around the preceding Middle Miocene disruption (a cooling event). It is possible that great apes first evolved in Europe or Asia and then migrated down into Africa.

Description

The living weight of a male Dryopithecus has been estimated from measurements of the femoral head of the Spanish femur IPS41724. Dryopithecus teeth are most similar to those of modern chimps. The teeth are small and have a thin enamel layer. Dryopithecus has a slender jaw, indicating it was not well suited to eating abrasive or hard food. As in modern apes, the males have pronounced canine teeth. The molars are wide, and the premolars wider. It has a wide roof of the mouth, a long muzzle (prognathism), and a large nose which is oriented nearly vertically to the face. In total, the face shows many similarities to the gorilla; since early to middle Miocene African apes do not share such similarities, the gorilla-like features likely evolved independently in Dryopithecus rather than as a result of close affinities.

The humerus is similar in size and form to that of the bonobo. As in bonobos, the shaft bows outward, and the insertions for the triceps and deltoids were poorly developed, suggesting Dryopithecus was not as adept at suspensory behaviour as orangutans. The femoral neck, which connects the femoral head to the femoral shaft, is neither very long nor steep; the femoral head is positioned low relative to the greater trochanter; and the lesser trochanter is positioned more towards the backside. All these characteristics are important for the mobility of the hip joint, and they indicate a quadrupedal mode of locomotion rather than a suspensory one. However, fruit trees in the time and area of the Austrian Dryopithecus were typically tall and bore fruit on thinner terminal branches, suggesting suspensory behaviour to reach them.

Paleobiology

Dryopithecus likely predominantly ate fruit (frugivory), and evidence of cavities on the teeth of the Austrian Dryopithecus indicates a high-sugar diet, likely deriving from ripe fruits and honey.
Dental wear indicates Dryopithecus ate both soft and hard foods, which could mean either that it consumed a wide array of different foods or that it ate harder foods as a fallback. Nonetheless, its unspecialized teeth indicate a flexible diet, and its large body size would have permitted a large gut to aid in the processing of less-digestible food, perhaps stretching to include foods such as leaves (folivory) in times of famine, as in modern apes. Unlike modern apes, Dryopithecus likely had a high-carbohydrate, low-fibre diet.

A high-fructose diet is associated with elevated levels of uric acid, which is neutralized by uricase in most animals except great apes. It is thought the great apes stopped producing uricase by 15 mya, resulting in increased blood pressure, which in turn led to increased activity and a greater ability to build up fat reserves. The palaeoenvironment of late Miocene Austria indicates an abundance of fruiting trees and honey for nine or ten months of the year, and Dryopithecus may have relied on these fat reserves during the late winter. High uric acid levels in the blood are also associated with increased intelligence.

Dryopithecus males had larger canines than females, which is associated with high levels of aggression in modern primates.

Paleoecology

The remains of Dryopithecus are often associated with several large mammals, such as proboscideans (e.g., Gomphotherium, among others), rhinoceroses (e.g., Lartetotherium), suids (e.g., Listriodon), bovids (e.g., Miotragocerus), equids (e.g., Anchitherium), hyaenids (e.g., Protictitherium), and felids (e.g., Pseudaelurus). Other associated primates are the great apes Hispanopithecus, Anoiapithecus, and Pierolapithecus, and the pliopithecid ape Pliopithecus. These fauna are consistent with a warm, forested, paratropical wetland environment, and Dryopithecus may have lived in a seasonal climate. For the Austrian Dryopithecus, plants such as Prunus, grapevines, black mulberry, strawberry trees, hickory, and chestnuts may have been important fruit sources, while the latter two, along with oak, beech, elm, and pine, may have been honey sources.

The late Miocene was the beginning of a drying trend in Europe. Increasing seasonality and dry spells in the Mediterranean region and the emergence of a Mediterranean climate likely caused the replacement of forestland and woodland by open shrubland, and the uplift of the Alps caused tropical and warm-climate vegetation in Central Europe to retreat in favor of mid-latitude and alpine flora. This likely led to the extinction of the great apes in Europe.
Potoroo
Potoroo is a common name for species of Potorous, a genus of smaller marsupials. They are allied to the Macropodiformes, the suborder containing the kangaroo, wallaby, and other rat-kangaroo genera, and Potorous is the only genus in the tribe Potoroini. All three extant species are threatened by ecological changes that have followed the colonisation of Australia, especially the long-footed potoroo Potorous longipes (endangered) and P. gilbertii (critically endangered). The broad-faced potoroo P. platyops disappeared after its first description in the 19th century. The main threats are predation by introduced species (especially foxes) and habitat loss. Potoroos were formerly very common in Australia, and early settlers reported them as significant pests to their crops.

Status

Gilbert's potoroo was first described in the west of Australia in 1840 by the naturalist John Gilbert. It was then thought to have become extinct until being rediscovered in 1994 at the Two Peoples Bay Nature Reserve (near Albany) in Western Australia. Conservation efforts have grown an initial wild population of 30–40 individuals to over 100. All species of Potorous are well within the "critical weight range" for mammals in Australia, the range of body weights whose trajectory was toward decline or extinction during British settlement.

Taxonomy

A genus of smaller macropodids, it gives its name to the family Potoroidae. The species of Potorous have been greatly impacted or have become extinct since their first descriptions, which has presented difficulties in determining the diversity of the genus. Five species had been described by 1888, when a revision by Oldfield Thomas merged these into three. The genus was named Potorous by Anselme Gaëtan Desmarest in 1804, an epithet that was replaced by Illiger with the name Hypsiprymnus and cited by subsequent authors despite the protest of Desmarest. Oldfield Thomas saw no basis for this substitution and recognised Potorous in 1888. The common names for the species include rat-kangaroo, kangaroo rat, and potoroo.

Classification

The genus is allied with the extant Bettongia and Aepyprymnus, which, along with the family Hypsiprymnodontidae, are informally grouped as the 'rat-kangaroos' of the suborder Macropodiformes. A conservative arrangement with allied modern and fossil genera may be summarised as:

family Potoroidae
  subfamily †Palaeopotoroinae
  subfamily Potoroinae
    genus †Borungaboodie
    genus †Milliyowi
    genus †Purtia
    genus †Wakiewakie
    genus †Gumardee
    tribe Bettongini
      genus Aepyprymnus
      genus Bettongia
      genus †Caloprymnus
    tribe Potoroini
      genus Potorous
      genus †Purtia
      genus †Wakiewakie
      genus †Gumardee
  subfamily †Bulungamayinae

Description

The long-nosed potoroo sniffs the ground with a side-to-side motion in the vicinity of food. Once it has located a possible food source with its sense of smell, it positions itself to begin excavating with its fore paws.

The skull of potoroos may be either narrow and elongated, as in the extant P. gilbertii, P. longipes, and P. tridactylus, or broad and flattened, a feature of the extinct P. platyops. An external occipital crest is strongly defined, particularly in males, and there is no apparent sagittal crest in the species' cranial morphology. Potorous skulls have shallow and flattened auditory bullae. The dentition is distinguished by sharp and strong canines; the broad permanent premolars are long and low, with a profile that is serrated, concave, or horizontal at the cutting edge. An acutely pointed incisor extends from the long and narrow lower mandible.
The dental formula of the genus is the same as in other potoroid taxa: I3/1 C1/0 PM1/1 M4/4. Two premolars in juveniles are replaced by a permanent sectorial premolar.

In popular culture

The first depiction of a potoroo species was published in 1790 by John White in his Journal of a Voyage to Botany Bay, the caption describing the animal as a "Poto Roo". The artwork was produced by Sarah Stone.
Cross-bedding
In geology, cross-bedding, also known as cross-stratification, is layering within a stratum that lies at an angle to the main bedding plane. The sedimentary structures which result are roughly horizontal units composed of inclined layers. The original depositional layering is tilted, but this tilting is not the result of post-depositional deformation. Cross-beds or "sets" are the groups of inclined layers, which are known as cross-strata.

Cross-bedding forms during deposition on the inclined surfaces of bedforms such as ripples and dunes; it indicates that the depositional environment contained a flowing medium (typically water or wind). Examples of these bedforms are ripples, dunes, antidunes, sand waves, hummocks, bars, and delta slopes. Environments in which water movement is fast enough and deep enough to develop large-scale bedforms fall into three natural groupings: rivers, tide-dominated coastal settings, and marine settings.

Significance

Cross-beds can tell geologists much about what an area was like in ancient times. The direction the beds dip indicates the paleocurrent, the rough direction of sediment transport. The type and condition of the sediments (rounding, sorting, composition, and so on) can tell geologists the type of environment, and studying modern analogs allows geologists to draw conclusions about ancient environments.

Paleocurrent can be determined from a cross-section of a set of cross-beds; however, to get a true reading, the axis of the beds must be visible. It is also difficult to distinguish between the cross-beds of a dune and those of an antidune (dunes dip downstream, while antidunes dip upstream). The direction of motion of the cross-beds shows ancient flow or wind directions (called paleocurrents). The foresets are deposited at the angle of repose (~34 degrees from the horizontal), so geologists are able to measure the dip direction of the cross-bedded sediments and calculate the paleoflow direction. However, most cross-beds are not tabular but trough-shaped. Since troughs can give a 180-degree variation in the dip of the foresets, false paleocurrents can be obtained by blindly measuring foresets; in this case, the true paleocurrent direction is determined by the axis of the trough. Paleocurrent direction is important in reconstructing past climate and drainage patterns: sand dunes preserve the prevailing wind directions, and current ripples show the direction rivers were moving.

Formation

Cross-bedding is formed by the downstream migration of bedforms such as ripples or dunes in a flowing fluid. The fluid flow causes sand grains to saltate up the stoss (upstream) side of the bedform and collect at the peak until the angle of repose is reached. At this point, the crest of granular material has grown too large and will be overcome by the force of the moving water, falling down the lee (downstream) side of the dune. Repeated avalanches eventually form the sedimentary structure known as cross-bedding, with the structure dipping in the direction of the paleocurrent. The sediment that goes on to form cross-stratification is generally sorted before and during deposition on the lee side of the dune, allowing cross-strata to be recognized in rocks and sediment deposits. The angle and direction of cross-beds are generally fairly consistent. Individual cross-beds can range in thickness from a few tens of centimeters up to hundreds of feet or more, depending on the depositional environment and the size of the bedform.
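Because the paleocurrent directions discussed under Significance are compass bearings, sets of foreset dip azimuths are usually combined with a vector (circular) mean rather than a simple arithmetic average. The following is a minimal sketch with hypothetical azimuth values; it illustrates the standard circular-mean calculation, not a method prescribed by any particular study:

using System;
using System.Linq;

class PaleocurrentDemo
{
    static void Main()
    {
        // Hypothetical foreset dip azimuths (in degrees) measured
        // from the axes of several trough cross-bed sets.
        double[] azimuths = { 95, 110, 102, 87, 123 };

        // Bearings wrap around at 360, so the mean of 350 and 10
        // should be 0, not 180; hence the vector mean below.
        double sumSin = azimuths.Sum(a => Math.Sin(a * Math.PI / 180.0));
        double sumCos = azimuths.Sum(a => Math.Cos(a * Math.PI / 180.0));
        double meanDeg = Math.Atan2(sumSin, sumCos) * 180.0 / Math.PI;
        if (meanDeg < 0) meanDeg += 360.0;

        Console.WriteLine($"Mean paleocurrent direction: {meanDeg:F1} degrees");
    }
}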
Cross-bedding can form in any environment in which a fluid flows over a bed of mobile material. It is most common in stream deposits (consisting of sand and gravel), tidal areas, and aeolian dunes.

Internal sorting patterns

Cross-bedded sediments are recognized in the field by the many layers of "foresets", the series of layers that form on the downstream or lee side of the bedform (ripple or dune). These foresets are individually differentiable because of small-scale separation between layers of material of different sizes and densities. Cross-bedding can also be recognized by truncations in sets of ripple foresets, where previously existing stream deposits are eroded by a later flood and new bedforms are deposited in the scoured area.

Geometries

Cross-bedding can be subdivided into subcategories according to the geometry of the sets and cross-strata. The most commonly described types are tabular cross-bedding and trough cross-bedding. Tabular (planar) cross-bedding consists of cross-bedded units that are extensive horizontally relative to the set thickness and that have essentially planar bounding surfaces. Trough cross-bedding, on the other hand, consists of cross-bedded units in which the bounding surfaces are curved, and hence limited in horizontal extent.

Tabular (planar) cross-beds

Tabular (planar) cross-beds consist of cross-bedded units that are large in horizontal extent relative to set thickness and that have essentially planar bounding surfaces. The foreset laminae of tabular cross-beds are curved so as to become tangential to the basal surface. Tabular cross-bedding is formed mainly by the migration of large-scale, straight-crested ripples and dunes, and it forms during lower-flow regimes. Individual beds range in thickness from a few tens of centimeters to a meter or more, though bed thicknesses down to 10 centimeters have been observed. Where the set height is less than 6 centimeters and the cross-stratification layers are only a few millimeters thick, the term cross-lamination is used rather than cross-bedding. Cross-bed sets occur typically in granular sediments, especially sandstone, and indicate that the sediments were deposited as ripples or dunes which advanced under a water or air current.

Trough cross-beds

Cross-beds are layers of sediment that are inclined relative to the base and top of the bed they are associated with. They can tell modern geologists many things about ancient environments, such as the depositional environment, the direction of sediment transport (paleocurrent), and even the environmental conditions at the time of deposition. Typically, units in the rock record are referred to as beds, while the constituent layers that make up a bed are referred to as laminae when they are less than 1 cm thick and strata when they are greater than 1 cm in thickness. Cross-beds are angled relative to either the base or the top of the surrounding beds; they are deposited at an angle rather than deposited horizontally and deformed later on. Trough cross-beds have lower surfaces which are curved or scoop-shaped and which truncate the underlying beds. The foreset beds are also curved and merge tangentially with the lower surface. They are associated with sand dune migration.

Sediment

The shape of the grains and the sorting and composition of the sediment can provide additional information on the history of cross-beds.
Roundness of the grains, limited variation in grain size, and high quartz content are generally attributed to longer histories of weathering and sediment transport. For example, well-rounded, well-sorted sand composed mostly of quartz grains is commonly found in beach environments, far from the source of the sediment. Poorly sorted, angular sediment composed of a diversity of minerals is more commonly found in rivers, near the source of the sediment. However, older sedimentary deposits are frequently eroded and re-mobilized; thus, a river may well erode an older formation of well-rounded, well-sorted beach sands of nearly pure quartz. Environments Rivers Flows are characterized by climate (snow, rain, and ice melting) and gradient. Discharge variations measured on a variety of time scales can change water depth and speed. Some rivers can be characterized by a predictable, seasonally controlled hydrograph (reflecting snow melt or a rainy season). Others are dominated by the diurnal variations characteristic of alpine glacier run-off, or by random storm events, which produce flashy discharge. Few rivers have a long-term record of steady flow in the rock record. Bedforms are relatively dynamic sediment-storage bodies with response times that are short relative to major changes in flow characteristics. Large-scale bedforms are periodic and occur in the channel (scaled to depth). Their presence and morphologic variability have been related to flow strength, expressed as mean velocity or shear stress. In a fluvial environment, the water in a stream loses energy and, with it, the ability to transport sediment. The sediment "falls" out of the water and is deposited along a point bar. Over time the river may dry up or avulse, and the point bar may be preserved as cross-bedding. Tide-dominated Tide-dominated environments include: coastal water bodies that are partially enclosed by topography yet have a free connection to the sea; coastlines that have a tidal range of greater than one meter; and areas in which the water run-off volume is low relative to the tidal volume or impact. In general, the greater the tidal range, the greater the maximum flow strength. Cross-stratification in tide-dominated areas can lead to the formation of herringbone cross-stratification. Although the flow direction reverses regularly, the flow patterns of flood and ebb currents commonly do not coincide. Consequently, the water and transported sediment may follow a roundabout route in and out of the estuary. This leads to spatially varied systems in which some parts of the estuary are flood-dominated and other parts are ebb-dominated. The temporal and spatial variability of flow and sediment transport, coupled with regularly fluctuating water levels, creates a variety of bedform morphologies. Shallow marine Large-scale bedforms occur on shallow, terrigenous or carbonate clastic continental shelves and epicontinental platforms which are affected by strong geostrophic currents, occasional storm surges, and/or tidal currents. Aeolian In an aeolian environment, cross-beds often exhibit inverse grading due to their deposition by grain flows. Winds blow sediment along the ground until it starts to accumulate. The side on which the accumulation occurs is called the windward side. As the pile continues to build, some sediment falls over the crest; this side is called the leeward side. Grain flows occur when the windward side accumulates too much sediment: the angle of repose is reached and the sediment tumbles down.
As more sediment piles on top, the weight causes the underlying sediment to compact and cement together, preserving the cross-beds.
Physical sciences
Sedimentology
Earth science
5843783
https://en.wikipedia.org/wiki/Ripple%20marks
Ripple marks
In geology, ripple marks are sedimentary structures (i.e., bedforms of the lower flow regime) and indicate agitation by water (current or waves) or directly by wind. Defining ripple cross-laminae and asymmetric ripples Current ripple marks, unidirectional ripples, or asymmetrical ripple marks are asymmetrical in profile, with a gentle up-current slope and a steeper down-current slope. The down-current slope lies at the angle of repose, which depends on the shape of the sediment. These commonly form in fluvial and aeolian depositional environments and are a signifier of the lower part of the lower flow regime. Ripple cross-laminae form when deposition takes place during the migration of current or wave ripples. A series of cross-laminae are produced by superimposed migrating ripples. The ripples form lateral to one another, such that the crests of vertically succeeding laminae are out of phase and appear to be advancing upslope. This process results in cross-bedded units that have the general appearance of waves in outcrop sections cut normal to the wave crests. In sections with other orientations, the laminae may appear horizontal or trough-shaped, depending upon the orientation and the shape of the ripples. Ripple cross-laminae always dip more steeply downstream, and the ripple crests are always perpendicular to paleoflow, meaning the ripples are oriented at ninety degrees to the direction in which the current is flowing. Scientists suggest that current drag, or the slowing of current velocity, during deposition is responsible for ripple cross-laminae. Ripple marks in different environments Wave-formed ripples Also called bidirectional ripples or symmetrical ripple marks, these have a symmetrical, almost sinusoidal profile; they indicate an environment with weak currents where water motion is dominated by wave oscillations. In most present-day streams, ripples will not form in sediment larger than coarse sand. Therefore, the stream beds of sand-bed streams are dominated by current ripples, while gravel-bed streams generally lack ripple bedforms. The internal structure of ripples is a base of fine sand with coarse grains deposited on top, since the size distribution of sand grains correlates with the size of the ripples: the fine grains continue to move while the coarse grains accumulate and provide a protective barrier. Ripple marks formed by aeolian processes Normal ripples Also known as impact ripples, these occur in the lower part of the lower flow regime in sands with grain sizes between 0.3 and 2.5 mm, and they form wavelengths of 7-14 cm. Normal ripples have straight or slightly sinuous crests approximately transverse to the direction of the wind. Megaripples These occur in the upper part of the lower flow regime, where sand with a bimodal particle-size distribution forms unusually long wavelengths of 1-25 m; the wind is not strong enough to move the larger particles but strong enough to move the smaller grains by saltation. Transverse aeolian ridges There is some thought that transverse aeolian ridges are a form of fossilized ripple, but there is no conclusive evidence so far. Fluid drag ripples Also known as aerodynamic ripples, these form in fine, well-sorted grain particles under high-velocity winds, which results in long, flat ripples. The flat ripples are formed by the long saltation paths taken by grains in suspension and grains on the ground surface. Definitions Crest The point on a wave with the maximum value or height.
It is the location at the peak of the wave cycle. Trough The opposite of a crest: the minimum value or height in a wave, at the very lowest point of the wave cycle. Lee The lee side has a steeper slope than the stoss. The lee is always on the down-current (back) side of the ripple, opposite the side where the current flow meets the ripple; the current flows down the lee side. Stoss The stoss is the side of a wave or ripple with the gentler slope. Current always flows up the stoss side and down the lee side, which can be used to determine the direction of current flow at the time of ripple formation.
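The grain-size and wavelength ranges quoted above for aeolian ripples lend themselves to a small, purely illustrative classifier; the thresholds are taken directly from the figures in the text, and the function name and the fallback category are our own simplifications.

def classify_aeolian_ripple(wavelength_m, bimodal_grains=False):
    # Ranges quoted above: normal (impact) ripples have wavelengths of ~7-14 cm;
    # megaripples reach ~1-25 m and require a bimodal particle-size distribution.
    if 0.07 <= wavelength_m <= 0.14:
        return "normal (impact) ripple"
    if 1.0 <= wavelength_m <= 25.0 and bimodal_grains:
        return "megaripple"
    return "indeterminate from wavelength alone"

print(classify_aeolian_ripple(0.10))        # normal (impact) ripple
print(classify_aeolian_ripple(8.0, True))   # megaripple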
Physical sciences
Sedimentology
Earth science
5843907
https://en.wikipedia.org/wiki/Graded%20bedding
Graded bedding
In geology, a graded bed is a bed characterized by a systematic change in grain or clast size from the bottom to the top of the bed. Most commonly this takes the form of normal grading, with coarser sediments at the base grading upward into progressively finer ones. Such a bed is also described as fining upward. Normally graded beds generally represent depositional environments in which transport energy (rate of flow) decreases over time, but these beds can also form during rapid depositional events. They are perhaps best represented in turbidite strata, where they indicate a sudden strong current that deposits heavy, coarse sediments first, with finer ones following as the current weakens. They can also form in terrestrial stream deposits. In reverse grading or inverse grading, the bed coarsens upwards. This type of grading is relatively uncommon but is characteristic of sediments deposited by grain flow and debris flow. A favored explanation for reverse grading in these processes is kinetic sieving. It is also observed in aeolian processes and in pyroclastic fall deposits. These deposition processes are examples of granular convection. Graded bedding Graded bedding is a sorting of particles according to clast size and shape on a lithified horizontal plane; the term describes how such a geologic profile was formed. Stratification on a lateral plane is the physical result of the active deposition of different-sized materials. Density and gravity acting on the downward movement of these materials in a confined system separate the settling detritus by size. Thus, finer, higher-porosity clasts collect at the top and denser, less porous clasts are consolidated at the bottom, in what is called normal grading. (Inversely graded beds are composed of large clasts on the top, with smaller clasts on the bottom.) The grading of the bedding material is determined by the settling of the solid components relative to the viscosity of the medium in which the particles settle. Steno's principle of original horizontality holds that rock layers form in horizontal layers, over an undetermined time scale and depth. Nicolas Steno first published his hypothesis in 1669, after recognizing that fossils were preserved in layers of rock (strata). Formation For materials to settle in stratified layers, the defining quality is periodicity: there must be repeated depositional events with changes in the precipitation of materials over time. The thickness of graded beds ranges from 1 millimeter to multiple meters, and there is no set time limit in which the layers form. Uniformity of size and shape of materials within the bed must be present on a presently or previously horizontal plane. Necessary conditions Weathering: the chemical or physical forces breaking apart the solid materials that are potentially transported. Erosion: the movement of material that weathering has freed for transport. Deposition: the material settles on a horizontal plane through either chemical or physical precipitation. Note: the secondary processes of compaction, cementation, and lithification help to hold a stratified bed in place. Origins Sedimentary graded bedding In aeolian or fluvial depositional environments, where there is a decrease in transport energy over time, the bedding material is sorted more uniformly, according to the normal grading scale. As water or air slows, the turbidity decreases and the suspended load of detritus settles out.
In times of fast movement, the bedding may be poorly sorted at the deposition surface and thus not normally graded, because of the quick movement of the material. In broad channels with decreasing slopes, slow-moving water can carry large amounts of detritus over a large area. Thus, graded beds form at points of decreased slope in wide areas where energetic current flows are less confined: the energy is dispersed and decreases, and turbid sediments settle out in layers of concordant sizes and shapes. Changes in currents, or physical deformation in the environment, can be identified by observing and monitoring a depositional surface or a lithologic sequence with unconformities above or below a graded bed. Detrital sedimentary graded beds are formed by erosional, depositional, and weathering forces. Graded beds formed from detrital materials are generally composed of sand and clay; after lithification, shale, siltstone, and sandstone form from the detrital deposits. Bioclastic graded bedding Bioclastic formations are of organic origin, such as biochemical chert, which forms from the decay and diagenesis of siliceous marine organisms. Organic sedimentation of parent material from decaying plant matter in bogs or swamps can also result in a graded bedding complex; this activity leads to the formation of peat or coal after thousands of years. Limestone is more than 95% biogenic in origin, being made from the deposition of the carbonate fossils of marine organisms. Bioerosion by animals such as bivalves, shrimp, and sponges changes the marine substrate, resulting in layered bedding planes, as these organisms sift the bed material in search of food. Organic clastic bedding can become shale and oil shale over millions of years under pressure.
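The size-dependent settling behind normal grading can be illustrated with Stokes' law, the standard low-Reynolds-number result for the terminal velocity of a small sphere in a fluid; the grain sizes and fluid properties below are illustrative values for quartz in water, not figures taken from any particular deposit.

def stokes_settling_velocity(radius_m, rho_particle=2650.0, rho_fluid=1000.0, mu=1.0e-3):
    # Stokes' law: v = (2/9) * (rho_p - rho_f) * g * r^2 / mu.
    # Strictly valid only for slowly settling small grains; for sand-sized grains
    # the Reynolds number is no longer small and this overestimates the velocity.
    g = 9.81
    return (2.0 / 9.0) * (rho_particle - rho_fluid) * g * radius_m ** 2 / mu

# A coarse sand grain (0.5 mm radius) versus a clay particle (1 micron radius):
for name, r in [("coarse sand", 5e-4), ("clay", 1e-6)]:
    print(f"{name}: {stokes_settling_velocity(r):.2e} m/s")
# The sand grain settles orders of magnitude faster and reaches the bed first;
# finer grains follow as the flow wanes, producing a fining-upward (normal) grade.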
Physical sciences
Sedimentology
Earth science
5845425
https://en.wikipedia.org/wiki/Gros%20Michel
Gros Michel
Gros Michel, often translated and known as "Big Mike", is an export cultivar of banana and was, until the 1950s, the main variety grown. The physical properties of the Gros Michel make it an excellent export produce: its thick peel makes it resilient to bruising during transport, and the dense bunches in which it grows make it easy to ship. Taxonomy Gros Michel is a triploid cultivar of the wild banana Musa acuminata, belonging to the AAA group. Its official designation is Musa acuminata (AAA Group) 'Gros Michel'. Synonyms include Musa acuminata L. cv. 'Gros Michel' and Musa × paradisiaca L. cv. 'Gros Michel'. Gros Michel is known as Guineo Gigante, Banano, and Plátano Roatán in Spanish. It is also known as Pisang Ambon in the Philippines and Indonesia, Thihmwe in Burma, Chek Ambuong in Cambodia, Kluai hom thong in Thailand, Pisang Embun in Malaysia, and Chuoi Tieu Cao #2 in Vietnam. Cultivation history Early popularity and decline French naturalist Nicolas Baudin carried a few corms of this banana from Southeast Asia, depositing them at a botanical garden on the Caribbean island of Martinique. In 1835, French botanist Jean François Pouyat carried Baudin's fruit from Martinique to Jamaica. Originally called the "Figue Baudin" ("Baudin's fig"), the fruits were later referred to as "Poyo", after their Jamaican importer; the origin of the name "Gros Michel" is unknown. Gros Michel bananas were grown on massive plantations in Honduras, Costa Rica, and elsewhere in Central America, and the variety was once the dominant export banana to Europe and North America. In the 1950s, however, Panama disease, a wilt caused by the fungus Fusarium oxysporum f.sp. cubense, wiped out vast tracts of Gros Michel plantations in Central America, though the variety is still grown on non-infected land throughout the region. By the 1960s, exporters of Gros Michel bananas were unable to keep trading such a susceptible cultivar, and they started growing resistant cultivars belonging to the Cavendish subgroup (another Musa acuminata AAA group). A 2013 paper described experiments to create a version of Gros Michel resistant to black sigatoka, another fungal infection. Cultural references "Yes! We Have No Bananas", a novelty song about a grocer from the 1922 Broadway revue Make It Snappy, is said to have been inspired by a shortage of Gros Michel bananas, which began with the infestation of Panama disease early in the 20th century. The Gros Michel has a higher concentration of isoamyl acetate, the ester commonly used for "banana" food flavoring, than the Cavendish. This higher concentration is responsible for the myth that banana flavoring was based on the Gros Michel; in fact, artificial banana flavor was not based on any specific cultivar.
Biology and health sciences
Tropical and tropical-like fruit
Plants
5845712
https://en.wikipedia.org/wiki/All-pass%20filter
All-pass filter
An all-pass filter is a signal processing filter that passes all frequencies equally in gain, but changes the phase relationship among various frequencies. Most types of filter reduce the amplitude (i.e. the magnitude) of the signal applied to them for some values of frequency, whereas the all-pass filter allows all frequencies through without changes in level. Common applications A common application in electronic music production is in the design of an effects unit known as a "phaser", where a number of all-pass filters are connected in sequence and the output is mixed with the raw signal; the effect works because an all-pass filter varies its phase shift as a function of frequency. Generally, the filter is described by the frequency at which the phase shift crosses 90° (i.e., when the input and output signals go into quadrature, with a quarter wavelength of delay between them). All-pass filters are generally used to compensate for other undesired phase shifts that arise in a system, or for mixing with an unshifted version of the original to implement a notch comb filter. They may also be used to convert a mixed-phase filter into a minimum-phase filter with an equivalent magnitude response, or an unstable filter into a stable filter with an equivalent magnitude response. Active analog implementation Implementation using low-pass filter The operational amplifier circuit shown in the adjacent figure implements a single-pole active all-pass filter that features a low-pass filter at the non-inverting input of the opamp. The filter's transfer function is given by H(s) = (1 - sRC) / (1 + sRC), which has one pole at -1/RC and one zero at +1/RC (i.e., they are reflections of each other across the imaginary axis of the complex plane). The magnitude and phase of H(iω) for some angular frequency ω are |H(iω)| = 1 and ∠H(iω) = -2 arctan(ωRC). The filter has unity-gain magnitude for all ω. The filter introduces a different delay at each frequency and reaches input-to-output quadrature at ω = 1/RC (i.e., the phase shift is 90°). This implementation uses a low-pass filter at the non-inverting input to generate the phase shift and negative feedback. At high frequencies, the capacitor is a short circuit, creating an inverting amplifier (i.e., 180° phase shift) with unity gain. At low frequencies and DC, the capacitor is an open circuit, creating a unity-gain voltage buffer (i.e., no phase shift). At the corner frequency ω = 1/RC of the low-pass filter (i.e., when the input frequency is 1/(2πRC) in hertz), the circuit introduces a 90° shift (i.e., the output is in quadrature with the input; it appears to be delayed by a quarter period). In fact, the phase shift of the all-pass filter is double the phase shift of the low-pass filter at its non-inverting input. Interpretation as a Padé approximation to a pure delay The Laplace transform of a pure delay is given by e^(-sT), where T is the delay (in seconds) and s is complex frequency. This can be approximated using a Padé approximant, as follows: e^(-sT) = e^(-sT/2) / e^(sT/2) ≈ (1 - sT/2) / (1 + sT/2), where the last step was achieved via a first-order Taylor series expansion of the numerator and denominator. By setting RC = T/2 we recover the transfer function H(s) = (1 - sRC)/(1 + sRC) from above.
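As a quick numerical sanity check of the first-order transfer function H(s) = (1 - sRC)/(1 + sRC) above, the sketch below evaluates H(iω) at a few frequencies and confirms unity magnitude everywhere, with the phase passing through -90° near ω = 1/RC. The component values are arbitrary illustrative choices.

import cmath, math

R, C = 10e3, 15.9e-9          # illustrative values; 1/(2*pi*R*C) is about 1 kHz
for f in [100.0, 1.0e3, 10e3]:
    w = 2 * math.pi * f
    H = (1 - 1j * w * R * C) / (1 + 1j * w * R * C)
    print(f"f = {f:7.0f} Hz  |H| = {abs(H):.6f}  phase = {math.degrees(cmath.phase(H)):7.2f} deg")
# |H| is 1.000000 at every frequency; the phase sweeps from near 0 deg toward
# -180 deg, passing through about -90 deg near f = 1/(2*pi*R*C).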
Implementation using high-pass filter The operational amplifier circuit shown in the adjacent figure implements a single-pole active all-pass filter that features a high-pass filter at the non-inverting input of the opamp. The filter's transfer function (Williams, A.B.; Taylor, F.J., Electronic Filter Design Handbook, McGraw-Hill, 1995, p. 10.7) is given by H(s) = (sRC - 1) / (sRC + 1), which has one pole at -1/RC and one zero at +1/RC (i.e., they are reflections of each other across the imaginary axis of the complex plane). The magnitude and phase of H(iω) for some angular frequency ω are |H(iω)| = 1 and ∠H(iω) = 180° - 2 arctan(ωRC). The filter has unity-gain magnitude for all ω. The filter introduces a different delay at each frequency and reaches input-to-output quadrature at ω = 1/RC (i.e., the phase lead is 90°). This implementation uses a high-pass filter at the non-inverting input to generate the phase shift and negative feedback. At high frequencies, the capacitor is a short circuit, thereby creating a unity-gain voltage buffer (i.e., no phase lead). At low frequencies and DC, the capacitor is an open circuit and the circuit is an inverting amplifier (i.e., 180° phase lead) with unity gain. At the corner frequency ω = 1/RC of the high-pass filter (i.e., when the input frequency is 1/(2πRC)), the circuit introduces a 90° phase lead (i.e., the output is in quadrature with the input; it appears to be advanced by a quarter period). In fact, the phase shift of the all-pass filter is double the phase shift of the high-pass filter at its non-inverting input. Voltage controlled implementation The resistor can be replaced with a FET in its ohmic mode to implement a voltage-controlled phase shifter; the voltage on the gate adjusts the phase shift. In electronic music, a phaser typically consists of two, four or six of these phase-shifting sections connected in tandem and summed with the original. A low-frequency oscillator (LFO) ramps the control voltage to produce the characteristic swooshing sound. Passive analog implementation The benefit of implementing all-pass filters with active components like operational amplifiers is that they do not require inductors, which are bulky and costly in integrated circuit designs. In other applications where inductors are readily available, all-pass filters can be implemented entirely without active components. There are a number of circuit topologies that can be used for this; the following are the most commonly used. Lattice filter The lattice phase equaliser, or filter, is a filter composed of lattice, or X-, sections. With single-element branches it can produce a phase shift up to 180°, and with resonant branches it can produce phase shifts up to 360°. The filter is an example of a constant-resistance network (i.e., its image impedance is constant over all frequencies). T-section filter The phase equaliser based on T topology is the unbalanced equivalent of the lattice filter and has the same phase response. While the circuit diagram may look like a low-pass filter, it is different in that the two inductor branches are mutually coupled. This results in transformer action between the two inductors and an all-pass response even at high frequency. Bridged T-section filter The bridged-T topology is used for delay equalisation, particularly of the differential delay between two landlines carrying a stereophonic sound broadcast. This application requires the filter to have a linear phase response with frequency (i.e., constant group delay) over a wide bandwidth, which is the reason for choosing this topology. Digital implementation A Z-transform implementation of an all-pass filter with a complex pole at z0 is H(z) = (z^(-1) - z0*) / (1 - z0 z^(-1)), which has a zero at 1/z0*, where * denotes the complex conjugate. The pole and zero sit at the same angle but have reciprocal magnitudes (i.e., they are reflections of each other across the boundary of the complex unit circle).
The placement of this pole-zero pair for a given z0 can be rotated in the complex plane by any angle while retaining its all-pass magnitude characteristic; complex pole-zero pairs in all-pass filters help control the frequency at which phase shifts occur. To create an all-pass implementation with real coefficients, the complex all-pass filter can be cascaded with an all-pass that substitutes z0* for z0, leading to the Z-transform implementation H(z) = (z^(-1) - z0*)(z^(-1) - z0) / ((1 - z0 z^(-1))(1 - z0* z^(-1))) = (|z0|^2 - 2 Re(z0) z^(-1) + z^(-2)) / (1 - 2 Re(z0) z^(-1) + |z0|^2 z^(-2)), which is equivalent to the difference equation y[n] = |z0|^2 x[n] - 2 Re(z0) x[n-1] + x[n-2] + 2 Re(z0) y[n-1] - |z0|^2 y[n-2], where y[n] is the output and x[n] is the input at discrete time step n. Filters such as the above can be cascaded with unstable or mixed-phase filters to create a stable or minimum-phase filter without changing the magnitude response of the system. For example, by proper choice of z0, a pole of an unstable system that lies outside of the unit circle can be canceled and reflected inside the unit circle.
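A minimal sketch of the real-coefficient second-order all-pass above, assuming an illustrative pole z0 = 0.7·e^(iπ/4); it implements the difference equation directly and then checks that the magnitude response on the unit circle is flat (numpy is used only for that check).

import cmath
import numpy as np

z0 = 0.7 * cmath.exp(1j * cmath.pi / 4)   # illustrative pole inside the unit circle
re, mag2 = z0.real, abs(z0) ** 2

# Coefficients of the second-order all-pass derived above:
# numerator b = [|z0|^2, -2 Re(z0), 1], denominator a = [1, -2 Re(z0), |z0|^2]
b = [mag2, -2 * re, 1.0]
a = [1.0, -2 * re, mag2]

def allpass(x):
    # Direct form of y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]
    y = []
    for n, xn in enumerate(x):
        yn = b[0] * xn
        if n >= 1:
            yn += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            yn += b[2] * x[n - 2] - a[2] * y[n - 2]
        y.append(yn)
    return y

print([round(v, 4) for v in allpass([1.0, 0.0, 0.0, 0.0])])  # start of the impulse response

# Flat-magnitude check: evaluate H(z) on the unit circle at a few frequencies.
w = np.linspace(0.1, 3.0, 5)
zinv = np.exp(-1j * w)
H = np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv)
print(np.round(np.abs(H), 6))  # all ones: the magnitude response is flat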
Technology
Signal processing
null
12284396
https://en.wikipedia.org/wiki/Liver%20cancer
Liver cancer
Liver cancer, also known as hepatic cancer, primary hepatic cancer, or primary hepatic malignancy, is cancer that starts in the liver. Liver cancer can be primary, in which the cancer starts in the liver, or it can be liver metastasis (secondary), in which the cancer spreads from elsewhere in the body to the liver. Liver metastasis is the more common of the two. Instances of liver cancer are increasing globally. Primary liver cancer is globally the sixth-most frequent cancer and the fourth-leading cause of death from cancer. In 2018, it occurred in 841,000 people and resulted in 782,000 deaths globally. Higher rates of liver cancer occur where hepatitis B and C are common, including Asia and sub-Saharan Africa. Males are more often affected by hepatocellular carcinoma (HCC) than females. Diagnosis is most frequent among those 55 to 65 years old. The leading cause of liver cancer is cirrhosis due to hepatitis B, hepatitis C, or alcohol. Other causes include aflatoxin, non-alcoholic fatty liver disease and liver flukes. The most common types are HCC, which makes up 80% of cases, and intrahepatic cholangiocarcinoma. The diagnosis may be supported by blood tests and medical imaging, with confirmation by tissue biopsy. Given that there are many different causes of liver cancer, there are many approaches to liver cancer prevention. These efforts include immunization against hepatitis B, hepatitis B treatment, hepatitis C treatment, decreasing alcohol use, decreasing exposure to aflatoxin in agriculture, and management of obesity and diabetes. Screening is recommended in those with chronic liver disease. For example, it is recommended that people with chronic liver disease who are at risk for hepatocellular carcinoma be screened every 6 months using ultrasound imaging. Because liver cancer is an umbrella term for many types of cancer, the signs and symptoms depend on what type of cancer is present. Symptoms can be vague and broad. Cholangiocarcinoma is associated with sweating, jaundice, abdominal pain, weight loss, and liver enlargement. Hepatocellular carcinoma is associated with abdominal mass, abdominal pain, vomiting, anemia, back pain, jaundice, itching, weight loss and fever. Treatment options may include surgery, targeted therapy and radiation therapy. In certain cases, ablation therapy, embolization therapy or liver transplantation may be used. Classification Liver cancer can arise from the liver parenchyma as well as from other structures within the liver, such as the bile ducts, blood vessels and immune cells. There are many sub-types of liver cancer, the most common of which are described below. Hepatocellular carcinoma The most frequent liver cancer, accounting for approximately 75% of all primary liver cancers, is hepatocellular carcinoma (HCC). HCC is a cancer formed by liver cells, known as hepatocytes, that become malignant. In terms of cancer deaths, HCC is considered the third most common cause of cancer mortality worldwide. For HCC diagnosis, it is recommended that people with risk factors (including known chronic liver disease, cirrhosis, etc.) receive screening ultrasounds. If the ultrasound shows a focal area that is larger than 1 centimeter in size, patients should then get a triple-phase contrast-enhanced CT or MRI scan. HCC can then be diagnosed radiologically using the Liver Imaging Reporting and Data System (LI-RADS). There is also a variant type of HCC that consists of both HCC and cholangiocarcinoma.
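The screening-and-workup pathway just described can be summarized as a small decision sketch. This is an illustration of the text's logic only, not clinical guidance; the function name, argument names, and the fallback messages are ours.

def hcc_workup_step(has_risk_factors, ultrasound_lesion_cm=None):
    # Follows the pathway described above: risk factors -> periodic ultrasound;
    # a focal lesion larger than 1 cm -> triple-phase contrast CT/MRI, read with LI-RADS.
    if not has_risk_factors:
        return "routine care; no HCC surveillance indicated by this pathway"
    if ultrasound_lesion_cm is None:
        return "surveillance ultrasound (e.g. repeated every 6 months)"
    if ultrasound_lesion_cm > 1.0:
        return "triple-phase contrast-enhanced CT or MRI; classify with LI-RADS"
    return "continue ultrasound surveillance"

print(hcc_workup_step(True))         # surveillance ultrasound
print(hcc_workup_step(True, 1.4))    # triple-phase CT/MRI; classify with LI-RADS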
Intrahepatic cholangiocarcinoma Cancers of the bile duct (cholangiocarcinoma and cholangiocellular cystadenocarcinoma) account for approximately 6% of primary liver cancers. Intrahepatic cholangiocarcinoma (CCA) is an epithelial cancer of the intra-hepatic branches of the biliary tree. Intrahepatic CCA is the second leading cause of primary liver cancer. It is more common in men and is usually diagnosed in 60-70 year olds. Risk factors for the development of intrahepatic CCA include Opisthorchis viverrini infection, Clonorchis sinensis infection, sclerosing cholangitis, choledochal cysts, past procedures on the biliary tree, exposure to Thorotrast and dioxins, and cirrhosis. This cancer is usually asymptomatic until the disease has progressed. Symptoms include abdominal pain, night sweats, weight loss, and fatigue. Liver markers that can be increased with intrahepatic CCA are carcinoembryonic antigen (CEA), CA19-9, and CA-125. Angiosarcoma and hemangiosarcoma These are rare and aggressive liver cancers, yet they are the third most common primary liver cancer, making up 0.1-2.0% of primary liver cancers. Angiosarcoma and hemangiosarcoma of the liver arise from the endothelial layer of the blood vessels. These tumors have poor outcomes because they grow rapidly and metastasize easily. They are also hard to diagnose, but are typically suspected on CT or MRI scans that show focal lesions with differing amounts of signal intensity (reflecting these tumors' frequent hemorrhage and subsequent tissue necrosis). Biopsy with histopathological evaluation yields the definitive diagnosis. While the cause is often never identified (75% are idiopathic), these cancers are associated with exposure to substances such as vinyl chloride, arsenic, and Thorotrast (e.g. occupational exposure). Radiation is also a risk factor. In adults, these tumors are more common in males; however, in children they are more common in females. Even with surgery, the prognosis is poor, with most individuals not living longer than six months after diagnosis; only 3% of individuals live longer than two years. Hepatoblastoma Another type of cancer formed by liver cells is hepatoblastoma, which is specifically formed by immature liver cells. It is a rare malignant tumor that primarily develops in children, and accounts for approximately 1% of all cancers in children and 79% of all primary liver cancers diagnosed under the age of 15. Most hepatoblastomas form in the right lobe. Metastasis to liver Many cancers found in the liver are not true liver cancers but are cancers from other sites in the body that have spread to the liver (known as metastases). Frequently, the site of origin is the gastrointestinal tract, since the liver is close to many of these metabolically active, blood-rich organs and to their blood vessels and lymph nodes (examples include pancreatic cancer, stomach cancer, colon cancer and carcinoid tumors, mainly of the appendix), but metastases also come from breast cancer, ovarian cancer, lung cancer, renal cancer, and prostate cancer. Children The Children's Oncology Group (COG) has developed a protocol to help diagnose and manage childhood liver tumors. Causes Viral infection Viral infection with hepatitis C virus (HCV) or hepatitis B virus (HBV) is the chief cause of liver cancer in the world today, accounting for 80% of HCC. Men with chronic HCV or HBV are more likely to develop HCC than women with chronic HCV or HBV; however, the reasons for this gender difference are unknown. HBV infection is also linked to cholangiocarcinoma.
The role of viruses other than HCV or HBV in liver cancer is much less clear, although there is some evidence that co-infection with HBV and hepatitis D virus may increase the risk of HCC. HBV and HCV can lead to HCC because these viral infections cause massive inflammation, fibrosis, and eventually cirrhosis within the liver. In addition, many genetic and epigenetic changes arise in liver cells during HCV and HBV infection, which is a major factor in the development of liver tumors. The viruses induce malignant changes in cells by altering gene methylation, affecting gene expression, and promoting or repressing cellular signal transduction pathways. By doing this, the viruses can prevent cells from undergoing a programmed form of cell death (apoptosis) and promote viral replication and persistence. HBV and HCV also induce malignant changes by causing DNA damage and genomic instability. This involves the generation of reactive oxygen species, the expression of proteins that interfere with DNA repair enzymes, and HCV-induced activation of a mutator enzyme. Cirrhosis In addition to the virus-related cirrhosis described above, other causes of cirrhosis can lead to HCC. Alcohol intake correlates with the risk of HCC, and the risk is far greater in individuals with an alcohol-induced cirrhotic liver. A few disorders are known to cause cirrhosis and lead to cancer, including hereditary hemochromatosis and primary biliary cirrhosis. Aflatoxin Aflatoxin exposure can lead to the development of HCC. The aflatoxins are a group of chemicals produced by the fungi Aspergillus flavus (the name comes from A. flavus toxin) and A. parasiticus. Food contamination by the fungi leads to ingestion of the chemicals, which are very toxic to the liver. Common foodstuffs contaminated with the toxins are cereals, peanuts, and other produce. Both the amount (dose) of aflatoxin and how long (duration) a person is in contact with it are associated with the risk of HCC. Contamination of food is common in Africa, South-East Asia, and China. The mechanism by which aflatoxins cause cancer is through mutations and epigenetic alterations. Aflatoxins induce a spectrum of mutations, including in the p53 tumor suppressor gene, a mutation seen in many types of cancer. Mutation of p53, presumably in conjunction with other aflatoxin-induced mutations and epigenetic alterations, is likely a common cause of aflatoxin-induced carcinogenesis. Nonalcoholic steatohepatitis (NASH) and nonalcoholic fatty liver (NAFL) NASH and NAFL are increasingly recognized as risk factors for liver cancer, particularly HCC. In recent years, there has been a noted increase in liver transplantations for HCC attributable to NASH. More research on NASH, NAFL, and liver cancer is needed. Other risk factors in adults High-grade dysplastic nodules are precancerous lesions of the liver; within two years, the risk of cancer arising from these nodules is 30-40%. Obesity and metabolic syndrome have emerged as important risk factors, as they can lead to steatohepatitis. Diabetes increases the risk of HCC. Smoking increases the risk of HCC compared with non-smokers and previous smokers. There is around a 5-10% lifetime risk of cholangiocarcinoma in people with primary sclerosing cholangitis. Liver fluke infection increases the risk of cholangiocarcinoma, and this is the reason why Thailand has particularly high rates of this cancer.
Choledochal cysts, Caroli's disease, and congenital hepatic fibrosis are associated with the development of cholangiocarcinoma. Genetic conditions: untreated hereditary hemochromatosis, alpha-1-antitrypsin deficiency, glycogen storage diseases, porphyria cutanea tarda, Wilson's disease, and tyrosinemia have all been associated with the development of HCC. Oral contraceptive pill: there is insufficient evidence to label oral contraceptives as a risk factor; however, recent studies have found that taking oral contraceptives for longer than five years is associated with HCC. Children Childhood liver cancer is uncommon. The liver cancer sub-types most commonly seen in children are hepatoblastoma, hepatocellular carcinoma, embryonal sarcoma of the liver, infantile choriocarcinoma of the liver, and biliary rhabdomyosarcoma. Increased risk of liver cancer in children can be caused by Beckwith–Wiedemann syndrome (associated with hepatoblastoma), familial adenomatous polyposis (associated with hepatoblastoma), low birth weight (associated with hepatoblastoma), progressive familial intrahepatic cholestasis (associated with HCC) and trisomy 18 (associated with hepatoblastoma). Diagnosis Many imaging modalities are used to aid in the diagnosis of liver cancer. For HCC these include medical ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI). When imaging the liver with ultrasound, large lesions are likely to be HCC (e.g., a mass greater than 2 cm has a more than 95% chance of being HCC). Given the blood flow to the liver, HCC is most visible when the contrast flows through the arteries of the liver (the arterial phase) rather than when the contrast flows through the veins (the venous phase). Sometimes doctors will obtain a liver biopsy if they are worried about HCC and the imaging studies (CT or MRI) do not have clear results. The majority of cholangiocarcinomas occur in the hilar region of the liver and often present as bile duct obstruction. If the cause of obstruction is suspected to be malignant, endoscopic retrograde cholangiopancreatography (ERCP), ultrasound, CT, MRI and magnetic resonance cholangiopancreatography (MRCP) are used. Tumor markers, chemicals sometimes found in the blood of people with cancer, can be helpful in diagnosing and monitoring the course of liver cancers. High levels of alpha-fetoprotein (AFP) in the blood can be found in many cases of HCC and intrahepatic cholangiocarcinoma. Of note, AFP is most useful for monitoring whether liver cancers come back after treatment rather than for initial diagnosis. Cholangiocarcinoma can be detected with these commonly used tumor markers: carbohydrate antigen 19-9 (CA 19-9), carcinoembryonic antigen (CEA) and cancer antigen 125 (CA125). These tumor markers are found in primary liver cancers, as well as in other cancers and certain other disorders. Prevention Prevention of cancers can be separated into primary, secondary, and tertiary prevention. Primary prevention preemptively reduces exposure to risk factors for liver cancer. One of the most successful forms of primary liver cancer prevention is vaccination against hepatitis B. A vaccine against the hepatitis C virus is currently unavailable. Other forms of primary prevention are aimed at limiting transmission of these viruses by promoting safe injection practices, screening blood donation products, and screening high-risk asymptomatic individuals. Aflatoxin exposure can be avoided by post-harvest intervention to discourage mold, which has been effective in west Africa.
Reducing alcohol use disorder, obesity, and diabetes mellitus would also reduce rates of liver cancer. Diet control in hemochromatosis can decrease the risk of iron overload, decreasing the risk of cancer. Secondary prevention includes both cure of the agent involved in the formation of cancer (carcinogenesis) and the prevention of carcinogenesis if this is not possible. Cure of virus-infected individuals is not always possible, but treatment with antiviral drugs can decrease the risk of liver cancer. Chlorophyllin may have potential in reducing the effects of aflatoxin. Tertiary prevention includes treatments to prevent the recurrence of liver cancer. These include the use of surgical interventions, chemotherapy drugs, and antiviral drugs. Treatment General considerations Like many cancers, treatment depends on the specific type of liver cancer as well as the stage of the cancer. Cancer is staged mainly through the TNM staging system; there are also liver cancer-specific staging systems, each with treatment options that may result in non-recurrence of the cancer, or cure. For HCC, for example, it is common to use the Barcelona Clinic Liver Cancer staging system. Treatments include surgery, medications, and ablation methods. Many systemic drugs are approved for liver cancer, including atezolizumab, nivolumab, pembrolizumab, and regorafenib. Increasingly, immunotherapy agents (also called targeted cancer therapies or precision medicine) are being used to treat hepatobiliary cancers. Recent advances in liver cancer treatment explore T cells engineered with chimeric antigen receptors (CARs) targeting glypican-3 (GPC3), such as GAP T cells, which show potential in addressing GPC3-positive tumors, especially in pediatric liver cancers. Hepatocellular carcinoma Partial surgical resection is the recommended treatment for hepatocellular carcinoma (HCC) when patients have sufficient hepatic function reserve. Five-year survival rates after resection have massively improved over the last few decades and can now range from 41% to 74%. However, recurrence rates after resection can exceed 70%, whether due to spread of the initial tumor or formation of new tumors. Liver transplantation can also be considered in cases of HCC where this form of treatment can be tolerated and the tumor fits specific criteria (such as the Milan criteria). In general, patients who are being considered for liver transplantation have multiple hepatic lesions, severe underlying liver dysfunction, or both. Percutaneous ablation is the only non-surgical treatment that can offer cure. There are many forms of percutaneous ablation, which consist of either injecting chemicals into the liver (ethanol or acetic acid) or producing extremes of temperature using radiofrequency ablation, microwaves, lasers or cryotherapy. Of these, radiofrequency ablation has one of the best reputations in HCC, but its limitations include an inability to treat tumors close to other organs and blood vessels, due to heat generation and the heat-sink effect, respectively. In addition, the long-term outcomes of percutaneous ablation procedures for HCC have not been well studied. In general, surgery is the preferred treatment modality when possible. Systemic chemotherapeutics are not routinely used in HCC, although local chemotherapy may be used in a procedure known as transarterial chemoembolization (TACE). In this procedure, drugs that kill cancer cells and interrupt the blood supply are applied to the tumor.
Because most systemic drugs had shown no efficacy in the treatment of HCC, research into the molecular pathways involved in the production of liver cancer produced sorafenib, a targeted therapy drug that prevents cell proliferation and blood vessel growth. Sorafenib obtained FDA approval for the treatment of advanced hepatocellular carcinoma in November 2007. This drug provides a survival benefit in advanced HCC. Transarterial radioembolization (TARE) is another option for HCC; in this procedure, radiation treatment is targeted at the tumor. TARE is still considered an add-on treatment rather than a first choice for treatment of HCC, as dual treatments of radiotherapy plus chemoembolization, local chemotherapy, systemic chemotherapy or targeted therapy drugs may show benefit over radiotherapy alone. Ablation methods (e.g. radiofrequency ablation or microwave ablation) are also an option for HCC treatment; they are recommended for small, localized liver tumors, as the area treated with radiofrequency ablation should be 2 centimeters or less. Intrahepatic cholangiocarcinoma Resection is an option in cholangiocarcinoma, but fewer than 30% of cases of cholangiocarcinoma are resectable at diagnosis. The reason the majority of intrahepatic cholangiocarcinomas cannot be surgically removed is that there are often multiple focal tumors within the liver. After surgery, recurrence rates are up to 60%. Liver transplant may be used where partial resection is not an option, and adjuvant chemoradiation may benefit some cases. Sixty percent of cholangiocarcinomas form in the perihilar region, and photodynamic therapy can be used to improve quality of life and survival time in these unresectable cases. Photodynamic therapy is a novel treatment that uses light-activated molecules to treat the tumor. The compounds are activated in the tumor region by laser light, which causes the release of toxic reactive oxygen species, killing tumor cells. Systemic chemotherapies such as gemcitabine and cisplatin are sometimes used in inoperable cases of cholangiocarcinoma. Radiofrequency ablation, transarterial chemoembolization and internal radiotherapy (brachytherapy) all show promise in the treatment of cholangiocarcinoma, and these treatments can sometimes improve bile flow, which can decrease the symptoms a patient experiences. Radiotherapy may be used in the adjuvant setting or for palliative treatment of cholangiocarcinoma. Hepatoblastoma Removing the tumor by either surgical resection or liver transplant can be used in the treatment of hepatoblastoma, and in some cases surgery can offer a cure. Chemotherapy may be used before and after surgery and transplant. Chemotherapy drugs, including cisplatin, vincristine, cyclophosphamide, and doxorubicin, are used for the systemic treatment of hepatoblastoma; of these drugs, cisplatin seems to be the most effective. Angiosarcoma and hemangiosarcoma Many of these tumors are not amenable to surgical treatment; where surgery is possible, treatment consists of removing the affected parts of the liver. Liver transplantation and chemotherapy are not effective for angiosarcomas and hemangiosarcomas of the liver. Epidemiology Globally, liver cancer is common and increasing. The most recent epidemiological data suggest that liver cancer is in the top 10 for both prevalence and mortality (it is the sixth-leading cause of cancer and the fourth most-common cause of cancer death).
The Global Burden of Disease Liver Cancer Collaboration found that, from 1990 to 2015, the number of new cases of liver cancer per year increased by 75%. Estimates based on the most recent data suggest that each year there are 841,000 new liver cancer diagnoses and 782,000 deaths across the globe. Liver cancer is the most common cancer in Egypt, the Gambia, Guinea, Mongolia, Cambodia, and Vietnam. In terms of gender breakdown, liver cancer is globally more common in men than in women. Given that HCC is the most common type of liver cancer, the areas around the world with the most new cases of HCC each year are Northern and Western Africa as well as Eastern and South-Eastern Asia. China has 50% of HCC cases globally, and more than 80% of total cases occur in sub-Saharan Africa or in East Asia, due to hepatitis B virus. In these high-disease-burden areas, evidence indicates that the majority of HBV and HCV infections occur via perinatal transmission (also called mother-to-child transmission). However, it is important to note that the risk factors for HCC vary by geographic region. For example, in China, chronic HBV infection and aflatoxin are the largest risk factors, whereas in Mongolia it is a combination of HBV and HCV co-infection and high levels of alcohol use that drives the high levels of HCC. For intrahepatic cholangiocarcinoma, there is currently insufficient epidemiological data because it is a rare cancer; according to the United States National Cancer Institute, the incidence of cholangiocarcinoma is not known. Cholangiocarcinoma also has a significant geographical distribution, with Thailand showing the highest rates worldwide due to the presence of liver fluke. In the United States, there were 42,810 new cases of liver and intrahepatic bile duct cancer in 2020, representing 2.4% of all new cancer cases in the United States. About 89,950 people in the United States are living with liver and intrahepatic bile duct cancer. In terms of mortality, the 5-year survival rate for liver and intrahepatic bile duct cancers in the United States is 19.6%. In the United States, there is an estimated 1% lifetime chance of getting liver cancer, which makes this cancer relatively rare. Despite the low number of cases, it is one of the top causes of cancer deaths.
Biology and health sciences
Cancer
Health
7635377
https://en.wikipedia.org/wiki/Barium%20peroxide
Barium peroxide
Barium peroxide is an inorganic compound with the formula BaO₂. This white solid (gray when impure) is one of the most common inorganic peroxides, and it was the first peroxide compound discovered. Being an oxidizer and giving a vivid green colour upon ignition (as do all barium compounds), it finds some use in fireworks; historically, it was also used as a precursor for hydrogen peroxide. Structure Barium peroxide consists of barium cations Ba²⁺ and peroxide anions O₂²⁻. The solid is isomorphous to calcium carbide, CaC₂. Preparation and use Barium peroxide arises by the reversible reaction of oxygen with barium oxide: 2 BaO + O₂ ⇌ 2 BaO₂. The peroxide forms around 500 °C, and oxygen is released above 820 °C. This reaction is the basis for the now-obsolete Brin process for separating oxygen from the atmosphere. Other oxides, such as SrO, behave similarly. In another obsolete application, barium peroxide was once used to produce hydrogen peroxide via its reaction with sulfuric acid: BaO₂ + H₂SO₄ → H₂O₂ + BaSO₄. The insoluble barium sulfate is filtered from the mixture.
Physical sciences
Peroxide salts
Chemistry
18969769
https://en.wikipedia.org/wiki/CP%20violation
CP violation
In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C-symmetry) while its spatial coordinates are inverted ("mirror" or P-symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch. It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present universe, and in the study of weak interactions in particle physics. Overview Until the 1950s, parity conservation was believed to be one of the fundamental geometric conservation laws (along with conservation of energy and conservation of momentum). After the discovery of parity violation in 1956, CP-symmetry was proposed to restore order. However, while the strong interaction and the electromagnetic interaction are experimentally found to be invariant under the combined CP transformation, further experiments showed that this symmetry is slightly violated in certain types of weak decay. Only a weaker version of the symmetry could be preserved by physical phenomena: CPT symmetry. Besides C and P, there is a third operation, time reversal T, which corresponds to reversal of motion. Invariance under time reversal implies that whenever a motion is allowed by the laws of physics, the reversed motion is also allowed and occurs at the same rate forwards and backwards. The combination of CPT is thought to constitute an exact symmetry of all types of fundamental interactions. Because of the long-held CPT symmetry theorem, provided that it is valid, a violation of CP-symmetry is equivalent to a violation of T-symmetry. In this theorem, regarded as one of the basic principles of quantum field theory, charge conjugation, parity, and time reversal are applied together. Direct observation of time-reversal symmetry violation, without any assumption of the CPT theorem, was achieved in 1998 by two groups, the CPLEAR and KTeV collaborations, at CERN and Fermilab, respectively. Already in 1970, Klaus Schubert had observed T violation independently of any assumption of CPT symmetry, by using the Bell–Steinberger unitarity relation. History P-symmetry The idea behind parity symmetry was that the equations of particle physics are invariant under mirror inversion. This led to the prediction that the mirror image of a reaction (such as a chemical reaction or radioactive decay) occurs at the same rate as the original reaction. However, in 1956 a careful critical review of the existing experimental data by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang revealed that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. The first test, based on the beta decay of cobalt-60 nuclei, was carried out in 1956 by a group led by Chien-Shiung Wu and demonstrated conclusively that weak interactions violate P-symmetry: some reactions did not occur as often as their mirror image. However, parity symmetry still appears to be valid for all reactions involving electromagnetism and strong interactions.
CP-symmetry Overall, the symmetry of a quantum mechanical system can be restored if another approximate symmetry S can be found such that the combined symmetry PS remains unbroken. This rather subtle point about the structure of Hilbert space was realized shortly after the discovery of P violation, and it was proposed that charge conjugation, C, which transforms a particle into its antiparticle, was the suitable symmetry to restore order. In 1956 Reinhard Oehme, in a letter to Chen-Ning Yang, and shortly after Boris L. Ioffe, Lev Okun and A. P. Rudik, showed that the parity violation meant that charge conjugation invariance must also be violated in weak decays. Charge violation was confirmed in the Wu experiment and in experiments performed by Valentine Telegdi and Jerome Friedman, and by Garwin and Lederman, who observed parity non-conservation in pion and muon decay and found that C is also violated. Charge violation was shown more explicitly in experiments done by John Riley Holt at the University of Liverpool. Oehme then wrote a paper with Lee and Yang in which they discussed the interplay of non-invariance under P, C and T. The same result was also independently obtained by Ioffe, Okun and Rudik. Both groups also discussed possible CP violations in neutral kaon decays. In 1957 Lev Landau proposed CP-symmetry, often called just CP, as the true symmetry between matter and antimatter. CP-symmetry is the product of two transformations: C for charge conjugation and P for parity. In other words, a process in which all particles are exchanged with their antiparticles was assumed to be equivalent to the mirror image of the original process, and so the combined CP-symmetry would be conserved in the weak interaction. In 1962, a group of experimentalists at Dubna, on Okun's insistence, unsuccessfully searched for CP-violating kaon decay. Experimental status Indirect CP violation In 1964, James Cronin, Val Fitch and coworkers provided clear evidence from kaon decay that CP-symmetry could be broken. This work won them the 1980 Nobel Prize. This discovery showed that weak interactions violate not only the charge-conjugation symmetry C between particles and antiparticles and the P or parity symmetry, but also their combination. The discovery shocked particle physics and opened the door to questions still at the core of particle physics and of cosmology today. The lack of an exact CP-symmetry, but also the fact that it is so close to a symmetry, introduced a great puzzle. The kind of CP violation (CPV) discovered in 1964 was linked to the fact that neutral kaons can transform into their antiparticles (in which each quark is replaced with the other's antiquark) and vice versa, but such transformation does not occur with exactly the same probability in both directions; this is called indirect CP violation. Direct CP violation Despite many searches, no other manifestation of CP violation was discovered until the 1990s, when the NA31 experiment at CERN suggested evidence for CP violation in the decay process of the very same neutral kaons (direct CP violation). The observation was somewhat controversial, and final proof for it came in 1999 from the KTeV experiment at Fermilab and the NA48 experiment at CERN.
Starting in 2001, a new generation of experiments, including the BaBar experiment at the Stanford Linear Accelerator Center (SLAC) and the Belle experiment at the High Energy Accelerator Research Organisation (KEK) in Japan, observed direct CP violation in a different system, namely in decays of the B mesons. A large number of CP violation processes in B meson decays have now been discovered. Before these "B-factory" experiments, there was a logical possibility that all CP violation was confined to kaon physics. However, this raised the question of why CP violation did not extend to the strong force, and furthermore, why this was not predicted by the unextended Standard Model, despite the model's accuracy for "normal" phenomena. In 2011, a hint of CP violation in decays of neutral D mesons was reported by the LHCb experiment at CERN using 0.6 fb−1 of Run 1 data. However, the same measurement using the full 3.0 fb−1 Run 1 sample was consistent with CP-symmetry. In 2013 LHCb announced discovery of CP violation in strange B meson decays. In March 2019, LHCb announced discovery of CP violation in charmed decays, with a deviation from zero of 5.3 standard deviations. In 2020, the T2K Collaboration reported some indications of CP violation in leptons for the first time. In this experiment, beams of muon neutrinos and muon antineutrinos were alternately produced by an accelerator. By the time they reached the detector, a significantly higher proportion of electron neutrinos was observed from the neutrino beams than electron antineutrinos from the antineutrino beams. Analysis of these observations was not yet precise enough to determine the size of the CP violation relative to that seen in quarks. In addition, another similar experiment, NOvA, sees no evidence of CP violation in neutrino oscillations and is in slight tension with T2K. CP violation in the Standard Model "Direct" CP violation is allowed in the Standard Model if a complex phase appears in the Cabibbo–Kobayashi–Maskawa matrix (CKM matrix) describing quark mixing, or the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix) describing neutrino mixing. A necessary condition for the appearance of the complex phase is the presence of at least three generations of fermions. If fewer generations are present, the complex phase parameter can be absorbed into redefinitions of the fermion fields. A popular rephasing invariant whose vanishing signals absence of CP violation, and which occurs in most CP-violating amplitudes, is the Jarlskog invariant, J = Im(V_us V_cb V_ub* V_cs*) ≈ 3×10⁻⁵ for quarks, far below its theoretical maximum of 1/(6√3) ≈ 0.1; for leptons, only an upper limit exists. The reason why such a complex phase causes CP violation (CPV) is not immediately obvious, but can be seen as follows. Consider any given particles (or sets of particles) a and b, and their antiparticles ā and b̄. Now consider the processes a → b and the corresponding antiparticle process ā → b̄, and denote their amplitudes M and M̄ respectively. Before CP violation, these terms must be the same complex number. We can separate the magnitude and phase by writing M = |M| e^(iθ). If a phase term is introduced from (e.g.) the CKM matrix, denote it e^(iφ). Note that M̄ contains the conjugate matrix to M, so it picks up a phase term e^(−iφ). Now the formula becomes M = |M| e^(iθ) e^(iφ) and M̄ = |M| e^(iθ) e^(−iφ). Physically measurable reaction rates are proportional to |M|², thus so far nothing is different. However, consider that there are two different routes from a to b, or equivalently, two unrelated intermediate states: a → 1 → b and a → 2 → b. This is exactly the case for the kaon, whose decay can proceed via different quark channels. In this case we have M = |M₁| e^(iθ₁) e^(iφ₁) + |M₂| e^(iθ₂) e^(iφ₂) and M̄ = |M₁| e^(iθ₁) e^(−iφ₁) + |M₂| e^(iθ₂) e^(−iφ₂). Some further calculation gives |M|² − |M̄|² = −4 |M₁| |M₂| sin(θ₁ − θ₂) sin(φ₁ − φ₂). Thus, we see that a complex phase gives rise to processes that proceed at different rates for particles and antiparticles, and CP is violated.
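A short numerical check of the interference result above; the magnitudes and phases are arbitrary illustrative values, and the script simply compares the direct computation of |M|² − |M̄|² with the closed form −4|M₁||M₂|sin(θ₁−θ₂)sin(φ₁−φ₂).

import cmath, math

# Arbitrary illustrative amplitudes: CP-invariant phases theta, CP-violating phases phi.
m1, th1, ph1 = 1.0, 0.30, 0.20
m2, th2, ph2 = 0.5, 1.10, -0.40

M    = m1 * cmath.exp(1j * (th1 + ph1)) + m2 * cmath.exp(1j * (th2 + ph2))
Mbar = m1 * cmath.exp(1j * (th1 - ph1)) + m2 * cmath.exp(1j * (th2 - ph2))

direct = abs(M) ** 2 - abs(Mbar) ** 2
closed = -4 * m1 * m2 * math.sin(th1 - th2) * math.sin(ph1 - ph2)
print(direct, closed)  # the two values agree; the asymmetry vanishes if either phase difference is zero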
In this case we have: $M = |M_1| e^{i\theta_1} e^{i\phi_1} + |M_2| e^{i\theta_2} e^{i\phi_2}$ and $\bar{M} = |M_1| e^{i\theta_1} e^{-i\phi_1} + |M_2| e^{i\theta_2} e^{-i\phi_2}$. Some further calculation gives: $|M|^2 - |\bar{M}|^2 = -4 |M_1| |M_2| \sin(\theta_1 - \theta_2) \sin(\phi_1 - \phi_2)$. Thus, we see that a complex phase gives rise to processes that proceed at different rates for particles and antiparticles, and CP is violated. From the theoretical end, the CKM matrix is defined as $V_{\mathrm{CKM}} = U_u^\dagger U_d$, where $U_u$ and $U_d$ are unitary transformation matrices which diagonalize the fermion mass matrices $M_u$ and $M_d$, respectively. Thus, there are two necessary conditions for getting a complex CKM matrix: At least one of $U_u$ and $U_d$ must be complex, or the CKM matrix will be purely real. If both of them are complex, $U_u$ and $U_d$ must be different, i.e., $U_u \neq U_d$, or the CKM matrix will be an identity matrix, which is also purely real. For a standard model with three fermion generations, the most general non-Hermitian pattern of the mass matrices can be written as $M = A + iB$, with $A$ and $B$ real 3×3 coefficient matrices. Such an $M$ matrix contains 9 elements and 18 parameters: 9 from the real coefficients and 9 from the imaginary coefficients. Obviously, a 3×3 matrix with 18 parameters is too difficult to diagonalize analytically. However, a naturally Hermitian matrix $\bar{M} = M M^\dagger$ can be formed; it is diagonalized by the same unitary transformation matrix $U$ as $M$, and its parameters are determined directly by those of $M$. That means that diagonalizing an $\bar{M}$ matrix with 9 parameters has the same effect as diagonalizing an $M$ matrix with 18 parameters. Therefore, diagonalizing the $\bar{M}$ matrix is certainly the most reasonable choice. The $M$ and $\bar{M}$ matrix patterns given above are the most general ones. The perfect way to solve the CPV problem in the standard model is to diagonalize such matrices analytically and to achieve a $U$ matrix which applies to both. Unfortunately, even though the $\bar{M}$ matrix has only 9 parameters, it is still too complicated to be diagonalized directly. Thus, a simplifying assumption relating the real and imaginary parts of the matrix was employed, which further reduces the number of parameters from 9 to 5. Diagonalizing the reduced $\bar{M}$ matrix analytically yields its eigenvalues and a general pattern for the matrix $U$ of the up-type quarks. However, the order of the eigenvalues, and correspondingly the order of the columns of $U$, is not fixed and can be any permutation of them. After obtaining a general $U$ matrix pattern, the same procedure can be applied to down-type quarks by introducing primed parameters. To construct the CKM matrix, the conjugate transpose of the matrix for up-type quarks, denoted $U_u^\dagger$, has to be multiplied with the matrix for down-type quarks, denoted $U_d$. As mentioned earlier, there are no inherent constraints that dictate the assignment of eigenvalues to specific quark flavors, and all potential permutations of eigenvalues are listed elsewhere. Among these 36 potential CKM matrices, 4 fit the experimental data, at tree level, to the order of the Wolfenstein parameter $\lambda$ or better, and the best-fit values of the CKM elements follow from the full expressions of the reduced parameters. Since the discovery of CP violation in 1964, physicists have believed that in theory, within the framework of the Standard Model, it is sufficient to search for appropriate Yukawa couplings (equivalent to a mass matrix) in order to generate a complex phase in the CKM matrix, thus automatically breaking CP symmetry. However, the specific matrix pattern has remained elusive. The above derivation provides the first evidence for this idea and offers some explicit examples to support it.
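To make the two-route argument concrete, here is a minimal numerical sketch (Python; the magnitudes and phases are arbitrary illustrative values, not fitted to any real decay) checking that the rate difference between a process and its CP conjugate equals $-4|M_1||M_2|\sin(\theta_1-\theta_2)\sin(\phi_1-\phi_2)$, and that it vanishes when the weak phases are set to zero:

```python
import cmath
import math

def rates(r1, theta1, phi1, r2, theta2, phi2):
    """Return (|M|^2, |Mbar|^2) for two interfering decay routes.

    Strong phases theta_i are CP-even (same sign in both amplitudes);
    weak phases phi_i flip sign under CP conjugation."""
    M    = r1 * cmath.exp(1j * (theta1 + phi1)) + r2 * cmath.exp(1j * (theta2 + phi2))
    Mbar = r1 * cmath.exp(1j * (theta1 - phi1)) + r2 * cmath.exp(1j * (theta2 - phi2))
    return abs(M) ** 2, abs(Mbar) ** 2

# Arbitrary illustrative inputs.
r1, th1, ph1 = 1.0, 0.30, 0.20
r2, th2, ph2 = 0.7, 1.10, -0.40

rate, rate_bar = rates(r1, th1, ph1, r2, th2, ph2)
predicted = -4 * r1 * r2 * math.sin(th1 - th2) * math.sin(ph1 - ph2)
print(rate - rate_bar, predicted)   # the two numbers agree

# With vanishing weak phases the asymmetry disappears: CP is conserved.
rate, rate_bar = rates(r1, th1, 0.0, r2, th2, 0.0)
print(rate - rate_bar)              # 0.0
```

The sketch also makes the standard point visible: a nonzero asymmetry requires both a weak-phase difference and a strong-phase difference between the routes, which is why direct CP violation needs at least two interfering channels.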
Strong CP problem There is no experimentally known violation of the CP-symmetry in quantum chromodynamics. As there is no known reason for it to be conserved in QCD specifically, this is a "fine tuning" problem known as the strong CP problem. QCD does not violate the CP-symmetry as easily as the electroweak theory; unlike the electroweak theory, in which the gauge fields couple to chiral currents constructed from the fermionic fields, the gluons couple to vector currents. Experiments do not indicate any CP violation in the QCD sector. For example, a generic CP violation in the strongly interacting sector would create an electric dipole moment of the neutron comparable to 10−18 e·m, while the experimental upper bound is roughly one ten-billionth that size. This is a problem because there are natural terms in the QCD Lagrangian that are able to break the CP-symmetry. For a nonzero choice of the θ angle and the chiral phase of the quark mass θ′, one expects the CP-symmetry to be violated. One usually assumes that the chiral quark mass phase can be converted to a contribution to the total effective angle, but it remains to be explained why this angle is extremely small instead of being of order one; the particular value of the θ angle that must be very close to zero (in this case) is an example of a fine-tuning problem in physics, and is typically solved by physics beyond the Standard Model. There are several proposed solutions to the strong CP problem. The best known is Peccei–Quinn theory, involving new scalar particles called axions. A newer, more radical approach not requiring the axion is a theory involving two time dimensions, first proposed in 1998 by Bars, Deliduman, and Andreev. Matter–antimatter imbalance The observable universe is made chiefly of matter, rather than consisting of equal parts of matter and antimatter as might be expected. It can be demonstrated that, to create an imbalance in matter and antimatter from an initial condition of balance, the Sakharov conditions must be satisfied, one of which is the existence of CP violation during the extreme conditions of the first seconds after the Big Bang. Explanations which do not involve CP violation are less plausible, since they rely on the assumption that the matter–antimatter imbalance was present at the beginning, or on other admittedly exotic assumptions. The Big Bang should have produced equal amounts of matter and antimatter if CP-symmetry was preserved; as such, there should have been total cancellation of both: protons should have cancelled with antiprotons, electrons with positrons, neutrons with antineutrons, and so on. This would have resulted in a sea of radiation in the universe with no matter. Since this is not the case, after the Big Bang, physical laws must have acted differently for matter and antimatter, i.e. violating CP-symmetry. The Standard Model contains at least three sources of CP violation. The first of these, involving the Cabibbo–Kobayashi–Maskawa matrix in the quark sector, has been observed experimentally and can only account for a small portion of the CP violation required to explain the matter–antimatter asymmetry. The strong interaction should also violate CP, in principle, but the failure to observe the electric dipole moment of the neutron in experiments suggests that any CP violation in the strong sector is also too small to account for the necessary CP violation in the early universe.
The third source of CP violation is the Pontecorvo–Maki–Nakagawa–Sakata matrix in the lepton sector. The current long-baseline neutrino oscillation experiments, T2K and NOvA, may be able to find evidence of CP violation over a small fraction of possible values of the CP-violating Dirac phase, while the proposed next-generation experiments, Hyper-Kamiokande and DUNE, will be sensitive enough to definitively observe CP violation over a relatively large fraction of possible values of the Dirac phase. Further into the future, a neutrino factory could be sensitive to nearly all possible values of the CP-violating Dirac phase. If neutrinos are Majorana fermions, the PMNS matrix could have two additional CP-violating Majorana phases, leading to a fourth source of CP violation within the Standard Model. The experimental evidence for Majorana neutrinos would be the observation of neutrinoless double-beta decay. The best limits come from the GERDA experiment. CP violation in the lepton sector generates a matter–antimatter asymmetry through a process called leptogenesis. This could become the preferred explanation in the Standard Model for the matter–antimatter asymmetry of the universe if CP violation is experimentally confirmed in the lepton sector. If CP violation in the lepton sector is experimentally determined to be too small to account for the matter–antimatter asymmetry, some new physics beyond the Standard Model would be required to explain additional sources of CP violation. Adding new particles and/or interactions to the Standard Model generally introduces new sources of CP violation, since CP is not a symmetry of nature. Sakharov proposed a way to restore CP-symmetry using T-symmetry, extending spacetime before the Big Bang. He described complete CPT reflections of events on each side of what he called the "initial singularity". Because of this, phenomena with an opposite arrow of time at t < 0 would undergo an opposite CP violation, so the CP-symmetry would be preserved as a whole. The anomalous excess of matter over antimatter after the Big Bang in the orthochronous (or positive) sector becomes an excess of antimatter before the Big Bang (the antichronous or negative sector), as charge conjugation, parity, and the arrow of time are all reversed by the CPT reflection of all phenomena over the initial singularity.
Physical sciences
Particle physics: General
Physics
665891
https://en.wikipedia.org/wiki/Thrashing%20%28computer%20science%29
Thrashing (computer science)
In computer science, thrashing occurs in a system with virtual memory when a computer's real storage resources are overcommitted, leading to a constant state of paging and page faults, slowing most application-level processing. This causes the performance of the computer to degrade or even collapse. The situation can continue indefinitely until the user closes some running applications or the active processes free up additional virtual memory resources. After initialization, most programs operate on a small number of code and data pages compared to the total memory the program requires. The pages most frequently accessed at any point are called the working set, which may change over time. When the working set is not significantly greater than the system's total number of real storage page frames, virtual memory systems work most efficiently, and an insignificant amount of computing is spent resolving page faults. As the total of the working sets grows, resolving page faults remains manageable until the growth reaches a critical point at which the number of faults increases dramatically and the time spent resolving them overwhelms the time spent on the computation the program was written to perform. This condition is referred to as thrashing. Thrashing may occur in a program that randomly accesses huge data structures, as its large working set causes continual page faults that drastically slow down the system. Satisfying page faults may require freeing pages that will soon have to be re-read from disk. The term is also used for various similar phenomena, particularly movement between other levels of the memory hierarchy, wherein a process progresses slowly because significant time is being spent acquiring resources. "Thrashing" is also used in contexts other than virtual memory systems – for example, to describe cache issues in computing or silly window syndrome in networking. Overview Virtual memory works by treating a portion of secondary storage, such as a computer hard disk, as an additional layer of the cache hierarchy. Virtual memory allows processes to use more memory than is physically present in main memory. Operating systems supporting virtual memory assign processes a virtual address space, and each process refers to addresses in its execution context by a so-called virtual address. To access data such as code or variables at that address, the process must translate the address to a physical address in a process known as virtual address translation. In effect, physical main memory becomes a cache for virtual memory, which is in general stored on disk in memory pages. Programs are allocated a certain number of pages as needed by the operating system. Active memory pages exist in both RAM and on disk. Inactive pages are removed from the cache and written to disk when the main memory becomes full. If processes are utilizing all main memory and need additional memory pages, a cascade of severe cache misses known as page faults will occur, often leading to a noticeable lag in operating system responsiveness. This process, together with the futile, repetitive page swapping that occurs, is known as "thrashing". This frequently leads to high, runaway CPU utilization that can grind the system to a halt. In modern computers, thrashing may occur in the paging system (if there is not sufficient physical memory or if disk access time is overly long), or in the I/O communications subsystem (especially in conflicts over internal bus access), etc.
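The working-set effect described above can be reproduced in a toy model. The following sketch (Python; the frame counts and access counts are made-up illustrative parameters) simulates LRU page replacement for a process touching a set of hot pages uniformly at random, and shows the page-fault rate jumping once the working set no longer fits in the available frames:

```python
import random
from collections import OrderedDict

def fault_rate(working_set_size, num_frames, accesses=20000, seed=1):
    """Simulate LRU paging for a process that touches `working_set_size`
    hot pages uniformly at random; return the fraction of page faults."""
    rng = random.Random(seed)
    frames = OrderedDict()          # page -> None, kept in LRU order
    faults = 0
    for _ in range(accesses):
        page = rng.randrange(working_set_size)
        if page in frames:
            frames.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1                     # miss: page fault
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = None
    return faults / accesses

# 64 physical frames: fault rate is near zero until the working set
# exceeds the frame count, then climbs steeply (the thrashing regime).
for ws in (16, 32, 64, 80, 128, 256):
    print(f"working set {ws:3d} pages -> fault rate {fault_rate(ws, 64):.2f}")
```

With 64 frames, the fault rate stays near zero up to a 64-page working set and then climbs steeply (roughly 0.2 at 80 pages, 0.5 at 128, 0.75 at 256): the qualitative signature of thrashing.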
Depending on the configuration and algorithms involved, the throughput and latency of a system may degrade by multiple orders of magnitude. In a thrashing system, the CPU spends less time on 'productive' work and more time on 'swapping' work, and the overall memory access time increases, because pages must repeatedly be fetched from the next lower, slower level of the memory hierarchy. The CPU becomes so busy swapping pages that it cannot respond to users' programs and interrupts as much as required. Thrashing occurs when there are too many active pages for real memory to hold, so some of them must be kept in 'virtual memory' on backing store; whenever execution demands a page that is not currently in real memory (RAM), the system must move some resident page out and bring the required page into RAM. If the CPU is kept busy mostly with this task, thrashing occurs. Causes In virtual memory systems, thrashing may be caused by programs or workloads that present insufficient locality of reference: if the working set of a program or a workload cannot be effectively held within physical memory, then constant data swapping, i.e., thrashing, may occur. The term was first used during the tape operating system days to describe the sound the tapes made when data was being rapidly written to and read. A worst case might occur on VAX processors. A single MOVL crossing a page boundary could have a source operand using a displacement deferred addressing mode, where the longword containing the operand address crosses a page boundary, and a destination operand using a displacement deferred addressing mode, where the longword containing the operand address crosses a page boundary, and the source and destination data could both cross page boundaries. This single instruction references ten pages; if not all are in RAM, each will cause a page fault, and all ten pages must be simultaneously present in memory. If any one of the ten pages cannot be swapped in (for example, to make room for any of the other pages), the instruction will fault, and every attempt to restart it will fail until all ten pages can be swapped in. A system that is thrashing is often the result of a sudden spike in page demand from a small number of running programs. Swap-token is a lightweight and dynamic thrashing protection mechanism. The basic idea is to set a token in the system, which is randomly given to a process that has page faults when thrashing happens. The process that holds the token is given the privilege to allocate more physical memory pages to build its working set, which is expected to let it finish its execution quickly and then release the memory pages to other processes. A timestamp is used to hand over the token from one process to the next. The first version of swap-token was implemented in Linux. The second version is called preempt swap-token; in this updated implementation, a priority counter is set for each process to track the number of swapped-out pages. The token is always given to the process with high priority, that is, with a high number of swapped-out pages. The length of time a process holds the token is not constant but is determined by the priority: the higher the number of swapped-out pages of a process, the longer it will hold the token.
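A minimal sketch of the preempt swap-token selection policy described above (Python; the process fields, tick units, and the linear holding-time rule are illustrative assumptions, not the actual Linux code):

```python
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    swapped_out_pages: int   # the priority counter of the preempt swap-token scheme

def grant_token(processes, base_quantum=10):
    """Pick the token holder and its holding time (conceptual sketch only).

    Following the preempt swap-token idea: the process with the most
    swapped-out pages wins the token, and holds it for a time that grows
    with that count."""
    holder = max(processes, key=lambda p: p.swapped_out_pages)
    holding_time = base_quantum * (1 + holder.swapped_out_pages)
    return holder, holding_time

procs = [Process(1, 40), Process(2, 250), Process(3, 90)]
holder, t = grant_token(procs)
print(f"token -> pid {holder.pid}, held for {t} ticks")  # pid 2 wins
```

In the real mechanism the token holder's pages are additionally protected from eviction by the kernel's page-reclaim path, so it can rebuild its working set and finish quickly; the sketch shows only the selection policy.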
Other uses Thrashing is best known in the context of memory and storage, but analogous phenomena occur for other resources, including: Cache thrashing, where main memory is accessed in a pattern that leads to multiple main memory locations competing for the same cache lines, resulting in excessive cache misses; this is most likely to be problematic for caches with low associativity (a simulated example appears below). TLB thrashing, where the translation lookaside buffer (TLB), acting as a cache for the memory management unit (MMU) which translates virtual addresses to physical addresses, is too small for the working set of pages. TLB thrashing can occur even if instruction cache or data cache thrashing is not occurring, because these are cached at different granularities: instructions and data are cached in small blocks (cache lines), not entire pages, while address lookup is done at the page level. Thus, even if the code and data working sets fit into the caches, if those working sets are fragmented across many pages, the virtual-address working set may not fit into the TLB, causing TLB thrashing. Heap thrashing, which refers to frequent garbage collection due to failure to allocate memory for an object, caused by insufficient free memory or by insufficient contiguous free memory due to memory fragmentation. Process thrashing, a similar phenomenon for processes: when the process working set cannot be coscheduled, i.e. when not all interacting processes can be scheduled to run at the same time, they are repeatedly scheduled and unscheduled and progress only slowly.
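The cache-thrashing case is easy to see in a toy simulation. The sketch below (Python; the cache geometry is made up for illustration) models a set-associative cache with per-set LRU and accesses two addresses that map to the same set: a direct-mapped (1-way) cache misses on every access, while a 2-way cache holds both lines at once.

```python
def simulate(addresses, num_sets, ways, line_size=64):
    """Count misses for a set-associative cache with LRU within each set."""
    sets = [[] for _ in range(num_sets)]       # each set: list of tags, LRU order
    misses = 0
    for addr in addresses:
        line = addr // line_size
        index, tag = line % num_sets, line // num_sets
        s = sets[index]
        if tag in s:
            s.remove(tag); s.append(tag)       # hit: move to MRU position
        else:
            misses += 1
            if len(s) >= ways:
                s.pop(0)                       # evict the LRU line
            s.append(tag)
    return misses

# Two addresses exactly one cache-size apart map to the same set.
a, b = 0x0000, 0x0000 + 64 * 64                # 64 sets of 64-byte lines apart
pattern = [a, b] * 1000                        # alternate between them

print(simulate(pattern, num_sets=64, ways=1))  # 2000 misses: every access thrashes
print(simulate(pattern, num_sets=64, ways=2))  # 2 misses: both lines coexist
```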
Technology
Operating systems
null
665909
https://en.wikipedia.org/wiki/Injection%20%28medicine%29
Injection (medicine)
An injection (usually referred to as a "shot" in US English, a "jab" in UK English, or a "jag" in Scottish English and Scots) is the act of administering a liquid, especially a drug, into a person's body using a needle (usually a hypodermic needle) and a syringe. An injection is considered a form of parenteral drug administration; it does not involve absorption in the digestive tract. This allows the medication to be absorbed more rapidly and to avoid the first-pass effect. There are many types of injection, which are generally named after the body tissue the injection is administered into. This includes common injections such as subcutaneous, intramuscular, and intravenous injections, as well as less common injections such as epidural, intraperitoneal, intraosseous, intracardiac, intraarticular, and intracavernous injections. Injections are among the most common health care procedures, with at least 16 billion administered in developing and transitional countries each year. Of these, 95% are used in curative care or as treatment for a condition, 3% are to provide immunizations/vaccinations, and the rest are used for other purposes, including blood transfusions. The term injection is sometimes used synonymously with inoculation, but injection does not refer only to the act of inoculation. Injections generally administer a medication as a bolus (or one-time) dose, but can also be used for continuous drug administration. After injection, a medication may be designed to be released slowly, called a depot injection, which can produce long-lasting effects. An injection necessarily causes a small puncture wound to the body, and thus may cause localized pain or infection. The occurrence of these side effects varies based on injection location, the substance injected, needle gauge, procedure, and individual sensitivity. Rarely, more serious side effects including gangrene, sepsis, and nerve damage may occur. Fear of needles, also called needle phobia, is common and may result in anxiety and fainting before, during, or after an injection. To prevent the localized pain that occurs with injections, the injection site may be numbed or cooled before injection, and the person receiving the injection may be distracted by a conversation or similar means. To reduce the risk of infection from injections, proper aseptic technique should be followed to clean the injection site before administration. If needles or syringes are reused between people, or if an accidental needlestick occurs, there is a risk of transmission of bloodborne diseases such as HIV and hepatitis. Unsafe injection practices contribute to the spread of bloodborne diseases, especially in less-developed countries. To combat this, safety syringes exist which contain features to prevent accidental needlestick injury and reuse of the syringe after it is used once. Furthermore, recreational drug users who use injections to administer drugs commonly share or reuse needles after an injection. This has led to the development of needle exchange programs and safe injection sites as a public health measure; these may provide new, sterile syringes and needles to discourage reuse. Used needles should ideally be placed in a purpose-made sharps container which is safe and resistant to puncture. Some locations provide free disposal programs for such containers for their citizens.
Types Injections are classified in multiple ways, including the type of tissue being injected into, the location in the body the injection is designed to produce effects, and the duration of the effects. Regardless of classification, injections require a puncture to be made, thus requiring sterile environments and procedures to minimize the risk of introducing pathogens into the body. All injections are considered forms of parenteral administration, which avoids the first-pass metabolism that would potentially affect a medication absorbed through the gastrointestinal tract. Systemic Many injections are designed to administer a medication which has an effect throughout the body. Systemic injections may be used when a person cannot take medicine by mouth, or when the medication itself would not be absorbed into circulation from the gastrointestinal tract. Medications administered via a systemic injection will enter into blood circulation, either directly or indirectly, and thus will have an effect on the entire body. Intravenous Intravenous injections, abbreviated as IV, involve inserting a needle into a vein, allowing a substance to be delivered directly into the bloodstream. An intravenous injection provides the quickest onset of the desired effects: because the substance immediately enters the blood, there is no delay due to absorption, and it is quickly circulated to the rest of the body. This type of injection is the most common and is used frequently for administration of medications in an inpatient setting. Another use of intravenous injections is the administration of nutrition to people who cannot get nutrition through the digestive tract. This is termed parenteral nutrition and may provide all or only part of a person's nutritional requirements. Parenteral nutrition may be pre-mixed or customized for a person's specific needs. Intravenous injections may also be used for recreational drugs when a rapid onset of effects is desired. Intramuscular Intramuscular injections, abbreviated as IM, deliver a substance deep into a muscle, where it is quickly absorbed by the blood vessels into systemic circulation. Common injection sites include the deltoid, vastus lateralis, and ventrogluteal muscles. Medical professionals are trained to give IM injections, but people who are not medical professionals can also be trained to administer medications like epinephrine using an autoinjector in an emergency. Some depot injections are also administered intramuscularly, including medroxyprogesterone acetate among others. In addition to medications, most inactivated vaccines, including the influenza vaccine, are given as an IM injection. Subcutaneous Subcutaneous injections, abbreviated as SC or sub-Q, consist of injecting a substance via a needle under the skin. Absorption of the medicine from this tissue is slower than in an intramuscular injection. Since the needle does not need to penetrate to the level of the muscle, a thinner and shorter needle can be used. Subcutaneous injections may be administered in the fatty tissue behind the upper arm, in the abdomen, or in the thigh. Certain medications, including epinephrine, may be used either intramuscularly or subcutaneously. Others, such as insulin, are almost exclusively injected subcutaneously.
Live, attenuated vaccines, including the MMR vaccine (measles, mumps, rubella), varicella vaccine (chickenpox), and zoster vaccine (shingles), are also injected subcutaneously. Intradermal Intradermal injections, abbreviated as ID, consist of a substance delivered into the dermis, the layer of skin above the subcutaneous fat layer but below the epidermis or top layer. An intradermal injection is administered with the needle placed almost flat against the skin, at a 5 to 15 degree angle. Absorption from an intradermal injection takes longer than when the injection is given intravenously, intramuscularly, or subcutaneously. For this reason, few medications are administered intradermally. Intradermal injections are most commonly used for sensitivity tests, including tuberculin skin tests and allergy tests, as well as sensitivity tests to medications a person has never had before. The reactions caused by tests which use intradermal injection are more easily seen due to the location of the injection, and when positive will present as a red or swollen area. Common sites of intradermal injections include the forearm and lower back. Intraosseous An intraosseous injection or infusion is the act of administering medication through a needle inserted into the bone marrow of a large bone. This method of administration is only used when it is not possible to maintain access through a less invasive method such as an intravenous line, either because access is repeatedly lost as vessels collapse, or because a suitable vein cannot be found in the first place. Intraosseous access is commonly obtained by inserting a needle into the bone marrow of the humerus or tibia, and is generally only considered once multiple attempts at intravenous access have failed, as it is a more invasive method of administration than an IV. With the exception of occasional differences in the accuracy of blood tests when drawn from an intraosseous line, it is considered to be equivalent in efficacy to IV access. It is most commonly used in emergency situations where there is not ample time to repeatedly attempt to obtain IV access, or in younger people for whom obtaining IV access is more difficult. Localized Injections may be performed into specific parts of the body when the medication's effects are desired to be limited to a specific location, or where systemic administration would produce undesirable side effects which may be avoided by a more directed injection. Injections to the corpus cavernosum of the penis, termed intracavernous injections, may be used to treat conditions which are localized to the penis. They can be self-administered for erectile dysfunction prior to intercourse. In a healthcare setting, they are used as emergency treatment for a prolonged erection, with an injection either to remove blood from the penis or to administer a sympathomimetic medication to reduce the erection. Intracavernosal injections of alprostadil may be used by people for whom other treatments such as PDE5 inhibitors are ineffective or contraindicated. Other medications may also be administered in this way, including papaverine, phentolamine, and aviptadil. The most common adverse effects of intracavernosal injections include fibrosis and pain, as well as hematomas or bruising around the injection site. Medications may also be administered by injecting them directly into the vitreous humor of the eye.
This is termed an intravitreal injection, and may be used to treat endophthalmitis (an infection of the inner eye), macular degeneration, and macular edema. An intravitreal injection is performed by injecting a medication into the vitreous humor core of the eye after applying a local anesthetic drop to numb the eye and a mydriatic drop to dilate the pupil. Intravitreal injections are commonly used in lieu of systemic administration both to increase the concentrations present in the eye and to avoid systemic side effects of medications. When an effect is only required in one joint, a joint injection (or intra-articular injection) may be administered into the articular space surrounding the joint. These injections can range from a one-time dose of a steroid to help with pain and inflammation to complete replacement of the synovial fluid with a compound such as hyaluronic acid. The injection of a steroid into a joint is used to reduce inflammation associated with conditions such as osteoarthritis, and the effects may last for up to 6 months following a single injection. Hyaluronic acid injection is used to supplement the body's natural synovial fluid and decrease the friction and stiffness of the joint. Administering a joint injection generally involves the use of ultrasound or another live imaging technique to ensure the injection is administered in the desired location, as well as to reduce the risk of damaging surrounding tissues. Long-acting Long-acting injectable (LAI) formulations of medications are not intended to have a rapid effect, but instead release a medication at a predictable rate continuously over a period of time. Both depot injections and solid injectable implants are used to increase adherence to therapy by reducing the frequency at which a person must take a medication. Depot A depot injection is an injection, usually subcutaneous, intradermal, or intramuscular, that deposits a drug in a localized mass, called a depot, from which it is gradually absorbed by surrounding tissue. Such an injection allows the active compound to be released in a consistent way over a long period. Depot injections are usually either oil-based or an aqueous suspension. Depot injections may be available as certain salt forms of a drug, such as decanoate salts or esters. Examples of depot injections include haloperidol decanoate, medroxyprogesterone acetate, and naltrexone. Implant Injections may also be used to insert a solid or semi-solid implant into the body which releases a medication slowly over time. These implants are generally designed to be temporary and replaceable, and are ultimately removed at the end of their use or when replaced. There are multiple contraceptive implants marketed for different active ingredients, as well as differing durations of action; most of these are injected under the skin. A form of buprenorphine for the treatment of opioid dependence is also available as an injectable implant. Various materials can be used to manufacture implants, including biodegradable polymers, osmotic release systems, and small spheres which dissolve in the body. Adverse effects Pain The act of piercing the skin with a needle, while necessary for an injection, also may cause localized pain. The most common technique to reduce the pain of an injection is simply to distract the person receiving the injection. Pain may be dampened by prior application of ice or topical anesthetic, or by pinching of the skin while giving the injection.
Some studies also suggest that forced coughing during an injection stimulates a transient rise in blood pressure which inhibits the perception of pain. For some injections, especially deeper injections, a local anesthetic is given. When giving an injection to young children or infants, they may be distracted by giving them a small amount of sweet liquid, such as sugar solution, or be comforted by breastfeeding during the injection, which reduces crying. Infection A needle tract infection, also called a needlestick infection, is an infection that occurs when pathogens are inadvertently introduced into the tissues of the body during an injection. Contamination of the needle used for injection, or reuse of needles for injections in multiple people, can lead to transmission of hepatitis B and C, HIV, and other bloodstream infections. Injection drug users have high rates of unsafe needle use, including sharing needles between people. The spread of HIV, hepatitis B, and hepatitis C from injection drug use is a common health problem, in particular contributing to over half of new HIV cases in North America in 1994. Other infections may occur when pathogens enter the body through the injection site, most commonly due to improper cleaning of the site before injection. Infections occurring in this way are mainly localized infections, including skin infections, skin structure infections, abscesses, or gangrene. An intravenous injection may also result in a bloodstream infection (termed sepsis) if the injection site is not cleaned properly prior to insertion. Sepsis is a life-threatening condition which requires immediate treatment. Others Injections into the skin and soft tissue generally do not cause any permanent damage, and the puncture heals within a few days. However, in some cases, injections can cause long-term adverse effects. Intravenous and intramuscular injections may cause damage to a nerve, leading to palsy or paralysis. Intramuscular injections may cause fibrosis or contracture. Injections also cause localized bleeding, which may lead to a hematoma. Intravenous injections may also cause phlebitis, especially when multiple injections are given in a vein over a short period of time. Infiltration and extravasation may also occur when a medication intended to be injected into a vein is inadvertently injected into surrounding tissues. Those who are afraid of needles may also experience fainting at the sight of a needle, or before or after an injection. Technique Proper needle use is important to perform injections safely, which includes the use of a new, sterile needle for each injection. This is partly because needles get duller with each use and partly because reusing needles increases the risk of infection. Needles should not be shared between people, as this increases the risk of transmitting blood-borne pathogens. The practice of using the same needle for multiple people increases the risk of disease transmission between people sharing the same medication. In addition, it is not recommended to reuse a used needle to pierce a medication bag, bottle, or ampule designed to provide multiple doses of a medication; instead, a new needle should be used each time the container must be pierced. Aseptic technique should always be practiced when administering injections. This includes the use of barriers including gloves, gowns, and masks for health care providers.
It also requires the use of a new, sterile needle, syringe and other equipment for each injection, as well as proper training to avoid touching non-sterile surfaces with sterile items. To help prevent accidental needlestick injury to the person administering the injection, and to prevent reuse of the syringe for another injection, a safety syringe and needle may be used. The most basic reuse prevention device is an "auto-disable" plunger, which once pressed past a certain point will no longer retract. Another common safety feature is an auto-retractable needle, where the needle is spring-loaded and either retracts into the syringe after injection, or into a plastic sheath on the side of the syringe. Other safety syringes have an attached sheath which may be moved to cover the end of the needle after the injection is given. The World Health Organization recommends the use of single-use syringes with both reuse prevention devices and a needlestick injury prevention mechanism for all injections to prevent accidental injury and disease transmission. Novel injection techniques include drug diffusion within the skin using needle-free micro-jet injection (NFI) technology. Disposal of used needles Used needles should be disposed of in specifically designed sharps containers to reduce the risk of accidental needle sticks and exposure to other people. Once a sharps container is full, it should be sealed properly to prevent re-opening or accidental opening during transportation, and a new container should be begun. Some locations offer publicly accessible "sharps take-back" programs where a sharps container may be dropped off at a public location for safe disposal at no fee to the person. In addition, some pharmaceutical and independent companies provide mail-back sharps programs, sometimes for an additional fee. In the United States, there are 39 states that offer programs to provide needle or syringe exchange. Over half of non-industrialized countries report open burning of disposed or used syringes. This practice is considered unsafe by the World Health Organization. Aspiration Aspiration is the technique of pulling back on the plunger of a syringe prior to the actual injection. If blood flows into the syringe, it signals that a blood vessel has been hit. Society and culture Due to the prevalence of unsafe injection practices, especially among injection drug users, many locations have begun offering supervised injection sites and needle exchange programs, which may be offered separately or colocated. These programs may provide new sterile needles upon request to mitigate infection risk, and some also provide access to on-site clinicians and emergency medical care if it becomes required. In the event of an overdose, a site may also provide medications such as naloxone, used as an antidote in opioid overdose situations, or other antidotes or emergency care. Safe injection sites have been associated with lower rates of death from overdose, fewer ambulance calls, and lower rates of new HIV infections from unsafe needle practices. As of 2024, at least eleven countries offer safe injection sites, including Australia, Canada, Denmark, France, Germany, Luxembourg, the Netherlands, Norway, Spain, Switzerland, and the United States. In total, there are at least 120 sites operating. Plants and animals Many species of animals use injections for self-defence or catching prey. This includes venomous snakes, which inject venom when they bite into the skin with their fangs.
Common substances present in snake venom include neurotoxins, toxic proteins, and cytotoxic enzymes. Different species of snakes inject different formulations of venom, which may cause severe pain and necrosis before progressing into neurotoxicity and potentially death. The weever is a type of fish which has venomous spines covering its fins and gills and injects a venom consisting of proteins which cause a severe local reaction that is not life-threatening. Stingrays use their spinal blade to inject a protein-based venom which causes localized cell death but is not generally life-threatening. Some types of insects also utilize injection for various purposes. Bees use a stinger located in their hind region to inject a venom consisting of proteins such as melittin, which causes a localized painful and itching reaction. Leeches can inject an anticoagulant peptide called hirudin after attaching, which prevents blood from clotting during feeding. This property of leeches has been used historically as a natural form of anticoagulation therapy, as well as for bloodletting as a treatment for various ailments. Some species of ants inject venoms containing compounds which produce minor pain, such as formic acid, which is injected by members of the subfamily Formicinae. Other species of ants, including Dinoponera species, inject protein-based venom which causes severe pain but is still not life-threatening. The bullet ant (Paraponera clavata) injects a venom which contains a neurotoxin named poneratoxin, which causes extreme pain, fever, and cold sweats, and may cause arrhythmias. Plants may use a form of injection which is passive, where the injectee pushes themselves against the stationary needle. The stinging nettle plant has many trichomes, or stinging hairs, over its leaves and stems, which are used to inject a mix of irritating chemicals including histamine, serotonin, and acetylcholine. This sting produces a form of dermatitis which is characterized by a stinging, burning, and itching sensation in the area. Dendrocnide species, also called stinging trees, use their trichomes to inject a mix of neurotoxic peptides which causes a reaction similar to the stinging nettle, but may also result in recurring flares for a much longer period after the injection. While some plants have thorns, spines, and prickles, these are generally not used for injection of any substance; rather, it is the act of piercing the skin that makes them a deterrent.
Biology and health sciences
General concepts_2
Health
665951
https://en.wikipedia.org/wiki/Deterministic%20algorithm
Deterministic algorithm
In computer science, a deterministic algorithm is an algorithm that, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. Deterministic algorithms are by far the most studied and familiar kind of algorithm, as well as one of the most practical, since they can be run on real machines efficiently. Formally, a deterministic algorithm computes a mathematical function; a function has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output. Formal definition Deterministic algorithms can be defined in terms of a state machine: a state describes what a machine is doing at a particular instant in time. State machines pass in a discrete manner from one state to another. Just after we enter the input, the machine is in its initial state or start state. If the machine is deterministic, this means that from this point onwards, its current state determines what its next state will be; its course through the set of states is predetermined. Note that a machine can be deterministic and still never stop or finish, and therefore fail to deliver a result. Examples of particular abstract machines which are deterministic include the deterministic Turing machine and the deterministic finite automaton. Non-deterministic algorithms A variety of factors can cause an algorithm to behave in a way which is not deterministic, or non-deterministic: If it uses an external state other than the input, such as user input, a global variable, a hardware timer value, a random value, or stored disk data. If it operates in a way that is timing-sensitive, for example, if it has multiple processors writing to the same data at the same time. In this case, the precise order in which each processor writes its data will affect the result. If a hardware error causes its state to change in an unexpected way. Although real programs are rarely purely deterministic, it is easier for humans as well as other programs to reason about programs that are. For this reason, most programming languages, and especially functional programming languages, make an effort to prevent the above events from happening except under controlled conditions. The prevalence of multi-core processors has resulted in a surge of interest in determinism in parallel programming, and the challenges of non-determinism have been well documented. A number of tools have been proposed to help deal with these challenges, such as deadlocks and race conditions. Disadvantages of determinism It is advantageous, in some cases, for a program to exhibit nondeterministic behavior. The behavior of a card shuffling program used in a game of blackjack, for example, should not be predictable by players, even if the source code of the program is visible. The use of a pseudorandom number generator is often not sufficient to ensure that players are unable to predict the outcome of a shuffle. A clever gambler might guess precisely the numbers the generator will choose and so determine the entire contents of the deck ahead of time, allowing them to cheat; for example, the Software Security Group at Reliable Software Technologies was able to do this for an implementation of Texas Hold 'em Poker that is distributed by ASF Software, Inc., allowing them to consistently predict the outcome of hands ahead of time.
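A minimal sketch of that weakness (Python; the timestamp seed and deck model are hypothetical illustrations, not ASF Software's actual code): if the shuffle is seeded with a guessable quantity such as a coarse clock value, anyone who can try the plausible seeds can reconstruct the dealt deck exactly.

```python
import random

def shuffled_deck(seed):
    """Shuffle a 52-card deck with a PRNG seeded deterministically."""
    deck = list(range(52))
    random.Random(seed).shuffle(deck)
    return deck

# Dealer seeds the shuffle with a coarse timestamp (a common mistake).
dealer_seed = 1_700_000_123            # pretend: int(time.time()) at deal time
dealt = shuffled_deck(dealer_seed)

# Attacker knows the rough time of the deal and tries nearby seeds.
for guess in range(dealer_seed - 5, dealer_seed + 5):
    if shuffled_deck(guess) == dealt:
        print(f"seed recovered: {guess}; entire deck order known in advance")
        break
```

As the next paragraph notes, a cryptographically secure generator initialized from a genuinely unpredictable seed closes this hole.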
These problems can be avoided, in part, through the use of a cryptographically secure pseudorandom number generator, but it is still necessary for an unpredictable random seed to be used to initialize the generator. For this purpose, a source of nondeterminism is required, such as that provided by a hardware random number generator. Note that a negative answer to the P=NP problem would not imply that programs with nondeterministic output are theoretically more powerful than those with deterministic output. The complexity class NP can be defined without any reference to nondeterminism using the verifier-based definition. Determinism categories in languages Mercury The Mercury logic-functional programming language establishes different determinism categories for predicate modes, as explained in its reference manual. Haskell Haskell provides several mechanisms. Non-determinism or the notion of failure: the Maybe and Either types include the notion of success in the result; the fail method of the class Monad may be used to signal failure as an exception; and the Maybe monad and the MaybeT monad transformer provide for failed computations (stop the computation sequence and return Nothing). Non-determinism with multiple solutions: you may retrieve all possible outcomes of a multiple-result computation by wrapping its result type in a MonadPlus monad (its method mzero makes an outcome fail and mplus collects the successful results). ML family and derived languages As seen in Standard ML, OCaml and Scala, the option type includes the notion of success. Java In Java, the null reference value may represent an unsuccessful (out-of-domain) result.
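For comparison, the same success/failure-in-the-result-type idea can be mimicked in a language without a built-in option type (a Python sketch, illustrative only):

```python
from typing import Optional

def safe_div(a: float, b: float) -> Optional[float]:
    """A partial function made total: None marks the out-of-domain case,
    much like Haskell's Nothing or ML's NONE."""
    if b == 0:
        return None
    return a / b

results = [safe_div(10, n) for n in (2, 0, 5)]
print(results)   # [5.0, None, 2.0] -- failure is an ordinary value, not an exception
```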
Mathematics
Algorithms
null
666401
https://en.wikipedia.org/wiki/Triclinic%20crystal%20system
Triclinic crystal system
In crystallography, the triclinic (or anorthic) crystal system is one of the seven crystal systems. A crystal system is described by three basis vectors. In the triclinic system, the crystal is described by vectors of unequal length, as in the orthorhombic system. In addition, the angles between these vectors must all be different and may not include 90°. The triclinic lattice is the least symmetric of the 14 three-dimensional Bravais lattices. It has (itself) only the minimum symmetry that all lattices have: points of inversion at each lattice point and at 7 more points for each lattice point, namely at the midpoints of the edges and the faces, and at the center points. It is the only lattice type that itself has no mirror planes. Crystal classes The triclinic crystal system class names, examples, Schönflies notation, Hermann–Mauguin notation, point groups, International Tables for Crystallography space group number, orbifold, type, and space groups are listed below. There are a total of 2 space groups, and only one space group is associated with each class: the pedial class (Schönflies C1, Hermann–Mauguin 1) corresponds to space group P1 (No. 1), and the pinacoidal class (Schönflies Ci, Hermann–Mauguin 1̄) corresponds to space group P1̄ (No. 2). Pinacoidal is also known as triclinic normal, and pedial as triclinic hemihedral. Mineral examples include plagioclase, microcline, rhodonite, turquoise, wollastonite and amblygonite, all in the triclinic normal (pinacoidal) class.
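As a concrete illustration of a cell described by three independent lengths and three independent oblique angles, the following sketch (Python; the cell parameters are illustrative values, not those of a particular mineral) computes a triclinic unit-cell volume with the standard formula:

```python
import math

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Unit-cell volume from edge lengths (any unit) and angles (degrees):
    V = abc * sqrt(1 - cos^2(alpha) - cos^2(beta) - cos^2(gamma)
                   + 2*cos(alpha)*cos(beta)*cos(gamma))"""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca*ca - cb*cb - cg*cg + 2*ca*cb*cg)

# Illustrative triclinic cell: all lengths unequal, all angles unequal and non-90°.
print(triclinic_volume(7.1, 7.9, 7.0, 89.2, 101.1, 105.2))  # in Å^3 if lengths are Å
```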
Physical sciences
Crystallography
Physics
666697
https://en.wikipedia.org/wiki/Cubic%20crystal%20system
Cubic crystal system
In crystallography, the cubic (or isometric) crystal system is a crystal system where the unit cell is in the shape of a cube. This is one of the most common and simplest shapes found in crystals and minerals. There are three main varieties of these crystals: Primitive cubic (abbreviated cP and alternatively called simple cubic) Body-centered cubic (abbreviated cI or bcc) Face-centered cubic (abbreviated cF or fcc) Note: the term fcc is often used as a synonym for the cubic close-packed or ccp structure occurring in metals. However, fcc stands for a face-centered cubic Bravais lattice, which is not necessarily close-packed when a motif is set onto the lattice points. For example, the diamond and the zincblende lattices are fcc but not close-packed. Each is subdivided into other variants listed below. Although the unit cells in these crystals are conventionally taken to be cubes, the primitive unit cells often are not. Bravais lattices The three Bravais lattices in the cubic crystal system are: The primitive cubic lattice (cP) consists of one lattice point on each corner of the cube; this means each simple cubic unit cell has in total one lattice point. Each atom at a lattice point is then shared equally between eight adjacent cubes, and the unit cell therefore contains in total one atom (1/8 × 8). The body-centered cubic lattice (cI) has one lattice point in the center of the unit cell in addition to the eight corner points. It has a net total of two lattice points per unit cell (1/8 × 8 + 1). The face-centered cubic lattice (cF) has lattice points on the faces of the cube, each of which contributes exactly one half, in addition to the corner lattice points, giving a total of four lattice points per unit cell (1/8 × 8 from the corners plus 1/2 × 6 from the faces). The face-centered cubic lattice is closely related to the hexagonal close packed (hcp) system, where the two systems differ only in the relative placements of their hexagonal layers. The [111] plane of a face-centered cubic lattice is a hexagonal grid. Attempting to create a base-centered cubic lattice (i.e., putting an extra lattice point in the center of each horizontal face) results in a simple tetragonal Bravais lattice. Coordination number (CN) is the number of nearest neighbors of a central atom in the structure. Each sphere in a cP lattice has coordination number 6, in a cI lattice 8, and in a cF lattice 12. Atomic packing factor (APF) is the fraction of volume that is occupied by atoms. The cP lattice has an APF of about 0.524, the cI lattice an APF of about 0.680, and the cF lattice an APF of about 0.740; these values follow from the sphere geometry checked in the sketch below. Crystal classes The isometric crystal system class names, point groups (in Schönflies notation, Hermann–Mauguin notation, orbifold, and Coxeter notation), type, examples, International Tables for Crystallography space group number, and space groups are listed in the table below. There are a total of 36 cubic space groups. Other terms for hexoctahedral are: normal class, holohedral, ditesseral central class, galena type. Single element structures As a rule, since atoms in a solid attract each other, the more tightly packed arrangements of atoms tend to be more common. (Loosely packed arrangements do occur, though, for example if the orbital hybridization demands certain bond angles.) Accordingly, the primitive cubic structure, with an especially low atomic packing factor, is rare in nature, but is found in polonium. The bcc and fcc, with their higher densities, are both quite common in nature.
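A short check of the atomic packing factors quoted above (Python; it simply evaluates the textbook geometry in which touching spheres fix the atomic radius r in terms of the cube edge, taken here as a = 1):

```python
import math

def apf(atoms_per_cell, radius_in_units_of_a):
    """Atomic packing factor: total sphere volume per unit-cell volume (a = 1)."""
    return atoms_per_cell * (4 / 3) * math.pi * radius_in_units_of_a ** 3

# Spheres touch along: the cube edge (cP), the body diagonal (cI, 4r = sqrt(3)a),
# and the face diagonal (cF, 4r = sqrt(2)a).
print(f"cP: {apf(1, 1 / 2):.3f}")              # 0.524
print(f"cI: {apf(2, math.sqrt(3) / 4):.3f}")   # 0.680
print(f"cF: {apf(4, math.sqrt(2) / 4):.3f}")   # 0.740
```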
Examples of bcc include iron, chromium, tungsten, and niobium. Examples of fcc include aluminium, copper, gold and silver. Another important cubic crystal structure is the diamond cubic structure, which can appear in carbon, silicon, germanium, and tin. Unlike fcc and bcc, this structure is not a lattice, since it contains multiple atoms in its primitive cell. Other cubic elemental structures include the A15 structure found in tungsten, and the extremely complicated structure of manganese. Multi-element structures Compounds that consist of more than one element (e.g. binary compounds) often have crystal structures based on the cubic crystal system. Some of the more common ones are listed here. These structures can be viewed as two or more interpenetrating sublattices where each sublattice occupies the interstitial sites of the others. Caesium chloride structure One structure is the "interpenetrating primitive cubic" structure, also called a "caesium chloride" or B2 structure. This structure is often confused with a body-centered cubic structure because the arrangement of atoms is the same. However, the caesium chloride structure has a basis composed of two different atomic species. In a body-centered cubic structure, there would be translational symmetry along the [111] direction. In the caesium chloride structure, translation along the [111] direction results in a change of species. The structure can also be thought of as two separate simple cubic structures, one of each species, that are superimposed within each other. The corner of the chloride cube is the center of the caesium cube, and vice versa. The same holds for the NaCl structure described in the next section: if the Cl atoms are taken out, the remaining Na atoms still form an fcc structure, not a simple cubic structure. In the unit cell of CsCl, each ion is at the center of a cube of ions of the opposite kind, so the coordination number is eight. The central cation is coordinated to 8 anions on the corners of a cube, and similarly, the central anion is coordinated to 8 cations on the corners of a cube. Alternatively, one could view this lattice as a simple cubic structure with a secondary atom in its cubic void. In addition to caesium chloride itself, the structure also appears in certain other alkali halides when prepared at low temperatures or high pressures. Generally, this structure is more likely to be formed from two elements whose ions are of roughly the same size (for example, ionic radius of Cs+ = 167 pm, and Cl− = 181 pm). The space group of the caesium chloride (CsCl) structure is called Pm3̄m (in Hermann–Mauguin notation), or "221" (in the International Tables for Crystallography). The Strukturbericht designation is "B2". There are nearly a hundred rare earth intermetallic compounds that crystallize in the CsCl structure, including many binary compounds of rare earths with magnesium, and with elements in groups 11, 12, and 13. Other compounds showing a caesium chloride like structure are CsBr, CsI, high-temperature RbCl, AlCo, AgZn, BeCu, MgCe, RuAl and SrTl. Rock-salt structure The space group of the rock-salt or halite (sodium chloride) structure is denoted as Fm3̄m (in Hermann–Mauguin notation), or "225" (in the International Tables for Crystallography). The Strukturbericht designation is "B1". In the rock-salt structure, each of the two atom types forms a separate face-centered cubic lattice, with the two lattices interpenetrating so as to form a 3D checkerboard pattern.
The rock-salt structure has octahedral coordination: each atom's nearest neighbors consist of six atoms of the opposite type, positioned like the six vertices of a regular octahedron. In sodium chloride there is a 1:1 ratio of sodium to chlorine atoms. The structure can also be described as an fcc lattice of sodium with chlorine occupying each octahedral void, or vice versa. Examples of compounds with this structure include sodium chloride itself, along with almost all other alkali halides, and "many divalent metal oxides, sulfides, selenides, and tellurides". According to the radius ratio rule, this structure is more likely to be formed if the cation is somewhat smaller than the anion (a cation/anion radius ratio of 0.414 to 0.732); a short numerical check of this rule for the cubic structure types is sketched at the end of this section. The interatomic distance (distance between cation and anion, or half the unit cell length a) in some rock-salt-structure crystals is: 2.3 Å (2.3 × 10−10 m) for NaF, 2.8 Å for NaCl, and 3.2 Å for SnTe. Most of the alkali metal hydrides and halides have the rock salt structure, though a few have the caesium chloride structure instead. Many transition metal monoxides also have the rock salt structure (TiO, VO, CrO, MnO, FeO, CoO, NiO, CdO). The early actinoid monocarbides also have this structure (ThC, PaC, UC, NpC, PuC). Fluorite structure Much like the rock salt structure, the fluorite structure (AB2) is also an Fm3̄m structure, but it has a 1:2 ratio of ions. The anti-fluorite structure is nearly identical, except that the positions of the anions and cations are switched in the structure. They are designated Wyckoff positions 4a and 8c, whereas the rock-salt structure positions are 4a and 4b. Zincblende structure The space group of the zincblende structure is called F4̄3m (in Hermann–Mauguin notation), or 216. The Strukturbericht designation is "B3". The zincblende structure (also written "zinc blende") is named after the mineral zincblende (sphalerite), one form of zinc sulfide (β-ZnS). As in the rock-salt structure, the two atom types form two interpenetrating face-centered cubic lattices. However, it differs from the rock-salt structure in how the two lattices are positioned relative to one another. The zincblende structure has tetrahedral coordination: each atom's nearest neighbors consist of four atoms of the opposite type, positioned like the four vertices of a regular tetrahedron. In zinc sulfide the ratio of zinc to sulfur is 1:1. Altogether, the arrangement of atoms in the zincblende structure is the same as in the diamond cubic structure, but with alternating types of atoms at the different lattice sites. The structure can also be described as an fcc lattice of zinc with sulfur atoms occupying half of the tetrahedral voids, or vice versa. Examples of compounds with this structure include zincblende itself, lead(II) nitrate, many compound semiconductors (such as gallium arsenide and cadmium telluride), and a wide array of other binary compounds. The II-VI family of compounds, to which cadmium telluride belongs, can mostly be made in either the zincblende (cubic) or wurtzite (hexagonal) form. The boron group pnictogenides, also known as the III-V family of compounds, usually have a zincblende structure, though the nitrides are more common in the wurtzite structure and their zincblende forms are less well known polymorphs. Heusler structure The Heusler structure, based on the structure of Cu2MnAl, is a common structure for ternary compounds involving transition metals. It has the space group Fm3̄m (No. 225), and the Strukturbericht designation is L21.
Together with the closely related half-Heusler and inverse-Heusler compounds, there are hundreds of examples.
Iron monosilicide structure
The space group of the iron monosilicide structure is P2₁3 (No. 198), and the Strukturbericht designation is B20. This is a chiral structure, and it is sometimes associated with helimagnetic properties. There are four atoms of each element, for a total of eight atoms in the unit cell. Examples occur among the transition metal silicides and germanides, as well as a few other compounds such as gallium palladide.
Weaire–Phelan structure
A Weaire–Phelan structure has Pm3̄n (No. 223) symmetry. It has three orientations of stacked tetradecahedra, with pyritohedral cells in the gaps. It is found as a crystal structure in chemistry, where it is usually known as a "type I clathrate structure". Gas hydrates formed by methane, propane, and carbon dioxide at low temperatures have a structure in which water molecules lie at the nodes of the Weaire–Phelan structure and are hydrogen bonded together, while the larger gas molecules are trapped in the polyhedral cages.
Physical sciences
Crystallography
Physics
667206
https://en.wikipedia.org/wiki/Touchscreen
Touchscreen
A touchscreen (or touch screen) is a type of display that can detect touch input from a user. It consists of both an input device (a touch panel) and an output device (a visual display). The touch panel is typically layered on top of the electronic visual display of a device. Touchscreens are commonly found in smartphones, tablets, laptops, and other electronic devices. The display is often an LCD, AMOLED or OLED display. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers. Some touchscreens work with ordinary or specially coated gloves, while others may only work with a special stylus or pen. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; for example, zooming to increase the text size. A touchscreen enables the user to interact directly with what is displayed, instead of using a mouse, touchpad, or other such devices (other than a stylus, which is optional for most modern touchscreens). Beyond smartphones, handheld game consoles, and personal computers, touchscreens are common in point-of-sale (POS) systems, automated teller machines (ATMs), electronic voting machines, and automobile infotainment systems and controls. They can also be attached to computers or, as terminals, to networks. They play a prominent role in the design of digital appliances such as personal digital assistants (PDAs) and some e-readers, and are important in educational settings such as classrooms and college campuses. The popularity of smartphones, tablets, and many types of information appliances has driven the demand for, and acceptance of, touchscreens for portable and functional electronics. Touchscreens are found in the medical field, heavy industry, ATMs, and kiosks such as museum displays or room automation, where keyboard and mouse systems do not allow a suitably intuitive, rapid, or accurate interaction by the user with the display's content. Historically, the touchscreen sensor and its accompanying controller-based firmware have been made available by a wide array of after-market system integrators, and not by display, chip, or motherboard manufacturers. Display and chip manufacturers have acknowledged the trend toward acceptance of touchscreens as a user interface component and have begun to integrate touchscreens into the fundamental design of their products.
History
One predecessor of the modern touchscreen was the stylus-based system.
1946 DIRECT LIGHT PEN - A patent was filed by the Philco Company for a stylus designed for sports telecasting which, when placed against an intermediate cathode-ray tube (CRT) display, would amplify and add to the original signal. Effectively, this was used for temporarily drawing arrows or circles onto a live television broadcast.
1962 OPTICAL - The first version of a touchscreen which operated independently of the light produced by the screen was patented by AT&T Corporation. This touchscreen utilized a matrix of collimated lights shining orthogonally across the touch surface. When a beam is interrupted by a stylus, the photodetectors which no longer receive a signal can be used to determine where the interruption is.
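The locating principle is simple enough to sketch in a few lines of code. The following Python fragment is illustrative only (the beam indices and function name are invented, not from the patent): it estimates a touch position from the indices of the interrupted beams on each axis.

# Sketch of beam-interruption locating: a touch is placed at the centre of
# the blocked horizontal and vertical beams.
def locate(blocked_x, blocked_y):
    """Return (x, y) in beam-grid coordinates, or None if nothing is blocked."""
    if not blocked_x or not blocked_y:
        return None
    return (sum(blocked_x) / len(blocked_x),
            sum(blocked_y) / len(blocked_y))

# A stylus blocking beams 7 and 8 on the x axis and beam 3 on the y axis:
print(locate([7, 8], [3]))   # (7.5, 3.0)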
Later iterations of matrix-based touchscreens built upon this by adding more emitters and detectors to improve resolution, pulsing emitters to improve the optical signal-to-noise ratio, and using a nonorthogonal matrix to remove shadow readings when using multi-touch.
1963 INDIRECT LIGHT PEN - Later inventions built upon this system to free telewriting styli from their mechanical bindings. By transcribing what a user draws onto a computer, it could be saved for future use.
1965 CAPACITANCE AND RESISTANCE - The first finger-driven touchscreen was developed by Eric Johnson of the Royal Radar Establishment, located in Malvern, England, who described his work on capacitive touchscreens in a short article published in 1965 and then more fully, with photographs and diagrams, in an article published in 1967.
MID-60s ULTRASONIC CURTAIN - Another precursor of touchscreens, an ultrasonic-curtain-based pointing device in front of a terminal display, was developed by a team at Telefunken for an air traffic control system. In 1970, this evolved into a device named "Touchinput-" ("touch input facility") for the SIG 50 terminal, utilizing a conductively coated glass screen in front of the display. This was patented in 1971 and the patent was granted a couple of years later. The same team had already invented and marketed the mouse RKS 100-86 for the SIG 100-86 a couple of years earlier.
1968 CAPACITANCE - The application of touch technology for air traffic control was described in an article published in 1968. Frank Beck and Bent Stumpe, engineers from CERN (European Organization for Nuclear Research), developed a transparent touchscreen in the early 1970s, based on Stumpe's work at a television factory in the early 1960s. Then manufactured by CERN, and shortly after by industry partners, it was put to use in 1973.
1972 OPTICAL - A group at the University of Illinois filed for a patent on an optical touchscreen that became a standard part of the Magnavox Plato IV Student Terminal, and thousands were built for this purpose. These touchscreens had a crossed array of 16×16 infrared position sensors, each composed of an LED on one edge of the screen and a matched phototransistor on the other edge, all mounted in front of a monochrome plasma display panel. This arrangement could sense any fingertip-sized opaque object in close proximity to the screen.
1973 MULTI-TOUCH CAPACITANCE - In 1973, Beck and Stumpe published another article describing their capacitive touchscreen. This indicated that it was capable of multi-touch, but this feature was purposely inhibited, presumably because multi-touch was not considered useful at the time: "A...variable...called BUT changes value from zero to five when a button is touched. The touching of other buttons would give other non-zero values of BUT but this is protected against by software" (page 6, section 2.6); "Actual contact between a finger and the capacitor is prevented by a thin sheet of plastic" (page 3, section 2.3). At that time, projected capacitance had not yet been invented.
1977 RESISTIVE - An American company, Elographics, in partnership with Siemens, began work on developing a transparent implementation of an existing opaque touchpad technology (U.S. patent 3,911,215, October 7, 1975), which had been developed by Elographics' founder George Samuel Hurst. The resulting resistive touchscreen was first shown at the 1982 World's Fair in Knoxville.
1982 MULTI-TOUCH CAMERA - Multi-touch technology began in 1982, when the University of Toronto's Input Research Group developed the first human-input multi-touch system, using a frosted-glass panel with a camera placed behind the glass.
1983 OPTICAL - An optical touchscreen was used on the HP-150 starting in 1983. The HP-150 was one of the world's earliest commercial touchscreen computers. HP mounted its infrared transmitters and receivers around the bezel of a 9-inch Sony cathode ray tube (CRT).
1983 MULTI-TOUCH FORCE SENSING TOUCHSCREEN - Bob Boie of AT&T Bell Labs used capacitance to track the mechanical changes in thickness of a soft, deformable overlay membrane when one or more physical objects interact with it, the flexible surface being easily replaced if damaged by these objects. The patent states "the tactile sensor arrangements may be utilized as a touch screen". Many derivative sources retrospectively describe Boie as making a major advancement with his touchscreen technology, but no evidence has been found that a rugged multi-touch capacitive touchscreen, one that could sense through a rigid, protective overlay (the sort later required for a mobile phone), was ever developed or patented by Boie. Many of these citations rely on anecdotal evidence from Bill Buxton. However, Buxton was unable to obtain the technology. As he states in the citation: "Our assumption (false, as it turned out) was that the Boie technology would become available to us in the near future. Around 1990 I took a group from Xerox to see this technology it [sic] since I felt that it would be appropriate for the user interface of our large document processors. This did not work out".
UP TO 1984 CAPACITANCE - Although, as cited earlier, Johnson is credited with developing the first finger-operated capacitive and resistive touchscreens in 1965, these worked by directly touching wires across the front of the screen. Stumpe and Beck developed a self-capacitance touchscreen in 1972, and a mutual capacitance touchscreen in 1977. Both of these devices could only sense the finger by direct touch or through a thin insulating film, 11 microns thick according to Stumpe's 1977 report.
1984 TOUCHPAD - Fujitsu released a touch pad for the Micro 16 to accommodate the complexity of kanji characters, which were stored as tiled graphics.
1986 GRAPHIC TABLET - A graphic touch tablet was released for the Sega AI Computer.
EARLY 80s EVALUATION FOR AIRCRAFT - Touch-sensitive control-display units (CDUs) were evaluated for commercial aircraft flight decks in the early 1980s. Initial research showed that a touch interface would reduce pilot workload, as the crew could then select waypoints, functions and actions, rather than be "head down" typing latitudes, longitudes, and waypoint codes on a keyboard. An effective integration of this technology was aimed at helping flight crews maintain a high level of situational awareness of all major aspects of the vehicle operations, including the flight path, the functioning of various aircraft systems, and moment-to-moment human interactions.
EARLY 80s EVALUATION FOR CARS - Also in the early 1980s, General Motors tasked its Delco Electronics division with a project aimed at replacing the mechanical and electro-mechanical systems behind an automobile's non-essential functions (i.e. other than throttle, transmission, braking, and steering) with solid state alternatives wherever possible.
The finished device was dubbed the ECC for "Electronic Control Center", a digital computer and software control system hardwired to various peripheral sensors, servomechanisms, solenoids, antenna and a monochrome CRT touchscreen that functioned both as display and sole method of input. The ECC replaced the traditional mechanical stereo, fan, heater and air conditioner controls and displays, and was capable of providing very detailed and specific information about the vehicle's cumulative and current operating status in real time. The ECC was standard equipment on the 1985–1989 Buick Riviera and later the 1988–1989 Buick Reatta, but was unpopular with consumers, partly due to the technophobia of some traditional Buick customers, but mostly because of costly technical problems suffered by the ECC's touchscreen, which would render climate control or stereo operation impossible.
1985 GRAPHIC TABLET - Sega released the Terebi Oekaki, also known as the Sega Graphic Board, for the SG-1000 video game console and SC-3000 home computer. It consisted of a plastic pen and a plastic board with a transparent window where pen presses were detected. It was used primarily with a drawing software application.
1985 MULTI-TOUCH CAPACITANCE - The University of Toronto group, including Bill Buxton, developed a multi-touch tablet that used capacitance rather than bulky camera-based optical sensing systems (see History of multi-touch).
1985 USED FOR POINT OF SALE - The first commercially available graphical point-of-sale (POS) software was demonstrated on the 16-bit Atari 520ST color computer. It featured a color touchscreen widget-driven interface. The ViewTouch POS software was first shown by its developer, Gene Mosher, at the Atari Computer demonstration area of the Fall COMDEX expo in 1986.
1987 CAPACITANCE TOUCH KEYS - Casio launched the Casio PB-1000 pocket computer with a touchscreen consisting of a 4×4 matrix, resulting in 16 touch areas in its small LCD graphic screen.
1988 SELECT ON "LIFT-OFF" - Touchscreens had a bad reputation for being imprecise until 1988. Most user-interface books of the time stated that touchscreen selections were limited to targets larger than the average finger. Selections were made in such a way that a target was selected as soon as the finger came over it, and the corresponding action was performed immediately. Errors were common, due to parallax or calibration problems, leading to user frustration. The "lift-off strategy" was introduced by researchers at the University of Maryland Human–Computer Interaction Lab (HCIL): as users touch the screen, feedback is provided as to what will be selected, so users can adjust the position of the finger, and the action takes place only when the finger is lifted off the screen. This allowed the selection of small targets, down to a single pixel on a 640×480 Video Graphics Array (VGA) screen (a standard of that time).
1988 WORLD EXPO - From April to October 1988, the city of Brisbane, Australia hosted Expo 88, whose theme was "leisure in the age of technology". To support the event and provide information to expo visitors, Telecom Australia (now Telstra) erected 8 kiosks around the expo site with a total of 56 touch screen information consoles, which were specially modified Sony Videotex Workstations. Each system was also equipped with a videodisc player, speakers, and a 20 MB hard drive.
To keep information current during the event, the visitor-information database was updated and transferred remotely to the computer terminals each night. Using the touch screens, visitors were able to find information about the exposition's rides, attractions, performances, facilities, and the surrounding areas. Visitors could also choose between information displayed in English and Japanese, a reflection of Australia's overseas tourist market in the 1980s. Telecom's Expo Info system was based on an earlier system employed at Expo 86 in Vancouver, Canada.
1990 SINGLE AND MULTI-TOUCH GESTURES - Sears et al. (1990) gave a review of academic research on single and multi-touch human–computer interaction of the time, describing gestures such as rotating knobs, adjusting sliders, and swiping the screen to activate a switch (or a U-shaped gesture for a toggle switch). The HCIL team developed and studied small touchscreen keyboards (including a study that showed users could type at 25 words per minute on a touchscreen keyboard), aiding their introduction on mobile devices. They also designed and implemented multi-touch gestures such as selecting a range of a line, connecting objects, and a "tap-click" gesture to select while maintaining location with another finger.
1990 TOUCHSCREEN SLIDER AND TOGGLE SWITCHES - HCIL demonstrated a touchscreen slider, which was later cited as prior art in the lock screen patent litigation between Apple and other touchscreen mobile phone vendors.
1991 INERTIAL CONTROL - From 1991 to 1992, the Sun Star7 prototype PDA implemented a touchscreen with inertial scrolling.
1993 CAPACITANCE MOUSE / KEYPAD - Bob Boie of AT&T Bell Labs patented a simple mouse or keypad that capacitively sensed just one finger through a thin insulator. Although not claimed or even mentioned in the patent, this technology could potentially have been used as a capacitance touchscreen.
1993 FIRST RESISTIVE TOUCHSCREEN PHONE - IBM released the IBM Simon, the first touchscreen phone.
EARLY 90s ABANDONED GAME CONTROLLER - An early attempt at a handheld game console with touchscreen controls was Sega's intended successor to the Game Gear, though the device was ultimately shelved and never released due to the high cost of touchscreen technology in the early 1990s.
1994 FIRST WIRE-BASED PROJECTED CAPACITANCE - Stumpe and Beck's touchscreens (1972/1977, already cited) used opaque conductive copper tracks that obscured about 50% of the screen (80 micron track / 80 micron space). The advent of projected capacitance in 1984, however, with its improved sensing capability, indicated that most of these tracks could be eliminated. This proved to be so, and led to the invention of a wire-based touchscreen in 1994, in which one 25 micron diameter, insulation-coated wire replaced about 30 of these 80 micron wide tracks, and could also accurately sense fingers through thick glass. Screen masking caused by the copper was reduced from 50% to less than 0.5% (the arithmetic is sketched below). The use of fine wire meant that very large touchscreens, several meters wide, could be plotted onto a thin polyester support film with a simple x/y pen plotter, eliminating the need for expensive and complicated sputter coating, laser ablation, screen printing or etching. The resulting touchscreen film, less than 100 microns thick and extremely flexible, could be attached by static or a non-setting weak adhesive to one side of a sheet of glass, for sensing through that glass. Early versions of this device were controlled by the PIC16C54 microchip.
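The masking figures quoted above follow from the track and wire geometry alone. The short Python check below uses the dimensions given in the text; the variable names are illustrative.

# 80 micron copper tracks on an 80 micron space (160 micron pitch), versus one
# 25 micron wire replacing about 30 such tracks.
track = space = 80e-6
wire, replaced = 25e-6, 30

old_masking = track / (track + space)              # fraction of screen obscured
new_masking = wire / (replaced * (track + space))  # one wire per 30-track span
print(f"{old_masking:.0%} {new_masking:.2%}")      # 50% 0.52%, i.e. roughly the
                                                   # 0.5% quoted above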
1994 FIRST PUB GAME WITH TOUCHSCREEN - Appearing in pubs in 1994, JPM's Monopoly SWP (skill with prizes) was the first machine to use touchscreen technology instead of buttons (see Quiz machine / History). It used a 14-inch version of this newly invented wire-based projected capacitance touchscreen and had 64 sensing areas. A zig-zag wiring pattern was introduced to minimize visual reflections and prevent Moiré interference between the wires and the monitor line scans. About 600 of these were sold for this purpose, retailing at £50 apiece, which was very cheap for the time. Working through very thick glass made it ideal for operation in a "hostile" environment, such as a pub. Although reflected light from the copper wires was noticeable under certain lighting conditions, this problem was eliminated by using tinted glass. The reflection issue was later resolved by using finer (10 micron diameter), dark-coated wires. Throughout the following decade, JPM continued to use touchscreens for many other games, such as Cluedo and Who Wants to Be a Millionaire?.
1998 PROJECTED CAPACITANCE LICENSES - This technology was licensed in 1998 to Romag Glass Products (later to become Zytronic Displays), and to Visual Planet in 2003.
2004 MOBILE MULTI-TOUCH PROJECTED CAPACITANCE PATENT - Apple patented its multi-touch capacitive touchscreen for mobile devices.
2004 VIDEO GAMES WITH TOUCHSCREENS - Touchscreens were not popularly used for video games until the release of the Nintendo DS in 2004.
2007 MOBILE PHONE WITH CAPACITANCE - The first mobile phone with a capacitive touchscreen was the LG Prada, released in May 2007, shortly before the first iPhone. By 2009, touchscreen-enabled mobile phones were becoming trendy and quickly gaining popularity in both basic and advanced devices. In the fourth quarter of 2009, for the first time, a majority of smartphones (i.e. not all mobile phones) shipped with touchscreens.
2013 RESISTIVE VERSUS PROJECTED CAPACITANCE SALES - In 2007, 93% of touchscreens shipped were resistive and only 4% were projected capacitance. In 2013, 3% of touchscreens shipped were resistive and 96% were projected capacitance.
2015 FORCE SENSING TOUCHSCREENS - Until recently, most consumer touchscreens could only sense one point of contact at a time, and few had the capability to sense how hard one is touching. This changed with the commercialization of multi-touch technology, and with the Apple Watch being released with a force-sensitive display in April 2015.
2015 BISTATE PROJECTED CAPACITANCE - When diagonal wiring is used for a projected capacitance touchscreen in mutual capacitance mode, each I/O line must be capable of switching between two states (bistate): an output some of the time and an input at other times. I/Os are inputs most of the time, but, once every scan, each I/O has to take its turn at being an output, with the remaining input I/Os sensing any signals it generates. The I/O lines, therefore, may have to change from input to output, and vice versa, many times a second. This design won an Electronics Weekly Elektra Award in 2017.
2021 FIRST "INFINITELY WIDE" TOUCHSCREEN PATENT - With standard x/y array touchscreens, the length of the horizontal sensing elements increases as the width of the touchscreen increases.
Eventually, a limit is reached where the resistance becomes so great that the touchscreen can no longer function properly. The patent describes how the use of diagonal elements ensures that the length of any element never exceeds 1.414 times the height of the touchscreen, no matter how wide it is. This could be reduced to 1.15 times the height if opposing diagonal elements intersect at 60 degrees instead of 90 degrees. The elongated touchscreen could be controlled by a single processor, or the distant ends could be controlled totally independently by different processors, linked by a synchronizing processor in the overlapping middle section. The number of unique intersections could be increased by allowing individual sensing elements to run in two opposing directions.
Technologies
There are a number of touchscreen technologies, with different methods of sensing touch.
Resistive
A resistive touchscreen panel is composed of several thin layers, the most important of which are two transparent electrically resistive layers facing each other with a thin gap between them. The top layer (the layer that is touched) has a coating on its underside; just beneath it is a similar resistive layer on top of its substrate. One layer has conductive connections along its sides, while the other has them along the top and bottom. A voltage is applied to one layer and sensed by the other. When an object, such as a fingertip or stylus tip, presses down onto the outer surface, the two layers touch and become connected at that point. The panel then behaves as a pair of voltage dividers, one axis at a time. By rapidly switching between the layers, the position of pressure on the screen can be detected, as the sketch below illustrates. Resistive touch is used in restaurants, factories, and hospitals due to its high tolerance for liquids and contaminants. A major benefit of resistive-touch technology is its low cost. Additionally, resistive touchscreens may be used while wearing gloves, or with anything rigid as a finger substitute, as only sufficient pressure is necessary for the touch to be sensed. Disadvantages include the need to press down, and a risk of damage by sharp objects. Resistive touchscreens also suffer from poorer contrast, due to having additional reflections (i.e. glare) from the layers of material placed over the screen. This type of touchscreen has been used by Nintendo in the DS family, the 3DS family, and the Wii U GamePad. Due to their simple structure, with very few inputs, resistive touchscreens are mainly used for single-touch operation, although some two-touch versions (often described as multi-touch) are available. However, there are some true multi-touch resistive touchscreens available. These need many more inputs, and rely on x/y multiplexing to keep the I/O count down. One example of a true multi-touch resistive touchscreen can detect 10 fingers at the same time. It has 80 I/O connections, possibly split as 34 x inputs and 46 y outputs, forming a standard 3:4 aspect ratio touchscreen with 1564 x/y intersecting touch-sensing nodes. Tri-state multiplexing could have been used instead of x/y multiplexing; this would have reduced the I/O count from 80 to 60 while creating 1770 unique touch-sensing nodes, with no need for a bezel, and with all inputs coming from just one edge.
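As a concrete illustration of the voltage-divider behaviour described above, here is a minimal, self-contained Python sketch; the supply voltage and ADC resolution are illustrative assumptions, not taken from any particular controller. Driving one layer sets up a linear voltage gradient across it, and the other layer samples that gradient at the contact point, so the measured fraction of the supply voltage is the coordinate on that axis.

VCC = 3.3        # volts across the driven layer (assumed)
ADC_MAX = 4095   # 12-bit ADC on the sensing layer (assumed)

def adc_reading(fraction_across):
    """Simulated ADC count for a touch at the given fraction of the layer."""
    return round(fraction_across * ADC_MAX)   # linear gradient -> linear count

def decode(adc_value):
    """Invert the divider: ADC count back to a 0..1 position on one axis."""
    return adc_value / ADC_MAX

# One full sample alternates axes: drive the X layer and sense with Y, then swap.
x = decode(adc_reading(0.25))   # touch 25% of the way from the left edge
y = decode(adc_reading(0.60))   # touch 60% of the way from the bottom edge
print(x, y)                     # approximately 0.25 and 0.6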
Surface acoustic wave
Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the touchscreen panel. When the panel is touched, a portion of the wave is absorbed. The change in the ultrasonic waves is processed by the controller to determine the position of the touch event. Surface acoustic wave touchscreen panels can be damaged by outside elements, and contaminants on the surface can interfere with the functionality of the touchscreen. SAW devices have a wide range of applications, including delay lines, filters, correlators and DC to DC converters.
Capacitive touchscreen
A capacitive touchscreen panel consists of an insulator, such as glass, coated with a transparent conductor, such as indium tin oxide (ITO). As the human body is also an electrical conductor, touching the surface of the screen results in a distortion of the screen's electrostatic field, measurable as a change in capacitance. Different technologies may be used to determine the location of the touch; the location is then sent to the controller for processing. Some touchscreens use silver instead of ITO, as ITO causes several environmental problems due to the use of indium. The controller is typically a complementary metal–oxide–semiconductor (CMOS) application-specific integrated circuit (ASIC) chip, which in turn usually sends the signals to a CMOS digital signal processor (DSP) for processing. Unlike a resistive touchscreen, some capacitive touchscreens cannot be used to detect a finger through electrically insulating material, such as gloves. This disadvantage especially affects usability in consumer electronics, such as touch tablet PCs and capacitive smartphones in cold weather, when people may be wearing gloves. It can be overcome with a special capacitive stylus, or a special-application glove with an embroidered patch of conductive thread allowing electrical contact with the user's fingertip. A low-quality switching-mode power supply unit with an accordingly unstable, noisy voltage may temporarily interfere with the precision, accuracy and sensitivity of capacitive touch screens. Some capacitive display manufacturers continue to develop thinner and more accurate touchscreens. Those for mobile devices are now being produced with 'in-cell' technology, such as in Samsung's Super AMOLED screens, that eliminates a layer by building the capacitors inside the display itself. This type of touchscreen reduces the visible distance between the user's finger and what the user is touching on the screen, reducing the thickness and weight of the display, which is desirable in smartphones. A simple parallel-plate capacitor has two conductors separated by a dielectric layer. Most of the energy in this system is concentrated directly between the plates, but some of it spills over into the area outside the plates; the electric field lines associated with this effect are called fringing fields. Part of the challenge of making a practical capacitive sensor is to design a set of printed circuit traces which direct fringing fields into an active sensing area accessible to a user. A parallel-plate capacitor is not a good choice for such a sensor pattern. Placing a finger near fringing electric fields adds conductive surface area to the capacitive system. The additional charge storage capacity added by the finger is known as finger capacitance, or CF. The capacitance of the sensor without a finger present is known as parasitic capacitance, or CP.
Surface capacitance
In this basic technology, only one side of the insulator is coated with a conductive layer, and a small voltage is applied to the layer, resulting in a uniform electrostatic field.
When a conductor, such as a human finger, touches the uncoated surface, a capacitor is dynamically formed. The sensor's controller can determine the location of the touch indirectly from the change in capacitance as measured from the four corners of the panel. As it has no moving parts, it is moderately durable, but it has limited resolution, is prone to false signals from parasitic capacitive coupling, and needs calibration during manufacture. It is therefore most often used in simple applications such as industrial controls and kiosks. Although some standard capacitance detection methods are projective, in the sense that they can be used to detect a finger through a non-conductive surface, they are very sensitive to fluctuations in temperature, which expand or contract the sensing plates and cause fluctuations in their capacitance. These fluctuations result in a lot of background noise, so a strong finger signal is required for accurate detection. This limits applications to those where the finger directly touches the sensing element or is sensed through a relatively thin non-conductive surface.
Projected capacitance
Projected capacitive touch (PCT; also PCAP) technology is a variant of capacitive touch technology in which sensitivity to touch, accuracy, resolution and speed of touch have been greatly improved by the use of a simple form of artificial intelligence. This intelligent processing enables finger sensing to be projected, accurately and reliably, through very thick glass and even double glazing. Projected capacitance is a method for accurately detecting and tracking a particular variable, or group of variables (such as fingers), by: a) using a simple form of artificial intelligence to develop a profile of the capacitance-changing effects expected for that variable, b) specifically looking for such changes, and c) eliminating measured capacitance changes that do not match this profile, attributable to global variables (such as temperature/humidity, dirt build-up, electrical noise) and local variables (such as rain drops, partial shade and hands/elbows). Capacitance sensors may be discrete, possibly (but not necessarily) in a regular array, or they may be multiplexed. In practice, various assumptions are made, such as: a) fingers will not be touching the screen at "power-up", b) a finger will not be on the same spot for more than a fixed period of time, and c) fingers will not be touching everywhere at the same time.
a) If a finger IS touching the screen at "power-up", then, as soon as it is removed, a large "anti-touch" capacitance change will be detected. This signals the processor to reset the touch thresholds and store new "no touch" values for each input.
b) Long-term drift compensation is used to gradually raise or lower these thresholds (trending eventually to "no-touch"). This compensates for global changes in temperature and humidity. It also eliminates the possibility of any position appearing to be touched for too long, due to some "non-finger" event. This might be caused, for example, by a wet leaf landing on, and sticking to, the screen.
c) When a decision is to be made about the validity of one or more touches, assumption c) means that the average value of the changes measured on some of the inputs with the smallest change can be used to "offset" the touch thresholds of the inputs in contention. This minimizes the influence of hands and arms.
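A minimal sketch of how the long-term drift compensation of assumption b) might look in code (Python; the constants and names are illustrative assumptions, not from any real controller): each input's "no touch" baseline creeps slowly toward the current reading whenever no touch is declared, so gradual environmental change is absorbed while abrupt, finger-like changes still cross the threshold.

DRIFT = 0.001    # fraction of the gap closed per scan: very slow re-zeroing
THRESHOLD = 50   # counts above baseline that qualify as a touch

def scan_input(baseline, reading):
    """Return (touched, new_baseline) for one input on one scan."""
    touched = (reading - baseline) > THRESHOLD
    if not touched:
        # Trend the threshold toward the current "no touch" level.
        baseline += DRIFT * (reading - baseline)
    return touched, baseline

# Humidity slowly raises every reading; the baseline follows and no false
# touch is reported, but a sudden jump of about 80 counts is still detected.
b = 1000.0
for reading in [1000, 1002, 1004, 1006, 1086]:
    touched, b = scan_input(b, reading)
    print(reading, touched)   # only the final, abrupt change reads True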
By these and other means, the processor constantly fine-tunes the touch thresholds and tweaks the touch sensitivity of each input. This enables very small changes, caused only by fingers, to be accurately detected through thick overlays, or several centimeters of air. When a conductive object, such as a finger, comes into contact with a PCT panel, it distorts the local electrostatic field at that point. This is measurable as a change in capacitance. If a finger bridges the gap between two of the "tracks", the charge field is further interrupted and detected by the controller. The capacitance can be changed and measured at every individual point on the grid. This system is able to accurately track touches. Due to the top layer of a PCT being glass, it is sturdier than less-expensive resistive touch technology. Unlike traditional capacitive touch technology, it is possible for a PCT system to sense a passive stylus or gloved fingers. Moisture on the surface of the panel, high humidity, or collected dust are not a problem, especially with 'fine wire' based touchscreens, because wire-based touchscreens have a very low 'parasitic' capacitance and there is a greater distance between neighboring conductors. Projected capacitance has "long-term drift compensation" built in. This minimizes the effects of slowly changing environmental factors, such as the build-up of dirt and effects caused by changes in the weather. Drops of rain have little effect, but flowing water, and especially flowing sea water (due to its electrical conductivity), can cause short-term issues. A high frequency (RF) signal, possibly from 100 kHz to 1 MHz, is imposed on one track at a time, and appropriate capacitance measurements are taken (as described later in this article). This process is repeated until all the tracks have been sampled. Conductive tracks are often transparent, one example being indium tin oxide (ITO), a transparent electrical conductor, but they can also be made of very fine, non-transparent metal mesh or individual fine wires.
Projected capacitance touchscreen layout
Layout can vary depending on whether a single finger is to be detected or multiple fingers. In order to detect many fingers at the same time, some modern PCT touch screens are composed of thousands of discrete keys, each key being linked individually to the edge of the touch screen. This is enabled by etching an electrode grid pattern in a transparent conductive coating on one side of a sheet of glass or plastic. To reduce the number of input tracks, most PCT touch screens use multiplexing. This enables, for example, 100 (n) discrete key inputs to be reduced to 20 when using x/y multiplexing, or 15 if using bistate or tri-state multiplexing (the arithmetic is sketched after this section). Capacitance multiplexing requires a grid of intersecting, but electrically isolated, conductive tracks. This can be achieved in many different ways. One way is by creating parallel conductive tracks on one side of a plastic film, and similar parallel tracks on the other side, orientated at 90 degrees to the first side. Another way is to etch tracks on separate sheets of glass, and join these sheets, with tracks at right angles to each other, face to face, using a thin non-conductive adhesive interlayer. A simple alternative is to embed an x/y or diagonal grid of very fine, insulation-coated conductive wires in a thin polyester film. This film can then be attached to one side of a sheet of glass, for operation through the glass.
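The I/O arithmetic quoted above (100 keys from 20 or 15 lines) is easy to check; a short Python sketch follows, with names chosen for illustration only.

from math import ceil, comb, sqrt

keys = 100

# x/y multiplexing: keys on a square grid, one line per row and per column.
side = ceil(sqrt(keys))
print(2 * side)            # 20 lines for a 10 x 10 grid of 100 keys

# bistate / tri-state multiplexing: any unordered pair of lines addresses a
# key, so n lines reach C(n, 2) intersections.
n = 2
while comb(n, 2) < keys:
    n += 1
print(n, comb(n, 2))       # 15 lines give 105 intersections, enough for 100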
Touch resolution and the number of fingers that can be detected simultaneously are determined by the number of cross-over points (x × y). If x + y = n, then the maximum possible number of cross-overs is (n/2)².
Mutual capacitance
An electrical signal imposed on one electrical conductor can be capacitively "sensed" by another electrical conductor that is in very close proximity but electrically isolated, a feature that is exploited in mutual capacitance touchscreens. In a mutual capacitive sensor array, the "mutual" crossing of one electrical conductor with another electrical conductor, with no direct electrical contact, forms a capacitor (see Construction, below). High frequency voltage pulses are applied to these conductors, one at a time. These pulses capacitively couple to every conductor that intersects them. Bringing a finger or conductive stylus close to the surface of the sensor changes the local electrostatic field, which in turn reduces the capacitance between the intersecting conductors. Any significant change in the strength of the sensed signal is used to determine whether a finger is present at an intersection. The capacitance change at every intersection on the grid can be measured to accurately determine one or more touch locations, as the sketch below illustrates. Mutual capacitance allows multi-touch operation where multiple fingers, palms or styli can be accurately tracked at the same time. The greater the number of intersections, the better the touch resolution and the more independent fingers that can be detected. This indicates a distinct advantage of diagonal wiring over standard x/y wiring, since diagonal wiring creates nearly twice the number of intersections: a 30 I/O, 16×14 x/y array, for example, would have 224 of these intersections/capacitors, while a 30 I/O diagonal lattice array could have 435 intersections. Each trace of an x/y mutual capacitance array has only one function: it is either an input or an output. The horizontal traces may be transmitters while the vertical traces are sensors, or vice versa. The traces in a diagonal mutual capacitance array, however, have to continuously change their functionality "on the fly", by a process called bistate or tri-state multiplexing. Some of the time a trace will be an output; at another time it will be an input or "grounded". A look-up table can be used to simplify this process. By slightly distorting the conductors in an "n" I/O diagonal matrix, the equivalent of an (n-1) by (n/2) array is formed. After address decoding, this can then be processed as a standard x/y array.
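A simulated sketch of that scan in Python (the grid size matches the 16×14 example above, while the baseline, threshold and finger positions are invented for illustration): each row conductor is pulsed in turn, every column is sensed, and any intersection whose coupling drops sharply is flagged as touched.

ROWS, COLS = 16, 14      # the 30 I/O x/y array from the example above
BASELINE = 100           # arbitrary coupling units per intersection
DROP_AT_TOUCH = 40       # how much a finger reduces the coupling (assumed)
THRESHOLD = 20

fingers = {(3, 5), (9, 11)}   # simulated touch positions

def sense(row, col):
    """Pretend measurement: a nearby finger weakens the mutual coupling."""
    return BASELINE - DROP_AT_TOUCH if (row, col) in fingers else BASELINE

touches = [(r, c)
           for r in range(ROWS)   # pulse each row conductor in turn...
           for c in range(COLS)   # ...and sense every column conductor
           if BASELINE - sense(r, c) > THRESHOLD]
print(touches)   # [(3, 5), (9, 11)]: both fingers located, with no ghosts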
Self-capacitance
Self-capacitance sensors can have the same x/y or diagonal grid layout as mutual capacitance sensors, but with self-capacitance all the traces usually operate independently, with no interaction between different traces. Along with several other methods, the extra capacitive load of a finger on a trace electrode may be measured by a current meter, or by the change in frequency of an RC oscillator. Traces are sensed one after the other until all have been sensed. A finger may be detected anywhere along the whole length of a trace (even "off-screen"), but there is no indication of where along that trace the finger is. If, however, a finger is also detected along another, intersecting trace, then the finger position is assumed to be at the intersection of the two traces. This allows for the speedy and accurate detection of a single finger. There is, however, ambiguity if more than one finger is to be detected. Two fingers may have four possible detection positions, only two of which are true, the other two being "ghosts" (see the sketch at the end of this section). However, by selectively de-sensitizing any touch-points in contention, conflicting results are easily resolved. This enables self-capacitance to be used for two-touch operation. Although mutual capacitance is simpler for multi-touch, multi-touch can be achieved using self-capacitance. If the trace being sensed is intersected by another trace that has a "desensitizing" signal on it, then that intersection is insensitive to touch. By imposing such a "desensitizing" signal on all the intersecting traces except one along the trace being sensed, just a short length of that trace will be sensitive to touch. By selecting a sequence of these sensing sections along the trace, it is possible to determine the accurate position of multiple fingers along that one trace. This process can then be repeated for all the other traces until the whole screen has been scanned. Self-capacitive touch screen layers are used on mobile phones such as the Sony Xperia Sola, the Samsung Galaxy S4, Galaxy Note 3, Galaxy S5, and Galaxy Alpha. Self-capacitance is far more sensitive than mutual capacitance and is mainly used for single touch, simple gesturing and proximity sensing, where the finger does not even have to touch the glass surface. Mutual capacitance is mainly used for multi-touch applications. Many touchscreen manufacturers use both self and mutual capacitance technologies in the same product, thereby combining their individual benefits.
Self-capacitance vs. mutual capacitance
When using a 16×14 x/y array to determine the position of a single finger by self-capacitance, 30 (i.e. 16 + 14) capacitance measurements are required; the finger is determined to be at the intersection of the strongest of the 16 x measurements and the strongest of the 14 y measurements. When using mutual capacitance, however, every intersection may have to be measured, making a total of 224 (i.e. 16 × 14) capacitance measurements. In this example, therefore, mutual capacitance requires nearly 7 times as many measurements as self-capacitance to detect the position of a finger. Many applications, such as selecting items from a list or menu, require just one finger, and self-capacitance is eminently suitable for such applications, due to its relatively low processing load, simpler processing method, ability to sense through thick dielectric materials or air, and the possibility of reducing the number of inputs required through repeat track layouts. For many other applications, however, such as expanding or contracting items on the screen and other gestures, two or more fingers need to be tracked. Two fingers can be detected and tracked accurately using self-capacitance, but this involves a few extra calculations and 4 extra capacitance measurements to eliminate the 2 "ghost" positions. One method is to undertake a full self-capacitance scan to detect the 4 ambiguous finger positions, then use just 4 targeted mutual capacitance measurements to discover which two of the 4 positions are valid and which 2 are not. This gives a total of 34 measurements, still far fewer than the 224 required when using mutual capacitance alone. With 3 fingers, 9 disambiguations are required; with 4 fingers, 16; and so on. With more fingers, it may be decided that the process of disambiguation is too unwieldy; if sufficient processing power is available, the switch can then be made to full mutual capacitance scanning.
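The ghost problem and the measurement arithmetic above can be made concrete with a short Python sketch; the trace indices and "real" finger positions are invented for illustration.

rows_touched = [3, 9]    # x traces with raised self-capacitance (16 measured)
cols_touched = [5, 11]   # y traces with raised self-capacitance (14 measured)

# Self-capacitance alone yields four candidate intersections...
candidates = [(r, c) for r in rows_touched for c in cols_touched]
print(candidates)   # [(3, 5), (3, 11), (9, 5), (9, 11)]: two real, two ghosts

# ...so four targeted mutual-capacitance measurements settle which pair is
# real: 16 + 14 self measurements plus 4 mutual ones = 34 in total, versus
# the 16 * 14 = 224 needed if every intersection were measured mutually.
def mutual_touched(pos):
    return pos in {(3, 5), (9, 11)}   # the simulated real finger positions

real = [p for p in candidates if mutual_touched(p)]
print(real)         # [(3, 5), (9, 11)]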
Use of stylus on capacitive screens
Capacitive touchscreens do not necessarily need to be operated by a finger, but until recently the special styli required could be quite expensive to purchase. The cost of this technology has fallen greatly in recent years, and capacitive styli are now widely available for a nominal charge, often given away free with mobile accessories. These consist of an electrically conductive shaft with a soft conductive rubber tip, thereby resistively connecting the fingers to the tip of the stylus.
Infrared grid
An infrared touchscreen uses an array of x-y infrared LED and photodetector pairs around the edges of the screen to detect a disruption in the pattern of LED beams. These LED beams cross each other in vertical and horizontal patterns, which helps the sensors pick up the exact location of the touch. A major benefit of such a system is that it can detect essentially any opaque object, including a finger, gloved finger, stylus or pen. It is generally used in outdoor applications and POS systems that cannot rely on a conductor (such as a bare finger) to activate the touchscreen. Unlike capacitive touchscreens, infrared touchscreens do not require any patterning on the glass, which increases the durability and optical clarity of the overall system. Infrared touchscreens are sensitive to dirt and dust that can interfere with the infrared beams, and they suffer from parallax on curved surfaces and from accidental presses when the user hovers a finger over the screen while searching for the item to be selected.
Infrared acrylic projection
A translucent acrylic sheet is used as a rear-projection screen to display information. The edges of the acrylic sheet are illuminated by infrared LEDs, and infrared cameras are focused on the back of the sheet. Objects placed on the sheet are detectable by the cameras. When the sheet is touched by the user, frustrated total internal reflection results in leakage of infrared light which peaks at the points of maximum pressure, indicating the user's touch location. Microsoft's PixelSense tablets use this technology.
Optical imaging
Optical touchscreens are a relatively modern development in touchscreen technology, in which two or more image sensors (such as CMOS sensors) are placed around the edges (mostly the corners) of the screen. Infrared backlights are placed in the sensors' field of view on the opposite side of the screen. A touch blocks some of the light from the sensors, and the location and size of the touching object can be calculated (see visual hull). This technology is growing in popularity due to its scalability, versatility, and affordability for larger touchscreens.
Dispersive signal technology
Introduced in 2002 by 3M, this system detects a touch by using sensors to measure the piezoelectricity in the glass. Complex algorithms interpret this information and provide the actual location of the touch. The technology is unaffected by dust and other outside elements, including scratches. Since there is no need for additional elements on screen, it also claims to provide excellent optical clarity. Any object can be used to generate touch events, including gloved fingers. A downside is that after the initial touch, the system cannot detect a motionless finger. However, for the same reason, resting objects do not disrupt touch recognition.
Acoustic pulse recognition
The key to this technology is that a touch at any one position on the surface generates a sound wave in the substrate, which produces a unique combined signal as measured by three or more tiny transducers attached to the edges of the touchscreen. The digitized signal is compared to a list corresponding to every position on the surface, determining the touch location. A moving touch is tracked by rapid repetition of this process. Extraneous and ambient sounds are ignored, since they do not match any stored sound profile. The technology differs from other sound-based technologies by using a simple look-up method rather than expensive signal-processing hardware (a sketch of the idea follows below). As with the dispersive signal technology system, a motionless finger cannot be detected after the initial touch. However, for the same reason, touch recognition is not disrupted by resting objects. The technology was created by SoundTouch Ltd in the early 2000s, as described by the patent family EP1852772, and introduced to the market by Tyco International's Elo division in 2006 as Acoustic Pulse Recognition. The touchscreen used by Elo is made of ordinary glass, giving good durability and optical clarity. The technology usually retains accuracy with scratches and dust on the screen, and it is also well suited to displays that are physically larger.
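The look-up idea can be sketched in a few lines of Python; the stored profiles here are toy four-sample vectors, purely illustrative. The digitized touch signature is compared against a table of per-position profiles, and the closest match wins.

profiles = {                    # position -> stored acoustic signature
    (0, 0): [0.9, 0.1, 0.3, 0.2],
    (0, 1): [0.2, 0.8, 0.4, 0.1],
    (1, 0): [0.3, 0.2, 0.9, 0.4],
}

def locate(signature):
    """Return the position whose stored profile is nearest (least squares)."""
    return min(profiles, key=lambda pos: sum(
        (a - b) ** 2 for a, b in zip(profiles[pos], signature)))

print(locate([0.25, 0.75, 0.45, 0.15]))   # (0, 1)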
Construction
There are several principal ways to build a touchscreen. The key goals are to recognize one or more fingers touching a display, to interpret the command that this represents, and to communicate the command to the appropriate application.
Multi-touch projected capacitance screens
A very simple, low-cost way to make a multi-touch projected capacitance touchscreen is to sandwich an x/y or diagonal matrix of fine, insulation-coated copper or tungsten wires between two layers of clear polyester film. This creates an array of proximity-sensing micro-capacitors. One of these micro-capacitors every 10 to 15 mm is probably sufficient spacing if fingers are relatively widely spaced apart, but very high discrimination multi-touch may need a micro-capacitor every 5 or 6 mm. A similar system can be used for ultra-high-resolution sensing, such as fingerprint sensing; fingerprint sensors require a micro-capacitor spacing of about 44 to 50 microns. The touchscreens can be manufactured at home, using readily available tools and materials, or industrially. First, a "continuous-trace" wiring pattern is generated using a simple CAD system. The wire is threaded through a plotter pen and plotted directly, as one continuous wire, onto a thin sheet of adhesive-coated, clear polyester film (such as "window film"), using a standard, low-cost x/y pen plotter. After plotting, the single wire is gently cut into individual sections with a sharp scalpel, taking care not to damage the film. A second identical polyester film is laminated over the first film. The resulting touchscreen film is then trimmed to shape, and a connector is retro-fitted. The end product is extremely flexible, being about 75 microns thick (about the thickness of a human hair). It can even be creased without loss of functionality. The film can be mounted on, or behind, non-conducting (or slightly conducting) surfaces. Usually, it is mounted behind a sheet of glass up to 12 mm thick (or more), for sensing through the glass. This method is suitable for a wide range of touchscreen sizes, from very small to several meters wide, or even wider if using a diagonally wired matrix. The end product is environmentally friendly, as it uses recyclable polyester and minute quantities of copper wire. The film could even have a second life as another product, such as drawing film or wrapping film. Unlike some other touchscreen technologies, no complex processes or rare materials are used. For non-touchscreen applications, other plastics (e.g. vinyl or ABS) may be used. The film can be blow molded or heat formed into complex three-dimensional shapes, such as bottles, globes or car dashboards. Alternatively, the wires can be embedded in thick plastic such as fiberglass or carbon fiber body panels.
Single touch resistive touchscreens
In the resistive approach, which used to be the most popular technique, there are typically four layers:
Top polyester-coated layer with a transparent metallic-conductive coating on the bottom
Adhesive spacer
Glass layer coated with a transparent metallic-conductive coating on the top
Adhesive layer on the backside of the glass for mounting
When a user touches the surface, the system records the change in the electric current that flows through the display.
Dispersive signal
Dispersive signal technology measures the piezoelectric effect, the voltage generated when mechanical force is applied to a material, which occurs when a strengthened glass substrate is touched.
Infrared
There are two infrared-based approaches. In one, an array of sensors detects a finger touching or almost touching the display, thereby interrupting infrared light beams projected over the screen. In the other, bottom-mounted infrared cameras record heat from screen touches. In each case, the system determines the intended command based on the controls showing on the screen at the time and the location of the touch.
Development
The development of multi-touch screens facilitated the tracking of more than one finger on the screen; thus, operations that require more than one finger are possible. These devices also allow multiple users to interact with the touchscreen simultaneously. With the growing use of touchscreens, the cost of touchscreen technology is routinely absorbed into the products that incorporate it and is nearly eliminated. Touchscreen technology has demonstrated reliability and is found in airplanes, automobiles, gaming consoles, machine control systems, appliances, and handheld display devices including cellphones; the touchscreen market for mobile devices was projected to produce US$5 billion by 2009. The ability to point accurately on the screen itself is also advancing with the emerging graphics tablet-screen hybrids. Polyvinylidene fluoride (PVDF) plays a major role in this innovation due to its high piezoelectric properties, which allow the tablet to sense pressure, making such things as digital painting behave more like paper and pencil. TapSense, announced in October 2011, allows touchscreens to distinguish what part of the hand was used for input, such as the fingertip, knuckle and fingernail. This could be used in a variety of ways, for example, to copy and paste, to capitalize letters, to activate different drawing modes, etc.
Ergonomics and usage
For touchscreens to be effective input devices, users must be able to accurately select targets and avoid accidental selection of adjacent targets. The design of touchscreen interfaces should reflect the technical capabilities of the system, ergonomics, cognitive psychology and human physiology.
Guidelines for touchscreen designs were first developed in the 2000s, based on early research and actual use of older systems, typically using infrared grids, which were highly dependent on the size of the user's fingers. These guidelines are less relevant for the bulk of modern touch devices, which use capacitive or resistive touch technology. From the mid-2000s, makers of operating systems for smartphones have promulgated standards, but these vary between manufacturers and allow for significant variation in size based on technology changes, so they are unsuitable from a human factors perspective. Much more important is the accuracy humans have in selecting targets with their finger or a pen stylus. The accuracy of user selection varies by position on the screen: users are most accurate at the center, less so at the left and right edges, and least accurate at the top edge and especially the bottom edge. The R95 accuracy (the radius required for 95% target accuracy) is smallest at the center of the screen and largest in the lower corners. Users are subconsciously aware of this, and take more time to select targets which are smaller or at the edges or corners of the touchscreen. This user inaccuracy is a result of parallax, visual acuity and the speed of the feedback loop between the eyes and fingers. The precision of the human finger alone is much higher than this, so when assistive technologies are provided, such as on-screen magnifiers, users can move their finger (once in contact with the screen) with precision as small as 0.1 mm (0.004 in).
Hand position, digit used and switching
Users of handheld and portable touchscreen devices hold them in a variety of ways, and routinely change their method of holding and selection to suit the position and type of input. There are four basic types of handheld interaction:
Holding at least in part with both hands, tapping with a single thumb
Holding with two hands and tapping with both thumbs
Holding with one hand, tapping with the finger (or rarely, thumb) of another hand
Holding the device in one hand, and tapping with the thumb from that same hand
Use rates vary widely. While two-thumb tapping is encountered rarely (1–3%) for many general interactions, it is used for 41% of typing interactions. In addition, devices are often placed on surfaces (desks or tables), and tablets especially are used in stands. The user may point, select or gesture in these cases with their finger or thumb, and vary their use of these methods.
Combined with haptics
Touchscreens are often used with haptic response systems. A common example of this technology is the vibratory feedback provided when a button on the touchscreen is tapped. Haptics are used to improve the user's experience with touchscreens by providing simulated tactile feedback, and can be designed to react immediately, partly countering on-screen response latency. Research from the University of Glasgow (Brewster, Chohan, and Brown, 2007; and more recently Hogan) demonstrates that touchscreen users reduce input errors (by 20%), increase input speed (by 20%), and lower their cognitive load (by 40%) when touchscreens are combined with haptics or tactile feedback. In addition, a study conducted in 2013 by Boston College explored the effects that touchscreens' haptic stimulation had on triggering psychological ownership of a product. The research concluded that a touchscreen's ability to incorporate high amounts of haptic involvement resulted in customers feeling a stronger sense of ownership of the products they were designing or buying.
The study also reported that consumers using a touchscreen were willing to accept a higher price point for the items they were purchasing. Customer service Touchscreen technology has become integrated into many aspects of the customer service industry in the 21st century. The restaurant industry is a good example of touchscreen implementation into this domain. Chain restaurants such as Taco Bell, Panera Bread, and McDonald's offer touchscreens as an option when customers are ordering items off the menu. While the addition of touchscreens is a development for this industry, customers may choose to bypass the touchscreen and order from a traditional cashier. To take this a step further, a restaurant in Bangalore has attempted to completely automate the ordering process. Customers sit down at a table embedded with touchscreens and order off an extensive menu. Once the order is placed, it is sent electronically to the kitchen. These types of touchscreens fit under the Point of Sale (POS) systems mentioned in the lead section. "Gorilla arm" Extended use of gestural interfaces without the ability of the user to rest their arm is referred to as "gorilla arm". It can result in fatigue, and even repetitive stress injury when routinely used in a work setting. Certain early pen-based interfaces required the operator to work in this position for much of the workday. Allowing the user to rest their hand or arm on the input device or a frame around it is a solution for this in many contexts. This phenomenon is often cited as an example of movements to be minimized by proper ergonomic design. Unsupported touchscreens are still fairly common in applications such as ATMs and data kiosks, but are not an issue as the typical user only engages for brief and widely spaced periods. Fingerprints Touchscreens can suffer from the problem of fingerprints on the display. This can be mitigated by the use of materials with optical coatings designed to reduce the visible effects of fingerprint oils. Most modern smartphones have oleophobic coatings, which lessen the amount of oil residue. Another option is to install a matte-finish anti-glare screen protector, which creates a slightly roughened surface that does not easily retain smudges. Glove touch Capacitive touchscreens rarely work when the user wears gloves. The thickness of the glove and the material it is made of largely determine whether the touchscreen can register a touch through it. Some devices have a mode which increases the sensitivity of the touchscreen. This allows the touchscreen to be used more reliably with gloves, but can also result in unreliable and phantom inputs. However, thin gloves, such as medical gloves, can still be used with touchscreens; this is mostly relevant to medical technology and machines.
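The glove-mode behaviour just described ultimately comes down to a detection threshold. The sketch below is a deliberately simplified illustration of that trade-off, with invented threshold values; real capacitive controllers use far more elaborate filtering and calibration.

```python
# A simple model of "glove mode": a capacitive controller registers a
# touch when the measured capacitance change exceeds a threshold, and
# glove mode simply lowers that threshold. Values are illustrative,
# not taken from any specific controller.

NORMAL_THRESHOLD = 100   # arbitrary units of capacitance change
GLOVE_THRESHOLD = 40     # more sensitive, but noise may now register

def touch_detected(delta_capacitance: float, glove_mode: bool) -> bool:
    threshold = GLOVE_THRESHOLD if glove_mode else NORMAL_THRESHOLD
    return delta_capacitance >= threshold

print(touch_detected(60, glove_mode=False))  # False: the glove blocks the signal
print(touch_detected(60, glove_mode=True))   # True: lowered threshold passes it
print(touch_detected(45, glove_mode=True))   # True: but this could be a phantom touch
```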
Erythritol
Erythritol is an organic compound, the naturally occurring achiral meso four-carbon sugar alcohol (or polyol). It is the reduced form of either D- or L-erythrose and one of the two reduced forms of erythrulose. It is used as a food additive and sugar substitute. It is synthesized from corn using enzymes and fermentation. Its formula is C4H10O4, or HO(CH2)(CHOH)2(CH2)OH. Erythritol is 60–70% as sweet as table sugar. However, erythritol is almost completely noncaloric and does not affect blood sugar or cause tooth decay. Japanese companies pioneered the commercial development of erythritol as a sweetener in the 1990s. Etymology The name "erythritol" derives from the Greek word for the color red (erythros). That is the case even though erythritol is almost always found in the form of white crystals or powder, and chemical reactions do not turn it red. The name "erythritol" is adapted from a closely related compound, erythrin, which turns red upon oxidation. History Erythritol was discovered in 1848 by the Scottish chemist John Stenhouse and first isolated in 1852. Starting from 1945, American chemists applied newly developed techniques of chromatography to sugarcane juice and blackstrap molasses, finding in 1950 that erythritol was present in molasses fermented by yeast. It was first approved and marketed as a sweetener in Japan in 1990, and in the US in 1997. Occurrence Erythritol occurs naturally in some fruit and fermented foods. Uses Since 1990, erythritol has had a history of safe use as a sweetener and flavor-enhancer in food and beverage products and is approved for use by government regulatory agencies in more than 60 countries. Beverage categories for its use are coffee and tea, liquid dietary supplements, juice blends, soft drinks, and flavored water product variations, with foods including confections, biscuits and cookies, tabletop sweeteners, and sugar-free chewing gum. The mild sweetness of erythritol allows for a volume-for-volume replacement of sugar, whereas sweeter sugar substitutes need fillers that result in a noticeably different texture in baked products. Absorption and excretion Erythritol is absorbed rapidly into the blood, with peak amounts occurring in under two hours; the majority of an oral dose (80 to 90%) is excreted unchanged in the urine within 24 hours. Safety In 2023, the European Food Safety Authority reassessed the safety of erythritol and lowered the recommended daily intake limit to 0.5 grams per kg body weight, which equates to 35 g for an average adult (70 kg). The lower limit was set to "safeguard against its laxative effect and to mitigate against long-term effects, such as electrolyte imbalance arising from prolonged exposure to erythritol-induced diarrhea." Previously, in 2015, scientists assessed doses of erythritol at which symptoms of mild gastrointestinal upset occurred, such as nausea, excess flatus, abdominal bloating or pain, and increased stool frequency. At a content of 1.6% in beverages, it was not considered to have a laxative effect. The upper limit of tolerance was 0.78 and 0.71 g/kg body weight in adults and children, respectively. In the United States, erythritol is among several sugar alcohols that are generally recognized as safe (GRAS) for food manufacturing. Dietary and metabolic aspects Caloric value and labeling Nutritional labeling of erythritol in food products varies from country to country. Some places, such as Japan and the European Union (EU), label it as zero-calorie.
Under Food and Drug Administration (FDA) labeling requirements in the United States, erythritol has a caloric value of 0.2 calories per gram (95% less than sugar and other carbohydrates). Human digestion In the body, most erythritol is absorbed into the bloodstream in the small intestine and then for the most part excreted unchanged in the urine. About 10% enters the colon. In small doses, erythritol does not normally cause the laxative effects, gas, or bloating that are often experienced after consumption of other sugar alcohols (such as maltitol, sorbitol, xylitol, and lactitol). About 90% is absorbed before it enters the large intestine, and since erythritol is not digested by intestinal bacteria, the remaining 10% is excreted in the feces. Large doses can cause nausea, stomach rumbling, and watery feces. Doses greater than 0.66 g/kg body weight in males and greater than 0.8 g/kg body weight in females cause laxation, and higher doses can cause diarrhea. Rarely, erythritol can cause allergic hives (urticaria). Blood sugar and insulin levels Erythritol has no effect on blood sugar or blood insulin levels, and therefore may be used as a sugar substitute by people with type 2 diabetes. The glycemic index (GI) of erythritol is 0% of the GI for glucose and the insulin index (II) is 2% of the II for glucose. Oral bacteria Erythritol is tooth-friendly; since it cannot be metabolized by oral bacteria, it does not contribute to tooth decay. In addition, erythritol, like xylitol, has antibacterial effects against streptococci bacteria, reduces dental plaque, and may be protective against tooth decay. Manufacturing Erythritol is manufactured by enzymatic hydrolysis of the starch from corn to generate glucose. Glucose is then fermented with yeast or another fungus to produce erythritol. A genetically engineered form of the yeast Yarrowia lipolytica has been optimized for erythritol production by fermentation, using glycerol as a carbon source and high osmotic pressure, to increase yields up to 62%. Chemical properties Heat of solution Erythritol has a strong cooling effect (endothermic, or positive heat of solution) when it dissolves in water, which is often compared with the cooling effect of mint flavors. The cooling effect is present only when erythritol is not already dissolved in water, a situation that might be experienced in an erythritol-sweetened frosting, chocolate bar, chewing gum, or hard candy. The cooling effect of erythritol is very similar to that of xylitol and among the strongest cooling effects of all sugar alcohols. Erythritol has a pKa of 13.903 at 18 °C. Biological properties According to a 2014 study, erythritol functions as an insecticide toxic to the fruit fly Drosophila melanogaster, impairing motor ability and reducing longevity even when nutritive sugars were available. Erythritol is preferentially used by Brucella spp. The presence of erythritol in the placentas of goats, cattle, and pigs has been proposed as an explanation for the accumulation of Brucella bacteria found at these sites. Synonyms In the 19th and the early 20th centuries, several synonyms were in use for erythritol: erythrol, erythrite, erythroglucin, eryglucin, erythromannite and phycite. Zerose is a tradename for erythritol.
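As a quick worked example of the per-kilogram intake figures given in the Safety and Human digestion sections above, the snippet below scales those limits to a few body weights. The thresholds come from the text; the body weights are example inputs.

```python
# Scale the reported erythritol dose thresholds to example body weights.

EFSA_LIMIT_G_PER_KG = 0.5      # 2023 EFSA recommended daily limit
LAXATION_MALE = 0.66           # g/kg, laxation threshold reported for males
LAXATION_FEMALE = 0.8          # g/kg, laxation threshold reported for females

def daily_limit(body_weight_kg: float) -> float:
    """Recommended maximum daily erythritol intake in grams."""
    return EFSA_LIMIT_G_PER_KG * body_weight_kg

for weight in (50, 70, 90):
    print(f"{weight} kg adult: limit {daily_limit(weight):.0f} g/day, "
          f"laxation at ~{LAXATION_MALE * weight:.0f} g (male) / "
          f"~{LAXATION_FEMALE * weight:.0f} g (female)")
# For a 70 kg adult the limit is 35 g/day, matching the figure quoted above.
```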
Die casting
Die casting is a metal casting process that is characterized by forcing molten metal under high pressure into a mold cavity. The mold cavity is created using two hardened tool steel dies which have been machined into shape and work similarly to an injection mold during the process. Most die castings are made from non-ferrous metals, specifically zinc, copper, aluminium, magnesium, lead, pewter, and tin-based alloys. Depending on the type of metal being cast, a hot- or cold-chamber machine is used. The casting equipment and the metal dies represent large capital costs and this tends to limit the process to high-volume production. Manufacture of parts using die casting is relatively simple, involving only four main steps, which keeps the incremental cost per item low. It is especially suited for a large quantity of small- to medium-sized castings, which is why die casting produces more castings than any other casting process. Die castings are characterized by a very good surface finish (by casting standards) and dimensional consistency. History Die casting equipment was invented in 1838 for the purpose of producing movable type for the printing industry. The first die casting-related patent was granted in 1849 for a small hand-operated machine for the purpose of mechanized printing type production. In 1885 Ottmar Mergenthaler invented the Linotype machine, which cast an entire line of type as a single unit, using a die casting process. It nearly completely replaced setting type by hand in the publishing industry. The Soss die-casting machine, manufactured in Brooklyn, NY, was the first machine to be sold in the open market in North America. Other applications grew rapidly, with die casting facilitating the growth of consumer goods and appliances by greatly reducing the production cost of intricate parts in high volumes. In 1966, General Motors released the Acurad process. Cast metal The main die casting alloys are: zinc, aluminium, magnesium, copper, lead, and tin; although uncommon, ferrous die casting is also possible. Specific die casting alloys include zinc aluminium alloys; aluminium alloys conforming to, for example, Aluminum Association (AA) standards AA 380, AA 384, AA 386 and AA 390; and AZ91D magnesium. The following is a summary of the advantages of each alloy: Zinc: the easiest metal to cast; high ductility; high impact strength; easily plated; economical for small parts; promotes long die life. Aluminium: lightweight; high dimensional stability for very complex shapes and thin walls; good corrosion resistance; good mechanical properties; high thermal and electrical conductivity; retains strength at moderately high temperatures. Magnesium: the easiest metal to machine; excellent strength-to-weight ratio; lightest alloy commonly die cast. Copper: high hardness; high corrosion resistance; highest mechanical properties of the alloys die cast; excellent wear resistance; excellent dimensional stability; strength approaching that of steel parts. Silicon tombac: high-strength alloy made of copper, zinc and silicon. Often used as an alternative for investment cast steel parts. Lead and tin: high density; extremely close dimensional accuracy; used for special forms of corrosion resistance. Such alloys are not used in foodservice applications for public health reasons. Type metal, an alloy of lead, tin and antimony (with sometimes traces of copper), is used for casting hand-set type in letterpress printing and hot foil blocking.
Traditionally, type metal was cast in hand jerk moulds; after the industrialisation of the type foundries it became predominantly die cast. Around 1900 the slug casting machines came onto the market and added further automation, with sometimes dozens of casting machines at one newspaper office. Maximum weight limits for aluminium, brass, magnesium, and zinc castings have been estimated; each metal has its own practical limit. By late 2019, press machines capable of die casting very large single pieces were being used to produce aluminium chassis components for cars. The material used defines the minimum section thickness and minimum draft required for a casting. There is also a recommended upper limit on the thickest section, though it can be exceeded. Design geometry There are a number of geometric features to be considered when creating a parametric model of a die casting: Draft is the amount of slope or taper given to cores or other parts of the die cavity to allow for easy ejection of the casting from the die. All die cast surfaces that are parallel to the opening direction of the die require draft for the proper ejection of the casting from the die. Die castings that feature proper draft are easier to remove from the die and result in high-quality surfaces and a more precise finished product. Fillet is the curved juncture of two surfaces that would have otherwise met at a sharp corner or edge. Simply, fillets can be added to a die casting to remove undesirable edges and corners. Parting line represents the point at which two different sides of a mould come together. The location of the parting line defines which side of the die is the cover and which is the ejector. Bosses are added to die castings to serve as stand-offs and mounting points for parts that will need to be mounted. For maximum integrity and strength of the die casting, bosses must have uniform wall thickness. Ribs are added to a die casting to provide added support for designs that require maximum strength without increased wall thickness. Holes and windows require special consideration when die casting because the perimeters of these features will grip the die steel during solidification. To counteract this effect, generous draft should be added to hole and window features. Equipment There are two basic types of die casting machines: hot-chamber machines and cold-chamber machines. These are rated by how much clamping force they can apply. Hot-chamber die casting Hot-chamber machines, also known as gooseneck machines, rely upon a pool of molten metal to feed the die. At the beginning of the cycle the piston of the machine is retracted, which allows the molten metal to fill the "gooseneck". The pneumatic- or hydraulic-powered piston then forces this metal out of the gooseneck into the die. The advantages of this system include fast cycle times (approximately 15 cycles a minute) and the convenience of melting the metal in the casting machine. The disadvantages of this system are that it is limited to use with low-melting point metals and that aluminium cannot be used because it picks up some of the iron while in the molten pool. Therefore, hot-chamber machines are primarily used with zinc-, tin-, and lead-based alloys. Cold-chamber die casting These are used when the casting alloy cannot be used in hot-chamber machines; these include aluminium, zinc alloys with a large composition of aluminium, magnesium and copper. The process for these machines starts with melting the metal in a separate furnace.
Then a precise amount of molten metal is transported to the cold-chamber machine where it is fed into an unheated shot chamber (or injection cylinder). This shot is then driven into the die by a hydraulic or mechanical piston. The biggest disadvantage of this system is the slower cycle time due to the need to transfer the molten metal from the furnace to the cold-chamber machine. Mold or tooling Two dies are used in die casting; one is called the "cover die half" and the other the "ejector die half". Where they meet is called the parting line. The cover die contains the sprue (for hot-chamber machines) or shot hole (for cold-chamber machines), which allows the molten metal to flow into the dies; this feature matches up with the injector nozzle on the hot-chamber machines or the shot chamber in the cold-chamber machines. The ejector die contains the ejector pins and usually the runner, which is the path from the sprue or shot hole to the mould cavity. The cover die is secured to the stationary, or front, platen of the casting machine, while the ejector die is attached to the movable platen. The mould cavity is cut into two cavity inserts, which are separate pieces that can be replaced relatively easily and bolt into the die halves. The dies are designed so that the finished casting will slide off the cover half of the die and stay in the ejector half as the dies are opened. This assures that the casting will be ejected every cycle because the ejector half contains the ejector pins to push the casting out of that die half. The ejector pins are driven by an ejector pin plate, which accurately drives all of the pins at the same time and with the same force, so that the casting is not damaged. The ejector pin plate also retracts the pins after ejecting the casting to prepare for the next shot. There must be enough ejector pins to keep the overall force on each pin low, because the casting is still hot and can be damaged by excessive force. The pins still leave a mark, so they must be located in places where these marks will not hamper the casting's purpose. Other die components include cores and slides. Cores are components that usually produce holes or openings, but they can be used to create other details as well. There are three types of cores: fixed, movable, and loose. Fixed cores are ones that are oriented parallel to the pull direction of the dies (i.e. the direction the dies open), therefore they are fixed, or permanently attached to the die. Movable cores are ones that are oriented in any other way than parallel to the pull direction. These cores must be removed from the die cavity after the shot solidifies, but before the dies open, using a separate mechanism. Slides are similar to movable cores, except they are used to form undercut surfaces. The use of movable cores and slides greatly increases the cost of the dies. Loose cores, also called pick-outs, are used to cast intricate features, such as threaded holes. These loose cores are inserted into the die by hand before each cycle and then ejected with the part at the end of the cycle. The core then must be removed by hand. Loose cores are the most expensive type of core, because of the extra labor and increased cycle time. Other features in the dies include water-cooling passages and vents along the parting lines. These vents are usually wide and thin, so that when the molten metal starts filling them the metal quickly solidifies and minimizes scrap.
No risers are used because the high pressure ensures a continuous feed of metal from the gate. The most important material properties for the dies are thermal shock resistance and softening at elevated temperature; other important properties include hardenability, machinability, heat checking resistance, weldability, availability (especially for larger dies), and cost. The longevity of a die is directly dependent on the temperature of the molten metal and the cycle time. The dies used in die casting are usually made out of hardened tool steels, because cast iron cannot withstand the high pressures involved; the dies are therefore very expensive, resulting in high start-up costs. Metals that are cast at higher temperatures require dies made from higher alloy steels. The main failure mode for die casting dies is wear or erosion. Other failure modes are heat checking and thermal fatigue. Heat checking is when surface cracks occur on the die due to a large temperature change on every cycle. Thermal fatigue is when surface cracks occur on the die due to a large number of cycles. Process The following are the four steps in traditional die casting, which are also the basis for any of the die casting variations: die preparation, filling, ejection, and shakeout. The dies are prepared by spraying the mould cavity with lubricant. The lubricant both helps control the temperature of the die and assists in the removal of the casting. The dies are then closed and molten metal is injected into the dies under high pressure. Once the mould cavity is filled, the pressure is maintained until the casting solidifies. The dies are then opened and the shot (shots are different from castings because there can be multiple cavities in a die, yielding multiple castings per shot) is ejected by the ejector pins. Finally, the shakeout involves separating the scrap, which includes the gate, runners, sprues and flash, from the shot. This is often done using a special trim die in a power press or hydraulic press. Other methods of shaking out include sawing and grinding. A less labor-intensive method is to tumble shots if gates are thin and easily broken; separation of gates from finished parts must follow. This scrap is recycled by remelting it. The yield is approximately 67%. The high-pressure injection leads to a quick fill of the die, which is required so the entire cavity fills before any part of the casting solidifies. In this way, discontinuities are avoided, even if the shape requires difficult-to-fill thin sections. This creates the problem of air entrapment, because when the mould is filled quickly there is little time for the air to escape. This problem is minimized by including vents along the parting lines; however, even in a highly refined process there will still be some porosity in the center of the casting. Most die casters perform other secondary operations to produce features not readily castable, such as tapping a hole, polishing, plating, buffing, or painting. Inspection After the shakeout of the casting it is inspected for defects. The most common defects are misruns and cold shuts. These defects can be caused by cold dies, low metal temperature, dirty metal, lack of venting, or too much lubricant. Other possible defects are gas porosity, shrinkage porosity, hot tears, and flow marks. Flow marks are marks left on the surface of the casting due to poor gating, sharp corners, or excessive lubricant.
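The role of clamping force mentioned under Equipment can be made concrete with a standard back-of-the-envelope sizing rule: the machine must hold the dies shut against the injection pressure acting over the projected area of the cavity, or the dies part and flash forms at the parting line. The sketch below illustrates that rule with example figures that are not from the article.

```python
# Estimate the clamping force a die casting machine needs: injection
# pressure times the projected cavity area, plus a safety margin.

def required_clamp_force_kn(injection_pressure_mpa: float,
                            projected_area_cm2: float,
                            safety_factor: float = 1.2) -> float:
    """Force (kN) needed to hold the dies shut during injection."""
    area_m2 = projected_area_cm2 / 10_000          # cm^2 -> m^2
    force_n = injection_pressure_mpa * 1e6 * area_m2
    return safety_factor * force_n / 1_000         # N -> kN

# Example: 70 MPa injection pressure over a 300 cm^2 projected area.
print(f"{required_clamp_force_kn(70, 300):.0f} kN")   # -> 2520 kN
```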
Lubricants Water-based lubricants are the most used type of lubricant, because of health, environmental, and safety reasons. Unlike solvent-based lubricants, if water is properly treated to remove all minerals from it, it will not leave any by-product in the dies. If the water is not properly treated, then the minerals can cause surface defects and discontinuities. Today "water-in-oil" and "oil-in-water" emulsions are used, because, when the lubricant is applied, the water cools the die surface by evaporating, depositing the oil that helps release the shot. A common mixture for this type of emulsion is thirty parts water to one part oil; in extreme cases a ratio of one hundred to one is used. Oils that are used include heavy residual oil (HRO), animal fat, vegetable fat, synthetic oil, and all sorts of mixtures of these. HROs are gelatinous at room temperature, but at the high temperatures found in die casting, they form a thin film. Other substances are added to control the viscosity and thermal properties of these emulsions, e.g. graphite, aluminium, mica. Other chemical additives are used to inhibit rusting and oxidation. In addition, emulsifiers are added to improve the emulsion manufacturing process, e.g. soap, alcohol esters, ethylene oxides. Historically, solvent-based lubricants, such as diesel fuel and kerosene, were commonly used. These were good at releasing the part from the die, but a small explosion occurred during each shot, which led to a build-up of carbon on the mould cavity walls. However, they were easier to apply evenly than water-based lubricants. Advantages Advantages of die casting: Excellent dimensional accuracy (dependent on casting material, but typically 0.1 mm for the first 2.5 cm (0.004 inch for the first inch) and 0.02 mm for each additional centimeter (0.002 inch for each additional inch)). Smooth cast surfaces. Thinner walls can be cast as compared to sand and permanent mould casting. Inserts can be cast-in (such as threaded inserts, heating elements, and high strength bearing surfaces). Reduces or eliminates secondary machining operations. Rapid production rates. High casting tensile strengths are achievable. Die casting fluid length is unaffected by solidification range, unlike permanent molds, sand castings, and other types. Disadvantages The main disadvantage to die casting is the very high capital cost. Both the casting equipment required and the dies and related components are very costly, as compared to most other casting processes. Therefore, to make die casting an economic process, a large production volume is needed. Other disadvantages are: The process is limited to high-fluidity metals. Increased scrap rates can be caused by fluidity failure, and scrap costs in die casting are high. Die casting involves a large number of parts, so questions of repeatability are particularly important. Casting weights have previously been limited to between 30 grams (1 oz) and 10 kg (20 lb), but from 2018 much larger shots have become possible. In the standard die casting process the final casting will have a small amount of porosity. This prevents any heat treating or welding, because the heat causes the gas in the pores to expand, which causes micro-cracks inside the part and exfoliation of the surface. However, some companies have found ways of reducing the porosity of the part, allowing limited welding and heat treating. Thus a related disadvantage of die casting is that it is only for parts in which softness is acceptable.
Parts needing hardening (through hardening or case hardening) and tempering are not cast in dies. During the cooling process, some of the material is forced into the tiny crevices of the mold under pressure, which creates excess burrs that require extra work to trim. Variants Acurad Acurad was a die casting process developed by General Motors in the late 1950s and 1960s. The name is an acronym for accurate, reliable, and dense. It was developed to combine a stable fill and directional solidification with the fast cycle times of the traditional die casting process. The process pioneered four breakthrough technologies for die casting: thermal analysis, flow and fill modeling, heat treatable and high integrity die castings, and indirect squeeze casting (explained below). The thermal analysis was the first done for any casting process. This was done by creating an electrical analog of the thermal system. A cross-section of the dies was drawn on Teledeltos paper and then thermal loads and cooling patterns were drawn onto the paper. Water lines were represented by magnets of various sizes. The thermal conductivity was represented by the reciprocal of the resistivity of the paper. The Acurad system employed a bottom fill system that required a stable flow-front. Logical thought processes and trial and error were used because computerized analysis did not exist yet; however, this modeling was the precursor to computerized flow and fill modeling. The Acurad system was the first die casting process that could successfully cast low-iron aluminium alloys, such as A356 and A357. In a traditional die casting process these alloys would solder to the die. Similarly, Acurad castings could be heat treated and meet the U.S. military specification MIL-A-21180-D. Finally, the Acurad system employed a patented double shot piston design. The idea was to use a second piston (located within the primary piston) to apply pressure after the shot had partially solidified around the perimeter of the casting cavity and shot sleeve. While the system was not very effective, it did lead the manufacturer of the Acurad machines, Ube Industries, to discover that it was just as effective to apply sufficient pressure at the right time later in the cycle with the primary piston; this is indirect squeeze casting. Pore-free When no porosity is allowed in a cast part then the pore-free casting process is used. It is identical to the standard process except oxygen is injected into the die before each shot to purge any air from the mould cavity. This causes small dispersed oxides to form when the molten metal fills the die, which virtually eliminates gas porosity. An added advantage of this is greater strength. Unlike standard die castings, these castings can be heat treated and welded. This process can be performed on aluminium, zinc, and lead alloys. Vacuum-assisted high-pressure die casting In vacuum-assisted high-pressure die casting, a.k.a. vacuum high-pressure die casting (VHPDC), a vacuum pump removes air and gases from the die cavity and metal delivery system before and during injection. Vacuum die casting reduces porosity, allows heat treating and welding, improves surface finish, and can increase strength. Heated-manifold direct-injection Heated-manifold direct-injection die casting, also known as direct-injection die casting or runnerless die casting, is a zinc die casting process where molten zinc is forced through a heated manifold and then through heated mini-nozzles, which lead into the moulding cavity.
This process has the advantages of lower cost per part, through the reduction of scrap (by the elimination of sprues, gates, and runners) and energy conservation, and better surface quality through slower cooling cycles. Semi-solid Semi-solid die casting uses metal that is heated between its liquidus and solidus (or liquidus and eutectic temperature), so that it is in a "mushy" state. This allows for more complex parts and thinner walls. Low-pressure die casting Low-pressure die casting (LPDC) is a process developed to improve the consistency and integrity of parts, at the cost of a much slower cycle time. In LPDC, material is held in a reservoir below the die, from which it flows into the cavity when air pressure in the reservoir is increased. Typical operating pressures are low compared with conventional high-pressure die casting. Somewhat higher pressures may be applied after the material is in the die, to work it into fine details of the cavity and eliminate porosity. Typical cycle times for a low-pressure die casting process are longer than for other die-casting processes; an engine block can take up to fifteen minutes. It is primarily used for aluminum, but has been used for carbon steel as well. Integrated die casting Integrated die casting refers to the high-level integration of multiple separate and dispersed alloy parts into one or two large castings produced on a large-tonnage die-casting machine. The aim is to reduce manufacturing costs through one-time molding, significantly decreasing the number of parts needed for car assembly and improving overall efficiency. Elon Musk's team first proposed this processing method for the Tesla manufacturing process, realized in the Giga Press program.
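The low-pressure process just described can be given a rough numerical footing: the air pressure needed to push molten metal up into the die follows the hydrostatic relation P = ρgh. The figures below are illustrative example values, not quantities taken from the article.

```python
# Estimate the reservoir overpressure needed in low-pressure die casting
# to raise molten metal a height h into the cavity, using P = rho * g * h.

RHO_ALUMINIUM = 2700.0   # kg/m^3, molten aluminium (approximate)
G = 9.81                 # m/s^2, gravitational acceleration

def fill_pressure_kpa(height_m: float, density: float = RHO_ALUMINIUM) -> float:
    """Minimum overpressure (kPa) to support a metal column of height_m."""
    return density * G * height_m / 1000.0

print(f"{fill_pressure_kpa(0.3):.1f} kPa")   # ~7.9 kPa for a 0.3 m rise
```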
Park and ride
A park and ride, also known as incentive parking or a commuter lot, is a parking lot with public transport connections that allows commuters and other people heading to city centres to leave their vehicles and transfer to a bus, rail system (rapid transit, light rail, or commuter rail), or carpool for the remainder of the journey. The vehicle is left in the parking lot during the day and retrieved when the owner returns. Park and rides are generally located in the suburbs of metropolitan areas or on the outer edges of large cities. A park and ride that only offers parking for meeting a carpool and not connections to public transport may also be called a park and pool. Park and ride is abbreviated as "P+R" on road signs in some countries, and is often styled as "Park & Ride" in marketing. Adoption In Sweden, a tax has been introduced on the benefit of free or cheap parking paid by an employer, if workers would otherwise have to pay. The tax has reduced the number of workers driving into the inner city, and increased the usage of park and ride areas, especially in Stockholm. The introduction of a congestion tax in Stockholm has further increased the usage of park and ride. In Prague, park and ride parking lots are established near some metro and railway stations (about 17 lots near 12 metro stations and 3 train stations as of 2011). These parking lots offer low prices and all-day and return (2× 75 min) tickets including the public transport fare. Benefits Park and ride facilities allow commuters to avoid a stressful drive along congested roads and a search for scarce, expensive city-centre parking. They may well reduce congestion by assisting the use of public transport in congested urban areas. There is not much research on the pros and cons of park and ride schemes. It has been suggested that there is "a lack of clear-cut evidence for park and ride's widely assumed impact in reducing congestion". Park and ride facilities help commuters who live beyond practical walking distance from the railway station or bus stop. They may also suit commuters with alternative fuel vehicles, which often have reduced range, when the facility is closer to home than the ultimate destination. They also are useful as a fixed meeting place for those carsharing, carpooling, or using "kiss and ride" (see below). Also, some transit operators use park and ride facilities to encourage more efficient driving practices by reserving parking spaces for low emission designs, high-occupancy vehicles, or carsharing. Many park and rides have passenger waiting areas and/or toilets. Travel information, such as leaflets and posters, may be provided. At larger facilities, extra services such as a travel office, food shop, car wash, or cafeteria may be provided. These are often supported by municipal operators to encourage use of park and ride. Bus park and rides Park and ride facilities, with dedicated parking lots and bus services, began in the 1960s in the UK. Oxford operated the first such scheme, initially with an experimental service operating part-time from a motel on the A34 in the 1960s and then on a full-time basis from 1973. Better Choice Parking first offered an airport park and ride service at London Gatwick Airport in 1978. Oxford now operates park and ride from 5 dedicated parking lots around the city. As of 2015, Oxford has the biggest urban park & ride network in the UK with a combined capacity of 5,031 car parking spaces.
Railway park and rides Some railway stations are promoted as a park and ride facility for a town a few miles away, for instance for Looe and St Erth for St Ives, both in Cornwall, England, and Norden for Swanage, Dorset, England (by steam railway). These help relieve traffic congestion and parking problems in the town. In contrast, some stations act as a railhead, easily accessed by road, for long-distance traffic. Names of stations in the UK with large car parks outside the main urban area are often suffixed with "Parkway". Some of these stations serve air as well as road passengers. In the United States, it is common for outlying rail stations to include automobile parking, often with hundreds of spaces. Bike and ride B & R (B + R) is a name for using cycle boxes or racks near public transport terminals, mostly together with P & R parking lots. This system can be promoted through integrated fares and ticketing with the public transport system. Kiss and ride / kiss and fly Many railway stations and airports feature a "kiss-and-ride" or "kiss-and-fly" area in which cars can stop briefly to discharge or, less commonly, pick up passengers. The term first appeared in a 20 January 1956 report in the Los Angeles Times. It refers to the nominal scenario whereby a passenger is driven to the station by a spouse or partner, then they kiss each other goodbye before the passenger catches the train. Deutsche Bahn has announced that it will be changing the English expressions for Kiss and Ride, Service Points and Counters to German ones. In Italy the new Bologna Centrale railway station uses "kiss and ride" signs. Some high-speed railway stations in Taiwan have signs outside stations reading "Kiss and Ride" in English, with Chinese characters above the words that read "temporary pick-up and drop-off zone". Kiss and ride areas are becoming popular in Poland. Cities with such areas include Wrocław (since October 2011), Kraków (since 15 November 2013), Warsaw (since 2016), and Toruń (since 2016). Locally they are known by their English name, i.e. "kiss and ride", and while the sign is non-standardized, all of them contain the letters K+R. In the Netherlands, many English terms appear in the Dutch language, and "Kiss & Ride" is one of them. Car-share park and rides Park and ride schemes do not necessarily involve public transport. They can be provided to reduce the number of cars on the road by promoting carpooling, vanpooling, and carsharing. Partly because of the concentration of riders, and thus a reduced number of vehicles, these park and ride terminals often have express transit services into the urban area, such as a high-occupancy vehicle lane. The service may take passengers in only one direction in the morning (typically towards a central business district) and in the opposite direction in the evening, with no or a limited number of trips available in the middle of the day. Overnight parking is often not allowed at these locations. These attributes vary from region to region, reflecting local transportation policies and commuter needs.
Soil acidification
Soil acidification is the buildup of hydrogen cations, which reduces the soil pH. Chemically, this happens when a proton donor gets added to the soil. The donor can be an acid, such as nitric acid, sulfuric acid, or carbonic acid. It can also be a compound such as aluminium sulfate, which reacts in the soil to release protons. Acidification also occurs when base cations such as calcium, magnesium, potassium and sodium are leached from the soil. Soil acidification naturally occurs as lichens and algae begin to break down rock surfaces. Acids continue with this dissolution as soil develops. With time and weathering, soils become more acidic in natural ecosystems. Soil acidification rates can vary, and increase with certain factors such as acid rain, agriculture, and pollution. Causes Acid rain Rainfall is naturally acidic due to carbonic acid forming from carbon dioxide in the atmosphere. This compound causes rainfall pH to be around 5.0–5.5. When rainfall has a lower pH than natural levels, it can cause rapid acidification of soil. Sulfur dioxide and nitrogen oxides are precursors of stronger acids that can lead to acid rain production when they react with water in the atmosphere. These gases may be present in the atmosphere due to natural sources such as lightning and volcanic eruptions, or from anthropogenic emissions. Basic cations like calcium are leached from the soil as acidic rainfall flows, which allows aluminum and proton levels to increase. Nitric and sulfuric acids in acid rain and snow can have different effects on the acidification of forest soils, particularly seasonally in regions where a snow pack may accumulate during the winter. Snow tends to contain more nitric acid than sulfuric acid, and as a result, a pulse of nitric acid-rich snow meltwater may leach through high elevation forest soils during a short time in the spring. This volume of water may comprise as much as 50% of the annual precipitation. The nitric acid flush of meltwater may cause a sharp, short-term decrease in the drainage water pH entering groundwater and surface waters. The decrease in pH can solubilize Al3+ that is toxic to fish, especially newly-hatched fry with immature gill systems through which they pass large volumes of water to obtain O2 for respiration. As the snow meltwater flush passes, water temperatures rise, and lakes and streams produce more dissolved organic matter; the Al concentration in drainage water decreases and is bound to organic acids, making it less toxic to fish. In rain, the ratio of nitric-to-sulfuric acids decreases to approximately 1:2. The higher sulfuric acid content of rain also may not release as much Al3+ from soils as does nitric acid, in part due to the retention (adsorption) of SO42- by soils. This process releases OH− into soil solution and buffers the pH decrease caused by the added H+ from both acids. The forest floor organic soil horizons (layers) that are high in organic matter also buffer pH, and decrease the load of H+ that subsequently leaches through underlying mineral horizons. Biological weathering Plant roots acidify soil by releasing protons and organic acids so as to chemically weather soil minerals. Decaying remains of dead plants on soil may also form organic acids which contribute to soil acidification.
Acidification from leaf litter on the O-horizon is more pronounced under coniferous trees such as pine, spruce and fir, which return fewer base cations to the soil, than under deciduous trees; however, soil pH differences attributed to vegetation often preexisted that vegetation, and help select for species which tolerate them. Calcium accumulation in existing biomass also strongly affects soil pH, a factor which can vary from species to species. Parent materials Certain parent materials also contribute to soil acidification. Granites and their allied igneous rocks are called "acidic" because they have a lot of free quartz, which produces silicic acid on weathering. Also, they have relatively low amounts of calcium and magnesium. Some sedimentary rocks such as shale and coal are rich in sulfides, which, when hydrated and oxidized, produce sulfuric acid which is much stronger than silicic acid. Many coal spoils are too acidic to support vigorous plant growth, and coal gives off strong precursors to acid rain when it is burned. Marine clays are also sulfide-rich in many cases, and such clays become very acidic if they are drained to an oxidizing state. Soil amendments Soil amendments such as chemical fertilizers can cause soil acidification. Sulfur-based fertilizers can be highly acidifying; examples include elemental sulfur and iron sulfate, while others, like potassium sulfate, have no significant effect on soil pH. While most nitrogen fertilizers have an acidifying effect, ammonium-based nitrogen fertilizers are more acidifying than other nitrogen sources. Ammonium-based nitrogen fertilizers include ammonium sulfate, diammonium phosphate, monoammonium phosphate, and ammonium nitrate. Organic nitrogen sources, such as urea and compost, are less acidifying. Nitrate sources which have little or no ammonium, such as calcium nitrate, magnesium nitrate, potassium nitrate, and sodium nitrate, are not acidifying. Pollution Acidification may also occur from nitrogen emissions into the air, as the nitrogen may end up deposited into the soil. Animal livestock is responsible for nearly 65 percent of man-made ammonia emissions. Anthropogenic sources of sulfur dioxides and nitrogen oxides play a major role in the increase of acid rain production. The use of fossil fuels and motor exhaust are the largest anthropogenic contributors of sulfuric gases and nitrogen oxides, respectively. Aluminum is one of the few elements capable of making soil more acidic. This is achieved by aluminum taking hydroxide ions out of water, leaving hydrogen ions behind. As a result, the soil is more acidic, which makes it unlivable for many plants. Another consequence of aluminum in soils is aluminum toxicity, which inhibits root growth. Agriculture management practices Agricultural management approaches such as monoculture and chemical fertilization often lead to soil problems such as soil acidification, degradation, and soil-borne diseases, which ultimately have a negative impact on agricultural productivity and sustainability. Effects Soil acidification can cause damage to plants and organisms in the soil. In plants, soil acidification results in smaller, less durable roots. Acidic soils sometimes damage the root tips, reducing further growth. Plant height is impaired and seed germination also decreases. Soil acidification impacts plant health, resulting in reduced cover and lower plant density. Overall, stunted growth is seen in plants. Soil acidification is directly linked to a decline in endangered species of plants.
In the soil, acidification reduces microbial and macrofaunal diversity. This can cause soil structure to decline, which makes the soil more sensitive to erosion. Fewer nutrients are available in the soil, toxic elements have a larger impact on plants, and soil biological functions (such as nitrogen fixation) suffer. A recent study showed that sugarcane monoculture induces soil acidity, reduces soil fertility, shifts microbial structure, and reduces its activity. Furthermore, most beneficial bacterial genera decreased significantly due to sugarcane monoculture, while beneficial fungal genera showed a reverse trend. Therefore, mitigating soil acidity, improving soil fertility and soil enzymatic activities, and improving microbial structure with beneficial service to plants and soil can be effective measures for developing a sustainable sugarcane cropping system. At a larger scale, soil acidification is linked to losses in agricultural productivity due to these effects. The impacts of acidic water and soil acidification on plants can be minor or, in many cases, major. Minor cases involve plants that are less sensitive to acidic conditions, or acid deposition that is less potent; even then, the plant may eventually die as the acidic water lowers the plant's natural pH. Acidic water enters the plant and causes important plant minerals to dissolve and be carried away, ultimately causing the plant to die from a lack of minerals for nutrition. In major, more extreme cases, the same process of damage occurs, with removal of essential minerals, but at a much quicker rate. Likewise, acid rain that falls on soil and on plant leaves causes drying of the waxy leaf cuticle, which leads to rapid water loss from the plant to the outside atmosphere and results in the death of the plant. To see whether a plant is being affected by soil acidification, one can closely observe its leaves. If the leaves are green and look healthy, the soil pH is acceptable for plant life. But if the leaves show yellowing between the veins, the plant is suffering from acidification and is unhealthy. Moreover, a plant suffering from soil acidification may be unable to photosynthesize, since the drying caused by acidic water can destroy chloroplasts. Without photosynthesis, a plant can create neither nutrients for its own survival nor oxygen for the survival of aerobic organisms, which affects most species on Earth. Prevention and management Soil acidification is a common issue in long-term crop production which can be reduced by lime, organic amendments (e.g., straw and manure) and biochar application. In sugarcane, soybean and corn crops grown in acidic soils, lime application resulted in nutrient restoration, an increase in soil pH, an increase in root biomass, and better plant health. Different management strategies may also be applied to prevent further acidification: using less acidifying fertilizers, considering fertilizer amount and application timing to reduce nitrate-nitrogen leaching, good irrigation management with acid-neutralizing water, and considering the ratio of basic nutrients to nitrogen in harvested crops. Sulfur fertilizers should only be used in responsive crops with a high rate of crop recovery.
Reducing anthropogenic sources of sulfur dioxide and nitrogen oxides through air-pollution control measures can reduce acid rain and soil acidification worldwide. This has been observed in Ontario, Canada, where several lakes demonstrated improvements in water pH and alkalinity.
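As a brief numerical aside, the pH scale used throughout this article is logarithmic, so small pH changes conceal large changes in hydrogen ion concentration. The short illustration below works this out.

```python
# pH is the negative base-10 logarithm of hydrogen ion concentration,
# so each one-unit drop in soil pH means ten times more H+ ions.

def h_concentration(ph: float) -> float:
    """Hydrogen ion concentration (mol/L) for a given pH."""
    return 10.0 ** (-ph)

for ph in (6.5, 5.5, 4.5):
    print(f"pH {ph}: [H+] = {h_concentration(ph):.1e} mol/L")
# A soil dropping from pH 6.5 to 4.5 holds 100 times more hydrogen ions.
```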
Port of Rotterdam
The Port of Rotterdam is the largest seaport in Europe, and the world's largest seaport outside of Asia, located in and near the city of Rotterdam, in the province of South Holland in the Netherlands. From 1962 until 2004, it was the world's busiest port by annual cargo tonnage. It was overtaken first in 2004 by the port of Singapore, and since then by Shanghai and other very large Chinese seaports. In 2020, Rotterdam was the world's tenth-largest container port in terms of twenty-foot equivalent units (TEU) handled. In 2017, Rotterdam was also the world's tenth-largest cargo port in terms of annual cargo tonnage. The port of Rotterdam covers an extensive area and stretches a considerable distance from the city centre to the North Sea. It consists of the city centre's historic harbour area, including Delfshaven; the Maashaven/Rijnhaven/Feijenoord complex; the harbours around Nieuw-Mathenesse; Waalhaven; Vondelingenplaat; Eemhaven; Botlek; Europoort, situated along the Calandkanaal, Nieuwe Waterweg and Scheur (the latter two being continuations of the Nieuwe Maas); and the reclaimed Maasvlakte area, which projects into the North Sea. The Port of Rotterdam is located in the middle of the Rhine-Meuse-Scheldt delta. Rotterdam has five port concessions (ports) within its boundaries, operated by separate companies under the overall authority of Rotterdam. Rotterdam consists of five distinct port areas and three distribution parks that facilitate the needs of a hinterland with over 500,000,000 consumers throughout the continent of Europe. Nieuwe Waterweg In the first half of the 19th century the port activities moved from the centre westward towards the North Sea. To improve the connection to the North Sea, the Nieuwe Waterweg ("New Waterway"), a large canal, was designed to connect the Rhine and Meuse rivers to the sea. The Nieuwe Waterweg, designed by Pieter Caland, was to be partly dug, with the canal bed then deepened further by the natural flow of the water. Ultimately, however, the last part had to be dug by manual labour as well. Nevertheless, Rotterdam from then on had a direct connection between the sea and harbour areas with sufficient depth. The canal was ready in 1872, and all sorts of industrial activity developed on its banks. The Nieuwe Waterweg has since been deepened several times. Europoort and Maasvlakte extensions Over the years the port was further developed seaward by building new docks and harbour-basins. Rotterdam's harbour territory has been enlarged by the construction of the Europoort (gate to Europe) complex along the mouth of the Nieuwe Waterweg. In the 1970s the port was extended into the sea at the south side of the mouth of the Nieuwe Waterweg by completion of the Maasvlakte (Meuse-plain), which was built in the North Sea near Hook of Holland. In the past five years the industrialised skyline has been changed by the addition of large numbers of wind turbines taking advantage of the exposed coastal conditions. The construction of a second Maasvlakte received initial political approval in 2004, but was stopped by the Raad van State (the Dutch Council of State, which advises the government and parliament on legislation and governance) in 2005, because the plans did not take enough account of environmental issues. On 10 October 2006, however, approval was acquired to start construction in 2008, aiming for the first ship to anchor in 2013. Characteristics Most important for the port of Rotterdam are the petrochemical industry and general cargo transshipment handling.
The harbour functions as an important transit point for transport of bulk and other goods between the European continent and other parts of the world. From Rotterdam goods are transported by ship, river barge, train or road. The Betuweroute, a fast cargo railway from Rotterdam to Germany, was under construction from 2000; the Dutch part of this railway opened in 2007. Large oil refineries are located west of the city. The rivers Meuse (Maas) and Rhine also provide excellent access to the pan-European hinterland. 24-metre draft The EECV-quay of the port has a draft of 24 metres (78 feet). This made it one of only two available mooring locations for one of the largest bulk cargo ships in the world, the iron ore bulk carrier MS Berge Stahl, when fully loaded, along with the Terminal of Ponta da Madeira in Brazil, until the opening of a new deep-water iron ore wharf at Caofeidian in China in 2011. The ship's draft of 23 metres (75 feet) leaves only 1 metre (3 feet) of under-keel clearance, so it can dock only in a restricted tidal window. Such ships must travel in the Eurogeul waterway. Robotic container operations Much of the container loading and stacking in the port is handled by autonomous robotic cranes and computer-controlled chariots. Europe Container Terminals, which operates two major container terminals at the port, pioneered the development of terminal automation. At the Delta terminal, the chariots—or automated guided vehicles (AGV)—are unmanned and each carries one container. The chariots navigate their own way around the terminal with the help of a magnetic grid built into the terminal tarmac. Once a container is loaded onto an AGV, it is identified by infrared "eyes" and delivered to its designated place within the terminal. This terminal is also named "the ghost terminal". Unmanned Automated Stacking Cranes (ASC) take containers to/from the AGVs and store them in the stacking yard. The newer Euromax terminal implements an evolution of this design that eliminates the use of straddle carriers for the land-side operations. Smart Technology The Port Authority at the Port of Rotterdam uses an Internet of Things cloud-based platform to collect and process data from sensors around the port. In May 2019, the port sent Container 42 out on a two-year data-collecting mission. Urban renewal in vacant port areas As early as 1892, the Leuvehaven attracted the first museum visitors. Art lovers could view one of Van Gogh's first exhibitions in the art gallery at number 74 Leuvehaven. At the time, no one would have thought that the harbor itself would have become a museum a hundred years later. In 1979 the Maritime Museum opened the museum ship the Buffel in the Leuvehaven. The ship formerly served in the Dutch navy. On April 16, 1983, construction of the Maritime Museum began at the head of the Leuvehaven; it opened in 1986. The Harbour Museum (Havenmuseum), merged with the Maritime Museum since 2014, filled the rest of the harbor with ships. The Leuvehaven is still a home port for a small number of inland vessels. The Oude Haven is one of the oldest ports of Rotterdam. It is located in the center of the city, south-east of Rotterdam Blaak station. Today the Oude Haven is a well-known and busy nightlife area with cafes and restaurants with terraces on the water, close to the famous Kubuswoningen, the Witte Huis and the adjacent Mariniersmuseum. Rotterdam University of Applied Sciences has a location nearby.
The most important project in this development is the Kop van Zuid, an area on the south bank of the Nieuwe Maas, directly opposite the city centre. The area has not been used as a port since the German bombing in 1940 and fell into disrepair in the decades that followed. In 1993 the Hotel New York opened in the former office building of the Holland America Line (Nederlandsch Amerikaansche Stoomvaart Maatschappij). With the construction of the Erasmusbrug in 1996, the city created a direct connection between the two banks of the Meuse. Since then, numerous public buildings, such as the Luxor theatre and several museums, as well as office and residential high-rises, have been built. In March 2020 it was announced that the Rijnhaven would be partially filled in after 2024 and used for residential construction and a city park. The Posthumalaan will then become a city boulevard with tall residential towers, and the Wilhelminaplein and Rijnhaven underground stations will be renovated. In the meantime, the Floating Office Rotterdam (FOR) opened in September 2021 on the Antoine Platekade and accommodates the Global Center on Adaptation. The FOR also includes a restaurant and an outdoor swimming pool. This is a project in the context of the Rotterdam Climate Initiative (RCI). Administration The port is operated by the Port of Rotterdam Authority, originally a municipal body of the municipality of Rotterdam, but since 1 January 2004, a government corporation jointly owned by the municipality of Rotterdam and the Dutch State. Flood barriers The Port of Rotterdam and its surrounding area is susceptible to a storm surge from the North Sea. As part of the Delta Works plan, the Maeslantkering flood barrier was constructed from 1991 to 1997 to protect the area. This flood barrier consists of two huge doors that normally rest in dry docks beside the Nieuwe Waterweg. When a flood of above NAP (mean sea level) is predicted, the barrier is activated: the dry docks are flooded, and the gates rotate around a pivot to float into position, like caissons, and are then sunk in place. When the water level recedes enough to open the gates, they are floated back into their docks. Another barrier, the Hartelkering, is situated in the Hartelkanaal. Sustainability The Port of Rotterdam aims to be emissions-free by the year 2050. In 2018, the Port Authority CEO launched a EUR 5 million incentive scheme for climate-friendly shipping. According to Benchmarkia's Industrial Park Ranking, Rotterdam Harbour has been ranked 20th among all industrial parks worldwide based on sustainability.
Technology
Specific piers and ports
null
1691376
https://en.wikipedia.org/wiki/Amazon%20Web%20Services
Amazon Web Services
Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered, pay-as-you-go basis. Clients will often use this in combination with autoscaling (a process that allows a client to use more computing in times of high application usage, and then scale down to reduce costs when there is less traffic). These cloud computing web services provide various services related to networking, compute, storage, middleware, IoT and other processing capacity, as well as software tools, via AWS server farms. This frees clients from managing, scaling, and patching hardware and operating systems. One of the foundational services is Amazon Elastic Compute Cloud (EC2), which gives users a virtual cluster of computers, with extremely high availability, that can be interacted with over the internet via REST APIs, a CLI or the AWS console. AWS's virtual computers emulate most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk (HDD)/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM). AWS services are delivered to customers via a network of AWS server farms located throughout the world. Fees are based on a combination of usage (known as a "pay-as-you-go" model), hardware, operating system, software, and networking features chosen by the subscriber, who may require various degrees of availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either. Amazon provides select portions of security for subscribers (e.g. physical security of the data centers) while other aspects of security are the responsibility of the subscriber (e.g. account management, vulnerability scanning, patching). AWS operates from many global geographical regions, including seven in North America. Amazon markets AWS to subscribers as a way of obtaining large-scale computing capacity more quickly and cheaply than building an actual physical server farm. All services are billed based on usage, but each service measures usage in varying ways. As of 2023 Q1, AWS has a 31% market share for cloud infrastructure, while the next two competitors, Microsoft Azure and Google Cloud, have 25% and 11% respectively, according to Synergy Research Group. Services AWS comprises over 200 products and services including computing, storage, networking, database, analytics, application services, deployment, management, machine learning, mobile, developer tools, RobOps and tools for the Internet of Things. The most popular include Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (Amazon S3), Amazon Connect, and AWS Lambda (a serverless service that runs arbitrary code and can be configured to be triggered by hundreds of event types, including HTTP calls). Services expose functionality through APIs for clients to use in their applications. These APIs are accessed over HTTP, using the REST architectural style and the SOAP protocol for older APIs, and exclusively JSON for newer ones. 
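To make this concrete, the sketch below shows what a call against one of these service APIs looks like from Python using the boto3 SDK. It is an illustration under stated assumptions, not a prescribed workflow: the bucket name, object key, and locally configured credentials are hypothetical, not details taken from this article.

```python
# Minimal sketch: using an AWS service API via the boto3 SDK (Python).
# Assumes credentials and a default region are already configured,
# e.g. in environment variables or ~/.aws/credentials.
import boto3

s3 = boto3.client("s3")

# Each SDK call is translated into a signed HTTPS request to the S3 REST API.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])

# Storing an object is a single PUT request under the hood.
s3.put_object(
    Bucket="example-bucket",  # hypothetical bucket name
    Key="hello.txt",
    Body=b"Hello from the AWS API",
)
```

The console, the CLI, and the SDKs are all front ends to the same HTTP APIs, which is why the access paths described below are interchangeable.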
Clients can interact with these APIs in various ways, including from the AWS console (a website), by using SDKs written in various languages (such as Python, Java, and JavaScript), or by making direct REST calls. History Founding (2000–2005) The genesis of AWS came in the early 2000s. After building Merchant.com, Amazon's e-commerce-as-a-service platform that offers third-party retailers a way to build their own web-stores, Amazon pursued service-oriented architecture as a means to scale its engineering operations, led by then-CTO Allan Vermeulen. Around the same time, Amazon was frustrated with the speed of its software engineering, and sought to implement various recommendations put forth by Matt Round, an engineering leader at the time, including maximization of autonomy for engineering teams, adoption of REST, standardization of infrastructure, removal of gate-keeping decision-makers (bureaucracy), and continuous deployment. He also called for increasing the percentage of the time engineers spent building the software rather than doing other tasks. Amazon created "a shared IT platform" so its engineering organizations, which were spending 70% of their time on "undifferentiated heavy-lifting" such as IT and infrastructure problems, could focus on customer-facing innovation instead. In addition, in dealing with unusual peak traffic patterns, especially during the holiday season, Amazon's Infrastructure team, led by Tom Killalea, Amazon's first CISO, had already learned to run its data centers and associated services in a "fast, reliable, cheap" way by migrating services to commodity Linux hardware and relying on open-source software. In July 2002, Amazon.com Web Services, managed by Colin Bryar, launched its first web services, opening up the Amazon.com platform to all developers. Over one hundred applications were built on top of it by 2004. This unexpected developer interest took Amazon by surprise and convinced them that developers were "hungry for more". By the summer of 2003, Andy Jassy had taken over Bryar's portfolio at Rick Dalzell's behest, after Vermeulen, who was Bezos' first pick, declined the offer. Jassy subsequently mapped out the vision for an "Internet OS" made up of foundational infrastructure primitives that alleviated key impediments to shipping software applications faster. By fall 2003, databases, storage, and compute were identified as the first set of infrastructure pieces that Amazon should launch. Jeff Barr, an early AWS employee, credits Vermeulen, Jassy, Bezos himself, and a few others for coming up with the idea that would evolve into EC2, S3, and RDS; Jassy recalls the idea was the result of brainstorming for about a week with "ten of the best technology minds and ten of the best product management minds" on about ten different internet applications and the most primitive building blocks required to build them. Werner Vogels has said that Amazon's desire to make the process of "invent, launch, reinvent, relaunch, start over, rinse, repeat" as fast as possible led it to break down organizational structures with "two-pizza teams" and application structures with distributed systems, and that these changes ultimately paved the way for the formation of AWS and its mission "to expose all of the atomic-level pieces of the Amazon.com platform". According to Brewster Kahle, co-founder of Alexa Internet, which was acquired by Amazon in 1999, his start-up's compute infrastructure helped Amazon solve its big data problems and later informed the innovations that underpinned AWS. 
Jassy assembled a founding team of 57 employees from a mix of engineering and business backgrounds to kick-start these initiatives, with a majority of the hires coming from outside the company, among them Jeff Lawson (later Twilio CEO), Adam Selipsky (later Tableau CEO), and Mikhail Seregine (later co-founder at Outschool). In late 2003, the concept for compute, which would later launch as EC2, was reformulated when Chris Pinkham and Benjamin Black presented a paper internally describing a vision for Amazon's retail computing infrastructure that was completely standardized and automated, relied extensively on web services for functions such as storage, and drew on internal work already underway. Near the end of their paper, they mentioned the possibility of selling access to virtual servers as a service, proposing that the company could generate revenue from the new infrastructure investment. Thereafter Pinkham, Willem van Biljon, and lead developer Christopher Brown developed the Amazon EC2 service, with a team in Cape Town, South Africa. In November 2004, AWS launched its first infrastructure service for public usage: Simple Queue Service (SQS). S3, EC2, and other first-generation services (2006–2010) On March 14, 2006, AWS launched Amazon S3 cloud storage, followed by EC2 in August 2006. Pi Corporation, a startup Paul Maritz co-founded, was the first beta user of EC2 outside of Amazon, while Microsoft was among EC2's first enterprise customers. Later that year, SmugMug, one of the early AWS adopters, attributed savings of around US$400,000 in storage costs to S3. According to Vogels, S3 was built with 8 microservices when it launched in 2006, but had over 300 microservices by 2022. In September 2007, AWS announced its annual Start-up Challenge, a contest with prizes worth $100,000 for entrepreneurs and software developers based in the US using AWS services such as S3 and EC2 to build their businesses. The first edition saw participation from Justin.tv, which Amazon would later acquire in 2014. Ooyala, an online media company, was the eventual winner. Additional AWS services from this period include SimpleDB, Mechanical Turk, Elastic Block Store, Elastic Beanstalk, Relational Database Service, DynamoDB, CloudWatch, Simple Workflow, CloudFront, and Availability Zones. Growth (2010–2015) In November 2010, it was reported that all of Amazon.com's retail sites had migrated to AWS. Prior to 2012, AWS was considered a part of Amazon.com and so its revenue was not delineated in Amazon financial statements. In that year industry watchers for the first time estimated AWS revenue to be over $1.5 billion. On November 27, 2012, AWS hosted its first major annual conference, re:Invent, with a focus on AWS's partners and ecosystem, with over 150 sessions. The three-day event was held in Las Vegas because of its relatively cheap connectivity with locations across the United States and the rest of the world. Andy Jassy and Werner Vogels presented keynotes, with Jeff Bezos joining Vogels for a fireside chat. AWS opened early registrations at US$1,099 per head for their customers from over 190 countries. On stage with Andy Jassy at the event, which saw around 6,000 attendees, Reed Hastings, CEO of Netflix, announced plans to migrate 100% of Netflix's infrastructure to AWS. To support industry-wide training and skills standardization, AWS began offering a certification program for computer engineers on April 30, 2013, to highlight expertise in cloud computing. 
Later that year, in October, AWS launched Activate, a program for start-ups worldwide to leverage AWS credits, third-party integrations, and free access to AWS experts to help build their business. In 2014, AWS launched its partner network, AWS Partner Network (APN), which is focused on helping AWS-based companies grow and scale the success of their business with close collaboration and best practices. In January 2015, Amazon Web Services acquired Annapurna Labs, an Israel-based microelectronics company, for a reported US$350–370M. In April 2015, Amazon.com reported AWS was profitable, with sales of $1.57 billion in the first quarter of the year and $265 million of operating income. Founder Jeff Bezos described it as a fast-growing $5 billion business; analysts described it as "surprisingly more profitable than forecast". In October, Amazon.com said in its Q3 earnings report that AWS's operating income was $521 million, with operating margins at 25 percent. AWS's 2015 Q3 revenue was $2.1 billion, a 78% increase from 2014's Q3 revenue of $1.17 billion. 2015 Q4 revenue for the AWS segment increased 69.5% y/y to $2.4 billion with a 28.5% operating margin, giving AWS a $9.6 billion run rate. In 2015, Gartner estimated that AWS customers were deploying 10x more infrastructure on AWS than the combined adoption of the next 14 providers. Current era (2016–present) In 2016 Q1, revenue was $2.57 billion with net income of $604 million, a 64% increase over 2015 Q1 that resulted in AWS being more profitable than Amazon's North American retail business for the first time. Jassy was thereafter promoted to CEO of the division. Around the same time, Amazon experienced a 42% rise in stock value as a result of increased earnings, with AWS contributing 56% of corporate profits. AWS had $17.46 billion in annual revenue in 2017. By the end of 2020, the number had grown to $46 billion. Reflecting the success of AWS, Jassy's annual compensation in 2017 hit nearly $36 million. In January 2018, Amazon launched an autoscaling service on AWS. In November 2018, AWS announced customized ARM cores for use in its servers. Also in November 2018, AWS announced it was developing ground stations to communicate with customers' satellites. In 2019, AWS reported 37% yearly growth and accounted for 12% of Amazon's revenue (up from 11% in 2018). In April 2021, AWS reported 32% yearly growth and accounted for 32% of the $41.8 billion cloud market in Q1 2021. In January 2022, AWS joined the MACH Alliance, a non-profit enterprise technology advocacy group. In June 2022, it was reported that in 2019 Capital One had not secured its AWS resources properly and had been subject to a data breach by a former AWS employee. The employee was convicted of hacking into the company's cloud servers to steal customer data and use computer power to mine cryptocurrency. The ex-employee was able to download the personal information of more than 100 million Capital One customers. In June 2022, AWS announced it had launched the AWS Snowcone, a small computing device, to the International Space Station on Axiom Mission 1. In September 2023, AWS announced it would become AI startup Anthropic's primary cloud provider. Amazon has committed to investing up to $4 billion in Anthropic and will have a minority ownership position in the company. 
AWS also announced the general availability of Amazon Bedrock, a fully managed service that makes foundation models (FMs) from leading AI companies available through a single application programming interface (API). In April 2024, AWS announced a new service called Deadline Cloud, which lets customers set up, deploy, and scale up graphics and visual effects rendering pipelines on AWS cloud infrastructure. In December 2024, AWS announced Amazon Nova, its own family of foundation models. These models, offered through Amazon Bedrock, are designed for various tasks including content generation, video understanding, and building agentic applications. They are available in six different sizes. Customer base Notable customers include NASA and the Obama presidential campaign of 2012. In October 2013, AWS was awarded a $600M contract with the CIA. In 2019, it was reported that more than 80% of Germany's listed DAX companies use AWS. In August 2019, the U.S. Navy said it moved 72,000 users from six commands to an AWS cloud system as a first step toward pushing all of its data and analytics onto the cloud. In 2021, DISH Network announced it would develop and launch its 5G network on AWS. In October 2021, it was reported that spy agencies and government departments in the UK, such as GCHQ, MI5, MI6, and the Ministry of Defence, have contracted AWS to host their classified materials. In 2022, Amazon shared a $9 billion contract from the United States Department of Defense for cloud computing with Google, Microsoft, and Oracle. Multiple financial services firms have shifted to AWS in some form. Significant service outages On April 20, 2011, AWS suffered a major outage. Parts of the Elastic Block Store service became "stuck" and could not fulfill read/write requests. It took at least two days for the service to be fully restored. On June 29, 2012, several websites that rely on Amazon Web Services were taken offline due to a severe storm in Northern Virginia, where AWS's largest data center cluster is located. On October 22, 2012, a major outage occurred, affecting many sites including Reddit, Foursquare, and Pinterest. The cause was a memory leak bug in an operational data collection agent. On December 24, 2012, AWS suffered another outage, causing websites such as Netflix to be unavailable for customers in the Northeastern United States. AWS cited its Elastic Load Balancing service as the cause. On February 28, 2017, AWS experienced a massive outage of S3 services in its Northern Virginia region. A majority of websites that relied on AWS S3 either hung or stalled, and Amazon reported within five hours that AWS was fully online again. No data has been reported lost due to the outage. The outage was caused by a human error made while debugging that removed more server capacity than intended, causing a domino effect of outages. On November 25, 2020, AWS experienced a several-hour outage of the Kinesis service in the Northern Virginia (US-East-1) region. Other services relying on Kinesis were also impacted. On December 7, 2021, an outage mainly affected the Eastern United States, disrupting delivery services and streaming. Availability and topology AWS has distinct operations in 33 geographical "regions": eight in North America, one in South America, eight in Europe, three in the Middle East, one in Africa, and twelve in Asia Pacific. Most AWS regions are enabled by default for AWS accounts. 
Regions introduced after 20 March 2019 are considered to be opt-in regions, requiring a user to explicitly enable them in order for the region to be usable in the account. For opt-in regions, Identity and Access Management (IAM) resources such as users and roles are only propagated to the regions that are enabled. Each region is wholly contained within a single country and all of its data and services stay within the designated region. Each region has multiple "Availability Zones", which consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Availability Zones do not automatically provide additional scalability or redundancy within a region, since they are intentionally isolated from each other to prevent outages from spreading between zones. Several services can operate across Availability Zones (e.g., S3, DynamoDB) while others can be configured to replicate across zones to spread demand and avoid downtime from failures. Amazon Web Services operated an estimated 1.4 million servers across 11 regions and 28 availability zones. The global network of AWS Edge locations consists of over 300 points of presence worldwide, including locations in North America, Europe, Asia, Australia, Africa, and South America. AWS has announced the planned launch of six additional regions in Malaysia, Mexico, New Zealand, Thailand, Saudi Arabia, and the European Union. In mid March 2023, Amazon Web Services signed a cooperation agreement with the New Zealand Government to build large data centers in New Zealand. In 2014, AWS claimed its aim was to achieve 100% renewable energy usage in the future. In the United States, AWS's partnerships with renewable energy providers include Community Energy of Virginia, to support the US East region; Pattern Development, in January 2015, to construct and operate Amazon Wind Farm Fowler Ridge; Iberdrola Renewables, LLC, in July 2015, to construct and operate Amazon Wind Farm US East; EDP Renewables North America, in November 2015, to construct and operate Amazon Wind Farm US Central; and Tesla Motors, to apply battery storage technology to address power needs in the US West (Northern California) region. Pop-up lofts AWS also has "pop-up lofts" in different locations around the world. These market AWS to entrepreneurs and startups in different tech industries in a physical location. Visitors can work or relax inside the loft, or learn more about what they can do with AWS. In June 2014, AWS opened their first temporary pop-up loft in San Francisco. In May 2015 they expanded to New York City, and in September 2015 expanded to Berlin. AWS opened its fourth location, in Tel Aviv from March 1, 2016, to March 22, 2016. A pop-up loft was open in London from September 10 to October 29, 2015. The pop-up lofts in New York and San Francisco are indefinitely closed due to the COVID-19 pandemic while Tokyo has remained open in a limited capacity. Charitable work In 2017, AWS launched AWS re/Start in the United Kingdom to help young adults and military veterans retrain in technology-related skills. In partnership with the Prince's Trust and the Ministry of Defence (MoD), AWS will help to provide re-training opportunities for young people from disadvantaged backgrounds and former military personnel. AWS is working alongside a number of partner companies including Cloudreach, Sage Group, EDF Energy, and Tesco Bank. 
In April 2022, AWS announced it had committed more than $30 million over three years to early-stage start-ups led by Black, Latino, LGBTQIA+, and women founders as part of its AWS Impact Accelerator. The initiative offers qualifying start-ups up to $225,000 in cash and credits, along with extensive training, mentoring, and technical guidance, and includes up to $100,000 in AWS service credits. Reception Environmental impact In 2016, Greenpeace assessed major tech companies—including cloud services providers like AWS, Microsoft, Oracle, Google, IBM, Salesforce and Rackspace—based on their level of "clean energy" usage. Greenpeace evaluated companies on their mix of renewable-energy sources; transparency; renewable-energy commitment and policies; energy efficiency and greenhouse-gas mitigation; renewable-energy procurement; and advocacy. The group gave AWS an overall "C" grade. Greenpeace credited AWS for its advances toward greener computing in recent years and its plans to launch multiple wind and solar farms across the United States, but stated that Amazon was opaque about its carbon footprint. In January 2021, AWS joined an industry pledge to achieve climate neutrality of data centers by 2030, the Climate Neutral Data Centre Pact. As of 2023, Amazon as a whole is the largest corporate purchaser of renewable energy in the world, a position it has held since 2020, and has a global portfolio of over 20 GW of renewable energy capacity. In 2022, 90% of all Amazon operations, including data centers, were powered by renewables. Denaturalization protest The US Department of Homeland Security has employed the software ATLAS, which runs on the Amazon cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center. The algorithm and the criteria for the algorithm were secret. Amazon faced protests from its own employees and activists for the anti-migrant collaboration with authorities. Israeli–Palestinian conflict The contract for Project Nimbus drew rebuke and condemnation from the companies' shareholders as well as their employees, over concerns that the project would lead to abuses of Palestinians' human rights in the context of the ongoing occupation and the Israeli–Palestinian conflict. Specifically, they voiced concern over how the technology would enable further surveillance of Palestinians and unlawful data collection on them, as well as facilitate the expansion of Israel's illegal settlements on Palestinian land. A government procurement document featuring "obligatory customers" of Nimbus, including "two of Israel's leading state-owned weapons manufacturers", Israel Aerospace Industries and Rafael Advanced Defense Systems, was published in 2021, with periodic updates since (up to October 2023). Challenges As with other cloud computing solutions, applications hosted on Amazon Web Services (AWS) are subject to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment. Issues Some AWS customers have complained about receiving unexpectedly large bills, commonly referred to as "surprise bills." 
This can occur for various reasons, including but not limited to misconfigurations, security breaches, complex pricing—especially when multiple AWS services are used together—and unexpected data transfer charges; a common mitigation is to configure billing alerts, as sketched after the following section. Community-Driven AWS SDK Alternatives AWS-Lite is an open-source, lightweight, community-maintained alternative to the official AWS SDK for Node.js. It offers a reduced package size, which can lower memory usage and improve performance. However, AWS-Lite does not support the full range of AWS services and features available in the official SDK, limiting its applicability to targeted scenarios.
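As a sketch of such a billing guard, the following example creates a monthly AWS Budget that emails a subscriber when actual spend crosses 80% of a set limit, using the boto3 SDK. The budget name, limit, and address are hypothetical assumptions for illustration, and the caller needs IAM permission for the AWS Budgets API.

```python
# Minimal sketch: a monthly cost budget with an email alert, via boto3.
# Assumes configured credentials with permission to call AWS Budgets;
# the budget name, limit, and email address are hypothetical.
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]
# The Budgets API is a global service served from the us-east-1 endpoint.
budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "monthly-cost-guard",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Alert on actual (not forecast) spend above 80% of the limit.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "ops@example.com"}
            ],
        }
    ],
)
```

A budget does not cap spending by itself; it only raises an alert, so teams typically pair it with alarms or automated shutdown of non-critical resources.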
Technology
Cloud server
null
1692055
https://en.wikipedia.org/wiki/Oxygen%20difluoride
Oxygen difluoride
Oxygen difluoride is a chemical compound with the formula OF2. As predicted by VSEPR theory, the molecule adopts a bent molecular geometry. It is a strong oxidizer and has attracted attention in rocketry for this reason. With a boiling point of −144.75 °C, OF2 is the most volatile (isolable) triatomic compound. The compound is one of many known oxygen fluorides. Preparation Oxygen difluoride was first reported in 1929; it was obtained by the electrolysis of molten potassium fluoride and hydrofluoric acid containing small quantities of water. The modern preparation entails the reaction of fluorine with a dilute aqueous solution of sodium hydroxide, with sodium fluoride as a side-product: 2 F2 + 2 NaOH → OF2 + 2 NaF + H2O. Structure and bonding It is a covalently bonded molecule with a bent molecular geometry and an F–O–F bond angle of 103 degrees. Its powerful oxidizing properties are suggested by the oxidation number of +2 for the oxygen atom, instead of its normal −2. Reactions Above 200 °C, OF2 decomposes to oxygen and fluorine by a radical mechanism. OF2 reacts with many metals to yield oxides and fluorides. Nonmetals also react: phosphorus reacts with OF2 to form PF5 and POF3; sulfur gives SO2 and SF4; and, unusually for a noble gas, xenon reacts (at elevated temperatures), yielding XeF4 and xenon oxyfluorides. Oxygen difluoride reacts with water to form hydrofluoric acid: OF2 + H2O → O2 + 2 HF. It can oxidize sulfur dioxide to sulfur trioxide and elemental fluorine: OF2 + SO2 → SO3 + F2. However, in the presence of UV radiation, the products are sulfuryl fluoride (SO2F2) and pyrosulfuryl fluoride (S2O5F2). Safety Oxygen difluoride is considered an unsafe gas due to its oxidizing properties. It reacts explosively with water. Hydrofluoric acid produced by the hydrolysis of OF2 with water is highly corrosive and toxic, capable of causing necrosis, leaching calcium from the bones and causing cardiovascular damage, among a host of other highly toxic effects. Other acute poisoning effects include pulmonary edema, bleeding lungs, and headaches. Chronic exposure to oxygen difluoride, like that of other chemicals that release fluoride ions, can lead to fluorosis and other symptoms of chronic fluoride poisoning. Oxygen difluoride may be associated with kidney damage. The maximum workplace exposure limit is 0.05 ppm. Popular culture In Robert L. Forward's science fiction novel Camelot 30K, oxygen difluoride was used as a biochemical solvent by fictional life forms living in the solar system's Kuiper belt. While OF2 would be a solid at 30 K, the fictional alien lifeforms were described as endothermic, maintaining elevated body temperatures and liquid blood by radiothermal heating.
Physical sciences
Covalent oxides
Chemistry
1694357
https://en.wikipedia.org/wiki/Bikont
Bikont
A bikont ("two flagella") is any of the eukaryotic organisms classified in the group Bikonta. Many single-celled and multi-celled organisms are members of the group, and these, as well as the presumed ancestor, have two flagella. Enzymes Another shared trait of bikonts is the fusion of two genes into a single unit: the genes for thymidylate synthase (TS) and dihydrofolate reductase (DHFR) encode a single protein with two functions. The genes are separately translated in unikonts. Relationships Some research suggests that a unikont (a eukaryotic cell with a single flagellum) was the ancestor of opisthokonts (Animals, Fungi, and related forms) and Amoebozoa, and a bikont was the ancestor of Archaeplastida (Plants and relatives), Excavata, Rhizaria, and Chromalveolata. Cavalier-Smith has suggested that Apusozoa, which are typically considered incertae sedis, are in fact bikonts. Relationships within the bikonts are not yet clear. Cavalier-Smith has grouped the Excavata and Rhizaria into the Cabozoa and the Archaeplastida and Chromalveolata into the Corticata, but at least one other study has suggested that the Rhizaria and Chromalveolata form a clade. An alternative to the Unikont–Bikont division was suggested by Derelle et al. in 2015, where they proposed the acronyms Opimoda–Diphoda respectively, as substitutes to the older terms. The name Diphoda is formed from the letters of DIscoba and diaPHOretickes (shown in capitals). Cladogram A "classical" cladogram (data from 2012, 2015) is: However, a cladogram (data from 2015, 2016) with the root in Excavata is The corticates correspond roughly to the bikonts. While Haptophyta, Cryptophyta, Glaucophyta, Rhodophyta, the SAR supergroup and viridiplantae are usually considered monophyletic, Archaeplastida may be paraphyletic, and the mutual relationships between these phyla are still to be fully resolved. Recent reconstructions placed Archaeplastida and Hacrobia together in an "HA supergroup" or "AH supergroup", which was a sister clade to the SAR supergroup within the SAR/HA supergroup. However, this seems to have fallen out of favor as the monophyly of hacrobia has come under dispute.
Biology and health sciences
Bikonts
Plants
17917891
https://en.wikipedia.org/wiki/Durian
Durian
The durian is the edible fruit of several tree species belonging to the genus Durio. There are 30 recognized species, at least nine of which produce edible fruit. Durio zibethinus, native to Borneo and Sumatra, is the only species available on the international market. It has over 300 named varieties in Thailand and over 200 in Malaysia as of 2021. Other species are sold in their local regions. Known in some regions as the "king of fruits", the durian is distinctive for its large size, strong odour, and thorn-covered rind. The fruit can grow as large as long and in diameter, and it typically weighs 1 to 3 kilograms (2–7 lb). Its shape ranges from oblong to round, the colour of its husk from green to brown, and its flesh from pale yellow to red, depending on the species. Some people regard the durian as having a pleasantly sweet fragrance, whereas others find the aroma overpowering and unpleasant. The smell evokes reactions ranging from deep appreciation to intense disgust. The persistence of its strong odour, which may linger for several days, has led some hotels and public transportation services in Southeast Asia, such as in Singapore and Bangkok, to ban the fruit. The flesh can be consumed at various stages of ripeness, and it is used to flavour a wide variety of sweet desserts and savoury dishes in Southeast Asian cuisines. The seeds can be eaten when cooked. Etymology The name 'durian' is derived from the Malay word duri ('thorn'), a reference to the numerous prickly thorns on its rind, combined with the noun-building suffix -an. According to the Oxford English Dictionary, the alternate spelling durion was first used in a 1588 translation of The History of the Great and Mighty Kingdom of China and the Situation Thereof by the Spanish explorer Juan González de Mendoza. Other historical variant spellings include duryoen, duroyen, durean, and dorian. The name of the type species, D. zibethinus, is derived from the Italian zibetto (the civet). Description Durian trees are large, growing to in height depending on the species. The leaves are evergreen, elliptic to oblong and long. The flowers are produced in three to thirty clusters together on large branches and directly on the trunk, with each flower having a calyx (sepals) and five (rarely four or six) petals. Durian trees have one or two flowering and fruiting periods per year, although the timing varies depending on the species, cultivars, and localities. A typical durian tree can bear fruit after four or five years. The durian fruit can hang from any branch, and matures roughly three months after pollination. The fruit can grow up to long and in diameter, and typically weighs 1 to 3 kilograms (2–7 lb). Its shape ranges from oblong to round, the colour of its husk from green to brown, and its flesh from pale yellow to red, depending on the species. Among the thirty known species of Durio, nine produce edible fruits: D. zibethinus, D. dulcis, D. grandiflorus, D. graveolens, D. kutejensis, D. lowianus, D. macrantha, D. oxleyanus and D. testudinarius. D. zibethinus is the only species commercially cultivated on a large scale and available outside its native region. Since this species is open-pollinated, it shows considerable diversity in fruit colour and odour, size of flesh and seed, and tree phenology. In the species name, zibethinus refers to the Indian civet, Viverra zibetha. There is disagreement over whether this name, bestowed by Linnaeus, alludes to civets being so fond of the durian that the fruit was used as bait to entrap them, or to the durian's smelling like the civet. 
Durian flowers are large and feathery, with copious nectar, and give off a heavy, sour, buttery odour. These features are typical of flowers pollinated by certain species of bats that eat nectar and pollen. Durians can be pollinated by bats (the cave nectar bat Eonycteris spelaea, the lesser short-nosed fruit bat Cynopterus brachyotis, and the large flying fox, Pteropus vampyrus). Two species, D. grandiflorus and D. oblongus, are pollinated by spiderhunter birds (Nectariniidae), while D. kutejensis is pollinated by giant honey bees and birds as well as by bats. Some scientists have hypothesised that the development of monothecate anthers and larger flowers (compared with those of the remaining genera in Durioneae) in the clade consisting of Durio, Boschia, and Cullenia was in conjunction with a transition from beetle pollination to vertebrate pollination. Cultivars Over the centuries, numerous durian cultivars, propagated by vegetative clones, have arisen in Southeast Asia. They used to be grown, with mixed results, from seeds of trees bearing superior quality fruit. They are now propagated by layering, marcotting, or more commonly, grafting, including bud, veneer, wedge, whip and U-grafting, onto seedlings of randomly selected rootstocks. Different cultivars may be distinguished to some extent by variations in the fruit shape, such as the shape of the spines. Malaysian varieties The Malaysian Ministry of Agriculture and Agro-Based Industry has since 1934 maintained a list of registered varieties, where each cultivar is assigned a common name and a code number starting with "D". These codes are widely used through Southeast Asia; as of 2021, there were over 200 registered varieties. Many superior cultivars have been identified through competitions held at the annual Malaysian Agriculture, Horticulture, and Agrotourism Show. There are 13 common Malaysian varieties having favourable qualities of colour, texture, odour, taste, high yield, and resistance against various diseases. Musang King (D197) was discovered in the 1980s, when a man named Tan Lai Fook from Raub, Pahang, stumbled upon a durian tree in Gua Musang, Kelantan. He brought a branch back to Raub for grafting. The cultivar was named after its place of origin. The variety has bright yellow flesh and is like a more potent or enhanced version of the D24. The D24 or Sultan durian has golden yellow flesh and a rich texture and aroma. It is a popular variety in Malaysia. Other popular cultivars in Malaysia include "Tekka", with a distinctive yellowish core in the inner stem; "D168" (IOI), which is round, of medium size, green and yellow outer skin, and easily dislodged flesh which is medium-thick, solid, yellow in colour, and sweet; and "Red Prawn" (Udang Merah, D175), found in the states of Pahang and Johor. The fruit is medium-sized, oval, brownish green, with short thorns. The flesh is thick, not solid, yellow-coloured, and has a sweet taste. Indonesian varieties Indonesia has more than 100 varieties of durian. The most cultivated species is D. zibethinus. Notable varieties are Sukun (Central Java), Sitokong (Betawi), Sijapang (Betawi), Simas (Bogor), Sunan (Jepara), Si dodol and Si hijau (South Kalimantan), and Petruk (Central Java). Thai varieties In Thailand, Mon Thong is the most commercially sought after cultivar, for its thick, full-bodied creamy and mild sweet-tasting flesh with moderate smell and smaller seeds, while Chanee is most resistant to infection by Phytophthora palmivora. 
Kan Yao is less common, but prized for its longer window of time when it is both sweet and odourless. Among the cultivars in Thailand, five are currently in large-scale commercial cultivation: Chanee, Mon Thong, Kan Yao, Ruang, and Kradum. By 2007, Thai government scientist Songpol Somsri had crossbred more than ninety varieties of durian to create Chantaburi No. 1, a cultivar without the characteristic odour. Another hybrid, Chantaburi No. 3, develops the odour about three days after the fruit is picked, which enables odourless transport yet satisfies consumers who prefer the pungent odour. In 2012, two odourless cultivars, Long Laplae and Lin Laplae, were presented to the public by Yothin Samutkhiri, governor of Thailand's Uttaradit province, where they were developed. Cultivation and trade In 2018, Thailand was ranked the world's number one exporter of durian, producing around 700,000 tonnes of durian per year, 400,000 tonnes of which are exported to mainland China and Hong Kong. Chantaburi in Thailand holds the World Durian Festival in early May each year. This single province is responsible for half of the durian production of Thailand. The Davao Region is the top producer of the fruit in the Philippines, producing 60% of the country's total. In Brunei, consumers prefer D. graveolens, D. kutejensis, and D. oxleyanus. These species constitute a genetically diverse crop source. Durian was introduced into Australia in the early 1960s, and clonal material followed in 1975. Over thirty clones of D. zibethinus and six other Durio species have been subsequently introduced into Australia. In 2019, fresh durian overtook cherries as the most valuable fresh fruit imported to China. In 2021, China purchased at least US$3.4 billion worth, or 90 percent, of Thailand's fresh durian exports that year. Overall Chinese imports grew to $4 billion in 2022, when the Philippines and Vietnam gained permission to export fresh durians to China, and $6.7 billion in 2023, when 1.4 million tonnes were imported. Durian has become a status symbol indicating wealth. Durian from Thailand retails at around ¥150 (US$20), while the more prestigious Musang King variety retails at around ¥500 and can be a birthday or wedding gift. The potential value for exporters has allowed China to leverage durian as part of trade talks. The entire export of durians from Southeast Asia to China increased from US$550 million in 2017 to US$6.7 billion in 2023. China's largest imports of the fruit came from Thailand, followed by Malaysia and Vietnam. Durian is a relatively costly fruit because of its short shelf life. Shelf life can be extended to around 4 to 5 weeks by shrink-wrapping each fruit. This inhibits dehiscence, probably by multiple mechanisms: inhibiting respiration; reducing loss of water; holding the fruit's parts together; and reducing decomposition by microbes. The edible portion of the fruit, known as the aril and usually called the 'flesh' or 'pulp', accounts for only about 15–30% of the mass of the entire fruit. Flavour and odour History The strong flavour and odour of the fruit have prompted views ranging from appreciation to disgust. Writing in 1856, the British naturalist Alfred Russel Wallace called the fruit's consistency and flavour "indescribable. A rich custard highly flavoured with almonds gives the best general idea of it, but there are occasional wafts of flavour that call to mind cream-cheese, onion-sauce, sherry-wine, and other incongruous dishes. 
Then there is a rich glutinous smoothness in the pulp which nothing else possesses, but which adds to its delicacy." He concluded that it provided a "new sensation worth a voyage to the East to experience. ... as producing a food of the most exquisite flavour it is unsurpassed." Wallace described himself as being at first reluctant to try it because of the aroma, but on eating one in Borneo "out of doors, I at once became a confirmed Durian eater". He cites another writer as stating: "To those not used to it, it seems at first to smell like rotten onions, but immediately after they have tasted it they prefer it to all other food. The natives give it honourable titles, exalt it, and make verses on it." The novelist Anthony Burgess wrote that eating durian is "like eating sweet raspberry blancmange in the lavatory". The travel and food writer Richard Sterling states that "its odor is best described as pig-excrement, turpentine and onions, garnished with a gym sock." Other comparisons have been made with the civet, sewage, stale vomit, skunk spray and used surgical swabs. Such descriptions may reflect the odour's variability. Different species and cultivars vary markedly in aroma; for example, red durian (D. dulcis) has a deep caramel flavour with a turpentine odour, while red-fleshed durian (D. graveolens) emits a fragrance of roasted almonds. The fruit's strong smell has led to its ban from public transport systems in Singapore and in Bangkok. Biochemical basis A draft genome analysis of durian indicates it has about 46,000 coding and non-coding genes, among which a class called methionine gamma-lyases – which regulate the odour of organosulfur compounds – may be primarily responsible for the distinct odour. Hundreds of phytochemicals responsible for durian flavour and aroma include diverse volatile compounds, such as esters, ketones, alcohols (primarily ethanol), and organosulfur compounds with various thiols. Ethyl 2-methylbutanoate had the highest content among esters in a study of several varieties. Sugar content, primarily sucrose, ranges between 8% and 20% among different durian varieties. Durian flesh contains diverse polyphenols, especially myricetin, and various carotenoids, including a rich content of beta-carotene. In 2019, ethanethiol and its derivatives were identified as a source of the fetid smell. However, the biochemical pathway by which the plant produces ethanethiol remained unclear. People in Southeast Asia with frequent exposure to durian can easily distinguish the sweet scent of its ketones and esters from rotten or putrid odours, which come from volatile amines and fatty acids. Some individuals are unable to differentiate these smells and find this fruit noxious, whereas others find it pleasant and appealing. This strong odour can be detected half a mile away by animals, thus luring them. In addition, the fruit is highly appetising to diverse animals, including squirrels, mouse deer, pigs, sun bears, orangutans, elephants, and even carnivorous tigers. Some of these animals swallow the seeds with the fruit and transport them some distance before excreting them, thereby dispersing the seeds. The thorny, armoured covering of the fruit discourages smaller animals; larger animals are more likely to transport the seeds far from the parent tree. Ripeness and selection According to Larousse Gastronomique, the durian fruit is ready to eat when its husk begins to crack. 
However, the ideal stage of ripeness to be enjoyed varies from region to region in Southeast Asia and by species. Some species grow so tall that they can only be collected once they have fallen to the ground, whereas most cultivars of D. zibethinus are nearly always cut from the tree and allowed to ripen while waiting to be sold. Some people in southern Thailand prefer their durians relatively young, when the clusters of fruit within the shell are still crisp in texture and mild in flavour. For some people in northern Thailand, the preference is for the fruit to be soft and aromatic. In Malaysia and Singapore, most consumers prefer the fruit to be as ripe and pungent in aroma as possible and may even risk allowing the fruit to continue ripening after its husk has already cracked open. In this state, the flesh becomes richly creamy and slightly alcoholic. The various preferences regarding ripeness among consumers make it hard to issue general statements about choosing a "good" durian. A durian that falls off the tree continues to ripen for two to four days, but after five or six days most would consider it overripe and unpalatable. All the same, some Thais cook such overripe fruit with palm sugar, creating a dessert called durian (or thurian) guan. Uses Culinary In Thailand, durian is eaten fresh with sweet sticky rice, and blocks of durian paste are sold in the markets, though much of the paste is adulterated with pumpkin. Unripe durians are cooked as a vegetable, except in the Philippines, where all uses are sweet rather than savoury. Malaysians make both sugared and salted preserves from durian. When durian is minced with salt, onions and vinegar, it is called boder. In Kelantan of Malaysia, fresh durian or tempoyak is mixed with onion and chilli slices, lime juice and budu (fermented anchovy sauce) and eaten as a condiment with rice-based meals. The seeds, which are the size of chestnuts, can be eaten boiled, roasted or fried in coconut oil, with a texture that is similar to taro or yam, but stickier. In Java, the seeds are sliced thin and cooked with sugar as a confection. Uncooked seeds are potentially toxic due to cyclopropene fatty acids. Durian fruit is used to flavour a wide variety of sweet edibles such as traditional Malay candy, ice kacang, dodol, lempuk, rose biscuits, ice cream, milkshakes, mooncakes, Yule logs, and cappuccino. Es durian (durian ice cream) is a popular dessert in Indonesia, sold at streetside stalls in Indonesian cities, especially in Java. Pulut durian or ketan durian is glutinous rice steamed with coconut milk and served with ripened durian. In Sabah, red durian is fried with onions and chilli and served as a side dish. Red-fleshed durian is traditionally added to sayur, an Indonesian soup made from freshwater fish. Ikan brengkes tempoyak is fish cooked in a durian-based sauce, traditional in Sumatra. Nutrition Raw durian is composed of 65% water, 27% carbohydrates (including 4% dietary fibre), 5% fat and 1% protein. In 100 grams, raw or fresh frozen durian provides 33% of the Daily Value (DV) of thiamine and a moderate content of other B vitamins, vitamin C, and the dietary mineral manganese (15–24% DV, table). Different durian varieties from Malaysia, Thailand and Indonesia vary in their carbohydrate content from 16 to 29%, fat content from 2–5%, protein content from 2–4%, and dietary fibre content from 1–4%, and in caloric value from 84 to 185 kcal per 100 grams. The fatty acids in durian flesh are particularly rich in oleic acid and palmitic acid. 
Origin and history The origin of the durian is thought to be in the region of Borneo and Sumatra, with wild trees in the Malay Peninsula and orchards commonly cultivated in a wide region from India to New Guinea. Four hundred years ago, it was traded across present-day Myanmar and was actively cultivated especially in Thailand and South Vietnam. The earliest known European reference to the durian is the record of Niccolò de' Conti, who travelled to Southeast Asia in the 15th century. Translated from the Latin in which Poggio Bracciolini recorded de Conti's travels: "They [people of Sumatra] have a green fruit which they call durian, as big as a watermelon. Inside there are five things like elongated oranges, and resembling thick butter, with a combination of flavours." The Portuguese physician Garcia de Orta described durians in Colóquios dos simples e drogas da India published in 1563. In 1741, Herbarium Amboinense by the German botanist Georg Eberhard Rumphius was published, providing the most detailed and accurate account of durians for over a century. The genus Durio has a complex taxonomy that has seen the subtraction and addition of many species since it was created by Rumphius. During the early stages of its taxonomical study, there was some confusion between durian and the soursop (Annona muricata), for both of these species had thorny green fruit. The Malay name for the soursop is durian Belanda, meaning Dutch durian. In the 18th century, Johann Anton Weinmann considered the durian to belong to Castaneae as its fruit was similar to the horse chestnut. D. zibethinus was introduced into Ceylon by the Portuguese in the 16th century and was reintroduced many times later. It has been planted in the Americas but confined to botanical gardens. The first seedlings were sent from the Royal Botanic Gardens, Kew, to Auguste Saint-Arroman of Dominica in 1884. In Southeast Asia, the durian has been cultivated for centuries at the village level, probably since the late 18th century, and commercially since the mid-20th century. In My Tropic Isle, Australian author and naturalist Edmund James Banfield tells how, in the early 20th century, a friend in Singapore sent him a durian seed, which he planted and cared for on his tropical island off the north coast of Queensland. In 1949, the British botanist E. J. H. Corner published The Durian Theory, or the Origin of the Modern Tree. This proposed that endozoochory (the enticement of animals to transport seeds in their stomach) arose before any other method of seed dispersal and that primitive ancestors of Durio species were the earliest practitioners of that dispersal method, in particular red durian (D. dulcis) exemplifying the primitive fruit of flowering plants. However, in more recent circumscriptions of Durioneae, the tribe into which Durio and its sister taxa fall, fleshy arils and spiny fruits are derived within the clade. Some genera possess these characters, but others do not. The most recent molecular evidence (on which the most recent, well-supported circumscription of Durioneae is based) therefore refutes Corner's Durian Theory. Since the early 1990s, the domestic and international demand for durian in the Association of Southeast Asian Nations (ASEAN) region has increased significantly. In the early 2020s, a durian craze in China led to a large increase in international trade of the fruit. Culture and folk medicine Cultural influences A common local belief is that the durian is harmful when eaten with coffee or alcoholic beverages. 
The latter belief can be traced back at least to the 18th century when Rumphius stated that one should not drink alcohol after eating durians as it will cause indigestion and bad breath. In 1929, J. D. Gimlette wrote in his Malay Poisons and Charm Cures that the durian fruit must not be eaten with brandy. In 1981, J. R. Croft wrote in his Bombacaceae: In Handbooks of the Flora of Papua New Guinea that "a feeling of morbidity" often follows the consumption of alcohol too soon after eating durian. Several medical investigations on the validity of this belief have been conducted with varying conclusions, though a study by the University of Tsukuba finds the fruit's high sulphur content inhibits the activity of aldehyde dehydrogenase, causing a 70 percent reduction of the ability to clear certain toxins such as alcohol from the body. In its native Southeast Asia, the durian is an everyday food and portrayed in the local media in accordance with the cultural perception it has in the region. The durian symbolised the subjective nature of ugliness and beauty in Hong Kong director Fruit Chan's 2000 film Durian Durian (榴槤飄飄, lau lin piu piu), and was a nickname for the reckless but lovable protagonist of the eponymous Singaporean TV comedy Durian King played by Adrian Pang. Likewise, the oddly shaped Esplanade building in Singapore (Theatres on the Bay) is often called "The Durian" by locals, and "The Big Durian" is the nickname of Jakarta, Indonesia. A saying in Malay and Indonesian, mendapat durian runtuh, "getting a fallen durian", is the equivalent of the English phrase 'windfall gain'. Nevertheless, trees bearing mature durians are dangerous because the fruit is heavy, armed with sharp thorns, and can fall from a significant height. Hardhats are worn when collecting the fruit. A common saying is that a durian has eyes, and can see where it is falling, because the fruit supposedly never falls during daylight hours when people may be hurt. In Malaysia, a spineless durian clone D172 was registered by the Agriculture Department in 1989. It was called "Durian Botak" ('Bald Durian'). Sumatran elephants and tigers sometimes eat durians. Being a fruit much loved by a variety of animals, the durian is sometimes taken to signify the animalistic aspect of humans, as in the legend of Orang Mawas, the Malaysian version of Bigfoot, and Orang Pendek, its Sumatran version, both of which have been claimed to feast on durians. Folk medicine In Malaysia, a decoction of the leaves and roots used to be prescribed as an antipyretic. The leaf juice is applied on the head of a fever patient. The most complete description of the medicinal use of the durian as remedies for fevers is a Malay prescription, collected by Burkill and Haniff in 1930. It instructs the reader to boil the roots of Hibiscus rosa-sinensis with the roots of Durio zibethinus, Nephelium longana, Nephelium mutabile and Artocarpus integrifolius, and drink the decoction or use it as a poultice. Southeast Asian traditional beliefs, as well as traditional Chinese food therapy, consider the durian fruit to have warming properties liable to cause excessive sweating. The traditional method to counteract this is to pour water into the empty shell of the fruit after the pulp has been consumed and drink it. An alternative method is to eat the durian in accompaniment with mangosteen, which is considered to have cooling properties. Pregnant women or people with high blood pressure are traditionally advised not to consume durian. 
The Javanese believe durian to have aphrodisiac qualities, and impose a set of rules on what may or may not be consumed with it or shortly thereafter. A saying in Indonesian, durian jatuh sarung naik, meaning "the durian falls and the sarong comes up", refers to this belief. The warnings against the supposed lecherous quality of this fruit soon spread to the West – the Swedenborgian philosopher Herman Vetterling commented on the so-called "erotic properties" of the durian in the early 20th century. Environmental impact The high demand for durians in China has prompted a shift in Malaysia from small-scale durian orchards to large-scale industrial operations. Forests are cleared to make way for large durian plantations, compounding an existing deforestation problem caused by the cultivation of oil palms. Animal species such as the small flying fox, which pollinates durian trees, and the Malayan tiger are endangered by the increasing deforestation of their habitats. In the Gua Musang District, the state government approved the conversion of a large area of forest, including indigenous lands of the Orang Asli, to durian plantations. The prevalence of the Musang King and Monthong varieties in Malaysia and Thailand, respectively, has led to concerns about a decrease in the durian's genetic diversity as other varieties are displaced. A 2022 study of durian species in Kalimantan, Indonesia, found low genetic diversity, suggestive of inbreeding depression and genetic drift. Additionally, these dominant hybrid varieties are more susceptible to pests and fungal diseases, requiring the use of insecticides and fungicides that can weaken the trees.
Biology and health sciences
Others
null
18973446
https://en.wikipedia.org/wiki/Geometry
Geometry
Geometry is a branch of mathematics concerned with properties of space such as the distance, shape, size, and relative position of figures. Geometry is, along with arithmetic, one of the oldest branches of mathematics. A mathematician who works in the field of geometry is called a geometer. Until the 19th century, geometry was almost exclusively devoted to Euclidean geometry, which includes the notions of point, line, plane, distance, angle, surface, and curve as fundamental concepts. Originally developed to model the physical world, geometry has applications in almost all sciences, and also in art, architecture, and other activities that are related to graphics. Geometry also has applications in areas of mathematics that are apparently unrelated. For example, methods of algebraic geometry are fundamental in Wiles's proof of Fermat's Last Theorem, a problem that was stated in terms of elementary arithmetic and remained unsolved for several centuries. During the 19th century several discoveries dramatically enlarged the scope of geometry. One of the oldest such discoveries is Carl Friedrich Gauss's Theorema Egregium ("remarkable theorem"), which asserts roughly that the Gaussian curvature of a surface is independent of any specific embedding in a Euclidean space. This implies that surfaces can be studied intrinsically, that is, as stand-alone spaces, and has been expanded into the theory of manifolds and Riemannian geometry. Later in the 19th century, it appeared that geometries without the parallel postulate (non-Euclidean geometries) could be developed without introducing any contradiction. The geometry that underlies general relativity is a famous application of non-Euclidean geometry. Since the late 19th century, the scope of geometry has been greatly expanded, and the field has been split into many subfields that depend on the underlying methods—differential geometry, algebraic geometry, computational geometry, algebraic topology, discrete geometry (also known as combinatorial geometry), etc.—or on the properties of Euclidean spaces that are disregarded—projective geometry, which considers only the alignment of points but not distance and parallelism; affine geometry, which omits the concept of angle and distance; finite geometry, which omits continuity; and others. This enlargement of the scope of geometry led to a change of meaning of the word "space", which originally referred to the three-dimensional space of the physical world and its model provided by Euclidean geometry; presently a geometric space, or simply a space, is a mathematical structure on which some geometry is defined. History The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt in the 2nd millennium BC. Early geometry was a collection of empirically discovered principles concerning lengths, angles, areas, and volumes, which were developed to meet some practical need in surveying, construction, astronomy, and various crafts. The earliest known texts on geometry are the Egyptian Rhind Papyrus (2000–1800 BC) and Moscow Papyrus (c. 1890 BC), and the Babylonian clay tablets, such as Plimpton 322 (1900 BC). For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, or frustum. Later clay tablets (350–50 BC) demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space. These geometric procedures anticipated the Oxford Calculators, including the mean speed theorem, by 14 centuries.
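The mean speed theorem mentioned above has a compact modern statement (the notation here is ours, not drawn from the Babylonian sources): a body accelerating uniformly from velocity v0 to v1 over a time t covers the distance

s = \frac{v_0 + v_1}{2}\, t,

which is geometrically the area of a trapezoid with parallel sides v_0 and v_1 and width t — exactly the figure the Babylonian procedures computed in time-velocity space.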
South of Egypt the ancient Nubians established a system of geometry including early versions of sun clocks. In the 7th century BC, the Greek mathematician Thales of Miletus used geometry to solve problems such as calculating the height of pyramids and the distance of ships from the shore. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales's theorem. Pythagoras established the Pythagorean School, which is credited with the first proof of the Pythagorean theorem, though the statement of the theorem has a long history. Eudoxus (c. 408–355 BC) developed the method of exhaustion, which allowed the calculation of areas and volumes of curvilinear figures, as well as a theory of ratios that avoided the problem of incommensurable magnitudes, which enabled subsequent geometers to make significant advances. Around 300 BC, geometry was revolutionized by Euclid, whose Elements, widely considered the most successful and influential textbook of all time, introduced mathematical rigor through the axiomatic method and is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of the Elements were already known, Euclid arranged them into a single, coherent logical framework. The Elements was known to all educated people in the West until the middle of the 20th century, and its contents are still taught in geometry classes today. Archimedes (c. 287–212 BC) of Syracuse, Italy used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave remarkably accurate approximations of pi. He also studied the spiral bearing his name and obtained formulas for the volumes of surfaces of revolution. Indian mathematicians also made many important contributions in geometry. The Shatapatha Brahmana (3rd century BC) contains rules for ritual geometric constructions that are similar to the Sulba Sutras. According to one historian of mathematics, the Śulba Sūtras contain "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians." They contain lists of Pythagorean triples, which are particular cases of Diophantine equations. In the Bakhshali manuscript, there are a handful of geometric problems (including problems about volumes of irregular solids). The Bakhshali manuscript also "employs a decimal place value system with a dot for zero." Aryabhata's Aryabhatiya (499) includes the computation of areas and volumes. Brahmagupta wrote his astronomical work, the Brāhma Sphuṭa Siddhānta, in 628. Chapter 12, containing 66 Sanskrit verses, was divided into two sections: "basic operations" (including cube roots, fractions, ratio and proportion, and barter) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral. Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalization of Heron's formula), as well as a complete description of rational triangles (i.e. triangles with rational sides and rational areas). In the Middle Ages, mathematics in medieval Islam contributed to the development of geometry, especially algebraic geometry. Al-Mahani (b. 853) conceived the idea of reducing geometrical problems such as duplicating the cube to problems in algebra.
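In modern notation (not Al-Mahani's own), the reduction he envisaged is easy to state: duplicating a cube of side a asks for the side x of a cube of twice the volume, that is, for a solution of

x^3 = 2a^3, \qquad x = a\sqrt[3]{2},

a purely algebraic equation. It was shown only in the 19th century that this solution cannot be constructed with compass and straightedge.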
Thābit ibn Qurra (known as Thebit in Latin) (836–901) dealt with arithmetic operations applied to ratios of geometrical quantities, and contributed to the development of analytic geometry. Omar Khayyam (1048–1131) found geometric solutions to cubic equations. The theorems of Ibn al-Haytham (Alhazen), Omar Khayyam and Nasir al-Din al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were part of a line of research on the parallel postulate continued by later European geometers, including Vitello, Gersonides (1288–1344), Alfonso, John Wallis, and Giovanni Girolamo Saccheri, that by the 19th century led to the discovery of hyperbolic geometry. In the early 17th century, there were two important developments in geometry. The first was the creation of analytic geometry, or geometry with coordinates and equations, by René Descartes (1596–1650) and Pierre de Fermat (1601–1665). This was a necessary precursor to the development of calculus and a precise quantitative science of physics. The second geometric development of this period was the systematic study of projective geometry by Girard Desargues (1591–1661). Projective geometry studies properties of shapes which are unchanged under projections and sections, especially as they relate to artistic perspective. Two developments in geometry in the 19th century changed the way it had been studied previously. These were the discovery of non-Euclidean geometries by Nikolai Ivanovich Lobachevsky, János Bolyai and Carl Friedrich Gauss, and the formulation of symmetry as the central consideration in the Erlangen programme of Felix Klein (which generalized the Euclidean and non-Euclidean geometries). Two of the master geometers of the time were Bernhard Riemann (1826–1866), working primarily with tools from mathematical analysis and introducing the Riemann surface, and Henri Poincaré, the founder of algebraic topology and the geometric theory of dynamical systems. As a consequence of these major changes in the conception of geometry, the concept of "space" became something rich and varied, and the natural background for theories as different as complex analysis and classical mechanics. Main concepts The following are some of the most important concepts in geometry. Axioms Euclid took an abstract approach to geometry in his Elements, one of the most influential books ever written. Euclid introduced certain axioms, or postulates, expressing primary or self-evident properties of points, lines, and planes. He proceeded to rigorously deduce other properties by mathematical reasoning. The characteristic feature of Euclid's approach to geometry was its rigor, and it has come to be known as axiomatic or synthetic geometry. At the start of the 19th century, the discovery of non-Euclidean geometries by Nikolai Ivanovich Lobachevsky (1792–1856), János Bolyai (1802–1860), Carl Friedrich Gauss (1777–1855) and others led to a revival of interest in this discipline, and in the 20th century, David Hilbert (1862–1943) employed axiomatic reasoning in an attempt to provide a modern foundation of geometry. Spaces and subspaces Points Points are generally considered fundamental objects for building geometry. They may be defined by the properties that they must have, as in Euclid's definition as "that which has no part", or in synthetic geometry. In modern mathematics, they are generally defined as elements of a set called space, which is itself axiomatically defined.
With these modern definitions, every geometric shape is defined as a set of points; this is not the case in synthetic geometry, where a line is another fundamental object that is not viewed as the set of the points through which it passes. However, there are modern geometries in which points are not primitive objects, or which have no points at all. One of the oldest such geometries is Whitehead's point-free geometry, formulated by Alfred North Whitehead in 1919–1920. Lines Euclid described a line as "breadthless length" which "lies equally with respect to the points on itself". In modern mathematics, given the multitude of geometries, the concept of a line is closely tied to the way the geometry is described. For instance, in analytic geometry, a line in the plane is often defined as the set of points whose coordinates satisfy a given linear equation, but in a more abstract setting, such as incidence geometry, a line may be an independent object, distinct from the set of points which lie on it. In differential geometry, a geodesic is a generalization of the notion of a line to curved spaces. Planes In Euclidean geometry a plane is a flat, two-dimensional surface that extends infinitely; the definitions for other types of geometries are generalizations of that. Planes are used in many areas of geometry. For instance, a plane can be studied as a topological surface without reference to distances or angles; as an affine space, where collinearity and ratios can be studied but not distances; as the complex plane using techniques of complex analysis; and so on. Curves A curve is a 1-dimensional object that may be straight (like a line) or not; curves in 2-dimensional space are called plane curves and those in 3-dimensional space are called space curves. In topology, a curve is defined by a function from an interval of the real numbers to another space. In differential geometry, the same definition is used, but the defining function is required to be differentiable. Algebraic geometry studies algebraic curves, which are defined as algebraic varieties of dimension one. Surfaces A surface is a two-dimensional object, such as a sphere or paraboloid. In differential geometry and topology, surfaces are described by two-dimensional 'patches' (or neighborhoods) that are assembled by diffeomorphisms or homeomorphisms, respectively. In algebraic geometry, surfaces are described by polynomial equations. Solids A solid is a three-dimensional object bounded by a closed surface; for example, a ball is the volume bounded by a sphere. Manifolds A manifold is a generalization of the concepts of curve and surface. In topology, a manifold is a topological space where every point has a neighborhood that is homeomorphic to Euclidean space. In differential geometry, a differentiable manifold is a space where each neighborhood is diffeomorphic to Euclidean space. Manifolds are used extensively in physics, including in general relativity and string theory. Angles Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other, and do not lie straight with respect to each other. In modern terms, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. The size of an angle is formalized as an angular measure. In Euclidean geometry, angles are used to study polygons and triangles, as well as forming an object of study in their own right.
The study of the angles of a triangle or of angles in a unit circle forms the basis of trigonometry. In differential geometry and calculus, the angles between plane curves or space curves or surfaces can be calculated using the derivative. Measures: length, area, and volume Length, area, and volume describe the size or extent of an object in one, two, and three dimensions respectively. In Euclidean geometry and analytic geometry, the length of a line segment can often be calculated by the Pythagorean theorem. Area and volume can be defined as fundamental quantities separate from length, or they can be described and calculated in terms of lengths in a plane or 3-dimensional space. Mathematicians have found many explicit formulas for area and formulas for volume of various geometric objects. In calculus, area and volume can be defined in terms of integrals, such as the Riemann integral or the Lebesgue integral. Other geometrical measures include curvature and compactness. Metrics and measures The concept of length or distance can be generalized, leading to the idea of metrics. For instance, the Euclidean metric measures the distance between points in the Euclidean plane, while the hyperbolic metric measures the distance in the hyperbolic plane. Other important examples of metrics include the Lorentz metric of special relativity and the semi-Riemannian metrics of general relativity. In a different direction, the concepts of length, area and volume are extended by measure theory, which studies methods of assigning a size or measure to sets, where the measures follow rules similar to those of classical area and volume. Congruence and similarity Congruence and similarity are concepts that describe when two shapes have similar characteristics. In Euclidean geometry, similarity is used to describe objects that have the same shape, while congruence is used to describe objects that are the same in both size and shape. Hilbert, in his work on creating a more rigorous foundation for geometry, treated congruence as an undefined term whose properties are defined by axioms. Congruence and similarity are generalized in transformation geometry, which studies the properties of geometric objects that are preserved by different kinds of transformations. Compass and straightedge constructions Classical geometers paid special attention to constructing geometric objects that had been described in some other way. Classically, the only instruments used in most geometric constructions are the compass and straightedge. Also, every construction had to be complete in a finite number of steps. However, some problems turned out to be difficult or impossible to solve by these means alone, and ingenious constructions using neusis, parabolas and other curves, or mechanical devices, were found. Rotation and orientation The geometrical concepts of rotation and orientation define part of the placement of objects embedded in the plane or in space. Dimension Traditional geometry allowed dimensions 1 (a line or curve), 2 (a plane or surface), and 3 (our ambient world conceived of as three-dimensional space). Furthermore, mathematicians and physicists have used higher dimensions for nearly two centuries. One example of a mathematical use for higher dimensions is the configuration space of a physical system, which has a dimension equal to the system's degrees of freedom. For instance, the configuration of a screw can be described by five coordinates.
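Two ideas from this passage — the Pythagorean computation of length and the use of coordinate spaces of arbitrary dimension — can be illustrated in a few lines of code. The sketch below uses Python purely for illustration (nothing in the article prescribes a language, and the function name is ours):

```python
import math

def euclidean_distance(p, q):
    """Distance between two points given as equal-length coordinate sequences.

    This generalizes the Pythagorean theorem: the squared distance is the
    sum of the squared coordinate differences, in any number of dimensions.
    """
    if len(p) != len(q):
        raise ValueError("points must have the same dimension")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# A plane (2D) example, and a point in a 5-dimensional space such as the
# screw configuration space mentioned above (coordinates invented here):
print(euclidean_distance((0, 0), (3, 4)))                    # 5.0
print(euclidean_distance((0, 0, 0, 0, 0), (1, 1, 1, 1, 1)))  # sqrt(5) ≈ 2.236
```

The same formula works unchanged in any dimension, which is one practical sense in which higher-dimensional geometry extends the classical plane.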
In general topology, the concept of dimension has been extended from the natural numbers to infinite dimension (Hilbert spaces, for example) and to positive real numbers (in fractal geometry). In algebraic geometry, the dimension of an algebraic variety has received a number of apparently different definitions, which are all equivalent in the most common cases. Symmetry The theme of symmetry in geometry is nearly as old as the science of geometry itself. Symmetric shapes such as the circle, regular polygons and Platonic solids held deep significance for many ancient philosophers and were investigated in detail before the time of Euclid. Symmetric patterns occur in nature and were artistically rendered in a multitude of forms, including the graphics of Leonardo da Vinci, M. C. Escher, and others. In the second half of the 19th century, the relationship between symmetry and geometry came under intense scrutiny. Felix Klein's Erlangen program proclaimed that, in a very precise sense, symmetry, expressed via the notion of a transformation group, determines what geometry is. Symmetry in classical Euclidean geometry is represented by congruences and rigid motions, whereas in projective geometry an analogous role is played by collineations, geometric transformations that take straight lines into straight lines. However, it was in the new geometries of Bolyai and Lobachevsky, Riemann, Clifford and Klein, and Sophus Lie that Klein's idea to 'define a geometry via its symmetry group' found its inspiration. Both discrete and continuous symmetries play prominent roles in geometry, the former in topology and geometric group theory, the latter in Lie theory and Riemannian geometry. A different type of symmetry is the principle of duality in projective geometry, among other fields. This meta-phenomenon can roughly be described as follows: in any theorem, exchange point with plane, join with meet, lies in with contains, and the result is an equally true theorem. A similar and closely related form of duality exists between a vector space and its dual space.
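A concrete instance of this exchange, in the projective plane (a textbook example rather than one given here): the statement "any two distinct points lie on exactly one line" dualizes to "any two distinct lines meet in exactly one point", and each is a theorem of plane projective geometry.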
Non-Euclidean geometry Topology Topology is the field concerned with the properties of continuous mappings, and can be considered a generalization of Euclidean geometry. In practice, topology often means dealing with large-scale properties of spaces, such as connectedness and compactness. The field of topology, which saw massive development in the 20th century, is in a technical sense a type of transformation geometry, in which transformations are homeomorphisms. This has often been expressed in the form of the saying 'topology is rubber-sheet geometry'. Subfields of topology include geometric topology, differential topology, algebraic topology and general topology. Algebraic geometry Algebraic geometry is fundamentally the study by means of algebraic methods of some geometrical shapes, called algebraic sets, and defined as common zeros of multivariate polynomials. Algebraic geometry became an autonomous subfield of geometry around 1900, with a theorem called Hilbert's Nullstellensatz that establishes a strong correspondence between algebraic sets and ideals of polynomial rings. This led to a parallel development of algebraic geometry and its algebraic counterpart, called commutative algebra. From the late 1950s through the mid-1970s algebraic geometry underwent major foundational development, with the introduction by Alexander Grothendieck of scheme theory, which allows the use of topological methods, including cohomology theories, in a purely algebraic context. Scheme theory made it possible to solve many difficult problems not only in geometry, but also in number theory. Wiles' proof of Fermat's Last Theorem is a famous example of a long-standing problem of number theory whose solution uses scheme theory and its extensions such as stack theory. One of the seven Millennium Prize Problems, the Hodge conjecture, is a question in algebraic geometry. Algebraic geometry has applications in many areas, including cryptography and string theory. Complex geometry Complex geometry studies the nature of geometric structures modelled on, or arising out of, the complex plane. Complex geometry lies at the intersection of differential geometry, algebraic geometry, and analysis of several complex variables, and has found applications to string theory and mirror symmetry. Complex geometry first appeared as a distinct area of study in the work of Bernhard Riemann in his study of Riemann surfaces. Work in the spirit of Riemann was carried out by the Italian school of algebraic geometry in the early 1900s. Contemporary treatment of complex geometry began with the work of Jean-Pierre Serre, who introduced the concept of sheaves to the subject, and illuminated the relations between complex geometry and algebraic geometry. The primary objects of study in complex geometry are complex manifolds, complex algebraic varieties, and complex analytic varieties, and holomorphic vector bundles and coherent sheaves over these spaces. Special examples of spaces studied in complex geometry include Riemann surfaces and Calabi–Yau manifolds, and these spaces find uses in string theory. In particular, worldsheets of strings are modelled by Riemann surfaces, and superstring theory predicts that the extra six dimensions of ten-dimensional spacetime may be modelled by Calabi–Yau manifolds. Discrete geometry Discrete geometry is a subject that has close connections with convex geometry. It is concerned mainly with questions of relative position of simple geometric objects, such as points, lines and circles.
Examples include the study of sphere packings, triangulations, the Kneser–Poulsen conjecture, etc. It shares many methods and principles with combinatorics. Computational geometry Computational geometry deals with algorithms and their implementations for manipulating geometrical objects. Important problems historically have included the travelling salesman problem, minimum spanning trees, hidden-line removal, and linear programming. Although a young area of geometry, it has many applications in computer vision, image processing, computer-aided design, medical imaging, etc. Geometric group theory Geometric group theory uses group actions on objects that are regarded as geometric (significantly, isometric actions on metric spaces) to study finitely generated groups, often involving large-scale geometric techniques. It is closely connected to low-dimensional topology, such as in Grigori Perelman's proof of the Geometrization conjecture, which included the proof of the Poincaré conjecture, a Millennium Prize Problem. The actions of groups on their Cayley graphs are foundational examples of isometric group actions. Other major topics include quasi-isometries, Gromov-hyperbolic groups and their generalizations (relatively and acylindrically hyperbolic groups), free groups and their automorphisms, groups acting on trees, various notions of nonpositive curvature for groups (CAT(0) groups, Dehn functions, automaticity...), right-angled Artin groups, and topics close to combinatorial group theory such as small cancellation theory and algorithmic problems (e.g. the word, conjugacy, and isomorphism problems). Other group-theoretic topics like mapping class groups, property (T), solvability, amenability and lattices in Lie groups are sometimes regarded as strongly geometric as well. Convex geometry Convex geometry investigates convex shapes in Euclidean space and its more abstract analogues, often using techniques of real analysis and discrete mathematics. It has close connections to convex analysis, optimization and functional analysis and important applications in number theory. Convex geometry dates back to antiquity. Archimedes gave the first known precise definition of convexity. The isoperimetric problem, a recurring concept in convex geometry, was studied by the Greeks as well, including Zenodorus. Archimedes, Plato, Euclid, and later Kepler and Coxeter all studied convex polytopes and their properties. From the 19th century on, mathematicians have studied other areas of convex mathematics, including higher-dimensional polytopes, volume and surface area of convex bodies, Gaussian curvature, algorithms, tilings and lattices. Applications Geometry has found applications in many fields, some of which are described below. Art Mathematics and art are related in a variety of ways. For instance, the theory of perspective showed that there is more to geometry than just the metric properties of figures: perspective is the origin of projective geometry. Artists have long used concepts of proportion in design. Vitruvius developed a complicated theory of ideal proportions for the human figure. These concepts have been used and adapted by artists from Michelangelo to modern comic book artists. The golden ratio is a particular proportion that has had a controversial role in art.
Often claimed to be the most aesthetically pleasing ratio of lengths, it is frequently stated to be incorporated into famous works of art, though the most reliable and unambiguous examples were made deliberately by artists aware of this legend. Tilings, or tessellations, have been used in art throughout history. Islamic art makes frequent use of tessellations, as did the art of M. C. Escher. Escher's work also made use of hyperbolic geometry. Cézanne advanced the theory that all images can be built up from the sphere, the cone, and the cylinder. This is still used in art theory today, although the exact list of shapes varies from author to author. Architecture Geometry has many applications in architecture. In fact, it has been said that geometry lies at the core of architectural design. Applications of geometry to architecture include the use of projective geometry to create forced perspective, the use of conic sections in constructing domes and similar objects, the use of tessellations, and the use of symmetry. Physics The field of astronomy, especially as it relates to mapping the positions of stars and planets on the celestial sphere and describing the relationship between movements of celestial bodies, has served as an important source of geometric problems throughout history. Riemannian geometry and pseudo-Riemannian geometry are used in general relativity. String theory makes use of several variants of geometry, as does quantum information theory. Other fields of mathematics Calculus was strongly influenced by geometry. For instance, the introduction of coordinates by René Descartes and the concurrent developments of algebra marked a new stage for geometry, since geometric figures such as plane curves could now be represented analytically in the form of functions and equations. This played a key role in the emergence of infinitesimal calculus in the 17th century. Analytic geometry continues to be a mainstay of the pre-calculus and calculus curriculum. Another important area of application is number theory. In ancient Greece the Pythagoreans considered the role of numbers in geometry. However, the discovery of incommensurable lengths contradicted their philosophical views. Since the 19th century, geometry has been used for solving problems in number theory, for example through the geometry of numbers or, more recently, scheme theory, which is used in Wiles's proof of Fermat's Last Theorem.
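One concrete result from the geometry of numbers mentioned above (a classical theorem, stated here for reference rather than taken from this article) is Minkowski's theorem: if K is a convex region of n-dimensional space, symmetric about the origin, with volume greater than 2^n, then K contains a nonzero point with integer coordinates,

\operatorname{vol}(K) > 2^{n} \implies K \cap \left( \mathbb{Z}^{n} \setminus \{0\} \right) \neq \emptyset.

Geometric facts of this kind translate directly into arithmetic conclusions, which is the characteristic move of the field.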
Mathematics
Mathematics
null
18973622
https://en.wikipedia.org/wiki/Leaf
Leaf
A leaf (plural: leaves) is a principal appendage of the stem of a vascular plant, usually borne laterally above ground and specialized for photosynthesis. Leaves are collectively called foliage, as in "autumn foliage", while the leaves, stem, flower, and fruit collectively form the shoot system. In most leaves, the primary photosynthetic tissue is the palisade mesophyll and is located on the upper side of the blade or lamina of the leaf, but in some species, including the mature foliage of Eucalyptus, palisade mesophyll is present on both sides and the leaves are said to be isobilateral. The leaf is an integral part of the stem system, and most leaves are flattened and have distinct upper (adaxial) and lower (abaxial) surfaces that differ in color, hairiness, the number of stomata (pores that intake and output gases), the amount and structure of epicuticular wax, and other features. Leaves are mostly green in color due to the presence of a compound called chlorophyll, which is essential for photosynthesis as it absorbs light energy from the Sun. A leaf with lighter-colored or white patches or edges is called a variegated leaf. Leaves can have many different shapes, sizes, textures and colors. The broad, flat leaves with complex venation of flowering plants are known as megaphylls and the species that bear them (the majority) as broad-leaved or megaphyllous plants, which also include acrogymnosperms and ferns. In the lycopods, with different evolutionary origins, the leaves are simple (with only a single vein) and are known as microphylls. Some leaves, such as bulb scales, are not above ground. In many aquatic species, the leaves are submerged in water. Succulent plants often have thick juicy leaves, but some leaves are without major photosynthetic function and may be dead at maturity, as in some cataphylls and spines. Furthermore, several kinds of leaf-like structures found in vascular plants are not totally homologous with them. Examples include flattened plant stems called phylloclades and cladodes, and flattened leaf stems called phyllodes, which differ from leaves both in their structure and origin. Some structures of non-vascular plants look and function much like leaves. Examples include the phyllids of mosses and liverworts. General characteristics Leaves are the most important organs of most vascular plants. Green plants are autotrophic, meaning that they do not obtain food from other living things but instead create their own food by photosynthesis. They capture the energy in sunlight and use it to make simple sugars, such as glucose and sucrose, from carbon dioxide (CO2) and water. The sugars are then stored as starch, further processed by chemical synthesis into more complex organic molecules such as proteins or cellulose, the basic structural material in plant cell walls, or metabolized by cellular respiration to provide chemical energy to run cellular processes. The leaves draw water from the ground in the transpiration stream through a vascular conducting system known as xylem and obtain carbon dioxide from the atmosphere by diffusion through openings called stomata in the outer covering layer of the leaf (epidermis), while leaves are orientated to maximize their exposure to sunlight. Once sugar has been synthesized, it needs to be transported to areas of active growth such as the shoots and roots. Vascular plants transport sucrose in a special tissue called the phloem. The phloem and xylem are parallel to each other, but the transport of materials is usually in opposite directions.
Within the leaf these vascular systems branch (ramify) to form veins which supply as much of the leaf as possible, ensuring that cells carrying out photosynthesis are close to the transportation system. Typically leaves are broad, flat and thin (dorsiventrally flattened), thereby maximizing the surface area directly exposed to light and enabling the light to penetrate the tissues and reach the chloroplasts, thus promoting photosynthesis. They are arranged on the plant so as to expose their surfaces to light as efficiently as possible without shading each other, but there are many exceptions and complications. For instance, plants adapted to windy conditions may have pendent leaves, such as in many willows and eucalypts. The flat, or laminar, shape also maximizes thermal contact with the surrounding air, promoting cooling. Functionally, in addition to carrying out photosynthesis, the leaf is the principal site of transpiration, providing the energy required to draw the transpiration stream up from the roots, and guttation. Many conifers have thin needle-like or scale-like leaves that can be advantageous in cold climates with frequent snow and frost. These are interpreted as reduced from megaphyllous leaves of their Devonian ancestors. Some leaf forms are adapted to modulate the amount of light they absorb to avoid or mitigate excessive heat, ultraviolet damage, or desiccation, or to sacrifice light-absorption efficiency in favor of protection from herbivory. For xerophytes the major constraint is not light flux or intensity, but drought. Some window plants such as Fenestraria species and some Haworthia species such as Haworthia tesselata and Haworthia truncata are examples of xerophytes. Leaves function to store chemical energy and water (especially in succulents) and may become specialized organs serving other functions, such as the tendrils of peas and other legumes, the protective spines of cacti, and the insect traps in carnivorous plants such as Nepenthes and Sarracenia. Leaves are the fundamental structural units from which cones are constructed in gymnosperms (each cone scale is a modified megaphyll leaf known as a sporophyll) and from which flowers are constructed in flowering plants. The internal organization of most kinds of leaves has evolved to maximize exposure of the photosynthetic organelles (chloroplasts) to light and to increase the absorption of carbon dioxide (CO2) while at the same time controlling water loss. Their surfaces are waterproofed by the plant cuticle, and gas exchange between the mesophyll cells and the atmosphere is controlled by minute (length and width measured in tens of μm) stomata which open or close to regulate the rate of exchange of carbon dioxide (CO2), oxygen (O2), and water vapor into and out of the internal intercellular space system. Stomatal opening is controlled by the turgor pressure in a pair of guard cells that surround the stomatal aperture. In any square centimeter of a plant leaf, there may be from 1,000 to 100,000 stomata. The shape and structure of leaves vary considerably from species to species of plant, depending largely on their adaptation to climate and available light, but also on other factors such as grazing animals, available nutrients, and ecological competition from other plants.
Considerable changes in leaf type occur within species, too, for example as a plant matures: Eucalyptus species commonly have isobilateral, pendent leaves when mature and dominating their neighbors, whereas such trees tend to have erect or horizontal dorsiventral leaves as seedlings, when their growth is limited by the available light. Other factors include the need to balance water loss at high temperature and low humidity against the need to absorb carbon dioxide (CO2). In most plants, leaves also are the primary organs responsible for transpiration and guttation (beads of fluid forming at leaf margins). Leaves can also store food and water and are modified accordingly to meet these functions, for example in the leaves of succulent plants and in bulb scales. The concentration of photosynthetic structures in leaves requires that they be richer in protein, minerals, and sugars than, say, woody stem tissues. Accordingly, leaves are prominent in the diet of many animals. Correspondingly, leaves represent a heavy investment on the part of the plants bearing them, and their retention or disposition is the subject of elaborate strategies for dealing with pest pressures, seasonal conditions, and protective measures such as the growth of thorns and the production of phytoliths, lignins, tannins and poisons. Deciduous plants in cold temperate regions typically shed their leaves in autumn, whereas in areas with a severe dry season, some plants may shed their leaves until the dry season ends. In either case, the shed leaves often contribute their retained nutrients to the soil where they fall. In contrast, many other non-seasonal plants, such as palms and conifers, retain their leaves for long periods; Welwitschia retains its two main leaves throughout a lifetime that may exceed a thousand years. The leaf-like organs of bryophytes (e.g., mosses and liverworts), known as phyllids, differ greatly morphologically from the leaves of vascular plants. In most cases, they lack vascular tissue, are a single cell thick, and have no cuticle, stomata, or internal system of intercellular spaces. (The phyllids of the moss family Polytrichaceae are notable exceptions.) The phyllids of bryophytes are only present on the gametophytes, while in contrast the leaves of vascular plants are only present on the sporophytes. These can further develop into either vegetative or reproductive structures. Simple, vascularized leaves (microphylls), such as those of the early Devonian lycopsid Baragwanathia, first evolved as enations, extensions of the stem. True leaves or euphylls of larger size and with more complex venation did not become widespread in other groups until the Devonian period, by which time the carbon dioxide concentration in the atmosphere had dropped significantly. This occurred independently in several separate lineages of vascular plants: in progymnosperms like Archaeopteris, in Sphenopsida, in ferns, and later in the gymnosperms and angiosperms. Euphylls are also referred to as macrophylls or megaphylls (large leaves). Morphology A structurally complete leaf of an angiosperm consists of a petiole (leaf stalk), a lamina (leaf blade), stipules (small structures located to either side of the base of the petiole) and a sheath. Not every species produces leaves with all of these structural components. The proximal stalk or petiole is called a stipe in ferns. The lamina is the expanded, flat component of the leaf which contains the chloroplasts.
The sheath is a structure, typically at the base, that fully or partially clasps the stem above the node, where the leaf is attached. Leaf sheaths typically occur in Poaceae (grasses) and Apiaceae (umbellifers). Between the sheath and the lamina, there may be a pseudopetiole, a petiole-like structure. Pseudopetioles occur in some monocotyledons including bananas, palms and bamboos. Stipules may be conspicuous (e.g. beans and roses), soon falling or otherwise not obvious as in Moraceae, or absent altogether as in the Magnoliaceae. A petiole may be absent (apetiolate), or the blade may not be laminar (flattened). The petiole mechanically links the leaf to the plant and provides the route for transfer of water and sugars to and from the leaf. The lamina is typically the location of the majority of photosynthesis. The upper (adaxial) angle between a leaf and a stem is known as the axil of the leaf. It is often the location of a bud. Structures located there are called "axillary". External leaf characteristics, such as shape, margin, hairs, the petiole, and the presence of stipules and glands, are frequently important for identifying plants to family, genus or species levels, and botanists have developed a rich terminology for describing leaf characteristics. Leaves almost always have determinate growth: they grow to a specific pattern and shape and then stop. Other plant parts like stems or roots have non-determinate growth, and will usually continue to grow as long as they have the resources to do so. The type of leaf is usually characteristic of a species (monomorphic), although some species produce more than one type of leaf (dimorphic or polymorphic). The longest leaves are those of the Raffia palm, Raphia regalis, which may be up to 25 m long and 3 m wide. The terminology associated with the description of leaf morphology is presented, in illustrated form, at Wikibooks. Where leaves are basal and lie on the ground, they are referred to as prostrate. Basic leaf types Perennial plants whose leaves are shed annually are said to have deciduous leaves, while leaves that remain through winter are evergreens. Leaves attached to stems by stalks (known as petioles) are called petiolate, and if attached directly to the stem with no petiole they are called sessile. Ferns have fronds. Conifer leaves are typically needle- or awl-shaped or scale-like; they are usually evergreen but can sometimes be deciduous. Usually, they have a single vein. The standard form of flowering plants (angiosperms) includes stipules, a petiole, and a lamina. Lycophytes have microphylls. Sheath leaves are the type found in most grasses and many other monocots. Other specialized leaves include those of Nepenthes, a pitcher plant. Dicot leaves have blades with pinnate venation (where major veins diverge from one large mid-vein and have smaller connecting networks between them). Less commonly, dicot leaf blades may have palmate venation (several large veins diverging from petiole to leaf edges). Finally, some exhibit parallel venation. Monocot leaves in temperate climates usually have narrow blades and usually parallel venation converging at leaf tips or edges. Some also have pinnate venation. Arrangement on the stem The arrangement of leaves on the stem is known as phyllotaxis. A large variety of phyllotactic patterns occur in nature: Alternate One leaf, branch, or flower part attaches at each point or node on the stem, and leaves alternate direction—to a greater or lesser degree—along the stem. Basal Arising from the base of the plant.
Cauline Attached to the aerial stem. Opposite Two leaves, branches, or flower parts attach at each point or node on the stem. Leaf attachments are paired at each node. Decussate An opposite arrangement in which each successive pair is rotated 90° from the previous. Whorled, or verticillate Three or more leaves, branches, or flower parts attach at each point or node on the stem. As with opposite leaves, successive whorls may or may not be decussate, rotated by half the angle between the leaves in the whorl (i.e., successive whorls of three rotated 60°, whorls of four rotated 45°, etc.). Opposite leaves may appear whorled near the tip of the stem. Pseudoverticillate describes an arrangement only appearing whorled, but not actually so. Rosulate Leaves form a rosette. Rows The term distichous literally means two rows. Leaves in this arrangement may be alternate or opposite in their attachment. The term 2-ranked is equivalent. The terms tristichous and tetrastichous are sometimes encountered. For example, the "leaves" (actually microphylls) of most species of Selaginella are tetrastichous but not decussate. In the simplest mathematical models of phyllotaxis, the apex of the stem is represented as a circle. Each new node is formed at the apex, and it is rotated by a constant angle from the previous node. This angle is called the divergence angle. The number of leaves that grow from a node depends on the plant species. When a single leaf grows from each node, and when the stem is held straight, the leaves form a helix. The divergence angle is often represented as a fraction of a full rotation around the stem. A rotation fraction of 1/2 (a divergence angle of 180°) produces an alternate arrangement, such as in Gasteria or the fan-aloe Kumara plicatilis. Rotation fractions of 1/3 (divergence angles of 120°) occur in beech and hazel. Oak and apricot rotate by 2/5, sunflowers, poplar, and pear by 3/8, and in willow and almond the fraction is 5/13. These arrangements are periodic. The denominator of the rotation fraction indicates the number of leaves in one period, while the numerator indicates the number of complete turns or gyres made in one period. For example: 180° (or 1/2): two leaves in one circle (alternate leaves) 120° (or 1/3): three leaves in one circle 144° (or 2/5): five leaves in two gyres 135° (or 3/8): eight leaves in three gyres. Most divergence angles are related to the sequence of Fibonacci numbers. This sequence begins 1, 1, 2, 3, 5, 8, 13; each term is the sum of the previous two. Rotation fractions are often quotients of a Fibonacci number by the number two terms later in the sequence. This is the case for the fractions 1/2, 1/3, 2/5, 3/8, and 5/13. The ratio between successive Fibonacci numbers tends to the golden ratio φ ≈ 1.618. When a circle is divided into two arcs whose lengths are in the ratio φ, the angle formed by the smaller arc is the golden angle, which is approximately 137.5°. Because of this, many divergence angles are approximately 137.5°. In plants where a pair of opposite leaves grows from each node, the leaves form a double helix. If the nodes do not rotate (a rotation fraction of zero and a divergence angle of 0°), the two helices become a pair of parallel lines, creating a distichous arrangement as in maple or olive trees. More common is a decussate pattern, in which each node rotates by 1/4 (90°), as in the herb basil. The leaves of tricussate plants such as Nerium oleander form a triple helix. The leaves of some plants do not form helices. In some plants, the divergence angle changes as the plant grows.
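The constant-divergence model just described is straightforward to simulate. The sketch below uses Python purely for illustration (the language and function name are ours; the angle values are the ones quoted above); it lists successive leaf directions for a rotation fraction p/q and shows that the pattern repeats after q leaves:

```python
from fractions import Fraction

def leaf_angles(rotation_fraction, n):
    """Angular positions (degrees, mod 360) of n successive leaves
    for a constant divergence angle of rotation_fraction * 360 degrees."""
    step = 360 * rotation_fraction
    return [float((k * step) % 360) for k in range(n)]

# 2/5 phyllotaxy (oak and apricot, per the text): a 144-degree divergence;
# five leaves complete two full turns, and the sixth leaf points in the
# same direction as the first.
print(leaf_angles(Fraction(2, 5), 6))   # [0.0, 144.0, 288.0, 72.0, 216.0, 0.0]

# The golden angle, 360 * (1 - 1/phi), which Fibonacci fractions such
# as 5/13 approximate ever more closely:
golden_angle = 360 * (1 - 2 / (1 + 5 ** 0.5))
print(round(golden_angle, 1))           # 137.5
```

Because the golden angle is irrational as a fraction of a full turn, a plant using it never exactly repeats a leaf direction, which is one standard explanation for its prevalence.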
In orixate phyllotaxis, named after Orixa japonica, the divergence angle is not constant. Instead, it is periodic and follows the sequence 180°, 90°, 180°, 270°. Divisions of the blade Two basic forms of leaves can be described considering the way the blade (lamina) is divided. A simple leaf has an undivided blade. However, the leaf may be dissected to form lobes, but the gaps between lobes do not reach to the main vein. A compound leaf has a fully subdivided blade, each leaflet of the blade being separated along a main or secondary vein. The leaflets may have petiolules and stipels, the equivalents of the petioles and stipules of leaves. Because each leaflet can appear to be a simple leaf, it is important to recognize where the petiole occurs to identify a compound leaf. Compound leaves are a characteristic of some families of higher plants, such as the Fabaceae. The middle vein of a compound leaf or a frond, when it is present, is called a rachis. Palmately compound The leaflets all have a common point of attachment at the end of the petiole, radiating like fingers of a hand; for example, Cannabis (hemp) and Aesculus (buckeyes). Pinnately compound Leaflets are arranged either side of the main axis, or rachis. Bipinnately compound Leaves are twice divided: the leaflets (technically "subleaflets") are arranged along a secondary axis that is one of several branching off the rachis. Each leaflet is called a pinnule. The group of pinnules on each secondary vein forms a pinna; for example, Albizia (silk tree). Trifoliate (or trifoliolate) A pinnate leaf with just three leaflets; for example, Trifolium (clover), Laburnum (laburnum), and some species of Toxicodendron (for instance, poison ivy). Pinnatifid Pinnately dissected to the central vein, but with the leaflets not entirely separate; for example, Polypodium, some Sorbus (whitebeams). In pinnately veined leaves the central vein is known as the midrib. Characteristics of the petiole Leaves which have a petiole (leaf stalk) are said to be petiolate. Sessile (epetiolate) leaves have no petiole, and the blade attaches directly to the stem. Subpetiolate leaves are nearly petiolate or have an extremely short petiole and may appear to be sessile. In clasping or decurrent leaves, the blade partially surrounds the stem. When the leaf base completely surrounds the stem, the leaves are said to be perfoliate, such as in Eupatorium perfoliatum. In peltate leaves, the petiole attaches to the blade inside the blade margin. In some Acacia species, such as the koa tree (Acacia koa), the petioles are expanded or broadened and function like leaf blades; these are called phyllodes. There may or may not be normal pinnate leaves at the tip of the phyllode. A stipule, present on the leaves of many dicotyledons, is an appendage on each side at the base of the petiole, resembling a small leaf. Stipules may be lasting and not be shed (a stipulate leaf, such as in roses and beans), or be shed as the leaf expands, leaving a stipule scar on the twig (an exstipulate leaf). The situation, arrangement, and structure of the stipules is called the "stipulation". Free, lateral As in Hibiscus. Adnate Fused to the petiole base, as in Rosa. Ochreate Provided with ochrea, or sheath-formed stipules, as in Polygonaceae; e.g., rhubarb. Encircling the petiole base Veins Veins (sometimes referred to as nerves) constitute one of the most visible features of leaves. 
The veins in a leaf represent the vascular structure of the organ, extending into the leaf via the petiole and providing transportation of water and nutrients between leaf and stem, and they play a crucial role in the maintenance of leaf water status and photosynthetic capacity. They also play a role in the mechanical support of the leaf. Within the lamina of the leaf, while some vascular plants possess only a single vein, in most this vasculature generally divides (ramifies) according to a variety of patterns (venation) and forms cylindrical bundles, usually lying in the median plane of the mesophyll, between the two layers of epidermis. This pattern is often specific to taxa; angiosperms possess two main types, parallel and reticulate (net-like) venation. In general, parallel venation is typical of monocots, while reticulate venation is more typical of eudicots and magnoliids ("dicots"), though there are many exceptions. The vein or veins entering the leaf from the petiole are called primary or first-order veins. The veins branching from these are secondary or second-order veins. These primary and secondary veins are considered major veins or lower-order veins, though some authors include the third order. Each subsequent branching is sequentially numbered, and these are the higher-order veins, each branching being associated with a narrower vein diameter. In parallel-veined leaves, the primary veins run parallel and equidistant to each other for most of the length of the leaf and then converge or fuse (anastomose) towards the apex. Usually, many smaller minor veins interconnect these primary veins but may terminate with very fine vein endings in the mesophyll. Minor veins are more typical of angiosperms, which may have as many as four higher orders. In contrast, leaves with reticulate venation have a single (sometimes more) primary vein in the centre of the leaf, referred to as the midrib or costa, which is continuous with the vasculature of the petiole. The secondary veins, also known as second-order veins or lateral veins, branch off from the midrib and extend toward the leaf margins. These often terminate in a hydathode, a secretory organ, at the margin. In turn, smaller veins branch from the secondary veins, known as tertiary or third-order (or higher-order) veins, forming a dense reticulate pattern. The areas or islands of mesophyll lying between the higher-order veins are called areoles. Some of the smallest veins (veinlets) may have their endings in the areoles, a process known as areolation. These minor veins act as the sites of exchange between the mesophyll and the plant's vascular system. Thus, minor veins collect the products of photosynthesis (photosynthate) from the cells where it takes place, while major veins are responsible for its transport outside of the leaf. At the same time water is being transported in the opposite direction. The number of vein endings is variable, as is whether second-order veins end at the margin or link back to other veins. There are many elaborate variations on the patterns that the leaf veins form, and these have functional implications. Of these, angiosperms have the greatest diversity. Within these the major veins function as the support and distribution network for leaves and are correlated with leaf shape.
For instance, the parallel venation found in most monocots correlates with their elongated leaf shape and wide leaf base; reticulate venation is seen in simple entire leaves, while digitate leaves typically have venation in which three or more primary veins diverge radially from a single point. In evolutionary terms, early-emerging taxa tend to have dichotomous branching, with reticulate systems emerging later. Veins appeared in the Permian, prior to the appearance of angiosperms in the Triassic, during which vein hierarchy appeared, enabling higher function, larger leaf size and adaptation to a wider variety of climatic conditions. Although it is the more complex pattern, branching veins appear to be plesiomorphic and in some form were present in ancient seed plants as long as 250 million years ago. A pseudo-reticulate venation that is actually a highly modified penniparallel one is an autapomorphy of some Melanthiaceae, which are monocots; e.g., Paris quadrifolia (True-lover's Knot). In leaves with reticulate venation, veins form a scaffolding matrix imparting mechanical rigidity to leaves. Morphology changes within a single plant Homoblasty Characteristic in which a plant has small changes in leaf size, shape, and growth habit between juvenile and adult stages, in contrast to: Heteroblasty Characteristic in which a plant has marked changes in leaf size, shape, and growth habit between juvenile and adult stages. Anatomy Medium-scale features Leaves are normally extensively vascularized and typically have networks of vascular bundles containing xylem, which supplies water for photosynthesis, and phloem, which transports the sugars produced by photosynthesis. Many leaves are covered in trichomes (small hairs) which have diverse structures and functions. Small-scale features The major tissue systems present are The epidermis, which covers the upper and lower surfaces The mesophyll tissue (also called chlorenchyma), which consists of photosynthetic cells rich in chloroplasts The arrangement of veins (the vascular tissue) These three tissue systems typically form a regular organization at the cellular scale. Specialized cells that differ markedly from surrounding cells, and which often synthesize specialized products such as crystals, are termed idioblasts. Major leaf tissues Epidermis The epidermis is the outer layer of cells covering the leaf. It is covered with a waxy cuticle which is impermeable to liquid water and water vapor and forms the boundary separating the plant's inner cells from the external world. The cuticle is in some cases thinner on the lower epidermis than on the upper epidermis, and is generally thicker on leaves from dry climates as compared with those from wet climates. The epidermis serves several functions: protection against water loss by way of transpiration, regulation of gas exchange, and secretion of metabolic compounds. Most leaves show dorsoventral anatomy: the upper (adaxial) and lower (abaxial) surfaces have somewhat different construction and may serve different functions. The epidermis tissue includes several differentiated cell types: epidermal cells, epidermal hair cells (trichomes), and the cells of the stomatal complex, namely guard cells and subsidiary cells. The epidermal cells are the most numerous, largest, and least specialized and form the majority of the epidermis. They are typically more elongated in the leaves of monocots than in those of dicots. Chloroplasts are generally absent in epidermal cells, the exception being the guard cells of the stomata.
The stomatal pores perforate the epidermis and are surrounded on each side by chloroplast-containing guard cells, and two to four subsidiary cells that lack chloroplasts, forming a specialized cell group known as the stomatal complex. The opening and closing of the stomatal aperture is controlled by the stomatal complex and regulates the exchange of gases and water vapor between the outside air and the interior of the leaf. Stomata therefore play an important role in allowing photosynthesis without letting the leaf dry out. In a typical leaf, the stomata are more numerous over the abaxial (lower) epidermis than the adaxial (upper) epidermis, and are more numerous in plants from cooler climates.

Mesophyll
Most of the interior of the leaf between the upper and lower layers of epidermis is a parenchyma (ground tissue) or chlorenchyma tissue called the mesophyll (Greek for "middle leaf"). This assimilation tissue is the primary location of photosynthesis in the plant. The products of photosynthesis are called "assimilates". In ferns and most flowering plants, the mesophyll is divided into two layers:
An upper palisade layer of vertically elongated cells, one to two cells thick, directly beneath the adaxial epidermis, with intercellular air spaces between them. Its cells contain many more chloroplasts than the spongy layer. Cylindrical cells, with the chloroplasts close to the walls of the cell, can take optimal advantage of light. The slight separation of the cells provides maximum absorption of carbon dioxide. Sun leaves have a multi-layered palisade layer, while shade leaves or older leaves closer to the soil have a single-layered one.
Beneath the palisade layer is the spongy layer. The cells of the spongy layer are more branched and not so tightly packed, so that there are large intercellular air spaces between them. The pores or stomata of the epidermis open into substomatal chambers, which are connected to the intercellular air spaces between the spongy and palisade mesophyll cells, so that oxygen, carbon dioxide and water vapor can diffuse into and out of the leaf and access the mesophyll cells during respiration, photosynthesis and transpiration.
Leaves are normally green, due to chlorophyll in chloroplasts in the mesophyll cells. Some plants have leaves of different colours, due to the presence of accessory pigments such as carotenoids in their mesophyll cells.

Vascular tissue
The veins are the vascular tissue of the leaf and are located in the spongy layer of the mesophyll. The pattern of the veins is called venation. In angiosperms the venation is typically parallel in monocotyledons and forms an interconnecting network in broad-leaved plants. They were once thought to be typical examples of pattern formation through ramification, but they may instead exemplify a pattern formed in a stress tensor field. A vein is made up of a vascular bundle. At the core of each bundle are clusters of two distinct types of conducting cells:
Xylem
Cells that bring water and minerals from the roots into the leaf.
Phloem
Cells that usually move sap, with dissolved sucrose (glucose converted to sucrose) produced by photosynthesis in the leaf, out of the leaf.
The xylem typically lies on the adaxial side of the vascular bundle and the phloem typically lies on the abaxial side. Both are embedded in a dense parenchyma tissue, called the sheath, which usually includes some structural collenchyma tissue.
Leaf development
According to Agnes Arber's partial-shoot theory of the leaf, leaves are partial shoots, being derived from leaf primordia of the shoot apex. Early in development they are dorsiventrally flattened with both dorsal and ventral surfaces. Compound leaves are closer to shoots than simple leaves. Developmental studies have shown that compound leaves, like shoots, may branch in three dimensions. On the basis of molecular genetics, Eckardt and Baum (2010) concluded that "it is now generally accepted that compound leaves express both leaf and shoot properties." Many dicotyledonous leaves show endogenously driven daily rhythmicity in growth.

Ecology
Biomechanics
Plants respond and adapt to environmental factors, such as light and mechanical stress from wind. Leaves need to support their own mass and align themselves in such a way as to optimize their exposure to the sun, generally more or less horizontally. However, horizontal alignment maximizes exposure to bending forces and failure from stresses such as wind, snow, hail, falling debris, animals, and abrasion from surrounding foliage and plant structures. Overall, leaves are relatively flimsy compared with other plant structures such as stems, branches and roots.

Both leaf blade and petiole structure influence the leaf's response to forces such as wind, allowing a degree of repositioning to minimize drag and damage, as opposed to resistance. Leaf movement like this may also increase turbulence of the air close to the surface of the leaf, which thins the boundary layer of air immediately adjacent to the surface, increasing the capacity for gas and heat exchange, as well as photosynthesis. Strong wind forces may result in diminished leaf number and surface area, which, while reducing drag, involves a trade-off of also reducing photosynthesis. Thus, leaf design may involve compromise between carbon gain, thermoregulation and water loss on the one hand, and the cost of sustaining both static and dynamic loads on the other. In vascular plants, perpendicular forces are spread over a larger area, and leaves are relatively flexible in both bending and torsion, enabling elastic deformation without damage.

Many leaves rely on hydrostatic support arranged around a skeleton of vascular tissue for their strength, which depends on maintaining leaf water status. Both the mechanics and architecture of the leaf reflect the need for transportation and support. Read and Stokes (2006) consider two basic models, the "hydrostatic" and "I-beam leaf" forms. Hydrostatic leaves, such as those of Prostanthera lasianthos, are large and thin, and may involve the need for multiple leaves rather than single large leaves because of the number of veins needed to support the periphery of large leaves. But large leaf size favors efficiency in photosynthesis and water conservation, involving further trade-offs. On the other hand, I-beam leaves such as those of Banksia marginata involve specialized structures to stiffen them. These I-beams are formed from bundle sheath extensions of sclerenchyma meeting stiffened sub-epidermal layers. This shifts the balance from reliance on hydrostatic pressure to structural support, an obvious advantage where water is relatively scarce. Long narrow leaves bend more easily than ovate leaf blades of the same area. Monocots typically have such linear leaves that maximize surface area while minimizing self-shading. In these, a high proportion of longitudinal main veins provide additional support.
Interactions with other organisms
Although not as nutritious as other organs such as fruit, leaves provide a food source for many organisms. The leaf is a vital source of energy production for the plant, and plants have evolved defenses against animals that consume leaves, such as tannins, chemicals which hinder the digestion of proteins and have an unpleasant taste. Animals that are specialized to eat leaves are known as folivores. Some species have cryptic adaptations by which they use leaves in avoiding predators. For example, the caterpillars of some leaf-roller moths will create a small home in the leaf by folding it over themselves. Several other lepidopteran larvae modify leaves for shelter; perhaps the greatest variety of shelter types occurs among the skipper butterflies (Hesperiidae), which will cut, fold, and bind leaves using silk. Some sawflies similarly roll the leaves of their food plants into tubes. Females of the Attelabidae, the so-called leaf-rolling weevils, lay their eggs into leaves that they then roll up as a means of protection. Other herbivores and their predators mimic the appearance of the leaf. Reptiles such as some chameleons, and insects such as some katydids, also mimic the oscillating movements of leaves in the wind, moving from side to side or back and forth while evading a possible threat.

Seasonal leaf loss
Leaves in temperate, boreal, and seasonally dry zones may be seasonally deciduous (falling off or dying for the inclement season). This mechanism to shed leaves is called abscission. When the leaf is shed, it leaves a leaf scar on the twig. In cold autumns, leaves sometimes change color, turning yellow, bright orange, or red, as various accessory pigments (carotenoids and xanthophylls) are revealed when the tree responds to cold and reduced sunlight by curtailing chlorophyll production. Red anthocyanin pigments are now thought to be produced in the leaf as it dies, possibly to mask the yellow hue left when the chlorophyll is lost; yellow leaves appear to attract herbivores such as aphids. Optical masking of chlorophyll by anthocyanins reduces the risk of photo-oxidative damage to leaf cells as they senesce, which otherwise may lower the efficiency of nutrient retrieval from senescing autumn leaves.

Evolutionary adaptation
In the course of evolution, leaves have adapted to different environments in the following ways:
Waxy micro- and nanostructures on the surface reduce wetting by rain and adhesion of contamination (see lotus effect).
Divided and compound leaves reduce wind resistance and promote cooling.
Hairs on the leaf surface trap humidity in dry climates and create a boundary layer reducing water loss.
Waxy plant cuticles reduce water loss.
Large surface area provides a large area for capture of sunlight.
In harmful levels of sunlight, specialized leaves, opaque or partly buried, admit light through a translucent leaf window for photosynthesis at inner leaf surfaces (e.g. Fenestraria).
Kranz leaf anatomy in plants which perform C4 carbon fixation.
Succulent leaves store water and organic acids for use in CAM photosynthesis.
Aromatic oils, poisons or pheromones produced by leaf-borne glands deter herbivores (e.g. eucalypts).
Inclusions of crystalline minerals deter herbivores (e.g. silica phytoliths in grasses, raphides in Araceae).
Petals attract pollinators.
Spines protect the plants from herbivores (e.g. cacti).
Stinging hairs protect against herbivory, e.g. in Urtica dioica and Dendrocnide moroides (Urticaceae).
Special leaves on carnivorous plants are adapted for trapping food, mainly invertebrate prey, though some species trap small vertebrates as well (see carnivorous plants). Bulbs store food and water (e.g. onions). Tendrils allow the plant to climb (e.g. peas). Bracts and pseudanthia (false flowers) replace normal flower structures when the true flowers are greatly reduced (e.g. spurges, spathes in the Araceae and floral heads in the Asteraceae).

Terminology
Shape
Edge (margin)
The edge or margin is the outside perimeter of a leaf. The terms are interchangeable.
Apex (tip) and base
Acuminate Coming to a sharp, narrow, prolonged point.
Acute Coming to a sharp, but not prolonged, point.
Auriculate Ear-shaped.
Cordate Heart-shaped, with the notch towards the stalk.
Cuneate Wedge-shaped.
Hastate Shaped like a halberd, with the basal lobes pointing outward.
Oblique Slanting.
Reniform Kidney-shaped, but rounder and broader than long.
Rounded Curving shape.
Sagittate Shaped like an arrowhead, with the acute basal lobes pointing downward.
Truncate Ending abruptly with a flat end that looks cut off.

Surface
The leaf surface is also host to a large variety of microorganisms; in this context it is referred to as the phyllosphere.
Lepidote Covered with fine scurfy scales.

Hairiness
"Hairs" on plants are properly called trichomes. Leaves can show several degrees of hairiness. The meanings of several of the following terms can overlap.
Arachnoid, or arachnose With many fine, entangled hairs giving a cobwebby appearance.
Barbellate With finely barbed hairs (barbellae).
Bearded With long, stiff hairs.
Bristly With stiff hair-like prickles.
Canescent Hoary with dense grayish-white pubescence.
Ciliate Marginally fringed with short hairs (cilia).
Ciliolate Minutely ciliate.
Floccose With flocks of soft, woolly hairs, which tend to rub off.
Glabrescent Losing hairs with age.
Glabrous No hairs of any kind present.
Glandular With a gland at the tip of the hair.
Hirsute With rather rough or stiff hairs.
Hispid With rigid, bristly hairs.
Hispidulous Minutely hispid.
Hoary With a fine, close grayish-white pubescence.
Lanate, or lanose With woolly hairs.
Pilose With soft, clearly separated hairs.
Puberulent, or puberulous With fine, minute hairs.
Pubescent With soft, short and erect hairs.
Scabrous, or scabrid Rough to the touch.
Sericeous Silky appearance through fine, straight and appressed (lying close and flat) hairs.
Silky With adpressed, soft and straight pubescence.
Stellate, or stelliform With star-shaped hairs.
Strigose With appressed, sharp, straight and stiff hairs.
Tomentose Densely pubescent with matted, soft white woolly hairs.
Tomentulose Minutely or only slightly tomentose.
Villous With long and soft hairs, usually curved.
Woolly With long, soft and tortuous or matted hairs.

Timing
Hysteranthous Developing after the flowers.
Synanthous Developing at the same time as the flowers.

Venation
Classification
A number of different classification systems of the patterns of leaf veins (venation or veination) have been described, starting with Ettingshausen (1861), together with many different descriptive terms, and the terminology has been described as "formidable". One of the commonest among these is the Hickey system, originally developed for "dicotyledons" and using a number of Ettingshausen's terms derived from Greek (1973–1979) (see also: Simpson Figure 9.12, p. 468):

Hickey system
1. Pinnate (feather-veined, reticulate, pinnate-netted, penniribbed, penninerved, or penniveined) The veins arise pinnately (feather-like) from a single primary vein (mid-vein) and subdivide into secondary veinlets, known as higher-order veins. These, in turn, form a complicated network. This type of venation is typical for (but by no means limited to) "dicotyledons" (non-monocotyledon angiosperms). E.g., Ostrya. There are three subtypes of pinnate venation, which in turn have a number of further subtypes, such as eucamptodromous, where secondary veins curve near the margin without joining adjacent secondary veins.
2. Parallelodromous (parallel-veined, parallel-ribbed, parallel-nerved, penniparallel, striate) Two or more primary veins originating beside each other at the leaf base and running parallel to each other to the apex, then converging there. Commissural veins (small veins) connect the major parallel veins. Typical for most monocotyledons, such as grasses. The additional terms marginal (primary veins reach the margin) and reticulate (net-veined) are also used.
3. Campylodromous (from the Greek for "curve") Several primary veins or branches originating at or close to a single point and running in recurved arches, then converging at the apex. E.g. Maianthemum.
4. Acrodromous Two or more primary or well-developed secondary veins in convergent arches towards the apex, without the basal recurvature seen in campylodromous venation. May be basal or suprabasal depending on origin, and perfect or imperfect depending on whether they reach 2/3 of the way to the apex. E.g., Miconia (basal type), Endlicheria (suprabasal type).
5. Actinodromous Three or more primary veins diverging radially from a single point. E.g., Arcangelisia (basal type), Givotia (suprabasal type).
6. Palinactodromous Primary veins with one or more points of secondary dichotomous branching beyond the primary divergence, either closely or more distantly spaced. E.g., Platanus.
Types 4–6 may similarly be subclassified as basal (primaries joined at the base of the blade) or suprabasal (diverging above the blade base), and perfect or imperfect, but also flabellate.

At about the same time, Melville (1976) described a system applicable to all angiosperms and using Latin and English terminology. Melville also had six divisions, based on the order in which veins develop:
Arbuscular (arbuscularis) Branching repeatedly by regular dichotomy to give rise to a three-dimensional bush-like structure consisting of linear segments (2 subclasses)
Flabellate (flabellatus) Primary veins straight or only slightly curved, diverging from the base in a fan-like manner (4 subclasses)
Palmate (palmatus) Curved primary veins (3 subclasses)
Pinnate (pinnatus) Single primary vein, the midrib, along which straight or arching secondary veins are arranged at more or less regular intervals (6 subclasses)
Collimate (collimatus) Numerous longitudinally parallel primary veins arising from a transverse meristem (5 subclasses)
Conglutinate (conglutinatus) Derived from fused pinnate leaflets (3 subclasses)

A modified form of the Hickey system was later incorporated into the Smithsonian classification (1999), which proposed seven main types of venation based on the architecture of the primary veins, adding flabellate as an additional main type. Further classification was then made on the basis of secondary veins, with 12 further types, such as:
Brochidodromous Closed form in which the secondaries are joined in a series of prominent arches, as in Hildegardia.
Craspedodromous Open form with secondaries terminating at the margin, in toothed leaves, as in Celtis.
Eucamptodromous Intermediate form with upturned secondaries that gradually diminish apically but inside the margin, and connected by intermediate tertiary veins rather than loops between secondaries, as in Cornus.
Cladodromous Secondaries freely branching toward the margin, as in Rhus.
These are terms which had been used as subtypes in the original Hickey system. Further descriptions covered the higher-order, or minor, veins and the patterns of areoles (see Leaf Architecture Working Group, Figures 28–29).
Flabellate Several to many equal fine basal veins diverging radially at low angles and branching apically. E.g. Paranomus.

Analyses of vein patterns typically consider the vein orders, primary vein type, secondary vein type (major veins), and minor vein density. A number of authors have adopted simplified versions of these schemes. At its simplest, the primary vein types can be considered in three or four groups, depending on the plant divisions being considered: pinnate, palmate, and parallel, where palmate refers to multiple primary veins that radiate from the petiole, as opposed to branching from the central main vein in the pinnate form, and encompasses both Hickey types 4 and 5, which are preserved as subtypes; e.g., palmate-acrodromous (see National Park Service Leaf Guide).

Palmate, palmate-netted, palmate-veined, fan-veined Several main veins of approximately equal size diverge from a common point near the leaf base, where the petiole attaches, and radiate toward the edge of the leaf. Palmately veined leaves are often lobed or divided, with lobes radiating from the common point. They may vary in the number of primary veins (3 or more), but these always radiate from a common point. E.g. most Acer (maples).

Other systems
Alternatively, Simpson uses:
Uninervous Central midrib with no lateral veins (microphyllous); seen in the non-seed-bearing tracheophytes, such as horsetails.
Dichotomous Veins successively branching into equally sized veins from a common point, forming a Y junction and fanning out. Amongst temperate woody plants, Ginkgo biloba is the only species exhibiting dichotomous venation. Also some pteridophytes (ferns).
Parallel Primary and secondary veins roughly parallel to each other, running the length of the leaf, often connected by short perpendicular links rather than forming networks. In some species, the parallel veins join at the base and apex, such as needle-type evergreens and grasses. Characteristic of monocotyledons, but exceptions include Arisaema and, as below, under netted.
Netted (reticulate, pinnate) A prominent midvein with secondary veins branching off along both sides of it. The name derives from the ultimate veinlets, which form an interconnecting net-like pattern or network. (The primary and secondary venation may be referred to as pinnate, while the net-like finer veins are referred to as netted or reticulate.) Most non-monocot angiosperms, exceptions including Calophyllum. Some monocots have reticulate venation, including Colocasia, Dioscorea and Smilax.
However, these simplified systems allow for further division into multiple subtypes.
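A simplified key of this kind lends itself to being written out as a short decision procedure. The sketch below encodes the four broad groups just described (dichotomous, pinnate, parallel, palmate); the function, its parameters and the order of the tests are illustrative assumptions rather than a published key.

```python
def classify_venation(primary_veins: int, common_origin: bool,
                      runs_parallel: bool, forks_equally: bool) -> str:
    """Toy key for the simplified primary-vein groups described above.

    The inputs are hypothetical field observations: how many primary veins
    the leaf has, whether they diverge from a common point, whether they
    run parallel along the blade, and whether branching is by equal forks.
    """
    if forks_equally:
        return "dichotomous"  # e.g. Ginkgo biloba, some ferns
    if primary_veins == 1:
        return "pinnate"      # single midrib with lateral secondaries
    if runs_parallel:
        return "parallel"     # typical of monocots such as grasses
    if common_origin:
        return "palmate"      # several primaries radiating, e.g. most Acer
    return "unclassified"

print(classify_venation(1, False, False, False))  # pinnate
print(classify_venation(5, True, False, False))   # palmate
```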
Simpson (and others) divides parallel and netted venation (and some use only these two terms for angiosperms) on the basis of the number of primary veins (costa), as follows:
Parallel
Netted (reticulate)
These complex systems are not used much in morphological descriptions of taxa, but have usefulness in plant identification, although they have been criticized as being unduly burdened with jargon.

An older, even simpler system, used in some floras, employs only two categories, open and closed.
Open: Higher-order veins have free endings among the cells and are more characteristic of non-monocotyledon angiosperms. They are more likely to be associated with leaf shapes that are toothed, lobed or compound. They may be subdivided as:
Pinnate (feather-veined) leaves, with a main central vein or rib (midrib), from which the remainder of the vein system arises.
Palmate, in which three or more main ribs rise together at the base of the leaf and diverge upward.
Dichotomous, as in ferns, where the veins fork repeatedly.
Closed: Higher-order veins are connected in loops without ending freely among the cells. These tend to occur in leaves with smooth outlines, and are characteristic of monocotyledons. They may be subdivided into whether the veins run parallel, as in grasses, or have other patterns.

Other descriptive terms
There are also many other descriptive terms, often with very specialized usage and confined to specific taxonomic groups. The conspicuousness of veins depends on a number of features. These include the width of the veins, their prominence in relation to the lamina surface and the degree of opacity of the surface, which may hide finer veins. In this regard, veins may be described as obscure, with the order of veins that are obscured, and whether on the upper, lower or both surfaces, further specified.

Terms that describe vein prominence include bullate, channelled, flat, guttered, impressed, prominent and recessed (Fig. 6.1 Hawthorne & Lawrence 2013). Veins may show different types of prominence in different areas of the leaf. For instance, Pimenta racemosa has a channelled midrib on the upper surface, but this is prominent on the lower surface.

Describing vein prominence:
Bullate Surface of leaf raised in a series of domes between the veins on the upper surface, and therefore also with marked depressions. E.g. Rytigynia pauciflora, Vitis vinifera.
Channelled (canaliculate) Veins sunken below the surface, resulting in a rounded channel. Sometimes confused with "guttered" because the channels may function as gutters for rain to run off and allow drying, as in many Melastomataceae. E.g. (see) Pimenta racemosa (Myrtaceae), Clidemia hirta (Melastomataceae).
Guttered Veins partly prominent, the crest above the leaf lamina surface, but with channels running along each side, like gutters.
Impressed Vein forming a raised line or ridge which lies below the plane of the surface which bears it, as if pressed into it, and often exposed on the lower surface. Tissue near the veins often appears to pucker, giving them a sunken or embossed appearance.
Obscure Veins not visible, or not at all clear; if unspecified, then not visible with the naked eye. E.g. Berberis gagnepainii. In this Berberis, the veins are only obscure on the undersurface.
Prominent Vein raised above the surrounding surface, so as to be easily felt when stroked with a finger. E.g. (see) Pimenta racemosa, Spathiphyllum cannifolium.
Recessed Vein sunk below the surface, more prominent than the surrounding tissues but more sunken in a channel than impressed veins. E.g. Viburnum plicatum.
Describing other features:
Plinervy (plinerved) More than one main vein (nerve) at the base, with lateral secondary veins branching from a point above the base of the leaf. Usually expressed as a suffix, as in 3-plinerved or triplinerved leaf. In a 3-plinerved (triplinerved) leaf, three main veins branch above the base of the lamina (two secondary veins and the main vein) and run essentially parallel subsequently, as in Ceanothus and in Celtis. Similarly, a quintuplinerve (five-veined) leaf has four secondary veins and a main vein. A pattern with 3–7 veins is especially conspicuous in Melastomataceae. The term has also been used in Vaccinieae. It has been treated as synonymous with acrodromous, palmate-acrodromous or suprabasal acrodromous, and is thought to be too broadly defined.
Scalariform Veins arranged like the rungs of a ladder, particularly higher-order veins.
Submarginal Veins running close to the leaf margin.
Trinerved Two major basal nerves besides the midrib.

Size
The terms megaphyll, macrophyll, mesophyll, notophyll, microphyll, nanophyll and leptophyll are used to describe leaf sizes (in descending order), in a classification devised in 1934 by Christen C. Raunkiær and since modified by others.
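Because the Raunkiær classes partition a continuous measurement (blade area) into named bins, they translate naturally into a simple lookup. The sketch below uses area cut-offs commonly cited for the modified scheme; treat the exact numbers as an assumption, since the text above gives only the ordering of the classes.

```python
# Commonly cited upper area bounds (mm^2) for the modified Raunkiær
# leaf-size classes; the exact cut-offs are an assumption here.
LEAF_SIZE_CLASSES = [
    ("leptophyll", 25),
    ("nanophyll", 225),
    ("microphyll", 2_025),
    ("notophyll", 4_500),
    ("mesophyll", 18_225),
    ("macrophyll", 164_025),
]

def leaf_size_class(area_mm2: float) -> str:
    """Return the Raunkiær size class for a given leaf blade area."""
    for name, upper_bound in LEAF_SIZE_CLASSES:
        if area_mm2 <= upper_bound:
            return name
    return "megaphyll"  # anything larger than the macrophyll bound

print(leaf_size_class(1_500))    # microphyll
print(leaf_size_class(200_000))  # megaphyll
```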
Biology and health sciences
Plant: General
null
18973752
https://en.wikipedia.org/wiki/Coccinellidae
Coccinellidae
Coccinellidae () is a widespread family of small beetles. They are commonly known as ladybugs in North America and ladybirds in the United Kingdom; "lady" refers to mother Mary. Entomologists use the names ladybird beetles or lady beetles to avoid confusion with true bugs. The more than 6,000 described species have a global distribution and are found in a variety of habitats. They are oval beetles with a domed back and flat underside. Many of the species have conspicuous aposematic (warning) colours and patterns, such as red with black spots, that warn potential predators that they taste bad. Most coccinellid species are carnivorous predators, preying on insects such as aphids and scale insects. Other species are known to consume non-animal matter, including plants and fungi. They are promiscuous breeders, reproducing in spring and summer in temperate regions and during the wet season in tropical regions. Many predatory species lay their eggs near colonies of prey, providing their larvae with a food source. Like most insects, they develop from larva to pupa to adult. Temperate species hibernate and diapause during the winter; tropical species are dormant during the dry season. Coccinellids migrate between dormancy and breeding sites. Species that prey on agricultural pests are considered beneficial insects. Several species have been introduced outside their range as biological control agents, with varying degrees of success. Some species are pests themselves and attack agricultural crops, or can infest people's homes, particularly in winter. Invasive species like Harmonia axyridis can pose an ecological threat to native coccinellid species. Other threats to coccinellids include climate change and habitat destruction. These insects have played roles in folklore, religion and poetry, and are particularly popular in nursery rhymes.

Etymology
The name Coccinellidae, created by Pierre André Latreille in 1807, is derived from the Latin word coccineus, meaning "scarlet". The common English name ladybird originated in Britain, where the insects became known as "Our Lady's birds". Mary ("Our Lady") was often depicted wearing a red cloak in early art, and the seven spots of the species Coccinella septempunctata (the most common in Europe) were said to represent her seven joys and seven sorrows. In the United States, the name was popularly adapted to ladybug. Entomologists prefer the names ladybird beetles or lady beetles to avoid confusion with true bugs. Names in some other countries may be similar; for example, in Germany they are known as Marienkäfer, meaning "Mary's beetle".

Description
Coccinellids range in size from . They are sexually dimorphic; adult females tend to be slightly larger than males. They are generally oval with domed backs and flattened undersides. They have large compound eyes and clubbed antennae with seven to eleven segments. The powerful mandibles (equivalent to jaws) typically have pairs of "teeth" which face each other. The coccinellid prothorax (front of thorax) is broad and convex, and can cover the back of the head. Being beetles, they have hardened, non-overlapping forewings, known as elytra, which cover up the more fragile hindwings when the insects are not in flight. Their legs are relatively short, with a tarsal formula of 4-4-4 (which may appear 3-3-3 because the third segment of each tarsus is reduced). The tarsus (end of leg) has two claws at the tip.
As adults, these beetles differ from their closest relatives in the following morphological characteristics:
Five pairs of spiracles (holes) on the abdomen
A tentorium (internal supports inside the head) with separated branches at the front and no bridge
No line dividing the frons and clypeus (frontoclypeal suture)
Maxillary palps with non-needle-shaped tips
Divided galea and lacinia (lobes at the end of the mouthparts)
Smaller molar (flattened) area of the mandible
Coxal cavities (holes where the leg articulates with the thorax) that open from the back in the front of the thorax and from the front in the middle of the thorax
Epimeron (corner plates) on the metathorax with parallel edges
Lines on the second abdominal sternum
Tube-shaped, siphon-like genitalia in the male

Coccinellids are often distinctively coloured and patterned. The elytron may be light with dark spots or dark with light spots. Light areas are typically yellow, red, orange or brown, and the spots vary in size, shape and number. Some species have striped or checkered patterns. The pigment carotene creates the lighter colours, and melanins create the darker colours. Other parts of the body also vary in colouration. These colour patterns typically serve as warning colouration, but some can act as camouflage, attract mates or even regulate heat. Several individual species may display polymorphism and even change colour between seasons.

Coccinellid larvae are elongated with square heads. They are covered in hairs or setae; the abdominal segments in particular each have six, divided into pairs. The larvae have one- to three-segmented antennae. Their colouration varies from grey, blue-grey, grey-brown or brown, spotted with white, yellow, red or orange. They tend to brighten as they get closer to adulthood.

Evolution
Fossil history
Over 6,000 living species of Coccinellidae have been described. They are sparsely preserved in the fossil record. Although molecular clock estimates have placed their origin in the Cretaceous, the oldest fossils of the group are known from the Oise amber of France, dating to the Early Eocene (Ypresian), around 53 million years ago; these belong to the extant genera Rhyzobius and Nephus. The greatest number of fossils comes from the younger Eocene Baltic amber, including members of the extant genera Serangium and Rhyzobius as well as extinct genera belonging to the tribes Microweiseini (Baltosidis) and Sticholotidini (Electrolotis).

Phylogeny
The Coccinellidae are within the superfamily Coccinelloidea, which in turn is part of the infraorder Cucujiformia, a group containing most of the plant-eating beetles. The ladybirds form the majority of the species in the Coccinelloidea; many of the rest are fungus-feeding beetles or scavengers. Coccinellidae have historically been divided into up to seven subfamilies (Chilocorinae, Coccidulinae, Coccinellinae, Epilachninae, Microweiseinae, Scymninae and Sticholotidinae) and 35 tribes based on morphology. However, genetic studies have called into question the monophyly (single ancestry) of most of these subfamilies. The monophyly of Coccinellinae has the most support. A 2021 genetic study sampling many species identified three subfamilies: Microweiseinae (with three tribes), Coccinellinae (26 tribes) and a newly identified group, the Monocoryninae (one tribe). All three subfamilies were strongly supported, but the study noted that although the tribes are mostly monophyletic, their relationships are only weakly supported.
The study suggests that the crown group appeared some 143 Mya in the Early Cretaceous, and that the group diversified rapidly during the Late Cretaceous, perhaps because the growing diversity of angiosperm plants encouraged the radiation of insects of the clade Sternorrhyncha, such as aphids, on which ladybirds could feed. An earlier 2009 study concluded that consumption of scale insects is the most basal diet of Coccinellidae. Aphid-eating evolved three separate times and leaf-eating evolved twice, one of which evolved from a clade that contains both aphid-eating and pollen-eating. Fungus-eating also evolved from aphid-eating.

Biology and ecology
Flight
Coccinellids mostly fly during the day. Springy, cylindrical veins in the hindwings stiffen in flight and bend when folding. Folding of the wings is further aided by creases in the membrane. These beetles may migrate long distances to hibernation and breeding sites, and to areas with more food. They appear to be drawn to recognisable landmarks. The more crowded an area is, the more individuals leave, but they will remain if there are enough prey species to feed on. "Trivial flights" refer to flying while foraging or when finding a place to lay eggs. One study of species in Britain found that coccinellids can fly as far as . They flew at speeds of and could reach altitudes close to .

Life cycle
In temperate climates, coccinellids typically breed from late spring to early summer. In warmer temperate regions, reproduction may occur in spring, fall and winter; tropical species reproduce during the wet season. Mating is promiscuous. In some species, females appear to be selective in their partners, preferring males of a certain size and colour. Males produce sperm packets each containing 14,000 sperm, and insert three of them into the female, even though she can only hold 18,000 sperm. This is likely a form of sperm competition (see the worked figures after this section).

Like other insects, coccinellids develop from egg, to larva, to pupa and finally adult. Eggs tend to be bright yellow, and the females lay them close together, standing on end, near an accessible food source. The number of eggs in a cluster varies depending on the species; it is typically in the double digits, but some species can lay over a thousand eggs in their lifetime. After hatching, the larvae will begin eating, including the other eggs in their clutch. Certain species lay extra infertile trophic eggs alongside the fertile eggs, providing a backup food source for the larvae when they hatch. The ratio of infertile to fertile eggs increases with scarcity of food at the time of egg laying. Larvae typically have four instar stages with three moults between them. The larva eventually transitions into a pupa, a process that involves the development of a hunch, the fusion of the legs to the body, and the attachment of the posterior to the surface. Pupae may be uncovered, partially covered or fully covered by larval skin, depending on the species. The pupa is mostly immobile, but the head can move in response to irritation. When the adult emerges, its hindwings are fully formed, while the elytra start out softer and lighter in colour, with no patterns. The length of each development stage varies with climate and between species. For Adalia bipunctata, eggs hatch after four to eight days, the larva stage lasts around three weeks and the pupa lasts seven to ten days. Adult coccinellids develop much of their final colouration within hours, but may not fully darken for weeks or months. The lifespan of an adult reaches up to a year.
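As a back-of-envelope check on the sperm-competition figures mentioned in the life cycle above, the arithmetic can be made explicit: three packets deliver well over twice what the female can store, so later matings can displace sperm from earlier ones. A minimal sketch, using only the numbers given in the text:

```python
# Worked figures for the sperm-competition claim above.
sperm_per_packet = 14_000
packets_inserted = 3
female_capacity = 18_000

delivered = sperm_per_packet * packets_inserted  # 42,000 sperm transferred
surplus = delivered - female_capacity            # 24,000 beyond storage capacity
print(delivered, surplus, round(delivered / female_capacity, 2))  # 42000 24000 2.33
```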
In temperate areas, coccinellids may hibernate or enter diapause during the winter. Individuals during this period gather in clumps, large or small depending on the species. Overwintering insects can be found both in lowland areas, aggregating under dead vegetation, and at the tops of hills, hibernating under rocks and on grass tussocks. In areas with particularly hot summers, the insects experience summer dormancy or aestivation; in the tropics, coccinellids enter dormancy during the dry season.

Trophic roles
Coccinellids act as predators, prey and parasite hosts in food webs. The majority of coccinellids are carnivorous and predatory, typically preying on Sternorrhyncha insects like aphids, scale insects, whiteflies, psyllids and adelgids. Some species feed on the larvae of moths and other beetles, as well as mites. Since much of their prey are agricultural pests, coccinellids are considered to be beneficial insects. A 2009 metastudy by Hodek and Honěk found that aphid-eaters constituted around 68 percent of species that live in temperate areas but only 20 percent of species worldwide. Around 36 percent of total species mostly feed on scale insects. Larvae and adults eat the same foods, unlike in other insect groups.

Ladybird species vary in dietary specificity. An example of a specialist group is the genus Stethorus, whose species feed on spider mites. Aphid-eaters tend to be generalists; they have a high voracity, can multiply quickly in response to outbreaks, and switch to other prey when the ephemeral aphids become scarce. Predators of scale insects tend to be less voracious and are slower breeders and developers, matching their prey. Under pressure from coccinellid predation, aphid species have evolved to become more toxic, forcing coccinellids to develop immunities. Coccinellid predators of aphids also need to defend themselves against the ants that tend and defend aphids for their honeydew; such ants dispose of coccinellid eggs laid near aphids. Some species, including Coccinella magnifica and those of Diomus, have adapted to grow within ant nests as larvae, and some, like Diomus thoracicus, are predators of the brood of the ant Wasmannia auropunctata. Cannibalism has been recorded in several species; this includes larvae eating eggs or other larvae, and adults feeding on individuals of any life stage.

Some coccinellids are mostly non-predatory, such as some species in the genera Epilachna and Henosepilachna. The majority of predatory species may also supplement their diet with other sources of food, both in their larval and adult stages. Non-animal matter consumed includes leaves, pollen, nectar, sap, fungi, and honeydew. Members of the tribe Halyziini of the subfamily Coccinellinae are obligate fungus feeders.

Coccinellids of any life stage are preyed on by predators such as birds, spiders, ants and lacewings. They are also hosts for parasites, including some flies, ticks, mites, hymenopterans and nematodes, and pathogens, including bacteria, fungi and protozoa. The bacterium Wolbachia infects eggs and kills male zygotes. The promiscuity of coccinellids has led to their being affected by sexually transmitted infections.

Defense
The bright warning colouration of many coccinellids discourages potential predators, warning of their toxicity. A 2015 study of five ladybird species found that their colouration honestly signalled their toxicity, implying the warning is genuine. Species with more contrast against the background environment tended to be more toxic.
Coccinellid haemolymph (blood) contains toxic alkaloids, azamacrolides and polyamines, as well as foul-smelling pyrazines. Coccinellids can produce at least 50 types of alkaloids. When disturbed, ladybirds further defend themselves with reflex bleeding, exuding drops from their tibio-femoral (knee) joints, effectively presenting predators with a sample of their toxic and bitter body fluid. Predator-deterring poisons are particularly important for the immobile pupa. Access to food can affect the concentration of both pigments and toxins.

The similarity of coccinellid patterning in red and orange with black markings has led to suggestions that they and some species of chrysomelids form Müllerian mimicry rings, particularly as a defence against birds. Despite their chemical defenses, coccinellids are preyed on by some clerid beetles in the genus Enoclerus, several species of which are brightly coloured in red and black, and which possibly sequester the toxins of their prey to defend themselves against other predators. As an anti-predator defense, spiders of the genus Eresus, known as ladybird spiders, have evolved to replicate the patterns of coccinellids. This is a form of Batesian mimicry, as the spiders lack the chemical defenses. The resemblance is limited to adult male spiders, which are actively searching for females and therefore exposed – unlike the females and young, which remain sheltered in burrows.

Distribution and status
Coccinellidae are found on every continent except Antarctica. Asian and African species are less studied than others. Coccinellids can be found in a variety of habitats, both on the ground and in the trees. They may specialise on certain plants. Some species can live in extreme environments such as high mountains, arid deserts and cold regions. Several of the most famous species have wide ranges, but others are narrowly endemic and possibly threatened.

Threats to coccinellids include climate change, agriculture, urbanisation, and invasive species. Coccinellid biodiversity will likely be affected by the rise of both average temperatures and heat fluctuations. Climate change may lead to smaller larvae, as well as increased energy and metabolic needs and more interspecific predation. Agriculture and urbanisation threaten these insects through habitat destruction and homogenisation and the use of pesticides. Invasive threats include other coccinellids, particularly C. septempunctata in North America and H. axyridis globally. These invaders outcompete the native species and also eat their eggs. As of 2022, the IUCN Red List does not list the conservation status of any coccinellid, though there is an IUCN SSC Ladybird Specialist Group. Conservationists have suggested several measures for protecting the insects, including citizen science and education programs, habitat preservation and restoration, prevention of the spread of invasive species, and a global monitoring program.

Relationship to humans
Biological control
Coccinellids have been valued in biological pest control, as they prey on agricultural pests such as aphids and scale insects. Their importance in controlling pests was noted as far back as 1814 in England. Their efficiency can vary: sometimes they have a relatively small effect on aphid populations; at other times they cause significant seasonal declines. Several species have been introduced to areas outside their native range, the first being the vedalia beetle, Novius cardinalis.
The species was introduced to California in 1887 from Australia, to protect citrus trees from cottony cushion scale. The project was markedly successful, costing $1,500 in 1889, making it "a textbook example of the great potential of classical biological control as a tactic for suppressing invasive pests." The beetle was then used in 29 countries, again with success; reasons for this include its high prey specificity, fast development, multiple generations each year, efficient discovery of host patches, and larval development completed on a single host insect. There have been many further attempts to use ladybird species against pests, with varying degrees of success. Scale insect-eating coccinellids have been more successfully used than aphid predators. Out of 155 deliberate introductions meant to control aphids by the year 2000, only one was deemed to be "substantially successful". This is due to aphid-eating species being fast-breeding, generalist and voracious, and thus difficult to control.

As pests
Coccinellids can also act as pests. Harmonia axyridis is native to East Asia, but has been introduced to the Americas, Europe and Africa. In North America, this species begins to appear indoors in the autumn, when the beetles leave their summer feeding sites to search out places to stay for winter. Typically, when temperatures warm to the mid-60s °F (around 18 °C) in the late afternoon, they swarm from nearby fields and forests onto or into buildings illuminated by the sun. After an abnormally long period of hot, dry weather in the summer of 1976 in the UK, a marked increase in the aphid population was followed by a "plague" of the native Coccinella septempunctata; there were many reports of people being bitten as the supply of aphids dwindled. H. axyridis, C. septempunctata and Hippodamia convergens are the most common causes of ladybird taint in wine. As few as 1.3 to 1.5 coccinellids per of grapes can affect wine quality when they are present during the wine-making process. The Mexican bean beetle is an agricultural pest, as it primarily feeds on plants, especially legumes, instead of insects.

In culture
Coccinellids have had important roles in culture and religion, being associated with luck, love, fertility and prophecy. "Ladybird" is also used as an affectionate term for a loved one. In European folklore, the insect acts as a matchmaker, crawling on a woman and then flying to her true love. Coccinellids have been said to predict the future, particularly weather conditions and how well the crops will grow. In Christianity, coccinellids have been seen as the literal gatekeepers of Heaven. A Swedish name for the insects, Himmelska nycla, means "Keys of Heaven". Jews have referred to the insects as the "Cow of Moses our Teacher". The Cherokee have revered them as the "Great Beloved Woman"; this was used as a title for the highest-ranking woman in the government, who would be painted in the colours and patterns of the insect during ceremonies. Coccinellids have been popularly featured in poems and nursery rhymes, the most famous being Ladybird! Ladybird!, which has appeared in several forms.
Biology and health sciences
Beetles (Coleoptera)
null
18973869
https://en.wikipedia.org/wiki/Psychiatry
Psychiatry
Psychiatry is the medical specialty devoted to the diagnosis, prevention, and treatment of deleterious mental conditions. These include various matters related to mood, behaviour, cognition, perceptions, and emotions. Initial psychiatric assessment of a person begins with creating a case history and conducting a mental status examination. Physical examinations, psychological tests, and laboratory tests may be conducted. On occasion, neuroimaging or other neurophysiological studies are performed. Mental disorders are diagnosed in accordance with diagnostic manuals such as the International Classification of Diseases (ICD), edited by the World Health Organization (WHO), and the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association (APA). The fifth edition of the DSM (DSM-5), published in May 2013, reorganized the categories of disorders and added newer information and insights consistent with current research. Treatment may include psychotropics (psychiatric medicines), interventional approaches and psychotherapy, and also other modalities such as assertive community treatment, community reinforcement, substance-abuse treatment, and supported employment. Treatment may be delivered on an inpatient or outpatient basis, depending on the severity of functional impairment or risk to the individual or community. Research within psychiatry is conducted on an interdisciplinary basis with other professionals, such as epidemiologists, nurses, social workers, occupational therapists, and clinical psychologists.

Etymology
The term psychiatry was first coined by the German physician Johann Christian Reil in 1808 and literally means the 'medical treatment of the soul' (ψυχή psych- 'soul' from Ancient Greek psykhē 'soul'; -iatry 'medical treatment' from Gk. ιατρικός iātrikos 'medical' from ιάσθαι iāsthai 'to heal'). A medical doctor specializing in psychiatry is a psychiatrist (for a historical overview, see: Timeline of psychiatry).

Theory and focus
Psychiatry refers to a field of medicine focused specifically on the mind, aiming to study, prevent, and treat mental disorders in humans. It has been described as an intermediary between the world from a social context and the world from the perspective of those who are mentally ill. People who specialize in psychiatry often differ from most other mental health professionals and physicians in that they must be familiar with both the social and biological sciences. The discipline studies the operations of different organs and body systems as classified by the patient's subjective experiences and the objective physiology of the patient. Psychiatry treats mental disorders, which are conventionally divided into three general categories: mental illnesses, severe learning disabilities, and personality disorders. Although the focus of psychiatry has changed little over time, the diagnostic and treatment processes have evolved dramatically and continue to do so. Since the late 20th century, the field of psychiatry has continued to become more biological and less conceptually isolated from other medical fields.

Scope of practice
Though the medical specialty of psychiatry uses research from the fields of neuroscience, psychology, medicine, biology, biochemistry, and pharmacology, it has generally been considered a middle ground between neurology and psychology.
Because psychiatry and neurology are deeply intertwined medical specialties, all certification for both specialties and for their subspecialties is offered by a single board, the American Board of Psychiatry and Neurology, one of the member boards of the American Board of Medical Specialties. Unlike other physicians, including neurologists, psychiatrists specialize in the doctor–patient relationship and are trained to varying extents in the use of psychotherapy and other therapeutic communication techniques. Psychiatrists also differ from psychologists in that they are physicians and have post-graduate training called residency (usually four to five years) in psychiatry; the quality and thoroughness of their graduate medical training is identical to that of all other physicians. Psychiatrists can therefore counsel patients, prescribe medication, order laboratory tests, order neuroimaging, and conduct physical examinations. In addition, some psychiatrists are trained in interventional psychiatry and can deliver interventional treatments such as electroconvulsive therapy, transcranial magnetic stimulation, vagus nerve stimulation and ketamine.

Ethics
The World Psychiatric Association, like other bodies that set professional ethics, issues an ethical code to govern the conduct of psychiatrists. The psychiatric code of ethics, first set forth through the Declaration of Hawaii in 1977, has been expanded through a 1983 Vienna update and the broader Madrid Declaration in 1996. The code was further revised during the organization's general assemblies in 1999, 2002, 2005, and 2011. The World Psychiatric Association code covers such matters as confidentiality, the death penalty, ethnic or cultural discrimination, euthanasia, genetics, the human dignity of incapacitated patients, media relations, organ transplantation, patient assessment, research ethics, sex selection, torture, and up-to-date knowledge. In establishing such ethical codes, the profession has responded to a number of controversies about the practice of psychiatry, for example, surrounding the use of lobotomy and electroconvulsive therapy. Discredited psychiatrists who operated outside the norms of medical ethics include Harry Bailey, Donald Ewen Cameron, Samuel A. Cartwright, Henry Cotton, and Andrei Snezhnevsky.

Approaches
Psychiatric illnesses can be conceptualised in a number of different ways. The biomedical approach examines signs and symptoms and compares them with diagnostic criteria. Mental illness can be assessed, conversely, through a narrative which tries to incorporate symptoms into a meaningful life history and to frame them as responses to external conditions. Both approaches are important in the field of psychiatry, but they have not been sufficiently reconciled to settle controversy over either the selection of a psychiatric paradigm or the specification of psychopathology. The notion of a "biopsychosocial model" is often used to underline the multifactorial nature of clinical impairment, though in this notion the word model is not used in a strictly scientific way. Alternatively, Niall McLaren acknowledges the physiological basis for the mind's existence but identifies cognition as an irreducible and independent realm in which disorder may occur.
The biocognitive approach includes a mentalist etiology and provides a natural-dualist (i.e., non-spiritual) revision of the biopsychosocial view, reflecting the efforts of Australian psychiatrist Niall McLaren to bring the discipline into scientific maturity in accordance with the paradigmatic standards of philosopher Thomas Kuhn.

Once a medical professional diagnoses a patient, there are numerous ways they could choose to treat the patient. Often psychiatrists will develop a treatment strategy that incorporates facets of several different approaches into one. Drug prescriptions are very commonly written alongside any therapy patients receive. There are three major pillars of psychotherapy from which treatment strategies are most regularly drawn. Humanistic psychology attempts to put the "whole" of the patient in perspective; it also focuses on self-exploration. Behaviorism is a therapeutic school of thought that elects to focus solely on real and observable events, rather than mining the unconscious or subconscious. Psychoanalysis, on the other hand, concentrates its dealings on early childhood, irrational drives, the unconscious, and conflict between conscious and unconscious streams.

Practitioners
All physicians can diagnose mental disorders and prescribe treatments utilizing principles of psychiatry. Psychiatrists are trained physicians who specialize in psychiatry and are certified to treat mental illness. They may treat outpatients, inpatients, or both; they may practice as solo practitioners or as members of groups; they may be self-employed, be members of partnerships, or be employees of governmental, academic, nonprofit, or for-profit entities, or employees of hospitals; they may treat military personnel as civilians or as members of the military; and in any of these settings they may function as clinicians, researchers, teachers, or some combination of these. Although psychiatrists may also go through significant training to conduct psychotherapy, psychoanalysis or cognitive behavioral therapy, it is their training as physicians that differentiates them from other mental health professionals.

As a career choice in the US
Psychiatry has historically not been a popular career choice among medical students, even though medical school placements are rated favorably. This has resulted in a significant shortage of psychiatrists in the United States and elsewhere. Strategies to address this shortfall have included the use of short 'taster' placements early in the medical school curriculum and attempts to extend psychiatry services further using telemedicine technologies and other methods. Recently, however, there has been an increase in the number of medical students entering psychiatry residencies. There are several reasons for this surge, including the intriguing nature of the field, growing interest in genetic biomarkers involved in psychiatric diagnoses, and newer pharmaceuticals on the drug market to treat psychiatric illnesses.

Subspecialties
The field of psychiatry has many subspecialties that require additional training and certification by the American Board of Psychiatry and Neurology (ABPN).
Such subspecialties include:
Addiction psychiatry, addiction medicine
Brain injury medicine
Child and adolescent psychiatry
Consultation-liaison psychiatry
Forensic psychiatry
Geriatric psychiatry
Hospice and palliative medicine
Sleep medicine
Additional psychiatry subspecialties, for which the ABPN does not provide formal certification, include:
Biological psychiatry
Community psychiatry
Cross-cultural psychiatry
Emergency psychiatry
Evolutionary psychiatry
Global mental health
Learning disabilities
Military psychiatry
Neurodevelopmental disorders
Neuropsychiatry
Interventional psychiatry
Social psychiatry

Addiction psychiatry focuses on the evaluation and treatment of individuals with alcohol, drug, or other substance-related disorders, and of individuals with a dual diagnosis of substance-related and other psychiatric disorders. Biological psychiatry is an approach to psychiatry that aims to understand mental disorders in terms of the biological function of the nervous system. Child and adolescent psychiatry is the branch of psychiatry that specializes in work with children, teenagers, and their families. Community psychiatry is an approach that reflects an inclusive public health perspective and is practiced in community mental health services. Cross-cultural psychiatry is a branch of psychiatry concerned with the cultural and ethnic context of mental disorder and psychiatric services. Emergency psychiatry is the clinical application of psychiatry in emergency settings. Forensic psychiatry utilizes medical science generally, and psychiatric knowledge and assessment methods in particular, to help answer legal questions. Geriatric psychiatry is a branch of psychiatry dealing with the study, prevention, and treatment of mental disorders in the elderly. Global mental health is an area of study, research and practice that places a priority on improving mental health and achieving equity in mental health for all people worldwide, although some scholars consider it to be a neo-colonial, culturally insensitive project. Liaison psychiatry is the branch of psychiatry that specializes in the interface between other medical specialties and psychiatry. Military psychiatry covers special aspects of psychiatry and mental disorders within the military context. Neuropsychiatry is a branch of medicine dealing with mental disorders attributable to diseases of the nervous system. Social psychiatry is a branch of psychiatry that focuses on the interpersonal and cultural context of mental disorder and mental well-being.

In larger healthcare organizations, psychiatrists often serve in senior management roles, where they are responsible for the efficient and effective delivery of mental health services for the organization's constituents. For example, the Chief of Mental Health Services at most VA medical centers is usually a psychiatrist, although psychologists occasionally are selected for the position as well. In the United States, psychiatry is one of the few specialties which qualify for further education and board certification in pain medicine, palliative medicine, and sleep medicine.

Research
Psychiatric research is, by its very nature, interdisciplinary, combining social, biological and psychological perspectives in an attempt to understand the nature and treatment of mental disorders. Clinical and research psychiatrists study basic and clinical psychiatric topics at research institutions and publish articles in journals.
Under the supervision of institutional review boards, psychiatric clinical researchers look at topics such as neuroimaging, genetics, and psychopharmacology in order to enhance diagnostic validity and reliability, to discover new treatment methods, and to classify new mental disorders.

Clinical application

Diagnostic systems

Psychiatric diagnoses take place in a wide variety of settings and are performed by many different health professionals; the diagnostic procedure may therefore vary greatly based on these factors. Typically, though, a psychiatric diagnosis utilizes a differential diagnosis procedure in which a mental status examination and physical examination are conducted; pathological, psychopathological, or psychosocial histories are obtained; and sometimes neuroimages or other neurophysiological measurements are taken, or personality tests or cognitive tests administered. In some cases, a brain scan might be used to rule out other medical illnesses, but at this time brain scans alone cannot accurately diagnose a mental illness or predict the risk of developing one in the future. Some clinicians are beginning to utilize genetics and automated speech assessment during the diagnostic process, but on the whole these remain research topics.

Potential use of MRI/fMRI in diagnosis

In 2018, the American Psychological Association commissioned a review to reach a consensus on whether modern clinical MRI/fMRI could be used in the diagnosis of mental health disorders. The criteria presented by the APA stated that biomarkers used in diagnosis should:

"have a sensitivity of at least 80% for detecting a particular psychiatric disorder"
"should have a specificity of at least 80% for distinguishing this disorder from other psychiatric or medical disorders"
"should be reliable, reproducible, and ideally be noninvasive, simple to perform, and inexpensive"
"proposed biomarkers should be verified by 2 independent studies each by a different investigator and different population samples and published in a peer-reviewed journal"

(Here, sensitivity is the proportion of people with the disorder whom the biomarker correctly identifies, and specificity is the proportion of people without the disorder whom it correctly excludes.) The review concluded that although neuroimaging diagnosis may technically be feasible, the very large studies needed to evaluate specific biomarkers were not available.

Diagnostic manuals

Three main diagnostic manuals used to classify mental health conditions are in use today. The ICD-11, produced and published by the World Health Organization, includes a section on psychiatric conditions and is used worldwide. The Diagnostic and Statistical Manual of Mental Disorders, produced and published by the American Psychiatric Association (APA), is primarily focused on mental health conditions and is the main classification tool in the United States; currently in its fifth revised edition, it is also used worldwide. The Chinese Society of Psychiatry has also produced a diagnostic manual, the Chinese Classification of Mental Disorders.

The stated intention of diagnostic manuals is typically to develop replicable and clinically useful categories and criteria and to facilitate consensus and agreed-upon standards, whilst being atheoretical as regards etiology. However, the categories are nevertheless based on particular psychiatric theories and data; they are broad and often specified by numerous possible combinations of symptoms, and many of the categories overlap in symptomatology or typically occur together.
While originally intended only as a guide for experienced clinicians trained in its use, the nomenclature is now widely used by clinicians, administrators, and insurance companies in many countries. The DSM has attracted praise for standardizing psychiatric diagnostic categories and criteria, but it has also attracted controversy and criticism. Some critics argue that the DSM represents an unscientific system that enshrines the opinions of a few powerful psychiatrists. There are ongoing issues concerning the validity and reliability of the diagnostic categories; the reliance on superficial symptoms; the use of artificial dividing lines between categories and from 'normality'; possible cultural bias; the medicalization of human distress; financial conflicts of interest, including with the practice of psychiatrists and with the pharmaceutical industry; political controversies about the inclusion or exclusion of diagnoses from the manual, in general or in regard to specific issues; and the experience of those who are most directly affected by the manual by being diagnosed, including the consumer/survivor movement.

Treatment

General considerations

Individuals receiving psychiatric treatment are commonly referred to as patients but may also be called clients, consumers, or service recipients. They may come under the care of a psychiatric physician or other psychiatric practitioners by various paths, the two most common being self-referral and referral by a primary care physician. Alternatively, a person may be referred by hospital medical staff, by court order, by involuntary commitment, or, in countries such as the UK and Australia, by sectioning under a mental health law.

A psychiatrist or other medical provider evaluates people through a psychiatric assessment of their mental and physical condition. This usually involves interviewing the person and often obtaining information from other sources such as other health and social care professionals, relatives, associates, law enforcement personnel, emergency medical personnel, and psychiatric rating scales. A mental status examination is carried out, and a physical examination is usually performed to establish or exclude other illnesses that may be contributing to the alleged psychiatric problems. A physical examination may also serve to identify any signs of self-harm; this examination is often performed by someone other than the psychiatrist, especially if blood tests and medical imaging are performed.

Like most medications, psychiatric medications can cause adverse effects in patients, and some require ongoing therapeutic drug monitoring, for instance full blood counts, serum drug levels, renal function, liver function, or thyroid function. Electroconvulsive therapy (ECT) is sometimes administered for serious conditions, such as those unresponsive to medication. The efficacy and adverse effects of psychiatric drugs may vary from patient to patient.

Inpatient treatment

Psychiatric treatments have changed over the past several decades. In the past, psychiatric patients were often hospitalized for six months or more, with some cases involving hospitalization for many years. The average inpatient psychiatric stay has decreased significantly since the 1960s, a trend known as deinstitutionalization. Today, in most countries, people receiving psychiatric treatment are more likely to be seen as outpatients. If hospitalization is required, the average hospital stay is around one to two weeks, with only a small number of patients receiving long-term hospitalization.
However, in Japan psychiatric hospitals continue to keep patients for long periods, sometimes even keeping them in physical restraints, strapped to their beds, for periods of weeks or months.

Psychiatric inpatients are people admitted to a hospital or clinic to receive psychiatric care. Some are admitted involuntarily, perhaps committed to a secure hospital or, in some jurisdictions, to a facility within the prison system. In many countries, including the United States and Canada, the criteria for involuntary admission vary with local jurisdiction. They may be as broad as having a mental health condition, or as narrow as being an immediate danger to oneself or others. Bed availability is often the real determinant of admission decisions at hard-pressed public facilities. People may be admitted voluntarily if the treating doctor considers that safety is not compromised by this less restrictive option.

For many years, controversy has surrounded the use of involuntary treatment and the use of the term "lack of insight" in describing patients. Mental health laws vary significantly internationally, but in many cases involuntary psychiatric treatment is permitted when there is deemed to be a significant risk to the patient or others due to the patient's illness. Involuntary treatment refers to treatment that occurs based on a treating physician's recommendations without requiring consent from the patient.

Inpatient psychiatric wards may be secure (for those thought to be at particular risk of violence or self-harm) or unlocked/open. Some wards are mixed-sex, whilst same-sex wards are increasingly favored to protect women inpatients. Once in the care of a hospital, people are assessed, monitored, and often given medication and care from a multidisciplinary team, which may include physicians, pharmacists, psychiatric nurse practitioners, psychiatric nurses, clinical psychologists, psychotherapists, psychiatric social workers, occupational therapists, and social workers. If a person receiving treatment in a psychiatric hospital is assessed as at particular risk of harming themselves or others, they may be put on constant or intermittent one-to-one supervision and may be put in physical restraints or medicated. People on inpatient wards may be allowed leave for periods of time, either accompanied or on their own.

In many developed countries there has been a massive reduction in psychiatric beds since the mid-20th century, with the growth of community care. Italy has been a pioneer in psychiatric reform, particularly through the no-restraint initiative that began nearly fifty years ago. The Italian movement, heavily influenced by Franco Basaglia, emphasizes ethical treatment and the elimination of physical restraints in psychiatric care. A study examining the application of these principles in Italy found that 14 general hospital psychiatric units reported zero restraint incidents in 2022.

Standards of inpatient care remain a challenge in some public and private facilities, due to levels of funding, and facilities in developing countries are typically grossly inadequate for the same reason. Even in developed countries, programs in public hospitals vary widely. Some may offer structured activities and therapies from many perspectives, while others may only have the funding for medicating and monitoring patients. This may be problematic in that the maximum amount of therapeutic work might not actually take place in the hospital setting.
This is why hospitals are increasingly used in limited situations and moments of crisis where patients are a direct threat to themselves or others. Alternatives to psychiatric hospitals that may actively offer more therapeutic approaches include rehabilitation centers, popularly termed "rehab".

Outpatient treatment

Outpatient treatment involves periodic visits to a psychiatrist for consultation in the psychiatrist's office or at a community-based outpatient clinic. During initial appointments, a psychiatrist generally conducts a psychiatric assessment or evaluation of the patient. Follow-up appointments then focus on making medication adjustments, reviewing potential medication interactions, considering the impact of other medical disorders on the patient's mental and emotional functioning, and counseling patients regarding changes they might make to facilitate healing and remission of symptoms. The frequency with which a psychiatrist sees people in treatment varies widely, from once a week to twice a year, depending on the type, severity, and stability of each person's condition, and on what the clinician and patient decide would be best.

Increasingly, psychiatrists are limiting their practices to psychopharmacology (prescribing medications), as opposed to the previous practice in which a psychiatrist would provide traditional 50-minute sessions in which psychopharmacology played a part but most of the consultation consisted of "talk therapy". This shift began in the early 1980s and accelerated in the 1990s and 2000s. A major reason for the change was the advent of managed care insurance plans, which began to limit reimbursement for psychotherapy sessions provided by psychiatrists. The underlying assumption was that psychopharmacology was at least as effective as psychotherapy and could be delivered more efficiently because less time is required for the appointment. Because of this shift in practice patterns, psychiatrists often refer patients whom they think would benefit from psychotherapy to other mental health professionals, e.g., clinical social workers and psychologists.

Telepsychiatry

History

Earliest knowledge

The earliest known texts on mental disorders are from ancient India and include the Ayurvedic text, the Charaka Samhita. The first hospitals for curing mental illness were established in India during the 3rd century BCE. Greek philosophers, including Thales, Plato, and Aristotle (especially in his De Anima treatise), also addressed the workings of the mind. As early as the 4th century BCE, the Greek physician Hippocrates theorized that mental disorders had physical rather than supernatural causes. In 387 BCE, Plato suggested that the brain is where mental processes take place. In Greece of the 5th to 4th centuries BCE, Hippocrates wrote that he visited Democritus and found him in his garden cutting open animals; Democritus, who had with him a book on madness and melancholy, explained that he was attempting to discover the cause of madness and melancholy, and Hippocrates praised his work. During the 5th century BCE, mental disorders, especially those with psychotic traits, were considered supernatural in origin, a view which existed throughout ancient Greece and Rome, as well as Egyptian regions. Alcmaeon believed the brain, not the heart, was the "organ of thought". He tracked the ascending sensory nerves from the body to the brain, theorizing that mental activity originated in the central nervous system and that the cause of mental illness resided within the brain.
He applied this understanding to classify mental diseases and treatments. Religious leaders often turned to versions of exorcism to treat mental disorders, utilizing methods that many consider cruel or barbaric; trepanning was one such method used throughout history.

In the 6th century AD, Lin Xie carried out an early psychological experiment in which he asked people to draw a square with one hand and at the same time draw a circle with the other (ostensibly to test people's vulnerability to distraction); this has been cited as an early psychiatric experiment.

The Islamic Golden Age fostered early studies in Islamic psychology and psychiatry, with many scholars writing about mental disorders. The Persian physician Muhammad ibn Zakariya al-Razi, also known as "Rhazes", wrote texts about psychiatric conditions in the 9th century. As chief physician of a hospital in Baghdad, he was also the director of one of the first bimaristans in the world. The first bimaristan was founded in Baghdad in the 9th century, and several others of increasing complexity were created throughout the Arab world in the following centuries. Some of the bimaristans contained wards dedicated to the care of mentally ill patients.

During the Middle Ages, psychiatric hospitals and lunatic asylums were built and expanded throughout Europe. Specialist hospitals such as Bethlem Royal Hospital in London were built in medieval Europe from the 13th century to treat mental disorders, but were used only as custodial institutions and did not provide any type of treatment; Bethlem is the oldest extant psychiatric hospital in the world.

An ancient text known as The Yellow Emperor's Classic of Internal Medicine identifies the brain as the nexus of wisdom and sensation, includes theories of personality based on yin–yang balance, and analyzes mental disorder in terms of physiological and social disequilibria. Chinese scholarship on the brain advanced during the Qing Dynasty with the work of the Western-educated Fang Yizhi (1611–1671), Liu Zhi (1660–1730), and Wang Qingren (1768–1831). Wang Qingren emphasized the importance of the brain as the center of the nervous system, linked mental disorder with brain diseases, and investigated the causes of dreams, insomnia, psychosis, depression, and epilepsy.

Medical specialty

The beginning of psychiatry as a medical specialty is dated to the middle of the nineteenth century, although its germination can be traced to the late eighteenth century. In the late 17th century, privately run asylums for the insane began to proliferate and expand in size. In 1713, the Bethel Hospital Norwich was opened, the first purpose-built asylum in England. In 1656, Louis XIV of France had created a public system of hospitals for those with mental disorders, but, as in England, no real treatment was applied.

During the Enlightenment, attitudes towards the mentally ill began to change: mental illness came to be viewed as a disorder that required compassionate treatment. In 1758, the English physician William Battie wrote his Treatise on Madness on the management of mental disorder. It was a critique aimed particularly at the Bethlem Royal Hospital, where a conservative regime continued to use barbaric custodial treatment. Battie argued for a tailored management of patients entailing cleanliness, good food, fresh air, and distraction from friends and family. He argued that mental disorder originated from dysfunction of the material brain and body rather than the internal workings of the mind.
The introduction of moral treatment was initiated independently by the French doctor Philippe Pinel and the English Quaker William Tuke. In 1792, Pinel became the chief physician at the Bicêtre Hospital. Patients were allowed to move freely about the hospital grounds, and eventually dark dungeons were replaced with sunny, well-ventilated rooms. Pinel's student and successor, Jean Esquirol (1772–1840), went on to help establish 10 new mental hospitals that operated on the same principles.

Although Tuke, Pinel, and others had tried to do away with physical restraint, it remained widespread into the 19th century. At the Lincoln Asylum in England, Robert Gardiner Hill, with the support of Edward Parker Charlesworth, pioneered a mode of treatment that suited "all types" of patients, so that mechanical restraints and coercion could be dispensed with, a situation he finally achieved in 1838. In 1839, Sergeant John Adams and Dr. John Conolly were impressed by the work of Hill and introduced the method into their Hanwell Asylum, by then the largest in the country.

The modern era of institutionalized provision for the care of the mentally ill began in the early 19th century with a large state-led effort. In England, the Lunacy Act 1845 was an important landmark in the treatment of the mentally ill, as it explicitly changed the status of mentally ill people to patients who required treatment. All asylums were required to have written regulations and a resident qualified physician. In 1838, France enacted a law to regulate both admissions into asylums and asylum services across the country. In the United States, the erection of state asylums began with the first law for the creation of one in New York, passed in 1842; the Utica State Hospital was opened around 1850. Many state hospitals in the United States were built in the 1850s and 1860s on the Kirkbride Plan, an architectural style meant to have a curative effect.

At the turn of the 19th century, England and France combined had only a few hundred individuals in asylums; by the late 1890s and early 1900s, this number had risen to the hundreds of thousands. However, the idea that mental illness could be ameliorated through institutionalization ran into difficulties. Psychiatrists were pressured by an ever-increasing patient population, and asylums again became almost indistinguishable from custodial institutions.

In the early 1800s, psychiatry made advances in the diagnosis of mental illness by broadening the category of mental disease to include mood disorders, in addition to disease-level delusion or irrationality. The 20th century introduced a new psychiatry into the world, with different perspectives on mental disorders. For Emil Kraepelin, the initial ideas behind biological psychiatry, holding that the different mental disorders are all biological in nature, evolved into a new concept of "nerves", and psychiatry became a rough approximation of neurology and neuropsychiatry. Following Sigmund Freud's pioneering work, ideas stemming from psychoanalytic theory also began to take root in psychiatry. Psychoanalytic theory became popular among psychiatrists because it allowed patients to be treated in private practices instead of being warehoused in asylums. By the 1970s, however, the psychoanalytic school of thought had become marginalized within the field, and biological psychiatry reemerged.
Psychopharmacology and neurochemistry became integral parts of psychiatry starting with Otto Loewi's discovery of the neuromodulatory properties of acetylcholine, identifying it as the first known neurotransmitter. It was subsequently shown that different neurotransmitters have different and multiple functions in the regulation of behaviour. In a wide range of neurochemistry studies using human and animal samples, individual differences in neurotransmitter production and reuptake, and in receptor density and location, were linked to differences in disposition for specific psychiatric disorders. For example, the discovery of chlorpromazine's effectiveness in treating schizophrenia in 1952 revolutionized treatment of the disorder, as did lithium carbonate's ability, demonstrated in 1948, to stabilize mood highs and lows in bipolar disorder. Such findings supported the idea that many psychiatric disorders are neurochemical in nature; psychotherapy was still utilized, but as a treatment for psychosocial issues. Another approach to searching for biomarkers of psychiatric disorders is neuroimaging, which was first utilized as a tool for psychiatry in the 1980s.

In 1963, US president John F. Kennedy introduced legislation tasking the National Institute of Mental Health with administering Community Mental Health Centers for those being discharged from state psychiatric hospitals. Later, though, the Community Mental Health Centers' focus shifted to providing psychotherapy for those with acute but less serious mental disorders. Ultimately, no arrangements were made for actively following and treating severely mentally ill patients who were being discharged from hospitals, resulting in a large population of chronically homeless people with mental illness.

Controversy and criticism

The institution of psychiatry has attracted controversy since its inception. Scholars, including those from social psychiatry, psychoanalysis, psychotherapy, and critical psychiatry, have produced critiques. It has been argued that psychiatry confuses disorders of the mind with disorders of the brain that can be treated with drugs; that its use of drugs is in part due to lobbying by drug companies, resulting in distortion of research; that the concept of "mental illness" is often used to label and control those with beliefs and behaviours that the majority of people disagree with; and that it is too influenced by ideas from medicine, causing it to misunderstand the nature of mental distress. Critique of psychiatry from within the field comes from the critical psychiatry group in the UK. Double argues that most critical psychiatry is anti-reductionist. Rashed argues that the new mental health science has moved beyond this reductionist critique by seeking integrative and biopsychosocial models for conditions, and that much of critical psychiatry now coexists with orthodox psychiatry, but notes that many critiques remain unaddressed.

The term anti-psychiatry was coined by the psychiatrist David Cooper in 1967 and was later popularized by Thomas Szasz; the word Antipsychiatrie had already been used in Germany in 1904. The basic premises of the anti-psychiatry movement are that psychiatrists attempt to classify "normal" people as "deviant"; that psychiatric treatments are ultimately more damaging than helpful to patients; and that psychiatry's history involves (what may now be seen as) dangerous treatments, such as psychosurgery, one example being the frontal lobectomy (commonly called a lobotomy). The use of lobotomies largely disappeared by the late 1970s.