204118
https://en.wikipedia.org/wiki/Fortification
Fortification
A fortification (also called a fort, fortress, fastness, or stronghold) is a military construction designed for the defense of territories in warfare, and is used to establish rule in a region during peacetime. The term is derived from Latin fortis ("strong") and facere ("to make"). From very early history to modern times, defensive walls have often been necessary for cities to survive in an ever-changing world of invasion and conquest. Some settlements in the Indus Valley Civilization were the first small cities to be fortified. In ancient Greece, large cyclopean stone walls fitted without mortar had been built in Mycenaean Greece, such as the ancient site of Mycenae. A Greek phrourion was a fortified collection of buildings used as a military garrison, and is the equivalent of the Roman castellum or fortress. These constructions mainly served the purpose of a watch tower, to guard certain roads, passes, and borders. Though smaller than a real fortress, they acted as border guard posts to watch and maintain the border rather than as true strongpoints. The art of setting out a military camp or constructing a fortification traditionally has been called "castrametation" since the time of the Roman legions. Fortification is usually divided into two branches: permanent fortification and field fortification. There is also an intermediate branch known as semipermanent fortification. Castles are fortifications which are regarded as being distinct from the generic fort or fortress in that they are a residence of a monarch or noble and command a specific defensive territory. Roman forts and hill forts were the main antecedents of castles in Europe, which emerged in the 9th century in the Carolingian Empire. The Early Middle Ages saw the creation of some towns built around castles. Medieval-style fortifications were largely made obsolete by the arrival of cannons in the 14th century. Fortifications in the age of black powder evolved into much lower structures with greater use of ditches and earth ramparts that would absorb and disperse the energy of cannon fire. Walls exposed to direct cannon fire were very vulnerable, so the walls were sunk into ditches fronted by earth slopes to improve protection. The arrival of explosive shells in the 19th century led to another stage in the evolution of fortification. Star forts did not fare well against the effects of high explosives, and the intricate arrangements of bastions, flanking batteries and the carefully constructed lines of fire for the defending cannon could be rapidly disrupted by explosive shells. Steel-and-concrete fortifications were common during the 19th and early 20th centuries. The advances in modern warfare since World War I have made large-scale fortifications obsolete in most situations. Nomenclature Many United States Army installations are known as forts, although they are not always fortified. During the pioneering era of North America, many outposts on the frontiers, even non-military outposts, were referred to generically as forts. Larger military installations may be called fortresses; smaller ones were once known as fortalices. The word fortification can refer to the practice of improving an area's defense with defensive works. City walls are fortifications but are not necessarily called fortresses. The art of setting out a military camp or constructing a fortification traditionally has been called castrametation since the time of the Roman legions.
Laying siege to a fortification and destroying it is commonly called siegecraft or siege warfare and is formally known as poliorcetics. In some texts, this latter term also applies to the art of building a fortification. Fortification is usually divided into two branches: permanent fortification and field fortification. Permanent fortifications are erected at leisure, with all the resources that a state can supply of constructive and mechanical skill, and are built of enduring materials. Field fortifications—for example breastworks—often known as fieldworks or earthworks, are extemporized by troops in the field, perhaps assisted by workers and tools and with materials that do not require much preparation, such as soil, brushwood, and light timber, or sandbags (see sangar). An example of field fortification was the construction of Fort Necessity by George Washington in 1754. There is also an intermediate branch known as semipermanent fortification. This is employed when in the course of a campaign it becomes desirable to protect some locality with the best imitation of permanent defenses that can be made in a short time, given ample resources and skilled civilian workers. An example of this is the construction of Roman forts in England and in other Roman territories where camps were set up with the intention of staying for some time, but not permanently. Castles are fortifications which are regarded as being distinct from the generic fort or fortress in that they are a residence of a monarch or noble and command a specific defensive territory. An example of this is the massive medieval castle of Carcassonne. History Early uses Defensive fences for protecting humans and domestic animals against predators were used long before the appearance of writing and began "perhaps with primitive man blocking the entrances of his caves for security from large carnivores". From very early history to modern times, walls have been a necessity for many cities. Amnya Fort in western Siberia has been described by archeologists as one of the oldest known fortified settlements, as well as the northernmost Stone Age fort. In Bulgaria, near the town of Provadia, a walled fortified settlement today called Solnitsata, dating from 4700 BC, was home to 350 people living in two-story houses and was encircled by a fortified wall. The huge walls around the settlement, built very tall and with large stone blocks, make it one of the earliest walled settlements in Europe, though it is younger than the walled town of Sesklo in Greece from 6800 BC. Uruk in ancient Sumer (Mesopotamia) is one of the world's oldest known walled cities. The Ancient Egyptians also built fortresses on the frontiers of the Nile Valley to protect against invaders from adjacent territories, as well as circle-shaped mud brick walls around their cities. Many of the fortifications of the ancient world were built with mud brick, often leaving them no more than mounds of dirt for today's archeologists. A massive prehistoric stone wall surrounded the ancient temple of the Ness of Brodgar in Scotland around 3200 BC. Named the "Great Wall of Brodgar", it was notably thick and tall. The wall had some symbolic or ritualistic function. The Assyrians deployed large labor forces to build new palaces, temples and defensive walls. Bronze Age Europe In Bronze Age Malta, some settlements also began to be fortified. The most notable surviving example is Borġ in-Nadur, where a bastion built around 1500 BC was found.
Exceptions were few—notably, ancient Sparta and ancient Rome did not have walls for a long time, choosing to rely on their militaries for defense instead. Initially, these fortifications were simple constructions of wood and earth, which were later replaced by mixed constructions of stones piled on top of each other without mortar. In ancient Greece, large stone walls had been built in Mycenaean Greece, such as the ancient site of Mycenae (famous for the huge stone blocks of its 'cyclopean' walls). In classical-era Greece, the city of Athens built two parallel stone walls, called the Long Walls, that reached their fortified seaport at Piraeus a few miles away. In Central Europe, the Celts built large fortified settlements known as oppida, whose walls seem partially influenced by those built in the Mediterranean. The fortifications were continuously being expanded and improved. Around 600 BC, in Heuneburg, Germany, forts were constructed with a limestone foundation supporting a mudbrick wall approximately 4 meters tall, probably topped by a roofed walkway, thus reaching a total height of 6 meters. The wall was clad with lime plaster, regularly renewed. Towers protruded outwards from it. The Oppidum of Manching (German: Oppidum von Manching) was a large Celtic proto-urban or city-like settlement at modern-day Manching (near Ingolstadt), Bavaria (Germany). The settlement was founded in the 3rd century BC and existed until . It reached its largest extent during the late La Tène period (late 2nd century BC), when it had a size of 380 hectares. At that time, 5,000 to 10,000 people lived within its 7.2 km long walls. The oppidum of Bibracte is another example of a Gaulish fortified settlement. Bronze and Iron Age Near East The term casemate wall is used in the archeology of Israel and the wider Near East, meaning a double wall protecting a city or fortress, with transverse walls separating the space between the walls into chambers. These could be used as such, for storage or residential purposes, or could be filled with soil and rocks during siege in order to raise the resistance of the outer wall against battering rams. Originally thought to have been introduced to the region by the Hittites, this has been disproved by the discovery of examples predating their arrival, the earliest being at Ti'inik (Taanach) where such a wall has been dated to the 16th century BC. Casemate walls became a common type of fortification in the Southern Levant between the Middle Bronze Age (MB) and Iron Age II, being more numerous during the Iron Age and peaking in Iron Age II (10th–6th century BC). However, the construction of casemate walls had begun to be replaced by sturdier solid walls by the 9th century BC, probably due to the development of more effective battering rams by the Neo-Assyrian Empire. Casemate walls could surround an entire settlement, but most only protected part of it. The three different types included freestanding casemate walls, then integrated ones where the inner wall was part of the outer buildings of the settlement, and finally filled casemate walls, where the rooms between the walls were filled with soil right away, allowing for a quick but nevertheless stable construction of particularly high walls. Ancient Rome The Romans fortified their cities with massive, mortar-bound stone walls. The most famous of these are the largely extant Aurelian Walls of Rome and the Theodosian Walls of Constantinople, together with partial remains elsewhere.
These are mostly city gates, like the Porta Nigra in Trier or Newport Arch in Lincoln. Hadrian's Wall was built by the Roman Empire across the width of what is now northern England following a visit by Roman Emperor Hadrian (AD 76–138) in AD 122. Indian subcontinent A number of forts dating from the Later Stone Age to the British Raj are found in the mainland Indian subcontinent (modern-day India, Pakistan, Bangladesh and Nepal). "Fort" is the word used in India for all old fortifications. Numerous Indus Valley Civilization sites exhibit evidence of fortifications. By about 3500 BC, hundreds of small farming villages dotted the Indus floodplain. Many of these settlements had fortifications and planned streets. The stone and mud brick houses of Kot Diji were clustered behind massive stone flood dykes and defensive walls, for neighboring communities bickered constantly about the control of prime agricultural land. The fortifications vary by site. While Dholavira has stone-built fortification walls, Harappa is fortified using baked bricks; sites such as Kalibangan exhibit mudbrick fortifications with bastions, and Lothal has a quadrangular fortified layout. Evidence also suggests fortifications at Mohenjo-daro. Even a small town like Kotada Bhadli exhibits sophisticated fortifications such as bastions, showing that nearly all major and minor towns of the Indus Valley Civilization were fortified. Forts also appeared in urban cities of the Gangetic valley during the second urbanization period between 600 and 200 BC, and as many as 15 fortification sites have been identified by archeologists throughout the Gangetic valley, such as Kaushambi, Mahasthangarh, Pataliputra, Mathura, Ahichchhatra, Rajgir, and Lauria Nandangarh. The earliest Mauryan period brick fortification occurs in one of the stupa mounds of Lauria Nandangarh, which is 1.6 km in perimeter and oval in plan and encloses a habitation area. Mundigak, in present-day south-east Afghanistan, has defensive walls and square bastions of sun-dried bricks. India currently has over 180 forts, with the state of Maharashtra alone having over 70 forts, which are also known as durg, many of them built by Shivaji, founder of the Maratha Empire. A large majority of forts in India are in North India. The most notable forts are the Red Fort at Old Delhi, the Red Fort at Agra, the Chittor Fort and Mehrangarh Fort in Rajasthan, the Ranthambhor Fort, Amer Fort and Jaisalmer Fort also in Rajasthan and Gwalior Fort in Madhya Pradesh. Arthashastra, the Indian treatise on military strategy, describes six major types of forts differentiated by their main modes of defense. Sri Lanka Forts in Sri Lanka date back thousands of years, with many being built by Sri Lankan kings. These include several walled cities. With the outset of colonial rule in the Indian Ocean, Sri Lanka was occupied by several major colonial empires that from time to time became the dominant power in the Indian Ocean. The colonists built several western-style forts, mostly on and around the coast of the island. The first to build colonial forts in Sri Lanka were the Portuguese; these forts were captured and later expanded by the Dutch. The British occupied these Dutch forts during the Napoleonic Wars. Most of the colonial forts were garrisoned up until the early 20th century. The coastal forts had coastal artillery manned by the Ceylon Garrison Artillery during the two world wars.
Most of these were abandoned by the military but retained civil administrative officers, while others retained military garrisons, which were more administrative than operational. Some were reoccupied by military units with the escalation of the Sri Lankan Civil War; Jaffna fort, for example, came under siege several times. China Large tempered earth (i.e. rammed earth) walls were built in ancient China since the Shang dynasty (–1050 BC); the capital at ancient Ao had enormous walls built in this fashion (see siege for more info). Although stone walls were built in China during the Warring States (481–221 BC), mass conversion to stone architecture did not begin in earnest until the Tang dynasty (618–907 AD). The Great Wall of China had been built since the Qin dynasty (221–207 BC), although its present form was mostly an engineering feat and remodeling of the Ming dynasty (1368–1644 AD). In addition to the Great Wall, a number of Chinese cities also employed the use of defensive walls to defend their cities. Notable Chinese city walls include the city walls of Hangzhou, Nanking, the Old City of Shanghai, Suzhou, Xi'an and the walled villages of Hong Kong. The famous walls of the Forbidden City in Beijing were established in the early 15th century by the Yongle Emperor. The Forbidden City made up the inner portion of the Beijing city fortifications. Philippines Spanish colonial fortifications During the Spanish Era several forts and outposts were built throughout the archipelago. Most notable is Intramuros, the old walled city of Manila located along the southern bank of the Pasig River. The historic city was home to centuries-old churches, schools, convents, government buildings and residences, the best collection of Spanish colonial architecture before much of it was destroyed by the bombs of World War II. Of all the buildings within the 67-acre city, only one building, the San Agustin Church, survived the war. Partial listing of Spanish forts: Intramuros, Manila Cuartel de Santo Domingo, Santa Rosa, Laguna Fuerza de Cuyo, Cuyo, Palawan Fuerza de Cagayancillo, Cagayancillo, Palawan Real Fuerza de Nuestra Señora del Pilar de Zaragoza, Zamboanga City Fuerza de San Felipe, Cavite City Fuerza de San Pedro, Cebu Fuerte dela Concepcion y del Triunfo, Ozamiz, Misamis Occidental Fuerza de San Antonio Abad, Manila Fuerza de Pikit, Pikit, Cotabato Fuerza de Santiago, Romblon, Romblon Fuerza de Jolo, Jolo, Sulu Fuerza de Masbate, Masbate Fuerza de Bongabong, Bongabong, Oriental Mindoro Cotta de Dapitan, Dapitan, Zamboanga del Norte Fuerte de Alfonso XII, Tukuran, Zamboanga del Sur Fuerza de Bacolod, Bacolod, Lanao del Norte Guinsiliban Watchtower, Guinsiliban, Camiguin Laguindingan Watchtower, Laguindingan, Misamis Oriental Kutang San Diego, Gumaca, Quezon Baluarte Luna, Luna, La Union Local fortifications The Ivatan people of the northern islands of Batanes built their so-called idjang on hills and elevated areas to protect themselves during times of war. These fortifications were likened to European castles because of their purpose. Usually, the only entrance to the castles would be via a rope ladder that would only be lowered for the villagers and could be kept away when invaders arrived. The Igorots built forts made of stone walls that averaged several meters in width and about two to three times the width in height around 2000 BC. The Muslim Filipinos of the south built strong fortresses called kota or moong to protect their communities. 
Usually, many of the occupants of these kotas were entire families rather than just warriors. Lords often had their own kotas to assert their right to rule; a kota served not only as a military installation but also as a palace for the local lord. It is said that at the height of the Maguindanao Sultanate's power, the sultanate blanketed the areas around Western Mindanao with kotas and other fortifications to block the Spanish advance into the region. These kotas were usually made of stone and bamboo or other light materials and surrounded by trench networks. As a result, some of these kotas were easily burned or destroyed. With further Spanish campaigns in the region, the sultanate was subdued and a majority of kotas dismantled or destroyed. Kotas were not used only by the Muslims as a defense against the Spaniards and other foreigners; renegades and rebels also built fortifications in defiance of other chiefs in the area. During the American occupation, rebels built strongholds and the datus, rajahs, or sultans often built and reinforced their kotas in a desperate bid to maintain rule over their subjects and their land. Many of these forts were also destroyed by American expeditions; as a result, very few kotas still stand to this day. Notable kotas: Kota Selurong: an outpost of the Bruneian Empire in Luzon that later became the City of Manila. Kuta Wato/Kota Bato: literally "stone fort", the first known stone fortification in the country; its ruins survive as the "Kutawato Cave Complex". Kota Sug/Jolo: the capital and seat of the Sultanate of Sulu. When it was occupied by the Spaniards in the 1870s, they converted the kota into the world's smallest walled city. Pre-Islamic Arabia During Muhammad's lifetime During Muhammad's era in Arabia, many tribes made use of fortifications. In the Battle of the Trench, the largely outnumbered defenders of Medina, mainly Muslims led by Islamic prophet Muhammad, dug a trench, which, together with Medina's natural fortifications, rendered the confederate cavalry (consisting of horses and camels) useless, locking the two sides in a stalemate. Hoping to make several attacks at once, the confederates persuaded the Medina-allied Banu Qurayza to attack the city from the south. However, Muhammad's diplomacy derailed the negotiations and broke up the confederacy against him. The well-organized defenders, the sinking of confederate morale, and poor weather conditions caused the siege to end in a fiasco. During the Siege of Ta'if in January 630, Muhammad ordered his followers to attack enemies who fled from the Battle of Hunayn and sought refuge in the fortress of Taif. Islamic world Africa The entire city of Kerma in Nubia (present day Sudan) was encompassed by fortified walls surrounded by a ditch. Archeology has revealed various Bronze Age bastions and foundations constructed of stone together with either baked or unfired brick. The walls of Benin are described as the world's second-longest man-made structure, as well as the most extensive earthwork in the world, by the Guinness Book of Records, 1974. The walls may have been constructed between the thirteenth and mid-fifteenth century CE, or during the first millennium CE. Strong citadels were also built in other areas of Africa. Yorubaland, for example, had several sites surrounded by the full range of earthworks and ramparts seen elsewhere, and sited on ground that improved their defensive potential, such as hills and ridges.
Yoruba fortifications were often protected with a double wall of trenches and ramparts, and in the Congo forests concealed ditches and paths, along with the main works, often bristled with rows of sharpened stakes. Inner defenses were laid out to blunt an enemy penetration with a maze of defensive walls allowing for entrapment and crossfire on opposing forces. A military tactic of the Ashanti was to create powerful log stockades at key points. This was employed in later wars against the British to block British advances. Some of these fortifications were over a hundred yards long, with heavy parallel tree trunks. They were impervious to destruction by artillery fire. Behind these stockades, numerous Ashanti soldiers were mobilized to check enemy movement. While formidable in construction, many of these strongpoints failed because Ashanti guns, gunpowder and bullets were poor, and provided little sustained killing power in defense. Time and time again British troops overcame or bypassed the stockades by mounting old-fashioned bayonet charges, after laying down some covering fire. Defensive works were of importance in the tropical African Kingdoms. In the Kingdom of Kongo field fortifications were characterized by trenches and low earthen embankments. Such strongpoints ironically, sometimes held up much better against European cannon than taller, more imposing structures. Medieval Europe Roman forts and hill forts were the main antecedents of castles in Europe, which emerged in the 9th century in the Carolingian Empire. The Early Middle Ages saw the creation of some towns built around castles. These cities were only rarely protected by simple stone walls and more usually by a combination of both walls and ditches. From the 12th century, hundreds of settlements of all sizes were founded all across Europe, which very often obtained the right of fortification soon afterward. The founding of urban centers was an important means of territorial expansion and many cities, especially in eastern Europe, were founded precisely for this purpose during the period of Ostsiedlung. These cities are easy to recognize due to their regular layout and large market spaces. The fortifications of these settlements were continuously improved to reflect the current level of military development. During the Renaissance era, the Venetian Republic raised great walls around cities, and the finest examples, among others, are in Nicosia (Cyprus), Rocca di Manerba del Garda (Lombardy), and Palmanova (Italy), or Dubrovnik (Croatia), which proved to be futile against attacks but still stand to this day. Unlike the Venetians, the Ottomans used to build smaller fortifications but in greater numbers, and only rarely fortified entire settlements such as Počitelj, Vratnik, and Jajce in Bosnia. Development after introduction of firearms Medieval-style fortifications were largely made obsolete by the arrival of cannons on the 14th century battlefield. Fortifications in the age of black powder evolved into much lower structures with greater use of ditches and earth ramparts that would absorb and disperse the energy of cannon fire. Walls exposed to direct cannon fire were very vulnerable, so were sunk into ditches fronted by earth slopes. This placed a heavy emphasis on the geometry of the fortification to allow defensive cannonry interlocking fields of fire to cover all approaches to the lower and thus more vulnerable walls. 
The evolution of this new style of fortification can be seen in transitional forts such as Sarzanello in north-west Italy, which was built between 1492 and 1502. Sarzanello has both crenellated walls and towers typical of the medieval period and a ravelin-like angular gun platform screening one of the curtain walls, covered by flanking fire from the towers of the main part of the fort. Another example is the fortifications of Rhodes, whose development was frozen in 1522, so that Rhodes is the only European walled town that still shows the transition between classical medieval fortification and the modern style. A manual on the construction of fortifications was published by Giovanni Battista Zanchi in 1554. Fortifications also extended in depth, with protected batteries for defensive cannonry, to allow them to engage attacking cannon, keeping them at a distance and preventing them from bearing directly on the vulnerable walls. The result was star-shaped fortifications with tier upon tier of hornworks and bastions, of which Fort Bourtange is an excellent example. There are also extensive fortifications from this era in the Nordic states and in Britain, the fortifications of Berwick-upon-Tweed and the harbor archipelago of Suomenlinna at Helsinki being fine examples. 19th century During the 18th century, it was found that the continuous enceinte, or main defensive enclosure of a bastion fortress, could not be made large enough to accommodate the enormous field armies which were increasingly being employed in Europe; neither could the defenses be constructed far enough away from the fortress town to protect the inhabitants from bombardment by the besiegers, the range of whose guns was steadily increasing as better manufactured weapons were introduced. Therefore, beginning with the refortification of the Prussian fortress cities of Koblenz and Köln after 1815, the principle of the ring fortress or girdle fortress was used: forts, each several hundred meters out from the original enceinte, were carefully sited so as to make best use of the terrain and to be capable of mutual support with neighboring forts. Gone were citadels surrounding towns: forts were to be moved some distance away from cities to keep the enemy at a distance, so that their artillery could not bombard the urban settlements. From then on, a ring of forts was to be built at a spacing that would allow them to effectively cover the intervals between them. The arrival of explosive shells in the 19th century led to yet another stage in the evolution of fortification. Star forts did not fare well against the effects of high explosives, and the intricate arrangements of bastions, flanking batteries and the carefully constructed lines of fire for the defending cannon could be rapidly disrupted by explosive shells. Worse, the large open ditches surrounding forts of this type were an integral part of the defensive scheme, as was the covered way at the edge of the counterscarp. The ditch was extremely vulnerable to bombardment with explosive shells. In response, military engineers evolved the polygonal style of fortification. The ditch became deep and vertically sided, cut directly into the native rock or soil, laid out as a series of straight lines creating the central fortified area that gives this style of fortification its name.
Wide enough to be an impassable barrier for attacking troops but narrow enough to be a difficult target for enemy shellfire, the ditch was swept by fire from defensive blockhouses set in the ditch as well as firing positions cut into the outer face of the ditch itself. The profile of the fort became very low indeed. Outside the ditch, which was covered by caponiers, the fort was surrounded by a gently sloping open area so as to eliminate possible cover for enemy forces, while the fort itself presented a minimal target for enemy fire. The entry point became a sunken gatehouse in the inner face of the ditch, reached by a curving ramp that gave access to the gate via a rolling bridge that could be withdrawn into the gatehouse. Much of the fort moved underground. Deep passages and tunnel networks now connected the blockhouses and firing points in the ditch to the fort proper, with magazines and machine rooms deep under the surface. The guns, however, were often mounted in open emplacements and protected only by a parapet, both in order to keep a lower profile and also because experience with guns in closed casemates had seen them put out of action by rubble as their own casemates were collapsed around them. The new forts abandoned the principle of the bastion, which had also been made obsolete by advances in arms. The outline was a much-simplified polygon, surrounded by a ditch. These forts, built in masonry and shaped stone, were designed to shelter their garrison against bombardment. One organizing feature of the new system involved the construction of two defensive curtains: an outer line of forts, backed by an inner ring or line at critical points of terrain or junctions (see, for example, the Séré de Rivières system in France). Traditional fortification, however, continued to be applied by European armies engaged in warfare in colonies established in Africa against lightly armed attackers from amongst the indigenous population. A relatively small number of defenders in a fort impervious to primitive weaponry could hold out against high odds, the only constraint being the supply of ammunition. 20th and 21st centuries Steel-and-concrete fortifications were common during the 19th and early 20th centuries. However, the advances in modern warfare since World War I have made large-scale fortifications obsolete in most situations. In the 1930s and 1940s, some fortifications were built with designs taking into consideration the new threat of aerial warfare, such as Fort Campbell in Malta. Despite this, only underground bunkers are still able to provide some protection in modern wars. Many historical fortifications were demolished during the modern age, but a considerable number survive as popular tourist destinations and prominent local landmarks today. The downfall of permanent fortifications had two causes. The first was that the ever-escalating power, speed, and reach of artillery and airpower meant that almost any target that could be located could be destroyed if sufficient force were massed against it. As such, the more resources a defender devoted to reinforcing a fortification, the more combat power an attacker could justify devoting to destroying it, if the fortification's destruction was demanded by the attacker's strategy. From World War II, bunker busters were used against fortifications. By 1950, nuclear weapons were capable of destroying entire cities and producing dangerous radiation. This led to the creation of civilian nuclear air raid shelters. The second weakness of permanent fortification was its very permanency.
Because of this, it was often easier to go around a fortification and, with the rise of mobile warfare in the beginning of World War II, this became a viable offensive choice. When a defensive line was too extensive to be entirely bypassed, massive offensive might could be massed against one part of the line allowing a breakthrough, after which the rest of the line could be bypassed. Such was the fate of the many defensive lines built before and during World War II, such as the Siegfried Line, the Stalin Line, and the Atlantic Wall. This was not the case with the Maginot Line; it was designed to force the Germans to invade other countries (Belgium or Switzerland) to go around it, and was successful in that sense. Instead field fortification rose to dominate defensive action. Unlike the trench warfare which dominated World War I, these defenses were more temporary in nature. This was an advantage because since it was less extensive it formed a less obvious target for enemy force to be directed against. If sufficient power were massed against one point to penetrate it, the forces based there could be withdrawn and the line could be reestablished relatively quickly. Instead of a supposedly impenetrable defensive line, such fortifications emphasized defense in depth, so that as defenders were forced to pull back or were overrun, the lines of defenders behind them could take over the defense. Because the mobile offensives practiced by both sides usually focused on avoiding the strongest points of a defensive line, these defenses were usually relatively thin and spread along the length of a line. The defense was usually not equally strong throughout, however. The strength of the defensive line in an area varied according to how rapidly an attacking force could progress in the terrain that was being defended—both the terrain the defensive line was built on and the ground behind it that an attacker might hope to break out into. This was both for reasons of the strategic value of the ground, and its defensive value. This was possible because while offensive tactics were focused on mobility, so were defensive tactics. The dug-in defenses consisted primarily of infantry and antitank guns. Defending tanks and tank destroyers would be concentrated in mobile brigades behind the defensive line. If a major offensive was launched against a point in the line, mobile reinforcements would be sent to reinforce that part of the line that was in danger of failing. Thus the defensive line could be relatively thin because the bulk of the fighting power of the defenders was not concentrated in the line itself but rather in the mobile reserves. A notable exception to this rule was seen in the defensive lines at the Battle of Kursk during World War II, where German forces deliberately attacked the strongest part of the Soviet defenses, seeking to crush them utterly. The terrain that was being defended was of primary importance because open terrain that tanks could move over quickly made possible rapid advances into the defenders' rear areas that were very dangerous to the defenders. Thus such terrain had to be defended at all costs. In addition, since in theory the defensive line only had to hold out long enough for mobile reserves to reinforce it, terrain that did not permit rapid advance could be held more weakly because the enemy's advance into it would be slower, giving the defenders more time to reinforce that point in the line. 
For example, the Battle of the Hurtgen Forest in Germany during the closing stages of World War II is an excellent example of how difficult terrain could be used to the defenders' advantage. After World War II, intercontinental ballistic missiles capable of reaching much of the way around the world were developed, so speed became an essential characteristic of the strongest militaries and defenses. Missile silos were developed, so missiles could be fired from the middle of a country and hit cities and targets in another country, and airplanes (and aircraft carriers) became major defenses and offensive weapons (leading to an expansion of the use of airports and airstrips as fortifications). Mobile defenses could be had underwater, too, in the form of ballistic missile submarines capable of firing submarine launched ballistic missiles. Some bunkers in the mid to late 20th century came to be buried deep inside mountains and prominent rocks, such as Gibraltar and the Cheyenne Mountain Complex. On the ground itself, minefields have been used as hidden defenses in modern warfare, often remaining long after the wars that produced them have ended. Demilitarized zones along borders are arguably another type of fortification, although a passive kind, providing a buffer between potentially hostile militaries. Military airfields Military airfields offer a fixed "target rich" environment for even relatively small enemy forces, using hit-and-run tactics by ground forces, stand-off attacks (mortars and rockets), air attacks, or ballistic missiles. Key targets—aircraft, munitions, fuel, and vital technical personnel—can be protected by fortifications. Aircraft can be protected by revetments, hesco barriers, hardened aircraft shelters and underground hangars which will protect from many types of attack. Larger aircraft types tend to be based outside the operational theater. Munition storage follows safety rules which use fortifications (bunkers and bunds) to provide protection against accident and chain reactions (sympathetic detonations). Weapons for rearming aircraft can be stored in small fortified expense stores closer to the aircraft. At Biên Hòa, South Vietnam, on the morning of May 16, 1965, as aircraft were being refueled and armed, a chain reaction explosion destroyed 13 aircraft, killed 34 personnel, and injured over 100; this, along with damage and losses of aircraft to enemy attack (by both infiltration and stand-off attacks), led to the construction of revetments and shelters to protect aircraft throughout South Vietnam. Aircrew and ground personnel will need protection during enemy attacks and fortifications range from culvert section "duck and cover" shelters to permanent air raid shelters. Soft locations with high personnel densities such as accommodation and messing facilities can have limited protection by placing prefabricated concrete walls or barriers around them, examples of barriers are Jersey Barriers, T Barriers or Splinter Protection Units (SPUs). Older fortification may prove useful such as the old 'Yugo' pyramid shelters built in the 1980s which were used by US personnel on 8 Jan 2020 when Iran fired 11 ballistic missiles at Ayn al-Asad Airbase in Iraq. Fuel is volatile and has to comply with rules for storage which provide protection against accidents. Fuel in underground bulk fuel installations is well protected though valves and controls are vulnerable to enemy action. Above-ground tanks can be susceptible to attack. 
Ground support equipment will need to be protected by fortifications to be usable after an enemy attack. Permanent (concrete) guard fortifications are safer, stronger, last longer and are more cost-effective than sandbag fortifications. Prefabricated positions can be made from concrete culvert sections. The British Yarnold Bunker is made from sections of a concrete pipe. Guard towers provide an increased field of view but a lower level of protection. Dispersal and camouflage of assets can supplement fortifications against some forms of airfield attack. Counterinsurgency Just as in colonial periods, comparatively obsolete fortifications are still used for low intensity conflicts. Such fortifications range in size from small patrol bases or forward operating bases up to huge airbases such as Camp Bastion/Leatherneck in Afghanistan. Much like in the 18th and 19th century, because the enemy is not a powerful military force with the heavy weaponry required to destroy fortifications, walls of gabion, sandbag or even simple mud can provide protection against small arms and antitank weapons—although such fortifications are still vulnerable to mortar and artillery fire. Forts Forts in modern American usage often refer to space set aside by governments for a permanent military facility; these often do not have any actual fortifications, and can have specializations (military barracks, administration, medical facilities, or intelligence). However, there are some modern fortifications that are referred to as forts. These are typically small semipermanent fortifications. In urban combat, they are built by upgrading existing structures such as houses or public buildings. In field warfare they are often log, sandbag or gabion type construction. Such forts are typically only used in low-level conflicts, such as counterinsurgency conflicts or very low-level conventional conflicts, such as the Indonesia–Malaysia confrontation, which saw the use of log forts for use by forward platoons and companies. The reason for this is that static above-ground forts cannot survive modern direct or indirect fire weapons larger than mortars, RPGs and small arms. Prisons and others Fortifications designed to keep the inhabitants of a facility in rather than attacker out can also be found, in prisons, concentration camps, and other such facilities. Those are covered in other articles, as most prisons and concentration camps are not primarily military forts (although forts, camps, and garrison towns have been used as prisons and/or concentration camps; such as Theresienstadt, Guantanamo Bay detention camp and the Tower of London for example). Field fortifications
Technology
Structures
null
204174
https://en.wikipedia.org/wiki/Anatidae
Anatidae
The Anatidae are the biological family of water birds that includes ducks, geese, and swans. The family has a cosmopolitan distribution, occurring on all the world's continents except Antarctica. These birds are adapted for swimming, floating on the water surface, and, in some cases, diving in at least shallow water. The family contains around 174 species in 43 genera (the magpie goose is no longer considered to be part of the Anatidae and is now placed in its own family, Anseranatidae). They are generally herbivorous and are monogamous breeders. A number of species undertake annual migrations. A few species have been domesticated for agriculture, and many others are hunted for food and recreation. Five species have become extinct since 1600, and many more are threatened with extinction. Description and ecology The ducks, geese, and swans are small- to large-sized birds with a broad and elongated general body plan. Diving species vary from this in being rounder. Extant species range in size from the cotton pygmy goose, at as little as 26.5 cm (10.5 in) and 164 g (5.8 oz), to the trumpeter swan, at as much as 183 cm (6 ft) and 17.2 kg (38 lb). The largest anatid ever known is the extinct flightless Garganornis ballmanni at 22 kg (49 lb). The wings are short and pointed, and supported by strong wing muscles that generate rapid beats in flight. They typically have long necks, although this varies in degree between species. The legs are short, strong, and set far to the back of the body (more so in the more aquatic species), and have a leathery feel with a scaly texture. Combined with their body shape, this can make some species awkward on land, but they are stronger walkers than other marine and water birds such as grebes or petrels. They typically have webbed feet, though a few species such as the Nene have secondarily lost their webbing. The bills are made of soft keratin with a thin and sensitive layer of skin on top (which has a leathery feel when touched). For most species, the shape of the bill tends to be more flattened to a greater or lesser extent. These contain serrated lamellae which are particularly well defined in the filter-feeding species. Their feathers are excellent at shedding water due to special oils. Many of the ducks display sexual dimorphism, with the males being more brightly coloured than the females (although the situation is reversed in species such as the paradise shelduck). The swans, geese, and whistling-ducks lack sexually dimorphic plumage. Anatids are vocal birds, producing a range of quacks, honks, squeaks, and trumpeting sounds, depending on species; the female often has a deeper voice than the male. Anatids are generally herbivorous as adults, feeding on various water-plants, although some species also eat fish, molluscs, or aquatic arthropods. One group, the mergansers, are primarily piscivorous, and have serrated bills to help them catch fish. In a number of species, the young include a high proportion of invertebrates in their diets, but become purely herbivorous as adults. Breeding The anatids are generally seasonal and monogamous breeders. The level of monogamy varies within the family; many of the smaller ducks only maintain the bond for a single season and find a new partner the following year, whereas the larger swans, geese and some of the more territorial ducks maintain pair bonds over a number of years, and even for life in some species. However, forced extrapair copulation among anatids is common, occurring in 55 species in 17 genera. 
Anatidae is a large proportion of the 3% of bird species to possess a penis, though they vary significantly in size, shape, and surface elaboration. Most species are adapted for copulation on the water only. They construct simple nests from whatever material is close at hand, often lining them with a layer of down plucked from the mother's breast. In most species, only the female incubates the eggs. The young are precocial, and are able to feed themselves from birth. One aberrant species, the black-headed duck, is an obligate brood parasite, laying its eggs in the nests of gulls and coots. While this species never raises its own young, a number of other ducks occasionally lay eggs in the nests of conspecifics (members of the same species) in addition to raising their own broods. Relationship with humans Duck, eider, and goose feathers and down have long been popular for bedspreads, pillows, sleeping bags, and coats. The members of this family also have long been used for food. Humans have had a long relationship with ducks, geese, and swans; they are important economically and culturally to humans, and several duck species have benefited from an association with people. However, some anatids are agricultural pests, and have acted as vectors for zoonoses such as avian influenza. Since 1600, five species of ducks have become extinct due to the activities of humans, and subfossil remains have shown that humans caused numerous extinctions in prehistory. Today, many more are considered threatened. Most of the historic and prehistoric extinctions were insular species, vulnerable due to small populations (often endemic to a single island), and island tameness. Evolving on islands that lacked predators, these species lost antipredator behaviours, as well as the ability to fly, and were vulnerable to human hunting pressure and introduced species. Other extinctions and declines are attributable to overhunting, habitat loss and modification, and hybridisation with introduced ducks (for example the introduced ruddy duck swamping the white-headed duck in Europe). Numerous governments and conservation and hunting organisations have made considerable progress in protecting ducks and duck populations through habitat protection and creation, laws and protection, and captive-breeding programmes. Systematics History of classification The name Anatidae for the family was introduced by the English zoologist William Elford Leach in a guide to the contents of the British Museum published in 1819. While the status of the Anatidae as a family is straightforward, and which species properly belong to it is little debated, the relationships of the different tribes and subfamilies within it are poorly understood. The listing in the box at right should be regarded as simply one of several possible ways of organising the many species within the Anatidae; see discussion in the next section. The systematics of the Anatidae are in a state of flux. Previously divided into six subfamilies, a study of anatomical characters by Livezey suggests the Anatidae are better treated in nine subfamilies. This classification was popular in the late 1980s to 1990s. But mtDNA sequence analyses indicate, for example, the dabbling and diving ducks do not belong in the same subfamily. 
While shortcomings certainly occur in Livezey's analysis, mtDNA is an unreliable source for phylogenetic information in many waterfowl (especially dabbling ducks) due to their ability to produce fertile hybrids, in rare cases possibly even beyond the level of genus (see for example the "Barbary duck"). Because the sample size of many molecular studies available to date is small, mtDNA results must be considered with caution. While a comprehensive review of the Anatidae which unites all evidence into a robust phylogeny is still lacking, the reasons for the confusing data are at least clear: As demonstrated by the Late Cretaceous fossil Vegavis iaai—an early modern waterbird which belonged to an extinct lineage—the Anatidae are an ancient group among the modern birds. Their earliest direct ancestors, though not documented by fossils yet, likewise can be assumed to have been contemporaries with the non-avian dinosaurs. The long period of evolution and shifts from one kind of waterbird lifestyle to another have obscured many plesiomorphies, while apparent apomorphies are quite often the result of parallel evolution, for example the "non-diving duck" type displayed by such unrelated genera as Dendrocygna, Amazonetta, and Cairina. For the fossil record, see below. Alternatively, the Anatidae may be considered to consist of three subfamilies (ducks, geese, and swans, essentially) which contain the groups as presented here as tribes, with the swans separated as subfamily Cygninae, the goose subfamily Anserinae also containing the whistling ducks, and the Anatinae containing all other clades. Genera For the living and recently extinct members of each genus, see the article List of Anatidae species. Subfamily: Dendrocygninae (one pantropical genus, of distinctive long-legged goose-like birds) Dendrocygna, whistling ducks (8 living species) Thalassornis, white-backed duck Subfamily: Anserinae, swans and geese (3–7 extant genera with 25–30 living species, mainly cool temperate Northern Hemisphere, but also some Southern Hemisphere species, with the swans in one genus [two genera in some treatments], and the geese in three genera [two genera in some treatments]. Some other species are sometimes placed herein, but seem somewhat more distinct [see below]) Cygnus, true swans (6 species, 4 sometimes separated in Olor) Anser, grey geese and white geese (11 species) Branta, black geese (6 living species) Subfamily: Stictonettinae (one genus in Australia, formerly included in the Oxyurinae, but with anatomy suggesting a distinct ancient lineage perhaps closest to the Anserinae, especially the Cape Barren goose) Stictonetta, freckled duck Subfamily: Plectropterinae (one genus in Africa, formerly included in the "perching ducks", but closer to the Tadorninae) Plectropterus, spur-winged goose Subfamily: Tadorninae – shelducks and sheldgeese (This group of larger, often semiterrestrial waterfowl can be seen as intermediate between Anserinae and Anatinae. The 1986 revision has resulted in the inclusion of 10 extant genera with about two-dozen living species [one probably extinct] in this subfamily, mostly from the Southern Hemisphere but a few in the Northern Hemisphere; the affiliations of several presumed tadornine genera has later been questioned and the group in the traditional lineup is likely to be paraphyletic.) 
Pachyanas, Chatham Island duck (prehistoric) Tadorna, shelducks (6 species, 1 probably extinct) – possibly paraphyletic Radjah, Radjah shelduck Salvadorina, Salvadori's teal Centrornis, Madagascar sheldgoose (prehistoric, tentatively placed here) Alopochen, Egyptian goose and Mascarene shelducks (1 living species, 2 extinct) Neochen, (2 species) Chloephaga, sheldgeese (4 species) Hymenolaimus, blue duck Merganetta, torrent duck Subfamily: Aythyinae, diving ducks (Some 15 species of diving ducks, of worldwide distribution, in two to four genera; The 1986 morphological analysis suggested the probably extinct pink-headed duck of India, previously treated separately in Rhodonessa, should be placed in Netta, but this has been questioned. Furthermore, while morphologically close to dabbling ducks, the mtDNA data indicate a treatment as distinct subfamily is indeed correct, with the Tadorninae being actually closer to dabbling ducks than the diving ducks) Netta, red-crested pochard and allies (4 species, 1 probably extinct) Aythya, pochards, scaups, etc. (12 species) Subfamily: Anatinae, dabbling ducks and moa-nalos (The dabbling duck group, of worldwide distribution, were previously restricted to just one or two genera, but had been extended to include eight extant genera and about 55 living species, including several genera formerly known as the "perching ducks"; mtDNA on the other hand confirms that the genus Anas is over-lumped and casts doubt on the diving duck affiliations of several genera [see below]. The moa-nalos, of which four species in three genera are known to date, are a peculiar group of flightless, extinct anatids from the Hawaiian Islands. Gigantic in size and with massive bills, they were believed to be geese, but have been shown to be actually very closely related to mallards. They evolved filling the ecological niche of turtles, ungulates, and other megaherbivores. Anas: pintails, mallards, etc. (40–50 living species, 3 extinct) Chendytes, diving-geese (extinct c. 450–250 BCE, A basal member of the dabbling duck clade) Spatula, shovelers Mareca, wigeons and gadwalls Lophonetta, crested duck Speculanas, bronze-winged duck Amazonetta, Brazilian teal Sibirionetta, Baikal teal Chelychelynechen, turtle-jawed moa-nalo (prehistoric) Thambetochen, large-billed moa-nalos (2 species, prehistoric) Ptaiochen, small-billed moa-nalo (prehistoric) Tribe: Mergini, eiders, scoters, sawbills and other sea-ducks (There are 9 extant genera and some 20 living species; most of this group occur in the Northern Hemisphere, but a few [mostly extinct] mergansers in the Southern Hemisphere) Shiriyanetta (prehistoric) Polysticta, Steller's eider Somateria, eiders (3 species) Histrionicus, harlequin duck (includes Ocyplonessa) Camptorhynchus, Labrador duck (extinct) Melanitta, scoters (6 species) Clangula, long-tailed duck (1 species) Bucephala, goldeneyes (3 species) Mergellus, smew Lophodytes, hooded merganser Mergus, mergansers (4 living species, 1 extinct). Tribe: Oxyurini, stiff-tail ducks (a small group of 3–4 genera, 2–3 of them monotypic, with 7–8 living species) Oxyura, stiff-tailed ducks (5 living species) Nomonyx, masked duck Heteronetta, black-headed duck Unresolved: The largest degree of uncertainty concerns whether a number of genera are closer to the shelducks or to the dabbling ducks.
Biology and health sciences
Anseriformes
Animals
204228
https://en.wikipedia.org/wiki/Quarry
Quarry
A quarry is a type of open-pit mine in which dimension stone, rock, construction aggregate, riprap, sand, gravel, or slate is excavated from the ground. The operation of quarries is regulated in some jurisdictions to manage their safety risks and reduce their environmental impact. The word quarry can also refer to underground quarrying for stone, such as Bath stone. History For thousands of years, only hand tools were used in quarries. In the eighteenth century, the use of drilling and blasting operations was mastered. Types of rock Types of rock extracted from quarries include chalk, china clay, cinder, clay, coal, construction aggregate (sand and gravel), coquina, diabase, gabbro, granite, gritstone, gypsum, limestone, marble, ores, phosphate rock, quartz, sandstone, slate, and travertine. Methods of quarrying The removal of stone from its natural bed by various operations is called quarrying. Methods of quarrying include: a) Digging – used when the quarry consists of small and soft pieces of stone. b) Heating – used when the natural rock bed is horizontal and thin. c) Wedging – used when the hard rock contains natural fissures; when natural fissures are absent, artificial fissures are prepared by drilling holes. d) Blasting – the removal of stone with the help of controlled explosives placed in holes drilled in the rock; the line of least resistance plays a very important role in the blasting process. The following steps are used in the blasting process: 1) Drilling holes – blast holes are drilled using drilling machines. 2) Charging – explosive powder is fed into the cleaned and dried blast holes. 3) Tamping – the remaining portion of each blast hole is filled with clay, ash, fuse and wiring. 4) Firing – the fuses of the blast holes are fired using an electrical power supply or match sticks. Slabs Many quarry stones such as marble, granite, limestone, and sandstone are cut into larger slabs and removed from the quarry. The surfaces are polished and finished with varying degrees of sheen or luster. Polished slabs are often cut into tiles or countertops and installed in many kinds of residential and commercial properties. Natural stone quarried from the earth is often considered a luxury and tends to be a highly durable surface, thus highly desirable. Problems Quarries in level areas with shallow groundwater or which are located close to surface water often have engineering problems with drainage. Generally the water is removed by pumping while the quarry is operational, but for high inflows more complex approaches may be required. For example, the Coquina quarry is excavated to well below sea level. To reduce surface leakage, a moat lined with clay was constructed around the entire quarry. Groundwater entering the pit is pumped up into the moat. As a quarry becomes deeper, water inflows generally increase and it also becomes more expensive to lift the water higher during removal; this can become the limiting factor in quarry depth. Some water-filled quarries are worked from beneath the water by dredging. Many people and municipalities consider quarries to be eyesores and require various abatement methods to address problems with noise, dust, and appearance. One of the more effective and famous examples of successful quarry restoration is Butchart Gardens in Victoria, British Columbia, Canada. A further problem is pollution of roads from trucks leaving the quarries.
To control and restrain the pollution of public roads, wheel washing systems are becoming more common. Quarry lakes Many quarries naturally fill with water after abandonment and become lakes. Others are made into landfills. Water-filled quarries can be very deep, often or more, and surprisingly cold, so swimming in quarry lakes is generally not recommended. Unexpectedly cold water can cause a swimmer's muscles to suddenly weaken; it can also cause shock and even hypothermia. Though quarry water is often very clear, submerged quarry stones, abandoned equipment, dead animals and strong currents make diving into these quarries extremely dangerous. Several people drown in quarries each year. However, many inactive quarries are converted into safe swimming sites. Such lakes, even lakes within active quarries, can provide important habitat for animals.
Technology
Building materials
null
204277
https://en.wikipedia.org/wiki/Normal%20number
Normal number
In mathematics, a real number is said to be simply normal in an integer base b if its infinite sequence of digits is distributed uniformly in the sense that each of the b digit values has the same natural density 1/b. A number is said to be normal in base b if, for every positive integer n, all possible strings n digits long have density b−n. Intuitively, a number being simply normal means that no digit occurs more frequently than any other. If a number is normal, no finite combination of digits of a given length occurs more frequently than any other combination of the same length. A normal number can be thought of as an infinite sequence of coin flips (binary) or rolls of a die (base 6). Even though there will be sequences such as 10, 100, or more consecutive tails (binary) or fives (base 6) or even 10, 100, or more repetitions of a sequence such as tail-head (two consecutive coin flips) or 6-1 (two consecutive rolls of a die), there will also be equally many of any other sequence of equal length. No digit or sequence is "favored". A number is said to be normal (sometimes called absolutely normal) if it is normal in all integer bases greater than or equal to 2. While a general proof can be given that almost all real numbers are normal (meaning that the set of non-normal numbers has Lebesgue measure zero), this proof is not constructive, and only a few specific numbers have been shown to be normal. For example, any Chaitin's constant is normal (and uncomputable). It is widely believed that the (computable) numbers , , and e are normal, but a proof remains elusive. Definitions Let be a finite alphabet of -digits, the set of all infinite sequences that may be drawn from that alphabet, and the set of finite sequences, or strings. Let be such a sequence. For each in let denote the number of times the digit appears in the first digits of the sequence . We say that is simply normal if the limit for each . Now let be any finite string in and let be the number of times the string appears as a substring in the first digits of the sequence . (For instance, if , then .) is normal if, for all finite strings , where denotes the length of the string . In other words, is normal if all strings of equal length occur with equal asymptotic frequency. For example, in a normal binary sequence (a sequence over the alphabet ), and each occur with frequency ; , , , and each occur with frequency ; , , , , , , , and each occur with frequency ; etc. Roughly speaking, the probability of finding the string in any given position in is precisely that expected if the sequence had been produced at random. Suppose now that is an integer greater than 1 and is a real number. Consider the infinite digit sequence expansion of in the base positional number system (we ignore the decimal point). We say that is simply normal in base if the sequence is simply normal and that is normal in base if the sequence is normal. The number is called a normal number (or sometimes an absolutely normal number) if it is normal in base for every integer greater than 1. A given infinite sequence is either normal or not normal, whereas a real number, having a different base- expansion for each integer , may be normal in one base but not in another (in which case it is not a normal number). For bases and with rational (so that and ) every number normal in base is normal in base . For bases and with irrational, there are uncountably many numbers normal in each base but not the other. 
A disjunctive sequence is a sequence in which every finite string appears. A normal sequence is disjunctive, but a disjunctive sequence need not be normal. A rich number in base is one whose expansion in base is disjunctive: one that is disjunctive to every base is called absolutely disjunctive or is said to be a lexicon. A number normal in base is rich in base , but not necessarily conversely. The real number is rich in base if and only if the set is dense in the unit interval. We defined a number to be simply normal in base if each individual digit appears with frequency . For a given base , a number can be simply normal (but not normal or rich), rich (but not simply normal or normal), normal (and thus simply normal and rich), or none of these. A number is absolutely non-normal or absolutely abnormal if it is not simply normal in any base. Properties and examples The concept of a normal number was introduced by . Using the Borel–Cantelli lemma, he proved that almost all real numbers are normal, establishing the existence of normal numbers. showed that it is possible to specify a particular such number. proved that there is a computable absolutely normal number. Although this construction does not directly give the digits of the numbers constructed, it shows that it is possible in principle to enumerate each digit of a particular normal number. The set of non-normal numbers, despite being "large" in the sense of being uncountable, is also a null set (as its Lebesgue measure as a subset of the real numbers is zero, so it essentially takes up no space within the real numbers). Also, the non-normal numbers (as well as the normal numbers) are dense in the reals: the set of non-normal numbers between two distinct real numbers is non-empty since it contains every rational number (in fact, it is uncountably infinite and even comeagre). For instance, there are uncountably many numbers whose decimal expansions (in base 3 or higher) do not contain the digit 1, and none of those numbers are normal. Champernowne's constant obtained by concatenating the decimal representations of the natural numbers in order, is normal in base 10. Likewise, the different variants of Champernowne's constant (done by performing the same concatenation in other bases) are normal in their respective bases (for example, the base-2 Champernowne constant is normal in base 2), but they have not been proven to be normal in other bases. The Copeland–Erdős constant obtained by concatenating the prime numbers in base 10, is normal in base 10, as proved by . More generally, the latter authors proved that the real number represented in base b by the concatenation where f(n) is the nth prime expressed in base b, is normal in base b. proved that the number represented by the same expression, with f(n) = n2, obtained by concatenating the square numbers in base 10, is normal in base 10. proved that the number represented by the same expression, with f being any non-constant polynomial whose values on the positive integers are positive integers, expressed in base 10, is normal in base 10. proved that if f(x) is any non-constant polynomial with real coefficients such that f(x) > 0 for all x > 0, then the real number represented by the concatenation where [f(n)] is the integer part of f(n) expressed in base b, is normal in base b. (This result includes as special cases all of the above-mentioned results of Champernowne, Besicovitch, and Davenport & Erdős.) 
The authors also show that the same result holds even more generally when f is any function of the form where the αs and βs are real numbers with β > β1 > β2 > ... > βd ≥ 0, and f(x) > 0 for all x > 0. show an explicit uncountably infinite class of b-normal numbers by perturbing Stoneham numbers. It has been an elusive goal to prove the normality of numbers that are not artificially constructed. While , π, ln(2), and e are strongly conjectured to be normal, it is still not known whether they are normal or not. It has not even been proven that all digits actually occur infinitely many times in the decimal expansions of those constants (for example, in the case of π, the popular claim "every string of numbers eventually occurs in π" is not known to be true). It has also been conjectured that every irrational algebraic number is absolutely normal (which would imply that is normal), and no counterexamples are known in any base. However, no irrational algebraic number has been proven to be normal in any base. Non-normal numbers No rational number is normal in any base, since the digit sequences of rational numbers are eventually periodic. gives an example of an irrational number that is absolutely abnormal. Let Then α is a Liouville number and is absolutely abnormal. Properties Additional properties of normal numbers include: Every non-zero real number is the product of two normal numbers. This follows from the general fact that every number is the product of two numbers from a set if the complement of X has measure 0. If x is normal in base b and a ≠ 0 is a rational number, then is also normal in base b. If is dense (for every and for all sufficiently large n, ) and are the base-b expansions of the elements of A, then the number , formed by concatenating the elements of A, is normal in base b (Copeland and Erdős 1946). From this it follows that Champernowne's number is normal in base 10 (since the set of all positive integers is obviously dense) and that the Copeland–Erdős constant is normal in base 10 (since the prime number theorem implies that the set of primes is dense). A sequence is normal if and only if every block of equal length appears with equal frequency. (A block of length k is a substring of length k appearing at a position in the sequence that is a multiple of k: e.g. the first length-k block in S is S[1..k], the second length-k block is S[k+1..2k], etc.) This was implicit in the work of and made explicit in the work of . A number is normal in base b if and only if it is simply normal in base bk for all . This follows from the previous block characterization of normality: Since the nth block of length k in its base b expansion corresponds to the nth digit in its base bk expansion, a number is simply normal in base bk if and only if blocks of length k appear in its base b expansion with equal frequency. A number is normal if and only if it is simply normal in every base. This follows from the previous characterization of base b normality. A number is b-normal if and only if there exists a set of positive integers where the number is simply normal in bases bm for all No finite set suffices to show that the number is b-normal. All normal sequences are closed under finite variations: adding, removing, or changing a finite number of digits in any normal sequence leaves it normal. Similarly, if a finite number of digits are added to, removed from, or changed in any simply normal sequence, the new sequence is still simply normal. 
Connection to finite-state machines Agafonov showed an early connection between finite-state machines and normal sequences: every infinite subsequence selected from a normal sequence by a regular language is also normal. In other words, if one runs a finite-state machine on a normal sequence, where each of the finite-state machine's states are labeled either "output" or "no output", and the machine outputs the digit it reads next after entering an "output" state, but does not output the next digit after entering a "no output state", then the sequence it outputs will be normal. A deeper connection exists with finite-state gamblers (FSGs) and information lossless finite-state compressors (ILFSCs). A finite-state gambler (a.k.a. finite-state martingale) is a finite-state machine over a finite alphabet , each of whose states is labelled with percentages of money to bet on each digit in . For instance, for an FSG over the binary alphabet , the current state q bets some percentage of the gambler's money on the bit 0, and the remaining fraction of the gambler's money on the bit 1. The money bet on the digit that comes next in the input (total money times percent bet) is multiplied by , and the rest of the money is lost. After the bit is read, the FSG transitions to the next state according to the input it received. A FSG d succeeds on an infinite sequence S if, starting from $1, it makes unbounded money betting on the sequence; i.e., ifwhere is the amount of money the gambler d has after reading the first n digits of S (see limit superior). A finite-state compressor is a finite-state machine with output strings labelling its state transitions, including possibly the empty string. (Since one digit is read from the input sequence for each state transition, it is necessary to be able to output the empty string in order to achieve any compression at all). An information lossless finite-state compressor is a finite-state compressor whose input can be uniquely recovered from its output and final state. In other words, for a finite-state compressor C with state set Q, C is information lossless if the function , mapping the input string of C to the output string and final state of C, is 1–1. Compression techniques such as Huffman coding or Shannon–Fano coding can be implemented with ILFSCs. An ILFSC C compresses an infinite sequence S ifwhere is the number of digits output by C after reading the first n digits of S. The compression ratio (the limit inferior above) can always be made to equal 1 by the 1-state ILFSC that simply copies its input to the output. Schnorr and Stimm showed that no FSG can succeed on any normal sequence, and Bourke, Hitchcock and Vinodchandran showed the converse. Therefore: Ziv and Lempel showed: (they actually showed that the sequence's optimal compression ratio over all ILFSCs is exactly its entropy rate, a quantitative measure of its deviation from normality, which is 1 exactly when the sequence is normal). Since the LZ compression algorithm compresses asymptotically as well as any ILFSC, this means that the LZ compression algorithm can compress any non-normal sequence. These characterizations of normal sequences can be interpreted to mean that "normal" = "finite-state random"; i.e., the normal sequences are precisely those that appear random to any finite-state machine. 
Compare this with the algorithmically random sequences, which are those infinite sequences that appear random to any algorithm (and in fact have similar gambling and compression characterizations with Turing machines replacing finite-state machines). Connection to equidistributed sequences A number x is normal in base b if and only if the sequence is equidistributed modulo 1, or equivalently, using Weyl's criterion, if and only if This connection leads to the terminology that x is normal in base β for any real number β if and only if the sequence is equidistributed modulo 1.
Mathematics
Basics
null
204420
https://en.wikipedia.org/wiki/Saliva
Saliva
Saliva (commonly referred to as spit or drool) is an extracellular fluid produced and secreted by salivary glands in the mouth. In humans, saliva is around 99% water, plus electrolytes, mucus, white blood cells, epithelial cells (from which DNA can be extracted), enzymes (such as lipase and amylase), and antimicrobial agents (such as secretory IgA, and lysozymes). The enzymes found in saliva are essential in beginning the process of digestion of dietary starches and fats. These enzymes also play a role in breaking down food particles entrapped within dental crevices, thus protecting teeth from bacterial decay. Saliva also performs a lubricating function, wetting food and permitting the initiation of swallowing, and protecting the oral mucosa from drying out. Saliva has specialized purposes for a variety of animal species beyond predigestion. Certain swifts construct nests with their sticky saliva. The foundation of bird's nest soup is an aerodramus nest. Venomous saliva injected by fangs is used by cobras, vipers, and certain other members of the venom clade to hunt. Some caterpillars use modified salivary glands to store silk proteins, which they then use to make silk fiber. Composition Produced in salivary glands, human saliva comprises 99.5% water, but also contains many important substances, including electrolytes, mucus, antibacterial compounds and various enzymes. Medically, constituents of saliva can noninvasively provide important diagnostic information related to oral and systemic diseases. Water: 99.5% Electrolytes: 2–21 mmol/L sodium (lower than blood plasma) 10–36 mmol/L potassium (higher than plasma) 1.2–2.8 mmol/L calcium (similar to plasma) 0.08–0.5 mmol/L magnesium 5–40 mmol/L chloride (lower than plasma) 25 mmol/L bicarbonate (higher than plasma) 1.4–39 mmol/L phosphate Iodine (mmol/L concentration is usually higher than plasma, but dependent variable according to dietary iodine intake) Mucus (mucus in saliva mainly consists of mucopolysaccharides and glycoproteins) Antibacterial compounds (thiocyanate, hydrogen peroxide, and secretory immunoglobulin A) Epidermal growth factor (EGF) Saliva eliminates caesium, which can substitute for potassium in the cells. Various enzymes; most notably: α-amylase (EC3.2.1.1), or ptyalin, secreted by the acinar cells of the parotid and submandibular glands, starts the digestion of starch before the food is even swallowed; it has a pH optimum of 7.4 Lingual lipase, which is secreted by the acinar cells of the sublingual gland; has a pH optimum around 4.0 so it is not activated until entering the acidic environment of the stomach Kallikrein, an enzyme that proteolytically cleaves high-molecular-weight kininogen to produce bradykinin, which is a vasodilator; it is secreted by the acinar cells of all three major salivary glands Antimicrobial enzymes that kill bacteria: Lysozyme Salivary lactoperoxidase Lactoferrin Immunoglobulin A Beta defensin Proline-rich proteins (function in enamel formation, Ca2+-binding, microbe killing and lubrication) Minor enzymes including: salivary acid phosphatases A+B, N-acetylmuramoyl-L-alanine amidase, NAD(P)H dehydrogenase (quinone), superoxide dismutase, glutathione transferase, class 3 aldehyde dehydrogenase, glucose-6-phosphate isomerase, and tissue kallikrein (function unknown) Cells: possibly as many as 8 million human and 500 million bacterial cells per mL. The presence of bacterial products (small organic acids, amines, and thiols) causes saliva to sometimes exhibit a foul odor. 
Opiorphin, a pain-killing substance found in human saliva Haptocorrin, a protein which binds to vitamin B12 to protect it against degradation in the stomach, before it binds to intrinsic factor. Daily salivary output Experts debate the amount of saliva that a healthy person produces. Production is estimated at 1500ml per day and researchers generally accept that during sleep the amount drops significantly. In humans, the submandibular gland contributes around 70 to 75% of secretions, while the parotid gland secretes about 20 to 25%; small amounts are secreted from the other salivary glands. Functions Saliva contributes to the digestion of food and to the maintenance of oral hygiene. Without normal salivary function the frequency of dental caries, gum disease (gingivitis and periodontitis), and other oral problems increases significantly. Saliva limits the growth of bacterial pathogens and is a major factor in sustaining systemic and oral health through the prevention of tooth decay and the removal of sugars and other food sources for microbes. Lubricant Saliva coats the oral mucosa mechanically protecting it from trauma during eating, swallowing, and speaking. Mouth soreness is very common in people with reduced saliva (xerostomia) and food (especially dry food) sticks to the inside of the mouth. Digestion The digestive functions of saliva include moistening food and helping to create a food bolus. The lubricative function of saliva allows the food bolus to be passed easily from the mouth into the esophagus. Saliva contains the enzyme amylase, also called ptyalin, which is capable of breaking down starch into simpler sugars such as maltose and dextrin that can be further broken down in the small intestine. About 30% of starch digestion takes place in the mouth cavity. Salivary glands also secrete salivary lipase (a more potent form of lipase) to begin fat digestion. Salivary lipase plays a large role in fat digestion in newborn infants as their pancreatic lipase still needs some time to develop. Role in taste Saliva is very important in the sense of taste. It is the liquid medium in which chemicals are carried to taste receptor cells (mostly associated with lingual papillae). People with little saliva often complain of dysgeusia (i.e. disordered taste, e.g. reduced ability to taste, or having a bad, metallic taste at all times). A rare condition identified to affect taste is that of 'Saliva Hypernatrium', or excessive amounts of sodium in saliva that is not caused by any other condition (e.g., Sjögren syndrome), causing everything to taste 'salty'. Other Saliva maintains the pH of the mouth. Saliva is supersaturated with various ions. Certain salivary proteins prevent precipitation, which would form salts. These ions act as a buffer, keeping the acidity of the mouth within a certain range, typically pH 6.2–7.4. This prevents minerals in the dental hard tissues from dissolving. Saliva secretes carbonic anhydrase (gustin), which is thought to play a role in the development of taste buds. Saliva contains EGF. EGF results in cellular proliferation, differentiation, and survival. EGF is a low-molecular-weight polypeptide first purified from the mouse submandibular gland, but since then found in many human tissues including submandibular gland, parotid gland. Salivary EGF, which seems also regulated by dietary inorganic iodine, also plays an important physiological role in the maintenance of oro-esophageal and gastric tissue integrity. 
The biological effects of salivary EGF include healing of oral and gastroesophageal ulcers, inhibition of gastric acid secretion, stimulation of DNA synthesis as well as mucosal protection from intraluminal injurious factors such as gastric acid, bile acids, pepsin, and trypsin and to physical, chemical and bacterial agents. Production The production of saliva is stimulated both by the sympathetic nervous system and the parasympathetic. Sympathetic stimulation of saliva is to facilitate respiration, whereas parasympathetic stimulation is to facilitate digestion. Parasympathetic stimulation leads to acetylcholine (ACh) release onto the salivary acinar cells. ACh binds to muscarinic receptors, specifically M3, and causes an increased intracellular calcium ion concentration (through the IP3/DAG second messenger system). Increased calcium causes vesicles within the cells to fuse with the apical cell membrane leading to secretion. ACh also causes the salivary gland to release kallikrein, an enzyme that converts kininogen to lysyl-bradykinin. Lysyl-bradykinin acts upon blood vessels and capillaries of the salivary gland to generate vasodilation and increased capillary permeability, respectively. The resulting increased blood flow to the acini allows the production of more saliva. In addition, Substance P can bind to Tachykinin NK-1 receptors leading to increased intracellular calcium concentrations and subsequently increased saliva secretion. Lastly, both parasympathetic and sympathetic nervous stimulation can lead to myoepithelium contraction which causes the expulsion of secretions from the secretory acinus into the ducts and eventually to the oral cavity. Sympathetic stimulation results in the release of norepinephrine. Norepinephrine binding to α-adrenergic receptors will cause an increase in intracellular calcium levels leading to more fluid vs. protein secretion. If norepinephrine binds β-adrenergic receptors, it will result in more protein or enzyme secretion vs. fluid secretion. Stimulation by norepinephrine initially decreases blood flow to the salivary glands due to constriction of blood vessels but this effect is overtaken by vasodilation caused by various local vasodilators. Saliva production may also be pharmacologically stimulated by the so-called sialagogues. It can also be suppressed by the so-called antisialagogues. Behaviour Spitting Spitting is the act of forcibly ejecting saliva or other substances from the mouth. In many parts of the world, it is considered rude and a social taboo, and has sometimes been outlawed. In some countries, for example, it has been outlawed for reasons of public decency and attempting to reduce the spread of disease. These laws may not strictly enforced, but in Singapore, the fine for spitting may be as high as SGD$2,000 for multiple offenses, and one can even be arrested. In China, expectoration is more socially acceptable (even if officially disapproved of or illegal), and spittoons are still a common appearance in some cultures. Some animals, even humans in some cases, use spitting as an automatic defensive maneuver. Camels are well known for doing this, though most domestic camels are trained not to. Spitting by an infected person (for example, one with SARS-CoV-2) whose saliva contains large amounts of virus, is a health hazard to the public. Glue to construct bird nests Many birds in the swift family, Apodidae, produce a viscous saliva during nesting season to glue together materials to construct a nest. 
Two species of swifts in the genus Aerodramus build their nests using only their saliva, the base for bird's nest soup. Wound licking A common belief is that saliva contained in the mouth has natural disinfectants, which leads people to believe it is beneficial to "lick their wounds". Researchers at the University of Florida at Gainesville have discovered a protein called nerve growth factor (NGF) in the saliva of mice. Wounds doused with NGF healed twice as fast as untreated and unlicked wounds; therefore, saliva can help to heal wounds in some species. NGF has been found in human saliva, as well as antibacterial agents as secretory mucin, IgA, lactoferrin, lysozyme and peroxidase. It has not been shown that human licking of wounds disinfects them, but licking is likely to help clean the wound by removing larger contaminants such as dirt and may help to directly remove infective bodies by brushing them away. Therefore, licking would be a way of wiping off pathogens, useful if clean water is not available to the animal or person. Classical conditioning In Pavlov's experiment, dogs were conditioned to salivate in response to a ringing bell; this stimulus is associated with a meal or hunger. Salivary secretion is also associated with nausea. Saliva is usually formed in the mouth through an act called gleeking, which can be voluntary or involuntary. Making alcoholic beverages Some old cultures use chewed grains to produce alcoholic beverages, such as chicha, kasiri or sake. Substitutes A number of commercially available saliva substitutes exist.
Biology and health sciences
Gastrointestinal tract
Biology
204433
https://en.wikipedia.org/wiki/Macrotis
Macrotis
Macrotis is a genus of desert-dwelling marsupial omnivores known as bilbies or rabbit-bandicoots; they are members of the order Peramelemorphia. At the time of European colonisation of Australia, there were two species. The lesser bilby became extinct in the 1950s; the greater bilby survives but remains endangered. It is currently listed as a vulnerable species. The greater bilby is on average long, excluding the tail, which is usually around long. Its fur is usually grey or white; it has a long, pointy nose and very long ears, hence the reference of its nickname to rabbits. Taxonomy Macrotis means 'big-eared' ( + 'ear') in Greek, referring to the animal's large, long ears. The genus name was first proposed as a subgeneric classification, which after a century of taxonomic confusion was eventually stabilised as the accepted name in a 1932 revision by Ellis Troughton. In reviewing the systematic arrangement of the genus, Troughton recognised three species names, including one highly variable population with six subspecies. The family's current name Thylacomyidae is derived from an invalid synonym Thylacomys, meaning 'pouched mouse', from the Ancient Greek (, 'pouch, sack') and (, 'mouse, muscle'), sometimes misspelt Thalacomys. The term bilby is a loanword from the Yuwaalaraay Aboriginal language of northern New South Wales, meaning long-nosed rat. It is known as dalgite in Western Australia, and in South Australia, pinkie is sometimes used. The Wiradjuri of New South Wales also call it "bilby". Gerard Krefft recorded the name Jacko used by the peoples of the lower Darling in 1864, emended to Jecko in 1866 along with Wuirrapur from the peoples at the lower Murray River. Classification The placement of the population within taxonomic classification has changed in recent years. Vaughan (1978) and Groves and Flannery (1990) both placed this family within the family Peramelidae. Kirsch et al. (1997) found them to be distinct from the species in Peroryctidae (which is now a subfamily in Peramelidae). McKenna and Bell (1997) also placed it in Peramelidae, but as the sister of Chaeropus in the subfamily Chaeropodinae. Here is a summary of the treatment as a peramelemorph family: Peramelemorphia Thylacomyidae Genus Macrotis Macrotis lagotis, extant Macrotis leucura, extinct Chaeropodidae (pig-footed bandicoots, extinct) Peramelidae (genera known as bandicoots, extant and extinct) Fossil taxa allied to the family are: Genus †Ischnodon †Ischnodon australis Genus †Liyamayi †Liyamayi dayi Description Bilbies have the characteristic long bandicoot muzzle and very big ears that radiate heat. They are about long. Compared to bandicoots, they have a longer tail, bigger ears, and softer, silky fur. The size of their ears allows them to have better hearing. They are nocturnal omnivores that do not need to drink water, as they obtain their moisture from food, which includes insects and their larvae, seeds, spiders, bulbs, fruit, fungi, and very small animals. Most food is found by digging or scratching in the soil, and using their very long tongues. Unlike bandicoots, they are excellent burrowers and build extensive tunnel systems with their strong forelimbs and well-developed claws. A bilby typically makes a number of burrows within its home range, up to about a dozen, and moves between them, using them for shelter both from predators and the heat of the day. The female bilby's pouch faces backwards, which prevents the pouch from getting filled with dirt while she is digging. 
Bilbies have a gestation of about 12–14 days, one of the shortest among mammals. The appearance of the bilby has been alluded to as "Australia’s answer to the Easter rabbit". Conservation Bilbies are slowly becoming endangered because of habitat loss and change, and competition with other animals. There is a national recovery plan being developed for saving them. This program includes captive breeding, monitoring populations, and reestablishing bilbies where they once lived. There have been reasonably successful moves to popularise the bilby as a native alternative to the Easter Bunny by selling chocolate Easter Bilbies (sometimes with a portion of the profits going to bilby protection and research). Reintroduction efforts have begun, with a successful reintroduction into the Arid Recovery Reserve in South Australia in 2000, and a reintroduction into Currawinya National Park in Queensland, where six bilbies were released into a predator-proof enclosure in April 2019. Successful reintroductions have also occurred on the Peron Peninsula in Western Australia as a part of the Western Shield program, and at other conservation lands, including islands and the Australian Wildlife Conservancy's Scotia and Yookamurra Sanctuaries. There is a highly successful bilby breeding program at Kanyana Wildlife Rehabilitation Centre near Perth, Western Australia. Evolution The bilby lineage extends back 15 million years. In 2014 scientists found part of a 15-million-year-old fossilised jaw of a bilby which had shorter teeth that were probably used for eating forest fruit. Prior to this discovery, the oldest bilby fossil on record was 5 million years old. Modern bilbies have evolved to have long teeth used to dig holes in the desert to eat worms and insects. It is thought the bilby diverged from its closest relative, an originally-carnivorous bandicoot, 20 million years ago.
Biology and health sciences
Marsupials
Animals
204464
https://en.wikipedia.org/wiki/Intensive%20and%20extensive%20properties
Intensive and extensive properties
Physical or chemical properties of materials and systems can often be categorized as being either intensive or extensive, according to how the property changes when the size (or extent) of the system changes. The terms "intensive and extensive quantities" were introduced into physics by German mathematician Georg Helm in 1898, and by American physicist and chemist Richard C. Tolman in 1917. According to International Union of Pure and Applied Chemistry (IUPAC), an intensive property or intensive quantity is one whose magnitude is independent of the size of the system. An intensive property is not necessarily homogeneously distributed in space; it can vary from place to place in a body of matter and radiation. Examples of intensive properties include temperature, T; refractive index, n; density, ρ; and hardness, η. By contrast, an extensive property or extensive quantity is one whose magnitude is additive for subsystems. Examples include mass, volume and entropy. Not all properties of matter fall into these two categories. For example, the square root of the volume is neither intensive nor extensive. If a system is doubled in size by juxtaposing a second identical system, the value of an intensive property equals the value for each subsystem and the value of an extensive property is twice the value for each subsystem. However the property √V is instead multiplied by √2 . The distinction between intensive and extensive properties has some theoretical uses. For example, in thermodynamics, the state of a simple compressible system is completely specified by two independent, intensive properties, along with one extensive property, such as mass. Other intensive properties are derived from those two intensive variables. Intensive properties An intensive property is a physical quantity whose value does not depend on the amount of substance which was measured. The most obvious intensive quantities are ratios of extensive quantities. In a homogeneous system divided into two halves, all its extensive properties, in particular its volume and its mass, are divided into two halves. All its intensive properties, such as the mass per volume (mass density) or volume per mass (specific volume), must remain the same in each half. The temperature of a system in thermal equilibrium is the same as the temperature of any part of it, so temperature is an intensive quantity. If the system is divided by a wall that is permeable to heat or to matter, the temperature of each subsystem is identical. Additionally, the boiling temperature of a substance is an intensive property. For example, the boiling temperature of water is 100 °C at a pressure of one atmosphere, regardless of the quantity of water remaining as liquid. Examples Examples of intensive properties include: charge density, ρ (or ne) chemical potential, μ color concentration, c energy density, ρ magnetic permeability, μ mass density, ρ (or specific gravity) melting point and boiling point molality, m or b pressure, p refractive index specific conductance (or electrical conductivity) specific heat capacity, cp specific internal energy, u specific rotation, [α] specific volume, v standard reduction potential, E° surface tension temperature, T thermal conductivity velocity v viscosity See List of materials properties for a more exhaustive list specifically pertaining to materials. Extensive properties An extensive property is a physical quantity whose value is proportional to the size of the system it describes, or to the quantity of matter in the system. 
For example, the mass of a sample is an extensive quantity; it depends on the amount of substance. The related intensive quantity is the density which is independent of the amount. The density of water is approximately 1g/mL whether you consider a drop of water or a swimming pool, but the mass is different in the two cases. Dividing one extensive property by another extensive property gives an intensive property—for example: mass (extensive) divided by volume (extensive) gives density (intensive). Any extensive quantity E for a sample can be divided by the sample's volume, to become the "E density" for the sample; similarly, any extensive quantity "E" can be divided by the sample's mass, to become the sample's "specific E"; extensive quantities "E" which have been divided by the number of moles in their sample are referred to as "molar E". Examples Examples of extensive properties include: amount of substance, n enthalpy, H entropy, S Gibbs energy, G heat capacity, Cp Helmholtz energy, A or F internal energy, U spring stiffness, K mass, m volume, V Conjugate quantities In thermodynamics, some extensive quantities measure amounts that are conserved in a thermodynamic process of transfer. They are transferred across a wall between two thermodynamic systems or subsystems. For example, species of matter may be transferred through a semipermeable membrane. Likewise, volume may be thought of as transferred in a process in which there is a motion of the wall between two systems, increasing the volume of one and decreasing that of the other by equal amounts. On the other hand, some extensive quantities measure amounts that are not conserved in a thermodynamic process of transfer between a system and its surroundings. In a thermodynamic process in which a quantity of energy is transferred from the surroundings into or out of a system as heat, a corresponding quantity of entropy in the system respectively increases or decreases, but, in general, not in the same amount as in the surroundings. Likewise, a change in the amount of electric polarization in a system is not necessarily matched by a corresponding change in electric polarization in the surroundings. In a thermodynamic system, transfers of extensive quantities are associated with changes in respective specific intensive quantities. For example, a volume transfer is associated with a change in pressure. An entropy change is associated with a temperature change. A change in the amount of electric polarization is associated with an electric field change. The transferred extensive quantities and their associated respective intensive quantities have dimensions that multiply to give the dimensions of energy. The two members of such respective specific pairs are mutually conjugate. Either one, but not both, of a conjugate pair may be set up as an independent state variable of a thermodynamic system. Conjugate setups are associated by Legendre transformations. Composite properties The ratio of two extensive properties of the same object or system is an intensive property. For example, the ratio of an object's mass and volume, which are two extensive properties, is density, which is an intensive property. More generally properties can be combined to give new properties, which may be called derived or composite properties. For example, the base quantities mass and volume can be combined to give the derived quantity density. These composite properties can sometimes also be classified as intensive or extensive. 
Suppose a composite property is a function of a set of intensive properties and a set of extensive properties , which can be shown as . If the size of the system is changed by some scaling factor, , only the extensive properties will change, since intensive properties are independent of the size of the system. The scaled system, then, can be represented as . Intensive properties are independent of the size of the system, so the property F is an intensive property if for all values of the scaling factor, , (This is equivalent to saying that intensive composite properties are homogeneous functions of degree 0 with respect to .) It follows, for example, that the ratio of two extensive properties is an intensive property. To illustrate, consider a system having a certain mass, , and volume, . The density, is equal to mass (extensive) divided by volume (extensive): . If the system is scaled by the factor , then the mass and volume become and , and the density becomes ; the two s cancel, so this could be written mathematically as , which is analogous to the equation for above. The property is an extensive property if for all , (This is equivalent to saying that extensive composite properties are homogeneous functions of degree 1 with respect to .) It follows from Euler's homogeneous function theorem that where the partial derivative is taken with all parameters constant except . This last equation can be used to derive thermodynamic relations. Specific properties A specific property is the intensive property obtained by dividing an extensive property of a system by its mass. For example, heat capacity is an extensive property of a system. Dividing heat capacity, , by the mass of the system gives the specific heat capacity, , which is an intensive property. When the extensive property is represented by an upper-case letter, the symbol for the corresponding intensive property is usually represented by a lower-case letter. Common examples are given in the table below. Molar properties If the amount of substance in moles can be determined, then each of these thermodynamic properties may be expressed on a molar basis, and their name may be qualified with the adjective molar, yielding terms such as molar volume, molar internal energy, molar enthalpy, and molar entropy. The symbol for molar quantities may be indicated by adding a subscript "m" to the corresponding extensive property. For example, molar enthalpy is . Molar Gibbs free energy is commonly referred to as chemical potential, symbolized by , particularly when discussing a partial molar Gibbs free energy for a component in a mixture. For the characterization of substances or reactions, tables usually report the molar properties referred to a standard state. In that case a superscript is added to the symbol. Examples: = is the molar volume of an ideal gas at standard conditions for temperature and pressure (being and ). is the standard molar heat capacity of a substance at constant pressure. is the standard enthalpy variation of a reaction (with subcases: formation enthalpy, combustion enthalpy...). is the standard reduction potential of a redox couple, i.e. Gibbs energy over charge, which is measured in volt = J/C. Limitations The general validity of the division of physical properties into extensive and intensive kinds has been addressed in the course of science. 
Redlich noted that, although physical properties and especially thermodynamic properties are most conveniently defined as either intensive or extensive, these two categories are not all-inclusive and some well-defined concepts like the square-root of a volume conform to neither definition. Other systems, for which standard definitions do not provide a simple answer, are systems in which the subsystems interact when combined. Redlich pointed out that the assignment of some properties as intensive or extensive may depend on the way subsystems are arranged. For example, if two identical galvanic cells are connected in parallel, the voltage of the system is equal to the voltage of each cell, while the electric charge transferred (or the electric current) is extensive. However, if the same cells are connected in series, the charge becomes intensive and the voltage extensive. The IUPAC definitions do not consider such cases. Some intensive properties do not apply at very small sizes. For example, viscosity is a macroscopic quantity and is not relevant for extremely small systems. Likewise, at a very small scale color is not independent of size, as shown by quantum dots, whose color depends on the size of the "dot".
Physical sciences
Thermodynamics
Physics
204466
https://en.wikipedia.org/wiki/Bubble%20chamber
Bubble chamber
A bubble chamber is a vessel filled with a superheated transparent liquid (most often liquid hydrogen) used to detect electrically charged particles moving through it. It was invented in 1952 by Donald A. Glaser, for which he was awarded the 1960 Nobel Prize in Physics. Supposedly, Glaser was inspired by the bubbles in a glass of beer; however, in a 2006 talk, he refuted this story, although saying that while beer was not the inspiration for the bubble chamber, he did experiments using beer to fill early prototypes. While bubble chambers were extensively used in the past, they have now mostly been supplanted by wire chambers, spark chambers, drift chambers, and silicon detectors. Notable bubble chambers include the Big European Bubble Chamber (BEBC) and Gargamelle. Function and use The bubble chamber is similar to a cloud chamber, both in application and in basic principle. It is normally made by filling a large cylinder with a liquid heated to just below its boiling point. As particles enter the chamber, a piston suddenly decreases its pressure, and the liquid enters into a superheated, metastable phase. Charged particles create an ionization track, around which the liquid vaporizes, forming microscopic bubbles. Bubble density around a track is proportional to a particle's energy loss. Bubbles grow in size as the chamber expands, until they are large enough to be seen or photographed. Several cameras are mounted around it, allowing a three-dimensional image of an event to be captured. Bubble chambers with resolutions down to a few micrometers (μm) have been operated. It is often useful to subject the entire chamber to a constant magnetic field. It acts on charged particles through Lorentz force and causes them to travel in helical paths whose radii are determined by the particles' charge-to-mass ratios and their velocities. Because the magnitude of the charge of all known, charged, long-lived subatomic particles is the same as that of an electron, their radius of curvature must be proportional to their momentum. Thus, by measuring a particle's radius of curvature, its momentum can be determined. Notable discoveries Notable discoveries made by bubble chamber include the discovery of weak neutral currents at Gargamelle in 1973, which established the soundness of the electroweak theory and led to the discovery of the W and Z bosons in 1983 (at the UA1 and UA2 experiments). Recently, bubble chambers have been used in research on weakly interacting massive particles (WIMP)s, at SIMPLE, COUPP, PICASSO and more recently, PICO. Drawbacks Although bubble chambers were very successful in the past, they are of limited use in modern very-high-energy experiments for a variety of reasons: The need for a photographic readout rather than three-dimensional electronic data makes it less convenient, especially in experiments which must be reset, repeated and analyzed many times. The superheated phase must be ready at the precise moment of collision, which complicates the detection of short-lived particles. Bubble chambers are neither large nor massive enough to analyze high-energy collisions, where all products should be contained inside the detector. The high-energy particles may have path radii too large to be accurately measured in a relatively small chamber, thereby hindering precise estimation of momentum. Due to these issues, bubble chambers have largely been replaced by wire chambers, which allow particle energies to be measured at the same time. Another alternative technique is the spark chamber. 
Examples 30 cm Bubble Chamber (CERN) 81 cm Saclay Bubble Chamber 2 m Bubble Chamber (CERN) Berne Infinitesimal Bubble Chamber Bevatron, a particle accelerator with a liquid hydrogen bubble chamber Big European Bubble Chamber Holographic Lexan Bubble Chamber Gargamelle, a heavy liquid bubble chamber which operated in CERN between 1970 and 1979. LExan Bubble Chamber PICO, liquid freon bubble chamber searching for dark matter
Physical sciences
Basics_3
Physics
204504
https://en.wikipedia.org/wiki/Millennium
Millennium
A millennium () is a period of one thousand years or one hundred decades or ten centuries, sometimes called a kiloannum (ka), or kiloyear (ky). Normally, the word is used specifically for periods of a thousand years that begin at the starting point (initial reference point) of the calendar in consideration and at later years that are whole number multiples of a thousand years after the start point. The term can also refer to an interval of time beginning on any date. Millennia sometimes have religious or theological implications (see millenarianism). The word millennium derives from the Latin , thousand, and , year. Debate over millennium celebrations There was a public debate leading up to the celebrations of the year 2000 as to whether the beginning of that year should be understood as the beginning of the "new" millennium. Historically, there has been debate around the turn of previous decades, centuries, and millennia, but not so much for decades. The issue arises from the difference between the convention of using ordinal numbers to count years and millennia, as in "the third millennium", or using a vernacular description, as in "the two thousands". The difference of opinion comes down to whether to celebrate, respectively, the end or the beginning of the "-000" year. The first convention is common in English-speaking countries, but the latter is favoured in, for example, Sweden (tvåtusentalet, which translates literally as the two thousands period). Those holding that the arrival of the new millennium should be celebrated in the transition from 2000 to 2001 (i.e., December 31, 2000, to January 1, 2001) argued that the Anno Domini system of counting years began with the year 1 (there was no year 0) and therefore the first millennium was from the year 1 to the end of the year 1000, the second millennium from 1001 to the end of 2000, and the third millennium beginning with 2001 and ending at the end of 3000. Similarly, the first millennium BC was from the year 1000 BC to the end of the year 1 BC. Popular culture supported celebrating the arrival of the new millennium in the transition from 1999 to 2000 (i.e., December 31, 1999, to January 1, 2000), in that the change of the hundreds digit in the year number, with the zeroes rolling over, is consistent with the vernacular demarcation of decades by their 'tens' digit (e.g. naming the period 1980 to 1989 as "the 1980s" or "the eighties"). This has been described as "the odometer effect". Also, the "year 2000" had been a popular phrase referring to an often utopian future, or a year when stories in such a future were set. There was also media and public interest in the Y2K computer bug. A third position was expressed by Bill Paupe, honorary consul for Kiribati: "To me, I just don't see what all the hoopla is about ... it's not going to change anything. The next day the sun is going to come up again and then it will all be forgotten." Even for those who did celebrate, in astronomical terms, there was nothing special about this particular event. Stephen Jay Gould, in his essay "Dousing Diminutive Dennis' Debate (or DDDD = 2000)", discussed the "high" versus "pop" culture interpretation of the transition. Gould noted that the high culture, strict construction had been the dominant viewpoint at the 20th century's beginning, but that the pop culture viewpoint dominated at its end. The start of the 21st century and 3rd millennium was celebrated worldwide at the start of the year 2000. 
One year later, at the start of the year 2001, the celebrations had largely returned to the usual ringing in of just another new year, although some welcomed "the real millennium", including America's official timekeeper, the U.S. Naval Observatory, and the countries of Cuba and Japan. The popular approach was to treat the end of 1999 as the end of "a millennium" and to hold millennium celebrations at midnight between December 31, 1999, and January 1, 2000, with the cultural and psychological significance of the events listed above combining to cause celebrations to be observed one year earlier than the formal date.
Physical sciences
Time
Basics and measurement
204539
https://en.wikipedia.org/wiki/Mira%20variable
Mira variable
Mira variables (named for the prototype star Mira) are a class of pulsating stars characterized by very red colours, pulsation periods longer than 100 days, and amplitudes greater than one magnitude in infrared and 2.5 magnitude at visual wavelengths. They are red giants in the very late stages of stellar evolution, on the asymptotic giant branch (AGB), that will expel their outer envelopes as planetary nebulae and become white dwarfs within a few million years. Mira variables are stars massive enough that they have undergone helium fusion in their cores but are less than two solar masses, stars that have already lost about half their initial mass. However, they can be thousands of times more luminous than the Sun due to their very large distended envelopes. They are pulsating due to the entire star expanding and contracting. This produces a change in temperature along with radius, both of which factors cause the variation in luminosity. The pulsation depends on the mass and radius of the star and there is a well-defined relationship between period and luminosity (and colour). The very large visual amplitudes are not due to large luminosity changes, but due to a shifting of energy output between infra-red and visual wavelengths as the stars change temperature during their pulsations. Early models of Mira stars assumed that the star remained spherically symmetric during this process (largely to keep the computer modelling simple, rather than for physical reasons). A recent survey of Mira variable stars found that 75% of the Mira stars which could be resolved using the IOTA telescope are not spherically symmetric, a result which is consistent with previous images of individual Mira stars, so there is now pressure to do realistic three-dimensional modelling of Mira stars on supercomputers. Mira variables may be oxygen-rich or carbon-rich. Carbon-rich stars such as R Leporis arise from a narrow set of conditions that override the normal tendency for AGB stars to maintain a surplus of oxygen over carbon at their surfaces due to dredge-ups. Pulsating AGB stars such as Mira variables undergo fusion in alternating hydrogen and helium shells, which produces periodic deep convection known as dredge-ups. These dredge-ups bring carbon from the helium burning shell to the surface and would result in a carbon star. However, in stars above about , hot bottom burning occurs. This is when the lower regions of the convective region are hot enough for significant CNO cycle fusion to take place which destroys much of the carbon before it can be transported to the surface. Thus more massive AGB stars do not become carbon-rich. Mira variables are rapidly losing mass and this material often forms dust shrouds around the star. In some cases conditions are suitable for the formation of natural masers. A small subset of Mira variables appear to change their period over time: the period increases or decreases by a substantial amount (up to a factor of three) over the course of several decades to a few centuries. This is believed to be caused by thermal pulses, where the helium shell reignites the outer hydrogen shell. This changes the structure of the star, which manifests itself as a change in period. This process is predicted to happen to all Mira variables, but the relatively short duration of thermal pulses (a few thousand years at most) over the asymptotic giant branch lifetime of the star (less than a million years), means we only see it in a few of the several thousand Mira stars known, possibly in R Hydrae. 
Most Mira variables do exhibit slight cycle-to-cycle changes in period, probably caused by nonlinear behaviour in the stellar envelope including deviations from spherical symmetry. Mira variables are popular targets for amateur astronomers interested in variable star observations, because of their dramatic changes in brightness. Some Mira variables (including Mira itself) have reliable observations stretching back well over a century. List The following list contains selected Mira variables. Unless otherwise noted, the given magnitudes are in the V-band, and distances are from the Gaia DR2 star catalogue.
Physical sciences
Stellar astronomy
Astronomy
1118041
https://en.wikipedia.org/wiki/Dental%20implant
Dental implant
A dental implant (also known as an endosseous implant or fixture) is a prosthesis that interfaces with the bone of the jaw or skull to support a dental prosthesis such as a crown, bridge, denture, or facial prosthesis or to act as an orthodontic anchor. The basis for modern dental implants is a biological process called osseointegration, in which materials such as titanium or zirconia form an intimate bond to the bone. The implant fixture is first placed so that it is likely to osseointegrate, then a dental prosthetic is added. A variable amount of healing time is required for osseointegration before either the dental prosthetic (a tooth, bridge, or denture) is attached to the implant or an abutment is placed which will hold a dental prosthetic or crown. Success or failure of implants depends primarily on the thickness and health of the bone and gingival tissues that surround the implant, but also on the health of the person receiving the treatment and drugs which affect the chances of osseointegration. The amount of stress that will be put on the implant and fixture during normal function is also evaluated. Planning the position and number of implants is key to the long-term health of the prosthetic since biomechanical forces created during chewing can be significant. The position of implants is determined by the position and angle of adjacent teeth, by lab simulations or by using computed tomography with CAD/CAM simulations and surgical guides called stents. The prerequisites for long-term success of osseointegrated dental implants are healthy bone and gingiva. Since both can atrophy after tooth extraction, pre-prosthetic procedures such as sinus lifts or gingival grafts are sometimes required to recreate ideal bone and gingiva. The final prosthetic can be either fixed, where a person cannot remove the denture or teeth from their mouth, or removable, where they can remove the prosthetic. In each case an abutment is attached to the implant fixture. Where the prosthetic is fixed, the crown, bridge or denture is fixed to the abutment either with lag screws or with dental cement. Where the prosthetic is removable, a corresponding adapter is placed in the prosthetic so that the two pieces can be secured together. The risks and complications related to implant therapy divide into those that occur during surgery (such as excessive bleeding or nerve injury, inadequate primary stability), those that occur in the first six months (such as infection and failure to osseointegrate) and those that occur long-term (such as peri-implantitis and mechanical failures). In the presence of healthy tissues, a well-integrated implant with appropriate biomechanical loads can have 5-year plus survival rates from 93 to 98 percent and 10-to-15-year lifespans for the prosthetic teeth. Long-term studies show a 16- to 20-year success (implants surviving without complications or revisions) between 52% and 76%, with complications occurring up to 48% of the time. Artificial intelligence is relevant as the basis for clinical decision support systems at the present time. Intelligent systems are used as an aid in determining the success rate of implants. Medical uses The primary use of dental implants is to support dental prosthetics (i.e. false teeth). Modern dental implants work through a biologic process where bone fuses tightly to the surface of specific materials such as titanium and some ceramics. The integration of implant and bone can support physical loads for decades without failure. 
The US has seen an increasing use of dental implants, with usage increasing from 0.7% of patients missing at least one tooth (1999–2000), to 5.7% (2015–2016), and was projected to potentially reach 26% in 2026. Implants are used to replace missing individual teeth (single tooth restorations), multiple teeth, or to restore edentulous (toothless) dental arches (implant retained fixed bridge, implant-supported overdenture). While use of dental implants in the US has increased, other treatments to tooth loss exist. Dental implants are also used in orthodontics to provide anchorage (orthodontic mini implants). Orthodontic treatment might be required prior to placing a dental implant. An evolving field is the use of implants to retain obturators (removable prostheses used to fill a communication between the oral and maxillary or nasal cavities). Facial prosthetics, used to correct facial deformities (e.g. from cancer treatment or injuries), can use connections to implants placed in the facial bones. Depending on the situation the implant may be used to retain either a fixed or removable prosthetic that replaces part of the face. Single tooth implant restoration Single tooth restorations are individual freestanding units not connected to other teeth or implants, used to replace missing individual teeth. For individual tooth replacement, an implant abutment is first secured to the implant with an abutment screw. A crown (the dental prosthesis) is then connected to the abutment with dental cement, a small screw, or fused with the abutment as one piece during fabrication. Dental implants, in the same way, can also be used to retain a multiple tooth dental prosthesis either in the form of a fixed bridge or removable dentures. There is limited evidence that implant-supported single crowns perform better than tooth-supported fixed partial dentures (FPDs) on a long-term basis. However, taking into account the favorable cost-benefit ratio and the high implant survival rate, dental implant therapy is the first-line strategy for single-tooth replacement. Implants preserve the integrity of the teeth adjacent to the edentulous area, and it has been shown that dental implant therapy is less costly and more efficient over time than tooth-supported FPDs for the replacement of one missing tooth. The major disadvantage of dental implant surgery is the need for a surgical procedure. Implant retained fixed bridge or implant supported bridge An implant supported bridge (or fixed denture) is a group of teeth secured to dental implants so the prosthetic cannot be removed by the user. They are similar to conventional bridges, except that the prosthesis is supported and retained by one or more implants instead of natural teeth. Bridges typically connect to more than one implant and may also connect to teeth as anchor points. Typically the number of teeth will outnumber the anchor points with the teeth that are directly over the implants referred to as abutments and those between abutments referred to as pontics. Implant supported bridges attach to implant abutments in the same way as a single tooth implant replacement. A fixed bridge may replace as few as two teeth (also known as a fixed partial denture) and may extend to replace an entire arch of teeth (also known as a fixed full denture). In both cases, the prosthesis is said to be fixed because it cannot be removed by the denture wearer. 
Implant-supported overdenture A removable implant-supported denture (also an implant-supported overdenture) is a removable prosthesis which replaces teeth, using implants to improve support, retention and stability. They are most commonly complete dentures (as opposed to partial), used to restore edentulous dental arches. The dental prosthesis can be disconnected from the implant abutments with finger pressure by the wearer. To enable this, the abutment is shaped as a small connector (a button, ball, bar or magnet) which can be connected to analogous adapters in the underside of the dental prosthesis. Orthodontic mini-implants (TAD) Dental implants are used in orthodontic patients to replace missing teeth (as above) or as a temporary anchorage device (TAD) to facilitate orthodontic movement by providing an additional anchorage point. For teeth to move, a force must be applied to them in the direction of the desired movement. The force stimulates cells in the periodontal ligament to cause bone remodeling, removing bone in the direction of travel of the tooth and adding it to the space created. In order to generate a force on a tooth, an anchor point (something that will not move) is needed. Since implants do not have a periodontal ligament, and bone remodelling will not be stimulated when tension is applied, they are ideal anchor points in orthodontics. Typically, implants designed for orthodontic movement are small and do not fully osseointegrate, allowing easy removal following treatment. They are indicated when needing to shorten treatment time, or as an alternative to extra-oral anchorage. Mini-implants are frequently placed between the roots of teeth, but may also be sited in the roof of the mouth. They are then connected to a fixed brace to help move the teeth. Small-diameter implants (mini-implants) The introduction of small-diameter implants has provided dentists the means of providing edentulous and partially edentulous patients with immediate functioning transitional prostheses while definitive restorations are being fabricated. Many clinical studies have been done on the success of long-term usage of these implants. Based on the findings of many studies, mini dental implants exhibit excellent survival rates in the short to medium term (3–5 years). They appear to be a reasonable alternative treatment modality to retain mandibular complete overdentures from the available evidence. Composition A typical conventional implant consists of a titanium screw (resembling a tooth root) with a roughened or smooth surface. The majority of dental implants are made of commercially pure titanium, which is available in four grades depending upon the amount of carbon, nitrogen, oxygen and iron contained. Cold work hardened CP4 (maximum impurity limits of N .05 percent, C .10 percent, H .015 percent, Fe .50 percent, and O .40 percent) is the most commonly used titanium for implants. Grade 5 titanium, Titanium 6AL-4V (signifying the titanium alloy containing 6 percent aluminium and 4 percent vanadium alloy) is slightly harder than CP4 and used in the industry mostly for abutment screws and abutments. Most modern dental implants also have a textured surface (through etching, anodic oxidation or various-media blasting) to increase the surface area and osseointegration potential of the implant. If C.P. 
titanium or a titanium alloy has more than 85% titanium content, it will form a titanium-biocompatible titanium oxide surface layer or veneer that encloses the other metals, preventing them from contacting the bone. Ceramic (zirconia-based) implants exist in one-piece (combining the screw and the abutment) or two-piece systems - the abutment being either cemented or screwed – and might lower the risk for peri‐implant diseases, but long-term data on success rates is missing. Technique Planning General considerations Planning for dental implants focuses on the general health condition of the patient, the local health condition of the mucous membranes and the jaws and the shape, size, and position of the bones of the jaws, adjacent and opposing teeth. There are few health conditions that absolutely preclude placing implants and there are certain conditions that can increase the risk of failure. Those with poor oral hygiene, heavy smokers and diabetics are all at greater risk for a variant of gum disease that affects implants called peri-implantitis, increasing the chance of long-term failures. Long-term steroid use, osteoporosis and other diseases that affect the bones can increase the risk of early failure of implants. It has been suggested that radiotherapy can negatively affect the survival of implants. Nevertheless, a systemic study published in 2016 concluded that dental implants installed in the irradiated area of an oral cavity may have a high survival rate, provided that the patient maintains oral hygiene measures and regular follow-ups to prevent complications. Biomechanical considerations The long-term success of implants is determined in part by the forces they have to support. As implants have no periodontal ligament, there is no sensation of pressure when biting so the forces created are higher. To offset this, the location of implants must distribute forces evenly across the prosthetics they support. Concentrated forces can result in fracture of the bridgework, implant components, or loss of bone adjacent the implant. The ultimate location of implants is based on both biologic (bone type, vital structures, health) and mechanical factors. Implants placed in thicker, stronger bone like that found in the front part of the bottom jaw have lower failure rates than implants placed in lower density bone, such as the back part of the upper jaw. People who grind their teeth also increase the force on implants and increase the likelihood of failures. The design of implants has to account for a lifetime of real-world use in a person's mouth. Regulators and the dental implant industry have created a series of tests to determine the long-term mechanical reliability of implants in a person's mouth where the implant is struck repeatedly with increasing forces (similar in magnitude to biting) until it fails. When a more exacting plan is needed beyond clinical judgment, the dentist will make an acrylic guide (called a stent) prior to surgery which guides optimal positioning of the implant. Increasingly, dentists opt to get a CT scan of the jaws and any existing dentures, then plan the surgery on CAD/CAM software. The stent can then be made using stereolithography following computerized planning of a case from the CT scan. The use of CT scanning in complex cases also helps the surgeon identify and avoid vital structures such as the inferior alveolar nerve and the sinus. 
Bisphosphonate drugs The use of bone-building drugs, like bisphosphonates and anti-RANKL drugs, requires special consideration with implants because they have been associated with a disorder called medication-associated osteonecrosis of the jaw (MRONJ). The drugs change bone turnover, which is thought to put people at risk for death of bone when having minor oral surgery. At routine doses (for example, those used to treat routine osteoporosis) the effects of the drugs linger for months or years but the risk appears to be very low. Because of this duality, uncertainty exists in the dental community about how to best manage the risk of BRONJ when placing implants. A 2009 position paper by the American Association of Oral and Maxillofacial Surgeons discussed that the risk of BRONJ from low dose oral therapy (or slow-release injectable) as between 0.01 and 0.06 percent for any procedure done on the jaws (implant, extraction, etc.). The risk is higher with intravenous therapy, procedures on the lower jaw, people with other medical issues, those on steroids, those on more potent bisphosphonates and people who have taken the drug for more than three years. The position paper recommends against placing implants in people who are taking high-dose or high-frequency intravenous therapy for cancer care. Otherwise, implants can generally be placed and the use of bisphosphonates does not appear to affect implant survival. Additional precaution can be taken by administering pentoxifylline and tocopherol both pre-operatively and post-operatively. Main surgical procedures Placing the implant Most implant systems have five basic steps for placement of each implant: Soft tissue reflection: An incision is made over the crest of bone, splitting the thicker attached gingiva roughly in half so that the final implant will have a thick band of tissue around it. The edges of tissue, each referred to as a flap, are pushed back to expose the bone. Flapless surgery is an alternate technique, where a small punch of tissue (the diameter of the implant) is removed for implant placement rather than raising flaps. Drilling at high speed: After reflecting the soft tissue, and using a surgical guide or stent as necessary, pilot holes are placed with precision drills at highly regulated speed to prevent burning or pressure necrosis of the bone. Drilling at low speed: The pilot hole is expanded by using progressively wider drills (typically between three and seven successive drilling steps, depending on implant width and length). Care is taken not to damage the osteoblast or bone cells by overheating. A cooling saline or water spray keeps the temperature low. Placement of the implant: The implant screw is placed and can be self-tapping; otherwise, the prepared site is tapped with an implant analog. It is then screwed into place with a torque controlled wrench at a precise torque so as not to overload the surrounding bone (overloaded bone can die, a condition called osteonecrosis, which may lead to failure of the implant to fully integrate or bond with the jawbone). Tissue adaptation: The gingiva is adapted around the entire implant to provide a thick band of healthy tissue around the healing abutment. In contrast, an implant can be "buried", where the top of the implant is sealed with a cover screw and the tissue is closed to completely cover it. A second procedure would then be required to uncover the implant at a later date. 
Timing of implants after extraction of teeth There are different approaches to the placement of dental implants after tooth extraction. The approaches are: Immediate post-extraction implant placement. Delayed immediate post-extraction implant placement (two weeks to three months after extraction). Late implantation (three months or more after tooth extraction). An increasingly common strategy to preserve bone and reduce treatment times includes the placement of a dental implant into a recent extraction site. On the one hand, it shortens treatment time and can improve aesthetics because the soft tissue envelope is preserved. On the other hand, implants may have a slightly higher rate of initial failure. Conclusions on this topic are difficult to draw, however, because few studies have compared immediate and delayed implants in a scientifically rigorous manner. One versus two-stage surgery After an implant is placed the internal components are covered with either a healing abutment or a cover screw. A healing abutment passes through the mucosa, and the surrounding mucosa is adapted around it. A cover screw is flush with the surface of the dental implant, and is designed to be completely covered by mucosa. After an integration period, a second surgery is required to reflect the mucosa and place a healing abutment. In the early stages of implant development (1970–1990) implant systems used a two-stage approach, believing that it improved the odds of initial implant survival. Subsequent research suggests that no difference in implant survival existed between one-stage and two-stage surgeries, and the choice of whether or not to "bury" the implant in the first stage of surgery became a concern of soft tissue (gingiva) management. When tissue is inadequate, deficient or mutilated by the loss of teeth, adjacent bone or gingiva, implants are placed and allowed to osseointegrate, then the gingival flap is surgically placed around the healing abutments. The downside of a two-stage technique is the need for additional surgery and compromise of circulation to the tissue due to repeated surgeries. The choice of one or two stages now centers around how best to reconstruct the soft tissues around lost teeth. Additional procedures to augment deficient bone in implant site For an implant to osseointegrate, it needs to be surrounded by a healthy quantity of bone. In order for it to survive long-term, it needs to have a thick healthy soft tissue (gingiva) envelope around it. It is common for either the bone or soft tissue to be so deficient that the surgeon needs to reconstruct it either before or during implant placement. All techniques of augmenting the alveolar bone in preparation for implant placement are invasive and associated with a degree of morbidity. Hard tissue (bone) reconstruction Bone grafting is necessary when there is a lack of bone. Also, it helps to stabilize the implant by increasing survival of the implant and decreasing marginal bone level loss. While there are always new implant types, such as short implants, and techniques to allow compromise, a general treatment goal is to have sufficient bone height and width around the planned implant. Alternatively, bone defects are graded from A to D (A=10+ mm of bone, B=7–9 mm, C=4–6 mm and D=0–3 mm) where an implant's likelihood of osseointegrating is related to the grade of bone. To achieve an adequate width and height of bone, various bone grafting techniques have been developed. 
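Before turning to the grafting techniques themselves, the A-to-D bone grading quoted above can be made concrete with a minimal sketch (Python, illustrative only; how heights falling between the published ranges, such as 3.5 mm or 9.5 mm, should be treated is an assumption, here rounded down to the worse grade).

```python
def bone_defect_grade(bone_height_mm: float) -> str:
    """Classify available bone height (mm) into the A-D grades quoted in the text.

    Assumption: heights in the gaps between the published ranges (e.g. 3.5 mm
    or 9.5 mm) are assigned to the lower, i.e. worse, grade.
    """
    if bone_height_mm >= 10:
        return "A"  # 10+ mm of bone
    if bone_height_mm >= 7:
        return "B"  # 7-9 mm
    if bone_height_mm >= 4:
        return "C"  # 4-6 mm
    return "D"      # 0-3 mm


if __name__ == "__main__":
    for height in (12, 8, 5, 2):
        print(f"{height} mm of bone -> grade {bone_defect_grade(height)}")
```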
The most frequently used is called guided bone graft augmentation, where a defect is filled with either natural (harvested or autograft) bone or allograft (donor bone or synthetic bone substitute), covered with a semi-permeable membrane and allowed to heal. During the healing phase, natural bone replaces the graft, forming a new bony base for the implant. Three common procedures are: Sinus lift Lateral alveolar augmentation (increase in the width of a site) Vertical alveolar augmentation (increase in the height of a site) Other, more invasive procedures also exist for larger bone defects, including mobilization of the inferior alveolar nerve to allow placement of a fixture, onlay bone grafting using the iliac crest or another large source of bone, and microvascular bone grafting, where the blood supply to the bone is transplanted with the source bone and reconnected to the local blood supply. The final decision about which bone grafting technique is best is based on an assessment of the degree of vertical and horizontal bone loss that exists, each of which is classified into mild (2–3 mm loss), moderate (4–6 mm loss) or severe (greater than 6 mm loss). Orthodontic extrusion or orthodontic implant site development can be used in selected cases for vertical/horizontal alveolar augmentation. Soft tissue (gingiva) reconstruction The gingiva surrounding a tooth has a 2–3 mm band of bright pink, very strong attached mucosa, then a darker, larger area of unattached mucosa that folds into the cheeks. When replacing a tooth with an implant, a band of strong, attached gingiva is needed to keep the implant healthy in the long term. This is especially important with implants because the blood supply is more precarious in the gingiva surrounding an implant, and it is theoretically more susceptible to injury because of a longer attachment to the implant than on a tooth (a longer biologic width). When an adequate band of attached tissue is absent, it can be recreated with a soft tissue graft. There are four methods that can be used to transplant soft tissue. A roll of tissue adjacent to an implant (referred to as a palatal roll) can be moved towards the lip (buccal), gingiva from the palate can be transplanted, deeper connective tissue from the palate can be transplanted or, when a larger piece of tissue is needed, a finger of tissue based on a blood vessel in the palate (called a vascularized interpositional periosteal-connective tissue (VIP-CT) flap) can be repositioned to the area. Xenogeneic collagen matrices are used for gingival augmentation after dental implantation. Additionally, for an implant to look esthetic, a band of full, plump gingiva is needed to fill in the space on either side of the implant. The most common soft tissue complication is called a black triangle, where the papilla (the small triangular piece of tissue between two teeth) shrinks back and leaves a triangular void between the implant and the adjacent teeth. Dentists can only expect 2–4 mm of papilla height over the underlying bone. A black triangle can be expected if the distance between where the teeth touch and the bone is any greater. The orthodontic implant site-switching technique Alveolar bone resorption is a common side effect of tooth removal (extraction) due to severe tooth decay, trauma, or infection that limits dental implant placement. 
Surgical bone augmentation is associated with limitations such as high cost, bone graft rejection or failure, pain, infection, and the addition of 6–12 months to the treatment time until the graft matures. Compared with invasive bone augmentation surgery, orthodontic tooth movement has the capacity to regenerate the deficient alveolar ridge and create adequate bone volume for implant placement. This is particularly useful when restoring one or two missing teeth with implants; however, the orthodontic implant site-switching technique can only be used when there is an edentulous area adjacent to healthy teeth that can be moved orthodontically into the edentulous site and generate healthy bone volume for implant placement. Orthodontic tooth movement can generate new bone because the fibres of the periodontal ligament (PDL) surround the teeth and attach to the alveolar bone; as a tooth is moved, the stretched PDL fibres stimulate osteoblasts to deposit new alveolar bone. For instance, the orthodontic forced eruption of hopeless teeth can augment the bone vertically and eliminate or reduce the amount of bone graft required prior to implant placement. Similarly, where there is a bone-deficient edentulous (toothless) site, it is possible to move the healthy adjacent teeth into this area, closing the edentulous space and simultaneously creating an implant site with enough bone adjacent to where implant placement was originally planned. Recovery The prosthetic phase begins once the implant is well integrated (or has a reasonable assurance that it will integrate) and an abutment is in place to bring it through the mucosa. Even in the event of early loading (less than three months), many practitioners will place temporary teeth until osseointegration is confirmed. The prosthetic phase of restoring an implant requires an equal amount of technical expertise as the surgical phase because of the biomechanical considerations, especially when multiple teeth are to be restored. The dentist will work to restore the vertical dimension of occlusion, the esthetics of the smile, and the structural integrity of the teeth to evenly distribute the forces of the implants. Healing time There are various options for when to attach teeth to dental implants, classified into: Immediate loading procedure. Early loading (one week to twelve weeks). Delayed loading (over three months). For an implant to become permanently stable, the body must grow bone to the surface of the implant (osseointegration). Based on this biologic process, it was thought that loading an implant during the osseointegration period would result in movement that would prevent osseointegration, and thus increase implant failure rates. As a result, three to six months of integrating time (depending on various factors) was allowed before placing the teeth on implants (restoring them). However, later research suggests that the initial stability of the implant in bone is a more important determinant of success of implant integration than a certain period of healing time. As a result, the time allowed to heal is typically based on the density of bone the implant is placed in and the number of implants splinted together, rather than a uniform amount of time. When implants can withstand high torque (35 Ncm) and are splinted to other implants, there are no meaningful differences in long-term implant survival or bone loss between implants loaded immediately, at three months, or at six months. 
The corollary is that single implants, even in solid bone, require a period of no-load to minimize the risk of initial failure. Single teeth, bridges and fixed dentures An abutment is selected depending on the application. In many single crown and fixed partial denture scenarios (bridgework), custom abutments are used. An impression of the top of the implant is made with the adjacent teeth and gingiva. A dental lab then simultaneously fabricates an abutment and crown. The abutment is seated on the implant, a screw passes through the abutment to secure it to an internal thread on the implant (lag-screw). There are variations on this, such as when the abutment and implant body are one piece or when a stock (prefabricated) abutment is used. Custom abutments can be made by hand, as a cast metal piece or custom milled from metal or zirconia, all of which have similar success rates. The platform between the implant and the abutment can be flat (buttress) or conical fit. In conical fit abutments, the collar of the abutment sits inside the implant which allows a stronger junction between implant and abutment and a better seal against bacteria into the implant body. To improve the gingival seal around the abutment collar, a narrowed collar on the abutment is used, referred to as platform switching. The combination of conical fits and platform switching gives marginally better long term periodontal conditions compared to flat-top abutments. Regardless of the abutment material or technique, an impression of the abutment is then taken and a crown secured to the abutment with dental cement. Another variation on abutment/crown model is when the crown and abutment are one piece and the lag-screw traverses both to secure the one-piece structure to the internal thread on the implant. There does not appear to be any benefit, in terms of success, for cement versus screw-retained prosthetics, although the latter is believed to be easier to maintain (and change when the prosthetic fractures) and the former offers high esthetic performance. Prosthetic procedures for removable dentures When a removable denture is worn, retainers to hold the denture in place can be either custom made or "off-the-shelf" (stock) abutments. When custom retainers are used, four or more implant fixtures are placed and an impression of the implants is taken and a dental lab creates a custom metal bar with attachments to hold the denture in place. Significant retention can be created with multiple attachments and the use of semi-precision attachments (such as a small diameter pin that pushes through the denture and into the bar) which allows for little or no movement in the denture, but it remains removable. However, the same four implants angled in such a way to distribute occlusal forces may be able to safely hold a fixed denture in place with comparable costs and number of procedures giving the denture wearer a fixed solution. Alternatively, stock abutments are used to retain dentures using a male-adapter attached to the implant and a female adapter in the denture. Two common types of adapters are the ball-and-socket style retainer and the button-style adapter. These types of stock abutments allow movement of the denture, but enough retention to improve the quality of life for denture wearers, compared to conventional dentures. 
Regardless of the type of adapter, the female portion of the adapter that is housed in the denture will require periodic replacement, however the number and adapter type does not seem to affect patient satisfaction with the prosthetic for various removable alternatives. Maintenance After placement, implants need to be cleaned (similar to natural teeth) with a periodontal scaler to remove any plaque. Because of the more precarious blood supply to the gingiva, care should be taken with dental floss. Implants will lose bone at a rate similar to natural teeth in the mouth (e.g. if someone has periodontal disease, an implant can be affected by a similar disorder) but will otherwise last. The porcelain on crowns should be expected to discolour, fracture or require repair approximately every ten years, although there is significant variation in the service life of dental crowns based on the position in the mouth, the forces being applied from opposing teeth and the restoration material. Where implants are used to retain a complete denture, depending on the type of attachment, connections need to be changed or refreshed every one to two years. An oral irrigator may also be useful for cleaning around implants. The same kinds of techniques used for cleaning teeth are recommended for maintaining hygiene around implants, and can be manually or professionally administered. Examples of this would be using soft toothbrushes or nylon-coated interproximal brushes. The one implication during professional treatment is that metal instruments may cause damage to the metallic surface of the implant or abutment, which can lead to bacterial colonisation. To avoid this, there are specially designed instruments made with hard plastic or rubber. Additionally rinsing (twice daily) with antimicrobial mouthwashes has been shown to be beneficial. There is no evidence that one type of antimicrobial is better than the other. Peri-implantitis is a condition that may occur with implants due to bacteria, plaque, or design and it is on the rise. This disease begins as a reversible condition called peri-implant mucositis but can progress to peri-implantitis if left untreated, which can lead to implant failure. People are encouraged to discuss oral hygiene and maintenance of implants with their dentists. There are different interventions if peri-implantitis occurs, such as mechanical debridement, antimicrobial irrigation, and antibiotics. There can also be surgery such as open-flap debridement to remove bacteria, assess/smooth implant surface, or decontaminate implant surface. There is not enough evidence to know which intervention is best in the case of peri-implantitis. Risks and complications During surgery Placement of dental implants is a surgical procedure and carries the normal risks of surgery including infection, excessive bleeding and necrosis of the flap of tissue around the implant. Nearby anatomic structures, such as the inferior alveolar nerve, the maxillary sinus and blood vessels, can also be injured when the osteotomy is created or the implant placed. Even when the lining of the maxillary sinus is perforated by an implant, long term sinusitis is rare. An inability to place the implant in bone to provide stability of the implant (referred to as primary stability of the implant) increases the risk of failure to osseointegration. First six months Primary implant stability Primary implant stability refers to the stability of a dental implant immediately after implantation. 
The stability of the titanium screw implant in the patient's bone tissue after surgery may be non-invasively assessed using resonance frequency analysis. Sufficient initial stability may allow immediate loading with prosthetic reconstruction, though early loading poses a higher risk of implant failure than conventional loading. The relevance of primary implant stability decreases gradually with regrowth of bone tissue around the implant in the first weeks after surgery, leading to secondary stability. Secondary stability is different from the initial stabilization, because it results from the ongoing process of bone regrowth into the implant (osseointegration). When this healing process is complete, the initial mechanical stability becomes biological stability. Primary stability is critical to implantation success until bone regrowth maximizes mechanical and biological support of the implant. Regrowth usually occurs during the 3–4 weeks after implantation. Insufficient primary stability, or high initial implant mobility, can lead to failure. Immediate post-operative risks include infection (pre-operative antibiotics reduce the risk of implant failure by 33 percent but do not affect the risk of infection), excessive bleeding, flap breakdown (less than 5 percent) and failure to integrate. An implant is tested between 8 and 24 weeks to determine whether it is integrated. There is significant variation in the criteria used to determine implant success; the most commonly cited criteria at the implant level are the absence of pain, mobility, infection, gingival bleeding, radiographic lucency and peri-implant bone loss greater than 1.5 mm. Dental implant success is related to operator skill, quality and quantity of the bone available at the site, and the patient's oral hygiene, but the most important factor is primary implant stability. While there is significant variation in the rate at which implants fail to integrate (due to individual risk factors), the approximate values are 1 to 6 percent. Integration failure is rare, particularly if a dentist's or oral surgeon's instructions are followed closely by the patient. Immediate loading implants may have a higher rate of failure, potentially due to being loaded immediately after trauma or extraction, but the difference with proper care and maintenance is well within statistical variance for this type of procedure. More often, osseointegration failure occurs when a patient is either too unhealthy to receive the implant or engages in behavior that compromises proper dental hygiene, such as smoking or drug use. Long term The long-term complications that result from restoring teeth with implants relate directly to the risk factors of the patient and the technology. There are the risks associated with appearance including a high smile line, poor gingival quality and missing papillae, difficulty in matching the form of natural teeth that may have unequal points of contact or uncommon shapes, bone that is missing, atrophied or otherwise shaped in an unsuitable manner, unrealistic expectations of the patient or poor oral hygiene. The risks can be related to biomechanical factors, where the geometry of the implants does not support the teeth in the same way the natural teeth did, such as when there are cantilevered extensions, fewer implants than roots, or teeth that are longer than the implants that support them (a poor crown-to-root ratio). Similarly, grinding of the teeth, lack of bone or low-diameter implants increase the biomechanical risk. 
Finally, there are technological risks, where the implants themselves can fail due to fracture or a loss of retention to the teeth they are intended to support. Long-term failures are due either to loss of bone or gingiva around the implant as a result of peri-implantitis, or to a mechanical failure of the implant. Because there is no dental enamel on an implant, it does not fail due to cavities like natural teeth. While large-scale, long-term studies are scarce, several systematic reviews estimate the long-term (five to ten years) survival of dental implants at 93–98 percent depending on their clinical use. During initial development of implant-retained teeth, all crowns were attached to the teeth with screws, but more recent advancements have allowed placement of crowns on the abutments with dental cement (akin to placing a crown on a tooth). This has created the potential for cement that escapes from under the crown during cementation to become caught in the gingiva and create peri-implantitis. While this complication can occur, there does not appear to be any additional peri-implantitis in cement-retained crowns compared to screw-retained crowns overall. In compound implants (two-stage implants), there are gaps and cavities between the actual implant and the superstructure (abutment) into which bacteria can penetrate from the oral cavity. These bacteria can later pass back into the adjacent tissue and cause peri-implantitis. Criteria for the success of the implant-supported dental prosthetic vary from study to study, but can be broadly classified into failures due to the implant, the soft tissues or the prosthetic components, or a lack of satisfaction on the part of the patient. The most commonly cited criteria for success are function for at least five years in the absence of pain, mobility, radiographic lucency and peri-implant bone loss greater than 1.5 mm on the implant; the lack of suppuration or bleeding in the soft tissues; an acceptable rate of technical complications and prosthetic maintenance; and adequate function and esthetics in the prosthetic. In addition, the patient should ideally be free of pain and paraesthesia, able to chew and taste, and be pleased with the esthetics. 
The rates of complications vary by implant use and prosthetic type and are listed below:
Single crown implants (5-year)
Implant survival: 96.8 percent
Crown survival: metal-ceramic: 95.4 percent; all-ceramic: 91.2 percent; cumulative rate of ceramic or acrylic veneer fracture: 4.5 percent
Peri-implantitis: 9.7 percent, up to 40 percent
Peri-implant mucositis: 50 percent
Implant fracture: 0.14 percent
Screw or abutment loosening: 12.7 percent
Screw or abutment fracture: 0.35 percent
Fixed complete dentures
Progressive vertical bone loss, but still in function (peri-implantitis): 8.5 percent
Failure after the first year: 5 percent at five years, 7 percent at ten years
Incidence of veneer fracture: 5-year: 13.5 to 30.6 percent; 10-year: 51.9 percent (32.3 to 75.5 percent, 95 percent confidence interval); 15-year: 66.6 percent (44.3 to 86.4 percent, 95 percent confidence interval)
10-year incidence of framework fracture: 6 percent (2.6 to 9.3 percent, 95 percent confidence interval)
10-year incidence of esthetic deficiency: 6.1 percent (2.4 to 9.7 percent, 95 percent confidence interval)
Prosthetic screw loosening: 5 percent over five years to 15 percent over ten years
The most common complication is fracture or wear of the tooth structure, especially beyond ten years, with fixed dental prostheses made of metal-ceramic having significantly higher ten-year survival compared with those made of gold-acrylic.
Removable dentures (overdentures)
Loosening of removable denture retention: 33 percent
Dentures needing to be relined or having a retentive clip fracture: 16 to 19 percent
History There is archeological evidence that humans have attempted to replace missing teeth with root-form implants for thousands of years. Remains from ancient China (dating 4000 years ago) have carved bamboo pegs, tapped into the bone, to replace lost teeth, and 2000-year-old remains from ancient Egypt have similarly shaped pegs made of precious metals. Some Egyptian mummies were found to have transplanted human teeth, and in other instances, teeth made of ivory. Etruscans produced the first pontics using single gold bands as early as 630 BC and perhaps earlier. In 1931, at a site in Honduras dating back to 600 AD, Wilson Popenoe and his wife found the lower mandible of a young Mayan woman, with three missing incisors replaced by pieces of sea shells shaped to resemble teeth. Bone growth around two of the implants, and the formation of calculus, indicate that they were functional as well as esthetic. The fragment is currently part of the Osteological Collection of the Peabody Museum of Archaeology and Ethnology at Harvard University. In modern times, a tooth replica implant was reported as early as 1969, but the polymethacrylate tooth analogue was encapsulated by soft tissue rather than osseointegrated. The early part of the 20th century saw a number of implants made of a variety of materials. One of the earliest successful implants was the Greenfield implant system of 1913 (also known as the Greenfield crib or basket). Greenfield's implant, an iridioplatinum implant attached to a gold crown, showed evidence of osseointegration and lasted for a number of years. The first use of titanium as an implantable material was by Bothe, Beaton and Davenport in 1940, who observed how close the bone grew to titanium screws, and the difficulty they had in extracting them. Bothe et al. 
were the first researchers to describe what would later be called osseointegration (a name that would be marketed later on by Per-Ingvar Brånemark). In 1951, Gottlieb Leventhal implanted titanium rods in rabbits. Leventhal's positive results led him to believe that titanium represented the ideal metal for surgery. In the 1950s, research was being conducted at Cambridge University in England on blood flow in living organisms. These workers devised a method of constructing a chamber of titanium which was then embedded into the soft tissue of the ears of rabbits. In 1952 the Swedish orthopaedic surgeon Per-Ingvar Brånemark was interested in studying bone healing and regeneration. During his research time at Lund University he adopted the Cambridge-designed "rabbit ear chamber" for use in the rabbit femur. Following the study, he attempted to retrieve these expensive chambers from the rabbits and found that he was unable to remove them. Brånemark observed that bone had grown into such close proximity with the titanium that it effectively adhered to the metal. Brånemark carried out further studies into this phenomenon, using both animal and human subjects, which all confirmed this unique property of titanium. Leonard Linkow, in the 1950s, was one of the first to insert titanium and other metal implants into the bones of the jaw. Artificial teeth were then attached to these pieces of metal. In 1965 Brånemark placed his first titanium dental implant into a human volunteer. He began working in the mouth as it was more accessible for continued observations, and the high rate of missing teeth in the general population offered more subjects for widespread study. He termed the clinically observed adherence of bone to titanium "osseointegration". Since then implants have evolved into three basic types: Root-form implants, the most common type of implant, indicated for all uses. Within the root-form type of implant, there are roughly 18 variants, all made of titanium but with different shapes and surface textures. There is limited evidence showing that implants with relatively smooth surfaces are less prone to peri-implantitis than implants with rougher surfaces, and no evidence showing that any particular type of dental implant has superior long-term success. Zygoma implants, long implants that can anchor to the cheek bone by passing through the maxillary sinus to retain a complete upper denture when bone is absent. While zygomatic implants offer a novel approach to severe bone loss in the upper jaw, they have not been shown to offer any functional advantage over bone grafting, although they may offer a less invasive option, depending on the size of the reconstruction required. Small-diameter implants, implants of low diameter with one-piece construction (implant and abutment) that are sometimes used for denture retention or orthodontic anchorage.
Biology and health sciences
Dental treatments
Health
1118042
https://en.wikipedia.org/wiki/Asymptotic%20giant%20branch
Asymptotic giant branch
The asymptotic giant branch (AGB) is a region of the Hertzsprung–Russell diagram populated by evolved cool luminous stars. This is a period of stellar evolution undertaken by all low- to intermediate-mass stars (about 0.5 to 8 solar masses) late in their lives. Observationally, an asymptotic-giant-branch star will appear as a bright red giant with a luminosity ranging up to thousands of times greater than the Sun. Its interior structure is characterized by a central and largely inert core of carbon and oxygen, a shell where helium is undergoing fusion to form carbon (known as helium burning), another shell where hydrogen is undergoing fusion forming helium (known as hydrogen burning), and a very large envelope of material of composition similar to main-sequence stars (except in the case of carbon stars). Stellar evolution When a star exhausts the supply of hydrogen by nuclear fusion processes in its core, the core contracts and its temperature increases, causing the outer layers of the star to expand and cool. The star becomes a red giant, following a track towards the upper-right corner of the HR diagram. Eventually, once the temperature in the core has reached approximately 100 million K, helium burning (fusion of helium nuclei) begins. The onset of helium burning in the core halts the star's cooling and increase in luminosity, and the star instead moves down and leftwards in the HR diagram. This is the horizontal branch (for population II stars) or a blue loop for more massive stars. After the completion of helium burning in the core, the star again moves to the right and upwards on the diagram, cooling and expanding as its luminosity increases. Its path is almost aligned with its previous red-giant track, hence the name asymptotic giant branch, although the star will become more luminous on the AGB than it did at the tip of the red-giant branch. Stars at this stage of stellar evolution are known as AGB stars. AGB stage The AGB phase is divided into two parts, the early AGB (E-AGB) and the thermally pulsing AGB (TP-AGB). During the E-AGB phase, the main source of energy is helium fusion in a shell around a core consisting mostly of carbon and oxygen. During this phase, the star swells up to giant proportions to become a red giant again. The star's radius may become as large as one astronomical unit. After the helium shell runs out of fuel, the TP-AGB starts. Now the star derives its energy from fusion of hydrogen in a thin shell, which restricts the inner helium shell to a very thin layer and prevents it from fusing stably. However, over periods of 10,000 to 100,000 years, helium from the hydrogen shell burning builds up and eventually the helium shell ignites explosively, a process known as a helium shell flash. The power of the shell flash peaks at thousands of times the observed luminosity of the star, but decreases exponentially over just a few years. The shell flash causes the star to expand and cool, which shuts off the hydrogen shell burning and causes strong convection in the zone between the two shells. When the helium shell burning nears the base of the hydrogen shell, the increased temperature reignites hydrogen fusion and the cycle begins again. The large but brief increase in luminosity from the helium shell flash produces an increase in the visible brightness of the star of a few tenths of a magnitude for several hundred years. These changes are unrelated to the brightness variations on periods of tens to hundreds of days that are common in this type of star. 
During the thermal pulses, which last only a few hundred years, material from the core region may be mixed into the outer layers, changing the surface composition, in a process referred to as dredge-up. Because of this dredge-up, AGB stars may show S-process elements in their spectra, and strong dredge-ups can lead to the formation of carbon stars. All dredge-ups following thermal pulses are referred to as third dredge-ups, after the first dredge-up, which occurs on the red-giant branch, and the second dredge-up, which occurs during the E-AGB. In some cases there may not be a second dredge-up, but dredge-ups following thermal pulses will still be called a third dredge-up. Thermal pulses increase rapidly in strength after the first few, so third dredge-ups are generally the deepest and most likely to circulate core material to the surface. AGB stars are typically long-period variables, and suffer mass loss in the form of a stellar wind. For M-type AGB stars, the stellar winds are most efficiently driven by micron-sized grains. Thermal pulses produce periods of even higher mass loss and may result in detached shells of circumstellar material. A star may lose 50 to 70% of its mass during the AGB phase. The mass-loss rates typically range between 10⁻⁸ and 10⁻⁵ M⊙ per year, and can even reach as high as 10⁻⁴ M⊙ per year, while wind velocities are typically between 5 and 30 km/s. Circumstellar envelopes of AGB stars The extensive mass loss of AGB stars means that they are surrounded by an extended circumstellar envelope (CSE). Given a mean AGB lifetime of one Myr and an outer wind velocity of roughly 10 km/s, the maximum radius of the envelope can be estimated to be roughly 30 light years. This is a maximum value, since the wind material will start to mix with the interstellar medium at very large radii, and it also assumes that there is no velocity difference between the star and the interstellar gas. These envelopes have a dynamic and interesting chemistry, much of which is difficult to reproduce in a laboratory environment because of the low densities involved. The nature of the chemical reactions in the envelope changes as the material moves away from the star, expands and cools. Near the star the envelope density is high enough that reactions approach thermodynamic equilibrium. As the material moves farther out, the density falls to the point where kinetics, rather than thermodynamics, becomes the dominant feature. Some energetically favorable reactions can no longer take place in the gas, because the reaction mechanism requires a third body to remove the energy released when a chemical bond is formed. In this region many of the reactions that do take place involve radicals such as OH (in oxygen-rich envelopes) or CN (in the envelopes surrounding carbon stars). In the outermost region of the envelope, the density drops to the point where the dust no longer completely shields the envelope from interstellar UV radiation and the gas becomes partially ionized. These ions then participate in reactions with neutral atoms and molecules. Finally, as the envelope merges with the interstellar medium, most of the molecules are destroyed by UV radiation. The temperature of the CSE is determined by the heating and cooling properties of the gas and dust, and drops with radial distance from the photosphere of the star, which is at a temperature of a few thousand kelvin. 
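As a rough order-of-magnitude check on the envelope size quoted above, the sketch below (Python; the 10 km/s wind speed and 1 Myr lifetime are the assumptions stated in the text) simply multiplies wind speed by AGB lifetime and converts the result to light years.

```python
# Order-of-magnitude estimate of the maximum circumstellar envelope (CSE) radius:
# radius ~ wind speed x AGB lifetime, ignoring mixing with the interstellar medium.

SECONDS_PER_YEAR = 3.156e7      # approximate
KM_PER_LIGHT_YEAR = 9.461e12    # approximate

wind_speed_km_s = 10.0          # assumed outer wind velocity (text quotes 5-30 km/s winds)
agb_lifetime_yr = 1.0e6         # mean AGB lifetime of ~1 Myr, as stated in the text

radius_km = wind_speed_km_s * agb_lifetime_yr * SECONDS_PER_YEAR
radius_ly = radius_km / KM_PER_LIGHT_YEAR

print(f"Maximum CSE radius ~ {radius_ly:.0f} light years")  # roughly 30 light years
```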
Chemical peculiarities of an AGB CSE, moving outwards, include:
Photosphere: local thermodynamic equilibrium chemistry
Pulsating stellar envelope: shock chemistry
Dust formation zone
Chemically quiet zone
Interstellar ultraviolet radiation and photodissociation of molecules: complex chemistry
The dichotomy between oxygen-rich and carbon-rich stars has an initial role in determining whether the first condensates are oxides or carbides, since the least abundant of these two elements will likely remain in the gas phase as COx. In the dust formation zone, refractory elements and compounds (Fe, Si, MgO, etc.) are removed from the gas phase and end up in dust grains. The newly formed dust will immediately assist in surface-catalyzed reactions. The stellar winds from AGB stars are sites of cosmic dust formation, and are believed to be the main production sites of dust in the universe. The stellar winds of AGB stars (Mira variables and OH/IR stars) are also often the site of maser emission. The molecules that account for this are SiO, H2O, OH, HCN, and SiS. SiO, H2O, and OH masers are typically found in oxygen-rich M-type AGB stars such as R Cassiopeiae and U Orionis, while HCN and SiS masers are generally found in carbon stars such as IRC +10216. S-type stars with masers are uncommon. After these stars have lost nearly all of their envelopes, and only the core regions remain, they evolve further into short-lived protoplanetary nebulae. The final fate of the AGB envelopes is represented by planetary nebulae (PNe). Physical samples Physical samples of mineral grains from AGB stars, known as presolar grains, are available for laboratory analysis in the form of individual refractory grains. These formed in the circumstellar dust envelopes and were transported to the early Solar System by stellar wind. A majority of presolar silicon carbide grains have their origin in 1–3 M☉ carbon stars in the late thermally-pulsing AGB phase of their stellar evolution. Late thermal pulse As many as a quarter of all post-AGB stars undergo what is dubbed a "born-again" episode. The carbon–oxygen core is now surrounded by helium with an outer shell of hydrogen. If the helium is re-ignited a thermal pulse occurs and the star quickly returns to the AGB, becoming a helium-burning, hydrogen-deficient stellar object. If the star still has a hydrogen-burning shell when this thermal pulse occurs, it is termed a "late thermal pulse". Otherwise it is called a "very late thermal pulse". The outer atmosphere of the born-again star develops a stellar wind and the star once more follows an evolutionary track across the Hertzsprung–Russell diagram. However, this phase is very brief, lasting only about 200 years before the star again heads toward the white dwarf stage. Observationally, this late thermal pulse phase appears almost identical to a Wolf–Rayet star in the midst of its own planetary nebula. Stars such as Sakurai's Object and FG Sagittae are being observed as they rapidly evolve through this phase. Mapping the circumstellar magnetic fields of thermally pulsing (TP-) AGB stars has recently been reported using the so-called Goldreich-Kylafis effect. Super-AGB stars Stars close to the upper mass limit to still qualify as AGB stars show some peculiar properties and have been dubbed super-AGB stars. They occupy the top of the AGB mass range, with masses up to about 9 solar masses (or more). They represent a transition to the more massive supergiant stars that undergo full fusion of elements heavier than helium. 
During the triple-alpha process, some elements heavier than carbon are also produced: mostly oxygen, but also some magnesium, neon, and even heavier elements. Super-AGB stars develop partially degenerate carbon–oxygen cores that are large enough to ignite carbon in a flash analogous to the earlier helium flash. The second dredge-up is very strong in this mass range and that keeps the core size below the level required for burning of neon as occurs in higher-mass supergiants. The size of the thermal pulses and third dredge-ups are reduced compared to lower-mass stars, while the frequency of the thermal pulses increases dramatically. Some super-AGB stars may explode as an electron capture supernova, but most will end as oxygen–neon white dwarfs. Since these stars are much more common than higher-mass supergiants, they could form a high proportion of observed supernovae. Detecting examples of these supernovae would provide valuable confirmation of models that are highly dependent on assumptions.
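As a footnote to the oxygen-rich versus carbon-rich dichotomy discussed earlier, the role of CO can be illustrated with a toy abundance budget: CO locks up whichever of carbon and oxygen is less abundant, and the leftover element sets whether oxides or carbides condense first. The abundances in the sketch below are invented for illustration, not taken from the article.

```python
# Toy illustration of the C/O dichotomy in AGB envelopes:
# CO is so tightly bound that it consumes the scarcer of C and O;
# the element left over controls whether oxide/silicate or carbide dust forms.

def dust_regime(n_carbon: float, n_oxygen: float) -> str:
    """Return the expected first-condensate regime for given C and O abundances."""
    locked_in_co = min(n_carbon, n_oxygen)   # CO consumes the scarcer element
    free_c = n_carbon - locked_in_co
    free_o = n_oxygen - locked_in_co
    if free_o > 0:
        return f"oxygen-rich: {free_o:.2e} free O -> oxides/silicates"
    if free_c > 0:
        return f"carbon-rich: {free_c:.2e} free C -> carbides (e.g. SiC)"
    return "C/O = 1: almost everything locked in CO"

# Illustrative (made-up) relative number densities:
print(dust_regime(n_carbon=3.0e-4, n_oxygen=5.0e-4))  # M-type, oxygen-rich case
print(dust_regime(n_carbon=8.0e-4, n_oxygen=5.0e-4))  # carbon-star case
```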
Physical sciences
Stellar astronomy
Astronomy
1118171
https://en.wikipedia.org/wiki/Flatness%20problem
Flatness problem
The flatness problem (also known as the oldness problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time. In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since any departure of the total density from the critical value would increase rapidly over cosmic time, the early universe must have had a density even closer to the critical density, departing from it by one part in 10^62 or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this 'special' value. The problem was first mentioned by Robert Dicke in 1969. The most commonly accepted solution among cosmologists is cosmic inflation, the idea that the universe went through a brief period of extremely rapid expansion in the first fraction of a second after the Big Bang; along with the monopole problem and the horizon problem, the flatness problem is one of the three primary motivations for inflationary theory. Energy density and the Friedmann equation According to Einstein's field equations of general relativity, the structure of spacetime is affected by the presence of matter and energy. On small scales space appears flat – as does the surface of the Earth if one looks at a small area. On large scales however, space is bent by the gravitational effect of matter. Since relativity indicates that matter and energy are equivalent, this effect is also produced by the presence of energy (such as light and other electromagnetic radiation) in addition to matter. The amount of bending (or curvature) of the universe depends on the density of matter/energy present. This relationship can be expressed by the first Friedmann equation. In a universe without a cosmological constant, this is: H^2 = (8πG/3)ρ − kc^2/a^2. Here H is the Hubble parameter, a measure of the rate at which the universe is expanding, ρ is the total density of mass and energy in the universe, a is the scale factor (essentially the 'size' of the universe), and k is the curvature parameter — that is, a measure of how curved spacetime is. A positive, zero or negative value of k corresponds to a respectively closed, flat or open universe. The constants G and c are Newton's gravitational constant and the speed of light, respectively. Cosmologists often simplify this equation by defining a critical density, ρ_c. For a given value of H, this is defined as the density required for a flat universe, i.e. k = 0. Thus the above equation implies ρ_c = 3H^2/(8πG). Since the constant G is known and the expansion rate H can be measured by observing the speed at which distant galaxies are receding from us, ρ_c can be determined. Its value is currently around 10^−26 kg m^−3. The ratio of the actual density to this critical value is called Ω, and its difference from 1 determines the geometry of the universe: Ω > 1 corresponds to a greater than critical density, ρ > ρ_c, and hence a closed universe. Ω < 1 gives a low density open universe, and Ω equal to exactly 1 gives a flat universe.
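To put a number on the critical density defined above, the sketch below evaluates ρ_c = 3H^2/(8πG) for an assumed present-day Hubble parameter of about 70 km/s/Mpc; that figure is a commonly quoted value assumed here for illustration rather than taken from the article.

```python
import math

# Critical density rho_c = 3 H^2 / (8 pi G) for an assumed H0 ~ 70 km/s/Mpc.
G = 6.674e-11                # gravitational constant [m^3 kg^-1 s^-2]
MPC_IN_M = 3.086e22          # one megaparsec in metres

H0 = 70e3 / MPC_IN_M         # Hubble parameter [1/s]
rho_crit = 3 * H0**2 / (8 * math.pi * G)

print(f"rho_c ~ {rho_crit:.2e} kg/m^3")   # ~ 9e-27 kg/m^3, i.e. of order 10^-26

# Omega is the ratio of the actual density to this critical value;
# a flat universe has Omega = 1 exactly:
rho_actual = rho_crit
print("Omega =", rho_actual / rho_crit)
```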
The Friedmann equation H^2 = (8πG/3)ρ − kc^2/a^2 can be re-arranged into H^2 a^2 + kc^2 = (8πG/3)ρa^2, which after factoring out ρa^2, and using Ω = ρ/ρ_c, leads to (Ω^−1 − 1)ρa^2 = −3kc^2/(8πG). The right hand side of the last expression above contains constants only and therefore the left hand side must remain constant throughout the evolution of the universe. As the universe expands the scale factor a increases, but the density ρ decreases as matter (or energy) becomes spread out. For the standard model of the universe which contains mainly matter and radiation for most of its history, ρ decreases more quickly than a^2 increases, and so the factor ρa^2 will decrease. Since the time of the Planck era, shortly after the Big Bang, this term has decreased by a factor of around 10^60, and so (Ω^−1 − 1) must have increased by a similar amount to retain the constant value of their product. Current value of Ω Measurement The value of Ω at the present time is denoted Ω0. This value can be deduced by measuring the curvature of spacetime (since ρ_c, or Ω = 1, is defined as the density for which the curvature k = 0). The curvature can be inferred from a number of observations. One such observation is that of anisotropies (that is, variations with direction - see below) in the Cosmic Microwave Background (CMB) radiation. The CMB is electromagnetic radiation which fills the universe, left over from an early stage in its history when it was filled with photons and a hot, dense plasma. This plasma cooled as the universe expanded, and when it cooled enough to form stable atoms it no longer absorbed the photons. The photons present at that stage have been propagating ever since, growing fainter and less energetic as they spread through the ever-expanding universe. The temperature of this radiation is almost the same at all points on the sky, but there is a slight variation (around one part in 100,000) between the temperature received from different directions. The angular scale of these fluctuations - the typical angle between a hot patch and a cold patch on the sky - depends on the curvature of the universe which in turn depends on its density as described above. Thus, measurements of this angular scale allow an estimation of Ω0. Another probe of Ω0 is the frequency of Type-Ia supernovae at different distances from Earth. These supernovae, the explosions of degenerate white dwarf stars, are a type of standard candle; this means that the processes governing their intrinsic brightness are well understood so that a measure of apparent brightness when seen from Earth can be used to derive accurate distance measures for them (the apparent brightness decreasing in proportion to the square of the distance - see luminosity distance). Comparing this distance to the redshift of the supernovae gives a measure of the rate at which the universe has been expanding at different points in history. Since the expansion rate evolves differently over time in cosmologies with different total densities, Ω0 can be inferred from the supernovae data. Data from the Wilkinson Microwave Anisotropy Probe (WMAP, measuring CMB anisotropies) combined with that from the Sloan Digital Sky Survey and observations of type-Ia supernovae constrain Ω0 to be 1 within 1%. In other words, the term |Ω − 1| is currently less than 0.01, and therefore must have been less than 10^−62 at the Planck era. The cosmological parameters measured by the Planck spacecraft mission reaffirmed previous results by WMAP. Implication This tiny value is the crux of the flatness problem.
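The back-propagation behind that bound follows directly from the constancy of (Ω^−1 − 1)ρa^2: if ρa^2 has fallen by a factor of about 10^60 since the Planck era, the current bound |Ω − 1| < 0.01 translates into |Ω − 1| < 10^−62 back then. A minimal sketch of that arithmetic:

```python
# The quantity (1/Omega - 1) * rho * a^2 is constant (no cosmological constant).
# If rho*a^2 has dropped by a factor F since the Planck era, then (1/Omega - 1)
# must have grown by the same factor F.

F = 1e60                      # drop in rho*a^2 since the Planck era (see text)
omega_minus_one_today = 0.01  # current observational bound on |Omega - 1|

# For Omega near 1, (1/Omega - 1) is approximately -(Omega - 1), so the
# magnitudes scale the same way.
omega_minus_one_planck = omega_minus_one_today / F
print(f"|Omega - 1| at the Planck era: < {omega_minus_one_planck:.0e}")  # < 1e-62
```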
If the initial density of the universe could take any value, it would seem extremely surprising to find it so 'finely tuned' to the critical value ρ_c. Indeed, a very small departure of Ω from 1 in the early universe would have been magnified during billions of years of expansion to create a current density very far from critical. In the case of an overdensity this would lead to a universe so dense it would cease expanding and collapse into a Big Crunch (an opposite to the Big Bang in which all matter and energy falls back into an extremely dense state) in a few years or less; in the case of an underdensity it would expand so quickly and become so sparse it would soon seem essentially empty, and gravity would not be strong enough by comparison to cause matter to collapse and form galaxies resulting in a big freeze. In either case the universe would contain no complex structures such as galaxies, stars, planets and any form of life. This problem with the Big Bang model was first pointed out by Robert Dicke in 1969, and it motivated a search for some reason the density should take such a specific value. Solutions to the problem Some cosmologists agreed with Dicke that the flatness problem was a serious one, in need of a fundamental reason for the closeness of the density to criticality. But there was also a school of thought which denied that there was a problem to solve, arguing instead that since the universe must have some density it may as well have one close to the critical value as far from it, and that speculating on a reason for any particular value was "beyond the domain of science". That, however, is a minority viewpoint, even among those sceptical of the existence of the flatness problem. Several cosmologists have argued that, for a variety of reasons, the flatness problem is based on a misunderstanding. Anthropic principle One solution to the problem is to invoke the anthropic principle, which states that humans should take into account the conditions necessary for them to exist when speculating about causes of the universe's properties. If two types of universe seem equally likely but only one is suitable for the evolution of intelligent life, the anthropic principle suggests that finding ourselves in that universe is no surprise: if the other universe had existed instead, there would be no observers to notice the fact. The principle can be applied to solve the flatness problem in two somewhat different ways. The first (an application of the 'strong anthropic principle') was suggested by C. B. Collins and Stephen Hawking, who in 1973 considered the existence of an infinite number of universes such that every possible combination of initial properties was held by some universe. In such a situation, they argued, only those universes with exactly the correct density for forming galaxies and stars would give rise to intelligent observers such as humans: therefore, the fact that we observe Ω to be so close to 1 would be "simply a reflection of our own existence". An alternative approach, which makes use of the 'weak anthropic principle', is to suppose that the universe is infinite in size, but with the density varying in different places (i.e. an inhomogeneous universe). Thus some regions will be over-dense (Ω > 1) and some under-dense (Ω < 1). These regions may be extremely far apart - perhaps so far that light has not had time to travel from one to another during the age of the universe (that is, they lie outside one another's cosmological horizons).
Therefore, each region would behave essentially as a separate universe: if we happened to live in a large patch of almost-critical density we would have no way of knowing of the existence of far-off under- or over-dense patches since no light or other signal has reached us from them. An appeal to the anthropic principle can then be made, arguing that intelligent life would only arise in those patches with Ω very close to 1, and that therefore our living in such a patch is unsurprising. This latter argument makes use of a version of the anthropic principle which is 'weaker' in the sense that it requires no speculation on multiple universes, or on the probabilities of various different universes existing instead of the current one. It requires only a single universe which is infinite - or merely large enough that many disconnected patches can form - and that the density varies in different regions (which is certainly the case on smaller scales, giving rise to galactic clusters and voids). However, the anthropic principle has been criticised by many scientists. For example, in 1979 Bernard Carr and Martin Rees argued that the principle "is entirely post hoc: it has not yet been used to predict any feature of the Universe." Others have taken objection to its philosophical basis, with Ernan McMullin writing in 1994 that "the weak Anthropic principle is trivial ... and the strong Anthropic principle is indefensible." Since many physicists and philosophers of science do not consider the principle to be compatible with the scientific method, another explanation for the flatness problem was needed. Inflation The standard solution to the flatness problem invokes cosmic inflation, a process whereby the universe expands exponentially quickly (i.e. the scale factor grows as e^(Ht) with time t, for some constant H) during a short period in its early history. The theory of inflation was first proposed in 1979, and published in 1981, by Alan Guth. His two main motivations for doing so were the flatness problem and the horizon problem, another fine-tuning problem of physical cosmology. However, "In December, 1980 when Guth was developing his inflation model, he was not trying to solve either the flatness or horizon problems. Indeed, at that time, he knew nothing of the horizon problem and had never quantitatively calculated the flatness problem. He was a particle physicist trying to solve the magnetic monopole problem." The proposed cause of inflation is a field which permeates space and drives the expansion. The field contains a certain energy density, but unlike the density of the matter or radiation present in the late universe, which decreases over time, the density of the inflationary field remains roughly constant as space expands. Therefore, the term ρa^2 increases extremely rapidly as the scale factor grows exponentially. Recalling the Friedmann equation (Ω^−1 − 1)ρa^2 = −3kc^2/(8πG), and the fact that the right-hand side of this expression is constant, the term (Ω^−1 − 1) must therefore decrease with time. Thus if (Ω^−1 − 1) initially takes any arbitrary value, a period of inflation can force it down towards 0 and leave it extremely small - around 10^−62 as required above, for example. Subsequent evolution of the universe will cause the value to grow, bringing it to the currently observed value of around 0.01. Thus the sensitive dependence on the initial value of Ω has been removed: a large and therefore 'unsurprising' starting value need not become amplified and lead to a very curved universe with no opportunity to form galaxies and other structures.
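One way to see how quickly inflation achieves this: with H roughly constant and the scale factor growing as e^(Ht), the quantity |Ω − 1| is proportional to 1/(aH)^2 and therefore shrinks by e^(−2N) after N e-folds of expansion. The sketch below, under that assumption, estimates how many e-folds take an order-unity |Ω − 1| down to the 10^−62 level; the 60 e-fold benchmark in the last line is a commonly quoted figure, not a number from this article.

```python
import math

# During inflation H is ~constant and a grows as exp(H t), so
# |Omega - 1| = |k| c^2 / (a H)^2 shrinks by exp(-2N) after N e-folds.

target = 1e-62          # |Omega - 1| needed at the end of inflation (see text)
initial = 1.0           # an "unsurprising" order-unity starting value

n_efolds = 0.5 * math.log(initial / target)
print(f"e-folds required: {n_efolds:.0f}")   # ~71 e-folds

# For comparison, 60 e-folds (a commonly quoted benchmark) would give:
print(f"|Omega - 1| after 60 e-folds: {initial * math.exp(-120):.1e}")
```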
This success in solving the flatness problem is considered one of the major motivations for inflationary theory. However, some physicists deny that inflationary theory resolves the flatness problem, arguing that it merely moves the fine-tuning from the probability distribution to the potential of a field, or even deny that it is a scientific theory. Post inflation Although inflationary theory is regarded as having had much success, and the evidence for it is compelling, it is not universally accepted: cosmologists recognize that there are still gaps in the theory and are open to the possibility that future observations will disprove it. In particular, in the absence of any firm evidence for what the field driving inflation should be, many different versions of the theory have been proposed. Many of these contain parameters or initial conditions which themselves require fine-tuning in much the way that the early density does without inflation. For these reasons work is still being done on alternative solutions to the flatness problem. These have included non-standard interpretations of the effect of dark energy and gravity, particle production in an oscillating universe, and use of a Bayesian statistical approach to argue that the problem is non-existent. The latter argument, suggested for example by Evrard and Coles, maintains that the idea that Ω being close to 1 is 'unlikely' is based on assumptions about the likely distribution of the parameter which are not necessarily justified. Despite this ongoing work, inflation remains by far the dominant explanation for the flatness problem. The question arises, however, whether it is still the dominant explanation because it is the best explanation, or because the community is unaware of progress on this problem. In particular, in addition to the idea that Ω is not a suitable parameter in this context, other arguments against the flatness problem have been presented: if the universe collapses in the future, then the flatness problem "exists", but only for a relatively short time, so a typical observer would not expect to measure Ω appreciably different from 1; in the case of a universe which expands forever with a positive cosmological constant, fine-tuning is needed not to achieve a (nearly) flat universe, but also to avoid it. Einstein–Cartan theory The flatness problem is naturally solved by the Einstein–Cartan–Sciama–Kibble theory of gravity, without an exotic form of matter required in inflationary theory. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. It has no free parameters. Including torsion gives the correct conservation law for the total (orbital plus intrinsic) angular momentum of matter in the presence of gravity. The minimal coupling between torsion and Dirac spinors obeying the nonlinear Dirac equation generates a spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical big bang singularity, replacing it with a bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the big bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.
Physical sciences
Physical cosmology
Astronomy
1118730
https://en.wikipedia.org/wiki/Enantiornithes
Enantiornithes
The Enantiornithes, also known as enantiornithines or enantiornitheans in literature, are a group of extinct avialans ("birds" in the broad sense), the most abundant and diverse group known from the Mesozoic era. Almost all retained teeth and clawed fingers on each wing, but otherwise looked much like modern birds externally. Over seventy species of Enantiornithes have been named, but some names represent only single bones, so it is likely that not all are valid. The Enantiornithes became extinct at the Cretaceous–Paleogene boundary, along with Hesperornithes and all other non-avian dinosaurs. Discovery and naming The first Enantiornithes to be discovered were incorrectly referred to modern bird groups. For example, the first known species of Enantiornithes, Gobipteryx minuta, was originally considered a paleognath related to ostriches and tinamou. The Enantiornithes were first recognized as a distinct lineage, or "subclass" of birds, by Cyril A. Walker in 1981. Walker made this discovery based on some partial remains from the late Cretaceous period of what is now Argentina, which he assigned to a new genus, Enantiornis, giving the entire group its name. Since the 1990s, many more complete specimens of Enantiornithes have been discovered, and it was determined that a few previously described "birds" (e.g. Iberomesornis, Cathayornis, and Sinornis) were also Enantiornithes. The name "Enantiornithes" means "opposite birds", from Ancient Greek enantios () "opposite" + ornithes () "birds" . The name was coined by Cyril Alexander Walker in his landmark paper which established the group. In his paper, Walker explained what he meant by "opposite": This refers to an anatomical feature – the articulation of the shoulder bones – which has a concave-convex socket joint between the scapula (shoulder blade) and coracoid (the primary bone of the shoulder girdle in vertebrates other than mammals) that is the reverse of that of modern birds. Specifically, in the Enantiornithes, the scapula is concave and dish-shaped at this joint, and the coracoid is convex. In modern birds, the coracoscapular joint has a concave coracoid and convex scapula. Walker was not clear on his reasons for giving this name in the etymology section of his paper, and this ambiguity led to some confusion among later researchers. For example, Alan Feduccia stated in 1996: Feduccia's point about the tarsometatarsus (the combined upper foot and ankle bone) is correct, but Walker did not use this reasoning in his original paper. Walker never described the fusion of the tarsometatarsus as opposite, but rather as "Only partial". Also, it is not certain that Enantiornithes had triosseal canals, since no fossil preserves this feature. As a group, the Enantiornithes are often referred to as "enantiornithines" in literature. However, several scientists have noted that this is incorrect, because following the standard rules for forming the names of animal groups, it implies reference only to the subfamily Enantiornithinae. Following the naming conventions used for modern birds as well as extinct groups, it has been pointed out that the correct term is "enantiornithean". Origin and range Praeornis, from the Oxfordian-Kimmeridgian of Kazakhstan, may have been the earliest known member of Enantiornithes according to Agnolin et al. (2017). 
Birds with confidently identified characteristics of Enantiornithes have been found in the Albian of Australia, the Maastrichtian of South America, and the Campanian of Mexico (Alexornis), Mongolia and the western edge of prehistoric Asia, suggesting a worldwide distribution of this group, or at least a distribution across the relatively warm regions. Enantiornithes have been found on every continent except Antarctica. Fossils attributable to this group are exclusively Cretaceous in age, and it is believed that the Enantiornithes became extinct at the same time as their non-avialan dinosaur relatives. The earliest known Enantiornithes are from the Early Cretaceous of Spain (e.g. Noguerornis) and China (e.g. Protopteryx) and the latest from the Late Cretaceous of North and South America (e.g. Avisaurus and Enantiornis). The widespread occurrence of this group suggests that at least some Enantiornithes were able to cross oceans under their own power; they are the first known avialan lineage with a global distribution. Description Many fossils of Enantiornithes are very fragmentary, and some species are only known from a piece of a single bone. Almost all specimens that are complete, in full articulation, and with soft tissue preservation are known from Las Hoyas in Cuenca, Spain and the Jehol group in Liaoning (China). Extraordinary remains of Enantiornithes have also been preserved in Burmese amber deposits dated to 99 million years ago and include hatchlings described in 2017 and 2018, as well as isolated body parts such as wings and feet. These amber remains are among the most well-preserved of any Mesozoic dinosaur. Fossils of this clade have been found in both inland and marine sediments, suggesting that they were an ecologically diverse group. Enantiornithes appear to have included waders, swimmers, granivores, insectivores, fishers, and raptors. The vast majority of Enantiornithes were small, between the size of a sparrow and a starling, though the group displays considerable variation in size. The largest species in this clade include Pengornis houi, Xiangornis shenmi, Zhouornis hani, and Mirarce eatoni (with the latter species being described as similar in size to modern turkeys), although at least a few larger species may have also existed, including a potentially crane-sized species known only from footprints in the Eumeralla Formation (and possibly also represented in the Wonthaggi Formation by a single furcula). Among the smallest described specimens are unnamed hatchlings, although the holotype specimens of Parvavis chuxiongensis and Cratoavis cearensis are comparable in size to small tits or hummingbirds. Skull Given their wide range of habitats and diets, the cranial morphology of Enantiornithes varied considerably between species. Skulls of Enantiornithes combined a unique suite of primitive and advanced features. As in more primitive avialans like Archaeopteryx, they retained several separate cranial bones, small premaxillae (bones of the snout tip) and most species had toothy jaws rather than toothless beaks. Only a few species, such as Gobipteryx minuta, were fully toothless and had beaks. They also had simple quadrate bones, a complete bar separating each orbit (eye hole) from each antorbital fenestra, and dentaries (the main toothed bones of the lower jaw) without forked rear tips. A squamosal bone is preserved in an indeterminate juvenile specimen, while a postorbital is preserved in Shenqiornis and Pengornis. In modern birds these bones are assimilated into the cranium.
Some Enantiornithes may have had their temporal fenestrae (holes in the side of the head) merged into the orbits as in modern birds due to the postorbitals either not being present or not being long enough to divide the openings. A quadratojugal bone, which in modern birds is fused to the jugal, is preserved in Pterygornis. The presence of these primitive features of the skull would have rendered the Enantiornithes capable of only limited cranial kinesis (the ability to move the jaw independent of the cranium). Wing As a very large group of birds, the Enantiornithes displayed a high diversity of different body plans based on differences in ecology and feeding, reflected in an equal diversity of wing forms, many paralleling adaptations to different lifestyles seen in modern birds. In general, the wings of Enantiornithes were advanced compared to more primitive avialans like Archaeopteryx, and displayed some features related to flight similar to those found in the lineage leading to modern birds, the Ornithuromorpha. While most Enantiornithes had claws on at least some of their fingers, many species had shortened hands, a highly mobile shoulder joint, and proportional changes in the wing bones similar to modern birds. Like modern birds, Enantiornithes had alulas, or "bastard wings", small forward-pointing arrangements of feathers on the first digit that granted higher maneuverability in the air and aided in precise landings. Several wings with preserved feathers have been found in Burmese amber. These are the first complete Mesozoic dinosaur remains preserved this way (a few isolated feathers are otherwise known, unassigned to any species), and one of the most exquisitely preserved dinosaurian fossils known. The preserved wings show variations in feather pigment and prove that Enantiornithes had fully modern feathers, including barbs, barbules, and hooklets, and a modern arrangement of wing feather including long flight feathers, short coverts, a large alula and an undercoat of down. One fossil of Enantiornithes shows wing-like feather tufts on its legs, similar to Archaeopteryx. The leg feathers are also reminiscent of the four-winged dinosaur Microraptor, however differ by the feathers being shorter, more disorganized (they do not clearly form a wing) and only extend down to the ankle rather than along the foot. Tail Clarke et al. (2006) surveyed all fossils of Enantiornithes then known and concluded that none had preserved tail feathers that formed a lift-generating fan, as in modern birds. They found that all avialans outside of Euornithes (the clade they referred to as Ornithurae) with preserved tail feathers had only short coverts or elongated paired tail plumes. They suggested that the development of the pygostyle in Enantiornithes must have been a function of tail shortening, not the development of a modern tail feather anatomy. These scientists suggested that a fan of tail feathers and the associated musculature needed to control them, known as the rectrical bulb, evolved alongside a short, triangular pygostyle, like the ones in modern birds, rather than the long, rod- or dagger-shaped pygostyles in more primitive avialans like the Enantiornithes. Instead of a feather fan, most Enantiornithes had a pair of long specialized pinfeathers similar to those of the extinct Confuciusornis and certain extant birds-of-paradise. However, further discoveries showed that at least among basal Enantiornithes, tail anatomy was more complex than previously thought. 
One genus, Shanweiniao, was initially interpreted as having at least four long tail feathers that overlapped each other and might have formed a lift-generating surface similar to the tail fans of Euornithes, though a later study indicates that Shanweiniao was more likely to have rachis-dominated tail feathers similar to feathers present in Paraprotopteryx. Chiappeavis, a primitive pengornithid, had a fan of tail feathers similar to that of more primitive avialans like Sapeornis, suggesting that this might have been the ancestral condition, with pinfeathers being a feature evolved several times in early avialans for display purposes. Another species of Enantiornithes, Feitianius, also had an elaborate fan of tail feathers. More importantly, soft tissue preserved around the tail was interpreted as the remains of a rectrical bulb, suggesting that this feature was not in fact restricted to species with modern-looking pygostyles, but might have evolved much earlier than previously thought and been present in many Enantiornithes. At least one genus of Enantiornithes, Cruralispennia, had a modern-looking pygostyle but lacked a tail fan. Biology Diet Given the wide diversity of skull shape among Enantiornithes, many different dietary specializations must have been present among the group. Some, like Shenqiornis, had large, robust jaws suitable for eating hard-shelled invertebrates. The short, blunt teeth of Pengornis were likely used to feed on soft-bodied arthropods. The strongly hooked talons of Bohaiornithidae suggest that they were predators of small to medium-sized vertebrates, but their robust teeth instead suggest a diet of hard-shelled animals. A few specimens preserve actual stomach contents. Unfortunately, none of these preserve the skull, so direct correlation between their known diet and snout/tooth shape cannot be made. Eoalulavis was found to have the remains of exoskeletons from aquatic crustaceans preserved in its digestive tract, and Enantiophoenix preserved corpuscles of amber among the fossilized bones, suggesting that this animal fed on tree sap, much like modern sapsuckers and other birds. The sap would have fossilized and become amber. However, more recently it has been suggested that the sap moved post-mortem, hence not representing true stomachal contents. Combined with the putative fish pellets of Piscivorenantiornis turning out to be fish excrement, the strange stomachal contents of some species turning out to be ovaries and the supposed gastroliths of Bohaiornis being random mineral precipitates, only Eoalulavis displays actual stomach contents. A study on paravian digestive systems indicates that known Enantiornithes lacked a crop and a gizzard, didn't use gastroliths and didn't eject pellets. This is considered at odds with the high diversity of diets that their different teeth and skull shapes imply, though some modern birds have lost the gizzard and rely solely on strong stomachal acids. An example was discovered with what were suspected to be gastroliths in what would have been the fossil's stomach, re-opening the discussion of the use of gastroliths by Enantiornithes. X-ray and scanning microscope inspection of the rocks determined that they were actually chalcedony crystals, and not gastroliths. Longipterygidae is the most extensively studied family in terms of diet due to their rather unusual rostral anatomy, with long jaws and few teeth arranged at the jaw ends.
They have variously been interpreted as piscivores, probers akin to shorebirds and as arboreal bark-probers. A 2022 study, however, finds them most likely to be generalist insectivores (with the possible exception of Shengjingornis, due to its larger size, poorly preserved skull and unusual pedal anatomy), being too small for specialised carnivory and herbivory; the atypical rostrum is tentatively speculated to be unrelated to feeding ecology. However, a later study found them to be herbivorous, based in part on the presence of gymnosperm seeds in their digestive system. Avisaurids occupied a niche analogous to modern birds of prey, having the ability to lift small prey with their feet in a manner similar to hawks or owls. Predation A fossil from Spain reported by Sanz et al. in 2001 included the remains of four hatchling skeletons of three different species of Enantiornithes. They are substantially complete, very tightly associated, and show surface pitting of the bones that indicates partial digestion. The authors concluded that this association was a regurgitated pellet and, from the details of the digestion and the size, that the hatchlings were swallowed whole by a pterosaur or small theropod dinosaur. This was the first evidence that Mesozoic avialans were prey animals, and that some Mesozoic pan-avians regurgitated pellets like owls do today. Life history Known fossils of Enantiornithes include eggs, embryos, and hatchlings. An embryo, still curled in its egg, has been reported from the Yixian Formation. Juvenile specimens can be identified by a combination of factors: rough texture of their bone tips indicating portions which were still made of cartilage at the time of death, relatively small breastbones, large skulls and eyes, and bones which had not yet fused to one another. Some hatchling specimens have been given formal names, including "Liaoxiornis delicatus"; however, Luis Chiappe and colleagues considered the practice of naming new species based on juveniles detrimental to the study of Enantiornithes, because it is nearly impossible to determine which adult species a given juvenile specimen belongs to, making any species with a hatchling holotype a nomen dubium. Together with hatchling specimens of the Mongolian Gobipteryx and Gobipipus, these finds demonstrate that hatchling Enantiornithes had the skeletal ossification, well-developed wing feathers, and large brain which correlate with precocial or superprecocial patterns of development in birds of today. In other words, Enantiornithes probably hatched from the egg already well developed and ready to run, forage, and possibly even fly at just a few days old. Findings suggest Enantiornithes, especially the toothed species, had a longer incubation time than modern birds. Analyses of Enantiornithes bone histology have been conducted to determine the growth rates of these animals. A 2006 study of Concornis bones showed a growth pattern different from modern birds; although growth was rapid for a few weeks after hatching, probably until fledging, this small species did not reach adult size for a long time, probably several years. Other studies have all supported the view that growth to adult size was slow, as it is in living precocial birds (as opposed to altricial birds, which are known to reach adult size quickly).
Studies of the rate of bone growth in a variety of Enantiornithes have shown that smaller species tended to grow faster than larger ones, the opposite of the pattern seen in more primitive species like Jeholornis and in non-avialan dinosaurs. Some analyses have interpreted the bone histology to indicate that Enantiornithes may not have had fully avian endothermy, instead having an intermediate metabolic rate. However, a 2021 study rejects the idea that they had less endothermic metabolisms than modern birds. Evidence of colonial nesting has been found in Enantiornithes, in sediments from the Late Cretaceous (Maastrichtian) of Romania. Evidence from nesting sites shows that Enantiornithes buried their eggs like modern megapodes, which is consistent with their inferred superprecocial adaptations. A 2020 study on a juvenile's feathers further stresses the ontogenetic similarities to modern megapodes, but cautions that there are several differences, such as the arboreal nature of most Enantiornithes as opposed to the terrestrial lifestyle of megapodes. It has been speculated that superprecociality in Enantiornithes might have prevented them from developing specialised toe arrangements seen in modern birds like zygodactyly. Although the vast majority of histology studies and known remains of Enantiornithes point to superprecociality being the norm, one specimen, MPCM-LH-26189, seems to represent an altricial juvenile, implying that like modern birds Enantiornithes explored multiple reproductive strategies. Flight Because many Enantiornithes lacked complex tails and possessed radically different wing anatomy compared to modern birds, they have been the subject of several studies testing their flight capabilities. Traditionally, they have been considered inferior flyers, due to the shoulder girdle anatomy being assumed to be more primitive and unable to support a ground-based launching mechanism, as well as due to the absence of rectrices in many species. However, several studies have shown that they were efficient flyers, like modern birds, possessing a similarly complex nervous system and wing feather ligaments. Additionally, the lack of a complex tail appears to not have been very relevant for avian flight as a whole - some extinct birds like lithornids also lacked complex tail feathers but were good flyers, and they appear to have been capable of ground-based launching. Enantiornithes resemble Ornithuromorphs in many anatomical features of the flight apparatus, but a sternal keel is absent in the basal-most members, only a single basal taxon appears to have had a triosseal canal, and their robust pygostyle seems unable to support the muscles that control the modern tail feathers involved in flight. Though some basal Enantiornithes exhibit ancestral flight apparatuses, by the end of the Mesozoic many Enantiornithes had several features convergent with the Neornithes including a deeply keeled sternum, a narrow furcula with a short hypocleidium, and ulnar quill knobs that indicate increased aerial abilities. At least Elsornis appears to have become secondarily flightless. Classification Some researchers classify Enantiornithes, along with the true birds, in the class Aves. Others use the more restrictive crown group definition of Aves (which only includes neornithes, anatomically modern birds), and place Enantiornithes in the more inclusive group Avialae.
Enantiornithes were more advanced than Archaeopteryx, Confuciusornis, and Sapeornis, but in several respects they were more primitive than modern birds, perhaps following an intermediate evolutionary path. A consensus of scientific analyses indicates that Enantiornithes is one of two major groups within the larger group Ornithothoraces. The other ornithothoracine group is Euornithes or Ornithuromorpha, which includes all living birds as a subset. This means that Enantiornithes were a successful branch of avialan evolution, but one that diversified entirely separately from the lineage leading to modern birds. One study has however found that the shared sternal anatomy was acquired independently and such a relationship needs to be reexamined. Enantiornithes classification and taxonomy has historically been complicated by a number of factors. In 2010, paleontologists Jingmai O'Connor and Gareth Dyke outlined a number of criticisms against the prevailing practices of scientists failing to describe many specimens in enough detail for others to evaluate thoroughly. Some species have been described based on specimens which are held in private collections, making further study or review of previous findings impossible. Because it is often unfeasible for other scientists to study each specimen in person given the worldwide distribution of the Enantiornithes, and due to the many uninformative descriptions which have been published on possibly important specimens, many of these specimens become "functional nomina dubia". Furthermore, many species have been named based on extremely fragmentary specimens, which would not be very informative scientifically even if they were described sufficiently. Over one-third of all named species are based on only a fragment of a single bone. O'Connor and Dyke argued that while these specimens can help expand knowledge of the time span or geographic range of the Enantiornithes and it is important to describe them, naming such specimens is "unjustifiable". Relationships Enantiornithes is the sister group to Euornithes, and together they form a clade called Ornithothoraces (though see above). Most phylogenetic studies have recovered Enantiornithes as a monophyletic group distinct from the modern birds and their closest relatives. The 2002 phylogenetic analysis by Clarke and Norell, though, reduced the number of Enantiornithes autapomorphies to just four. Enantiornithes systematics are highly provisional and notoriously difficult to study, due to their small size and the fact that Enantiornithes tend to be extremely homoplastic, or very similar to each other in most of their skeletal features due to convergent evolution rather than common ancestry. What appears fairly certain by now is that there were subdivisions within Enantiornithes possibly including some minor basal lineages in addition to the more advanced Euenantiornithes. The details of the interrelationship of all these lineages, indeed the validity of most, is disputed, although the Avisauridae, for one example, seem likely to constitute a valid group. Phylogenetic taxonomists have hitherto been very reluctant to suggest delimitations of clades of Enantiornithes. One such delineation named the Euenantiornithes, was defined by Chiappe (2002) as comprising all species closer to Sinornis than to Iberomesornis. 
Because Iberomesornis is often found to be the most primitive or basal member of the Enantiornithes, Euenantiornithes may be an extremely inclusive group, made up of all Enantiornithes except for Iberomesornis itself. Despite being in accordance with phylogenetic nomenclature, this definition of Euenantiornithes was severely criticized by some researchers, such as Paul Sereno, who called it "a ill-defined clade [...] a good example of a poor choice in a phylogenetic definition". The cladogram below was found by an analysis by Wang et al. in 2015, updated from a previous data set created by Jingmai O'Connor. The cladogram below is from Wang et al., 2022, and includes most named taxa and recovers several previously-named clades. Letters on branches indicate the positions of "wildcard" taxa, those which have been recovered in multiple disparate positions. Key to letters: b = Boluochia c = Cathayornis e = Enantiophoenix f = Houornis h = Longipteryx i = Parabohaiornis j = Pterygornis l = Vorona m = Yuanjiawaornis n = Yungavolucris List of genera Incertae sedis Enantiornithes taxonomy is difficult to evaluate, and as a result few clades within the group are consistently found by phylogenetic analyses. Most Enantiornithes are not included in any specific family, and as such are listed here. Many of these have been considered Euenantiornithes, although the controversy behind this name means that it is not used consistently in studies of Enantiornithes. Longipterygidae The Longipterygidae was a family of long-snouted early Cretaceous Enantiornithes, with teeth only at the tips of the snout. They are generally considered to be fairly basal members of the group. Pengornithidae The Pengornithidae was a family of large early Enantiornithes. They had numerous small teeth and numerous primitive features which are lost in most other Enantiornithes. Mostly known from the early Cretaceous of China, with putative Late Cretaceous taxa from Madagascar. Bohaiornithidae Bohaiornithids were large but geologically short-lived early Enantiornithes, with long, hooked talons and robust teeth with curved tips. They may have been equivalent to birds of prey, although this interpretation is open to much debate. The monophyly of this group is doubtful, and it may actually be an evolutionary grade. Gobipterygidae Some members of the group are obscure or poorly described and may be synonymous with its type species, Gobipteryx minuta. Avisauridae Avisauridae is subjected to two differing definitions of varying inclusiveness. The more inclusive definition, which follows Cau & Arduini (2008), is used here. Avisaurids were a long-lasting and widespread family of Enantiornithes, which are mainly distinguished by specific features of their tarsometatarsals (ankle bones). The largest and most advanced members of the group survived in North and South America up until the end of the Cretaceous, yet are very fragmentary compared to some earlier taxa. Dubious genera and notable unnamed specimens Gobipipus reshetovi: Described in 2013 from embryo specimens within eggshells from the Barun Goyot Formation of Mongolia. These specimens were very similar to embryonic Gobipteryx specimens, although the describers of Gobipipus (a set of controversial paleontologists including Evgeny Kurochkin and Sankar Chatterjee) consider it distinct. Hebeiornis fengningensis: A synonym of Vescornis due to having been described from the same specimen. 
Despite having been described in 1999, 5 years prior to the description of Vescornis, the description was so poor compared to the description of Vescornis that the latter name is considered to take priority by most authors. As a result, the name Hebeiornis is considered a nomen nudum ("naked name"). "Proornis" is an informally-named bird from North Korea. It may not be a member of Enantiornithes. Liaoxiornis delicatus: Described in 1999 from a specimen of Enantiornithes found in the Yixian Formation. This specimen was originally considered to be a tiny adult, but later found to be a hatchling. Other specimens have henceforth been assigned to the genus. Due to a lack of distinguishing feature, many paleontologists have considered this genus an undiagnostic nomen dubium. "Wasaibpanchi": A supposed member of Enantiornithes from Pakistan; the describing paper is of dubious status. LP-4450: A juvenile of an indeterminate specimen of Enantiornithes from the El Montsec Formation of Spain. Its 2006 description studied the histology of the skeleton, while later studies reported a squamosal bone present in the specimen but unknown in other Enantiornithes. IVPP V 13939: Briefly described in 2004, this Yixian Enantiornithes specimen had advanced pennaceous feathers on its legs, similar to (albeit shorter than) those of other paravians such as Microraptor and Anchiornis. DIP-V-15100 and DIP-V-15101: Two different wings from hatchling specimens which were described in 2015. They attracted a significant amount of media attention upon their description. They were preserved in exceptional details due to having been trapped within Burmese amber for approximately 99 million years. HPG-15-1: A partial corpse of an Enantiornithes hatchling also preserved in Burmese amber. Although indeterminate, it attracted even more media attention than the two wings upon its description in 2017. CUGB P1202: An indeterminate juvenile bohaiornithid from the Jiufotang Formation. A 2016 analysis of its feathering found elongated putative melanosomes, suggesting that a large portion of its feathering was iridescent. DIP-V-15102: Another corpse of an indeterminate hatchling preserved in Burmese amber. Described in early 2018. MPCM-LH-26189 a/b: A partial skeleton of a hatchling from Las Hoyas in Spain, including both slab and counter-slab components. Its 2018 description revealed how various features developed in Enantiornithes as they aged. Such features include the ossification of the sternum from various smaller bones, and the fusion of tail vertebrae into a pygostyle.
Biology and health sciences
Prehistoric birds
Animals
1118789
https://en.wikipedia.org/wiki/Linearization
Linearization
In mathematics, linearization (British English: linearisation) is finding the linear approximation to a function at a given point. The linear approximation of a function is the first order Taylor expansion around the point of interest. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems. This method is used in fields such as engineering, physics, economics, and ecology. Linearization of a function Linearizations of a function are lines—usually lines that can be used for purposes of calculation. Linearization is an effective method for approximating the output of a function y = f(x) at any x = b based on the value and slope of the function at x = a, given that f(x) is differentiable on [a, b] (or [b, a]) and that a is close to b. In short, linearization approximates the output of the function near x = a. For example, √4 = 2. However, what would be a good approximation of √4.001 = √(4 + 0.001)? For any given function y = f(x), f(x) can be approximated if it is near a known differentiable point. The most basic requisite is that L_a(a) = f(a), where L_a(x) is the linearization of f(x) at x = a. The point-slope form of an equation forms an equation of a line, given a point (H, K) and slope M. The general form of this equation is: y − K = M(x − H). Using the point (a, f(a)), L_a(x) becomes y = f(a) + M(x − a). Because differentiable functions are locally linear, the best slope to substitute in would be the slope of the line tangent to f(x) at x = a. While the concept of local linearity applies the most to points arbitrarily close to x = a, those relatively close work relatively well for linear approximations. The slope M should be, most accurately, the slope of the tangent line at x = a. Visually, the accompanying diagram shows the tangent line of f(x) at x. At f(x + h), where h is any small positive or negative value, f(x + h) is very nearly the value of the tangent line at the point (x + h, L(x + h)). The final equation for the linearization of a function at x = a is: y = f(a) + f′(a)(x − a). For x = a, f(a) = f(a). The derivative of f(x) is f′(x), and the slope of f(x) at a is f′(a). Example To find √4.001, we can use the fact that √4 = 2. The linearization of f(x) = √x at x = a is y = √a + (x − a)/(2√a), because the function f′(x) = 1/(2√x) defines the slope of the function at x = a. Substituting in a = 4, the linearization at 4 is y = 2 + (x − 4)/4. In this case x = 4.001, so √4.001 is approximately 2 + (4.001 − 4)/4 = 2.00025. The true value is close to 2.00024998, so the linearization approximation has a relative error of less than 1 millionth of a percent. Linearization of a multivariable function The equation for the linearization of a function f(x, y) at a point p(a, b) is: f(x, y) ≈ f(a, b) + f_x(a, b)(x − a) + f_y(a, b)(y − b). The general equation for the linearization of a multivariable function f(x) at a point p is: f(x) ≈ f(p) + ∇f(p) · (x − p), where x is the vector of variables, ∇f(p) is the gradient, and p is the linearization point of interest. Uses of linearization Linearization makes it possible to use tools for studying linear systems to analyze the behavior of a nonlinear function near a given point. The linearization of a function is the first order term of its Taylor expansion around the point of interest. For a system defined by the equation dx/dt = F(x, t), the linearized system can be written as dx/dt ≈ F(x_0, t) + DF(x_0, t) · (x − x_0), where x_0 is the point of interest and DF(x_0, t) is the Jacobian of F(x) evaluated at x_0. Stability analysis In stability analysis of autonomous systems, one can use the eigenvalues of the Jacobian matrix evaluated at a hyperbolic equilibrium point to determine the nature of that equilibrium. This is the content of the linearization theorem. For time-varying systems, the linearization requires additional justification. Microeconomics In microeconomics, decision rules may be approximated under the state-space approach to linearization.
Under this approach, the Euler equations of the utility maximization problem are linearized around the stationary steady state. A unique solution to the resulting system of dynamic equations is then found. Optimization In mathematical optimization, cost functions and non-linear components within them can be linearized in order to apply a linear solving method such as the Simplex algorithm. The optimized result is then reached much more efficiently, and is deterministic as a global optimum of the linearized problem. Multiphysics In multiphysics systems—systems involving multiple physical fields that interact with one another—linearization with respect to each of the physical fields may be performed. This linearization of the system with respect to each of the fields results in a linearized monolithic equation system that can be solved using monolithic iterative solution procedures such as the Newton–Raphson method. Examples of this include MRI scanner systems, which result in a system of electromagnetic, mechanical and acoustic fields.
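The worked square-root example above can be reproduced numerically. The sketch below defines a generic helper that builds the tangent-line approximation L(x) = f(a) + f′(a)(x − a) using a central-difference estimate of the derivative and applies it to f(x) = √x at a = 4; the helper name and step size are my own choices for illustration, not something specified in the article.

```python
import math

def linearize(f, a, h=1e-6):
    """Return the linearization L(x) = f(a) + f'(a)(x - a), with f'(a)
    estimated by a central difference of step h."""
    fprime = (f(a + h) - f(a - h)) / (2 * h)
    return lambda x: f(a) + fprime * (x - a)

L = linearize(math.sqrt, 4.0)          # tangent line to sqrt at x = 4
approx = L(4.001)                      # linear approximation of sqrt(4.001)
exact = math.sqrt(4.001)

print(f"linear approximation: {approx:.8f}")   # 2.00025000
print(f"exact value:          {exact:.8f}")    # 2.00024998
print(f"relative error:       {abs(approx - exact) / exact:.2e}")
```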
Mathematics
Basics_2
null
1118840
https://en.wikipedia.org/wiki/Soil%20classification
Soil classification
Soil classification deals with the systematic categorization of soils based on distinguishing characteristics as well as criteria that dictate choices in use. Overview Soil classification is a dynamic subject, from the structure of the system, to the definitions of classes, to the application in the field. Soil classification can be approached from the perspective of soil as a material and soil as a resource. Inscriptions at the temple of Horus at Edfu outline a soil classification used by Tanen to determine what kind of temple to build at which site. Ancient Greek scholars produced a number of classification based on several different qualities of the soil. Engineering Geotechnical engineers classify soils according to their engineering properties as they relate to use for foundation support or building material. Modern engineering classification systems are designed to allow an easy transition from field observations to basic predictions of soil engineering properties and behaviors. The most common engineering classification system for soils in North America is the Unified Soil Classification System (USCS). The USCS has three major classification groups: (1) coarse-grained soils (e.g. sands and gravels); (2) fine-grained soils (e.g. silts and clays); and (3) highly organic soils (referred to as "peat"). The USCS further subdivides the three major soil classes for clarification. It distinguishes sands from gravels by grain size, classifying some as "well-graded" and the rest as "poorly-graded". Silts and clays are distinguished by the soils' Atterberg limits, and thus the soils are separated into "high-plasticity" and "low-plasticity" soils. Moderately organic soils are considered subdivisions of silts and clays and are distinguished from inorganic soils by changes in their plasticity properties (and Atterberg limits) on drying. The European soil classification system (ISO 14688) is very similar, differing primarily in coding and in adding an "intermediate-plasticity" classification for silts and clays, and in minor details. Other engineering soil classification systems in the United States include the AASHTO Soil Classification System, which classifies soils and aggregates relative to their suitability for pavement construction, and the Modified Burmister system, which works similarly to the USCS but includes more coding for various soil properties. A full geotechnical engineering soil description will also include other properties of the soil including color, in-situ moisture content, in-situ strength, and somewhat more detail about the material properties of the soil than is provided by the USCS code. The USCS and additional engineering description is standardized in ASTM D 2487. Soil science For soil resources, experience has shown that a natural system approach to classification, i.e. grouping soils by their intrinsic property (soil morphology), behaviour, or genesis, results in classes that can be interpreted for many diverse uses. Differing concepts of pedogenesis, and differences in the significance of morphological features to various land uses can affect the classification approach. Despite these differences, in a well-constructed system, classification criteria group similar concepts so that interpretations do not vary widely. This is in contrast to a technical system approach to soil classification, where soils are grouped according to their fitness for a specific use and their edaphic characteristics. 
Natural system approaches to soil classification, such as the French Soil Reference System (Référentiel pédologique français), are based on presumed soil genesis. Systems have been developed, such as USDA soil taxonomy and the World Reference Base for Soil Resources, which use taxonomic criteria involving soil morphology and laboratory tests to inform and refine hierarchical classes. Another approach is numerical classification, also called ordination, where soil individuals are grouped by multivariate statistical methods such as cluster analysis. This produces natural groupings without requiring any inference about soil genesis. In soil survey, as practiced in the United States, soil classification usually means criteria based on soil morphology in addition to characteristics developed during soil formation. Criteria are designed to guide choices in land use and soil management. As indicated, this is a hierarchical system that is a hybrid of both natural and objective criteria. USDA soil taxonomy provides the core criteria for differentiating soil map units. This is a substantial revision of the 1938 USDA soil taxonomy which was a strictly natural system. The USDA classification was originally developed by Guy Donald Smith, director of the U.S. Department of Agriculture's soil survey investigations. Soil taxonomy based soil map units are additionally sorted into classes based on technical classification systems. Land Capability Classes, hydric soil, and prime farmland are some examples. The European Union uses the World Reference Base for Soil Resources (WRB); the fourth edition is currently valid. According to the first edition of the WRB (1998), the booklet "Soils of the European Union" was published by the former Institute of Environment and Sustainability (now: Land Resources Unit, European Soil Data Centre/ESDAC). In addition to scientific soil classification systems, there are also vernacular soil classification systems. Folk taxonomies have been used for millennia, while scientifically based systems are relatively recent developments. Knowledge on the spatial distribution of soils has increased dramatically. SoilGrids is a system for automated soil mapping based on models fitted using soil profiles and environmental covariate data. On a global scale, it provides maps at 1.00–0.25 km spatial resolution. While sustainability may be the ultimate goal for managing global soil resources, these new developments require studied soils to be classified and given their own names. OSHA The U.S. Occupational Safety and Health Administration (OSHA) requires the classification of soils to protect workers from injury when working in excavations and trenches. OSHA uses three soil classifications plus one for rock, based primarily on strength but also other factors which affect the stability of cut slopes: Stable rock - natural solid mineral matter that can be excavated with vertical sides and remain intact while exposed.
- Type A: cohesive, plastic soils with unconfined compressive strength greater than 1.5 tons per square foot (tsf) (144 kPa), and meeting several other requirements (which induces a lateral earth pressure of 25 psf per ft of depth)
- Type B: cohesive soils with unconfined compressive strength between 0.5 tsf (48 kPa) and 1.5 tsf (144 kPa), or unstable dry rock, or soils which would otherwise be Type A (lateral earth pressure of 45 psf per ft of depth)
- Type C: granular soils, or cohesive soils with unconfined compressive strength less than 0.5 tsf (48 kPa), or any submerged or freely seeping soil, or adversely bedded soils (lateral earth pressure of 80 psf per ft of depth)
- A subtype of Type C soil, commonly designated Type C-60, is not officially recognized by OSHA as a separate type but induces a lateral earth pressure of 60 psf per ft of depth.
Each of the soil classifications has implications for the way the excavation must be made or the protections (sloping, shoring, shielding, etc.) that must be provided to protect workers from collapse of the excavated bank.
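A minimal sketch of how these strength thresholds translate into a soil type and an equivalent lateral pressure is given below, assuming only the unconfined-compressive-strength criterion quoted above. OSHA's actual visual and manual tests (fissuring, vibration, layering, seepage, and so on) are reduced here to illustrative boolean flags, and the function and parameter names are invented for this example rather than taken from any OSHA publication.

def osha_soil_type(ucs_tsf, cohesive=True, submerged=False, downgraded_from_a=False):
    """Classify a soil per the simplified strength thresholds above.

    ucs_tsf -- unconfined compressive strength in tons per square foot
    Returns (soil type, lateral earth pressure in psf per foot of depth).
    """
    if submerged or not cohesive or ucs_tsf < 0.5:
        return "Type C", 80
    if ucs_tsf > 1.5 and not downgraded_from_a:
        return "Type A", 25
    return "Type B", 45

def lateral_pressure_psf(ucs_tsf, depth_ft, **flags):
    """Equivalent lateral pressure (psf) at a given depth for the classified soil."""
    soil_type, rate = osha_soil_type(ucs_tsf, **flags)
    return soil_type, rate * depth_ft

# Example: a cohesive soil with UCS = 1.0 tsf at 10 ft depth -> ("Type B", 450.0)
print(lateral_pressure_psf(1.0, 10.0))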
Physical sciences
Soil science
Earth science
1119275
https://en.wikipedia.org/wiki/Carbon-dioxide%20laser
Carbon-dioxide laser
The carbon-dioxide laser (CO2 laser) was one of the earliest gas lasers to be developed. It was invented by Kumar Patel of Bell Labs in 1964 and is still one of the most useful types of laser. Carbon-dioxide lasers are the highest-power continuous-wave lasers that are currently available. They are also quite efficient: the ratio of output power to pump power can be as large as 20%. The CO2 laser produces a beam of infrared light with the principal wavelength bands centering on 9.6 and 10.6 micrometers (μm). Amplification The active laser medium (laser gain/amplification medium) is a gas discharge which is air- or water-cooled, depending on the power being applied. The filling gas within a sealed discharge tube consists of around 10–20% carbon dioxide (CO2), around 10–20% nitrogen (N2), a few percent hydrogen (H2) and/or xenon (Xe), with the remainder being helium (He). A different mixture is used in a flow-through laser, where CO2 is continuously pumped through it. The specific proportions vary according to the particular laser. The population inversion in the laser is achieved by the following sequence: electron impact excites the {v1(1)} quantum vibrational modes of nitrogen. Because nitrogen is a homonuclear molecule, it cannot lose this energy by photon emission, and its excited vibrational modes are therefore metastable and relatively long-lived. With the nitrogen {v1(1)} and carbon dioxide {v3(1)} levels being nearly perfectly resonant (the total molecular energy differential is within 3 cm−1 when accounting for anharmonicity, centrifugal distortion and vibro-rotational interaction, which is more than made up for by the Maxwell speed distribution of translational-mode energy), the excited nitrogen collisionally de-excites by transferring its vibrational mode energy to the CO2 molecule, causing the carbon dioxide to excite to its {v3(1)} (asymmetric stretch) vibrational mode quantum state. The CO2 then radiatively emits at either 10.6 μm by dropping to the {v1(1)} (symmetric-stretch) vibrational mode, or 9.6 μm by dropping to the {v20(2)} (bending) vibrational mode. The carbon dioxide molecules then transition to their {v20(0)} vibrational mode ground state from {v1(1)} or {v20(2)} by collision with cold helium atoms, thus maintaining population inversion. The resulting hot helium atoms must be cooled in order to sustain the ability to produce a population inversion in the carbon dioxide molecules. In sealed lasers, this takes place as the helium atoms strike the walls of the laser discharge tube. In flow-through lasers, a continuous stream of CO2 and nitrogen is excited by the plasma discharge and the hot gas mixture is exhausted from the resonator by pumps. The addition of helium also plays a role in the initial vibrational excitation of , due to a near-resonant dissociation reaction with metastable He(23S1). Substituting helium with other noble gases, such as neon or argon, does not lead to an enhancement of laser output. Because the excitation energies of molecular vibrational and rotational mode quantum states are low, the photons emitted due to transitions between these quantum states have comparatively lower energy, and longer wavelength, than visible and near-infrared light. The 9–12 μm wavelength of CO2 lasers is useful because it falls into an important window for atmospheric transmission (up to 80% atmospheric transmission at this wavelength), and because many natural and synthetic materials have strong characteristic absorption in this range.
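To put these infrared wavelengths in perspective, the short calculation below converts the two principal CO2 emission lines into photon energy, wavenumber, and frequency and compares them with green visible light. It is a back-of-the-envelope check using standard physical constants, not a figure taken from this article.

h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

for label, wavelength_m in [("CO2 10.6 um", 10.6e-6),
                            ("CO2  9.6 um", 9.6e-6),
                            ("green 0.55 um", 0.55e-6)]:
    energy_eV = h * c / wavelength_m / eV
    wavenumber_per_cm = 1.0 / (wavelength_m * 100.0)
    frequency_THz = c / wavelength_m / 1e12
    print(f"{label}: {energy_eV:.3f} eV, {wavenumber_per_cm:.0f} cm^-1, {frequency_THz:.1f} THz")

# Approximate output:
#   CO2 10.6 um: 0.117 eV,  943 cm^-1, 28.3 THz
#   CO2  9.6 um: 0.129 eV, 1042 cm^-1, 31.2 THz
#   green 0.55 um: 2.254 eV, 18182 cm^-1, 545.1 THz
# A CO2-laser photon thus carries roughly one-twentieth the energy of a visible photon,
# consistent with the low-lying vibrational-rotational transitions described above.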
The laser wavelength can be tuned by altering the isotopic ratio of the carbon and oxygen atoms comprising the molecules in the discharge tube. Construction Because CO2 lasers operate in the infrared, special materials are necessary for their construction. Typically, the mirrors are silvered, while windows and lenses are made of either germanium or zinc selenide. For high power applications, gold mirrors and zinc selenide windows and lenses are preferred. There are also diamond windows and lenses in use. Diamond windows are extremely expensive, but their high thermal conductivity and hardness make them useful in high-power applications and in dirty environments. Optical elements made of diamond can even be sand blasted without losing their optical properties. Historically, lenses and windows were made out of salt (either sodium chloride or potassium chloride). While the material was inexpensive, the lenses and windows degraded slowly with exposure to atmospheric moisture. The most basic form of a CO2 laser consists of a gas discharge (with a mix close to that specified above) with a total reflector at one end, and an output coupler (a partially reflecting mirror) at the output end. The CO2 laser can be constructed to have continuous wave (CW) powers between milliwatts (mW) and hundreds of kilowatts (kW). It is also very easy to actively Q-switch a CO2 laser by means of a rotating mirror or an electro-optic switch, giving rise to Q-switched peak powers of up to gigawatts (GW). Because the laser transitions are actually on vibration-rotation bands of a linear triatomic molecule, the rotational structure of the P and R bands can be selected by a tuning element in the laser cavity. Prisms are not practical as tuning elements because most media that transmit in the mid-infrared absorb or scatter some of the light, so the frequency tuning element is almost always a diffraction grating. By rotating the diffraction grating, a particular rotational line of the vibrational transition can be selected. The finest frequency selection may also be obtained through the use of an etalon. In practice, together with isotopic substitution, this means that a continuous comb of frequencies separated by around 1 cm−1 (30 GHz) can be used that extend from 880 to 1090 cm−1. Such "line-tuneable" carbon-dioxide lasers are principally of interest in research applications. The laser's output wavelength is affected by the particular isotopes contained in the carbon dioxide molecule, with heavier isotopes causing longer wavelength emission. Applications Industrial (cutting and welding) Because of the high power levels available (combined with reasonable cost for the laser), CO2 lasers are frequently used in industrial applications for cutting and welding, while lower power level lasers are used for engraving. In selective laser sintering, CO2 lasers are used to fuse particles of plastic powder into parts. Medical (soft-tissue surgery) Carbon-dioxide lasers have become useful in surgical procedures because water (which makes up most biological tissue) absorbs this frequency of light very well. Some examples of medical uses are laser surgery and skin resurfacing ("laser facelifts", which essentially consist of vaporizing the skin to promote collagen formation). CO2 lasers may be used to treat certain skin conditions such as hirsuties papillaris genitalis by removing bumps or podules. CO2 lasers can be used to remove vocal-fold lesions, such as vocal-fold cysts. 
Researchers in Israel are experimenting with using CO2 lasers to weld human tissue, as an alternative to traditional sutures. The 10.6 μm CO2 laser remains the best surgical laser for soft tissue, where both cutting and hemostasis are achieved photo-thermally (radiantly). CO2 lasers can be used in place of a scalpel for most procedures and are even used in places a scalpel would not be used, in delicate areas where mechanical trauma could damage the surgical site. CO2 lasers are best suited for soft-tissue procedures in human and animal specialties, as compared to lasers with other wavelengths. Advantages include less bleeding, shorter surgery time, less risk of infection, and less post-op swelling. Applications include gynecology, dentistry, oral and maxillofacial surgery, and many others. A CO2 dental laser at the 9.25–9.6 μm wavelength is sometimes used in dentistry for hard-tissue ablation. The hard tissue is ablated at temperatures as high as 5,000 °C, producing bright thermal radiation. Other The common plastic poly(methyl methacrylate) (PMMA) absorbs IR light in the 2.8–25 μm wavelength band, so CO2 lasers have been used in recent years for fabricating microfluidic devices from it, with channel widths of a few hundred micrometers. Because the atmosphere is quite transparent to infrared light, CO2 lasers are also used for military rangefinding using LIDAR techniques. CO2 lasers are used in spectroscopy and the Silex process to enrich uranium. In semiconductor manufacturing, CO2 lasers are used for extreme ultraviolet generation. The Soviet Polyus was designed to use a megawatt carbon-dioxide laser as an in-orbit weapon to destroy SDI satellites.
Technology
Lasers
null
1119697
https://en.wikipedia.org/wiki/Vocal%20sac
Vocal sac
The vocal sac is the flexible membrane of skin possessed by most male frogs and toads. Its purpose is usually to amplify the mating or advertisement call. The presence or development of the vocal sac is one way of externally determining the sex of a frog or toad in many species. Taking frogs as an example, the vocal sac is open to the mouth cavity of the frog, with two slits on either side of the tongue. To call, the frog inflates its lungs and shuts its nose and mouth. Air is then expelled from the lungs, through the larynx, and into the vocal sac. The vibrations of the larynx emit a sound, which resonates on the elastic membrane of the vocal sac. The resonance causes the sound to be amplified and allows the call to carry further. Muscles within the body wall force the air back and forth between the lungs and vocal sac. Development The development of the vocal sac differs among species; however, most follow the same general process. The development of the unilobular vocal sac begins with two small growths on the floor of the mouth. These grow until they form two small pouches, which expand until they meet in the centre of the mouth and form one large cavity, which then grows until it is fully developed. Purpose The primary purpose of the vocal sac is to amplify the advertisement call of the male, and attract females from as large an area as possible. Species of frog without vocal sacs may only be heard within a radius of a few metres, whereas some species with vocal sacs can be heard from a much greater distance. Modern frog species (neobatrachians and some mesobatrachians) which lack vocal sacs tend to inhabit areas close to flowing water. The sound of the flowing water overpowers the advertisement call, so they must advertise by other means. An alternative use of the vocal sac is employed by the frogs of the family Rhinodermatidae. The males of the two species of this family scoop recently hatched tadpoles into their mouth, where they move into the vocal sac. The tadpoles of Darwin's frog (Rhinoderma darwinii) remain in the vocal sac until metamorphosis, whereas the Chile Darwin's frog (Rhinoderma rufum) will transport the tadpoles to a water source.
Biology and health sciences
Gastrointestinal tract
Biology
1119831
https://en.wikipedia.org/wiki/Glass%20wool
Glass wool
Glass wool is an insulating material made from glass fiber arranged using a binder into a texture similar to wool. The process traps many small pockets of air between the glass, and these small air pockets result in high thermal insulation properties. Glass wool is produced in rolls or in slabs, with different thermal and mechanical properties. It may also be produced as a material that can be sprayed or applied in place, on the surface to be insulated. The modern method for producing glass wool was invented by Games Slayter while he was working at the Owens-Illinois Glass Co. (Toledo, Ohio). He first applied for a patent for a new process to make glass wool in 1933. Principles of function Gases possess poor thermal conduction properties compared to liquids and solids and thus make good insulation material if they can be trapped in materials so that much of the heat that flows through the material is forced to flow through the gas. In order to further augment the effectiveness of a gas (such as air) it may be disrupted into small cells which cannot effectively transfer heat by natural convection. Natural convection involves a larger bulk flow of gas driven by buoyancy and temperature differences, and it does not work well in small gas cells where there is little density difference to drive it, and the high surface area to volume ratios of the small cells retards bulk gas flow inside them by means of viscous drag. In order to accomplish the formation of small gas cells in man-made thermal insulation, glass and polymer materials can be used to trap air in a foam-like structure. The same principle used in glass wool is used in other man-made insulators such as rock wool, Styrofoam, wet suit neoprene foam fabrics, and fabrics such as Gore-Tex and polar fleece. The air-trapping property is also the insulation principle used in nature in down feathers and insulating hair such as natural wool. Manufacturing process Natural sand and recycled glass are mixed and heated to , to produce glass. The fiberglass is usually produced by a method similar to making cotton candy. Molten glass is forced through a rapidly spinning metal cup, called a 'spinner'. The centrifugal force pulls the glass through small holes in the spinner. The newly created fibers cool on contact with the air. Cohesion and mechanical strength are obtained by the presence of a binder that “cements” the fibers together. A drop of binder is placed at each fiber intersection. The fiber mat is then heated to around to polymerize the resin and is calendered to give it strength and stability. Finally, the wool mat is cut and packed in rolls or panels, palletized, and stored for use. Uses Glass wool is a thermal insulation material consisting of intertwined and flexible glass fibers, which causes it to "package" air, resulting in a low density that can be varied through compression and binder content (as noted above, these air cells are the actual insulator). Glass wool can be a loose-fill material, blown into attics, or together with an active binder, sprayed on the underside of structures, sheets, and panels that can be used to insulate flat surfaces such as cavity wall insulation, ceiling tiles, curtain walls, and ducting. It is also used to insulate piping and for soundproofing. Fiberglass batts and blankets Batts are precut, whereas blankets are available in continuous rolls. Compressing the material reduces its effectiveness. 
Cutting it to accommodate electrical boxes and other obstructions allows air a free path to cross through the wall cavity. One can install batts in two layers across an unfinished attic floor, perpendicular to each other, for increased effectiveness at preventing heat bridging. Blankets can cover joists and studs as well as the space between them. Batts can be challenging and unpleasant to hang under floors between joists; straps, or staple cloth or wire mesh across joists, can hold them up. Gaps between batts (bypasses) can become sites of air infiltration or condensation (both of which reduce the effectiveness of the insulation) and require strict attention during the installation. By the same token, careful weatherization and installation of vapour barriers are required to ensure that the batts perform optimally. Air infiltration can also be reduced by adding a layer of cellulose loose-fill on top of the material. Health problems Fiberglass will irritate the eyes, skin, and the respiratory system. Potential symptoms include irritation of eyes, skin, nose, and throat, dyspnea (breathing difficulty), sore throat, hoarseness and cough. Fiberglass used for insulating appliances appears to produce human disease that is similar to asbestosis. Scientific evidence demonstrates that fiberglass is safe to manufacture, install and use when recommended work practices are followed to reduce temporary mechanical irritation. Unfortunately these work practices are not always followed, and fiberglass is often left exposed in basements that later become occupied. Fiberglass insulation should never be left exposed in an occupied area, according to the American Lung Association. In June 2011, the United States' National Toxicology Program (NTP) removed from its Report on Carcinogens all biosoluble glass wool used in home and building insulation and for non-insulation products. Similarly, California's Office of Environmental Health Hazard Assessment ("OEHHA"), in November 2011, published a modification to its Proposition 65 listing to include only "Glass wool fibers (inhalable and biopersistent)." The United States' NTP and California's OEHHA actions mean that a cancer warning label for biosoluble fiber glass home and building insulation is no longer required under Federal or California law. All fiberglass wools commonly used for thermal and acoustical insulation were reclassified by the International Agency for Research on Cancer (IARC) in October 2001 as Not Classifiable as to carcinogenicity to humans (Group 3). Fiberglass itself is resistant to mold. If mold is found in or on fiberglass it is more likely that the binder is the source of the mold, since binders are often organic and more hygroscopic than the glass wool. In tests, glass wool was found to be highly resistant to the growth of mold. Only exceptional circumstances resulted in mold growth: very high relative humidity, 96% and above, or saturated glass wool, although saturated glass wool will show only moderate growth.
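As a rough quantitative illustration of the trapped-air insulating principle described earlier, the sketch below estimates steady-state conductive heat loss through a layer of glass wool. The thermal conductivity of about 0.04 W/(m·K) is an assumed representative value used only for illustration (it is not quoted in this article), and the function name is invented for the example.

def conductive_heat_loss(k_w_per_m_k, thickness_m, area_m2, delta_t_k):
    """Steady-state conduction through a flat layer: q = k * A * dT / t."""
    r_value = thickness_m / k_w_per_m_k           # thermal resistance, m^2*K/W
    heat_flow_w = area_m2 * delta_t_k / r_value   # watts
    return r_value, heat_flow_w

# 100 mm batt (assumed k ~ 0.04 W/(m*K)) over 10 m^2 of ceiling, 20 K temperature difference:
r, q = conductive_heat_loss(0.04, 0.10, 10.0, 20.0)
print(f"R ~ {r:.1f} m^2*K/W, conductive loss ~ {q:.0f} W")   # R ~ 2.5, about 80 W

# Halving the conductivity (better trapping of still air) doubles R and halves the loss,
# which is the effect the binder-fixed fiber structure is designed to approach.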
Technology
Materials
null
1120048
https://en.wikipedia.org/wiki/Scroll
Scroll
A scroll (from the Old French escroe or escroue), also known as a roll, is a roll of papyrus, parchment, or paper containing writing. Structure A scroll is usually partitioned into pages, which are sometimes separate sheets of papyrus or parchment glued together at the edges. Scrolls may be marked divisions of a continuous roll of writing material. The scroll is usually unrolled so that one page is exposed at a time, for writing or reading, with the remaining pages rolled and stowed to the left and right of the visible page. Text is written in lines from the top to the bottom of the page. Depending on the language, the letters may be written left to right, right to left, or alternating in direction (boustrophedon). History Scrolls were the first form of editable record keeping texts, used in Eastern Mediterranean ancient Egyptian civilizations. Parchment scrolls were used by the Israelites among others before the codex or bound book with parchment pages was invented by the Romans, which became popular around the 1st century AD. Scrolls were more highly regarded than codices until well into Roman times. The ink used in writing scrolls had to adhere to a surface that was rolled and unrolled, so special inks were developed. Even so, ink would slowly flake off scrolls. Rolls Shorter pieces of parchment or paper are called rolls or rotuli, although usage of the term by modern historians varies with periods. Historians of the classical period tend to use roll instead of scroll. Rolls may still be many meters or feet long, and were used in the medieval and Early Modern period in Europe and various West Asian cultures for manuscript administrative documents intended for various uses, including accounting, rent-rolls, legal agreements, and inventories. A distinction that sometimes applies is that the lines of writing in rotuli run across the width of the roll (that is to say, are parallel with any unrolled portion) rather than along the length, divided into page-like sections. Rolls may be wider than most scrolls, up to perhaps 60 cm or two feet wide. Rolls were often stored together in a special cupboard on shelves. A special Chinese form of short book, called the "whirlwind book", consists of several pieces of paper bound at the top with bamboo and then rolled up. Scotland In Scotland, the term scrow was used from about the 13th to the 17th centuries for scroll, writing, or documents in list or schedule form. There existed an office of Clerk of the Scrow (Rotulorum Clericus) meaning the Clerk of the Rolls or Clerk of the Register. Replacement by the codex The codex form of the book—that is, folding a scroll into pages, which made reading and handling the document much easier—appears during the Roman period. Stemming from a passage in Suetonius' Divus Julius (56.6), legend has it that Julius Caesar was the first to fold scrolls, concertina-fashion, for dispatches to his forces campaigning in Gaul. But the precise meaning of the passage is by no means clear. As C. H. Roberts and T. C. Skeat point out, the idea that "Julius Caesar may have been the inventor of the codex... is indeed a fascinating proposition; but in view of the uncertainties surrounding the passage, it is doubtful whether any such conclusion can be drawn". What the evidence of surviving early codices does make clear is that Christians were among the earliest to make widespread use of the codex. 
Several Christian papyrus codices known to us date from the second century, including at least one generally accepted as being no later than A.D. 150. "All in all, it is impossible to believe that the Christian adoption of the codex can have taken place any later than circa A.D. 100 (it may, of course, have been earlier)". There were certainly practical reasons for the change. Scrolls were awkward to read if a reader wished to consult material at opposite ends of the document. Further, scrolls were written only on one side, while both sides of the codex page were used. Eventually, the folds were cut into sheets, or "leaves", and bound together along one edge. The bound pages were protected by stiff covers, usually of wood enclosed with leather. Codex is Latin for a "block of wood": the Latin liber, the root of "library", and the German Buche, the source of "book", both refer to wood. The codex was not only easier to handle than the scroll, but it also fit conveniently on library shelves. The spine generally held the book's title, facing out, affording easier organization of the collection. The surface on which the ink was applied was kept flat, not subjected to weakening by the repeated bending and unbending that scrolls undergo as they are alternately rolled up for storage and unrolled for reading, which creates physical stresses in both the papyrus and the ink of scrolls. The term codex technically refers only to manuscript books — those that, at one time, were handwritten. More specifically, a codex is the term used primarily for a bound manuscript from Roman times up through the Middle Ages. From the fourth century on, the codex became the standard format for books, and scrolls were no longer generally used. After the contents of a parchment scroll were copied in codex format, the scroll was seldom preserved. The majority that did survive were found by archaeologists in burial pits and in the buried trash of forgotten communities. Modern technology Modern technology may be able to assist in reading ancient scrolls. In January 2015, computer scientists reported that software might be making progress in reading 2,000-year-old Herculaneum scrolls. After working for more than 10 years on unlocking the contents of damaged Herculaneum scrolls, researchers may be able to progress towards reading the scrolls, which cannot be physically opened. In popular culture Many role-playing games (such as Dungeons & Dragons) feature scrolls as magical items, which cast spells when they are read aloud. Typically, the scroll is consumed in the process.
Technology
Media and communication
null
1120742
https://en.wikipedia.org/wiki/Artichoke
Artichoke
The artichoke (Cynara cardunculus var. scolymus), also known by the names French artichoke, globe artichoke, and green artichoke in the United States, is a variety of a species of thistle cultivated as food. The edible portion of the plant consists of the flower buds before the flowers come into bloom. The budding artichoke flower-head is a cluster of many budding small flowers (an inflorescence), together with many bracts, on an edible base. Once the buds bloom, the structure changes to a coarse, barely edible form. Another variety of the same species is the cardoon, a perennial plant native to the Mediterranean region. Both wild forms and cultivated varieties (cultivars) exist. Description This vegetable grows to tall, with arching, deeply lobed, silvery, glaucous-green leaves long. The flowers develop in a large head from an edible bud about diameter with numerous triangular scales; the individual florets are purple. The edible portions of the buds consist primarily of the fleshy lower portions of the involucral bracts and the base, known as the heart; the mass of immature florets in the center of the bud is called the choke or beard (which are inedible in older, larger flowers). Name The English word artichoke was borrowed in the sixteenth century from the northern Italian word (the standard modern Italian being ). The Italian term was itself borrowed either from Spanish (today usually ) or directly from the source of the Spanish word—medieval Andalusi Arabic (, including the Arabic definite article ). The Arabic form is still used in Maghrebi Arabic today, while other variants in Arabic include , and Modern Standard Arabic . These Arabic forms themselves derive from classical Arabic () singular word of the plural () meaning "scale". Other languages which derive their word for the artichoke from Arabic include Israeli Hebrew, which has the word (). The original Hebrew name (see ) is , which is found in the Mishna. Despite being borrowed from Arabic, European terms for the artichoke have in turn influenced Arabic in their own right. For example, the modern Levantine Arabic term for artichoke is (). This literally means 'earthy thorny', and is an Arabicisation (through phono-semantic matching) of the English word artichoke or other European terms like it. As in the case of Levantine Arabic , names for the artichoke have frequently changed form due to folk etymology and phono-semantic matching. The Italian form seems to have been adapted to correspond to Italian ('arch-, chief') and ('stump'). Forms of the French word (which also derives from Arabic, possibly via Spanish) have over the years included (corresponding to , 'warm') and (corresponding to , 'height'). Forms found in English have included hartichoak, corresponding to heart and choke, which were likely associated with the belief that the inedible centre of the vegetable could choke its eaters or that the plant can take over a garden, choking out other plants. Ecology Artichokes are affected by fungal pathogens including Verticillium dahliae and Rhizoctonia solani. Soil solarization has been successful in other crop-fungus pathosystems and is evaluated for suppression of V. dahliae and R. solani by Guerrero et al. 2019. Cultivation Early history The artichoke is a domesticated variety of the wild cardoon (Cynara cardunculus), which is native to the Mediterranean area. 
There was debate over whether the artichoke was a food among the ancient Greeks and Romans, or whether that cultivar was developed later, with Classical sources referring instead to the wild cardoon. The cardoon is mentioned as a garden plant in the eighth century BCE by Homer and Hesiod. Pliny the Elder mentioned the growing of 'carduus' in Carthage and Cordoba. In North Africa, where it is still found in the wild state, the seeds of artichokes, probably cultivated, were found during the excavation of Roman-period Mons Claudianus in Egypt. Varieties of artichokes were cultivated in Sicily beginning in the classical period of the ancient Greeks; the Greeks called them kaktos. In that period, the Greeks ate the leaves and flower heads, which cultivation had already improved from the wild form. The Romans called the vegetable carduus (hence the name cardoon). Further improvement in the cultivated form appears to have taken place in the medieval period in Muslim Spain and the Maghreb, although the evidence is inferential only. By the twelfth century, it was being mentioned in the compendious guide to farming composed by Ibn al-'Awwam in Seville (though it does not appear in earlier major Andalusian Arabic works on agriculture), and in Germany by Hildegard von Bingen. Le Roy Ladurie, in his book Les paysans de Languedoc, has documented the spread of artichoke cultivation in Italy and southern France in the late fifteenth and early sixteenth centuries, when the artichoke appeared as a new arrival with a new name, which may be taken to indicate an arrival of an improved cultivated variety. The Dutch introduced artichokes to England, where they grew in Henry VIII's garden at Newhall in 1530. From the mid-17th century artichokes 'enjoyed a vogue' in European courts. The hearts were considered luxury ingredients in the new court cookery as recorded by writers such as François Pierre La Varenne, the author of Le Cuisinier François (1651). It was also claimed, in this period, that artichokes had aphrodisiac properties. They were taken to the United States in the nineteenth century, to Louisiana by French immigrants and to California by Spanish immigrants. Agricultural output Cultivation of the globe artichoke is concentrated in the Americas and the countries bordering the Mediterranean basin. The main European producers are Italy, Spain, and France, and the main American producers are Argentina, Peru and the United States. In the United States, California provides nearly 100% of the U.S. crop, with about 80% of that being grown in Monterey County; there, Castroville proclaims itself to be "The Artichoke Center of the World" and holds the annual Castroville Artichoke Festival. More recently, artichokes have been grown in South Africa in a small town called Parys, located along the Vaal River. In 2020, the world produced approximately 1.5 million tons of artichokes. Artichokes can be produced from seeds or from vegetative means such as division, root cuttings, or micropropagation. Although technically perennials that normally produce the edible flower during only the second and subsequent years, certain varieties of artichokes can be grown from seed as annuals, producing a limited harvest at the end of the first growing season, even in regions where the plants are not normally winter-hardy. This means home gardeners in northern regions can attempt to produce a crop without the need to overwinter plants with special treatment or protection.
The seed cultivar 'Imperial Star' has been bred to produce in the first year without such measures. An even newer cultivar, 'Northern Star', is said to be able to overwinter in more northerly climates, and readily survives subzero temperatures. Commercial culture is limited to warm areas in USDA hardiness zone 7 and above. It requires good soil, regular watering and feeding, and frost protection in winter. Rooted suckers can be planted each year, so mature specimens can be disposed of after a few years, as each individual plant lives only a few years. The peak season for artichoke harvesting is the spring, but they can continue to be harvested throughout the summer, with another peak period in mid-autumn. When harvested, they are cut from the plant so as to leave an inch or two of stem. Artichokes possess good keeping qualities, frequently remaining quite fresh for two weeks or longer under average retail conditions. Apart from culinary applications, the globe artichoke is also an attractive plant for its bright floral display, sometimes grown in herbaceous borders for its bold foliage and large, purple flower heads. Cultivars Traditional (vegetative propagation) Green, big: 'Vert de Laon' (France), 'Camus de Bretagne', 'Castel' (France), 'Green Globe' (USA, South Africa) Green, medium-size: 'Verde Palermo' (Sicily, Italy), 'Blanca de Tudela' (Spain), 'Argentina', 'Española' (Chile), 'Blanc d'Oran' (Algeria), 'Sakiz', 'Bayrampasha' (Turkey) Purple, big: 'Romanesco', 'C3' (Italy) Purple, medium-size: 'Violet de Provence' (France), 'Brindisino', 'Catanese', 'Niscemese' (Sicily), 'Violet d'Algerie' (Algeria), 'Baladi' (Egypt), 'Ñato' (Argentina), 'Violetta di Chioggia' (Italy) Spined: 'Spinoso Sardo e Ingauno' (Sardinia, Italy), 'Criolla' (Peru). White, in some parts of the world. Propagated by seeds For industry: 'Madrigal', 'Lorca', 'A-106', 'Imperial Star' Green: 'Symphony', 'Harmony' Purple: 'Concerto', 'Opal', 'Tempo' Uses Nutrition Cooked unsalted artichoke is 82% water, 12% carbohydrates, 3% protein, and 3% fat. In a 100-gram reference serving, cooked artichoke supplies 74 calories, is a rich source (20% or more of the Daily Value, DV) of folate, and is a moderate source (10–19% DV) of vitamin K (16% DV), magnesium, sodium, and phosphorus (10–12% DV). Culinary Large globe artichokes are frequently prepared by removing all but or so of the stem. To remove thorns, which may interfere with eating, around a quarter of each scale can be cut off. To cook, the artichoke is simmered for 15 to 30 minutes, or steamed for 30–40 minutes (less for small ones). A cooked, unseasoned artichoke has a delicate flavor. Salt may be added to the water if boiling artichokes. Covered artichokes, in particular those that have been cut, can turn brown due to the enzymatic browning and chlorophyll oxidation. Placing them in water slightly acidified with vinegar or lemon juice can prevent the discoloration. Leaves are often removed one at a time, and the fleshy base eaten, with vinaigrette, hollandaise, vinegar, butter, mayonnaise, aioli, lemon juice, or other sauces. The fibrous upper part of each leaf is usually discarded. The heart is eaten when the inedible choke has been peeled away from the base and discarded. The thin leaves covering the choke are also edible. In Italy, artichoke hearts in oil are the usual vegetable for the "spring" section of the "four seasons" pizza (alongside tomatoes and basil for summer, mushrooms for autumn, and prosciutto and olives for winter). 
A recipe well known in Rome is Jewish-style artichokes, which are deep-fried whole. The softer parts of artichokes are also eaten raw, one leaf at a time dipped in vinegar and olive oil, or thinly sliced and dressed with lemon and olive oil. There are many stuffed artichoke recipes. A common Italian stuffing uses a mixture of bread crumbs, garlic, oregano, parsley, grated cheese, and prosciutto or sausage. A bit of the mixture is then pushed into the spaces at the base of each leaf and into the center before boiling or steaming. In Spain, younger, smaller, and more tender artichokes are used. They can be sprinkled with olive oil and left in hot ashes in a barbecue, sautéed in olive oil with garlic, with rice as a paella, or sautéed and combined with eggs in a tortilla (frittata). Often cited is the Greek anginares alla Polita ("artichokes city-style", referring to the city of Constantinople), a hearty, savory stew made with artichoke hearts, potatoes, and carrots, and flavored with onion, lemon, and dill. The island of Tinos, or the villages of Iria and Kantia in the Peloponnese, still very much celebrate their local production, including with a day of the artichoke or an artichoke festival. Artichokes may also be prepared by completely breaking off all of the leaves, leaving the bare heart. The leaves are steamed to soften the fleshy base part of each leaf to be used as the basis for any number of side dishes or appetizing dips, or the fleshy part is left attached to the heart, while the upper parts of the leaves are discarded. The remaining concave-shaped heart is often filled with meat, then fried or baked in a savory sauce. Canned or frozen artichoke hearts are a time-saving substitute, though the consistency and stronger flavor of fresh hearts, when available, is often preferred. Deep-fried artichoke hearts are eaten in coastal areas of California. Throughout North Africa, the Middle East, Turkey, and Armenia, ground lamb is a favorite filling for stuffed artichoke hearts. Spices reflect the local cuisine of each country. In Lebanon, for example, the typical filling would include lamb, onion, tomato, pinenuts, raisins, parsley, dill, mint, black pepper, and allspice. A popular Turkish vegetarian variety uses only onion, carrot, green peas, and salt. Artichokes are often prepared with white sauces or other kinds of sauces. Herbal tea Artichokes can also be made into a herbal tea. The infusion is consumed particularly among the Vietnamese. An artichoke-based herbal tea called Ceai de Anghinare is made in Romania. The flower portion is put into water and consumed as a herbal tea in Mexico. It has a slightly bitter, woody taste. Apéritif Artichoke is the primary botanical ingredient of the Italian aperitif Cynar, with 16.5% alcohol by volume, produced by the Campari Group. It can be served over ice as an aperitif or as a cocktail mixed with orange juice, which is especially popular in Switzerland. It is also used to make a 'Cin Cyn', a slightly less-bitter version of the Negroni cocktail, by substituting Cynar for Campari. Genome The globe artichoke genome has been sequenced. The genome assembly covers 725 of the 1,084 Mb genome and the sequence codes for about 27,000 genes. An understanding of the genome structure is an important step in understanding traits of the globe artichoke, which may aid in the identification of economically important genes from related species.
Biology and health sciences
Asterales
null
1121399
https://en.wikipedia.org/wiki/Threshing
Threshing
Threshing or thrashing is the process of loosening the edible part of grain (or other crop) from the straw to which it is attached. It is the step in grain preparation after reaping. Threshing does not remove the bran from the grain. History of threshing Through much of the important history of agriculture, threshing was time-consuming and usually laborious, with a bushel of wheat taking about an hour. In the late 18th century, before threshing was mechanized, about one-quarter of agricultural labor was devoted to it. It is likely that in the earliest days of agriculture the little grain that was raised was shelled by hand, but as the quantity increased the grain was probably beaten out with a stick, or the sheaf beaten upon the ground. An improvement on this, as the quantity further increased, was the practice of the ancient Egyptians of spreading out the loosened sheaves on a circular enclosure of hard ground, and driving oxen, sheep or other animals round and round over it so as to tread out the grain. This enclosure was placed on an elevated piece of ground so that when the straw was removed the wind blew away the chaff and left the corn. A contemporary version of this in some locations is to spread the grain on the surface of a country road so the grain may be threshed by the wheels of passing vehicles. This method, however, damaged part of the grain, and it was partially superseded by the threshing sledge, a heavy frame mounted with three or more rollers, sometimes spiked, which revolved as it was drawn over the spread out corn by two oxen. A common sledge with a ridged or grooved bottom was also used. Similar methods to these were used by the ancient Greeks, and continued to be employed in the modern period in some places. In Italy the use of a tapering roller fastened to an upright shaft in the centre of the thrashing floor and pulled round from the outer end by oxen seems to be a descendant of the Roman or roller sledge. The flail, a pair of connected sticks used to beat the grain, evolved from the early method of using a single stick. It, with the earlier methods, was described by Pliny the Elder in his first-century CE Natural History: "The cereals are threshed in some places with the threshing board on the threshing floor; in others they are trampled by a train of horses, and in others they are beaten with flails". It seems to have been the thrashing implement in general use in all Northern European countries, and was the chief means of thrashing grain as late as 1860. It was known in Japan very early, and was probably used in conjunction with the stripper, an implement fashioned very much like a large comb, with the teeth made of hard wood and pointing upwards. The straw after being reaped was brought to this and combed through by hand, the heads being drawn off and afterwards threshed on the threshing floor by the flail. Much more recently, just such an implement, known as a "heckle", has been used for combing the bolls or heads off flax, or for straightening the fibre in the after-treatment. After the grain had been beaten out by the flail or ground out by other means the straw was carefully raked away and the corn and chaff collected to be separated by winnowing when there was a wind blowing. This consisted of tossing the mixture of corn and chaff into the air so that the wind carried away the chaff while the grain fell back on the threshing floor. 
The best grain fell nearest while the lightest grain was carried some distance before falling, thus an approximate grading of the grain was obtained. It was also performed when there was no wind by fanning while pouring the mixture from a vessel. Later, a fanning or winnowing mill was invented. Barns were constructed with large doors opening in the direction of the prevailing winds so that the wind could blow right through the barn and across the threshing floor for the purpose of winnowing the corn. The flail continued to be used for special purposes such as flower seeds, and also where the quantity grown was small enough to render it not worth while to use a threshing mill. With regard to the amount of grain threshed in a day by the flail, a fair average quantity was 8 bushels of wheat, 30 bushels of oats, 16 bushels of barley, 20 bushels of beans, 8 bushels of rye and 20 bushels of buckwheat. Mechanization In the 18th century there were efforts to create a power-driven threshing machine. In 1732 Michael Menzies, a Scot, obtained a patent for a power-driven machine. This was arranged to drive a large number of flails operated by water power, but was not particularly successful. The first practical effort leading in the right direction was made by a Scottish farmer named Leckie about 1758. He invented what was described as a "rotary machine consisting of a set of cross arms attached to a horizontal shaft and enclosed in a cylindrical case." This machine did not work very well, but it demonstrated the superiority of the rotary motion and pointed to the ways in which thrashing machines should be constructed. True industrialization of threshing began in 1786 with the invention of the threshing machine by Scot Andrew Meikle. In this the loosened sheaves were fed, ears first, from a feeding board between two fluted revolving rollers to the beating cylinder. This cylinder or "drum" was armed with four iron-shod beaters or spars of wood parallel to its axle, and these striking the ears of corn as they protruded from the rollers knocked out the grain. The drum revolved at 200 to 250 revolutions per minute and carried the loose grain and straw on to a concave sieve beneath another revolving drum or rake with pegs which rubbed the straw on to the concave and caused the grain and chaff to fall through. Another revolving rake tossed the straw out of the machine. The straw thus passing under one peg drum and over the next was subjected to a thorough rubbing and tossing which separated the grain and chaff from it. These fell on to the floor beneath, ready for winnowing. A later development of the beater-drum was to fix iron pegs on the framework, and thus was evolved the Scottish "peg-mill," which remained the standard type for nearly a hundred years, and was adopted across the US. In Britain, the development of high-speed drums carried considerable risk, and a type of safety guard was mandated by the Threshing Machine Act of 1878. Contemporary industrialization In modern developed areas, threshing is mostly done by machine, usually by a combine harvester, which harvests, threshes, and winnows the grain while it is still in the field. The cereal may be stored in a barn or silos. Threshing festivals A threshing bee was traditionally a bee in which local people gathered together to pitch in and get the season's threshing done. Such bees were sometimes festivals or events within larger harvest festivals. 
The original purpose has largely become obsolete, but the festival tradition lives on in some modern examples that commemorate the past and include flea markets, hog wrestling, and dances. Gallery
Technology
Agronomical techniques
null
5856895
https://en.wikipedia.org/wiki/Proterosuchus
Proterosuchus
Proterosuchus is an extinct genus of archosauriform reptiles that lived during the Early Triassic. It contains three valid species: the type species P. fergusi and the referred species P. alexanderi and P. goweri. All three species lived in what is now South Africa. The genus was named in 1903 by the South African paleontologist Robert Broom. The genus Chasmatosaurus is a junior synonym of Proterosuchus. Proterosuchus was a mid-sized quadrupedal reptile with a sprawling stance that could reach a length of up to . It had a large head and distinctively hooked snout. It was a predator, which may have hunted prey such as Lystrosaurus. The lifestyle of Proterosuchus remains debated; it may have been terrestrial or it may have been a semiaquatic ambush predator similar to modern crocodiles. Proterosuchus is one of the earliest members of the clade Archosauriformes, which also includes crocodilians, pterosaurs, and dinosaurs, including birds. It lived in the aftermath of the Permian–Triassic extinction event, the largest known mass extinction in the timeline of Earth's history. Description Proterosuchus was a quadrupedal reptile with a sprawling stance. It could reach a total length of up to . Proterosuchus fergusi is the largest known proterosuchid with a skull length of and a possible body length of . Like most reptiles, Proterosuchus had scaly skin. Proterosuchus had a proportionally large head and long neck compared to its body. The most distinctive characteristic of its head was its strongly hooked snout, formed by a downturned premaxilla. The premaxilla contained up to nine teeth in adults, and the teeth in the snout tip were splayed out to the sides. The jaws of Proterosuchus contained numerous teeth, with up to 9 premaxillary, 31 maxillary, and 28 dentary teeth in each side. The teeth of Proterosuchus were recurved, labiolingually compressed, and serrated, as in most archosauriforms. They were isodont, or all equal in size and shape, in adult individuals, but in juveniles, the teeth were less strongly curved in the back of the jaw. The skull of Proterosuchus exhibits many features characteristic of its position as a basal archosauriform. It bears a prominent antorbital fenestra, like most archosauriforms. In some specimens, the jugal and quadratojugal contact to complete the ventral margin of the lower temporal fenestra, as in other archosauriforms, but in other specimens, there is a narrow gap between the bones so that the lower temporal bar is incomplete as in non-archosauriform archosauromorphs. The lower jaw bears a small external mandibular fenestra, another characteristic of archosauriforms and their closest relatives. Classification Proterosuchus is an early member of Archosauriformes, which also contains crocodilians, pterosaurs, and dinosaurs, including birds. It is the type genus of Proterosuchidae, which also contains the genus Archosaurus. Proterosuchidae is, by definition, the most basal clade of archosauriforms, as Archosauriformes is defined based on their phylogenetic position. Under pre-cladistic taxonomy, Proterosuchus was classified in the order Thecodontia and suborder Proterosuchia. Both taxa are now recognized as paraphyletic groups of basal archosauriforms. Ezcurra et al. (2023) recovered Proterosuchus as the most basal member of the family Proterosuchidae, and the only definitive proterosuchid to not be a member of the subfamily Chasmatosuchinae. 
As Chasmatosuchinae contains the Permian Archosaurus, this would suggest that the ancestor of Proterosuchus diverged from other proterosuchids during the Permian. Species Valid species Proterosuchus currently contains three valid species, all from the Lower Triassic of South Africa. Proterosuchus fergusi is the type species of Proterosuchus. It was named in 1903 by Robert Broom based on a specimen from Tarkastad donated by John Fergus, for whom the species was named. It is known from several specimens, and the species Chasmatosaurus vanhoepeni and Elaphrosuchus rubidgei are junior synonyms of it. The holotype is poorly preserved and indeterminate, and a neotype has been suggested. It is distinguished from other species of Proterosuchus by its more strongly curved quadrate. Proterosuchus alexanderi was named by A. Hoffmann in 1965 based on a subadult specimen. It is currently only known from one specimen. It is distinguished from other species of Proterosuchus by its longer snout. Proterosuchus goweri was named by Martín D. Ezcurra and Richard J. Butler in 2015, based on a specimen that had originally been described as a specimen of Chasmatosaurus vanhoepeni. It is currently only known from one specimen. It is distinguished from other species of Proterosuchus by a deep horizontal process of the maxilla, a sinusoidal ventral margin of the maxilla, and a gap in the tooth row between the premaxilla and maxilla. Other species Several species have been assigned to Proterosuchus or its junior synonym Chasmatosaurus in the past that are either no longer valid or no longer assigned to Proterosuchus. Ankistrodon indicus was named in 1865 by Thomas Henry Huxley, based on a specimen from the Induan-age Panchet Formation of India. Ankistrodon has been regarded as a synonym of Proterosuchus or Chasmatosaurus in the past, but if this synonymy were correct, Ankistrodon would have priority over the other names. It is now considered a nomen dubium. Chasmatosaurus vanhoepeni is the type species of Chasmatosaurus. It was named in 1924 by Haughton. The species name honors E. C. N. van Hoepen, who collected and prepared the holotype. It is now considered a junior synonym of Proterosuchus fergusi. Like all P. fergusi specimens, it is from the Lystrosaurus Assemblage Zone of the Beaufort Group of South Africa. Chasmatosaurus yuani was named by C. C. Young in 1936, based on specimens from the Induan-age Jiucaiyuan Formation of China. It is considered a valid species of proterosuchid, but is not formally assigned to Proterosuchus. It is considered to be in need of taxonomic revision. It is more closely related to Proterosuchus goweri than to other species of Proterosuchus. Elaphrosuchus rubidgei was named by Robert Broom in 1946. It is now considered a junior synonym of Proterosuchus fergusi, with the holotype being a juvenile specimen thereof. Like all P. fergusi specimens, it is from the Lystrosaurus Assemblage Zone of the Beaufort Group of South Africa. Chasmatosaurus ultimus was named by C. C. Young in 1964, based on a specimen from the Anisian-age Ermaying Formation of China. It was long believed to be the geologically youngest species of proterosuchid, as it would be the only one from the Middle Triassic. However, it is no longer considered to be a proterosuchid and is now considered to be a suchian archosaur. It is now considered a nomen dubium. Palaeobiology The lifestyle of Proterosuchus is debated. It has conventionally been depicted as a semiaquatic ambush predator similar to modern crocodiles. 
However, it lived in an arid environment and many aspects of its anatomy conflict with a semiaquatic lifestyle. In particular, its limbs are well-ossified, as in terrestrial animals, and the nostrils are laterally-positioned on the snout, not dorsally-positioned. The histology of its bones is reminiscent of terrestrial animals, not semiaquatic ones. However, support for a semiaquatic lifestyle comes from its brain anatomy, which resembles semiaquatic predators such as crocodiles more closely than terrestrial reptiles. The orientation of its ear canals suggests its neutral head posture had the snout angled upward, which would have raised the nostrils high enough for the animal to breathe while largely submerged. However, the utility of the orientation of the semicircular canal in determining head posture and habitat preference has been challenged. Proterosuchus was a predator, but the specifics of its diet are not known. It has been suggested to have eaten fish or the abundant contemporary dicynodont Lystrosaurus. Snout function The function of the hooked snout in Proterosuchus is not fully known. The most likely use was in sexual or social signaling, similar to the hooked snout of male salmon. As the snout does not appear to have been sexually dimorphic, it may be an example of mutual sexual selection. The snout may have been used in a specialized method of predation, as it exhibits high resistance to dorsoventral bending. However, what this method may have been is unclear. The premaxillary teeth do not show wear facets and did not occlude with the teeth of the lower jaw, indicating that they were not used in any abrasive activities and could not have been used to grip prey. The snout tip did not have the pressure receptors present in crocodilians and Spinosaurus. Senses Proterosuchus had mesopic vision, indicating that it was adapted to see well in both bright and dim light. Mesopic vision is characteristic of cathemeral animals, which are active in both night and day, and crepuscular animals, which are active in twilight. Adaptations to see in dim light may have been ancestral to archosaurs, and Proterosuchus may have been an early example of this trend. However, Proterosuchus lived near the Antarctic Circle, so its mesopic vision may have instead been an adaptation to the highly seasonal day lengths it experienced. The hearing of Proterosuchus was likely adapted for lower frequencies, as in modern crocodiles. Due to its low-sensitivity hearing, Proterosuchus probably did not rely heavily on vocal communication and may have been relatively solitary. Based on the size of its olfactory bulbs, Proterosuchus had a strong sense of smell, similar to that of modern crocodiles. However, its olfactory bulbs were not as large as those of its relative Tasmaniosaurus, suggesting different habits and potentially a more aquatic ecology in Proterosuchus. Metabolism The metabolism of Proterosuchus is disputed. Like other crocopod archosauromorphs, Proterosuchus had a higher metabolic rate than extant ectotherms. Furthermore, Proterosuchus possessed fibrolamellar bone, indicative of a high growth rate and corresponding high metabolism. However, studies conflict on whether the metabolism of Proterosuchus was within the range of extant endotherms. Its metabolic rate was lower than most other crocopods, except for the ectothermic phytosaurs and crocodilians, which may have been an adaptation to a crocodile-like predatory strategy. Ontogeny Proterosuchus grew quickly. 
It probably reached sexual maturity within a year, at roughly two-thirds of its maximum adult body size. Rapid growth rates were typical of Early Triassic archosauromorphs, and may have been an adaptation to surviving the hostile environment of the Early Triassic. Juvenile Proterosuchus may have hunted different prey from adults. Palaeoecology Proterosuchus fossils are found in the Lystrosaurus Assemblage Zone of the Beaufort Group in South Africa. Proterosuchus was the first new species to arrive in the Karoo environment after the Permian–Triassic extinction. Proterosuchus and the therocephalian Moschorhinus were the largest carnivores in the ecosystem at the time, and soon after the extinction Moschorhinus declined and went extinct while Proterosuchus thrived. The most common tetrapod in Proterosuchus's environment was the herbivorous dicynodont Lystrosaurus. The environment was hot and semi-arid, and experienced droughts.
Biology and health sciences
Other prehistoric archosaurs
Animals
17947615
https://en.wikipedia.org/wiki/Distaff
Distaff
A distaff (, , also called a rock) is a tool used in spinning. It is designed to hold the unspun fibers, keeping them untangled and thus easing the spinning process. It is most commonly used to hold flax and sometimes wool, but can be used for any type of fibre. Fiber is wrapped around the distaff and tied in place with a piece of ribbon or string. The word comes from Low German dis, meaning a bunch of flax, connected with staff. As an adjective, the term distaff is used to describe the female side of a family. The corresponding term for the male side of a family is the "spear" side. Form In Western Europe, there were two common forms of distaves, depending on the spinning method. The traditional form is a staff held under one's arm while using a spindle – see the figure illustration. It is about long, held under the left arm, with the right hand used in drawing the fibres from it. This version is the older of the two, as spindle spinning predates spinning on a wheel. A distaff can also be mounted as an attachment to a spinning wheel. On a wheel, it is placed next to the bobbin, where it is in easy reach of the spinner. This version is shorter, but otherwise does not differ from the spindle version. By contrast, the traditional Russian distaff, used both with spinning wheels and with spindles, is L-shaped and consists of a horizontal board, known as the dontse (), and a flat vertical piece, frequently oar-shaped, to the inner side of which the bundle of fibers was tied or pinned. The spinner sat on the dontse, with the vertical piece of the distaff to her left, and drew the fibers out with her left hand. The distaff was often richly carved and painted and was an important element of Russian folk art. Recently, handspinners have begun using wrist distaves to hold their fiber; these are made of flexible material, such as braided yarn, and can swing freely from the wrist. A wrist distaff generally consists of a loop with a tail, at the end of which is a tassel, often with beads on each strand. The spinner wraps the roving or tow around the tail and through the loop to keep it out of the way, and to keep it from getting snagged. Dressing Dressing a distaff is the act of wrapping the fiber around the distaff. With flax, the wrapping is done by laying the flax fibers down, approximately parallel to each other and the distaff, then carefully rolling the fibers onto the distaff. A ribbon or string is then tied at the top and loosely wrapped around the fibers to keep them in place. Other meanings The term distaff is also used as an adjective to describe the matrilineal branch of a family, i.e., to the person's mother and her blood relatives. This term developed in the English-speaking communities where a distaff spinning tool was used often to symbolize domestic life. Proverbs 31 cites the "wife of noble character" as one who "holds the distaff". One still-recognized use of the term is in horse racing, in which races limited to female horses are referred to as distaff races. From 1984 until 2007, at the American Breeders' Cup, the major race for fillies and mares was the Breeders' Cup Distaff. From 2008 to 2012, the event was referred to as the Breeders' Cup Ladies' Classic. Starting in 2013, the name of the race changed back to Breeders' Cup Distaff. It is commonly regarded as the female analog of the better-known Breeders' Cup Classic, though female horses are not barred from entering that race. 
The phrase "on the distaff side" was commonly used by reporters covering athletic competitions when transitioning from men's events over to the highlights of women's events. In Norse mythology, the goddess Frigg spins clouds from her bejewelled distaff in the Norse constellation known as Frigg's Spinning Wheel (Friggerock, also known as Orion's belt). In popular culture The Women's division of the mixed-martial-arts organization EXC (Elite Xtreme Combat) is known as the "Distaff Division". In the video game Loom by Lucasfilm Games (1990), the Weavers' Guild, the game's equivalent to wizards, and the main character, Bobbin Threadbare, use wooden staves called "distaffs" to control their magic, with which they "weave the very fabric of reality".
Technology
Spinning
null
4421272
https://en.wikipedia.org/wiki/Chronic%20condition
Chronic condition
A chronic condition (also known as chronic disease or chronic illness) is a health condition or disease that is persistent or otherwise long-lasting in its effects or a disease that comes with time. The term chronic is often applied when the course of the disease lasts for more than three months. Common chronic diseases include diabetes, functional gastrointestinal disorder, eczema, arthritis, asthma, chronic obstructive pulmonary disease, autoimmune diseases, genetic disorders and some viral diseases such as hepatitis C and acquired immunodeficiency syndrome. An illness which is lifelong because it ends in death is a terminal illness. It is possible and not unexpected for an illness to change in definition from terminal to chronic. Diabetes and HIV for example were once terminal yet are now considered chronic due to the availability of insulin for diabetics and daily drug treatment for individuals with HIV which allow these individuals to live while managing symptoms. In medicine, chronic conditions are distinguished from those that are acute. An acute condition typically affects one portion of the body and responds to treatment. A chronic condition, on the other hand, usually affects multiple areas of the body, is not fully responsive to treatment, and persists for an extended period of time. Chronic conditions may have periods of remission or relapse where the disease temporarily goes away, or subsequently reappears. Periods of remission and relapse are commonly discussed when referring to substance abuse disorders which some consider to fall under the category of chronic condition. Chronic conditions are often associated with non-communicable diseases which are distinguished by their non-infectious causes. Some chronic conditions though, are caused by transmissible infections such as HIV/AIDS. 63% of all deaths worldwide are from chronic conditions. Chronic diseases constitute a major cause of mortality, and the World Health Organization (WHO) attributes 38 million deaths a year to non-communicable diseases. In the United States approximately 40% of adults have at least two chronic conditions. Living with two or more chronic conditions is referred to as multimorbidity. Types Chronic conditions have often been used to describe the various health related states of the human body such as syndromes, physical impairments, disabilities as well as diseases. Epidemiologists have found interest in chronic conditions due to the fact they contribute to disease, disability, and diminished physical and/or mental capacity. For example, high blood pressure or hypertension is considered to be not only a chronic condition itself but also correlated with diseases such as heart attack or stroke. Additionally, some socioeconomic factors may be considered as a chronic condition as they lead to disability in daily life. An important one that public health officials in the social science setting have begun highlighting is chronic poverty. Researchers, particularly those studying the United States, utilize the Chronic Condition Indicator (CCI) which maps ICD codes as "chronic" or "non-chronic". 
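As a minimal sketch of how a CCI-style mapping can be applied in practice, a lookup like the following flags diagnosis codes as chronic and counts chronic conditions per patient; the codes and chronic/non-chronic labels below are hypothetical placeholders for illustration, not entries from the actual Chronic Condition Indicator.

# Minimal sketch of a CCI-style lookup that flags ICD diagnosis codes as
# chronic or non-chronic. The codes and labels below are hypothetical
# placeholders for illustration, not the actual Chronic Condition Indicator.

CCI_LOOKUP = {
    "E11.9": True,   # type 2 diabetes mellitus (chronic)
    "I10":   True,   # essential hypertension (chronic)
    "J45.9": True,   # asthma (chronic)
    "J06.9": False,  # acute upper respiratory infection (non-chronic)
}

def is_chronic(icd_code: str):
    """Return True/False for known codes, or None when the code is not mapped."""
    return CCI_LOOKUP.get(icd_code)

def chronic_condition_count(patient_codes: list) -> int:
    """Count the distinct diagnosis codes flagged as chronic for one patient,
    e.g. to identify multimorbidity (two or more chronic conditions)."""
    return sum(1 for code in set(patient_codes) if CCI_LOOKUP.get(code))

if __name__ == "__main__":
    codes = ["E11.9", "I10", "J06.9"]
    print(chronic_condition_count(codes))  # -> 2, i.e. multimorbidity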
The list below includes these chronic conditions and diseases: In 2015 the World Health Organization produced a report on non-communicable diseases, citing the four major types as: Cancers Cardiovascular diseases, including cerebrovascular disease, heart failure, and ischemic cardiopathy Chronic respiratory diseases, such as asthma and chronic obstructive pulmonary disease (COPD) Diabetes mellitus (type 1, type 2, pre-diabetes, gestational diabetes) Other examples of chronic diseases and health conditions include: Alzheimer's disease Atrial fibrillation Attention deficit hyperactivity disorder Autism Autoimmune diseases, such as ulcerative colitis, lupus erythematosus, Crohn's disease, coeliac disease, Hashimoto's thyroiditis, and relapsing polychondritis Blindness Cerebral palsy (all types) Chronic graft-versus-host disease Chronic hepatitis Chronic pancreatitis Chronic kidney disease Chronic osteoarticular diseases, such as osteoarthritis and rheumatoid arthritis Chronic pain syndromes, such as post-vasectomy pain syndrome and complex regional pain syndrome Dermatological conditions such as atopic dermatitis and psoriasis Down Syndrome Dwarfism Deafness and hearing impairment Ehlers–Danlos syndrome (various types) Endometriosis Epilepsy Fetal alcohol spectrum disorder Fibromyalgia HIV/AIDS Hereditary spherocytosis Huntington's disease Hypertension Mental illness Migraines Multiple sclerosis Myalgic encephalomyelitis ( chronic fatigue syndrome) Narcolepsy Obesity Osteogenesis Imperfecta Osteoporosis Parkinson's disease Periodontal disease Polycystic Ovarian Syndrome Postural orthostatic tachycardia syndrome Prader-Willi Syndrome Sickle cell anemia and other hemoglobin disorders Substance Abuse Disorders Sleep apnea Thyroid disease Tuberculosis Williams Syndrome And many more. Risk factors While risk factors vary with age and gender, many of the common chronic diseases in the US are caused by dietary, lifestyle and metabolic risk factors. Therefore, these conditions might be prevented by behavioral changes, such as quitting smoking, adopting a healthy diet, and increasing physical activity. Social determinants are important risk factors for chronic diseases. Social factors, e.g., socioeconomic status, education level, and race/ethnicity, are a major cause for the disparities observed in the care of chronic disease. Lack of access and delay in receiving care result in worse outcomes for patients from minorities and underserved populations. Those barriers to medical care complicate patients monitoring and continuity in treatment. In the US, minorities and low-income populations are less likely to seek, access and receive preventive services necessary to detect conditions at an early stage. The majority of US health care and economic costs associated with medical conditions are incurred by chronic diseases and conditions and associated health risk behaviors. Eighty-four percent of all health care spending in 2006 was for the 50% of the population who have one or more common chronic medical conditions (CDC, 2014). There are several psychosocial risk and resistance factors among children with chronic illness and their family members. Adults with chronic illness were significantly more likely to report life dissatisfaction than those without chronic illness. Compared to their healthy peers, children with chronic illness have about a twofold increase in psychiatric disorders. Higher parental depression and other family stressors predicted more problems among patients. 
In addition, sibling problems along with the burden of illness on the family as a whole led to more psychological strain on the patients and their families. Prevention A growing body of evidence supports that prevention is effective in reducing the effect of chronic conditions; in particular, early detection results in less severe outcomes. Clinical preventive services include screening for the existence of the disease or predisposition to its development, counseling and immunizations against infectious agents. Despite their effectiveness, the utilization of preventive services is typically lower than for regular medical services. In contrast to their apparent cost in time and money, the benefits of preventive services are not directly perceived by patients because their effects are long-term or might be greater for society as a whole than at the individual level. Therefore, public health programs are important in educating the public, and promoting healthy lifestyles and awareness about chronic diseases. While those programs can benefit from funding at different levels (state, federal, private), their implementation is mostly the responsibility of local agencies and community-based organizations. Studies have shown that public health programs are effective in reducing mortality rates associated with cardiovascular disease, diabetes and cancer, but the results are somewhat heterogeneous depending on the type of condition and the type of programs involved. For example, results from different approaches in cancer prevention and screening depended highly on the type of cancer. The rising number of patients with chronic diseases has renewed the interest in prevention and its potential role in helping control costs. In 2008, the Trust for America's Health produced a report that estimated investing $10 per person annually in community-based programs of proven effectiveness and promoting healthy lifestyles (increased physical activity, healthier diet and preventing tobacco use) could save more than $16 billion annually within a period of just five years. A 2017 review (updated in 2022) found that it is uncertain whether school-based policies targeting risk factors for chronic diseases, such as healthy eating policies, physical activity policies, and tobacco policies, can improve student health behaviours or the knowledge of staff and students. The updated review in 2022 did determine a slight improvement in measures of obesity and physical activity as the use of improved strategies led to increased implementation of interventions, but continued to call for additional research to address questions related to alcohol use and risk. Encouraging those with chronic conditions to continue with their outpatient (ambulatory) medical care and attend scheduled medical appointments may help improve outcomes and reduce medical costs due to missed appointments. Finding patient-centered alternatives to doctors or consultants scheduling medical appointments has been suggested as a means of reducing the number of people with chronic conditions who miss medical appointments; however, there is no strong evidence that these approaches make a difference. Nursing Nursing can play an important role in helping patients with chronic diseases achieve longevity and experience wellness. Scholars point out that the current neoliberal era emphasizes self-care, in both affluent and low-income communities.
This self-care focus extends to the nursing of patients with chronic diseases, replacing a more holistic role for nursing with an emphasis on patients managing their own health conditions. Critics note that this is challenging if not impossible for patients with chronic disease in low-income communities, where health care systems and economic and social structures do not fully support this practice. A study in Ethiopia showcases a nursing-heavy approach to the management of chronic disease. Foregrounding the problem of distance from healthcare facilities, the study recommends that patients increase their requests for care. It uses nurses and health officers to fill, in a cost-efficient way, the large unmet need for chronic disease treatment. Its health centers are staffed by nurses and health officers, so specific training for involvement in the programme must be carried out regularly to ensure that new staff are educated in administering chronic disease care. The program shows that community-based care and education, primarily driven by nurses and health officers, works. It highlights the importance of nurses following up with individuals in the community, and allowing nurses flexibility in meeting their patients' needs and educating them for self-care in their homes. Epidemiology The epidemiology of chronic disease is diverse and the epidemiology of some chronic diseases can change in response to new treatments. In the treatment of HIV, the success of anti-retroviral therapies means that many patients will experience this infection as a chronic disease that for many will span several decades of their life. Some epidemiology of chronic disease can apply to multiple diagnoses. Obesity and body fat distribution, for example, contribute to and are risk factors for many chronic diseases such as diabetes, heart, and kidney disease. Other epidemiological factors, such as social, socioeconomic, and environmental factors, do not have a straightforward cause and effect relationship with chronic disease diagnosis. While typically higher socioeconomic status is correlated with lower occurrence of chronic disease, it is not known if there is a direct cause and effect relationship between these two variables. The epidemiology of communicable chronic diseases such as AIDS is also different from that of noncommunicable chronic disease. While social factors do play a role in AIDS prevalence, only exposure is truly needed to contract this chronic disease. Communicable chronic diseases are also typically only treatable with medication intervention, rather than the lifestyle changes with which some non-communicable chronic diseases can be treated. United States As of 2003, there are a few programs which aim to gain more knowledge on the epidemiology of chronic disease using data collection. The hope of these programs is to gather epidemiological data on various chronic diseases across the United States and demonstrate how this knowledge can be valuable in addressing chronic disease. In the United States, as of 2004 nearly one in two Americans (133 million) has at least one chronic medical condition, with most subjects (58%) between the ages of 18 and 64. The number is projected to increase by more than one percent per year by 2030, resulting in an estimated chronically ill population of 171 million. The most common chronic conditions are high blood pressure, arthritis, respiratory diseases like emphysema, and high cholesterol.
Based on data from the 2014 Medical Expenditure Panel Survey (MEPS), about 60% of adult Americans were estimated to have one chronic illness, with about 40% having more than one; this rate appears to be mostly unchanged from 2008. MEPS data from 1998 showed 45% of adult Americans had at least one chronic illness, and 21% had more than one. According to research by the CDC, chronic disease is also a particular concern in the elderly population in America. Chronic diseases like stroke, heart disease, and cancer were among the leading causes of death among Americans aged 65 or older in 2002, accounting for 61% of all deaths among this subset of the population. It is estimated that at least 80% of older Americans are currently living with some form of a chronic condition, with 50% of this population having two or more chronic conditions. The two most common chronic conditions in the elderly are high blood pressure and arthritis, with diabetes, coronary heart disease, and cancer also being reported among the elder population. In examining the statistics of chronic disease among the living elderly, it is also important to make note of the statistics pertaining to fatalities as a result of chronic disease. Heart disease is the leading cause of death from chronic disease for adults older than 65, followed by cancer, stroke, diabetes, chronic lower respiratory diseases, influenza and pneumonia, and, finally, Alzheimer's disease. Though the rates of chronic disease differ by race for those living with chronic illness, the statistics for leading causes of death among the elderly are nearly identical across racial/ethnic groups. Chronic illnesses cause about 70% of deaths in the US and in 2002 chronic conditions (heart disease, cancers, stroke, chronic respiratory diseases, diabetes, Alzheimer's disease, mental illness and kidney diseases) were six of the top ten causes of mortality in the general US population. Economic impact United States Chronic diseases are a major factor in the continuous growth of medical care spending. In 2002, the U.S. Department of Health and Human Services stated that the health care for chronic diseases cost the most among all health problems in the U.S. Healthy People 2010 reported that more than 75% of the $2 trillion spent annually in U.S. medical care is due to chronic conditions; spending is proportionally even higher for Medicare beneficiaries (aged 65 years and older). Furthermore, in 2017 it was estimated that 90% of the $3.3 trillion spent on healthcare in the United States was due to the treatment of chronic diseases and conditions. Spending growth is driven in part by the greater prevalence of chronic illnesses and the longer life expectancy of the population. Also, improvement in treatments has significantly extended the lifespans of patients with chronic diseases but results in additional costs over a long period of time. A striking success is the development of combined antiviral therapies that led to remarkable improvement in survival rates and quality of life of HIV-infected patients. In addition to direct costs in health care, chronic diseases are a significant burden to the economy, through limitations in daily activities, loss in productivity and loss of days of work. A particular concern is the rising rates of overweight and obesity in all segments of the U.S. population. Obesity itself is a medical condition and not a disease, but it constitutes a major risk factor for developing chronic illnesses, such as diabetes, stroke, cardiovascular disease and cancers.
Obesity results in significant health care spending and indirect costs, as illustrated by a recent study from the Texas comptroller reporting that obesity alone cost Texas businesses an extra $9.5 billion in 2009, including more than $4 billion for health care, $5 billion for lost productivity and absenteeism, and $321 million for disability. Social and personal impact There have been recent links between social factors and prevalence as well as outcome of chronic conditions. Mental health The connection between loneliness, overall health, and chronic conditions has recently been highlighted. Some studies have shown that loneliness has detrimental health effects similar to those of smoking and obesity. One study found that feelings of isolation are associated with higher self-reporting of poor health, and feelings of loneliness increased the likelihood of mental health disorders in individuals. The connection between chronic illness and loneliness is established, yet oftentimes ignored in treatment. One study, for example, found that a greater number of chronic illnesses per individual was associated with feelings of loneliness. Some of the possible reasons listed for this are an inability to maintain independence as well as the chronic illness being a source of stress for the individual. A study of loneliness in adults over age 65 found that low levels of loneliness as well as high levels of familial support were associated with better outcomes of multiple chronic conditions such as hypertension and diabetes. There are some recent movements in the medical sphere to address these connections when treating patients with chronic illness. The biopsychosocial approach, for example, developed in 2006, focuses on the patient's "personality, family, culture, and health dynamics." Physicians are leaning more towards a psychosocial approach to chronic illness to aid the increasing number of individuals diagnosed with these conditions. Despite this movement, there is still criticism that chronic conditions are not being treated appropriately, and there is not enough emphasis on the behavioral aspects of chronic conditions or psychological types of support for patients. The intersection of mental health and chronic conditions is a large aspect often overlooked by doctors. Chronic illness therapists are available to help with the mental toll of chronic illness, as it is often underestimated in society. Adults with chronic illnesses that restrict their daily life present with more depression and lower self-esteem than healthy adults and adults with non-restricting chronic illness. The emotional influence of chronic illness also has an effect on the intellectual and educational development of the individual. For example, people living with type 1 diabetes endure a lifetime of monotonous and rigorous health care management usually involving daily blood glucose monitoring, insulin injections, and constant self-care. This type of constant attention that is required by type 1 diabetes and other chronic illnesses can result in psychological maladjustment. There have been several theories, namely one called diabetes resilience theory, that posit that protective processes buffer the impact of risk factors on the individual's development and functioning. Financial cost People with chronic conditions pay more out-of-pocket; a study found that Americans spent $2,243 more on average. The financial burden can increase medication non-adherence.
In some countries, laws protect patients with chronic conditions from excessive financial responsibility; for example, as of 2008 France limited copayments for those with chronic conditions, and Germany limits cost sharing to 1% of income versus 2% for the general public. Within the medical-industrial complex, chronic illnesses can impact the relationship between pharmaceutical companies and people with chronic conditions. The prices of life-saving or life-extending drugs can be inflated for profit. There is little regulation on the cost of chronic illness drugs, which suggests that abusing the lack of a drug cap can create a large market for drug revenue. Likewise, certain chronic conditions can last throughout one's lifetime and create pathways for pharmaceutical companies to take advantage of this. Gender Gender influences how chronic disease is viewed and treated in society. Women's chronic health issues are often considered to be most worthy of treatment or most severe when the chronic condition interferes with a woman's fertility. Historically, there is less of a focus on a woman's chronic conditions when they interfere with other aspects of her life or well-being. Many women report feeling less than or even "half of a woman" due to the pressures that society puts on the importance of fertility and health when it comes to typically feminine ideals. These kinds of social barriers interfere with women's ability to perform various other activities in life and fully work toward their aspirations. Socioeconomic class and race Race is also implicated in chronic illness, although there may be many other factors involved. Racial minorities are 1.5-2 times more likely to have most chronic diseases than white individuals. Non-Hispanic blacks are 40% more likely to have high blood pressure than non-Hispanic whites, diagnosed diabetes is 77% higher among non-Hispanic blacks, and American Indians and Alaska Natives are 60% more likely to be obese than non-Hispanic whites. Some of this prevalence has been suggested to stem in part from environmental racism. Flint, Michigan, for example, had high levels of lead contamination in its drinking water after waste was dumped into low-value housing areas. There are also higher rates of asthma in children who live in lower income areas due to an abundance of pollutants being released on a much larger scale in these areas. Advocacy and research organizations In Europe, the European Chronic Disease Alliance was formed in 2011, which represents over 100,000 healthcare workers. In the United States, there are a number of nonprofits focused on chronic conditions, including entities focused on specific diseases such as the American Diabetes Association, Alzheimer's Association, or Crohn's and Colitis Foundation. There are also broader groups focused on advocacy or research into chronic illness in general, such as the National Association of Chronic Disease Directors, Partnership to Fight Chronic Disease, the Chronic Disease Coalition which arose in Oregon in 2015, and the Chronic Policy Care Alliance.
Biology and health sciences
Disease: general classification
Health
12328559
https://en.wikipedia.org/wiki/Misgurnus%20fossilis
Misgurnus fossilis
The weatherfish (Misgurnus fossilis) is a species of true loach that has a wide range in Europe and some parts of Asia. It is an omnivorous scavenger bottom feeder, using its sensitive barbels to find edible items. The diet mostly consists of small aquatic invertebrates along with some detritus. The weatherfish is long and thin which allows it to burrow through the substrate and navigate through places that deeper bodied fish would have trouble with. It grows up to in total length, though there are fishermen who say they have caught longer, up to . If true, this would make Misgurnus fossilis the largest species of true loach. This loach has a very wide range, especially in Europe. It occurs north of the Alps, from the Meuse River in western Europe all the way to the Neva River in northwestern Russia. It also occurs in the northern part of the Black Sea basin from the Danube River to the Kuban River, and in the Caspian Sea in the River Volga and River Ural drainages. It is also introduced in a few different areas, but not to the extent of the pond loach (Misgurnus anguillicaudatus). Adult weatherfish live in dense patches of aquatic vegetation while juveniles prefer to live near the shoreline in very shallow water where there is a lot of detritus; neither adults nor juveniles are found in open areas without vegetation. Because of their habitat preferences, dredging and aquatic weed removal poses a danger to weatherfish populations. The weatherfish is listed as least concern but is protected in most of its range. They are able to survive in habitats that many other fish would be unable to because of their ability to breathe atmospheric oxygen. In low oxygen conditions, the weatherfish will swim to the surface and gulp air. The air then goes through the intestines where a complex system of blood vessels extracts the oxygen, before expelling the air from the anus.
Biology and health sciences
Cypriniformes
Animals
11371560
https://en.wikipedia.org/wiki/Expectation%20value%20%28quantum%20mechanics%29
Expectation value (quantum mechanics)
In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment. It can be thought of as an average of all the possible outcomes of a measurement as weighted by their likelihood, and as such it is not the most probable value of a measurement; indeed the expectation value may have zero probability of occurring (e.g. measurements which can only yield integer values may have a non-integer mean). It is a fundamental concept in all areas of quantum physics. Operational definition Consider an operator $A$. The expectation value is then $\langle A \rangle = \langle \psi | A | \psi \rangle$ in Dirac notation, with $|\psi\rangle$ a normalized state vector. Formalism in quantum mechanics In quantum theory, an experimental setup is described by the observable $A$ to be measured, and the state $\sigma$ of the system. The expectation value of $A$ in the state $\sigma$ is denoted as $\langle A \rangle_\sigma$. Mathematically, $A$ is a self-adjoint operator on a separable complex Hilbert space. In the most commonly used case in quantum mechanics, $\sigma$ is a pure state, described by a normalized vector $\psi$ in the Hilbert space. The expectation value of $A$ in the state $\psi$ is defined as $\langle A \rangle_\psi = \langle \psi | A | \psi \rangle$. If dynamics is considered, either the vector $\psi$ or the operator $A$ is taken to be time-dependent, depending on whether the Schrödinger picture or Heisenberg picture is used. The evolution of the expectation value does not depend on this choice, however. If $A$ has a complete set of eigenvectors $\phi_j$, with eigenvalues $a_j$ so that $A \phi_j = a_j \phi_j$, then the expectation value can be expressed as $\langle A \rangle_\psi = \sum_j a_j \, |\langle \psi | \phi_j \rangle|^2$. This expression is similar to the arithmetic mean, and illustrates the physical meaning of the mathematical formalism: The eigenvalues $a_j$ are the possible outcomes of the experiment, and their corresponding coefficient $|\langle \psi | \phi_j \rangle|^2$ is the probability that this outcome will occur; it is often called the transition probability. A particularly simple case arises when $A$ is a projection, and thus has only the eigenvalues 0 and 1. This physically corresponds to a "yes-no" type of experiment. In this case, the expectation value is the probability that the experiment results in "1", and it can be computed as $\langle A \rangle_\psi = \| A \psi \|^2$. In quantum theory, it is also possible for an operator to have a non-discrete spectrum, such as the position operator $Q$ in quantum mechanics. This operator has a completely continuous spectrum, with eigenvalues and eigenvectors depending on a continuous parameter, $x$. Specifically, the operator $Q$ acts on a spatial vector $|x\rangle$ as $Q|x\rangle = x|x\rangle$. In this case, the vector $\psi$ can be written as a complex-valued function $\psi(x)$ on the spectrum of $Q$ (usually the real line). This is formally achieved by projecting the state vector $|\psi\rangle$ onto the eigenvalues of the operator, as in the discrete case $\psi(x) = \langle x | \psi \rangle$. It happens that the eigenvectors of the position operator form a complete basis for the vector space of states, and therefore obey a completeness relation in quantum mechanics: $\int |x\rangle \langle x| \, dx = \mathbf{1}$. The above may be used to derive the common, integral expression for the expected value, by inserting identities into the vector expression of expected value, then expanding in the position basis: $\langle Q \rangle_\psi = \langle \psi | Q | \psi \rangle = \iint \langle \psi | x \rangle \langle x | Q | x' \rangle \langle x' | \psi \rangle \, dx \, dx' = \iint \psi^*(x) \, x' \, \delta(x - x') \, \psi(x') \, dx \, dx' = \int \psi^*(x) \, x \, \psi(x) \, dx = \int x \, |\psi(x)|^2 \, dx$, where the orthonormality relation of the position basis vectors, $\langle x | x' \rangle = \delta(x - x')$, reduces the double integral to a single integral. The last line uses the modulus of a complex-valued function to replace $\psi^*(x)\psi(x)$ with $|\psi(x)|^2$, which is a common substitution in quantum-mechanical integrals. The expectation value may then be stated, where $x$ is unbounded, as the formula $\langle Q \rangle_\psi = \int_{-\infty}^{\infty} x \, |\psi(x)|^2 \, dx$. A similar formula holds for the momentum operator, in systems where it has continuous spectrum. All the above formulas are valid for pure states only.
Prominently in thermodynamics and quantum optics, also mixed states are of importance; these are described by a positive trace-class operator $\rho$, the statistical operator or density matrix. The expectation value then can be obtained as $\langle A \rangle = \operatorname{Tr}(\rho A)$. General formulation In general, quantum states are described by positive normalized linear functionals on the set of observables, mathematically often taken to be a C*-algebra. The expectation value of an observable $A$ is then given by $\langle A \rangle_\sigma = \sigma(A)$. If the algebra of observables acts irreducibly on a Hilbert space, and if $\sigma$ is a normal functional, that is, it is continuous in the ultraweak topology, then it can be written as $\sigma(A) = \operatorname{Tr}(\rho A)$ with a positive trace-class operator $\rho$ of trace 1. This gives the trace formula above. In the case of a pure state, $\rho = |\psi\rangle\langle\psi|$ is a projection onto a unit vector $\psi$. Then $\sigma(A) = \langle \psi | A | \psi \rangle$, which gives the pure-state formula above. $A$ is assumed to be a self-adjoint operator. In the general case, its spectrum will neither be entirely discrete nor entirely continuous. Still, one can write $A$ in a spectral decomposition, $A = \int a \, dE(a)$, with a projection-valued measure $E$. For the expectation value of $A$ in a pure state $\psi$, this means $\langle A \rangle_\psi = \int a \, d\langle \psi | E(a) | \psi \rangle$, which may be seen as a common generalization of the discrete and continuous expressions above. In non-relativistic theories of finitely many particles (quantum mechanics, in the strict sense), the states considered are generally normal. However, in other areas of quantum theory, also non-normal states are in use: They appear, for example, in the form of KMS states in quantum statistical mechanics of infinitely extended media, and as charged states in quantum field theory. In these cases, the expectation value is determined only by the more general formula $\langle A \rangle_\sigma = \sigma(A)$. Example in configuration space As an example, consider a quantum mechanical particle in one spatial dimension, in the configuration space representation. Here the Hilbert space is $L^2(\mathbb{R})$, the space of square-integrable functions on the real line. Vectors are represented by functions $\psi(x)$, called wave functions. The scalar product is given by $\langle \psi_1 | \psi_2 \rangle = \int \psi_1^*(x) \, \psi_2(x) \, dx$. The wave functions have a direct interpretation as a probability distribution: $|\psi(x)|^2 \, dx$ gives the probability of finding the particle in an infinitesimal interval of length $dx$ about some point $x$. As an observable, consider the position operator $Q$, which acts on wavefunctions by $(Q\psi)(x) = x \, \psi(x)$. The expectation value, or mean value of measurements, of $Q$ performed on a very large number of identical independent systems will be given by $\langle Q \rangle_\psi = \int_{-\infty}^{\infty} x \, |\psi(x)|^2 \, dx$. The expectation value only exists if the integral converges, which is not the case for all vectors $\psi$. This is because the position operator is unbounded, and $\psi$ has to be chosen from its domain of definition. In general, the expectation of any observable can be calculated by replacing $Q$ with the appropriate operator. For example, to calculate the average momentum, one uses the momentum operator in configuration space, $P = -i\hbar \, \frac{d}{dx}$. Explicitly, its expectation value is $\langle P \rangle_\psi = -i\hbar \int_{-\infty}^{\infty} \psi^*(x) \, \frac{d\psi(x)}{dx} \, dx$. Not all operators in general provide a measurable value. An operator that has a pure real expectation value is called an observable and its value can be directly measured in experiment.
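To make the finite-dimensional case concrete, the sketch below (an illustrative example; the operator and states are arbitrary choices, not taken from the article) computes the pure-state expectation value, checks that it equals the eigenvalue-weighted sum of transition probabilities, and evaluates the trace formula for a mixed state.

# Minimal numerical sketch of quantum expectation values (illustrative only;
# the operator A and the states below are arbitrary examples).
import numpy as np

# A Hermitian (self-adjoint) operator on a 2-dimensional Hilbert space.
A = np.array([[1.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

# A normalized pure state |psi>.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Pure-state expectation value  <A>_psi = <psi|A|psi>.
exp_pure = np.vdot(psi, A @ psi).real
print("<A>_psi =", exp_pure)

# Equivalent form: sum_j a_j |<phi_j|psi>|^2 over the eigenbasis of A.
eigvals, eigvecs = np.linalg.eigh(A)
probs = np.abs(eigvecs.conj().T @ psi) ** 2   # transition probabilities
print("sum_j a_j |<phi_j|psi>|^2 =", np.sum(eigvals * probs))

# Mixed-state expectation value  <A> = Tr(rho A), with rho a density matrix
# (positive, trace 1), here an equal mixture of two pure states.
phi = np.array([1.0, 0.0])
rho = 0.5 * np.outer(psi, psi.conj()) + 0.5 * np.outer(phi, phi.conj())
print("Tr(rho A) =", np.trace(rho @ A).real)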
Physical sciences
Quantum mechanics
Physics
9929778
https://en.wikipedia.org/wiki/Biomining
Biomining
Biomining refers to any process that uses living organisms to extract metals from ores and other solid materials. Typically these processes involve prokaryotes; however, fungi and plants (phytoextraction, also known as phytomining) may also be used. Biomining is one of several applications within biohydrometallurgy, with applications in ore refinement, precious metal recovery, and bioremediation. The largest application currently being used is the treatment of mining waste containing iron, copper, zinc, and gold, allowing for the salvage of any discarded minerals. It may also be useful in maximizing the yields of increasingly low grade ore deposits. Biomining has been proposed as a relatively environmentally friendly alternative and/or supplement to traditional mining. Current methods of biomining are modified leach mining processes. These aptly named bioleaching processes most commonly include the inoculation of extracted rock with bacteria and acidic solution, with the leachate salvaged and processed for the metals of value. Biomining has many applications outside of metal recovery, most notably bioremediation, which has already been used to clean up coastlines after oil spills. There are also many promising future applications, like space biomining, fungal bioleaching and biomining with hybrid biomaterials. History of biomining The possibility of using microorganisms in biomining applications was realized after the 1951 paper by Kenneth Temple and Arthur Colmer. In the paper the authors presented evidence that the bacterium Acidithiobacillus ferrooxidans (basonym Thiobacillus ferrooxidans) is an iron oxidizer that thrives in iron, copper and magnesium-rich environments. In the experiment, A. ferrooxidans was inoculated into media containing between 2,000 and 26,000 ppm ferrous iron, finding that the bacteria grew faster and were more motile in the high iron concentrations. The byproducts of the bacterial growth caused the media to turn very acidic, in which the microorganisms still thrived. Following this experiment, the potential to use fungi to leach metals from their environment and to use microorganisms to take up radioactive elements like uranium and thorium has also been explored. While the 1960s was when industrial biomining got its start, humans have been unknowingly using biomining practices for hundreds of years. In western Europe, the practice of extracting copper by placing metallic iron into drainage streams used to be considered an act of alchemy. However, today we know that it is a fairly simple chemical reaction. Cu2+ + Fe0 → Cu0 + Fe2+ In the Middle Ages in Portugal, Spain and Wales, miners unknowingly used this reaction to their advantage when they discovered that by flooding deep mine shafts for a period with some leftover iron they were able to obtain copper. In China, the use of biomining techniques has been documented as early as the 6th–7th century BC. The relationship between water and ore to produce copper was well documented, and during the Tang dynasty and Song dynasty copper was produced using hydrometallurgical techniques. Though the mechanism of oxidation via bacteria was not understood, the unintended use of biomining allowed copper production in China to reach 1,000 tons per year. Current Biomining Methods Biooxidation (Biological pre-treatment) Biological pre-treatment utilizes the natural oxidation abilities of microorganisms to remove unwanted minerals that interfere with the extraction of the target metals.
This is not always necessary but is widely used in the removal of arsenopyrite and pyrite from gold (Au). Acidithiobacillus spp. release the gold by the following reaction. 2 FeAsS[Au] + 7 O2 + 2 H2O + H2SO4 → Fe2(SO4)3 + 2 H3AsO4 + [Au] Stirred tank bioreactors are used for the biooxidation of gold. While stirred tanks have been used to bioleach cobalt from copper mine tailings, these are costly systems that can reach sizes of >1,300 m³, meaning that they are almost exclusively used for very high value minerals like gold. Bioleaching (Bioprocessing) Dump Bioleaching Dump bioleaching was one of the first widely used applications of biomining. In dump bioleaching, waste rock is piled into mounds (>100 m tall) and saturated with sulfuric acid to encourage mineral oxidation from native bacteria. Inoculation of the rock with bacteria is often not performed in dump bioleaching, which instead relies on the bacteria already present in the rock. Heap Bioleaching Heap bioleaching is a newer take on dump leaching. The process includes more processing in which the rocks are ground into a finer grain size. This finer grain is then stacked only 2–10 m high and is well irrigated, allowing for plenty of oxygen and carbon dioxide to reach the bacteria. The mounds are also often inoculated with bacteria. The liquid coming out at the bottom of the pile, called leachate, is rich in the processed mineral. The heaps reside on large non-porous platforms which are used to catch the leachate for processing. Once collected, the leachate is transported to a precipitation plant where the metal is reprecipitated and purified. The waste liquid, now void of the valuable minerals, can be pumped back to the top of the pile and the cycle is repeated. The temperature inside the leach dump often rises spontaneously as a result of microbial activities. Thus, thermophilic iron-oxidizing chemolithotrophs such as thermophilic Acidithiobacillus species and Leptospirillum, and at even higher temperatures the thermoacidophilic archaeon Sulfolobus (Metallosphaera sedula), may become important in the leaching process above 40 °C. In situ Biomining In situ biomining involves the flooding and inoculation of fractured ore bodies that have yet to be removed from the ground. Once the bacteria are introduced to the ore deposits, they begin leaching the precious metals, which can then be extracted as leachate with a recovery well. In-situ mining also shows promise for applications in cost-effective deep subsurface extraction of metals. In situ biomining is the one current method utilizing bioleaching that serves as an effective and viable replacement for traditional mining. Because in-situ biomining negates the need for the extraction of the ore bodies, this method eliminates the need for any hauling or smelting of the ore. This would mean there would be no waste rocks or mineral tailings that contaminate the surface. However, in-situ biomining also has the most environmental concerns of all of the leaching methods, as there is the potential for the contamination of ground water. These concerns, however, can be carefully managed, especially because most of this mining would occur below the water table. This method was used in Canada in the 1970s to extract additional uranium out of exploited mines. Similarly to copper, Acidithiobacillus ferrooxidans can oxidize U4+ to U6+ with O2 as the electron acceptor. However, it is likely that the uranium leaching process depends more on the chemical oxidation of uranium by Fe3+, with At.
ferrooxidans contributing mainly through the reoxidation of Fe2+ to Fe3+. UO2 + Fe2(SO4)3 → UO2SO4 + 2 FeSO4 Applications One of the largest applications of these leaching methods is in the mining of copper. Acidithiobacillus ferrooxidans has the ability to solubilize copper by oxidizing the reduced form of iron (Fe2+) with sulfur electrons and carbon dioxide. This process results in ferric ions (Fe3+) and H+ in a series of cyclical reactions. CuFeS2 + 4 H+ + O2 → Cu2+ + Fe2+ + 2 S0 + 2 H2O; 4 Fe2+ + 4 H+ + O2 → 4 Fe3+ + 2 H2O; 2 S0 + 3 O2 + 2 H2O → 2 SO42− + 4 H+; CuFeS2 + 4 Fe3+ → Cu2+ + 2 S0 + 5 Fe2+. The copper metal is then recovered by using scrap iron: Fe0 + Cu2+ → Cu0 + Fe2+ Using bacteria such as A. ferrooxidans to leach copper from mine tailings has improved recovery rates and reduced operating costs. Moreover, it permits extraction from low grade ores – an important consideration in the face of the depletion of high grade ores. Economic Feasibility and Potential Drawbacks It has been well established that bioleaching allows for the cheaper processing of low-grade ore when the bacteria are given the correct growth conditions. This allows for economic extraction of low-grade ore and increases mining reserves in a sustainable way. Like any process of mineral recovery, there are concerns about the ability to scale biomining to the size the industry would need. The biggest potential drawbacks of biomining are the relatively slow leaching and extraction times and the need for expensive specialized equipment. Biomining techniques only show economic viability as a complementary process to mining, not as a replacement. Biomining may make traditional mining more environmentally and economically friendly, by re-processing fresh or abandoned mine tailings and the detoxification of copper production concentrates to generate economically valuable copper-enriched liquors. There is great economic feasibility for in-situ biomining to replace traditional mining in a cheaper and more environmentally friendly way; however, it has yet to be adopted on any large scale. Gold Gold is frequently found in nature associated with arsenopyrite and pyrite. In the microbial leaching process Acidithiobacillus ferrooxidans etc. dissolve these iron minerals, exposing trapped gold (Au): 2 FeAsS[Au] + 7 O2 + 2 H2O + H2SO4 → Fe2(SO4)3 + 2 H3AsO4 + [Au] Biohydrometallurgy is an emerging trend in biomining in which commercial mining plants operate continuously stirred tank reactors (STR) and airlift reactors (ALR) or pneumatic reactors (PR) of the Pachuca type to extract the low concentration mineral resources efficiently. The development of industrial mineral processing using microorganisms has been established in South Africa, Brazil and Australia. Iron- and sulfur-oxidizing microorganisms are used to release copper, gold, and uranium from minerals. Electrons are pulled off of sulfur through oxidation and then put onto iron, producing reducing equivalents in the cell in the process. These reducing equivalents then go on to produce adenosine triphosphate in the cell through the electron transport chain. Most industrial plants for biooxidation of gold-bearing concentrates have been operated at 40 °C with mixed cultures of mesophilic bacteria of the genera Acidithiobacillus or Leptospirillum ferrooxidans. In other studies the iron-reducing archaeon Pyrococcus furiosus was shown to produce hydrogen gas which can then be used as fuel.
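As a rough illustration of the arithmetic behind the scrap-iron cementation step described above (Fe0 + Cu2+ → Cu0 + Fe2+), the short sketch below estimates how much iron is consumed to recover a given mass of copper from leachate; the leachate volume, grade and excess factor are hypothetical example values, not figures from the article.

# Back-of-the-envelope stoichiometry for copper cementation on scrap iron
# (Fe0 + Cu2+ -> Cu0 + Fe2+). All inputs are hypothetical example values.

M_CU = 63.55   # molar mass of copper, g/mol
M_FE = 55.85   # molar mass of iron, g/mol

def iron_needed_kg(copper_kg: float, excess_factor: float = 1.1) -> float:
    """Iron required to cement a given mass of copper, assuming a 1:1 molar
    ratio plus a small excess to allow for side reactions such as acid attack."""
    moles_cu = copper_kg * 1000 / M_CU
    return moles_cu * M_FE / 1000 * excess_factor

def copper_in_leachate_kg(volume_m3: float, cu_grams_per_litre: float) -> float:
    """Copper content of a leachate stream of the given volume and grade."""
    return volume_m3 * 1000 * cu_grams_per_litre / 1000

if __name__ == "__main__":
    # Hypothetical example: 500 m^3 of leachate at 2 g Cu per litre.
    cu = copper_in_leachate_kg(500, 2.0)
    print(f"Copper in leachate: {cu:.0f} kg")
    print(f"Scrap iron required: {iron_needed_kg(cu):.0f} kg")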
Using bacteria such as Acidithiobacillus ferrooxidans to leach copper from mine tailings has improved recovery rates and reduced operating costs. Moreover, it permits extraction from low grade ores – an important consideration in the face of the depletion of high grade ores. The acidophilic archaea Sulfolobus metallicus and Metallosphaera sedula can tolerate up to 4% copper and have been exploited for mineral biomining. Between 40 and 60% copper extraction was achieved in primary reactors and more than 90% extraction in secondary reactors, with overall residence times of about 6 days. All of these microbes gain energy by oxidizing these metals. Oxidation means increasing the number of bonds between an atom and oxygen. Microbes will oxidize sulfur. The resulting electrons will reduce iron, releasing energy that can be used by the cell. Bioremediation Bioremediation is the process of using microbial systems to restore the environment to a healthy state by detoxifying and degrading environmental contaminants. When dealing with mine waste and metal toxic contamination of the environment, bioremediation can be used to lessen the mobility of the metals through the ecosystem. Common mine and metal wastes include arsenic, cadmium, chromium, copper, lead, mercury, nickel and zinc, which can make their way into the environment through rain and waterways, where they can be moved long distances. These metals pose potential toxicology risks to wild animals and plants as well as humans. When the right microbes are introduced to mines or areas with mining contamination and toxicity, they can alter the structure of the metals to make them less bioavailable and lessen their mobility in the ecosystem. It is important to note, however, that certain microbes may increase the amount of metals that get dissolved into the environment. This is why scientific studies and testing must be conducted to find the most beneficial bacteria for the situation. Bioremediation is not specific to metals. In 1989 the Exxon Valdez oil tanker spilled 42 million liters of crude oil into Prince William Sound. The oil was washed ashore by tides and covered 778 km of the shoreline of the sound, but also spread to cover 1,309 km of the Gulf of Alaska. In attempts to rejuvenate the coast after the oil spill, Exxon and the EPA began testing bioremediation strategies, which were later implemented on the coastline. They introduced fertilizer to the environment that promoted the growth of naturally occurring hydrocarbon-degrading microorganisms. After the applications, microbial assemblages were determined to be made up of 40% oil degrading bacteria, and one year later that number had fallen back to its baseline of around 1%. Two years after the spill, the region of contaminated shoreline spanned 10.2 km. This case indicated that microbial bioremediation may work as a modern technique for restoring natural systems by removing toxins from the environment. Future prospects Additional capabilities of current bioleaching technologies include the bioleaching of metals from sulfide ores, phosphate ores, and the concentrating of metals from solution. One project recently under investigation is the use of biological methods for the reduction of sulfur in coal-cleaning applications. Biomining in space The concept of space biomining is creating a new field in the world of space exploration.
The main space agencies believe that space biomining may provide an approach to the extraction of metals, minerals, nutrients, water, oxygen and volatiles from extraterrestrial regolith. Bioleaching in space also shows promise for application in building biological life support systems (BLSS). BLSS do not usually contain a biological component; however, the use of microorganisms to break down waste and regolith, while capturing byproducts such as nitrates and methane, would theoretically allow for a cyclical system of regenerative life support. Fungi in Biomining Species of filamentous fungi, specifically those in the genera Aspergillus and Penicillium, have been shown to be effective bioleaching agents. Fungi have the ability to solubilize metals through acidolysis, redoxolysis and chelation reactions. Like bacteria, fungi have been studied for their ability to extract rare earth elements and to process low grade ore. But their most promising and studied usage is in the breakdown of E-waste and the recovery of valuable metals from it, like gold. Despite the promise of fungal bioleaching, there have been no industrial applications of it, as it does not outcompete its bacterial counterparts. Hybrid Biomaterials Hybrid biomaterials are created by attaching peptides to magnetic nanoparticles. The peptides attached are specific proteins that have the capacity to bind to organic/inorganic materials with high affinity. This allows highly specific custom hybrid molecules to be developed that bind to molecules of interest. The magnetic nanoparticles that these proteins are bound to allow for the separation of the biomaterial and the bound molecules from an aqueous solution. There has already been successful development of these hybrid biomaterials for eluting gold and molybdenite from solution, and this technique shows great promise for cleaning up tailings ponds.
Technology
Biotechnology
null
9930037
https://en.wikipedia.org/wiki/Hyderabad%20Metro
Hyderabad Metro
The Hyderabad Metro is a rapid transit system, serving the city of Hyderabad, Telangana, India. It is the third longest operational metro network in India after Delhi Metro and Namma Metro (Bengaluru), and the lines are arranged in a secant model. It is funded by a public–private partnership (PPP), with the state government holding a minority equity stake. Hyderabad Metro is the world's largest elevated Metro Rail system based on DBFOT basis (Design, Build, Finance, Operate and Transfer). A special purpose vehicle company, L&T Metro Rail Hyderabad Ltd (L&TMRHL), was established by the construction company Larsen & Toubro to develop the Hyderabad metro rail project. A stretch from Miyapur to Nagole, with 24 stations, was inaugurated on 28 November 2017 by Prime Minister Narendra Modi. This was the longest rapid transit metro line opened in one go in India. It is estimated to cost . As of February 2020, about 490,000 people use the Metro per day. Trains are crowded during the morning and evening rush hours. A ladies only coach was introduced on all the trains from 7 May 2018. Post-COVID, 450,000 passengers were travelling on Hyderabad Metro daily on average by December 2022. On 3 July 2023, Hyderabad Metro Rail achieved ridership clocking in at 0.51 million on that day. History The Hyderabad Metro rail project was approved by the Union government, in 2003. As Hyderabad continued to grow, the Multi-Modal Transport System (MMTS) had insufficient capacity for public transport, and the Union Ministry of Urban Development approved construction of the Hyderabad Metro rail project, directing the Delhi Metro Rail Corporation to conduct a survey of the proposed lines and to submit a Detailed Project Report (DPR). To meet rising public transport needs and mitigate growing road traffic in the twin cities of Hyderabad and Secunderabad, the state government and the South Central Railway jointly launched the MMTS in August 2005. The initial plan was for the Metro to connect with the existing MMTS to provide commuters with alternate modes of transport. Simultaneously, the proposals for taking up the construction of MMTS Phase II were also taken forward. In 2007, N. V. S. Reddy was appointed Managing Director of Hyderabad Metro Rail Limited, and the same year, Central Government approved financial assistance of 1639 crore under a Viability Gap Funding (VGF) scheme. The option of an underground metro system in Hyderabad was ruled out by L&T due to the presence of hard rocks, boulders and the topography of the soil in Hyderabad. Hyderabad Metro initially began under the Andhra Pradesh Municipal Tramways (Construction, Operation and Maintenance) Act, 2008 and later on, it came under the Central Metro Act which permitted revision of fares. On 26 March 2018, the Government of Telangana announced that it would set up an SPV "Hyderabad Airport Metro Limited (HAML)", jointly promoted by HMRL and HMDA, to extend the Blue line from Raidurg to Rajiv Gandhi International Airport, Shamshabad, under Phase II after the completion of Phase I in 2020. Initial bidding The bidding process was completed by July 2008 and awarded to Maytas, which failed to achieve financial closure for the project as per schedule by March 2009. Re-bidding The State government cancelled the contract and called for a fresh rebidding for the project. In the July 2010 rebidding process, Larsen & Toubro (L&T) emerged as the lowest bidder for the project. 
L&T came forward to take up the work for about as viability gap funding as against the sanctioned . The Indian National Congress government proactively pursued the project, but it was delayed due to separate state agitation and later due to the apprehensions of the new government. A consortium of 10 banks led by State Bank of India sanctioned the entire debt requirement of the Hyderabad Metro project, which was the largest fund tie-up in India for a non-power infrastructure public–private partnership (PPP) project at that time. Mascot The mascot of Hyderabad Metro Rail is Niz. It was derived from the word Nizam, the title of the rulers of the princely state of Hyderabad. Construction milestones Groundbreaking (Bhoomi Puja) for the project was conducted on 26 April 2012; the concessionaire started the pillar erection on the same day for Stage-I and on 6 June 2012 for Stage-II. The work for Corridor 2 has been delayed due to traders in Koti and Sultan Bazar demanding realignment of the route to safeguard traders and old heritage markets. If the recent bill proposed in Parliament which allows construction within a radius of heritage structures and sites of historical or archaeological importance is passed, the Metro might receive a chance, as it helps to connect the Old City with the IT corridor. The construction of the entire has been split into 6 stages, with the first stage originally scheduled to be completed by March 2015. In November 2013, L&T Hyderabad Metro started laying rails on the metro viaduct between Nagole and Mettuguda, a stretch of . The first highly sophisticated train of the Hyderabad Metro Rail (HMR) came from Korea during the third week of May 2014. Stringent trial runs ran from June 2014 till February 2015. The trial runs started on the Miyapur to Sanjeeva Reddy Nagar stretch in October 2015. CMRS inspection for Stage-II (Miyapur and S.R. Nagar Section) was done on 9 and 10 August 2016. The steel bridge of the HMR was successfully placed over the Oliphant bridge in August 2017. In November 2017, the Commissioner of Railway Safety (CMRS) granted safety approval for the stretch from Miyapur to SR Nagar, the stretch from SR Nagar to Mettuguda and the stretch from Nagole to Mettuguda. The 16-km Ameerpet–LB Nagar Metro stretch was opened for commercial operations from 24 September 2018. The Ameerpet–HITEC City route was opened on a conditional basis on 20 March 2019. The reversal facility after HITEC City metro station was started on 20 August 2019. On 19 May 2019, the construction of all the 2,599 pillars for the Hyderabad Metro rail (except the stretch in the old city) was completed. The Green Line Corridor from Jubilee Bus Station to Mahatma Gandhi Bus Station was issued the Safety Certificate by the Commissioner of Metro Rail Safety, and the inauguration of services on the section was done on 7 February 2020 by the Chief Minister of Telangana, K. Chandrashekar Rao. The groundbreaking ceremony for the Airport Express line was performed by the Chief Minister of Telangana, K. Chandrashekar Rao, on 9 December 2022. Construction phases Phase I Phase I of the project includes 3 lines covering a distance of around . The metro rail line between Nagole and Secunderabad was originally scheduled to open by December 2015; it was partly opened on 29 November 2017 and the entire Phase 1 was completed in 2020.
A 'Supplemental Concession Agreement' was signed between L&T Metro Rail Hyderabad and the Government of Telangana, under which L&T Metro Rail Hyderabad was granted an interest-free soft loan of Rs 100 crore. Line 1 - Red Line - LB Nagar–Miyapur - 27 stations Line 2 - Green Line - JBS–Falaknuma - 15 stations Line 3 - Blue Line - Nagole–Raidurg - 24 stations Note: The Stage 4/2 MGBS–Falaknuma section () is also part of the initial Phase I; it has been rumoured that the state government might take up this section instead of L&T, but it is expected to be completed along with the Phase I work. The Stage 3/2 HITEC City–Raidurg section () of Corridor III was not initially part of Phase I; it was added later by the newly elected state government. This section was opened on 29 November 2019. Old city metro line Earlier, in 2010, the All India Majlis-e-Ittehadul Muslimeen suggested an alternate route for the metro in the old city through Purana Pul, Muslimjung, Bahadurpura, Zoo Park, Tadbun junction, Kalapathar, Misrigunj and Shamsheergunj to Falaknuma. However, this route was not accepted. The eastern parts of the old city have access to the metro via the Malakpet metro station. A -long green line in the old city will pass through Dar-ul-Shifa, Salar Jung Museum, Charminar, Shah-Ali-Banda, Shamsheer Gunj and Jungametta, and end at Falaknuma. In June 2022, Hyderabad Metro Rail started a fresh survey of the Old City route from MGBS for underground utilities. The survey uses lidar, the Global Positioning System and inertial measurement units, and the plan is to build the elevated line alongside the Musi river and along the centre of the road. In July 2023, Telangana Chief Minister K. Chandrashekar Rao instructed the municipal administration and the L&T Chairman to take the metro project in the old city forward. On 16 July 2023, Hyderabad Metro MD NVS Reddy informed that preparatory works for taking up Metro Rail works in the old city had started and that land acquisition notices for 1,100 affected properties would be issued in about a month. All five metro stations in the old city will have 120-feet-wide roads under the viaduct. On 27 August 2023, Hyderabad Metro Rail Limited started a drone survey of the proposed rail alignment in the old city. Phase II The Government of Telangana is planning a second phase of metro rail covering 67.5 km, estimated to cost 17,150 crore. The construction of Phase II will be taken up solely by the state government, instead of the public–private partnership (PPP) mode used in Phase I. The Delhi Metro Rail Corporation (DMRC) was entrusted with preparing a detailed project report (DPR) for Phase II. The Metro Rail Phase II expansion plan is for about , which includes providing a link to Shamshabad RGI Airport. In February 2020, Hyderabad Metro MD NVS Reddy said that three corridors were being considered for Phase II. The DPR has been submitted to the state government. In November 2022, the Telangana government asked the Central Government to sanction metro rail Phase II, to be jointly owned by Telangana and the Centre (on the lines of the MMTS) with external financial assistance. The Telangana government proposed metro rail connectivity for about 26 km from BHEL to Lakdikapul with 23 stations, and an extension of another stretch from Nagole to LB Nagar covering a distance of about 5 km with 4 stations. The BHEL–Lakdikapul metro rail corridor is expected to pass through the Miyapur, Raidurg, Khajaguda Junction, Mehdipatnam, Tolichowki and Masab Tank areas. The Telangana government asked the Central Government to sanction funds for metro works in the upcoming union budget for 2023–24. 
For implementing the project, detailed project reports (DPRs) have already been prepared by the state government with the help of the Delhi Metro Rail Corporation (DMRC). The reports have been sent to the Centre. Hyderabad Airport Metro Limited and the HMDA will also build an elevated Hyderabad Bus Rapid Transit System between Kokapet Neopolis and KPHB Colony metro station covering . In March 2023, K. T. Rama Rao said that the metro line from Nagole will be extended to LB Nagar, and further to Hayathnagar, while LB Nagar will be extended to Rajiv Gandhi International Airport, Shamshabad. The Union government pointed out certain shortfalls in the Government of Telangana's plan to build the BHEL–Lakdikapul and Nagole–LB Nagar metro routes. K. T. Rama Rao replied in a letter to Union Minister of Housing and Urban Affairs Hardeep Singh Puri that the rejection was discrimination against Telangana. Hyderabad Airport Express Metro On 26 March 2018, a special purpose vehicle company, Hyderabad Airport Metro Limited (HAML), was established by the Government of Telangana to develop the Hyderabad Airport Metro Express. The Airport Express Metro Corridor is proposed to have 27 km of elevated and at-grade sections and a -km underground section to connect to the airport terminal. The airport route will have 9 elevated stations and one underground station. From the Raidurg Metro terminal station, it will pass through Khajaguda Junction, touch the Outer Ring Road at Nanakramguda junction, and traverse along the ORR to Shamshabad Airport through the existing dedicated Metro Rail right-of-way. Chief Minister of Telangana K. Chandrashekar Rao laid the foundation stone for the Hyderabad Metro Airport Express on 9 December 2022. It will be built at an approximate cost of . In 2024, this route was cancelled by the incumbent Congress government; Chief Minister Revanth Reddy announced that the new airport metro route would pass through the old city to the airport via Jalpally. Current phases The construction work was undertaken in two phases. There are six stages of completion in Phase I. Network Currently, the Hyderabad Metro has 57 stations. Phase I of the Hyderabad Metro has 64 stations; they have escalators and elevators, announcement boards and electronic display systems. The stations also have service roads underneath them for other public transportation systems to drop off and pick up passengers. The signboards of the Hyderabad Metro are displayed in Telugu, English, Hindi and Urdu at metro stations. All stations of Hyderabad Metro Rail are equipped with tactile pathways from street level to platform level, along with elevator buttons equipped with Braille, providing barrier-free navigation for visually impaired commuters. Otis Elevator Company supplied and maintains the 670 elevators in use on the system. The numbering of the metro pillars of Hyderabad Metro is alphanumeric, with corridor I (Miyapur–LB Nagar) designated 'A', corridor II (JBS–Falaknuma) designated 'B' and corridor III (Nagole–Raidurg) designated 'C'. The numbering begins from the Point of Beginning (POB) of each corridor; for example, the pier numbers on corridor III run from C1 near Nagole bridge (the corridor beginning) through C296 near Mettuguda, C583 near Begumpet, C623 near Ameerpet, C1001 near HITEC City and C1052 near Raidurg. Any future corridors would be designated D, E, F and so on. The metro rail pillars are linked with Google Maps and GPS (Global Positioning System). 
In May 2018, L&T Metro Rail signed a contract with Powergrid Corporation of India to install electric vehicle charging facilities at all metro stations, beginning with the Miyapur and Dr. B. R. Ambedkar Balanagar stations. L&TMRHL has set up free wifi access units for commuters at the Miyapur, Ameerpet and Nagole metro stations, in association with ACT Fibernet, as part of a pilot project. The Metro Rail Phase II expansion plan is for about . In April 2019, K. T. Rama Rao said that of metro rail was planned for Hyderabad, with metro along the entire Outer Ring Road. All metro corridors are scheduled to terminate at Shamshabad, near Rajiv Gandhi International Airport, as planned in Hyderabad Metro Rail Phase II. In August 2019, K. T. Rama Rao said that the state cabinet had approved the Hyderabad Metro Airport Express link from Raidurg to the airport. Current status Finances The Hyderabad Metro is a public–private partnership project; the total cost of the transport system is 3.07 billion, shared between Larsen & Toubro (90%) and the Government of Telangana (10%). In July 2022, L&T Metro Rail Hyderabad Limited came up with a concept called 'Office Bubbles', under which it offers remote co-working spaces as part of its transit-oriented development (TOD). The company is offering 1,750 sq. ft. of space, with two units each, in 49 metro stations across the three corridors, and another 5,000–30,000 sq. ft. in eight other metro stations. Aimed at IT companies, the Office Bubbles concept follows a spoke–hub distribution model. In the Hyderabad Metro, 40 per cent of the retail space was sold even before the metro stations were built, to generate non-fare revenue. L&TMRHL built real-estate projects such as the Next Galleria malls in Panjagutta, Irrum Manzil, Hitech City and Musarambagh, with skywalks, for generating non-fare revenues under transit-oriented development (TOD). In 2019, Hyderabad Metro started a semi-naming policy for metro stations, awarded through an open e-tendering process, to generate non-fare revenues. Depots Hyderabad Metro currently has 2 operational depots. The Miyapur and Uppal depots occupy 100 acres of land each. The proposed Falaknuma depot will be constructed on 17 acres. Ridership The Metro opened to an overwhelming response, with over 200,000 people using it on day one. On the first Sunday of operations, the Metro was used by 240,000 people. As of 2020, the daily ridership is about 490,000. Although operations began in 2017 with hiccups and a meagre ridership of about 100,000 per day, ridership surged after the new lines to LB Nagar and HITEC City opened in 2018–19, rising from 2 to 4 lakh within a short period. Trains are initially being operated at a frequency of 3 minutes during the busiest hours and every 5 minutes during peak hours (between Miyapur and LB Nagar), and every 4 minutes during peak hours (between HITEC City/Ameerpet and Nagole), though the maximum achievable frequency is one train every 90 seconds. Similarly, three-car trains are currently being used, though it is planned to use six-car trains in the future. In December 2017, Hyderabad Metro Rail launched its mobile app, TSavaari. Hyderabad Metro timings are available on the TSavaari app. Ola Cabs and Uber have tied up their services with the app. In May 2022, Hyderabad Metro Managing Director N. V. S. Reddy ruled out the possibility of attaching a single or double coach to the three-coach train sets. 
Each three-coach train can carry between 900 and 1,000 passengers per trip, and the project has been envisaged so that another three-coach rake can be attached to form six-coach trains, with the stations and depots already planned for the increased train length. L&T Metro Rail has been using 53 train sets of three coaches each, with four three-coach sets under repair or maintenance at any time; maintenance is undertaken using special software based on the Internet of Things. Hyderabad Metro Rail crossed the 100 million cumulative ridership milestone in just 671 days. In February 2023, Hyderabad Metro announced that folding cycles the size of a 40 kg bag are allowed on the Metro, but only during non-peak hours. Last-mile connectivity In order to enhance first- and last-mile connectivity of Hyderabad Metro Rail, Svida Mobility Pvt Ltd, an urban mobility services startup, signed a Memorandum of Understanding (MoU) with L&T Metro Rail Hyderabad Limited (L&TMRHL) with plans to scale up its feeder vehicle services. Svida offers services via an AI-enabled technology platform that provides booking of feeder vehicles, and has been L&TMRHL's authorised feeder service provider since 2019. The first- and last-mile connectivity routes, across seven metro stations (Raidurg, Parade Ground, Mettuguda, LB Nagar, Uppal, KPHB and Miyapur), use e-autos and Tata Winger vans. On 21 April 2022, Hyderabad Metro launched its electric auto services in collaboration with the AI-enabled ride-hailing mobility platform MetroRide. The services were launched at two metro stations, Parade Ground and Raidurg. Cost The initial official estimated cost of the 72 km long Metro project stood at . The state government decided to bear 10% of it, while L&T was to bear the remaining 90% of the cost. The construction work, which was supposed to commence on 3 March 2011, began in 2012. In March 2012, the cost of the project was revised upwards to . This was further revised upwards to (as of November 2017). Infrastructure The 71.3 km standard-gauge network features ballastless track throughout and is electrified at 25 kV AC 50 Hz. An operations control centre and depot have been constructed at Uppal. At some locations, a flyover, an underpass and the metro viaduct have been constructed at the same place as part of the Strategic Road Development Plan (SRDP). CBTC Technology At the end of 2012, L&T Metro Rail awarded Thales a 7.4 billion (US$134 million) contract to provide CBTC and integrated telecommunications and supervision systems on all three lines. Thales Group supplied its SelTrac communications-based train control (CBTC) technology, and trains initially run in automatic train operation mode with minimum headways of 90 seconds, although the system will support eventual migration to unattended train operation (UTO). Rolling stock On 12 September 2012, Larsen and Toubro Metro Rail Hyderabad Ltd (LTMRHL) announced that it had awarded the tender for the supply of rolling stock to Hyundai Rotem. The tender is for 57 trains consisting of 171 cars, to be delivered in phases at least 9 months before the commencement of each stage. On 2 October 2013, LTMRHL unveiled its train car for the Hyderabad Metro. A model coach, half the size of an actual coach, was put on public display at Necklace Road on the banks of Hussain Sagar in the heart of Hyderabad. The trains are 3.2 m wide and 4 m high, with 4 doors on each side of each coach. 
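As a rough, illustrative capacity estimate based on the figures quoted in this article (roughly 1,000 passengers per three-coach train and a minimum design headway of 90 seconds), the theoretical line capacity per direction can be worked out as follows. This is a back-of-the-envelope sketch using assumed round numbers, not an official capacity figure published by the operator.

# Illustrative line-capacity estimate using figures quoted in this article.
# These are assumed round numbers, not official operator data.

passengers_per_train = 1000     # upper end of the stated 900-1,000 per three-coach train
min_headway_seconds = 90        # minimum design headway supported by the signalling system

trains_per_hour = 3600 / min_headway_seconds                 # 40 trains per hour per direction
capacity_per_hour = trains_per_hour * passengers_per_train   # theoretical passengers per hour per direction

print(f"Trains per hour per direction: {trains_per_hour:.0f}")
print(f"Theoretical capacity: {capacity_per_hour:,.0f} passengers per hour per direction")

# At the 3-minute peak headway mentioned above, the same arithmetic gives
# about 20 trains and roughly 20,000 passengers per hour per direction.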
On 10 April 2014, the first metro train for HMR rolled out of the Hyundai Rotem factory at Changwon in South Korea and reached Hyderabad in May 2014. On 31 December 2014, Hyderabad Metro Rail successfully conducted a trial run in Automatic Train Operation (ATO) mode for the first time between Nagole and Mettuguda. In February 2022, Hyderabad Metro became India's first metro rail to introduce ozone-based sanitisation of its train coaches. Hyderabad Metro rakes regenerate power using a regenerative braking system. Ticketing and recharge The L&T Hyderabad project has an automated ticketing system with features such as contactless smart-card-based ticketing, slim automatic gates, payment by cash and credit/debit card, passenger-operated ticket vending machines and provision for a common ticketing system. It also has provision for NFC-based technology to enable the use of mobile phones as fare media, and high-performance machines to avoid long queues. The Hyderabad Metro Rail smart card acts as a virtual wallet that facilitates seamless travel. A smart card can be purchased from a ticketing office at any Hyderabad Metro station or through the TSavaari app, and can be recharged for a minimum amount of 50 and a maximum amount of 3000. The smart card can be recharged through the TSavaari app, the HMR passenger website (www.ltmetro.com), or the Paytm app. There is a 10% discount on all trips made with the smart card. In December 2019, Hyderabad Metro started a cashless QR (Quick Response) code payment option for e-tickets through MakeMyTrip and Goibibo. In October 2022, Hyderabad Metro became the first metro rail in the country to launch end-to-end, fully digital, payment-enabled metro ticket booking through a WhatsApp e-ticketing facility. Samsung Data Systems India, a subsidiary of the South Korean firm Samsung, was awarded the automatic fare collection system package for the L&T metro rail project. The package involves design, manufacture, supply, installation, testing and commissioning of the system. Official ticket prices were announced on 25 November 2017. The base fare is 10 for up to 2 km. Sanitation and maintenance In 2023, the Hyderabad Metro implemented a system to collect user charges at stations with high passenger traffic to ensure effective maintenance of public washrooms. The management of these facilities was assigned to Sulabh International, a well-known sanitation organization. Under this arrangement, commuters are charged a nominal fee of Rs 2 for using urinals and Rs 5 for accessing toilets. Awards and nominations The HMR project was showcased as one of the top 100 strategic global infrastructure projects at the Global Infrastructure Leadership Forum held in New York during February–March 2013. L&T Metro Rail Hyderabad Limited (LTMRHL) was conferred the SAP ACE Award 2015 in the 'Strategic HR and Talent Management' category. In 2018, the Rasoolpura, Paradise and Prakash Nagar metro stations were awarded the Indian Green Building Council's (IGBC) Green MRTS Platinum Award. Hyderabad Metro was adjudged the Best Urban Mass Transit Project by the Government of India in November 2018. In October 2022, three stations of the Hyderabad Metro (Durgam Cheruvu, Punjagutta and LB Nagar) were awarded the Indian Green Building Council (IGBC) Green MRTS Certification with the highest, platinum, rating in the elevated stations category. With this, Hyderabad Metro Rail has 23 metro stations certified with the IGBC Platinum rating. 
In March 2024, a study by an Indian School of Business team on the execution of the Hyderabad Metro Rail project was published as a case study at Stanford University for the benefit of management practitioners. In popular culture The 2018 Telugu film Devadas, starring Nani and Rashmika Mandanna, was the first film to be shot in the Hyderabad Metro. Some scenes of the 2021 Telugu films Vakeel Saab, starring Pawan Kalyan and Nivetha Thomas, and Ek Mini Katha, starring Santosh Shoban and Kavya Thapar, were also shot in the Hyderabad Metro. In June 2022, a scene from the then-upcoming Indian science fiction film Kalki 2898 AD, starring Amitabh Bachchan, was shot at Raidurg Metro Station. Some scenes of the film 18 Pages were also shot in the Hyderabad Metro. Many scenes of the 2023 Hindi film 8 A.M. Metro and the Telugu film Kushi were shot in the Hyderabad Metro. Network map
Technology
India
18993816
https://en.wikipedia.org/wiki/Solid
Solid
Solid is one of the four fundamental states of matter along with liquid, gas, and plasma. The molecules in a solid are closely packed together and contain the least amount of kinetic energy. A solid is characterized by structural rigidity (as in rigid bodies) and resistance to a force applied to the surface. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire available volume like a gas. The atoms in a solid are bound to each other, either in a regular geometric lattice (crystalline solids, which include metals and ordinary ice), or irregularly (an amorphous solid such as common window glass). Solids cannot be compressed with little pressure, whereas gases can, because the molecules in a gas are loosely packed. The branch of physics that deals with solids is called solid-state physics, and is the main branch of condensed matter physics (which also includes liquids). Materials science is primarily concerned with the physical and chemical properties of solids. Solid-state chemistry is especially concerned with the synthesis of novel materials, as well as the science of identification and chemical composition. Microscopic description The atoms, molecules or ions that make up solids may be arranged in an orderly repeating pattern, or irregularly. Materials whose constituents are arranged in a regular pattern are known as crystals. In some cases, the regular ordering can continue unbroken over a large scale, for example diamonds, where each diamond is a single crystal. Solid objects that are large enough to see and handle are rarely composed of a single crystal, but instead are made of a large number of single crystals, known as crystallites, whose size can vary from a few nanometers to several meters. Such materials are called polycrystalline. Almost all common metals, and many ceramics, are polycrystalline. In other materials, there is no long-range order in the position of the atoms. These solids are known as amorphous solids; examples include polystyrene and glass. Whether a solid is crystalline or amorphous depends on the material involved, and the conditions in which it was formed. Solids that are formed by slow cooling will tend to be crystalline, while solids that are frozen rapidly are more likely to be amorphous. Likewise, the specific crystal structure adopted by a crystalline solid depends on the material involved and on how it was formed. While many common objects, such as an ice cube or a coin, are chemically identical throughout, many other common materials comprise a number of different substances packed together. For example, a typical rock is an aggregate of several different minerals and mineraloids, with no specific chemical composition. Wood is a natural organic material consisting primarily of cellulose fibers embedded in a matrix of organic lignin. In materials science, composites of more than one constituent material can be designed to have desired properties. Classes of solids The forces between the atoms in a solid can take a variety of forms. For example, a crystal of sodium chloride (common salt) is made up of ionic sodium and chlorine, which are held together by ionic bonds. In diamond or silicon, the atoms share electrons and form covalent bonds. In metals, electrons are shared in metallic bonding. 
Some solids, particularly most organic compounds, are held together with van der Waals forces resulting from the polarization of the electronic charge cloud on each molecule. The dissimilarities between the types of solid result from the differences between their bonding. Metals Metals typically are strong, dense, and good conductors of both electricity and heat. The bulk of the elements in the periodic table, those to the left of a diagonal line drawn from boron to polonium, are metals. Mixtures of two or more elements in which the major component is a metal are known as alloys. People have been using metals for a variety of purposes since prehistoric times. The strength and reliability of metals has led to their widespread use in construction of buildings and other structures, as well as in most vehicles, many appliances and tools, pipes, road signs and railroad tracks. Iron and aluminium are the two most commonly used structural metals. They are also the most abundant metals in the Earth's crust. Iron is most commonly used in the form of an alloy, steel, which contains up to 2.1% carbon, making it much harder than pure iron. Because metals are good conductors of electricity, they are valuable in electrical appliances and for carrying an electric current over long distances with little energy loss or dissipation. Thus, electrical power grids rely on metal cables to distribute electricity. Home electrical systems, for example, are wired with copper for its good conducting properties and easy machinability. The high thermal conductivity of most metals also makes them useful for stovetop cooking utensils. The study of metallic elements and their alloys makes up a significant portion of the fields of solid-state chemistry, physics, materials science and engineering. Metallic solids are held together by a high density of shared, delocalized electrons, known as "metallic bonding". In a metal, atoms readily lose their outermost ("valence") electrons, forming positive ions. The free electrons are spread over the entire solid, which is held together firmly by electrostatic interactions between the ions and the electron cloud. The large number of free electrons gives metals their high values of electrical and thermal conductivity. The free electrons also prevent transmission of visible light, making metals opaque, shiny and lustrous. More advanced models of metal properties consider the effect of the positive ions cores on the delocalised electrons. As most metals have crystalline structure, those ions are usually arranged into a periodic lattice. Mathematically, the potential of the ion cores can be treated by various models, the simplest being the nearly free electron model. Minerals Minerals are naturally occurring solids formed through various geological processes under high pressures. To be classified as a true mineral, a substance must have a crystal structure with uniform physical properties throughout. Minerals range in composition from pure elements and simple salts to very complex silicates with thousands of known forms. In contrast, a rock sample is a random aggregate of minerals and/or mineraloids, and has no specific chemical composition. The vast majority of the rocks of the Earth's crust consist of quartz (crystalline SiO2), feldspar, mica, chlorite, kaolin, calcite, epidote, olivine, augite, hornblende, magnetite, hematite, limonite and a few other minerals. Some minerals, like quartz, mica or feldspar are common, while others have been found in only a few locations worldwide. 
The largest group of minerals by far is the silicates (most rocks are ≥95% silicates), which are composed largely of silicon and oxygen, with the addition of ions of aluminium, magnesium, iron, calcium and other metals. Ceramics Ceramic solids are composed of inorganic compounds, usually oxides of chemical elements. They are chemically inert, and often are capable of withstanding chemical erosion that occurs in an acidic or caustic environment. Ceramics generally can withstand high temperatures ranging from . Exceptions include non-oxide inorganic materials, such as nitrides, borides and carbides. Traditional ceramic raw materials include clay minerals such as kaolinite, more recent materials include aluminium oxide (alumina). The modern ceramic materials, which are classified as advanced ceramics, include silicon carbide and tungsten carbide. Both are valued for their abrasion resistance, and hence find use in such applications as the wear plates of crushing equipment in mining operations. Most ceramic materials, such as alumina and its compounds, are formed from fine powders, yielding a fine grained polycrystalline microstructure that is filled with light-scattering centers comparable to the wavelength of visible light. Thus, they are generally opaque materials, as opposed to transparent materials. Recent nanoscale (e.g. sol-gel) technology has, however, made possible the production of polycrystalline transparent ceramics such as transparent alumina and alumina compounds for such applications as high-power lasers. Advanced ceramics are also used in the medicine, electrical and electronics industries. Ceramic engineering is the science and technology of creating solid-state ceramic materials, parts and devices. This is done either by the action of heat, or, at lower temperatures, using precipitation reactions from chemical solutions. The term includes the purification of raw materials, the study and production of the chemical compounds concerned, their formation into components, and the study of their structure, composition and properties. Mechanically speaking, ceramic materials are brittle, hard, strong in compression and weak in shearing and tension. Brittle materials may exhibit significant tensile strength by supporting a static load. Toughness indicates how much energy a material can absorb before mechanical failure, while fracture toughness (denoted KIc) describes the ability of a material with inherent microstructural flaws to resist fracture via crack growth and propagation. If a material has a large value of fracture toughness, the basic principles of fracture mechanics suggest that it will most likely undergo ductile fracture. Brittle fracture is very characteristic of most ceramic and glass-ceramic materials that typically exhibit low (and inconsistent) values of KIc. For an example of applications of ceramics, the extreme hardness of zirconia is utilized in the manufacture of knife blades, as well as other industrial cutting tools. Ceramics such as alumina, boron carbide and silicon carbide have been used in bulletproof vests to repel large-caliber rifle fire. Silicon nitride parts are used in ceramic ball bearings, where their high hardness makes them wear resistant. In general, ceramics are also chemically resistant and can be used in wet environments where steel bearings would be susceptible to oxidation (or rust). 
As another example of ceramic applications, in the early 1980s, Toyota researched production of an adiabatic ceramic engine with an operating temperature of over . Ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. In a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. Work is also being done in developing ceramic parts for gas turbine engines. Turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. Such engines are not in production, however, because the manufacturing of ceramic parts in the sufficient precision and durability is difficult and costly. Processing methods often result in a wide distribution of microscopic flaws that frequently play a detrimental role in the sintering process, resulting in the proliferation of cracks, and ultimate mechanical failure. Glass ceramics Glass-ceramic materials share many properties with both non-crystalline glasses and crystalline ceramics. They are formed as a glass, and then partially crystallized by heat treatment, producing both amorphous and crystalline phases so that crystalline grains are embedded within a non-crystalline intergranular phase. Glass-ceramics are used to make cookware (originally known by the brand name CorningWare) and stovetops that have high resistance to thermal shock and extremely low permeability to liquids. The negative coefficient of thermal expansion of the crystalline ceramic phase can be balanced with the positive coefficient of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net coefficient of thermal expansion close to zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C. Glass ceramics may also occur naturally when lightning strikes the crystalline (e.g. quartz) grains found in most beach sand. In this case, the extreme and immediate heat of the lightning (~2500 °C) creates hollow, branching rootlike structures called fulgurite via fusion. Organic solids Organic chemistry studies the structure, properties, composition, reactions, and preparation by synthesis (or other means) of chemical compounds of carbon and hydrogen, which may contain any number of other elements such as nitrogen, oxygen and the halogens: fluorine, chlorine, bromine and iodine. Some organic compounds may also contain the elements phosphorus or sulfur. Examples of organic solids include wood, paraffin wax, naphthalene and a wide variety of polymers and plastics. Wood Wood is a natural organic material consisting primarily of cellulose fibers embedded in a matrix of lignin. Regarding mechanical properties, the fibers are strong in tension, and the lignin matrix resists compression. Thus wood has been an important construction material since humans began building shelters and using boats. Wood to be used for construction work is commonly known as lumber or timber. In construction, wood is not only a structural material, but is also used to form the mould for concrete. Wood-based materials are also extensively used for packaging (e.g. cardboard) and paper, which are both created from the refined pulp. 
The chemical pulping processes use a combination of high temperature and alkaline (kraft) or acidic (sulfite) chemicals to break the chemical bonds of the lignin before burning it out. Polymers One important property of carbon in organic chemistry is that it can form certain compounds, the individual molecules of which are capable of attaching themselves to one another, thereby forming a chain or a network. The process is called polymerization and the chains or networks polymers, while the source compound is a monomer. Two main groups of polymers exist: those artificially manufactured are referred to as industrial polymers or synthetic polymers (plastics) and those naturally occurring as biopolymers. Monomers can have various chemical substituents, or functional groups, which can affect the chemical properties of organic compounds, such as solubility and chemical reactivity, as well as the physical properties, such as hardness, density, mechanical or tensile strength, abrasion resistance, heat resistance, transparency, color, etc.. In proteins, these differences give the polymer the ability to adopt a biologically active conformation in preference to others (see self-assembly). People have been using natural organic polymers for centuries in the form of waxes and shellac, which is classified as a thermoplastic polymer. A plant polymer named cellulose provided the tensile strength for natural fibers and ropes, and by the early 19th century natural rubber was in widespread use. Polymers are the raw materials (the resins) used to make what are commonly called plastics. Plastics are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Polymers that have been around, and that are in current widespread use, include carbon-based polyethylene, polypropylene, polyvinyl chloride, polystyrene, nylons, polyesters, acrylics, polyurethane, and polycarbonates, and silicon-based silicones. Plastics are generally classified as "commodity", "specialty" and "engineering" plastics. Composite materials Composite materials contain two or more macroscopic phases, one of which is often ceramic. For example, a continuous matrix, and a dispersed phase of ceramic particles or fibers. Applications of composite materials range from structural elements such as steel-reinforced concrete, to the thermally insulative tiles that play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is Reinforced Carbon-Carbon (RCC), the light gray material that withstands reentry temperatures up to and protects the nose cap and leading edges of Space Shuttle's wings. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfural alcohol in a vacuum chamber, and cured/pyrolized to convert the furfural alcohol to carbon. In order to provide oxidation resistance for reuse capability, the outer layers of the RCC are converted to silicon carbide. Domestic examples of composites can be seen in the "plastic" casings of television sets, cell-phones and so on. 
These plastic casings are usually a composite made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for strength, bulk, or electro-static dispersion. These additions may be referred to as reinforcing fibers, or dispersants, depending on their purpose. Thus, the matrix material surrounds and supports the reinforcement materials by maintaining their relative positions. The reinforcements impart their special mechanical and physical properties to enhance the matrix properties. A synergism produces material properties unavailable from the individual constituent materials, while the wide variety of matrix and strengthening materials provides the designer with the choice of an optimum combination. Semiconductors Semiconductors are materials that have an electrical resistivity (and conductivity) between that of metallic conductors and non-metallic insulators. They can be found in the periodic table moving diagonally downward right from boron. They separate the electrical conductors (or metals, to the left) from the insulators (to the right). Devices made from semiconductor materials are the foundation of modern electronics, including radio, computers, telephones, etc. Semiconductor devices include the transistor, solar cells, diodes and integrated circuits. Solar photovoltaic panels are large semiconductor devices that directly convert light into electrical energy. In a metallic conductor, current is carried by the flow of electrons, but in semiconductors, current can be carried either by electrons or by the positively charged "holes" in the electronic band structure of the material. Common semiconductor materials include silicon, germanium and gallium arsenide. Nanomaterials Many traditional solids exhibit different properties when they shrink to nanometer sizes. For example, nanoparticles of usually yellow gold and gray silicon are red in color; gold nanoparticles melt at much lower temperatures (~300 °C for 2.5 nm size) than the gold slabs (1064 °C); and metallic nanowires are much stronger than the corresponding bulk metals. The high surface area of nanoparticles makes them extremely attractive for certain applications in the field of energy. For example, platinum metals may provide improvements as automotive fuel catalysts, as well as proton exchange membrane (PEM) fuel cells. Also, ceramic oxides (or cermets) of lanthanum, cerium, manganese and nickel are now being developed as solid oxide fuel cells (SOFC). Lithium, lithium-titanate and tantalum nanoparticles are being applied in lithium-ion batteries. Silicon nanoparticles have been shown to dramatically expand the storage capacity of lithium-ion batteries during the expansion/contraction cycle. Silicon nanowires cycle without significant degradation and present the potential for use in batteries with greatly expanded storage times. Silicon nanoparticles are also being used in new forms of solar energy cells. Thin film deposition of silicon quantum dots on the polycrystalline silicon substrate of a photovoltaic (solar) cell increases voltage output as much as 60% by fluorescing the incoming light prior to capture. Here again, surface area of the nanoparticles (and thin films) plays a critical role in maximizing the amount of absorbed radiation. Biomaterials Many natural (or biological) materials are complex composites with remarkable mechanical properties. 
These complex structures, which have arisen from hundreds of millions of years of evolution, are inspiring materials scientists in the design of novel materials. Their defining characteristics include structural hierarchy, multifunctionality and self-healing capability. Self-organization is also a fundamental feature of many biological materials and the manner by which the structures are assembled from the molecular level up. Thus, self-assembly is emerging as a new strategy in the chemical synthesis of high performance biomaterials. Physical properties Physical properties of elements and compounds that provide conclusive evidence of chemical composition include odor, color, volume, density (mass per unit volume), melting point, boiling point, heat capacity, physical form and shape at room temperature (solid, liquid or gas; cubic, trigonal crystals, etc.), hardness, porosity, index of refraction and many others. This section discusses some physical properties of materials in the solid state. Mechanical The mechanical properties of materials describe characteristics such as their strength and resistance to deformation. For example, steel beams are used in construction because of their high strength, meaning that they neither break nor bend significantly under the applied load. Mechanical properties include elasticity, plasticity, tensile strength, compressive strength, shear strength, fracture toughness, ductility (low in brittle materials) and indentation hardness. Solid mechanics is the study of the behavior of solid matter under external actions such as external forces and temperature changes. A solid does not exhibit macroscopic flow, as fluids do. Any degree of departure from its original shape is called deformation. The proportion of deformation to original size is called strain. If the applied stress is sufficiently low, almost all solid materials behave in such a way that the strain is directly proportional to the stress (Hooke's law). The coefficient of proportionality is called the modulus of elasticity or Young's modulus. This region of deformation is known as the linearly elastic region. Three models can describe how a solid responds to an applied stress: Elasticity – When an applied stress is removed, the material returns to its undeformed state. Viscoelasticity – These are materials that behave elastically, but also have damping. When the applied stress is removed, work has to be done against the damping effects and is converted to heat within the material. This results in a hysteresis loop in the stress–strain curve. This implies that the mechanical response has a time-dependence. Plasticity – Materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically and does not return to its previous state. That is, irreversible, permanent plastic deformation (or viscous flow) occurs after yield. Many materials become weaker at high temperatures. Materials that retain their strength at high temperatures, called refractory materials, are useful for many purposes. For example, glass-ceramics have become extremely useful for countertop cooking, as they exhibit excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C. In the aerospace industry, high performance materials used in the design of aircraft and/or spacecraft exteriors must have a high resistance to thermal shock. 
Thus, synthetic fibers spun out of organic polymers and polymer/ceramic/metal composite materials and fiber-reinforced polymers are now being designed with this purpose in mind. Thermal Because solids have thermal energy, their atoms vibrate about fixed mean positions within the ordered (or disordered) lattice. The spectrum of lattice vibrations in a crystalline or glassy network provides the foundation for the kinetic theory of solids. This motion occurs at the atomic level, and thus cannot be observed or detected without highly specialized equipment, such as that used in spectroscopy. Thermal properties of solids include thermal conductivity, which is the property of a material that indicates its ability to conduct heat. Solids also have a specific heat capacity, which is the capacity of a material to store energy in the form of heat (or thermal lattice vibrations). Electrical Electrical properties include both electrical resistivity and conductivity, dielectric strength, electromagnetic permeability, and permittivity. Electrical conductors such as metals and alloys are contrasted with electrical insulators such as glasses and ceramics. Semiconductors behave somewhere in between. Whereas conductivity in metals is caused by electrons, both electrons and holes contribute to current in semiconductors. Alternatively, ions support electric current in ionic conductors. Many materials also exhibit superconductivity at low temperatures; they include metallic elements such as tin and aluminium, various metallic alloys, some heavily doped semiconductors, and certain ceramics. The electrical resistivity of most electrical (metallic) conductors generally decreases gradually as the temperature is lowered, but remains finite. In a superconductor, however, the resistance drops abruptly to zero when the material is cooled below its critical temperature. An electric current flowing in a loop of superconducting wire can persist indefinitely with no power source. A dielectric, or electrical insulator, is a substance that is highly resistant to the flow of electric current. A dielectric, such as plastic, tends to concentrate an applied electric field within itself, which property is used in capacitors. A capacitor is an electrical device that can store energy in the electric field between a pair of closely spaced conductors (called 'plates'). When voltage is applied to the capacitor, electric charges of equal magnitude, but opposite polarity, build up on each plate. Capacitors are used in electrical circuits as energy-storage devices, as well as in electronic filters to differentiate between high-frequency and low-frequency signals. Electro-mechanical Piezoelectricity is the ability of crystals to generate a voltage in response to an applied mechanical stress. The piezoelectric effect is reversible in that piezoelectric crystals, when subjected to an externally applied voltage, can change shape by a small amount. Polymer materials like rubber, wool, hair, wood fiber, and silk often behave as electrets. For example, the polymer polyvinylidene fluoride (PVDF) exhibits a piezoelectric response several times larger than the traditional piezoelectric material quartz (crystalline SiO2). The deformation (~0.1%) lends itself to useful technical applications such as high-voltage sources, loudspeakers, lasers, as well as chemical, biological, and acousto-optic sensors and/or transducers. Optical Materials can transmit (e.g. glass) or reflect (e.g. metals) visible light. 
Many materials will transmit some wavelengths while blocking others. For example, window glass is transparent to visible light, but much less so to most of the frequencies of ultraviolet light that cause sunburn. This property is used for frequency-selective optical filters, which can alter the color of incident light. For some purposes, both the optical and mechanical properties of a material can be of interest. For example, the sensors on an infrared homing ("heat-seeking") missile must be protected by a cover that is transparent to infrared radiation. The current material of choice for high-speed infrared-guided missile domes is single-crystal sapphire. The optical transmission of sapphire does not actually extend to cover the entire mid-infrared range (3–5 μm), but starts to drop off at wavelengths greater than approximately 4.5 μm at room temperature. While the strength of sapphire is better than that of other available mid-range infrared dome materials at room temperature, it weakens above 600 °C. A long-standing trade-off exists between optical bandpass and mechanical durability; new materials such as transparent ceramics or optical nanocomposites may provide improved performance. Guided lightwave transmission involves the field of fiber optics and the ability of certain glasses to transmit, simultaneously and with low loss of intensity, a range of frequencies (multi-mode optical waveguides) with little interference between them. Optical waveguides are used as components in integrated optical circuits or as the transmission medium in optical communication systems. Opto-electronic A solar cell or photovoltaic cell is a device that converts light energy into electrical energy. Fundamentally, the device needs to fulfill only two functions: photo-generation of charge carriers (electrons and holes) in a light-absorbing material, and separation of the charge carriers to a conductive contact that will transmit the electricity (simply put, carrying electrons off through a metal contact into an external circuit). This conversion is called the photoelectric effect, and the field of research related to solar cells is known as photovoltaics. Solar cells have many applications. They have long been used in situations where electrical power from the grid is unavailable, such as in remote area power systems, Earth-orbiting satellites and space probes, handheld calculators, wrist watches, remote radiotelephones and water pumping applications. More recently, they are starting to be used in assemblies of solar modules (photovoltaic arrays) connected to the electricity grid through an inverter, that is not to act as a sole supply but as an additional electricity source. All solar cells require a light absorbing material contained within the cell structure to absorb photons and generate electrons via the photovoltaic effect. The materials used in solar cells tend to have the property of preferentially absorbing the wavelengths of solar light that reach the earth surface. Some solar cells are optimized for light absorption beyond Earth's atmosphere, as well. History Fields of study Solid-state physics Solid-state chemistry Materials science
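As a minimal numerical sketch of the linearly elastic (Hooke's law) behaviour described under Mechanical properties above, the following Python example computes the stress, strain and elongation of a loaded rod. The rod dimensions, applied load and Young's modulus are assumed illustrative values for a steel bar, not data taken from this article.

import math

# Hooke's law in uniaxial tension: stress = E * strain (valid only in the
# linearly elastic region). All input values below are assumed for illustration.

force = 10_000.0            # applied tensile load in newtons
diameter = 0.01             # rod diameter in metres (10 mm)
length = 2.0                # original rod length in metres
youngs_modulus = 200e9      # Young's modulus of steel, roughly 200 GPa

area = math.pi * (diameter / 2) ** 2          # cross-sectional area, m^2
stress = force / area                         # stress in pascals
strain = stress / youngs_modulus              # dimensionless strain (Hooke's law)
elongation = strain * length                  # resulting extension in metres

print(f"Stress: {stress/1e6:.1f} MPa")        # about 127 MPa, below the yield stress of steel
print(f"Strain: {strain:.6f}")
print(f"Elongation: {elongation*1000:.3f} mm")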
Physical sciences
States of matter
18993825
https://en.wikipedia.org/wiki/Liquid
Liquid
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. It is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape. The density of a liquid is usually close to that of a solid, and much higher than that of a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Unlike a gas, a liquid maintains a fairly constant density and does not disperse to fill every space of a container. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is either gas (as interstellar clouds) or plasma (as stars). Introduction Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid. A liquid, like a gas, displays the properties of a fluid. A liquid can flow, assume the shape of a container, and, if placed in a sealed container, will distribute applied pressure evenly to every surface in the container. If liquid is placed in a bag, it can be squeezed into any shape. Unlike a gas, a liquid is nearly incompressible, meaning that it occupies nearly a constant volume over a wide range of pressures; it does not generally expand to fill available space in a container but forms its own surface, and it may not always mix readily with another liquid. These properties make a liquid suitable for applications such as hydraulics. Liquid particles are bound firmly but not rigidly. They are able to move around one another freely, resulting in a limited degree of particle mobility. As the temperature increases, the increased vibrations of the molecules causes distances between the molecules to increase. When a liquid reaches its boiling point, the cohesive forces that bind the molecules closely together break, and the liquid changes to its gaseous state (unless superheating occurs). If the temperature is decreased, the distances between the molecules become smaller. When the liquid reaches its freezing point the molecules will usually lock into a very specific order, called crystallizing, and the bonds between them become more rigid, changing the liquid into its solid state (unless supercooling occurs). Examples Only two elements are liquid at standard conditions for temperature and pressure: mercury and bromine. Four more elements have melting points slightly above room temperature: francium, caesium, gallium and rubidium. In addition, certain mixtures of elements are liquid at room temperature, even if the individual elements are solid under the same conditions (see eutectic mixture). An example is the sodium-potassium metal alloy NaK. 
Other metal alloys that are liquid at room temperature include galinstan, which is a gallium-indium-tin alloy that melts at , as well as some amalgams (alloys involving mercury). Pure substances that are liquid under normal conditions include water, ethanol and many other organic solvents. Liquid water is of vital importance in chemistry and biology, and it is necessary for all known forms of life. Inorganic liquids include water, magma, inorganic nonaqueous solvents and many acids. Important everyday liquids include aqueous solutions like household bleach, other mixtures of different substances such as mineral oil and gasoline, emulsions like vinaigrette or mayonnaise, suspensions like blood, and colloids like paint and milk. Many gases can be liquefied by cooling, producing liquids such as liquid oxygen, liquid nitrogen, liquid hydrogen and liquid helium. Not all gases can be liquified at atmospheric pressure, however. Carbon dioxide, for example, can only be liquified at pressures above 5.1 atm. Some materials cannot be classified within the classical three states of matter. For example, liquid crystals (used in liquid-crystal displays) possess both solid-like and liquid-like properties, and belong to their own state of matter distinct from either liquid or solid. Applications Lubrication Liquids are useful as lubricants due to their ability to form a thin, freely flowing layer between solid materials. Lubricants such as oil are chosen for viscosity and flow characteristics that are suitable throughout the operating temperature range of the component. Oils are often used in engines, gear boxes, metalworking, and hydraulic systems for their good lubrication properties. Solvation Many liquids are used as solvents, to dissolve other liquids or solids. Solutions are found in a wide variety of applications, including paints, sealants, and adhesives. Naphtha and acetone are used frequently in industry to clean oil, grease, and tar from parts and machinery. Body fluids are water-based solutions. Surfactants are commonly found in soaps and detergents. Solvents like alcohol are often used as antimicrobials. They are found in cosmetics, inks, and liquid dye lasers. They are used in the food industry, in processes such as the extraction of vegetable oil. Cooling Liquids tend to have better thermal conductivity than gases, and the ability to flow makes a liquid suitable for removing excess heat from mechanical components. The heat can be removed by channeling the liquid through a heat exchanger, such as a radiator, or the heat can be removed with the liquid during evaporation. Water or glycol coolants are used to keep engines from overheating. The coolants used in nuclear reactors include water or liquid metals, such as sodium or bismuth. Liquid propellant films are used to cool the thrust chambers of rockets. In machining, water and oils are used to remove the excess heat generated, which can quickly ruin both the work piece and the tooling. During perspiration, sweat removes heat from the human body by evaporating. In the heating, ventilation, and air-conditioning industry (HVAC), liquids such as water are used to transfer heat from one area to another. Cooking Liquids are often used in cooking due to their excellent heat-transfer capabilities. In addition to thermal conduction, liquids transmit energy by convection. 
In particular, because warmer fluids expand and rise while cooler areas contract and sink, liquids with low kinematic viscosity tend to transfer heat through convection at a fairly constant temperature, making a liquid suitable for blanching, boiling, or frying. Even higher rates of heat transfer can be achieved by condensing a gas into a liquid. At the liquid's boiling point, all of the heat energy is used to cause the phase change from a liquid to a gas, without an accompanying increase in temperature, and is stored as chemical potential energy. When the gas condenses back into a liquid this excess heat-energy is released at a constant temperature. This phenomenon is used in processes such as steaming. Distillation Since liquids often have different boiling points, mixtures or solutions of liquids or gases can typically be separated by distillation, using heat, cold, vacuum, pressure, or other means. Distillation can be found in everything from the production of alcoholic beverages, to oil refineries, to the cryogenic distillation of gases such as argon, oxygen, nitrogen, neon, or xenon by liquefaction (cooling them below their individual boiling points). Hydraulics Liquid is the primary component of hydraulic systems, which take advantage of Pascal's law to provide fluid power. Devices such as pumps and waterwheels have been used to change liquid motion into mechanical work since ancient times. Oils are forced through hydraulic pumps, which transmit this force to hydraulic cylinders. Hydraulics can be found in many applications, such as automotive brakes and transmissions, heavy equipment, and airplane control systems. Various hydraulic presses are used extensively in repair and manufacturing, for lifting, pressing, clamping and forming. Liquid metals Liquid metals have several properties that are useful in sensing and actuation, particularly their electrical conductivity and ability to transmit forces (incompressibility). As freely flowing substances, liquid metals retain these bulk properties even under extreme deformation. For this reason, they have been proposed for use in soft robots and wearable healthcare devices, which must be able to operate under repeated deformation. The metal gallium is considered to be a promising candidate for these applications as it is a liquid near room temperature, has low toxicity, and evaporates slowly. Miscellaneous Liquids are sometimes used in measuring devices. A thermometer often uses the thermal expansion of liquids, such as mercury, combined with their ability to flow to indicate temperature. A manometer uses the weight of the liquid to indicate air pressure. The free surface of a rotating liquid forms a circular paraboloid and can therefore be used as a telescope. These are known as liquid-mirror telescopes. They are significantly cheaper than conventional telescopes, but can only point straight upward (zenith telescope). A common choice for the liquid is mercury. Mechanical properties Volume Quantities of liquids are measured in units of volume. These include the SI unit cubic metre (m3) and its divisions, in particular the cubic decimeter, more commonly called the litre (1 dm3 = 1 L = 0.001 m3), and the cubic centimetre, also called millilitre (1 cm3 = 1 mL = 0.001 L = 10−6 m3). The volume of a quantity of liquid is fixed by its temperature and pressure. Liquids generally expand when heated, and contract when cooled. Water between 0 °C and 4 °C is a notable exception. On the other hand, liquids have little compressibility. 
Water, for example, will compress by only 46.4 parts per million for every unit increase in atmospheric pressure (bar). At around 4000 bar (400 megapascals or 58,000 psi) of pressure at room temperature, water experiences only an 11% decrease in volume. Incompressibility makes liquids suitable for transmitting hydraulic power, because a change in pressure at one point in a liquid is transmitted undiminished to every other part of the liquid and very little energy is lost in the form of compression. However, the negligible compressibility does lead to other phenomena. The banging of pipes, called water hammer, occurs when a valve is suddenly closed, creating a huge pressure-spike at the valve that travels backward through the system at just under the speed of sound. Another phenomenon caused by a liquid's incompressibility is cavitation. Because liquids have little elasticity they can literally be pulled apart in areas of high turbulence or dramatic change in direction, such as the trailing edge of a boat propeller or a sharp corner in a pipe. A liquid in an area of low pressure (vacuum) vaporizes and forms bubbles, which then collapse as they enter high pressure areas. This causes liquid to fill the cavities left by the bubbles with tremendous localized force, eroding any adjacent solid surface. Pressure and buoyancy In a gravitational field, liquids exert pressure on the sides of a container as well as on anything within the liquid itself. This pressure is transmitted in all directions and increases with depth. If a liquid is at rest in a uniform gravitational field, the pressure p at depth z is given by p = p0 + ρgz, where p0 is the pressure at the surface, ρ is the density of the liquid (assumed uniform with depth), and g is the gravitational acceleration. For a body of water open to the air, p0 would be the atmospheric pressure. Static liquids in uniform gravitational fields also exhibit the phenomenon of buoyancy, where objects immersed in the liquid experience a net force due to the pressure variation with depth. The magnitude of the force is equal to the weight of the liquid displaced by the object, and the direction of the force depends on the average density of the immersed object. If the density is smaller than that of the liquid, the buoyant force points upward and the object floats, whereas if the density is larger, the buoyant force points downward and the object sinks. This is known as Archimedes' principle. Surfaces Unless the volume of a liquid exactly matches the volume of its container, one or more surfaces are observed. The presence of a surface introduces new phenomena which are not present in a bulk liquid. This is because a molecule at a surface possesses bonds with other liquid molecules only on the inner side of the surface, which implies a net force pulling surface molecules inward. Equivalently, this force can be described in terms of energy: there is a fixed amount of energy associated with forming a surface of a given area. This quantity is a material property called the surface tension, in units of energy per unit area (SI units: J/m2). Liquids with strong intermolecular forces tend to have large surface tensions. A practical implication of surface tension is that liquids tend to minimize their surface area, forming spherical drops and bubbles unless other constraints are present. Surface tension is responsible for a range of other phenomena as well, including surface waves, capillary action, wetting, and ripples.
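As a worked illustration of the hydrostatic relation p = p0 + ρgz and of Archimedes' principle described under Pressure and buoyancy above, the following Python sketch computes the pressure at a given depth in water and the buoyant force on a small submerged object. The density, gravitational acceleration, and function names are illustrative assumptions rather than values taken from this article.

# Illustrative sketch: hydrostatic pressure and Archimedes' principle.
# Assumed values: fresh water at about 1000 kg/m^3, g = 9.81 m/s^2.
RHO_WATER = 1000.0   # liquid density (kg/m^3), assumed uniform with depth
G = 9.81             # gravitational acceleration (m/s^2)
P_ATM = 101325.0     # surface (atmospheric) pressure (Pa)

def pressure_at_depth(depth_m):
    """p = p0 + rho * g * z for a liquid at rest in a uniform gravitational field."""
    return P_ATM + RHO_WATER * G * depth_m

def buoyant_force(displaced_volume_m3):
    """Archimedes' principle: the upward force equals the weight of the displaced liquid."""
    return RHO_WATER * G * displaced_volume_m3

print(pressure_at_depth(10.0))   # about 199,000 Pa at 10 m depth
print(buoyant_force(0.001))      # about 9.8 N on a 1-litre object

For a fully submerged object, comparing this buoyant force with the object's weight reproduces the float-or-sink rule stated above.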
In liquids under nanoscale confinement, surface effects can play a dominating role since – compared with a macroscopic sample of liquid – a much greater fraction of molecules are located near a surface. The surface tension of a liquid directly affects its wettability. Most common liquids have tensions ranging in the tens of mJ/m2, so droplets of oil, water, or glue can easily merge and adhere to other surfaces, whereas liquid metals such as mercury may have tensions ranging in the hundreds of mJ/m2, thus droplets do not combine easily and surfaces may only wet under specific conditions. The surface tensions of common liquids occupy a relatively narrow range of values when exposed to changing conditions such as temperature, which contrasts strongly with the enormous variation seen in other mechanical properties, such as viscosity. The free surface of a liquid is disturbed by gravity (flatness) and waves (surface roughness). Flow An important physical property characterizing the flow of liquids is viscosity. Intuitively, viscosity describes the resistance of a liquid to flow. More technically, viscosity measures the resistance of a liquid to deformation at a given rate, such as when it is being sheared at finite velocity. A specific example is a liquid flowing through a pipe: in this case the liquid undergoes shear deformation since it flows more slowly near the walls of the pipe than near the center. As a result, it exhibits viscous resistance to flow. In order to maintain flow, an external force must be applied, such as a pressure difference between the ends of the pipe. The viscosity of liquids decreases with increasing temperature. Precise control of viscosity is important in many applications, particularly the lubrication industry. One way to achieve such control is by blending two or more liquids of differing viscosities in precise ratios. In addition, various additives exist which can modulate the temperature-dependence of the viscosity of lubricating oils. This capability is important since machinery often operates over a range of temperatures (see also viscosity index). The viscous behavior of a liquid can be either Newtonian or non-Newtonian. A Newtonian liquid exhibits a linear strain/stress curve, meaning its viscosity is independent of time, shear rate, or shear-rate history. Examples of Newtonian liquids include water, glycerin, motor oil, honey, or mercury. A non-Newtonian liquid is one where the viscosity is not independent of these factors and either thickens (increases in viscosity) or thins (decreases in viscosity) under shear. Examples of non-Newtonian liquids include ketchup, custard, or starch solutions. Sound propagation The speed of sound in a liquid is given by c = √(K/ρ), where K is the bulk modulus of the liquid and ρ the density. As an example, water has a bulk modulus of about 2.2 GPa and a density of 1000 kg/m3, which gives c = 1.5 km/s. Thermodynamics Phase transitions At a temperature below the boiling point, any matter in liquid form will evaporate until reaching equilibrium with the reverse process of condensation of its vapor. At this point the vapor will condense at the same rate as the liquid evaporates. Thus, a liquid cannot exist permanently if the evaporated liquid is continually removed. A liquid at or above its boiling point will normally boil, though superheating can prevent this in certain circumstances. At a temperature below the freezing point, a liquid will tend to crystallize, changing to its solid form.
Unlike the transition to gas, there is no equilibrium at this transition under constant pressure, so unless supercooling occurs, the liquid will eventually completely crystallize. However, this is only true under constant pressure, so that (for example) water and ice in a closed, strong container might reach an equilibrium where both phases coexist. For the opposite transition from solid to liquid, see melting. Liquids in space The phase diagram explains why liquids do not exist in space or any other vacuum. Since the pressure is essentially zero (except on surfaces or interiors of planets and moons) water and other liquids exposed to space will either immediately boil or freeze depending on the temperature. In regions of space near the Earth, water will freeze if the sun is not shining directly on it and vaporize (sublime) as soon as it is in sunlight. If water exists as ice on the Moon, it can only exist in shadowed holes where the sun never shines and where the surrounding rock does not heat it up too much. At some point near the orbit of Saturn, the light from the Sun is too faint to sublime ice to water vapor. This is evident from the longevity of the ice that composes Saturn's rings. Solutions Liquids can form solutions with gases, solids, and other liquids. Two liquids are said to be miscible if they can form a solution in any proportion; otherwise they are immiscible. As an example, water and ethanol (drinking alcohol) are miscible whereas water and gasoline are immiscible. In some cases a mixture of otherwise immiscible liquids can be stabilized to form an emulsion, where one liquid is dispersed throughout the other as microscopic droplets. Usually this requires the presence of a surfactant in order to stabilize the droplets. A familiar example of an emulsion is mayonnaise, which consists of a mixture of water and oil that is stabilized by lecithin, a substance found in egg yolks. Microscopic description The microscopic structure of liquids is complex and historically has been the subject of intense research and debate. A few of the key ideas are explained below. General description Microscopically, liquids consist of a dense, disordered packing of molecules. This contrasts with the other two common phases of matter, gases and solids. Although gases are disordered, the molecules are well-separated in space and interact primarily through molecule-molecule collisions. Conversely, although the molecules in solids are densely packed, they usually fall into a regular structure, such as a crystalline lattice (glasses are a notable exception). Short-range ordering While liquids do not exhibit long-range ordering as in a crystalline lattice, they do possess short-range order, which persists over a few molecular diameters. In all liquids, excluded volume interactions induce short-range order in molecular positions (center-of-mass coordinates). Classical monatomic liquids like argon and krypton are the simplest examples. Such liquids can be modeled as disordered "heaps" of closely packed spheres, and the short-range order corresponds to the fact that nearest and next-nearest neighbors in a packing of spheres tend to be separated by integer multiples of the diameter. In most liquids, molecules are not spheres, and intermolecular forces possess a directionality, i.e., they depend on the relative orientation of molecules. As a result, there is short-ranged orientational order in addition to the positional order mentioned above. 
Orientational order is especially important in hydrogen-bonded liquids like water. The strength and directional nature of hydrogen bonds drives the formation of local "networks" or "clusters" of molecules. Due to the relative importance of thermal fluctuations in liquids (compared with solids), these structures are highly dynamic, continuously deforming, breaking, and reforming. Energy and entropy The microscopic features of liquids derive from an interplay between attractive intermolecular forces and entropic forces. The attractive forces tend to pull molecules close together, and along with short-range repulsive interactions, they are the dominant forces behind the regular structure of solids. The entropic forces are not "forces" in the mechanical sense; rather, they describe the tendency of a system to maximize its entropy at fixed energy (see microcanonical ensemble). Roughly speaking, entropic forces drive molecules apart from each other, maximizing the volume they occupy. Entropic forces are dominant in gases and explain the tendency of gases to fill their containers. In liquids, by contrast, the intermolecular and entropic forces are comparable, so it is not possible to neglect one in favor of the other. Quantitatively, the binding energy between adjacent molecules is the same order of magnitude as the thermal energy kBT. No small parameter The competition between energy and entropy makes liquids difficult to model at the molecular level, as there is no idealized "reference state" that can serve as a starting point for tractable theoretical descriptions. Mathematically, there is no small parameter from which one can develop a systematic perturbation theory. This situation contrasts with both gases and solids. For gases, the reference state is the ideal gas, and the density can be used as a small parameter to construct a theory of real (nonideal) gases (see virial expansion). For crystalline solids, the reference state is a perfect crystalline lattice, and possible small parameters are thermal motions and lattice defects. Role of quantum mechanics Like all known forms of matter, liquids are fundamentally quantum mechanical. However, under standard conditions (near room temperature and pressure), much of the macroscopic behavior of liquids can be understood in terms of classical mechanics. The "classical picture" posits that the constituent molecules are discrete entities that interact through intermolecular forces according to Newton's laws of motion. As a result, their macroscopic properties can be described using classical statistical mechanics. While the intermolecular force law technically derives from quantum mechanics, it is usually understood as a model input to classical theory, obtained either from a fit to experimental data or from the classical limit of a quantum mechanical description. An illustrative, though highly simplified, example is a collection of spherical molecules interacting through a Lennard-Jones potential. For the classical limit to apply, a necessary condition is that the thermal de Broglie wavelength, λdB = h/√(2πmkBT), is small compared with the length scale under consideration. Here, h is the Planck constant and m is the molecule's mass. Typical values of λdB are about 0.01–0.1 nanometers (Table 1). Hence, a high-resolution model of liquid structure at the nanoscale may require quantum mechanical considerations.
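To make the classicality criterion concrete, the short Python sketch below evaluates the thermal de Broglie wavelength λdB = h/√(2πmkBT) near room temperature for two molecules; the rounded molecular masses are illustrative assumptions, and the results fall in the 0.01–0.1 nm range quoted above.

import math

H = 6.626e-34    # Planck constant (J s)
KB = 1.381e-23   # Boltzmann constant (J/K)
AMU = 1.661e-27  # atomic mass unit (kg)

def thermal_de_broglie_wavelength(mass_kg, temperature_k):
    """lambda_dB = h / sqrt(2 * pi * m * kB * T)."""
    return H / math.sqrt(2.0 * math.pi * mass_kg * KB * temperature_k)

# Rounded molecular masses (illustrative): water about 18 u, argon about 40 u.
for name, mass_u in [("water", 18.0), ("argon", 40.0)]:
    lam_nm = thermal_de_broglie_wavelength(mass_u * AMU, 300.0) * 1e9
    print(name, round(lam_nm, 3), "nm")   # roughly 0.024 nm and 0.016 nm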
A notable example is hydrogen bonding in associated liquids like water, where, due to the small mass of the proton, inherently quantum effects such as zero-point motion and tunneling are important. For a liquid to behave classically at the macroscopic level, λdB must be small compared with the average distance a between molecules. That is, λdB/a ≪ 1. Representative values of this ratio for a few liquids are given in Table 1. The conclusion is that quantum effects are important for liquids at low temperatures and with small molecular mass. For dynamic processes, there is an additional timescale constraint: τ ≫ ħ/(kBT), where τ is the timescale of the process under consideration. For room-temperature liquids, the right-hand side is about 10−14 seconds, which generally means that time-dependent processes involving translational motion can be described classically. At extremely low temperatures, even the macroscopic behavior of certain liquids deviates from classical mechanics. Notable examples are hydrogen and helium. Due to their low temperature and mass, such liquids have a thermal de Broglie wavelength comparable to the average distance between molecules. Dynamic phenomena The expression for the sound velocity of a liquid, c = √(K/ρ), contains the bulk modulus K. If K is frequency-independent, then the liquid behaves as a linear medium, so that sound propagates without dissipation or mode coupling. In reality, all liquids show some dispersion: with increasing frequency, K crosses over from the low-frequency, liquid-like limit K0 to the high-frequency, solid-like limit K∞. In normal liquids, most of this crossover takes place at frequencies between GHz and THz, sometimes called hypersound. At sub-GHz frequencies, a normal liquid cannot sustain shear waves: the zero-frequency limit of the shear modulus is 0. This is sometimes seen as the defining property of a liquid. However, like the bulk modulus K, the shear modulus G is also frequency-dependent and exhibits a similar crossover at hypersound frequencies. According to linear response theory, the Fourier transform of K or G describes how the system returns to equilibrium after an external perturbation; for this reason, the dispersion step in the GHz to THz region is also called relaxation. As a liquid is supercooled toward the glass transition, the structural relaxation time exponentially increases, which explains the viscoelastic behavior of glass-forming liquids. Experimental methods The absence of long-range order in liquids is mirrored by the absence of Bragg peaks in X-ray and neutron diffraction. Under normal conditions, the diffraction pattern has circular symmetry, expressing the isotropy of the liquid. Radially, the diffraction intensity smoothly oscillates. This can be described by the static structure factor S(q), with wavenumber q = (4π/λ) sin θ given by the wavelength λ of the probe (photon or neutron) and the Bragg angle θ. The oscillations of S(q) express the short-range order of the liquid, i.e., the correlations between a molecule and "shells" of nearest neighbors, next-nearest neighbors, and so on. An equivalent representation of these correlations is the radial distribution function g(r), which is related to the Fourier transform of S(q). It represents a spatial average of a temporal snapshot of pair correlations in the liquid. Prediction of liquid properties Methods for predicting liquid properties can be organized by their "scale" of description, that is, the length scales and time scales over which they apply.
Macroscopic methods use equations that directly model the large-scale behavior of liquids, such as their thermodynamic properties and flow behavior. Microscopic methods use equations that model the dynamics of individual molecules. Mesoscopic methods fall in between, combining elements of both continuum and particle-based models. Macroscopic Empirical correlations Empirical correlations are simple mathematical expressions intended to approximate a liquid's properties over a range of experimental conditions, such as varying temperature and pressure. They are constructed by fitting simple functional forms to experimental data. For example, the temperature-dependence of liquid viscosity is sometimes approximated by the function μ(T) = A exp(B/T), where A and B are fitting constants. Empirical correlations allow for extremely efficient estimates of physical properties, which can be useful in thermophysical simulations. However, they require high-quality experimental data to obtain a good fit and cannot reliably extrapolate beyond the conditions covered by experiments. Thermodynamic potentials Thermodynamic potentials are functions that characterize the equilibrium state of a substance. An example is the Gibbs free energy G(p, T), which is a function of pressure and temperature. Knowing any one thermodynamic potential is sufficient to compute all equilibrium properties of a substance, often simply by taking derivatives of G. Thus, a single correlation for G can replace separate correlations for individual properties. Conversely, a variety of experimental measurements (e.g., density, heat capacity, vapor pressure) can be incorporated into the same fit; in principle, this would allow one to predict hard-to-measure properties like heat capacity in terms of other, more readily available measurements (e.g., vapor pressure). Hydrodynamics Hydrodynamic theories describe liquids in terms of space- and time-dependent macroscopic fields, such as density, velocity, and temperature. These fields obey partial differential equations, which can be linear or nonlinear. Hydrodynamic theories are more general than equilibrium thermodynamic descriptions, which assume that liquids are approximately homogeneous and time-independent. The Navier-Stokes equations are a well-known example: they are partial differential equations giving the time evolution of density, velocity, and temperature of a viscous fluid. There are numerous methods for numerically solving the Navier-Stokes equations and their variants. Mesoscopic Mesoscopic methods operate on length and time scales between the particle and continuum levels. For this reason, they combine elements of particle-based dynamics and continuum hydrodynamics. An example is the lattice Boltzmann method, which models a fluid as a collection of fictitious particles that exist on a lattice. The particles evolve in time through streaming (straight-line motion) and collisions. Conceptually, it is based on the Boltzmann equation for dilute gases, where the dynamics of a molecule consists of free motion interrupted by discrete binary collisions, but it is also applied to liquids. Despite the analogy with individual molecular trajectories, it is a coarse-grained description that typically operates on length and time scales larger than those of true molecular dynamics (hence the notion of "fictitious" particles). Other methods that combine elements of continuum and particle-level dynamics include smoothed-particle hydrodynamics, dissipative particle dynamics, and multiparticle collision dynamics.
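Returning to the empirical correlations described at the start of this section, the sketch below fits the form μ(T) = A exp(B/T) to a handful of viscosity measurements by linearizing it to ln μ = ln A + B/T and applying a least-squares fit. The data points are invented for illustration only, not measurements from this article.

import numpy as np

# Hypothetical viscosity data: temperature in K, viscosity in mPa*s (invented values).
T = np.array([280.0, 300.0, 320.0, 340.0])
mu = np.array([1.45, 0.85, 0.55, 0.40])

# Linearize mu = A * exp(B / T)  ->  ln(mu) = ln(A) + B * (1 / T), then least squares.
B, lnA = np.polyfit(1.0 / T, np.log(mu), 1)   # slope = B, intercept = ln(A)
A = np.exp(lnA)

def viscosity(temperature_k):
    """Empirical correlation mu(T) = A * exp(B / T) with fitted constants."""
    return A * np.exp(B / temperature_k)

print(A, B)               # fitted constants
print(viscosity(310.0))   # interpolated viscosity at 310 K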
Microscopic Microscopic simulation methods work directly with the equations of motion (classical or quantum) of the constituent molecules. Classical molecular dynamics Classical molecular dynamics (MD) simulates liquids using Newton's laws of motion; from Newton's second law (F = ma), the trajectories of molecules can be traced out explicitly and used to compute macroscopic liquid properties like density or viscosity. However, classical MD requires expressions for the intermolecular forces ("F" in Newton's second law). Usually, these must be approximated using experimental data or some other input. Ab initio (quantum) molecular dynamics Ab initio quantum mechanical methods simulate liquids using only the laws of quantum mechanics and fundamental atomic constants. In contrast with classical molecular dynamics, the intermolecular force fields are an output of the calculation, rather than an input based on experimental measurements or other considerations. In principle, ab initio methods can simulate the properties of a given liquid without any prior experimental data. However, they are very expensive computationally, especially for large molecules with internal structure.
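As a schematic of what one step of classical MD involves, the following Python sketch integrates Newton's second law for a few atoms interacting through a Lennard-Jones potential, using the velocity Verlet scheme in reduced units (epsilon = sigma = m = 1). The particle count, time step, and starting positions are arbitrary illustrative choices rather than parameters of any particular study.

import numpy as np

def lj_forces(pos):
    """Pairwise Lennard-Jones forces in reduced units (epsilon = sigma = 1)."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            inv_r6 = 1.0 / r2 ** 3
            f_over_r = 24.0 * (2.0 * inv_r6 ** 2 - inv_r6) / r2   # F(r)/r for the LJ potential
            forces[i] += f_over_r * rij
            forces[j] -= f_over_r * rij
    return forces

def velocity_verlet(pos, vel, dt, steps):
    """Trace out trajectories from Newton's second law (a = F/m, with m = 1)."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f
    return pos, vel

# Four atoms on a small square, initially at rest.
positions = np.array([[0.0, 0.0], [1.1, 0.0], [0.0, 1.1], [1.1, 1.1]])
velocities = np.zeros_like(positions)
positions, velocities = velocity_verlet(positions, velocities, dt=0.001, steps=1000)
print(positions)

In a production MD code the same loop would be applied to thousands of molecules with periodic boundary conditions and a thermostat, and properties such as density or viscosity would be computed as averages over the resulting trajectories.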
Physical sciences
States of matter
18993869
https://en.wikipedia.org/wiki/Gas
Gas
Gas is one of the four fundamental states of matter. The others are solid, liquid, and plasma. A pure gas may be made up of individual atoms (e.g. a noble gas like neon), elemental molecules made from one type of atom (e.g. oxygen), or compound molecules made from a variety of atoms (e.g. carbon dioxide). A gas mixture, such as air, contains a variety of pure gases. What distinguishes gases from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer. The gaseous state of matter occurs between the liquid and plasma states, the latter of which provides the upper-temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerate quantum gases which are gaining increasing attention. High-density atomic gases super-cooled to very low temperatures are classified by their statistical behavior as either Bose gases or Fermi gases. For a comprehensive listing of these exotic states of matter, see list of states of matter. Elemental gases The only chemical elements that are stable diatomic homonuclear molecular gases at STP are hydrogen (H2), nitrogen (N2), oxygen (O2), and two halogens: fluorine (F2) and chlorine (Cl2). When grouped with the monatomic noble gases – helium (He), neon (Ne), argon (Ar), krypton (Kr), xenon (Xe), and radon (Rn) – these gases are referred to as "elemental gases". Etymology The word gas was first used by the early 17th-century Flemish chemist Jan Baptist van Helmont. He identified carbon dioxide, the first known gas other than air. Van Helmont's word appears to have been simply a phonetic transcription of the Ancient Greek word χάος (chaos) – the g in Dutch being pronounced like ch in "loch" (a voiceless velar fricative, /x/) – in which case Van Helmont simply was following the established alchemical usage first attested in the works of Paracelsus. According to Paracelsus's terminology, chaos meant something like "ultra-rarefied water". An alternative story is that Van Helmont's term was derived from "gahst (or geist), which signifies a ghost or spirit". That story is given no credence by the editors of the Oxford English Dictionary. In contrast, the French-American historian Jacques Barzun speculated that Van Helmont had borrowed the word from a German term meaning the froth resulting from fermentation. Physical characteristics Because most gases are difficult to observe directly, they are described through the use of four physical properties or macroscopic characteristics: pressure, volume, number of particles (chemists group them by moles) and temperature. These four characteristics were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac and Amedeo Avogadro for a variety of gases in various settings. Their detailed studies ultimately led to a mathematical relationship among these properties expressed by the ideal gas law (see section below). Gas particles are widely separated from one another, and consequently, have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions of different gas particles attract one another; gases that contain permanently charged ions are known as plasmas.
Gaseous compounds with polar covalent bonds contain permanent charge imbalances and so experience relatively strong intermolecular forces, although the compound's net charge remains neutral. Transient, randomly induced charges exist across non-polar covalent bonds of molecules and electrostatic interactions caused by them are referred to as Van der Waals forces. The interaction of these intermolecular forces varies within a substance which determines many of the physical properties unique to each gas. A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion. Compared to the other states of matter, gases have low density and viscosity. Pressure and temperature influence the particles within a certain volume. This variation in particle separation and speed is referred to as compressibility. This particle separation and size influences optical properties of gases as can be found in the following list of refractive indices. Finally, gas particles spread apart or diffuse in order to homogeneously distribute themselves throughout any container. Macroscopic view of gases When observing gas, it is typical to specify a frame of reference or length scale. A larger length scale corresponds to a macroscopic or global point of view of the gas. This region (referred to as a volume) must be sufficient in size to contain a large sampling of gas particles. The resulting statistical analysis of this sample size produces the "average" behavior (i.e. velocity, temperature or pressure) of all the gas particles within the region. In contrast, a smaller length scale corresponds to a microscopic or particle point of view. Macroscopically, the gas characteristics measured are either in terms of the gas particles themselves (velocity, pressure, or temperature) or their surroundings (volume). For example, Robert Boyle studied pneumatic chemistry for a small portion of his career. One of his experiments related the macroscopic properties of pressure and volume of a gas. His experiment used a J-tube manometer which looks like a test tube in the shape of the letter J. Boyle trapped an inert gas in the closed end of the test tube with a column of mercury, thereby making the number of particles and the temperature constant. He observed that when the pressure was increased in the gas, by adding more mercury to the column, the trapped gas' volume decreased (this is known as an inverse relationship). Furthermore, when Boyle multiplied the pressure and volume of each observation, the product was constant. This relationship held for every gas that Boyle observed leading to the law, (PV=k), named to honor his work in this field. There are many mathematical tools available for analyzing gas properties. Boyle's lab equipment allowed the use of just a simple calculation to obtain his analytical results. His results were possible because he was studying gases in relatively low pressure situations where they behaved in an "ideal" manner. These ideal relationships apply to safety calculations for a variety of flight conditions on the materials in use. However, the high technology equipment in use today was designed to help us safely explore the more exotic operating environments where the gases no longer behave in an "ideal" manner. As gases are subjected to extreme conditions, tools to interpret them become more complex, from the Euler equations for inviscid flow to the Navier–Stokes equations that fully account for viscous effects. 
This advanced math, including statistics and multivariable calculus, adapted to the conditions of the gas system in question, makes it possible to solve such complex dynamic situations as space vehicle reentry. An example is the analysis of the space shuttle reentry pictured to ensure the material properties under this loading condition are appropriate. In this flight situation, the gas is no longer behaving ideally. Pressure The symbol used to represent pressure in equations is "p" or "P" with SI units of pascals. When describing a container of gas, the term pressure (or absolute pressure) refers to the average force per unit area that the gas exerts on the surface of the container. Within this volume, it is sometimes easier to visualize the gas particles moving in straight lines until they collide with the container (see diagram at top). The force imparted by a gas particle into the container during this collision is the change in momentum of the particle. During a collision only the normal component of velocity changes. A particle traveling parallel to the wall does not change its momentum. Therefore, the average force on a surface must be the average change in linear momentum from all of these gas particle collisions. Pressure is the sum of all the normal components of force exerted by the particles impacting the walls of the container divided by the surface area of the wall. Temperature The symbol used to represent temperature in equations is T with SI units of kelvins. The speed of a gas particle is proportional to its absolute temperature. The volume of the balloon in the video shrinks when the trapped gas particles slow down with the addition of extremely cold nitrogen. The temperature of any physical system is related to the motions of the particles (molecules and atoms) which make up the [gas] system. In statistical mechanics, temperature is the measure of the average kinetic energy stored in a molecule (also known as the thermal energy). The methods of storing this energy are dictated by the degrees of freedom of the molecule itself (energy modes). Thermal (kinetic) energy added to a gas or liquid (an endothermic process) produces translational, rotational, and vibrational motion. In contrast, a solid can only increase its internal energy by exciting additional vibrational modes, as the crystal lattice structure prevents both translational and rotational motion. These heated gas molecules have a greater speed range (wider distribution of speeds) with a higher average or mean speed. The variance of this distribution is due to the speeds of individual particles constantly varying, due to repeated collisions with other particles. The speed range can be described by the Maxwell–Boltzmann distribution. Use of this distribution implies ideal gases near thermodynamic equilibrium for the system of particles being considered. Specific volume The symbol used to represent specific volume in equations is "v" with SI units of cubic meters per kilogram. The symbol used to represent volume in equations is "V" with SI units of cubic meters. When performing a thermodynamic analysis, it is typical to speak of intensive and extensive properties. Properties which depend on the amount of gas (either by mass or volume) are called extensive properties, while properties that do not depend on the amount of gas are called intensive properties. 
Specific volume is an example of an intensive property because it is the ratio of volume occupied by a unit of mass of a gas that is identical throughout a system at equilibrium. 1000 atoms a gas occupy the same space as any other 1000 atoms for any given temperature and pressure. This concept is easier to visualize for solids such as iron which are incompressible compared to gases. However, volume itself --- not specific --- is an extensive property. Density The symbol used to represent density in equations is ρ (rho) with SI units of kilograms per cubic meter. This term is the reciprocal of specific volume. Since gas molecules can move freely within a container, their mass is normally characterized by density. Density is the amount of mass per unit volume of a substance, or the inverse of specific volume. For gases, the density can vary over a wide range because the particles are free to move closer together when constrained by pressure or volume. This variation of density is referred to as compressibility. Like pressure and temperature, density is a state variable of a gas and the change in density during any process is governed by the laws of thermodynamics. For a static gas, the density is the same throughout the entire container. Density is therefore a scalar quantity. It can be shown by kinetic theory that the density is inversely proportional to the size of the container in which a fixed mass of gas is confined. In this case of a fixed mass, the density decreases as the volume increases. Microscopic view of gases If one could observe a gas under a powerful microscope, one would see a collection of particles without any definite shape or volume that are in more or less random motion. These gas particles only change direction when they collide with another particle or with the sides of the container. This microscopic view of gas is well-described by statistical mechanics, but it can be described by many different theories. The kinetic theory of gases, which makes the assumption that these collisions are perfectly elastic, does not account for intermolecular forces of attraction and repulsion. Kinetic theory of gases Kinetic theory provides insight into the macroscopic properties of gases by considering their molecular composition and motion. Starting with the definitions of momentum and kinetic energy, one can use the conservation of momentum and geometric relationships of a cube to relate macroscopic system properties of temperature and pressure to the microscopic property of kinetic energy per molecule. The theory provides averaged values for these two properties. The kinetic theory of gases can help explain how the system (the collection of gas particles being considered) responds to changes in temperature, with a corresponding change in kinetic energy. For example: Imagine you have a sealed container of a fixed-size (a constant volume), containing a fixed-number of gas particles; starting from absolute zero (the theoretical temperature at which atoms or molecules have no thermal energy, i.e. are not moving or vibrating), you begin to add energy to the system by heating the container, so that energy transfers to the particles inside. Once their internal energy is above zero-point energy, meaning their kinetic energy (also known as thermal energy) is non-zero, the gas particles will begin to move around the container. As the box is further heated (as more energy is added), the individual particles increase their average speed as the system's total internal energy increases. 
The higher average-speed of all the particles leads to a greater rate at which collisions happen (i.e. greater number of collisions per unit of time), between particles and the container, as well as between the particles themselves. The macroscopic, measurable quantity of pressure, is the direct result of these microscopic particle collisions with the surface, over which, individual molecules exert a small force, each contributing to the total force applied within a specific area. (Read .) Likewise, the macroscopically measurable quantity of temperature, is a quantification of the overall amount of motion, or kinetic energy that the particles exhibit. (Read .) Thermal motion and statistical mechanics In the kinetic theory of gases, kinetic energy is assumed to purely consist of linear translations according to a speed distribution of particles in the system. However, in real gases and other real substances, the motions which define the kinetic energy of a system (which collectively determine the temperature), are much more complex than simple linear translation due to the more complex structure of molecules, compared to single atoms which act similarly to point-masses. In real thermodynamic systems, quantum phenomena play a large role in determining thermal motions. The random, thermal motions (kinetic energy) in molecules is a combination of a finite set of possible motions including translation, rotation, and vibration. This finite range of possible motions, along with the finite set of molecules in the system, leads to a finite number of microstates within the system; we call the set of all microstates an ensemble. Specific to atomic or molecular systems, we could potentially have three different kinds of ensemble, depending on the situation: microcanonical ensemble, canonical ensemble, or grand canonical ensemble. Specific combinations of microstates within an ensemble are how we truly define macrostate of the system (temperature, pressure, energy, etc.). In order to do that, we must first count all microstates though use of a partition function. The use of statistical mechanics and the partition function is an important tool throughout all of physical chemistry, because it is the key to connection between the microscopic states of a system and the macroscopic variables which we can measure, such as temperature, pressure, heat capacity, internal energy, enthalpy, and entropy, just to name a few. (Read: Partition function Meaning and significance) Using the partition function to find the energy of a molecule, or system of molecules, can sometimes be approximated by the Equipartition theorem, which greatly-simplifies calculation. However, this method assumes all molecular degrees of freedom are equally populated, and therefore equally utilized for storing energy within the molecule. It would imply that internal energy changes linearly with temperature, which is not the case. This ignores the fact that heat capacity changes with temperature, due to certain degrees of freedom being unreachable (a.k.a. "frozen out") at lower temperatures. As internal energy of molecules increases, so does the ability to store energy within additional degrees of freedom. As more degrees of freedom become available to hold energy, this causes the molar heat capacity of the substance to increase. Brownian motion Brownian motion is the mathematical model used to describe the random movement of particles suspended in a fluid. 
The gas particle animation, using pink and green particles, illustrates how this behavior results in the spreading out of gases (entropy). These events are also described by particle theory. Since it is at the limit of (or beyond) current technology to observe individual gas particles (atoms or molecules), only theoretical calculations give suggestions about how they move, but their motion is different from Brownian motion because Brownian motion involves a smooth drag due to the frictional force of many gas molecules, punctuated by violent collisions of an individual (or several) gas molecule(s) with the particle. The particle (generally consisting of millions or billions of atoms) thus moves in a jagged course, yet not so jagged as would be expected if an individual gas molecule were examined. Intermolecular forces - the primary difference between Real and Ideal gases Forces between two or more molecules or atoms, either attractive or repulsive, are called intermolecular forces. Intermolecular forces are experienced by molecules when they are within physical proximity of one another. These forces are very important for properly modeling molecular systems, as to accurately predict the microscopic behavior of molecules in any system, and therefore, are necessary for accurately predicting the physical properties of gases (and liquids) across wide variations in physical conditions. Arising from the study of physical chemistry, one of the most prominent intermolecular forces throughout physics, are van der Waals forces. Van der Waals forces play a key role in determining nearly all physical properties of fluids such as viscosity, flow rate, and gas dynamics (see physical characteristics section). The van der Waals interactions between gas molecules, is the reason why modeling a "real gas" is more mathematically difficult than an "ideal gas". Ignoring these proximity-dependent forces allows a real gas to be treated like an ideal gas, which greatly simplifies calculation. The intermolecular attractions and repulsions between two gas molecules depend on the distance between them. The combined attractions and repulsions are well-modelled by the Lennard-Jones potential, which is one of the most extensively studied of all interatomic potentials describing the potential energy of molecular systems. Due to the general applicability and importance, the Lennard-Jones model system is often referred to as 'Lennard-Jonesium'. The Lennard-Jones potential between molecules can be broken down into two separate components: a long-distance attraction due to the London dispersion force, and a short-range repulsion due to electron-electron exchange interaction (which is related to the Pauli exclusion principle). When two molecules are relatively distant (meaning they have a high potential energy), they experience a weak attracting force, causing them to move toward each other, lowering their potential energy. However, if the molecules are too far away, then they would not experience attractive force of any significance. Additionally, if the molecules get too close then they will collide, and experience a very high repulsive force (modelled by Hard spheres) which is a much stronger force than the attractions, so that any attraction due to proximity is disregarded. As two molecules approach each other, from a distance that is neither too-far, nor too-close, their attraction increases as the magnitude of their potential energy increases (becoming more negative), and lowers their total internal energy. 
The attraction causing the molecules to get closer can only happen if the molecules remain in proximity for the duration of time it takes to physically move closer. Therefore, the attractive forces are strongest when the molecules move at low speeds. This means that the attraction between molecules is significant when the gas temperature is low. However, if you were to isothermally compress this cold gas into a small volume, forcing the molecules into close proximity, and raising the pressure, the repulsions will begin to dominate over the attractions, as the rate at which collisions are happening will increase significantly. Therefore, at low temperatures and low pressures, attraction is the dominant intermolecular interaction. If two molecules are moving at high speeds, in arbitrary directions, along non-intersecting paths, then they will not spend enough time in proximity to be affected by the attractive London-dispersion force. If the two molecules collide, they are moving too fast and their kinetic energy will be much greater than any attractive potential energy, so they will only experience repulsion upon colliding. Thus, attractions between molecules can be neglected at high temperatures due to high speeds. At high temperatures and high pressures, repulsion is the dominant intermolecular interaction. Accounting for the above-stated effects which cause these attractions and repulsions, real gases deviate from the ideal gas model by the following generalization: At low temperatures and low pressures, the volume occupied by a real gas is less than the volume predicted by the ideal gas law. At high temperatures and high pressures, the volume occupied by a real gas is greater than the volume predicted by the ideal gas law. Mathematical models An equation of state (for gases) is a mathematical model used to roughly describe or predict the state properties of a gas. At present, there is no single equation of state that accurately predicts the properties of all gases under all conditions. Therefore, a number of much more accurate equations of state have been developed for gases in specific temperature and pressure ranges. The "gas models" that are most widely discussed are "perfect gas", "ideal gas" and "real gas". Each of these models has its own set of assumptions to facilitate the analysis of a given thermodynamic system. Each successive model expands the temperature range of coverage to which it applies. Ideal and perfect gas The equation of state for an ideal or perfect gas is the ideal gas law and reads PV = nRT, where P is the pressure, V is the volume, n is the amount of gas (in mol units), R is the universal gas constant, 8.314 J/(mol K), and T is the temperature. Written this way, it is sometimes called the "chemist's version", since it emphasizes the number of molecules n. It can also be written as P = ρRspecificT, where Rspecific is the specific gas constant for a particular gas, in units J/(kg K), and ρ = m/V is the density. This notation is the "gas dynamicist's" version, which is more practical in modeling of gas flows involving acceleration without chemical reactions. The ideal gas law does not make an assumption about the heat capacity of a gas. In the most general case, the specific heat is a function of both temperature and pressure. If the pressure-dependence is neglected (and possibly the temperature-dependence as well) in a particular application, sometimes the gas is said to be a perfect gas, although the exact assumptions may vary depending on the author and/or field of science.
For an ideal gas, the ideal gas law applies without restrictions on the specific heat. An ideal gas is a simplified "real gas" with the assumption that the compressibility factor Z is set to 1 meaning that this pneumatic ratio remains constant. A compressibility factor of one also requires the four state variables to follow the ideal gas law. This approximation is more suitable for applications in engineering although simpler models can be used to produce a "ball-park" range as to where the real solution should lie. An example where the "ideal gas approximation" would be suitable would be inside a combustion chamber of a jet engine. It may also be useful to keep the elementary reactions and chemical dissociations for calculating emissions. Real gas Each one of the assumptions listed below adds to the complexity of the problem's solution. As the density of a gas increases with rising pressure, the intermolecular forces play a more substantial role in gas behavior which results in the ideal gas law no longer providing "reasonable" results. At the upper end of the engine temperature ranges (e.g. combustor sections – 1300 K), the complex fuel particles absorb internal energy by means of rotations and vibrations that cause their specific heats to vary from those of diatomic molecules and noble gases. At more than double that temperature, electronic excitation and dissociation of the gas particles begins to occur causing the pressure to adjust to a greater number of particles (transition from gas to plasma). Finally, all of the thermodynamic processes were presumed to describe uniform gases whose velocities varied according to a fixed distribution. Using a non-equilibrium situation implies the flow field must be characterized in some manner to enable a solution. One of the first attempts to expand the boundaries of the ideal gas law was to include coverage for different thermodynamic processes by adjusting the equation to read pVn = constant and then varying the n through different values such as the specific heat ratio, γ. Real gas effects include those adjustments made to account for a greater range of gas behavior: Compressibility effects (Z allowed to vary from 1.0) Variable heat capacity (specific heats vary with temperature) Van der Waals forces (related to compressibility, can substitute other equations of state) Non-equilibrium thermodynamic effects Issues with molecular dissociation and elementary reactions with variable composition. For most applications, such a detailed analysis is excessive. Examples where real gas effects would have a significant impact would be on the Space Shuttle re-entry where extremely high temperatures and pressures were present or the gases produced during geological events as in the image of the 1990 eruption of Mount Redoubt. Permanent gas Permanent gas is a term used for a gas which has a critical temperature below the range of normal human-habitable temperatures and therefore cannot be liquefied by pressure within this range. Historically such gases were thought to be impossible to liquefy and would therefore permanently remain in the gaseous state. The term is relevant to ambient temperature storage and transport of gases at high pressure. Historical research Boyle's law Boyle's law was perhaps the first expression of an equation of state. In 1662 Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. 
Then the volume of gas was carefully measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. The image of Boyle's equipment shows some of the exotic tools used by Boyle during his study of gases. Through these experiments, Boyle noted that the pressure exerted by a gas held at a constant temperature varies inversely with the volume of the gas. For example, if the volume is halved, the pressure is doubled; and if the volume is doubled, the pressure is halved. Given the inverse relationship between pressure and volume, the product of pressure (P) and volume (V) is a constant (k) for a given mass of confined gas as long as the temperature is constant. Stated as a formula, this is: PV = k. Because the before and after volumes and pressures of the fixed amount of gas, where the before and after temperatures are the same, both equal the constant k, they can be related by the equation P1V1 = P2V2. Charles's law In 1787, the French physicist and balloon pioneer, Jacques Charles, found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to the same extent over the same 80 kelvin interval. He noted that, for an ideal gas at constant pressure, the volume is directly proportional to its temperature: V1/T1 = V2/T2. Gay-Lussac's law In 1802, Joseph Louis Gay-Lussac published results of similar, though more extensive experiments. Gay-Lussac credited Charles' earlier work by naming the law in his honor. Gay-Lussac himself is credited with the law describing pressure, which he found in 1809. It states that the pressure exerted on a container's sides by an ideal gas is proportional to its temperature. Avogadro's law In 1811, Amedeo Avogadro verified that equal volumes of pure gases contain the same number of particles. His theory was not generally accepted until 1858, when another Italian chemist, Stanislao Cannizzaro, was able to explain non-ideal exceptions. For his work with gases a century prior, the physical constant that bears his name (the Avogadro constant) is the number of atoms per mole of elemental carbon-12 (approximately 6.022 × 10^23). This specific number of gas particles, at standard temperature and pressure (ideal gas law), occupies 22.40 liters, which is referred to as the molar volume. Avogadro's law states that the volume occupied by an ideal gas is proportional to the amount of substance in the volume. This gives rise to the molar volume of a gas, which at STP is 22.4 dm3/mol (liters per mole). The relation is given by V/n = k, where n is the amount of substance of gas (the number of molecules divided by the Avogadro constant). Dalton's law In 1801, John Dalton published the law of partial pressures from his work with the ideal gas law relationship: the pressure of a mixture of non-reactive gases is equal to the sum of the pressures of all of the constituent gases alone. Mathematically, this can be represented for n species as: Ptotal = P1 + P2 + ... + Pn. The image of Dalton's journal depicts symbology he used as shorthand to record the path he followed. Among his key journal observations upon mixing unreactive "elastic fluids" (gases) were the following: Unlike liquids, heavier gases did not drift to the bottom upon mixing. Gas particle identity played no role in determining final pressure (they behaved as if their size was negligible).
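The historical laws above are unified by the ideal gas law PV = nRT; the short Python sketch below uses it to recover the molar volume quoted in the Avogadro's law paragraph and to illustrate Boyle's and Dalton's relations numerically. The chosen pressures, volumes, and partial pressures are illustrative values, not data from this article.

R = 8.314          # universal gas constant, J/(mol K)

# Molar volume at the older STP convention (0 degrees C, 1 atm), per Avogadro's law.
T_stp = 273.15     # K
P_stp = 101325.0   # Pa
print(R * T_stp / P_stp * 1000)   # about 22.4 L/mol

# Boyle's law: at fixed temperature and amount of gas, P1 * V1 = P2 * V2.
P1, V1 = 100000.0, 0.010          # Pa, m^3
V2 = 0.005                        # halving the volume...
print(P1 * V1 / V2)               # ...doubles the pressure (200,000 Pa)

# Dalton's law: the total pressure is the sum of the partial pressures.
partial_pressures = [21000.0, 78000.0, 1000.0]   # illustrative values, Pa
print(sum(partial_pressures))                    # 100,000 Pa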
Special topics Compressibility Thermodynamicists use this factor (Z) to alter the ideal gas equation to account for compressibility effects of real gases. This factor represents the ratio of actual to ideal specific volumes. It is sometimes referred to as a "fudge-factor" or correction to expand the useful range of the ideal gas law for design purposes. Usually this Z value is very close to unity. The compressibility factor image illustrates how Z varies over a range of very cold temperatures. Boundary layer Particles will, in effect, "stick" to the surface of an object moving through it. This layer of particles is called the boundary layer. At the surface of the object, it is essentially static due to the friction of the surface. The object, with its boundary layer is effectively the new shape of the object that the rest of the molecules "see" as the object approaches. This boundary layer can separate from the surface, essentially creating a new surface and completely changing the flow path. The classical example of this is a stalling airfoil. The delta wing image clearly shows the boundary layer thickening as the gas flows from right to left along the leading edge. Turbulence In fluid dynamics, turbulence or turbulent flow is a flow regime characterized by chaotic, stochastic property changes. This includes low momentum diffusion, high momentum convection, and rapid variation of pressure and velocity in space and time. The satellite view of weather around Robinson Crusoe Islands illustrates one example. Viscosity Viscosity, a physical property, is a measure of how well adjacent molecules stick to one another. A solid can withstand a shearing force due to the strength of these sticky intermolecular forces. A fluid will continuously deform when subjected to a similar load. While a gas has a lower value of viscosity than a liquid, it is still an observable property. If gases had no viscosity, then they would not stick to the surface of a wing and form a boundary layer. A study of the delta wing in the Schlieren image reveals that the gas particles stick to one another (see Boundary layer section). Reynolds number In fluid mechanics, the Reynolds number is the ratio of inertial forces (vsρ) which dominate a turbulent flow, to viscous forces (μ/L) which is proportional to viscosity. It is one of the most important dimensionless numbers in fluid dynamics and is used, usually along with other dimensionless numbers, to provide a criterion for determining dynamic similitude. As such, the Reynolds number provides the link between modeling results (design) and the full-scale actual conditions. It can also be used to characterize the flow. Maximum entropy principle As the total number of degrees of freedom approaches infinity, the system will be found in the macrostate that corresponds to the highest multiplicity. In order to illustrate this principle, observe the skin temperature of a frozen metal bar. Using a thermal image of the skin temperature, note the temperature distribution on the surface. This initial observation of temperature represents a "microstate". At some future time, a second observation of the skin temperature produces a second microstate. By continuing this observation process, it is possible to produce a series of microstates that illustrate the thermal history of the bar's surface. Characterization of this historical series of microstates is possible by choosing the macrostate that successfully classifies them all into a single grouping. 
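Because the Reynolds number defined above is simply a ratio of characteristic quantities, it can be evaluated in a few lines; the sketch below computes Re = ρvL/μ for an illustrative airflow, with all input values assumed rather than taken from this article.

def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * L / mu (inertial forces relative to viscous forces)."""
    return density * velocity * length / viscosity

# Illustrative values for air flowing over a 1 m chord at 50 m/s.
rho_air = 1.2     # kg/m^3
mu_air = 1.8e-5   # Pa*s (dynamic viscosity)
print(reynolds_number(rho_air, 50.0, 1.0, mu_air))   # about 3.3e6, well into the turbulent regime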
Thermodynamic equilibrium When energy transfer ceases from a system, this condition is referred to as thermodynamic equilibrium. Usually, this condition implies the system and surroundings are at the same temperature so that heat no longer transfers between them. It also implies that external forces are balanced (volume does not change), and all chemical reactions within the system are complete. The timeline varies for these events depending on the system in question. A container of ice allowed to melt at room temperature takes hours, while in semiconductors the heat transfer that occurs as the device transitions from an on to an off state could be on the order of a few nanoseconds.
Physical sciences
States of matter
null
18994037
https://en.wikipedia.org/wiki/Sand
Sand
Sand is a granular material composed of finely divided mineral particles. Sand has various compositions but is defined by its grain size. Sand grains are smaller than gravel and coarser than silt. Sand can also refer to a textural class of soil or soil type; i.e., a soil containing more than 85 percent sand-sized particles by mass. The composition of sand varies, depending on the local rock sources and conditions, but the most common constituent of sand in inland continental settings and non-tropical coastal settings is silica (silicon dioxide, or SiO2), usually in the form of quartz. Calcium carbonate is the second most common type of sand. One example is aragonite, which has been created over the past 500 million years by various forms of life, such as coral and shellfish. It is the primary form of sand apparent in areas where reefs have dominated the ecosystem for millions of years, as in the Caribbean. Somewhat more rarely, sand may be composed of calcium sulfate, such as gypsum and selenite, as is found in places such as White Sands National Park and Salt Plains National Wildlife Refuge in the U.S. Sand is a non-renewable resource over human timescales, and sand suitable for making concrete is in high demand. Desert sand, although plentiful, is not suitable for concrete. Fifty billion tons of beach sand and fossil sand are used each year for construction. Composition The exact definition of sand varies. The scientific Unified Soil Classification System used in engineering and geology corresponds to US Standard Sieves, and defines sand as particles with a diameter of between 0.074 and 4.75 millimeters. By another definition, in terms of particle size as used by geologists, sand particles range in diameter from 0.0625 mm (or 1/16 mm), a volume of approximately 0.00012 cubic millimetres, to 2 mm, a volume of approximately 4.2 cubic millimetres, a difference in volume of roughly 35,000 times. Any particle falling within this range of sizes is termed a sand grain. Sand grains are between gravel (with particles ranging from 2 mm up to 64 mm by the latter system, and from 4.75 mm up to 75 mm in the former) and silt (particles smaller than 0.0625 mm down to 0.004 mm). The size specification between sand and gravel has remained constant for more than a century, but particle diameters as small as 0.02 mm were considered sand under the Albert Atterberg standard in use during the early 20th century. The grains of sand in Archimedes' The Sand Reckoner, written around 240 BCE, were 0.02 mm in diameter. A 1938 specification of the United States Department of Agriculture was 0.05 mm. A 1953 engineering standard published by the American Association of State Highway and Transportation Officials set the minimum sand size at 0.074 mm. Sand feels gritty when rubbed between the fingers. Silt, by comparison, feels like flour. ISO 14688 grades sands as fine, medium, and coarse, with ranges of 0.063–0.2 mm (fine), 0.2–0.63 mm (medium), and 0.63–2.0 mm (coarse). In the United States, sand is commonly divided into five sub-categories based on size: very fine sand (1/16 mm – 1/8 mm diameter), fine sand (1/8 mm – 1/4 mm), medium sand (1/4 mm – 1/2 mm), coarse sand (1/2 mm – 1 mm), and very coarse sand (1 mm – 2 mm). These sizes are based on the Krumbein phi scale, where the size is Φ = −log2(D), with D being the particle diameter in mm. On this scale, for sand the value of Φ varies from −1 to +4, with the divisions between sub-categories at whole numbers.
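The Krumbein phi relationship lends itself to a short worked example. The following Python sketch (illustrative only) converts a grain diameter to phi and names the United States size sub-category using the whole-number phi divisions mentioned above; the function names are invented for the example.

```python
import math

def phi(diameter_mm: float) -> float:
    """Krumbein phi scale: phi = -log2(D), with D the particle diameter in mm."""
    return -math.log2(diameter_mm)

def sand_class(diameter_mm: float) -> str:
    """Classify a grain using the whole-number phi divisions for sand
    (very coarse 1-2 mm, coarse 0.5-1 mm, medium 0.25-0.5 mm,
    fine 0.125-0.25 mm, very fine 0.0625-0.125 mm)."""
    p = phi(diameter_mm)
    if p < -1 or p > 4:
        return "not sand (outside 0.0625-2 mm)"
    names = ["very coarse sand", "coarse sand", "medium sand", "fine sand", "very fine sand"]
    # phi in [-1, 0) -> very coarse, [0, 1) -> coarse, ..., [3, 4] -> very fine
    index = min(int(math.floor(p)) + 1, 4)
    return names[index]

if __name__ == "__main__":
    for d in (1.5, 0.3, 0.07):
        print(f"{d} mm -> phi = {phi(d):.2f}, {sand_class(d)}")
```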
The most common constituent of sand, in inland continental settings and non-tropical coastal settings, is silica (silicon dioxide, or SiO2), usually in the form of quartz, which, because of its chemical inertness and considerable hardness, is the most common mineral resistant to weathering. The composition of mineral sand is highly variable, depending on the local rock sources and conditions. The bright white sands found in tropical and subtropical coastal settings are eroded limestone and may contain coral and shell fragments in addition to other organic or organically derived fragmental material, suggesting that sand formation depends on living organisms, too. The gypsum sand dunes of the White Sands National Park in New Mexico are famous for their bright, white color. Arkose is a sand or sandstone with considerable feldspar content, derived from weathering and erosion of a (usually nearby) granitic rock outcrop. Some sands contain magnetite, chlorite, glauconite, or gypsum. Sands rich in magnetite are dark to black in color, as are sands derived from volcanic basalts and obsidian. Chlorite-glauconite bearing sands are typically green in color, as are sands derived from basaltic lava with a high olivine content. Many sands, especially those found extensively in Southern Europe, have iron impurities within the quartz crystals of the sand, giving a deep yellow color. Sand deposits in some areas contain garnets and other resistant minerals, including some small gemstones. Sources Rocks erode or weather over a long period of time, mainly by water and wind, and their sediments are transported downstream. These sediments continue to break apart into smaller pieces until they become fine grains of sand. The type of rock the sediment originated from and the intensity of the environment give different compositions of sand. The most common rock to form sand is granite, where the feldspar minerals dissolve faster than the quartz, causing the rock to break apart into small pieces. In high energy environments rocks break apart much faster than in more calm settings. In granite rocks this results in more feldspar minerals in the sand because they do not have as much time to dissolve away. The term for sand formed by weathering is "epiclastic." Sand from rivers are collected either from the river itself or its flood plain and accounts for the majority of the sand used in the construction industry. Because of this, many small rivers have been depleted, causing environmental concern and economic losses to adjacent land. The rate of sand mining in such areas greatly outweighs the rate the sand can replenish, making it a non-renewable resource. Sand dunes are a consequence of dry conditions or wind deposition. The Sahara Desert is very dry because of its geographic location and proximity to the equator. It is known for its vast sand dunes, which exist mainly due to a lack of vegetation and water. Over time, wind blows away fine particles, such as clay and dead organic matter, leaving only sand and larger rocks. Only 15% of the Sahara is sand dunes, while 70% is bare rock. The wind is responsible for creating these different environments and shaping the sand to be round and smooth. These properties make desert sand unusable for construction. Beach sand is also formed by erosion. Over thousands of years, rocks are eroded near the shoreline from the constant motion of waves and the sediments build up. 
Weathering and river deposition also accelerate the process of creating a beach, along with marine animals interacting with rocks, such as eating the algae off of them. Once there is a sufficient amount of sand, the beach acts as a barrier to keep the land from eroding any further. This sand is ideal for construction as it is angular and of various sizes. Marine sand (or ocean sand) comes from sediments transported into the ocean and the erosion of ocean rocks. The thickness of the sand layer varies; however, it is common to have more sand closer to land. This type of sand is ideal for construction and is a very valuable commodity. Europe is the leading miner of marine sand, which greatly hurts ecosystems and local fisheries. Study The study of individual grains can reveal much historical information as to the origin and kind of transport of the grain. Quartz sand that is recently weathered from granite or gneiss quartz crystals will be angular. It is called grus in geology or sharp sand in the building trade, where it is preferred for concrete, and in gardening, where it is used as a soil amendment to loosen clay soils. Sand that is transported long distances by water or wind will be rounded, with characteristic abrasion patterns on the grain surface. Desert sand is typically rounded. People who collect sand as a hobby are known as arenophiles. Organisms that thrive in sandy environments are psammophiles. Uses Abrasion: Before sandpaper, wet sand was used as an abrasive element between a rotating device with an elastic surface and hard materials such as very hard stone (making of stone vases), or metal (removal of old stain before re-staining copper cooking pots). Agriculture: Sandy soils are ideal for crops such as watermelons, peaches, and peanuts, and their excellent drainage characteristics make them suitable for intensive dairy farming. Air filtration: Finer sand particles mixed with cloth were commonly used in certain gas mask filter designs but have largely been replaced by microfibers. Aquaria: Sand makes a low-cost aquarium base material which some believe is better than gravel for home use. It is also a necessity for saltwater reef tanks, which emulate environments composed largely of aragonite sand broken down from coral and shellfish. Artificial reefs: Geotextile bagged sand can serve as the foundation for new reefs. Artificial islands: Sand is a primary fill material for artificial islands, such as those in the Persian Gulf. Beach nourishment: Governments move sand to beaches where tides, storms, or deliberate changes to the shoreline erode the original sand. Brick: Manufacturing plants add sand to a mixture of clay and other materials for manufacturing bricks. Cob: Cob is a building material consisting of water, organic material (like straw), lime, and subsoil, which largely consists of sand. Coarse sand makes up as much as 75% of cob. Concrete: Sand is often a principal component of this critical construction material. Glass: Sand rich in silica is the principal component in common glasses. Hydraulic fracturing: A drilling technique for natural gas, which uses rounded silica sand as a "proppant", a material to hold open cracks that are caused by the hydraulic fracturing process. Landscaping: Sand makes small hills and slopes (golf courses would be an example). Mortar: Sand is mixed with masonry cement or Portland cement and lime to be used in masonry construction. Paint: Mixing sand with paint produces a textured finish for walls and ceilings or non-slip floor surfaces.
Railroads: Engine drivers and rail transit operators use sand to improve the traction of wheels on the rails. Recreation: Playing with sand is a favorite beach activity. One of the most beloved uses of sand is to make sometimes intricate, sometimes simple structures known as sand castles, proverbially impermanent. Special play areas for children, enclosing a significant area of sand and known as sandboxes, are common on many public playgrounds, and even at some single-family homes. Sand dunes are also popular among climbers, motorcyclists and beach buggy drivers. Roads: Sand improves traction (and thus traffic safety) in icy or snowy conditions. Sand animation: Performance artists draw images in sand. Makers of animated films use the same term to describe their use of sand on frontlit or backlit glass. Sand casting: Casters moisten or oil molding sand, also known as foundry sand and then shape it into molds into which they pour molten material. This type of sand must be able to withstand high temperatures and pressure, allow gases to escape, have a uniform, small grain size, and be non-reactive with metals. Sandbags: These protect against floods and gunfire. The inexpensive bags are easy to transport when empty, and unskilled volunteers can quickly fill them with local sand in emergencies. Sandblasting: Graded sand serves as an abrasive in cleaning, preparing, and polishing. Silicon: Quartz sand is a raw material for the production of silicon. Thermal weapon: While not in widespread use anymore, sand used to be heated and poured on invading troops in the classical and medieval time periods. Water filtration: Media filters use sand for filtering water. It is also commonly used by many water treatment facilities, often in the form of rapid sand filters. Tayammum: Tayammum is an Islamic ritual wiping of parts of the body. Zoanthid "skeletons": Animals in this order of marine benthic cnidarians related to corals and sea anemones incorporate sand into their mesoglea for structural strength, which they need because they lack a true skeleton. Resources and environmental concerns Only some sands are suitable for the construction industry, for example for making concrete. Grains of desert sand are rounded by being blown in the wind, and for this reason do not produce solid concrete, unlike the rough sand from the sea. Because of the growth of population and of cities and the consequent construction activity there is a huge demand for these special kinds of sand, and natural sources are running low. In 2012 French director Denis Delestrac made a documentary called "Sand Wars" about the impact of the lack of construction sand. It shows the ecological and economic effects of both legal and illegal trade in construction sand. To retrieve the sand, the method of hydraulic dredging is used. This works by pumping the top few meters of sand out of the water and filling it into a boat, which is then transported back to land for processing. All marine life mixed in with the extracted sand is killed and the ecosystem can continue to suffer for years after the mining is complete. Not only does this affect marine life, but also the local fishing industries because of the loss of life, and communities living close to the water's edge. When sand is taken out of the water it increases the risk of landslides, which can lead to loss of agricultural land and/or damage to dwellings. Sand's many uses require a significant dredging industry, raising environmental concerns over fish depletion, landslides, and flooding. 
Countries such as China, Indonesia, Malaysia, and Cambodia ban sand exports, citing these issues as a major factor. It is estimated that the annual consumption of sand and gravel is 40 billion tons and sand is a US$70 billion global industry. With increasing use, more is expected to come from recycling and alternatives to sand. The global demand for sand in 2017 was 9.55 billion tons as part of a $99.5 billion industry. In April 2022, the United Nations Environment Programme (UNEP) published a report stating that 50 billion tons of sand and gravel were being used every year. The report made 10 recommendations, including a ban on beach extraction, to avert a crisis, and move toward a circular economy for the two resources. Hazards While sand is generally non-toxic, sand-using activities such as sandblasting require precautions. Bags of silica sand used for sandblasting now carry labels warning the user to wear respiratory protection to avoid breathing the resulting fine silica dust. Safety data sheets for silica sand state that "excessive inhalation of crystalline silica is a serious health concern." In areas of high pore water pressure, sand and salt water can form quicksand, which is a colloid hydrogel that behaves like a liquid. Quicksand produces a considerable barrier to escape for creatures caught within, who often die from exposure (not from submersion) as a result. People sometimes dig holes in the sand at beaches for recreational purposes, but if too deep they can result in serious injury or death in the event of a collapse. Manufacture Manufactured sand (M sand) is sand made from rock by artificial processes, usually for construction purposes in cement or concrete. It differs from river sand by being more angular, and has somewhat different properties. Case studies In Dubai, United Arab Emirates, sand needed to construct infrastructure and create the Dubai Islands exceeds local supplies, requiring sand from Australia. The artificial islands required more than 835 million tonnes of sand, at a cost greater than $26 billion USD.
Physical sciences
Petrology
null
18994087
https://en.wikipedia.org/wiki/Sound
Sound
In physics, sound is a vibration that propagates as an acoustic wave through a transmission medium such as a gas, liquid or solid. In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Only acoustic waves that have frequencies lying between about 20 Hz and 20 kHz, the audio frequency range, elicit an auditory percept in humans. In air at atmospheric pressure, these represent sound waves with wavelengths of about 17 metres to 17 millimetres. Sound waves above 20 kHz are known as ultrasound and are not audible to humans. Sound waves below 20 Hz are known as infrasound. Different animal species have varying hearing ranges, allowing some even to hear ultrasound. Definition Sound is defined as "(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a)." Sound can be viewed as a wave motion in air or other elastic media. In this case, sound is a stimulus. Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound. In this case, sound is a sensation. Acoustics Acoustics is the interdisciplinary science that deals with the study of mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustical engineering may be called an acoustical engineer. An audio engineer, on the other hand, is concerned with the recording, manipulation, mixing, and reproduction of sound. Applications of acoustics are found in almost all aspects of modern society; subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electro-acoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration. Physics Sound can propagate through a medium such as air, water and solids as longitudinal waves and also as a transverse wave in solids. The sound waves are generated by a sound source, such as the vibrating diaphragm of a stereo speaker. The sound source creates vibrations in the surrounding medium. As the source continues to vibrate the medium, the vibrations propagate away from the source at the speed of sound, thus forming the sound wave. At a fixed distance from the source, the pressure, velocity, and displacement of the medium vary in time. At an instant in time, the pressure, velocity, and displacement vary in space. The particles of the medium do not travel with the sound wave. This is intuitively obvious for a solid, and the same is true for liquids and gases (that is, the vibrations of particles in the gas or liquid transport the vibrations, while the average position of the particles over time does not change). During propagation, waves can be reflected, refracted, or attenuated by the medium. The behavior of sound propagation is generally affected by three things: A complex relationship between the density and pressure of the medium. This relationship, affected by temperature, determines the speed of sound within the medium. Motion of the medium itself. If the medium is moving, this movement may increase or decrease the absolute speed of the sound wave depending on the direction of the movement.
For example, sound moving through wind will have its speed of propagation increased by the speed of the wind if the sound and wind are moving in the same direction. If the sound and wind are moving in opposite directions, the speed of the sound wave will be decreased by the speed of the wind. The viscosity of the medium. Medium viscosity determines the rate at which sound is attenuated. For many media, such as air or water, attenuation due to viscosity is negligible. When sound is moving through a medium that does not have constant physical properties, it may be refracted (either dispersed or focused). The mechanical vibrations that can be interpreted as sound can travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through a vacuum. Studies have shown that sound waves are able to carry a tiny amount of mass and are surrounded by a weak gravitational field. Waves Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves. It requires a medium to propagate. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress at right angles to the direction of propagation. Sound waves may be viewed using parabolic mirrors and objects that produce sound. The energy carried by an oscillating sound wave converts back and forth between the potential energy of the extra compression (in case of longitudinal waves) or lateral displacement strain (in case of transverse waves) of the matter, and the kinetic energy of the displacement velocity of particles of the medium. Although there are many complexities relating to the transmission of sounds, at the point of reception (i.e. the ears), sound is readily dividable into two simple elements: pressure and time. These fundamental elements form the basis of all sound waves. They can be used to describe, in absolute terms, every sound we hear. To understand a sound more fully, a complex wave, such as the one shown on a blue background in the accompanying illustration, is usually separated into its component parts, which are a combination of various sound wave frequencies (and noise). Sound waves are often simplified to a description in terms of sinusoidal plane waves, which are characterized by these generic properties: Frequency, or its inverse, wavelength Amplitude, sound pressure, or intensity Speed of sound Direction Sound that is perceptible by humans has frequencies from about 20 Hz to 20,000 Hz. In air at standard temperature and pressure, the corresponding wavelengths of sound waves range from about 17 metres to 17 millimetres. Sometimes speed and direction are combined as a velocity vector; wave number and direction are combined as a wave vector. Transverse waves, also known as shear waves, have the additional property, polarization, which is not a characteristic of longitudinal sound waves. Speed The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. The first significant effort towards measurement of the speed of sound was made by Isaac Newton.
He believed the speed of sound in a particular substance was equal to the square root of the pressure acting on it divided by its density: c = √(p/ρ). This was later proven wrong, and the French mathematician Laplace corrected the formula by deducing that the phenomenon of sound travelling is not isothermal, as believed by Newton, but adiabatic. He added another factor to the equation, gamma (γ), multiplying the pressure by it and thus coming up with the equation c = √(γp/ρ). Since K = γp, the final equation came out to be c = √(K/ρ), which is also known as the Newton–Laplace equation. In this equation, K is the elastic bulk modulus, c is the velocity of sound, and ρ is the density. Thus, the speed of sound is proportional to the square root of the ratio of the bulk modulus of the medium to its density. Those physical properties and the speed of sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In air at sea level, the speed of sound is approximately 343 m/s (1,230 km/h), using the formula v [m/s] = 331 + 0.6T [°C] at about 20 °C. The speed of sound is also slightly sensitive, being subject to a second-order anharmonic effect, to the sound amplitude, which means there are non-linear propagation effects, such as the production of harmonics and mixed tones not present in the original sound (see parametric array). If relativistic effects are important, the speed of sound is calculated from the relativistic Euler equations. In fresh water the speed of sound is approximately 1,482 m/s. In steel, the speed of sound is about 5,960 m/s. Sound moves the fastest in solid atomic hydrogen, at about 36,000 m/s. Sound pressure level Sound pressure is the difference, in a given medium, between average local pressure and the pressure in the sound wave. A square of this difference (i.e., a square of the deviation from the equilibrium pressure) is usually averaged over time and/or space, and a square root of this average provides a root mean square (RMS) value. For example, 1 Pa RMS sound pressure (94 dB SPL) in atmospheric air implies that the actual pressure in the sound wave oscillates between (1 atm − √2 Pa) and (1 atm + √2 Pa), that is between 101323.6 and 101326.4 Pa. As the human ear can detect sounds with a wide range of amplitudes, sound pressure is often measured as a level on a logarithmic decibel scale. The sound pressure level (SPL), or Lp, is defined as Lp = 20 log10(p/p0) dB, where p is the root-mean-square sound pressure and p0 is a reference sound pressure. Commonly used reference sound pressures, defined in the standard ANSI S1.1-1994, are 20 μPa in air and 1 μPa in water. Without a specified reference sound pressure, a value expressed in decibels cannot represent a sound pressure level. Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that the measured level matches perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to noise, and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels. Perception A distinct use of the term sound from its use in physics is that in physiology and psychology, where the term refers to the subject of perception by the brain. The field of psychoacoustics is dedicated to such studies. Webster's dictionary defined sound as: "1. The sensation of hearing, that which is heard; specif.: a. Psychophysics.
Sensation due to stimulation of the auditory nerves and auditory centers of the brain, usually by vibrations transmitted in a material medium, commonly air, affecting the organ of hearing. b. Physics. Vibrational energy which occasions such a sensation. Sound is propagated by progressive longitudinal vibratory disturbances (sound waves)." This means that the correct response to the question: "if a tree falls in a forest and no one is around to hear it, does it make a sound?" is "yes", and "no", dependent on whether being answered using the physical, or the psychophysical definition, respectively. The physical reception of sound in any hearing organism is limited to a range of frequencies. Humans normally hear sound frequencies between approximately 20 Hz and 20,000 Hz (20 kHz), The upper limit decreases with age. Sometimes sound refers to only those vibrations with frequencies that are within the hearing range for humans or sometimes it relates to a particular animal. Other species have different ranges of hearing. For example, dogs can perceive vibrations higher than 20 kHz. As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound. Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal. However, in sound perception it can often be used to identify the source of a sound and is an important component of timbre perception (see below). Soundscape is the component of the acoustic environment that can be perceived by humans. The acoustic environment is the combination of all sounds (whether audible to humans or not) within a given area as modified by the environment and understood by people, in context of the surrounding environment. There are, historically, six experimentally separable ways in which sound waves are analysed. They are: pitch, duration, loudness, timbre, sonic texture and spatial location. Some of these terms have a standardised definition (for instance in the ANSI Acoustical Terminology ANSI/ASA S1.1-2013). More recent approaches have also considered temporal envelope and temporal fine structure as perceptually relevant analyses. Pitch Pitch is perceived as how "low" or "high" a sound is and represents the cyclic, repetitive nature of the vibrations that make up sound. For simple sounds, pitch relates to the frequency of the slowest vibration in the sound (called the fundamental harmonic). In the case of complex sounds, pitch perception can vary. Sometimes individuals identify different pitches for the same sound, based on their personal experience of particular sound patterns. Selection of a particular pitch is determined by pre-conscious examination of vibrations, including their frequencies and the balance between them. Specific attention is given to recognising potential harmonics. Every sound is placed on a pitch continuum from low to high. 
For example: white noise (random noise spread evenly across all frequencies) sounds higher in pitch than pink noise (random noise spread evenly across octaves) as white noise has more high frequency content. Duration Duration is perceived as how "long" or "short" a sound is and relates to onset and offset signals created by nerve responses to sounds. The duration of a sound usually lasts from the time the sound is first noticed until the sound is identified as having changed or ceased. Sometimes this is not directly related to the physical duration of a sound. For example; in a noisy environment, gapped sounds (sounds that stop and start) can sound as if they are continuous because the offset messages are missed owing to disruptions from noises in the same general bandwidth. This can be of great benefit in understanding distorted messages such as radio signals that suffer from interference, as (owing to this effect) the message is heard as if it was continuous. Loudness Loudness is perceived as how "loud" or "soft" a sound is and relates to the totalled number of auditory nerve stimulations over short cyclic time periods, most likely over the duration of theta wave cycles. This means that at short durations, a very short sound can sound softer than a longer sound even though they are presented at the same intensity level. Past around 200 ms this is no longer the case and the duration of the sound no longer affects the apparent loudness of the sound. Timbre Timbre is perceived as the quality of different sounds (e.g. the thud of a fallen rock, the whir of a drill, the tone of a musical instrument or the quality of a voice) and represents the pre-conscious allocation of a sonic identity to a sound (e.g. "it's an oboe!"). This identity is based on information gained from frequency transients, noisiness, unsteadiness, perceived pitch and the spread and intensity of overtones in the sound over an extended time frame. The way a sound changes over time provides most of the information for timbre identification. Even though a small section of the wave form from each instrument looks very similar, differences in changes over time between the clarinet and the piano are evident in both loudness and harmonic content. Less noticeable are the different noises heard, such as air hisses for the clarinet and hammer strikes for the piano. Texture Sonic texture relates to the number of sound sources and the interaction between them. The word texture, in this context, relates to the cognitive separation of auditory objects. In music, texture is often referred to as the difference between unison, polyphony and homophony, but it can also relate (for example) to a busy cafe; a sound which might be referred to as cacophony. Spatial location Spatial location represents the cognitive placement of a sound in an environmental context; including the placement of a sound on both the horizontal and vertical plane, the distance from the sound source and the characteristics of the sonic environment. In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification. Frequency Ultrasound Ultrasound is sound waves with frequencies higher than 20,000 Hz. Ultrasound is not different from audible sound in its physical properties, but cannot be heard by humans. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz. Medical ultrasound is commonly used for diagnostics and treatment. 
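As a numerical aside (an illustrative sketch rather than material from the article), the wavelength of a sound wave follows from λ = c/f, and a sound pressure level follows from the decibel definition given in the sound pressure section above; the speed of sound and the pressure values below are assumed example figures.

```python
import math

SPEED_OF_SOUND_AIR = 343.0   # m/s, approximate value near room temperature
P_REF_AIR = 20e-6            # Pa, common reference pressure for sound in air

def wavelength(frequency_hz: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Wavelength in metres: lambda = c / f."""
    return speed / frequency_hz

def spl_db(p_rms_pa: float, p_ref: float = P_REF_AIR) -> float:
    """Sound pressure level in decibels: Lp = 20 * log10(p / p_ref)."""
    return 20.0 * math.log10(p_rms_pa / p_ref)

if __name__ == "__main__":
    print(f"20 Hz   -> {wavelength(20):.2f} m")                      # ~17 m
    print(f"20 kHz  -> {wavelength(20_000) * 1000:.1f} mm")          # ~17 mm
    print(f"40 kHz ultrasound -> {wavelength(40_000) * 1000:.1f} mm")  # ~8.6 mm
    print(f"1 Pa RMS -> {spl_db(1.0):.0f} dB SPL")                   # ~94 dB
```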
Infrasound Infrasound is sound waves with frequencies lower than 20 Hz. Although sounds of such low frequency are too low for humans to hear as a pitch, these sounds are heard as discrete pulses (like the 'popping' sound of an idling motorcycle). Whales, elephants and other animals can detect infrasound and use it to communicate. It can be used to detect volcanic eruptions and is used in some types of music.
Physical sciences
Physics
null
18994196
https://en.wikipedia.org/wiki/Computer%20virus
Computer virus
A computer virus is a type of malware that, when executed, replicates itself by modifying other computer programs and inserting its own code into those programs. If this replication succeeds, the affected areas are then said to be "infected" with a computer virus, a metaphor derived from biological viruses. Computer viruses generally require a host program. The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. By contrast, a computer worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks. Virus writers use social engineering deceptions and exploit detailed knowledge of security vulnerabilities to initially infect systems and to spread the virus. Viruses use complex anti-detection/stealth strategies to evade antivirus software. Motives for creating viruses can include seeking profit (e.g., with ransomware), desire to send a political message, personal amusement, to demonstrate that a vulnerability exists in software, for sabotage and denial of service, or simply because they wish to explore cybersecurity issues, artificial life and evolutionary algorithms. As of 2013, computer viruses caused billions of dollars' worth of economic damage each year. In response, an industry of antivirus software has cropped up, selling or freely distributing virus protection to users of various operating systems. History The first academic work on the theory of self-replicating computer programs was done in 1949 by John von Neumann who gave lectures at the University of Illinois about the "Theory and Organization of Complicated Automata". The work of von Neumann was later published as the "Theory of self-reproducing automata". In his essay von Neumann described how a computer program could be designed to reproduce itself. Von Neumann's design for a self-reproducing computer program is considered the world's first computer virus, and he is considered to be the theoretical "father" of computer virology. In 1972, Veith Risak directly building on von Neumann's work on self-replication, published his article "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (Self-reproducing automata with minimal information exchange). The article describes a fully functional virus written in assembler programming language for a SIEMENS 4004/35 computer system. In 1980, Jürgen Kraus wrote his Diplom thesis "Selbstreproduktion bei Programmen" (Self-reproduction of programs) at the University of Dortmund. In his work Kraus postulated that computer programs can behave in a way similar to biological viruses. The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s. Creeper was an experimental self-replicating program written by Bob Thomas at BBN Technologies in 1971. Creeper used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system. Creeper gained access via the ARPANET and copied itself to the remote system where the message, "I'M THE CREEPER. CATCH ME IF YOU CAN!" was displayed. The Reaper program was created to delete Creeper. In 1982, a program called "Elk Cloner" was the first personal computer virus to appear "in the wild"—that is, outside the single computer or computer lab where it was created. 
Written in 1981 by Richard Skrenta, a ninth grader at Mount Lebanon High School near Pittsburgh, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk. On its 50th use the Elk Cloner virus would be activated, infecting the personal computer and displaying a short poem beginning "Elk Cloner: The program with a personality." In 1984, Fred Cohen from the University of Southern California wrote his paper "Computer Viruses – Theory and Experiments". It was the first paper to explicitly call a self-reproducing program a "virus", a term introduced by Cohen's mentor Leonard Adleman. In 1987, Cohen published a demonstration that there is no algorithm that can perfectly detect all possible viruses. Cohen's theoretical compression virus was an example of a virus which was not malicious software (malware), but was putatively benevolent (well-intentioned). However, antivirus professionals do not accept the concept of "benevolent viruses", as any desired function can be implemented without involving a virus (automatic compression, for instance, is available under Windows at the choice of the user). Any virus will by definition make unauthorised changes to a computer, which is undesirable even if no damage is done or intended. The first page of Dr Solomon's Virus Encyclopaedia explains the undesirability of viruses, even those that do nothing but reproduce. An article that describes "useful virus functionalities" was published by J. B. Gunn under the title "Use of virus functions to provide a virtual APL interpreter under user control" in 1984. The first IBM PC compatible virus in the "wild" was a boot sector virus dubbed (c)Brain, created in 1986 and was released in 1987 by Amjad Farooq Alvi and Basit Farooq Alvi in Lahore, Pakistan, reportedly to deter unauthorized copying of the software they had written. The first virus to specifically target Microsoft Windows, WinVir was discovered in April 1992, two years after the release of Windows 3.0. The virus did not contain any Windows API calls, instead relying on DOS interrupts. A few years later, in February 1996, Australian hackers from the virus-writing crew VLAD created the Bizatch virus (also known as "Boza" virus), which was the first known virus to specifically target Windows 95. This virus attacked the new portable executable (PE) files introduced in Windows 95. In late 1997 the encrypted, memory-resident stealth virus Win32.Cabanas was released—the first known virus that targeted Windows NT (it was also able to infect Windows 3.0 and Windows 9x hosts). Even home computers were affected by viruses. The first one to appear on the Amiga was a boot sector virus called SCA virus, which was detected in November 1987. Design Parts A computer virus generally contains three parts: the infection mechanism, which finds and infects new files, the payload, which is the malicious code to execute, and the trigger, which determines when to activate the payload. Infection mechanism Also called the infection vector, this is how the virus spreads. Some viruses have a search routine, which locate and infect files on disk. Other viruses infect files as they are run, such as the Jerusalem DOS virus. Trigger Also known as a logic bomb, this is the part of the virus that determines the condition for which the payload is activated. This condition may be a particular date, time, presence of another program, size on disk exceeding a threshold, or opening a specific file. 
Payload The payload is the body of the virus that executes the malicious activity. Examples of malicious activities include damaging files, theft of confidential information or spying on the infected system. Payload activity is sometimes noticeable as it can cause the system to slow down or "freeze". Sometimes payloads are non-destructive and their main purpose is to spread a message to as many people as possible. This is called a virus hoax. Phases Virus phases is the life cycle of the computer virus, described by using an analogy to biology. This life cycle can be divided into four phases: Dormant phase The virus program is idle during this stage. The virus program has managed to access the target user's computer or software, but during this stage, the virus does not take any action. The virus will eventually be activated by the "trigger" which states which event will execute the virus. Not all viruses have this stage. Propagation phase The virus starts propagating, which is multiplying and replicating itself. The virus places a copy of itself into other programs or into certain system areas on the disk. The copy may not be identical to the propagating version; viruses often "morph" or change to evade detection by IT professionals and anti-virus software. Each infected program will now contain a clone of the virus, which will itself enter a propagation phase. Triggering phase A dormant virus moves into this phase when it is activated, and will now perform the function for which it was intended. The triggering phase can be caused by a variety of system events, including a count of the number of times that this copy of the virus has made copies of itself. The trigger may occur when an employee is terminated from their employment or after a set period of time has elapsed, in order to reduce suspicion. Execution phase This is the actual work of the virus, where the "payload" will be released. It can be destructive such as deleting files on disk, crashing the system, or corrupting files or relatively harmless such as popping up humorous or political messages on screen. Targets and replication Computer viruses infect a variety of different subsystems on their host computers and software. One manner of classifying viruses is to analyze whether they reside in binary executables (such as .EXE or .COM files), data files (such as Microsoft Word documents or PDF files), or in the boot sector of the host's hard drive (or some combination of all of these). A memory-resident virus (or simply "resident virus") installs itself as part of the operating system when executed, after which it remains in RAM from the time the computer is booted up to when it is shut down. Resident viruses overwrite interrupt handling code or other functions, and when the operating system attempts to access the target file or disk sector, the virus code intercepts the request and redirects the control flow to the replication module, infecting the target. In contrast, a non-memory-resident virus (or "non-resident virus"), when executed, scans the disk for targets, infects them, and then exits (i.e. it does not remain in memory after it is done executing). Many common applications, such as Microsoft Outlook and Microsoft Word, allow macro programs to be embedded in documents or emails, so that the programs may be run automatically when the document is opened. 
A macro virus (or "document virus") is a virus that is written in a macro language and embedded into these documents so that when users open the file, the virus code is executed, and can infect the user's computer. This is one of the reasons that it is dangerous to open unexpected or suspicious attachments in e-mails. While not opening attachments in e-mails from unknown persons or organizations can help to reduce the likelihood of contracting a virus, in some cases, the virus is designed so that the e-mail appears to be from a reputable organization (e.g., a major bank or credit card company). Boot sector viruses specifically target the boot sector and/or the Master Boot Record (MBR) of the host's hard disk drive, solid-state drive, or removable storage media (flash drives, floppy disks, etc.). The most common way of transmission of computer viruses in boot sector is physical media. When reading the VBR of the drive, the infected floppy disk or USB flash drive connected to the computer will transfer data, and then modify or replace the existing boot code. The next time a user tries to start the desktop, the virus will immediately load and run as part of the master boot record. Email viruses are viruses that intentionally, rather than accidentally, use the email system to spread. While virus infected files may be accidentally sent as email attachments, email viruses are aware of email system functions. They generally target a specific type of email system (Microsoft Outlook is the most commonly used), harvest email addresses from various sources, and may append copies of themselves to all email sent, or may generate email messages containing copies of themselves as attachments. Detection To avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the DOS platform, make sure that the "last modified" date of a host file stays the same when the file is infected by the virus. This approach does not fool antivirus software, however, especially those which maintain and date cyclic redundancy checks on file changes. Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by overwriting unused areas of executable files. These are called cavity viruses. For example, the CIH virus, or Chernobyl Virus, infects Portable Executable files. Because those files have many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file. Some viruses try to avoid detection by killing the tasks associated with antivirus software before it can detect them (for example, Conficker). A Virus may also hide its presence using a rootkit by not showing itself on the list of system processes or by disguising itself within a trusted process. In the 2010s, as computers and operating systems grow larger and more complex, old hiding techniques need to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permission for every kind of file access. In addition, only a small fraction of known viruses actually cause real incidents, primarily because many viruses remain below the theoretical epidemic threshold. Read request intercepts While some kinds of antivirus software employ various techniques to counter stealth mechanisms, once the infection occurs any recourse to "clean" the system is unreliable. In Microsoft Windows operating systems, the NTFS file system is proprietary. 
This leaves antivirus software little alternative but to send a "read" request to Windows files that handle such requests. Some viruses trick antivirus software by intercepting its requests to the operating system. A virus can hide by intercepting the request to read the infected file, handling the request itself, and returning an uninfected version of the file to the antivirus software. The interception can occur by code injection of the actual operating system files that would handle the read request. Thus, an antivirus software attempting to detect the virus will either not be permitted to read the infected file, or, the "read" request will be served with the uninfected version of the same file. The only reliable method to avoid "stealth" viruses is to boot from a medium that is known to be "clear". Security software can then be used to check the dormant operating system files. Most security software relies on virus signatures, or they employ heuristics. Security software may also use a database of file "hashes" for Windows OS files, so the security software can identify altered files, and request Windows installation media to replace them with authentic versions. In older versions of Windows, file cryptographic hash functions of Windows OS files stored in Windows—to allow file integrity/authenticity to be checked—could be overwritten so that the System File Checker would report that altered system files are authentic, so using file hashes to scan for altered files would not always guarantee finding an infection. Self-modification Most modern antivirus programs try to find virus-patterns inside ordinary programs by scanning them for so-called virus signatures. Different antivirus programs will employ different search methods when identifying viruses. If a virus scanner finds such a pattern in a file, it will perform other checks to make sure that it has found the virus, and not merely a coincidental sequence in an innocent file, before it notifies the user that the file is infected. The user can then delete, or (in some cases) "clean" or "heal" the infected file. Some viruses employ techniques that make detection by means of signatures difficult but probably not impossible. These viruses modify their code on each infection. That is, each infected file contains a different variant of the virus. One method of evading signature detection is to use simple encryption to encipher (encode) the body of the virus, leaving only the encryption module and a static cryptographic key in cleartext which does not change from one infection to the next. In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would (for example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is entirely possible to decrypt the final virus, but this is probably not required, since self-modifying code is such a rarity that finding some may be reason enough for virus scanners to at least "flag" the file as suspicious. 
An old but compact method is the use of arithmetic operations like addition or subtraction and of logical operations such as XORing, where each byte in a virus is XORed with a constant so that the exclusive-or operation only has to be repeated for decryption. It is suspicious for code to modify itself, so the code to do the encryption/decryption may be part of the signature in many virus definitions. A simpler, older approach did not use a key; the encryption consisted only of operations with no parameters, like incrementing and decrementing, bitwise rotation, arithmetic negation, and logical NOT. Some viruses, called polymorphic viruses, will employ a means of encryption inside an executable in which the virus is encrypted under certain events, such as the virus scanner being disabled for updates or the computer being rebooted. This is called cryptovirology. Polymorphic code was the first technique that posed a serious threat to virus scanners. Just like regular encrypted viruses, a polymorphic virus infects files with an encrypted copy of itself, which is decoded by a decryption module. In the case of polymorphic viruses, however, this decryption module is also modified on each infection. A well-written polymorphic virus therefore has no parts which remain identical between infections, making it very difficult to detect directly using "signatures". Antivirus software can detect it by decrypting the viruses using an emulator, or by statistical pattern analysis of the encrypted virus body. To enable polymorphic code, the virus has to have a polymorphic engine (also called "mutating engine" or "mutation engine") somewhere in its encrypted body. See polymorphic code for technical detail on how such engines operate. Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus significantly. For example, a virus can be programmed to mutate only slightly over time, or it can be programmed to refrain from mutating when it infects a file on a computer that already contains copies of the virus. The advantage of using such slow polymorphic code is that it makes it more difficult for antivirus professionals and investigators to obtain representative samples of the virus, because "bait" files that are infected in one run will typically contain identical or similar samples of the virus. This will make it more likely that the detection by the virus scanner will be unreliable, and that some instances of the virus may be able to avoid detection. To avoid being detected by emulation, some viruses rewrite themselves completely each time they are to infect new executables. Viruses that utilize this technique are said to be in metamorphic code. To enable metamorphism, a "metamorphic engine" is needed. A metamorphic virus is usually very large and complex. For example, W32/Simile consisted of over 14,000 lines of assembly language code, 90% of which is part of the metamorphic engine. Effects Damage is caused by system failure, data corruption, wasted computer resources, increased maintenance costs, or theft of personal information. Even though no antivirus software can uncover all computer viruses (especially new ones), computer security researchers are actively searching for new ways to enable antivirus solutions to more effectively detect emerging viruses, before they become widely distributed.
A power virus is a computer program that executes specific machine code to reach the maximum CPU power dissipation (thermal energy output for the central processing units). Computer cooling apparatus are designed to dissipate power up to the thermal design power, rather than maximum power, and a power virus could cause the system to overheat if it does not have logic to stop the processor. This may cause permanent physical damage. Power viruses can be malicious, but are often suites of test software used for integration testing and thermal testing of computer components during the design phase of a product, or for product benchmarking. Stability test applications are similar programs which have the same effect as power viruses (high CPU usage) but stay under the user's control. They are used for testing CPUs, for example, when overclocking. Spinlock in a poorly written program may cause similar symptoms, if it lasts sufficiently long. Different micro-architectures typically require different machine code to hit their maximum power. Examples of such machine code do not appear to be distributed in CPU reference materials. Infection vectors As software is often designed with security features to prevent unauthorized use of system resources, many viruses must exploit and manipulate security bugs, which are security defects in a system or application software, to spread themselves and infect other computers. Software development strategies that produce large numbers of "bugs" will generally also produce potential exploitable "holes" or "entrances" for the virus. To replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs (see code injection). If a user attempts to launch an infected program, the virus' code may be executed simultaneously. In operating systems that use file extensions to determine program associations (such as Microsoft Windows), the extensions may be hidden from the user by default. This makes it possible to create a file that is of a different type than it appears to the user. For example, an executable may be created and named "picture.png.exe", in which the user sees only "picture.png" and therefore assumes that this file is a digital image and most likely is safe, yet when opened, it runs the executable on the client machine. Viruses may be installed on removable media, such as flash drives. The drives may be left in a parking lot of a government building or other target, with the hopes that curious users will insert the drive into a computer. In a 2015 experiment, researchers at the University of Michigan found that 45–98 percent of users would plug in a flash drive of unknown origin. The vast majority of viruses target systems running Microsoft Windows. This is due to Microsoft's large market share of desktop computer users. The diversity of software systems on a network limits the destructive potential of viruses and malware. Open-source operating systems such as Linux allow users to choose from a variety of desktop environments, packaging tools, etc., which means that malicious code targeting any of these systems will only affect a subset of all users. Many Windows users are running the same set of applications, enabling viruses to rapidly spread among Microsoft Windows systems by targeting the same exploits on large numbers of hosts. 
While Linux and Unix in general have always natively prevented normal users from making changes to the operating system environment without permission, Windows users are generally not prevented from making these changes, meaning that viruses can easily gain control of the entire system on Windows hosts. This difference has continued partly due to the widespread use of administrator accounts in contemporary versions like Windows XP. In 1997, researchers created and released a virus for Linux known as "Bliss". Bliss, however, requires that the user run it explicitly, and it can only infect programs that the user has access to modify. Unlike Windows users, most Unix users do not log in as an administrator, or "root user", except to install or configure software; as a result, even if a user ran the virus, it could not harm their operating system. The Bliss virus never became widespread, and remains chiefly a research curiosity. Its creator later posted the source code to Usenet, allowing researchers to see how it worked. Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. Personal computers of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use, this was the most successful infection strategy, and boot sector viruses were the most common in the "wild" for many years. Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in bulletin board system (BBS) use, modem use, and software sharing. Bulletin board–driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs. Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by other computers. Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting languages for Microsoft programs such as Microsoft Word and Microsoft Excel and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected email messages, those viruses which did took advantage of the Microsoft Outlook Component Object Model (COM) interface. Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus unique from the "parents". A virus may also send a web address link as an instant message to all the contacts (e.g., friends' and colleagues' e-mail addresses) stored on an infected machine.
If the recipient, thinking the link is from a friend (a trusted source) follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating. Viruses that spread using cross-site scripting were first reported in 2002, and were academically demonstrated in 2005. There have been multiple instances of the cross-site scripting viruses in the "wild", exploiting websites such as MySpace (with the Samy worm) and Yahoo!. Countermeasures In 1989 The ADAPSO Software Industry Division published Dealing With Electronic Vandalism, in which they followed the risk of data loss by "the added risk of losing customer confidence." Many users install antivirus software that can detect and eliminate known viruses when the computer attempts to download or run the executable file (which may be distributed as an email attachment, or on USB flash drives, for example). Some antivirus software blocks known malicious websites that attempt to install malware. Antivirus software does not change the underlying capability of hosts to transmit viruses. Users must update their software regularly to patch security vulnerabilities ("holes"). Antivirus software also needs to be regularly updated to recognize the latest threats. This is because malicious hackers and other individuals are always creating new viruses. The German AV-TEST Institute publishes evaluations of antivirus software for Windows and Android. Examples of Microsoft Windows anti virus and anti-malware software include the optional Microsoft Security Essentials (for Windows XP, Vista and Windows 7) for real-time protection, the Windows Malicious Software Removal Tool (now included with Windows (Security) Updates on "Patch Tuesday", the second Tuesday of each month), and Windows Defender (an optional download in the case of Windows XP). Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). Some such free programs are almost as good as commercial competitors. Common security vulnerabilities are assigned CVE IDs and listed in the US National Vulnerability Database. Secunia PSI is an example of software, free for personal use, that will check a PC for vulnerable out-of-date software, and attempt to update it. Ransomware and phishing scam alerts appear as press releases on the Internet Crime Complaint Center noticeboard. Ransomware is a virus that posts a message on the user's screen saying that the screen or system will remain locked or unusable until a ransom payment is made. Phishing is a deception in which the malicious individual pretends to be a friend, computer security expert, or other benevolent individual, with the goal of convincing the targeted individual to reveal passwords or other personal information. Other commonly used preventive measures include timely operating system updates, software updates, careful Internet browsing (avoiding shady websites), and installation of only trusted software. Certain browsers flag sites that have been reported to Google and that have been confirmed as hosting malware by Google. There are two common methods that an antivirus software application uses to detect viruses, as described in the antivirus software article. The first, and by far the most common method of virus detection is using a list of virus signature definitions. 
This works by examining the content of the computer's memory (its Random Access Memory (RAM) and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives, or USB flash drives), and comparing those files against a database of known virus "signatures". Virus signatures are just strings of code that are used to identify individual viruses; for each virus, the antivirus designer tries to choose a unique signature string that will not be found in a legitimate program. Different antivirus programs use different "signatures" to identify viruses. The disadvantage of this detection method is that users are only protected from viruses that are detected by signatures in their most recent virus definition update, and not protected from new viruses (see "zero-day attack"). A second method to find viruses is to use a heuristic algorithm based on common virus behaviors. This method can detect new viruses for which antivirus security firms have yet to define a "signature", but it also gives rise to more false positives than using signatures. False positives can be disruptive, especially in a commercial environment, because they may lead to a company instructing staff not to use the company computer system until IT services have checked the system for viruses. This can slow down productivity for regular workers. Recovery strategies and methods One may reduce the damage done by viruses by making regular backups of data (and the operating systems) on different media that are either kept unconnected to the system (most of the time, as with an external hard drive), read-only, or not accessible for other reasons, such as their using different file systems. This way, if data is lost through a virus, one can start again using the backup (which will hopefully be recent). If a backup session on optical media like CD and DVD is closed, it becomes read-only and can no longer be affected by a virus (so long as a virus or infected file was not copied onto the CD/DVD). Likewise, an operating system on a bootable CD can be used to start the computer if the installed operating systems become unusable. Backups on removable media must be carefully inspected before restoration. The Gammima virus, for example, propagates via removable flash drives. Many websites run by antivirus software companies provide free online virus scanning, with limited "cleaning" facilities (after all, the purpose of the websites is to sell antivirus products and services). Some websites, like Google subsidiary VirusTotal.com, allow users to upload one or more suspicious files to be scanned and checked by one or more antivirus programs in one operation. Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). Microsoft offers an optional free antivirus utility called Microsoft Security Essentials, a Windows Malicious Software Removal Tool that is updated as part of the regular Windows update regime, and an older optional anti-malware (malware removal) tool Windows Defender that has been upgraded to an antivirus product in Windows 8. Some viruses disable System Restore and other important Windows tools such as Task Manager and CMD. An example of a virus that does this is CiaDoor. Many such viruses can be removed by rebooting the computer, entering Windows "safe mode" with networking, and then using system tools or Microsoft Safety Scanner.
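As a rough illustration of the signature-based scanning described earlier in this section, the sketch below searches files for fixed byte patterns. It is a simplified assumption of how such matching works: the signature names and byte patterns here are invented toy examples, and real products use far more sophisticated signature formats, wildcards, and emulation.

from pathlib import Path

# Toy signature database: name -> byte pattern. These patterns are
# invented examples; real signatures are chosen so they are unlikely
# to appear in legitimate software.
SIGNATURES = {
    "Example.TestSig.A": bytes.fromhex("deadbeefcafebabe"),
    "Example.TestSig.B": b"THIS-IS-A-TOY-SIGNATURE",
}

def scan_file(path: Path) -> list[str]:
    """Return the names of any toy signatures found in the file.

    Reads the whole file into memory, which is acceptable for a sketch
    but not for a production scanner.
    """
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_tree(root: Path) -> dict[str, list[str]]:
    """Scan every regular file under 'root' and report matches."""
    report = {}
    for path in root.rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                report[str(path)] = hits
    return report

if __name__ == "__main__":
    # Note: this script will flag its own source file, because the
    # ASCII toy pattern appears literally in the source text.
    for file, hits in scan_tree(Path(".")).items():
        print(f"{file}: {', '.join(hits)}")

The sketch also makes the stated disadvantage concrete: a file containing a brand-new virus matches nothing in the database, which is exactly the zero-day gap that heuristic methods try to cover.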
System Restore on Windows Me, Windows XP, Windows Vista and Windows 7 can restore the registry and critical system files to a previous checkpoint. Often a virus will cause a system to "hang" or "freeze", and a subsequent hard reboot will render a system restore point from the same day corrupted. Restore points from previous days should work, provided the virus is not designed to corrupt the restore files and does not exist in previous restore points. Microsoft's System File Checker (improved in Windows 7 and later) can be used to check for, and repair, corrupted system files. Restoring an earlier "clean" (virus-free) copy of the entire partition from a cloned disk, a disk image, or a backup copy is one solution; restoring an earlier backup disk "image" is relatively simple to do, usually removes any malware, and may be faster than "disinfecting" the computer or than reinstalling and reconfiguring the operating system and programs from scratch, as described below, then restoring user preferences. Reinstalling the operating system is another approach to virus removal. It may be possible to recover copies of essential user data by booting from a live CD, or connecting the hard drive to another computer and booting from the second computer's operating system, taking great care not to infect that computer by executing any infected programs on the original drive. The original hard drive can then be reformatted and the OS and all programs installed from original media. Once the system has been restored, precautions must be taken to avoid reinfection from any restored executable files. Popular culture The first known description of a self-reproducing program in fiction is in the 1970 short story The Scarred Man by Gregory Benford, which describes a computer program called VIRUS which, when installed on a computer with telephone modem dialing capability, randomly dials phone numbers until it hits a modem that is answered by another computer, and then attempts to program the answering computer with its own program, so that the second computer will also begin dialing random numbers, in search of yet another computer to program. The program rapidly spreads exponentially through susceptible computers and can only be countered by a second program called VACCINE. His story was based on an actual computer virus written in FORTRAN that Benford had created and run on the lab computer in the 1960s, as a proof of concept, and which he told John Brunner about in 1970. The idea was explored further in two 1972 novels, When HARLIE Was One by David Gerrold and The Terminal Man by Michael Crichton, and became a major theme of the 1975 novel The Shockwave Rider by John Brunner. The 1973 Michael Crichton sci-fi film Westworld made an early mention of the concept of a computer virus, with a computer virus being a central plot theme that causes androids to run amok. Alan Oppenheimer's character summarizes the problem by stating that "...there's a clear pattern here which suggests an analogy to an infectious disease process, spreading from one...area to the next." To this, the replies come: "Perhaps there are superficial similarities to disease" and "I must confess I find it difficult to believe in a disease of machinery." Other malware The term "virus" is also misused by extension to refer to other types of malware. 
"Malware" encompasses computer viruses along with many other forms of malicious software, such as computer "worms", ransomware, spyware, adware, trojan horses, keyloggers, rootkits, bootkits, malicious Browser Helper Objects (BHOs), and other malicious software. The majority of active malware threats are trojan horse programs or computer worms rather than computer viruses. The term computer virus, coined by Fred Cohen in 1985, is a misnomer. Viruses often perform some type of harmful activity on infected host computers, such as acquisition of hard disk space or central processing unit (CPU) time, accessing and stealing private information (e.g., credit card numbers, debit card numbers, phone numbers, names, email addresses, passwords, bank information, house addresses, etc.), corrupting data, displaying political, humorous or threatening messages on the user's screen, spamming their e-mail contacts, logging their keystrokes, or even rendering the computer useless. However, not all viruses carry a destructive "payload" or attempt to hide themselves; the defining characteristic of viruses is that they are self-replicating computer programs that modify other software without user consent by injecting themselves into said programs, similar to a biological virus which replicates within living cells.
Technology
Computer security
null
18994221
https://en.wikipedia.org/wiki/Hospital
Hospital
A hospital is a healthcare institution providing patient treatment with specialized health science and auxiliary healthcare staff and medical equipment. The best-known type of hospital is the general hospital, which typically has an emergency department to treat urgent health problems ranging from fire and accident victims to a sudden illness. A district hospital typically is the major health care facility in its region, with many beds for intensive care and additional beds for patients who need long-term care. Specialized hospitals include trauma centers, rehabilitation hospitals, children's hospitals, geriatric hospitals, and hospitals for specific medical needs, such as psychiatric hospitals for psychiatric treatment and other disease-specific categories. Specialized hospitals can help reduce health care costs compared to general hospitals. Hospitals are classified as general, specialty, or government depending on the sources of income received. A teaching hospital combines assistance to people with teaching to health science students and auxiliary healthcare students. A health science facility smaller than a hospital is generally called a clinic. Hospitals have a range of departments (e.g. surgery and urgent care) and specialist units such as cardiology. Some hospitals have outpatient departments and some have chronic treatment units. Common support units include a pharmacy, pathology, and radiology. Hospitals are typically funded by the public sector, health organizations (for-profit or nonprofit), health insurance companies, or charities, including direct charitable donations. Historically, hospitals were often founded and funded by religious orders, or by charitable individuals and leaders. Hospitals are currently staffed by professional physicians, surgeons, nurses, and allied health practitioners. In the past, however, this work was usually performed by the members of founding religious orders or by volunteers. However, there are various Catholic religious orders, such as the Alexians and the Bon Secours Sisters, that still focused on hospital ministry as of the late 1990s, as well as several other Christian denominations, including the Methodists and Lutherans, which run hospitals. In accordance with the original meaning of the word, hospitals were originally "places of hospitality", and this meaning is still preserved in the names of some institutions such as the Royal Hospital Chelsea, established in 1681 as a retirement and nursing home for veteran soldiers. Etymology During the Middle Ages, hospitals served different functions from modern institutions in that they were almshouses for the poor, hostels for pilgrims, or hospital schools. The word "hospital" comes from the Latin hospes, signifying a stranger or foreigner, hence a guest. Another noun derived from this, hospitium, came to signify hospitality, that is, the relation between guest and shelterer: hospitality, friendliness, and hospitable reception. By metonymy, the Latin word then came to mean a guest-chamber, guest's lodging, an inn. Hospes is thus the root for the English words host (where the p was dropped for convenience of pronunciation), hospitality, hospice, hostel, and hotel. The latter modern word derives from Latin via the Old French romance word hostel, which developed a silent s, which letter was eventually removed from the word, the loss of which is signified by a circumflex in the modern French word hôtel. The German word Spital shares similar roots.
Types Some patients go to a hospital just for diagnosis, treatment, or therapy and then leave ("outpatients") without staying overnight; while others are "admitted" and stay overnight or for several days or weeks or months ("inpatients"). Hospitals are usually distinguished from other types of medical facilities by their ability to admit and care for inpatients whilst the others, which are smaller, are often described as clinics. General and acute care The best-known type of hospital is the general hospital, also known as an acute-care hospital. These facilities handle many kinds of disease and injury, and normally have an emergency department (sometimes known as "accident & emergency") or trauma center to deal with immediate and urgent threats to health. Larger cities may have several hospitals of varying sizes and facilities. Some hospitals, especially in the United States and Canada, have their own ambulance service. District A district hospital typically is the major health care facility in its region, with large numbers of beds for intensive care, critical care, and long-term care. In California, "district hospital" refers specifically to a class of healthcare facility created shortly after World War II to address a shortage of hospital beds in many local communities. Even today, district hospitals are the sole public hospitals in 19 of California's counties, and are the sole locally accessible hospital within nine additional counties in which one or more other hospitals are present at a substantial distance from a local community. Twenty-eight of California's rural hospitals and 20 of its critical-access hospitals are district hospitals. They are formed by local municipalities, have boards that are individually elected by their local communities, and exist to serve local needs. They are a particularly important provider of healthcare to uninsured patients and patients with Medi-Cal (which is California's Medicaid program, serving low-income persons, some senior citizens, persons with disabilities, children in foster care, and pregnant women). In 2012, district hospitals provided $54 million in uncompensated care in California. Specialized A specialty hospital is primarily and exclusively dedicated to one or a few related medical specialties. Subtypes include rehabilitation hospitals, children's hospitals, seniors' (geriatric) hospitals, long-term acute care facilities, and hospitals for dealing with specific medical needs such as psychiatric problems (see psychiatric hospital), cancer treatment, certain disease categories such as cardiac, oncology, or orthopedic problems, and so forth. In Germany, specialised hospitals are called Fachkrankenhaus; an example is Fachkrankenhaus Coswig (thoracic surgery). In India, specialty hospitals are known as super-specialty hospitals and are distinguished from multispecialty hospitals which are composed of several specialties. Specialised hospitals can help reduce health care costs compared to general hospitals. For example, Narayana Health's cardiac unit in Bangalore specialises in cardiac surgery and allows for a significantly greater number of patients. It has 3,000 beds and performs 3,000 paediatric cardiac operations annually, the largest number in the world for such a facility. Surgeons are paid on a fixed salary instead of per operation, thus when the number of procedures increases, the hospital is able to take advantage of economies of scale and reduce its cost per procedure. 
Each specialist may also become more efficient by working on one procedure like a production line. Teaching A teaching hospital delivers healthcare to patients as well as training to prospective medical professionals such as medical students and student nurses. It may be linked to a medical school or nursing school, and may be involved in medical research. Students may also observe clinical work in the hospital. Clinics Clinics generally provide only outpatient services, but some may have a few inpatient beds and a limited range of services that may otherwise be found in typical hospitals. Departments or wards A hospital contains one or more wards that house hospital beds for inpatients. It may also have acute services such as an emergency department, operating theatre, and intensive care unit, as well as a range of medical specialty departments. A well-equipped hospital may be classified as a trauma center. They may also have other services such as a hospital pharmacy, radiology, pathology, and medical laboratories. Some hospitals have outpatient departments such as behavioral health services, dentistry, and rehabilitation services. A hospital may also have a department of nursing, headed by a chief nursing officer or director of nursing. This department is responsible for the administration of professional nursing practice, research, and policy for the hospital. Many units have both a nursing and a medical director who serve as administrators for their respective disciplines within that unit. For example, within an intensive care nursery, a medical director is responsible for physicians and medical care, while the nursing manager is responsible for all the nurses and nursing care. Support units may include a medical records department, release of information department, technical support, clinical engineering, facilities management, plant operations, dining services, and security departments. Remote monitoring The COVID-19 pandemic stimulated the development of virtual wards across the British NHS. Patients are managed at home, monitoring their own oxygen levels using an oxygen saturation probe if necessary and supported by telephone. West Hertfordshire Hospitals NHS Trust managed around 1200 patients at home between March and June 2020 and planned to continue the system after COVID-19, initially for respiratory patients. Mersey Care NHS Foundation Trust started a COVID Oximetry@Home service in April 2020. This enables them to monitor more than 5000 patients a day in their own homes. The technology allows nurses, carers, or patients to record and monitor vital signs such as blood oxygen levels. History Early examples In early India, Fa Xian, a Chinese Buddhist monk who travelled across India around 400 AD, recorded examples of healing institutions. According to the Mahavamsa, the ancient chronicle of Sinhalese royalty, written in the sixth century AD, King Pandukabhaya of Sri Lanka (r. 437–367 BC) had lying-in-homes and hospitals (Sivikasotthi-Sala). A hospital and medical training center also existed at Gundeshapur, a major city in the southwest of the Sassanid Persian Empire, founded in AD 271 by Shapur I. In ancient Greece, temples dedicated to the healer-god Asclepius, known as Asclepeia, functioned as centers of medical advice, prognosis, and healing. The Asclepeia spread to the Roman Empire. While public healthcare was non-existent in the Roman Empire, military hospitals called valetudinaria did exist, stationed in military barracks, and would serve the soldiers and slaves within the fort.
Evidence exists that some civilian hospitals, while unavailable to the general Roman population, were occasionally built privately within extremely wealthy Roman households located in the countryside for the use of that family, although this practice seems to have ended in 80 AD. Middle Ages The declaration of Christianity as an accepted religion in the Roman Empire drove an expansion of the provision of care. Following the First Council of Nicaea in AD 325, construction of a hospital in every cathedral town was begun, including among the earliest hospitals those by Saint Sampson in Constantinople and by Basil, bishop of Caesarea in modern-day Turkey. By the twelfth century, Constantinople had two well-organised hospitals, staffed by doctors who were both male and female. Facilities included systematic treatment procedures and specialised wards for various diseases. The earliest general hospital in the Islamic world was built in 805 in Baghdad by Harun Al-Rashid. By the 10th century, Baghdad had five more hospitals, while Damascus had six hospitals by the 15th century, and Córdoba alone had 50 major hospitals, many exclusively for the military, by the end of the 15th century. The Islamic bimaristan served as a center of medical treatment, as well as a nursing home and lunatic asylum. It typically treated the poor, as the rich would have been treated in their own homes. Hospitals in this era were the first to require medical licenses for doctors, and compensation for negligence could be made. Hospitals were forbidden by law to turn away patients who were unable to pay. These hospitals were financially supported by waqfs, as well as state funds. In India, public hospitals existed at least since the reign of Firuz Shah Tughlaq in the 14th century. The Mughal emperor Jahangir in the 17th century established hospitals in large cities at government expense, with records showing salaries and grants for medicine being paid for by the government. In China, during the Song dynasty, the state began to take on social welfare functions previously provided by Buddhist monasteries and instituted public hospitals, hospices and dispensaries. Early modern and Enlightenment Europe In Europe the medieval concept of Christian care evolved during the 16th and 17th centuries into a secular one. In England, after the dissolution of the monasteries in 1540 by King Henry VIII, the church abruptly ceased to be the supporter of hospitals, and only by direct petition from the citizens of London were the hospitals St Bartholomew's, St Thomas's and St Mary of Bethlehem's (Bedlam) endowed directly by the crown; this was the first instance of secular support being provided for medical institutions. In 1682, Charles II founded the Royal Hospital Chelsea as a retirement home for old soldiers known as Chelsea Pensioners, an instance of the use of the word "hospital" to mean an almshouse. Ten years later, Mary II founded the Royal Hospital for Seamen, Greenwich, with the same purpose. The voluntary hospital movement began in the early 18th century, with hospitals being founded in London by the 1720s, including Westminster Hospital (1719) promoted by the private bank C. Hoare & Co and Guy's Hospital (1724) funded from the bequest of the wealthy merchant Thomas Guy. Other hospitals sprang up in London and other British cities over the century, many paid for by private subscriptions. St Bartholomew's in London was rebuilt from 1730 to 1759, and the London Hospital, Whitechapel, opened in 1752.
These hospitals represented a turning point in the function of the institution; they began to evolve from being basic places of care for the sick to becoming centers of medical innovation and discovery and the principal place for the education and training of prospective practitioners. Some of the era's greatest surgeons and doctors worked and passed on their knowledge at the hospitals. They also changed from being mere homes of refuge to being complex institutions for the provision and advancement of medicine and care for sick. The Charité was founded in Berlin in 1710 by King Frederick I of Prussia as a response to an outbreak of plague. Voluntary hospitals also spread to Colonial America; Bellevue Hospital in New York City opened in 1736, first as a workhouse and then later as a hospital; Pennsylvania Hospital in Philadelphia opened in 1752, New York Hospital, now Weill Cornell Medical Center in New York City opened in 1771, and Massachusetts General Hospital in Boston opened in 1811. When the Vienna General Hospital opened in 1784 as the world's largest hospital, physicians acquired a new facility that gradually developed into one of the most important research centers. Another Enlightenment era charitable innovation was the dispensary; these would issue the poor with medicines free of charge. The London Dispensary opened its doors in 1696 as the first such clinic in the British Empire. The idea was slow to catch on until the 1770s, when many such organisations began to appear, including the Public Dispensary of Edinburgh (1776), the Metropolitan Dispensary and Charitable Fund (1779) and the Finsbury Dispensary (1780). Dispensaries were also opened in New York 1771, Philadelphia 1786, and Boston 1796. The Royal Naval Hospital, Stonehouse, Plymouth, was a pioneer of hospital design in having "pavilions" to minimize the spread of infection. John Wesley visited in 1785, and commented "I never saw anything of the kind so complete; every part is so convenient, and so admirably neat. But there is nothing superfluous, and nothing purely ornamented, either within or without." This revolutionary design was made more widely known by John Howard, the philanthropist. In 1787 the French government sent two scholar administrators, Coulomb and Tenon, who had visited most of the hospitals in Europe. They were impressed and the "pavilion" design was copied in France and throughout Europe. 19th century English physician Thomas Percival (1740–1804) wrote a comprehensive system of medical conduct, Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons (1803) that set the standard for many textbooks. In the mid-19th century, hospitals and the medical profession became more professionalised, with a reorganisation of hospital management along more bureaucratic and administrative lines. The Apothecaries Act 1815 made it compulsory for medical students to practise for at least half a year at a hospital as part of their training. Florence Nightingale pioneered the modern profession of nursing during the Crimean War when she set an example of compassion, commitment to patient care and diligent and thoughtful hospital administration. The first official nurses' training programme, the Nightingale School for Nurses, was opened in 1860, with the mission of training nurses to work in hospitals, to work with the poor and to teach. 
Nightingale was instrumental in reforming the nature of the hospital, by improving sanitation standards and changing the image of the hospital from a place the sick would go to die, to an institution devoted to recuperation and healing. She also emphasised the importance of statistical measurement for determining the success rate of a given intervention and pushed for administrative reform at hospitals. By the late 19th century, the modern hospital was beginning to take shape with a proliferation of a variety of public and private hospital systems. By the 1870s, hospitals had more than trebled their original average intake of 3,000 patients. In continental Europe the new hospitals generally were built and run from public funds. The National Health Service, the principal provider of health care in the United Kingdom, was founded in 1948. During the nineteenth century, the Second Viennese Medical School emerged with the contributions of physicians such as Carl Freiherr von Rokitansky, Josef Škoda, Ferdinand Ritter von Hebra, and Ignaz Philipp Semmelweis. Basic medical science expanded and specialisation advanced. Furthermore, the first dermatology, eye, as well as ear, nose, and throat clinics in the world were founded in Vienna, being considered as the birth of specialised medicine. 20th century and beyond By the late 19th and early 20th centuries, medical advancements such as anesthesia and sterile techniques that could make surgery less risky, and the availability of more advanced diagnostic devices such as X-rays, continued to make hospitals a more attractive option for treatment. Modern hospitals measure various efficiency metrics such as occupancy rates, the average length of stay, time to service, patient satisfaction, physician performance, patient readmission rate, inpatient mortality rate, and case mix index. In the United States, the number of hospitalizations grew to its peak in 1981 with 171 admissions per 1,000 Americans and 6,933 hospitals. This trend subsequently reversed, with the rate of hospitalization falling by more than 10% and the number of US hospitals shrinking from 6,933 in 1981 to 5,534 in 2016. Occupancy rates also dropped from 77% in 1980 to 60% in 2013. Among the reasons for this are the increasing availability of more complex care elsewhere such as at home or the physicians' offices and also the less therapeutic and more life-threatening image of the hospitals in the eyes of the public. In the US, a patient may sleep in a hospital bed, but be considered outpatient and "under observation" if not formally admitted. In the U.S., inpatient stays are covered under Medicare Part A, but a hospital might keep a patient under observation which is only covered under Medicare Part B, and subjects the patient to additional coinsurance costs. In 2013, the Center for Medicare and Medicaid Services (CMS) introduced a "two-midnight" rule for inpatient admissions, intended to reduce an increasing number of long-term "observation" stays being used for reimbursement. This rule was later dropped in 2018. In 2016 and 2017, healthcare reform and a continued decline in admissions resulted in US hospital-based healthcare systems performing poorly financially. Microhospitals, with bed capacities of between eight and fifty, are expanding in the United States. Similarly, freestanding emergency rooms, which transfer patients that require inpatient care to hospitals, were popularised in the 1970s and have since expanded rapidly across the United States. 
The Catholic Church is the largest non-government provider of health care services in the world. It has around 18,000 clinics, 16,000 homes for the elderly and those with special needs, and 5,500 hospitals, with 65 percent of them located in developing countries. In 2010, the Church's Pontifical Council for the Pastoral Care of Health Care Workers said that the Church manages 26% of the world's health care facilities. Funding Modern hospitals derive funding from a variety of sources. They may be funded by private payment and health insurance, by public expenditure, or by charitable donations. In the United Kingdom, the National Health Service delivers health care to legal residents funded by the state "free at the point of delivery", and emergency care free to anyone regardless of nationality or status. Due to the need for hospitals to prioritise their limited resources, there is a tendency in countries with such systems for 'waiting lists' for non-crucial treatment, so those who can afford it may take out private health care to access treatment more quickly. In the United States, hospitals typically operate privately and in some cases on a for-profit basis, such as HCA Healthcare. The list of procedures and their prices are billed with a chargemaster; however, these prices may be lower for health care obtained within healthcare networks. Legislation requires hospitals to provide care to patients in life-threatening emergency situations regardless of the patient's ability to pay. Privately funded hospitals which admit uninsured patients in emergency situations incur direct financial losses, such as in the aftermath of Hurricane Katrina. Quality and safety As the quality of health care has increasingly become an issue around the world, hospitals have increasingly had to pay serious attention to this matter. Independent external assessment of quality is one of the most powerful ways to assess this aspect of health care, and hospital accreditation is one means by which this is achieved. In many parts of the world such accreditation is sourced from other countries, a phenomenon known as international healthcare accreditation, by groups such as Accreditation Canada in Canada, the Joint Commission in the U.S., the Trent Accreditation Scheme in Great Britain, and the Haute Autorité de santé (HAS) in France. In England, hospitals are monitored by the Care Quality Commission. In 2020, it turned its attention to hospital food standards after seven patient deaths from listeria linked to pre-packaged sandwiches and salads in 2019, saying "Nutrition and hydration is part of a patient's recovery." The World Health Organization reported in 2011 that being admitted to a hospital was far riskier than flying. Globally, the chance of a patient being subject to a treatment error in a hospital was about 10%, and the chance of death resulting from an error was about one in 300, according to Liam Donaldson. 7% of hospitalised patients in developed countries, and 10% in developing countries, acquire at least one health care-associated infection. In the U.S., 1.7 million infections are acquired in hospital each year, leading to 100,000 deaths, figures much worse than those in Europe, where there were 4.5 million infections and 37,000 deaths. Architecture Modern hospital buildings are designed to minimise the effort of medical personnel and the possibility of contamination while maximising the efficiency of the whole system.
Travel time for personnel within the hospital and the transportation of patients between units is facilitated and minimised. The building also should be built to accommodate heavy departments such as radiology and operating rooms, while space for special wiring, plumbing, and waste disposal must be allowed for in the design. However, many hospitals, even those considered "modern", are the product of continual and often badly managed growth over decades or even centuries, with utilitarian new sections added on as needs and finances dictate. As a result, Dutch architectural historian Cor Wagenaar has been sharply critical of the design of many such hospitals. Some newer hospitals now try to re-establish design that takes the patient's psychological needs into account, such as providing more fresh air, better views and more pleasant colour schemes. These ideas harken back to the late eighteenth century, when the concepts of providing fresh air and access to the 'healing powers of nature' were first employed by hospital architects in improving their buildings. Research by the British Medical Association shows that good hospital design can reduce patients' recovery times. Exposure to daylight is effective in reducing depression. Single-sex accommodation helps ensure that patients are treated in privacy and with dignity. Exposure to nature and hospital gardens is also important – looking out windows improves patients' moods and reduces blood pressure and stress levels. Open windows in patient rooms have also demonstrated some evidence of beneficial outcomes by improving airflow and increasing microbial diversity. Eliminating long corridors can reduce nurses' fatigue and stress. Another ongoing major development is the change from a ward-based system (where patients are accommodated in communal rooms, separated by movable partitions) to one in which they are accommodated in individual rooms. The ward-based system has been described as very efficient, especially for the medical staff, but is considered to be more stressful for patients and detrimental to their privacy. A major constraint on providing all patients with their own rooms is however found in the higher cost of building and operating such a hospital; this causes some hospitals to charge for private rooms.
Biology and health sciences
Health, fitness, and medicine
null
18994268
https://en.wikipedia.org/wiki/DeviantArt
DeviantArt
DeviantArt (formerly stylized as deviantART) is an American online community that features artwork, videography, photography, and literature, launched on August 7, 2000, by Angelo Sotira, Scott Jarkoff, and Matthew Stephens, among others. DeviantArt, Inc. is headquartered in the Hollywood area of Los Angeles, California. DeviantArt had about 36 million visitors annually by 2008. In 2010, DeviantArt users were submitting about 1.4 million favorites and about 1.5 million comments daily. In 2011, it was the thirteenth largest social network with about 3.8 million weekly visits. Several years later, in 2017, the site had more than 25 million members and more than 250 million submissions. In February 2017, the website was acquired by Israeli software company Wix.com in a $36 million deal. History Creation DeviantArt started as a site connected with people who took computer applications and modified them to their own tastes, or who posted the applications from the original designs. As the site grew, members in general became known as artists and submissions as arts. DeviantArt was originally launched on August 7, 2000, by Scott Jarkoff, Matt Stephens, Angelo Sotira, and others, as part of a larger network of music-related websites called the Dmusic Network. The site flourished largely because of its unique offering and the contributions of its core member base and a team of volunteers after its launch, but it was officially incorporated in 2001, about eight months after launch. DeviantArt was loosely inspired by projects like Winamp facelift, customize.org, deskmod.com, screenphuck.com, and skinz.org, all application skin-based websites. Sotira entrusted all public aspects of the project to Scott Jarkoff as an engineer and visionary to launch the early program. All three co-founders shared backgrounds in the application skinning community, but it was Matt Stephens whose major contribution to DeviantArt was the suggestion to take the concept further than skinning and more toward an art community. Many of the individuals involved with the initial development and promotion of DeviantArt still hold positions with the project. Angelo Sotira is the chief executive officer. On November 14, 2006, DeviantArt introduced the option for artists to submit their works under Creative Commons licenses, giving them the right to choose how their works can be used. A Creative Commons license is one of several public copyright licenses that allow the distribution of copyrighted works. On September 30, 2007, a film category was added to DeviantArt, allowing artists to upload videos. An artist and other viewers can add annotations to sections of the film, giving comments or critiques to the artist about a particular moment in the film. In 2007, DeviantArt received $3.5 million in Series A (first round) funding from undisclosed investors, and in 2013, it received $10 million in Series B funding. On December 4, 2014, the site unveiled a new logo and announced the release of an official mobile app on both iOS and Android, released on December 10, 2014. On February 23, 2017, DeviantArt was acquired by Wix.com, Inc. for $36 million. The site plans to integrate DeviantArt and Wix functionality, including the ability to utilize DeviantArt resources on websites built with Wix and the integration of some of Wix's design tools into the site. As of March 1, 2017, Syria was banned from accessing DeviantArt's services entirely, with the site citing US and Israeli sanctions.
On February 19, 2018, after Syrian user Mythiril used a VPN to access the site and disclosed the geoblocking in a journal titled "The hypocrisy of deviantArt," DeviantArt ended the geoblocking except for commercial features. In autumn of 2018, spambots began hacking into an indeterminately large number of long-inactive accounts and placing spam weblinks in their victims' About sections (formerly known as DeviantIDs), where users of the site display their public profile information. An investigation into this matter began in January 2019. This situation ended sometime in late 2021. Copyright and licensing issues There is no review for potential copyright and Creative Commons licensing violations when a work is submitted to DeviantArt, so potential violations can remain unnoticed until reported to administrators using the mechanism available for such issues. Some members of the community have been the victims of copyright infringement from vendors using artwork illegally on products and prints, as reported in 2007. The reporting system for counteracting copyright infringement directly on the site has been subject to considerable criticism from members of the site, given that it may take weeks, or even a month, before a filed complaint for copyright infringement is answered. Contests for companies and academia Due to the nature of DeviantArt as an art community with a worldwide reach, companies use DeviantArt to promote themselves and create more advertising through contests. CoolClimate is a research network connected with the University of California, and it held a contest in 2012 to address the impact of climate change. Worldwide submissions were received, and the winner was featured in The Huffington Post. Various car companies have held contests. Dodge ran a contest in 2012 for art of the Dodge Dart, and over 4,000 submissions were received. Winners received cash and item prizes, and were featured in a gallery at Dodge-Chrysler headquarters. Lexus partnered with DeviantArt in 2013 to run a contest for cash and other prizes based on their Lexus IS design; the winner's design became a modified Lexus IS and was showcased at the SEMA 2013 show in Los Angeles, California. DeviantArt hosts contests for upcoming movies, such as Riddick. Fan art for Riddick was submitted, and director David Twohy chose the winners, who would receive cash prizes and some other DeviantArt-related prizes, as well as having their artwork made into official fan-art posters for events. A similar contest was held for Dark Shadows, where winners received cash and other prizes. Video games also conduct contests with DeviantArt, such as the 2013 Tomb Raider contest. The winner had their art made into an official print sold internationally at the Tomb Raider store and received cash and other prizes. Other winners also received cash and DeviantArt-related prizes. Litigation In January 2023, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists. In July 2023, U.S. District Judge William Orrick indicated that he was inclined to dismiss most of the lawsuit filed by Andersen, McKernan, and Ortiz, but allowed them to file a new complaint. Website The site has over 550 million images which have been uploaded by its over 75 million registered members.
By July 2011, DeviantArt was the largest online art community. Members of DeviantArt may leave comments and critiques on individual deviation pages, allowing the site to be called "a [free] peer evaluation application." Along with textual critique, DeviantArt now offers the option to leave a small picture as a comment. This can be achieved using an option of DeviantArt Muro, which is a browser-based drawing tool that DeviantArt has developed and hosts. However, only members of DeviantArt can save their work as deviations. Another feature of Muro is what is called "Redraw," it records the user as they draw their image, and then the user can post the entire process as a film deviation. Some artists in late 2013 began experimenting with the use of breakfast cereal as the subject of their pieces, although this trend has only started spreading. Individual deviations are displayed on their own pages, with a list of statistical information about the image, as well as a place for comments by the artist and other members, and the option to share through other social media (Facebook, Twitter, etc.). Prior to Version 9, Deviations were required to be organized into categories when a member uploaded an image and this allowed DeviantArt's search engine to find images concerning similar topics. Individual members can organize their own deviations into folders on their personal pages. The member pages (profiles) show a member's personally uploaded deviations and journal postings. Journals are like personal blogs for the member pages, and the choice of topic is up to each member; some use it to talk about their personal or art-related lives, others use it to spread awareness or marshal support for a cause. Also displayed are a member's favorites, a collection of other users' images from DeviantArt that a member saves to its own folder. Another thing found on the profile page is a member's watchers; a member adds another member to their watch list in order to be notified when that member uploads something. The watcher notifications are gathered in a member's Message Center with other notices, like when other users comment on that member's deviations, or when the member's image has been put in someone's favorites. Members can build groups that any registered member of the site can join. These groups are usually based on an artist's chosen medium and content. Some examples of these are Literature (poetry, prose, etc.), Drawing (traditional, digital, or mixed-media), Photography (macro, nature, fashion, stills), and many others. Within these groups are where they do collaborations and have their art featured and introduced to artists of the same kind. DeviantArt does not allow pornographic, sexually explicit and/or obscene material to be submitted; however, "tasteful" nudity is allowed, even as photographs. To view mature artwork and content, members must be at least 18 years of age and to enable the content, they have to make an account. In order to communicate on a more private level,
Technology
Social network and blogging
null
16767087
https://en.wikipedia.org/wiki/Cure
Cure
A cure is a substance or procedure that ends a medical condition, such as a medication, a surgical operation, a change in lifestyle or even a philosophical mindset that helps end a person's suffering; or the state of being healed, or cured. The medical condition could be a disease, mental illness, genetic disorder, or simply a condition a person considers socially undesirable, such as baldness or lack of breast tissue. An incurable disease may or may not be a terminal illness; conversely, a curable illness can still result in the patient's death. The proportion of people with a disease that are cured by a given treatment, called the cure fraction or cure rate, is determined by comparing disease-free survival of treated people against a matched control group that never had the disease. Another way of determining the cure fraction and/or "cure time" is by measuring when the hazard rate in a diseased group of individuals returns to the hazard rate measured in the general population. Inherent in the idea of a cure is the permanent end to the specific instance of the disease. When a person has the common cold, and then recovers from it, the person is said to be cured, even though the person might someday catch another cold. Conversely, a person who has successfully managed a disease, such as diabetes mellitus, so that it produces no undesirable symptoms for the moment, but without actually permanently ending it, is not cured. Related concepts, whose meaning can differ, include response, remission and recovery. Statistical model In complex diseases, such as cancer, researchers rely on statistical comparisons of disease-free survival (DFS) of patients against matched, healthy control groups. This logically rigorous approach essentially equates indefinite remission with cure. The comparison is usually made through the Kaplan-Meier estimator approach. The simplest cure rate model was published by Joseph Berkson and Robert P. Gage in 1952. In this model, the survival at any given time is equal to those that are cured plus those that are not cured, but who have not yet died or, in the case of diseases that feature asymptomatic remissions, have not yet re-developed signs and symptoms of the disease. When all of the non-cured people have died or re-developed the disease, only the permanently cured members of the population will remain, and the DFS curve will be perfectly flat. The earliest point in time that the curve goes flat is the point at which all remaining disease-free survivors are declared to be permanently cured. If the curve never goes flat, then the disease is formally considered incurable (with the existing treatments). The Berkson and Gage equation is S(t) = p + (1 − p) × exp(−λt), where S(t) is the proportion of people surviving at any given point in time, p is the proportion that are permanently cured, and exp(−λt) is an exponential curve that represents the survival of the non-cured people, with λ the hazard rate of that group. Cure rate curves can be determined through an analysis of the data. The analysis allows the statistician to determine the proportion of people that are permanently cured by a given treatment, and also how long after treatment it is necessary to wait before declaring an asymptomatic individual to be cured. Several cure rate models exist, such as the expectation-maximization algorithm and Markov chain Monte Carlo model. It is possible to use cure rate models to compare the efficacy of different treatments.
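A small numerical sketch of the Berkson and Gage mixture model described above is given below, evaluating the disease-free survival curve S(t) = p + (1 − p) × exp(−λt) for an assumed cure fraction and hazard rate. The parameter values are arbitrary illustrations, not drawn from any study.

import math

def berkson_gage_survival(t: float, cure_fraction: float, hazard: float) -> float:
    """Disease-free survival S(t) = p + (1 - p) * exp(-hazard * t).

    'cure_fraction' (p) is the proportion permanently cured; the exponential
    term models the survival of the non-cured group. As t grows, S(t)
    flattens out at p, which is why a plateau in the observed survival
    curve is read as the cure fraction.
    """
    p = cure_fraction
    return p + (1.0 - p) * math.exp(-hazard * t)

# Illustrative parameters: 40% cured, hazard rate 0.5 per year for the rest.
for year in range(0, 11):
    print(f"year {year:2d}: S = {berkson_gage_survival(year, 0.40, 0.5):.3f}")

Running the loop shows the curve falling quickly in the early years and then levelling off near 0.40, the assumed cure fraction, which mirrors the flattening of the DFS curve discussed above.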
Generally, the survival curves are adjusted for the effects of normal aging on mortality, especially when diseases of older people are being studied. From the perspective of the patient, particularly one that has received a new treatment, the statistical model may be frustrating. It may take many years to accumulate sufficient information to determine the point at which the DFS curve flattens (and therefore no more relapses are expected). Some diseases may be discovered to be technically incurable, but also to require treatment so infrequently as to be not materially different from a cure. Other diseases may prove to have multiple plateaus, so that what was once hailed as a "cure" results unexpectedly in very late relapses. Consequently, patients, parents and psychologists developed the notion of psychological cure, or the moment at which the patient decides that the treatment was sufficiently likely to be a cure as to be called a cure. For example, a patient may declare himself to be "cured" and determine to live his life as if the cure were definitely confirmed, immediately after treatment. Related terms Response Response is a partial reduction in symptoms after treatment. Recovery Recovery is a restoration of health or functioning. A person who has been cured may not be fully recovered, and a person who has recovered may not be cured, as in the case of a person in a temporary remission or who is an asymptomatic carrier for an infectious disease. Prevention Prevention is a way to avoid an injury, sickness, disability, or disease in the first place, and generally it will not help someone who is already ill (though there are exceptions). For instance, many babies and young children are vaccinated against polio (a highly infectious disease) and other infectious diseases, which prevents them from contracting polio. But the vaccination does not work on patients who already have polio. A treatment or cure is applied after a medical problem has already started. Therapy Therapy treats a problem, and may or may not lead to its cure. In incurable conditions, a treatment ameliorates the medical condition, often only for as long as the treatment is continued or for a short while after treatment is ended. For example, there is no cure for AIDS, but treatments are available to slow down the harm done by HIV and extend the treated person's life. Treatments don't always work. For example, chemotherapy is a treatment for cancer, but it may not work for every patient. In easily cured forms of cancer, such as childhood leukaemias, testicular cancer and Hodgkin lymphoma, cure rates may approach 90%. In other forms, treatment may be essentially impossible. A treatment need not be successful in 100% of patients to be considered curative. A given treatment may permanently cure only a small number of patients; so long as those patients are cured, the treatment is considered curative. Examples Cures can take the form of natural antibiotics (for bacterial infections), synthetic antibiotics such as the sulphonamides, or fluoroquinolones, antivirals (for a very few viral infections), antifungals, antitoxins, vitamins, gene therapy, surgery, chemotherapy, radiotherapy, and so on. Despite a number of cures being developed, the list of incurable diseases remains long. 1700s Scurvy became curable (as well as preventable) with doses of vitamin C (for example, in limes) when James Lind published A Treatise on the Scurvy (1753).
1890s Antitoxins to diphtheria and tetanus toxins were produced by Emil Adolf von Behring and his colleagues from 1890 onwards. The use of diphtheria antitoxin for the treatment of diphtheria was regarded by The Lancet as the "most important advance of the [19th] Century in the medical treatment of acute infectious disease". 1930s Sulphonamides became the first widely available cure for bacterial infections. Antimalarials were first synthesized, making malaria curable. 1940s Bacterial infections became curable with the development of antibiotics. 2010s Hepatitis C, a viral infection, became curable through treatment with antiviral medications.
Biology and health sciences
Treatments
Health
15054973
https://en.wikipedia.org/wiki/Ardipithecus%20kadabba
Ardipithecus kadabba
Ardipithecus kadabba is the scientific classification given to fossil remains "known only from teeth and bits and pieces of skeletal bones", originally estimated to be 5.8 to 5.2 million years old, and later revised to 5.77 to 5.54 million years old. According to the first description, these fossils are close to the common ancestor of chimps and humans. Their development lines are estimated to have parted 6.5–5.5 million years ago. It has been described as a "probable chronospecies" (i.e. ancestor) of A. ramidus. Although originally considered a subspecies of A. ramidus, in 2004 anthropologists Yohannes Haile-Selassie, Gen Suwa, and Tim D. White published an article elevating A. kadabba to species level on the basis of newly discovered teeth from Ethiopia. These teeth show "primitive morphology and wear pattern" which demonstrate that A. kadabba is a distinct species from A. ramidus. The specific name comes from the Afar word for "basal family ancestor". Taxonomy Fossil remains were first described in 2001 by Ethiopian paleoanthropologist Yohannes Haile-Selassie based on bones collected from five localities in the Middle Awash, Ethiopia. Haile-Selassie initially classified them as Ardipithecus ramidus kadabba, with kadabba deriving from the Afar language meaning "basal family ancestor". In 2004, he, along with Japanese paleoanthropologist Gen Suwa and American paleoanthropologist Tim D. White, elevated it to species level as A. kadabba based on apparently primitive features compared to A. ramidus. A. kadabba is considered to have been the direct ancestor of A. ramidus, making Ardipithecus a chronospecies. Along with elevating it to species level, they suggested that Ardipithecus, Sahelanthropus, and Orrorin could potentially belong to the same genus. In 2008, paleoanthropologists Bernard Wood and Nicholas Lonergan said that the larger ape-like canines of A. kadabba cast doubt on its assignment to the human line, but the position of Ardipithecus near humans has been reaffirmed by the discoverers and colleagues. They see a lineage of apes whose teeth continually reduce in size: A. kadabba–A. ramidus–Australopithecus anamensis–Au. africanus, though they are unsure if Ardipithecus were the ancestors to these Australopithecus species, or were only closely related. Evolutionary tree according to a 2019 study: Description A. kadabba is known from nineteen specimens which reveal elements of the teeth, jaw, feet, and hands and arms. The holotype specimen, ALA-VP-2/10, is a right lower jaw fragment with a third molar, discovered in December 1997, and five associated left lower jaw teeth or root fragments collected in 1999. This revision of the fossils' initial classification was based on the argument that Ardipithecus kadabba had more "primitive" features than other Ardipithecus fossils. Ardipithecus kadabba thus also has a greater similarity with the genera Sahelanthropus and Orrorin. These statements were based on additional bone finds that came to light in November 2002 and were dated at 5.8 to 5.6 million years. At the same time, it was emphasized that evidence could be found of a reduced "honing" complex (wear traces on the teeth that arise when the canines rub against each other during biting, constantly sharpening their tips), which has been found in all older finds. The loss of this feature in the successor species Ardipithecus ramidus has been used to assign discoveries to the line of great apes that led to the australopithecines and the genus Homo.
Paleoecology The first description suggested that Ardipithecus kadabba lived in a habitat that consisted of forests, wooded savannas, and open water areas, as had been described for Sahelanthropus.
Biology and health sciences
Australopithecines
Biology
15054977
https://en.wikipedia.org/wiki/Ardipithecus%20ramidus
Ardipithecus ramidus
Ardipithecus ramidus is a species of australopithecine from the Afar region of Early Pliocene Ethiopia 4.4 million years ago (mya). A. ramidus, unlike modern hominids, has adaptations for both walking on two legs (bipedality) and life in the trees (arboreality). However, it would not have been as efficient at bipedality as humans, nor at arboreality as non-human great apes. Its discovery, along with Miocene apes, has reworked academic understanding of the chimpanzee–human last common ancestor from appearing much like modern-day chimpanzees, orangutans and gorillas to being a creature without a modern anatomical cognate. The facial anatomy suggests that A. ramidus males were less aggressive than those of modern chimps, which is correlated to increased parental care and monogamy in primates. It has also been suggested that it was among the earliest of human ancestors to use some proto-language, possibly capable of vocalizing at the same level as a human infant. This is based on evidence of human-like skull architecture, cranial base angle and vocal tract dimensions, all of which in A. ramidus are paedomorphic when compared to chimpanzees and bonobos. This suggests the trend toward paedomorphic or juvenile-like form evident in human evolution, may have begun with A. ramidus. Given these unique features, it has been argued that in A. ramidus we may have the first evidence of human-like forms of social behaviour, vocally mediated sociality as well as increased levels of prosociality via the process of self-domestication—all of which seem to be associated with the same underlying changes in skull architecture. A. ramidus appears to have inhabited woodland and bushland corridors between savannas, and was a generalized omnivore. Taxonomy The first remains were described in 1994 by American anthropologist Tim D. White, Japanese paleoanthropologist Gen Suwa, and Ethiopian paleontologist Berhane Asfaw. The holotype specimen, ARA-VP-6/1, comprised an associated set of 10 teeth; and there were 16 other paratypes identified, preserving also skull and arm fragments. These were unearthed in the 4.4-million-year-old (Ma) deposits of the Afar region in Aramis, Ethiopia from 1992 to 1993, making them the oldest hominin remains at the time, surpassing Australopithecus afarensis. They initially classified it as Australopithecus ramidus, the species name deriving from the Afar language ramid "root". In 1995, they made a corrigendum recommending it be split off into a separate genus, Ardipithecus; the name stems from Afar ardi "ground" or "floor". The 4.4-million-year-old female ARA-VP 6/500 ("Ardi") is the most complete specimen. Fossils from at least nine A. ramidus individuals at As Duma, Gona Western Margin, Afar, were unearthed from 1993 to 2003. The fossils were dated to between 4.32 and 4.51 million years ago. In 2001, 6.5- to 5.5-million-year-old fossils from the Middle Awash were classified as a subspecies of A. ramidus by Ethiopian paleoanthropologist Yohannes Haile-Selassie. In 2004, Haile-Selassie, Suwa and White split it off into its own species, A. kadabba. A. kadabba is considered to have been the direct ancestor of A. ramidus, making Ardipithecus a chronospecies. The exact affinities of Ardipithecus have been debated. White, in 1994, considered A. ramidus to have been more closely related to humans than chimpanzees, though noting it to be the most ape-like fossil hominin to date. 
In 2001, French paleontologist Brigitte Senut and colleagues aligned it more closely to chimpanzees, but this has been refuted. In 2009, White and colleagues reaffirmed the position of Ardipithecus as more closely related to modern humans based on dental similarity, a short base of the skull, and adaptations to bipedality. In 2011, primatologist Esteban Sarmiento said that there is not enough evidence to assign Ardipithecus to Hominini (comprising both humans and chimps), but its closer affinities to humans have been reaffirmed in following years. White and colleagues consider it to have been closely related to or the ancestor of the temporally close Australopithecus anamensis, which was the ancestor to Au. afarensis. Before the discovery of Ardipithecus and other pre-Australopithecus hominins, it was assumed that the chimpanzee–human last common ancestor and preceding apes appeared much like modern-day chimpanzees, orangutans and gorillas, which would have meant these three changed very little over millions of years. Their discovery led to the postulation that modern great apes, much like humans, evolved several specialized adaptations to their environment (have highly derived morphologies), and their ancestors were comparatively poorly adapted to suspensory behavior or knuckle walking, and did not have such a specialized diet. Also, the origins of bipedality were thought to have occurred due to a switch from a forest to a savanna environment, but the presence of bipedal pre-Australopithecus hominins in woodlands has called this into question, though they inhabited wooded corridors near or between savannas. It is also possible that Ardipithecus and pre-Australopithecus were random offshoots of the hominin line. Description Assuming subsistence was primarily sourced from climbing in trees, A. ramidus may not have exceeded . "Ardi," a larger female specimen, was estimated to have stood and weighed based on comparisons with large-bodied female apes. Unlike the later Australopithecus but much like chimps and humans, males and females were about the same size. A. ramidus had a small brain, measuring . This is slightly smaller than a modern bonobo or chimp brain, but much smaller than the brain of Australopithecus—about —and roughly 20% the size of the modern human brain. Like chimps, the A. ramidus face was much more pronounced (prognathic) than modern humans. The size of the upper canine tooth in A. ramidus males was not distinctly different from that of females (only 12% larger), in contrast to the sexual dimorphism observed in chimps where males have significantly larger and sharper upper canines than females. A. ramidus feet are better suited for walking than chimps. However, like non-human great apes, but unlike all previously recognized human ancestors, it had a grasping big toe adapted for locomotion in the trees (an arboreal lifestyle), though it was likely not as specialized for grasping as it is in modern great apes. Its tibial and tarsal lengths indicate a leaping ability similar to bonobos. It lacks any characters suggestive of specialized suspension, vertical climbing, or knuckle walking; and it seems to have used a method of locomotion unlike any modern great ape, which combined arboreal palm walking clambering and a form of bipedality more primitive than Australopithecus. The discovery of such unspecialized locomotion led American anthropologist Owen Lovejoy and colleagues to postulate that the chimpanzee–human last common ancestor used a similar method of locomotion. 
The upper pelvis (distance from the sacrum to the hip joint) is shorter than in any known ape. It is inferred to have had a long lumbar vertebral series, and lordosis (human curvature of the spine), which are adaptations for bipedality. However, the legs were not completely aligned with the torso (were anterolaterally displaced), and Ardipithecus may have relied more on its quadriceps than hamstrings, which is more effective for climbing than walking. It also lacked foot arches and had to adopt a flat-footed stance. These traits would have made it less efficient at walking and running than Australopithecus and Homo. It may not have employed a bipedal gait for very long time intervals, and it may have predominantly used palm walking on the ground. Nonetheless, A. ramidus still had specialized adaptations for bipedality, such as a robust fibularis longus muscle used in pushing the foot off the ground while walking (plantarflexion), a big toe (though still capable of grasping) used for pushing off, and legs aligned directly over the ankles instead of bowing out as in non-human great apes. Paleobiology The reduced canine size and reduced skull robustness in A. ramidus males (about the same size in males and females) is typically correlated with reduced male–male conflict, increased parental investment, and monogamy. Because of this, it is assumed that A. ramidus lived in a society similar to bonobos and ateline monkeys due to a process of self domestication (becoming more and more docile, which allows for a more gracile build). Because a similar process is thought to have occurred in the comparatively docile bonobos relative to the more aggressive chimps, A. ramidus society may have seen an increase in maternal care and female mate selection compared to its ancestors. Alternatively, it is possible that increased male size is a derived trait instead of basal (it evolved later rather than earlier), and is a specialized adaptation in modern great apes as a response to a different and more physically exerting lifestyle in males than females rather than being tied to interspecific conflict. Australian anthropologists Gary Clark and Maciej Henneberg argued that such shortening of the skull (which may have caused a descent of the larynx), as well as lordosis (allowing better movement of the larynx), increased vocal ability, significantly pushing back the origin of language to well before the evolution of Homo. They argued that self domestication was aided by the development of vocalization, living in a pro-social society, as a means of non-violently dealing with conflict. They conceded that chimps and A. ramidus likely had the same vocal capabilities, but said that A. ramidus made use of more complex vocalizations, and vocalized at the same level as a human infant due to selective pressure to become more social. This would have allowed their society to become more complex. They also noted that the base of the skull stopped growing with the brain by the end of juvenility, whereas in chimps it continues growing with the rest of the body into adulthood; they considered this evidence of a switch from a gross skeletal anatomy trajectory to a neurological development trajectory due to selective pressure for sociability. Nonetheless, their conclusions are highly speculative. American primatologist Craig Stanford postulated that A. ramidus behaved similarly to chimps, which frequent both the trees and the ground, have a polygynous society, hunt cooperatively, and are the most technologically advanced non-human animals.
However, Clark and Henneberg concluded that Ardipithecus cannot be compared to chimps, having been too similar to humans. According to French paleoprimatologist Jean-Renaud Boisserie, the hands of Ardipithecus would have been dextrous enough to handle basic tools, though it has not been associated with any tools. The teeth of A. ramidus indicate that it was likely a generalized omnivore and fruit eater which predominantly consumed C3 plants in woodlands or gallery forests. The teeth lacked adaptations for abrasive foods. Lacking the speed and agility of chimps and baboons, meat intake by Ardipithecus, if done, would have been sourced from only what could have been captured by limited pursuit, or from scavenging carcasses. The second-to-fourth digit ratios of A. ramidus are low, consistent with high androgenisation and a disposition towards polygyny. Paleoecology Half of the large mammal species associated with A. ramidus at Aramis are spiral-horned antelope and colobine monkeys (namely Kuseracolobus and Pliopapio). There are a few specimens of primitive white and black rhino species, and elephants, giraffes and hippo specimens are less abundant. These animals indicate that Aramis ranged from wooded grasslands to forests, but A. ramidus likely preferred the closed habitats, specifically riverine areas as such water sources may have supported more canopy coverage. Aramis as a whole generally had less than 25% canopy cover. There were exceedingly high rates of scavenging, indicating a highly competitive environment somewhat like Ngorongoro Crater. Predators of the area were the hyenas Ikelohyaena abronia and Crocuta dietrichi, the bear Agriotherium, the cats Dinofelis and Megantereon, the dog Eucyon, and crocodiles. Bayberry, hackberry and palm trees appear to have been common at the time from Aramis to the Gulf of Aden; and botanical evidence suggests a cool, humid climate. Conversely, annual water deficit (the difference between water loss by evapotranspiration and water gain by precipitation) at Aramis was calculated to have been about , which is seen in some of the hottest, driest parts of East Africa. Carbon isotope analyses of the herbivore teeth from the Gona Western Margin associated with A. ramidus indicate that these herbivores fed mainly on C4 plants and grasses rather than forest plants. The area seems to have featured bushland and grasslands.
Biology and health sciences
Australopithecines
Biology
7674011
https://en.wikipedia.org/wiki/Slow%20earthquake
Slow earthquake
A slow earthquake is a discontinuous, earthquake-like event that releases energy over a period of hours to months, rather than the seconds to minutes characteristic of a typical earthquake. First detected using long-term strain measurements, most slow earthquakes now appear to be accompanied by fluid flow and related tremor, which can be detected and approximately located using seismometer data filtered appropriately (typically in the 1–5 Hz band). That is, they are quiet compared to a regular earthquake, but not "silent" as described in the past. Slow earthquakes should not be confused with tsunami earthquakes, in which relatively slow rupture velocity produces tsunami out of proportion to the triggering earthquake. In a tsunami earthquake, the rupture propagates along the fault more slowly than usual, but the energy release occurs on a similar timescale to other earthquakes. Causes Earthquakes occur as a consequence of gradual stress increases in a region; once the stress reaches the maximum that the rocks can withstand, a rupture forms, and the resulting earthquake motion is associated with a drop in the shear stress of the system. Earthquakes generate seismic waves when the rupture occurs; the seismic waves consist of different types of waves that are capable of moving through the Earth like ripples over water. The causes that lead to slow earthquakes have so far been investigated only theoretically, through mathematical models of the formation of longitudinal shear cracks. The different distributions of initial stress, sliding frictional stress, and specific fracture energy are all taken into account. If the initial stress minus the sliding frictional stress (with respect to the initial crack) is low, and the specific fracture energy or the strength of the crustal material (relative to the amount of stress) is high, then slow earthquakes will occur regularly. In other words, slow earthquakes are caused by a variety of stick-slip and creep processes intermediated between asperity-controlled brittle and ductile fracture. Asperities are tiny bumps and protrusions along the faces of fractures. They are best documented from intermediate crustal levels of certain subduction zones (especially those that dip shallowly – SW Japan, Cascadia, Chile), but appear to occur on other types of faults as well, notably strike-slip plate boundaries such as the San Andreas fault and "mega-landslide" normal faults on the flanks of volcanos. Locations Faulting takes place all over Earth; faults can include convergent, divergent, and transform faults, and normally occur on plate margins. Some of the locations that have been recently studied for slow earthquakes include Cascadia, California, Japan, New Zealand, Mexico, and Alaska. The locations of slow earthquakes can provide new insights into the behavior of normal or fast earthquakes. By observing the location of tremors associated with slow-slip and slow earthquakes, seismologists can determine the extent of the system and estimate future earthquakes in the area of study. Types Teruyuki Kato identifies various types of slow earthquake: low frequency earthquakes (LFE); very low frequency earthquakes (VLF) and deep low-frequency earthquakes; slow slip events (SSE); and episodic tremor and slip (ETS). Low frequency earthquakes Low frequency earthquakes (LFEs) are seismic events defined by waveforms with periods far greater than those of ordinary earthquakes and abundantly occur during slow earthquakes.
LFEs can be volcanic, semi-volcanic, or tectonic in origin, but only tectonic LFEs or LFEs generated during slow earthquakes are described here. Tectonic LFEs are characterized by generally low magnitudes (M<3) and have frequencies peaked between 1 and 3 Hz. They are the largest constituent of non-volcanic tremor at subduction zones, and in some cases are the only constituent. In contrast to ordinary earthquakes, tectonic LFEs occur largely during long-lived slip events at subduction interfaces (up to several weeks in some cases) called slow slip events (SSEs). The mechanism responsible for their generation at subduction zones is thrust-sense slip along transitional segments of the plate interface. LFEs are highly sensitive seismic events which can likely be triggered by tidal forces as well as propagating waves from distant earthquakes. LFEs have hypocenters located down-dip from the seismogenic zone, the source region of megathrust earthquakes. During SSEs, LFE foci migrate along strike at the subduction interface in concert with the primary shear slip front. The depth occurrence of low frequency earthquakes is in the range of approximately 20–45 kilometers depending on the subduction zone, and at shallower depths at strike-slip faults in California. At "warm" subduction zones like the west coast of North America, or sections in eastern Japan this depth corresponds to a transition or transient slip zone between the locked and stable slip intervals of the plate interface. The transition zone is located at depths approximately coincidental with the continental Mohorovicic discontinuity. At the Cascadia subduction zone, the distribution of LFEs form a surface roughly parallel to intercrustal seismic events, but displaced 5–10 kilometers down-dip, providing evidence that LFEs are generated at the plate interface. Low frequency earthquakes are an active area of research and may be important seismic indicators for higher magnitude earthquakes. Since slow slip events and their corresponding LFE signals have been recorded, none of them have been accompanied by a megathrust earthquake, however, SSEs act to increase the stress in the seismogenic zone by forcing the locked interval between the subducting and overriding plate to accommodate for down-dip movement. Some calculations find that the probability of a large earthquake occurring during a slow slip event are 30–100 times greater than background probabilities. Understanding the seismic hazard that LFEs might herald is among the primary reasons for their research. Additionally, LFEs are useful for the tomographic imaging of subduction zones because their distributions accurately map the deep plate contact near the Mohorovicic discontinuity. History Low frequency earthquakes were first classified in 1999 when the Japan Meteorological Agency (JMA) began differentiating LFE's seismic signature in their seismicity catalogue. The discovery and understanding of LFEs at subduction zones is due in part to the fact that the seismic signatures of these events were found away from volcanoes. Prior to their discovery, tremor events of this style were mainly associated with volcanism where the tremor is generated by partial coupling of flowing magmatic fluids. Japanese researchers first detected "low-frequency continuous tremor" near the top of the subducting Philippine Sea plate in 2002. After initially interpreting this seismic data as dehydration induced tremor, researchers in 2007 found that the data contained many LFE waveforms, or LFE swarms. 
Prior to 2007, tremor and LFEs were believed to be distinct events that often occurred together, but contemporarily LFEs are known to be the largest constituent forming tectonic tremor. LFEs and SSEs are frequently observed at subduction zones in western North America, Japan, Mexico, Costa Rica, New Zealand, as well as in shallow strike slip faults in California. Detection Low frequency earthquakes do not exhibit the same seismic character as regular earthquakes namely because they lack distinct, impulsive body waves. P wave arrivals from LFEs have amplitudes so small that they are often difficult to detect, so when the JMA first distinguished the unique class of earthquake it was primarily by the detection of S wave arrivals which were emergent. Because of this, detecting LFEs is nearly impossible using classical techniques. Despite their lack of important seismic identifiers, LFEs can be detected at low Signal-to-Noise-Ratio (SNR) thresholds using advanced seismic correlation methods. The most common method for identifying LFEs involves the correlation of the seismic record with a template constructed from confirmed LFE waveforms. Since LFEs are such subtle events and have amplitudes that are frequently drowned out by background noise, templates are built by stacking similar LFE waveforms to reduce the SNR. Noise is reduced to such an extent that a relatively clean waveform can be searched for in the seismic record, and when correlation coefficients are deemed high enough an LFE is detected. Determination of the slip orientation responsible for LFEs and earthquakes in general is done by the P wave first-motion method. LFE P waves, when successfully detected, have first motions indicative of compressional stress, indicating that thrust-sense slip is responsible for their generation. Extracting high quality P wave data out of LFE waveforms can be quite difficult, however, and is furthermore important for accurate hypocentral depth determinations. The detection of high quality P wave arrivals is a recent advent thanks to the deployment of highly sensitive seismic monitoring networks. The depth occurrence of LFEs are generally determined by P wave arrivals but have also been determined by mapping LFE epicenters against subducting plate geometries. This method does not discriminate whether or not the observed LFE was triggered at the plate interface or within the down-going slab itself, so additional geophysical analysis is required to determine where exactly the focus is located. Both methods find that LFEs are indeed triggered at the plate contact. Low frequency earthquakes in Cascadia The Cascadia subduction zone spans from northern California to about halfway up Vancouver Island and is where the Juan de Fuca, Explorer, and Gorda plates are overridden by North America. In the Cascadia subduction zone, LFEs are predominantly observed at the plate interface down-dip of the seismogenic zone. In the southern section of the subduction zone from latitudes 40°N to 41.8°N low frequency earthquakes occur at depths between 28 and 47 kilometers, whereas farther north near Vancouver Island the range contracts to approximately 25–37 kilometers. This depth section of the subduction zone has been classified by some authors as the "transient slip" or "transition" zone due to its episodic slip behavior and is bounded up-dip and down-dip by the "locked zone" and "stable-slip zone", respectively. 
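As a rough illustration of the template-matching detection described above (a toy sketch, not the workflow of the JMA or any monitoring network), the snippet below slides a template along a synthetic, noisy record and flags windows whose normalised cross-correlation exceeds a threshold. The sampling rate, waveform shape, noise level, onset times and threshold are all invented for illustration; in practice the record would first be band-passed to the tremor band and the template stacked from many confirmed LFE waveforms.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                                    # sample rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
template = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t / 0.5)   # toy 2 Hz LFE-like wavelet

# Synthetic 60 s record: background noise plus two buried copies of the template.
record = 0.3 * rng.standard_normal(int(60 * fs))
for onset in (12.0, 41.5):                    # arbitrary onset times in seconds
    i = int(onset * fs)
    record[i:i + template.size] += template

def detect_lfes(record, template, fs, threshold=0.5):
    """Return times (s) where the normalised cross-correlation of the record
    with the template exceeds the detection threshold."""
    n = template.size
    tpl = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(record.size - n):
        window = record[i:i + n]
        cc = np.sum(tpl * (window - window.mean()) / (window.std() + 1e-12))
        if cc > threshold:
            detections.append(i / fs)
    return detections

print(detect_lfes(record, template, fs))      # prints times clustered near 12.0 s and 41.5 s
```

A real catalogue would combine many stations and templates and use far more careful detection statistics than this single-trace sketch.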
The transient slip section of the Cascadia is marked by high Vp/Vs ratios (P wave velocity divided by S wave velocity) and is designated as a Low Velocity Zone (LVZ). Furthermore, the LVZ has high Poisson's ratios as determined by teleseismic wave observations. These seismic properties defining the LVZ have been interpreted as an overpressured region of the down-going slab with high pore fluid pressures. The presence of water at the subduction interface and its relation to the generation of LFEs is not fully understood, but hydrolytic weakening of the rock contact is likely important. Where megathrust earthquakes (M>8) have been repeatedly observed in the shallow sections (<25 km depth) of the Cascadia subduction zone, low frequency earthquakes have recently been discovered to occur at greater depths, down-dip of the seismogenic zone. The first indicator of low frequency earthquakes in Cascadia was discovered in 1999 when an aseismic event took place at the subduction interface wherein the overriding North American plate slipped 2 centimeters south-west over a several-week period as recorded by Global Positioning System (GPS) sites in British Columbia. This apparent slow slip event occurred over a 50-by-300-kilometer area and took approximately 35 days. Researchers estimated that the energy released in such an event would be equivalent to a magnitude 6–7 earthquake, yet no significant seismic signal was detected. The aseismic character of the event led observers to conclude that the slip was mediated by ductile deformation at depth. After further analysis of the GPS record, these reverse slip events were found to repeat at 13- to 16-month intervals, and last 2 to 4 weeks at any one GPS station. Soon after, geophysicists were able to extract the seismic signatures from these slow slip events and found that they were akin to tremor and classified the phenomenon as episodic tremor and slip (ETS). Upon the advent of improved processing techniques, and the discovery that LFEs form part of tremor, low frequency earthquakes were widely considered a commonplace occurrence at the plate interface down-dip of the seismogenic zone in Cascadia. Low frequency tremors in the Cascadia subduction zone are strongly associated with tidal loading. A number of studies in Cascadia find that the peak low frequency earthquake signals alternate from being in phase with peak tidal shear stress rate to being in phase with peak tidal shear stress, suggesting that LFEs are modulated by changes in sea level. The shear slip events responsible for LFEs are therefore quite sensitive to pressure changes in the range of several kilo-pascals. Low frequency earthquakes in Japan The discovery of LFEs originates in Japan at the Nankai trough and is in part due to the nationwide collaboration of seismological research following the Kobe earthquake of 1995. Low frequency earthquakes in Japan were first observed in a subduction setting where the Philippine Sea plate subducts at the Nankai trough near Shikoku. The low-frequency continuous tremor researchers observed was initially interpreted to be a result of dehydration reactions in the subducting plate. The source of these tremors occurred at an average depth of around 30 kilometers, and they were distributed along the strike of the subduction interface over a length of 600 kilometers. Similar to Cascadia, these low frequency tremors occurred with slow slip events that had a recurrence interval of approximately 6 months. 
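The equivalence quoted above between the 1999 Cascadia slip episode and a magnitude 6–7 earthquake can be checked with the standard seismic-moment relations. The short sketch below uses the 50-by-300-kilometer area and 2 cm of slip given in the text, and assumes a typical crustal rigidity of 30 GPa, a value not stated in the article.

```python
import math

# Values quoted in the text for the 1999 Cascadia slow slip event.
length_m = 300e3      # along-strike extent (m)
width_m = 50e3        # down-dip extent (m)
slip_m = 0.02         # slip (m)

mu = 30e9             # assumed shear modulus of crustal rock (Pa)

# Seismic moment M0 = mu * area * slip; moment magnitude Mw = (2/3) log10(M0) - 6.07.
M0 = mu * length_m * width_m * slip_m
Mw = (2.0 / 3.0) * math.log10(M0) - 6.07
print(f"M0 = {M0:.1e} N*m, Mw = {Mw:.1f}")   # about Mw 6.6, consistent with "magnitude 6-7"
```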
The later discovery of LFEs forming tremor confirmed the widespread existence of LFEs at Japanese subduction zones, and LFEs are widely observed and believed to occur as a result of SSEs. The distribution of LFEs in Japan is centered around the subduction of the Philippine Sea plate and not the Pacific plate farther north. This is likely due to the difference in subduction geometries between the two plates. The Philippine Sea plate at the Nankai trough subducts at shallower overall angles than does the Pacific plate at the Japan Trench, thereby making the Japan Trench less suitable for SSEs and LFEs. LFEs in Japan have hypocenters located near the deepest extent of the transition zone, down-dip from the seismogenic zone. Estimates for the depth occurrence of the seismogenic zone near Tokai, Japan, are 8–22 kilometers as determined by thermal methods. Furthermore, LFEs occur at a temperature range of 450–500 °C in Tokai, indicating that temperature may play an important role in the generation of LFEs in Japan. Very low frequency earthquakes Very low frequency earthquakes (VLFs) can be considered a sub-category of low frequency earthquakes that differ in terms of duration and period. VLFs have magnitudes of approximately 3–3.5, durations around 20 seconds, and are further enriched in low frequency energy (0.02–0.03 Hz). VLFs predominantly occur with LFEs, but the reverse is not true. There are two major subduction zone settings where VLFs have been detected: 1) within the offshore accretionary prism and 2) at the plate interface down-dip of the seismogenic zone. Since these two environments have considerably different depths, they have been termed shallow VLFs and deep VLFs, respectively. Like LFEs, very low frequency earthquakes migrate along-strike during ETS events. VLFs have been found both at the Cascadia subduction zone in western North America and in Japan at the Nankai trough and Ryukyu trench. VLFs are produced by reverse fault mechanisms, similar to LFEs. Slow slip events Slow slip events (SSEs) are long-lived shear slip events at subduction interfaces and the physical processes responsible for the generation of slow earthquakes. They are slow thrust-sense displacement episodes that can have durations up to several weeks, and are thus termed "slow". In many cases, the recurrence interval for slow slip events is remarkably periodic and accompanied by tectonic tremor, prompting seismologists to coin the term episodic tremor and slip (ETS). In the Cascadia, the return period for SSEs is approximately 14.5 months, but varies along the margin of the subduction zone. In the Shikoku region in southwest Japan, the interval is shorter at approximately 6 months, as determined by crustal tilt changes. Some SSEs have durations in excess of several years, like the Tokai SSE that lasted from mid-2000 to 2003. A slow slip event's locus of displacement propagates along the strike of subduction interfaces at velocities of 5–10 kilometers per day during slow earthquakes in the Cascadia, and this propagation is responsible for the similar migration of LFEs and tremor. Episodic tremor and slip Slow earthquakes can be episodic (relative to plate movement), and therefore somewhat predictable, a phenomenon termed "episodic tremor and slip" or "ETS" in the literature. ETS events can last for weeks, as opposed to "normal" earthquakes, which occur in a matter of seconds.
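Given the 5–10 km/day along-strike propagation quoted above, the time for a slip front to traverse a tremor zone follows directly; the 600 km extent below is the along-strike length quoted earlier for the Nankai observations, and the result is only an order-of-magnitude sketch. An episode that migrates across only part of a margin would correspondingly last weeks rather than months.

```python
# Time for a slow-slip front to traverse an along-strike distance at the
# 5-10 km/day propagation speeds quoted in the text.
extent_km = 600.0                       # along-strike extent of the tremor zone (km)
for speed in (5.0, 10.0):               # propagation speed in km/day
    days = extent_km / speed
    print(f"{speed:.0f} km/day -> {days:.0f} days (~{days / 30:.1f} months)")
```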
Several slow-earthquake events around the world appear to have triggered major, damaging seismic earthquakes in the shallower crust (e.g., 2001 Nisqually, 1995 Antofagasta). Conversely, major earthquakes trigger "post-seismic creep" in the deeper crust and mantle. Every five years a year-long quake of this type occurs beneath the New Zealand capital, Wellington. It was first measured in 2003, and has reappeared in 2008 and 2013. It lasts for around a year each time, releasing as much energy as a magnitude 7 quake.
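To put the Wellington example above in perspective, the empirical Gutenberg–Richter energy relation log10 E = 1.5 M + 4.8 (E in joules) gives the energy corresponding to a magnitude 7 event; spreading that release over a year, as the slow event does, gives a far smaller average power than releasing it in seconds. This is a back-of-envelope sketch using a standard relation, not figures from the article.

```python
# Energy equivalent of a magnitude 7 event, released over a year versus seconds.
M = 7.0
E = 10 ** (1.5 * M + 4.8)                 # Gutenberg-Richter energy relation, joules
year_s = 365.25 * 24 * 3600
print(f"E ~ {E:.1e} J")
print(f"average power over one year ~ {E / year_s:.1e} W")
print(f"average power if released in ~30 s ~ {E / 30:.1e} W")
```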
Physical sciences
Seismology
Earth science
19003265
https://en.wikipedia.org/wiki/Neptune
Neptune
Neptune is the eighth and farthest known planet from the Sun. It is the fourth-largest planet in the Solar System by diameter, the third-most-massive planet, and the densest giant planet. It is 17 times the mass of Earth. Compared to its fellow ice giant Uranus, Neptune is slightly more massive, but denser and smaller. Being composed primarily of gases and liquids, it has no well-defined solid surface, and orbits the Sun once every 164.8 years at an orbital distance of . It is named after the Roman god of the sea and has the astronomical symbol representing Neptune's trident. Neptune is not visible to the unaided eye and is the only planet in the Solar System that was not initially observed by direct empirical observation. Rather, unexpected changes in the orbit of Uranus led Alexis Bouvard to hypothesise that its orbit was subject to gravitational perturbation by an unknown planet. After Bouvard's death, the position of Neptune was mathematically predicted from his observations, independently, by John Couch Adams and Urbain Le Verrier. Neptune was subsequently directly observed with a telescope on 23 September 1846 by Johann Gottfried Galle within a degree of the position predicted by Le Verrier. Its largest moon, Triton, was discovered shortly thereafter, though none of the planet's remaining moons were located telescopically until the 20th century. The planet's distance from Earth gives it a small apparent size, and its distance from the Sun renders it very dim, making it challenging to study with Earth-based telescopes. Only the advent of the Hubble Space Telescope and of large ground-based telescopes with adaptive optics allowed for detailed observations. Neptune was visited by Voyager 2, which flew by the planet on 25 August 1989; Voyager 2 remains the only spacecraft to have visited it. Like the gas giants (Jupiter and Saturn), Neptune's atmosphere is composed primarily of hydrogen and helium, along with traces of hydrocarbons and possibly nitrogen, but contains a higher proportion of ices such as water, ammonia and methane. Similar to Uranus, its interior is primarily composed of ices and rock; both planets are normally considered "ice giants" to distinguish them. Along with Rayleigh scattering, traces of methane in the outermost regions make Neptune appear faintly blue. In contrast to the strongly seasonal atmosphere of Uranus, which can be featureless for long periods of time, Neptune's atmosphere has active and consistently visible weather patterns. At the time of the Voyager 2 flyby in 1989, the planet's southern hemisphere had a Great Dark Spot comparable to the Great Red Spot on Jupiter. In 2018, a newer main dark spot and smaller dark spot were identified and studied. These weather patterns are driven by the strongest sustained winds of any planet in the Solar System, as high as . Because of its great distance from the Sun, Neptune's outer atmosphere is one of the coldest places in the Solar System, with temperatures at its cloud tops approaching . Temperatures at the planet's centre are approximately . Neptune has a faint and fragmented ring system (labelled "arcs"), discovered in 1984 and confirmed by Voyager 2. History Discovery Some of the earliest known telescopic observations ever, Galileo's drawings on 28 Dec. 1612 and 27 Jan. 1613 (New Style) contain plotted points that match what is now known to have been the positions of Neptune on those dates. 
Both times, Galileo seems to have mistaken Neptune for a fixed star when it appeared close—in conjunction—to Jupiter in the night sky. Hence, he is not credited with Neptune's discovery. At his first observation in Dec. 1612, Neptune was almost stationary in the sky because it had just turned retrograde that day. This apparent backward motion is created when Earth's orbit takes it past an outer planet. Because Neptune was only beginning its yearly retrograde cycle, the motion of the planet was far too slight to be detected with Galileo's small telescope. In 2009, a study suggested that Galileo was at least aware that the "star" he had observed had moved relative to fixed stars. In 1821, Alexis Bouvard published astronomical tables of the orbit of Uranus. Subsequent observations revealed substantial deviations from the tables, leading Bouvard to hypothesize that an unknown body was perturbing the orbit through gravitational interaction. In 1843, John Couch Adams began work on the orbit of Uranus using the data he had. He requested extra data from Sir George Airy, the Astronomer Royal, who supplied it in February 1844. Adams continued to work in 1845–1846 and produced several different estimates of a new planet. In 1845–1846, Urbain Le Verrier, developed his own calculations independently from Adams, but aroused no enthusiasm among his compatriots. In June 1846, upon seeing Le Verrier's first published estimate of the planet's longitude and its similarity to Adams's estimate, Airy persuaded James Challis to search for the planet. Challis vainly scoured the sky throughout August and September. Challis had, in fact, observed Neptune a year before the planet's subsequent discoverer, Johann Gottfried Galle, and on two occasions, 4 and 12 August 1845. However, his out-of-date star maps and poor observing techniques meant that he failed to recognize the observations as such until he carried out later analysis. Challis was full of remorse but blamed his neglect on his maps and the fact that he was distracted by his concurrent work on comet observations. Meanwhile, Le Verrier sent a letter and urged Berlin Observatory astronomer Galle to search with the observatory's refractor. Heinrich d'Arrest, a student at the observatory, suggested to Galle that they could compare a recently drawn chart of the sky in the region of Le Verrier's predicted location with the current sky to seek the displacement characteristic of a planet, as opposed to a fixed star. On the evening of 23 September 1846, the day Galle received the letter, he discovered Neptune just northeast of Iota Aquarii, 1° from the "five degrees east of Delta Capricorn" position Le Verrier had predicted it to be, about 12° from Adams's prediction, and on the border of Aquarius and Capricornus according to the modern IAU constellation boundaries. In the wake of the discovery, there was a nationalistic rivalry between the French and the British over who deserved credit for the discovery. Eventually, an international consensus emerged that Le Verrier and Adams deserved joint credit. Since 1966, Dennis Rawlins has questioned the credibility of Adams's claim to co-discovery, and the issue was re-evaluated by historians with the return in 1998 of the "Neptune papers" (historical documents) to the Royal Observatory, Greenwich. Naming Shortly after its discovery, Neptune was referred to simply as "the planet exterior to Uranus" or as "Le Verrier's planet". The first suggestion for a name came from Galle, who proposed the name Janus. 
In England, Challis put forward the name Oceanus. Claiming the right to name his discovery, Le Verrier quickly proposed the name Neptune for this new planet, though falsely stating that this had been officially approved by the French Bureau des Longitudes. In October, he sought to name the planet Le Verrier, after himself, and he had loyal support in this from the observatory director, François Arago. This suggestion met with stiff resistance outside France. French almanacs quickly reintroduced the name Herschel for Uranus, after that planet's discoverer Sir William Herschel, and Leverrier for the new planet. Struve came out in favour of the name Neptune on 29 December 1846, to the Saint Petersburg Academy of Sciences, after the colour of the planet as viewed through a telescope. Soon, Neptune became the internationally accepted name. In Roman mythology, Neptune was the god of the sea, identified with the Greek Poseidon. The demand for a mythological name seemed to be in keeping with the nomenclature of the other planets, all of which were named for deities in Greek and Roman mythology. Most languages today use some variant of the name "Neptune" for the planet. In Chinese, Vietnamese, Japanese, and Korean, the planet's name was translated as "sea king star" (). In Mongolian, Neptune is called (), reflecting its namesake god's role as the ruler of the sea. In modern Greek, the planet is called Poseidon (, ), the Greek counterpart of Neptune. In Hebrew, (), from a Biblical sea monster mentioned in the Book of Psalms, was selected in a vote managed by the Academy of the Hebrew Language in 2009 as the official name for the planet, even though the existing Latin term () is commonly used. In Māori, the planet is called , named after the Māori god of the sea. In Nahuatl, the planet is called , named after the rain god Tlāloc. In Thai, Neptune is referred to by the Westernised name () but is also called (, ), after Ketu (), the descending lunar node, who plays a role in Hindu astrology. In Malay, the name , after the Hindu god of seas, is attested as far back as the 1970s, but was eventually superseded by the Latinate equivalents (in Malaysian) or (in Indonesian). The usual adjectival form is Neptunian. The nonce form Poseidean (), from Poseidon, has also been used, though the usual adjectival form of Poseidon is Poseidonian (). Status From its discovery in 1846 until the discovery of Pluto in 1930, Neptune was the farthest known planet. When Pluto was discovered, it was considered a planet, and Neptune thus became the second-farthest known planet, except for a 20-year period between 1979 and 1999 when Pluto's elliptical orbit brought it closer than Neptune to the Sun, making Neptune the ninth planet from the Sun during this period. The increasingly accurate estimations of Pluto's mass, from ten times that of Earth to far less than that of the Moon, and the discovery of the Kuiper belt in 1992 led many astronomers to debate whether Pluto should be considered a planet or part of the Kuiper belt. In 2006, the International Astronomical Union defined the word "planet" for the first time, reclassifying Pluto as a "dwarf planet" and making Neptune once again the outermost-known planet in the Solar System. Physical characteristics Neptune's mass of 1.0243 × 10²⁶ kg is intermediate between Earth and the larger gas giants: it is 17 times that of Earth but just 1/19th that of Jupiter. Its gravity at 1 bar is 11.15 m/s², 1.14 times the surface gravity of Earth, and surpassed only by Jupiter.
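As a quick consistency check (not from the article), the quoted 1-bar gravity follows from Newton's law of gravitation applied to the mass above and the equatorial radius given just below; rotation and oblateness are ignored, so the result is only approximate.

```python
G = 6.674e-11       # gravitational constant (m^3 kg^-1 s^-2)
M = 1.0243e26       # Neptune's mass (kg), as given in the text
R = 24_764e3        # equatorial radius (m), quoted in the following paragraph

g = G * M / R**2
print(f"g ~ {g:.2f} m/s^2")                    # ~11.15 m/s^2, matching the quoted value
print(f"relative to Earth: {g / 9.81:.2f}")    # ~1.14 times Earth's surface gravity
```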
Neptune's equatorial radius of 24,764 km is nearly four times that of Earth. Neptune, like Uranus, is an ice giant, a subclass of giant planet, because they are smaller and have higher concentrations of volatiles than Jupiter and Saturn. In the search for exoplanets, Neptune has been used as a metonym: discovered bodies of similar mass are often referred to as "Neptunes", just as scientists refer to various extrasolar bodies as "Jupiters". Internal structure Neptune's internal structure resembles that of Uranus. Its atmosphere forms about 5 to 10% of its mass and extends perhaps 10 to 20% of the way towards the core. Pressure in the atmosphere reaches about 10 GPa, or about 100,000 times atmospheric pressure on Earth. Increasing concentrations of methane, ammonia and water are found in the lower regions of the atmosphere. The mantle is equivalent to 10 to 15 Earth masses and is rich in water, ammonia and methane. As is customary in planetary science, this mixture is called icy even though it is a hot, dense supercritical fluid. This fluid, which has a high electrical conductivity, is sometimes called a water–ammonia ocean. The mantle may consist of a layer of ionic water in which the water molecules break down into a soup of hydrogen and oxygen ions, and deeper down superionic water in which the oxygen crystallizes but the hydrogen ions float around freely within the oxygen lattice. At a depth of 7,000 km, the conditions may be such that methane decomposes into diamond crystals that rain downwards like hailstones. Scientists believe that this kind of diamond rain occurs on Jupiter, Saturn, and Uranus. Very-high-pressure experiments at Lawrence Livermore National Laboratory suggest that the top of the mantle may be an ocean of liquid carbon with floating solid 'diamonds'. The core of Neptune is likely composed of iron, nickel and silicates, with an interior model giving a mass about 1.2 times that of Earth. The pressure at the centre is 7 Mbar (700 GPa), about twice as high as that at the centre of Earth, and the temperature may be . Atmosphere At high altitudes, Neptune's atmosphere is 80% hydrogen and 19% helium. A trace amount of methane is present. Prominent absorption bands of methane exist at wavelengths above 600 nm, in the red and infrared portion of the spectrum. As with Uranus, this absorption of red light by atmospheric methane is part of what gives Neptune its faint blue hue, which is more pronounced for Neptune than for Uranus because the haze in Uranus's atmosphere is more concentrated. Neptune's atmosphere is subdivided into two main regions: the lower troposphere, where temperature decreases with altitude, and the stratosphere, where temperature increases with altitude. The boundary between the two, the tropopause, lies at a pressure of . The stratosphere then gives way to the thermosphere at a pressure lower than 10⁻⁵ to 10⁻⁴ bars (1 to 10 Pa). The thermosphere gradually transitions to the exosphere. Models suggest that Neptune's troposphere is banded by clouds of varying compositions depending on altitude. The upper-level clouds lie at pressures below one bar, where the temperature is suitable for methane to condense. For pressures between one and five bars (100 and 500 kPa), clouds of ammonia and hydrogen sulfide are thought to form. Above a pressure of five bars, the clouds may consist of ammonia, ammonium sulfide, hydrogen sulfide and water. Deeper clouds of water ice should be found at pressures of about , where the temperature reaches . Underneath, clouds of ammonia and hydrogen sulfide may be found.
High-altitude clouds on Neptune have been observed casting shadows on the opaque cloud deck below. There are high-altitude cloud bands that wrap around the planet at constant latitudes. These circumferential bands have widths of 50–150 km and lie about 50–110 km above the cloud deck. These altitudes are in the layer where weather occurs, the troposphere. Weather does not occur in the higher stratosphere or thermosphere. In August 2023, the high-altitude clouds of Neptune vanished, prompting a study spanning thirty years of observations by the Hubble Space Telescope and ground-based telescopes. The study found that Neptune's high-altitude cloud activity is tied to the solar cycle, and not to the planet's seasons. Neptune's spectra suggest that its lower stratosphere is hazy due to condensation of products of ultraviolet photolysis of methane, such as ethane and ethyne. The stratosphere is home to trace amounts of carbon monoxide and hydrogen cyanide. The stratosphere of Neptune is warmer than that of Uranus due to the elevated concentration of hydrocarbons. For reasons that remain obscure, the planet's thermosphere is at an anomalously high temperature of about . The planet is too far from the Sun for this heat to be generated by ultraviolet radiation. One candidate for a heating mechanism is atmospheric interaction with ions in the planet's magnetic field. Other candidates are gravity waves from the interior that dissipate in the atmosphere. The thermosphere contains traces of carbon dioxide and water, which may have been deposited from external sources such as meteorites and dust. Colour Neptune's atmosphere is faintly blue in the optical spectrum, only slightly more saturated than the blue of Uranus's atmosphere. Early renderings of the two planets greatly exaggerated Neptune's colour contrast "to better reveal the clouds, bands and winds", making it seem deep blue compared to Uranus's off-white. The two planets had been imaged with different systems, making it hard to directly compare the resulting composite images. The comparison has since been revisited with the colours normalised, most comprehensively in late 2023. Magnetosphere Neptune's magnetosphere consists of a magnetic field that is strongly tilted relative to its rotational axis at 47° and offset by at least 0.55 radius (~13,500 km) from the planet's physical centre, resembling Uranus's magnetosphere. Before the arrival of Voyager 2 at Neptune, it was hypothesised that Uranus's sideways rotation caused its tilted magnetosphere. In comparing the magnetic fields of the two planets, scientists now think the extreme orientation may be characteristic of flows in the planets' interiors. This field may be generated by convective fluid motions in a thin spherical shell of electrically conducting liquids (probably a combination of ammonia, methane and water), resulting in a dynamo action. The dipole component of the magnetic field at the magnetic equator of Neptune is about 14 microteslas (0.14 G). The dipole magnetic moment of Neptune is about 2.2 × 10¹⁷ T·m³ (14 μT·RN³, where RN is the radius of Neptune). Neptune's magnetic field has a complex geometry that includes relatively large contributions from non-dipolar components, including a strong quadrupole moment that may exceed the dipole moment in strength. By contrast, Earth, Jupiter and Saturn have only relatively small quadrupole moments, and their fields are less tilted from the polar axis.
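The dipole moment figure can be cross-checked from the 14 microtesla equatorial field quoted above: in the convention used here, the moment is simply the equatorial dipole field multiplied by the cube of the planet's radius. The sketch below assumes the equatorial radius as the reference radius RN, so the result is approximate.

```python
B_eq = 14e-6        # equatorial dipole field strength (T), from the text
R_N = 24_764e3      # Neptune's radius (m); the equatorial radius is assumed here

m = B_eq * R_N**3   # dipole moment in the B * R^3 convention, T*m^3
print(f"m ~ {m:.2e} T*m^3")   # ~2.1e17 T*m^3, consistent with the quoted ~2.2e17 T*m^3
```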
The large quadrupole moment of Neptune may be the result of an offset from the planet's centre and geometrical constraints of the field's dynamo generator. Measurements by Voyager 2 in extreme-ultraviolet and radio frequencies revealed that Neptune has faint and weak but complex and unique aurorae; however, these observations were limited in time and did not contain infrared. Subsequent astronomers using the Hubble Space Telescope have not glimpsed the aurorae, in contrast to the more well-defined aurorae of Uranus. Neptune's bow shock, where the magnetosphere begins to slow the solar wind, occurs at a distance of 34.9 times the radius of the planet. The magnetopause, where the pressure of the magnetosphere counterbalances the solar wind, lies at a distance of 23–26.5 times the radius of Neptune. The tail of the magnetosphere extends out to at least 72 times the radius of Neptune, and likely much farther. Climate Neptune's weather is characterized by extremely dynamic storm systems, with winds reaching speeds of almost —exceeding supersonic flow. More typically, by tracking the motion of persistent clouds, wind speeds have been shown to vary from 20 m/s in the easterly direction to 325 m/s westward. At the cloud tops, the prevailing winds range in speed from 400 m/s along the equator to 250 m/s at the poles. Most of the winds on Neptune move in a direction opposite the planet's rotation. The general pattern of winds showed prograde rotation at high latitudes vs. retrograde rotation at lower latitudes. The difference in flow direction is thought to be a "skin effect" and not due to any deeper atmospheric processes. At 70°S latitude, a high-speed jet travels at a speed of 300 m/s. Due to seasonal changes, the cloud bands in the southern hemisphere of Neptune have been observed to increase in size and albedo. This trend was first seen in 1980. The long orbital period of Neptune results in seasons lasting 40 Earth years. Neptune differs from Uranus in its typical level of meteorological activity. Voyager 2 observed weather phenomena on Neptune during its 1989 flyby, but no comparable phenomena on Uranus during its 1986 flyby. The abundance of methane, ethane and acetylene at Neptune's equator is 10–100 times greater than at the poles. This is interpreted as evidence for upwelling at the equator and subsidence near the poles, as photochemistry cannot account for the distribution without meridional circulation. In 2007, it was discovered that the upper troposphere of Neptune's south pole was about 10 K warmer than the rest of its atmosphere, which averages about . The temperature differential is enough to let methane, which elsewhere is frozen in the troposphere, escape into the stratosphere near the pole. The relative "hot spot" is due to Neptune's axial tilt, which has exposed the south pole to the Sun for the last quarter of Neptune's year, or roughly 40 Earth years. As Neptune slowly moves towards the opposite side of the Sun, the south pole will be darkened and the north pole illuminated, causing the methane release to shift to the north pole. Storms In 1989, the Great Dark Spot, an anticyclonic storm system spanning , was discovered by NASA's Voyager 2 spacecraft. The storm resembled the Great Red Spot of Jupiter. Some five years later, on 2 November 1994, the Hubble Space Telescope did not see the Great Dark Spot on the planet. Instead, a new storm similar to the Great Dark Spot was found in Neptune's northern hemisphere. 
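The claim above that Neptune receives only about 40% as much sunlight as Uranus is just the inverse-square law. The check below assumes a mean distance of roughly 19.2 AU for Uranus, a typical value not given in the text, together with the 30.1 AU quoted for Neptune.

```python
a_uranus = 19.2    # mean Sun-Uranus distance (AU), assumed typical value
a_neptune = 30.1   # mean Sun-Neptune distance (AU), from the article

print(f"Neptune is {a_neptune / a_uranus - 1:.0%} farther from the Sun than Uranus")
print(f"sunlight received relative to Uranus: {(a_uranus / a_neptune) ** 2:.0%}")
```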
The is another storm, a white cloud group farther south than the Great Dark Spot. This nickname first arose during the months leading up to the Voyager 2 encounter in 1989, when they were observed moving at speeds faster than the Great Dark Spot (and images acquired later would subsequently reveal the presence of clouds moving even faster than those that had initially been detected by Voyager 2). The Small Dark Spot is a southern cyclonic storm, the second-most-intense storm observed during the 1989 encounter. It was initially completely dark, but as Voyager 2 approached the planet, a bright core developed, which can be seen in most of the highest-resolution images. In 2018, a newer main dark spot and smaller dark spot were identified and studied. In 2023, the first ground-based observation of a dark spot on Neptune was announced. Neptune's dark spots are thought to occur in the troposphere at lower altitudes than the brighter cloud features, so they appear as holes in the upper cloud decks. As they are stable features that can persist for several months, they are thought to be vortex structures. Often associated with dark spots are brighter, persistent methane clouds that form around the tropopause layer. The persistence of companion clouds shows that some former dark spots may continue to exist as cyclones even though they are no longer visible as a dark feature. Dark spots may dissipate when they migrate too close to the equator or possibly through some other, unknown mechanism. In 1989, Voyager 2's Planetary Radio Astronomy (PRA) experiment observed around 60 lightning flashes, or Neptunian electrostatic discharges emitting energies over . A plasma wave system (PWS) detected 16 electromagnetic wave events with a frequency range of at magnetic latitudes 7–33˚. These plasma wave detections were possibly triggered by lightning over 20 minutes in the ammonia clouds of the magnetosphere. During Voyager 2’s closest approach to Neptune, the PWS instrument provided Neptune's first plasma wave detections at a sample rate of 28,800 samples per second. The measured plasma densities range from . Neptunian lightning may occur in three cloud layers, with microphysical modelling suggesting that most of these occurrences happen in the water clouds of the troposphere or the shallow ammonia clouds of the magnetosphere. Neptune is predicted to have 1/19 the lightning flash rate of Jupiter and to display most of its lightning activity at high latitudes. However, lightning on Neptune seems to resemble lightning on Earth rather than Jovian lightning. Internal heating Neptune's more varied weather when compared to Uranus is due in part to its higher internal heating. The upper regions of Neptune's troposphere reach a low temperature of . At a depth where the atmospheric pressure equals , the temperature is . Deeper inside the layers of gas, the temperature rises steadily. As with Uranus, the source of this heating is unknown, but the discrepancy is larger: Uranus only radiates 1.1 times as much energy as it receives from the Sun; whereas Neptune radiates about 2.61 times as much energy as it receives from the Sun. Neptune is over 50% farther from the Sun than Uranus and receives only ~40% of Uranus's amount of sunlight; however, its internal energy is still enough for the fastest planetary winds in the Solar System. 
Depending on the thermal properties of its interior, the heat left over from Neptune's formation may be sufficient to explain its current heat flow, though it is harder to explain Uranus's lack of internal heat while preserving the apparent similarity between the two planets.

Orbit and rotation
The average distance between Neptune and the Sun is about 4.5 billion km (about 30.1 astronomical units (AU), where one AU is the mean distance from the Earth to the Sun), and it completes an orbit on average every 164.79 years, subject to a variability of around ±0.1 years. The perihelion distance is 29.81 AU, and the aphelion distance is 30.33 AU. Neptune's orbital eccentricity is only 0.008678, making it the planet in the Solar System with the second most circular orbit, after Venus. The orbit of Neptune is inclined 1.77° compared to that of Earth. On 11 July 2011, Neptune completed its first full barycentric orbit since its discovery in 1846; it did not appear at its exact discovery position in the sky because Earth was in a different location in its 365.26-day orbit. Because of the motion of the Sun in relation to the barycentre of the Solar System, on 11 July Neptune was not at its exact discovery position in relation to the Sun; if the more common heliocentric coordinate system is used, the discovery longitude was reached on 12 July 2011. The axial tilt of Neptune is 28.32°, which is similar to the tilts of Earth (23°) and Mars (25°). As a result, Neptune experiences seasonal changes similar to those on Earth. The long orbital period of Neptune means that the seasons last for forty Earth years. Its sidereal rotation period (day) is roughly 16.11 hours. Because its axial tilt is comparable to Earth's, the variation in the length of its day over the course of its long year is no more extreme than it is on Earth. Because Neptune is not a solid body, its atmosphere undergoes differential rotation. The wide equatorial zone rotates with a period of about 18 hours, which is slower than the 16.1-hour rotation of the planet's magnetic field. The reverse is true for the polar regions, where the rotation period is 12 hours. This differential rotation is the most pronounced of any planet in the Solar System, and it results in strong latitudinal wind shear.

Formation and resonances

Formation
The formation of the ice giants, Neptune and Uranus, has been difficult to model precisely. Current models suggest that the matter density in the outer regions of the Solar System was too low to account for the formation of such large bodies from the traditionally accepted method of core accretion, and various hypotheses have been advanced to explain their formation. One is that the ice giants were not formed by core accretion but from instabilities within the original protoplanetary disc, and later had their atmospheres blasted away by radiation from a nearby massive OB star. An alternative concept is that they formed closer to the Sun, where the matter density was higher, and then subsequently migrated to their current orbits after the removal of the gaseous protoplanetary disc. This hypothesis of migration after formation is favoured because it better explains the occupancy of the populations of small objects observed in the trans-Neptunian region. The most widely accepted explanation of the details of this hypothesis is known as the Nice model, a dynamical evolution scenario that explores the potential effect of a migrating Neptune and the other giant planets on the structure of the Kuiper belt.
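The orbital figures quoted in this section are mutually consistent, which can be verified with Kepler's third law (P² = a³ with P in years and a in AU). A minimal sketch, assuming a semi-major axis of about 30.07 AU (a standard value not given in the text):

    # Consistency check of Neptune's orbital period, perihelion and aphelion.
    a = 30.07        # AU, assumed semi-major axis (standard value)
    e = 0.008678     # orbital eccentricity, as given in the text

    period_years = a ** 1.5          # Kepler's third law, P = a^(3/2)
    perihelion = a * (1 - e)
    aphelion = a * (1 + e)

    print(f"orbital period ~ {period_years:.1f} yr")   # ~164.8 yr
    print(f"perihelion ~ {perihelion:.2f} AU")         # ~29.81 AU
    print(f"aphelion ~ {aphelion:.2f} AU")             # ~30.33 AU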
Orbital resonances
Neptune's orbit has a profound impact on the region directly beyond it, known as the Kuiper belt. The Kuiper belt is a ring of small icy worlds, similar to the asteroid belt but far larger, extending from Neptune's orbit at 30 AU out to about 55 AU from the Sun. Much in the same way that Jupiter's gravity dominates the asteroid belt, Neptune's gravity dominates the Kuiper belt. Over the age of the Solar System, certain regions of the Kuiper belt became destabilised by Neptune's gravity, creating gaps in its structure. The region between 40 and 42 AU is an example. Orbits do exist within these otherwise empty regions on which objects can survive for the age of the Solar System. These resonant orbits occur when Neptune's orbital period is a precise fraction of that of the object, such as 1:2 or 3:4. If, say, an object orbits the Sun once for every two Neptune orbits, it will only complete half an orbit by the time Neptune returns to its original position. The most heavily populated resonance in the Kuiper belt, with over 200 known objects, is the 2:3 resonance. Objects in this resonance complete 2 orbits for every 3 of Neptune, and are known as plutinos because the largest of the known Kuiper belt objects, Pluto, is among them. Although Pluto crosses Neptune's orbit regularly, the 2:3 resonance ensures that they can never collide. The 3:4, 3:5, 4:7 and 2:5 resonances are less populated. Neptune has a number of known trojan objects occupying both the Sun–Neptune L4 and L5 Lagrangian points, gravitationally stable regions leading and trailing Neptune in its orbit, respectively. Neptune trojans can be viewed as being in a 1:1 resonance with Neptune. Some Neptune trojans are remarkably stable in their orbits, and are likely to have formed alongside Neptune rather than being captured. The first object identified as associated with Neptune's trailing (L5) Lagrangian point was 2008 LC18. Neptune also has a temporary quasi-satellite, (309239) 2007 RW10. The object has been a quasi-satellite of Neptune for about 12,500 years and will remain in that dynamical state for another 12,500 years.

Moons
Neptune has 16 known moons. Triton is the largest Neptunian moon, accounting for more than 99.5% of the mass in orbit around Neptune, and is the only one massive enough to be spheroidal. Triton was discovered by William Lassell just 17 days after the discovery of Neptune itself. Unlike all other large planetary moons in the Solar System, Triton has a retrograde orbit, indicating that it was captured rather than forming in place; it was probably once a dwarf planet in the Kuiper belt. It is close enough to Neptune to be locked into a synchronous rotation, and it is slowly spiralling inward because of tidal acceleration. It will eventually be torn apart, in about 3.6 billion years, when it reaches the Roche limit. In 1989, Triton was the coldest object that had yet been measured in the Solar System, with estimated temperatures of about 38 K (−235 °C). This very low temperature is due to Triton's very high albedo, which causes it to reflect most incoming sunlight rather than absorbing it. Neptune's second-known satellite (by order of discovery), the irregular moon Nereid, has one of the most eccentric orbits of any satellite in the Solar System. The eccentricity of 0.7512 gives it an apoapsis that is seven times its periapsis distance from Neptune. From July to September 1989, Voyager 2 discovered six moons of Neptune.
Of these, the irregularly shaped Proteus is notable for being as large as a body of its density can be without being pulled into a spherical shape by its own gravity. Although the second-most-massive Neptunian moon, it is only 0.25% the mass of Triton. Neptune's innermost four moons—Naiad, Thalassa, Despina and Galatea—orbit close enough to be within Neptune's rings. The next-farthest out, Larissa, was originally discovered in 1981 when it had occulted a star. This occultation had been attributed to ring arcs, but when Voyager 2 observed Neptune in 1989, Larissa was found to have caused it. Five new irregular moons discovered between 2002 and 2003 were announced in 2004. A new moon and the smallest yet, Hippocamp, was found in 2013 by combining multiple Hubble images. Because Neptune was the Roman god of the sea, Neptune's moons have been named after lesser sea gods. Planetary rings Neptune has a planetary ring system, though one much less substantial than that of Saturn and Uranus. The rings may consist of ice particles coated with silicates or carbon-based material, which most likely gives them a reddish hue. The three main rings are the narrow Adams Ring, 63,000 km from the centre of Neptune, the Le Verrier Ring, at 53,000 km, and the broader, fainter Galle Ring, at 42,000 km. A faint outward extension to the Le Verrier Ring has been named Lassell; it is bounded at its outer edge by the Arago Ring at 57,000 km. The first of these planetary rings was detected in 1968 by a team led by Edward Guinan. In the early 1980s, analysis of this data along with newer observations led to the hypothesis that this ring might be incomplete. Evidence that the rings might have gaps first arose during a stellar occultation in 1984 when the rings obscured a star on immersion but not on emersion. Images from Voyager 2 in 1989 settled the issue by showing several faint rings. The outermost ring, Adams, contains five prominent arcs now named Courage, Liberté, Egalité 1, Egalité 2 and Fraternité (Courage, Liberty, Equality and Fraternity). The existence of arcs was difficult to explain because the laws of motion would predict that arcs would spread out into a uniform ring over short timescales. Astronomers now estimate that the arcs are corralled into their current form by the gravitational effects of Galatea, a moon just inward from the ring. Earth-based observations announced in 2005 appeared to show that Neptune's rings were much more unstable than previously thought. Images taken from the W. M. Keck Observatory in 2002 and 2003 show considerable decay in the rings when compared to images by Voyager 2. In particular, it seems that the Liberté arc might disappear in as little as one century. Observation Neptune brightened about 10% between 1980 and 2000 mostly due to the changing of the seasons. Neptune may continue to brighten as it approaches perihelion in 2042. The apparent magnitude currently ranges from 7.67 to 7.89 with a mean of 7.78 and a standard deviation of 0.06. Prior to 1980, the planet was as faint as magnitude 8.0. Neptune is too faint to be visible to the naked eye. It can be outshone by Jupiter's Galilean moons, the dwarf planet Ceres and the asteroids 4 Vesta, 2 Pallas, 7 Iris, 3 Juno, and 6 Hebe. A telescope or strong binoculars will resolve Neptune as a small blue disk, similar in appearance to Uranus. Because of the distance of Neptune from Earth, its angular diameter only ranges from 2.2 to 2.4 arcseconds, the smallest of the Solar System planets. 
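A rough check of the 2.2–2.4 arcsecond angular diameter quoted above is straightforward. The sketch assumes an equatorial diameter of about 49,500 km for Neptune and an Earth–Neptune distance varying by roughly one AU around Neptune's ~30.1 AU orbit; these assumed values are standard figures, not taken from this article.

    # Small-angle estimate of Neptune's apparent diameter as seen from Earth.
    KM_PER_AU = 1.496e8
    ARCSEC_PER_RADIAN = 206265

    diameter_km = 49_500                      # assumed equatorial diameter
    for dist_au in (29.1, 31.1):              # approximate near/far Earth-Neptune distances
        theta = ARCSEC_PER_RADIAN * diameter_km / (dist_au * KM_PER_AU)
        print(f"{dist_au} AU -> {theta:.2f} arcsec")
    # -> about 2.2 to 2.3 arcsec, in line with the range given in the text.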
Its small apparent size makes it challenging to study visually. Most telescopic data was fairly limited until the advent of the Hubble Space Telescope and large ground-based telescopes with adaptive optics (AO). The first scientifically useful observations of Neptune from ground-based telescopes using adaptive optics began in 1997 from Hawaii. Neptune is currently approaching perihelion (its closest approach to the Sun) and has been shown to be heating up, with increased atmospheric activity and brightness as a consequence. Combined with technological advancements, ground-based telescopes with adaptive optics are recording increasingly detailed images of it. Both Hubble and the adaptive-optics telescopes on Earth have made many new discoveries within the Solar System since the mid-1990s, including a large increase in the number of known moons around the outer planets. In 2004 and 2005, five new small satellites of Neptune with diameters between 38 and 61 kilometres were discovered. From Earth, Neptune goes through apparent retrograde motion every 367 days, resulting in a looping motion against the background stars during each opposition. These loops carried it close to the 1846 discovery coordinates in April and July 2010 and again in October and November 2011. Neptune's 164-year orbital period means that the planet takes an average of 13 years to move through each constellation of the zodiac. In 2011, it completed its first full orbit of the Sun since being discovered and returned to where it was first spotted, northeast of Iota Aquarii. Observation of Neptune in the radio-frequency band shows that it is a source of both continuous emission and irregular bursts. Both sources are thought to originate from its rotating magnetic field. In the infrared part of the spectrum, Neptune's storms appear bright against the cooler background, allowing the size and shape of these features to be readily tracked.

Exploration
Voyager 2 is the only spacecraft that has visited Neptune. The spacecraft's closest approach to the planet occurred on 25 August 1989. Because this was the last major planet the spacecraft could visit, it was decided to make a close flyby of the moon Triton, regardless of the consequences to the trajectory, similarly to what was done for Voyager 1's encounter with Saturn and its moon Titan. The images relayed back to Earth from Voyager 2 became the basis of a 1989 PBS all-night program, Neptune All Night. During the encounter, signals from the spacecraft required 246 minutes to reach Earth. Hence, for the most part, the Voyager 2 mission relied on preloaded commands for the Neptune encounter. The spacecraft performed a near-encounter with the moon Nereid before it came within 4,400 km of Neptune's atmosphere on 25 August, then passed close to the planet's largest moon Triton later the same day. The spacecraft verified the existence of a magnetic field surrounding the planet and discovered that the field was offset from the centre and tilted in a manner similar to the field around Uranus. Neptune's rotation period was determined using measurements of radio emissions, and Voyager 2 showed that Neptune had a surprisingly active weather system. Six new moons were discovered, and the planet was shown to have more than one ring. The flyby provided the first accurate measurement of Neptune's mass, which was found to be 0.5 per cent less than previously calculated.
The new figure disproved the hypothesis that an undiscovered Planet X acted upon the orbits of Neptune and Uranus. Since 2018, the China National Space Administration has been studying a concept for a pair of Voyager-like interstellar probes tentatively known as Shensuo. Both probes would be launched in the 2020s and take differing paths to explore opposing ends of the heliosphere; the second probe, IHP-2, would fly by Neptune in January 2038, passing only 1,000 km above the cloud tops, and could potentially carry an atmospheric impactor to be released during its approach. Afterward, it would continue its mission through the Kuiper belt toward the tail of the heliosphere, which is so far unexplored. After the Voyager 2 and IHP-2 flybys, the next step in the scientific exploration of the Neptunian system is considered to be an orbital mission; most proposals have come from NASA, most often for a Flagship-class orbiter. In 2003, there was a proposal in NASA's "Vision Missions Studies" for a "Neptune Orbiter with Probes" mission that would carry out Cassini-level science. A subsequent proposal, which was not selected, was for Argo, a flyby spacecraft to be launched in 2019 that would have visited Jupiter, Saturn, Neptune, and a Kuiper belt object; the focus would have been on Neptune and its largest moon Triton, to be investigated around 2029. The proposed New Horizons 2 mission might have made a close flyby of the Neptunian system, but it was later scrapped. The Trident spacecraft, proposed for NASA's Discovery Program, would have conducted a flyby of Neptune and Triton; however, the mission was not selected for Discovery 15 or 16. Neptune Odyssey is another concept for a Neptune orbiter and atmospheric probe that was studied as a possible large strategic science mission by NASA; it would have launched between 2031 and 2033 and arrived at Neptune by 2049. However, for logistical reasons, the Uranus Orbiter and Probe mission was selected as the recommended ice-giant orbiter mission, with top priority ahead of the Enceladus Orbilander. Two notable proposals for a Triton-focused Neptune orbiter whose cost would fall between those of the Trident and Odyssey missions (under the New Frontiers program) are Triton Ocean World Surveyor and Nautilus, with cruise stages in the 2031–47 and 2041–56 time frames, respectively. Neptune is a potential target for China's Tianwen-5, which could arrive in 2058.
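The 246-minute signal delay mentioned in the Voyager 2 account above can be turned into a distance as a plausibility check; the speed of light and the length of the astronomical unit used below are standard constants, not figures from this article.

    # Convert the quoted one-way signal delay into an Earth-Neptune distance.
    C_KM_PER_S = 299_792.458
    KM_PER_AU = 1.496e8

    delay_s = 246 * 60
    distance_km = C_KM_PER_S * delay_s
    print(f"{distance_km:.3e} km, i.e. about {distance_km / KM_PER_AU:.1f} AU")
    # -> roughly 4.4e9 km (~29.6 AU), consistent with Neptune's ~30 AU distance.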
https://en.wikipedia.org/wiki/Angiology
Angiology
Angiology (from Greek angeīon, "vessel", and -logia) is the medical specialty dedicated to the study of the circulatory system and the lymphatic system, i.e., the arteries, veins and lymphatic vessels. In the UK the field is more often termed angiology, while in the United States the term vascular medicine is more common. Vascular medicine (angiology) deals with preventing, diagnosing and treating diseases of the blood vessels and lymphatics.

Overview
Arterial diseases include those of the aorta (aneurysms and dissection) and of the arteries supplying the legs, hands, kidneys, brain and intestines. The specialty also covers arterial thrombosis and embolism, vasculitides, and vasospastic disorders, and it naturally deals with the prevention of cardiovascular diseases such as heart attack and stroke. Venous diseases include venous thrombosis, chronic venous insufficiency, and varicose veins. Lymphatic diseases include primary and secondary forms of lymphedema. Vascular medicine also involves the modification of cardiovascular risk factors such as high blood pressure and elevated cholesterol.

Vascular medicine training
Vascular medicine (angiology) training is well established in some European countries. The first European educational working group (EWMA) was founded in Milan in 1991, becoming in 1998 the European scientific association VAS (Vascular Independent Research and Education European Organisation), which runs several European educational programmes: a European Fellowship, a European Master, the CESMA-UEMS European Diploma and postgraduate courses. In the United States there are several independent vascular medicine training programs, as well as twelve NIH-funded three-year programs. These programs are suitable as a fellowship for internal medicine specialists or for cardiologists. In 2005, the first vascular medicine boards were administered by the American Board of Vascular Medicine. For current education and training in vascular medicine, see the Society for Vascular Medicine.
https://en.wikipedia.org/wiki/Carbon%20tetrafluoride
Carbon tetrafluoride
Tetrafluoromethane, also known as carbon tetrafluoride or R-14, is the simplest perfluorocarbon (CF4). As its IUPAC name indicates, tetrafluoromethane is the perfluorinated counterpart to the hydrocarbon methane. It can also be classified as a haloalkane or halomethane. Tetrafluoromethane is a useful refrigerant but also a potent greenhouse gas. It has a very high bond strength due to the nature of the carbon–fluorine bond.

Bonding
Because of the multiple carbon–fluorine bonds and the high electronegativity of fluorine, the carbon in tetrafluoromethane has a significant positive partial charge, which strengthens and shortens the four carbon–fluorine bonds by providing additional ionic character. Carbon–fluorine bonds are the strongest single bonds in organic chemistry. Additionally, they strengthen as more carbon–fluorine bonds are added to the same carbon. In the one-carbon organofluorine compounds represented by molecules of fluoromethane, difluoromethane, trifluoromethane, and tetrafluoromethane, the carbon–fluorine bonds are strongest in tetrafluoromethane. This effect is due to the increased coulombic attractions between the fluorine atoms and the carbon, because the carbon has a positive partial charge of 0.76.

Preparation
Tetrafluoromethane is the product when any carbon compound, including carbon itself, is burned in an atmosphere of fluorine. With hydrocarbons, hydrogen fluoride is a coproduct. It was first reported in 1926. It can also be prepared by the fluorination of carbon dioxide, carbon monoxide or phosgene with sulfur tetrafluoride. Commercially it is manufactured by the reaction of hydrogen fluoride with dichlorodifluoromethane or chlorotrifluoromethane; it is also produced during the electrolysis of metal fluorides MF, MF2 using a carbon electrode. Although it can be made from a myriad of precursors and fluorine, elemental fluorine is expensive and difficult to handle. Consequently, CF4 is prepared on an industrial scale using hydrogen fluoride:
CCl2F2 + 2 HF → CF4 + 2 HCl

Laboratory synthesis
Tetrafluoromethane and silicon tetrafluoride can be prepared in the laboratory by the reaction of silicon carbide with fluorine:
SiC + 4 F2 → CF4 + SiF4

Reactions
Tetrafluoromethane, like other fluorocarbons, is very stable due to the strength of its carbon–fluorine bonds. The bonds in tetrafluoromethane have a bonding energy of 515 kJ⋅mol−1. As a result, it is inert to acids and hydroxides. However, it reacts explosively with alkali metals. Thermal decomposition or combustion of CF4 produces toxic gases (carbonyl fluoride and carbon monoxide) and, in the presence of water, will also yield hydrogen fluoride. It is very slightly soluble in water (about 20 mg⋅L−1), but miscible with organic solvents.

Uses
Tetrafluoromethane is sometimes used as a low-temperature refrigerant (R-14). It is used in electronics microfabrication, alone or in combination with oxygen, as a plasma etchant for silicon, silicon dioxide, and silicon nitride. It also has uses in neutron detectors.

Environmental effects
Tetrafluoromethane is a potent greenhouse gas that contributes to the greenhouse effect. It is very stable, has an atmospheric lifetime of 50,000 years, and has a greenhouse warming potential 6,500 times that of CO2. Tetrafluoromethane is the most abundant perfluorocarbon in the atmosphere, where it is designated as PFC-14. Its atmospheric concentration is growing. As of 2019, the man-made gases CFC-11 and CFC-12 continue to contribute a stronger radiative forcing than PFC-14.
Although structurally similar to chlorofluorocarbons (CFCs), tetrafluoromethane does not deplete the ozone layer, because the carbon–fluorine bond is much stronger than that between carbon and chlorine. The main industrial emissions of tetrafluoromethane, alongside those of hexafluoroethane, occur during the production of aluminium using the Hall–Héroult process. CF4 is also produced as a breakdown product of more complex compounds such as halocarbons.

Health risks
Due to its density, tetrafluoromethane can displace air, creating an asphyxiation hazard in inadequately ventilated areas. Otherwise, it is normally harmless due to its stability.
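The industrial route given in the Preparation section lends itself to a simple mass balance. The sketch below uses standard atomic masses (an assumption, not data from this article) and an ideal 100% yield to estimate the reactant masses needed per kilogram of CF4.

    # Mass balance for CCl2F2 + 2 HF -> CF4 + 2 HCl (illustrative only).
    M = {"H": 1.008, "C": 12.011, "F": 18.998, "Cl": 35.453}  # g/mol

    m_cf4 = M["C"] + 4 * M["F"]                   # ~88.0 g/mol
    m_hf = M["H"] + M["F"]                        # ~20.0 g/mol
    m_ccl2f2 = M["C"] + 2 * M["Cl"] + 2 * M["F"]  # ~120.9 g/mol

    kg_hf_per_kg_cf4 = 2 * m_hf / m_cf4
    kg_ccl2f2_per_kg_cf4 = m_ccl2f2 / m_cf4
    print(f"~{kg_hf_per_kg_cf4:.2f} kg HF and ~{kg_ccl2f2_per_kg_cf4:.2f} kg CCl2F2 per kg CF4")
    # -> roughly 0.45 kg HF and 1.37 kg dichlorodifluoromethane per kg of product.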
https://en.wikipedia.org/wiki/Hydraulic%20manifold
Hydraulic manifold
A hydraulic manifold is a component that regulates fluid flow between pumps, actuators and other components in a hydraulic system. It is like a switchboard in an electrical circuit, because it lets the operator control how much fluid flows between which components of a hydraulic machine. For example, in a backhoe loader a manifold turns on, shuts off or diverts flow to the telescopic arms of the front bucket and the back bucket. The manifold is connected to levers in the operator's cabin, which the operator uses to achieve the desired manifold behaviour. A manifold is composed of assorted hydraulic valves connected to each other. It is the various combinations of states of these valves that allow complex control behaviour in a manifold. Physically, a hydraulic manifold is a block of metal with flow paths drilled through it, connecting various ports. Hydraulic manifolds can also consist of one or more relatively large pipes, called a "barrel" or "main", with numerous junctions connecting smaller pipes and ports.
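The switchboard analogy above can be made concrete with a toy model in which the manifold is just a set of named valves, each open or closed, and the set of open valves determines which circuits receive flow. This is a minimal illustrative sketch, not real control software; the circuit names are invented for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Manifold:
        # Each named circuit is either fed by the pump (True) or blocked (False).
        valves: dict = field(default_factory=lambda: {
            "front_bucket_lift": False,
            "front_bucket_tilt": False,
            "backhoe_boom": False,
        })

        def set_valve(self, name: str, open_: bool) -> None:
            self.valves[name] = open_

        def active_circuits(self) -> list:
            # Flow is delivered only to circuits whose valve is open.
            return [name for name, is_open in self.valves.items() if is_open]

    m = Manifold()
    m.set_valve("backhoe_boom", True)   # operator moves a lever in the cab
    print(m.active_circuits())          # -> ['backhoe_boom']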
https://en.wikipedia.org/wiki/Cargo%20aircraft
Cargo aircraft
A cargo aircraft (also known as freight aircraft, freighter, airlifter or cargo jet) is a fixed-wing aircraft that is designed or converted for the carriage of cargo rather than passengers. Such aircraft generally feature one or more large doors for loading cargo. Passenger amenities are removed or not installed, although there are usually basic comfort facilities for the crew such as a galley, lavatory, and bunks in larger planes. Freighters may be operated by civil passenger or cargo airlines, by private individuals, or by government agencies of individual countries such as the armed forces. Aircraft designed for cargo flight usually have features that distinguish them from conventional passenger aircraft: a wide/tall fuselage cross-section, a high-wing to allow the cargo area to sit near the ground, numerous wheels to allow it to land at unprepared locations, and a high-mounted tail to allow cargo to be driven directly into and off the aircraft. By 2015, dedicated freighters represent 43% of the 700 billion ATK (available tonne-kilometer) capacity, while 57% is carried in airliner's cargo holds. Also in 2015, Boeing forecast belly freight to rise to 63% while specialised cargoes would represent 37% of a 1,200 billion ATKs in 2035. The Cargo Facts Consulting firm forecasts that the global freighter fleet will rise from 1,782 to 2,920 cargo aircraft from 2019 to 2039. History Aircraft were put to use carrying cargo in the form of air mail as early as 1911. Although the earliest aircraft were not designed primarily as cargo carriers, by the mid-1920s aircraft manufacturers were designing and building dedicated cargo aircraft. In the UK during the early 1920s, the need was recognized for a freighter aircraft to transport troops and material quickly to pacify tribal revolts in the newly occupied territories of the Middle East. The Vickers Vernon, a development of the Vickers Vimy Commercial, entered service with the Royal Air Force as the first dedicated troop transport in 1921. In February 1923 this was put to use by the RAF's Iraq Command who flew nearly 500 Sikh troops from Kingarban to Kirkuk in the first ever strategic airlift of troops. Vickers Victorias played an important part in the Kabul Airlift of November 1928 – February 1929, when they evacuated diplomatic staff and their dependents together with members of the Afghan royal family endangered by a civil war. The Victorias also helped to pioneer air routes for Imperial Airways' Handley Page HP.42 airliners. The World War II German design, the Arado Ar 232 was the first purpose-built cargo aircraft. The Ar 232 was intended to supplant the earlier Junkers Ju 52 freighter conversions, but only a few were built. Most other forces used freighter versions of airliners in the cargo role as well, most notably the C-47 Skytrain version of the Douglas DC-3, which served with practically every Allied nation. One important innovation for future cargo aircraft design was introduced in 1939, with the fifth and sixth prototypes of the Junkers Ju 90 four-engined military transport aircraft, with the earliest known example of a rear loading ramp. This aircraft, like most of its era, used tail-dragger landing gear which caused the aircraft to have a decided rearward tilt when landed. These aircraft introduced the Trapoklappe, a powerful ramp/hydraulic lift with a personnel stairway centered between the vehicle trackway ramps, that raised the rear of the aircraft into the air and allowed easy loading. 
A similar rear loading ramp also appeared, in a somewhat different form, on the nosewheel-equipped, late-World War II era American Budd RB-1 Conestoga twin-engined cargo aircraft. Postwar Europe also played a major role in the development of the modern air cargo and air freight industry, most notably during the Berlin Airlift at the height of the Cold War, when a massive mobilization of aircraft was undertaken by the West to supply West Berlin with food and supplies in a virtually around-the-clock air bridge, after the Soviet Union closed and blockaded Berlin's land links to the west. To rapidly supply the needed numbers of aircraft, many older types, especially the Douglas C-47 Skytrain, were pressed into service. In operation it was found that these older designs took as long as or longer to unload than the much larger, tricycle-landing-gear Douglas C-54 Skymaster, which was easier to move about in when on the ground. The C-47s were quickly removed from service, and from then on flat decks were a requirement of all new cargo designs. In the years following the war, a number of new custom-built cargo aircraft were introduced, often including some "experimental" features. For instance, the US's C-82 Packet featured a removable cargo area, while the C-123 Provider introduced the now-common upswept rear fuselage and tail shaping to allow for a much larger rear loading ramp. But it was the introduction of the turboprop that allowed the class to mature, and one of its earliest examples, the C-130 Hercules (still built in the 21st century as the Lockheed Martin C-130J), remains the yardstick against which newer military transport aircraft designs are measured. Although larger, smaller and faster designs have been proposed for many years, the C-130 continues to improve at a rate that keeps it in production. "Strategic" cargo aircraft became an important class of their own, starting with the Lockheed C-5 Galaxy in the 1960s and a number of similar Soviet designs from the 1970s and 1980s, and culminating in the Antonov An-225, the world's largest aircraft. These designs offer the ability to carry the heaviest loads, even main battle tanks, at global ranges. The Boeing 747 was originally designed to the same specification as the C-5, but was later reworked into a design that could be offered in either passenger or all-freight versions. The "bump" on the top of the fuselage keeps the crew area clear of cargo containers sliding out of the front in the event of an accident. When the Airbus A380 was announced, the maker originally accepted orders for the freighter version, the A380F, offering the second-largest payload capacity of any cargo aircraft, exceeded only by the An-225. An aerospace consultant has estimated that the A380F would have had 7% better payload and better range than the 747-8F, but also higher trip costs. Starting in May 2020, the Portuguese carrier Hi Fly began chartering cargo flights with an A380, carrying medical supplies from China to different parts of the world in response to the COVID-19 outbreak; the aircraft can carry a large load of cargo spread across its three decks. In November 2020 Emirates started offering an A380 mini-freighter, which allows for 50 tons of cargo in the belly of the plane.

Importance
Cargo aircraft have had many uses throughout the years, but their current importance is often overlooked. Cargo planes today can carry almost anything, ranging from perishables and supplies to fully built cars and livestock.
Much of the current use of cargo aircraft is driven by the growth of online shopping through retailers like Amazon and eBay. Since many of these items are made all over the world, air cargo is used to get them from point A to point B as quickly as possible. Air cargo adds significantly to world trade: it transports over US$6 trillion worth of goods, accounting for approximately 35% of world trade by value. This helps producers keep the cost of goods down, allows consumers to purchase more items, and helps stores keep goods on the shelf. Air cargo is important not only for delivery and shipping but also as an employer: air cargo companies around the United States employ over 250,000 workers, and U.S. cargo airlines employed 268,730 workers in August 2023, 34% of the industry total.

Cargo aircraft types
Nearly all commercial cargo aircraft presently in the fleet are derivatives or conversions of passenger aircraft. However, there are three other approaches to the development of cargo aircraft.

Derivatives of non-cargo aircraft
Many types can be converted from airliner to freighter by installing a main-deck cargo door with its control systems, upgrading floor beams for cargo loads, and replacing passenger equipment and furnishings with new linings, ceilings, lighting, floors, drains and smoke detectors. Specialized engineering teams rival Airbus and Boeing in this work, giving the aircraft another 15–20 years of life. Aeronautical Engineers Inc converts the Boeing 737-300/400/800, McDonnell Douglas MD-80 and Bombardier CRJ200. Israel Aerospace Industries' Bedek Aviation converts the 737-300/400/700/800 in about 90 days, 767-200/300s in about four months and 747-400s in five months, and is looking at the Boeing 777, Airbus A330 and A321. Voyageur Aviation, located in North Bay, Ontario, converts the DHC-8-100 into the DHC-8-100 Package Freighter Conversion. An A300B4-200F conversion cost $5M in 1996, an A300-600F $8M in 2001, a McDonnell Douglas MD-11F $9M in 1994, a B767-300ERF $13M in 2007 and a Boeing 747-400 PSF $22M in 2006, while an A330-300 P2F was estimated at $20M in 2016 and a Boeing 777-200ER BCF at $40M in 2017. By avoiding the main-deck door installation and relying on lighter elevators between decks, LCF Conversions wants to convert A330/A340s or B777s for $6.5M to $7.5M. In the mid-2000s, a passenger 747-400 cost $30–50 million before a $25 million conversion; a Boeing 757 cost about $15 million before conversion, falling to below $10 million by 2018; and a 737 Classic cost about $5 million, falling to $2–3 million for a Boeing 737-400 by 2018. Derivative freighters have most of their development costs already amortized, and their lead time before production is shorter than that of all-new aircraft. Converted cargo aircraft use older technology; their direct operating costs are higher than what might be achieved with current technology. Since they have not been designed specifically for air cargo, loading and unloading are not optimized; the aircraft may be pressurized more than necessary, and there may be unnecessary apparatus for passenger safety.

Dedicated civilian cargo aircraft
A dedicated commercial air freighter is an airplane which has been designed from the beginning as a freighter, with no restrictions caused by either passenger or military requirements. Over the years, there has been a dispute concerning the cost effectiveness of such an airplane, with some cargo carriers stating that they could consistently earn a profit if they had such an aircraft.
To help resolve this disagreement, the National Aeronautics and Space Administration (NASA) selected two contractors, Douglas Aircraft Co. and Lockheed-Georgia Co., to independently evaluate the possibility of producing such a freighter by 1990. This was done as part of the Cargo/Logistics Airlift Systems Study (CLASS). At comparable payloads, dedicated cargo aircraft was said to provide a 20 percent reduction in trip cost and a 15 percent decrease in aircraft price compared to other cargo aircraft. These findings, however, are extremely sensitive to assumptions about fuel and labor costs and, most particularly, to growth in demand for air cargo services. Further, it ignores the competitive situation brought about by the lower capital costs of future derivative air cargo aircraft. The main advantage of the dedicated air freighter is that it can be designed specifically for air freight demand, providing the type of loading and unloading, flooring, fuselage configuration, and pressurization which are optimized for its mission. Moreover, it can make full use of NASA's ACEE results, with the potential of significantly lowering operating costs and fuel usage. Such a high overhead raises the price of the airplane and its direct operating cost (because of depreciation and insurance costs) and increases the financial risks to investors, especially since it would be competing with derivatives which have much smaller development costs per unit and which themselves have incorporated some of the cost-reducing technology. Joint civil-military cargo aircraft One benefit of a combined development is that the development costs would be shared by the civil and military sectors, and the number of airplanes required by the military could be decreased by the number of civil reserve airplanes purchased by air carriers and available to the military in case of emergency. There are some possible drawbacks, as the restrictions executed by joint development, the punishments that would be suffered by both civil and military airplanes, and the difficulty in discovering an organizational structure that authorizes their compromise. Some features appropriate to a military aircraft would have to be rejected, because they are not suitable for a civil freighter. Moreover, each airplane would have to carry some weight which it would not carry if it were independently designed. This additional weight lessens the payload and the profitability of the commercial version. This could either be compensated by a transfer payment at acquisition, or an operating penalty compensation payment. Most important, it is not clear that there will be an adequate market for the civil version or that it will be cost competitive with derivatives of passenger aircraft. 
Unpiloted cargo aircraft
Rapid delivery demand and e-commerce growth have stimulated the development of UAV freighters around 2020. Californian Elroy Air wants to replace trucks on inefficient routes and plans to fly a subscale prototype; Californian Natilus plans a Boeing 747-sized transpacific unpiloted freighter and also plans to fly a subscale prototype; and Californian Sabrewing Aircraft is targeting a small regional unpiloted freighter and planned to fly a 65%-scale vehicle in the fall of 2018. The Chinese Academy of Sciences flew its AT200 cargo UAV, based on New Zealand's PAC P-750 XSTOL utility turboprop, in October 2017. Chinese package carrier SF Express conducted emergency logistics tests in December 2017 with a Tengoen Technologies TB001 medium-altitude UAV, and plans a larger eight-turbofan design. Boeing flew its Boeing Cargo Air Vehicle prototype, a vertical takeoff and landing (eVTOL) craft. Dorsal Aircraft, a startup in Carpinteria, California, wants to make light standard ISO containers part of its unpiloted freighter structure, in which the wing, engines and tail are attached to a dorsal-spine fuselage. Interconnecting long aluminum containers would carry the flight loads, aiming to lower overseas airfreight costs by 60%; the company also plans to convert C-130Hs with the help of Wagner Aeronautical of San Diego, which is experienced in passenger-to-cargo conversions. Beijing-based Beihang UAS Technology developed its BZK-005 high-altitude, long-range UAV for cargo transport. Garuda Indonesia will test three of them initially from September 2019, before operations in the fourth quarter. Garuda plans up to 100 cargo UAVs to connect remote regions with limited airports in Maluku, Papua, and Sulawesi.

Examples
Early air mail and airlift logistics aircraft: Avro Lancastrian (transatlantic mail), Avro York (Berlin Airlift), Boeing C-7000, Curtiss JN-4, Douglas M-2, Douglas DC-3, Douglas DC-4, Douglas DC-6.
Converted airliners; oversize transport; light aircraft; military cargo aircraft.
Experimental cargo aircraft: Hughes H-4 Hercules ("Spruce Goose"), Lockheed R6V Constitution, LTV XC-142.

Comparisons
https://en.wikipedia.org/wiki/Vietnamese%20Pot-bellied
Vietnamese Pot-bellied
Vietnamese Pot-bellied is the exonym for the Lon I or I pig, an endangered traditional Vietnamese breed of small domestic pig. The I is uniformly black and has short legs and a low-hanging belly, from which the name derives. It is reared for meat; it is slow-growing, but the pork has good flavour. The I was depicted in the traditional Đông Hồ paintings of Bắc Ninh province as a symbol of happiness, satiety and wealth.

History
The I is a traditional Vietnamese breed. It is thought to have originated in the province of Nam Định, in the Red River Delta. It was the dominant local pig breed in most provinces of the delta, and was widely distributed in Nam Định province and the neighbouring provinces of Hà Nam, Ninh Bình and Thái Bình, as well as in the province of Thanh Hóa immediately to the south, in the North Central Coast region. Until the 1970s the I was probably the most numerous pig breed in northern Vietnam, with numbers running into millions. From that time, the more productive Móng Cái began to supplant it. The National Institute of Animal Husbandry of Vietnam started a conservation programme, with subsidies for farmers who reared purebred stock, but this had little benefit – there was some increase in numbers, but at the cost of increased inbreeding. By 2010, the estimated number was only 120. In 2003 the National Institute of Animal Husbandry listed its conservation status as "critical"; in 2007 the FAO listed it as "endangered". Small numbers of I pigs were exported in the 1960s to Canada and Sweden, to be kept in zoos or to be used for laboratory experiments. Within a decade, the I had spread to animal parks in other countries in Europe; a few were reared on smallholdings. The I entered the United States from Canada in the mid-1980s, and by the end of the decade the "pot-bellied pig" was being marketed as a pet. Not all of these were purebred, and some grew to considerable size; the fad was short-lived. In 2013 it was declared an invasive species in Spain.

Characteristics
The I is a small pig. It is uniformly black, with heavily wrinkled skin. It has a pronounced sway back and a large sagging belly, which in pregnant sows may drag on the ground. The head is small, with an up-turned snout, small ears and eyes, and heavy sagging jowls. The I is robust and has good resistance to disease and to parasites. It is usually raised extensively, and forages well on the rice straw and water plants of its native area. It is particularly well adapted to the marshy and muddy terrain on which it usually lives: it has plantigrade feet, with weight borne on all four toes of each foot. Two principal types are recognised within the breed: the I-mo or Fatty I is the typical small short-legged pig, with small upward-pointing ears and a short snout; the I-pha or Large I is taller, has longer legs and a longer snout, with bigger ears held horizontally.
https://en.wikipedia.org/wiki/Magnetostratigraphy
Magnetostratigraphy
Magnetostratigraphy is a geophysical correlation technique used to date sedimentary and volcanic sequences. The method works by collecting oriented samples at measured intervals throughout the section. The samples are analyzed to determine their characteristic remanent magnetization (ChRM), that is, the polarity of Earth's magnetic field at the time a stratum was deposited. This is possible because volcanic flows acquire a thermoremanent magnetization and sediments acquire a depositional remanent magnetization, both of which reflect the direction of the Earth's field at the time of formation. This technique is typically used to date sequences that generally lack fossils or interbedded igneous rock. It is particularly useful in high-resolution correlation of deep marine stratigraphy, where it allowed the validation of the Vine–Matthews–Morley hypothesis related to the theory of plate tectonics.

Technique
When measurable magnetic properties of rocks vary stratigraphically, they may be the basis for related but different kinds of stratigraphic units known collectively as magnetostratigraphic units (magnetozones). The magnetic property most useful in stratigraphic work is the change in the direction of the remanent magnetization of the rocks, caused by reversals in the polarity of the Earth's magnetic field. The direction of the remanent magnetic polarity recorded in the stratigraphic sequence can be used as the basis for the subdivision of the sequence into units characterized by their magnetic polarity. Such units are called "magnetostratigraphic polarity units" or chrons. If the ancient magnetic field was oriented similarly to today's field (North Magnetic Pole near the Geographic North Pole), the strata retain a normal polarity. If the data indicate that the North Magnetic Pole was near the Geographic South Pole, the strata exhibit reversed polarity.

Polarity chron
A polarity chron, or in context chron, is the time interval between polarity reversals of Earth's magnetic field. It is the time interval represented by a magnetostratigraphic polarity unit. It represents a certain period in geologic history during which the Earth's magnetic field was predominantly in a "normal" or "reversed" position. Chrons are numbered in order, starting from today and increasing in number into the past. As well as a number, each chron is divided into two parts, labelled "n" and "r", showing the position of the field's polarity. Chrons are also referred to by a capital letter of a reference sequence, such as "C". A chron is the time equivalent of a chronozone or a polarity zone. An interval less than 200,000 years long was formerly called a "polarity subchron", although in 2020 the term was redefined to cover durations of approximately 10,000 to 100,000 years, with "polarity chron" covering durations of approximately 100,000 years to a million years. Other terms used are Megachron for a duration between 10⁸ and 10⁹ years, Superchron for a duration between 10⁷ and 10⁸ years, and Cryptochron for a duration of less than 3×10⁴ years.

Chron nomenclature
The nomenclature for the succession of polarity intervals, especially when changes are of short duration or not universal (the Earth's magnetic field is complex), is challenging, as each new discovery has to be inserted (or, if not validated, removed). The two standardised marine magnetic anomaly sequences are the "C-sequence" and the "M-sequence", and together they cover from the Middle Jurassic to the present.
Accordingly, the main C polarity chron series extends backwards from the current C1n, commonly termed the Brunhes, with the most recent transition to C1r, commonly termed the Matuyama, at 0.773 Ma, which is the Brunhes–Matuyama reversal. The C (for Cenozoic) sequence ends in the Cretaceous Normal Superchron, termed C34n, which on age calibration began at 120.964 Ma and lasted until Chron C33r at 83.650 Ma, which defined the Santonian geologic age. The M series is defined from M0 (full label M0r), at 121.400 Ma, which is the beginning of the Aptian, to M44n.2r, which lies before 171.533 Ma in the Aalenian. Subdivisions in the sequences also have specific nomenclature: thus C8n.2n is the second-oldest of the normal-polarity subchrons making up normal-polarity Chron C8n, and the youngest cryptochron, the Emperor cryptochron, is named C1n-1. Certain terms in the literature, such as M-1r for a postulated brief reversal at about 118 Ma, are provisional.

Sampling procedures
Oriented paleomagnetic samples are collected in the field using a rock core drill, or as hand samples (chunks broken off the rock face). To average out sampling errors, a minimum of three samples is taken from each sample site. Spacing of the sample sites within a stratigraphic section depends on the rate of deposition and the age of the section. In sedimentary layers, the preferred lithologies are mudstones, claystones, and very fine-grained siltstones, because the magnetic grains are finer and more likely to orient with the ambient field during deposition.

Analytical procedures
Samples are first analyzed in their natural state to obtain their natural remanent magnetization (NRM). The NRM is then stripped away in a stepwise manner using thermal or alternating-field demagnetization techniques to reveal the stable magnetic component. The magnetic orientations of all samples from a site are then compared, and their average magnetic polarity is determined with directional statistics, most commonly Fisher statistics or bootstrapping. The statistical significance of each average is evaluated. The latitudes of the Virtual Geomagnetic Poles from those sites determined to be statistically significant are plotted against the stratigraphic level at which they were collected. These data are then abstracted to the standard black-and-white magnetostratigraphic columns, in which black indicates normal polarity and white indicates reversed polarity.

Correlation and ages
Because the polarity of a stratum can only be normal or reversed, variations in the rate at which the sediment accumulated can cause the thickness of a given polarity zone to vary from one area to another. This presents the problem of how to correlate zones of like polarity between different stratigraphic sections. To avoid confusion, at least one isotopic age needs to be collected from each section. In sediments, this is often obtained from layers of volcanic ash. Failing that, one can tie a polarity to a biostratigraphic event that has been correlated elsewhere with isotopic ages. With the aid of the independent isotopic age or ages, the local magnetostratigraphic column is correlated with the Global Magnetic Polarity Time Scale (GMPTS). Because the age of each reversal shown on the GMPTS is relatively well known, the correlation establishes numerous time lines through the stratigraphic section. These ages provide relatively precise dates for features in the rocks such as fossils, changes in sedimentary rock composition, changes in depositional environment, etc.
They also constrain the ages of cross-cutting features such as faults, dikes, and unconformities. Sediment accumulation rates Perhaps the most powerful application of these data is to determine the rate at which the sediment accumulated. This is accomplished by plotting the age of each reversal (in millions of years ago) vs. the stratigraphic level at which the reversal is found (in meters). This provides the rate in meters per million years which is usually rewritten in terms of millimeters per year (which is the same as kilometers per million years). These data are also used to model basin subsidence rates. Knowing the depth of a hydrocarbon source rock beneath the basin-filling strata allows calculation of the age at which the source rock passed through the generation window and hydrocarbon migration began. Because the ages of cross-cutting trapping structures can usually be determined from magnetostratigraphic data, a comparison of these ages will assist reservoir geologists in their determination of whether or not a play is likely in a given trap. Changes in sedimentation rate revealed by magnetostratigraphy are often related to either climatic factors or to tectonic developments in nearby or distant mountain ranges. Evidence to strengthen this interpretation can often be found by looking for subtle changes in the composition of the rocks in the section. Changes in sandstone composition are often used for this type of interpretation. Siwalik magnetostratigraphy The Siwalik fluvial sequence (~6000 m thick, ~20 to 0.5 Ma) represents a good example of magnetostratigraphy application in resolving confusion in continental fossil based records.
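The sediment accumulation rate calculation described above amounts to fitting a straight line to reversal age versus stratigraphic level. A minimal sketch follows, using invented data purely for illustration (the ages resemble typical late Cenozoic reversal ages but are not taken from this article):

    # Least-squares slope of stratigraphic level (m) against reversal age (Ma).
    ages_ma = [0.773, 1.075, 1.775, 2.595]    # hypothetical reversal ages, Ma
    levels_m = [12.0, 21.5, 43.0, 68.5]       # hypothetical stratigraphic levels, m

    n = len(ages_ma)
    mean_t = sum(ages_ma) / n
    mean_z = sum(levels_m) / n
    slope = (sum((t - mean_t) * (z - mean_z) for t, z in zip(ages_ma, levels_m))
             / sum((t - mean_t) ** 2 for t in ages_ma))

    print(f"accumulation rate ~ {slope:.1f} m/Myr = {slope / 1000:.3f} mm/yr")
    # With these made-up numbers the rate comes out near 31 m/Myr (~0.031 mm/yr).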
https://en.wikipedia.org/wiki/Hazel
Hazel
Hazels are plants of the genus Corylus of deciduous trees and large shrubs native to the temperate Northern Hemisphere. The genus is usually placed in the birch family, Betulaceae, though some botanists split the hazels (with the hornbeams and allied genera) into a separate family Corylaceae. The fruit of the hazel is the hazelnut. Hazels have simple, rounded leaves with double-serrate margins. The flowers are produced very early in spring before the leaves, and are monoecious, with single-sex catkins. The male catkins are pale yellow and long, and the female ones are very small and largely concealed in the buds, with only the bright-red, 1-to-3 mm-long styles visible. The fruits are nuts long and 1–2 cm diameter, surrounded by an involucre (husk) which partly to fully encloses the nut. The shape and structure of the involucre, and also the growth habit (whether a tree or a suckering shrub), are important in the identification of the different species of hazel. The pollen of hazel species, which are often the cause for allergies in late winter or early spring, can be identified under magnification (600×) by their characteristic granular exines bearing three conspicuous pores. Species Corylus has around 14–18 species. The circumscription of species in eastern Asia is disputed, with World Flora Online and the Flora of China differing in which taxa are accepted, within this region. WFO accepts 17 species while Flora of China accepts 20 species (including Corylus mandshurica). Only those taxa accepted by both sources are listed below. The species are grouped as follows: Nut surrounded by a soft, leafy involucre, multiple-stemmed, suckering shrubs to 12 m tall Involucre short, about the same length as the nut Corylus americana – American hazel, eastern North America Corylus avellana – Common hazel, Europe and western Asia Corylus heterophylla – Asian hazel, Asia Corylus yunnanensis – Yunnan hazel, central and southern China Involucre long, twice the length of the nut or more, forming a 'beak' Corylus colchica – Colchican filbert, Caucasus Corylus cornuta – Beaked hazel, North America Corylus maxima – Filbert, southeastern Europe and southwest Asia Corylus sieboldiana – Asian beaked hazel, northeastern Asia and Japan (syn. C. mandshurica) Nut surrounded by a stiff, spiny involucre, single-stemmed trees to 20–35 m tall Involucre moderately spiny and also with glandular hairs Corylus chinensis – Chinese hazel, western China Corylus colurna – Turkish hazel, southeastern Europe and Asia Minor Corylus fargesii – Farges' hazel, western China Corylus jacquemontii – Jacquemont's hazel, Himalaya Corylus wangii – Wang's hazel, southwest China Involucre densely spiny, resembling a chestnut burr Corylus ferox – Himalayan hazel, Himalaya, Tibet and southwest China (syn. C. tibetica). Several hybrids exist, and they can occur between species in different sections of the genus, e.g. Corylus × colurnoides (C. avellana × C. colurna). The oldest confirmed hazel species is Corylus johnsonii found as fossils in the Ypresian-age rocks of Ferry County, Washington. Chilean hazel (Gevuina avellana), despite its name, is not related to this genus. Ecology At least 21 species of fungus have a mutualistic relationship with hazel. Lactarius pyrogalus grows almost exclusively on hazel, and hazel is one of two kinds of host for the rare Hypocreopsis rhododendri. Several rare species of Graphidion lichen depend on hazel trees. In the UK, five species of moth are specialised to feed on hazel including Parornix devoniella. 
Animals which eat hazelnuts include red deer, dormouse and red squirrel. Uses The nuts of all hazels are edible. The common hazel is the species most extensively grown for its nuts, followed in importance by the filbert. Nuts are also harvested from the other species, but apart from the filbert, none is of significant commercial importance. A number of cultivars of the common hazel and filbert are grown as ornamental plants in gardens, including forms with contorted stems (C. avellana 'Contorta', popularly known as "Corkscrew hazel" or "Harry Lauder's walking stick" from its gnarled appearance); with weeping branches (C. avellana 'Pendula'); and with purple leaves (C. maxima 'Purpurea'). Hazel is a traditional material used for making wattle, withy fencing, baskets, and the frames of coracle boats. The tree can be coppiced, and regenerating shoots allow for harvests every few years. There is a seven-year cycle (cut and grow) for hurdle (fence) making. Hazels are used as food plants by the larvae of various species of Lepidoptera including Eriocrania chrysolepidella. Culture The Celts believed hazelnuts gave one wisdom and inspiration. There are numerous variations on an ancient tale that nine hazel trees grew around a sacred pool, dropping into the water nuts that were eaten by salmon (a fish sacred to Druids), which absorbed the wisdom. A Druid teacher, in his bid to become omniscient, caught one of these special salmon and asked a student to cook the fish, but not to eat it. While he was cooking it, a blister formed and the pupil used his thumb to burst it, which he naturally sucked to cool, thereby absorbing the fish's wisdom. This boy was called Fionn Mac Cumhail (Fin McCool) and went on to become one of the most heroic leaders in Gaelic mythology. "The Hazel Branch" from Grimms' Fairy Tales claims that hazel branches offer the greatest protection from snakes and other things that creep on the earth. In the Grimm tale "Cinderella", a hazel branch is planted by the protagonist at her mother's grave and grows into a tree that is the site where the girl's wishes are granted by birds. The Russian Oreshnik () missile is named for the Hazel tree. Gallery
Biology and health sciences
Fagales
Plants
397842
https://en.wikipedia.org/wiki/Maned%20wolf
Maned wolf
The maned wolf (Chrysocyon brachyurus) is a large canine of South America. It is found in Argentina, Brazil, Bolivia, Peru, and Paraguay, and is almost extinct in Uruguay. Its markings resemble those of foxes, but it is neither a fox nor a wolf. It is the only species in the genus Chrysocyon (meaning "golden dog" in : chryso-kyōn). It is the largest canine in South America, weighing and up to at the withers. Its long, thin legs and dense reddish coat give it a distinct appearance. The maned wolf is a crepuscular and omnivorous animal adapted to the open environments of the South American savanna, with an important role in the seed dispersal of fruits, especially the wolf apple (Solanum lycocarpum). The maned wolf is a solitary animal. It communicates primarily by scent marking, but also gives a loud call known as "roar-barking". This mammal lives in open and semi-open habitats, especially grasslands with scattered bushes and trees, in the Cerrado of south, central-west, and southeastern Brazil; Paraguay; northern Argentina; and Bolivia east and north of the Andes, and far southeastern Peru (Pampas del Heath only). It is very rare in Uruguay, possibly being displaced completely through loss of habitat. The International Union for Conservation of Nature lists it as near threatened, while it is considered a vulnerable species by the Brazilian Institute of Environment and Renewable Natural Resources. In 2011, a female maned wolf, run over by a truck, underwent stem cell treatment at the , this being the first recorded case of the use of stem cells to heal injuries in a wild animal. Etymology The term maned wolf is an allusion to the mane of the nape. It is known locally as (meaning "large fox") in the Guarani language, or kalak in the Toba Qom language, in Portuguese, and , , or in Spanish. The term lobo, "wolf", originates from the Latin . Guará and aguará originated from Tupi-Guarani agoa'rá, "by the fuzz". It also is called borochi in Bolivia. Taxonomy Although the maned wolf displays many fox-like characteristics, it is not closely related to foxes. It lacks the elliptical pupils found distinctively in foxes. The maned wolf's evolutionary relationship to the other members of the canid family makes it a unique animal. Electrophoretic studies did not link Chrysocyon with any of the other living canids studied. One conclusion of this study is that the maned wolf is the only species among the large South American canids that survived the late Pleistocene extinction. Fossils of the maned wolf from the Holocene and the late Pleistocene have been excavated from the Brazilian Highlands. A 2003 study on the brain anatomy of several canids placed the maned wolf together with the Falkland Islands wolf and with pseudo-foxes of the genus Pseudalopex. One study based on DNA evidence showed that the extinct genus Dusicyon, comprising the Falkland Islands wolf and its mainland relative, was the most closely related species to the maned wolf in historical times, and that about seven million years ago it shared a common ancestor with that genus. A 2015 study reported genetic signatures in maned wolves that are indicative of population expansion followed by contraction that took place during Pleistocene interglaciations about 24,000 years before present. The maned wolf is not closely related to canids found outside South America. It is not a fox, wolf, coyote or jackal, but a distinct canid; though, based only on morphological similarities, it previously had been placed in the Canis and Vulpes genera. 
Its closest living relative is the bush dog (genus Speothos), and it has a more distant relationship to other South American canines (the short-eared dog, the crab-eating fox, and the zorros or Lycalopex). Description The species was described in 1815 by Johann Karl Wilhelm Illiger, initially as Canis brachyurus. Lorenz Oken classified it as Vulpes cancosa, and only in 1839 did Charles Hamilton Smith describe the genus Chrysocyon. Other authors later considered it as a member of the Canis genus. Fossils of Chrysocyon dated from the Late Pleistocene and Holocene epochs were collected in one of Peter Wilheim Lund expeditions to Lagoa Santa, Minas Gerais (Brazil). The specimen is kept in the South American Collection of the Zoologisk Museum in Denmark. Since no other record exists of fossils in other areas, the species is suggested to have evolved in this geographic region. The maned wolf bears minor similarities to the red fox, although it belongs to a different genus. The average adult weighs and stands up to tall at the shoulder, and has a head-body length of , with the tail adding another . Its ears are large and long . The maned wolf is the tallest of the wild canids; its long legs are likely an adaptation to the tall grasslands of its native habitat. Fur of the maned wolf may be reddish-brown to golden orange on the sides with long, black legs, and a distinctive black mane. The coat is marked further with a whitish tuft at the tip of the tail and a white "bib" beneath the throat. The mane is erectile and typically is used to enlarge the wolf's profile when threatened or when displaying aggression. Melanistic maned wolves do exist, but are rare. The first photograph of a black adult maned wolf was taken by a camera trap in northern Minas Gerais in Brazil in 2013. The skull can be identified by its reduced carnassials, small upper incisors, and long canine teeth. Like other canids, it has 42 teeth with the dental formula . The maned wolf's rhinarium extends to the upper lip, similar to the bush dog, but its vibrissae are longer. The skull also features a prominent sagittal crest. The maned wolf's footprints are similar to those of the dog, but have disproportionately small plantar pads when compared to the well-opened digit marks. The dog has pads up to three times larger than the maned wolf's footprint. These pillows have a triangular shape. The front footprints are long and wide, and those of the hind feet are long and wide. One feature that differentiates the maned wolf's footprint from those of other South American canids is the proximal union of the third and fourth digits. The maned wolf also is known for the distinctive cannabis-like odor of its territory markings, which has earned it the nickname "skunk wolf". Genetics Genetically, the maned wolf has 37 pairs of autosomes within diploid genes, with a karyotype similar to that of other canids. It has 76 chromosomes, so cannot interbreed with other canids. Evidence suggests that 15,000 years ago, the species suffered a reduction in its genetic diversity, called the bottleneck effect. However, its diversity is still greater than that of other canids. Ecology and behavior Hunting and territoriality The maned wolf is a twilight animal, but its activity pattern is more related to the relative humidity and temperature, similar to that observed with the bush dog (Speothos venaticus). Peak activity occurs between 8 and 10 am, and 8 and 10 pm. On cold or cloudy days, they can be active all day. 
The species is likely to use open fields for foraging and more closed areas, such as riparian forests, to rest, especially on warmer days. Unlike most large canids (such as the gray wolf, the African hunting dog, or the dhole), the maned wolf is a solitary animal and does not form packs. It typically hunts alone, usually between sundown and midnight, rotating its large ears to listen for prey animals in the grass. It taps the ground with a front foot to flush out the prey and pounce to catch it. It kills prey by biting on the neck or back, and shaking the prey violently if necessary. Monogamous pairs may defend a shared territory around , although outside of mating, the individuals may meet only rarely. The territory is crisscrossed by paths that they create as they patrol at night. Several adults may congregate in the presence of a plentiful food source, for example, a fire-cleared patch of grassland that would leave small vertebrate prey exposed while foraging. Both female and male maned wolves use their urine to communicate, e.g. to mark their hunting paths or the places where they have buried hunted prey. The urine has a very distinctive odor, which some people liken to hops or cannabis. The responsible substance very likely is a pyrazine, which also occurs in both plants. At the Rotterdam Zoo, this smell once set the police on a hunt for cannabis smokers. The preferred habitat of the maned wolf includes grasslands, scrub prairies, and forests. Reproduction and life cycle Their mating season ranges from November to April. Gestation lasts 60 to 65 days, and a litter may have from two to six black-furred pups, each weighing roughly . Pups are fully grown when one year old. During that first year, the pups rely on their parents for food. Data on the maned wolf's estrus and reproductive cycle mainly come from captive animals, particularly about breeding endocrinology. Hormonal changes of maned wolves in the wild follow the same variation pattern of those in captivity. Females ovulate spontaneously, but some authors suggest that the presence of a male is important for estrus induction. Captive animals in the Northern Hemisphere breed between October and February and in the Southern Hemisphere between August and October. This indicates that photoperiod plays an important role in maned wolf reproduction, mainly due to the production of semen. Generally, one estrus occurs per year. The amount of sperm produced by the maned wolf is lower compared to those of other canids. Copulation occurs during the four-day estrus period, and lasts up to 15 minutes. Courtship is similar to that of other canids, characterized by frequent approaches and anogenital investigation. Gestation lasts 60 to 65 days and a litter may have from two to six pups. One litter of seven has been recorded. Birthing has been observed in May in the Canastra Mountains, but data from captive animals suggest that births are concentrated between June and September. The maned wolf reproduces with difficulty in the wild, with a high rate of infant mortality. Females can go up to two years without breeding. Breeding in captivity is even more difficult, especially in temperate parts of the Northern Hemisphere. Pups are born weighing between 340 and 430 grams. They begin their lives with black fur, becoming red after 10 weeks. The eyes open at about 9 days of age. They are nursed up to 4 months. Afterwards, they are fed by their parents by regurgitation, starting on the third week of age and lasting up to 10 months. 
Three-month-old pups begin to accompany their mother while she forages. Males and females both engage in parental care, but it is primarily done by the females. Data on male parental care have been collected from captive animals, and little is known whether this occurs frequently in the wild. Maned wolves reach sexual maturity at one year of age, when they leave their birth territory. The maned wolf's longevity in the wild is unknown, but estimates in captivity are between 12 and 15 years. A report was made of an individual at the São Paulo Zoo that lived to be 22 years old. Diet The maned wolf is omnivorous. It specialises in preying on small and medium-sized animals, including small mammals (typically rodents and rabbits), birds and their eggs, reptiles, and even fish, gastropods, other terrestrial molluscs, and insects, but a large portion of its diet (more than 50%, according to some studies) is vegetable matter, including sugarcane, tubers, bulbs, roots and fruit. Up to 301 food items have been recorded in the maned wolf's diet, including 116 plants and 178 animal species. The maned wolf hunts by chasing its prey, digging holes, and jumping to catch birds in flight. About 21% of hunts are successful. Some authors have recorded active pursuits of the Pampas deer. They were also observed feeding on carcasses of run down animals. Fecal analysis has shown consumption of the giant anteater, bush dog, and collared peccary, but whether these animals are actively hunted or scavenged is not known. Armadillos are also commonly consumed. Animals are more often consumed in the dry season. The wolf apple (Solanum lycocarpum), a tomato-like fruit, is the maned wolf's most common food item. With some exceptions, these fruits make up between 40 and 90% of the maned wolf's diet. The wolf apple is actively sought by the maned wolf, and is consumed throughout the year, unlike other fruits that can only be eaten in abundance during the rainy season. It can consume several fruits at a time and disperse intact seeds by defecating, making it an excellent disperser of the wolf apple plant. Despite their preferred habitat, maned wolves are ecologically flexible and can survive in disturbed habitats, from burned areas to places with high human influences. Burned areas have some small mammals, such as hairy-tailed bolo mouse (Necromys lasiurus) and vesper mouse (Calomys spp.) that they can hunt and survive on. Historically, captive maned wolves were fed meat-heavy diets, but that caused them to develop bladder stones. Zoo diets for them now feature fruits and vegetables, as well as meat and specialized extruded diet formulated for maned wolves to be low in stone-causing compounds (i.e. cystine). A maned wolf from Texas was found to be a host of an intestinal acanthocephalan worm, Pachysentis canicola. Relations with other species The maned wolf participates in symbiotic relationships. It contributes to the propagation and dissemination of the plants on which it feeds, through excretion. Often, maned wolves defecate on the nests of leafcutter ants. The ants then use the dung to fertilize their fungus gardens, but they discard the seeds contained in the dung onto refuse piles just outside their nests. This process significantly increases the germination rate of the seeds. Maned wolves suffer from ticks, mainly of the genus Amblyomma, and by flies such as Cochliomyia hominivorax usually on the ears. The maned wolf is poorly parasitized by fleas. 
The sharing of territory with domestic dogs results in a number of diseases, such as rabies virus, parvovirus, distemper virus, canine adenovirus, protozoan Toxoplasma gondii, bacterium Leptospira interrogans, and nematode Dirofilaria immitis. The maned wolf is particularly susceptible to potentially fatal infection by the giant kidney worm. Ingestion of the wolf apple could prevent maned wolves from contracting this nematode, but such a hypothesis has been questioned by several authors. Its predators are mainly large cats, such as the puma (Puma concolor) and the jaguar (Panthera onca), but it is most often preyed upon by the jaguar. Humans Generally, the maned wolf is shy and flees when alarmed, so it poses little direct threat to humans. Popularly, the maned wolf is thought to have the potential of being a chicken thief. It once was considered a similar threat to cattle, sheep, and pigs, although this now is known to be false. Historically, in a few parts of Brazil, these animals were hunted for some body parts, notably the eyes, that were believed to be good-luck charms. Since its classification as a vulnerable species by the Brazilian government, it has received greater consideration and protection. They are threatened by habitat loss and being run over by automobiles. Feral and domestic dogs pass on diseases to them, and have been known to attack them. The species occurs in several protected areas, including the national parks of Caraça and Emas in Brazil. The maned wolf is well represented in captivity, and has been bred successfully at many zoos, particularly in Argentina, North America (part of a Species Survival Plan) and Europe (part of a European Endangered Species Programme). In 2012, a total of 3,288 maned wolves were kept at more than 300 institutions worldwide. The Smithsonian National Zoo Park has been working to protect maned wolves for nearly 30 years, and coordinates the collaborative, interzoo maned wolf Species Survival Plan of North America, which includes breeding maned wolves, studying them in the wild, protecting their habitat, and educating people about them. Hunting The practice of hunting maned wolves is historically poorly documented, but it is speculated to be relatively frequent. This is partly because during the Portuguese and Spanish colonization of South America, Europeans projected onto the maned wolf the historical aversion they had towards Iberian wolves, and their reputation for eating sheep and other domestic animals. And even though the species is now better seen, many people consider it a potential risk to domestic birds and children. In Brazil, the impacts of hunting on the species are better known than in Argentina, as is the impact of predation on domestic birds, which engenders retaliation from farmers. The species is also accused of attacking sheep, which increases human animosity. In Brazil, people also aimed to prevent these animals from attacking chickens, using a Brazilian variant of the Portuguese podengo, called the Brazilian podengo or Crioulo podengo. Conservation The maned wolf is not considered an endangered species by the IUCN because of its wide geographical distribution and adaptability to man-made environments. However, due to declining populations, it is classified as a near-threatened species. This decline is mostly due to human activities such as deforestation, increasing traffic in highways resulting in roadkill, and urban growth. 
Due to the decrease in their habitat, the wolves often migrate to urban regions looking for easier access to food. This increases their contact with domestic animals, as well as the risk of infectious and parasitic diseases amongst the wolves, which can lead to death. Until 1996 the maned wolf was listed as a vulnerable species by the IUCN. It is also listed in CITES Appendix II, which regulates international trade in the species. The ICMBio list in Brazil, which follows the same IUCN criteria, considers the wolf to be a vulnerable species. By these same criteria, the Brazilian state lists consider it even more threatened: it is a vulnerable species in the lists of São Paulo and Minas Gerais, while in the lists of Paraná, Santa Catarina and Rio Grande do Sul the maned wolf is listed as "endangered" and "critically endangered" respectively. In Uruguay, although there is no list equivalent to those of Brazil and the IUCN, it is regarded as a species with "priority" for conservation. In Argentina it is not considered to be in critical danger, but it is recognized that its populations are declining and fragmented. The situation of the maned wolf in Bolivia and Paraguay is uncertain. Even with these uncertainties, the maned wolf is protected against hunting in all countries: in Brazil, Argentina, and Uruguay it is forbidden by law to hunt the maned wolf. Conservationists are also taking other steps to ensure its survival, especially as urbanization continues to spread in its natural habitat.
In human cultures
Human attitudes and opinions about the maned wolf vary across populations, ranging from fear and tolerance to aversion. In some regions of Brazil, parts of the animal's body are believed to help cure bronchitis, kidney disease, and even snake bites; they are also believed to bring good luck. These parts can be teeth, the heart, ears, and even dry stools. In Bolivia, riding on a saddle made of maned wolf leather is believed to protect against bad luck. Despite these superstitions, no large-scale use of parts of this animal occurs. In urban societies in Brazil, people tend to be sympathetic to the maned wolf, seeing no value in it as a hunting animal or pest. They often consider its preservation to be important, and although these societies associate it with force and ferocity, they do not consider it a dangerous animal. Although popular in some places and common in many zoos, it can go unnoticed: studies in zoos in Brazil showed that up to 30% of respondents were either unaware of or unable to recognize a maned wolf. It was considered a common animal by the Guarani people, and the first names used by Europeans, such as the Spanish Jesuit missionary Joseph of Anchieta, were the same as those used by the native peoples (yaguaraçú). Spanish naturalist Felix de Azara also used the Guarani name to refer to it and was one of the first to describe the biology of the species and to consider it an important part of Paraguay's fauna. Much of the negative view of the maned wolf as a poultry predator stems from European ethnocentrism, as peasants in Europe often had problems with wolves and foxes. The maned wolf rarely causes antipathy in the human populations of the places in which it lives, so it has been used as a flagship species for the preservation of the Brazilian cerrado. It is represented on the 200-reais banknote, released in September 2020. It has also been represented on the 100-cruzeiros reais coin, which circulated in Brazil between 1993 and 1994.
The urine smell of cannabis
Certain varieties of cannabis are often said to have a scent remarkably similar to the urine of animals such as cats; the resemblance to the odor of maned wolf urine is even more pronounced. The intense smell of the urine may serve as an adaptation for maintaining territory, being potent enough to be detected from a considerable distance. The resemblance is so striking that in 2006, authorities at Rotterdam Zoo were alerted to investigate complaints about a visitor allegedly smoking cannabis while observing the animals. Drawing from knowledge about the organic compounds found in the urine of cats and dogs, it is conceivable that the source of the maned wolf's pungent urine is a sulphur-based compound. For instance, cats have a sulphur-containing amino acid known as felinine in their urine, which contributes to olfactory communication. It is plausible that maned wolves possess a similar substance.
Biology and health sciences
Canines
Animals
397986
https://en.wikipedia.org/wiki/Nail%20%28anatomy%29
Nail (anatomy)
A nail is a protective plate characteristically found at the tip of the digits (fingers and toes) of all primates, corresponding to the claws in other tetrapod animals. Fingernails and toenails are made of a tough rigid protein called alpha-keratin, a polymer also found in the claws, hooves, and horns of vertebrates. Structure The nail consists of the nail plate, the nail matrix and the nail bed below it, and the grooves surrounding it. Parts of the nail The nail matrix is the active tissue (or germinal matrix) that generates cells. The cells harden as they move outward from the nail root to the nail plate. The nail matrix is also known as the matrix unguis, keratogenous membrane, or onychostroma. It is the part of the nail bed that is beneath the nail and contains nerves, lymph, and blood vessels. The matrix produces cells that become the nail plate. The width and thickness of the nail plate is determined by the size, length, and thickness of the matrix, while the shape of the fingertip bone determines if the nail plate is flat, arched, or hooked. The matrix will continue to produce cells as long as it receives nutrition and remains in a healthy condition. As new nail plate cells are made, they push older nail plate cells forward; and in this way older cells become compressed, flat, and translucent. This makes the capillaries in the nail bed below visible, resulting in a pink color. The lunula ("small moon") is the visible part of the matrix, the whitish crescent-shaped base of the visible nail. The lunula can best be seen in the thumb and may not be visible in the little finger. The lunula appears white due to a reflection of light at the point where the nail matrix and nail bed meet. The nail bed is the skin beneath the nail plate. It is the area of the nail on which the nail plate rests. Nerves and blood vessels found here supply nourishment to the entire nail unit. Like all skin, it is made of two types of tissues: the dermis and the epidermis. The epidermis is attached to the dermis by tiny longitudinal "grooves" called matrix crests (cristae matricis unguis). In old age, the nail plate becomes thinner, and these grooves become more visible. The nail bed is highly innervated, and removal of the nail plate is often excruciatingly painful as a result. The nail sinus (sinus unguis) is where the nail root is; i.e. the base of the nail underneath the skin. It originates from the actively growing tissue below, the matrix. The nail plate (corpus unguis) sometimes referred to as the nail body, is the visible hard nail area from the nail root to the free edge, made of translucent keratin protein. Several layers of dead, compacted cells cause the nail to be strong but flexible. Its (transverse) shape is determined by the form of the underlying bone. In common usage, the word nail often refers to this part only. The nail plate is strongly attached to the nail bed and does not contain any nerves or blood vessels. The free margin (margo liber) or distal edge is the anterior margin of the nail plate corresponds to the abrasive or cutting edge of the nail. The hyponychium (informally known as the "quick") is the epithelium located beneath the nail plate at the junction between the free edge and the skin of the fingertip. It forms a seal that protects the nail bed. The onychodermal band is the seal between the nail plate and the hyponychium. It is just under the free edge, in that portion of the nail where the nail bed ends and can be recognized in fair-skinned people by its glassy, greyish colour. 
It is not visible in some individuals while it is highly prominent on others. Eponychium Together, the eponychium and the cuticle form a protective seal. The cuticle is the semi-circular layer of almost invisible dead skin cells that "ride out on" and cover the back of the visible nail plate. The eponychium is the fold of skin cells that produces the cuticle. They are continuous, and some references view them as one entity. (Thus the names eponychium, cuticle, and perionychium would be synonymous, although a distinction is still drawn here.) It is the cuticle (nonliving part) that is removed during a manicure, but the eponychium (living part) should not be touched due to risk of infection. The eponychium is a small band of living cells (epithelium) that extends from the posterior nail wall onto the base of the nail. The eponychium is the end of the proximal fold that folds back upon itself to shed an epidermal layer of skin onto the newly formed nail plate. The perionyx is the projecting edge of the eponychium covering the proximal strip of the lunula. The nail wall (vallum unguis) is the cutaneous fold overlapping the sides and proximal end of the nail. The lateral margin (margo lateralis) lies beneath the nail wall on the sides of the nail, and the nail groove or fold (sulcus matricis unguis) are the cutaneous slits into which the lateral margins are embedded. Paronychium The paronychium is the soft tissue border around the nail, and paronychia is an infection in this area. The paronychium is the skin that overlaps onto the sides of the nail plate, also known as the paronychial edge. The paronychium is the site of hangnails, ingrown nails, and paronychia, a skin infection. Hyponychium The hyponychium is the area of epithelium, particularly the thickened portion, underlying the free edge of the nail plate. It is sometimes called the "quick", as in the phrase "cutting to the quick". Function A healthy fingernail has the function of protecting the distal phalanx, the fingertip, and the surrounding soft tissues from injuries. It also serves to enhance precise delicate movements of the distal digits through counter-pressure exerted on the pulp of the finger. The nail then acts as a counter-force when the end of the finger touches an object, thereby enhancing the sensitivity of the fingertip, although the nail itself has no nerve endings. Finally, the nail functions as a tool enabling a so-called "extended precision grip" (e.g., pulling out a splinter in one's finger), and certain cutting or scraping actions. Growth The growing part of the nail is under the skin at the nail's proximal end under the epidermis, which is the only living part of a nail. In mammals, the growth rate of nails is related to the length of the terminal phalanges (outermost finger bones). Thus, in humans, the nail of the index finger grows faster than that of the little finger; and fingernails grow up to four times faster than toenails. In humans, fingernails grow at an average rate of approx. a month, whereas toenails grow about half as fast (approx. average a month). Fingernails require three to six months to regrow completely, and toenails require twelve to eighteen months. Actual growth rate is dependent upon age, sex, season, exercise level, diet, and hereditary factors. The longest female nails known ever to have existed measured a total of 8.65 m (28 ft 4.5 in). Contrary to popular belief, nails do not continue to grow after death; the skin dehydrates and tightens, making the nails (and hair) appear to grow. 
Permeability The nail is often considered an impermeable barrier, but this is not true. In fact, it is much more permeable than the skin, and the composition of the nail includes 7–12% water. This permeability has implications for penetration by harmful and medicinal substances; in particular cosmetics applied to the nails can pose a risk. Water can penetrate the nail as can many other substances including paraquat, a fast acting herbicide that is harmful to humans; urea which is often an ingredient in creams and lotions meant for use on hands and fingers; several fungicidal agents such as salicylic acid, miconazole branded Monistat, natamycin; and sodium hypochlorite which is the active ingredient in common household bleach (but usually only in 2–3% concentration). Clinical significance Healthcare and pre-hospital-care providers (EMTs or paramedics) often use the fingernail beds as a cursory indicator of distal tissue perfusion of individuals who may be dehydrated or in shock. However, this test is not considered reliable in adults. This is known as the CRT or blanch test. The fingernail bed is briefly depressed to turn the nail-bed white. When the pressure is released, the normal pink colour should be restored within a second or two. Delayed return to pink color can be an indicator of certain shock states such as hypovolemia. Nail growth record can show the history of recent health and physiological imbalances, and has been used as a diagnostic tool since ancient times. Deep, horizontally transverse grooves known as "Beau's lines" may form across the nails (horizontal, not along the nail from cuticle to tip). These lines are usually a natural consequence of aging, although they may result from disease. Discoloration, thinning, thickening, brittleness, splitting, grooves, Mees' lines, small white spots, receded lunula, clubbing (convex), flatness, and spooning (concave) can indicate illness in other areas of the body, nutrient deficiencies, drug reaction, poisoning, or merely local injury. Nails can also become thickened (onychogryphosis), loosened (onycholysis), infected with fungus (onychomycosis), or degenerate (onychodystrophy). A common nail disorder is an ingrowing toenail (onychocryptosis). DNA profiling is a technique employed by forensic scientists on hair, fingernails, toenails, etc. Health and care The best way to care for nails is to trim them regularly. Filing is also recommended, as to keep nails from becoming too rough and to remove any small bumps or ridges that may cause the nail to get tangled up in materials such as cloth. Bluish or purple fingernail beds may be a symptom of peripheral cyanosis, which indicates oxygen deprivation. Nails can dry out, just like skin. They can also peel, break, and be infected. Toe infections, for instance, can be caused or exacerbated by dirty socks, specific types of aggressive exercise (long-distance running), tight footwear, and walking unprotected in an unclean environment. Common organisms causing nail infections include yeasts and molds (particularly dermatophytes). Nail tools used by different people may transmit infections. Standard hygiene and sanitation procedures avoid transmission. In some cases, gel and cream cuticle removers can be used instead of cuticle scissors. Nail disease can be very subtle and should be evaluated by a dermatologist with a focus in this particular area of medicine. However, most times it is a nail stylist who will note a subtle change in nail disease. 
Inherited accessory nail of the fifth toe occurs where the toenail of the smallest toe is separated, forming a smaller "sixth toenail" in the outer corner of the nail. Like any other nail, it can be cut using a nail clipper. Finger entrapment injuries are common in children and can include damage to the finger pulp and fingernail. These are usually treated by cleaning the area and applying a sterile dressing. Surgery may sometimes be required to repair the laceration or broken bones.
Effect of nutrition
Biotin-rich foods and supplements may help strengthen brittle fingernails. Vitamin A is an essential micronutrient for vision, reproduction, cell and tissue differentiation, and immune function. Vitamin D and calcium work together to maintain calcium homeostasis and to support muscle contraction, transmission of nerve impulses, blood clotting, and membrane structure. A lack of vitamin A, vitamin D, or calcium can cause dryness and brittleness. Insufficient vitamin B12 can lead to excessive dryness, darkened nails, and rounded or curved nail ends. Insufficient intake of both vitamins A and B results in fragile nails with horizontal and vertical ridges. Some over-the-counter vitamin supplements, such as certain multivitamins and biotin, may help in the growth of strong nails, although the evidence for this is weak. Both vitamin B12 and folate play a role in red blood cell production and oxygen transportation to nail cells; inadequacies can result in discoloration of the nails. Diminished dietary intake of omega-3 fatty acids may contribute to dry and brittle nails. Protein is a building material for new nails, so low dietary protein intake may cause anemia. The resulting reduction in hemoglobin in the blood filling the capillaries of the nail bed reflects less of the light incident on the nail, producing lighter shades of pink and ultimately white nail beds when the hemoglobin is very low. When hemoglobin is close to 15 or 16 grams per deciliter, most of the spectrum of light is absorbed and only the pink color is reflected back, so the nails look pink. Essential fatty acids play a large role in healthy skin as well as nails. Splitting and flaking of nails may be due to a lack of linoleic acid. Iron-deficiency anemia can lead to a pale color along with a thin, brittle, ridged texture. Iron deficiency in general may cause the nails to become flat or concave, rather than convex. As oxygen is needed for healthy nails, an iron deficiency or anemia can lead to vertical ridges or concavity in the nails. RDAs for iron vary considerably depending on age and gender: the recommendation for men is 8 mg per day, while that for women aged 19–50 is 18 mg per day. After women reach age 50 or go through menopause, their iron needs drop to 8 mg daily.
Society and culture
Fashion
Manicures (for the hands) and pedicures (for the feet) are health and cosmetic procedures to groom, trim, and paint the nails and manage calluses. They require various tools such as cuticle scissors, nail scissors, nail clippers, and nail files. Artificial nails can also be fixed onto real nails for cosmetic purposes. A person whose occupation is to cut, shape and care for nails as well as to apply overlays such as acrylic and UV gel is sometimes called a nail stylist. The place where a nail stylist works may be a nail salon, nail shop, or nail bar. Painting the nails with colored nail polish (also called nail lacquer and nail varnish) to improve the appearance is a common practice dating back to at least 3000 BC.
Acrylic nails are made out of acrylic glass (PMMA). When it is mixed with a liquid monomer (usually ethyl methacrylate mixed with some inhibitor) it forms a malleable bead. This mixture begins to cure immediately, continuing until completely solid in minutes. Acrylic nails can last up to 21 days but can last longer with touch-ups. To give acrylic nails color, gel polish, nail polish, and dip powders can be applied. Gel nails can be used in order to create artificial nail extensions, but can also be used like nail polish. They are hardened using ultraviolet light. They last longer than regular nail polish and do not chip. They have a high-gloss finish and last for two to three weeks. Nail wraps are formed by cutting pieces of fiberglass, linen, silk fabric, or another material to fit on the surface of the nail (or a tip attached prior), to be sealed onto the nail plate with a layer of resin or glue. They do not damage the nail and also provide strength to the nail, but are not used to lengthen it. They can also be used to fix broken nails. However, the treatment is more expensive. With the dip powder method, a clear liquid is brushed onto a nail and the nail is then placed into pigmented powder. Dip nails tend to last about a month, 2–3 weeks longer than gel and acrylic nails. They can be worn on natural nails, or with tips to create an artificial nail. Dip powder nails do not require any UV/LED light to be cured; instead they are sealed using an activator. The quickest way to remove dip powder is to drill, clip off, or buff out layers of the powder so that, when they are soaking in acetone, they slide right off.
Length records
Guinness World Records began tracking record fingernail lengths in 1955, when a Chinese priest was listed as having fingernails long. The current record-holder for men, according to Guinness, is Shridhar Chillal from India, who set the record in 1998 with a total of of nails on his left hand. His longest nail, on his thumb, was long. The former record-holder for women was Lee Redmond of the U.S., who set the record in 2001 and as of 2008 had nails with a total length on both hands of , with the longest nail on her right thumb at . However, as of 2022, the record for women is held by Diana Armstrong from Minneapolis.
Evolution in primates
The nail is an unguis, meaning a keratin structure at the end of a digit. Other examples of ungues include the claw, hoof, and talon. The nails of primates and the hooves of running mammals evolved from the claws of earlier animals. In contrast to nails, claws are typically curved ventrally (downwards in animals) and compressed sideways. They serve a multitude of functions, including climbing, digging, and fighting, and have undergone numerous adaptive changes in different animal taxa. Claws are pointed at their ends and are composed of two layers: a thick, deep layer and a superficial, hardened layer which serves a protective function. The underlying bone is a virtual mold of the overlying horny structure and therefore has the same shape as the claw or nail. Compared to claws, nails are flat, less curved, and do not extend far beyond the tip of the digits. The ends of the nails usually consist only of the "superficial", hardened layer and are not pointed like claws. With only a few exceptions, primates retain plesiomorphic (original, "primitive") hands with five digits, each equipped with either a nail or a claw.
For example, nearly all living strepsirrhine primates have nails on all digits except the second toe, which is equipped with a grooming claw. Tarsiers have a grooming claw on the second and third toes. Less commonly known, a grooming claw is also found on the second pedal digit of owl monkeys (Aotus), titis (Callicebus), and possibly other New World monkeys. The needle-clawed bushbaby (Euoticus) has keeled nails (the thumb and the first and the second toes have claws) featuring a central ridge that ends in a needle-like tip. A study of the fingertip morphology of four small-bodied New World monkey species indicated a correlation between increasing small-branch foraging and: expanded apical pads (fingertips), developed epidermal ridges (fingerprints), broadened distal parts of distal phalanges (fingertip bones), and reduced flexor and extensor tubercles (attachment areas for finger muscles on bones). This suggests that whereas claws are useful on large-diameter branches, wide fingertips with nails and epidermal ridges were required for habitual locomotion on small-diameter branches. It also indicates that the keel-shaped nails of Callitrichines (a family of New World monkeys) are a derived postural adaptation rather than a retained ancestral condition. An alternative theory is that the nails of primates evolved to enable silent movement through trees while stalking prey, replacing noisier claws to make ambush hunting more effective.
Biology and health sciences
Integumentary system
null
398146
https://en.wikipedia.org/wiki/Cedrus%20libani
Cedrus libani
Cedrus libani, commonly known as cedar of Lebanon, Lebanon cedar, or Lebanese cedar (), is a species of tree in the genus Cedrus, a part of the pine family, native to the mountains of the Eastern Mediterranean basin. It is a large evergreen conifer that has great religious and historical significance in the cultures of the Middle East, and is referenced many times in the literature of ancient civilisations. It is the national emblem of Lebanon and is widely used as an ornamental tree in parks and gardens. Description Cedrus libani can reach in height, with a massive monopodial columnar trunk up to in diameter. The trunks of old trees ordinarily fork into several large, erect branches. The rough and scaly bark is dark grey to blackish brown, and is run through by deep, horizontal fissures that peel in small chips. The first-order branches are ascending in young trees; they grow to a massive size and take on a horizontal, wide-spreading disposition. Second-order branches are dense and grow in a horizontal plane. The crown is conical when young, becoming broadly tabular with age with fairly level branches; trees growing in dense forests maintain more pyramidal shapes. Shoots and leaves The shoots are dimorphic, with both long and short shoots. New shoots are pale brown, older shoots turn grey, grooved and scaly. C. libani has slightly resinous ovoid vegetative buds measuring long and wide enclosed by pale brown deciduous scales. The leaves are needle-like, arranged in spirals and concentrated at the proximal end of the long shoots, and in clusters of 15–35 on the short shoots; they are long and wide, rhombic in cross-section, and vary from light green to glaucous green with stomatal bands on all four sides. Cones Cedrus libani produces cones beginning at around the age of 40. Its cones are borne in autumn, the male cones appear in early September and the female ones in late September. Male cones occur at the ends of the short shoots; they are solitary and erect about long and mature from a pale green to a pale brown color. The female seed cones also grow at the terminal ends of short shoots. The young seed cones are resinous, sessile, and pale green; they require 17 to 18 months after pollination to mature. The mature, woody cones are long and wide; they are scaly, resinous, ovoid or barrel-shaped, and gray-brown in color. Mature cones open from top to bottom, they disintegrate and lose their seed scales, releasing the seeds until only the cone rachis remains attached to the branches. The seed scales are thin, broad, and coriaceous, measuring long and wide. The seeds are ovoid, long and wide, attached to a light brown wedge-shaped wing that is long and wide. C. libani grows rapidly until the age of 45 to 50 years; growth becomes extremely slow after the age of 70. Taxonomy Cedrus is the Latin name for true cedars. The specific epithet refers to the Lebanon mountain range where the species was first described by French botanist Achille Richard; the tree is commonly known as the Lebanon cedar or cedar of Lebanon. Two distinct types are recognized as varieties: C. libani var. libani and C. libani var. brevifolia. C. libani var. libani: Lebanon cedar, cedar of Lebanon – grows in Lebanon, western Syria, and south-central Turkey. C. libani var. stenocoma (the Taurus cedar), considered a subspecies in earlier literature, is now recognized as an ecotype of C. libani var. libani. It usually has a spreading crown that does not flatten. 
This distinct morphology is a habit that is assumed to cope with the competitive environment, since the tree occurs in dense stands mixed with the tall-growing Abies cilicica, or in pure stands of young cedar trees. C. libani var. brevifolia: The Cyprus cedar occurs on the island's Troodos Mountains. This taxon was considered a separate species from C. libani because of morphological and ecophysiological trait differences. It is characterized by slow growth, shorter needles, and higher tolerance to drought and aphids. Genetic relationship studies, however, did not recognize C. brevifolia as a separate species, the markers being indistinguishable from those of C. libani. Distribution and habitat C. libani var. libani is endemic to elevated mountains around the Eastern Mediterranean in Lebanon, Syria, and Turkey. The tree grows in well-drained calcareous lithosols on rocky, north- and west-facing slopes and ridges and thrives in rich loam or a sandy clay in full sun. Its natural habitat is characterized by warm, dry summers and cool, moist winters with an annual precipitation of ; the trees are blanketed by a heavy snow cover at the higher elevations. In Lebanon and Turkey, it occurs most abundantly at elevations of , where it forms pure forests or mixed forests with Cilician fir (Abies cilicica), European black pine (Pinus nigra), Turkish pine (Pinus brutia), and several juniper species. In Turkey, it can occur as low as . C. libani var. brevifolia grows in similar conditions in the Troodos Mountains of Cyprus at medium to high elevations ranging from . History and symbolism In the Epic of Gilgamesh, one of the earliest great works of literature, the Sumerian hero Gilgamesh and his friend Enkidu travel to the legendary Cedar Forest to kill its guardian and cut down its trees. While early versions of the story place the forest in Iran, later Babylonian accounts of the story place the Cedar Forest in Lebanon. The Lebanon cedar is mentioned several times in the Bible. Hebrew priests were ordered by Moses to use the bark of the Lebanon cedar in the treatment of leprosy. Solomon also procured cedar timber to build the Temple in Jerusalem. The Hebrew prophet Isaiah used the Lebanon cedar (together with "oaks of Bashan", "all the high mountains" and "every high tower") as examples of loftiness as a metaphor for the pride of the world and in Psalm 92:12 it says "The righteous shall flourish like the palm tree: he shall grow like a cedar in Lebanon". National and regional significance The Lebanon cedar is the national emblem of Lebanon, and is displayed on the flag of Lebanon and coat of arms of Lebanon. It is also the logo of Middle East Airlines, which is Lebanon's national carrier. Beyond that, it is also the main symbol of Lebanon's "Cedar Revolution" of 2005, the 17 October Revolution, also known as the Thawra ("Revolution") along with many Lebanese political parties and movements, such as the Lebanese Forces. Finally, Lebanon is sometimes metonymically referred to as the Land of the Cedars. Arkansas, among other US states, has a Champion Tree program that records exceptional tree specimens. The Lebanon cedar recognized by the state is located inside Hot Springs National Park and is estimated to be over 100 years old. Cultivation The Lebanon cedar is widely planted as an ornamental tree in parks and gardens. 
When the first cedar of Lebanon was planted in Britain is unknown, but it dates at least to 1664, when it is mentioned in Sylva, or A Discourse of Forest-Trees and the Propagation of Timber by John Evelyn. In Britain, cedars of Lebanon are known for their use in London's Highgate Cemetery. C. libani has gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017). Propagation In order to germinate Cedrus libani seeds, potting soil is preferred, since it is less likely to contain fungal species which may kill the seedling in its early stages. Before sowing it is important to soak the seed at room temperature for a period of 24 hours followed by cold stratification (~3–5°C) for two to four weeks. Once the seeds have been sown, it is recommended that they be kept at room temperature (~20°C) and in the vicinity of sunlight. The soil should be kept slightly damp with low frequency watering. Over-watering may cause damping off which will quickly kill the seedlings. Initial growth will be around 3–5cm the first year and will accelerate subsequent years. Uses Cedar wood is prized for its fine grain, attractive yellow color, and fragrance. It is exceptionally durable and immune to insect ravages. Wood from C. libani has a density of 560 kg/m3; it is used for furniture, construction, and handicrafts. In Turkey, shelterwood cutting and clearcutting techniques are used to harvest timber and promote uniform forest regeneration. Cedar resin (cedria) and cedar essential oil (cedrum) are prized extracts from the timber and cones of the cedar tree. Ecology and conservation Over the centuries, extensive deforestation has occurred, with only small remnants of the original forests surviving. Deforestation has been particularly severe in Lebanon and on Cyprus; on Cyprus, only small trees up to tall survive, though Pliny the Elder recorded cedars tall there. Attempts have been made at various times throughout history to conserve the Lebanon cedars. The first was made by the Roman emperor Hadrian; he created an imperial forest and ordered it marked by inscribed boundary stones, two of which are in the museum of the American University of Beirut. Extensive reforestation of cedar is carried out in the Mediterranean region. In Turkey, over 50 million young cedars are planted annually, covering an area around . Lebanese cedar populations are also expanding through an active program combining replanting and protection of natural regeneration from browsing goats, hunting, forest fires, and woodworms. The Lebanese approach emphasizes natural regeneration by creating proper growing conditions. The Lebanese state has created several reserves, including the Chouf Cedar Reserve, the Jaj Cedar Reserve, the Tannourine Reserve, the Ammouaa and Karm Shbat Reserves in the Akkar district, and the Forest of the Cedars of God near Bsharri. Because during the seedling stage, differentiating C. libani from C. atlantica or C. deodara is difficult, the American University of Beirut has developed a DNA-based method of identification to ensure that reforestation efforts in Lebanon are of the cedars of Lebanon and not other types. Diseases and pests C. libani is susceptible to a number of soil-borne, foliar, and stem pathogens. The seedlings are prone to fungal attacks. Botrytis cinerea, a necrotrophic fungus known to cause considerable damage to food crops, attacks the cedar needles, causing them to turn yellow and drop. 
Armillaria mellea (commonly known as honey fungus) is a basidiomycete that fruits in dense clusters at the base of trunks or stumps and attacks the roots of cedars growing in wet soils. The Lebanese cedar shoot moth (Parasyndemis cedricola) is a species of moth of the family Tortricidae found in the forests of Lebanon and Turkey; its larvae feed on young cedar leaves and buds.
Biology and health sciences
Pinaceae
Plants
398279
https://en.wikipedia.org/wiki/Osaka%20Metro
Osaka Metro
The is a major rapid transit system in the Osaka metropolitan area of Japan, operated by the Osaka Metro Company, Ltd. It serves the city of Osaka and the adjacent municipalities of Higashiosaka, Kadoma, Moriguchi, Sakai, Suita, and Yao. Osaka Metro forms an integral part of the extensive mass transit system of Greater Osaka (part of the Kansai region), having 123 out of the 1,108 rail stations (2007) in the Osaka-Kobe-Kyoto region. In 2010, the greater Osaka region had 13 million rail passengers daily (see Transport in Keihanshin) of which the Osaka Municipal Subway (as it was then known) accounted for 2.29 million. Osaka Metro is the only subway system in Japan to be partially legally classified as a tram system, whereas all other subway systems in Japan are legally classified as railways. Despite this, it has all the characteristics typical of a full-fledged metro system. Overview The network's first service, the Midōsuji Line from to , opened in 1933. As a north–south trunk route, it is the oldest and busiest line in the whole network. Both it and the main east–west route, the Chūō Line, were later extended to the north and east, respectively. These extensions are owned by other railway companies, but both Osaka Metro and these private operators run their own set of trains through between the two sections. All but one of the remaining lines of the network, including the Yotsubashi Line, Tanimachi Line, and Sennichimae Line, are completely independent lines with no through services. The lone exception is the Sakaisuji Line, which operates through trains to existing Hankyu Railway lines and is the only line to operate through services to existing railway lines that are not isolated from the national rail network (which is the case with the Midōsuji and Chūō Lines). As such, it is not compatible with the rest of the lines. Nearly all stations have a letter number combination, the letter identifying the line served by the station and the number indicating the relative location of the station on the line. For example, Higobashi Station on the Yotsubashi Line is also known as Y12. This combination is heard in bilingual Japanese-English automated next-station announcements on board all trains, which also provide information on local businesses near the station. Only Hankyu stations served by the Sakaisuji Line do not follow this convention. Management The network is operated by a municipally owned stock company trading as the Osaka Metro Company, Ltd. The Osaka Metro Co. is the direct legal successor to the Osaka Municipal Transportation Bureau, which operated the subway as ; under the Bureau's management, the subway was the oldest publicly operated subway network in Japan, having begun operations in 1933. A proposal to corporatize the Osaka subway was sent to the city government in February 2013 and was given final approval in 2017. The rationale behind corporatization is that it would bring private investors to Osaka and could help revive Osaka's economy. The Osaka Metro Co. was incorporated on June 1, 2017, and took over operations on April 1, 2018. The Osaka Metro Co. also operates all city buses in Osaka, through its majority-owned subsidiary, the . 
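As a rough illustration of the letter-and-number station codes described above, the short Python sketch below composes and parses such identifiers. It is a hypothetical example written for this article rather than anything used by Osaka Metro; the function names and the two-digit formatting are assumptions, but the snippet mirrors the stated convention that a line letter is paired with a station's position on that line (e.g. "Y12" for Higobashi on the Yotsubashi Line).

import re

def make_station_code(line_letter: str, station_number: int) -> str:
    # Combine a line letter with a station's position on that line, e.g. ("Y", 12) -> "Y12".
    return f"{line_letter.upper()}{station_number:02d}"

def parse_station_code(code: str) -> tuple[str, int]:
    # Split a code such as "Y12" back into its line letter and station number.
    match = re.fullmatch(r"([A-Z])(\d{1,2})", code.strip().upper())
    if match is None:
        raise ValueError(f"not a letter-and-number station code: {code!r}")
    return match.group(1), int(match.group(2))

# Example from the text: Higobashi on the Yotsubashi Line is also known as Y12.
assert make_station_code("Y", 12) == "Y12"
assert parse_station_code("Y12") == ("Y", 12)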
Branding
Osaka Metro stations are denoted by the Osaka Metro Co.'s corporate logo, a white-on-dark-blue icon placed at ground-level entrances, depicting an "M" (for "Metro") based on a coiled ribbon, which would form an "O" (for "Osaka") when viewed from the side (this symbol is officially called the "moving M"), with the "Osaka Metro" wordmark set in the Gotham typeface. "Osaka Metro" (in Latin characters) is the official branding in Japanese, and is always represented as such in official media. (News outlets have been seen to use 大阪メトロ, presumably to better flow with article text.) Individual lines are represented by a public-facing name (e.g. “Midōsuji Line” for Rapid Electric Tramway Line No. 1) and a specific color, as well as a single Latin letter, which is paired with a different number at each station for easy identification (see below). Icons for each line (featured in station wayfinding signage) are represented by a solid roundel in the line color, superimposed with the line's letter-designation in the Parisine typeface. An older branding (also used on the original tram network run by the city until 1969) is the "Mio-Den" mark, which depicts an old-fashioned , the logo for Osaka City, over the kanji for , short for . This mark is still present on newer trainsets and staff uniforms, as Osaka Metro retained it as its monsho, as well as a connection to the subway network's roots. When it was run by the Osaka Municipal Transportation Bureau, the subway used a logo known as the symbol, which is a katakana for superimposed over a circular capital “O” for “Osaka”. This remained on many older trainsets and at stations, until it was completely replaced by the Osaka Metro logo by 2020.
Lines
Currently, there are eight lines, operating on of track and serving 124 stations; there is also a -long, 10-station automated people mover line known as the "New Tram".
Planned line and extensions
In addition, there are five line extensions and one entirely new line that are planned. However, on August 28, 2014, the Osaka Municipal Transportation Bureau met about creating the extensions of the latter five of the six lines listed below, and stated that, considering the current cost of the new extensions (and the possibility of privatization at the time), the government has also considered using light rail transit or bus rapid transit instead. Osaka Metro is now experimenting with bus rapid transit on the route of the Imazatosuji Line extension, with “Imazato Liner” service between Imazato and Yuzato-Rokuchōme slated to begin in April 2019. With Osaka being the host of Expo 2025, a northwest extension to Yumeshima (the event's planned site) opened on 19 January 2025, with long-term plans envisioning a further extension to Sakurajima north of Universal Studios Japan via Maishima Sports Island. Provisions were put in place for such an extension when the Yumesaki Tunnel between Cosmosquare and Yumeshima was built in the late 2000s, but the state of the artificial island at the time of the bid (with only industrial facilities and a single convenience store for the workers) meant it would have been unlikely to proceed had Osaka not won said bid.
Technology and rolling stock
Osaka Municipal Subway rolling stock uses two types of propulsion systems.
The vast majority of lines use trains with conventional electric motors, but the two newest lines, the Nagahori Tsurumi-ryokuchi Line and Imazatosuji Line, use linear motor-powered trains, which allow them to use smaller trains and tunnels, reducing construction costs. These two lines have half-height automatic platform gates installed at all station platforms, as do the Sennichimae Line, the Midōsuji Line, and the Sakaisuji Line. Also, unlike most other rapid transit networks in Japan (but like the preceding Tokyo Metro Ginza Line [the only rapid transit line in Asia at the time], the subsequent Marunouchi Line, the early lines in Nagoya, and the Blue Line in Yokohama), most Osaka subway lines use a third-rail electrification system for trains. Only three lines use overhead catenary: the Sakaisuji Line, to accommodate through services on Hankyu trackage, and the linear-motor Nagahori Tsurumi-ryokuchi and Imazatosuji Lines. Also unusually, all lines use standard gauge; there are no narrow gauge sections of track, because the network is almost entirely self-enclosed (although Kyoto and Kobe also have entirely standard gauge metros with through services to private railways).
Conventional motored
21 series ("New 20 series"): Midōsuji Line
22 series ("New 20 series"): Tanimachi Line
23 series ("New 20 series"): Yotsubashi Line
24 series ("New 20 series"): Chūō Line
25 series ("New 20 series"): Sennichimae Line
66 series: Sakaisuji Line
400 series: Chūō Line
30000 series: Tanimachi Line, Midōsuji Line, Chūō Line (30000A series)
Linear motored
70 series: Nagahori Tsurumi-ryokuchi Line
80 series: Imazatosuji Line, Nagahori Tsurumi-ryokuchi Line
Fares Osaka Metro charges five types of fares for single rides, based on the distance traveled in each journey. Some discount fares exist. Incidents On April 8, 1970, a gas explosion occurred during an expansion of the Tanimachi Line at Tenjimbashisuji Rokuchōme Station, killing 79 people and injuring 420. The gas leaked from a detached joint, filled the tunnel, and exploded when a service vehicle's engine sparked over the leaking gas, creating a fire column that burned around 30 buildings and damaged or destroyed a total of 495 buildings.
Technology
Japan
null
398561
https://en.wikipedia.org/wiki/General%20anaesthesia
General anaesthesia
General anaesthesia (UK) or general anesthesia (US) is medically induced loss of consciousness that renders a patient unarousable even by painful stimuli. It is achieved through medications, which can be injected or inhaled, often with an analgesic and neuromuscular blocking agent. General anaesthesia is usually performed in an operating theatre to allow surgical procedures that would otherwise be intolerably painful for a patient, or in an intensive care unit or emergency department to facilitate endotracheal intubation and mechanical ventilation in critically ill patients. Depending on the procedure, general anaesthesia may be optional or required. No matter whether the patient prefers to be unconscious or not, certain pain stimuli can lead to involuntary responses from the patient, such as movement or muscle contractions, that make the operation extremely difficult. Thus, for many procedures, general anaesthesia is necessary from a practical point of view. The patient's natural breathing may be inadequate during the procedure and intervention is often necessary to protect the airway. Various drugs are used to achieve unconsciousness, amnesia, analgesia, loss of reflexes of the autonomic nervous system, and in some cases paralysis of skeletal muscles. The best combination of anaesthetics for a given patient and procedure is chosen by an anaesthetist or other specialist in consultation with the patient and the surgeon or practitioner performing the procedure. History Attempts at producing general anaesthesia can be traced throughout recorded history in the writings of the ancient Sumerians, Babylonians, Assyrians, Egyptians, Greeks, Romans, Indians, and Chinese. During the Middle Ages, scholars made advances in the Eastern world and Europe. The Renaissance saw advances in anatomy and surgical technique. However, surgery remained a treatment of last resort. Largely because of the associated pain, many patients chose certain death over surgery. Although there has been debate as to who deserves the most credit for the discovery of general anaesthesia, scientific discoveries in the late 18th and early 19th centuries were critical to the eventual introduction and development of modern anaesthetic techniques. Two enormous leaps occurred in the late 19th century, which allowed the transition to modern surgery. An appreciation of the germ theory of disease led to the development of antiseptic techniques in surgery. Antisepsis, which soon gave way to asepsis, reduced the overall morbidity and mortality of surgery to a far more acceptable rate. Concurrently, significant advances in pharmacology and physiology led to the development of general anaesthesia. On 14 November 1804, Hanaoka Seishū, a Japanese surgeon, became the first person on record to perform successful surgery using general anaesthesia. In the 20th century, general anaesthesia's safety and efficacy improved with routine tracheal intubation and other advanced airway management techniques. Advances in monitoring and new anaesthetic agents with improved pharmacokinetic and pharmacodynamic characteristics also contributed to this trend, and standardized training programs for anaesthesiologists and nurse anaesthetists emerged. Purpose General anaesthesia has many purposes and is routinely used in many surgical procedures. 
An appropriate surgical anaesthesia should include the following goals:
Hypnosis/unconsciousness (loss of awareness)
Analgesia (loss of response to pain)
Amnesia (loss of memory)
Immobility (loss of motor reflexes)
Paralysis (skeletal muscle relaxation)
Instead of receiving continuous deep sedation, such as via benzodiazepines, dying patients may choose to be completely unconscious as they die. Biochemical mechanism of action The biochemical mechanism of action of general anaesthetics is still controversial. Anaesthetics have myriad sites of action and affect the central nervous system (CNS) at several levels. General anaesthesia interrupts or changes the functions of CNS components including the cerebral cortex, thalamus, reticular activating system, and spinal cord. Theories of anaesthesia identify target sites in the CNS, neural networks and arousal circuits linked with unconsciousness, and some anaesthetics can potentially activate specific sleep-active regions. Two mechanisms, which are not mutually exclusive, are membrane-mediated and direct protein-mediated anaesthesia. Potential protein-mediated molecular targets are GABAA and NMDA glutamate receptors. General anaesthetics have been thought to enhance inhibitory transmission or to reduce excitatory transmission in neuronal signaling. Most volatile anaesthetics have been found to be GABAA agonists, although the site of action on the receptor remains unknown. Ketamine is a non-competitive NMDA receptor antagonist. The chemical structure and properties of anaesthetics, as first noted by Meyer and Overton, suggest they could target the plasma membrane. A membrane-mediated mechanism that could account for the activation of an ion channel remained elusive until recently. A study from 2020 showed that inhaled anaesthetics (chloroform and isoflurane) could displace phospholipase D2 from ordered lipid domains in the plasma membrane, which led to the production of the signaling molecule phosphatidic acid (PA). The signaling molecule activated TWIK-related K+ channels (TREK-1), a channel involved in anaesthesia. PLD-null fruit flies were shown to resist anaesthesia. The results established a membrane-mediated target for inhaled anaesthetics. Preoperative evaluation Before a procedure, the anaesthesiologist reviews medical records, interviews the patient, and examines them to determine an appropriate anaesthetic plan and decide what combination of drugs and dosages will be needed for the patient's comfort and safety during the procedure. A variety of non-invasive and invasive monitoring devices may be necessary to ensure a safe and effective procedure. Key factors in this evaluation are the patient's age, gender, body mass index, medical and surgical history, current medications, exercise capacity, and fasting time. Thorough and accurate preoperative evaluation is crucial for the safety and effectiveness of the anaesthetic plan. For example, a patient who consumes significant quantities of alcohol or illicit drugs could be undermedicated during the procedure if they fail to disclose this fact, and this could lead to anaesthesia awareness or intraoperative hypertension. Commonly used medications can also interact with anaesthetics, and failure to disclose such usage can increase the risk during the operation. Inaccurate reporting of the time of the last meal can also increase the risk of aspiration of food and lead to serious complications.
An important aspect of pre-anaesthetic evaluation is an assessment of the patient's airway, involving inspection of the mouth opening and visualisation of the soft tissues of the pharynx. The condition of teeth and location of dental crowns are checked, and neck flexibility and head extension are observed. The most commonly performed airway assessment is the Mallampati score, which evaluates the airway based on the ability to view airway structures with the mouth open and the tongue protruding. Mallampati tests alone have limited accuracy, and other evaluations are routinely performed in addition to the Mallampati test, including mouth opening, thyromental distance, neck range of motion, and mandibular protrusion. In a patient with suspected distorted airway anatomy, endoscopy or ultrasound is sometimes used to evaluate the airway before planning airway management. Premedication Prior to administration of a general anaesthetic, the anaesthetist may administer one or more drugs that complement or improve the quality or safety of the anaesthetic or simply provide anxiolysis. Premedication also often has mild sedative effects and may reduce the amount of anaesthetic agent required during the case. One commonly used premedication is clonidine, an alpha-2 adrenergic agonist. It reduces postoperative shivering, postoperative nausea and vomiting, and emergence delirium. However, a randomized controlled trial from 2021 demonstrated that clonidine provides less anxiolysis and more sedation in children of preschool age. Oral clonidine can take up to 45 minutes to take full effect. The drawbacks of clonidine include hypotension and bradycardia, but these can be advantageous in patients with hypertension and tachycardia. Another commonly used alpha-2 adrenergic agonist is dexmedetomidine, which is commonly used to provide a short-term sedative effect (<24 hours). Dexmedetomidine and certain atypical antipsychotic agents may also be used in uncooperative children. Benzodiazepines are the most commonly used class of drugs for premedication. The most commonly utilized benzodiazepine is midazolam, which is characterized by a rapid onset and short duration. Midazolam is effective in reducing preoperative anxiety, including separation anxiety in children. It also provides mild sedation, sympathicolysis, and anterograde amnesia. Melatonin has been found to be effective as an anaesthetic premedication in both adults and children because of its hypnotic, anxiolytic, sedative, analgesic, and anticonvulsant properties. Recovery is more rapid after premedication with melatonin than with midazolam, and there is also a reduced incidence of post-operative agitation and delirium. Melatonin has been shown to have a similar effect to benzodiazepines in reducing perioperative anxiety in adult patients. Another example of anaesthetic premedication is the preoperative administration of beta adrenergic antagonists, which reduce the burden of arrhythmias after cardiac surgery. However, evidence has also shown an association of increased adverse events with beta-blockers in non-cardiac surgery. Anaesthesiologists may administer one or more antiemetic agents such as ondansetron, droperidol, or dexamethasone to prevent postoperative nausea and vomiting. NSAIDs are commonly used analgesic premedication agents and often reduce the need for opioids such as fentanyl or sufentanil. Gastrokinetic agents such as metoclopramide and histamine antagonists such as famotidine may also be given.
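Returning to the airway assessment described at the start of this section: the Mallampati score is conventionally reported as a class from I to IV according to which oropharyngeal structures can be seen with the mouth open and the tongue protruding. The Python sketch below follows the commonly taught class criteria and is illustrative only, not clinical guidance:

# Simplified mapping from visible oropharyngeal structures to a Mallampati class.
def mallampati_class(visible: set) -> str:
    if {"soft palate", "uvula", "pillars"} <= visible:
        return "I"    # soft palate, fauces, uvula and pillars visible
    if {"soft palate", "uvula"} <= visible:
        return "II"   # pillars obscured by the tongue
    if "soft palate" in visible:
        return "III"  # at most the base of the uvula seen
    return "IV"       # only the hard palate visible

print(mallampati_class({"soft palate", "uvula base"}))  # -> III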
Non-pharmacologic preanaesthetic interventions include cognitive behavioural therapy, music therapy, aromatherapy, hypnosis, massage, pre-operative preparation videos, and guided imagery relaxation therapy. These techniques are particularly useful for children and patients with intellectual disabilities. Minimizing sensory stimulation or distraction by video games may help to reduce anxiety prior to or during induction of general anaesthesia. Larger high-quality studies are needed to confirm the most effective non-pharmacological approaches for reducing this type of anxiety. Parental presence during premedication and induction of anaesthesia has not been shown to reduce anxiety in children. It is suggested that parents who wish to attend should not be actively discouraged, and parents who prefer not to be present should not be actively encouraged to attend. Anaesthesia and the brain Anaesthesia has little to no effect on brain function unless there is an existing brain disruption. Barbiturate anaesthetic drugs do not affect the auditory brain stem response. An example of a brain disruption is a concussion. It can be risky, and can lead to further brain injury, if anaesthesia is used on a concussed person. Concussions create ionic shifts in the brain that alter the neuronal transmembrane potential. In order to restore this potential, more glucose must be consumed to make up for the energy that is lost. This can be very dangerous and lead to cell death, and it makes the brain very vulnerable in surgery. There are also changes to cerebral blood flow: the injury complicates blood flow and the supply of oxygen to the brain. Stages of anaesthesia Guedel's classification, introduced by Arthur Ernest Guedel in 1937, describes four stages of anaesthesia. Despite newer anaesthetic agents and delivery techniques, which have led to more rapid onset of, and recovery from, anaesthesia (in some cases bypassing some of the stages entirely), the principles remain. Stage 1 Stage 1, also known as induction, is the period between the administration of induction agents and loss of consciousness. During this stage, the patient progresses from analgesia without amnesia to analgesia with amnesia. Patients can carry on a conversation at this time and may complain of visual disturbances. Stage 2 Stage 2, also known as the excitement or delirium stage, is the period following loss of consciousness, marked by excited and delirious activity. During this stage, the patient's respiration and heart rate may become irregular. In addition, there may be uncontrolled movements, vomiting, suspension of breathing, and pupillary dilation. Because the combination of spastic movements, vomiting, and irregular respiration may compromise the patient's airway, rapidly acting drugs are used to minimize time in this stage and reach Stage 3 as fast as possible. Stage 3 In Stage 3, also known as surgical anaesthesia, the skeletal muscles relax and vomiting stops. Respiratory depression and cessation of eye movements are hallmarks of this stage. The patient is unconscious and ready for surgery. This stage is divided into four planes:
Plane 1: The eyes roll, then become fixed; eyelid and swallow reflexes are lost, while regular spontaneous breathing continues.
Plane 2: Corneal and laryngeal reflexes are lost.
Plane 3: The pupillary light reflex is lost, and the abdominal and intercostal muscles relax completely; this is the ideal level of anaesthesia for most surgeries.
Plane 4: Full diaphragm paralysis and irregular, shallow abdominal respiration occur. Stage 4 Stage 4, also known as overdose, occurs when too much anaesthetic medication is given relative to the amount of surgical stimulation and the patient has severe brainstem or medullary depression, resulting in a cessation of respiration and potential cardiovascular collapse. This stage is lethal without cardiovascular and respiratory support. Induction General anaesthesia is usually induced in an operating theatre or in an anaesthetic room next to the theatre. More rarely, it may be induced in an endoscopy suite, intensive care unit, radiology or cardiology department, emergency department, ambulance, or even at the site of a disaster where extrication of the patient may be impractical. Anaesthetics can be administered by inhalation, injection (intravenous, intramuscular, or subcutaneous), oral, or rectal routes. Once they enter the circulatory system, the agents are transported to their biochemical sites of action in the central and autonomic nervous systems. Most general anaesthetics are intravenous or inhaled. Commonly used intravenous induction agents include propofol, sodium thiopental, etomidate, methohexital, and ketamine. Inhalational anaesthesia may be chosen when intravenous access is difficult to obtain (e.g., children), when difficulty maintaining the airway is anticipated, or when the patient prefers it. Sevoflurane is the most commonly used agent for inhalational induction, because it is less irritating to the tracheobronchial tree than other agents. An example sequence of induction drugs:
Pre-oxygenation (denitrogenation) to fill the lungs with 100% oxygen, permitting a longer period of apnea during intubation without affecting blood oxygen levels
Fentanyl for systemic analgesia during intubation
Propofol for sedation during intubation
Switching from oxygen to a mixture of oxygen and an inhalational anaesthetic once intubation is complete
Laryngoscopy and intubation are both very stimulating. The process of induction blunts the response to these manoeuvres while simultaneously inducing a near-coma state to prevent awareness. Physiologic monitoring Several monitoring technologies allow for controlled induction of, maintenance of, and emergence from general anaesthesia. The standard for basic anesthetic monitoring is a guideline published by the ASA, which states that the patient's oxygenation, ventilation, circulation, and temperature should be continually evaluated during anaesthesia. Continuous electrocardiography (ECG or EKG): Electrodes are placed on the patient's skin to monitor heart rate and rhythm. This may also help the anaesthesiologist identify early signs of heart ischaemia. Typically lead II and V5 are monitored for arrhythmias and ischaemia, respectively. Continuous pulse oximetry (SpO2): A device is placed, usually on a finger, to allow for early detection of a fall in a patient's hemoglobin saturation with oxygen (hypoxaemia). Blood pressure monitoring: There are two methods of measuring the patient's blood pressure. The first, and most common, is non-invasive blood pressure (NIBP) monitoring. This involves placing a blood pressure cuff around the patient's arm, forearm, or leg. A machine takes blood pressure readings at regular, preset intervals throughout the surgery. The second method is invasive blood pressure (IBP) monitoring, which allows beat-to-beat monitoring of blood pressure.
This method is reserved for patients with significant heart or lung disease, the critically ill, and those undergoing major procedures such as cardiac or transplant surgery, or when large blood loss is expected. It involves placing a special type of plastic cannula in an artery, usually in the wrist (radial artery) or groin (femoral artery). Agent concentration measurement: anaesthetic machines typically have monitors to measure the percentage of inhalational anaesthetic agents used as well as exhalation concentrations. These monitors include measuring oxygen, carbon dioxide, and inhalational anaesthetics (e.g., nitrous oxide, isoflurane). Oxygen measurement: Almost all circuits have an alarm in case oxygen delivery to the patient is compromised. The alarm goes off if the fraction of inspired oxygen drops below a set threshold. A circuit disconnect alarm or low pressure alarm indicates failure of the circuit to achieve a given pressure during mechanical ventilation. Capnography measures the amount of carbon dioxide exhaled by the patient in percent or mmHg, allowing the anaesthesiologist to assess the adequacy of ventilation. MmHg is usually used to allow the provider to see more subtle changes. Temperature measurement to discern hypothermia or fever, and to allow early detection of malignant hyperthermia. Electroencephalography, entropy monitoring, or other systems may be used to verify the depth of anaesthesia. This reduces the likelihood of anaesthesia awareness and of overdose. Airway management Anaesthetized patients lose protective airway reflexes (such as coughing), airway patency, and sometimes a regular breathing pattern due to anaesthetics, opioids, or muscle relaxants. To maintain an open airway and regulate breathing, some form of breathing tube is inserted after the patient is unconscious. To enable mechanical ventilation, an endotracheal tube is often used, although there are alternatives such as face masks or laryngeal mask airways. Generally, full mechanical ventilation is only used if a very deep state of general anaesthesia is to be induced, and/or with a profoundly ill or injured patient. Induction of general anaesthesia usually results in apnea and requires ventilation until the drugs wear off and spontaneous breathing starts. In other words, ventilation may be needed for induction and maintenance of general anaesthesia, or just during the induction. However, mechanical ventilation can provide ventilatory support during spontaneous breathing to ensure adequate gas exchange. General anaesthesia can also be induced with the patient spontaneously breathing and therefore maintaining their own oxygenation which can be beneficial in certain scenarios (e.g. difficult airway or tubeless surgery). Spontaneous ventilation has been traditionally maintained with inhalational agents (i.e. halothane or sevoflurane) which is called a gas or inhalational induction. Spontaneous ventilation can also be maintained using intravenous anaesthesia (e.g. propofol). Intravenous anaesthesia to maintain spontaneous respiration has certain advantages over inhalational agents (i.e. suppressed laryngeal reflexes) but requires careful titration. Spontaneous Respiration using Intravenous anaesthesia and High-flow nasal oxygen (STRIVE Hi) is a technique that has been used in difficult and obstructed airways. Eye management General anaesthesia reduces the tonic contraction of the orbicularis oculi muscle, causing lagophthalmos (incomplete eye closure) in 59% of people. 
In addition, tear production and tear-film stability are reduced, resulting in corneal epithelial drying and reduced lysosomal protection. The protection afforded by Bell's phenomenon (in which the eyeball turns upward during sleep, protecting the cornea) is also lost. Careful management is required to reduce the likelihood of eye injuries during general anaesthesia. Methods to prevent eye injury during general anaesthesia include taping the eyelids shut, use of eye ointments, and specially designed protective goggles. Neuromuscular blockade Paralysis, or temporary muscle relaxation with a neuromuscular blocker, is an integral part of modern anaesthesia. The first drug used for this purpose was curare, introduced in the 1940s, which has now been superseded by drugs with fewer side effects and, generally, shorter duration of action. Muscle relaxation allows surgery within major body cavities, such as the abdomen and thorax, without the need for very deep anaesthesia, and also facilitates endotracheal intubation. Acetylcholine, a natural neurotransmitter found at the neuromuscular junction, causes muscles to contract when it is released from nerve endings. Muscle paralytic drugs work by preventing acetylcholine from attaching to its receptor. Paralysis of the muscles of respiration, the diaphragm and intercostal muscles of the chest, requires that some form of artificial respiration be implemented. Because the muscles of the larynx are also paralysed, the airway usually needs to be protected by means of an endotracheal tube. Paralysis is most easily monitored by means of a peripheral nerve stimulator. This device intermittently sends short electrical pulses through the skin over a peripheral nerve while the contraction of a muscle supplied by that nerve is observed. The effects of muscle relaxants are commonly reversed at the end of surgery by anticholinesterase drugs, which are administered in combination with muscarinic anticholinergic drugs to minimize side effects. Examples of skeletal muscle relaxants in use today are pancuronium, rocuronium, vecuronium, cisatracurium, atracurium, mivacurium, and succinylcholine. Novel neuromuscular blockade reversal agents such as sugammadex may also be used; sugammadex works by directly binding muscle relaxants and removing them from the neuromuscular junction. Sugammadex was approved for use in the United States in 2015 and rapidly gained popularity. A study from 2022 has shown that sugammadex and neostigmine are likely similarly safe in the reversal of neuromuscular blockade. Maintenance The duration of action of intravenous induction agents is generally 5 to 10 minutes, after which spontaneous recovery of consciousness will occur. In order to prolong unconsciousness for the duration of surgery, anaesthesia must be maintained. This is achieved by allowing the patient to breathe a carefully controlled mixture of oxygen and a volatile anaesthetic agent, or by administering intravenous medication (usually propofol). Inhaled anaesthetic agents are also frequently supplemented by intravenous analgesic agents, such as opioids (usually fentanyl or a fentanyl derivative) and sedatives (usually propofol or midazolam). Propofol can be used for total intravenous anaesthesia (TIVA), in which case supplementation with inhalational agents is not required.
General anaesthesia is usually considered safe; however, there are reported cases of distortion of taste and/or smell due to local anaesthetics, stroke, nerve damage, or as a side effect of general anaesthesia. At the end of surgery, administration of anaesthetic agents is discontinued. Recovery of consciousness occurs when the concentration of anaesthetic in the brain drops below a certain level (usually within 1 to 30 minutes, mostly depending on the duration of surgery). In the 1990s, a novel method of maintaining anaesthesia was developed in Glasgow, Scotland. Called target controlled infusion (TCI), it involves using a computer-controlled syringe driver (pump) to infuse propofol throughout the duration of surgery, removing the need for a volatile anaesthetic and allowing pharmacologic principles to guide the amount of drug used more precisely by setting the desired drug concentration. Advantages include faster recovery from anaesthesia, reduced incidence of postoperative nausea and vomiting, and absence of a trigger for malignant hyperthermia. At present, TCI is not permitted in the United States, but a syringe pump delivering a specific rate of medication is commonly used instead. Other medications are occasionally used to treat side effects or prevent complications. They include antihypertensives to treat high blood pressure; ephedrine or phenylephrine to treat low blood pressure; salbutamol to treat asthma, laryngospasm, or bronchospasm; and epinephrine or diphenhydramine to treat allergic reactions. Glucocorticoids or antibiotics are sometimes given to prevent inflammation and infection, respectively. Emergence Emergence is the return to baseline physiologic function of all organ systems after the cessation of general anaesthetics. This stage may be accompanied by temporary neurologic phenomena, such as agitated emergence (acute mental confusion), aphasia (impaired production or comprehension of speech), or focal impairment in sensory or motor function. Shivering is also fairly common and can be clinically significant because it causes an increase in oxygen consumption, carbon dioxide production, cardiac output, heart rate, and systemic blood pressure. The proposed mechanism is based on the observation that the spinal cord recovers at a faster rate than the brain. This results in uninhibited spinal reflexes manifested as clonic activity (shivering). This theory is supported by the fact that doxapram, a CNS stimulant, is somewhat effective in abolishing postoperative shivering. Cardiovascular events such as increased or decreased blood pressure, rapid heart rate, or other cardiac dysrhythmias are also common during emergence from general anaesthesia, as are respiratory symptoms such as dyspnoea. Responding to and following verbal commands is a criterion commonly used to assess the patient's readiness for tracheal extubation. Postoperative care Postoperative pain is managed in the anaesthesia recovery unit (PACU) with regional analgesia or oral, transdermal, or parenteral medication. Patients may be given opioids, as well as other medications like non-steroidal anti-inflammatory drugs and acetaminophen. Sometimes, opioid medication is administered by the patient themselves using a system called patient-controlled analgesia (PCA). The patient presses a button to activate a syringe device and receive a preset dose or "bolus" of the drug, usually a strong opioid such as morphine, fentanyl, or oxycodone (e.g., one milligram of morphine). The PCA device then "locks out" for a preset period to allow the drug to take effect and to prevent the patient from overdosing. If the patient becomes too sleepy or sedated, they stop making requests. This confers a fail-safe aspect that is lacking in continuous-infusion techniques.
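The lockout behaviour of the PCA pump just described can be sketched as a small piece of state: a bolus is delivered only when the lockout interval since the last delivered dose has elapsed. A minimal Python sketch; the dose and lockout values are hypothetical placeholders, not a dosing recommendation:

# Minimal model of PCA lockout logic; parameter values are illustrative only.
class PCAPump:
    def __init__(self, bolus_mg=1.0, lockout_min=6.0):
        self.bolus_mg = bolus_mg
        self.lockout_min = lockout_min
        self.last_dose_at = None  # minutes since the start of therapy

    def request(self, now_min: float) -> bool:
        """Patient presses the button; deliver only if the lockout has elapsed."""
        if self.last_dose_at is None or now_min - self.last_dose_at >= self.lockout_min:
            self.last_dose_at = now_min
            return True   # bolus delivered
        return False      # request ignored during the lockout period

pump = PCAPump()
print([pump.request(t) for t in (0, 2, 7, 8, 14)])  # -> [True, False, True, False, True]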
If these medications cannot effectively manage the pain, a local anaesthetic may be injected directly around the nerve in a procedure called a nerve block. In the recovery unit, many vital signs are monitored, including oxygen saturation, heart rhythm and respiration, blood pressure, and core body temperature. Postanaesthetic shivering is common. Apart from causing discomfort and exacerbating pain, shivering has been shown to increase oxygen consumption and catecholamine release, raise the risk of hypothermia, and induce lactic acidosis. A number of techniques are used to reduce shivering, such as warm blankets, or wrapping the patient in a sheet that circulates warmed air (such as a Bair Hugger). If the shivering cannot be managed with external warming devices, drugs such as dexmedetomidine, or other α2-agonists, anticholinergics, central nervous system stimulants, or corticosteroids may be used. In many cases, opioids used in general anaesthesia can cause postoperative ileus, even after non-abdominal surgery. Administration of a μ-opioid antagonist such as alvimopan immediately after surgery can help shorten the time to hospital discharge, but does not reduce the development of paralytic ileus. Enhanced Recovery After Surgery (ERAS) is a society that provides up-to-date guidelines and consensus to ensure continuity of care and improve recovery and peri-operative care. Adherence to the pathway and guidelines has been shown to be associated with improved post-operative outcomes and lower costs to the health care system. Perioperative mortality Most perioperative mortality is attributable to complications from the operation, such as haemorrhage, sepsis, and failure of vital organs. Over the last several decades, the overall anaesthesia-related mortality rate per anaesthetic administered has improved significantly. Advancements in monitoring equipment, improvements in anaesthetic agents, and an increased focus on perioperative safety are some reasons for the decrease in perioperative mortality. In the United States, the current estimated anaesthesia-related mortality is about 1.1 per million population per year. The highest death rates were found in the geriatric population, especially those 85 and older. A review from 2018 examined perioperative anaesthesia interventions and their impact on anaesthesia-related mortality. Interventions found to reduce mortality include pharmacotherapy, ventilation, transfusion, nutrition, glucose control, dialysis, and medical devices. A randomized controlled trial from 2022 found no significant difference in mortality between patients whose care was handed over from one clinician to another and a control group. Mortality directly related to anaesthetic management is very uncommon but may be caused by pulmonary aspiration of gastric contents, asphyxiation, or anaphylaxis. These in turn may result from malfunction of anaesthesia-related equipment or, more commonly, human error. In 1984, after a television programme highlighting anaesthesia mishaps aired in the United States, American anaesthesiologist Ellison C. Pierce appointed the Anesthesia Patient Safety and Risk Management Committee within the American Society of Anesthesiologists.
This committee was tasked with determining and reducing the causes of anaesthesia-related morbidity and mortality. An outgrowth of this committee, the Anesthesia Patient Safety Foundation, was created in 1985 as an independent, nonprofit corporation with the goal "that no patient shall be harmed by anesthesia". A rare but major complication of general anaesthesia is malignant hyperthermia; all major hospitals should have a protocol in place, with an emergency drug cart near the operating room, for this potential complication.
Biology and health sciences
Medical procedures: General
Health
398638
https://en.wikipedia.org/wiki/Biogeochemical%20cycle
Biogeochemical cycle
A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere. For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted by plants into usable forms such as ammonia and nitrates through the process of nitrogen fixation. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can runoff the surface to form lakes and rivers. Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients. There are biogeochemical cycles for many other elements, such as for oxygen, hydrogen, phosphorus, calcium, iron, sulfur, mercury and selenium. There are also cycles for molecules, such as water and silica. In addition there are macroscopic cycles such as the rock cycle, and human-induced cycles for synthetic compounds such as for polychlorinated biphenyls (PCBs). In some cycles there are geological reservoirs where substances can remain or be sequestered for long periods of time. Biogeochemical cycles involve the interaction of biological, geological, and chemical processes. Biological processes include the influence of microorganisms, which are critical drivers of biogeochemical cycling. Microorganisms have the ability to carry out wide ranges of metabolic processes essential for the cycling of nutrients and chemicals throughout global ecosystems. Without microorganisms many of these processes would not occur, with significant impact on the functioning of land and ocean ecosystems and the planet's biogeochemical cycles as a whole. Changes to cycles can impact human health. The cycles are interconnected and play important roles regulating climate, supporting the growth of plants, phytoplankton and other organisms, and maintaining the health of ecosystems generally. Human activities such as burning fossil fuels and using large amounts of fertilizer can disrupt cycles, contributing to climate change, pollution, and other environmental problems. 
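The compartment-to-compartment transfers described above, such as those of the water cycle, can be pictured as a small directed graph of reservoirs connected by processes. The Python sketch below encodes a simplified set of water-cycle transfers drawn from the description in the text; it is illustrative and not a complete inventory of fluxes:

# Simplified water-cycle transfers between compartments: (source, process, destination).
TRANSFERS = [
    ("ocean", "evaporation", "atmosphere"),
    ("land", "evaporation", "atmosphere"),
    ("atmosphere", "precipitation", "land"),
    ("atmosphere", "precipitation", "ocean"),
    ("land", "runoff", "lakes and rivers"),
    ("land", "infiltration", "groundwater"),
    ("groundwater", "seepage", "ocean"),
    ("lakes and rivers", "discharge", "ocean"),
]

def outflows(compartment: str):
    """List the processes that move water out of a given compartment."""
    return [(process, destination) for source, process, destination in TRANSFERS
            if source == compartment]

print(outflows("land"))  # processes leaving the land compartment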
Overview Energy flows directionally through ecosystems, entering as sunlight (or inorganic molecules for chemoautotrophs) and leaving as heat during the many transfers between trophic levels. However, the matter that makes up living organisms is conserved and recycled. The six most common elements associated with organic molecules — carbon, nitrogen, hydrogen, oxygen, phosphorus, and sulfur — take a variety of chemical forms and may exist for long periods in the atmosphere, on land, in water, or beneath the Earth's surface. Geologic processes, such as weathering, erosion, water drainage, and the subduction of the continental plates, all play a role in this recycling of materials. Because geology and chemistry have major roles in the study of this process, the recycling of inorganic matter between living organisms and their environment is called a biogeochemical cycle. The six aforementioned elements are used by organisms in a variety of ways. Hydrogen and oxygen are found in water and organic molecules, both of which are essential to life. Carbon is found in all organic molecules, whereas nitrogen is an important component of nucleic acids and proteins. Phosphorus is used to make nucleic acids and the phospholipids that comprise biological membranes. Sulfur is critical to the three-dimensional shape of proteins. The cycling of these elements is interconnected. For example, the movement of water is critical for leaching sulfur and phosphorus into rivers which can then flow into oceans. Minerals cycle through the biosphere between the biotic and abiotic components and from one organism to another. Ecological systems (ecosystems) have many biogeochemical cycles operating as a part of the system, for example, the water cycle, the carbon cycle, the nitrogen cycle, etc. All chemical elements occurring in organisms are part of biogeochemical cycles. In addition to being a part of living organisms, these chemical elements also cycle through abiotic factors of ecosystems such as water (hydrosphere), land (lithosphere), and/or the air (atmosphere). The living factors of the planet can be referred to collectively as the biosphere. All the nutrients — such as carbon, nitrogen, oxygen, phosphorus, and sulfur — used in ecosystems by living organisms are a part of a closed system; therefore, these chemicals are recycled instead of being lost and replenished constantly such as in an open system. The major parts of the biosphere are connected by the flow of chemical elements and compounds in biogeochemical cycles. In many of these cycles, the biota plays an important role. Matter from the Earth's interior is released by volcanoes. The atmosphere exchanges some compounds and elements rapidly with the biota and oceans. Exchanges of materials between rocks, soils, and the oceans are generally slower by comparison. The flow of energy in an ecosystem is an open system; the Sun constantly gives the planet energy in the form of light while it is eventually used and lost in the form of heat throughout the trophic levels of a food web. Carbon is used to make carbohydrates, fats, and proteins, the major sources of food energy. These compounds are oxidized to release carbon dioxide, which can be captured by plants to make organic compounds. The chemical reaction is powered by the light energy of sunshine. Sunlight is required to combine carbon with hydrogen and oxygen into an energy source, but ecosystems in the deep sea, where no sunlight can penetrate, obtain energy from sulfur. 
Hydrogen sulfide near hydrothermal vents can be utilized by organisms such as the giant tube worm. In the sulfur cycle, sulfur can be forever recycled as a source of energy. Energy can be released through the oxidation and reduction of sulfur compounds (e.g., oxidizing elemental sulfur to sulfite and then to sulfate). Although the Earth constantly receives energy from the Sun, its chemical composition is essentially fixed, as additional matter is only occasionally added by meteorites. Because this chemical composition is not replenished like energy, all processes that depend on these chemicals rely on their being recycled. These cycles include both the living biosphere and the nonliving lithosphere, atmosphere, and hydrosphere. Biogeochemical cycles can be contrasted with geochemical cycles; the latter deal only with crustal and subcrustal reservoirs, even though some processes from both overlap. Compartments Atmosphere Hydrosphere The global ocean covers more than 70% of the Earth's surface and is remarkably heterogeneous. Marine productive areas and coastal ecosystems comprise a minor fraction of the ocean in terms of surface area, yet have an enormous impact on global biogeochemical cycles carried out by microbial communities, which represent 90% of the ocean's biomass. Work in recent years has largely focused on cycling of carbon and macronutrients such as nitrogen, phosphorus, and silicate; other important elements such as sulfur or trace elements have been less studied, reflecting associated technical and logistical issues. Increasingly, these marine areas, and the taxa that form their ecosystems, are subject to significant anthropogenic pressure, impacting marine life and the recycling of energy and nutrients. A key example is that of cultural eutrophication, where agricultural runoff leads to nitrogen and phosphorus enrichment of coastal ecosystems, greatly increasing productivity and resulting in algal blooms, deoxygenation of the water column and seabed, and increased greenhouse gas emissions, with direct local and global impacts on nitrogen and carbon cycles. However, the runoff of organic matter from the mainland to coastal ecosystems is just one of a series of pressing threats stressing microbial communities due to global change. Climate change has also resulted in changes in the cryosphere, as glaciers and permafrost melt, intensifying marine stratification, while shifts of the redox state in different biomes are rapidly reshaping microbial assemblages at an unprecedented rate. Global change is, therefore, affecting key processes including primary productivity, CO2 and N2 fixation, organic matter respiration/remineralization, and the sinking and burial deposition of fixed CO2. In addition, oceans are experiencing an acidification process, with a change of ~0.1 pH units between the pre-industrial period and today, affecting carbonate/bicarbonate buffer chemistry. In turn, acidification has been reported to impact planktonic communities, principally through effects on calcifying taxa. There is also evidence for shifts in the production of key intermediary volatile products, some of which have marked greenhouse effects (e.g., N2O and CH4, reviewed by Breitburg in 2018), due to the increase in global temperature, ocean stratification and deoxygenation; these changes drive as much as 25 to 50% of nitrogen loss from the ocean to the atmosphere in the so-called oxygen minimum zones or anoxic marine zones, driven by microbial processes.
Other products that are typically toxic for marine nekton, including reduced sulfur species such as H2S, have a negative impact on marine resources like fisheries and coastal aquaculture. While global change has accelerated, there has been a parallel increase in awareness of the complexity of marine ecosystems, and especially the fundamental role of microbes as drivers of ecosystem functioning. Lithosphere Biosphere Microorganisms drive much of the biogeochemical cycling in the Earth system. Reservoirs The chemicals are sometimes held for long periods of time in one place. This place is called a reservoir, which, for example, includes such things as coal deposits that store carbon for a long period of time. When chemicals are held for only short periods of time, they are being held in exchange pools. Examples of exchange pools include plants and animals. Plants and animals utilize carbon to produce carbohydrates, fats, and proteins, which can then be used to build their internal structures or to obtain energy. Plants and animals temporarily use carbon in their systems and then release it back into the air or surrounding medium. Generally, reservoirs are abiotic factors whereas exchange pools are biotic factors. Carbon is held for a relatively short time in plants and animals in comparison to coal deposits. The amount of time that a chemical is held in one place is called its residence time or turnover time (also called the renewal time or exit age). Box models Box models are widely used to model biogeochemical systems. Box models are simplified versions of complex systems, reducing them to boxes (or storage reservoirs) for chemical materials, linked by material fluxes (flows). Simple box models have a small number of boxes with properties, such as volume, that do not change with time. The boxes are assumed to behave as if they were mixed homogeneously. These models are often used to derive analytical formulas describing the dynamics and steady-state abundance of the chemical species involved. The diagram at the right shows a basic one-box model. The reservoir contains the amount of material M under consideration, as defined by chemical, physical or biological properties. The source Q is the flux of material into the reservoir, and the sink S is the flux of material out of the reservoir. The budget is the check and balance of the sources and sinks affecting material turnover in a reservoir. The reservoir is in a steady state if Q = S, that is, if the sources balance the sinks and there is no change over time. The residence or turnover time is the average time material spends resident in the reservoir. If the reservoir is in a steady state, this is the same as the time it takes to fill or drain the reservoir. Thus, if τ is the turnover time, then τ = M/S. The equation describing the rate of change of content in a reservoir is dM/dt = Q − S. When two or more reservoirs are connected, the material can be regarded as cycling between the reservoirs, and there can be predictable patterns to the cyclic flow. More complex multibox models are usually solved using numerical techniques. The diagram on the left shows a simplified budget of ocean carbon flows. It is composed of three simple interconnected box models, one for the euphotic zone, one for the ocean interior or dark ocean, and one for ocean sediments. In the euphotic zone, net phytoplankton production is about 50 Pg C each year. About 10 Pg is exported to the ocean interior while the other 40 Pg is respired.
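The one-box model just described translates directly into code. The sketch below integrates dM/dt = Q − S with the sink taken to be proportional to the reservoir content (S = M/τ), a common simplifying assumption; the flux and turnover-time values are arbitrary illustrative numbers, not data from the text:

# Euler integration of a one-box model dM/dt = Q - S with S = M / tau.
def one_box(M0: float, Q: float, tau: float, dt: float = 0.1, t_end: float = 50.0):
    """Return (time, content) pairs for a reservoir with source Q and sink M/tau."""
    M, t, trajectory = M0, 0.0, []
    while t <= t_end:
        trajectory.append((round(t, 2), M))
        S = M / tau            # sink flux proportional to content
        M += (Q - S) * dt      # dM/dt = Q - S
        t += dt
    return trajectory

# Illustrative values: source Q = 2 units per year, turnover time tau = 10 years.
run = one_box(M0=0.0, Q=2.0, tau=10.0)
print(run[-1])  # the content approaches the steady state M = Q * tau = 20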
Organic carbon degradation occurs as particles (marine snow) settle through the ocean interior. Only 2 Pg eventually arrives at the seafloor, while the other 8 Pg is respired in the dark ocean. In sediments, the time scale available for degradation increases by orders of magnitude, with the result that 90% of the organic carbon delivered is degraded and only 0.2 Pg C yr−1 is eventually buried and transferred from the biosphere to the geosphere. The diagram on the right shows a more complex model with many interacting boxes. Reservoir masses here represent carbon stocks, measured in Pg C. Carbon exchange fluxes, measured in Pg C yr−1, occur between the atmosphere and its two major sinks, the land and the ocean. The black numbers and arrows indicate the reservoir mass and exchange fluxes estimated for the year 1750, just before the Industrial Revolution. The red arrows (and associated numbers) indicate the annual flux changes due to anthropogenic activities, averaged over the 2000–2009 time period. They represent how the carbon cycle has changed since 1750. Red numbers in the reservoirs represent the cumulative changes in anthropogenic carbon since the start of the Industrial Period, 1750–2011. Fast and slow cycles There are fast and slow biogeochemical cycles. Fast cycles operate in the biosphere and slow cycles operate in rocks. Fast or biological cycles can complete within years, moving substances from the atmosphere to the biosphere, then back to the atmosphere. Slow or geological cycles can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere. As an example, the fast carbon cycle is illustrated in the diagram below on the left. This cycle involves relatively short-term biogeochemical processes between the environment and living organisms in the biosphere. It includes movements of carbon between the atmosphere and terrestrial and marine ecosystems, as well as soils and seafloor sediments. The fast cycle includes annual cycles involving photosynthesis and decadal cycles involving vegetative growth and decomposition. The reactions of the fast carbon cycle to human activities will determine many of the more immediate impacts of climate change. The slow cycle is illustrated in the diagram above on the right. It involves medium to long-term geochemical processes belonging to the rock cycle. The exchange between the ocean and atmosphere can take centuries, and the weathering of rocks can take millions of years. Carbon in the ocean precipitates to the ocean floor, where it can form sedimentary rock and be subducted into the Earth's mantle. Mountain building processes result in the return of this geologic carbon to the Earth's surface. There the rocks are weathered and carbon is returned to the atmosphere by degassing and to the ocean by rivers. Other geologic carbon returns to the ocean through the hydrothermal emission of calcium ions. In a given year, between 10 and 100 million tonnes of carbon moves around this slow cycle. This includes volcanoes returning geologic carbon directly to the atmosphere in the form of carbon dioxide. However, this is less than one percent of the carbon dioxide put into the atmosphere by burning fossil fuels. Deep cycles The terrestrial subsurface is the largest reservoir of carbon on Earth, containing 14–135 Pg of carbon and 2–19% of all biomass. Microorganisms drive organic and inorganic compound transformations in this environment and thereby control biogeochemical cycles.
Current knowledge of the microbial ecology of the subsurface is primarily based on 16S ribosomal RNA (rRNA) gene sequences. Recent estimates show that <8% of 16S rRNA sequences in public databases derive from subsurface organisms and only a small fraction of those are represented by genomes or isolates. Thus, there is remarkably little reliable information about microbial metabolism in the subsurface. Further, little is known about how organisms in subsurface ecosystems are metabolically interconnected. Some cultivation-based studies of syntrophic consortia and small-scale metagenomic analyses of natural communities suggest that organisms are linked via metabolic handoffs: the transfer of redox reaction products of one organism to another. However, no complex environments have been dissected completely enough to resolve the metabolic interaction networks that underpin them. This restricts the ability of biogeochemical models to capture key aspects of the carbon and other nutrient cycles. New approaches such as genome-resolved metagenomics, an approach that can yield a comprehensive set of draft and even complete genomes for organisms without the requirement for laboratory isolation have the potential to provide this critical level of understanding of biogeochemical processes. Some examples Some of the more well-known biogeochemical cycles are shown below: Many biogeochemical cycles are currently being studied for the first time. Climate change and human impacts are drastically changing the speed, intensity, and balance of these relatively unknown cycles, which include: the mercury cycle, and the human-caused cycle of PCBs. Biogeochemical cycles always involve active equilibrium states: a balance in the cycling of the element between compartments. However, overall balance may involve compartments distributed on a global scale. As biogeochemical cycles describe the movements of substances on the entire globe, the study of these is inherently multidisciplinary. The carbon cycle may be related to research in ecology and atmospheric sciences. Biochemical dynamics would also be related to the fields of geology and pedology.
Physical sciences
Earth science basics: General
Earth science
399231
https://en.wikipedia.org/wiki/Concussion
Concussion
A concussion, also known as a mild traumatic brain injury (mTBI), is a head injury that temporarily affects brain functioning. Symptoms may include headache, dizziness, difficulty with thinking and concentration, sleep disturbances, mood changes, a brief period of memory loss, brief loss of consciousness, problems with balance, nausea, and blurred vision. Concussion should be suspected if a person indirectly or directly hits their head and experiences any of the symptoms of concussion. Symptoms of a concussion may be delayed by 1–2 days after the accident. It is not unusual for symptoms to last 2 weeks in adults and 4 weeks in children. Fewer than 10% of sports-related concussions among children are associated with loss of consciousness. Common causes include motor vehicle collisions, falls, sports injuries, and bicycle accidents. Risk factors include physical violence, drinking alcohol, and a prior history of concussion. The mechanism of injury involves either a direct blow to the head or forces elsewhere on the body that are transmitted to the head. This is believed to result in neuron dysfunction, as glucose requirements increase while the blood supply remains insufficient. A thorough evaluation by a qualified medical provider working in their scope of practice (such as a physician or nurse practitioner) is required to rule out life-threatening head injuries, injuries to the cervical spine, and neurological conditions, and to use information obtained from the medical evaluation to diagnose a concussion. A Glasgow Coma Scale score of 13 to 15, loss of consciousness for less than 30 minutes, and memory loss lasting less than 24 hours may be used to rule out moderate or severe traumatic brain injuries. Diagnostic imaging such as a CT scan or an MRI may be required to rule out severe head injuries. Routine imaging is not required to diagnose concussion. Approaches to preventing concussion include the use of a helmet and mouthguard for certain sporting activities, seatbelt use in motor vehicles, following rules and policies on body checking and body contact in organized sport, and neuromuscular training warm-up exercises. Treatment of concussion includes relative rest for no more than 1–2 days, aerobic exercise to increase the heart rate, and a gradual, step-wise return to activities, school, and work. Prolonged periods of rest may slow recovery and result in greater depression and anxiety. Paracetamol (acetaminophen) or NSAIDs may be recommended to help with a headache. Prescribed aerobic exercise may improve recovery. Physiotherapy may be useful for persisting balance problems, headache, or whiplash; cognitive behavioral therapy may be useful for mood changes and sleep problems. Evidence to support the use of hyperbaric oxygen therapy and chiropractic therapy is lacking. Worldwide, concussions are estimated to affect more than 3.5 per 1,000 people a year. Concussions are classified as mild traumatic brain injuries and are the most common type of TBI. Males and young adults are most commonly affected. Outcomes are generally good. Another concussion before the symptoms of a prior concussion have resolved is associated with worse outcomes. Repeated concussions may also increase the risk in later life of chronic traumatic encephalopathy, Parkinson's disease and depression. Signs and symptoms Concussion symptoms vary between people and include physical, cognitive, and emotional symptoms. Symptoms may appear immediately or be delayed by 1–2 days.
Delayed onset of symptoms may still be serious and require a medical assessment. Up to one-third of people with concussion experience longer or persisting concussion symptoms, also known as post concussion syndrome or persisting symptoms after concussion, which is defined as concussion symptoms lasting for 4 weeks or longer in children and adolescents, and symptoms lasting for more than 14 days in an adult. The severity of the initial symptoms is the strongest predictor of recovery time in adults. Physical Headaches are the most common mTBI symptom. Others include dizziness, vomiting, nausea, lack of motor coordination, difficulty balancing, or other problems with movement or sensation. Visual symptoms include light sensitivity, seeing bright lights, blurred vision, and double vision. Tinnitus, or a ringing in the ears, is also commonly reported. In one in about seventy concussions, concussive convulsions occur, but seizures that take place during or immediately after a concussion are not "post-traumatic seizures", and, unlike post-traumatic seizures, are not predictive of post-traumatic epilepsy, which requires some form of structural brain damage, not just a momentary disruption in normal brain functioning. Concussive convulsions are thought to result from temporary loss or inhibition of motor function and are not associated either with epilepsy or with more serious structural damage. They are not associated with any particular sequelae and have the same high rate of favorable outcomes as concussions without convulsions. Cognitive and emotional Cognitive symptoms include confusion, disorientation, and difficulty focusing attention. Loss of consciousness may occur, but is not necessarily correlated with the severity of the concussion if it is brief. Post-traumatic amnesia, in which events following the injury cannot be recalled, is a hallmark of concussions. Confusion may be present immediately or may develop over several minutes. A person may repeat the same questions, be slow to respond to questions or directions, have a vacant stare, or have slurred or incoherent speech. Other concussion symptoms include changes in sleeping patterns and difficulty with reasoning, concentrating, and performing everyday activities. A concussion can result in changes in mood including crankiness, loss of interest in favorite activities or items, tearfulness, and displays of emotion that are inappropriate to the situation. Common symptoms in concussed children include restlessness, lethargy, and irritability. Mechanism Forces The brain is surrounded by cerebrospinal fluid, which protects it from light trauma. More severe impacts, or the forces associated with rapid acceleration, may not be absorbed by this cushion. Concussions, and other head-related injuries, occur when external forces acting on the head are transferred to the brain. Such forces can occur when the head is struck by an object or surface (a 'direct impact'), or when the torso rapidly changes position (i.e. from a body check) and force is transmitted to the head (an 'indirect impact'). Forces may cause linear, rotational, or angular movement of the brain or a combination of them. In rotational movement, the head turns around its center of gravity, and in angular movement, it turns on an axis, not through its center of gravity. The amount of rotational force is thought to be the major component in concussion and its severity. 
As of 2007, studies with athletes have shown that the amount of force and the location of the impact are not necessarily correlated with the severity of the concussion or its symptoms, and have called into question the threshold for concussion previously thought to exist at around 70–75 g. The parts of the brain most affected by rotational forces are the midbrain and diencephalon. It is thought that the forces from the injury disrupt the normal cellular activities in the reticular activating system located in these areas and that this disruption produces the loss of consciousness often seen in concussion. Other areas of the brain that may be affected include the upper part of the brain stem, the fornix, the corpus callosum, the temporal lobe, and the frontal lobe. Angular accelerations of 4600, 5900, or 7900 rad/s2 are estimated to have 25, 50, or 80% risk of mTBI respectively. Pathophysiology In both animals and humans, mTBI can alter the brain's physiology for hours to years, setting into motion a variety of pathological events. As one example, in animal models, after an initial increase in glucose metabolism, there is a subsequent reduced metabolic state which may persist for up to four weeks after injury. Though these events are thought to interfere with neuronal and brain function, the metabolic processes that follow concussion are reversible in a large majority of affected brain cells; however, a few cells may die after the injury. Included in the cascade of events unleashed in the brain by concussion is impaired neurotransmission, loss of regulation of ions, deregulation of energy use and cellular metabolism, and a reduction in cerebral blood flow. Excitatory neurotransmitters, chemicals such as glutamate that serve to stimulate nerve cells, are released in excessive amounts. The resulting cellular excitation causes neurons to fire excessively. This creates an imbalance of ions such as potassium and calcium across the cell membranes of neurons (a process like excitotoxicity). At the same time, cerebral blood flow is relatively reduced for unknown reasons, though the reduction in blood flow is not as severe as it is in ischemia. Thus cells get less glucose than they normally do, which causes an "energy crisis". Concurrently with these processes, the activity of mitochondria may be reduced, which causes cells to rely on anaerobic metabolism to produce energy, increasing levels of the byproduct lactate. For a period of minutes to days after a concussion, the brain is especially vulnerable to changes in intracranial pressure, blood flow, and anoxia. According to studies performed on animals (which are not always applicable to humans), large numbers of neurons can die during this period in response to slight, normally innocuous changes in blood flow. Concussion involves diffuse (as opposed to focal) brain injury, meaning that the dysfunction occurs over a widespread area of the brain rather than in a particular spot. It is thought to be a milder type of diffuse axonal injury, because axons may be injured to a minor extent due to stretching. Animal studies in which rodents were concussed have revealed lifelong neuropathological consequences such as ongoing axonal degeneration and neuroinflammation in subcortical white matter tracts. Axonal damage has been found in the brains of concussion patients who died from other causes, but inadequate blood flow to the brain due to other injuries may have contributed. 
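As a rough illustration of the rotational-force figures quoted above (angular accelerations of about 4600, 5900, and 7900 rad/s² corresponding to estimated mTBI risks of roughly 25%, 50%, and 80%), the following sketch interpolates between those three published points. The piecewise-linear interpolation, the function name, and the example input are assumptions made for illustration; this is not a clinical or biomechanical tool.

```python
# Illustrative only: piecewise-linear interpolation between the published risk
# estimates quoted above (4600, 5900, 7900 rad/s^2 -> ~25%, 50%, 80% mTBI risk).
# The interpolation scheme itself is an assumption, not part of the source text.

RISK_POINTS = [(4600.0, 0.25), (5900.0, 0.50), (7900.0, 0.80)]

def mtbi_risk(angular_accel_rad_s2: float) -> float:
    """Rough mTBI risk for a given peak angular acceleration (rad/s^2)."""
    pts = RISK_POINTS
    if angular_accel_rad_s2 <= pts[0][0]:
        return pts[0][1]
    if angular_accel_rad_s2 >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= angular_accel_rad_s2 <= x1:
            t = (angular_accel_rad_s2 - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return pts[-1][1]

print(f"{mtbi_risk(5000):.0%}")  # roughly 33% under these assumptions
```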
Findings from a study of the brains of deceased NFL athletes who received concussions suggest that lasting damage is done by such injuries. This damage, the severity of which increases with the cumulative number of concussions sustained, can lead to a variety of other health issues. The debate over whether concussion is a functional or structural phenomenon is ongoing. Structural damage has been found in the mildly traumatically injured brains of animals, but it is not clear whether these findings would apply to humans. Such changes in brain structure could be responsible for certain symptoms such as visual disturbances, but other sets of symptoms, especially those of a psychological nature, are more likely to be caused by reversible pathophysiological changes in cellular function that occur after concussion, such as alterations in neurons' biochemistry. These reversible changes could also explain why dysfunction is frequently temporary. A task force of head injury experts called the Concussion In Sport Group met in 2001 and decided that "concussion may result in neuropathological changes but the acute clinical symptoms largely reflect a functional disturbance rather than structural injury." Using animal studies, the pathology of a concussion seems to start with mechanical shearing and stretching forces disrupting the cell membrane of nerve cells through "mechanoporation". This results in potassium outflow from within the cell into the extracellular space with the subsequent release of excitatory neurotransmitters including glutamate which leads to enhanced potassium extrusion, in turn resulting in sustained depolarization, impaired nerve activity and potential nerve damage. Human studies have failed to identify changes in glutamate concentration immediately post-mTBI, though disruptions have been seen 3 days to 2 weeks post-injury. In an effort to restore ion balance, the sodium-potassium ion pumps increase activity, which results in excessive ATP (adenosine triphosphate) consumption and glucose utilization, quickly depleting glucose stores within the cells. Simultaneously, inefficient oxidative metabolism leads to anaerobic metabolism of glucose and increased lactate accumulation. There is a resultant local acidosis in the brain and increased cell membrane permeability, leading to local swelling. After this increase in glucose metabolism, there is a subsequent lower metabolic state which may persist for up to 4 weeks after injury. A completely separate pathway involves a large amount of calcium accumulating in cells, which may impair oxidative metabolism and begin further biochemical pathways that result in cell death. Again, both of these main pathways have been established from animal studies and the extent to which they apply to humans is still somewhat unclear. Diagnosis Head trauma recipients are initially assessed to exclude a more severe emergency such as an intracranial hemorrhage or other serious head or neck injuries. This includes the "ABCs" (airway, breathing, circulation) and stabilization of the cervical spine, which is assumed to be injured in any athlete who is found to be unconscious after head or neck injury. Indications that screening for more serious injury is needed include 'red flag symptoms' or 'concussion danger signs': worsening headaches, persisting vomiting, increasing disorientation or a deteriorating level of consciousness, seizures, and unequal pupil size. 
Those with such symptoms, or those who are at higher risk of a more serious brain injury, require an emergency medical assessment. Brain imaging such as a CT scan or MRI may be suggested, but should be avoided unless there are progressive neurological symptoms, focal neurological findings, or concern of skull fracture on exam. Diagnosis of concussion requires an assessment performed by a physician or nurse practitioner to rule out severe injuries to the brain and cervical spine, mental health conditions, or other medical conditions. Diagnosis is based on physical and neurological examination findings, duration of unconsciousness (usually less than 30 minutes) and post-traumatic amnesia (usually less than 24 hours), and the Glasgow Coma Scale (people with mTBI have scores of 13 to 15). A CT scan or MRI is not required to diagnose concussion. Neuropsychological tests such as the SCAT5/child SCAT5 may be suggested to measure cognitive function. Such tests may be administered hours, days, or weeks after the injury, or at different times to demonstrate any trend. Some athletes are also being tested pre-season (pre-season baseline testing) to provide a baseline for comparison in the event of an injury, though this may not reduce risk or affect return to play, and baseline testing is not required or suggested for most children and adults. If the Glasgow Coma Scale score is less than 15 at two hours or less than 14 at any time, a CT is recommended. In addition, a CT scan is more likely to be performed if observation after discharge is not assured, intoxication is present, there is a suspected increased risk of bleeding, or the person is older than 60 or younger than 16. Most concussions, without complication, cannot be detected with MRI or CT scans. However, changes have been reported on MRI and SPECT imaging in those with concussion and normal CT scans, and persisting concussion symptoms may be associated with abnormalities visible on SPECT and PET scans. Mild head injury may or may not produce abnormal EEG readings. A blood test known as the Brain Trauma Indicator was approved in the United States in 2018 and may be able to rule out the risk of intracranial bleeding and thus the need for a CT scan for adults. Concussion may be under-diagnosed because of the lack of highly noticeable signs and symptoms, while athletes may minimize their injuries to remain in the competition. Direct impact to the head is not required for a concussion diagnosis, as other bodily impacts with a subsequent force transmission to the head are also causes. A retrospective survey in 2005 suggested that more than 88% of concussions are unrecognized. In particular, many younger athletes struggle to identify their concussions, which often results in non-disclosure and, consequently, under-reporting of the incidence of concussion in sport. Diagnosis can be complex because concussion shares symptoms with other conditions. For example, persisting concussion symptoms such as cognitive problems may be misattributed to brain injury when they are, in fact, due to post-traumatic stress disorder (PTSD). There are no fluid biomarkers (i.e., blood or urine tests) that are validated for diagnosing concussion in children or adolescents. Classification No single definition of concussion, minor head injury, or mild traumatic brain injury is universally accepted.
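To make the screening thresholds described above concrete (a Glasgow Coma Scale score of 13–15, loss of consciousness under 30 minutes, and post-traumatic amnesia under 24 hours as the mild-TBI range, plus the listed triggers for recommending a CT scan), here is a minimal sketch. All function and parameter names are hypothetical, and the logic only mirrors the thresholds stated in this section; it is illustrative and not a clinical decision tool.

```python
# Minimal sketch of the thresholds described above. Illustrative only; not a
# clinical decision tool. All names and signatures here are hypothetical.

def consistent_with_mtbi(gcs: int, loc_minutes: float, amnesia_hours: float) -> bool:
    """True if the findings fall in the mild-TBI range described in the text."""
    return 13 <= gcs <= 15 and loc_minutes < 30 and amnesia_hours < 24

def ct_recommended(gcs_at_2h: int, gcs_any_time: int, *,
                   observation_assured: bool, intoxicated: bool,
                   bleeding_risk: bool, age: int) -> bool:
    """Mirrors the CT triggers listed above (GCS thresholds, observation,
    intoxication, bleeding risk, age over 60 or under 16)."""
    return (gcs_at_2h < 15 or gcs_any_time < 14
            or not observation_assured or intoxicated
            or bleeding_risk or age > 60 or age < 16)

# Example: GCS 15 throughout, brief loss of consciousness, reliable observation at home.
print(consistent_with_mtbi(gcs=15, loc_minutes=1, amnesia_hours=0.5))   # True
print(ct_recommended(15, 15, observation_assured=True, intoxicated=False,
                     bleeding_risk=False, age=30))                      # False
```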
In 2001, the expert Concussion in Sport Group of the first International Symposium on Concussion in Sport defined concussion as "a complex pathophysiological process affecting the brain, induced by traumatic biomechanical forces." It was agreed that concussion typically involves temporary impairment of neurological function that heals by itself within time, and that neuroimaging normally shows no gross structural changes to the brain as the result of the condition. However, although no structural brain damage occurs according to the classic definition, some researchers have included injuries in which structural damage has occurred, and the National Institute for Health and Clinical Excellence definition includes physiological or physical disruption in the brain's synapses. Also, by definition, concussion has historically involved a loss of consciousness. However, the definition has evolved over time to include a change in consciousness, such as amnesia, although controversy continues about whether the definition should include only those injuries in which loss of consciousness occurs. This debate resurfaces in some of the best-known concussion grading scales, in which those episodes involving loss of consciousness are graded as being more severe than those without. Definitions of mild traumatic brain injury (mTBI) were inconsistent until the World Health Organization's International Statistical Classification of Diseases and Related Health Problems (ICD-10) provided a consistent, authoritative definition across specialties in 1992. Since then, various organizations such as the American Congress of Rehabilitation Medicine and the American Psychiatric Association in its Diagnostic and Statistical Manual of Mental Disorders have defined mTBI using some combination of loss of consciousness, post-traumatic amnesia, and the Glasgow Coma Scale. Concussion falls under the classification of mild TBI, but it is not clear whether concussion is implied in mild brain injury or mild head injury. "mTBI" and "concussion" are often treated as synonyms in medical literature but other injuries such as intracranial hemorrhages (e.g. intra-axial hematoma, epidural hematoma, and subdural hematoma) are not necessarily precluded in mTBI or mild head injury, as they are in concussion. mTBI associated with abnormal neuroimaging may be considered "complicated mTBI". "Concussion" can be considered to imply a state in which brain function is temporarily impaired and "mTBI" to imply a pathophysiological state, but in practice, few researchers and clinicians distinguish between the terms. Descriptions of the condition, including the severity and the area of the brain affected, are now used more often than "concussion" in clinical neurology. Prevention Prevention of mTBI involves general measures such as wearing seat belts, using airbags in cars, and protective equipment such as helmets for high-risk sports. Older people are encouraged to reduce fall risk by keeping floors free of clutter and wearing thin, flat shoes with hard soles that do not interfere with balance. Protective equipment such as helmets and other headgear and policy changes such as the banning of body checking in youth hockey leagues have been found to reduce the number and severity of concussions in athletes. Secondary prevention such as a Return to Play Protocol for an athlete may reduce the risk of repeat concussions. 
New "Head Impact Telemetry System" technology is being placed in helmets to study injury mechanisms and may generate knowledge that will potentially help reduce the risk of concussions among American Football players. Mouth guards have been put forward as a preventative measure, and there is mixed evidence supporting its use in preventing concussions but rather has support in preventing dental trauma. Educational interventions, such as handouts, videos, workshops, and lectures, can improve concussion knowledge of diverse groups, particularly youth athletes and coaches. Strong concussion knowledge may be associated with greater recognition of concussion symptoms, higher rates of concussion reporting behaviors, and reduced body checking-related penalties and injuries, thereby lowering risk of mTBI. Due to the incidence of concussion in sport, younger athletes often do not disclose concussions and their symptoms. Common reasons for non-disclosure include a lack of awareness of the concussion, the belief that the concussion was not serious enough, and not wanting to leave the game or team due to their injury. Self-reported concussion rates among U-20 and elite rugby union players in Ireland are 45–48%, indicating that many concussions go unreported. Changes to the rules or enforcing existing rules in sports, such as those against "head-down tackling", or "spearing", which is associated with a high injury rate, may also prevent concussions. Treatment Adults and children with a suspected concussion require a medical assessment with a doctor or nurse practitioner to confirm the diagnosis of concussion and rule out more serious head injuries. After life-threatening head injuries, injuries to the cervical spine, and neurological conditions are ruled out, exclusion of neck or head injury, observation should be continued for several hours. If repeated vomiting, worsening headache, dizziness, seizure activity, excessive drowsiness, double vision, slurred speech, unsteady walk, or weakness or numbness in arms or legs, or signs of basilar skull fracture develop, immediate assessment in an emergency department is needed. Observation to monitor for worsening condition is an important part of treatment. While it is common advice that someone who is concussed should not be allowed to fall asleep in case they go into a coma, for general cases this is not supported by current evidence. People may be released after assessment from their primary care medical clinic, hospital, or emergency room to the care of a trusted person with instructions to return if they display worsening symptoms or those that might indicate an emergent condition ("red flag symptoms") such as change in consciousness, convulsions, severe headache, extremity weakness, vomiting, new bleeding or deafness in either or both ears. Education about symptoms, their management, and their normal time course, may lead to an improved outcome. Rest and return to physical and cognitive activity Physical and cognitive rest is recommended for the first 24–48 hours following a concussion after which injured persons should gradually start gentle low-risk physical and cognitive activities that do not make current symptoms worse or bring on new symptoms. Any activity for which there is a risk of contact, falling, or bumping the head should be avoided until the person has clearance from a doctor or nurse practitioner. Low-risk activities can be started even while a person has symptoms. 
Resting completely for longer than 24–48 hours following concussion has been shown to be associated with longer recovery. Return-to-school The resumption of low-risk school activities should begin as soon as the student feels ready and has completed an initial period of cognitive rest of no more than 24–48 hours following the acute injury. Long absences from school are not suggested; rather, the return to school should be gradual and step-wise. Prolonged complete mental or physical rest (beyond 24–48 hours after the accident that led to the concussion) may worsen outcomes; however, rushing back to a full school workload before the person is ready has also been associated with longer-lasting symptoms and an extended recovery time. Students with a suspected concussion are required to see a doctor for an initial medical assessment and for suggestions on recovery; however, medical clearance is not required for a student to return to school. Since students may appear 'normal', continuing education of relevant school personnel may be needed to ensure appropriate accommodations are made such as part-days and extended deadlines. Accommodations should be based on the monitoring of symptoms that are present during the return-to-school transition including headaches, dizziness, vision problems, memory loss, difficulty concentrating, and abnormal behavior. Students must have completely resumed their school activities (without requiring concussion-related academic supports) before returning to full-contact or competitive sports. Return-to-sport For persons participating in athletics, it is suggested that participants progress through a series of graded steps. These steps include: Stage 1 (Immediately after injury): 24–48 hours (maximum) of relative physical and cognitive rest. This can include gentle daily activities such as walking in the house, gentle housework, and light school work that do not make symptoms worse. No sports activities. Stage 2: Light aerobic activity such as walking or stationary cycling Stage 3: Sport-specific activities such as running drills and skating drills Stage 4: Non-contact training drills (exercise, coordination, and cognitive load) Stage 5: Full-contact practice (requires medical clearance) Stage 6: Return to full-contact sport or high-risk activities (requires medical clearance) At each step, the person should not have worsening or new symptoms for at least 24 hours before progressing to the next. If symptoms worsen or new symptoms begin, athletes should drop back to the previous level for at least another 24 hours (a simple sketch of this progression rule is shown below). Intercollegiate or professional athletes are typically followed closely by team athletic trainers during this period, but others may not have access to this level of health care and may be sent home with minimal monitoring. Medications Medications may be prescribed to treat headaches, sleep problems and depression. Analgesics such as ibuprofen can be taken for headaches, but paracetamol (acetaminophen) is preferred to minimize the risk of intracranial hemorrhage. Concussed individuals are advised not to use alcohol or other drugs that have not been approved by a doctor as they can impede healing. Activation database-guided EEG biofeedback has been shown to return the memory abilities of the concussed individual to levels better than the control group. About one percent of people who receive treatment for mTBI need surgery for a brain injury.
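The graded return-to-sport steps listed above amount to a simple progression rule: advance one stage only after at least 24 hours without new or worsening symptoms, and drop back a stage otherwise. A minimal sketch of that rule follows; the stage labels paraphrase the list above, and the function and its inputs are hypothetical, for illustration only.

```python
# Sketch of the graded return-to-sport progression described above: advance one
# stage only after at least 24 h without new or worsening symptoms; otherwise
# drop back a stage. Stage names paraphrase the list above; illustrative only.

STAGES = [
    "relative rest (24-48 h)",
    "light aerobic activity",
    "sport-specific activity",
    "non-contact training drills",
    "full-contact practice (medical clearance)",
    "return to sport (medical clearance)",
]

def next_stage(current: int, hours_symptom_free: float, symptoms_worsened: bool) -> int:
    """Return the index of the stage the athlete should be at next."""
    if symptoms_worsened:
        return max(current - 1, 0)          # drop back for at least another 24 h
    if hours_symptom_free >= 24 and current < len(STAGES) - 1:
        return current + 1                  # progress one stage at a time
    return current

stage = 0
stage = next_stage(stage, hours_symptom_free=36, symptoms_worsened=False)
print(STAGES[stage])  # light aerobic activity
```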
Return to work Determining the ideal time for a person to return to work will depend on personal factors and job-related factors including the intensity of the job and the risk of falling or hitting one's head at work during recovery. After the required initial recovery period of complete rest (24–48 hours after the concussion began), gradually and safely returning to the workplace with accommodations and support in place should be prioritized over staying home and resting for long periods of time, to promote physical recovery and reduce the risk of people becoming socially isolated. The person should work with their employer to design a step-wise "return-to-work" plan. For those with a high-risk job, medical clearance may be required before resuming an activity that could lead to another head injury. Students should have completed the full return-to-school progression with no academic accommodations related to the concussion required before starting to return to part-time work. Prognosis The majority of children and adults fully recover from a concussion; however, some may experience a prolonged recovery. There is no single physical test, blood test (or fluid biomarkers), or imaging test that can be used to determine when a person has fully recovered from concussion. A person's recovery may be influenced by a variety of factors that include age at the time of injury, intellectual abilities, family environment, social support system, occupational status, coping strategies, and financial circumstances. Factors such as a previous head injury or a coexisting medical condition have been found to predict longer-lasting persisting concussion symptoms. Other factors that may lengthen recovery time after mTBI include psychological problems such as substance abuse or clinical depression, poor health before the injury or additional injuries sustained during it, and life stress. Longer periods of amnesia or loss of consciousness immediately after the injury may indicate longer recovery times from residual symptoms. Other strong factors include participation in a contact sport and body mass size. Pediatric concussion Most children recover completely from concussion in less than four weeks; however, 15–30% of youth may experience symptoms that last longer than a month. People aged 65+ with concussion Recovery from mild traumatic brain injury in people over age 65 may be complicated by pre-existing health concerns or comorbidities. This often results in longer hospitalization duration, poorer cognitive outcomes, and higher mortality rates. Repeat concussion For unknown reasons, having had one concussion significantly increases a person's risk of having another. Having previously sustained a sports concussion has been found to be a strong factor increasing the likelihood of a concussion in the future. People who have had a concussion seem more susceptible to another one, particularly if the new injury occurs before symptoms from the previous concussion have completely gone away. It is also a concerning sign if progressively smaller impacts cause the same severity of symptoms. Repeated concussions may increase a person's risk in later life for dementia, Parkinson's disease, and depression. Post-concussion syndrome In post-concussion syndrome, symptoms do not resolve for weeks, months, or years after a concussion, and may occasionally be permanent. About 10% to 20% of people have persisting concussion symptoms for more than a month.
Symptoms may include headaches, dizziness, fatigue, anxiety, memory and attention problems, sleep problems, and irritability. Rest, a previously recommended recovery technique, has limited effectiveness. A recommended treatment in both children and adults with symptoms beyond 4 weeks involves an active rehabilitation program with reintroduction of non-contact aerobic activity. Progressive physical exercise has been shown to reduce long-term post-concussive symptoms. Symptoms usually go away on their own within months but may last for years. The question of whether the syndrome is due to structural damage or other factors such as psychological ones, or a combination of these, has long been the subject of debate. Cumulative effects As of 1999, cumulative effects of concussions were poorly understood, especially the effects on children. The severity of concussions and their symptoms may worsen with successive injuries, even if a subsequent injury occurs months or years after an initial one. Symptoms may be more severe and changes in neurophysiology can occur with the third and subsequent concussions. As of 2006, studies had conflicting findings on whether athletes have longer recovery times after repeat concussions and whether cumulative effects such as impairment in cognition and memory occur. Cumulative effects may include chronic traumatic encephalopathy, psychiatric disorders and loss of long-term memory. For example, the risk of developing clinical depression has been found to be significantly greater for retired American football players with a history of three or more concussions than for those with no concussion history. An experience of three or more concussions is associated with a fivefold greater chance of developing Alzheimer's disease earlier and a threefold greater chance of developing memory deficits. Chronic traumatic encephalopathy, or "CTE", is an example of the cumulative damage that can occur as the result of multiple concussions or less severe blows to the head. The condition was previously referred to as "dementia pugilistica", or "punch drunk" syndrome, as it was first noted in boxers. The disease can lead to cognitive and physical disabilities such as parkinsonism, speech and memory problems, slowed mental processing, tremor, depression, and inappropriate behavior. It shares features with Alzheimer's disease. Second-impact syndrome Second-impact syndrome, in which the brain swells dangerously after a minor blow, may occur in very rare cases. The condition may develop in people who receive a second blow days or weeks after an initial concussion before its symptoms have gone away. No one is certain of the cause of this often fatal complication, but it is commonly thought that the swelling occurs because the brain's arterioles lose the ability to regulate their diameter, causing a loss of control over cerebral blood flow. As the brain swells, intracranial pressure rapidly rises. The brain can herniate, and the brain stem can fail within five minutes. Except in boxing, all cases have occurred in athletes under age 20. Due to the very small number of documented cases, the diagnosis is controversial, and doubt exists about its validity. A 2010 Pediatrics review article stated that there is debate whether the brain swelling is due to two separate hits or to just one hit, but in either case, catastrophic football head injuries are three times more likely in high school athletes than in college athletes. Epidemiology Most cases of traumatic brain injury are concussions. 
A World Health Organization (WHO) study estimated that between 70 and 90% of head injuries that receive treatment are mild. However, due to underreporting and to the widely varying definitions of concussion and mTBI, it is difficult to estimate how common the condition is. Estimates of the incidence of concussion may be artificially low, for example, due to underreporting. At least 25% of people with mTBI fail to get assessed by a medical professional. The WHO group reviewed studies on the epidemiology of mTBI and found a hospital treatment rate of 1–3 per 1000 people, but since not all concussions are treated in hospitals, they estimated that the rate per year in the general population is over 6 per 1000 people. Age Young children have the highest concussion rate among all age groups. However, most people with a concussion are young adults. A Canadian study found that the yearly incidence of mTBI is lower in older age groups. Studies suggest males develop mTBI at about twice the rate of their female counterparts. However, female athletes may be at a higher risk of sustaining a concussion than their male counterparts. Sports Up to five percent of sports injuries are concussions. The U.S. Centers for Disease Control and Prevention estimates that 300,000 sports-related concussions occur yearly in the U.S., but that number includes only athletes who lost consciousness. Since loss of consciousness is thought to occur in less than 10% of concussions, the CDC estimate is likely lower than the real number. Sports in which concussion is particularly common include American football, the rugby codes, MMA and boxing (a boxer aims to "knock out", i.e. give a mild traumatic brain injury to, the opponent). The injury is so common in the latter that several medical groups have called for a ban on the sport, including the American Academy of Neurology, the World Medical Association, and the medical associations of the UK, the US, Australia, and Canada. Workplace Concussions also commonly occur in the workplace. According to the US Bureau of Labor Statistics, the most common causes of mTBI-related hospitalizations and deaths in the workplace are falls, being struck by heavy objects, and vehicular collisions. As a consequence, jobs in the construction, transportation, and natural resource industries (e.g. agriculture, fishing, mining) have elevated mTBI incidence rates, ranging from 10 to 20 cases per 100,000 workers. In particular, as vehicular collisions are the leading cause of workplace mTBI-related injuries, workers in the transportation sector often carry the most risk. Despite these findings, there still remain important gaps in data compilation on workplace-related mTBIs, which has raised questions about increased concussion surveillance and preventive measures in private industry. History The Hippocratic Corpus, a collection of medical works from ancient Greece, mentions concussion, later translated to commotio cerebri, and discusses loss of speech, hearing, and sight that can result from "commotion of the brain". This idea of disruption of mental function by "shaking of the brain" remained the widely accepted understanding of concussion until the 19th century. In the 10th century, the Persian physician Muhammad ibn Zakarīya Rāzi was the first to write about concussion as distinct from other types of head injury.
He may have been the first to use the term "cerebral concussion", and his definition of the condition, a transient loss of function with no physical damage, set the stage for the medical understanding of the condition for centuries. In the 13th century, the physician Lanfranc of Milan's Chirurgia Magna described concussion as brain "commotion", also recognizing a difference between concussion and other types of traumatic brain injury (though many of his contemporaries did not), and discussing the transience of post-concussion symptoms as a result of temporary loss of function from the injury. In the 14th century, the surgeon Guy de Chauliac pointed out the relatively good prognosis of concussion as compared to more severe types of head trauma such as skull fractures and penetrating head trauma. In the 16th century, the term "concussion" came into use, and symptoms such as confusion, lethargy, and memory problems were described. The 16th-century physician Ambroise Paré used the term commotio cerebri, as well as "shaking of the brain", "commotion", and "concussion". Until the 17th century, a concussion was usually described by its clinical features, but after the invention of the microscope, more physicians began exploring underlying physical and structural mechanisms. However, the prevailing view in the 17th century was that the injury did not result from physical damage, and this view continued to be widely held throughout the 18th century. The word "concussion" was used at the time to describe the state of unconsciousness and other functional problems that resulted from the impact, rather than a physiological condition. In 1839, Guillaume Dupuytren described brain contusions, which involve many small hemorrhages, as contusio cerebri and showed the difference between unconsciousness associated with damage to the brain parenchyma and that due to concussion, without such injury. In 1941, animal experiments showed that no macroscopic damage occurs in concussion. Society and culture Costs Due to the lack of a consistent definition, the economic costs of mTBI are not known, but they are estimated to be very high. These high costs are due in part to the large percentage of hospital admissions for head injury that are due to mild head trauma, but indirect costs such as lost work time and early retirement account for the bulk of the costs. These direct and indirect costs cause the expense of mild brain trauma to rival that of moderate and severe head injuries. Terminology The terms mild brain injury, mild traumatic brain injury (mTBI), mild head injury (MHI), and concussion may be used interchangeably; although the term "concussion" is still used in sports literature as interchangeable with "MHI" or "mTBI", the general clinical medical literature uses "mTBI" instead, since a 2003 CDC report outlined it as an important strategy. In this article, "concussion" and "mTBI" are used interchangeably. The term "concussion" is from Latin concutere, "to shake violently" or concussus, "action of striking together". Research Minocycline, lithium, and N-acetylcysteine show tentative success in animal models. Measurement of predictive visual tracking is being studied as a screening technique to identify mild traumatic brain injury. A head-mounted display unit with eye-tracking capability shows a moving object in a predictive pattern for the person to follow with their eyes.
People without brain injury will be able to track the moving object with smooth pursuit eye movements and correct trajectory while it is hypothesized that those with mild traumatic brain injury cannot. Grading systems National and international clinical practice guidelines do not recommend a concussion grading system for use by medical professionals. Historical information on grading systems In the past, the decision to allow athletes to return to participation was frequently based on the grade of concussion. However, current research and recommendations by professional organizations including the National Athletic Trainers' Association recommend against such use of these grading systems. Currently, injured athletes are prohibited from returning to play before they are symptom-free during both rest and exertion and until results of the neuropsychological tests have returned to pre-injury levels. Three grading systems have been most widely followed: by Robert Cantu, the Colorado Medical Society, and the American Academy of Neurology. Each employs three grades. At least 41 systems measure the severity, or grade, of a mild head injury, and there is little agreement about which is best. In an effort to simplify, the 2nd International Conference on Concussion in Sport, meeting in Prague in 2004, decided that these systems should be abandoned in favor of a 'simple' or 'complex' classification. However, the 2008 meeting in Zurich abandoned the simple versus complex terminology, although the participants did agree to keep the concept that most (80–90%) concussions resolve in a short period (7–10 days), although the recovery time frame may be longer in children and adolescents.
Biology and health sciences
Injury
null
399328
https://en.wikipedia.org/wiki/Cassiterite
Cassiterite
Cassiterite is a tin oxide mineral, SnO2. It is generally opaque, but it is translucent in thin crystals. Its luster and multiple crystal faces produce a desirable gem. Cassiterite was the chief tin ore throughout ancient history and remains the most important source of tin today. Occurrence Most sources of cassiterite today are found in alluvial or placer deposits containing the weathering-resistant grains. The best sources of primary cassiterite are found in the tin mines of Bolivia, where it is found in crystallised hydrothermal veins. Rwanda has a nascent cassiterite mining industry. Fighting over cassiterite deposits (particularly in Walikale) is a major cause of the conflict waged in eastern parts of the Democratic Republic of the Congo. This has led to cassiterite being considered a conflict mineral. Cassiterite is a widespread minor constituent of igneous rocks. The Bolivian veins and the 4,500-year-old workings of Cornwall and Devon, England, are concentrated in high-temperature quartz veins and pegmatites associated with granitic intrusives. The veins commonly contain tourmaline, topaz, fluorite, apatite, wolframite, molybdenite, and arsenopyrite. The mineral occurs extensively in Cornwall as surface deposits on Bodmin Moor, for example, where there are extensive traces of a hydraulic mining method known as streaming. The current major tin production comes from placer or alluvial deposits in Malaysia, Thailand, Indonesia, the Maakhir region of Somalia, and Russia. Hydraulic mining methods are used to concentrate mined ore, a process which relies on the high specific gravity of the SnO2 ore, of about 7.0. Crystallography Crystal twinning is common in cassiterite and most aggregate specimens show crystal twins. The typical twin is bent at a near-60-degree angle, forming an "elbow twin". Botryoidal or reniform cassiterite is called wood tin. Cassiterite is also used as a gemstone and in collector specimens when quality crystals are found. Etymology The name derives from the Greek κασσίτερος (transliterated as "kassiteros") for "tin". Early references to κασσίτερος can be found in Homer's Iliad, such as in the description of the Shield of Achilles. For example, the passage in book 18 (lines 609–613): αὐτὰρ ἐπεὶ δὴ τεῦξε σάκος μέγα τε στιβαρόν τε, τεῦξ᾽ ἄρα οἱ θώρηκα φαεινότερον πυρὸς αὐγῆς, τεῦξε δέ οἱ κόρυθα βριαρὴν κροτάφοις ἀραρυῖαν καλὴν δαιδαλέην, ἐπὶ δὲ χρύσεον λόφον ἧκε, τεῦξε δέ οἱ κνημῖδας ἑανοῦ κασσιτέροιο. Translated as: But when he had wrought the shield, great and sturdy, then wrought he for him a corselet brighter than the blaze of fire, and he wrought for him a heavy helmet, fitted to his temples, a fair helm, richly-dight, and set thereon a crest of gold; and he wrought him greaves of pliant tin. Liddell-Scott-Jones suggest the etymology to be originally Elamite, citing the Babylonian kassi-tira, hence the Sanskrit kastīram. However, the Akkadian word (the lingua franca of the Ancient Near East, including Babylonia) for tin was "anna-ku" (cuneiform: 𒀭𒈾). Roman Ghirshman (1954) suggests an origin in the region of the Kassites, an ancient people in west and central Iran, a view also taken by J. D. Muhly. There are relatively few words in Ancient Greek that begin with "κασσ-", suggesting that it is an ethnonym. Attempts at understanding the etymology of the word were made in antiquity, such as by Pliny the Elder in his Historia Naturalis (book 34, chapter 37.1): "White lead (tin) is the most valuable; the Greeks applied to it the name cassiteros".
And Stephanus of Byzantium in his Ethnica states: "Κασσίτερα νησος εν τω Ωκεανω, τη Ίνδικη προσεχης, ως Διονυσιος εν Βασσαρικοις. Εξ ης ο κασσίτερος." This can be translated as: Kassitera, an island in the ocean, neighbouring India, as Dionysius states in the Bassarika. From there comes tin. Use It is used primarily as a raw material for tin extraction and smelting.
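The Occurrence section above notes that hydraulic and placer concentration of cassiterite relies on its high specific gravity of about 7.0. A rough way to see why this works is to compare settling velocities of equal-sized grains under Stokes' law: a cassiterite grain settles several times faster than a quartz grain of the same size. The sketch below assumes Stokes-regime settling in water, a 50 µm grain size, and a quartz (gangue) density of about 2.65; none of these assumptions come from the article itself, which only gives the specific gravity of cassiterite.

```python
# Rough illustration of why gravity (placer/hydraulic) concentration works for
# cassiterite: under Stokes' law a dense SnO2 grain settles much faster than a
# quartz grain of the same size. Stokes' law, the grain size, and the fluid and
# quartz properties below are assumptions made for illustration only.

G = 9.81            # m/s^2
MU_WATER = 1.0e-3   # Pa*s, water at ~20 C
RHO_WATER = 1000.0  # kg/m^3

def stokes_settling_velocity(rho_particle: float, diameter_m: float) -> float:
    """Terminal settling velocity (m/s) of a small sphere in water (Stokes regime)."""
    r = diameter_m / 2.0
    return 2.0 * (rho_particle - RHO_WATER) * G * r ** 2 / (9.0 * MU_WATER)

d = 50e-6  # 50 micrometre grains
v_cassiterite = stokes_settling_velocity(7000.0, d)   # specific gravity ~7.0
v_quartz = stokes_settling_velocity(2650.0, d)        # assumed gangue, SG ~2.65
print(f"cassiterite: {v_cassiterite*1000:.1f} mm/s, quartz: {v_quartz*1000:.1f} mm/s, "
      f"ratio ~{v_cassiterite / v_quartz:.1f}x")
```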
Physical sciences
Minerals
Earth science
399678
https://en.wikipedia.org/wiki/Fermi%20Gamma-ray%20Space%20Telescope
Fermi Gamma-ray Space Telescope
The Fermi Gamma-ray Space Telescope (FGST, also FGRST), formerly called the Gamma-ray Large Area Space Telescope (GLAST), is a space observatory being used to perform gamma-ray astronomy observations from low Earth orbit. Its main instrument is the Large Area Telescope (LAT), with which astronomers mostly intend to perform an all-sky survey studying astrophysical and cosmological phenomena such as active galactic nuclei, pulsars, other high-energy sources and dark matter. Another instrument aboard Fermi, the Gamma-ray Burst Monitor (GBM; formerly GLAST Burst Monitor), is being used to study gamma-ray bursts and solar flares. Fermi, named for high-energy physics pioneer Enrico Fermi, was launched on 11 June 2008 at 16:05 UTC aboard a Delta II 7920-H rocket. The mission is a joint venture of NASA, the United States Department of Energy, and government agencies in France, Germany, Italy, Japan, and Sweden, becoming the most sensitive gamma-ray telescope on orbit, succeeding INTEGRAL. The project is a recognized CERN experiment (RE7). Overview Fermi includes two scientific instruments, the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). The LAT is an imaging gamma-ray detector (a pair-conversion instrument) which detects photons with energy from about 20 million to about 300 billion electronvolts (20 MeV to 300 GeV), with a field of view of about 20% of the sky; it may be thought of as a sequel to the EGRET instrument on the Compton Gamma Ray Observatory. The GBM consists of 14 scintillation detectors (twelve sodium iodide crystals for the 8 keV to 1 MeV range and two bismuth germanate crystals with sensitivity from 150 keV to 30 MeV), and can detect gamma-ray bursts in that energy range across the whole of the sky not occluded by the Earth. General Dynamics Advanced Information Systems (formerly Spectrum Astro and now Orbital Sciences) in Gilbert, Arizona, designed and built the spacecraft that carries the instruments. It travels in a low, circular orbit with a period of about 95 minutes. Its normal mode of operation maintains its orientation so that the instruments will look away from the Earth, with a "rocking" motion to equalize the coverage of the sky. The view of the instruments will sweep out across most of the sky about 16 times per day. The spacecraft can also maintain an orientation that points to a chosen target. Both science instruments underwent environmental testing, including vibration, vacuum, and high and low temperatures to ensure that they can withstand the stresses of launch and continue to operate in space. They were integrated with the spacecraft at the General Dynamics ASCENT facility in Gilbert, Arizona. Data from the instruments are available to the public through the Fermi Science Support Center web site. Software for analyzing the data is also available. GLAST renamed Fermi Gamma-ray Space Telescope NASA's Alan Stern, associate administrator for Science at NASA Headquarters, launched a public competition 7 February 2008, closing 31 March 2008, to rename GLAST in a way that would "capture the excitement of GLAST's mission and call attention to gamma-ray and high-energy astronomy ... something memorable to commemorate this spectacular new astronomy mission ... a name that is catchy, easy to say and will help make the satellite and its mission a topic of dinner table and classroom discussion". 
Fermi gained its new name in 2008: On 26 August 2008, GLAST was renamed the "Fermi Gamma-ray Space Telescope" in honor of Enrico Fermi, a pioneer in high-energy physics. Mission NASA designed the mission with a five-year lifetime, with a goal of ten years of operations. The key scientific objectives of the Fermi mission have been described as: To understand the mechanisms of particle acceleration in active galactic nuclei (AGN), pulsars, and supernova remnants (SNR). Resolve the gamma-ray sky: unidentified sources and diffuse emission. Determine the high-energy behavior of gamma-ray bursts and transients. Probe dark matter (e.g. by looking for an excess of gamma rays from the center of the Milky Way) and the early Universe. Search for evaporating primordial micro black holes (MBH) from their presumed gamma burst signatures (Hawking radiation component). The National Academies of Sciences ranked this mission as a top priority. Many new possibilities and discoveries are anticipated to emerge from this single mission and greatly expand our view of the Universe. Blazars and active galaxies Study energy spectra and variability of wavelengths of light coming from blazars so as to determine the composition of the black hole jets aimed directly at Earth: whether they are (a) a combination of electrons and positrons or (b) only protons. Gamma-ray bursts Study gamma-ray bursts with an energy range several times more intense than ever before so that scientists may be able to understand them better. Neutron stars Study younger, more energetic pulsars in the Milky Way than ever before so as to broaden our understanding of stars. Study the pulsed emissions of magnetospheres so as to possibly solve how they are produced. Study how pulsars generate winds of interstellar particles. Milky Way galaxy Provide new data to help improve upon existing theoretical models of our own galaxy. Gamma-ray background radiation Study better than ever before whether ordinary galaxies are responsible for gamma-ray background radiation. The potential for a tremendous discovery awaits if ordinary sources are determined not to be responsible, in which case the cause may be anything from self-annihilating dark matter to entirely new chain reactions among interstellar particles that have yet to be conceived. The early universe Study better than ever before how concentrations of visible and ultraviolet light change over time. The mission should easily detect regions of spacetime where gamma-rays interacted with visible or UV light to make matter. This can be seen as an example of E = mc² working in reverse, where energy is converted into mass, in the early universe. Sun Study better than ever before how our own Sun produces gamma rays in solar flares. Dark matter Search for evidence that dark matter is made up of weakly interacting massive particles, complementing similar experiments already planned for the Large Hadron Collider as well as other underground detectors. The potential for a tremendous discovery in this area is possible over the next several years. Fundamental physics Test better than ever before certain established theories of physics, such as whether the speed of light in vacuum remains constant regardless of wavelength. Einstein's general theory of relativity contends that it does, yet some models in quantum mechanics and quantum gravity predict that it may not.
Search for gamma rays emanating from former black holes that once exploded, providing yet another potential step toward the unification of quantum mechanics and general relativity. Determine whether photons naturally split into smaller photons, as predicted by quantum mechanics and already achieved under controlled, man-made experimental conditions. Unknown discoveries Scientists estimate a very high possibility for new scientific discoveries, even revolutionary discoveries, emerging from this single mission. Mission timeline Prelaunch On 4 March 2008, the spacecraft arrived at the Astrotech payload processing facility in Titusville, Florida. On 4 June 2008, after several previous delays, launch status was retargeted for 11 June at the earliest, the last delays resulting from the need to replace the Flight Termination System batteries. The launch window extended from 15:45 to 17:40 UTC daily, until 7 August 2008. Launch Launch occurred successfully on 11 June 2008 at 16:05 UTC aboard a Delta 7920H-10C rocket from Cape Canaveral Air Force Station Space Launch Complex 17-B. Spacecraft separation took place about 75 minutes after launch. Orbit Fermi resides in a low-Earth circular orbit at an altitude of , and at an inclination of 28.5 degrees. Software modifications GLAST received some minor modifications to its computer software on 23 June 2008. LAT/GBM computers operational Computers operating both the LAT and GBM and most of the LAT's components were turned on 24 June 2008. The LAT high voltage was turned on 25 June, and it began detecting high-energy particles from space, but minor adjustments were still needed to calibrate the instrument. The GBM high voltage was also turned on 25 June, but the GBM still required one more week of testing/calibrations before searching for gamma-ray bursts. Sky survey mode After presenting an overview of the Fermi instrumentation and goals, Jennifer Carson of SLAC National Accelerator Laboratory had concluded that the primary goals were "all achievable with the all-sky scanning mode of observing". Fermi switched to "sky survey mode" on 26 June 2008 so as to begin sweeping its field of view over the entire sky every three hours (every two orbits). Collision avoided On 30 April 2013, NASA revealed that the telescope had narrowly avoided a collision a year earlier with a defunct Cold War-era Soviet spy satellite, Kosmos 1805, in April 2012. Orbital predictions several days earlier indicated that the two satellites were expected to occupy the same point in space within 30 milliseconds of each other. On 3 April, telescope operators decided to stow the satellite's high-gain parabolic antenna, rotate the solar panels out of the way and to fire Fermi's rocket thrusters for one second to move it out of the way. Even though the thrusters had been idle since the telescope had been placed in orbit nearly five years earlier, they worked correctly and potential disaster was thus avoided. Extended mission 2013–2018 In August 2013 Fermi started its 5-year mission extension. Pass 8 software upgrade In June 2015, the Fermi LAT Collaboration released "Pass 8 LAT data". Iterations of the analysis framework used by LAT are called "passes" and at launch Fermi LAT data was analyzed using Pass 6. Significant improvements to Pass 6 were included in Pass 7 which debuted in August 2011. Every detection by the Fermi LAT since its launch, was reexamined with the latest tools to learn how the LAT detector responded to both each event and to the background. 
This improved understanding led to two major improvements: gamma-rays that had been missed by previous analysis were detected and the direction they arrived from was determined with greater accuracy. The impact of the latter is to sharpen Fermi LAT's vision as illustrated in the figure on the right. Pass 8 also delivers better energy measurements and a significantly increased effective area. The entire mission dataset was reprocessed. These improvements have the greatest impact on both the low and high ends of the range of energy Fermi LAT can detect - in effect expanding the energy range within which LAT can make useful observations. The improvement in the performance of Fermi LAT due to Pass 8 is so dramatic that this software update is sometimes called the cheapest satellite upgrade in history. Among numerous advances, it allowed for a better search for Galactic spectral lines from dark matter interactions, analysis of extended supernova remnants, and to search for extended sources in the Galactic plane. For almost all event classes, Version P8R2 had a residual background that was not fully isotropic. This anisotropy was traced to cosmic-ray electrons leaking through the ribbons of the Anti-Coincidence Detector and a set of cuts allowed rejection of these events while minimally impacting acceptance. This selection was used to create the P8R3 version of LAT data. Solar array drive failure On 16 March 2018 one of Fermi's solar arrays quit rotating, prompting a transition to "safe hold" mode and instrument power off. This was the first mechanical failure in nearly 10 years. Fermi's solar arrays rotate to maximize the exposure of the arrays to the Sun. The motor that drives that rotation failed to move as instructed in one direction. On 27 March, the satellite was placed at a fixed angle relative to its orbit to maximize solar power. The next day the GBM instrument was turned back on. On 2 April, operators turned LAT on and it resumed operations on 8 April. Alternative observation strategies are being developed due to power and thermal requirements. Discoveries Pulsar discovery The first major discovery came when the space telescope detected a pulsar in the CTA 1 supernova remnant that appeared to emit radiation in the gamma ray bands only, a first for its kind. This new pulsar sweeps the Earth every 316.86 milliseconds and is about 4,600 light-years away. Greatest gamma-ray burst energy release In September 2008, the gamma-ray burst GRB 080916C in the constellation Carina was recorded by the Fermi telescope. This burst is notable as having "the largest apparent energy release yet measured". The explosion had the power of about 9,000 ordinary supernovae, and the relativistic jet of material ejected in the blast must have moved at a minimum of 99.9999% the speed of light. Overall, GRB 080916C had "the greatest total energy, the fastest motions, and the highest initial-energy emissions" ever seen. Galactic Center gamma ray excess In 2009, a surplus of gamma rays from a spherical region around the Galactic Center of the Milky Way was found in data from the Fermi telescope. This is now known as the Galactic Center GeV excess. The source of this surplus is not known. Suggestions include self-annihilation of dark matter or a population of pulsars. Cosmic rays and supernova remnants In February 2010, it was announced that Fermi-LAT had determined that supernova remnants act as enormous accelerators for cosmic particles. This determination fulfills one of the stated missions for this project. 
Background gamma ray sources In March 2010 it was announced that active galactic nuclei are not responsible for most gamma-ray background radiation. Though active galactic nuclei do produce some of the gamma-ray radiation detected here on Earth, less than 30% originates from these sources. The search is now on to locate the sources of the remaining roughly 70% of detected gamma rays. Possibilities include star-forming galaxies, galactic mergers, and yet-to-be explained dark matter interactions. Milky Way Gamma- and X-ray emitting Fermi bubbles In November 2010, it was announced that two gamma-ray and X-ray emitting bubbles had been detected around our galaxy, the Milky Way. The bubbles, named Fermi bubbles, extend about 25,000 light-years above and below the galactic center. The galaxy's diffuse gamma-ray fog hampered prior observations, but the discovery team led by D. Finkbeiner, building on research by G. Dobler, worked around this problem. Highest energy light ever seen from the Sun In early 2012, Fermi/GLAST observed the highest-energy light ever seen in a solar eruption. Terrestrial gamma-ray flash observations The Fermi telescope has observed and detected numerous terrestrial gamma-ray flashes and discovered that such flashes can produce 100 trillion positrons, far more than scientists had previously expected. GRB 130427A On 27 April 2013, Fermi detected GRB 130427A, a gamma-ray burst with one of the highest energy outputs yet recorded. This included the detection of a gamma ray of over 94 billion electronvolts (94 GeV), more than three times Fermi's previous record detection. GRB coincident with gravitational wave event GW150914 Fermi reported that its GBM instrument detected a weak gamma-ray burst above 50 keV, starting 0.4 seconds after the LIGO event and with a positional uncertainty region overlapping that of the LIGO observation. The Fermi team calculated the odds of such an event being the result of a coincidence or noise at 0.22%. However, observations from the INTEGRAL telescope's all-sky SPI-ACS instrument indicated that any energy emission in gamma-rays and hard X-rays from the event was less than one millionth of the energy emitted as gravitational waves, concluding that "this limit excludes the possibility that the event is associated with substantial gamma-ray radiation, directed towards the observer." If the signal observed by the Fermi GBM had been associated with GW150914, SPI-ACS would have detected it with a significance of 15 sigma above background. The AGILE space telescope also did not detect a gamma-ray counterpart of the event. A follow-up analysis of the Fermi report by an independent group, released in June 2016, purported to identify statistical flaws in the initial analysis, concluding that the observation was consistent with a statistical fluctuation or an Earth albedo transient on a one-second timescale. A rebuttal of this follow-up analysis, however, pointed out that the independent group misrepresented the analysis of the original Fermi GBM team paper and therefore misconstrued the results of the original analysis. The rebuttal reaffirmed that the false coincidence probability is calculated empirically and is not refuted by the independent analysis. In October 2018, astronomers reported that GRB 150101B, 1.7 billion light-years away from Earth, may be analogous to the historic GW170817.
It was detected on 1 January 2015 at 15:23:35 UT by the Gamma-ray Burst Monitor on board the Fermi Gamma-ray Space Telescope, along with detections by the Burst Alert Telescope (BAT) on board the Swift observatory satellite. Black hole mergers of the type thought to have produced the gravitational wave event are not expected to produce gamma-ray bursts, as stellar-mass black hole binaries are not expected to have large amounts of orbiting matter. Avi Loeb has theorised that if a massive star rotates rapidly, the centrifugal force produced during its collapse leads to the formation of a rotating bar that breaks into two dense clumps of matter in a dumbbell configuration, forming a black hole binary, and that the end of the star's collapse triggers a gamma-ray burst. Loeb suggests that the 0.4-second delay is the extra time the gamma-ray burst took to cross the star, relative to the gravitational waves. GRB 170817A signals a multi-messenger transient On 17 August 2017, the Fermi Gamma-ray Burst Monitor software detected, classified, and localized a gamma-ray burst, later designated GRB 170817A. Six minutes later, a single detector at LIGO Hanford registered a gravitational-wave candidate consistent with a binary neutron star merger that occurred 2 seconds before the GRB 170817A event. This observation was "the first joint detection of gravitational and electromagnetic radiation from a single source". Instruments Gamma-ray Burst Monitor The Gamma-ray Burst Monitor (GBM) (formerly the GLAST Burst Monitor) detects sudden flares of gamma rays produced by gamma-ray bursts and solar flares. Its scintillators are on the sides of the spacecraft to view all of the sky not blocked by the Earth. The design is optimized for good resolution in time and photon energy, and is sensitive from (a medium X-ray) to (a medium-energy gamma-ray). "Gamma-ray bursts are so bright we can see them from billions of light-years away, which means they occurred billions of years ago, and we see them as they looked then", stated Charles Meegan of NASA's Marshall Space Flight Center. The Gamma-ray Burst Monitor has detected gamma rays from positrons generated in powerful thunderstorms. Large Area Telescope The Large Area Telescope (LAT) detects individual gamma rays using technology similar to that used in terrestrial particle accelerators. Photons hit thin metal sheets and convert into electron-positron pairs via a process termed pair production. These charged particles pass through interleaved layers of silicon microstrip detectors, causing ionization that produces tiny, detectable pulses of electric charge. Researchers can combine information from several layers of this tracker to determine the path of the particles. After passing through the tracker, the particles enter the calorimeter, a stack of caesium iodide scintillator crystals that measures the total energy of the particles. The LAT's field of view is large, about 20% of the sky. The resolution of its images is modest by astronomical standards: a few arcminutes for the highest-energy photons and about 3 degrees at 100 MeV. It is sensitive from to (from medium up to some very-high-energy gamma rays). The LAT is a bigger and better successor to the EGRET instrument on NASA's Compton Gamma Ray Observatory satellite in the 1990s. Several countries produced the components of the LAT, which were then sent to SLAC National Accelerator Laboratory for assembly.
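As a brief physics aside on the pair-production technique described above (a standard textbook figure, not a value taken from the LAT documentation), a photon can only convert into an electron-positron pair if its energy exceeds the combined rest energy of the pair:

$$E_{\gamma} \ge 2 m_{e} c^{2} \approx 2 \times 0.511\ \mathrm{MeV} \approx 1.022\ \mathrm{MeV}.$$

In practice, conversion in the detector's foils becomes efficient only well above this threshold, which is one reason this detection technique suits gamma rays rather than X-rays.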
SLAC also hosts the LAT Instrument Science Operations Center, which supports the operation of the LAT during the Fermi mission for the LAT scientific collaboration and for NASA. Education and public outreach Education and public outreach are important components of the Fermi project. The main Fermi education and public outreach website at http://glast.sonoma.edu offers gateways to resources for students, educators, scientists, and the public. NASA's Education and Public Outreach (E/PO) group operates the Fermi education and outreach resources at Sonoma State University. Rossi Prize The 2011 Bruno Rossi Prize was awarded to Bill Atwood, Peter Michelson and the Fermi LAT team "for enabling, through the development of the Large Area Telescope, new insights into neutron stars, supernova remnants, cosmic rays, binary systems, active galactic nuclei and gamma-ray bursts." In 2013, the prize was awarded to Roger W. Romani of Leland Stanford Junior University and Alice Harding of Goddard Space Flight Center for their work in developing the theoretical framework underpinning the many exciting pulsar results from the Fermi Gamma-ray Space Telescope. The 2014 prize went to Tracy Slatyer, Douglas Finkbeiner and Meng Su "for their discovery, in gamma rays, of the large unanticipated Galactic structure called the Fermi bubbles." The 2018 prize was awarded to Colleen Wilson-Hodge and the Fermi GBM team for the detection of , the first unambiguous and completely independent discovery of an electromagnetic counterpart to a gravitational wave signal (GW170817) that "confirmed that short gamma-ray bursts are produced by binary neutron star mergers and enabled a global multi-wavelength follow-up campaign."
Sloth bear
The sloth bear (Melursus ursinus), also known as the Indian bear, is a myrmecophagous bear species native to the Indian subcontinent. It feeds on fruits, ants and termites. It is listed as vulnerable on the IUCN Red List, mainly because of habitat loss and degradation. It is the only species in the genus Melursus. It has also been called "labiated bear" because of its long lower lip and palate, used for sucking up insects. It has long, shaggy fur, a mane around the face, and long, sickle-shaped claws. It is lankier than brown and Asian black bears. It shares features of insectivorous mammals and evolved during the Pleistocene from the ancestral brown bear through divergent evolution. Sloth bears breed during spring and early summer and give birth near the beginning of winter. They sometimes attack humans who encroach on their territories. Historically, humans have drastically reduced these bears' habitat and diminished their population by hunting them for food and products such as their bacula and claws. Sloth bears have been tamed and used as performing animals and as pets. Taxonomy George Shaw in 1791 named the species Bradypus ursinus. In 1793, Meyer named it Melursus lybius, and in 1817, de Blainville named it Ursus labiatus because of its long lips. Illiger named it Prochilus hirsutus, the Greek genus name indicating long lips, while the specific name noted its long and coarse hair. Fischer called it Chondrorhynchus hirsutus, while Tiedemann named it Ursus longirostris. Subspecies and range Evolution Sloth bears may have reached their current form in the Early Pleistocene, the time when the bear family specialised and dispersed. A fragment of fossilised humerus from the Pleistocene, found in Andhra Pradesh's Kurnool Basin, is identical to the humerus of a modern sloth bear. The fossilised skulls of a bear once named Melursus theobaldi, found in the Shivaliks and dating from the Early Pleistocene or Early Pliocene, are thought by certain authors to represent an intermediate stage between sloth bears and ancestral brown bears. M. theobaldi itself had teeth intermediate in size between those of sloth bears and other bear species, though its palate was the same size as that of the former, leading to the theory that it is the sloth bear's direct ancestor. Sloth bears probably arose during the Middle Pliocene and evolved in the Indian subcontinent. The sloth bear shows evidence of having undergone convergent evolution similar to that of other ant-eating mammals. The sloth bear is one of eight extant species in the bear family Ursidae and one of six extant species in the subfamily Ursinae. Characteristics Adult sloth bears are medium-sized bears. The typical weight range for females is from , and for males from . Exceptionally large female specimens can reach and males up to . The average weight of sloth bears from the nominate subspecies in Nepal was in females and in males. In one study, nominate bears in India were found to average in males and in females. Specimens from Sri Lanka (M. u. inornatus) may weigh up to in females and in males. However, six Sri Lankan male sloth bears averaged only , and was the average for four females, so Sri Lankan bears could be around 30% lighter in body mass than nominate-race bears, with more pronounced sexual size dimorphism. They are high at the shoulder, and have a body length of . Besides being smaller than males, females reportedly have more fur between their shoulders.
Sloth bear muzzles are thick and long, with small jaws and bulbous snouts with wide nostrils. They have long lower lips which can be stretched over the outer edge of their noses, and they lack upper incisors, thus allowing them to suck up large numbers of insects. The premolars and molars are smaller than in other bears, as they do not chew as much vegetation. In adults, the teeth are usually in poor condition, due to the amount of soil they suck up and chew when feeding on insects. The back of the palate is long and broad, as is typical in other ant-eating mammals. The paws are disproportionately large, and have highly developed, sickle-shaped, blunt claws which measure in length. Their toe pads are connected by a hairless web. They have the longest tail in the bear family, which can grow to . Their back legs are not very strong, though they are knee-jointed, and allow them to assume almost any position. The ears are very large and floppy. The sloth bear is the only bear with long hair on its ears. Sloth bear fur is completely black (rusty for some specimens), save for a whitish Y- or V-shaped mark on the chest. This feature is sometimes absent, particularly in Sri Lankan specimens. This feature, which is also present in Asian black bears and sun bears, is thought to serve as a threat display, as all three species are sympatric with tigers (tigers usually do not carry out attacks on an adult bear if the bear is aware or facing the cat). The coat is long, shaggy, and unkempt, despite the relatively warm environment in which the species is found, and is particularly heavy behind the neck and between the shoulders, forming a mane which can be long. The belly and underlegs can be almost bare. Sloth bears are usually about the same size as an Asian black bear but are immediately distinctive for their shaggier coat, whitish claws, as well as their typically rangier build. Their head and mouth is highly distinct from that of a black bear with a longer, narrower skull shape (particularly the snout), loose-looking, flappier lips and paler muzzle colour. In few areas of overlap, sloth bear confusion with sun bears is unlikely, given the latter species considerably smaller size, much shorter fur, wrinkled folding skin (especially around the back), bolder chest marking and drastically different, more compact head structure and appearance. Distribution and habitat The sloth bear's global range includes India, the Terai of Nepal, temperate climatic zones of Bhutan and Sri Lanka. It occurs in a wide range of habitats including moist and dry tropical forests, savannahs, scrublands and grasslands below on the Indian subcontinent, and below in Sri Lanka's dry forests. It is regionally extinct in Bangladesh. Behaviour and ecology Adult sloth bears may travel in pairs. Males are often observed to be gentle with cubs. They may fight for food. They walk in a slow, shambling motion, with their feet being set down in a noisy, flapping motion. They are capable of galloping faster than running humans. Although they appear slow and clumsy, both young and adult sloth bears are excellent climbers. They occasionally will climb to feed and to rest, though not to escape enemies, as they prefer to stand their ground. Sloth bear mothers carry their cubs up trees as the primary defense against attacks by predators instead of sending them up trees. The cubs can be threatened by predators such as tigers, leopards, and other bears. 
They are adequate climbers on more accessible trees, but cannot climb as quickly or on as varied surfaces as black bears can, owing to the sloth bear's more elongated claw structure. Given their smaller size and shorter claws, sloth bear cubs probably climb more proficiently than adults (much as brown bear cubs climb well while adults do not). They are good swimmers, and primarily enter water to play. To mark their territories, sloth bears scrape trees with their forepaws and rub against them with their flanks. Sloth bears are recorded as producing several sounds and vocalisations. Howls, squeals, screams, barks and trumpet-like calls are made during aggressive encounters, while huffing is made as a warning signal. Chuffing calls are made when disturbed. Females keep in contact with their cubs with a grunt-whicker, while cubs yelp when separated. Reproduction The breeding season for sloth bears varies according to location: in India, they mate in April, May, and June, and give birth in December and early January, while in Sri Lanka, it occurs all year. Sows gestate for 210 days, and typically give birth in caves or in shelters under boulders. Litters usually consist of one or two cubs, or rarely three. Cubs are born blind, and open their eyes after four weeks. Sloth bear cubs develop quickly compared to most other bear species: they start walking a month after birth, become independent at 24–36 months, and become sexually mature at the age of three years. Young cubs ride on their mother's back when she walks, runs, or climbs trees until they reach a third of her size. Individual riding positions are maintained by cubs through fighting. Intervals between litters can last two to three years. Dietary habits Sloth bears are expert hunters of termites, ants, and bees, which they locate by smell. On arriving at a mound, they scrape at the structure with their claws until they reach the large combs at the bottom of the galleries, and disperse the soil with violent puffs. The termites are then sucked up through the muzzle, producing a sucking sound which can be heard 180 m away. Their sense of smell is strong enough to detect grubs 3 ft below ground. Unlike other bears, they do not congregate in feeding groups. Sloth bears may supplement their diets with fruit, plant matter, carrion, and very rarely other mammals. In March and April, they eat the fallen petals of mowha trees and are partial to mangoes, maize, sugar cane, jackfruit, and the pods of the golden shower tree. Sloth bears are extremely fond of honey. When feeding their cubs, sows are reported to regurgitate a mixture of half-digested jack fruit, wood apples, and pieces of honeycomb. This sticky substance hardens into a dark yellow, circular, bread-like mass which is fed to the cubs. This "bear's bread" is considered a delicacy by some of India's natives. Rarely, sloth bears become habituated to sweets in hotel waste, visiting rubbish bins, even inside populated towns, all year long. Their diet also includes animal flesh. In Neyyar Wildlife Sanctuary, Kerala, seeds of six tree species eaten and excreted by sloth bears (Artocarpus hirsuta, A. integrifolia, Cassia fistula, Mangifera indica, Zizyphus oenoplina) did not show significantly different percentages of germination (appearance of the cotyledon) compared with seeds that had not passed through the bears' gut. However, seeds of three species, Artocarpus hirsuta, Cassia fistula, and Zizyphus oenoplina, germinated much faster after being ingested by the bears.
This experiment suggests that sloth bears may play an important role in seed dispersal and germination, with effects varying by tree species. Relationships with other animals The large canine teeth of sloth bears, relative to both its overall body size and to the size of the canine teeth of other bear species, and the aggressive disposition of sloth bears, may be a defense in interactions with large, dangerous animals, such as the tiger, elephant, and rhinoceros, as well as prehistoric species such as Megantereon. Bengal tigers occasionally prey on sloth bears. Tigers usually give sloth bears a wide berth, though some specimens may become habitual bear killers, and it is not uncommon to find sloth bear fur in tiger scats. Tigers typically hunt sloth bears by waiting for them near termite mounds, then creeping behind them and seizing them by the back of their necks and forcing them to the ground with their weight. One tiger was reported to simply break its victim's back with its paw, then wait for the paralysed bear to exhaust itself trying to escape before going in for the kill. When confronted by tigers face to face, sloth bears charge at them, crying loudly. A young or already sated tiger usually retreats from an assertive sloth bear, as the bear's claws can inflict serious wounds, and most tigers end the hunt if the bears become aware of the tiger's presence before the pounce. Sloth bears may scavenge on tiger kills. As tigers are known to mimic the calls of sambar deer to attract them, sloth bears react fearfully even to the sounds made by deer themselves. In 2011, a female bear with cubs was observed to stand her ground and prevail in a confrontation against two tigers (one female, one male) in rapid succession. Besides tigers there are few predators of sloth bears. Leopards can also be a threat, as they are able to follow sloth bears up trees. Bear cubs are probably far more vulnerable and healthy adult bears may be avoided by leopards. One leopard killed a three-quarters grown female sloth bear in an apparently lengthy fight that culminated in the trees. Apparently, a sloth bear killed a leopard in a confrontation in Yala National Park, Sri Lanka, but was itself badly injured in the fight and was subsequently put down by park rangers. Sloth bears occasionally chase leopards from their kills. Dhole packs may attack sloth bears. When attacking them, dholes try to prevent the bear from retreating into caves. Unlike tigers which prey on sloth bears of all size, there is little evidence that dholes are a threat to fully-grown sloth bears other than exceptionally rare cases. In one case, a golden jackal (a species much smaller and less powerful than a sloth bear and not generally a pack hunter as is the dhole) was seen to aggressively displace an adult bear which passively loped away from the snapping canid, indicating the sloth bear does not regard other carnivores as competition. Sloth bears are sympatric with Asiatic black bears in northern India, and the two species, along with the sun bear, coexist in some of the national parks and wildlife sanctuaries. They are also found together in Assam, Manipur, and Mizoram, in the hills south of the Brahmaputra River, the only places occupied by all three bear species. The three species do not act aggressively toward each other. This may be because the three species generally differ in habit and dietary preferences. Asian elephants apparently do not tolerate sloth bears in their vicinity. 
The reason for this is unknown, as individual elephants known to maintain their composure near tigers have been reported to charge bears. The Indian rhinoceros has a similar intolerance for sloth bears, and will charge at them. Status and conservation The IUCN estimates that fewer than 20,000 sloth bears survive in the wilds of the Indian subcontinent and Sri Lanka. The sloth bear is listed in Schedule I of the Indian Wildlife Protection Act, 1972, which provides for its legal protection. Commercial international trade in the sloth bear (including parts and derivatives) is prohibited, as it is listed in Appendix I of the Convention on International Trade in Endangered Species. To address human-bear conflict, people, particularly locals, may be educated about conservation ethics. The underlying cause of the conflict, deteriorating habitat, may be addressed through government or community-based reforestation programmes. Sloth bear populations grow when the bears live in high-profile reserves that protect species such as tigers and elephants. Directly managed reserves could conserve the sloth bear, so such reserves should be supported. Managing garbage, especially food-rich hotel waste, is essential where sloth bears have become used to entering towns, as this increases the number of accidental attacks on humans. The government of India has banned the use of sloth bears for entertainment, and a 'Sloth Bear Welfare Project' in the country has the objective of putting an end to their use for entertainment. However, the number of bears still used in this way remains large. Many organisations are helping in the conservation and preservation of sloth bears in safe places. Sloth bears previously used for entertainment are being rehabilitated in facilities such as the Agra Bear Rescue Facility run by Wildlife SOS and others. Major sloth bear sanctuaries in India include the Daroji bear sanctuary, Karnataka. Sloth bears have also been found dead in traps, electrocuted, or killed by other means by poachers, with body parts (e.g. canines, claws, gall bladders, and paws) usually removed for the illegal wildlife trade. Relationships with humans Attacks on humans Sloth bears are among the most aggressive extant bears and, because large human populations often closely surround reserves that hold bears, aggressive encounters and attacks are relatively frequent, though in some places attacks appear to be a reaction to encountering people accidentally. In absolute numbers, this is the species of bear that most regularly attacks humans; only the Himalayan black bear subspecies of the Asian black bear is nearly as dangerous. Sloth bears likely view humans as potential predators, as their reactions to them (roaring, followed by retreat or charging) are similar to those evoked in the presence of tigers and leopards. Their long claws, which are ideally adapted for digging at termite mounds, make adults less capable than other bears, such as Asian black bears, of climbing trees to escape danger. Sloth bears have therefore seemingly evolved to deal with threats by behaving aggressively. For the same reason, brown bears can be similarly inclined, accounting for the relatively high incidence of seemingly non-predatory aggression towards humans in these two bear species. According to Robert Armitage Sterndale, in his Mammalia of India (1884, p.
62): Captain Williamson in his Oriental Field Sports wrote of how sloth bears rarely killed their human victims outright, but would suck and chew on their limbs till they were reduced to bloody pulps. One specimen, known as the sloth bear of Mysore, was responsible for the deaths of 12 people and the mutilation of 24 others. It was shot by Kenneth Anderson. Although sloth bears have attacked humans, they rarely become man-eaters. Dunbar-Brander's Wild Animals of Central India mentions a case in which a sow with two cubs began a six-week reign of terror in Chanda, a district of the Central Provinces, during which more than one of their victims had been eaten, while the sloth bear of Mysore partially ate at least three of its victims. R.G. Burton deduced from comparing statistics that sloth bears killed more people than Asian black bears, and Theodore Roosevelt considered them to be more dangerous than American black bears. Unlike some other bear species, which at times make mock charges at humans when surprised or frightened without making physical contact, sloth bears frequently appear to initiate a physical attack almost immediately. When people living near an aggressive population of sloth bears were armed with rifles, it was found that it was an ineffective form of defense, since the bear apparently charges and knocks the victim back (often knocking the rifle away) before the human has the chance to defend themself. In Madhya Pradesh, sloth bear attacks accounted for the deaths of 48 people and the injuring of 686 others between 1989 and 1994, probably due in part to the density of population and competition for food sources. A total of 137 attacks (resulting in 11 deaths) occurred between April 1998 and December 2000 in the North Bilaspur Forest Division of Chhattisgarh. The majority of attacks were perpetrated by single bears, and occurred in kitchen gardens, crop fields, and in adjoining forests during the monsoon season. One Mr. Watts Jones wrote a first-hand account of how it feels to be attacked by a sloth bear, recalling when he failed to score a direct hit against a bear he had targeted: In 2016, according to a forest official, a female bear had killed three people, and hurt five others in Gujarat State's Banaskantha district, near Balaram Ambaji Wildlife Sanctuary, with some of the casualties being colleagues. At first, an attempt was made to trace and cage it, but this failed, costing the life of one official, and so a team of both officials and policemen shot the bear. In Karnataka's Bellary district, most of the attacks by sloth bears occurred outside forests, when they entered settlements and farmlands in search of food and water. In Mount Abu town in southern Rajasthan, sloth bears attacked people inside towns where they were seeking hotel waste in rubbish bins and encountered people by chance. Though such attacks were concomitant with increasing tourism activity, quite remarkably, local residents have not retaliated against the sloth bears. The absence of retaliation in many locations of India appears related to cultural norms and the dominant religion Hinduism where nature and animals are worshipped as deities. Hunting and products One method of hunting sloth bears involved the use of beaters, in which case, a hunter waiting on a post could either shoot the approaching bear through the shoulder or on the white chest mark if it was moving directly to him. 
Sloth bears are very resistant to body shots, and can charge hunters if wounded, though someone of steady nerves could score a direct hit from within a few paces of a charging bear. Sloth bears were easy to track during the wet season, as their clear footprints could be followed straight to their lairs. The majority of sloth bears killed in forests were due to chance encounters with them during hunts for other game. In hilly or mountainous regions, two methods were used to hunt them there. One was to lie in wait above the bear's lair at dawn and wait for the bear to return from its nocturnal foraging. Another was to rouse them at daytime by firing flares into the cave to draw them out. Sloth bears were also occasionally speared on horseback. In Sri Lanka, the baculum of a sloth bear was once used as a charm against barrenness. Tameability Officers in British India often kept sloth bears as pets. The wife of Kenneth Anderson kept an orphaned sloth bear cub from Mysore, which she named "Bruno". The bear was fed all sorts of things and was very affectionate toward people. It was even taught numerous tricks, such as cradling a woodblock like a baby or pointing a bamboo stick like a gun. Dancing bears were historically a popular entertainment in India, dating back to the 13th century and the pre-Mughal era. The Kalandars, who practised the tradition of capturing sloth bears for entertainment purposes, were often employed in the courts of Mughal emperors to stage spectacles involving trained bears. They were once common in the towns of Calcutta, where they often disturbed the horses of British officers. Despite a ban on the practice that was enacted in 1972, as many as 800 dancing bears were in the streets of India during the latter part of the 20th century, particularly on the highway between Delhi, Agra, and Jaipur. Sloth bear cubs, which were usually purchased at the age of six months from traders and poachers, were trained to dance and follow commands through coercive stimuli and starvation. Males were castrated at an early age, and their teeth were knocked out at the age of one year to prevent them from seriously injuring their handlers. The bears were typically fitted with a nose ring attached to a four-foot leash. Some were found to be blind from malnutrition. In 2009, following a seven-year campaign by a coalition of Indian and international animal welfare groups, the last Kalandar dancing bear was set free. The effort to end the practice involved helping the bear handlers find jobs and education, which enabled them to reduce their reliance on dancing-bear income. Cultural references Charles Catton included the bear in his 1788 book Animals Drawn from Nature and Engraved in Aqua-tinta, describing it as an "animal of the bear-kind" and saying it was properly called the "Petre Bear". In Rudyard Kipling's The Jungle Book, Baloo "the sleepy old brown bear" teaches the Law of the Jungle to the wolf cubs of the Seeonee wolf pack, as well as to his most challenging pupil, the "man-cub" Mowgli. Robert Armitage Sterndale, from whom Kipling derived most of his knowledge of Indian fauna, used the Hindustani word bhalu for several bear species, though Daniel Karlin, who edited the Penguin Classics reissue of The Jungle Book in 1989, stated, with the exception of colour, Kipling's descriptions of Baloo are consistent with the sloth bear, as brown bears and Asian black bears do not occur in the Seoni area where the novel takes place. Also, the name "sloth" can be used in the context of sleepiness. 
Karlin states, however, that Baloo's diet of "...only roots and nuts and honey" is a trait more common to the Asian black bear than to the sloth bear. Local names include bhaluk; rīn̄ch (also rinchh); bhālū; bhālu; ślath bhaluk; kālō bhāluk (also bhaluk); ṛkṣa (also rikspa); karaḍi (also kaddi); karaṭi (also kaddi); karaṭi; elugubaṇṭi (also elugu); asval (also aswal); Gond: yerid, yedjal and asol; Kol: bana; Oraon: bir mendi; valasā (also usa); bhālu; and richh.
Marine regression
A marine regression is a geological process occurring when areas of submerged seafloor are exposed during a drop in sea level. The opposite event, marine transgression, occurs when flooding from the sea covers previously-exposed land. Description According to one hypothesis, regressions may be linked to a "slowdown in sea-floor spreading, leading to a generalized drop in sea level (as the mid-ocean ridges would take up less space)...." That view considers major marine regressions to be one aspect of a normal variation in rates of plate tectonic activity, which leads to major episodes of global volcanism like the Siberian Traps and the Deccan Traps, which in turn cause large extinction events. Evidence of marine regressions and transgressions occurs throughout the fossil record, and the fluctuations are thought to have caused or contributed to several mass extinctions, such as the Permian–Triassic extinction event (250 million years ago, Ma) and Cretaceous–Paleogene extinction event (66 Ma). During the Permian-Triassic extinction, the largest extinction event in the Earth's history, the global sea level fell 250 m (820 ft). A major regression could cause marine organisms in shallow seas to go extinct, but mass extinctions tend to involve both terrestrial and aquatic species, and it is harder to see how a marine regression could cause widespread extinctions of land animals. Regressions are, therefore, seen as correlates or symptoms of major extinctions, rather than primary causes. The Permian regression might have been related to the formation of Pangaea. The accumulation of all major landmasses into one body could have facilitated a regression by providing "a slight enlargement of the ocean basins as the great continents coalesced." However, that cause could not have applied in all or even many of the other cases. Ice ages During the ice ages of the Pleistocene, a clear correlation existed between marine regressions and episodes of glaciation. As the balance shifts between the global cryosphere and hydrosphere, more of the planet's water in ice sheets means less in the oceans. At the height of the last ice age, around 18,000 years ago, the global sea level was 120 to 130 m (390-425 ft) lower than today. A cold spell around 6 million years ago was linked to an advance in glaciation, a marine regression, and the start of the Messinian salinity crisis in the Mediterranean basin. Some major regressions of the past, however, seem unrelated to glaciation episodes, with the regression that accompanied the mass extinction at the end of the Cretaceous being one example.
Comprehensive metabolic panel
The comprehensive metabolic panel, or chemical screen (CMP; CPT code 80053), is a panel of 14 blood tests that serves as an initial broad medical screening tool. The CMP provides a rough check of kidney function, liver function, diabetic and parathyroid status, and electrolyte and fluid balance, but this type of screening has its limitations. Abnormal values from a CMP are often the result of false positives, and thus the CMP may need to be repeated (or a more specific test performed), requiring a second blood drawing procedure and possibly additional expense for the patient, even though no disease is present. This test is also known as the SMA12+2 test. The CMP is an expanded version of the basic metabolic panel (BMP), which does not include liver tests. A CMP (or BMP) can be ordered as part of a routine physical examination, or may be used to monitor a patient with a chronic disease, such as diabetes mellitus or hypertension. Previous names for the panel of tests have been Chem 12, Chemistry panel, Chemistry screen, SMA 12, SMA 20 and SMAC (Sequential Multiple Analysis - Computer). The tests are performed on machines based on the AutoAnalyzer invented in 1957. Testing Typically, the patient fasts for ten or twelve hours before the blood is drawn for the test; this is particularly important for getting a useful blood glucose measurement. CMPs are also frequently performed on nonfasting patients, but the glucose level in those cases is not as useful. The following tests are then performed. General tests: serum glucose and calcium. These tests help screen for a wide variety of problems; the glucose test in particular helps screen for diabetes mellitus and pre-diabetes, while the calcium test can indicate or monitor bone diseases or diseases of the parathyroid gland or kidneys. Calcium salts, lithium, thiazide diuretics, thyroxine, and vitamin D can all increase calcium levels and may interfere with this test. Kidney function assessment: blood urea nitrogen (BUN) and creatinine. Electrolytes: sodium, potassium, chloride, and carbon dioxide (CO2). Electrolyte levels and the balance among them are tightly regulated by the body; both individual values and ratios among the values are significant, and abnormalities in either can indicate problems such as an electrolyte disturbance, acid-base imbalance, or kidney dysfunction. Protein tests: serum total protein (TP) and human serum albumin. Tests of protein levels in the blood help screen for both kidney and liver disorders. Liver enzymes: bilirubin, alkaline phosphatase (ALP), aspartate amino transferase (AST or SGOT), and alanine amino transferase (ALT or SGPT). Results The National Institutes of Health provides ranges considered within normal limits, though optimal levels may vary by individual. Compare also the ranges given at Reference ranges for blood tests.
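As an illustration only, the panel's composition described above can be written down as a simple data structure. This is a hypothetical grouping for readability, using just the test names listed in this article; it is not an official laboratory schema and encodes no reference ranges:

```python
# Hypothetical grouping of the 14 CMP tests as listed in this article.
# Names only; no reference ranges are encoded here.
CMP_PANEL = {
    "General tests": ["Serum glucose", "Calcium"],
    "Kidney function assessment": ["Blood urea nitrogen (BUN)", "Creatinine"],
    "Electrolytes": ["Sodium", "Potassium", "Chloride", "Carbon dioxide (CO2)"],
    "Protein tests": ["Serum total protein (TP)", "Human serum albumin"],
    "Liver enzymes": [
        "Bilirubin",
        "Alkaline phosphatase (ALP)",
        "Aspartate amino transferase (AST or SGOT)",
        "Alanine amino transferase (ALT or SGPT)",
    ],
}

# Sanity check: the CMP is described as a panel of 14 tests.
assert sum(len(tests) for tests in CMP_PANEL.values()) == 14
```

A structure like this makes it easy to verify the count of 14 tests and to group results by category when displaying them.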
Chin
The chin is the forward pointed part of the anterior mandible (mental region) below the lower lip. A fully developed human skull has a chin of between 0.7 cm and 1.1 cm. Evolution The presence of a well-developed chin is considered to be one of the morphological characteristics of Homo sapiens that differentiates them from other human ancestors such as the closely related Neanderthals. Early human ancestors have varied symphysial morphology, but none of them have a well-developed chin. The origin of the chin is traditionally associated with the anterior–posterior breadth shortening of the dental arch or tooth row; however, its general mechanical or functional advantage during feeding, developmental origin, and link with human speech, physiology, and social influence are highly debated. Functional perspectives Robinson (1913) suggests that the demand to resist masticatory stresses triggered bone thickening in the mental region of the mandible and ultimately formed a prominent chin. Moreover, Daegling (1993) explains the chin as a functional adaptation to resist masticatory stress that causes vertical bending stresses in the coronal plane. Others have argued that the prominent chin is adapted to resisting wishboning forces, dorso-ventral shear forces, and generally a mechanical advantage to resist lateral transverse bending and vertical bending in the coronal plane. On the contrary, others have suggested that the presence of the chin is not related to mastication. The presence of thick bone in the relatively small mandible may indicate better force resistance capacity. However, the question stands of whether the chin is an adaptive or nonadaptive structure. Developmental perspectives Recent works on the morphological changes of the mandible during development have shown that the human chin, or at least the inverted-T shaped mental region, develops during the prenatal period, but the chin does not become prominent until the early postnatal period. This later modification happens by bone remodeling processes (bone resorption and bone deposition). Coquerelle et al. show that the anteriorly positioned cervical column of the spine and forward displacement of the hyoid bone limit the anterior–posterior breadth in the oral cavity for the tongue, laryngeal, and suprahyoid musculatures. Accordingly, this leads the upper parts of the mandible (alveolar process) to retract posteriorly, following the posterior movement of the upper tooth row, while the lower part of the symphysis remained protruded to create more space, thereby creating the inverted-T shaped mental relief during early ages and the prominent chin later. The alveolar region (upper or superior part of the symphysis) is sculpted by bone resorption, but the chin (lower or inferior part) is depository in its nature. These coordinated bone growth and modeling processes mold the vertical symphysis present at birth into the prominent shape of the chin.   Recent research on the development of the chin suggests that the evolution of this unique characteristic was formed not by mechanical forces such as chewing but by evolutionary adaptations involving reduction in size and change in shape of the face. Holton et al. claim that this adaptation occurred as the face became smaller compared to that of other ancient humans. Other perspectives Robert Franciscus takes a more anthropological viewpoint: he believes that the chin was formed as a consequence of the change in lifestyle humans underwent approximately 80,000 years ago. 
As humans' hunter-gatherer societies grew into larger social networks, territorial disputes decreased because the new social structure promoted building alliances in order to exchange goods and belief systems. Franciscus believes that this change in the human environment reduced hormone levels, especially in men, resulting in the natural evolution of the chin. Overall, human beings are unique in the sense that they are the only primate species with chins. In the paper The Enduring Puzzle of the Human Chin, evolutionary anthropologists James Pampush and David Daegling discuss various theories that have been raised to solve the puzzle of the chin. They conclude that "each of the proposals we have discussed falter either empirically or theoretically; some fail, to a degree, on both accounts… This should serve as motivation, not discouragement, for researchers to continue investigating this modern human peculiarity… perhaps understanding the chin will reveal some unexpected insight into what it means to be human." Cleft chin The terms cleft chin, chin cleft, dimple chin, and chin dimple refer to a dimple on the chin. It is a Y-shaped fissure on the chin with an underlying bony peculiarity. Specifically, the chin fissure follows the fissure in the lower jaw bone that resulted from the incomplete fusion of the left and right halves of the jaw bone, or muscle, during embryonal and fetal development. It can also develop later at the mandibular symphysis, due to growth of the mental protuberance during puberty, or as a result of acromegaly. In some cases, one mental tubercle may grow more than the other, which can cause facial asymmetry. A cleft chin is an inherited trait in humans and can be influenced by many factors. The cleft chin is also a classic example of variable penetrance, with environmental factors or a modifier gene possibly affecting the phenotypical expression of the actual genotype. A cleft chin can appear in a child even when neither parent has one. Cleft chins are common among people originating from Europe, the Middle East and South Asia. A possible genetic cause for cleft chins is a genetic marker called rs11684042, located on chromosome 2. In Persian literature, the chin dimple is considered a mark of beauty and is metaphorically referred to as "the chin pit" or "the chin well": a well in which the poor lover falls and is trapped. Double chin A double chin is a loss of definition of the jawbone or soft tissue under the chin. There are two possible causes for a double chin, which have to be differentiated. In overweight people, the layer of subcutaneous fat around the neck commonly sags and creates a wrinkle, giving the appearance of a second chin. This fat pad is occasionally surgically removed and the corresponding muscles under the jaw shortened (hyoid lift). Another cause can be a bony deficiency, commonly seen in people of normal weight. When the jaw bones (the mandible and, by extension, the maxilla) do not project forward enough, the chin in turn will not project forward enough to give the impression of a defined jawline and chin. Despite low amounts of fat in the area, it can appear as if the chin is melting into the neck. The extent of this deficiency can vary drastically and usually has to be treated surgically. In some patients, the aesthetic deficit can be overcome with genioplasty alone; in others, the lack of forward growth might warrant orthognathic surgery to move one or both jaws forward.
If the patient suffers from sleep apnea, early maxillomandibular advancement is usually the only causal treatment and necessary to preserve normal life expectancy.
Narcissus poeticus
Narcissus poeticus, the poet's daffodil, poet's narcissus, nargis, pheasant's eye, findern flower or pinkster lily, was one of the first daffodils to be cultivated, and is frequently identified as the narcissus of ancient times (although Narcissus tazetta and Narcissus jonquilla have also been considered as possibilities). It is also often associated with the Greek legend of Narcissus. It is the type species of the genus Narcissus and is widely naturalised in North America. Description The flower is extremely fragrant, with a ring of tepals in pure white and a short corona of light yellow with a distinct reddish edge. It grows to tall. Taxonomy Narcissus poeticus was first described by Carl Linnaeus in his book Species Plantarum on page 289 in 1753. Distribution Narcissus poeticus is native to central and southern Europe, from Spain and France through Switzerland and Austria to Croatia, Albania, Greece and Ukraine. It is naturalised in Great Britain, Belgium, Germany, the Czech Republic, Azerbaijan, Turkey, New Zealand, British Columbia, Washington state, Oregon, Ontario, Quebec, Newfoundland, and much of the eastern United States, from Louisiana and Georgia north to Maine and Wisconsin. Legend and history The earliest mention of poet's daffodil is likely in the Historia Plantarum (VI.6.9), the main botanical writing of Theophrastus (371 – ), who wrote about a spring-blooming narcissus that the Loeb Classical Library editors identify as Narcissus poeticus. According to Theophrastus, the narcissus (νάρκισσος), also called leirion (λείριον), has a leafless stem, with the flower at the top. The plant blooms very late, after the setting of Arcturus about the equinox. The poet Virgil, in his fifth Eclogue, also wrote about a narcissus whose description corresponds with that of Narcissus poeticus. In one version of the myth about the Greek hero Narcissus, he was punished by the goddess of vengeance, Nemesis, who turned him into a narcissus flower that historians associate with Narcissus poeticus. The fragrant Narcissus poeticus has also been recognised as the flower that Persephone and her companions were gathering when Hades abducted her into the Underworld, according to Hellmut Baumann in The Greek Plant World in Myth, Art, and Literature. This myth accounts for the custom, which has lasted into modern times, of decorating graves with these flowers. Linnaeus, who gave the flower its name, quite possibly did so because he believed it was the one that inspired the tale of Narcissus, handed down by poets since ancient times. Uses In medicine, it was described by Dioscorides in his Materia Medica as "Being laid on with Loliacean meal, & honey it draws out splinters". James Sutherland also mentioned it in his Hortus Medicus Edinburgensis. In Korea, it is used to treat conjunctivitis, urethritis and amenorrhoea. Use in perfume Poet's daffodil is cultivated in the Netherlands and southern France for its essential oil, narcissus oil, one of the most popular fragrances used in perfumes. Narcissus oil is used as a principal ingredient, as a floral concrete or absolute, in 11% of modern quality perfumes, including 'Fatale' and 'Samsara'. The oil's fragrance resembles a combination of jasmine and hyacinth. Cultivation Narcissus poeticus has long been cultivated in Europe. According to one legend, it was brought back to England from the crusades by Sir Geoffrey de Fynderne.
It was still abundant in 1860 when historian Bernard Burke visited the village of Findern—where it still grows in certain gardens and has become an emblem of the village. It was introduced to America by the late 18th century, when Bernard McMahon of Philadelphia offered it among his narcissus. It may be the "sweet white narcissus" that Peter Collinson sent John Bartram in Philadelphia, only to be told that it was already common in Pennsylvania, having spread from its introduction by early settlers. The plant has naturalised throughout the eastern half of the United States and Canada, along with some western states and provinces. Narcissus poeticus has long been hybridised with the wild British daffodil Narcissus pseudonarcissus, producing many named hybrids. These older heritage hybrids tend to be more elegant and graceful than modern hybrid daffodils, and are becoming available in the UK once again. One such cultivar is the popular 'Actaea', which has gained the Royal Horticultural Society's Award of Garden Merit. N. poeticus var. recurvus, the old pheasant's eye daffodil, has also won the AGM. Toxicity While all narcissi are poisonous when eaten, poet's daffodil is more dangerous than others, acting as a strong emetic and irritant. The scent can be powerful enough to cause headache and vomiting if a large quantity is kept in a closed room.
Megacerops
Megacerops ("large-horned face", from méga- "large" + kéras "horn" + ōps "face") is an extinct genus of the prehistoric odd-toed ungulate (hoofed mammal) family Brontotheriidae, an extinct group of rhinoceros-like browsers related to horses. It was endemic to North America during the Late Eocene epoch (38–33.9 mya), existing for approximately . Taxonomy Megacerops was named by Leidy (1870). Its type species is Megacerops coloradensis. It was subjectively synonymized with Menodus by Clark and Beerbower (1967). It was assigned to Brontotheriidae by Leidy (1870), Carroll (1988), Mader (1989), and Mader (1998). According to Mihlbachler and others, Megacerops includes the species of the genera Menodus, Brontotherium, Brontops, Menops, Ateleodon, and Oreinotherium. Description All of the species had a pair of blunt horns on their snouts (the size varying between species), with the horns of males being much longer than those of the females. This could indicate that they were social animals which butted heads for breeding privileges. Despite resembling a rhinoceros, it was larger than any living rhinoceros: in life it easily approached the size of the African forest elephant, the third-largest land animal today. It stood about tall at the shoulders with an overall length (including tail) of . Its skull reached in greatest length, with some specimens possessing substantial canines, up to 70 mm long. Megacerops resembled a large rhinoceros, possessing blunt Y-shaped horn-like protrusions on its nose up to 43 cm in length. Its mass is estimated to be in the range of . The dorsal vertebrae above the shoulders had extra-long spines to support the huge neck muscles needed to carry the heavy skull. The shape of its teeth suggests that it preferred food such as soft stems and leaves, rather than tough vegetation. It may have had fleshy lips and a long tongue for carefully selecting food. Paleobiology The skeleton of an adult male was found with partially healed rib fractures, which supports the theory that males used their 'horns' to fight each other. No creature living in Megacerops' time and range except another Megacerops could have inflicted such an injury. Breathing movements prevented the fractures from healing completely. The adults may also have used their horns to defend themselves and their calves from predators, such as hyaenodonts, entelodonts, Bathornis or nimravids. Distribution Fossils have been uncovered in the northern plains states. Life-sized models of Megacerops families (a male, female, and juvenile) are displayed at the James E. Martin Paleontological Research Laboratory, South Dakota School of Mines & Technology, and a different set at the Canadian Museum of Nature. Many remains have been found in South Dakota and Nebraska. In the past, specimens exposed by severe rainstorms were found by Native Americans of the Sioux tribes. The Sioux called them "thunder beasts", a name preserved in its ancient Greek translation (bronto-, thunder; therion, beast). Many of the skeletons found by the Sioux belonged to herds killed by eruptions in the Rocky Mountains, which were volcanically active at the time.
Heterodontosauridae
Heterodontosauridae is a family of ornithischian dinosaurs that were likely among the most basal (primitive) members of the group. Their phylogenetic placement is uncertain but they are most commonly found to be primitive, outside of the group Genasauria. Although their fossils are relatively rare and their group small in numbers, they have been found on all continents except Australia and Antarctica, with a range spanning the Early Jurassic to the Early Cretaceous. Heterodontosaurids were fox-sized dinosaurs less than in length, including a long tail. They are known mainly for their characteristic teeth, including enlarged canine-like tusks and cheek teeth adapted for chewing, analogous to those of Cretaceous hadrosaurids. Their diet was herbivorous or possibly omnivorous. Description Among heterodontosaurids, only Heterodontosaurus itself is known from a complete skeleton. Fragmentary skeletal remains of Abrictosaurus are known but have not been fully described, while most other heterodontosaurids are known only from jaw fragments and teeth. Consequently, most heterodontosaurid synapomorphies (defining features) have been described from the teeth and jaw bones. Heterodontosaurus measured just over 1 meter (3.3 ft) in length, while the fragmentary remains of Lycorhinus may indicate a larger individual. Tianyulong from China appears to preserve filamentous integument which has been interpreted to be a variant of the proto-feathers found in some theropods. These filaments include a crest along its tail. The presence of this filamentous integument has been used to suggest that both ornithischians and saurischians were endothermic. Skull and teeth Both Abrictosaurus and Heterodontosaurus had very large eyes. Underneath the eyes, the jugal bone projected sideways, a feature also present in ceratopsians. As in the jaws of most ornithischians, the anterior edge of the premaxilla (a bone at the tip of the upper jaw) was toothless and probably supported a keratinous beak (rhamphotheca), although heterodontosaurids did have teeth in the posterior section of the premaxilla. A large gap, called a diastema, separated these premaxillary teeth from those of the maxilla (the main upper jaw bone) in many ornithischians, but this diastema was characteristically arched in heterodontosaurids. The mandible (lower jaw) was tipped by the predentary, a bone unique to ornithischians. This bone also supported a beak similar to the one found on the premaxilla. All the teeth in the lower jaw were found on the dentary bone. Heterodontosaurids are named for their strongly heterodont dentition. There were three premaxillary teeth. In the Early Jurassic Abrictosaurus, Heterodontosaurus, and Lycorhinus, the first two premaxillary teeth were small and conical, while the much larger third tooth resembled the canines of carnivoran mammals and is often called the caniniform or 'tusk'. A lower caniniform, larger than the upper, took the first position in the dentary and was accommodated by the arched diastema of the upper jaw when the mouth was closed. These caniniforms were serrated on both the anterior and posterior edges in Heterodontosaurus and Lycorhinus, while those of Abrictosaurus bore serrations only on the anterior edge. In the Early Cretaceous Echinodon, there may have been two upper caniniforms, which were on the maxilla rather than the premaxilla, and Fruitadens from the Late Jurassic may have had two lower caniniforms on each dentary. 
Like the characteristic tusks, the cheek teeth of derived heterodontosaurids were also unique among early ornithischians. Small ridges, or denticles, lined the edges of ornithischian cheek teeth in order to crop vegetation. These denticles extend only a third of the way down the tooth crown from the tip in all heterodontosaurids; in other ornithischians, the denticles extend further down towards the root. Basal forms like Abrictosaurus had cheek teeth in both maxilla and dentary that were generally similar to other ornithischians: widely spaced, each having a low crown and a strongly-developed ridge (cingulum) separating the crown from the root. In more derived forms like Lycorhinus and Heterodontosaurus, the teeth were chisel-shaped, with much higher crowns and no cingula, so that there was no difference in width between the crowns and the roots. These derived cheek teeth were overlapping, so that their crowns formed a continuous surface on which food could be chewed. The tooth rows were slightly inset from the side of the mouth, leaving a space outside the teeth that may have been bounded by a muscular cheek, which would have been necessary for chewing. The hadrosaurs and ceratopsians of the Cretaceous Period, as well as many herbivorous mammals, would convergently evolve somewhat analogous dental batteries. As opposed to hadrosaurs, which had hundreds of teeth constantly being replaced, tooth replacement in heterodontosaurids occurred far more slowly and several specimens have been found without a single replacement tooth in waiting. Characteristically, heterodontosaurids lacked the small openings (foramina) on the inside of the jaw bones which are thought to have aided in tooth development in most other ornithischians. Heterodontosaurids also boasted a unique spheroidal joint between the dentaries and the predentary, allowing the lower jaws to rotate outwards as the mouth was closed, grinding the cheek teeth against each other. Because of the slow replacement rate, this grinding produced extreme tooth wear that commonly obliterated most of the denticles in older teeth, although the increased height of the crowns gave each tooth a long life. Skeleton The postcranial anatomy of Heterodontosaurus tucki has been well-described, although H. tucki is generally considered the most derived of the Early Jurassic heterodontosaurids, so it is impossible to know how many of its features were shared with other species. The forelimbs were long for a dinosaur, over 70% of the length of the hindlimbs. The well-developed deltopectoral crest (a ridge for the attachment of chest and shoulder muscles) of the humerus and prominent olecranon process (where muscles that extend the forearm were attached) of the ulna indicate that the forelimb was powerful as well. There were five digits on the manus ('hand'). The first was large, tipped with a sharply curved claw, and would rotate inwards when flexed; Robert Bakker called it the 'twist-thumb'. The second digit was the longest, slightly longer than the third. Both of these digits bore claws, while the clawless fourth and fifth digits were very small and simple in comparison. In the hindlimb, the tibia was 30% longer than the femur, which is generally considered an adaptation for speed. The tibia and fibula of the lower leg were fused to the astragalus and calcaneum of the ankle, forming a 'tibiofibiotarsus' convergently with modern birds. Also similarly to birds, the lower tarsal (ankle) bones and metatarsals were fused to form a 'tarsometatarsus.' 
There are four digits in the pes (hindfoot), with only the second, third, and fourth contacting the ground. The tail, unlike many other ornithischians, did not have ossified tendons to maintain a rigid posture and was probably flexible. The fragmentary skeleton known for Abrictosaurus has never been fully described, although the forelimb and manus were smaller than in Heterodontosaurus. Also, the fourth and fifth digits of the forelimb each bear one fewer phalanx bone. Classification South African paleontologist Robert Broom created the name Geranosaurus in 1911 for dinosaur jaw bones missing all of the teeth and some partial associated limb bones. In 1924, Lycorhinus was named, and classified as a cynodont, by Sidney Haughton. Heterodontosaurus was named in 1962 and it, Lycorhinus and Geranosaurus were recognized as closely related ornithischian dinosaurs. Alfred Romer named Heterodontosauridae in 1966 as a family of ornithischian dinosaurs including Heterodontosaurus and Lycorhinus. Kuhn independently proposed Heterodontosauridae in the same year and is sometimes cited as its principal author. It was defined as a clade in 1998 by Paul Sereno and redefined by him in 2005 as the stem clade consisting of Heterodontosaurus tucki and all species more closely related to Heterodontosaurus than to Parasaurolophus walkeri, Pachycephalosaurus wyomingensis, Triceratops horridus, or Ankylosaurus magniventris. Heterodontosauridae was given a formal definition in the PhyloCode by Daniel Madzia and colleagues in 2021 as "the largest clade containing Heterodontosaurus tucki, but not Iguanodon bernissartensis, Pachycephalosaurus wyomingensis, Stegosaurus stenops, and Triceratops horridus". Heterodontosaurinae is a stem-based taxon defined phylogenetically for the first time by Paul Sereno in 2012 as "the most inclusive clade containing Heterodontosaurus tucki but not Tianyulong confuciusi, Fruitadens haagarorum, Echinodon becklesii." Heterodontosauridae includes the genera Abrictosaurus, Lycorhinus, and Heterodontosaurus, all from South Africa. While Richard Thulborn once reassigned all three to Lycorhinus, all other authors consider the three genera distinct. Within the family, Heterodontosaurus and Lycorhinus are considered sister taxa, with Abrictosaurus as a basal member. Geranosaurus is also a heterodontosaurid, but is usually considered a nomen dubium because the type specimen is missing all its teeth, making it indistinguishable from any other genus in the family. More recently, the genus Echinodon has been considered a heterodontosaurid in several studies. Lanasaurus was named for an upper jaw in 1975, but more recent discoveries have shown that it belongs to Lycorhinus instead, making Lanasaurus a junior synonym of that genus. Dianchungosaurus was once considered a heterodontosaurid from Asia, but it has since been shown that the remains were a chimera of prosauropod and mesoeucrocodylian remains. José Bonaparte also classified the South American Pisanosaurus as a heterodontosaurid at one time, but this animal is now known to be a more basal ornithischian. The membership of Heterodontosauridae is well-established in comparison to its uncertain phylogenetic position. Several early studies suggested that heterodontosaurids were very primitive ornithischians. Due to supposed similarities in the morphology of the forelimbs, Robert Bakker proposed a relationship between heterodontosaurids and early sauropodomorphs like Anchisaurus, bridging the orders Saurischia and Ornithischia. 
The dominant hypothesis over the last several decades has placed heterodontosaurids as basal ornithopods. However, others have suggested that heterodontosaurids instead share a common ancestor with Marginocephalia (ceratopsians and pachycephalosaurs), a hypothesis that has found support in some early 21st century studies. The clade containing heterodontosaurids and marginocephalians has been named Heterodontosauriformes. Heterodontosaurids have also been seen as basal to both ornithopods and marginocephalians. In 2007, a cladistic analysis suggested that heterodontosaurids are basal to all known ornithischians except Pisanosaurus, a result that echoes some of the very earliest work on the family. However, a study by Bonaparte found the Pisanosauridae to be synonymous with the Heterodontosauridae and not a separate family in its own right, thereby including Pisanosaurus as a heterodontosaur. Butler et al. (2010) found the Heterodontosauridae to be the most basal known significant ornithischian radiation. The cladogram below shows the interrelationships within Heterodontosauridae, and follows the analysis by Sereno, 2012: A 2020 reworking of Cerapoda by Dieudonné and colleagues recovered the animals traditionally considered 'heterodontosaurids' as a basal grouping within Pachycephalosauria, paraphyletic with respect to the traditional, dome-headed pachycephalosaurs. This result was based on numerous skull characteristics including the dentition, and also to account for the fact that pachycephalosaur fossils are completely unknown from the Jurassic period. Modern understanding of ornithischian phylogeny implies that Jurassic pachycephalosaurs must exist, because numerous Jurassic ceratopsians have been found, yet no such pachycephalosaurs have been confidently identified. This analysis was done to elaborate on the findings of Baron and colleagues (2017), which found Chilesaurus to be a basal ornithischian. The phylogenetic analysis was conducted with Chilesaurus coded as an ornithischian, which also had implications for the phylogeny of ornithopods. The cladogram below is an abridged version of Dieudonne and colleagues' findings: Distribution While originally known only from the Early Jurassic of southern Africa, heterodontosaurid remains are now known from four continents. Early in heterodontosaurid history, the supercontinent Pangaea was still largely intact, allowing the family to achieve a near-worldwide distribution. The oldest known possible heterodontosaurid remains are a jaw fragment and isolated teeth from the Laguna Colorada Formation of Argentina, which dates back to the Late Triassic. These remains have a derived morphology similar to Heterodontosaurus, including a caniniform with serrations on both anterior and posterior edges, as well as high-crowned maxillary teeth lacking a cingulum. Irmis et al. (2007) tentatively agreed that this fossil material represents a heterodontosaurid, but stated that additional material is needed to confirm this assignment because the specimen is poorly preserved, while Sereno (2012) only stated that this material may represent an ornithischian or even specifically a heterodontosaurid. Olsen, Kent & Whiteside (2010) noted that the age of the Laguna Colorada Formation itself is poorly constrained, and thus it wasn't conclusively determined whether the putative heterodontosaurid from this formation is of Triassic or Jurassic age. 
The most diverse heterodontosaurid fauna comes from the Early Jurassic of southern Africa, where fossils of Heterodontosaurus, Abrictosaurus, Lycorhinus, and the dubious Geranosaurus are found. Undescribed Early Jurassic heterodontosaurids are also known from the United States and Mexico. In addition, beginning in the 1970s, a great deal of fossil material was discovered from the Late Jurassic Morrison Formation near Fruita, Colorado in the United States. Described in print in 2009, this material was placed in the genus Fruitadens. Heterodontosaurid teeth lacking a cingulum have also been described from Late Jurassic and Early Cretaceous formations in Spain and Portugal. The remains of Echinodon were redescribed in 2002, showing that it may represent a late-surviving heterodontosaurid from the Berriasian stage of the Early Cretaceous in southern England. Dianchungosaurus from the Early Jurassic of China is no longer considered a heterodontosaurid, though one Middle–Late Jurassic Asian form, Tianyulong, is known. Indeterminate cheek teeth possibly representing heterodontosaurids are also known from the Barremian-aged Wessex Formation of southern England, which if confirmed would represent the youngest record of the group. Paleobiology Most heterodontosaurid fossils are found in geologic formations that represent arid to semi-arid environments, including the Upper Elliot Formation of South Africa and the Purbeck Beds of southern England. It has been suggested that heterodontosaurids underwent seasonal aestivation or hibernation during the driest times of year. Due to the lack of replacement teeth in most heterodontosaurids, it was proposed that the entire set of teeth was replaced during this dormant period, as it seemed that continual and sporadic replacement of teeth would interrupt the function of the tooth row as a single chewing surface. However, this was based on a misunderstanding of heterodontosaurid jaw mechanics. It was later suggested that heterodontosaurids actually did replace their teeth continually, though more slowly than other reptiles, but CT scanning of skulls from juvenile and mature Heterodontosaurus shows no replacement teeth. There is currently no evidence that supports the hypothesis of aestivation in heterodontosaurids, but it cannot be rejected, based on the skull scans. While the cheek teeth of heterodontosaurids are clearly adapted for grinding tough plant material, their diet may have been omnivorous. The pointed premaxillary teeth and sharp, curved claws on the forelimbs suggest some degree of predatory behavior. It has been suggested that the long, powerful forelimbs of Heterodontosaurus may have been useful for tearing into insect nests, similarly to modern anteaters. These forelimbs may have also functioned as digging tools, perhaps for roots and tubers. The length of the forelimb compared to the hindlimb suggests that Heterodontosaurus might have been partially quadrupedal, and the prominent olecranon process and hyperextendable digits of the forelimb are found in many quadrupeds. However, the manus is clearly designed for grasping, not weight support. Many features of the hindlimb, including the long tibia and foot, as well as the fusion of the tibiofibiotarsus and tarsometatarsus, indicate that heterodontosaurids were adapted to run quickly on the hindlegs, so it is unlikely that Heterodontosaurus moved on all four limbs except perhaps when feeding. 
The short tusks found in all known heterodontosaurids strongly resemble tusks found in modern musk deer, peccaries and pigs. In many of these animals (as well as the longer-tusked walrus and Asian elephants), this is a sexually dimorphic trait, with tusks only found in males. The type specimen of Abrictosaurus lacks tusks and was originally described as a female. While this remains possible, the unfused sacral vertebrae and short face indicate that this specimen represents a juvenile animal. A second, larger specimen originally proposed to belong to Abrictosaurus clearly possesses tusks, which was used to support the idea that tusks are found only in adults, rather than being a secondary sexual characteristic of males. These tusks could have been used for combat or display with members of the same species or with other species. The absence of tusks in juvenile Abrictosaurus could also be another characteristic separating it from other heterodontosaurids as well, as tusks are known in juvenile Heterodontosaurus. Other proposed functions for the tusks include defense and use in an occasionally omnivorous diet. However, this specimen was alternatively reassigned to Lycorhinus by Sereno in 2012, which is already known to have possessed tusks and therefore their absence in Abrictosaurus may not have been a result of age. In 2005 a small complete fossilized heterodontosaurid skeleton more than 200 million years old was discovered in South Africa. In July 2016 it was scanned by a team of South African researchers using the European Synchrotron Radiation Facility; the scan of the dentition revealed palate bones less than a millimeter thick.
Biology and health sciences
Ornithischians
Animals
1702177
https://en.wikipedia.org/wiki/Convergent%20synthesis
Convergent synthesis
In chemistry a convergent synthesis is a strategy that aims to improve the efficiency of multistep synthesis, most often in organic synthesis. In this type of synthesis several individual pieces of a complex molecule are synthesized in stage one, and then in stage two these pieces are combined to form the final product. In linear synthesis the overall yield quickly drops with each reaction step: A → B → C → D. Suppose the yield is 50% for each reaction; the overall yield of D is then only 12.5% from A. In a convergent synthesis, A → B (50%) and C → D (50%) are carried out separately and the fragments are coupled in a final step, B + D → E, so that, with the coupling step also at 50%, the overall yield of E (25% from A) looks much better. Convergent synthesis is applied in the synthesis of complex molecules and involves fragment coupling and independent synthesis. This technique is more useful if the compound is large and symmetric, where at least two aspects of the molecule can be formed separately and still come together. Examples: Convergent synthesis is encountered in dendrimer synthesis, where branches (with the number of generations preset) are connected to the central core. Proteins of up to 300 amino acids are produced by a convergent approach using chemical ligation. An example of its use in total synthesis is the final step (a photochemical [2+2] cycloaddition) towards the compound biyouyanagin A.
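As a quick illustration of the yield arithmetic above, the following Python sketch (illustrative only; it simply multiplies the 50% step yields assumed in the example) compares the overall yield of the linear route A → B → C → D with that of the convergent route in which B and D are coupled in a final step:

```python
from math import prod

def overall_yield(step_yields):
    """Overall fractional yield of a sequence of consecutive steps."""
    return prod(step_yields)

# Linear route A -> B -> C -> D, each step at 50% yield:
linear = overall_yield([0.5, 0.5, 0.5])   # 0.125, i.e. 12.5% of D from A

# Convergent route: A -> B (50%) and C -> D (50%) are made separately,
# so the longest linear sequence starting from A is only two steps:
# A -> B followed by the coupling B + D -> E (assumed 50%).
convergent = overall_yield([0.5, 0.5])    # 0.25, i.e. 25% of E from A

print(f"linear: {linear:.1%}, convergent: {convergent:.1%}")
```

The advantage grows with the number of steps, since it is the longest linear sequence, rather than the total number of steps, that limits the overall yield.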
Physical sciences
Synthetic strategies
Chemistry
1702296
https://en.wikipedia.org/wiki/Divergent%20synthesis
Divergent synthesis
In chemistry a divergent synthesis is a strategy that aims to improve the efficiency of chemical synthesis. It is often an alternative to convergent synthesis or linear synthesis. In one strategy, divergent synthesis aims to generate a library of chemical compounds by first reacting a molecule with a set of reactants. The next generation of compounds is generated by further reactions with each compound in generation 1. This methodology quickly diverges to large numbers of new compounds: A generates A1, A2, A3, A4, and A5 in generation 1; A1 generates A11, A12, and A13 in generation 2; and so on. An entire library of new chemical compounds, for instance saccharides, can be screened for desirable properties. In another strategy, divergent synthesis starts from a molecule as a central core to which successive generations of building blocks are added. A good example is the divergent synthesis of dendrimers, where in each generation a new monomer reacts with the growing surface of the sphere. Diversity oriented synthesis Diversity oriented synthesis or DOS is a strategy for quick access to molecule libraries with an emphasis on skeletal diversity. In one such application a Petasis reaction product (1) is functionalized with propargyl bromide, leading to a starting compound (2) having 5 functional groups. This molecule can be subjected to a range of reagents, yielding unique molecular skeletons in one generation. Drugs developed using DOS include Dosabulin, Gemmacin B, ML238, and Robotnikinin.
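To make the combinatorial growth concrete, here is a minimal Python sketch (purely illustrative; compounds are represented only by label strings such as A, A1, A11, not by real structures) that enumerates a divergent library generation by generation:

```python
from itertools import product

def divergent_library(core, reagents, generations):
    """Enumerate a divergent library: every compound of one generation is
    'reacted' (here, just label concatenation) with every reagent to give
    the next generation."""
    library = {0: [core]}
    for g in range(1, generations + 1):
        library[g] = [parent + r for parent, r in product(library[g - 1], reagents)]
    return library

# Core A reacted with reagents labelled 1-5, as in the example above.
lib = divergent_library("A", ["1", "2", "3", "4", "5"], generations=2)
for g, compounds in lib.items():
    print(f"generation {g}: {len(compounds)} compounds")   # 1, 5, 25
print(lib[1])        # ['A1', 'A2', 'A3', 'A4', 'A5']
print(lib[2][:3])    # ['A11', 'A12', 'A13']
```

The library size grows as (number of reagents) raised to the generation number, which is why even a few generations quickly produce a large screening library.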
Physical sciences
Synthetic strategies
Chemistry
1702398
https://en.wikipedia.org/wiki/Hubbard%20model
Hubbard model
The Hubbard model is an approximate model used to describe the transition between conducting and insulating systems. It is particularly useful in solid-state physics. The model is named for John Hubbard. The Hubbard model states that each electron experiences competing forces: one pushes it to tunnel to neighboring atoms, while the other pushes it away from its neighbors. Its Hamiltonian thus has two terms: a kinetic term allowing for tunneling ("hopping") of particles between lattice sites and a potential term reflecting on-site interaction. The particles can either be fermions, as in Hubbard's original work, or bosons, in which case the model is referred to as the "Bose–Hubbard model". The Hubbard model is a useful approximation for particles in a periodic potential at sufficiently low temperatures, where all the particles may be assumed to be in the lowest Bloch band, and long-range interactions between the particles can be ignored. If interactions between particles at different sites of the lattice are included, the model is often referred to as the "extended Hubbard model". In particular, the Hubbard term, most commonly denoted by U, is applied in first principles based simulations using Density Functional Theory, DFT. The inclusion of the Hubbard term in DFT simulations is important as this improves the prediction of electron localisation and thus it prevents the incorrect prediction of metallic conduction in insulating systems. The Hubbard model introduces short-range interactions between electrons to the tight-binding model, which only includes kinetic energy (a "hopping" term) and interactions with the atoms of the lattice (an "atomic" potential). When the interaction between electrons is strong, the behavior of the Hubbard model can be qualitatively different from a tight-binding model. For example, the Hubbard model correctly predicts the existence of Mott insulators: materials that are insulating due to the strong repulsion between electrons, even though they satisfy the usual criteria for conductors, such as having an odd number of electrons per unit cell. History The model was originally proposed in 1963 to describe electrons in solids. Hubbard, Martin Gutzwiller and Junjiro Kanamori each independently proposed it. Since then, it has been applied to the study of high-temperature superconductivity, quantum magnetism, and charge density waves. Narrow energy band theory The Hubbard model is based on the tight-binding approximation from solid-state physics, which describes particles moving in a periodic potential, typically referred to as a lattice. For real materials, each lattice site might correspond with an ionic core, and the particles would be the valence electrons of these ions. In the tight-binding approximation, the Hamiltonian is written in terms of Wannier states, which are localized states centered on each lattice site. Wannier states on neighboring lattice sites are coupled, allowing particles on one site to "hop" to another. Mathematically, the strength of this coupling is given by a "hopping integral", or "transfer integral", between nearby sites. The system is said to be in the tight-binding limit when the strength of the hopping integrals falls off rapidly with distance. This coupling allows states associated with each lattice site to hybridize, and the eigenstates of such a crystalline system are Bloch's functions, with the energy levels divided into separated energy bands. The width of the bands depends upon the value of the hopping integral. 
The Hubbard model introduces a contact interaction between particles of opposite spin on each site of the lattice. When the Hubbard model is used to describe electron systems, these interactions are expected to be repulsive, stemming from the screened Coulomb interaction. However, attractive interactions have also been frequently considered. The physics of the Hubbard model is determined by competition between the strength of the hopping integral, which characterizes the system's kinetic energy, and the strength of the interaction term. The Hubbard model can therefore explain the transition from metal to insulator in certain interacting systems. For example, it has been used to describe metal oxides as they are heated, where the corresponding increase in nearest-neighbor spacing reduces the hopping integral to the point where the on-site potential is dominant. Similarly, the Hubbard model can explain the transition from conductor to insulator in systems such as rare-earth pyrochlores as the atomic number of the rare-earth metal increases, because the lattice parameter increases (or the angle between atoms can also change) as the rare-earth element atomic number increases, thus changing the relative importance of the hopping integral compared to the on-site repulsion. Example: one dimensional hydrogen atom chain The hydrogen atom has one electron, in the so-called s orbital, which can either be spin up (↑) or spin down (↓). This orbital can be occupied by at most two electrons, one with spin up and one down (see Pauli exclusion principle). Under band theory, for a 1D chain of hydrogen atoms, the 1s orbital forms a continuous band, which would be exactly half-full. The 1D chain of hydrogen atoms is thus predicted to be a conductor under conventional band theory. This 1D string is the only configuration simple enough to be solved directly. But in the case where the spacing between the hydrogen atoms is gradually increased, at some point the chain must become an insulator. Expressed using the Hubbard model, the Hamiltonian is made up of two terms. The first term describes the kinetic energy of the system, parameterized by the hopping integral, t. The second term is the on-site interaction of strength U that represents the electron repulsion. Written out in second quantization notation, the Hubbard Hamiltonian then takes the form H = -t \sum_{\langle i,j \rangle, \sigma} ( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} ) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}, where \langle i,j \rangle denotes nearest-neighbour sites and n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma} is the spin-density operator for spin \sigma on the i-th site. The density operator is n_i = n_{i\uparrow} + n_{i\downarrow} and the occupation of the i-th site for the wavefunction |\psi\rangle is \langle \psi | n_i | \psi \rangle. Typically t is taken to be positive, and U may be either positive or negative, but is assumed to be positive when considering electronic systems. Without the contribution of the second term, the Hamiltonian resolves to the tight binding formula from regular band theory. Including the second term yields a realistic model that also predicts a transition from conductor to insulator as the ratio of interaction to hopping, U/t, is varied. This ratio can be modified by, for example, increasing the inter-atomic spacing, which would decrease the magnitude of t without affecting U. In the limit where U/t becomes infinitely large, the chain simply resolves into a set of isolated magnetic moments. If U is not too large, the overlap integral provides for superexchange interactions between neighboring magnetic moments, which may lead to a variety of interesting magnetic correlations, such as ferromagnetic, antiferromagnetic, etc. depending on the model parameters. The one-dimensional Hubbard model was solved by Lieb and Wu using the Bethe ansatz. 
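As a concrete, hedged illustration of this Hamiltonian (not part of the original article), the following Python sketch builds the two-site Hubbard model at half filling, with one spin-up and one spin-down electron, in the four-state basis |1,1⟩, |1,2⟩, |2,1⟩, |2,2⟩, and checks the numerically obtained ground-state energy against the well-known two-site result (U − √(U² + 16t²))/2; the values of t and U are arbitrary illustrative parameters:

```python
import numpy as np

def two_site_hubbard(t, U):
    """Hamiltonian of the two-site Hubbard model with one up and one down electron.
    Basis states are labelled (site of up electron, site of down electron):
    |1,1>, |1,2>, |2,1>, |2,2>.  The doubly occupied states |1,1> and |2,2>
    cost the on-site energy U; -t connects states that differ by a single hop."""
    return np.array([
        [ U, -t, -t,  0],
        [-t,  0,  0, -t],
        [-t,  0,  0, -t],
        [ 0, -t, -t,  U],
    ], dtype=float)

t, U = 1.0, 4.0                                   # illustrative parameters
energies = np.linalg.eigvalsh(two_site_hubbard(t, U))
analytic = (U - np.sqrt(U**2 + 16 * t**2)) / 2    # known two-site ground-state energy
print(energies[0], analytic)                      # both ≈ -0.828 for t=1, U=4
```

Increasing U at fixed t pushes the ground state toward the singly occupied "one electron per site" configurations, which is the two-site caricature of the Mott insulating limit discussed above.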
Essential progress was achieved in the 1990s: a hidden symmetry was discovered, and the scattering matrix, correlation functions, thermodynamics and quantum entanglement were evaluated. More complex systems Although the Hubbard model is useful in describing systems such as a 1D chain of hydrogen atoms, it is important to note that more complex systems may experience other effects that the Hubbard model does not consider. In general, insulators can be divided into Mott–Hubbard insulators and charge-transfer insulators. A Mott–Hubbard insulator can be described as an electron hopping between transition-metal d states on neighbouring sites, d_i^n d_j^n → d_i^{n-1} d_j^{n+1}. This can be seen as analogous to the Hubbard model for hydrogen chains, where conduction between unit cells can be described by a transfer integral. However, it is possible for the electrons to exhibit another kind of behavior, in which an electron moves from a ligand onto the metal within the same unit cell, d_i^n → d_i^{n+1} L (where L denotes a hole left on the ligand). This is known as charge transfer and results in charge-transfer insulators. Unlike in Mott–Hubbard insulators, electron transfer happens only within a unit cell. Both of these effects may be present and compete in complex ionic systems. Numerical treatment The fact that the Hubbard model has not been solved analytically in arbitrary dimensions has led to intense research into numerical methods for these strongly correlated electron systems. One major goal of this research is to determine the low-temperature phase diagram of this model, particularly in two dimensions. Approximate numerical treatment of the Hubbard model on finite systems is possible via various methods. One such method, the Lanczos algorithm, can produce static and dynamic properties of the system. Ground state calculations using this method require the storage of three vectors of the size of the number of states. The number of states scales exponentially with the size of the system, which limits the number of sites in the lattice to about 20 on 21st century hardware. Projector and finite-temperature auxiliary-field Monte Carlo are two statistical methods that can obtain certain properties of the system. For low temperatures, convergence problems appear that lead to an exponential computational effort with decreasing temperature due to the so-called fermion sign problem. The Hubbard model can be studied within dynamical mean-field theory (DMFT). This scheme maps the Hubbard Hamiltonian onto a single-site impurity model, a mapping that is formally exact only in infinite dimensions and in finite dimensions corresponds to the exact treatment of all purely local correlations only. DMFT allows one to compute the local Green's function of the Hubbard model for a given interaction strength U and a given temperature. Within DMFT, the evolution of the spectral function can be computed and the appearance of the upper and lower Hubbard bands can be observed as correlations increase. Simulator Stacks of heterogeneous 2-dimensional transition metal dichalcogenides (TMD) have been used to simulate geometries in more than one dimension. Tungsten diselenide and tungsten sulfide were stacked. This created a moiré superlattice consisting of hexagonal supercells (repetition units defined by the relationship of the two materials). Each supercell then behaves as though it were a single atom. The distance between supercells is roughly 100x that of the atoms within them. This larger distance drastically reduces electron tunneling across supercells. They can be used to form Wigner crystals. Electrodes can be attached to regulate an electric field. The electric field controls how many electrons fill each supercell. 
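As a sketch of the kind of finite-system calculation described above (assumptions: open boundary conditions, nearest-neighbour hopping only, and the standard numpy/scipy stack; none of this is taken from the original article), the following Python code builds the Hubbard Hamiltonian of a short chain as a sparse matrix and obtains the ground-state energy with a Lanczos-type sparse eigensolver:

```python
import numpy as np
from itertools import combinations
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def hubbard_chain_ground_energy(L, n_up, n_down, t=1.0, U=4.0):
    """Ground-state energy of an open 1D Hubbard chain by sparse diagonalization."""
    # All spin-up / spin-down occupation bitstrings with the required particle number.
    def configs(n):
        return [sum(1 << s for s in sites) for sites in combinations(range(L), n)]
    index = {(u, d): k
             for k, (u, d) in enumerate((u, d) for u in configs(n_up) for d in configs(n_down))}
    H = lil_matrix((len(index), len(index)))
    for (u, d), k in index.items():
        H[k, k] += U * bin(u & d).count("1")              # U per doubly occupied site
        for i in range(L - 1):                            # nearest-neighbour bonds (open chain)
            for conf, is_up in ((u, True), (d, False)):
                if ((conf >> i) & 1) != ((conf >> (i + 1)) & 1):   # one site filled, one empty
                    new = conf ^ (0b11 << i)              # move the electron across the bond
                    kk = index[(new, d)] if is_up else index[(u, new)]
                    H[kk, k] += -t                        # no fermionic sign for adjacent-site hops here
    return eigsh(H.tocsr(), k=1, which="SA", return_eigenvectors=False)[0]

# Half-filled six-site chain (three up and three down electrons), illustrative t and U.
print(hubbard_chain_ground_energy(L=6, n_up=3, n_down=3, t=1.0, U=4.0))
```

Because the dimension of the many-body basis grows roughly exponentially with the number of sites, this brute-force approach is only practical for the small clusters mentioned above; larger systems require the Monte Carlo or DMFT methods described in the text.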
The number of electrons per supercell effectively determines which "atom" the lattice simulates. One electron/cell behaves like hydrogen, two/cell like helium, etc. As of 2022, supercells with up to eight electrons (oxygen) could be simulated. One result of the simulation showed that the difference between metal and insulator is a continuous function of the electric field strength. A "backwards" stacking regime allows the creation of a Chern insulator via the anomalous quantum Hall effect (with the edges of the device acting as a conductor while the interior acted as an insulator.) The device functioned at a temperature of 5 Kelvins, far above the temperature at which the effect had first been observed.
Physical sciences
Basics_2
Physics
1703365
https://en.wikipedia.org/wiki/Straight%20razor
Straight razor
A straight razor is a razor with a blade that can fold into its handle. They are also called open razors and cut-throat razors. The predecessors of the modern straight razors include bronze razors, with cutting edges and fixed handles, produced by craftsmen from Ancient Egypt during the New Kingdom (1569–1081 BC). Solid gold and copper razors were also found in Ancient Egyptian tombs dating back to the 4th millennium BC. The first steel-edged cutthroat razors were manufactured in Sheffield in 1680. By the late 1680s and early 1690s, razors with silver-covered handles along with other Sheffield-made products known as "Sheffield wares" were being exported to ports in the Gulf of Finland, approximately 1200 miles (1931 km) from Sheffield. From there, these goods were probably sent to Finland and even Russia. By 1740, Benjamin Huntsman was making straight razors complete with decorated handles and hollow-ground blades made from cast steel, using a process he invented. Huntsman's process was adopted by the French sometime later, albeit reluctantly at first due to nationalist considerations. In England, razor manufacturers were even more reluctant than the French to adopt Huntsman's steel-making process and only did so after they saw its success in France. After their introduction in 1680, straight razors became the principal method of manual shaving for more than two hundred years, and remained in common use until the mid-20th century. Straight razor production eventually fell behind that of the safety razor, which was introduced in the late 19th century and featured a disposable blade. Electric razors have also reduced the market share of the straight razors, especially since the 1950s. A 1979 comparative study of straight and electric razors, performed by Dutch researchers, found that straight razors shave hair approximately 0.002 in. (0.05 mm) shorter than electrics. Since 2012, production of straight razors has increased severalfold. Straight razor sales are increasing globally and manufacturers have difficulty satisfying demand. Sales began increasing after the product was featured in the 2012 James Bond film Skyfall and have remained high since. Straight razors are also perceived as a better value and a more sustainable and efficient product. DOVO in Germany reports that, after a production low of fewer than 8,000 units per year in 2006, the company now sells 3,000 units per month and has 110,000 orders with a production lead time of three years. The increased sales have also led to an increase in the number of associated trades and artisans such as bladesmiths, leather craftsmen, and potters. Forums and outlets provide products, directions, and advice to straight razor users. Straight razor manufacturers exist in Europe, Asia, and North America. Antique straight razors are also actively traded. Straight razors require considerable skill to hone and strop, and require more care during shaving. Straight razor design and use was once a major portion of the curriculum in barber colleges. History Various forms of razors were used throughout history, which are different in appearance but similar in use to modern straight razors. In prehistoric times clam shells, shark's teeth, and flint were sharpened and used for shaving. Drawings of such blades were found in prehistoric caves. Some tribes still use blades made of flint to this day. Excavations in Egypt have unearthed solid gold and copper razors in tombs dating back to the 4th millennium BC. 
The Roman historian Livy reported that the razor was introduced in ancient Rome in the 6th century BC by legendary king Lucius Tarquinius Priscus. Priscus was ahead of his time because razors did not come to general use until a century later. The first narrow-bladed folding straight razors were listed by a Sheffield, England manufacturer in 1680. By the late 1680s and early 1690s, razors with silver-covered handles along with other Sheffield-made products known as "Sheffield wares" were being exported by John Spencer (1655–1729) of Cannon Hall, a wealthy landowner and industrialist, to ports in the Gulf of Finland, approximately 1200 miles (1931 km) from Sheffield. From there, these goods were probably sent to Finland and even Russia. By 1740, Benjamin Huntsman was making straight razors complete with decorated handles and hollow-ground blades made from cast steel, using a process he invented. Huntsman's process was adopted by the French sometime later, albeit reluctantly at first due to nationalist sentiments. The English manufacturers were even more reluctant than the French to adopt the process and only did so after they saw its success in France. Sheffield steel, a highly polished steel, also known as 'Sheffield silver steel' and famous for its deep gloss finish, is considered a superior quality steel and is still used to this day in France by such manufacturers as Thiers Issard. After their introduction in 1680, straight razors became the principal method of manual shaving for more than two hundred years, and remained in common use until the mid-20th century. Electric razors have also cut into the straight razor's market share, especially since the 1950s. A variant of the European straight-edge was developed by a brother of Ghezo and it was employed as a weapon by the Dahomey Amazons. This variant was significantly larger and carried over the shoulder. When folded, the razor measured about 24–30 inches long and it weighed over 20 pounds. When extended, the blade measured 4–5 feet. Straight razors eventually fell out of fashion. Their first challenger was manufactured by King C. Gillette: a double-edged safety razor with replaceable blades. These new safety razors did not require any serious tutelage to use. The blades were extremely hard to sharpen, were meant to be thrown away after one use, and rusted quickly if not discarded. They also required a smaller initial investment, although they cost more over time. Despite its long-term advantages, the straight razor lost significant market share. As shaving became less intimidating and men began to shave themselves more, the demand for barbers providing straight razor shaves decreased. Design criteria The design of the straight razor is based on the grind of the blade, the width and length of the blade, the handle, which also affects the balance of the razor, the material of the blade, and the finish and degree of polish of the blade material. Straight grinds range from true wedge, through near wedge, quarter hollow, half hollow, full hollow, and extra hollow. As the grind gets more hollowed, the blade becomes more flexible and the edge more delicate, making it shave closer but require more skill in sharpening and use, and reducing its suitability for heavy beards. Blades are usually categorised by grind, size, and blade shape. Sizing of a standard straight razor is usually close to 3 inches of blade length, but this does vary. Blades are described by the depth from spine to edge, measured in eighths of an inch. 
3/8 is a very narrow razor mostly used for detail work, with 5/8 and 6/8 being the most commonly seen sizes. It is very rare to see old razors bigger than 8/8; however, some exist at 10/8 or larger. The other major factor is the point shape. The most common is the round or Dutch point, with the French point being fairly common. The square point is also known as an American point, and is more common on razors from the USA. There are also the Spanish point, spike point, and barber's notch. Inexperienced users should use either the round point, or a razor with a muted toe (the very tip of the edge rounded slightly), as an unmuted square or spike is prone to nicking the skin of an inexperienced shaver. The "handle" on a straight razor is not a handle at all, but a protector to prevent the delicate edge being damaged when not in use, and to prevent accidental cuts from the sharp blade. Construction is in the form of two scales, held by the pivot pin through the tang, and pinned through a wedge at the other end. Scales are thin, as they must be flexible. Typical scales are 2–3 mm thick and usually made from some form of synthetic (nowadays usually acrylic, but on older razors celluloid, Bakelite, xylonite, and others are common), bone, horn, or ivory on older razors. Some cheaper razors had compressed leather scales, and a few are found with thin metal scales or wooden scales. The blade material is usually a high carbon steel or a stainless steel. Traditionally, carbon steel was used, but stainless steels are popular in modern times due to the ease of maintenance. Parts description The parts of a straight razor and their function are described as follows: the narrow end of the blade rotates on a pin called the pivot, between two protective pieces called the scales or handle. The upward curved metal end of the narrow part of the blade beyond the pivot is called the tang and acts as a lever to help raise the blade from the handle. One or two fingers resting on the tang also help stabilize the blade while shaving. The narrow support piece between the tang and the main blade is called the shank, but this reference is often avoided because it can be confusing since the shank is also referred to as tang. The shank sometimes features decorations and the stamp of the brand. The top side and the underside of the shank can sometimes exhibit indentations known as fluting, or jimps, for a more secure grip. The curved lower part of the main blade from the shank to the cutting edge is called the shoulder. The point where the shoulder joins the cutting edge is called the heel. The endpoint of the cutting edge at the front of the blade, opposite to the heel, is called the toe. A thick strip of metal running transversely at the junction where the main blade attaches to the shank is called the stabiliser. The stabiliser can be double, single, or can be absent in some razor models. The first stabiliser is usually very narrow and thicker and runs at the shank-to-blade junction, covering the shank and just spilling over to the shoulder. The second stabiliser can be distinguished since it is considerably wider, thinner, and longer, appearing after the first stabiliser and running lower toward the heel. The arched, non-cutting top of the blade is called the back or the spine, while the cutting part of the blade opposite the back is called the cutting edge. Finally, the other free end of the blade, at the opposite end of the tang on the spine, is called the point and, sometimes, the head or the nose. 
There are usually two, but sometimes three, pins in the handle. The middle pin, if present, is plastic coated and is called the centre plug. Its function is to stabilise the sides of the handle so that they cannot be squeezed in the middle, and it acts as a bridge between them. When folded into the scales, the blade is protected from accidental damage, and the user is protected from accidental injury. During folding, the back of the blade, being thick and normally with a curved cross-section, acts as a natural stopper and prevents further rotation of the blade out of the handle from the other side. The frictional force between the scales and the tang applied about the pivot is called the tension and it determines how freely the blade rotates about the point of rotation. A proper amount of tension should be present, for safety reasons, to ensure that the blade does not spin freely when opening or closing. Construction Straight razors consist of a blade sharpened on one edge and a handle attached to the blade through a pin. The blade can then rotate in and out of the handle. The blade can be made of either stainless steel, which is resistant to rust but can be more difficult to hone, or high-carbon steel, which is much easier to hone and obtains a sharper edge, but will rust more easily than stainless steel if neglected. Cheap stainless steel straight razors from Asia and more expensive stainless steel and carbon steel razors from Europe are available. A razor blade starts as a shape called the blank supplied by the steel manufacturer. Forging The blank of the blade is produced by forging steel ingots or steel available in other forms such as wire, springs, etc. After the blank is formed, the first step is to clean it using a heavy forge. The material used for open razors is steel with a minimum carbon content of 0.6%. This percentage of carbon content ensures optimum hardness, flexibility and resistance to wear. Following the forging stage, a hole is drilled in the tang at the pivot point. This is a crucial step, since after the steel hardening process it would be impossible to drill. This process requires great skill. Hardening and tempering The steel is hardened through a special process where the forged steel blade is heated to a temperature that depends on the specific steel. This heating enables fast and uniform heating of the steel at the optimum temperature for maximum hardness. The tempering stage follows the hardening process, where the blade is heated in a bath of oil at a lower temperature. Tempering gives the steel its flexibility and toughness according to the phase diagrams for steel. There are three types of steel blade according to the level of tempering received: hard-tempered, medium-tempered, and soft-tempered. Hard-tempered edges last longer but sharpening them is difficult. The converse is true for soft-tempered blades. The characteristics of medium-tempered blades are in-between the two extremes. Carbon steel blades can reach a maximum hardness of 61 HRC on the Rockwell scale. Grinding Following the processes of hardening and tempering, the blanks are ground according to one of the two fundamental blade cross-sectional profiles. Finishing Subsequent to grinding, the blade is polished to various degrees of gloss. The finest finish, used in the most expensive razors, is the mirror finish. Mirror finish is the only finish used if gold leafing is to be part of the decoration of the blade. 
Satin finish requires less polishing time and therefore is not as expensive to produce. This finish is mostly used with black acid etching. Satin finish can sometimes be applied, as a compromise, to the back of the blade while the mirror finish and gold leafing are applied to the more visible front of the blade. This way the blade will not be as expensive as a fully mirror finished one. Metal plating, using nickel or silver, is also used, but it is not preferred; the plating eventually erodes through use, revealing the underlying metal, which is often of inferior quality. Nickel-plated blades are very difficult to hone repeatedly and are made mainly for aesthetic reasons rather than functionality. Blade decoration The blade is decorated by engraving or gold leafing depending on the price. Less expensive blades undergo an electrolytic black acid engraving process. For more expensive blades, gold leafing applied by hand is employed, following a traditional process. Sharpening Sharpening is the final stage in the process. First the blade is sharpened on a grinding wheel. Following that, the blade can be honed by holding it against the flat side of rotating round stones, or by drawing the blade across stationary flat stones. The cutting edge is finished using a strop. Sharpening is usually not completed during manufacturing, instead being done after purchase. Handle materials and their properties Handle scales are made of various materials, including mother-of-pearl, Bakelite, celluloid, bone, plastic, wood, horn, acrylic, ivory and tortoise shell. Celluloid can spontaneously combust at elevated temperatures. Buffalo horn possesses form memory and tends to deform and warp over time. Mother of pearl is a brittle material and can exhibit cracks after some use. Resin-impregnated wooden handles are water resistant, do not deform, and their weight complements the blade's to provide good overall balance for the razor. Snakewood, Brosimum guianense, is also suitable for long-term and intensive use. The mechanical properties of bone make it a good handle material. Handles were once made of elephant ivory, but this has been discontinued, though fossil ivory, such as mammoth, is still sometimes used, and antique razors with ivory scales are occasionally found (it is illegal to kill elephants for their ivory, but it is legal to buy an ivory-handled razor made before 1989). Blade geometry and characteristics The geometry of the blade can be categorised according to three factors: the blade width and weight, the shape of the profile of the point of the razor, and the type of grinding method used for the blade (as the grinding method determines the degree of concavity, and therefore the hollowness or flatness, of the sides of the cross section of the blade). Point types Straight razors are, at first, categorised according to their blade profiles, from the head of the spine to the blade toe, based on their point, or nose, type. The following are the main types of blade profiles called points, or nose shapes: Square, spike or sharp point, so-called because the blade profile is straight and terminates at a very sharp point at the toe, perpendicular to the cutting edge of the razor. This type of blade is used for precise shaving in small areas but, at the risk of pinching the skin, it requires some experience in handling. Spike point differs from square point as the angle at the edge of the blade is less than 90 degrees, resulting in a blade profile which appears slanted backwards at the toe. 
The spike end point of the profile at the toe may be ground by the user to make it rounder, but that may indicate a lack of skill in handling the razor. Barber's notch. The barber's notch point features a large rounded tip at the toe of the blade followed by a short concave and rounded arch, while its upper edge at the head of the spine is rounded and smaller in size than the curve at the toe. The upper, rounded, edge of the barber's notch was designed to aid in pulling the blade from the scales. The barber's notch is essentially a round-nose blade profile with a concave arch (notch) on its upper part to aid in lifting the blade from the scales. Round point (or Dutch). As the name implies, the point profile is symmetrically curved from head to toe in a circular arc shape and therefore it lacks any sharp end points. As such it is a more forgiving blade than the other types and, although lacking pinpoint accuracy at the blade toe, it is recommended for relatively inexperienced users. There are also secondary edge types that derive from a combination of round nose characteristics, such as the half round point, incorporating round edges joined by a linear segment. French (or oblique) point. Its point profile is asymmetrically curved from head to toe, and resembles a quarter circle or ellipse, but with a sharper-angled curve near the head of the blade than the other points. The end line of the profile, at the toe of the blade, can vary between spike and curved. Compared to the rest of the points, the French point may help shave "difficult spots" such as under the nose, due to its sharply angular profile at the head which creates more clearance in tight areas. Spanish point. The profile of the Spanish point has a small, rounded tip at the head, followed by a long concave arch, ending in a small rounded edge at the toe. This point should be used with care when shaving or stropping, as it tends to "bite" due to its pronounced edges. Grinding method The second category refers to the type of grinding method used and, since it affects the curvature of the blade cross section, includes the following two main types of blade grinds: Hollow grind, indicating that the sides of the blade cross section are concave. Flat or straight grind, indicating that the sides of the blade cross section are linear. This cross section most closely resembles a wedge and therefore this blade is sometimes called the wedge. The combination of the types found in these two classification categories can, in theory, lead to a wide variety of blade types such as round point hollow ground, square point flat ground, etc., but in practice some points are combined with a specific grind. As an example, a French point blade is usually flat ground. A hollow grind produces a thinner blade than the flat grind because it removes more material from the blade (hollows or thins the blade more). The hollow-ground blade flexes more easily and provides more feedback on the resistance the blade meets while cutting the hair, which is an indicator of blade sharpness. Hollow-ground blades are preferred by most barbers and some high-end razor manufacturers limit their production exclusively to hollow ground razors. Blade width The third and final category refers to blade width. The width of the blade is defined as the distance between the back of the blade and the cutting edge. It is expressed in units of eighths of an inch. The sizes vary across a range of these fractional widths, with unusually wide blades being rare. 
A wider blade can carry more lather, much like a scoop, during multiple successive shaving strokes and thus it allows the user more shaving time and minimises blade rinse cycles. The disadvantage of the wider blade is that it is not as manoeuvrable as a narrower blade. A narrow blade can shave tight facial spots such as under the nose, but it must be rinsed more often. The most popular blade widths are the 5/8 and 6/8 sizes noted above. The width of the blade can also affect its sharpness. The wider the blade, the greater the thermal deformation that can occur due to changing temperatures, a fact that can lead to loss of edge sharpness. Blade weight The weight of the blade is inversely proportional to the pressure that is applied during shaving. The heavier the blade, the lighter the pressure that needs to be applied during shaving. Length, stability, and balance The degree of hollowness and thus the cross sectional area (thickness) of the blade vary depending on the grinding method used. A higher degree of hollowness in the blade implies a thinner cross section and this affects the stability (bending or buckling properties) of the blade; the thinner the blade, the more flexible it is. The length and weight of the blade and handle, and their relation to each other, determine the balance of the straight razor. The cutting area of the razor is proportional to the length of the blade; therefore, a longer blade requires less frequent honing since its cutting edge does not deplete as fast as that of a shorter blade. Transverse stabiliser For hollow-ground blades stability is augmented by a transverse stabiliser in the form of one or two narrow strips of thicker metal running from the back of the blade to the end of the shoulder (at the junction where the blade meets the shank). This piece, if present, is simply called the stabiliser (single or double) and indicates a hollow ground blade, since a flat ground blade is massive and stable enough to not need a stabiliser. A double stabiliser implies a (full) hollow ground blade. The stabiliser protects the blade from torsional bending in the transverse direction (transverse spine). Longitudinal stabiliser In addition to the transverse stabiliser, a longitudinal stabiliser is sometimes created in the form of a ridge parallel to the cutting edge, and the blade is ground in two areas or bevels, each with a different degree of hollowness or curvature: the area between the back of the blade and the ridge, also called the "belly", is typically less hollow, with a larger radius of curvature, while the area between the ridge and the cutting edge is more hollow, i.e. with a smaller radius of curvature. These two beveled areas have different curvatures and in a well-made razor they transition seamlessly at the ridge (belly) and the cutting edge, respectively. Sometimes there are three bevels. The ridge stabilizes the blade against torsional flexing in a direction perpendicular to its longitudinal axis by acting as a lengthwise spine for the blade. The distance between the ridge and the back of the blade is inversely proportional to the hollowness of the blade and is described in fractional terms, in ascending steps, as, for example, quarter hollow, half hollow, or full hollow. Full hollow indicates that the stabilizing ridge is very close to the midsection of the blade and the farthest from the cutting edge compared to the other grades. This is considered the most expensive blade. 
At the highest end of hollow ground, more hollow than even the full hollow grade, is the so-called singing razor, so named because its blade produces a specific resonant tone when plucked, similar to a guitar string; however, such use is not recommended as it can distort the cutting edge. Its manufacturing process is so demanding that a full 25% of the blades get rejected as not meeting standards. Stability and sharpness There is a tradeoff between stability and long-term blade sharpness. A full hollow ground blade can keep a very sharp edge even after a great number of honing cycles because of its high degree of hollowness, but it is more susceptible to torsional bending because it is thinner. A partially hollow blade (quarter or half hollow, for example) cannot sustain the same degree of sharpness for as long, because as the cutting edge erodes it reaches the stabilising ridge sooner, where there is more material, and thus the cutting-edge bevel cannot be maintained without excessive honing of the stabilising ridge to remove the additional material, which could also destabilise the rest of the blade. However, the partially hollow blade is more stable because its additional material makes it stiffer, and thus more resistant to deformation. In addition, a flat ground blade, since by definition it is not hollow (curved) at all, is the most stable of the blades, but because its cross-sectional area is the largest it also feels heavier than a hollow ground blade, and this can affect the feel and balance of the razor. Balance A razor is well balanced if, when opened, it balances about its pivot pin, indicating that the torques about the pivot point, caused by the corresponding weight distributions of the blade and the handle about the pivot pin, counterbalance each other. A well-balanced razor is both safer to handle when open and easier to shave with. Effects of blade geometry on performance The characteristics of each blade type determine the type of recommended uses for each blade as well as their performance and maintenance routines. Each type has its own strengths and weaknesses depending on the requirements of use. Extra hollow blades such as singing blades are the thinnest and therefore they provide the best possible shave of all the types. However they are also very flexible and therefore not suitable for tasks requiring increased pressure on the blade, such as heavy beard growth. Care should also be taken when stropping so that the thin blade will not be overly stressed, since it cannot withstand abuse as well as lower grades. Flat ground razors are very stable and as such they can handle tough shaving jobs, since they do not easily deform under pressure and they can take rough handling such as heavy stropping and honing. Although a wider blade is not as manoeuvrable as a narrower one, especially in tight spots, it is better to purchase a wider blade, since honing eventually reduces the width of the blade with use, a fact that can shorten the life of a straight razor with a narrow blade. On the other hand, the width of the blade is proportional to the blade distortion that can occur due to temperature fluctuations; this can lead to more frequent stropping and honing, because blade deformation due to thermal stress can lead to loss of cutting edge sharpness. Usage Shaving is done with the blade at approximately an angle of thirty degrees to the skin and in a direction perpendicular to the edge; an incision requires the movement of the blade to be sideways or in a direction parallel to the edge. 
The shaver avoids such incisions by always shaving in a direction perpendicular to the cutting edge of the blade. A popular shaving method is the 14-stroke shave, which details the order and direction of each stroke needed to shave the face in 14 strokes. To be most effective, a straight razor must be kept extremely sharp. The edge is delicate, and inexpert use may bend or fold over the razor's edge. To unfold and straighten the microscopic sharp edge, one must strop the blade on leather periodically. A 1979 comparative study of straight and electric razors, performed by Dutch researchers, found that straight razors shave hair approximately 2/1000 in. (0.05 mm) shorter than electrics. To sharpen or finish the blade using a suspended strop, the razor is pushed toward the suspension ring, led by the back of the blade, while both the back and the cutting edge lie flat on the strop. No pressure should be applied on the cutting edge. A strop may be two-sided, with leather on one side and cloth on the other. The cloth is used for blade alignment and sharpening; the leather is for finishing. The stropping process involves sliding the razor blade flat on the strop; upon reaching the end of the cloth or leather near the suspension ring, the blade is turned about its back (clockwise for a right-handed barber; counter-clockwise for a left-handed one) until the cutting edge touches the strop. It is then pulled toward the rectangular handle of the strop with back and cutting edge flat on the strop as before. The blade is moved in a slightly diagonal direction so as to give every point of the edge a chance to touch the strop, without applying too much pressure. This process aligns the cutting edge properly with the back of the blade, avoiding "bumps" on the cutting edge. Rotating the blade on the strop about the cutting edge can damage it, because such use will impact the micro-alignment of the edge. Depending on use and condition, the blade can be sharpened occasionally by using a razor hone. Strops prepared with pastes containing fine grit are also used for honing but are not recommended for the inexperienced user, as they can easily round off the edge if the wrong amount of paste is applied or too much pressure is exerted. Some strops have a linen or canvas back. Shaving soap in a cup is traditionally lathered and applied using a rotating in-and-out motion of a shaving brush, usually made of boar or badger bristles. In the heyday of straight razor shaving, wealthy users maintained a weekly "rotation" of seven razors to reduce wear on any one piece. Straight razors were often sold in special boxes of seven labelled for the days of the week. Modern use Straight razors are still manufactured. DOVO, of Solingen, Germany, and Thiers Issard of France are two of the best-known European manufacturers. Boeker of Solingen is another cutlery manufacturer known for its straight razors. Wusthof and Henckels, two prominent knife manufacturers in Solingen, also produced straight razors. Thiers Issard and Hart Steel are famous for their decorated blades and their Damascus steel. Feather Safety Razor Co. Ltd. of Osaka, Japan, makes a razor with the same form as a traditional straight, but featuring a disposable blade that can be installed through an injector-type system. Artisans also make handcrafted custom straight razors based on their own designs or the designs of their customers, or by finishing old blade blanks. Modern straight-razor users are known to favor them for a variety of reasons. 
Some are attracted to the nostalgia of using old and traditional methods of shaving. Others wish to avoid the waste of disposable blades. Still others argue that straight razors provide a superior shave through a larger blade and greater control of the blade, including the blade angle. Straight razors cover a much greater area per shaving stroke, because their cutting edge is much longer than that of any multi-blade razor. They also do not have to be rinsed as often, because their blade acts like a scoop and carries the lather on it during multiple shaving strokes, whereas multi-blade razors, with their considerably smaller blades, are far less efficient at this. Straight razors are also much easier to clean and can handle tougher shaving tasks, such as longer facial hair, than modern multi-blade razors, which tend to trap shaving debris between their tightly packed blades and are easily clogged, even with relatively short stubble. In addition, multi-edge razors can irritate the skin due to their multi-blade action, and this can lead to a condition known as pseudofolliculitis barbae, colloquially known as razor bumps. One of the recommended actions for those so affected is to switch to single-blade use. Others simply like the good results and the satisfaction of maintaining the blade themselves. Yet others cite aesthetic reasons in addition to the practical ones. A well-made blade, in a nice handle with a well-crafted etching and decorated shank, carries a sense of craftsmanship and ownership difficult to associate with a disposable blade cartridge. Finally, a well-kept razor can last for decades and can become a family heirloom passed from parent to child. For all of these reasons, devotees of the straight razor make for an active market. Owing to health concerns, some areas require barbers who provide straight-razor shaving to use a version that employs a disposable or changeable blade system. In this type of straight razor the blade is changed and disposed of after each service. Various jurisdictions in Australia, Canada, New Zealand, Turkey and the United States, however, permit the professional use of traditional straight razors. The 2012 James Bond film, Skyfall, renewed interest in straight razors through a scene in which the agent shaves with one and his co-star Naomie Harris helps him finish while remarking that "sometimes the old ways are the best". Online straight razor retailers have reported increased sales ranging from 50% to over 400% due to the exposure generated by the film. The increase in sales is part of an overall growth in demand for straight razors since about 2008, a period that has also seen an increase in the number of barbers offering straight razor shaves. The phenomenon seems to be driven by a renewed nostalgia for retro things such as the straight razor, which evokes simpler notions of the past, the "macho" image associated with its use, and the skill required to shave with it, which can be a source of pride. Cost Compared with disposable and cartridge razors, straight razors are more economical despite a higher initial cost: if properly cared for, a straight razor incurs no further expense, whereas cartridge razors require new cartridges to be purchased periodically. 
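To make the break-even point concrete, the comparison can be sketched as a simple calculation. All prices and usage rates below are purely illustrative assumptions (they are not figures from this article or any source), and Python is used only for the sake of the example:

# Rough break-even comparison between a straight razor and a cartridge razor.
# Every number here is an illustrative assumption, not a sourced figure.
def straight_razor_cost(years, razor=150.0, strop=40.0):
    # One-off purchase; upkeep is assumed negligible if the razor is cared for.
    return razor + strop

def cartridge_razor_cost(years, handle=15.0, cartridge_price=2.5, cartridges_per_year=26):
    # Handle plus a steady stream of replacement cartridges.
    return handle + years * cartridges_per_year * cartridge_price

for year in range(1, 6):
    print(year, straight_razor_cost(year), cartridge_razor_cost(year))

Under these assumed numbers the one-off cost of the straight razor is overtaken by the running cost of cartridges within a few years, which is the sense in which it is described as the more economical option.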
Environment Straight razors are more environmentally friendly than other types of razors, since the latter come with packaging that may have to be thrown away along with the razors themselves and, in the case of electric razors, batteries that are typically discarded once they are spent. A straight razor produces no such disposable waste; it requires only a strop, and occasional honing, to maintain. Handling and honing The various straight razor honing and stropping directions and handling techniques are illustrated by the drawings below.
Biology and health sciences
Hygiene products
Health
1704824
https://en.wikipedia.org/wiki/Fraction
Fraction
A fraction (from , "broken") represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction (examples: and ) consists of an integer numerator, displayed above a line (or before a slash like ), and a non-zero integer denominator, displayed below (or after) that line. If these integers are positive, then the numerator represents a number of equal parts, and the denominator indicates how many of those parts make up a unit or a whole. For example, in the fraction , the numerator 3 indicates that the fraction represents 3 equal parts, and the denominator 4 indicates that 4 parts make up a whole. The picture to the right illustrates of a cake. Fractions can be used to represent ratios and division. Thus the fraction can be used to represent the ratio 3:4 (the ratio of the part to the whole), and the division (three divided by four). We can also write negative fractions, which represent the opposite of a positive fraction. For example, if represents a half-dollar profit, then − represents a half-dollar loss. Because of the rules of division of signed numbers (which states in part that negative divided by positive is negative), −, and all represent the same fraction negative one-half. And because a negative divided by a negative produces a positive, represents positive one-half. In mathematics a rational number is a number that can be represented by a fraction of the form , where a and b are integers and b is not zero; the set of all rational numbers is commonly represented by the symbol Q or , which stands for quotient. The term fraction and the notation can also be used for mathematical expressions that do not represent a rational number (for example ), and even do not represent any number (for example the rational fraction ). Vocabulary In a fraction, the number of equal parts being described is the numerator (from , "counter" or "numberer"), and the type or variety of the parts is the denominator (from , "thing that names or designates"). As an example, the fraction amounts to eight parts, each of which is of the type named "fifth". In terms of division, the numerator corresponds to the dividend, and the denominator corresponds to the divisor. Informally, the numerator and denominator may be distinguished by placement alone, but in formal contexts they are usually separated by a fraction bar. The fraction bar may be horizontal (as in ), oblique (as in 2/5), or diagonal (as in ). These marks are respectively known as the horizontal bar; the virgule, slash (US), or stroke (UK); and the fraction bar, solidus, or fraction slash. In typography, fractions stacked vertically are also known as "en" or "nut fractions", and diagonal ones as "em" or "mutton fractions", based on whether a fraction with a single-digit numerator and denominator occupies the proportion of a narrow en square, or a wider em square. In traditional typefounding, a piece of type bearing a complete fraction (e.g. ) was known as a "case fraction", while those representing only part of fraction were called "piece fractions". The denominators of English fractions are generally expressed as ordinal numbers, in the plural if the numerator is not 1. (For example, and are both read as a number of "fifths".) 
Exceptions include the denominator 2, which is always read "half" or "halves", the denominator 4, which may be alternatively expressed as "quarter"/"quarters" or as "fourth"/"fourths", and the denominator 100, which may be alternatively expressed as "hundredth"/"hundredths" or "percent". When the denominator is 1, it may be expressed in terms of "wholes" but is more commonly ignored, with the numerator read out as a whole number. For example, may be described as "three wholes", or simply as "three". When the numerator is 1, it may be omitted (as in "a tenth" or "each quarter"). The entire fraction may be expressed as a single composition, in which case it is hyphenated, or as a number of fractions with a numerator of one, in which case they are not. (For example, "two-fifths" is the fraction and "two fifths" is the same fraction understood as 2 instances of .) Fractions should always be hyphenated when used as adjectives. Alternatively, a fraction may be described by reading it out as the numerator "over" the denominator, with the denominator expressed as a cardinal number. (For example, may also be expressed as "three over one".) The term "over" is used even in the case of solidus fractions, where the numbers are placed left and right of a slash mark. (For example, 1/2 may be read "one-half", "one half", or "one over two".) Fractions with large denominators that are not powers of ten are often rendered in this fashion (e.g., as "one over one hundred seventeen"), while those with denominators divisible by ten are typically read in the normal ordinal fashion (e.g., as "six-millionths", "six millionths", or "six one-millionths"). Forms of fractions Simple, common, or vulgar fractions A simple fraction (also known as a common fraction or vulgar fraction, where vulgar is Latin for "common") is a rational number written as a/b or , where a and b are both integers. As with other fractions, the denominator (b) cannot be zero. Examples include , −, , and . The term was originally used to distinguish this type of fraction from the sexagesimal fraction used in astronomy. Common fractions can be positive or negative, and they can be proper or improper (see below). Compound fractions, complex fractions, mixed numerals, and decimals (see below) are not common fractions; though, unless irrational, they can be evaluated to a common fraction. A unit fraction is a common fraction with a numerator of 1 (e.g., ). Unit fractions can also be expressed using negative exponents, as in 2−1, which represents 1/2, and 2−2, which represents 1/(22) or 1/4. A dyadic fraction is a common fraction in which the denominator is a power of two, e.g. = . In Unicode, precomposed fraction characters are in the Number Forms block. Proper and improper fractions Common fractions can be classified as either proper or improper. When the numerator and the denominator are both positive, the fraction is called proper if the numerator is less than the denominator, and improper otherwise. The concept of an "improper fraction" is a late development, with the terminology deriving from the fact that "fraction" means "a piece", so a proper fraction must be less than 1. This was explained in the 17th century textbook The Ground of Arts. In general, a common fraction is said to be a proper fraction, if the absolute value of the fraction is strictly less than one—that is, if the fraction is greater than −1 and less than 1. 
It is said to be an improper fraction, or sometimes top-heavy fraction, if the absolute value of the fraction is greater than or equal to 1. Examples of proper fractions are 2/3, −3/4, and 4/9, whereas examples of improper fractions are 9/4, −4/3, and 3/3. Reciprocals and the "invisible denominator" The reciprocal of a fraction is another fraction with the numerator and denominator exchanged. The reciprocal of , for instance, is . The product of a non-zero fraction and its reciprocal is 1, hence the reciprocal is the multiplicative inverse of a fraction. The reciprocal of a proper fraction is improper, and the reciprocal of an improper fraction not equal to 1 (that is, numerator and denominator are not equal) is a proper fraction. When the numerator and denominator of a fraction are equal (for example, ), its value is 1, and the fraction therefore is improper. Its reciprocal is identical and hence also equal to 1 and improper. Any integer can be written as a fraction with the number one as denominator. For example, 17 can be written as , where 1 is sometimes referred to as the invisible denominator. Therefore, every fraction or integer, except for zero, has a reciprocal. For example, the reciprocal of 17 is . Ratios A ratio is a relationship between two or more numbers that can be sometimes expressed as a fraction. Typically, a number of items are grouped and compared in a ratio, specifying numerically the relationship between each group. Ratios are expressed as "group 1 to group 2 ... to group n". For example, if a car lot had 12 vehicles, of which 2 are white, 6 are red, and 4 are yellow, then the ratio of red to white to yellow cars is 6 to 2 to 4. The ratio of yellow cars to white cars is 4 to 2 and may be expressed as 4:2 or 2:1. A ratio is often converted to a fraction when it is expressed as a ratio to the whole. In the above example, the ratio of yellow cars to all the cars on the lot is 4:12 or 1:3. We can convert these ratios to a fraction, and say that of the cars or of the cars in the lot are yellow. Therefore, if a person randomly chose one car on the lot, then there is a one in three chance or probability that it would be yellow. Decimal fractions and percentages A decimal fraction is a fraction whose denominator is not given explicitly, but is understood to be an integer power of ten. Decimal fractions are commonly expressed using decimal notation in which the implied denominator is determined by the number of digits to the right of a decimal separator, the appearance of which (e.g., a period, an interpunct (·), a comma) depends on the locale (for examples, see Decimal separator). Thus, for 0.75 the numerator is 75 and the implied denominator is 10 to the second power, namely, 100, because there are two digits to the right of the decimal separator. In decimal numbers greater than 1 (such as 3.75), the fractional part of the number is expressed by the digits to the right of the decimal (with a value of 0.75 in this case). 3.75 can be written either as an improper fraction, 375/100, or as a mixed number, . Decimal fractions can also be expressed using scientific notation with negative exponents, such as , which represents 0.0000006023. The represents a denominator of . Dividing by moves the decimal point 7 places to the left. Decimal fractions with infinitely many digits to the right of the decimal separator represent an infinite series. For example, = 0.333... represents the infinite series 3/10 + 3/100 + 3/1000 + .... 
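The relationship described above between decimal notation and an implied power-of-ten denominator can be illustrated with a short sketch. Python and its standard fractions module are used here purely as an illustration; they are not part of the article:

from fractions import Fraction

# 0.75 has two digits after the separator, so the implied denominator is 10**2.
text = "0.75"
digits_after_point = text.split(".")[1]
numerator = int(text.replace(".", ""))
denominator = 10 ** len(digits_after_point)
print(numerator, "/", denominator)       # 75 / 100
print(Fraction(numerator, denominator))  # 3/4 once reduced to lowest terms

# 1/3 has no finite decimal expansion; any finite decimal is only an approximation.
print(float(Fraction(1, 3)))             # 0.3333333333333333

The same construction gives 375/100 for 3.75, matching the improper-fraction form given above.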
Another kind of fraction is the percentage (from , meaning "per hundred", represented by the symbol %), in which the implied denominator is always 100. Thus, 51% means 51/100. Percentages greater than 100 or less than zero are treated in the same way, e.g. 311% equals 311/100, and −27% equals −27/100. The related concept of permille or parts per thousand (ppt) has an implied denominator of 1000, while the more general parts-per notation, as in 75 parts per million (ppm), means that the proportion is 75/1,000,000. Whether common fractions or decimal fractions are used is often a matter of taste and context. Common fractions are used most often when the denominator is relatively small. By mental calculation, it is easier to multiply 16 by 3/16 than to do the same calculation using the fraction's decimal equivalent (0.1875). And it is more accurate to multiply 15 by 1/3, for example, than it is to multiply 15 by any decimal approximation of one third. Monetary values are commonly expressed as decimal fractions with denominator 100, i.e., with two decimals, for example $3.75. However, as noted above, in pre-decimal British currency, shillings and pence were often given the form (but not the meaning) of a fraction, as, for example, "3/6" (read "three and six") meaning 3 shillings and 6 pence, and having no relationship to the fraction 3/6. Mixed numbers A mixed number (also called a mixed fraction or mixed numeral) is the sum of a non-zero integer and a proper fraction, conventionally written by juxtaposition (or concatenation) of the two parts, without the use of an intermediate plus (+) or minus (−) sign. When the fraction is written horizontally, a space is added between the integer and fraction to separate them. As a basic example, two entire cakes and three quarters of another cake might be written as cakes or cakes, with the numeral representing the whole cakes and the fraction representing the additional partial cake juxtaposed; this is more concise than the more explicit notation cakes. The mixed number is pronounced "two and three quarters", with the integer and fraction portions connected by the word and. Subtraction or negation is applied to the entire mixed numeral, so means Any mixed number can be converted to an improper fraction by applying the rules of adding unlike quantities. For example, Conversely, an improper fraction can be converted to a mixed number using division with remainder, with the proper fraction consisting of the remainder divided by the divisor. For example, since 4 goes into 11 twice, with 3 left over, In primary school, teachers often insist that every fractional result should be expressed as a mixed number. Outside school, mixed numbers are commonly used for describing measurements, for instance hours or 5 3/16 inches, and remain widespread in daily life and in trades, especially in regions that do not use the decimalized metric system. However, scientific measurements typically use the metric system, which is based on decimal fractions, and starting from the secondary school level, mathematics pedagogy treats every fraction uniformly as a rational number, the quotient of integers, leaving behind the concepts of "improper fraction" and "mixed number". College students with years of mathematical training are sometimes confused when re-encountering mixed numbers because they are used to the convention that juxtaposition in algebraic expressions means multiplication. 
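The two conversions described above, mixed number to improper fraction and back, amount to one multiplication-and-addition and one division with remainder. A minimal sketch follows; the language (Python) and the helper names are choices made for the example, and the helpers assume non-negative values, with any minus sign applied to the whole expression afterwards as noted above:

from fractions import Fraction

def mixed_to_improper(whole, numerator, denominator):
    # 2 3/4 -> (2 * 4 + 3) / 4 = 11/4
    return Fraction(whole * denominator + numerator, denominator)

def improper_to_mixed(fraction):
    # 11/4 -> 11 = 2 * 4 + 3, i.e. 2 3/4
    whole, remainder = divmod(fraction.numerator, fraction.denominator)
    return whole, Fraction(remainder, fraction.denominator)

print(mixed_to_improper(2, 3, 4))           # 11/4
print(improper_to_mixed(Fraction(11, 4)))   # (2, Fraction(3, 4))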
Historical notions Egyptian fraction An Egyptian fraction is the sum of distinct positive unit fractions, for example . This definition derives from the fact that the ancient Egyptians expressed all fractions except , and in this manner. Every positive rational number can be expanded as an Egyptian fraction. For example, can be written as Any positive rational number can be written as a sum of unit fractions in infinitely many ways. Two ways to write are and . Complex and compound fractions In a complex fraction, either the numerator, or the denominator, or both, is a fraction or a mixed number, corresponding to division of fractions. For example, and are complex fractions. To interpret nested fractions written "stacked" with a horizontal fraction bars, treat shorter bars as nested inside longer bars. Complex fractions can be simplified using multiplication by the reciprocal, as described below at . For example: A complex fraction should never be written without an obvious marker showing which fraction is nested inside the other, as such expressions are ambiguous. For example, the expression could be plausibly interpreted as either or as The meaning can be made explicit by writing the fractions using distinct separators or by adding explicit parentheses, in this instance or A compound fraction is a fraction of a fraction, or any number of fractions connected with the word of, corresponding to multiplication of fractions. To reduce a compound fraction to a simple fraction, just carry out the multiplication (see ). For example, of is a compound fraction, corresponding to . The terms compound fraction and complex fraction are closely related and sometimes one is used as a synonym for the other. (For example, the compound fraction is equivalent to the complex fraction .) Nevertheless, "complex fraction" and "compound fraction" may both be considered outdated and now used in no well-defined manner, partly even taken synonymously for each other or for mixed numerals. They have lost their meaning as technical terms and the attributes "complex" and "compound" tend to be used in their every day meaning of "consisting of parts". Arithmetic with fractions Like whole numbers, fractions obey the commutative, associative, and distributive laws, and the rule against division by zero. Mixed-number arithmetic can be performed either by converting each mixed number to an improper fraction, or by treating each as a sum of integer and fractional parts. Equivalent fractions Multiplying the numerator and denominator of a fraction by the same (non-zero) number results in a fraction that is equivalent to the original fraction. This is true because for any non-zero number , the fraction equals 1. Therefore, multiplying by is the same as multiplying by one, and any number multiplied by one has the same value as the original number. By way of an example, start with the fraction . When the numerator and denominator are both multiplied by 2, the result is , which has the same value (0.5) as . To picture this visually, imagine cutting a cake into four pieces; two of the pieces together () make up half the cake (). Simplifying (reducing) fractions Dividing the numerator and denominator of a fraction by the same non-zero number yields an equivalent fraction: if the numerator and the denominator of a fraction are both divisible by a number (called a factor) greater than 1, then the fraction can be reduced to an equivalent fraction with a smaller numerator and a smaller denominator. 
For example, if both the numerator and the denominator of the fraction are divisible by , then they can be written as , , and the fraction becomes , which can be reduced by dividing both the numerator and denominator by c to give the reduced fraction . If one takes for the greatest common divisor of the numerator and the denominator, one gets the equivalent fraction whose numerator and denominator have the lowest absolute values. One says that the fraction has been reduced to its lowest terms. If the numerator and the denominator do not share any factor greater than 1, the fraction is already reduced to its lowest terms, and it is said to be irreducible, reduced, or in simplest terms. For example, is not in lowest terms because both 3 and 9 can be exactly divided by 3. In contrast, is in lowest terms—the only positive integer that goes into both 3 and 8 evenly is 1. Using these rules, we can show that , for example. As another example, since the greatest common divisor of 63 and 462 is 21, the fraction can be reduced to lowest terms by dividing the numerator and denominator by 21: The Euclidean algorithm gives a method for finding the greatest common divisor of any two integers. Comparing fractions Comparing fractions with the same positive denominator yields the same result as comparing the numerators: because , and the equal denominators are positive. If the equal denominators are negative, then the opposite result of comparing the numerators holds for the fractions: If two positive fractions have the same numerator, then the fraction with the smaller denominator is the larger number. When a whole is divided into equal pieces, if fewer equal pieces are needed to make up the whole, then each piece must be larger. When two positive fractions have the same numerator, they represent the same number of parts, but in the fraction with the smaller denominator, the parts are larger. One way to compare fractions with different numerators and denominators is to find a common denominator. To compare and , these are converted to and (where the dot signifies multiplication and is an alternative symbol to ×). Then bd is a common denominator and the numerators ad and bc can be compared. It is not necessary to determine the value of the common denominator to compare fractions – one can just compare ad and bc, without evaluating bd, e.g., comparing ? gives . For the more laborious question ? multiply top and bottom of each fraction by the denominator of the other fraction, to get a common denominator, yielding ? . It is not necessary to calculate – only the numerators need to be compared. Since 5×17 (= 85) is greater than 4×18 (= 72), the result of comparing is . Because every negative number, including negative fractions, is less than zero, and every positive number, including positive fractions, is greater than zero, it follows that any negative fraction is less than any positive fraction. This allows, together with the above rules, to compare all possible fractions. Addition The first rule of addition is that only like quantities can be added; for example, various quantities of quarters. Unlike quantities, such as adding thirds to quarters, must first be converted to like quantities as described below: Imagine a pocket containing two quarters, and another pocket containing three quarters; in total, there are five quarters. Since four quarters is equivalent to one (dollar), this can be represented as follows: . Adding unlike quantities To add fractions containing unlike quantities (e.g. 
quarters and thirds), it is necessary to convert all amounts to like quantities. It is easy to work out the chosen type of fraction to convert to; simply multiply together the two denominators (bottom number) of each fraction. In case of an integer number apply the invisible denominator 1. For adding quarters to thirds, both types of fraction are converted to twelfths, thus: Consider adding the following two quantities: First, convert into fifteenths by multiplying both the numerator and denominator by three: . Since equals 1, multiplication by does not change the value of the fraction. Second, convert into fifteenths by multiplying both the numerator and denominator by five: . Now it can be seen that is equivalent to This method can be expressed algebraically: This algebraic method always works, thereby guaranteeing that the sum of simple fractions is always again a simple fraction. However, if the single denominators contain a common factor, a smaller denominator than the product of these can be used. For example, when adding and the single denominators have a common factor 2, and therefore, instead of the denominator 24 (4 × 6), the halved denominator 12 may be used, not only reducing the denominator in the result, but also the factors in the numerator. The smallest possible denominator is given by the least common multiple of the single denominators, which results from dividing the rote multiple by all common factors of the single denominators. This is called the least common denominator. Subtraction The process for subtracting fractions is, in essence, the same as that of adding them: find a common denominator, and change each fraction to an equivalent fraction with the chosen common denominator. The resulting fraction will have that denominator, and its numerator will be the result of subtracting the numerators of the original fractions. For instance, To subtract a mixed number, an extra one can be borrowed from the minuend, for instance Multiplication Multiplying a fraction by another fraction To multiply fractions, multiply the numerators and multiply the denominators. Thus: To explain the process, consider one third of one quarter. Using the example of a cake, if three small slices of equal size make up a quarter, and four quarters make up a whole, twelve of these small, equal slices make up a whole. Therefore, a third of a quarter is a twelfth. Now consider the numerators. The first fraction, two thirds, is twice as large as one third. Since one third of a quarter is one twelfth, two thirds of a quarter is two twelfth. The second fraction, three quarters, is three times as large as one quarter, so two thirds of three quarters is three times as large as two thirds of one quarter. Thus two thirds times three quarters is six twelfths. A short cut for multiplying fractions is called "cancellation". Effectively the answer is reduced to lowest terms during multiplication. For example: A two is a common factor in both the numerator of the left fraction and the denominator of the right and is divided out of both. Three is a common factor of the left denominator and right numerator and is divided out of both. Multiplying a fraction by a whole number Since a whole number can be rewritten as itself divided by 1, normal fraction multiplication rules can still apply. For example, This method works because the fraction 6/1 means six equal parts, each one of which is a whole. Multiplying mixed numbers The product of mixed numbers can be computed by converting each to an improper fraction. 
For example: Alternately, mixed numbers can be treated as sums, and multiplied as binomials. In this example, Division To divide a fraction by a whole number, you may either divide the numerator by the number, if it goes evenly into the numerator, or multiply the denominator by the number. For example, equals and also equals , which reduces to . To divide a number by a fraction, multiply that number by the reciprocal of that fraction. Thus, . Converting between decimals and fractions To change a common fraction to a decimal, do a long division of the decimal representations of the numerator by the denominator (this is idiomatically also phrased as "divide the denominator into the numerator"), and round the answer to the desired accuracy. For example, to change to a decimal, divide by (" into "), to obtain . To change to a decimal, divide by (" into "), and stop when the desired accuracy is obtained, e.g., at decimals with . The fraction can be written exactly with two decimal digits, while the fraction cannot be written exactly as a decimal with a finite number of digits. To change a decimal to a fraction, write in the denominator a followed by as many zeroes as there are digits to the right of the decimal point, and write in the numerator all the digits of the original decimal, just omitting the decimal point. Thus Converting repeating decimals to fractions Decimal numbers, while arguably more useful to work with when performing calculations, sometimes lack the precision that common fractions have. Sometimes an infinite repeating decimal is required to reach the same precision. Thus, it is often useful to convert repeating decimals into fractions. A conventional way to indicate a repeating decimal is to place a bar (known as a vinculum) over the digits that repeat, for example = 0.789789789... For repeating patterns that begin immediately after the decimal point, the result of the conversion is the fraction with the pattern as a numerator, and the same number of nines as a denominator. For example: = 5/9 = 62/99 = 264/999 = 6291/9999 If leading zeros precede the pattern, the nines are suffixed by the same number of trailing zeros: = 5/90 = 392/999000 = 12/9900 If a non-repeating set of decimals precede the pattern (such as ), one may write the number as the sum of the non-repeating and repeating parts, respectively: 0.1523 + Then, convert both parts to fractions, and add them using the methods described above: 1523 / 10000 + 987 / 9990000 = 1522464 / 9990000 Alternatively, algebra can be used, such as below: Let x = the repeating decimal: x = Multiply both sides by the power of 10 just great enough (in this case 104) to move the decimal point just before the repeating part of the decimal number: 10,000x = Multiply both sides by the power of 10 (in this case 103) that is the same as the number of places that repeat: 10,000,000x = Subtract the two equations from each other (if and , then ): 10,000,000x − 10,000x = − Continue the subtraction operation to clear the repeating decimal: 9,990,000x = 1,523,987 − 1,523 9,990,000x = 1,522,464 Divide both sides by 9,990,000 to represent x as a fraction x = Fractions in abstract mathematics In addition to being of great practical importance, fractions are also studied by mathematicians, who check that the rules for fractions given above are consistent and reliable. 
Mathematicians define a fraction as an ordered pair of integers and for which the operations addition, subtraction, multiplication, and division are defined as follows: These definitions agree in every case with the definitions given above; only the notation is different. Alternatively, instead of defining subtraction and division as operations, the "inverse" fractions with respect to addition and multiplication might be defined as: Furthermore, the relation, specified as is an equivalence relation of fractions. Each fraction from one equivalence class may be considered as a representative for the whole class, and each whole class may be considered as one abstract fraction. This equivalence is preserved by the above defined operations, i.e., the results of operating on fractions are independent of the selection of representatives from their equivalence class. Formally, for addition of fractions and imply and similarly for the other operations. In the case of fractions of integers, the fractions with and coprime and are often taken as uniquely determined representatives for their equivalent fractions, which are considered to be the same rational number. This way the fractions of integers make up the field of the rational numbers. More generally, a and b may be elements of any integral domain R, in which case a fraction is an element of the field of fractions of R. For example, polynomials in one indeterminate, with coefficients from some integral domain D, are themselves an integral domain, call it P. So for a and b elements of P, the generated field of fractions is the field of rational fractions (also known as the field of rational functions). Algebraic fractions An algebraic fraction is the indicated quotient of two algebraic expressions. As with fractions of integers, the denominator of an algebraic fraction cannot be zero. Two examples of algebraic fractions are and . Algebraic fractions are subject to the same field properties as arithmetic fractions. If the numerator and the denominator are polynomials, as in , the algebraic fraction is called a rational fraction (or rational expression). An irrational fraction is one that is not rational, as, for example, one that contains the variable under a fractional exponent or root, as in . The terminology used to describe algebraic fractions is similar to that used for ordinary fractions. For example, an algebraic fraction is in lowest terms if the only factors common to the numerator and the denominator are 1 and −1. An algebraic fraction whose numerator or denominator, or both, contain a fraction, such as , is called a complex fraction. The field of rational numbers is the field of fractions of the integers, while the integers themselves are not a field but rather an integral domain. Similarly, the rational fractions with coefficients in a field form the field of fractions of polynomials with coefficient in that field. Considering the rational fractions with real coefficients, radical expressions representing numbers, such as , are also rational fractions, as are a transcendental numbers such as since all of and are real numbers, and thus considered as coefficients. These same numbers, however, are not rational fractions with integer coefficients. The term partial fraction is used when decomposing rational fractions into sums of simpler fractions. For example, the rational fraction can be decomposed as the sum of two fractions: . 
This is useful for the computation of antiderivatives of rational functions (see partial fraction decomposition for more). Radical expressions A fraction may also contain radicals in the numerator or the denominator. If the denominator contains radicals, it can be helpful to rationalize it (compare Simplified form of a radical expression), especially if further operations, such as adding or comparing that fraction to another, are to be carried out. It is also more convenient if division is to be done manually. When the denominator is a monomial square root, it can be rationalized by multiplying both the top and the bottom of the fraction by the denominator: The process of rationalization of binomial denominators involves multiplying the top and the bottom of a fraction by the conjugate of the denominator so that the denominator becomes a rational number. For example: Even if this process results in the numerator being irrational, like in the examples above, the process may still facilitate subsequent manipulations by reducing the number of irrationals one has to work with in the denominator. Typographical variations In computer displays and typography, simple fractions are sometimes printed as a single character, e.g. (one half). See the article on Number Forms for information on doing this in Unicode. Scientific publishing distinguishes four ways to set fractions, together with guidelines on use: Special fractions: fractions that are presented as a single character with a slanted bar, with roughly the same height and width as other characters in the text. Generally used for simple fractions, such as: . Since the numerals are smaller, legibility can be an issue, especially for small-sized fonts. These are not used in modern mathematical notation, but in other contexts. Case fractions: similar to special fractions, these are rendered as a single typographical character, but with a horizontal bar, thus making them upright. An example would be , but rendered with the same height as other characters. Some sources include all rendering of fractions as case fractions if they take only one typographical space, regardless of the direction of the bar. Shilling or solidus fractions: 1/2, so called because this notation was used for pre-decimal British currency (£sd), as in "2/6" for a half crown, meaning two shillings and six pence. While the notation "two shillings and six pence" did not represent a fraction, the forward slash is now used in fractions, especially for fractions inline with prose (rather than displayed), to avoid uneven lines. It is also used for fractions within fractions (complex fractions) or within exponents to increase legibility. Fractions written this way, also known as piece fractions, are written all on one typographical line, but take 3 or more typographical spaces. Built-up fractions: . This notation uses two or more lines of ordinary text and results in a variation in spacing between lines when included within other text. While large and legible, these can be disruptive, particularly for simple fractions or within complex fractions. History The earliest fractions were reciprocals of integers: ancient symbols representing one part of two, one part of three, one part of four, and so on. The Egyptians used Egyptian fractions  BC. About 4000 years ago, Egyptians divided with fractions using slightly different methods. They used least common multiples with unit fractions. Their methods gave the same answer as modern methods. 
The Egyptians also had a different notation for dyadic fractions, used for certain systems of weights and measures. The Greeks used unit fractions and (later) simple continued fractions. Followers of the Greek philosopher Pythagoras ( BC) discovered that the square root of two cannot be expressed as a fraction of integers. (This is commonly though probably erroneously ascribed to Hippasus of Metapontum, who is said to have been executed for revealing this fact.) In Jain mathematicians in India wrote the "Sthananga Sutra", which contains work on the theory of numbers, arithmetical operations, and operations with fractions. A modern expression of fractions known as bhinnarasi seems to have originated in India in the work of Aryabhatta (), Brahmagupta (), and Bhaskara (). Their works form fractions by placing the numerators () over the denominators (), but without a bar between them. In Sanskrit literature, fractions were always expressed as an addition to or subtraction from an integer. The integer was written on one line and the fraction in its two parts on the next line. If the fraction was marked by a small circle or cross , it is subtracted from the integer; if no such sign appears, it is understood to be added. For example, Bhaskara I writes: ६  १  २ १  १  १० ४  ५  ९ which is the equivalent of 6  1  2 1  1  −1 4  5  9 and would be written in modern notation as 6, 1, and 2 −  (i.e., 1). The horizontal fraction bar is first attested in the work of Al-Hassār (), a Muslim mathematician from Fez, Morocco, who specialized in Islamic inheritance jurisprudence. In his discussion he writes: "for example, if you are told to write three-fifths and a third of a fifth, write thus, The same fractional notation—with the fraction given before the integer—appears soon after in the work of Leonardo Fibonacci in the 13th century. In discussing the origins of decimal fractions, Dirk Jan Struik states: The introduction of decimal fractions as a common computational practice can be dated back to the Flemish pamphlet De Thiende, published at Leyden in 1585, together with a French translation, La Disme, by the Flemish mathematician Simon Stevin (1548–1620), then settled in the Northern Netherlands. It is true that decimal fractions were used by the Chinese many centuries before Stevin and that the Persian astronomer Al-Kāshī used both decimal and sexagesimal fractions with great ease in his Key to arithmetic (Samarkand, early fifteenth century). While the Persian mathematician Jamshīd al-Kāshī claimed to have discovered decimal fractions himself in the 15th century, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century. In formal education Primary schools In primary schools, fractions have been demonstrated through Cuisenaire rods, Fraction Bars, fraction strips, fraction circles, paper (for folding or cutting), pattern blocks, pie-shaped pieces, plastic rectangles, grid paper, dot paper, geoboards, counters and computer software. Documents for teachers Several states in the United States have adopted learning trajectories from the Common Core State Standards Initiative's guidelines for mathematics education. Aside from sequencing the learning of fractions and operations with fractions, the document provides the following definition of a fraction: "A number expressible in the form where is a whole number and is a positive whole number. 
(The word fraction in these standards always refers to a non-negative number.)" The document itself also refers to negative fractions.
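As a recap of the arithmetic rules described in the sections above (reduction to lowest terms by the greatest common divisor, comparison by cross-multiplication, addition over a common denominator, and conversion of a purely repeating decimal), the following is a minimal sketch; Python and its standard library are an assumption of the example, not something the article prescribes:

from fractions import Fraction
from math import gcd

# Reduction to lowest terms: divide numerator and denominator by their gcd.
num, den = 63, 462
g = gcd(num, den)                 # 21
print(num // g, den // g)         # 3 22, i.e. 63/462 = 3/22

# Comparison by cross-multiplication: a/b > c/d exactly when a*d > b*c (b, d > 0).
a, b, c, d = 5, 18, 4, 17
print(a * d > b * c)              # 85 > 72, so 5/18 > 4/17 -> True

# Addition over a common denominator, reduced automatically by Fraction.
print(Fraction(1, 4) + Fraction(1, 3))   # 7/12

# A pattern repeating directly after the point becomes pattern over as many nines.
print(Fraction(789, 999))                # 263/333, i.e. 0.789789...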
Mathematics
Counting and numbers
null
81036
https://en.wikipedia.org/wiki/Dishwasher
Dishwasher
A dishwasher is a machine that is used to clean dishware, cookware, and cutlery automatically. Unlike manual dishwashing, which relies on physical scrubbing to remove soiling, the mechanical dishwasher cleans by spraying hot water, typically between , at the dishes, with lower temperatures of water used for delicate items. A mix of water and dishwasher detergent is pumped to one or more rotating sprayers, cleaning the dishes with the cleaning mixture. The mixture is recirculated to save water and energy. Often there is a pre-rinse, which may or may not include detergent, and the water is then drained. This is followed by the main wash with fresh water and detergent. Once the wash is finished, the water is drained; more hot water enters the tub by means of an electromechanical solenoid valve, and the rinse cycle(s) begin. After the rinse process finishes, the water is drained again and the dishes are dried using one of several drying methods. Typically a rinse-aid, a chemical to reduce the surface tension of the water, is used to reduce water spots from hard water or other reasons. In addition to domestic units, industrial dishwashers are available for use in commercial establishments such as hotels and restaurants, where many dishes must be cleaned. Washing is conducted with temperatures of and sanitation is achieved by either the use of a booster heater that will provide an "final rinse" temperature or through the use of a chemical sanitizer. History The first mechanical dishwashing device was registered for a patent in 1850 in the United States by Joel Houghton. This device was made of wood and was cranked by hand while water sprayed onto the dishes. The device was both slow and unreliable. Another patent was granted to L.A. Alexander in 1865 that was similar to the first but featured a hand-cranked rack system. Neither device was practical or widely accepted. Some historians cite as an obstacle to adoption the historical attitude that valued women for the effort put into housework rather than the results—making household chores easier was perceived by some to reduce their value. The most successful of the hand-powered dishwashers was invented in 1886 by Josephine Cochrane together with mechanic George Butters in Cochrane's tool shed in Shelbyville, Illinois when Cochrane (a wealthy socialite) wanted to protect her china while it was being washed. Their invention was unveiled at the 1893 World's Fair in Chicago under the name of Lavadora but was changed to Lavaplatos as another machine invented in 1858 already held that name. Cochrane's inspiration was her frustration at the damage to her good china that occurred when her servants handled it during cleaning. Europe's first domestic dishwasher with an electric motor was invented and manufactured by Miele in 1929. In the United Kingdom, William Howard Livens invented a small, non-electric dishwasher suitable for domestic use in 1924. It was the first dishwasher that incorporated most of the design elements that are featured in the models of today; it included a door for loading, a wire rack to hold the dirty crockery and a rotating sprayer. Drying elements were added to his design in 1940. It was the first machine suitable for domestic use, and it came at a time when permanent plumbing and running water in the home were becoming increasingly common. Despite this, Liven's design did not become a commercial success, and dishwashers were only successfully sold as domestic utilities in the postwar boom of the 1950s, albeit only to the wealthy. 
Initially, dishwashers were sold as standalone or portable devices, but with the development of the wall-to-wall countertop and standardized height cabinets, dishwashers began to be marketed with standardized sizes and shapes, integrated underneath the kitchen countertop as a modular unit with other kitchen appliances. By the 1970s, dishwashers had become commonplace in domestic residences in North America and Western Europe. By 2012, over 75 percent of homes in the United States and Germany had dishwashers. In the late 1990s, manufacturers began offering various new energy conservation features in dishwashers. One feature was use of "soil sensors", which was a computerized tool in the dishwasher which measured food particles coming from dishes. When the dishwasher had cleaned the dishes to the point of not releasing more food particles, the soil sensor would report the dishes as being clean. The sensor operated with another innovation of using variable washing time. If dishes were especially dirty, then the dishwasher would run for a longer time than if the sensor detected them to be clean. In this way, the dishwasher would save energy and water by only being in operation for as long as needed. Design Size and capacity Dishwashers that are installed into standard kitchen cabinets have a standard width and depth of 60 cm (Europe) or (US), and most dishwashers must be installed into a hole a minimum of 86 cm (Europe) or (US) tall. Portable dishwashers exist in 45 and 60 cm (Europe) or (US) widths, with casters and attached countertops. There are also dishwashers available in sizes according to the European gastronorm standard. Dishwashers may come in standard or tall tub designs; standard tub dishwashers have a service kickplate beneath the dishwasher door that allows for simpler maintenance and installation, but tall tub dishwashers have approximately 20% more capacity and better sound dampening from having a continuous front door. The international standard for the capacity of a dishwasher is expressed as standard place settings. Commercial dishwashers are rated as plates per hour. The rating is based on standard-sized plates of the same size. The same can be said for commercial glass washers, as they are based on standard glasses, normally pint glasses. Layout Present-day machines feature a drop-down front panel door, allowing access to the interior, which usually contains two or sometimes three pull-out racks; racks can also be referred to as "baskets". In older U.S. models from the 1950s, the entire tub rolled out when the machine latch was opened, and loading as well as removing washable items was from the top, with the user reaching deep into the compartment for some items. Youngstown Kitchens, which manufactured entire kitchen cabinets and sinks, offered a tub-style dishwasher, which was coupled to a conventional kitchen sink as one unit. Most present-day machines allow for placement of dishes, silverware, tall items and cooking utensils in the lower rack, while glassware, cups and saucers are placed in the upper rack. One notable exception were dishwashers produced by the Maytag Corporation from the late sixties until the early nineties. These machines were designed for loading glassware, cups and saucers in the lower rack, while plates, silverware, and tall items were placed into the upper rack. This unique design allowed for a larger capacity and more flexibility in loading of dishes and pots and pans. 
Today, "dish drawer" models eliminate the inconvenience of the long reach that was necessary with older full-depth models. "Cutlery baskets" are also common. A drawer dishwasher, first introduced by Fisher & Paykel in 1997, is a variant of the dishwasher in which the baskets slide out with the door in the same manner as a drawer filing cabinet, with each drawer in a double-drawer model being able to operate independently of the other. The inside of a dishwasher in the North American market is either stainless steel or plastic. Most of them are stainless steel body and plastic made racks. Stainless steel tubs resist hard water, and preserve heat to dry dishes more quickly. They also come at a premium price. Dishwashers can be bought for as expensive as $1,500+, but countertop dishwashers are also available for under $300. Older models used baked enamel tubs, while some used a vinyl coating bonded to a steel tub, which provided protection of the tub from acidic foods and provided some sound attenuation. European-made dishwashers feature a stainless steel interior as standard, even on low-end models. The same is true for a built-in water softener. Washing elements European dishwashers almost universally use two or three sprayers which are fed from the bottom and back wall of the dishwasher, leaving both racks unimpeded. Such models also tend to use inline water heaters, removing the need for exposed elements in the base of the machine that can melt plastic items near to them. Many North American dishwashers tend to use exposed elements in the base of the dishwasher. Some North American machines, primarily those designed by General Electric, use a wash tube, often called a wash-tower, to direct water from the bottom of the dishwasher to the top dish rack. Some dishwashers, including many models from Whirlpool and KitchenAid, use a tube attached to the top rack that connects to a water source at the back of the dishwasher and directs water to a second wash spray beneath the upper rack, which allows full use of the bottom rack. Late-model Frigidaire dishwashers shoot a jet of water from the top of the washer down into the upper wash sprayer, again allowing full use of the bottom rack (but requiring that a small funnel on the top rack be kept clear). Features Mid-range to higher-end North American dishwashers often come with hard food disposal units, which behave like miniature garbage (waste) disposal units that eliminate large pieces of food waste from the wash water. One manufacturer that is known for omitting hard food disposals is Bosch, a German brand; however, Bosch does so in order to reduce noise. If the larger items of food waste are removed before placing in the dishwasher, pre-rinsing is not necessary even without integrated waste disposal units. Many new dishwashers feature microprocessor-controlled, sensor-assisted wash cycles that adjust the wash duration to the number of dirty dishes (sensed by changes in water temperature) or the amount of dirt in the rinse water (sensed chemically or optically). This can save water and energy if the user runs a partial load. In such dishwashers the electromechanical rotary switch often used to control the washing cycle is replaced by a microprocessor, but most sensors and valves are still required. 
However, pressure switches (some dishwashers use a pressure switch and flow meter) are not required in most microprocessor-controlled dishwashers, as these use the motor and sometimes a rotational position sensor to sense the resistance of the water; when the machine senses that there is no cavitation, it knows it has the optimal amount of water. A bimetal switch or wax motor opens the detergent door during the wash cycle. Some dishwashers include a child-lockout feature to prevent accidental starting or stopping of the wash cycle by children. A child lock can sometimes be included to prevent young children from opening the door during a wash cycle; this prevents accidents with the hot water and strong detergents used during the wash cycle. Process Energy use and water temperatures In the European Union, the energy consumption of a dishwasher for a standard usage is shown on a European Union energy label. In the United States, the energy consumption of a dishwasher is defined using the energy factor. The current energy usage criteria for dishwashers, to achieve Energy Star certification, are ≤ 270 kWh/year for standard dishwashers, and ≤ 203 kWh/year for compact dishwashers. Most consumer dishwashers use a thermostat in the sanitizing process. During the final rinse cycle, the heating element and wash pump are turned on, and the cycle timer (electronic or electromechanical) is stopped until the thermostat is tripped. At this point, the cycle timer resumes and will generally trigger a drain cycle within a few timer increments. Most consumer dishwashers use rather than for reasons of burn risk, energy and water consumption, total cycle time, and possible damage to plastic items placed inside the dishwasher. With new advances in detergents, lower water temperatures () are needed to prevent premature decay of the enzymes used to eat the grease and other build-ups on the dishes. In the US, residential dishwashers can be certified to an NSF International testing protocol which confirms the cleaning and sanitation performance of the unit. Superheated steam dishwashers can kill 99% of bacteria on a plate in just 25 seconds. Drying The heat inside the dishwasher dries the contents after the final hot rinse. North American dishwashers tend to use heat-assisted drying via an exposed element, which is generally less efficient than other methods. European machines and some high-end North American machines use passive methods for drying – a stainless steel interior helps this process, and some models use heat exchange technology between the inner and outer skin of the machine to cool the walls of the interior and speed up drying. Some dishwashers employ desiccants such as zeolite, which are heated at the beginning of the wash so that they dry out and release steam that warms the plates, and are then cooled during the dry cycle so that they absorb moisture again, saving significant energy. Plastic and non-stick items form drops with a smaller surface area and may not dry as well as china and glass, which also store more heat and so better evaporate the little water that remains on them. Some dishwashers incorporate a fan to improve drying. Older dishwashers with a visible heating element (at the bottom of the wash cabinet, below the bottom basket) may use the heating element to improve drying; however, this uses more energy. 
Most importantly however, the final rinse adds a small amount of rinse-aid to the hot water, this is a mild detergent that improves drying significantly by reducing the inherent surface tension of the water so that water mostly drips off, greatly improving how well all items, including plastic items, dry. Most dishwashers feature a drying sensor and as such, a dish-washing cycle is always considered complete when a drying indicator, usually in the form of an illuminated "end" light, or in more modern models on a digital display or audible sound, exhibits to the operator that the washing and drying cycle is now over. US governmental agencies often recommend air-drying dishes by either disabling or stopping the drying cycle to save energy. Differences between dishwashers and hand washing Dishwasher detergent Dishwashers are designed to work using specially formulated dishwasher detergent. Over time, many regions have banned the use of phosphates in detergent and phosphorus-based compounds. They were previously used because they have properties that aid in effective cleaning. The concern was the increase in algal blooms in waterways caused by increasing phosphate levels (see eutrophication). Seventeen US states have partial or full bans on the use of phosphates in dish detergent, and two US states (Maryland and New York) ban phosphates in commercial dishwashing. Detergent companies claimed it is not cost effective to make separate batches of detergent for the states with phosphate bans, and so most have voluntarily removed phosphates from all dishwasher detergents. In addition, rinse aids have contained nonylphenol and nonylphenol ethoxylates. These have been banned in the European Union by EU Directive 76/769/EEC. In some regions, depending on water hardness, a dishwasher might function better with the use of a dishwasher salt. Glassware Glassware washed by dishwashing machines can develop a white haze on the surface over time. This may be caused by any or all of the below processes, of which only the first is reversible: Deposition of minerals Calcium carbonate (limescale) in hard water can deposit and build up on surfaces when water dries. The deposits can be dissolved by vinegar or another acid. Dishwashers often include ion exchange device to remove calcium and magnesium ions and replace them with sodium. The resultant sodium salts are water-soluble and don't tend to build up. Silicate filming, etching, and accelerated crack corrosion This film starts as an iridescence or "oil-film" effect on glassware, and progresses into a "milky" or "cloudy" appearance (which is not a deposit) that cannot be polished off or removed like limescale. It is formed because the detergent is strongly alkaline (basic) and glass dissolves slowly in alkaline aqueous solution. It becomes less soluble in the presence of silicates in the water (added as anti-metal-corrosion agents in the dishwasher detergent). Since the cloudy appearance is due to nonuniform glass dissolution, it is (somewhat paradoxically) less marked if dissolution is higher, i.e. if a silicate-free detergent is used; also, in certain cases, the etching will primarily be seen in areas that have microscopic surface cracks as a result of the items' manufacturing. Limitation of this undesirable reaction is possible by controlling water hardness, detergent load and temperature. The type of glass is an important factor in determining if this effect is a problem. 
Some dishwashers can reduce this etching effect by automatically dispensing the correct amount of detergent throughout the wash cycle based on the level of water hardness programmed. Dissolution of lead Lead in lead crystal can be converted into a soluble form by the high temperatures and strong alkali detergents of dishwashers, which could endanger the health of subsequent users. Other materials Other materials besides glass are also harmed by the strong detergents, strong agitation, and high temperatures of dishwashers, especially on a hot wash cycle when temperatures can reach . Aluminium, brass, and copper items will discolor, and light aluminum containers will mark other items they knock into. Nonstick pan coatings will deteriorate. Glossy, gold-colored, and hand-painted items will be dulled or fade. Fragile items and sharp edges will be dulled or damaged from colliding with other items or thermal stress. Sterling silver and pewter will oxidize and discolour from the heat and from contact with metals lower on the galvanic series such as stainless steel. Pewter has a low melting point and may warp in some dishwashers. Glued items, such as hollow-handle knives or wooden cutting boards, will melt or soften in a dishwasher; high temperatures and moisture damage wood. High temperatures damage many plastics, especially in the bottom rack close to an exposed heating element (many newer dishwashers have a concealed heating element away from the bottom rack entirely). Squeezing plastic items into small spaces may cause the plastic to distort in shape. Cast iron cookware is normally seasoned with oil or grease and heat, which causes the oil or grease to be absorbed into the pores of the cookware, thereby giving a smooth relatively non-stick cooking surface which is stripped off by the combination of alkali based detergent and hot water in a dishwasher. Knives and other cooking tools that are made of carbon steel, semi-stainless steels like D2, or specialized, highly hardened cutlery steels like ZDP189 corrode in the extended moisture bath of dishwashers, compared to briefer baths of hand washing. Cookware is made of austenitic stainless steels, which are more stable. Items contaminated by chemicals such as wax, cigarette ash, poisons, mineral oils, wet paints, oiled tools, furnace filters, etc. can contaminate a dishwasher, since the surfaces inside small water passages cannot be wiped clean as surfaces are in hand-washing, so contaminants remain to affect future loads. Objects contaminated by solvents may explode in a dishwasher. Environmental comparison Dishwashers use less water, and therefore less fuel to heat the water, than hand washing, except for small quantities washed in wash bowls without running water. Hand-washing techniques vary by individual. According to a peer-reviewed study in 2003, hand washing and drying of an amount of dishes equivalent to a fully loaded automatic dishwasher (no cookware or bakeware) could use between of water and between 0.1 and 8 kWh of energy, while the numbers for energy-efficient automatic dishwashers were and 1 to 2 kWh, respectively. The study concluded that fully loaded dishwashers use less energy, water, and detergent than the average European hand-washer. For the automatic dishwasher results, the dishes were not rinsed before being loaded. 
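Using only the per-session energy figures from the 2003 study above (the water figures are not reproduced in this text and are therefore omitted), the comparison can be annualised. The sessions-per-year value is an assumption added for illustration:

    # Annualised energy from the 2003 study's per-session ranges (assumed usage rate).
    SESSIONS_PER_YEAR = 208                          # assumed: about four full loads per week
    hand_wash_kwh_per_session = (0.1, 8.0)           # range reported for hand washing
    dishwasher_kwh_per_session = (1.0, 2.0)          # range for energy-efficient machines

    print([round(x * SESSIONS_PER_YEAR) for x in hand_wash_kwh_per_session])   # [21, 1664] kWh/year
    print([round(x * SESSIONS_PER_YEAR) for x in dishwasher_kwh_per_session])  # [208, 416] kWh/year

The very wide hand-washing range reflects the study's finding that technique varies greatly between individuals, whereas machine consumption is comparatively predictable.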
The study does not address costs associated with the manufacture and disposal of dishwashers, the cost of possible accelerated wear of dishes from the chemical harshness of dishwasher detergent, the comparison for cleaning cookware, or the value of labour saved; hand washers needed between 65 and 106 minutes. Several points of criticism on this study have been raised. For example, kilowatt hours of electricity were compared against energy used for heating hot water without taking into account possible inefficiencies. Also, inefficient handwashings were compared against optimal usage of a fully loaded dishwasher without manual pre-rinsing that can take up to of water. A 2009 study showed that the microwave and the dishwasher were both more effective ways to clean domestic sponges than handwashing. Adoption Commercial use Large heavy-duty dishwashers are available for use in commercial establishments (e.g. hotels, restaurants) where many dishes must be cleaned. Unlike a residential dishwasher, a commercial dishwasher does not utilize a drying cycle (commercial drying is achieved by heated ware meeting open air once the wash/rinse/sanitation cycles have been completed) and thus are significantly faster than their residential counterparts. Washing is conducted with temperatures and sanitation is achieved by either the use of a booster heater that will provide the machine "final rinse" temperature or through the use of a chemical sanitizer. This distinction labels the machines as either "high-temp" or "low-temp". Some commercial dishwashers work similarly to a commercial car wash, with a pulley system that pulls the rack through a small chamber (known widely as a "rack conveyor" systems). Single-rack washers require an operator to push the rack into the washer, close the doors, start the cycle, and then open the doors to pull out the cleaned rack, possibly through a second opening into an unloading area. In the UK, the British Standards Institution set standards for dishwashers. In the US, NSF International (an independent not-for-profit organization) sets the standards for wash and rinse time along with minimum water temperature for chemical or hot-water sanitizing methods. There are many types of commercial dishwashers including under-counter, single tank, conveyor, flight type, and carousel machines. Commercial dishwashers often have significantly different plumbing and operations than a home unit, in that there are often separate sprayers for washing and rinsing/sanitizing. The wash water is heated with an in-tank electric heat element and mixed with a cleaning solution, and is used repeatedly from one load to the next. The wash tank usually has a large strainer basket to collect food debris, and the strainer may not be emptied until the end of the day's kitchen operations. Water used for rinsing and sanitizing is generally delivered directly through building water supply and is not reusable. However, commercial dishwashers excel in water efficiency, with some models using less than 0.4 gallons of water per rack. Used rinse water empties into the wash tank reservoir, which dilutes some of the used wash water and causes a small amount to drain out through an overflow tube. The system may first rinse with pure water only and then sanitize with an additive solution that is left on the dishes as they leave the washer to dry. 
Additional soap is periodically added to the main wash water tank, from either large soap concentrate tanks or dissolved from a large solid soap block, to maintain wash water cleaning effectiveness. Alternative uses Dishwashers can be used to cook foods at low temperatures (e.g. dishwasher salmon). The foods are generally sealed in canning jars or oven bags since even a dishwasher cycle without soap can deposit residual soap and rinse aid from previous cycles on unsealed foods. Dishwashers also have been documented to be used to clean potatoes, other root vegetables, garden tools, sneakers or trainers, silk flowers, some sporting goods, plastic hairbrushes, baseball caps, plastic toys, toothbrushes, flip-flops, contact lens cases, a mesh filter from a range hood, refrigerator shelves and bins, toothbrush holders, pet bowls and pet toys. Cleaning vegetables and plastics is controversial since vegetables can be contaminated by soap and rinse aid from previous cycles and the heat of most standard dishwashers can cause BPA or phthalates to leach out of plastic products. The use of a dishwasher to clean greasy tools and parts is not recommended as the grease can clog the dishwasher.
Technology
Household appliances
null
81244
https://en.wikipedia.org/wiki/Fritillaria
Fritillaria
Fritillaria (fritillaries) is a genus of spring flowering herbaceous bulbous perennial plants in the lily family (Liliaceae). The type species, Fritillaria meleagris, was first described in Europe in 1571, while other species from the Middle East and Asia were also introduced to Europe at that time. The genus has about 130–140 species divided among eight subgenera. The flowers are usually solitary, nodding and bell-shaped with bulbs that have fleshy scales, resembling those of lilies. They are known for their large genome size and genetically are very closely related to lilies. They are native to the temperate regions of the Northern hemisphere, from the Mediterranean and North Africa through Eurasia and southwest Asia to western North America. Many are endangered due to enthusiastic picking. The name Fritillaria is thought to refer to the checkered pattern of F. meleagris, resembling a box in which dice were carried. Fritillaries are commercially important in horticulture as ornamental garden plants and also in traditional Chinese medicine, which is also endangering some species. Fritillaria flowers have been popular subjects for artists to depict and as emblems of regions and organizations. Description General Fritillaria is a genus of perennial herbaceous bulbiferous geophytes, dying back after flowering to an underground storage bulb from which they regrow in the following year. It is characterised by nodding (pendant) flowers, perianths campanulate (bell- or cup-shaped) with erect segments in upper part, a nectarial pit, groove or pouch at the base of the tepal, anthers usually pseudobasifixed, rarely versatile, fruit sometimes winged, embryo minute. Specific Vegetative Bulbs The bulbs are typically tunicate, consisting of a few tightly packed fleshy scales with a translucent tunic that disappears with further growth of the bulb. However, some species (F. imperialis, F. persica) have naked bulbs with many scales and loosely attached bulbils, resembling those of the closely related Lilium, although F. persica has only a single scale. Stems and leaves The stems have few or many cauline leaves (arising from the stem) that are opposite on the stem or verticillate (arranged in whorls), sometimes with a cirrhose apex (ending in a tendril). Reproductive Inflorescence and flowers The inflorescence bears flowers that are often solitary and nodding, but some form umbels or have racemes with many flowers. The flowers are usually actinomorphic (radially symmetric), but weakly zygomorphic (single plane of symmetry) in F. gibbosa and F. ariana. The campanulate perianth has six tepals, in two free whorls of three (trimerous), that can be white, yellow, green, purple or reddish. The erect segments are usually tesselated with squares of alternating light and dark colours. While the tepals are usually the same size in both whorls, in F. pallidiflora, the outer tepals are wider. The tepals have nectarial pits, grooves (F. sewerzowii) or pouches at their base. In F. persica the nectarial pouch is developed into a short spur. The perigonal nectaries are large and well developed, and in most species (with the exception of subgenus Rhinopetalum), are linear to lanceolate or ovate and weakly impressed upon the tepals. Gynoecium The flowers are bisexual, containing both male (androecium) and female (gynoecium) characteristics. The pistil has three carpels (tricarpellary). The ovaries are hypogynous (superior, that is attached above the other floral parts). 
The ovule is anatropous in orientation and has two integuments (bitegmic), the micropyle (opening) being formed from the inner integument, while the nucellus is small. The embryo sac or megagametophyte is tetrasporic, in which all four megaspores survive. The style is trilobate to trifid (in 3 parts) and the surface of the stigma is wet. Androecium Stamens are six, in two trimerous whorls of three, and diplostemonous (outer whorl of stamens opposite outer tepals and the inner whorl opposite inner tepals). Filaments filiform or slightly flattened, but sometimes papillose and rarely hairy (F. karelinii). Anthers are linear to ellipsoid, but rarely subglobose (F. persica) in shape, and their attachment to the filament is usually pseudobasifixed (connective tissue extends in a tube around the filament tip), rarely attached at the centre and free (dorsifixed versatile; F. fusca and some Liliorhiza species). In contrast, pseudobasifixed anthers can not move freely. The pollen grains are spheroidal and reticulate (net like pattern), with individual brochi (lumina within reticulations) of 4–5 μm. Fruit and seeds The capsule is obovoid to globose, loculicidal and six-angled, sometimes with wings. The seeds are flattened with a marginal wing, the seed coat made out of both integuments, but the testa is thin and the endosperm lacks starch. The embryo is small. Phytochemistry Fritillaria, like other members of the family Liliaceae, contain flavonol glycosides and tri- and diferulic-acid sucrose esters, steroidal alkaloids, saponins and terpenoids that have formed the active ingredients in traditional medicine (see Traditional medicine). Certain species have flowers that emit disagreeable odors that have been referred to as phenolic, putrid, sulfurous, sweaty and skunky. The scent of Fritillaria imperialis has been called "rather nasty", while that of F. agrestis, known commonly as stink bells, is reminiscent of canine feces. On the other hand, F. striata has a sweet fragrance. The "foxy" odor of F. imperialis has been identified as 3-methyl-2-butene-1-thiol (dimethylallyl mercaptan), an alkylthiol. Genome Fritillaria represents one of the most extreme cases of genome size expansion in angiosperms. Polyploidy is rare, with nearly all species being diploid and only occasional reports of triploidy. Reported genome size in Fritillaria vary from 1Cx (DNA content of unreplicated haploid chromosome complement) values of 30.15 to 85.38 Gb (Giga base pairs), that is > 190 times that of Arabidopsis thaliana, which has been called the "model plant" and > 860 times that of Genlisea aurea, which represents the smallest land plant genome sequenced to date. Giant genome size is generally defined as >35 pg (34 Gb). The largest genomes in diploid Fritillaria are found in subgenus Japonica, exceeding 85 Gb. At least one species, tetraploid F. assyriaca, has a very large genome. With approximately 127 pg (130 Gb), it was for a long time the largest known genome, exceeding the largest vertebrate animal genome known to date, that of the marbled lungfish (Protopterus aethiopicus), in size. Heterochromatin levels vary by biogeographic region, with very little in Old World and abundant levels in New World species. Most species have a basic chromosome number of x=12, but x=9, 11 and 13 have been reported. 
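Because the genome sizes above are quoted both in picograms (pg) and in gigabase pairs (Gb), a conversion is useful: 1 pg of DNA corresponds to roughly 0.978 Gb. A minimal sketch (published Gb figures vary slightly with the exact conversion and measurement used):

    # Convert DNA mass (picograms) to sequence length (gigabase pairs).
    def pg_to_gb(picograms):
        return picograms * 0.978      # 1 pg is approximately 978 million base pairs

    print(round(pg_to_gb(35), 1))     # ~34.2 Gb, the "giant genome" threshold quoted above
    print(round(pg_to_gb(127), 1))    # ~124 Gb, the same order as the ~130 Gb cited for F. assyriaca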

Taxonomy History Pre-Linnaean Gerard (1597) states that Fritillaria was unknown to the ancients, but certainly it was appearing in the writings of sixteenth century European botanists, including Dodoens (1574, 1583), Lobelius (1576, 1581), and Clusius (1583) in addition to Gerard, and was mentioned by Shakespeare and other authors of the period (see Culture). Species of Fritillaria were known in Persia (Iran) in the sixteenth century, from where they were taken to Turkey. European travelers then brought back specimens together with many other exotic eastern plants to the developing botanical gardens of Europe. By the middle of the sixteenth century there was already a flourishing export trade of various bulbs from Turkey to Europe. In Persia, the first mention in the literature was by Hakim Mo'men Tonekabon in his Tohfe Al-Mo'menin in 1080 AH ( AD), who described the medicinal properties of F. imperialis (laleh sarnegoun). European fritillaries were documented in the wild amongst the Loire meadows in 1570 by Noël Capperon, an Orléans apothecary. He mentioned them to Clusius in correspondence in 1571, and sent him a specimen of F. meleagris. He also corresponded with Dodoens. Capperon suggested the name Fritillaria to Clusius, rather than the vernacular variegated lily (Lilium ou bulbum variegatum). He stated that the flower was known locally as Fritillaria because of a resemblance to the board used in playing checkers. In recognition of this, the botanical authority is sometimes written Fritillaria (Caperon) L. The first account in a botanical text is by Dodoens in his Purgantium (1574) and in more detail in Stirpium (1583). In the Purgantium, Dodoens describes and illustrates F. meleagris as Meleagris flos, without mentioning Capperon. He was also aware, through having been sent a picture, of F. imperialis, and decided to include it as well, without making a connection. His term for F. imperialis was Corona imperialis. Consequently, Lobelius, in his Plantarum (1576), gives Dodoens the credit for describing F. meleagris. He used the word "Fritillaria" for the first time, describing F. meleagris, which he considered to belong to the Lilio-Narcissus plants, including tulips. The term Lilio-Narcissus refers to an appearance of having lily-like flowers, but a narcissus-like bulb. He called it Fritillaria (synonyms Lilio-Narcissus purpurens variegatus or Meleagris flos Dodonaei). Lobelius also included amongst the lilies, but not as Fritillaria, Corona imperialis which he mentions originated in Turkey and added what he referred to as Lilium persicum (Fritillaria persica). In his later vernacular Kruydtboeck (1581) he described two species he considered related, Fritillaria Lilio-Narcissus purpurens variegatus and Lilio-Narcissus variegatus atropurpureus Xanctonicus. He acknowledged that the plant had originally been found near Orleans and then sent to the Netherlands. Fritillaria is ook een soort van lelie narcis die de oorsprong heeft uit het land van Orléans van waar dat ze gebracht is in Nederland. In his own language he referred to it as Fritillaria of heel bruin gespikkelde Lelie-Narcisse. He also included Corona imperialis and Lilium persicum as before. Dodoens had proposed the name Meleagris flos or Guinea-fowl flower, for what we now know as Fritillaria meleagris, after a resemblance to that bird's spotted plumage, then known as Meleagris avis. 
In the seventeenth century, John Parkinson provided an account of twelve species of what he referred to as Fritillaria - the checkered daffodil, in his Paradisus (1635), correctly placing it as closest to the lilies. He provides his version of Capperon's discovery, and suggests that some feel he should be honoured with the name Narcissus Caparonium. Often when these exotic new plants entered the English language literature they lacked common names in the language. While Henry Lyte can only describe F. meleagris as Flos meleagris, Fritillaria or lilionarcissus, it appears that it was Shakespeare who applied the common name of "chequered". Although Clausius had corresponded with Capperon in 1571, he did not publish his account of European flora (other than Spain) till his Rariorum Pannoniam of 1583, where he gives an account of Capperon's discovery, noting the names, Fritillaria, Meleagris and Lilium variegatum. However he did not consider F. imperialis or F. persica to be related, calling both of them Lilium, Lilium persicum and Lilium susianum respectively. Post-Linnaean Although the first formal description is attributed to Joseph Pitton de Tournefort in 1694, by convention, the first valid formal description is by Linnaeus, in his Species Plantarum (1753),. Therefore, the botanical authority is given as Tourn. ex L.. Linnaeus identified five known species of Fritillaria, and grouped them in his Hexandria Monogynia (six stamens+one pistil), his system being based on sexual characteristics. These characteristics defined the core group of the family Liliaceae for a long time. Linnaeus' original species were F. imperialis, F. regia (now Eucomis regia), F. persica, F. pyrenaica and F. meleagris. The family Liliaceae was first described by Michel Adanson in 1763, placing Fritillaria in section Lilia of that family, but also considering Imperialis as a separate genus to Fritillaria, together with five other genera. The formal description of the family is attributed to Antoine Laurent de Jussieu in 1789, who included eight genera, including Imperialis, in his Lilia. Although the circumscription of Liliaceae and its subdivisions have undergone considerable revision over the ensuing centuries, the close relationship between Fritillaria and Lilium the type genus of the family, have ensured that the former has remained part of the core group, which constitutes the modern much-reduced family. For instance, Bentham and Hooker (1883), placed Fritillaria and Lilium in Liliaceae tribe Tulipeae, together with five other genera. Phylogeny Fritillaria is generally considered a monophyletic genus, placed within the tribe Lilieae s.s., where it is a sister group to Lilium and the largest member of that tribe. The evolutionary and phylogenetic relationships between the genera currently included in Liliaceae are shown in the following Cladogram: More recently, some larger phylogenetic studies of Lilieae, Lilium and Fritillaria have suggested that Fritillaria may actually consist of two distinct biogeographical clades (A and B), and that these are in a polytomous relationship with Lilium. This could mean that Fritillaria is actually two distinct genera, suggesting that the exact relationship is not yet fully resolved. Subdivision The large number of species have traditionally been divided into a number of subgroupings. By 1828, Duby in his treatment of the flora of France, recognized two subgroups, which he called section Meleagris and section Petilium. 
By 1874, Baker had divided 55 species into ten subgenera: In the 1880s, both Bentham and Hooker (1883) and Boissier (1884) independently simplified this by reducing nine of these subgenera to five, which they treated as sections rather than subgenera. Bentham and Hooker, who recognized more than 50 species, transferred the tenth of Baker's subgenera, Notholirion to Lilium. Boissier, by contrast, in his detailed account of oriental species, recognized Notholirion as a separate genus, whose status has been maintained since (see cladogram). He also divided Eufritillaria into subsections. In the post-Darwinian era, Komarov (1935) similarly segregated Rhinopetalum and Korolkowia as separate genera, but Turrill and Sealy (1980) more closely followed Boissier, but further divided Eufritillaria and placed all American species in Liliorhiza. However, the best known and cited of these classification schemes based on plant morphology is that of Martyn Rix, produced by the Fritillaria Group of the Alpine Garden Society in 2001. This listed 165 taxa grouped into 6 subgenera, 130 species, 17 subspecies, and 9 varieties. Rix, who described eight subgenera in all, restored both Rhinopetalum and Korolkowia as subgenera. He also used series to further subdivide subgenera, kept Boissier's four sections, renamed Eufritillaria as Fritillaria, and added subgenera Davidii and Japonica. The largest of these is Fritillaria, while Theresia, Korolkowia and Davidii are monotypic (containing a single species). Baker based his classification on the characteristics of the bulb, style, nectary and capsule valves. The large nectaries of Fritillaria have been the focus of much of the morphological classification, while the distinct form of the nectaries in Rhinopetalum were the basis for considering it a separate genus. Molecular phylogenetics The development of molecular phylogenetics and cladistic analysis has allowed a better understanding of the infrageneric relationships of Fritillaria species. Initial studies showed the major infrageneric split to be by biogeographic region into two clades, North America (clade A) and Eurasia (clade B). Clade A corresponded most closely with subgenus Liliorhiza. A subsequent study by Rønsted and colleagues (2005), using an expanded pool of taxa of 37 species including all of Rix's subgenera and sections, confirmed the initial split on the basis of geography and supported the broad division of the genus into Rix's eight subgenera but not the deeper relationships (sections and series). Clade A corresponds with subgenus Liliorhiza centred in California, but a number of species (F. camschatcensis - Japan and Siberia), F. maximowiczii and F. dagana - Russia) are also found in Western Asia. These Asian species form a grade with the true North American species, suggesting an origin in Asia followed by later dispersal. Of clade B, the Eurasian species, the largest subgenus, Fritillaria, appeared to be polyphyletic in that F. pallidiflora appeared to segregate in subclade B1, with subgenera Petillium, Korolkowia and Theresia while all other species formed a clade within B2. The phylogenetic, evolutionary and biogeographical relationships between the subgenera are shown in this cladogram: The number of taxa sampled was subsequently enlarged to 92 species (66% of all species), and all species in each subgenus except Rhinopetalum (80%), Liliorhiza and Fritillaria (60%). 
This expanded study further resolved the evolutionary relationships between the subgenera but also confirmed the polyphyletic nature of subgenus Fritillaria, as shown in the following cladogram. The majority of taxa within this subgenus (Fritillaria 2) form a subclade centred in Europe, the Middle East and North Africa, but with some species ranging into China. The remainder (Fritillaria 1) are centred in China and Central Asia, with some species ranging into North and South Asia. This group is therefore probably a separate subgenus. Subgenera Species The genus Fritillaria includes about 150 subordinate taxa, including species and subspecies. Estimates of the number of species vary from about 100 to 130–140. The Plant List (2013) includes 141 accepted species names, and 156 taxa in total. Biogeography and evolution It is likely that two invasions across the Bering Straits to North America took place within the Lilieae, one in each genus, Lilium and Fritillaria. Within the Eurasian clade, the two subclades differ in bulb type. In subclade B2 (Fritillaria, Rhinopetalum, and Japonica), the bulb type is described as Fritillaria-type, with 2–3 fleshy scales and the tunica derived from the remnants of the previous year's scales. By contrast, the bulbs of subclade B1 (Petilium, Theresia and Korolkowia) differ. Those of Theresia and Korolkowia are large, consisting of a single large fleshy scale, while Petilium species have several large erect imbricate scales. In Liliorhiza the bulbs are naked and have numerous scales similar to Lilium, but with numerous "rice-grain bulbils". The location of the bulbils differs from the more common aerial pattern of arising from within the axil of a leaf or inflorescence, as in Lilium and Allium. Similar bulbils are also found in Davidii. These bulbils arise in the axils of the scale leaves. Bulbils confer an evolutionary advantage in vegetative propagation. Etymology When Noël Capperon, an Orléans apothecary, discovered F. meleagris growing in the Loire meadows in 1570, he wrote to Carolus Clusius, describing it, and saying that it was known locally as fritillaria, supposedly because the checkered pattern on the flower resembled the board on which checkers was played. Clusius believed this to be an error, in that fritillus is actually the Latin name for the box in which the dice used in the game were kept, not the board itself. Some North American species are called "mission bells". Distribution and habitat Fritillaria are distributed through most of the temperate zone of the Northern Hemisphere, from western North America, through Europe, the Mediterranean, Middle East and Central Asia to China and Japan. Centres of diversity include Turkey (39 species) and the Zagros Mountains of Iran (14–15 species). Iran is also the centre of diversity of species such as F. imperialis and F. persica. There are five areas of particularly active evolution and clustering of species - California, Mediterranean Greece and Turkey, Anatolia and the Zagros mountains, central Asia from Uzbekistan to western Xinjiang and the eastern Himalayas in southwestern China. Fritillaria species are found in a wide variety of climatic regions and habitats, but about half of them show a preference for full sun in open habitats. A number of Fritillaria are widely introduced. Cultivated fritillaries (F. meleagris) have been recorded in British gardens since 1578, but in the wild only since 1736, so the species is likely to be introduced there rather than native.
It is greatly diminished there due to loss of habitat, although persistent along the River Thames in Oxfordshire. F. imperialis was introduced into Europe around the 1570s, with Ulisse Aldrovandi sending a drawing to Francesco de' Medici in Florence, famed for his gardens at Villa di Pratolino in 1578. His friend Jacopo Ligozzi (1547–1627) was also including it in his paintings, as well as F. persica. In Britain, F. imperialis was first seen in the London garden of James Nasmyth, surgeon to King James I in April 1605. Ecology The majority of species are spring-flowering. Lily beetles (scarlet lily beetle, Lilioceris lilii and Lilioceris chodjaii) feed on fritillaries, and may become a pest where these plants are grown in gardens or commercially. Fritillaria are entomophilous (insect pollinated). Those species with large nectaries (4–12 x 1–4 mm) and have more fructose than glucose in the nectar are most commonly pollinated by wasps, while those with smaller nectaries (2–10 x 1–2 mm) and a more balanced nectar composition are most commonly pollinated by bumblebees. Conservation A number of species of Fritillaria are endangered, from over-harvesting, habitat fragmentation, over-grazing and international demand for herbals. These include many species in Greece, and Fritillaria gentneri in the pacific Northwest of North America. In Japan, five of the eight endemic species (subgenus Japonica) are listed as endangered. In China, the collection of Fritillaria bulbs to make traditional medicine, particularly F. cirrhosa from southwest China and the eastern Himalayas of Bhutan and Nepal and one of the most intensively harvested of the alpine medicinal plants threatens extinction. In Iran, F. imperialis and F. persica are endangered and F. imperialis is protected. The genus is threatened by irregular grazing, change in pasture usage, pest (primarily Lilioceris chodjaii) migration from pasture destruction, and harvesting by poor people for sale to florists. One species, F. delavayi, has begun to grow brown, greyish flowers to better camouflage amongst the rock of its habitat. Scientists believe it is evolving to combat its biggest predator — humans. Over-picking has greatly decreased the availability of this species in China and even though there is no known difference between the flowers picked in the wild and those grown commercially, hunters continue to believe the wild flowers offer better medicinal benefit. Toxicity Most fritillaries contain poisonous steroidal alkaloids such as imperialin in the bulbs and some may even be deadly if ingested in quantity. Uses The bulbs of a few species, such as F. affinis, F. camschatcensis, and F. pudica, are edible if prepared carefully. They were commonly eaten by indigenous peoples of the Pacific Northwest coast of North America. The wild species flowering in areas such as Iran have become important for ecotourism, when in late May people come to the Valley of Roses, near Chelgerd, to see F. imperialis blooming. The area is also rich in F. reuteri and F. gibbosa. Because of their large genome size, Fritillaria species are an important source for genomic studies of the processes involved in genome size diversity and evolution. They also have important commercial value both in horticulture and traditional medicine. Horticulture Species of Fritillaria are becoming increasingly popular as ornamental garden plants, and many species and cultivars are commercially available. They are usually grown from dormant bulbs planted in Autumn. 
As perennials they repeat flower every year, and some species will increase naturally. While Fritillaria is mainly harvested from the wild fields for commercial use, the growing price of the herbal product results in over-exploitation and puts the species at risk of depletion. The following may be most commonly found in cultivation:- Fritillaria acmopetala - pointed-petal fritillary Fritillaria imperialis - crown imperial Fritillaria meleagris - snake's head fritillary Fritillaria pallidiflora - Siberian fritillary Fritillaria persica - Persian fritillary Fritillaria pyrenaica - Pyrenean fritillary Traditional medicine Species of Fritillaria have been used in traditional medicine in China for over 2,000 years, and are one of the most widely used medicines today. The production of medicines from F. cirrhosa is worth US$400 million per annum. Although some are cultivated for this purpose, most are gathered in the wild. In recent years demand has increased leading to over-harvesting of wild populations. In addition to China, Fritillaria products are used medicinally in the Himalayas, including India, Nepal and Pakistan, as well as Japan, Korea and Southeast Asia. To meet the demand additional countries such as Turkey and Burma are involved in the collection. The products are used mainly as antitussives, expectorants, and antihypertensives. The active ingredients are thought to be isosteroidal and steroidal alkaloid compounds. Chinese sources suggest 16 species as source material, but this may be an overestimate due to the large number of synonyms in Chinese. Of these, 15 are in subgenus Fritillaria (both subclades), but one (F. anhuiensis) is in subgenus Liliorhiza. F. imperialis also has a long history of medicinal usage in China and Iran. Fritillaria extracts (fritillaria in English, bulbus fritillariae cirrhosae in Latin) are used in traditional Chinese medicine under the name (literally "Shell mother from Sichuan", or just ). Species such as F. cirrhosa, F. thunbergii and F. verticillata are used in cough remedies. They are listed as chuān bèi () or zhè bèi (Chinese: 浙貝/浙贝), respectively, and are often in formulations combined with extracts of loquat (Eriobotrya japonica). Fritillaria verticillata bulbs are also traded as bèi mǔ or, in Kampō, baimo (Chinese/Kanji: 貝母, Katakana: バイモ). In one study fritillaria reduced airway inflammation by suppressing cytokines, histamines, and other compounds of inflammatory response. Popular culture Shakespeare, Matthew Arnold and George Herbert and more recently Vita Sackville-West (The Land 1927) wrote romantically about fritillaries. Fritillaries were also a favourite of the Dutch flower painters that emerged around 1600, such as Ambrosius Bosschaert and Jacob de Gheyn II, and appeared in Italian art, such as that of Jacopo Ligozzi in the late sixteenth century. Fritillaries are commonly used as floral emblems. F. meleagris (snake's head fritillary) is the county flower of Oxfordshire, UK, and the provincial flower of Uppland, Sweden, where it is known as kungsängslilja ("Kungsängen lily"). In Germany, F. meleagris appears as a heraldic device in a number of municipalities, such as Hetlingen, Seestermühe and Winseldorf, and also in Austria (Großsteinbach). In Croatia this species is known as kockavica (from , ), and the checkerboard pattern of its flowers may have inspired the checkerboard pattern on the nation's coat of arms. F. camschatcensis (Kamchatka fritillary) is the floral emblem of Ishikawa Prefecture and Obihiro City in Japan. 
Its Japanese name is kuroyuri (クロユリ), meaning "dark lily". Fritillaria montana is the floral emblem of Giardino Botanico Alpino di Pietra Corva, a botanical garden in Italy.
Biology and health sciences
Liliales
Plants
81256
https://en.wikipedia.org/wiki/Volcanism
Volcanism
Volcanism, vulcanism, volcanicity, or volcanic activity is the phenomenon where solids, liquids, gases, and their mixtures erupt to the surface of a solid-surface astronomical body such as a planet or a moon. It is caused by the presence of a heat source, usually internally generated, inside the body; the heat is generated by various processes, such as radioactive decay or tidal heating. This heat partially melts solid material in the body or turns material into gas. The mobilized material rises through the body's interior and may break through the solid surface. Cause of volcanism For volcanism to occur, the temperature of the mantle must have risen to about half its melting point. At this point, the mantle's viscosity will have dropped to about 10^21 pascal-seconds. When large-scale melting occurs, the viscosity rapidly falls to 10^3 pascal-seconds or even less, increasing the heat transport rate a million-fold. The occurrence of volcanism is partially due to the fact that melted material tends to be more mobile and less dense than the material from which it was produced, which can cause it to rise to the surface. Heat source There are multiple ways to generate the heat needed for volcanism. Volcanism on outer solar system moons is powered mainly by tidal heating. Tidal heating is generated by the deformation of a body's shape due to mutual gravitational attraction. Earth experiences tidal heating from the Moon, deforming by up to 1 metre (3 feet), but this does not make up a major portion of Earth's total heat. During a planet's formation, it would have experienced heating from impacts from planetesimals, which would have dwarfed even the asteroid impact that caused the extinction of the dinosaurs. This heating could trigger differentiation, further heating the planet. The larger a body is, the slower it loses heat. In larger bodies, for example Earth, this heat, known as primordial heat, still makes up much of the body's internal heat, but the Moon, which is smaller than Earth, has lost most of this heat. Another heat source is radiogenic heat, caused by radioactive decay. The decay of aluminium-26 would have significantly heated planetary embryos, but due to its short half-life (less than a million years), any traces of it have long since vanished. There are small traces of unstable isotopes in common minerals, and all the terrestrial planets, and the Moon, experience some of this heating. The icy bodies of the outer solar system experience much less of this heat because they tend not to be very dense and contain little silicate material (radioactive elements concentrate in silicates). On Neptune's moon Triton, and possibly on Mars, cryogeyser activity takes place; in these cases the heat source is external (heat from the Sun) rather than internal. Melting methods Decompression melting Decompression melting happens when solid material from deep beneath the body rises upwards. Pressure decreases as the material rises, and so does the melting point. So, a rock that is solid at a given pressure and temperature can become liquid if the pressure, and thus the melting point, decreases even if the temperature stays constant. However, in the case of water, increasing pressure decreases the melting point until a pressure of 0.208 GPa is reached, after which the melting point increases with pressure. Flux melting Flux melting occurs when the melting point is lowered by the addition of volatiles, for example, water or carbon dioxide.
Like decompression melting, it is not caused by an increase in temperature, but rather by a decrease in melting point. Formation of cryomagma reservoirs Cryovolcanism, instead of originating in a uniform subsurface ocean, may instead take place from discrete liquid reservoirs. The first way these can form is a plume of warm ice welling up and then sinking back down, forming a convection current. A model developed to investigate the effects of this on Europa found that energy from tidal heating became focused in these plumes, allowing melting to occur in these shallow depths as the plume spreads laterally (horizontally). The next is a switch from vertical to horizontal propagation of a fluid filled crack. Another mechanism is heating of ice from release of stress through lateral motion of fractures in the ice shell penetrating it from the surface, and even heating from large impacts can create such reservoirs. Ascent of melts Diapirs When material of a planetary body begins to melt, the melting first occurs in small pockets in certain high energy locations, for example grain boundary intersections and where different crystals react to form eutectic liquid, that initially remain isolated from one another, trapped inside rock. If the contact angle of the melted material allows the melt to wet crystal faces and run along grain boundaries, the melted material will accumulate into larger quantities. On the other hand, if the angle is greater than about 60 degrees, much more melt must form before it can separate from its parental rock. Studies of rocks on Earth suggest that melt in hot rocks quickly collects into pockets and veins that are much larger than the grain size, in contrast to the model of rigid melt percolation. Melt, instead of uniformly flowing out of source rock, flows out through rivulets which join to create larger veins. Under the influence of buoyancy, the melt rises. Diapirs may also form in non-silicate bodies, playing a similar role in moving warm material towards the surface. Dikes A dike is a vertical fluid-filled crack, from a mechanical standpoint it is a water filled crevasse turned upside down. As magma rises into the vertical crack, the low density of the magma compared to the wall rock means that the pressure falls less rapidly than in the surrounding denser rock. If the average pressure of the magma and the surrounding rock are equal, the pressure in the dike exceeds that of the enclosing rock at the top of the dike, and the pressure of the rock is greater than that of the dike at its bottom. So the magma thus pushes the crack upwards at its top, but the crack is squeezed closed at its bottom due to an elastic reaction (similar to the bulge next to a person sitting down on a springy sofa). Eventually, the tail gets so narrow it nearly pinches off, and no more new magma will rise into the crack. The crack continues to ascend as an independent pod of magma. Standpipe model This model of volcanic eruption posits that magma rises through a rigid open channel, in the lithosphere and settles at the level of hydrostatic equilibrium. Despite how it explains observations well (which newer models cannot), such as an apparent concordance of the elevation of volcanoes near each other, it cannot be correct and is now discredited, because the lithosphere thickness derived from it is too large for the assumption of a rigid open channel to hold. 
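The pressure argument for dikes can be made concrete with a short calculation. The densities and dike height below are assumed, illustrative values, not figures from the text:

    # Excess pressure at the top of a buoyant, magma-filled crack (dike).
    g = 9.81                  # m/s^2
    rho_rock = 2900.0         # kg/m^3, assumed density of the wall rock
    rho_magma = 2650.0        # kg/m^3, assumed density of basaltic melt
    dike_height = 2000.0      # m, assumed vertical extent of the crack

    overpressure = (rho_rock - rho_magma) * g * dike_height
    print(overpressure / 1e6, "MPa of excess pressure at the dike tip")   # ~4.9 MPa

It is this excess pressure at the tip, and the corresponding deficit at the base, that wedges the crack open at the top while pinching it shut below, as described above.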
Cryovolcanic melt ascent Unlike silicate volcanism, where melt can rise by its own buoyancy until it reaches the shallow crust, in cryovolcanism, the water (cryomagmas tend to be water based) is denser than the ice above it. One way to allow cryomagma to reach the surface is to make the water buoyant, by making the water less dense, either through the presence of other compounds that reverse negative buoyancy, or with the addition of exsolved gas bubbles in the cryomagma that were previously dissolved into it (that makes the cryomagma less dense), or with the presence of a densifying agent in the ice shell. Another is to pressurise the fluid to overcome negative buoyancy and make it reach the surface. When the ice shell above a subsurface ocean thickens, it can pressurise the entire ocean (in cryovolcanism, frozen water or brine is less dense than in liquid form). When a reservoir of liquid partially freezes, the remaining liquid is pressurised in the same way. For a crack in the ice shell to propagate upwards, the fluid in it must have positive buoyancy or external stresses must be strong enough to break through the ice. External stresses could include those from tides or from overpressure due to freezing as explained above. There is yet another possible mechanism for ascent of cryovolcanic melts. If a fracture with water in it reaches an ocean or subsurface fluid reservoir, the water would rise to its level of hydrostatic equilibrium, at about nine-tenths of the way to the surface. Tides which induce compression and tension in the ice shell may pump the water farther up. A 1988 article proposed a possibility for fractures propagating upwards from the subsurface ocean of Jupiter's moon Europa. It proposed that a fracture propagating upwards would possess a low pressure zone at its tip, allowing volatiles dissolved within the water to exsolve into gas. The elastic nature of the ice shell would likely prevent the fracture reaching the surface, and the crack would instead pinch off, enclosing the gas and liquid. The gas would increase buoyancy and could allow the crack to reach the surface. Even impacts can create conditions that allow for enhanced ascent of magma. An impact may remove the top few kilometres of crust, and pressure differences caused by the difference in height between the basin and the height of the surrounding terrain could allow eruption of magma which otherwise would have stayed beneath the surface. A 2011 article showed that there would be zones of enhanced magma ascent at the margins of an impact basin. Not all of these mechanisms, and maybe even none, operate on a given body. Types of volcanism Silicate volcanism Silicate volcanism occurs where silicate materials are erupted. Silicate lava flows, like those found on Earth, solidify at about 1000 degrees Celsius. Mud volcanoes A mud volcano is formed when fluids and gases under pressure erupt to the surface, bringing mud with them. This pressure can be caused by the weight of overlying sediments over the fluid which pushes down on the fluid, preventing it from escaping, by fluid being trapped in the sediment, migrating from deeper sediment into other sediment or being made from chemical reactions in the sediment. They often erupt quietly, but sometimes they erupt flammable gases like methane. Cryovolcanism Cryovolcanism is the eruption of volatiles into an environment below their freezing point. 
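The "nine-tenths of the way to the surface" figure mentioned above follows from simple hydrostatic balance, and it also illustrates why water-based cryomagma cannot reach the surface unaided. A sketch assuming pure ice over pure liquid water:

    # How far up an ice shell can ocean water rise in an open fracture?
    rho_ice = 917.0        # kg/m^3, pure water ice (real shells may be porous or salty)
    rho_water = 1000.0     # kg/m^3, pure liquid water (brines are denser still)

    fraction_of_shell = rho_ice / rho_water
    print(fraction_of_shell)   # ~0.92: the column stops roughly nine-tenths of the way up

Covering the remaining distance requires something extra — exsolving gas, tidal pumping, or overpressure from a freezing reservoir — which is the subject of the mechanisms discussed above.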
The processes behind it are different to silicate volcanism because the cryomagma (which is usually water-based) is normally denser than its surroundings, meaning it cannot rise by its own buoyancy. Sulfur Sulfur lavas have a different behaviour to silicate ones. First, sulfur has a low melting point of about 120 degrees Celsius. Also, after cooling down to about 175 degrees Celsius the lava rapidly loses viscosity, unlike silicate lavas like those found on Earth. Lava types When magma erupts onto a planet's surface, it is termed lava. Viscous lavas form short, stubby glass-rich flows. These usually have a wavy solidified surface texture. More fluid lavas have solidified surface textures that volcanologists classify into four types. Pillow lava forms when a trigger, often lava making contact with water, causes a lava flow to cool rapidly. This splinters the surface of the lava, and the magma then collects into sacks that often pile up in front of the flow, forming a structure called a pillow. A’a lava has a rough, spiny surface made of clasts of lava called clinkers. Block lava is another type of lava, with less jagged fragments than in a’a lava. Pahoehoe lava is by far the most common lava type, both on Earth and probably the other terrestrial planets. It has a smooth surface, with mounds, hollows and folds. Gentle/explosive activity A volcanic eruption could just be a simple outpouring of material onto the surface of a planet, but they usually involve a complex mixture of solids, liquids and gases which behave in equally complex ways. Some types of explosive eruptions can release energy a quarter that of an equivalent mass of TNT. Causes of explosive activity Exsolution of volatiles Volcanic eruptions on Earth have been consistently observed to progress from erupting gas rich material to gas depleted material, although an eruption may alternate between erupting gas rich to gas depleted material and vice versa multiple times. This can be explained by the enrichment of magma at the top of a dike by gas which is released when the dike breaches the surface, followed by magma from lower down than did not get enriched with gas. The reason the dissolved gas in the magma separates from it when the magma nears the surface is due to the effects of temperature and pressure on gas solubility. Pressure increases gas solubility, and if a liquid with dissolved gas in it depressurises, the gas will tend to exsolve (or separate) from the liquid. An example of this is what happens when a bottle of carbonated drink is quickly opened: when the seal is opened, pressure decreases and bubbles of carbon dioxide gas appear throughout the liquid. Fluid magmas erupt quietly. Any gas that has exsolved from the magma easily escapes even before it reaches the surface. However, in viscous magmas, gases remain trapped in the magma even after they have exsolved, forming bubbles inside the magma. These bubbles enlarge as the magma nears the surface due to the dropping pressure, and the magma grows substantially. This fact gives volcanoes erupting such material a tendency to ‘explode’, although instead of the pressure increase associated with an explosion, pressure always decreases in a volcanic eruption. Generally, explosive cryovolcanism is driven by exsolution of volatiles that were previously dissolved into the cryomagma, similar to what happens in explosive silicate volcanism as seen on Earth, which is what is mainly covered below. 
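The bottled-drink analogy above can be made quantitative with Henry's law, which states that the amount of gas dissolved in a liquid is proportional to the pressure of that gas above it. The constants and pressures below are approximate values for carbon dioxide in water at room temperature; gas solubility in silicate melts follows more complicated pressure dependences, but the qualitative behaviour — less pressure, less dissolved gas — is the same:

    # Dissolved CO2 before and after opening a carbonated drink (Henry's law).
    henry_constant = 0.034    # mol / (L * atm), approximate for CO2 in water at ~25 degrees C
    p_sealed = 2.5            # atm, assumed pressure in the sealed bottle
    p_open = 1.0              # atm, atmospheric pressure after opening

    c_sealed = henry_constant * p_sealed     # ~0.085 mol/L dissolved
    c_open = henry_constant * p_open         # ~0.034 mol/L dissolved
    print(round(c_sealed - c_open, 3), "mol/L of CO2 must leave solution as bubbles")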
Physics of a volatile-driven explosive eruption Silica-rich magmas cool beneath the surface before they erupt. As they do this, bubbles exsolve from the magma. As the magma nears the surface, the bubbles and thus the magma increase in volume. The resulting pressure eventually breaks through the surface, and the release of pressure causes more gas to exsolve, doing so explosively. The gas may expand at hundreds of metres per second, expanding upward and outward. As the eruption progresses, a chain reaction causes the magma to be ejected at higher and higher speeds. Volcanic ash formation The violently expanding gas disperses and breaks up magma, forming a colloid of gas and magma called volcanic ash. The cooling of the gas in the ash as it expands chills the magma fragments, often forming tiny glass shards recognisable as portions of the walls of former liquid bubbles. In more fluid magmas the bubble walls may have time to reform into spherical liquid droplets. The final state of the colloids depends strongly on the ratio of liquid to gas. Gas-poor magmas end up cooling into rocks with small cavities, becoming vesicular lava. Gas-rich magmas cool to form rocks with cavities that nearly touch, with an average density less than that of water, forming pumice. Meanwhile, other material can be accelerated with the gas, becoming volcanic bombs. These can travel with so much energy that large ones can create craters when they hit the ground. Pyroclastic flows A colloid of volcanic gas and magma can form as a density current called a pyroclastic flow. This occurs when erupted material falls back to the surface. The colloid is somewhat fluidised by the gas, allowing it to spread. Pyroclastic flows can often climb over obstacles, and devastate human life. Pyroclastic flows are a common feature at explosive volcanoes on Earth. Pyroclastic flows have been found on Venus, for example at the Dione Regio volcanoes. Phreatic eruption A phreatic eruption can occur when hot water under pressure is depressurised. Depressurisation reduces the boiling point of the water, so when depressurised the water suddenly boils. Or it may happen when groundwater is suddenly heated, flashing to steam suddenly. When water turns into steam in a phreatic eruption, it expands at supersonic speeds, up to 1,700 times its original volume. This can be enough to shatter solid rock, and hurl rock fragments hundreds of metres. Phreatomagmatic eruption A phreatomagmatic eruption occurs when hot magma makes contact with water, creating an explosion. Clathrate hydrates One mechanism for explosive cryovolcanism is cryomagma making contact with clathrate hydrates. Clathrate hydrates, if exposed to warm temperatures, readily decompose. A 1982 article pointed out the possibility that the production of pressurised gas upon destabilisation of clathrate hydrates making contact with warm rising magma could produce an explosion that breaks through the surface, resulting in explosive cryovolcanism. Water vapor in a vacuum If a fracture reaches the surface of an icy body and the column of rising water is exposed to the near-vacuum of the surface of most icy bodies, it will immediately start to boil, because its vapor pressure is much more than the ambient pressure. Not only that, but any volatiles in the water will exsolve. The combination of these processes will release droplets and vapor, which can rise up the fracture, creating a plume. This is thought to be partially responsible for Enceladus's ice plumes. 
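The expansion factor quoted above for flashing water can be sanity-checked from standard steam properties (approximate values at atmospheric pressure):

    # Volume increase when liquid water flashes to steam at ~100 degrees C, 1 atm.
    v_steam = 1.67        # m^3 per kg of saturated steam
    v_liquid = 0.00104    # m^3 per kg of liquid water near boiling

    print(round(v_steam / v_liquid))   # ~1600-fold expansion, the same order as the
                                       # "up to 1,700 times" figure quoted above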
Occurrence Earth On Earth, volcanoes are most often found where tectonic plates are diverging or converging, and because most of Earth's plate boundaries are underwater, most volcanoes are found underwater. For example, a mid-ocean ridge, such as the Mid-Atlantic Ridge, has volcanoes caused by divergent tectonic plates whereas the Pacific Ring of Fire has volcanoes caused by convergent tectonic plates. Volcanoes can also form where there is stretching and thinning of the crust, such as in the East African Rift and the Wells Gray-Clearwater volcanic field and Rio Grande rift in North America. Volcanism away from plate boundaries has been postulated to arise from upwelling diapirs from the core–mantle boundary, deep within Earth. This results in hotspot volcanism, of which the Hawaiian hotspot is an example. Volcanoes are usually not created where two tectonic plates slide past one another. Studies show that Northern Hemisphere winters were warmer between 1912 and 1952, a period in which no massive eruptions took place, demonstrating how large eruptions can change conditions in Earth's atmosphere. Large eruptions can affect atmospheric temperature as ash and droplets of sulfuric acid obscure the Sun and cool Earth's troposphere. Historically, large volcanic eruptions have been followed by volcanic winters which have caused catastrophic famines. Moon Earth's Moon has no large volcanoes and no current volcanic activity, although recent evidence suggests it may still possess a partially molten core. However, the Moon does have many volcanic features such as maria (the darker patches seen on the Moon), rilles and domes. Venus The planet Venus has a surface that is 90% basalt, indicating that volcanism played a major role in shaping its surface. The planet may have had a major global resurfacing event about 500 million years ago, from what scientists can tell from the density of impact craters on the surface. Lava flows are widespread, and forms of volcanism not present on Earth occur as well. Changes in the planet's atmosphere and observations of lightning have been attributed to ongoing volcanic eruptions, although there is no confirmation of whether or not Venus is still volcanically active. However, radar sounding by the Magellan probe revealed evidence for comparatively recent volcanic activity at Venus's highest volcano, Maat Mons, in the form of ash flows near the summit and on the northern flank, though the interpretation of the flows as ash flows has been questioned. Mars There are several extinct volcanoes on Mars, four of which are vast shield volcanoes far bigger than any on Earth. They include Arsia Mons, Ascraeus Mons, Hecates Tholus, Olympus Mons, and Pavonis Mons. These volcanoes have been extinct for many millions of years, but the European Mars Express spacecraft has found evidence that volcanic activity may have occurred on Mars in the recent past as well. Moons of Jupiter Io Jupiter's moon Io is the most volcanically active object in the Solar System because of tidal interaction with Jupiter. It is covered with volcanoes that erupt sulfur, sulfur dioxide and silicate rock, and as a result, Io is constantly being resurfaced. Earth and Io are the only two bodies in the Solar System where volcanoes can be easily seen erupting, owing to their high level of activity. Io's lavas are the hottest known anywhere in the Solar System, with temperatures exceeding 1,800 K (1,500 °C).
In February 2001, the largest recorded volcanic eruptions in the Solar System occurred on Io.
Europa
Europa, the smallest of Jupiter's Galilean moons, also appears to have an active volcanic system, except that its volcanic activity is entirely in the form of water, which freezes into ice on the frigid surface. This process is known as cryovolcanism, and is apparently most common on the moons of the outer planets of the Solar System.
Moons of Saturn and Neptune
In 1989, the Voyager 2 spacecraft observed cryovolcanoes (ice volcanoes) on Triton, a moon of Neptune, and in 2005 the Cassini–Huygens probe photographed fountains of frozen particles erupting from Enceladus, a moon of Saturn. The ejecta may be composed of water, liquid nitrogen, ammonia, dust, or methane compounds. Cassini–Huygens also found evidence of a methane-spewing cryovolcano on the Saturnian moon Titan, which is believed to be a significant source of the methane found in its atmosphere. It is theorised that cryovolcanism may also be present on the Kuiper Belt object Quaoar.
Exoplanets
A 2010 study of the exoplanet COROT-7b, which was detected by transit in 2009, suggested that tidal heating, from the host star very close to the planet and from neighbouring planets, could generate intense volcanic activity similar to that found on Io.
Mir
Mir was a space station operated in low Earth orbit from 1986 to 2001, first by the Soviet Union and later by the Russian Federation. Mir was the first modular space station and was assembled in orbit from 1986 to 1996. It had a greater mass than any previous spacecraft. At the time it was the largest artificial satellite in orbit, succeeded by the International Space Station (ISS) after Mir's orbit decayed. The station served as a microgravity research laboratory in which crews conducted experiments in biology, human biology, physics, astronomy, meteorology, and spacecraft systems with a goal of developing technologies required for permanent occupation of space. Mir was the first continuously inhabited long-term research station in orbit and held the record for the longest continuous human presence in space at 3,644 days, until it was surpassed by the ISS on 23 October 2010. It holds the record for the longest single human spaceflight, with Valeri Polyakov spending 437 days and 18 hours on the station between 1994 and 1995. Mir was occupied for a total of twelve and a half years out of its fifteen-year lifespan, having the capacity to support a resident crew of three, or larger crews for short visits.
Following the success of the Salyut programme, Mir represented the next stage in the Soviet Union's space station programme. The first module of the station, known as the core module or base block, was launched in 1986 and followed by six further modules. Proton rockets were used to launch all of its components except for the docking module, which was installed by US Space Shuttle mission STS-74 in 1995. When complete, the station consisted of seven pressurised modules and several unpressurised components. Power was provided by several photovoltaic arrays attached directly to the modules. The station was maintained at an orbit between and altitude and travelled at an average speed of 27,700 km/h (17,200 mph), completing 15.7 orbits per day.
The station was launched as part of the Soviet Union's crewed spaceflight programme effort to maintain a long-term research outpost in space and, following the collapse of the USSR, was operated by the new Russian Federal Space Agency (RKA). As a result, most of the station's occupants were Soviet; through international collaborations such as the Interkosmos, Euromir and Shuttle–Mir programmes, the station was made accessible to space travellers from several Asian, European and North American nations. Mir was deorbited in March 2001 after funding was cut off. The cost of the Mir programme was estimated by former RKA General Director Yuri Koptev in 2001 as $4.2 billion over its lifetime (including development, assembly and orbital operation).
Origins
Mir was authorised by a 17 February 1976 decree calling for the design of an improved model of the Salyut DOS-17K space stations. Four Salyut space stations had been launched since 1971, with three more being launched during Mir's development. It was planned that the station's core module (DOS-7 and the backup DOS-8) would be equipped with a total of four docking ports: two at either end of the station, as with the Salyut stations, and an additional two ports on either side of a docking sphere at the front of the station to enable further modules to expand the station's capabilities. By August 1978, this had evolved to the final configuration of one aft port and five ports in a spherical compartment at the forward end of the station.
It was originally planned that the ports would connect to modules derived from the Soyuz spacecraft. These modules would have used a Soyuz propulsion module, as in Soyuz and Progress, and the descent and orbital modules would have been replaced with a long laboratory module. Following a February 1979 governmental resolution, the programme was consolidated with Vladimir Chelomei's crewed Almaz military space station programme. The docking ports were reinforced to accommodate space station modules based on the TKS spacecraft. NPO Energia was responsible for the overall space station, with work subcontracted to KB Salyut due to ongoing work on the Energia rocket and the Salyut 7, Soyuz-T, and Progress spacecraft. KB Salyut began work in 1979, and drawings were released in 1982 and 1983. New systems incorporated into the station included the Salyut 5B digital flight control computer and gyrodyne flywheels (taken from Almaz), the Kurs automatic rendezvous system, the Luch satellite communications system, Elektron oxygen generators, and Vozdukh carbon dioxide scrubbers.
By early 1984, work on Mir had halted while all resources were being put into the Buran programme in order to prepare the Buran spacecraft for flight testing. Funding resumed in early 1984, when Valentin Glushko was ordered by the Central Committee's Secretary for Space and Defence to orbit Mir by early 1986, in time for the 27th Communist Party Congress. It was clear that the planned processing flow could not be followed and still meet the 1986 launch date. It was decided on Cosmonaut's Day (12 April) 1985 to ship the flight model of the base block to the Baikonur Cosmodrome and conduct the systems testing and integration there. The module arrived at the launch site on 6 May, with 1,100 of its 2,500 cables requiring rework based on the results of tests on the ground test model at Khrunichev. In October, the base block was rolled outside its cleanroom to carry out communications tests. The first launch attempt on 16 February 1986 was scrubbed when the spacecraft communications failed, but the second launch attempt, on 19 February 1986 at 21:28:23 UTC, was successful, meeting the political deadline.
Station structure
Assembly
The orbital assembly of Mir began on 19 February 1986 with the launch of the core module on a Proton-K rocket. Four of the six modules which were later added (Kvant-2 in 1989, Kristall in 1990, Spektr in 1995 and Priroda in 1996) followed the same sequence to be added to the main Mir complex. First, the module would be launched independently on its own Proton-K and chase the station automatically. It would then dock to the forward docking port on the core module's docking node, then extend its Lyappa arm to mate with a fixture on the node's exterior. The arm would then lift the module away from the forward docking port and rotate it on to the radial port where it was to mate, before lowering it to dock. The node was equipped with only two Konus drogues, which were required for dockings. This meant that, prior to the arrival of each new module, the node had to be depressurised to allow spacewalking cosmonauts to manually relocate the drogue to the next port to be occupied. The other two expansion modules, Kvant-1 in 1987 and the docking module in 1995, followed different procedures.
Kvant-1, which unlike the four modules mentioned above had no engines of its own, was launched attached to a tug based on the TKS spacecraft, which delivered the module to the aft end of the core module instead of the docking node. Once hard docking had been achieved, the tug undocked and deorbited itself. The docking module, meanwhile, was launched aboard Atlantis during STS-74 and mated to the orbiter's Orbiter Docking System. Atlantis then docked, via the module, to Kristall, and left the module behind when it undocked later in the mission. Various other external components, including three truss structures, several experiments and other unpressurised elements, were also mounted to the exterior of the station by cosmonauts, who conducted a total of eighty spacewalks over the course of the station's history.
The station's assembly marked the beginning of the third generation of space station design, being the first to consist of more than one primary spacecraft (thus opening a new era in space architecture). First-generation stations such as Salyut 1 and Skylab had monolithic designs, consisting of one module with no resupply capability; the second-generation stations Salyut 6 and Salyut 7 comprised a monolithic station with two ports to allow consumables to be replenished by cargo spacecraft such as Progress. The capability of Mir to be expanded with add-on modules meant that each could be designed with a specific purpose in mind (for instance, the core module functioned largely as living quarters), thus eliminating the need to install all the station's equipment in one module.
Pressurised modules
In its completed configuration, the space station consisted of seven different modules, each launched into orbit separately over a period of ten years by either a Proton-K rocket or, in the case of the docking module, the Space Shuttle.
{| class="wikitable sticky-header" style="width:auto; margin:auto;"
|- style="background:#EFEFEF;"
! Module
! Expedition
! Launch date
! Launch system
! Nation
|-
| rowspan="2" | Mir Core Module (Core Module)
| N/A
| 19 February 1986
| Proton-K
| Soviet Union
|- style="border-bottom: 3px solid gray"
| colspan="4" | The base block for the entire Mir complex, the core module, or DOS-7, provided the main living quarters for resident crews and contained environmental systems, early attitude control systems and the station's main engines. The module was based on hardware developed as part of the Salyut programme, and consisted of a stepped-cylinder main compartment and a spherical 'node' module, which served as an airlock and provided ports to which four of the station's expansion modules were berthed and to which a Soyuz or Progress spacecraft could dock. The module's aft port served as the berthing location for Kvant-1.
|-
| rowspan="2" | Kvant-1 (Astrophysics Module)
| EO-2
| 31 March 1987
| Proton-K
| Soviet Union
|- style="border-bottom: 3px solid gray"
| colspan="4" | The first expansion module to be launched, Kvant-1 consisted of two pressurised working compartments and one unpressurised experiment compartment. Scientific equipment included an X-ray telescope, an ultraviolet telescope, a wide-angle camera, high-energy X-ray experiments, an X-ray/gamma ray detector, and the Svetlana electrophoresis unit.
The module also carried six gyrodynes for attitude control, in addition to life support systems including an Elektron oxygen generator and a Vozdukh carbon dioxide scrubber.
|-
| rowspan="2" | Kvant-2 (Augmentation Module)
| EO-5
| 26 November 1989
| Proton-K
| Soviet Union
|- style="border-bottom: 3px solid gray"
| colspan="4" | The first TKS-based module, Kvant-2, was divided into three compartments: an EVA airlock, an instrument/cargo compartment (which could function as a backup airlock), and an instrument/experiment compartment. The module also carried a Soviet version of the Manned Maneuvering Unit for the Orlan space suit, referred to as Ikar, a system for regenerating water from urine, a shower, the Rodnik water storage system and six gyrodynes to augment those already located in Kvant-1. Scientific equipment included a high-resolution camera, spectrometers, X-ray sensors, the Volna 2 fluid flow experiment, and the Inkubator-2 unit, which was used for hatching and raising quail.
|-
| rowspan="2" | Kristall (Technology Module)
| EO-6
| 31 May 1990
| Proton-K
| Soviet Union
|- style="border-bottom: 3px solid gray"
| colspan="4" | Kristall, the fourth module, consisted of two main sections. The first was largely used for materials processing (via various processing furnaces), astronomical observations, and a biotechnology experiment utilising the Aniur electrophoresis unit. The second section was a docking compartment which featured two APAS-89 docking ports, initially intended for use with the Buran programme and eventually used during the Shuttle-Mir programme. The docking compartment also contained the Priroda 5 camera used for Earth resources experiments. Kristall also carried six control moment gyroscopes (CMGs, or "gyrodynes") for attitude control to augment those already on the station, and two collapsible solar arrays.
|-
| rowspan="2" | Spektr (Power Module)
| EO-18
| 20 May 1995
| Proton-K
| Russia
|- style="border-bottom: 3px solid gray"
| colspan="4" | Spektr was the first of the three modules launched during the Shuttle-Mir programme; it served as the living quarters for American astronauts and housed NASA-sponsored experiments. The module was designed for remote observation of Earth's environment and contained atmospheric and surface research equipment. It featured four solar arrays which generated approximately half of the station's electrical power. The module also had a science airlock to expose experiments selectively to the vacuum of space. Spektr was rendered unusable following the collision with Progress M-34 in 1997, which damaged the module and exposed it to the vacuum of space.
|-
| rowspan="2" | Docking Module
| EO-20
| 15 November 1995
| Space Shuttle Atlantis (STS-74)
| US
|- style="border-bottom: 3px solid gray"
| colspan="4" | The docking module was designed to simplify Space Shuttle dockings to Mir. Before the first shuttle docking mission (STS-71), the Kristall module had to be tediously moved to ensure sufficient clearance between Atlantis and Mir's solar arrays. With the addition of the docking module, enough clearance was provided without the need to relocate Kristall. It had two identical APAS-89 docking ports, one attached to the distal port of Kristall and the other available for shuttle docking.
|- | rowspan="2" | Priroda(Earth Sensing Module) | EO-21 | 26 April 1996 | Proton-K | Russia | rowspan="2" | | rowspan="2" | |- style="border-bottom: 3px solid gray" | colspan="4" | The seventh and final Mir module, Priroda'''s primary purpose was to conduct Earth resource experiments through remote sensing and to develop and verify remote sensing methods. The module's experiments were provided by twelve different nations, and covered microwave, visible, near infrared, and infrared spectral regions using both passive and active sounding methods. The module possessed both pressurised and unpressurised segments, and featured a large, externally mounted synthetic aperture radar dish. |} Unpressurised elements In addition to the pressurised modules, Mir featured several external components. The largest component was the Sofora girder, a large scaffolding-like structure consisting of 20 segments which, when assembled, projected 14 metres from its mount on Kvant-1. A self-contained thruster block, the VDU (Vynosnaya Dvigatyelnaya Ustanovka), was mounted on the end of Sofora and was used to augment the roll-control thrusters on the core module. The VDU's increased distance from Mirs axis allowed an 85% decrease in fuel consumption, reducing the amount of propellant required to orient the station. A second girder, Rapana, was mounted aft of Sofora on Kvant-1. This girder, a small prototype of a structure intended to be used on Mir-2 to hold large parabolic dishes away from the main station structure, was 5 metres long and used as a mounting point for externally mounted exposure experiments. To assist in moving objects around the exterior of the station during EVAs, Mir featured two Strela cargo cranes mounted to the sides of the core module, used for moving spacewalking cosmonauts and parts. The cranes consisted of telescopic poles assembled in sections which measured around when collapsed, but when extended using a hand crank were long, meaning that all of the station's modules could be accessed during spacewalks. Each module was fitted with external components specific to the experiments that were carried out within that module, the most obvious being the Travers antenna mounted to Priroda. This synthetic aperture radar consisted of a large dish-like framework mounted outside the module, with associated equipment within, used for Earth observations experiments, as was most of the other equipment on Priroda, including various radiometers and scan platforms. Kvant-2 also featured several scan platforms and was fitted with a mounting bracket to which the cosmonaut manoeuvring unit, or Ikar, was mated. This backpack was designed to assist cosmonauts in moving around the station and the planned Buran in a manner similar to the US Manned Maneuvering Unit, but it was only used once, during EO-5. In addition to module-specific equipment, Kvant-2, Kristall, Spektr and Priroda were each equipped with one Lyappa arm, a robotic arm which, after the module had docked to the core module's forward port, grappled one of two fixtures positioned on the core module's docking node. The arriving module's docking probe was then retracted, and the arm raised the module so that it could be pivoted 90° for docking to one of the four radial docking ports. Power supply Photovoltaic (PV) arrays powered Mir. The station used a 28 volt DC supply which provided 5-, 10-, 20- and 50-amp taps. 
When the station was illuminated by sunlight, several solar arrays mounted on the pressurised modules provided power to Mir's systems and charged the nickel-cadmium storage batteries installed throughout the station. The arrays rotated in only one degree of freedom over a 180° arc, and tracked the Sun using Sun sensors and motors installed in the array mounts. The station itself also had to be oriented to ensure optimum illumination of the arrays. When the station's all-sky sensor detected that Mir had entered Earth's shadow, the arrays were rotated to the optimum angle predicted for reacquiring the Sun once the station passed out of the shadow. The batteries, each of 60 Ah capacity, were then used to power the station until the arrays recovered their maximum output on the day side of Earth.
The solar arrays themselves were launched and installed over a period of eleven years, more slowly than originally planned, with the station continually suffering from a shortage of power as a result. The first two arrays, each 38 m2 (409 ft2) in area, were launched on the core module, and together provided a total of 9 kW of power. A third, dorsal panel was launched on Kvant-1 and mounted on the core module in 1987, providing a further 2 kW from a 22 m2 (237 ft2) area. Kvant-2, launched in 1989, carried two 10 m (32.8 ft) long panels which supplied 3.5 kW each, whilst Kristall was launched with two collapsible, 15 m (49.2 ft) long arrays (providing 4 kW each) which were intended to be moved to Kvant-1 and installed on mounts that were attached during a spacewalk by the EO-8 crew in 1991. This relocation was begun in 1995, when the panels were retracted and the left panel installed on Kvant-1. By this time all the arrays had degraded and were supplying much less power. To rectify this, Spektr (launched in 1995), which had initially been designed to carry two arrays, was modified to hold four, providing a total of 126 m2 (1360 ft2) of array area with a 16 kW supply. Two further arrays were flown to the station aboard Atlantis during STS-74, carried on the docking module. The first of these, the Mir cooperative solar array, consisted of American photovoltaic cells mounted on a Russian frame. It was installed on the unoccupied mount on Kvant-1 in May 1996 and was connected to the socket that had previously been occupied by the core module's dorsal panel, which was by this point barely supplying 1 kW. The other panel, originally intended to be launched on Priroda, replaced the Kristall panel on Kvant-1 in November 1997, completing the station's electrical system.
Orbit control
Mir was maintained in a near-circular orbit with an average perigee of and an average apogee of , travelling at an average speed of 27,700 km/h (17,200 mph) and completing 15.7 orbits per day. As the station constantly lost altitude because of slight atmospheric drag, it needed to be boosted to a higher altitude several times each year. This boost was generally performed by Progress resupply vessels, although during the Shuttle-Mir programme the task was performed by US Space Shuttles, and, prior to the arrival of Kvant-1, the engines on the core module could also accomplish the task. Attitude control was maintained by a combination of two mechanisms. To hold a set attitude, a system of twelve control moment gyroscopes (CMGs, or "gyrodynes") rotating at 10,000 rpm kept the station oriented, six CMGs being located in each of the Kvant-1 and Kvant-2 modules.
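The orbital and electrical figures quoted above can be loosely cross-checked with a short calculation. The Python sketch below derives the altitude and period implied by the stated average speed and orbit rate, and estimates how many of the 60 Ah, 28 V batteries would be needed to carry a load through one pass through Earth's shadow; the 10 kW load and the 36-minute eclipse duration are illustrative assumptions, not figures from this article.

```python
import math

# Figures quoted above for Mir's orbit
speed_km_h = 27_700        # average orbital speed
orbits_per_day = 15.7

# Implied orbital geometry (Earth mean radius of 6,371 km is a standard value)
earth_radius_km = 6_371
circumference_km = speed_km_h * 24 / orbits_per_day
altitude_km = circumference_km / (2 * math.pi) - earth_radius_km
period_min = 24 * 60 / orbits_per_day

# Battery ride-through during eclipse (assumed 10 kW load, assumed 36-minute
# eclipse, and full use of each battery's capacity; all three are assumptions)
battery_kwh = 28 * 60 / 1000          # 28 V bus x 60 Ah cells quoted above
eclipse_energy_kwh = 10 * (36 / 60)
batteries_needed = math.ceil(eclipse_energy_kwh / battery_kwh)

print(f"Implied altitude: ~{altitude_km:.0f} km, period: ~{period_min:.0f} min")
print(f"Energy per battery: {battery_kwh:.2f} kWh; "
      f"batteries needed per eclipse: ~{batteries_needed}")
```

The implied altitude of roughly 370 km and period of about 92 minutes are consistent with the station needing regular reboosts against atmospheric drag, as described above.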
When the attitude of the station needed to be changed, the gyrodynes were disengaged, thrusters (including those mounted directly to the modules, and the VDU thruster used for roll control mounted to the Sofora girder) were used to attain the new attitude, and the CMGs were re-engaged. This was done fairly regularly depending on experimental needs; for instance, Earth or astronomical observations required that the instrument recording images be continuously aimed at the target, and so the station was oriented to make this possible. Conversely, materials-processing experiments required the minimisation of movement on board the station, and so Mir would be oriented in a gravity-gradient attitude for stability. Prior to the arrival of the modules containing these gyrodynes, the station's attitude was controlled using thrusters located on the core module alone, and, in an emergency, the thrusters on docked Soyuz spacecraft could be used to maintain the station's orientation.
Communications
Radio communications provided telemetry and scientific data links between Mir and the RKA Mission Control Centre (TsUP). Radio links were also used during rendezvous and docking procedures and for audio and video communication between crew members, flight controllers and family members. As a result, Mir was equipped with several communication systems used for different purposes. The station communicated directly with the ground via the Lira antenna mounted to the core module. The Lira antenna also had the capability to use the Luch data relay satellite system (which fell into disrepair in the 1990s) and the network of Soviet tracking ships deployed in various locations around the world (which also became unavailable in the 1990s). UHF radio was used by cosmonauts conducting EVAs. UHF was also employed by other spacecraft that docked to or undocked from the station, such as Soyuz, Progress, and the Space Shuttle, in order to receive commands from the TsUP and from Mir crew members via the TORU system.
Microgravity
At Mir's orbital altitude, the force of Earth's gravity was 88% of sea-level gravity. While the constant free fall of the station offered a perceived sensation of weightlessness, the onboard environment was not one of weightlessness or zero gravity; it was often described as microgravity. This state of perceived weightlessness was not perfect, being disturbed by five separate effects: the drag resulting from the residual atmosphere; vibratory acceleration caused by mechanical systems and the crew on the station; orbital corrections by the on-board gyroscopes (which spun at 10,000 rpm, producing vibrations of 166.67 Hz) or thrusters; tidal forces, since any parts of Mir not at exactly the same distance from Earth tended to follow separate orbits, but, as each point was physically part of the station, this was impossible and each component was therefore subject to small accelerations; and the differences in orbital plane between different locations on the station.
Life support
Mir's environmental control and life support system (ECLSS) provided or controlled atmospheric pressure, fire detection, oxygen levels, waste management and water supply. The highest priority for the ECLSS was the station's atmosphere, but the system also collected, processed, and stored waste and water produced and used by the crew, a process that recycled fluid from the sink and toilet and condensation from the air. The Elektron system generated oxygen electrolytically, venting hydrogen to space.
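To give a rough sense of the scale involved in electrolytic oxygen generation, the sketch below estimates the daily water requirement for a resident crew. The three-person crew is as described earlier in the article; the oxygen consumption rate of about 0.84 kg per person per day is an illustrative assumption, not a figure from this article.

```python
# Illustrative sizing of electrolytic oxygen generation (an Elektron-style system).
# Assumption (not from the article): each crew member consumes ~0.84 kg of O2 per day.
O2_PER_PERSON_KG_DAY = 0.84
CREW_SIZE = 3  # typical resident crew, as noted above

# Water is about 89% oxygen by mass (O: ~16.00 g/mol of H2O: ~18.02 g/mol),
# so each kilogram of water electrolysed yields roughly 0.89 kg of oxygen.
O2_FRACTION_OF_WATER = 16.00 / 18.02

o2_needed_kg = O2_PER_PERSON_KG_DAY * CREW_SIZE
water_needed_kg = o2_needed_kg / O2_FRACTION_OF_WATER

print(f"Oxygen needed per day:  ~{o2_needed_kg:.2f} kg")
print(f"Water electrolysed/day: ~{water_needed_kg:.2f} kg")
# roughly 2.5 kg of oxygen and a little under 3 kg of water for a crew of three
```

Numbers of this order help explain why recycling condensate and urine back into the water loop, as described above, was worth the added complexity.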
Bottled oxygen and solid fuel oxygen generation (SFOG) canisters, a system known as Vika, provided backup. Carbon dioxide was removed from the air by the Vozdukh system. Other byproducts of human metabolism, such as methane from the intestines and ammonia from sweat, were removed by activated charcoal filters. Similar systems are presently used on the ISS. The atmosphere on Mir was similar to Earth's: normal air pressure on the station was 101.3 kPa (14.7 psi), the same as at sea level on Earth. An Earth-like atmosphere offers benefits for crew comfort.
International cooperation
Interkosmos
Interkosmos was a Soviet space exploration programme which allowed members from countries allied with the Soviet Union to participate in crewed and uncrewed space exploration missions. Participation was also made available to governments of countries such as France and India. Only the last three of the programme's fourteen missions consisted of an expedition to Mir, and none resulted in an extended stay on the station:
Muhammed Faris – EP-1 (1987)
Aleksandr Panayatov Aleksandrov – EP-2 (1988)
Abdul Ahad Mohmand – EP-3 (1988)
European involvement
Various European astronauts visited Mir as part of several cooperative programmes:
Jean-Loup Chrétien – Aragatz (1988)
Helen Sharman – Project Juno (1991)
Franz Viehböck – Austromir '91 (1991)
Klaus-Dietrich Flade – Mir '92 (1992)
Michel Tognini – Antarès (1992)
Jean-Pierre Haigneré – Altair (1993)
Ulf Merbold – Euromir '94 (1994)
Thomas Reiter – Euromir '95 (1995)
Claudie Haigneré – Cassiopée (1996)
Reinhold Ewald – Mir '97 (1997)
Léopold Eyharts – Pégase (1998)
Ivan Bella – Stefanik (1999)
Shuttle–Mir program
In the early 1980s, NASA planned to launch a modular space station called Freedom as a counterpart to Mir, while the Soviets were planning to construct Mir-2 in the 1990s as a replacement for the station. Because of budget and design constraints, Freedom never progressed past mock-ups and minor component tests and, with the fall of the Soviet Union and the end of the Space Race, the project was nearly cancelled entirely by the United States House of Representatives. The post-Soviet economic chaos in Russia also led to the cancellation of Mir-2, though only after its base block, DOS-8, had been constructed. Similar budgetary difficulties were faced by other nations with space station projects, which prompted the US government to negotiate with European states, Russia, Japan, and Canada in the early 1990s to begin a collaborative project. In June 1992, American president George H. W. Bush and Russian president Boris Yeltsin agreed to cooperate on space exploration. The resulting Agreement between the United States of America and the Russian Federation Concerning Cooperation in the Exploration and Use of Outer Space for Peaceful Purposes called for a short joint space programme with one American astronaut deployed to the Russian space station Mir and two Russian cosmonauts deployed to a Space Shuttle. In September 1993, US Vice President Al Gore Jr. and Russian Prime Minister Viktor Chernomyrdin announced plans for a new space station, which eventually became the ISS. They also agreed, in preparation for this new project, that the United States would be heavily involved in the Mir programme as part of an international project known as the Shuttle–Mir Programme.
The project, sometimes called "Phase One", was intended to allow the United States to learn from Russian experience in long-duration spaceflight and to foster a spirit of cooperation between the two nations and their space agencies, the US National Aeronautics and Space Administration (NASA) and the Russian Federal Space Agency (Roskosmos). The project prepared the way for further cooperative space ventures, specifically, "Phase Two" of the joint project, the construction of the ISS. The programme was announced in 1993; the first mission started in 1994, and the project continued until its scheduled completion in 1998. Eleven Space Shuttle missions, a joint Soyuz flight, and almost 1000 cumulative days in space for US astronauts occurred over the course of seven long-duration expeditions. Other visitors Toyohiro Akiyama – Kosmoreporter (1990) Chris Hadfield – STS-74 (1995) A British con artist, Peter Rodney Llewellyn, almost visited Mir in 1999 on a private contract after promising US$100 million for the privilege. Life on board Inside, the Mir resembled a cramped labyrinth, crowded with hoses, cables and scientific instruments—as well as articles of everyday life, such as photos, children's drawings, books and a guitar. It commonly housed three crew members, but was capable of supporting as many as six for up to a month. The station was designed to remain in orbit for around five years; it remained in orbit for fifteen. As a result, NASA astronaut John Blaha reported that, with the exception of Priroda and Spektr, which were added late in the station's life, Mir did look used, which is to be expected given it had been lived in for ten to eleven years without being brought home and cleaned. Crew schedule The time zone used on board Mir was Moscow Time (MSK; UTC+03). The windows were covered during night hours to give the impression of darkness because the station experienced 16 sunrises and sunsets a day. A typical day for the crew began with a wake-up at 08:00 MSK, followed by two hours of personal hygiene and breakfast. Work was conducted from 10:00 until 13:00, followed by an hour of exercise and an hour's lunch break. Three more hours of work and another hour of exercise followed lunch, and the crews began preparing for their evening meal at about 19:00. The cosmonauts were free to do as they wished in the evening, and largely worked to their own pace during the day. In their spare time, crews were able to catch up with work, observe the Earth below, respond to letters, drawings, and other items brought from Earth (and give them an official stamp to show they had been aboard Mir), or make use of the station's ham radio. Two amateur radio call signs, U1MIR and U2MIR, were assigned to Mir in the late 1980s, allowing amateur radio operators on Earth to communicate with the cosmonauts. The station was also equipped with a supply of books and films for the crew to read and watch. NASA astronaut Jerry Linenger related how life on board Mir was structured and lived according to the detailed itineraries provided by ground control. Every second on board was accounted for and all activities were timetabled. After working some time on Mir, Linenger came to feel that the order in which his activities were allocated did not represent the most logical or efficient order possible for these activities. He decided to perform his tasks in an order that he felt enabled him to work more efficiently, be less fatigued, and suffer less from stress. 
Linenger noted that his comrades on Mir did not "improvise" in this way, and as a medical doctor he observed effects of stress on his comrades that he believed were the outcome of following an itinerary without making modifications to it. Despite this, he commented that his comrades performed all their tasks in a supremely professional manner. Astronaut Shannon Lucid, who set the record for the longest stay in space by a woman while aboard Mir (surpassed by Sunita Williams 11 years later on the ISS), also commented about working aboard Mir: "I think going to work on a daily basis on Mir is very similar to going to work on a daily basis on an outstation in Antarctica. The big difference with going to work here is the isolation, because you really are isolated. You don't have a lot of support from the ground. You really are on your own."
Exercise
The most significant adverse effects of long-term weightlessness are muscle atrophy and deterioration of the skeleton, or spaceflight osteopenia. Other significant effects include fluid redistribution, a slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, nasal congestion, sleep disturbance, excess flatulence, and puffiness of the face. These effects begin to reverse quickly upon return to the Earth. To prevent some of these effects, the station was equipped with two treadmills (in the core module and Kvant-2) and a stationary bicycle (in the core module); each cosmonaut was to cycle the equivalent of and run the equivalent of per day. Cosmonauts used bungee cords to strap themselves to the treadmill. Researchers believe that exercise is a good countermeasure for the bone and muscle density loss that occurs in low-gravity situations.
Hygiene
There were two space toilets (ASUs) on Mir, located in the core module and Kvant-2. They used a fan-driven suction system similar to the Space Shuttle Waste Collection System. The user was first fastened to the toilet seat, which was equipped with spring-loaded restraining bars to ensure a good seal. A lever operated a powerful fan, and a suction hole slid open: the air stream carried the waste away. Solid waste was collected in individual bags which were stored in an aluminium container. Full containers were transferred to Progress spacecraft for disposal. Liquid waste was evacuated by a hose connected to the front of the toilet, with anatomically appropriate "urine funnel adapters" attached to the tube so both men and women could use the same toilet. Waste was collected and transferred to the Water Recovery System, where it could be recycled back into drinking water, although it was usually used to produce oxygen via the Elektron system.
Mir featured a shower, the Bania, located in Kvant-2. It was an improvement on the units installed in previous Salyut stations, but proved difficult to use owing to the time required to set up, use, and stow. The shower, which featured a plastic curtain and a fan to collect water via an airflow, was later converted into a steam room; it eventually had its plumbing removed and the space was reused. When the shower was unavailable, crew members washed using wet wipes, with soap dispensed from a toothpaste-tube-like container, or using a washbasin equipped with a plastic hood, located in the core module. Crews were also provided with rinse-less shampoo and edible toothpaste to save water.
On a 1998 visit to Mir, bacteria and larger organisms were found to have proliferated in water globules formed from moisture that had condensed behind service panels.
Sleeping in space
The station provided two permanent crew quarters, the Kayutkas, phonebox-sized booths set towards the rear of the core module, each featuring a tethered sleeping bag, a fold-out desk, a porthole, and storage for personal effects. Visiting crews had no allocated sleep module, instead attaching a sleeping bag to an available space on a wall; US astronauts installed themselves within Spektr until a collision with a Progress spacecraft caused the depressurisation of that module. It was important that crew accommodations be well ventilated; otherwise, astronauts could wake up oxygen-deprived and gasping for air, because a bubble of their own exhaled carbon dioxide had formed around their heads.
Food and drink
Most of the food eaten by station crews was frozen, refrigerated or canned. Meals were prepared by the cosmonauts, with the help of a dietitian, before their flight to the station. The diet was designed to provide around 100 g of protein, 130 g of fat and 330 g of carbohydrates per day, in addition to appropriate mineral and vitamin supplements. Meals were spaced out through the day to aid assimilation. Canned food such as jellied beef tongue was placed into a niche in the core module's table, where it could be warmed in 5–10 minutes. Usually, crews drank tea, coffee and fruit juices, but, unlike the ISS, the station also had a supply of cognac and vodka for special occasions.
Microbiological environmental hazards
Ninety species of micro-organisms were found inside Mir in 1990, four years after the station's launch. By the time of its decommissioning in 2001, the number of known different micro-organisms had grown to 140. As space stations get older, the problems with contamination get worse. Molds that develop aboard space stations can produce acids that degrade metal, glass and rubber. The molds in Mir were found growing behind panels and inside air-conditioning equipment. The molds also caused a foul smell, which was often cited as visitors' strongest impression. In 2018, after detecting five strains of Enterobacter bugandensis bacteria aboard the International Space Station (ISS), none of them pathogenic to humans, researchers reported that microorganisms on the ISS should be carefully monitored to continue ensuring a medically healthy environment for the astronauts.
Some biologists were concerned that the mutant fungi, having developed in an isolated environment for 15 years, could present a major microbiological hazard for humans if they reached Earth in the splashdown. Other scientists, by contrast, are researching whether this situation can be put to use for life in space: fungi could assist space travel, help detect livable environments for humankind, and prove valuable on other planets. Fungi can play a significant role in creating innovative and sustainable building materials. Most fungi possess mycelia, hair-like root structures that grow and spread across surfaces. As mycelia expand, they bind surrounding materials such as wood chips, sawdust, or regolith (the loose material covering solid rock on planetary bodies like the Moon or Mars). This growth process results in a dense, interconnected network that creates a remarkably strong and durable substance.
The resulting mycelium-based material offers notable thermal insulation and radiation protection, making it a candidate for construction, particularly in severe environments such as outer space or other interplanetary habitats.
Station operations
Expeditions
Mir was visited by a total of 28 long-duration or "principal" crews, each of which was given a sequential expedition number formatted as EO-X. Expeditions varied in length (from the 72-day flight of the crew of EO-28 to the 437-day flight of Valeri Polyakov), but generally lasted around six months. Principal expedition crews consisted of two or three crew members, who often launched as part of one expedition but returned with another (Polyakov launched with EO-14 and landed with EO-17). The principal expeditions were often supplemented with visiting crews who remained on the station during the week-long handover period between one crew and the next before returning with the departing crew, the station's life support system being able to support a crew of up to six for short periods. The station was occupied for four distinct periods: 12 March – 16 July 1986 (EO-1), 5 February 1987 – 27 April 1989 (EO-2 to EO-4), the record-breaking run from 5 September 1989 to 28 August 1999 (EO-5 to EO-27), and 4 April – 16 June 2000 (EO-28). By the end, it had been visited by 104 different people from twelve different nations, making it the most visited spacecraft in history (a record later surpassed by the ISS).
Early existence
Due to pressure to launch the station on schedule, mission planners were left without Soyuz spacecraft or modules to launch to the station at first. It was decided to launch Soyuz T-15 on a dual mission to both Mir and Salyut 7. Leonid Kizim and Vladimir Solovyov first docked with Mir on 15 March 1986. During their nearly 51-day stay on Mir, they brought the station online and checked its systems. They unloaded two Progress spacecraft launched after their arrival, Progress 25 and Progress 26. On 5 May 1986, they undocked from Mir for a day-long journey to Salyut 7. They spent 51 days there and gathered 400 kg of scientific material from Salyut 7 for return to Mir. While Soyuz T-15 was at Salyut 7, the uncrewed Soyuz TM-1 arrived at the unoccupied Mir and remained for 9 days, testing the new Soyuz TM model. Soyuz T-15 redocked with Mir on 26 June and delivered the experiments and 20 instruments, including a multichannel spectrometer. The EO-1 crew spent their last 20 days on Mir conducting Earth observations before returning to Earth on 16 July 1986, leaving the new station unoccupied.
The second expedition to Mir, EO-2, launched on Soyuz TM-2 on 5 February 1987. During their stay, the Kvant-1 module, launched on 30 March 1987, arrived. It was the first experimental version of a planned series of '37K' modules scheduled to be launched to Mir on Buran. Kvant-1 was originally planned to dock with Salyut 7; due to technical problems during its development, it was reassigned to Mir. The module carried the first set of six gyroscopes for attitude control. The module also carried instruments for X-ray and ultraviolet astrophysical observations. The initial rendezvous of the Kvant-1 module with Mir on 5 April 1987 was troubled by the failure of the onboard control system. After the failure of the second attempt to dock, the resident cosmonauts, Yuri Romanenko and Aleksandr Laveykin, conducted an EVA to fix the problem.
They found a trash bag which had been left in orbit after the departure of one of the previous cargo ships and was now located between the module and the station, preventing the docking. After removing the bag, docking was completed on 12 April. The Soyuz TM-2 launch was the beginning of a string of six Soyuz launches and three long-duration crews between 5 February 1987 and 27 April 1989. This period also saw the first international visitors, Muhammed Faris (Syria), Abdul Ahad Mohmand (Afghanistan) and Jean-Loup Chrétien (France). With the departure of EO-4 on Soyuz TM-7 on 27 April 1989, the station was again left unoccupied.
Third start
The launch of Soyuz TM-8 on 5 September 1989 marked the beginning of the longest continuous human presence in space up to that point; the record stood until 23 October 2010, when it was surpassed by the ISS. It also marked the beginning of Mir's second expansion. The Kvant-2 and Kristall modules were now ready for launch. Alexander Viktorenko and Aleksandr Serebrov docked with Mir and brought the station out of its five-month hibernation. On 29 September the cosmonauts installed equipment in the docking system in preparation for the arrival of Kvant-2, the first of the 20-tonne add-on modules based on the TKS spacecraft from the Almaz programme. After a 40-day delay caused by faulty computer chips, Kvant-2 was launched on 26 November 1989. After problems deploying the craft's solar array and with the automated docking systems on both Kvant-2 and Mir, the new module was docked manually on 6 December. Kvant-2 added a second set of control moment gyroscopes (CMGs, or "gyrodynes") to Mir, and brought new life support systems for recycling water and generating oxygen, reducing dependence on ground resupply. The module featured a large airlock with a one-metre hatch. A special backpack unit (known as Ikar), an equivalent of the US Manned Maneuvering Unit, was located inside Kvant-2's airlock.
Soyuz TM-9 launched EO-6 crew members Anatoly Solovyev and Aleksandr Balandin on 11 February 1990. While it was docking, the EO-5 crew noted that three thermal blankets on the ferry were loose, potentially creating problems on reentry, but it was decided that they would be manageable. Their stay on board Mir saw the addition of the Kristall module, launched 31 May 1990. The first docking attempt, on 6 June, was aborted due to an attitude control thruster failure. Kristall arrived at the front port on 10 June and was relocated to the lateral port opposite Kvant-2 the next day, restoring the equilibrium of the complex. Due to the delay in the docking of Kristall, EO-6 was extended by 10 days to permit the activation of the module's systems and to accommodate an EVA to repair the loose thermal blankets on Soyuz TM-9.
Kristall contained furnaces for use in producing crystals under microgravity conditions (hence the choice of name for the module). The module was also equipped with biotechnology research equipment, including a small greenhouse for plant cultivation experiments which was equipped with a source of light and a feeding system, in addition to equipment for astronomical observations. The most obvious features of the module were the two Androgynous Peripheral Attach System (APAS-89) docking ports designed to be compatible with the Buran spacecraft. Although they were never used in a Buran docking, they were useful later during the Shuttle-Mir programme, providing a berthing location for US Space Shuttles. The EO-7 relief crew arrived aboard Soyuz TM-10 on 3 August 1990.
The new crew arrived at Mir with quail for Kvant-2's cages; one of the quail laid an egg en route to the station. It was returned to Earth, along with 130 kg of experiment results and industrial products, in Soyuz TM-9. Two more expeditions, EO-8 and EO-9, continued the work of their predecessors whilst tensions grew back on Earth.
Post-Soviet period
The EO-10 crew, launched aboard Soyuz TM-13 on 2 October 1991, was the last crew to launch from the USSR and continued the occupation of Mir during the fall of the Soviet Union. The crew launched as Soviet citizens and returned to Earth on 25 March 1992 as Russians. The newly formed Russian Federal Space Agency (Roscosmos) was unable to finance the unlaunched Spektr and Priroda modules, instead putting them into storage and ending Mir's second expansion. The first human mission flown from an independent Kazakhstan was Soyuz TM-14, launched on 17 March 1992, which carried the EO-11 crew to Mir, docking on 19 March before the departure of Soyuz TM-13. On 17 June, Russian President Boris Yeltsin and US President George H. W. Bush announced what would later become the Shuttle-Mir programme, a cooperative venture which proved useful to the cash-strapped Roskosmos (and led to the eventual completion and launch of Spektr and Priroda). EO-12 followed in July, alongside a brief visit by French astronaut Michel Tognini. The following crew, EO-13, began preparations for the Shuttle-Mir programme by flying to the station in a modified spacecraft, Soyuz TM-16 (launched on 26 January 1993), which was equipped with an APAS-89 docking system rather than the usual probe-and-drogue, enabling it to dock to Kristall and test the port which would later be used by US Space Shuttles. The spacecraft also enabled controllers to obtain data on the dynamics of docking a spacecraft to a space station off the station's longitudinal axis, in addition to data on the structural integrity of this configuration, via a test called Rezonans conducted on 28 January. Soyuz TM-15, meanwhile, departed with the EO-12 crew on 1 February.
Throughout the period following the collapse of the USSR, crews on Mir experienced occasional reminders of the economic chaos occurring in Russia. The initial cancellation of Spektr and Priroda was the first such sign, followed by the reduction in communications as a result of the fleet of tracking ships being withdrawn from service by Ukraine. The new Ukrainian government also vastly raised the price of the Kurs docking systems, which were manufactured in Kyiv; the Russians' attempts to reduce their dependence on Kurs would later lead to accidents during TORU tests in 1997. Various Progress spacecraft had parts of their cargoes missing, either because the consumable in question had been unavailable, or because the ground crews at Baikonur had looted them. The problems became particularly obvious during the launch of the EO-14 crew aboard Soyuz TM-17 in July; shortly before launch there was a blackout at the pad, and the power supply to the nearby city of Leninsk failed an hour after launch. Nevertheless, the spacecraft launched on time and arrived at the station two days later. All of Mir's ports were occupied, and so Soyuz TM-17 had to station-keep 200 metres away from the station for half an hour before docking while Progress M-18 vacated the core module's front port and departed. The EO-13 crew departed on 22 July, and soon afterwards Mir passed through the annual Perseid meteor shower, during which the station was hit by several particles.
A spacewalk was conducted on 28 September to inspect the station's hull, but no serious damage was reported. Soyuz TM-18 arrived on 10 January 1994 carrying the EO-15 crew (including Valeri Polyakov, who was to remain on Mir for 14 months), and Soyuz TM-17 left on 14 January. The undocking was unusual in that the spacecraft was to pass along Kristall in order to obtain photographs of the APAS to assist in the training of Space Shuttle pilots. Due to an error in setting up the control system, the spacecraft struck the station a glancing blow during the manoeuvre, scratching the exterior of Kristall.
On 3 February 1994, Mir veteran Sergei Krikalev became the first Russian cosmonaut to launch on a US spacecraft, flying on the STS-60 mission. The launch of Soyuz TM-19, carrying the EO-16 crew, was delayed due to the unavailability of a payload fairing for the booster that was to carry it, but the spacecraft eventually left Earth on 1 July 1994 and docked two days later. The crew stayed only four months, to allow the Soyuz schedule to line up with the planned Space Shuttle manifest, and so Polyakov greeted a second resident crew in October, prior to the undocking of Soyuz TM-19, when the EO-17 crew arrived in Soyuz TM-20.
Shuttle–Mir
On 3 February 1995, the launch of Discovery, flying STS-63, opened operations on Mir. Referred to as the "near-Mir" mission, it saw the first rendezvous of a Space Shuttle with Mir as the orbiter approached within of the station, as a dress rehearsal for later docking missions and for equipment testing. Five weeks after Discovery's departure, the EO-18 crew, including Norman Thagard, the first American to fly to the station, arrived in Soyuz TM-21. The EO-17 crew left a few days later, with Polyakov completing his record-breaking 437-day spaceflight. During EO-18, the Spektr science module (which served as living and working space for American astronauts) was launched aboard a Proton rocket and docked to the station, carrying research equipment from America and other nations. The expedition's crew returned to Earth aboard Atlantis following the first Shuttle–Mir docking mission, STS-71. Atlantis, launched on 27 June 1995, successfully docked with Mir on 29 June, becoming the first US spacecraft to dock with a Russian spacecraft since the Apollo-Soyuz Test Project (ASTP) in 1975. The orbiter delivered the EO-19 crew and returned the EO-18 crew to Earth. The EO-20 crew were launched on 3 September, followed in November by the arrival of the docking module during STS-74.
On 21 February 1996, the two-man EO-21 crew was launched aboard Soyuz TM-23, and they were soon joined by US crew member Shannon Lucid, who was brought to the station by Atlantis during STS-76. During this mission, the first joint US spacewalk on Mir took place, deploying the Mir Environmental Effects Payload package on the docking module. Lucid became the first American to carry out a long-duration mission aboard Mir with her 188-day mission, which set the US single-spaceflight record. During Lucid's time aboard Mir, Priroda, the station's final module, arrived, as did French visitor Claudie Haigneré, flying the Cassiopée mission. The flight aboard Soyuz TM-24 also delivered the EO-22 crew of Valery Korzun and Aleksandr Kaleri. On 16 September 1996, with the launch of Atlantis on the STS-79 flight, Lucid's stay aboard Mir ended. During this fourth docking, John Blaha transferred onto Mir to take his place as resident US astronaut.
His stay on the station improved operations in a number of areas, including transfer procedures for a docked Space Shuttle, "hand-over" procedures for long-duration American crew members, and "ham" amateur radio communications, and included two spacewalks to reconfigure the station's power grid. Blaha spent four months with the EO-22 crew before returning to Earth aboard Atlantis on STS-81 in January 1997, at which point he was replaced by physician Jerry Linenger. During his flight, Linenger became the first American to conduct a spacewalk from a foreign space station and the first to test the Russian-built Orlan-M spacesuit, alongside Russian cosmonaut Vasili Tsibliyev, flying EO-23. All three crew members of EO-23 performed a "fly-around" in the Soyuz TM-25 spacecraft. Linenger and his Russian crewmates Vasili Tsibliyev and Aleksandr Lazutkin faced several difficulties during the mission, including the most severe fire aboard an orbiting spacecraft (caused by a malfunctioning Vika), failures of various systems, a near collision with Progress M-33 during a long-distance TORU test, and a total loss of station electrical power. The power failure also caused a loss of attitude control, which led to an uncontrolled "tumble" through space.
Linenger was succeeded by Anglo-American astronaut Michael Foale, carried up by Atlantis on STS-84, alongside Russian mission specialist Elena Kondakova. Foale's increment proceeded fairly normally until 25 June, when, during the second test of the Progress manual docking system, TORU, Progress M-34 collided with solar arrays on the Spektr module and crashed into the module's outer shell, puncturing the module and causing depressurisation on the station. Only quick actions on the part of the crew, cutting cables leading to the module and closing Spektr's hatch, prevented the crew from having to abandon the station in Soyuz TM-25. Their efforts stabilised the station's air pressure, whilst the pressure in Spektr, containing many of Foale's experiments and personal effects, dropped to a vacuum. In an effort to restore some of the power and systems lost following the isolation of Spektr and to attempt to locate the leak, EO-24 commander Anatoly Solovyev and flight engineer Pavel Vinogradov carried out a risky salvage operation later in the flight, entering the empty module during a so-called "intra-vehicular activity" or "IVA" spacewalk and inspecting the condition of hardware and running cables through a special hatch from Spektr's systems to the rest of the station. Following these first investigations, Foale and Solovyev conducted a six-hour EVA outside Spektr to inspect the damage.
After these incidents, the US Congress and NASA considered whether to abandon the programme out of concern for the astronauts' safety, but NASA administrator Daniel Goldin decided to continue. The next flight to Mir, STS-86, carried David Wolf aboard Atlantis. During the orbiter's stay, Titov and Parazynski conducted a spacewalk to affix a cap to the docking module for a future attempt by crew members to seal the leak in Spektr's hull. Wolf spent 119 days aboard Mir with the EO-24 crew and was replaced during STS-89 with Andy Thomas, who carried out the last US expedition on Mir. The EO-25 crew arrived in Soyuz TM-27 in January 1998 before Thomas returned to Earth on the final Shuttle–Mir mission, STS-91.
Final days and deorbit
Following the 8 June 1998 departure of Discovery, the EO-25 crew of Budarin and Musabayev remained on Mir, completing materials experiments and compiling a station inventory. On 2 July, Roskosmos director Yuri Koptev announced that, due to a lack of funding to keep Mir active, the station would be deorbited in June 1999. The EO-26 crew of Gennady Padalka and Sergei Avdeyev arrived on 15 August in Soyuz TM-28, alongside physicist Yuri Baturin, who departed with the EO-25 crew on 25 August in Soyuz TM-27. The crew carried out two spacewalks, one inside Spektr to reseat some power cables and another outside to set up experiments delivered by Progress M-40, which also carried a large amount of propellant to begin alterations to Mir's orbit in preparation for the station's decommissioning. 20 November 1998 saw the launch of Zarya, the first module of the ISS, but delays to the new station's service module Zvezda had led to calls for Mir to be kept in orbit past 1999. Roscosmos confirmed that it would not fund Mir past the set deorbit date. The crew of EO-27, Viktor Afanasyev and Jean-Pierre Haigneré, arrived in Soyuz TM-29 on 22 February 1999 alongside Ivan Bella, who returned to Earth with Padalka in Soyuz TM-28. The crew carried out three EVAs to retrieve experiments and deploy a prototype communications antenna on Sofora. On 1 June it was announced that the deorbit of the station would be delayed by six months to allow time to seek alternative funding to keep the station operating. The rest of the expedition was spent preparing the station for its deorbit; a special analog computer was installed and each of the modules, starting with the docking module, was mothballed in turn and sealed off. The crew loaded their results into Soyuz TM-29 and departed Mir on 28 August 1999, ending a run of continuous occupation which had lasted for eight days short of ten years. The station's control moment gyroscopes (CMGs, or "gyrodynes") and main computer were shut down on 7 September, leaving Progress M-42 to control Mir and refine the station's orbital decay rate.
Near the end of its life, there were plans for private interests to purchase Mir, possibly for use as the first orbital television and movie studio. The privately funded Soyuz TM-30 mission by MirCorp, launched on 4 April 2000, carried two crew members, Sergei Zalyotin and Aleksandr Kaleri, to the station for two months to do repair work, in the hope of proving that the station could be made safe. This was to be the last crewed mission to Mir; while Russia was optimistic about Mir's future, its commitments to the ISS project left no funding to support the ageing station.
Mir's deorbit was carried out in three stages. The first stage involved waiting for atmospheric drag to reduce the station's orbit to an average of . This began with the docking of Progress M1-5, a modified version of the Progress-M carrying 2.5 times more fuel in place of supplies. The second stage was the transfer of the station into a 165 × 220 km (103 × 137 mi) orbit. This was achieved with two burns of Progress M1-5's control engines at 00:32 UTC and 02:01 UTC on 23 March 2001. After a two-orbit pause, the third and final stage of the deorbit began with the burn of Progress M1-5's control engines and main engine at 05:08 UTC, lasting just over 22 minutes. Atmospheric reentry (arbitrarily defined as beginning at 100 km/60 mi AMSL) occurred at 05:44 UTC near Nadi, Fiji.
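The scale of such deorbit burns can be estimated with the vis-viva equation. The Python sketch below is illustrative only: the pre-deorbit circular altitude of about 220 km and the 80 km entry perigee are assumptions chosen for the example (the article quotes only the 165 × 220 km intermediate orbit), and atmospheric drag is ignored.

```python
import math

MU = 398_600.0     # km^3/s^2, Earth's gravitational parameter (standard value)
R_EARTH = 6_371.0  # km, mean Earth radius

def speed(r_km, a_km):
    """Orbital speed at radius r for an orbit with semi-major axis a (vis-viva)."""
    return math.sqrt(MU * (2.0 / r_km - 1.0 / a_km))

r_apo = R_EARTH + 220.0               # assumed apogee / starting circular radius
r_per_intermediate = R_EARTH + 165.0  # quoted intermediate perigee
r_per_entry = R_EARTH + 80.0          # assumed entry-interface perigee

a_circular = r_apo
a_intermediate = (r_apo + r_per_intermediate) / 2.0
a_final = (r_apo + r_per_entry) / 2.0

# Each burn is applied near apogee, lowering the perigee step by step.
dv1 = speed(r_apo, a_circular) - speed(r_apo, a_intermediate)
dv2 = speed(r_apo, a_intermediate) - speed(r_apo, a_final)

print(f"Perigee lowering to 165 km: ~{dv1 * 1000:.0f} m/s")
print(f"Perigee lowering to 80 km:  ~{dv2 * 1000:.0f} m/s")
```

Under these assumptions the burns come out at only a few tens of metres per second each, which is why a single fuel-heavy Progress could handle the whole sequence.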
Major destruction of the station began around 05:52 UTC and most of the unburned fragments fell into the South Pacific Ocean around 06:00 UTC. Visiting spacecraft Mir was primarily supported by the Russian Soyuz and Progress spacecraft and had two ports available for docking them. Initially, the fore and aft ports of the core module could be used for dockings, but following the permanent berthing of Kvant-1 to the aft port in 1987, the rear port of the new module took on this role from the core module's aft port. Each port was equipped with the plumbing required for Progress cargo ferries to replace the station's fluids and also the guidance systems needed to guide the spacecraft for docking. Two such systems were used on Mir; the rear ports of both the core module and Kvant-1 were equipped with both the Igla and Kurs systems, whilst the core module's forward port featured only the newer Kurs. Soyuz spacecraft provided personnel access to and from the station allowing for crew rotations and cargo return, and also functioned as a lifeboat for the station, allowing for a relatively quick return to Earth in the event of an emergency. Two models of Soyuz flew to Mir; Soyuz T-15 was the only Igla-equipped Soyuz-T to visit the station, whilst all other flights used the newer, Kurs-equipped Soyuz-TM. A total of 31 (30 crewed, 1 uncrewed) Soyuz spacecraft flew to the station over a fourteen-year period. The uncrewed Progress cargo vehicles were only used to resupply the station, carrying a variety of cargoes including water, fuel, food and experimental equipment. The spacecraft were not equipped with reentry shielding and so, unlike their Soyuz counterparts, were incapable of surviving reentry. As a result, when its cargo had been unloaded, each Progress was refilled with rubbish, spent equipment and other waste which was destroyed, along with the Progress itself, on reentry. In order to facilitate cargo return, ten Progress flights carried Raduga capsules, which could return around 150 kg of experimental results to Earth automatically. Mir was visited by three separate models of Progress; the original 7K-TG variant equipped with Igla (18 flights), the Progress-M model equipped with Kurs (43 flights), and the modified Progress-M1 version (3 flights), which together flew a total of 64 resupply missions. Whilst the Progress spacecraft usually docked automatically without incident, the station was equipped with a remote manual docking system, TORU, in case problems were encountered during the automatic approaches. With TORU, cosmonauts could guide the spacecraft safely in to dock (with the exception of the catastrophic docking of Progress M-34, when the long-range use of the system resulted in the spacecraft striking the station, damaging Spektr and causing decompression). In addition to the routine Soyuz and Progress flights, it was anticipated that Mir would also be the destination for flights by the Soviet Buran space shuttle, which was intended to deliver extra modules (based on the same "37K" bus as Kvant-1) and provide a much improved cargo return service to the station. Kristall carried two Androgynous Peripheral Attach System (APAS-89) docking ports designed to be compatible with the shuttle. One port was to be used for Buran; the other for the planned Pulsar X-2 telescope, also to be delivered by Buran. 
The cancellation of the Buran programme meant these capabilities were not realised until the 1990s when the ports were used instead by US Space Shuttles as part of the Shuttle-Mir programme (after testing by the specially modified Soyuz TM-16 in 1993). Initially, visiting Space Shuttle orbiters docked directly to Kristall, but this required the relocation of the module to ensure sufficient distance between the shuttle and Mir's solar arrays. To eliminate the need to move the module and retract solar arrays for clearance issues, a Mir Docking Module was later added to the end of Kristall. The shuttles provided crew rotation of the American astronauts on station and carried cargo to and from the station, performing some of the largest transfers of cargo of the time. With a space shuttle docked to Mir, the temporary enlargement of living and working areas amounted to a complex that was the largest spacecraft in history at that time, with a combined mass of around 250 tonnes. Mission control centre Mir and its resupply missions were controlled from the Russian mission control centre in Korolyov, near the RKK Energia plant. Referred to by its acronym ЦУП ("TsUP"), or simply as 'Moscow', the facility could process data from up to ten spacecraft in three separate control rooms, although each control room was dedicated to a single programme: one to Mir; one to Soyuz; and one to the Soviet space shuttle Buran (which was later converted for use with the ISS). The facility is now used to control the Russian Orbital Segment of the ISS. The flight control team were assigned roles similar to the system used by NASA at their mission control centre in Houston, including: The Flight Director, who provided policy guidance and communicated with the mission management team; The Flight Shift Director, who was responsible for real-time decisions within a set of flight rules; The Mission Deputy Shift Manager (MDSM) for the MCC, who was responsible for the control room's consoles, computers and peripherals; The MDSM for Ground Control, who was responsible for communications; The MDSM for Crew Training, who was similar to NASA's 'capcom,' or capsule communicator; usually someone who had served as the Mir crew's lead trainer. Unused equipment Three command and control modules were constructed for the Mir program. One was used in space; one remained in a Moscow warehouse as a source of repair parts if needed, and the third was sold to an educational and entertainment complex in the US in 1997. Tommy Bartlett Exploratory purchased the unit and had it shipped to Wisconsin Dells, Wisconsin, where it became the centrepiece of the complex's Space Exploration wing. Safety aspects Ageing systems and atmosphere In the later years of the programme, particularly during the Shuttle-Mir programme, Mir suffered from various systems failures. It had been designed for five years of use, but eventually flew for fifteen, and in the 1990s was showing its age, with frequent computer crashes, loss of power, uncontrolled tumbles through space and leaking pipes. Jerry Linenger in his book about his time on the facility says that the cooling system had developed tiny leaks, too small and numerous to be repaired, which permitted the constant release of coolant. He says that it was especially noticeable after he had made a spacewalk and become used to the bottled air in his spacesuit. 
When he returned to the station and again began breathing the air inside Mir, he was shocked by the intensity of the smell and worried about the possible negative health effects of breathing such contaminated air. Various breakdowns of the Elektron oxygen-generating system were a concern; they led crews to become increasingly reliant on the backup Vika solid-fuel oxygen generator (SFOG) systems, which led to a fire during the handover between EO-22 and EO-23. (see also ISS ECLSS) Accidents Several accidents occurred which threatened the station's safety, such as the glancing collision between Kristall and Soyuz TM-17 during proximity operations in January 1994. The three most alarming incidents occurred during EO-23. The first was on 23 February 1997 during the handover period from EO-22 to EO-23, when a malfunction occurred in the backup Vika system, a chemical oxygen generator later known as solid-fuel oxygen generator (SFOG). The Vika malfunction led to a fire which burned for around 90 seconds (according to official sources at the TsUP; astronaut Jerry Linenger insists the fire burned for around 14 minutes), and produced large amounts of toxic smoke that filled the station for around 45 minutes. This forced the crew to don respirators, but some of the respirator masks initially worn were broken. Some of the fire extinguishers mounted on the walls of the newer modules were immovable. The other two accidents concerned testing of the station's TORU manual docking system to manually dock Progress M-33 and Progress M-34. The tests were to gauge the performance of long-distance docking and the feasibility of removal of the expensive Kurs automatic docking system from Progress spacecraft. Due to malfunctioning equipment, both tests failed, with Progress M-33 narrowly missing the station and Progress M-34 striking Spektr and puncturing the module, causing the station to depressurise and leading to Spektr being permanently sealed off. This in turn led to a power crisis aboard Mir as the module's solar arrays produced a large proportion of the station's electrical supply, causing the station to power down and begin to drift, requiring weeks of work to rectify before work could continue as normal. Radiation and orbital debris Without the protection of the Earth's atmosphere, cosmonauts were exposed to higher levels of radiation from a steady flux of cosmic rays and trapped protons from the South Atlantic Anomaly. The station's crews were exposed to an absorbed dose of about 5.2 cGy over the course of the Mir EO-18 expedition, producing an equivalent dose of 14.75 cSv, or 1133 μSv per day. This daily dose is approximately that received from natural background radiation on Earth in two years. The radiation environment of the station was not uniform; closer proximity to the station's hull led to an increased radiation dose, and the strength of radiation shielding varied between modules; Kvant-2's being better than the core module, for instance. The increased radiation levels pose a higher risk of crews developing cancer, and can cause damage to the chromosomes of lymphocytes. These cells are central to the immune system and so any damage to them could contribute to the lowered immunity experienced by cosmonauts. Over time, in theory, lowered immunity results in the spread of infection between crew members, especially in such confined areas. To avoid this only healthy people were permitted aboard. Radiation has also been linked to a higher incidence of cataracts in cosmonauts. 
Protective shielding and protective drugs may lower the risks to an acceptable level, but data is scarce and longer-term exposure will result in greater risks. At the low altitudes at which Mir orbited there is a variety of space debris, consisting of everything from entire spent rocket stages and defunct satellites, to explosion fragments, paint flakes, slag from solid rocket motors, coolant released by RORSAT nuclear powered satellites, small needles, and many other objects. These objects, in addition to natural micrometeoroids, posed a threat to the station as they could puncture pressurised modules and cause damage to other parts of the station, such as the solar arrays. Micrometeoroids also posed a risk to spacewalking cosmonauts, as such objects could puncture their spacesuits, causing them to depressurise. Meteor showers in particular posed a risk, and, during such storms, the crews slept in their Soyuz ferries to facilitate an emergency evacuation should Mir be damaged.
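As a rough cross-check of the dose figures quoted above (an illustration added here, using only the numbers given in the text), the implied mean radiation quality factor and the expedition length implied by the quoted daily rate can be derived directly:

/* Illustrative cross-check of the dose figures quoted above; the only inputs
   are the numbers given in the text itself. */
#include <stdio.h>

int main(void)
{
    double absorbed_cGy   = 5.2;      /* quoted absorbed dose over Mir EO-18  */
    double equivalent_cSv = 14.75;    /* quoted equivalent dose               */
    double daily_uSv      = 1133.0;   /* quoted daily equivalent dose         */

    /* Mean quality factor implied by the two dose figures (cSv per cGy). */
    printf("Implied mean quality factor: %.2f\n", equivalent_cSv / absorbed_cGy);

    /* Expedition length implied by the quoted daily rate (1 cSv = 10,000 uSv). */
    printf("Implied expedition length:   %.0f days\n",
           equivalent_cSv * 10000.0 / daily_uSv);

    return 0;
}

Run as written, this gives a quality factor of about 2.8 and an implied expedition length of roughly 130 days.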
Technology
Crewed spacecraft
null
81429
https://en.wikipedia.org/wiki/Rangefinder%20camera
Rangefinder camera
A rangefinder camera is a camera fitted with a rangefinder, typically a split-image rangefinder: a range-finding focusing mechanism allowing the photographer to measure the subject distance and take photographs that are in sharp focus. Most varieties of rangefinder show two images of the same subject, one of which moves when a calibrated wheel is turned; when the two images coincide and fuse into one, the distance can be read off the wheel. Older, non-coupled rangefinder cameras display the focusing distance and require the photographer to transfer the value to the lens focus ring; cameras without built-in rangefinders could have an external rangefinder fitted into the accessory shoe. Earlier cameras of this type had separate viewfinder and rangefinder windows; later the rangefinder was incorporated into the viewfinder. More modern designs have rangefinders coupled to the focusing mechanism so that the lens is focused correctly when the rangefinder images fuse; compare with the focusing screen in non-autofocus SLRs. Almost all digital cameras, and most later film cameras, measure distance using electroacoustic or electronic means and focus automatically (autofocus); however, it is not customary to speak of this functionality as a rangefinder. History The first rangefinders, sometimes called "telemeters", appeared in the twentieth century; the first rangefinder camera to be marketed was the 3A Kodak Autographic Special of 1916; the rangefinder was coupled. Not itself a rangefinder camera, the Leica I of 1925 had popularized the use of accessory rangefinders. The Leica II and Zeiss Contax I, both of 1932, were great successes as 35 mm rangefinder cameras, while on the Leica Standard, also introduced in 1932, the rangefinder was omitted. The Contax II (1936) integrated the rangefinder in the center of the viewfinder. Rangefinder cameras were common from the 1930s to the 1970s, but the more advanced models lost ground to single-lens reflex (SLR) cameras. Rangefinder cameras have been made in all sizes and all film formats over the years, from 35 mm through medium format (rollfilm) to large-format press cameras. Until the mid-1950s, rangefinders were generally fitted only to more expensive models of cameras. Folding bellows rollfilm cameras, such as the Balda Super Baldax or Mess Baldix, the Kodak Retina II, IIa, IIc, IIIc, and IIIC cameras and the Hans Porst Hapo 66e (a cheaper version of the Balda Mess Baldix), were often fitted with rangefinders. The best-known rangefinder cameras take 35 mm film, use focal plane shutters, and have interchangeable lenses. These are Leica screwmount (also known as M39) cameras developed for lens manufacturer Ernst Leitz Wetzlar by Oskar Barnack (which gave rise to very many imitations and derivatives), Contax cameras manufactured for Carl Zeiss Optics by camera subsidiary Zeiss-Ikon (and, after Germany's defeat in World War II, produced again and then developed as the Soviet Kiev), Nikon S-series cameras from 1951 to 1962 (with design inspired by the Contax and function by the Leica), and Leica M-series cameras. The Nikon rangefinder cameras were "discovered" in 1950 by Life magazine photographer David Douglas Duncan, who covered the Korean War. Canon manufactured several models from the 1930s until the 1960s; models from 1946 onwards were more or less compatible with the Leica thread mount. (From late 1951 they were completely compatible; the 7 and 7s had a bayonet mount for the 50 mm f/0.95 lens in addition to the thread mount for other lenses.) 
Launched in 1940, the Kodak 35 Rangefinder was the first 35 mm camera made by the Eastman Kodak Company. Other such cameras include the Casca (Steinheil, West Germany, 1948), Detrola 400 (USA, 1940–41), Ektra (Kodak, USA, 1941–8), Foca (OPL, France, 1947–63), Foton (Bell & Howell, USA, 1948), Opema II (Meopta, Czechoslovakia, 1955–60), Perfex (USA, 1938–49), Robot Royal (Robot-Berning, West Germany, 1955–76), and Witness (Ilford, Britain, 1953). In the United States the dependable and cheap Argus (especially the ubiquitous C-3 "Brick") was far and away the most popular 35 mm rangefinder, with millions sold. Interchangeable-lens rangefinder cameras with focal-plane shutters are greatly outnumbered by fixed-lens leaf-shutter rangefinder cameras. The most popular designs in the 1950s were folding designs like the Kodak Retina and the Zeiss Contessa. In the 1960s many fixed-lens 35 mm rangefinder cameras for the amateur market were produced by several manufacturers, mainly Japanese, including Canon, Fujica, Konica, Mamiya, Minolta, Olympus, Petri Camera, Ricoh, and Yashica. Distributors such as Vivitar and Revue often sold rebranded versions of these cameras. While designed to be compact like the Leica, they were much less expensive. Many of them, such as the Minolta 7sII and the Vivitar 35ES, were fitted with high-speed, extremely high quality optics. Though eventually replaced in the market with newer compact autofocus cameras, many of these older rangefinders continue to operate, having outlived most of their newer (and less well-constructed) successors. Starting with a camera made by the small Japanese company Yasuhara in the 1990s, there has been something of a revival of rangefinder cameras. Aside from the Leica M series, rangefinder models from this period include the Konica Hexar RF; the Voigtländer Bessa T/R/R2/R3/R4 made by Cosina (the last three are made in both manual and aperture-priority automatic versions, denoted respectively by an "M" or an "A" in the model name); and the Hasselblad Xpan/Xpan 2. Zeiss had a new model called the Zeiss Ikon, also made by Cosina but now discontinued, while Nikon has also produced expensive limited editions of its S3 and SP rangefinders to satisfy the demands of collectors and aficionados. Cameras from the former Soviet Union—the Zorki and FED, based on the screwmount Leica, and the Kiev—are plentiful in the used market. Medium-format rangefinder cameras continued to be produced until 2014. Recent models included the Mamiya 6 and 7/7II, the Bronica RF645 and the Fuji G, GF, GS, GW and GSW series. In 1994, Contax introduced an autofocus rangefinder camera, the Contax G. Digital rangefinder Digital imaging technology was applied to rangefinder cameras for the first time in 2004, with the introduction of the Epson R-D1, the first ever digital rangefinder camera. The R-D1 was a collaboration between Epson and Cosina. The R-D1 and later R-D1s use Leica M-mount lenses, or earlier Leica screw mount lenses with an adapter. After the discontinuation of the R-D1, only Leica M digital rangefinders were in production until the introduction of two additional rangefinders in late 2018: the Pixii Camera (A1112) from France-based firm Pixii SAS; and the re-emergence of the Russian camera manufacturer Zenit with the limited release (500 units) Zenit M designed in Krasnogorsk and made in collaboration with Leica. 
Both the Pixii and the Zenit M are true mechanical rangefinders, and they employ the Leica M mount, affording compatibility with current lens lines from Voigtlander, Zeiss, and Leica themselves. Leica M Leica released its first digital rangefinder camera, the Leica M8, in 2006. The M8 and R-D1 are expensive compared to more common digital SLRs, and lack several features that are common with modern digital cameras, such as autofocus, live preview, movie recording, and face detection. They have no real telephoto lenses available beyond 135 mm focal length and very limited macro ability. Later, Leica released the Leica M (Typ 240) digital rangefinder, which adds live preview, video recording and focusing assistance, the Leica M Monochrom, which is similar to the Leica M9 but shoots solely in black and white, the Leica M Edition 60 which is similar to the M (Typ 240) but omits a rear display panel as a homage to film cameras, and the M10 and M11 without video recording. Pros and cons Viewfinder parallax The viewfinder of a rangefinder camera is offset from the picture-taking lens so that the image viewed is not exactly what will be recorded on the film; this parallax error is negligible at large subject distances but becomes significant as the distance decreases. For extreme close-up photography, the rangefinder camera is awkward to use, as the viewfinder no longer points at the subject. More advanced rangefinder cameras project into the viewfinder a brightline frame that moves as the lens is focused, correcting parallax error down to the minimum distance at which the rangefinder functions. The angle of view of a given lens also changes with distance, and the brightline frames in the finders of a few cameras automatically adjust for this as well. In contrast, the viewfinder pathway of an SLR transmits an image directly "through the lens". This eliminates parallax errors at any subject distance, thus allowing for macro photography. It also removes the need to have separate viewfinders for different lens focal lengths. In particular, this allows for extreme telephoto lenses which would otherwise be very hard to focus and compose with a rangefinder. Furthermore, the through-the-lens view allows the viewfinder to directly display the depth of field for a given aperture, which is not possible with a rangefinder design. To compensate for this, rangefinder users often use zone focusing, which is especially applicable to the rapid-fire approach to street photography. Large lenses block viewfinder Larger lenses may block a portion of the view seen through the viewfinder, potentially a significant proportion. A side effect of this is that lens designers are forced to use smaller designs. Lens hoods used for rangefinder cameras may have a different shape to those with other cameras, with openings cut out of them to increase the visible area. Difficulty integrating zoom lenses The rangefinder design is not readily adapted for use with zoom lenses, which have a continuously variable field of view. The only true zoom lens for rangefinder cameras is the Contax G2 Carl Zeiss 35–70 mm Vario-Sonnar T* Lens with built-in zoom viewfinder. A very few lenses, such as the Konica M-Hexanon Dual or Leica Tri-Elmar, let the user select among two or three focal lengths; the viewfinder must be designed to work with all focal lengths of any lens used. The rangefinder may become misaligned, leading to incorrect focusing. 
Historically unobtrusive Rangefinder cameras are often quieter, particularly with leaf shutters, and smaller than competing SLR models. These qualities once made rangefinders more attractive for theater photography, some portrait photography, candid and street photography, and any application where an SLR is too large or obtrusive. However, today mirrorless digital cameras are capable of excellent low light performance, are much smaller and completely silent. Absence of mirror The absence of a mirror allows the rear element of lenses to project deep into the camera body, making high-quality wide-angle lenses easier to design. The Voigtländer 12 mm lens is the widest-angle rectilinear lens in general production, with a 121-degree angle of view; only recently have equivalent SLR lenses become available, though optically inferior. The absence of a mirror also means that rangefinder lenses have the potential to be significantly smaller than equivalent lenses for SLRs as they need not accommodate mirror swing. This ability to have high quality lenses and camera bodies in a compact form made Leica cameras and other rangefinders particularly appealing to photojournalists. Since there is no moving mirror, as used in SLRs, there is no momentary blackout of the subject being photographed. Field of view Rangefinder viewfinders usually have a field of view slightly greater than the lens in use. This allows the photographer to be able to see what is going on outside of the frame, and therefore better anticipate the action, at the expense of a smaller image. In addition, with viewfinders with magnifications larger than 0.8x (e.g. some Leica cameras, the Epson RD-1/s, Canon 7, Nikon S, and in particular the Voigtländer Bessa R3A and R3M with their 1:1 magnification), photographers can keep both eyes open and effectively see a floating viewfinder frame superimposed on their real world view. This kind of two-eyed viewing is also possible with an SLR, using a lens focal length that results in a net viewfinder magnification close to 1.0 (usually a focal length slightly longer than a normal lens); use of a much different focal length would result in a viewfinder with a different magnification than the open eye, making fusion of the images impossible. There is also the difference of the eye-level since the eye looking in the viewfinder actually sees the frame from slightly below the other eye. This means that the final image perceived by the viewer will not be totally even, but rather leaning on one side. This issue can be avoided by shooting in vertical (i.e. portrait) orientation, shooting style and framing allowing. Use of filters If filters that absorb much light or change the colour of the image are used, it is difficult to compose, view, and focus on an SLR, but the image through a rangefinder viewfinder is unaffected. On the other hand, some filters, such as graduated filters and polarizers, are best used with SLRs as the effects they create need to be viewed directly.
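As a rough numerical sketch of the coincidence-rangefinder principle described earlier (added here as an illustration; the 70 mm base length and the angles are assumed, typical values rather than data for any particular camera), the subject distance follows from simple triangulation across the rangefinder base:

/* Illustrative triangulation behind a coincidence rangefinder: with a base
   length b between the two rangefinder windows and a deflection angle theta
   of the rotating mirror or prism, the subject distance is roughly
   d = b / tan(theta). All numbers below are assumed values. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double base_m = 0.070;                       /* assumed 70 mm rangefinder base */
    double angles_deg[] = { 4.0, 2.0, 0.8, 0.4 };

    for (int i = 0; i < 4; i++) {
        double theta = angles_deg[i] * PI / 180.0;
        printf("deflection %.1f deg -> subject distance %.1f m\n",
               angles_deg[i], base_m / tan(theta));
    }
    return 0;
}

With these assumed values the computed distances run from about 1 m at 4 degrees to about 10 m at 0.4 degrees, which illustrates why a longer rangefinder base (or higher viewfinder magnification) improves focusing accuracy at long distances.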
Technology
Photography
null
81560
https://en.wikipedia.org/wiki/Zeros%20and%20poles
Zeros and poles
In complex analysis (a branch of mathematics), a pole is a certain type of singularity of a complex-valued function of a complex variable. It is the simplest type of non-removable singularity of such a function (see essential singularity). Technically, a point z0 is a pole of a function f if it is a zero of the function 1/f and 1/f is holomorphic (i.e. complex differentiable) in some neighbourhood of z0. A function f is meromorphic in an open set U if for every point z of U there is a neighbourhood of z in which at least one of f and 1/f is holomorphic. If f is meromorphic in U, then a zero of f is a pole of 1/f, and a pole of f is a zero of 1/f. This induces a duality between zeros and poles, that is fundamental for the study of meromorphic functions. For example, if a function is meromorphic on the whole complex plane plus the point at infinity, then the sum of the multiplicities of its poles equals the sum of the multiplicities of its zeros. Definitions A function f of a complex variable z is holomorphic in an open domain U if it is differentiable with respect to z at every point of U. Equivalently, it is holomorphic if it is analytic, that is, if its Taylor series exists at every point of U, and converges to the function in some neighbourhood of the point. A function f is meromorphic in U if every point of U has a neighbourhood such that at least one of f and 1/f is holomorphic in it. A zero of a meromorphic function f is a complex number z such that f(z) = 0. A pole of f is a zero of 1/f. If f is a function that is meromorphic in a neighbourhood of a point z0 of the complex plane, then there exists an integer n such that (z − z0)^n f(z) is holomorphic and nonzero in a neighbourhood of z0 (this is a consequence of the analytic property). If n > 0, then z0 is a pole of order (or multiplicity) n of f. If n < 0, then z0 is a zero of order |n| of f. Simple zero and simple pole are terms used for zeroes and poles of order 1. Degree is sometimes used synonymously to order. This characterization of zeros and poles implies that zeros and poles are isolated, that is, every zero or pole has a neighbourhood that does not contain any other zero and pole. Because the order of zeros and poles is defined as a non-negative number and because of the symmetry between them, it is often useful to consider a pole of order n as a zero of order −n and a zero of order n as a pole of order −n. In this case a point that is neither a pole nor a zero is viewed as a pole (or zero) of order 0. A meromorphic function may have infinitely many zeros and poles. This is the case for the gamma function, which is meromorphic in the whole complex plane, and has a simple pole at every non-positive integer. The Riemann zeta function is also meromorphic in the whole complex plane, with a single pole of order 1 at z = 1. Its zeros in the left halfplane are all the negative even integers, and the Riemann hypothesis is the conjecture that all other zeros are along the line Re(z) = 1/2. In a neighbourhood of a point z0, a nonzero meromorphic function f is the sum of a Laurent series with at most finite principal part (the terms with negative index values): f(z) = Σ_{k ≥ −n} a_k (z − z0)^k, where n is an integer and a_{−n} ≠ 0. Again, if n > 0 (the sum starts with a_{−n} (z − z0)^{−n}, the principal part has n terms), one has a pole of order n, and if n ≤ 0 (the sum starts with a_{|n|} (z − z0)^{|n|}, there is no principal part), one has a zero of order |n|. At infinity A function f is meromorphic at infinity if it is meromorphic in some neighbourhood of infinity (that is, outside some disk), and there is an integer n such that lim_{z→∞} f(z)/z^n exists and is a nonzero complex number. In this case, the point at infinity is a pole of order n if n > 0, and a zero of order |n| if n < 0. 
For example, a polynomial of degree n has a pole of order n at infinity. The complex plane extended by a point at infinity is called the Riemann sphere. If f is a function that is meromorphic on the whole Riemann sphere, then it has a finite number of zeros and poles, and the sum of the orders of its poles equals the sum of the orders of its zeros. Every rational function is meromorphic on the whole Riemann sphere, and, in this case, the sum of orders of the zeros or of the poles is the maximum of the degrees of the numerator and the denominator. Examples The function f(z) = 3/z is meromorphic on the whole Riemann sphere. It has a pole of order 1, or simple pole, at z = 0 and a simple zero at infinity. The function f(z) = (z + 2)/(z^2 (z − 5)^3) is meromorphic on the whole Riemann sphere. It has a pole of order 2 at z = 0 and a pole of order 3 at z = 5. It has a simple zero at z = −2 and a quadruple zero at infinity. The function f(z) = 1/sin(z) is meromorphic in the whole complex plane, but not at infinity. It has poles of order 1 at z = kπ for every integer k. This can be seen by writing the Taylor series of sin(z) around the origin, which shows that sin has a simple zero there; the simple poles at the other multiples of π follow by periodicity. The function f(z) = z has a single pole at infinity of order 1, and a single zero at the origin. All above examples except for the third are rational functions. Function on a curve The concept of zeros and poles extends naturally to functions on a complex curve, that is, a complex analytic manifold of dimension one (over the complex numbers). The simplest examples of such curves are the complex plane and the Riemann sphere. This extension is done by transferring structures and properties through charts, which are analytic isomorphisms. More precisely, let f be a function from a complex curve M to the complex numbers. This function is holomorphic (resp. meromorphic) in a neighbourhood of a point z of M if there is a chart φ such that f ∘ φ^−1 is holomorphic (resp. meromorphic) in a neighbourhood of φ(z). Then, z is a pole or a zero of order n if the same is true for f ∘ φ^−1 at φ(z). If the curve is compact, and the function f is meromorphic on the whole curve, then the number of zeros and poles is finite, and the sum of the orders of the poles equals the sum of the orders of the zeros. This is one of the basic facts that are involved in the Riemann–Roch theorem.
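A short worked illustration (added here as an example; the particular function is arbitrary) of how the definitions above determine the orders of zeros and poles, including at infinity:

% Illustrative example: orders of zeros and poles of a rational function.
\[
  f(z) \;=\; \frac{(z-1)^{2}}{z^{3}\,(z+2)}
\]
% f has a pole of order 3 at z = 0 and a simple pole at z = -2, since
% z^3 f(z) and (z+2) f(z) are holomorphic and nonzero near those points.
% f has a zero of order 2 at z = 1, because 1/f has a pole of order 2 there.
% At infinity, numerator degree 2 and denominator degree 4 give
\[
  \lim_{z\to\infty} z^{2} f(z) = 1,
\]
% so f has a zero of order 2 at infinity. On the Riemann sphere the orders
% balance: poles 3 + 1 = 4 = 2 + 2 zeros, the maximum of the two degrees.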
Mathematics
Complex analysis
null
81601
https://en.wikipedia.org/wiki/Windows%20API
Windows API
The Windows API, informally WinAPI, is the foundational application programming interface (API) that allows a computer program to access the features of the Microsoft Windows operating system in which the program is running. Programs access API functionality via dynamic-link library (DLL) technology. Each major version of the Windows API has a distinct name that identifies a compatibility aspect of that version. For example, Win32 is the major version of Windows API that runs on 32-bit systems. The name, Windows API, collectively refers to all versions of this capability of Windows. Microsoft provides developer support via a software development kit, Microsoft Windows SDK, which includes documentation and tools for building software based on the Windows API. Services This section lists notable services provided by the Windows API. Base Services Base services include features such as the file system, devices, processes, threads, and error handling. These functions reside in kernel.exe, krnl286.exe or krnl386.exe files on 16-bit Windows, and kernel32.dll and KernelBase.dll on 32 and 64 bit Windows. These files reside in the folder \Windows\System32 on all versions of Windows. Advanced Services Advanced services include features beyond the kernel, such as the Windows registry, shutdown/restart of the system (or abort), starting/stopping/creating a Windows service, and managing user accounts. These functions reside in advapi32.dll and advapires32.dll on 32-bit Windows. Graphics Device Interface The Graphics Device Interface (GDI) component provides features to output graphics content to monitors, printers, and other output devices. It resides in gdi.exe on 16-bit Windows, and gdi32.dll on 32-bit Windows in user-mode. Kernel-mode GDI support is provided by win32k.sys which communicates directly with the graphics driver. User Interface The User Interface component provides features to create and manage screen windows and most basic controls, such as buttons and scrollbars, receive mouse and keyboard input, and other functions associated with the graphical user interface (GUI) part of Windows. This functional unit resides in user.exe on 16-bit Windows, and user32.dll on 32-bit Windows. Since Windows XP versions, the basic controls reside in comctl32.dll, together with the common controls (Common Control Library). Common Dialog Box Library The Common Dialog Box Library provides standard dialog boxes to open and save files, choose color and font, etc. The library resides in a file called commdlg.dll on 16-bit Windows, and comdlg32.dll on 32-bit Windows. It is grouped under the User Interface category of the API. Common Control Library The Common Control Library provides access to advanced user interface controls such as status bars, progress bars, toolbars and tabs. The library resides in a DLL file called commctrl.dll on 16-bit Windows, and comctl32.dll on 32-bit Windows. It is grouped under the User Interface category of the API. Windows Shell The Windows Shell component provides access to the operating system shell. The component resides in shell.dll on 16-bit Windows, and shell32.dll on 32-bit Windows. The Shell Lightweight Utility Functions are in shlwapi.dll. It is grouped under the User Interface category of the API. Network Services Network Services provide access to the various networking abilities of the operating system. Its subcomponents include NetBIOS, Winsock, NetDDE, remote procedure call (RPC) and many more. This component resides in netapi32.dll on 32-bit Windows. 
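As an illustration of how a program consumes these services (a minimal sketch, not taken from any SDK sample; the message text is arbitrary), the following C program calls a Base Services function exported by kernel32.dll and a User Interface function exported by user32.dll:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char name[MAX_COMPUTERNAME_LENGTH + 1];
    DWORD size = sizeof(name);

    /* Base Services (kernel32.dll): query the NetBIOS computer name. */
    if (GetComputerNameA(name, &size))
        printf("Computer name: %s\n", name);

    /* User Interface (user32.dll): display a modal message box. */
    MessageBoxA(NULL, "Hello from the Windows API", "Example", MB_OK);

    return 0;
}

Most Windows compilers link kernel32 and user32 by default, so the program above needs no special build configuration beyond including windows.h.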
Web The Internet Explorer (IE) web browser exposes APIs and as such could be considered part of the Windows API. IE has been included with the operating system since Windows 95 OSR2 and has provided web-related services to applications since Windows 98. Program interaction The Windows API is a C language-based API. Functions and data structures are consumable via C syntax by including windows.h, but the API can be consumed via any programming language that can inter-operate with the API data structures and calling conventions for function calls and callbacks. Of note, the implementation of API functions has been developed in several languages other than C. Despite the fact that C is not an object-oriented programming (OOP) language, the Windows API is somewhat object-oriented due to its use of handles. Various other technologies from Microsoft and others make this object-oriented aspect more apparent by using an OOP language such as C++ -- see Microsoft Foundation Class Library (MFC), Visual Component Library (VCL), GDI+. Of note, Windows 8 provides the Windows API and the WinRT API, which is implemented in C++ and is object-oriented by design. Windows.pas is a Delphi unit that exposes the features of Windows API the Pascal equivalent of windows.h. Related technologies Many Microsoft technologies use the Windows API -- as most software running on Windows does. As middle-ware between Windows API and an application, the following technologies provide some access to Windows API. Some technologies are described as wrapping Windows API, but this is debatable since they don't provide or expose all of the capabilities of Windows API. Microsoft Foundation Class Library (MFC) exposes some of Windows API functionality in C++ classes, and thus allows a more object-oriented way to interact with the API Active Template Library (ATL) is a C++ template library that provides some Windows API access Windows Template Library (WTL) was developed as an extension to ATL, and intended as a smaller alternative to MFC Most application frameworks for Windows provide some access to Windows API; including .NET runtime and Java virtual machine and any other programming languages targeting Windows Various technologies for communicating between components and applications starting with Dynamic Data Exchange (DDE), which was superseded by Object Linking and Embedding (OLE) and later by the Component Object Model (COM), Automation Objects, ActiveX controls, and the .NET Framework Although almost all Windows programs use the Windows API, on the Windows NT line of operating systems, programs that start early in the Windows startup process use the Native API instead. History The Windows API has always exposed a large part of the underlying structure of the Windows systems to programmers. This had the advantage of giving them much flexibility and power over their applications, but also creates great responsibility in how applications handle various low-level, sometimes tedious, operations that are associated with a graphical user interface. For example, a beginning C programmer will often write the simple "hello world" as their first assignment. The working part of the program is only a single printf line within the main subroutine. 
The overhead for linking to the standard I/O library is also only one line:

#include <stdio.h>

int main(void)
{
    printf("Hello, World!\n");
}

Charles Petzold, who wrote several books about programming for the Windows API, said: "The original hello world program in the Windows 1.0 SDK was a bit of a scandal. HELLO.C was about 150 lines long, and the HELLO.RC resource script had another 20 or so more lines. (...) Veteran programmers often curled up in horror or laughter when encountering the Windows hello-world program." Petzold explains that while it was the first Windows sample program developers were introduced to, it was quite "fancy" and more complex than needed. Tired of people ridiculing the length of the sample, he eventually reduced it to a simple MessageBox call. Over the years, various changes and additions were made to Windows systems, and the Windows API changed and grew to reflect this. The Windows API for Windows 1.0 supported fewer than 450 function calls, whereas modern versions of the Windows API support thousands. However, in general, the interface remained fairly consistent, and an old Windows 1.0 application will still look familiar to a programmer who is used to the modern Windows API. Microsoft has made an effort to maintain backward compatibility. To achieve this, when developing new versions of Windows, Microsoft sometimes implemented workarounds to allow compatibility with third-party software that used the prior version in an undocumented or even inadvisable way. Raymond Chen, a Microsoft developer who works on the Windows API, has said: "I could probably write for months solely about bad things apps do and what we had to do to get them to work again (often in spite of themselves). Which is why I get particularly furious when people accuse Microsoft of maliciously breaking applications during OS upgrades. If any application failed to run on Windows 95, I took it as a personal failure." One of the largest changes to the Windows API was the transition from Win16 (shipped in Windows 3.1 and older) to Win32 (Windows NT and Windows 95 and up). While Win32 was originally introduced with Windows NT 3.1 and Win32s allowed use of a Win32 subset before Windows 95, it was not until Windows 95 that widespread porting of applications to Win32 began. To ease the transition, in Windows 95, for developers outside and inside Microsoft, a complex scheme of API thunks was used that could allow 32-bit code to call into 16-bit code (for most of Win16 APIs) and vice versa. Flat thunks allowed 32-bit code to call into 16-bit libraries, and the scheme was used extensively inside Windows 95's libraries to avoid porting the whole OS to Win32 in one batch. In Windows NT, the OS was pure 32-bit, except parts for compatibility with 16-bit applications, and only generic thunks were available to thunk from Win16 to Win32, as for Windows 95. The Platform SDK shipped with a compiler that could produce the code needed for these thunks. Versions of 64-bit Windows are also able to run 32-bit applications via WoW64. The SysWOW64 folder located in the Windows folder on the OS drive contains several tools to support 32-bit applications. Major versions Each version of Microsoft Windows contains a version of Windows API, and almost every new version of Microsoft Windows has introduced additions and changes to the Windows API. 
The name, Windows API, refers to essentially the same capability in each version of Windows, but there is another name for this capability that is based on major architectural aspects of the Windows version that contains it. When there was only one version, it was simply called Windows API. Then, when the first major update was made, Microsoft gave it the name Win32 and gave the first version the name Win16. The term Windows API refers to both versions and all subsequently developed major versions. Win16 is in the 16-bit versions of Windows. The functions reside mainly in core files of the OS: kernel.exe (or krnl286.exe or krnl386.exe), user.exe and gdi.exe. Despite the file extension of exe, such a file is accessed as a DLL. Win32 is in the 32-bit versions of Windows (NT, 95, and later). The functions are implemented in system DLL files including kernel32.dll, user32.dll, and gdi32.dll. Win32 was introduced with Windows NT. In Windows 95, it was initially referred to as Win32c, with c meaning compatibility. This term was later abandoned by Microsoft in favor of Win32. Win32s is an extension for the Windows 3.1x family of Microsoft Windows that implemented a subset of the Win32 API for these systems. The "s" stands for "subset". Win64 is the version in the 64-bit platforms of the Windows architecture (, x86-64 and AArch64). Both 32-bit and 64-bit versions of an application can be compiled from one codebase, although some older API functions have been deprecated, and some of the API functions that were deprecated in Win32 were removed. All memory pointers are 64-bit by default (the LLP64 model), so porting Win32-compatible source code includes updating for 64-bit pointer arithmetic. WinCE is the version in the Windows CE operating system. Other implementations The Wine project provides a Win32 API compatibility layer for Unix-like platforms, between Linux kernel API and programs written for the Windows API. ReactOS goes a step further and aims to implement the full Windows operating system, working closely with the Wine project to promote code re-use and compatibility. DosWin32 and HX DOS Extender are other projects which emulate the Windows API to allow executing simple Windows programs from a DOS command line. Odin is a project to emulate Win32 on OS/2, superseding the original Win-OS/2 emulation which was based on Microsoft code. Other minor implementations include the MEWEL and Zinc libraries which were intended to implement a subset of the Win16 API on DOS (see List of platform-independent GUI libraries). Windows Interface Source Environment (WISE) was a licensing program from Microsoft which allowed developers to recompile and run Windows-based applications on Unix and Macintosh platforms. WISE SDKs were based on an emulator of the Windows API that could run on those platforms. Efforts toward standardization included Sun's Public Windows Interface (PWI) for Win16 (see also: Sun Windows Application Binary Interface (Wabi)), Willows Software's Application Programming Interface for Windows (APIW) for Win16 and Win32 (see also: Willows TWIN), and ECMA-234, which attempted to standardize the Windows API bindingly. Compiler support To develop software that uses the Windows API, a compiler must be able to use the Microsoft-specific DLLs listed above (COM-objects are outside Win32 and assume a certain vtable layout). The compiler must either handle the header files that expose the interior API function names, or supply such files. 
For the language C++, Zortech (later Symantec, then Digital Mars), Watcom and Borland have all produced well known commercial compilers that have been used often with Win16, Win32s, and Win32. Some of them supplied memory extenders, allowing Win32 programs to run on Win16 with Microsoft's redistributable Win32s DLL. The Zortech compiler was probably one of the first stable and usable C++ compilers for Windows programming, before Microsoft had a C++ compiler. For certain classes of applications, the compiler system should also be able to handle interface description language (IDL) files. Collectively, these prerequisites (compilers, tools, libraries, and headers) are known as the Microsoft Platform SDK. For a time, the Microsoft Visual Studio and Borland's integrated development system were the only integrated development environments (IDEs) that could provide this (although, the SDK is downloadable for free separately from the entire IDE suite, from Microsoft Windows SDK for Windows 7 and .NET Framework 4). , the MinGW and Cygwin projects also provide such an environment based on the GNU Compiler Collection (GCC), using a stand-alone header file set, to make linking against the Win32-specific DLLs simple. LCC-Win32 is a C compiler maintained by Jacob Navia, freeware for non-commercial use. Pelles C is a freeware C compiler maintained by Pelle Orinius. Free Pascal is a free software Object Pascal compiler that supports the Windows API. The MASM32 package is a mature project providing support for the Windows API under Microsoft Macro Assembler (MASM) by using custom made or converted headers and libraries from the Platform SDK. Flat assembler FASM allows building Windows programs without using an external linker, even when running on Linux. Windows specific compiler support is also needed for Structured Exception Handling (SEH). This system serves two purposes: it provides a substrate on which language-specific exception handling can be implemented, and it is how the kernel notifies applications of exceptional conditions such as dereferencing an invalid pointer or stack overflow. The Microsoft/Borland C++ compilers had the ability to use this system as soon as it was introduced in Windows 95 and NT, however the actual implementation was undocumented and had to be reverse engineered for the Wine project and free compilers. SEH is based on pushing exception handler frames onto the stack, then adding them to a linked list stored in thread-local storage (the first field of the thread environment block). When an exception is thrown, the kernel and base libraries unwind the stack running handlers and filters as they are encountered. Eventually, every exception unhandled by the application will be dealt with by the default backstop handler, which pops up the Windows common crash dialog.
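A minimal sketch of what this looks like at the source level, using the MSVC-specific __try/__except keywords described above; the deliberately faulting pointer is purely illustrative:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    int *p = NULL;

    __try {
        *p = 42;                     /* raises EXCEPTION_ACCESS_VIOLATION */
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
        /* The kernel dispatched the fault to this frame's filter and handler
           instead of falling through to the default backstop handler. */
        printf("Access violation caught by SEH handler\n");
    }
    return 0;
}

The filter expression runs first: returning EXCEPTION_EXECUTE_HANDLER transfers control to the handler block, while EXCEPTION_CONTINUE_SEARCH lets the unwind continue along the linked list of frames described above.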
Technology
Software development: General
null
81610
https://en.wikipedia.org/wiki/LIGO
LIGO
The Laser Interferometer Gravitational-Wave Observatory (LIGO) is a large-scale physics experiment and observatory designed to detect cosmic gravitational waves and to develop gravitational-wave observations as an astronomical tool. Prior to LIGO, all data about the universe had come in the form of light and other forms of electromagnetic radiation, from limited direct exploration of relatively nearby Solar System objects such as the Moon, Mars, Venus, Jupiter and their moons, and asteroids, and from high energy cosmic particles. Initially, two large observatories were built in the United States with the aim of detecting gravitational waves by laser interferometry. Two additional, smaller gravitational-wave observatories are now operational: KAGRA in Japan and Virgo in Italy. The two LIGO observatories use mirrors spaced four kilometers apart to measure changes in length—over an effective span of 1120 km—of less than one ten-thousandth the charge diameter of a proton. The initial LIGO observatories were funded by the United States National Science Foundation (NSF) and were conceived, built and are operated by Caltech and MIT. They collected data from 2002 to 2010 but no gravitational waves were detected. The Advanced LIGO Project to enhance the original LIGO detectors began in 2008 and continues to be supported by the NSF, with important contributions from the United Kingdom's Science and Technology Facilities Council, the Max Planck Society of Germany, and the Australian Research Council. The improved detectors began operation in 2015. The detection of gravitational waves was reported in 2016 by the LIGO Scientific Collaboration (LSC) and the Virgo Collaboration with the international participation of scientists from several universities and research institutions. Scientists involved in the project and the analysis of the data for gravitational-wave astronomy are organized by the LSC, which includes more than 1000 scientists worldwide, as well as 440,000 active Einstein@Home users. LIGO is the largest and most ambitious project ever funded by the NSF. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry C. Barish "for decisive contributions to the LIGO detector and the observation of gravitational waves". Observations are made in "runs". To date, LIGO has made three runs (with one of the runs divided into two "subruns"), and made 90 detections of gravitational waves. Maintenance and upgrades of the detectors are made between runs. The first run, O1, which ran from 12 September 2015 to 19 January 2016, made the first three detections, all black hole mergers. The second run, O2, which ran from 30 November 2016 to 25 August 2017, made eight detections: seven black hole mergers and the first neutron star merger. The third run, O3, began on 1 April 2019; it was divided into O3a, from 1 April to 30 September 2019, and O3b, from 1 November 2019 until it was suspended on 27 March 2020 due to COVID-19. The O3 run included the first detection of the merger of a neutron star with a black hole. Subsequent gravitational wave observatories, Virgo in Italy and KAGRA in Japan, which both use interferometer arms 3 km long, are coordinating with LIGO to continue observations after the COVID-caused stop, and LIGO's O4 observing run started on 24 May 2023. LIGO projects a sensitivity goal of 160–190 Mpc for binary neutron star mergers (sensitivities: Virgo 80–115 Mpc, KAGRA greater than 1 Mpc). 
History Background The LIGO concept built upon early work by many scientists to test a component of Albert Einstein's theory of general relativity, the existence of gravitational waves. Starting in the 1960s, American scientists including Joseph Weber, as well as Soviet scientists Mikhail Gertsenshtein and Vladislav Pustovoit, conceived of basic ideas and prototypes of laser interferometry, and in 1967 Rainer Weiss of MIT published an analysis of interferometer use and initiated the construction of a prototype with military funding, but it was terminated before it could become operational. Starting in 1968, Kip Thorne initiated theoretical efforts on gravitational waves and their sources at Caltech, and was convinced that gravitational wave detection would eventually succeed. Prototype interferometric gravitational wave detectors (interferometers) were built in the late 1960s by Robert L. Forward and colleagues at Hughes Research Laboratories (with mirrors mounted on a vibration isolated plate rather than free swinging), and in the 1970s (with free swinging mirrors between which light bounced many times) by Weiss at MIT, and then by Heinz Billing and colleagues in Garching Germany, and then by Ronald Drever, James Hough and colleagues in Glasgow, Scotland. In 1980, the NSF funded the study of a large interferometer led by MIT (Paul Linsay, Peter Saulson, Rainer Weiss), and the following year, Caltech constructed a 40-meter prototype (Ronald Drever and Stan Whitcomb). The MIT study established the feasibility of interferometers at a 1-kilometer scale with adequate sensitivity. Under pressure from the NSF, MIT and Caltech were asked to join forces to lead a LIGO project based on the MIT study and on experimental work at Caltech, MIT, Glasgow, and Garching. Drever, Thorne, and Weiss formed a LIGO steering committee, though they were turned down for funding in 1984 and 1985. By 1986, they were asked to disband the steering committee and a single director, Rochus E. Vogt (Caltech), was appointed. In 1988, a research and development proposal achieved funding. From 1989 through 1994, LIGO failed to progress technically and organizationally. Only political efforts continued to acquire funding. Ongoing funding was routinely rejected until 1991, when the U.S. Congress agreed to fund LIGO for the first year for $23 million. However, requirements for receiving the funding were not met or approved, and the NSF questioned the technological and organizational basis of the project. By 1992, LIGO was restructured with Drever no longer a direct participant. Ongoing project management issues and technical concerns were revealed in NSF reviews of the project, resulting in the withholding of funds until they formally froze spending in 1993. In 1994, after consultation between relevant NSF personnel, LIGO's scientific leaders, and the presidents of MIT and Caltech, Vogt stepped down and Barry Barish (Caltech) was appointed laboratory director, and the NSF made clear that LIGO had one last chance for support. Barish's team created a new study, budget, and project plan with a budget exceeding the previous proposals by 40%. Barish proposed to the NSF and National Science Board to build LIGO as an evolutionary detector, where detection of gravitational waves with initial LIGO would be possible, and with advanced LIGO would be probable. This new proposal received NSF funding, Barish was appointed Principal Investigator, and the increase was approved. 
In 1994, with a budget of US$395 million, LIGO stood as the largest overall funded NSF project in history. The project broke ground in Hanford, Washington in late 1994 and in Livingston, Louisiana in 1995. As construction neared completion in 1997, under Barish's leadership two organizational institutions were formed, LIGO Laboratory and LIGO Scientific Collaboration (LSC). The LIGO laboratory consists of the facilities supported by the NSF under LIGO Operation and Advanced R&D; this includes administration of the LIGO detector and test facilities. The LIGO Scientific Collaboration is a forum for organizing technical and scientific research in LIGO. It is a separate organization from LIGO Laboratory with its own oversight. Barish appointed Weiss as the first spokesperson for this scientific collaboration. Observations begin Initial LIGO operations between 2002 and 2010 did not detect any gravitational waves. In 2004, under Barish, the funding and groundwork were laid for the next phase of LIGO development (called "Enhanced LIGO"). This was followed by a multi-year shut-down while the detectors were replaced by much improved "Advanced LIGO" versions. Much of the research and development work for the LIGO/aLIGO machines was based on pioneering work for the GEO600 detector at Hannover, Germany. By February 2015, the detectors were brought into engineering mode in both locations. In mid-September 2015, "the world's largest gravitational-wave facility" completed a five-year US$200-million overhaul, bringing the total cost to $620 million. On 18 September 2015, Advanced LIGO began its first formal science observations at about four times the sensitivity of the initial LIGO interferometers. Its sensitivity was to be further enhanced until it was planned to reach design sensitivity Detections On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration published a paper about the detection of gravitational waves, from a signal detected at 09.51 UTC on 14 September 2015 of two ~30 solar mass black holes merging about 1.3 billion light-years from Earth. Current executive director David Reitze announced the findings at a media event in Washington D.C., while executive director emeritus Barry Barish presented the first scientific paper of the findings at CERN to the physics community. On 2 May 2016, members of the LIGO Scientific Collaboration and other contributors were awarded a Special Breakthrough Prize in Fundamental Physics for contributing to the direct detection of gravitational waves. On 16 June 2016 LIGO announced a second signal was detected from the merging of two black holes with 14.2 and 7.5 times the mass of the Sun. The signal was picked up on 26 December 2015, at 3:38 UTC. The detection of a third black hole merger, between objects of 31.2 and 19.4 solar masses, occurred on 4 January 2017 and was announced on 1 June 2017. Laura Cadonati was appointed the first deputy spokesperson. A fourth detection of a black hole merger, between objects of 30.5 and 25.3 solar masses, was observed on 14 August 2017 and was announced on 27 September 2017. In 2017, Weiss, Barish, and Thorne received the Nobel Prize in Physics "for decisive contributions to the LIGO detector and the observation of gravitational waves." Weiss was awarded one-half of the total prize money, and Barish and Thorne each received a one-quarter prize. After shutting down for improvements, LIGO resumed operation on 26 March 2019, with Virgo joining the network of gravitational-wave detectors on 1 April 2019. 
Both ran until 27 March 2020, when the COVID-19 pandemic halted operations. During the COVID shutdown, LIGO underwent a further upgrade in sensitivity, and observing run O4 with the new sensitivity began on 24 May 2023. Mission LIGO's mission is to directly observe gravitational waves of cosmic origin. These waves were first predicted by Einstein's general theory of relativity in 1916, when the technology necessary for their detection did not yet exist. Their existence was indirectly confirmed when observations of the binary pulsar PSR 1913+16 in 1974 showed an orbital decay which matched Einstein's predictions of energy loss by gravitational radiation. The Nobel Prize in Physics 1993 was awarded to Hulse and Taylor for this discovery. Direct detection of gravitational waves had long been sought. Their discovery has launched a new branch of astronomy to complement electromagnetic telescopes and neutrino observatories. Joseph Weber pioneered the effort to detect gravitational waves in the 1960s through his work on resonant mass bar detectors. Bar detectors continue to be used at six sites worldwide. By the 1970s, scientists including Rainer Weiss realized the applicability of laser interferometry to gravitational wave measurements. Robert Forward operated an interferometric detector at Hughes in the early 1970s. In fact, as early as the 1960s, and perhaps before that, there were papers published on wave resonance of light and gravitational waves. Work was published in 1971 on methods to exploit this resonance for the detection of high-frequency gravitational waves. In 1962, M. E. Gertsenshtein and V. I. Pustovoit published the very first paper describing the principles for using interferometers for the detection of very long wavelength gravitational waves. The authors argued that by using interferometers the sensitivity can be 10⁷ to 10¹⁰ times better than by using electromechanical experiments. Later, in 1965, Braginsky extensively discussed gravitational-wave sources and their possible detection. He pointed out the 1962 paper and mentioned the possibility of detecting gravitational waves if the interferometric technology and measuring techniques improved. Since the early 1990s, physicists have thought that technology has evolved to the point where detection of gravitational waves—of significant astrophysical interest—is now possible. In August 2002, LIGO began its search for cosmic gravitational waves. Measurable emissions of gravitational waves are expected from binary systems (collisions and coalescences of neutron stars or black holes), supernova explosions of massive stars (which form neutron stars and black holes), accreting neutron stars, rotations of neutron stars with deformed crusts, and the remnants of gravitational radiation created by the birth of the universe. The observatory may, in theory, also observe more exotic hypothetical phenomena, such as gravitational waves caused by oscillating cosmic strings or colliding domain walls. Observatories LIGO operates two gravitational wave observatories in unison: the LIGO Livingston Observatory in Livingston, Louisiana, and the LIGO Hanford Observatory, on the DOE Hanford Site, located near Richland, Washington. These sites are separated by 3,002 kilometers (1,865 miles) straight-line distance through the Earth, but 3,030 kilometers (1,883 miles) over the surface.
Since gravitational waves are expected to travel at the speed of light, this distance corresponds to a difference in gravitational wave arrival times of up to ten milliseconds. Through the use of trilateration, the difference in arrival times helps to determine the source of the wave, especially when a third similar instrument like Virgo, located at an even greater distance in Europe, is added. Each observatory supports an L-shaped ultra high vacuum system, measuring four kilometers (2.5 miles) on each side. Up to five interferometers can be set up in each vacuum system. The LIGO Livingston Observatory houses one laser interferometer in the primary configuration. This interferometer was successfully upgraded in 2004 with an active vibration isolation system based on hydraulic actuators providing a factor of 10 isolation in the 0.1–5 Hz band. Seismic vibration in this band is chiefly due to microseismic waves and anthropogenic sources (traffic, logging, etc.). The LIGO Hanford Observatory houses one interferometer, almost identical to the one at the Livingston Observatory. During the Initial and Enhanced LIGO phases, a half-length interferometer operated in parallel with the main interferometer. For this 2 km interferometer, the Fabry–Pérot arm cavities had the same optical finesse, and, thus, half the storage time as the 4 km interferometers. With half the storage time, the theoretical strain sensitivity was as good as the full length interferometers above 200 Hz but only half as good at low frequencies. During the same era, Hanford retained its original passive seismic isolation system due to limited geologic activity in Southeastern Washington. Operation The parameters in this section refer to the Advanced LIGO experiment. The primary interferometer consists of two beam lines of 4 km length which form a power-recycled Michelson interferometer with Gires–Tournois etalon arms. A pre-stabilized 1064 nm Nd:YAG laser emits a beam with a power of 20 W that passes through a power recycling mirror. The mirror fully transmits light incident from the laser and reflects light from the other side increasing the power of the light field between the mirror and the subsequent beam splitter to 700 W. From the beam splitter the light travels along two orthogonal arms. By the use of partially reflecting mirrors, Fabry–Pérot cavities are created in both arms that increase the effective path length of laser light in the arm from 4 km to approximately 1,200 km. The power of the light field in the cavity is 100 kW. When a gravitational wave passes through the interferometer, the spacetime in the local area is altered. Depending on the source of the wave and its polarization, this results in an effective change in length of one or both of the cavities. The effective length change between the beams will cause the light currently in the cavity to become very slightly out of phase (antiphase) with the incoming light. The cavity will therefore periodically get very slightly out of coherence and the beams, which are tuned to destructively interfere at the detector, will have a very slight periodically varying detuning. This results in a measurable signal. After an equivalent of approximately 280 trips down the 4 km length to the far mirrors and back again, the two separate beams leave the arms and recombine at the beam splitter. 
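The separation and path-length figures quoted above can be sanity-checked with a few lines of arithmetic. The following is a minimal sketch in Python, using only numbers already stated in this section plus the speed of light:

```python
# Back-of-the-envelope checks of figures quoted in this section (illustrative only).
C = 299_792_458.0  # speed of light, m/s

# Maximum difference in gravitational-wave arrival times between the two sites,
# from the 3,002 km straight-line separation quoted above.
separation_m = 3.002e6
print(f"max arrival-time difference ≈ {separation_m / C * 1e3:.1f} ms")  # ≈ 10 ms

# Effective optical path in one Fabry–Pérot arm: roughly the ~280 traversals
# of the 4 km arm quoted above, consistent with the ~1,200 km stated earlier.
arm_km, traversals = 4.0, 280
print(f"effective arm path ≈ {arm_km * traversals:,.0f} km")  # ≈ 1,120 km
```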
The beams returning from two arms are kept out of phase so that when the arms are both in coherence and interference (as when there is no gravitational wave passing through), their light waves subtract, and no light should arrive at the photodiode. When a gravitational wave passes through the interferometer, the distances along the arms of the interferometer are shortened and lengthened, causing the beams to become slightly less out of phase. This results in the beams coming in phase, creating a resonance, hence some light arrives at the photodiode and indicates a signal. Light that does not contain a signal is returned to the interferometer using a power recycling mirror, thus increasing the power of the light in the arms. In actual operation, noise sources can cause movement in the optics, producing similar effects to real gravitational wave signals; a great deal of the art and complexity in the instrument is in finding ways to reduce these spurious motions of the mirrors. Background noise and unknown errors (which happen daily) are on the order of 10⁻²⁰, while gravitational wave signals are around 10⁻²². After noise reduction, a signal-to-noise ratio around 20 can be achieved, or higher when combined with other gravitational wave detectors around the world. Observations Based on current models of astronomical events, and the predictions of the general theory of relativity, gravitational waves that originate tens of millions of light years from Earth are expected to distort the mirror spacing by about 10⁻¹⁸ m, less than one-thousandth the charge diameter of a proton. Equivalently, this is a relative change in distance of approximately one part in 10²¹. A typical event which might cause a detection would be the late stage inspiral and merger of two 10-solar-mass black holes, not necessarily located in the Milky Way galaxy, which is expected to result in a very specific sequence of signals often summarized by the slogan "chirp, burst, quasi-normal mode ringing, exponential decay". In their fourth Science Run at the end of 2004, the LIGO detectors demonstrated sensitivities in measuring these displacements to within a factor of two of their design. During LIGO's fifth Science Run in November 2005, sensitivity reached the primary design specification of a detectable strain of one part in 10²¹ over a 100 Hz bandwidth. The baseline inspiral of two roughly 1.4-solar-mass neutron stars is typically expected to be observable if it occurs within the vicinity of the Local Group, averaged over all directions and polarizations. Also at this time, LIGO and GEO 600 (the German-UK interferometric detector) began a joint science run, during which they collected data for several months. Virgo (the French-Italian interferometric detector) joined in May 2007. The fifth science run ended in 2007, after extensive analysis of data from this run did not uncover any unambiguous detection events. In February 2007, GRB 070201, a short gamma-ray burst, arrived at Earth from the direction of the Andromeda Galaxy. The prevailing explanation of most short gamma-ray bursts is the merger of a neutron star with either a neutron star or a black hole. LIGO reported a non-detection for GRB 070201, ruling out a merger at the distance of Andromeda with high confidence. Such a constraint was predicated on LIGO eventually demonstrating a direct detection of gravitational waves.
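The displacement and strain figures in the Observations passage above are related by the 4 km arm length. A minimal sketch (Python; the proton charge diameter is an approximate textbook value supplied here for comparison, not a number from the text):

```python
# Relating the strain and displacement figures quoted above (illustrative only).
arm_length_m = 4_000.0                  # one LIGO arm
strain = 1e-21                          # "one part in 10^21"
print(f"strain x arm length ≈ {strain * arm_length_m:.0e} m")   # ~10^-18 m in order of magnitude

quoted_displacement_m = 1e-18           # "about 10^-18 m"
proton_charge_diameter_m = 1.7e-15      # approximate value (assumption, not from the text)
ratio = quoted_displacement_m / proton_charge_diameter_m
print(f"displacement / proton charge diameter ≈ {ratio:.1e}")   # ≈ 6e-4, i.e. less than 1/1000
```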
Enhanced LIGO After the completion of Science Run 5, initial LIGO was upgraded with certain technologies, planned for Advanced LIGO but available and able to be retrofitted to initial LIGO, which resulted in an improved-performance configuration dubbed Enhanced LIGO. Some of the improvements in Enhanced LIGO included: Increased laser power Homodyne detection Output mode cleaner In-vacuum readout hardware Science Run 6 (S6) began in July 2009 with the enhanced configurations on the 4 km detectors. It concluded in October 2010, and the disassembly of the original detectors began. Advanced LIGO After 2010, LIGO went offline for several years for a major upgrade, installing the new Advanced LIGO detectors in the LIGO Observatory infrastructures. The project continued to attract new members, with the Australian National University and University of Adelaide contributing to Advanced LIGO, and by the time the LIGO Laboratory started the first observing run 'O1' with the Advanced LIGO detectors in September 2015, the LIGO Scientific Collaboration included more than 900 scientists worldwide. The first observing run operated at a sensitivity roughly three times greater than Initial LIGO, and a much greater sensitivity for larger systems with their peak radiation at lower audio frequencies. On 11 February 2016, the LIGO and Virgo collaborations announced the first observation of gravitational waves. The signal, named GW150914, was recorded on 14 September 2015, just two days after Advanced LIGO started collecting data following the upgrade. It matched the predictions of general relativity for the inward spiral and merger of a pair of black holes and subsequent ringdown of the resulting single black hole. The observations demonstrated the existence of binary stellar-mass black hole systems and the first observation of a binary black hole merger. On 15 June 2016, LIGO announced the detection of a second gravitational wave event, recorded on 26 December 2015, at 3:38 UTC. Analysis of the observed signal indicated that the event was caused by the merger of two black holes with masses of 14.2 and 7.5 solar masses, at a distance of 1.4 billion light years. The signal was named GW151226. The second observing run (O2) ran from 30 November 2016 to 25 August 2017, with Livingston achieving 15–25% sensitivity improvement over O1, and with Hanford's sensitivity similar to O1. In this period, LIGO saw several further gravitational wave events: GW170104 in January; GW170608 in June; and five others between July and August 2017. Several of these were also detected by the Virgo Collaboration. Unlike the black hole mergers which are only detectable gravitationally, GW170817 came from the collision of two neutron stars and was also detected electromagnetically by gamma ray satellites and optical telescopes. The third run (O3) began on 1 April 2019 and was planned to last until 30 April 2020; in fact it was suspended in March 2020 due to COVID-19. On 6 January 2020, LIGO announced the detection of what appeared to be gravitational ripples from a collision of two neutron stars, recorded on 25 April 2019, by the LIGO Livingston detector. Unlike GW170817, this event did not result in any light being detected. Furthermore, this is the first published event for a single-observatory detection, given that the LIGO Hanford detector was temporarily offline at the time and the event was too faint to be visible in Virgo's data. 
The fourth observing run (O4) was planned to start in December 2022, but was postponed until 24 May 2023. O4 is projected to continue until February 2025. As of O4, the interferometers are operating at a sensitivity of 155-175 Mpc, within the design sensitivity range of 160-190 Mpc for binary neutron star events. The fifth observing run (O5) is projected to begin in late 2025 or in 2026. Future LIGO-India LIGO-India, or INDIGO, is a planned collaborative project between the LIGO Laboratory and the Indian Initiative in Gravitational-wave Observations (IndIGO) to create a gravitational-wave detector in India. The LIGO Laboratory, in collaboration with the US National Science Foundation and Advanced LIGO partners from the U.K., Germany and Australia, has offered to provide all of the designs and hardware for one of the three planned Advanced LIGO detectors to be installed, commissioned, and operated by an Indian team of scientists in a facility to be built in India. The LIGO-India project is a collaboration between LIGO Laboratory and the LIGO-India consortium: Institute of Plasma Research, Gandhinagar; IUCAA (Inter-University Centre for Astronomy and Astrophysics), Pune and Raja Ramanna Centre for Advanced Technology, Indore. The expansion of worldwide activities in gravitational-wave detection to produce an effective global network has been a goal of LIGO for many years. In 2010, a developmental roadmap issued by the Gravitational Wave International Committee (GWIC) recommended that an expansion of the global array of interferometric detectors be pursued as a highest priority. Such a network would afford astrophysicists with more robust search capabilities and higher scientific yields. The current agreement between the LIGO Scientific Collaboration and the Virgo collaboration links three detectors of comparable sensitivity and forms the core of this international network. Studies indicate that the localization of sources by a network that includes a detector in India would provide significant improvements. Improvements in localization averages are predicted to be approximately an order of magnitude, with substantially larger improvements in certain regions of the sky. The NSF was willing to permit this relocation, and its consequent schedule delays, as long as it did not increase the LIGO budget. Thus, all costs required to build a laboratory equivalent to the LIGO sites to house the detector would have to be borne by the host country. The first potential distant location was at AIGO in Western Australia, however the Australian government was unwilling to commit funding by 1 October 2011 deadline. A location in India was discussed at a Joint Commission meeting between India and the US in June 2012. In parallel, the proposal was evaluated by LIGO's funding agency, the NSF. As the basis of the LIGO-India project entails the transfer of one of LIGO's detectors to India, the plan would affect work and scheduling on the Advanced LIGO upgrades already underway. In August 2012, the U.S. National Science Board approved the LIGO Laboratory's request to modify the scope of Advanced LIGO by not installing the Hanford "H2" interferometer, and to prepare it instead for storage in anticipation of sending it to LIGO-India. In India, the project was presented to the Department of Atomic Energy and the Department of Science and Technology for approval and funding. 
On 17 February 2016, less than a week after LIGO's landmark announcement about the detection of gravitational waves, Indian Prime Minister Narendra Modi announced that the Cabinet has granted 'in-principle' approval to the LIGO-India mega science proposal. A site near pilgrimage site of Aundha Nagnath in the Hingoli district of state Maharashtra in western India has been selected. On 7 April 2023, the LIGO-India project was approved by the Cabinet of Government of India. Construction is to begin in Maharashtra's Hingoli district at a cost of INR 2600 crores. A+ Like Enhanced LIGO, certain improvements will be retrofitted to the existing Advanced LIGO instrument. These are referred to as proposals, and are planned for installation starting from 2019 until the upgraded detector is operational in 2024. The changes would almost double Advanced LIGO's sensitivity, and increase the volume of space searched by a factor of seven. The upgrades include: Improvements to the mirror suspension system. Increased reflectivity of the mirrors. Using frequency-dependent squeezed light, which would simultaneously decrease radiation pressure at low frequencies and shot noise at high frequencies, and Improved mirror coatings with lower mechanical loss. Because the final LIGO output photodetector is sensitive to phase, and not amplitude, it is possible to squeeze the signal so there is less phase noise and more amplitude noise, without violating the quantum mechanical limit on their product. This is done by injecting a "squeezed vacuum state" into the dark port (interferometer output) which is quieter, in the relevant parameter, than simple darkness. Such a squeezing upgrade was installed at both LIGO sites prior to the third observing run. The A+ improvement will see the installation of an additional optical cavity that acts to rotate the squeezing quadrature from phase-squeezed at high frequencies (above 50 Hz) to amplitude-squeezed at low frequencies, thereby also mitigating low-frequency radiation pressure noise. LIGO Voyager A third-generation detector at the existing LIGO sites is being planned under the name "LIGO Voyager" to improve the sensitivity by an additional factor of two, and halve the low-frequency cutoff to 10 Hz. Plans call for the glass mirrors and 1064 nm lasers to be replaced by even larger 160 kg silicon test masses, cooled to 123 K (a temperature achievable with liquid nitrogen), and a change to a longer laser wavelength in the 1500–2200 nm range at which silicon is transparent. (Many documents assume a wavelength of 1550 nm, but this is not final.) Voyager would be an upgrade to A+, to be operational around 2027–2028. Cosmic Explorer A design for a larger facility with longer arms is called "Cosmic Explorer". This is based on the LIGO Voyager technology, has a similar LIGO-type L-shape geometry but with 40 km arms. The facility is currently planned to be on the surface. It has a higher sensitivity than Einstein Telescope for frequencies beyond 10 Hz, but lower sensitivity under 10 Hz.
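The A+ passage above describes trading phase (shot) noise against amplitude (radiation-pressure) noise with frequency-dependent squeezing. A deliberately simplified toy model can illustrate why rotating the squeezed quadrature with frequency helps at both ends of the band; the scalings and numbers below are generic illustrations chosen for this sketch, not LIGO's actual noise budget:

```python
import numpy as np

# Toy model: flat shot noise and radiation-pressure noise falling as 1/f^2
# (arbitrary units).  Squeezing one quadrature by a factor s reduces one term
# and amplifies the other; frequency-dependent squeezing applies the reduction
# to whichever term dominates at each frequency (idealized here as reducing both).
f = np.logspace(1, 3, 200)            # 10 Hz – 1 kHz
shot = np.ones_like(f)
rad_pressure = 1.0e4 / f**2
s = 2.0                               # roughly 6 dB of squeezing

curves = {
    "no squeezing":         np.hypot(shot, rad_pressure),
    "phase-only squeezing": np.hypot(shot / s, rad_pressure * s),
    "frequency-dependent":  np.hypot(shot / s, rad_pressure / s),
}
for name, curve in curves.items():
    lo, hi = curve[np.argmin(np.abs(f - 20))], curve[np.argmin(np.abs(f - 500))]
    print(f"{name:22s} noise at 20 Hz ≈ {lo:6.2f}, at 500 Hz ≈ {hi:5.2f}")
```

In the real instrument the quadrature rotation is performed by the additional optical cavity mentioned above; the code is only a pedagogical caricature of that tradeoff.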
https://en.wikipedia.org/wiki/Glutathione
Glutathione
Glutathione (GSH) is an organic compound with the chemical formula C10H17N3O6S. It is an antioxidant in plants, animals, fungi, and some bacteria and archaea. Glutathione is capable of preventing damage to important cellular components caused by sources such as reactive oxygen species, free radicals, peroxides, lipid peroxides, and heavy metals. It is a tripeptide with a gamma peptide linkage between the carboxyl group of the glutamate side chain and cysteine. The carboxyl group of the cysteine residue is attached by normal peptide linkage to glycine. Biosynthesis and occurrence Glutathione biosynthesis involves two adenosine triphosphate-dependent steps: First, γ-glutamylcysteine is synthesized from L-glutamate and L-cysteine. This conversion requires the enzyme glutamate–cysteine ligase (GCL, glutamate cysteine synthase). This reaction is the rate-limiting step in glutathione synthesis. Second, glycine is added to the C-terminal of γ-glutamylcysteine. This condensation is catalyzed by glutathione synthetase. While all animal cells are capable of synthesizing glutathione, glutathione synthesis in the liver has been shown to be essential. GCLC knockout mice die within a month of birth due to the absence of hepatic GSH synthesis. The unusual gamma amide linkage in glutathione protects it from hydrolysis by peptidases. Occurrence Glutathione is the most abundant non-protein thiol (sulfhydryl-containing compound) in animal cells, ranging from 0.5 to 10 mmol/L. It is present in the cytosol and the organelles. The concentration of glutathione in the cytoplasm is significantly higher (ranging from 0.5-10 mM) compared to extracellular fluids (2-20 μM), reaching levels up to 1000 times greater. In healthy cells and tissue, more than 90% of the total glutathione pool is in the reduced form (GSH), with the remainder in the disulfide form (GSSG). The cytosol holds 80-85% of cellular GSH and the mitochondria hold 10-15%. Human beings synthesize glutathione, but a few eukaryotes do not, including some members of Fabaceae, Entamoeba, and Giardia. The only known archaea that make glutathione are halobacteria. Some bacteria, such as Cyanobacteria and Pseudomonadota, can biosynthesize glutathione. Systemic availability of orally consumed glutathione is poor. It has low bioavailability because the tripeptide is a substrate of the proteases (peptidases) of the alimentary canal, and because a specific carrier for glutathione is absent at the level of the cell membrane. The administration of N-acetylcysteine (NAC), a cysteine prodrug, helps replenish intracellular GSH levels. Biochemical function Glutathione exists in reduced (GSH) and oxidized (GSSG) states. The ratio of reduced glutathione to oxidized glutathione within cells is a measure of cellular oxidative stress, where an increased GSSG-to-GSH ratio is indicative of greater oxidative stress. In the reduced state, the thiol group of the cysteinyl residue is a source of one reducing equivalent; glutathione disulfide (GSSG) is thereby generated. The oxidized state is converted back to the reduced state by NADPH. This conversion is catalyzed by glutathione reductase: NADPH + GSSG + H2O → 2 GSH + NADP+ + OH− Roles Antioxidant GSH protects cells by neutralising (reducing) reactive oxygen species.
This conversion is illustrated by the reduction of peroxides: 2 GSH + R2O2 → GSSG + 2 ROH (R = H, alkyl) and with free radicals: 2 GSH + 2 R• → GSSG + 2 RH Regulation Aside from deactivating radicals and reactive oxidants, glutathione participates in thiol protection and redox regulation of cellular thiol proteins under oxidative stress by protein S-glutathionylation, a redox-regulated post-translational thiol modification. The general reaction involves formation of an unsymmetrical disulfide from the protectable protein (RSH) and GSH: RSH + GSH + [O] → GSSR + H2O Glutathione is also employed for the detoxification of methylglyoxal and formaldehyde, toxic metabolites produced under oxidative stress. This detoxification reaction is carried out by the glyoxalase system. Glyoxalase I (EC 4.4.1.5) catalyzes the conversion of methylglyoxal and reduced glutathione to S-D-lactoylglutathione. Glyoxalase II (EC 3.1.2.6) catalyzes the hydrolysis of S-D-lactoylglutathione to glutathione and D-lactic acid. Glutathione also maintains exogenous antioxidants such as vitamins C and E in their reduced (active) states. Metabolism Among the many metabolic processes in which it participates, glutathione is required for the biosynthesis of leukotrienes and prostaglandins. It plays a role in the storage of cysteine. Glutathione enhances the function of citrulline as part of the nitric oxide cycle. It acts as a cofactor for glutathione peroxidase. Glutathione is used to produce S-sulfanylglutathione, which is part of hydrogen sulfide metabolism. Conjugation Glutathione facilitates the metabolism of xenobiotics. Glutathione S-transferase enzymes catalyze its conjugation to lipophilic xenobiotics, facilitating their excretion or further metabolism. The conjugation process is illustrated by the metabolism of N-acetyl-p-benzoquinone imine (NAPQI). NAPQI is a reactive metabolite formed by the action of cytochrome P450 on paracetamol (acetaminophen). Glutathione conjugates to NAPQI, and the resulting ensemble is excreted. In plants In plants, glutathione is involved in stress management. It is a component of the glutathione-ascorbate cycle, a system that reduces poisonous hydrogen peroxide. It is the precursor of phytochelatins, glutathione oligomers that chelate heavy metals such as cadmium. Glutathione is required for efficient defence against plant pathogens such as Pseudomonas syringae and Phytophthora brassicae. Adenylyl-sulfate reductase, an enzyme of the sulfur assimilation pathway, uses glutathione as an electron donor. Other enzymes using glutathione as a substrate are glutaredoxins. These small oxidoreductases are involved in flower development, salicylic acid signalling, and plant defence signalling. In degradation of drug delivery systems Among various types of cancer, lung cancer, larynx cancer, mouth cancer, and breast cancer exhibit higher concentrations (10-40 mM) of GSH compared to healthy cells. Thus, drug delivery systems containing disulfide bonds, typically cross-linked micro-nanogels, stand out for their ability to degrade in the presence of high concentrations of glutathione (GSH). This degradation process releases the drug payload specifically into cancerous or tumorous tissue, leveraging the significant difference in redox potential between the oxidizing extracellular environment and the reducing intracellular cytosol. When internalized by endocytosis, nanogels encounter high concentrations of GSH inside the cancer cell.
GSH, a potent reducing agent, donates electrons to disulfide bonds in the nanogels, initiating a thiol-disulfide exchange reaction. This reaction breaks the disulfide bonds, converting them into two thiol groups, and facilitates targeted drug release where it is needed most: R−S−S−R′ + 2 GSH → R−SH + R′−SH + GSSG where R and R′ are parts of the micro-nanogel structure, and GSSG is oxidized glutathione (glutathione disulfide). The breaking of disulfide bonds causes the nanogel to degrade into smaller fragments. This degradation process leads to the release of encapsulated drugs. The released drug molecules can then exert their therapeutic effects, such as inducing apoptosis in cancer cells. Uses Winemaking The content of glutathione in must, the first raw form of wine, determines the browning, or caramelizing, effect during the production of white wine by trapping the caffeoyltartaric acid quinones generated by enzymic oxidation as grape reaction product. Its concentration in wine can be determined by UPLC-MRM mass spectrometry.
https://en.wikipedia.org/wiki/Thiol
Thiol
In organic chemistry, a thiol, or thiol derivative, is any organosulfur compound of the form R−SH, where R represents an alkyl or other organic substituent. The functional group itself is referred to as either a thiol group or a sulfhydryl group, or a sulfanyl group. Thiols are the sulfur analogue of alcohols (that is, sulfur takes the place of oxygen in the hydroxyl (−OH) group of an alcohol), and the word is a blend of "thio-" with "alcohol". Many thiols have strong odors resembling that of garlic, cabbage or rotten eggs. Thiols are used as odorants to assist in the detection of natural gas (which in pure form is odorless), and the "smell of natural gas" is due to the smell of the thiol used as the odorant. Thiols are sometimes referred to as mercaptans or mercapto compounds, a term introduced in 1832 by William Christopher Zeise and derived from the Latin mercurium captans ('capturing mercury') because the thiolate group (RS−) bonds very strongly with mercury compounds. Structure and bonding Thiols having the structure R−SH, in which an alkyl group (R) is attached to a sulfhydryl group (SH), are referred to as alkanethiols or alkyl thiols. Thiols and alcohols have similar connectivity. Because sulfur atoms are larger than oxygen atoms, C−S bond lengths – typically around 180 picometres – are about 40 picometres longer than typical C−O bonds. The C−S−H angles approach 90° whereas the angle for the C−O−H group is more obtuse. In solids and liquids, the hydrogen-bonding between individual thiol groups is weak, the main cohesive force being van der Waals interactions between the highly polarizable divalent sulfur centers. The S−H bond is much weaker than the O−H bond, as reflected in their respective bond dissociation energies (BDE). For CH3S−H, the BDE is 366 kJ/mol (87 kcal/mol), while for CH3O−H, the BDE is 440 kJ/mol (105 kcal/mol). An S−H bond is moderately polar because of the small difference in the electronegativity of sulfur and hydrogen. In contrast, O−H bonds in hydroxyl groups are more polar. Thiols have a lower dipole moment relative to their corresponding alcohols. Nomenclature There are several ways to name the alkylthiols: The suffix -thiol is added to the name of the alkane. This method is nearly identical to naming an alcohol and is used by the IUPAC, e.g. CH3SH would be methanethiol. The word mercaptan replaces alcohol in the name of the equivalent alcohol compound. Example: CH3SH would be methyl mercaptan, just as CH3OH is called methyl alcohol. The term sulfhydryl- or mercapto- is used as a prefix, e.g. mercaptopurine. Physical properties Odor Many thiols have strong odors resembling that of garlic. The odors of thiols, particularly those of low molecular weight, are often strong and repulsive. The spray of skunks consists mainly of low-molecular-weight thiols and derivatives. These compounds are detectable by the human nose at concentrations of only 10 parts per billion. Human sweat contains (R)/(S)-3-methyl-3-mercapto-1-hexanol (MSH), detectable at 2 parts per billion and having a fruity, onion-like odor. (Methylthio)methanethiol (MeSCH2SH; MTMT) is a strong-smelling volatile thiol, also detectable at parts per billion levels, found in male mouse urine. Lawrence C. Katz and co-workers showed that MTMT functioned as a semiochemical, activating certain mouse olfactory sensory neurons, and attracting female mice. Copper has been shown to be required by a specific mouse olfactory receptor, MOR244-3, which is highly responsive to MTMT as well as to various other thiols and related compounds.
A human olfactory receptor, OR2T11, has been identified which, in the presence of copper, is highly responsive to the gas odorants (see below) ethanethiol and t-butyl mercaptan as well as other low molecular weight thiols, including allyl mercaptan found in human garlic breath, and the strong-smelling cyclic sulfide thietane. Thiols are also responsible for a class of wine faults caused by an unintended reaction between sulfur and yeast, and for the "skunky" odor of beer that has been exposed to ultraviolet light. Not all thiols have unpleasant odors. For example, furan-2-ylmethanethiol contributes to the aroma of roasted coffee, whereas grapefruit mercaptan, a monoterpenoid thiol, is responsible for the characteristic scent of grapefruit. The effect of the latter compound is present only at low concentrations. The pure mercaptan has an unpleasant odor. In the United States, natural gas distributors were required to add thiols, originally ethanethiol, to natural gas (which is naturally odorless) after the deadly New London School explosion in New London, Texas, in 1937. Many gas distributors were odorizing gas prior to this event. Most currently-used gas odorants contain mixtures of mercaptans and sulfides, with t-butyl mercaptan as the main odor constituent in natural gas and ethanethiol in liquefied petroleum gas (LPG, propane). In situations where thiols are used in commercial industry, such as liquid petroleum gas tankers and bulk handling systems, an oxidizing catalyst is used to destroy the odor. A copper-based oxidation catalyst neutralizes the volatile thiols and transforms them into inert products. Boiling points and solubility Thiols show little association by hydrogen bonding, both with water molecules and among themselves. Hence, they have lower boiling points and are less soluble in water and other polar solvents than alcohols of similar molecular weight. For this reason also, thiols and their corresponding sulfide functional group isomers have similar solubility characteristics and boiling points, whereas the same is not true of alcohols and their corresponding isomeric ethers. Bonding The S−H bond in thiols is weak compared to the O−H bond in alcohols. For CH3X−H, the bond enthalpies are 366 kJ/mol for X = S and 440 kJ/mol for X = O. Hydrogen-atom abstraction from a thiol gives a thiyl radical with the formula RS•, where R = alkyl or aryl. Characterization Volatile thiols are easily and almost unerringly detected by their distinctive odor. Sulfur-specific analyzers for gas chromatographs are useful. Spectroscopic indicators are the D2O-exchangeable SH signal in the 1H NMR spectrum (33S is NMR-active but signals for divalent sulfur are very broad and of little utility). The νSH band appears near 2400 cm−1 in the IR spectrum. In the nitroprusside reaction, free thiol groups react with sodium nitroprusside and ammonium hydroxide to give a red colour. Preparation In industry, methanethiol is prepared by the reaction of hydrogen sulfide with methanol. This method is employed for the industrial synthesis of methanethiol: CH3OH + H2S → CH3SH + H2O Such reactions are conducted in the presence of acidic catalysts. The other principal route to thiols involves the addition of hydrogen sulfide to alkenes. Such reactions are usually conducted in the presence of an acid catalyst or UV light. Halide displacement, using a suitable organic halide and sodium hydrogen sulfide, has also been used. Another method entails the alkylation of sodium hydrosulfide.
RX + NaSH → RSH + NaX (X = Cl, Br, I) This method is used for the production of thioglycolic acid from chloroacetic acid. Laboratory methods In general, on the typical laboratory scale, the direct reaction of a haloalkane with sodium hydrosulfide is inefficient owing to the competing formation of sulfides. Instead, alkyl halides are converted to thiols via an S-alkylation of thiourea. This multistep, one-pot process proceeds via the intermediacy of the isothiouronium salt, which is hydrolyzed in a separate step: CH3CH2Br + SC(NH2)2 → [CH3CH2SC(NH2)2]Br [CH3CH2SC(NH2)2]Br + NaOH → CH3CH2SH + OC(NH2)2 + NaBr The thiourea route works well with primary halides, especially activated ones. Secondary and tertiary thiols are less easily prepared. Secondary thiols can be prepared from the ketone via the corresponding dithioketals. A related two-step process involves alkylation of thiosulfate to give the thiosulfonate ("Bunte salt"), followed by hydrolysis. The method is illustrated by one synthesis of thioglycolic acid: ClCH2CO2H + Na2S2O3 → Na[O3S2CH2CO2H] + NaCl Na[O3S2CH2CO2H] + H2O → HSCH2CO2H + NaHSO4 Organolithium compounds and Grignard reagents react with sulfur to give the thiolates, which are readily hydrolyzed: RLi + S → RSLi RSLi + HCl → RSH + LiCl Phenols can be converted to the thiophenols via rearrangement of their O-aryl dialkylthiocarbamates. Thiols are prepared by reductive dealkylation of sulfides, especially benzyl derivatives and thioacetals. Thiophenols are produced by S-arylation or the replacement of a diazonium leaving group with the sulfhydryl anion (SH−): ArN2+ + SH− → ArSH + N2 Reactions Akin to the chemistry of alcohols, thiols form sulfides, thioacetals, and thioesters, which are analogous to ethers, acetals, and esters respectively. Thiols and alcohols are also very different in their reactivity, thiols being more easily oxidized than alcohols. Thiolates are more potent nucleophiles than the corresponding alkoxides. S-Alkylation Thiols, or more specifically their conjugate bases, are readily alkylated to give sulfides: RSH + R′Br + B → RSR′ + [HB]Br (B = base) Acidity Thiols are easily deprotonated. Relative to the alcohols, thiols are more acidic. The conjugate base of a thiol is called a thiolate. Butanethiol has a pKa of 10.5 vs 15 for butanol. Thiophenol has a pKa of 6, versus 10 for phenol. A highly acidic thiol is pentafluorothiophenol (C6F5SH) with a pKa of 2.68. Thus, thiolates can be obtained from thiols by treatment with alkali metal hydroxides. Redox Thiols, especially in the presence of base, are readily oxidized by reagents such as bromine and iodine to give an organic disulfide (R−S−S−R). 2 R−SH + Br2 → R−S−S−R + 2 HBr Oxidation by more powerful reagents such as sodium hypochlorite or hydrogen peroxide can also yield sulfonic acids (RSO3H). R−SH + 3 H2O2 → RSO3H + 3 H2O Oxidation can also be effected by oxygen in the presence of catalysts: 2 R–SH + ½ O2 → RS−SR + H2O Thiols participate in thiol-disulfide exchange: RS−SR + 2 R′SH → 2 RSH + R′S−SR′ This reaction is important in nature. Metal ion complexation With metal ions, thiolates behave as ligands to form transition metal thiolate complexes. The term mercaptan is derived from the Latin mercurium captans (capturing mercury) because the thiolate group bonds so strongly with mercury compounds. According to hard/soft acid/base (HSAB) theory, sulfur is a relatively soft (polarizable) atom. This explains the tendency of thiols to bind to soft elements and ions such as mercury, lead, or cadmium.
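The Acidity passage above quotes pKa values for thiols and their alcohol counterparts; a small Henderson–Hasselbalch sketch (Python) shows what those numbers imply in practice. The pH of 7.4 is chosen here purely as a familiar reference point and is not taken from the text:

```python
def fraction_deprotonated(pka: float, ph: float) -> float:
    """Henderson–Hasselbalch: fraction of an acid present as its conjugate base."""
    ratio = 10 ** (ph - pka)        # [A-]/[HA]
    return ratio / (1 + ratio)

# pKa values quoted in the Acidity passage above.
for name, pka in [("butanethiol", 10.5), ("butanol", 15.0),
                  ("thiophenol", 6.0), ("phenol", 10.0)]:
    f = fraction_deprotonated(pka, ph=7.4)
    print(f"{name:12s} pKa {pka:4.1f} -> {100 * f:.3g}% deprotonated at pH 7.4")
```

The several-unit pKa gap translates into orders of magnitude more thiolate than alkoxide at a given pH, consistent with thiolates being obtainable simply by treatment with alkali metal hydroxides.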
The stability of metal thiolates parallels that of the corresponding sulfide minerals. Thioxanthates Thiolates react with carbon disulfide to give thioxanthates (RSCS2−). Thiyl radicals Free radicals derived from mercaptans, called thiyl radicals, are commonly invoked to explain reactions in organic chemistry and biochemistry. They have the formula RS• where R is an organic substituent such as alkyl or aryl. They can be generated by a number of routes, but the principal method is H-atom abstraction from thiols. Another method involves homolysis of organic disulfides. In biology, thiyl radicals are responsible for the formation of the deoxyribonucleotides, the building blocks of DNA. This conversion is catalysed by ribonucleotide reductase. Thiyl intermediates also are produced by the oxidation of glutathione, an antioxidant in biology. Thiyl radicals (sulfur-centred) can transform to carbon-centred radicals via hydrogen atom exchange equilibria. The formation of carbon-centred radicals could lead to protein damage via the formation of C−C bonds or backbone fragmentation. Because of the weakness of the S−H bond, thiols can function as scavengers of free radicals. Biological importance Cysteine and cystine As the functional group of the amino acid cysteine, the thiol group plays a very important role in biology. When the thiol groups of two cysteine residues (as in monomers or constituent units) are brought near each other in the course of protein folding, an oxidation reaction can generate a cystine unit with a disulfide bond (−S−S−). Disulfide bonds can contribute to a protein's tertiary structure if the cysteines are part of the same peptide chain, or contribute to the quaternary structure of multi-unit proteins by forming fairly strong covalent bonds between different peptide chains. A physical manifestation of cysteine-cystine equilibrium is provided by hair straightening technologies. Sulfhydryl groups in the active site of an enzyme can form noncovalent bonds with the enzyme's substrate as well, contributing to covalent catalytic activity in catalytic triads. Active site cysteine residues are the functional unit in cysteine protease catalytic triads. Cysteine residues may also react with heavy metal ions (Zn2+, Cd2+, Pb2+, Hg2+, Ag+) because of the high affinity between the soft sulfide and the soft metal (see hard and soft acids and bases). This can deform and inactivate the protein, and is one mechanism of heavy metal poisoning. Drugs containing a thiol group include 6-mercaptopurine (anticancer), captopril (antihypertensive), D-penicillamine (antiarthritic), and sodium aurothiolate (antiarthritic). Cofactors Many cofactors (non-protein-based helper molecules) feature thiols. The biosynthesis and degradation of fatty acids and related long-chain hydrocarbons are conducted on a scaffold that anchors the growing chain through a thioester derived from the thiol Coenzyme A. The biosynthesis of methane, the principal hydrocarbon on Earth, arises from the reaction mediated by coenzyme M, 2-mercaptoethyl sulfonic acid. Thiolates, the conjugate bases derived from thiols, form strong complexes with many metal ions, especially those classified as soft. The stability of metal thiolates parallels that of the corresponding sulfide minerals. In skunks The defensive spray of skunks consists mainly of low-molecular-weight thiols and derivatives with a foul odor, which protects the skunk from predators. Owls are able to prey on skunks, as owls lack a well-developed sense of smell.
Examples of thiols
Methanethiol – CH3SH [methyl mercaptan]
Ethanethiol – C2H5SH [ethyl mercaptan]
1-Propanethiol – C3H7SH [n-propyl mercaptan]
2-Propanethiol – CH3CH(SH)CH3 [2C3 mercaptan]
Allyl mercaptan – CH2=CHCH2SH [2-propenethiol]
Butanethiol – C4H9SH [n-butyl mercaptan]
tert-Butyl mercaptan – (CH3)3CSH [t-butyl mercaptan]
Pentanethiols – C5H11SH [pentyl mercaptan]
Thiophenol – C6H5SH
Dimercaptosuccinic acid
Thioacetic acid
Coenzyme A
Glutathione
Metallothionein
Cysteine
2-Mercaptoethanol
Dithiothreitol/dithioerythritol (an epimeric pair)
2-Mercaptoindole
Grapefruit mercaptan
Furan-2-ylmethanethiol
3-Mercaptopropane-1,2-diol
3-Mercapto-1-propanesulfonic acid
1-Hexadecanethiol
Pentachlorobenzenethiol
https://en.wikipedia.org/wiki/Acyl%20group
Acyl group
In chemistry, an acyl group is a moiety derived by the removal of one or more hydroxyl groups from an oxoacid, including inorganic acids. It contains a double-bonded oxygen atom and an organyl group (R−C=O), or hydrogen in the case of the formyl group (H−C=O). In organic chemistry, the acyl group (IUPAC name alkanoyl if the organyl group is alkyl) is usually derived from a carboxylic acid, in which case it has the formula R−C(=O)−, where R represents an organyl group or hydrogen. Although the term is almost always applied to organic compounds, acyl groups can in principle be derived from other types of acids such as sulfonic acids and phosphonic acids. In the most common arrangement, acyl groups are attached to a larger molecular fragment, in which case the carbon and oxygen atoms are linked by a double bond. Reactivity trends There are five main types of acyl derivatives. Acid halides are the most reactive towards nucleophiles, followed by anhydrides, esters, and amides. Carboxylate ions are essentially unreactive towards nucleophilic substitution, since they possess no leaving group. The reactivity of these five classes of compounds covers a broad range; the relative reaction rates of acid chlorides and amides differ by a factor of 10¹³. A major factor in determining the reactivity of acyl derivatives is leaving group ability, which is related to acidity. Weak bases are better leaving groups than strong bases; a species with a strong conjugate acid (e.g. hydrochloric acid) will be a better leaving group than a species with a weak conjugate acid (e.g. acetic acid). Thus, chloride ion is a better leaving group than acetate ion. The reactivity of acyl compounds towards nucleophiles decreases as the basicity of the leaving group increases. Another factor that plays a role in determining the reactivity of acyl compounds is resonance. Amides exhibit two main resonance forms. Both are major contributors to the overall structure, so much so that the amide bond between the carbonyl carbon and the amide nitrogen has significant double bond character. The energy barrier for rotation about an amide bond is 75–85 kJ/mol (18–20 kcal/mol), much larger than values observed for normal single bonds. For example, the C–C bond in ethane has an energy barrier of only 12 kJ/mol (3 kcal/mol). Once a nucleophile attacks and a tetrahedral intermediate is formed, the energetically favorable resonance effect is lost. This helps explain why amides are one of the least reactive acyl derivatives. Esters exhibit less resonance stabilization than amides, so the formation of a tetrahedral intermediate and subsequent loss of resonance is not as energetically unfavorable. Anhydrides experience even weaker resonance stabilization, since the resonance is split between two carbonyl groups, and are more reactive than esters and amides. In acid halides, there is very little resonance, so the energetic penalty for forming a tetrahedral intermediate is small. This helps explain why acid halides are the most reactive acyl derivatives. Compounds Well-known acyl compounds are the acyl chlorides, such as acetyl chloride (CH3COCl) and benzoyl chloride (C6H5COCl). These compounds, which are treated as sources of acylium cations, are good reagents for attaching acyl groups to various substrates. Amides (RC(O)NR′2) and esters (RC(O)OR′) are classes of acyl compounds, as are ketones (RC(O)R′) and aldehydes (RC(O)H), where R and R′ stand for organyl (or hydrogen in the case of formyl).
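The Reactivity trends passage above ties the reactivity order of the acyl derivatives to the basicity of the leaving group, i.e. to the acidity of its conjugate acid. A small sketch (Python) reproduces the stated order by sorting on that quantity; the pKa values are approximate textbook figures supplied for illustration, not numbers from the text:

```python
# Approximate conjugate-acid pKa of each leaving group (illustrative textbook values).
acyl_derivatives = {
    "acid chloride (leaving group Cl-)":   -7.0,   # HCl
    "anhydride (leaving group RCO2-)":      4.8,   # RCO2H
    "ester (leaving group RO-)":           16.0,   # ROH
    "amide (leaving group R2N-)":          38.0,   # R2NH
}

# A lower conjugate-acid pKa means a weaker base and a better leaving group,
# so sorting in ascending order prints the derivatives from most to least reactive.
for name, pka in sorted(acyl_derivatives.items(), key=lambda kv: kv[1]):
    print(f"{name:36s} conjugate-acid pKa ≈ {pka:5.1f}")
```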
Acylium cations, radicals, and anions Acylium ions are cations of the formula . The carbon–oxygen bond length in these cations is near 1.1 Å (110-112 pm), which is shorter than the 112.8 pm of carbon monoxide and indicates triple-bond character. The carbon centres of acylium ions generally have a linear geometry and sp atomic hybridization, and are best represented by a resonance structure bearing a formal positive charge on the oxygen (rather than carbon): . They are characteristic fragments observed in EI-mass spectra of ketones. Acylium ions are common reactive intermediates, for example in the Friedel–Crafts acylation and many other organic reactions such as the Hayashi rearrangement. Salts containing acylium ions can be generated by removal of the halide from acyl halides: Acyl radicals are readily generated from aldehydes by hydrogen-atom abstraction. However, they undergo rapid decarbonylation to afford the alkyl radical: Acyl anions are almost always unstable—usually too unstable to be exploited synthetically. They readily react with the neutral aldehyde to form an acyloin dimer. Hence, synthetic chemists have developed various acyl anion synthetic equivalents, such as dithianes, as surrogates. However, as a partial exception, hindered dialkylformamides (e.g., diisopropylformamide, HCONiPr2) can undergo deprotonation at low temperature (−78 °C) with lithium diisopropylamide as the base to form a carbamoyl anion stable at these temperatures. In biochemistry In biochemistry there are many instances of acyl groups, in all major categories of biochemical molecules. Acyl-CoAs are acyl derivatives formed via fatty acid metabolism. Acetyl-CoA, the most common derivative, serves as an acyl donor in many biosynthetic transformations. Such acyl compounds are thioesters. Names of acyl groups of amino acids are formed by replacing the -ine suffix with -yl. For example, the acyl group of glycine is glycyl, and of lysine is lysyl. Names of acyl groups of ribonucleoside monophosphates such as AMP (5′-adenylic acid), GMP (5′-guanylic acid), CMP (5′-cytidylic acid), and UMP (5′-uridylic acid) are adenylyl, guanylyl, cytidylyl, and uridylyl respectively. In phospholipids, the acyl group of phosphatidic acid is called phosphatidyl-. Finally, many saccharides are acylated. In organometallic chemistry and catalysis Acyl ligands are intermediates in many carbonylation reactions, which are important in some catalytic reactions. Metal acyls arise usually via insertion of carbon monoxide into metal–alkyl bonds. Metal acyls also arise from reactions involving acyl chlorides with low-valence metal complexes or by the reaction of organolithium compounds with metal carbonyls. Metal acyls are often described by two resonance structures, one of which emphasizes the basicity of the oxygen center. O-alkylation of metal acyls gives Fischer carbene complexes. Nomenclature The common names of acyl groups are derived typically by replacing the -ic acid suffix of the corresponding carboxylic acid's common name with -yl (or -oyl), as shown in the table below. In the IUPAC nomenclature of organic chemistry, the systematic names of acyl groups are derived exactly by replacing the -yl suffix of the corresponding hydrocarbyl group's systemic name (or the -oic acid suffix of the corresponding carboxylic acid's systemic name) with -oyl, as shown in the table below. The acyls are between the hydrocarbyls and the carboxylic acids. 
The hydrocarbyl group names that end in -yl are not acyl groups, but alkyl groups derived from alkanes (methyl, ethyl, propyl, butyl), alkenyl groups derived from alkenes (propenyl, butenyl), or aryl groups (benzyl). Reaction mechanisms Acyl compounds react with nucleophiles via an addition mechanism: the nucleophile attacks the carbonyl carbon, forming a tetrahedral intermediate. This reaction can be accelerated by acidic conditions, which make the carbonyl more electrophilic, or basic conditions, which provide a more anionic and therefore more reactive nucleophile. The tetrahedral intermediate itself can be an alcohol or alkoxide, depending on the pH of the reaction. The tetrahedral intermediate of an acyl compound contains a substituent attached to the central carbon that can act as a leaving group. After the tetrahedral intermediate forms, it collapses, recreating the carbonyl C=O bond and ejecting the leaving group in an elimination reaction. As a result of this two-step addition/elimination process, the nucleophile takes the place of the leaving group on the carbonyl compound by way of an intermediate state that does not contain a carbonyl. Both steps are reversible and as a result, nucleophilic acyl substitution reactions are equilibrium processes. Because the equilibrium will favor the product containing the best nucleophile, the leaving group must be a comparatively poor nucleophile in order for a reaction to be practical. Acidic conditions Under acidic conditions, the carbonyl group of the acyl compound 1 is protonated, which activates it towards nucleophilic attack. In the second step, the protonated carbonyl 2 is attacked by a nucleophile (H−Z) to give tetrahedral intermediate 3. Proton transfer from the nucleophile (Z) to the leaving group (X) gives 4, which then collapses to eject the protonated leaving group (H−X), giving protonated carbonyl compound 5. The loss of a proton gives the substitution product, 6. Because the last step involves the loss of a proton, nucleophilic acyl substitution reactions are considered catalytic in acid. Also note that under acidic conditions, a nucleophile will typically exist in its protonated form (i.e. H−Z instead of Z−). Basic conditions Under basic conditions, a nucleophile (Nuc) attacks the carbonyl group of the acyl compound 1 to give tetrahedral alkoxide intermediate 2. The intermediate collapses and expels the leaving group (X) to give the substitution product 3. While nucleophilic acyl substitution reactions can be base-catalyzed, the reaction will not occur if the leaving group is a stronger base than the nucleophile (i.e. the leaving group must have a higher pKa than the nucleophile). Unlike acid-catalyzed processes, both the nucleophile and the leaving group exist as anions under basic conditions. This mechanism is supported by isotope labeling experiments. When ethyl propionate with an oxygen-18-labeled ethoxy group is treated with sodium hydroxide (NaOH), the oxygen-18 label is completely absent from propionic acid and is found exclusively in the ethanol. Acyl species In acyloxy groups the acyl group is bonded to oxygen: R−C(=O)−O−R′ where R−C(=O) is the acyl group. Acylium ions are cations of the formula R−C≡O+. They are intermediates in Friedel-Crafts acylations.
https://en.wikipedia.org/wiki/Appaloosa
Appaloosa
The Appaloosa is an American horse breed best known for its colorful spotted coat pattern. There is a wide range of body types within the breed, stemming from the influence of multiple breeds of horses throughout its history. Each horse's color pattern is genetically the result of various spotting patterns overlaid on top of one of several recognized base coat colors. The color pattern of the Appaloosa is of interest to those who study equine coat color genetics, as it and several other physical characteristics are linked to the leopard complex mutation (LP). Appaloosas are prone to develop equine recurrent uveitis and congenital stationary night blindness; the latter has been linked to the leopard complex. Artwork depicting prehistoric horses with leopard spotting exists in prehistoric cave paintings in Europe. Images of domesticated horses with leopard spotting patterns appeared in artwork from Ancient Greece and Han dynasty China through the early modern period. In North America, the Nez Perce people of what today is the United States Pacific Northwest developed the original American breed. Settlers once referred to these spotted horses as the "Palouse horse", possibly after the Palouse River, which ran through the heart of Nez Perce country. Gradually, the name evolved into Appaloosa. The Nez Perce lost most of their horses after the Nez Perce War in 1877, and the breed fell into decline for several decades. A small number of dedicated breeders preserved the Appaloosa as a distinct breed until the Appaloosa Horse Club (ApHC) was formed as the breed registry in 1938. The modern breed maintains bloodlines tracing to the foundation bloodstock of the registry; its partially open stud book allows the addition of some Thoroughbred, American Quarter Horse and Arabian blood. Today, the Appaloosa is one of the most popular breeds in the United States; it was named the official state horse of Idaho in 1975. It is best known as a stock horse used in a number of western riding disciplines, but is also a versatile breed with representatives seen in many other types of equestrian activity. Appaloosas have been used in many movies; an Appaloosa is a mascot for the Florida State Seminoles. Appaloosa bloodlines have influenced other horse breeds, including the Pony of the Americas, the Nez Perce Horse, and several gaited horse breeds. Breed characteristics The Appaloosa is best known for its distinctive, leopard complex-spotted coat, which is preferred in the breed. Spotting occurs in several overlay patterns on one of several recognized base coat colors. There are three other distinctive, "core" characteristics: mottled skin, striped hooves, and eyes with a white sclera. Skin mottling is usually seen around the muzzle, eyes, anus, and genitalia. Striped hooves are a common trait, quite noticeable on Appaloosas, but not unique to the breed. The sclera is the part of the eye surrounding the iris; although all horses show white around the eye if the eye is rolled back, to have a readily visible white sclera with the eye in a normal position is a distinctive characteristic seen more often in Appaloosas than in other breeds. Because the occasional individual is born with little or no visible spotting pattern, the ApHC allows "regular" registration of horses with mottled skin plus at least one of the other core characteristics. Horses with two ApHC parents but no "identifiable Appaloosa characteristics" are registered as "non-characteristic," a limited special registration status. 
There is a wide range of body types in the Appaloosa, in part because the leopard complex characteristics are its primary identifying factors, and also because several different horse breeds influenced its development. Weight and height vary over a wide range; however, the ApHC does not allow pony or draft breeding. The original "old time" or "old type" Appaloosa was a tall, narrow-bodied, rangy horse. The body style reflected a mix that started with the traditional Spanish horses already common on the plains of America before 1700. Then, 18th-century European bloodlines were added, particularly those of the "pied" horses popular in that period and shipped en masse to the Americas once the color had become unfashionable in Europe. These horses were similar to a tall, slim Thoroughbred-Andalusian type of horse popular in Bourbon-era Spain. The original Appaloosa tended to have a convex facial profile that resembled that of the warmblood-Jennet crosses first developed in the 16th century during the reign of Charles V. The old-type Appaloosa was later modified by the addition of draft horse blood after the 1877 defeat of the Nez Perce, when U.S. Government policy forced the Native Americans to become farmers and provided them with draft horse mares to breed to existing stallions. The original Appaloosas frequently had a sparse mane and tail, but that was not a primary characteristic, as many early Appaloosas did have full manes and tails. There is a possible genetic link between the leopard complex and sparse mane and tail growth, although the precise relationship is unknown. After the formation of the Appaloosa Horse Club in 1938, a more modern type of horse was developed through the addition of American Quarter Horse and Arabian bloodlines. The addition of Quarter Horse lines produced Appaloosas that performed better in sprint racing and in halter competition. Many cutting and reining horses resulted from old-type Appaloosas crossed on Arabian bloodlines, particularly via the Appaloosa foundation stallion Red Eagle. An infusion of Thoroughbred blood was added during the 1970s to produce horses more suited for racing. Many current breeders also attempt to breed away from the sparse, "rat tail" trait, and therefore modern Appaloosas have fuller manes and tails. Color and spotting patterns The coat color of an Appaloosa is a combination of a base color with an overlaid spotting pattern. The base colors recognized by the Appaloosa Horse Club include bay, black, chestnut, palomino, buckskin, cremello or perlino, roan, gray, dun and grulla. Appaloosa markings have several pattern variations. It is this unique group of spotting patterns, collectively called the "leopard complex", that most people associate with the Appaloosa horse. Spots overlay darker skin, and are often surrounded by a "halo", where the skin next to the spot is also dark but the overlying hair coat is white. It is not always easy to predict a grown Appaloosa's color at birth. Foals of any breed tend to be born with coats that darken when they shed their baby hair. In addition, Appaloosa foals do not always show classic leopard complex characteristics. Patterns sometimes change over the course of the horse's life although some, such as the blanket and leopard patterns, tend to be stable. Horses with the varnish roan and snowflake patterns are especially prone to show very little color pattern at birth, developing more visible spotting as they get older.
The ApHC also recognizes the concept of a "solid" horse, which has a base color, "but no contrasting color in the form of an Appaloosa coat pattern". Solid horses can be registered if they have mottled skin and one other leopard complex characteristic. Solid Appaloosa horses are not to be confused with gray horses, which display a similar mottling called "fleabitten gray". As they age, "fleabitten" grays may develop pigmented speckles in addition to a white coat. However, "fleabitten gray" is a different gene, and is unrelated to the leopard complex gene seen in the Appaloosa breed. While the Appaloosa Horse Club (ApHC) allows gray Appaloosa horses to be registered, gray is rare in the breed. Similarly, "dapple" gray horses are also different from Appaloosa horses, in terms of both coat color genes and patterning. Base colors are overlain by various spotting patterns, which are variable and often do not fit neatly into a specific category. Color genetics Any horse that shows Appaloosa core characteristics of coat pattern, mottled skin, striped hooves, and a visible white sclera carries at least one allele of the dominant "leopard complex" (LP) gene. The word "complex" refers to the large group of visible patterns that may occur when LP is present. LP is an autosomal incomplete dominant mutation in the TRPM1 gene located at horse chromosome 1 (ECA 1). All horses with at least one copy of LP show leopard characteristics, and it is hypothesized that LP acts together with other patterning genes (PATN) that have not yet been identified to produce the different coat patterns. Horses that are heterozygous for LP tend to be darker than homozygous horses, but this is not consistent. Three single-nucleotide polymorphisms (SNPs) in the TRPM1 gene have been identified as closely associated with the LP mutation, although the mechanism by which the pattern is produced remains unclear. A commercially available DNA-based test is likely to be developed in the near future, which breeders can use to determine if LP is present in horses that do not have visible Appaloosa characteristics. Not every Appaloosa exhibits visible coat spotting, but even apparently solid-colored horses that carry at least one dominant LP allele will exhibit characteristics such as vertically striped hooves, white sclera of the eye, and mottled skin around the eyes, lips, and genitalia. Appaloosas may also exhibit sabino or pinto type markings; as pinto genes may cover or obscure Appaloosa patterns, pinto breeding is discouraged by the ApHC, which will deny registration to horses with excessive white markings. The genes that create these different patterns can be present in the same horse. The Appaloosa Project, a genetic study group, researches the interactions of Appaloosa and pinto genes, and how they affect each other. History Recent research has suggested that Eurasian prehistoric cave paintings depicting leopard-spotted horses may have accurately reflected a phenotype of ancient wild horses. Domesticated horses with leopard complex spotting patterns have been depicted in art dating as far back as Ancient Greece, Ancient Persia, and the Han Dynasty in China; later depictions appeared in 11th-century France and 12th-century England. French paintings from the 16th and 17th centuries show horses with spotted coats being used as riding horses, and other records indicate they were also used as coach horses at the court of Louis XIV of France. 
In mid-18th-century Europe, there was a great demand for horses with the leopard complex spotting pattern among the nobility and royalty. These horses were used in the schools of horsemanship, for parade use, and other forms of display. Modern horse breeds in Europe today that have leopard complex spotting include the Knabstrupper and the Pinzgau, or Noriker horse. The Spanish probably obtained spotted horses through trade with southern Austria and Hungary, where the color pattern was known to exist. The Conquistadors and Spanish settlers then brought some vividly marked horses to the Americas when they first arrived in the early 16th century. One horse with snowflake patterning was listed with the 16 horses brought to Mexico by Cortez, and additional spotted horses were mentioned by Spanish writers by 1604. Others arrived in the western hemisphere when spotted horses went out of style in late 18th-century Europe, and were shipped to Mexico, California and Oregon. Nez Perce people The Nez Perce people lived in what today is eastern Washington, Oregon, and north central Idaho, where they engaged in agriculture as well as horse breeding. The Nez Perce first obtained horses from the Shoshone around 1730. They took advantage of the fact that they lived in excellent horse-breeding country, relatively safe from the raids of other tribes, and developed strict breeding selection practices for their animals, establishing breeding herds by 1750. They were one of the few tribes that actively used the practice of gelding inferior male horses and trading away poorer stock to remove unsuitable animals from the gene pool, and thus were notable as horse breeders by the early 19th century. Early Nez Perce horses were considered to be of high quality. Meriwether Lewis of the Lewis and Clark Expedition wrote in his February 15, 1806, journal entry: "Their horses appear to be of an excellent race; they are lofty, eligantly formed, active and durable: in short many of them look like fine English coarsers and would make a figure in any country." Lewis did note spotting patterns, saying, "... some of these horses are pided [pied] with large spots of white irregularly scattered and intermixed with the black brown bey or some other dark colour". By "pied", Lewis may have been referring to leopard-spotted patterns seen in the modern Appaloosa, though Lewis also noted that "much the larger portion are of a uniform colour". The Appaloosa Horse Club estimates that only about ten percent of the horses owned by the Nez Perce at the time were spotted. While the Nez Perce originally had many solid-colored horses and only began to emphasize color in their breeding some time after the visit of Lewis and Clark, by the late 19th century they had many spotted horses. As white settlers moved into traditional Nez Perce lands, a successful trade in horses enriched the Nez Perce, who in 1861 bred horses described as "elegant chargers, fit to mount a prince." At a time when ordinary horses could be purchased for $15, non-Indians who had purchased Appaloosa horses from the Nez Perce turned down offers of as much as $600. Nez Perce War Peace with the United States dated back to an alliance arranged by Lewis and Clark, but the encroachment of gold miners in the 1860s and settlers in the 1870s put pressure on the Nez Perce. Although a treaty of 1855 originally allowed them to keep most of their traditional land, another in 1863 reduced the land allotted to them by 90 percent. 
The Nez Perce who refused to give up their land under the 1863 treaty included a band living in the Wallowa Valley of Oregon, led by Heinmot Tooyalakekt, widely known as Chief Joseph. Tensions rose, and in May 1877, General Oliver Otis Howard called a council and ordered the non-treaty bands to move to the reservation. Chief Joseph considered military resistance futile, and by June 14, 1877, had gathered about 600 people at a site near present-day Grangeville, Idaho. But on that day a small group of warriors staged an attack on nearby white settlers, which led to the Nez Perce War. After several small battles in Idaho, more than 800 Nez Perce, mostly non-warriors, took 2000 head of various livestock including horses and fled into Montana, then traveled southeast, dipping into Yellowstone National Park. A small number of Nez Perce fighters, probably fewer than 200, successfully held off larger forces of the U.S. Army in several skirmishes, including the two-day Battle of the Big Hole in southwestern Montana. They then moved northeast and attempted to seek refuge with the Crow Nation; rebuffed, they headed for safety in Canada. Throughout this journey of about the Nez Perce relied heavily on their fast, agile and hardy Appaloosa horses. The journey came to an end when they stopped to rest near the Bears Paw Mountains in Montana, from the Canada–US border. Unbeknownst to the Nez Perce, Colonel Nelson A. Miles had led an infantry-cavalry column from Fort Keogh in pursuit. On October 5, 1877, after a five-day fight, Joseph surrendered. The battle—and the war—was over. With most of the war chiefs dead, and the noncombatants cold and starving, Joseph declared that he would "fight no more forever". Aftermath of the Nez Perce War When the U.S. 7th Cavalry accepted the surrender of Chief Joseph and the remaining Nez Perce, they immediately took more than 1,000 of the tribe's horses, sold what they could and shot many of the rest. But a significant population of horses had been left behind in the Wallowa valley when the Nez Perce began their retreat, and additional animals escaped or were abandoned along the way. The Nez Perce were ultimately settled on reservation lands in north central Idaho, were allowed few horses, and were required by the Army to crossbreed to draft horses in an attempt to create farm horses. The Nez Perce tribe never regained its former position as breeders of Appaloosas. In the late 20th century, they began a program to develop a new horse breed, the Nez Perce horse, with the intent to resurrect their horse culture, tradition of selective breeding, and horsemanship. Although a remnant population of Appaloosa horses remained after 1877, they were virtually forgotten as a distinct breed for almost 60 years. A few quality horses continued to be bred, mostly those captured or purchased by settlers and used as working ranch horses. Others were used in circuses and related forms of entertainment, such as Buffalo Bill's Wild West Show and Ringling Bros. and Barnum & Bailey Circus. The horses were originally called "Palouse horses" by settlers, a reference to the Palouse River that ran through the heart of what was once Nez Perce country. Gradually, the name evolved into "Apalouse", and then "Appaloosa". Other early variations of the name included "Appalucy", "Apalousey" and "Appaloosie". 
In one 1948 book, the breed was called the "Opelousa horse", described as a "hardy tough breed of Indian and Spanish horse" used by backwoodsmen of the late 18th century to transport goods to New Orleans for sale. By the 1950s, "Appaloosa" was regarded as the correct spelling. Revitalization The Appaloosa came to the attention of the general public in January 1937 in Western Horseman magazine when Francis D. Haines, a history professor from Lewiston, Idaho, published an article describing the breed's history and urging its preservation. Haines had performed extensive research, traveling with a friend and Appaloosa aficionado named George Hatley, visiting numerous Nez Perce villages, collecting history, and taking photographs. The article generated strong interest in the horse breed, and led to the founding of the Appaloosa Horse Club (ApHC) by Claude Thompson and a small group of other dedicated breeders in 1938. The registry was originally housed in Moro, Oregon, but in 1947 the organization moved to Moscow, Idaho, under the leadership of George Hatley. The Appaloosa Museum foundation was formed in 1975 to preserve the history of the Appaloosa horse. The Western Horseman magazine, and particularly its longtime publisher, Dick Spencer, continued to support and promote the breed through many subsequent articles. A significant crossbreeding influence used to revitalize the Appaloosa was the Arabian horse, as evidenced by early registration lists that show Arabian-Appaloosa crossbreeds as ten of the first fifteen horses registered with the ApHC. For example, one of Claude Thompson's major herd sires was Ferras, an Arabian stallion bred by W.K. Kellogg from horses imported from the Crabbet Arabian Stud of England. Ferras sired Red Eagle, a prominent Appaloosa stallion added to the Appaloosa Hall of Fame in 1988. Later, Thoroughbred and American Quarter Horse lines were added, as well as crosses from other breeds, including Morgans and Standardbreds. In 1983 the ApHC reduced the number of allowable outcrosses to three main breeds: the Arabian, the American Quarter Horse and the Thoroughbred. By 1978 the ApHC was the third largest horse registry for light horse breeds. From 1938 to 2007 more than 670,000 Appaloosas were registered by the ApHC. The state of Idaho adopted the Appaloosa as its official state horse on March 25, 1975, when Idaho Governor Cecil Andrus signed the enabling legislation. Idaho also offers a custom license plate featuring an Appaloosa; it was the first state to offer a plate featuring a state horse. Registration Located in Moscow, Idaho, the ApHC is the principal body for the promotion and preservation of the Appaloosa breed and is an international organization. Affiliate Appaloosa organizations exist in many South American and European countries, as well as South Africa, Australia, New Zealand, Canada, Mexico and Israel. The Appaloosa Horse Club had 33,000 members as of 2010; circulation of the Appaloosa Journal, which is included with most types of membership, was 32,000 in 2008. The American Appaloosa Association was founded in 1983 by members opposed to the registration of plain-colored horses, as a result of the color rule controversy. Based in Missouri, it has a membership of more than 2,000 as of 2008. Other "Appaloosa" registries have been founded for horses with leopard complex genetics that are not affiliated with the ApHC. These registries tend to have different foundation bloodstock and histories than the North American Appaloosa. 
The ApHC is by far the largest Appaloosa horse registry, and it hosts one of the world's largest breed shows. The Appaloosa is "a breed defined by ApHC bloodline requirements and preferred characteristics, including coat pattern". In other words, the Appaloosa is a distinct breed from limited bloodlines with distinct physical traits and a desired color, referred to as a "color preference". Appaloosas are not strictly a "color breed". All ApHC-registered Appaloosas must be the offspring of two registered Appaloosa parents or a registered Appaloosa and a horse from an approved breed registry, which includes Arabian horses, Quarter Horses, and Thoroughbreds. In all cases, one parent must always be a regular registered Appaloosa. The only exception to the bloodline requirements is in the case of Appaloosa-colored geldings or spayed mares with unknown pedigrees; owners may apply for "hardship registration" for these non-breeding horses. The ApHC does not accept horses with draft, pony, Pinto, or Paint breeding, and requires mature Appaloosas to stand, unshod, at least . If a horse has excessive white markings not associated with the Appaloosa pattern (such as those characteristic of a pinto) it cannot be registered unless it is verified through DNA testing that both parents have ApHC registration. Certain other characteristics are used to determine if a horse receives "regular" registration: striped hooves, white sclera visible when the eye is in a normal position, and mottled (spotted) skin around the eyes, lips, and genitalia. As the Appaloosa is one of the few horse breeds to exhibit skin mottling, this characteristic "...is a very basic and decisive indication of an Appaloosa." Appaloosas born with visible coat pattern, or mottled skin and at least one other characteristic, are registered with "regular" papers and have full show and breeding privileges. A horse that meets bloodline requirements but is born without the recognized color pattern and characteristics can still be registered with the ApHC as a "non-characteristic" Appaloosa. These solid-colored, "non-characteristic" Appaloosas may not be shown at ApHC events unless the owner verifies the parentage through DNA testing and pays a supplementary fee to enter the horse into the ApHC's Performance Permit Program (PPP). Color rule controversy During the 1940s and 1950s, when both the Appaloosa Horse Club (ApHC) and the American Quarter Horse Association (AQHA) were in their formative years, minimally marked or roan Appaloosas were sometimes used in Quarter Horse breeding programs. At the same time, it was noted that two solid-colored registered Quarter Horse parents would sometimes produce what Quarter Horse aficionados call a "cropout", a foal with white coloration similar to that of an Appaloosa or Pinto. For a considerable time, until DNA testing could verify parentage, the AQHA refused to register such horses. The ApHC did accept cropout horses that exhibited proper Appaloosa traits, while cropout pintos became the core of the American Paint Horse Association. Famous Appaloosas who were cropouts included Colida, Joker B, Bright Eyes Brother and Wapiti. In the late 1970s, the color controversy went in the opposite direction within the Appaloosa registry. The ApHC's decision in 1982 to allow solid-colored or "non-characteristic" Appaloosas to be registered resulted in substantial debate within the Appaloosa breeding community. 
Until then, a foal of Appaloosa parents that had insufficient color was often denied registration, although non-characteristic Appaloosas were allowed into the registry. But breeder experience had shown that some solid Appaloosas could throw a spotted foal in a subsequent generation, at least when bred to a spotted Appaloosa. In addition, many horses with a solid coat exhibited secondary characteristics such as skin mottling, the white sclera, and striped hooves. The controversy stirred by the ApHC's decision was intense. In 1983 a number of Appaloosa breeders opposed to the registration of solid-colored horses formed the American Appaloosa Association, a breakaway organization. Uses Appaloosas are used extensively for both Western and English riding. Western competitions include cutting, reining, roping and O-Mok-See sports such as barrel racing (known as the Camas Prairie Stump Race in Appaloosa-only competition) and pole bending (called the Nez Percé Stake Race at breed shows). English disciplines in which they are used include eventing, show jumping, and fox hunting. They are common in endurance riding competitions, as well as in casual trail riding. Appaloosas are also bred for horse racing, with an active breed racing association promoting the sport. They are generally used for middle-distance racing at distances between and ; an Appaloosa holds the all-breed record for the distance, set in 1989. Appaloosas are often used in Western movies and television series. Examples include "Cojo Rojo" in the Marlon Brando film The Appaloosa, "Zip Cochise" ridden by John Wayne in the 1966 film El Dorado and "Cowboy", the mount of Matt Damon in True Grit. An Appaloosa horse is part of the controversial mascot team for the Florida State Seminoles, Chief Osceola and Renegade, even though the Seminole Tribe of Florida was not directly associated with Appaloosa horses. Influence There are several American horse breeds with leopard coloring and Appaloosa ancestry. These include the Pony of the Americas and the Colorado Ranger. Appaloosas were also crossbred with gaited horse breeds in an attempt to create leopard-spotted ambling horse breeds, including the Walkaloosa, the Spanish Jennet Horse, and the Tiger horse. Because such crossbred offspring are not eligible for ApHC registration, their owners have formed breed registries for horses with leopard complex patterns and gaited ability. In 1995 the Nez Perce tribe began a program to develop a new and distinct horse breed, the Nez Perce Horse, based on crossbreeding the Appaloosa with the Akhal-Teke breed from Central Asia. Appaloosa stallions have also been exported to Denmark to add new blood to the Knabstrupper breed. Health issues Genetically linked vision issues Two genetic conditions that can cause blindness in Appaloosas are both associated with the leopard complex color pattern. Appaloosas have an eightfold greater risk of developing Equine Recurrent Uveitis (ERU) than all other breeds combined. Up to 25 percent of all horses with ERU may be Appaloosas. Uveitis in horses has many causes, including eye trauma, disease, and bacterial, parasitic and viral infections, but ERU is characterized by recurring episodes of uveitis, rather than a single incident. If not treated, ERU can lead to blindness. Eighty percent of all uveitis cases are found in Appaloosas with physical characteristics including roan or light-colored coat patterns, little pigment around the eyelids, and sparse hair in the mane and tail, traits denoting the most at-risk individuals. 
Researchers may have identified a gene region containing an allele that makes the breed more susceptible to the disease. Appaloosas that are homozygous for the leopard complex (LP) gene are also at risk for congenital stationary night blindness (CSNB). This form of night blindness has been linked with the leopard complex since the 1970s, and in 2007 a "significant association" between LP and CSNB was identified. CSNB is a disorder that causes an affected animal to lack night vision, although day vision is normal. It is an inherited disorder, present from birth, and does not progress over time. Studies in 2008 and 2010 indicate that both CSNB and leopard complex spotting patterns are linked to TRPM1. Drug rules In 2007 the ApHC implemented new drug rules allowing Appaloosas to show with the drugs furosemide, known by the trade name of Lasix, and acetazolamide. Furosemide is used to prevent horses who bleed from the nose when subjected to strenuous work from having bleeding episodes when in competition, and is widely used in horse racing. Acetazolamide ("Acet") is used for treating horses with the genetic disease hyperkalemic periodic paralysis (HYPP), and prevents affected animals from having seizures. Acet is only allowed for horses that test positive for HYPP and have HYPP status noted on their registration papers. The ApHC recommends that Appaloosas that trace to certain American Quarter Horse bloodlines be tested for HYPP, and owners have the option to choose to place HYPP testing results on registration papers. Foals of AQHA-registered stallions and mares born on or after January 1, 2007 that carry HYPP will be required to be HYPP tested and have their HYPP status designated on their registration papers. Both drugs are controversial, in part because they are considered drug maskers and diuretics that can make it difficult to detect the presence of other drugs in the horse's system. On one side, it is argued that the United States Equestrian Federation (USEF), which sponsors show competition for many different horse breeds, and the International Federation for Equestrian Sports (FEI), which governs international and Olympic equestrian competition, ban the use of furosemide. On the other side of the controversy, several major stock horse registries that sanction their own shows, including the American Quarter Horse Association, American Paint Horse Association, and the Palomino Horse Breeders of America, allow acetazolamide and furosemide to be used within 24 hours of showing under certain circumstances.
Biology and health sciences
Horses
Animals
81761
https://en.wikipedia.org/wiki/Diesel%20fuel
Diesel fuel
Diesel fuel, also called diesel oil, heavy oil (historically) or simply diesel, is any liquid fuel specifically designed for use in a diesel engine, a type of internal combustion engine in which fuel ignition takes place without a spark as a result of compression of the inlet air and then injection of fuel. Therefore, diesel fuel needs good compression ignition characteristics. The most common type of diesel fuel is a specific fractional distillate of petroleum fuel oil, but alternatives that are not derived from petroleum, such as biodiesel, biomass to liquid (BTL) or gas to liquid (GTL) diesel are increasingly being developed and adopted. To distinguish these types, petroleum-derived diesel is sometimes called petrodiesel in some academic circles. Diesel is a high-volume product of oil refineries. In many countries, diesel fuel is standardized. For example, in the European Union, the standard for diesel fuel is EN 590. Ultra-low-sulfur diesel (ULSD) is a diesel fuel with substantially lowered sulfur contents. As of 2016, almost all of the petroleum-based diesel fuel available in the United Kingdom, mainland Europe, and North America is of a ULSD type. Before diesel fuel had been standardized, the majority of diesel engines typically ran on cheap fuel oils. These fuel oils are still used in watercraft diesel engines. Despite being specifically designed for diesel engines, diesel fuel can also be used as fuel for several non-diesel engines, for example the Akroyd engine, the Stirling engine, or boilers for steam engines. Diesel is often used in heavy trucks. However, diesel exhaust, especially from older engines, can cause health damage. Names Diesel fuel has many colloquial names; most commonly, it is simply referred to as diesel. In the United Kingdom, diesel fuel for road use is commonly called diesel or sometimes white diesel if required to differentiate it from a reduced-tax agricultural-only product containing an identifying coloured dye known as red diesel. The official term for white diesel is DERV, standing for diesel-engine road vehicle. In Australia, diesel fuel is also known as distillate (not to be confused with "distillate" in an older sense referring to a different motor fuel), and in Indonesia (as well in Israel), it is known as Solar, a trademarked name from the country's national petroleum company Pertamina. The term gas oil (French: gazole) is sometimes also used to refer to diesel fuel. History Origins Diesel fuel originated from experiments conducted by German scientist and inventor Rudolf Diesel for his compression-ignition engine which he invented around 1892. Originally, Diesel did not consider using any specific type of fuel. Instead, he claimed that the operating principle of his rational heat motor would work with any kind of fuel in any state of matter. The first diesel engine prototype and the first functional Diesel engine were only designed for liquid fuels. At first, Diesel tested crude oil from Pechelbronn, but soon replaced it with petrol and kerosene, because crude oil proved to be too viscous, with the main testing fuel for the Diesel engine being kerosene (paraffin). Diesel experimented with types of lamp oil from various sources, as well as types of petrol and ligroin, which all worked well as Diesel engine fuels. Later, Diesel tested coal tar creosote, paraffin oil, crude oil, gasoline and fuel oil, which eventually worked as well. 
In Scotland and France, shale oil was used as fuel for the first 1898 production Diesel engines because other fuels were too expensive. In 1900, the French Otto society built a Diesel engine for use with crude oil, which was exhibited at the 1900 Paris Exposition and the 1911 World's Fair in Paris. The engine actually ran on peanut oil instead of crude oil, and no modifications were necessary for peanut oil operation. During his first Diesel engine tests, Diesel also used illuminating gas as fuel, and managed to build functional designs, both with and without pilot injection. According to Diesel, neither was a coal-dust–producing industry existent, nor was fine, high-quality coal-dust commercially available in the late 1890s. This is the reason why the Diesel engine was never designed or planned as a coal-dust engine. Only in December 1899 did Diesel test a coal-dust prototype, which used external mixture formation and liquid fuel pilot injection. This engine proved to be functional, but suffered from piston ring failure after a few minutes due to coal dust deposition. Since the 20th century Before diesel fuel was standardised, diesel engines typically ran on cheap fuel oils. In the United States, these were distilled from petroleum, whereas in Europe, coal-tar creosote oil was used. Some diesel engines were fuelled with mixtures of fuels, such as petrol, kerosene, rapeseed oil, or lubricating oil, which were cheaper because, at the time, they were not being taxed. The introduction of motor-vehicle diesel engines, such as the Mercedes-Benz OM 138, in the 1930s meant that higher-quality fuels with proper ignition characteristics were needed. At first no improvements were made to motor-vehicle diesel fuel quality. After World War II, the first modern high-quality diesel fuels were standardised. These standards were, for instance, the DIN 51601, VTL 9140–001, and NATO F 54 standards. In 1993, the DIN 51601 was rendered obsolete by the new EN 590 standard, which has been used in the European Union ever since. In sea-going watercraft, where diesel propulsion had gained prevalence by the late 1970s due to increasing fuel costs caused by the 1970s energy crisis, cheap heavy fuel oils are still used instead of conventional motor-vehicle diesel fuel. These heavy fuel oils (often called Bunker C) can be used in diesel-powered and steam-powered vessels. Types Diesel fuel is produced from various sources, the most common being petroleum. Other sources include biomass, animal fat, biogas, natural gas, and coal liquefaction. Petroleum diesel Petroleum diesel is the most common type of diesel fuel. It is produced by the fractional distillation of crude oil at atmospheric pressure, resulting in a mixture of carbon chains that typically contain between 9 and 25 carbon atoms per molecule. This fraction is subjected to hydrodesulfurization. Usually such "straight-run" diesel is insufficient in supply and quality, so other sources of diesel fuels are blended in. One major source of additional diesel fuel is obtained by cracking heavier fractions, using visbreaking and coking. This technology converts less useful fractions, but the product contains olefins (alkenes), which require hydrogenation to give the saturated hydrocarbons as desired. Another refinery stream that contributes to diesel fuel is hydrocracking. Finally, kerosene is added to modify its viscosity. Synthetic diesel Synthetic diesel can be produced from many carbonaceous precursors, but natural gas is the most important. 
Raw materials are converted to synthesis gas, which is then converted to synthetic diesel by the Fischer–Tropsch process. Synthetic diesel produced in this way generally consists mainly of paraffins, with low sulfur and aromatics content. This material is often blended into the above-mentioned petroleum-derived diesel. Biodiesel Biodiesel consists mainly of fatty acid methyl esters (FAME) and is obtained from vegetable oils or animal fats (biolipids) transesterified with methanol. It can be produced from many types of oils, the most common being rapeseed oil (rapeseed methyl ester, RME) in Europe and soybean oil (soy methyl ester, SME) in the US. Methanol can also be replaced with ethanol for the transesterification process, which results in the production of ethyl esters. The transesterification processes use catalysts, such as sodium or potassium hydroxide, to convert vegetable oil and methanol into biodiesel and the undesirable byproducts glycerine and water, which will need to be removed from the fuel along with methanol traces. Biodiesel can be used pure (B100) in engines where the manufacturer approves such use, but it is more often used as a mix with diesel, denoted BXX, where XX is the biodiesel content in percent. FAME used as fuel is specified in the DIN EN 14214 and ASTM D6751 standards. Storage and additives In the US, diesel is recommended to be stored in a yellow container to differentiate it from kerosene, which is typically kept in blue containers, and gasoline (petrol), which is typically kept in red containers. In the UK, diesel is normally stored in a black container to differentiate it from unleaded or leaded petrol, which are stored in green and red containers, respectively. Ethylene-vinyl acetate (EVA) is added to diesel as a "cold flow improver". 50-500 ppm of EVA inhibits crystallization of waxes, which can block fuel filters. Antifoaming agents (silicones), antioxidants (hindered phenols), and "metal deactivating agents" (salicylaldimines) are other additives. Their use is dictated by the particular composition of and storage plans for diesel fuels. Each is added at the 5-50 ppm level. Standards The diesel engine is a multifuel engine and can run on a huge variety of fuels. However, development of high-performance, high-speed diesel engines for cars and lorries in the 1930s meant that a proper fuel specifically designed for such engines was needed: diesel fuel. In order to ensure consistent quality, diesel fuel is standardised; the first standards were introduced after World War II. Typically, a standard defines certain properties of the fuel, such as cetane number, density, flash point, sulphur content, or biodiesel content. Diesel fuel standards include: for diesel fuel, EN 590 (European Union), ASTM D975 (United States), GOST R 52368 (Russia; equivalent to EN 590), NATO F 54 (NATO; equivalent to EN 590), and DIN 51601 (West Germany; obsolete); for biodiesel fuel, EN 14214 (European Union), ASTM D6751 (United States), and CAN/CGSB-3.524 (Canada). Measurements and pricing Cetane number The principal measure of diesel fuel quality is its cetane number. A cetane number is a measure of the delay of ignition of a diesel fuel. A higher cetane number indicates that the fuel ignites more readily when sprayed into hot compressed air. European (EN 590 standard) road diesel has a minimum cetane number of 51. Fuels with higher cetane numbers, normally "premium" diesel fuels with additional cleaning agents and some synthetic content, are available in some markets. 
Fuel value and price About 86.1% of diesel fuel mass is carbon, and when burned, it offers a net heating value of 43.1 MJ/kg as opposed to 43.2 MJ/kg for gasoline. Due to the higher density, diesel fuel offers a higher volumetric energy density: the density of EN 590 diesel fuel is defined as at , about 9.0-13.9% more than EN 228 gasoline (petrol)'s at 15 °C, which should be taken into consideration when comparing volumetric fuel prices. The CO2 emissions from diesel are 73.25 g/MJ, just slightly lower than for gasoline at 73.38 g/MJ. Diesel fuel is generally simpler to refine from petroleum than gasoline. Additional refining is required to remove sulfur, which contributes to a sometimes higher cost. In many parts of the United States and throughout the United Kingdom and Australia, diesel fuel may be priced higher than petrol per gallon or liter. Reasons for higher-priced diesel include the shutdown of some refineries in the Gulf of Mexico, diversion of mass refining capacity to gasoline production, and a recent transfer to ultra-low-sulfur diesel (ULSD), which causes infrastructural complications. In Sweden, a diesel fuel designated as MK-1 (class 1 environmental diesel) is also being sold. This is a ULSD that also has a lower aromatics content, with a limit of 5%. This fuel is slightly more expensive to produce than regular ULSD. In Germany, the fuel tax on diesel fuel is about 28% lower than the petrol fuel tax. Taxation Diesel fuel is similar to heating oil, which is used in central heating. In Europe, the United States, and Canada, taxes on diesel fuel are higher than on heating oil due to the fuel tax, and in those areas, heating oil is marked with fuel dyes and trace chemicals to prevent and detect tax fraud. "Untaxed" diesel (sometimes called "off-road diesel" or "red diesel" due to its red dye) is available in some countries for use primarily in agricultural applications, such as fuel for tractors, recreational and utility vehicles or other noncommercial vehicles that do not use public roads. This fuel may have sulfur levels that exceed the limits for road use in some countries (e.g. US). This untaxed diesel is dyed red for identification; if it is used for a typically taxed purpose (such as on-road driving), the user can be fined (e.g. US$10,000 in the US). In the United Kingdom, Belgium and the Netherlands, it is known as red diesel (or gas oil), and is also used in agricultural vehicles, home heating tanks, refrigeration units on vans/trucks that contain perishable items such as food and medicine, and for marine craft. Diesel fuel, or marked gas oil, is dyed green in the Republic of Ireland and Norway. The term "diesel-engined road vehicle" (DERV) is used in the UK as a synonym for unmarked road diesel fuel. In India, taxes on diesel fuel are lower than on petrol, as the majority of the transportation for grain and other essential commodities across the country runs on diesel. Taxes on biodiesel in the US vary between states. Some states (Texas, for example) have no tax on biodiesel and a reduced tax on biodiesel blends equivalent to the amount of biodiesel in the blend, so that B20 fuel is taxed 20% less than pure petrodiesel. Other states, such as North Carolina, tax biodiesel (in any blended configuration) the same as petrodiesel, although they have introduced new incentives to producers and users of all biofuels. Uses Diesel fuel is mostly used in high-speed diesel engines, especially motor-vehicle (e.g. 
car, lorry) diesel engines, but not all diesel engines run on diesel fuel. For example, large two-stroke watercraft engines typically use heavy fuel oils instead of diesel fuel, and certain types of diesel engines, such as MAN M-System engines, are designed to run on petrol with knock resistances of up to 86 RON. On the other hand, gas turbine and some other types of internal combustion engines, and external combustion engines, can also be designed to take diesel fuel. The viscosity requirement of diesel fuel is usually specified at 40 °C. A disadvantage of diesel fuel in cold climates is that its viscosity increases as the temperature decreases, changing it into a gel (see Compression Ignition – Gelling) that cannot flow in fuel systems. Special low-temperature diesel contains additives to keep it liquid at lower temperatures. On-road vehicles Trucks and buses, which were often otto-powered in the 1920s through 1950s, are now almost exclusively diesel-powered. Due to its ignition characteristics, diesel fuel is thus widely used in these vehicles. Since diesel fuel is not well-suited for otto engines, passenger cars, which often use otto or otto-derived engines, typically run on petrol instead of diesel fuel. However, especially in Europe and India, many passenger cars have, due to better engine efficiency, diesel engines, and thus run on regular diesel fuel. Railroad Diesel displaced coal and fuel oil for steam-powered vehicles in the latter half of the 20th century, and is now used almost exclusively for the combustion engines of self-powered rail vehicles (locomotives and railcars). Aircraft In general, diesel engines are not well-suited for planes and helicopters. This is because of the diesel engine's comparatively low power-to-mass ratio, meaning that diesel engines are typically rather heavy, which is a disadvantage in aircraft. Therefore, there is little need for using diesel fuel in aircraft, and diesel fuel is not commercially used as aviation fuel. Instead, petrol (Avgas), and jet fuel (e. g. Jet A-1) are used. However, especially in the 1920s and 1930s, numerous series-production aircraft diesel engines that ran on fuel oils were made, because they had several advantages: their fuel consumption was low, they were reliable, not prone to catching fire, and required minimal maintenance. The introduction of petrol direct injection in the 1930s outweighed these advantages, and aircraft diesel engines quickly fell out of use. With improvements in power-to-mass ratios of diesel engines, several on-road diesel engines have been converted to and certified for aircraft use since the early 21st century. These engines typically run on Jet A-1 aircraft fuel (but can also run on diesel fuel). Jet A-1 has ignition characteristics similar to diesel fuel, and is thus suited for certain (but not all) diesel engines. Military vehicles Until World War II, several military vehicles, especially those that required high engine performance (armored fighting vehicles, for example the M26 Pershing or Panther tanks), used conventional otto engines and ran on petrol. Ever since World War II, several military vehicles with diesel engines have been made, capable of running on diesel fuel. This is because diesel engines are more fuel efficient, and diesel fuel is less prone to catching fire. Some of these diesel-powered vehicles (such as the Leopard 1 or MAN 630) still ran on petrol, and some military vehicles were still made with otto engines (e. g. 
Ural-375 or Unimog 404), incapable of running on diesel fuel. Tractors and heavy equipment Today's tractors and heavy equipment are mostly diesel-powered. Among tractors, only the smaller classes may also offer gasoline-fuelled engines. The dieselization of tractors and heavy equipment began in Germany before World War II but was unusual in the United States until after that war. During the 1950s and 1960s, it progressed in the US as well. Diesel fuel is commonly used in oil and gas extracting equipment, although some locales use electric or natural gas powered equipment. Tractors and heavy equipment were often multifuel in the 1920s through 1940s, running on spark-ignition low-compression engines, Akroyd engines, or diesel engines. Thus many farm tractors of the era could burn gasoline, alcohol, kerosene, and any light grade of fuel oil such as heating oil, or tractor vaporising oil, according to whichever was most affordable in a region at any given time. On US farms during this era, the name "distillate" often referred to any of the aforementioned light fuel oils. Spark ignition engines did not start as well on distillate, so typically a small auxiliary gasoline tank was used for cold starting, and the fuel valves were adjusted several minutes later, after warm-up, to transition to distillate. Engine accessories such as vaporizers and radiator shrouds were also used, both with the aim of capturing heat, because when such an engine was run on distillate, it ran better when both it and the air it inhaled were warmer rather than at ambient temperature. Dieselization with dedicated diesel engines (high-compression with mechanical fuel injection and compression ignition) replaced such systems and made more efficient use of the diesel fuel being burned. Other uses Poor-quality diesel fuel has been used as an extraction agent for liquid–liquid extraction of palladium from nitric acid mixtures. Such use has been proposed as a means of separating the fission product palladium from PUREX raffinate, which comes from used nuclear fuel. In this system of solvent extraction, the hydrocarbons of the diesel act as the diluent while the dialkyl sulfides act as the extractant. This extraction operates by a solvation mechanism. So far, neither a pilot plant nor a full-scale plant has been constructed to recover palladium, rhodium or ruthenium from nuclear wastes created by the use of nuclear fuel. Diesel fuel is often used as the main ingredient in oil-base mud drilling fluid. The advantage of using diesel is its low cost and its ability to drill a wide variety of difficult strata, including shale, salt and gypsum formations. Diesel-oil mud is typically mixed with up to 40% brine water. Due to health, safety and environmental concerns, diesel-oil mud is often replaced with vegetable, mineral, or synthetic food-grade oil-base drilling fluids, although diesel-oil mud is still in widespread use in certain regions. During the development of rocket engines in Germany during World War II, J-2 diesel fuel was used as the fuel component in several engines, including the BMW 109-718. J-2 diesel fuel was also used as a fuel for gas turbine engines. Chemical analysis Chemical composition In the United States, petroleum-derived diesel is composed of about 75% saturated hydrocarbons (primarily paraffins including n, iso, and cycloparaffins), and 25% aromatic hydrocarbons (including naphthalenes and alkylbenzenes). 
The average chemical formula for common diesel fuel is C12H23, ranging approximately from C10H20 to C15H28. Chemical properties Most diesel fuels freeze at common winter temperatures, though the exact temperatures vary greatly. Petrodiesel typically freezes around temperatures of , whereas biodiesel freezes between temperatures of . The viscosity of diesel noticeably increases as the temperature decreases, changing it at low temperatures into a gel that cannot flow in fuel systems. Conventional diesel fuels vaporise at temperatures between 149 °C and 371 °C. Conventional diesel flash points vary between 52 and 96 °C, which makes it safer than petrol and unsuitable for spark-ignition engines. Unlike petrol, the flash point of a diesel fuel has no relation to its performance in an engine nor to its autoignition qualities. Carbon dioxide formation As a good approximation, the chemical formula of diesel is CnH2n; diesel is a mixture of different molecules. Since carbon has a molar mass of 12 g/mol and hydrogen a molar mass of about 1 g/mol, the fraction by weight of carbon in EN 590 diesel fuel is roughly 12/14. The reaction of diesel combustion is given by: 2 CnH2n + 3n O2 → 2n CO2 + 2n H2O. Carbon dioxide has a molar mass of 44 g/mol, as it consists of 2 atoms of oxygen (16 g/mol) and 1 atom of carbon (12 g/mol), so 12 g of carbon yield 44 g of carbon dioxide. Diesel has a density of 838 g per liter. Putting everything together, the mass of carbon dioxide produced by burning 1 liter of diesel fuel can be estimated as 838 g × (12/14) × (44/12) ≈ 2.6 kg of CO2. The figure obtained with this estimation is close to the values found in the literature. For gasoline, with a density of 0.75 kg/L and a ratio of carbon to hydrogen atoms of about 6 to 14, the same estimate gives 750 g × (72/86) × (44/12) ≈ 2.3 kg of CO2 per liter of gasoline burnt. Hazards Environment hazards of sulfur In the past, diesel fuel contained higher quantities of sulfur. European emission standards and preferential taxation have forced oil refineries to dramatically reduce the level of sulfur in diesel fuels. In the European Union, the sulfur content has been dramatically reduced during the last 20 years. Automotive diesel fuel is covered in the European Union by standard EN 590. In the 1990s, specifications allowed a maximum sulfur content of 2000 ppm, reduced to a limit of 350 ppm by the beginning of the 21st century with the introduction of Euro 3 specifications. The limit was lowered to 50 ppm with the introduction of Euro 4 by 2006 (ULSD, Ultra Low Sulfur Diesel). The standard for diesel fuel in force in Europe as of 2009 is Euro 5, with a maximum sulfur content of 10 ppm. In the United States, more stringent emission standards have been adopted with the transition to ULSD starting in 2006, and becoming mandatory on June 1, 2010 (see also diesel exhaust). Algae, microbes, and water contamination There has been much discussion and misunderstanding of algae in diesel fuel. Algae need light to live and grow. As there is no sunlight in a closed fuel tank, no algae can survive, but some microbes can survive and feed on the diesel fuel. These microbes form a colony that lives at the interface of fuel and water. They grow quite fast in warmer temperatures. They can even grow in cold weather when fuel tank heaters are installed. Parts of the colony can break off and clog the fuel lines and fuel filters. Water in fuel can damage a fuel injection pump. Some diesel fuel filters also trap water. Water contamination in diesel fuel can lead to freezing while in the fuel tank. 
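To make the back-of-the-envelope CO2 arithmetic above concrete, the short Python sketch below repeats the same estimate from a fuel's density and its carbon-to-hydrogen atom ratio. It is only an illustration of the approximation used in this section: the function name is hypothetical, the density figures (838 g/L for diesel, 750 g/L for gasoline) are the representative values quoted above, and the result ignores additives, incomplete combustion, and biofuel content.

```python
# Rough CO2-per-litre estimate from fuel density and elemental composition,
# following the approximation above (diesel treated as CnH2n, gasoline C:H about 6:14).

M_C, M_H, M_CO2 = 12.0, 1.0, 44.0  # molar masses in g/mol

def co2_per_litre(density_g_per_l: float, carbon_atoms: float, hydrogen_atoms: float) -> float:
    """Grams of CO2 produced by burning one litre of fuel, assuming complete combustion."""
    carbon_mass_fraction = (carbon_atoms * M_C) / (carbon_atoms * M_C + hydrogen_atoms * M_H)
    carbon_per_litre = density_g_per_l * carbon_mass_fraction  # grams of carbon per litre
    return carbon_per_litre * (M_CO2 / M_C)                    # every 12 g of C becomes 44 g of CO2

print(co2_per_litre(838.0, 1, 2))    # diesel as CnH2n          -> roughly 2630 g CO2 per litre
print(co2_per_litre(750.0, 6, 14))   # gasoline, C:H about 6:14 -> roughly 2300 g CO2 per litre
```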
The freezing water that saturates the fuel will sometimes clog the fuel injector pump. Once the water inside the fuel tank has started to freeze, gelling is more likely to occur. When the fuel is gelled it is not effective until the temperature is raised and the fuel returns to a liquid state. Road hazard Diesel is less flammable than gasoline / petrol. However, because it evaporates slowly, any spills on a roadway can pose a slip hazard to vehicles. After the light fractions have evaporated, a greasy slick is left on the road which reduces tire grip and traction, and can cause vehicles to skid. The loss of traction is similar to that encountered on black ice, resulting in especially dangerous situations for two-wheeled vehicles, such as motorcycles and bicycles, in roundabouts.
Technology
Fuel
null
81818
https://en.wikipedia.org/wiki/Lagoon
Lagoon
A lagoon is a shallow body of water separated from a larger body of water by a narrow landform, such as reefs, barrier islands, barrier peninsulas, or isthmuses. Lagoons are commonly divided into coastal lagoons (or barrier lagoons) and atoll lagoons. They have also been identified as occurring on mixed-sand and gravel coastlines. There is an overlap between bodies of water classified as coastal lagoons and bodies of water classified as estuaries. Lagoons are common coastal features around many parts of the world. Definition and terminology Lagoons are shallow, often elongated bodies of water separated from a larger body of water by a shallow or exposed shoal, coral reef, or similar feature. Some authorities include fresh water bodies in the definition of "lagoon", while others explicitly restrict "lagoon" to bodies of water with some degree of salinity. The distinction between "lagoon" and "estuary" also varies between authorities. Richard A. Davis Jr. restricts "lagoon" to bodies of water with little or no fresh water inflow, and little or no tidal flow, and calls any bay that receives a regular flow of fresh water an "estuary". Davis does state that the terms "lagoon" and "estuary" are "often loosely applied, even in scientific literature". Timothy M. Kusky characterizes lagoons as normally being elongated parallel to the coast, while estuaries are usually drowned river valleys, elongated perpendicular to the coast. Coastal lagoons are classified as inland bodies of water. When used within the context of a distinctive portion of coral reef ecosystems, the term "lagoon" is synonymous with the term "back reef" or "backreef", which is more commonly used by coral reef scientists to refer to the same area. Many lagoons do not include "lagoon" in their common names. Currituck, Albemarle and Pamlico Sounds in North Carolina, Great South Bay between Long Island and the barrier beaches of Fire Island in New York, Isle of Wight Bay, which separates Ocean City, Maryland from the rest of Worcester County, Maryland, Banana River in Florida, US, Lake Illawarra in New South Wales, Australia, Montrose Basin in Scotland, and Broad Water in Wales have all been classified as lagoons, despite their names. In England, The Fleet at Chesil Beach has also been described as a lagoon. In some languages the word for a lagoon is simply a type of lake: In Chinese a lake is (), and a lagoon is (). In the French Mediterranean several lagoons are called étang ("lake"). Contrariwise, several other languages have specific words for such bodies of water. In Spanish, coastal lagoons generically are , but those on the Mediterranean coast are specifically called . In Russian and Ukrainian, those on the Black Sea are (), while the generic word is (). Similarly, in the Baltic, Danish has the specific , and German the specifics and Haff, as well as generic terms derived from . In Poland these lagoons are called zalew ("bay"), and in Lithuania marios ("lagoon, reservoir"). In Jutland several lagoons are known as fjord. In New Zealand the Māori word refers to a coastal lagoon formed at the mouth of a braided river where there are mixed sand and gravel beaches, while , an ephemeral coastal waterbody, is neither a true lagoon, lake, nor estuary. Some languages differentiate between coastal and atoll lagoons. In French, refers specifically to an atoll lagoon, while coastal lagoons are described as , the generic word for a still lake or pond. In Vietnamese, refers to an atoll lagoon, whilst is coastal. 
In Latin America, the Spanish term to which lagoon translates may be used for a small fresh water lake, in a similar way that a creek is considered a small river. However, sometimes it is popularly used to describe a full-sized lake, such as Laguna Catemaco in Mexico, which is actually the third-largest lake by area in the country. A brackish water lagoon may thus be explicitly identified as a "coastal lagoon". In Portuguese, a similar usage is found: the corresponding term may refer to a body of shallow seawater or to a small freshwater lake not linked to the sea. Etymology Lagoon is derived from the Italian laguna, which refers to the waters around Venice, the Venetian Lagoon. Laguna is attested in English by at least 1612, and had been Anglicized to "lagune" by 1673. In 1697 William Dampier referred to a "Lagune or Lake of Salt water" on the coast of Mexico. Captain James Cook described an island "of Oval form with a Lagoon in the middle" in 1769. Atoll lagoons Atoll lagoons form as coral reefs grow upwards while the islands that the reefs surround subside, until eventually only the reefs remain above sea level. Unlike the lagoons that form shoreward of fringing reefs, atoll lagoons often contain some deep portions. Coastal lagoons Coastal lagoons form along gently sloping coasts where barrier islands or reefs can develop offshore, and the sea-level is rising relative to the land along the shore (either because of an intrinsic rise in sea-level, or subsidence of the land along the coast). Coastal lagoons do not form along steep or rocky coasts, or where the tidal range is too large. Due to the gentle slope of the coast, coastal lagoons are shallow. A relative drop in sea level may leave a lagoon largely dry, while a rise in sea level may let the sea breach or destroy barrier islands, and leave reefs too deep underwater to protect the lagoon. Coastal lagoons are young and dynamic, and may be short-lived in geological terms. Coastal lagoons are common, occurring along nearly 15 percent of the world's shorelines. In the United States, lagoons are found along more than 75 percent of the Eastern and Gulf Coasts. Coastal lagoons can be classified as leaky, restricted, or choked. Coastal lagoons are usually connected to the open ocean by inlets between barrier islands. The number and size of the inlets, precipitation, evaporation, and inflow of fresh water all affect the nature of the lagoon. Lagoons with little or no interchange with the open ocean, little or no inflow of fresh water, and high evaporation rates, such as Lake St. Lucia, in South Africa, may become highly saline. Lagoons with no connection to the open ocean and significant inflow of fresh water, such as the Lake Worth Lagoon in Florida in the middle of the 19th century, may be entirely fresh. On the other hand, lagoons with many wide inlets, such as the Wadden Sea, have strong tidal currents and mixing. Coastal lagoons tend to accumulate sediments from inflowing rivers, from runoff from the shores of the lagoon, and from sediment carried into the lagoon through inlets by the tide. Large quantities of sediment may occasionally be deposited in a lagoon when storm waves overwash barrier islands. Mangroves and marsh plants can facilitate the accumulation of sediment in a lagoon. Benthic organisms may stabilize or destabilize sediments.
Physical sciences
Oceanic and coastal landforms
null
81884
https://en.wikipedia.org/wiki/Newton%27s%20law%20of%20cooling
Newton's law of cooling
In the study of heat transfer, Newton's law of cooling is a physical law which states that the rate of heat loss of a body is directly proportional to the difference in the temperatures between the body and its environment. The law is frequently qualified to include the condition that the temperature difference is small and the nature of heat transfer mechanism remains the same. As such, it is equivalent to a statement that the heat transfer coefficient, which mediates between heat losses and temperature differences, is a constant. In heat conduction, Newton's Law is generally followed as a consequence of Fourier's law. The thermal conductivity of most materials is only weakly dependent on temperature, so the constant heat transfer coefficient condition is generally met. In convective heat transfer, Newton's Law is followed for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference. In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences. When stated in terms of temperature differences, Newton's law (with several further simplifying assumptions, such as a low Biot number and a temperature-independent heat capacity) results in a simple differential equation expressing temperature-difference as a function of time. The solution to that equation describes an exponential decrease of temperature-difference over time. This characteristic decay of the temperature-difference is also associated with Newton's law of cooling. Historical background Isaac Newton published his work on cooling anonymously in 1701 as "Scala graduum Caloris" in Philosophical Transactions. Newton did not originally state his law in the above form in 1701. Rather, using today's terms, Newton noted after some mathematical manipulation that the rate of temperature change of a body is proportional to the difference in temperatures between the body and its surroundings. This final simplest version of the law, given by Newton himself, was partly due to confusion in Newton's time between the concepts of heat and temperature, which would not be fully disentangled until much later. In 2020, Maruyama and Moriya repeated Newton's experiments with modern apparatus, and they applied modern data reduction techniques. In particular, these investigators took account of thermal radiation at high temperatures (as for the molten metals Newton used), and they accounted for buoyancy effects on the air flow. By comparison to Newton's original data, they concluded that his measurements (from 1692 to 1693) had been "quite accurate". Relationship to mechanism of cooling Convection cooling is sometimes said to be governed by "Newton's law of cooling." When the heat transfer coefficient is independent, or relatively independent, of the temperature difference between object and environment, Newton's law is followed. The law holds well for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference. Newton's law is most closely obeyed in purely conduction-type cooling. However, the heat transfer coefficient is a function of the temperature difference in natural convective (buoyancy driven) heat transfer. In that case, Newton's law only approximates the result when the temperature difference is relatively small. 
Newton himself realized this limitation. A correction to Newton's law concerning convection for larger temperature differentials, by including an exponent, was made in 1817 by Dulong and Petit. (These men are better known for their formulation of the Dulong–Petit law concerning the molar specific heat capacity of a crystal.) Another situation that does not obey Newton's law is radiative heat transfer. Radiative cooling is better described by the Stefan–Boltzmann law, in which the heat transfer rate varies as the difference in the 4th powers of the absolute temperatures of the object and of its environment. Mathematical formulation of Newton's law The statement of Newton's law used in the heat transfer literature puts into mathematics the idea that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings. For a temperature-independent heat transfer coefficient, the statement is $q = h\,(T - T_\text{env}) = h\,\Delta T(t)$, where $q$ is the heat flux transferred out of the body (SI unit: watt/m2), $h$ is the heat transfer coefficient (assumed independent of T and averaged over the surface) (SI unit: W/(m2⋅K)), $T$ is the temperature of the object's surface (SI unit: K), $T_\text{env}$ is the temperature of the environment, i.e., the temperature suitably far from the surface (SI unit: K), and $\Delta T(t) = T(t) - T_\text{env}$ is the time-dependent temperature difference between environment and object (SI unit: K). In global parameters, by integrating the heat flux over the surface area, it can also be stated as $\dot{Q} = \oint_A h\,(T - T_\text{env})\,dA$, where $\dot{Q}$ is the rate of heat transfer out of the body (SI unit: watt), $h$ is the heat transfer coefficient (assumed independent of T and averaged over the surface) (SI unit: W/(m2⋅K)), $A$ is the heat transfer surface area (SI unit: m2), $T$ is the temperature of the object's surface (SI unit: K), $T_\text{env}$ is the temperature of the environment, i.e., the temperature suitably far from the surface (SI unit: K), and $\Delta T(t) = T(t) - T_\text{env}$ is the time-dependent temperature difference between environment and object (SI unit: K). If the heat transfer coefficient and the temperature difference are uniform along the heat transfer surface, the above formula simplifies to $\dot{Q} = h A\,\Delta T(t)$. The heat transfer coefficient h depends upon physical properties of the fluid and the physical situation in which convection occurs. Therefore, a single usable heat transfer coefficient (one that does not vary significantly across the temperature-difference ranges covered during cooling and heating) must be derived or found experimentally for every system that is to be analyzed. Formulas and correlations are available in many references to calculate heat transfer coefficients for typical configurations and fluids. For laminar flows, the heat transfer coefficient is usually smaller than in turbulent flows, because turbulent flows have strong mixing within the boundary layer on the heat transfer surface. Note that the heat transfer coefficient changes in a system when a transition from laminar to turbulent flow occurs. The Biot number The Biot number, a dimensionless quantity, is defined for a body as $\text{Bi} = h L_C / k_b$, where h = film coefficient or heat transfer coefficient or convective heat transfer coefficient, LC = characteristic length, which is commonly defined as the volume of the body divided by the surface area of the body, such that $L_C = V_\text{body} / A_\text{surface}$, and kb = thermal conductivity of the body. The physical significance of the Biot number can be understood by imagining the heat flow from a hot metal sphere suddenly immersed in a pool to the surrounding fluid. 
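As a quick numerical illustration of the uniform-surface form Q̇ = hAΔT, the sketch below uses assumed, illustrative values for h, the area, and the temperatures (they are not figures from the text):

```python
# Illustrative sketch: heat loss rate from a surface using Q = h * A * (T - T_env).
# The h, area, and temperature values below are assumptions for illustration only.
def heat_loss_rate(h, area, t_surface, t_env):
    """Rate of heat transfer out of the body, in watts."""
    return h * area * (t_surface - t_env)

# Assumed example: a 0.5 m^2 plate at 350 K in 300 K air with h = 15 W/(m^2*K).
q = heat_loss_rate(h=15.0, area=0.5, t_surface=350.0, t_env=300.0)
print(f"Heat loss rate: {q:.1f} W")   # -> 375.0 W
```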
The heat flow experiences two resistances: the first outside the surface of the sphere, and the second within the solid metal (which is influenced by both the size and composition of the sphere). The ratio of these resistances is the dimensionless Biot number. If the thermal resistance at the fluid/sphere interface exceeds that thermal resistance offered by the interior of the metal sphere, the Biot number will be less than one. For systems where it is much less than one, the interior of the sphere may be presumed always to have the same temperature, although this temperature may be changing, as heat passes into the sphere from the surface. The equation to describe this change in (relatively uniform) temperature inside the object, is the simple exponential one described in Newton's law of cooling expressed in terms of temperature difference (see below). In contrast, the metal sphere may be large, causing the characteristic length to increase to the point that the Biot number is larger than one. In this case, temperature gradients within the sphere become important, even though the sphere material is a good conductor. Equivalently, if the sphere is made of a thermally insulating (poorly conductive) material, such as wood or styrofoam, the interior resistance to heat flow will exceed that at the fluid/sphere boundary, even with a much smaller sphere. In this case, again, the Biot number will be greater than one. Values of the Biot number smaller than 0.1 imply that the heat conduction inside the body is much faster than the heat convection away from its surface, and temperature gradients are negligible inside of it. This can indicate the applicability (or inapplicability) of certain methods of solving transient heat transfer problems. For example, a Biot number less than 0.1 typically indicates less than 5% error will be present when assuming a lumped-capacitance model of transient heat transfer (also called lumped system analysis). Typically, this type of analysis leads to simple exponential heating or cooling behavior ("Newtonian" cooling or heating) since the internal energy of the body is directly proportional to its temperature, which in turn determines the rate of heat transfer into or out of it. This leads to a simple first-order differential equation which describes heat transfer in these systems. Having a Biot number smaller than 0.1 labels a substance as "thermally thin," and temperature can be assumed to be constant throughout the material's volume. The opposite is also true: A Biot number greater than 0.1 (a "thermally thick" substance) indicates that one cannot make this assumption, and more complicated heat transfer equations for "transient heat conduction" will be required to describe the time-varying and non-spatially-uniform temperature field within the material body. Analytic methods for handling these problems, which may exist for simple geometric shapes and uniform material thermal conductivity, are described in the article on the heat equation. Application of Newton's law of transient cooling Simple solutions for transient cooling of an object may be obtained when the internal thermal resistance within the object is small in comparison to the resistance to heat transfer away from the object's surface (by external conduction or convection), which is the condition for which the Biot number is less than about 0.1. This condition allows the presumption of a single, approximately uniform temperature inside the body, which varies in time but not with position. 
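A small sketch of the Biot-number check described above, for a sphere; the material properties and dimensions are assumed values, and the 0.1 threshold is the rule of thumb quoted in the text:

```python
import math

# Sketch: compute the Biot number for a sphere and test the lumped-capacitance
# criterion Bi < 0.1. Property values below are assumptions for illustration.
def biot_number(h, radius, k_body):
    """Bi = h * L_C / k_body, with L_C = V / A = r / 3 for a sphere."""
    volume = (4.0 / 3.0) * math.pi * radius**3
    area = 4.0 * math.pi * radius**2
    l_c = volume / area              # characteristic length, r/3 for a sphere
    return h * l_c / k_body

# Copper sphere (k ~ 400 W/(m*K)), r = 1 cm, h = 100 W/(m^2*K): thermally "thin".
print(biot_number(h=100.0, radius=0.01, k_body=400.0))   # ~0.0008 -> lumped model OK
# Wooden sphere (k ~ 0.15 W/(m*K)), same size and h: thermally "thick".
print(biot_number(h=100.0, radius=0.01, k_body=0.15))    # ~2.2 -> internal gradients matter
```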
(Otherwise the body would have many different temperatures inside it at any one time.) This single temperature will generally change exponentially as time progresses (see below). The condition of low Biot number leads to the so-called lumped capacitance model. In this model, the internal energy (the amount of thermal energy in the body) is calculated by assuming a constant heat capacity. In that case, the internal energy of the body is a linear function of the body's single internal temperature. The lumped capacitance solution that follows assumes a constant heat transfer coefficient, as would be the case in forced convection. For free convection, the lumped capacitance model can be solved with a heat transfer coefficient that varies with temperature difference. First-order transient response of lumped-capacitance objects A body treated as a lumped capacitance object, with a total internal energy of $U$ (in joules), is characterized by a single uniform internal temperature, $T(t)$. The heat capacitance, $C$, of the body is $C = dU/dT$ (in J/K), for the case of an incompressible material. The internal energy may be written in terms of the temperature of the body, the heat capacitance (taken to be independent of temperature), and a reference temperature at which the internal energy is zero: $U = C\,(T - T_\text{ref})$. Differentiating with respect to time gives $\frac{dU}{dt} = C\,\frac{dT}{dt}$. Applying the first law of thermodynamics to the lumped object gives $\frac{dU}{dt} = -\dot{Q}$, where the rate of heat transfer out of the body, $\dot{Q} = h A\,(T(t) - T_\text{env})$, may be expressed by Newton's law of cooling, and where no work transfer occurs for an incompressible material. Thus, $\frac{dT(t)}{dt} = -\frac{hA}{C}\,(T(t) - T_\text{env}) = -\frac{1}{\tau}\,(T(t) - T_\text{env})$, where the time constant of the system is $\tau = C/(hA)$. The heat capacitance may be written in terms of the object's specific heat capacity, $c$ (J/kg-K), and mass, $m$ (kg), as $C = m c$. The time constant is then $\tau = m c/(hA)$. When the environmental temperature is constant in time, we may define $\Delta T(t) = T(t) - T_\text{env}$. The equation becomes $\frac{d\,\Delta T(t)}{dt} = -\frac{1}{\tau}\,\Delta T(t)$. The solution of this differential equation, by integration from the initial condition, is $\Delta T(t) = \Delta T(0)\,e^{-t/\tau}$, where $\Delta T(0)$ is the temperature difference at time 0. Reverting to temperature, the solution is $T(t) = T_\text{env} + (T(0) - T_\text{env})\,e^{-t/\tau}$. The temperature difference between the body and the environment decays exponentially as a function of time. Standard Formulation By defining $r = hA/C = 1/\tau$, the differential equation becomes $\frac{dT(t)}{dt} = -r\,(T(t) - T_\text{env})$, where $dT(t)/dt$ is the rate of heat loss (SI unit: K/second), $T(t)$ is the temperature of the object's surface (SI unit: K), $T_\text{env}$ is the temperature of the environment; i.e., the temperature suitably far from the surface (SI unit: K), and $r$ is the coefficient of heat transfer (SI unit: 1/second). Solving the initial-value problem using separation of variables gives $T(t) = T_\text{env} + (T(0) - T_\text{env})\,e^{-rt}$.
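The closed-form solution can be evaluated directly. The sketch below cools an assumed small aluminium part; every parameter value is an illustrative assumption, not a figure from the text:

```python
import math

# Sketch: lumped-capacitance cooling T(t) = T_env + (T0 - T_env) * exp(-t / tau),
# with tau = m * c_p / (h * A). All numbers are assumed for illustration.
def lumped_cooling_temperature(t, t0, t_env, h, area, mass, c_p):
    tau = mass * c_p / (h * area)          # time constant in seconds
    return t_env + (t0 - t_env) * math.exp(-t / tau)

# Assumed example: 0.1 kg aluminium part (c_p ~ 900 J/(kg*K)), A = 0.01 m^2,
# h = 25 W/(m^2*K), cooling from 400 K in 300 K air (tau = 360 s).
for minutes in (0, 5, 15, 60):
    temp = lumped_cooling_temperature(minutes * 60, t0=400.0, t_env=300.0,
                                      h=25.0, area=0.01, mass=0.1, c_p=900.0)
    print(f"t = {minutes:3d} min  T = {temp:6.1f} K")
```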
Physical sciences
Thermodynamics
Physics
81887
https://en.wikipedia.org/wiki/Proxima%20Centauri
Proxima Centauri
Proxima Centauri is the nearest star to Earth after the Sun, located 4.25 light-years away in the southern constellation of Centaurus. This object was discovered in 1915 by Robert Innes. It is a small, low-mass star, too faint to be seen with the naked eye, with an apparent magnitude of 11.13. Its Latin name means the 'nearest [star] of Centaurus'. Proxima Centauri is a member of the Alpha Centauri star system, being identified as component Alpha Centauri C, and is 2.18° to the southwest of the Alpha Centauri AB pair. It is currently from AB, which it orbits with a period of about 550,000 years. Proxima Centauri is a red dwarf star with a mass about 12.5% of the Sun's mass (), and average density about 33 times that of the Sun. Because of Proxima Centauri's proximity to Earth, its angular diameter can be measured directly. Its actual diameter is about one-seventh (14%) the diameter of the Sun. Although it has a very low average luminosity, Proxima Centauri is a flare star that randomly undergoes dramatic increases in brightness because of magnetic activity. The star's magnetic field is created by convection throughout the stellar body, and the resulting flare activity generates a total X-ray emission similar to that produced by the Sun. The internal mixing of its fuel by convection through its core and Proxima's relatively low energy-production rate, mean that it will be a main-sequence star for another four trillion years. Proxima Centauri has one known exoplanet and two candidate exoplanets: Proxima Centauri b, the candidate Proxima Centauri d and the disputed Proxima Centauri c. Proxima Centauri b orbits the star at a distance of roughly with an orbital period of approximately 11.2 Earth days. Its estimated mass is at least 1.07 times that of Earth. Proxima b orbits within Proxima Centauri's habitable zone—the range where temperatures are right for liquid water to exist on its surface—but, because Proxima Centauri is a red dwarf and a flare star, the planet's habitability is highly uncertain. A candidate super-Earth, Proxima Centauri c, roughly away from Proxima Centauri, orbits it every . A candidate sub-Earth, Proxima Centauri d, roughly away, orbits it every 5.1 days. General characteristics Proxima Centauri is a red dwarf, because it belongs to the main sequence on the Hertzsprung–Russell diagram and is of spectral class M5.5. The M5.5 class means that it falls in the low-mass end of M-type dwarf stars, with its hue shifted toward red-yellow by an effective temperature of . Its absolute visual magnitude, or its visual magnitude as viewed from a distance of , is 15.5. Its total luminosity over all wavelengths is only 0.16% that of the Sun, although when observed in the wavelengths of visible light to which the eye is most sensitive, it is only 0.0056% as luminous as the Sun. More than 85% of its radiated power is at infrared wavelengths. In 2002, optical interferometry with the Very Large Telescope (VLTI) found that the angular diameter of Proxima Centauri is . Because its distance is known, the actual diameter of Proxima Centauri can be calculated to be about 1/7 that of the Sun, or 1.5 times that of Jupiter. The star's mass, estimated from stellar theory, is , or 129 Jupiter masses (). The mass has been calculated directly, although with less precision, from observations of microlensing events to be . 
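The diameter comparison above follows from simple small-angle geometry, D = d·θ. In the sketch below the 4.25 light-year distance is taken from the article, while the angular diameter of about 1.0 milliarcsecond is an assumed round value used only for illustration:

```python
import math

# Sketch: physical diameter from angular diameter and distance (small-angle
# formula D = d * theta). The ~1.0 mas angular diameter is an assumed value.
LY_IN_KM = 9.4607e12
SUN_DIAMETER_KM = 1.3914e6

def physical_diameter_km(angular_diameter_mas, distance_ly):
    theta_rad = angular_diameter_mas / 1000.0 / 3600.0 * math.pi / 180.0
    return distance_ly * LY_IN_KM * theta_rad

d = physical_diameter_km(angular_diameter_mas=1.0, distance_ly=4.25)
print(f"{d:.3g} km, about {d / SUN_DIAMETER_KM:.2f} of the Sun's diameter")  # ~0.14
```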
Lower mass main-sequence stars have higher mean density than higher mass ones, and Proxima Centauri is no exception: it has a mean density of , compared with the Sun's mean density of . The measured surface gravity of Proxima Centauri, given as the base-10 logarithm of the acceleration in units of cgs, is 5.20. This is 162 times the surface gravity on Earth. A 1998 study of photometric variations indicates that Proxima Centauri completes a full rotation once every 83.5 days. A subsequent time series analysis of chromospheric indicators in 2002 suggests a longer rotation period of  days. Later observations of the star's magnetic field subsequently revealed that the star rotates with a period of  days, consistent with a measurement of  days from radial velocity observations. Structure and fusion Because of its low mass, the interior of the star is completely convective, causing energy to be transferred to the exterior by the physical movement of plasma rather than through radiative processes. This convection means that the helium ash left over from the thermonuclear fusion of hydrogen does not accumulate at the core but is instead circulated throughout the star. Unlike the Sun, which will only burn through about 10% of its total hydrogen supply before leaving the main sequence, Proxima Centauri will consume nearly all of its fuel before the fusion of hydrogen comes to an end. Convection is associated with the generation and persistence of a magnetic field. The magnetic energy from this field is released at the surface through stellar flares that briefly (as short as per ten seconds) increase the overall luminosity of the star. On May 6, 2019, a flare event bordering Solar M and X flare class, briefly became the brightest ever detected, with a far ultraviolet emission of . These flares can grow as large as the star and reach temperatures measured as high as 27 million K—hot enough to radiate X-rays. Proxima Centauri's quiescent X-ray luminosity, approximately (4–16) erg/s ((4–16) W), is roughly equal to that of the much larger Sun. The peak X-ray luminosity of the largest flares can reach  erg/s ( W). Proxima Centauri's chromosphere is active, and its spectrum displays a strong emission line of singly ionized magnesium at a wavelength of 280 nm. About 88% of the surface of Proxima Centauri may be active, a percentage that is much higher than that of the Sun even at the peak of the solar cycle. Even during quiescent periods with few or no flares, this activity increases the corona temperature of Proxima Centauri to 3.5 million K, compared to the 2 million K of the Sun's corona, and its total X-ray emission is comparable to the sun's. Proxima Centauri's overall activity level is considered low compared to other red dwarfs, which is consistent with the star's estimated age of 4.85 years, since the activity level of a red dwarf is expected to steadily wane over billions of years as its stellar rotation rate decreases. The activity level appears to vary with a period of roughly 442 days, which is shorter than the Sun's solar cycle of 11 years. Proxima Centauri has a relatively weak stellar wind, no more than 20% of the mass loss rate of the solar wind. Because the star is much smaller than the Sun, the mass loss per unit surface area from Proxima Centauri may be eight times that from the Sun's surface. Life phases A red dwarf with the mass of Proxima Centauri will remain on the main sequence for about four trillion years. 
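The surface-gravity figure quoted above can be checked in a couple of lines; the log g = 5.20 value is from the text, and the 9.81 m/s² Earth reference is the usual standard value assumed here:

```python
# Sketch: convert the quoted log10(g) = 5.20 (g in cm/s^2, cgs) to SI units and
# compare with Earth's surface gravity (standard value 9.81 m/s^2 assumed).
log_g_cgs = 5.20
g_cgs = 10 ** log_g_cgs          # cm/s^2
g_si = g_cgs / 100.0             # m/s^2
print(f"g = {g_si:.0f} m/s^2, about {g_si / 9.81:.0f} times Earth's surface gravity")  # ~162
```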
As the proportion of helium increases because of hydrogen fusion, the star will become smaller and hotter, gradually transforming into a so-called "blue dwarf". Near the end of this period it will become significantly more luminous, reaching 2.5% of the Sun's luminosity () and warming any orbiting bodies for a period of several billion years. When the hydrogen fuel is exhausted, Proxima Centauri will then evolve into a helium white dwarf (without passing through the red giant phase) and steadily lose any remaining heat energy. The Alpha Centauri system may have formed through a low-mass star being dynamically captured by a more massive binary of within their embedded star cluster before the cluster dispersed. However, more accurate measurements of the radial velocity are needed to confirm this hypothesis. If Proxima Centauri was bound to the Alpha Centauri system during its formation, the stars are likely to share the same elemental composition. The gravitational influence of Proxima might have disturbed the Alpha Centauri protoplanetary disks. This would have increased the delivery of volatiles such as water to the dry inner regions, so possibly enriching any terrestrial planets in the system with this material. Alternatively, Proxima Centauri may have been captured at a later date during an encounter, resulting in a highly eccentric orbit that was then stabilized by the galactic tide and additional stellar encounters. Such a scenario may mean that Proxima Centauri's planetary companions have had a much lower chance for orbital disruption by Alpha Centauri. As the members of the Alpha Centauri pair continue to evolve and lose mass, Proxima Centauri is predicted to become unbound from the system in around 3.5 billion years from the present. Thereafter, the star will steadily diverge from the pair. Motion and location Based on a parallax of , published in 2020 in Gaia Data Release 3, Proxima Centauri is from the Sun. Previously published parallaxes include: in 2018 by Gaia DR2, , in 2014 by the Research Consortium On Nearby Stars; , in the original Hipparcos Catalogue, in 1997; in the Hipparcos New Reduction, in 2007; and using the Hubble Space Telescope fine guidance sensors, in 1999. From Earth's vantage point, Proxima Centauri is separated from Alpha Centauri by 2.18 degrees, or four times the angular diameter of the full Moon. Proxima Centauri has a relatively large proper motion—moving 3.85 arcseconds per year across the sky. It has a radial velocity towards the Sun of 22.2 km/s. From Proxima Centauri, the Sun would appear as a bright 0.4-magnitude star in the constellation Cassiopeia, similar to that of Achernar or Procyon from Earth. Among the known stars, Proxima Centauri has been the closest star to the Sun for about 32,000 years and will be so for about another 25,000 years, after which Alpha Centauri A and Alpha Centauri B will alternate approximately every 79.91 years as the closest star to the Sun. In 2001, J. García-Sánchez et al. predicted that Proxima Centauri will make its closest approach to the Sun in approximately 26,700 years, coming within . A 2010 study by V. V. Bobylev predicted a closest approach distance of in about 27,400 years, followed by a 2014 study by C. A. L. Bailer-Jones predicting a perihelion approach of in roughly 26,710 years. Proxima Centauri is orbiting through the Milky Way at a distance from the Galactic Centre that varies from , with an orbital eccentricity of 0.07. 
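The parallax-to-distance relation this paragraph relies on is d[pc] = 1/p[arcsec]. The sketch below uses an approximate parallax of about 768 milliarcseconds as an assumed input, since the exact published figure is not reproduced above:

```python
# Sketch: distance from trigonometric parallax, d [parsec] = 1 / p [arcsec].
# The ~768 mas parallax is an approximate assumed value for Proxima Centauri.
PC_IN_LY = 3.2616

def distance_from_parallax(parallax_mas):
    parallax_arcsec = parallax_mas / 1000.0
    d_pc = 1.0 / parallax_arcsec
    return d_pc, d_pc * PC_IN_LY

d_pc, d_ly = distance_from_parallax(768.0)
print(f"{d_pc:.3f} pc  ~= {d_ly:.2f} light-years")   # ~1.30 pc, ~4.25 ly
```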
Alpha Centauri Proxima Centauri has been suspected to be a companion of the Alpha Centauri binary star system since its discovery in 1915. For this reason, it is sometimes referred to as Alpha Centauri C. Data from the Hipparcos satellite, combined with ground-based observations, were consistent with the hypothesis that the three stars are a gravitationally bound system. Kervella et al. (2017) used high-precision radial velocity measurements to determine with a high degree of confidence that Proxima and Alpha Centauri are gravitationally bound. Proxima Centauri's orbital period around the Alpha Centauri AB barycenter is years with an eccentricity of ; it approaches Alpha Centauri to at periastron and retreats to at apastron. At present, Proxima Centauri is from the Alpha Centauri AB barycenter, nearly to the furthest point in its orbit. Six single stars, two binary star systems, and a triple star share a common motion through space with Proxima Centauri and the Alpha Centauri system. (The co-moving stars include HD 4391, γ2 Normae, and Gliese 676.) The space velocities of these stars are all within 10 km/s of Alpha Centauri's peculiar motion. Thus, they may form a moving group of stars, which would indicate a common point of origin, such as in a star cluster. Planetary system As of 2022, three planets (one confirmed and two candidates) have been detected in orbit around Proxima Centauri, with one possibly being among the lightest ever detected by radial velocity ("d"), one close to Earth's size within the habitable zone ("b"), and a possible gas dwarf that orbits much further out than the inner two ("c"), although its status remains disputed. Searches for exoplanets around Proxima Centauri date to the late 1970s. In the 1990s, multiple measurements of Proxima Centauri's radial velocity constrained the maximum mass that a detectable companion could possess. The activity level of the star adds noise to the radial velocity measurements, complicating detection of a companion using this method. In 1998, an examination of Proxima Centauri using the Faint Object Spectrograph on board the Hubble Space Telescope appeared to show evidence of a companion orbiting at a distance of about 0.5 AU. A subsequent search using the Wide Field and Planetary Camera 2 failed to locate any companions. Astrometric measurements at the Cerro Tololo Inter-American Observatory appear to rule out a Jupiter-sized planet with an orbital period of 2−12 years. In 2017, a team of astronomers using the Atacama Large Millimeter Array reported detecting a belt of cold dust orbiting Proxima Centauri at a range of 1−4 AU from the star. This dust has a temperature of around 40 K and has a total estimated mass of 1% of the planet Earth. They tentatively detected two additional features: a cold belt with a temperature of 10 K orbiting around 30 AU and a compact emission source about 1.2 arcseconds from the star. There was a hint at an additional warm dust belt at a distance of 0.4 AU from the star. However, upon further analysis, these emissions were determined to be most likely the result of a large flare emitted by the star in March 2017. The presence of dust within 4 AU radius from the star is not needed to model the observations. Planet b Proxima Centauri b, or Alpha Centauri Cb, orbits the star at a distance of roughly with an orbital period of approximately 11.2 Earth days. Its estimated mass is at least 1.07 times that of the Earth. 
Moreover, the equilibrium temperature of Proxima Centauri b is estimated to be within the range where water could exist as liquid on its surface; thus, placing it within the habitable zone of Proxima Centauri. The first indications of the exoplanet Proxima Centauri b were found in 2013 by Mikko Tuomi of the University of Hertfordshire from archival observation data. To confirm the possible discovery, a team of astronomers launched the Pale Red Dot project in January 2016. On 24 August 2016, the team of 31 scientists from all around the world, led by Guillem Anglada-Escudé of Queen Mary University of London, confirmed the existence of Proxima Centauri b through a peer-reviewed article published in Nature. The measurements were performed using two spectrographs: HARPS on the ESO 3.6 m Telescope at La Silla Observatory and UVES on the 8 m Very Large Telescope at Paranal Observatory. Several attempts to detect a transit of this planet across the face of Proxima Centauri have been made. A transit-like signal appearing on 8 September 2016, was tentatively identified, using the Bright Star Survey Telescope at the Zhongshan Station in Antarctica. In 2016, in a paper that helped to confirm Proxima Centauri b's existence, a second signal in the range of 60–500 days was detected. However, stellar activity and inadequate sampling causes its nature to remain unclear. Planet c Proxima Centauri c is a candidate super-Earth or gas dwarf about orbiting at roughly every . If Proxima Centauri b were the star's Earth, Proxima Centauri c would be equivalent to Neptune. Due to its large distance from Proxima Centauri, it is unlikely to be habitable, with a low equilibrium temperature of around 39 K. The planet was first reported by Italian astrophysicist Mario Damasso and his colleagues in April 2019. Damasso's team had noticed minor movements of Proxima Centauri in the radial velocity data from the ESO's HARPS instrument, indicating a possible additional planet orbiting Proxima Centauri. In 2020, the planet's existence was confirmed by Hubble astrometry data from . A possible direct imaging counterpart was detected in the infrared with the SPHERE, but the authors admit that they "did not obtain a clear detection." If their candidate source is in fact Proxima Centauri c, it is too bright for a planet of its mass and age, implying that the planet may have a ring system with a radius of around However, disputed the radial velocity confirmation of the planet. Planet d In 2019, a team of astronomers revisited the data from ESPRESSO about Proxima Centauri b to refine its mass. While doing so, the team found another radial velocity spike with a periodicity of 5.15 days. They estimated that if it were a planetary companion, it would be no less than 0.29 Earth masses. Further analysis confirmed the signal's existence leading up to the announcement of the candidate planet in February 2022. Habitability Before the discovery of Proxima Centauri b, the TV documentary Alien Worlds hypothesized that a life-sustaining planet could exist in orbit around Proxima Centauri or other red dwarfs. Such a planet would lie within the habitable zone of Proxima Centauri, about from the star, and would have an orbital period of 3.6–14 days. A planet orbiting within this zone may experience tidal locking to the star. If the orbital eccentricity of this hypothetical planet were low, Proxima Centauri would move little in the planet's sky, and most of the surface would experience either day or night perpetually. 
The presence of an atmosphere could serve to redistribute heat from the star-lit side to the far side of the planet. Proxima Centauri's flare outbursts could erode the atmosphere of any planet in its habitable zone, but the documentary's scientists thought that this obstacle could be overcome. Gibor Basri of the University of California, Berkeley argued: "No one [has] found any showstoppers to habitability." For example, one concern was that the torrents of charged particles from the star's flares could strip the atmosphere off any nearby planet. If the planet had a strong magnetic field, the field would deflect the particles from the atmosphere; even the slow rotation of a tidally locked planet that spins once for every time it orbits its star would be enough to generate a magnetic field, as long as part of the planet's interior remained molten. Other scientists, especially proponents of the Rare Earth hypothesis, disagree that red dwarfs can sustain life. Any exoplanet in this star's habitable zone would likely be tidally locked, resulting in a relatively weak planetary magnetic moment, leading to strong atmospheric erosion by coronal mass ejections from Proxima Centauri. In December 2020, a candidate SETI radio signal BLC-1 was announced as potentially coming from the star. The signal was later determined to be human-made radio interference. Observational history In 1915, the Scottish astronomer Robert Innes, director of the Union Observatory in Johannesburg, South Africa, discovered a star that had the same proper motion as Alpha Centauri. He suggested that it be named Proxima Centauri (actually Proxima Centaurus). In 1917, at the Royal Observatory at the Cape of Good Hope, the Dutch astronomer Joan Voûte measured the star's trigonometric parallax at and determined that Proxima Centauri was approximately the same distance from the Sun as Alpha Centauri. It was the lowest-luminosity star known at the time. An equally accurate parallax determination of Proxima Centauri was made by American astronomer Harold L. Alden in 1928, who confirmed Innes's view that it is closer, with a parallax of . A size estimate for Proxima Centauri was obtained by the Canadian astronomer John Stanley Plaskett in 1925 using interferometry. The result was 207,000 miles (333,000 km), or approximately . In 1951, American astronomer Harlow Shapley announced that Proxima Centauri is a flare star. Examination of past photographic records showed that the star displayed a measurable increase in magnitude on about 8% of the images, making it the most active flare star then known. The proximity of the star allows for detailed observation of its flare activity. In 1980, the Einstein Observatory produced a detailed X-ray energy curve of a stellar flare on Proxima Centauri. Further observations of flare activity were made with the EXOSAT and ROSAT satellites, and the X-ray emissions of smaller, solar-like flares were observed by the Japanese ASCA satellite in 1995. Proxima Centauri has since been the subject of study by most X-ray observatories, including XMM-Newton and Chandra. Because of Proxima Centauri's southern declination, it can only be viewed south of latitude 27° N. Red dwarfs such as Proxima Centauri are too faint to be seen with the naked eye. Even from Alpha Centauri A or B, Proxima would only be seen as a fifth magnitude star. 
It has apparent visual magnitude 11, so a telescope with an aperture of at least is needed to observe it, even under ideal viewing conditions—under clear, dark skies with Proxima Centauri well above the horizon. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Proxima Centauri for this star on August 21, 2016, and it is now so included in the List of IAU approved Star Names. In 2016, a superflare was observed from Proxima Centauri, the strongest flare ever seen. The optical brightness increased by a factor of 68× to approximately magnitude 6.8. It is estimated that similar flares occur around five times every year but are of such short duration, just a few minutes, that they have never been observed before. On 2020 April 22 and 23, the New Horizons spacecraft took images of two of the nearest stars, Proxima Centauri and Wolf 359. When compared with Earth-based images, a very large parallax effect was easily visible. However, this was only used for illustrative purposes and did not improve on previous distance measurements. Future exploration Because of the star's proximity to Earth, Proxima Centauri has been proposed as a flyby destination for interstellar travel. If non-nuclear, conventional propulsion technologies are used, the flight of a spacecraft to Proxima Centauri and its planets would probably require thousands of years. For example, Voyager 1, which is now travelling relative to the Sun, would reach Proxima Centauri in 73,775 years, were the spacecraft travelling in the direction of that star and Proxima was standing still. Proxima's actual galactic orbit means a slow-moving probe would have only several tens of thousands of years to catch the star at its closest approach, before it recedes out of reach. Nuclear pulse propulsion might enable such interstellar travel with a trip timescale of a century, inspiring several studies such as Project Orion, Project Daedalus, and Project Longshot. Project Breakthrough Starshot aims to reach the Alpha Centauri system within the first half of the 21st century, with microprobes travelling at 20% of the speed of light propelled by around 100 gigawatts of Earth-based lasers. The probes would perform a fly-by of Proxima Centauri about 20 years after its launch, or possibly go into orbit after about 140 years if swing-by's around Proxima Centauri or Alpha Centauri are to be employed. Then the probes would take photos and collect data of the planets of the stars, and their atmospheric compositions. It would take 4.25 years for the information collected to be sent back to Earth. Explanatory notes
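As a rough check of the flight and communication times discussed in this section, the following sketch uses the 4.25 light-year distance from the article, assumes a constant cruise speed, and ignores acceleration and deceleration phases; the Voyager speed is an approximate assumed figure:

```python
# Sketch: rough travel and signal times to Proxima Centauri, assuming a fixed
# 4.25 ly distance, constant cruise speed, and no acceleration phases.
DISTANCE_LY = 4.25
C_KM_S = 299792.458

def travel_time_years(speed_fraction_of_c):
    return DISTANCE_LY / speed_fraction_of_c

print(f"Starshot-style probe at 0.2c: ~{travel_time_years(0.2):.0f} years in transit")
print(f"Radio signal back to Earth:   ~{DISTANCE_LY:.2f} years")
# Voyager 1 moves at roughly 17 km/s (assumed approximate figure), i.e. ~5.7e-5 c.
print(f"Voyager-class probe:          ~{travel_time_years(17.0 / C_KM_S):.0f} years")
```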
Physical sciences
Notable stars
Astronomy
81931
https://en.wikipedia.org/wiki/Coordinate%20system
Coordinate system
In geometry, a coordinate system is a system that uses one or more numbers, or coordinates, to uniquely determine the position of the points or other geometric elements on a manifold such as Euclidean space. The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics, but may be complex numbers or elements of a more abstract system such as a commutative ring. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. Common coordinate systems Number line The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line. The coordinate of a point P is defined as the signed distance from O to P, where the signed distance is the distance taken as positive or negative depending on which side of the line P lies. Each point is given a unique coordinate and each real number is the coordinate of a unique point. Cartesian coordinate system The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually orthogonal planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the three-dimensional system may be a right-handed or a left-handed system. Polar coordinate system Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ (measured counterclockwise from the axis to the line). Then there is a unique point on this line whose signed distance from the origin is r for given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates. For example, (r, θ), (r, θ+2π) and (−r, θ+π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. Cylindrical and spherical coordinate systems There are two common methods for extending the polar coordinate system to three dimensions. In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ) giving a triple (ρ, θ, φ). Homogeneous coordinate system A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and y/z are the Cartesian coordinates of the point. This introduces an "extra" coordinate since only two are needed to specify a point on the plane, but this system is useful in that it represents any point on the projective plane without the use of infinity. 
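A short sketch of the homogeneous-coordinate reading described above: the triple (x, y, z) stands for the Cartesian point (x/z, y/z), so triples that differ only by a common nonzero factor name the same point (the specific numbers are illustrative):

```python
# Sketch: homogeneous coordinates (x, y, z) represent the Cartesian point (x/z, y/z),
# so scaling all three components by the same nonzero factor leaves the point unchanged.
def to_cartesian(x, y, z):
    if z == 0:
        raise ValueError("z = 0 represents a point at infinity (no Cartesian image)")
    return (x / z, y / z)

print(to_cartesian(2, 3, 1))    # (2.0, 3.0)
print(to_cartesian(4, 6, 2))    # (2.0, 3.0) -- same point, scaled triple
```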
In general, a homogeneous coordinate system is one where only the ratios of the coordinates are significant and not the actual values. Other commonly used systems Some other common coordinate systems are the following: Curvilinear coordinates are a generalization of coordinate systems generally; the system is based on the intersection of curves. Orthogonal coordinates: coordinate surfaces meet at right angles Skew coordinates: coordinate surfaces are not orthogonal The log-polar coordinate system represents a point in the plane by the logarithm of the distance from the origin and an angle measured from a reference line intersecting the origin. Plücker coordinates are a way of representing lines in 3D Euclidean space using a six-tuple of numbers as homogeneous coordinates. Generalized coordinates are used in the Lagrangian treatment of mechanics. Canonical coordinates are used in the Hamiltonian treatment of mechanics. Barycentric coordinate system as used for ternary plots and more generally in the analysis of triangles. Trilinear coordinates are used in the context of triangles. There are ways of describing curves without coordinates, using intrinsic equations that use invariant quantities such as curvature and arc length. These include: The Whewell equation relates arc length and the tangential angle. The Cesàro equation relates arc length and curvature. Coordinates of geometric objects Coordinates systems are often used to specify the position of a point, but they may also be used to specify the position of more complex figures such as lines, planes, circles or spheres. For example, Plücker coordinates are used to determine the position of a line in space. When there is a need, the type of figure being described is used to distinguish the type of coordinate system, for example the term line coordinates is used for any coordinate system that specifies the position of a line. It may occur that systems of coordinates for two different sets of geometric figures are equivalent in terms of their analysis. An example of this is the systems of homogeneous coordinates for points and lines in the projective plane. The two systems in a case like this are said to be dualistic. Dualistic systems have the property that results from one system can be carried over to the other since these results are only different interpretations of the same analytical result; this is known as the principle of duality. Transformations There are often many different possible coordinate systems for describing geometrical figures. The relationship between different systems is described by coordinate transformations, which give formulas for the coordinates in one system in terms of the coordinates in another system. For example, in the plane, if Cartesian coordinates (x, y) and polar coordinates (r, θ) have the same origin, and the polar axis is the positive x axis, then the coordinate transformation from polar to Cartesian coordinates is given by x = r cosθ and y = r sinθ. 
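The polar-to-Cartesian transformation quoted above is easy to exercise directly; the sketch below also shows the non-uniqueness of polar coordinates mentioned earlier, since several (r, θ) pairs map to the same Cartesian point (the numeric values are illustrative):

```python
import math

# Sketch: polar -> Cartesian transformation x = r*cos(theta), y = r*sin(theta),
# as given in the text (shared origin, polar axis along the positive x axis).
def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

p = polar_to_cartesian(2.0, math.pi / 3)
print(p)                                                     # (1.0, 1.732...)
# (r, theta), (r, theta + 2*pi) and (-r, theta + pi) all name the same point:
print(polar_to_cartesian(2.0, math.pi / 3 + 2 * math.pi))    # same as above
print(polar_to_cartesian(-2.0, math.pi / 3 + math.pi))       # same as above
```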
With every bijection from the space to itself two coordinate transformations can be associated: Such that the new coordinates of the image of each point are the same as the old coordinates of the original point (the formulas for the mapping are the inverse of those for the coordinate transformation) Such that the old coordinates of the image of each point are the same as the new coordinates of the original point (the formulas for the mapping are the same as those for the coordinate transformation) For example, in 1D, if the mapping is a translation of 3 to the right, the first moves the origin from 0 to 3, so that the coordinate of each point becomes 3 less, while the second moves the origin from 0 to −3, so that the coordinate of each point becomes 3 more. Coordinate lines/curves Given a coordinate system, if one of the coordinates of a point varies while the other coordinates are held constant, then the resulting curve is called a coordinate curve. If a coordinate curve is a straight line, it is called a coordinate line. A coordinate system for which some coordinate curves are not lines is called a curvilinear coordinate system. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates. A coordinate line with all other constant coordinates equal to zero is called a coordinate axis, an oriented line used for assigning coordinates. In a Cartesian coordinate system, all coordinates curves are lines, and, therefore, there are as many coordinate axes as coordinates. Moreover, the coordinate axes are pairwise orthogonal. A polar coordinate system is a curvilinear system where coordinate curves are lines or circles. However, one of the coordinate curves is reduced to a single point, the origin, which is often viewed as a circle of radius zero. Similarly, spherical and cylindrical coordinate systems have coordinate curves that are lines, circles or circles of radius zero. Many curves can occur as coordinate curves. For example, the coordinate curves of parabolic coordinates are parabolas. Coordinate planes/surfaces In three-dimensional space, if one coordinate is held constant and the other two are allowed to vary, then the resulting surface is called a coordinate surface. For example, the coordinate surfaces obtained by holding ρ constant in the spherical coordinate system are the spheres with center at the origin. In three-dimensional space the intersection of two coordinate surfaces is a coordinate curve. In the Cartesian coordinate system we may speak of coordinate planes. Similarly, coordinate hypersurfaces are the -dimensional spaces resulting from fixing a single coordinate of an n-dimensional coordinate system. Coordinate maps The concept of a coordinate map, or coordinate chart is central to the theory of manifolds. A coordinate map is essentially a coordinate system for a subset of a given space with the property that each point has exactly one set of coordinates. More precisely, a coordinate map is a homeomorphism from an open subset of a space X to an open subset of Rn. It is often not possible to provide one consistent coordinate system for an entire space. In this case, a collection of coordinate maps are put together to form an atlas covering the space. A space equipped with such an atlas is called a manifold and additional structure can be defined on a manifold if the structure is consistent where the coordinate maps overlap. 
For example, a differentiable manifold is a manifold where the change of coordinates from one coordinate map to another is always a differentiable function. Orientation-based coordinates In geometry and kinematics, coordinate systems are used to describe the (linear) position of points and the angular position of axes, planes, and rigid bodies. In the latter case, the orientation of a second (typically referred to as "local") coordinate system, fixed to the node, is defined based on the first (typically referred to as "global" or "world" coordinate system). For instance, the orientation of a rigid body can be represented by an orientation matrix, which includes, in its three columns, the Cartesian coordinates of three points. These points are used to define the orientation of the axes of the local system; they are the tips of three unit vectors aligned with those axes. Geographic systems The Earth as a whole is one of the most common geometric spaces requiring the precise measurement of location, and thus coordinate systems. Starting with the Greeks of the Hellenistic period, a variety of coordinate systems have been developed based on the types above, including: Geographic coordinate system, the spherical coordinates of latitude and longitude Projected coordinate systems, including thousands of cartesian coordinate systems, each based on a map projection to create a planar surface of the world or a region. Geocentric coordinate system, a three-dimensional cartesian coordinate system that models the earth as an object, and are most commonly used for modeling the orbits of satellites, including the Global Positioning System and other satellite navigation systems.
Mathematics
Geometry
null
81945
https://en.wikipedia.org/wiki/Companion%20planting
Companion planting
Companion planting in gardening and agriculture is the planting of different crops in proximity for any of a number of different reasons, including weed suppression, pest control, pollination, providing habitat for beneficial insects, maximizing use of space, and to otherwise increase crop productivity. Companion planting is a form of polyculture. Companion planting is used by farmers and gardeners in both industrialized and developing countries for many reasons. Many of the modern principles of companion planting were present many centuries ago in forest gardens in Asia, and thousands of years ago in Mesoamerica. The technique may allow farmers to reduce costly inputs of artificial fertilisers and pesticides. Traditional practice History Companion planting was practiced in various forms by the indigenous peoples of the Americas prior to the arrival of Europeans. These peoples domesticated squash 8,000 to 10,000 years ago, then maize, then common beans, forming the Three Sisters agricultural technique. The cornstalk served as a trellis for the beans to climb, the beans fixed nitrogen, benefitting the maize, and the wide leaves of the squash plant provide ample shade for the soil keeping it moist and fertile. Authors in classical Greece and Rome, around 2000 years ago, were aware that some plants were toxic (allelopathic) to other plants nearby. Theophrastus reported that the bay tree and the cabbage plant enfeebled grapevines. Pliny the Elder wrote that the "shade" of the walnut tree (Juglans regia) poisoned other plants. In China, mosquito ferns (Azolla spp.) have been used for at least a thousand years as companion plants for rice crops. They host a cyanobacterium (Anabaena azollae) that fixes nitrogen from the atmosphere, and they block light from plants that would compete with the rice. 20th century More recently, starting in the 1920s, organic farming and horticulture have made frequent use of companion planting, since many other means of fertilizing, weed reduction and pest control are forbidden. Permaculture advocates similar methods. The list of companion plants used in such systems is large, and includes vegetables, fruit trees, kitchen herbs, garden flowers, and fodder crops. The number of pairwise interactions both positive (the pair of species assist each other) and negative (the plants are best not grown together) is larger, though the evidence for such interactions ranges from controlled experiments to hearsay. For example, plants in the cabbage family (Brassicaceae) are traditionally claimed to grow well with celery, onion family plants (Allium), and aromatic herbs, but are thought best not grown with strawberry or tomato. In 2022, agronomists recommended that multiple tools including plant disease resistance in crops, conservation of natural enemies (parasitoids and predators) to provide biological pest control, and companion planting such as with aromatic forbs to repel pests should be used to achieve "sustainable" protection of crops. They considered a multitrophic approach that took into account the many interactions between crops, companion plants, herbivorous pests, and their natural enemies essential. Many studies have looked at the effects of plants on crop pests, but relatively few interactions have been studied in depth or using field trials. Mechanisms Companion planting can help to increase crop productivity through a variety of mechanisms, which may sometimes be combined. 
These include pollination, weed suppression, and pest control, including by providing habitat for beneficial insects. Companion planting can reduce insect damage to crops, whether by disrupting pests' ability to locate crops by sight, or by blocking pests physically; by attracting pests away from a target crop to a sacrificial trap crop; or by masking the odour of a crop, using aromatic companions that release volatile compounds. Other benefits, depending on the companion species used, include fixing nitrogen, attracting beneficial insects, suppressing weeds, reducing root-damaging nematode worms, and maintaining moisture in the soil. Nutrient provision Legumes such as clover provide nitrogen compounds to neighbouring plants such as grasses by fixing nitrogen from the air with symbiotic bacteria in their root nodules. These enable the grasses or other neighbours to produce more protein (with lower inputs of artificial fertiliser) and hence to grow more. Trap cropping Trap cropping uses alternative plants to attract pests away from a main crop. For example, nasturtium (Tropaeolum majus) is a food plant of some caterpillars which feed primarily on members of the cabbage family (brassicas); some gardeners claim that planting them around brassicas protects the food crops from damage, as eggs of the pests are preferentially laid on the nasturtium. However, while many trap crops divert pests from focal crops in small scale greenhouse, garden and field experiments, only a small portion of these plants reduce pest damage at larger commercial scales. Host-finding disruption S. Finch and R. H. Collier, in a paper entitled "Insects can see clearly now the weeds have gone", showed experimentally that flying pests are far less successful if their host-plants are surrounded by other plants or even "decoy-plants" coloured green. Pests find hosts in stages, first detecting plant odours which induce it to try to land on the host plant, avoiding bare soil. If the plant is isolated, then the insect simply lands on the patch of green near the odour, making an "appropriate landing". If it finds itself on the wrong plant, an "inappropriate landing", it takes off and flies to another plant; it eventually leaves the area if there are too many "inappropriate" landings. Companion planting of clover as ground cover was equally disruptive to eight pest species from four different insect orders. In a test, 36% of cabbage root flies laid eggs beside cabbages growing in bare soil (destroying the crop), compared to only 7% beside cabbages growing in clover (which allowed a good crop). Simple decoys of green cardboard worked just as well as the live ground cover. Weed suppression Several plants are allelopathic, producing chemicals which inhibit the growth of other species. For example, rye is useful as a cereal crop, and can be used as a cover crop to suppress weeds in companion plantings, or mown and used as a weed-suppressing mulch. Rye produces two phytotoxic substances, [2,4-dihydroxy-1,4(2H)-benzoxazin-3-one (DIBOA) and 2(3H)-benzoxazolinone (BOA)]. These inhibit germination and seedling growth of both grasses and dicotyledonous plants. Pest suppression Some companion plants help prevent pest insects or pathogenic fungi from damaging the crop, through their production of aromatic volatile chemicals, another type of allelopathy. For example, the smell of the foliage of marigolds is claimed to deter aphids from feeding on neighbouring plants. 
A 2005 study found that oil volatiles extracted from Mexican marigold could suppress the reproduction of three aphid species (pea aphid, green peach aphid and glasshouse and potato aphid) by up to 100% after 5 days from exposure. Another example familiar to gardeners is the interaction of onions and carrots with each other's pests: it is popularly believed that the onion smell puts off carrot root fly, while the smell of carrots puts off onion fly. Some studies have demonstrated beneficial effects. For instance, cabbage crops can be seriously damaged by the cabbage moth. It has a natural enemy, the parasitoid wasp Microplitis mediator. Companion planting of cornflowers among cabbages enables the wasp to increase sufficiently in number to control the moth. This implies the possibility of natural control, with reduced use of insecticides, benefiting the farmer and local biodiversity. In horticulture, marigolds provide good protection to tomato plants against the greenhouse whitefly (an aphid), via the aromatic limonene that they produce. Not all combinations of target and companion are effective; for instance, clover, a useful companion to many crop plants, does not mask Brassica crops. However, effects on multi-species systems are complex and may not increase crop yields. Thus, French marigold inhibits codling moth, a serious pest whose larva destroys apples, but it also inhibits the moth's insect enemies, such as the parasitoid wasp Ascogaster quadridentata, an ichneumonid. The result is that the companion planting fails to reduce damage to apples. Predator recruitment Companion plants that produce copious nectar or pollen in a vegetable garden (insectary plants) may help encourage higher populations of beneficial insects that control pests. Some companion herbs that produce aromatic volatiles attract natural enemies, which can help to suppress pests. Mint, basil, and marigold all attract herbivorous insects' enemies, such as generalist predators. For instance, spearmint attracts the mirid bug Nesidiocoris tenuis, while basil attracts the green lacewing Ceraeochrysa cubana. The multiple interactions between the plant species, and between them, pest species, and the pests' natural enemies, are complex and not well understood. A 2019 field study in Brazil found that companion planting with parsley among a target crop of collard greens helped to suppress aphid pests (Brevicoryne brassicae, Myzus persicae), even though it also cut down the numbers of parasitoid wasps. Predatory insect species increased in numbers, and may have predated on the aphid-killing parasitoids, while the reduction in aphids may have been caused by the increased numbers of generalist predators. Protective shelter Some crops are grown under the protective shelter of different kinds of plant, whether as wind breaks or for shade. For example, shade-grown coffee, especially Coffea arabica, has traditionally been grown in light shade created by scattered trees with a thin canopy, allowing light through to the coffee bushes but protecting them from overheating. Suitable Asian trees include Erythrina subumbrans (tton tong or dadap), Gliricidia sepium (khae falang), Cassia siamea (khi lek), Melia azedarach (khao dao sang), and Paulownia tomentosa, a useful timber tree. Approaches Companion planting approaches in use or being trialled include: Square foot gardening attempts to protect plants from issues such as weed infestation by packing them as closely together as possible. 
This is facilitated by using companion plants, which can be closer together than normal. Forest gardening, where companion plants are intermingled to simulate an ecosystem, emulates the interaction of plants of up to seven different heights in a woodland.
Technology
Horticulture
null
81978
https://en.wikipedia.org/wiki/Sputnik%202
Sputnik 2
Sputnik 2 (, , Satellite 2, or Prosteyshiy Sputnik 2 (PS-2, , Simplest Satellite 2, launched on 3 November 1957, was the second spacecraft launched into Earth orbit, and the first to carry an animal into orbit, a Soviet space dog named Laika. Launched by the Soviet Union via a modified R-7 intercontinental ballistic missile, Sputnik 2 was a cone-shaped capsule with a base diameter of that weighed around , though it was not designed to separate from the rocket core that brought it to orbit, bringing the total mass in orbit to . It contained several compartments for radio transmitters, a telemetry system, a programming unit, a regeneration and temperature-control system for the cabin, and scientific instruments. A separate sealed cabin contained the dog Laika. Though Laika died shortly after reaching orbit, Sputnik 2 marked another huge success for the Soviet Union in The Space Race, lofting a (for the time) huge payload, sending an animal into orbit, and, for the first time, returning scientific data from above the Earth's atmosphere for an extended period. The satellite reentered Earth's atmosphere on 14 April 1958. Background In 1955, engineer Mikhail Tikhonravov created a proposal for "Object D", a satellite massing to , about a fourth of which would be devoted to scientific instruments. Upon learning that this spacecraft would outmass the announced American satellite by nearly 1,000 times, Soviet leader Nikita Khrushchev advocated for the proposal, which was approved by the government in Resolution #149-88 of 30 January 1956. Work began on the project in February with a launch date of latter 1957, in time for the International Geophysical Year. The design was finalized on 24 July. By the end of 1956, it had become clear that neither the complicated Object D nor the 8A91 satellite launch vehicle version of the R-7 ICBM under development to launch it would be finished in time for a 1957 launch. Thus, in December 1956, OKB-1 head Sergei Korolev proposed the development of two simpler satellites: PS, Prosteishy Sputnik, or Primitive Satellite. The two PS satellites would be simple spheres massing and equipped solely with a radio antenna. The project was approved by the government on 25 January 1957. The choice to launch these two instead of waiting for the more advanced Object D (which would eventually become Sputnik 3) to be finished was largely motivated by the desire to launch a satellite to orbit before the US. The first of these satellites, Sputnik 1 (PS-1), was successfully launched 4 October 1957, and became the world's first artificial satellite. Immediately following the launch, Nikita Khrushchev asked Sergei Korolev to prepare a Sputnik 2 in time for the 40th anniversary of the Bolshevik revolution in early November, just three weeks later. Details of the conversation vary, but it appears likely that Korolev suggested the idea of flying a dog, while Khrushchev emphasised the importance of the date. With only three weeks to prepare, OKB-1 had to scramble to assemble a new satellite. While PS-2 had been built, it was just a ball, identical to PS-1. Fortunately, the R-5A sounding rocket had recently been used to launch a series of suborbital missions carrying dogs as payloads. Korolev simply requisitioned a payload container used for these missions and had it installed in the upper stage of its R-7 launching rocket directly beneath the PS-2 sphere. Upon reaching orbit, the final stage or Blok A would detach from the satellite. No provision was made for the dog's recovery. 
Spacecraft Sputnik 2 was a cone-shaped capsule with a base diameter of that weighed around , though it was not designed to separate from the rocket core that brought it to orbit, bringing the total mass in orbit to . Passenger Laika ("Barker"), formerly Kudryavka (Little Curly), was the part-Samoyed terrier chosen to fly in Sputnik 2. Due to the shortness of the timeframe, the candidate dog could not be trained for the mission. Again, OKB-1 borrowed from the sounding rocket program, choosing from ten candidates provided by the Air Force Institute of Aviation Medicine that were already trained for suborbital missions. Laika was chosen primarily because of her even temperament. Her backup was Albina, who had flown on two R-1E missions in June 1956. Laika weighed about . Both Laika and Albina had telemetry wires surgically attached to them before the flight to monitor respiration frequency, pulse, and blood pressure. The pressurized cabin on Sputnik 2 was padded and allowed enough room for Laika to lie down or stand. An air regeneration system provided oxygen; food and water were dispensed in a gelatinized form. Laika was chained in place and fitted with a harness, a bag to collect waste, and electrodes to monitor vital signs. A television camera was mounted in the passenger compartment to observe Laika. The camera could transmit 100-line video frames at 10 frames/second. Experiments Sputnik 2 was the first platform capable of making scientific measurements in orbit. This was potentially as significant as the biological payload. The Earth's atmosphere blocks the Sun's X-ray and ultraviolet output from ground observation. Moreover, solar output is unpredictable and fluctuates rapidly, making sub-orbital sounding rockets inadequate for the observation task. Thus a satellite is required for long-term, continuous study of the complete solar spectrum. Accordingly, Sputnik 2 carried two spectrophotometers, one for measuring solar ultraviolet rays and one for measuring X-rays. These instruments were provided by Professor Sergei Mandelstam of the Lebedev Institute of Physics and installed in the nose cone above the spherical PS. In addition, Sergei Vernov, who had completed a cosmic ray detector (using Geiger counters) for Object D, demanded that the instrument his Moscow University team (including Naum Grigoriev, Alexander Chudakov, and Yuri Logachev) had built also be carried on the flight. Korolev agreed, but as there was no more room on the satellite proper, the instrument was mounted on the Blok A and given its own battery and telemetry frequency. Engineering and biological data were transmitted using the Tral_D telemetry system, which would transmit data to Earth for 15 minutes of each orbit. Launch preparations Sputnik 2's launch vehicle, the R-7 ICBM (also known by the system's GRAU index 8K71) was modified for the PS-2 satellite launch and designated 8K71PS. 8K71PS serial number M1-2PS arrived at the NIIP-5 Test Range, the precursor to the Baikonur Cosmodrome, on 18 October 1957 for final integration of the rocket stages and satellite payload. Laika was put in the payload container mid-day 31 October, and that night, the payload was attached to the rocket. The container was heated via an external tube against the cold temperatures at the launch site. Mission Sputnik 2 was launched at 02:30:42 UTC on 3 November 1957 from LC-1 of the NIIP-5 Test Range via Sputnik 8K71PS rocket (the same pad and rocket that launched Sputnik 1) The satellite's orbit was with a period of 103.7 minutes. 
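The 103.7-minute period quoted above can be sanity-checked with Kepler's third law. The sketch below is illustrative only: the period comes from the article, while the gravitational parameter and mean radius of the Earth are standard textbook constants, and the derived "mean altitude" is not a figure from the source.

```python
# Rough cross-check of the 103.7-minute orbital period using Kepler's third law.
# Inputs: the period from the article plus standard Earth constants; the derived
# mean altitude is illustrative, not a value stated in the source.
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6.371e6           # m, mean Earth radius

period_s = 103.7 * 60.0     # orbital period from the article, in seconds

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)  =>  a = (mu * (T / (2*pi))^2)^(1/3)
semi_major_axis = (MU_EARTH * (period_s / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)
mean_altitude_km = (semi_major_axis - R_EARTH) / 1000.0

print(f"semi-major axis ≈ {semi_major_axis / 1000:.0f} km")   # ≈ 7,310 km
print(f"mean altitude   ≈ {mean_altitude_km:.0f} km")         # ≈ 940 km
```

The result, a mean altitude on the order of 900–950 km, is consistent with a low elliptical Earth orbit of the kind described for Sputnik 2.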
After reaching orbit Sputnik 2's nose cone was jettisoned successfully, but the satellite did not separate from the Blok A. This, along with the loss of some thermal insulation, caused temperatures in the spacecraft to soar. At peak acceleration, Laika's respiration increased to between three and four times the pre-launch rate. The sensors showed her heart rate was 103 beats/min before launch and increased to 240 beats/min during the early acceleration. After three hours of weightlessness, Laika's pulse rate had settled back to 102 beats/min, three times longer than it had taken during earlier ground tests, an indication of the stress she was under. The early telemetry indicated that Laika was agitated but eating her food. After approximately five to seven hours into the flight, no further signs of life were received from the spacecraft. The Soviet scientists had planned to euthanise Laika with a serving of poisoned food. For many years, the Soviet Union gave several conflicting statements that she had died either from asphyxia, when the batteries failed, or that she had been euthanised. Many rumours circulated about the exact manner of her death. In 1999, several Russian sources reported that Laika had died when the cabin overheated on the fourth day. In October 2002, Dimitri Malashenkov, one of the scientists behind the Sputnik2 mission, revealed that Laika had died by the fourth circuit of flight from overheating. According to a paper he presented to the World Space Congress in Houston, Texas, "It turned out that it was practically impossible to create a reliable temperature control system in such limited time constraints." Because of the size of Sputnik 2 and its attached Blok A, the spacecraft was easy to track optically. In its last orbits, the combined body tumbled end over end, flashing brightly before it was incinerated over the north Atlantic after circling the Earth 2,370 times over the course of 162 days. The spacecraft reentered the Earth's atmosphere on 14 April 1958, at approximately 0200 hrs, on a line that stretched from New York to the Amazon. Its track was plotted by British ships and three "Moon Watch Observations", from New York. It was said to be glowing and did not develop a tail until it was at latitudes south of 20° North. Estimates put the average length of the tail at about . Results Geopolitical impact Massing , Sputnik 2 marked a dramatic leap in orbital mass over Sputnik 1 as well as the American Vanguard, which had yet to fly. The day after Sputnik 2 went into orbit the Gaither committee met with President Eisenhower to brief him on the current situation, demanding an urgent and more dramatic response than to the smaller Sputnik 1. It was clear now that the Soviets had missiles far superior to any in the American arsenal, a fact whose demonstration by Sputnik 2 was eagerly propounded by Soviet Premier Khrushchev at every opportunity. In the U.S.S.R., just six days after the launch of Sputnik 2, on the 40th anniversary of the October revolution, Khrushchev boasted in a speech “Now our first Sputnik is not lonely in its space travels.” Nevertheless, unlike most of the U.S., President Eisenhower kept calm through the time afterward just as he did after Sputnik 1 was launched. 
According to one of the president's aides, “The president's burning concern was to keep the country from going hog-wild and from embarking on foolish, costly schemes.” The mission sparked a debate across the globe on the mistreatment of animals and animal testing in general to advance science. In the United Kingdom, the National Canine Defence League called on all dog owners to observe a minute's silence on each day Laika remained in space, while the Royal Society for the Prevention of Cruelty to Animals (RSPCA) received protests even before Radio Moscow had finished announcing the launch. Animal rights groups at the time called on members of the public to protest at Soviet embassies. Others demonstrated outside the United Nations in New York. Laboratory researchers in the U.S. offered some support for the Soviets, at least before the news of Laika's death. Experimental data The cosmic ray detector transmitted for one week, going silent on 9 November when its battery was exhausted. The experiment reported unexpected results the day after launch, noting an increase in high-energy charged particles from a normal 18 pulses/sec to 72 pulses/sec at the highest latitudes of its orbit. Per two articles in the Soviet newspaper Pravda, the particle flux increased with altitude as well. It is likely that Sputnik 2 was detecting the lower levels of the Van Allen Belt when it reached the apogee of its orbit. However, because Sputnik 2 telemetry could only be received when it was flying over the Soviet Union, the data set was insufficient to draw conclusions, particularly as, most of the time, Sputnik 2 traveled below the Belt. Additional observational data had been received by Australian observers when the satellite was overhead, and Soviet scientists asked them for it. The secrecy-minded Soviets were not willing to give the Australians the code that would give them the ability to descramble and use the data themselves. As a result, the Australians declined to turn over their data. Thus, the Soviet Union missed out on its chance to get credit for the scientific discovery, which ultimately went to James Van Allen of the State University of Iowa, whose experiments on Explorer 1 and Explorer 3 first mapped the radiation belts that now bear his name. As for the ultraviolet and X-ray photometers, they were calibrated such that they were oversaturated by orbital radiation, returning no usable data. Surviving examples A USSR-built engineering model of the R-7 Sputnik 8K71PS (Sputnik II) is located at the Cosmosphere space museum in Hutchinson, Kansas, United States. The museum also has a flight-ready backup of the Sputnik 1, as well as replicas of the first two American satellites, Explorer 1 and Vanguard 1. A replica of Sputnik 2 is located at the Memorial Museum of Cosmonautics in Moscow.
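The reentry statistics quoted earlier (2,370 orbits completed over 162 days) can be checked against the initial 103.7-minute period with a line of arithmetic; the sketch below is purely illustrative.

```python
# Consistency check on the figures quoted earlier: 2,370 orbits over 162 days.
# The implied average period is shorter than the initial 103.7 minutes, as
# expected for an orbit that decayed under drag before reentry.
orbits = 2370
days = 162

average_period_min = days * 24 * 60 / orbits
print(f"average period over the mission ≈ {average_period_min:.1f} minutes")  # ≈ 98.4
print("initial period quoted in the article: 103.7 minutes")
```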
Technology
Unmanned spacecraft
null
81993
https://en.wikipedia.org/wiki/Luna%202
Luna 2
Luna 2 (), originally named the Second Soviet Cosmic Rocket and nicknamed Lunik 2 in contemporaneous media, was the sixth of the Soviet Union's Luna programme spacecraft launched to the Moon, E-1 No.7. It was the first spacecraft to reach the surface of the Moon, and the first human-made object to make contact with another celestial body. The spacecraft was launched on 12 September 1959 by the Luna 8K72 s/n I1-7B rocket. It followed a direct path to the Moon. In addition to the radio transmitters sending telemetry information back to Earth, the spacecraft released a sodium vapour cloud so the spacecraft's movement could be visually observed. On 13 September 1959, it impacted the Moon's surface east of Mare Imbrium near the craters Aristides, Archimedes, and Autolycus. Prior to impact, two sphere-shaped pennants with USSR and the launch date engraved in Cyrillic were detonated, sending pentagonal shields in all directions. Luna 2 did not detect radiation or magnetic belts around the Moon. Background Luna 1 and the three spacecraft of Luna programme before it were part of the Ye-1 series of spacecraft with a mass of . Luna missions that failed to successfully launch or achieve good results remained unnamed and were not publicly acknowledged. The first unnamed probe exploded on launch on 23 September 1958. Two more launches were unsuccessfully attempted on 11 October 1958 and 4 December 1958. Luna 1 was the fourth launch attempt and the first partial success of the program. It launched on 2 January 1959 and missed the Moon by . One mission separated Luna 1 and Luna 2, a launch failure that occurred with an unnamed probe on 18 June 1959. Luna 2 would be the Soviet Union's sixth attempt to impact the Moon. It was the second of the Ye-1a series, modified to carry a heavier payload of and had a combined mass of . Luna 2 was similar in design to Luna 1, a spherical space probe with protruding antennas and instrumentation. The instrumentation was also similar to Luna 1, which included a triaxial fluxgate magnetometer, a piezoelectric detector, a scintillation counter, ion traps and two gas-discharge counters, while the Luna 2 included six gas-discharge counters. There were no propulsion systems on Luna 2 itself. Payload Luna 2 carried five different types of instruments to conduct various tests while it was on its way to the Moon. The scintillation counters were used to measure any ionizing radiation and the Cherenkov radiation detectors to measure electromagnetic radiation caused by charged particles. The primary scientific purpose of the Geiger Counter carried on Luna 2 was to determine the electron spectrum of the Van Allen radiation belt. It consisted of three STS-5 gas-discharge counters mounted on the outside of an airtight container. The last instrument on Luna 2 was a three component fluxgate magnetometer. It was similar to that used on Luna 1 but its dynamic range was reduced by a factor of 4 to ±750 gammas (nT) so that the quantisation uncertainty was ±12 gammas. The probe's instrumentation was powered by silver-zinc and mercury-oxide batteries. The spacecraft also carried Soviet pennants which were located on the probe and on the Luna 2 rocket. The two sphere-shaped pennants in the probe had surfaces covered by 72 pentagonal elements in a pattern similar to that later used by association footballs. In the centre was an explosive charge designed to shatter the sphere, sending the pentagonal shields in all directions. 
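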
Each pentagonal element was made of titanium alloy; the centre regular pentagon had the State Emblem of the Soviet Union with the Cyrillic letters СССР ("USSR") engraved below and was surrounded by five non-regular pentagons which were each engraved with СССР СЕНТЯБРЬ 1959 ("USSR SEPTEMBER 1959"). The third pennant consisted of similar engravings on aluminium strips that were embossed on the last stage of the Luna 2 rocket. The scientists took extra, unspecified precautions in preventing biological contamination of the Moon. Mission Launch and trajectory There was difficulty getting Luna 2 ready for launch. The first attempt on September 6 failed due to a loose electrical connection. A second attempt two days later also went awry when the core stage LOX tank failed to pressurize properly due to ice formation in a pressure sensing line. The ice plug was broken but the launch had to be called off again. By this point the RP-1 had been sitting in the propellant tanks for almost four days and there was a risk that it could start to paraffinize. The next attempt was made on September 9. Core and strap-on ignition began but the engines only reached 75% thrust. The launch was aborted and the RP-1 finally drained from the tanks. The DP-2 electrical switch had failed to send the command to open the engine valves to full throttle. The booster was removed from the pad and replaced with a different one, which was launched on 12 September 1959; Luna 2 lifted off at 06:39:42 GMT. Later in the month, Soviet premier Nikita Khrushchev was visiting the United States. The US space program had had several recent setbacks including an on-pad explosion of an Atlas-Able rocket and a Jupiter missile that exploded just after launch and killed several mice it was intended to fly on a biological mission. US President Dwight Eisenhower, while meeting with Khrushchev, remarked that there had been a few failures of American rockets lately and asked if there had been similar problems in the Soviet space programme. Alluding to the abortive Luna 2 attempt two weeks earlier, Khrushchev replied that "We had a rocket we were going to launch, but it did not work correctly so they had to take it down and replace it with a different one." Once the vehicle reached Earth's escape velocity, the upper stage was detached, allowing the probe to travel on its path to the Moon. Luna 2 pirouetted slowly, making a full rotation every 14 minutes, while sending radio signals at 183.6, 19.993 and 39.986 MHz. The probe started transmitting information back to Earth using three different transmitters. These transmitters provided precise information on its course, allowing scientists to calculate that Luna 2 would hit its mark on the Moon around 00:05 on 14 September (Moscow Time), which was announced on Radio Moscow. Because of claims that information received from Luna 1 was fake, the Russian scientists sent a telex to astronomer Bernard Lovell at Jodrell Bank Observatory at the University of Manchester. Having received the intended time of impact, and the transmission and trajectory details, it was Bernard Lovell who confirmed the mission's success to outside observers. However, the American media were still skeptical of the data until Lovell was able to prove that the radio signal was coming from Luna 2 by showing the Doppler shift from its transmissions. Lunar impact Luna 2 took a direct path to the Moon, starting with an initial velocity from Earth of and impacting the Moon at about .
It hit the Moon about 0° west and 29.1° north of the centre of the visible disk at 00:02:24 (Moscow Time) on 14 September 1959. The probe became the first human-made object to hit another celestial body. To provide a display visible from Earth, on 13 September the spacecraft released a vapour cloud that expanded to a diameter of that was seen by observatories in Alma Ata in Kazakhstan, Byurakan in Armenia, Abastumani and Tbilisi in Georgia, and Stalinabad in Tajikistan. This cloud also acted as an experiment to see how the sodium gas would act in a vacuum and zero gravity. The last stage of the rocket that propelled Luna 2 also hit the Moon's surface about 30 minutes after the spacecraft, but there was uncertainty about where it landed. Bernard Lovell began tracking the probe about five hours before it struck the Moon and also recorded the transmission from the probe, which ended abruptly. He played the recording during a phone call to reporters in New York to finally convince most media observers of the mission's authenticity. Results The radiation detectors and magnetometer were searching for lunar magnetic and radiation fields similar to the Van Allen radiation belt around Earth, sending information about once every minute until its last transmission, which came about away from the lunar surface. Although it did confirm previous measurements of the Van Allen radiation belts that had been taken by Luna 1 around the Earth, it was not able to detect any type of radiation belts around the Moon at or beyond the limits of its magnetometer's sensitivity (2–3×10⁻⁴ G). Luna 2 showed time variations in the electron flux and energy spectrum in the Van Allen radiation belt. Using ion traps on board, the satellite made the first direct measurement of solar wind flux from outside the Earth's magnetosphere. On its approach to the lunar surface, the probe did not detect any notable magnetic field to within from the Moon. It also did not detect a radiation belt around the Moon, but the four ion traps measured an increase in the ion particle flux at an altitude of , which suggested the presence of an ionosphere. The probe generated scientific data that was printed on of teletype, which was analysed and published in the spring of 1960. Cultural significance According to Donald William Cox, Americans were starting to believe that they were making progress in the Space Race and that although the Soviet Union might have had larger rockets, the United States had better guidance systems, but these beliefs were questioned when the Soviets were able to impact Luna 2 on the Moon. At that time the closest Americans had come to the Moon was about with Pioneer 4. Soviet Premier Nikita Khrushchev, on his only visit to the United States, gave President Dwight D. Eisenhower a replica of the Soviet pennants that Luna 2 had just placed onto the lunar surface. U.S. espionage In 1959, a Soviet exhibit of its economic achievements toured several countries. This exhibit included displays of Luna 2. The CIA conducted a covert operation to access it and gain information. A team of CIA officers gained unrestricted access to the display for 24 hours; the exhibit turned out to be a fully operational system comparable to the original, not a replica as expected. The team disassembled the object, photographed the parts without removing it from its crate, and then put everything back in place, gaining intelligence regarding its design and capabilities.
The Soviets did not find out; the CIA report on the operation was not declassified until 2019, nearly three decades after the dissolution of the USSR. Legacy Luna 2 was a success for the Soviets, and was the first in a series of missions (lunar impactors) that were intentionally crashed on the Moon. The later U.S.-made Ranger missions ended in similar impacts. Such controlled crashes have remained useful even after the technique of soft landing was mastered. NASA used hard spacecraft impacts to test whether shadowed Moon craters contain ice by analyzing the debris that was created on impact. The pennant presented to Eisenhower is kept at the Eisenhower Presidential Library and Museum in Abilene, Kansas, U.S. A copy of the spherical pennant is located at the Kansas Cosmosphere in Hutchinson, Kansas. On 1 November 1959, the Soviet Union released two stamps commemorating the spacecraft. They depict the trajectory of the mission.
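Lovell's Doppler-shift confirmation described above can be illustrated with a back-of-the-envelope estimate. In the sketch below, the 183.6 MHz downlink frequency comes from the article; the line-of-sight velocity is an assumed, representative value for a probe on a direct lunar trajectory and is not a figure given in the source.

```python
# Illustrative Doppler-shift estimate for the Luna 2 downlink. The 183.6 MHz
# carrier is quoted in the article; the 3 km/s radial velocity is an assumed
# value, chosen only as representative of a direct translunar trajectory.
C = 299_792_458.0            # speed of light, m/s

carrier_hz = 183.6e6         # one of the transmitter frequencies quoted above
radial_velocity = 3_000.0    # m/s, assumed for illustration

doppler_shift_hz = carrier_hz * radial_velocity / C
print(f"Doppler shift ≈ {doppler_shift_hz:,.0f} Hz")   # ≈ 1,800 Hz
```

A shift of this size, varying smoothly as the geometry changed, is the sort of signature Lovell could point to as evidence that the signal came from a fast-moving spacecraft rather than a ground transmitter.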
Technology
Unmanned spacecraft
null
81997
https://en.wikipedia.org/wiki/Luna%203
Luna 3
Luna 3, or E-2A No.1 (), was a Soviet spacecraft launched in 1959 as part of the Luna programme. It was the first mission to photograph the far side of the Moon and the third Soviet space probe to be sent to the neighborhood of the Moon. The historic, never-before-seen views of the far side of the Moon caused excitement and interest when they were published around the world, and a tentative Atlas of the Far Side of the Moon was created from the pictures. These views showed mountainous terrain, very different from the near side, and only two dark, low-lying regions, which were named Mare Moscoviense (Sea of Moscow) and Mare Desiderii (Sea of Desire). Mare Desiderii was later found to be composed of a smaller mare, Mare Ingenii (Sea of Cleverness), and several other dark craters. The reason for this difference between the two sides of the Moon is still not fully understood, but it seems that most of the dark lavas that flowed out to produce the maria formed under the Earth-facing half. Design The space probe was a cylindric canister with hemispheric ends and a wide flange near the top. The probe was long and at its maximum diameter at the flange. Most of the cylindric section was roughly in diameter. The canister was hermetically sealed and pressurized to about . Several solar cells were mounted on the outside of the cylinder, and these provided electric power to the storage batteries inside the space probe. Shutters for thermal control were positioned along the cylinder and opened to expose a radiating surface when the internal temperature exceeded . The upper hemisphere of the probe held the covered opening for the cameras. Four antennas protruded from the top of the probe and two from its bottom. Other scientific equipment was mounted on the outside, including micrometeoroid and cosmic ray detectors, and the Yenisey-2 imaging system. The gas jets for its attitude control system were mounted on the lower end of the spacecraft. Several photoelectric cells helped maintain orientation with respect to the Sun and the Moon. There were no rocket motors for course corrections. Its interior held the cameras and the photographic film processing system, radio transmitter, storage batteries, gyroscopic units, and circulating fans for temperature control. It was spin-stabilized for most of its flight, but its three-axis attitude control system was activated while taking photos. Luna 3 was radio-controlled from ground stations in the Soviet Union. The Soviet media called the spacecraft the Automatic Interplanetary Station. The probe was renamed to Luna 3 in 1963. Mission After launching on a Luna 8K72 (number I1-8) rocket over the North Pole, the Blok-E escape stage was shut down by radio control to put Luna 3 on its course to the Moon. Initial radio contact showed that the signal from the space probe was only about one-half as strong as expected, and the internal temperature was rising. The spacecraft spin axis was reoriented and some equipment was shut down, resulting in a temperature drop from 40 °C to about 30 °C. At a distance of 60,000 to 70,000 km from the Moon, the orientation system was turned on and the spacecraft rotation was stopped. The lower end of the craft was pointed at the Sun, which was shining on the far side of the Moon. The space probe passed within 6,200 km of the Moon near its south pole at the closest lunar approach at 14:16 UT on 6 October 1959, and continued over the far side. 
On 7 October, the photocell on the upper end of the space probe detected the sunlit far side of the Moon, and the photography sequence was started. The first picture was taken at 03:30 UT at a distance of 63,500 km from the Moon, and the last picture was taken 40 minutes later from a distance of 66,700 km. A total of 29 pictures were taken, covering 70% of the far side. After the photography was complete the spacecraft resumed spinning, passed over the north pole of the Moon and returned towards the Earth. Attempts to transmit the pictures to the Soviet Union began on 8 October but the early attempts were unsuccessful due to the low signal strength. As Luna 3 drew closer to the Earth, a total of about 17 photographs were transmitted by 18 October. All contact with the probe was lost on 22 October 1959. The space probe was believed to have burned up in the Earth's atmosphere in March or April 1960. Another possibility was that it survived in orbit until 1962 or later. It was launched initially in an orbit with the perigee outside the upper boundary of the Earth's atmosphere. After the mission was accomplished, and the probe made several orbits around the Earth, the secular rise in the eccentricity resulted in a decrease of the perigee because the semimajor axis is conserved. After eleven orbital revolutions Luna-3 entered the atmosphere of the Earth. It is the first instance of a "man-made Lidov-Kozai effect". First gravity assist The gravity assist maneuver was first used in 1959 when Luna 3 photographed the far side of Earth's Moon. After launch from the Baikonur Cosmodrome, Luna 3 passed behind the Moon from south to north and headed back to Earth. The gravity of the Moon changed the spacecraft's orbit; also, because of the Moon's own orbital motion, the spacecraft's orbital plane was also changed. The return orbit was calculated so that the spacecraft passed again over the Northern hemisphere where the Soviet ground stations were located. The maneuver relied on research performed under the direction of Mstislav Keldysh at the Steklov Institute of Mathematics. Lunar photography The purpose of this experiment was to obtain photographs of the lunar surface as the spacecraft flew by the Moon. The imaging system was designated Yenisey-2 and consisted of a dual-lens camera AFA-E1, an automatic film processing unit, and a scanner. The lenses on the camera were a 200 mm focal length, f/5.6 aperture objective and a 500 mm, f/9.5 objective. The camera carried 40 frames of American-made temperature- and radiation-resistant 35mm isochrome film recovered by the Soviets from downed American Genetrix espionage balloons. The 200 mm objective could image the full disk of the Moon and the 500 mm could take an image of a region on the surface. The camera was fixed in the spacecraft and pointing was achieved by rotating the craft itself. Luna 3 was the first successful three-axis stabilized spacecraft. During most of the mission, the spacecraft was spin stabilized, but for photography of the Moon, the spacecraft oriented one axis toward the Sun and then a photocell was used to detect the Moon and orient the cameras toward it. Detection of the Moon signaled the camera cover to open and the photography sequence to start automatically. The images alternated between both cameras during the sequence. After photography was complete, the film was moved to an on-board processor where it was developed, fixed, and dried. 
Commands from the Earth were then given to move the film into a flying-spot scanner where a spot produced by a cathode-ray tube was projected through the film onto a photomultiplier. The spot was scanned across the film and the photomultiplier converted the intensity of the light passing through the film into an electric signal which was transmitted to the Earth (via frequency-modulated analog video, similar to a facsimile). A frame could be scanned with a resolution of 1000 (horizontal) lines and the transmission could be done at a slow-scan television rate at large distances from the Earth and a faster rate at closer ranges. The camera took 29 pictures over 40 minutes on 7 October 1959, from 03:30 UT to 04:10 UT at distances ranging from 63,500 km to 66,700 km above the surface, covering 70% of the lunar far side. Seventeen (some say twelve) of these frames were successfully transmitted back to the Earth (tracking stations in Crimea and Kamchatka), and six were published (frames numbered 26, 28, 29, 31, 32, and 35). They were the first photographs of the far hemisphere of the Moon. The imaging system was developed by P.F. Bratslavets and I.A. Rosselevich at the Leningrad Scientific Research Institute for Television and the returned images were processed and analyzed by Iu.N. Lipskii and his team at the Sternberg Astronomical Institute. The camera AFA-E1 was developed and manufactured by the KMZ factory (Krasnogorskiy Mekhanicheskiy Zavod). Legacy The images were analysed, and the first atlas of the far side of the Moon was published by the USSR Academy of Sciences on 6 November 1960. It included a catalog of 500 distinguished features of the landscape. In 1961, the first globe (1: scale) containing lunar features invisible from the Earth was released in the USSR, based on images from Luna 3. Features that were named include Mare Moscoviense and craters called after Konstantin Tsiolkovsky, Jules Verne, Marie Curie and Thomas Edison.
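The orbital-evolution argument above, that a secular rise in eccentricity at roughly constant semi-major axis drives the perigee down until reentry, follows from the relation between perigee radius, semi-major axis and eccentricity, r_p = a(1 − e). The sketch below illustrates this numerically; the semi-major axis and eccentricity values are assumed for illustration only, as the article does not give Luna 3's actual orbital elements.

```python
# Minimal illustration of the decay mechanism described above: with the
# semi-major axis a held fixed, the perigee radius r_p = a * (1 - e) falls as
# the eccentricity e rises. The orbital elements below are assumed values,
# not Luna 3's actual elements.
R_EARTH_KM = 6371.0

a_km = 250_000.0             # assumed semi-major axis of a highly elliptical orbit
for e in (0.80, 0.90, 0.95, 0.97, 0.974):
    perigee_alt_km = a_km * (1.0 - e) - R_EARTH_KM
    print(f"e = {e:.3f}  ->  perigee altitude ≈ {perigee_alt_km:>9,.0f} km")
# Once the perigee altitude drops into the upper atmosphere, the spacecraft reenters.
```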
Technology
Unmanned spacecraft
null
82156
https://en.wikipedia.org/wiki/Two-stroke%20engine
Two-stroke engine
A two-stroke (or two-stroke cycle) engine is a type of internal combustion engine that completes a power cycle with two strokes of the piston (one up and one down movement) in one revolution of the crankshaft in contrast to a four-stroke engine which requires four strokes of the piston in two crankshaft revolutions to complete a power cycle. During the stroke from bottom dead center to top dead center, the end of the exhaust/intake (or scavenging) is completed along with the compression of the mixture. The second stroke encompasses the combustion of the mixture, the expansion of the burnt mixture and, near bottom dead center, the beginning of the scavenging flows. Two-stroke engines often have a higher power-to-weight ratio than a four-stroke engine, since their power stroke occurs twice as often. Two-stroke engines can also have fewer moving parts, and thus be cheaper to manufacture and weigh less. In countries and regions with stringent emissions regulation, two-stroke engines have been phased out in automotive and motorcycle uses. In regions where regulations are less stringent, small displacement two-stroke engines remain popular in mopeds and motorcycles. They are also used in power tools such as chainsaws and leaf blowers. History The first commercial two-stroke engine involving cylinder compression is attributed to Scottish engineer Dugald Clerk, who patented his design in 1881. However, unlike most later two-stroke engines, his had a separate charging cylinder. The crankcase-scavenged engine, employing the area below the piston as a charging pump, is generally credited to Englishman Joseph Day. On 31 December 1879, German inventor Karl Benz produced a two-stroke gas engine, for which he received a patent in 1880 in Germany. The first truly practical two-stroke engine is attributed to Yorkshireman Alfred Angas Scott, who started producing twin-cylinder water-cooled motorcycles in 1908. Two-stroke gasoline engines with electrical spark ignition are particularly useful in lightweight or portable applications such as chainsaws and motorcycles. However, when weight and size are not an issue, the cycle's potential for high thermodynamic efficiency makes it ideal for diesel compression ignition engines operating in large, weight-insensitive applications, such as marine propulsion, railway locomotives, and electricity generation. In a two-stroke engine, the exhaust gases transfer less heat to the cooling system than a four-stroke, which means more energy to drive the piston, and if present, a turbocharger. Emissions Crankcase-compression two-stroke engines, such as common small gasoline-powered engines, are lubricated by a petroil mixture in a total-loss system. Oil is mixed in with their petrol fuel beforehand, in a fuel-to-oil ratio of around 32:1. This oil then forms emissions, either by being burned in the engine or as droplets in the exhaust, historically resulting in more exhaust emissions, particularly hydrocarbons, than four-stroke engines of comparable power output. The combined opening time of the intake and exhaust ports in some two-stroke designs can also allow some amount of unburned fuel vapors to exit in the exhaust stream. The high combustion temperatures of small, air-cooled engines may also produce NOx emissions. Applications Two-stroke gasoline engines are preferred when mechanical simplicity, light weight, and high power-to-weight ratio are design priorities. By mixing oil with fuel, they can operate in any orientation as the oil reservoir does not depend on gravity. 
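The petroil premix described above is easy to quantify. The sketch below converts a fuel-to-oil ratio into the volume of oil to add to a given quantity of petrol; the 32:1 ratio is the figure from the text, while the 5-litre fuel quantity and the alternative ratios are example values for comparison only.

```python
# Helper for the petroil premix described above: how much two-stroke oil to add
# to a given amount of petrol for a target fuel-to-oil ratio. The 32:1 ratio
# comes from the text; the fuel quantity and other ratios are example values.
def oil_for_premix(fuel_litres: float, ratio: float = 32.0) -> float:
    """Return millilitres of oil for `fuel_litres` of petrol at `ratio`:1."""
    return fuel_litres * 1000.0 / ratio

for ratio in (32.0, 40.0, 50.0):
    print(f"{ratio:>4.0f}:1  ->  {oil_for_premix(5.0, ratio):.0f} ml of oil per 5 L of petrol")
```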
A number of mainstream automobile manufacturers have used two-stroke engines in the past, including the Swedish Saab, German manufacturers DKW, Auto-Union, VEB Sachsenring Automobilwerke Zwickau, VEB Automobilwerk Eisenach, and VEB Fahrzeug- und Jagdwaffenwerk, and Polish manufacturers FSO and FSM. The Japanese manufacturers Suzuki and Subaru did the same in the 1970s. Production of two-stroke cars ended in the 1980s in the West, due to increasingly stringent regulation of air pollution. Eastern Bloc countries continued until around 1991, with the Trabant and Wartburg in East Germany. Two-stroke engines are still found in a variety of small propulsion applications, such as outboard motors, small on- and off-road motorcycles, mopeds, motor scooters, motorized bicycles, tuk-tuks, snowmobiles, go-karts, RC cars, ultralight and model airplanes. Particularly in developed countries, pollution regulations have meant that their use for many of these applications is being phased out. Honda, for instance, ceased selling two-stroke off-road motorcycles in the United States in 2007, after abandoning road-going models considerably earlier. Due to their high power-to-weight ratio and ability to be used in any orientation, two-stroke engines are common in handheld outdoor power tools including leaf blowers, chainsaws, and string trimmers. Two-stroke diesel engines are found mostly in large industrial and marine applications, as well as some trucks and heavy machinery. Designs Although the principles remain the same, the mechanical details of various two-stroke engines differ depending on the type. The design types vary according to the method of introducing the charge to the cylinder, the method of scavenging the cylinder (exchanging burnt exhaust for fresh mixture) and the method of exhausting the cylinder. Inlet port variations Piston-controlled inlet port Piston port is the simplest of the designs and the most common in small two-stroke engines. All functions are controlled solely by the piston covering and uncovering the ports as it moves up and down in the cylinder. In the 1970s, Yamaha worked out some basic principles for this system. They found that, in general, widening an exhaust port increases the power by the same amount as raising the port, but the power band does not narrow as it does when the port is raised. However, a mechanical limit exists to the width of a single exhaust port, at about 62% of the bore diameter for reasonable piston ring life. Beyond this, the piston rings bulge into the exhaust port and wear quickly. A maximum 70% of bore width is possible in racing engines, where rings are changed every few races. Intake duration is between 120 and 160°. Transfer port time is set at a minimum of 26°. The strong, low-pressure pulse of a racing two-stroke expansion chamber can drop the pressure to -7 psi when the piston is at bottom dead center, and the transfer ports nearly wide open. One of the reasons for high fuel consumption in two-strokes is that some of the incoming pressurized fuel-air mixture is forced across the top of the piston, where it has a cooling action, and straight out the exhaust pipe. An expansion chamber with a strong reverse pulse stops this outgoing flow. A fundamental difference from typical four-stroke engines is that the two-stroke's crankcase is sealed and forms part of the induction process in gasoline and hot-bulb engines. Diesel two-strokes often add a Roots blower or piston pump for scavenging. 
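The port-width guidelines quoted above (roughly 62% of the bore diameter for acceptable ring life, up to about 70% in racing engines whose rings are replaced frequently) translate directly into millimetres once a bore size is chosen. A minimal sketch follows; the bore diameters are example values, not taken from any specific engine in the text.

```python
# Exhaust-port width rules of thumb discussed above, expressed in millimetres:
# ~62% of the bore for reasonable piston-ring life, up to ~70% for racing
# engines with frequently replaced rings. Bore diameters are example values.
def max_port_width(bore_mm: float, fraction: float) -> float:
    return bore_mm * fraction

for bore in (47.0, 54.0, 66.4):          # example bore diameters
    street = max_port_width(bore, 0.62)
    racing = max_port_width(bore, 0.70)
    print(f"bore {bore:5.1f} mm: ~{street:4.1f} mm street limit, ~{racing:4.1f} mm racing limit")
```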
Reed inlet valve The reed valve is a simple but highly effective form of check valve commonly fitted in the intake tract of the piston-controlled port. It allows asymmetric intake of the fuel charge, improving power and economy, while widening the power band. Such valves are widely used in motorcycle, ATV, and marine outboard engines. Rotary inlet valve The intake pathway is opened and closed by a rotating member. A familiar type sometimes seen on small motorcycles is a slotted disk attached to the crankshaft, which covers and uncovers an opening in the end of the crankcase, allowing charge to enter during one portion of the cycle (called a disc valve). Another form of rotary inlet valve used on two-stroke engines employs two cylindrical members with suitable cutouts arranged to rotate one within the other - the inlet pipe having passage to the crankcase only when the two cutouts coincide. The crankshaft itself may form one of the members, as in most glow-plug model engines. In another version, the crank disc is arranged to be a close-clearance fit in the crankcase, and is provided with a cutout that lines up with an inlet passage in the crankcase wall at the appropriate time, as in Vespa motor scooters. The advantage of a rotary valve is that it enables the two-stroke engine's intake timing to be asymmetrical, which is not possible with piston-port type engines. The piston-port type engine's intake timing opens and closes before and after top dead center at the same crank angle, making it symmetrical, whereas the rotary valve allows the opening to begin and close earlier. Rotary valve engines can be tailored to deliver power over a wider speed range or higher power over a narrower speed range than either a piston-port or reed-valve engine. Where a portion of the rotary valve is a portion of the crankcase itself, of particular importance, no wear should be allowed to take place. Scavenging variations Cross-flow scavenging In a cross-flow engine, the transfer and exhaust ports are on opposite sides of the cylinder, and a deflector on the top of the piston directs the fresh intake charge into the upper part of the cylinder, pushing the residual exhaust gas down the other side of the deflector and out the exhaust port. The deflector increases the piston's weight and exposed surface area, and the fact that it makes piston cooling and achieving an effective combustion chamber shape more difficult is why this design has been largely superseded by uniflow scavenging after the 1960s, especially for motorcycles, but for smaller or slower engines using direct injection, the deflector piston can still be an acceptable approach. Loop scavenging This method of scavenging uses carefully shaped and positioned transfer ports to direct the flow of fresh mixture toward the combustion chamber as it enters the cylinder. The fuel/air mixture strikes the cylinder head, then follows the curvature of the combustion chamber, and then is deflected downward. This not only prevents the fuel/air mixture from traveling directly out the exhaust port, but also creates a swirling turbulence which improves combustion efficiency, power, and economy. Usually, a piston deflector is not required, so this approach has a distinct advantage over the cross-flow scheme (above). Often referred to as "Schnuerle" (or "Schnürle") loop scavenging after Adolf Schnürle, the German inventor of an early form in the mid-1920s, it became widely adopted in Germany during the 1930s and spread further afield after World War II. 
Loop scavenging is the most common type of fuel/air mixture transfer used on modern two-stroke engines. Suzuki was one of the first manufacturers outside of Europe to adopt loop-scavenged, two-stroke engines. This operational feature was used in conjunction with the expansion chamber exhaust developed by German motorcycle manufacturer, MZ, and Walter Kaaden. Loop scavenging, disc valves, and expansion chambers worked in a highly coordinated way to significantly increase the power output of two-stroke engines, particularly from the Japanese manufacturers Suzuki, Yamaha, and Kawasaki. Suzuki and Yamaha enjoyed success in Grand Prix motorcycle racing in the 1960s due in no small way to the increased power afforded by loop scavenging. An additional benefit of loop scavenging was the piston could be made nearly flat or slightly domed, which allowed the piston to be appreciably lighter and stronger, and consequently to tolerate higher engine speeds. The "flat top" piston also has better thermal properties and is less prone to uneven heating, expansion, piston seizures, dimensional changes, and compression losses. SAAB built 750- and 850-cc three-cylinder engines based on a DKW design that proved reasonably successful employing loop charging. The original SAAB 92 had a two-cylinder engine of comparatively low efficiency. At cruising speed, reflected-wave, exhaust-port blocking occurred at too low a frequency. Using the asymmetrical three-port exhaust manifold employed in the identical DKW engine improved fuel economy. The 750-cc standard engine produced 36 to 42 hp, depending on the model year. The Monte Carlo Rally variant, 750-cc (with a filled crankshaft for higher base compression), generated 65 hp. An 850-cc version was available in the 1966 SAAB Sport (a standard trim model in comparison to the deluxe trim of the Monte Carlo). Base compression comprises a portion of the overall compression ratio of a two-stroke engine. Work published at SAE in 2012 points that loop scavenging is under every circumstance more efficient than cross-flow scavenging. Uniflow scavenging In a uniflow engine, the mixture, or "charge air" in the case of a diesel, enters at one end of the cylinder controlled by the piston and the exhaust exits at the other end controlled by an exhaust valve or piston. The scavenging gas-flow is, therefore, in one direction only, hence the name uniflow. The design using exhaust valve(s) is common in on-road, off-road, and stationary two-stroke engines (Detroit Diesel), certain small marine two-stroke engines (Gray Marine Motor Company, which adapted the Detroit Diesel Series 71 for marine use), certain railroad two-stroke diesel locomotives (Electro-Motive Diesel) and large marine two-stroke main propulsion engines (Wärtsilä). Ported types are represented by the opposed piston design in which two pistons are in each cylinder, working in opposite directions such as the Junkers Jumo 205 and Napier Deltic. The once-popular split-single design falls into this class, being effectively a folded uniflow. With advanced-angle exhaust timing, uniflow engines can be supercharged with a crankshaft-driven blower, either piston or Roots-type. Stepped piston engine The piston of this engine is "top-hat"-shaped; the upper section forms the regular cylinder, and the lower section performs a scavenging function. The units run in pairs, with the lower half of one piston charging an adjacent combustion chamber. 
The upper section of the piston still relies on total-loss lubrication, but the other engine parts are sump lubricated with cleanliness and reliability benefits. The mass of the piston is only about 20% more than a loop-scavenged engine's piston because skirt thicknesses can be less. Power-valve systems Many modern two-stroke engines employ a power-valve system. The valves are normally in or around the exhaust ports. They work in one of two ways; either they alter the exhaust port by closing off the top part of the port, which alters port timing, such as Rotax R.A.V.E, Yamaha YPVS, Honda RC-Valve, Kawasaki K.I.P.S., Cagiva C.T.S., or Suzuki AETC systems, or by altering the volume of the exhaust, which changes the resonant frequency of the expansion chamber, such as the Suzuki SAEC and Honda V-TACS system. The result is an engine with better low-speed power without sacrificing high-speed power. However, as power valves are in the hot gas flow, they need regular maintenance to perform well. Direct injection Direct injection has considerable advantages in two-stroke engines. In carburetted two-strokes, a major problem is a portion of the fuel/air mixture going directly out, unburned, through the exhaust port, and direct injection effectively eliminates this problem. Two systems are in use: low-pressure air-assisted injection and high-pressure injection. Since the fuel does not pass through the crankcase, a separate source of lubrication is needed. Two-stroke reversibility For the purpose of this discussion, it is convenient to think in motorcycle terms, where the exhaust pipe faces into the cooling air stream, and the crankshaft commonly spins in the same axis and direction as do the wheels i.e. "forward". Some of the considerations discussed here apply to four-stroke engines (which cannot reverse their direction of rotation without considerable modification), almost all of which spin forward, too. It is also useful to note that the "front" and "back" faces of the piston are - respectively - the exhaust port and intake port sides of it, and are not to do with the top or bottom of the piston. Regular gasoline two-stroke engines can run backward for short periods and under light load with little problem, and this has been used to provide a reversing facility in microcars, such as the Messerschmitt KR200, that lacked reverse gearing. Where the vehicle has electric starting, the motor is turned off and restarted backward by turning the key in the opposite direction. Two-stroke golf carts have used a similar system. Traditional flywheel magnetos (using contact-breaker points, but no external coil) worked equally well in reverse because the cam controlling the points is symmetrical, breaking contact before top dead center equally well whether running forward or backward. Reed-valve engines run backward just as well as piston-controlled porting, though rotary valve engines have asymmetrical inlet timing and do not run very well. Serious disadvantages exist for running many engines backward under load for any length of time, and some of these reasons are general, applying equally to both two-stroke and four-stroke engines. This disadvantage is accepted in most cases where cost, weight, and size are major considerations. The problem comes about because in "forward" running, the major thrust face of the piston is on the back face of the cylinder, which in a two-stroke particularly, is the coolest and best-lubricated part. 
The forward face of the piston in a trunk engine is less well-suited to be the major thrust face, since it covers and uncovers the exhaust port in the cylinder, the hottest part of the engine, where piston lubrication is at its most marginal. The front face of the piston is also more vulnerable since the exhaust port, the largest in the engine, is in the front wall of the cylinder. Piston skirts and rings risk being extruded into this port, so having them pressing hardest on the opposite wall (where there are only the transfer ports in a crossflow engine) is always best and support is good. In some engines, the small end is offset to reduce thrust in the intended rotational direction and the forward face of the piston has been made thinner and lighter to compensate, but when running backward, this weaker forward face suffers increased mechanical stress it was not designed to resist. This can be avoided by the use of crossheads and also using thrust bearings to isolate the engine from end loads. Large two-stroke ship diesels are sometimes made to be reversible. Like four-stroke ship engines (some of which are also reversible), they use mechanically operated valves, so require additional camshaft mechanisms. These engines use crossheads to eliminate sidethrust on the piston and isolate the under-piston space from the crankcase. On top of other considerations, the oil pump of a modern two-stroke may not work in reverse, in which case the engine suffers oil starvation within a short time. Running a motorcycle engine backward is relatively easy to initiate, and in rare cases, can be triggered by a back-fire. It is not advisable. Model airplane engines with reed valves can be mounted in either tractor or pusher configuration without needing to change the propeller. These motors are compression ignition, so no ignition timing issues and little difference between running forward and running backward are seen.
Technology
Engines
null
82269
https://en.wikipedia.org/wiki/Focal%20length
Focal length
The focal length of an optical system is a measure of how strongly the system converges or diverges light; it is the inverse of the system's optical power. A positive focal length indicates that a system converges light, while a negative focal length indicates that the system diverges light. A system with a shorter focal length bends the rays more sharply, bringing them to a focus in a shorter distance or diverging them more quickly. For the special case of a thin lens in air, a positive focal length is the distance over which initially collimated (parallel) rays are brought to a focus, or alternatively a negative focal length indicates how far in front of the lens a point source must be located to form a collimated beam. For more general optical systems, the focal length has no intuitive meaning; it is simply the inverse of the system's optical power. In most photography and all telescopy, where the subject is essentially infinitely far away, longer focal length (lower optical power) leads to higher magnification and a narrower angle of view; conversely, shorter focal length or higher optical power is associated with lower magnification and a wider angle of view. On the other hand, in applications such as microscopy in which magnification is achieved by bringing the object close to the lens, a shorter focal length (higher optical power) leads to higher magnification because the subject can be brought closer to the center of projection. Thin lens approximation For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci (or focal points) of the lens. For a converging lens (for example a convex lens), the focal length is positive and is the distance at which a beam of collimated light will be focused to a single spot. For a diverging lens (for example a concave lens), the focal length is negative and is the distance to the point from which a collimated beam appears to be diverging after passing through the lens. When a lens is used to form an image of some object, the distance from the object to the lens u, the distance from the lens to the image v, and the focal length f are related by 1/u + 1/v = 1/f. The focal length of a thin convex lens can be easily measured by using it to form an image of a distant light source on a screen. The lens is moved until a sharp image is formed on the screen. In this case 1/u is negligible, and the focal length is then given by f ≈ v. Determining the focal length of a concave lens is somewhat more difficult. The focal length of such a lens is defined as the point at which the spreading beams of light meet when they are extended backwards. No image is formed during such a test, and the focal length must be determined by passing light (for example, the light of a laser beam) through the lens, examining how much that light becomes dispersed/bent, and following the beam of light backwards to the lens's focal point. General optical systems For a thick lens (one which has a non-negligible thickness), or an imaging system consisting of several lenses or mirrors (e.g. a photographic lens or a telescope), there are several related concepts that are referred to as focal lengths: Effective focal length (EFL) The effective focal length is the inverse of the optical power of an optical system, and is the value used to calculate the magnification of the system. The imaging properties of the optical system can be modeled by replacing the system with an ideal thin lens with the same EFL.
The EFL also provides a simple method for finding the nodal points without tracing any rays. It was previously called equivalent focal length (not to be confused with 35 mm-equivalent focal length). Front focal length (FFL) The front focal length is the distance from the front focal point to the front principal plane. Rear focal length (RFL) The rear focal length is the distance from the rear principal plane to the rear focal point. Front focal distance (FFD) The front focal distance (FFD) is the distance from the front focal point of the system to the vertex of the first optical surface. Some authors refer to this as "front focal length". Back focal distance (BFD) The back focal distance (BFD) is the distance from the vertex of the last optical surface of the system to the rear focal point. Some authors refer to this as "back focal length". For an optical system in air the effective focal length, front focal length, and rear focal length are all the same and may be called simply "focal length". For an optical system in a medium other than air or vacuum, the front and rear focal lengths are equal to the EFL times the refractive index of the medium in front of or behind the lens. The term "focal length" by itself is ambiguous in this case. The historical usage was to define the "focal length" as the EFL times the index of refraction of the medium. For a system with different media on both sides, such as the human eye, the front and rear focal lengths are not equal to one another, and convention may dictate which one is called "the focal length" of the system. Some modern authors avoid this ambiguity by instead defining "focal length" to be a synonym for EFL. The distinction between front/rear focal length and EFL is important for studying the human eye. The eye can be represented by an equivalent thin lens at an air/fluid boundary with front and rear focal lengths equal to those of the eye, or it can be represented by an equivalent thin lens that is totally in air, with focal length equal to the eye's EFL. For the case of a lens of thickness d in air, with surfaces of radii of curvature R1 and R2, the effective focal length f is given by the Lensmaker's equation: 1/f = (n − 1)[1/R1 − 1/R2 + (n − 1)d/(n R1 R2)], where n is the refractive index of the lens medium. The quantity 1/f is also known as the optical power of the lens. The corresponding front focal distance is FFD = f(1 + (n − 1)d/(n R2)), and the back focal distance is BFD = f(1 − (n − 1)d/(n R1)). In the sign convention used here, the value of R1 will be positive if the first lens surface is convex, and negative if it is concave. The value of R2 is negative if the second surface is convex, and positive if concave. Sign conventions vary between different authors, which results in different forms of these equations depending on the convention used. For a spherically-curved mirror in air, the magnitude of the focal length is equal to the radius of curvature of the mirror divided by two. The focal length is positive for a concave mirror, and negative for a convex mirror. In the sign convention used in optical design, a concave mirror has negative radius of curvature, so f = −R/2, where R is the radius of curvature of the mirror's surface. See Radius of curvature (optics) for more information on the sign convention for radius of curvature used here. In photography Camera lens focal lengths are usually specified in millimetres (mm), but some older lenses are marked in centimetres (cm) or inches. Focal length and field of view (FOV) of a lens are inversely proportional.
For a standard rectilinear lens, FOV = 2 arctan(x/(2f)), where x is the width of the film or imaging sensor. When a photographic lens is set to "infinity", its rear principal plane is separated from the sensor or film, which is then situated at the focal plane, by the lens's focal length. Objects far away from the camera then produce sharp images on the sensor or film, which is also at the image plane. To render closer objects in sharp focus, the lens must be adjusted to increase the distance between the rear principal plane and the film, to put the film at the image plane. The focal length f, the distance from the front principal plane to the object to photograph s1, and the distance from the rear principal plane to the image plane s2 are then related by: 1/s1 + 1/s2 = 1/f. As s1 is decreased, s2 must be increased. For example, consider a normal lens for a 35 mm camera with a focal length of 50 mm. To focus a distant object (s1 ≈ ∞), the rear principal plane of the lens must be located a distance 50 mm from the film plane, so that it is at the location of the image plane. To focus an object 1 m away (s1 = 1,000 mm), the lens must be moved 2.6 mm farther away from the film plane, to 52.6 mm. The focal length of a lens determines the magnification at which it images distant objects. It is equal to the distance between the image plane and a pinhole that images distant objects the same size as the lens in question. For rectilinear lenses (that is, with no image distortion), the imaging of distant objects is well modelled as a pinhole camera model. This model leads to the simple geometric model that photographers use for computing the angle of view of a camera; in this case, the angle of view depends only on the ratio of focal length to film size. In general, the angle of view depends also on the distortion. A lens with a focal length about equal to the diagonal size of the film or sensor format is known as a normal lens; its angle of view is similar to the angle subtended by a large-enough print viewed at a typical viewing distance of the print diagonal, which therefore yields a normal perspective when viewing the print; this angle of view is about 53 degrees diagonally. For full-frame 35 mm-format cameras, the diagonal is 43 mm and a typical "normal" lens has a 50 mm focal length. A lens with a focal length shorter than normal is often referred to as a wide-angle lens (typically 35 mm and less, for 35 mm-format cameras), while a lens significantly longer than normal may be referred to as a telephoto lens (typically 85 mm and more, for 35 mm-format cameras). Technically, long focal length lenses are only "telephoto" if the focal length is longer than the physical length of the lens, but the term is often used to describe any long focal length lens. Due to the popularity of the 35 mm standard, camera–lens combinations are often described in terms of their 35 mm-equivalent focal length, that is, the focal length of a lens that would have the same angle of view, or field of view, if used on a full-frame 35 mm camera. Use of a 35 mm-equivalent focal length is particularly common with digital cameras, which often use sensors smaller than 35 mm film, and so require correspondingly shorter focal lengths to achieve a given angle of view, by a factor known as the crop factor. Optical power The optical power of a lens or curved mirror is a physical quantity equal to the reciprocal of the focal length expressed in metres. A dioptre is its unit of measurement with dimension of reciprocal length, equivalent to one reciprocal metre, 1 dioptre = 1 m⁻¹.
Optical power The optical power of a lens or curved mirror is a physical quantity equal to the reciprocal of the focal length, expressed in metres. A dioptre is its unit of measurement with dimension of reciprocal length, equivalent to one reciprocal metre, 1 dioptre = 1 m⁻¹. For example, a 2-dioptre lens brings parallel rays of light to focus at 1/2 metre. A flat window has an optical power of zero dioptres, as it does not cause light to converge or diverge. The main benefit of using optical power rather than focal length is that the thin lens formula has the object distance, image distance, and focal length all as reciprocals. Additionally, when relatively thin lenses are placed close together their powers approximately add. Thus, a thin 2.0-dioptre lens placed close to a thin 0.5-dioptre lens yields almost the same focal length as a single 2.5-dioptre lens.
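A minimal sketch of the additivity described above, assuming ideal thin lenses in contact; the function names are illustrative only.

```python
# Optical power P = 1/f, in dioptres when f is in metres.
def power_dioptres(focal_length_m):
    return 1.0 / focal_length_m

def combined_focal_length_m(*powers_dioptres):
    """Thin lenses in contact: powers add, so the combined focal length is 1 / sum(P)."""
    return 1.0 / sum(powers_dioptres)

print(power_dioptres(0.5))                 # a 0.5 m focal length is a 2-dioptre lens
print(combined_focal_length_m(2.0, 0.5))   # 2.0 D + 0.5 D gives roughly f = 0.4 m (2.5 D)
```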
Physical sciences
Optics
null
82285
https://en.wikipedia.org/wiki/Mathematical%20proof
Mathematical proof
A mathematical proof is a deductive argument for a mathematical statement, showing that the stated assumptions logically guarantee the conclusion. The argument may use other previously established statements, such as theorems; but every proof can, in principle, be constructed using only certain basic or original assumptions known as axioms, along with the accepted rules of inference. Proofs are examples of exhaustive deductive reasoning which establish logical certainty, to be distinguished from empirical arguments or non-exhaustive inductive reasoning which establish "reasonable expectation". Presenting many cases in which the statement holds is not enough for a proof, which must demonstrate that the statement is true in all possible cases. A proposition that has not been proved but is believed to be true is known as a conjecture, or a hypothesis if frequently used as an assumption for further mathematical work. Proofs employ logic expressed in mathematical symbols, along with natural language which usually admits some ambiguity. In most mathematical literature, proofs are written in terms of rigorous informal logic. Purely formal proofs, written fully in symbolic language without the involvement of natural language, are considered in proof theory. The distinction between formal and informal proofs has led to much examination of current and historical mathematical practice, quasi-empiricism in mathematics, and so-called folk mathematics, oral traditions in the mainstream mathematical community or in other cultures. The philosophy of mathematics is concerned with the role of language and logic in proofs, and mathematics as a language. History and etymology The word proof derives from the Latin 'to test'; related words include English probe, probation, and probability, as well as Spanish 'to taste' (sometimes 'to touch' or 'to test'), Italian 'to try', and German 'to try'. The legal term probity means authority or credibility, the power of testimony to prove facts when given by persons of reputation or status. Plausibility arguments using heuristic devices such as pictures and analogies preceded strict mathematical proof. It is likely that the idea of demonstrating a conclusion first arose in connection with geometry, which originated in practical problems of land measurement. The development of mathematical proof is primarily the product of ancient Greek mathematics, and one of its greatest achievements. Thales (624–546 BCE) and Hippocrates of Chios (c. 470–410 BCE) gave some of the first known proofs of theorems in geometry. Eudoxus (408–355 BCE) and Theaetetus (417–369 BCE) formulated theorems but did not prove them. Aristotle (384–322 BCE) said definitions should describe the concept being defined in terms of other concepts already known. Mathematical proof was revolutionized by Euclid (300 BCE), who introduced the axiomatic method still in use today. It starts with undefined terms and axioms, propositions concerning the undefined terms which are assumed to be self-evidently true (from Greek 'something worthy'). From this basis, the method proves theorems using deductive logic. Euclid's Elements was read by anyone who was considered educated in the West until the middle of the 20th century. In addition to theorems of geometry, such as the Pythagorean theorem, the Elements also covers number theory, including a proof that the square root of two is irrational and a proof that there are infinitely many prime numbers. Further advances also took place in medieval Islamic mathematics. 
In the 10th century, the Iraqi mathematician Al-Hashimi worked with numbers as such, called "lines" but not necessarily considered as measurements of geometric objects, to prove algebraic propositions concerning multiplication, division, etc., including the existence of irrational numbers. An inductive proof for arithmetic progressions was introduced in the Al-Fakhri (1000) by Al-Karaji, who used it to prove the binomial theorem and properties of Pascal's triangle. Modern proof theory treats proofs as inductively defined data structures, not requiring an assumption that axioms are "true" in any sense. This allows parallel mathematical theories as formal models of a given intuitive concept, based on alternate sets of axioms, for example axiomatic set theory and non-Euclidean geometry. Nature and purpose As practiced, a proof is expressed in natural language and is a rigorous argument intended to convince the audience of the truth of a statement. The standard of rigor is not absolute and has varied throughout history. A proof can be presented differently depending on the intended audience. To gain acceptance, a proof has to meet communal standards of rigor; an argument considered vague or incomplete may be rejected. The concept of proof is formalized in the field of mathematical logic. A formal proof is written in a formal language instead of natural language. A formal proof is a sequence of formulas in a formal language, starting with an assumption, and with each subsequent formula a logical consequence of the preceding ones. This definition makes the concept of proof amenable to study. Indeed, the field of proof theory studies formal proofs and their properties, the most famous and surprising being that almost all axiomatic systems can generate certain undecidable statements not provable within the system. The definition of a formal proof is intended to capture the concept of proofs as written in the practice of mathematics. The soundness of this definition amounts to the belief that a published proof can, in principle, be converted into a formal proof. However, outside the field of automated proof assistants, this is rarely done in practice. A classic question in philosophy asks whether mathematical proofs are analytic or synthetic. Kant, who introduced the analytic–synthetic distinction, believed mathematical proofs are synthetic, whereas Quine argued in his 1951 "Two Dogmas of Empiricism" that such a distinction is untenable. Proofs may be admired for their mathematical beauty. The mathematician Paul Erdős was known for describing proofs which he found to be particularly elegant as coming from "The Book", a hypothetical tome containing the most beautiful method(s) of proving each theorem. The book Proofs from THE BOOK, published in 2003, is devoted to presenting 32 proofs its editors find particularly pleasing. Methods of proof Direct proof In direct proof, the conclusion is established by logically combining the axioms, definitions, and earlier theorems. For example, direct proof can be used to prove that the sum of two even integers is always even: Consider two even integers x and y. Since they are even, they can be written as x = 2a and y = 2b, respectively, for some integers a and b. Then the sum is x + y = 2a + 2b = 2(a+b). Therefore x+y has 2 as a factor and, by definition, is even. Hence, the sum of any two even integers is even. This proof uses the definition of even integers, the integer properties of closure under addition and multiplication, and the distributive property. 
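The algebra at the heart of the direct-proof example above fits on one line; the symbols x, y, a, b below are the ones already introduced in the argument.

```latex
x = 2a,\quad y = 2b
\;\Longrightarrow\;
x + y = 2a + 2b = 2(a + b), \qquad a + b \in \mathbb{Z},
```

so x + y is twice an integer and hence even.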
Proof by mathematical induction Despite its name, mathematical induction is a method of deduction, not a form of inductive reasoning. In proof by mathematical induction, a single "base case" is proved, and an "induction rule" is proved that establishes that any arbitrary case implies the next case. Since in principle the induction rule can be applied repeatedly (starting from the proved base case), it follows that all (usually infinitely many) cases are provable. This avoids having to prove each case individually. A variant of mathematical induction is proof by infinite descent, which can be used, for example, to prove the irrationality of the square root of two. A common application of proof by mathematical induction is to prove that a property known to hold for one number holds for all natural numbers: Let N = {1, 2, 3, 4, ...} be the set of natural numbers, and let P(n) be a mathematical statement involving the natural number n belonging to N such that (i) P(1) is true, i.e., P(n) is true for n = 1. (ii) P(n + 1) is true whenever P(n) is true, i.e., P(n) is true implies that P(n + 1) is true. Then P(n) is true for all natural numbers n. For example, we can prove by induction that all positive integers of the form 2n − 1 are odd. Let P(n) represent "2n − 1 is odd": (i) For n = 1, 2n − 1 = 2(1) − 1 = 1, and 1 is odd, since it leaves a remainder of 1 when divided by 2. Thus P(1) is true. (ii) For any n, if 2n − 1 is odd (P(n)), then (2n − 1) + 2 must also be odd, because adding 2 to an odd number results in an odd number. But (2n − 1) + 2 = 2(n + 1) − 1, so 2(n + 1) − 1 is odd (P(n + 1)). So P(n) implies P(n + 1). Thus 2n − 1 is odd, for all positive integers n. The shorter phrase "proof by induction" is often used instead of "proof by mathematical induction". Proof by contraposition Proof by contraposition infers the statement "if p then q" by establishing the logically equivalent contrapositive statement: "if not q then not p". For example, contraposition can be used to establish that, given an integer x, if x² is even, then x is even: Suppose x is not even. Then x is odd. The product of two odd numbers is odd, hence x² = x·x is odd. Thus x² is not even. Thus, if x² is even, the supposition must be false, so x has to be even. Proof by contradiction In proof by contradiction, also known by the Latin phrase reductio ad absurdum (by reduction to the absurd), it is shown that if some statement is assumed true, a logical contradiction occurs, hence the statement must be false. A famous example involves the proof that √2 is an irrational number: Suppose that √2 were a rational number. Then it could be written in lowest terms as √2 = a/b, where a and b are non-zero integers with no common factor. Thus, b√2 = a. Squaring both sides yields 2b² = a². Since the expression on the left is an integer multiple of 2, the right expression is by definition divisible by 2. That is, a² is even, which implies that a must also be even, as seen in the proposition above (in #Proof by contraposition). So we can write a = 2c, where c is also an integer. Substitution into the original equation yields 2b² = (2c)² = 4c². Dividing both sides by 2 yields b² = 2c². But then, by the same argument as before, 2 divides b², so b must be even. However, if a and b are both even, they have 2 as a common factor. This contradicts our previous statement that a and b have no common factor, so we must conclude that √2 is an irrational number. To paraphrase: if one could write √2 as a fraction, this fraction could never be written in lowest terms, since 2 could always be factored from numerator and denominator.
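For reference, the three methods described in this section follow the schemas below; the symbols (P, Q for statements, ⊥ for a contradiction) are notation introduced here only for illustration and do not appear in the surrounding text.

```latex
\begin{align*}
\text{Induction:}      &\quad \bigl(P(1) \,\land\, \forall n\,(P(n) \Rightarrow P(n+1))\bigr) \;\Rightarrow\; \forall n\, P(n)\\
\text{Contraposition:} &\quad (P \Rightarrow Q) \;\equiv\; (\lnot Q \Rightarrow \lnot P)\\
\text{Contradiction:}  &\quad (\lnot P \Rightarrow \bot) \;\Rightarrow\; P
\end{align*}
```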
Proof by construction Proof by construction, or proof by example, is the construction of a concrete example with a property to show that something having that property exists. Joseph Liouville, for instance, proved the existence of transcendental numbers by constructing an explicit example. It can also be used to construct a counterexample to disprove a proposition that all elements have a certain property. Proof by exhaustion In proof by exhaustion, the conclusion is established by dividing it into a finite number of cases and proving each one separately. The number of cases sometimes can become very large. For example, the first proof of the four color theorem was a proof by exhaustion with 1,936 cases. This proof was controversial because the majority of the cases were checked by a computer program, not by hand. Closed chain inference A closed chain inference shows that a collection of statements are pairwise equivalent. In order to prove that the statements A1, ..., An are each pairwise equivalent, proofs are given for the implications A1 ⇒ A2, A2 ⇒ A3, ..., An−1 ⇒ An, and An ⇒ A1. The pairwise equivalence of the statements then results from the transitivity of the material conditional. Probabilistic proof A probabilistic proof is one in which an example is shown to exist, with certainty, by using methods of probability theory. Probabilistic proof, like proof by construction, is one of many ways to prove existence theorems. In the probabilistic method, one seeks an object having a given property, starting with a large set of candidates. One assigns a certain probability for each candidate to be chosen, and then proves that there is a non-zero probability that a chosen candidate will have the desired property. This does not specify which candidates have the property, but the probability could not be positive without at least one. A probabilistic proof is not to be confused with an argument that a theorem is 'probably' true, a 'plausibility argument'. The work toward the Collatz conjecture shows how far plausibility is from genuine proof, as does the disproof of the Mertens conjecture. While most mathematicians do not think that probabilistic evidence for the properties of a given object counts as a genuine mathematical proof, a few mathematicians and philosophers have argued that at least some types of probabilistic evidence (such as Rabin's probabilistic algorithm for testing primality) are as good as genuine mathematical proofs. Combinatorial proof A combinatorial proof establishes the equivalence of different expressions by showing that they count the same object in different ways. Often a bijection between two sets is used to show that the expressions for their two sizes are equal. Alternatively, a double counting argument provides two different expressions for the size of a single set, again showing that the two expressions are equal. Nonconstructive proof A nonconstructive proof establishes that a mathematical object with a certain property exists—without explaining how such an object can be found. Often, this takes the form of a proof by contradiction in which the nonexistence of the object is proved to be impossible. In contrast, a constructive proof establishes that a particular object exists by providing a method of finding it. The following famous example of a nonconstructive proof shows that there exist two irrational numbers a and b such that a^b is a rational number. This proof uses that √2 is irrational (an easy proof is known since Euclid), but not that √2^√2 is irrational (this is true, but the proof is not elementary). Either √2^√2 is a rational number and we are done (take a = b = √2), or √2^√2 is irrational so we can write a = √2^√2 and b = √2.
This then gives a^b = (√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2, which is thus a rational number of the form a^b. Statistical proofs in pure mathematics The expression "statistical proof" may be used technically or colloquially in areas of pure mathematics, such as involving cryptography, chaotic series, and probabilistic number theory or analytic number theory. It is less commonly used to refer to a mathematical proof in the branch of mathematics known as mathematical statistics.
Mathematics
Mathematics: General
null
82289
https://en.wikipedia.org/wiki/Composite%20number
Composite number
A composite number is a positive integer that can be formed by multiplying two smaller positive integers. Accordingly it is a positive integer that has at least one divisor other than 1 and itself. Every positive integer is composite, prime, or the unit 1, so the composite numbers are exactly the numbers that are not prime and not a unit. E.g., the integer 14 is a composite number because it is the product of the two smaller integers 2 × 7, but the integers 2 and 3 are not because each can only be divided by one and itself. The composite numbers up to 150 are: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, 30, 32, 33, 34, 35, 36, 38, 39, 40, 42, 44, 45, 46, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 60, 62, 63, 64, 65, 66, 68, 69, 70, 72, 74, 75, 76, 77, 78, 80, 81, 82, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100, 102, 104, 105, 106, 108, 110, 111, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 128, 129, 130, 132, 133, 134, 135, 136, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 150. Every composite number can be written as the product of two or more (not necessarily distinct) primes. For example, the composite number 299 can be written as 13 × 23, and the composite number 360 can be written as 2³ × 3² × 5; furthermore, this representation is unique up to the order of the factors. This fact is called the fundamental theorem of arithmetic. There are several known primality tests that can determine whether a number is prime or composite which do not necessarily reveal the factorization of a composite input. Types One way to classify composite numbers is by counting the number of prime factors. A composite number with two prime factors is a semiprime or 2-almost prime (the factors need not be distinct, hence squares of primes are included). A composite number with three distinct prime factors is a sphenic number. In some applications, it is necessary to differentiate between composite numbers with an odd number of distinct prime factors and those with an even number of distinct prime factors. For the latter μ(n) = (−1)^(2x) = 1 (where μ is the Möbius function and x is half the total of prime factors), while for the former μ(n) = (−1)^(2x+1) = −1. However, for prime numbers, the function also returns −1, and μ(1) = 1. For a number n with one or more repeated prime factors, μ(n) = 0. If all the prime factors of a number are repeated it is called a powerful number (All perfect powers are powerful numbers). If none of its prime factors are repeated, it is called squarefree. (All prime numbers and 1 are squarefree.) For example, 72 = 2³ × 3², all the prime factors are repeated, so 72 is a powerful number. 42 = 2 × 3 × 7, none of the prime factors are repeated, so 42 is squarefree. Another way to classify composite numbers is by counting the number of divisors. All composite numbers have at least three divisors. In the case of squares of primes, those divisors are {1, p, p²}. A number n that has more divisors than any x < n is a highly composite number (though the first two such numbers are 1 and 2). Composite numbers have also been called "rectangular numbers", but that name can also refer to the pronic numbers, numbers that are the product of two consecutive integers. Yet another way to classify composite numbers is to determine whether all prime factors are either all below or all above some fixed (prime) number. Such numbers are called smooth numbers and rough numbers, respectively.
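A small sketch of the classifications defined above, using naive trial-division factorization; the helper names are arbitrary and the approach is illustrative rather than efficient.

```python
from collections import Counter

def prime_factors(n):
    """Naive trial-division factorization; returns a Counter mapping prime -> exponent."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def classify(n):
    """Label n as unit, prime, or composite, noting squarefree/powerful/semiprime status."""
    if n == 1:
        return "unit"
    f = prime_factors(n)
    total = sum(f.values())                  # prime factors counted with multiplicity
    if total == 1:
        return "prime"
    kind = "composite"
    if all(e == 1 for e in f.values()):
        kind += ", squarefree"               # e.g. 42 = 2 * 3 * 7
    if all(e >= 2 for e in f.values()):
        kind += ", powerful"                 # e.g. 72 = 2^3 * 3^2
    if total == 2:
        kind += ", semiprime"                # e.g. 49 = 7^2
    return kind

composites = [n for n in range(2, 151) if classify(n).startswith("composite")]
print(composites[:10])     # [4, 6, 8, 9, 10, 12, 14, 15, 16, 18], matching the list above
print(classify(72), "|", classify(42), "|", classify(49))
```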
Mathematics
Sums and products
null
82330
https://en.wikipedia.org/wiki/Electric%20generator
Electric generator
In electricity generation, a generator is a device that converts motion-based power (potential and kinetic energy) or fuel-based power (chemical energy) into electric power for use in an external circuit. Sources of mechanical energy include steam turbines, gas turbines, water turbines, internal combustion engines, wind turbines and even hand cranks. The first electromagnetic generator, the Faraday disk, was invented in 1831 by British scientist Michael Faraday. Generators provide nearly all the power for electrical grids. In addition to electricity- and motion-based designs, photovoltaic and fuel cell powered generators use solar power and hydrogen-based fuels, respectively, to generate electrical output. The reverse conversion of electrical energy into mechanical energy is done by an electric motor, and motors and generators are very similar. Many motors can generate electricity from mechanical energy. Terminology Electromagnetic generators fall into one of two broad categories, dynamos and alternators. Dynamos generate pulsing direct current through the use of a commutator. Alternators generate alternating current. Mechanically, a generator consists of a rotating part and a stationary part which together form a magnetic circuit: Rotor: The rotating part of an electrical machine. Stator: The stationary part of an electrical machine, which surrounds the rotor. One of these parts generates a magnetic field, the other has a wire winding in which the changing field induces an electric current: Field winding or field (permanent) magnets: The magnetic field-producing component of an electrical machine. The magnetic field of the dynamo or alternator can be provided by either wire windings called field coils or permanent magnets. Electrically-excited generators include an excitation system to produce the field flux. A generator using permanent magnets (PMs) is sometimes called a magneto, or a permanent magnet synchronous generator (PMSG). Armature: The power-producing component of an electrical machine. In a generator, alternator, or dynamo, the armature windings generate the electric current, which provides power to an external circuit. The armature can be on either the rotor or the stator, depending on the design, with the field coil or magnet on the other part. History Before the connection between magnetism and electricity was discovered, electrostatic generators were invented. They operated on electrostatic principles, by using moving electrically charged belts, plates and disks that carried charge to a high potential electrode. The charge was generated using either of two mechanisms: electrostatic induction or the triboelectric effect. Such generators generated very high voltage and low current. Because of their inefficiency and the difficulty of insulating machines that produced very high voltages, electrostatic generators had low power ratings, and were never used for generation of commercially significant quantities of electric power. Their only practical applications were to power early X-ray tubes, and later in some atomic particle accelerators. Faraday disk generator The operating principle of electromagnetic generators was discovered in the years of 1831–1832 by Michael Faraday. The principle, later called Faraday's law, is that an electromotive force is generated in an electrical conductor which encircles a varying magnetic flux. 
Faraday also built the first electromagnetic generator, called the Faraday disk; a type of homopolar generator, using a copper disc rotating between the poles of a horseshoe magnet. It produced a small DC voltage. This design was inefficient, due to self-cancelling counterflows of current in regions of the disk that were not under the influence of the magnetic field. While current was induced directly underneath the magnet, the current would circulate backwards in regions that were outside the influence of the magnetic field. This counterflow limited the power output to the pickup wires and induced waste heating of the copper disc. Later homopolar generators would solve this problem by using an array of magnets arranged around the disc perimeter to maintain a steady field effect in one current-flow direction. Another disadvantage was that the output voltage was very low, due to the single current path through the magnetic flux. Experimenters found that using multiple turns of wire in a coil could produce higher, more useful voltages. Since the output voltage is proportional to the number of turns, generators could be easily designed to produce any desired voltage by varying the number of turns. Wire windings became a basic feature of all subsequent generator designs. Jedlik and the self-excitation phenomenon Independently of Faraday, Ányos Jedlik started experimenting in 1827 with the electromagnetic rotating devices which he called electromagnetic self-rotors. In the prototype of the single-pole electric starter (finished between 1852 and 1854) both the stationary and the revolving parts were electromagnetic. It was also the discovery of the principle of dynamo self-excitation, which replaced permanent magnet designs. He also may have formulated the concept of the dynamo in 1861 (before Siemens and Wheatstone) but did not patent it as he thought he was not the first to realize this. Direct current generators A coil of wire rotating in a magnetic field produces a current which changes direction with each 180° rotation, an alternating current (AC). However many early uses of electricity required direct current (DC). In the first practical electric generators, called dynamos, the AC was converted into DC with a commutator, a set of rotating switch contacts on the armature shaft. The commutator reversed the connection of the armature winding to the circuit every 180° rotation of the shaft, creating a pulsing DC current. One of the first dynamos was built by Hippolyte Pixii in 1832. The dynamo was the first electrical generator capable of delivering power for industry. The Woolrich Electrical Generator of 1844, now in Thinktank, Birmingham Science Museum, is the earliest electrical generator used in an industrial process. It was used by the firm of Elkingtons for commercial electroplating. The modern dynamo, fit for use in industrial applications, was invented independently by Sir Charles Wheatstone, Werner von Siemens and Samuel Alfred Varley. Varley took out a patent on 24 December 1866, while Siemens and Wheatstone both announced their discoveries on 17 January 1867 by delivering papers at the Royal Society. The "dynamo-electric machine" employed self-powering electromagnetic field coils rather than permanent magnets to create the stator field. Wheatstone's design was similar to Siemens', with the difference that in the Siemens design the stator electromagnets were in series with the rotor, but in Wheatstone's design they were in parallel. 
The use of electromagnets rather than permanent magnets greatly increased the power output of a dynamo and enabled high power generation for the first time. This invention led directly to the first major industrial uses of electricity. For example, in the 1870s Siemens used electromagnetic dynamos to power electric arc furnaces for the production of metals and other materials. The dynamo machine that was developed consisted of a stationary structure, which provides the magnetic field, and a set of rotating windings which turn within that field. On larger machines the constant magnetic field is provided by one or more electromagnets, which are usually called field coils. Large power generation dynamos are now rarely seen due to the now nearly universal use of alternating current for power distribution. Before the adoption of AC, very large direct-current dynamos were the only means of power generation and distribution. AC has come to dominate due to the ability of AC to be easily transformed to and from very high voltages to permit low losses over large distances. Synchronous generators (alternating current generators) Through a series of discoveries, the dynamo was succeeded by many later inventions, especially the AC alternator, which was capable of generating alternating current. It is commonly known to be the Synchronous Generators (SGs). The synchronous machines are directly connected to the grid and need to be properly synchronized during startup. Moreover, they are excited with special control to enhance the stability of the power system. Alternating current generating systems were known in simple forms from Michael Faraday's original discovery of the magnetic induction of electric current. Faraday himself built an early alternator. His machine was a "rotating rectangle", whose operation was heteropolar: each active conductor passed successively through regions where the magnetic field was in opposite directions. Large two-phase alternating current generators were built by a British electrician, J. E. H. Gordon, in 1882. The first public demonstration of an "alternator system" was given by William Stanley Jr., an employee of Westinghouse Electric in 1886. Sebastian Ziani de Ferranti established Ferranti, Thompson and Ince in 1882, to market his Ferranti-Thompson Alternator, invented with the help of renowned physicist Lord Kelvin. His early alternators produced frequencies between 100 and 300 Hz. Ferranti went on to design the Deptford Power Station for the London Electric Supply Corporation in 1887 using an alternating current system. On its completion in 1891, it was the first truly modern power station, supplying high-voltage AC power that was then "stepped down" for consumer use on each street. This basic system remains in use today around the world. After 1891, polyphase alternators were introduced to supply currents of multiple differing phases. Later alternators were designed for varying alternating-current frequencies between sixteen and about one hundred hertz, for use with arc lighting, incandescent lighting and electric motors. Self-excitation As the requirements for larger scale power generation increased, a new limitation rose: the magnetic fields available from permanent magnets. Diverting a small amount of the power generated by the generator to an electromagnetic field coil allowed the generator to produce substantially more power. This concept was dubbed self-excitation. The field coils are connected in series or parallel with the armature winding. 
When the generator first starts to turn, the small amount of remanent magnetism present in the iron core provides a magnetic field to get it started, generating a small current in the armature. This flows through the field coils, creating a larger magnetic field which generates a larger armature current. This "bootstrap" process continues until the magnetic field in the core levels off due to saturation and the generator reaches a steady state power output. Very large power station generators often utilize a separate smaller generator to excite the field coils of the larger. In the event of a severe widespread power outage where islanding of power stations has occurred, the stations may need to perform a black start to excite the fields of their largest generators, in order to restore customer power service. Specialised types of generator Direct current (DC) A dynamo uses commutators to produce direct current. It is self-excited, i.e. its field electromagnets are powered by the machine's own output. Other types of DC generators use a separate source of direct current to energise their field magnets. Homopolar generator A homopolar generator is a DC electrical generator comprising an electrically conductive disc or cylinder rotating in a plane perpendicular to a uniform static magnetic field. A potential difference is created between the center of the disc and the rim (or ends of the cylinder), the electrical polarity depending on the direction of rotation and the orientation of the field. It is also known as a unipolar generator, acyclic generator, disk dynamo, or Faraday disc. The voltage is typically low, on the order of a few volts in the case of small demonstration models, but large research generators can produce hundreds of volts, and some systems have multiple generators in series to produce an even larger voltage. They are unusual in that they can produce tremendous electric current, some more than a million amperes, because the homopolar generator can be made to have very low internal resistance. Magnetohydrodynamic (MHD) generator A magnetohydrodynamic generator directly extracts electric power from moving hot gases through a magnetic field, without the use of rotating electromagnetic machinery. MHD generators were originally developed because the output of a plasma MHD generator is a flame, well able to heat the boilers of a steam power plant. The first practical design was the AVCO Mk. 25, developed in 1965. The U.S. government funded substantial development, culminating in a 25 MW demonstration plant in 1987. In the Soviet Union from 1972 until the late 1980s, the MHD plant U 25 was in regular utility operation on the Moscow power system with a rating of 25 MW, the largest MHD plant rating in the world at that time. MHD generators operated as a topping cycle are currently (2007) less efficient than combined cycle gas turbines. Alternating current (AC) Induction generator Induction AC motors may be used as generators, turning mechanical energy into electric current. Induction generators operate by mechanically turning their rotor faster than the synchronous speed, giving negative slip. A regular AC asynchronous motor usually can be used as a generator, without any changes to its parts. Induction generators are useful in applications like minihydro power plants, wind turbines, or in reducing high-pressure gas streams to lower pressure, because they can recover energy with relatively simple controls.
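As a rough numerical illustration of operation above synchronous speed, the sketch below computes slip for an induction machine. The pole count, grid frequency, and rotor speeds are hypothetical example values, and the standard relation n_sync = 120·f/p (in rpm) is assumed here rather than taken from this article.

```python
def synchronous_speed_rpm(grid_freq_hz, poles):
    """Standard relation for AC machines: n_sync = 120 * f / p, in rpm."""
    return 120.0 * grid_freq_hz / poles

def slip(n_sync_rpm, n_rotor_rpm):
    """Slip s = (n_sync - n_rotor) / n_sync; negative when the rotor is driven
    faster than synchronous speed, i.e. the machine acts as a generator."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

n_sync = synchronous_speed_rpm(50.0, poles=4)      # 1500 rpm on a 50 Hz grid
print(f"{slip(n_sync, 1450.0):+.3f}")              # +0.033, motoring
print(f"{slip(n_sync, 1530.0):+.3f}")              # -0.020, generating (negative slip)
```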
They do not require another circuit to start working because the turning magnetic field is provided by induction from the one they have. They also do not require speed governor equipment as they inherently operate at the connected grid frequency. An induction generator must be powered with a leading voltage; this is usually done by connection to an electrical grid, or by powering themselves with phase correcting capacitors. Linear electric generator In the simplest form of linear electric generator, a sliding magnet moves back and forth through a solenoid, a copper wire or a coil. An alternating current is induced in the wire, or loops of wire, by Faraday's law of induction each time the magnet slides through. This type of generator is used in the Faraday flashlight. Larger linear electricity generators are used in wave power schemes. Variable-speed constant-frequency generators Grid-connected generators deliver power at a constant frequency. For generators of the synchronous or induction type, the primer mover speed turning the generator shaft must be at a particular speed (or narrow range of speed) to deliver power at the required utility frequency. Mechanical speed-regulating devices may waste a significant fraction of the input energy to maintain a required fixed frequency. Where it is impractical or undesired to tightly regulate the speed of the prime mover, doubly fed electric machines may be used as generators. With the assistance of power electronic devices, these can regulate the output frequency to a desired value over a wider range of generator shaft speeds. Alternatively, a standard generator can be used with no attempt to regulate frequency, and the resulting power converted to the desired output frequency with a rectifier and converter combination. Allowing a wider range of prime mover speeds can improve the overall energy production of an installation, at the cost of more complex generators and controls. For example, where a wind turbine operating at fixed frequency might be required to spill energy at high wind speeds, a variable speed system can allow recovery of energy contained during periods of high wind speed. Common use cases Power station A power station, also known as a power plant or powerhouse and sometimes generating station or generating plant, is an industrial facility that generates electricity. Most power stations contain one or more generators, or spinning machines converting mechanical power into three-phase electrical power. The relative motion between a magnetic field and a conductor creates an electric current. The energy source harnessed to turn the generator varies widely. Most power stations in the world burn fossil fuels such as coal, oil, and natural gas to generate electricity. Cleaner sources include nuclear power, and increasingly use renewables such as the sun, wind, waves and running water. Vehicular generators Roadway vehicles Motor vehicles require electrical energy to power their instrumentation, keep the engine itself operating, and recharge their batteries. Until about the 1960s motor vehicles tended to use DC generators (dynamos) with electromechanical regulators. Following the historical trend above and for many of the same reasons, these have now been replaced by alternators with built-in rectifier circuits. Bicycles Bicycles require energy to power running lights and other equipment. 
There are two common kinds of generator in use on bicycles: bottle dynamos which engage the bicycle's tire on an as-needed basis, and hub dynamos which are directly attached to the bicycle's drive train. The name is conventional as they are small permanent-magnet alternators, not self-excited DC machines as are dynamos. Some electric bicycles are capable of regenerative braking, where the drive motor is used as a generator to recover some energy during braking. Sailboats Sailing boats may use a water- or wind-powered generator to trickle-charge the batteries. A small propeller, wind turbine or turbine is connected to a low-power generator to supply currents at typical wind or cruising speeds. Recreational vehicles Recreational vehicles need an extra power supply to power their onboard accessories, including air conditioning units, and refrigerators. An RV power plug is connected to the electric generator to obtain a stable power supply. Electric scooters Electric scooters with regenerative braking have become popular all over the world. Engineers use kinetic energy recovery systems on the scooter to reduce energy consumption and increase its range up to 40-60% by simply recovering energy using the magnetic brake, which generates electric energy for further use. Modern vehicles reach speed up to 25–30 km/h and can run up to 35–40 km. Genset An engine-generator is the combination of an electrical generator and an engine (prime mover) mounted together to form a single piece of self-contained equipment. The engines used are usually piston engines, but gas turbines can also be used, and there are even hybrid diesel-gas units, called dual-fuel units. Many different versions of engine-generators are available – ranging from very small portable petrol powered sets to large turbine installations. The primary advantage of engine-generators is the ability to independently supply electricity, allowing the units to serve as backup power sources. Human powered electrical generators A generator can also be driven by human muscle power (for instance, in field radio station equipment). Human powered electric generators are commercially available, and have been the project of some DIY enthusiasts. Typically operated by means of pedal power, a converted bicycle trainer, or a foot pump, such generators can be practically used to charge batteries, and in some cases are designed with an integral inverter. An average "healthy human" can produce a steady 75 watts (0.1 horsepower) for a full eight hour period, while a "first class athlete" can produce approximately 298 watts (0.4 horsepower) for a similar period, at the end of which an undetermined period of rest and recovery will be required. At 298 watts, the average "healthy human" becomes exhausted within 10 minutes. The net electrical power that can be produced will be less, due to the efficiency of the generator. Portable radio receivers with a crank are made to reduce battery purchase requirements, see clockwork radio. During the mid 20th century, pedal powered radios were used throughout the Australian outback, to provide schooling (School of the Air), medical and other needs in remote stations and towns. Mechanical measurement A tachogenerator is an electromechanical device which produces an output voltage proportional to its shaft speed. It may be used for a speed indicator or in a feedback speed control system. Tachogenerators are frequently used to power tachometers to measure the speeds of electric motors, engines, and the equipment they power. 
Generators generate voltage roughly proportional to shaft speed. With precise construction and design, generators can be built to produce very precise voltages for certain ranges of shaft speeds. Equivalent circuit An equivalent circuit of a generator and load is shown in the adjacent diagram. The generator is represented by an abstract generator consisting of an ideal voltage source and an internal impedance. The generator's source-voltage and internal-impedance parameters can be determined by measuring the winding resistance (corrected to operating temperature), and measuring the open-circuit and loaded voltage for a defined current load. This is the simplest model of a generator; further elements may need to be added for an accurate representation. In particular, inductance can be added to allow for the machine's windings and magnetic leakage flux, but a full representation can become much more complex than this.
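A minimal sketch of the two-measurement procedure described above, assuming the simple source-plus-resistance model; the variable names and the example readings are illustrative only.

```python
def source_params(v_open_circuit, v_loaded, i_load):
    """Fit the simple equivalent circuit V_load = E - I * R_internal.

    With no load current the terminal voltage equals the source EMF, so the
    open-circuit reading gives E directly; the voltage sag under a known load
    current gives the internal resistance."""
    emf = v_open_circuit
    r_internal = (v_open_circuit - v_loaded) / i_load
    return emf, r_internal

# Illustrative readings: 13.0 V with no load, 12.4 V while supplying 10 A.
emf, r_int = source_params(13.0, 12.4, 10.0)
print(f"EMF = {emf:.1f} V, internal resistance = {r_int:.2f} ohm")   # 13.0 V, 0.06 ohm
```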
Technology
Electricity generation and distribution
null
82342
https://en.wikipedia.org/wiki/Lymph%20node
Lymph node
A lymph node, or lymph gland, is a kidney-shaped organ of the lymphatic system and the adaptive immune system. A large number of lymph nodes are linked throughout the body by the lymphatic vessels. They are major sites of lymphocytes that include B and T cells. Lymph nodes are important for the proper functioning of the immune system, acting as filters for foreign particles including cancer cells, but have no detoxification function. In the lymphatic system, a lymph node is a secondary lymphoid organ. A lymph node is enclosed in a fibrous capsule and is made up of an outer cortex and an inner medulla. Lymph nodes become inflamed or enlarged in various diseases, which may range from trivial throat infections to life-threatening cancers. The condition of lymph nodes is very important in cancer staging, which decides the treatment to be used and determines the prognosis. Lymphadenopathy refers to glands that are enlarged or swollen. When inflamed or enlarged, lymph nodes can be firm or tender. Structure Lymph nodes are kidney or oval shaped and range in size from 2 mm to 25 mm on their long axis, with an average of 15 mm. Each lymph node is surrounded by a fibrous capsule (made of collagenous connective tissue), which extends inside a lymph node to form trabeculae. The substance of a lymph node is divided into the outer cortex and the inner medulla. These are rich with cells. The hilum is an indent on the concave surface of the lymph node where lymphatic vessels leave and blood vessels enter and leave. Lymph enters the convex side of a lymph node through multiple afferent lymphatic vessels, and from there, it flows into a series of sinuses. Upon entering the lymph node, lymph first passes into a space beneath the capsule known as the subcapsular sinus, then moves into the cortical sinuses. After traversing the cortex, lymph collects in the medullary sinuses. Finally, all of these sinuses drain into the efferent lymphatic vessels, which carry the lymph away from the node, exiting at the hilum on the concave side. Location Lymph nodes are present throughout the body, are more concentrated near and within the trunk, and are divided into groups. There are about 450 lymph nodes in the adult. Some lymph nodes can be felt when enlarged (and occasionally when not), such as the axillary lymph nodes under the arm, the cervical lymph nodes of the head and neck and the inguinal lymph nodes near the groin crease. Most lymph nodes lie within the trunk adjacent to other major structures in the body - such as the paraaortic lymph nodes and the tracheobronchial lymph nodes. The lymphatic drainage patterns are different from person to person and even asymmetrical on each side of the same body. There are no lymph nodes in the central nervous system, which is separated from the body by the blood–brain barrier. Lymph from the meningeal lymphatic vessels in the CNS drains to the deep cervical lymph nodes. However, the CNS does innervate lymph node by sympathetic nerves.  These regulate lymphocyte proliferation and migration, antibody secretion, blood perfusion, and inflammatory cytokine production. Size Subdivisions A lymph node is divided into compartments called nodules (or lobules), each consisting of a region of cortex with combined follicle B cells, a paracortex of T cells, and a part of the nodule in the medulla. The substance of a lymph node is divided into the outer cortex and the inner medulla. The cortex of a lymph node is the outer portion of the node, underneath the capsule and the subcapsular sinus. 
It has an outer part and a deeper part known as the paracortex. The outer cortex consists of groups of mainly inactivated B cells called follicles. When activated, these may develop into what is called a germinal centre. The deeper paracortex mainly consists of the T cells. Here the T-cells mainly interact with dendritic cells, and the reticular network is dense. The medulla contains large blood vessels, sinuses and medullary cords that contain antibody-secreting plasma cells. There are fewer cells in the medulla. The medullary cords are cords of lymphatic tissue, and include plasma cells, macrophages, and B cells. Cells In the lymphatic system a lymph node is a secondary lymphoid organ. Lymph nodes contain lymphocytes, a type of white blood cell, and are primarily made up of B cells and T cells. B cells are mainly found in the outer cortex where they are clustered together as follicular B cells in lymphoid follicles, and T cells and dendritic cells are mainly found in the paracortex. There are fewer cells in the medulla than the cortex. The medulla contains plasma cells, as well as macrophages which are present within the medullary sinuses. As part of the reticular network, there are follicular dendritic cells in the B cell follicle and fibroblastic reticular cells in the T cell cortex. The reticular network provides structural support and a surface for adhesion of the dendritic cells, macrophages and lymphocytes. It also allows exchange of material with blood through the high endothelial venules and provides the growth and regulatory factors necessary for activation and maturation of immune cells. Lymph flow Lymph enters the convex side of a lymph node through multiple afferent lymphatic vessels, which form a network of lymphatic vessels () and flows into a space () underneath the capsule called the subcapsular sinus. From here, lymph flows into sinuses within the cortex. After passing through the cortex, lymph then collects in medullary sinuses. All of these sinuses drain into the efferent lymphatic vessels to exit the node at the hilum on the concave side. These are channels within the node lined by endothelial cells along with fibroblastic reticular cells, allowing for the smooth flow of lymph. The endothelium of the subcapsular sinus is continuous with that of the afferent lymph vessel and also with that of the similar sinuses flanking the trabeculae and within the cortex. These vessels are smaller and do not allow the passage of macrophages so that they remain contained to function within a lymph node. In the course of the lymph, lymphocytes may be activated as part of the adaptive immune response. There is usually only one efferent vessel though sometimes there may be two, in contrast to the multiple afferent channels that bring lymph into the node. Medullary sinuses contain histiocytes (immobile macrophages) and reticular cells, the former of which, along with T and B cells, become activated in the presence of antigens through lymphatic flow. The fewer efferent vessels allow this flow to be slowed, providing time to activate and distribute a larger number of immune cells in the event of an infection. A lymph node contains lymphoid tissue, i.e., a meshwork or fibers called with white blood cells enmeshed in it. The regions where there are few cells within the meshwork are known as . It is lined by reticular cells, fibroblasts and fixed macrophages. Capsule Thin reticular fibers (reticulin) of reticular connective tissue form a supporting meshwork inside the node. 
These reticular cells also form a conduit network within the lymph node that functions as a molecular sieve, to prevent pathogens that enter the lymph node through afferent vessels re-enter the blood stream. The lymph node capsule is composed of dense irregular connective tissue with some plain collagenous fibers, and a number of membranous processes or trabeculae extend from its internal surface. The trabeculae pass inward, radiating toward the center of the node, for about one-third or one-fourth of the space between the circumference and the center of the node. In some animals they are sufficiently well-marked to divide the peripheral or cortical portion of the node into a number of compartments (nodules), but in humans this arrangement is not obvious. The larger trabeculae springing from the capsule break up into finer bands, and these interlace to form a mesh-work in the central or medullary portion of the node. These trabecular spaces formed by the interlacing trabeculae contain the proper lymph node substance or lymphoid tissue. The node pulp does not, however, completely fill the spaces, but leaves between its outer margin and the enclosing trabeculae a channel or space of uniform width throughout. This is termed the subcapsular sinus (lymph path or lymph sinus). Running across it are a number of finer trabeculae of reticular fibers, mostly covered by ramifying cells. Function In the lymphatic system, a lymph node is a secondary lymphoid organ. The primary function of lymph nodes is the filtering of lymph to identify and fight infection. In order to do this, lymph nodes contain lymphocytes, a type of white blood cell, which includes B cells and T cells. These circulate through the bloodstream and enter and reside in lymph nodes. B cells produce antibodies. Each antibody has a single predetermined target, an antigen, that it can bind to. These circulate throughout the bloodstream and if they find this target, the antibodies bind to it and stimulate an immune response. Each B cell produces different antibodies, and this process is driven in lymph nodes. B cells enter the bloodstream as "naive" cells produced in bone marrow. After entering a lymph node, they then enter a lymphoid follicle, where they multiply and divide, each producing a different antibody. If a cell is stimulated, it will go on to produce more antibodies (a plasma cell) or act as a memory cell to help the body fight future infection. If a cell is not stimulated, it will undergo apoptosis and die. Antigens are molecules found on bacterial cell walls, chemical substances secreted from bacteria, or sometimes even molecules present in body tissue itself. These are taken up by cells throughout the body called antigen-presenting cells, such as dendritic cells. These antigen presenting cells enter the lymph system and then lymph nodes. They present the antigen to T cells and, if there is a T cell with the appropriate T cell receptor, it will be activated. B cells acquire antigen directly from the afferent lymph. If a B cell binds its cognate antigen it will be activated. Some B cells will immediately develop into antibody secreting plasma cells, and secrete IgM. Other B cells will internalize the antigen and present it to follicular helper T cells on the B and T cell zone interface. If a cognate FTh cell is found it will upregulate CD40L and promote somatic hypermutation and isotype class switching of the B cell, increasing its antigen binding affinity and changing its effector function. 
Proliferation of cells within a lymph node will make the node expand. Lymph is present throughout the body, and circulates through lymphatic vessels. These drain into and from lymph nodesafferent vessels drain into nodes, and efferent vessels from nodes. When lymph fluid enters a node, it drains into the node just beneath the capsule in a space called the subcapsular sinus. The subcapsular sinus drains into trabecular sinuses and finally into medullary sinuses. The sinus space is criss-crossed by the pseudopods of macrophages, which act to trap foreign particles and filter the lymph. The medullary sinuses converge at the hilum and lymph then leaves the lymph node via the efferent lymphatic vessel towards either a more central lymph node or ultimately for drainage into a central venous subclavian blood vessel. The B cells migrate to the nodular cortex and medulla. The T cells migrate to the deep cortex. This is a region of a lymph node called the paracortex that immediately surrounds the medulla. Because both naive T cells and dendritic cells express CCR7, they are drawn into the paracortex by the same chemotactic factors, increasing the chance of T cell activation. Both B and T lymphocytes enter lymph nodes from circulating blood through specialized high endothelial venules found in the paracortex. Clinical significance Swelling Lymph node enlargement or swelling is known as lymphadenopathy. Swelling may be due to many causes, including infections, tumors, autoimmune disease, drug reactions, diseases such as amyloidosis and sarcoidosis, or because of lymphoma or leukemia. Depending on the cause, swelling may be painful, particularly if the expansion is rapid and due to an infection or inflammation. Lymph node enlargement may be localized to an area, which might suggest a local source of infection or a tumour in that area that has spread to the lymph node. It may also be generalized, which might suggest infection, connective tissue or autoimmune disease, or a malignancy of blood cells such as a lymphoma or leukemia. Rarely, depending on location, lymph node enlargement may cause problems such as difficulty breathing, or compression of a blood vessel (for example, superior vena cava obstruction). Enlarged lymph nodes might be felt as part of a medical examination, or found on medical imaging. Features of the medical history may point to the cause, such as the speed of onset of swelling, pain, and other constitutional symptoms such as fevers or weight loss. For example, a tumour of the breast may result in swelling of the lymph nodes under the arms and weight loss and night sweats may suggest a malignancy such as lymphoma. In addition to a medical exam by a medical practitioner, medical tests may include blood tests and scans may be needed to further examine the cause. A biopsy of a lymph node may also be needed. Cancer Lymph nodes can be affected by both primary cancers of lymph tissue, and secondary cancers affecting other parts of the body. Primary cancers of lymph tissue are called lymphomas and include Hodgkin lymphoma and non-Hodgkin lymphoma. Cancer of lymph nodes can cause a wide range of symptoms from painless long-term slowly growing swelling to sudden, rapid enlargement over days or weeks, with symptoms depending on the grade of the tumour. Most lymphomas are tumours of B-cells. Lymphoma is managed by haematologists and oncologists. Local cancer in many parts of the body can cause lymph nodes to enlarge because of tumorous cells that have metastasised into the node. 
Lymph node involvement is often a key part in the diagnosis and treatment of cancer, acting as "sentinels" of local disease, incorporated into TNM staging and other cancer staging systems. As part of the investigations or workup for cancer, lymph nodes may be imaged or even surgically removed. If removed, the lymph node will be stained and examined under a microscope by a pathologist to determine if there is evidence of cells that appear cancerous (i.e. have metastasized into the node). The staging of the cancer, and therefore the treatment approach and prognosis, is predicated on the presence of node metastases. Lymphedema Lymphedema is the condition of swelling (edema) of tissue relating to insufficient clearance by the lymphatic system. It can be congenital as a result usually of undeveloped or absent lymph nodes, and is known as primary lymphedema. Lymphedema most commonly arises in the arms or legs, but can also occur in the chest wall, genitals, neck, and abdomen. Secondary lymphedema usually results from the removal of lymph nodes during breast cancer surgery or from other damaging treatments such as radiation. It can also be caused by some parasitic infections. Affected tissues are at a great risk of infection. Management of lymphedema may include advice to lose weight, exercise, keep the affected limb moist, and compress the affected area. Sometimes surgical management is also considered. Similar lymphoid organs The spleen and the tonsils are the larger secondary lymphoid organs that serve somewhat similar functions to lymph nodes, though the spleen filters blood cells rather than lymph. The tonsils are sometimes erroneously referred to as lymph nodes. Although the tonsils and lymph nodes do share certain characteristics, there are also many important differences between them, such as their location, structure and size. Furthermore, the tonsils filter tissue fluid whereas lymph nodes filter lymph. The appendix contains lymphoid tissue and is therefore believed to play a role not only in the digestive system, but also in the immune system.
Biology and health sciences
Circulatory system
Biology
82354
https://en.wikipedia.org/wiki/Henry%20%28unit%29
Henry (unit)
The henry (symbol: H) is the unit of electrical inductance in the International System of Units (SI). If a current of 1 ampere flowing through a coil produces flux linkage of 1 weber turn, that coil has a self-inductance of 1 henry. The unit is named after Joseph Henry (1797–1878), the American scientist who discovered electromagnetic induction independently of and at about the same time as Michael Faraday (1791–1867) in England. Definition The inductance of an electric circuit is one henry when an electric current that is changing at one ampere per second results in an electromotive force of one volt across the inductor: V(t) = L·dI(t)/dt, where V(t) is the resulting voltage across the circuit, I(t) is the current through the circuit, and L is the inductance of the circuit. The henry is a derived unit based on four of the seven base units of the International System of Units: kilogram (kg), metre (m), second (s), and ampere (A). Expressed in combinations of SI units, the henry is: H = kg⋅m²⋅s⁻²⋅A⁻² = V⋅s/A = Wb/A = Ω⋅s = Ω/Hz = J/A², where V = volt, Wb = weber, Ω = ohm, J = joule, Hz = hertz. Use The International System of Units (SI) specifies that the symbol of a unit named for a person is written with an initial capital letter, while the name is not capitalized in sentence text, except when any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case. The United States National Institute of Standards and Technology recommends that users writing in English form the plural as henries. Applications The inductance of a coil depends on its size, the number of turns, and the permeability of the material within and surrounding the coil. Formulae can be used to calculate the inductance of many common arrangements of conductors, such as parallel wires or a solenoid. A small air-core coil used for broadcast AM radio tuning might have an inductance of a few tens of microhenries. A large motor winding with many turns around an iron core may have an inductance of hundreds of henries. The physical size of an inductance is also related to its current-carrying and voltage-withstand ratings.
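To make the defining relation concrete, the short Python sketch below estimates the inductance of a long air-core solenoid from the standard approximation L ≈ μ₀N²A/l and then evaluates the induced voltage V = L·dI/dt. The coil dimensions and the rate of change of current are illustrative assumptions, not values taken from this article.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, in H/m

def solenoid_inductance(turns: int, area_m2: float, length_m: float) -> float:
    """Approximate inductance (henries) of a long air-core solenoid: L = mu_0 * N^2 * A / l."""
    return MU_0 * turns**2 * area_m2 / length_m

def induced_emf(inductance_h: float, di_dt_a_per_s: float) -> float:
    """Voltage (volts) across an inductor for a given rate of change of current: V = L * dI/dt."""
    return inductance_h * di_dt_a_per_s

# Illustrative coil: 500 turns, 1 cm^2 cross-section, 10 cm long.
L = solenoid_inductance(turns=500, area_m2=1e-4, length_m=0.10)
print(f"L ≈ {L * 1e6:.0f} µH")                                  # about 314 µH
print(f"V ≈ {induced_emf(L, 1000):.3f} V at dI/dt = 1000 A/s")  # about 0.314 V
```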
Physical sciences
Electromagnetism
null
82355
https://en.wikipedia.org/wiki/Farad
Farad
The farad (symbol: F) is the unit of electrical capacitance, the ability of a body to store an electrical charge, in the International System of Units (SI), equivalent to 1 coulomb per volt (C/V). It is named after the English physicist Michael Faraday (1791–1867). In SI base units 1 F = 1 kg⁻¹⋅m⁻²⋅s⁴⋅A². Definition The capacitance of a capacitor is one farad when one coulomb of charge changes the potential between the plates by one volt. Equally, one farad can be described as the capacitance which stores a one-coulomb charge across a potential difference of one volt. The relationship between capacitance, charge, and potential difference is linear. For example, if the potential difference across a capacitor is halved, the quantity of charge stored by that capacitor will also be halved. For most applications, the farad is an impractically large unit of capacitance. Most electrical and electronic applications are covered by the following SI prefixes: 1 mF (millifarad, one thousandth (10⁻³) of a farad) = 0.001 F = 1000 μF = 1,000,000,000 pF; 1 μF (microfarad, one millionth (10⁻⁶) of a farad) = 0.000 001 F = 1000 nF = 1,000,000 pF; 1 nF (nanofarad, one billionth (10⁻⁹) of a farad) = 0.000 000 001 F = 0.001 μF = 1000 pF; 1 pF (picofarad, one trillionth (10⁻¹²) of a farad) = 0.000 000 000 001 F = 0.001 nF. Equalities A farad is a derived unit based on four of the seven base units of the International System of Units: kilogram (kg), metre (m), second (s), and ampere (A). Expressed in combinations of SI units, the farad is: F = s⁴⋅A²⋅m⁻²⋅kg⁻¹ = C/V = A⋅s/V = s/Ω = 1/(Ω⋅Hz) = C²/J = J/V² = s²/H, where C = coulomb, V = volt, Ω = ohm, Hz = hertz, J = joule, H = henry. History The term "farad" was originally coined by Latimer Clark and Charles Bright in 1861, in honor of Michael Faraday, for a unit of quantity of charge, and by 1873, the farad had become a unit of capacitance. In 1881, at the International Congress of Electricians in Paris, the name farad was officially used for the unit of electrical capacitance. Explanation A capacitor generally consists of two conducting surfaces, frequently referred to as plates, separated by an insulating layer usually referred to as a dielectric. The original capacitor was the Leyden jar developed in the 18th century. It is the accumulation of electric charge on the plates that results in capacitance. Modern capacitors are constructed using a range of manufacturing techniques and materials to provide the extraordinarily wide range of capacitance values used in electronics applications, from femtofarads to farads, with maximum-voltage ratings ranging from a few volts to several kilovolts. Values of capacitors are usually specified in terms of SI prefixes of farads (F), microfarads (μF), nanofarads (nF) and picofarads (pF). The millifarad (mF) is rarely used in practice; a capacitance of 4.7 mF (0.0047 F), for example, is instead written as 4700 μF. The nanofarad (nF) is used more often in Europe than in the United States. The size of commercially available capacitors ranges from around 0.1 pF to 5,000 F (5 kF) supercapacitors. Parasitic capacitance in high-performance integrated circuits can be measured in femtofarads (1 fF = 0.001 pF = 10⁻¹⁵ F), while high-performance test equipment can detect changes in capacitance on the order of tens of attofarads (1 aF = 10⁻¹⁸ F). A value of 0.1 pF is about the smallest available in capacitors for general use in electronic design, since smaller ones would be dominated by the parasitic capacitances of other components, wiring or printed circuit boards. Capacitance values of 1 pF or lower can be achieved by twisting two short lengths of insulated wire together. 
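As an illustration of the defining relation Q = C·V and of the prefix arithmetic above, the following Python sketch converts prefixed capacitance values to farads and computes the stored charge at a given voltage. The component values are made-up examples.

```python
# Prefix factors relative to one farad.
PREFIX = {"F": 1.0, "mF": 1e-3, "uF": 1e-6, "nF": 1e-9, "pF": 1e-12}

def to_farads(value: float, unit: str) -> float:
    """Convert a prefixed capacitance value to farads."""
    return value * PREFIX[unit]

def stored_charge(capacitance_f: float, voltage_v: float) -> float:
    """Charge in coulombs on a capacitor: Q = C * V."""
    return capacitance_f * voltage_v

c = to_farads(4700, "uF")        # the 4700 µF example above = 4.7 mF = 0.0047 F
print(c)                         # 0.0047
print(stored_charge(c, 12.0))    # 0.0564 C stored at 12 V
```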
The capacitance of the Earth's ionosphere with respect to the ground is calculated to be about 1 F. Informal and deprecated terminology The picofarad (pF) is sometimes colloquially pronounced as "puff" or "pic", as in "a ten-puff capacitor". Similarly, "mic" (pronounced "mike") is sometimes used informally to signify microfarads. Nonstandard abbreviations were and are often used. Farad has been abbreviated "f", "fd", and "Fd". For the prefix "micro-", when the Greek small letter "μ" or the legacy micro sign "µ" is not available (as on typewriters) or inconvenient to enter, it is often substituted with the similar-appearing "u" or "U", with little risk of confusion. It was also substituted with the similar-sounding "M" or "m", which can be confusing because M officially stands for 1,000,000, and m preferably stands for 1/1000. In texts prior to 1960, and on capacitor packages until more recently, "microfarad(s)" was abbreviated "mf" or "MFD" rather than the modern "μF". A 1940 Radio Shack catalog listed every capacitor's rating in "Mfd.", from 0.000005 Mfd. (5 pF) to 50 Mfd. (50 μF). "Micromicrofarad" or "micro-microfarad" is an obsolete unit found in some older texts and labels and contains a nonstandard metric double prefix. It is exactly equivalent to a picofarad (pF). It is abbreviated μμF, uuF, or (confusingly) "mmf", "MMF", or "MMFD". Summary of obsolete or deprecated capacitance units or abbreviations (upper/lower case variations are not shown): μF (microfarad) = mf, mfd, uf; pF (picofarad) = mmf, mmfd, pfd, μμF. A squared (single-character) form of the Japanese katakana word for "farad", intended for Japanese vertical text, is included in Unicode for compatibility with earlier character sets. Related concepts The reciprocal of capacitance is called electrical elastance, the (non-standard, non-SI) unit of which is the daraf. CGS units The abfarad (abbreviated abF) is an obsolete CGS unit of capacitance, which corresponds to 10⁹ farads (1 gigafarad, GF). The statfarad (abbreviated statF) is a rarely used CGS unit equivalent to the capacitance of a capacitor with a charge of 1 statcoulomb across a potential difference of 1 statvolt. It is 1/(10⁻⁵c²) farad, approximately 1.1126 picofarads. More commonly, the centimeter (cm) is used, which is equal to the statfarad.
Physical sciences
Electromagnetism
null
82359
https://en.wikipedia.org/wiki/Least%20squares
Least squares
In regression analysis, least squares is a parameter estimation method based on minimizing the sum of the squares of the residuals (a residual being the difference between an observed value and the fitted value provided by a model) made in the results of each individual equation. (More simply, least squares is a mathematical procedure for finding the best-fitting curve to a given set of points by minimizing the sum of the squares of the offsets ("the residuals") of the points from the curve.) The most important application is in data fitting. When the problem has substantial uncertainties in the independent variable (the x variable), then simple regression and least-squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares. Least squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the model functions are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The nonlinear problem is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, and thus the core calculation is similar in both cases. Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable and the deviations from the fitted curve. When the observations come from an exponential family with the identity as its natural sufficient statistic and mild conditions are satisfied (e.g. for normal, exponential, Poisson and binomial distributions), standardized least-squares estimates and maximum-likelihood estimates are identical. The method of least squares can also be derived as a method of moments estimator. The following discussion is mostly presented in terms of linear functions, but the use of least squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood (through the Fisher information), the least-squares method may be used to fit a generalized linear model. The least-squares method was officially discovered and published by Adrien-Marie Legendre (1805), though it is usually also co-credited to Carl Friedrich Gauss (1809), who contributed significant theoretical advances to the method, and may have also used it in his earlier work in 1794 and 1795. History Founding The method of least squares grew out of the fields of astronomy and geodesy, as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Discovery. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation. The method was the culmination of several advances that took place during the course of the eighteenth century: The combination of different observations as being the best estimate of the true value (errors decrease with aggregation rather than increase) first appeared in Isaac Newton's work in 1671, though it went unpublished, and again in 1700. It was perhaps first expressed formally by Roger Cotes in 1722. The combination of different observations taken under the same conditions, contrary to simply trying one's best to observe and record a single observation accurately. The approach was known as the method of averages. 
This approach was notably used by Tobias Mayer while studying the librations of the Moon in 1750, and by Pierre-Simon Laplace in his work in explaining the differences in motion of Jupiter and Saturn in 1788. The combination of different observations taken under different conditions. The method came to be known as the method of least absolute deviation. It was notably performed by Roger Joseph Boscovich in his work on the shape of the Earth in 1757 and by Pierre-Simon Laplace for the same problem in 1789 and 1799. The development of a criterion that can be evaluated to determine when the solution with the minimum error has been achieved. Laplace tried to specify a mathematical form of the probability density for the errors and define a method of estimation that minimizes the error of estimation. For this purpose, Laplace used a symmetric two-sided exponential distribution we now call Laplace distribution to model the error distribution, and used the sum of absolute deviation as error of estimation. He felt these to be the simplest assumptions he could make, and he had hoped to obtain the arithmetic mean as the best estimate. Instead, his estimator was the posterior median. The method The first clear and concise exposition of the method of least squares was published by Legendre in 1805. The technique is described as an algebraic procedure for fitting linear equations to data, and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the Earth. Within ten years after Legendre's publication, the method of least squares had been adopted as a standard tool in astronomy and geodesy in France, Italy, and Prussia, which constitutes an extraordinarily rapid acceptance of a scientific technique. In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies. In that work he claimed to have been in possession of the method of least squares since 1795. This naturally led to a priority dispute with Legendre. However, to Gauss's credit, he went beyond Legendre and succeeded in connecting the method of least squares with the principles of probability and to the normal distribution. He had managed to complete Laplace's program of specifying a mathematical form of the probability density for the observations, depending on a finite number of unknown parameters, and define a method of estimation that minimizes the error of estimation. Gauss showed that the arithmetic mean is indeed the best estimate of the location parameter by changing both the probability density and the method of estimation. He then turned the problem around by asking what form the density should have and what method of estimation should be used to get the arithmetic mean as estimate of the location parameter. In this attempt, he invented the normal distribution. An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres. On 1 January 1801, the Italian astronomer Giuseppe Piazzi discovered Ceres and was able to track its path for 40 days before it was lost in the glare of the Sun. Based on these data, astronomers desired to determine the location of Ceres after it emerged from behind the Sun without solving Kepler's complicated nonlinear equations of planetary motion. The only predictions that successfully allowed Hungarian astronomer Franz Xaver von Zach to relocate Ceres were those performed by the 24-year-old Gauss using least-squares analysis. 
In 1810, after reading Gauss's work, Laplace, having proved the central limit theorem, used it to give a large-sample justification for the method of least squares and the normal distribution. In 1822, Gauss was able to state that the least-squares approach to regression analysis is optimal in the sense that in a linear model where the errors have a mean of zero, are uncorrelated, normally distributed, and have equal variances, the best linear unbiased estimator of the coefficients is the least-squares estimator. An extended version of this result is known as the Gauss–Markov theorem. The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares. Problem statement The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of n points (data pairs) (x_i, y_i), i = 1, …, n, where x_i is an independent variable and y_i is a dependent variable whose value is found by observation. The model function has the form f(x, β), where the m adjustable parameters are held in the vector β. The goal is to find the parameter values for the model that "best" fit the data. The fit of a model to a data point is measured by its residual, defined as the difference between the observed value of the dependent variable and the value predicted by the model: r_i = y_i − f(x_i, β). The least-squares method finds the optimal parameter values by minimizing the sum of squared residuals, S = Σ_{i=1}^{n} r_i². In the simplest case f(x_i, β) = β, and the result of the least-squares method is the arithmetic mean of the input data. An example of a model in two dimensions is that of the straight line. Denoting the y-intercept as β_0 and the slope as β_1, the model function is given by f(x, β) = β_0 + β_1·x. See linear least squares for a fully worked out example of this model. A data point may consist of more than one independent variable. For example, when fitting a plane to a set of height measurements, the plane is a function of two independent variables, x and z, say. In the most general case there may be one or more independent variables and one or more dependent variables at each data point. A residual plot in which the residuals fluctuate randomly about zero indicates that a linear model is appropriate; the residual r_i is then an independent random variable. If the residual points had some sort of a shape and were not randomly fluctuating, a linear model would not be appropriate. For example, if the residual plot had a parabolic shape, a parabolic model would be appropriate for the data. The residuals for a parabolic model can be calculated via r_i = y_i − (β_0 + β_1·x_i + β_2·x_i²). Limitations This regression formulation considers only observational errors in the dependent variable (but the alternative total least squares regression can account for errors in both variables). There are two rather different contexts with different implications: Regression for prediction. Here a model is fitted to provide a prediction rule for application in a similar situation to which the data used for fitting apply. Here the dependent variables corresponding to such future application would be subject to the same types of observation error as those in the data used for fitting. It is therefore logically consistent to use the least-squares prediction rule for such data. Regression for fitting a "true relationship". 
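A minimal numerical sketch of the problem statement above, using NumPy: it evaluates a straight-line model f(x, β) = β_0 + β_1·x on a small made-up data set, forms the residuals r_i, and computes the sum of squared residuals S for a trial parameter vector. The data and parameter values are illustrative assumptions, not examples from the article.

```python
import numpy as np

# Illustrative data set (x_i, y_i), i = 1..n.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

def model(x, beta):
    """Straight-line model f(x, beta) = beta_0 + beta_1 * x."""
    return beta[0] + beta[1] * x

def sum_of_squares(beta, x, y):
    """Objective S(beta) = sum of squared residuals r_i = y_i - f(x_i, beta)."""
    residuals = y - model(x, beta)
    return np.sum(residuals**2)

print(sum_of_squares(np.array([1.0, 2.0]), x, y))  # S for the trial parameters (1, 2)
```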
In standard regression analysis that leads to fitting by least squares there is an implicit assumption that errors in the independent variable are zero or strictly controlled so as to be negligible. When errors in the independent variable are non-negligible, models of measurement error can be used; such methods can lead to parameter estimates, hypothesis testing and confidence intervals that take into account the presence of observation errors in the independent variables. An alternative approach is to fit a model by total least squares; this can be viewed as taking a pragmatic approach to balancing the effects of the different sources of error in formulating an objective function for use in model-fitting. Solving the least squares problem The minimum of the sum of squares is found by setting the gradient to zero. Since the model contains m parameters, there are m gradient equations: ∂S/∂β_j = 2 Σ_i r_i ∂r_i/∂β_j = 0, j = 1, …, m, and since r_i = y_i − f(x_i, β), the gradient equations become −2 Σ_i r_i ∂f(x_i, β)/∂β_j = 0, j = 1, …, m. The gradient equations apply to all least squares problems. Each particular problem requires particular expressions for the model and its partial derivatives. Linear least squares A regression model is a linear one when the model comprises a linear combination of the parameters, i.e., f(x, β) = Σ_j β_j φ_j(x), where each function φ_j is a function of x only. Letting X_ij = φ_j(x_i) and putting the independent and dependent variables in matrices X and Y, respectively, we can compute the least squares in the following way; note that D = {X, Y} is the set of all data. The loss is L(D, β) = ||Y − Xβ||², and its gradient is ∂L/∂β = −2XᵀY + 2XᵀXβ. Setting the gradient of the loss to zero and solving for β, we get β̂ = (XᵀX)⁻¹XᵀY. Non-linear least squares There is, in some cases, a closed-form solution to a non-linear least squares problem – but in general there is not. In the case of no closed-form solution, numerical algorithms are used to find the value of the parameters β that minimizes the objective. Most algorithms involve choosing initial values for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation: β_j^(k+1) = β_j^(k) + Δβ_j, where the superscript k is an iteration number, and the vector of increments Δβ is called the shift vector. In some commonly used algorithms, at each iteration the model may be linearized by approximation to a first-order Taylor series expansion about β^(k): f(x_i, β) ≈ f(x_i, β^(k)) + Σ_j J_ij Δβ_j, where J_ij = ∂f(x_i, β)/∂β_j. The Jacobian J is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next. The residuals are given by r_i = Δy_i − Σ_j J_ij Δβ_j, where Δy_i = y_i − f(x_i, β^(k)). To minimize the sum of squares of r_i, the gradient equation is set to zero and solved for Δβ, which, on rearrangement, becomes m simultaneous linear equations, the normal equations: Σ_i Σ_s J_ij J_is Δβ_s = Σ_i J_ij Δy_i, j = 1, …, m. The normal equations are written in matrix notation as (JᵀJ)Δβ = JᵀΔy. These are the defining equations of the Gauss–Newton algorithm. Differences between linear and nonlinear least squares The model function, f, in LLSQ (linear least squares) is a linear combination of parameters of the form f = X_1 β_1 + X_2 β_2 + ⋯. The model may represent a straight line, a parabola or any other linear combination of functions. In NLLSQ (nonlinear least squares) the parameters appear as functions, such as β², e^(βx) and so forth. If the derivatives ∂f/∂β_j are either constant or depend only on the values of the independent variable, the model is linear in the parameters. Otherwise, the model is nonlinear. Initial values for the parameters are needed to find the solution to a NLLSQ problem; LLSQ does not require them. Solution algorithms for NLLSQ often require that the Jacobian can be calculated similarly to LLSQ. Analytical expressions for the partial derivatives can be complicated. 
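The closed-form linear least-squares solution β̂ = (XᵀX)⁻¹XᵀY described above can be written out directly in NumPy; in practice a solver such as numpy.linalg.lstsq is preferred to explicit matrix inversion for numerical stability. The data are the same illustrative values used in the earlier sketch.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix for the straight-line model: columns phi_0(x) = 1 and phi_1(x) = x.
X = np.column_stack([np.ones_like(x), x])

# Normal equations: beta_hat = (X^T X)^{-1} X^T y, solved without forming the inverse.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)                 # approximately [intercept, slope]

# Equivalent, numerically preferable route.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_lstsq)
```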
If analytical expressions are impossible to obtain, either the partial derivatives must be calculated by numerical approximation or an estimate must be made of the Jacobian, often via finite differences. Non-convergence (failure of the algorithm to find a minimum) is a common phenomenon in NLLSQ. LLSQ is globally convex, so non-convergence is not an issue. Solving NLLSQ is usually an iterative process which has to be terminated when a convergence criterion is satisfied. LLSQ solutions can be computed using direct methods, although problems with large numbers of parameters are typically solved with iterative methods, such as the Gauss–Seidel method. In LLSQ the solution is unique, but in NLLSQ there may be multiple minima in the sum of squares. Under the condition that the errors are uncorrelated with the predictor variables, LLSQ yields unbiased estimates, but even under that condition NLLSQ estimates are generally biased. These differences must be considered whenever the solution to a nonlinear least squares problem is being sought. Example Consider a simple example drawn from physics. A spring should obey Hooke's law, which states that the extension y of a spring is proportional to the force, F, applied to it. The relation y = f(F, k) = kF constitutes the model, where F is the independent variable. In order to estimate the force constant, k, we conduct a series of n measurements with different forces to produce a set of data (F_i, y_i), i = 1, …, n, where y_i is a measured spring extension. Each experimental observation will contain some error, ε_i, and so we may specify an empirical model for our observations, y_i = kF_i + ε_i. There are many methods we might use to estimate the unknown parameter k. Since the n equations in the single unknown k comprise an overdetermined system, we estimate k using least squares. The sum of squares to be minimized is S = Σ_{i=1}^{n} (y_i − kF_i)². The least squares estimate of the force constant, k, is given by k̂ = (Σ_i F_i y_i)/(Σ_i F_i²). We assume that applying force causes the spring to expand. After having derived the force constant by least squares fitting, we predict the extension from Hooke's law. Uncertainty quantification In a least squares calculation with unit weights, or in linear regression, the variance on the jth parameter, denoted var(β̂_j), is usually estimated with var(β̂_j) ≈ σ̂² C_jj, where the true error variance σ² is replaced by the estimate σ̂² = S/(n − m), the reduced chi-squared statistic based on the minimized value of the residual sum of squares (objective function) S. The denominator, n − m, is the statistical degrees of freedom; see effective degrees of freedom for generalizations. C = (XᵀX)⁻¹, so that σ̂²C is the estimated covariance matrix of the parameters. Statistical testing If the probability distribution of the parameters is known or an asymptotic approximation is made, confidence limits can be found. Similarly, statistical tests on the residuals can be conducted if the probability distribution of the residuals is known or assumed. We can derive the probability distribution of any linear combination of the dependent variables if the probability distribution of experimental errors is known or assumed. Inference is straightforward when the errors are assumed to follow a normal distribution, which implies that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables. It is necessary to make assumptions about the nature of the experimental errors to test the results statistically. A common assumption is that the errors belong to a normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases. 
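The Hooke's-law example above has the one-parameter closed form k̂ = Σ F_i y_i / Σ F_i². The short sketch below evaluates it on made-up force/extension measurements and also applies the variance estimate var(k̂) ≈ [S/(n − m)]·C with C = 1/Σ F_i²; the measurement values are illustrative assumptions.

```python
import numpy as np

# Illustrative measurements: applied force F_i (N) and measured extension y_i (m).
F = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.11, 0.19, 0.31, 0.40, 0.49])

k_hat = np.sum(F * y) / np.sum(F * F)      # least-squares estimate of k
residuals = y - k_hat * F
S = np.sum(residuals**2)                   # minimized sum of squared residuals
n, m = len(F), 1
var_k = (S / (n - m)) / np.sum(F * F)      # estimated variance of k_hat

print(f"k_hat = {k_hat:.4f}, standard error = {var_k**0.5:.4f}")
```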
The Gauss–Markov theorem. In a linear model in which the errors have expectation zero conditional on the independent variables, are uncorrelated and have equal variances, the best linear unbiased estimator of any linear combination of the observations is its least-squares estimator. "Best" means that the least squares estimators of the parameters have minimum variance. The assumption of equal variance is valid when the errors all belong to the same distribution. If the errors belong to a normal distribution, the least-squares estimators are also the maximum likelihood estimators in a linear model. However, suppose the errors are not normally distributed. In that case, a central limit theorem often nonetheless implies that the parameter estimates will be approximately normally distributed so long as the sample is reasonably large. For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. Specifically, it is not typically important whether the error term follows a normal distribution. Weighted least squares A special case of generalized least squares called weighted least squares occurs when all the off-diagonal entries of Ω (the correlation matrix of the residuals) are null; the variances of the observations (along the covariance matrix diagonal) may still be unequal (heteroscedasticity). In simpler terms, heteroscedasticity is when the variance of y_i depends on the value of x_i, which causes a residual plot to show a "fanning out" effect towards larger values of x. On the other hand, homoscedasticity is assuming that the variance of y_i (equivalently, of the error ε_i) is the same for every observation. Relationship to principal components The first principal component about the mean of a set of points can be represented by that line which most closely approaches the data points (as measured by squared distance of closest approach, i.e. perpendicular to the line). In contrast, linear least squares tries to minimize the distance in the y direction only. Thus, although the two use a similar error metric, linear least squares is a method that treats one dimension of the data preferentially, while PCA treats all dimensions equally. Relationship to measure theory Notable statistician Sara van de Geer used empirical process theory and the Vapnik–Chervonenkis dimension to prove a least-squares estimator can be interpreted as a measure on the space of square-integrable functions. Regularization Tikhonov regularization In some contexts, a regularized version of the least squares solution may be preferable. Tikhonov regularization (or ridge regression) adds a constraint that ||β||₂², the squared L²-norm of the parameter vector, is not greater than a given value to the least squares formulation, leading to a constrained minimization problem. This is equivalent to the unconstrained minimization problem where the objective function is the residual sum of squares plus a penalty term α||β||₂², where α is a tuning parameter (this is the Lagrangian form of the constrained minimization problem). In a Bayesian context, this is equivalent to placing a zero-mean normally distributed prior on the parameter vector. Lasso method An alternative regularized version of least squares is Lasso (least absolute shrinkage and selection operator), which uses the constraint that ||β||₁, the L1-norm of the parameter vector, is no greater than a given value. 
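As a sketch of Tikhonov regularization in its Lagrangian form, the ridge estimate can be computed by augmenting the normal equations with α·I, i.e. β̂_ridge = (XᵀX + αI)⁻¹XᵀY. The data set and the α values below are illustrative assumptions carried over from the earlier sketches.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
X = np.column_stack([np.ones_like(x), x])

def ridge(X, y, alpha):
    """Ridge (Tikhonov-regularized) least squares: (X^T X + alpha*I)^{-1} X^T y."""
    n_params = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_params), X.T @ y)

print(ridge(X, y, alpha=0.0))   # alpha = 0 recovers ordinary least squares
print(ridge(X, y, alpha=1.0))   # coefficients are shrunk towards zero
```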
(One can show, as above, using Lagrange multipliers that this is equivalent to an unconstrained minimization of the least-squares penalty with α||β||₁ added.) In a Bayesian context, this is equivalent to placing a zero-mean Laplace prior distribution on the parameter vector. The optimization problem may be solved using quadratic programming or more general convex optimization methods, as well as by specific algorithms such as the least angle regression algorithm. One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas ridge regression never fully discards any features. Some feature selection techniques have been developed based on the LASSO, including Bolasso, which bootstraps samples, and FeaLect, which analyzes the regression coefficients corresponding to different values of α to score all the features. The L1-regularized formulation is useful in some contexts due to its tendency to prefer solutions where more parameters are zero, which gives solutions that depend on fewer variables. For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. An extension of this approach is elastic net regularization.
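To illustrate the contrast between Lasso and ridge shrinkage described above, the following sketch fits both penalized models with scikit-learn (assumed to be installed) on a small synthetic data set in which only some coefficients are truly non-zero; the Lasso is expected to drive the irrelevant coefficients to exactly zero, while ridge only shrinks them. The data, penalty strengths, and random seed are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_beta = np.array([3.0, 0.0, -2.0, 0.0, 0.0])   # only two relevant features
y = X @ true_beta + 0.1 * rng.normal(size=100)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

print("lasso:", np.round(lasso.coef_, 3))   # irrelevant coefficients driven to (near) zero
print("ridge:", np.round(ridge.coef_, 3))   # shrunk but typically non-zero
```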
Mathematics
Statistics
null